Class emulator::inference::LibTorchBackend

LibTorch backend for native C++ PyTorch inference. More...

  • #include <libtorch_backend.hpp>

Inherits from: emulator::inference::InferenceBackend

Classes

Type Name
struct Impl
Private implementation details for LibTorchBackend.

Public Functions

Type Name
LibTorchBackend ()
virtual void finalize () override
Release resources and finalize the backend.
size_t get_memory_usage_bytes () const
Get approximate memory usage in bytes.
virtual bool infer (const double * inputs, double * outputs, int batch_size) override
Run inference on input data.
virtual bool initialize (const InferenceConfig & config) override
Initialize the backend.
virtual bool is_initialized () const override
Check if the backend is ready for inference.
virtual std::string name () const override
Get the human-readable name of this backend.
virtual BackendType type () const override
Get the backend type enumeration.
~LibTorchBackend () override

Public Functions inherited from emulator::inference::InferenceBackend

See emulator::inference::InferenceBackend

Type Name
virtual void finalize () = 0
Release resources and finalize the backend.
virtual bool infer (const double * inputs, double * outputs, int batch_size) = 0
Run inference on input data.
virtual bool initialize (const InferenceConfig & config) = 0
Initialize the backend.
virtual bool is_initialized () const = 0
Check if the backend is ready for inference.
virtual std::string name () const = 0
Get the human-readable name of this backend.
virtual BackendType type () const = 0
Get the backend type enumeration.
virtual ValidationResult validate () const
Validate configuration before running.
virtual ~InferenceBackend () = default

Public Functions Documentation

function LibTorchBackend

emulator::inference::LibTorchBackend::LibTorchBackend () 

function finalize

Release resources and finalize the backend.

virtual void emulator::inference::LibTorchBackend::finalize () override

After calling this, the backend is no longer usable until initialize() is called again.

Implements emulator::inference::InferenceBackend::finalize


function get_memory_usage_bytes

Get approximate memory usage in bytes.

size_t emulator::inference::LibTorchBackend::get_memory_usage_bytes () const

Returns:

Estimated memory used by model and buffers


function infer

Run inference on input data.

virtual bool emulator::inference::LibTorchBackend::infer (
    const double * inputs,
    double * outputs,
    int batch_size
) override

Executes the model on the provided input batch and writes results to the output buffer.

Parameters:

  • inputs Input data array, size = batch_size * input_channels
  • outputs Output data array, size = batch_size * output_channels
  • batch_size Number of samples in the batch

Returns:

true if inference succeeded, false on error

Precondition:

initialize() must have been called successfully

Precondition:

outputs must be pre-allocated with sufficient size

Executes the loaded model on the provided input data. The backend expects input in [batch_size, input_channels] format. For CNN models requiring [N, C, H, W], the caller (EmulatorComp) must reshape first.

Implements emulator::inference::InferenceBackend::infer


function initialize

Initialize the backend.

virtual bool emulator::inference::LibTorchBackend::initialize (
    const InferenceConfig & config
) override

Initialize the LibTorch backend.

Loads the model, allocates resources, and prepares for inference. Must be called before infer().

Parameters:

  • config Configuration options

Returns:

true if initialization succeeded, false on error

Loads the TorchScript model from the configured path and sets up the execution device and precision.

Implements emulator::inference::InferenceBackend::initialize


function is_initialized

Check if the backend is ready for inference.

inline virtual bool emulator::inference::LibTorchBackend::is_initialized () const override

Returns:

true if initialized and ready

Implements emulator::inference::InferenceBackend::is_initialized


function name

Get the human-readable name of this backend.

inline virtual std::string emulator::inference::LibTorchBackend::name () const override

Returns:

Backend name (e.g., "LibTorch", "Stub")

Implements emulator::inference::InferenceBackend::name


function type

Get the backend type enumeration.

inline virtual BackendType emulator::inference::LibTorchBackend::type () const override

Returns:

BackendType value

Implements emulator::inference::InferenceBackend::type


function ~LibTorchBackend

emulator::inference::LibTorchBackend::~LibTorchBackend () override


The documentation for this class was generated from the following file: components/emulator_comps/common/src/inference/libtorch_backend.hpp