Class emulator::inference::InferenceBackend
Abstract interface for inference backends.
#include <inference_backend.hpp>
Inherited by emulator::inference::LibTorchBackend and emulator::inference::StubBackend.
Public Functions
| Type | Name |
|---|---|
| virtual void | finalize () = 0 <br>Release resources and finalize the backend. |
| virtual bool | infer (const double * inputs, double * outputs, int batch_size) = 0 <br>Run inference on input data. |
| virtual bool | initialize (const InferenceConfig & config) = 0 <br>Initialize the backend. |
| virtual bool | is_initialized () const = 0 <br>Check if the backend is ready for inference. |
| virtual std::string | name () const = 0 <br>Get the human-readable name of this backend. |
| virtual BackendType | type () const = 0 <br>Get the backend type enumeration. |
| virtual ValidationResult | validate () const <br>Validate configuration before running. |
| virtual | ~InferenceBackend () = default |
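The signatures above can be assembled into a compilable sketch of the interface together with a minimal concrete backend. The stand-in types `InferenceConfig`, `BackendType`, and `ValidationResult` below are placeholders reconstructed from this page, not the real definitions in inference_backend.hpp, and `EchoBackend` is a hypothetical example, not the project's StubBackend.

```cpp
#include <string>
#include <vector>

// Stand-in types: placeholders inferred from this page, not the real headers.
struct InferenceConfig { int input_channels = 1; int output_channels = 1; };
enum class BackendType { Stub, LibTorch };
struct ValidationResult { std::vector<std::string> errors, warnings; };

// The interface as documented above.
class InferenceBackend {
public:
    virtual ~InferenceBackend() = default;
    virtual bool initialize(const InferenceConfig& config) = 0;
    virtual void finalize() = 0;
    virtual bool infer(const double* inputs, double* outputs, int batch_size) = 0;
    virtual bool is_initialized() const = 0;
    virtual std::string name() const = 0;
    virtual BackendType type() const = 0;
    virtual ValidationResult validate() const { return {}; }  // non-pure default
};

// Hypothetical pass-through backend, in the spirit of a stub implementation.
class EchoBackend : public InferenceBackend {
public:
    bool initialize(const InferenceConfig& config) override {
        config_ = config;
        ready_ = true;
        return true;
    }
    void finalize() override { ready_ = false; }  // unusable until re-initialized
    bool infer(const double* inputs, double* outputs, int batch_size) override {
        if (!ready_) return false;  // precondition: initialize() was called
        for (int i = 0; i < batch_size * config_.output_channels; ++i)
            outputs[i] = inputs[i];  // identity "model"
        return true;
    }
    bool is_initialized() const override { return ready_; }
    std::string name() const override { return "Echo"; }
    BackendType type() const override { return BackendType::Stub; }
private:
    InferenceConfig config_;
    bool ready_ = false;
};
```

Because `validate()` has a default body while the other members are pure virtual, a derived class must implement everything except `validate()` to be instantiable.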
Public Functions Documentation
function finalize
Release resources and finalize the backend.
virtual void emulator::inference::InferenceBackend::finalize () = 0
After calling this, the backend is no longer usable until initialize() is called again.
function infer
Run inference on input data.
virtual bool emulator::inference::InferenceBackend::infer (
const double * inputs,
double * outputs,
int batch_size
) = 0
Executes the model on the provided input batch and writes results to the output buffer.
Parameters:
- inputs: Input data array, size = batch_size * input_channels
- outputs: Output data array, size = batch_size * output_channels
- batch_size: Number of samples in the batch
Returns:
true if inference succeeded, false on error
Precondition:
initialize() must have been called successfully
Precondition:
outputs must be pre-allocated with sufficient size
function initialize
Initialize the backend.
virtual bool emulator::inference::InferenceBackend::initialize (
const InferenceConfig & config
) = 0
Loads the model, allocates resources, and prepares for inference. Must be called before infer().
Parameters:
- config: Configuration options
Returns:
true if initialization succeeded, false on error
function is_initialized
Check if the backend is ready for inference.
virtual bool emulator::inference::InferenceBackend::is_initialized () const = 0
Returns:
true if initialized and ready
function name
Get the human-readable name of this backend.
virtual std::string emulator::inference::InferenceBackend::name () const = 0
Returns:
Backend name (e.g., "LibTorch", "Stub")
function type
Get the backend type enumeration.
virtual BackendType emulator::inference::InferenceBackend::type () const = 0
Returns:
BackendType value
function validate
Validate configuration before running.
inline virtual ValidationResult emulator::inference::InferenceBackend::validate () const
Checks that the model file exists, dimensions match, device is available, etc. Call this after initialize() to detect configuration errors early.
Returns:
ValidationResult with errors/warnings if any
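The documented ordering (initialize, then validate early, then infer, finally finalize) can be sketched as a helper. `ToyBackend`, `FakeConfig`, and `FakeValidationResult` below are illustrative stand-ins mirroring the members described on this page, not the project's real types; in particular the `errors`/`warnings` fields are assumptions based on the return description above.

```cpp
#include <string>
#include <vector>

// Placeholder mirroring the documented ValidationResult (errors/warnings);
// the real definition lives in the project headers.
struct FakeValidationResult { std::vector<std::string> errors, warnings; };
struct FakeConfig {};

// Toy backend with the documented lifecycle members, for illustration only.
struct ToyBackend {
    bool ready = false;
    bool initialize(const FakeConfig&) { ready = true; return true; }
    FakeValidationResult validate() const { return {}; }  // no errors found
    bool infer(const double* in, double* out, int batch_size) {
        if (!ready) return false;
        for (int i = 0; i < batch_size; ++i) out[i] = in[i];
        return true;
    }
    void finalize() { ready = false; }
};

// One full lifecycle: initialize -> validate -> infer -> finalize.
template <typename Backend, typename Config>
bool run_once(Backend& b, const Config& cfg,
              const double* in, double* out, int batch_size) {
    if (!b.initialize(cfg)) return false;
    if (!b.validate().errors.empty()) {  // catch config errors before infer()
        b.finalize();
        return false;
    }
    bool ok = b.infer(in, out, batch_size);
    b.finalize();  // backend unusable until initialize() is called again
    return ok;
}
```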
function ~InferenceBackend
virtual emulator::inference::InferenceBackend::~InferenceBackend () = default
The documentation for this class was generated from the following file: components/emulator_comps/common/src/inference/inference_backend.hpp