Struct emulator::inference::InferenceConfig
Configuration options for inference backends.
#include <inference_backend.hpp>
Public Attributes
| Type | Name | Description |
|---|---|---|
| BackendType | backend = BackendType::STUB | Backend implementation to use. |
| int | device_id = -1 | GPU device ID (-1 for CPU). |
| bool | dry_run = false | If true, validate config and exit without running. |
| std::vector< std::string > | expected_input_vars | Expected input variable names. |
| std::vector< std::string > | expected_output_vars | Expected output variable names. |
| int | grid_height = 0 | Height dimension (for spatial_mode). |
| int | grid_width = 0 | Width dimension (for spatial_mode). |
| int | input_channels = 44 | Number of input features per sample. |
| std::string | model_path | Path to model file (TorchScript .pt for LibTorch). |
| int | output_channels = 50 | Number of output features per sample. |
| bool | spatial_mode = /* multi line expression */ | If true, reshape to [N, C, H, W] for CNN models. |
| bool | use_fp16 = false | Use half precision (requires CUDA). |
| bool | verbose = false | Enable verbose output (for debugging). |
Detailed Description
Contains all parameters needed to initialize an inference backend, including model path, device selection, and tensor dimensions.
Public Attributes Documentation
variable backend
Backend implementation to use.
BackendType emulator::inference::InferenceConfig::backend;
variable device_id
GPU device ID (-1 for CPU).
int emulator::inference::InferenceConfig::device_id;
variable dry_run
If true, validate config and exit without running.
bool emulator::inference::InferenceConfig::dry_run;
variable expected_input_vars
Expected input variable names.
std::vector<std::string> emulator::inference::InferenceConfig::expected_input_vars;
variable expected_output_vars
Expected output variable names.
std::vector<std::string> emulator::inference::InferenceConfig::expected_output_vars;
variable grid_height
Height dimension (for spatial_mode).
int emulator::inference::InferenceConfig::grid_height;
variable grid_width
Width dimension (for spatial_mode).
int emulator::inference::InferenceConfig::grid_width;
variable input_channels
Number of input features per sample.
int emulator::inference::InferenceConfig::input_channels;
variable model_path
Path to model file (TorchScript .pt for LibTorch).
std::string emulator::inference::InferenceConfig::model_path;
variable output_channels
Number of output features per sample.
int emulator::inference::InferenceConfig::output_channels;
variable spatial_mode
If true, reshape to [N, C, H, W] for CNN models.
bool emulator::inference::InferenceConfig::spatial_mode;
variable use_fp16
Use half precision (requires CUDA).
bool emulator::inference::InferenceConfig::use_fp16;
variable verbose
Enable verbose output (for debugging).
bool emulator::inference::InferenceConfig::verbose;
The documentation for this struct was generated from the following file: components/emulator_comps/common/src/inference/inference_backend.hpp