# models

The models list defines the AI inference pipeline. It can contain detectors, classifiers, and an optional tracker. Entries are processed in the order they appear, which controls the inference chain.

```yaml
models:
  - name: "my-detector"
    type: "detector"
    architecture: "yolo"
    onnx_file: /path/to/model.onnx
    label_file: /path/to/labels.txt
```
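Chaining works by referencing a parent model's `name` from a classifier's `operate_on` field. A sketch of a chained pipeline (the model names, file paths, class IDs, and the bare tracker entry below are illustrative assumptions, not required values):

```yaml
models:
  - name: "vehicle-detector"        # runs first, on the raw frames
    type: "detector"
    architecture: "yolo"
    onnx_file: /path/to/detector.onnx
    label_file: /path/to/detector_labels.txt
  - name: "color-classifier"        # runs on objects produced by the detector
    type: "classifier"
    onnx_file: /path/to/classifier.onnx
    label_file: /path/to/classifier_labels.txt
    operate_on: "vehicle-detector"  # chain from the detector above
    operate_on_classes: [0]         # only classify parent class ID 0
  - type: "tracker"                 # optional tracker entry
```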

## Model fields

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `name` | string | No | auto-generated | Unique name for this model. Used by `operate_on` for chaining. |
| `type` | string | Yes | — | `detector`, `classifier`, or `tracker` |
| `architecture` | string | Yes (detectors) | — | Model architecture (e.g., `yolo`) |
| `onnx_file` | string | Yes | — | Path to the ONNX model file |
| `label_file` | string | Yes | — | Path to the label file (one class name per line) |
| `batch_size` | integer | No | next power of 2 ≥ stream count | Inference batch size |
| `precision` | string | No | `"fp16"` | Inference precision: `fp32` or `fp16` |
| `active_classes` | list of int | No | all classes | Only detect these class IDs (detectors only) |
| `operate_on` | string | Yes (classifiers) | — | Name of the parent model to chain from |
| `operate_on_classes` | list of int | No | all parent classes | Only classify objects of these class IDs from the parent model |
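The default `batch_size` rule ("next power of 2 ≥ stream count") can be sketched in a few lines; `default_batch_size` is a hypothetical helper name, not part of the tool's API:

```python
def default_batch_size(stream_count: int) -> int:
    """Smallest power of 2 that is >= stream_count (the documented default)."""
    size = 1
    while size < stream_count:
        size *= 2
    return size

# e.g. 3 streams -> batch size 4; 8 streams -> batch size 8
```

So a pipeline with 5 input streams would get a default batch size of 8 unless `batch_size` is set explicitly.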
:::tip

`num_classes` is determined automatically by counting the lines in `label_file`. The DeepStream `nvinfer` config file is auto-generated, so you only need to provide the ONNX model and the label file.

:::
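To illustrate the line-counting behavior, here is a minimal sketch; `infer_num_classes` is a hypothetical name, and skipping blank lines is an assumption (the docs only say lines are counted):

```python
def infer_num_classes(label_file: str) -> int:
    # One class name per line; blank lines are ignored here (assumption).
    with open(label_file) as f:
        return sum(1 for line in f if line.strip())
```

A `labels.txt` containing `car`, `truck`, and `bus` on separate lines would therefore yield `num_classes = 3`.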

See Model Types for detailed usage of each type.