## models
The models list defines the AI inference pipeline. It can contain detectors, classifiers, and an optional tracker. Entries are processed in the order they appear, which controls the inference chain.
```yaml
models:
  - name: "my-detector"
    type: "detector"
    architecture: "yolo"
    onnx_file: /path/to/model.onnx
    label_file: /path/to/labels.txt
```
### Model fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `name` | string | No | auto-generated | Unique name for this model; used by `operate_on` for chaining. |
| `type` | string | Yes | — | One of `detector`, `classifier`, or `tracker`. |
| `architecture` | string | Yes (detectors) | — | Model architecture (e.g., `yolo`). |
| `onnx_file` | string | Yes | — | Path to the ONNX model file. |
| `label_file` | string | Yes | — | Path to the label file (one class name per line). |
| `batch_size` | integer | No | next power of 2 ≥ stream count | Inference batch size. |
| `precision` | string | No | `"fp16"` | Inference precision: `fp32` or `fp16`. |
| `active_classes` | list of int | No | all classes | Only detect these class IDs (detectors only). |
| `operate_on` | string | Yes (classifiers) | — | Name of the parent model to chain from. |
| `operate_on_classes` | list of int | No | all parent classes | Only classify objects of these class IDs from the parent model. |
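To illustrate how `operate_on` chains models, the following sketch shows a detector, a classifier that runs only on the detector's output, and a tracker. All model names, paths, and class IDs here are hypothetical — substitute your own:

```yaml
models:
  # Primary detector: runs on every frame
  - name: "vehicle-detector"            # hypothetical name
    type: "detector"
    architecture: "yolo"
    onnx_file: /models/vehicles.onnx    # hypothetical path
    label_file: /models/vehicles.txt
    precision: "fp16"
    active_classes: [0, 1]              # restrict detection to two class IDs

  # Secondary classifier: runs on objects produced by the detector
  - name: "color-classifier"
    type: "classifier"
    onnx_file: /models/color.onnx
    label_file: /models/colors.txt
    operate_on: "vehicle-detector"      # chain from the detector above by name
    operate_on_classes: [0]             # only classify objects of class 0

  # Optional tracker
  - type: "tracker"
```

Order matters: the classifier must appear after the detector it names in `operate_on`, since entries are processed in list order.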
> **Tip:** `num_classes` is determined automatically by counting the lines in `label_file`. The DeepStream `nvinfer` config file is auto-generated — you only need to provide the ONNX model and the label file.
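As a concrete illustration of the label-file format described above, a hypothetical `label_file` with three classes would look like this (one class name per line; the line order typically corresponds to the class IDs, starting at 0):

```text
car
truck
bus
```

With this file, `num_classes` would be determined as 3.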
See Model Types for detailed usage of each type.