Hi Axelera team,
I’m running parallel and cascading pipelines together with inference.py; the pipelines initialize and generally work. The issue is input routing: during inference I’m unable to reliably select/bind the correct video source (file/RTSP/USB) to each network. For example, launching two independent detectors plus one cascade with multiple media sources (e.g., ./inference.py modelA modelB media/a.mp4 media/b.mp4 --pipe=gst --verbose) often results in the streams attaching to the wrong model or failing to open, while one stream runs as expected.
Is there a supported way to explicitly map inputs to models/pipelines when using inference.py? For instance:
- CLI flags to pin inputs by index/name (e.g., --input[0]=..., --input[1]=...)
- YAML fields that define per-network input URIs
- A recommended PipeManager pattern for deterministic source-to-network routing when combining parallel and cascading graphs
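To make concrete what I mean by "deterministic routing": below is a small, purely illustrative Python sketch (not Voyager SDK API; `Route` and `build_routes` are names I made up) of the behavior I'd like inference.py or PipeManager to guarantee, where each source URI is pinned to a pipeline task by explicit index and a count mismatch fails loudly instead of silently reshuffling streams.

```python
# Hypothetical illustration of explicit source-to-task binding.
# None of these names exist in the Voyager SDK; this only shows the
# desired contract: index-pinned routing with loud failure on mismatch.
from dataclasses import dataclass


@dataclass(frozen=True)
class Route:
    source_uri: str  # file / RTSP URL / USB device
    task_name: str   # network or cascade entry point from the pipeline YAML


def build_routes(sources, tasks):
    """Pair sources with tasks by index; refuse to guess on a mismatch,
    so an extra or missing source can never shift every other binding."""
    if len(sources) != len(tasks):
        raise ValueError(
            f"{len(sources)} sources for {len(tasks)} tasks; refusing to guess"
        )
    return [Route(s, t) for s, t in zip(sources, tasks)]


routes = build_routes(
    ["media/bottlecap.mp4", "media/ppe.mp4", "media/scp.mp4"],
    ["yolov8n-bottlecap-pt", "yolov8n-ppe-pt", "keypoint_detections"],
)
for r in routes:
    print(f"{r.source_uri} -> {r.task_name}")
```

If something equivalent already exists (a CLI flag, a YAML field, or a PipeManager hook), a pointer to it would be ideal.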
The command I ran:
./inference.py parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8 media/bottlecap.mp4 media/ppe.mp4 media/scp.mp4 --verbose
(venv) wgtech@wgtech-server:~/axelera/voyager-sdk$ ./inference.py parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8 media/bottlecap.mp4 media/ppe.mp4 media/scp.mp4 --verbose
DEBUG :axelera.app.device_manager: Using device metis-0:6:0
DEBUG :axelera.app.network: Create network from /home/wgtech/axelera/voyager-sdk/customers/wgtech/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8.yaml
DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
WARNING :axelera.app.network: decodeyolo already in operator list; will be overwritten
DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
WARNING :axelera.app.network: decodeyolo already in operator list; will be overwritten
DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
WARNING :axelera.app.network: decodeyolo already in operator list; will be overwritten
DEBUG :axelera.app.network: Register custom operator 'decodeyolopose' with class DecodeYoloPose from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolopose.py
DEBUG :axelera.app.network: ~<any-user>/.cache/axelera/ not found in path /home/wgtech/axelera/voyager-sdk/customers/bottlecap/models/bottlecap.pt
DEBUG :axelera.app.network: ~<any-user>/.cache/axelera/ not found in path /home/wgtech/axelera/voyager-sdk/customers/ppe/ppe.pt
DEBUG :axelera.app.network: ~<any-user>/.cache/axelera/ not found in path /home/wgtech/axelera/voyager-sdk/customers/scpm/scp.pt
DEBUG :axelera.app.device_manager: Reconfiguring devices with device_firmware=1, mvm_utilisation_core_0=100%, clock_profile_core_0=800MHz, mvm_utilisation_core_1=100%, clock_profile_core_1=800MHz, mvm_utilisation_core_2=100%, clock_profile_core_2=800MHz, mvm_utilisation_core_3=100%, clock_profile_core_3=800MHz
DEBUG :axelera.app.utils: $ vainfo
DEBUG :axelera.app.utils: Found OpenCL GPU devices for platform Intel(R) OpenCL HD Graphics: Intel(R) UHD Graphics 770 [0x4680]
DEBUG :axelera.app.pipe.manager:
DEBUG :axelera.app.pipe.manager: --- EXECUTION VIEW ---
DEBUG :axelera.app.pipe.manager: Input
DEBUG :axelera.app.pipe.manager: │ └─yolov8n-bottlecap-pt
DEBUG :axelera.app.pipe.manager: │ └─yolov8n-ppe-pt
DEBUG :axelera.app.pipe.manager: └─keypoint_detections
DEBUG :axelera.app.pipe.manager: └─ │ scp_detections
DEBUG :axelera.app.pipe.manager:
DEBUG :axelera.app.pipe.manager: --- RESULT VIEW ---
DEBUG :axelera.app.pipe.manager: Input
DEBUG :axelera.app.pipe.manager: │ └─yolov8n-bottlecap-pt
DEBUG :axelera.app.pipe.manager: │ └─yolov8n-ppe-pt
DEBUG :axelera.app.pipe.manager: └─keypoint_detections
DEBUG :axelera.app.pipe.manager: └─ │ scp_detections
DEBUG :axelera.app.pipe.manager: Network type: NetworkType.COMPLEX_NETWORK
DEBUG :yolo: Model Type: YoloFamily.YOLOv8 (YOLOv8 pattern:
DEBUG :yolo: - 6 output tensors (anchor-free)
DEBUG :yolo: - 3 regression branches (64 channels)
DEBUG :yolo: - 3 classification branches (5 channels)
DEBUG :yolo: - Channel pattern: [64, 64, 64, 5, 5, 5]
DEBUG :yolo: - Shapes: [[1, 80, 80, 64], [1, 40, 40, 64], [1, 20, 20, 64], [1, 80, 80, 5], [1, 40, 40, 5], [1, 20, 20, 5]])
DEBUG :yolo: Model Type: YoloFamily.YOLOv8 (YOLOv8 pattern:
DEBUG :yolo: - 6 output tensors (anchor-free)
DEBUG :yolo: - 3 regression branches (64 channels)
DEBUG :yolo: - 3 classification branches (5 channels)
DEBUG :yolo: - Channel pattern: [64, 64, 64, 5, 5, 5]
DEBUG :yolo: - Shapes: [[1, 80, 80, 64], [1, 40, 40, 64], [1, 20, 20, 64], [1, 80, 80, 5], [1, 40, 40, 5], [1, 20, 20, 5]])
DEBUG :yolo: Model Type: YoloFamily.YOLOv8 (YOLOv8 pattern:
DEBUG :yolo: - 6 output tensors (anchor-free)
DEBUG :yolo: - 3 regression branches (64 channels)
DEBUG :yolo: - 3 classification branches (6 channels)
DEBUG :yolo: - Channel pattern: [64, 64, 64, 6, 6, 6]
DEBUG :yolo: - Shapes: [[1, 80, 80, 64], [1, 40, 40, 64], [1, 20, 20, 64], [1, 80, 80, 6], [1, 40, 40, 6], [1, 20, 20, 6]])
DEBUG :axelera.app.pipe.io: FPS of /home/wgtech/.cache/axelera/media/bottlecap.mp4: 30
DEBUG :axelera.app.pipe.io: FPS of /home/wgtech/.cache/axelera/media/ppe.mp4: 30
DEBUG :axelera.app.pipe.io: FPS of /home/wgtech/.cache/axelera/media/scp.mp4: 30
DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task0 model=/home/wgtech/axelera/voyager-sdk/build/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8/yolov8n-bottlecap-pt/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task1 model=/home/wgtech/axelera/voyager-sdk/build/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8/yolov8n-ppe-pt/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task2 model=/home/wgtech/axelera/voyager-sdk/build/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8/yolo11npose-coco-onnx/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task3 model=/home/wgtech/axelera/voyager-sdk/build/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8/yolov8n-scpm-pt/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
DEBUG :axelera.app.pipe.gst: GST representation written to build/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8/logs/gst_pipeline.yaml
DEBUG :axelera.app.pipe.gst: Started building gst pipeline
DEBUG :axelera.app.pipe.gst: Received first frame from gstreamer
DEBUG :axelera.app.pipe.gst: Finished building gst pipeline - build time = 1.451
DEBUG :axelera.app.display: System memory: 3837.43 MB axelera: 1420.70 MB, vms = 11133.25 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3934.57 MB axelera: 1483.63 MB, vms = 11283.24 MB display queue size: 9
DEBUG :axelera.app.display: System memory: 3873.24 MB axelera: 1490.35 MB, vms = 11285.33 MB display queue size: 10
DEBUG :axelera.app.display: System memory: 3853.47 MB axelera: 1491.75 MB, vms = 11285.33 MB display queue size: 9
DEBUG :axelera.app.display: System memory: 3852.30 MB axelera: 1492.07 MB, vms = 11283.30 MB display queue size: 8
DEBUG :axelera.app.display: System memory: 3864.85 MB axelera: 1493.94 MB, vms = 11284.33 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3862.25 MB axelera: 1494.41 MB, vms = 11285.33 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3845.40 MB axelera: 1494.41 MB, vms = 11282.49 MB display queue size: 2
DEBUG :axelera.app.display: System memory: 3845.26 MB axelera: 1495.19 MB, vms = 11284.32 MB display queue size: 2
DEBUG :axelera.app.display: System memory: 3842.05 MB axelera: 1496.75 MB, vms = 11284.59 MB display queue size: 8
DEBUG :axelera.app.display: System memory: 3835.86 MB axelera: 1498.32 MB, vms = 11286.29 MB display queue size: 7
DEBUG :axelera.app.display: System memory: 3837.49 MB axelera: 1498.32 MB, vms = 11287.29 MB display queue size: 7
DEBUG :axelera.app.display: System memory: 3842.35 MB axelera: 1499.10 MB, vms = 11280.86 MB display queue size: 5
DEBUG :axelera.app.pipe.gst_helper: End of stream
INFO : Core Temp : 39.0°C
INFO : CPU % : 16.8%
INFO : End-to-end : 144.8fps
DEBUG :axelera.app.meta.object_detection: Total number of detections: 20170
My YAML file:
axelera-model-format: 1.0.0
name: parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8
description: Parallel networks - yolov8 bottle-cap fit and licence plate detections
pipeline:
  - yolov8n-bottlecap-pt:
      template_path: $AXELERA_FRAMEWORK/pipeline-template/yolo-letterbox.yaml
      postprocess:
        - decodeyolo: # fine-tune decoder settings
            max_nms_boxes: 30000
            conf_threshold: 0.25
            nms_iou_threshold: 0.45
            nms_class_agnostic: False
            nms_top_k: 300
            eval:
              conf_threshold: 0.001 # overwrites above parameter during accuracy measurements
  - yolov8n-ppe-pt:
      template_path: $AXELERA_FRAMEWORK/pipeline-template/yolo-letterbox.yaml
      postprocess:
        - decodeyolo: # fine-tune decoder settings
            max_nms_boxes: 30000
            conf_threshold: 0.25
            nms_iou_threshold: 0.45
            nms_class_agnostic: False
            nms_top_k: 300
            eval:
              conf_threshold: 0.001
  # ── Stage 1: Pose (produces person boxes + keypoints) ─────────────────────────
  - keypoint_detections:
      model_name: yolo11npose-coco-onnx
      input:
        type: image
        color_format: RGB
      preprocess:
        - letterbox:
            width: 640
            height: 640
            scaleup: True
        - torch-totensor:
      postprocess:
        - decodeyolopose:
            max_nms_boxes: 30000
            conf_threshold: 0.25
            nms_iou_threshold: 0.45
            nms_top_k: 300
            eval:
              conf_threshold: 0.001
              nms_iou_threshold: 0.65
            box_format: xywh
            normalized_coord: False
  # ── Stage 2: SCP detector on person ROIs from stage 1 ────────────────────────
  - scp_detections:
      model_name: yolov8n-scpm-pt
      # Reuse standard YOLO letterbox pipeline for the crops
      template_path: $AXELERA_FRAMEWORK/pipeline-template/yolo-letterbox.yaml
      input:
        type: image
        color_format: RGB
        source: roi # <- use crops
        where: keypoint_detections # <- from stage 1
      postprocess:
        - decodeyolo:
            box_format: xywh
            normalized_coord: False
            max_nms_boxes: 30000
            conf_threshold: 0.25
            nms_iou_threshold: 0.45
            nms_class_agnostic: False
            nms_top_k: 300
            eval:
              conf_threshold: 0.001
operators:
  decodeyolo:
    class: DecodeYolo
    class_path: $AXELERA_FRAMEWORK/ax_models/decoders/yolo.py
  decodeyolopose:
    class: DecodeYoloPose
    class_path: $AXELERA_FRAMEWORK/ax_models/decoders/yolopose.py
models:
  yolov8n-bottlecap-pt:
    class: AxUltralyticsYOLO
    class_path: $AXELERA_FRAMEWORK/ax_models/yolo/ax_ultralytics.py
    weight_path: $AXELERA_FRAMEWORK/customers/bottlecap/models/bottlecap.pt
    task_category: ObjectDetection
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 640, 640]
    input_color_format: RGB
    num_classes: 5
    dataset: BottlecapCalibration
    extra_kwargs:
      aipu_cores: 1
  yolov8n-ppe-pt:
    class: AxUltralyticsYOLO
    class_path: $AXELERA_FRAMEWORK/ax_models/yolo/ax_ultralytics.py
    weight_path: $AXELERA_FRAMEWORK/customers/ppe/ppe.pt
    task_category: ObjectDetection
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 640, 640]
    input_color_format: RGB
    num_classes: 5
    dataset: ppecalib
    extra_kwargs:
      aipu_cores: 1
  # Stage 1 model: YOLO11n Pose (ONNX)
  yolo11npose-coco-onnx:
    class: AxONNXModel
    class_path: $AXELERA_FRAMEWORK/ax_models/base_onnx.py
    weight_path: weights/yolo11n-pose.onnx
    weight_url: https://media.axelera.ai/artifacts/model_cards/weights/yolo/keypoint_detection/yolo11n-pose.onnx
    weight_md5: ffa277388dc15a97330b396ca4565a8a
    task_category: KeypointDetection
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 640, 640]
    input_color_format: RGB
    num_classes: 1
    dataset: CocoDataset-keypoint-COCO2017
    extra_kwargs:
      compilation_config:
        quantization_scheme: per_tensor_min_max
        ignore_weight_buffers: False
      aipu_cores: 1 # suggested split (Metis PCIe has 4 cores)
  # Stage 2 model: your SCP detector (Ultralytics PT)
  yolov8n-scpm-pt:
    class: AxUltralyticsYOLO
    class_path: $AXELERA_FRAMEWORK/ax_models/yolo/ax_ultralytics.py
    weight_path: $AXELERA_FRAMEWORK/customers/scpm/scp.pt # <-- your trained weights
    task_category: ObjectDetection
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 640, 640]
    input_color_format: RGB
    num_classes: 6
    dataset: scpmcalib
    extra_kwargs:
      compilation_config:
        quantization_scheme: per_tensor_min_max
      aipu_cores: 1
datasets: # Python dataloader
  BottlecapCalibration:
    class: ObjDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/objdataadapter.py
    data_dir_name: bottlecap
    label_type: YOLOv8
    labels: data.yaml
    cal_data: train # Text file with image paths or directory like `valid`
    val_data: test
  ppecalib:
    class: ObjDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/objdataadapter.py
    data_dir_name: ppe
    label_type: YOLOv8
    labels: data.yaml
    cal_data: train # Text file with image paths or directory like `valid`
    val_data: test # Text file with image paths or directory like `test`
  # Pose calibration (COCO keypoints)
  CocoDataset-keypoint-COCO2017:
    class: KptDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/objdataadapter.py
    data_dir_name: coco
    label_type: COCO2017
    repr_imgs_dir_path: $AXELERA_FRAMEWORK/data/coco2017_400_b680128
    repr_imgs_url: https://media.axelera.ai/artifacts/data/coco/coco2017_repr400.zip
    repr_imgs_md5: b680128512392586e3c86b670886d9fa
  # SCP calibration/val (your dataset)
  scpmcalib:
    class: ObjDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/objdataadapter.py
    data_dir_name: scpm # folder containing train/ valid/ test/ data.yaml
    label_type: YOLOv8
    labels: data.yaml # must exist inside the scpm/ folder
    cal_data: train
    val_data: test
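For context on the stage-2 hookup: my understanding of `source: roi` / `where: keypoint_detections` is that stage 2 runs on per-person crops taken from stage-1 boxes rather than on full frames. A minimal NumPy sketch of that cropping step (illustrative only; `crop_rois` is my own helper, not SDK code):

```python
import numpy as np


def crop_rois(frame, boxes_xyxy):
    """Crop each detection box out of the frame, clamped to image bounds.
    This mirrors what the YAML's `source: roi` cascade hand-off implies:
    the second model sees one crop per stage-1 detection."""
    h, w = frame.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes_xyxy:
        x1, x2 = max(0, int(x1)), min(w, int(x2))
        y1, y2 = max(0, int(y1)), min(h, int(y2))
        if x2 > x1 and y2 > y1:  # drop degenerate boxes
            crops.append(frame[y1:y2, x1:x2])
    return crops


frame = np.zeros((480, 640, 3), dtype=np.uint8)
crops = crop_rois(frame, [(10, 20, 110, 220), (-5, 0, 50, 50)])
print([c.shape for c in crops])  # → [(200, 100, 3), (50, 50, 3)]
```

If the routing bug means the wrong stream reaches `keypoint_detections`, stage 2 would then be cropping people out of the wrong video entirely, which matches what I observe.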