
Hi Axelera team,

I’m running parallel and cascading pipelines together with inference.py—the pipelines initialize and generally work. The issue is input routing: during inference I’m unable to reliably select/bind the correct video (file/RTSP/USB) to each network. For example, launching two independent detectors plus one cascade with multiple media sources (e.g., ./inference.py modelA modelB media/a.mp4 media/b.mp4 --pipe=gst --verbose) often results in the streams attaching to the wrong model or failing to open, while one stream runs as expected.

Is there a supported way to explicitly map inputs to models/pipelines when using inference.py? For instance:

  • CLI flags to pin inputs by index/name (e.g., --input[0]=..., --input[1]=...)

  • YAML fields that define per-network input URIs

  • A recommended PipeManager pattern for deterministic source‑to‑network routing when combining parallel and cascading graphs

     

    The command I ran:

    ./inference.py parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8 media/bottlecap.mp4 media/ppe.mp4 media/scp.mp4 --verbose

    (venv) wgtech@wgtech-server:~/axelera/voyager-sdk$ ./inference.py parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8 media/bottlecap.mp4 media/ppe.mp4 media/scp.mp4 --verbose
    DEBUG :axelera.app.device_manager: Using device metis-0:6:0
    DEBUG :axelera.app.network: Create network from /home/wgtech/axelera/voyager-sdk/customers/wgtech/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8.yaml
    DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
    DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
    WARNING :axelera.app.network: decodeyolo already in operator list; will be overwritten
    DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
    WARNING :axelera.app.network: decodeyolo already in operator list; will be overwritten
    DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
    WARNING :axelera.app.network: decodeyolo already in operator list; will be overwritten
    DEBUG :axelera.app.network: Register custom operator 'decodeyolopose' with class DecodeYoloPose from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolopose.py
    DEBUG :axelera.app.network: ~<any-user>/.cache/axelera/ not found in path /home/wgtech/axelera/voyager-sdk/customers/bottlecap/models/bottlecap.pt
    DEBUG :axelera.app.network: ~<any-user>/.cache/axelera/ not found in path /home/wgtech/axelera/voyager-sdk/customers/ppe/ppe.pt
    DEBUG :axelera.app.network: ~<any-user>/.cache/axelera/ not found in path /home/wgtech/axelera/voyager-sdk/customers/scpm/scp.pt
    DEBUG :axelera.app.device_manager: Reconfiguring devices with device_firmware=1, mvm_utilisation_core_0=100%, clock_profile_core_0=800MHz, mvm_utilisation_core_1=100%, clock_profile_core_1=800MHz, mvm_utilisation_core_2=100%, clock_profile_core_2=800MHz, mvm_utilisation_core_3=100%, clock_profile_core_3=800MHz
    DEBUG :axelera.app.utils: $ vainfo
    DEBUG :axelera.app.utils: Found OpenCL GPU devices for platform Intel(R) OpenCL HD Graphics: Intel(R) UHD Graphics 770 [0x4680]
    DEBUG :axelera.app.pipe.manager:
    DEBUG :axelera.app.pipe.manager: --- EXECUTION VIEW ---
    DEBUG :axelera.app.pipe.manager: Input
    DEBUG :axelera.app.pipe.manager: │ └─yolov8n-bottlecap-pt
    DEBUG :axelera.app.pipe.manager: │ └─yolov8n-ppe-pt
    DEBUG :axelera.app.pipe.manager: └─keypoint_detections
    DEBUG :axelera.app.pipe.manager: └─ │ scp_detections
    DEBUG :axelera.app.pipe.manager:
    DEBUG :axelera.app.pipe.manager: --- RESULT VIEW ---
    DEBUG :axelera.app.pipe.manager: Input
    DEBUG :axelera.app.pipe.manager: │ └─yolov8n-bottlecap-pt
    DEBUG :axelera.app.pipe.manager: │ └─yolov8n-ppe-pt
    DEBUG :axelera.app.pipe.manager: └─keypoint_detections
    DEBUG :axelera.app.pipe.manager: └─ │ scp_detections
    DEBUG :axelera.app.pipe.manager: Network type: NetworkType.COMPLEX_NETWORK
    DEBUG :yolo: Model Type: YoloFamily.YOLOv8 (YOLOv8 pattern:
    DEBUG :yolo: - 6 output tensors (anchor-free)
    DEBUG :yolo: - 3 regression branches (64 channels)
    DEBUG :yolo: - 3 classification branches (5 channels)
    DEBUG :yolo: - Channel pattern: [64, 64, 64, 5, 5, 5]
    DEBUG :yolo: - Shapes: ([[1, 80, 80, 64], [1, 40, 40, 64], [1, 20, 20, 64], [1, 80, 80, 5], [1, 40, 40, 5], [1, 20, 20, 5]])
    DEBUG :yolo: Model Type: YoloFamily.YOLOv8 (YOLOv8 pattern:
    DEBUG :yolo: - 6 output tensors (anchor-free)
    DEBUG :yolo: - 3 regression branches (64 channels)
    DEBUG :yolo: - 3 classification branches (5 channels)
    DEBUG :yolo: - Channel pattern: [64, 64, 64, 5, 5, 5]
    DEBUG :yolo: - Shapes: ([[1, 80, 80, 64], [1, 40, 40, 64], [1, 20, 20, 64], [1, 80, 80, 5], [1, 40, 40, 5], [1, 20, 20, 5]])
    DEBUG :yolo: Model Type: YoloFamily.YOLOv8 (YOLOv8 pattern:
    DEBUG :yolo: - 6 output tensors (anchor-free)
    DEBUG :yolo: - 3 regression branches (64 channels)
    DEBUG :yolo: - 3 classification branches (6 channels)
    DEBUG :yolo: - Channel pattern: [64, 64, 64, 6, 6, 6]
    DEBUG :yolo: - Shapes: ([[1, 80, 80, 64], [1, 40, 40, 64], [1, 20, 20, 64], [1, 80, 80, 6], [1, 40, 40, 6], [1, 20, 20, 6]])
    DEBUG :axelera.app.pipe.io: FPS of /home/wgtech/.cache/axelera/media/bottlecap.mp4: 30
    DEBUG :axelera.app.pipe.io: FPS of /home/wgtech/.cache/axelera/media/ppe.mp4: 30
    DEBUG :axelera.app.pipe.io: FPS of /home/wgtech/.cache/axelera/media/scp.mp4: 30
    DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task0 model=/home/wgtech/axelera/voyager-sdk/build/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8/yolov8n-bottlecap-pt/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
    DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task1 model=/home/wgtech/axelera/voyager-sdk/build/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8/yolov8n-ppe-pt/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
    DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task2 model=/home/wgtech/axelera/voyager-sdk/build/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8/yolo11npose-coco-onnx/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
    DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task3 model=/home/wgtech/axelera/voyager-sdk/build/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8/yolov8n-scpm-pt/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
    DEBUG :axelera.app.pipe.gst: GST representation written to build/parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8/logs/gst_pipeline.yaml
    DEBUG :axelera.app.pipe.gst: Started building gst pipeline
    DEBUG :axelera.app.pipe.gst: Received first frame from gstreamer
    DEBUG :axelera.app.pipe.gst: Finished building gst pipeline - build time = 1.451
    DEBUG :axelera.app.display: System memory: 3837.43 MB axelera: 1420.70 MB, vms = 11133.25 MB display queue size: 1
    DEBUG :axelera.app.display: System memory: 3934.57 MB axelera: 1483.63 MB, vms = 11283.24 MB display queue size: 9
    DEBUG :axelera.app.display: System memory: 3873.24 MB axelera: 1490.35 MB, vms = 11285.33 MB display queue size: 10
    DEBUG :axelera.app.display: System memory: 3853.47 MB axelera: 1491.75 MB, vms = 11285.33 MB display queue size: 9
    DEBUG :axelera.app.display: System memory: 3852.30 MB axelera: 1492.07 MB, vms = 11283.30 MB display queue size: 8
    DEBUG :axelera.app.display: System memory: 3864.85 MB axelera: 1493.94 MB, vms = 11284.33 MB display queue size: 1
    DEBUG :axelera.app.display: System memory: 3862.25 MB axelera: 1494.41 MB, vms = 11285.33 MB display queue size: 1
    DEBUG :axelera.app.display: System memory: 3845.40 MB axelera: 1494.41 MB, vms = 11282.49 MB display queue size: 2
    DEBUG :axelera.app.display: System memory: 3845.26 MB axelera: 1495.19 MB, vms = 11284.32 MB display queue size: 2
    DEBUG :axelera.app.display: System memory: 3842.05 MB axelera: 1496.75 MB, vms = 11284.59 MB display queue size: 8
    DEBUG :axelera.app.display: System memory: 3835.86 MB axelera: 1498.32 MB, vms = 11286.29 MB display queue size: 7
    DEBUG :axelera.app.display: System memory: 3837.49 MB axelera: 1498.32 MB, vms = 11287.29 MB display queue size: 7
    DEBUG :axelera.app.display: System memory: 3842.35 MB axelera: 1499.10 MB, vms = 11280.86 MB display queue size: 5
    DEBUG :axelera.app.pipe.gst_helper: End of stream
    INFO : Core Temp : 39.0°C
    INFO : CPU % : 16.8%
    INFO : End-to-end : 144.8fps
    DEBUG :axelera.app.meta.object_detection: Total number of detections: 20170

     

My YAML file:

axelera-model-format: 1.0.0

name: parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8

description: Parallel networks yolov8 bottle cap fit and licence plate detections

pipeline:
  - yolov8n-bottlecap-pt:
      template_path: $AXELERA_FRAMEWORK/pipeline-template/yolo-letterbox.yaml
      postprocess:
        - decodeyolo: # fine-tune decoder settings
            max_nms_boxes: 30000
            conf_threshold: 0.25
            nms_iou_threshold: 0.45
            nms_class_agnostic: False
            nms_top_k: 300
            eval:
              conf_threshold: 0.001 # overwrites above parameter during accuracy measurements
  - yolov8n-ppe-pt:
      template_path: $AXELERA_FRAMEWORK/pipeline-template/yolo-letterbox.yaml
      postprocess:
        - decodeyolo: # fine-tune decoder settings
            max_nms_boxes: 30000
            conf_threshold: 0.25
            nms_iou_threshold: 0.45
            nms_class_agnostic: False
            nms_top_k: 300
            eval:
              conf_threshold: 0.001
  # ── Stage 1: Pose (produces person boxes + keypoints) ─────────────────────────
  - keypoint_detections:
      model_name: yolo11npose-coco-onnx
      input:
        type: image
        color_format: RGB
      preprocess:
        - letterbox:
            width: 640
            height: 640
            scaleup: True
        - torch-totensor:
      postprocess:
        - decodeyolopose:
            max_nms_boxes: 30000
            conf_threshold: 0.25
            nms_iou_threshold: 0.45
            nms_top_k: 300
            eval:
              conf_threshold: 0.001
              nms_iou_threshold: 0.65
            box_format: xywh
            normalized_coord: False

  # ── Stage 2: SCP detector on person ROIs from stage 1 ────────────────────────
  - scp_detections:
      model_name: yolov8n-scpm-pt
      # Reuse standard YOLO letterbox pipeline for the crops
      template_path: $AXELERA_FRAMEWORK/pipeline-template/yolo-letterbox.yaml
      input:
        type: image
        color_format: RGB
        source: roi                  # <- use crops
        where: keypoint_detections   # <- from stage 1
      postprocess:
        - decodeyolo:
            box_format: xywh
            normalized_coord: False
            max_nms_boxes: 30000
            conf_threshold: 0.25
            nms_iou_threshold: 0.45
            nms_class_agnostic: False
            nms_top_k: 300
            eval:
              conf_threshold: 0.001

operators:
  decodeyolo:
    class: DecodeYolo
    class_path: $AXELERA_FRAMEWORK/ax_models/decoders/yolo.py
  decodeyolopose:
    class: DecodeYoloPose
    class_path: $AXELERA_FRAMEWORK/ax_models/decoders/yolopose.py


models:
  yolov8n-bottlecap-pt:
    class: AxUltralyticsYOLO
    class_path: $AXELERA_FRAMEWORK/ax_models/yolo/ax_ultralytics.py
    weight_path: $AXELERA_FRAMEWORK/customers/bottlecap/models/bottlecap.pt
    task_category: ObjectDetection
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 640, 640]
    input_color_format: RGB
    num_classes: 5
    dataset: BottlecapCalibration
    extra_kwargs:
      aipu_cores: 1
  yolov8n-ppe-pt:
    class: AxUltralyticsYOLO
    class_path: $AXELERA_FRAMEWORK/ax_models/yolo/ax_ultralytics.py
    weight_path: $AXELERA_FRAMEWORK/customers/ppe/ppe.pt
    task_category: ObjectDetection
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 640, 640]
    input_color_format: RGB
    num_classes: 5
    dataset: ppecalib
    extra_kwargs:
      aipu_cores: 1
  # Stage 1 model: YOLO11n Pose (ONNX)
  yolo11npose-coco-onnx:
    class: AxONNXModel
    class_path: $AXELERA_FRAMEWORK/ax_models/base_onnx.py
    weight_path: weights/yolo11n-pose.onnx
    weight_url: https://media.axelera.ai/artifacts/model_cards/weights/yolo/keypoint_detection/yolo11n-pose.onnx
    weight_md5: ffa277388dc15a97330b396ca4565a8a
    task_category: KeypointDetection
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 640, 640]
    input_color_format: RGB
    num_classes: 1
    dataset: CocoDataset-keypoint-COCO2017
    extra_kwargs:
      compilation_config:
        quantization_scheme: per_tensor_min_max
        ignore_weight_buffers: False
      aipu_cores: 1 # suggested split (Metis PCIe has 4 cores)

  # Stage 2 model: your SCP detector (Ultralytics PT)
  yolov8n-scpm-pt:
    class: AxUltralyticsYOLO
    class_path: $AXELERA_FRAMEWORK/ax_models/yolo/ax_ultralytics.py
    weight_path: $AXELERA_FRAMEWORK/customers/scpm/scp.pt # <-- your trained weights
    task_category: ObjectDetection
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 640, 640]
    input_color_format: RGB
    num_classes: 6
    dataset: scpmcalib
    extra_kwargs:
      compilation_config:
        quantization_scheme: per_tensor_min_max
      aipu_cores: 1



datasets: # Python dataloader
  BottlecapCalibration:
    class: ObjDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/objdataadapter.py
    data_dir_name: bottlecap
    label_type: YOLOv8
    labels: data.yaml
    cal_data: train # Text file with image paths or directory like `valid`
    val_data: test
  ppecalib:
    class: ObjDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/objdataadapter.py
    data_dir_name: ppe
    label_type: YOLOv8
    labels: data.yaml
    cal_data: train # Text file with image paths or directory like `valid`
    val_data: test # Text file with image paths or directory like `test`
  # Pose calibration (COCO keypoints)
  CocoDataset-keypoint-COCO2017:
    class: KptDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/objdataadapter.py
    data_dir_name: coco
    label_type: COCO2017
    repr_imgs_dir_path: $AXELERA_FRAMEWORK/data/coco2017_400_b680128
    repr_imgs_url: https://media.axelera.ai/artifacts/data/coco/coco2017_repr400.zip
    repr_imgs_md5: b680128512392586e3c86b670886d9fa

  # SCP calibration/val (your dataset)
  scpmcalib:
    class: ObjDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/objdataadapter.py
    data_dir_name: scpm # folder containing train/ valid/ test/ data.yaml
    label_type: YOLOv8
    labels: data.yaml # must exist inside the scpm/ folder
    cal_data: train
    val_data: test

 

Hi @WGPravin!

Ah, intriguing question 🤔 Did you check out the stream_select.py examples? They might be what we’re looking for here; the stuff around manually creating source objects, and then assigning each one to a named pipeline or model. 👍

https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.3/examples/stream_select.py
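Something along these lines, I think (just a sketch, untested; check the example itself for the exact arguments, as I'm only assuming network= and sources= are enough here and everything else is left at its defaults):

# Sketch of the stream_select.py idea: one inference stream per network, each
# created with exactly one explicitly chosen source, so the source-to-model
# mapping is deterministic by construction.
from axelera.app.stream import create_inference_stream

stream_a = create_inference_stream(
    network="modelA",            # network name or YAML path
    sources=["media/a.mp4"],     # the one source you want bound to modelA
)
stream_b = create_inference_stream(
    network="modelB",
    sources=["media/b.mp4"],
)

# Consume each stream; here just A, as an illustration. Each stream only ever
# yields results for its own network, so there's nothing left to "pair up".
for frame_result in stream_a:
    print(frame_result.meta)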


Hi @Spanner,

Thanks for the pointer! I took a look at examples/stream_select.py and I can see how manually creating source objects and wiring them to named pipelines would avoid the “attach-to-wrong-model” behavior I’m seeing with inference.py when running parallel + cascaded graphs. 

For context, I’m launching one complex YAML that has two independent detectors plus a 2‑stage cascade (pose → ROI detector). All three video inputs are discovered (GStreamer shows FPS for each), but inference.py’s automatic pairing isn’t deterministic in my case. The run also writes gst_pipeline.yaml, which confirms the sources are being built, but the model/source pairing isn’t what I intend. (I’m on the commit hash referenced here for inference.py.)

 Is there a supported way in inference.py to explicitly bind inputs to pipelines by name (e.g., --bind yolov8n-bottlecap-pt=A, --bind yolov8n-ppe-pt=B, --bind keypoint_detections=C), or should we treat stream_select.py as the pattern for this scenario?

What I’ll try next (based on your suggestion):

  1. Clone/adapt stream_select.py to create three named sources (A,B,C) and map them to my pipelines by name, with the cascade consuming ROIs from the pose stage.

  2. Verify the mapping by inspecting the generated gst_pipeline.yaml and the execution view logs.

  3. If that works, I can submit a small PR/issue proposing a --bind flag for inference.py so others can use deterministic routing from the CLI as well.

If there’s an established API to pass a source= hint in YAML per stage (for non‑ROI stages) I’m happy to switch to that, too. Guidance appreciated!


 

I have tried this code and it is working, and I have deployed all 4 models across the cores (one per core).

I'd like help with running all the models in a single window rather than 3 separate windows.

Is that possible?

output:

(venv) wgtech@wgtech-server:~/axelera/voyager-sdk$ python customers/wgtech/examples/bind_three_streams.py --pipe gst --display opencv --aipu-cores 1 -v
WARNING: info: models 'yolov8n-scpm-pt' and 'yolov8n-scpm-obb' have the same description:
- /home/wgtech/axelera/voyager-sdk/customers/scpm/yolov8n-scpm-pt.yaml
- /home/wgtech/axelera/voyager-sdk/customers/scpm/obb/yolov8n-scpm-obb.yaml
Description: YOLOv8n, 640x640, 6-class scp (pt model)
DEBUG :axelera.app.device_manager: Using device metis-0:6:0
DEBUG :axelera.app.network: Create network from /home/wgtech/axelera/voyager-sdk/customers/bottlecap/yolov8n-bottlecap-pt.yaml
DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
DEBUG :axelera.app.network: ~<any-user>/.cache/axelera/ not found in path /home/wgtech/axelera/voyager-sdk/customers/bottlecap/models/bottlecap.pt
DEBUG :axelera.app.device_manager: Reconfiguring devices with device_firmware=1, mvm_utilisation_core_0=100%, clock_profile_core_0=800MHz
DEBUG :axelera.app.utils: $ vainfo
DEBUG :axelera.app.utils: Found OpenCL GPU devices for platform Intel(R) OpenCL HD Graphics: Intel(R) UHD Graphics 770 [0x4680]
DEBUG :axelera.app.pipe.manager:
DEBUG :axelera.app.pipe.manager: --- EXECUTION VIEW ---
DEBUG :axelera.app.pipe.manager: Input
DEBUG :axelera.app.pipe.manager: └─yolov8n-bottlecap-pt
DEBUG :axelera.app.pipe.manager:
DEBUG :axelera.app.pipe.manager: --- RESULT VIEW ---
DEBUG :axelera.app.pipe.manager: Input
DEBUG :axelera.app.pipe.manager: └─yolov8n-bottlecap-pt
DEBUG :axelera.app.pipe.manager: Network type: NetworkType.SINGLE_MODEL
DEBUG :yolo: Model Type: YoloFamily.YOLOv8 (YOLOv8 pattern:
DEBUG :yolo: - 6 output tensors (anchor-free)
DEBUG :yolo: - 3 regression branches (64 channels)
DEBUG :yolo: - 3 classification branches (5 channels)
DEBUG :yolo: - Channel pattern: [64, 64, 64, 5, 5, 5]
DEBUG :yolo: - Shapes: ([[1, 80, 80, 64], [1, 40, 40, 64], [1, 20, 20, 64], [1, 80, 80, 5], [1, 40, 40, 5], [1, 20, 20, 5]])
DEBUG :axelera.app.pipe.io: FPS of /home/wgtech/.cache/axelera/media/bottlecap.mp4: 30
DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task0 model=/home/wgtech/axelera/voyager-sdk/build/yolov8n-bottlecap-pt/yolov8n-bottlecap-pt/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
DEBUG :axelera.app.pipe.gst: GST representation written to build/yolov8n-bottlecap-pt/logs/gst_pipeline.yaml
DEBUG :axelera.app.pipe.gst: Started building gst pipeline
DEBUG :axelera.app.pipe.gst: Received first frame from gstreamer
DEBUG :axelera.app.pipe.gst: Finished building gst pipeline - build time = 0.506
DEBUG :axelera.app.device_manager: Using device metis-0:6:0
DEBUG :axelera.app.network: Create network from /home/wgtech/axelera/voyager-sdk/customers/ppe/yolov8n-ppe-pt.yaml
DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
DEBUG :axelera.app.network: ~<any-user>/.cache/axelera/ not found in path /home/wgtech/axelera/voyager-sdk/customers/ppe/ppe.pt
DEBUG :axelera.app.device_manager: Reconfiguring devices with device_firmware=1, mvm_utilisation_core_0=100%, clock_profile_core_0=800MHz
DEBUG :axelera.app.utils: $ vainfo
DEBUG :axelera.app.utils: Found OpenCL GPU devices for platform Intel(R) OpenCL HD Graphics: Intel(R) UHD Graphics 770 [0x4680]
DEBUG :axelera.app.pipe.manager:
DEBUG :axelera.app.pipe.manager: --- EXECUTION VIEW ---
DEBUG :axelera.app.pipe.manager: Input
DEBUG :axelera.app.pipe.manager: └─yolov8n-ppe-pt
DEBUG :axelera.app.pipe.manager:
DEBUG :axelera.app.pipe.manager: --- RESULT VIEW ---
DEBUG :axelera.app.pipe.manager: Input
DEBUG :axelera.app.pipe.manager: └─yolov8n-ppe-pt
DEBUG :axelera.app.pipe.manager: Network type: NetworkType.SINGLE_MODEL
DEBUG :yolo: Model Type: YoloFamily.YOLOv8 (YOLOv8 pattern:
DEBUG :yolo: - 6 output tensors (anchor-free)
DEBUG :yolo: - 3 regression branches (64 channels)
DEBUG :yolo: - 3 classification branches (5 channels)
DEBUG :yolo: - Channel pattern: [64, 64, 64, 5, 5, 5]
DEBUG :yolo: - Shapes: ([[1, 80, 80, 64], [1, 40, 40, 64], [1, 20, 20, 64], [1, 80, 80, 5], [1, 40, 40, 5], [1, 20, 20, 5]])
DEBUG :axelera.app.pipe.io: FPS of /home/wgtech/.cache/axelera/media/ppe.mp4: 30
DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task0 model=/home/wgtech/axelera/voyager-sdk/build/yolov8n-ppe-pt/yolov8n-ppe-pt/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
DEBUG :axelera.app.pipe.gst: GST representation written to build/yolov8n-ppe-pt/logs/gst_pipeline.yaml
DEBUG :axelera.app.pipe.gst: Started building gst pipeline
DEBUG :axelera.app.pipe.gst: Received first frame from gstreamer
DEBUG :axelera.app.pipe.gst: Finished building gst pipeline - build time = 0.459
DEBUG :axelera.app.device_manager: Using device metis-0:6:0
DEBUG :axelera.app.network: Create network from /home/wgtech/axelera/voyager-sdk/customers/scpm/pose_scp_cascade.yaml
DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
DEBUG :axelera.app.network: Register custom operator 'decodeyolo' with class DecodeYolo from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolo.py
WARNING :axelera.app.network: decodeyolo already in operator list; will be overwritten
DEBUG :axelera.app.network: Register custom operator 'decodeyolopose' with class DecodeYoloPose from /home/wgtech/axelera/voyager-sdk/ax_models/decoders/yolopose.py
DEBUG :axelera.app.network: ~<any-user>/.cache/axelera/ not found in path /home/wgtech/axelera/voyager-sdk/customers/scpm/scp.pt
DEBUG :axelera.app.device_manager: Reconfiguring devices with device_firmware=1, mvm_utilisation_core_0=100%, clock_profile_core_0=800MHz, mvm_utilisation_core_1=100%, clock_profile_core_1=800MHz
DEBUG :axelera.app.utils: $ vainfo
DEBUG :axelera.app.utils: Found OpenCL GPU devices for platform Intel(R) OpenCL HD Graphics: Intel(R) UHD Graphics 770 [0x4680]
DEBUG :axelera.app.pipe.manager:
DEBUG :axelera.app.pipe.manager: --- EXECUTION VIEW ---
DEBUG :axelera.app.pipe.manager: Input
DEBUG :axelera.app.pipe.manager: └─keypoint_detections
DEBUG :axelera.app.pipe.manager: └─ │ scp_detections
DEBUG :axelera.app.pipe.manager:
DEBUG :axelera.app.pipe.manager: --- RESULT VIEW ---
DEBUG :axelera.app.pipe.manager: Input
DEBUG :axelera.app.pipe.manager: └─keypoint_detections
DEBUG :axelera.app.pipe.manager: └─ │ scp_detections
DEBUG :axelera.app.pipe.manager: Network type: NetworkType.CASCADE_NETWORK
DEBUG :yolo: Model Type: YoloFamily.YOLOv8 (YOLOv8 pattern:
DEBUG :yolo: - 6 output tensors (anchor-free)
DEBUG :yolo: - 3 regression branches (64 channels)
DEBUG :yolo: - 3 classification branches (6 channels)
DEBUG :yolo: - Channel pattern: [64, 64, 64, 6, 6, 6]
DEBUG :yolo: - Shapes: ([[1, 80, 80, 64], [1, 40, 40, 64], [1, 20, 20, 64], [1, 80, 80, 6], [1, 40, 40, 6], [1, 20, 20, 6]])
DEBUG :axelera.app.pipe.io: FPS of /home/wgtech/.cache/axelera/media/scp.mp4: 30
DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task0 model=/home/wgtech/axelera/voyager-sdk/build/pose-scp-cascade/yolo11npose-coco-onnx/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
DEBUG :axelera.app.operators.inference: Using inferencenet name=inference-task1 model=/home/wgtech/axelera/voyager-sdk/build/pose-scp-cascade/yolov8n-scpm-pt/1/model.json devices=metis-0:6:0 double_buffer=True dmabuf_inputs=True dmabuf_outputs=True num_children=0
DEBUG :axelera.app.pipe.gst: GST representation written to build/pose-scp-cascade/logs/gst_pipeline.yaml
DEBUG :axelera.app.pipe.gst: Started building gst pipeline
DEBUG :axelera.app.pipe.gst: Received first frame from gstreamer
DEBUG :axelera.app.pipe.gst: Finished building gst pipeline - build time = 0.692
DEBUG :axelera.app.display: System memory: 3581.62 MB axelera: 1191.07 MB, vms = 12931.09 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3591.23 MB axelera: 1196.70 MB, vms = 13011.78 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3627.59 MB axelera: 1225.91 MB, vms = 13174.22 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3935.06 MB axelera: 1401.34 MB, vms = 13325.92 MB display queue size: 10
DEBUG :axelera.app.display: System memory: 3928.53 MB axelera: 1401.34 MB, vms = 13324.31 MB display queue size: 2
DEBUG :axelera.app.display: System memory: 3927.80 MB axelera: 1401.30 MB, vms = 13322.97 MB display queue size: 3
DEBUG :axelera.app.display: System memory: 3981.54 MB axelera: 1401.62 MB, vms = 13325.41 MB display queue size: 11
DEBUG :axelera.app.display: System memory: 3973.97 MB axelera: 1401.62 MB, vms = 13324.38 MB display queue size: 3
DEBUG :axelera.app.display: System memory: 3977.12 MB axelera: 1401.62 MB, vms = 13326.02 MB display queue size: 3
DEBUG :axelera.app.display: System memory: 3902.74 MB axelera: 1401.26 MB, vms = 13324.38 MB display queue size: 2
DEBUG :axelera.app.display: System memory: 3895.01 MB axelera: 1401.19 MB, vms = 13327.26 MB display queue size: 9
DEBUG :axelera.app.display: System memory: 3898.07 MB axelera: 1401.19 MB, vms = 13324.28 MB display queue size: 9
DEBUG :axelera.app.display: System memory: 3906.73 MB axelera: 1401.38 MB, vms = 13324.28 MB display queue size: 6
DEBUG :axelera.app.display: System memory: 3920.45 MB axelera: 1401.54 MB, vms = 13323.35 MB display queue size: 2
DEBUG :axelera.app.display: System memory: 3919.86 MB axelera: 1401.54 MB, vms = 13323.35 MB display queue size: 1
DEBUG :axelera.app.pipe.gst_helper: End of stream
DEBUG :axelera.app.display: System memory: 3974.38 MB axelera: 1378.70 MB, vms = 13244.77 MB display queue size: 2
DEBUG :axelera.app.display: System memory: 3974.38 MB axelera: 1378.70 MB, vms = 13244.77 MB display queue size: 2
DEBUG :axelera.app.display: System memory: 3960.20 MB axelera: 1379.14 MB, vms = 13244.67 MB display queue size: 3
DEBUG :axelera.app.display: System memory: 3960.83 MB axelera: 1379.14 MB, vms = 13244.67 MB display queue size: 4
DEBUG :axelera.app.pipe.gst_helper: End of stream
DEBUG :axelera.app.display: System memory: 3917.06 MB axelera: 1357.95 MB, vms = 13137.96 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3919.31 MB axelera: 1361.70 MB, vms = 13125.86 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3856.75 MB axelera: 1365.53 MB, vms = 13129.77 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3835.51 MB axelera: 1369.57 MB, vms = 13133.68 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3822.71 MB axelera: 1369.85 MB, vms = 13133.68 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3825.23 MB axelera: 1369.67 MB, vms = 13133.68 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3855.46 MB axelera: 1369.71 MB, vms = 13133.68 MB display queue size: 2
DEBUG :axelera.app.display: System memory: 3813.12 MB axelera: 1369.30 MB, vms = 13133.68 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3807.07 MB axelera: 1369.92 MB, vms = 13133.68 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3779.14 MB axelera: 1369.63 MB, vms = 13133.68 MB display queue size: 2
DEBUG :axelera.app.display: System memory: 3775.80 MB axelera: 1370.31 MB, vms = 13133.68 MB display queue size: 2
DEBUG :axelera.app.display: System memory: 3777.91 MB axelera: 1370.09 MB, vms = 13133.68 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3780.57 MB axelera: 1369.77 MB, vms = 13133.68 MB display queue size: 1
DEBUG :axelera.app.display: System memory: 3797.82 MB axelera: 1370.19 MB, vms = 13133.68 MB display queue size: 2
DEBUG :axelera.app.pipe.gst_helper: End of stream
DEBUG :axelera.app.meta.object_detection: Total number of detections: 65232
(venv) wgtech@wgtech-server:~/axelera/voyager-sdk$

It works fine, but it opens three separate windows. (A rough sketch of the single-window idea I have in mind is after the script below.)

#bind_three_streams.py
#!/usr/bin/env python3
"""
bind_three_streams.py
Deterministically bind three sources (A/B/C) to three networks (bottlecap / ppe / pose→scp),
each running as its own InferenceStream. This avoids inference.py’s auto-pairing.
"""

import argparse
import threading
from pathlib import Path
from axelera.app import config, inf_tracers, logging_utils
from axelera.app.display import App
from axelera.app.stream import create_inference_stream

# --- Defaults (edit these paths if yours differ) ---
SDK_ROOT = Path(__file__).resolve().parents[3]  # ~/axelera/voyager-sdk
MEDIA = SDK_ROOT / "media"

NET_BOTTLECAP = SDK_ROOT / "customers" / "bottlecap" / "yolov8n-bottlecap-pt.yaml"
NET_PPE = SDK_ROOT / "customers" / "ppe" / "yolov8n-ppe-pt.yaml"
NET_POSE_SCP = SDK_ROOT / "customers" / "scpm" / "pose_scp_cascade.yaml"

def run_stream(app, title, stream):
    wnd = app.create_window(title, (960, 540))
    for frame_result in stream:
        if frame_result.image:
            wnd.show(frame_result.image, frame_result.meta, frame_result.stream_id)

def main():
    parser = config.create_inference_argparser(
        default_network=str(NET_BOTTLECAP),
        description="Bind A/B/C sources to bottlecap, ppe, and pose→scp (3 streams).",
    )
    # Reuse standard flags from inference.py: --pipe, --devices, --display, etc.
    parser.add_argument("--A", default=str(MEDIA / "bottlecap.mp4"),
                        help="Source for bottlecap network")
    parser.add_argument("--B", default=str(MEDIA / "ppe.mp4"),
                        help="Source for PPE network")
    parser.add_argument("--C", default=str(MEDIA / "scp.mp4"),
                        help="Source for pose→scp cascade")

    # IMPORTANT: the SDK argparser insists on at least one *positional* source.
    # To satisfy it, inject three default positional sources BEFORE parsing.
    # (We don't actually use args.sources later; we use args.A/B/C explicitly.)
    import sys
    argv = sys.argv[1:]
    argv += [
        str(MEDIA / "bottlecap.mp4"),
        str(MEDIA / "ppe.mp4"),
        str(MEDIA / "scp.mp4"),
    ]
    args = parser.parse_args(argv)

    # Hardware + logging setup like inference.py
    hw_caps = config.HardwareCaps.from_parsed_args(args)
    log_level = logging_utils.get_config_from_args(args).console_level
    # IMPORTANT: one tracer set per stream; do not reuse the same tracer objects
    tracers_A = inf_tracers.create_tracers('core_temp', 'end_to_end_fps')
    tracers_B = inf_tracers.create_tracers('core_temp', 'end_to_end_fps')
    tracers_C = inf_tracers.create_tracers('core_temp', 'end_to_end_fps')



    # Build three independent streams, each with ONE deterministic source
    stream_A = create_inference_stream(
        network=str(NET_BOTTLECAP),
        sources=[args.A],             # ← A bound to bottlecap
        pipe_type=args.pipe,          # e.g., gst
        log_level=log_level,
        hardware_caps=hw_caps,
        tracers=tracers_A,
        specified_frame_rate=30,
        device_selector=args.devices,
    )

    stream_B = create_inference_stream(
        network=str(NET_PPE),
        sources=[args.B],             # ← B bound to ppe
        pipe_type=args.pipe,
        log_level=log_level,
        hardware_caps=hw_caps,
        tracers=tracers_B,
        specified_frame_rate=30,
        device_selector=args.devices,
    )

    stream_C = create_inference_stream(
        network=str(NET_POSE_SCP),
        sources=[args.C],             # ← C bound to pose→scp cascade
        pipe_type=args.pipe,
        log_level=log_level,
        hardware_caps=hw_caps,
        tracers=tracers_C,
        specified_frame_rate=30,
        device_selector=args.devices,
    )

    # Display loop (3 windows; you can change titles or combine if you prefer)
    with App(visible=args.display, opengl=stream_A.hardware_caps.opengl) as app:
        tA = threading.Thread(target=run_stream, args=(app, "Bottlecap (A)", stream_A), daemon=True)
        tB = threading.Thread(target=run_stream, args=(app, "PPE (B)", stream_B), daemon=True)
        tC = threading.Thread(target=run_stream, args=(app, "Pose→SCP (C)", stream_C), daemon=True)
        tA.start(); tB.start(); tC.start()
        app.run()

    # Graceful shutdown
    stream_A.stop(); stream_B.stop(); stream_C.stop()

if __name__ == "__main__":
    main()
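Roughly what I'm imagining for the single window, as a drop-in replacement for the display block above (untested sketch; I'm assuming a single App window can tile frames from the three independent streams when each is given its own stream id via show(), the way inference.py handles multi-stream pipelines, but I haven't confirmed that):

# Sketch only: share ONE window across the three streams, reusing stream_A/B/C,
# App and threading from the script above.
def run_stream_into(window, stream, forced_stream_id):
    for frame_result in stream:
        if frame_result.image:
            # Tag each stream's frames with a fixed, distinct id instead of
            # frame_result.stream_id (which is the same within each independent stream),
            # so the shared window can tell the three feeds apart.
            window.show(frame_result.image, frame_result.meta, forced_stream_id)

with App(visible=args.display, opengl=stream_A.hardware_caps.opengl) as app:
    wnd = app.create_window("All models (A/B/C)", (1280, 720))
    for i, s in enumerate((stream_A, stream_B, stream_C)):
        threading.Thread(target=run_stream_into, args=(wnd, s, i), daemon=True).start()
    app.run()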

 

