
Did You Check Out Voyager SDK 1.4.0 Yet?

  • September 3, 2025
  • 5 replies
  • 133 views

Spanner
Axelera Team

All the release notes for Voyager updates go straight out on the GitHub repo, so I don’t tend to parrot them here, but this is a pretty big one and feels like it deserves a spot here on the community!

If you’ve been using 1.3 already, there are some nice changes that make life easier.

A few things that particularly stand out to me:

  • 🔹 Cascade pipelines feel smoother. Chaining models together (like running a classifier on detections) takes less fiddling around than before.
  • 🔹 More operator and metadata options. You can now shape pre/post-processing and inference results in ways that fit better with your own app.
  • 🔹 Model Zoo is easier to adapt. Swapping in your own weights on the reference pipelines is less hassle, so you can get something tuned to your dataset faster.
  • 🔹 Custom evaluators are handy if you want accuracy measured in a way that matches your own project rather than sticking with defaults.
  • 🔹 General quality-of-life polish in pipeline debugging, deployment consistency, benchmarking… all just smoother.
  • 🔹 You can never have too many new models. YOLO10[n,s,b], FastSAM, FaceNet, OSNet and Deep-OC-Sort can now be used within Voyager pipelines.

If you’ve been on 1.3, the upgrade feels like a nice step up rather than just “another version bump”.

Full notes are here for the deep dive: Voyager SDK v1.4.0 Release Notes

Curious if anyone else has tried it yet? Which of the new features are you putting to use first?

5 replies

  • Cadet
  • October 14, 2025

Is there any way to have attention layers running on metis m.2? I was trying to run omniparser V2 on metis, but compilation for base yolov8 model failed because it has 1 attention layer...


Spanner
Axelera Team
  • Author
  • October 15, 2025

Is there any way to have attention layers running on metis m.2? I was trying to run omniparser V2 on metis, but compilation for base yolov8 model failed because it has 1 attention layer...

Are you on the latest version of the SDK? I think attention layers were added in V1.3, so an earlier version may not work. Any modifications to the model that might be causing it?


  • Cadet
  • October 15, 2025

It is v1.4. Here is the error I get. It's quite big, and it basically complains about softmax and attention layers…
Could you help me with that, please? I used the yolov8 pipeline as a template for mine, and I took the model from here: https://github.com/microsoft/OmniParser




(venv) bohdan@bohdan-desktop:/media/bohdan/nvme256/robotDir/Metis_setup_rock_5b+/voyager-sdk$ ./inference.py yolo11m-fullhd-ui-detection /home/bohdan/Pictures/Screenshots/Screenshot\ from\ 2025-10-14\ 21-53-38.png
2025-10-14 21:54:33.507966052 [W:onnxruntime:Default, device_discovery.cc:164 DiscoverDevicesForPlatform] GPU device discovery failed: device_discovery.cc:89 ReadFileContents Failed to open file: "/sys/class/drm/card1/device/vendor"
arm_release_ver: g13p0-01eac0, rk_so_ver: 3
INFO : yolo11m-fullhd is being compiled for up to 2 cores (but can be run using 4 cores).
INFO : Deploying model yolo11m-fullhd for 2 cores. This may take a while... |████████████████████████████████████████| 51:14.0
INFO : ## Quantizing network yolo11m-fullhd-ui-detection /media/bohdan/nvme256/robotDir/Metis_setup_rock_5b+/voyager-sdk/customers/omniparser/yolo11m-fullhd.yaml yolo11m-fullhd
INFO : Compile model: yolo11m-fullhd
INFO : Using representative images from /home/bohdan/.cache/axelera/data/coco2017_repr400 with backend ImageReader.PIL, pipeline input color format ColorFormat.RGB
WARNING : Configuration of node '/model.10/m/m.0/attn/Reshape' may not be supported. 'Reshape' parameters:
WARNING : data: Metadata(shape=(1, 512, 34, 60), is_constant=False)
WARNING : shape: Metadata(shape=(4,), is_constant=True)
WARNING : reshaped: Metadata(shape=(1, 4, 128, 2040), is_constant=False)
WARNING : 'Reshape' parameters do not match supported configuration: np.array_equal(shape, data.shape)
WARNING : 'Reshape' parameters do not match supported configuration: len(data.shape) >= 2 and len(shape) == 2 and shape[0] == data.shape[0] and (shape[1] == data.shape[1] or shape[1] == -1)
WARNING : Configuration of node '/model.10/m/m.0/attn/Transpose' may not be supported. 'Transpose' parameters:
WARNING : perm: [0, 1, 3, 2]
WARNING : data: Metadata(shape=(1, 4, 32, 2040), is_constant=False)
WARNING : transposed: Metadata(shape=(1, 4, 2040, 32), is_constant=False)
WARNING : Unsatisfied constraint: perm == [0, 1, 2, 3]
WARNING : Configuration of node '/model.10/m/m.0/attn/Reshape_2' may not be supported. 'Reshape' parameters:
WARNING : data: Metadata(shape=(1, 4, 64, 2040), is_constant=False)
WARNING : shape: Metadata(shape=(4,), is_constant=True)
WARNING : reshaped: Metadata(shape=(1, 256, 34, 60), is_constant=False)
WARNING : 'Reshape' parameters do not match supported configuration: np.array_equal(shape, data.shape)
WARNING : 'Reshape' parameters do not match supported configuration: len(data.shape) >= 2 and len(shape) == 2 and shape[0] == data.shape[0] and (shape[1] == data.shape[1] or shape[1] == -1)
WARNING : Node '/model.10/m/m.0/attn/MatMul' implements operator 'MatMul', which may not be supported.
WARNING : Node '/model.10/m/m.0/attn/Softmax' implements operator 'Softmax', which may not be supported.
WARNING : Configuration of node '/model.10/m/m.0/attn/Transpose_1' may not be supported. 'Transpose' parameters:
WARNING : perm: [0, 1, 3, 2]
WARNING : data: Metadata(shape=(1, 4, 2040, 2040), is_constant=False)
WARNING : transposed: Metadata(shape=(1, 4, 2040, 2040), is_constant=False)
WARNING : Unsatisfied constraint: perm == [0, 1, 2, 3]
WARNING : Node '/model.10/m/m.0/attn/MatMul_1' implements operator 'MatMul', which may not be supported.
WARNING : Configuration of node '/model.10/m/m.0/attn/Reshape_1' may not be supported. 'Reshape' parameters:
WARNING : data: Metadata(shape=(1, 4, 64, 2040), is_constant=False)
WARNING : shape: Metadata(shape=(4,), is_constant=True)
WARNING : reshaped: Metadata(shape=(1, 256, 34, 60), is_constant=False)
WARNING : 'Reshape' parameters do not match supported configuration: np.array_equal(shape, data.shape)
WARNING : 'Reshape' parameters do not match supported configuration: len(data.shape) >= 2 and len(shape) == 2 and shape[0] == data.shape[0] and (shape[1] == data.shape[1] or shape[1] == -1)
WARNING : The operator compatibility warnings above suggest this model may not be supported by the Axelera Compiler.
ERROR : Traceback (most recent call last):
ERROR : File "/media/bohdan/nvme256/robotDir/Metis_setup_rock_5b+/voyager-sdk/axelera/app/compile.py", line 430, in compile
ERROR : the_manifest = top_level.compile(model, compilation_cfg, output_path)
ERROR : File "<frozen compiler.utils.error_report>", line 65, in wrapper
ERROR : File "<frozen compiler.top_level>", line 554, in compile
ERROR : File "<frozen compiler.utils.error_report>", line 65, in wrapper
ERROR : File "<frozen compiler.top_level>", line 252, in quantize
ERROR : File "<frozen qtools_tvm_interface.graph_exporter_v2.graph_exporter>", line 131, in __init__
ERROR : RuntimeError: External op model_dot_10_m_m_dot_0_attn_mul_to_model_dot_10_m_m_dot_0_attn_softmax_qre_to_model_dot_10_m_m_dot_0_attn_softmax_dem_to_model_dot_10_m_m_dot_0_attn_softmax_dre found in the model (<class 'qtoolsv2.rewriter.operators.fqelements.Dequantize'> op). QTools may have issues quantizing this model.
ERROR : External op model_dot_10_m_m_dot_0_attn_mul_to_model_dot_10_m_m_dot_0_attn_softmax_qre_to_model_dot_10_m_m_dot_0_attn_softmax_dem_to_model_dot_10_m_m_dot_0_attn_softmax_dre found in the model (<class 'qtoolsv2.rewriter.operators.fqelements.Dequantize'> op). QTools may have issues quantizing this model.
INFO : Quantizing yolo11m-fullhd-ui-detection: yolo11m-fullhd took 3051.114 seconds
ERROR : Failed to deploy network
INFO : ## Compiling network yolo11m-fullhd-ui-detection /media/bohdan/nvme256/robotDir/Metis_setup_rock_5b+/voyager-sdk/customers/omniparser/yolo11m-fullhd.yaml yolo11m-fullhd
INFO : Compile model: yolo11m-fullhd
INFO : Using representative images from /home/bohdan/.cache/axelera/data/coco2017_repr400 with backend ImageReader.PIL, pipeline input color format ColorFormat.RGB
INFO : Prequantizing yolo11m-fullhd-ui-detection: yolo11m-fullhd
Command failed: /media/bohdan/nvme256/robotDir/Metis_setup_rock_5b+/voyager-sdk/deploy.py --num-cal-images 200 --model yolo11m-fullhd --auto-vaapi --data-root /home/bohdan/.cache/axelera/data --pipe gst --build-root /media/bohdan/nvme256/robotDir/Metis_setup_rock_5b+/voyager-sdk/build yolo11m-fullhd-ui-detection --mode QUANTIZE --metis m2
Return code: 1
ERROR : Failed to prequantize yolo11m-fullhd-ui-detection: yolo11m-fullhd
INFO : Compiling yolo11m-fullhd-ui-detection: yolo11m-fullhd took 3068.812 seconds

Calibrating... -------------------- | 0% | ?it/s | 200it |
[… more calibration output …]
Calibrating... #################### | 100% | 14.64s/it | 200it |
ERROR : Failed to deploy model yolo11m-fullhd
ERROR : Command '/media/bohdan/nvme256/robotDir/Metis_setup_rock_5b+/voyager-sdk/deploy.py --model yolo11m-fullhd --num-cal-images 200 --aipu-cores 2 --data-root /home/bohdan/.cache/axelera/data --pipe gst --build-root /media/bohdan/nvme256/robotDir/Metis_setup_rock_5b+/voyager-sdk/build /media/bohdan/nvme256/robotDir/Metis_setup_rock_5b+/voyager-sdk/customers/omniparser/yolo11m-fullhd.yaml --aipu-cores 2 --metis m2 ' returned non-zero exit status 1.
(venv) bohdan@bohdan-desktop:/media/bohdan/nvme256/robotDir/Metis_setup_rock_5b+/voyager-sdk$
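A quick way to see up front which graph nodes will trip warnings like the ones above is to scan the exported ONNX model for the flagged operator types (Softmax, MatMul). This is my own sketch, not part of the SDK; the model path is a placeholder:

```python
FLAGGED_OPS = {"Softmax", "MatMul"}  # operator types the compiler warnings above flag


def find_flagged_nodes(nodes):
    """Return (name, op_type) pairs for nodes whose operator type may be
    unsupported. `nodes` is any iterable of objects with .name and .op_type
    attributes, e.g. model.graph.node from the standard `onnx` package."""
    return [(n.name, n.op_type) for n in nodes if n.op_type in FLAGGED_OPS]


# Usage with a real model (assumes `pip install onnx` and an ONNX export;
# the filename below is hypothetical):
#   import onnx
#   model = onnx.load("yolov8n.onnx")
#   for name, op in find_flagged_nodes(model.graph.node):
#       print(f"{name}: {op}")
```

If the scan reports nodes under an `attn` scope, the model contains the attention pattern discussed in this thread.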


  • Cadet
  • October 15, 2025

@Spanner Yes, I'm on SDK v1.4.0 (latest). Here are the details:

  • Model: OmniParser v2 YOLOv8n from microsoft/OmniParser-v2.0 (Hugging Face)
  • Direct .pt file: weights/icon_detect/model.pt
  • Architecture includes a C2fAttn module at layer 10
  • Error during QUANTCOMPILE: NotImplementedError: Only pinned zero (zero_point=0) is supported for inputs to MatMul ops.

Full error context:

  • MatMul operations in attention have a non-zero zero_point after quantization
  • Warnings show Reshape, Transpose (perm=[0, 1, 3, 2]), MatMul, and Softmax operations detected
  • Error occurs during the quantize() step in graph_exporter
  • Pipeline config: using the AxUltralyticsYOLO class with a .pt weight_path (as required)

Question: Does v1.4 attention support work with all C2fAttn variants, or only specific implementations? Is there a way to make my model work?
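For background on that NotImplementedError: symmetric quantization pins the zero_point at 0, while asymmetric quantization generally produces a non-zero offset, which is what the compiler rejects for MatMul inputs. A toy illustration (my own sketch, not Voyager/QTools code):

```python
def quantize(values, symmetric=True, bits=8):
    """Toy int8 quantizer: symmetric mode pins zero_point at 0 (the 'pinned
    zero' the compiler requires for MatMul inputs); asymmetric mode usually
    yields a non-zero zero_point."""
    qmax = 2 ** (bits - 1) - 1      # 127 for int8
    qmin = -(2 ** (bits - 1))       # -128 for int8
    lo, hi = min(values), max(values)
    if symmetric:
        scale = max(abs(lo), abs(hi)) / qmax
        zero_point = 0              # pinned zero
    else:
        scale = (hi - lo) / (qmax - qmin)
        zero_point = round(qmin - lo / scale)
    quantized = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return quantized, scale, zero_point


acts = [0.5, 2.0, -1.0, 3.0]        # e.g. a few attention activations
_, _, zp_sym = quantize(acts, symmetric=True)
_, _, zp_asym = quantize(acts, symmetric=False)
print(zp_sym, zp_asym)              # zero_point is 0 only in symmetric mode
```

So one possible angle is whether the quantizer is choosing an asymmetric scheme for those attention MatMul inputs in this particular model.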


Spanner
Axelera Team
  • Author
  • October 16, 2025

Hmm, v1.3 introduced attention layer support, so maybe it’s a model incompatibility? The YOLO11 models in the Model Zoo (yolo11n-coco-onnx, yolo11s-coco-onnx, etc.) have been validated to work with the SDK's attention support. Are you able to test those in your setup? If one of those runs successfully, the issue could be with OmniParser's attention implementation...