Hi everyone,
I'm integrating a single-object tracking model with the Voyager SDK and hitting a wall with multi-input quantization.
ONNX model opset: 17
Inputs:
- template: [1, 3, 112, 112]
- online_template: [1, 3, 112, 112]
- search: [1, 3, 224, 224]
I have calibration data ready as .bin files for each input.
The problem: compiler.quantize() doesn't accept multi-input calibration data.
Things I've tried:
- Yielding a dict: {'template': arr, 'online_template': arr, 'search': arr} → "Failed to get input shape from calibration dataset"
- Yielding a list: [template, online_template, search] → "'list' object has no attribute 'shape'"
- Yielding a tuple → same error
- transform_fn returning a list → same error
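For concreteness, here is roughly the calibration generator I'm passing in (input names match the ONNX graph; the synthetic arrays below stand in for my real .bin files, which I load with np.fromfile):

```python
import numpy as np

# Input shapes taken from the ONNX graph.
INPUT_SHAPES = {
    "template": (1, 3, 112, 112),
    "online_template": (1, 3, 112, 112),
    "search": (1, 3, 224, 224),
}

def calibration_generator(num_samples=8):
    """Yield one dict per calibration sample, mapping each input
    name to a float32 array of the expected shape.

    In my real setup each array comes from a .bin file, e.g.
        np.fromfile(path, dtype=np.float32).reshape(shape)
    Random data is used here only to keep the sketch self-contained.
    """
    rng = np.random.default_rng(0)
    for _ in range(num_samples):
        yield {
            name: rng.random(shape, dtype=np.float32)
            for name, shape in INPUT_SHAPES.items()
        }
```

This dict-yielding variant is the one that triggers the "Failed to get input shape from calibration dataset" error; the list/tuple variants above fail with the attribute error instead.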
I found the September 2025 forum post mentioning that multi-input quantization isn't supported, along with the concat + identity-conv workaround. However, my inputs have different spatial sizes (112x112 vs 224x224), which makes that workaround complex.
Two questions:
1. Has multi-input quantization been added in v1.5.3?
2. Is there a supported path to quantize a multi-input model with different spatial dimensions, either via compiler.quantize() or deploy.py?
Alternatively, would a pre-quantized ONNX model (quantized externally via onnxruntime) work with --mode=PREQUANTIZED?
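If PREQUANTIZED is viable, my plan would be onnxruntime's static quantization, which handles multi-input models via a CalibrationDataReader whose get_next() returns a dict keyed by input name. A minimal sketch of what I have in mind (the plain class below just mirrors the CalibrationDataReader interface so the snippet stands alone; model paths and sample counts are placeholders):

```python
import numpy as np
# In the real script:
# from onnxruntime.quantization import CalibrationDataReader, quantize_static

class MultiInputReader:  # would subclass onnxruntime's CalibrationDataReader
    """Feeds one {input_name: array} dict per get_next() call,
    then None to signal the end of the calibration set."""

    def __init__(self, samples):
        # samples: list of dicts, one per calibration example
        self._iter = iter(samples)

    def get_next(self):
        return next(self._iter, None)

# Synthetic calibration set standing in for the real .bin data.
rng = np.random.default_rng(0)
samples = [
    {
        "template": rng.random((1, 3, 112, 112), dtype=np.float32),
        "online_template": rng.random((1, 3, 112, 112), dtype=np.float32),
        "search": rng.random((1, 3, 224, 224), dtype=np.float32),
    }
    for _ in range(4)
]

reader = MultiInputReader(samples)
# Real invocation would then be something like:
# quantize_static("model.onnx", "model_int8.onnx", reader)
```

If that produces a valid QDQ model, does the Voyager compiler accept it as-is in PREQUANTIZED mode, or are there constraints on the quantization scheme it expects?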
Running Voyager SDK v1.5.3 on Ubuntu 24.04 ARM64.
Thanks
Sanket Shah
