Hi,
I’m trying to deploy a custom YOLO ONNX model on the Metis compute board using the Voyager SDK.
The model fails during deployment because (at least that's how I understand the error^^) it expects a fixed input shape of [8, 3, 128, 256], while the Voyager pipeline always feeds a batch size of 1:
Got: 1 Expected: 8
My question is simple:
👉 Is it possible to modify or extend the Voyager SDK (operators, pipeline, custom code, etc.) so that I can create a batch of 8 images before inference?
Or is Voyager strictly unable to handle ONNX models with a fixed batch > 1, meaning the model must be re-exported with batch=1 or a dynamic batch dimension?
Any guidance or examples would be appreciated.
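In case it helps clarify what I mean by "create a batch of 8 images before inference", here's a rough host-side sketch in plain NumPy (this is independent of the Voyager API — how to hook something like this into the pipeline is exactly my question):

```python
import numpy as np

def assemble_batch(frames, batch_size=8, chw_shape=(3, 128, 256)):
    """Stack up to `batch_size` CHW frames into one fixed-size batch,
    zero-padding the remaining slots when fewer frames are available.

    Returns the batch and the number of valid (non-padded) slots, so the
    padded outputs can be discarded after inference.
    """
    batch = np.zeros((batch_size, *chw_shape), dtype=np.float32)
    n = min(len(frames), batch_size)
    for i in range(n):
        batch[i] = frames[i]
    return batch, n

# Example: 5 frames available, batch of 8 expected by the model
frames = [np.ones((3, 128, 256), dtype=np.float32) for _ in range(5)]
batch, valid = assemble_batch(frames)
print(batch.shape)  # (8, 3, 128, 256)
```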
Thanks!

