Hello, I’m new to the Voyager SDK and everything related to Axelera, and I need help deploying a custom ONNX model that has 2 inputs and 20+ outputs. All I want for now is to run an inference with my own dataset/image and save the raw outputs of the model to disk. The input is a pair of 2 files: I have data in YUV format, saved in input_y.npy and input_uv.npy. Do you have any idea how I can do this, or whether it’s possible with Voyager 1.3? Or do I need to add a split layer so the model has just 1 input before I can deploy it? Or should I upgrade to Voyager 1.4?

 

Hi there @Pepe!

Let’s see… so, firstly, Voyager can handle models with multiple inputs and many outputs. You shouldn’t need to merge your two inputs into one: inference.py supports multiple input files, so that should cover you in that respect. 👍
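
If you just want to sanity-check the raw outputs on the host first, you can also run the ONNX directly with onnxruntime (plain host-side inference, not the Voyager runtime). A minimal sketch, assuming model.onnx is your file and its two declared inputs line up with your two .npy arrays in order:

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    feeds = {
        sess.get_inputs()[0].name: np.load("input_y.npy"),
        sess.get_inputs()[1].name: np.load("input_uv.npy"),
    }
    outputs = sess.run(None, feeds)  # None -> fetch all 20+ outputs
    for meta, arr in zip(sess.get_outputs(), outputs):
        # Sanitize meta.name first if your output names contain '/' or ':'.
        np.save(f"out_{meta.name}.npy", arr)

Each output lands in its own .npy file, which you can later diff against what the board produces.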

And either way, it’ll serve you well to update to Voyager 1.4. It improves YAML flexibility and custom model support, and it makes multi-I/O pipelines much easier to manage, so it’d make sense to do that as a first step.


Do you know where the documentation for writing YAML files is? Like, how should I write the input shapes, and what arguments exist?


Hi mate!

The best place for this info is the Voyager SDK docs on GitHub. They cover the available operators, the inputs/outputs, and the configuration options you can use in your YAML pipelines. 👍


Hi @Pepe, thanks for your question. I’d also suggest having a look at our tutorials, e.g. https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.4/docs/tutorials/application.md

Hope this helps. 


Hello, I haven’t started using the application yet and haven’t looked too much into it. But correct me if I’m wrong: for the application to run, shouldn’t I use a compiled version of my model? That link is about the application at inference time, and the other links I found are about inference as well. My big questions are: what should I write in the .yaml file, and how do I specify the second input?
 

    input_tensor_layout: NCHW
    input_tensor_shape: [1, 6, 512, 512]
    input_color_format: RGB


Hi @Pepe,

Thanks for the clarification. I see that it’s not about multiple input streams, but rather two distinct inputs to a single model. At the moment, the SDK doesn’t support this directly, though work is underway to add it.

In the meantime, there’s a workaround you already hinted at: concatenate the two inputs into a single tensor before the model, then split them back apart inside the graph. This requires modifying the ONNX. Also, for the compilation to pass, we suggest inserting an identity convolution between the concatenated input and the split operation.
 
Here’s a sketch of the idea; the tensor names, shapes, and opset below are placeholders to adapt to your model:
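
    import numpy as np
    import onnx
    from onnx import helper, numpy_helper, TensorProto

    C = 6  # total channels after concatenating the two 3-channel inputs

    # Identity 1x1 convolution: W[i, i, 0, 0] = 1 passes every channel through
    # unchanged, giving the compiler a real operator between the input and Split.
    w = np.zeros((C, C, 1, 1), dtype=np.float32)
    for i in range(C):
        w[i, i, 0, 0] = 1.0

    inp = helper.make_tensor_value_info("input_concat", TensorProto.FLOAT, [1, C, 512, 512])
    out_y = helper.make_tensor_value_info("branch_y", TensorProto.FLOAT, [1, 3, 512, 512])
    out_uv = helper.make_tensor_value_info("branch_uv", TensorProto.FLOAT, [1, 3, 512, 512])

    identity_conv = helper.make_node(
        "Conv", ["input_concat", "id_conv_w"], ["id_conv_out"], kernel_shape=[1, 1]
    )
    # An equal split along the channel axis recovers the two original branches.
    split = helper.make_node("Split", ["id_conv_out"], ["branch_y", "branch_uv"], axis=1)

    graph = helper.make_graph(
        [identity_conv, split], "concat_id_split_stub", [inp], [out_y, out_uv],
        initializer=[numpy_helper.from_array(w, name="id_conv_w")],
    )
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 12)])
    onnx.checker.check_model(model)
    onnx.save(model, "concat_id_split_stub.onnx")

From there you can graft the stub onto your real model, for instance with onnx.compose.merge_models, mapping branch_y and branch_uv onto your model’s two original inputs via io_map.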


Hi @Pepe,

Thanks so much for your message! 🌟 Could you please share the ONNX file with me? That way I can help you run it smoothly with our SDK and get everything up and running 🚀✨

Cheers,
Jaydeep


Hi @Pepe,

Another suggestion: you could concat the inputs, then use a grouped convolution, then split the output. Depending on your input settings this may work, and it could be a faster solution.

I am assuming here that your goal is to perform separate convolutions on each input with similar dimensions, which are later combined in some way; there’s a sketch of that pattern below. If the goal is different, please let us know. Having more graph information would help.
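
A minimal sketch of that grouped-convolution pattern, with placeholder names and shapes (conv_w / conv_b would come from your trained weights):

    from onnx import helper

    # With group=2 on the concatenated [1, 6, H, W] tensor, a single Conv acts as
    # two independent 3-channel convolutions: for 64 output channels, outputs
    # 0..31 see only input channels 0..2 and outputs 32..63 see only channels
    # 3..5. Weight shape is (out_ch, in_ch / group, kH, kW) = (64, 3, 3, 3).
    grouped_conv = helper.make_node(
        "Conv",
        inputs=["input_concat", "conv_w", "conv_b"],
        outputs=["features"],
        group=2,
        kernel_shape=[3, 3],
        pads=[1, 1, 1, 1],
    )
    # Split along the channel axis to hand each branch its own feature map.
    split = helper.make_node(
        "Split", inputs=["features"], outputs=["feat_y", "feat_uv"], axis=1
    )

The appeal over the identity-convolution version is that the inserted operator does real work (the first convolution of each branch) instead of a pass-through.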

Thanks,

Bram 


Hello, unfortunately I can’t share the model without an NDA from my company, but I can share this image to show what the 2 inputs of my model look like

[image: the model’s two input tensors]


Hello again. What exactly the inputs mean differs from project to project. In the image I presented, the inputs are a single image, just not in RGB format; but they could also be multiple images (different angles), or even inputs from other types of sensors, not necessarily optical, that return matrix-like data, or the outputs of other models. Again, through this channel I can’t give too much info, but through other (more private and secure) channels of communication I may be able to send even the ONNX file.


And this is what I already did with the inputs: I concatenated them. I wanted to avoid this extra step because it takes extra time on the board and results in lower FPS.