
Hi there,

I've been truly impressed by this chip over the past couple of days—its object detection and LLM capabilities, combined with its energy efficiency, are remarkable. However, for my current project, I need to use Gemma3 and Whisper. From what I understand, your SDK requires using precompiled paths, which suggests that if a model isn’t officially supported by Axelera AI, we might not be able to run it directly.

Am I understanding this correctly, or is there an alternative way to use custom models like Gemma and Whisper within your ecosystem?

Thanks in advance!

Hi, 

Glad to hear you are impressed by our product!
It is possible to compile and deploy your own model if it is built with supported ONNX operators:
https://github.com/axelera-ai-hub/voyager-sdk/blob/e64458844b27644001eabe26384ac04ab46ed37b/docs/reference/onnx-opset14-support.md?plain=1#L7
(Also see here: https://github.com/axelera-ai-hub/voyager-sdk/blob/e64458844b27644001eabe26384ac04ab46ed37b/docs/tutorials/custom_model.md#custom-model-deployment-experimental)
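As a practical first step before attempting compilation, you can check which operators in your ONNX graph fall outside the supported opset. The sketch below is an illustrative, unofficial helper: the `SUPPORTED` set is a small hypothetical subset, not the SDK's actual list, so consult the linked onnx-opset14-support.md for the authoritative operators.

```python
# Sketch: screen a model's operator types against a supported-op list
# before trying to compile it with the Voyager SDK.
# NOTE: SUPPORTED below is an illustrative subset, NOT the official list --
# see the SDK's docs/reference/onnx-opset14-support.md for the real one.

SUPPORTED = {
    "Conv", "Relu", "MaxPool", "Add", "Gemm", "Flatten",
    "GlobalAveragePool", "Concat", "Softmax",
}

def unsupported_ops(op_types):
    """Return a sorted list of operator types missing from SUPPORTED.

    op_types can be gathered from a loaded model with the onnx package:
        import onnx
        model = onnx.load("model.onnx")
        op_types = [node.op_type for node in model.graph.node]
    """
    return sorted(set(op_types) - SUPPORTED)

# A transformer-style graph will typically contain ops outside a
# CNN-oriented supported set:
ops = ["Conv", "Relu", "MatMul", "LayerNormalization", "Softmax"]
print(unsupported_ops(ops))  # -> ['LayerNormalization', 'MatMul']
```

If this check comes back empty, the model is at least a candidate for the custom-model deployment flow described in the tutorial; a non-empty result tells you which layers would need replacing or offloading first.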

This means that we currently do not support Gemma and Whisper.

Regards


Thank you for your response, Jonas.


But it'd be awesome to hear if you have any success with it, or try out any experiments, @doctore

