Question

Import custom models

  • June 17, 2025
  • 3 replies
  • 159 views

Hi there,

I've been truly impressed by this chip over the past couple of days—its object detection and LLM capabilities, combined with its energy efficiency, are remarkable. However, for my current project, I need to use Gemma3 and Whisper. From what I understand, your SDK requires using precompiled paths, which suggests that if a model isn’t officially supported by Axelera AI, we might not be able to run it directly.

Am I understanding this correctly, or is there an alternative way to use custom models like Gemma and Whisper within your ecosystem?

Thanks in advance!

3 replies

  • Axelera Team
  • June 18, 2025

Hi, 

Glad to hear you are impressed by our product!
It is possible to compile and deploy your own model if it is built with supported ONNX operators.
https://github.com/axelera-ai-hub/voyager-sdk/blob/e64458844b27644001eabe26384ac04ab46ed37b/docs/reference/onnx-opset14-support.md?plain=1#L7
(Also see here: https://github.com/axelera-ai-hub/voyager-sdk/blob/e64458844b27644001eabe26384ac04ab46ed37b/docs/tutorials/custom_model.md#custom-model-deployment-experimental)
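
As a rough sketch of what that operator check could look like, assuming the model has already been exported to ONNX at opset 14 (e.g. via `torch.onnx.export(..., opset_version=14)`). The `SUPPORTED_OPS` set below is a small illustrative subset, not the SDK's actual support table; see the linked `onnx-opset14-support.md` for the authoritative list:

```python
# Sketch: verify every operator in an exported ONNX graph appears in a
# supported-operator set before attempting compilation with the SDK.
# SUPPORTED_OPS is an illustrative subset, NOT the real Voyager table.
SUPPORTED_OPS = {"Conv", "Relu", "MaxPool", "Add", "Gemm", "Flatten"}

def unsupported_ops(graph_op_types):
    """Return the set of operator types not covered by SUPPORTED_OPS."""
    return set(graph_op_types) - SUPPORTED_OPS

# graph_op_types would normally come from the ONNX file, e.g.:
#   import onnx
#   model = onnx.load("model.onnx")
#   graph_op_types = [node.op_type for node in model.graph.node]
# Here we hard-code a toy CNN graph for illustration:
toy_graph = ["Conv", "Relu", "MaxPool", "Flatten", "Gemm", "Softmax"]

missing = unsupported_ops(toy_graph)
if missing:
    print(f"Unsupported operators: {sorted(missing)}")  # -> ['Softmax']
else:
    print("All operators supported")
```

If the check comes back empty, the model is at least a candidate for the custom-model deployment flow described in the tutorial above.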

This means we do not currently support Gemma and Whisper.

Regards


  • Author
  • Cadet
  • June 18, 2025

Thank you for your response, Jonas.


Spanner
  • Axelera Team
  • June 20, 2025

But it’d be awesome to hear if you have any success with it, or try out any experiments, @doctore