Question

Which models are you using with Metis? Or want to use?

  • June 25, 2025
  • 7 replies
  • 105 views

Spanner
Axelera Team

I’m interested in hearing from everyone here about which models you’re successfully using with your Metis device and Voyager SDK, and which ones you’d like to see being officially supported.

This could be really useful info for prioritising which models to add to the model zoo.

7 replies

  • Ensign
  • December 20, 2025
r1-1776:70B
codellama
codegemma

want to use ...


  • Cadet
  • January 8, 2026

Would it be possible to run larger LLMs, such as Qwen/Qwen3-30B-A3B-Instruct-2507, Qwen/Qwen3-VL-32B-Instruct, or mistralai/Ministral-3-14B-Instruct-2512, on the larger PCIe boards (with 4 quad-core Metis AIPUs)? The memory and raw compute performance seem to be on par with regular consumer GPUs, so I suppose they should run. Am I missing something?
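
For reference, here's my rough back-of-envelope for the weight memory these models would need (just arithmetic on parameter count and quantization, not an official Axelera figure; KV cache and runtime overhead come on top):

```python
# Back-of-envelope weight memory for the models mentioned above (my assumption:
# footprint ~= parameter count * bytes per weight; ignores KV cache and overhead).
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8  # billions of params * bytes/weight = GB

for params in (30, 32):  # e.g. Qwen3-30B-A3B, Qwen3-VL-32B
    for bits in (16, 8, 4):
        print(f"{params}B at {bits}-bit: ~{weight_memory_gb(params, bits):.0f} GB")
# 30B: ~60 / ~30 / ~15 GB; 32B: ~64 / ~32 / ~16 GB
```

By that math, a 30B-class model fits a 64 GB board at 8-bit or below, assuming the toolchain can compile it.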


  • Ensign
  • January 8, 2026

@akreal Which consumer GPU comes in the same form factor and has 64GB of VRAM? Is that the part you're missing?


  • Cadet
  • January 8, 2026

Sorry, I meant that the specs are at least on par with the GPUs that could run those models.

Is it correct that the PCIe board with 4 quad-core Metis AIPUs can run an LLM of that size (e.g. 30B parameters)?


  • Ensign
  • January 8, 2026

At the moment I can run a 70B model on a Minisforum AI X1 Pro with 128GB of RAM, and that model would fit on the 64GB Metis card if it were compiled for the Metis card.

 

In other words: the 16GB Metis card is for video applications, and the 64GB Metis card is for SLMs/LLMs.


  • Cadet
  • January 8, 2026

Perfect, thank you for your answer!


  • Ensign
  • January 8, 2026

About 56GB of VRAM or RAM is needed for the 70B model.

The parameter count alone doesn't tell you how big the model is in memory.
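
To unpack that: the footprint is parameter count times bits per weight, so the quoted 56GB implies a quantization of roughly 6.4 bits per weight (my own arithmetic, not a vendor figure):

```python
# The 56 GB figure implies ~6.4 bits per weight: params (B) * bits / 8 = GB.
params_b, quoted_gb = 70, 56
bits_per_weight = quoted_gb * 8 / params_b
print(f"{quoted_gb} GB for a {params_b}B model ~ {bits_per_weight:.1f} bits/weight")
# At FP16 the same model would need 70 * 16 / 8 = 140 GB; at INT4, only 35 GB.
```

So two models with the same parameter count can differ several-fold in memory, depending on quantization.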