Is there any way to add RAM to this beauty 😁?
It would be perfect for my laptop or an Orange Pi (embedded).
PCIe 5.0?
LLMs don't run on the M.2? 😣
Reading the SDK… Interesting… 🤔
Best answer by jonask-ai
Hi Falcon9,
honestly, I can't quite follow you 😅
I also find the thought interesting that LLMs will, in a way, become an addition to existing operating systems, like what we see in science fiction. It would be great to have one in a smart speaker, for a start 🤓
Just to make sure I understand the technical part correctly:
You are already running inference with LLMs like gemma3:12b on your laptop (with an Intel CPU and NVIDIA GPU) via Ollama, but you are not happy with the performance, right?
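For reference, here is a minimal sketch of running that kind of inference from Python against a local Ollama server, assuming the `ollama` Python package is installed, the server is running, and `gemma3:12b` has already been pulled (the prompt is just an example):

```python
# Minimal sketch: chat with a locally served model via the ollama Python client.
# Assumes: pip install ollama, a running Ollama server, and `ollama pull gemma3:12b`.
import ollama

response = ollama.chat(
    model="gemma3:12b",  # the model mentioned above; any pulled model works
    messages=[{"role": "user", "content": "Summarise what an NPU is in one sentence."}],
)
print(response["message"]["content"])
```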
So in order to run language models with our current version of the SDK, you'd need a PCIe card and a system with a slot for it. The M.2 will not work: our Metis chip has memory associated with it, and in the M.2 form factor that memory is not sufficient to run the LLMs in our test environment.
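To make the memory argument concrete, here is a rough back-of-the-envelope sketch. The bytes-per-parameter and card-memory figures are assumptions for illustration only, not official Metis specifications; check the datasheet for real numbers:

```python
# Rough check: do a model's weights even fit in an accelerator's on-card memory?
# This ignores KV cache and activations, which only make things worse.

def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB (1e9 params * N bytes/param = N GB)."""
    return params_billion * bytes_per_param

# gemma3:12b at 4-bit quantization (~0.5 bytes/param), as an example
weights_gb = model_memory_gb(12, 0.5)
print(f"~{weights_gb:.1f} GB just for weights")  # ~6.0 GB

card_memory_gb = 1.0  # HYPOTHETICAL placeholder, not a Metis spec
print("fits on card" if weights_gb <= card_memory_gb else "does not fit on card")
```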