
Is there no way to add RAM to this beauty 😁?

It's perfect for my laptop or on an Orange Pi (embedded).

PCIe 5.0?

 

LLMs don't run on the M.2? 😣

Reading the SDK… Interesting… 🤔

Is there no way to add RAM to this beauty 😁?

Hi there @Falcon9! Not to the Metis cards, though if you mean the hosts, some of them can certainly take more RAM.

 

It's perfect for my laptop or on an Orange Pi (embedded).

PCIe 5.0?

Yeah, there's been some great (and very promising) experimentation on Orange Pi! This, for instance. And Metis is a PCIe 3 card, but it's forward compatible (at PCIe 3 speeds, I'd assume).

 

LLMs don't run on the M.2? 😣

Reading the SDK… Interesting… 🤔

Not really enough RAM on the M.2 to run an LLM, at least the way everything currently works. Check out the PCIe card if LLMs are what you need, though.


Hello Spanner, thanks for all the replies...

Ollama says:

Model: gemma3:12b

Microsoft Windows [Version 10.0.26100.4652]
(c) Microsoft Corporation. All rights reserved.

C:\Users\LucaF>ollama ps
NAME          ID              SIZE     PROCESSOR          UNTIL
gemma3:12b    f4031aab637d    11 GB    55%/45% CPU/GPU    4 minutes from now

Does that mean the CUDA drivers are being used?
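A GPU share in that column normally means Ollama is using its CUDA backend. One quick way to confirm, assuming the standard NVIDIA driver tools are installed, is to run nvidia-smi while the model is loaded; if the CUDA driver is in use, the ollama process should appear in its process list along with the VRAM it occupies:

C:\Users\LucaF>nvidia-smi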

 

CPU:

11th Gen Intel Core i9-11900H, 2.5 GHz (8 cores)

RAM:

32 GB DDR4 2666 MHz

VIDEO CARD:

NVIDIA RTX 3060 6 GB (laptop series)

 

So it seems that Ollama can split the work between the CPU and GPU. In this configuration the model tends to be slow (I mean inference, right?)… The model is 12 billion parameters…
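For a rough sense of why it's slow: gemma3:12b needs ~11 GB (per the ollama ps output above), only part of which fits in 6 GB of VRAM, so Ollama runs the rest on the CPU, and the CPU layers dominate the runtime. A back-of-the-envelope sketch in Python (the 1 GB VRAM reserve is an assumption for KV cache, CUDA context, and display, not a figure from Ollama):

# Rough sketch: why an ~11 GB model spills onto the CPU with 6 GB of VRAM.
model_size_gb = 11.0     # size reported by `ollama ps` for gemma3:12b
vram_gb = 6.0            # the laptop GPU's VRAM
vram_reserve_gb = 1.0    # assumed headroom (KV cache, CUDA context, display)

usable_vram_gb = vram_gb - vram_reserve_gb
gpu_fraction = min(1.0, usable_vram_gb / model_size_gb)

print(f"GPU share: {gpu_fraction:.0%}, CPU share: {1 - gpu_fraction:.0%}")
# -> GPU share: 45%, CPU share: 55%, matching the `ollama ps` line above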

What do you think ?

Seriously, my laptop needs a new brain. A way to communicate with me…

Yes, the cloud and other machines do their job… (embedded)

Why not? Like HAL 9000, maybe? Or better?

A new window (OS) for the future :)

 

I still believe that PCIe 3 is not a problem…

PCIe 3.0, also known as PCIe Gen 3, has a transfer rate of 8 GT/s (gigatransfers per second) per lane. This translates to roughly 1 GB/s (gigabyte per second) of effective data transfer per lane. A standard PCIe 3.0 x4 slot, commonly found on motherboards, offers a total of 4 lanes, resulting in a maximum theoretical bandwidth of 4 GB/s.
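Those figures check out: the ~1 GB/s per lane is the 8 GT/s raw rate minus PCIe 3.0's 128b/130b encoding overhead. A quick check in Python (the x4 link width is taken from the paragraph above):

# PCIe 3.0 effective bandwidth from first principles
raw_gt_s = 8.0           # PCIe 3.0 signalling rate per lane, in GT/s
encoding = 128 / 130     # 128b/130b line-encoding efficiency
lanes = 4                # x4 slot, as described above

per_lane_gb_s = raw_gt_s * encoding / 8   # 8 bits per byte
total_gb_s = per_lane_gb_s * lanes
print(f"{per_lane_gb_s:.2f} GB/s per lane, {total_gb_s:.2f} GB/s for x4")
# -> 0.98 GB/s per lane, 3.94 GB/s for x4

Whether ~4 GB/s is enough depends on how much data actually crosses the bus during inference, but it supports the point that PCIe 3 itself is rarely the limiting factor here.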

Anyway, my Orange Pi has yet to arrive… So, party? :)


Ciao Falcon9,

honestly I can't quite follow you 😅

I also find the thought interesting that LLMs will, in a way, become an addition to existing operating systems, like what we see in science fiction. It would be great to have one in a smart speaker, for example, as a start 🤓

Just to make sure I'm understanding the technical part right:
You are already running inference with LLMs like gemma3:12b on your laptop (with an Intel CPU and NVIDIA GPU) via Ollama, but you are not happy with the performance, right?

So, in order to run language models with the current version of our SDK, you'd need a PCIe card and a system with a slot for it. The M.2 will not work, as our Metis chip has memory associated with it, and in the M.2's case that memory is not sufficient to run the LLMs in our test environment.


Hello Jonask-ai

Yes, the actual M.2 Metis (NOT THE CHIP ITSELF) has only 1 GB of RAM. OK.

That means this product is suitable for AI machine vision.

Embedded… in…

Orange Pi 5 Plus 16GB LPDDR4/4x Rockchip RK3588 8-Core 64-Bit Single Board Computer with eMMC socket. (ARRIVED TODAY 🙂)

Now I'm able to add computer vision to my smart home (with a Linux-based router, access point, custom home management server, MQTT, Zigbee). At little cost…

And Alexa… Now I want to do that… locally...

MentorPi Open Source Robot Car: ROS2 & Raspberry Pi 5

Hiwonder xArm AI Programmable Desktop Robot Arm with AI Vision & Voice Interaction

This isn't meant to be a criticism, just a consideration… What is the Metis chip capable of…?

So many papers and very interesting YouTube videos about AI… Have you seen the new Hugging Face hand?

Anyway… Just a consideration...

