Great question! And it’d be amazing to see someone create a project like this.
It’s possible to do general-purpose matrix multiplication on Metis, yep, but it needs to be framed as part of a neural network forward pass and deployed as an ONNX model using the Gemm operator. It’s not a CUDA-style setup where you can run arbitrary math on device memory directly—everything goes through the model deployment pipeline.
There’s a bit more info in the Gemm section of this doc:
👉 https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.2.5/docs/reference/onnx-opset14-support.md#gemm
Did you have a specific project or use case in mind, @DominikD?