🎉 The Pioneer 10 Have Been Selected! 🚀
Get involved in the Axelera community. Ask questions, get inspired, share projects and engage with the AI world.
Share ideas and discuss Axelera’s products and innovations.
The support hub for all things Axelera. Products, SDK, customer help and more.
Talk about innovations and advancements in the AI world.
Advancing language models and natural language understanding.
I’m putting an idea out to see if anyone has some data to back a theory.

I’ve been talking with @Spanner about some problems I was having with a card, and, after looking over everything I’ve done, I’m wondering if a 75W slot is *required* rather than recommended, which would mean you need to use a PCIe x16 slot.

One issue I had with the card was a small V-shaped cut-out in the full-height PCIe back-plate, which meant that the card wouldn’t sit fully into any PCIe x16 slot that I had (photo attached so you can see what I mean). This meant I had to use an x8 or x4 slot.

Looking through the PCIe spec, only x16 slots are supplied with 75W, meaning that the x8 and x4 slots I tried were limited to 25W (which I confirmed), which I’m thinking could be the cause of the problems I saw.

I have a second-hand machine turning up next week which I can test this theory on more easily (eBay is still useful for some things), but I’m wondering whether the PCIe card should be sized for an x16 slot in order to ens…
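For anyone who wants to check this on their own machine: the slot's advertised power limit can usually be read with plain lspci. This is a generic Linux sketch, nothing Axelera-specific, and the `01:00.0` address is only an example taken from other posts here and will differ per system:

```
# Find the card's PCI address
lspci | grep -i accelerators

# Print each device header plus any advertised slot power limit
# (per the spec reading above: x16 slots are specified for 75W, smaller slots for 25W)
sudo lspci -vv | grep -E '^[^[:space:]]|PowerLimit'

# Negotiated link width/speed for the card itself (replace the address)
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
```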
Hello.

I recently received the M.2 Metis accelerators and did a test on the boards provided by Axelera, getting results close to the published benchmarks. That was successful.

I then connected these accelerators to the Orin (via the M.2 slot) in order to also use them in my Jetson AGX Orin environment. But during the voyager-sdk installation I got a “WARNING: Failed to refresh pcie and firmware” error. The installation completes, but the accelerator device does not show up on my system. With the lspci -tv command I get the output attached to this post; I can’t see Metis there.

I didn't see any information about the compatibility of these accelerators with Jetson systems (my M.2 slot is M-key). If they are compatible, can you help me fix the installation?

Thanks :)
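In case it helps narrow things down, a couple of generic checks (plain Linux commands, nothing Jetson- or Axelera-specific; the grep patterns are based on how the device shows up on other systems in this forum):

```
# Does the kernel enumerate the Metis on the PCIe bus at all?
lspci | grep -i -E 'accelerat|axelera'

# If it does enumerate, is any kernel driver bound to it?
lspci -k | grep -i -A3 'accelerat'

# Any PCIe link or probe errors in the kernel log?
sudo dmesg | grep -i -E 'pcie|metis'
```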
Hello everyone, I am trying to use the Metis PCIe (4GB) in combination with the Firefly ITX-3588J.

Issue: I cannot load the Metis kernel module (driver) into the Linux kernel.

Setup: the Metis PCIe with 4GB of RAM and the Firefly ITX-3588J with 16GB of RAM (Rockchip RK3588). OS: Ubuntu 22.04.4 LTS. I have a Gen3 PCIe link with 4 lanes, details below:

```
LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <4us, L1 <16us
        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
```

So far: I installed the Ubuntu OS as described here: https://support.axelera.ai/hc/en-us/articles/25556437653138-System-Imaging-Guide-Firefly-RK3588 and I installed the latest 1.3.3 Voyager SDK release. I can see the Metis board from lspci:

```
firefly@firefly:~$ lspci | grep accelerators
01:00.0 Processing accelerators: Axelera AI Metis AIPU (rev 02)
```

However, I do not detect any drivers for the board, even after installing the SDK. Neither of the following commands returns anything: …
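For reference, these are the generic commands I would use to confirm whether the kernel module was ever built and loaded; the `metis` module/package name here is taken from the dkms output quoted elsewhere on this forum, so treat it as an assumption for your SDK version:

```
# Is a DKMS module registered and built for the running kernel?
dkms status

# Is the driver actually loaded?
lsmod | grep -i metis

# Any probe messages from the driver?
sudo dmesg | grep -i metis
```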
Hi, I’m trying to deploy and run inference with a YOLOv8 segmentation model. I tried to follow the tutorial, but I just get:

INFO: deploying model yolov8sseg-coco for 4 cores. This may take a while…

How long should this run? I left it on for 3.5 hours and nothing happened (except the loading bar). This seems to be a model included in your library: yolov8sseg-coco.

Any idea if I can turn on more extensive logging to see what is happening? Thank you in advance!
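I can't speak to the SDK's own logging switches, but a generic way to tell a long compile apart from a genuine hang is to check whether the deploy process is still burning CPU and writing files (standard Linux tooling only, nothing Voyager-specific):

```
# Is the deploy/compile process still actively using CPU?
top -c

# Has anything in the working directory been written recently?
find . -type f -newermt '10 minutes ago' | head
```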
Hi,

My goal is to try out a few tutorials with the latest SDK release. It looks like a Python package is not found; any clues what I could try?

System: Axelera M.2 card with Aetina eval system
Voyager SDK v1.3.1 (a fresh install of the SDK this time)

First I installed the latest driver:
https://software.axelera.ai/artifactory/axelera-apt-source/metis-dkms/metis-dkms_1.0.2_all.deb

Then I ran the installer with:

```
./install.sh --all --media --user <email addr> --token <token>
```

I got a few suspicious warnings and hit ‘y’ a number of times. Then the install of 186 packages completed:

```
[185/186] Install gstreamer1.0-rtsp
building operators
refreshing pcie and firmware
0000:01:00.0 : Device
Device 0: metis-0:1:0 1GiB m2 flver=1.2.0-rc2 bcver=1.0 clock=800MHz(0-3:800MHz) mvm=0-3:100%
Installation complete, but with unresolved issues (see above)
```

```
firefly@aetina:~/Documents/g2-testing/voyager-sdk$ source venv/bin/activate
```

I proceeded to try inference.py with one of the included videos as input:

(venv) firefly@aet…
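Since the post cuts off before the actual Python error, here is only a generic sanity check that the tutorial is really running inside the SDK's venv rather than the system Python (nothing SDK-specific; no package names are assumed):

```
# Confirm the interpreter in use is the venv's, not /usr/bin/python3
which python3
python3 -c "import sys; print(sys.prefix)"

# List what the venv actually has installed and check for broken dependencies
pip list | head -50
pip check
```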
Hi,

I was wondering if there exists an Astral uv (pip alternative) build for the voyager-sdk? This would speed up setup of the SDK and allow easier merging with our development environment.

Thanks in advance!
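I'm not aware of an official uv build, but uv can usually stand in for pip inside the SDK's virtualenv. A minimal sketch, assuming the Python dependencies are captured in a requirements file; the `requirements.txt` name is hypothetical and the real file in the repo may differ:

```
# Create and activate the venv with uv instead of python -m venv
uv venv venv
source venv/bin/activate

# Resolve and install the SDK's Python dependencies via uv's pip-compatible front end
uv pip install -r requirements.txt
```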
I just built a quick demo showing the Llama 3.2 3B chatbot running on our Metis® platform, totally offline. This model packs 3 billion parameters and runs smoothly on a standard Lenovo P360 with our PCIe card, and even on an Arduino-based dev board (Portenta X8).

We hit 6+ tokens/sec with a single core, which means real-time chat. Perfect for smart customer support bots, digital concierge systems, or really any edge AI assistant application, all running fully on-device. No cloud needed.

Check out the video and let me know what you think. Any projects you can think of where you could use a self-contained, power-efficient, offline AI chatbot like this?
Hi,

I'm having trouble installing the metis-dkms driver. Can you help me?

The device is a NanoPC-T6 from Friendly Elec, and the operating system is Ubuntu 22.04, also provided by Friendly Elec. I'm leaving the error message and OS information below. If there is any more information I should provide, please let me know. Thank you.

The error:

```
$ sudo dpkg -i metis-dkms_0.07.16_all.deb
(Reading database ... 155441 files and directories currently installed.)
Preparing to unpack metis-dkms_0.07.16_all.deb ...
Deleting module metis-0.07.16 completely from the DKMS tree.
Unpacking metis-dkms (0.07.16) over (0.07.16) ...
Setting up metis-dkms (0.07.16) ...
Loading new metis-0.07.16 DKMS files...
Building for 6.1.99
Building for architecture aarch64
Building initial module for 6.1.99
ERROR (dkms apport): kernel package linux-headers-6.1.99 is not supported
Error! Bad return status for module build on kernel: 6.1.99 (aarch64)
Consult /var/lib/dkms/metis/0.07.16/build/make.log for more information.
dpkg: error pro…
```
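For what it's worth, the `dkms apport` line is often just a side effect; the real failure usually shows up in the make log. A couple of generic checks (standard dkms/apt commands, not an official Axelera procedure; on a vendor kernel like this one, the matching headers may only ship with the vendor's own image or apt source):

```
# Are kernel headers for the running kernel actually present?
ls /lib/modules/$(uname -r)/build

# If not, try installing the matching headers package
sudo apt install linux-headers-$(uname -r)

# The actual compile error is recorded here (path taken from the output above)
tail -n 50 /var/lib/dkms/metis/0.07.16/build/make.log
```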
Hi Team,

I’m working with the Voyager SDK (using the Docker container) and trying to build the axstreamer and gstaxstreamer components. The example source files reference:

```
#include <axruntime/axruntime.hpp>   // examples/axruntime/axruntime_example.cpp
```

But I can’t find axruntime.hpp anywhere in the SDK.

I’m planning to run inference using a webcam as the input on my Mac, with a pipeline that includes ffmpeg and mediamtx for streaming.

Is axruntime supposed to be part of the SDK repo, a separate dependency, or is there a script I missed?

Thanks!
Val.
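In case it helps while waiting for an answer: a generic search to confirm whether the header ships anywhere in the repo or the container image (plain find/grep, with no assumption about the SDK's actual layout):

```
# Search the SDK checkout and common install prefixes for the header
find . /usr/include /usr/local/include -name 'axruntime*.h*' 2>/dev/null

# See which sources expect it, to infer the intended include path
grep -rn 'axruntime.hpp' --include='*.cpp' --include='*.hpp' .
```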
Welcome to the Axelera AI Community – we’re so glad you’re here.

This is a space for everyone from hardware engineers and AI devs to makers, enthusiasts, onlookers and partners, so don’t be shy. Whether you’re working with AI every day or just getting started, we’d love to know more about you.

To get things going, let’s have a big round of hellos:
- Who you are and what you’re working on
- Your experience or interest in the AI/edge AI world
- What you’re hoping to learn, share or achieve in this community

Can’t wait to get to know you all, and to build something great together.

Now – who’s going first? 👇
Hey everyone, I’m having trouble getting my Axelera Metis PCIe AI Accelerator recognized by `lspci` (and my system in general). I tested the card on two different systems.

On my AMD setup, I am using an AMD 5950X with an ASUS B550-F. I tried installing the card in the slot I normally use for my RTX 3080 as well as in another slot that meets the specifications. In both cases, `lspci` does not list the card even though the fan spins and I know it is getting power. The Voyager SDK doesn’t recognize the card either.

I also tried it on an Intel system with an Intel i5-8500T on a Supermicro X11SCA-F motherboard. The same issue occurs: the card is powered (the fan spins) but not recognized.

I noticed that the boot time increases significantly when the accelerator is installed. This makes me think that UEFI might be attempting to detect something, even though no error messages are shown.

As I have already tried a lot and couldn’t get it running, I wonder if you have any ideas what I can test. Could there…
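A few more generic things that might be worth trying (standard Linux diagnostics, not an official Axelera checklist):

```
# Ask the kernel to rescan the PCIe bus without a reboot
echo 1 | sudo tee /sys/bus/pci/rescan

# Look for link-training or enumeration errors around boot
sudo dmesg | grep -i -E 'pci.*error|link.*down|link.*train'
```

On some boards it can also help to pin the slot to a lower PCIe generation (e.g. Gen3) or toggle options such as Above 4G Decoding in UEFI, but that is board-specific and only a guess on my part.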
Hi,

I get an error when I want to deploy the WASB model for sports ball detection and tracking, converted to ONNX with opset 15:

```
ERROR : Traceback (most recent call last):
ERROR :   File "<frozen compiler.top_level>", line 584, in quantize
ERROR :   File "<frozen qtools_tvm_interface.graph_exporter_v2.graph_exporter>", line 254, in export
ERROR :   File "<frozen qtools_tvm_interface.graph_exporter_v2.graph_exporter>", line 209, in _convert_operators
ERROR :   File "<frozen qtools_tvm_interface.graph_exporter_v2.replacement_functions>", line 242, in closure
ERROR : RuntimeError: Multiple consecutive arithmetic operations by runtime ops found in the graph. This is not allowed presently.
ERROR :
ERROR : The above exception was the direct cause of the following exception:
ERROR :
ERROR : Traceback (most recent call last):
ERROR :   File "/home/aetina/Desktop/voyager-sdk/axelera/app/compile.py", line 456, in compile
ERROR :     the_manifest = top_level.c…
```
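Not sure whether this addresses this particular compiler restriction, but since the error complains about consecutive arithmetic ops in the graph, constant-folding and simplifying the ONNX model before deployment is a cheap thing to try. This uses the third-party onnx-simplifier tool, not anything from the Voyager SDK, and the model file names are placeholders:

```
# Install the ONNX graph simplifier (third-party tool)
pip install onnxsim

# Fold constants and merge redundant ops, then point the deploy step at the simplified model
onnxsim wasb_opset15.onnx wasb_opset15_simplified.onnx
```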