@Victor Labian I’m also trying to set up a Raspberry Pi 5 with a Metis M.2. I have a question: which host OS were you using, and did you change anything in config.txt?
Hi, I am also working on the same thing: deploying the YOLOPv2 model onto the Metis M.2. I have downloaded the dataset from BDD100K and arranged it in this manner:

voyager-sdk/
├── data/
│   └── yolopv2_dataset/
│       ├── images/
│       ├── labels/
│       ├── cal.txt
│       ├── val.txt
│       └── data.yaml
├── customers/
│   └── my_yolopv2/
│       └── yolopv2.pt
└── yolopv2-custom.yaml

So when I try to deploy the model using the YAML, I get this error:

(venv) aravind@aravind-H610M-H-V2:~/Desktop/voyager-sdk$ ./deploy.py customers/my_yolopv2/yolopv2.yaml
INFO : Using device metis-0:1:0
INFO : Detected Metis type as pcie
INFO : Compiling network yolopv2-custom /home/aravind/Desktop/voyager-sdk/customers/my_yolopv2/yolopv2.yaml
INFO : Compile model: yolopv2-custom
INFO : Imported DataAdapter ObjDataAdaptor from /home/aravind/Desktop/voyager-sdk/ax_datasets/objdataadapter.py
/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/torch/serialization.py:779: UserWarning: 'to
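Not an answer to the truncated warning itself, but one thing worth ruling out before deploy.py runs is that cal.txt and val.txt list image paths that actually exist. A minimal sketch; check_manifest is a hypothetical helper (not part of the Voyager SDK), and the paths mirror the tree above:

```shell
# Hypothetical helper: verify that every path listed in a manifest
# (e.g. cal.txt or val.txt above) exists on disk before deployment.
check_manifest() {
    manifest="$1"
    [ -f "$manifest" ] || { echo "manifest not found: $manifest"; return 1; }
    missing=0
    while IFS= read -r img; do
        [ -f "$img" ] || { echo "missing: $img"; missing=1; }
    done < "$manifest"
    return "$missing"
}

# Usage on the layout above:
#   check_manifest data/yolopv2_dataset/cal.txt && echo "calibration set OK"
```

A stale or wrong path in the calibration list is a common cause of odd failures during quantization, so it is cheap to rule out first.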
How can I run an LLM training process driven by system RAM and CPU, with the model work offloaded to the Metis device? Or, put differently: how do I load 4.7 GB into a Metis M.2 that has 1 GB of memory?
Hello, I’ll hijack this a little bit… When I type “lspci” on a Pi 5, it gives me the following output:

0001:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries BCM2712 PCIe Bridge (rev 30)
0001:01:00.0 Processing accelerators: Axelera AI Metis AIPU (rev 02)
0002:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries BCM2712 PCIe Bridge (rev 30)
0002:01:00.0 Ethernet controller: Raspberry Pi Ltd RP1 PCIe 2.0 South Bridge

How do I activate the device and get it working with my python3 script?
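In case it helps while waiting for a proper answer: before any Python work, it is worth confirming the kernel can see the card at all. A small sketch that only filters lspci output like the listing above (the grep pattern matches the vendor string; nothing Axelera-specific beyond that):

```shell
# Return success if an Axelera Metis entry appears on stdin
# (feed it the output of `lspci`).
metis_present() {
    grep -qi 'axelera'
}

# Usage on the Pi:
#   lspci | metis_present && echo "Metis enumerated" || echo "Metis not found"
# Actually driving the device from python3 goes through the Voyager SDK,
# which its installer sets up; this only checks PCIe enumeration.
```

If the device enumerates (as in the lspci output above), the remaining work is SDK installation rather than PCIe bring-up.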
Hello. I recently received the Metis M.2 accelerators, ran a test on the boards provided by Axelera, and got results close to the published benchmarks. It was successful. I then connected these accelerators to the Orin (via the M.2 slot) in order to use them in my Jetson AGX Orin environment as well. But during the voyager-sdk installation I got a “WARNING: Failed to refresh pcie and firmware” error. The installation completes, but the accelerator device does not show up on my system. From the lspci -tv command I get the output attached to this post; I can’t see Metis there. I didn’t see any information about the compatibility of these accelerators with Jetson systems (my M.2 slot is M-key). If they are compatible, can you help me fix the installation? Thanks :)
Hi, I followed the "Install Voyager SDK in a Docker Container" guide to try installing SDK version 1.2.5, but I encountered an error with one of the packages during the process. Could you please let me know how to resolve this issue? The attached file contains the installation log.

---
[150/193] Installing development libraries
[150/193] Install axelera_compiler
[150/193] python3 -m pip --disable-pip-version-check install --index-url XXXX@software.axelera.ai/artifactory/axelera-dev-pypi/ --no-deps -c cfg//requirements-ubuntu-2204-amd64.txt "axelera_compiler"
[150/193] Looking in indexes: XXXX:****@software.axelera.ai/artifactory/axelera-dev-pypi/
[150/193] WARNING: 401 Error, Credentials not correct for https://software.axelera.ai/artifactory/axelera-dev-pypi/axelera-compiler/(X)
[150/193] ERROR: Cannot install axelera_compiler because these package versions have conflicting dependencies.
[150/193] The conflict is caused by:
[150/193] The user requested axeler
It asks me every time for a token… Yes, I use my registered email, and I did not find any token in my account. wtf
About 40m of 2020 aluminium profile just arrived for the CNC machine I’m building! I suddenly feel like this might be a bigger job than I initially planned… Anyway, that’s for another day.

What’s crossed my mind now is whether anyone’s seen any projects that combine CNC with vision AI? I’m thinking it’d be cool if I could combine the two, since I’m building one machine anyway. Predictive maintenance, maybe? Although this is a homemade machine, so that’s not a huge issue. Perhaps positioning and alignment could be something AI could get involved with? That could potentially be useful and reduce waste. Any other ideas of existing projects anyone’s seen in this area?
The world has been chaotic lately: market swings, tariffs, companies being acquired (Kinara by NXP), other companies refusing acquisition (Furiosa allegedly declined an $800m buyout from Meta). DeepSeek made everyone question the future of closed AI models and model scaling, while OpenAI’s Sam Altman committed to an “open-weight” AI model to come out this summer, something no one thought would happen.

As I was reflecting on all of this, and Axelera’s place in it all, I find myself incredibly grateful. Despite everything happening around us, our team has been focusing on what we can control: solving customer problems, building world-class technology, and partnering with some of the world’s best technology providers. So much to be thankful for, and I want to share some of these reasons with you.

We have seen amazing progress towards our vision of bringing artificial intelligence to everyone, to truly democratize what could be the most revolutionary technology we have seen in our life
Hi everyone, I am Semih. I am a PhD researcher at the University of Tuebingen working on open source and AI. I am trying to understand Axelera’s hardware, and in the future I am planning to research legal and regulatory issues related to AI hardware: the role of open-source hardware, licensing contracts, and regulatory compliance, especially with the Cyber Resilience Act, GDPR, and the AI Act. I was wondering whether anyone is conducting similar research, or whether there are any pain points or interesting discussions under these themes? Thanks in advance
Hi all, I’m a Docker noob, so please bear with me if this is a stupid question. To get the SDK operational on my Ubuntu 24.04 system I followed the "Install Voyager SDK in a Docker Container" guide and can make it up to test-running some inferences. Alas, while I expected the videos to be displayed in a dedicated window at full resolution, for me they are shown in the active text console (ncurses-based, I assume), obviously coarse and slow. I tried to learn how to activate graphics output from within Docker, but all I got was that Docker essentially is meant to be text-only and graphics is usually provided via a web browser from pages served inside Docker. So to put it bluntly: those videos showing the Voyager SDK performing object detection at exhibitions at full resolution and full framerate, are those from within Docker, or are they run from a native Ubuntu 22 system? If it is possible to tune Docker to have that video output, a mini-howto would help a lot.
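Not an official how-to, but on a plain X11 desktop the usual way to get GUI windows out of a container is to share the host’s X socket with it. A minimal sketch, assuming an X11 (not Wayland) session; the image name is a placeholder:

```shell
# Flags that forward the host display into the container.
X11_ARGS="--env DISPLAY=$DISPLAY --volume /tmp/.X11-unix:/tmp/.X11-unix"

# One-time: allow local containers to talk to the X server, then launch
# the SDK container with the flags added (image name is a placeholder):
#   xhost +local:docker
#   docker run --rm -it $X11_ARGS <voyager-sdk-image>
echo "$X11_ARGS"
```

On Wayland sessions this relies on XWayland, and hardware-accelerated rendering inside the container may need extra GPU flags, so treat this as a starting point rather than a recipe.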
Dear all, I am trying to get my hands on the Metis PCIe card on my current working PC, being:

mainboard: Asus Prime X570-P
CPU: AMD Ryzen 9 3900X
OS: Ubuntu 24.04.2 LTS, 6.8.0-58-generic x86_64

The first issue I face is that a warm reset does not seem to work for me, i.e. if I issue a reboot without re-powering the system, the Metis device does not show up in lspci, while after a power-down (with the PSU physically switched off) it is there. This is reproducible and, based on my experience, points to either my mainboard BIOS issuing a wrong reset sequence to the PCIe slot, or the device itself not performing a full reset sequence on warm boot. Since I know how to get the card detected, this is only an annoyance, but I saw that another user of an AMD64 system in the forum has similar issues, and the problem might go deeper. Which leads to the second issue I am observing, for which I already filed a support request but realized here would be the better place to discuss it. According to David M. to ch
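For the warm-reset annoyance, one workaround worth trying before a full power cycle is a software re-enumeration through the standard Linux sysfs PCI interface (nothing Axelera-specific; it needs root, and the BDF below is a placeholder you must replace with the one from your own lspci output):

```shell
# Placeholder bus/device/function; substitute the real one from `lspci`.
BDF="0000:05:00.0"

if [ -w "/sys/bus/pci/devices/$BDF/remove" ]; then
    # Drop the stale device node, then ask the kernel to rescan the bus.
    echo 1 > "/sys/bus/pci/devices/$BDF/remove"
    sleep 1
    echo 1 > /sys/bus/pci/rescan
else
    echo "run as root, and check that $BDF exists under /sys/bus/pci/devices/"
fi
```

If the card only reappears after a cold boot even with a rescan, that points at the reset-sequencing issue you describe rather than at enumeration alone.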
May 4th is almost upon us, so naturally we’re thinking about AI… in space. More specifically: the AI and droid legends of the Star Wars universe. From multilingual protocol units to rogue assassin bots with surprising moral compasses, Star Wars has given us some of the most iconic (and occasionally chaotic) artificial intelligences in sci-fi. So here’s the question...
Hello, my name is Kevin. I am trying to work on a project where the M.2 is going to interface with my CPU and later on maybe my GPU. Is there a way to do that? And another question: what’s the command to use an RTSP stream? My system is:

AMD 9950X3D CPU
AMD 7900XTX GPU
48 GB DDR5 at 8200 MHz
Metis Axelera AI in an M.2 port.
Hello, I know that Axelera does not support the attention mechanism. For example, can we convert the attention mechanism in the YOLOv11 architecture to convolution? Have you done or read any research on this subject? Does the Softmax structure work on Axelera? (As far as I know, no.) Is there a way to turn Softmax into a structure that works?