Pioneer 10 Project Challenge: Winner Announcement
Hi there, I'm working with an Axelera M.2 chip at university. For some research on green AI, I want to track the power usage of the chip. As far as I know, I need to measure the power through the PCIe connection. Is there a way to read it from the AIPU directly, or do I need to go through the PCIe? And if there is a way, can I save the readings into an array easily, or is this rather tricky?
Thanks in advance, Morruebe
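A minimal sketch of how readings could be collected into an array, assuming the host exposes a power sensor for the card through the standard Linux hwmon interface; the sysfs path below is a placeholder, and whether the Metis driver actually exposes such a sensor is an assumption to verify:
```
# Sketch only: assumes a hwmon power sensor exists for the card.
# Check /sys/class/hwmon/*/name to see which (if any) sensor belongs
# to the Metis device; the path below is hypothetical.
import time

HWMON_POWER = "/sys/class/hwmon/hwmon0/power1_input"  # hypothetical path, microwatts

def sample_power_watts():
    with open(HWMON_POWER) as f:
        return int(f.read().strip()) / 1e6  # hwmon reports microwatts

readings = []
for _ in range(100):            # 100 samples at roughly 10 Hz
    readings.append(sample_power_watts())
    time.sleep(0.1)

print(f"mean power: {sum(readings) / len(readings):.2f} W")
```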
Hello, I'm new to the Voyager SDK and everything related to Axelera, and I need help deploying a custom ONNX model that has 2 inputs and 20+ outputs. All I want for now is to run an inference with my own dataset/image and save the raw outputs of the model to disk. The input is a pair of 2 files: I have data in YUV format, saved in input_y.npy and input_uv.npy. Do you have any idea how I can do this, or whether it's possible with Voyager 1.3? Or do I need to add a split layer so the model has just 1 input before I can deploy it, or upgrade to Voyager 1.4?
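Before involving the Metis at all, a quick host-side check with onnxruntime can confirm that the two-input model runs and can dump its raw outputs; this is only a sketch, and the input tensor names "input_y" and "input_uv" are assumptions to verify against the model's actual inputs:
```
# Host-side sanity check with onnxruntime (not the Voyager SDK):
# load the two YUV planes from .npy and dump every raw output to disk.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")             # path is a placeholder
print([i.name for i in sess.get_inputs()])             # verify the real input names

feeds = {
    "input_y": np.load("input_y.npy"),                 # assumed tensor names
    "input_uv": np.load("input_uv.npy"),
}
output_names = [o.name for o in sess.get_outputs()]    # 20+ outputs
outputs = sess.run(output_names, feeds)

for name, arr in zip(output_names, outputs):
    np.save(f"raw_{name.replace('/', '_')}.npy", arr)  # one file per output
```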
What if you could run 24 simultaneous YOLO streams on a single M.2 card, 6x more than most hardware can handle? Welcome to the performance revolution that's redefining what's possible at the edge.

The computer vision world runs on Ultralytics YOLO models. From retail loss prevention to smart city surveillance, these models power the visual intelligence behind modern applications. But every YOLO developer knows the challenge: scaling real-time inference across multiple video streams while maintaining performance and staying within power budgets.

The Axelera® AI Metis® platform addresses this multi-stream challenge head-on, delivering purpose-built performance for YOLO workloads at the edge. Developers who attended YOLO Vision London saw a preview of the upcoming YOLO26 release running on a new Metis form factor coming soon. Since both Ultralytics and Axelera AI are focused on ease of use, the new model was compiled and running within the same day. We're excited
Will the Metis M.2 (AXE-BME20M1AR01B02) work with a QNAP NAS?
Does anyone have suggestions on how to implement SAHI inference with YOLO via the voyager-sdk, running on a Metis M.2?
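For reference, the core of SAHI-style inference is just slicing the frame into overlapping tiles, running the detector on each tile, and shifting the boxes back into full-frame coordinates. A minimal, framework-agnostic sketch follows, where run_yolo_on_tile() is a hypothetical callback standing in for whatever Voyager/Metis inference call is actually used:
```
# SAHI-style sliced inference, framework-agnostic sketch.
# run_yolo_on_tile() is hypothetical: substitute the real inference call,
# returning boxes as (x1, y1, x2, y2, score, class_id) in tile coordinates.
import numpy as np

def sliced_inference(frame, run_yolo_on_tile, tile=640, overlap=0.2):
    step = int(tile * (1 - overlap))
    h, w = frame.shape[:2]
    detections = []
    for y0 in range(0, max(h - tile, 0) + 1, step):
        for x0 in range(0, max(w - tile, 0) + 1, step):
            crop = frame[y0:y0 + tile, x0:x0 + tile]
            for x1, y1, x2, y2, score, cls in run_yolo_on_tile(crop):
                # shift tile-local boxes back into full-frame coordinates
                detections.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, score, cls))
    # a global NMS pass across all tiles would normally follow here
    return detections
```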
Our development team requires specialized expertise for Axelera AI platform implementation and edge AI acceleration optimization. We're seeking an experienced consultant with 5-7 weeks of availability to support us with
Hello,
I'm running inference with the unet_fcn_512-cityscapes model, using the torch-aipu pipe, on the Aetina eval board. It runs at 1.8 fps system with 500 ms of latency, although device fps shows a capability of 11.5 fps. The documentation also mentions it should reach 18 fps. I originally thought the issue was time wasted loading and decoding PNG images from the SD card, so I put them in shared memory, but the results are identical.
I also tested yolov5s-v7-coco, which should reach 805 fps, but I can only achieve 214 fps. Here is the output of:
```
AXELERA_USE_CL_DOUBLE_BUFFER=0 ./inference.py yolov5s-v7-coco media/traffic3_720p.mp4 --show-stats --no-display
INFO : Deploying model yolov5s-v7-coco for 4 cores. This may take a while...
|████████████████████████████████████████| 12:41.1
arm_release_ver: g13p0-01eac0, rk_so_ver: 9
========================================================================
```
Thought you might want to check out the live stream from YOLO Vision that’s happening today in London. Some of the Axelera AI team are there, and will be up on stage soon!
Hello,
I am currently running performance tests on the Metis M.2 card with different models (e.g., YOLOv8n). I observed a behavior that I would like to clarify: when running with 1 AIPU core, I get lower latency and the same FPS as when using 4 cores. With 4 AIPU cores, the latency per image is higher, even though the FPS does not increase. In addition, CPU utilization is noticeably higher with 4 cores (around 17%) compared to only ~0.8% with 1 core.
I would like to know:
Is this expected behavior (due to synchronization overhead between AIPU cores)?
Is there a recommended configuration to optimize either latency or throughput (FPS), depending on the use case?
Is it normal that CPU usage increases significantly when using 4 cores, given that processing is supposed to be largely handled by the AIPU?
Thank you in advance for your clarifications.
Best regards,
Hello everyone,
I have been trying to compile the PRL-Track model (see link below) for about a month using both Hailo and Vitis AI, but so far I have not been able to fully meet the requirements. I am now considering using the Metis accelerator, but I am not sure whether I would run into the same issues.
Has anyone here successfully deployed this model on the Metis accelerator, or could you confirm whether it is supported?
Repository: https://github.com/vision4robotics/PRL-Track/tree/main
Any guidance or shared experience would be greatly appreciated. Thanks in advance!
Hello everyone,
I am trying to use the Metis PCIe (4GB) in combination with the Firefly ITX-3588J.
Issue: I cannot load the Metis kernel module (driver) into the Linux kernel.
Setup: Metis PCIe with 4 GB of RAM; Firefly ITX-3588J with 16 GB of RAM and a RockChip SoC; OS: Ubuntu 22.04.4 LTS. I have a Gen3 PCIe slot with 4 lanes, details below:
```
LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <4us, L1 <16us
        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
```
So far: I installed the Ubuntu OS as described here: https://support.axelera.ai/hc/en-us/articles/25556437653138-System-Imaging-Guide-Firefly-RK3588 and installed the latest 1.3.3 Voyager SDK release. I can see the Metis board from lspci:
```
firefly@firefly:~$ lspci | grep accelerators
01:00.0 Processing accelerators: Axelera AI Metis AIPU (rev 02)
```
However, I do not detect any drivers for the board, even after installing the SDK. Both of the following commands do not return anything:
Hi,
Bit of a weird issue with the M.2 Evaluation System (SBC model AIB-MR1B-A1), and I'm worried it could be a hardware fault. I somehow got into a state where I would see "AXR_ERROR_CONNECTION_ERROR: No target device found in lspci output". lspci didn't show the accelerator card, but instead listed a "Non-VGA unclassified device":
```
00:00.0 PCI bridge: Rockchip Electronics Co., Ltd RK3588 (rev 01)
01:00.0 Non-VGA unclassified device: Synopsys, Inc. DWC_usb3 / PCIe bridge
```
dmesg showed:
```
[Tue Sep 2 19:51:14 2025] rk-pcie fe170000.pcie: PCIe Linking... LTSSM is 0x3
[Tue Sep 2 19:51:16 2025] rk-pcie fe170000.pcie: PCIe Link Fail
[Tue Sep 2 19:51:16 2025] rk-pcie fe170000.pcie: failed to initialize host
```
After trying a few things, I resorted to re-imaging the SBC. Then, on the first log-in from adb shell, I still saw the Non-VGA unclassified device, but after a reboot, SSH'ing into the Eval System, I saw it as:
```
00:00.0 PCI bridge: Rockchip Electronics Co., Ltd Device 3588 (rev 01)
01:00.0 Processing accelerators
```
Hi everyone,
I'm currently developing a web application on top of the Axelera PCIe card and the Voyager SDK, and I've hit a roadblock.
Use case overview: I want to relay this annotated stream back to my web frontend for live viewing.
I've tried:
- Using GStreamer pipelines for HLS (e.g., with hlssink)
- Embedding video using a <video> tag on the frontend pointing to HLS or .mp4
System details:
- Hardware: Axelera Metis PCIe card
- Software: Voyager SDK (latest), YOLOv8n-based custom model, Python 3.10 + GStreamer on Ubuntu 22.04
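For the HLS leg, one common pattern is to let GStreamer segment the encoded stream and have a plain HTTP server expose the playlist and segments as static files for the frontend's <video> tag. Below is a minimal sketch using a test source in place of the actual annotated Voyager output; element names, properties, and output paths should be checked against the installed GStreamer plugins:
```
# Minimal HLS output sketch with GStreamer's Python bindings.
# videotestsrc stands in for the annotated stream; swap in the real source.
# hlssink (gst-plugins-bad) writes TS segments plus an .m3u8 playlist that
# any static HTTP server can serve to the frontend.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "videotestsrc is-live=true ! videoconvert ! x264enc tune=zerolatency "
    "! h264parse ! mpegtsmux ! hlssink "
    "location=/var/www/hls/segment%05d.ts "
    "playlist-location=/var/www/hls/playlist.m3u8 "
    "target-duration=2 max-files=10"
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()   # serve /var/www/hls/ with any static HTTP server
```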
There's everything to play for.