Pioneer 10 Project Challenge: Winner Announcement
See the wildest, smartest, and most creative AI projects ever built with Metis and vision AI, and find out who took the Axelera AI crown! 👑
All the release notes for Voyager updates go straight out on the GitHub repo, so I don’t tend to parrot them here, but this is a pretty big one and feels like it deserves a spot here on the community! If you’ve been using 1.3 already, there are some nice changes that make life easier. A few things that particularly stand out to me:
🔹 Cascade pipelines feel smoother. Chaining models together (like running a classifier on detections) takes less fiddling around than before.
🔹 More operator and metadata options. You can now shape pre/post-processing and inference results in ways that fit better with your own app.
🔹 Model Zoo is easier to adapt. Swapping in your own weights on the reference pipelines is less hassle, so you can get something tuned to your dataset faster.
🔹 Custom evaluators are handy if you want accuracy measured in a way that matches your own project rather than sticking with defaults.
🔹 General quality-of-life polish in pipeline debugging, deployment consistency, and benchmarking.
Dear Axelera Team, We are Master’s students in Business Engineering: Technology and Entrepreneurship at KU Leuven. As part of our course on Strategic Management of Technology, we are exploring how innovative companies manage and foster innovation. Through Axelera’s LinkedIn we noticed your commitment to supporting innovative ideas. We would be very interested in learning more about your approach to innovation management. Our project would involve studying your innovation practices and providing an analytical report as an outcome. Would you be open to collaborating with us on this project? Thank you very much for your time and consideration.
Best regards,
Anna Vella
KU Leuven – Faculty of Economics and Business
Leuven, Belgium
Hi! My name is Salvador and I’ve been trying to integrate the Metis M.2 on a Jetson Orin NX 16GB to perform computer vision tasks. Recently the Axelera team shared a guide to bring up Metis M.2 on Jetson Orin, where they explain the steps required to modify the kernel to use the Metis accelerator. For me these changes were not enough, so I had to perform additional steps to make it work. I hope this post is useful for anybody trying to accomplish the same. The following steps build upon the instructions provided by the Axelera team.

Prepare the necessary NVIDIA files
We will use a host machine running Ubuntu 20.04 or 22.04 (newer versions are not compatible with some NVIDIA tools), where we will run the whole process: download the kernel sources, the Jetson sample filesystem and the Jetson Linux toolchain, cross-compile the Linux kernel, and flash it onto the Jetson Orin NX. To install the Jetson toolchain, just follow the instructions provided on the link
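For readers who just want the shape of the cross-compilation step, here is a rough sketch of the standard Jetson Linux kernel build flow (my own summary, not Salvador’s exact commands; every path, the toolchain prefix, and the defconfig name are placeholders you need to adapt to your L4T release and to the options the Metis guide asks you to enable):

# Placeholder paths -- point these at your unpacked toolchain and L4T kernel sources
export CROSS_COMPILE=$HOME/l4t-toolchain/bin/aarch64-buildroot-linux-gnu-
export KERNEL_SRC=$HOME/Linux_for_Tegra/source/public/kernel/kernel-5.10
export KERNEL_OUT=$HOME/kernel_out

mkdir -p "$KERNEL_OUT"
cd "$KERNEL_SRC"
make ARCH=arm64 O="$KERNEL_OUT" defconfig          # or tegra_defconfig, depending on the release
make ARCH=arm64 O="$KERNEL_OUT" menuconfig         # enable the options required for Metis
make ARCH=arm64 O="$KERNEL_OUT" -j"$(nproc)" Image dtbs modules
make ARCH=arm64 O="$KERNEL_OUT" INSTALL_MOD_PATH="$HOME/Linux_for_Tegra/rootfs" modules_install

The resulting Image, device trees and modules then go into the Jetson sample filesystem before flashing the Orin NX with NVIDIA’s flashing tools.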
This one’s a small but tidy release, mainly focused on smoothing out a few crinkles in v1.4.1.
🧹 Fixes: The installer no longer messes with system directory permissions (CST-803). Fixed an error that popped up when saving output from multiple streams (SDK-7862).
📚 Docs & Model Updates: ONNX opset docs got a refresh, now covering expanded support for concat and grouped convolutions. Updated performance metrics for the Real-ESRGAN-x4plus model, now reflecting the latest benchmarking results.
If you missed v1.4.1, that one brought a handful of bug fixes and a solid performance boost, including double buffering for inference on Raspberry Pi 5 and Portenta X8, which really improves throughput on those platforms. So, if you haven’t updated since the big 1.4 release, this is a good moment to grab the latest build for a smoother, faster SDK experience.
👉 Head over to your usual download spot or pull the latest branch from GitHub.
Good morning everyone, my name is Vito and I am the owner of an Italian company, a start-up. I also teach Robotics, Electronics and Computer Science at a technical institute in Brescia. A few months ago, in my company, we started a project with some of my students to build a humanoid robot for the assistance sector. I saw the interview with Ing. Del Maffeo and was very impressed. Spanner, @Spanner, we need to equip the robot with AI and therefore with a powerful board; could we collaborate? The potential is great... how could we explore the topic further? Thank you all in advance.
Hi, I'm having trouble installing the metis-dkms driver. Can you help me? The device is a NanoPC-T6 from FriendlyElec, and the operating system is Ubuntu 22.04, also provided by FriendlyElec. I'm leaving the error message and OS information below. If there is any more information I should provide, please let me know. Thank you.

The error:
$ sudo dpkg -i metis-dkms_0.07.16_all.deb
(Reading database ... 155441 files and directories currently installed.)
Preparing to unpack metis-dkms_0.07.16_all.deb ...
Deleting module metis-0.07.16 completely from the DKMS tree.
Unpacking metis-dkms (0.07.16) over (0.07.16) ...
Setting up metis-dkms (0.07.16) ...
Loading new metis-0.07.16 DKMS files...
Building for 6.1.99
Building for architecture aarch64
Building initial module for 6.1.99
ERROR (dkms apport): kernel package linux-headers-6.1.99 is not supported
Error! Bad return status for module build on kernel: 6.1.99 (aarch64)
Consult /var/lib/dkms/metis/0.07.16/build/make.log for more information.
dpkg: error pro
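Not an official fix, but before digging further these are the checks I would run: confirm that headers matching the running 6.1.99 vendor kernel are actually installed (FriendlyElec images often ship them under a vendor-specific package name rather than a stock linux-headers name, which is an assumption to verify on your image), then rebuild the module by hand to get the full error out of make.log:

# Which kernel is running, and does it have a matching headers/build tree?
uname -r
ls /lib/modules/"$(uname -r)"/build || echo "no headers/build tree for this kernel"

# Look for the vendor's headers package instead of assuming a stock Ubuntu name:
apt-cache search linux-headers | grep -i "$(uname -r | cut -d- -f1)"

# Once headers are installed, rebuild the DKMS module manually and read the log:
sudo dkms build -m metis -v 0.07.16 -k "$(uname -r)"
cat /var/lib/dkms/metis/0.07.16/build/make.log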
Hello, I’m running inference with the model unet_fcn_512-cityscapes with pipe torch-aipu on the Aetina eval board. It runs at 1.8 fps system with 500 ms of latency, although device fps shows 11.5 fps capability. In the docs it is also mentioned that it should reach 18 fps. I originally thought it was an issue with time wasted loading and decoding PNG images from the SD card, so I put them in shared memory, but the results are identical. I also tested yolov5s-v7-coco, which should reach 805 fps, but I can only achieve 214 fps.

Here is the output of:
AXELERA_USE_CL_DOUBLE_BUFFER=0 ./inference.py yolov5s-v7-coco media/traffic3_720p.mp4 --show-stats --no-display
INFO : Deploying model yolov5s-v7-coco for 4 cores. This may take a while...|████████████████████████████████████████| 12:41.1
arm_release_ver: g13p0-01eac0, rk_so_ver: 9
========================================================================
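One quick experiment, using only the switch already present in the command above (suggested purely as a comparison, not as a known fix): run the same benchmark with the CL double buffer disabled and then enabled, and compare the end-to-end fps reported by --show-stats:

AXELERA_USE_CL_DOUBLE_BUFFER=0 ./inference.py yolov5s-v7-coco media/traffic3_720p.mp4 --show-stats --no-display
AXELERA_USE_CL_DOUBLE_BUFFER=1 ./inference.py yolov5s-v7-coco media/traffic3_720p.mp4 --show-stats --no-display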
Problem Summary
Platform: ARM64 (RK3588-based boards like Radxa ROCK 5B Plus)
OS: Ubuntu 22.04
Issue: GStreamer plugins (axinplace, axtransform, etc.) fail to build despite:
- All GStreamer development packages installed
- CMake detecting GStreamer successfully (GST_FOUND=1)
- No explicit build errors or warnings
- Source code present in operators/gstaxstreamer/

Symptom:
ERROR failed to create element of type axinplace (decodebin-link0)
when trying to run inference.

$ gst-inspect-1.0 axinplace
# Returns: "No such element or plugin 'axinplace'"

After the build, libgstaxstreamer.so is missing from operators/lib/ even though the build claims success.

Root Cause Analysis
The Silent Dependency Failure
The issue is a silent CMake conditional failure caused by a missing platform-specific dependency that's not documented anywhere in the Voyager SDK. What the developers missed:
- Incomplete dependency documentation: the SDK docs list standard GStreamer packages but omit hardware-specific requirements
- Silent build failure
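A hedged set of checks that follows from the analysis above. The operators/lib path comes from the post; the rest are standard GStreamer plugin-discovery steps rather than anything Voyager-specific:

# Did the plugin library actually get built anywhere in the tree?
find . -name 'libgstaxstreamer*.so' 2>/dev/null

# If it exists but lives outside the default plugin path, point GStreamer at it
# and force the plugin registry to rescan (the path below is a placeholder):
export GST_PLUGIN_PATH=$PWD/operators/lib:$GST_PLUGIN_PATH
rm -rf ~/.cache/gstreamer-1.0
gst-inspect-1.0 axinplace

# If the .so is genuinely missing, rerun cmake and read the configure output for the
# dependency check that silently evaluated to false on this platform.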
Environment
SoC/Board: Rockchip RK3588, Radxa ROCK 5B Plus. It has 2 M.2 slots!
OS/Kernel: Debian bookworm, 6.1.43-15-rk2312 (aarch64)
RAM: 16GB (also tested with mem=4G kernel parameter; total reported ~3.8Gi)
Device: Axelera Metis PCIe accelerator, PCI ID 1f9d:1100 (shows as “Axelera AI Metis AIPU”)
PCIe topology: Endpoint at 0001:11:00.0 behind upstream bridge 0001:10:00.0 (Gen3 x2)
Drivers: axl/metis-dkms 1.2.3; “Kernel driver in use: axl” shown; /dev/metis-1:11:0 device node present when driver bound
Container: Voyager SDK tested in Ubuntu 22.04 container (privileged); device access tests moved back to host after hangs observed inside the container

Device-tree overlay (added to expand the ranges window to 48MB)
Reason: The default 32-bit non-prefetchable MEM window under fe160000.pcie was ~14MB; Metis BAR2 asked for 32MB (plus headroom), so a 48MB window was used in a different place that seems free. The overlay below is applied via extlinux overlays:
/dts-v1/;
/plugin/;
/ { compat
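To sanity-check the overlay, a couple of standard PCI commands may help (the vendor ID and bus address are taken from the environment description above; nothing here is Voyager-specific):

# Is the endpoint enumerated, and how big a window did each BAR actually get?
lspci -d 1f9d: -vv | grep -E 'Axelera|Region|Memory'

# Look for BAR/resource assignment failures after applying the overlay:
sudo dmesg | grep -iE 'metis|BAR|no space|resource'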
Hi all, I am trying to compile a high-resolution model (1504×1504) on a Metis PCIe card, but the compilation fails due to a memory constraint:
ERROR : RuntimeError: Could not find a tiling factor that fits the memory constraints l1_constraint=4011520 l2_constraint=7894528. After attempt=7 and h_size=1 and adj_factor=1, memory usage still is memory_usage={L1: {190: 141376, 193: 5017600, 191: 143360}} and per-pool memory usage {L1: 5302336}.
The error indicates that the required L1 size (≈5.3 MB) exceeds the constraint (≈4.0 MB). My questions are:
Voyager SDK Solution: Is there a Voyager SDK flag or setting to adjust the tiling to overcome this L1 limit?
Hardware Solution: Are there any Metis card variants (or multi-AIPU cards) that feature a larger per-core L1 cache?
Any guidance on compiling this high-resolution model is appreciated. Thank you!
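For what it's worth, the numbers in the error message already tell most of the story (byte values straight from the log; the reading that buffer 193 is a single tensor the tiler could not split further at h_size=1 is my assumption):

$$
5\,017\,600\ \mathrm{B}\ (\text{buffer }193) \;>\; 4\,011\,520\ \mathrm{B}\ (l1\_constraint),
\qquad
\frac{5\,302\,336}{4\,011\,520} \approx 1.32
$$

So the L1 pool is oversubscribed by roughly 32% even at the smallest tiling attempt the compiler made, which is why no valid tiling factor was found at this input resolution.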
Hello guys, I followed the instructions in the link below on the Orange Pi 5 device, but it does not see the device.
“ERR DEVCICE NOT FOUND”
Help! https://support.axelera.ai/hc/en-us/articles/27059519168146-Bring-up-Voyager-SDK-in-Orange-Pi-5-Plus
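Before re-running the SDK setup, it may help to check whether the card enumerates on PCIe at all (1f9d is the Axelera vendor ID; these are generic Linux commands, not Voyager-specific ones, so treat them as a first diagnostic rather than the documented procedure):

# Any Axelera device visible on the bus?
lspci -d 1f9d:

# Kernel messages about PCIe link training or the metis driver:
sudo dmesg | grep -iE 'pcie|metis'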
Hello everyone, I’m trying to install some packages using APT on my Axelera Metis Compute Board with Voyager Linux 1. However, the /etc/apt/sources.list file is empty. What kind of repositories should I use? Are there any tutorials?
Thanks. Best regards
Hi there, I’m working with an Axelera M.2 chip at uni. For some research on green AI I’m doing, I want to track the power usage of the chip. As far as I know, I need to track the power through the PCIe connection. Is there a way to track it with the AIPU directly, or do I need to use the PCIe? And if there is a way, can I save the usage in an array easily, or is this a rather tricky situation? Thanks in advance, Morruebe
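Not an official answer, but here is a sketch of the “save it in an array” part, assuming the board or host exposes a power reading somewhere under sysfs. The hwmon path, the power1_input name and the 10 Hz rate are all placeholders; whether the Metis AIPU or its driver actually publishes such a counter is exactly the open question above:

# Poll a power sensor at ~10 Hz and keep the samples in a bash array.
SENSOR=/sys/class/hwmon/hwmon0/power1_input   # placeholder path; microwatts by hwmon convention
samples=()
for _ in $(seq 1 600); do                     # ~60 s of samples
    samples+=("$(cat "$SENSOR")")
    sleep 0.1
done
printf '%s\n' "${samples[@]}" > metis_power_uW.log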
Hello, I’m new to the Voyager SDK and everything related to Axelera, and I need help deploying a custom ONNX model that has 2 inputs and 20+ outputs. All I want for now is to manage to run an inference with my own dataset/image and to save the raw outputs of the model to disk. The input is a pair of 2 files: I have data in YUV format, saved in input_y.npy and input_uv.npy. Do you have any idea how I can do this, or whether it’s possible with Voyager 1.3? Or do I need to add a split layer so I have just 1 input, and only then can I deploy it? Or should I upgrade to Voyager 1.4?
Will Metis M.2 (AXE-BME20M1AR01B02) work with a QNAP NAS?