The Pioneer 10 Project Challenge - How It Works
Have you seen that @Victor Labian's successfully brought up the Voyager SDK on the Orange Pi 5 Plus with a Metis M.2 module? It took a bit of work—Ubuntu flashing, driver rebuilds, and tweaking the PCIe device tree—but it's running, and it's fast. 😃

This setup is seriously impressive for something so compact and affordable. Inference runs smoothly, and with some tuning, it could potentially be running advanced models like YOLOv8 without trouble.

It's just an experiment and a test at this point. No big plans around the Orange Pi, but it looks like a really promising start.

If you're experimenting with AI on SBCs—or thinking about it—I'd love to hear what you're working on. Orange Pi, Raspberry Pi, custom boards, anything edge. 👍
Hi again @Spanner ! 😊

I need to run inference on an image obtained from byte data stored in a memory space (https://docs.python.org/3/library/mmap.html); after converting the bytes I get an image.

I've seen the sources available for create_inference_stream and I know that I can run inference on images (https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.2.5/docs/tutorials/application.md), but I have to specify the /path/to/file, and in this case I won't have that information.

In the code I'll have this line where I read the bytes from the memory space:

    frame_size = image_width * image_height * 3  # RGB
    frame_bytes = mm.read(frame_size)

And then I get the new image:

    frame = np.frombuffer(frame_bytes, dtype=np.uint8).reshape((image_height, image_width, 3))

Is there any way to run inference on "frame"?

Thank you!
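For reference, here is a minimal, self-contained version of the byte-to-frame conversion described in the post. The dimensions and the anonymous mmap are placeholders standing in for the real shared-memory region, and the inference call itself is not shown—whether create_inference_stream can consume an in-memory array directly is exactly the open question here.

```python
# Sketch of the conversion only: shared-memory bytes -> numpy frame.
# image_width/image_height and the anonymous mmap are placeholder assumptions.
import mmap
import numpy as np

image_width, image_height = 640, 480          # placeholder dimensions
frame_size = image_width * image_height * 3   # 3 bytes per pixel (RGB)

# Anonymous mmap standing in for the shared-memory region mentioned in the post.
mm = mmap.mmap(-1, frame_size)
mm.write(bytes(frame_size))                   # dummy payload for illustration
mm.seek(0)

frame_bytes = mm.read(frame_size)
frame = np.frombuffer(frame_bytes, dtype=np.uint8).reshape((image_height, image_width, 3))
print(frame.shape, frame.dtype)               # (480, 640, 3) uint8
```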
I just built a quick demo showing the Llama 3.2 3B chatbot running on our Metis platform, totally offline. This model packs 3 billion parameters and runs smoothly on both a standard Lenovo P360 with our PCIe card and even on an Arduino-based dev board (Portenta X8).

We hit 6+ tokens/sec per core – which means real-time chat. Perfect for smart customer support bots, digital concierge systems, any edge AI assistant application really, all running fully on-device. No cloud needed.

Check out the video and let me know what you think. Any projects you can think of where you could use a self-contained, power-efficient, offline AI chatbot like this?

//EDIT: I am aware that the YouTube link is currently broken. I will reupload it soon.
Hey everyone,

Excited to share a brand-new video I put together for anyone getting started with the Metis card and Voyager SDK. Whether you're setting up your eval kit or just curious about how inference works on Metis, this walkthrough is for you.

🛠️ What's inside:
- Setting up your environment and verifying camera input
- Running YOLOv5s-v7-COCO on a live camera stream
- Using download_prebuilt for faster setup
- Running inference on media files and datasets
- Benchmarking performance
- Measuring accuracy with mAP and more

📺 You can find this tutorial, and more, on our YouTube Channel.

This guide is based on SDK version 1.2.5 and follows the official Quick Start Guide, but with some extra tips and tricks to get you up and running even faster.

💬 Have you tried running inference on Metis yet?
Drop your thoughts, questions, or your own setup experiences in the comments. I'd love to hear how it's going for you, or help troubleshoot if you're stuck!

See you there,
Jonas
Hello!

I am facing some problems trying to run a network inference on the Metis M.2 with a Raspberry Pi via Docker. To try the inference I have followed the steps in the guide (https://support.axelera.ai/hc/en-us/articles/26362016484114-Bring-up-Voyager-SDK-in-Raspberry-Pi-5), but after trying to run inference.py I face the following errors:

(venv) root@iak:/home/voyager-sdk# ./inference.py yolov8s-coco-onnx ./media/traffic1_480p.mp4
INFO    : Using device metis-1:1:0
WARNING : Failed to get OpenCL platforms : clGetPlatformIDs failed: PLATFORM_NOT_FOUND_KHR
WARNING : Please check the documentation for installation instructions
INFO    : Default OpenGL backend gl,3,3 overridden, using gles,3,1
INFO    : Network type: NetworkType.SINGLE_MODEL
INFO    : Input
INFO    : └─detections
Stream Paused: 57%|██████████████████████▊ | 4/7 [00:00<00:00, 5.56/s]
[ERROR][axeWaitForCommandList]: Uio wait kernel failed with return c
Hello, I'm trying to use a YOLO model on 8 camera streams in parallel on the Metis PCIe card. 20-30 fps per camera stream is all I need. With the 548 fps end-to-end stated for YOLOv8s at https://axelera.ai/metis-aipu-benchmarks, it should be possible to reach ~68 fps per camera. As a small test I wrote a Python script which creates an inference stream with 8 videos as input:

    from axelera.app import config, display, inf_tracers
    from axelera.app.stream import create_inference_stream

    def run(window, stream):
        for frame_result in stream:
            window.show(frame_result.image, frame_result.meta, frame_result.stream_id)
            fps = stream.get_all_metrics()['end_to_end_fps']
            print(fps.value)

    def main():
        tracers = inf_tracers.create_tracers('core_temp', 'end_to_end_fps', 'cpu_usage')
        stream = create_inference_stream(
            network="yolov5s-v7-coco",
            sources=[
                str(config.env.framework / "media/traffic1_1080p.mp4"),
                str(config.env.framework / "media/traffic1_
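The script in the post is cut off above. Below is a hedged sketch of what a complete minimal version might look like, kept close to the structure the post shows. The eight duplicated file paths are placeholders, the tracers= argument is an assumption (the truncated post does not show how tracers are attached), and the window handling follows the pattern in the Voyager SDK application tutorial rather than anything visible in the post.

```python
# Hedged reconstruction only; paths, tracer wiring and display plumbing are assumptions.
from axelera.app import config, display, inf_tracers
from axelera.app.stream import create_inference_stream


def run(window, stream):
    # The stream interleaves results from all sources; stream_id identifies the camera.
    for frame_result in stream:
        window.show(frame_result.image, frame_result.meta, frame_result.stream_id)
        fps = stream.get_all_metrics()['end_to_end_fps']
        print(fps.value)


def main():
    tracers = inf_tracers.create_tracers('core_temp', 'end_to_end_fps', 'cpu_usage')
    # Eight copies of the same sample clip stand in for eight camera streams.
    sources = [str(config.env.framework / "media/traffic1_1080p.mp4")] * 8
    stream = create_inference_stream(
        network="yolov5s-v7-coco",
        sources=sources,
        tracers=tracers,  # assumption: not visible in the truncated post
    )
    with display.App(visible=True) as app:
        wnd = app.create_window("8-stream YOLO test", (1280, 720))
        app.start_thread(run, (wnd, stream), name='InferenceThread')
        app.run(interval=1 / 10)
    stream.stop()


if __name__ == "__main__":
    main()
```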
Hello everyone,

I'm new to working with this hardware and recently installed the Metis M.2 chip (firmware version: v1.2.0-rc2+bl1-stage0) on an ASRock Z170 Pro4S motherboard. The device was detected successfully.

I've installed the Voyager SDK and attempted to run the following command from the Quick Start Guide:

    ./inference.py yolov5s-v7-coco dataset --no-display

However, I encountered an input/output error when executing the command. I've included the full error message below for reference. I would appreciate any guidance or suggestions you may have to help resolve this issue. Thank you in advance for your support.

INFO    : Using default val dataset
INFO    : Using device metis-0:5:0
INFO    : Network type: NetworkType.SINGLE_MODEL
INFO    : Input
INFO    : └─detections
INFO    : Imported DataAdapter ObjDataAdaptor from /home/tud/voyager-sdk/ax_datasets/objdataadapter.py
Stream Playing: 71%|██████████████████████▏ | 5/7 [00:00<00:00, 6.26/s]
[AxeleraDmaBuf.cpp:234] UIO_IOCTL
We frequently attend trade shows and aim to stand out from the competition. Showcasing compelling demos is a key strategy to achieve this. Which demos would most effectively highlight the unique strengths of our product?
Welcome to the Axelera AI Community – we're so glad you're here.

This is a space for everyone from hardware engineers and AI devs to makers, enthusiasts, onlookers and partners, so don't be shy. Whether you're working with AI every day or just getting started, we'd love to know more about you.

To get things going, let's have a big round of hellos:
- Who you are and what you're working on
- Your experience or interest in the AI/edge AI world
- What you're hoping to learn, share or achieve in this community

Can't wait to get to know you all, and to build something great together.

Now – who's going first? 👇
Hi, I'm working on an RPi 5 and I have the Voyager SDK inside a container with Ubuntu 22, and it works. I wanted to try running inference on a stream using GStreamer. First, I tried to access the stream, and I managed with the following command:

    GST_PLUGIN_PATH=/voyager-sdk/operators/lib AXELERA_DEVICE_DIR=../opt/axelera/device-1.2.5-1/omega LD_LIBRARY_PATH=/voyager-sdk/operators/lib:/opt/axelera/runtime-1.2.5-1/lib:$LD_LIBRARY_PATH gst-launch-1.0 rtspsrc location='rtsp://…..' latency=200 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink

Now I wanted to try the axinferencenet plugin, but I'm having problems. I've tested various commands and formats, such as:

    GST_PLUGIN_PATH=/voyager-sdk/operators/lib AXELERA_DEVICE_DIR=../opt/axelera/device-1.2.5-1/omega LD_LIBRARY_PATH=/voyager-sdk/operators/lib:/opt/axelera/runtime-1.2.5-1/lib:$LD_LIBRARY_PATH gst-launch-1.0 rtspsrc location='rtsp://...' latency=200 ! rtph264depay ! avdec_h264 ! videoconvert ! video/x-raw,format=BGRA ! axtran
0000:01:00.0 : Axelera AI Metis AIPU (rev 02)
ERROR:axelera.runtime:No AIPU driver found in lsmod output
ERROR: AXR_ERROR_CONNECTION_ERROR: No AIPU driver found in lsmod output
Hello,

I am currently working with a lane detection model named culane_res18.onnx, and I would like to know how I can run this model on the Metis M.2 device.

Here are some details about the setup: the model is culane_res18.onnx, which is a lane detection model. The training dataset is from the CULane dataset, and I have the link to the dataset here. The model file can be found here.

Could you kindly guide me on how to run this lane detection model on the Metis M.2? I would appreciate any step-by-step instructions or basic guidance on how to deploy this model, considering that I have both the ONNX model and the dataset.

Looking forward to your response!

Best regards,
Chinmay Mulgund
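This isn't the Metis deployment flow itself, but as a first sanity check it can help to confirm the ONNX model's input and output names and shapes locally before writing any deployment configuration. A minimal sketch with onnxruntime, using the model file mentioned in the post:

```python
# Hedged sketch: inspect the ONNX model before attempting deployment.
# This only verifies the model locally with onnxruntime; it does not touch
# the Metis device or the Voyager SDK deployment pipeline.
import onnxruntime as ort

session = ort.InferenceSession("culane_res18.onnx", providers=["CPUExecutionProvider"])

for inp in session.get_inputs():
    print("input :", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)
```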
Hi,

I'm having trouble installing the metis-dkms driver. Can you help me?

The device is a NanoPC-T6 from Friendly Elec, and the operating system is Ubuntu 22.04, also provided by Friendly Elec. I'm leaving the error message and OS information below. If there is any more information I should provide, please let me know. Thank you.

The error:

$ sudo dpkg -i metis-dkms_0.07.16_all.deb
(Reading database ... 155441 files and directories currently installed.)
Preparing to unpack metis-dkms_0.07.16_all.deb ...
Deleting module metis-0.07.16 completely from the DKMS tree.
Unpacking metis-dkms (0.07.16) over (0.07.16) ...
Setting up metis-dkms (0.07.16) ...
Loading new metis-0.07.16 DKMS files...
Building for 6.1.99
Building for architecture aarch64
Building initial module for 6.1.99
ERROR (dkms apport): kernel package linux-headers-6.1.99 is not supported
Error! Bad return status for module build on kernel: 6.1.99 (aarch64)
Consult /var/lib/dkms/metis/0.07.16/build/make.log for more information.
dpkg: error pro
@Victor Labian I'm also trying to set up an RPi 5 with a Metis M.2. I have a question: what host OS were you using, and did you change anything in config.txt?
Hi, I am also working on the same thing: deploying the YOLOPv2 model onto the Metis M.2. I have downloaded the dataset from BDD100K and arranged it in this manner:

voyager-sdk/
├── data/
│   └── yolopv2_dataset/
│       ├── images/
│       ├── labels/
│       ├── cal.txt
│       ├── val.txt
│       └── data.yaml
├── customers/
│   └── my_yolopv2/
│       └── yolopv2.pt
└── yolopv2-custom.yaml

When I try to deploy the model using the YAML, I get this error:

(venv) aravind@aravind-H610M-H-V2:~/Desktop/voyager-sdk$ ./deploy.py customers/my_yolopv2/yolopv2.yaml
INFO    : Using device metis-0:1:0
INFO    : Detected Metis type as pcie
INFO    : Compiling network yolopv2-custom /home/aravind/Desktop/voyager-sdk/customers/my_yolopv2/yolopv2.yaml
INFO    : Compile model: yolopv2-custom
INFO    : Imported DataAdapter ObjDataAdaptor from /home/aravind/Desktop/voyager-sdk/ax_datasets/objdataadapter.py
/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/torch/serialization.py:779: UserWarning: 'to
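One hedged suggestion, not an official Axelera workflow: since the warning appears while deserializing the .pt checkpoint, it may be simpler to export the model to ONNX first and deploy the ONNX file, similar to the lane-detection post above. The sketch below assumes yolopv2.pt stores a complete torch.nn.Module (not just a state_dict) and that the network expects a 1x3x640x640 input; both are guesses, so adjust for the real model.

```python
# Hedged sketch: convert the PyTorch checkpoint to ONNX before deployment.
# Assumptions (not verified): yolopv2.pt contains a full nn.Module whose class
# is importable, and the network takes a 1x3x640x640 float input.
import torch

model = torch.load("customers/my_yolopv2/yolopv2.pt", map_location="cpu")
model.eval()

dummy_input = torch.randn(1, 3, 640, 640)
torch.onnx.export(
    model,
    dummy_input,
    "customers/my_yolopv2/yolopv2.onnx",
    opset_version=13,
    input_names=["images"],
    output_names=["output"],
)
print("exported customers/my_yolopv2/yolopv2.onnx")
```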