🎉 The Pioneer 10 Have Been Selected! 🚀
Hi, when trying to deploy YOLO11 with custom weights, I noticed a significant performance drop caused by the model conversion to ONNX. Can I ask what method and parameters were used to convert the Axelera yolo11s-coco-onnx model? Thank you!
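For reference, this is roughly how I exported my custom weights (a standard Ultralytics export; the imgsz/opset values are my own choices and may differ from whatever was used for the zoo model, which is exactly what I would like to confirm):

from ultralytics import YOLO

# Export custom YOLO11 weights to ONNX with the stock Ultralytics exporter.
# imgsz/opset/simplify are assumptions on my side, not necessarily the
# settings used for the official yolo11s-coco-onnx model.
model = YOLO("yolo11s-custom.pt")   # hypothetical custom checkpoint
model.export(format="onnx", imgsz=640, opset=17, simplify=True)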
Hi Axelera team, I'm running parallel and cascading pipelines together with inference.py; the pipelines initialize and generally work. The issue is input routing: during inference I'm unable to reliably select/bind the correct video source (file/RTSP/USB) to each network. For example, launching two independent detectors plus one cascade with multiple media sources (e.g., ./inference.py modelA modelB media/a.mp4 media/b.mp4 --pipe=gst --verbose) often results in the streams attaching to the wrong model or failing to open, while one stream runs as expected.

Is there a supported way to explicitly map inputs to models/pipelines when using inference.py? For instance:
- CLI flags to pin inputs by index/name (e.g., --input[0]=..., --input[1]=...)
- YAML fields that define per-network input URIs
- A recommended PipeManager pattern for deterministic source-to-network routing when combining parallel and cascading graphs

The command I wrote: ./inference.py parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8 med
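On the Python side I have also tried the create_inference_stream route from docs/tutorials/application.md. Below is a minimal sketch of what I mean, assuming (and this is the assumption I cannot confirm) that the order of the sources list determines which stream feeds which network:

from axelera.app.stream import create_inference_stream

# Sketch based on the application tutorial: one pipeline, two sources.
# My assumption is that sources[0]/sources[1] become stream 0/stream 1;
# how to pin sources per network in a parallel/cascaded graph is the open question.
stream = create_inference_stream(
    network="parellel5_yv11pose_scpmyv8_bcfyv8_yv11_ppeyv8",
    sources=[
        "media/a.mp4",
        "media/b.mp4",
    ],
)

for frame_result in stream:
    # Each frame_result carries the image and inference metadata for one frame;
    # working out which source/network it belongs to is the part I am stuck on.
    pass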
We're excited to announce that Axelera is now fully supported in the DeGirum Cloud Compiler, our browser-based tool that lets you compile YOLO models in just a few clicks. No toolchains, no local setup, no guesswork. Just upload your PyTorch checkpoint and receive an Axelera-optimized binary, often in under 10 minutes.

Note: A free DeGirum AI Hub account is required to access the compiler and test your models directly in the browser.

✅ What It Supports
- YOLOv8 and YOLO11
- All model sizes: n, s, m, l, x
- Tasks: Object Detection, Classification, Semantic Segmentation, Oriented Bounding Box Detection, Keypoint Estimation
- Custom input resolutions during compilation
- Integrated C++ postprocessing via DeGirum PySDK
- Instant in-browser testing (no download required)
- Fast compilation (typically under 10 minutes)

🎯 Who This Is For
This tool is built for developers and researchers who want to:
- Skip manual toolchain setup
- Quickly compile PyTorch YOLO models to run on Ax
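Once a model is compiled, running it through DeGirum PySDK looks roughly like this (a sketch only; the host, zoo URL, token, and model name are placeholders for your own AI Hub workspace and compiled model):

import degirum as dg

# Load a compiled model from your AI Hub model zoo and run one image through it.
# dg.LOCAL targets a Metis card attached to this machine; dg.CLOUD runs in the hub.
zoo = dg.connect(dg.LOCAL, "https://hub.degirum.com/<workspace>/<zoo>", token="<AI Hub token>")
model = zoo.load_model("<your-compiled-yolo-model>")   # placeholder model name

result = model("test_image.jpg")   # run inference on a local image
print(result.results)              # detections/classifications as a list of dicts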
Hi Axelera Team, I'm exploring the Axelera Metis AI device for edge deployment and had a quick question: is it possible to run our own custom Python scripts or shell commands on the device? Or is the device strictly limited to running AI model inference pipelines through your SDK, without the ability to execute external scripts or commands? Looking forward to your guidance. Thanks in advance!
Hi, my goal is to try out a few tutorials with the latest SDK release. It looks like a Python package is not found; any clues what I could try?

System: Axelera M.2 Card with Aetina Eval System
Voyager SDK v1.3.1 (a fresh install of the SDK this time)

First I installed the latest driver:
https://software.axelera.ai/artifactory/axelera-apt-source/metis-dkms/metis-dkms_1.0.2_all.deb

Then I ran the installer with:
./install.sh --all --media --user <email addr> --token <token>

I got a few suspicious warnings and hit ‘y’ a number of times. Then the install of 186 packages completed:

[185/186] Install gstreamer1.0-rtsp
building operators
refreshing pcie and firmware
0000:01:00.0 : Device
Device 0: metis-0:1:0 1GiB m2 flver=1.2.0-rc2 bcver=1.0 clock=800MHz(0-3:800MHz) mvm=0-3:100%
Installation complete, but with unresolved issues (see above)

firefly@aetina:~/Documents/g2-testing/voyager-sdk$ source venv/bin/activate

I proceeded to try inference.py with one of the included videos as input:
(venv) firefly@aet
I am using zsh as my shell. While activating the environment, I encountered this error:

voyager-sdk git:(release/v1.3) ✗ source venv/bin/activate
_AX_set_env:4: bad substitution

Do you think providing support for shells other than bash would help?
Do reference architectures exist, and if so, which ones? Or are there benchmarks (in case of tuning 🚗)... To avoid any misunderstanding, this is for a Proof-Of-Concept 😎
Why Voice Search SEO is the Future of Mobile Marketing

1. The Mobile-First Era is Already Here
The mobile revolution has reshaped how we live, work, and interact with technology. Today, most users turn to their smartphones not just to browse, but to speak to search engines. Whether it's asking Siri for the nearest coffee shop or commanding Google Assistant to play a song, voice interaction has become second nature. With over 70% of internet traffic now coming from mobile devices, optimizing for mobile-first behavior is no longer optional. Consumers are increasingly looking for faster, hands-free ways to access information, making voice search a major player in modern marketing.

2. Why Voice Search is Taking Over
Voice search isn't just a novelty; it's a necessity. People now expect their phones to understand and respond to them conversationally. This shift is driven by:
- Convenience: Talking is faster than typing.
- Accuracy: Voice recognition accuracy has improved drastically.
- Contextual se
I've used the yolov5s.pt file provided by the official YOLOv5 repository and converted it to ONNX with this command:

python3 export.py --weights yolov5s.pt --imgsz 640 --batch-size 1 --include onnx --opset 17

I've then tried to compile this with this command:

compile -i /home/axelera/Vision.AxeleraTesting/data/yolov5s.onnx -o /home/axelera/Vision.AxeleraTesting/data/compile --overwrite

I got this output:

09:49:45 [INFO] Dump used CLI arguments to: /home/vintecc/axelera/Vision.AxeleraTesting/data/compile/cli_args.json
09:49:45 [INFO] Dump used compiler configuration to: /home/vintecc/axelera/Vision.AxeleraTesting/data/compile/conf.json
09:49:45 [INFO] Input model has static input shape(s): ((1, 3, 640, 640),). Use it for quantization.
09:49:45 [INFO] Data layout of the input model: NCHW
09:49:45 [INFO] Using dataset of size 100 for calibration.
09:49:45 [INFO] In case of compilation failures, turn on 'save_error_artifact' and share the archive with Axelera AI.
09:49:45 [INFO] Quantizing
Hi again @Spanner! 😊 I need to run inference on an image obtained from byte data stored in a memory-mapped space (https://docs.python.org/3/library/mmap.html); after converting the bytes I get an image. I've seen the sources available for create_inference_stream and I know that I can run inference on images (https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.2.5/docs/tutorials/application.md), but I have to specify the /path/to/file, and in this case I won't have that information.

In the code I'll have these lines, where I read the bytes from the memory space:

frame_size = image_width * image_height * 3  # RGB
frame_bytes = mm.read(frame_size)

And then I get the new image:

frame = np.frombuffer(frame_bytes, dtype=np.uint8).reshape((image_height, image_width, 3))

Is there any way to run inference on “frame”? Thank you!
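In case it helps to see the whole thing in one place, this is the workaround I am considering (a sketch, not necessarily the intended API; the file name and frame dimensions are just examples): rebuild the frame from the memory-mapped bytes, save it to a temporary file, and hand that path to create_inference_stream. Writing every frame to disk is clearly not ideal, which is why I am asking whether "frame" can be passed directly.

import mmap
import tempfile

import numpy as np
from PIL import Image

image_width, image_height = 640, 480          # assumed dimensions for the example

with open("frames.bin", "r+b") as f:          # hypothetical file backing the mmap
    mm = mmap.mmap(f.fileno(), 0)
    frame_bytes = mm.read(image_width * image_height * 3)   # one RGB frame

# Rebuild the image exactly as in my snippet above.
frame = np.frombuffer(frame_bytes, dtype=np.uint8).reshape((image_height, image_width, 3))

# Workaround: persist the frame so it has a /path/to/file again.
tmp = tempfile.NamedTemporaryFile(suffix=".png", delete=False)
Image.fromarray(frame, mode="RGB").save(tmp.name)
# tmp.name could then be used as a source for create_inference_stream,
# exactly like a normal image path.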