Hello, I'm running inference with the model unet_fcn_512-cityscapes with pipe torch-aipu on the Aetina eval board. It runs at 1.8 fps system-wide, with 500 ms of latency, although the device fps shows 11.5 fps capability; the docs also say it should reach 18 fps. I originally thought the issue was time wasted loading and decoding PNG images from the SD card, so I put them in shared memory, but the results are identical. I also tested yolov5s-v7-coco, which should reach 805 fps, but I can only achieve 214 fps. Here is the output of:

    AXELERA_USE_CL_DOUBLE_BUFFER=0 ./inference.py yolov5s-v7-coco media/traffic3_720p.mp4 --show-stats --no-display

    INFO : Deploying model yolov5s-v7-coco for 4 cores. This may take a while...
    |████████████████████████████████████████| 12:41.1
    arm_release_ver: g13p0-01eac0, rk_so_ver: 9
    ========================================================================
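One thing worth ruling out when the shared-memory copy changes nothing: PNGs in /dev/shm still pay the full decode cost on every read. A minimal hedged sketch of the alternative experiment, using OpenCV/NumPy with illustrative paths (the directory and filenames are assumptions), pre-decoding the images once so the timed loop reads raw pixels only:

    import glob
    import cv2
    import numpy as np

    # Decode every PNG once, up front, outside the measured inference loop
    frames = [cv2.imread(p) for p in sorted(glob.glob("/dev/shm/frames/*.png"))]
    # Store them as one raw array: later reads are a memcpy, not a PNG decode
    np.save("/dev/shm/frames.npy", np.stack(frames))

If system fps is unchanged even with decode removed entirely, the bottleneck is elsewhere in the host pipeline.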
Hello everyone! I am Luca Gessi, from Italy. I am a (lucky) owner of an Aetina board with an M.2 Axelera Metis. Unfortunately, the board arrived without the SDK installed. I installed it using the GitHub guide. The procedure ran into several issues and broken packages; I had to run it several times, but in the end the installation completed. From there I tried to run a simple command to test the board (as suggested in the getting started guide):

    ./inference.py --no-display yolov5s-v7-coco dataset

However, the inference failed:

    (venv) aetina@aetina:~/voyager-sdk$ ./inference.py --no-display yolov5s-v7-coco dataset
    ERROR : timeout for querying an inference
    arm_release_ver: g13p0-01eac0, rk_so_ver: 9
    INFO : Using dataset val
    INFO : Dataset 'COCO2017' split 'val' downloaded successfully to /home/aetina/.cache/axelera/data/coco
    INFO : Dataset 'COCO2017' split 'labels' downloaded successfully to /home/aetina/.cache/axelera/data/coco
    INFO : Dataset 'COCO2017' split 'annotations' downloaded successfull
I'm evaluating your devices. I really love both the European heart and the great TOPS/Watt ratio; it would be amazing for edge computing. On the other hand, our company is waiting for a GX10 from ASUS/NVIDIA with 128 GB of shared memory. The reasons why I think the Metis form factor is more interesting than PCIe would need a very long document :-) I read that your SDK supports off-loading, but from reading the forum I believe that support is limited to data only, not the core of the model. As you know, with CUDA+torch you can off-load part of the model out of GPU memory, executing a small part of the model at a time and swapping coefficients from GPU memory to host RAM and vice versa. Are you planning to improve the off-loading system in your SDK? Are you planning to introduce a Metis with more RAM?
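For clarity, the CUDA+torch pattern being described looks roughly like this — a minimal hedged sketch (the model and layer sizes are illustrative assumptions, and it assumes a CUDA device is available), where the coefficients live in host RAM and each layer is swapped onto the GPU only while it executes:

    import torch
    import torch.nn as nn

    # Illustrative stand-in for a network too large to fit in GPU memory
    layers = nn.ModuleList([nn.Linear(4096, 4096) for _ in range(8)])  # kept in host RAM

    def offloaded_forward(x, device="cuda"):
        x = x.to(device)
        for layer in layers:
            layer.to(device)   # swap this layer's coefficients into GPU memory
            x = layer(x)
            layer.to("cpu")    # swap them back out to host RAM
        return x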
Hi @emustafa, I need some help please. I tried to upgrade the firmware of my Metis M.2, but the system (Metis installed on an Orange Pi 5 Plus) gave me some errors, and now, after a few reboots, I always get the same bad response from axdevice:

    libtriton_linux.c:1082] Device communication timed out: device did not respond within 1 seconds. (0)
    WARNING: Failed to get valid board type for device metis-0:1:0 got 6
    Device 0: metis-0:1:0 board_type=unknown (not responding)

Do I have any way to recover it? Can I run some tools to diagnose the situation better? Thanks in advance.
Hello. I recently received the M.2 Metis accelerators and did a test on the boards provided by Axelera, and got results close to the published benchmarks. It was successful. I then connected these accelerators to the Orin (via the M.2 slot) in order to use them in my Jetson AGX Orin environment as well. But during the voyager-sdk installation I got a "WARNING: Failed to refresh pcie and firmware" error. The installation completes, but the accelerator device does not show up on my system. In the lspci -tv output (attached to this post) I can't see Metis there. I didn't see any information about the compatibility of these accelerators with Jetson systems (my M.2 slot is M-key). If they are compatible, can you help me fix the installation? Thanks :)
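A quick hedged diagnostic that may help narrow this down (not an official Axelera tool; run as root, and the "Axelera"/"Metis" match strings are assumptions about how lspci labels the card): force a PCIe rescan and check whether the device enumerates at all.

    import subprocess
    from pathlib import Path

    # Ask the kernel to re-enumerate the PCIe bus
    Path("/sys/bus/pci/rescan").write_text("1")

    # Look for the card in the verbose lspci listing
    out = subprocess.run(["lspci", "-v"], capture_output=True, text=True).stdout
    hits = [line for line in out.splitlines() if "Axelera" in line or "Metis" in line]
    print("\n".join(hits) if hits else "No Metis device visible on the PCIe bus")

If the card never appears even after a rescan, the problem is below the SDK, at the PCIe bring-up level.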
Hi, I'm having trouble installing the metis-dkms driver. Can you help me? The device is a NanoPC-T6 from Friendly Elec, and the operating system is Ubuntu 22.04, also provided by Friendly Elec. I'm leaving the error message and OS information below. If there is any more information I should provide, please let me know. Thank you.

The error:

    $ sudo dpkg -i metis-dkms_0.07.16_all.deb
    (Reading database ... 155441 files and directories currently installed.)
    Preparing to unpack metis-dkms_0.07.16_all.deb ...
    Deleting module metis-0.07.16 completely from the DKMS tree.
    Unpacking metis-dkms (0.07.16) over (0.07.16) ...
    Setting up metis-dkms (0.07.16) ...
    Loading new metis-0.07.16 DKMS files...
    Building for 6.1.99
    Building for architecture aarch64
    Building initial module for 6.1.99
    ERROR (dkms apport): kernel package linux-headers-6.1.99 is not supported
    Error! Bad return status for module build on kernel: 6.1.99 (aarch64)
    Consult /var/lib/dkms/metis/0.07.16/build/make.log for more information.
    dpkg: error pro
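A hedged first check for this kind of DKMS failure (assuming the standard DKMS layout; the log path comes from the error above): confirm that headers matching the running kernel are actually installed, and read the tail of the build log the error points to.

    import subprocess
    from pathlib import Path

    # The DKMS build needs headers that exactly match the running kernel
    kernel = subprocess.run(["uname", "-r"], capture_output=True, text=True).stdout.strip()
    result = subprocess.run(["dpkg", "-l", f"linux-headers-{kernel}"],
                            capture_output=True, text=True)
    print(result.stdout or f"linux-headers-{kernel} is not installed")

    # The actual compile error is at the end of the DKMS make log
    log = Path("/var/lib/dkms/metis/0.07.16/build/make.log")
    if log.exists():
        print("\n".join(log.read_text().splitlines()[-20:]))

Vendor kernels like FriendlyElec's 6.1.99 often ship without a matching linux-headers package in the apt repos, which is exactly the situation the "not supported" message describes.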
I've read a number of posts and I'm still not sure. I have a Minisforum UM890 Pro with a Ryzen 9 8845s and 64 GB RAM, running Windows. I'm experimenting with various hardware and setups, and am wondering whether this would help inference speed if used in my M.2 slot. I have USB4 and OCuLink ports, so I could also use an external PCIe adapter. Is it plug-and-play, or do I need to use the SDK to configure it? TIA!
I am using the Metis M.2 card on my host PC. It is detected using `lspci` and `axdevice`. I am able to run inference using `./inference.py yolov5m-v7-coco-tracker usb:0`. However, when I try to use the AxRuntime API (Python) I get the following:

    from axelera.runtime import Context

    context = Context()
    model_path = "/home/user/axelera/testing/data/yolov5/compile/compiled_model/model.json"
    model = context.load_model(model_path)
    batch_size = 1
    connection = context.device_connect(None, batch_size)
    instance = connection.load_model_instance(model, num_sub_devices=batch_size, aipu_cores=batch_size)

    [ERROR][axeDeviceMemoryAllocate]: Not enough memory: free memory 1531904, request memory 16520704.
    [ERROR][axeDeviceMemAlloc]: Device memory allocate failed: size 16520704.
    [ERROR][axeMemAllocDevice]: Device memory allocate failed: 0x70010001.
    Error at zeMemAllocDevice(context, &desc, size, alignment, device, &addr): mem_alloc_device: 249: Exit with error code: 0x70010001 : ZE_RESULT_ERROR_NOT_AVAILAB
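For scale, the numbers in that allocation error already tell part of the story: the runtime asked for roughly ten times more device memory than was left free, so something had claimed nearly all of it before load_model_instance ran.

    # Figures taken directly from the error message above
    free_bytes, requested_bytes = 1_531_904, 16_520_704
    print(f"free: {free_bytes / 2**20:.1f} MiB, requested: {requested_bytes / 2**20:.1f} MiB")
    # -> free: 1.5 MiB, requested: 15.8 MiB

A leftover process still holding the device (for example, an inference.py run that did not exit cleanly) would produce exactly this picture, so it is worth checking before assuming the model itself is too large.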
Hi again @Spanner 😊 I'm doing other tests, and I want to use the inference.py script to detect people in a stream via RTSP. I'm using the command:

    ./inference.py yolov8s-coco-onnx 'rtsp…' --pipe gst --frame-rate 0 --rtsp-latency 0 --show-stream-timing

With --show-stream-timing I get a latency of almost 2000 ms. I'd like to know whether this latency is just in the stream or whether it includes the inference time, and what I can do to get the stream in real time. Thank you!
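One hedged way to separate the two: read the same RTSP feed with OpenCV alone, with no inference in the loop, and watch the frame intervals. If frames already arrive slowly or in bursts, most of the ~2000 ms is network/decode buffering rather than Metis inference time (the URL below is a placeholder for the real stream):

    import time
    import cv2

    cap = cv2.VideoCapture("rtsp://<camera-url>")  # placeholder: your stream URL
    last = time.monotonic()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        now = time.monotonic()
        print(f"frame interval: {(now - last) * 1000:.0f} ms")  # decode-only cadence
        last = now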
Hi! I'm testing the Metis M.2 on a Raspberry Pi 5 using the Ubuntu 22.04 container. I can successfully get results using the inference test suggested by the article:

    ./inference.py yolov8s-coco-onnx ./media/traffic1_480p.mp4

Unfortunately, I get around 4 FPS. Is this expected? If I profile the device and the host separately (using --show-host-fps and --show-device-fps), I see that the device is running at ~800 FPS while the host is the bottleneck at ~4 FPS. I tried enabling GLES processing using

    export AXELERA_OPENGL_BACKEND=gles,3,1

but, unfortunately, it doesn't make much of a difference. I also tried setting the PCIe link to gen3 by adding the following to /boot/firmware/config.txt, without any luck:

    dtparam=pciex1_gen=3

For reference, these are the instructions I'm following:
- https://support.axelera.ai/hc/en-us/articles/26362016484114-Bring-up-Voyager-SDK-in-Raspberry-Pi-5
- https://support.axelera.ai/hc/en-us/articles/25953148201362-Install-Voyager-SDK-in-a-Docker-Container

Is this perfo
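A hedged way to confirm the host-decode hypothesis: time how fast the Pi 5 can decode the same clip with OpenCV alone, with no Metis involvement. If this loop also lands near 4 FPS, the bottleneck is host-side video decode/preprocessing rather than the accelerator or the PCIe link:

    import time
    import cv2

    cap = cv2.VideoCapture("./media/traffic1_480p.mp4")  # same clip as the test above
    frames, t0 = 0, time.monotonic()
    while cap.read()[0]:  # decode frames as fast as the host allows
        frames += 1
    print(f"host-only decode: {frames / (time.monotonic() - t0):.1f} FPS")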