The examples that come with the SDK mostly use pipelines that do all the work: getting the data from the source (files or a USB camera), sending it for inference, and then displaying the inference results on the images. This is great for many use cases, I am sure, and enables a high degree of efficiency. However, for many users the AI inference is just one part of a C/C++/Python application: the images are obtained via OpenCV, processed (scale/crop/etc.) in OpenCV based on application logic, and then a part of the image is sent for AI detection. Once the result is obtained, the application displays the results over the OpenCV image using its own custom logic.

So in summary, it would be great to have some examples that demonstrate this process. A simple example that shows how to use a detection model from the model zoo in C or Python, along with OpenCV for obtaining the image and displaying the results. Another good example would be running the model inference as a GStreamer pipeline. This would let developers feed images into it and get results out using tools they are already familiar with (OpenCV, GStreamer, etc.).

P.S. I am very new to Axelera/Metis, so I apologize if I missed those examples :)
Hi all! I wanted to contribute a little bit about my experience bringing up the Metis M.2 in my LattePanda Sigma. I'm an EE who is self-taught; I have worked for Harley-Davidson, Span.io, and Enel X NA, and I've done several consulting projects on synthesizers and drum machines. My interest in AI is hobby- and curiosity-based, not professional. Software engineering is a new skill I'm building.

I first tried to run Metis on the Windows 11 operating system that came with the LattePanda Sigma, and following the instructions on the git was extremely confusing. I have ADD and some other learning differences, but that wasn't the reason installing was so hard on Windows. It's because there isn't a single page with coherent instructions telling the user how to bring up the hardware in a simple step-by-step format. Making the user click between git pages on firmware, driver installation, WSL, putting Windows into test mode, multiple programming environments, etc. is painful, and it makes installing the hardware on Windows a miserable, confusing, and difficult experience. I'll probably make my own step-by-step guide for Windows at some point, since the instructions on git are confusing.

I then tried the install on an older version of Ubuntu (support 24.04, please!) and had much less trouble. I HIGHLY RECOMMEND using Ubuntu 22.04 over Windows to run the hardware and software. The instructions to install the Voyager-SDK on Ubuntu 22.04 actually worked pretty well. I was surprised.

I did run into an issue at one point after installing the SDK: I couldn't detect the Metis M.2 card. I couldn't really figure out how to install the driver from the Ubuntu installation instructions, so I had to download the .deb driver and install it in a way similar to the "instructions for installing Voyager-SDK using Docker" to get the driver working.
Once the driver and the (admittedly clunky) SDK were installed and running, the YOLO demos worked great, with the Metis M.2 card barely breaking a sweat doing inferences.
I'm developing a new approach to video, with or without AI. I'm marketing to both small and large companies, using their people as limited actors. I saw Titanium and I was very curious: can I merge dynamic AI actors into video with real actors? I'm getting the paperwork together to start, so I have time to learn. My background is in 2D & 3D animation and graphic design (degrees in both, BS & BFA). What is the learning curve?
Looks like tags are a thing on posts and questions, and there’s a tag cloud in the sidebar. I can select existing tags but can’t create new ones. If community members could add their own tags it could be a good way to categorize content going forward.