The examples that come with the SDK mostly use pipelines that do all the work: getting the data from the source (files or a USB camera), sending it for inference, and then displaying the inference results on the images. This is great for many use cases, I am sure, and enables a high degree of efficiency.

However, for many users the AI inference is part of a C/C++/Python application where the images are obtained via OpenCV, processed (scaled/cropped/etc.) in OpenCV based on application logic, and then a part of the image is sent for AI detection. Once the result is obtained, the application can display the results over the OpenCV image using its own custom logic.

So, in summary, it would be great to have some examples that demonstrate this process: a simple example that shows how to use a detection model from the model zoo in C or Python, along with OpenCV for obtaining images and displaying the results.
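To make it concrete, the kind of flow I have in mind looks roughly like the sketch below. The OpenCV parts are standard; `run_detection()` is just a made-up placeholder for whatever single-image inference call the SDK actually provides, since I don't know that API yet.

```python
import cv2

def run_detection(image):
    # Placeholder only: in a real application this would send `image` to the
    # Metis device and return a list of (x, y, w, h, label, score) boxes.
    return []

cap = cv2.VideoCapture(0)  # USB camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Application-specific logic: crop a region of interest out of the frame.
    x0, y0, x1, y1 = 200, 100, 840, 500
    roi = frame[y0:y1, x0:x1]

    # Only that region is sent for detection.
    detections = run_detection(roi)

    # Draw the results back onto the full frame with custom logic,
    # shifting the boxes by the crop offset.
    for (x, y, w, h, label, score) in detections:
        cv2.rectangle(frame, (x0 + x, y0 + y), (x0 + x + w, y0 + y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"{label} {score:.2f}", (x0 + x, y0 + y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```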

Another good example would be running the model inference as a GStreamer pipeline. This would allow developers to feed images into it and get the results out using tools they are already familiar with (OpenCV, GStreamer, etc.).
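For example, if inference were exposed as a GStreamer element, an application could push frames in and pull annotated frames back out through OpenCV's GStreamer backend. A rough sketch (the `axinference` element name is purely a placeholder for whatever element the SDK might provide, and OpenCV needs to be built with GStreamer support):

```python
import cv2

# Hypothetical pipeline: capture from a camera, run an inference element,
# and hand the resulting frames back to the application via appsink.
pipeline = (
    "v4l2src device=/dev/video0 ! videoconvert ! "
    "axinference ! "          # placeholder name, not a real element
    "videoconvert ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("pipeline output", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```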

P.S. I am very new to Axelera/Metis, so I apologize if I missed those examples :)

Hi @saadtiwana, you are right on point here. I agree that inference.py is built for maximum throughput, but it's not optimized for the ML pipeline engineer. We have axruntime, which is built exactly for that purpose. Please look at the examples here: https://github.com/axelera-ai-hub/voyager-sdk/tree/release/v1.3/examples/axruntime . I hope that helps!