Does anyone have any suggestions on how to implement SAHI inference with YOLO via the voyager-sdk to run on a Metis M2?
Hi
Does this help:
Hello
Spanner is right: we did implement SAHI in the form of tiling in 1.4, but because the interface is likely to change and there is still some functionality to be added, it remains a preview feature.
However, you can try it out with:
./inference.py yolov8s-coco media/some_high_res.mp4 --tiled 1280
where 1280 is the size of the tile in pixels.
The args that control this are here: https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.4/axelera/app/config.py#L828, and you can see there are some other command-line options to use there as well. The --tile-position option is mostly for demo purposes: by specifying left, for example, only the left half of the frame will be tiled.
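For example, to tile only the left half of the frame you could run something like:
./inference.py yolov8s-coco media/some_high_res.mp4 --tiled 1280 --tile-position left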
The main limitation at the moment is that it only works with object detection models.
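If it helps to see what the tiling is doing conceptually, here is a minimal, framework-agnostic sketch of SAHI-style slicing. It is not the voyager-sdk implementation, and the detect_fn callback is hypothetical: the frame is cut into overlapping tiles, the detector runs on each tile, and the per-tile boxes are shifted back into full-frame coordinates and merged with NMS.

# Minimal sketch of SAHI-style tiled detection, independent of voyager-sdk.
# `frame` is an H x W x C image array (e.g. numpy), and `detect_fn` is a
# hypothetical callback returning [(x1, y1, x2, y2, score, class_id), ...]
# in tile coordinates -- substitute whatever detector you are using.

def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2, ...) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(dets, thresh=0.5):
    # Greedy per-class non-maximum suppression on the merged detections.
    dets = sorted(dets, key=lambda d: d[4], reverse=True)
    kept = []
    for d in dets:
        if all(d[5] != k[5] or iou(d, k) < thresh for k in kept):
            kept.append(d)
    return kept

def tile_starts(length, tile, step):
    # Tile start offsets covering the full extent; the last tile is clamped to the edge.
    if length <= tile:
        return [0]
    starts = list(range(0, length - tile, step))
    starts.append(length - tile)
    return starts

def tiled_inference(frame, detect_fn, tile=1280, overlap=0.2):
    # Run the detector on overlapping tiles and merge results in frame coordinates.
    h, w = frame.shape[:2]
    step = max(1, int(tile * (1 - overlap)))
    dets = []
    for y in tile_starts(h, tile, step):
        for x in tile_starts(w, tile, step):
            crop = frame[y:y + tile, x:x + tile]
            for (x1, y1, x2, y2, score, cls) in detect_fn(crop):
                dets.append((x1 + x, y1 + y, x2 + x, y2 + y, score, cls))
    return nms(dets)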
HTH, and please let us know how it works for you!
Sam
Hi,
It works well, thank you very much.