Does anyone have any suggestions on how to implement SAHI inference with Yolo via voyager-sdk to run on metis m2?

Hi ​@Giodst ! I’m not really familiar with SAHI, if I’m honest, but from what I can tell after some superficial googling, it sounds a lot like the tiling feature in Voyager? Conceptually, if nothing else 😄

Does this help:

 


Hello ​@Giodst 

Spanner is right: we did implement SAHI-style functionality in the form of tiling in 1.4. However, since we feel the interface is likely to change and there is still some functionality to be added, it remains in preview status.

However, you can try it out with:

./inference.py yolov8s-coco media/some_high_res.mp4 --tiled 1280

where 1280 is the size of each tile in pixels.
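In case it helps with intuition, here’s a rough conceptual sketch of what SAHI-style tiling does: slice the frame into overlapping tiles, run the detector on each tile, and shift the resulting boxes back into full-frame coordinates. This is just an illustration with hypothetical helper names, not the Voyager implementation:

```python
# Conceptual sketch of SAHI-style tiled inference (NOT the Voyager-SDK code).
# Assumes a detect(x, y, tile_size) callable that returns boxes for one tile
# as (x1, y1, x2, y2, score) tuples in tile-local coordinates.

def make_tiles(width, height, tile_size, overlap=0.2):
    """Return (x, y) top-left corners covering the frame with some overlap."""
    step = int(tile_size * (1 - overlap))
    xs = list(range(0, max(width - tile_size, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - tile_size, 0) + 1, step)) or [0]
    # Make sure the right and bottom edges of the frame are covered.
    if xs[-1] + tile_size < width:
        xs.append(width - tile_size)
    if ys[-1] + tile_size < height:
        ys.append(height - tile_size)
    return [(x, y) for y in ys for x in xs]

def tiled_detect(frame_w, frame_h, tile_size, detect):
    """Run detect() per tile and shift boxes back to frame coordinates."""
    boxes = []
    for x, y in make_tiles(frame_w, frame_h, tile_size):
        for (bx1, by1, bx2, by2, score) in detect(x, y, tile_size):
            boxes.append((bx1 + x, by1 + y, bx2 + x, by2 + y, score))
    # A real pipeline would apply cross-tile NMS here to merge duplicates
    # from the overlapping regions.
    return boxes
```

For a 1920x1080 frame with 1280-pixel tiles, for example, this yields two overlapping tiles side by side, which matches the general idea of running a detector trained on smaller inputs over a high-resolution frame.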

The args to control this are here: https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.4/axelera/app/config.py#L828 and you can see there are some other command-line options to use there as well. The --tile-position option is mostly for demo purposes; by specifying left, for example, only the left half of the frame will be tiled.

The main limitation at the moment is that it only works with object detection models.

HTH, and please let us know how it works for you!

Sam


Hi,
It works well, thank you very much.