
Hi again @Spanner! 😊

I need to run inference on an image obtained from byte data stored in a memory-mapped region (https://docs.python.org/3/library/mmap.html); after converting the bytes I get an image.

I've seen the sources available for create_inference_stream and I know that I can run inference on images (https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.2.5/docs/tutorials/application.md), but I have to specify the /path/to/file. In this case I won't have that information.

In the code I'll have this line where I read the bytes in the memory space:

    frame_size = image_width * image_height * 3  # RGB
    frame_bytes = mm.read(frame_size)
And then I get the new image:
    frame = np.frombuffer(frame_bytes, dtype=np.uint8).reshape((image_height, image_width, 3))
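For reference, here is a self-contained round-trip of that read-and-reshape, using an anonymous mmap as a stand-in for the real shared-memory region (the dimensions and the fake producer write are placeholders, not from the actual application):

```python
import mmap

import numpy as np

# Placeholder dimensions; in the real application these come from the producer.
image_width, image_height = 4, 3
frame_size = image_width * image_height * 3  # RGB, one byte per channel

# Anonymous mapping as a stand-in for the real shared-memory region.
mm = mmap.mmap(-1, frame_size)
mm.write(bytes(range(frame_size)))  # pretend the producer wrote a frame
mm.seek(0)

frame_bytes = mm.read(frame_size)
frame = np.frombuffer(frame_bytes, dtype=np.uint8).reshape((image_height, image_width, 3))

print(frame.shape)     # (3, 4, 3) -> rows, columns, channels
print(frame[0, 0])     # first pixel: [0 1 2]
```

Note that `np.frombuffer` gives a read-only view over the bytes; call `frame.copy()` if you need a writable array or if the mmap region may be overwritten while you still hold the frame.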

Is there any way to infer “frame”?

Thank you!

Ah, great question ​@sara! This is a really interesting use case, and I’m not sure I’ve seen that come up before! Let me quickly ask around, and see what we can find out about doing this 👍


Hi ​@sara! Sorry for the delay, but I was just chatting with ​@jaydeep.de and he says you’re on the right track with reading the image from memory. For your use case — inferring from a NumPy array without a file path — take a look at this: 

https://github.com/axelera-ai-hub/voyager-sdk/tree/release/v1.2.5/examples/axruntime

These examples show how to load a compiled model and perform inference directly on in-memory data using the low-level AxRuntime API, so it sounds like a good match for your setup. You shouldn't need to save the image to disk at all.
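The linked examples cover the AxRuntime calls themselves; what usually remains on your side is getting the NumPy frame into the layout the compiled model expects. Here is a minimal sketch of that NumPy-side preparation, assuming a float32 NCHW input scaled to [0, 1]. The function name and these layout/dtype assumptions are illustrative only; check your model's actual input spec:

```python
import numpy as np

def prepare_input(frame: np.ndarray) -> np.ndarray:
    """Illustrative pre-processing for a compiled model.

    Assumes the model wants a float32 NCHW tensor scaled to [0, 1];
    verify against your compiled model's actual input specification.
    """
    x = frame.astype(np.float32) / 255.0   # uint8 -> float32 in [0, 1]
    x = np.transpose(x, (2, 0, 1))         # HWC -> CHW
    return np.expand_dims(x, axis=0)       # add batch dimension -> NCHW

# e.g. a 640x480 RGB frame read from the mmap region
frame = np.zeros((480, 640, 3), dtype=np.uint8)
tensor = prepare_input(frame)
print(tensor.shape, tensor.dtype)  # (1, 3, 480, 640) float32
```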

Let me know how it goes!


I'm having some doubts trying to use this code base for object detection with YOLOv8.

Do you have any examples?

 



Not beyond the examples in that folder at the moment, unfortunately. Those scripts are set up for classification rather than detection - is it detection you're looking for?

That said, they should still be useful for showing how to load a model and run inference from memory, which is what you were looking to achieve. And if you're working with a YOLOv8 model that's already compiled with the Voyager SDK, you could reuse the same runtime approach.

Perhaps a few additional details about your setup (host, OS, project objective and such) might also help us to find the right path 🙂

