
Hi all,

I’m a docker noob, so please bear with me if this is a stupid question.

To get the SDK operational on my Ubuntu 24.04 system I followed the Install Voyager SDK in a Docker Container guide and got as far as test-running some inferences. Alas, while I expected the videos to be displayed in a dedicated window at full resolution, for me they are rendered in the active text console (ncurses-based, I assume), which is obviously coarse and slow.

I tried to find out how to enable graphics output from within Docker, but all I learned was that Docker is essentially meant to be text-only and that graphics are usually provided via a web browser viewing pages served from within the container.

So to put it bluntly: the videos showing the Voyager SDK performing object detection at exhibitions at full resolution and full frame rate, are those run from within Docker or from a native Ubuntu 22.04 system?

If it is possible to configure Docker to produce that video output, a mini how-to would help a lot.

 

Hi @zefir!

As far as I know, by default Docker containers don't have access to your host's graphical display, so if you run the Voyager SDK inside Docker without a bit of extra configuration, I don't think it can open proper video output windows.

Let me ask some of the team to find out if that's definitely the case, and what we can try if so 👍


Hey @zefir! So, it sounds like it may depend on whether you're using the X11 or Wayland display protocol. The former is more common on Ubuntu 22.04, while the latter is more common on 24.04, so it's likely to be the latter in your case, by the looks of it.

This should tell you which you have:

echo $XDG_SESSION_TYPE
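(It should print either x11 or wayland.)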

Running the following lets you use the display inside the container if you have X11:

xhost +local:root
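On top of that, the container itself normally needs the host's DISPLAY variable and the X11 socket passed through when it's started. A rough sketch of what that could look like (the image name below is just a placeholder, so substitute whatever the install guide actually has you launch):

# placeholder image name on the last line; use your actual SDK image or launch script
docker run -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    your-voyager-sdk-image

With the socket mounted and DISPLAY set, GUI windows opened inside the container should appear on the host desktop.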

If it’s Wayland, let’s explore further!
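If it does turn out to be Wayland, one common approach (untested on my side with the Voyager SDK) is to pass the Wayland socket through instead, again with the image name as a placeholder:

# placeholder image name on the last line; use your actual SDK image or launch script
docker run -it --rm \
    -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
    -e WAYLAND_DISPLAY=$WAYLAND_DISPLAY \
    -v $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY \
    your-voyager-sdk-image

But let's confirm which session type you're on first.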


@Spanner: upgrading to voyager-sdk-1.3 fixed the issue for me. I'm now seeing the real-time, full-resolution graphics with gauges and detection boxes, and it makes all the difference, since seeing is believing.

 

Thanks 


Ah, awesome, thanks for letting me know, @zefir! That's great to hear.

How’s the rest of the project going?


How’s the rest of the project going?

I'm still in the evaluation phase, since I can't use the Metis card for long stretches until the noise issue is fixed (see my other thread).

Once that's done, my goal is to misuse the chip for something it was not intended for: not building edge AI, but the most energy-efficient offline inference server. The goal is to minimize W/token at the system level, and if D-IMC delivers on its promise of being 10x more energy efficient than a GPU, that's where it will culminate long term, since in due time AI will be limited not by processing power but by available energy.

