
Hi, I'm working on a Raspberry Pi 5, and I have the Voyager SDK inside an Ubuntu 22.04 container, where it works. I wanted to try running inference on a stream using GStreamer.
First, I tried to access the stream, which I managed with the following command:

GST_PLUGIN_PATH=/voyager-sdk/operators/lib AXELERA_DEVICE_DIR=../opt/axelera/device-1.2.5-1/omega LD_LIBRARY_PATH=/voyager-sdk/operators/lib:/opt/axelera/runtime-1.2.5-1/lib:$LD_LIBRARY_PATH gst-launch-1.0 rtspsrc location='rtsp://…..' latency=200 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink


Now I wanted to try the axinferencenet plugin, but I'm having problems. I've tested various commands and formats, such as:


GST_PLUGIN_PATH=/voyager-sdk/operators/lib AXELERA_DEVICE_DIR=../opt/axelera/device-1.2.5-1/omega LD_LIBRARY_PATH=/voyager-sdk/operators/lib:/opt/axelera/runtime-1.2.5-1/lib:$LD_LIBRARY_PATH gst-launch-1.0 rtspsrc location='rtsp://...' latency=200 ! rtph264depay ! avdec_h264 ! videoconvert ! video/x-raw,format=BGRA !  axtransform lib=libtransform_colorconvert.so options=format:rgba !  axinferencenet model=build/yolov7-coco/yolov7-coco/1/model.json ! autovideosink


and I'm getting the error:


Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Pipeline is PREROLLED ...
Prerolled, waiting for progress to finish...
Progress: (connect) Connecting to rtsp://...
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Redistribute latency...
Progress: (request) Sending PLAY request
Redistribute latency...
Progress: (request) Sent PLAY request
terminate called after throwing an instance of 'std::runtime_error'
  what():  Failed to get platform ID! Error = -1001


Could you help me? Thank you! :)

Hi @sara,

Just to confirm:

If you are inside voyager-sdk, you activate the virtual environment with source venv/bin/activate and then run inference with inference.py, does it work?
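For reference, a sketch of that baseline check (the /voyager-sdk path, the yolov7-coco model name, and media/output.mp4 are stand-ins taken from this thread; adjust them to your setup):

```shell
# Sketch of the baseline check: run the Python reference pipeline first to
# confirm the device and model work before debugging GStreamer directly.
# /voyager-sdk, yolov7-coco and media/output.mp4 are assumptions from the
# thread, not fixed paths.
sdk=/voyager-sdk
if [ -d "$sdk" ]; then
    cd "$sdk"
    . venv/bin/activate
    ./inference.py yolov7-coco media/output.mp4
    result="ran inference.py"
else
    result="voyager-sdk not found at $sdk"
fi
echo "$result"
```

If this works end to end, the device, runtime, and deployed model are fine, and any remaining failure is specific to the GStreamer pipeline.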


Yes, it works perfectly! I wanted to experiment with the GStreamer plugins because I already have a project, and being able to just integrate the Axelera plugins into it would be great!
But I get this error (what(): Failed to get platform ID! Error = -1001). I looked it up, and I think the problem is that the RPi 5 has no OpenCL implementation (https://forums.raspberrypi.com/viewtopic.php?t=371760). From what I can tell, the GStreamer plugins in /voyager-sdk/operators/lib depend on OpenCL.
I don't know if my reasoning is correct, which is why I needed help.
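One way to confirm this suspicion is to check whether any OpenCL platform is registered on the system. A sketch (clinfo and the ICD vendor directory are standard OpenCL locations, not Voyager-specific; clinfo may need installing first):

```shell
# Sketch: check whether any OpenCL platform is visible on this system.
# clinfo is the standard diagnostic tool; the ICD vendor directory is a
# fallback when clinfo is not installed.
if command -v clinfo >/dev/null 2>&1; then
    # rough count: clinfo -l prints one line per platform/device
    platforms=$(clinfo -l 2>/dev/null | wc -l)
else
    # each registered OpenCL implementation drops an .icd file here
    platforms=$(ls /etc/OpenCL/vendors/ 2>/dev/null | wc -l)
fi
if [ "$platforms" -gt 0 ]; then
    echo "OpenCL appears to be available ($platforms entries)"
else
    echo "no OpenCL platform visible; 'Failed to get platform ID' is expected"
fi
```

On a stock RPi 5 this should report no platform, matching the -1001 error (the value clGetPlatformIDs returns when no platform can be found).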



Hi @sara,

You are right.

OpenCL is not officially supported on the Raspberry Pi 5. We are exploring ways to enable OpenCL on the RPi 5, but that is something we cannot disclose more about at the moment.


The error occurs because you are attempting to use libtransform_colorconvert.so, which is an OpenCL element, and no OpenCL is available. One of our experts mentioned that it is not required anyway: the videoconvert immediately before the color convert can output RGBA directly, removing the need for the axtransform.
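For illustration, a sketch of the simplified pipeline (the RTSP URL placeholder and model path are taken from the thread; note that axinferencenet will still need its preprocess/postprocess options before this runs end to end):

```shell
# Sketch only: the same source pipeline with the OpenCL axtransform removed,
# letting videoconvert negotiate RGBA directly via a caps filter.
# URL and model path are placeholders from the thread.
PIPELINE="rtspsrc location=rtsp://... latency=200 \
 ! rtph264depay ! avdec_h264 \
 ! videoconvert ! video/x-raw,format=RGBA \
 ! axinferencenet model=build/yolov7-coco/yolov7-coco/1/model.json \
 ! fakesink sync=false"
echo "gst-launch-1.0 $PIPELINE"
```

The caps filter video/x-raw,format=RGBA after videoconvert forces the CPU-side conversion, so no OpenCL element is involved.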

Our applications expert mentioned that once that is done, it will still fail due to several missing options on the axinferencenet element.

At the moment, the easiest approach is to use the YAML files. However, this is great feedback for us: it might be interesting to provide a tool that takes our generated low-level YAML and outputs a gst-launch command line from it, so that users can experiment with their pipelines.


I'll look into yaml files then! thanks 😊


No problem @sara,

Here is some documentation that should be very helpful:

https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.2.5/ax_models/tutorials/general/tutorials.md


I recommend taking a look at it.

Best,

Victor


Hi @sara,

We have an example for YOLOv5s with the axinferencenet plugin, please see [1]. For a minimal working workflow to use/test axinferencenet with the gst-launch binary, we can do:

./deploy.py yolov8s-coco-onnx --aipu-cores=1
./inference.py \
    --pipe="gst" \
    --aipu-cores=1 \
    --disable-vaapi \
    --disable-opencl \
    --no-display \
    yolov8s-coco-onnx \
    media/output.mp4

The above performs the inference and also generates the GStreamer pipeline YAML at ${AXELERA_FRAMEWORK}/build/yolov8s-coco-onnx/logs/gst_pipeline.yaml, which maps almost 1:1 to gst-launch syntax, as shown below:

GST_PLUGIN_PATH=${AXELERA_FRAMEWORK}/operators/lib \
AXELERA_DEVICE_DIR=/opt/axelera/device-1.2.5-1/omega \
LD_LIBRARY_PATH=${AXELERA_FRAMEWORK}/operators/lib:/opt/axelera/runtime-1.2.5-1/lib:$LD_LIBRARY_PATH \
gst-launch-1.0 \
filesrc \
location=/home/ubuntu/.cache/axelera/media/output.mp4 \
! qtdemux \
! h264parse \
! avdec_h264 \
! videorate ! video/x-raw,framerate=30/1 \
! videoconvert \
! video/x-raw,format=RGBA \
! axinplace \
lib="${AXELERA_FRAMEWORK}/operators/lib/libinplace_addstreamid.so" \
mode="meta" \
options="stream_id:0" \
! queue \
max-size-buffers=4 \
max-size-time=0 \
max-size-bytes=0 \
! axinferencenet \
model="/home/ubuntu/v125/voyager-sdk/build/yolov8s-coco-onnx/yolov8s-coco-onnx/1/model.json" \
devices="metis-0:1:0" \
double_buffer=true \
dmabuf_inputs=true \
dmabuf_outputs=true \
num_children=0 \
preprocess0_lib="${AXELERA_FRAMEWORK}/operators/lib/libtransform_resize.so" \
preprocess0_options="width:640;height:640;padding:114;to_tensor:1;letterbox:1;scale_up:1" \
preprocess1_lib="${AXELERA_FRAMEWORK}/operators/lib/libinplace_normalize.so" \
preprocess1_options="mean:0.;std:1.;simd:avx2;quant_scale:0.003921568859368563;quant_zeropoint:-128" \
preprocess1_mode="write" \
preprocess2_lib="${AXELERA_FRAMEWORK}/operators/lib/libtransform_padding.so" \
preprocess2_options="padding:0,0,1,1,1,15,0,0;fill:0" \
preprocess2_batch="1" \
postprocess0_lib="${AXELERA_FRAMEWORK}/operators/lib/libdecode_yolov8.so" \
postprocess0_options="meta_key:detections;classes:80;confidence_threshold:0.25;scales:0.07057229429483414,0.0674663856625557,0.0697908028960228,0.09414555132389069,0.152308851480484,0.17069441080093384;padding:0,0,0,0,0,0,0,0|0,0,0,0,0,0,0,0|0,0,0,0,0,0,0,0|0,0,0,0,0,0,0,48|0,0,0,0,0,0,0,48|0,0,0,0,0,0,0,48;zero_points:-67,-58,-42,147,106,110;topk:30000;multiclass:0;classlabels_file:/tmp/tmpzegeue43;model_width:640;model_height:640;scale_up:1;letterbox:1" \
postprocess0_mode="read" \
postprocess1_lib="${AXELERA_FRAMEWORK}/operators/lib/libinplace_nms.so" \
postprocess1_options="meta_key:detections;max_boxes:300;nms_threshold:0.45;class_agnostic:1;location:CPU" \
! fakesink sync=false

Hope this helps!
Please feel free to reach out if you encounter any issues or have any other questions!
Thanks!
---

[1] https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.2.5/docs/reference/pipeline_operators.md#example-yolov5s-pipeline

