
Hello,
I'm using the Metis M.2 accelerator with an Aetina board and Voyager SDK 1.4.0. I've been able to keep the Metis M.2 card usable by issuing a reboot after resets, and that's good enough to work with for now.

 

My code and model have been coming along, and I've now reached the stage where I want to combine the two models into a 2-stage pipeline.

The aim is to have:

(a) vehicle detection, followed by

(b) license plate detection. 

 

For (a), I'm using an existing detector supplied with the SDK, yolov5m-v7-coco-tracker, and as a test I can successfully extract bounding-box coordinates and images from it.
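
To make the question concrete, here's a minimal sketch of the sort of thing my stage-(a) test does. It follows the SDK's inference examples, but the exact argument names, result attributes and the test video path are my approximations rather than verbatim code, so please correct me if any of them are wrong:

    # Minimal sketch of my stage-(a) test (names approximate, not verbatim code).
    from axelera.app.stream import create_inference_stream

    # yolov5m-v7-coco-tracker is the reference pipeline shipped with the SDK;
    # the source here is just a hypothetical test clip.
    stream = create_inference_stream(
        network="yolov5m-v7-coco-tracker",
        sources=["/axelera/myapp/test.mp4"],
    )

    for frame_result in stream:
        # I pull the vehicle boxes off the per-frame result here; the attribute
        # name is my assumption based on the task name in the reference YAML.
        for obj in frame_result.detections:
            print(obj)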

For (b), I'm using a model I had to compile from ONNX format (using compile -i).

 

The above is fine so far, but now I'm trying to chain the two models in a single vehicles-then-plates.yaml file, and that's where it fails. When I type ./deploy vehicles-then-plates, it exits silently with no error. It may be a path issue, or it may be something else; I think I need help with the contents of that file.

I'm using these folders:

/axelera/voyager-sdk  - contains the SDK
/axelera/compiled  - the folder in which I typed compile -i /axelera/lp_yolov8n.onnx -o my_lp_model_compiled
/axelera/compiled/my_lp_model_compiled  - the folder that got created during the compile process
/axelera/compiled/my_lp_model_compiled/compiled_model  - contains model.json, pool_l2_const.bin, etc.
/axelera/myapp   - contains the vehicles-then-plates.yaml file, and is also where I will place my Python app

Please could someone kindly take a look at the YAML file, suggest any modifications, and explain how to deploy it so that I can run create_inference_stream from my Python code? It would be hugely appreciated.
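
For reference, this is roughly the call I'm hoping to make from /axelera/myapp once the chained pipeline deploys. The network name is what I assume should match the name field in vehicles-then-plates.yaml, and the rest uses the same assumed API as the sketch above:

    # What I'd like to end up with once ./deploy vehicles-then-plates succeeds.
    from axelera.app.stream import create_inference_stream

    stream = create_inference_stream(
        network="vehicles-then-plates",       # assumed: matches the YAML's name field
        sources=["/axelera/myapp/test.mp4"],  # hypothetical test source
    )

    for frame_result in stream:
        # I'm expecting one set of results per task defined in the YAML: the
        # vehicle boxes from stage (a) and the plate boxes from stage (b).
        # The exact attribute names are part of what I'm unsure about.
        ...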

 

Here are Microsoft OneDrive links:

my_lp_model_compiled.tar.gz (the folder created by compile -i)

vehicles-then-plates.yaml (the YAML file for the 2-stage pipeline)

myapp.zip (the Python code)

lp_yolov8n.onnx (the ONNX file, if needed)

 

Many thanks!!