
Hi, I’m trying to deploy and run inference with a YOLOv8 segmentation model. I tried to follow the tutorial, but I just get:
INFO: deploying model yolov8sseg-coco for 4 cores. This may take a while…
 

How long should this run? I left it on for 3.5 hours and nothing happened (except the loading bar). This seems to be a model included in your library: yolov8sseg-coco.

Any idea if I can turn on more extensive logging to see what is happening?

 

Thank you in advance!

 

 

Hi @Pepijn

To check what’s happening under the hood, you can add --verbose to your deploy.py command. Maybe that will give us an insight we can dig into?

Maybe a few deets on your host system and OS, just in case, too?


Note that, if your device is slow and you don’t want to wait for the deployment to finish, you can download prebuilt zoo models using the ./download_prebuilt.py script in the same way you use the deploy script.


Hi, thanks for the quick response. I want to check whether I can use a model from our environment rather than a prebuilt zoo model (although it is a standard model that is also included in your library).

The object detection demo from your library did work on the system (Ubuntu 22).

When running with the --verbose flag, I can see that the process is stuck at Running midendToTIR 4/7 [57%].

 

I added the config files as an attachment. Thank you in advance.


Hi there @Pepijn! I don’t know if this is the cause of the hang, but it looks like in the compile_config you’re using 8 GB. I wonder if that’s a bit borderline for complex segmentation models? Are you able to bump that up to test?


Where do I change this? In the config.yaml, this is what I can find:
    extra_kwargs:
      m2:
        mvm_limitation: 57
      max_compiler_cores: 4
      compilation_config:
        split_buffer_promotion: True
        tiling_depth: 6

I don’t see anywhere where I specify using only 8 GB (there is more available on the PC).

 

Also, the model is a small one, so I don’t think it is super complex. Usually we can run this using less than 2 GB of video memory.

 

But I can try to bump it up!


Hi @Pepijn! If I remember correctly, the setting you’re looking for is in the compile_config.json file you shared, rather than in compile.yaml.

I think… 😄


The compile_config.json gets created after running deploy:

 

:axelera.app.compile: Saved compilation config to '/home/xxx/Workspace/voyager-sdk/build/yolov8n-coco/yolov8n-coco/compile_config.json'

So I guess it is defined somewhere before that?


Hi, in the end this one worked after compiling for 2 hours. This was the standard (pre-trained) and smallest YOLO segmentation model. Do you see similar times when doing a conversion, or is it normally faster?

 

Also, I tried one of our own trained models, which is slightly bigger, and got some errors. I added the output as an attachment. Hopefully you can help with this. Thanks!


Hi @Pepijn! Nice work!

Two hours does sound a bit long for the smallest YOLOv8 segmentation model. Prebuilt zoo models skip compilation entirely, and I’d expect local builds on a well-resourced system to be faster than that. I suppose long compiles might happen if the compiler hits a heavy optimisation pass or system resources are limited?

On your own model, I think the log shows an unsupported ONNX op (Upsample with certain attributes). You could check that against the full list of supported ops here:

🔗 Supported ONNX Operators

The Model Zoo & Custom Weights guide covers alignment with SDK expectations, which might also help with this. 👍
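
If it helps, here’s a minimal sketch (assuming the onnx Python package is installed; the filename is just a placeholder) for listing which op types your exported model actually contains, so you can compare them against that list:

    import onnx
    from collections import Counter

    # Load the exported model (replace with your own file)
    model = onnx.load("my_model.onnx")

    # Count every op type that appears in the graph
    ops = Counter(node.op_type for node in model.graph.node)
    for op_type, count in sorted(ops.items()):
        print(f"{op_type}: {count}")

Any op that isn’t on the supported list (or shows up with unusual attributes, like that Upsample) is a likely culprit.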


I see YOLO11 & YOLOv8 segmentation supported in your model zoo with ONNX. Do you know how it is exported? I used the simplify (True) and dynamic (False, so static) flags and opset 15 or 13; neither works.
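
For reference, assuming the standard Ultralytics exporter, this is roughly what I ran (the weights filename is just an example):

    from ultralytics import YOLO

    # Load the pretrained segmentation checkpoint
    model = YOLO("yolov8s-seg.pt")

    # Export to ONNX: static input shape, simplification on, opset 13 (also tried 15)
    model.export(format="onnx", simplify=True, dynamic=False, opset=13)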


Hi @Pepijn

At DeGirum (a SW partner of Axelera), we developed a cloud compiler for YOLO models that takes a PyTorch checkpoint and provides the final compiled assets, which can be deployed easily with our PySDK. Please see our post, Axelera Now Supported in DeGirum Cloud Compiler | Community, for more details. Hope you find this tool useful.


Hi, is PyTorch then better supported than ONNX?

Because here in the model zoo, I see many references to ONNX, e.g.: https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.3/ax_models/zoo/yolo/instance_segmentation/yolov8lseg-coco-onnx.yaml

 

However, I added these two arguments in the YAML (as is mentioned in your ONNX files, by the way).

      compilation_config:
        quantization_scheme: per_tensor_min_max
        ignore_weight_buffers: False

Seems to have solved it… Now I’ll move on to the next step of trying to use it for inference :)


Hi @Pepijn!

Glad to hear you got it compiling! Excellent work on that.

ONNX is very well supported in the Voyager SDK, and most models in the zoo are ONNX-based. The key is making sure the ONNX export matches the compiler’s expectations, as you did!

