This week I made progress on custom dataset creation, model training, fine-tuning, and deployment. I also moved forward with the pipeline schema and ONNX model compilation.
Spoiler alert: using the Axelera pipelines was straightforward thanks to the provided examples.
Progress in the Pipeline
I worked on model generation, definition, and pipeline construction, starting with the planned pipeline.
The diagram below shows the planned pipeline; the highlighted blocks mark this week's main progress:
- Fine-tuning the YOLO11M model with extended classes.
- Training a MobileNet V2 model to identify bowl food levels.
Dataset Creation and Annotation
I collected around 400 images of the environment I want to monitor, focusing mainly on my custom pet fountain and the automatic food dispenser.
Example image:
For annotation, I used Label Studio, creating six new classes:
- bowl_empty
- bowl_half
- bowl_full
- fountain_minimum
- fountain_middle
- fountain_maximum
After manually annotating all 400 images, I exported them in YOLO format for training MobileNet and fine-tuning the YOLO model.
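For reference, a minimal sketch of how such an export can be split into train and validation sets before training. The folder names follow Label Studio's YOLO export layout but are assumptions, not the exact paths I used:

```python
import random
import shutil
from pathlib import Path

# Assumed layout of the Label Studio YOLO export (adjust to your paths):
#   export/images/*.jpg  and  export/labels/*.txt
SRC = Path("export")
DST = Path("dataset")
VAL_FRACTION = 0.2

images = sorted((SRC / "images").glob("*.jpg"))
random.seed(42)
random.shuffle(images)

n_val = int(len(images) * VAL_FRACTION)
splits = {"val": images[:n_val], "train": images[n_val:]}

for split, files in splits.items():
    (DST / "images" / split).mkdir(parents=True, exist_ok=True)
    (DST / "labels" / split).mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, DST / "images" / split / img.name)
        label = SRC / "labels" / (img.stem + ".txt")
        if label.exists():  # background images may have no label file
            shutil.copy(label, DST / "labels" / split / label.name)
```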
Model Training and Fine-tuning
I created two notebooks for training and exporting the models to ONNX format:
- Yolo_Finetuning.ipynb – fine-tuning YOLO with my custom classes.
- MobileNet_Bowl_Training.ipynb – full training with my custom dataset for bowl food level detection.
A small detail:
For YOLO fine-tuning, I grouped all the bowl levels into a single custom_bowl class and, likewise, merged the fountain levels into custom_pet_fountain.
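To give an idea of what Yolo_Finetuning.ipynb roughly does, here is a hedged sketch using the Ultralytics API: remap the six exported classes to the two grouped ones, write a minimal data.yaml, fine-tune YOLO11m, and export to ONNX. The class index order, paths, and hyperparameters are illustrative assumptions, and for brevity the sketch uses a two-class dataset; if the original COCO classes are kept during fine-tuning, the new classes would instead be appended after them.

```python
from pathlib import Path
from ultralytics import YOLO

# Collapse the six level classes into the two detection classes.
# The index order is an assumption; it must match the classes.txt from the export.
REMAP = {0: 0, 1: 0, 2: 0,   # bowl_* -> custom_bowl
         3: 1, 4: 1, 5: 1}   # fountain_* -> custom_pet_fountain
for label_file in Path("dataset/labels").rglob("*.txt"):
    lines = []
    for line in label_file.read_text().splitlines():
        cls, *coords = line.split()
        lines.append(" ".join([str(REMAP[int(cls)]), *coords]))
    label_file.write_text("\n".join(lines) + "\n")

# Minimal dataset config for Ultralytics (adjust 'path' to the dataset location).
Path("dataset/data.yaml").write_text(
    "path: dataset\n"
    "train: images/train\n"
    "val: images/val\n"
    "names:\n"
    "  0: custom_bowl\n"
    "  1: custom_pet_fountain\n"
)

# Fine-tune from the pretrained YOLO11m checkpoint (illustrative hyperparameters).
model = YOLO("yolo11m.pt")
model.train(data="dataset/data.yaml", epochs=100, imgsz=640, batch=16)

# Export the best run to ONNX for compilation on the board.
best = YOLO("runs/detect/train/weights/best.pt")
best.export(format="onnx")
```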
The idea is:
- Detect the custom_bowl position in the first stage with YOLO.
- Use MobileNet to classify the bowl food level.
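And a similarly hedged sketch of that second stage, along the lines of MobileNet_Bowl_Training.ipynb: a MobileNet V2 classifier with three outputs trained on bowl crops. The bowl_crops folder layout, the ImageNet starting weights, and the hyperparameters are assumptions for illustration:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: bowl_crops/train/{bowl_empty,bowl_half,bowl_full}/*.jpg
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("bowl_crops/train", transform=tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from ImageNet weights (an assumption) and swap the head for 3 classes.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, len(train_ds.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):               # illustrative epoch count
    for x, y in train_dl:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

torch.save(model.state_dict(), "mobilenet_bowl.pt")
```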
Custom Model Deployment
The notebooks also handle ONNX export, allowing deployment to the embedded board.
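For the classifier, the ONNX export can look roughly like this; the checkpoint path and input/output names are placeholders:

```python
import torch
from torch import nn
from torchvision import models

# Rebuild the classifier architecture and load the trained weights (hypothetical path).
model = models.mobilenet_v2(weights=None)
model.classifier[1] = nn.Linear(model.last_channel, 3)
model.load_state_dict(torch.load("mobilenet_bowl.pt", map_location="cpu"))
model.eval()

# A dummy input fixes the exported graph's input shape: one 224x224 RGB crop.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "mobilenet_bowl.onnx",
    input_names=["image"], output_names=["scores"],
    opset_version=17,
)
```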
Next steps involve building my own pipeline according to the planning diagram:
Pipeline config (custom YOLO + cascade tracker)
To make it work, I adapted the pipeline to:
- Load local ONNX models instead of downloading them.
- Add the new classes to the labels file.
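As a rough illustration of the second point, assuming a plain-text labels file with one class name per line (the actual file name and format depend on the pipeline you start from), extending it can be as simple as:

```python
from pathlib import Path

# Hypothetical paths; use whichever labels file your pipeline config points to.
src = Path("labels/coco.names")
dst = Path("labels/coco_custom.names")

existing = src.read_text().splitlines()
dst.write_text("\n".join(existing + ["custom_bowl", "custom_pet_fountain"]) + "\n")
```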
Here’s a short video showing the system running, detecting the classes, and dispensing food →
Next Steps
Next week’s focus will be on bringing all the pieces together into a complete working system.
If you’re interested in more details about any part of this work (dataset creation, model training, or deployment), let me know!
The datasets aren’t on GitHub due to size limits. If you know a good hosting solution, I’d be happy to hear your suggestions.
The notebooks and pipeline files are already available here:
GitHub – Axelera.ai Summer Sidekick