Hi all!
So here's the final update on MotionFlow! It has reached a state where it runs 24/7 in my home, and has done so for several days now. Demo video and full source code with setup instructions are linked below.
The Result
MotionFlow turns IP cameras into privacy-preserving activity sensors. It runs YOLOv11-Pose on the Axelera Metis accelerator, tracks people, classifies what they're doing from pose geometry, and publishes events over MQTT - no video leaves the device, only structured state like "person sitting on couch" or "person entered through hallway door".
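The events are just small structured payloads over MQTT. As a minimal sketch of what such a payload could look like (topic and field names here are my own illustrative assumptions, not MotionFlow's actual schema; see the README for that):

```python
import json
import time

# Hypothetical event payload; field names are illustrative only.
def make_event(track_id, action, zone):
    return {
        "track_id": track_id,   # stable per-person tracker ID
        "action": action,       # e.g. "sitting", "standing", "walking"
        "zone": zone,           # named polygon zone, e.g. "couch"
        "ts": time.time(),      # event timestamp (epoch seconds)
    }

event = make_event(7, "sitting", "couch")
payload = json.dumps(event)
# An MQTT client would then publish this, e.g.:
# client.publish("motionflow/events", payload)
print(payload)
```

Keeping the payload to plain JSON state means any home-automation consumer (Home Assistant, Node-RED, etc.) can react to it without ever touching video.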
Demo video

Source code on GitHub: https://github.com/FreezerJohn/motionflow
See the README for more technical details on the project.
What I Learned
- Voyager SDK's streaming API was the way to go. Letting GStreamer handle capture/decode and the SDK manage pipelined inference was far better than pulling frames through OpenCV.
- Rule-based action classification is surprisingly robust for home use cases. I also tried an ML approach (YOLOv11-Classify on stacked skeleton sequences), but the geometric rules won on reliability with less data effort. It is by no means perfect, but very much usable.
- Door crossing at oblique camera angles was quite a hard problem. People vanish behind walls mid-crossing, so the tripwire detector predicts crossings from velocity when tracks disappear near doors.
- Building the Web UI early was invaluable. Editing polygon coordinates in YAML is painful; the visual editor made the whole development cycle much faster.
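To make the "geometric rules" point concrete, here is a toy sketch of the kind of rule involved: classifying sitting vs. standing from a single knee angle, assuming COCO-style keypoints in image coordinates. The threshold and keypoint choice are illustrative assumptions, not MotionFlow's tuned logic.

```python
import math

def knee_angle(hip, knee, ankle):
    """Angle at the knee in degrees, between the knee->hip and
    knee->ankle vectors, from (x, y) keypoints."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def classify(hip, knee, ankle, bent_below=120.0):
    """'sitting' when the knee is strongly bent, else 'standing'.
    The 120-degree threshold is a made-up example value."""
    return "sitting" if knee_angle(hip, knee, ankle) < bent_below else "standing"

print(classify((0, 0), (1, 0), (1, 1)))  # bent leg: sitting
print(classify((0, 0), (0, 1), (0, 2)))  # straight leg: standing
```

A real classifier would combine several such features (torso angle, hip height relative to the zone, temporal smoothing), but each rule stays this inspectable, which is exactly why the approach won on reliability.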
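The velocity-based tripwire idea can also be sketched in a few lines: when a track vanishes near a door line, extrapolate its last position along its velocity and check whether the predicted point lands on the other side of the line. Function names, the horizon parameter, and the geometry below are my own illustrative assumptions, not the project's implementation.

```python
def side(p, a, b):
    """Signed side of point p relative to the directed line a->b
    (2D cross product; sign tells which side of the line p is on)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def predicts_crossing(last_pos, velocity, a, b, horizon=10):
    """Extrapolate a lost track 'horizon' frames ahead at its last
    velocity; report a crossing if the predicted point ends up on the
    opposite side of the tripwire line a->b."""
    future = (last_pos[0] + velocity[0] * horizon,
              last_pos[1] + velocity[1] * horizon)
    return side(last_pos, a, b) * side(future, a, b) < 0  # sign flip = crossed

# Track lost at (4, 5), moving right toward a vertical door line at x = 5:
print(predicts_crossing((4, 5), (0.3, 0.0), (5, 0), (5, 10)))   # True
# Same position but moving away from the door:
print(predicts_crossing((4, 5), (-0.3, 0.0), (5, 0), (5, 10)))  # False
```

This handles the "person vanishes behind the wall mid-crossing" case: the decision no longer requires observing the track on both sides of the line.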
Once again, thanks to Axelera for the hardware and the Smarter Spaces Challenge - it was fun and a great motivation to push this from idea to something running in daily life. Thanks also to the community; it's been fun seeing what everyone built! I'll definitely try out some of the other projects! :)
Best,
- Jonathan
