🚀 FlowSentry-Wake — Excited to Join the Axelera AI Smarter Spaces Project Challenge! 🎉
Hi everyone!
This is Zijie Ning. We're thrilled to share that our project FlowSentry-Wake has been selected as one of the top 10 projects in the Axelera AI Smarter Spaces Project Challenge! 🎉
Thank you to the reviewers for the recognition, and to the community for creating a space where edge-AI ideas can truly come to life.
FlowSentry-Wake is an on-device intelligent space-guarding system. Our goal is to explore how a space can be understood locally—without cloud dependency—while keeping the system low-power, reliable, and resistant to simple adversarial tricks.
🔧 What Problem Are We Addressing?
Many conventional security systems struggle with a few persistent issues:
- High power usage
  Running heavy AI models 24/7 is expensive, hot, and often unrealistic for long-term deployment. By improving the energy efficiency of our embedded system, we can significantly reduce overall power consumption, which directly lowers the system's carbon footprint.
- High false-alarm rates
  Simple pixel-based motion detectors get triggered by shadows, lighting changes, or a gust of wind.
- Easy to fool
  A surprisingly simple trick can defeat many systems:
  👉 A person covering themselves with a black cloth and crawling slowly along the ground.
  Many detectors that rely on "normal upright human shapes" fail immediately.
With FlowSentry-Wake, we aim to guard a defined space (e.g. a server room, lab corridor, or sensitive office), where the system can:
- Stay in ultra-low-power standby most of the time;
- Wake up the Metis M.2 + Orange Pi 5 Plus only when meaningful activity is detected;
- Understand not only “whether something is moving”, but also “whether it is moving in a suspicious way”;
- Continue tracking even if the person alters their posture or hides under a cloth, without going “blind”;
- And do all of this fully offline, without sending data to the cloud.
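The standby/wake behavior above can be sketched as a small state machine. This is a minimal illustration under our own assumptions (class and parameter names are hypothetical, not an Axelera API): a cheap per-frame motion score keeps the system in standby until sustained activity wakes the heavy pipeline, and a quiet period puts it back to sleep.

```python
from enum import Enum

class Mode(Enum):
    STANDBY = "standby"   # low-power: only a cheap motion check runs
    ACTIVE = "active"     # accelerator awake: full detection + flow analysis

class WakeController:
    """Hypothetical two-stage wake controller (illustrative only)."""

    def __init__(self, wake_thresh=0.3, wake_frames=3, sleep_frames=30):
        self.wake_thresh = wake_thresh      # motion score that counts as activity
        self.wake_frames = wake_frames      # consecutive active frames before waking
        self.sleep_frames = sleep_frames    # consecutive quiet frames before sleeping
        self.mode = Mode.STANDBY
        self._hot = 0
        self._quiet = 0

    def update(self, motion_score: float) -> Mode:
        if self.mode is Mode.STANDBY:
            # Require several consecutive active frames to avoid waking on a flicker.
            self._hot = self._hot + 1 if motion_score >= self.wake_thresh else 0
            if self._hot >= self.wake_frames:
                self.mode, self._quiet = Mode.ACTIVE, 0
        else:
            # Drop back to standby only after a sustained quiet period.
            self._quiet = self._quiet + 1 if motion_score < self.wake_thresh else 0
            if self._quiet >= self.sleep_frames:
                self.mode, self._hot = Mode.STANDBY, 0
        return self.mode
```

The debouncing in both directions (several frames to wake, many frames to sleep) is what keeps false alarms from repeatedly powering up the accelerator.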
🌍 Application Scenarios (From Everyday Life → Higher Security)
We designed FlowSentry-Wake to be useful in a variety of “right-sized” spaces.
Starting from everyday life:
- 🏡 Backyard or garden
  Distinguishing between family members, small animals, and unfamiliar visitors.
- 🏫 School hallway or dorm entrance
  Low-power standby at night, waking up only for real activity.
Moving toward more sensitive environments:
- 🧪 Laboratories or equipment rooms
  Focusing on behavior patterns rather than just motion/no motion.
- 🖥 Data centers
  Avoiding failure against simple disguises or unusual postures (e.g., crawling, covered by cloth).
- 🏛 Small museums or exhibition spaces
  Fully offline, local monitoring without uploading video anywhere.
Ultimately, we hope FlowSentry-Wake can become a portable “space intelligence module,” not tied to any single scenario.
🔍 Why Are We Using Optical Flow (Even Though It’s Not in the Model Zoo)?
The Axelera model zoo provides excellent detection and classification models that cover most common needs.
But we realized a key issue:
Not all suspicious behavior can be judged by semantic detection alone.
For example:
- A person crawling with a cloth over their body
- Low-light or partial occlusion
- Unusual poses that fall outside normal training data
Semantic models often fail in these conditions.
Optical flow models, however, focus on how something moves, not what it is:
- Is the motion coherent and purposeful?
- Does it resemble human-like movement?
- Is it distinct from background noise?
- Does the trajectory appear abnormal?
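As a rough illustration of the cues above, simple statistics over a dense flow field can already separate coherent, purposeful motion from background noise. This is a NumPy sketch under our own assumptions; the function name, thresholds, and metrics are ours, not from the Axelera SDK or model zoo.

```python
import numpy as np

def flow_statistics(flow: np.ndarray, noise_floor: float = 0.1):
    """Illustrative motion cues from a dense optical-flow field of shape
    (H, W, 2), holding per-pixel (dx, dy) displacements.

    Returns:
      coherence: |mean vector| / mean |vector|, in [0, 1]; near 1 means the
                 moving pixels travel together (purposeful motion), near 0
                 means directions cancel out (incoherent noise).
      active_fraction: share of pixels moving faster than the noise floor.
    """
    mag = np.linalg.norm(flow, axis=-1)    # per-pixel speed
    moving = mag > noise_floor
    if not moving.any():
        return 0.0, 0.0
    mean_vec = flow[moving].mean(axis=0)   # average displacement of moving pixels
    coherence = float(np.linalg.norm(mean_vec) / mag[moving].mean())
    active_fraction = float(moving.mean())
    return coherence, active_fraction
```

A slowly crawling person produces a small but highly coherent patch of flow, whereas wind-blown shadows produce scattered, low-coherence motion, which is exactly the distinction pixel-difference detectors miss.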
Because the current model zoo doesn’t include optical flow, we saw this challenge as a good opportunity to explore:
How can a complex motion-analysis model run efficiently on the Metis M.2, and how can it be fused with semantic detectors for a more complete understanding of a space?
We hope our findings can be useful to others in the community as well.
🙌 Stay Tuned for Updates!
Over the next few weeks, we’ll be sharing:
- Progress from early experiments
- Observations and challenges in real-space testing
- Interesting behaviors when combining detection with optical flow
- How we visualize detection + motion cues in the UI
- And finally, our demo video and project summary
We’re excited to learn from the community, receive feedback, and exchange ideas.
Let’s explore together how edge AI can make our everyday spaces smarter, more reliable, and more intuitive. ✨
