
🚀FlowSentry-Wake: Selected for the Axelera AI Smarter Spaces Final 10🎉

  • December 5, 2025
  • 5 replies
  • 154 views

 

Hi everyone!

We’re Zijie Ning @mm0806son and Enmin Lin @Enmin from KU Leuven (Belgium).

We’re thrilled to share that our project FlowSentry-Wake has been selected as one of the top 10 projects in the Axelera AI Smarter Spaces Project Challenge! 🎉

Thank you to the reviewers for the recognition, and to the community for creating a space where edge-AI ideas can truly come to life.

FlowSentry-Wake is an on-device intelligent space-guarding system. Our goal is to explore how a space can be understood locally—without cloud dependency—while keeping the system low-power, reliable, and resistant to simple adversarial tricks.

 

🔧 What Problem Are We Addressing?

Many conventional security systems struggle with a few persistent issues:

  • High power usage

    Running heavy AI models 24/7 is expensive, hot, and often unrealistic for long-term deployment. Improving the energy efficiency of the embedded system cuts overall power consumption, which directly lowers the system’s carbon footprint.

  • High false-alarm rates

    Simple pixel-based motion detectors get triggered by shadows, lighting changes, or a gust of wind.

  • Easy to fool

    A surprisingly simple trick can defeat many systems:

    👉 A person covering themselves with a black cloth and crawling slowly on the ground.

    Many detectors that rely on “normal upright human shapes” fail immediately.

With FlowSentry-Wake, we aim to build a system for a defined space (e.g. a server room, lab corridor, or sensitive office) that can:

  • Stay in ultra-low-power standby most of the time;
  • Wake up the Metis M.2 + Orange Pi 5 Plus only when meaningful activity is detected;
  • Understand not only “whether something is moving”, but also “whether it is moving in a suspicious way”;
  • Continue tracking even if the person alters their posture or hides under a cloth, without going “blind”;
  • And do all of this fully offline, without sending data to the cloud.
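To make the first two points concrete, here is a minimal sketch of the kind of standby gate we have in mind: cheap frame differencing that decides whether the Metis M.2 needs to be woken at all. The thresholds are illustrative placeholders, not tuned values from our system.

```python
import numpy as np

def should_wake(prev_frame, frame, pixel_thresh=25, area_frac=0.01):
    """Decide whether to wake the main accelerator.

    Compares two grayscale frames; if the fraction of pixels whose
    absolute difference exceeds pixel_thresh is larger than area_frac,
    we treat it as meaningful activity. Thresholds are illustrative.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = (diff > pixel_thresh).mean()
    return bool(changed > area_frac)

# Static scene: stay in low-power standby
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
print(should_wake(prev, curr))   # False

# A bright region appears: wake the accelerator
curr2 = prev.copy()
curr2[40:80, 60:100] = 200
print(should_wake(prev, curr2))  # True
```

A gate like this runs comfortably on the Orange Pi’s CPU, so the heavy semantic and flow models only spin up when there is something worth looking at.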

 

🌍 Application Scenarios (From Everyday Life → Higher Security)

We designed FlowSentry-Wake to be useful in a variety of “right-sized” spaces.

Starting from everyday life:

  • 🏡 Backyard or garden

    Distinguishing between family members, small animals, or an unfamiliar visitor.

  • 🏫 School hallway or dorm entrance

    Low-power standby at night, waking up only for real activity.

Moving toward more sensitive environments:

  • 🧪 Laboratories or equipment rooms

    Focusing on behavior patterns rather than just motion/no motion.

  • 🖥 Data centers

    Avoiding failure against simple disguises or unusual postures (e.g., crawling, covered by cloth).

  • 🏛 Small museums or exhibition spaces

    Fully offline, local monitoring without uploading video anywhere.

Ultimately, we hope FlowSentry-Wake can become a portable “space intelligence module,” not tied to any single scenario.

 

 

🔍 Why Are We Using Optical Flow (Even Though It’s Not in the Model Zoo)?

The Axelera model zoo provides excellent detection and classification models that cover most common needs.

But we realized a key issue:

Not all suspicious behavior can be judged by semantic detection alone.

For example:

  • A person crawling with a cloth over their body
  • Low-light or partial occlusion
  • Unusual poses that fall outside normal training data

Semantic models often fail in these conditions.

Optical flow models, however, focus on how something moves, not what it is:

  • Is the motion coherent and purposeful?
  • Does it resemble human-like movement?
  • Is it distinct from background noise?
  • Does the trajectory appear abnormal?
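To make “coherent and purposeful” concrete, here is a toy metric of our own (not an Axelera API) that scores a dense flow field by how aligned its motion vectors are: uniform motion scores near 1.0, random jitter near 0.0.

```python
import numpy as np

def motion_coherence(flow):
    """Score how coherent a dense optical flow field is.

    flow: array of shape (H, W, 2) with per-pixel (dx, dy).
    Returns |mean vector| / mean |vector|: close to 1.0 for
    purposeful, uniform motion; close to 0.0 for random noise,
    because the vectors cancel out when averaged.
    """
    mags = np.linalg.norm(flow, axis=-1)
    mean_mag = mags.mean()
    if mean_mag == 0:
        return 0.0
    mean_vec = flow.reshape(-1, 2).mean(axis=0)
    return float(np.linalg.norm(mean_vec) / mean_mag)

rng = np.random.default_rng(0)
coherent = np.tile([1.0, 0.0], (64, 64, 1))  # everything drifts right
noise = rng.normal(size=(64, 64, 2))         # random per-pixel jitter

print(motion_coherence(coherent) > 0.95)  # True
print(motion_coherence(noise) < 0.3)      # True
```

A real system would compute this per region and over time, but even this crude global score separates a crawling person from wind-blown shadows better than raw pixel change.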

Because the current model zoo doesn’t include optical flow, we saw this challenge as a good opportunity to explore:

How can a complex motion-analysis model run efficiently on the Metis M.2, and how can it be fused with semantic detectors for a more complete understanding of a space?
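One possible fusion rule, purely a sketch with made-up thresholds, is to raise an alarm either when the semantic detector is confident or when the detector is unsure but the motion score says something is moving purposefully:

```python
def fused_alarm(det_score, coherence, det_thresh=0.5, flow_thresh=0.6):
    """Hypothetical fusion of a semantic detector score with a motion
    coherence score. Alarm when the detector is confident, OR when the
    detector is unsure but the motion looks coherent and purposeful
    (e.g. a person crawling under a cloth that the detector no longer
    recognizes). Thresholds are placeholders, not tuned values.
    """
    return det_score >= det_thresh or coherence >= flow_thresh

print(fused_alarm(0.9, 0.1))  # True: detector alone is confident
print(fused_alarm(0.1, 0.8))  # True: disguised but coherent motion
print(fused_alarm(0.1, 0.1))  # False: likely background noise
```

The interesting engineering question is how both scores can be produced on the Metis M.2 within the same power and latency budget.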

We hope our findings can be useful to others in the community as well.

 

 

🙌 Stay Tuned for Updates!

Over the next few weeks, we’ll be sharing:

  • Progress from early experiments
  • Observations and challenges in real-space testing
  • Interesting behaviors when combining detection with optical flow
  • How we visualize detection + motion cues in the UI
  • And finally, our demo video and project summary

We’re excited to learn from the community, receive feedback, and exchange ideas.

Let’s explore together how edge AI can make our everyday spaces smarter, more reliable, and more intuitive. ✨

5 replies

Spanner
Axelera Team
  • Axelera Team
  • December 5, 2025

Hi there ​@Enmin and team, welcome to the big show!

I think I mentioned in another post recently that I’m really into home (and more) automation, so I know how much work it is trying to get all different kinds of sensors to work together to build a system that can do at least some of the things you’re talking about.

But that Frankenstein of “dumb” sensors has loads of points of failure, is rife with false positives, and takes a tonne of effort to try and hammer them into a working, coherent system.

So this is the next generation of smart systems that can replace a whole load of those disjointed sensors all at once! While using AI to replace things like door contacts, PIRs, and timers might sound like a complex solution, it’s actually a way more elegant system!

Really looking forward to seeing it in action!


  • Author
  • Ensign
  • January 6, 2026

🚀 FlowSentry-Wake | First End-to-End Bring-Up

1️⃣ Hardware Has Arrived

We’ve received the full kit:

Orange Pi 5 Plus + Metis M.2.

The project has officially moved from planning to execution.

From ideas to something that actually runs.

2️⃣ SDK Setup: Smooth and Predictable

We followed the official YouTube guide:

Standalone AI on Orange Pi 5 Plus | Step-by-Step Guide

System, drivers, Voyager SDK.

Step by step. No shortcuts.

The process was smoother than expected.

Very little guesswork. No hidden traps.

That matters.

3️⃣ inference.py: A Strong First Impression 👍

One script deserves a clear mention: inference.py.

Compilation, Quantization, Deployment, Inference, Visualization

All triggered by a single command.

For a first on-board validation, this is exactly what you want.

Direct. Practical. Time-saving.

Credit to the Axelera engineering team here.

4️⃣ Trying Larger Models: Reality Check

We also tested a slightly larger model from the model zoo,

for example yolov11l-coco-onnx.

Deployment time increased rapidly.

After one hour, it still hadn’t finished.

We’ll give it more time.

5️⃣ Camera Streaming: Surprisingly Easy

Another part that worked very well is camera streaming via the SDK.

Steps were simple:

  • Configure WiFi on the Orange Pi (with a TP-Link WiFi adapter)
  • Put the Sonoff camera on the same local network
  • Open eWeLink
  • Device settings → advanced options
  • Enable ONVIF / RTSP
  • Copy the RTSP address

No extra tools needed.
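For anyone reproducing this: the RTSP address eWeLink shows can also be assembled programmatically. The host, port, and path below are illustrative; copy the exact values from your camera’s ONVIF/RTSP settings page.

```python
from urllib.parse import quote, urlparse

def build_rtsp_url(host, user, password, port=554, path="av_stream/ch0"):
    """Assemble an RTSP URL like the one eWeLink exposes.

    The default port and path are illustrative; use the exact address
    from the camera's settings. Credentials are percent-encoded so
    special characters survive inside the URL.
    """
    return f"rtsp://{quote(user)}:{quote(password)}@{host}:{port}/{path}"

url = build_rtsp_url("192.168.1.42", "admin", "p@ss word")
print(url)  # rtsp://admin:p%40ss%20word@192.168.1.42:554/av_stream/ch0
assert urlparse(url).scheme == "rtsp"
```

The resulting string can then be handed to the SDK pipeline as the input source, just like a local video file.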

6️⃣ RTSP → Inference, No Friction

The key point:

inference.py accepts the RTSP address directly as input.

Camera to inference, almost no friction.

For our project, this is important.

We start with live input from day one.

7️⃣ Where We Are Now

This is still a very early stage.

So far, the focus has been simple:

  • Toolchain reliability
  • Input pipeline
  • Basic inference flow

All confirmed.

🔜 Next Update

Next up:

Deploying optical flow models inside the SDK

and understanding their runtime cost.

That’s where things get interesting.

And much closer to the core of FlowSentry-Wake.

More soon.


Spanner
Axelera Team
  • Axelera Team
  • January 7, 2026

Very nice work guys! Great summary, and so cool to hear that it all ran smoothly 👍 Do let me know about any bumps or stumbles, no matter how small - these kind of active field tests are exactly why we’re so keen on putting the Axelera gear in your hands for things like Smarter Spaces!


  • Cadet
  • January 21, 2026

Hi guys, good going. I was wondering if you were able to find an optical flow model that runs on the M.2 version. I am not able to find any resource on whether or not it is possible!


  • Author
  • Ensign
  • January 22, 2026

> Hi guys, good going. I was wondering if you were able to find an optical flow model that runs on the M.2 version. I am not able to find any resource on whether or not it is possible!

Hi! Yes — we’re currently experimenting with EdgeFlowNet, which is a relatively recent optical flow model with a clean encoder–decoder structure and generally operator-friendly design.

At the moment, the main blocker we’ve identified is related to transposed convolution, which introduces asymmetric padding that isn’t supported by the Voyager SDK (M.2) yet. We’re actively exploring possible workarounds and model adjustments.
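One commonly used substitution (illustrative only, we haven’t validated it on the Metis yet) is to replace the transposed convolution with nearest-neighbor upsampling followed by a regular convolution, which keeps all padding symmetric. A shape-level numpy sketch of the idea:

```python
import numpy as np

def upsample_conv(x, kernel):
    """Resize-then-convolve substitute for a stride-2 transposed conv.

    x: (H, W) feature map; kernel: (3, 3) weights. Nearest-neighbor
    upsampling by 2, followed by a 'same' 3x3 convolution, yields the
    (2H, 2W) output a stride-2 deconvolution would produce, but uses
    only a symmetric 1-pixel pad, avoiding the asymmetric padding that
    transposed convolutions can introduce.
    """
    up = x.repeat(2, axis=0).repeat(2, axis=1)  # nearest-neighbor x2
    padded = np.pad(up, 1, mode="constant")     # symmetric padding
    H, W = up.shape
    out = np.zeros_like(up, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = (padded[i:i + 3, j:j + 3] * kernel).sum()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.full((3, 3), 1.0 / 9.0)  # simple averaging kernel for the demo
y = upsample_conv(x, k)
print(y.shape)  # (8, 8)
```

In a real model this means retraining (or at least fine-tuning) the decoder with the new layers, since the weights are not interchangeable with the original transposed convolution.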

Still very much work in progress, but happy to exchange ideas or experiences if you’re looking into similar setups!