
Elderly Guardian: Final Submission

  • September 7, 2025
  • 9 replies
  • 369 views


Final Submission Notes - “Elderly Guardian” (edge fall-detection prototype)

I had a blast building this. Between my daughter’s sports games this weekend, I squeezed in some late-night tinkering to land a working demo I’m proud of. The YouTube link in my submission shows the prototype end-to-end: live pose detection, on-screen overlays, and a smart-plug alert when a real fall is confirmed.

What I built

A privacy-first, on-device fall detector that runs YOLOv8 Pose on an RK3588 with an Axelera Metis accelerator (via the Voyager SDK), watches a Logitech C920 feed, and triggers a TP-Link Kasa smart plug to kick off downstream alerts/automations on an Amazon Echo Dot.

Hardware + stack

  • Compute: RK3588 SBC

  • Accelerator: Axelera Metis M.2 (via Voyager SDK)

  • Camera: Logitech C920 (tested at 1280×720 @ 30 FPS)

  • Network: Edimax EW-7822UAC USB Wi-Fi

  • Alert path: TP-Link Kasa smart plug (local control with python-kasa)

  • Model: YOLOv8 Pose (Ultralytics weights) running through Voyager’s pipeline

How it works (quick tour)

  1. Capture: Voyager pulls frames from the USB camera (GStreamer) and letterboxes to 640×640 for the model.

  2. Pose: The Metis accelerator runs YOLOv8 Pose; Voyager returns a tidy meta package with boxes, 17 keypoints, and scores per person.

  3. Heuristics: My Python app filters low-quality detections, then decides “fall vs ok” using:

    • Posture: horizontal body (wide bbox + skeleton spread), near the floor (box bottom + ankles/hips in the bottom ~20–22% of frame).

    • Motion: true downward movement within a short window (core y and box bottom y both move down on screen).

    • Slow falls: if someone ends up persistently floor-prone for N frames, we flag it even without a big drop.

    • Guards: proximity (too close to camera), retreat (walking away → shrinking box), couch/bed sitting/lying logic to avoid false alarms.

  4. Output: Draws skeleton + status overlays, optional MP4 recording, and pulses the Kasa plug when a fall is confirmed (easy to hook into Alexa/IFTTT/MQTT).
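To make the heuristics in step 3 concrete, here is a minimal sketch of the "fall vs ok" decision for one person on one frame. This is NOT the actual fall_detection.py code - all names and threshold values are hypothetical, and the real app layers on the proximity/retreat/couch guards described above:

```python
# Illustrative fall heuristic: posture + motion, plus a slow-fall rule.
# Boxes are (x1, y1, x2, y2) in pixels; y grows toward the bottom of the frame.

FLOOR_BAND = 0.80      # box bottom below ~80% of frame height counts as "near floor"
DESCENT_PX = 40        # required downward travel of the box bottom (pixels)
PRONE_FRAMES = 6       # consecutive floor-prone frames that flag a slow fall

def is_horizontal(box):
    """Wide bounding box suggests a horizontal (lying) posture."""
    x1, y1, x2, y2 = box
    return (x2 - x1) > (y2 - y1) * 1.3

def near_floor(box, frame_h):
    """Box bottom falls inside the configured floor band."""
    return box[3] > FLOOR_BAND * frame_h

def moved_down(prev_bottom, cur_bottom):
    """True downward motion: screen y increases toward the floor."""
    return cur_bottom - prev_bottom >= DESCENT_PX

def classify(box, prev_bottom, prone_count, frame_h):
    """Return (state, prone_count) for one tracked person on one frame."""
    prone = is_horizontal(box) and near_floor(box, frame_h)
    prone_count = prone_count + 1 if prone else 0
    if prone and moved_down(prev_bottom, box[3]):
        return "FALL", prone_count          # fast fall: posture + real descent
    if prone_count >= PRONE_FRAMES:
        return "FALL", prone_count          # slow fall: persistently floor-prone
    return "OK", prone_count
```

The real heuristic also checks skeleton spread and ankle/hip positions rather than the bounding box alone, but the rule structure is the same.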

What’s in the repo

  • A README with setup steps (exports, deps, GStreamer bits) and example run commands.

  • The main fall_detection.py app with:

    • OpenCV overlay, optional MP4 writer

    • Per-person tracking, latch-until-recovery (prevents blink-flipping)

    • Tunable knobs for floor band, descent pixels, min keypoints, bbox area, etc.

    • Optional Kasa trigger (--kasa-ip, cooldown, pulse length)
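The optional Kasa trigger (cooldown + pulse length) could look roughly like this. This is a hypothetical sketch, not the repo's code: the plug object is injected so the pulse logic is testable without hardware - with python-kasa you would pass something like `kasa.SmartPlug(ip)` after `await plug.update()`:

```python
import asyncio
import time

class KasaPulser:
    """Pulse a smart plug on->off when a fall is confirmed, rate-limited
    by a cooldown so repeated detections don't hammer the plug."""

    def __init__(self, plug, pulse_s=2.0, cooldown_s=30.0):
        self.plug = plug                  # any object with async turn_on/turn_off
        self.pulse_s = pulse_s
        self.cooldown_s = cooldown_s
        self._last_fire = -float("inf")

    async def fire(self):
        """Pulse the plug; return False if suppressed by the cooldown."""
        now = time.monotonic()
        if now - self._last_fire < self.cooldown_s:
            return False                  # still cooling down, skip this alert
        self._last_fire = now
        await self.plug.turn_on()
        await asyncio.sleep(self.pulse_s)
        await self.plug.turn_off()
        return True
```

Keeping the plug behind a small interface like this also makes it easy to swap the alert path later (MQTT, webhook, etc.) without touching the detection loop.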

Demo highlights (what you’ll see in the video)

  • Normal walk-through - no alert

  • Quick sit on couch - suppressed by “off-ground + floor band” rules

  • Fall to floor - FALL in red, Kasa plug pulses on

  • Stand back up - clears after “upright + off-floor + ascended” recovery
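The latch-until-recovery behaviour in the demo can be sketched as a tiny per-person state machine. Names and frame counts here are hypothetical (assuming ~30 FPS), not the actual implementation:

```python
LATCH_FRAMES = 210      # hold the alarm ~7 s at 30 FPS before it may clear
RECOVER_FRAMES = 10     # consecutive "recovered" frames required to clear
RISE_PX = 50            # upward movement needed since the fall

class FallLatch:
    """Latch a confirmed fall so the status doesn't blink-flip, and only
    clear it after sustained upright + off-floor + ascended frames."""

    def __init__(self):
        self.latched = False
        self.age = 0
        self.recover_streak = 0

    def update(self, fell, upright, off_floor, rise_px):
        if fell:
            self.latched, self.age, self.recover_streak = True, 0, 0
            return True
        if self.latched:
            self.age += 1
            recovered = upright and off_floor and rise_px >= RISE_PX
            self.recover_streak = self.recover_streak + 1 if recovered else 0
            # hold through the latch window, then clear on sustained recovery
            if self.age >= LATCH_FRAMES and self.recover_streak >= RECOVER_FRAMES:
                self.latched = False
        return self.latched
```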

Results & current thresholds (sane defaults)

  • Filters: min box area ≈ 90k px², max ≈ 220k px²; min pose conf 0.65; ≥ 8 visible keypoints (kp conf ≥ 0.35).

  • Fall heuristic: horizontal near floor + downward motion; or persistent floor-prone for ~6 frames.

  • Temporal smoothing: latch ~7 s; recovery requires upright, off floor, and rise ≥ ~50 px for ~10 frames.
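The detection filters above can be expressed as one quality gate. The threshold values are copied from this post; the function name and shape are illustrative, not the repo's actual code:

```python
# Quality gate for person detections, using the defaults listed above.
MIN_AREA, MAX_AREA = 90_000, 220_000   # bounding-box area bounds, px^2
MIN_POSE_CONF = 0.65                   # minimum whole-pose confidence
MIN_KEYPOINTS = 8                      # visible keypoints required
MIN_KP_CONF = 0.35                     # per-keypoint confidence to count as visible

def keep_detection(box, pose_conf, kp_confs):
    """Accept a person detection only if it passes every quality gate."""
    x1, y1, x2, y2 = box
    area = (x2 - x1) * (y2 - y1)
    visible = sum(1 for c in kp_confs if c >= MIN_KP_CONF)
    return (MIN_AREA <= area <= MAX_AREA
            and pose_conf >= MIN_POSE_CONF
            and visible >= MIN_KEYPOINTS)
```

Gating on box area is what rejects people who are too close to (or too far from) the camera before the fall heuristics ever run.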

Separate “future” branch

I’ve started a separate branch focused on:

  • “Lying on couch/bed/chair” labeling (non-alerting) using off-ground + horizontal posture with a simple per-room floor ROI.

  • Per-site calibration (camera height/tilt, floor band refinement).

  • Continued false-positive reduction for close-to-camera and quick sit scenarios.

There’s already interest on LinkedIn from healthcare and smart-home communities around the approach and the heuristic choices, so I’m excited to share the code and iterate.

Why this matters

  • On-device inference keeps video private; only a local smart-plug pulse/automation needs to leave the box.

  • Edge-friendly: RK3588 + Metis handles 720p smoothly while staying power-efficient.

  • Practical: uses off-the-shelf camera and a $20 smart plug; easy path to real-world pilots.

Setup & run (basics, more details in README below)

  • Set AXELERA_FRAMEWORK, PYTHONPATH, LD_LIBRARY_PATH, GST_PLUGIN_PATH.

  • pip install numpy opencv-python pyyaml python-kasa inside a venv.

  • python3 fall_detection.py app-config.yaml --out /path/to/fall_demo.mp4 --log INFO

  • Add --kasa-ip <plug_ip> to pulse the alert.

Thanks

Thanks for hosting the challenge! This was a fun, meaningful build. The README has step-by-step instructions, and the video shows it working live. I’m happy to answer questions or help tweak thresholds for specific rooms.

Demo:

Code: https://github.com/moorebrett0/elderly-guardian

9 replies

Spanner
Axelera Team
  • Axelera Team
  • September 8, 2025

Very cool man! Great project, and an excellent demo!

I have to say, the fall detection is great and it’s easy to see how useful this could be in a tonne of different applications. Even as part of a standard CCTV system in any building: it might be primarily for security, but why not also add this function so it could help people too? Any store, shopping mall, or workplace could benefit from that.

But most of all, I’m really impressed with how it doesn’t trigger the alarm! Being able to sit, kneel, etc without false positives is outstanding work!


  • Author
  • Ensign
  • September 8, 2025

Thank you Spanner, and thank you for the opportunity to participate!


Radhika J
Axelera Team
  • Axelera Team
  • September 23, 2025

Hi @moorebrett0,

Congratulations on your final submission!
I really liked the way you’ve used different rules and guards to avoid false alarms, as much as the pose information itself - it comes close to scene analysis, which opens up a broad range of applications. And as you’ve articulated, I couldn’t agree more that your project design provides an “easy path to real-world pilots” with an off-the-shelf camera, a $20 smart plug, and the cost- and energy-efficient Metis!

Thanks a lot for your engagement and good luck with future projects!

Best,

Radhika


  • Author
  • Ensign
  • September 24, 2025

@Radhika J - thank you so much for your kind words. I thoroughly enjoyed building this and have already begun working on my next feature set and rules in a separate branch on my GitHub. I’ll be posting updates here and helping out others in the community for sure!


Mariusz
Axelera Team
  • Axelera Team
  • September 29, 2025

Thank you for sharing your project demo and the hardware details. I think I’ll press the “gold button”, like on the talent-show TV programs - this is an amazing project and it should be used widely. You didn’t just win 1000 pounds or dollars; you’ve won millions. Congratulations. A project like this will attract real investment from serious companies. Why do I think that? Because it fits so many environments, and safety matters in all of them.

To be honest, when I was travelling in some countries, such as Thailand, there seemed to be very little formal safety infrastructure, yet the injury statistics were low. Why? Because human intelligence is very good at understanding the many processes behind our behaviour. But your project is for the people who cannot look after themselves, who need additional support, or who are very ill. It can also serve people who are not ill, disabled, or elderly - it can be used for everyone. As a simple example, many people think they are very fit: eating well, doing some sport, not drinking alcohol or using drugs. Yet they can still be struck by a heart attack or something unexpected like a gas leak in the property. Sometimes they have only a few minutes, or seconds, to survive, and your project can make the difference by sending an alert to emergency responders. People make jokes about such systems, but sometimes a quick reaction saves a life. Congratulations to you.

I have only one technical question: how did you manage storage space on the Aetina? Is this just a demo, or can you run the system continuously without managing space in memory, or do you redirect the system data so it isn’t stored on the Aetina? Thank you.


  • Author
  • Ensign
  • September 29, 2025

Hey @Mariusz, right now storage is super simple: it’s real-time inference processing the frames in RAM, and I don’t store any video on device. So it’s basically the model, ~10 MB of JSON metadata (which gets rotated out), and optional logs from when I was debugging.

Thanks for your vote of confidence! I think this project definitely has the ability to help people all around the world, and I’m hoping to find a mentor in the space who can help take it from prototype to reality.


Mariusz
Axelera Team
  • Axelera Team
  • September 30, 2025

Thank you, moorebrett0, for the reply - I understand. In my case, free space on the system partition was very limited because the system consumed space for its own data, which is why I asked. And yes, this project can run on any compute unit compatible with Metis, which is great. I also asked about storage for another reason: sometimes data needs to be kept for security or other purposes. Nice design - it’s worth spending more time on it, expanding it, and investing in it.


saadtiwana
Ensign
  • Ensign
  • October 1, 2025

Hi @Mariusz, if your concern is about the lack of disk space on the Aetina board, you can follow what some of us did when we set up our boards: add an SD card or M.2 SSD and use symlinks to map folders so that most of the data stays on the SSD/SD card instead of consuming the very limited space on the flash.

I documented this procedure so you can follow along if you like:

Full procedure to re-image and prepare an Aetina RK3588 board with Metis M.2 | Community


Martin Gorner
Axelera Team
  • Axelera Team
  • October 2, 2025

Hi ​@moorebrett0 , great project, congrats.
I noticed you are classifying fallen / non-fallen states using a set of rules. This is a prime candidate for machine learning. Here is an example: https://github.com/PIC4SeR/AcT