
Final Submission Notes - “Elderly Guardian” (edge fall-detection prototype)

I had a blast building this. Between my daughter’s sports games this weekend, I squeezed in some late-night tinkering to land a working demo I’m proud of. The YouTube link in my submission shows the prototype end-to-end: live pose detection, on-screen overlays, and a smart-plug alert when a real fall is confirmed.

What I built

A privacy-first, on-device fall detector that runs YOLOv8 Pose on an RK3588 with a Metis (Voyager SDK) accelerator, watches a Logitech C920 feed, and triggers a TP-Link Kasa smart plug to kick off downstream alerts/automations via an Amazon Echo Dot.

Hardware + stack

  • Compute: RK3588 SBC

  • Accelerator: Axelera Metis M.2 (via Voyager SDK)

  • Camera: Logitech C920 (tested at 1280×720 @ 30 FPS)

  • Network: Edimax EW-7822UAC USB Wi-Fi

  • Alert path: TP-Link Kasa smart plug (local control with python-kasa)

  • Model: YOLOv8 Pose (Ultralytics weights) running through Voyager’s pipeline
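The Kasa leg of the alert path stays fully local. Here is a minimal sketch of the pulse, using the python-kasa async API as I understand it; `PULSE_SECONDS` and the plug-object seam are my own illustrative choices, not the repo's exact code.

```python
# Sketch: pulse a TP-Link Kasa plug locally so a downstream automation
# (Echo routine, IFTTT, MQTT bridge) can key off the power-state change.
import asyncio

PULSE_SECONDS = 2.0  # illustrative default, not the repo's flag value

async def pulse(plug, seconds=PULSE_SECONDS):
    """Turn the plug on, hold briefly, then off again."""
    await plug.update()      # refresh device state before commanding it
    await plug.turn_on()
    await asyncio.sleep(seconds)
    await plug.turn_off()

async def alert(ip):
    """Real usage: connect to the plug by its LAN IP (pip install python-kasa)."""
    from kasa import SmartPlug
    await pulse(SmartPlug(ip))
```

Passing the plug object in (rather than constructing it inside `pulse`) keeps the pulse logic testable without hardware on the bench.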

How it works (quick tour)

  1. Capture: Voyager pulls frames from the USB camera (GStreamer) and letterboxes to 640×640 for the model.

  2. Pose: The Metis accelerator runs YOLOv8 Pose; Voyager returns a tidy meta package with boxes, 17 keypoints, and scores per person.

  3. Heuristics: My Python app filters low-quality detections, then decides “fall vs ok” using:

    • Posture: horizontal body (wide bbox + skeleton spread), near the floor (box bottom + ankles/hips in the bottom ~20–22% of frame).

    • Motion: true downward movement within a short window (core y and box bottom y both move down on screen).

    • Slow falls: if someone ends up persistently floor-prone for N frames, we flag it even without a big drop.

    • Guards: proximity (too close to camera), retreat (walking away → shrinking box), couch/bed sitting/lying logic to avoid false alarms.

  4. Output: Draws skeleton + status overlays, optional MP4 recording, and pulses the Kasa plug when a fall is confirmed (easy to hook into Alexa/IFTTT/MQTT).
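The posture/motion/slow-fall heuristics above can be sketched roughly like this. The thresholds and helper names here are illustrative assumptions for the write-up, not the repo's exact code or defaults.

```python
# Sketch of the "fall vs ok" decision: horizontal + near-floor + descending,
# or persistently floor-prone for a run of frames (slow falls).
from collections import deque

FLOOR_BAND = 0.80     # box bottom in the bottom ~20% of the frame
DESCENT_PX = 40       # required downward travel within the motion window
PRONE_FRAMES = 6      # persistent floor-prone frames for slow falls

def is_horizontal(box):
    """Wide bbox suggests a horizontal body; box is (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return (x2 - x1) > (y2 - y1)

def near_floor(box, frame_h):
    """Box bottom inside the configured floor band."""
    return box[3] >= FLOOR_BAND * frame_h

class FallHeuristic:
    def __init__(self, window=10):
        self.bottoms = deque(maxlen=window)  # recent box-bottom y values
        self.prone_count = 0

    def update(self, box, frame_h):
        self.bottoms.append(box[3])
        prone = is_horizontal(box) and near_floor(box, frame_h)
        self.prone_count = self.prone_count + 1 if prone else 0
        # Fast fall: true downward movement within the short window
        # (y grows downward on screen).
        descended = (len(self.bottoms) > 1
                     and self.bottoms[-1] - self.bottoms[0] >= DESCENT_PX)
        if prone and descended:
            return "fall"
        # Slow fall: persistently floor-prone even without a big drop.
        if self.prone_count >= PRONE_FRAMES:
            return "fall"
        return "ok"
```

The real app layers the proximity/retreat/couch guards on top of this core decision before anything is allowed to alert.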

What’s in the repo

  • A README with setup steps (exports, deps, GStreamer bits) and example run commands.

  • The main fall_detection.py app with:

    • OpenCV overlay, optional MP4 writer

    • Per-person tracking, latch-until-recovery (prevents blink-flipping)

    • Tunable knobs for floor band, descent pixels, min keypoints, bbox area, etc.

    • Optional Kasa trigger (--kasa-ip, cooldown, pulse length)
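A minimal sketch of the latch-until-recovery behaviour: once a fall is confirmed the status stays latched so the overlay does not blink-flip, and it only clears after a sustained recovery. The frame counts are illustrative stand-ins for the tunable knobs, not the repo's exact values.

```python
# Status latch: hold "FALL" until either a sustained recovery or a timeout.
LATCH_FRAMES = 210      # ~7 s at 30 FPS (illustrative)
RECOVERY_FRAMES = 10    # consecutive "upright + off-floor + ascended" frames

class StatusLatch:
    def __init__(self):
        self.latched = 0        # frames remaining on the latch
        self.recovering = 0     # consecutive recovery-quality frames

    def update(self, fall_now, recovered_now):
        if fall_now:
            self.latched = LATCH_FRAMES
            self.recovering = 0
        elif self.latched > 0:
            self.recovering = self.recovering + 1 if recovered_now else 0
            if self.recovering >= RECOVERY_FRAMES:
                self.latched = 0     # sustained recovery clears the latch
            else:
                self.latched -= 1    # otherwise hold until timeout
        return "FALL" if self.latched > 0 else "OK"
```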

Demo highlights (what you’ll see in the video)

  • Normal walk-through - no alert

  • Quick sit on couch - suppressed by “off-ground + floor band” rules

  • Fall to floor - FALL in red, Kasa plug pulses on

  • Stand back up - clears after “upright + off-floor + ascended” recovery

Results & current thresholds (sane defaults)

  • Filters: min box area ≈ 90k px², max ≈ 220k px²; min pose confidence 0.65; ≥ 8 visible keypoints (keypoint confidence ≥ 0.35).

  • Fall heuristic: horizontal near floor + downward motion; or persistent floor-prone for ~6 frames.

  • Temporal smoothing: latch ~7 s; recovery requires upright, off floor, and rise ≥ ~50 px for ~10 frames.
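Those filter defaults can be expressed as one tunable config object with a per-detection gate. This is a sketch; the field names are mine, not the repo's actual flags.

```python
# Detection gate: area band + pose confidence + visible-keypoint count,
# using the default thresholds quoted above.
from dataclasses import dataclass

@dataclass
class Filters:
    min_box_area: float = 90_000.0    # px²
    max_box_area: float = 220_000.0   # px²
    min_pose_conf: float = 0.65
    min_keypoints: int = 8
    min_kp_conf: float = 0.35

def passes(filters, box, pose_conf, kp_confs):
    """True if a detection survives the area/confidence/keypoint gates."""
    x1, y1, x2, y2 = box
    area = (x2 - x1) * (y2 - y1)
    visible = sum(1 for c in kp_confs if c >= filters.min_kp_conf)
    return (filters.min_box_area <= area <= filters.max_box_area
            and pose_conf >= filters.min_pose_conf
            and visible >= filters.min_keypoints)
```

Keeping all the knobs in one dataclass makes per-room tuning (the "future branch" calibration work) a matter of swapping one object.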

Separate “future” branch

I’ve started a separate branch focused on:

  • “Lying on couch/bed/chair” labeling (non-alerting) using off-ground + horizontal posture with a simple per-room floor ROI.

  • Per-site calibration (camera height/tilt, floor band refinement).

  • Continued false-positive reduction for close-to-camera and quick sit scenarios.

There’s already interest on LinkedIn from healthcare and smart-home communities around the approach and the heuristic choices, so I’m excited to share the code and iterate.

Why this matters

  • On-device inference keeps video private; only a local smart-plug pulse/automation needs to leave the box.

  • Edge-friendly: RK3588 + Metis handles 720p smoothly while staying power-efficient.

  • Practical: uses off-the-shelf camera and a $20 smart plug; easy path to real-world pilots.

Setup & run (basics, more details in README below)

  • Set AXELERA_FRAMEWORK, PYTHONPATH, LD_LIBRARY_PATH, GST_PLUGIN_PATH.

  • pip install numpy opencv-python pyyaml python-kasa inside a venv.

  • python3 fall_detection.py app-config.yaml --out /path/to/fall_demo.mp4 --log INFO

  • Add --kasa-ip <plug_ip> to pulse the alert.
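Putting the basics above together, a typical session looks roughly like this. The Voyager install prefix and the plug IP are placeholders I've assumed; substitute your own paths and the values from your README.

```shell
# Environment for the Voyager pipeline (prefix is a placeholder).
export AXELERA_FRAMEWORK=/opt/axelera/voyager-sdk
export PYTHONPATH="$AXELERA_FRAMEWORK:$PYTHONPATH"
export LD_LIBRARY_PATH="$AXELERA_FRAMEWORK/lib:$LD_LIBRARY_PATH"
export GST_PLUGIN_PATH="$AXELERA_FRAMEWORK/gst:$GST_PLUGIN_PATH"

# Dependencies inside a venv, then run with recording and the Kasa alert:
# python3 -m venv .venv && source .venv/bin/activate
# pip install numpy opencv-python pyyaml python-kasa
# python3 fall_detection.py app-config.yaml --out fall_demo.mp4 \
#     --kasa-ip 192.168.1.50 --log INFO
```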

Thanks

Thanks for hosting the challenge! This was a fun, meaningful build. The README has step-by-step instructions, and the video shows it working live. I’m happy to answer questions or help tweak thresholds for specific rooms.

Demo:

Code: https://github.com/moorebrett0/elderly-guardian

Very cool man! Great project, and an excellent demo!

I have to say, the fall detection is great, and it’s easy to see how useful this could be in a tonne of different applications. Even as part of a standard CCTV system in any building: it might be primarily for security, but why not also add this function so it could help people too? Any store, shopping mall, or workplace could benefit from that.

But most of all, I’m really impressed with how it avoids triggering the alarm! Being able to sit, kneel, etc. without false positives is outstanding work!


Thank you Spanner, and thank you for the opportunity to participate!


Hi @moorebrett0,

Congratulations on your final submission!
I really liked the way you’ve used different rules and guards to avoid false alarms rather than using the pose information as-is; it comes close to scene analysis, which has a broad range of applications. And as you’ve articulated, I couldn’t agree more that your project design provides an “easy path to real-world pilots” with an off-the-shelf camera, a $20 smart plug, and the cost- and energy-efficient Metis!

Thanks a lot for your engagement and good luck with future projects!

Best,

Radhika


@Radhika J - thank you so much for your kind words. I thoroughly enjoyed building this and have already begun working on my next feature set and rules in a separate branch on my GitHub. I’ll be posting updates here and helping out others in the community for sure!