# Fall Detection - Proof of Concept
## What’s done

- Built a working end-to-end app on the Voyager SDK: camera > YOLOv8-Pose > decode > filters > overlay > action.
- Added an OpenCV overlay (boxes + 17-keypoint skeleton) and optional MP4 recording.
- Integrated a Kasa plug trigger; an Alexa routine will follow once I work through a few remaining issues.
- Here is a screen recording of the fall detection working:
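The Kasa plug trigger mentioned above can be sketched as a small async helper. This is a minimal sketch, not the project's actual code: `pulse_plug` is my name for it, and it assumes any plug object with async `turn_on()`/`turn_off()` methods, such as `kasa.SmartPlug` from the python-kasa library.

```python
import asyncio

async def pulse_plug(plug, seconds: float = 5.0) -> None:
    """Turn a smart plug on for `seconds`, then off again.

    `plug` is any object exposing async turn_on()/turn_off() methods,
    e.g. a kasa.SmartPlug from the python-kasa library.
    """
    await plug.turn_on()
    try:
        await asyncio.sleep(seconds)
    finally:
        # Always switch off, even if the sleep is cancelled.
        await plug.turn_off()
```

With python-kasa this would be driven by constructing a `SmartPlug` with the plug's IP address (a per-site value) and calling `await plug.update()` before issuing commands; the 5-second default mirrors the pulse duration described above.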
## Pipeline YAML (high level)

- Model: `yolov8lpose-coco`.
- Preprocess: letterbox to 640×640, `torch-totensor`.
- Postprocess: `decodeyolopose` with NMS + confidence thresholds; outputs scaled back to original frame size.
- Outputs land in `frame.meta['yolov8lpose-coco']` as: `boxes` (N×4, xywh), `keypoints` (N×17×3), `scores` (N, pose confidence).
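Consuming those outputs can be sketched as below. This assumes `frame.meta` behaves like a dict and that the model's entry is a mapping holding the three NumPy arrays in the shapes listed above; `extract_people` and the structure of the entry are my assumptions, not the SDK's documented API.

```python
import numpy as np

def extract_people(meta: dict, min_score: float = 0.65):
    """Pull boxes/keypoints/scores for one model key out of frame metadata.

    Assumes the decoded outputs are NumPy arrays shaped as described:
    boxes (N, 4) in xywh, keypoints (N, 17, 3) as (x, y, conf), scores (N,).
    Detections below the pose-confidence floor are dropped.
    """
    out = meta["yolov8lpose-coco"]
    boxes, keypoints, scores = out["boxes"], out["keypoints"], out["scores"]
    keep = scores >= min_score
    return boxes[keep], keypoints[keep], scores[keep]
```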
## How I decide what a “fall” is (current rules)

- Quality gates (to ignore furniture/noise):
  - Min box area ≈ 90,000 px²; max ≈ 250,000 px².
  - Min pose confidence 0.65.
  - At least 8 visible keypoints with per-keypoint confidence ≥ 0.35.
- Fall heuristic: the box becomes wider than tall (high w/h ratio) and the vertical spread of keypoints is low (lying posture).
- Temporal smoothing: require 5 consecutive fallen frames before triggering.
- Action: pulse the Kasa plug (default 5 s).
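The rules above can be sketched roughly as follows. The area, confidence, keypoint, and frame-count thresholds are taken from the list; the aspect-ratio cutoff and the vertical-spread fraction are my placeholder assumptions, and all function names are mine.

```python
import numpy as np

# Thresholds from the rules above; FALL_ASPECT and MAX_VSPREAD_FRAC
# are assumed placeholders, not the project's tuned values.
MIN_AREA, MAX_AREA = 90_000, 250_000
MIN_POSE_CONF = 0.65
MIN_VISIBLE_KPTS, KPT_CONF = 8, 0.35
FALL_ASPECT = 1.2        # box counts as "wide" when w/h exceeds this
MAX_VSPREAD_FRAC = 0.35  # keypoint y-spread as a fraction of box width
CONSEC_FRAMES = 5

def passes_quality_gates(box, kpts, score) -> bool:
    """Reject furniture/noise: area bounds, pose conf, visible keypoints."""
    w, h = box[2], box[3]
    visible = int((kpts[:, 2] >= KPT_CONF).sum())
    return (MIN_AREA <= w * h <= MAX_AREA
            and score >= MIN_POSE_CONF
            and visible >= MIN_VISIBLE_KPTS)

def looks_fallen(box, kpts) -> bool:
    """Wide box plus low vertical spread of confident keypoints."""
    w, h = box[2], box[3]
    ys = kpts[kpts[:, 2] >= KPT_CONF, 1]
    if h == 0 or ys.size == 0:
        return False
    vspread = ys.max() - ys.min()
    return (w / h) > FALL_ASPECT and vspread < MAX_VSPREAD_FRAC * w

class FallSmoother:
    """Trigger only after CONSEC_FRAMES consecutive fallen frames."""
    def __init__(self):
        self.streak = 0
    def update(self, fallen: bool) -> bool:
        self.streak = self.streak + 1 if fallen else 0
        return self.streak >= CONSEC_FRAMES
```

Per frame, a detection that passes the quality gates feeds `looks_fallen` into `FallSmoother.update`, and only a `True` return would pulse the plug.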
## NEW / In progress: “lying on couch/bed” detection

- Goal: distinguish a fall onto the floor from lying safely on a surface (couch/bed) to reduce false alarms.
- Approach (rule-based first, then refine):
  - No descent: low vertical velocity before the posture change (no sudden drop).
  - Support height: head/hips above the floor by a threshold (e.g., >15% of frame height), consistent with a raised surface.
  - Horizontal posture, but with the box bottom stable near a calibrated couch/bed ROI (simple per-site calibration).
  - Longer dwell time for the “lying” state without triggering the Kasa plug; label it “resting” instead.
I’m really excited about where I’m at with this project. My initial goals are met, and I’m able to refine the logic before the deadline into a much better prototype!