SmartPark: AI-Powered Real-Time Parking Spot Finder

  • March 31, 2026
  • 3 replies
  • 87 views

Picture this: You’re heading to the mall with the whole family. The kids are excited in the back seat, already arguing over who gets to pick the first activity once you’re inside. You pull into the massive parking lot… and the nightmare begins.

You start circling. Row after row. Every spot seems taken. The excitement in the car slowly turns into frustration. “Are we there yet?” becomes “Why is this taking so long?” The kids grow restless, voices rising, while you keep looping around hoping for that one open spot. Minutes tick by. Fuel burns. Patience runs thin. What should be a fun family outing starts with unnecessary stress — all because no one knows where the empty parking spots actually are.

SmartPark solves exactly that frustration.

It’s a complete local-first, edge-AI smart parking system built to run on the Axelera Metis. Using real-time computer vision, it detects open parking spots instantly and gives operators (and eventually drivers) clear, live visibility into the parking lot.

The Problem It Solves

  • Drivers waste precious time, fuel, and patience endlessly circling for parking.
  • Mall and parking operators have little to no real-time visibility across their lots.
  • Existing solutions are often expensive, cloud-dependent, and slow to respond.

SmartPark fixes this with a fast, private, and efficient edge-based solution that runs entirely on affordable hardware.

Cost Comparison: SmartPark vs Traditional Systems

One of the biggest advantages of SmartPark is its dramatically lower total cost of ownership compared to legacy solutions.

| Aspect | Traditional Parking Sensors (per-spot) | SmartPark (RTSP Cams + Orange Pi + Metis) | Camera System on Mac Mini |
|---|---|---|---|
| Hardware Cost (per 100 spots) | $30,000 – $50,000+ (1 sensor per spot + gateways) | $800 – $1,800 (4–6 cheap RTSP cams + Orange Pi + Metis card) | $2,500 – $4,000 (cams + Mac Mini) |
| Installation | Very High (drilling, wiring per spot, lot closures) | Low (mount cams on poles/lights, minimal cabling) | Medium (cams + running cables to central Mac) |
| Power & Connectivity | High (many sensors need individual power/SIMs) | Very Low (edge device handles multiple streams) | Medium-High (Mac Mini is power-hungry) |
| Maintenance | Medium-High (sensor failures, battery replacements) | Low (standard IP cams are reliable) | Medium (Mac Mini upkeep + OS updates) |
| Scalability | Expensive (linear cost per added spot) | Excellent (add cams cheaply, one Metis handles many streams) | Good but limited by single machine |
| Privacy & Cloud Dependency | Often cloud-based, privacy concerns | Fully local-first, no cloud needed | Can be local but heavier setup |
| AI Performance | Basic counting only | Full YOLO spot detection + classification (EV, accessible, etc.) on Metis | Strong but higher power draw |

SmartPark wins on cost-efficiency while delivering richer data (spot-level details, filters for EV/accessible spots, analytics) thanks to the Axelera Metis running efficient YOLO inference at the edge.

How It Works

  1. Metis-Powered Vision: A YOLO-based ParkingManagement model runs efficiently on the Axelera Metis AIPU using the Voyager SDK. Multiple RTSP camera feeds are processed in real time to detect occupied vs. free parking spots. Only changed spot states are sent forward, keeping everything lightweight and responsive.
  2. Local-First Backend
    • FastAPI with WebSocket support for live updates
    • SQLite as the lightweight, zero-config database
    • Simple incremental ingestion endpoint (POST /ingest/slots) for easy integration
  3. Operator Dashboard (Streamlit)
    • Live occupancy overview with beautiful charts and lot maps
    • Filters for EV charging spots, accessible spots, wide spots, and covered areas
    • Real-time analytics: trends, estimated wait times, turnover rates, and peak hours
    • CSV export for reports and snapshots
    • Camera health monitoring and admin tools to manage lots or simulate data
    • Alert system for low capacity, camera issues, or operational events

The entire system seeds with realistic demo data, so you can spin it up instantly and see live updates even before connecting real cameras.

Tech Stack Highlights (Metis Edition)

  • Axelera Metis + Voyager SDK → High-performance, low-power YOLO inference for multiple camera streams
  • Python / FastAPI / WebSockets → Responsive real-time API layer
  • Streamlit → Clean, intuitive operator dashboard
  • SQLite → Simple and reliable local persistence
  • YOLO worker → Clean separation between vision processing and the dashboard

Quick demo command to try it:

```bash
python -m smartpark.worker demo --lot-id westfield-top-deck --cycles 10 --poll-seconds 2
```


I built SmartPark to show how the Axelera Metis can solve everyday real-world problems with efficient edge AI. Instead of families wasting time circling parking lots in frustration, SmartPark helps them find a spot quickly so the fun can start sooner — all at a fraction of the cost of traditional systems.

Repo: https://github.com/shashibhat/SmartPark (Complete setup instructions, API docs, and demo data included)

I’d love to hear feedback from the Axelera community — especially from anyone working with multi-camera YOLO pipelines on Metis. Feel free to try it out and let me know what you think!

Let’s make endless circling for parking a thing of the past. 🚗✨

#AxeleraMetis #DemoJam #SmartParking #EdgeAI #YOLO

Thank you for the opportunity
Regards,
Shashi

3 replies

Spanner
Axelera Team
  • April 1, 2026

Great project, ​@shashibhat! Really like the architecture here, especially the decision to only push changed spot states rather than streaming full frame data through to the backend. That's a smart call for keeping things responsive when you're scaling up the number of camera feeds on a single Metis. No point sending unnecessary data around the place.

One thing I'm curious about on the vision side: how are you handling the mapping between YOLO detections and specific parking spot identities? I'm thinking about the step where you go from "there's a car at these pixel coordinates" to "spot B-14 is occupied". Are you pre-defining polygon regions per spot and checking for overlap with detection bounding boxes, or doing something different?

Thanks for sharing this, and for including the repo with full setup instructions. Looking forward to seeing how it develops!


  • Author
  • Ensign
  • April 6, 2026

Ahh, I think I haven't checked the latest code into git yet.
This was the core roadblock we hit as soon as we started developing. Our approach had three parts:
1. We maintain a DB with all the spots and their labels, and wrote a sync tool to keep the label files and the DB in sync. With this we get a detailed response per polygon: { "camera_id": "cam-north", "spots": [ { "label": "A-01", "type": "standard", "polygon": [[120,80],[200,80],[200,140],[120,140]] }, { "label": "A-02", "type": "ev" } ] }
2. The hardest problem (still not fully solved) is when two cameras see overlapping spots: we were getting duplicate entries for the same spot. We wrote a homography tool to de-duplicate these, but it's still in progress.
3. We initially over-engineered the noise filtering to reject things like a person walking by or a leaf blowing. Once we integrated YOLO26, we could use its built-in object detection and overlay the detections on the parking polygons, so a spot only counts as occupied when it's a car parked there, not a bird chilling.
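For anyone curious, the detection-to-polygon matching described above can be sketched as a check of whether a detection's bounding-box center falls inside a pre-defined spot polygon. The spot data mirrors the JSON shape from the reply; the function names and the center-point heuristic are illustrative assumptions, not the repo's actual code:

```python
# Sketch: map a YOLO bounding box to a parking spot by testing whether the
# box's center lies inside a spot polygon (pure-Python ray casting).
# Spot format mirrors the JSON above; everything else is illustrative.

def point_in_polygon(x: float, y: float, polygon: list[list[float]]) -> bool:
    """Ray-casting test: count how many polygon edges a rightward ray crosses."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def match_detection(bbox, spots):
    """Return the label of the first spot whose polygon contains the bbox center."""
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    for spot in spots:
        if point_in_polygon(cx, cy, spot["polygon"]):
            return spot["label"]
    return None

spots = [{"label": "A-01", "type": "standard",
          "polygon": [[120, 80], [200, 80], [200, 140], [120, 140]]}]
print(match_detection((130, 90, 190, 130), spots))  # center (160, 110) -> "A-01"
```

A center-point test is the simplest variant; computing polygon/bbox overlap area (e.g. IoU against the polygon's bounding box) is more robust when cars straddle spot boundaries.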


Spanner
Axelera Team
  • April 7, 2026

Awesome to see you’re getting good use from the latest YOLO there! Nice work 👍