Peer2Park
Your dashcam becomes a parking sensor. On-device AI detects open spots as you drive and shares them before they disappear.
The problem
Open curb spots vanish in seconds. By the time a spot gets reported the old way, it's already a guess.
The solution
YOLO11 and YOLOv26n-OBB run via Core ML. Your camera feed never leaves the device.
Spots sorted by recency, not popularity. The newest signal always surfaces first.
Every driver is a sensor. The more people drive, the fresher the data for everyone.
Live capture
Real dashcam footage processed by on-device computer vision in real time.
How it works
Your phone's camera passively records as you drive. No extra hardware, no setup. Just your commute.
YOLO models run on-device via Core ML, detecting spots with 94% confidence in real time.
Fresh detections surface to drivers nearby, ranked by recency. The newer the signal, the better.
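The flow above can be sketched in a few lines. This is an illustrative Python sketch, not the shipped Core ML pipeline: the function name, the `(lat, lng, confidence)` tuple shape, and the 0.5 cutoff are all assumptions made for the example. The point it demonstrates is the privacy claim: raw frames stay on the phone, and only small structured records go out.

```python
import time

CONFIDENCE_THRESHOLD = 0.5  # hypothetical cutoff; not the 94% figure quoted above

def detections_to_reports(detections, now=None):
    """Turn raw model detections into structured spot reports.

    `detections` is a list of (lat, lng, confidence) tuples -- an
    illustrative shape; the real on-device pipeline consumes Core ML
    model outputs. Only records like these ever leave the phone.
    """
    now = now if now is not None else time.time()
    return [
        {"lat": lat, "lng": lng, "confidence": conf, "seen_at": now}
        for lat, lng, conf in detections
        if conf >= CONFIDENCE_THRESHOLD  # drop low-confidence detections
    ]
```

Each report carries its own timestamp, which is what the recency ranking downstream sorts on.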
Under the hood
Built on proven ML models, serverless infrastructure, and privacy-first principles.
Resolution-8 geospatial cells for nearby-spot queries, with 20 m deduplication.
Apple Maps integration with voice search and hands-free guidance.
All ML on-device. No raw footage shared. Only structured data leaves your phone.
AWS Lambda, DynamoDB, and API Gateway. Scales with every new driver.
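The 20 m deduplication above can be illustrated with a small sketch. This is not the production implementation (which indexes resolution-8 cells before comparing candidates); it is a brute-force version, written as a Lambda-style Python helper, that shows the rule itself: reports within 20 m of each other collapse to one, and the newest one wins.

```python
import math

DEDUP_RADIUS_M = 20.0  # reports closer than this count as the same spot

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def deduplicate(reports):
    """Keep one report per ~20 m cluster, preferring the newest."""
    kept = []
    # Newest first, so the freshest report in each cluster survives.
    for rep in sorted(reports, key=lambda r: r["seen_at"], reverse=True):
        if all(
            haversine_m(rep["lat"], rep["lng"], k["lat"], k["lng"]) > DEDUP_RADIUS_M
            for k in kept
        ):
            kept.append(rep)
    return kept
```

Geospatial cells exist to avoid exactly this all-pairs comparison: candidates are first bucketed by cell, and only neighbours within the same or adjacent cells are distance-checked.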
Why freshness matters
A spot seen 30 seconds ago is gold. The same spot reported 10 minutes ago is a guess. Peer2Park ranks by recency because curb conditions change faster than any prediction model.
Every report carries a timestamp, a confidence score, and a decay window. You never act on stale data.
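The timestamp-plus-decay-window idea reduces to a few lines. A minimal sketch, assuming a 10-minute window (the number is an assumption for the example, not a documented Peer2Park setting) and the report shape used above:

```python
import time

DECAY_WINDOW_S = 600  # hypothetical: reports older than 10 minutes are discarded

def rank_spots(reports, now=None):
    """Drop expired reports, then sort the rest newest-first."""
    now = now if now is not None else time.time()
    fresh = [r for r in reports if now - r["seen_at"] <= DECAY_WINDOW_S]
    # Recency, not popularity: the newest signal always surfaces first.
    return sorted(fresh, key=lambda r: r["seen_at"], reverse=True)
```

A report seen 30 seconds ago ranks above one from 5 minutes ago, and one from 10 minutes ago never reaches a driver at all.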
Join the network
Turn your commute into shared intelligence. The more people drive, the fresher the data gets for everyone.