Smart parking system using ultrasonic sensors and ESP32-CAM
Sankareh, Saihou (2025)
All rights reserved. This publication is copyrighted. You may download, display, and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-2025052817282
Abstract
This project addressed a common issue in urban environments: parking difficulties and low-speed collisions caused by poor visibility or misalignment. The goal of the thesis was to develop a smart parking detection system that supports more accurate and safer vehicle positioning. The system was designed to be low-cost, easy to prototype, and suitable for embedded environments, combining distance sensing with object recognition.
To achieve this, two sensing technologies were integrated. Ultrasonic sensors mounted at the end of the parking spot measured the distance between the toy car and the rear boundary of the spot. These sensors provided real-time feedback through LED indicators: green for a safe distance, yellow for caution, and red for danger. In parallel, a camera module (ESP32-CAM) was placed above the parking area to observe the car from a top-down view. A lightweight object detection model, trained using Edge Impulse, was deployed on the ESP32-CAM to identify both the car and the parking spot. The Arduino IDE was used to program the system logic, which included comparing bounding box coordinates to verify whether the car was properly aligned in the spot.
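The distance-to-LED logic described above can be sketched in C++ as follows. The thresholds (30 cm and 10 cm) and the HC-SR04-style echo-time conversion are illustrative assumptions; the abstract does not state the exact values used in the thesis.

```cpp
// LED states shown to the driver.
enum LedState { GREEN, YELLOW, RED };

// Convert an ultrasonic echo round-trip time (microseconds) to distance
// in centimetres, using the speed of sound (~343 m/s = 0.0343 cm/us).
// The pulse travels to the obstacle and back, hence the division by 2.
float echoToCm(unsigned long echoMicros) {
    return echoMicros * 0.0343f / 2.0f;
}

// Map a measured distance to an LED colour.
// Threshold values here are hypothetical, not from the thesis.
LedState classify(float distanceCm) {
    if (distanceCm > 30.0f) return GREEN;   // safe distance
    if (distanceCm > 10.0f) return YELLOW;  // caution: approaching limit
    return RED;                             // danger: stop
}
```

On the actual hardware, `classify` would drive the corresponding LED pin rather than return a value; the pure-function form above is used so the logic can be tested off-device.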
Testing showed that the ultrasonic sensors consistently measured distances within the expected thresholds and triggered the correct LED responses. The ESP32-CAM likewise detected the toy car and the parking spot with high accuracy using real-time image processing. The implemented logic assessed whether the car was correctly positioned, and output messages on the serial monitor confirmed the detection results. The integration of camera-based object detection and distance measurement proved effective and reliable in the small-scale simulation.
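A minimal sketch of the alignment check: assuming the detection model reports each object as an axis-aligned bounding box (top-left corner plus width and height, a common output format for Edge Impulse object detectors), the car counts as correctly parked when its box lies entirely inside the spot's box. The struct and function names are illustrative, not taken from the thesis code.

```cpp
// Axis-aligned bounding box in image pixels:
// top-left corner (x, y), width w, height h.
struct Box {
    int x, y, w, h;
};

// The car is considered properly parked when its bounding box
// lies entirely within the parking spot's bounding box.
bool isAligned(const Box& car, const Box& spot) {
    return car.x >= spot.x &&
           car.y >= spot.y &&
           car.x + car.w <= spot.x + spot.w &&
           car.y + car.h <= spot.y + spot.h;
}
```

On the device, the result of such a check would drive the serial-monitor message rather than an assertion.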
The project successfully demonstrated that combining ultrasonic sensing with embedded AI-based visual recognition creates a more robust smart parking solution. It serves as a working proof of concept that could be scaled to real vehicles or urban environments. With further improvements such as supporting more object classes, enhancing low-light detection, or adding wireless communication, the system could contribute to future developments in intelligent transportation and IoT-enabled urban mobility.