Dingo Mobile Robot: Autonomous Navigation and Obstacle Avoidance with 5G and Reinforcement Learning
Belila, Mahdi (2025)
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-2025112730365
Abstract
The rise of intelligent mobile robotics has created a growing need for systems capable of autonomous navigation and obstacle avoidance in complex environments. This thesis presents the design and implementation of
an autonomous navigation system for the Dingo-O mobile robot using deep reinforcement learning (DRL) and
5G connectivity. The primary objective was to demonstrate how learning-based control, combined with low-latency communication, can enhance the reliability and responsiveness of mobile robots.
The study began by examining the Dingo-O’s hardware platform, including its omnidirectional wheels, inertial
measurement unit (IMU), and Velodyne VLP-16 LiDAR sensor. These components provided accurate motion
control and environmental perception for training and testing. A virtual simulation environment was developed
using real sensor data, allowing a DRL agent to learn safe navigation behaviour before deployment on the
physical robot. The final model architecture employed a stacked 10×10 grid input representation, which improved spatial awareness and decision stability.
The trained agent demonstrated strong performance in both simulated and real-world tests, achieving reliable
goal-reaching and obstacle-avoidance behaviour. The integration of 5G communication between the edge
server and the robot proved essential, enabling smooth real-time control and efficient data transfer. In comparison, Wi-Fi communication resulted in inconsistent responses and occasional failures.
In conclusion, this thesis successfully demonstrated how reinforcement learning, supported by 5G and edge
computing, can be applied to real robotic systems. The findings highlight the potential of combining artificial
intelligence and modern communication technologies to advance the development of intelligent, adaptive, and
responsive mobile robots.
