Deep Reinforcement Learning for Autonomous Mobile Robot Navigation

  • Chapter
Artificial Intelligence for Robotics and Autonomous Systems Applications

Part of the book series: Studies in Computational Intelligence (SCI, volume 1093)

Abstract

Autonomous robots have made substantial contributions to numerous fields, such as the military, agriculture, energy, welding, and surveillance automation. Because mobile robots must be able to navigate safely and effectively, there is a strong demand for state-of-the-art algorithms. Mobile robot navigation has four requirements: perception, localization, path planning, and motion control. Numerous algorithms for autonomous robots have been developed over the past two decades, yet the number of algorithms that can navigate and control robots in dynamic environments remains limited, even though the majority of autonomous robot applications take place in dynamic environments. This chapter presents a qualitative comparison of the most recent Autonomous Mobile Robot Navigation techniques for controlling autonomous robots in dynamic environments under safety and uncertainty considerations. The work covers several aspects of the development process, including the underlying methodology, benchmarking, and demonstration. The structure, pseudocode, tools, and practical, in-depth applications of the specific Deep Reinforcement Learning algorithms for autonomous mobile robot navigation are also included. The study thereby provides an overview of how suitable Deep Reinforcement Learning techniques are developed for various applications.
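The abstract notes that the chapter covers the structure, pseudocode, and tools of specific Deep Reinforcement Learning algorithms (e.g., DQN and its variants, DDPG, PPO, SAC). For orientation only, the sketch below outlines the core pieces of a DQN-style navigation agent: a fully connected Q-network mapping range-sensor readings to discrete steering actions, an experience-replay buffer, epsilon-greedy exploration, and the temporal-difference update. It is a minimal illustrative sketch, not the chapter's implementation; the state dimension, action set, network sizes, and hyperparameters are assumptions chosen for readability.

```python
# Minimal, illustrative DQN-style navigation agent (not the chapter's code).
# Assumed setup: the state is a vector of 24 laser-range readings and the
# action is one of 5 discrete heading commands.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Fully connected network mapping sensor readings to one Q-value per action."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, state_dim=24, n_actions=5, gamma=0.99, lr=1e-3):
        self.q = QNetwork(state_dim, n_actions)          # online network
        self.q_target = QNetwork(state_dim, n_actions)   # target network
        self.q_target.load_state_dict(self.q.state_dict())
        self.optimizer = optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=10_000)               # experience replay
        self.gamma = gamma
        self.n_actions = n_actions

    def act(self, state, epsilon=0.1):
        """Epsilon-greedy choice over the discrete heading commands."""
        if random.random() < epsilon:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q_values = self.q(torch.as_tensor(state, dtype=torch.float32))
        return int(q_values.argmax())

    def remember(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def train_step(self, batch_size=64):
        """One temporal-difference update on a random minibatch."""
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        s = torch.as_tensor(states, dtype=torch.float32)
        a = torch.as_tensor(actions, dtype=torch.int64)
        r = torch.as_tensor(rewards, dtype=torch.float32)
        s2 = torch.as_tensor(next_states, dtype=torch.float32)
        d = torch.as_tensor(dones, dtype=torch.float32)
        # DQN target: r + gamma * max_a' Q_target(s', a') for non-terminal s'
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.q_target(s2).max(dim=1).values * (1.0 - d)
        loss = nn.functional.mse_loss(q_sa, target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()

    def sync_target(self):
        """Copy online weights into the target network (call every few hundred steps)."""
        self.q_target.load_state_dict(self.q.state_dict())
```

In a full navigation stack, such an agent would be driven by a simulator (for example, Gazebo via ROS) that supplies laser scans as states and returns rewards for progress toward the goal and penalties for collisions, with the target network synchronized at a fixed interval.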

Abbreviations

AC:

Actor-Critic Method

AI:

Artificial Intelligence

AMRN:

Autonomous Mobile Robot Navigation

ANN:

Artificial Neural Networks

AR:

Autonomous Robot

BDL:

Bayesian Deep Learning

CNN:

Convolutional Neural Networks

CMAD-DDQN:

Communication-Enabled Multiagent Decentralized DDQN

DRL:

Deep Reinforcement Learning

DNN:

Deep Neural Networks

DNFS:

Deep Neuro-Fuzzy Systems

DL:

Deep Learning

DQN:

Deep Q-network

DDQN:

Double DQN

D3QN:

Dueling Double Deep Q-network

DDPG:

Deep Deterministic Policy Gradient

DDP:

Deep Deterministic Policy

DQL:

Deep Q-Learning

DART:

Dynamic Animation and Robotics Toolkit

FC:

Fully Connected

FL:

Fuzzy Logic Control

GNC:

Guidance, Navigation and Control

LSTM:

Long Short-Term Memory

MNR:

Mobile Robot Navigation

ML:

Machine Learning

MR:

Mobile Robot

MDP:

Markov Decision Process

MARL:

Multi-Agent Reinforcement Learning

MADRL:

Multi-Agent Deep Reinforcement Learning

MSE:

Mean Square Error

NN:

Neural Networks

ODE:

Open Dynamics Engine

OSRF:

Open Source Robotics Foundation

POMDPs:

Partially Observable Markov Decision Processes

PPO:

Proximal Policy Optimization

RL:

Reinforcement Learning

RL-AKF:

Reinforcement Learning-Based Adaptive Kalman Filter Navigation Algorithm

RNN:

Recurrent Neural Network

ROS:

Robot Operating System

RSSM:

Recurrent State-Space Model

SAC:

Soft Actor Critic

SDF:

Simulation Description Format

URDF:

Unified Robot Description Format

Author information

Corresponding author

Correspondence to Armando de Jesús Plasencia-Salgueiro.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Plasencia-Salgueiro, A.d. (2023). Deep Reinforcement Learning for Autonomous Mobile Robot Navigation. In: Azar, A.T., Koubaa, A. (eds) Artificial Intelligence for Robotics and Autonomous Systems Applications. Studies in Computational Intelligence, vol 1093. Springer, Cham. https://doi.org/10.1007/978-3-031-28715-2_7
