
Incremental Learning for Autonomous Navigation of Mobile Robots based on Deep Reinforcement Learning

Published in: Journal of Intelligent & Robotic Systems

Abstract

This paper presents an incremental learning method and system for autonomous robot navigation. A laser range finder and online deep reinforcement learning are used to generate the navigation policy, which is effective both for avoiding obstacles along the robot's trajectory and for reaching the destination. Experiments are conducted in both simulation and real-world settings. In simulation, the results show that the proposed method can generate a highly effective navigation policy (more than 90% accuracy) after only 150k training iterations. Moreover, our system slightly outperforms Deep Q-Network (DQN) and considerably surpasses Proximal Policy Optimization (PPO), two recent state-of-the-art approaches to robot navigation. Finally, two experiments demonstrate the feasibility and effectiveness of the proposed navigation system running in real time under real-world conditions.
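The core loop the abstract describes — mapping laser range readings to steering actions and improving the policy online from reward feedback — can be illustrated with a minimal sketch. This is a tabular Q-learning toy, not the paper's deep network: the state discretization (`discretize`, with hypothetical near/mid/far thresholds), the three-action space, and all hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical discretization: each laser ray is binned as
# 0 (near obstacle), 1 (mid range), or 2 (far / clear).
def discretize(scan, thresholds=(0.5, 1.5)):
    return tuple(
        0 if d < thresholds[0] else 1 if d < thresholds[1] else 2
        for d in scan
    )

ACTIONS = ("left", "forward", "right")

class QAgent:
    """Tabular stand-in for the learned navigation policy."""

    def __init__(self, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state):
        # Epsilon-greedy: explore occasionally, otherwise pick the best-known action.
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning backup; r would reward progress toward
        # the goal and penalize proximity to obstacles.
        best_next = max(self.q[(s_next, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])
```

In the paper's actual system, the table is replaced by a deep network trained online over continuous laser input, but the sense-act-update cycle is the same.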



Author information

Corresponding author: Cuong Pham.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Luong, M., Pham, C. Incremental Learning for Autonomous Navigation of Mobile Robots based on Deep Reinforcement Learning. J Intell Robot Syst 101, 1 (2021). https://doi.org/10.1007/s10846-020-01262-5


Keywords

Navigation