
Reinforcement based mobile robot path planning with improved dynamic window approach in unknown environment

Published in Autonomous Robots.

Abstract

Mobile robot path planning in an unknown environment is a fundamental and challenging problem in robotics. The dynamic window approach (DWA) is an effective local path planning method; however, some of its evaluation functions are inadequate, and no principled algorithm exists for choosing their weights, which makes DWA highly dependent on a global reference path and prone to failure in unknown environments. In this paper, an improved DWA based on Q-learning is proposed. First, the original evaluation functions are modified and extended with two new evaluation functions to enhance global navigation performance. Then, balancing effectiveness against speed, we define the state space, action space, and reward function of the Q-learning algorithm adopted for robot motion planning. Next, the parameters of the proposed DWA are learned adaptively by Q-learning, yielding a trained agent that adapts to the unknown environment. Finally, a series of comparative simulations shows that the proposed method achieves higher navigation efficiency and a higher success rate in complex unknown environments. The method is also validated in experiments on an XQ-4 Pro robot, verifying its navigation capability in both static and dynamic environments.
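The idea of weighting DWA evaluation functions with parameters chosen by a tabular Q-learning agent can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the classic DWA objective (a weighted sum of heading, obstacle-distance, and velocity terms) is standard, but the state/action discretization, the candidate weight triples, and the reward value below are hypothetical placeholders.

```python
import numpy as np

def dwa_score(heading, dist, vel, weights):
    """Weighted sum of the (already normalized) DWA evaluation terms."""
    alpha, beta, gamma = weights
    return alpha * heading + beta * dist + gamma * vel

def q_update(Q, s, a, r, s_next, lr=0.1, discount=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + lr * (r + discount * max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += lr * (r + discount * np.max(Q[s_next]) - Q[s, a])
    return Q

# Hypothetical setup: 3 discrete states (e.g. obstacle-density bins) and
# 2 actions, each action selecting one candidate weight triple for the DWA.
WEIGHT_CANDIDATES = [(0.8, 0.1, 0.1), (0.4, 0.4, 0.2)]
Q = np.zeros((3, 2))
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)   # reward progress toward the goal
score = dwa_score(1.0, 0.5, 0.2, WEIGHT_CANDIDATES[1])
```

At each planning cycle the agent would observe the discretized state, pick the weight triple with the largest Q-value (epsilon-greedily during training), score all candidate velocity commands with `dwa_score`, and update `Q` from the resulting reward.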




Author information

Corresponding author: Liang Shan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work is supported by the Natural Science Foundation of Jiangsu Province (BK20191286) and the Fundamental Research Funds for the Central Universities (30920021139).

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (FLV, 14896 KB)

Supplementary material 2 (MP4, 7561 KB)


About this article


Cite this article

Chang, L., Shan, L., Jiang, C. et al. Reinforcement based mobile robot path planning with improved dynamic window approach in unknown environment. Auton Robot 45, 51–76 (2021). https://doi.org/10.1007/s10514-020-09947-4


Keywords

Navigation