Evolving Systems, Volume 10, Issue 4, pp 649–658

Research on intelligent collision avoidance decision-making of unmanned ship in unknown environments

  • Chengbo Wang
  • Xinyu Zhang (corresponding author)
  • Longze Cong
  • Junjie Li
  • Jiawei Zhang
Original Paper

Abstract

To address the problem of intelligent collision avoidance for unmanned ships in unknown environments, a deep reinforcement learning obstacle avoidance decision-making (DRLOAD) algorithm is proposed. The problems encountered in unmanned ships’ intelligent avoidance decisions are analyzed, and design criteria for the proposed decision algorithm are put forward. Based on the Markov decision process, an intelligent collision avoidance model is established for unmanned ships. The optimal strategy of the intelligent decision system is determined through a value function that maximizes the return of the mapping from the unmanned ship’s state to its behavior. A reward function is specifically designed for obstacle avoidance, approaching the target, and safety. Finally, simulation experiments carried out in multi-state obstacle environments demonstrate the effectiveness of the proposed DRLOAD algorithm.
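
The abstract describes the DRLOAD approach only at a high level: a Markov decision process model of the ship, a value function mapping states to behaviors, and a reward shaped for obstacle avoidance, target approach, and safety. As a rough illustration only, the sketch below shows one possible form of such a shaped reward together with an epsilon-greedy action choice; the state fields, thresholds, weights, and function names are assumptions made for illustration and are not taken from the paper.

```python
import math
import random

# Illustrative sketch of a shaped reward of the kind the abstract describes,
# combining obstacle avoidance, progress toward the target, and safety.
# All thresholds, weights, and state fields below are assumptions, not the
# paper's actual values.

SAFE_DISTANCE = 50.0   # assumed minimum safe distance to an obstacle (m)
GOAL_RADIUS = 10.0     # assumed radius at which the target counts as reached (m)

def reward(state, next_state):
    """Reward for one transition of the unmanned-ship MDP (illustrative only)."""
    d_goal_prev = math.hypot(state["x"] - state["gx"], state["y"] - state["gy"])
    d_goal_next = math.hypot(next_state["x"] - next_state["gx"],
                             next_state["y"] - next_state["gy"])
    d_obstacle = next_state["d_obstacle"]          # distance to nearest obstacle

    if d_obstacle <= 0.0:                          # collision: large penalty
        return -100.0
    if d_goal_next <= GOAL_RADIUS:                 # target reached: large bonus
        return 100.0

    r = 1.0 * (d_goal_prev - d_goal_next)          # reward progress toward the target
    if d_obstacle < SAFE_DISTANCE:                 # safety term near obstacles
        r -= 0.5 * (SAFE_DISTANCE - d_obstacle) / SAFE_DISTANCE
    return r - 0.01                                # small step cost to keep paths short

def select_action(q_values, epsilon=0.1):
    """Epsilon-greedy choice over discrete heading/rudder actions (illustrative)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```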

Keywords

Deep reinforcement learning · Obstacle avoidance · Intelligent decision-making · Unmanned ship

Notes

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant 51309043), the Natural Science Foundation of Liaoning Province (Grant 2015020626), the Outstanding Young Scholars Growth Plan of Liaoning Province (Grant LJQ201405), a basic research project from the Key Laboratory of Liaoning Provincial Education Department (Grant LZ2015009), and the Fundamental Research Funds for the Central Universities (Grant 3132016315).

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  • Chengbo Wang (1, 2)
  • Xinyu Zhang (1, 2), corresponding author
  • Longze Cong (3)
  • Junjie Li (1, 2)
  • Jiawei Zhang (1, 2)

  1. Navigation College, Dalian Maritime University, Dalian, China
  2. Key Laboratory of Marine Simulation and Control for Ministry of Communications, Dalian Maritime University, Dalian, China
  3. School of Maritime Economics and Management, Dalian Maritime University, Dalian, China
