Abstract
The task of building unmanned automated vehicle (UAV) control systems is developing toward more complex forms of interaction between the vehicle and its environment, approaching real-life situations. With the emergence of the "smart city" concept, the view of transportation has shifted toward self-driving cars. In this work we develop a solution for a car's movement through a road intersection. To this end, we created a new environment to simulate the process and applied a hierarchical reinforcement learning method to obtain the required behaviour from the car. The environment can subsequently be used as a benchmark for future algorithms on this task.
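The paper's code is not reproduced here. As a purely illustrative sketch of the kind of setup the abstract describes, the following toy Gym-style intersection environment and a temporally extended "option" (in the spirit of the options framework of Sutton, Precup and Singh) are hypothetical: the class, method, and parameter names below are assumptions, not the authors' implementation.

```python
import random  # reserved for stochastic traffic in a fuller sketch


class IntersectionEnv:
    """Toy 1-D road-intersection environment with a Gym-like API.

    The car starts at position 0 and must reach the intersection exit
    at position GOAL within MAX_STEPS steps (hypothetical setup).
    """

    GOAL, MAX_STEPS = 5, 20

    def reset(self):
        self.pos, self.steps = 0, 0
        return self.pos

    def step(self, action):  # action: 0 = wait, 1 = advance one cell
        self.steps += 1
        self.pos += action
        done = self.pos >= self.GOAL or self.steps >= self.MAX_STEPS
        reward = 1.0 if self.pos >= self.GOAL else -0.01  # small step cost
        return self.pos, reward, done, {}


def drive_option(env, state, k=3):
    """An 'option': a fixed low-level policy (keep advancing) executed
    for up to k primitive steps or until the episode terminates."""
    total = 0.0
    done = False
    for _ in range(k):
        state, r, done, _ = env.step(1)
        total += r
        if done:
            break
    return state, total, done


# A trivial high-level controller that repeatedly invokes the option.
env = IntersectionEnv()
state, done = env.reset(), False
while not done:
    state, reward, done = drive_option(env, state)
print(state)  # → 5
```

In a full hierarchical agent, the high-level policy would choose among several such options (e.g. wait at the stop line, turn, cross) and would itself be trained, for instance with an actor-critic method; here the single option merely illustrates the two-level control structure.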
Acknowledgments
This work was supported by the Russian Science Foundation (Project No. 18-71-00143).
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Shikunov, M., Panov, A.I. (2020). Hierarchical Reinforcement Learning Approach for the Road Intersection Task. In: Samsonovich, A. (eds) Biologically Inspired Cognitive Architectures 2019. BICA 2019. Advances in Intelligent Systems and Computing, vol 948. Springer, Cham. https://doi.org/10.1007/978-3-030-25719-4_64
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-25718-7
Online ISBN: 978-3-030-25719-4
eBook Packages: Intelligent Technologies and Robotics (R0)