Abstract
The advent of autonomous vehicles raises many ethical and technological questions. High-performing controllers that are transparent and predictable are crucial for generating trust in such systems. Data-driven, black-box approaches such as deep learning and reinforcement learning are increasingly used in robotics because of their ability to process large amounts of information with outstanding performance, but they raise concerns about transparency and predictability. Model-based control approaches remain a reliable and predictable alternative, used extensively in industry, but with restrictions of their own. Which approach is preferable is difficult to assess, as the two are rarely compared directly on the same task, especially for autonomous vehicles. Here we compare two popular approaches to control synthesis, model-based control, i.e. a Model Predictive Controller (MPC), and data-driven control, i.e. Reinforcement Learning (RL), on a lane-keeping task with a speed limit for an autonomous vehicle; each controller took over after a human driver had departed the lane or exceeded the speed limit. We report the differences between the two approaches in analysis, architecture, synthesis, tuning and deployment, and compare their performance, taking the overall benefits and difficulties of each into account.
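To make the structural contrast between the two paradigms concrete, the following is a minimal, illustrative sketch only: a one-dimensional lateral-offset "vehicle", a receding-horizon controller in the spirit of MPC (exhaustive search over candidate inputs stands in for a QP solver), and an RL-style scalar reward penalizing lane offset and speeding. The dynamics, cost weights, and function names are assumptions for illustration, not the paper's actual models.

```python
def step(offset, steer, dt=0.1):
    """Toy lateral dynamics: steering input directly changes lateral offset."""
    return offset + steer * dt

def mpc_action(offset, horizon=5, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """MPC-style choice: pick the constant steering input minimizing a
    quadratic cost over a short prediction horizon."""
    def cost(steer):
        o, c = offset, 0.0
        for _ in range(horizon):
            o = step(o, steer)
            c += o**2 + 0.1 * steer**2   # track the centre line, penalize effort
        return c
    return min(candidates, key=cost)

def reward(offset, speed, speed_limit=30.0):
    """RL-style reward: an RL agent would learn a policy maximizing this,
    rather than optimizing over an explicit model at run time."""
    r = -abs(offset)
    if speed > speed_limit:
        r -= (speed - speed_limit)   # penalty grows with the violation
    return r

# Closed loop: the MPC-like controller drives a lane departure back toward zero.
o = 2.0
for _ in range(40):
    o = step(o, mpc_action(o))
```

The sketch highlights the key difference reported in the paper: the model-based controller embeds an explicit vehicle model and cost inside an online optimization, whereas the data-driven controller only exposes its objective through a reward signal and must learn the mapping from state to action offline.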
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this paper
Lopez Pulgarin, E.J., Irmak, T., Paul, J.V., Meekul, A., Herrmann, G., Leonards, U. (2018). Comparing Model-Based and Data-Driven Controllers for an Autonomous Vehicle Task. In: Giuliani, M., Assaf, T., Giannaccini, M. (eds) Towards Autonomous Robotic Systems. TAROS 2018. Lecture Notes in Computer Science(), vol 10965. Springer, Cham. https://doi.org/10.1007/978-3-319-96728-8_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-96727-1
Online ISBN: 978-3-319-96728-8
eBook Packages: Computer Science (R0)