
Comparing Model-Based and Data-Driven Controllers for an Autonomous Vehicle Task

  • Conference paper
  • Towards Autonomous Robotic Systems (TAROS 2018)

Abstract

The advent of autonomous vehicles raises many questions from an ethical and technological point of view. Controllers that combine high performance with transparency and predictability are crucial for generating trust in such systems. Data-driven, black-box approaches such as deep learning and reinforcement learning are used more and more in robotics because of their ability to process large amounts of information with outstanding performance, but they raise concerns about transparency and predictability. Model-based control approaches remain a reliable and predictable alternative, used extensively in industry, but come with restrictions of their own. Which of these approaches is preferable is difficult to assess, as they are rarely compared directly on the same task, especially for autonomous vehicles. Here we compare two popular approaches to control synthesis, model-based control, i.e. a Model Predictive Controller (MPC), and data-driven control, i.e. Reinforcement Learning (RL), on a lane-keeping task with a speed limit for an autonomous vehicle; the controllers take over after a human driver has departed the lane or exceeded the speed limit. We report the differences between the two control approaches across analysis, architecture, synthesis, tuning and deployment, and compare their performance, taking the overall benefits and difficulties of each approach into account.
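To give a flavour of the data-driven side of this comparison, the sketch below trains a tabular Q-learning agent on a toy lane-keeping problem. This is only an illustrative assumption, not the paper's actual RL controller: the discretised lateral-offset states, the steering actions, the dynamics and all hyperparameters here are invented for demonstration.

```python
import random

# Toy lane-keeping environment (illustrative, not from the paper):
# the state is a discretised lateral offset from the lane centre,
# and each action steers one step left, straight, or right.
STATES = [-2, -1, 0, 1, 2]
ACTIONS = [-1, 0, 1]  # steer left / keep straight / steer right

def step(state, action):
    """Apply a steering action; return (next_state, reward)."""
    nxt = max(-2, min(2, state + action))
    return nxt, -abs(nxt)  # penalise distance from the lane centre

def train_q(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(10):  # short episodes suffice for this toy task
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a_: q[(s, a_)])
            s2, r = step(s, a)
            best_next = max(q[(s2, a_)] for a_ in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train_q()
# Greedy policy: at each offset, pick the highest-valued steering action.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

After training, the greedy policy steers back towards the lane centre from either side and holds straight at the centre, which is the qualitative behaviour a lane-keeping reward induces. An MPC, by contrast, would obtain comparable behaviour by minimising a cost over a predicted vehicle-model trajectory subject to explicit constraints, which is what makes it more transparent and predictable than a learned value table or network.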



Author information

Correspondence to Erwin Jose Lopez Pulgarin, Guido Herrmann or Ute Leonards.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Lopez Pulgarin, E.J., Irmak, T., Paul, J.V., Meekul, A., Herrmann, G., Leonards, U. (2018). Comparing Model-Based and Data-Driven Controllers for an Autonomous Vehicle Task. In: Giuliani, M., Assaf, T., Giannaccini, M. (eds) Towards Autonomous Robotic Systems. TAROS 2018. Lecture Notes in Computer Science, vol 10965. Springer, Cham. https://doi.org/10.1007/978-3-319-96728-8_15


  • DOI: https://doi.org/10.1007/978-3-319-96728-8_15

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-96727-1

  • Online ISBN: 978-3-319-96728-8

  • eBook Packages: Computer Science; Computer Science (R0)
