Comparing Model-Based and Data-Driven Controllers for an Autonomous Vehicle Task

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10965)


The advent of autonomous vehicles raises many ethical and technological questions. High-performing controllers that are transparent and predictable are crucial to generating trust in such systems. Popular data-driven, black-box approaches such as deep learning and reinforcement learning are used increasingly in robotics because of their ability to process large amounts of information with outstanding performance, but they raise concerns about transparency and predictability. Model-based control approaches remain a reliable and predictable alternative, used extensively in industry, but with restrictions of their own. Which of these approaches is preferable is difficult to assess, as they are rarely compared directly on the same task, especially for autonomous vehicles. Here we compare two popular approaches to control synthesis, model-based control, i.e. a Model Predictive Controller (MPC), and data-driven control, i.e. Reinforcement Learning (RL), on a lane-keeping task with a speed limit for an autonomous vehicle; the controllers were to take control after a human driver had departed the lane or exceeded the speed limit. We report the differences between the two control approaches in analysis, architecture, synthesis, tuning and deployment, and compare their performance, taking the overall benefits and difficulties of each approach into account.
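The handover rule described above (the controller intervenes once the driver departs the lane or exceeds the speed limit) can be sketched as a simple trigger. This is an illustrative sketch only; the function name and numeric thresholds below are assumptions, not values taken from the paper:

```python
def needs_intervention(lateral_offset_m: float, speed_mps: float,
                       lane_half_width_m: float = 1.75,
                       speed_limit_mps: float = 13.4) -> bool:
    """Decide whether the assistive controller (MPC or RL) should take over.

    All threshold values are hypothetical placeholders; the abstract does
    not specify the lane geometry or the speed limit used in the study.
    """
    departed_lane = abs(lateral_offset_m) > lane_half_width_m  # lane departure
    speeding = speed_mps > speed_limit_mps                     # above speed limit
    return departed_lane or speeding
```

Either condition alone is enough to trigger the handover; once it fires, control passes from the human driver to the synthesized controller.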


Semi-autonomous Vehicles · Model Predictive Controller (MPC) · Human Driver · Deep Deterministic Policy Gradient (DDPG) · Advanced Driver Assistance Systems (ADAS)



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Mechanical Engineering, University of Bristol, Bristol, UK
  2. Experimental Psychology, University of Bristol, Bristol, UK
