
Deep learning for lateral vehicle control – an end-to-end trained multi-fusion steering model

  • Johann Haselberger
  • Jian Chen
  • Bernhard Schick
Conference paper
Part of the Proceedings book series (PROCEE)

Abstract

Deep Learning-based behavior reflex methods have found their way into modern vehicles. To model human driving behavior, it is not sufficient to rely solely on individual, non-contiguous camera frames without taking vehicle signals or road-specific features into account. In this work, four temporal fusion methods are evaluated on the basis of three different Deep Learning models. The proposed spatio-temporal Mixed Fusion model extends present end-to-end models and consists of multiple levels of fusion. The raw image data from a single front-facing camera is mixed with recorded vehicle data and a map-based predicted road bank angle gradient vector. The model accesses multiple time axes: temporal features of multiple image frames are extracted through a combination of convolution and LSTM layers, while the model can also make assumptions about the future road condition by using upcoming ground-truth road bank angle changes. Experiments are performed on a recorded data set of real-world driving. The results show that this approach leads to an accurate imitation of the human driver with an inference capability of more than 60 FPS.
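To make the described fusion concept more concrete, the following is a minimal sketch of such a multi-input spatio-temporal model, assuming a Keras/TensorFlow setup. All layer sizes, sequence lengths, input shapes, and signal names are illustrative assumptions for this sketch and are not the configuration reported in the paper: a shared CNN extracts per-frame features, an LSTM fuses them over time, and the result is concatenated with vehicle signals and a bank angle preview vector before regressing the steering angle.

```python
# Minimal sketch of a multi-fusion steering model (assumed shapes and sizes).
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN = 5                # number of past camera frames (assumed)
IMG_SHAPE = (66, 200, 3)   # per-frame resolution (assumed)
N_VEHICLE_SIGNALS = 4      # e.g. speed, yaw rate, lateral acceleration, last steering angle (assumed)
N_BANK_PREVIEW = 10        # length of the predicted road bank angle gradient vector (assumed)

# Spatial feature extractor shared across all time steps.
frame_encoder = tf.keras.Sequential([
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 3, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
])

# Temporal fusion: apply the CNN to every frame, then aggregate with an LSTM.
image_seq = layers.Input(shape=(SEQ_LEN, *IMG_SHAPE), name="camera_frames")
spatial_features = layers.TimeDistributed(frame_encoder)(image_seq)
temporal_features = layers.LSTM(64)(spatial_features)

# Additional modalities: recorded vehicle signals and the map-based bank angle preview.
vehicle_signals = layers.Input(shape=(N_VEHICLE_SIGNALS,), name="vehicle_signals")
bank_preview = layers.Input(shape=(N_BANK_PREVIEW,), name="bank_angle_gradient")

# Mixed fusion: concatenate all feature streams and regress the steering angle.
fused = layers.Concatenate()([temporal_features, vehicle_signals, bank_preview])
x = layers.Dense(64, activation="relu")(fused)
x = layers.Dense(16, activation="relu")(x)
steering = layers.Dense(1, name="steering_angle")(x)

model = Model(inputs=[image_seq, vehicle_signals, bank_preview], outputs=steering)
model.compile(optimizer="adam", loss="mse")
model.summary()
```

Training such a sketch would pair each sample's frame stack, vehicle signal vector, and bank angle preview with the recorded human steering angle as the regression target.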



Copyright information

© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2020

Authors and Affiliations

  • Johann Haselberger (1), corresponding author
  • Jian Chen (2)
  • Bernhard Schick (1)

  1. Kempten University of Applied Sciences, Kempten, Germany
  2. Porsche Engineering Services GmbH, Bietigheim-Bissingen, Germany
