Drivers’ Manoeuvre Classification for Safe HRI

  • Erwin Jose Lopez Pulgarin
  • Guido Herrmann
  • Ute Leonards
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10454)

Abstract

Ever-increasing autonomy of machines and the need to interact with them create challenges in ensuring safe operation. Recent technical and commercial interest in increasing the autonomy of vehicles has led to the integration of more sensors and actuators inside the vehicle, making cars increasingly robot-like. For interaction with semi-autonomous cars, these sensors could be used to create new safety mechanisms. This work explores the use of motion-tracking (i.e. skeletal-tracking) data gathered from the driver whilst driving to learn to classify the manoeuvre being performed. A kernel-based classifier is trained with empirically selected features based on data gathered from a Kinect V2 sensor in a controlled environment. The results show that skeletal-tracking data can be used in a driving scenario to classify manoeuvres, and they set a foundation for further work.
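
The pipeline described above can be illustrated with a minimal sketch, assuming an RBF-kernel support-vector machine as the kernel-based classifier and a placeholder feature layout (windowed upper-body joint coordinates and velocities from the Kinect V2 skeleton) with hypothetical manoeuvre labels; the features, labels and classifier settings used in the paper itself may differ.

```python
# Minimal sketch (not the authors' implementation): training a kernel-based
# classifier on skeletal-tracking features to label driving manoeuvres.
# The feature layout, manoeuvre classes and synthetic data are assumptions
# made purely for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical data: each row holds features derived from upper-body joint
# positions over a short time window (e.g. wrist/elbow/shoulder coordinates
# and their velocities); each label is a manoeuvre class such as
# 0 = straight driving, 1 = left turn, 2 = right turn.
n_samples, n_features = 600, 30
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 3, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# RBF-kernel SVM as one common choice of kernel-based classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice the synthetic arrays would be replaced by feature windows computed from the driver's recorded joint trajectories, with labels taken from the manoeuvre annotations of the driving session.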

Keywords

HRI · Semi-autonomous vehicles · Vehicles · Driver actions · Classification · Machine learning

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Mechanical Engineering, University of Bristol, Bristol, UK
  2. Experimental Psychology, University of Bristol, Bristol, UK
