Automatic Detection of a Driver’s Complex Mental States

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10406)

Abstract

Automatic classification of drivers' mental states is an important yet relatively unexplored topic. In this paper, we define a taxonomy of complex mental states relevant to driving, namely: Happy, Bothered, Concentrated and Confused. We present our methodology for segmenting and annotating a spontaneous dataset of natural driving videos from 10 different drivers. We also present the real-time annotation tool used to label the dataset via an emotion perception experiment, and discuss the challenges faced in obtaining the ground-truth labels. Finally, we present a methodology for automatic classification of drivers' mental states. Using facial Action Units as input features, we compare SVM models trained on our dataset with an existing nearest-neighbour model pre-trained on a posed dataset, and demonstrate that our temporal SVM approach yields better results. The dataset's extracted features and validated emotion labels, together with the annotation tool, will be made available to the research community.
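The classification approach described above can be sketched in outline: per-frame facial Action Unit (AU) intensities are summarised over a video segment into temporal statistics, and an SVM maps the resulting feature vector to one of the four mental-state labels. This is a minimal illustrative sketch only; the window size, the 17-AU dimensionality, the choice of summary statistics, and the RBF kernel are assumptions, not the authors' exact pipeline, and the data here is synthetic.

```python
# Hedged sketch of temporal SVM classification over facial Action Unit
# (AU) intensities. All dimensions and statistics are illustrative
# assumptions; real AU features would come from a tool such as OpenFace.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

STATES = ["Happy", "Bothered", "Concentrated", "Confused"]

def temporal_features(au_frames: np.ndarray) -> np.ndarray:
    """Summarise a window of per-frame AU intensities (frames x AUs)
    into one vector: per-AU mean, std, min and max."""
    return np.concatenate([au_frames.mean(axis=0), au_frames.std(axis=0),
                           au_frames.min(axis=0), au_frames.max(axis=0)])

# Synthetic stand-in data: 40 video segments of 30 frames x 17 AUs each.
rng = np.random.default_rng(0)
X = np.stack([temporal_features(rng.random((30, 17))) for _ in range(40)])
y = rng.integers(0, len(STATES), size=40)  # random stand-in labels

# Standardise features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

pred = clf.predict(X[:5])
print([STATES[i] for i in pred])
```

In practice the labels would come from the perception-experiment annotations rather than random integers, and model selection (kernel, C, window length) would be validated per driver given the small number of subjects.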

Notes

Acknowledgment

The work presented in this paper was funded and supported by Jaguar Land Rover, Coventry, UK.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Engineering, University of Cambridge, Cambridge, UK
  2. Computer Laboratory, University of Cambridge, Cambridge, UK
  3. Jaguar Land Rover, Coventry, UK