
Automatic stress analysis from facial videos based on deep facial action units recognition

  • Original Article
  • Published in: Pattern Analysis and Applications

A Correction to this article was published on 09 March 2022.

Abstract

Stress conditions are manifested in various physiological processes of the human body as well as in the human face. Facial expressions can be modelled consistently through the Facial Action Coding System (FACS), which describes them in terms of facial Action Unit (AU) parameters. This paper focuses on the automated recognition and analysis of AUs in videos as quantitative indices to discriminate between neutral and stress states. A novel deep learning pipeline for the automatic recognition of facial action units is proposed, trained on two publicly available annotated facial datasets, the UNBC and the BOSPHORUS datasets. Two types of descriptive facial features are extracted from the input images: geometric features (non-rigid 3D facial deformations due to facial expressions) and appearance features (deep representations of facial appearance). The extracted features are then fed to deep fully connected layers that regress AU intensities and robustly perform AU classification. The proposed algorithm is applied to the SRD’15 stress dataset, which contains neutral and stress states elicited by four types of stressors. We present thorough experimental results and comparisons, which indicate that the proposed methodology yields particularly promising performance in terms of both AU detection and stress recognition accuracy. Furthermore, the AUs relevant to stress were experimentally identified, providing evidence that their intensity increases significantly during stress, leading to a more expressive human face compared to the neutral state.
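To make the feature-fusion step described above concrete, the following is a minimal PyTorch sketch of how geometric and appearance features could be concatenated and fed to fully connected layers that regress per-AU intensities. All dimensions (28 geometric expression coefficients, a 2048-dimensional appearance embedding, 17 target AUs), the hidden width, the dropout rate, and the activation threshold are illustrative assumptions for exposition, not values taken from the paper.

    import torch
    import torch.nn as nn

    class AUIntensityHead(nn.Module):
        """Sketch of the fusion idea from the abstract: geometric features
        (3D expression deformation coefficients) and deep appearance
        features are concatenated and passed through fully connected
        layers that regress per-AU intensities. Dimensions are
        illustrative only."""

        def __init__(self, geom_dim=28, app_dim=2048, hidden_dim=512, num_aus=17):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(geom_dim + app_dim, hidden_dim),
                nn.ReLU(),
                nn.Dropout(p=0.5),
                nn.Linear(hidden_dim, num_aus),  # one intensity per AU
            )

        def forward(self, geom_feats, app_feats):
            # geom_feats: (batch, geom_dim), app_feats: (batch, app_dim)
            fused = torch.cat([geom_feats, app_feats], dim=1)
            return self.mlp(fused)  # (batch, num_aus) AU intensity estimates

    # Usage: regressed intensities can be thresholded to yield binary
    # AU activations, turning the regression output into a classifier.
    head = AUIntensityHead()
    intensities = head(torch.randn(8, 28), torch.randn(8, 2048))
    active_aus = intensities >= 1.0  # hypothetical activation threshold

Deriving binary AU detection by thresholding regressed intensities is one plausible reading of layers that "regress AU intensities and robustly perform AU classification"; the paper's actual classification mechanism may differ.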




Author information


Corresponding author

Correspondence to Giorgos Giannakakis.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Giannakakis, G., Koujan, M.R., Roussos, A. et al. Automatic stress analysis from facial videos based on deep facial action units recognition. Pattern Anal Applic 25, 521–535 (2022). https://doi.org/10.1007/s10044-021-01012-9

