Abstract
The Facial Action Coding System (FACS) for studying facial expressions is manual and requires significant effort and expertise. This paper explores automated techniques for generating Action Units (AUs) to study facial expressions. We propose an unsupervised approach, based on Principal Component Analysis (PCA) and facial keypoint tracking, that derives data-driven AUs (PCA AUs) from the publicly available DISFA dataset. The PCA AUs are consistent with the directions of facial muscle movements and explain over 92.83% of the variance in other public test datasets (BP4D-Spontaneous and CK+), indicating their ability to generalize across facial expressions. In terms of variance explained on the test datasets, the PCA AUs are also comparable to a keypoint-based equivalent of the FACS AUs. Moreover, PCA AUs can be coded at 30 fps on an AMD EPYC 7402 24-core processor. In conclusion, our research demonstrates the potential of an automated coding system as an alternative to manual FACS, which could enable efficient real-time analysis of facial expressions in psychology and related fields. To promote further research, we have made the code publicly available.
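The pipeline described in the abstract can be illustrated with a minimal sketch: PCA is fit on expression-induced keypoint displacements so that each principal component acts as a data-driven "PCA AU", and explained variance on a held-out dataset is obtained by reconstructing the test displacements from those components. The array names, dimensions, and number of components below are illustrative assumptions, not the authors' exact settings or code.

```python
# Minimal sketch (not the authors' exact pipeline): derive "PCA AUs" from
# tracked facial keypoints and measure explained variance on a test set.
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: each row holds one frame's flattened (x, y) keypoint
# coordinates after alignment, minus the subject's neutral-face frame, so
# rows represent keypoint displacements driven by facial expression.
rng = np.random.default_rng(0)
train_displacements = rng.normal(size=(5000, 2 * 68))  # e.g. DISFA-like training frames
test_displacements = rng.normal(size=(1000, 2 * 68))   # e.g. BP4D/CK+-like test frames

# Fit PCA on the training displacements; each principal component is one
# data-driven "PCA AU" (a direction of correlated keypoint movement).
n_aus = 20                        # assumed number of components
pca = PCA(n_components=n_aus).fit(train_displacements)
pca_aus = pca.components_         # shape: (n_aus, 2 * 68)

# Explained variance on the *test* data: project onto the PCA AUs,
# reconstruct, and compare residual variance to total variance.
centered = test_displacements - pca.mean_
scores = centered @ pca_aus.T
reconstruction = scores @ pca_aus
explained = 1.0 - np.sum((centered - reconstruction) ** 2) / np.sum(centered ** 2)
print(f"Variance explained on test set: {explained:.2%}")

# Coding a new frame in real time reduces to a single matrix product,
# which is why frame rates such as 30 fps are attainable on a CPU.
new_frame = rng.normal(size=(2 * 68,))
au_intensities = (new_frame - pca.mean_) @ pca_aus.T
```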
Notes
1. The code can be found here: https://github.com/Shivansh-ct/PCA-AUs.
2. Modification of image https://github.com/Fang-Haoshu/Halpe-FullBody/blob/master/docs/face.jpg.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Chandra Tripathi, S., Garg, R. (2023). A PCA-Based Keypoint Tracking Approach to Automated Facial Expressions Encoding. In: Maji, P., Huang, T., Pal, N.R., Chaudhury, S., De, R.K. (eds) Pattern Recognition and Machine Intelligence. PReMI 2023. Lecture Notes in Computer Science, vol 14301. Springer, Cham. https://doi.org/10.1007/978-3-031-45170-6_85
DOI: https://doi.org/10.1007/978-3-031-45170-6_85
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-45169-0
Online ISBN: 978-3-031-45170-6
eBook Packages: Computer Science, Computer Science (R0)