Abstract
We present a system for facial expression recognition that is evaluated on multiple databases. Automated facial expression recognition systems face a number of characteristic challenges. First, obtaining natural training data is difficult, especially for facial configurations expressing emotions such as sadness or fear; publicly available databases therefore consist of acted facial expressions and are biased by their authors' design decisions. Second, evaluating trained algorithms against real-world behavior is challenging, again owing to the artificial conditions in the available image data. To tackle these challenges, and because our goal is to train classifiers for an online system, we use several databases in our evaluation. Comparing classifiers across databases determines the classifiers' capability to generalize more reliably than traditional self-classification.
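To illustrate the cross-database evaluation protocol the abstract describes, the toy sketch below trains a classifier on one corpus and scores it both on held-out data from the same corpus (self-classification) and on a second, differently biased corpus (cross-database). Everything here is an assumption for illustration: the synthetic 2-D "databases," the two-class label set, and the nearest-centroid classifier stand in for the paper's actual features, expression classes, and learning method.

```python
# Hypothetical sketch of cross-database evaluation (not the authors' pipeline).
# Two synthetic "databases" share labels but differ in acquisition bias;
# a model trained on database A is scored on both A and B.
import random

random.seed(0)

def make_database(centers, n=50, spread=1.0):
    """Synthetic database: 2-D feature vectors labeled by expression."""
    data = []
    for label, (cx, cy) in centers.items():
        for _ in range(n):
            data.append(((cx + random.gauss(0, spread),
                          cy + random.gauss(0, spread)), label))
    return data

# Same expressions, different recording bias (shifted means, wider spread).
db_a = make_database({"happy": (0, 0), "sad": (4, 4)}, spread=0.5)
db_b = make_database({"happy": (1, 0), "sad": (3, 5)}, spread=1.5)

def train_centroids(data):
    """Nearest-centroid training: mean feature vector per label."""
    sums, counts = {}, {}
    for (x, y), label in data:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {l: (sx / counts[l], sy / counts[l]) for l, (sx, sy) in sums.items()}

def accuracy(model, data):
    correct = 0
    for (x, y), label in data:
        pred = min(model, key=lambda l: (x - model[l][0]) ** 2
                                        + (y - model[l][1]) ** 2)
        correct += pred == label
    return correct / len(data)

model = train_centroids(db_a)
self_acc = accuracy(model, db_a)   # optimistic: same corpus, same bias
cross_acc = accuracy(model, db_b)  # stricter estimate of generalization
print(f"self-classification: {self_acc:.2f}  cross-database: {cross_acc:.2f}")
```

The gap between the two scores is the point: self-classification inherits the database's bias, while the cross-database score is the more honest proxy for real-world behavior.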
Author information
Additional information
The article is published in the original. This article is based on the report presented at the 8th Open German-Russian Workshop "Pattern Recognition and Image Understanding," Nizhni Novgorod, November 21–26, 2011.
Christoph Mayer studied computer science at the Technische Universität München from 2000 to 2007 and received his doctoral degree in 2012. While working on his Ph.D., he was a member of the German Cluster of Excellence "Cognition for Technical Systems" in the Intelligent Autonomous Systems Group. His research interests were in the field of face model fitting, facial expression recognition, and emotion recognition. He was first author of the paper "Adjusted Pixel Features for Facial Component Classification," which appeared in the journal Image and Vision Computing in 2009, and received the best paper award in 2009 for the paper "Facial Expression Recognition with 3D Deformable Models," presented at the conference "Advances in Computer-Human Interaction." His current research interest is the automatic analysis of soccer games from optical camera data.
Martin Eggers received a diploma (Dipl.-Inf. Univ.) in computer science from the Technische Universität München (TUM) in 2009. He is currently a doctoral candidate in the German Cluster of Excellence "Cognition for Technical Systems" (CoTeSys) at TUM, where his research interests are appearance modeling, visual object tracking, surveillance architectures, and applications of large-scale multicamera systems. In 2009, he received a best paper award for the paper "Facial Expression Recognition with 3D Deformable Models" at the conference "Advances in Computer-Human Interaction."
Bernd Radig received his diploma degree in physics in 1972 from the University of Bonn and his doctoral degree in computer science in 1978 from the University of Hamburg, where he obtained his venia legendi and completed his habilitation in 1982. He was Assistant and Associate Professor in Hamburg (1982–1986) and full professor, holding the chair of Image Understanding and Knowledge-Based Systems at the Fakultät für Informatik, Technische Universität München (1986–2009). He is a member of the Emeriti of Excellence programme. He was chairman and founder of the Association of Bavarian Research Cooperations (1993–2007), a unique network of scientists specializing in challenging disciplines in cooperation with Bavarian enterprises. In 1988 he founded the Bavarian Research Center for Knowledge-Based Systems (FORWISS), an institute shared by the three universities TU München, Erlangen, and Passau. He was general chairman of the annual symposium of the German Association for Pattern Recognition in 1981, 1991, and 2001, as well as of the European Conference on Artificial Intelligence (ECAI) in 1988. He is active as an organizer and program committee member of the German-Russian Workshop on Pattern Recognition. He holds the German Order of Merit (1992) and the award Pro meritis scientiae et litterarum of the State of Bavaria for outstanding contributions to science and art (2002). His current research activities are in real-time image sequence understanding for applications in robotics, sports, and driver assistance systems.
About this article
Cite this article
Mayer, C., Eggers, M. & Radig, B. Cross-database evaluation for facial expression recognition. Pattern Recognit. Image Anal. 24, 124–132 (2014). https://doi.org/10.1134/S1054661814010106