FACSGen: A Tool to Synthesize Emotional Facial Expressions Through Systematic Manipulation of Facial Action Units
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability: it allows neither the systematic parametric manipulation of specific features of facial expressions nor that of more general properties of facial identity (age, ethnicity, gender). To remedy this problem, we developed FACSGen, a novel tool for creating realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System (FACS). FACSGen gives researchers full control over facial action units and the corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
Keywords: Emotion · Facial expression · Software · Research material · Facial Action Coding System (FACS)
This work was partially supported by the following sources: HUMAINE, 6th Framework Programme IST Multimodal Interfaces, http://emotion-research.net. The National Centre of Competence in Research (NCCR) in Affective Sciences financed by the Swiss National Science Foundation (n° 51NF40-104897). A grant from the Swiss National Science Foundation (105311-108187/1 to David Sander and Patrik Vuilleumier). The “Programme d’actions intégrées Franco-Suisse Germaine de Staël” in collaboration with the Swiss Academy for Technical Sciences (to Lionel Reveret and David Sander).
The authors would like to thank Prof. Susanne Kaiser, Dr. Marc Méhu, Katia Schenkel, Birgit Michel, and Stéphane With (University of Geneva) for their expertise and guidance on the FACS, and Dr. Mina Vasalou (University of Bath) for comments on drafts of this paper.
- Blanz, V., & Vetter, T. (1999). A morphable model for the synthesis of 3D faces. In Proceedings of SIGGRAPH ’99. Los Angeles, CA: ACM.
- Corneille, O., Hugenberg, K., & Potter, T. (2007). Applying the attractor field model to social cognition: Perceptual discrimination is facilitated, but memory is impaired for faces displaying evaluatively congruent expressions. Journal of Personality and Social Psychology, 93(3), 335–352.
- Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
- Ekman, P., Friesen, W. V., & Hager, J. (2002). The facial action coding system. London, UK.
- Ekman, P., Irwin, W., & Rosenberg, E. L. (1994). The emotional facial action coding system (EMFACS). London, UK.
- FaceGen Modeller. [Software] (2009). Singular Inversions Inc. Retrieved from http://www.facegen.com/.
- Grammer, K., Tessarek, A., & Hofer, G. (in preparation). From emoticons to avatars: The simulation of facial expression. In A. Kappas (Ed.), Emotional communication on the internet. Retrieved from http://evolution.anthro.univie.ac.at/institutes/urbanethology/projects/simulation/emosym/index.html.
- Helzle, V., Biehn, C., Schlömer, T., & Linner, F. (2004). Adaptable setup for performance driven facial animation. In Proceedings of SIGGRAPH ’04—sketches. Los Angeles, CA: ACM.
- Kanade, T., Cohn, J. F., & Tian, Y. (2000). Comprehensive database for facial expression analysis. In Proceedings of the fourth IEEE international conference on automatic face and gesture recognition. Retrieved from http://vasc.ri.cmu.edu/idb/html/face/facial_expression/index.html.
- Lundqvist, D., Esteves, F., & Öhman, A. (1998). The Karolinska directed emotional faces—KDEF [CD ROM]. Department of Clinical Neuroscience, Psychology Section, Karolinska Institute. ISBN 91-630-7164-9.
- Malatesta, L., Raouzaiou, A., Karpouzis, K., & Kollias, S. (2006). MPEG-4 facial expression synthesis based on appraisal theory. In The 3rd IFIP conference on artificial intelligence applications and innovations, AIAI 2006. Athens, Greece.
- Moradi, F., Koch, C., Shimojo, S., Sarma, G., & Gutierrez, J. (2005). Adaptation to face identity and emotional expression depends on attention. In Proceedings of the Vision Sciences Society 5th annual meeting. Sarasota, FL: Journal of Vision.
- Oosterhof, N. N., & Todorov, A. (2008). The functional basis of face evaluation. Proceedings of the National Academy of Sciences of the United States of America, 105(32), 11087–11092.
- Pantic, M., Valstar, M. F., Rademaker, R., & Maat, L. (2005). Web-based database for facial expression analysis. In Proceedings of the IEEE international conference on multimedia and expo (ICME’05). Retrieved from http://www.docstoc.com/docs/2933918/EULA-End-User-License-Agreement-MMI-Face-Database-www-mmifacedb.
- Parke, F. I., & Waters, K. (1996). Computer facial animation. Natick, MA: A.K. Peters Ltd.
- Pasquariello, S., & Pelachaud, C. (2001). Greta: A simple facial animation engine. In R. Rajkumar, M. Köppen, S. Ovaska, T. Furuhashi, & F. Hoffmann (Eds.), 6th online world conference on soft computing in industrial applications, session on soft computing for intelligent 3D agents. Germany: Springer.
- Poser 7 [Software] (2007). e-frontier. Retrieved from http://www.e-frontier.com/go/products/poser/.
- Roesch, E. B., Sander, D., Mumenthaler, C., Kerzel, D., & Scherer, K. R. (2010a). Psychophysics of emotion: The QUEST for emotional attention. Journal of Vision, 10(3):4, 1–9.
- Roesch, E. B., Sander, D., & Scherer, K. R. (2009). Emotion and motion in facial expressions modulate the attentional blink. Perception, 38, 466.
- Roesch, E. B., Sander, D., & Scherer, K. R. (2010b). The 4th dimension(s) of emotion perception: Emotion and motion in facial expressions modulate the attentional blink. Manuscript in preparation.