How to Make a Robot Smile? Perception of Emotional Expressions from Digitally-Extracted Facial Landmark Configurations
To design robots or embodied conversational agents that can accurately display facial expressions indicating an emotional state, we need technology to produce those facial expressions, and research that investigates the relationship between those technologies and human social perception of the resulting artificial faces. Our starting point is assessing human perception of core facial information: moving dots representing the facial landmarks, i.e., the locations and movements of the crucial parts of a face. Earlier research suggested that participants can identify facial expressions relatively accurately when all they can see of a real human face are moving white painted dots representing the facial landmarks (although less accurately than when recognizing full faces). In the current study we investigated how accurately participants recognize emotions expressed by comparable facial landmarks (compared with emotions expressed by full faces), but now used face-tracking software to produce the facial landmarks. In line with earlier findings, results suggested that participants could accurately identify emotions expressed by the facial landmarks (though less accurately than those expressed by full faces). These results thereby provide a starting point for further research on the fundamental characteristics of the technology (AI methods) producing facial emotional expressions and its evaluation by human users.
Keywords: Robots · Emotion · Facial expression · Facial landmarks · FaceTracker · Perception
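To make the point-light idea concrete, the sketch below animates a "smile" as a set of moving landmark dots by linearly interpolating between a neutral and a smiling configuration. All coordinate values and names here are invented for illustration (they are not data or code from the study, which used face-tracking software to extract landmarks from recorded faces):

```python
# Hypothetical sketch: a facial expression as moving landmark dots.
# Coordinates are invented illustrative values in normalized [0, 1] space,
# not landmark data from the study.

NEUTRAL = {
    "mouth_left":   (0.35, 0.70),
    "mouth_right":  (0.65, 0.70),
    "mouth_top":    (0.50, 0.68),
    "mouth_bottom": (0.50, 0.74),
}

SMILE = {
    "mouth_left":   (0.32, 0.66),  # mouth corners pulled up and out
    "mouth_right":  (0.68, 0.66),
    "mouth_top":    (0.50, 0.69),
    "mouth_bottom": (0.50, 0.73),
}

def interpolate(a, b, t):
    """Linearly blend two landmark configurations; t in [0, 1]."""
    return {
        name: (ax + t * (bx - ax), ay + t * (by - ay))
        for (name, (ax, ay)), (bx, by) in zip(a.items(), b.values())
    }

# Halfway through the animation, each dot sits midway between the poses;
# rendering each frame as white dots on a black background yields the kind
# of point-light display described in the abstract.
frame = interpolate(NEUTRAL, SMILE, 0.5)
```

In a study like the one described, the landmark coordinates would come from a face tracker rather than being hand-specified, but the rendering principle is the same: only dot positions and their motion are shown, stripping away texture and identity cues.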