How to Make a Robot Smile? Perception of Emotional Expressions from Digitally-Extracted Facial Landmark Configurations

  • Caixia Liu
  • Jaap Ham
  • Eric Postma
  • Cees Midden
  • Bart Joosten
  • Martijn Goudbeek
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7621)

Abstract

To design robots or embodied conversational agents that can accurately display facial expressions indicating an emotional state, we need technology to produce those facial expressions, and research that investigates the relationship between those technologies and human social perception of those artificial faces. Our starting point is assessing human perception of core facial information: moving dots representing the facial landmarks, i.e., the locations and movements of the crucial parts of a face. Earlier research suggested that participants can identify facial expressions relatively accurately when all they can see of a real human face are moving white painted dots marking the facial landmarks (although less accurately than when viewing full faces). In the current study we investigated how accurately emotions expressed by comparable facial landmarks are recognized (compared to emotions expressed by full faces), but now used face-tracking software to produce the facial landmarks. In line with earlier findings, results suggested that participants could accurately identify emotions expressed by the facial landmarks, though less accurately than those expressed by full faces. These results thereby provide a starting point for further research on the fundamental characteristics of technologies (AI methods) that produce facial emotional expressions, and on how human users evaluate them.
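
The landmark displays described above were produced with face-tracking software (the "FaceTracker" named in the keywords) rather than painted dots. As a rough, hypothetical illustration of the general idea, not the authors' actual pipeline, the sketch below tracks facial landmarks in a video with dlib's 68-point shape predictor and redraws them as white dots on a black background; the model file name and input clip are assumptions.

import cv2
import dlib
import numpy as np

# Sketch only: the study used dedicated face-tracking software; dlib's
# 68-point predictor is substituted here purely for illustration.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # pre-trained model, downloaded separately

cap = cv2.VideoCapture("expression_clip.mp4")  # hypothetical stimulus video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    canvas = np.zeros_like(frame)                # black background
    for face in detector(gray):                  # detected face rectangles
        shape = predictor(gray, face)            # 68 tracked landmark points
        for i in range(shape.num_parts):
            p = shape.part(i)
            cv2.circle(canvas, (p.x, p.y), 3, (255, 255, 255), -1)  # white dot per landmark
    cv2.imshow("landmark display", canvas)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

Recording only the dot canvas (not the original frame) yields point-light-style stimuli comparable in spirit to the landmark displays evaluated in the study.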

Keywords

Robots · Emotion · Facial expression · Facial landmarks · FaceTracker · Perception

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Caixia Liu (1, 2)
  • Jaap Ham (1)
  • Eric Postma (2)
  • Cees Midden (1)
  • Bart Joosten (2)
  • Martijn Goudbeek (2)

  1. Human-Technology Interaction Group, Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands
  2. Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, The Netherlands