Interpreting Hand-Over-Face Gestures

  • Marwa Mahmoud
  • Peter Robinson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6975)

Abstract

People often hold their hands near their faces as a gesture in natural conversation, which can interfere with affective inference from facial expressions. However, these gestures also provide a valuable additional channel for multi-modal inference. We analyse hand-over-face gestures in a corpus of naturalistic labelled expressions and propose their use as novel affect cues for the automatic inference of cognitive mental states. We define three hand cues for encoding hand-over-face gestures, namely hand shape, hand action and facial region occluded, as a first step towards automating the interpretation process.
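
The three cues lend themselves to a simple categorical annotation scheme. The following is a minimal Python sketch of such an encoding; the specific category values (open, covering, chin, and so on) are hypothetical placeholders for illustration, not the coding vocabulary defined in the paper:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical category values, used only to illustrate the three-cue encoding.
class HandShape(Enum):
    OPEN = "open"
    CLOSED = "closed"
    INDEX_FINGER = "index finger"

class HandAction(Enum):
    TOUCHING = "touching"
    COVERING = "covering"
    LEANING_ON = "leaning on"

class FacialRegion(Enum):
    FOREHEAD = "forehead"
    CHEEK = "cheek"
    CHIN = "chin"
    MOUTH = "mouth"

@dataclass
class HandOverFaceCue:
    """One hand-over-face observation encoded along the three proposed cues."""
    shape: HandShape
    action: HandAction
    region: FacialRegion

# Example annotation for a single video segment.
cue = HandOverFaceCue(HandShape.INDEX_FINGER, HandAction.TOUCHING, FacialRegion.CHIN)
print(cue)
```

Such an encoding would let each annotated segment be summarised as a triple (shape, action, occluded region), which could then feed a multi-modal classifier alongside facial-expression features.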

Keywords

Facial Expression · Depth Image · Hand Gesture · Face Region · Facial Expression Recognition

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Marwa Mahmoud (1)
  • Peter Robinson (1)
  1. University of Cambridge, UK
