Annotating Multimodal Behaviors Occurring During Non Basic Emotions

  • Jean-Claude Martin
  • Sarkis Abrilian
  • Laurence Devillers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3784)


The design of affective interfaces, such as credible expressive characters in story-telling applications, requires understanding and modeling the relations between realistic emotions and behaviors in different modalities such as facial expressions, speech, hand gestures and body movements. Yet, research on emotional multimodal behaviors has focused on individual modalities during acted basic emotions. In this paper we describe the coding scheme that we have designed for annotating multimodal behaviors observed during mixed and non-acted emotions. We explain how we used it for the annotation of videos from a corpus of emotionally rich TV interviews. We illustrate how the annotations can be used to compute expressive profiles of videos and relations between non-basic emotions and multimodal behaviors.
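The abstract mentions computing "expressive profiles" from multimodal annotations. As a minimal sketch of that idea (the record layout and labels below are hypothetical illustrations, not the paper's actual coding scheme, whose attributes are considerably richer), one could aggregate time-stamped behavior annotations into a per-modality share of annotated time:

```python
from collections import Counter

# Hypothetical annotation records: (modality, behavior label, duration in seconds).
# Illustrative only; the paper's coding scheme defines its own modality tracks
# and behavior attributes.
annotations = [
    ("facial", "frown", 1.2),
    ("gesture", "beat", 0.8),
    ("gesture", "deictic", 0.5),
    ("posture", "lean_forward", 2.0),
    ("facial", "frown", 0.9),
]

def expressive_profile(annotations):
    """Return each modality's share of the total annotated time."""
    time_per_modality = Counter()
    for modality, _label, duration in annotations:
        time_per_modality[modality] += duration
    total = sum(time_per_modality.values())
    return {modality: t / total for modality, t in time_per_modality.items()}

profile = expressive_profile(annotations)
```

A profile of this kind lets videos be compared by how much expressive activity each modality carries, independently of clip length.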







Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Jean-Claude Martin (1)
  • Sarkis Abrilian (1)
  • Laurence Devillers (1)

  1. LIMSI-CNRS, Orsay, France
