Levels of Representation in the Annotation of Emotion for the Specification of Expressivity in ECAs

  • Jean-Claude Martin
  • Sarkis Abrilian
  • Laurence Devillers
  • Myriam Lamolle
  • Maurizio Mancini
  • Catherine Pelachaud
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3661)

Abstract

In this paper we present a two-step approach towards the creation of affective Embodied Conversational Agents (ECAs): annotation of a real-life, non-acted emotional corpus, followed by animation through copy-synthesis. The basis of our approach is to study how coders perceive and annotate, at several levels, the emotions observed in a corpus of emotionally rich TV video interviews. We then use their annotations to specify the expressive behavior of an agent at several levels. We explain how such an approach can provide knowledge as input for the specification of non-basic patterns of emotional behavior to be displayed by the ECA (e.g. which perceptual cues and levels of annotation are required to enable proper recognition of the emotions).
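
To make the two-step idea concrete, the sketch below is a minimal illustration (ours, not the authors' implementation) of how a multi-level annotation record from step one might be mapped onto expressivity parameters for an agent in step two. The record fields, the parameter set, and the mapping rules are all illustrative assumptions; the parameter names merely echo those common in the expressive-gesture literature.

    # Minimal sketch of the two-step pipeline (illustrative assumptions only):
    # step 1 yields multi-level annotations of an emotional video segment;
    # step 2 derives expressive-behavior parameters for the agent from them.

    from dataclasses import dataclass, field


    @dataclass
    class EmotionAnnotation:
        """One coder's annotation of a video segment, at several levels."""
        segment_id: str
        labels: list[str]        # emotion labels, possibly blended, e.g. ["anger", "despair"]
        intensity: float         # global level: perceived intensity in [0, 1]
        modality_cues: dict[str, str] = field(default_factory=dict)
        # modality level, e.g. {"gesture": "fast, jerky", "head": "shaking"}


    def to_expressivity(ann: EmotionAnnotation) -> dict[str, float]:
        """Map one annotation onto hypothetical expressivity parameters;
        the numeric rules below are made up for illustration."""
        jerky = "jerky" in ann.modality_cues.get("gesture", "")
        return {
            "spatial_extent": ann.intensity,         # wider gestures when more intense
            "temporal_extent": 1.0 - ann.intensity,  # faster strokes when more intense
            "fluidity": 0.2 if jerky else 0.8,       # jerky movement -> low fluidity
            "power": ann.intensity,
        }


    ann = EmotionAnnotation(
        segment_id="emotv_clip_03",
        labels=["anger", "despair"],
        intensity=0.9,
        modality_cues={"gesture": "fast, jerky", "head": "shaking"},
    )
    print(to_expressivity(ann))  # parameters an animation engine could consume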

Keywords

Emotional Behavior · Annotation Scheme · Expressive Behavior · Movement Quality · Iconic Gesture

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Jean-Claude Martin¹
  • Sarkis Abrilian¹
  • Laurence Devillers¹
  • Myriam Lamolle²
  • Maurizio Mancini²
  • Catherine Pelachaud²

  1. LIMSI-CNRS, Orsay, France
  2. LINC, IUT de Montreuil, Université Paris 8, France
