
Copying Behaviour of Expressive Motion

  • Maurizio Mancini
  • Ginevra Castellano
  • Elisabetta Bevacqua
  • Christopher Peters
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4418)

Abstract

In this paper we present an agent that can analyse certain human full-body movements in order to respond expressively with copying behaviour. Our work focuses on the analysis of human full-body movement for animating a virtual agent, called Greta, that is able to perceive and interpret users’ expressivity and to respond appropriately. Our system takes as input video data of a dancer moving in space. Analysis of the video data and automatic extraction of motion cues are performed in EyesWeb; we consider the amplitude and speed of movement. To generate the animation for our agent, we then map these motion cues onto the corresponding expressivity parameters of the agent. We also present a behaviour markup language for virtual agents that defines the values of expressivity parameters on gestures.
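
To make the mapping step concrete, below is a minimal sketch, in Python, of how the two extracted motion cues might be rescaled into the agent's expressivity parameters and serialised into a gesture tag. The parameter names SPC (spatial extent) and TMP (temporal extent) follow the expressivity model used by Greta; the cue ranges, the linear rescaling, and the gesture element are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of the cue-to-parameter mapping step. The parameter names
# SPC (spatial extent) and TMP (temporal extent) follow the Greta
# expressivity model; the cue ranges, the linear rescaling, and the markup
# element below are illustrative assumptions, not the paper's code.

def normalise(value: float, lo: float, hi: float) -> float:
    """Clamp a raw motion cue into [0, 1] given an expected range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def cues_to_expressivity(amplitude: float, speed: float,
                         amp_range=(0.0, 2.0),   # metres; assumed range
                         spd_range=(0.0, 3.0)):  # metres/second; assumed range
    """Map the two motion cues extracted in EyesWeb (amplitude and speed of
    movement) to expressivity parameters in [-1, 1], with 0 as the neutral
    value: amplitude drives spatial extent, speed drives temporal extent."""
    return {
        "SPC": 2.0 * normalise(amplitude, *amp_range) - 1.0,
        "TMP": 2.0 * normalise(speed, *spd_range) - 1.0,
    }

def to_markup(gesture_id: str, params: dict) -> str:
    """Render the parameters as a hypothetical gesture tag; the behaviour
    markup language presented in the paper may use different element and
    attribute names."""
    attrs = " ".join(f'{key}="{val:.2f}"' for key, val in params.items())
    return f'<gesture id="{gesture_id}" {attrs}/>'

# Example: a wide, fast dance movement yields high spatial and temporal extent.
print(to_markup("beat_1", cues_to_expressivity(amplitude=1.6, speed=2.4)))
# -> <gesture id="beat_1" SPC="0.60" TMP="0.60"/>
```

Clamping the cues before rescaling keeps outlier frames from driving the parameters outside the model's [-1, 1] range; any monotone mapping could be substituted here without changing the rest of the pipeline.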

Keywords

Multiagent System, Virtual Character, Virtual Agent, Facial Animation, Temporal Extent

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Maurizio Mancini (1)
  • Ginevra Castellano (2)
  • Elisabetta Bevacqua (1)
  • Christopher Peters (1)
  1. IUT de Montreuil, University of Paris 8
  2. InfoMus Lab, DIST, University of Genova
