Virtual Reality, Volume 14, Issue 4, pp 221–228

Piavca: a framework for heterogeneous interactions with virtual characters

Original Article


Abstract

This paper presents a virtual character animation system for real-time multimodal interaction in an immersive virtual reality setting. Human-to-human interaction is highly multimodal, involving features such as verbal language, tone of voice, facial expression, gesture and gaze. This multimodality means that, in order to simulate social interaction, our characters must be able to handle many different types of interaction, and many different types of animation, simultaneously. Our system is based on a model of animation that represents different types of animation as instantiations of an abstract function representation. This makes it easy to combine different types of animation. It also encourages the creation of behavior out of basic building blocks, making it easy to create and configure new behaviors for novel situations. The model has been implemented in Piavca, an open source character animation system.
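The core idea of the abstract, representing every animation as an instantiation of an abstract function of time and building compound behaviors by composing such functions, can be illustrated with a minimal sketch. All names below (`constant`, `linear`, `blend`, `sequence`) are illustrative inventions for this sketch, not Piavca's actual API, and a single float stands in for what would really be a full pose or joint rotation.

```python
from typing import Callable

# An animation is simply a function from time (seconds) to a value,
# here a single float standing in for a joint angle.
Animation = Callable[[float], float]

def constant(value: float) -> Animation:
    """An animation that holds a fixed pose value."""
    return lambda t: value

def linear(start: float, end: float, duration: float) -> Animation:
    """Interpolate from start to end over `duration` seconds, then hold."""
    def anim(t: float) -> float:
        u = min(max(t / duration, 0.0), 1.0)
        return start + (end - start) * u
    return anim

def blend(a: Animation, b: Animation, weight: float) -> Animation:
    """Weighted mix of two animations evaluated at the same time."""
    return lambda t: (1.0 - weight) * a(t) + weight * b(t)

def sequence(a: Animation, b: Animation, switch: float) -> Animation:
    """Play `a` until `switch` seconds, then `b` on its own local clock."""
    return lambda t: a(t) if t < switch else b(t - switch)

# Building a compound behavior out of basic blocks: a head nod
# (up then down) blended with a constant gaze offset.
nod = sequence(linear(0.0, 0.3, 0.5), linear(0.3, 0.0, 0.5), 0.5)
gaze = constant(0.1)
head = blend(nod, gaze, 0.25)
```

Because every combinator both consumes and produces the same `Animation` type, heterogeneous sources (motion capture, procedural gaze, facial morphs) can be mixed freely, which is the property the abstract emphasizes.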



Acknowledgments

We would like to thank the funders of this work: BT plc, the European Union FET project PRESENCIA (contract number 27731) and the Empathic Avatars project funded by the UK Engineering and Physical Sciences Research Council. We would also like to thank the members of the Virtual Environments and Graphics Group, Department of Computer Science, University College London.



Copyright information

© Springer-Verlag London Limited 2010

Authors and Affiliations

  1. Department of Computing, Goldsmiths College, University of London, London, UK
  2. Department of Computer Science, University College London, London, UK
  3. ICREA-Universitat de Barcelona, Barcelona, Spain