The Visual Computer, Volume 23, Issue 5, pp 317–333

Presence and interaction in mixed reality environments

  • Arjan Egges
  • George Papagiannakis
  • Nadia Magnenat-Thalmann
Open Access
Original Article

Abstract

In this paper, we present a simple and robust mixed reality (MR) framework that allows for real-time interaction with virtual humans in mixed reality environments under consistent illumination. We examine three crucial parts of this system: interaction, animation, and global illumination of virtual humans for an integrated and enhanced presence. The interaction system comprises a dialogue module interfaced with a speech recognition and synthesis system. Alongside speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual human animation layer. Our fast animation engine handles various types of motions, such as normal key-frame animations, as well as motions generated on-the-fly by adapting previously recorded clips; real-time idle motions are an example of the latter category. All these motions are generated and blended online, resulting in flexible and realistic animation. Our robust rendering method works in concert with the animation layer and is based on a precomputed radiance transfer (PRT) illumination model extended for virtual humans, resulting in a realistic rendition of interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods, combined in a single framework for presence and interaction in MR.
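The rendering claim rests on diffuse precomputed radiance transfer, in which a per-vertex transfer vector (visibility and cosine terms precomputed and projected into the spherical-harmonic basis) is dotted against the SH coefficients of the environment light at run time. The C++ fragment below is a minimal sketch of that run-time step only; the SH order, names, and data layout are illustrative assumptions, not the paper's extended-PRT implementation.

    // Minimal sketch of the run-time half of diffuse precomputed radiance
    // transfer (PRT). Assumptions (not from the paper): SH order 4, one
    // transfer vector per vertex, monochrome lighting for brevity.
    #include <array>
    #include <vector>

    constexpr int kShOrder  = 4;                    // assumed SH order
    constexpr int kShCoeffs = kShOrder * kShOrder;  // 16 basis functions

    using ShVector = std::array<float, kShCoeffs>;

    // Exit radiance at a vertex: the dot product of the precomputed
    // transfer vector with the SH-projected environment light.
    float shadeVertex(const ShVector& transfer, const ShVector& light) {
        float radiance = 0.0f;
        for (int i = 0; i < kShCoeffs; ++i) {
            radiance += transfer[i] * light[i];
        }
        return radiance;
    }

    // Shade a whole mesh: one dot product per vertex, cheap enough to
    // re-evaluate every frame as the environment lighting changes.
    std::vector<float> shadeMesh(const std::vector<ShVector>& transfers,
                                 const ShVector& light) {
        std::vector<float> out;
        out.reserve(transfers.size());
        for (const ShVector& t : transfers) {
            out.push_back(shadeVertex(t, light));
        }
        return out;
    }

For a deforming character the precomputed transfer no longer matches the animated pose; handling that mismatch is, presumably, what the virtual-human extension of PRT mentioned in the abstract addresses.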

Keywords

Presence · Interaction · Animation · Real-time rendering · Mixed reality

Copyright information

© Springer-Verlag 2007

Authors and Affiliations

  • Arjan Egges (1)
  • George Papagiannakis (2)
  • Nadia Magnenat-Thalmann (2)
  1. Center for Advanced Gaming and Simulation, Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands
  2. MIRALab, University of Geneva, Geneva, Switzerland
