High-Realistic and Flexible Virtual Presenters

  • David Oyarzun
  • Andoni Mujika
  • Aitor Álvarez
  • Aritz Legarretaetxeberria
  • Aitor Arrieta
  • María del Puy Carretero
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6169)

Abstract

This paper presents the research steps that were necessary to create a mixed reality prototype called PUPPET. The prototype provides a 3D virtual presenter that is embedded in a real TV studio set and driven by an actor in real time, so that it can interact with real presenters and/or the audience. The key modules of the prototype advance the state of the art in such systems in four respects: real-time management of highly realistic 3D characters, animations generated automatically from the actor's speech, reduced equipment requirements, and flexibility in the integration of real and virtual elements. The paper describes the architecture and main modules of the prototype.
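To make the speech-to-animation idea concrete, the sketch below shows one common way such a module can work: a phoneme recognized from the actor's audio is mapped to a viseme (mouth shape), which in turn sets blend weights for the character's morph targets. This is purely illustrative and not taken from the PUPPET prototype; the phoneme set, the VISEME_MAP table, and the viseme_weights function are all hypothetical names.

```python
# Illustrative sketch only: not the PUPPET prototype's code. It maps a
# recognized phoneme to a viseme (mouth shape) and returns morph-target
# blend weights that a real-time renderer could apply to the character.

# Hypothetical phoneme-to-viseme lookup table.
VISEME_MAP = {
    "a": "open",    # open-jaw vowel
    "e": "mid",     # mid vowel
    "i": "spread",  # spread-lip vowel
    "o": "round",   # rounded vowels
    "u": "round",
    "p": "closed",  # bilabial consonants close the lips
    "b": "closed",
    "m": "closed",
    "f": "dental",  # labiodental consonants
    "v": "dental",
}

def viseme_weights(phoneme: str) -> dict[str, float]:
    """Return morph-target blend weights for one recognized phoneme.

    A real-time renderer would interpolate between successive weight sets
    so the mouth animates smoothly as phonemes arrive from the recognizer.
    """
    viseme = VISEME_MAP.get(phoneme.lower(), "closed")  # unknown: neutral shape
    weights = {v: 0.0 for v in set(VISEME_MAP.values())}
    weights[viseme] = 1.0
    return weights

if __name__ == "__main__":
    # Simplified phoneme stream for the word "puppet".
    for ph in ["p", "u", "p", "e", "t"]:
        print(ph, viseme_weights(ph))
```

Interpolating between consecutive weight sets, rather than snapping from one viseme to the next, is what keeps the mouth motion looking natural at broadcast frame rates.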

Keywords

3D virtual presenters · mixed reality · real-time animation


Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • David Oyarzun 1
  • Andoni Mujika 1
  • Aitor Álvarez 1
  • Aritz Legarretaetxeberria 1
  • Aitor Arrieta 1
  • María del Puy Carretero 1

  1. Vicomtech Research Centre, San Sebastián, Spain
