IDEAS4Games: Building Expressive Virtual Characters for Computer Games

  • Patrick Gebhard
  • Marc Schröder
  • Marcela Charfuelan
  • Christoph Endres
  • Michael Kipp
  • Sathish Pammi
  • Martin Rumpler
  • Oytun Türk
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5208)

Abstract

In this paper we present two virtual characters in an interactive poker game that uses RFID-tagged poker cards for interaction. To support the game creation process, we have combined, in a unique way, models, methods, and technologies currently investigated in the ECA research field. A powerful and easy-to-use multimodal dialog authoring tool is used to model the game content and interaction. The poker characters rely on a sophisticated model of affect and a state-of-the-art speech synthesizer. During the game, the characters show consistent expressive behavior that reflects their individually simulated affect in speech and animation. As a result, users are provided with an engaging interactive poker experience.
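To make the affect-to-expressivity idea above more concrete, the following minimal Java sketch shows one way a dimensional pleasure-arousal-dominance (PAD) style affect state could be mapped onto simple prosody offsets that an expressive speech synthesizer might apply. All class names, value ranges, and mapping rules here are illustrative assumptions and do not reproduce the actual interfaces of the affect model or synthesizer used in the paper.

    // Hypothetical sketch (not the paper's code): deriving prosody offsets
    // from a PAD-style affect state. Offsets are relative to a neutral voice.
    public class AffectToProsodySketch {

        /** Simulated affect: pleasure, arousal, dominance, each in [-1, 1]. */
        record Affect(double pleasure, double arousal, double dominance) {}

        /** Prosody offsets in percent, relative to the neutral baseline. */
        record Prosody(double pitchShift, double rateChange, double volumeChange) {}

        /** Illustrative mapping: arousal mainly drives pitch and tempo,
         *  dominance mainly drives loudness. */
        static Prosody toProsody(Affect a) {
            double pitch  = 10.0 * a.arousal() + 5.0 * a.pleasure();
            double rate   = 15.0 * a.arousal();
            double volume = 10.0 * a.dominance() + 5.0 * a.arousal();
            return new Prosody(pitch, rate, volume);
        }

        public static void main(String[] args) {
            // Example: a mildly pleased, aroused, and dominant state,
            // e.g. after winning a poker hand.
            Prosody p = toProsody(new Affect(0.4, 0.3, 0.6));
            System.out.printf("pitch %+.1f%%, rate %+.1f%%, volume %+.1f%%%n",
                    p.pitchShift(), p.rateChange(), p.volumeChange());
        }
    }

In a complete system, such offsets would be forwarded to the speech synthesizer and animation engine at runtime; that integration is outside the scope of this sketch.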

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Patrick Gebhard (1)
  • Marc Schröder (1)
  • Marcela Charfuelan (1)
  • Christoph Endres (1)
  • Michael Kipp (1)
  • Sathish Pammi (1)
  • Martin Rumpler (2)
  • Oytun Türk (1)
  1. DFKI, Saarbrücken and Berlin, Germany
  2. FH Trier, Germany
