Enhancing Animated Agents in an Instrumented Poker Game

  • Marc Schröder
  • Patrick Gebhard
  • Marcela Charfuelan
  • Christoph Endres
  • Michael Kipp
  • Sathish Pammi
  • Martin Rumpler
  • Oytun Türk
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5243)


In this paper we present an interactive poker game in which one human user plays against two animated agents using RFID-tagged poker cards. The game is used as a showcase to illustrate how current AI technologies can provide new features for computer games. A powerful and easy-to-use multimodal dialog authoring tool is used for modeling game content and interaction. The poker characters rely on a sophisticated model of affect and a state-of-the-art speech synthesizer. Through the combination of these methods, the characters show consistent expressive behavior that enhances the naturalness of interaction in the game.
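Affect models of the kind described in the abstract commonly represent a character's mood as a point in Mehrabian's pleasure–arousal–dominance (PAD) space, nudged over time by active emotions. The sketch below is purely illustrative and is not the paper's implementation; the class names, PAD values, and the `weight` parameter are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class PADState:
    """A point in pleasure-arousal-dominance (PAD) space, each axis in [-1, 1]."""
    pleasure: float
    arousal: float
    dominance: float

def push_mood(mood: PADState, emotion: PADState, weight: float = 0.25) -> PADState:
    """Move the current mood a fraction of the way toward an active emotion's PAD centre."""
    return PADState(
        mood.pleasure + weight * (emotion.pleasure - mood.pleasure),
        mood.arousal + weight * (emotion.arousal - mood.arousal),
        mood.dominance + weight * (emotion.dominance - mood.dominance),
    )

# Example: a neutral mood nudged by a "joy"-like emotion
# (high pleasure, moderate arousal and dominance -- values invented).
neutral = PADState(0.0, 0.0, 0.0)
joy = PADState(0.8, 0.5, 0.4)
after = push_mood(neutral, joy)
```

A full system would also decay the mood back toward a personality-derived default when no emotions are active, so that expressive behavior stays consistent over a whole game session.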


Keywords: Vocal Tract · Emotional Expressivity · Speech Synthesis · Virtual Character · Unit Selection





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Marc Schröder (1)
  • Patrick Gebhard (1)
  • Marcela Charfuelan (1)
  • Christoph Endres (1)
  • Michael Kipp (1)
  • Sathish Pammi (1)
  • Martin Rumpler (2)
  • Oytun Türk (1)
  1. DFKI, Saarbrücken and Berlin, Germany
  2. FH Trier, Birkenfeld, Germany
