Multi-modal System Architecture for Serious Gaming

  • Otilia Kocsis
  • Todor Ganchev
  • Iosif Mporas
  • George Papadopoulos
  • Nikos Fakotakis
Part of the IFIP International Federation for Information Processing book series (IFIPAICT, volume 296)

Abstract

Human-computer interaction (HCI), especially in the games domain, aims to mimic natural human-to-human interaction as closely as possible. Such interaction is multimodal, involving speech, vision, haptics, and other channels. The domain of serious games, which targets value-added games, additionally makes use of inputs such as biosensors and motion tracking equipment. In this context, game development has become complex, expensive, and burdened with long development cycles. This creates barriers for independent game developers and inhibits the introduction of innovative games and new game genres. This paper introduces the PlayMancer platform, a work in progress that aims to overcome these barriers by augmenting existing 3D game engines with innovative modes of interaction. PlayMancer integrates existing open-source systems, such as a game engine and a spoken dialogue management system, and extends them with newly implemented components that support innovative interaction modalities, such as emotion recognition from audio data and motion tracking, together with advanced configuration tools.
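
One way to picture the integration described above is as an event-driven layer between the modality components and the game engine. The following Python sketch is illustrative only and is not taken from the PlayMancer implementation; the publish/subscribe design and all names (ModalityEvent, InteractionManager, the "speech" and "emotion" channels) are assumptions made for the purpose of the example.

    # Illustrative sketch of a multimodal interaction layer (NOT PlayMancer code).
    # Assumption: modality components publish timestamped events to a central
    # manager, which routes them to game-side handlers.

    import time
    from collections import namedtuple
    from typing import Callable, Dict, List

    # A generic event emitted by any modality component (speech, emotion, motion).
    ModalityEvent = namedtuple("ModalityEvent", ["modality", "payload", "timestamp"])

    class InteractionManager:
        """Hypothetical fusion hub: routes modality events to game-side handlers."""

        def __init__(self) -> None:
            self._handlers: Dict[str, List[Callable[[ModalityEvent], None]]] = {}

        def subscribe(self, modality: str, handler: Callable[[ModalityEvent], None]) -> None:
            # Register a game-engine callback for one modality (e.g. "speech").
            self._handlers.setdefault(modality, []).append(handler)

        def publish(self, modality: str, payload: dict) -> None:
            # Called by modality components; fans the event out to all handlers.
            event = ModalityEvent(modality, payload, time.time())
            for handler in self._handlers.get(modality, []):
                handler(event)

    # Example wiring: the game reacts to a recognized utterance and to an
    # emotion label estimated from the same audio stream. In the real platform
    # these events would come from the ASR and emotion-recognition components;
    # here they are simulated.
    if __name__ == "__main__":
        manager = InteractionManager()
        manager.subscribe("speech", lambda e: print("Game action:", e.payload["text"]))
        manager.subscribe("emotion", lambda e: print("Adapt difficulty for:", e.payload["label"]))

        manager.publish("speech", {"text": "open door"})
        manager.publish("emotion", {"label": "frustrated"})

A decoupled design of this kind is one common way to let new modality components be added without modifying the game engine itself, which matches the paper's stated goal of augmenting existing engines.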

Keywords

Emotion Recognition · Game Development · Voice Activity Detection · Multimodal Interaction · Game Engine

Copyright information

© IFIP International Federation for Information Processing 2009

Authors and Affiliations

  • Otilia Kocsis¹
  • Todor Ganchev¹
  • Iosif Mporas¹
  • George Papadopoulos¹
  • Nikos Fakotakis¹

  1. Dept. of Electrical and Computer Engineering, University of Patras, Rion, Greece
