
Affective Interactive Narrative in the CALLAS Project

  • Fred Charles
  • Samuel Lemercier
  • Thurid Vogt
  • Nikolaus Bee
  • Maurizio Mancini
  • Jérôme Urbain
  • Marc Price
  • Elisabeth André
  • Catherine Pélachaud
  • Marc Cavazza
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4871)

Abstract

Interactive Narrative relies on the ability of the user (and spectator) to intervene in the course of events so as to influence the unfolding of the story. This influence obviously differs depending on the Interactive Narrative paradigm being implemented, i.e. whether the user remains a spectator or takes part in the action herself as a character. If we consider the case of an active spectator influencing the narrative, most systems implemented to date [1] have been based on the direct intervention of the user, either on physical objects staged in the virtual narrative environment or on the characters themselves via natural language input [1,3]. While this certainly empowers the spectator, there may be limitations to the realism of that mode of interaction if Interactive Narrative were to be transposed to a vast audience.
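
The two intervention routes mentioned above, acting on staged objects or addressing a character through language, can both be seen as feeding a single stream of narrative events that a story engine reacts to. The minimal Python sketch below is purely illustrative of that idea and rests on assumptions: the names StoryEngine and NarrativeEvent and the event fields are hypothetical and do not describe the CALLAS implementation.

    # Hypothetical sketch, not the CALLAS architecture: two kinds of user
    # intervention (acting on a staged object vs. addressing a character)
    # are abstracted into one stream of narrative events.

    from dataclasses import dataclass
    from typing import Callable, List


    @dataclass
    class NarrativeEvent:
        """A user intervention in a form a story engine could react to."""
        source: str   # "object" or "dialogue"
        target: str   # staged object or character being addressed
        effect: str   # intended narrative influence, e.g. "hide", "reassure"


    class StoryEngine:
        """Toy engine: listeners (e.g. a narrative planner) subscribe to events."""

        def __init__(self) -> None:
            self._listeners: List[Callable[[NarrativeEvent], None]] = []

        def subscribe(self, listener: Callable[[NarrativeEvent], None]) -> None:
            self._listeners.append(listener)

        def user_acts_on_object(self, obj: str, effect: str) -> None:
            # Intervention route 1: manipulating a physical object in the scene.
            self._dispatch(NarrativeEvent("object", obj, effect))

        def user_addresses_character(self, character: str, intent: str) -> None:
            # Intervention route 2: natural language input directed at a character.
            self._dispatch(NarrativeEvent("dialogue", character, intent))

        def _dispatch(self, event: NarrativeEvent) -> None:
            for listener in self._listeners:
                listener(event)


    if __name__ == "__main__":
        engine = StoryEngine()
        # A real planner would re-plan character behaviour here; we just print.
        engine.subscribe(lambda e: print(f"replan: {e.source} on {e.target} -> {e.effect}"))
        engine.user_acts_on_object("letter", "hide")
        engine.user_addresses_character("protagonist", "reassure")
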


References

  1. Cavazza, M., Charles, F., Mead, S.J.: Character-based Interactive Storytelling. IEEE Intelligent Systems, special issue on AI in Interactive Entertainment, 17–24 (2002)
  2. Lugrin, J.-L., Cavazza, M.: AI-based World Behaviour for Emergent Narratives. In: Proceedings of the ACM Advances in Computer Entertainment Technology, Los Angeles, USA (2006)
  3. Mateas, M., Stern, A.: Natural Language Understanding in Façade: Surface-Text Processing. In: Göbel, S., Spierling, U., Hoffmann, A., Iurgel, I., Schneider, O., Dechau, J., Feix, A. (eds.) TIDSE 2004. LNCS, vol. 3105. Springer, Heidelberg (2004)
  4. Poggi, I., Pelachaud, C., de Rosis, F., Carofiglio, V., De Carolis, B.: GRETA. A Believable Embodied Conversational Agent. In: Stock, O., Zancanaro, M. (eds.) Multimodal Intelligent Information Presentation. Kluwer, Dordrecht (2005)
  5. Wagner, J., Vogt, T., André, E.: A Systematic Comparison of Different HMM Designs for Emotion Recognition from Acted and Spontaneous Speech. In: ACII 2007. LNCS, vol. 4738, pp. 114–125. Springer, Heidelberg (2007)
  6. Cheong, Y.G., Young, R.M.: A Computational Model of Narrative Generation for Suspense. In: AAAI 2006 Computational Aesthetics Workshop, Boston, MA, USA (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Fred Charles (1)
  • Samuel Lemercier (1)
  • Thurid Vogt (2)
  • Nikolaus Bee (2)
  • Maurizio Mancini (3)
  • Jérôme Urbain (4)
  • Marc Price (5)
  • Elisabeth André (2)
  • Catherine Pélachaud (3)
  • Marc Cavazza (1)

  1. School of Computing, University of Teesside, United Kingdom
  2. Multimedia Concepts and Applications Group, Augsburg University, Germany
  3. IUT of Montreuil, University Paris VIII, France
  4. TCTS Lab, Department of Electrical Engineering, Faculté Polytechnique de Mons, Belgium
  5. BBC Research, Tadworth, Surrey, United Kingdom
