An Event-Driven, Stochastic, Undirected Narrative (EDSUN) Framework for Interactive Contents

  • Adam Barclay
  • Hannes Kaufmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4326)

Abstract

In this paper, we present an extensible framework for interactive multimodal contents, with emphasis on augmented reality applications. The proposed framework, EDSUN, enables concurrent and variable narrative structures as well as content reusability and dynamic yet natural experience generation. EDSUN's main components include a canonical specification of a 5-state lexical syntax and grammar, stochastic state transitions, and extensions for hierarchical grammars to represent complex behavioral and multimodal interactions. The benefits of EDSUN in enabling classical contents to support the affordances of AR environments and in complementing recently published works are also discussed.
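The core idea of stochastic state transitions over a small narrative-state alphabet can be sketched as a weighted state machine. This is a hypothetical illustration only: the five state names and the transition probabilities below are not taken from the paper, which does not list them in this abstract; they merely stand in for EDSUN's 5-state grammar with probabilistic transitions.

```python
import random

# Hypothetical 5-state transition table (names and weights are illustrative,
# NOT the states defined in the EDSUN paper). Each entry maps a state to
# weighted successor states; weights per state sum to 1.0.
TRANSITIONS = {
    "setup":      [("conflict", 0.7), ("setup", 0.3)],
    "conflict":   [("climax", 0.5), ("conflict", 0.3), ("setup", 0.2)],
    "climax":     [("resolution", 0.8), ("conflict", 0.2)],
    "resolution": [("ending", 0.6), ("setup", 0.4)],
    "ending":     [("ending", 1.0)],
}

def next_state(state, rng=random.random):
    """Sample the next narrative state from the weighted transition table."""
    r, acc = rng(), 0.0
    for target, prob in TRANSITIONS[state]:
        acc += prob
        if r < acc:
            return target
    return TRANSITIONS[state][-1][0]  # guard against float round-off

def run(start="setup", max_steps=20, seed=0):
    """Generate one stochastic narrative path, ending at 'ending' or max_steps."""
    random.seed(seed)
    path, state = [start], start
    while state != "ending" and len(path) < max_steps:
        state = next_state(state)
        path.append(state)
    return path
```

Because transitions are sampled rather than scripted, repeated runs yield different but grammatically valid event sequences, which is the property the abstract attributes to undirected, stochastic narrative generation.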

Keywords

augmented reality · event-driven · stochastic transitions · undirected narrative · content framework · grammar · storytelling



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Adam Barclay
  • Hannes Kaufmann
  1. Institute of Software Technology and Interactive Systems, Vienna University of Technology, Vienna, Austria
