Mediating Action and Music with Augmented Grammars

  • Pietro Casella
  • Ana Paiva
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2792)

Abstract

Current approaches to integrating action and music in Intelligent Virtual Environments are, for the most part, hardwired to a particular interaction scheme. This lack of independence between the two prevents the music or sound director from exploring new musical possibilities in a systematic and focused way.

A framework is proposed which mediates between action and music, using a behavior model based on an augmented context-free grammar to map events from the environment to musical events. The framework includes mechanisms for self-modification of the model and for maintaining internal state variables. The output module can be replaced to drive other, non-musical behavior.
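The mediation idea above can be sketched in a few lines of Python. This is an illustrative assumption of how an augmented context-free-grammar rule might look, not the paper's actual implementation: rule names, the `tension` state variable, and the condition/action hooks are all hypothetical.

```python
import random

class Rule:
    """One augmented production: an environment event expands to one of
    several musical events, guarded by a condition and paired with an
    action that updates shared state (the grammar 'augmentation')."""
    def __init__(self, event, productions, condition=None, action=None):
        self.event = event              # environment event this rule fires on
        self.productions = productions  # candidate musical-event expansions
        self.condition = condition or (lambda state: True)
        self.action = action or (lambda state: None)

class Mediator:
    """Maps environment events to musical events via augmented rules,
    maintaining internal state between events."""
    def __init__(self, rules):
        self.rules = rules
        self.state = {"tension": 0}     # hypothetical internal state variable

    def handle(self, event):
        for rule in self.rules:
            if rule.event == event and rule.condition(self.state):
                rule.action(self.state)  # state update / self-modification hook
                return random.choice(rule.productions)
        return None                      # no rule matched this event

rules = [
    Rule("enemy_appears", ["minor_theme", "tremolo_strings"],
         action=lambda s: s.update(tension=s["tension"] + 1)),
    Rule("enemy_defeated", ["major_cadence"],
         condition=lambda s: s["tension"] > 0,
         action=lambda s: s.update(tension=0)),
]

mediator = Mediator(rules)
print(mediator.handle("enemy_appears"))  # one of the two minor-mode events
```

Decoupling the output side in this way is what the abstract alludes to: replacing the musical-event vocabulary in `productions` with, say, animation cues would retarget the same grammar to non-musical behavior.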



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Pietro Casella (1)
  • Ana Paiva (1)

  1. Intelligent Agents and Synthetic Characters Group, Instituto Superior Técnico, Instituto de Engenharia de Sistemas e Computadores, Lisboa, Portugal
