Mediating Action and Music with Augmented Grammars
Current approaches to the integration of action and music in Intelligent Virtual Environments are, for the most part, "hardwired" to fit a particular interaction scheme. This lack of independence between the two prevents the music or sound director from exploring new musical possibilities in a systematic and focused way.
A framework is proposed that mediates between action and music, using an augmented context-free grammar model of behavior to map events from the environment to musical events. The framework includes mechanisms for self-modification of the model and for maintenance of internal state variables, and its output module can be replaced to perform other, non-musical behavior.
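As a rough illustration of this mediation idea (a minimal sketch, not the paper's actual implementation), the following Python fragment maps an environment event to musical events through weighted grammar productions whose side effects update internal state; all rule names, the "tension" variable, and the output function are hypothetical.

    import random

    class AugmentedGrammar:
        def __init__(self, rules, output=print):
            # rules: nonterminal -> list of (weight, expansion, action)
            # expansion: sequence of symbols (nonterminals or terminal events)
            # action: optional callable(state) mutating internal state,
            #         standing in for the self-modification mechanism
            self.rules = rules
            self.state = {"tension": 0}   # internal state variables (assumed)
            self.output = output          # swappable output module

        def expand(self, symbol):
            if symbol not in self.rules:  # terminal: emit a musical event
                self.output(symbol, self.state)
                return
            weights, expansions, actions = zip(*self.rules[symbol])
            i = random.choices(range(len(expansions)), weights=weights)[0]
            if actions[i]:
                actions[i](self.state)    # rule side effect on state
            for s in expansions[i]:
                self.expand(s)

    def raise_tension(state):
        state["tension"] += 1

    # Hypothetical mapping: the environment event "enemy_appears"
    # expands into a short musical phrase.
    rules = {
        "enemy_appears": [
            (2, ["low_drone", "tremolo"], raise_tension),
            (1, ["stinger"], raise_tension),
        ],
        "tremolo": [
            (1, ["note_c4", "note_c4", "note_e4"], None),
        ],
    }

    def midi_out(event, state):
        print(f"play {event} (tension={state['tension']})")

    g = AugmentedGrammar(rules, output=midi_out)
    g.expand("enemy_appears")

Because the output module is just a callable, substituting midi_out with, say, a lighting controller would redirect the same grammar to non-musical behavior, mirroring the framework's claimed independence of action and output.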