Gestalt-based composition and performance in multimodal environments

  • Antonio Camurri
  • Marc Leman
V. From Musical Expression to Interactive Computer Systems
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1317)


This paper introduces the concept of multimodal environments in relation to Gestalt theory. Multimodal environments provide digital extensions for different human activities, such as movement, thinking, composing, listening, and planning. The environments discussed in this paper make use of the state of the art in sensing technology and computing. They typically combine different methods of knowledge representation, such as symbolic, iconic, and subsymbolic representations, into a hybrid architecture. Movement detection, beat induction, and musical responses to actions that happen on the scene all involve Gestalt notions. In this paper, we focus on a number of requirements for the application of multimodal environments in music and art. Applications are discussed and the basic architecture of an existing experimental platform is outlined.


Keywords: Multimodal Interaction · Multimodal User · Computer Music · Symbolic Reasoning · Gesture Space





Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Antonio Camurri — Lab. of Musical Informatics, DIST, University of Genova, Genova, Italy
  • Marc Leman — IPEM, University of Ghent, Ghent, Belgium
