Virtual Reality 10:62

Analysis of expression in simple musical gestures to enhance audio in interfaces

Original Article


Abstract

Expression can play a key role in the audio rendering of virtual reality applications. Understanding expression remains a challenging research problem, and several studies have investigated analysis techniques for detecting it in music performances. The knowledge gained from these analyses is widely applicable: embedding expression in audio interfaces can lead to attractive solutions for enriching interfaces in mixed-reality environments. Synthesized expressive sounds can be combined with real stimuli in augmented reality, and they can be used in multi-sensory stimulation to convey the sensation of a first-person experience in virtual expressive environments. In this work we focus on expression in violin and flute performances, with reference to the sensorial and affective domains. By means of selected audio features, we derive a set of parameters describing performers’ strategies that are suitable both for tuning expressive synthesis instruments and for enhancing audio in human–computer interfaces.
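To make the last point concrete, the minimal sketch below (in Python, and not taken from the paper itself) illustrates how a few phrase-level audio features might be reduced to a small set of expressive parameters. The feature names, weights, and thresholds are hypothetical placeholders, not the parameter set derived in this work.

    # Illustrative sketch only (not the authors' implementation): map a few
    # hypothetical phrase-level audio features to a coarse expressive description.
    from dataclasses import dataclass

    @dataclass
    class PhraseFeatures:
        mean_intensity: float   # normalized RMS energy of the phrase, 0..1
        articulation: float     # sounding time / inter-onset interval; ~1 legato, <<1 staccato
        tempo_deviation: float  # local tempo relative to nominal, e.g. 1.1 = 10% faster

    def expressive_parameters(f: PhraseFeatures) -> dict:
        """Combine the features into two coarse axes (placeholder weights)
        and attach a verbal label loosely inspired by sensorial adjectives."""
        energy = 0.6 * f.mean_intensity + 0.4 * min(f.tempo_deviation, 2.0) / 2.0
        smoothness = f.articulation
        label = ("hard/heavy" if energy >= 0.5 else "soft/light") + \
                (", legato" if smoothness > 0.7 else ", staccato")
        return {"energy": round(energy, 2), "smoothness": round(smoothness, 2), "label": label}

    if __name__ == "__main__":
        # A bright, detached phrase played slightly faster than the nominal tempo
        print(expressive_parameters(PhraseFeatures(mean_intensity=0.8, articulation=0.4, tempo_deviation=1.1)))

In a complete system, parameters of this kind would then drive an expressive synthesis instrument or modulate interface sounds, as discussed in the paper.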


Keywords: Expression · Audio interfaces · Sonification



Acknowledgements

This research was supported by the European Network of Excellence “Enactive Interfaces” under the Sixth Framework Programme of the European Commission. We thank David Pirrò for developing part of the prototype.



Copyright information

© Springer-Verlag London Limited 2006

Authors and Affiliations

  1. Department of Information Engineering—CSC/DEI, University of Padova, Padova, Italy
