Evaluation of Computer Systems for Expressive Music Performance


In this chapter, we review and summarize methods for the evaluation of computer systems for expressive music performance (CSEMPs). The main categories of evaluation methods are (1) comparisons with measurements from real performances, (2) listening experiments, and (3) production experiments. Listening experiments can take several forms: for example, subjects may be asked to rate a particular expressive characteristic (such as the emotion conveyed or the overall expression) or to rate the effect of a particular acoustic cue. In production experiments, subjects actively manipulate system parameters to achieve a target performance. Measures for estimating the difference between performances are discussed in relation to the objectives of both the model and the evaluation. A separate section presents and discusses Rencon (Performance Rendering Contest), a contest in which expressive performances of the same score generated by different CSEMPs are compared. Practical examples from previous work are presented, commented on, and analysed.
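As a minimal sketch of the kind of difference measure discussed above, the snippet below compares the note-level timing profile of a measured human performance with that of a rendered performance, using the root-mean-square deviation and the Pearson correlation of their inter-onset intervals (IOIs). The IOI values are hypothetical illustrations, not data from the chapter, and this is only one of many possible distance measures.

```python
import math

def rms_deviation(a, b):
    """Root-mean-square difference between two equal-length IOI sequences (seconds)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical IOIs (in seconds) for the same eight-note phrase,
# one sequence measured from a pianist, one generated by a CSEMP.
human    = [0.52, 0.49, 0.55, 0.60, 0.48, 0.50, 0.58, 0.70]
rendered = [0.50, 0.50, 0.54, 0.62, 0.47, 0.51, 0.55, 0.68]

print(rms_deviation(human, rendered))  # absolute timing error, in seconds
print(pearson(human, rendered))        # shape similarity of the timing profiles
```

A high correlation with a non-zero RMS deviation would indicate that the model reproduces the shape of the expressive timing while differing in overall tempo or magnitude, which is why both kinds of measure are useful when evaluating a model against its stated objectives.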


Keywords: Listening Test · Performance Rule · Music Performance · Listening Experiment · Music Information Retrieval



Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  1. Department of Speech, Music & Hearing, KTH Royal Institute of Technology, Stockholm, Sweden
