Expressive Performance Rendering with Probabilistic Models

  • Sebastian Flossmann
  • Maarten Grachten
  • Gerhard Widmer


We present YQX, a probabilistic performance rendering system based on Bayesian network theory. It models dependencies between score and performance and predicts performance characteristics using information extracted from the score. We discuss the basic system that won the Rendering Contest RENCON 2008 and then present several extensions, two of which aim to incorporate the current performance context into the prediction, resulting in more stable and consistent predictions. Furthermore, we describe the first steps towards a multilevel prediction model: Segmentation of the work, decomposition of tempo trajectories, and combination of different prediction models form the basis for a hierarchical prediction system. The algorithms are evaluated and compared using two very large data sets of human piano performances: 13 complete Mozart sonatas and the complete works for solo piano by Chopin.
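The core mechanism described above, predicting expressive parameters from score features with a linear Gaussian model inside a Bayesian network, can be sketched as follows. This is a minimal illustration only: the feature names, the synthetic data, and the single-target setup are assumptions for the example, not the actual YQX features, network structure, or training corpus.

```python
import numpy as np

# Hypothetical score features per melody note (e.g. pitch interval,
# duration ratio, metrical weight) and one expressive target such as a
# log tempo deviation. The data here is synthetic, for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([0.5, -0.3, 0.1])          # ground-truth weights for the demo
y = X @ true_w + 0.05 * rng.normal(size=200)  # target with Gaussian noise

# Linear Gaussian conditional: p(y | x) = N(w^T x + b, sigma^2).
# The maximum-likelihood fit of (w, b) is ordinary least squares.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
residuals = y - Xb @ w
sigma2 = residuals.var()                       # ML estimate of the noise variance

def predict(x):
    """Return mean and variance of the predicted expressive parameter."""
    mu = np.append(x, 1.0) @ w
    return mu, sigma2
```

In a full network, one such conditional would be learned per expressive dimension (timing, loudness, articulation), with discrete score features selecting among Gaussian components; the sketch shows only the continuous linear Gaussian building block.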





We express our gratitude to Mme Irène Magaloff for her generous permission to use the unique resource that is the Magaloff Corpus for our research. This work is funded by the Austrian National Research Fund FWF via grants TRP 109-N23 and Z159 (“Wittgenstein Award”). The Austrian Research Institute for Artificial Intelligence acknowledges financial support from the Austrian Federal Ministries BMWF and BMVIT.



Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  • Sebastian Flossmann (1)
  • Maarten Grachten (2)
  • Gerhard Widmer (3, 4)

  1. Department of Computational Perception, Johannes Kepler University, Linz, Austria
  2. Department of Computational Perception, Johannes Kepler University, Linz, Austria (post-doctoral researcher)
  3. Department of Computational Perception, Johannes Kepler University (JKU), Linz, Austria
  4. Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria