Expressive Piano Music Playing Using a Kalman Filter

  • Alexandra Bonnici
  • Maria Mifsud
  • Kenneth P. Camilleri
Conference paper, part of the Lecture Notes in Computer Science book series (LNCS, volume 10783)

Abstract

In this paper, we present an algorithm that uses the Kalman filter to combine simple phrase structure models with the pitch differences observed within a phrase, refining the phrase model and hence adjusting the loudness and tempo qualities of the melody line. We show how similar adjustments may be made to the accompaniment to introduce expressive attributes to a MIDI file representation of a score. We also show that subjects had some difficulty distinguishing between the resulting expressive renderings and human performances of the same score.
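The abstract describes fusing a prior phrase-shape model with pitch-derived observations via a Kalman filter. The sketch below is a minimal one-dimensional illustration of that idea, not the paper's actual formulation: the parabolic loudness arch, the MIDI-velocity scale, the way observations are derived from pitch differences, and the process/measurement noise values `q` and `r` are all assumptions chosen for the example.

```python
import numpy as np


def kalman_refine(prior_curve, observations, q=1e-3, r=1e-1):
    """Scalar Kalman filter that nudges a prior phrase-shape curve
    (e.g. a loudness arch over the notes of a phrase) towards noisy
    observations derived from pitch differences within the phrase.

    q, r: assumed process and measurement noise variances (hypothetical).
    """
    n = len(prior_curve)
    x = prior_curve[0]          # state estimate: loudness level at note k
    p = 1.0                     # estimate covariance
    refined = np.empty(n)
    for k in range(n):
        # Predict: follow the note-to-note increments of the prior phrase model.
        step = prior_curve[k] - prior_curve[k - 1] if k > 0 else 0.0
        x_pred = x + step
        p_pred = p + q
        # Update: correct the prediction with the pitch-derived observation.
        gain = p_pred / (p_pred + r)
        x = x_pred + gain * (observations[k] - x_pred)
        p = (1.0 - gain) * p_pred
        refined[k] = x
    return refined


if __name__ == "__main__":
    notes = np.arange(16)
    # Hypothetical parabolic loudness arch over a 16-note phrase (MIDI velocities).
    prior = 64 + 30 * (1 - ((notes - 7.5) / 7.5) ** 2)
    # Hypothetical observations: the prior perturbed in proportion to pitch leaps.
    pitch = np.array([60, 62, 64, 65, 67, 69, 71, 72,
                      71, 69, 67, 65, 64, 62, 60, 60])
    obs = prior + 2.0 * np.diff(pitch, prepend=pitch[0])
    print(np.round(kalman_refine(prior, obs), 1))
```

The same filtering step could, under the same assumptions, be run on inter-onset intervals instead of velocities to obtain tempo adjustments; the refined melody-line curve would then drive corresponding adjustments in the accompaniment.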


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Alexandra Bonnici (1)
  • Maria Mifsud (1)
  • Kenneth P. Camilleri (1)

  1. Faculty of Engineering, University of Malta, Msida, Malta
