Chapter

Guide to Computing for Expressive Music Performance

pp 123-144

Modeling, Analyzing, Identifying, and Synthesizing Expressive Popular Music Performances

  • Rafael Ramirez (DTIC, Universitat Pompeu Fabra), email author
  • Esteban Maestre (CCRMA, Stanford University)
  • Alfonso Perez (CIRMMT and IDMIL, Schulich School of Music, McGill University)

Abstract

Professional musicians manipulate sound properties such as pitch, timing, amplitude, and timbre in order to add expression to their performances. However, there is little quantitative information about how and in which contexts this manipulation occurs. In this chapter, we describe an approach to quantitatively model and analyze expression in monophonic popular music performances, as well as to identify interpreters from their playing styles. The approach consists of (1) applying sound analysis techniques based on spectral models to real audio performances in order to extract both inter-note and intra-note expressive features, and (2) using these features to train, with machine learning techniques, computational models characterizing different aspects of expressive performance. The obtained models are applied to the analysis and synthesis of expressive performances, as well as to automatic performer identification. We present results indicating that the extracted features contain sufficient information and that the explored machine learning methods are capable of learning patterns that characterize expressive music performance.
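The two-step approach outlined above (extracting per-note expressive features, then training a model on them for performer identification) can be illustrated with a minimal sketch. The feature names (`dur_ratio`, `energy`, `pitch_dev`), the toy data, and the use of a 1-nearest-neighbour classifier are all assumptions for illustration, not the chapter's actual features or learning algorithms.

```python
# Hypothetical sketch of performer identification from inter-note expressive
# features, using a minimal 1-nearest-neighbour classifier as a stand-in for
# the machine learning techniques the chapter explores. All feature names and
# data values are invented toy examples.
import math

def extract_features(note):
    """Map a performed note to a small expressive feature vector:
    duration ratio (performed/score), normalized energy, and
    pitch deviation in cents. These features are illustrative only."""
    return (note["dur_ratio"], note["energy"], note["pitch_dev"])

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_performer(training, note):
    """Return the label of the training note closest to `note`."""
    feats = extract_features(note)
    label, _ = min(
        ((lbl, distance(extract_features(n), feats)) for lbl, n in training),
        key=lambda pair: pair[1],
    )
    return label

# Toy training set: two performers with different timing/energy tendencies.
training = [
    ("performer_A", {"dur_ratio": 1.10, "energy": 0.8, "pitch_dev": 5.0}),
    ("performer_A", {"dur_ratio": 1.15, "energy": 0.7, "pitch_dev": 8.0}),
    ("performer_B", {"dur_ratio": 0.95, "energy": 0.4, "pitch_dev": 1.0}),
    ("performer_B", {"dur_ratio": 0.90, "energy": 0.5, "pitch_dev": 2.0}),
]

query = {"dur_ratio": 1.12, "energy": 0.75, "pitch_dev": 6.0}
print(predict_performer(training, query))  # prints "performer_A"
```

In the chapter's actual setting, the feature vectors would come from spectral-model analysis of real audio, and the classifier would be one of the machine learning methods evaluated there; this sketch only shows the shape of the pipeline.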