Chapter

Guide to Computing for Expressive Music Performance

pp 49-73

Systems for Interactive Control of Computer Generated Music Performance

  • Marco Fabiani, Department of Speech, Music & Hearing, KTH Royal Institute of Technology
  • Anders Friberg, Department of Speech, Music & Hearing, KTH Royal Institute of Technology
  • Roberto Bresin, Department of Speech, Music & Hearing, KTH Royal Institute of Technology

Abstract

This chapter is a literature survey of systems for real-time interactive control of automatic expressive music performance. A classification is proposed based on two initial design choices: the music material to interact with (i.e., MIDI or audio recordings) and the type of control (i.e., direct control of low-level parameters such as tempo, intensity, and instrument balance, or mapping from high-level parameters, such as emotions, to low-level parameters). The pros and cons of these choices are briefly discussed. A generic approach to interactive control is then presented, comprising four steps: control data collection and analysis, mapping from control data to performance parameters, modification of the music material, and audiovisual feedback synthesis. Several systems are then described, each focusing on different technical and expressive aspects. A formal evaluation is missing for many of the surveyed systems; possible methods for evaluating such systems are finally discussed.
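
The four-step approach summarized above can be illustrated with a minimal Python sketch. The code below is not taken from any of the surveyed systems; the function names (collect_control_data, modify_material, synthesize_feedback), the scalar "gesture energy" input, and the numeric emotion-to-parameter values are hypothetical, chosen only to show how a high-level control signal might be reduced to an emotion label and then mapped to low-level parameters (tempo and intensity) that modify MIDI-like note events.

from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical note event: (onset in seconds, MIDI pitch, velocity 0-127).
NoteEvent = Tuple[float, int, int]

@dataclass
class PerformanceParams:
    """Low-level performance parameters from the chapter's classification."""
    tempo_scale: float      # 1.0 = nominal tempo, values below 1.0 are slower
    intensity_scale: float  # scales note velocities

# Step 2: mapping from a high-level parameter (an emotion label) to
# low-level parameters. The numeric values are illustrative only.
EMOTION_TO_PARAMS = {
    "happy": PerformanceParams(tempo_scale=1.15, intensity_scale=1.10),
    "sad":   PerformanceParams(tempo_scale=0.80, intensity_scale=0.75),
    "angry": PerformanceParams(tempo_scale=1.25, intensity_scale=1.30),
}

def collect_control_data(raw_gesture_energy: float) -> str:
    """Step 1 (placeholder): analyse incoming control data, here a single
    scalar 'gesture energy', and reduce it to a high-level emotion label."""
    if raw_gesture_energy > 0.7:
        return "angry"
    if raw_gesture_energy < 0.3:
        return "sad"
    return "happy"

def modify_material(notes: List[NoteEvent], params: PerformanceParams) -> List[NoteEvent]:
    """Step 3: apply the low-level parameters to a MIDI-like note list."""
    modified = []
    for onset, pitch, velocity in notes:
        new_onset = onset / params.tempo_scale                  # faster tempo -> earlier onsets
        new_velocity = min(127, round(velocity * params.intensity_scale))
        modified.append((new_onset, pitch, new_velocity))
    return modified

def synthesize_feedback(notes: List[NoteEvent]) -> None:
    """Step 4 (placeholder): a real system would send these events to a
    MIDI synthesizer or audio engine; here they are simply printed."""
    for onset, pitch, velocity in notes:
        print(f"t={onset:.2f}s  pitch={pitch}  vel={velocity}")

if __name__ == "__main__":
    score = [(0.0, 60, 80), (0.5, 64, 80), (1.0, 67, 80)]      # toy C-major arpeggio
    emotion = collect_control_data(raw_gesture_energy=0.85)    # steps 1-2
    params = EMOTION_TO_PARAMS[emotion]
    synthesize_feedback(modify_material(score, params))        # steps 3-4

In a real system, the placeholder steps would be replaced by actual gesture or sensor analysis and by audio or MIDI rendering; the sketch only makes the data flow between the four steps explicit.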