Capturing Expressive and Indicative Qualities of Conducting Gesture: An Application of Temporal Expectancy Models

  • Dilip Swaminathan
  • Harvey Thornburg
  • Todd Ingalls
  • Stjepan Rajko
  • Jodi James
  • Ellen Campana
  • Kathleya Afanador
  • Randal Leistikow
Conference paper

DOI: 10.1007/978-3-540-85035-9_3

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4969)
Cite this paper as:
Swaminathan D. et al. (2008) Capturing Expressive and Indicative Qualities of Conducting Gesture: An Application of Temporal Expectancy Models. In: Kronland-Martinet R., Ystad S., Jensen K. (eds) Computer Music Modeling and Retrieval. Sense of Sounds. CMMR 2007. Lecture Notes in Computer Science, vol 4969. Springer, Berlin, Heidelberg

Abstract

Many event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through movement; thus they are of considerable interest in musical control contexts. To this end, we introduce a novel Bayesian framework, which we call the temporal expectancy model, and use it to develop an analysis tool for capturing expressive and indicative qualities of conducting gesture based on temporal expectancies. The temporal expectancy model is a general dynamic Bayesian network (DBN) that can be used to encode prior knowledge regarding temporal structure to improve event segmentation. The conducting analysis tool infers beat and tempo (which are indicative) and articulation (which is expressive), as well as temporal expectancies regarding beat (ictus and preparation instances), from conducting gesture. Experimental results using our analysis framework reveal a very strong correlation between articulation (staccato vs. legato) and how significantly the preparation expectancy builds up, which bolsters the case for temporal expectancy as a cognitive model for event anticipation and as a key factor in the communication of expressive qualities of conducting gesture. Our system operates on data obtained from a marker-based motion capture system, but can be easily adapted to more affordable technologies such as video camera arrays.
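The paper's full model is a dynamic Bayesian network; as a loose, hypothetical illustration of the core idea that expectancy for an upcoming event builds as its anticipated time approaches, one could model the inter-beat interval with a Gaussian prior and read expectancy off its cumulative distribution. The function name, the Gaussian choice, and all numeric values below are illustrative assumptions, not the authors' formulation:

```python
import math

def beat_expectancy(elapsed, period_mean, period_std):
    """Toy expectancy: probability that the next beat has occurred by
    `elapsed` seconds after the previous one, under a Gaussian prior
    on the inter-beat interval (NOT the paper's DBN inference)."""
    z = (elapsed - period_mean) / (period_std * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))  # Gaussian CDF via the error function

# Expectancy builds toward the anticipated ictus:
period = 0.5   # illustrative: 120 BPM -> 0.5 s between beats
spread = 0.05  # illustrative timing uncertainty
profile = [beat_expectancy(0.1 * k, period, spread) for k in range(8)]
```

In this sketch a smaller `spread` makes the expectancy curve rise more sharply near the expected beat time, loosely analogous to the stronger preparation-expectancy buildup the paper reports for staccato articulation.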


Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Dilip Swaminathan (1)
  • Harvey Thornburg (1)
  • Todd Ingalls (1)
  • Stjepan Rajko (1)
  • Jodi James (1)
  • Ellen Campana (1)
  • Kathleya Afanador (1)
  • Randal Leistikow (2)
  1. Arts, Media and Engineering, Arizona State University, USA
  2. Zenph Studios Inc., Raleigh, USA
