
2007: Wireless Sensor Interface and Gesture-Follower for Music Pedagogy

A NIME Reader

Part of the book series: Current Research in Systematic Musicology (CRSM, volume 3)

Abstract

We present in this paper a complete gestural interface built to support music pedagogy. The development of this prototype concerned both hardware and software components: a small wireless sensor interface including accelerometers and gyroscopes, and an analysis system enabling gesture following and recognition. A first set of experiments was conducted with teenagers in a music theory class. The preliminary results were encouraging concerning the suitability of these developments in music education.


Notes

  1. http://www.i-maestro.org/.

  2. http://www.infusionsystems.com/.

  3. Since the first publication in 2007, this company has significantly evolved; see http://en.wikipedia.org/wiki/Crossbow_Technology.

  4. http://ecomote.net/.

  5. This model is no longer available from this company; see the product evolution at https://en.wikipedia.org/wiki/XBee.

  6. http://www.glui.de/.

  7. https://www.sparkfun.com/.

  8. This implementation is deprecated; please consider the freely available gf external object in the MuBu package, http://forumnet.ircam.fr/fr/produit/mubu/.

  9. http://guthman.gatech.edu/.

References

  • Aylward, R., & Paradiso, J. A. (2006). Sensemble: A wireless, compact, multi-user sensor system for interactive dance. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 134–139). Paris, France.

  • Bevilacqua, F., Muller, R., & Schnell, N. (2005). MnM: A Max/MSP mapping toolbox. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 85–88). Vancouver, Canada.

  • Bevilacqua, F., Rasamimanana, N., Fléty, E., Lemouton, S., & Baschet, F. (2006). The augmented violin project: Research, composition and performance report. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 402–406). Paris, France.

  • Bevilacqua, F., Schnell, N., Rasamimanana, N., Zamborlin, B., & Guédy, F. (2011). Online gesture analysis and control of audio processing. In Musical Robots and Interactive Multimodal Systems (pp. 127–142). Springer.

  • Bevilacqua, F., Zamborlin, B., Sypniewski, A., Schnell, N., Guédy, F., & Rasamimanana, N. (2010). Continuous realtime gesture following and recognition. In Gesture in Embodied Communication and Human-Computer Interaction (pp. 73–84). Berlin, Heidelberg: Springer.

  • Bevilacqua, F., Baschet, F., & Lemouton, S. (2012). The augmented string quartet: Experiments and gesture following. Journal of New Music Research, 41(1), 103–119.

  • Borchers, J., Hadjakos, A., & Mühlhäuser, M. (2006). MICON: A music stand for interactive conducting. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 254–259). Paris, France.

  • Caramiaux, B., Montecchio, N., Tanaka, A., & Bevilacqua, F. (2014). Adaptive gesture recognition with variation estimation for interactive systems. ACM Transactions on Interactive Intelligent Systems, 4(4), 18:1–18:34.

  • Coduys, T., Henry, C., & Cont, A. (2004). Toaster and Kroonde: High-resolution and high-speed real-time sensor interfaces. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 205–206). Hamamatsu, Japan.

  • Ferguson, S. (2006). Learning musical instrument skills through interactive sonification. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 384–389). Paris, France.

  • Fléty, E. (2005). The WiSe Box: A multi-performer wireless sensor interface using WiFi and OSC. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 266–267). Vancouver, Canada.

  • Fléty, E., & Maestracci, C. (2011). Latency improvement in sensor wireless transmission using IEEE 802.15.4. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 409–412). Oslo, Norway.

  • Françoise, J., Schnell, N., Borghesi, R., & Bevilacqua, F. (2014). Probabilistic models for designing motion and sound relationships. In Proceedings of the International Conference on New Interfaces for Musical Expression. London, UK.

  • Guédy, F. (2006). L'Inouï (Vol. 2), Le traitement du son en pédagogie musicale. Éditions Léo Scheer.

  • Iazzetta, F. (2000). Meaning in musical gesture. In M. M. Wanderley & M. Battier (Eds.), Trends in Gestural Control of Music (pp. 259–268). Paris, France: IRCAM.

  • Kolesnik, P., & Wanderley, M. (2004). Recognition, analysis and performance with expressive conducting gestures. In Proceedings of the International Computer Music Conference (pp. 572–575).

  • Lee, E., Grüll, I., Kiel, H., & Borchers, J. (2006a). conga: A framework for adaptive conducting gesture analysis. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 260–265). Paris, France.

  • Lee, E., Karrer, T., & Borchers, J. (2006b). Toward a framework for interactive systems to conduct digital audio and video streams. Computer Music Journal, 30(1), 21–36.

  • Machover, T. (2004). Shaping minds musically. BT Technology Journal, 22(4), 171–179.

  • Manitsaris, S., Glushkova, A., Bevilacqua, F., & Moutarde, F. (2014). Capture, modeling, and recognition of expert technical gestures in wheel-throwing art of pottery. Journal on Computing and Cultural Heritage, 7(2), 10.

  • Merrill, D., & Paradiso, J. A. (2005). Personalization, expressivity, and learnability of an implicit mapping strategy for physical interfaces. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI) (pp. 2152–2161).

  • Pritchard, B., & Fels, S. (2006). GRASSP: Gesturally-realized audio, speech and song performance. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 272–276). Paris, France.

  • Puig, V., Guédy, F., Fingerhut, M., Serriere, F., Bresson, J., & Zeller, O. (2005). Musique Lab 2: A three-level approach for music education at school. In Proceedings of the International Computer Music Conference (pp. 419–422).

  • Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77, 257–286.

  • Ritter, M., Hamel, K., & Pritchard, R. (2013). Integrated multimodal score-following environment. In Proceedings of the International Computer Music Conference (pp. 185–192). Perth, Australia.

  • Schnell, N., & Schwarz, D. (2005). Gabor, multi-representation real-time analysis/synthesis. In COST-G6 Conference on Digital Audio Effects (p. 122).

  • Schnell, N., Bevilacqua, F., Rasamimanana, N., Bloit, J., Guédy, F., & Fléty, E. (2011). Playing the "MO": Gestural control and re-embodiment of recorded sound and music. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 535–536). Oslo, Norway.

  • Schnell, N., Borghesi, R., Schwarz, D., Bevilacqua, F., & Muller, R. (2005). FTM: Complex data structures for Max. In Proceedings of the International Computer Music Conference. Barcelona, Spain.

  • Van Nort, D., Oliveros, P., & Braasch, J. (2013). Electro/acoustic improvisation and deeply listening machines. Journal of New Music Research, 42(4), 303–324.


Acknowledgements

The I-MAESTRO project is partially supported by the European Community under the Information Society Technologies (IST) priority of the 6th Framework Programme for R&D (IST-026883). Thanks to all I-MAESTRO project partners and participants, for their interests, contributions and collaborations. We would like to thank Remy Müller, Alice Daquet, Nicolas Rasamimanana, Riccardo Borghesi, Diemo Schwarz and Donald Glowinski for contributions to this work and fruitful discussions.

Author information


Corresponding author

Correspondence to Frederic Bevilacqua.


Appendices

Author Commentary: Once Upon A NIME

Frederic Bevilacqua and Norbert Schnell

The initial aim of this article was to give an overview of the research on music pedagogy with tangible interfaces that our team was starting at the time. The article contains three fairly independent contributions that could alternatively have been separated into three different articles:

  • Wireless sensing hardware

  • Gesture analysis and interactive machine learning software

  • Applications and use cases in music pedagogy

However, the idea was to describe how different streams of NIME research converged in specific use cases that had actually been implemented. Revisiting the three themes almost 10 years later gives us the opportunity to contextualize this work within the flow of still ongoing research and development.

Firstly, the possibility of creating a small wireless sensor interface using off-the-shelf wireless transmission modules marked a considerable breakthrough compared to other systems we had reported earlier in the NIME community (see Fléty and Maestracci 2011 and references therein). The advantage of these modules resided in the favorable compromise between small size, low power consumption, and relatively high bandwidth. Moreover, it was possible to use several wireless interfaces in parallel. This matched well with our applications, which included music and dance performance as well as experimental music pedagogy. This hardware development, among others at the time, was representative of the increased use of wireless technology at NIME. For our research group, it allowed us to boost the use of miniature wireless interfaces built around inertial measurement units (IMUs), such as accelerometers and gyroscopes. Since then, we have employed several generations of such IMU-based wireless interfaces in many applications, more recently also including mobile and web platforms. In fact, the development described in the article coincides with that of the first generation of smartphones, as well as the first generation of game controllers with wireless motion sensing such as the Wiimote (with a much bigger form factor and still lacking precision and reliability).
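To give a concrete picture of the data path such an interface implies: the sensor module streams accelerometer and gyroscope frames to the analysis software, typically as OSC messages over a UDP link. The Python sketch below illustrates that general pattern only; the /imu address, host, port, and update rate are illustrative assumptions, not the actual protocol of the interface described in the paper.

```python
# Minimal sketch of an IMU-to-OSC streaming loop (illustrative only).
# Requires the python-osc package; the /imu address, host, port, and
# rate are assumptions, not the protocol of the paper's interface.
import time
import random

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # e.g. a Max patch listening on port 8000

def read_imu():
    """Stand-in for the real sensor read-out: 3-axis accelerometer
    plus 3-axis gyroscope, i.e. six floats per frame."""
    return [random.uniform(-1.0, 1.0) for _ in range(6)]

while True:
    frame = read_imu()
    client.send_message("/imu", frame)  # one OSC message per sensor frame
    time.sleep(0.005)                   # ~200 Hz, in the range such interfaces target
```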

Secondly, this article featured the first complete description of the "gesture follower," which has represented an important line of research in our team up until now. Since this report, the gesture follower has been used in a large number of artistic performances and installations, in music and dance (Bevilacqua et al. 2011). More complete descriptions followed in subsequent articles, and this research influenced similar methods such as "mapping by listening" and tools such as GVF (Caramiaux et al. 2014) and, more recently, XMM (Françoise et al. 2014). The gesture follower can be considered our first development in what we now call, more generally, "interactive machine learning" (although the name was not as common at NIME at that time). In particular, it allowed users to record their gestures as many times as they wanted, in order to build movement-sound interactions from rather few examples (see the sketch below). We found this flexibility to be a key element for the music pedagogy use cases we reported on, as well as for many other use cases we have developed over the past years. Finally, the applications in music education described in the article occupied us for several years of continued research. These "real-world" applications certainly helped foster several concepts we are still pursuing, such as the use of "metaphors" and "playing techniques."
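As a rough illustration of this few-examples workflow, the sketch below records a handful of named gesture demonstrations and classifies a new one by a length-normalised dynamic time warping (DTW) distance. DTW is a deliberately simple stand-in here; the gesture follower itself relies on an HMM-based method, sketched in the expert commentary further below.

```python
import numpy as np

def dtw_distance(a, b):
    """Length-normalised dynamic time warping distance between two
    gesture recordings of shape (frames, sensor_channels)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

# Few-shot workflow: record a handful of named examples, then classify.
templates = {}  # gesture name -> recorded IMU frames, shape (frames, channels)

def record(name, frames):
    """Store one demonstration; a few per gesture already work well."""
    templates[name] = np.asarray(frames, dtype=float)

def classify(frames):
    """Return the name of the closest recorded example."""
    return min(templates, key=lambda name: dtw_distance(templates[name], frames))
```

With this kind of template matching, recognition quality depends mostly on how consistently the demonstrations are performed, which is exactly why letting users re-record their examples at will matters so much.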

Overall, this article can be seen as the start of a line of research that produced our interfaces MO (Modular Musical Objects), which were also reported in the NIME community (Schnell et al. 2011) and which won the Margaret Guthman Musical Instrument Competition (see Note 9). Many of the ideas and techniques presented in this article are still actively pursued, in particular with mobile and web technologies.

Expert Commentary: Gesture Following Made Accessible

Kristian Nymoen

One of the beauties of the NIME conference is its encouragement of publications that focus on novel systems for musical expression. As a result, NIME publications are often broad in scope, presenting an entire system: the hardware, software, modes of interaction, musical output, and more. Sometimes the presented system as a whole persists as an iconic interface for musical expression for many years to come. In other cases a particular sub-unit of the system, such as a piece of hardware or a specific mode of interaction, is the contribution that makes the publication leave its mark within and beyond the community.

The paper by Bevilacqua et al. demonstrates brilliantly how NIME developments may provide important outcomes. The paper is "disguised" as a well-written, yet quite ordinary, NIME paper on a prototype of a complete system for music pedagogy. The custom-made hardware was thoroughly documented and was a cutting-edge solution for portable sensing and wireless communication. The problem it tackles, using technology for studying and teaching fluidity in conducting, is still relevant, and the authors showed the applicability of the system to this task through testing and evaluation in a real pedagogical context with students and their teacher. Still, it is the software part of the system that really stands out as one of the important NIME contributions of the decade: the "Gesture Follower."

The Gesture Follower was implemented in IRCAM's FTM framework for Max. FTM extends Max with various data structures and operators, and facilitates many kinds of data processing in Max, for instance the processing of motion data. The implementation in Max is one of the main reasons why the Gesture Follower became such an important piece of software. Gesture recognition in music had already been explored for several years by the time of this paper's publication. However, applying it required knowledge of the machine learning algorithms involved and, in most cases, proficiency in some text-based programming environment. The out-of-the-box examples and tutorials for the Gesture Follower made gesture recognition accessible to a larger user group, including musicians and artists who preferred the graphical interface of Max to text-based languages.

Not only was the Gesture Follower more accessible thanks to its Max implementation, it also provided a combination of highly useful features for musical interaction. Classification happens in real time, continuously updating the classification score against each of the pre-trained examples while the user is moving. As such, the system can easily be used for mapping between motion data and synthesizer parameters. To allow for different durations between training examples and input gestures, an elegant time-warping solution was implemented. For each of the pre-trained examples, the Gesture Follower provides an estimate of the time index within the gesture. In other words, the system does not only recognize the gesture, it follows the gesture. With such a time-warping function in place, it is possible to match the playback duration to the duration of the input gesture.
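To make this mechanism concrete: the published method (Bevilacqua et al. 2010) treats each recorded example as a left-to-right HMM whose states are the template's own samples, and a forward pass over the incoming frames yields both a match score and the expected position within the template. The Python sketch below is a simplified illustration under assumed Gaussian observation noise and fixed transition weights, not IRCAM's implementation.

```python
import numpy as np

class GestureTemplate:
    """Simplified gesture follower: one left-to-right HMM per recorded
    example, with one state per template sample (illustrative sketch)."""

    def __init__(self, template, sigma=0.2, trans=(0.4, 0.4, 0.2)):
        self.template = np.asarray(template, dtype=float)  # shape (frames, channels)
        self.n = len(self.template)
        self.sigma = sigma   # assumed observation noise (tunable)
        self.trans = trans   # P(stay), P(advance by 1), P(advance by 2)
        self.reset()

    def reset(self):
        # All probability mass starts on the first state of the template.
        self.alpha = np.zeros(self.n)
        self.alpha[0] = 1.0

    def step(self, frame):
        """Forward-algorithm update for one incoming sensor frame.
        Returns (score, time_index): how well the live gesture matches
        this template, and how far along it (0..1) the performer is."""
        frame = np.asarray(frame, dtype=float)
        stay, adv1, adv2 = self.trans
        a = self.alpha
        new = stay * a
        new[1:] += adv1 * a[:-1]    # advance to the next state
        new[2:] += adv2 * a[:-2]    # skip a state (faster performance)
        # Gaussian observation likelihood of the frame under each state.
        d2 = np.sum((self.template - frame) ** 2, axis=1)
        new *= np.exp(-d2 / (2.0 * self.sigma ** 2))
        score = new.sum()           # per-frame evidence for this template
        self.alpha = new / score if score > 0 else new
        time_index = float(np.dot(self.alpha, np.arange(self.n))) / max(self.n - 1, 1)
        return score, time_index
```

Running one such follower per recorded example and comparing their accumulated scores gives the recognized gesture, while the winner's time index can drive time-stretched playback of an associated recording, which is precisely the synchronization behavior described above.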

Since this first publication on the Gesture Follower, the IRCAM team has presented a number of follow-up articles with various improvements and tests of the system in different contexts (Bevilacqua et al. 2010, 2012). One such development is the implementation of the Gesture Follower in MuBu, IRCAM's multi-buffer container for audio and motion capture data in Max, with improvements in both functionality and user interface.

The paper by Bevilacqua et al. shows that developing new interfaces for musical expression is about more than the system itself. NIME technologies are often highly generic and may have much broader application areas than the system presented in the NIME publication. In this specific case, the Gesture Follower was presented as one part of a system for music pedagogy, but it has proven just as useful in music performance (Van Nort et al. 2013), in specialized systems for multimodal score following (Ritter et al. 2013), and even in tasks unrelated to music, such as the recognition of wheel-throwing pottery gestures (Manitsaris et al. 2014).


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Bevilacqua, F., Guédy, F., Schnell, N., Fléty, E., Leroy, N. (2017). 2007: Wireless Sensor Interface and Gesture-Follower for Music Pedagogy. In: Jensenius, A., Lyons, M. (eds) A NIME Reader. Current Research in Systematic Musicology, vol 3. Springer, Cham. https://doi.org/10.1007/978-3-319-47214-0_18


  • DOI: https://doi.org/10.1007/978-3-319-47214-0_18


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-47213-3

  • Online ISBN: 978-3-319-47214-0

