
MashtaCycle: On-Stage Improvised Audio Collage by Content-Based Similarity and Gesture Recognition

  • Christian Frisson
  • Gauthier Keyaerts
  • Fabien Grisard
  • Stéphane Dupont
  • Thierry Ravet
  • François Zajéga
  • Laura Colmenares Guerra
  • Todor Todoroff
  • Thierry Dutoit
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 124)

Abstract

In this paper we present the outline of a performance in progress. It brings together the skilled musical practice of Belgian audio collagist Gauthier Keyaerts, aka Very Mash’ta, and the real-time, content-based audio browsing capabilities of the AudioCycle and LoopJam applications developed by the remaining authors. MashtaCycle, the tool derived from AudioCycle, supports the preparation of collections of stem audio loops before performances by extracting content-based features (for instance, timbre) that determine the positions of these sounds on a 2D visual map. On stage, the tool becomes an embodied instrument: its user interface relies on a depth-sensing camera and is augmented by a public projection of the 2D map. The camera tracks the position of the artist within the sensing area to trigger sounds, as in the LoopJam installation. It also senses the performer's gestures, interpreted with the Full Body Interaction (FUBI) framework, allowing sound effects to be applied through bodily movements. MashtaCycle blurs the boundaries between performance and preparation, navigation and improvisation, installation and concert.
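The abstract outlines a three-stage pipeline: per-loop timbre description, a similarity-preserving 2D layout, and position-based triggering. The sketch below is a hypothetical Python approximation of that pipeline, not the AudioCycle/MashtaCycle implementation; the choice of librosa and scikit-learn, and all function names (timbre_descriptor, build_map, nearest_loop), are assumptions made for illustration.

```python
# Hypothetical sketch of a MashtaCycle-style pipeline: summarize each loop's
# timbre, project the collection onto a 2D map by content-based similarity,
# then trigger the loop nearest to the performer's tracked position.
import numpy as np
import librosa
from sklearn.manifold import TSNE

def timbre_descriptor(path, n_mfcc=13):
    """Summarize a loop's timbre as the mean of its MFCC frames."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def build_map(paths):
    """Position loops on a 2D map so that similar timbres land close together."""
    feats = np.vstack([timbre_descriptor(p) for p in paths])
    # Nonlinear dimensionality reduction to 2D; perplexity must stay
    # below the number of loops in the collection.
    tsne = TSNE(n_components=2, perplexity=min(30, len(paths) - 1),
                random_state=0)
    return tsne.fit_transform(feats)

def nearest_loop(xy, performer_xy):
    """Index of the loop closest to the performer's position on the map."""
    dists = np.linalg.norm(xy - np.asarray(performer_xy), axis=1)
    return int(np.argmin(dists))
```

In performance, the floor position reported by the depth camera would be rescaled to the map's coordinate range before calling nearest_loop, so that walking across the sensing area starts the corresponding stem loop.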

Keywords

Human-music interaction · audio collage · content-based similarity · gesture recognition · depth cameras · digital audio effects



Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2013

Authors and Affiliations

  • Christian Frisson (2)
  • Gauthier Keyaerts (1)
  • Fabien Grisard (2, 3)
  • Stéphane Dupont (2)
  • Thierry Ravet (2)
  • François Zajéga (2)
  • Laura Colmenares Guerra (2)
  • Todor Todoroff (2)
  • Thierry Dutoit (2)
  1. aka Very Mash’ta and the Aktivist, artist residing in Brussels, Belgium
  2. numediart Institute, University of Mons (UMONS), Mons, Belgium
  3. ACROE / Institut Phelma, Grenoble, France
