Journal on Multimodal User Interfaces

Volume 4, Issue 1, pp 27–35

Human movement expressivity for mobile active music listening

  • Maurizio Mancini
  • Giovanna Varni
  • Jari Kleimola
  • Gualtiero Volpe
  • Antonio Camurri
Original Paper

Abstract

In this paper we describe the SAME networked platform for context-aware, experience-centric mobile music applications, and we present an implementation of the SAME active music listening paradigm: the Mobile Conductor. It allows the user to expressively conduct a virtual ensemble playing a MIDI piece of music by means of her mobile phone. The phone detects the user’s hand movement and shapes the style of the performance by modulating its speed, volume, and intonation.
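The abstract only outlines the gesture-to-performance mapping, so the Python fragment below illustrates one plausible way a phone's accelerometer stream could be turned into tempo and volume controls for a MIDI renderer. It is a minimal sketch under assumed names and ranges (GestureMapper, BASE_TEMPO_BPM, BASE_VELOCITY, FULL_SCALE_ACCEL are all hypothetical), not the SAME platform's or the Mobile Conductor's actual code.

# Illustrative sketch (not the authors' implementation): estimates the
# vigour of recent hand movement from 3-axis accelerometer samples and
# maps it to MIDI-style performance parameters (tempo and velocity).
# All names and numeric ranges below are assumptions for the example.

from dataclasses import dataclass

BASE_TEMPO_BPM = 100.0     # assumed nominal tempo of the MIDI piece
BASE_VELOCITY = 80         # assumed nominal MIDI note velocity (0-127)
FULL_SCALE_ACCEL = 5.0     # assumed movement energy (m/s^2) treated as "maximum vigour"


@dataclass
class PerformanceParams:
    tempo_bpm: float       # playback speed passed to the MIDI renderer
    velocity: int          # loudness as MIDI velocity


class GestureMapper:
    """Turns the energy of recent hand movement into tempo/volume changes."""

    def __init__(self, window: int = 20):
        self.window = window            # number of recent samples considered
        self.samples: list[float] = []  # acceleration magnitudes

    def push(self, ax: float, ay: float, az: float) -> None:
        """Store the magnitude of one 3-axis accelerometer sample."""
        self.samples.append((ax * ax + ay * ay + az * az) ** 0.5)
        if len(self.samples) > self.window:
            self.samples.pop(0)

    def params(self) -> PerformanceParams:
        """Map the variability of movement energy to tempo and velocity."""
        if len(self.samples) < 2:
            return PerformanceParams(BASE_TEMPO_BPM, BASE_VELOCITY)
        mean = sum(self.samples) / len(self.samples)
        var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
        vigour = min((var ** 0.5) / FULL_SCALE_ACCEL, 1.0)  # normalised to [0, 1]
        # Larger, faster gestures -> faster and louder performance.
        tempo = BASE_TEMPO_BPM * (0.7 + 0.6 * vigour)       # 70%..130% of base
        velocity = int(BASE_VELOCITY * (0.6 + 0.8 * vigour))
        return PerformanceParams(tempo, max(1, min(127, velocity)))

In use, such a mapper would receive push() calls for each accelerometer sample and be polled via params() at regular intervals to update the MIDI playback engine; how the real system segments gestures and handles intonation is described in the paper itself.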

Keywords

Human-computer interaction · Active music listening · Movement expressivity



Copyright information

© OpenInterface Association 2010

Authors and Affiliations

  • Maurizio Mancini (1)
  • Giovanna Varni (1)
  • Jari Kleimola (2)
  • Gualtiero Volpe (1)
  • Antonio Camurri (1)

  1. InfoMus Lab, Laboratorio di Informatica Musicale, DIST, University of Genova, Genova, Italy
  2. Department of Signal Processing and Acoustics, Aalto University School of Science and Technology, Aalto, Finland
