ACII 2007: Affective Computing and Intelligent Interaction, pp. 501-510

User-Centered Control of Audio and Visual Expressive Feedback by Full-Body Movements

  • Ginevra Castellano
  • Roberto Bresin
  • Antonio Camurri
  • Gualtiero Volpe
Conference paper

DOI: 10.1007/978-3-540-74889-2_44

Volume 4738 of the book series Lecture Notes in Computer Science (LNCS)
Cite this paper as:
Castellano G., Bresin R., Camurri A., Volpe G. (2007) User-Centered Control of Audio and Visual Expressive Feedback by Full-Body Movements. In: Paiva A.C.R., Prada R., Picard R.W. (eds) Affective Computing and Intelligent Interaction. ACII 2007. Lecture Notes in Computer Science, vol 4738. Springer, Berlin, Heidelberg

Abstract

In this paper we describe a system that allows users to express themselves through full-body movement and gesture and to control, in real time, the generation of audio-visual feedback. The system analyses the user's full-body movement and gesture in real time, extracts expressive motion features, and maps their values onto real-time control of acoustic parameters for rendering a music performance. At the same time, visual feedback generated in real time is projected on a screen in front of the users: their silhouette is coloured depending on the emotion their movement communicates. Human movement analysis and visual feedback generation were done with the EyesWeb software platform, and the music performance rendering with pDM. Evaluation tests were carried out with human participants to assess the usability of the interface and the effectiveness of the design.
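
As a rough illustration of the feature-to-parameter mapping described above, the sketch below (Python, not the authors' code) maps two expressive motion features of the kind EyesWeb extracts, quantity of motion (QoM) and contraction index (CI), onto tempo and sound-level scalings of the kind pDM controls. The feature names, value ranges, and the linear mapping are assumptions for illustration only.

```python
# Illustrative sketch only: hypothetical mapping from normalised motion
# features (QoM, CI in 0..1) to performance parameters. The actual
# system's rules, implemented via EyesWeb and pDM, are not shown here.

from dataclasses import dataclass


@dataclass
class PerformanceParams:
    tempo_scale: float  # multiplier on the nominal score tempo
    level_scale: float  # multiplier on the nominal sound level


def map_motion_to_performance(qom: float, ci: float) -> PerformanceParams:
    """Map motion features to performance parameters.

    Higher quantity of motion -> faster, louder rendering;
    higher contraction index (a more 'closed' posture) -> softer rendering.
    """
    qom = min(max(qom, 0.0), 1.0)  # clamp to the assumed 0..1 range
    ci = min(max(ci, 0.0), 1.0)
    tempo_scale = 0.7 + 0.6 * qom               # 0.7x .. 1.3x tempo
    level_scale = (0.5 + 0.5 * qom) * (1.0 - 0.3 * ci)
    return PerformanceParams(tempo_scale, level_scale)


if __name__ == "__main__":
    # An energetic, expansive gesture yields a fast, loud rendering.
    print(map_motion_to_performance(qom=0.9, ci=0.1))
```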

Keywords

Affective interaction · expressive gesture · multimodal environments · interactive music systems


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Ginevra Castellano (1)
  • Roberto Bresin (2)
  • Antonio Camurri (1)
  • Gualtiero Volpe (1)

  1. InfoMus Lab, DIST - University of Genova, Viale Causa 13, I-16145 Genova, Italy
  2. KTH, CSC School of Computer Science and Communication, Dept. of Speech, Music and Hearing, Stockholm, Sweden