Ghost in the Cave – An Interactive Collaborative Game Using Non-verbal Communication

  • Marie-Louise Rinman
  • Anders Friberg
  • Bendik Bendiksen
  • Demian Cirotteau
  • Sofia Dahl
  • Ivar Kjellmo
  • Barbara Mazzarino
  • Antonio Camurri
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2915)

Abstract

The interactive game environment Ghost in the Cave, presented in this short paper, is a work in progress. The game engages participants in an activity based on non-verbal emotional expression: two teams compete using expressive gestures in either voice or body movement. Each team controls an avatar either by singing into a microphone or by moving in front of a video camera, steering it with acoustic or motion cues. The avatar is navigated in a 3D distributed virtual environment built on the Octagon server and player system. Voice input is processed by a musical cue analysis module that yields performance variables such as tempo, sound level and articulation, as well as an emotion prediction; similarly, movements captured by the video camera are analysed in terms of corresponding movement cues. The target group is young teenagers, and the main purpose is to encourage creative expression through new forms of collaboration.
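The paper itself contains no code, but the kind of pipeline the abstract describes (performance cues such as tempo, sound level and articulation feeding a simple emotion prediction) can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the function names, thresholds and rules are hypothetical and do not reproduce the authors' actual analysis module.

```python
import math

def extract_cues(onsets, durations, amplitudes):
    """Toy musical cue extraction from a note list.
    onsets: note onset times in seconds (one per note);
    durations/amplitudes: sounding length (s) and linear amplitude (0..1)
    for each note except the last (whose inter-onset interval is undefined)."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]   # inter-onset intervals
    tempo = 60.0 / (sum(iois) / len(iois))               # mean tempo in BPM
    level = 20 * math.log10(sum(amplitudes) / len(amplitudes))  # mean level, dB
    # Articulation: fraction of each inter-onset interval the note sounds
    # (close to 1.0 = legato, well below 1.0 = staccato).
    articulation = sum(d / i for d, i in zip(durations, iois)) / len(iois)
    return {"tempo": tempo, "level": level, "articulation": articulation}

def predict_emotion(cues):
    """Hypothetical rule set loosely inspired by cue patterns reported in the
    music-and-emotion literature: fast/loud maps to happy or angry depending
    on articulation; slow/soft maps to sad."""
    fast = cues["tempo"] > 120
    loud = cues["level"] > -12
    staccato = cues["articulation"] < 0.7
    if fast and loud:
        return "angry" if staccato else "happy"
    if not fast and not loud:
        return "sad"
    return "neutral"

# A fast, loud, legato passage: four evenly spaced notes at 150 BPM.
cues = extract_cues([0.0, 0.4, 0.8, 1.2],
                    [0.35, 0.38, 0.36],
                    [0.8, 0.7, 0.75])
print(predict_emotion(cues))  # prints "happy"
```

In the game, the output of such a mapping would drive the avatar rather than be printed; the sketch only shows the cue-to-emotion step in isolation.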



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Marie-Louise Rinman (1)
  • Anders Friberg (2)
  • Bendik Bendiksen (3)
  • Demian Cirotteau (5)
  • Sofia Dahl (2)
  • Ivar Kjellmo (3)
  • Barbara Mazzarino (4)
  • Antonio Camurri (4)

  1. Centre of User Oriented IT-design, KTH, Stockholm
  2. Speech, Music and Hearing, KTH, Stockholm
  3. Octaga / Telenor, Oslo
  4. InfoMus Lab, DIST – University of Genoa, Genoa
  5. CSC – Center of Computational Sonology, DEI – Dept. of Information Engineering, University of Padua
