Ghost in the Cave – An Interactive Collaborative Game Using Non-verbal Communication
The interactive game environment Ghost in the Cave, presented in this short paper, is a work in progress. The game engages participants in an activity based on non-verbal emotional expression. Two teams compete using expressive gestures in either voice or body movement: each team controls an avatar either by singing into a microphone or by moving in front of a video camera, so that acoustic or motion cues steer the avatar. The avatar is navigated in a distributed 3D virtual environment built on the Octagon server and player system. Voice input is processed by a musical cue analysis module that yields performance variables such as tempo, sound level, and articulation, as well as an emotion prediction. Similarly, movements captured by the video camera are analyzed in terms of different movement cues. The target group is young teenagers, and the main purpose is to encourage creative expression through new forms of collaboration.
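To illustrate the kind of mapping such a cue analysis module might perform, the sketch below shows a minimal, hypothetical rule-based classifier. It is not the paper's actual module: the cue names (normalized `tempo`, `sound_level`, `articulation` in the range 0–1), the thresholds, and the emotion labels are all illustrative assumptions, loosely following the general finding in music-performance research that fast, loud, staccato performances tend to be heard as angry, fast and loud legato as happy, and slow, soft performances as sad.

```python
def predict_emotion(tempo: float, sound_level: float, articulation: float) -> str:
    """Toy cue-to-emotion mapping (illustrative only, not the paper's module).

    All cues are assumed normalized to [0, 1]; articulation near 1 means
    staccato, near 0 means legato.
    """
    if tempo > 0.6 and sound_level > 0.6:
        # Fast and loud: staccato reads as angry, legato as happy.
        return "angry" if articulation > 0.5 else "happy"
    if tempo < 0.4 and sound_level < 0.4:
        # Slow and soft performances tend to be perceived as sad.
        return "sad"
    return "neutral"


# Example: a fast, loud, staccato vocal input.
print(predict_emotion(0.8, 0.8, 0.7))  # → angry
```

In a real system these hand-tuned thresholds would be replaced by a statistical model trained on rated performances, but a rule table of this shape conveys how a small set of acoustic cues can drive an avatar's expressive state.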