Multimodal Gumdo Game: The Whole Body Interaction with an Intelligent Cyber Fencer

  • Jungwon Yoon
  • Sehwan Kim
  • Jeha Ryu
  • Woontack Woo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2532)


This paper presents an immersive multimodal Gumdo simulation game that allows a user to experience whole-body interaction with an intelligent cyber fencer. The proposed system consists of three modules: (i) a non-distracting multimodal interface with 3D vision and speech, (ii) an intelligent cyber fencer, and (iii) immersive feedback via a large screen and sound. First, the multimodal interface allows the user to move around and shout without being encumbered. Second, the intelligent cyber fencer interacts with the user through perception and reaction modules that were designed from an analysis of real Gumdo matches. Finally, immersive audio-visual feedback helps the user experience a natural, engaging interaction. The proposed system is designed to satisfy a comfortable interface, perceptual intelligence, and natural interaction (I-cubed) and to enhance the life-like impression of the fighting actions. The system can be applied to various domains such as education, art, and exercise.
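The three-module pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: all names (`MultimodalInput`, `perceive`, `react`, `render_feedback`) and thresholds are assumptions chosen to show how fused vision/speech features might flow through a perception module and a reaction module into audio-visual feedback.

```python
# Illustrative sketch (assumed names and thresholds, not from the paper):
# a multimodal interface fuses 3D-vision and speech features, the cyber
# fencer's perception module labels the user's action, and a reaction
# module picks a counter-move that drives the audio-visual feedback.

from dataclasses import dataclass

@dataclass
class MultimodalInput:
    sword_height: float   # normalized sword-tip height from 3D vision
    shout_energy: float   # speech-energy feature from the microphone

def perceive(obs: MultimodalInput) -> str:
    """Perception module: classify the user's action from fused features."""
    if obs.shout_energy > 0.8 and obs.sword_height > 0.7:
        return "head_strike"      # loud shout plus raised sword
    if obs.sword_height > 0.7:
        return "raise_guard"
    return "idle"

def react(action: str) -> str:
    """Reaction module: select the cyber fencer's counter-move."""
    counters = {"head_strike": "block_high", "raise_guard": "thrust", "idle": "wait"}
    return counters[action]

def render_feedback(move: str) -> str:
    """Immersive feedback: describe the audio-visual response."""
    if move == "wait":
        return "screen: fencer waits"
    return f"screen: fencer performs {move}; sound: sword clash"

if __name__ == "__main__":
    obs = MultimodalInput(sword_height=0.9, shout_energy=0.95)
    print(render_feedback(react(perceive(obs))))
```

In the actual system the perception module would run on tracked 3D body motion and recognized speech rather than two scalar features, but the control flow (sense, perceive, react, render) is the same.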


Keywords: Virtual Reality · Virtual Environment · Haptic Feedback · Motor Module · Body Interaction
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Jungwon Yoon (1)
  • Sehwan Kim (2)
  • Jeha Ryu (1)
  • Woontack Woo (2)
  1. Dept. of Mechatronics, K-JIST, Korea
  2. Dept. of Information & Communications, K-JIST, Kwangju, Korea
