
Real-Time Gesture Recognition, Evaluation and Feed-Forward Correction of a Multimodal Tai-Chi Platform

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 5270)

Abstract

This paper presents a multimodal system that understands and corrects, in real time, the movements of Tai-Chi students through the integration of audio-visual-tactile technologies. The platform acts as a virtual teacher that transfers the knowledge of five Tai-Chi movements, using feedback stimuli to compensate for the errors a user commits while performing a gesture. The fundamental components of this multimodal interface are the gesture recognition system (using k-means clustering, Probabilistic Neural Networks (PNN) and Finite State Machines (FSM)) and the real-time motion descriptor, which computes and qualifies the movements performed by the student with respect to those performed by the master, generating several feedback signals and compensating the movement in real time by varying audio-visual-tactile parameters of different devices. Experiments with this multimodal platform confirm that the quality of the movements performed by the students improves significantly.
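The recognition pipeline the abstract outlines, quantizing raw poses with k-means and tracking a gesture as a sequence of quantized states with an FSM, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy 2-D poses, function names, and the omission of the PNN stage are all assumptions made here for brevity.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: return k centroids for a list of pose vectors (tuples)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster.
        for j, cluster in enumerate(clusters):
            if cluster:
                centroids[j] = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    return centroids

def quantize(pose, centroids):
    """Map a raw pose to the index of its nearest centroid (its discrete 'state')."""
    return min(range(len(centroids)), key=lambda j: math.dist(pose, centroids[j]))

class GestureFSM:
    """Advance through an expected state sequence; report True once it completes."""
    def __init__(self, state_sequence):
        self.seq = state_sequence
        self.pos = 0
    def step(self, state):
        if self.pos < len(self.seq) and state == self.seq[self.pos]:
            self.pos += 1
        return self.pos == len(self.seq)

# Toy 2-D poses (real joint data is far higher-dimensional).
poses = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.0, 5.1), (10.0, 0.0), (10.0, 0.1)]
centroids = kmeans(poses, 3)

# The master's performance defines the expected state sequence for one gesture.
master_states = [quantize(p, centroids) for p in [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]]

fsm = GestureFSM(master_states)
recognized = False
for pose in [(0.01, 0.0), (5.0, 5.02), (10.0, 0.03)]:  # student's pose stream
    recognized = fsm.step(quantize(pose, centroids))
```

Because the student's poses quantize to the same state sequence as the master's, the FSM completes and the gesture is recognized; the paper's actual system additionally scores each state with a PNN and drives the audio-visual-tactile feedback from the residual error.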




Editor information

Antti Pirhonen, Stephen Brewster


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Portillo-Rodriguez, O., Sandoval-Gonzalez, O.O., Ruffaldi, E., Leonardi, R., Avizzano, C.A., Bergamasco, M. (2008). Real-Time Gesture Recognition, Evaluation and Feed-Forward Correction of a Multimodal Tai-Chi Platform. In: Pirhonen, A., Brewster, S. (eds) Haptic and Audio Interaction Design. HAID 2008. Lecture Notes in Computer Science, vol 5270. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87883-4_4


  • DOI: https://doi.org/10.1007/978-3-540-87883-4_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-87882-7

  • Online ISBN: 978-3-540-87883-4

  • eBook Packages: Computer Science (R0)
