Summary

Humans often make conscious and unconscious gestures that reflect their mind, their thoughts, and the way these thoughts are formulated. These inherently complex processes cannot, in general, be substituted by a corresponding verbal utterance with the same semantics (McNeill, 1992). Gesture, a form of body language, carries important information about the intention and state of the gesture producer. It is therefore an important communication channel in human-computer interaction.

In the following, we first describe the state of the art in gesture recognition. The next section describes the gesture interpretation module. After that, we present experiments and results for the recognition of user states. The last section summarizes our results.

References

  • S. Akyol, L. Libuda, and K.F. Kraiss. Multimodale Benutzung adaptiver Kfz-Bordsysteme. In: T. Jürgensohn and K.P. Timpe (eds.), Kraftfahrzeugführung, pp. 137–154, Berlin Heidelberg New York, 2001. Springer.

  • V. Attina, D. Beautemps, M.A. Cathiard, and M. Odisio. Toward an Audiovisual Synthesizer for Cued Speech: Rules for CV French Syllables. In: J.L. Schwartz, F. Berthommier, M.A. Cathiard, and D. Sodoyer (eds.), Proc. AVSP 2003 Auditory-Visual Speech Processing, pp. 227–232, St. Jorioz, France, September 2003. ISCA Tutorial and Research Workshop.

  • R. Azuma. Tracking Requirements for Augmented Reality. Communications of the ACM, 36(7):50–51, July 1993.

  • R. Bolt. “Put-That-There”: Voice and Gesture. In: Computer Graphics, pp. 262–270, 1980.

  • W. Buxton, R. Sniderman, W. Reeves, S. Patel, and R. Baecker. An Introduction to the SSSP Digital Synthesizer. In: C. Roads and J. Strawn (eds.), Foundations of Computer Music, pp. 387–392, Cambridge, MA, 1985. MIT Press.

  • R.O. Cornett. Cued Speech. American Annals of the Deaf, 112:3–13, 1967.

  • H. Eglowstein. Reach Out and Touch Your Data. Byte, 7:283–290, 1990.

  • S. Fels and G.E. Hinton. Glove-Talk: A Neural Network Interface Between a Data-Glove and a Speech Synthesizer. In: IEEE Transactions on Neural Networks, vol. 4, pp. 2–8, 1993.

  • S. Kettebekov and R. Sharma. Multimodal Interfaces. http://www.cse.psu.edu/~rsharma/imap1.html. Cited 15 December 2003.

  • G. Kurtenbach and T. Baudel. Hypermarks: Issuing Commands by Drawing Marks in Hypercard. In: ACM SIGCHI, p. 64, Vancouver, Canada, 1992.

  • T. Lütticke. Gestenerkennung zur Anweisung eines mobilen Roboters. Master’s thesis, Universität Karlsruhe (TH), 2000.

  • C. Maggioni. Gesture Computer — New Ways of Operating a Computer. In: Proc. Int. Conf. on Automatic Face and Gesture Recognition, pp. 166–171, 1995.

  • A. Marcus and J. Churchill. Sensing Human Hand Motions for Controlling Dexterous Robots. In: The 2nd Annual Space Operations Automation and Robotics Workshop, Dayton, OH, July 1988.

  • D. McNeill. Hand and Mind: What Gestures Reveal About Thought. University of Chicago Press, Chicago, IL, 1992.

  • S. Oviatt. Ten Myths of Multimodal Interaction. Communications of the ACM, 42(11):74–81, 1999.

  • F. Quek. FingerMouse: A Freehand Pointing Interface. In: Int. Workshop on Automatic Face- and Gesture-Recognition, pp. 372–377, Zurich, Switzerland, June 1995.

  • F.H. Raab, E.B. Blood, T.O. Steiner, and H.R. Jones. Magnetic Position and Orientation Tracking System. In: IEEE Transactions on Aerospace and Electronic Systems, vol. 15, pp. 709–718, 1979.

  • L.R. Rabiner. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. In: Proc. IEEE, vol. 77, pp. 257–286, 1989.

  • L.R. Rabiner and B.H. Juang. An Introduction to Hidden Markov Models. Acoustics, Speech and Signal Processing, 3(1):4–16, 1986.

  • L.R. Rabiner and B.H. Juang. Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, NJ, 1993.

  • D. Rubine. Specifying Gestures by Example. In: SIGGRAPH ’91 Proceedings, vol. 25, pp. 329–337, New York, 1991.

  • E. Sachs. Coming Soon to a CAD Lab Near You. Byte, 7:238–239, 1990.

  • M. Streit, A. Batliner, and T. Portele. Emotion Analysis and Emotion Handling Subdialogs, 2006. In this volume.

  • A. Waibel and J. Yang. INTERACT. http://www.is.cs.cmu.edu/js/gesture.html. Cited 15 December 2003.

  • Y. Wu and T.S. Huang. “Paper-Rock-Scissors”. http://www.ece.northwestern.edu/~yingwu/research/HCI/hci_game_prs.html. Cited 15 December 2003.

  • T.G. Zimmerman and J. Lanier. A Hand Gesture Interface Device. In: ACM SIGCHI/GI, pp. 189–192, New York, 1987.

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Shi, R.P. et al. (2006). The Gesture Interpretation Module. In: Wahlster, W. (eds) SmartKom: Foundations of Multimodal Dialogue Systems. Cognitive Technologies. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36678-4_14

  • DOI: https://doi.org/10.1007/3-540-36678-4_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23732-7

  • Online ISBN: 978-3-540-36678-2
