Summary
Humans often make conscious and unconscious gestures that reflect their mind, their thoughts, and the way these are formulated. These inherently complex processes can in general not be substituted by a corresponding verbal utterance with the same semantics (McNeill, 1992). Gesture, a kind of body language, carries important information about the intention and state of the gesture producer. It is therefore an important communication channel in human-computer interaction.
In the following, we first describe the state of the art in gesture recognition. The next section presents the gesture interpretation module. After that, we report experiments and results for the recognition of user states. The last section summarizes our findings.
References
S. Akyol, L. Libuda, and K.F. Kraiss. Multimodale Benutzung adaptiver Kfz-Bordsysteme. In: T. Jürgensohn and K.P. Timpe (eds.), Kraftfahrzeugführung, pp. 137–154, Berlin Heidelberg New York, 2001. Springer.
V. Attina, D. Beautemps, M.A. Cathiard, and M. Odisio. Toward an Audiovisual Synthesizer for Cued Speech: Rules for CV French Syllables. In: J.L. Schwartz, F. Berthommier, M.A. Cathiard, and D. Sodoyer (eds.), Proc. AVSP 2003 Auditory-Visual Speech Processing, pp. 227–232, St. Jorioz, France, September 2003. ISCA Tutorial and Research Workshop.
R. Azuma. Tracking Requirements for Augmented Reality. Communications of the ACM, 36(7):50–51, July 1993.
R. Bolt. “Put-That-There”: Voice and Gesture. In: Computer Graphics, pp. 262–270, 1980.
W. Buxton, R. Sniderman, W. Reeves, S. Patel, and R. Baecker. An Introduction to the SSSP Digital Synthesizer. In: C. Roads and J. Strawn (eds.), Foundations of Computer Music, pp. 387–392, Cambridge, MA, 1985. MIT Press.
R.O. Cornett. Cued Speech. American Annals of the Deaf, 112:3–13, 1967.
H. Eglowstein. Reach Out and Touch Your Data. Byte, 7:283–290, 1990.
S. Fels and G.E. Hinton. Glove-Talk: A Neural Network Interface Between a Data-Glove and a Speech Synthesizer. IEEE Transactions on Neural Networks, 4:2–8, 1993.
S. Kettebekov and R. Sharma. Multimodal Interfaces. http://www.cse.psu.edu/~rsharma/imap1.html. Cited 15 December 2003.
G. Kurtenbach and T. Baudel. Hypermarks: Issuing Commands by Drawing Marks in Hypercard. In: ACM SIGCHI, p. 64, Vancouver, Canada, 1992.
T. Lütticke. Gestenerkennung zur Anweisung eines mobilen Roboters. Master’s thesis, Universität Karlsruhe (TH), 2000.
C. Maggioni. Gesture Computer — New Ways of Operating a Computer. In: Proc. Int. Conf. on Automatic Face and Gesture Recognition, pp. 166–171, 1995.
A. Marcus and J. Churchill. Sensing Human Hand Motions for Controlling Dexterous Robots. In: The 2nd Annual Space Operations Automation and Robotics Workshop, Dayton, OH, July 1988.
D. McNeill. Hand and Mind: What Gestures Reveal About Thought. University of Chicago Press, Chicago, IL, 1992.
S. Oviatt. Ten Myths of Multimodal Interaction. Communications of the ACM, 42(11):74–81, 1999.
F. Quek. FingerMouse: A Freehand Pointing Interface. In: Int. Workshop on Automatic Face- and Gesture-Recognition, pp. 372–377, Zurich, Switzerland, June 1995.
F.H. Raab, E.B. Blood, T.O. Steiner, and H.R. Jones. Magnetic Position and Orientation Tracking System. IEEE Transactions on Aerospace and Electronic Systems, 15:709–718, 1979.
L.R. Rabiner. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. In: Proc. IEEE, vol. 77, pp. 257–286, 1989.
L.R. Rabiner and B.H. Juang. An Introduction to Hidden Markov Models. Acoustics, Speech and Signal Processing, 3(1):4–16, 1986.
L.R. Rabiner and B.H. Juang. Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, NJ, 1993.
D. Rubine. Specifying Gestures by Example. In: SIGGRAPH ’91 Proceedings, vol. 25, pp. 329–337, New York, 1991.
E. Sachs. Coming Soon to a CAD Lab Near You. Byte, 7:238–239, 1990.
M. Streit, A. Batliner, and T. Portele. Emotion Analysis and Emotion Handling Subdialogs, 2006. In this volume.
A. Waibel and J. Yang. INTERACT. http://www.is.cs.cmu.edu/js/gesture.html. Cited 15 December 2003.
Y. Wu and T.S. Huang. “Paper-Rock-Scissors”. http://www.ece.northwestern.edu/~yingwu/research/HCI/hci_game_prs.html. Cited 15 December 2003.
T.G. Zimmerman and J. Lanier. A Hand Gesture Interface Device. In: ACM SIGCHI/GI, pp. 189–192, New York, 1987.
© 2006 Springer-Verlag Berlin Heidelberg
Shi, R.P. et al. (2006). The Gesture Interpretation Module. In: Wahlster, W. (ed.), SmartKom: Foundations of Multimodal Dialogue Systems. Cognitive Technologies. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36678-4_14
Print ISBN: 978-3-540-23732-7
Online ISBN: 978-3-540-36678-2