On the use of context and a priori knowledge in motion analysis for visual gesture recognition

  • Karin Husballe Munk
  • Erik Granum
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1371)

Abstract

The correspondence analysis part of a model-based vision system is investigated theoretically and through a synthetic image sequence showing a human hand gesture. The purpose of the study is to find and describe ways of improving the conditions for robust tracking by introducing a priori knowledge, such as structural information from the model and the temporal context of the observed motion.

The primary performance characteristics are the size of the search space for correspondence analysis and the prediction error under various conditions.
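A hedged illustration of how the two characteristics relate, not taken from the paper: if the predictor's error is bounded by $\varepsilon$ and a margin $m$ allows for model uncertainty, a circular search window per tracked feature has area

$$A_{\text{search}} \;\propto\; \pi\,(\varepsilon + m)^{2},$$

so the number of candidate matches per feature, and with it the combinatorial search space of the correspondence analysis, shrinks roughly quadratically as the prediction improves.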

Theoretical models of how the search space depends on connectivity properties and on prediction accuracy are developed. Observations from the image sequence suggest simple predictors for the context of smooth motion, and their expected influence on the search space is verified. Special consideration must be given to the handling of motion trajectory discontinuities, and alternatives are suggested.
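The specific predictors used for the smooth-motion context are not stated in this abstract. The following minimal sketch, with all function names hypothetical, shows one common choice: constant-velocity extrapolation combined with a fixed-radius search area that gates the candidates entering the correspondence analysis.

    import numpy as np

    def predict_constant_velocity(p_prev, p_curr):
        # First-order prediction for smooth motion: extrapolate the last
        # inter-frame displacement one frame ahead.
        p_prev = np.asarray(p_prev, dtype=float)
        p_curr = np.asarray(p_curr, dtype=float)
        return p_curr + (p_curr - p_prev)

    def candidates_in_search_area(prediction, detections, radius):
        # Keep only detections within `radius` pixels of the prediction.
        # A better prediction permits a smaller radius and therefore a
        # smaller search space for the correspondence analysis.
        detections = np.asarray(detections, dtype=float)
        dists = np.linalg.norm(detections - prediction, axis=1)
        return detections[dists <= radius]

    # Example: a fingertip seen at (100, 50) and (104, 53) in consecutive
    # frames is predicted at (108, 56); only nearby detections are kept.
    pred = predict_constant_velocity((100.0, 50.0), (104.0, 53.0))
    cands = candidates_in_search_area(pred, [(109.0, 57.0), (140.0, 80.0)], radius=5.0)

A constant-velocity predictor of this kind fails exactly at motion trajectory discontinuities, which is why such events call for the separate handling mentioned above.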

Keywords

Search Space · Prediction Error · Correspondence Analysis · Hand Gesture · Search Area

Copyright information

© Springer-Verlag 1998

Authors and Affiliations

  • Karin Husballe Munk (1)
  • Erik Granum (1)
  1. Laboratory of Image Analysis, Aalborg University, Denmark
