A New Voting Algorithm for Tracking Human Grasping Gestures

  • Pablo Negri
  • Xavier Clady
  • Maurice Milgram
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3708)

Abstract

This article deals with a monocular vision system for grasping gesture acquisition. Such a system could be used for medical diagnosis, robot control, or game control. We describe a new algorithm, the Chinese Transform, for the segmentation and localization of the fingers. The approach is inspired by the Hough Transform and uses the position and orientation of the gradient at the image edge pixels. Kalman filters are used for gesture tracking. We present results obtained from image sequences recording a grasping gesture. These results are in accordance with medical experiments.
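As an illustration of the kind of voting scheme the abstract describes, the sketch below accumulates votes at the midpoints of edge-pixel pairs whose gradient orientations are roughly opposed, which is one plausible reading of a Hough-like transform built on edge positions and gradient orientations. The pairing rule, the distance bounds d_min/d_max, the angular tolerance, and the function name voting_map are illustrative assumptions, not the published definition of the Chinese Transform; Python with NumPy and OpenCV is assumed.

import numpy as np
import cv2  # OpenCV: edge detection and image gradients

def voting_map(gray, d_min=5, d_max=40, angle_tol=np.deg2rad(20)):
    """Hough-like voting sketch: edge-pixel pairs with roughly opposed
    gradient orientations and a plausible finger-width separation vote
    at their midpoint (illustrative assumption, not the paper's method)."""
    edges = cv2.Canny(gray, 50, 150)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    theta = np.arctan2(gy, gx)          # gradient orientation at each pixel

    ys, xs = np.nonzero(edges)
    pts = np.stack([xs, ys], axis=1)
    votes = np.zeros(gray.shape, dtype=np.float32)

    for i in range(len(pts)):
        p, tp = pts[i], theta[ys[i], xs[i]]
        for j in range(i + 1, len(pts)):
            q, tq = pts[j], theta[ys[j], xs[j]]
            d = np.linalg.norm(p - q)
            if not (d_min <= d <= d_max):
                continue                # separation outside the assumed finger-width range
            if abs(abs(tp - tq) - np.pi) > angle_tol:
                continue                # gradients not roughly opposed
            mx, my = (p + q) // 2       # cast a vote at the midpoint of the pair
            votes[my, mx] += 1.0
    return votes

Peaks of the resulting vote map would give finger candidates, whose positions could then be fed as measurements to a standard constant-velocity Kalman filter for tracking, as the abstract indicates.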

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Pablo Negri (1)
  • Xavier Clady (1)
  • Maurice Milgram (1)
  1. LISIF – PARC, UPMC (Paris 6), Ivry-sur-Seine, France