
Context-Based Gesture Recognition

  • José Antonio Montero
  • L. Enrique Sucar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4225)

Abstract

Most gesture recognition systems are based only on hand motion information and are designed mainly for communicative gestures. However, many activities of everyday life involve interaction with surrounding objects. We propose a new approach for the recognition of manipulative gestures that interact with objects in the environment. The method uses non-intrusive, vision-based techniques. The hands of a person are detected and tracked using an adaptive skin-color segmentation process, so the system can operate under a wide range of lighting conditions. Gesture recognition is based on hidden Markov models that combine motion and contextual information, where context refers to the position of the hand relative to other objects. The approach was implemented and evaluated in two different domains, video conferencing and assistance, obtaining gesture recognition rates from 94% to 99.47%. The system is very efficient, making it suitable for real-time applications.
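To give a rough sense of how motion and contextual information can be fused into discrete observations for a hidden Markov model, the sketch below quantizes the hand's motion direction, codes the hand's position relative to a known object region, and scores the resulting symbol sequence with a forward-algorithm likelihood. This is an illustrative reconstruction, not the authors' implementation; the codebook sizes, the 40-pixel "near" margin, the object bounding box, and all other names and parameters are assumptions.

    # Minimal sketch (assumed, not the paper's code): discrete HMM observations
    # that combine quantized hand motion with a simple hand-object context code.
    import numpy as np

    N_DIRECTIONS = 8          # quantized motion directions (illustrative codebook size)
    N_CONTEXT = 3             # 0 = far from object, 1 = near, 2 = over the object
    N_SYMBOLS = N_DIRECTIONS * N_CONTEXT

    def observation_symbol(prev_xy, curr_xy, object_box):
        """Fuse motion direction and hand-object relation into one discrete symbol."""
        dx, dy = curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]
        angle = np.arctan2(dy, dx) % (2 * np.pi)
        direction = int(angle / (2 * np.pi) * N_DIRECTIONS) % N_DIRECTIONS

        x, y = curr_xy
        x0, y0, x1, y1 = object_box
        if x0 <= x <= x1 and y0 <= y <= y1:
            context = 2                      # hand over the object
        elif x0 - 40 <= x <= x1 + 40 and y0 - 40 <= y <= y1 + 40:
            context = 1                      # hand near the object (40 px margin, arbitrary)
        else:
            context = 0                      # hand far from the object
        return direction * N_CONTEXT + context

    def log_likelihood(symbols, pi, A, B):
        """Forward algorithm in the log domain: log P(symbols | pi, A, B)."""
        alpha = np.log(pi) + np.log(B[:, symbols[0]])
        for s in symbols[1:]:
            alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, s])
        return np.logaddexp.reduce(alpha)

    # Toy 3-state model; in practice one HMM would be trained per gesture class
    # and the class with the highest likelihood would be reported.
    rng = np.random.default_rng(0)
    pi = np.full(3, 1 / 3)                    # uniform initial state distribution
    A = rng.dirichlet(np.ones(3), size=3)     # random transition matrix (rows sum to 1)
    B = rng.dirichlet(np.ones(N_SYMBOLS), size=3)  # random emission matrix

    track = [(10, 10), (20, 12), (35, 18), (60, 30), (80, 42)]  # hypothetical hand centroids
    obj = (70, 30, 120, 80)                                      # hypothetical object bounding box
    obs = [observation_symbol(a, b, obj) for a, b in zip(track, track[1:])]
    print("observation symbols:", obs)
    print("log-likelihood under toy model:", log_likelihood(obs, pi, A, B))

In a full system the transition and emission matrices would be learned per gesture with Baum-Welch rather than drawn at random as above.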


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • José Antonio Montero (1, 3)
  • L. Enrique Sucar (2)
  1. Instituto Tecnológico de Acapulco, Acapulco, Guerrero, México
  2. Instituto Nacional de Astrofísica, Óptica y Electrónica, Tonantzintla, Puebla, México
  3. ITESM Campus Cuernavaca, Lomas Cuernavaca, Morelos, México
