A Robust Method Based on Static Hand Gesture Recognition for Human–Computer Interaction Under Complex Background

Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 107)

Abstract

Appearance-based approaches usually require accurate segmentation, which is difficult to achieve under a complex background and thus limits their robustness in practice. In this chapter, we design a new method for static hand gesture recognition under complex backgrounds for human–computer interaction (HCI) that requires neither perfect segmentation nor hand tracking. Hu invariant moment features are extracted from a binary image obtained by simple segmentation and serve as the input to a classifier built beforehand with the support vector machine (SVM) algorithm. In addition, a Euclidean distance measure is combined with the SVM model to reject non-hand gestures. Tests on the test dataset show that the proposed method achieves a recognition rate close to 100%, and experiments with a simple real-time HCI system demonstrate its effectiveness, speed, and robustness under cluttered backgrounds.
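
As a rough illustration of the pipeline described in the abstract (simple segmentation, Hu invariant moment features, an SVM classifier, and a Euclidean-distance check to reject non-hand inputs), the following is a minimal sketch in Python using OpenCV and scikit-learn. The skin-color segmentation step, the log scaling of the moments, and the distance threshold are assumptions made for illustration, not the authors' exact implementation.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def hu_features(binary_img):
    """Extract log-scaled Hu invariant moments from a binary hand mask."""
    hu = cv2.HuMoments(cv2.moments(binary_img)).flatten()
    # Log scaling keeps the seven moments in a comparable numeric range (illustrative choice).
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def segment_hand(bgr_img):
    """Crude skin-color segmentation in YCrCb space (assumed, not the paper's exact method)."""
    ycrcb = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# Training (sketch): X_train holds Hu features of labelled gesture images, y_train the labels.
# clf = SVC(kernel='rbf', gamma='scale').fit(X_train, y_train)
# class_means = {c: X_train[y_train == c].mean(axis=0) for c in set(y_train)}

def classify(bgr_img, clf, class_means, dist_threshold=2.0):
    """Predict a gesture; reject inputs whose features lie far from the predicted class mean."""
    feats = hu_features(segment_hand(bgr_img))
    label = clf.predict(feats.reshape(1, -1))[0]
    # Euclidean-distance check: if the feature vector is far from the mean vector of the
    # predicted class, treat the frame as a non-hand gesture (threshold is illustrative).
    if np.linalg.norm(feats - class_means[label]) > dist_threshold:
        return None
    return label
```

In this sketch the distance check acts as an open-set filter on top of the closed-set SVM decision, which is one plausible reading of how the Euclidean distance is combined with the SVM model in the abstract.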

Keywords

Hand gesture recognition · Support vector machine (SVM) · Hu invariant moments · Human–computer interaction (HCI)

Notes

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grant 60978006.

Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  1. Department of Physics, University of Chemical and Technology, Beijing, China