
A novel set of features for continuous hand gesture recognition

  • Original Paper
  • Journal on Multimodal User Interfaces

Abstract

Applications that use the human hand as a natural human–computer interface motivate research on continuous hand gesture recognition. Gesture recognition depends on gesture segmentation to locate the starting and end points of meaningful gestures while ignoring unintentional movements. Gesture segmentation, however, remains a formidable challenge because of unconstrained spatiotemporal variation in gestures and the coarticulation and movement epenthesis of successive gestures. Furthermore, errors in hand image segmentation cause the estimated hand motion trajectory to deviate from the actual one. This research addresses these problems. Our approach uses gesture spotting to distinguish meaningful gestures from unintentional movements. To avoid the effects of variation in a gesture’s motion chain code (MCC), we propose a novel set of features: (a) the orientation and (b) the length of an ellipse fitted by least squares to the motion-trajectory points, and (c) the position of the hand. The features are designed to support classification using conditional random fields. To evaluate the system, 10 participants signed 10 gestures several times each, providing 75 instances per gesture; 50 instances of each gesture were used for training and 25 for testing. For isolated gestures, the recognition rate using the MCC as a feature vector was only 69.6% but rose to 96.0% using the proposed features, an improvement of 26.4 percentage points. For continuous gestures, the recognition rate for the proposed features was 88.9%. These results show the efficacy of the proposed method.
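A minimal sketch of the proposed feature extraction is given below; it is not the authors' implementation. Given the hand-centroid trajectory of a gesture segment, it fits an ellipse by least squares and returns the ellipse orientation, its major-axis length, and the normalized hand positions, i.e., features (a)–(c) above. OpenCV's cv2.fitEllipse stands in for the direct least-squares ellipse fit used in the paper; the function name, frame dimensions, and normalization step are illustrative assumptions. In practice, the per-frame feature sequence would then be passed to a conditional random field for gesture spotting and classification.

```python
# Minimal sketch (not the authors' code) of the proposed feature set:
# (a) orientation and (b) major-axis length of an ellipse least-squares
# fitted to the motion-trajectory points, plus (c) the hand position.
import numpy as np
import cv2


def gesture_features(trajectory, frame_w=640, frame_h=480):
    """trajectory: (N, 2) array of hand-centroid positions, N >= 5."""
    xy = np.asarray(trajectory, dtype=np.float32)

    # Least-squares ellipse fit to the trajectory points
    # (cv2.fitEllipse used here in place of a direct least-squares fit).
    (cx, cy), (d1, d2), angle_deg = cv2.fitEllipse(xy.reshape(-1, 1, 2))

    orientation = np.deg2rad(angle_deg)    # (a) ellipse orientation
    length = max(d1, d2)                   # (b) major-axis length
    position = xy / (frame_w, frame_h)     # (c) normalized hand positions

    return orientation, length, position


if __name__ == "__main__":
    # Synthetic example: a roughly diagonal hand sweep with a small wobble.
    t = np.linspace(0.0, 1.0, 30)
    traj = np.stack([100 + 300 * t,
                     120 + 200 * t + 5 * np.sin(8 * t)], axis=1)
    theta, axis_len, pos = gesture_features(traj)
    print(f"orientation = {theta:.2f} rad, major axis = {axis_len:.1f} px")
```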

Author information

Corresponding author

Correspondence to M. K. Bhuyan.


About this article

Cite this article

Bhuyan, M.K., Ajay Kumar, D., MacDorman, K.F. et al. A novel set of features for continuous hand gesture recognition. J Multimodal User Interfaces 8, 333–343 (2014). https://doi.org/10.1007/s12193-014-0165-0
