
Extricating Manual and Non-Manual Features for Subunit Level Medical Sign Modelling in Automatic Sign Language Classification and Recognition

  • Image & Signal Processing
  • Published:
Journal of Medical Systems Aims and scope Submit manuscript

Abstract

Subunit segmentation and modelling in medical sign language is an important problem in linguistic-oriented and vision-based Sign Language Recognition (SLR). Many previous efforts have derived functional subunits from linguistic syllables, but syllable-based subunit extraction is not feasible with real-world computer vision techniques. Moreover, existing recognition systems are designed to detect only signer-dependent actions under restricted laboratory conditions. This paper addresses these two issues in visual sign language recognition: (1) subunit extraction and (2) signer-independent recognition. Subunit extraction involves the sequential and parallel breakdown of sign gestures without any prior knowledge of syllables or the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction, combining the features of manual and non-manual parameters to yield better classification and recognition of signs. Signer independence is achieved by using a single web camera across different signers' behaviour patterns and by cross-signer validation. Experimental results show that the proposed signer-independent, subunit-level modelling for sign language classification and recognition improves upon other existing works.
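
The parallel-HMM idea underlying the BPaHMM is that the manual channel (hand features) and the non-manual channel (typically facial expression and head movement) are modelled as separate streams whose evidence is combined at classification time. The following is a minimal illustrative sketch of that channel-fusion scheme in Python, assuming the hmmlearn library, synthetic feature vectors, and hypothetical sign classes; it trains one Gaussian HMM per sign per channel and fuses log-likelihoods additively. It is not the authors' BPaHMM implementation, whose Bayesian subunit segmentation is detailed in the full paper.

```python
# Minimal parallel-HMM sketch: one Gaussian HMM per feature channel
# (manual / non-manual) per sign class; per-channel log-likelihoods
# are summed, which assumes the channels are conditionally independent.
# Feature dimensions, class names, and data here are invented.
import numpy as np
from hmmlearn import hmm

CHANNELS = {"manual": 6, "non_manual": 4}   # assumed feature dimensions
SIGNS = ["sign_A", "sign_B"]                # hypothetical sign classes

rng = np.random.default_rng(0)

def synthetic_sequence(dim, offset):
    """Stand-in for real tracked features: 40 frames x dim values."""
    return rng.normal(loc=offset, size=(40, dim))

# Train one HMM per (sign, channel) pair on that channel's stream.
models = {}
for i, sign in enumerate(SIGNS):
    for channel, dim in CHANNELS.items():
        train = np.vstack([synthetic_sequence(dim, i) for _ in range(5)])
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                            n_iter=20, random_state=0)
        m.fit(train, lengths=[40] * 5)
        models[(sign, channel)] = m

# Classify a test sign: sum log-likelihoods across channels, take argmax.
test = {ch: synthetic_sequence(dim, 1) for ch, dim in CHANNELS.items()}
scores = {
    sign: sum(models[(sign, ch)].score(test[ch]) for ch in CHANNELS)
    for sign in SIGNS
}
print(max(scores, key=scores.get))  # expected: "sign_B"
```

A real system would replace the synthetic sequences with per-frame manual and non-manual features extracted from video; the additive fusion is the simplest choice and reflects an assumed conditional independence between the two channels.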



Acknowledgements

We thank Dr. Vassillis Athitsos, Associate Professor of Computer Science and Engineering, University of Texas at Arlington, for sharing links to the sign language videos and for providing a C++ library file to read the video format.

Author information


Corresponding author

Correspondence to Elakkiya R.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest for this research work.

Additional information

This article is part of the Topical Collection on Image & Signal Processing


About this article


Cite this article

Elakkiya, R., Selvamani, K. Extricating Manual and Non-Manual Features for Subunit Level Medical Sign Modelling in Automatic Sign Language Classification and Recognition. J Med Syst 41, 175 (2017). https://doi.org/10.1007/s10916-017-0819-z

