A Comprehensive Analysis on Technological Approaches in Sign Language Recognition

  • Conference paper
Emergent Converging Technologies and Biomedical Systems

Abstract

Communication is one of the essential aspects of the twenty-first century; we cannot imagine the world without people communicating with one another. For genuine communication to take place, two people must share a common language that both of them understand. In this context, an interpreter is needed between a hearing- or speech-impaired person and a hearing person. However, interpreters who understand sign language are not readily available at all times, and when they are available, their services are often expensive. Economical interactive communication tools are therefore being developed to support the social interaction of hearing- and speech-impaired individuals with the rest of society. As the importance of sign language interpretation grows, several successful sign language recognition applications now provide real-time interpretation based on advanced artificial intelligence and image processing approaches. In this paper, we give a complete overview of deep-learning-based methodologies, discuss technological approaches to sign language interpretation, and achieve 95.7% accuracy with a two-layer CNN classifier on the sign gestures of the 26 English alphabet letters using readily available resources. The paper also reviews current analyses and advances in this area and identifies opportunities and challenges for future research.
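
The abstract reports 95.7% accuracy with a two-convolutional-layer CNN classifier over the 26 alphabet gestures. As a rough illustration of that kind of architecture, the sketch below builds a two-layer CNN in Keras; the input size (64x64 grayscale hand crops), filter counts, dense-layer width, and training settings are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a two-convolutional-layer CNN classifier for 26 static
# alphabet gestures. Input shape and all hyperparameters are assumptions
# chosen for illustration, not the paper's exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26           # A-Z fingerspelling gestures
INPUT_SHAPE = (64, 64, 1)  # assumed grayscale crop of the segmented hand region

def build_two_layer_cnn():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        # First convolutional block: low-level edge/contour features
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        # Second convolutional block: higher-level hand-shape features
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        # Classifier head over the 26 letter classes
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_two_layer_cnn().summary()
```

Training such a model on labeled hand-gesture crops (e.g., via `model.fit(train_images, train_labels, epochs=...)`) follows the standard Keras workflow; the paper's reported accuracy would additionally depend on its dataset, preprocessing, and augmentation choices.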

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Sharma, K., Kumar, B., Sehgal, D., Kaushik, A. (2022). A Comprehensive Analysis on Technological Approaches in Sign Language Recognition. In: Marriwala, N., Tripathi, C.C., Jain, S., Mathapathi, S. (eds) Emergent Converging Technologies and Biomedical Systems. Lecture Notes in Electrical Engineering, vol 841. Springer, Singapore. https://doi.org/10.1007/978-981-16-8774-7_29

  • DOI: https://doi.org/10.1007/978-981-16-8774-7_29

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-8773-0

  • Online ISBN: 978-981-16-8774-7

  • eBook Packages: Engineering, Engineering (R0)
