
An AI-based Approach for Improved Sign Language Recognition using Multiple Videos

  • Part of the topical collection: Futuristic Trends and Innovations in Multimedia Systems Using Big Data, IoT and Cloud Technologies (FTIMS)

Multimedia Tools and Applications

Abstract

People with hearing and speech disabilities face significant hurdles in communication. Sign language can help mitigate these hurdles, but most people without such disabilities, including relatives, friends, and care providers, do not understand it. Automated tools can allow people with disabilities to communicate with non-signers ubiquitously and in a wide variety of situations. There are currently two main approaches to recognizing sign language gestures. The first is hardware-based, using gloves or other devices to track hand position and identify gestures. The second is software-based, in which video of the hands is recorded and gestures are classified using computer vision techniques. However, some hardware, such as a phone's internal sensors or an arm-worn device that tracks muscle activity, is less accurate, and wearing it can be cumbersome or uncomfortable. The software-based approach, in turn, depends on lighting conditions and on the contrast between the hands and the background. We propose a hybrid approach that combines low-cost sensory hardware with a smart sign-recognition algorithm, with the goal of developing a more effective gesture recognition system. The hardware-based module, a Myo armband paired with a Support Vector Machine (SVM) classifier, achieves an accuracy of only 49%. The software-based module, which uses Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) methods trained on video, achieves an accuracy of over 80% in our experiments. Our method combines the two modules and shows the potential for improvement. Our experiments use a dataset of nine gestures captured in multiple videos, each repeated five times, for a total of 45 trials for both the software-based and hardware-based modules. Beyond reporting the performance of each approach, our results show that with an improved hardware module, the accuracy of the combined approach can be increased significantly.
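
The abstract names the two modules and their classifiers but does not spell out how their outputs are combined. The sketch below, in Python with NumPy and scikit-learn, illustrates one plausible combination scheme: late fusion by a weighted average of per-class probabilities from an SVM over Myo EMG features and from a video classifier. The synthetic data, the 8-channel feature shape, the stand-in probabilities for the CNN/RNN module, and the fusion weight ALPHA are all illustrative assumptions, not the authors' implementation.

    # A minimal late-fusion sketch, not the paper's actual pipeline.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    N_CLASSES = 9                    # nine gestures, as in the paper's dataset
    REPS = 5                         # five repetitions per gesture
    N_TRIALS = N_CLASSES * REPS      # 45 trials in total
    y = np.repeat(np.arange(N_CLASSES), REPS)

    # Hardware module: SVM over synthetic 8-channel EMG summary features
    # (the Myo armband has 8 EMG channels; these feature values are fabricated).
    X_emg = rng.normal(size=(N_TRIALS, 8)) + 0.5 * y[:, None]
    svm = SVC(probability=True).fit(X_emg, y)
    p_hw = svm.predict_proba(X_emg)  # per-class probabilities, shape (45, 9)

    # Software module: stand-in for CNN+RNN per-class probabilities. In the
    # paper these would come from networks trained on video frames; here we
    # fabricate a probability matrix that is correct most of the time.
    logits = rng.normal(size=(N_TRIALS, N_CLASSES))
    logits[np.arange(N_TRIALS), y] += 2.0
    p_sw = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    # Fusion: weighted average of the two distributions. ALPHA is an assumed
    # weight on the weaker hardware module, not a value from the paper.
    ALPHA = 0.3
    p_combined = ALPHA * p_hw + (1.0 - ALPHA) * p_sw
    pred = p_combined.argmax(axis=1)
    print(f"combined accuracy: {(pred == y).mean():.2f}")

A weighted average lets the stronger video module dominate while the EMG module contributes when the video model is uncertain; the same interface would accommodate other fusion schemes, such as training a small meta-classifier on the concatenated probability vectors.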


Availability of Data and Material

Not applicable.

Code availability

Not applicable.


Funding

This project was funded by the National Science Foundation under Award Number 1757641.

Author information


Corresponding author

Correspondence to Ishfaq Ahmad.

Ethics declarations

Conflicts of Interest

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Dignan, C., Perez, E., Ahmad, I. et al. An AI-based Approach for Improved Sign Language Recognition using Multiple Videos. Multimed Tools Appl 81, 34525–34546 (2022). https://doi.org/10.1007/s11042-021-11830-y

