Abstract
Many people suffer from speech and hearing impairments that make everyday tasks difficult to perform. This paper focuses on people who have lost their hearing and cannot speak. In this study, we introduce a recognition and classification technique that detects the hand gestures used by the deaf community and translates each gesture into its equivalent English word. The vision-based system is built from two machine learning models, a convolutional neural network (CNN) and a recurrent neural network (RNN). Video sequences are not fed into the model directly; each video is first converted into frames. Each frame is passed through the CNN, which extracts the relevant features; the CNN is trained on these features and saved to a model file. The per-frame features are then fed into the RNN for further (temporal) feature extraction, and the RNN is trained on them. The resulting model converts sign language to text with an accuracy of 96%. To make the system as user-friendly as possible, we provide a feature that lets the general public record a video of a person performing a hand gesture and use it to predict the equivalent English word.
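The two-stage pipeline described above (per-frame CNN feature extraction followed by an RNN over the frame sequence) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: a fixed random projection stands in for the trained CNN feature extractor, and a small Elman-style recurrent cell stands in for the RNN classifier; the frame size, feature dimension, and number of sign classes are all hypothetical.

```python
# Hypothetical sketch of the video -> frames -> CNN -> RNN pipeline.
# A real system would use trained networks; here a fixed random projection
# stands in for the per-frame CNN, and a minimal Elman-style RNN cell
# folds the frame features into a prediction.
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(frame, w):
    """Stand-in for a CNN: flatten the frame and project it to a feature vector."""
    return np.tanh(frame.reshape(-1) @ w)

def rnn_classify(features, w_xh, w_hh, w_hy):
    """Minimal RNN: fold per-frame features into a hidden state, then classify."""
    h = np.zeros(w_hh.shape[0])
    for x in features:                     # one step per video frame
        h = np.tanh(x @ w_xh + h @ w_hh)
    return int(np.argmax(h @ w_hy))        # index of the predicted sign class

# A "video" of 12 grayscale 32x32 frames (random data for illustration).
video = rng.standard_normal((12, 32, 32))

feat_dim, hidden, n_classes = 64, 32, 10   # 10 hypothetical sign classes
w_cnn = rng.standard_normal((32 * 32, feat_dim)) * 0.1
w_xh = rng.standard_normal((feat_dim, hidden)) * 0.1
w_hh = rng.standard_normal((hidden, hidden)) * 0.1
w_hy = rng.standard_normal((hidden, n_classes)) * 0.1

# Stage 1: extract one feature vector per frame; stage 2: classify the sequence.
features = np.stack([cnn_features(f, w_cnn) for f in video])
pred = rnn_classify(features, w_xh, w_hh, w_hy)
print(features.shape, pred)
```

In the actual system each stage would be trained separately, as the abstract describes: the CNN weights are fitted on labeled frames and stored, and the RNN is then trained on the stored per-frame features.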
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Santhosh, K.S., Hoysala, S.K., Srihari, D.R., Chandra, S.S., Krishna, A.N. (2021). Gesture Recognition of Indian Sign Language. In: Bhateja, V., Satapathy, S.C., Travieso-Gonzalez, C.M., Flores-Fuentes, W. (eds) Computer Communication, Networking and IoT. Lecture Notes in Networks and Systems, vol 197. Springer, Singapore. https://doi.org/10.1007/978-981-16-0980-0_3
Print ISBN: 978-981-16-0979-4
Online ISBN: 978-981-16-0980-0
eBook Packages: Engineering (R0)