Abstract
Sign language is a means of communicating without sound, using hand gestures and facial expressions. Because it is used by a minority of society, sign language receives little public attention, and resources for learning it are limited. This paper proposes a sign language translation system called ReadMe that uses a deep learning approach, specifically Convolutional Neural Networks (CNNs), to train the recognition model. However, training ReadMe with CNNs yielded a low accuracy of 39.8% due to the small size of the training dataset, which also made testing difficult. To increase recognition accuracy in the future, the CNN in ReadMe will be trained on a larger dataset drawn from both American Sign Language (ASL) and Malaysian Sign Language (BIM). In addition, the system is intended to let users train new gestures; only through such crowdsourcing can the system expand its vocabulary without facing the knowledge engineering bottleneck.
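The abstract's recognition pipeline rests on the standard CNN building blocks: convolution, non-linearity, pooling, and a final classification layer. The paper does not publish its architecture, so the following is only a minimal NumPy sketch of a single forward pass under assumed parameters (a 28×28 grayscale gesture image, eight 3×3 filters, and 24 output classes, as in the static ASL alphabet); it illustrates the mechanics, not ReadMe's actual model.

```python
import numpy as np

def conv2d(image, kernels):
    """Valid 2-D convolution of a single-channel image with a bank of kernels."""
    k = kernels.shape[1]
    h, w = image.shape
    out = np.empty((kernels.shape[0], h - k + 1, w - k + 1))
    for f, kern in enumerate(kernels):
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[f, i, j] = np.sum(image[i:i + k, j:j + k] * kern)
    return out

def max_pool(fmaps, size=2):
    """Non-overlapping max pooling over each feature map."""
    c, h, w = fmaps.shape
    h, w = h // size * size, w // size * size  # trim to a multiple of the window
    return fmaps[:, :h, :w].reshape(
        c, h // size, size, w // size, size).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((28, 28))                     # toy stand-in for a gesture image
kernels = rng.standard_normal((8, 3, 3)) * 0.1   # 8 random (untrained) 3x3 filters
fmaps = np.maximum(conv2d(image, kernels), 0)    # convolution + ReLU
pooled = max_pool(fmaps)                         # 8 x 13 x 13
flat = pooled.reshape(-1)
W = rng.standard_normal((24, flat.size)) * 0.01  # dense layer to 24 gesture classes
probs = softmax(W @ flat)                        # class probabilities, summing to 1
```

In a trained system the filters and dense weights would be learned by backpropagation over labelled gesture images; with the small dataset reported above, such a model would plausibly overfit, which is consistent with the low 39.8% accuracy.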
Acknowledgement
This project is sponsored by Universiti Tun Hussein Onn Malaysia.
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
Cite this paper
Kasinathan, V., Mustapha, A., Hew, H.S., Hamed, V.A. (2021). Sign Language Translation System Using Convolutional Neural Networks Approach. In: Zakaria, M., Abdul Majeed, A., Hassan, M. (eds) Advances in Mechatronics, Manufacturing, and Mechanical Engineering. Lecture Notes in Mechanical Engineering. Springer, Singapore. https://doi.org/10.1007/978-981-15-7309-5_17
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-7308-8
Online ISBN: 978-981-15-7309-5
eBook Packages: Intelligent Technologies and Robotics (R0)