
Turkish Sign Language Recognition Using a Fine-Tuned Pretrained Model

  • Conference paper
Advanced Engineering, Technology and Applications (ICAETA 2023)

Abstract

Many members of society rely on sign language as an alternative means of communication. Each sign is distinguished by hand shape, motion profile, and the relative positioning of the hand, face, and other body parts, which makes visual sign language recognition a particularly challenging problem in computer vision. In recent years, researchers have proposed many models, and deep learning approaches have improved on them considerably. In this study, we fine-tune a pretrained convolutional neural network (CNN) for visual sign language recognition, training it on a dataset of 2062 images. Machine-learning-based recognition systems often struggle to reach the desired accuracy because annotated sign language datasets are scarce; the goal of this study is therefore to improve model performance by transferring knowledge from a pretrained network. The dataset contains images of the ten digits from 0 to 9, and in testing the fine-tuned VGG16 model detected the signs with 98% accuracy.
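
The transfer-learning setup summarized above, where a VGG16 base pretrained on ImageNet is frozen and a small classification head is trained on the ten digit classes, can be sketched as below. This is a minimal illustration, not the authors' implementation: the directory name sign_digits/, the 224x224 input size, the head architecture, and the training hyperparameters are assumptions for the sake of the example.

# Minimal sketch: freeze a pretrained VGG16 base, train a new 10-class head.
# Directory name, input size, head layers, and hyperparameters are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)   # assumed input resolution
NUM_CLASSES = 10        # digits 0-9

# Pretrained convolutional base without the original ImageNet classifier.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the base; only the new head is trained

model = models.Sequential([
    layers.Lambda(tf.keras.applications.vgg16.preprocess_input,
                  input_shape=IMG_SIZE + (3,)),  # VGG16-style preprocessing
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Assumed layout: sign_digits/<digit>/<image>.png, one folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "sign_digits/", image_size=IMG_SIZE, batch_size=32,
    label_mode="categorical", validation_split=0.2,
    subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "sign_digits/", image_size=IMG_SIZE, batch_size=32,
    label_mode="categorical", validation_split=0.2,
    subset="validation", seed=42)

model.fit(train_ds, validation_data=val_ds, epochs=10)

Freezing the base and training only the new head is the usual first step when the annotated dataset is small, as the abstract notes; unfreezing the top convolutional block afterwards and continuing with a lower learning rate is a common refinement.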


Notes

  1. Available at http://www.who.int/en/data-and-evidence.

  2. Available at https://data.tuik.gov.tr.


Author information

Corresponding author

Correspondence to Şeyma Derdiyok.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ozgul, G., Derdiyok, Ş., Akbulut, F.P. (2024). Turkish Sign Language Recognition Using a Fine-Tuned Pretrained Model. In: Ortis, A., Hameed, A.A., Jamil, A. (eds) Advanced Engineering, Technology and Applications. ICAETA 2023. Communications in Computer and Information Science, vol 1983. Springer, Cham. https://doi.org/10.1007/978-3-031-50920-9_6


  • DOI: https://doi.org/10.1007/978-3-031-50920-9_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50919-3

  • Online ISBN: 978-3-031-50920-9

  • eBook Packages: Computer Science (R0)
