Journal of Signal Processing Systems, Volume 91, Issue 2, pp 131–146

Hand Sign Recognition for Thai Finger Spelling: an Application of Convolution Neural Network

  • Pisit Nakjai
  • Tatpong Katanyukul


Finger spelling is an essential part of sign language, an important means of communication among people with hearing disabilities. It is used to spell out names, places, or concepts for which no sign has yet been defined. A sign recognition system aims to enable better communication between the hearing majority and people with hearing disabilities. Our study investigates Thai Finger Spelling (TFS), its unique characteristics, a design for automatic TFS recognition, and approaches to handle a key potential issue in TFS. We design automatic TFS recognition as a two-stage pipeline: (1) locating and extracting the signing hand in the image, and (2) classifying the signing image as one of the valid TFS signs. The signing hand is located and extracted based on a skin-color scheme and contour area computed with Green's theorem. Two approaches are examined for signing-image classification: one based on a Convolutional Neural Network (CNN) and one based on Histograms of Oriented Gradients (HOG). Our experimental results show the viability of the proposed pipeline, which achieves a mean Average Precision (mAP) of 91.26. The proposed design outperforms the state of the art in automatic visual TFS recognition. In a practical sign recognition system, invalid TFS signs may appear during sign transitions or simply from unintended hand postures. We propose a formulation called the confidence ratio, which is simple to compute and generally compatible with multi-class classifiers. The confidence ratio has proven a promising mechanism for identifying invalid TFS signs. Our findings reveal challenging issues in TFS recognition, a practical design for TFS sign transcription, and the formulation and effectiveness of the confidence ratio.
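The contour-area step of the pipeline can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes candidate contours arrive as lists of (x, y) pixel coordinates (e.g., from a skin-color segmentation step) and computes each contour's area with the discrete form of Green's theorem (the shoelace formula), keeping the largest contour as the hand candidate.

```python
def shoelace_area(points):
    """Polygon area via the discrete form of Green's theorem
    (the shoelace formula): A = 1/2 * |sum(x_i*y_{i+1} - x_{i+1}*y_i)|."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the contour
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def largest_contour(contours):
    """Pick the candidate signing hand: the contour with the largest area."""
    return max(contours, key=shoelace_area)
```

In practice the contours themselves would come from a border-following routine over the skin-color mask; only the area criterion is shown here.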


Keywords: Sign language transcription · Thai sign recognition · Thai Finger Spelling · Convolution Neural Network · Open-set recognition
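The abstract does not give the confidence-ratio formula, so the sketch below is only one plausible ratio-style reading, with hypothetical names and an illustrative threshold: the top class score divided by the runner-up, where a low ratio flags an ambiguous input such as an invalid posture or a sign transition.

```python
def confidence_ratio(scores):
    """Hypothetical confidence ratio: highest class score over the runner-up.

    Values near 1 mean the classifier is torn between two signs (a likely
    invalid posture or sign transition); large values mean one valid TFS
    sign clearly dominates.
    """
    top, second = sorted(scores, reverse=True)[:2]
    return top / second

def is_valid_sign(scores, threshold=2.0):
    # threshold is an illustrative value, not taken from the paper
    return confidence_ratio(scores) >= threshold
```

Any multi-class classifier that emits per-class scores (CNN softmax outputs, HOG-based classifier margins) could feed such a ratio, which is what makes the idea broadly compatible.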



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Engineering, Khon Kaen University, Khon Kaen, Thailand
