A Robust Static Sign Language Recognition System Based on Hand Key Points Estimation
Sign language recognition is not only an essential communication tool between hearing people and the deaf community, but also a promising technique for human-computer interaction (HCI). This paper proposes a robust method based on an RGB sensor and hand key point estimation. Compared with depth sensors and wearable devices, an RGB sensor is smaller and simpler to operate. With the hand key point detection technique, the extracted data can overcome the influence of unfavourable factors such as complex backgrounds, occlusion, and varying viewing angles. During the training step, five machine learning algorithms are used to classify 20 letters of the alphabet; the highest classification accuracies are achieved by the SVM and KNN algorithms, at 95.54% and 97.3% respectively. Finally, a real-time sign language recognition system built on the trained SVM model reaches a recognition accuracy of 97%, which confirms that our method can effectively eliminate these unfavourable factors.
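The classification stage described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes each hand is represented by 21 detected key points, flattened into a 42-dimensional (x, y) feature vector, and classifies it with a simple k-nearest-neighbour rule on synthetic data. The actual system uses trained SVM and KNN models on real key point detections.

```python
import math
import random

def knn_predict(train_X, train_y, x, k=3):
    """Return the majority label among the k nearest training vectors
    (Euclidean distance in the flattened key point space)."""
    dists = sorted(
        (math.dist(x, t), label) for t, label in zip(train_X, train_y)
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

random.seed(0)
# Synthetic stand-ins for two letters: 42-dimensional key point vectors
# clustered around 0.2 for letter "A" and 0.8 for letter "B".
train_X = [[0.2 + random.gauss(0, 0.02) for _ in range(42)] for _ in range(10)]
train_y = ["A"] * 10
train_X += [[0.8 + random.gauss(0, 0.02) for _ in range(42)] for _ in range(10)]
train_y += ["B"] * 10

print(knn_predict(train_X, train_y, [0.21] * 42))  # → A
print(knn_predict(train_X, train_y, [0.79] * 42))  # → B
```

Because the key point coordinates abstract away appearance, the same classifier shape works regardless of background or lighting, which is the robustness argument the abstract makes.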
Keywords: Sign language · Key points estimation · SVM
This work is supported in part by the National Natural Science Foundation of China under Grant 61671266, 61327902, in part by the Research Project of Tsinghua University under Grant 20161080084, and in part by National High-tech Research and Development Plan under Grant 2015AA042306, 2015AA016304.