Local Binary Pattern based features for sign language recognition
In this paper we focus on appearance features, particularly Local Binary Patterns, describing the manual component of sign language. We compare the performance of these features with that of geometric moments describing the trajectory and shape of the hands. Since the non-manual component is also very important for sign recognition, we localize facial landmarks via an Active Shape Model combined with a landmark detector that increases the robustness of model fitting. We test the recognition performance of the individual features and their combinations on a database consisting of 11 signers and 23 signs with several repetitions. Local Binary Patterns outperform the geometric moments. When the features are combined, we achieve recognition rates of up to 99.75% for signer-dependent tests and 57.54% for signer-independent tests.
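As background for the appearance features discussed above, the basic 3×3 Local Binary Pattern operator can be sketched as follows. This is a minimal illustrative implementation of the standard LBP idea (each pixel is encoded by thresholding its eight neighbours against the centre), not the authors' exact feature-extraction pipeline; the function names and the clockwise bit ordering are assumptions for illustration.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 Local Binary Pattern operator (illustrative sketch).

    Each interior pixel is mapped to an 8-bit code: a neighbour that is
    >= the centre contributes a 1-bit, scanned clockwise from top-left.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, the usual region descriptor."""
    codes = lbp_3x3(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(codes.size, 1)
```

In practice the histogram of codes over an image region (here `lbp_histogram`) serves as the feature vector that a classifier would consume.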
Keywords: local binary pattern, sign language, sign language recognition