Visual System of Sign Alphabet Learning for Poorly-Hearing Children

  • Margarita Favorskaya
Part of the Studies in Computational Intelligence book series (SCI, volume 473)

Abstract

Training visual systems play a significant role for people with limited physical abilities. In this paper, the task of sign alphabet learning by poorly-hearing children is addressed using advanced recognition methods. Such an intelligent system is an additional instrument for the cultural development of children who cannot learn the alphabet in the usual way. The novelty of the method lies in the proposed technique of feature extraction and the construction of vector models of outer contours for the subsequent identification of gestures associated with letters. The high variability of gestures in 3D space causes ambiguous segmentation, which makes visual normalization necessary. The corresponding software has two modes: a learning mode (building of etalon models) and a testing mode (recognition of a current gesture). The visual system for Russian sign alphabet learning is a real-time application and does not require substantial computing resources.
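
The two-mode workflow described above lends itself to a compact sketch. The following Python fragment is a minimal illustration, not the author's implementation: the names (contour_to_vector_model, SignAlphabetRecognizer), the arc-length resampling, and the nearest-neighbour matching against stored etalon models are assumptions introduced here only to show how a learning mode (storing etalon vector models of outer contours per letter) and a testing mode (classifying a current gesture) might fit together.

```python
import numpy as np

def contour_to_vector_model(contour, n_points=64):
    """Resample an outer hand contour to a fixed-length, translation- and
    scale-normalized vector model (an illustrative stand-in for the paper's
    feature-extraction step)."""
    contour = np.asarray(contour, dtype=float)
    # Cumulative arc length along the contour.
    seg = np.linalg.norm(np.diff(contour, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(seg)))
    # Resample at n_points equally spaced arc-length positions.
    targets = np.linspace(0.0, arc[-1], n_points)
    resampled = np.column_stack([
        np.interp(targets, arc, contour[:, 0]),
        np.interp(targets, arc, contour[:, 1]),
    ])
    # Normalize translation and scale to reduce pose variability.
    resampled -= resampled.mean(axis=0)
    norm = np.linalg.norm(resampled)
    return (resampled / norm).ravel() if norm > 0 else resampled.ravel()

class SignAlphabetRecognizer:
    """Two-mode workflow: a learning mode that stores etalon (reference)
    models per letter, and a testing mode that matches a current gesture."""

    def __init__(self):
        self.etalons = {}  # letter -> list of etalon vector models

    def learn(self, letter, contour):
        # Learning mode: build and store an etalon model for this letter.
        self.etalons.setdefault(letter, []).append(contour_to_vector_model(contour))

    def recognize(self, contour):
        # Testing mode: return the letter whose etalon is closest to the query.
        query = contour_to_vector_model(contour)
        best_letter, best_dist = None, float("inf")
        for letter, models in self.etalons.items():
            for model in models:
                dist = np.linalg.norm(query - model)
                if dist < best_dist:
                    best_letter, best_dist = letter, dist
        return best_letter, best_dist
```

In the learning mode one would call learn(letter, contour) for each training sample; in the testing mode recognize(contour) returns the nearest-matching letter and its distance to the stored etalon.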

Keywords

Sign alphabet, gesture recognition, feature extraction, spatiotemporal segmentation, skin classifiers

Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  1. Siberian State Aerospace University, Krasnoyarsk, Russia