Providing Feedback in Ukrainian Sign Language Tutoring Software

  • M. V. Davydov
  • I. V. Nikolski
  • V. V. Pasichnyk
  • O. V. Hodych
  • Y. M. Shcherbyna

Abstract

This chapter focuses on the video recognition methods implemented in the Ukrainian Sign Language Tutoring Software. Existing sign language training software can easily verify whether users understand signs and sentences. However, there is currently no good solution to verifying how a person reproduces signs, owing to the large variety of training conditions and individual human differences. A new approach to user interaction with sign tutoring software is proposed, together with new algorithms implementing it. Body posture recognition methods allow interaction with users both while they learn signs and during the verification process. The software provides feedback to the user by capturing the person's gestures with a web camera, improving the success of training. A single web camera is used, without depth sensors. Reconstructing human posture from a web camera in real time involves background modelling, image segmentation, and machine learning methods. A success rate of 91.7% has been achieved for sign recognition on a test set of 85 signs.
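The segmentation stage of the pipeline described above can be illustrated with a self-organising map (SOM) clustering pixel colours, the family of methods the authors build on. The following is a minimal, illustrative sketch in pure NumPy, not the chapter's actual implementation; the function names, network size, and training schedule are all assumptions made for this example.

```python
import numpy as np

def train_color_som(pixels, n_units=4, epochs=10, lr0=0.5, seed=0):
    """Train a 1-D self-organising map on RGB pixel samples.

    pixels: (N, 3) array of colours in [0, 1].
    Returns the learned codebook of shape (n_units, 3).
    """
    rng = np.random.default_rng(seed)
    codebook = rng.uniform(0.0, 1.0, size=(n_units, 3))
    n = len(pixels)
    for epoch in range(epochs):
        # Learning rate and neighbourhood width both decay over time.
        lr = lr0 * (1 - epoch / epochs)
        sigma = max(n_units / 2 * (1 - epoch / epochs), 0.5)
        for x in pixels[rng.permutation(n)]:
            # Best-matching unit: codebook vector nearest to the sample.
            bmu = np.argmin(np.linalg.norm(codebook - x, axis=1))
            # Gaussian neighbourhood over the 1-D map topology.
            grid_dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            # Pull the BMU and its neighbours towards the sample.
            codebook += lr * h[:, None] * (x - codebook)
    return codebook

def segment(image, codebook):
    """Label each pixel with the index of its nearest codebook colour."""
    flat = image.reshape(-1, 3)
    d = np.linalg.norm(flat[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(image.shape[:2])

# Toy frame: left half a skin-like tone, right half a dark background.
img = np.zeros((8, 8, 3))
img[:, :4] = [0.9, 0.7, 0.6]
img[:, 4:] = [0.1, 0.2, 0.1]
labels = segment(img, train_color_som(img.reshape(-1, 3)))
```

In a real tracking setting the codebook would be trained on sampled frames and the per-pixel labels post-processed (e.g. connected components) to locate hands and face; the sketch only shows the core competitive-learning step.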

Keywords

Sign language, image segmentation, interactive tutoring, tutoring software, neural networks


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • M. V. Davydov (1)
  • I. V. Nikolski (1)
  • V. V. Pasichnyk (1)
  • O. V. Hodych (2)
  • Y. M. Shcherbyna (2)

  1. L’viv Polytechnic National University, L’viv, Ukraine
  2. Ivan Franko National University of L’viv, L’viv, Ukraine
