Basic Investigation for Improvement of Sign Language Recognition Using Classification Scheme

  • Hirotoshi Shibata
  • Hiromitsu Nishimura
  • Hiroshi Tanaka (email author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9734)

Abstract

Sign language is a commonly used communication method for hearing-impaired and speech-impaired people. However, it is quite difficult to learn. If automatic translation of sign language could be realized, it would be very useful and convenient not only for impaired people but also for unimpaired people. Automatic translation is difficult because sign language motions vary widely, which degrades recognition performance. This paper presents a recognition method that maintains recognition performance across many sign language motions. A classification scheme using a decision tree is introduced, which decreases the number of words to be recognized at a time by dividing them into groups. The hand used, the characteristics of the hand motion, and the relative position of the hands and face were considered in creating the decision tree. Experiments on 17 basic sign language words performed by four signers confirmed that the recognition success rates increased from 41 % and 59 % to 59 % and 82 %, respectively.
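The classification idea described above can be sketched as follows: a hand-built decision tree uses coarse features (which hand is used, the motion characteristics, and the hand position relative to the face) to narrow the set of candidate words before detailed recognition runs. This is a minimal illustrative sketch; the feature names and word groups are assumptions, not the authors' actual tree.

```python
# Illustrative sketch of classification by decision tree before recognition.
# The features and word groups below are hypothetical examples, not the
# actual ones used in the paper.
from dataclasses import dataclass

@dataclass
class SignFeatures:
    used_hand: str   # "right", "left", or "both"
    motion: str      # e.g. "static" or "moving"
    near_face: bool  # hands close to the face or not

def candidate_words(f: SignFeatures) -> list[str]:
    """Walk a small decision tree and return the reduced word group."""
    if f.used_hand == "both":
        return ["friend", "meet"] if f.motion == "moving" else ["school"]
    # one-handed signs: split by hand position relative to the face
    if f.near_face:
        return ["eat", "drink"]
    return ["hello", "thanks"]

# Only the words in the returned group need to be scored by the
# (separate) recognizer, which reduces confusable alternatives.
print(candidate_words(SignFeatures("both", "moving", False)))
```

Because each leaf of the tree holds only a few words, the downstream recognizer chooses among fewer alternatives at a time, which is the mechanism the paper credits for the improved success rates.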

Keywords

Sign language · Color gloves · Optical camera · Recognition · Classification · Decision tree

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Hirotoshi Shibata (1)
  • Hiromitsu Nishimura (1)
  • Hiroshi Tanaka (1) (email author)
  1. Kanagawa Institute of Technology, Atsugi, Japan