
Subunit Modeling for Japanese Sign Language Recognition Based on Phonetically Dependent Multi-stream Hidden Markov Models

  • Shinji Sako
  • Tadashi Kitamura
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8009)

Abstract

We work on automatic Japanese Sign Language (JSL) recognition using hidden Markov models (HMMs). An important issue in modeling signs is how to determine the constituent elements of a sign (i.e., subunits), analogous to phonemes in spoken language. We focus on a characteristic feature of sign language: JSL is composed of three types of phonological elements, namely local hand information, position, and movement. In this paper, we propose an efficient method of generating subunits using multi-stream HMMs whose streams correspond to these phonological elements. An isolated word recognition experiment confirmed the effectiveness of the proposed method.
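The modeling idea stated above — one observation stream per phonological element, combined at the HMM state level — amounts to a weighted product of per-stream likelihoods, b_j(o_t) = ∏_s b_js(o_st)^γ_s. The Python fragment below is a minimal illustration only, under assumed feature dimensions, single-Gaussian per-stream output distributions, and equal stream weights; it is not the authors' implementation.

```python
# Minimal sketch of a multi-stream HMM emission model: each frame is split
# into three streams (local hand information, position, movement), each HMM
# state has one output distribution per stream, and the combined state
# likelihood is the weighted product  b_j(o_t) = prod_s b_js(o_st)^gamma_s.
# Stream dimensions and weights below are illustrative assumptions.

import numpy as np
from scipy.stats import multivariate_normal

STREAMS = {"hand_local": 20, "position": 3, "movement": 3}      # assumed dims
STREAM_WEIGHTS = {"hand_local": 1.0, "position": 1.0, "movement": 1.0}

def make_random_state(rng):
    """One HMM state: an independent Gaussian per stream (single mixture)."""
    return {name: (rng.normal(size=dim),   # mean vector
                   np.eye(dim))            # identity covariance
            for name, dim in STREAMS.items()}

def log_emission(state, observation):
    """Weighted sum of per-stream log-likelihoods (log of the weighted product)."""
    return sum(STREAM_WEIGHTS[name] *
               multivariate_normal.logpdf(observation[name], mean=mu, cov=cov)
               for name, (mu, cov) in state.items())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    state = make_random_state(rng)
    obs = {name: rng.normal(size=dim) for name, dim in STREAMS.items()}
    print("combined log b_j(o_t) =", log_emission(state, obs))
```

In practice the per-stream distributions would be trained from data and the stream weights tuned before subunits are derived; this sketch only shows how the three phonological streams enter a single state likelihood.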

Keywords

Hidden Markov models · Sign language recognition · Subunit · Phonetic systems of sign language



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Shinji Sako (1)
  • Tadashi Kitamura (1)
  1. Nagoya Institute of Technology, Nagoya, Japan
