A Method for Analyzing Spatial Relationships Between Words in Sign Language Recognition

  • Hirohiko Sagawa
  • Masaru Takeuchi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1739)


Sign language contains expressions, called directional verbs, whose meaning depends on spatial relationships. To understand a sign-language sentence that includes a directional verb, it is necessary to analyze the spatial relationships between the recognized sign-language words and to find the proper combination of the directional verb and the sign-language words related to it. In this paper, we propose an analysis method that evaluates the spatial relationship between a directional verb and other sign-language words according to the distribution of the parameters representing that spatial relationship.
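The paper's own evaluation procedure is not reproduced on this page, but the idea of scoring a candidate verb-word pairing by the distribution of its spatial parameters can be sketched as follows. This is a minimal, hypothetical illustration (the word list, direction vectors, and Gaussian parameters are all invented, not taken from the paper): each candidate word is given a learned distribution over the direction vector of the verb's motion, and the word whose distribution best explains the observed direction is selected.

```python
import math

def gaussian_score(x, mean, std):
    # Log-likelihood of vector x under an independent (diagonal) Gaussian.
    return sum(
        -0.5 * ((xi - mi) / si) ** 2 - math.log(si * math.sqrt(2 * math.pi))
        for xi, mi, si in zip(x, mean, std)
    )

def best_combination(verb_direction, candidates, model):
    """Pick the candidate word whose spatial-relation distribution
    best explains the observed direction of the directional verb."""
    return max(candidates, key=lambda w: gaussian_score(verb_direction, *model[w]))

# Hypothetical learned parameters: mean and std of the direction vector
# from the signer toward the spatial location associated with each word.
model = {
    "I":   ([0.0, -1.0, 0.0], [0.3, 0.3, 0.3]),
    "you": ([0.0,  1.0, 0.2], [0.3, 0.3, 0.3]),
}

observed = [0.05, 0.9, 0.1]  # observed direction of a verb such as "give"
print(best_combination(observed, ["I", "you"], model))  # prints "you"
```

The same scoring could be run over every admissible verb-word combination in a sentence, keeping the combination with the highest total likelihood; the paper's actual parameterization of spatial relationships may differ.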


Keywords: Spatial Relationship · Semantic Information · Sign Language · Gesture Recognition · Sign Language Recognition





Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Hirohiko Sagawa (1)
  • Masaru Takeuchi (1)
  1. RWCP Multi-modal Functions Hitachi Laboratory, Tokyo, Japan
