
Towards a dialogue system based on recognition and synthesis of Japanese sign language

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1371)

Abstract

This paper describes a dialogue system based on the recognition and synthesis of Japanese sign language. The purpose of the system is to support conversation between people with hearing impairments and hearing people. It consists of five main modules: sign-language recognition, sign-language synthesis, voice recognition, voice synthesis, and dialogue control. The sign-language recognition module uses a stereo camera and a pair of colored gloves to track the signer's movements, and sign-language synthesis is achieved by regenerating motion data obtained with an optical motion-capture system. An experiment investigating changes in the gaze-line of hearing-impaired people while they read sign language was also conducted, and its results are reported.
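The five-module architecture summarized in the abstract can be pictured as a controller routing each utterance to the other participant's modality. The following is a minimal sketch only, not the authors' implementation; all class and method names are hypothetical stand-ins for the modules the abstract names.

```python
# Hypothetical sketch of the five-module dialogue architecture:
# sign-language recognition/synthesis, voice recognition/synthesis,
# and a dialogue controller that routes between them.

class SignLanguageRecognizer:
    """Stub for the stereo-camera + colored-glove recognition module."""
    def recognize(self, video_frames):
        # A real module would track glove positions and classify signs.
        return "hello"

class SignLanguageSynthesizer:
    """Stub for motion-capture-driven sign animation."""
    def animate(self, text):
        return f"[sign animation] {text}"

class VoiceRecognizer:
    """Stub for the voice-recognition module."""
    def recognize(self, audio):
        return "hello"

class VoiceSynthesizer:
    """Stub for the voice-synthesis module."""
    def speak(self, text):
        return f"[voice] {text}"

class DialogueController:
    """Routes each utterance to the modality of the other participant."""
    def __init__(self):
        self.sign_rec = SignLanguageRecognizer()
        self.sign_syn = SignLanguageSynthesizer()
        self.voice_rec = VoiceRecognizer()
        self.voice_syn = VoiceSynthesizer()

    def from_signer(self, video_frames):
        # Signed input -> text -> spoken output for the hearing partner.
        text = self.sign_rec.recognize(video_frames)
        return self.voice_syn.speak(text)

    def from_speaker(self, audio):
        # Spoken input -> text -> animated signing for the deaf partner.
        text = self.voice_rec.recognize(audio)
        return self.sign_syn.animate(text)

controller = DialogueController()
print(controller.from_signer(video_frames=None))   # [voice] hello
print(controller.from_speaker(audio=None))         # [sign animation] hello
```

The point of the sketch is the separation of concerns: recognition and synthesis modules are modality-specific and interchangeable, while the dialogue controller is the only component that knows both participants exist.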



Editor information

Ipke Wachsmuth, Martin Fröhlich


Copyright information

© 1998 Springer-Verlag

About this paper

Cite this paper

Lu, S., Igi, S., Matsuo, H., Nagashima, Y. (1998). Towards a dialogue system based on recognition and synthesis of Japanese sign language. In: Wachsmuth, I., Fröhlich, M. (eds) Gesture and Sign Language in Human-Computer Interaction. GW 1997. Lecture Notes in Computer Science, vol 1371. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0053005


  • DOI: https://doi.org/10.1007/BFb0053005

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64424-8

  • Online ISBN: 978-3-540-69782-4

  • eBook Packages: Springer Book Archive
