
Enhancing a Sign Language Translation System with Vision-Based Features

  • Philippe Dreuw
  • Daniel Stein
  • Hermann Ney
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5085)

Abstract

In automatic sign language translation, one of the main problems is the use of spatial information in sign language and its proper representation and translation, e.g., the handling of spatial reference points in the signing space. Such locations are encoded at static points in signing space and serve as spatial references for motion events.
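
Such static reference points can be thought of as a discretization of the signing space. As a hedged illustration (the grid size, image resolution, and region naming here are assumptions, not the authors' actual encoding), one minimal way to turn a tracked 2D hand position into a discrete spatial reference label is:

```python
# Hypothetical sketch: quantize a tracked 2D hand position into one of a
# small number of signing-space regions, yielding a discrete label that can
# stand in for a spatial reference point. Grid layout is illustrative only.

def spatial_region(x, y, width=320, height=240, cols=3, rows=3):
    """Map an image coordinate (x, y) to a region label 'R<row><col>'."""
    col = min(int(x / width * cols), cols - 1)   # 0 .. cols-1, left to right
    row = min(int(y / height * rows), rows - 1)  # 0 .. rows-1, top to bottom
    return f"R{row}{col}"

# A pointing gesture toward the signer's upper right, then lower left:
print(spatial_region(300, 20))   # upper-right region
print(spatial_region(10, 230))   # lower-left region
```

A motion event between two such labels (e.g., from one region to another) can then be represented symbolically rather than as raw coordinates.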

We present a new approach that starts from a large-vocabulary speech recognition system able to recognize sentences of continuous sign language independently of the speaker. The manual features obtained from hand tracking are passed to the statistical machine translation system to improve its accuracy. On a publicly available benchmark database, we achieve competitive recognition performance and similarly improve the translation performance by integrating the tracking features.
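
As a rough sketch of how tracking-derived features might be handed to a statistical translation system (the token format, gloss names, and region labels below are illustrative assumptions, not the authors' actual interface), one simple scheme is to interleave discrete position tokens with the recognized glosses on the source side:

```python
# Hedged sketch: augment a recognized gloss sequence with per-gloss tracking
# tokens before translation. '<pos:...>' markers and gloss names are
# hypothetical; a real system could instead use continuous feature streams.

def augment_glosses(glosses, position_labels):
    """Interleave recognized glosses with discrete tracking tokens."""
    assert len(glosses) == len(position_labels)
    tokens = []
    for gloss, label in zip(glosses, position_labels):
        tokens.append(gloss)
        tokens.append(f"<pos:{label}>")
    return " ".join(tokens)

src = augment_glosses(["IX-3p", "HOUSE", "GO"], ["R02", "R11", "R02"])
print(src)  # "IX-3p <pos:R02> HOUSE <pos:R11> GO <pos:R02>"
```

The translation model can then learn alignments between position tokens and target-language words such as pronouns or locatives, which is one plausible route by which spatial features improve translation quality.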

Keywords

Sign Language · Gesture Recognition · Translation System · Statistical Machine Translation · Word Error Rate



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Philippe Dreuw (1)
  • Daniel Stein (1)
  • Hermann Ney (1)

  1. Human Language Technology and Pattern Recognition, RWTH Aachen University, Germany
