A Linguistic Feature Vector for the Visual Interpretation of Sign Language

  • Richard Bowden
  • David Windridge
  • Timor Kadir
  • Andrew Zisserman
  • Michael Brady
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3021)

Abstract

This paper presents a novel approach to sign language recognition that achieves extremely high classification rates from minimal training data. Key to the approach is a two-stage classification procedure in which an initial stage extracts a high-level description of hand shape and motion. This description is grounded in sign linguistics and characterises actions at a conceptual level that is easily understood by humans. Moreover, such a description generalises broadly over temporal activity, naturally overcoming variability across people and environments. A second classification stage then models the temporal transitions of individual signs using a classifier bank of Markov chains combined with Independent Component Analysis. We demonstrate classification rates as high as 97.67% for a lexicon of 43 words using only single-instance training, outperforming previous approaches that require thousands of training examples.
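
As a rough illustration of the second classification stage described above, the sketch below implements a bank of first-order Markov chains, one per sign, each estimating transition probabilities between discrete high-level feature symbols from a single training exemplar; a test sequence is then assigned to the sign whose chain yields the highest log-likelihood. All names here (MarkovChainBank and so on) are illustrative assumptions rather than the authors' implementation, and the Independent Component Analysis step applied to the feature vector before the chains are built is omitted for brevity.

    # Illustrative sketch (not the authors' code): a bank of first-order
    # Markov chains over discrete high-level feature symbols, one chain
    # per sign, each trained from a single example sequence.
    import numpy as np

    class MarkovChainBank:
        def __init__(self, n_symbols, smoothing=1e-3):
            self.n = n_symbols
            self.eps = smoothing   # keeps unseen transitions at nonzero probability
            self.chains = {}       # sign label -> (n x n) row-stochastic transition matrix

        def train(self, label, sequence):
            # Estimate a transition matrix from one exemplar symbol sequence.
            counts = np.full((self.n, self.n), self.eps)
            for a, b in zip(sequence, sequence[1:]):
                counts[a, b] += 1.0
            self.chains[label] = counts / counts.sum(axis=1, keepdims=True)

        def classify(self, sequence):
            # Return the sign whose chain assigns the test sequence the
            # highest log-likelihood.
            def log_lik(T):
                return sum(np.log(T[a, b]) for a, b in zip(sequence, sequence[1:]))
            return max(self.chains, key=lambda label: log_lik(self.chains[label]))

    # Usage with made-up symbol streams; in the paper, each integer would
    # index a quantised linguistic description of hand shape and motion
    # produced by the first classification stage.
    bank = MarkovChainBank(n_symbols=8)
    bank.train("hello", [0, 1, 1, 2, 3])
    bank.train("thanks", [4, 5, 5, 6, 7])
    print(bank.classify([0, 1, 2, 3]))   # -> "hello"

A single exemplar per sign suffices in this scheme because the first stage maps raw video onto a small symbolic alphabet, so each sign's transition structure is already largely invariant to the person and the environment.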

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Richard Bowden¹ ²
  • David Windridge¹
  • Timor Kadir²
  • Andrew Zisserman²
  • Michael Brady²

  1. CVSSP, School of EPS, University of Surrey, Guildford, UK
  2. Department of Engineering Science, University of Oxford, Oxford, UK
