Progress in Automated Computer Recognition of Sign Language

  • Conference paper
Computers Helping People with Special Needs (ICCHP 2004)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3118)

Abstract

This paper reviews the extensive state of the art in automated recognition of continuous signs across different sign languages, organized by the data sets used, the features computed, the techniques applied, and the recognition rates achieved. We find that most past work addressed finger-spelled words and isolated sign recognition; recently, however, there has been significant progress in recognizing signs embedded in short continuous sentences. We also find that researchers are starting to address the important problem of extracting and integrating the non-manual information carried by face and head movement. We present results from our own experiments integrating non-manual features.
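The techniques surveyed classify signs by matching time-varying feature sequences (hand positions, handshapes, movement) against per-sign models. As an illustrative baseline only, and not a method from this paper, a minimal sketch of sequence matching via dynamic time warping with nearest-template classification might look like the following; the sign labels and 2-D hand-coordinate features are hypothetical:

```python
import math

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two feature sequences.

    Each sequence is a list of fixed-length feature vectors (e.g. 2-D
    hand coordinates per video frame). DTW aligns the sequences
    non-linearly in time, so signs performed at different speeds can
    still be compared.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])  # Euclidean frame distance
            D[i][j] = cost + min(D[i - 1][j],      # skip a frame of a
                                 D[i][j - 1],      # skip a frame of b
                                 D[i - 1][j - 1])  # match both frames
    return D[n][m]

def classify(query, templates):
    """Return the label of the template sequence nearest to the query."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Hypothetical example: two sign templates as short trajectories of
# normalized hand positions, and a query performed at a slower pace.
templates = {
    "HELLO": [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)],
    "BYE":   [(2.0, 0.0), (1.0, 0.0), (0.0, 0.0)],
}
query = [(0.0, 0.0), (1.0, 1.0), (1.5, 1.5), (2.0, 2.0)]
print(classify(query, templates))  # prints "HELLO"
```

Template matching of this kind was common in early isolated-sign systems; the continuous-sentence work reviewed here generally moved to statistical sequence models such as Hidden Markov Models, which handle co-articulation between signs better than per-sign templates.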

This material is based upon work supported by the National Science Foundation under Grant No. IIS 0312993.

Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Loeding, B.L., Sarkar, S., Parashar, A., Karshmer, A.I. (2004). Progress in Automated Computer Recognition of Sign Language. In: Miesenberger, K., Klaus, J., Zagler, W.L., Burger, D. (eds) Computers Helping People with Special Needs. ICCHP 2004. Lecture Notes in Computer Science, vol 3118. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-27817-7_159

  • DOI: https://doi.org/10.1007/978-3-540-27817-7_159

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22334-4

  • Online ISBN: 978-3-540-27817-7
