Sign Language Recognition

Abstract

This chapter covers the key aspects of sign-language recognition (SLR), starting with a brief introduction to the motivations and requirements, followed by a précis of sign linguistics and its impact on the field. The types of data available and their relative merits are explored, allowing examination of the features which can be extracted. Classifying the manual aspects of sign (similar to gestures) is then discussed from tracking and non-tracking viewpoints, before some of the approaches to the non-manual aspects of sign languages are summarised. Methods for combining the sign classification results into full SLR are given, showing the progression towards speech recognition techniques and the further adaptations required for the sign-specific case. Finally, the current frontiers are discussed and recent research presented. This covers the task of continuous sign recognition, the work towards true signer independence, how to effectively combine the different modalities of sign, making use of current linguistic research, and adapting to larger, noisier data sets.

Copyright information

© Springer-Verlag London Limited 2011

Authors and Affiliations

  1. University of Surrey, Guildford, UK