Universal Access in the Information Society, Volume 11, Issue 2, pp 169–184

Effect of spatial reference and verb inflection on the usability of sign language animations

  • Matt Huenerfauth
  • Pengfei Lu
Long Paper


Abstract

Computer-generated animations of American Sign Language (ASL) can improve the accessibility of information, communication, and services for the significant number of deaf adults in the US who have difficulty reading English text. Unfortunately, there are several linguistic aspects of ASL that current automatic generation or translation systems cannot produce (and that are time-consuming for human animators to create). To determine how important such phenomena are to user satisfaction and to the comprehension of ASL animations, studies were conducted in which native ASL signers evaluated ASL animations with and without: establishment of spatial reference points around the virtual human signer representing entities under discussion, pointing pronoun signs, contrastive role shift, and spatial inflection of ASL verbs. It was found that adding these phenomena to ASL animations led to a significant improvement in user comprehension of the animations, thereby motivating future research on automating the generation of these phenomena.


Keywords: American sign language · Animation · Evaluation · Sign language · Spatial reference · Verb inflection · Accessibility technology for people who are deaf






Acknowledgments

This research was supported in part by the U.S. National Science Foundation under award number 0746556, by The City University of New York PSC-CUNY Research Award Program, by Siemens A&D UGS PLM Software through a Go PLM Academic Grant, and by Visage Technologies AB through a free academic license for character animation software. Jonathan Lamberton prepared experimental materials and organized data collection for the ASL animation studies discussed in Sects. 2 and 3.


References

  1. Huenerfauth, M.: Improving spatial reference in American sign language animation through data collection from native ASL signers. In: Proceedings of the Universal Access in Human-Computer Interaction Conference (UAHCI '09), pp. 530–539 (2009). doi:10.1007/978-3-642-02713-0_56
  2. Mitchell, R., Young, T.A., Bachleda, B., Karchmer, M.A.: How many people use ASL in the United States? Why estimates need updating. Sign Lang. Stud. 6(4), 306–335 (2006)
  3. Traxler, C.: The Stanford achievement test, ninth edition: national norming and performance standards for deaf and hard-of-hearing students. J. Deaf Stud. Deaf Educ. 5(4), 337–348 (2000). doi:10.1093/deafed/5.4.337
  4. Huenerfauth, M., Hanson, V.L.: Sign language in the interface: access for Deaf signers. In: Stephanidis, C. (ed.) The Universal Access Handbook. Lawrence Erlbaum Associates, Mahwah (2009)
  5. Lane, H., Hoffmeister, R., Bahan, B.: A Journey into the Deaf World. DawnSign Press, San Diego (1996)
  6. Padden, C., Humphries, T.: Inside Deaf Culture. Harvard University Press, Cambridge (2005)
  7. Elliott, R., Glauert, J.R.W., Kennaway, J.R., Marshall, I., Safar, E.: Linguistic modelling and language-processing technologies for Avatar-based sign language presentation. Univ. Access Inf. Soc. 6(4), 375–391 (2006). doi:10.1007/s10209-007-0102-z
  8. Kennaway, J., Glauert, J., Zwitserlood, I.: Providing signed content on the Internet by synthesized animation. ACM Trans. Comput. Hum. Interact. 14(3), 1–29 (2007). doi:10.1145/1279700.1279705
  9. VCom3D: Sign Smith Studio. Accessed 11 Mar 2010 (2010)
  10. Chiu, Y.H., Wu, C.H., Su, H.Y., Cheng, C.J.: Joint optimization of word alignment and epenthesis generation for Chinese to Taiwanese sign synthesis. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 28–39 (2007). doi:10.1109/TPAMI.2007.15
  11. Fotinea, S.E., Efthimiou, E., Caridakis, G., Karpouzis, K.: A knowledge-based sign synthesis architecture. Univ. Access Inf. Soc. 6(4), 405–418 (2008). doi:10.1007/s10209-007-0094-8
  12. Marshall, I., Safar, E.: Grammar development for sign language avatar-based synthesis. In: Stephanidis, C. (ed.) Universal Access in HCI: Exploring New Dimensions of Diversity, Volume 8 of the Proceedings of the 11th International Conference on Human-Computer Interaction. Lawrence Erlbaum Associates, Mahwah (2005)
  13. Karpouzis, K., Caridakis, G., Fotinea, S.E., Efthimiou, E.: Educational resources and implementation of a Greek sign language synthesis architecture. Comput. Educ. 49(1), 54–74 (2007). doi:10.1016/j.compedu.2005.06.004
  14. Stein, D., Bungeroth, J., Ney, H.: Morpho-syntax based statistical methods for sign language translation. In: Proceedings of the European Association for Machine Translation, pp. 169–177. European Association for Machine Translation, Allschwil (2006)
  15. Morrissey, S., Way, A.: An example-based approach to translating sign language. In: Proceedings of the Workshop on Example-Based Machine Translation, pp. 109–116 (2005)
  16. Shionome, T., Kamata, K., Yamamoto, H., Fischer, S.: Effects of display size on perception of Japanese sign language: mobile access in signed language. In: Proceedings of the Human-Computer Interaction Conference, pp. 22–27 (2005)
  17. Sumihiro, K., Yoshihisa, S., Takao, K.: Synthesis of sign animation with facial expression and its effects on understanding of sign language. IEIC Tech. Rep. 100(331), 31–36 (2000)
  18. van Zijl, L., Barker, D.: South African sign language MT system. In: Proceedings of AFRIGRAPH, pp. 49–52 (2003). doi:10.1145/602330.602339
  19. Zhao, L., Kipper, K., Schuler, W., Vogler, C., Badler, N., Palmer, M.: A machine translation system from English to American sign language. In: Proceedings of the 4th Conference of the Association for Machine Translation in the Americas on Envisioning Machine Translation in the Information Future (Lecture Notes in Computer Science 1934), pp. 54–67. Springer, Heidelberg (2000). doi:10.1007/3-540-39965-8_6
  20. Neidle, C., Kegl, J., MacLaughlin, D., Bahan, B., Lee, R.: The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. MIT Press, Cambridge (2000)
  21. Liddell, S.: Grammar, Gesture, and Meaning in American Sign Language. Cambridge University Press, Cambridge (2003)
  22. Padden, C.: Interaction of Morphology and Syntax in American Sign Language. Outstanding Dissertations in Linguistics, Series IV. Garland Press, New York (1988)
  23. Braffort, A., Dalle, P.: Sign language applications: preliminary modeling. Univ. Access Inf. Soc. 6(4), 393–404 (2008). doi:10.1007/s10209-007-0103-y
  24. Marshall, I., Safar, E.: A prototype text to British sign language (BSL) translation system. In: Companion Volume to the Proceedings of the Association for Computational Linguistics Conference, pp. 113–116 (2003). doi:10.3115/1075178.1075194
  25. Iwarsson, S., Stahl, A.: Accessibility, usability and universal design: positioning and definition of concepts describing person-environment relationships. Disabil. Rehabil. 25(2), 57–66 (2003). doi:10.1080/0963828021000007969
  26. Nielsen, J.: Usability Engineering. Academic Press, Boston (1993)
  27. International Organization for Standardization: ISO 9241-11: Guidance on Usability. International Organization for Standardization (1998)
  28. Huenerfauth, M.: Evaluation of a psycholinguistically motivated timing model for animations of American sign language. In: Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 129–136. ACM Press, New York (2008). doi:10.1145/1530064.1530067
  29. Huenerfauth, M.: A linguistically motivated model for speed and pausing in animations of American sign language. ACM Trans. Access. Comput. 2(2), 1–31 (2009). doi:10.1145/1530064.1530067
  30. Huenerfauth, M., Zhao, L., Gu, E., Allbeck, J.: Evaluation of American sign language generation by native ASL signers. ACM Trans. Access. Comput. 1(1), 1–27 (2008). doi:10.1145/1361203.1361206
  31. Lucas, C., Valli, C.: Language Contact in the American Deaf Community. Academic Press, San Diego (1992)
  32. Campbell, N.: Speech synthesis evaluation. In: Human Language Technologies (HLT) Evaluation Workshop. European Language Resources Association (ELRA) (2005). Accessed 13 Mar 2010
  33. Van Bezooijen, R., Pols, L.: Evaluating text-to-speech systems: some methodological aspects. Speech Commun. 9(4), 263–270 (1990). doi:10.1016/0167-6393(90)90002-Q
  34. Huenerfauth, M., Lu, P.: Annotating spatial reference in a motion-capture corpus of American sign language discourse. In: Proceedings of the Fourth Workshop on the Representation and Processing of Signed Languages: Corpora and Sign Language Technologies, 7th International Conference on Language Resources and Evaluation (LREC 2010). ELRA, Paris (2010)
  35. Bungeroth, J., Stein, D., Dreuw, P., Zahedi, M., Ney, H.: A German sign language corpus of the domain weather report. In: Vettori, C. (ed.) 2nd Workshop on the Representation and Processing of Sign Languages, pp. 2000–2003. ELRA, Paris (2006)
  36. Crasborn, O., Sloetjes, H., Auer, E., Wittenburg, P.: Combining video and numeric data in the analysis of sign languages within the ELAN annotation software. In: Vettori, C. (ed.) 2nd Workshop on the Representation and Processing of Sign Languages, 5th International Conference on Language Resources and Evaluation (LREC 2006), pp. 82–87. ELRA, Paris (2006)
  37. Efthimiou, E., Fotinea, S.E.: GSLC: creation and annotation of a Greek sign language corpus for HCI. In: Universal Access in Human Computer Interaction (Lecture Notes in Computer Science 4554), pp. 657–666. Springer, Heidelberg (2007)
  38. Brashear, H., Starner, T., Lukowicz, P., Junker, H.: Using multiple sensors for mobile sign language recognition. In: IEEE International Symposium on Wearable Computers, p. 45. IEEE Press, New York (2003). doi:10.1109/ISWC.2003.1241392
  39. Cox, S., Lincoln, M., Tryggvason, J., Nakisa, M., Wells, M., Tutt, M., Abbott, S.: Tessa, a system to aid communication with deaf people. In: 5th International ACM Conference on Assistive Technologies, pp. 205–212. ACM Press, New York (2002). doi:10.1145/638249.638287
  40. Vogler, C., Metaxas, D.: Handshapes and movements: multiple-channel ASL recognition. (Lecture Notes in Artificial Intelligence 2915), pp. 247–258. Springer, Heidelberg (2004). doi:10.1007/11678816

Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  1. Department of Computer Science, Queens College, The City University of New York, Flushing, USA
  2. Department of Computer Science, Graduate Center, The City University of New York, New York, USA
