Evaluating Facial Expressions in American Sign Language Animations for Accessible Online Information

  • Hernisa Kacorri
  • Pengfei Lu
  • Matt Huenerfauth
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8009)

Abstract

Facial expressions and head movements communicate essential information during ASL sentences. We aim to improve the facial expressions in ASL animations and make them more understandable, ultimately leading to better accessibility of online information for deaf people with low English literacy. This paper presents how we engineer stimuli and questions to measure whether the viewer has seen and understood the linguistic facial expressions correctly. In two studies, we investigate how changing several parameters (the variety of facial expressions, the language in which the stimuli were invented, and the degree of involvement of a native ASL signer in the stimuli design) affects the results of a user evaluation study of facial expressions in ASL animation.

Keywords

American Sign Language · accessibility technology for people who are deaf · animation · natural language generation · evaluation · user study · stimuli



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Hernisa Kacorri (1)
  • Pengfei Lu (1)
  • Matt Huenerfauth (2)
  1. Doctoral Program in Computer Science, The Graduate Center, The City University of New York (CUNY), New York, USA
  2. Computer Science Department, CUNY Queens College; Computer Science and Linguistics Programs, CUNY Graduate Center, The City University of New York (CUNY), Flushing, USA
