Sign Language Avatars: Animation and Comprehensibility

  • Michael Kipp
  • Alexis Heloir
  • Quan Nguyen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)

Abstract

Many deaf people have significant reading problems. Written content, e.g. on web pages, is therefore not fully accessible to them. Embodied agents have the potential to communicate in the native language of this cultural group: sign language. However, state-of-the-art systems have limited comprehensibility, and standard evaluation methods are missing. In this paper, we present methods and discuss challenges for the creation and evaluation of a signing avatar. We extended the existing EMBR character animation system with the prerequisite functionality, created a gloss-based animation tool, and developed a cyclic content creation workflow with the help of two deaf sign language experts. For evaluation, we introduce delta testing, a novel way of assessing comprehensibility by comparing avatars with human signers. While our system reached state-of-the-art comprehensibility within a short development time, we argue that future research needs to focus on nonmanual aspects and prosody to reach the comprehensibility levels of human signers.
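To make the delta-testing idea concrete, the sketch below shows one plausible way such a score could be computed: the same signed test items are presented in an avatar condition and a human-signer condition, and the avatar's comprehensibility is reported as its per-item gap to the human baseline. This is a minimal illustration under assumed conventions; the function name, the [0, 1] scoring scheme, and the sample data are hypothetical and not taken from the paper.

```python
# Minimal sketch of "delta testing": comprehensibility of an avatar is
# measured relative to a human-signer baseline on the same test items.
# Scoring scheme and data below are illustrative assumptions only.

from statistics import mean

def delta_score(avatar_scores: dict[str, float],
                human_scores: dict[str, float]) -> float:
    """Mean per-item comprehension gap between human signer and avatar.

    Scores are assumed to be normalized to [0, 1] per test item
    (e.g. the fraction of comprehension questions answered correctly).
    A delta of 0.0 means the avatar matches the human baseline.
    """
    common = avatar_scores.keys() & human_scores.keys()
    if not common:
        raise ValueError("no items were rated in both conditions")
    return mean(human_scores[i] - avatar_scores[i] for i in common)

# Hypothetical results for three signed test items:
human = {"item1": 0.95, "item2": 0.90, "item3": 0.85}
avatar = {"item1": 0.70, "item2": 0.65, "item3": 0.60}

print(f"comprehensibility delta: {delta_score(avatar, human):.2f}")
```

Expressing the result as a gap to the human baseline, rather than as an absolute avatar score, controls for the difficulty of the individual items, which is the apparent motivation for comparing avatars directly with human signers.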

Keywords

Accessible interfaces · Virtual characters · Sign language synthesis

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Michael Kipp
  • Alexis Heloir
  • Quan Nguyen

DFKI, Embodied Agents Research Group, Saarbrücken, Germany
