An automated technique for real-time production of lifelike animations of American Sign Language

  • Long paper
  • Universal Access in the Information Society

Abstract

Generating sentences from a library of signs implemented through a sparse set of key frames derived from the segmental structure of a phonetic model of ASL has the advantages of flexibility and efficiency, but it lacks the lifelike detail of motion capture. These difficulties are compounded by the demands of real-time generation and display. This paper describes a technique for automatically adding realism without the expense of manually animating the requisite detail. The new technique layers transparently over the primary motions dictated by the segmental model, modifying them at very little computational cost and thereby enabling real-time production and display. The paper also discusses avatar optimizations that can lower the rendering overhead of real-time displays.
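As a rough illustration of the layering idea only (the paper's own secondary-motion model and avatar code are not reproduced here), the Python sketch below interpolates sparse key frames for a single joint and adds a cheap procedural layer on top of the result at each frame. All function names, the sine-based perturbation, and the sample key-frame values are assumptions made for this example.

```python
import math

def lerp(a, b, t):
    """Linearly interpolate between two joint angles (degrees)."""
    return a + (b - a) * t

def primary_rotation(keyframes, t):
    """Evaluate the sparse key frames produced by the segmental model.

    keyframes: list of (time_seconds, angle_degrees) pairs sorted by time.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return lerp(a0, a1, (t - t0) / (t1 - t0))
    return keyframes[-1][1]

def secondary_layer(t, amplitude=1.5, frequency=0.7):
    """Low-cost procedural detail layered over the primary motion.

    A small sum of sines stands in for whatever lightweight perturbation
    is actually used; it costs a few arithmetic operations per joint per frame.
    """
    return amplitude * (0.6 * math.sin(2.0 * math.pi * frequency * t)
                        + 0.4 * math.sin(2.0 * math.pi * 2.3 * frequency * t))

def joint_angle(keyframes, t):
    """Primary key-frame motion plus the additive secondary layer."""
    return primary_rotation(keyframes, t) + secondary_layer(t)

# Hypothetical elbow-flexion key frames for one sign, sampled at 60 fps.
elbow_keys = [(0.0, 20.0), (0.4, 95.0), (0.8, 60.0)]
curve = [joint_angle(elbow_keys, frame / 60.0) for frame in range(48)]
```

In this sketch the secondary layer is purely additive, so it can be enabled, disabled, or retuned without touching the key-frame data, which mirrors the transparency over the segmental model that the abstract describes.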

Notes

  1. Recall that a median of 3.5 on an integer 5-point Likert scale indicates that 50% of the responses were either a 4 or a 5.

Acknowledgments

The authors would like to thank the Deaf experts and qualified interpreters for reviewing drafts of the animations. We are also grateful to the reviewers for their thoughtful comments and valuable feedback.

Author information

Corresponding author

Correspondence to John McDonald.

About this article

Cite this article

McDonald, J., Wolfe, R., Schnepp, J. et al. An automated technique for real-time production of lifelike animations of American Sign Language. Univ Access Inf Soc 15, 551–566 (2016). https://doi.org/10.1007/s10209-015-0407-2
