Towards enhanced visual clarity of sign language avatars through recreation of fine facial detail

Published in: Machine Translation

Abstract

Facial nonmanual signals and expressions convey critical linguistic and affective information in signed languages. However, the complexity of human facial anatomy has made implementing these movements a particular challenge in avatar research. Recent advances have improved avatars' possible range of motion and expression; we therefore propose that an important next step is incorporating fine detail, such as wrinkles, to increase the visual clarity of these facial movements and enhance the legibility of avatar animation, particularly on small screens. This paper reviews research efforts to portray nonmanual signals via avatar technology and surveys extant illumination models for their suitability for this application. Based on this survey, the American Sign Language Avatar Project at DePaul University has developed a new technique, grounded in commercial visual-effects paradigms, for implementing realistic fine detail on the Paula avatar within the complexity constraints of real-time sign language avatars.
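Among the illumination models such a survey typically weighs for real-time suitability, Phong's classical local model (Phong 1975) is the standard low-cost baseline. As an illustrative aside only (this is not the paper's technique, and all vectors and material coefficients below are hypothetical choices), a minimal sketch of single-light Phong shading:

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def phong(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """Scalar Phong intensity: ambient + diffuse + specular.

    `light_dir` and `view_dir` point from the surface toward the
    light and the viewer, respectively. Coefficients are example values.
    """
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    diffuse = max(dot(n, l), 0.0)
    # Reflect the light vector about the normal: r = 2(n.l)n - l
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    # Gate specular so back-facing lights contribute nothing.
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0.0 else 0.0
    return ka + kd * diffuse + ks * specular

# A head-on light and viewer yield full intensity; a light behind the
# surface leaves only the ambient term.
print(phong((0, 0, 1), (0, 0, 1), (0, 0, 1)))   # 1.0
print(phong((0, 0, 1), (0, 0, -1), (0, 0, 1)))  # 0.1
```

Because the model is purely local (no shadows, occlusion, or interreflection), its cost is constant per pixel, which is precisely the property that makes such models attractive under real-time avatar constraints, and also why fine surface detail needs additional techniques on top of them.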



Author information


Corresponding author

Correspondence to Ronan Johnson.


About this article

Cite this article

Johnson, R. Towards enhanced visual clarity of sign language avatars through recreation of fine facial detail. Machine Translation 35, 431–445 (2021). https://doi.org/10.1007/s10590-021-09269-x
