
Considerations on generating facial nonmanual signals on signing avatars

  • Long Paper
  • Published in: Universal Access in the Information Society

Abstract

Signing avatars continue to be an active field of research in Deaf-hearing communication, and while their ability to communicate extemporaneous signing has grown significantly in the last decade, one area continues to be commented on consistently in user tests: the quality, quantity, or lack thereof, of facial movement. Facial nonmanual signals are extremely important for forming legible, grammatically correct signing because they are an intrinsic part of the grammar of the language. On computer-generated avatars they are even more important because they are a key element that keeps the avatar moving in all aspects, and thus are critical for creating synthesized signing that is lifelike and acceptable to Deaf users. This paper explores the technical and perceptual issues that avatars must grapple with to attain acceptability, and proposes a method for rigging and controlling the avatar that extends prior interfaces based on the Facial Action Coding System, which has been used for many signing avatars in the past. Further, it explores issues of realism that must be considered if the avatar is to avoid the creepiness that has plagued computer-generated characters in many other applications, such as film. The techniques proposed are demonstrated using a state-of-the-art signing avatar that has been tested with Deaf users for acceptability.



Data availability

Available upon request.

Code availability

Not applicable.

Notes

  1. The exception is the cheek puffs, which are implemented as blendshapes layered below the skin: the shapes are blended on the mesh before the bones move the skin.
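The layering order described in the note — blendshapes applied to the rest mesh first, skeletal deformation applied afterward — can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation: the function name `deform_vertices`, the array layouts, and the use of linear blend skinning as the skinning model are all assumptions for the sake of the example.

```python
import numpy as np

def deform_vertices(rest_verts, blendshape_deltas, shape_weights,
                    bone_matrices, skin_weights):
    """Apply blendshapes first, then linear blend skinning.

    rest_verts:        (V, 3) rest-pose vertex positions
    blendshape_deltas: (S, V, 3) per-shape vertex offsets (e.g. a cheek puff)
    shape_weights:     (S,) blendshape activations in [0, 1]
    bone_matrices:     (B, 4, 4) current bone transforms (rest-relative)
    skin_weights:      (V, B) per-vertex bone weights, each row summing to 1
    """
    # 1. Blend the shapes on the mesh while it is still in rest pose.
    blended = rest_verts + np.einsum('s,svk->vk', shape_weights, blendshape_deltas)

    # 2. Skin the blended mesh: the bones move the already-deformed skin.
    homo = np.concatenate([blended, np.ones((len(blended), 1))], axis=1)  # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_matrices, homo)              # (V, B, 4)
    skinned = np.einsum('vb,vbi->vi', skin_weights, per_bone)[:, :3]
    return skinned
```

Because the blend happens before skinning, a cheek puff sculpted in rest pose rides along correctly when the jaw or head bones move, rather than being overwritten by the skeletal deformation.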


Funding

Not applicable.

Author information


Corresponding author

Correspondence to John McDonald.

Ethics declarations

Conflict of interest

Not applicable.

Ethical approval

Not applicable.

Consent to participation

Not applicable.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

McDonald, J. Considerations on generating facial nonmanual signals on signing avatars. Univ Access Inf Soc (2024). https://doi.org/10.1007/s10209-024-01090-6

