
Natural synthesis of productive forms from structured descriptions of sign language

Published in: Machine Translation (2021)

Abstract

Natural animation of sign language directly from linguistic descriptions continues to be a challenge, especially when the forms involved are more productive, such as geometric depictions. Prior work laid the foundation for natural sign language synthesis with the Paula animation system directly from AZee linguistic descriptions. This paper considers more elaborate discourse, composed of several clauses linked together by the overall meaning and involving largely productive signing. We make the case that one of the keys to natural animation of such discourse also lies in the segments between the typically annotated signs, in other words in the segments traditionally termed “transitions”. By studying an example discourse video and the corresponding motion capture, we progressively build an efficient linguistic description of it and specify how to animate it naturally.
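To make concrete the idea that inter-sign “transitions” are animation content in their own right, the sketch below places transition segments on the same timeline as the signs they join and interpolates through them explicitly, rather than treating them as gaps to be filled automatically. It is an illustration only, not the AZee description language or the Paula system's API; all names (Segment, Keyframe, pose_at, the glosses and coordinates) are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Keyframe:
    time: float                          # seconds from discourse start
    wrist_xyz: Tuple[float, float, float]  # one wrist position stands in for a full posture

@dataclass
class Segment:
    label: str                           # a sign gloss, or "transition"
    keyframes: List[Keyframe]

def pose_at(segments: List[Segment], t: float) -> Tuple[float, float, float]:
    """Linearly interpolate the (toy) posture at time t, treating transition
    segments as first-class timeline content alongside the signs."""
    frames = sorted((kf for seg in segments for kf in seg.keyframes), key=lambda kf: kf.time)
    if t <= frames[0].time:
        return frames[0].wrist_xyz
    for a, b in zip(frames, frames[1:]):
        if a.time <= t <= b.time:
            u = (t - a.time) / (b.time - a.time)
            return tuple(pa + u * (pb - pa) for pa, pb in zip(a.wrist_xyz, b.wrist_xyz))
    return frames[-1].wrist_xyz

# Toy discourse: two signs joined by an explicitly authored transition segment.
discourse = [
    Segment("RUG", [Keyframe(0.0, (0.00, 0.10, 0.30)), Keyframe(0.4, (0.05, 0.10, 0.30))]),
    Segment("transition", [Keyframe(0.4, (0.05, 0.10, 0.30)), Keyframe(0.7, (0.20, 0.20, 0.35))]),
    Segment("RESTAURANT", [Keyframe(0.7, (0.20, 0.20, 0.35)), Keyframe(1.1, (0.20, 0.25, 0.40))]),
]
print(pose_at(discourse, 0.55))  # posture sampled mid-transition
```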




Data availability

Posted online as indicated above.

Notes

  1. We use the term Sign to refer to any of the complete natural languages having a visual/gestural modality that are used within Deaf communities as a preferred language.

  2. Since the sign is relocatable, rug would accept a loc argument, as restaurant does. In our video, however, it is applied without relocation, as it is performed generically. The rug entity is nonetheless placed by what follows in the utterance (see the sketch after these notes).

  3. Classifiers can also specify wider shapes on the body, as for example when placing a tree. In that case the entire forearm and hand become the tree to be placed.
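The following is a minimal sketch, in Python, of the optional relocation argument mentioned in note 2: a relocatable sign may take a loc value, and is articulated at a generic, neutral placement when none is supplied. It is purely illustrative and not actual AZee or Paula syntax; the names SignApplication, loc and placement, and the coordinate values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float, float]

@dataclass
class SignApplication:
    gloss: str
    loc: Optional[Point] = None  # optional relocation target; None = generic, unrelocated form

def placement(sign: SignApplication, neutral: Point = (0.0, 0.15, 0.35)) -> Point:
    """Return where the sign is articulated: at the supplied loc if relocated,
    otherwise at a neutral, generic placement in signing space."""
    return sign.loc if sign.loc is not None else neutral

# Note 2's situation: 'rug' performed generically, 'restaurant' relocated to a point in space.
rug = SignApplication("rug")
restaurant = SignApplication("restaurant", loc=(0.25, 0.20, 0.40))
print(placement(rug), placement(restaurant))
```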



Funding

Not applicable.

Author information


Corresponding author

Correspondence to John McDonald.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

McDonald, J., Filhol, M. Natural synthesis of productive forms from structured descriptions of sign language. Machine Translation 35, 363–386 (2021). https://doi.org/10.1007/s10590-021-09272-2

