Universal Access in the Information Society, Volume 15, Issue 4, pp 525–539

Interactive editing in French Sign Language dedicated to virtual signers: requirements and challenges

  • Sylvie Gibet
  • François Lefebvre-Albaret
  • Ludovic Hamon
  • Rémi Brun
  • Ahmed Turki
Long paper


Signing avatars are increasingly used as an interface for communication with the deaf community. In recent years, an emerging approach uses captured data to edit and generate sign language (SL) gestures. Thanks to motion editing operations (e.g., concatenation, mixing), this method makes it possible to compose new utterances, thus facilitating the enrichment of the original corpus, enhancing the natural look of the animation, and promoting the avatar's acceptability. However, designing such an editing system raises many questions. In particular, manipulating existing movements does not guarantee the semantic consistency of the reconstructed actions. A solution is to insert a human operator into the loop for constructing new utterances and to incorporate within the utterance's structure constraints derived from linguistic patterns. This article discusses the main requirements for the whole pipeline design of interactive virtual signers, including: (1) the creation of corpora; (2) the resources needed for motion recording; (3) the annotation process as the heart of the SL editing process; (4) the building, indexing, and querying of a motion database; (5) the animation of the virtual avatar by editing and composing motion segments; and (6) the design of a dedicated user interface suited to the user's knowledge and abilities. Each step is illustrated by the authors' recent work and results from the Sign3D project, an editing system for French Sign Language (LSF) content.
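To make the motion editing operations mentioned above concrete, the following is a minimal illustrative sketch (not the authors' Sign3D code) of concatenating two captured motion segments with a linear cross-fade. It assumes a motion is simply a list of frames, each frame a list of joint angles; the function name and parameters are hypothetical.

```python
# Illustrative sketch of a data-driven motion editing operation:
# concatenation of two motion segments with linear cross-fade blending.
# A "motion" is a list of frames; each frame is a list of joint angles.

def crossfade_concat(motion_a, motion_b, blend_frames=5):
    """Join motion_a and motion_b, linearly blending the last
    `blend_frames` frames of A with the first `blend_frames` of B."""
    if blend_frames == 0:
        return motion_a + motion_b
    head = motion_a[:-blend_frames]
    tail = motion_b[blend_frames:]
    blended = []
    for i in range(blend_frames):
        # Blend weight ramps from mostly-A toward mostly-B across the overlap,
        # avoiding a pose discontinuity at the junction between segments.
        w = (i + 1) / (blend_frames + 1)
        frame_a = motion_a[len(motion_a) - blend_frames + i]
        frame_b = motion_b[i]
        blended.append([(1 - w) * a + w * b
                        for a, b in zip(frame_a, frame_b)])
    return head + blended + tail

# Example: two short single-joint motions joined over a 2-frame overlap.
seg_a = [[0.0], [1.0], [2.0], [3.0]]
seg_b = [[10.0], [11.0], [12.0]]
joined = crossfade_concat(seg_a, seg_b, blend_frames=2)
```

A real system would blend joint rotations (e.g., quaternions) rather than raw angles and would time-align the segments first, but the weight-ramp idea is the same.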


Signing avatar · Interactive editing · Data-driven synthesis



The Sign3D project was funded by the French Ministry of Industry (DGCIS: Direction Générale de la Compétitivité de l’Industrie et des Services, Program “Investissements d’avenir”).



Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Sylvie Gibet (1)
  • François Lefebvre-Albaret (2)
  • Ludovic Hamon (1)
  • Rémi Brun (3)
  • Ahmed Turki (3)
  1. IRISA, Université de Bretagne Sud, Vannes, France
  2. Websourd, Toulouse, France
  3. Mocaplab, Paris, France
