Universal Access in the Information Society

Volume 6, Issue 4, pp 375–391

Linguistic modelling and language-processing technologies for Avatar-based sign language presentation

  • R. Elliott
  • J. R. W. Glauert
  • J. R. Kennaway
  • I. Marshall
  • E. Safar
Long Paper


Sign languages are the native languages of many pre-lingually deaf people and must be treated as genuine natural languages worthy of academic study in their own right. For such pre-lingually deaf people, whose familiarity with their local spoken language is that of a second-language learner, written text is much less useful than is commonly thought. This paper presents research at the University of East Anglia into sign language generation from English text, encompassing both sign language grammar development to support synthesis and visual realisation of sign language by a virtual human avatar. One strand of research in the ViSiCAST and eSIGN projects has concentrated on the real-time generation of sign language performance by a virtual human (avatar), given a phonetic-level description of the required sign sequence. A second strand has explored the generation of such a phonetic description from English text. The utility of this research is illustrated in the context of sign language synthesis by a preliminary consideration of plurality and placement within a grammar for British Sign Language (BSL). Finally, ways in which the animation generation subsystem has been used to develop signed content for public-sector Web sites are also illustrated.


Keywords: Sign language · Deaf people · British Sign Language · Discourse representation structure · Animation parameter



The authors gratefully acknowledge funding from the European Union for their work on the ViSiCAST and eSIGN projects under the European Union Framework V programme. They are also grateful for assistance from their partners in these projects, including Televirtual Ltd., who developed the VGuido avatar for the eSIGN project together with the accompanying avatar rendering software.



Copyright information

© Springer-Verlag 2007

Authors and Affiliations

  • R. Elliott (1)
  • J. R. W. Glauert (1)
  • J. R. Kennaway (1)
  • I. Marshall (1)
  • E. Safar (1)

  1. School of Computing Sciences, University of East Anglia, Norwich, UK
