
Linguistic modelling and language-processing technologies for Avatar-based sign language presentation

  • Long Paper
Universal Access in the Information Society

Abstract

Sign languages are the native languages of many pre-lingually deaf people and must be treated as genuine natural languages worthy of academic study in their own right. For such pre-lingually deaf people, whose familiarity with their local spoken language is that of a second-language learner, written text is much less useful than is commonly thought. This paper presents research at the University of East Anglia into sign language generation from English text, encompassing both sign language grammar development to support synthesis and visual realisation of sign language by a virtual human avatar. One strand of research in the ViSiCAST and eSIGN projects has concentrated on the real-time generation of sign language performance by a virtual human (avatar), given a phonetic-level description of the required sign sequence. A second strand has explored the generation of such a phonetic description from English text. The utility of this research is illustrated in the context of sign language synthesis by a preliminary consideration of plurality and placement within a grammar for British Sign Language (BSL). Finally, ways in which the animation generation subsystem has been used to develop signed content on public-sector Web sites are also illustrated.
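
As an orientation to how the two strands fit together, here is a minimal Python sketch of the pipeline, assuming invented function names and an invented XML shape; it does not reproduce the actual ViSiCAST/eSIGN interfaces.

```python
# A minimal sketch of the two research strands described above.
# Every name here is hypothetical and for illustration only.

def english_to_sign_description(text: str) -> str:
    """Strand two: map English text to a phonetic-level sign description.

    Stub: the real system parses the English input, builds a semantic
    representation, and generates a BSL sign sequence via a sign language
    grammar; the output is a HamNoSys-based description serialised as XML.
    """
    glosses = text.upper().split()  # placeholder for real translation
    body = "".join(f'  <sign gloss="{g}"/>\n' for g in glosses)
    return f"<signing>\n{body}</signing>"

def render_with_avatar(description: str) -> None:
    """Strand one: animate a virtual human in real time from the
    phonetic-level description (stub: prints instead of rendering)."""
    print("Animating avatar from:\n" + description)

if __name__ == "__main__":
    render_with_avatar(english_to_sign_description("the cat sleeps"))
```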


Notes

  1. An alternative form of the same problem occurs in the pronominal system, where BSL distinguishes ‘we-inclusive of the hearer’ from ‘we-exclusive of the hearer’. For a translation of an English sentence containing ‘we’, the additional information must be inferred or volunteered by human intervention (see the first sketch following these notes).

  2. On a technical note, the formulation of TAKE and PUT presented here originates from two different formulations for directional verbs. The description of the motion for TAKE is characterised by a HamNoSys ‘replacement’ of the location of the sign. Usually this form of replacement is used for describing the change in handshape within a sign, rather than the location at which it is signed. Change of position is usually denoted by HamNoSys motion primitives aimed at a targeted location (destination), as with PUT here. The preliminary HPSG support to achieve this is now in place, but older lexical items need updating, and testing to ensure this generalises to all forms of one- and two-handed motion is still to be undertaken. Essentially, both formulations look to achieve the same effect: the former is simpler but exploits ‘undocumented’ features of HamNoSys; the latter is more complicated but more in the original spirit of HamNoSys (the second sketch following these notes contrasts the two schematically).

  3. (On computers running a Windows operating system, at any rate.)

  4. It should be emphasised that each entry in an eSIGN lexicon simply contains the phonetic information for a single fixed sign; an eSIGN lexicon is thus to be sharply distinguished from the HPSG lexicon described in Sect. 3, each of whose entries contains a much richer range of grammatical data about the sign language feature it describes (see the third sketch following these notes).

  5. At the time of writing, 2006-07.

  6. http://www.vcom3d.com

  7. Virtual Reality Modelling Language, the open standard (now succeeded by X3D) for describing 3D interactive animated worlds. See http://www.web3d.org
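
The following sketches illustrate notes 1, 2 and 4 above; all identifiers are invented for exposition and correspond neither to project code nor to any real file format. First, the pronoun ambiguity of note 1: the inclusive/exclusive distinction must come from somewhere other than the English source.

```python
from typing import Optional

# Note 1, schematically: English "we" underdetermines the BSL pronoun, so
# the choice must be inferred from context or supplied by a human editor.
BSL_WE = {True: "WE-INCLUSIVE", False: "WE-EXCLUSIVE"}

def translate_we(includes_hearer: Optional[bool]) -> str:
    """Choose a BSL pronoun for English 'we'; None means 'not inferable'."""
    if includes_hearer is None:
        # The English source carries no such information: defer to a human.
        raise ValueError("inclusive/exclusive unknown; human intervention needed")
    return BSL_WE[includes_hearer]
```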
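
Second, the two formulations for directional verbs contrasted in note 2, rendered as schematic dictionaries (real lexical entries use HamNoSys notation, not this structure):

```python
# TAKE: the end point is encoded by 'replacing' the sign's location, a
# mechanism HamNoSys documents for handshape change rather than position.
TAKE = {
    "gloss": "TAKE",
    "location": "source_locus",
    "replace": {"location": "target_locus"},  # undocumented use of replacement
}

# PUT: the end point is encoded with a motion primitive aimed at a target
# location, the documented HamNoSys way of expressing a change of position.
PUT = {
    "gloss": "PUT",
    "location": "source_locus",
    "motion": {"type": "directed", "destination": "target_locus"},
}
```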
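
Third, the contrast drawn in note 4 between an eSIGN lexicon entry (phonetic form only) and an HPSG lexicon entry (rich grammatical data); the field names are invented and mirror neither format:

```python
# eSIGN lexicon entry: just the phonetic information for one fixed sign.
ESIGN_ENTRY = {
    "gloss": "HOUSE",
    "hamnosys": "<phonetic transcription>",
}

# HPSG lexicon entry (Sect. 3): the same sign plus grammatical data that
# the generation grammar can exploit, e.g. for plurality and placement.
HPSG_ENTRY = {
    "gloss": "HOUSE",
    "phonology": "<phonetic transcription>",
    "category": "noun",
    "plural": "reduplication",         # e.g. plural formed by repeating the sign
    "placement": {"locatable": True},  # can be positioned in sign space
}
```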


Acknowledgments

The authors acknowledge with gratitude funding from the European Union for their work on the ViSiCAST and eSIGN projects under the European Union Framework V programme. We are also grateful for assistance from our partners in these projects, including Televirtual Ltd., who developed the VGuido avatar for the eSIGN project together with the accompanying avatar rendering software.

Author information

Correspondence to J. R. W. Glauert.


Cite this article

Elliott, R., Glauert, J.R.W., Kennaway, J.R. et al. Linguistic modelling and language-processing technologies for Avatar-based sign language presentation. Univ Access Inf Soc 6, 375–391 (2008). https://doi.org/10.1007/s10209-007-0102-z

