
Chinese Sign Language animation generation considering context

Published in: Multimedia Tools and Applications

Abstract

Sign language (SL) is a natural language used by the deaf. Chinese Sign Language (CSL) synthesis aims to translate text into virtual-human animation, making information and services accessible to deaf users. Key-frame-based sign language animation is generally realized by concatenating sign words that were captured independently, so a sign word appears with the same pattern in every context, which differs from realistic sign language expression. This paper studies the effect of context on manual and non-manual gestures, and presents a method for generating stylized manual and non-manual gestures according to context. Experimental results show that sign language animation synthesized with the proposed context-aware method is more accurate and intelligible than animation generated without considering context.
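To illustrate the baseline the abstract contrasts with context-aware generation, here is a minimal, hypothetical Python sketch. It joins per-word key-frame clips onto a common timeline, with a single `context_scale` parameter standing in for a context-dependent style adjustment (e.g. signing speed). All names, types, and the simple time-scaling model are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's method): concatenating per-word
# key-frame clips, with a context-dependent scale applied to timing.

from dataclasses import dataclass
from typing import List


@dataclass
class KeyFrame:
    time: float          # seconds within the word clip
    pose: List[float]    # joint-angle vector for the avatar


def concatenate(words: List[List[KeyFrame]],
                context_scale: float = 1.0) -> List[KeyFrame]:
    """Join word clips end-to-end on one timeline.

    `context_scale` > 1 plays each word faster; it stands in for a
    context-dependent style parameter such as speed or amplitude.
    """
    timeline: List[KeyFrame] = []
    offset = 0.0
    for clip in words:
        for kf in clip:
            # Shift each key frame onto the global timeline.
            timeline.append(KeyFrame(offset + kf.time / context_scale,
                                     kf.pose))
        # Next word starts where this one ended.
        offset = timeline[-1].time
    return timeline
```

With `context_scale = 1.0` this is plain concatenation, where every word keeps its captured pattern regardless of its neighbours; the paper's contribution is, in effect, to make such style parameters depend on the surrounding context.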




Acknowledgements

This research is supported by NSFC (Nos. U0935004 and 61170104) and the Beijing Municipal Natural Science Foundation (4112008). The authors thank the Beijing 3rd School for the Deaf, which provided great help with Chinese Sign Language data collection and advice.

Author information

Corresponding author: Lichun Wang.


About this article

Cite this article

Li, J., Yin, B., Wang, L. et al. Chinese Sign Language animation generation considering context. Multimed Tools Appl 71, 469–483 (2014). https://doi.org/10.1007/s11042-013-1541-6
