Why Is the Creation of a Virtual Signer Challenging Computer Animation?

  • Nicolas Courty
  • Sylvie Gibet
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6459)

Abstract

Virtual signers that communicate in signed languages are a promising tool for interacting with deaf people and improving their access to services and information. In this paper we discuss important factors in the design of virtual signers with respect to the underlying animation problems. In particular, we show that some aspects of signed languages remain challenging for state-of-the-art animation methods, and we present possible future research directions that could also benefit the animation of virtual characters more broadly.

Keywords

Sign Language, Motion Capture, American Sign Language, Computer Animation, Deaf People

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Nicolas Courty (1)
  • Sylvie Gibet (1)
  1. Laboratoire VALORIA, Université de Bretagne Sud, Vannes, France
