The Visual Computer, Volume 26, Issue 5, pp 339–352

Generating animation from natural language texts and semantic analysis for motion search and scheduling

Original Article

Abstract

This paper presents an animation system that generates animation from natural language texts such as movie scripts or stories. It also proposes a framework for a motion database that stores numerous motion clips for various characters. We have developed semantic analysis methods that extract the information needed for motion search and scheduling from script-like input texts. Given an input text, the system searches the database for an appropriate motion clip for each verb in the text. Temporal constraints between verbs are also extracted from the input text and used to schedule the retrieved motion clips. In addition, when necessary, the system searches the database for automatic motions such as locomotion, picking up an instrument, changing posture, and cooperative motions. An animation is then generated using an external motion synthesis system. Our system lets users make use of existing motion clips, and because it takes natural language text as input, even novice users can operate it.
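The abstract's pipeline (one motion clip per verb, ordered by temporal constraints between verbs) can be illustrated with a minimal sketch. All names here (`MOTION_DB`, `schedule_motions`, the `"while"` constraint keyword, the clip file names) are hypothetical illustrations, not the paper's actual data structures or API:

```python
# Hypothetical sketch: look up a motion clip per verb in a toy database,
# then assign start times -- sequential by default, overlapping when a
# "while" constraint ties a verb to the previous one.

MOTION_DB = {
    "walk": {"clip": "walk_cycle.bvh", "duration": 2.0},
    "sit":  {"clip": "sit_down.bvh",   "duration": 1.5},
    "wave": {"clip": "wave_hand.bvh",  "duration": 1.0},
}

def find_motion(verb):
    """Return the database entry for a verb, or None if no clip matches."""
    return MOTION_DB.get(verb)

def schedule_motions(verbs, constraints):
    """Build a timeline of (verb, start, end) tuples.

    constraints maps a verb index to "while", meaning that verb starts
    together with the previous clip instead of after it.
    """
    timeline = []
    t = 0.0
    for i, verb in enumerate(verbs):
        entry = find_motion(verb)
        if entry is None:
            continue  # no clip found; a real system would fall back or report
        if constraints.get(i) == "while" and timeline:
            start = timeline[-1][1]  # overlap: share the previous start time
        else:
            start = t                # sequence: begin after everything so far
        end = start + entry["duration"]
        timeline.append((verb, start, end))
        t = max(t, end)
    return timeline

# "He walks, sits down, and waves while sitting."
timeline = schedule_motions(["walk", "sit", "wave"], {2: "while"})
```

The real system additionally inserts automatic motions (e.g. locomotion to reach a target) between scheduled clips; that step is omitted here for brevity.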

Keywords

Computer animation · Motion database · Natural language processing



Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  1. Kyushu Institute of Technology, Iizuka, Japan
