Abstract

We introduce an anime blog system that enables users to create blogs containing animation by searching for and selecting animations or images from a database using simple words. The system collects animation and image data through consumer-generated databases, in the manner of Web 2.0. If users cannot find appropriate data, they can easily upload new data that they have created. Our animation database, Animebase, correlates natural language with three-dimensional animation data. When an animation is uploaded, the system applies the motion data of its model to other models and generates new animations, which are then stored in Animebase. Our basic concept is that animation data corresponding to natural language is useful for enabling novice users to create content. We discuss the difficulties of this approach to collecting animations and outline future work.
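To make the described workflow concrete, the following is a minimal sketch of the idea of a keyword-indexed animation database with cross-model reuse of uploaded motion data. All names here (AnimationEntry, Animebase, search, upload) are illustrative assumptions for exposition only, not the system's actual interface or the paper's implementation.

```python
# Hypothetical sketch: keywords map to stored animations, and an uploaded
# motion is reapplied to the other known models so the database grows.
from dataclasses import dataclass, field


@dataclass
class AnimationEntry:
    keyword: str          # natural-language label, e.g. "jump"
    model: str            # character model the motion was authored for
    motion_frames: list   # simplified stand-in for 3D motion data


@dataclass
class Animebase:
    entries: list = field(default_factory=list)
    models: set = field(default_factory=set)

    def search(self, word: str) -> list:
        """Return all animations whose keyword matches the query word."""
        return [e for e in self.entries if e.keyword == word]

    def upload(self, entry: AnimationEntry) -> None:
        """Store a user-contributed animation, then reapply its motion data
        to every other known model (a stand-in for motion retargeting)."""
        self.entries.append(entry)
        for other_model in self.models - {entry.model}:
            self.entries.append(
                AnimationEntry(entry.keyword, other_model, entry.motion_frames)
            )
        self.models.add(entry.model)


# Example: uploading a "jump" motion for one model makes it searchable by the
# word "jump" for every registered model.
db = Animebase(models={"robot", "girl"})
db.upload(AnimationEntry("jump", "robot", [{"t": 0}, {"t": 1}]))
print([(e.keyword, e.model) for e in db.search("jump")])
```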



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Kaoru Sumi, National Institute of Information and Communications Technology, 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0289, Japan
