Enriching Agent Animations with Gestures and Highlighting Effects

  • Yukiko I. Nakano
  • Masashi Okamoto
  • Toyoaki Nishida
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3490)


Character agents have become increasingly popular on the Internet because of their attractiveness. This paper proposes an agent animation generation system that automatically selects agent behaviours as well as highlighting animations that emphasise the agent's actions. To produce animations appropriate to the content of the agent's spoken message, our system first analyses the message text with a natural language processing engine, and then selects animations based on the linguistic information computed by the engine.
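The abstract describes a two-stage pipeline: linguistic analysis of the spoken message, followed by rule-based selection of gestures and highlighting effects. A minimal sketch of that idea is shown below; the cue names, keyword lists, and animation labels are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# (1) analyse the spoken message for simple linguistic cues, then
# (2) map those cues to agent gestures and highlighting animations.

def analyse(message: str) -> dict:
    """Stand-in for the NLP engine: extracts crude linguistic cues."""
    words = message.lower().split()
    return {
        # deictic expressions often motivate pointing gestures (cf. McNeill)
        "has_deictic": any(w in {"this", "that", "here", "there"} for w in words),
        "is_emphatic": message.endswith("!"),
    }

def select_animations(message: str) -> list[str]:
    """Map linguistic cues to a list of animation labels."""
    cues = analyse(message)
    animations = []
    if cues["has_deictic"]:
        animations.append("pointing_gesture")   # deictic expression -> pointing
    if cues["is_emphatic"]:
        animations.append("highlight_flash")    # emphasis -> highlighting effect
    if not animations:
        animations.append("beat_gesture")       # default rhythmic gesture
    return animations

print(select_animations("Look at this button!"))
# prints ['pointing_gesture', 'highlight_flash']
```

A real system would replace `analyse` with a full syntactic parser (e.g. the Kurohashi–Nagao analyser for Japanese) and drive the animations from richer information such as phrase structure and information status.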


Keywords: Agent Behaviour, Linguistic Information, Syntactic Information, Conversational Agent, Nominal Phrase





Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Yukiko I. Nakano (1)
  • Masashi Okamoto (2)
  • Toyoaki Nishida (3)

  1. Japan Science and Technology Agency (JST), Tokyo, Japan
  2. Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
  3. Graduate School of Informatics, Kyoto University, Kyoto, Japan
