Bridging the Gap between Language and Action

  • Tokunaga Takenobu
  • Koyama Tomofumi
  • Saito Suguru
  • Okumura Manabu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2792)

Abstract

When communicating with animated agents in a virtual space through natural language dialogue, the vagueness of language must be dealt with. To handle this vagueness, in particular the vagueness of spatial relations, this paper proposes a new representation of locations. The representation is designed to have a bilateral character, both symbolic and numeric, in order to bridge the gap between the symbolic system (language processing) and the continuous system (animation generation). The effectiveness of the proposed representation is evaluated through the implementation of a prototype system.
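The representation itself is not detailed on this page, but the idea of a location that is at once a symbol (for the language side) and a scoring function over continuous coordinates (for the animation side) can be illustrated with a minimal sketch. The names below (SymbolicLocation, BilateralLocation, front_of) and the particular scoring formula are hypothetical illustrations, not the paper's actual implementation.

```python
import math
from dataclasses import dataclass
from typing import Callable, Tuple

Point = Tuple[float, float]  # ground-plane coordinates (x, z)

@dataclass
class SymbolicLocation:
    """Symbolic side: the spatial relation as the language module sees it."""
    relation: str          # e.g. "front_of", "left_of", "near"
    reference_object: str  # landmark named in the utterance, e.g. "the desk"

@dataclass
class BilateralLocation:
    """A location with a bilateral character: a symbolic relation paired with
    a numeric scoring function over continuous coordinates, so the language
    side can reason over symbols while the animation side can choose a
    concrete point to move the agent to."""
    symbolic: SymbolicLocation
    score: Callable[[Point], float]  # higher = better fit to the relation

def front_of(landmark: Point, facing: Point) -> Callable[[Point], float]:
    """Toy scoring function for 'in front of': rewards points lying along the
    landmark's facing direction and mildly penalises distance."""
    norm = math.hypot(*facing) or 1.0
    fx, fz = facing[0] / norm, facing[1] / norm
    def score(p: Point) -> float:
        dx, dz = p[0] - landmark[0], p[1] - landmark[1]
        dist = math.hypot(dx, dz)
        alignment = (dx * fx + dz * fz) / (dist or 1.0)
        return alignment - 0.1 * dist
    return score

# The language module would build the symbolic half from the parsed
# instruction; the animation module samples candidate points and takes the
# best-scoring one as the concrete goal position.
loc = BilateralLocation(
    symbolic=SymbolicLocation("front_of", "the desk"),
    score=front_of(landmark=(0.0, 0.0), facing=(0.0, 1.0)),
)
candidates = [(0.0, 1.0), (1.0, 0.0), (0.0, -1.0)]
goal = max(candidates, key=loc.score)  # -> (0.0, 1.0), directly in front
```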

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Tokunaga Takenobu (1)
  • Koyama Tomofumi (1)
  • Saito Suguru (2)
  • Okumura Manabu (2)
  1. Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan
  2. Precision and Intelligence Laboratory, Tokyo Institute of Technology, Japan