
Mutual Learning of Mind Reading between a Human and a Life-Like Agent

  • Seiji Yamada
  • Tomohiro Yamaguchi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2413)

Abstract

This paper describes a human-agent interaction in which a user and a life-like agent mutually acquire each other's mind mapping through a mutual mind reading game. In recent years, many studies have addressed life-like agents such as Microsoft Agent and interface agents. Through the development of various life-like agents, mind states such as emotion and processing load have been recognized to play an important role in making agents believable to a user. To establish effective and natural communication between an agent and a user, each needs to read the other's mind from expressions; we call the mapping from expressions to mind states a mind mapping. If an agent and a user do not acquire these mind mappings, they cannot use behaviors that significantly depend on the other's mind. We formalize such mutual mind reading and propose a framework in which a user and a life-like agent acquire each other's mind mappings. In our framework, a user plays a mutual mind reading game with an agent, and the two gradually learn to read each other's mind through the game. Finally, we fully implement the framework and conduct experiments to investigate its effectiveness.
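The abstract does not give implementation details, but the core idea of a mind mapping can be sketched as a classifier learned from labeled examples: expression feature vectors mapped to discrete mind states, accumulated as the game is played. Below is a minimal, hypothetical Python sketch using 1-nearest-neighbor classification; the class and method names, the feature encoding, and the choice of learning method are illustrative assumptions, not the authors' actual system.

```python
import numpy as np

# Hypothetical sketch: a "mind mapping" as a 1-nearest-neighbor classifier
# from expression feature vectors to discrete mind-state labels.

class MindMapping:
    """Maps observed expressions (feature vectors) to mind-state labels."""

    def __init__(self):
        self.examples = []  # list of (feature_vector, mind_state) pairs

    def observe(self, expression, mind_state):
        """Store one labeled example, e.g. revealed during a game round."""
        self.examples.append((np.asarray(expression, dtype=float), mind_state))

    def read_mind(self, expression):
        """Guess the mind state behind a new expression (1-NN lookup)."""
        if not self.examples:
            return None
        x = np.asarray(expression, dtype=float)
        distances = [np.linalg.norm(x - ex) for ex, _ in self.examples]
        return self.examples[int(np.argmin(distances))][1]

# One round of a mutual mind reading game (illustrative data):
agent_model = MindMapping()
agent_model.observe([0.9, 0.1], "happy")  # user smiled while feeling happy
agent_model.observe([0.1, 0.8], "sad")    # user frowned while feeling sad
print(agent_model.read_mind([0.8, 0.2]))  # agent's guess: "happy"
```

In this sketch the agent's model of the user improves simply by storing more labeled rounds; symmetrically, the user forms an internal model of the agent's expressions, which is the "mutual" part of the learning the paper formalizes.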



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Seiji Yamada (1)
  • Tomohiro Yamaguchi (2)
  1. National Institute of Informatics, Tokyo, Japan
  2. Nara National College of Technology, Nara, Japan
