Learning Models of Other Agents Using Influence Diagrams

  • Conference paper
UM99 User Modeling

Part of the book series: CISM International Centre for Mechanical Sciences (CISM, volume 407)

Abstract

We adopt decision theory as a descriptive paradigm for modeling rational agents, and we use influence diagrams as the representation of an agent's model, which is used to interact with that agent and to predict its behavior. We provide a framework that an agent can use to learn models of other agents in a multi-agent system (MAS) based on their observed behavior. Since the correct model is usually not known with certainty, our agents maintain a number of possible models and assign to each a probability of being correct. When none of the available models is likely to be correct, we modify one of them to better account for the observed behavior. The modification refines the parameters of the influence diagram used to model the other agent's capabilities, preferences, or beliefs. The modified model then competes with the other models, and the probability of its being correct is assigned based on how well it predicts the other agent's observed behavior.
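The abstract describes a Bayesian scheme: several candidate models of the other agent are kept, each weighted by how well it predicts that agent's observed actions, and a model is modified only when none of the candidates explains the behavior well. The Python sketch below illustrates the re-weighting step under simplifying assumptions; `CandidateModel`, `predict`, and `update_model_probabilities` are hypothetical names introduced here, and in the paper each model would be an influence diagram rather than an arbitrary prediction function.

```python
# Minimal sketch (not the authors' implementation) of re-weighting candidate
# models of another agent by how well each predicts an observed action.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class CandidateModel:
    """Hypothetical stand-in for an influence-diagram model of another agent."""
    name: str
    # Maps a situation to a distribution P(action | model); assumed interface.
    predict: Callable[[str], Dict[str, float]]


def update_model_probabilities(
    models: List[CandidateModel],
    priors: List[float],
    situation: str,
    observed_action: str,
) -> List[float]:
    """Bayes rule: P(model | action) is proportional to P(action | model) * P(model)."""
    likelihoods = [m.predict(situation).get(observed_action, 0.0) for m in models]
    unnormalized = [lk * p for lk, p in zip(likelihoods, priors)]
    total = sum(unnormalized)
    if total == 0.0:
        # No candidate explains the observation; this is where the paper's
        # model-modification step (refining the influence diagram's parameters)
        # would be triggered. Here we simply keep the priors.
        return list(priors)
    return [w / total for w in unnormalized]
```

A model that consistently assigns high probability to the actions actually observed accumulates weight over repeated updates, which mirrors how the modified model in the paper is allowed to compete with the existing candidates.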

Copyright information

© 1999 Springer Science+Business Media New York

About this paper

Cite this paper

Suryadi, D., Gmytrasiewicz, P.J. (1999). Learning Models of Other Agents Using Influence Diagrams. In: Kay, J. (eds) UM99 User Modeling. CISM International Centre for Mechanical Sciences, vol 407. Springer, Vienna. https://doi.org/10.1007/978-3-7091-2490-1_22

  • DOI: https://doi.org/10.1007/978-3-7091-2490-1_22

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83151-9

  • Online ISBN: 978-3-7091-2490-1

  • eBook Packages: Springer Book Archive
