Abstract
This research addresses rational decision making and coordination among antiair units whose mission is to defend a specified territory from a number of attacking missiles. The automated units have to decide which missiles to attempt to intercept, given the characteristics of the threat and the other units' anticipated actions, so as to minimize the expected overall damage to the defended territory. An automated defense unit therefore needs to model the other agents, whether human or automated, that control the other defense batteries. For the purpose of this case study, we assume that the units cannot communicate among themselves, say, due to an imposed radio silence. We use the Recursive Modeling Method (RMM), which enables an agent to select its rational action by examining the expected utility of its alternative behaviors, and to coordinate with other agents by modeling their decision making in a distributed multiagent environment. We describe how decision making using RMM is applied to the antiair defense domain and present experimental results comparing the performance of coordinating teams consisting of RMM agents, human agents, and mixed RMM and human teams.
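To illustrate the flavor of the reasoning described above, the following is a minimal sketch of one level of recursive modeling for target selection. All numbers (damage values, intercept probabilities) and the level-0 model of the other battery are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: battery A picks a missile to intercept by best-responding
# to a model of battery B's choice, minimizing expected territory damage.
# All probabilities and damage values below are invented for illustration.

# Damage each missile causes if it is NOT intercepted (assumed values).
DAMAGE = {"m1": 8.0, "m2": 5.0}

# Probability that a given battery intercepts a given missile (assumed).
HIT_PROB = {
    ("A", "m1"): 0.9, ("A", "m2"): 0.6,
    ("B", "m1"): 0.5, ("B", "m2"): 0.8,
}

def expected_damage(choice_a, choice_b):
    """Expected total damage when A shoots at choice_a and B at choice_b."""
    total = 0.0
    for missile, dmg in DAMAGE.items():
        p_leak = 1.0  # probability the missile gets through
        if choice_a == missile:
            p_leak *= 1.0 - HIT_PROB[("A", missile)]
        if choice_b == missile:
            p_leak *= 1.0 - HIT_PROB[("B", missile)]
        total += p_leak * dmg
    return total

# Level 0: A models B as targeting the missile whose residual expected
# damage (if only B shoots) is smallest -- a simple stand-in model.
b_level0 = min(DAMAGE, key=lambda m: (1.0 - HIT_PROB[("B", m)]) * DAMAGE[m])

# Level 1: A best-responds to its model of B, without any communication.
a_choice = min(DAMAGE, key=lambda t: expected_damage(t, b_level0))
```

With these assumed numbers, B is modeled as covering m2 (its strongest intercept), so A rationally covers m1, and the two batteries coordinate without exchanging any messages. Deeper nesting would have A model B modeling A, and so on, as RMM prescribes.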
This research has been sponsored by the Office of Naval Research Artificial Intelligence Program under contract N00014-95-1-0775, and by a research initiation grant from the CSE Department of the University of Texas at Arlington.
© 1997 Springer-Verlag Wien
Cite this paper
Noh, S., Gmytrasiewicz, P.J. (1997). Agent Modeling in Antiair Defense. In: Jameson, A., Paris, C., Tasso, C. (eds) User Modeling. International Centre for Mechanical Sciences, vol 383. Springer, Vienna. https://doi.org/10.1007/978-3-7091-2670-7_39
Publisher Name: Springer, Vienna
Print ISBN: 978-3-211-82906-6
Online ISBN: 978-3-7091-2670-7
eBook Packages: Springer Book Archive