Learn your opponent's strategy (in polynomial time)!

  • Yishay Mor
  • Claudia V. Goldman
  • Jeffrey S. Rosenschein
Workshop Contributions

DOI: 10.1007/3-540-60923-7_26

Part of the Lecture Notes in Computer Science book series (LNCS, volume 1042)
Cite this paper as:
Mor Y., Goldman C.V., Rosenschein J.S. (1996) Learn your opponent's strategy (in polynomial time)!. In: Weiß G., Sen S. (eds) Adaption and Learning in Multi-Agent Systems. IJCAI 1995. Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence), vol 1042. Springer, Berlin, Heidelberg

Abstract

Agents that interact in a distributed environment might increase their utility by behaving optimally given the strategies of the other agents. To do so, agents need to learn about the other agents with whom they share the world.

This paper examines interactions among agents from a game-theoretic perspective. In this context, learning has been viewed as a means of reaching equilibrium. We analyze the complexity of this learning process. We start with a restricted two-agent model, in which agents are represented by finite automata and one of the agents plays a fixed strategy. We show that even with these restrictions, the learning process may take exponential time.

We then suggest a simplicity criterion that induces a class of automata learnable in polynomial time.
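
To make the setting concrete, the Python sketch below is a hypothetical illustration, not the paper's polynomial-time algorithm: the fixed-strategy opponent is modeled as a finite automaton (here an assumed Tit-for-Tat machine in a repeated Prisoner's Dilemma), and the learner simply records the opponent transitions it observes along the play. All names (play, learn_transitions, TIT_FOR_TAT) are introduced here for illustration only.

    # Illustrative sketch only; not the authors' learning algorithm.
    C, D = "C", "D"  # cooperate / defect

    # Fixed opponent: a Moore machine. Each state outputs a move and
    # transitions on the learner's last move.
    TIT_FOR_TAT = {
        "start":  {"output": C, "next": {C: "coop", D: "punish"}},
        "coop":   {"output": C, "next": {C: "coop", D: "punish"}},
        "punish": {"output": D, "next": {C: "coop", D: "punish"}},
    }

    def play(opponent, learner_moves):
        """Run the repeated game; return the opponent's observed moves."""
        state, observed = "start", []
        for move in learner_moves:
            observed.append(opponent[state]["output"])
            state = opponent[state]["next"][move]
        return observed

    def learn_transitions(learner_moves, observed):
        """Record (opponent's last move, learner's last move) -> opponent's
        next move. This captures behavior only along the explored histories;
        identifying the full automaton in general is the hard problem the
        paper analyzes."""
        model = {}
        for t in range(1, len(observed)):
            model[(observed[t - 1], learner_moves[t - 1])] = observed[t]
        return model

    if __name__ == "__main__":
        probes = [C, C, D, C, D, D, C]   # an exploratory move sequence
        seen = play(TIT_FOR_TAT, probes)
        print("opponent played:", seen)
        print("learned transitions:", learn_transitions(probes, seen))

Running the sketch shows the learner recovering the Tit-for-Tat behavior along the probed histories; the paper's contribution concerns when such a model can be identified for a whole class of automata in polynomial time.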

Keywords

Distributed Artificial Intelligence · Learning · repeated games · automata

Copyright information

© Springer-Verlag 1996

Authors and Affiliations

  • Yishay Mor (1)
  • Claudia V. Goldman (1)
  • Jeffrey S. Rosenschein (1)

  1. Computer Science Department, Hebrew University, Jerusalem, Israel
