
Competition and Coordination in Stochastic Games

  • Andriy Burkov
  • Abdeslam Boularias
  • Brahim Chaib-draa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4509)

Abstract

Agent competition and coordination are two classical and important tasks in multiagent systems. In recent years, a number of learning algorithms have been proposed to solve such problems. Among them is an important class of algorithms, called adaptive learning algorithms, which have been shown to converge in self-play to a solution in a wide variety of repeated matrix games. Although certain algorithms of this class, such as Infinitesimal Gradient Ascent (IGA), Policy Hill-Climbing (PHC) and Adaptive Play Q-learning (APQ), have been extensively studied in the recent literature, the question of how these algorithms perform against each other in general-form stochastic games remains little studied. In this work we attempt to answer this question. To that end, we analyse these algorithms in detail and give a comparative analysis of their behavior on a set of competition and coordination stochastic games. We also introduce a new multiagent learning algorithm, called ModIGA, an extension of the IGA algorithm that is able to estimate the strategy of its opponents when they do not explicitly play mixed strategies (e.g., APQ) and that can be applied to games with more than two actions.
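To make the family of algorithms named above concrete, the sketch below implements a simple gradient-ascent learner for 2x2 matrix games in the spirit of IGA [4], with the opponent's mixed strategy estimated from observed action frequencies rather than assumed to be directly observable (the situation the abstract says ModIGA addresses). This is an illustrative sketch only: the class name, step size, and the empirical-estimation scheme are assumptions, not the authors' ModIGA.

```python
import numpy as np

# Illustrative sketch (not the authors' ModIGA): a gradient-ascent learner
# in the spirit of IGA for 2x2 matrix games.  The opponent's mixed strategy
# is estimated from observed action frequencies, since an adaptive opponent
# (e.g., APQ) never announces an explicit mixed strategy.

class GradientLearner:
    def __init__(self, payoffs, step_size=0.01):
        self.R = np.asarray(payoffs, dtype=float)  # own payoffs: rows = own action, cols = opponent action
        self.eta = step_size                       # gradient step size (assumed constant)
        self.alpha = 0.5                           # own probability of playing action 0
        self.opp_counts = np.ones(2)               # Laplace-smoothed opponent action counts

    def act(self, rng):
        return 0 if rng.random() < self.alpha else 1

    def observe(self, opp_action):
        # Empirical estimate of the opponent's probability of action 0.
        self.opp_counts[opp_action] += 1
        beta = self.opp_counts[0] / self.opp_counts.sum()
        # Exact gradient with respect to alpha of the expected payoff
        # V(alpha, beta) = [alpha, 1-alpha] @ R @ [beta, 1-beta]^T.
        r = self.R
        grad = beta * (r[0, 0] - r[1, 0]) + (1 - beta) * (r[0, 1] - r[1, 1])
        # Projected ascent step: keep alpha a valid probability.
        self.alpha = float(np.clip(self.alpha + self.eta * grad, 0.0, 1.0))

# Usage: self-play in Matching Pennies, a purely competitive matrix game.
rng = np.random.default_rng(0)
row = GradientLearner([[1, -1], [-1, 1]])    # row player wins on a match
col = GradientLearner([[-1, 1], [1, -1]])    # column player wins on a mismatch
for _ in range(20000):
    a, b = row.act(rng), col.act(rng)
    row.observe(b)
    col.observe(a)
print(row.alpha, col.alpha)  # both should drift toward the 0.5/0.5 mixed equilibrium
```

Because the gradient here is taken against an averaged opponent model, the update behaves roughly like a smoothed form of fictitious play; plain IGA with directly observed strategies is known to cycle in games such as Matching Pennies, which is one motivation for variable-learning-rate variants such as WoLF [2].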

Keywords

Multiagent System · Mixed Strategy · Stochastic Game · Coordination Game · Matrix Game

References

  1. Littman, M.: Markov games as a framework for multi-agent reinforcement learning. In: Proceedings of the Eleventh International Conference on Machine Learning (ICML'94), New Brunswick, NJ. Morgan Kaufmann, San Francisco (1994)
  2. Bowling, M., Veloso, M.: Multiagent learning using a variable learning rate. Artificial Intelligence 136(2), 215–250 (2002)
  3. Gies, O., Chaib-draa, B.: Apprentissage de la coordination multiagent: une méthode basée sur le Q-learning par jeu adaptatif [Learning multiagent coordination: a method based on adaptive-play Q-learning]. Revue d'Intelligence Artificielle 20(2-3), 385–412 (2006)
  4. Singh, S., Kearns, M., Mansour, Y.: Nash convergence of gradient dynamics in general-sum games. In: Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI'00). Morgan Kaufmann, San Francisco (2000)
  5. Claus, C., Boutilier, C.: The dynamics of reinforcement learning in cooperative multiagent systems. In: Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI'98). AAAI Press, Menlo Park (1998)
  6. Hu, J., Wellman, M.: Multiagent reinforcement learning: theoretical framework and an algorithm. In: Proceedings of the Fifteenth International Conference on Machine Learning (ICML'98). Morgan Kaufmann, San Francisco (1998)
  7. Hu, J., Wellman, M.: Nash Q-learning for general-sum stochastic games. Journal of Machine Learning Research 4, 1039–1069 (2003)
  8. Littman, M.: Friend-or-foe Q-learning in general-sum games. In: Proceedings of the Eighteenth International Conference on Machine Learning (ICML'01). Morgan Kaufmann, San Francisco (2001)
  9. Chang, Y., Kaelbling, L.: Playing is believing: the role of beliefs in multi-agent learning. In: Advances in Neural Information Processing Systems (NIPS'01), Canada (2001)
  10. Tesauro, G.: Extending Q-learning to general adaptive multi-agent systems. In: Thrun, S., Saul, L., Schölkopf, B. (eds.) Advances in Neural Information Processing Systems, vol. 16. MIT Press, Cambridge (2004)
  11. Burkov, A., Chaib-draa, B.: Effective learning in adaptive dynamic systems. In: Proceedings of the AAAI Spring Symposium on Decision Theoretic and Game Theoretic Agents (GTDT'07), Stanford, California (2007, to appear)
  12. Young, H.: The evolution of conventions. Econometrica 61(1), 57–84 (1993)
  13. Watkins, C., Dayan, P.: Q-learning. Machine Learning 8(3), 279–292 (1992)
  14. Powers, R., Shoham, Y.: New criteria and a new algorithm for learning in multi-agent systems. In: Saul, L.K., Weiss, Y., Bottou, L. (eds.) Advances in Neural Information Processing Systems, vol. 17. MIT Press, Cambridge (2005)
  15. Powers, R., Shoham, Y.: Learning against opponents with bounded memory. In: Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI'05) (2005)

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Andriy Burkov (1)
  • Abdeslam Boularias (1)
  • Brahim Chaib-draa (1)

  1. DAMAS Group, Dept. of Computer Science, Laval University, Quebec G1K 7P4, Canada
