
Unifying Convergence and No-Regret in Multiagent Learning

  • Bikramjit Banerjee
  • Jing Peng
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3898)

Abstract

We present a new multiagent learning algorithm, RV σ(t), that builds on an earlier version, ReDVaLeR. ReDVaLeR could guarantee (a) convergence to best response against stationary opponents and either (b) constant bounded regret against arbitrary opponents, or (c) convergence to Nash equilibrium policies in self-play. However, it makes two strong assumptions: (1) that it can distinguish between self-play and otherwise non-stationary agents, and (2) that all agents know their portions of the same equilibrium in self-play. We show that the adaptive, explicitly time-dependent learning rate of RV σ(t) can overcome both of these assumptions. Consequently, RV σ(t) theoretically achieves (a’) convergence to near-best response against eventually stationary opponents, (b’) no-regret payoff against arbitrary opponents, and (c’) convergence to some Nash equilibrium policy in some classes of games, in self-play. Each agent now needs to know only its own portion of any equilibrium, and does not need to distinguish among non-stationary opponent types. This is also, to our knowledge, the first successful attempt at convergence of a no-regret algorithm in the Shapley game.
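The following is a minimal, hypothetical sketch intended only to make the notions of no-regret play, a time-dependent learning rate, and the Shapley game concrete; it is not the authors' RV σ(t) algorithm. Two Hedge-style learners with a decaying learning rate η(t) = 1/√t play the Shapley game in self-play. Hedge-style updates are a standard no-regret scheme, but their stage policies need not converge to the unique mixed Nash equilibrium (1/3, 1/3, 1/3) of this game; providing that policy convergence on top of no-regret payoffs is exactly the gap RV σ(t) is claimed to close. The payoff matrices are the standard Shapley game; the 1/√t schedule, the horizon T, and the function name hedge_self_play are illustrative assumptions.

```python
import numpy as np

# Standard Shapley game payoffs: A for the row player, B for the column player.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
B = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])

def hedge_self_play(T=20000):
    """Full-information Hedge in self-play with a time-decaying learning rate."""
    w_row = np.zeros(3)          # cumulative payoff each fixed row action would have earned
    w_col = np.zeros(3)          # same for the column player's actions
    earned = 0.0                 # row player's cumulative expected payoff
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)   # time-dependent learning rate (illustrative stand-in for sigma(t))
        p = np.exp(eta * (w_row - w_row.max())); p /= p.sum()   # row mixed policy
        q = np.exp(eta * (w_col - w_col.max())); q /= q.sum()   # column mixed policy
        earned += p @ A @ q
        w_row += A @ q           # expected payoff of each row action vs. the current q
        w_col += B.T @ p         # expected payoff of each column action vs. the current p
    avg_regret = (w_row.max() - earned) / T   # per-round external regret of the row player
    return p, q, avg_regret

if __name__ == "__main__":
    p, q, avg_regret = hedge_self_play()
    print("final row policy:", np.round(p, 3))
    print("final col policy:", np.round(q, 3))
    print("average regret:  ", round(avg_regret, 4))
```

In such a run the average regret shrinks toward zero, but nothing forces the stage policies to settle at (1/3, 1/3, 1/3); supplying that convergence is the additional property the paper establishes for RV σ(t).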

Keywords

Stochastic game, Mixed equilibrium, Equilibrium policy, Markov game, Game payoff

References

  1. Banerjee, B., Peng, J.: Performance bounded reinforcement learning in strategic interactions. In: Proceedings of the 19th National Conference on Artificial Intelligence (AAAI 2004), pp. 2–7. AAAI Press, San Jose (2004)
  2. Jafari, A., Greenwald, A., Gondek, D., Ercal, G.: On no-regret learning, fictitious play, and Nash equilibrium. In: Proceedings of the 18th International Conference on Machine Learning, pp. 216–223 (2001)
  3. Nash, J.F.: Non-cooperative games. Annals of Mathematics 54, 286–295 (1951)
  4. Littman, M.L.: Markov games as a framework for multi-agent reinforcement learning. In: Proceedings of the 11th International Conference on Machine Learning, pp. 157–163. Morgan Kaufmann, San Mateo (1994)
  5. Littman, M., Szepesvári, C.: A generalized reinforcement learning model: Convergence and applications. In: Proceedings of the 13th International Conference on Machine Learning, pp. 310–318 (1996)
  6. Hu, J., Wellman, M.P.: Nash Q-learning for general-sum stochastic games. Journal of Machine Learning Research 4, 1039–1069 (2003)
  7. Littman, M.L.: Friend-or-foe Q-learning in general-sum games. In: Proceedings of the 18th International Conference on Machine Learning. Williams College, USA (2001)
  8. Greenwald, A., Hall, K.: Correlated Q-learning. In: Proceedings of the AAAI Symposium on Collaborative Learning Agents (2002)
  9. Singh, S., Kearns, M., Mansour, Y.: Nash convergence of gradient dynamics in general-sum games. In: Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pp. 541–548 (2000)
  10. Bowling, M., Veloso, M.: Rational and convergent learning in stochastic games. In: Proceedings of the 17th International Joint Conference on Artificial Intelligence, Seattle, WA, pp. 1021–1026 (2001)
  11. Bowling, M., Veloso, M.: Multiagent learning using a variable learning rate. Artificial Intelligence 136, 215–250 (2002)
  12. Conitzer, V., Sandholm, T.: AWESOME: A general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents. In: Proceedings of the 20th International Conference on Machine Learning (2003)
  13. Auer, P., Cesa-Bianchi, N., Freund, Y., Schapire, R.E.: Gambling in a rigged casino: The adversarial multi-armed bandit problem. In: Proceedings of the 36th Annual Symposium on Foundations of Computer Science, Milwaukee, WI, pp. 322–331. IEEE Computer Society Press, Los Alamitos (1995)
  14. Fudenberg, D., Levine, D.K.: Consistency and cautious fictitious play. Journal of Economic Dynamics and Control 19, 1065–1089 (1995)
  15. Freund, Y., Schapire, R.E.: Adaptive game playing using multiplicative weights. Games and Economic Behavior 29, 79–103 (1999)
  16. Littlestone, N., Warmuth, M.: The weighted majority algorithm. Information and Computation 108, 212–261 (1994)
  17. Zinkevich, M.: Online convex programming and generalized infinitesimal gradient ascent. In: Proceedings of the 20th International Conference on Machine Learning, Washington, DC (2003)
  18. Bowling, M.: Convergence and no-regret in multiagent learning. In: Proceedings of NIPS 2004/5 (2005)
  19. Powers, R., Shoham, Y.: New criteria and a new algorithm for learning in multi-agent systems. In: Proceedings of NIPS 2004/5 (2005)
  20. Weinberg, M., Rosenschein, J.S.: Best-response multiagent learning in non-stationary environments. In: Proceedings of the 3rd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), vol. 2, pp. 506–513. ACM, New York (2004)
  21. Owen, G.: Game Theory. Academic Press, UK (1995)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Bikramjit Banerjee (1)
  • Jing Peng (1)
  1. Department of Electrical Engineering & Computer Science, Tulane University, New Orleans, USA
