Learning to Reach the Pareto Optimal Nash Equilibrium as a Team

  • Conference paper

AI 2002: Advances in Artificial Intelligence (AI 2002)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2557)

Abstract

Coordination is an important issue in multi-agent systems when agents want to maximize their revenue. Coordination is often achieved through communication; however, communication has its price. We are interested in an approach that keeps the communication between the agents low while still allowing a globally optimal behavior to be found.

In this paper we report on an efficient approach that allows independent reinforcement learning agents to reach a Pareto optimal Nash equilibrium with limited communication. The communication happens at regular time steps and is basically a signal for the agents to start an exploration phase. During each exploration phase, some agents exclude their current best action so as to give the team the opportunity to look for a possibly better Nash equilibrium. This technique of reducing the action space by exclusions was only recently introduced for finding periodical policies in games of conflicting interests. Here, we explore this technique in repeated common interest games with deterministic or stochastic outcomes.
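
The exclusion mechanism lends itself to a compact sketch. The Python snippet below is a minimal illustration only, not the authors' algorithm: the 3x3 common-interest payoff matrix, the epsilon-greedy value learners, the round-robin choice of which agent excludes, and the snapshot-and-restore bookkeeping are all assumptions made for the example. It shows how periodically excluding an agent's current best action lets independent learners escape a suboptimal Nash equilibrium (here the joint action (0, 0) with payoff 10) and discover the Pareto optimal one ((2, 2) with payoff 11).

```python
import random

# Hypothetical common-interest game: both agents receive the same payoff.
# Joint action (0, 0) is a Nash equilibrium with payoff 10, but (2, 2) is
# the Pareto optimal one (payoff 11) the team should eventually settle on.
PAYOFF = [
    [10, 0, 0],
    [0, 2, 0],
    [0, 0, 11],
]

class Agent:
    """Independent learner that keeps action-value estimates and can
    temporarily exclude its current best action from its action space."""

    def __init__(self, n_actions, epsilon=0.2, alpha=0.1):
        self.n_actions = n_actions
        self.q = [0.0] * n_actions
        self.excluded = set()
        self.epsilon = epsilon  # exploration rate
        self.alpha = alpha      # learning rate

    def allowed(self):
        return [a for a in range(self.n_actions) if a not in self.excluded]

    def act(self):
        # Epsilon-greedy choice restricted to the non-excluded actions.
        actions = self.allowed()
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[a])

    def update(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])

    def exclude_best(self):
        # Drop the current greedy action for one exploration phase.
        self.excluded.add(max(self.allowed(), key=lambda a: self.q[a]))

def play_phase(agents, steps=2000):
    """Let the independent agents play and learn; return the mean payoff."""
    total = 0.0
    for _ in range(steps):
        a0, a1 = agents[0].act(), agents[1].act()
        reward = PAYOFF[a0][a1]
        agents[0].update(a0, reward)
        agents[1].update(a1, reward)
        total += reward
    return total / steps

random.seed(0)
agents = [Agent(3), Agent(3)]
best_avg = play_phase(agents)  # settle on some (possibly suboptimal) equilibrium

for phase in range(4):
    # The periodic signal: one agent (round-robin here) excludes its
    # current best action, forcing the team to explore elsewhere.
    snapshot = [list(ag.q) for ag in agents]
    agents[phase % 2].exclude_best()
    for ag in agents:
        ag.q = [0.0] * ag.n_actions  # relearn within the reduced space
    avg = play_phase(agents)
    for ag in agents:
        ag.excluded.clear()
    if avg > best_avg:
        best_avg = avg               # keep the better equilibrium
    else:
        for ag, q in zip(agents, snapshot):
            ag.q = q                 # restore the previous one

print("greedy joint action:", [max(range(3), key=lambda a: ag.q[a]) for ag in agents])
print("average payoff:", round(best_avg, 2))
```

In this sketch the synchronization signal is simulated by the phase loop itself; in a distributed setting that signal is the only communication the agents need, which is precisely the point of the approach.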

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Verbeeck, K., Nowé, A., Lenaerts, T., Parent, J. (2002). Learning to Reach the Pareto Optimal Nash Equilibrium as a Team. In: McKay, B., Slaney, J. (eds) AI 2002: Advances in Artificial Intelligence. AI 2002. Lecture Notes in Computer Science (LNAI), vol 2557. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36187-1_36

  • DOI: https://doi.org/10.1007/3-540-36187-1_36

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-00197-3

  • Online ISBN: 978-3-540-36187-9
