Homo Egualis Reinforcement Learning Agents for Load Balancing

  • Katja Verbeeck
  • Johan Parent
  • Ann Nowé
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2564)

Abstract

Periodical policies were recently introduced as a solution to the coordination problem in games that assume competition between the players and in which the overall performance can only be as good as that of the poorest player. Instead of converging to a single Nash equilibrium, which may favor only one of the players, a periodical policy switches between periods in which each of the interesting Nash equilibria is played. As a result the players are able to equalize their pay-offs and a fair solution is built. Moreover, the players can learn this policy with a minimum of communication: now and then they send each other their performance. In this paper, periodical policies are investigated for use in real-life asynchronous games. More precisely, we look at the problem of load balancing in a simple job scheduling game. The asynchronism of the problem is reflected in delayed pay-offs or reinforcements, probabilistic job creation, and processor rates that follow an exponential distribution. We show that a group of homo egualis reinforcement learning agents can still find a periodical policy. When the jobs are small, homo egualis reinforcement learning agents find a good probability distribution over their action space and play the game without any communication.
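To make the mechanism concrete, below is a minimal Python sketch of how such agents could be wired together. It is an illustration under stated assumptions, not the authors' implementation: the agents use a standard linear reward-inaction learning-automaton update, the fairness rule is simplified to "the best-off agent temporarily gives up its preferred processor", and all names and parameters (HomoEgualisAgent, learning_rate=0.05, the reward shaping 1/(1 + service time), the 500-step exchange period) are invented for this example.

```python
import random

# Sketch only (assumed design, not the paper's code): each agent is a
# linear reward-inaction learning automaton over the processors; agents
# occasionally exchange average payoffs, and the best-off agent
# temporarily excludes its preferred action so the others can catch up,
# which yields a periodical policy.

class HomoEgualisAgent:
    def __init__(self, n_actions, learning_rate=0.05):
        self.n_actions = n_actions
        self.lr = learning_rate
        self.probs = [1.0 / n_actions] * n_actions  # action probabilities
        self.total_payoff = 0.0
        self.steps = 0
        self.excluded = None  # action given up during a fairness period

    def choose_action(self):
        # Sample from the probability vector, renormalized over the
        # actions that are not currently excluded.
        candidates = [a for a in range(self.n_actions) if a != self.excluded]
        total = sum(self.probs[a] for a in candidates)
        r, acc = random.random() * total, 0.0
        for a in candidates:
            acc += self.probs[a]
            if r <= acc:
                return a
        return candidates[-1]

    def update(self, action, reward):
        # Linear reward-inaction (L_R-I): shift probability mass toward
        # the chosen action in proportion to the reward in [0, 1].
        self.total_payoff += reward
        self.steps += 1
        for a in range(self.n_actions):
            if a == action:
                self.probs[a] += self.lr * reward * (1.0 - self.probs[a])
            else:
                self.probs[a] -= self.lr * reward * self.probs[a]

    def average_payoff(self):
        return self.total_payoff / max(self.steps, 1)


def fairness_exchange(agents):
    # The only communication: compare average payoffs now and then;
    # the best-off agent excludes its currently preferred action.
    for ag in agents:
        ag.excluded = None
    best = max(agents, key=lambda ag: ag.average_payoff())
    worst = min(agents, key=lambda ag: ag.average_payoff())
    if best.average_payoff() > worst.average_payoff():
        best.excluded = max(range(best.n_actions), key=lambda a: best.probs[a])


# Toy run: two agents route jobs to two processors whose service times
# are exponentially distributed; the reward 1 / (1 + service_time) is
# an assumed shaping that pays more for faster responses.
if __name__ == "__main__":
    rates = [1.0, 0.5]  # processor service rates (jobs per time unit)
    agents = [HomoEgualisAgent(n_actions=2) for _ in range(2)]
    for step in range(10_000):
        for ag in agents:
            proc = ag.choose_action()
            service_time = random.expovariate(rates[proc])
            ag.update(proc, 1.0 / (1.0 + service_time))
        if step % 500 == 0:
            fairness_exchange(agents)
    for i, ag in enumerate(agents):
        print(f"agent {i}: probs={ag.probs}, avg={ag.average_payoff():.3f}")
```

In the small-job regime described in the abstract, the exchange step can be dropped entirely and the automata still settle on a good probability distribution over the processors without any communication.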

Keywords

Nash equilibrium, Load balancing, Action space, Common pool resource, Learning automaton



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Katja Verbeeck
  • Johan Parent
  • Ann Nowé

  Computational Modeling Lab (COMO), Vrije Universiteit Brussel, Belgium
