Multi-agent System Environment Based on Repeated Local Effect Functions

  • Kazuho Igoshi
  • Takao Miura
  • Isamu Shioya
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 88)


This paper discusses the behavior of agents in a multi-agent environment from the standpoint of game theory. We assume that this behavior follows a probabilistic Nash equilibrium under reinforcement learning. It is well known that such behavior has poor properties: for instance, a Nash equilibrium need not correspond to a Pareto-optimal outcome, and convergence of learning cannot be guaranteed. This makes it difficult to develop a multi-agent system in which agents carry out cooperative work. This paper takes a different approach, employing a mixed Nash strategy based on a correlation technique expressed in terms of Local Effect Functions. The model is useful for achieving cooperation among agents, and we assess the convergence of learning through practical experiments.
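As a rough illustration of the local-effect idea the abstract refers to: in a local-effect game, an agent's payoff for an action depends only on how many agents chose that action and its neighboring actions, not on the full joint action profile. The sketch below is hypothetical; the function names, neighborhood structure, and constants are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a local-effect payoff (all names and numbers
# are illustrative, not from the paper).

ACTIONS = [0, 1, 2]
# Which actions exert a local effect on each other.
NEIGHBOURS = {0: [1], 1: [0, 2], 2: [1]}

def local_effect(action, counts):
    """Payoff falls as congestion on the action and its neighbours rises."""
    congestion = counts[action] + 0.5 * sum(counts[n] for n in NEIGHBOURS[action])
    return 10.0 - congestion

def play_round(choices):
    """Reward each agent given everyone's simultaneous action choices."""
    counts = {a: choices.count(a) for a in ACTIONS}
    return [local_effect(c, counts) for c in choices]

rewards = play_round([0, 1, 1, 2])  # → [8.0, 7.0, 7.0, 8.0]
```

Because each payoff depends only on a small neighborhood of actions, such games remain tractable as the number of agents grows, which is what makes them attractive as a basis for convergent multi-agent learning.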


Multi-Agent System, Game Theory, Correlated Technique, Nash Equilibrium, Local Effect Games





Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Kazuho Igoshi 1
  • Takao Miura 1
  • Isamu Shioya 2
  1. Dept. of Elect. & Elect. Engr., HOSEI University, Tokyo, Japan
  2. Dept. of Management & Informatics, SANNO University, Kanagawa, Japan
