Optimality and Equilibrium of Exploration Ratio for Multiagent Learning in Nonstationary Environments

  • Itsuki Noda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9568)

Abstract

I investigate the relationship between the total performance of agent societies and the relative performance of individual agents with respect to the exploration ratio in multiagent learning. The exploration ratio is a key parameter that shapes multiagent learning in two ways: it controls the learning speed of individual agents, and it acts as a reciprocal noise factor for the other agents. The investigation clarifies the trade-off between these two aspects and shows that there exists a single optimal value of the ratio that minimizes the learning error. I also carried out experiments comparing the performance of agents that use different exploration ratios. The results indicate the existence of equilibrium points at which individual agents choose their ratios. Finally, I discuss the relationship between the optimal and equilibrium values of the exploration ratio, which may pose a dilemma when the exploration ratio is selected in an evolutionary way.
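The exploration ratio discussed above is the ε of ε-greedy action selection in multiagent Q-learning. The following is a minimal illustrative sketch, not the paper's actual experimental setup: the class name EpsilonGreedyQAgent, the two-resource congestion game, and all parameter values are assumptions chosen only to show how ε drives an agent's own learning while simultaneously injecting noise into the learning signal of the other agent.

```python
import random
from collections import defaultdict

class EpsilonGreedyQAgent:
    """Minimal tabular Q-learning agent with a fixed exploration ratio (epsilon)."""

    def __init__(self, actions, epsilon=0.1, alpha=0.1, gamma=0.95):
        self.actions = list(actions)   # available actions
        self.epsilon = epsilon         # exploration ratio
        self.alpha = alpha             # learning rate
        self.gamma = gamma             # discount factor
        self.q = defaultdict(float)    # Q-values keyed by (state, action)

    def act(self, state):
        # With probability epsilon take a random action (exploration);
        # otherwise take the currently best-valued action (exploitation).
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Two agents in a stateless repeated congestion-like game: each picks resource
# 0 or 1, and a resource chosen by both yields a lower reward. From each
# agent's viewpoint, the other's random exploration makes the environment
# nonstationary, which is the noise aspect of the exploration ratio.
agents = [EpsilonGreedyQAgent(actions=[0, 1], epsilon=0.1) for _ in range(2)]
state = 0
for _ in range(10_000):
    choices = [agent.act(state) for agent in agents]
    for agent, choice in zip(agents, choices):
        reward = 1.0 if choices.count(choice) == 1 else 0.2  # penalise congestion
        agent.update(state, choice, reward, state)
```

In such a sketch, raising ε speeds up the discovery of under-used resources by each agent but also makes each agent's behaviour less predictable to the other, which is the trade-off the paper analyses.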

Notes

Acknowledgments

This work was supported by JST CREST and JSPS KAKENHI 24300064.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
  2. CREST, JST, Saitama, Japan