Simultaneous Learning to Acquire Competitive Behaviors in Multi-agent System Based on Modular Learning System
Existing reinforcement learning approaches suffer from the policy alternation of other agents in dynamic multiagent environments. A typical example is a RoboCup competition, where the other agents' behaviors may cause sudden changes in the state transition probabilities whose constancy is needed for learning to converge. The keys to simultaneous learning of competitive behaviors in such an environment are
– a modular learning system for adaptation to the policy alternation of others, and
– the introduction of macro actions, which reduce the search space and make simultaneous learning feasible.
This paper presents a method of modular learning in a multiagent environment by which the learning agents can simultaneously acquire their behaviors and adapt to the situations that arise as consequences of the other agents' behaviors.
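The paper's own modular learning system is not reproduced in this preview, but the general idea behind such systems can be sketched: maintain several learning modules, each with its own Q-table and transition model, and route experience to the module whose model best predicts the transitions currently being observed, so that a change in the other agents' policies shows up as a drop in model fit and triggers a switch of modules. The following is a minimal, illustrative sketch of that idea only; the class names (`Module`, `ModularLearner`), the count-based transition model, and the decayed log-likelihood score are assumptions for illustration, not the paper's implementation.

```python
import math

class Module:
    """One learning module: a Q-table plus a count-based transition model."""
    def __init__(self, n_states, n_actions):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        # counts[s][a][s2] estimates P(s2 | s, a); initialized to 1 (Laplace smoothing)
        self.counts = [[[1] * n_states for _ in range(n_actions)]
                       for _ in range(n_states)]

    def likelihood(self, s, a, s2):
        row = self.counts[s][a]
        return row[s2] / sum(row)

    def update_model(self, s, a, s2):
        self.counts[s][a][s2] += 1

    def update_q(self, s, a, r, s2, alpha=0.1, gamma=0.9):
        # Standard Q-learning update within this module only.
        self.q[s][a] += alpha * (r + gamma * max(self.q[s2]) - self.q[s][a])


class ModularLearner:
    """Select the module whose transition model best explains recent
    experience, then learn with that module only."""
    def __init__(self, n_modules, n_states, n_actions):
        self.modules = [Module(n_states, n_actions) for _ in range(n_modules)]
        self.scores = [0.0] * n_modules  # decayed log-likelihood per module

    def select(self):
        return max(range(len(self.modules)), key=lambda i: self.scores[i])

    def observe(self, s, a, r, s2, decay=0.9):
        # Score every module on how well it predicted this transition;
        # a policy change by the other agents lowers the current winner's score.
        for i, m in enumerate(self.modules):
            self.scores[i] = decay * self.scores[i] + math.log(m.likelihood(s, a, s2))
        best = self.select()
        self.modules[best].update_model(s, a, s2)
        self.modules[best].update_q(s, a, r, s2)
        return best
```

Because only the best-fitting module is updated, each module specializes in one regime of the other agents' behavior, and previously learned policies are preserved rather than overwritten when the environment switches back.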
Keywords: Reinforcement Learning, Multiagent System, Real Robot, State Transition Probability, Competitive Behavior