Simultaneous Learning to Acquire Competitive Behaviors in Multi-agent System Based on Modular Learning System

  • Yasutake Takahashi
  • Kazuhiro Edazawa
  • Kentarou Noma
  • Minoru Asada
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4020)


Existing reinforcement learning approaches suffer from the policy alternation of other agents in dynamic multiagent environments. A typical example is a RoboCup competition, where the behaviors of other agents may cause sudden changes in the state transition probabilities whose constancy is needed for learning to converge. The keys to simultaneous learning of competitive behaviors in such an environment are

– a modular learning system that adapts to the policy alternation of other agents, and

– the introduction of macro actions that reduce the search space and so enable simultaneous learning.
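The first key, a modular learning system, can be illustrated with a minimal sketch in the spirit of multiple model-based reinforcement learning: each module pairs a transition predictor with its own Q-table, and the module whose predictor best explains the most recent transition is selected. All class and function names below are illustrative assumptions, not the paper's actual architecture.

```python
class Module:
    """One learning module: a state-transition predictor plus a Q-table.

    Illustrative sketch only; the paper's modules may differ in detail.
    """
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.counts = {}  # (s, a) -> {s_next: count}, estimates P(s'|s,a)
        self.alpha, self.gamma = alpha, gamma

    def predict(self, s, a, s_next):
        """Estimated probability of the observed transition (s, a) -> s_next."""
        seen = self.counts.get((s, a), {})
        total = sum(seen.values())
        if total == 0:
            return 1.0 / len(self.q)  # uniform prior before any observation
        return seen.get(s_next, 0) / total

    def observe(self, s, a, s_next, r):
        """Update both the transition model and the Q-table."""
        self.counts.setdefault((s, a), {})
        self.counts[(s, a)][s_next] = self.counts[(s, a)].get(s_next, 0) + 1
        best_next = max(self.q[s_next])
        self.q[s][a] += self.alpha * (r + self.gamma * best_next - self.q[s][a])


def select_module(modules, s, a, s_next):
    """Pick the module whose predictor best explains the last transition.

    When another agent alternates its policy, the observed transitions
    change, and responsibility shifts to a different module.
    """
    return max(modules, key=lambda m: m.predict(s, a, s_next))
```

Module selection by prediction error is what lets the system cope with policy alternation: rather than relearning one monolithic policy, the agent switches to the module trained under the matching situation.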

This paper presents a method of modular learning in a multiagent environment by which the learning agents can simultaneously learn their behaviors and adapt to the situations that arise as consequences of the other agents' behaviors.
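The second key, macro actions, can be sketched as fixed sequences of primitive actions chosen as a single decision, so the learner searches over a handful of macros instead of the exponentially larger space of primitive sequences. The primitive and macro names below are hypothetical examples, not those used in the paper.

```python
# Primitive actions available to a mobile robot (illustrative).
PRIMITIVES = ["forward", "turn_left", "turn_right"]

# Macro actions: fixed, open-loop sequences of primitives.
# One macro-level decision replaces a three-step primitive plan.
MACROS = {
    "approach": ["forward", "forward", "forward"],
    "circle_left": ["turn_left", "forward", "turn_left"],
    "circle_right": ["turn_right", "forward", "turn_right"],
}


def expand(macro_name):
    """Translate one macro-level choice into its primitive sequence."""
    return list(MACROS[macro_name])


# The macro-level action set has 3 choices, versus 3**3 = 27
# three-step primitive sequences, shrinking the search space.
assert len(MACROS) < len(PRIMITIVES) ** 3
```

Because each macro spans several time steps, the learner makes fewer decisions per episode, which is what makes simultaneous learning by multiple agents tractable.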


Keywords: Reinforcement Learning · Multiagent System · Real Robot · State Transition Probability · Competitive Behavior





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Yasutake Takahashi 1, 2
  • Kazuhiro Edazawa 1
  • Kentarou Noma 1
  • Minoru Asada 1, 2
  1. Dept. of Adaptive Machine Systems, Graduate School of Engineering
  2. Handai Frontier Research Center, Osaka University, Osaka, Japan
