Evaluating Learning Automata as a Model for Cooperation in Complex Multi-agent Domains

  • Mohammad Reza Khojasteh
  • Mohammad Reza Meybodi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4434)


Learning automata operate in a stochastic environment and are able to update their action probabilities based on the responses they receive from that environment, thereby improving their performance over time. In this paper, our goal is to investigate and evaluate the application of learning automata to cooperation in multi-agent systems, using the soccer simulation server as a test bed. We also evaluate our learning method in difficult situations, such as when some of the agents in the team malfunction, or when the agents' sensing and acting abilities are subject to considerable noise. Our experimental results show that learning automata adapt well to these situations.
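The update mechanism described above can be sketched with the classic linear reward-inaction (L_R-I) scheme for a variable-structure learning automaton, as introduced in Narendra and Thathachar's textbook [4]. This is a minimal illustrative sketch, not the paper's actual implementation; the class name, learning rate, and binary reward signal are assumptions.

```python
import random

class LearningAutomaton:
    """Variable-structure learning automaton with an L_R-I update rule."""

    def __init__(self, n_actions, alpha=0.1):
        self.n = n_actions
        self.alpha = alpha                      # reward step size (assumed value)
        self.p = [1.0 / n_actions] * n_actions  # initially uniform action probabilities

    def choose_action(self):
        # Sample an action index according to the current probability vector.
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r < acc:
                return i
        return self.n - 1

    def update(self, action, reward):
        # L_R-I: on a favourable response, move probability mass toward the
        # chosen action; on an unfavourable response, leave the vector unchanged.
        if reward:
            for i in range(self.n):
                if i == action:
                    self.p[i] += self.alpha * (1.0 - self.p[i])
                else:
                    self.p[i] *= (1.0 - self.alpha)
```

Because the reward update is a convex shift of the whole vector, the probabilities always remain normalized, and repeated rewards for one action drive its probability toward 1, which is the optimization behaviour the abstract refers to.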


Keywords: Learning Automata, Learning Team, Stochastic Environment, Team Formation, Opponent Team


  1. Stone, P.: Layered Learning in Multi-Agent Systems. PhD thesis, School of Computer Science, Carnegie Mellon University (December 1998)
  2. Kitano, H. (ed.): RoboCup-97: Robot Soccer World Cup I. Springer, Heidelberg (1998)
  3. Andre, D., Corten, E., Dorer, K., Gugenberger, P., Joldos, M., Kummenje, J., Navaratil, P.A., Noda, I., Riley, P., Stone, P., Takahashi, R., Yeap, T.: Soccer Server Manual, version 4.0. Technical Report RoboCup-1998-2001, RoboCup (1998)
  4. Narendra, K.S., Thathachar, M.A.L.: Learning Automata: An Introduction. Prentice Hall, Englewood Cliffs (1989)
  5. Khojasteh, M.R., Meybodi, M.R.: The Technique "Best Corner in State Square" for Generalization of Environmental States in a Cooperative Multi-agent Domain. In: Proceedings of the 8th Annual CSI Computer Conference (CSICC 2003), pp. 446–455, Mashhad, Iran (February 25–27, 2003)
  6. Khojasteh, M.R.: Cooperation in Multi-agent Systems Using Learning Automata. M.Sc. thesis, Computer Engineering Faculty, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran (May 2002)
  7. Khojasteh, M.R., Meybodi, M.R.: Using Learning Automata in Cooperation among Agents in a Team. In: Proceedings of the 12th Portuguese Conference on Artificial Intelligence, University of Beira Interior, pp. 306–312, Covilhã, Portugal (December 5–8, 2005). ISBN 0-7803-9365-1, IEEE Catalog Number 05EX1157
  8. Thathachar, M.A.L., Sastry, P.S.: A New Approach to the Design of Reinforcement Schemes for Learning Automata. IEEE Transactions on Systems, Man, and Cybernetics SMC-15(1) (January/February 1985)
  9. Oommen, B.J., Lanctot, J.K.: Discretized Pursuit Learning Automata. IEEE Transactions on Systems, Man, and Cybernetics SMC-20(4) (July/August 1990)
  10. Noda, I.: Team Description: Saloo. AIST & PREST, Japan (2001)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Mohammad Reza Khojasteh (1)
  • Mohammad Reza Meybodi (2)
  1. AI & Robotics Laboratory, Computer Engineering Department, Shiraz Islamic Azad University, Shiraz, Iran
  2. Soft Computing Laboratory, Computer Engineering Department, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran
