Broadcast based fitness sharing GA for conflict resolution among autonomous robots

  • Sadayoshi Mikami
  • Yukinori Kakazu
  • Terence C. Fogarty
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 993)

Abstract

This paper proposes a distributed GA with which autonomous agents learn to act co-operatively. Our objective is a learning system that makes real-world heterogeneous agents feasible with minimal communication hardware. Such real-world agents face two constraints that make the global payoff difficult to estimate. First, the communication bandwidth between agents is small, which prohibits gathering fitness values from all of them. Second, local fitness values are evaluated only long after a conflict between agents has taken place, by which time some of the agents involved may be far away and no longer able to exchange the local payoffs needed to estimate the global payoff. To overcome these difficulties, we have developed a polarity-based broadcast fitness sharing method for physically distributed populations. Instead of waiting for an exact local payoff, agents exchange an estimated local payoff whenever a conflict takes place. We found that a specific filter function gives a good estimate of global fitness values in conflict-resolution tasks. Simulations of a bump-avoidance task for multiple mobile robots show a notable performance improvement.
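As a rough illustration of the scheme the abstract describes, the Python sketch below shows one way broadcast-based sharing of estimated payoffs might look. It is a minimal sketch under stated assumptions, not the paper's implementation: the `Robot` class, the filter coefficient `alpha`, the exponential filter, and the random payoff model are all illustrative choices; the paper's "specific filter function" may differ.

```python
import random


class Robot:
    """Toy agent for a broadcast-based fitness sharing sketch.

    Assumptions (not from the paper): an exponential filter with
    coefficient `alpha`, and a random penalty standing in for a real
    bump-avoidance payoff.
    """

    def __init__(self, alpha=0.5):
        self.alpha = alpha            # filter coefficient (assumed)
        self.global_estimate = 0.0    # filtered estimate of global payoff

    def estimate_local_payoff(self):
        # Estimate the local payoff *at conflict time* rather than
        # waiting for the exact, delayed evaluation.
        return -random.random()

    def on_conflict(self, neighbours):
        """Exchange estimated payoffs with agents still in range and
        fold them into a filtered estimate of the global payoff."""
        shared = self.estimate_local_payoff() + sum(
            r.estimate_local_payoff() for r in neighbours
        )
        # Exponential moving average of the summed broadcast payoffs:
        # one plausible form of a payoff filter.
        self.global_estimate = (
            self.alpha * shared + (1 - self.alpha) * self.global_estimate
        )
        return self.global_estimate


if __name__ == "__main__":
    robots = [Robot() for _ in range(3)]
    # Two robots meet; each broadcasts an estimated payoff to the other.
    for r in robots[:2]:
        others = [o for o in robots[:2] if o is not r]
        print(round(r.on_conflict(others), 3))
```

In a full system of this kind, `global_estimate` would serve as the fitness assigned to the robot's currently active chromosome in its local GA population, so selection can favour behaviours that resolve conflicts even though no agent ever observes the exact global payoff.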



Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Sadayoshi Mikami (1, 2)
  • Yukinori Kakazu (1)
  • Terence C. Fogarty (2)
  1. Hokkaido University, Sapporo, Japan
  2. University of the West of England, Bristol, UK
