Advice-Exchange Between Evolutionary Algorithms and Reinforcement Learning Agents: Experiments in the Pursuit Domain

  • Luís Nunes
  • Eugénio Oliveira
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3394)

Abstract

This research studies the effects of exchanging information during the learning process in Multiagent Systems. Advice-exchange, a concept introduced in previous contributions, enables an agent to request extra feedback, in the form of episodic advice, from other agents that are solving similar problems. Whereas previous work focused on the exchange of information between agents solving detached problems, the present work concerns groups of learning agents that share the same environment, a change that added new difficulties to the task. The experiments reported below were conducted to detect the causes of, and correct, the shortcomings that emerged in the move from environments where agents worked on detached problems to environments where agents interact. New concepts, such as self-confidence, trust, and advisor preference, are introduced in this text.
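To make the mechanism concrete, the Python sketch below shows one way an agent could combine self-confidence, trust, and advisor preference when deciding whether to request episodic advice. It is a minimal sketch under assumed rules: the class name AdviceExchangeAgent, the smoothing constants, the 0.8 confidence threshold, and the trust increments are illustrative choices, not the formulation evaluated in the paper.

    class AdviceExchangeAgent:
        """Sketch of a learner that may request episodic advice from peers.

        The selection and update rules below are illustrative guesses,
        not the authors' exact formulation.
        """

        def __init__(self, name):
            self.name = name
            self.best_score = 0.0       # best episodic reward seen so far
            self.self_confidence = 0.0  # closeness of recent episodes to that best
            self.trust = {}             # peer name -> trust earned by past advice

        def update_self_confidence(self, episode_score):
            # Hypothetical smoothing rule: confidence rises when current
            # performance approaches the agent's own historical best.
            self.best_score = max(self.best_score, episode_score)
            ratio = episode_score / self.best_score if self.best_score else 0.0
            self.self_confidence = 0.9 * self.self_confidence + 0.1 * ratio

        def choose_advisor(self, peers):
            # Request advice only when not yet confident and some trusted
            # peer clearly outperforms our own best episode.
            if self.self_confidence > 0.8:
                return None
            candidates = [p for p in peers
                          if p.best_score * self.trust.get(p.name, 1.0)
                          > self.best_score]
            if not candidates:
                return None
            # Advisor preference: weight a peer's performance by earned trust.
            return max(candidates,
                       key=lambda p: p.best_score * self.trust.get(p.name, 1.0))

        def update_trust(self, advisor, reward_gain):
            # Trust grows when following the advice improved episodic reward,
            # and decays otherwise (hypothetical update).
            t = self.trust.get(advisor.name, 1.0)
            self.trust[advisor.name] = max(0.0, t + (0.1 if reward_gain > 0 else -0.1))

    if __name__ == "__main__":
        a, b = AdviceExchangeAgent("a"), AdviceExchangeAgent("b")
        b.best_score = 10.0         # pretend peer b has performed well
        a.update_self_confidence(2.0)
        advisor = a.choose_advisor([b])
        print("ask for advice" if advisor else "rely on own policy")  # -> ask for advice

The intuition sketched here is that an agent stops consulting advisors once its own performance stabilizes near its historical best, and that candidate advisors are ranked by past performance discounted by the trust they have earned, which is the role the abstract assigns to self-confidence, trust, and advisor preference.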

Keywords

Multiagent System, Joint Strategy, Individual Scenario, Reinforcement Learning Agent, Heuristic Agent

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Luís Nunes (1)
  • Eugénio Oliveira (2)
  1. ISCTE/FEUP/LIACC-NIAD&R, ISCTE, Lisbon, Portugal
  2. FEUP/LIACC-NIAD&R, FEUP, Porto, Portugal