
Autonomous Robots, Volume 7, Issue 1, pp 77–88

Reinforcement Learning Soccer Teams with Incomplete World Models

  • Marco Wiering
  • Rafał Sałustowicz
  • Jürgen Schmidhuber

Abstract

We use reinforcement learning (RL) to compute strategies for multiagent soccer teams. RL may profit significantly from world models (WMs) estimating state transition probabilities and rewards. In high-dimensional, continuous input spaces, however, learning accurate WMs is intractable. Here we show that incomplete WMs can help to quickly find good action selection policies. Our approach is based on a novel combination of CMACs and prioritized sweeping-like algorithms. Variants thereof outperform both Q(λ)-learning with CMACs and the evolutionary method Probabilistic Incremental Program Evolution (PIPE), which performed best in previous comparisons.

Keywords: reinforcement learning · CMAC · world models · simulated soccer · Q(λ) · evolutionary computation · PIPE
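
The combination named in the abstract, CMAC tile coding plus prioritized-sweeping-style backups over a learned model, is compact enough to illustrate in code. The following is a minimal sketch, not the authors' implementation: the class names, the parameters (n_tilings, gamma, theta, n_sweeps), and the simplification of treating the full tuple of active CMAC tiles as one discrete state are all assumptions made here for brevity.

```python
import heapq
from collections import defaultdict

import numpy as np


class CMAC:
    """Tile coding: maps a continuous state to one active tile per tiling."""

    def __init__(self, n_tilings=4, tiles_per_dim=8, low=0.0, high=1.0):
        self.n_tilings = n_tilings
        self.tiles_per_dim = tiles_per_dim
        self.low, self.high = low, high

    def active_tiles(self, state):
        state = np.asarray(state, dtype=float)
        tiles = []
        for t in range(self.n_tilings):
            # each tiling is offset by a fraction of one tile width
            offset = t / (self.n_tilings * self.tiles_per_dim)
            scaled = (state - self.low) / (self.high - self.low) + offset
            idx = np.minimum((scaled * self.tiles_per_dim).astype(int),
                             self.tiles_per_dim - 1)
            tiles.append((t,) + tuple(int(i) for i in idx))
        return tuple(tiles)  # hashable, so it can serve as a discrete state


class PrioritizedSweeping:
    """Prioritized-sweeping-style backups over an incomplete, learned model."""

    def __init__(self, n_actions, gamma=0.95, theta=1e-4, n_sweeps=10):
        self.n_actions = n_actions
        self.gamma = gamma
        self.theta = theta        # minimum priority worth queueing
        self.n_sweeps = n_sweeps  # model-based backups per real experience
        self.q = defaultdict(float)                          # Q(s, a)
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': n}
        self.rewards = defaultdict(float)                    # mean reward of (s, a)
        self.preds = defaultdict(set)                        # s' -> {(s, a)}
        self.pq = []                                         # max-heap via negation

    def observe(self, s, a, r, s_next):
        # update the world model from one experience tuple; the model stays
        # incomplete because it only ever contains transitions actually seen
        n = sum(self.counts[(s, a)].values())
        self.counts[(s, a)][s_next] += 1
        self.rewards[(s, a)] += (r - self.rewards[(s, a)]) / (n + 1)
        self.preds[s_next].add((s, a))
        self._push(s, a)
        self._sweep()

    def _backup_error(self, s, a):
        # one-step Bellman error under the estimated transition model
        total = sum(self.counts[(s, a)].values())
        exp_next = sum(
            (n / total) * max(self.q[(s2, b)] for b in range(self.n_actions))
            for s2, n in self.counts[(s, a)].items())
        return self.rewards[(s, a)] + self.gamma * exp_next - self.q[(s, a)]

    def _push(self, s, a):
        p = abs(self._backup_error(s, a))
        if p > self.theta:
            heapq.heappush(self.pq, (-p, (s, a)))

    def _sweep(self):
        # process the highest-priority pairs first; stale heap entries are
        # harmless because the error is recomputed at pop time
        for _ in range(self.n_sweeps):
            if not self.pq:
                break
            _, (s, a) = heapq.heappop(self.pq)
            self.q[(s, a)] += self._backup_error(s, a)
            for (sp, ap) in self.preds[s]:  # propagate surprise backwards
                self._push(sp, ap)
```

A hypothetical usage, feeding one experience tuple from a continuous observation in [0, 1]^2:

```python
cmac = CMAC()
agent = PrioritizedSweeping(n_actions=4)
s = cmac.active_tiles([0.30, 0.70])
s_next = cmac.active_tiles([0.35, 0.70])
agent.observe(s, a=2, r=0.0, s_next=s_next)
```

Because the model stores only observed transitions, it remains incomplete in exactly the sense the abstract describes, while the priority queue still concentrates backups where they change value estimates most.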


References

  1. Albus, J.S. 1975. A new approach to manipulator control: The cerebellar model articulation controller (CMAC). Journal of Dynamic Systems, Measurement, and Control, 97:220-227.
  2. Baluja, S. and Caruana, R. 1995. Removing the genetics from the standard genetic algorithm. In Machine Learning: Proceedings of the Twelfth International Conference, A. Prieditis and S. Russell (Eds.), Morgan Kaufmann Publishers: San Francisco, CA, pp. 38-46.
  3. Barto, A.G., Sutton, R.S., and Anderson, C.W. 1983. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13:834-846.
  4. Bellman, R. 1961. Adaptive Control Processes, Princeton University Press.
  5. Bertsekas, D.P. and Tsitsiklis, J.N. 1996. Neuro-Dynamic Programming, Athena Scientific: Belmont, MA.
  6. Chapman, D. and Kaelbling, L.P. 1991. Input generalization in delayed reinforcement learning. In Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI), Morgan Kaufmann, Vol. 2, pp. 726-731.
  7. Cramer, N.L. 1985. A representation for the adaptive generation of simple sequential programs. In Proceedings of an International Conference on Genetic Algorithms and Their Applications, J.J. Grefenstette (Ed.), Lawrence Erlbaum Associates: Hillsdale, NJ, pp. 183-187.
  8. Dickmanns, D., Schmidhuber, J., and Winklhofer, A. 1986. Der genetische Algorithmus: Eine Implementierung in Prolog. Fortgeschrittenenpraktikum, Institut für Informatik, Lehrstuhl Prof. Radig, Technische Universität München.
  9. Holland, J.H. 1975. Adaptation in Natural and Artificial Systems, University of Michigan Press: Ann Arbor.
  10. Kaelbling, L. 1993. Learning in Embedded Systems, MIT Press.
  11. Kearns, M. and Singh, S. 1999. Finite-sample convergence rates for Q-learning and indirect algorithms. In Advances in Neural Information Processing Systems 11, M. Kearns, S.A. Solla, and D. Cohn (Eds.), MIT Press: Cambridge, MA.
  12. Koza, J.R. 1992. Genetic evolution and co-evolution of computer programs. In Artificial Life II, C.G. Langton, C. Taylor, J.D. Farmer, and S. Rasmussen (Eds.), Addison Wesley Publishing Company, pp. 313-324.
  13. Lin, L.-J. 1993. Reinforcement Learning for Robots Using Neural Networks. Ph.D. Thesis, Carnegie Mellon University, Pittsburgh.
  14. Moore, A. and Atkeson, C.G. 1993. Prioritized sweeping: Reinforcement learning with less data and less time. Machine Learning, 13:103-130.
  15. Nowlan, S.J. and Hinton, G.E. 1992. Simplifying neural networks by soft weight sharing. Neural Computation, 4:173-193.
  16. Peng, J. and Williams, R. 1996. Incremental multi-step Q-learning. Machine Learning, 22:283-290.
  17. Rechenberg, I. 1971. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Dissertation, published in 1973 by Frommann-Holzboog.
  18. Rummery, G.A. and Niranjan, M. 1994. On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166, Cambridge University, UK.
  19. Sałustowicz, R.P. and Schmidhuber, J. 1997. Probabilistic incremental program evolution. Evolutionary Computation, 5(2):123-141.
  20. Sałustowicz, R.P., Wiering, M.A., and Schmidhuber, J. 1997a. Evolving soccer strategies. In Proceedings of the Fourth International Conference on Neural Information Processing (ICONIP'97), Springer-Verlag: Singapore, pp. 502-506.
  21. Sałustowicz, R.P., Wiering, M.A., and Schmidhuber, J. 1997b. On learning soccer strategies. In Proceedings of the Seventh International Conference on Artificial Neural Networks (ICANN'97), volume 1327 of Lecture Notes in Computer Science, W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud (Eds.), Springer-Verlag: Berlin, Heidelberg, pp. 769-774.
  22. Sałustowicz, R.P., Wiering, M.A., and Schmidhuber, J. 1998. Learning team strategies: Soccer case studies. Machine Learning, 33(2/3):263-282.
  23. Samuel, A.L. 1959. Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3:210-229.
  24. Santamaria, J.C., Sutton, R.S., and Ram, A. 1996. Experiments with reinforcement learning in problems with continuous state and action spaces. Technical Report COINS 96-088, Georgia Institute of Technology, Atlanta.
  25. Schmidhuber, J. 1995. On learning how to learn learning strategies. Technical Report FKI-198-94, Fakultät für Informatik, Technische Universität München, revised January 1995.
  26. Schmidhuber, J., Zhao, J., and Schraudolph, N. 1997a. Reinforcement learning with self-modifying policies. In Learning to Learn, S. Thrun and L. Pratt (Eds.), Kluwer, pp. 293-309.
  27. Schmidhuber, J., Zhao, J., and Wiering, M. 1997b. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28:105-130.
  28. Singh, S.P. and Sutton, R.S. 1996. Reinforcement learning with replacing eligibility traces. Machine Learning, 22:123-158.
  29. Sutton, R.S. 1988. Learning to predict by the methods of temporal differences. Machine Learning, 3:9-44.
  30. Sutton, R.S. 1996. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Advances in Neural Information Processing Systems 8, D.S. Touretzky, M.C. Mozer, and M.E. Hasselmo (Eds.), MIT Press: Cambridge, MA, pp. 1038-1045.
  31. Sutton, R.S. and Barto, A.G. 1998. Reinforcement Learning: An Introduction, MIT Press/Bradford Books.
  32. Thrun, S., Fox, D., and Burgard, W. 1998. A probabilistic approach to concurrent mapping and localization for mobile robots. Machine Learning, 31:29-53. Also appeared in Autonomous Robots, 5:253-271, 1998 (joint issue).
  33. Watkins, C.J.C.H. 1989. Learning from Delayed Rewards. Ph.D. Thesis, King's College, Cambridge, England.
  34. Watkins, C.J.C.H. and Dayan, P. 1992. Q-learning. Machine Learning, 8:279-292.
  35. Wiering, M.A. 1999. Explorations in Efficient Reinforcement Learning. Ph.D. Thesis, University of Amsterdam/IDSIA.
  36. Wiering, M.A. and Schmidhuber, J. 1998a. Efficient model-based exploration. In Proceedings of the Sixth International Conference on Simulation of Adaptive Behavior: From Animals to Animats 6, J.A. Meyer and S.W. Wilson (Eds.), MIT Press/Bradford Books, pp. 223-228.
  37. Wiering, M.A. and Schmidhuber, J. 1998b. Fast online Q(λ). Machine Learning, 33(1):105-116.

Copyright information

© Kluwer Academic Publishers 1999

Authors and Affiliations

  • Marco Wiering (1)
  • Rafał Sałustowicz (1)
  • Jürgen Schmidhuber (1)

  1. IDSIA, Lugano, Switzerland
