Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning

Chapter
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 173)

Abstract

This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.
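
As a concrete illustration of the kind of update the abstract describes, the following is a minimal sketch of a REINFORCE rule for a single Bernoulli-logistic unit in an immediate-reinforcement task: the weight change is a learning rate times the baseline-adjusted reinforcement times the characteristic eligibility (y - p)x, the derivative of the log-probability of the sampled output with respect to the weights. The toy task, learning rate, and running-average baseline below are illustrative assumptions, not details taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def reinforcement(y, target):
    # Hypothetical immediate-reinforcement signal: reward 1 when the
    # unit's output matches a target bit, 0 otherwise.
    return 1.0 if y == target else 0.0

n_inputs = 3
w = np.zeros(n_inputs)   # weights of the single stochastic unit
alpha = 0.1              # learning rate (assumed value)
baseline = 0.0           # reinforcement baseline b, tracked as a running average

for step in range(1000):
    x = rng.integers(0, 2, size=n_inputs).astype(float)  # random binary input
    p = sigmoid(w @ x)                 # firing probability of the unit
    y = float(rng.random() < p)        # sample a Bernoulli output
    r = reinforcement(y, target=x[0])  # toy task: reproduce the first input bit
    eligibility = (y - p) * x          # d/dw of ln Pr(y | w, x) for this unit
    w += alpha * (r - baseline) * eligibility  # REINFORCE weight update
    baseline += 0.05 * (r - baseline)  # slowly adapt the baseline toward mean reward
```

Because the baseline does not depend on the sampled output, subtracting it leaves the expected weight change along the gradient of expected reinforcement while typically reducing the variance of the update.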

Keywords

Reinforcement learning; connectionist networks; gradient descent; mathematical analysis

Copyright information

© Springer Science+Business Media New York 1992

Authors and Affiliations

1. Ronald J. Williams, College of Computer Science, 161 CN, Northeastern University, Boston, USA
