Machine Learning, Volume 8, Issue 3–4, pp 279–292

Technical Note: Q-Learning

  • Christopher J.C.H. Watkins
  • Peter Dayan

Abstract

\(\mathcal{Q}\)-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states.
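For readers unfamiliar with the method, the successive improvement the abstract refers to is the standard one-step \(\mathcal{Q}\)-learning update; in conventional notation (not defined in this abstract), with state \(x\), action \(a\), observed reward \(r\), successor state \(y\), learning rate \(\alpha\), and discount factor \(\gamma\):

\[
\mathcal{Q}(x, a) \leftarrow (1 - \alpha)\,\mathcal{Q}(x, a) + \alpha\left[\,r + \gamma \max_{b} \mathcal{Q}(y, b)\,\right].
\]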

This paper presents and proves in detail a convergence theorem for \(\mathcal{Q}\)-learning based on that outlined in Watkins (1989). We show that \(\mathcal{Q}\)-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many \(\mathcal{Q}\) values can be changed each iteration, rather than just one.
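The setting the theorem addresses, discrete action-values with all actions repeatedly sampled in all states, can be pictured with a minimal tabular sketch such as the one below. It applies the update above with epsilon-greedy exploration and a per-pair decaying learning rate; the environment interface (env.actions, env.reset, env.step), the episode count, and the 1/n decay schedule are illustrative assumptions, not details taken from the paper.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, gamma=0.9, epsilon=0.1):
        """Minimal tabular Q-learning sketch (illustrative, not the paper's code).

        Assumes a small discrete environment exposing:
          env.actions                 -- list of actions
          env.reset() -> state
          env.step(state, action) -> (reward, next_state, done)
        """
        Q = defaultdict(float)      # discrete table Q[(state, action)]
        visits = defaultdict(int)   # per-pair counts for the learning-rate decay

        for _ in range(episodes):
            x, done = env.reset(), False
            while not done:
                # Epsilon-greedy exploration keeps every action sampled in every state.
                if random.random() < epsilon:
                    a = random.choice(env.actions)
                else:
                    a = max(env.actions, key=lambda b: Q[(x, b)])

                r, y, done = env.step(x, a)

                # Decaying step size (here 1/n per pair): sum alpha = inf, sum alpha^2 < inf.
                visits[(x, a)] += 1
                alpha = 1.0 / visits[(x, a)]

                # One-step Q-learning update toward the sampled target.
                target = r if done else r + gamma * max(Q[(y, b)] for b in env.actions)
                Q[(x, a)] = (1 - alpha) * Q[(x, a)] + alpha * target
                x = y
        return Q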

Keywords: \(\mathcal{Q}\)-learning, reinforcement learning, temporal differences, asynchronous dynamic programming

References

  1. Barto, A.G., Bradtke, S.J. & Singh, S.P. (1991). Real-time learning and control using asynchronous dynamic programming (COINS Technical Report 91-57). Amherst, MA: University of Massachusetts.
  2. Barto, A.G. & Singh, S.P. (1990). On the computational economics of reinforcement learning. In D.S. Touretzky, J. Elman, T.J. Sejnowski & G.E. Hinton (Eds.), Proceedings of the 1990 Connectionist Models Summer School. San Mateo, CA: Morgan Kaufmann.
  3. Bellman, R.E. & Dreyfus, S.E. (1962). Applied dynamic programming. RAND Corporation.
  4. Chapman, D. & Kaelbling, L.P. (1991). Input generalization in delayed reinforcement learning: An algorithm and performance comparisons. Proceedings of the 1991 International Joint Conference on Artificial Intelligence (pp. 726–731).
  5. Kushner, H. & Clark, D. (1978). Stochastic approximation methods for constrained and unconstrained systems. Berlin, Germany: Springer-Verlag.
  6. Lin, L. (1992). Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8.
  7. Mahadevan, S. & Connell, J. (1991). Automatic programming of behavior-based robots using reinforcement learning. Proceedings of the 1991 National Conference on AI (pp. 768–773).
  8. Ross, S. (1983). Introduction to stochastic dynamic programming. New York: Academic Press.
  9. Sato, M., Abe, K. & Takeda, H. (1988). Learning control of finite Markov chains with explicit trade-off between estimation and control. IEEE Transactions on Systems, Man and Cybernetics, 18, pp. 677–684.
  10. Sutton, R.S. (1984). Temporal credit assignment in reinforcement learning. Ph.D. thesis, University of Massachusetts, Amherst, MA.
  11. Sutton, R.S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, pp. 9–44.
  12. Sutton, R.S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. Proceedings of the Seventh International Conference on Machine Learning. San Mateo, CA: Morgan Kaufmann.
  13. Watkins, C.J.C.H. (1989). Learning from delayed rewards. Ph.D. thesis, University of Cambridge, England.
  14. Werbos, P.J. (1977). Advanced forecasting methods for global crisis warning and models of intelligence. General Systems Yearbook, 22, pp. 25–38.

Copyright information

© Kluwer Academic Publishers 1992

Authors and Affiliations

  • Christopher J.C.H. Watkins (1)
  • Peter Dayan (2)

  1. Highbury, England
  2. Centre for Cognitive Science, University of Edinburgh, Edinburgh, Scotland
