Q-learning

Abstract

Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states.
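
The incremental update behind this description is the standard one-step Q-learning rule: after taking an action and observing the reward and the next state, the stored action-value is moved a small step toward the reward plus the discounted value of the best next action. The Python sketch below is illustrative only; the dictionary-based table, the name q_update, and the example values of alpha and gamma are assumptions, not part of the paper.

```python
from collections import defaultdict

# Tabular action-value estimates, initialised to zero.
Q = defaultdict(float)

def q_update(state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One incremental Q-learning step: nudge Q(state, action) toward
    reward + gamma * max over a' of Q(next_state, a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```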

This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one.
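
Besides the sampling condition stated above, the full theorem also requires the learning rates applied to each state-action pair to decay appropriately (their sum diverging while the sum of their squares stays finite). One simple schedule meeting this is a count-based step size of 1/n per pair; the sketch below is a hedged illustration of that choice, not the paper's prescription, and the names visits and learning_rate are assumptions.

```python
from collections import defaultdict

# Number of updates applied so far to each (state, action) pair.
visits = defaultdict(int)

def learning_rate(state, action):
    """Count-based step sizes 1/n(state, action): over repeated visits they
    sum to infinity while their squares remain summable."""
    visits[(state, action)] += 1
    return 1.0 / visits[(state, action)]
```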

References

  1. Barto, A.G., Bradtke, S.J. & Singh, S.P. (1991). Real-time learning and control using asynchronous dynamic programming (COINS Technical Report 91-57). Amherst: University of Massachusetts.

  2. Barto, A.G. & Singh, S.P. (1990). On the computational economics of reinforcement learning. In D.S. Touretzky, J. Elman, T.J. Sejnowski & G.E. Hinton (Eds.), Proceedings of the 1990 Connectionist Models Summer School. San Mateo, CA: Morgan Kaufmann.

  3. Bellman, R.E. & Dreyfus, S.E. (1962). Applied dynamic programming. RAND Corporation.

  4. Chapman, D. & Kaelbling, L.P. (1991). Input generalization in delayed reinforcement learning: An algorithm and performance comparisons. Proceedings of the 1991 International Joint Conference on Artificial Intelligence (pp. 726–731).

  5. Kushner, H. & Clark, D. (1978). Stochastic approximation methods for constrained and unconstrained systems. Berlin, Germany: Springer-Verlag.

  6. Lin, L. (1992). Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8.

  7. Mahadevan, S. & Connell, J. (1991). Automatic programming of behavior-based robots using reinforcement learning. Proceedings of the 1991 National Conference on AI (pp. 768–773).

  8. Ross, S. (1983). Introduction to stochastic dynamic programming. New York: Academic Press.

  9. Sato, M., Abe, K. & Takeda, H. (1988). Learning control of finite Markov chains with explicit trade-off between estimation and control. IEEE Transactions on Systems, Man and Cybernetics, 18, pp. 677–684.

  10. Sutton, R.S. (1984). Temporal credit assignment in reinforcement learning. PhD Thesis, University of Massachusetts, Amherst, MA.

  11. Sutton, R.S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, pp. 9–44.

  12. Sutton, R.S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. Proceedings of the Seventh International Conference on Machine Learning. San Mateo, CA: Morgan Kaufmann.

  13. Watkins, C.J.C.H. (1989). Learning from delayed rewards. PhD Thesis, University of Cambridge, England.

  14. Werbos, P.J. (1977). Advanced forecasting methods for global crisis warning and models of intelligence. General Systems Yearbook, 22, pp. 25–38.

Cite this article

Watkins, C.J.C.H., Dayan, P. Q-learning. Mach Learn 8, 279–292 (1992). https://doi.org/10.1007/BF00992698

Keywords

  • Q-learning
  • reinforcement learning
  • temporal differences
  • asynchronous dynamic programming