Machine Learning, Volume 8, Issue 3, pp 279–292


  • Christopher J. C. H. Watkins
  • Peter Dayan
Technical Note: Q-Learning

DOI: 10.1007/BF00992698

Cite this article as:
Watkins, C.J.C.H. & Dayan, P. Mach Learn (1992) 8: 279. doi:10.1007/BF00992698


Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states.
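The incremental evaluation the abstract describes can be sketched as a single tabular update step. This is a minimal illustration, not the paper's own code; the environment, state/action encoding, and parameter values below are assumptions chosen for the example:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step: nudge Q(s, a) toward the sampled one-step
    return r + gamma * max_b Q(s', b), by step size alpha."""
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q

# Tiny hand-picked transition: from state 0, action 1 yields reward 1.0
# and lands in state 1.  All Q values start at 0.
Q = defaultdict(float)
actions = [0, 1]
q_update(Q, s=0, a=1, r=1.0, s_next=1, actions=actions)
print(Q[(0, 1)])  # 0.1 after a single update from Q = 0
```

Note that the update needs no model of the transition probabilities, which is what makes the method an *incremental* form of dynamic programming.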

This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one.
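In outline (using conventional stochastic-approximation notation rather than the paper's own), the theorem concerns the update

```latex
Q_{n}(s,a) \leftarrow (1-\alpha_n)\,Q_{n-1}(s,a)
  + \alpha_n \bigl[\, r + \gamma \max_{b} Q_{n-1}(s',b) \,\bigr],
```

and the standard conditions under which such schemes converge with probability 1 require, for each state–action pair, that the learning rates satisfy

```latex
\sum_{n} \alpha_n = \infty, \qquad \sum_{n} \alpha_n^{2} < \infty,
```

together with bounded rewards and, in the discounted case, $\gamma < 1$.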


Keywords: Q-learning, reinforcement learning, temporal differences, asynchronous dynamic programming

Copyright information

© Kluwer Academic Publishers 1992

Authors and Affiliations

  • Christopher J. C. H. Watkins (1)
  • Peter Dayan (2)
  1. Highbury, London, England
  2. Centre for Cognitive Science, University of Edinburgh, Edinburgh, Scotland
