Machine Learning, Volume 38, Issue 3, pp 287–308

Convergence Results for Single-Step On-Policy Reinforcement-Learning Algorithms

  • Satinder Singh
  • Tommi Jaakkola
  • Michael L. Littman
  • Csaba Szepesvári

DOI: 10.1023/A:1007678930559

Cite this article as:
Singh, S., Jaakkola, T., Littman, M.L. et al. Machine Learning (2000) 38: 287. doi:10.1023/A:1007678930559

Abstract

An important application of reinforcement learning (RL) is to finite-state control problems, and one of the most difficult problems in learning for control is balancing the exploration/exploitation tradeoff. Existing theoretical results for RL give very little guidance on reasonable ways to perform exploration. In this paper, we examine the convergence of single-step on-policy RL algorithms for control. On-policy algorithms cannot separate exploration from learning and therefore must confront the exploration problem directly. We prove convergence results for several related on-policy algorithms with both decaying exploration and persistent exploration. We also provide examples of exploration strategies that can be followed during learning that result in convergence to both optimal values and optimal policies.
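The single-step on-policy algorithms analysed include Sarsa(0). As a rough illustration of the setting, the following is a minimal sketch of tabular Sarsa(0) with decaying epsilon-greedy exploration (greedy in the limit, while every action keeps being tried). The environment interface (reset, step, actions), the 1/(1+t) exploration decay, and the 1/visits step size are assumptions made for this sketch, not schedules prescribed by the paper.

import random
from collections import defaultdict

def sarsa(env, episodes=5000, gamma=0.95):
    """Tabular Sarsa(0) with epsilon-greedy exploration that decays over
    episodes, so the behaviour policy becomes greedy in the limit."""
    Q = defaultdict(float)      # Q[(state, action)] -> current value estimate
    visits = defaultdict(int)   # visit counts used for the step-size schedule

    def epsilon(t):
        # Decaying exploration: tends to 0 as t grows, but never reaches 0.
        return 1.0 / (1.0 + t)

    def choose(state, t):
        # Epsilon-greedy selection over the actions available in `state`.
        if random.random() < epsilon(t):
            return random.choice(env.actions(state))
        return max(env.actions(state), key=lambda a: Q[(state, a)])

    for t in range(episodes):
        s = env.reset()
        a = choose(s, t)
        done = False
        while not done:
            s_next, reward, done = env.step(a)
            visits[(s, a)] += 1
            alpha = 1.0 / visits[(s, a)]   # step size decaying with visits
            if done:
                target = reward
                a_next = None
            else:
                a_next = choose(s_next, t)
                # On-policy target: uses the action the behaviour policy will actually take next.
                target = reward + gamma * Q[(s_next, a_next)]
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s_next, a_next
    return Q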

Keywords: reinforcement-learning, on-policy, convergence, Markov decision processes

Copyright information

© Kluwer Academic Publishers 2000

Authors and Affiliations

  • Satinder Singh (1)
  • Tommi Jaakkola (2)
  • Michael L. Littman (3)
  • Csaba Szepesvári (4)

  1. AT&T Labs–Research, USA
  2. Department of Computer Science, Massachusetts Institute of Technology, Cambridge, USA
  3. Department of Computer Science, Duke University, Durham, USA
  4. Mindmaker Ltd., Budapest, Hungary
