Machine Learning, Volume 22, Issue 1–3, pp 33–57

Linear Least-Squares Algorithms for Temporal Difference Learning

  • Steven J. Bradtke
  • Andrew G. Barto

DOI: 10.1023/A:1018056104778

Cite this article as:
Bradtke, S.J. & Barto, A.G. Machine Learning (1996) 22: 33. doi:10.1023/A:1018056104778

Abstract

We introduce two new temporal difference (TD) algorithms based on the theory of linear least-squares function approximation. We define an algorithm we call Least-Squares TD (LS TD) for which we prove probability-one convergence when it is used with a function approximator that is linear in the adjustable parameters. We then define a recursive version of this algorithm, Recursive Least-Squares TD (RLS TD). Although these new TD algorithms require more computation per time step than do Sutton's TD(λ) algorithms, they are more efficient in a statistical sense because they extract more information from training experiences. We describe a simulation experiment showing the substantial improvement in learning rate achieved by RLS TD in an example Markov prediction problem. To quantify this improvement, we introduce the TD error variance of a Markov chain, σ_TD, and experimentally conclude that the convergence rate of a TD algorithm depends linearly on σ_TD. In addition to converging more rapidly, LS TD and RLS TD have no control parameters, such as a learning-rate parameter, thus eliminating the possibility of poor performance caused by an unlucky choice of parameters.
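As a rough illustration of the updates the abstract describes, here is a minimal NumPy sketch (not code from the paper): LS TD accumulates the statistics A and b over observed transitions and solves Aθ = b directly, while RLS TD maintains a running estimate P of A⁻¹ via the Sherman-Morrison rank-one identity, so each step costs O(n²) rather than a full matrix solve. The feature mapping φ, discount factor γ, and the initialization constant ε are assumptions of this sketch, not quantities fixed by the abstract.

```python
import numpy as np

def lstd(features, rewards, next_features, gamma=0.9):
    """Batch LS TD sketch: accumulate A and b over transitions, solve A theta = b.

    features[t]      = phi(s_t)        (shape: T x n)
    next_features[t] = phi(s_{t+1})
    rewards[t]       = reward observed on the transition t -> t+1
    """
    n = features.shape[1]
    A = np.zeros((n, n))
    b = np.zeros(n)
    for phi, r, phi_next in zip(features, rewards, next_features):
        A += np.outer(phi, phi - gamma * phi_next)
        b += r * phi
    # pinv guards against A being singular before enough transitions are seen
    return np.linalg.pinv(A) @ b

class RLSTD:
    """Recursive LS TD sketch: keep P ~ A^{-1} via a Sherman-Morrison update."""
    def __init__(self, n_features, gamma=0.9, epsilon=0.01):
        self.gamma = gamma
        self.P = np.eye(n_features) / epsilon  # A_0 = epsilon*I keeps P_0 well defined
        self.theta = np.zeros(n_features)

    def update(self, phi, reward, phi_next):
        u = phi - self.gamma * phi_next        # feature-difference vector
        P_phi = self.P @ phi
        denom = 1.0 + u @ P_phi
        td_error = reward - u @ self.theta     # = r + gamma*theta.phi' - theta.phi
        self.theta = self.theta + (td_error / denom) * P_phi
        # Rank-one (Sherman-Morrison) update of the inverse estimate
        self.P = self.P - np.outer(P_phi, u @ self.P) / denom
        return self.theta
```

Note how neither routine takes a learning rate: the only tunable quantity in the recursive sketch is the initialization constant ε, which is assumed here for illustration.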

Keywords: Reinforcement learning · Markov decision problems · Temporal difference methods · Least-squares

Copyright information

© Kluwer Academic Publishers 1996

