Chapter

Interactive Collaborative Information Systems

Volume 281 of the series Studies in Computational Intelligence, pp. 3–44

Approximate Dynamic Programming and Reinforcement Learning

  • Lucian Buşoniu, Delft Center for Systems and Control, Delft University of Technology
  • Bart De Schutter, Delft Center for Systems and Control & Marine and Transport Technology Department, Delft University of Technology
  • Robert Babuška, Delft Center for Systems and Control & Marine and Transport Technology Department, Delft University of Technology



Abstract

Dynamic programming (DP) and reinforcement learning (RL) can be used to address problems from a variety of fields, including automatic control, artificial intelligence, operations research, and economics. Many problems in these fields are described by continuous variables, whereas DP and RL can find exact solutions only in the discrete case. Therefore, approximation is essential in practical DP and RL. This chapter provides an in-depth review of the literature on approximate DP and RL in large or continuous-space, infinite-horizon problems. Value iteration, policy iteration, and policy search approaches are presented in turn. Model-based (DP) as well as online and batch model-free (RL) algorithms are discussed. We review theoretical guarantees on the approximate solutions produced by these algorithms, and numerical examples illustrate the behavior of several representative algorithms in practice. Techniques to automatically derive value function approximators are discussed, and value iteration, policy iteration, and policy search are compared. The chapter closes with a discussion of open issues and promising research directions in approximate DP and RL.
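To make the abstract's central point concrete, the sketch below (not taken from the chapter) shows one representative approximate-RL method: Q-learning with a linear function approximator over Gaussian radial basis features on a toy one-dimensional continuous-state task. The task, the feature choice, and all parameter values are illustrative assumptions; with a discrete state space the feature vector would reduce to a table lookup, which is the exact-solution setting the abstract contrasts with.

```python
# Minimal sketch (assumed example, not the chapter's): Q-learning with a
# linear approximator Q(x, a) = theta[a] . phi(x) on a continuous 1-D task.
import numpy as np

rng = np.random.default_rng(0)

# Continuous state x in [-1, 1]; two actions: move left or right by 0.1.
ACTIONS = np.array([-0.1, 0.1])
CENTERS = np.linspace(-1.0, 1.0, 9)   # RBF centers covering the state space
WIDTH = 0.25                          # RBF width (assumed)

def features(x):
    """Gaussian radial basis features of a scalar state."""
    return np.exp(-((x - CENTERS) ** 2) / (2 * WIDTH ** 2))

def step(x, a):
    """Noisy transition; reward is negative distance to the goal at x = 1."""
    x_next = np.clip(x + a + rng.normal(0.0, 0.01), -1.0, 1.0)
    reward = -abs(1.0 - x_next)
    done = x_next > 0.9               # episode ends near the goal
    return x_next, reward, done

# One weight vector per discrete action.
theta = np.zeros((len(ACTIONS), len(CENTERS)))
gamma, alpha, epsilon = 0.95, 0.1, 0.1

for episode in range(500):
    x = rng.uniform(-1.0, 1.0)
    for t in range(100):
        phi = features(x)
        q = theta @ phi               # approximate Q-values of both actions
        # Epsilon-greedy action selection.
        a_idx = rng.integers(len(ACTIONS)) if rng.random() < epsilon \
            else int(np.argmax(q))
        x_next, r, done = step(x, ACTIONS[a_idx])
        # Temporal-difference target using the approximate next-state value.
        target = r if done else r + gamma * np.max(theta @ features(x_next))
        # Gradient-style update of the weights for the taken action only.
        theta[a_idx] += alpha * (target - q[a_idx]) * phi
        if done:
            break
        x = x_next

# The learned greedy policy should choose "right" (+0.1) everywhere.
for x in (-0.8, -0.2, 0.5):
    print(f"x = {x:+.1f}  ->  action {ACTIONS[int(np.argmax(theta @ features(x)))]:+.1f}")
```

The update rule is the standard gradient-style temporal-difference step for linear architectures; the chapter's value iteration, policy iteration, and policy search methods all face the same core issue this sketch isolates, namely representing a value function over infinitely many states with finitely many parameters.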