Discrete Event Dynamic Systems, Volume 13, Issue 1–2, pp 9–39

From Perturbation Analysis to Markov Decision Processes and Reinforcement Learning

  • Xi-Ren Cao
Article

DOI: 10.1023/A:1022188803039

Cite this article as:
Cao, XR. Discrete Event Dynamic Systems (2003) 13: 9. doi:10.1023/A:1022188803039

Abstract

Perturbation analysis (PA), Markov decision processes (MDPs), and reinforcement learning (RL) share a common goal: to make decisions that improve system performance based on information obtained by analyzing the current system behavior. In this paper, we study the relations among these closely related fields. We show that MDP solutions can be derived naturally from the performance sensitivity analysis provided by PA. The performance potential plays an important role in both PA and MDPs; it also offers a clear intuitive interpretation of many results. Reinforcement learning, TD(λ), neuro-dynamic programming, and related methods are efficient ways of estimating the performance potentials and related quantities from sample paths. The sensitivity point of view of PA, MDPs, and RL brings new insight to the area of learning and optimization. In particular, gradient-based optimization can be applied to parameterized systems with large state spaces, and gradient-based policy iteration can be applied to some nonstandard MDPs such as systems with correlated actions. Potential-based on-line approaches and their advantages are also discussed.
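As a minimal illustration of the sample-path estimation mentioned above (not code from the article), the sketch below estimates the performance potentials of a small ergodic Markov chain with an average-reward TD(0)-style update and compares the result with the closed-form solution of the Poisson equation. The transition matrix, reward vector, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ergodic Markov chain (assumed, not from the paper)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.3, 0.3, 0.4]])   # transition probabilities
f = np.array([1.0, 3.0, 2.0])     # per-step performance (reward) function
n = len(f)

# Closed-form reference: solve the Poisson equation (I - P) g + eta * 1 = f
# with the normalization pi^T g = 0, where pi is the stationary distribution.
A = np.vstack([np.eye(n) - P.T, np.ones(n)])
pi = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]
eta = pi @ f                                  # long-run average reward
g_exact = np.linalg.solve(np.eye(n) - P + np.outer(np.ones(n), pi), f - eta)

# Sample-path estimation: TD(0)-style updates of the potential estimate g_hat
# together with a running estimate of the average reward eta_hat.
g_hat = np.zeros(n)
eta_hat = 0.0
x = 0
for t in range(200_000):
    x_next = rng.choice(n, p=P[x])
    td_error = f[x] - eta_hat + g_hat[x_next] - g_hat[x]
    g_hat[x] += 0.01 * td_error               # potential update
    eta_hat += 0.001 * (f[x] - eta_hat)       # average-reward update
    x = x_next

g_hat -= pi @ g_hat                           # same normalization as g_exact
print("eta  exact / estimated:", eta, eta_hat)
print("g    exact            :", g_exact)
print("g    estimated        :", g_hat)
```

The estimated potentials agree with the Poisson-equation solution up to an additive constant, which is why both are normalized so that their stationary average is zero before comparison.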

Keywords: potentials, Poisson equations, gradient-based policy iteration, perturbation realization, Q-learning, TD(λ)

Copyright information

© Kluwer Academic Publishers 2003

Authors and Affiliations

  • Xi-Ren Cao
    1. Hong Kong University of Science and Technology, Kowloon, Hong Kong
