Handbook of Markov Decision Processes

Methods and Applications

  • Eugene A. Feinberg
  • Adam Shwartz

Part of the International Series in Operations Research & Management Science book series (ISOR, volume 40)

Table of contents

  1. Front Matter
    Pages i-viii
  2. Introduction

    1. Eugene A. Feinberg, Adam Shwartz
      Pages 1-17
  3. Finite State and Action Models

    1. Front Matter
      Pages 19-19
    2. Lodewijk Kallenberg
      Pages 21-87
    3. Mark E. Lewis, Martin L. Puterman
      Pages 89-111
    4. Konstantin E. Avrachenkov, Jerzy Filar, Moshe Haviv
      Pages 113-150
  4. Infinite State Models

    1. Front Matter
      Pages 151-151
    2. Eugene A. Feinberg
      Pages 173-207
    3. Eugene A. Feinberg, Adam Shwartz
      Pages 209-229
    4. Arie Hordijk, Alexander A. Yushkevich
      Pages 231-267
    5. Onésimo Hernández-Lerma, Jean B. Lasserre
      Pages 377-407
    6. Lester E. Dubins, Ashok P. Maitra, William D. Sudderth
      Pages 409-428
  5. Applications

    1. Front Matter
      Pages 429-429
    2. Bernard F. Lamond, Abdeslem Boukhtouta
      Pages 537-558
  6. Back Matter
    Pages 559-565

About this book

Introduction

This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The papers cover major research areas and methodologies, and discuss open questions and future research directions. The papers can be read independently, with the basic notation and concepts of Section 1.2. Most chapters should be accessible to graduate or advanced undergraduate students in the fields of operations research, electrical engineering, and computer science.

1.1 AN OVERVIEW OF MARKOV DECISION PROCESSES

The theory of Markov Decision Processes, also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming, studies the sequential optimization of discrete-time stochastic systems. The basic object is a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines a stochastic process and the values of the objective functions associated with this process. The goal is to select a "good" control policy. In real life, the decisions that humans and computers make on all levels usually have two types of impacts: (i) they cost or save time, money, or other resources, or they bring revenues; and (ii) they have an impact on the future by influencing the dynamics. In many situations, decisions with the largest immediate profit may not be good in view of future events. MDPs model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation.
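To make this paradigm concrete, here is a minimal illustrative sketch, not taken from the book: value iteration on a toy two-state MDP under a discounted-reward criterion. All states, actions, transition probabilities, rewards, and the discount factor are invented for illustration.

```python
# Minimal illustrative sketch (not from the book): value iteration on a
# toy two-state MDP. All transition probabilities, rewards, and the
# discount factor below are invented for illustration.

# transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    0: {
        "stay": [(1.0, 0, 1.0)],                 # small but certain reward
        "move": [(0.7, 1, 0.0), (0.3, 0, 0.0)],  # try to reach state 1
    },
    1: {
        "stay": [(1.0, 1, 2.0)],                 # larger reward in state 1
        "move": [(1.0, 0, 0.0)],                 # return to state 0
    },
}
gamma = 0.9  # discount factor

# Value iteration: apply the Bellman optimality operator until convergence.
V = {s: 0.0 for s in transitions}
for _ in range(10_000):
    V_new = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }
    converged = max(abs(V_new[s] - V[s]) for s in V) < 1e-10
    V = V_new
    if converged:
        break

# Read off a greedy policy with respect to the converged value function.
policy = {}
for s, actions in transitions.items():
    policy[s] = max(
        actions,
        key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in actions[a]),
    )

print("values:", V)
print("policy:", policy)
```

The sketch repeatedly applies the Bellman optimality operator until the value function converges and then reads off a greedy policy; value iteration is one of the standard computational methods for finite state and action MDPs of the kind surveyed in the first part of the book.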

Keywords

Analysis; Markov Chain; Markov Chains; Markov decision process; Optimization Theory; Stochastic Optimization; linear optimization; optimization; programming

Editors and affiliations

  • Eugene A. Feinberg, State University of New York at Stony Brook, USA
  • Adam Shwartz, Technion—Israel Institute of Technology, Israel

Bibliographic information

  • DOI https://doi.org/10.1007/978-1-4615-0805-2
  • Copyright Information Kluwer Academic Publishers 2002
  • Publisher Name Springer, Boston, MA
  • eBook Packages Springer Book Archive
  • Print ISBN 978-1-4613-5248-8
  • Online ISBN 978-1-4615-0805-2
  • Series Print ISSN 0884-8289