
A note on ‘Monotone optimal policies for Markov decision processes’

  • Short Communication
  • Published in: Mathematical Programming

Abstract

The purpose of this short note is to correct some oversights in [1]. More precisely, we point out that stronger assumptions have to be imposed on the decision model (in order to use results from [2]), and we present a counterexample to a comment on [1, Theorem 3.1].


References

  1. R.F. Serfozo, “Monotone optimal policies for Markov decision processes”, Mathematical Programming Study 6 (1976) 202–215.

  2. M. Schäl, “Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal”, Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 32 (1975) 179–196.

  3. M. Schäl, “On the optimality of (s, S)-policies in dynamic inventory models with finite horizon”, SIAM Journal on Applied Mathematics 30 (1976) 518–537.

  4. A.F. Veinott, “On the optimality of (s, S) inventory policies: New conditions and a new proof”, SIAM Journal on Applied Mathematics 14 (1966) 1067–1083.



Cite this article

Kalin, D. A note on ‘Monotone optimal policies for Markov decision processes’. Mathematical Programming 15, 220–222 (1978). https://doi.org/10.1007/BF01609021
