Bounded parameter Markov decision processes

  • Robert Givan
  • Sonia Leach
  • Thomas Dean
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1348)


In this paper, we introduce the notion of a bounded parameter Markov decision process (BMDP) as a generalization of the familiar exact MDP. A bounded parameter MDP is a set of exact MDPs specified by giving upper and lower bounds on transition probabilities and rewards (all the MDPs in the set share the same state and action space). BMDPs form an efficiently solvable special case of the previously studied class of MDPs with imprecise parameters (MDPIPs). Bounded parameter MDPs can be used to represent variation or uncertainty in the parameters of a sequential decision problem when no prior probabilities on the parameter values are available. They can also be used in aggregation schemes to represent the variation in transition probabilities among the base states aggregated into the same aggregate state.
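As a concrete illustration of the definition above, the following is a minimal Python sketch of a BMDP restricted to a single action per state (enough to show the interval structure under a fixed policy). The class name, field names, and the well-formedness condition chosen here are illustrative assumptions, not notation from the paper.

```python
from dataclasses import dataclass

@dataclass
class BoundedParameterMDP:
    # Hypothetical single-action BMDP over states 0..n-1 (illustrative names).
    p_lo: list  # p_lo[s][t]: lower bound on Pr(next = t | state = s)
    p_hi: list  # p_hi[s][t]: upper bound on Pr(next = t | state = s)
    r_lo: list  # r_lo[s]: lower bound on the reward at state s
    r_hi: list  # r_hi[s]: upper bound on the reward at state s

    def is_well_formed(self):
        # Each interval must be non-empty, and each state must admit at least
        # one probability distribution: sum of lower bounds <= 1 <= sum of
        # upper bounds.
        for s in range(len(self.p_lo)):
            if any(lo > hi for lo, hi in zip(self.p_lo[s], self.p_hi[s])):
                return False
            if sum(self.p_lo[s]) > 1.0 or sum(self.p_hi[s]) < 1.0:
                return False
        return all(lo <= hi for lo, hi in zip(self.r_lo, self.r_hi))

    def contains(self, p, r):
        # An exact MDP (p, r) belongs to the set iff every parameter lies
        # inside its interval and each row of p sums to one.
        for s in range(len(p)):
            if abs(sum(p[s]) - 1.0) > 1e-9:
                return False
            if any(not (self.p_lo[s][t] <= p[s][t] <= self.p_hi[s][t])
                   for t in range(len(p[s]))):
                return False
        return all(self.r_lo[s] <= r[s] <= self.r_hi[s] for s in range(len(r)))
```

The membership test makes the "set of exact MDPs" reading explicit: a BMDP denotes exactly those MDPs whose every transition probability and reward falls inside its interval.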

We introduce interval value functions as a natural extension of traditional value functions. An interval value function assigns a closed real interval to each state, representing the assertion that the value of that state falls within that interval. An interval value function can be used to bound the performance of a policy over the set of exact MDPs associated with a given bounded parameter MDP. We describe an iterative dynamic programming algorithm called interval policy evaluation, which computes an interval value function for a given BMDP and a specified policy. Interval policy evaluation on a policy π computes the most restrictive interval value function that is sound, i.e., that bounds the value function for π in every exact MDP in the set defined by the bounded parameter MDP. We define optimistic and pessimistic notions of an optimal policy, and provide a variant of value iteration [Bellman, 1957], called interval value iteration, which computes policies for a BMDP that are optimal in these senses.
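The interval policy evaluation idea described above can be sketched as follows, again assuming a fixed policy so that each state has one bounded transition distribution. The key step is selecting, within the intervals, the distribution that minimizes (for the lower bound) or maximizes (for the upper bound) the expected successor value; the greedy mass-shifting selection used here is a standard construction for interval-bounded distributions, and all function names are assumptions for this sketch.

```python
def bound_distribution(p_lo, p_hi, values, worst=True):
    """Pick the distribution inside [p_lo, p_hi] that minimizes (worst=True)
    or maximizes (worst=False) the expected value: start from the lower
    bounds and greedily push the leftover probability mass toward the
    lowest-valued (or highest-valued) successors."""
    order = sorted(range(len(values)), key=lambda t: values[t],
                   reverse=not worst)
    p = list(p_lo)
    slack = 1.0 - sum(p_lo)  # mass still to be allocated
    for t in order:
        add = min(p_hi[t] - p_lo[t], slack)
        p[t] += add
        slack -= add
    return p

def interval_policy_evaluation(p_lo, p_hi, r_lo, r_hi, gamma, sweeps=500):
    """Iterate the interval Bellman backup for a fixed policy; returns
    per-state lower and upper value bounds that hold in every exact MDP
    within the given parameter intervals."""
    n = len(r_lo)
    v_lo, v_hi = [0.0] * n, [0.0] * n
    for _ in range(sweeps):
        v_lo = [r_lo[s] + gamma * sum(p * v for p, v in zip(
                    bound_distribution(p_lo[s], p_hi[s], v_lo, worst=True),
                    v_lo))
                for s in range(n)]
        v_hi = [r_hi[s] + gamma * sum(p * v for p, v in zip(
                    bound_distribution(p_lo[s], p_hi[s], v_hi, worst=False),
                    v_hi))
                for s in range(n)]
    return v_lo, v_hi
```

Interval value iteration follows the same pattern but additionally maximizes over actions at each backup, using the pessimistic or optimistic bound depending on which notion of optimality is sought.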




  1. [Bellman, 1957]
    Bellman, Richard 1957. Dynamic Programming. Princeton University Press.
  2. [Bertsekas and Castañon, 1989]
    Bertsekas, D. P. and Castañon, D. A. 1989. Adaptive aggregation for infinite horizon dynamic programming. IEEE Transactions on Automatic Control 34(6):589–598.
  3. [Boutilier and Dearden, 1994]
    Boutilier, Craig and Dearden, Richard 1994. Using abstractions for decision theoretic planning with time constraints. In Proceedings AAAI-94. AAAI. 1016–1022.
  4. [Boutilier et al., 1995a]
    Boutilier, Craig; Dean, Thomas; and Hanks, Steve 1995a. Planning under uncertainty: Structural assumptions and computational leverage. In Proceedings of the Third European Workshop on Planning.
  5. [Boutilier et al., 1995b]
    Boutilier, Craig; Dearden, Richard; and Goldszmidt, Moises 1995b. Exploiting structure in policy construction. In Proceedings IJCAI-95. 1104–1111.
  6. [Dean and Givan, 1997]
    Dean, Thomas and Givan, Robert 1997. Model minimization in Markov decision processes. In Proceedings AAAI-97. AAAI.
  7. [Dean et al., 1997]
    Dean, Thomas; Givan, Robert; and Leach, Sonia 1997. Model reduction techniques for computing approximately optimal solutions for Markov decision processes. In Thirteenth Conference on Uncertainty in Artificial Intelligence.
  8. [Littman et al., 1995]
    Littman, Michael; Dean, Thomas; and Kaelbling, Leslie 1995. On the complexity of solving Markov decision problems. In Eleventh Conference on Uncertainty in Artificial Intelligence. 394–402.
  9. [Littman, 1997]
    Littman, Michael L. 1997. Probabilistic propositional planning: Representations and complexity. In Proceedings AAAI-97. AAAI.
  10. [Lovejoy, 1991]
    Lovejoy, William S. 1991. A survey of algorithmic methods for partially observed Markov decision processes. Annals of Operations Research 28:47–66.
  11. [Puterman, 1994]
    Puterman, Martin L. 1994. Markov Decision Processes. John Wiley & Sons, New York.
  12. [Satia and Lave, 1973]
    Satia, J. K. and Lave, R. E. 1973. Markovian decision processes with uncertain transition probabilities. Operations Research 21:728–740.
  13. [White and Eldeib, 1986]
    White, C. C. and Eldeib, H. K. 1986. Parameter imprecision in finite state, finite action dynamic programs. Operations Research 34:120–129.
  14. [White and Eldeib, 1994]
    White, C. C. and Eldeib, H. K. 1994. Markov decision processes with imprecise transition probabilities. Operations Research 43:739–749.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Robert Givan¹
  • Sonia Leach¹
  • Thomas Dean¹

  1. Department of Computer Science, Brown University, Providence, USA