Conditions for Optimality in Multi-Stage Stochastic Programming Problems

  • Luuk Groenewegen
  • Jaap Wessels
Conference paper
Part of the Lecture Notes in Economics and Mathematical Systems book series (LNE, volume 179)


In this paper it is demonstrated how necessary and sufficient conditions for the optimality of a strategy in multi-stage stochastic programs may be obtained without topological assumptions. The conditions rest on a dynamic programming approach. These conditions, called conserving and equalizing, exhibit the essential difference between finite-stage and infinite-stage stochastic programs.
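The conserving condition can be illustrated in a much simpler setting than the paper's. The sketch below, with all data invented, checks whether a stationary strategy in a small discounted Markov decision process is conserving, i.e. whether its actions attain the maximum in the optimality equation in every state; this is not the paper's formalism, only an elementary analogue.

```python
# Illustrative sketch (not the paper's construction): a strategy is
# "conserving" here if it is greedy with respect to the optimal value
# function of a small discounted MDP. States, rewards, and the discount
# factor are invented for the example.

def q_value(s, a, v, P, r, beta):
    """One-step lookahead: r(s,a) + beta * E[v(next state)]."""
    return r[s][a] + beta * sum(p * v[t] for t, p in P[s][a].items())

def optimal_values(states, actions, P, r, beta, iters=500):
    """Value iteration for the optimality equation v(s) = max_a q(s,a,v)."""
    v = {s: 0.0 for s in states}
    for _ in range(iters):
        v = {s: max(q_value(s, a, v, P, r, beta) for a in actions[s])
             for s in states}
    return v

def is_conserving(policy, states, actions, P, r, beta, tol=1e-6):
    """Check that the policy's action attains the maximum in every state."""
    v = optimal_values(states, actions, P, r, beta)
    return all(abs(q_value(s, policy[s], v, P, r, beta) - v[s]) < tol
               for s in states)

# Two-state example: in state 0, only action 'b' reaches the rewarding state 1.
states = [0, 1]
actions = {0: ['a', 'b'], 1: ['a']}
P = {0: {'a': {0: 1.0}, 'b': {1: 1.0}}, 1: {'a': {1: 1.0}}}
r = {0: {'a': 0.0, 'b': 0.0}, 1: {'a': 1.0}}

print(is_conserving({0: 'b', 1: 'a'}, states, actions, P, r, beta=0.9))  # True
print(is_conserving({0: 'a', 1: 'a'}, states, actions, P, r, beta=0.9))  # False
```

In a finite-stage problem a conserving strategy is already optimal; in the infinite-stage case an additional equalizing requirement (roughly, that no value is deferred forever) is needed, which is the distinction the abstract refers to.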

Moreover, it is demonstrated how a recursive structure of the problem permits a reformulation of the conditions. These reformulated conditions may serve as a basis for numerical solution techniques.
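For a finite-stage problem, a stagewise recursive structure naturally suggests computing values and a maximizing strategy by backward recursion. The sketch below is a standard dynamic programming scheme with invented data; it does not reproduce the paper's reformulated conditions themselves.

```python
# Backward recursion over the stages of a finite-stage problem.
# All data (states, transition law P, rewards r) are invented.

def backward_recursion(T, states, actions, P, r):
    """Compute stage values and a maximizing strategy, stage by stage."""
    v = {s: 0.0 for s in states}      # terminal values v_T = 0
    policy = {}
    for t in reversed(range(T)):
        new_v = {}
        policy[t] = {}
        for s in states:
            def q(a):                 # one-step lookahead value at stage t
                return r[s][a] + sum(p * v[u] for u, p in P[s][a].items())
            best = max(actions[s], key=q)
            policy[t][s] = best
            new_v[s] = q(best)
        v = new_v
    return v, policy

# Two states; in state 0 action 'b' moves to state 1, which pays reward 1.
states = [0, 1]
actions = {0: ['a', 'b'], 1: ['a']}
P = {0: {'a': {0: 1.0}, 'b': {1: 1.0}}, 1: {'a': {1: 1.0}}}
r = {0: {'a': 0.0, 'b': 0.0}, 1: {'a': 1.0}}

v0, policy = backward_recursion(2, states, actions, P, r)
print(v0)             # {0: 1.0, 1: 2.0}
print(policy[0][0])   # b
```

Because each stage's values depend only on the next stage's, the scheme visits every stage once, which is the sense in which such reformulated conditions lend themselves to numerical solution techniques.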


Keywords: Dynamic Programming · Stochastic Program · Recursive Structure · Stochastic Programming Problem · Dynamic Programming Problem





Copyright information

© Springer-Verlag Berlin Heidelberg 1980
