Lazy Planning under Uncertainty by Optimizing Decisions on an Ensemble of Incomplete Disturbance Trees

  • Boris Defourny
  • Damien Ernst
  • Louis Wehenkel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5323)

Abstract

This paper addresses the problem of solving discrete-time optimal sequential decision-making problems whose disturbance space W is composed of a finite number of elements. In this context, the problem of finding an optimal decision strategy from an initial state \(x_0\) can be stated as an optimization problem which aims at finding an optimal combination of decisions attached to the nodes of a disturbance tree modeling all possible sequences of disturbances \((w_0, w_1, \ldots, w_{T-1}) \in W^T\) over the optimization horizon T. A significant drawback of this approach is that the resulting optimization problem has a search space which is the Cartesian product of \(O(|W|^{T-1})\) decision spaces U, which makes the approach computationally impractical as soon as the optimization horizon grows, even if W has just a handful of elements. To circumvent this difficulty, we propose to exploit an ensemble of randomly generated incomplete disturbance trees of controlled complexity, to solve the optimization problems induced by these trees in parallel, and to combine their predictions at time t = 0 to obtain a (near-)optimal first-stage decision. Because this approach postpones the determination of the decisions for subsequent stages until additional information about the realization of the uncertain process becomes available, we call it lazy. Simulations carried out on a robot corridor navigation problem show that, even for small incomplete trees, this approach can lead to near-optimal decisions.
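To make the ensemble scheme concrete, here is a minimal sketch of the tree-sampling and aggregation steps, assuming a finite disturbance set W and a caller-supplied solver for the small optimization problem induced by each incomplete tree. All identifiers (sample_incomplete_tree, solve_tree, lazy_first_stage_decision, branch_prob) are hypothetical placeholders; the abstract does not detail the authors' actual tree-generation or optimization procedures.

```python
# Minimal sketch of the lazy ensemble scheme described above, assuming a
# finite disturbance set W and a caller-supplied solver for the small
# problem induced by each incomplete tree. All names are hypothetical
# placeholders, not the authors' implementation.
import random
from collections import Counter

def sample_incomplete_tree(W, T, branch_prob, rng):
    """Randomly grow an incomplete disturbance tree of depth T.

    Each node keeps a random subset of the |W| possible successor
    disturbances, so the tree stays far smaller than the complete
    tree with its |W|**T disturbance paths.
    """
    def grow(depth):
        if depth == T:
            return None  # leaf: optimization horizon reached
        kept = [w for w in W if rng.random() < branch_prob]
        if not kept:
            kept = [rng.choice(W)]  # keep at least one branch per node
        return {w: grow(depth + 1) for w in kept}
    return grow(0)

def lazy_first_stage_decision(solve_tree, x0, W, T,
                              n_trees=100, branch_prob=0.3, seed=0):
    """Aggregate first-stage decisions over an ensemble of random trees.

    `solve_tree(tree, x0)` must solve the optimization problem induced
    by one incomplete tree and return its decision at t = 0 (hashable).
    Later-stage decisions are deliberately not committed to here: they
    are recomputed once new disturbances are observed ("lazy" planning).
    """
    rng = random.Random(seed)
    votes = Counter(
        solve_tree(sample_incomplete_tree(W, T, branch_prob, rng), x0)
        for _ in range(n_trees)
    )
    return votes.most_common(1)[0][0]  # majority vote over u0 candidates
```

Majority voting is only one plausible way to combine the ensemble's time-0 predictions; for a continuous decision space U, averaging the first-stage decisions returned by the trees would be a natural alternative.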

Keywords

Stochastic dynamic programming · Ensemble methods

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Boris Defourny (1)
  • Damien Ernst (1)
  • Louis Wehenkel (1)

  1. Department of Electrical Engineering and Computer Science, University of Liège, Liège, Belgium
