
Machine Learning, Volume 49, Issue 2–3, pp. 193–208

A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes

  • Michael Kearns
  • Yishay Mansour
  • Andrew Y. Ng

Abstract

A critical issue for the application of Markov decision processes (MDPs) to realistic problems is how the complexity of planning scales with the size of the MDP. In stochastic environments with very large or infinite state spaces, traditional planning and reinforcement learning algorithms may be inapplicable, since their running time typically grows linearly with the state space size in the worst case. In this paper we present a new algorithm that, given only a generative model (a natural and common type of simulator) for an arbitrary MDP, performs on-line, near-optimal planning with a per-state running time that has no dependence on the number of states. The running time is exponential in the horizon time (which depends only on the discount factor γ and the desired degree of approximation to the optimal policy). Our algorithm thus provides a different complexity trade-off than classical algorithms such as value iteration—rather than scaling linearly in both horizon time and state space size, our running time trades an exponential dependence on the former in exchange for no dependence on the latter.
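To make the trade-off concrete, a rough accounting is sketched below. The symbols k (number of actions), C (samples drawn per action at each tree node), and H (look-ahead depth) are notation introduced here only for illustration; they are not defined in the abstract itself.

    T_{\mathrm{VI}} \;\ge\; \Omega\!\left(|S|\right)            \quad\text{per sweep of value iteration: every state must be updated}
    T_{\mathrm{SS}} \;=\; O\!\left((kC)^{H}\right)              \quad\text{per decision of sparse sampling: the size of the sampled look-ahead tree, independent of } |S|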

Our algorithm is based on the idea of sparse sampling. We prove that a randomly sampled look-ahead tree that covers only a vanishing fraction of the full look-ahead tree nevertheless suffices to compute near-optimal actions from any state of an MDP. Practical implementations of the algorithm are discussed, and we draw ties to our related recent results on finding a near-best strategy from a given class of strategies in very large partially observable MDPs (Kearns, Mansour, & Ng. Neural information processing systems 13, to appear).
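The on-line procedure described above can be sketched as a short recursion. The sketch below is a minimal illustration, assuming a generative-model interface generative_model(state, action) -> (next_state, reward), a finite action list, a per-action sample width C, and a look-ahead depth H; these names are assumptions for the example, not identifiers taken from the paper.

    def sparse_sampling_action(state, actions, generative_model, gamma, C, H):
        """Pick a near-optimal action at `state` from a sparsely sampled look-ahead tree."""
        return max(actions,
                   key=lambda a: estimate_q(state, a, actions, generative_model, gamma, C, H))

    def estimate_q(state, action, actions, generative_model, gamma, C, H):
        """Estimate the H-step value of taking `action` in `state` from C sampled successors."""
        if H == 0:
            return 0.0
        total = 0.0
        for _ in range(C):
            # One call to the simulator: sample a successor state and a reward.
            next_state, reward = generative_model(state, action)
            # Value of the sampled successor: its best (H-1)-step action value.
            next_value = max(estimate_q(next_state, a, actions, generative_model,
                                        gamma, C, H - 1)
                             for a in actions)
            total += reward + gamma * next_value
        return total / C

Each decision expands a tree with branching factor len(actions) × C and depth H, so the work per visited state is exponential in the horizon but has no dependence on how many states the MDP contains, matching the trade-off described in the abstract.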

Keywords: reinforcement learning, Markov decision processes, planning

References

  1. Aho, A. V., Hopcroft, J. E., & Ullman, J. D. (1974). The design and analysis of computer algorithms. Reading, MA: Addison-Wesley.
  2. Barto, A. G., Bradtke, S. J., & Singh, S. P. (1995). Learning to act using real-time dynamic programming. Artificial Intelligence, 72, 81–138.
  3. Boutilier, C., Dearden, R., & Goldszmidt, M. (1995). Exploiting structure in policy construction. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (pp. 1104–1111).
  4. Boyen, X., & Koller, D. (1998). Tractable inference for complex stochastic processes. In Proceedings of the 1998 Conference on Uncertainty in Artificial Intelligence. San Mateo, CA: Morgan Kaufmann.
  5. Bonet, B., Loerincs, G., & Geffner, H. (1997). A robust and fast action selection mechanism for planning. In Proceedings of the Fourteenth National Conference on Artificial Intelligence.
  6. Dearden, R., & Boutilier, C. (1994). Integrating planning and execution in stochastic domains. In Proceedings of the Tenth Annual Conference on Uncertainty in Artificial Intelligence.
  7. Davies, S., Ng, A. Y., & Moore, A. (1998). Applying online search to reinforcement learning. In Proceedings of AAAI-98 (pp. 753–760). Menlo Park, CA: AAAI Press.
  8. Kearns, M., Mansour, Y., & Ng, A. Y. Approximate planning in large POMDPs via reusable trajectories. In Neural Information Processing Systems 13, to appear.
  9. Korf, R. E. (1990). Real-time heuristic search. Artificial Intelligence, 42, 189–211.
  10. Koller, D., & Parr, R. (1999). Computing factored value functions for policies in structured MDPs. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence.
  11. Koenig, S., & Simmons, R. (1998). Solving robot navigation problems with initial pose uncertainty using real-time heuristic search. In Proceedings of the Fourth International Conference on Artificial Intelligence Planning Systems.
  12. Kearns, M., & Singh, S. (1999). Finite-sample convergence rates for Q-learning and indirect algorithms. In Neural Information Processing Systems 12. Cambridge, MA: MIT Press.
  13. Meuleau, N., Hauskrecht, M., Kim, K.-E., Peshkin, L., Kaelbling, L. P., Dean, T., & Boutilier, C. (1998). Solving very large weakly coupled Markov decision processes. In Proceedings of AAAI (pp. 165–172).
  14. McAllester, D., & Singh, S. (1999). Personal communication.
  15. McAllester, D., & Singh, S. Approximate planning for factored POMDPs using belief state simplification. Preprint.
  16. Russell, S., & Norvig, P. (1995). Artificial intelligence: A modern approach. Englewood Cliffs, NJ: Prentice-Hall.
  17. Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.
  18. Singh, S., & Yee, R. (1994). An upper bound on the loss from approximate optimal-value functions. Machine Learning, 16, 227–233.

Copyright information

© Kluwer Academic Publishers 2002

Authors and Affiliations

  • Michael Kearns (1)
  • Yishay Mansour (2)
  • Andrew Y. Ng (3)

  1. Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
  2. Department of Computer Science, Tel Aviv University, Tel Aviv, Israel
  3. Department of Computer Science, University of California, Berkeley, Berkeley, USA
