Abstract
In this paper we focus on distributed multiagent planning under uncertainty. For single-agent planning under uncertainty, the partially observable Markov decision process (POMDP) is the dominant model (see [Spaan and Vlassis, 2005] and references therein). Recently, several generalizations of the POMDP to multiagent settings have been proposed; here we focus on the decentralized POMDP (Dec-POMDP) model for multiagent planning under uncertainty [Bernstein et al., 2002, Goldman and Zilberstein, 2004]. Solving a Dec-POMDP amounts to finding a set of policies for the agents that maximizes the expected shared reward. However, solving a Dec-POMDP optimally is provably hard (NEXP-complete): the number of possible deterministic policies for a single agent grows doubly exponentially with the planning horizon, and exponentially with the number of actions and observations available. As a result, the focus has shifted to approximate solution techniques [Nair et al., 2003, Emery-Montemerlo et al., 2005, Oliehoek and Vlassis, 2007].
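To make the doubly exponential growth concrete: a deterministic policy maps each observation history of length less than the horizon h to an action, so an agent with |A| actions and |O| observations has |A| raised to the power (|O|^h − 1)/(|O| − 1) deterministic policies. A minimal sketch of this count (function name is illustrative, not from the paper):

```python
def num_deterministic_policies(num_actions: int, num_obs: int, horizon: int) -> int:
    # Observation histories of length 0, 1, ..., horizon-1:
    # sum_{t=0}^{h-1} |O|^t histories, each independently assigned an action.
    histories = sum(num_obs ** t for t in range(horizon))
    return num_actions ** histories

# Even the smallest nontrivial problem (2 actions, 2 observations) explodes:
for h in (1, 2, 3, 4):
    print(h, num_deterministic_policies(2, 2, h))
# → 1 2
# → 2 8
# → 3 128
# → 4 32768
```

The exponent itself grows exponentially in h, which is why exact search over joint policies is infeasible beyond very short horizons and motivates approximate methods such as the cross-entropy approach of this paper.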
References
D. S. Bernstein, R. Givan, N. Immerman, and S. Zilberstein. The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4):819–840, 2002.
P.-T. de Boer, D. P. Kroese, S. Mannor, and R. Y. Rubinstein. A tutorial on the cross-entropy method. Annals of Operations Research, 134(1):19–67, 2005.
R. Emery-Montemerlo, G. Gordon, J. Schneider, and S. Thrun. Game theoretic control for robot teams. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 1175–1181, 2005.
C. V. Goldman and S. Zilberstein. Decentralized control of cooperative systems: Categorization and complexity analysis. Journal of Artificial Intelligence Research, 22:143–174, 2004.
D. Koller and A. Pfeffer. Representations and solutions for game-theoretic problems. Artificial Intelligence, 94(1–2):167–215, 1997.
S. Mannor, R. Rubinstein, and Y. Gat. The cross entropy method for fast policy search. In Proceedings of the International Conference on Machine Learning, pages 512–519, 2003.
R. Nair, M. Tambe, M. Yokoo, D. V. Pynadath, and S. Marsella. Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 705–711, 2003.
F. A. Oliehoek and N. Vlassis. Q-value functions for decentralized POMDPs. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, Honolulu, Hawai’i, 2007.
M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, 1994.
M. T. J. Spaan and N. Vlassis. Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24:195–220, 2005.
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Oliehoek, F.A., Kooij, J.F.P., Vlassis, N. (2008). A Cross-Entropy Approach to Solving Dec-POMDPs. In: Badica, C., Paprzycki, M. (eds) Advances in Intelligent and Distributed Computing. Studies in Computational Intelligence, vol 78. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74930-1_15
Print ISBN: 978-3-540-74929-5
Online ISBN: 978-3-540-74930-1