Self-Optimizing and Pareto-Optimal Policies in General Environments Based on Bayes-Mixtures
 Marcus Hutter
Abstract
The problem of making sequential decisions in unknown probabilistic environments is studied. In cycle t, action y_t results in perception x_t and reward r_t, where all quantities in general may depend on the complete history. The perception x_t and reward r_t are sampled from the (reactive) environmental probability distribution μ. This very general setting includes, but is not limited to, (partially observable, kth-order) Markov decision processes. Sequential decision theory tells us how to act in order to maximize the total expected reward, called value, if μ is known. Reinforcement learning is usually used if μ is unknown. In the Bayesian approach one defines a mixture distribution ξ as a weighted sum of distributions $ \nu \in \mathcal{M} $, where $ \mathcal{M} $ is any class of distributions including the true environment μ. We show that the Bayes-optimal policy p^{ξ} based on the mixture ξ is self-optimizing in the sense that the average value converges asymptotically for all $ \mu \in \mathcal{M} $ to the optimal value achieved by the (infeasible) Bayes-optimal policy p^{μ}, which knows μ in advance. The condition that $ \mathcal{M} $ admits self-optimizing policies at all is clearly necessary; we show that it is also sufficient, and no other structural assumptions are made on $ \mathcal{M} $. As an example application, we discuss ergodic Markov decision processes, which allow for self-optimizing policies. Furthermore, we show that p^{ξ} is Pareto-optimal in the sense that there is no other policy yielding higher or equal value in all environments $ \nu \in \mathcal{M} $ and a strictly higher value in at least one.
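The abstract's central object is the Bayes mixture ξ(x) = Σ_ν w_ν ν(x). The mechanism behind the self-optimizing property is that the posterior weights w_ν concentrate on the true environment μ as evidence accumulates, so acting optimally with respect to ξ eventually resembles acting optimally with respect to μ. The following toy sketch (my own illustration, not the paper's construction; the environment class, a set of Bernoulli coins, and the function names are assumptions) shows the weight-concentration step:

```python
import random

# Toy sketch: each environment nu in the class M is a coin with a fixed
# bias; the mixture is xi(x) = sum_nu w_nu * nu(x).  After each percept
# x_t, the weights are updated by Bayes' rule:
#     w_nu  <-  w_nu * nu(x_t | x_<t) / xi(x_t | x_<t)

def xi_predict(weights, biases):
    """Mixture probability that the next bit is 1."""
    return sum(w * b for w, b in zip(weights, biases))

def posterior(weights, biases, x):
    """Bayes update of the mixture weights after observing bit x."""
    likes = [b if x == 1 else 1.0 - b for b in biases]
    norm = sum(w * l for w, l in zip(weights, likes))
    return [w * l / norm for w, l in zip(weights, likes)]

if __name__ == "__main__":
    random.seed(0)
    biases = [0.2, 0.5, 0.8]     # candidate environments in M
    weights = [1/3, 1/3, 1/3]    # prior weights w_nu
    true_bias = biases[2]        # the (unknown) true environment mu
    for _ in range(200):
        x = 1 if random.random() < true_bias else 0
        weights = posterior(weights, biases, x)
    print(weights)  # posterior mass concentrates on mu
```

Note that the paper's result is much stronger than this sketch suggests: it concerns reactive environments where percepts depend on the agent's actions, so concentration alone does not suffice and the self-optimizing condition on $ \mathcal{M} $ is needed.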
 Title
 Self-Optimizing and Pareto-Optimal Policies in General Environments Based on Bayes-Mixtures
 Book Title
 Computational Learning Theory
 Book Subtitle
 15th Annual Conference on Computational Learning Theory, COLT 2002, Sydney, Australia, July 8–10, 2002, Proceedings
 Pages
 pp 364–379
 Copyright
 2002
 DOI
 10.1007/3-540-45435-7_25
 Print ISBN
 978-3-540-43836-6
 Online ISBN
 978-3-540-45435-9
 Series Title
 Lecture Notes in Computer Science
 Series Volume
 2375
 Series ISSN
 0302-9743
 Publisher
 Springer Berlin Heidelberg
 Copyright Holder
 Springer-Verlag Berlin Heidelberg
 Editors

 Jyrki Kivinen ^{(1)}
 Robert H. Sloan ^{(2)}
 Editor Affiliations

 1. Research School of Information Sciences and Engineering, Australian National University
 2. Computer Science Department, University of Illinois at Chicago
 Authors

 Marcus Hutter ^{(5)}
 Author Affiliations

 5. IDSIA, Galleria 2, CH-6928 Manno-Lugano, Switzerland