Abstract
We consider a reinforcement learning setup without any assumptions on the environment (in particular, no MDP assumption). State aggregation, and more generally feature reinforcement learning, is concerned with mapping histories/raw states to reduced/aggregated states. The idea behind both is that the resulting reduced process (approximately) forms a small stationary finite-state MDP, which can then be efficiently solved or learnt. We considerably generalize existing aggregation results by showing that even if the reduced process is not an MDP, the (q-)value functions and (optimal) policies of an associated MDP with the same state-space size solve the original problem, as long as the solution can approximately be represented as a function of the reduced states. This implies an upper bound on the required state-space size that holds uniformly for all RL problems. It may also explain why RL algorithms designed for MDPs sometimes perform well beyond MDPs.
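To make the setup concrete, the following sketch (not taken from the paper; the aggregation map `phi`, transition matrices, and rewards are all invented for illustration) shows the basic pipeline the abstract describes: raw states/histories are mapped by an aggregation map to a small reduced state space, and the associated finite MDP over the reduced states is solved by standard value iteration. Policies and values for raw states are then read off via their aggregated states.

```python
import numpy as np

# Illustrative only: a hypothetical aggregation map phi sending
# six raw states to three reduced states.
phi = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}

# Associated (aggregated) MDP with 2 actions and 3 reduced states:
# P[a][s, s'] are transition probabilities, R[a][s] expected rewards.
P = np.array([
    [[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.1, 0.0, 0.9]],  # action 0
    [[0.1, 0.9, 0.0], [0.0, 0.1, 0.9], [0.9, 0.0, 0.1]],  # action 1
])
R = np.array([
    [0.0, 0.0, 1.0],  # rewards under action 0
    [0.0, 0.0, 1.0],  # rewards under action 1
])

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Solve the finite aggregated MDP: return optimal V and greedy policy."""
    n = P.shape[1]
    V = np.zeros(n)
    while True:
        Q = R + gamma * (P @ V)        # Q[a, s], shape (actions, states)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

V, policy = value_iteration(P, R)

# A raw state inherits value and action from its aggregated state.
raw_state = 3
print(V[phi[raw_state]], policy[phi[raw_state]])
```

The paper's point is stronger than this sketch suggests: even when the reduced process is *not* an MDP, the solution of an associated MDP of this size still (approximately) solves the original problem, provided the optimal (q-)value function is approximately representable over the reduced states.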
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Hutter, M. (2014). Extreme State Aggregation beyond MDPs. In: Auer, P., Clark, A., Zeugmann, T., Zilles, S. (eds) Algorithmic Learning Theory. ALT 2014. Lecture Notes in Computer Science(), vol 8776. Springer, Cham. https://doi.org/10.1007/978-3-319-11662-4_14
DOI: https://doi.org/10.1007/978-3-319-11662-4_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-11661-7
Online ISBN: 978-3-319-11662-4
eBook Packages: Computer Science (R0)