Extreme State Aggregation beyond MDPs
We consider a reinforcement learning setup without any assumptions on the environment, in particular no MDP assumption. State aggregation, and more generally feature reinforcement learning, is concerned with mapping histories or raw states to reduced/aggregated states. The idea behind both is that the resulting reduced process (approximately) forms a small stationary finite-state MDP, which can then be efficiently solved or learned. We considerably generalize existing aggregation results by showing that even if the reduced process is not an MDP, the (q-)value functions and (optimal) policies of an associated MDP with the same state-space size still solve the original problem, as long as the solution can be approximately represented as a function of the reduced states. This implies an upper bound on the required state-space size that holds uniformly over all RL problems. It may also explain why RL algorithms designed for MDPs sometimes perform well beyond MDPs.
Keywords: state aggregation, reinforcement learning, non-MDP
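To make the setup concrete, here is a minimal Python sketch (an illustration of the general idea, not the paper's construction): tabular Q-learning run on aggregated states s = phi(history), where the environment is history-based and the aggregated process need not be an MDP. The names `env`, `phi`, and `actions` are hypothetical placeholders; `env.reset()` is assumed to return an initial observation and `env.step(a)` a tuple `(observation, reward, done)`.

```python
import random
from collections import defaultdict

def q_learning_on_aggregated(env, phi, actions, episodes=1000,
                             alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning over aggregated states s = phi(history).

    Hypothetical interface (not from the paper):
    - env.reset() -> initial observation
    - env.step(a) -> (observation, reward, done)
    - phi maps the full observation/action history to a small
      aggregated state; the induced process need not be an MDP.
    """
    Q = defaultdict(float)  # Q[(state, action)] defaults to 0.0
    for _ in range(episodes):
        history = [env.reset()]
        done = False
        while not done:
            s = phi(tuple(history))
            # epsilon-greedy action selection on the aggregated state
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda b: Q[(s, b)])
            obs, r, done = env.step(a)
            history += [a, obs]
            s2 = phi(tuple(history))
            # standard Q-learning update, applied to aggregated states
            target = r if done else r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q
```

In the spirit of the paper's result, the policy greedy with respect to such a Q-table can be near-optimal for the original history-based problem whenever the optimal (q-)value function is approximately representable as a function of the aggregated states, even though the aggregated process is not Markovian.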