Hutter's optimal, universal, but incomputable AIXI agent models the environment as an initially unknown program that computes a probability distribution over observations. Once this program is found through (incomputable) exhaustive search, classical planning yields an optimal policy. Here we reverse the roles of agent and environment by assuming a computable optimal policy realizable as a program mapping histories to actions. This assumption is powerful for two reasons: (1) the environment need not be probabilistically computable, which allows for dealing with truly stochastic environments; (2) all candidate policies are computable. In stochastic settings, our novel method, Optimal Direct Policy Search (ODPS), identifies the best policy by direct universal search in the space of all computable policies. Unlike AIXI, it is computable, model-free, and requires no planning. We show that ODPS is optimal in the sense that its reward converges to the reward of the optimal policy in a very broad class of partially observable stochastic environments.
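To make the central idea concrete, the following is a minimal sketch, not the paper's exact construction: the toy BanditEnv, the simple constant-policy enumeration, and the particular epoch schedule are illustrative assumptions. It only shows the general shape of direct policy search, namely enumerating candidate policies (programs mapping histories to actions) and ranking them by empirical average reward over a growing number of episodes.

```python
import random


class BanditEnv:
    """Toy stochastic environment: action i yields a Bernoulli reward."""

    def __init__(self, probs=(0.2, 0.8, 0.5)):
        self.probs = probs
        self.num_actions = len(probs)

    def episode_reward(self, policy, horizon=10):
        history, total = [], 0.0
        for _ in range(horizon):
            a = policy(history) % self.num_actions
            r = 1.0 if random.random() < self.probs[a] else 0.0
            history.append((a, r))
            total += r
        return total


def enumerate_policies(n):
    """Stand-in for enumerating the first n computable policies; here simply
    the constant policies 'always play action a'."""
    return [(lambda history, a=a: a) for a in range(n)]


def direct_policy_search(env, num_epochs=8):
    """Epoch n: evaluate the first n candidate policies on n episodes each
    and keep the empirically best one.  Growing both the candidate pool and
    the number of evaluation episodes lets the empirical averages converge
    to the true expected rewards, so the value of the selected policy
    approaches the optimum among the enumerated policies."""
    best = None
    for n in range(1, num_epochs + 1):
        candidates = enumerate_policies(n)
        averages = [
            sum(env.episode_reward(pi) for _ in range(n)) / n
            for pi in candidates
        ]
        best = candidates[max(range(len(candidates)), key=averages.__getitem__)]
    return best


if __name__ == "__main__":
    policy = direct_policy_search(BanditEnv())
    print("selected action:", policy([]))
```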
Keywords
- Optimal Policy
- Reinforcement Learning
- Turing Machine
- Markov Decision Process
- Direct Policy