Feature Reinforcement Learning in Practice

  • Phuong Nguyen
  • Peter Sunehag
  • Marcus Hutter
Conference paper

DOI: 10.1007/978-3-642-29946-9_10

Part of the Lecture Notes in Computer Science book series (LNCS, volume 7188)
Cite this paper as:
Nguyen P., Sunehag P., Hutter M. (2012) Feature Reinforcement Learning in Practice. In: Sanner S., Hutter M. (eds) Recent Advances in Reinforcement Learning. EWRL 2011. Lecture Notes in Computer Science, vol 7188. Springer, Berlin, Heidelberg

Abstract

Following a recent surge in using history-based methods for resolving perceptual aliasing in reinforcement learning, we introduce an algorithm based on the feature reinforcement learning framework called ΦMDP [13]. To create a practical algorithm, we devise a stochastic search procedure for a class of context trees based on parallel tempering and a specialized proposal distribution. We provide the first empirical evaluation for ΦMDP. Our proposed algorithm outperforms the classical U-tree algorithm [20] and the recent active-LZ algorithm [6], and is competitive with MC-AIXI-CTW [29], which maintains a Bayesian mixture over all context trees up to a chosen depth. We are encouraged by our ability to compete with this sophisticated method using an algorithm that simply picks a single model and runs Q-learning on the corresponding MDP. Our ΦMDP algorithm is simpler and consumes less time and memory. These results show promise for our future work on attacking larger and more complex problems.
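To make the search procedure concrete, the sketch below illustrates generic parallel tempering over a discrete model space: several Metropolis chains run at different temperatures, with occasional state swaps between adjacent temperatures. This is not the paper's implementation; the `cost` and `propose` functions and the bit-string model encoding are hypothetical stand-ins for the paper's context-tree cost criterion and specialized proposal distribution, which are not reproduced here.

```python
import math
import random

# Hypothetical cost function: lower is better (standing in for a
# code-length/cost criterion over candidate context trees). Bit-strings
# replace trees purely so the sketch runs end to end.
def cost(model):
    return sum(model)  # toy objective: prefer the all-zero model

# Hypothetical local proposal: flip one random component of the model.
def propose(model):
    new = list(model)
    i = random.randrange(len(new))
    new[i] ^= 1
    return new

def parallel_tempering(n_chains=4, n_dims=16, n_steps=2000):
    temps = [1.5 ** k for k in range(n_chains)]  # geometric temperature ladder
    chains = [[random.randint(0, 1) for _ in range(n_dims)]
              for _ in range(n_chains)]
    costs = [cost(c) for c in chains]
    best, best_cost = list(chains[0]), costs[0]

    for step in range(n_steps):
        # Metropolis step within each chain at its own temperature:
        # accept with probability min(1, exp((C_old - C_new) / T)).
        for k in range(n_chains):
            cand = propose(chains[k])
            c = cost(cand)
            if c <= costs[k] or random.random() < math.exp((costs[k] - c) / temps[k]):
                chains[k], costs[k] = cand, c
                if c < best_cost:
                    best, best_cost = list(cand), c
        # Periodically attempt a swap between adjacent temperatures,
        # accepted with probability min(1, exp((1/T_k - 1/T_{k+1})(C_k - C_{k+1}))).
        if step % 10 == 0:
            k = random.randrange(n_chains - 1)
            delta = (1 / temps[k] - 1 / temps[k + 1]) * (costs[k] - costs[k + 1])
            if delta >= 0 or random.random() < math.exp(delta):
                chains[k], chains[k + 1] = chains[k + 1], chains[k]
                costs[k], costs[k + 1] = costs[k + 1], costs[k]
    return best, best_cost

if __name__ == "__main__":
    model, c = parallel_tempering()
    print("best model:", model, "cost:", c)
```

The design rationale is standard for tempering schemes: hot chains explore the model space freely while the cold chain exploits, and the swap moves let good candidates migrate down the ladder instead of getting trapped in local minima. In the ΦMDP setting, the selected model then induces an MDP whose states are contexts, on which ordinary Q-learning is run.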

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Phuong Nguyen (1, 2)
  • Peter Sunehag (1)
  • Marcus Hutter (1, 2, 3)

  1. Australian National University, Australia
  2. NICTA, Australia
  3. ETHZ, Switzerland