
Abstract

In this chapter, I examine other approaches to exploration that could be combined with texplore's model. First, I introduce three domain classes that each suggest a different type of exploration. In Section 6.1, I look at how to perform exploration in domains where a needle-in-a-haystack search is required to find an arbitrarily located reward or transition. In Section 6.2, I look at the opposite case: can we explore better in a domain with a richer, more informative set of state features? In Section 6.3, I present an algorithm that can learn which of these exploration approaches to adopt on-line, while interacting with the environment. I then present empirical comparisons of these approaches against texplore in Section 6.4, before summarizing the chapter in Section 6.5.
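
The abstract describes the selection mechanism only at a high level. As a rough, hypothetical illustration of what "learning which of these exploration approaches to adopt on-line" can look like, the sketch below treats each exploration strategy as an arm of a simple bandit and keeps a running value estimate of the return earned under each; the class name, the epsilon and alpha parameters, and the strategy labels are assumptions made for this example, not details taken from the chapter.

    import random

    class ExplorationSelector:
        """Hypothetical sketch: treat each exploration strategy as a bandit
        arm and pick the one whose recent returns have been highest."""

        def __init__(self, strategies, epsilon=0.1, alpha=0.2):
            self.strategies = strategies   # e.g. ["greedy", "novelty", "variance"]
            self.epsilon = epsilon         # chance of trying a random strategy
            self.alpha = alpha             # step size for the running value estimate
            self.values = {s: 0.0 for s in strategies}

        def select(self):
            # Occasionally sample a random strategy; otherwise exploit the best one.
            if random.random() < self.epsilon:
                return random.choice(self.strategies)
            return max(self.strategies, key=lambda s: self.values[s])

        def update(self, strategy, episode_return):
            # Exponential moving average of the return earned under each strategy.
            self.values[strategy] += self.alpha * (episode_return - self.values[strategy])

At the start of each episode the agent would call select() to pick a strategy, act under it, and then call update() with the episode's return, so that over time the selector concentrates on whichever exploration approach is paying off in the current domain.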

Keywords

Reward Function · Exploration Strategy · Random Forest Model · Intrinsic Reward · External Reward


Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  1. Department of Computer Science, University of Texas at Austin, Austin, USA
