Transfer Approaches
Having provided a brief overview of the RL notation used in this monograph, we now enumerate possible approaches for transfer between RL tasks. This section lists attributes of methods used in the TL literature for each of the five dimensions discussed in Section 1.2.2, and summarizes the surveyed works in Table 3.1. The first two groups of methods apply to tasks that have the same state variables and actions. (Section 3.2 discusses the TL methods in the first block, and Section 3.3 discusses the MTL methods in the second block.) Groups three and four consider methods that transfer between tasks with different state variables and actions. (Section 3.4 discusses methods that use a representation that does not change when the underlying MDP changes, while Section 3.5 presents methods that must explicitly account for such changes.) The last group of methods (discussed in Section 3.6) learns a mapping between tasks, such as the mappings used by methods in the fourth group. Table 3.2 concisely enumerates the possible values for the attributes and provides a key to Table 3.1.
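To make the idea of an inter-task mapping concrete, the following is a minimal sketch of how a learned value function might be transferred between tasks with different state variables and actions. It assumes a tabular Q-function and hand-coded state and action mappings; all names and the specific transfer rule are illustrative, not a method from the surveyed literature.

```python
# Hypothetical sketch: initializing a target-task Q-table from a source-task
# Q-table via inter-task mappings (the kind of mapping that groups four and
# five of the taxonomy rely on). All identifiers are illustrative.

def transfer_q_values(source_q, state_map, action_map):
    """Initialize a target-task Q-table from a source-task Q-table.

    source_q:   dict mapping (source_state, source_action) -> value
    state_map:  dict mapping each target state to an analogous source state
    action_map: dict mapping each target action to an analogous source action
    """
    target_q = {}
    for t_state, s_state in state_map.items():
        for t_action, s_action in action_map.items():
            # Copy the value of the analogous source (state, action) pair,
            # defaulting to 0.0 when the source never visited that pair.
            target_q[(t_state, t_action)] = source_q.get((s_state, s_action), 0.0)
    return target_q

# Toy example: a two-state, two-action source task transferred to a target
# task whose states and actions are merely renamed.
source_q = {("s0", "left"): 1.0, ("s0", "right"): 0.5,
            ("s1", "left"): -0.2, ("s1", "right"): 2.0}
state_map = {"t0": "s0", "t1": "s1"}
action_map = {"west": "left", "east": "right"}
target_q = transfer_q_values(source_q, state_map, action_map)
print(target_q[("t1", "east")])  # value copied from ("s1", "right")
```

The transferred Q-values then serve only as a starting point; the target-task learner refines them with its own experience.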
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this chapter
Taylor, M.E. (2009). Related Work. In: Transfer in Reinforcement Learning Domains. Studies in Computational Intelligence, vol 216. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01882-4_3
Print ISBN: 978-3-642-01881-7
Online ISBN: 978-3-642-01882-4