Abstract
Transfer algorithms use knowledge previously learned on related tasks to speed up learning of the current task. Recently, many complex reinforcement learning problems have been successfully solved by efficient transfer learners. However, most of these algorithms suffer from a severe flaw: they are implicitly tuned to transfer knowledge between tasks having a given degree of similarity. In other words, if the previous task is very dissimilar (resp. nearly identical) to the current task, then the transfer process might slow down learning (resp. might fall far short of the optimal speed-up). In this paper, we address this specific issue by explicitly optimizing the transfer rate between tasks, answering the question: "can the transfer rate be accurately optimized, and at what cost?" We show that this optimization problem is related to the continuum bandit problem. We then propose a generic adaptive transfer method (AdaTran), which extends several existing transfer learning algorithms to optimize the transfer rate. Finally, we run several experiments validating our approach.
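The abstract does not detail AdaTran itself, but the connection it draws to the continuum bandit problem can be sketched generically: treat the transfer rate in [0, 1] as a continuum of arms, discretize it, and let a bandit algorithm such as UCB1 pick the rate for each learning episode, using the observed speed-up as the reward. The function names and the reward model below are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def pick_rate(counts, means, t):
    """UCB1 over discretized transfer rates; returns the index of the arm to pull."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # play each arm once first
    ucb = [means[i] + math.sqrt(2 * math.log(t) / counts[i])
           for i in range(len(counts))]
    return max(range(len(counts)), key=lambda i: ucb[i])

def optimize_transfer_rate(reward_fn, k=11, episodes=500, seed=0):
    """Discretize the transfer rate in [0, 1] into k arms and run UCB1.
    Each pull runs one learning episode at that rate and observes the
    resulting speed-up as a (noisy) reward."""
    random.seed(seed)
    rates = [i / (k - 1) for i in range(k)]
    counts = [0] * k
    means = [0.0] * k
    for t in range(1, episodes + 1):
        i = pick_rate(counts, means, t)
        r = reward_fn(rates[i])              # noisy observed speed-up
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]  # incremental mean update
    best = max(range(k), key=lambda i: means[i])
    return rates[best]

# Hypothetical reward model: the speed-up peaks at an intermediate rate
# (here 0.6), matching the abstract's point that both too little and
# too much transfer hurt when tasks are neither identical nor unrelated.
def noisy_speedup(rate):
    return 1.0 - (rate - 0.6) ** 2 + random.gauss(0, 0.05)

best_rate = optimize_transfer_rate(noisy_speedup)
```

The discretization step is what links the continuum setting back to the finite-armed bandit: with a Lipschitz-smooth reward over the rate, a fine enough grid loses little against the true continuum optimum.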
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Chevaleyre, Y., Pamponet, A.M., Zucker, J.-D. (2009). Experiments with Adaptive Transfer Rate in Reinforcement Learning. In: Richards, D., Kang, B.H. (eds.) Knowledge Acquisition: Approaches, Algorithms and Applications. PKAW 2008. Lecture Notes in Computer Science, vol. 5465. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01715-5_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-01714-8
Online ISBN: 978-3-642-01715-5