Experiments with Adaptive Transfer Rate in Reinforcement Learning

  • Conference paper
Knowledge Acquisition: Approaches, Algorithms and Applications (PKAW 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5465)

Abstract

Transfer algorithms allow knowledge previously learned on related tasks to be reused to speed up learning of the current task. Recently, many complex reinforcement learning problems have been successfully solved by efficient transfer learners. However, most of these algorithms suffer from a severe flaw: they are implicitly tuned to transfer knowledge between tasks having a given degree of similarity. In other words, if the previous task is very dissimilar to the current task, the transfer process might slow down learning, whereas if it is nearly identical, the speed-up might be far from optimal. In this paper, we address this specific issue by explicitly optimizing the transfer rate between tasks, answering the question: “can the transfer rate be accurately optimized, and at what cost?” We show that this optimization problem is related to the continuum-armed bandit problem. We then propose a generic adaptive transfer method (AdaTran), which extends several existing transfer learning algorithms so that they optimize the transfer rate. Finally, we run several experiments validating our approach.
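
The abstract describes the approach concretely enough to sketch: treat the transfer rate as a continuous parameter in [0, 1] and let a bandit algorithm pick a rate for each learning episode, rewarding rates that yield faster learning. The following is a minimal sketch of that idea, not the paper's AdaTran algorithm: the uniform discretization, the choice of EXP3 as the bandit, the `TransferRateBandit` class, and the use of a normalized episode return as the bandit reward are all assumptions made for illustration.

```python
import math
import random


class TransferRateBandit:
    """EXP3 over a uniform discretization of transfer rates in [0, 1].

    A minimal sketch of the idea in the abstract: choosing the transfer
    rate is cast as a continuum-armed bandit, approximated here by a
    fixed set of discrete arms. The paper's actual AdaTran method may
    differ from this.
    """

    def __init__(self, n_arms=11, gamma=0.1):
        self.rates = [i / (n_arms - 1) for i in range(n_arms)]  # 0.0, 0.1, ..., 1.0
        self.gamma = gamma                  # EXP3 exploration coefficient
        self.weights = [1.0] * n_arms
        self._last_arm = None
        self._last_prob = None

    def _probs(self):
        # Standard EXP3 mixture of weight-proportional and uniform sampling.
        total = sum(self.weights)
        k = len(self.weights)
        return [(1.0 - self.gamma) * w / total + self.gamma / k
                for w in self.weights]

    def select_rate(self):
        """Sample the transfer rate to use for the next learning episode."""
        probs = self._probs()
        arm = random.choices(range(len(self.rates)), weights=probs)[0]
        self._last_arm, self._last_prob = arm, probs[arm]
        return self.rates[arm]

    def update(self, reward):
        """Feed back a reward in [0, 1], e.g. a normalized episode return."""
        # Importance-weighted reward estimate for the arm that was played.
        estimate = reward / self._last_prob
        k = len(self.weights)
        self.weights[self._last_arm] *= math.exp(self.gamma * estimate / k)


# Hypothetical usage inside an episodic RL loop:
bandit = TransferRateBandit()
for episode in range(100):
    rate = bandit.select_rate()
    # With probability `rate`, the learner would follow the source-task
    # policy during this episode; otherwise it follows its own greedy
    # policy. The resulting normalized return is faked here for brevity.
    episode_return = random.random()
    bandit.update(episode_return)
```

In a full transfer learner, the sampled rate would control how strongly source-task knowledge influences behavior (for instance, the probability of acting according to the source policy in a given episode), and a measured speed-up signal could replace the raw return as the bandit reward.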

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Chevaleyre, Y., Pamponet, A.M., Zucker, JD. (2009). Experiments with Adaptive Transfer Rate in Reinforcement Learning. In: Richards, D., Kang, BH. (eds) Knowledge Acquisition: Approaches, Algorithms and Applications. PKAW 2008. Lecture Notes in Computer Science (LNAI), vol. 5465. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01715-5_1

  • DOI: https://doi.org/10.1007/978-3-642-01715-5_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-01714-8

  • Online ISBN: 978-3-642-01715-5
