Reinforcement Learning Transfer Using a Sparse Coded Inter-task Mapping

  • Conference paper
Multi-Agent Systems (EUMAS 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7541)

Abstract

Reinforcement learning agents can successfully learn in a variety of difficult tasks. A fundamental problem is that they may learn slowly in complex environments, inspiring the development of speedup methods such as transfer learning. Transfer improves learning by reusing learned behaviors in similar tasks, usually via an inter-task mapping, which defines how a pair of tasks is related. This paper proposes a novel transfer learning technique that autonomously constructs an inter-task mapping using a novel combination of sparse coding, sparse projection learning, and sparse pseudo-input Gaussian processes. Experiments show successful transfer of information between two very different domains: the mountain car task and the pole swing-up task. This paper empirically shows that the learned inter-task mapping can be used to successfully (1) improve the performance of a learned policy on a fixed number of samples, (2) reduce the learning time needed by the algorithms to converge to a policy on a fixed number of samples, and (3) converge faster to a near-optimal policy given a large number of samples.
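The sparse-coding component mentioned above represents task states as sparse combinations of dictionary atoms. The sketch below is only a minimal illustration of that generic idea, using ISTA (iterative shrinkage-thresholding) against a fixed random dictionary; the function and variable names are hypothetical and this is not the authors' implementation, which additionally learns the dictionary and a cross-task projection.

```python
import numpy as np

def sparse_code(x, D, lam=0.1, steps=200):
    """Encode x as a sparse combination of dictionary columns via ISTA,
    minimising 0.5 * ||x - D a||^2 + lam * ||a||_1."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(steps):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(4, 16))               # 4-dimensional states, 16 atoms
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = 2.0 * D[:, 3] - D[:, 7]                # a state built from two atoms
a = sparse_code(x, D)
print(np.sum(np.abs(a) > 1e-3), "atoms active out of", D.shape[1])
```

Once source and target states share such a sparse representation, a projection between the two code spaces can be fit from correspondences, which is the role of the sparse projection learning and Gaussian process stages in the paper.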


References

  1. Ammar, H.B., Taylor, M.E.: Common subspace transfer for reinforcement learning tasks. In: Proceedings of the Adaptive and Learning Agents Workshop, at AAMAS 2011 (May 2011)

  2. Ammar, H.B., Tuyls, K., Taylor, M.E., Driessens, K., Weiss, G.: Reinforcement learning transfer via sparse coding. In: Proceedings of the Eleventh International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS) (June 2012)

  3. Buşoniu, L., Babuška, R., De Schutter, B., Ernst, D.: Reinforcement Learning and Dynamic Programming Using Function Approximators. CRC Press, Boca Raton (2010)

  4. Kim, S.-J., Koh, K., Lustig, M., Boyd, S., Gorinevsky, D.: An interior-point method for large-scale ℓ1-regularized logistic regression. Journal of Machine Learning Research 8, 1519–1555 (2007)

  5. Konidaris, G.: A framework for transfer in reinforcement learning. In: Proceedings of the ICML 2006 Workshop on Structural Knowledge Transfer for Machine Learning (2006)

  6. Kuhlmann, G., Stone, P.: Graph-Based Domain Mapping for Transfer Learning in General Games. In: Kok, J.N., Koronacki, J., Lopez de Mantaras, R., Matwin, S., Mladenič, D., Skowron, A. (eds.) ECML 2007. LNCS (LNAI), vol. 4701, pp. 188–200. Springer, Heidelberg (2007)

  7. Lagoudakis, M.G., Parr, R.: Least-squares policy iteration. J. Mach. Learn. Res. 4, 1107–1149 (2003)

  8. Lee, H., Battle, A., Raina, R., Ng, A.Y.: Efficient sparse coding algorithms. In: NIPS, pp. 801–808 (2007)

  9. Liu, Y., Stone, P.: Value-function-based transfer for reinforcement learning using structure mapping. In: Proceedings of the Twenty-First National Conference on Artificial Intelligence, pp. 415–420 (July 2006)

  10. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer (August 1999)

  11. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press (2006)

  12. Snelson, E., Ghahramani, Z.: Sparse Gaussian processes using pseudo-inputs. In: Advances in Neural Information Processing Systems, pp. 1257–1264. MIT Press (2006)

  13. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press (1998)

  14. Talvitie, E., Singh, S.: An experts algorithm for transfer learning. In: Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (2007)

  15. Taylor, M.E., Kuhlmann, G., Stone, P.: Autonomous transfer for reinforcement learning. In: Proceedings of the Seventh International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 283–290 (May 2008)

  16. Taylor, M.E., Stone, P.: Cross-domain transfer for reinforcement learning. In: Proceedings of the Twenty-Fourth International Conference on Machine Learning, ICML (June 2007)

  17. Taylor, M.E., Stone, P.: Transfer learning for reinforcement learning domains: A survey. J. Mach. Learn. Res. 10, 1633–1685 (2009)

  18. Taylor, M.E., Whiteson, S., Stone, P.: Transfer via inter-task mappings in policy search reinforcement learning. In: Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, pp. 156–163 (May 2007)

  19. Torrey, L., Walker, T., Shavlik, J., Maclin, R.: Using Advice to Transfer Knowledge Acquired in One Reinforcement Learning Task to Another. In: Gama, J., Camacho, R., Brazdil, P.B., Jorge, A.M., Torgo, L. (eds.) ECML 2005. LNCS (LNAI), vol. 3720, pp. 412–424. Springer, Heidelberg (2005)



Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ammar, H.B., Taylor, M.E., Tuyls, K., Weiss, G. (2012). Reinforcement Learning Transfer Using a Sparse Coded Inter-task Mapping. In: Cossentino, M., Kaisers, M., Tuyls, K., Weiss, G. (eds) Multi-Agent Systems. EUMAS 2011. Lecture Notes in Computer Science, vol. 7541. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34799-3_1

  • DOI: https://doi.org/10.1007/978-3-642-34799-3_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34798-6

  • Online ISBN: 978-3-642-34799-3

  • eBook Packages: Computer Science, Computer Science (R0)
