Skill Acquisition Via Transfer Learning and Advice Taking

  • Lisa Torrey
  • Jude Shavlik
  • Trevor Walker
  • Richard Maclin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4212)


We describe a reinforcement learning system that transfers skills from a previously learned source task to a related target task. The system uses inductive logic programming to analyze experience in the source task, and transfers rules for when to take actions. The target task learner accepts these rules through an advice-taking algorithm, which allows learners to benefit from outside guidance that may be imperfect. Our system accepts a human-provided mapping, which specifies the similarities between the source and target tasks and may also include advice about the differences between them. Using three tasks in the RoboCup simulated soccer domain, we demonstrate that this system can speed up reinforcement learning substantially.
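The transfer mechanism the abstract describes can be pictured with a small sketch: a transferred skill is a rule stating when to take an action, and the target-task learner biases its action choices toward advised actions. Everything below is illustrative, not the paper's actual method — the rule is hand-written rather than learned by inductive logic programming, and the paper's advice-taking algorithm folds advice into a kernel-based value-function approximator rather than adding a simple score bonus.

```python
import random

# A transferred skill, sketched as a rule: a predicate over state features
# paired with the action it recommends. (The paper learns such rules with
# inductive logic programming; this one is hand-written and hypothetical.)
def pass_advice(state):
    """Illustrative rule: prefer 'pass' when a teammate is open."""
    return "pass" if state.get("teammate_open") else None

class AdviceTakingLearner:
    """Minimal epsilon-greedy Q-learner that biases greedy action
    selection toward advised actions via an additive score bonus.
    This is a simplified stand-in for the paper's advice-taking
    algorithm, not a reimplementation of it."""

    def __init__(self, actions, advice_rules, bonus=0.5, epsilon=0.1):
        self.actions = actions
        self.advice_rules = advice_rules
        self.bonus = bonus        # how strongly to trust (possibly imperfect) advice
        self.epsilon = epsilon    # exploration rate
        self.q = {}               # (state_key, action) -> learned value

    def choose(self, state):
        # Occasionally explore at random.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        key = tuple(sorted(state.items()))
        # Actions currently recommended by any transferred rule.
        advised = {rule(state) for rule in self.advice_rules}
        # Greedy choice over Q-values plus an advice bonus.
        scores = {a: self.q.get((key, a), 0.0)
                     + (self.bonus if a in advised else 0.0)
                  for a in self.actions}
        return max(scores, key=scores.get)

learner = AdviceTakingLearner(["pass", "shoot", "hold"],
                              [pass_advice], epsilon=0.0)
print(learner.choose({"teammate_open": True}))  # advice biases toward "pass"
```

Because the advice only adds a bonus rather than overriding the Q-values, the learner can eventually outvote imperfect advice as its own value estimates grow, which matches the abstract's point that outside guidance may be imperfect.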


Keywords: Reinforcement Learning · Skill Acquisition · Transfer Learning · Inductive Logic Programming · Target Task


  1. Driessens, K., Dzeroski, S.: Integrating experimentation and guidance in relational reinforcement learning. In: Proc. ICML (2002)
  2. Fung, G., Sandilya, S., Rao, B.: Rule extraction from linear support vector machines. In: Proc. KDD (2005)
  3. Kuhlmann, G., Stone, P., Mooney, R., Shavlik, J.: Guiding a reinforcement learner with natural language advice: Initial results in RoboCup soccer. In: AAAI Workshop on Supervisory Control of Learning and Adaptive Systems (2004)
  4. Maclin, R., Shavlik, J.: Creating advice-taking reinforcement learners. Machine Learning 22, 251–281 (1996)
  5. Maclin, R., Shavlik, J., Torrey, L., Walker, T., Wild, E.: Giving advice about preferred actions to reinforcement learners via knowledge-based kernel regression. In: Proc. AAAI (2005)
  6. Muggleton, S., De Raedt, L.: Inductive logic programming: Theory and methods. Journal of Logic Programming 19–20, 629–679 (1994)
  7. Singh, S.: Transfer of learning by composing solutions of elemental sequential tasks. Machine Learning 8(3–4), 323–339 (1992)
  8.
  9. Stone, P., Sutton, R.: Scaling reinforcement learning toward RoboCup soccer. In: Proc. ICML (2001)
  10. Sun, R.: Knowledge extraction from reinforcement learning. In: New Learning Paradigms in Soft Computing, pp. 170–180 (2002)
  11. Sutton, R.: Learning to predict by the methods of temporal differences. Machine Learning 3, 9–44 (1988)
  12. Sutton, R.: Generalization in reinforcement learning: Successful examples using sparse coarse coding. In: Proc. NIPS (1996)
  13. Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
  14. Taylor, M., Stone, P.: Behavior transfer for value-function-based reinforcement learning. In: Proc. AAMAS (2005)
  15. Torrey, L., Walker, T., Shavlik, J., Maclin, R.: Using advice to transfer knowledge acquired in one reinforcement learning task to another. In: Gama, J., Camacho, R., Brazdil, P.B., Jorge, A.M., Torgo, L. (eds.) ECML 2005. LNCS, vol. 3720, pp. 412–424. Springer, Heidelberg (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Lisa Torrey (1)
  • Jude Shavlik (1)
  • Trevor Walker (1)
  • Richard Maclin (2)
  1. University of Wisconsin, Madison, USA
  2. University of Minnesota, Duluth, USA
