Preference-Based Policy Iteration: Leveraging Preference Learning for Reinforcement Learning

  • Weiwei Cheng
  • Johannes Fürnkranz
  • Eyke Hüllermeier
  • Sang-Hyeun Park
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6911)

Abstract

This paper takes a first step toward the integration of two subfields of machine learning, namely preference learning and reinforcement learning (RL). An important motivation for a “preference-based” approach to reinforcement learning is the possibility of extending the type of feedback an agent may learn from. In particular, while conventional RL methods are essentially confined to dealing with numerical rewards, there are many applications in which this type of information is not naturally available, and in which only qualitative reward signals are provided instead. Therefore, building on novel methods for preference learning, our general goal is to equip the RL agent with qualitative policy models, such as ranking functions that allow for sorting its available actions from most to least promising, as well as with algorithms for learning such models from qualitative feedback. Concretely, in this paper, we build on an existing method for approximate policy iteration based on roll-outs. While that approach relies on classification methods for generalization and policy learning, we instead employ a specific type of preference learning method called label ranking. The advantages of our preference-based policy iteration method are illustrated by means of two case studies.
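To make the idea concrete, the sketch below illustrates the two ingredients described in the abstract: pairwise action preferences derived from roll-out value estimates, and a label ranker (here, learning by pairwise comparison with one logistic model per action pair and weighted voting) that sorts actions from most to least promising in a given state. This is a minimal, hypothetical illustration under assumed interfaces (the names `PairwiseLabelRanker` and `preferences_from_rollouts`, and the toy data, are the editor's), not the authors' implementation or the exact algorithm of the paper.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression


class PairwiseLabelRanker:
    """Label ranking by pairwise comparison: one binary model per action pair.

    For a state x and an action pair (a, b) with a < b, the model for (a, b)
    predicts the probability that a is preferred over b in x.  Actions are
    ranked by weighted voting over all pairwise predictions.
    """

    def __init__(self, n_actions):
        self.n_actions = n_actions
        self.pairs = list(combinations(range(n_actions), 2))
        self.models = {}

    def fit(self, preferences):
        """preferences: iterable of (state_vector, preferred_action, dominated_action)."""
        buckets = {pair: ([], []) for pair in self.pairs}
        for x, a, b in preferences:
            pair, label = ((a, b), 1) if a < b else ((b, a), 0)
            buckets[pair][0].append(x)
            buckets[pair][1].append(label)
        for pair, (X, y) in buckets.items():
            if len(set(y)) == 2:  # need examples of both orderings to fit a model
                self.models[pair] = LogisticRegression().fit(np.asarray(X), y)
        return self

    def rank(self, x):
        """Return actions sorted from most to least promising for state x."""
        votes = np.zeros(self.n_actions)
        for (a, b), model in self.models.items():
            p = model.predict_proba(np.asarray([x]))[0, 1]  # P(a preferred over b)
            votes[a] += p
            votes[b] += 1.0 - p
        return np.argsort(-votes)


def preferences_from_rollouts(states, q_estimates):
    """Turn roll-out value estimates into pairwise action preferences.

    q_estimates[i, a] is a Monte-Carlo estimate of the return of taking action a
    in states[i] and following the current policy afterwards; a is preferred
    over b whenever its estimate is strictly larger.
    """
    prefs = []
    for x, q in zip(states, q_estimates):
        for a, b in combinations(range(len(q)), 2):
            if q[a] > q[b]:
                prefs.append((x, a, b))
            elif q[b] > q[a]:
                prefs.append((x, b, a))
    return prefs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    states = rng.normal(size=(200, 4))        # sampled states (toy data)
    q = states @ rng.normal(size=(4, 3))      # stand-in for roll-out estimates, 3 actions
    ranker = PairwiseLabelRanker(n_actions=3).fit(preferences_from_rollouts(states, q))
    print(ranker.rank(states[0]))             # actions ordered from most to least promising
```

One iteration of the preference-based policy iteration scheme would then replace the classifier-training step of roll-out-based approximate policy iteration with `fit`, and define the improved policy by greedily taking the top-ranked action returned by `rank`.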

Keywords

Reinforcement Learning · Inverted Pendulum · Weighted Vote · Policy Iteration · Partial Order Relation

References

  1. Barto, A.G., Sutton, R.S., Anderson, C.: Neuron-like elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man and Cybernetics 13, 835–846 (1983)
  2. Bhatnagar, S., Sutton, R.S., Ghavamzadeh, M., Lee, M.: Natural actor-critic algorithms. Automatica 45(11), 2471–2482 (2009)
  3. Dimitrakakis, C., Lagoudakis, M.G.: Rollout sampling approximate policy iteration. Machine Learning 72(3), 157–171 (2008)
  4. Fern, A., Yoon, S.W., Givan, R.: Approximate policy iteration with a policy language bias: Solving relational Markov decision processes. Journal of Artificial Intelligence Research 25, 75–118 (2006)
  5. Fürnkranz, J., Hüllermeier, E. (eds.): Preference Learning. Springer, Heidelberg (2010)
  6. Gabillon, V., Lazaric, A., Ghavamzadeh, M.: Rollout allocation strategies for classification-based policy iteration. In: Auer, P., Kaski, S., Szepesvári, C. (eds.) Proceedings of the ICML 2010 Workshop on Reinforcement Learning and Search in Very Large Spaces (2010)
  7. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.: The WEKA data mining software: An update. SIGKDD Explorations 11(1), 10–18 (2009)
  8. Hüllermeier, E., Fürnkranz, J., Cheng, W., Brinker, K.: Label ranking by learning pairwise preferences. Artificial Intelligence 172, 1897–1916 (2008)
  9. Kersting, K., Driessens, K.: Non-parametric policy gradients: A unified treatment of propositional and relational domains. In: Cohen, W.W., McCallum, A., Roweis, S.T. (eds.) Proceedings of the 25th International Conference on Machine Learning (ICML 2008), pp. 456–463. ACM, Helsinki (2008)
  10. Konda, V.R., Tsitsiklis, J.N.: On actor-critic algorithms. SIAM Journal on Control and Optimization 42(4), 1143–1166 (2003)
  11. Lagoudakis, M.G., Parr, R.: Reinforcement learning as classification: Leveraging modern classifiers. In: Fawcett, T.E., Mishra, N. (eds.) Proceedings of the 20th International Conference on Machine Learning (ICML 2003), pp. 424–431. AAAI Press, Washington, DC (2003)
  12. Sutton, R.S.: Learning to predict by the methods of temporal differences. Machine Learning 3, 9–44 (1988)
  13. Sutton, R.S., McAllester, D.A., Singh, S.P., Mansour, Y.: Policy gradient methods for reinforcement learning with function approximation. In: Solla, S.A., Leen, T.K., Müller, K.-R. (eds.) Advances in Neural Information Processing Systems 12 (NIPS 1999), pp. 1057–1063. MIT Press, Denver (1999)
  14. Vembu, S., Gärtner, T.: Label ranking algorithms: A survey. In: Fürnkranz and Hüllermeier [5], pp. 45–64
  15. Watkins, C.J., Dayan, P.: Q-learning. Machine Learning 8, 279–292 (1992)
  16. Williams, R.J.: Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8, 229–256 (1992)
  17. Zhao, Y., Kosorok, M., Zeng, D.: Reinforcement learning design for cancer clinical trials. Statistics in Medicine 28, 3295–3315 (2009)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Weiwei Cheng (1)
  • Johannes Fürnkranz (2)
  • Eyke Hüllermeier (1)
  • Sang-Hyeun Park (2)

  1. Department of Mathematics and Computer Science, Marburg University, Germany
  2. Department of Computer Science, TU Darmstadt, Germany