Ensemble Pruning Using Reinforcement Learning

  • Ioannis Partalas
  • Grigorios Tsoumakas
  • Ioannis Katakis
  • Ioannis Vlahavas
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3955)


Multiple classifier systems have been developed to improve classification accuracy through methodologies for effective classifier combination. Classical approaches use heuristics, statistical tests, or a meta-learning level to find the optimal combination function. We study this problem from a reinforcement learning perspective: in our formulation, an agent learns the best policy for selecting classifiers by exploring a state space and considering the future cumulative reward from the environment. We evaluate our approach against state-of-the-art combination methods and obtain very promising results.
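To illustrate the general idea described in the abstract (not the authors' exact formulation, which the paper itself specifies), the sketch below frames ensemble pruning as a Markov decision process solved with tabular Q-learning: a state is the subset of classifiers selected so far, an action adds one more classifier, and the reward is the change in majority-vote accuracy on a held-out validation set. All function names, the fixed target ensemble size, and the hyperparameter values are illustrative assumptions.

```python
import random
from collections import defaultdict

def majority_vote_accuracy(preds, members, y):
    """Accuracy of the majority vote of the classifiers in `members`.

    preds[m][i] is classifier m's prediction for validation example i.
    """
    correct = 0
    for i, true_label in enumerate(y):
        votes = defaultdict(int)
        for m in members:
            votes[preds[m][i]] += 1
        if max(votes, key=votes.get) == true_label:
            correct += 1
    return correct / len(y)

def q_learn_prune(preds, y, size, episodes=2000,
                  alpha=0.3, gamma=0.9, eps=0.2, seed=0):
    """Select `size` classifiers via tabular Q-learning (illustrative sketch).

    State:  frozenset of chosen classifier indices.
    Action: add one not-yet-chosen classifier.
    Reward: change in validation majority-vote accuracy.
    """
    rng = random.Random(seed)
    n = len(preds)
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        state = frozenset()
        while len(state) < size:
            actions = [a for a in range(n) if a not in state]
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(state, x)])
            nxt = state | {a}
            prev_acc = majority_vote_accuracy(preds, state, y) if state else 0.0
            reward = majority_vote_accuracy(preds, nxt, y) - prev_acc
            if len(nxt) < size:  # bootstrap from the best next action
                best_next = max(Q[(nxt, b)] for b in range(n) if b not in nxt)
            else:
                best_next = 0.0  # terminal: target ensemble size reached
            Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
            state = nxt
    # Greedy rollout with the learned Q-values yields the pruned ensemble.
    state = frozenset()
    while len(state) < size:
        actions = [a for a in range(n) if a not in state]
        state = state | {max(actions, key=lambda x: Q[(state, x)])}
    return sorted(state)
```

Because the reward telescopes into the final ensemble's validation accuracy, the agent is in effect learning which additions yield the most accurate pruned ensemble of the target size.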


Keywords: Reinforcement Learning, Markov Decision Process, Ensemble Method, Weighted Vote, Discounted Return





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Ioannis Partalas
  • Grigorios Tsoumakas
  • Ioannis Katakis
  • Ioannis Vlahavas
Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece
