A Selecting-the-Best Method for Budgeted Model Selection

  • Gianluca Bontempi
  • Olivier Caelen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6911)

Abstract

The paper focuses on budgeted model selection, that is, selection among a set of alternative models when the ratio between the number of model assessments and the number of alternatives, though greater than one, is low. We propose an approach based on the notion of probability of correct selection, borrowed from the domain of Monte Carlo stochastic approximation. The idea is to estimate from data the probability that a greedy selection returns the best alternative and to define a sampling rule which maximises this quantity. Analytical results for the case of two alternatives are extended to a larger number of alternatives by using Clark’s approximation of the maximum of a set of random variables. Preliminary results on synthetic and real model selection tasks show that the technique is competitive with state-of-the-art algorithms such as the UCB bandit algorithm.
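The Clark approximation the abstract refers to (reference 7 below) matches the first two moments of the maximum of two jointly Gaussian variables and treats that maximum as approximately Gaussian, which lets the two-alternative analysis extend to many alternatives by recursion. The following is a minimal illustrative sketch of those moment formulas, not the authors' code; the function name and interface are our own.

```python
import math

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def clark_max(mu1, var1, mu2, var2, rho=0.0):
    """Clark (1961) first two moments of max(X1, X2) for jointly
    Gaussian X1 ~ N(mu1, var1), X2 ~ N(mu2, var2) with correlation rho.
    Returns (mean, variance) of the maximum."""
    a2 = var1 + var2 - 2.0 * rho * math.sqrt(var1 * var2)
    if a2 == 0.0:
        # Degenerate case: X1 - X2 has zero variance.
        return max(mu1, mu2), var1
    a = math.sqrt(a2)
    alpha = (mu1 - mu2) / a
    mean = mu1 * norm_cdf(alpha) + mu2 * norm_cdf(-alpha) + a * norm_pdf(alpha)
    second = ((mu1 * mu1 + var1) * norm_cdf(alpha)
              + (mu2 * mu2 + var2) * norm_cdf(-alpha)
              + (mu1 + mu2) * a * norm_pdf(alpha))
    return mean, second - mean * mean

# For k > 2 alternatives the approximation is applied recursively,
# e.g. max(X1, X2, X3) ~ max(M12, X3), where M12 is treated as Gaussian
# with the moments returned by clark_max.
```

As a sanity check, for two independent standard normals the exact mean of the maximum is 1/sqrt(pi) ≈ 0.5642, which the formulas above reproduce.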

Keywords

Correct Selection, Bandit Problem, Greedy Selection, Model Selection Problem, Sampling Rule
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Machine Learning 47(2–3), 235–256 (2002)
  2. Caelen, O.: Sélection Séquentielle en Environnement Aléatoire Appliquée à l’Apprentissage Supervisé. PhD thesis, ULB (2009)
  3. Caelen, O., Bontempi, G.: Improving the exploration strategy in bandit algorithms. In: Proceedings of Learning and Intelligent OptimizatioN (LION II), pp. 56–68 (2007)
  4. Caelen, O., Bontempi, G.: On the evolution of the expected gain of a greedy action in the bandit problem. Technical report, Département d’Informatique, Université Libre de Bruxelles, Brussels, Belgium (2008)
  5. Caelen, O., Bontempi, G.: A dynamic programming strategy to balance exploration and exploitation in the bandit problem. Annals of Mathematics and Artificial Intelligence (2010)
  6. Claeskens, G., Hjort, N.L.: Model Selection and Model Averaging. Cambridge University Press, Cambridge (2008)
  7. Clark, C.E.: The greatest of a finite set of random variables. Operations Research, 145–162 (1961)
  8. Frank, A., Asuncion, A.: UCI Machine Learning Repository (2010)
  9. Inoue, K., Chick, S.E., Chen, C.-H.: An empirical evaluation of several methods to select the best system. ACM Transactions on Modeling and Computer Simulation 9(4), 381–407 (1999)
  10. Iolis, B., Bontempi, G.: Comparison of selection strategies in Monte Carlo tree search for computer poker. In: Proceedings of the Annual Machine Learning Conference of Belgium and The Netherlands (BeNeLearn 2010) (2010)
  11. Kim, S., Nelson, B.: Selecting the best system. In: Handbooks in Operations Research and Management Science: Simulation. Elsevier, Amsterdam (2006)
  12. Law, A.M., Kelton, W.D.: Simulation Modeling & Analysis, 2nd edn. McGraw-Hill, New York (1991)
  13. Madani, O., Lizotte, D., Greiner, R.: Active model selection. In: Proceedings of the Twentieth Annual Conference on Uncertainty in Artificial Intelligence (UAI 2004), pp. 357–365. AUAI Press (2004)
  14. Maron, O., Moore, A.W.: The racing algorithm: Model selection for lazy learners. Artificial Intelligence Review 11(1–5), 193–225 (1997)
  15. Robbins, H.: Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society 58(5), 527–535 (1952)
  16. Schneider, J., Moore, A.: Active learning in discrete input spaces. In: Proceedings of the 34th Interface Symposium (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Gianluca Bontempi (1)
  • Olivier Caelen (1)
  1. Machine Learning Group, Computer Science Department, Faculty of Sciences, Université Libre de Bruxelles (ULB), Belgium