
Machine Learning, Volume 27, Issue 1, pp 7–50

An Experimental and Theoretical Comparison of Model Selection Methods

  • Michael Kearns
  • Yishay Mansour
  • Andrew Y. Ng
  • Dana Ron

Abstract

We investigate the problem of model selection in the setting of supervised learning of boolean functions from independent random examples. More precisely, we compare methods for finding a balance between the complexity of the hypothesis chosen and its observed error on a random training sample of limited size, when the goal is that of minimizing the resulting generalization error. We undertake a detailed comparison of three well-known model selection methods — a variation of Vapnik's Guaranteed Risk Minimization (GRM), an instance of Rissanen's Minimum Description Length Principle (MDL), and (hold-out) cross validation (CV). We introduce a general class of model selection methods (called penalty-based methods) that includes both GRM and MDL, and provide general methods for analyzing such rules. We provide both controlled experimental evidence and formal theorems to support the following conclusions:

•Even on simple model selection problems, the behavior of the methods examined can be both complex and incomparable. Furthermore, no amount of “tuning” of the rules investigated (such as introducing constant multipliers on the complexity penalty terms, or a distribution-specific “effective dimension”) can eliminate this incomparability.

•It is possible to give rather general bounds on the generalization error, as a function of sample size, for penalty-based methods. The quality of such bounds depends in a precise way on the extent to which the method considered automatically limits the complexity of the hypothesis selected.

•For any model selection problem, the additional error of cross validation compared to any other method can be bounded above by the sum of two terms. The first term is large only if the learning curve of the underlying function classes experiences a “phase transition” between (1−γ)m and m examples (where γ is the fraction of the sample saved for testing in CV). The second and competing term can be made arbitrarily small by increasing γ.

•The class of penalty-based methods is fundamentally handicapped in the sense that there exist two types of model selection problems for which every penalty-based method must incur large generalization error on at least one, while CV enjoys small generalization error on both.
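
To make the penalty-based rules concrete, here is a minimal sketch of model selection by penalized training error over a nested sequence of function classes. The specific penalty forms below (a GRM-like square-root penalty and an MDL-like two-part description length) are illustrative assumptions in the spirit of the rules compared in the paper, not the exact expressions analyzed there; the function names and the `train_errors` input are hypothetical.

```python
import math

def binary_entropy(p):
    """H(p) in bits, with H(0) = H(1) = 0 by convention."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def grm_like_score(train_err, d, m):
    # Training error plus a GRM-style complexity penalty of roughly
    # (d/m) * (1 + sqrt(1 + train_err * m / d)).  The constants here are
    # assumptions; the paper's exact rule may differ.
    return train_err + (d / m) * (1.0 + math.sqrt(1.0 + train_err * m / d))

def mdl_like_score(train_err, d, m):
    # Per-example two-part code length: bits to encode the exceptions to the
    # hypothesis plus (roughly) one bit per parameter for the hypothesis.
    return binary_entropy(train_err) + d / m

def select_complexity(train_errors, m, score=grm_like_score):
    """Pick the complexity d minimizing penalized training error.

    train_errors: dict mapping complexity d >= 1 (e.g. the index of the d-th
    class in a nested sequence) to the observed training error of the best
    hypothesis found in that class on the m training examples.
    """
    return min(train_errors, key=lambda d: score(train_errors[d], d, m))
```

For example, `select_complexity({1: 0.30, 5: 0.18, 20: 0.05, 80: 0.01}, m=100)` trades the drop in training error against the growing penalty term; passing `score=mdl_like_score` swaps in the description-length criterion instead.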
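
The hold-out variant of CV referred to in the conclusions can be sketched in a few lines. In this sketch, `learn` is a hypothetical caller-supplied routine that fits the best hypothesis of a given complexity on a training set, and the candidate complexities are assumed to be supplied by the caller.

```python
def holdout_cv_select(sample, candidate_ds, learn, gamma=0.1):
    """Hold-out cross validation for model selection.

    Trains each candidate complexity d on the first (1 - gamma) * m examples
    and returns the d whose hypothesis has the lowest error on the remaining
    gamma * m examples held out for testing.

    learn(train_set, d) -> hypothesis, a callable mapping an input x to a
    predicted label (a hypothetical interface for this sketch).
    """
    m = len(sample)
    split = int(round((1.0 - gamma) * m))
    train_set, test_set = sample[:split], sample[split:]

    def holdout_error(d):
        h = learn(train_set, d)
        mistakes = sum(1 for (x, y) in test_set if h(x) != y)
        return mistakes / len(test_set)

    return min(candidate_ds, key=holdout_error)
```

Increasing gamma makes the hold-out estimate more reliable (shrinking the second term in the bound above) at the cost of training on fewer examples, which is exactly the trade-off the paper quantifies.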

model selection, complexity regularization, cross validation, minimum description length principle, structural risk minimization, VC dimension

References

  1. Barron, A. R., & Cover, T. M. (1991). Minimum complexity density estimation. IEEE Transactions on Information Theory, 37, 1034-1054.
  2. Blum, A., & Rivest, R. L. (1989). Training a 3-node neural net is NP-complete. In D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 1 (pp. 494-501). San Mateo, CA: Morgan Kaufmann.
  3. Cover, T., & Thomas, J. (1991). Elements of Information Theory. Wiley.
  4. Haussler, D., Kearns, M., Seung, S., & Tishby, N. (1994). Rigorous learning curve bounds from statistical mechanics. In Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory (pp. 76-87).
  5. Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58, 13-30.
  6. Kearns, M. (1995). A bound on the error of cross validation, with consequences for the training-test split. In Advances in Neural Information Processing Systems 8. The MIT Press.
  7. Kearns, M., Schapire, R., & Sellie, L. (1992). Toward efficient agnostic learning. In Proceedings of the 5th Annual Workshop on Computational Learning Theory (pp. 341-352).
  8. Pitt, L., & Valiant, L. (1988). Computational limitations on learning from examples. Journal of the ACM, 35, 965-984.
  9. Quinlan, J., & Rivest, R. (1989). Inferring decision trees using the minimum description length principle. Information and Computation, 80, 227-248.
  10. Rissanen, J. (1978). Modeling by shortest data description. Automatica, 14, 465-471.
  11. Rissanen, J. (1986). Stochastic complexity and modeling. Annals of Statistics, 14, 1080-1100.
  12. Rissanen, J. (1989). Stochastic Complexity in Statistical Inquiry, volume 15 of Series in Computer Science. World Scientific.
  13. Schaffer, C. (1994). A conservation law for generalization performance. In Proceedings of the Eleventh International Conference on Machine Learning (pp. 259-265).
  14. Seung, H. S., Sompolinsky, H., & Tishby, N. (1992). Statistical mechanics of learning from examples. Physical Review A, 45, 6056-6091.
  15. Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society B, 36, 111-147.
  16. Stone, M. (1977). Asymptotics for and against cross-validation. Biometrika, 64, 29-35.
  17. Vapnik, V. (1982). Estimation of Dependences Based on Empirical Data. Springer-Verlag.
  18. Vapnik, V., & Chervonenkis, A. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16, 264-280.
  19. Wolpert, D. (1992). On the connection between in-sample testing and generalization error. Complex Systems, 6, 47-94.

Copyright information

© Kluwer Academic Publishers 1997

Authors and Affiliations

  • Michael Kearns (1)
  • Yishay Mansour (2)
  • Andrew Y. Ng (3)
  • Dana Ron (4)
  1. AT&T Laboratories Research
  2. Department of Computer Science, Tel Aviv University, Tel Aviv, Israel
  3. Department of Computer Science, Carnegie Mellon University, Pittsburgh
  4. Laboratory of Computer Science, MIT, Cambridge
