Machine Learning, Volume 52, Issue 3, pp. 239–281

Inference for the Generalization Error

  • Claude Nadeau
  • Yoshua Bengio


In order to compare learning algorithms, experimental results reported in the machine learning literature often use statistical tests of significance to support the claim that a new learning algorithm generalizes better. Such tests should take into account the variability due to the choice of training set, and not only the variability due to the test examples, as is often the case. Ignoring the training-set variability can lead to gross underestimation of the variance of the cross-validation estimator, and to the wrong conclusion that the new algorithm is significantly better when it is not. We perform a theoretical investigation of the variance of a variant of the cross-validation estimator of the generalization error that takes into account the variability due to the randomness of the training set as well as of the test examples. Our analysis shows that any variance estimator based only on the results of the cross-validation experiment must be biased. This analysis allows us to propose new estimators of this variance. We show, via simulations, that hypothesis tests about the generalization error using these new variance estimators have better properties than tests involving the variance estimators currently in use and listed in Dietterich (1998). In particular, the new tests have correct size and good power: they do not reject the null hypothesis too often when it is true, yet they frequently reject it when it is false.

Keywords: generalization error · cross-validation · variance estimation · hypothesis tests · size · power
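The variance correction the abstract describes is commonly applied as the "corrected resampled t-test": the naive sample variance of the per-split error differences is inflated by a factor that accounts for the overlap between training sets across random splits. The sketch below is a minimal illustration of that idea, not the paper's exact procedure; the function name and the choice of per-split error differences as input are assumptions made for the example.

```python
import math

def corrected_resampled_t(diffs, n_train, n_test):
    """Corrected resampled t-statistic for comparing two learning algorithms.

    diffs   : per-split differences in test error between the two algorithms,
              one value per random train/test split (J splits in total).
    n_train : number of training examples in each split.
    n_test  : number of test examples in each split.

    The naive variance of the mean, var/J, ignores the fact that the J
    training sets overlap; the (1/J + n_test/n_train) factor inflates the
    variance estimate to compensate for that dependence.
    """
    J = len(diffs)
    mean = sum(diffs) / J
    # Unbiased sample variance of the per-split differences.
    var = sum((d - mean) ** 2 for d in diffs) / (J - 1)
    se = math.sqrt((1.0 / J + n_test / n_train) * var)
    # Compare the result against a Student-t distribution with J-1 degrees
    # of freedom to decide whether the mean difference is significant.
    return mean / se
```

Because the correction factor 1/J + n_test/n_train always exceeds 1/J, the corrected statistic is smaller in magnitude than the naive resampled t, making the test more conservative and closer to its nominal size.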


References

  1. Blake, C., Keogh, E., & Merz, C.-J. (1998). UCI repository of machine learning databases.
  2. Breiman, L. (1996). Heuristics of instability and stabilization in model selection. Annals of Statistics, 24:6, 2350–2383.
  3. Breiman, L., Friedman, J., Olshen, R., & Stone, C. (1984). Classification and regression trees. Wadsworth International Group.
  4. Burges, C. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:2, 1–47.
  5. Devroye, L., Györfi, L., & Lugosi, G. (1996). A probabilistic theory of pattern recognition. Springer-Verlag.
  6. Dietterich, T. (1998). Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10:7, 1895–1924.
  7. Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. Monographs on Statistics and Applied Probability 57. New York, NY: Chapman & Hall.
  8. Everitt, B. (1977). The analysis of contingency tables. London: Chapman & Hall.
  9. Goutte, C. (1997). Note on free lunches and cross-validation. Neural Computation, 9:6, 1053–1059.
  10. Hinton, G., Neal, R., Tibshirani, R., & DELVE team members. (1995). Assessing learning procedures using DELVE. Technical report, University of Toronto, Department of Computer Science.
  11. Kearns, M., & Ron, D. (1997). Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Tenth Annual Conference on Computational Learning Theory (pp. 152–162). Morgan Kaufmann.
  12. Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (pp. 1137–1143). Morgan Kaufmann.
  13. Kolen, J., & Pollack, J. (1991). Back propagation is sensitive to initial conditions. Advances in Neural Information Processing Systems (pp. 860–867). San Francisco, CA: Morgan Kaufmann.
  14. Nadeau, C., & Bengio, Y. (1999). Inference for the generalisation error. Technical report 99s-25, CIRANO.
  15. Vapnik, V. (1982). Estimation of dependences based on empirical data. Berlin: Springer-Verlag.
  16. White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica, 50, 1–25.
  17. Wolpert, D., & Macready, W. (1995). No free lunch theorems for search. Technical report SFI-TR-95-02-010, The Santa Fe Institute.
  18. Zhu, H., & Rohwer, R. (1996). No free lunch for cross validation. Neural Computation, 8:7, 1421–1426.

Copyright information

© Kluwer Academic Publishers 2003

Authors and Affiliations

  • Claude Nadeau (1)
  • Yoshua Bengio (2)

  1. Health Canada, AL0900B1, Ottawa, Canada
  2. CIRANO and Dept. IRO, Université de Montréal, Montréal, Canada