The Study of Leave-One-Out Error-Based Classification Learning Algorithm for Generalization Performance

  • Bin Zou
  • Jie Xu
  • Luoqing Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4221)


This note presents a theoretical analysis of the generalization ability of classification learning algorithms. Using Markov’s inequality, we derive an explicit bound on the relative difference between the generalization error and the leave-one-out error of a classification learning algorithm under the condition of leave-one-out stability, and we then use this bound to estimate the algorithm’s generalization error. We conclude by comparing this result with previous results.
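Since only the abstract is available here, the quantities it names can be recalled in standard notation; the symbols below are a conventional reconstruction, not the paper’s own. For a sample S = (z_1, ..., z_m) drawn i.i.d. from a distribution \rho, an algorithm returning a classifier f_S, and a loss \ell, the generalization error and the leave-one-out error are

    R(f_S) = \mathbb{E}_{z \sim \rho}\, \ell(f_S, z),
    \qquad
    R_{\mathrm{loo}}(S) = \frac{1}{m} \sum_{i=1}^{m} \ell\bigl(f_{S^{\setminus i}}, z_i\bigr),

where S^{\setminus i} denotes S with the i-th example removed. Markov’s inequality, \Pr(X \ge t) \le \mathbb{E}[X]/t for nonnegative X, applied to the squared deviation yields bounds of the Chebyshev form

    \Pr\bigl( \lvert R(f_S) - R_{\mathrm{loo}}(S) \rvert \ge \varepsilon \bigr)
    \le \frac{\mathbb{E}\bigl[ (R(f_S) - R_{\mathrm{loo}}(S))^{2} \bigr]}{\varepsilon^{2}};

leave-one-out stability is the hypothesis that controls the moment on the right-hand side, and the explicit constants in the paper depend on its particular stability definition.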
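The leave-one-out error itself is straightforward to compute empirically. The following Python sketch estimates it on a toy binary classification problem; scikit-learn and logistic regression under the 0–1 loss are illustrative choices, not taken from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut

    def leave_one_out_error(X, y):
        """Empirical leave-one-out error: the fraction of points misclassified
        by a model trained on the other m - 1 points."""
        mistakes = 0
        for train_idx, test_idx in LeaveOneOut().split(X):
            clf = LogisticRegression().fit(X[train_idx], y[train_idx])
            mistakes += int(clf.predict(X[test_idx])[0] != y[test_idx][0])
        return mistakes / len(y)

    # Toy usage: two Gaussian classes in the plane.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 1.0, (20, 2)), rng.normal(1.0, 1.0, (20, 2))])
    y = np.array([0] * 20 + [1] * 20)
    print(leave_one_out_error(X, y))

Computing this estimate trains m classifiers, one per held-out point, which is precisely the regime in which bounds relating the leave-one-out error to the generalization error are useful.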


Keywords: Loss Function · Generalization Performance · Generalization Error · Algorithmic Stability · Machine Learning Research




References

  1. Alon, N., Ben-David, S., Cesa-Bianchi, N., Haussler, D.: Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM 44, 615–631 (1997)
  2. Bousquet, O., Elisseeff, A.: Stability and generalization. Journal of Machine Learning Research 2, 499–526 (2002)
  3. Cucker, F., Smale, S.: On the mathematical foundations of learning. Bulletin of the American Mathematical Society 39, 1–49 (2002)
  4. Devroye, L., Wagner, T.: Distribution-free inequalities for the deleted and holdout error estimates. IEEE Transactions on Information Theory 25, 202–207 (1979)
  5. Devroye, L., Wagner, T.: Distribution-free performance bounds for potential function rules. IEEE Transactions on Information Theory 25, 601–604 (1979)
  6. Kearns, M., Ron, D.: Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Computation 11, 1427–1453 (1999)
  7. Kutin, S., Niyogi, P.: Almost-everywhere algorithmic stability and generalization error. In: Proceedings of Uncertainty in Artificial Intelligence, Edmonton, Canada (2002)
  8. McDiarmid, C.: On the method of bounded differences. London Mathematical Society Lecture Note Series, vol. 141, pp. 148–188 (1989)
  9. Mukherjee, S., Rifkin, R., Poggio, T.: Regression and classification with regularization. Lecture Notes in Statistics, vol. 171, pp. 107–124 (2002)
  10. Rogers, W., Wagner, T.: A finite sample distribution-free performance bound for local discrimination rules. Annals of Statistics 6, 506–514 (1978)
  11. Vapnik, V.N.: Statistical Learning Theory. Wiley, New York (1998)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Bin Zou (1)
  • Jie Xu (1, 2)
  • Luoqing Li (1)

  1. Faculty of Mathematics and Computer Science, Hubei University, Wuhan, P.R. China
  2. College of Computer Science, Huazhong University of Science and Technology, Wuhan, P.R. China
