The Study of Leave-One-Out Error-Based Classification Learning Algorithm for Generalization Performance
This note presents a theoretical analysis of the generalization ability of classification learning algorithms. Using Markov's inequality, an explicit bound is derived on the relative difference between the generalization error and the leave-one-out error of a classification learning algorithm, under a leave-one-out stability condition; this bound is then used to estimate the algorithm's generalization error. Finally, we compare the result in this paper with previous results.
Keywords: Loss Function · Generalization Performance · Generalization Error · Algorithmic Stability · Machine Learning Research
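As a concrete illustration of the quantity the bound concerns, the leave-one-out error of a learning algorithm is the fraction of training examples misclassified when the classifier is trained on all remaining examples. The sketch below uses a 1-nearest-neighbour rule as a stand-in learning algorithm (an assumption for illustration; the paper treats general classification learning algorithms).

```python
# Minimal sketch of the leave-one-out (LOO) error estimate.
# The 1-nearest-neighbour rule here is a hypothetical stand-in learner;
# the analysis in the paper applies to general classification algorithms.

def nearest_neighbour_predict(train, x):
    """Predict the label of x as the label of its closest training point."""
    closest = min(train,
                  key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], x)))
    return closest[1]

def leave_one_out_error(data):
    """Fraction of points misclassified when each is held out in turn."""
    mistakes = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]   # train on all but the i-th example
        if nearest_neighbour_predict(rest, x) != y:
            mistakes += 1
    return mistakes / len(data)

# Tiny two-class dataset: points near 0 carry label 0, points near 1 carry label 1.
data = [((0.0,), 0), ((0.1,), 0), ((0.2,), 0),
        ((0.9,), 1), ((1.0,), 1), ((1.1,), 1)]
print(leave_one_out_error(data))  # → 0.0
```

Under leave-one-out stability, this empirical quantity tracks the true generalization error, which is what the bound in the paper makes quantitative.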