New approaches to statistical learning theory
We present new tools from probability theory that can be applied to the analysis of learning algorithms. These tools allow us to derive new bounds on the generalization performance of learning algorithms and to propose alternative measures of the complexity of the learning task, which in turn can be used to derive new learning algorithms.
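As an illustration of the type of result these tools yield (a representative bound from the Rademacher-averages literature, stated here for orientation rather than as one of this paper's specific theorems): for a class $\mathcal{F}$ of $[0,1]$-valued functions and an i.i.d. sample $X_1, \ldots, X_n$, with probability at least $1 - \delta$ every $f \in \mathcal{F}$ satisfies

\[
  \mathbb{E}[f(X)] \;\le\; \frac{1}{n}\sum_{i=1}^{n} f(X_i)
    \;+\; 2\,\mathcal{R}_n(\mathcal{F})
    \;+\; \sqrt{\frac{\log(1/\delta)}{2n}},
  \qquad
  \mathcal{R}_n(\mathcal{F})
    = \mathbb{E}\Big[\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i\, f(X_i)\Big],
\]

where the $\sigma_i$ are independent uniform $\pm 1$ (Rademacher) signs. A concentration inequality (here, McDiarmid's bounded-differences inequality) controls the deviation term, while the Rademacher average $\mathcal{R}_n(\mathcal{F})$ plays the role of a data-dependent complexity measure.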
Key words and phrases: Statistical learning theory, concentration inequalities, Rademacher averages, error bounds.
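When the supremum over the class is tractable, for instance for a finite set of candidate predictors, the Rademacher average above can be estimated from data by Monte Carlo over random sign vectors. The following is a minimal sketch of that estimate; the function name empirical_rademacher and the finite-class setting are illustrative assumptions, not a method taken from the paper.

import numpy as np

def empirical_rademacher(values, n_draws=1000, seed=None):
    """Monte Carlo estimate of the empirical Rademacher average.

    values: array of shape (n_functions, n_samples); entry [j, i] is
            f_j(X_i) for candidate function f_j in a finite class.
    """
    rng = np.random.default_rng(seed)
    n_functions, n_samples = values.shape
    total = 0.0
    for _ in range(n_draws):
        # Independent Rademacher signs sigma_i in {-1, +1}.
        sigma = rng.choice([-1.0, 1.0], size=n_samples)
        # Supremum over the (finite) class of the signed empirical average.
        total += np.max(values @ sigma) / n_samples
    return total / n_draws

# Example: 5 candidate functions evaluated on a sample of 100 points.
values = np.random.default_rng(0).uniform(0.0, 1.0, size=(5, 100))
print(empirical_rademacher(values, n_draws=2000, seed=1))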