Abstract
We construct data-dependent upper bounds on the risk in function learning problems. The bounds are based on local norms of the Rademacher process indexed by the underlying function class, and they require no prior knowledge about the distribution of the training examples or any specific properties of the function class. Using Talagrand-type concentration inequalities for empirical and Rademacher processes, we show that the bounds hold with high probability, the probability of failure decreasing exponentially fast as the sample size grows. In typical situations frequently encountered in the theory of function learning, the bounds yield a nearly optimal rate of convergence of the risk to zero.
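The quantity driving these bounds — the norm of the Rademacher process over the class, sup_f (1/n) Σ ε_i f(X_i) with i.i.d. random signs ε_i — can be estimated from the training data alone. The following is a minimal Monte Carlo sketch, not the paper's construction: the finite class of threshold classifiers and the averaging over sign draws are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher(values, n_draws=200, rng=rng):
    """Monte Carlo estimate of the empirical Rademacher complexity
    sup_f (1/n) sum_i eps_i f(X_i).

    `values` is an (m, n) array: row j holds the predictions
    f_j(X_1), ..., f_j(X_n) of the j-th function in a finite class;
    eps_i are i.i.d. Rademacher (+/-1) signs."""
    m, n = values.shape
    total = 0.0
    for _ in range(n_draws):
        eps = rng.choice([-1.0, 1.0], size=n)   # fresh Rademacher signs
        total += np.max(values @ eps) / n       # sup over the finite class
    return total / n_draws

# Illustrative finite class: threshold classifiers f_t(x) = sign(x - t)
# evaluated on n sample points.
n = 100
x = rng.uniform(0.0, 1.0, size=n)
thresholds = np.linspace(0.0, 1.0, 25)
values = np.sign(x[None, :] - thresholds[:, None])  # shape (25, n)

penalty = empirical_rademacher(values)
print(penalty)
```

Because the penalty is computed from the observed sample, it adapts to the actual distribution of the data — the distribution-free character emphasized in the abstract — and for classes of fixed combinatorial complexity it shrinks as n grows, roughly at the O(√(log m / n)) rate for a finite class of size m.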
The research of V. Koltchinskii was partially supported by NSA Grant MDA904-991-0031.
The research of D. Panchenko was partially supported by Boeing Computer Services Grant 3-48181.
© 2000 Springer Science+Business Media New York
Koltchinskii, V., Panchenko, D. (2000). Rademacher Processes and Bounding the Risk of Function Learning. In: Giné, E., Mason, D.M., Wellner, J.A. (eds) High Dimensional Probability II. Progress in Probability, vol 47. Birkhäuser, Boston, MA. https://doi.org/10.1007/978-1-4612-1358-1_29
Print ISBN: 978-1-4612-7111-6
Online ISBN: 978-1-4612-1358-1