Rademacher Processes and Bounding the Risk of Function Learning

  • Conference paper

Part of the book series: Progress in Probability ((PRPR,volume 47))

Abstract

We construct data-dependent upper bounds on the risk in function learning problems. The bounds are based on local norms of the Rademacher process indexed by the underlying function class, and they require no prior knowledge of the distribution of the training examples or of any specific properties of the function class. Using Talagrand-type concentration inequalities for empirical and Rademacher processes, we show that the bounds hold with high probability, the probability of failure decreasing exponentially fast as the sample size grows. In typical situations encountered in the theory of function learning, the bounds yield a nearly optimal rate of convergence of the risk to zero.
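As a concrete illustration of the kind of data-dependent penalty the abstract describes, the empirical Rademacher complexity of a finite function class can be estimated by Monte Carlo over random sign vectors. The sketch below is a minimal illustration, not the paper's construction (which uses local norms of the Rademacher process); the function names and data are hypothetical, and constants in the bound are omitted.

```python
import numpy as np

def empirical_rademacher(losses, n_draws=1000, rng=None):
    """Monte Carlo estimate of the empirical Rademacher complexity
    of a finite function class.

    losses : (m, n) array -- row j holds the losses of function f_j
             on the n training examples.
    Returns an estimate of E_sigma sup_j (1/n) sum_i sigma_i * losses[j, i].
    """
    rng = np.random.default_rng(rng)
    m, n = losses.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        total += np.max(losses @ sigma) / n      # sup over the class
    return total / n_draws

# Hypothetical data: 0/1 losses of m = 5 classifiers on n = 200 examples.
rng = np.random.default_rng(0)
losses = rng.integers(0, 2, size=(5, 200)).astype(float)

penalty = empirical_rademacher(losses, n_draws=500, rng=1)
emp_risk = losses.mean(axis=1).min()
# Schematic data-dependent bound (constants omitted): the true risk of
# the empirically best rule is at most
#   emp_risk + 2 * penalty + O(sqrt(log(1/delta) / n))
# with probability at least 1 - delta.
print(emp_risk, penalty)
```

The penalty is computed entirely from the training sample and the random signs, which is what makes the resulting bound distribution-free in the sense of the abstract.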

The research of V. Koltchinskii was partially supported by NSA Grant MDA904-99-1-0031.

The research of D. Panchenko was partially supported by Boeing Computer Services Grant 3-48181.


References

  1. Barron, A., Birgé, L. and Massart, P. (1999) Risk Bounds for Model Selection via Penalization. Probability Theory and Related Fields 113, 301–413.

  2. Birgé, L. and Massart, P. (1997) From Model Selection to Adaptive Estimation. In: Festschrift for L. Le Cam. Research Papers in Probability and Statistics, D. Pollard, E. Torgersen and G. Yang (Eds.), pp. 55–87. Springer-Verlag, New York.

  3. Devroye, L., Györfi, L. and Lugosi, G. (1996) A Probabilistic Theory of Pattern Recognition, Springer-Verlag, New York.

  4. Dudley, R.M. (1999) Uniform Central Limit Theorems, Cambridge University Press, Cambridge, UK.

  5. Hush, D. and Scovel, C. (1999) Posterior performance bounds for machine learning, Los Alamos National Laboratory, preprint.

  6. Koltchinskii, V. (1999) Rademacher penalties and structural risk minimization, preprint.

  7. Koltchinskii, V., Abdallah, C.T., Ariola, M., Dorato, P., Panchenko, D. (1999) Statistical Learning Control of Uncertain Systems: It is better than it seems. Preprint, UNM.

  8. Lozano, F. (2000) Model selection using Rademacher penalization, Proceedings of the ICSC Symposia on Neural Computation (NC’2000), Berlin, Germany.

  9. Massart, P. (1998) About the constants in Talagrand’s concentration inequalities for empirical processes, Université Paris-Sud, preprint.

  10. Mammen, E. and Tsybakov, A. (1995) Asymptotical minimax recovery of sets with smooth boundaries, Ann. Statist. 23, 502–524.

  11. Talagrand, M. (1996) A new look at independence. Ann. Probab. 24, 1–34.

  12. Talagrand, M. (1996) New concentration inequalities in product spaces, Invent. Math. 126, 505–563.

  13. van der Vaart, A. and Wellner, J. (1996) Weak Convergence and Empirical Processes: With Applications to Statistics, Springer-Verlag, New York.

  14. Vapnik, V. (1998) Statistical Learning Theory, John Wiley & Sons, New York.

  15. Vidyasagar, M. (1997) A Theory of Learning and Generalization, Springer-Verlag, New York.


Copyright information

© 2000 Springer Science+Business Media New York

About this paper

Cite this paper

Koltchinskii, V., Panchenko, D. (2000). Rademacher Processes and Bounding the Risk of Function Learning. In: Giné, E., Mason, D.M., Wellner, J.A. (eds) High Dimensional Probability II. Progress in Probability, vol 47. Birkhäuser, Boston, MA. https://doi.org/10.1007/978-1-4612-1358-1_29

  • DOI: https://doi.org/10.1007/978-1-4612-1358-1_29

  • Publisher Name: Birkhäuser, Boston, MA

  • Print ISBN: 978-1-4612-7111-6

  • Online ISBN: 978-1-4612-1358-1

  • eBook Packages: Springer Book Archive
