Rademacher and Gaussian Complexities: Risk Bounds and Structural Results

  • Peter L. Bartlett
  • Shahar Mendelson
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2111)


We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision-theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes, and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexities of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.
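To illustrate the central quantity, the empirical Rademacher complexity of a class F on a sample x_1, …, x_n is E_σ[ sup_{f∈F} (1/n) Σ_i σ_i f(x_i) ], where the σ_i are independent uniform ±1 signs. The sketch below is not from the paper; it is a minimal Monte Carlo estimator, assuming a finite function class represented as a matrix of its values on the sample (the function name and array layout are illustrative, not the authors' notation).

```python
import numpy as np

def empirical_rademacher(fvals, n_trials=2000, rng=None):
    """Monte Carlo estimate of the empirical Rademacher complexity
    E_sigma[ sup_f (1/n) sum_i sigma_i f(x_i) ] for a finite class,
    given as a (num_functions, n) array of values on the sample."""
    rng = np.random.default_rng(rng)
    num_f, n = fvals.shape
    # One row of independent Rademacher signs per Monte Carlo trial.
    sigma = rng.choice([-1.0, 1.0], size=(n_trials, n))
    # correlations[t, j] = (1/n) * sum_i sigma[t, i] * fvals[j, i]
    correlations = sigma @ fvals.T / n
    # Supremum over the class, then average over the sign draws.
    return correlations.max(axis=1).mean()

# Example: the two constant functions f = +1 and f = -1 on n = 100 points.
# Here sup_f (1/n) sum_i sigma_i f(x_i) = |mean(sigma)|, whose expectation
# is roughly sqrt(2 / (pi * n)) ~ 0.08 for n = 100.
fvals = np.vstack([np.ones(100), -np.ones(100)])
est = empirical_rademacher(fvals, rng=0)
```

The Gaussian complexity is defined identically with the σ_i replaced by independent standard normal variables, so the same estimator applies with `rng.standard_normal` in place of the sign draws.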







Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Peter L. Bartlett, BIOwulf Technologies, Berkeley, USA
  • Shahar Mendelson, Research School of Information Sciences and Engineering, Australian National University, Canberra, Australia
