Distribution-Dependent Vapnik-Chervonenkis Bounds

  • Conference paper
  • In: Computational Learning Theory (EuroCOLT 1999)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1572)

Abstract

Vapnik-Chervonenkis (VC) bounds play an important role in statistical learning theory, as they are the fundamental results explaining the generalization ability of learning machines. Over the years, there has been substantial mathematical work on improving the VC rates of convergence of empirical means to their expectations. The result obtained by Talagrand in 1994 seems to provide more or less the final word on this issue as far as universal bounds are concerned. For fixed distributions, however, this bound can be outperformed in practice. We show indeed that it is possible to replace the 2ε² under the exponential of the deviation term by the corresponding Cramér transform, as shown by large deviations theorems. We then formulate rigorous distribution-sensitive VC bounds, and we explain why these theoretical results can lead to practical estimates of the effective VC dimension of learning structures.
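
To make the comparison concrete, here is a minimal LaTeX sketch of the two kinds of statements contrasted in the abstract. The notation is assumed for illustration and is not taken from the paper itself: 𝒞 is a class of events, P_n the empirical measure of an i.i.d. n-sample, S_𝒞(2n) a shatter coefficient, Φ(n, 𝒞) an unspecified capacity factor, and Λ* the Cramér transform associated with the fixed underlying distribution; the paper's exact constants and conditions appear only in the full text.

    % Minimal sketch, not the paper's exact statement: constants, capacity
    % factors and regularity conditions are only given in the full text.
    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}

    % Distribution-free VC bound with a Hoeffding-type exponent 2*eps^2,
    % valid uniformly over all underlying distributions.
    \[
    \mathbb{P}\Bigl(\sup_{C \in \mathcal{C}} \bigl|P_n(C) - P(C)\bigr| > \varepsilon\Bigr)
    \;\le\; 4\, S_{\mathcal{C}}(2n)\, e^{-2 n \varepsilon^{2}}
    \]

    % Distribution-sensitive variant: the 2*eps^2 in the exponent is replaced
    % by the Cramér transform Lambda^*(eps) of the fixed distribution, in the
    % spirit of large deviations theorems.
    \[
    \mathbb{P}\Bigl(\sup_{C \in \mathcal{C}} \bigl|P_n(C) - P(C)\bigr| > \varepsilon\Bigr)
    \;\le\; \Phi(n, \mathcal{C})\, e^{-n \Lambda^{*}(\varepsilon)}
    \]

    \end{document}

For Bernoulli marginals the Cramér exponent is a relative entropy, and Pinsker's inequality gives d(p+ε ‖ p) ≥ 2ε², which is one way to see why an exponent tailored to a fixed distribution can only improve on the universal 2ε².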

References

  1. Alexander, K.: Probability Inequalities for Empirical Processes and a Law of the Iterated Logarithm. Annals of Probability 4 (1984) 1041–1067
  2. Azencott, R.: Grandes Déviations. In: Hennequin, P.L. (ed.): Ecole d'Eté de Probabilités de Saint-Flour VIII-1978. Lecture Notes in Mathematics, Vol. 774. Springer-Verlag, Berlin Heidelberg New York (1978)
  3. Bartlett, P., Lugosi, G.: An Inequality for Uniform Deviations of Sample Averages from their Means. To appear (1998)
  4. Cohn, D., Tesauro, G.: How Tight Are the Vapnik-Chervonenkis Bounds? Neural Computation 4 (1992) 249–269
  5. Cohn, D.: Separating Formal Bounds from Practical Performance in Learning Systems. PhD thesis, University of Washington (1992)
  6. Devroye, L.: Bounds for the Uniform Deviation of Empirical Measures. Journal of Multivariate Analysis 12 (1982) 72–79
  7. Devroye, L., Györfi, L., Lugosi, G.: A Probabilistic Theory of Pattern Recognition. Springer-Verlag, Berlin Heidelberg New York (1996)
  8. Haussler, D.: Sphere Packing Numbers for Subsets of the Boolean n-Cube with Bounded Vapnik-Chervonenkis Dimension. Journal of Combinatorial Theory, Series A 69 (1995) 217–232
  9. Haussler, D., Kearns, M., Seung, H.S., Tishby, N.: Rigorous Learning Curve Bounds from Statistical Mechanics. Machine Learning (1996) 195–236
  10. Hoeffding, W.: Probability Inequalities for Sums of Bounded Random Variables. Journal of the American Statistical Association 58 (1963) 13–30
  11. Kearns, M.J., Vazirani, U.V.: An Introduction to Computational Learning Theory. MIT Press, Cambridge, Massachusetts (1994)
  12. Ledoux, M., Talagrand, M.: Probability in Banach Spaces. Springer-Verlag, Berlin Heidelberg New York (1992)
  13. Lugosi, G.: Improved Upper Bounds for Probabilities of Uniform Deviations. Statistics and Probability Letters 25 (1995) 71–77
  14. Massart, P.: Rates of Convergence in the Central Limit Theorem for Empirical Processes. Annales de l'Institut Henri Poincaré, Vol. 22, No. 4 (1986) 381–423
  15. Parrondo, J.M.R., Van den Broeck, C.: Vapnik-Chervonenkis Bounds for Generalization. J. Phys. A: Math. Gen. 26 (1993) 2211–2223
  16. Pollard, D.: Convergence of Stochastic Processes. Springer-Verlag, Berlin Heidelberg New York (1984)
  17. Schuurmans, D.E.: Effective Classification Learning. PhD thesis, University of Toronto (1996)
  18. Stroock, D.W.: Probability Theory, an Analytic View. Cambridge University Press (1993)
  19. Talagrand, M.: Sharper Bounds for Gaussian and Empirical Processes. The Annals of Probability, Vol. 22, No. 1 (1994) 28–76
  20. van der Vaart, A.W., Wellner, J.A.: Weak Convergence and Empirical Processes. Springer-Verlag, Berlin Heidelberg New York (1996)
  21. Vapnik, V.N., Chervonenkis, A.Ya.: On the Uniform Convergence of Relative Frequencies of Events to their Probabilities. Theory of Probability and its Applications, Vol. XVI, No. 2 (1971) 264–280
  22. Vapnik, V.N., Chervonenkis, A.Ya.: Necessary and Sufficient Conditions for the Uniform Convergence of Means to their Expectations. Theory of Probability and its Applications, Vol. XXVI, No. 3 (1981) 532–553
  23. Vapnik, V.N.: Estimation of Dependences Based on Empirical Data. Springer-Verlag, Berlin Heidelberg New York (1982)
  24. Vapnik, V.N., Levin, E., Le Cun, Y.: Measuring the VC Dimension of a Learning Machine. Neural Computation 6 (1994) 851–876
  25. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer-Verlag, Berlin Heidelberg New York (1995)
  26. Vayatis, N.: Learning Complexity and Pattern Recognition. PhD thesis, Ecole Polytechnique. To appear (1999)

Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Vayatis, N., Azencott, R. (1999). Distribution-Dependent Vapnik-Chervonenkis Bounds. In: Fischer, P., Simon, H.U. (eds) Computational Learning Theory. EuroCOLT 1999. Lecture Notes in Computer Science, vol 1572. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49097-3_18

  • DOI: https://doi.org/10.1007/3-540-49097-3_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65701-9

  • Online ISBN: 978-3-540-49097-5
