
VC Dimension

Encyclopedia of Machine Learning and Data Mining

Motivation and Background

We define an important combinatorial parameter that measures the complexity of a family of subsets taken from a given universe (learning domain) X. This parameter was originally defined by Vapnik and Chervonenkis (1971) and is thus commonly referred to as the Vapnik-Chervonenkis dimension, abbreviated as VC dimension. Subsequently, Dudley (1978, 1979) generalized Vapnik and Chervonenkis’ (1971) results. The reader is also referred to Vapnik’s (2000) book, in which he greatly extends the original ideas; this extension results in a theory called structural risk minimization.

The importance of the VC dimension for PAC learning was discovered by Blumer et al. (1989), who introduced the notion into computational learning theory.

As Anthony and Biggs (1992, p. 71) have put it, “The development of this notion is probably the most significant contribution that mathematics has made to Computational Learning Theory.”

Recall that we use \(|S|\) and \(\wp(S)\) to...
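The entry text above is truncated before the formal definition. In the standard formulation (Vapnik and Chervonenkis 1971), a set \(S \subseteq X\) is shattered by a family \(\mathcal{C}\) of subsets of X if \(\{S \cap C : C \in \mathcal{C}\} = \wp(S)\), and the VC dimension of \(\mathcal{C}\) is the cardinality of a largest shattered set (infinite if arbitrarily large finite sets are shattered). For a finite universe this can be checked by exhaustive search; the following Python sketch is a minimal illustration under that standard definition (the function names are mine, not the entry's).

    from itertools import chain, combinations

    def powerset(s):
        """All subsets of s, as frozensets (including the empty set)."""
        s = list(s)
        return [frozenset(c) for c in
                chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

    def is_shattered(s, family):
        """True iff every subset of s occurs as s & c for some c in the family."""
        traces = {s & c for c in family}   # the trace of the family on s
        return len(traces) == 2 ** len(s)  # every trace is a subset of s, so counting suffices

    def vc_dimension(universe, family):
        """Size of a largest shattered subset; brute force, exponential in |universe|."""
        return max(len(s) for s in powerset(universe) if is_shattered(s, family))

    U = frozenset(range(1, 5))
    # Initial segments {1,...,k} (threshold concepts on a line): VC dimension 1.
    thresholds = [frozenset(range(1, k + 1)) for k in range(5)]
    # All discrete intervals {a,...,b}: VC dimension 2.
    intervals = [frozenset(range(a, b + 1)) for a in range(1, 5) for b in range(5)]
    print(vc_dimension(U, thresholds))  # -> 1
    print(vc_dimension(U, intervals))   # -> 2

Both outputs match the classical examples: thresholds on a line shatter any single point but no pair, and intervals shatter some pairs but no triple, since no interval can contain the outer two points of a triple while excluding the middle one.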


Recommended Reading

  • Anthony M, Bartlett PL (1999) Neural network learning: theoretical foundations. Cambridge University Press, Cambridge

  • Anthony M, Biggs N (1992) Computational learning theory. Cambridge tracts in theoretical computer science, vol 30. Cambridge University Press, Cambridge

  • Arora S, Barak B (2009) Computational complexity: a modern approach. Cambridge University Press, Cambridge

  • Blumer A, Ehrenfeucht A, Haussler D, Warmuth MK (1989) Learnability and the Vapnik-Chervonenkis dimension. J ACM 36(4):929–965

  • Dudley RM (1978) Central limit theorems for empirical measures. Ann Probab 6(6):899–929

  • Dudley RM (1979) Corrections to “Central limit theorems for empirical measures”. Ann Probab 7(5):909–911

  • Goldberg PW, Jerrum MR (1995) Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers. Mach Learn 18(2–3):131–148

  • Gurvits L (1997) Linear algebraic proofs of VC-dimension based inequalities. In: Ben-David S (ed) Proceedings of the third European conference on computational learning theory, EuroCOLT ’97, Jerusalem, Israel, March 1997. Lecture notes in artificial intelligence, vol 1208. Springer, pp 238–250

  • Haussler D, Littlestone N, Warmuth MK (1994) Predicting {0, 1} functions on randomly drawn points. Inf Comput 115(2):248–292

  • Haussler D, Welzl E (1987) Epsilon-nets and simplex range queries. Discret Comput Geom 2:127–151

  • Karpinski M, Macintyre A (1995) Polynomial bounds for VC dimension of sigmoidal neural networks. In: Proceedings of the 27th annual ACM symposium on theory of computing. ACM Press, New York, pp 200–208

  • Karpinski M, Werther T (1994) VC dimension and sampling complexity of learning sparse polynomials and rational functions. In: Hanson SJ, Drastal GA, Rivest RL (eds) Computational learning theory and natural learning systems. Constraints and prospects, vol I, chap 11. MIT Press, pp 331–354

  • Kearns MJ, Vazirani UV (1994) An introduction to computational learning theory. MIT Press, Cambridge, MA

  • Linial N, Mansour Y, Rivest RL (1991) Results on learnability and the Vapnik-Chervonenkis dimension. Inf Comput 90(1):33–49

  • Littlestone N (1988) Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Mach Learn 2(4):285–318

  • Maass W, Turán G (1990) On the complexity of learning from counterexamples and membership queries. In: Proceedings of the 31st annual symposium on foundations of computer science (FOCS 1990), St. Louis, 22–24 October 1990. IEEE Computer Society Press, Los Alamitos, pp 203–210

  • Mitchell A, Scheffer T, Sharma A, Stephan F (1999) The VC-dimension of subclasses of pattern languages. In: Watanabe O, Yokomori T (eds) Proceedings of the 10th international conference on algorithmic learning theory, ALT ’99, Tokyo, Dec 1999. Lecture notes in artificial intelligence, vol 1720. Springer, pp 93–105

  • Natschläger T, Schmitt M (1996) Exact VC-dimension of Boolean monomials. Inf Process Lett 59(1):19–20

  • Papadimitriou CH, Yannakakis M (1996) On limited nondeterminism and the complexity of the V-C dimension. J Comput Syst Sci 53(2):161–170

  • Sakurai A (1995) On the VC-dimension of depth four threshold circuits and the complexity of Boolean-valued functions. Theoret Comput Sci 137(1):109–127

  • Sauer N (1972) On the density of families of sets. J Comb Theory Ser A 13(1):145–147

  • Schaefer M (1999) Deciding the Vapnik-Červonenkis dimension is \(\Sigma _{3}^{p}\)-complete. J Comput Syst Sci 58(1):177–182

  • Shinohara A (1995) Complexity of computing Vapnik-Chervonenkis dimension and some generalized dimensions. Theoret Comput Sci 137(1):129–144

  • Vapnik VN (2000) The nature of statistical learning theory, 2nd edn. Springer, Berlin

  • Vapnik VN, Chervonenkis AY (1971) On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab Appl 16(2):264–280

  • Vapnik VN, Chervonenkis AY (1974) Theory of pattern recognition. Nauka, Moscow (in Russian)

  • Wenocur RS, Dudley RM (1981) Some special Vapnik-Chervonenkis classes. Discret Math 33:313–318

Author information

Correspondence to Thomas Zeugmann.


Copyright information

© 2016 Springer Science+Business Media New York

About this entry

Cite this entry

Zeugmann, T. (2016). VC Dimension. In: Sammut, C., Webb, G. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7502-7_881-1

  • DOI: https://doi.org/10.1007/978-1-4899-7502-7_881-1

  • Publisher Name: Springer, Boston, MA

  • Online ISBN: 978-1-4899-7502-7
