An Improved VC Dimension Bound for Sparse Polynomials

Conference paper
Learning Theory (COLT 2004)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3120)

Abstract

We show that the function class consisting of k-sparse polynomials in n variables has Vapnik-Chervonenkis (VC) dimension at least nk + 1. This result supersedes the previously known lower bound obtained via k-term monotone disjunctive normal form (DNF) formulas by Littlestone (1988). Moreover, it implies that the VC dimension of k-sparse polynomials is strictly larger than that of k-term monotone DNF. The new bound is achieved by an approach based on exponential functions: Gaussian radial basis function (RBF) neural networks are employed to obtain classifications of points in terms of sparse polynomials.
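
To make the claim concrete: the VC dimension of a function class is the size of the largest point set the class shatters, that is, for which every possible +/- labeling is realized by some member of the class. The following is a minimal Python sketch, not the paper's RBF-based construction: it brute-forces all labelings for the toy case k = 1, n = 2 (where the bound promises a shattered set of size nk + 1 = 3). The point set, the exponent grid, and the sign-based classification rule p(x) >= 0 are assumptions made for illustration.

import math

# A k-sparse polynomial is represented as a list of at most k terms,
# each a (coefficient, exponent-tuple) pair.
def eval_sparse_poly(terms, x):
    return sum(c * math.prod(xi ** e for xi, e in zip(x, exps))
               for c, exps in terms)

# `points` is shattered if the sign patterns (p(x) >= 0) realized by
# the candidate polynomials cover all 2^m labelings of the m points.
def shatters(points, polys):
    patterns = {tuple(eval_sparse_poly(p, x) >= 0 for x in points)
                for p in polys}
    return len(patterns) == 2 ** len(points)

# Toy case k = 1, n = 2: signed monomials c * x1^e1 * x2^e2 over a
# small exponent grid (an assumption for this sketch). Points with
# coordinates in {-1, +1} let the exponent parities flip the sign of
# each monomial independently, so all 8 labelings of the 3 points occur.
points = [(-1.0, 1.0), (1.0, -1.0), (-1.0, -1.0)]
polys = [[(c, (e1, e2))] for c in (-1.0, 1.0)
         for e1 in (0, 1) for e2 in (0, 1)]
print(shatters(points, polys))  # True: n*k + 1 = 3 points are shattered

The paper's argument is of a different character: it uses Gaussian RBF networks to derive the classifications, which is what pushes the bound beyond the k-term monotone DNF construction. The sketch above only illustrates what "VC dimension at least nk + 1" asserts.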

References

  • Anthony, M., Bartlett, P.L.: Neural Network Learning: Theoretical Foundations. Cambridge University Press, Cambridge (1999)

  • Bartlett, P.L., Maass, W.: Vapnik-Chervonenkis dimension of neural nets. In: Arbib, M.A. (ed.) The Handbook of Brain Theory and Neural Networks, 2nd edn., pp. 1188–1192. MIT Press, Cambridge (2003)

  • Bartlett, P.L., Maiorov, V., Meir, R.: Almost linear VC-dimension bounds for piecewise polynomial networks. Neural Computation 10, 2159–2173 (1998)

  • Ben-David, S., Lindenbaum, M.: Localization vs. identification of semialgebraic sets. Machine Learning 32, 207–224 (1998)

  • Blum, A., Singh, M.: Learning functions of k terms. In: Fulk, M.A. (ed.) Proceedings of the Third Annual Workshop on Computational Learning Theory, pp. 144–153. Morgan Kaufmann, San Mateo (1990)

  • Bshouty, N.H., Mansour, Y.: Simple learning algorithms for decision trees and multivariate polynomials. In: Proceedings of the 36th Annual Symposium on Foundations of Computer Science, pp. 304–311. IEEE Computer Society Press, Los Alamitos (1995)

  • Durbin, R., Rumelhart, D.: Product units: A computationally powerful and biologically plausible extension to backpropagation networks. Neural Computation 1, 133–142 (1989)

  • Ehrenfeucht, A., Haussler, D., Kearns, M., Valiant, L.: A general lower bound on the number of examples needed for learning. Information and Computation 82, 247–261 (1989)

  • Erlich, Y., Chazan, D., Petrack, S., Levy, A.: Lower bound on VC-dimension by local shattering. Neural Computation 9, 771–776 (1997)

  • Fischer, P., Simon, H.U.: On learning ring-sum-expansions. SIAM Journal on Computing 21, 181–192 (1992)

  • Grigoriev, D.Y., Karpinski, M., Singer, M.F.: Fast parallel algorithms for sparse multivariate polynomial interpolation over finite fields. SIAM Journal on Computing 19, 1059–1063 (1990)

  • Haykin, S.: Neural Networks: A Comprehensive Foundation, 2nd edn. Prentice Hall, Upper Saddle River (1999)

  • Huang, M.-D., Rao, A.J.: Interpolation of sparse multivariate polynomials over large finite fields with applications. Journal of Algorithms 33, 204–228 (1999)

  • Karpinski, M., Macintyre, A.: Polynomial bounds for VC dimension of sigmoidal and general Pfaffian neural networks. Journal of Computer and System Sciences 54, 169–176 (1997)

  • Karpinski, M., Werther, T.: VC dimension and uniform learnability of sparse polynomials and rational functions. SIAM Journal on Computing 22, 1276–1285 (1993)

  • Koiran, P., Sontag, E.D.: Neural networks with quadratic VC dimension. Journal of Computer and System Sciences 54, 190–198 (1997)

  • Lee, W.S., Bartlett, P.L., Williamson, R.C.: Lower bounds on the VC dimension of smoothly parameterized function classes. Neural Computation 7, 1040–1053 (1995)

  • Littlestone, N.: Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning 2, 285–318 (1988)

  • Murao, H., Fujise, T.: Modular algorithm for sparse multivariate polynomial interpolation and its parallel implementation. Journal of Symbolic Computation 21, 377–396 (1996)

  • Roth, R.M., Benedek, G.M.: Interpolation and approximation of sparse multivariate polynomials over GF(2). SIAM Journal on Computing 20, 291–314 (1990)

  • Schapire, R.E., Sellie, L.: Learning sparse multivariate polynomials over a field with queries and counterexamples. Journal of Computer and System Sciences 52, 201–213 (1996)

  • Schmitt, M.: Descartes' rule of signs for radial basis function neural networks. Neural Computation 14, 2997–3011 (2002)

  • Schmitt, M.: Neural networks with local receptive fields and superlinear VC dimension. Neural Computation 14, 919–956 (2002)

  • Schmitt, M.: On the complexity of computing and learning with multiplicative neural networks. Neural Computation 14, 241–301 (2002)

  • Schmitt, M.: New designs for the Descartes rule of signs. American Mathematical Monthly 111, 159–164 (2004)

  • Vapnik, V.N., Chervonenkis, A.Y.: On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications 16, 264–280 (1971)

Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Schmitt, M. (2004). An Improved VC Dimension Bound for Sparse Polynomials. In: Shawe-Taylor, J., Singer, Y. (eds) Learning Theory. COLT 2004. Lecture Notes in Computer Science, vol 3120. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-27819-1_27

  • DOI: https://doi.org/10.1007/978-3-540-27819-1_27

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22282-8

  • Online ISBN: 978-3-540-27819-1
