Abstract
We show that the function class consisting of k-sparse polynomials in n variables has Vapnik-Chervonenkis (VC) dimension at least nk + 1. This result supersedes the previously known lower bound obtained via k-term monotone disjunctive normal form (DNF) formulas by Littlestone (1988). Moreover, it implies that the VC dimension of k-sparse polynomials is strictly larger than that of k-term monotone DNF. The new bound is established by a novel approach that employs Gaussian radial basis function (RBF) neural networks to obtain classifications of points in terms of sparse polynomials.
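To make the notion of a VC dimension lower bound concrete, the sketch below brute-forces a shattering argument for the simplest sparse case: monotone monomials (conjunctions of variables), i.e. 1-sparse monotone multilinear polynomials, shatter the n Boolean points having exactly one zero coordinate. This is a standard toy illustration, not the paper's RBF-network construction; the point set and the exhaustive check are our own choices for the example.

```python
from itertools import combinations

def shatters(points, concepts):
    """Return True if the concept class (list of 0/1 classifiers)
    realizes every one of the 2^m labelings of the m points."""
    labelings = {tuple(c(p) for p in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

n = 4
# Monotone conjunctions x_{i1} * ... * x_{im}: each is a 1-sparse
# monotone multilinear polynomial, indexed by a subset S of variables.
concepts = [
    (lambda x, S=S: int(all(x[i] for i in S)))
    for r in range(n + 1)
    for S in combinations(range(n), r)
]

# Candidate shattered set: the n points with exactly one coordinate 0.
# The conjunction over S classifies point p_i as 1 iff i is not in S,
# so the 2^n subsets S realize all 2^n labelings.
points = [tuple(0 if j == i else 1 for j in range(n)) for i in range(n)]
print(shatters(points, concepts))  # True: VC dimension of monomials >= n
```

The exhaustive check scales as 2^n concepts over n points, so it is only feasible for small n; the paper's nk + 1 bound for general k-sparse polynomials requires the analytic RBF argument instead.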
References
Anthony, M., Bartlett, P.L.: Neural Network Learning: Theoretical Foundations. Cambridge University Press, Cambridge (1999)
Bartlett, P.L., Maass, W.: Vapnik-Chervonenkis dimension of neural nets. In: Arbib, M.A. (ed.) The Handbook of Brain Theory and Neural Networks, 2nd edn., pp. 1188–1192. MIT Press, Cambridge (2003)
Bartlett, P.L., Maiorov, V., Meir, R.: Almost linear VC-dimension bounds for piecewise polynomial networks. Neural Computation 10, 2159–2173 (1998)
Ben-David, S., Lindenbaum, M.: Localization vs. identification of semialgebraic sets. Machine Learning 32, 207–224 (1998)
Blum, A., Singh, M.: Learning functions of k terms. In: Fulk, M.A. (ed.) Proceedings of the Third Annual Workshop on Computational Learning Theory, pp. 144–153. Morgan Kaufmann, San Mateo (1990)
Bshouty, N.H., Mansour, Y.: Simple learning algorithms for decision trees and multivariate polynomials. In: Proceedings of the 36th Annual Symposium on Foundations of Computer Science, pp. 304–311. IEEE Computer Society Press, Los Alamitos (1995)
Durbin, R., Rumelhart, D.: Product units: A computationally powerful and biologically plausible extension to backpropagation networks. Neural Computation 1, 133–142 (1989)
Ehrenfeucht, A., Haussler, D., Kearns, M., Valiant, L.: A general lower bound on the number of examples needed for learning. Information and Computation 82, 247–261 (1989)
Erlich, Y., Chazan, D., Petrack, S., Levy, A.: Lower bound on VC-dimension by local shattering. Neural Computation 9, 771–776 (1997)
Fischer, P., Simon, H.U.: On learning ring-sum-expansions. SIAM Journal on Computing 21, 181–192 (1992)
Grigoriev, D.Y., Karpinski, M., Singer, M.F.: Fast parallel algorithms for sparse multivariate polynomial interpolation over finite fields. SIAM Journal on Computing 19, 1059–1063 (1990)
Haykin, S.: Neural Networks: A Comprehensive Foundation, 2nd edn. Prentice Hall, Upper Saddle River (1999)
Huang, M.-D., Rao, A.J.: Interpolation of sparse multivariate polynomials over large finite fields with applications. Journal of Algorithms 33, 204–228 (1999)
Karpinski, M., Macintyre, A.: Polynomial bounds for VC dimension of sigmoidal and general Pfaffian neural networks. Journal of Computer and System Sciences 54, 169–176 (1997)
Karpinski, M., Werther, T.: VC dimension and uniform learnability of sparse polynomials and rational functions. SIAM Journal on Computing 22, 1276–1285 (1993)
Koiran, P., Sontag, E.D.: Neural networks with quadratic VC dimension. Journal of Computer and System Sciences 54, 190–198 (1997)
Lee, W.S., Bartlett, P.L., Williamson, R.C.: Lower bounds on the VC dimension of smoothly parameterized function classes. Neural Computation 7, 1040–1053 (1995)
Littlestone, N.: Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning 2, 285–318 (1988)
Murao, H., Fujise, T.: Modular algorithm for sparse multivariate polynomial interpolation and its parallel implementation. Journal of Symbolic Computation 21, 377–396 (1996)
Roth, R.M., Benedek, G.M.: Interpolation and approximation of sparse multivariate polynomials over GF(2). SIAM Journal on Computing 20, 291–314 (1990)
Schapire, R.E., Sellie, L.: Learning sparse multivariate polynomials over a field with queries and counterexamples. Journal of Computer and System Sciences 52, 201–213 (1996)
Schmitt, M.: Descartes’ rule of signs for radial basis function neural networks. Neural Computation 14, 2997–3011 (2002)
Schmitt, M.: Neural networks with local receptive fields and superlinear VC dimension. Neural Computation 14, 919–956 (2002)
Schmitt, M.: On the complexity of computing and learning with multiplicative neural networks. Neural Computation 14, 241–301 (2002)
Schmitt, M.: New designs for the Descartes rule of signs. American Mathematical Monthly 111, 159–164 (2004)
Vapnik, V.N., Chervonenkis, A.Y.: On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications 16, 264–280 (1971)
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Schmitt, M. (2004). An Improved VC Dimension Bound for Sparse Polynomials. In: Shawe-Taylor, J., Singer, Y. (eds) Learning Theory. COLT 2004. Lecture Notes in Computer Science(), vol 3120. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-27819-1_27