Sample Complexity of Linear Learning Machines with Different Restrictions over Weights
Many different capacity measures for learning machines are known, such as the Vapnik-Chervonenkis (VC) dimension, covering numbers, and the fat-shattering dimension. In this paper we present experimental results on sample complexity estimation for rather simple learning machines that are linear in their parameters. We show that sample complexity can differ considerably even between learning machines with the same VC-dimension. Moreover, independently of the capacity of a learning machine, the distribution of the data is also significant. The experimental results are compared with known theoretical results on sample complexity and generalization bounds.
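For context, a minimal sketch of the kind of theoretical result such experiments are compared against: the classical distribution-free bound (a standard result, stated here for illustration with constants suppressed) says that a consistent learner over a hypothesis class of VC-dimension $d$ achieves true error at most $\varepsilon$ with probability at least $1-\delta$ once the sample size reaches

$$m(\varepsilon,\delta) \;=\; O\!\left(\frac{1}{\varepsilon}\left(d\,\ln\frac{1}{\varepsilon} + \ln\frac{1}{\delta}\right)\right).$$

Since this bound depends on the hypothesis class only through $d$, it cannot distinguish machines sharing the same VC-dimension, and it is distribution-free by construction; both points mark exactly the gap the experiments probe.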
Keywords: Mean Square Error, Sample Complexity, True Error, Bayesian Regularisation, Capacity Concept