Complexity control in statistical learning
We consider the problem of determining a model for a given system on the basis of experimental data. The amount of data available is limited and, further, may be corrupted by noise. In this situation, it is important to control the complexity of the class of models from which we are to choose our model. In this paper, we first give a simplified overview of the principal features of learning theory. Then we describe how the method of regularization is used to control complexity in learning. We discuss two examples of regularization: one in which the function space used is finite dimensional, and another in which it is a reproducing kernel Hilbert space. Our exposition follows the formulation of Cucker and Smale. We give a new method of bounding the sample error in the regularization scenario, which avoids some difficulties in the derivation given by Cucker and Smale.
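To make the regularization scenario concrete, the following is a minimal sketch (not the authors' construction) of Tikhonov regularization in a reproducing kernel Hilbert space, i.e. kernel ridge regression: minimize the empirical squared error plus a penalty λ‖f‖²_K on the RKHS norm, where λ controls the effective complexity of the hypothesis class. The Gaussian kernel, the choice of λ and σ, and the toy data are all illustrative assumptions.

```python
import numpy as np

# Sketch of RKHS regularization (kernel ridge regression):
#   minimize (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
# By the representer theorem, the minimizer is f(x) = sum_j c_j K(x_j, x),
# and the coefficients solve the linear system (K + lam * m * I) c = y.

def gaussian_kernel(X1, X2, sigma=0.5):
    """Gaussian (RBF) kernel matrix between rows of X1 and X2 (illustrative choice)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_krr(X, y, lam=1e-2, sigma=0.5):
    """Solve (K + lam * m * I) c = y for the representer coefficients c."""
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def predict_krr(X_train, c, X_new, sigma=0.5):
    """Evaluate f(x) = sum_j c_j K(x_j, x) at the new points."""
    return gaussian_kernel(X_new, X_train, sigma) @ c

# Noisy samples of a smooth target; lam trades data fit against smoothness.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)
c = fit_krr(X, y)
y_hat = predict_krr(X, c, X)
```

Increasing λ shrinks the RKHS norm of the solution (a smoother, lower-complexity fit, with larger approximation error); decreasing it fits the noisy data more closely, which is the complexity trade-off the abstract describes.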
Keywords: Complexity control; learning theory; regularisation; covering number