Abstract
In this paper we investigate the problem of learning an unknown bounded function. We emphasize special cases where it is possible to provide estimates that are very simple to compute and, in addition, universal: their construction does not require a priori knowledge of regularity conditions on the unknown object, yet they are almost optimal over a whole collection of function spaces. These estimates are constructed using a thresholding technique which, over the last decade, has proven in statistics to have very good properties for recovering signals of inhomogeneous smoothness, but which has not been extensively developed in learning theory. We consider two particular situations. In the first, the RKHS setting, we produce a new algorithm and investigate its performance in \(L_2(\hat\rho_X)\). The exponential rates of convergence are proved to be almost optimal, and the regularity assumptions are expressed in simple terms. The second is the more specific situation where the \(X_i\)'s are one-dimensional and the estimator is a wavelet thresholding estimate. The results in this setting are comparable to those obtained in the RKHS situation as regards the critical value and the exponential rates. The advantage here is that we are able to state the results in the \(L_2(\rho_X)\)-norm, and the regularity conditions are expressed in terms of standard Hölder spaces.
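To illustrate the thresholding idea underlying the estimators discussed above, the following is a minimal sketch, not the paper's exact algorithm: empirical coefficients of a sparse signal are kept only when they exceed a critical value. The universal threshold \(\sigma\sqrt{2\log n}\) used here is the classical Donoho–Johnstone choice, assumed purely for illustration; the paper's own critical value differs.

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Keep coefficients whose magnitude exceeds t; zero out the rest."""
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)

rng = np.random.default_rng(0)
n = 256

# Sparse "true" coefficient vector: a few large coefficients, the rest zero
# (a stand-in for the wavelet coefficients of a function with
# inhomogeneous smoothness).
theta = np.zeros(n)
theta[:5] = [4.0, -3.0, 2.5, -2.0, 1.5]

# Observed empirical coefficients: truth plus Gaussian noise.
sigma = 0.5
y = theta + sigma * rng.standard_normal(n)

# Illustrative critical value (Donoho-Johnstone universal threshold).
t = sigma * np.sqrt(2 * np.log(n))
theta_hat = hard_threshold(y, t)
```

The point of the construction is universality: the threshold depends only on the noise level and the number of coefficients, not on any smoothness assumption on the target function, yet the resulting estimator adapts to the unknown sparsity of the coefficient sequence.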
Keywords
Learning theory · Regularity condition · Besov space · Reproducing kernel Hilbert space · Exponential rate