Abstract
We continue our study of classification learning algorithms generated by Tikhonov regularization schemes associated with Gaussian kernels and general convex loss functions. The main purpose of this paper is to improve error bounds by presenting a new comparison theorem for general convex loss functions under Tsybakov noise conditions. Concrete examples are provided to illustrate the improved learning rates, which demonstrate the effect of various loss functions on learning algorithms. The convexity of the loss functions plays a central role in our analysis.
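As background for the scheme named above, the following is a hedged sketch of the standard Tikhonov regularization framework with a Gaussian kernel and of the Tsybakov noise condition; the symbols $\phi$, $\sigma$, $\lambda$, $q$, and the sample $z$ are standard notation assumed here, not drawn from this abstract.

```latex
% Regularization scheme (standard form, assumed notation): given a
% sample z = {(x_i, y_i)}_{i=1}^m with y_i in {-1, 1}, a convex loss
% phi, and the Gaussian kernel K_sigma(x, x') = exp(-|x - x'|^2 / sigma^2),
% the output classifier is sign(f_z) with
\[
  f_z = \arg\min_{f \in \mathcal{H}_{K_\sigma}}
  \left\{ \frac{1}{m} \sum_{i=1}^{m} \phi\bigl(y_i f(x_i)\bigr)
          + \lambda \|f\|_{K_\sigma}^2 \right\}.
\]
% Tsybakov noise condition with exponent q >= 0: for some c_q > 0
% and all t > 0,
\[
  \rho_X\bigl(\{x \in X : |f_\rho(x)| \le t\}\bigr) \le c_q\, t^{q},
\]
% where f_rho(x) = P(y = 1 | x) - P(y = -1 | x) is the regression
% function of the underlying distribution rho.
```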
Xiang, D. Classification with Gaussians and convex loss II: improving error bounds by noise conditions. Sci. China Math. 54, 165–171 (2011). https://doi.org/10.1007/s11425-010-4043-2
Keywords
- reproducing kernel Hilbert space
- binary classification
- general convex loss
- Tsybakov noise condition
- Sobolev space