
Classification with Gaussians and convex loss II: improving error bounds by noise conditions

Article · Published in Science China Mathematics

Abstract

We continue our study of classification learning algorithms generated by Tikhonov regularization schemes associated with Gaussian kernels and general convex loss functions. The main purpose of this paper is to improve the error bounds by presenting a new comparison theorem associated with general convex loss functions and Tsybakov noise conditions. Concrete examples are provided to illustrate the improved learning rates, demonstrating the effect of various loss functions on the learning algorithms. The convexity of the loss functions plays a central role in our analysis.
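For orientation, the following is a minimal sketch of the standard setup behind such schemes, written in the usual notation (sample z, bandwidth σ, regularization parameter λ, convex loss φ); the paper's exact normalization and parameter conventions may differ.

% Sketch of the standard setup, assuming the usual notation; the paper's
% precise normalization may differ.
% Sample z = {(x_i, y_i)}_{i=1}^m with labels y_i in {-1, +1},
% Gaussian kernel K_sigma(x, u) = exp(-|x - u|^2 / sigma^2) with RKHS H_sigma,
% convex loss phi (e.g. the hinge loss phi(t) = max(0, 1 - t)).
\[
  f_z \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_\sigma}
  \Biggl\{ \frac{1}{m} \sum_{i=1}^{m} \phi\bigl(y_i f(x_i)\bigr)
  \;+\; \lambda \, \|f\|_{\mathcal{H}_\sigma}^{2} \Biggr\}.
\]
% Tsybakov noise condition with exponent q >= 0: for some c_q > 0 and all t > 0,
\[
  \rho_X\bigl(\{x : |f_\rho(x)| \le t\}\bigr) \;\le\; c_q\, t^{q},
  \qquad f_\rho(x) = \mathbb{P}(y=1\mid x) - \mathbb{P}(y=-1\mid x).
\]

Larger exponents q correspond to less noisy margins, which is what allows the comparison theorem to translate excess φ-risk bounds into faster misclassification-error rates.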



Author information


Corresponding author

Correspondence to DaoHong Xiang.


About this article

Cite this article

Xiang, D. Classification with Gaussians and convex loss II: improving error bounds by noise conditions. Sci. China Math. 54, 165–171 (2011). https://doi.org/10.1007/s11425-010-4043-2

