
Exponential Convergence Rates in Classification

  • Vladimir Koltchinskii
  • Olexandra Beznosova
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3559)

Abstract

Let (X,Y) be a random couple, X being an observable instance and Y ∈ {–1,1} being a binary label to be predicted based on an observation of the instance. Let \((X_i, Y_i),\ i = 1, \ldots, n\) be training data consisting of n independent copies of (X,Y). Consider a real-valued classifier \(\hat{f}_n\) that minimizes the following penalized empirical risk

$$\frac{1}{n}\sum_{i=1}^{n} \ell(Y_i f(X_i)) + \lambda\|f\|^{2} \rightarrow \min, \quad f \in \mathcal{H}$$

over a Hilbert space \(\mathcal{H}\) of functions with norm \(\|\cdot\|\), ℓ being a convex loss function and λ > 0 being a regularization parameter. In particular, \(\mathcal{H}\) might be a Sobolev space or a reproducing kernel Hilbert space. We provide some conditions under which the generalization error of the corresponding binary classifier \(\mathrm{sign}(\hat{f}_n)\) converges to the Bayes risk exponentially fast.
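For readers who want to experiment with an estimator of this type, the following is a minimal Python sketch of the penalized empirical risk minimization described above. It assumes a Gaussian (RBF) kernel for \(\mathcal{H}\) and the logistic loss for ℓ, each being just one admissible instance of the setting considered in the paper; the helper names rbf_kernel and fit_penalized_classifier are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize


def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix; its reproducing kernel Hilbert space plays the role of H."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)


def fit_penalized_classifier(X, y, lam=0.1, gamma=1.0):
    """Minimize (1/n) sum_i ell(Y_i f(X_i)) + lam * ||f||_H^2 over f in the RKHS.

    By the representer theorem the minimizer has the form
    f(x) = sum_j alpha_j K(x, x_j) with ||f||_H^2 = alpha^T K alpha,
    so the problem reduces to a finite-dimensional convex optimization in alpha.
    Here ell is the logistic loss ell(v) = log(1 + exp(-v)), one admissible convex loss.
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)

    def objective_and_grad(alpha):
        margins = y * (K @ alpha)                        # Y_i f(X_i)
        emp_risk = np.mean(np.logaddexp(0.0, -margins))  # (1/n) sum ell(Y_i f(X_i))
        penalty = lam * alpha @ K @ alpha                # lam * ||f||_H^2
        # derivative of the logistic loss at each margin, divided by n
        dloss = -1.0 / (n * (1.0 + np.exp(margins)))
        grad = K @ (y * dloss) + 2.0 * lam * (K @ alpha)
        return emp_risk + penalty, grad

    result = minimize(objective_and_grad, np.zeros(n), jac=True, method="L-BFGS-B")
    alpha = result.x

    def f_hat(X_new):
        return rbf_kernel(X_new, X, gamma) @ alpha

    return f_hat


# The binary classifier studied in the paper is sign(f_hat).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=200))
f_hat = fit_penalized_classifier(X, y, lam=0.05, gamma=0.5)
print("training accuracy:", np.mean(np.sign(f_hat(X)) == y))
```

The representer theorem turns the infinite-dimensional minimization over \(\mathcal{H}\) into a convex problem in n coefficients; in practice λ and the kernel width would be chosen by cross-validation.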

Keywords

Convergence rate · Reproducing kernel Hilbert space · Generalization error · Independent copy · Fast convergence rate



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Vladimir Koltchinskii (1)
  • Olexandra Beznosova (1)

  1. Department of Mathematics and Statistics, The University of New Mexico, Albuquerque, USA
