Exponential Convergence Rates in Classification

  • Vladimir Koltchinskii
  • Olexandra Beznosova
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3559)

Abstract

Let (X,Y) be a random couple, X being an observable instance and Y ∈ {–1,1} being a binary label to be predicted based on an observation of the instance. Let (X_i, Y_i), i = 1, …, n be training data consisting of n independent copies of (X,Y). Consider a real-valued classifier \({\hat{f}_{n}}\) that minimizes the following penalized empirical risk

$$\frac{1}{n}\sum\limits_{i=1}^n \ell(Y_{i}f(X_{i})) + \lambda\|f\|^{2} \rightarrow \min, \quad f\in {\mathcal H}$$

over a Hilbert space \({\mathcal H}\) of functions with norm \(\|\cdot\|\), ℓ being a convex loss function and λ > 0 being a regularization parameter. In particular, \({\mathcal H}\) might be a Sobolev space or a reproducing kernel Hilbert space. We provide some conditions under which the generalization error of the corresponding binary classifier \({\rm sign}(\hat{f}_{n})\) converges to the Bayes risk exponentially fast.
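The minimization above can be sketched numerically. The following is an illustrative toy implementation, not the authors' method: it assumes a Gaussian RBF kernel (so \({\mathcal H}\) is the associated RKHS), the logistic loss ℓ(v) = log(1 + e^{–v}) as the convex loss, and plain gradient descent on the coefficients given by the representer theorem. The data, kernel width, step size, and iteration count are all hypothetical choices for demonstration.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian kernel matrix: K[i, j] = exp(-gamma * ||x_i - z_j||^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_penalized(X, y, lam=0.05, gamma=1.0, lr=0.1, steps=1000):
    """Minimize (1/n) sum_i ell(Y_i f(X_i)) + lam * ||f||^2 over the RKHS,
    with logistic loss ell(v) = log(1 + exp(-v)).  By the representer
    theorem the minimizer has the form f = sum_j alpha_j K(., x_j), so
    ||f||^2 = alpha^T K alpha, and we run gradient descent on alpha."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(n)
    for _ in range(steps):
        margins = y * (K @ alpha)                   # Y_i f(X_i)
        dloss = -y / (1.0 + np.exp(margins))        # derivative of the loss in f(X_i)
        grad = K @ (dloss / n + 2.0 * lam * alpha)  # gradient of the penalized risk
        alpha -= lr * grad
    return alpha

# Hypothetical toy data: the label is the sign of the first coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
alpha = fit_penalized(X, y)
preds = np.sign(rbf_kernel(X, X) @ alpha)           # the binary classifier sign(f_n)
accuracy = (preds == y).mean()
```

The returned coefficients define \(\hat{f}_n\), and the paper's object of study is how fast the risk of sign\((\hat{f}_n)\) approaches the Bayes risk as n grows.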


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Vladimir Koltchinskii (1)
  • Olexandra Beznosova (1)

  1. Department of Mathematics and Statistics, The University of New Mexico, Albuquerque, USA
