
Exact Rate of Convergence of Kernel-Based Classification Rule

Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 605)

Abstract

A binary classification problem is considered, where the a posteriori probability is estimated by a nonparametric kernel regression estimate with the naive kernel. The excess error probability of the corresponding plug-in classification rule over the error probability of the Bayes decision is studied and decomposed into an approximation error and an estimation error. A general formula is derived for the approximation error. Under a weak margin condition and various smoothness conditions, tight upper bounds on the approximation error are presented. Using a Berry-Esseen type central limit theorem, a general expression for the estimation error is derived.
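For concreteness, the plug-in rule described in the abstract can be sketched as follows. This is a minimal illustration only, assuming Euclidean feature vectors, a naive (window) kernel with bandwidth h, and thresholding of the estimated posterior at 1/2; all variable names and the synthetic data are illustrative and not taken from the chapter.

    import numpy as np

    def kernel_regression_estimate(x, X_train, y_train, h):
        # Naive (window) kernel K(u) = 1{||u|| <= 1}: the estimate is the
        # average of the labels of training points within distance h of x.
        in_window = np.linalg.norm(X_train - x, axis=1) <= h
        if not in_window.any():
            return 0.0  # convention when the window contains no training points
        return y_train[in_window].mean()

    def plug_in_rule(x, X_train, y_train, h):
        # Plug-in decision: choose class 1 iff the estimated posterior is >= 1/2.
        return int(kernel_regression_estimate(x, X_train, y_train, h) >= 0.5)

    # Illustrative usage with synthetic data.
    rng = np.random.default_rng(0)
    X_train = rng.uniform(-1.0, 1.0, size=(500, 2))
    y_train = (rng.uniform(size=500) < np.where(X_train[:, 0] > 0, 0.9, 0.1)).astype(int)
    print(plug_in_rule(np.array([0.3, -0.2]), X_train, y_train, h=0.2))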

Keywords

Lower bound · Upper bound · Classification error probability · Kernel rule · Margin condition

AMS Classification

62G10 


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Institute of Applied Mathematics and Statistics, University of Hohenheim, Stuttgart, Germany
  2. Department of Computer Science and Information Theory, Budapest University of Technology and Economics, Budapest, Hungary
  3. Department of Mathematics, University of Stuttgart, Stuttgart, Germany
