Boosting Kernel Estimators

Abstract

A boosting algorithm [1, 2] can be seen as a way to improve the fit of statistical models. Typically, M predictions are produced by applying a base procedure, called a weak learner, to M reweighted samples; in each reweighted sample, an individual weight is assigned to every observation. The final output is obtained by aggregating the M predictions through majority voting. Boosting is a sequential ensemble scheme, in the sense that the weights of the observations at step m depend (only) on step m − 1. Clearly, a specific boosting scheme is obtained by choosing a loss function, which drives the data re-weighting mechanism, and a weak learner.


References

  1. R. E. Schapire, “The strength of weak learnability,” Machine Learning, vol. 5, pp. 197–227, 1990.

  2. Y. Freund, “Boosting a weak learning algorithm by majority,” Information and Computation, vol. 121, pp. 256–285, 1995.

  3. M. Di Marzio and C. C. Taylor, “Using small bias nonparametric density estimators for confidence interval estimation,” Journal of Nonparametric Statistics, vol. 21, pp. 229–240, 2009.

  4. P. Bühlmann and T. Hothorn, “Boosting algorithms: regularization, prediction and model fitting,” Statistical Science, vol. 22, no. 4, pp. 477–505, 2007.

  5. Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” in European Conference on Computational Learning Theory, pp. 23–37, 1995.

  6. R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton, “Adaptive mixtures of local experts,” Neural Computation, vol. 3, pp. 1–12, 1991.

  7. M. C. Jones, O. Linton, and J. P. Nielsen, “A simple bias reduction method for density estimation,” Biometrika, vol. 82, pp. 327–338, 1995.

  8. M. Di Marzio and C. C. Taylor, “Kernel density classification and boosting: an L2 analysis,” Statistics and Computing, vol. 15, pp. 113–123, 2005.

  9. M. Di Marzio and C. C. Taylor, “On boosting kernel density methods for multivariate data: density estimation and classification,” Statistical Methods and Applications, vol. 14, pp. 163–178, 2005.

  10. M. Di Marzio and C. C. Taylor, “Boosting kernel density estimates: a bias reduction technique?,” Biometrika, vol. 91, pp. 226–233, 2004.

  11. J. Friedman, T. Hastie, and R. Tibshirani, “Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors),” Annals of Statistics, vol. 28, pp. 337–407, 2000.

  12. I. S. Abramson, “On bandwidth variation in kernel estimates: a square root law,” The Annals of Statistics, vol. 10, pp. 1217–1223, 1982.

  13. J. W. Tukey, Exploratory Data Analysis. Addison-Wesley, Philippines, 1977.

  14. M. Di Marzio and C. C. Taylor, “On boosting kernel regression,” Journal of Statistical Planning and Inference, vol. 138, pp. 2483–2498, 2008.

  15. J. A. Rice, “Boundary modifications for kernel regression,” Communications in Statistics – Theory and Methods, vol. 13, pp. 893–900, 1984.

  16. M. C. Jones, “Simple boundary correction for kernel density estimation,” Statistics and Computing, vol. 3, pp. 135–146, 1993.

  17. T. Gasser, H.-G. Müller, and V. Mammitzsch, “Kernels for nonparametric curve estimation,” Journal of the Royal Statistical Society, Series B, vol. 47, pp. 238–252, 1985.

Author information

Corresponding author

Correspondence to Charles C. Taylor.

Copyright information

© 2012 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Di Marzio, M., Taylor, C.C. (2012). Boosting Kernel Estimators. In: Zhang, C., Ma, Y. (eds) Ensemble Machine Learning. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-9326-7_3

  • DOI: https://doi.org/10.1007/978-1-4419-9326-7_3

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4419-9325-0

  • Online ISBN: 978-1-4419-9326-7
