
Explaining AdaBoost

Abstract

Boosting is an approach to machine learning based on the idea of creating a highly accurate prediction rule by combining many relatively weak and inaccurate rules. The AdaBoost algorithm of Freund and Schapire was the first practical boosting algorithm, and remains one of the most widely used and studied, with applications in numerous fields. This chapter aims to review some of the many perspectives and analyses of AdaBoost that have been applied to explain or understand it as a learning method, with comparisons of both the strengths and weaknesses of the various approaches.
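To make the idea described above concrete (combining many weak, inaccurate rules into a single accurate one), the following is a minimal sketch of AdaBoost using decision stumps as the weak classifiers. It is not the chapter's code: the function names, the stump weak learner, and the NumPy-based implementation are illustrative assumptions, not the authors' implementation.

import numpy as np

def train_adaboost(X, y, num_rounds=50):
    """Minimal AdaBoost sketch with decision stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    weights = np.full(n, 1.0 / n)      # D_1: start with a uniform distribution over examples
    ensemble = []                      # list of (alpha, feature index, threshold, polarity)

    for _ in range(num_rounds):
        # Weak learner: exhaustively pick the decision stump with smallest weighted error.
        best, best_err = None, np.inf
        for j in range(d):
            for thresh in np.unique(X[:, j]):
                for polarity in (1, -1):
                    pred = polarity * np.where(X[:, j] <= thresh, 1, -1)
                    err = np.sum(weights[pred != y])
                    if err < best_err:
                        best_err, best = err, (j, thresh, polarity)

        eps = max(best_err, 1e-12)                 # weighted error of this round's weak classifier
        alpha = 0.5 * np.log((1.0 - eps) / eps)    # weight assigned to it in the final vote
        j, thresh, polarity = best
        pred = polarity * np.where(X[:, j] <= thresh, 1, -1)

        # Re-weight the training set: misclassified examples gain weight, then renormalize.
        weights *= np.exp(-alpha * y * pred)
        weights /= weights.sum()
        ensemble.append((alpha, j, thresh, polarity))

    return ensemble

def predict_adaboost(ensemble, X):
    """Final hypothesis: the sign of the weighted vote of the weak classifiers."""
    score = np.zeros(X.shape[0])
    for alpha, j, thresh, polarity in ensemble:
        score += alpha * polarity * np.where(X[:, j] <= thresh, 1, -1)
    return np.sign(score)

A brute-force stump search is used here only to keep the sketch self-contained; in practice any weak learner that beats random guessing on the weighted sample can be plugged in.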

Keywords

  • Test Error
  • Training Error
  • Generalization Error
  • Weak Classifier
  • Weak Learning

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Bartlett, P.L., Traskin, M.: AdaBoost is consistent. J. Mach. Learn. Res. 8, 2347–2368 (2007)
  2. Baum, E.B., Haussler, D.: What size net gives valid generalization? Neural Comput. 1(1), 151–160 (1989)
  3. Bickel, P.J., Ritov, Y., Zakai, A.: Some theory for generalized boosting algorithms. J. Mach. Learn. Res. 7, 705–732 (2006)
  4. Breiman, L.: Prediction games and arcing classifiers. Neural Comput. 11(7), 1493–1517 (1999)
  5. Breiman, L.: Population theory for boosting ensembles. Ann. Stat. 32(1), 1–11 (2004)
  6. Dietterich, T.G.: An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Mach. Learn. 40(2), 139–158 (2000)
  7. Frean, M., Downs, T.: A simple cost function for boosting. Technical report, Department of Computer Science and Electrical Engineering, University of Queensland (1998)
  8. Freund, Y.: Boosting a weak learning algorithm by majority. Inf. Comput. 121(2), 256–285 (1995)
  9. Freund, Y.: An adaptive version of the boost by majority algorithm. Mach. Learn. 43(3), 293–318 (2001)
  10. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55(1), 119–139 (1997)
  11. Friedman, J.H.: Greedy function approximation: a gradient boosting machine. Ann. Stat. 29(5), 1189–1232 (2001)
  12. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting. Ann. Stat. 28(2), 337–407 (2000)
  13. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. Springer, New York (2009)
  14. Jiang, W.: Process consistency for AdaBoost. Ann. Stat. 32(1), 13–29 (2004)
  15. Kalai, A.T., Servedio, R.A.: Boosting in the presence of noise. J. Comput. Syst. Sci. 71(3), 266–290 (2005)
  16. Long, P.M., Servedio, R.A.: Martingale boosting. In: 18th Annual Conference on Learning Theory, Bertinoro (2005)
  17. Long, P.M., Servedio, R.A.: Adaptive martingale boosting. Advances in Neural Information Processing Systems 21, Vancouver (2009)
  18. Long, P.M., Servedio, R.A.: Random classification noise defeats all convex potential boosters. Mach. Learn. 78, 287–304 (2010)
  19. Lugosi, G., Vayatis, N.: On the Bayes-risk consistency of regularized boosting methods. Ann. Stat. 32(1), 30–55 (2004)
  20. Maclin, R., Opitz, D.: An empirical evaluation of bagging and boosting. In: Proceedings of the 14th National Conference on Artificial Intelligence, Providence, pp. 546–551 (1997)
  21. Mannor, S., Meir, R., Zhang, T.: Greedy algorithms for classification—consistency, convergence rates, and adaptivity. J. Mach. Learn. Res. 4, 713–742 (2003)
  22. Mason, L., Bartlett, P., Baxter, J.: Direct optimization of margins improves generalization in combined classifiers. Advances in Neural Information Processing Systems 12. MIT Press, Cambridge (2000)
  23. Mason, L., Baxter, J., Bartlett, P., Frean, M.: Functional gradient techniques for combining hypotheses. Advances in Large Margin Classifiers. MIT Press, Cambridge (2000)
  24. Mease, D., Wyner, A.: Evidence contrary to the statistical view of boosting. J. Mach. Learn. Res. 9, 131–156 (2008)
  25. Onoda, T., Rätsch, G., Müller, K.R.: An asymptotic analysis of AdaBoost in the binary classification case. In: Proceedings of the 8th International Conference on Artificial Neural Networks, Skövde, pp. 195–200 (1998)
  26. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo (1993)
  27. Rätsch, G., Onoda, T., Müller, K.R.: Soft margins for AdaBoost. Mach. Learn. 42(3), 287–320 (2001)
  28. Reyzin, L., Schapire, R.E.: How boosting the margin can also boost classifier complexity. In: Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh (2006)
  29. Rosset, S., Zhu, J., Hastie, T.: Boosting as a regularized path to a maximum margin classifier. J. Mach. Learn. Res. 5, 941–973 (2004)
  30. Schapire, R.E., Freund, Y.: Boosting: Foundations and Algorithms. MIT Press, Cambridge (2012)
  31. Schapire, R.E., Singer, Y.: Improved boosting algorithms using confidence-rated predictions. Mach. Learn. 37(3), 297–336 (1999)
  32. Schapire, R.E., Freund, Y., Bartlett, P., Lee, W.S.: Boosting the margin: a new explanation for the effectiveness of voting methods. Ann. Stat. 26(5), 1651–1686 (1998)
  33. Tibshirani, R.: Regression shrinkage and selection via the Lasso. J. Royal Stat. Soc. B (Methodol.) 58(1), 267–288 (1996)
  34. Vapnik, V.N., Chervonenkis, A.Y.: On the uniform convergence of relative frequencies of events to their probabilities. Theory Prob. Appl. 16(2), 264–280 (1971)
  35. Vapnik, V.N., Chervonenkis, A.Y.: Theory of Pattern Recognition. Nauka, Moscow (1974). (In Russian)
  36. Wang, L., Sugiyama, M., Jing, Z., Yang, C., Zhou, Z.H., Feng, J.: A refined margin analysis for boosting algorithms via equilibrium margin. J. Mach. Learn. Res. 12, 1835–1863 (2011)
  37. Wyner, A.J.: On boosting and the exponential loss. In: Proceedings of the 9th International Workshop on Artificial Intelligence and Statistics, Key West (2003)
  38. Zhang, T.: Statistical behavior and consistency of classification methods based on convex risk minimization. Ann. Stat. 32(1), 56–134 (2004)
  39. Zhang, T., Yu, B.: Boosting with early stopping: convergence and consistency. Ann. Stat. 33(4), 1538–1579 (2005)
  40. Zhao, P., Yu, B.: Stagewise Lasso. J. Mach. Learn. Res. 8, 2701–2726 (2007)


Acknowledgements

Support for this research was generously provided by NSF under Award #1016029.

Author information

Correspondence to Robert E. Schapire.


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Schapire, R.E. (2013). Explaining AdaBoost. In: Schölkopf, B., Luo, Z., Vovk, V. (eds) Empirical Inference. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41136-6_5


  • DOI: https://doi.org/10.1007/978-3-642-41136-6_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-41135-9

  • Online ISBN: 978-3-642-41136-6

  • eBook Packages: Computer Science, Computer Science (R0)