Machine Learning, Volume 43, Issue 3, pp 293–318

An Adaptive Version of the Boost by Majority Algorithm

  • Yoav Freund

Abstract

We propose a new boosting algorithm. It is an adaptive version of the boost by majority algorithm, combining the bounded goals of boost by majority with the adaptivity of AdaBoost.

The method used for making boost-by-majority adaptive is to consider the limit in which each of the boosting iterations makes an infinitesimally small contribution to the process as a whole. This limit can be modeled using the differential equations that govern Brownian motion. The new boosting algorithm, named BrownBoost, is based on finding solutions to these differential equations.
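
For concreteness, here is the standard Brownian-motion fact being invoked, written in generic notation of my own choosing rather than the paper's. The probability that a Brownian motion run until time T ends up nonnegative, given position x at time t, is

    \phi(x, t) = \Pr[\, x + B_T - B_t \ge 0 \,]
               = \tfrac{1}{2}\left( 1 + \operatorname{erf}\!\left( \frac{x}{\sqrt{2(T - t)}} \right) \right),

and this \phi solves the backward heat equation

    \frac{\partial \phi}{\partial t} + \frac{1}{2}\,\frac{\partial^2 \phi}{\partial x^2} = 0,
    \qquad \phi(x, T) = \mathbf{1}\{ x \ge 0 \}.

In the boosting picture, x plays the role of an example's margin and t the fraction of the boosting process already consumed; BrownBoost's actual potential function differs from this display in scaling and boundary conditions.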

The paper describes two methods for finding approximate solutions to the differential equations. The first is a method that results in a provably polynomial time algorithm. The second method, based on the Newton-Raphson minimization procedure, is much more efficient in practice but is not known to be polynomial.
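
As a concrete illustration of the second method, the sketch below shows the Newton-Raphson machinery in Python. It is not the paper's algorithm: the function names, the finite-difference Jacobian, and the toy system in the demo are assumptions of mine, standing in for the pair of nonlinear equations (in the hypothesis weight and the time advance) that BrownBoost solves at each boosting iteration.

    import math

    def newton_raphson_2d(f, x0, y0, tol=1e-10, max_iter=100, h=1e-7):
        # Solve f(x, y) = (0, 0) by Newton-Raphson, estimating the 2x2
        # Jacobian with forward finite differences of step h.
        x, y = x0, y0
        for _ in range(max_iter):
            f1, f2 = f(x, y)
            if abs(f1) < tol and abs(f2) < tol:
                break
            f1x, f2x = f(x + h, y)                     # perturb x
            f1y, f2y = f(x, y + h)                     # perturb y
            j11, j21 = (f1x - f1) / h, (f2x - f2) / h  # d/dx column
            j12, j22 = (f1y - f1) / h, (f2y - f2) / h  # d/dy column
            det = j11 * j22 - j12 * j21
            if det == 0.0:
                raise ZeroDivisionError("singular Jacobian")
            # Newton step: (x, y) <- (x, y) - J^{-1} f
            x -= (j22 * f1 - j12 * f2) / det
            y -= (j11 * f2 - j21 * f1) / det
        return x, y

    # Toy demo (hypothetical system; erf chosen only to echo the
    # Brownian-motion flavor): solve erf(x) = y together with x + y = 1.
    root_x, root_y = newton_raphson_2d(
        lambda x, y: (math.erf(x) - y, x + y - 1.0), 0.5, 0.5)
    print(root_x, root_y)

A finite-difference Jacobian keeps the sketch self-contained; an implementation of the paper's method would instead differentiate its two equations analytically.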

Keywords: boosting, Brownian motion, AdaBoost, ensemble learning, drifting games


Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • Yoav Freund
  1. AT&T Labs, Florham Park, USA
