Martingale Boosting

  • Philip M. Long
  • Rocco A. Servedio
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3559)


Martingale boosting is a simple and easily understood technique with a simple and easily understood analysis. A slight variant of the approach provably achieves optimal accuracy in the presence of random misclassification noise.
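The paper's preview is not shown on this page, but the construction the abstract refers to can be sketched. In martingale boosting, examples are routed through a leveled branching program: node (i, t) holds the examples for which i of the first t weak hypotheses voted positive, each node trains its own weak hypothesis on the examples that reach it, and the final label is 1 exactly when a majority of the T votes were positive. The code below is a hypothetical minimal illustration of that description, not the authors' implementation; the `stump_learner`, the parameter `T`, and the toy data are assumptions made for the example.

```python
import random

# Illustrative sketch (not the paper's code): a T-level branching program.
# Node (i, t) collects the examples for which i of the first t weak
# hypotheses voted positive; each node gets its own weak hypothesis, and
# an example's final label is 1 iff it ends at a node with i > T/2.

def stump_learner(examples):
    """Weak learner used only for illustration: exhaustively pick the
    (feature, threshold, sign) decision stump with best training accuracy."""
    best, best_acc = None, -1.0
    n_features = len(examples[0][0])
    for f in range(n_features):
        values = sorted({x[f] for x, _ in examples})
        # The extra threshold above the max allows an "always predict 0" stump.
        for thr in values + [values[-1] + 1.0]:
            for sign in (1, -1):
                acc = sum((1 if sign * (x[f] - thr) >= 0 else 0) == y
                          for x, y in examples) / len(examples)
                if acc > best_acc:
                    best, best_acc = (f, thr, sign), acc
    f, thr, sign = best
    return lambda x: 1 if sign * (x[f] - thr) >= 0 else 0

def train_martingale_booster(examples, T, weak_learner):
    """Train the leveled branching program; return a 0/1 classifier."""
    hyps = {}              # (i, t) -> weak hypothesis at that node
    reach = {0: examples}  # i -> examples currently reaching node (i, t)
    for t in range(T):
        nxt = {}
        for i, exs in reach.items():
            h = weak_learner(exs)
            hyps[(i, t)] = h
            for x, y in exs:
                # Move one step right on a positive vote, stay on a negative one.
                nxt.setdefault(i + h(x), []).append((x, y))
        reach = nxt

    def classify(x):
        i = 0
        for t in range(T):
            h = hyps.get((i, t))
            if h is None:  # no training example ever reached this node
                break
            i += h(x)
        return 1 if i > T / 2 else 0  # majority of positive votes

    return classify

# Toy check: labels are a single threshold on feature 0, so each node's
# stump can be perfect and the booster reproduces the labels exactly.
random.seed(0)
pts = [[random.random(), random.random()] for _ in range(40)]
data = [(x, 1 if x[0] >= 0.5 else 0) for x in pts]
clf = train_martingale_booster(data, T=5, weak_learner=stump_learner)
train_acc = sum(clf(x) == y for x, y in data) / len(data)
```

On this separable toy set every node sees examples of a single class (or a cleanly splittable mix), so the program routes all positives to the rightmost node and all negatives to the leftmost; with noisy labels, the per-node retraining is what the paper's noise-tolerant variant exploits.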





Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Philip M. Long, Center for Computational Learning Systems
  • Rocco A. Servedio, Department of Computer Science, Columbia University
