Improvements to AdaBoost Dynamic

  • Erico N. de Souza
  • Stan Matwin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7310)

Abstract

This paper presents recent results in extending the well-known machine learning ensemble method, boosting. The main idea is to vary the “weak” base classifier at each step of the method, using the classifier that performs “best” on the data presented in that iteration. We show that this solution is sensitive to the loss function used, and that the exponential loss function is detrimental to the performance of this kind of boosting. An approach that uses a logistic loss function performs better, but tends to overfit as the number of iterations grows. We show that this drawback can be overcome with a resampling technique taken from research on learning from imbalanced data.
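The idea sketched in the abstract can be read as a boosting loop that, at each round, (i) re-weights the training examples according to a logistic rather than an exponential loss and (ii) selects, from a pool of candidate base classifiers, the one with the lowest weighted error under the current weights. The code below is a minimal illustrative sketch of that idea only, not the authors' AdaBoost Dynamic algorithm; the candidate pool, the weight formula w_i = 1 / (1 + exp(y_i F(x_i))), and all function names are assumptions made for this example.

# Illustrative sketch only (not the authors' exact algorithm): boosting that
# picks, at each round, the base classifier with the lowest weighted error
# from a small pool, and derives example weights from a logistic loss.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

def dynamic_logistic_boost(X, y, n_rounds=50):
    """Boost with a per-round choice of base learner and logistic-loss weights.

    y is expected in {-1, +1}. Returns the fitted learners and their alphas.
    (Hypothetical helper written for illustration.)
    """
    pool = [DecisionTreeClassifier(max_depth=1),   # assumed candidate pool
            GaussianNB(),
            LogisticRegression(max_iter=200)]
    n = len(y)
    F = np.zeros(n)                  # additive score F(x_i) built so far
    learners, alphas = [], []
    for _ in range(n_rounds):
        # Logistic-loss weights: large for badly classified examples, but they
        # grow far more slowly with the margin than exponential-loss weights.
        w = 1.0 / (1.0 + np.exp(y * F))
        w /= w.sum()
        # "Dynamic" step: fit every candidate and keep the one with the
        # smallest weighted training error on this round's weights.
        best, best_pred, best_err = None, None, np.inf
        for proto in pool:
            clf = clone(proto).fit(X, y, sample_weight=w)
            pred = clf.predict(X)
            err = w[pred != y].sum()
            if err < best_err:
                best, best_pred, best_err = clf, pred, err
        if best_err >= 0.5:          # no candidate better than chance: stop
            break
        alpha = 0.5 * np.log((1.0 - best_err) / max(best_err, 1e-12))
        F += alpha * best_pred
        learners.append(best)
        alphas.append(alpha)
    return learners, np.array(alphas)

def predict(learners, alphas, X):
    score = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
    return np.sign(score)

if __name__ == "__main__":
    X, y = make_classification(n_samples=400, n_features=10, random_state=0)
    y = 2 * y - 1                    # map {0, 1} -> {-1, +1}
    learners, alphas = dynamic_logistic_boost(X, y)
    print("training accuracy:", (predict(learners, alphas, X) == y).mean())

The resampling variant mentioned in the abstract could be grafted onto the same loop by drawing a weighted bootstrap sample each round instead of (or in addition to) passing sample_weight, which is one common way resampling is substituted for reweighting in boosting.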

Keywords

Loss Function · Weak Learner · Weight Calculation · Imbalanced Data · Strong Learner

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Erico N. de Souza (1)
  • Stan Matwin (1)
  1. School of Information Technology and Engineering, University of Ottawa, Ottawa, Canada