Improvements to AdaBoost Dynamic
This paper presents recent results in extending the well-known machine learning ensemble method, boosting. The main idea is to vary the “weak” base classifier at each step of the method, using the classifier that performs best on the data presented in that iteration. We show that the solution is sensitive to the choice of loss function, and that the exponential loss function is detrimental to the performance of this kind of boosting. An approach using a logistic loss function performs better, but tends to overfit as the number of iterations grows. We show that this drawback can be overcome with a resampling technique taken from research on learning from imbalanced data.
Keywords: Loss Function · Weak Learner · Weight Calculation · Imbalanced Data · Strong Learner
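To make the idea concrete, below is a minimal sketch of a dynamic boosting loop in the spirit of the abstract: at each round a small pool of candidate weak learners is fitted on the current example weights, the one with the lowest weighted error is kept, and the weights are updated with a logistic rather than exponential loss. This is not the authors' exact algorithm; the candidate pool, the function names (`dynamic_boost`, `predict`), and the precise logistic weighting form are illustrative assumptions.

```python
# Sketch of "dynamic" boosting with a logistic-loss weight update.
# Assumptions (not from the paper): the candidate pool below, and the
# LogitBoost-style weighting w_i = 1 / (1 + exp(y_i * F(x_i))).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

def dynamic_boost(X, y, n_rounds=50):
    """Labels y must be in {-1, +1}. Returns (learners, alphas)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # example weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        # Candidate weak learners for this round (an illustrative pool).
        pool = [DecisionTreeClassifier(max_depth=1), GaussianNB()]
        best, best_err = None, np.inf
        for clf in pool:
            clf.fit(X, y, sample_weight=w)
            err = np.sum(w * (clf.predict(X) != y))
            if err < best_err:
                best, best_err = clf, err
        if best_err >= 0.5:                      # no better than chance
            break
        alpha = 0.5 * np.log((1 - best_err) / max(best_err, 1e-12))
        learners.append(best)
        alphas.append(alpha)
        # Logistic-loss weighting on the current ensemble margin,
        # instead of AdaBoost's exponential update.
        margin = y * sum(a * l.predict(X) for a, l in zip(alphas, learners))
        w = 1.0 / (1.0 + np.exp(margin))
        w /= w.sum()
    return learners, alphas

def predict(learners, alphas, X):
    score = sum(a * l.predict(X) for a, l in zip(alphas, learners))
    return np.sign(score)
```

The resampling step the abstract credits with curbing overfitting is omitted here; one plausible reading is to draw a bootstrap sample proportional to the weights `w` before fitting each round, as is common in learning from imbalanced data.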