An Empirical Study of the Convergence of RegionBoost

  • Xinzhu Yang
  • Bo Yuan
  • Wenhuang Liu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5755)

Abstract

RegionBoost is a classical example of Boosting with a dynamic weighting scheme. Despite its demonstrated performance on a variety of classification problems, relatively little effort has been devoted to a detailed analysis of its convergence behavior. This paper presents results from a preliminary attempt to understand the practical convergence behavior of RegionBoost. It is shown that, in some situations, the training error of RegionBoost may not converge as consistently as that of its counterpart AdaBoost, and a deeper understanding of this phenomenon may greatly contribute to the improvement of RegionBoost.
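To make the mechanism under study concrete: RegionBoost trains its base classifiers with AdaBoost-style reweighting of the training set, but at prediction time each classifier's vote is weighted by its estimated accuracy on the k nearest training neighbors of the query point, rather than by a fixed global coefficient. The following is a minimal sketch of this idea using decision stumps and a kNN competence estimate (the keywords of the paper); it is an illustration under stated assumptions, not the authors' exact implementation, and all function names are hypothetical.

```python
import math

def stump_train(X, y, w):
    """Return the weighted-error-minimizing decision stump (feature, threshold, sign)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(x[f] for x in X)):
            for sign in (1, -1):
                pred = [sign if x[f] >= t else -sign for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, (f, t, sign))
    return best[1], best[0]

def stump_predict(stump, x):
    f, t, sign = stump
    return sign if x[f] >= t else -sign

def train_regionboost(X, y, T=10):
    """Training is AdaBoost-like: only the example weights are updated each round."""
    n = len(X)
    w = [1.0 / n] * n
    stumps = []
    for _ in range(T):
        stump, err = stump_train(X, y, w)
        err = max(err, 1e-10)
        if err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)
        # Reweight the training set as in AdaBoost; note that alpha is used
        # only here, not as a fixed combination weight at prediction time.
        w = [wi * math.exp(-alpha * yi * stump_predict(stump, xi))
             for wi, xi, yi in zip(w, X, y)]
        Z = sum(w)
        w = [wi / Z for wi in w]
        stumps.append(stump)
    return stumps

def predict_regionboost(stumps, X_train, y_train, x, k=3):
    """Dynamic weighting: each stump votes with its local accuracy near x."""
    order = sorted(range(len(X_train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(X_train[i], x)))
    nbrs = order[:k]
    score = 0.0
    for stump in stumps:
        # Competence estimate: accuracy on the k nearest training neighbors of x.
        acc = sum(stump_predict(stump, X_train[i]) == y_train[i] for i in nbrs) / k
        score += acc * stump_predict(stump, x)
    return 1 if score >= 0 else -1
```

Because the combination weights depend on the query point, the usual AdaBoost training-error bound (driven by the fixed alpha coefficients) no longer applies directly, which is precisely why the convergence behavior examined in the paper can differ from AdaBoost's.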

Keywords

Boosting · RegionBoost · Convergence · kNN · Decision Stumps

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Xinzhu Yang¹
  • Bo Yuan¹
  • Wenhuang Liu¹
  1. Graduate School at Shenzhen, Tsinghua University, Shenzhen, P.R. China