
Comparison of Bagging and Boosting Algorithms on Sample and Feature Weighting

  • Satoshi Shirai
  • Mineichi Kudo
  • Atsuyoshi Nakamura
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5519)

Abstract

We compared boosting with bagging using learning algorithms of different strengths to improve the performance of the set of classifiers to be fused. Our experimental results showed that boosting worked well with weak learning algorithms, whereas bagging, especially feature-based bagging, worked well with strong learning algorithms. On the basis of these observations, we developed a mixed fusion method in which randomly chosen feature subsets are used with a standard boosting method. The proposed fusion method was confirmed to work well regardless of the learning algorithm.
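The mixed fusion method described above combines the sample re-weighting of boosting with the random feature selection used in feature-based bagging (the random subspace method). The abstract does not spell out the procedure, so the following is only a minimal sketch, assuming an AdaBoost.M1-style loop in which a fresh random feature subset is drawn in each round and a decision stump serves as the weak learner; the function names and parameters (boost_with_random_subspaces, subspace_ratio, and so on) are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: AdaBoost.M1-style boosting on sample weights, with a random
# feature subset drawn each round (random-subspace-style feature weighting).
# This is an assumed reconstruction of the idea, not the authors' code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost_with_random_subspaces(X, y, n_rounds=50, subspace_ratio=0.5, seed=None):
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    k = max(1, int(subspace_ratio * n_features))
    w = np.full(n_samples, 1.0 / n_samples)          # boosting sample weights
    ensemble = []                                    # (alpha, features, classifier)
    for _ in range(n_rounds):
        feats = rng.choice(n_features, size=k, replace=False)   # random feature subset
        clf = DecisionTreeClassifier(max_depth=1)               # weak learner (stump)
        clf.fit(X[:, feats], y, sample_weight=w)
        pred = clf.predict(X[:, feats])
        err = np.sum(w * (pred != y))
        if err >= 0.5 or err == 0.0:                 # no longer a useful weak learner
            break
        alpha = np.log((1.0 - err) / err)            # AdaBoost.M1 classifier weight
        w = w * np.exp(alpha * (pred != y))          # up-weight misclassified samples
        w = w / w.sum()
        ensemble.append((alpha, feats, clf))
    return ensemble

def predict_ensemble(ensemble, X, classes):
    votes = np.zeros((X.shape[0], len(classes)))
    for alpha, feats, clf in ensemble:
        pred = clf.predict(X[:, feats])
        for ci, c in enumerate(classes):
            votes[:, ci] += alpha * (pred == c)      # weighted vote per class
    return np.asarray(classes)[np.argmax(votes, axis=1)]
```

With a weak stump the boosting weights do most of the work, while with a stronger base learner the random subsets supply the diversity that feature-based bagging relies on, which matches the trade-off the abstract reports.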

Keywords

Training Sample, Feature Subset, Fusion Method, Testing Error, Training Error

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Satoshi Shirai (1)
  • Mineichi Kudo (1)
  • Atsuyoshi Nakamura (1)
  1. Division of Computer Science, Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
