The Role of Combining Rules in Bagging and Boosting

  • Marina Skurichina
  • Robert P. W. Duin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1876)


Bagging and boosting can be used to improve weak classifiers; both techniques are based on combining classifiers. Usually a simple majority vote or a weighted majority vote is used as the combining rule in bagging and boosting. However, other combining rules, such as the mean, product, and average, are possible. In this paper, we study bagging and boosting in Linear Discriminant Analysis (LDA) and the role of the combining rule in each. Simulation studies, carried out for two artificial data sets and one real data set, show that bagging and boosting can be useful in LDA: bagging for critical training sample sizes and boosting for large training sample sizes. In contrast to a common opinion, we demonstrate that the usefulness of boosting does not directly depend on the instability of a classifier. We also show that the choice of the combining rule may affect the performance of bagging and boosting.
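The combining rules compared in the paper can be illustrated with a small sketch. The code below bags base classifiers trained on bootstrap samples and combines their outputs by majority vote, by the mean of the soft outputs, or by the product of posterior-like outputs. It is a minimal illustration only, assuming a plain two-class Fisher discriminant and stratified bootstrap sampling; it is not the authors' exact experimental setup.

```python
import numpy as np

def fit_lda(X, y):
    """Two-class Fisher linear discriminant: w = S_w^{-1} (m1 - m0)."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
    # small ridge term keeps S_w invertible for small samples
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    b = -w @ (m0 + m1) / 2.0          # threshold halfway between class means
    return w, b

def bagged_predict(X_train, y_train, X_test, n_boot=25, rule="vote", seed=0):
    """Bag LDA classifiers and combine their outputs with the given rule."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_boot):
        # stratified bootstrap: resample within each class so neither is empty
        idx = np.concatenate([
            rng.choice(np.flatnonzero(y_train == c),
                       size=np.sum(y_train == c), replace=True)
            for c in (0, 1)])
        w, b = fit_lda(X_train[idx], y_train[idx])
        scores.append(X_test @ w + b)  # signed distance to the decision plane
    S = np.asarray(scores)
    p = 1.0 / (1.0 + np.exp(-S))       # sigmoid -> posterior-like outputs
    if rule == "vote":                 # simple majority vote on hard labels
        return ((S > 0).mean(axis=0) > 0.5).astype(int)
    if rule == "mean":                 # average of the soft outputs
        return (p.mean(axis=0) > 0.5).astype(int)
    if rule == "product":              # product of per-classifier posteriors
        return (p.prod(axis=0) > (1 - p).prod(axis=0)).astype(int)
    raise ValueError(f"unknown combining rule: {rule}")
```

The three branches differ only in how the ensemble's outputs are fused, which is exactly the design choice whose effect on bagging and boosting the paper studies.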


Keywords: Linear Discriminant Analysis · Majority Vote · Generalization Error · Training Object · Bearing Fault



Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Marina Skurichina (1)
  • Robert P. W. Duin (1)
  1. Pattern Recognition Group, Department of Applied Physics, Faculty of Applied Sciences, Delft University of Technology, Delft, The Netherlands
