
Bagging, Boosting and the Random Subspace Method for Linear Classifiers

Published in: Pattern Analysis & Applications

Abstract:

Recently, bagging, boosting and the random subspace method have become popular combining techniques for improving weak classifiers. These techniques are designed for, and usually applied to, decision trees. In this paper, in contrast to a common opinion, we demonstrate that they may also be useful in linear discriminant analysis. Simulation studies, carried out for several artificial and real data sets, show that the performance of the combining techniques is strongly affected by the small-sample-size properties of the base classifier: boosting is useful for large training sample sizes, while bagging and the random subspace method are useful for critical training sample sizes. Finally, a table describing the possible usefulness of the combining techniques for linear classifiers is presented.
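To illustrate two of the combining techniques discussed in the abstract, the following is a minimal NumPy sketch of bagging and the random subspace method applied to a nearest-mean classifier (a simple linear discriminant). The dataset, the base classifier, and all parameters are illustrative assumptions, not the paper's experimental setup; boosting is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class Gaussian data (illustrative only, not from the paper).
n, d = 40, 10
X = np.vstack([rng.normal(0.0, 1.0, (n, d)),
               rng.normal(1.0, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)

def nmc_fit(X, y):
    """Nearest-mean classifier: a simple linear discriminant."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    w = m1 - m0                       # weight vector along the class means
    b = -0.5 * (m1 + m0) @ w          # boundary halfway between the means
    return w, b

def nmc_predict(w, b, X):
    return (X @ w + b > 0).astype(int)

def bagging(X, y, n_estimators=25):
    """Train each base classifier on a bootstrap replicate of the data."""
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(y), len(y))   # sample with replacement
        models.append(nmc_fit(X[idx], y[idx]))
    return models

def random_subspace(X, y, n_estimators=25, k=5):
    """Train each base classifier on a random subset of k features."""
    models = []
    for _ in range(n_estimators):
        feats = rng.choice(X.shape[1], k, replace=False)
        w_sub, b = nmc_fit(X[:, feats], y)
        w = np.zeros(X.shape[1])
        w[feats] = w_sub                # embed subspace weights in full space
        models.append((w, b))
    return models

def vote(models, X):
    """Combine the base classifiers by majority vote."""
    votes = np.stack([nmc_predict(w, b, X) for w, b in models])
    return (votes.mean(axis=0) > 0.5).astype(int)

acc_bag = (vote(bagging(X, y), X) == y).mean()
acc_rsm = (vote(random_subspace(X, y), X) == y).mean()
print(acc_bag, acc_rsm)
```

Both ensembles should comfortably beat chance on this well-separated toy problem; the paper's point is that their relative benefit over the single linear classifier depends on the training sample size.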


Additional information

Received: 03 November 2000, Received in revised form: 02 November 2001, Accepted: 13 December 2001


About this article

Cite this article

Skurichina, M., Duin, R. Bagging, Boosting and the Random Subspace Method for Linear Classifiers. Pattern Anal Appl 5, 121–135 (2002). https://doi.org/10.1007/s100440200011

