Abstract
The use of multiple classifiers has attracted much interest in the statistical learning community in recent years. The basic principle of multiple-classifier algorithms, also called aggregation, ensemble, or voting methods, is to construct, according to some algorithm, several (generally a few dozen) different classifiers belonging to a certain family (e.g. support vector machines, classification trees, neural networks). The "aggregate" classifier is then obtained by majority vote among the outputs of the individual classifiers when they are presented with a new instance. In some algorithms the majority vote is replaced by a weighted vote, with weights prescribed by the aggregation algorithm. Classical references on this kind of method include [2, 3, 8, 9, 12, 14, 19].
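The voting scheme described above can be summarized in a few lines of code. The following is a minimal sketch, not taken from the chapter: `classifiers` is assumed to be a list of already-trained base classifiers, each exposing a hypothetical `predict` method that returns a class label, and `weights` are optional per-classifier vote weights as prescribed by the aggregation algorithm (uniform weights recover the plain majority vote).

```python
import numpy as np

def aggregate_predict(classifiers, x, weights=None):
    """Classify instance x by (weighted) majority vote over base classifiers.

    Assumptions (not from the chapter): each element of `classifiers` has a
    `predict(x)` method returning a hashable class label; `weights`, if given,
    has one nonnegative entry per classifier.
    """
    if weights is None:
        weights = np.ones(len(classifiers))       # uniform weights: plain majority vote
    tally = {}
    for clf, w in zip(classifiers, weights):
        label = clf.predict(x)                    # each base classifier casts a vote
        tally[label] = tally.get(label, 0.0) + w  # accumulate its vote weight
    # The aggregate classifier outputs the label with the largest total weight.
    return max(tally, key=tally.get)
```

With uniform weights this is the majority vote used by, e.g., bagging [6]; boosting-type methods instead supply nonuniform weights determined during training.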
References
Y. Amit and G. Blanchard. Multiple randomized classifiers: MRCL. Technical report, University of Chicago, 2000.
Y. Amit and D. Geman. Shape quantization and recognition with randomized trees. Neural Computation, 9:1545–1588, 1997.
Y. Amit, D. Geman, and K. Wilder. Joint induction of shape features and tree classifiers. IEEE Trans. PAMI, 19(11):1300–1306, 1997.
G. Blanchard. The "progressive mixture" estimator for regression trees. Annales de l'I.H.P., Probabilités et Statistiques, 35(6):793–820, 1999.
G. Blanchard. Mixture and aggregation of estimators for pattern recognition. Application to decision trees. PhD dissertation, Université Paris-Nord, 2001. (In English, with an introductory part in French). Available at http://www.math.u-psud.fr/~blanchard/publi/these.ps.gz.
L. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
L. Breiman. Prediction games and arcing algorithms. Technical report, Statistics Department, University of California, Berkeley, December 1997.
L. Breiman. Arcing classifiers. The Annals of Statistics, 26(3):801–849, 1998.
L. Breiman. Random forests — random features. Technical report, University of California, Berkeley, 1999.
H. Chipman, E. I. George, and R. E. McCulloch. Bayesian CART model search. JASA, 93:935–947, September 1998.
N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. J. Artificial Intell. Res., 2:263–286, 1995.
Y. Freund and R. E. Schapire. Game theory, on-line prediction and boosting. In Proceedings of the Ninth Annual Conference on Computational Learning Theory, 1996.
J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. The Annals of Statistics, 28:337–374, 2000.
V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. To appear in The Annals of Statistics.
J. Langford, M. Seeger, and N. Megiddo. An improved predictive accuracy bound for averaging classifiers. NIPS 2001.
G. Lugosi. Lectures on statistical learning theory. Presented at the Garchy Seminar on Mathematical Statistics and Applications, available at http://www.econ.upf.es/~lugosi, 2000.
G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 2000.
R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651–1686, 1998.