Encyclopedia of Machine Learning

2010 Edition
Editors: Claude Sammut, Geoffrey I. Webb

Stacked Generalization

Reference work entry
DOI: https://doi.org/10.1007/978-0-387-30164-8_778

Synonyms

Stacking

Definition

Stacking is an ensemble learning technique. A set of models is constructed from bootstrap samples of a dataset, and their outputs on a hold-out dataset are then used as inputs to a “meta”-model. The base models are called level-0 models, and the meta-model is the level-1 model. The task of the level-1 model is to combine the set of level-0 outputs so as to correctly classify the target, thereby correcting any mistakes made by the level-0 models.
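
The following is a minimal sketch of this procedure, assuming scikit-learn is available; the synthetic dataset, decision-tree base learners, and logistic-regression meta-learner are illustrative choices, not part of the original formulation.

    # Stacked generalization sketch: level-0 models on bootstrap samples,
    # level-1 meta-model trained on their hold-out outputs.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.RandomState(0)
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Split off a hold-out set whose level-0 outputs will train the level-1 model.
    X_train, X_hold, y_train, y_hold = train_test_split(
        X, y, test_size=0.5, random_state=0)

    # Level-0: base models fitted on bootstrap samples of the training data.
    base_models = []
    for _ in range(5):
        idx = rng.randint(0, len(X_train), size=len(X_train))  # bootstrap sample
        model = DecisionTreeClassifier(random_state=0)
        base_models.append(model.fit(X_train[idx], y_train[idx]))

    # Level-1 inputs: each base model's predicted class probability on the hold-out set.
    meta_features = np.column_stack(
        [m.predict_proba(X_hold)[:, 1] for m in base_models])

    # Level-1: the meta-model learns to combine (and correct) the level-0 outputs.
    meta_model = LogisticRegression().fit(meta_features, y_hold)

    # To classify new points, pass them through level-0, then level-1.
    new_meta = np.column_stack(
        [m.predict_proba(X_hold[:5])[:, 1] for m in base_models])
    print(meta_model.predict(new_meta))

scikit-learn's sklearn.ensemble.StackingClassifier implements a closely related scheme, generating the level-1 training data from cross-validated predictions rather than from bootstrap samples and a single hold-out split.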

Recommended Reading

  1. Wolpert, D. H. (1992). Stacked generalization. Neural Networks, 5(2), 241–259.
