Machine Learning, Volume 24, Issue 1, pp 49–64

Stacked regressions

Authors

  • Leo Breiman
    • Statistics Department, University of California

DOI: 10.1007/BF00117832

Cite this article as:
Breiman, L. Mach Learn (1996) 24: 49. doi:10.1007/BF00117832

Abstract

Stacking regressions is a method for forming linear combinations of different predictors to give improved prediction accuracy. The idea is to use cross-validation data and least squares under non-negativity constraints to determine the coefficients in the combination. Its effectiveness is demonstrated in stacking regression trees of different sizes and in a simulation stacking linear subset and ridge regressions. Reasons why this method works are explored. The idea of stacking originated with Wolpert (1992).
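
As a concrete illustration of the procedure described in the abstract, below is a minimal Python sketch of stacked regressions: several base predictors are fit, their cross-validated predictions are collected, and the combination coefficients are obtained by least squares under non-negativity constraints. The libraries (scikit-learn, SciPy) and the particular base learners are illustrative assumptions, not the paper's exact experimental setup.

# Sketch of stacked regressions: combine base regressors with non-negative
# least squares weights estimated from cross-validated predictions.
# Library and model choices here are assumptions for illustration only.
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

# Base predictors: regression trees of different sizes plus a ridge regression.
base_models = [
    DecisionTreeRegressor(max_depth=2, random_state=0),
    DecisionTreeRegressor(max_depth=5, random_state=0),
    Ridge(alpha=1.0),
]

# Cross-validated predictions of each base model form the stacking data.
Z = np.column_stack([cross_val_predict(m, X, y, cv=10) for m in base_models])

# Least squares under non-negativity constraints gives the combination weights.
weights, _ = nnls(Z, y)
print("stacking weights:", weights)

# Refit each base model on all the data; the stacked predictor is the
# non-negative weighted combination of their predictions.
for m in base_models:
    m.fit(X, y)

def stacked_predict(X_new):
    preds = np.column_stack([m.predict(X_new) for m in base_models])
    return preds @ weights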

Key words

Stacking, Non-negativity, Trees, Subset regression, Combinations

Copyright information

© Kluwer Academic Publishers 1996