Machine Learning, Volume 24, Issue 1, pp 49-64
Stacked regressions
Leo Breiman (Statistics Department, University of California)
Abstract
Stacking regressions is a method for forming linear combinations of different predictors to give improved prediction accuracy. The idea is to use cross-validation data and least squares under nonnegativity constraints to determine the coefficients in the combination. Its effectiveness is demonstrated in stacking regression trees of different sizes and in a simulation stacking linear subset and ridge regressions. Reasons why this method works are explored. The idea of stacking originated with Wolpert (1992).
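The procedure the abstract describes can be sketched as follows: collect each base model's out-of-fold predictions via cross-validation, then solve a nonnegativity-constrained least-squares problem for the combination weights. This is a minimal illustrative sketch, not the paper's exact setup — the base learners, fold count, and the use of scipy's NNLS solver are all assumptions.

```python
# Sketch of stacked regressions: weights are fit by least squares under
# nonnegativity constraints on cross-validated predictions.
# Base models and data here are illustrative, not from the paper.
import numpy as np
from scipy.optimize import nnls
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + rng.normal(size=200)

base_models = [LinearRegression(),
               Ridge(alpha=1.0),
               DecisionTreeRegressor(max_depth=3, random_state=0)]

# Matrix of out-of-fold predictions: Z[i, k] is model k's prediction for
# point i, made by a copy of the model that never saw point i in training.
Z = np.zeros((len(y), len(base_models)))
for train_idx, test_idx in KFold(n_splits=10).split(X):
    for k, model in enumerate(base_models):
        model.fit(X[train_idx], y[train_idx])
        Z[test_idx, k] = model.predict(X[test_idx])

# Least squares under nonnegativity constraints gives the stacking weights.
alpha, _ = nnls(Z, y)

# Refit each base model on the full data; the stacked predictor is the
# alpha-weighted sum of their predictions.
for model in base_models:
    model.fit(X, y)
stacked_pred = np.column_stack(
    [model.predict(X) for model in base_models]) @ alpha
print("stacking weights:", np.round(alpha, 3))
```

The nonnegativity constraint is what distinguishes this from ordinary least-squares combination: it typically zeroes out redundant predictors and keeps the combined predictor's scale sensible.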
Key words
Stacking · Nonnegativity · Trees · Subset regression · Combinations
 Cover Date
1996-07
 DOI
 10.1007/BF00117832
 Print ISSN
0885-6125
 Online ISSN
1573-0565
 Publisher
 Kluwer Academic Publishers
Authors

Leo Breiman (1)
Author Affiliations

1. Statistics Department, University of California, Berkeley, CA 94720