European Journal of Epidemiology

Volume 33, Issue 5, pp 459–464

Stacked generalization: an introduction to super learning

  • Ashley I. Naimi
  • Laura B. Balzer


Stacked generalization is an ensemble method that allows researchers to combine several different prediction algorithms into one. Since its introduction in the early 1990s, the method has evolved into a host of related approaches, among which is the “Super Learner”. Super Learner uses V-fold cross-validation to build the optimal weighted combination of predictions from a library of candidate algorithms. Optimality is defined by a user-specified objective function, such as minimizing mean squared error or maximizing the area under the receiver operating characteristic curve. Although the method is relatively simple, its use by epidemiologists has been hampered by an incomplete understanding of its conceptual and technical details. We work step-by-step through two examples to illustrate concepts and address common concerns.
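The recipe described above (out-of-fold predictions from each candidate algorithm via V-fold cross-validation, then a convex combination of candidates chosen to minimize cross-validated mean squared error) can be sketched in a few lines. The following is a minimal pure-Python illustration only, not the authors' implementation (the paper itself uses the R SuperLearner package); the two toy candidate learners, the simplex grid search metalearner, and all names are our own assumptions.

```python
import random
import statistics

# Toy data: outcome depends nonlinearly on x, plus noise.
random.seed(1)
X = [random.uniform(0, 10) for _ in range(200)]
Y = [2.0 + 0.5 * x + 0.1 * x * x + random.gauss(0, 1) for x in X]

# Two illustrative candidate learners; each returns a prediction function.
def fit_mean(xs, ys):
    m = statistics.fmean(ys)
    return lambda x: m

def fit_linear(xs, ys):
    # Ordinary least squares for y = a + b*x.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return lambda x, a=my - b * mx, b=b: a + b * x

learners = [fit_mean, fit_linear]

# Step 1: V-fold cross-validated (out-of-fold) predictions.
V = 5
idx = list(range(len(X)))
random.shuffle(idx)
folds = [idx[v::V] for v in range(V)]

Z = [[0.0] * len(learners) for _ in X]  # cross-validated prediction matrix
for fold in folds:
    fold_set = set(fold)
    train = [i for i in idx if i not in fold_set]
    xs, ys = [X[i] for i in train], [Y[i] for i in train]
    for j, fit in enumerate(learners):
        f = fit(xs, ys)
        for i in fold:
            Z[i][j] = f(X[i])

# Step 2: metalearner — convex weights minimizing cross-validated MSE
# (a simple grid search over the two-learner simplex).
def cv_mse(w):
    return statistics.fmean(
        (y - sum(wj * zj for wj, zj in zip(w, z))) ** 2
        for z, y in zip(Z, Y)
    )

grid = [(k / 100, 1 - k / 100) for k in range(101)]
weights = min(grid, key=cv_mse)

# Step 3: refit each candidate on ALL the data and combine with the weights.
fits = [fit(X, Y) for fit in learners]
def super_learner(x):
    return sum(w * f(x) for w, f in zip(weights, fits))
```

By construction the chosen weighted combination can do no worse, in cross-validated mean squared error, than any single candidate in the library; with more candidates one would replace the grid search with a proper constrained optimizer.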


Keywords: Ensemble learning · Machine learning · Super Learner · Stacked generalization · Stacked regression



Acknowledgements

We thank Susan Gruber and Mark J. van der Laan for expert advice.


Funding

This work was supported by NIH Grant Numbers UL1TR001857 and R37AI051164.

Compliance with ethical standards

Conflicts of interest

The authors declare that they have no conflict of interest.



Copyright information

© Springer Science+Business Media B.V., part of Springer Nature 2018

Authors and Affiliations

  1. Department of Epidemiology, University of Pittsburgh, Pittsburgh, USA
  2. Department of Biostatistics and Epidemiology, University of Massachusetts, Amherst, USA
