Frequentist Model Averaging

  • David Fletcher
Chapter
Part of the SpringerBriefs in Statistics book series (BRIEFSSTATIST)

Abstract

We provide an overview of frequentist model averaging. For point estimation, we consider different methods for selecting the model weights, including those based on AIC, bagging, weighted AIC, stacking and focussed methods. For interval estimation, we consider Wald, MATA and percentile-bootstrap intervals. Use of the methods is illustrated by examples involving real data.
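As a minimal illustration of the first of the weight-selection methods mentioned above, the following Python sketch converts AIC values for a set of candidate models into Akaike weights, w_i proportional to exp(-Δ_i/2) with Δ_i the AIC difference from the best model, and forms a model-averaged point estimate. The numerical values are hypothetical and are not taken from the chapter's examples.

```python
import numpy as np

def akaike_weights(aic):
    """Convert a vector of AIC values into Akaike model weights."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()          # AIC differences from the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def model_averaged_estimate(estimates, aic):
    """Weighted average of per-model estimates of the same focus parameter."""
    w = akaike_weights(aic)
    return float(np.dot(w, estimates))

# Hypothetical values: one AIC and one estimate per candidate model
aic_values = [102.3, 100.1, 104.8]
theta_hats = [1.92, 2.10, 1.75]

print(akaike_weights(aic_values))
print(model_averaged_estimate(theta_hats, aic_values))
```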

Copyright information

© The Author(s), under exclusive licence to Springer-Verlag GmbH, DE, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Mathematics and Statistics, University of Otago, Dunedin, New Zealand