Subspace Methods

  • Tom Boot
  • Didier Nibbering
Part of the Advanced Studies in Theoretical and Applied Econometrics book series (ASTA, volume 52)


With increasingly many variables available to macroeconomic forecasters, dimension reduction methods are essential to obtain accurate forecasts. Subspace methods are a recent class of such methods that have been found to yield precise forecasts when applied to macroeconomic and financial data. In this chapter, we review three subspace methods: subset regression, random projection regression, and compressed regression. We review the currently available theoretical results and indicate a number of open research avenues. The methods are illustrated in various settings relevant to macroeconomic forecasters.
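To convey the flavor of these methods, the sketch below implements a compressed-regression forecast on simulated data: the predictors are multiplied by a random Gaussian projection matrix, ordinary least squares is run in the resulting low-dimensional subspace, and the forecast is averaged over independent draws of the projection. The data-generating process, the helper name `compressed_forecast`, and all parameter choices (`T`, `p`, `k`, `n_draws`) are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative DGP (assumption): T observations of p predictors,
# of which only the first few carry signal.
T, p, k = 200, 50, 5
beta = np.zeros(p)
beta[:5] = 0.5
X = rng.standard_normal((T, p))
y = X @ beta + rng.standard_normal(T)


def compressed_forecast(X, y, x_new, k, n_draws=100, rng=rng):
    """Hypothetical helper: average least-squares forecasts over
    random Gaussian projections of the predictors."""
    p = X.shape[1]
    preds = []
    for _ in range(n_draws):
        R = rng.standard_normal((p, k)) / np.sqrt(k)  # random projection matrix
        Z = X @ R                                     # compressed predictors, T x k
        b = np.linalg.lstsq(Z, y, rcond=None)[0]      # OLS in the k-dim subspace
        preds.append(x_new @ R @ b)
    return float(np.mean(preds))


x_new = rng.standard_normal(p)
print(compressed_forecast(X, y, x_new, k))
```

Subset regression follows the same template with `R` replaced by a random selection of `k` columns of the identity matrix; averaging over draws plays the role of forecast combination.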



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Economics, Econometrics and Finance, University of Groningen, Groningen, The Netherlands
  2. Department of Econometrics and Business Statistics, Monash University, Clayton, Australia
