The development of new tests and methods for evaluating time series forecasts and forecasting models remains as important today as it has been for the past 50 years. Paraphrasing a remark made by Sir Clive W. J. Granger (arguably the father of modern-day time series forecasting) at a conference in Svinkloev, Denmark in the 1990s: "OK, the model looks like an interesting extension, but can it forecast better than existing models?" Indeed, the forecast evaluation literature continues to expand, with interesting new tests and methods being developed at a rapid pace. In this chapter, we discuss a selected group of predictive accuracy tests and model selection methods that have been developed in recent years and are now widely used in the forecasting literature. We begin by reviewing several tests for comparing the relative accuracy of point forecasts from competing models. We then broaden the scope of the discussion by introducing density-based predictive accuracy tests. Finally, we note that predictive accuracy is typically assessed in terms of a given loss function, such as mean squared forecast error or mean absolute forecast error. Most tests, including those discussed here, are consequently loss function dependent, and the relative forecast superiority of competing models therefore also depends on the specification of the loss function. In light of this fact, we conclude the chapter by discussing loss-function-robust predictive density accuracy tests that have recently been developed using principles of stochastic dominance.
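The loss function dependence noted above can be illustrated with a minimal sketch. The forecast error series below are purely hypothetical numbers chosen for illustration: one model makes many moderate errors, the other is usually accurate but occasionally misses badly, so the ranking of the two models flips between squared-error and absolute-error loss.

```python
# Hypothetical forecast errors for two competing models (illustrative numbers only).
errors_a = [1.0, 1.0, 1.0, 1.0]   # model A: many moderate errors
errors_b = [0.0, 0.0, 0.0, 2.5]   # model B: mostly accurate, one large miss

def msfe(errors):
    """Mean squared forecast error."""
    return sum(e ** 2 for e in errors) / len(errors)

def mafe(errors):
    """Mean absolute forecast error."""
    return sum(abs(e) for e in errors) / len(errors)

# Under squared-error loss, model A is preferred (1.0 < 1.5625) ...
print("MSFE:", msfe(errors_a), msfe(errors_b))
# ... but under absolute-error loss, model B is preferred (0.625 < 1.0).
print("MAFE:", mafe(errors_a), mafe(errors_b))
```

Because squared-error loss penalizes large errors more heavily than absolute-error loss, the large single miss of model B dominates its MSFE but not its MAFE, which is exactly the kind of reversal that motivates the loss-function-robust tests discussed at the end of the chapter.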