Chapter

Principles of Forecasting

Volume 30 of the series International Series in Operations Research & Management Science, pp. 443–472

Evaluating Forecasting Methods

  • J. Scott Armstrong, The Wharton School, University of Pennsylvania


Abstract

Ideally, forecasting methods should be evaluated in the situations for which they will be used. Underlying the evaluation procedure is the need to test methods against reasonable alternatives. Evaluation consists of four steps: testing assumptions, testing data and methods, replicating outputs, and assessing outputs. Most principles for testing forecasting methods are based on commonly accepted methodological procedures, such as prespecifying criteria or obtaining a large sample of forecast errors. However, forecasters often violate these principles, even in academic studies. Some principles may be surprising: do not use R-square, do not use Mean Square Error, and do not use the within-sample fit of the model to select the most accurate time-series model. A checklist of 32 principles is provided to help in systematically evaluating forecasting methods.
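As a concrete illustration of the last principle, the sketch below compares two simple one-step-ahead methods by their out-of-sample error under successive updating (rolling-origin forecasts) rather than by within-sample fit. It is a minimal example, not code from the chapter: the data series, the holdout size, the smoothing constant alpha=0.3, and the function names are all illustrative assumptions.

```python
# Illustrative sketch: out-of-sample evaluation with successive updating.
# The series, split point, and alpha are arbitrary choices for demonstration.

def ses_forecast(history, alpha=0.3):
    """One-step-ahead forecast from simple exponential smoothing."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def naive_forecast(history):
    """One-step-ahead naive (random walk) forecast: the last observation."""
    return history[-1]

def rolling_mae(series, forecast_fn, n_test):
    """Mean absolute error over successively updated one-step forecasts."""
    errors = []
    for t in range(len(series) - n_test, len(series)):
        fc = forecast_fn(series[:t])  # use only data before the forecast origin
        errors.append(abs(series[t] - fc))
    return sum(errors) / len(errors)

series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
          115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140]
n_test = 8  # hold out the last 8 observations for evaluation

for name, fn in [("naive", naive_forecast), ("SES", ses_forecast)]:
    print(f"{name}: out-of-sample MAE = {rolling_mae(series, fn, n_test):.2f}")
```

MAE is used here rather than Mean Square Error, in line with the abstract's caution against MSE, and every forecast is computed from data strictly before its origin, so the comparison between methods never rests on within-sample fit.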

Keywords

Backcasting, benchmarks, competing hypotheses, concurrent validity, construct validity, disconfirming evidence, domain knowledge, error measures, face validity, fit, jackknife validation, M-Competitions, outliers, predictive validity, replication, statistical significance, successive updating