Forecasting is concerned with making statements about the as-yet unknown. People derive forecasts in many ways. This entry is concerned primarily with procedures that have performed well in empirical studies comparing the accuracy of alternative methods.
Evidence about forecasting procedures has been codified as condition-action statements, rules, guidelines, or, as we refer to them, principles. At the time of writing, there are 140 principles. Think of them as a safety checklist for a commercial airliner: if the forecast is important, it is important to check every relevant item on the list. Most of these principles were derived as generalized findings from empirical comparisons of alternative forecasting methods. Interestingly, the empirical evidence sometimes conflicts with common beliefs about how to forecast.
Primarily due to the strong emphasis placed on empirical comparisons of alternative methods, researchers have made many...