International Encyclopedia of Statistical Science

2011 Edition
Editor: Miodrag Lovric

Forecasting Principles

  • Kesten C. Green
  • Andreas Graefe
  • J. Scott Armstrong
Reference work entry
DOI: https://doi.org/10.1007/978-3-642-04898-2_257

Introduction

Forecasting is concerned with making statements about the as yet unknown. There are many ways that people go about deriving forecasts. This entry is concerned primarily with procedures that have performed well in empirical studies that contrast the accuracy of alternative methods.
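
To make the idea of such comparisons concrete, here is a minimal sketch (in Python; it is not taken from the entry itself) of how the accuracy of alternative extrapolation methods might be contrasted: each method is fitted to the early part of a series and its errors are measured on held-out observations. The series, the six-period holdout, and the smoothing constant are illustrative assumptions only.

```python
def naive_forecast(history, horizon):
    """Repeat the last observed value (the 'no-change' benchmark)."""
    return [history[-1]] * horizon

def ses_forecast(history, horizon, alpha=0.3):
    """Simple exponential smoothing; issue a flat forecast from the final smoothed level."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon

def mae(actual, predicted):
    """Mean absolute error over the holdout period."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative monthly series; the last 6 observations are held out for testing.
series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
          115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140]
train, holdout = series[:-6], series[-6:]

for name, method in [("naive", naive_forecast), ("exp. smoothing", ses_forecast)]:
    print(f"{name:>15}: MAE = {mae(holdout, method(train, len(holdout))):.1f}")
```

In practice, such comparisons are run over many series, horizons, and error measures, as in the M-competitions listed under further reading, rather than over a single short series.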

Evidence about forecasting procedures has been codified as condition-action statements, rules, guidelines, or, as we refer to them, principles. At the time of writing there are 140 principles. Think of them as a safety checklist for a commercial airliner: if the forecast is important, it is important to check all relevant items on the list. Most of these principles were derived as generalized findings from empirical comparisons of alternative forecasting methods. Interestingly, the empirical evidence sometimes conflicts with common beliefs about how to forecast.
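
As an illustration of the form such a principle can take, one well-documented example from the combining literature in the reading list below (Armstrong 2001c) is to average forecasts from several reasonable methods rather than rely on any single one. The sketch below shows only the mechanics; the component forecasts are invented numbers.

```python
# Hypothetical forecasts of the same quantity from three different methods;
# the values are invented purely to show equal-weight combining.
component_forecasts = {"extrapolation": 102.0, "expert judgment": 96.5, "causal model": 110.0}

combined = sum(component_forecasts.values()) / len(component_forecasts)
print(f"combined forecast: {combined:.1f}")  # 102.8
```

Equal weights are a common default when there is no strong prior evidence favoring one component method over the others.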

Primarily due to the strong emphasis placed on empirical comparisons of alternative methods, researchers have made many...

References and Further Reading

  1. Armstrong JS (1985) Long-range forecasting. Wiley, New York
  2. Armstrong JS (2006) Findings from evidence-based forecasting: methods for reducing forecast error. Int J Forecasting 22:583–598
  3. Armstrong JS (2001a) Judgmental bootstrapping: inferring experts’ rules for forecasting. In: Armstrong JS (ed) Principles of forecasting. Kluwer, Boston, pp 171–192
  4. Armstrong JS (2001b) Extrapolation for time-series and cross-sectional data. In: Armstrong JS (ed) Principles of forecasting. Kluwer, Boston, pp 217–243
  5. Armstrong JS (2001c) Combining forecasts. In: Armstrong JS (ed) Principles of forecasting. Kluwer, Boston, pp 417–440
  6. Collopy F, Armstrong JS (1992) Rule-based forecasting: development and validation of an expert systems approach to combining time-series extrapolations. Manage Sci 38:1394–1414
  7. Collopy F, Adya M, Armstrong JS (2001) Expert systems for forecasting. In: Armstrong JS (ed) Principles of forecasting. Kluwer, Boston, pp 285–300
  8. Green KC (2005) Game theory, simulated interaction, and unaided judgement for forecasting decisions in conflicts: further evidence. Int J Forecasting 21:463–472
  9. Green KC, Armstrong JS (2007) Structured analogies for forecasting. Int J Forecasting 23:365–376
  10. Green KC, Armstrong JS, Graefe A (2007) Methods to elicit forecasts from groups: Delphi and prediction markets compared. Foresight Int J Appl Forecasting 8:17–20
  11. MacGregor DG (2001) Decomposition in judgmental forecasting and estimation. In: Armstrong JS (ed) Principles of forecasting. Kluwer, Boston, pp 107–124
  12. Makridakis S, Hibon M (2000) The M3-competition: results, conclusions and implications. Int J Forecasting 16:451–476
  13. Makridakis S, Andersen A, Carbone R, Fildes R, Hibon M, Lewandowski R, Newton J, Parzen E, Winkler R (1982) The accuracy of extrapolation (time series) methods: results of a forecasting competition. J Forecasting 1:111–153
  14. Rowe G, Wright G (2001) Expert opinions in forecasting: the role of the Delphi technique. In: Armstrong JS (ed) Principles of forecasting. Kluwer, Boston, pp 125–144

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Kesten C. Green, University of South Australia, Adelaide, Australia
  • Andreas Graefe, Karlsruhe Institute of Technology, Karlsruhe, Germany
  • J. Scott Armstrong, University of Pennsylvania, Philadelphia, USA