Selecting Forecasting Methods

  • J. Scott Armstrong
Part of the International Series in Operations Research & Management Science book series (ISOR, volume 30)

Abstract

Six ways of selecting forecasting methods are described: Convenience, “what’s easy,” is inexpensive but risky. Market popularity, “what others do,” sounds appealing but is unlikely to be of value because popularity and success may not be related and because it overlooks some methods. Structured judgment, “what experts advise,” which is to rate methods against prespecified criteria, is promising. Statistical criteria, “what should work,” are widely used and valuable, but risky if applied narrowly. Relative track records, “what has worked in this situation,” are expensive because they depend on conducting evaluation studies. Guidelines from prior research, “what works in this type of situation,” rely on published research and offer a low-cost, effective approach to selection.

Using a systematic review of prior research, I developed a flow chart to guide forecasters in selecting among ten forecasting methods. Some key findings: Given enough data, quantitative methods are more accurate than judgmental methods. When large changes are expected, causal methods are more accurate than naive methods. Simple methods are preferable to complex methods; they are easier to understand, less expensive, and seldom less accurate. To select a judgmental method, determine whether there are large changes, frequent forecasts, conflicts among decision makers, and policy considerations. To select a quantitative method, consider the level of knowledge about relationships, the amount of change involved, the type of data, the need for policy analysis, and the extent of domain knowledge. When selection is difficult, combine forecasts from different methods.
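The selection logic summarized in the abstract can be sketched as a small decision function. This is only a hypothetical illustration of the top-level branches (enough data, expected change, knowledge of relationships); the chapter’s actual flow chart distinguishes ten methods and many more conditions:

```python
def select_forecasting_method(enough_data: bool,
                              large_changes_expected: bool,
                              knowledge_of_relationships: bool) -> str:
    """Illustrative sketch of the selection guidelines; the chapter's
    flow chart covers ten methods and additional criteria."""
    if not enough_data:
        # Without sufficient quantitative data, fall back on judgment,
        # preferably structured against prespecified criteria.
        return "judgmental method (e.g., structured judgment)"
    if large_changes_expected and knowledge_of_relationships:
        # Causal methods outperform naive methods when large changes
        # are expected and relationships are reasonably well understood.
        return "causal method (e.g., econometric model)"
    # Otherwise prefer a simple method: easier to understand, cheaper,
    # and seldom less accurate than a complex one.
    return "simple extrapolation"

# When selection remains difficult, combine forecasts from different methods.
choice = select_forecasting_method(enough_data=True,
                                   large_changes_expected=True,
                                   knowledge_of_relationships=True)
print(choice)
```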

Keywords

Accuracy; analogies; combined forecasts; conjoint analysis; cross-sectional data; econometric methods; experiments; expert systems; extrapolation; intentions; judgmental bootstrapping; policy analysis; role playing; rule-based forecasting; structured judgment; track records; time-series data

Copyright information

© Springer Science+Business Media New York 2001

Authors and Affiliations

  • J. Scott Armstrong, The Wharton School, University of Pennsylvania, USA
