Improving Judgment in Forecasting

  • Nigel Harvey

Abstract

Principles designed to improve judgment in forecasting aim to minimize inconsistency and bias at different stages of the forecasting process (formulation of the forecasting problem, choice of method, application of method, comparison and combination of forecasts, assessment of uncertainty in forecasts, adjustment of forecasts, evaluation of forecasts). The seven principles discussed concern the value of checklists, the importance of establishing agreed criteria for selecting forecast methods, retention and use of forecast records to obtain feedback, use of graphical rather than tabular data displays, the advantages of fitting lines through graphical displays when making forecasts, the advisability of using multiple methods to assess uncertainty in forecasts, and the need to ensure that people assessing the chances of a plan’s success are different from those who develop and implement it.
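To make two of these principles concrete, the minimal sketch below (not part of the chapter itself) shows how a forecaster might retain a record of past forecasts to obtain outcome feedback and fit a trend line through a plotted series before extrapolating judgmentally. The sample values, the record layout, and the choice of mean absolute percentage error as the feedback measure are illustrative assumptions, not prescriptions from the chapter.

```python
# Sketch of two principles: keep forecast records to obtain feedback,
# and fit a line through a graphical display before extrapolating.
# Data values and the error measure are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical forecast record: (period, judgmental forecast, actual outcome)
records = [
    (1, 102.0, 110.0),
    (2, 115.0, 108.0),
    (3, 112.0, 121.0),
]

# Feedback from the retained records: mean absolute percentage error
errors = [abs(forecast - actual) / actual for _, forecast, actual in records]
print(f"MAPE over {len(records)} forecasts: {100 * np.mean(errors):.1f}%")

# Graphical display with a fitted least-squares trend line as an anchor
periods = np.array([t for t, _, _ in records])
actuals = np.array([a for _, _, a in records])
slope, intercept = np.polyfit(periods, actuals, 1)

plt.plot(periods, actuals, "o-", label="actuals")
plt.plot(periods, slope * periods + intercept, "--", label="fitted trend")
plt.xlabel("period")
plt.ylabel("value")
plt.legend()
plt.show()
```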

Key words

Cognitive biases, confidence, forecasting, heuristics, judgment


Copyright information

© Springer Science+Business Media New York 2001

Authors and Affiliations

  • Nigel Harvey
    Department of Psychology, University College London, UK
