
Journal of Management Control, 22:79

The impact of biases on simulation-based risk aggregation: modeling cognitive influences on risk assessment

  • Matthias Meyer
  • Cathérine Grisar
  • Felix Kuhnert
Original Paper

Abstract

This paper develops a systematic approach to quantifying the effect of judgmental biases on aggregate risk measures. Starting from the standard risk management process, we identify the areas that require expert judgment as input when aggregating risks into measures such as Earnings at Risk. We specify three possible gateways for biases and draw on several psychological theories to quantify how expert judgments deviate from objective probabilities. The impact of these cognitive biases on the aggregate risk measure is investigated via Monte Carlo simulation experiments. A systematic experimental design allows us to determine the size of both the average effects and the possible interaction effects of the different biases. The results show that aggregate risk is systematically underestimated when it is based on biased subjective judgments. Moreover, the existence of interaction effects indicates potential problems with simple debiasing strategies.
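The core mechanism is simple enough to sketch: if an overconfident expert supplies a too-narrow probability distribution for a risk factor, any tail measure computed from Monte Carlo samples of that distribution will understate the true risk. Below is a minimal Python sketch of this effect. It is not the authors' simulation model: the normal earnings shock, the shrinkage factor BIAS_FACTOR, and the earnings_at_risk helper are illustrative assumptions chosen only to show the direction of the distortion.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000          # number of Monte Carlo trials
TRUE_SD = 10.0       # hypothetical "true" volatility of an earnings shock
BIAS_FACTOR = 0.7    # illustrative overconfidence: subjective spread is too narrow

def earnings_at_risk(samples, alpha=0.05):
    # Earnings at Risk as the negated alpha-quantile of simulated earnings.
    return -np.quantile(samples, alpha)

# Objective distribution vs. the expert's (overconfident) subjective one.
objective = rng.normal(0.0, TRUE_SD, N)
subjective = rng.normal(0.0, BIAS_FACTOR * TRUE_SD, N)

print(f"EaR from objective inputs:  {earnings_at_risk(objective):.2f}")
print(f"EaR from subjective inputs: {earnings_at_risk(subjective):.2f}")
```

With these assumptions, the biased run reports a markedly lower Earnings at Risk than the objective run, mirroring the paper's qualitative finding that risk aggregation based on biased judgments systematically understates aggregate risk.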

Keywords

Cognitive biases · Experimental design · Expert judgment · Risk management · Simulation

Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • Matthias Meyer (1)
  • Cathérine Grisar (1)
  • Felix Kuhnert (2)

  1. Institute of Management Control and Accounting, Hamburg University of Technology, Hamburg, Germany
  2. Chair of Management Accounting and Control, WHU–Otto Beisheim School of Management, Vallendar, Germany
