Abstract

Risk intelligence is the ability to estimate probabilities accurately. In this context, accuracy does not imply the existence of objective probabilities; on the contrary, risk intelligence presupposes a subjective interpretation of probability. Risk intelligence can be measured by calibration testing. This involves collecting many probability estimates of statements whose correct answer is known, or will shortly be known, to the experimenter, and plotting the proportion of correct answers against the subjective estimates. Between 1960 and 1980, psychologists measured the calibration of many specific groups, such as medics and weather forecasters, but did not gather extensive data on the calibration of the general public. This chapter presents new data from calibration tests of over 6,000 people of all ages and from a wide variety of countries. High levels of risk intelligence are rare. Fifty years of research in the psychology of judgment and decision-making shows that most people are not very good at thinking clearly about risky choices. They often disregard probability entirely, and even when they do take probability into account, they make many errors when estimating it. However, some groups of people have unusually high levels of risk intelligence. Lessons can be drawn from these groups to develop new tools to enhance risk intelligence in others. First, such tools should accustom users to specifying probability estimates in numerical terms. Second, they should focus, if possible, on a relatively narrow area of expertise. Third, they should provide the user with prompt and well-defined feedback. Regular calibration testing might fulfill all three of these requirements, though training assessors by giving them feedback about their calibration has shown mixed results. More research is needed before a definitive verdict can be reached on the value of this method.
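
As a concrete illustration of the scoring behind a calibration test, the Python sketch below bins a set of probability estimates, compares the stated confidence in each bin with the observed proportion of correct answers, and computes a Brier score (the mean squared difference between each stated probability and the 0/1 outcome). The data layout, function names, and binning scheme are illustrative assumptions, not details taken from the chapter.

```python
# Minimal sketch of calibration scoring (illustrative; not the chapter's code).
from collections import defaultdict


def brier_score(estimates, outcomes):
    """Mean squared difference between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(estimates, outcomes)) / len(estimates)


def calibration_curve(estimates, outcomes, precision=1):
    """Group answers by rounded confidence level and return
    (confidence level, observed proportion correct, number of answers)."""
    bins = defaultdict(list)
    for p, o in zip(estimates, outcomes):
        bins[round(p, precision)].append(o)
    return [(level, sum(hits) / len(hits), len(hits))
            for level, hits in sorted(bins.items())]


# Toy data: each estimate is a stated probability that the answer is correct;
# each outcome is 1 if the answer was in fact correct, 0 otherwise.
estimates = [0.9, 0.9, 0.7, 0.7, 0.7, 0.5, 0.5, 0.3]
outcomes = [1, 1, 1, 1, 0, 1, 0, 0]

print(brier_score(estimates, outcomes))        # ≈ 0.16 (lower is better)
print(calibration_curve(estimates, outcomes))  # e.g. the 0.7 bin is right ~2/3 of the time
```

On a calibration plot, a well-calibrated respondent's points lie close to the diagonal: answers given with 70% confidence are correct roughly 70% of the time, with systematic deviations indicating over- or underconfidence.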

Keywords

Probability Estimate · Hedge Fund · Educational Achievement · Calibration Test · Brier Score

Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

School of Medicine, University College Cork, Cork, Ireland
