Cognitive bias, situationism, and virtue reliabilism

Abstract

Mark Alfano claims that the heuristics and biases literature supports inferential cognitive situationism, i.e., the view that most of our inferential beliefs are arrived at and retained by means of unreliable heuristics rather than intellectual virtues. If true, this would present virtue reliabilists with an unpleasant choice: they can either accept inferential skepticism, or modify or abandon reliabilism. Alfano thinks that the latter course of action is most plausible, and several reliabilists seem to agree. I argue that this is not the case. If situationism is true, then inferential non-skepticism is no more plausible than reliabilism. But inferential cognitive situationism is false. The heuristic-based inferences that facilitate successful perception and communication have proven remarkably accurate, and even the psychological research on inductive reasoning does not support Alfano’s situationism. More generally, negative assessments of human reasoning tend to ignore the fact that the research on cognitive biases focuses primarily on the performance of individuals in isolation. Several studies suggest that we reason much more effectively when in critical dialogue with others, which highlights the fact that our epistemic performance depends not only on the inner workings of our cognitive processes, but on the environments in which they operate.

Notes

  1.

    Alfano notes that Sosa (2017) also reacts to the situationist threat by lowering the bar for knowledge, while remaining committed to reliabilism (Alfano 2017, n. 11).

  2.

    Alfano then proceeds to voice his own reservations concerning this reaction to epistemic situationism.

  3.

    Different forms of radical skepticism concern different domains: while Pyrrhonian skeptics deny the possibility of any knowledge whatsoever, Cartesian skeptics deny knowledge of the external world.

  4.

    This formulation of inferential skepticism is highly implausible. Given a weak principle of epistemic closure, we can be said to know a tremendous amount inferentially. I know, for example, that because it is the year 2018, it is not the year 2017, 2016, 2015, etc. A more plausible formulation of inferential skepticism, and one in keeping with Alfano’s concerns about our use of heuristics, would exclude these sorts of deductively closed trivial truths. This is a much weaker doctrine than any form of radical skepticism. I am thankful to Jon Marsh for pointing this out to me.

  5.

    Alternatively, one might want to classify this belief as the result of intuition rather than inference. This re-classification does nothing to address my objection, however, since the heuristics and biases literature that Alfano relies on to make his case for ICS may be used to call the reliability of this intuition into doubt, as I do in the remainder of this paragraph.

  6.

    For an important collection of papers on stereotype accuracy, see Lee et al. (1995).

  7.

    For a classic formulation of this finding, see Meehl (1954). For a synopsis of more recent evidence, together with a philosophical analysis of its implications for epistemology, see Bishop and Trout (2002).

  8.

    This move parallels Gigerenzer’s move to an ecological conception of rationality, which “…refers to the study of how cognitive strategies exploit the representation and structure of information in the environment to make reasonable judgments and decisions” (Gigerenzer 2000, p. 57).

  9.

    For a clear delineation of the reckoning and response theories of inference, see Siegel (2017, Ch. 5).

  10.

    In fairness to Alfano and Gould, Kahneman and Tversky themselves drew this conclusion from their early work:

    In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors (Kahneman and Tversky 1973, p. 237).

  11.

    This line of argument constitutes what Carter and Pritchard (2017) call bias-driven skepticism. They find it not only in Alfano’s work on situationism, but in Saul’s claim that “…what we know about implicit biases shows us that we have very good reason to believe that we cannot properly trust our knowledge-seeking faculties” (2013, p. 243).

  12.

    Two cautionary points are worth emphasizing. First, it remains to be empirically established that confirmation bias is sufficiently ubiquitous to pose a threat to our inferential cognition generally. Second, confirmation bias has a possible upside: while it sometimes prevents us from abandoning false beliefs, it can also decrease our chances of abandoning true beliefs. Thus, the existence of confirmation bias can be used to bolster ICS only if there are independent grounds for thinking that our inferential processes produce a significant number of false beliefs. Even if both of these claims can be established, however, there are reasons to be dubious of the situationist’s pessimistic conclusion, as I will argue below.

  13.

    ‘Superforecasters’ is Tetlock’s term for individuals who outperform the vast majority of forecasters. Tetlock found similar results more generally, i.e. with regular forecasters as well: “At the end of the [first] year, the results were unequivocal: on average, teams were 23% more accurate than individuals” (Tetlock and Gardner 2015, p. 201).

  14.

    They are also what Morton (2012) calls, more broadly, paradoxical virtues.

References

  1. Alfano, M. (2012). Extending the situationist challenge to responsibilist virtue epistemology. Philosophical Quarterly, 62(247), 223–249.

  2. Alfano, M. (2014). Extending the situationist challenge to reliabilism about inference. In A. Fairweather (Ed.), Virtue epistemology naturalized (pp. 103–122). Dordrecht: Springer.

  3. Alfano, M. (2017). Epistemic situationism: An extended prolepsis. In A. Fairweather & M. Alfano (Eds.), Epistemic situationism (pp. 44–61). Oxford: Oxford University Press.

  4. Baron, R. S., Hoppe, S. I., Kao, C. F., Brunsman, B., Linneweh, B., & Rogers, D. (1996). Social corroboration and opinion extremity. Journal of Experimental Social Psychology, 32, 537–560.

  5. Bishop, M., & Trout, J. D. (2002). 50 years of successful predictive modeling should be enough: Lessons for philosophy of science. Philosophy of Science (PSA 2000 Symposium Papers), 69, 197–208.

  6. Carroll, J., Winer, R., Coates, D., Galegher, J., & Alibrio, J. (1988). Evaluation, diagnosis, and prediction in parole decision-making. Law and Society Review, 17, 199–228.

  7. Carter, J. A., & Pritchard, D. (2017). Cognitive bias, scepticism and understanding. In S. R. Grimm, C. Baumberger, & S. Ammon (Eds.), Explaining understanding: New perspectives from epistemology and philosophy of science (pp. 272–292). New York: Routledge.

  8. Davidson, D. (1986). A coherence theory of truth and knowledge. In E. LePore (Ed.), Truth and interpretation: Perspectives on the philosophy of Donald Davidson (pp. 307–319). Oxford: Basil Blackwell.

  9. Fairweather, A., & Montemayor, C. (2017). Knowledge, dexterity, and attention: A theory of epistemic agency. Cambridge: Cambridge University Press.

  10. Fischhoff, B. (1982). Debiasing. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 422–444). Cambridge: Cambridge University Press.

  11. Gigerenzer, G. (2000). Adaptive thinking: Rationality in the real world. Oxford: Oxford University Press.

  12. Gigerenzer, G. (2007). Gut feelings: The intelligence of the unconscious. London: Penguin Books.

  13. Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684–704.

  14. Gould, S. J. (1992). Bully for brontosaurus: Further reflections in natural history. New York: Penguin Books.

  15. Hertwig, R., & Gigerenzer, G. (1999). The “conjunction fallacy” revisited: How intelligent inferences look like reasoning errors. Journal of Behavioral Decision Making, 12, 275–305.

  16. Hoffrage, U. (2004). Overconfidence. In R. Pohl (Ed.), Cognitive illusions: A handbook on fallacies and biases in thinking, judgment and memory (pp. 235–254). Hove: Psychology Press.

  17. Kahneman, D. (2011). Thinking, fast and slow. London: Penguin Books.

  18. Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237–251.

  19. Kenyon, T., & Beaulac, G. (2014). Critical thinking education and debiasing. Informal Logic, 34(4), 341–363.

  20. Lee, Y., Jussim, L. J., & McCauley, C. R. (1995). Stereotype accuracy: Toward appreciating group differences. Washington, D.C.: American Psychological Association.

  21. Lehman, D. R., Lempert, R. O., & Nisbett, R. (1988). The effects of graduate training on reasoning: Formal discipline and thinking about everyday-life events. American Psychologist, 43(6), 431–442.

  22. Meehl, P. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Minneapolis: University of Minnesota Press.

  23. Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74.

  24. Mercier, H., & Sperber, D. (2017). The enigma of reason. Cambridge: Harvard University Press.

  25. Morton, A. (2012). Bounded thinking: Intellectual virtues for limited agents. Oxford: Oxford University Press.

  26. Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175–220.

  27. Nisbett, R., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. Science, 238(4827), 625–631.

  28. Nisbett, R., Krantz, D. H., Jepson, C., & Kunda, Z. (2002). The use of statistical heuristics in everyday inductive reasoning. In T. Gilovich, D. W. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 510–533). Cambridge: Cambridge University Press.

  29. O’Brien, B. (2009). Prime suspect: An examination of factors that aggravate and counteract confirmation bias in criminal investigations. Psychology, Public Policy, and Law, 15(4), 315–334.

  30. Pinker, S. (1997). How the mind works. New York: W. W. Norton & Company.

  31. Pronin, E., Lin, D., & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28, 369–381.

  32. Saul, J. (2013). Scepticism and implicit bias. Disputatio, 5(37), 243–263.

  33. Sedikides, C., & Gregg, A. P. (2007). The why’s the limit: Curtailing self-enhancement with explanatory introspection. Journal of Personality, 75, 783–824.

  34. Siegel, S. (2017). The rationality of perception. Oxford: Oxford University Press.

  35. Smart, P. R. (2018). Mandevillian intelligence. Synthese, 195, 4169–4200.

  36. Sosa, E. (2017). Virtue theory against situationism. In A. Fairweather & M. Alfano (Eds.), Epistemic situationism (pp. 116–134). Oxford: Oxford University Press.

  37. Stasser, G., & Titus, W. (2003). Hidden profiles: A brief history. Psychological Inquiry, 14, 304–313.

  38. Tetlock, P. (2005). Expert political judgment. Princeton: Princeton University Press.

  39. Tetlock, P., & Gardner, D. (2015). Superforecasting: The art and science of prediction. Toronto: Signal.

  40. Turri, J. (2017). Epistemic situationism and cognitive ability. In A. Fairweather & M. Alfano (Eds.), Epistemic situationism (pp. 158–167). Oxford: Oxford University Press.

  41. Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.

  42. Tversky, A., & Kahneman, D. (2002). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. In T. Gilovich, D. W. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 19–48). Cambridge: Cambridge University Press.

  43. Wilson, T. D., Centerbar, D. B., & Brekke, N. (2002). Mental contamination and the debiasing problem. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 185–200). Cambridge: Cambridge University Press.

Acknowledgements

I am grateful to an audience at the University of Glasgow for their discussion of an earlier version of this paper. I owe a special note of thanks to Jon Marsh and two of this journal’s referees for their insightful, detailed, and constructive comments.

Author information

Corresponding author

Correspondence to Steven Bland.

About this article

Cite this article

Bland, S. Cognitive bias, situationism, and virtue reliabilism. Synthese 198, 471–490 (2021). https://doi.org/10.1007/s11229-018-02031-6

Keywords

  • Cognitive bias
  • Virtue epistemology
  • Skepticism
  • Epistemic situationism