Philosophical Studies, Volume 140, Issue 1, pp 47–63

Seeing and believing: perception, belief formation and the divided mind


Abstract

On many of the idealized models of human cognition and behavior in use by philosophers, agents are represented as having a single corpus of beliefs which (a) is consistent and deductively closed, and (b) guides all of their (rational, deliberate, intentional) actions all the time. In graded-belief frameworks, agents are represented as having a single, coherent distribution of credences, which guides all of their (rational, deliberate, intentional) actions all of the time. It’s clear that actual human beings don’t live up to this idealization. The systems of belief that we in fact have are fragmented. Rather than having a single system of beliefs that guides all of our behavior all of the time, we have a number of distinct, compartmentalized systems of belief, different ones of which drive different aspects of our behavior in different contexts. It’s tempting to think that, while of course people are fragmented, it would be better (from the perspective of rationality) if they weren’t, and the only reason why our fragmentation is excusable is that we have limited cognitive resources, which prevents us from holding too much information before our minds at a time. Give us enough additional processing capacity, and there’d be no justification for any continued fragmentation. I argue that this is not so. There are good reasons to be fragmented rather than unified, independent of the limitations on our available processing power. In particular, there are ways our belief-forming mechanisms—including our perceptual systems—could be constructed that would make it better to be fragmented than to be unified. And there are reasons to think that some of our belief-forming mechanisms really are constructed that way.

Keywords

Belief · Perception · Rationality · Fragmentation


Acknowledgements

Thanks to Alan Hájek, Martin Davies, John Campbell, and audiences at Victoria University of Wellington, the Australian National University, and the 2007 Pacific APA for helpful discussion, comments, questions, and objections, and in particular to Adam Elga for the ongoing series of conversations that has shaped most of my thinking about these questions.


Copyright information

© Springer Science+Business Media B.V. 2008

Authors and Affiliations

  1. Andy Egan, Department of Philosophy, University of Michigan, Ann Arbor, USA
