
Knowledge, belief, and egocentric bias

Published in Synthese.

Abstract

Changes in conversationally salient error possibilities, and/or changes in stakes, appear to generate shifts in our judgments regarding the correct application of ‘know’. One prominent response to these shifts is to argue that they arise due to shifts in belief and do not pose a problem for traditional semantic or metaphysical accounts of knowledge (or ‘know’). Such doxastic proposals face familiar difficulties with cases where knowledge is ascribed to subjects in different practical or conversational situations from the speaker. Jennifer Nagel has recently offered an ingenious response to these problematic cases—appeal to egocentric bias. Appeal to this kind of bias also has the potential for interesting application in other philosophical arenas, including discussions of epistemic modals. In this paper, I draw on relevant empirical literature to clarify the nature of egocentric bias as it manifests in children and adults, and argue that appeal to egocentric bias is ill-suited to respond to the problem cases for doxastic accounts. Our discussion also has significant implications for the prospects of applying egocentric bias in other arenas.

Notes

  1. See e.g. Vogel (1990: pp. 15–16), Cohen (1999: p. 58), DeRose (2009: pp. 1–5) and Lewis (1996). There is now a substantial empirical literature investigating the extent to which these shifts in judgment are exhibited by ordinary speakers. There seems to be some good evidence that shifts in the salience of error possibilities generate shifts in ordinary speakers’ judgments regarding ‘know’, but the situation is arguably less clear in regard to stakes effects. See e.g. Buckwalter (2014), Schaffer and Knobe (2012) and Buckwalter and Schaffer (2014) for some relevant empirical work and discussion.

  2. Prominent contextualist accounts include Cohen (1999), DeRose (1995, 2009), Lewis (1996), Blome-Tillmann (2014) and Ichikawa (2017). Impurist (or ‘anti-intellectualist’) accounts include Hawthorne (2004: ch. 4), Stanley (2005), Fantl and McGrath (2009), Weatherson (2005, 2017).

  3. It is possible to pursue a doxastic approach to explaining shifts in our judgments about ‘know’ that is metaphysically or semantically non-conservative (see e.g. Weatherson 2005). In the present paper, I shall focus on conservative (i.e. classical invariantist) attempts to pursue a doxastic approach, but our discussion plausibly has significance for some non-conservative doxastic approaches as well.

  4. For some relevant empirical work on these kinds of judgments, see e.g. Schaffer and Knobe (2012) and Buckwalter (2014). Note that to generate the reported judgments it may be necessary to amend Table B to ensure that the possibility of tricky lighting becomes sufficiently salient (see Schaffer and Knobe 2012: pp. 19–22). If necessary, the discussion to follow could be recast in terms of such amended cases.

  5. For characterisation of the relevant positions in the debate, see e.g. DeRose (2009: pp. 1–49) or MacFarlane (2014: ch. 7). Note that classical invariantists also reject the semantic claim associated with relativist or perspectival accounts of ‘know’—viz. the claim that the contents of ‘knowledge’-ascribing and denying sentences are only true relative to some additional ‘epistemic standards’ parameter (see e.g. MacFarlane 2005, 2014: ch. 7).

  6. These three broad approaches can be found in Nagel (2010b), though Nagel does not take pains to distinguish them. Bach (2005: §V) does not indicate the mechanism via which a subject like John might lose his belief, merely remarking (in regard to a similar case) that the subject’s belief “may be shaken somewhat”.

  7. Note that this particular account of how John loses his belief may ultimately require accepting impurism, and thus be unacceptable to those seeking to defend classical invariantism. See Weatherson (2005) and Nagel (2010b: pp. 417–418) for some relevant discussion.

  8. See Nagel (2010b: esp. 416–421, 2011: pp. 13–15, 2010a: p. 303) for development of ideas along similar lines, and discussion of relevant psychological literature.

  9. It may be natural to pursue a similar proposal if one thinks that the doxastic requirement on knowledge is not belief, but is rather ‘being sure’ or ‘being (subjectively) certain’ (see e.g. DeRose 2009: p. 186n).

  10. Nagel appears to show preference for a view along the lines of ‘removes psychological conviction’ (see e.g. Nagel 2010b: p. 418). One possible advantage of this proposal is that it may be more naturally suited to explaining our judgments regarding just how much additional evidence John requires in order to know that the table is red (see esp. Nagel 2011: pp. 13–15; also 2010a: p. 303). Issues surrounding how much additional evidence subjects like John need to possess in order to know will return later (Sects. 2, 6).

  11. Nagel (2010b: p. 420n) offers a slightly different suggestion: that the subject’s belief may appear to fall short of knowledge because it will seem to lack the epistemic virtues necessary for knowledge. The differences between this proposal and the one in the main text are not important for the discussion to follow.

  12. Sripada and Stanley (2012: pp. 18–23) criticise Nagel’s strategy for handling stipulated-belief cases as either implausible or committed to impurism (and so unsuitable for preserving classical invariantism). Nagel addresses some concerns along these lines in Nagel (2010b: pp. 427–428; see also 2011: pp. 13–15). Shin (2014: pp. 173–177) also raises some concerns for Nagel’s proposal.

  13. As in the case of Table B, it may be necessary to amend the case to ensure that the possibility of tricky lighting becomes sufficiently salient in order to generate the judgment that John’s denial of knowledge to Alice seems true (see fn. 4). (Similar remarks may also apply to Table (Modal Contrast).) Such amendments are not important to the discussion that follows.

  14. Note that Mona’s utterance may seem true (and not strange) if we suppose that had John’s son not raised the possibility of tricky lighting, John would have looked up at the lighting, and so been able to confirm that the lighting conditions are normal. But I take it that this is not the natural reading of the case. The natural reading is that if his son had not raised the possibility of tricky lighting, John would have had just the same grounds to believe that the table is red that he has in the actual case (roughly, how the table looks), and so would still lack confirmation that the lighting conditions are normal.

  15. It should be noted that the range of problem cases extends beyond examples like Table (3rdPerson) and Table (Modal Contrast). Other relevant examples include Temporal Contrast cases (see e.g. Stanley 2005: p. 106), Third-Person Contrast cases (see e.g. Neta 2007: pp. 182–183), and Retraction cases (see e.g. MacFarlane 2005: §2.3). Our discussion could just as easily have focused on these examples.

  16. I take it that we have some intuitive grasp on what is needed for a subject like John to ‘rule out’ that the table is white but illuminated by red lights. Ruling out that possibility seems to require something like the evidence acquired by explicitly looking up and checking the lighting, and something over and above mere statistical evidence for thinking that the relevant tricky-lighting scenario is unlikely. It should be noted that some philosophers think that our intuitive notion of ‘ruling out’ may simply collapse into knowledge that the relevant possibility does not obtain (see e.g. DeRose 1995: pp. 16–17). But this is not important for us here: the present appeal to ruling out error possibilities is being made for illustrative (and not reductive) purposes.

  17. See Nagel (2010b) for extensive discussion of natural (“evidence-based”) belief formation vs. epistemically problematic belief formation.

  18. It might be suggested that the best response to examples like Table (3rdPerson) and Table (Modal Contrast) is to combine a doxastic approach with some other explanatory approach (e.g. the pragmatic approach found in Brown 2006, Rysiew 2007). An initial concern with such hybrid approaches is that they are liable to render appeal to doxastic factors explanatorily redundant. But the more pressing concern is that such hybrid accounts seem liable to inherit the various problems associated with those other explanatory approaches (see e.g. Nagel 2010a: pp. 286–301; Blome-Tillmann 2013; Dimmock and Huvenes 2014; Dinges forthcoming(a) for some discussion of relevant problems).

  19. Nagel is not the only classical invariantist to invoke psychological bias to explain problematic judgments. Williamson (2005) and Gerken (2012) (see also Gerken and Beebe 2016) also defend classical invariantism via appeal to psychological bias. For criticism of Williamson’s approach, see Nagel (2010a: pp. 286–301); for criticism of Gerken, see Stoutenberg (2017).

  20. Nagel does not explicitly cite any literature in support of this claim. Although it is widely accepted that we tend to treat others as sharing our beliefs, attitudes and concerns (see e.g. the literature on the false consensus effect (Ross et al. 1977; Dawes 1989)), it is less clear that these tendencies will all have the same characteristics as epistemic egocentrism (understood narrowly as a tendency to treat others as sharing our knowledge). In particular, it is less clear whether tendencies to treat others as sharing our ‘beliefs, attitudes and concerns’ will be as robust as our tendency to treat others as sharing our knowledge. (Of potential relevance here: see Birch and Bloom (2004: pp. 257–258; also 256, Box 1) on the contrast between treating others as sharing our knowledge vs. sharing our ignorance.) For the purposes of the present paper, I shall just grant to Nagel that the relevant tendencies are equally robust.

  21. Nagel (2010b: pp. 425–426) offers an alternative explanation for stakes-based cases similar to Table (3rdPerson). Her response mirrors one found in Stanley (2005: pp. 102–104). For criticism of that proposal, see Schaffer (2006: pp. 93–94) and MacFarlane (2014: pp. 186–187). Bach (2005: §V) also offers an alternative error-theoretic treatment of the cases; I consider Bach’s response in fn. 37.

  22. What about structurally similar cases that concern shifts in practical factors, like stakes, rather than shifts in salient error possibilities (see e.g. Stanley 2005: pp. 3–6 and 106 for relevant cases)? Nagel (2010b) proposes that high stakes subjects exhibit higher levels of ‘epistemic anxiety’ than low stakes subjects. (A subject’s level of epistemic anxiety corresponds (roughly) to the amount of evidence the subject needs to possess in order to be able to naturally form the relevant belief.) To handle stakes-based versions of examples like Table (3rdPerson) and Table (Modal Contrast), Nagel (2010b: pp. 425–426) suggests that, due to egocentric bias, high stakes subjects may be prone to treat low stakes subjects as though they share their own high levels of epistemic anxiety. The concerns to follow carry over fairly straightforwardly to this kind of egocentric strategy as well (cf. fn. 33). (Nagel (2008: p. 292) offers a slightly different egocentric bias strategy for handling so-called ‘Ignorant High Stakes’ cases; the concerns raised below may also pose a problem for this explanation, but for considerations of space, I cannot pursue the issue here.)

  23. Stoutenberg (2017: pp. 2037–2039) objects to Nagel’s appeal to epistemic egocentrism on the grounds that she has not first explained why we treat subjects who are considering (e.g.) the possibility that the table is white but illuminated by red lights as needing to rule out that possibility in order to know that the table is red. However, Stoutenberg does not mention or consider the doxastic elements of Nagel’s proposal that are intended to handle this issue. Roughly, Nagel’s suggestion is that, due to the psychological constraints on belief formation, subjects who are considering the possibility of tricky lighting will not be psychologically able to form the belief that the table is red (without the influence of epistemically problematic factors) unless they have evidence sufficient to rule that possibility out (see §1–2 above and esp. Nagel 2011: pp. 13–15).

  24. See Dinges forthcoming(b) for a recent development of this kind of proposal. I take the concerns raised in Sects. 4–6 to carry over fairly straightforwardly to Dinges’ proposal, but for considerations of space, I cannot engage with his proposal here. Thanks to an anonymous referee for drawing my attention to Dinges’ article.

  25. The point here is that appeal to egocentric bias appears to show significant prima facie explanatory promise. The extent to which appeal to egocentric bias can assist in explaining our judgments seems liable to depend, inter alia, on the precise details of the contextualist accounts at issue. Similar remarks apply to the other potential applications of egocentric bias sketched above.

  26. The traditional understanding of these tasks is that they reveal a more fundamental cognitive deficit in very young children than simply epistemic egocentrism. Birch and Bloom (2004, 2007) (also Birch and Bernstein 2007) attempt to push back against that traditional understanding, but the dispute is not important for our purposes. The central point is that epistemic egocentrism does not manifest in such a powerful way in adults.

  27. Birch and Bloom’s (2007) study was conducted on a more complicated example than the one discussed in the main text. Their example involved several baskets of different shapes and colours, and the baskets themselves were also moved around while the Sally-character was out of the room. The results showed statistically significant bias in the judgments of subjects who knew where the object (a violin) was placed: those participants who knew which basket the violin was in judged it more likely that the Sally-character would first look in that basket than did those participants ignorant of the violin’s location. (Though interestingly Birch and Bloom found no statistically significant bias in versions of the example where it was especially implausible that Sally would first look in the basket where the violin in fact was. To the extent that it is especially implausible that a typical subject would be considering the possibility that the table is white but illuminated by red lights, this may be a source of additional concern for Nagel’s proposal. But I set it aside.)

  28. The cases were presented along with several others in a within-subjects design; the positive outcome case was spread far apart in the presentation from its corresponding negative outcome case to reduce reliance on memory.

  29. Participants were asked to express their probability estimates as percentages.

  30. Note that unlike the Baron and Hershey (1988) study, Fischoff’s study employed a between-subjects design (as did Birch and Bloom 2007).

  31. In regard to Baron and Hershey’s (1988) investigation into our evaluation of medical and monetary decisions, Nagel herself writes that “the subjects began to misrepresent the decision-makers egocentrically as though they did have some degree of foreknowledge” (Nagel 2010a: p. 303; emphasis added). This passage suggests that Nagel recognises the partial nature of epistemic egocentrism as it manifests with respect to knowledge. It is thus somewhat surprising that she does not acknowledge that the same is likely to be the case with respect to her own proposed bias.

  32. In the example that Nagel (2010a: p. 287) focuses on, it is not stipulated (or otherwise apparent) that the subject is not considering the possibility that the table is white but illuminated by red lights. In regard to this kind of case, it may be plausible to suggest that we tend to straightforwardly treat the subject as also considering the error possibility that we are considering (cf. Nickerson 1999 on how epistemic egocentrism manifests with respect to knowledge). Indeed, Alexander et al. (2014) conducted empirical research that lends some support to the claim that we do engage in straightforward projection of salient error possibilities in such cases (see Nagel and Smith (2017: §5) for some relevant discussion). But obviously the appeal to egocentric bias has very limited application if it can only be used to address cases where the relevant differences between us and the subject are not stipulated to be present (or are not otherwise apparent). (Most immediately, the proposal could not be used to address Table (3rdPerson) or Table (Modal Contrast).) And note also that Nagel (2010b: pp. 425–426) explicitly seeks to extend the proposal to cases where the relevant differences in conversational or practical concerns are stipulated to be present.

  33. Note that it is possible to put forward an egocentric proposal that focuses directly on sharing elements of our doxastic condition, rather than on sharing our consideration of error possibilities. For example, it could be proposed that, due to egocentric bias, we directly treat others as though they require the level of evidence that we require in order to take it to be settled that the table is red (cf. Nagel 2010b: p. 420n).

    However, so long as it is apparent from the case description that (e.g.) the subject does not require the evidence that we require in order to take it to be settled that the table is red, such alternative proposals will fail for reasons similar to those to be outlined in the next section. The relevant doxastic differences between us and the subject may already be suitably apparent due to the stipulation in the case description that the subject is not considering the error possibilities that we are considering, and also due to it plausibly being common knowledge that subjects do not typically need to check the lighting before forming colour beliefs. But it does not seem to reverse our judgments if we also make the relevant doxastic differences more explicit. For example, in regard to Table (3rdPerson), even if we stipulate in the case description that Alice takes it to be settled that the table is red in a typical automatic way, it still seems plausible that, once the possibility that the table is white but illuminated by red lights has been made suitably salient, we will judge that John’s utterance of ‘She has the same evidence as me. She doesn’t know either’ seems true. Similar remarks apply to Table (Modal Contrast).

  34. Nagel (2010a: p. 301) suggests that egocentric bias effects are stronger when the fact that the subject differs in the relevant respect (e.g. considering the possibility of tricky lighting) is not in focus. Could this form the basis for alleging that the egocentric effect associated with our consideration of error possibilities is likely to be stronger than the egocentric effect associated with our God’s-eye-view assurance that the table is red?

    The outlook for such a response is poor. Most immediately, it is far from clear that the fact that the relevant subjects in our ‘Table’ cases are not considering the error possibilities that we are considering is less in focus than is the fact that those subjects do not share our God’s-eye-view assurance that the table is red. In this regard, note (e.g.) that the fact that Alice in Table (3rdPerson) is not considering error possibilities is explicitly stated in the case description, whereas the fact that Alice does not share our God’s-eye-view is not. It also seems especially difficult to pursue this strategy in regard to examples like Table (Modal Contrast). In these kinds of examples, we are being asked to make a judgment about conditionals like ‘If those error possibilities hadn’t been mentioned, the subject would know’ or ‘If the subject had not been considering error possibilities, he would know’. When making judgments about these sorts of conditionals, the fact that the subject in the counterfactual situation is not going to be considering the error possibilities that we are seems to be very much in focus; it seems hard to argue that the fact that the subject in the counterfactual situation lacks our God’s-eye-view assurance is significantly more in focus when making judgments about such conditionals.

  35. See Nagel (2010b) for some relevant discussion concerning the evidence required for belief formation among subjects in different cognitive conditions.

  36. Advocates of the doxastic approach are also apparently left unable to respect the more general observation made at the close of Sect. 2: that once an error possibility has become suitably salient, we are prone to judge as though both subjects who are and subjects who are not considering that error possibility must be able to rule it out in order for them to be truly said to ‘know’.

    Nagel and Smith (2017: §5) raise another potential concern with Nagel’s appeal to egocentric bias: that if the proposal is right, we should judge that a subject like Alice lacks both knowledge and justified belief, but we are only tempted to judge that such a subject lacks knowledge. I cannot assess this concern here, but note that it seems to rest inter alia on the assumption that the doxastic condition on knowledge is the same as the state picked out by our ordinary use of ‘belief’; it is possible to pursue a doxastic approach to explaining our judgments about ‘know’ without endorsing that assumption (see fn. 9). See also Pynn (2014: pp. 129–130) for some brief criticism of Nagel’s proposal.

  37. As noted earlier (fn. 21), Nagel does offer an alternative explanation for some cases similar to Table (3rdPerson), but that approach has met with criticism (and is limited in scope). Bach (2005: §V) offers a different response to cases like Table (3rdPerson) and Table (Modal Contrast). He suggests (roughly) that we treat the amount of evidence that we require in order to form a belief as the amount that other people require in order to know—and do so even when (as often happens) the amount required for others to know varies very significantly from how much we require in order to form the relevant belief. For example, consider Table (3rdPerson). Suppose that we require evidence sufficient to rule out that the table is white but illuminated by red lights in order to believe that it is red. Bach’s proposal is that we will then (mistakenly) treat Alice as requiring that much evidence in order to know that it is red. But why would we do this? As far as I can see, Bach provides no adequate answer to this question. (Interestingly, one might try to appeal to egocentric bias to explain why we treat others as requiring the evidence that we require, and thereby attempt to fill the explanatory hole in Bach’s account. However, structurally similar concerns about partial bias will plausibly undermine such a strategy).

References

  • Alexander, J., Gonnerman, C., & Waterman, J. (2014). Salience and epistemic egocentrism: An empirical study. In J. Beebe (Ed.), Advances in experimental epistemology (pp. 97–118). London: Bloomsbury.

  • Bach, K. (2005). The emperor’s new ‘Knows’. In G. Preyer & G. Peter (Eds.), Contextualism in philosophy: Knowledge, meaning, and truth (pp. 51–89). Oxford: Oxford University Press.

  • Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a “theory of mind”? Cognition, 21, 37–46.

  • Baron, J., & Hershey, J. C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology, 54, 569–579.

  • Birch, S., & Bernstein, D. (2007). What can children tell us about hindsight bias: A fundamental constraint on perspective-taking? Social Cognition, 25, 98–113.

  • Birch, S., & Bloom, P. (2004). Understanding children’s and adults’ limitations in mental state reasoning. Trends in Cognitive Sciences, 8, 255–260.

  • Birch, S., & Bloom, P. (2007). The curse of knowledge in reasoning about false beliefs. Psychological Science, 18, 382–386.

  • Blome-Tillmann, M. (2009). Contextualism, subject-sensitive invariantism, and the interaction of ‘Knowledge’-ascriptions with modal and temporal operators. Philosophy and Phenomenological Research, 79, 315–331.

  • Blome-Tillmann, M. (2013). Knowledge and implicatures. Synthese, 190, 4293–4319.

  • Blome-Tillmann, M. (2014). Knowledge and presupposition. Oxford: Oxford University Press.

  • Brown, J. (2006). Contextualism and warranted assertibility manoeuvres. Philosophical Studies, 130, 407–435.

  • Buckwalter, W. (2014). The mystery of stakes and error in ascriber intuitions. In J. Beebe (Ed.), Advances in experimental epistemology (pp. 145–174). London: Bloomsbury.

  • Buckwalter, W., & Schaffer, J. (2014). Knowledge, stakes, and mistakes. Noûs, 49, 201–234.

  • Camerer, C., Loewenstein, G., & Weber, M. (1989). The curse of knowledge in economic settings. Journal of Political Economy, 97, 1232–1254.

  • Cohen, S. (1999). Contextualism, skepticism, and the structure of reasons. Philosophical Perspectives, 13, 57–89.

  • Cohen, S. (2002). Basic knowledge and the problem of easy knowledge. Philosophy and Phenomenological Research, 65, 309–329.

  • Dawes, R. (1989). Statistical criteria for establishing a truly false consensus effect. Journal of Experimental Social Psychology, 25, 1–17.

  • DeRose, K. (1995). Solving the skeptical problem. The Philosophical Review, 104, 1–51.

  • DeRose, K. (2009). The case for contextualism (Vol. 1). Oxford: Clarendon Press.

  • Dimmock, P., & Huvenes, T. (2014). Knowledge, conservatism, and pragmatics. Synthese, 191, 3239–3269.

  • Dinges, A. (forthcoming a). Knowledge, intuition, and implicature. Synthese.

  • Dinges, A. (forthcoming b). Anti-intellectualism, egocentrism, and bank case intuitions. Philosophical Studies.

  • Fantl, J., & McGrath, M. (2009). Knowledge in an uncertain world. Oxford: Oxford University Press.

  • Fischoff, B. (1975). Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288–299.

  • Gerken, M. (2012). Epistemic focal bias. Australasian Journal of Philosophy, 91, 41–61.

  • Gerken, M., & Beebe, J. (2016). Knowledge in and out of contrast. Noûs, 50, 133–164.

  • Hawthorne, J. (2004). Knowledge and lotteries. Oxford: Clarendon Press.

  • Hawthorne, J. (2007). Eavesdroppers and epistemic modals. Philosophical Issues, 17, 92–101.

  • Ichikawa, J. J. (2017). Contextualising knowledge: Epistemology and semantics. Oxford: Oxford University Press.

  • Kelley, H. (1972). Attribution in social interaction. In E. Jones, D. Kanouse, H. Kelley, R. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp. 1–26). Morristown, NJ: General Learning Press.

  • Khoo, J., & Knobe, J. (forthcoming). Moral disagreement and moral semantics. Noûs.

  • Lewis, D. (1996). Elusive knowledge. Australasian Journal of Philosophy, 74, 549–567.

  • MacFarlane, J. (2005). The assessment sensitivity of knowledge attributions. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 1, pp. 197–233). Oxford: Oxford University Press.

  • MacFarlane, J. (2014). Assessment sensitivity: Relative truth and its applications. Oxford: Clarendon Press.

  • Nagel, J. (2008). Knowledge ascriptions and the psychological consequences of changing stakes. Australasian Journal of Philosophy, 86, 279–294.

  • Nagel, J. (2010a). Knowledge ascriptions and the psychological consequences of thinking about error. The Philosophical Quarterly, 60, 286–306.

  • Nagel, J. (2010b). Epistemic anxiety and adaptive invariantism. Philosophical Perspectives, 24, 407–435.

  • Nagel, J. (2011). The psychological basis of the Harman-Vogel paradox. Philosophers’ Imprint, 11, 1–28.

  • Nagel, J., & Smith, J. J. (2017). The psychological context of contextualism. In J. J. Ichikawa (Ed.), The Routledge handbook of epistemic contextualism (pp. 94–104). Abingdon: Routledge.

  • Neta, R. (2007). Anti-intellectualism and the knowledge-action principle. Philosophy and Phenomenological Research, 75, 180–187.

  • Nickerson, R. (1999). How we know—And sometimes misjudge—What other people know: Imputing one’s own knowledge to others. Psychological Bulletin, 125, 737–759.

  • Pynn, G. (2014). Unassertibility and the appearance of ignorance. Episteme, 11, 125–143.

  • Ross, L., Greene, D., & House, P. (1977). The “false consensus effect”: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13, 279–301.

  • Rysiew, P. (2007). Speaking of knowing. Noûs, 41, 627–662.

  • Schaffer, J. (2006). The irrelevance of the subject: Against subject-sensitive invariantism. Philosophical Studies, 127, 87–107.

  • Schaffer, J., & Knobe, J. (2012). Contrastive knowledge surveyed. Noûs, 46, 675–708.

  • Shin, J. (2014). Time constraints and pragmatic encroachment on knowledge. Episteme, 11, 157–180.

  • Sripada, C., & Stanley, J. (2012). Empirical tests of interest-relative invariantism. Episteme, 9, 3–26.

  • Stanley, J. (2005). Knowledge and practical interests. Oxford: Clarendon Press.

  • Stoutenberg, G. (2017). Strict moderate invariantism and knowledge denials. Philosophical Studies, 174, 2029–2044.

  • Vogel, J. (1990). Are there counter-examples to the closure principle? In M. Roth & G. Ross (Eds.), Doubting: Contemporary perspectives on skepticism (pp. 13–28). Dordrecht: Kluwer.

  • Weatherson, B. (2005). Can we do without pragmatic encroachment? Philosophical Perspectives, 19, 417–443.

  • Weatherson, B. (2017). Interest-relative invariantism. In J. J. Ichikawa (Ed.), The Routledge handbook of epistemic contextualism (pp. 240–254). Abingdon: Routledge.

  • Williamson, T. (2005). Contextualism, subject-sensitive invariantism and knowledge of knowledge. The Philosophical Quarterly, 55, 213–235.

  • Wimmer, H., & Perner, J. (1983). Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception. Cognition, 13, 103–128.

Acknowledgements

Thanks to Jessica Brown, Yuri Cath, Torfinn Huvenes, Michael Lynch, Daniele Sgaravatti, and an anonymous referee for helpful comments and discussion.

Corresponding author

Correspondence to Paul Dimmock.

Cite this article

Dimmock, P. Knowledge, belief, and egocentric bias. Synthese 196, 3409–3432 (2019). https://doi.org/10.1007/s11229-017-1603-9
