Broad and narrow epistemic standing: its relevance to the epistemology of disagreement

Abstract

Epistemologists who have studied disagreement have started to devote attention to the notion of epistemic standing (i.e., epistemic peerhood, superiority, or inferiority). One feature of epistemic standing they have not drawn attention to is a distinction between what I call “broad” and “narrow” epistemic standing. Someone who is, say, your broad epistemic peer with respect to some topic is someone who is generally as familiar with the evidence and as good at handling it as you are. But someone who is your narrow epistemic peer with respect to that topic is someone who is as familiar with the evidence and as good at handling it as you are on that particular occasion. Thus, it’s possible for you to be my broad peer while also being my narrow inferior or superior. Attending to this distinction elicits different intuitions about some of the well-known cases in the epistemology of disagreement. Focusing on broad epistemic standing, which epistemologists have done, tends to yield conciliationist responses. But focusing on narrow epistemic standing, which epistemologists have not done, yields steadfast responses. The reason for this difference has to do with how we figure out someone’s broad or narrow epistemic standing: to determine her broad epistemic standing, you need to look at her epistemic traits and her familiarity with the evidence rather than at the evidence she gives. But to determine her narrow epistemic standing, you have to focus on her disclosed evidence rather than on her epistemic traits or familiarity with the evidence.

Notes

  1.

    In their article surveying the literature on disagreement up through 2018, Bryan Frances and Jonathan Matheson write, “[t]he vast majority of the literature on the epistemic significance of disagreement … concerns recognized peer disagreement” (Frances and Matheson 2018, §4, p. 16).

  2.

    Though Frances and Matheson (2018, §5, pp. 16–17) list four views articulating how one should respond to disagreement among competent peers—the Equal Weight, Steadfast, Justificationist, and Total Evidence Views—they classify the Equal Weight, Justificationist, and Total Evidence Views as “conciliatory views” (Frances and Matheson 2018, §6, p. 44). The thought seems to be that if a view holds that you should ever treat the bare fact that a peer believes P as itself giving you a reason to believe P, then that view is conciliationist.

  3.

    Here are the assumptions that Frances and Matheson list:

    [For most any controversial ethical, political, or religious belief B], [y]ou know that [it] has been investigated and debated (i) for a very long time by (ii) a great many (iii) very smart people who (iv) are your epistemic peers and superiors on the matter and (v) have worked very hard (vi) under optimal circumstances to figure out if B is true. But you also know that (vii) these experts have not come to any significant agreement on B and (viii) those who agree with you are not, as a group, in an appreciably better position to judge B than those who disagree with you (Frances and Matheson 2018, §7, p. 47).

  4.

    These considerations come from King (2012, pp. 253–266). King’s view is that there can’t be exact epistemic peers. Biro and Lampert (2018, pp. 383–390) claim that if there could be exact epistemic peers, then they couldn’t disagree. Matheson (2014, p. 315) says that the only exact epistemic peer he could have is himself (presumably, indexed to a particular time).

  5.

    For these definitions of close/perfect peers and distant peers, see Vorobej (2011, p. 711); for this definition of remote peers, see Vorobej (2011, p. 713).

  6.

    Lampert and Biro argue that epistemologists of disagreement fail to properly distinguish between reasons and evidence. On their view, while all evidence amounts to a reason, not all reasons amount to evidence. This is because they think that, in order for something to count as evidence, there must be a “content-connection” between the evidence e and the proposition p it supposedly supports: “If e is evidence that p, then e is explained by p’s being the case” (Lampert and Biro 2017, p. 204). For example, if a mathematician, Jones, proves Goldbach’s conjecture, then his proof of the conjecture is evidence for the conjecture (his proof’s success is explained by the conjecture’s being true); but while his claim that he has proven the conjecture is evidence that he believes he has proven it, it is not evidence for the conjecture’s being true (after all, his claim could be explained by his being overconfident rather than by the conjecture’s truth). So, the fact that he’s an expert and the fact that he tells you he has proven Goldbach’s conjecture give you reason to believe Goldbach’s conjecture, but they don’t give you evidence for its truth. On this approach, there is no such thing as “second-order” evidence (i.e., the evidence you get in favor of P from the fact that a broad peer or superior asserted P)—there is just evidence (all of which is first-order) and reasons. In this paper, I follow Lampert and Biro.

  7.

    Christensen’s (2011, pp. 9–10) explanation is as follows: it’s prima facie highly unlikely that a competent tip-calculator would make an error repeatedly after carefully checking his work. Consequently, before learning of your peer’s (continuing) disagreement, you should be extremely confident about your answer. In addition, you have what Lackey (2010, p. 277) calls “personal information” about yourself: you’re aware of your sobriety, lack of neurological impairment, etc. But you don’t have access to the same information about your peer. Consequently, because it’s extremely unlikely that you made an error in this case, and because you know that nothing is wrong with you, you should conclude that something is wrong with your peer.

  8.

    For a real-life example of a case like Careful Checking, see Hathaway (2015).

  9.

    Christensen (2011, pp. 9–10) says that in Careful Checking, since you have personal information about yourself that rules out mental incapacity, and since it’s extremely unlikely that someone of sound mind would make a mistake in such a case, you should conclude that your peer has something wrong with him (see footnote 7). But against this, Feldman (2007, p. 208) could say that, for all you know, you have an ailment that makes you think nothing is wrong with you when in fact something is, so it would be unjustifiably egocentric to think that the fault is with your peer rather than with you.

  10.

    See footnote 8.

  11.

    I see a similar degree of unrealism in other places. For example, in defending conciliationism from the argument that it is self-undermining (here’s the argument: by conciliationism’s own lights, if epistemic peers reject conciliationism, then conciliationists should become agnostic about conciliationism; but epistemic peers do reject conciliationism; thus, conciliationists should become agnostic about conciliationism), Christensen invokes the following principle:

    Minimal Humility: If I have thought casually about P for 10 min, and have decided it is correct, and then find out that many people, most of them smarter and more familiar with the relevant evidence and arguments than I, have thought long and hard about P, and have independently but unanimously decided that P is false, I should become less confident in P (Christensen 2009, p. 763).

    About this, Christensen writes, “[c]learly, Minimal Humility will self-undermine in certain evidential situations.” Consequently, “we should be cautious before taking potential self-undermining as showing a principle false” (Christensen 2009, p. 763).

    But in what evidential situations will Minimal Humility self-undermine? Only when a bunch of competent epistemologists think long and hard about Minimal Humility and then conclude that it’s a non-starter. But why on earth would they do that? The only way they would is if they were in a distant possible world, one whose relevance to our own is quite unclear. In a world like our own, there’s no good reason for them to arrive at this conclusion. So, you can’t just invoke an incredibly unrealistic case to defend your view.

    I bring all this up because I believe that attending only to broad peerhood moves one to underweight, sometimes radically, the importance of first-order evidence. By contrast, if you focus on narrow epistemic standing, you are less likely to neglect the importance of first-order evidence, which will in turn shape your examples.

  12.

    What if we’re both justifiably extremely confident about our perceptions, and yet we still disagree? Do we still conciliate? I don’t know. Such cases are rare, and, usually, strange. Rather than offer a general theory of what to do in cases of disagreement where both parties are justifiably extremely confident, I think we should look at them case-by-case. As is the case with Dean on the Quad, sometimes there is disclosable, available evidence showing that one party is right and the other is wrong.

  13.

    It’s worth taking a moment to put my conclusions in the language Brian Weatherson uses in Weatherson (2010). Weatherson imagines that “a rational agent S has some evidence E that bears on p, and on that basis makes a judgment about p” (Weatherson 2010, p. 1). Weatherson calls the judgment that S makes about p, on the basis of E, “J”. He then asks, “How many pieces of evidence does the agent have that bear on p?” and writes that there are three possible answers to this question: “1. Two—Both J and E. 2. One—E subsumes whatever evidential force J has. 3. One—J subsumes whatever evidential force E has” (Weatherson 2010, p. 1). So, given what I have claimed in this paper about broad and narrow peerhood, how do I answer Weatherson’s question?

    In cases where agents have a perceptual disagreement, there is public evidence (for example, if Bea and Abel watch a boxing match and it seems to Bea that the boxer hit his opponent before the bell, while it seems to Abel that the boxer hit his opponent after the bell, then the public evidence is what they saw), but there is also private evidence (how it seemed to Bea, and how it seemed to Abel), and that private evidence is non-disclosable. Moreover, the public evidence, now that it is in the past, is no longer disclosable either. I have argued that in such a case, both Bea and Abel should conciliate, on the grounds that they have nothing to go on besides their broad peerhood—i.e., their judgments. But note that there is no evidence they can share, so this does not amount to endorsing what Weatherson calls “JSE”, or “Judgments Screen Evidence” (Weatherson 2010, p. 1).

    In cases where broad peers disagree over public evidence (say, a disagreement between two philosophers about whether a particular argument, A, is sound), I believe that the agents’ judgments about A should count as reasons for both agents, but only before they present their takes on A to each other. After they present their takes, the bare fact that a broad peer has judged so-and-so to be the case no longer matters; what each should go by is her own informed take (i.e., her take after considering and responding to the other’s take) on the evidence. This seems to me to amount to endorsing Weatherson’s 2, “E subsumes whatever evidential force J has”, or ESJ: evidence screens judgments. In slogan form (which oversimplifies somewhat): when there is no evidence, but there are judgments, conciliate; when there is evidence as well as judgments, evidence screens judgments.

References

  1. Biro, J., & Lampert, F. (2018). ‘Peer disagreement’ and evidence of evidence. Logos and Episteme, 9(4), 379–402.

  2. Christensen, D. (2007). Epistemology of disagreement: The good news. The Philosophical Review, 116(2), 187–217.

  3. Christensen, D. (2009). Disagreement as evidence: The epistemology of controversy. Philosophy Compass, 4(5), 756–767.

  4. Christensen, D. (2011). Disagreement, question-begging, and epistemic self-criticism. Philosophers’ Imprint, 11, 1–22.

  5. Drożdżowicz, A. (2018). Philosophical expertise beyond intuitions. Philosophical Psychology, 31(2), 253–277.

  6. Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.

  7. Feldman, R. (2005). Respecting the evidence. Philosophical Perspectives, 19(1), 95–119.

  8. Feldman, R. (2007). Reasonable religious disagreements. In L. Antony (Ed.), Philosophers without gods: Meditations on atheism and the secular life (pp. 194–214). Oxford: Oxford University Press.

  9. Frances, B., & Matheson, J. (2018). Disagreement. In E. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2019 Edition). Retrieved November 13, 2019, from https://plato.stanford.edu/entries/disagreement/.

  10. Hathaway, J. (2015). Bodybuilders try, fail, to calculate number of days in a week. Gawker. Retrieved November 13, 2019, from https://gawker.com/bodybuilders-try-fail-to-calculate-number-of-days-in-1677545788.

  11. Huemer, M. (2001). Skepticism and the veil of perception. Lanham, MD: Rowman and Littlefield.

  12. Kelly, T. (2005). The epistemic significance of disagreement. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 1, pp. 167–196). Oxford: Oxford University Press.

  13. Kelly, T. (2010). Peer disagreement and higher-order evidence. In R. Feldman & T. Warfield (Eds.), Disagreement (pp. 111–174). Oxford: Oxford University Press.

  14. King, N. L. (2012). Disagreement: What’s the problem? or a good peer is hard to find. Philosophy and Phenomenological Research, 85(2), 249–272.

  15. Lackey, J. (2010). What should we do when we disagree? In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 3, pp. 274–293). Oxford: Oxford University Press.

  16. Lampert, F., & Biro, J. (2017). What is evidence of evidence evidence of? Logos and Episteme, 8(2), 195–206.

  17. Matheson, J. (2014). Disagreement: Idealized and everyday. In J. Matheson & R. Vitz (Eds.), The ethics of belief: Individual and social (pp. 315–330). Oxford: Oxford University Press.

  18. Nadelhoffer, T., & Feltz, A. (2008). The actor-observer bias and moral intuitions: Adding fuel to Sinnott-Armstrong’s fire. Neuroethics, 1(2), 133–144.

  19. O’Sullivan, M., & Ekman, P. (2004). The wizards of deception detection. In P. Granhag & L. Strömwall (Eds.), The detection of deception in forensic contexts (pp. 269–286). Cambridge: Cambridge University Press.

  20. Pryor, J. (2000). The skeptic and the dogmatist. Noûs, 34(4), 517–549.

  21. Schulz, E., Cokely, E., & Feltz, A. (2011). Persistent bias in expert judgments about free will and moral responsibility: A test of the expertise defense. Consciousness and Cognition, 20(4), 1722–1731.

  22. Schwitzgebel, E., & Cushman, F. (2015). Philosophers’ biased judgments persist despite training, expertise and reflection. Cognition, 141, 127–137.

  23. Titelbaum, M. (2015). Rationality’s fixed point (or: in defense of right reason). In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 5, pp. 253–294). Oxford: Oxford University Press.

  24. Tobia, K. P., Buckwalter, W., & Stich, S. (2013). Moral intuitions: Are philosophers experts? Philosophical Psychology, 26(5), 629–638.

  25. Tucker, C. (Ed.). (2013). Seemings and justification: New essays on dogmatism and phenomenal conservatism. Oxford: Oxford University Press.

  26. Vaesen, K., Peterson, M., & Van Bezooijen, B. (2013). The reliability of armchair intuitions. Metaphilosophy, 44(5), 559–578.

  27. Vorobej, M. (2011). Distant peers. Metaphilosophy, 42(5), 708–722.

  28. Weatherson, B. (2010). Do judgments screen evidence? Unpublished manuscript. Retrieved January 25, 2020, from http://brian.weatherson.org/JSE.pdf.

Acknowledgements

I would like to thank California State University, Northridge’s College of Humanities and Department of Philosophy for their generous funding that made the writing of this paper possible. I would also like to thank Tim Black, Daniel Kaufman, Brian Kim, Jonathan Matheson, Lawrence Pasternack, Ted Poston, Shannon Spaulding, Weimin Sun, Gregory Velazco-y-Trianosky, and Takashi Yagisawa for their helpful remarks and suggestions. Finally, I would like to thank the two anonymous reviewers at Synthese for their extremely helpful comments.

Author information

Correspondence to Robert Gressis.

Cite this article

Gressis, R. Broad and narrow epistemic standing: its relevance to the epistemology of disagreement. Synthese 198, 8289–8306 (2021). https://doi.org/10.1007/s11229-020-02573-8

Keywords

  • Disagreement
  • Peerhood
  • Expertise
  • Seemings