Fundamental disagreements and the limits of instrumentalism

Published in Synthese.

Abstract

I argue that the skeptical force of a disagreement is mitigated to the extent that it is fundamental, where a fundamental disagreement is one that is driven by differences in epistemic starting points. My argument has three steps. First, I argue that proponents of conciliatory policies have good reason to affirm a view that I call “instrumentalism,” a view that commends treating our doxastic inclinations like instrumental readouts. Second, I show that instrumentalism supplies a basis for demanding conciliatory requirements in superficial disagreements but not in fundamental disagreements. Third, I argue that the frequently invoked “independence” principle, which arguably would require significant conciliation in fundamental disputes, is unmotivated in light of the explanatory power of instrumentalism. The most plausible conciliatory view, then, is a weak conciliationism that features instrumentalism rather than independence as the central principle, and that therefore gives us a principled basis for thinking that fundamental disagreements should occasion less doxastic revision than superficial disagreements.


Notes

  1. See, for example, Christensen (2011), Elga (2007), Holley (2013), Konigsberg (2013) and Pettit (2006). Others (e.g., Feldman 2007; Kornblith 2010) contest the view that fundamental disagreements exert less conciliatory pressure.

  2. Elga (2007, pp. 492–7) and Christensen (2011, pp. 15–16) offer two prominent explanations.

  3. Elga’s explanation is critiqued in Christensen (2011), Kornblith (2010) and Simpson (2013); Christensen’s explanation is critiqued in Bobier (2012).

  4. The term “epistemic peer” was coined by Gutting (1982). Roughly speaking, two subjects are epistemic peers with respect to p just in case the subjects’ epistemic positions with respect to p are equal in strength, both in terms of quality and quantity of evidence and the capacity to rightly assess that evidence. What constitutes “peerhood” is an important question in the literature on disagreement, but my argument does not depend on any particular resolution of this question. The equal weight view was so named by Elga (2007).

  5. Analogies involving thermometers or other instruments are common in discussions of conciliatory views. See, e.g., Bogardus (2009), Christensen (2007, 2016), Enoch (2010), Kelly (2010), Littlejohn (2013) and White (2009).

  6. Adapted from Elga (2007).

  7. Let A stand for the proposition that Horse A won the race, SA stand for the proposition that it seems to me that Horse A won by a nose, and C be my credence function prior to learning SA but after I see that the race is coming down to the wire and that some horse will win by a nose. Since I know that I reach an accurate judgment in races this close 90% of the time (regardless of which horse wins), \(\hbox {C}({ SA}|A)=0.9\). Given the assumption that I have no reason to think that a Horse A victory is more or less likely in a close race than a Horse B victory, \(\hbox {C}(A)=0.5\). And since I do not think I am more likely to seem to see a Horse A victory than a Horse B victory, C(SA) = 0.5. Bayes’s theorem requires that \(\hbox {C}(A|{ SA})= \hbox {C}({ SA}|{A})\cdot \hbox {C}({A})/\hbox {C}({ SA})\). Substituting, we get \(\hbox {C}(A|{ SA})=0.9\).
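The arithmetic in this note can be reproduced directly. The following is a minimal Python check (the variable names are illustrative, not part of the original notation):

```python
# Check of the Bayes calculation in note 7.
# C(SA|A): the chance I seem to see a Horse A victory, given that Horse A won
# (my stipulated 90% accuracy in races this close).
c_sa_given_a = 0.9
c_a = 0.5   # prior: a Horse A victory is no more likely than a Horse B victory
c_sa = 0.5  # prior: I am no more likely to seem to see an A-victory than a B-victory

# Bayes's theorem: C(A|SA) = C(SA|A) * C(A) / C(SA)
c_a_given_sa = c_sa_given_a * c_a / c_sa
print(c_a_given_sa)  # 0.9
```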

  8. This result is due to White (2009, p. 239), who shows that the sort of “splitting the difference” prescribed by the equal weight view aligns with the requirements of conditionalization only if we also conform to a principle he calls the “Calibration Rule.” White argues that the Calibration Rule is quite implausible, though his target is a naïve principle that differs from the instrumentalism I advocate here. I discuss this naïve form of instrumentalism in Sect. 3.

  9. Let A stand for the proposition that Horse A won the race, and D (for ‘disagreement’) stand for the proposition that Beth judges that Horse B won the race. According to the equal weight view, my credence for A after learning of the disagreement should be 0.5. Reaching this by conditionalizing on D would require that, before the disagreement, \(\hbox {C}(A|D)= 0.5\). This, together with Bayes’s Theorem and the law of total probability, requires that \(0.5 = \hbox {C}(D|A)\cdot \hbox {C}(A)/[\hbox {C}(A) \cdot \hbox {C}(D |A)+ (1 -\hbox {C}(A)) \cdot \hbox {C}(D |\sim A)]\). Since I know Beth’s accuracy is 0.9 no matter which horse wins, I know that \(\hbox {C}(D |A)= 0.1\) and \(\hbox {C}(D |\sim A) = 0.9\). Making these substitutions gives us \(0.5 = 0.1 \cdot \hbox {C}(A)/[\hbox {C}(A)\cdot 0.1+(1-\hbox {C}(A)) \cdot 0.9]\), and solving for \(\hbox {C}(A)\) gives us \(\hbox {C}(A)=0.9\). So the equal weight view is compatible with conditionalization only if before learning of the disagreement I already have an instrumentalist attitude towards my own judgment that Horse A won the race.
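The derivation in this note can be checked by running it forward: with the prior C(A) = 0.9 it solves for, conditionalizing on the disagreement does indeed return the equal-weight credence of 0.5. A short Python sketch (variable names are mine):

```python
# Check of note 9: with prior C(A) = 0.9 and Beth's stipulated 90% accuracy,
# conditionalizing on her dissent D yields the equal-weight credence of 0.5.
c_a = 0.9              # the prior the note solves for
c_d_given_a = 0.1      # Beth wrongly judges that B won, given that A won
c_d_given_not_a = 0.9  # Beth rightly judges that B won, given that B won

# Bayes's theorem with the law of total probability:
posterior = (c_d_given_a * c_a) / (
    c_a * c_d_given_a + (1 - c_a) * c_d_given_not_a
)
print(round(posterior, 3))  # 0.5
```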

  10. Anyone who updates in a manner that departs from conditionalization in a determinate and predictable way is susceptible to a “Dutch Book” strategy in which a “bookie” offers a series of bets to the agent that the agent views as fair (at the time the bet is made) but that guarantees monetary loss for the agent and gain for the bookie. For an overview of the “diachronic Dutch Book argument” for this conclusion, see Hájek (2009). The reflection principle originally formulated by Bas van Fraassen (1984, p. 244) states that an agent’s current credence for p should “reflect” her expected future credence for p; more precisely, her conditional credence for p given that at some future time her credence for p will be r ought to be r. While reflection is subject to well-known counterexamples, these counterexamples do not raise doubts about the applicability of the reflection requirement in contexts where I know that in the future my epistemic position with respect to p will in every respect be at least as strong as my current epistemic position with respect to p (Briggs 2009).

  11. Thanks to an anonymous referee for pointing this out.

  12. The following example of the thermometer in Panama is adapted from White’s discussion.

  13. This can be shown using Bayes’s Theorem, but here is an intuitive way of justifying this claim. Out of 100 races, I’d expect that Horse A would win about 80 times, and that in about 72 of these races I’d correctly judge that Horse A won and in about 8 of these races incorrectly judge that Horse B won. I’d also expect Horse B to win about 20 times, and that in about 18 of these races I’d correctly judge that Horse B won and in about 2 races I’d incorrectly judge that Horse A won. Thus, the proportion of judgments in favor of Horse A that I expect to be correct is 0.97 (72/74), and the proportion of judgments in favor of Horse B that I expect to be correct is 0.69 (18/26).
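The tally in this note can be reproduced mechanically. The sketch below assumes, as the note does, an expected 80/20 split of victories between the two horses and 90% judgment accuracy regardless of the winner:

```python
# Tally from note 13: 100 hypothetical races, Horse A expected to win 80 of them,
# with 90% judgment accuracy whichever horse wins.
a_wins, b_wins = 80, 20
correct_a = 0.9 * a_wins  # 72 races: A wins and I correctly judge that A won
wrong_b = 0.1 * a_wins    # 8 races: A wins but I incorrectly judge that B won
correct_b = 0.9 * b_wins  # 18 races: B wins and I correctly judge that B won
wrong_a = 0.1 * b_wins    # 2 races: B wins but I incorrectly judge that A won

# Proportion of my pro-A judgments (72 + 2 of them) that are correct:
accuracy_pro_a = correct_a / (correct_a + wrong_a)  # 72/74
# Proportion of my pro-B judgments (18 + 8 of them) that are correct:
accuracy_pro_b = correct_b / (correct_b + wrong_b)  # 18/26
print(round(accuracy_pro_a, 2), round(accuracy_pro_b, 2))  # 0.97 0.69
```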

  14. Schoenfield considers challenges to calibrationism in her (2015) without explicitly rejecting or endorsing the principle, but she defends calibrationism in her (2016).

  15. The same sort of criticism applies to Sliwa and Horowitz’s discussion of the case they call “calculation” (2015, pp. 2836–44).

  16. Maria Lasonen-Aarnio (2013) advances a similar argument against splitting the difference.

  17. Again, let A stand for the proposition that Horse A won the race and D (for ‘disagreement’) stand for the proposition that Beth judges that Horse B won the race. If C is my credence function after seeming to see Horse A win but before learning of the disagreement, then using Bayes’s Theorem and the law of total probability, my credence after the disagreement should be \(\hbox {C}(D|A)\cdot \hbox {C}(A)/[\hbox {C}(A) \cdot \hbox {C}(D |A)+ (1-\hbox {C}(A)) \cdot \hbox {C}(D |\sim A)]\), which is equal to \(0.1\cdot 0.973/[0.973\cdot 0.1 + 0.027 \cdot 0.9]\), or 0.8.
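Using the exact prior of 72/74 from note 13 rather than the rounded 0.973, the posterior comes out to 0.8 exactly. A quick Python check (variable names are mine):

```python
# Check of note 17: starting from C(A) = 72/74 (the exact value behind the
# rounded 0.973 in note 13), conditionalizing on Beth's dissent D yields 0.8.
c_a = 72 / 74          # my credence after seeming to see Horse A win
c_d_given_a = 0.1      # Beth misjudges, given that A won
c_d_given_not_a = 0.9  # Beth judges correctly, given that B won

posterior = (c_d_given_a * c_a) / (
    c_a * c_d_given_a + (1 - c_a) * c_d_given_not_a
)
print(round(posterior, 3))  # 0.8
```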

  18. Enoch (2010, pp. 961–5) also argues for the impossibility of a thoroughly instrumentalist perspective. By giving a formal account of instrumentalism, I aim to characterize more precisely why instrumentalism is limited in scope and which doxastic inclinations can and cannot be treated in an instrumentalist fashion.

  19. For a similar argument, see Lasonen-Aarnio (2013, p. 773).

  20. A certain sort of coherentist might be skeptical of the idealized Bayesian picture that is presupposed here, according to which our credences are the product of our empirical evidence and fundamental pre-evidential plausibility assessments—our ur-priors. Perhaps any doxastic inclination towards p can be critically evaluated from some standpoint that includes a credence for p and that, for purposes of evaluating that particular doxastic inclination, counts as being rationally prior to the inclination. In this case, no disagreement would count as being fundamental (since there would be no attitudes that uniquely qualify as our epistemic starting points), and the process of instrumentalizing could go on indefinitely. Nonetheless, the key point holds: because there is no guarantee that two disputants who continue to treat more and more attitudes in an instrumentalist fashion will eventually land on a common prior for p, a common commitment to instrumentalism will not guarantee convergence in credences.

  21. For early conciliatory arguments in favor of such dispute-independence requirements, see Christensen (2007) and Elga (2007).

  22. Fundamental calibration bears obvious similarities to calibrationism and Evidential Calibration. In requiring that the agent use a reliability estimate for someone “relevantly similar” (in dispute-neutral respects), fundamental calibration is perhaps most similar to Christensen’s “Idealized Thermometer Model” (2016, p. 409).

  23. Suppose that before Roger learns Fiona’s view, he learns that she is inclined to assign an ur-prior of 0.99 to her view (whatever it is). Presumably, this information should not change Roger’s confidence, so his credence for p should remain 0.99. Next, Roger learns D, which stands for the proposition that Fiona judges that one way is false. Using Bayes’s theorem and the law of total probability, we know that prior to learning D, Roger should satisfy the following: \(\hbox {C}({\textsc {one}}\,{\textsc {way}}|D) = \hbox {C}(D|{\textsc {one}}\,{\textsc {way}})\cdot \hbox {C}({\textsc {one}}\,{\textsc {way}}) / (\hbox {C}({\textsc {one}}\,{\textsc {way}})\cdot \hbox {C}(D|{\textsc {one}}\,{\textsc {way}}) + \hbox {C}(\sim {\textsc {one}}\,{\textsc {way}}) \cdot \hbox {C}(D |\sim {\textsc {one}}\,{\textsc {way}}))\). Because Roger expects that Fiona is 90% reliable in the present circumstances, we know that \(\hbox {C}(D|{\textsc {one}}\,{\textsc {way}}) = 0.1\) and \(\hbox {C}(D|\sim {\textsc {one}}\,{\textsc {way}}) = 0.9\). Substituting, we get \(\hbox {C}(\textsc {one way }|D)=0.1\cdot 0.99/(0.99 \cdot 0.1 + 0.01 \cdot 0.9) \approx 0.917.\)
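The substitution at the end of this note can be verified with a short calculation; the sketch below uses illustrative variable names in place of the note's notation:

```python
# Check of note 23: Roger's credence in ONE WAY after learning of Fiona's dissent D.
c_one_way = 0.99        # Roger's credence based on his ur-prior of 0.99
c_d_given_ow = 0.1      # Fiona dissents although ONE WAY is true (she is 90% reliable)
c_d_given_not_ow = 0.9  # Fiona dissents and ONE WAY is false

posterior = (c_d_given_ow * c_one_way) / (
    c_one_way * c_d_given_ow + (1 - c_one_way) * c_d_given_not_ow
)
print(round(posterior, 3))  # 0.917
```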

  24. Christensen, for example, concedes that in many philosophical controversies, the base of p-neutral considerations is robust enough to give him a “strong, dispute-independent reason to think that those who disagree with [him] are as well-informed, and as likely to have reasoned correctly from their evidence, as those who agree with [him]” (Christensen 2015, p. 147).

  25. Some examples include Alston (1991, chs. 4 & 7), Enoch and Schechter (2008) and Wright (2004).

  26. Thanks to an anonymous referee for suggesting that I consider Christensen’s view at this juncture.

  27. Vavova (2014) appeals to Christensen’s graded conciliatory view in order to argue that significant conciliation is not required in moral disagreements that are in a certain respect “deep” or “fundamental.” The key idea is that when I disagree over some moral proposition p with someone who agrees with me on most other moral matters, this wide base of agreement gives me strong independent reason for thinking that my disputant is reliable with respect to p; but when I disagree over p with someone who shares few of my moral convictions, I lack strong independent reason for thinking that my disputant is reliable with respect to p. If this is right, then graded strong conciliationism would require significant conciliation in the first but not the second sort of disagreement. While Vavova’s conclusion may sound quite similar to my claim that fundamental disagreements generate less conciliatory pressure than superficial disagreements, what Vavova means by a “fundamental” disagreement is importantly different from the meaning I employ here. For Vavova, two disputants have a “fundamental” disagreement to the extent that they disagree across a wide range of moral issues; a narrow disagreement on a single moral issue is not fundamental in the relevant sense, and is exactly the sort of case where equal weighting is likely to be required. As I am using the term here, however, a disagreement counts as “fundamental” when it is the result of differing ur-priors. Narrow moral disagreements that occur against a backdrop of widespread moral agreement can be fundamental in this sense, and it is also easy to imagine cases where there is merely superficial disagreement across a wide range of moral questions. In any case, because I do not think graded strong conciliationism is plausible (for the reason developed in the following paragraphs), I do not think Vavova has shown that the correct conciliatory norm is less demanding in systematic moral disagreements. 
For additional criticisms of Vavova’s argument, see Fritz (2016).

  28. Here, I rely on an “epistemic reflection” principle of the sort defended in Briggs (2009).

  29. Let r be the probability that right reasoning leads to a true first order view on p. Recall our stipulation that weak conciliationists assign a credence of 0.7 to their first order view. The inaccuracy of a credence c for some true proposition as measured by the Brier score is \((1-c)^{2}\). In this case, lower Brier scores are better. Using the Brier score, the expected inaccuracy score for procedure 1 is \(r(1-0.7)^{2}+(1-r)(1-0.3)^{2}=0.49-0.4r.\) The expected inaccuracy score for procedure 3, strong conciliationism, is \((1-0.5)^{2}=0.25\). So procedure 1 outperforms procedure 3 just in case \(0.49-0.4r<0.25\), or, in other words, anytime \(r>0.6\). Thus, as long as it is more than 60% probable that right reasoning leads to the correct judgment about p, then procedure 1 has a higher expected accuracy.
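The expected-inaccuracy comparison in this note can be checked numerically. A minimal sketch, following the note's stipulations (weak conciliationist credence 0.7, strong conciliationist credence 0.5, Brier scoring):

```python
# Check of note 29: expected Brier inaccuracy of procedures 1 and 3.
def expected_inaccuracy_procedure_1(r):
    """Weak conciliationism: credence 0.7 in the first-order view.
    r is the probability that right reasoning yields the true view on p.
    Brier inaccuracy of credence c in a truth is (1 - c)**2; lower is better."""
    return r * (1 - 0.7) ** 2 + (1 - r) * (1 - 0.3) ** 2  # = 0.49 - 0.4r

# Procedure 3, strong conciliationism: credence 0.5 regardless of the truth.
inaccuracy_procedure_3 = (1 - 0.5) ** 2  # 0.25

# Procedure 1 does better exactly when 0.49 - 0.4r < 0.25, i.e. when r > 0.6:
print(round(expected_inaccuracy_procedure_1(0.6), 6))  # 0.25 (break-even point)
print(expected_inaccuracy_procedure_1(0.7) < inaccuracy_procedure_3)  # True
```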

  30. I’m grateful to an anonymous referee for raising this challenge.

References

  • Alston, W. P. (1991). Perceiving god: The epistemology of religious experience. Ithaca, NY: Cornell University Press.

  • Bobier, C. (2012). The conciliatory view and the charge of wholesale skepticism. Logos and Episteme: An International Journal of Epistemology, 3(4), 619–627.

  • Bogardus, T. (2009). A vindication of the equal-weight view. Episteme, 6(3), 324–335.

  • Briggs, R. (2009). Distorted reflection. Philosophical Review, 118(1), 59–85.

  • Carey, B., & Matheson, J. (2013). How skeptical is the equal weight view? In D. E. Machuca (Ed.), Disagreement and skepticism (pp. 131–49). New York: Routledge.

  • Christensen, D. (2007). Epistemology of disagreement: The good news. Philosophical Review, 116(2), 187–217.

  • Christensen, D. (2011). Disagreement, question-begging and epistemic self-criticism. Philosophers’ Imprint, 11(6), 1–22.

  • Christensen, D. (2015). Disagreement and public controversy. In J. Lackey (Ed.), Essays in collective epistemology. New York: Oxford University Press.

  • Christensen, D. (2016). Disagreement, drugs, etc.: From accuracy to akrasia. Episteme, 13(4), 392–422.

  • Cohen, S. (2013). A defense of the (almost) equal weight view. In D. Christensen & J. Lackey (Eds.), The epistemology of disagreement: New essays (pp. 98–119). New York: Oxford University Press.

  • Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.

  • Elga, A. (2008). Lucky to be rational. Unpublished manuscript. www.princeton.edu/~adame/papers/bellingham-lucky.pdf

  • Enoch, D. (2010). Not just a truthometer: Taking oneself seriously (but not too seriously) in cases of peer disagreement. Mind, 119(476), 953–997.

  • Enoch, D., & Schechter, J. (2008). How are basic belief-forming methods justified? Philosophy and Phenomenological Research, 76(3), 547–579.

  • Feldman, R. (2007). Reasonable religious disagreements. In L. M. Antony (Ed.), Philosophers without gods: Meditations on atheism and the secular life (pp. 194–214). New York: Oxford University Press.

  • Fritz, J. (2016). Conciliationism and moral spinelessness. Episteme. https://doi.org/10.1017/epi.2016.44.

  • Gutting, G. (1982). Religious belief and religious skepticism. Notre Dame, IN: University of Notre Dame Press.

  • Hájek, A. (2009). Dutch book arguments. In P. Anand, P. K. Pattanaik, & C. Puppe (Eds.), The handbook of rational and social choice (pp. 173–95). Oxford: Oxford University Press.

  • Holley, D. M. (2013). Religious disagreements and epistemic rationality. International Journal for Philosophy of Religion, 74, 33–49.

  • Kelly, T. (2005). The epistemic significance of disagreement. Oxford Studies in Epistemology, 1, 167–196.

  • Kelly, T. (2010). Peer disagreement and higher order evidence. In R. Feldman & T. A. Warfield (Eds.), Disagreement (pp. 111–174). New York: Oxford University Press.

  • Konigsberg, A. (2013). The problem with uniform solutions to peer disagreement. Theoria, 79(2), 96–126.

  • Kornblith, H. (2010). Belief in the face of controversy. In T. Warfield & R. Feldman (Eds.), Disagreement (pp. 29–52). New York: Oxford University Press.

  • Lasonen-Aarnio, M. (2013). Disagreement and evidential attenuation. Noûs, 47(4), 767–794.

  • Littlejohn, C. (2013). Disagreement and defeat. In D. E. Machuca (Ed.), Disagreement and skepticism (pp. 169–192). New York: Routledge.

  • Pettit, P. (2006). When to defer to majority testimony—and when not. Analysis, 66(3), 179–187.

  • Roush, S. (2009). Second guessing: A self-help manual. Episteme, 6(3), 251–268.

  • Schoenfield, M. (2015). A dilemma for calibrationism. Philosophy and Phenomenological Research, 91(2), 425–55.

  • Schoenfield, M. (2016). An accuracy based approach to higher order evidence. Philosophy and Phenomenological Research. https://doi.org/10.1111/phpr.12329.

  • Simpson, R. M. (2013). Epistemic peerhood and the epistemology of disagreement. Philosophical Studies, 164(2), 561–577.

  • Sliwa, P., & Horowitz, S. (2015). Respecting all the evidence. Philosophical Studies, 172(11), 2835–2858.

  • Titelbaum, M. G. (2015). Rationality’s fixed point (or: In defense of right reason). Oxford Studies in Epistemology, 5, 253–294.

  • van Fraassen, B. C. (1984). Belief and the will. The Journal of Philosophy, 81(5), 235–256.

  • Vavova, K. (2014). Moral disagreement and moral skepticism. Philosophical Perspectives, 28(1), 302–333.

  • White, R. (2009). On treating oneself and others as thermometers. Episteme, 6(3), 233–250.

  • Wright, C. (2004). Warrant for nothing (and foundations for free)? Aristotelian Society Supplementary Volume, 78(1), 167–212.

  • Zagzebski, L. T. (2012). Epistemic authority: A theory of trust, authority, and autonomy in belief. New York: Oxford University Press.

Acknowledgements

Early versions of this paper were presented at a Philosophy of Religion Colloquium at Yale University and at the Defeat and Religious Epistemology Workshop at Oxford University, and I benefited from helpful comments at both venues. I’m also grateful to Alex Arnold, David Christensen, Keith DeRose, Dan Greco, Jack Sanchez, Miriam Schoenfield, Zoltan Szabo, Joshua Thurow, Bruno Whittle, and some anonymous referees for helpful feedback and discussion at various points in the process of writing the paper.

Author information

Correspondence to John Pittard.

Cite this article

Pittard, J. Fundamental disagreements and the limits of instrumentalism. Synthese 196, 5009–5038 (2019). https://doi.org/10.1007/s11229-018-1691-1
