No one can serve two epistemic masters

Abstract

Consider two epistemic experts—for concreteness, let them be two weather forecasters. Suppose that you aren’t certain that they will issue identical forecasts, and you would like to proportion your degrees of belief to theirs in the following way: first, conditional on either’s forecast of rain being x, you’d like your own degree of belief in rain to be x. Secondly, conditional on them issuing different forecasts of rain, you’d like your own degree of belief in rain to be some weighted average of the forecast of each (perhaps with weights determined by their prior reliability). Finally, you’d like your degrees of belief to be given by an orthodox probability measure. Moderate ambitions, all. But you can’t always get what you want.

Notes

  1.

    There are a wide variety of putative experts and a wide variety of principles of expert deference on offer in the current epistemology literature. However, the reader will note that everything we say here about Al and Bert goes just as well for chance, your rational future self, and so on. The reader will also note that if Al and Bert have all your evidence and more, and if they are additionally certain of their own forecasts, then every extant principle of expert deference will entail both (2) and (3). For instance, given an expert \(\mathcal{E}\) who is certain of their own credences and has all your evidence and more, the following ways of treating \(\mathcal{E}\) as an expert are all equivalent, for all p, x, and E: (1) \(C(p \mid \mathcal{E}(p)=x) = x\), (2) \(C(p \mid \mathcal{E} = E) = E(p)\), (3) \(C(p \mid \mathcal{E}=E) = E(p \mid \mathcal{E}=E)\), and (4) \(C(p) = \sum _x x \cdot C(\mathcal{E}(p)=x)\). See Gallow (msb) for a proof of this claim and more on the relationship between various principles of expert deference.

  2.

    By an ‘orthodox probability function’, I will mean that C is non-negative, normalized, countably additive, and conglomerable. These assumptions go beyond probabilism in the case where we are considering infinitely many possible values for \(\mathcal{A}\) and \(\mathcal{B}\). However, if we suppose that there are at most finitely many potential values for \(\mathcal{A}\) and \(\mathcal{B}\), then an ‘orthodox probability function’ is just any finitely additive probability. Thanks to an anonymous reviewer for their clarifying comments on this point.

  3.

    Matthew 6:24.

  4.

    See, for starters, Gaifman (1988), Lewis (1980, 1994), Hall (1994), van Fraassen (1984, 1995), Christensen (2010), and Elga (2013).

  5.

    Many of these principles of expert deference look different from (2) and (3); fortunately, in most cases we can present them in the form of (2) and (3) simply by shifting our attention to a different expert. For instance, Lewis (1994) and Hall (1994) both say that we should defer not to the judgments of chance itself, but rather to the judgments of chance conditionalized on the proposition that it is the chance function. In that case, we can take the relevant expert to be, not chance itself, but chance conditionalized on chance, and we get back a principle of the form of (2) and (3). Cf. Hall and Arntzenius (2003) and Schaffer (2003). (See also footnote 1.)

  6.

    Cf. Gallow (msa).

  7.

    See, for starters, Kelly (2005), Elga (2007), and Christensen (2007, 2010, 2011).

  8.

    See, e.g., Shogenji (ms) and Fitelson and Jehle (2009).

  9.

    See, e.g., Wagner (1985) and Staffel (2015, §6).

  10.

    Cf. Levinstein (2015).

References

  1. Christensen, D. (2007). Epistemology of disagreement: The good news. Philosophical Review, 116(2), 187–217.

  2. Christensen, D. (2010). Rational reflection. Philosophical Perspectives, 24, 121–140.

  3. Christensen, D. (2011). Disagreement, question-begging, and epistemic self-criticism. Philosophers' Imprint, 11(6).

  4. Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.

  5. Elga, A. (2013). The puzzle of the unmarked clock and the new rational reflection principle. Philosophical Studies, 164, 127–139.

  6. Fitelson, B., & Jehle, D. (2009). What is the ‘equal weight view’? Episteme, 6(3), 280–293.

  7. Gaifman, H. (1988). A theory of higher order probabilities. In B. Skyrms & W. L. Harper (Eds.), Causation, chance, and credence: Proceedings of the Irvine conference on probability and causation (Vol. 1, pp. 191–220). Dordrecht: Kluwer Academic Publishers.

  8. Gallow, J. D. (msa). Expert deference and news from the future.

  9. Gallow, J. D. (msb). Kinds of experts, forms of deference.

  10. Hall, N. (1994). Correcting the guide to objective chance. Mind, 103(412), 505–517.

  11. Hall, N., & Arntzenius, F. (2003). On what we know about chance. The British Journal for the Philosophy of Science, 54(2), 171–179.

  12. Kelly, T. (2005). The epistemic significance of disagreement. In J. Hawthorne & T. Gendler (Eds.), Oxford studies in epistemology (Vol. 1, pp. 167–196). Oxford: Oxford University Press.

  13. Levinstein, B. A. (2015). With all due respect: The macro-epistemology of disagreement. Philosophers' Imprint, 15(13), 1–20.

  14. Lewis, D. K. (1980). A subjectivist’s guide to objective chance. In R. C. Jeffrey (Ed.), Studies in inductive logic and probability (Vol. II, pp. 263–293). Berkeley: University of California Press.

  15. Lewis, D. K. (1994). Humean supervenience debugged. Mind, 103(412), 473–490.

  16. Schaffer, J. (2003). Principled chances. The British Journal for the Philosophy of Science, 54(1), 27–41.

  17. Shogenji, T. (ms). A conundrum in Bayesian epistemology of disagreement.

  18. Staffel, J. (2015). Disagreement and epistemic utility-based compromise. Journal of Philosophical Logic, 44, 273–286.

  19. van Fraassen, B. C. (1984). Belief and the will. The Journal of Philosophy, 81(5), 235–256.

  20. van Fraassen, B. C. (1995). Belief and the problem of Ulysses and the sirens. Philosophical Studies, 77, 7–37.

  21. Wagner, C. (1985). On the formal properties of weighted averaging as a method of aggregation. Synthese, 62(1), 97–108.

Acknowledgements

Thanks to Michael Caie, Daniel Drucker, Harvey Lederman, and an anonymous reviewer for helpful conversations and feedback.

Author information

Correspondence to J. Dmitri Gallow.

Appendix: Proof of Propositions 1 and 2

Proof

We establish three lemmas, from which the propositions follow immediately. (Note: throughout, I will use ‘C’ indiscriminately for (1) a joint probability density function over the values of \(\mathcal{A}\) and \(\mathcal{B}\), as well as (2) the corresponding marginal densities, and (3) the corresponding probability function. In the event that there are at most finitely many possible values of \(\mathcal{A}\) and \(\mathcal{B}\), ‘C’ will everywhere denote a probability function and integrals may be exchanged for sums throughout.)

Lemma 1

If (2), (3), and (4) hold, then so do (5) and (6).

Proof

Since C is a countably additive, conglomerable probability, for all a,

$$\begin{aligned} C(r \mid \mathcal{A}=a)&= \int _0^1 C(r \mid \mathcal{A}= a, \mathcal{B}= b) \cdot C(\mathcal{B}= b \mid \mathcal{A}= a) \cdot db \\&= \int _0^1 \left( \alpha a + \beta b \right) \cdot C(\mathcal{B}= b \mid \mathcal{A}=a) \cdot db \\&= \alpha a \cdot \int _0^1 C(\mathcal{B}= b \mid \mathcal{A}= a) \cdot db \,\,+\,\, \beta \int _0^1 b \cdot C(\mathcal{B}= b \mid \mathcal{A}= a) \cdot db \\&= \alpha a + \beta \mathbb {E}[\mathcal{B}\mid \mathcal{A}=a ] \end{aligned}$$

Then, because \(C(r \mid \mathcal{A}=a)=a\) and \(\beta = 1-\alpha\), we have (6). Following the same procedure, with ‘\(\mathcal{A}\)’ and ‘\(\mathcal{B}\)’ exchanged throughout, establishes (5).
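As a quick sanity check, the Lemma 1 identity can be verified numerically in the discrete setting, where the integral becomes a sum. The weights and conditional distribution below are hypothetical illustrations, not taken from the paper:

```python
# Discrete sketch of the Lemma 1 identity (hypothetical numbers):
# with C(r | A=a, B=b) = alpha*a + beta*b, averaging over b yields
# C(r | A=a) = alpha*a + beta*E[B | A=a].
alpha, beta = 0.6, 0.4          # averaging weights, alpha + beta = 1
a = 0.3                         # Al's forecast of rain
cond_b = {0.2: 0.5, 0.8: 0.5}   # hypothetical C(B=b | A=a)

# Left side: average C(r | A=a, B=b) over the distribution of B given A=a.
lhs = sum(p * (alpha * a + beta * b) for b, p in cond_b.items())

# Right side: alpha*a plus beta times the conditional expectation of B.
exp_b_given_a = sum(p * b for b, p in cond_b.items())
rhs = alpha * a + beta * exp_b_given_a

assert abs(lhs - rhs) < 1e-12   # the two sides agree
```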

Lemma 2

If (5) and (6) hold, then so does (7).

$$\begin{aligned} \mathbb {E}[\mathcal{A}\mathcal{B}] = \mathbb {E}[\mathcal{A}^2] = \mathbb {E}[\mathcal{B}^2] \end{aligned}$$
(7)

Proof

$$\begin{aligned} \mathbb {E}[\mathcal{A}\mathcal{B}]&= \int _0^1 \int _0^1 a b \cdot C(\mathcal{A}=a, \mathcal{B}=b) \cdot db \cdot da \\&= \int _0^1 a \cdot C(\mathcal{A}=a) \cdot \left[ \int _0^1 b \cdot C(\mathcal{B}= b \mid \mathcal{A}=a) \cdot db \right] \cdot da \\&= \int _0^1 a \cdot C(\mathcal{A}=a) \cdot \mathbb {E}[\mathcal{B}\mid \mathcal{A}=a] \cdot da \\&= \int _0^1 a^2 \cdot C(\mathcal{A}=a) \cdot da \\&= \mathbb {E}[\mathcal{A}^2] \end{aligned}$$

The same procedure, with ‘\(\mathcal{A}\)’ exchanged for ‘\(\mathcal{B}\)’ throughout, establishes that \(\mathbb {E}[\mathcal{A}\mathcal{B}] = \mathbb {E}[\mathcal{B}^2]\).

Lemma 3

If (7) holds, then so does (8).

$$\begin{aligned} \mathbb {E}[(\mathcal{A}- \mathcal{B})^2] = 0 \end{aligned}$$
(8)

Proof

$$\begin{aligned} \mathbb {E}[(\mathcal{A}- \mathcal{B})^2]&= \mathbb {E}[\mathcal{A}^2] - 2 \mathbb {E}[\mathcal{A}\mathcal{B}] + \mathbb {E}[\mathcal{B}^2] = 0 \end{aligned}$$

Since \((\mathcal{A}- \mathcal{B})^2\) is non-negative, its expectation is 0 only if \(C(\mathcal{A}=\mathcal{B}) = 1\), in which case (1) is violated.
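The chain of lemmas can be illustrated with a small discrete sketch. The joint distributions below are hypothetical: one satisfies the conditional-expectation conditions (5) and (6) by putting all its mass on the diagonal, and one that leaves room for disagreement is shown to violate (6):

```python
# Discrete illustration of Lemmas 2-3 (hypothetical numbers).
# If (5) and (6) hold -- E[A | B=b] = b and E[B | A=a] = a -- then
# E[AB] = E[A^2] = E[B^2], so E[(A-B)^2] = 0 and C(A=B) = 1.

def expectations(joint):
    """joint maps forecast pairs (a, b) to probabilities."""
    e_ab = sum(p * a * b for (a, b), p in joint.items())
    e_a2 = sum(p * a * a for (a, b), p in joint.items())
    e_b2 = sum(p * b * b for (a, b), p in joint.items())
    return e_ab, e_a2, e_b2

def exp_b_given_a(joint, a0):
    """Conditional expectation E[B | A = a0]."""
    mass = sum(p for (a, _), p in joint.items() if a == a0)
    return sum(p * b for (a, b), p in joint.items() if a == a0) / mass

# A joint satisfying (5) and (6): all mass on the diagonal A = B.
diag = {(0.3, 0.3): 0.5, (0.7, 0.7): 0.5}
e_ab, e_a2, e_b2 = expectations(diag)
assert abs(e_ab - e_a2) < 1e-12 and abs(e_ab - e_b2) < 1e-12  # (7)
assert abs(e_a2 - 2 * e_ab + e_b2) < 1e-12                    # (8)

# A joint leaving room for disagreement violates (6): here
# E[B | A = 0.3] = 0.5, not 0.3.
off = {(0.3, 0.3): 0.25, (0.3, 0.7): 0.25,
       (0.7, 0.3): 0.25, (0.7, 0.7): 0.25}
assert abs(exp_b_given_a(off, 0.3) - 0.5) < 1e-12
```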

Cite this article

Gallow, J.D. No one can serve two epistemic masters. Philos Stud 175, 2389–2398 (2018). https://doi.org/10.1007/s11098-017-0964-8

Keywords

  • Expert deference
  • Disagreement
  • Linear averaging