Confirmation and the ordinal equivalence thesis

Abstract

According to a widespread but implicit thesis in Bayesian confirmation theory, two confirmation measures are considered equivalent if they are ordinally equivalent—call this the “ordinal equivalence thesis” (OET). I argue that adopting OET has significant costs. First, adopting OET renders one incapable of determining whether a piece of evidence substantially favors one hypothesis over another. Second, OET must be rejected if merely ordinal conclusions are to be drawn from the expected value of a confirmation measure. Furthermore, several arguments and applications of confirmation measures given in the literature already rely on a rejection of OET. I also contrast OET with stronger equivalence theses and show that they do not have the same costs as OET. On the other hand, adopting a thesis stronger than OET has costs of its own, since a rejection of OET ostensibly implies that people’s epistemic states have a very fine-grained quantitative structure. However, I suggest that the normative upshot of the paper in fact has a conditional form, and that other Bayesian norms can also fruitfully be construed as having a similar conditional form.

Notes

  1. From now on I will suppress mention of the background theory.

  2. Equivalently, if and only if \(Pr(H|E) > Pr(H|\lnot {E})\) or if and only if \(Pr(E|H) > Pr(E|\lnot {H})\). Disconfirmation and absence of confirmation (neutrality) can be defined analogously. (A short derivation of these equivalences is sketched after these notes.)

  3. Of course, \(Pr_K\) is assumed to be a probability distribution defined on a Boolean algebra of propositions that includes both H and E.

  4. It is not customary to specify the base of the logarithm.

  5. This measure is also sometimes called the “Joyce–Christensen measure,” after Joyce (1999) and Christensen (1999).

  6. Example: \(Pr(H) = 0.1\), \(Pr(H|E) = 0.9\), \(Pr(H') = 0.01\), \(Pr(H'|E) = 0.5\). Here H is better confirmed than \(H'\) according to d, but \(H'\) is better confirmed than H according to r. (This example is checked numerically in the sketch following these notes.)

  7. Interestingly, the standard measures do correlate fairly well (Tentori et al. 2007).

  8. Numerous conversations with philosophers who work on Bayesian confirmation theory have convinced me that it is standard to regard ordinally equivalent measures as interchangeable in general.

  9. Note: the likelihood ratio is not a Bayesian measure of confirmation. Rather, it is a direct measure of the evidential support that one hypothesis enjoys vis-à-vis another. As Fitelson (2007) points out, the standard Bayesian confirmation measure that agrees with using the likelihood ratio to compare the relative support of two hypotheses is the ratio measure. Thus, implicitly, Royall is setting thresholds for interpreting quantities of the form \(\frac{r(H, E)}{r(H', E)}\). (The connection is spelled out after these notes.)

  10. There are several conditions we could put on D. For example, one reasonable requirement is that confirmation measure scores can be arbitrarily close to each other according to D.

  11. The proof that only linear functions obey (1) is trivial and omitted.

  12. Here is a simple counter-example. Suppose we have the following probabilities: \(p(H_1) = 0.45\), \(p(H_1|E) = 0.6\), \(p(H_1|\lnot {E})=0.2\), \(p(E) = 0.625\), \(p(H_2) = 0.4\), \(p(H_2|E)=0.2\), \(p(H_2|\lnot {E}) = 0.7333\). As can be verified, we have: \(\mathrm {E}{[d(H_1, E)]} = 0 = \mathrm {E}{[d(H_2, E)]}\). However, \(\mathrm {E}{[d(H_1, E)^3]} < \mathrm {E}{[d(H_2, E)^3]}\). Note that this example assumes that \(H_1\) and \(H_2\) are not exhaustive hypotheses; i.e., there must be at least one other hypothesis, \(H_3\), etc., in the partition of hypotheses. (The expectations are verified in a sketch after these notes.)

  13. Indeed, under several reasonable conditions, the class of linear functions is the only class that satisfies (6).

  14. I thank a referee for helpful comments on this paragraph.

  15. Which, of course, is not the only solution. See Rinard (2014) for instance.

  16. For examples of other applications, see Good (1985).

  17. Numerical examples are easy to come up with, but tedious. Note also that if there are many hypotheses, then at least some of the probabilities must be small.

  18. In particular, if the hypothesis space is large, it will generally be the case that \(p(E|\lnot {H}) \approx p(E)\) for most H's, and hence the log-likelihood measure and the log-ratio measure will have numerically similar outputs. Indeed, if the hypothesis space is parameterized by a continuous parameter \(\theta\) taking values in a space \(\Theta\), then, for every \(\theta \in \Theta\), we have \(l(\theta, E) = \log{\frac{Pr(E|\theta)}{Pr(E|\lnot{\theta})}} = \log{\frac{Pr(E|\theta)}{\int_{\Theta^*}Pr(E|\theta')Pr(\theta')d\theta'}}\), where \(\Theta^*\) is \(\Theta\) with \(\theta\) taken out. But removing a single point from the parameter space has no effect on the integral, so \(\int_{\Theta^*}Pr(E|\theta')Pr(\theta')d\theta' = \int_{\Theta}Pr(E|\theta')Pr(\theta')d\theta' = Pr(E)\). Therefore, \(l(\theta, E) = \log{\frac{Pr(E|\theta)}{Pr(E|\lnot{\theta})}} = \log{\frac{Pr(E|\theta)}{Pr(E)}} = lr(\theta, E)\), and so \(l(\theta, E)\) is actually identical to \(lr(\theta, E)\) when the hypothesis space is continuous. As far as I know, this fact has not been noted before. On the other hand, the fact that the Kemeny–Oppenheim measure and the log-likelihood measure are ordinally equivalent means that they will always agree on whether \(c(H, E) > c(H', E)\), but they will often strongly disagree on whether the difference between \(c(H, E)\) and \(c(H', E)\) is small, large, or trivial; in other words, their interval judgments are quite different. (The discrete case is illustrated numerically after these notes.)

  19. Of course, many Bayesians want to argue for this stronger unconditional norm as well.

  20. Of course, philosophers often want to go further; they want to say, for example, that you ought to have the goal of avoiding sure losses or having accurate credences.

  21. Thanks to Branden Fitelson, Malcolm Forster, Elliott Sober, and Mike Titelbaum for reading a draft of this paper. Thanks also to the audience at a presentation of an earlier version at the 2014 Pacific APA, in particular Brad Armendt, Kenny Easwaran, Sam Fletcher, and Greg Gandenberger. Finally, thanks to several anonymous reviewers at Synthese.
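
A short derivation of the equivalences in note 2 may be helpful (a sketch; it assumes \(0 < Pr(E) < 1\) and \(0 < Pr(H) < 1\), and takes confirmation to mean \(Pr(H|E) > Pr(H)\)). By the law of total probability,

\[ Pr(H) = Pr(E)\,Pr(H|E) + Pr(\lnot{E})\,Pr(H|\lnot{E}), \]

so \(Pr(H)\) is a weighted average of \(Pr(H|E)\) and \(Pr(H|\lnot{E})\), and hence \(Pr(H|E) > Pr(H)\) exactly when \(Pr(H|E) > Pr(H|\lnot{E})\). Similarly, by Bayes' theorem, \(Pr(H|E) > Pr(H)\) iff \(Pr(E|H) > Pr(E)\), and since \(Pr(E) = Pr(H)\,Pr(E|H) + Pr(\lnot{H})\,Pr(E|\lnot{H})\) is a weighted average of \(Pr(E|H)\) and \(Pr(E|\lnot{H})\), the latter inequality holds exactly when \(Pr(E|H) > Pr(E|\lnot{H})\).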
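
The ranking reversal in note 6 is easy to check numerically. Here is a minimal sketch in Python; it assumes, as is standard, that d is the difference measure \(Pr(H|E) - Pr(H)\) and r is the ratio measure \(Pr(H|E)/Pr(H)\) (the reversal is unchanged if r is taken in logarithmic form):

    # Check of the example in note 6: H is better confirmed than H' according
    # to the difference measure d, but H' is better confirmed according to r.

    def d(pr_h, pr_h_given_e):
        """Difference measure: Pr(H|E) - Pr(H)."""
        return pr_h_given_e - pr_h

    def r(pr_h, pr_h_given_e):
        """Ratio measure: Pr(H|E) / Pr(H)."""
        return pr_h_given_e / pr_h

    # H:  Pr(H)  = 0.1,  Pr(H|E)  = 0.9
    # H': Pr(H') = 0.01, Pr(H'|E) = 0.5
    print(d(0.1, 0.9), d(0.01, 0.5))   # approx. 0.8 vs 0.49 -> d ranks H above H'
    print(r(0.1, 0.9), r(0.01, 0.5))   # approx. 9.0 vs 50.0 -> r ranks H' above H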
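
The connection noted in note 9 can be made explicit. Writing the ratio measure in its standard non-logarithmic form, \(r(H, E) = \frac{Pr(H|E)}{Pr(H)}\), Bayes' theorem gives \(r(H, E) = \frac{Pr(E|H)}{Pr(E)}\), and therefore

\[ \frac{r(H, E)}{r(H', E)} = \frac{Pr(E|H)/Pr(E)}{Pr(E|H')/Pr(E)} = \frac{Pr(E|H)}{Pr(E|H')}, \]

which is precisely the likelihood ratio to which Royall's thresholds are applied.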
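
The expectations in note 12 can be verified directly. Here is a minimal sketch, assuming (as the note's setup suggests) that the expectation is taken over the two-cell evidence partition \(\{E, \lnot{E}\}\):

    # Verification of note 12: E[d(H1, E)] = E[d(H2, E)] = 0, while the
    # expectations of the cubed (ordinally equivalent) measure differ.
    p_e = 0.625

    # (p(H), p(H|E), p(H|not-E)) for H1 and H2, as given in note 12
    h1 = (0.45, 0.6, 0.2)
    h2 = (0.4, 0.2, 0.7333)

    def expected_d(h, power=1):
        """Expectation of d(H, .)**power over {E, not-E}, where d(H, X) = p(H|X) - p(H)."""
        prior, post_e, post_not_e = h
        return p_e * (post_e - prior) ** power + (1 - p_e) * (post_not_e - prior) ** power

    print(expected_d(h1), expected_d(h2))        # both approx. 0 (0.7333 is rounded)
    print(expected_d(h1, 3), expected_d(h2, 3))  # approx. -0.0037 < 0.0089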
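
Finally, the discrete-space approximation at the start of note 18 can be illustrated numerically. In the sketch below, the partition size, uniform priors, and random likelihoods are illustrative assumptions of mine rather than values from the paper; the point is only that when every prior is small, \(Pr(E|\lnot{H_i})\) differs from \(Pr(E)\) by a negligible amount, so the log-likelihood and log-ratio measures nearly coincide:

    import math
    import random

    # Illustration of note 18 for a large *discrete* hypothesis space:
    # 1000 hypotheses with equal priors and randomly chosen likelihoods.
    random.seed(0)
    n = 1000
    priors = [1 / n] * n
    likelihoods = [random.random() for _ in range(n)]   # Pr(E | H_i)

    p_e = sum(p * lik for p, lik in zip(priors, likelihoods))   # Pr(E)

    def log_likelihood_measure(i):
        """l(H_i, E) = log( Pr(E|H_i) / Pr(E|not-H_i) )."""
        p_e_given_not_h = (p_e - priors[i] * likelihoods[i]) / (1 - priors[i])
        return math.log(likelihoods[i] / p_e_given_not_h)

    def log_ratio_measure(i):
        """lr(H_i, E) = log( Pr(E|H_i) / Pr(E) )."""
        return math.log(likelihoods[i] / p_e)

    # With priors of 0.001, Pr(E|not-H_i) is very close to Pr(E), so the two
    # measures agree to several decimal places for each hypothesis.
    for i in range(3):
        print(log_likelihood_measure(i), log_ratio_measure(i))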

References

  • Bernardo, J. M. (1979). Reference posterior distributions for Bayesian inference. Journal of the Royal Statistical Society. Series B (Methodological), 41(2), 113–147.

  • Brössel, P., & Huber, F. (2014). Bayesian confirmation: A means with no end. The British Journal for the Philosophy of Science, 66, 737.

  • Carnap, R. (1962). Logical foundations of probability (2nd ed.). Chicago: University of Chicago Press.

  • Christensen, D. (1999). Measuring confirmation. Journal of Philosophy, XCVI, 437–461.

  • Crupi, V., & Tentori, K. (2014). Measuring information and confirmation. Studies in the History and Philosophy of Science Part A, 47, 81–90.

  • Easwaran, K. (2016). Dr. Truthlove or: How I learned to stop worrying and love Bayesian probabilities. Nous, 50(4), 816–853.

  • Fitelson, B. (1999). The plurality of Bayesian measures of confirmation and the problem of measure sensitivity. Philosophy of Science, 66, S362–S378.

  • Fitelson, B. (2007). Likelihoodism, Bayesianism, and relational confirmation. Synthese, 156, 473–489.

  • Fitelson, B., & Hawthorne, J. (2010). How Bayesian confirmation theory handles the paradox of the ravens. In E. Eells & J. Hawthorne (Eds.), The place of probability in science. Boston Studies in the Philosophy of Science (Vol. 284, pp. 247–275). Dordrecht: Springer.

  • Gillies, D. (1986). In defense of the Popper–Miller argument. Philosophy of Science, 53, 110–113.

  • Good, I. J. (1985). Weight of evidence: A brief survey. In J. M. Bernardo, M. H. DeGroot, D. V. Lindley, & A. F. M. Smith (Eds.), Bayesian statistics 2 (pp. 249–270). Amsterdam: Elsevier.

  • Joyce, J. (1999). The foundations of causal decision theory. Cambridge: Cambridge University Press.

  • Kemeny, J. G., & Oppenheim, P. (1952). Degree of factual support. Philosophy of Science, 19(4), 307–324.

  • Kullback, S., & Leibler, R. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22(1), 79–86.

  • Myrvold, W. (2003). A Bayesian account of the virtue of unification. Philosophy of Science, 70(2), 399–423.

  • Myrvold, W. (2016). On the evidential import of unification. Unpublished manuscript.

  • Popper, K., & Miller, D. (1983). The impossibility of inductive probability. Nature, 302, 687–688.

  • Redhead, M. (1985). On the impossibility of inductive probability. The British Journal for the Philosophy of Science, 36(2), 185–191.

  • Rinard, S. (2014). A new Bayesian solution to the paradox of the ravens. Philosophy of Science, 81(1), 81–100.

  • Royall, R. (1997). Statistical evidence: A likelihood paradigm. Boca Raton: Chapman and Hall/CRC.

  • Schlesinger, G. (1995). Measuring degrees of confirmation. Analysis, 55, 208–212.

  • Shogenji, T. (2012). The degree of epistemic justification and the conjunction fallacy. Synthese, 184(1), 29–48.

  • Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103(2684), 577–580.

  • Tentori, K., Crupi, V., & Osherson, D. (2007). Comparison of confirmation measures. Cognition, 103, 107–119.

  • Vassend, O. B. (2015). Confirmation measures and sensitivity. Philosophy of Science, 82(5), 892–904.

  • Vranas, P. (2004). Hempel’s raven paradox: A lacuna in the standard Bayesian solution. The British Journal for the Philosophy of Science, 42, 393–401.

  • Zalabardo, J. (2009). An argument for the likelihood ratio measure of confirmation. Analysis, 69, 630–635.

Cite this article

Vassend, O.B. Confirmation and the ordinal equivalence thesis. Synthese 196, 1079–1095 (2019). https://doi.org/10.1007/s11229-017-1500-2
