Confirmation measures and collaborative belief updating

Abstract

Several candidate measures have been thought to capture the degree to which evidence incrementally confirms a hypothesis. This paper provides an argument for one of them—the log-likelihood ratio measure. For this purpose, I will suggest a plausible requirement that I call the Requirement of Collaboration, and it will then be shown that, of the various candidates, only the log-likelihood ratio measure \(l\) satisfies this requirement. Using this result, Jeffrey conditionalization will be reformulated so as to make explicit what determines new credences after experience.


Notes

  1. In scientific contexts, the notion of confirmation is often thought of as objective. What is considered here, on the other hand, is the rational way to update our subjective probabilities, i.e. our credences. Thus, some readers may think that confirmation theory, which concerns the objective relation between evidence and hypotheses, has little to do with updating our credences. Admittedly, I do not think that all confirmation theory has to do is suggest a rule governing our belief-updating behavior. Nevertheless, it is very natural that the degree to which evidence objectively confirms a hypothesis should significantly influence our credence in that hypothesis. [Some discussions of the relation between credences and confirmation can be found in Hawthorne (2005), Huber (2005), Lange (1999) and Skyrms (1986).] So, it is legitimate to use the notion of confirmation in order to explicate the elements that determine our new subjective probabilities after experience. Thanks to an anonymous referee for informing me of this point.

  2. Regarding the assumption that the degree of confirmation is relative to a particular credence function, I should mention two things. First, we should note that the credence functions in this paper are assumed to convey the totality of what is known to the relevant agents. Admittedly, this assumption, in the context of confirmation theory, leads Bayesians to the old evidence problem. However, that problem has little to do with the discussion that follows, because we will consider only cases in which our old credence in a relevant proposition is less than 1. Second, it is also noteworthy that the assumption that the degree of confirmation is relative to a particular credence function does not imply that the notion of confirmation is entirely subjective. I do not rule out the possibility that the credence function in question is regarded as objective in the sense that all agents have the same credence function if they have the same background knowledge. The following discussion does not trade on the distinction between subjective and objective confirmation. For some discussions of subjective and objective confirmation and probability, see Maher (1996) and Hawthorne (2005). I will briefly revisit this point in note 6.

  3. The measures in Table 1 are collected from several papers—for example, Crupi et al. (2006), Fitelson (1999), Kyburg (1964), and so forth. Note that ‘\(O(X)\)’ and ‘\(O(X|Y)\)’ respectively refer to the odds of \(X\) and the conditional odds of \(X\) given \(Y\). An example showing that some measures in Table 1 are not ordinally equivalent to each other can be found in Crupi et al. (2006, n. 2). Here we should pay attention to two things. The first is that the original versions of \(r\), \(l\), \(o\) and \(g\) are not logarithmic functions. That is, their original versions are just ratio measures—for example, the original version of \(r\) is defined as follows: \({r*}_{P}(X,E)=P(X|E)/P(X)\). Note that \(r*\) is ordinally equivalent to \(r\). (The base of the logarithm is \(e\). Indeed, the logarithm in question does not have to be the natural logarithm; any logarithm whose base is greater than 1 will do for the ratio confirmation measures.) We should also note that \(r\) satisfies the following condition, but \(r*\) does not: \(C_{P}(H,E)>0\) when \(P(H|E)>P(H)\); \(C_{P}(H,E)=0\) when \(P(H|E)=P(H)\); \(C_{P}(H,E)<0\) when \(P(H|E)<P(H)\). This is regarded as a necessary condition for being a Bayesian confirmation measure. So, it is not \(r*\) but \(r\) that is regarded as a candidate confirmation measure. The same goes for \(l\), \(o\) and \(g\). Secondly, we need to note that there are some measures that are not in Table 1:

    \(g'_{P}(X,E)=1-P(\lnot X)/P(\lnot X|E)\)    Rips (2001)

    \(r'_{P}(X,E)=P(X|E)/P(X)-1\)   Finch (1960)

    \(l'_{P}(X,E)=[P(E|X)-P(E|\lnot X)]/[P(E|X)+P(E|\lnot X)]\)   Kemeny and Oppenheim (1952)

    \(le_{P}(X,E)=P(X|E)P(\lnot X)-P(\lnot X|E)P(X)\)   Levi (1962–1963)

    The measures \(g'\), \(r'\), \(l'\) and \(le\) are ordinally equivalent to \(g\), \(r\), \(l\) and \(d\), respectively. Thus, we don’t have to consider these measures in what follows.
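
    For concreteness, here is a minimal numerical sketch (Python; the joint credences and variable names are merely illustrative, not drawn from the paper) of several of the measures just mentioned. It illustrates that \(r\) and \(r*\) agree in their orderings while only \(r\) satisfies the sign condition above, and that \(l'\) and \(le\) agree in sign with \(l\) and \(d\), respectively.

```python
import math

# A toy joint credence function over {H, ~H} x {E, ~E} (illustrative numbers only).
P = {('H', 'E'): 0.20, ('H', '~E'): 0.10,
     ('~H', 'E'): 0.15, ('~H', '~E'): 0.55}

def p(h=None, e=None):
    """Marginal or joint credence under P."""
    return sum(v for (hh, ee), v in P.items()
               if (h is None or hh == h) and (e is None or ee == e))

def cond(h, e):
    """P(h | e)."""
    return p(h, e) / p(e=e)

def lik(e, h):
    """P(e | h)."""
    return p(h, e) / p(h=h)

# Ratio measures and their logarithmic versions (see the definitions above).
r_star = cond('H', 'E') / p(h='H')                      # r*(H,E) = P(H|E)/P(H)
r = math.log(r_star)                                    # r(H,E) = ln r*(H,E)
l = math.log(lik('E', 'H') / lik('E', '~H'))            # log-likelihood ratio measure
l_prime = ((lik('E', 'H') - lik('E', '~H')) /
           (lik('E', 'H') + lik('E', '~H')))            # Kemeny and Oppenheim (1952)
d = cond('H', 'E') - p(h='H')                           # difference measure
le = cond('H', 'E') * p(h='~H') - cond('~H', 'E') * p(h='H')   # Levi (1962-1963)

# E confirms H here, so r > 0 while r* > 1: only r meets the sign condition.
print(r, r_star)
print(l, l_prime)   # same sign: l' is ordinally equivalent to l
print(d, le)        # same sign: le is ordinally equivalent to d
```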

  4. There are some differences between JC and JC*. Note that when \(Q(E)\ne 1\), JC is equivalent to JC*. However, when \(Q(E)=1\), this equivalence does not hold, because \(\alpha ^{E}\) is undefined when \(Q(E)=1\). Here we should pay attention to a simple version of JC, i.e. Bayesian conditionalization (BC): When experience makes an agent certain of a proposition \(E\) and nothing else, \(Q(X)=P(X|E)\) for all \(X\). Considering the above difference between JC and JC*, we should say that BC is a special case of JC, but a limiting case of JC*. As the credence in \(E\) approaches 1, \(\alpha ^{E}\) approaches \(+\infty \), and so \(Q(X)\) in JC* approaches \(P(X|E)\). See Field (1978, p. 365). In addition to this mathematical difference between JC and JC*, there is also an epistemological difference between them. Someone may say that the observational inputs of JC are assumed to be the new credences, whereas the inputs of JC* are assumed to be the Bayes factors. If so, it should be said, in light of the previous consideration, that the inputs of JC represent both the old credences and the observation itself, whereas the inputs of JC* represent only the observation itself, with the old credences factored out.
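
    The limiting behaviour just described can be checked with a small numerical sketch (Python; the credences are merely illustrative). It applies JC* on \(\{E,\lnot E\}\) in the form used later in this paper and shows \(Q(X)\) approaching \(P(X|E)\) as \(Q(E)\) approaches 1.

```python
# Illustrative old credences over the partition {EX, E~X, ~EX, ~E~X}.
P_EX, P_EnX, P_nEX, P_nEnX = 0.24, 0.16, 0.06, 0.54
P_E, P_X = P_EX + P_EnX, P_EX + P_nEX

def jc_star(q_e):
    """JC* on {E, ~E}: Q(X) = [a*P(EX) + P(~EX)] / [a*P(E) + P(~E)],
    where a is the Bayes factor of E against ~E determined by the new credence q_e."""
    a = (q_e / (1 - q_e)) / (P_E / (1 - P_E))
    return (a * P_EX + P_nEX) / (a * P_E + (1 - P_E))

print(P_EX / P_E)                      # P(X|E) = 0.6
for q_e in (0.9, 0.99, 0.999, 0.999999):
    print(q_e, jc_star(q_e))           # tends to P(X|E) as Q(E) -> 1
```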

  5. I borrow ‘collaborative’ from Richard Jeffrey. For some discussions about this collaborative belief update, see Hendrickson and Jeffrey (1988) and Jeffrey (1992, 2002a, b, 2004).

  6. In order that the Requirement of Collaboration is not vacuous, it should be epistemically possible for two agents' confirmation measures to differ from each other. This possibility is virtually guaranteed by the subjectivist notion of confirmation. Thus, some readers may think that the plausibility of the requirement depends on the subjectivist notion of confirmation. However, an objective confirmation theory need not contend that two agents' confirmation measures are always the same. Suppose that the degree of confirmation is objective in the sense that all agents have the same confirmation measure when they have the same background knowledge. Then it is natural that, when two agents have different background knowledge, their objective degrees of confirmation could differ from each other. That is, the objectivity of confirmation does not entail that \(C_{1}(H,E)=C_{2}(H,E)\) and \(C_{1}(H,\lnot E)=C_{2}(H,\lnot E)\). Thus, the Requirement of Collaboration maintains its plausibility whether the notion of confirmation is subjective or objective. For this reason, it can be said that my discussion does not depend on the distinction between subjective and objective confirmation. I owe a debt of gratitude to an anonymous referee for helping me make this point clear.

  7. Note that the above argument concerns only the cases where one agent collaboratively updates her credences using the other agent's uncertain evidence. Interestingly, the same result is also obtained when one agent's credences are updated with the other agent's certain evidence. As mentioned in note 4, Bayesian conditionalization is a limiting case of JC*. Likewise, collaborative belief updating with certain evidence can be regarded as a limiting case of \(Q_{direct}\) and \(Q_{indirect}\). Suppose that experience makes \(S_{2}\) certain of \(E\) and nothing else. With the help of (7) and the probability calculus, then, we obtain that:

    $$\begin{aligned} Q_{direct}(H)&= \lim _{\alpha _{2}^{E}\rightarrow +\infty }\frac{\alpha _{2}^{E}P_{1}(\textit{EH})+P_{1}(\lnot \textit{EH})}{\alpha _{2}^{E}P_{1}(E)+P_{1}(\lnot E)}=P_{1}(H|E)\\&= \frac{P_{1}(E|H)P_{1}(H)}{P_{1}(E|H)P_{1}(H)+P_{1}(E|\lnot H)P_{1}(\lnot H)}. \end{aligned}$$

    On the other hand, it should be noted that:

    $$\begin{aligned} \lim _{\alpha _{2}^{E}\rightarrow +\infty }\alpha _{2}^{H}=\lim _{\alpha _{2}^{E}\rightarrow +\infty }\frac{Q_{2}(H)/Q_{2}(\lnot H)}{P_{2}(H)/P_{2}(\lnot H)}=\lim _{\alpha _{2}^{E}\rightarrow +\infty }\frac{\alpha _{2}^{E}P_{2}(E|H)+P_{2}(\lnot E|H)}{\alpha _{2}^{E}P_{2}(E|\lnot H)+P_{2}(\lnot E|\lnot H)}=\frac{P_{2}(E|H)}{P_{2}(E|\lnot H)}. \end{aligned}$$

    Here we obtain \(Q_{2}\) from \(P_{2}\) using JC* on \(\{E,\lnot E\}\). From (8), then, it follows that:

    $$\begin{aligned} Q_{indirect}(H)&= \lim _{\alpha _{2}^{E}\rightarrow +\infty }\frac{\alpha _{2}^{H}P_{1}(H)}{\alpha _{2}^{H}P_{1}(H)+(1-P_{1}(H))}=\frac{P_{2}(E|H)P_{1}(H)}{P_{2}(E|H)P_{1}(H)+P_{2}(E|\lnot H)P_{1}(\lnot H)}. \end{aligned}$$

    Now we immediately obtain (A) using these equations. Thus, we can conclude that even when \(S_{1}\) updates collaboratively her credence in \(H\) using \(S_{2}\)’s certain evidence \(E\), all but \(l\) of the candidates listed in Table 1 fail to satisfy the Requirement of Collaboration.
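
    These limits can also be verified numerically. The following sketch (Python; both credence functions are merely illustrative) computes \(Q_{direct}\) and \(Q_{indirect}\) from (7) and (8) for increasingly large values of \(\alpha _{2}^{E}\) and compares them with the limiting expressions above.

```python
# Illustrative old credences for S1 and S2 over {EH, E~H, ~EH, ~E~H}.
P1 = {'EH': 0.20, 'EnH': 0.10, 'nEH': 0.15, 'nEnH': 0.55}
P2 = {'EH': 0.25, 'EnH': 0.05, 'nEH': 0.10, 'nEnH': 0.60}

def q_direct(a):
    """(7): S1 adopts S2's Bayes factor of E against ~E."""
    return (a * P1['EH'] + P1['nEH']) / (a * (P1['EH'] + P1['EnH'])
                                         + P1['nEH'] + P1['nEnH'])

def q_indirect(a):
    """(8): S1 adopts the Bayes factor of H against ~H generated by S2's JC* update."""
    p2_e_h = P2['EH'] / (P2['EH'] + P2['nEH'])         # P2(E|H)
    p2_e_nh = P2['EnH'] / (P2['EnH'] + P2['nEnH'])     # P2(E|~H)
    a_h = (a * p2_e_h + (1 - p2_e_h)) / (a * p2_e_nh + (1 - p2_e_nh))
    p1_h = P1['EH'] + P1['nEH']
    return a_h * p1_h / (a_h * p1_h + (1 - p1_h))

for a in (10.0, 1e3, 1e6):
    print(a, q_direct(a), q_indirect(a))
# q_direct tends to P1(H|E); q_indirect tends to
# P2(E|H)P1(H) / [P2(E|H)P1(H) + P2(E|~H)P1(~H)].
```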

  8. Suppose that an agent gets to be certain of a proposition \(E\). According to Bayesian conditionalization, it holds that for all \(X\), \(Q(X)=P(X|E)\) and \(Q(\lnot X)=P(\lnot X|E)\). (\(P\) and \(Q\) are the agent's old and new credence functions, respectively.) From the probability calculus, thus, it follows that \(\frac{P(E|X)}{P(E|\lnot X)}=\frac{P(X|E)/P(\lnot X|E)}{P(X)/P(\lnot X)}=\frac{Q(X)/Q(\lnot X)}{P(X)/P(\lnot X)}=\beta _{P,Q}(X,\lnot X)\). Note that \(l_{P}(X,E)=\ln \left( \frac{P(E|X)}{P(E|\lnot X)}\right) \). Thus, we have that \(\beta _{P,Q}(X,\lnot X)=e^{l_{P}(X,E)}\).
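
    A quick numerical check of this identity (Python; illustrative credences): conditionalize on \(E\) and compare the resulting Bayes factor with \(e^{l_{P}(X,E)}\).

```python
import math

# Illustrative old joint credences over {XE, X~E, ~XE, ~X~E}.
P_XE, P_XnE, P_nXE, P_nXnE = 0.18, 0.22, 0.12, 0.48
P_X, P_E = P_XE + P_XnE, P_XE + P_nXE

Q_X = P_XE / P_E                                     # Bayesian conditionalization on E

beta = (Q_X / (1 - Q_X)) / (P_X / (1 - P_X))         # Bayes factor of X against ~X
l = math.log((P_XE / P_X) / (P_nXE / (1 - P_X)))     # l_P(X,E) = ln[P(E|X)/P(E|~X)]

print(beta, math.exp(l))                             # the two values coincide
```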

  9. Wagner (2003, pp. 360–362) suggests three criteria of adequacy for the observational parameters that represent the impacts of observation on our credence function:

    I. The representation should ensure satisfaction of the Commutativity Principle.

    II. The representation should ensure that learning identical to that prompting a probability kinematical revision should prompt a probability kinematical revision on the same partition.

    III. The representation of new learning should not unduly restrict the set of priors amenable to revision in response to such learning.

    Bayes factors satisfy all of the criteria, whereas \(\delta \) fails to satisfy II and III, and \(\pi \) (and \(D\)) fails to satisfy III. Interestingly, an anonymous referee points out that when the new credence in \(X\) is regarded as the observational parameter of \(X\), the argument for the log-likelihood ratio measure cannot go through. In response to this comment, it should be noted first that the new credences fail to satisfy I and II. Consider two possible sequential revisions of an agent's credence in (an atomic proposition) \(A\). In the first, her credence in \(A\) is revised to 0.3, which is then revised to 0.8. In the second, the order in which the new credences are incorporated into the agent's belief system is reversed—that is, the credence is revised to 0.8, which is then revised to 0.3. Note that the agent's final credence in \(A\) differs between the two sequences. Thus, when the new credences are regarded as the observational parameters, the Commutativity Principle does not hold. We can provide a similar example showing that the new credences fail to satisfy II. Therefore, if Wagner's criteria are plausible, then the new credences cannot be regarded as observational parameters. However, it should be emphasized here that Wagner's discussion concerns only the cases in which our new credences are less than 1, and that when experience makes an agent certain of a proposition \(E\), the Bayes factor of \(E\) against \(\lnot E\) is undefined since the new credence in \(\lnot E\) is zero. Thus, some readers may think that my argument in this paper must fail where one agent's credences are updated with the other agent's certain evidence. Fortunately, there is a proper way to respond to this worry. As shown in note 7, collaborative belief updating with certain evidence can be successfully modeled using the Bayes factor and JC. Moreover, the result so obtained shows that all but \(l\) of the candidates listed in Table 1 fail to satisfy the Requirement of Collaboration.
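
    The commutativity point can be made vivid with a small sketch (Python; the numbers are merely illustrative). When the inputs are new credences, the later input simply overrides the earlier one, so the order matters; when the inputs are Bayes factors and the revisions are made by JC* on \(\{A,\lnot A\}\), the two orders yield the same final credence.

```python
def revise_with_credence(p_a, new_a):
    """'New credence' as input: the revision simply installs the new value."""
    return new_a

def revise_with_bayes_factor(p_a, beta):
    """JC* on {A, ~A}: the new odds on A are beta times the old odds."""
    return beta * p_a / (beta * p_a + (1 - p_a))

p0 = 0.5

# New credences as inputs: the final credence depends on the order (0.8 vs 0.3).
print(revise_with_credence(revise_with_credence(p0, 0.3), 0.8))
print(revise_with_credence(revise_with_credence(p0, 0.8), 0.3))

# Bayes factors as inputs: successive JC* revisions commute.
b1, b2 = 0.4, 5.0
print(revise_with_bayes_factor(revise_with_bayes_factor(p0, b1), b2))
print(revise_with_bayes_factor(revise_with_bayes_factor(p0, b2), b1))   # same value
```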

  10. In order to avoid notational confusion, let \(\bar{\alpha }^{X}\) be the observational parameter of \(X\) defined as a Probability factor. Then, we have that \(\alpha ^{X}=\beta _{P,Q}(X,\lnot X)=\pi _{P,Q}(X)/\pi _{P,Q}(\lnot X)=\bar{\alpha }^{X}/\bar{\alpha }^{\lnot X}\). So, if I regard my colleague's Probability factors of \(X\) and of \(\lnot X\) as my own factors, I should also regard my colleague's Bayes factor of \(X\) against \(\lnot X\) as my own factor. According to the results in Jeffrey (2002a, p. 9) and some relevant considerations in this paper, then, we have that:

    $$\begin{aligned} Q_{direct}(H)=\frac{\bar{\alpha }_{2}^{E}P_{1}(\textit{EH})+\bar{\alpha }_{2}^{\lnot E}P_{1}(\lnot \textit{EH})}{\bar{\alpha }_{2}^{E}P_{1}(E)+\bar{\alpha }_{2}^{\lnot E}P_{1}(\lnot E)};\quad Q_{indirect}(H)=\frac{\bar{\alpha }_{2}^{H}P_{1}(H)}{\bar{\alpha }_{2}^{H}P_{1}(H)+\bar{\alpha }_{2}^{\lnot H}(1-P_{1}(H))}. \end{aligned}$$

    For any \(X\), \(\bar{\alpha }_{2}^{X}\) refers to \(S_{2}\)’s probability factor of \(X\). Note that these equations imply (7) and (8), respectively. Thus, we can conclude that only \(l\) of the candidates in Table 1 satisfies the Requirement of Collaboration even if the observational factors are defined as Probability factors.

References

  • Carnap, R. (1962). Logical foundations of probability. Chicago: University of Chicago Press.

  • Christensen, D. (1999). Measuring confirmation. Journal of Philosophy, 96, 437–461.

  • Cornfield, J. (1951). A method for estimating comparative rates from clinical data. Applications to cancer of the lung, breast, and cervix. Journal of the National Cancer Institute, 11, 1269–1275.

  • Crupi, V., Tentori, K., & Gonzalez, M. (2006). On Bayesian measures of evidential support: Theoretical and empirical issues. Philosophy of Science, 74, 229–252.

  • Eells, E., & Fitelson, B. (2002). Symmetries and asymmetries in evidential support. Philosophical Studies, 107, 129–142.

  • Field, H. (1978). A note on Jeffrey conditionalization. Philosophy of Science, 45, 361–367.

  • Finch, H. A. (1960). Confirming power of observations metricized for decisions among hypotheses. Philosophy of Science, 27, 293–307, 391–404.

  • Fitelson, B. (1999). The plurality of Bayesian measures of confirmation and the problem of measure sensitivity. Philosophy of Science, 66, S362–S378.

  • Gaifman, H. (1979). Subjective probability, natural predicates and Hempel's Ravens. Erkenntnis, 21, 105–147.

  • Good, I. J. (1950). Probability and the weighing of evidence. London: Griffin.

  • Hawthorne, J. (2005). Degree-of-belief and degree-of-support: Why Bayesians need both notions. Mind, 114, 277–320.

  • Hendrickson, M., & Jeffrey, R. (1988). Probabilizing pathology. Proceedings of the Aristotelian Society, 89, 211–225.

  • Huber, F. (2005). Subjective probabilities as basis for scientific reasoning? British Journal for the Philosophy of Science, 56, 101–116.

  • Jeffrey, R. (1992a). Probability and the art of judgment. Cambridge: Cambridge University Press.

  • Jeffrey, R. (1992b). Radical probabilism (Prospectus for a User's manual). Philosophical Issues, 2, 193–204.

  • Jeffrey, R. (2002a). Epistemology probabilized. Technologies for Constructing Intelligent Systems, 3–16.

  • Jeffrey, R. (2002b). Petrus Hispanus Lectures: I. After Logical Empiricism; II. Radical Probabilism. Actas da Sociedade Portuguesa da Filosofia.

  • Jeffrey, R. (2004). Subjective probability: The real thing. Cambridge: Cambridge University Press.

  • Joyce, J. (2003). Bayes's Theorem. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Stanford, CA: Stanford University Press. http://plato.stanford.edu/archives/sum2004/entries/bayes-theorem/.

  • Kemeny, J., & Oppenheim, P. (1952). Degrees of factual support. Philosophy of Science, 19, 307–324.

  • Keynes, J. (1921). A treatise on probability. London: Macmillan.

  • Kyburg, H. E. (1964). Recent work in inductive logic. American Philosophical Quarterly, 1, 249–287.

  • Lange, M. (1999). Calibration and the epistemological role of Bayesian conditionalization. Journal of Philosophy, 75, 730–737.

  • Levi, I. (1962–1963). Corroboration and rules of acceptance. British Journal for the Philosophy of Science, 13, 307–313.

  • Maher, P. (1996). Subjective and objective confirmation. Philosophy of Science, 63, 149–174.

  • Mortimer, H. (1988). The logic of induction. Paramus, NJ: Prentice Hall.

  • Nozick, R. (1981). Philosophical explanations. Oxford: Clarendon.

  • Popper, K. (1954–1955). Degree of confirmation. British Journal for the Philosophy of Science, 5, 143–149.

  • Rescher, N. (1958). A theory of evidence. Philosophy of Science, 25, 83–94.

  • Rips, L. J. (2001). Two kinds of reasoning. Psychological Science, 12, 129–134.

  • Schupbach, J. N., & Sprenger, J. (2011). The logic of explanatory power. Philosophy of Science, 78, 105–127.

  • Skyrms, B. (1986). Choice and chance (3rd ed.). Belmont, CA: Wadsworth.

  • Wagner, C. (2002). Probability kinematics and commutativity. Philosophy of Science, 69, 266–278.

  • Wagner, C. (2003). Commuting probability revisions: The uniformity rule. Erkenntnis, 59, 349–364.


Acknowledgments

An earlier version of this paper was presented at the 2013 Annual Conference of the Korean Society for the Philosophy of Science in Seoul. I am grateful to the participants of the conference. In particular, I would like to thank Insok Ko, Jungwon Lee, Youngeui Rhee and Yeongseo Yeo. Special thanks are due to Inkyo Chung, Alan Hájek, Jaemin Jung, Namjoong Kim, Hye Eun Lee and Il Kwon Lee for their helpful comments and suggestions. Of course, I am very grateful to several anonymous reviewers for their invaluable feedback.

Author information

Correspondence to Ilho Park.

Appendices

Appendix 1

For the discussion that follows, we need to transform (7) and (8) into more convenient forms. Let's consider (7) first. With the help of the probability calculus, we have that:

$$\begin{aligned} \begin{array}{ll} P_{1}(E)=P_{1}({ EH })+P_{1}(E\lnot H); &{} P_{1}({ EH })=P_{1}(E|H)P_{1}(H);\\ P_{1}(\lnot E)=P_{1}(\lnot { EH })+P_{1}(\lnot E\lnot H); &{} P_{1}(\lnot { EH })=P_{1}(\lnot E|H)P_{1}(H). \end{array} \end{aligned}$$

Then, (7) can be transformed as follows:

$$\begin{aligned} Q_{direct}(H)=\frac{[\alpha _{2}^{E}P_{1}(E|H)+P_{1}(\lnot E|H)]P_{1}(H)}{[\alpha _{2}^{E}P_{1}(E|H)+P_{1}(\lnot E|H)]P_{1}(H)+[\alpha _{2}^{E}P_{1}(E|\lnot H)+P_{1}(\lnot E|\lnot H)]P_{1}(\lnot H)}. \end{aligned}$$
(7*)

On the other hand, when \(S_{2}\) updates her credences using JC* on \(\{E,\lnot E\}\), then it holds that:

$$\begin{aligned} Q_{2}(H)=\frac{\alpha _{2}^{E}P_{2}({ HE })+P_{2}(H\lnot E)}{\alpha _{2}^{E}P_{2}(E)+P_{2}(\lnot E)}\,\,\mathrm{and}\,\, Q_{2}(\lnot H)=\frac{\alpha _{2}^{E}P_{2}(\lnot { HE })+P_{2}(\lnot H\lnot E)}{\alpha _{2}^{E}P_{2}(E)+P_{2}(\lnot E)}. \end{aligned}$$

Note also that \(\alpha _{2}^{H}=[Q_{2}(H)/Q_{2}(\lnot H)]/[P_{2}(H)/P_{2}(\lnot H)]\). Thus, we have that:

$$\begin{aligned} \alpha _{2}^{H}&=\frac{[\alpha _{2}^{E}P_{2}({ HE })+P_{2}(H\lnot E)]/[\alpha _{2}^{E}P_{2}(\lnot { HE })+P_{2}(\lnot H\lnot E)]}{P_{2}(H)/P_{2}(\lnot H)}\\&=\frac{\alpha _{2}^{E}P_{2}(E|H)+P_{2}(\lnot E|H)}{\alpha _{2}^{E}P_{2}(E|\lnot H)+P_{2}(\lnot E|\lnot H)}. \end{aligned}$$

Then, (8) is transformed as follows:

$$\begin{aligned} Q_{indirect}(H)=\frac{[\alpha _{2}^{E}P_{2}(E|H)+P_{2}(\lnot E|H)]P_{1}(H)}{[\alpha _{2}^{E}P_{2}(E|H)+P_{2}(\lnot E|H)]P_{1}(H)+[\alpha _{2}^{E}P_{2}(E|\lnot H)+P_{2}(\lnot E|\lnot H)]P_{1}(\lnot H)}. \end{aligned}$$
(8*)

Now, let’s suppose that \(E\) is probabilistically independent of \(H\) relative to \(P_{1}\) as well as to \(P_{2}\). That is, it holds that:

$$\begin{aligned} \begin{array}{ll} P_{1}(E|H)=P_{1}(E|\lnot H)=P_{1}(E); &{} P_{1}(\lnot E|H)=P_{1}(\lnot E|\lnot H)=P_{1}(\lnot E);\\ P_{2}(E|H)=P_{2}(E|\lnot H)=P_{2}(E); &{} P_{2}(\lnot E|H)=P_{2}(\lnot E|\lnot H)=P_{2}(\lnot E). \end{array} \end{aligned}$$

From (7*) and (8*), thus, it follows that \(Q_{direct}(H)=Q_{indirect}(H)=P_{1}(H)\).
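
The transformation can be checked numerically. The following sketch (Python; the credences and the Bayes factor are merely illustrative) confirms that (7) and (7*) return the same value, and that the update leaves the credence in \(H\) unchanged when \(E\) is probabilistically independent of \(H\).

```python
# Illustrative old credences for S1 over {EH, E~H, ~EH, ~E~H}, and a Bayes factor for E.
P1 = {'EH': 0.20, 'EnH': 0.10, 'nEH': 0.15, 'nEnH': 0.55}
alpha_E = 3.0

P1_E, P1_H = P1['EH'] + P1['EnH'], P1['EH'] + P1['nEH']
P1_E_H, P1_E_nH = P1['EH'] / P1_H, P1['EnH'] / (1 - P1_H)

# (7): Q_direct(H) in its original form.
q7 = (alpha_E * P1['EH'] + P1['nEH']) / (alpha_E * P1_E + (1 - P1_E))

# (7*): the transformed form derived above.
num = (alpha_E * P1_E_H + (1 - P1_E_H)) * P1_H
q7_star = num / (num + (alpha_E * P1_E_nH + (1 - P1_E_nH)) * (1 - P1_H))

print(q7, q7_star)   # identical values

# Independence case: here P(E|H) = P(E|~H) = 0.4, so the update returns P(H) unchanged.
P_ind = {'EH': 0.12, 'EnH': 0.28, 'nEH': 0.18, 'nEnH': 0.42}
h = P_ind['EH'] + P_ind['nEH']
e_h, e_nh = P_ind['EH'] / h, P_ind['EnH'] / (1 - h)
num = (alpha_E * e_h + (1 - e_h)) * h
print(num / (num + (alpha_E * e_nh + (1 - e_nh)) * (1 - h)))   # equals P(H) = 0.3
```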

Appendix 2

Considering (7*) and (8*) in “Appendix 1”, it is obviously true that:

(A) \(Q_{direct}(H)=Q_{indirect}(H)\) if \(P_{1}(E|H)=P_{2}(E|H)\) and \(P_{1}(E|\lnot H)=P_{2}(E|\lnot H)\).

Now, we should prove that when \(E\) isn’t probabilistically independent of \(H\) relative to \(P_{1}\) and/or to \(P_{2}\), the following two propositions are equivalent to each other:

$$\begin{aligned} P_{1}(E|H)&=P_{2}(E|H)\,\,\mathrm{and}\,\, P_{1}(E|\lnot H)=P_{2}(E|\lnot H). \end{aligned}$$
(2a)
$$\begin{aligned} l_{1}(H,E)&=l_{2}(H,E)\,\,\mathrm{and}\,\, l_{1}(H,\lnot E)=l_{2}(H,\lnot E). \end{aligned}$$
(2b)

It is easily shown that (2a) mathematically implies (2b). Let's prove, then, that (2b) mathematically implies (2a). For this purpose, it is sufficient to show that (2a) is implied by

$$\begin{aligned} \frac{P_{1}(E|H)}{P_{1}(E|\lnot H)}=\frac{P_{2}(E|H)}{P_{2}(E|\lnot H)}\,\,\mathrm{and}\,\,\frac{P_{1}(\lnot E|H)}{P_{1}(\lnot E|\lnot H)}=\frac{P_{2}(\lnot E|H)}{P_{2}(\lnot E|\lnot H)}. \end{aligned}$$
(2c)

Suppose that the values of \(P_{1}(E|H)/P_{1}(E|\lnot H)\) and \(P_{2}(E|H)/P_{2}(E|\lnot H)\) are both equal to \(k\). From the first equation of (2c), it follows that:

$$\begin{aligned} P_{1}(E|H)P_{2}(E|\lnot H)&= P_{2}(E|H)P_{1}(E|\lnot H), \nonumber \\ P_{1}(E|H)&= kP_{1}(E|\lnot H),\,\,\mathrm{and} \nonumber \\ P_{2}(E|H)&= kP_{2}(E|\lnot H). \end{aligned}$$
(2d)

The probability calculus says that for all \(X\) and \(Y\), \(P(\lnot X|Y)=1{-}P(X|Y)\). Thus, it follows from the second equation of (2c) that:

$$\begin{aligned}&1-P_{1}(E|H)- P_{2}(E|\lnot H)+P_{1}(E|H)P_{2}(E|\lnot H)\nonumber \\&\quad =1- P_{2}(E|H)-P_{1}(E|\lnot H)+P_{2}(E|H)P_{1}(E|\lnot H). \end{aligned}$$
(2e)

Substituting (2d) into (2e) and cancelling the common product terms, it follows that \((1-k)(P_{1}(E|\lnot H)-P_{2}(E|\lnot H))=0\). Note that we have assumed that \(E\) isn't probabilistically independent of \(H\) relative to \(P_{1}\) and/or to \(P_{2}\). Thus, \(k\ne 1\). Hence, \(P_{1}(E|\lnot H)=P_{2}(E|\lnot H)\). In a similar way, we can also derive that \(P_{1}(E|H)=P_{2}(E|H)\).
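
The final step can also be verified numerically. Given \(S_{1}\)'s likelihoods (illustrative values below, with \(k\ne 1\)), the two ratio constraints in (2c) can be solved for \(S_{2}\)'s likelihoods; the unique solution coincides with \(S_{1}\)'s likelihoods, just as the proof requires.

```python
# Illustrative likelihoods for S1, chosen so that E is not independent of H (k != 1).
a1, b1 = 0.7, 0.2            # a1 = P1(E|H), b1 = P1(E|~H)
k = a1 / b1                  # first ratio in (2c)
m = (1 - a1) / (1 - b1)      # second ratio in (2c), i.e. the ratio for ~E

# Solve the constraints of (2c) for S2's likelihoods a2 = P2(E|H), b2 = P2(E|~H):
#   a2 = k * b2   and   (1 - a2) / (1 - b2) = m.
b2 = (m - 1) / (m - k)
a2 = k * b2

print((a1, b1), (a2, b2))    # the pairs coincide, so (2c) implies (2a)
```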

Table 3 Old credence assignments: (a) \(S_{1}\)'s old credences: \(P_{1}(\textit{EH})=1/6\), \(P_{1}(E\lnot H)=1/6\), \(P_{1}(\lnot EH)=1/12\), \(P_{1}(\lnot E\lnot H)=7/12\); (b) \(S_{2}\)'s old credences: \(P_{2}(\textit{EH})=2/9\), \(P_{2}(E\lnot H)=1/9\), \(P_{2}(\lnot EH)=1/9\), \(P_{2}(\lnot E\lnot H)=5/9\)

Appendix 3

Consider the confirmation measure \(r\). Suppose that Tables 3a, b describe \(S_{1}\)’s and \(S_{2}\)’s old credence assignments, respectively. Suppose also that \(S_{2}\)’s credence in \(E\) is changed from 1/3 to 4/5. And let \(r_{1}\) and \(r_{2}\) be \(S_{1}\)’s and \(S_{2}\)’s confirmation measure \(r\) relative to \(P_{1}\) and to \(P_{2}\), respectively. According to these tables, then, it holds that \(P_{1}(H|E)=1/2\), \(P_{1}(H)=1/4\), \(P_{2}(H|E)=2/3\) and \(P_{2}(H)=1/3\). Thus, we have that:

$$\begin{aligned} r_{1}(H,E)=\ln \left( \frac{P_{1}(H|E)}{P_{1}(H)}\right) =\ln 2=\ln \left( \frac{P_{2}(H|E)}{P_{2}(H)}\right) =r_{2}(H,E)\approx 0.693. \end{aligned}$$

Similarly, we obtain that:

$$\begin{aligned} r_{1}(H,\lnot E)=\ln \left( \frac{P_{1}(H|\lnot E)}{P_{1}(H)}\right) =\ln (1/2)=\ln \left( \frac{P_{2}(H|\lnot E)}{P_{2}(H)}\right) =r_{2}(H,\lnot E)\approx -0.693. \end{aligned}$$

On the other hand, from the assumption that \(S_{2}\)’s credence in \(E\) is changed from 1/3 to 4/5, it follows that \(\alpha _{2}^{E}=8\). With the help of (7*) and (8*) in “Appendix 1”, then, it follows from Tables 3a,b that:

$$\begin{aligned} Q_{direct}(H)&= \frac{[\alpha _{2}^{E}P_{1}(E|H)+P_{1}(\lnot E|H)]P_{1}(H)}{[\alpha _{2}^{E}P_{1}(E|H)+P_{1}(\lnot E|H)]P_{1}(H)+[\alpha _{2}^{E}P_{1}(E|\lnot H)+P_{1}(\lnot E|\lnot H)]P_{1}(\lnot H)}\\&= \frac{\left[ 8\cdot \frac{2}{3}+\frac{1}{3}\right] \cdot \frac{1}{4}}{\left[ 8\cdot \frac{2}{3}+\frac{1}{3}\right] \cdot \frac{1}{4}+\left[ 8\cdot \frac{2}{9}+\frac{7}{9}\right] \cdot \frac{3}{4}}=\frac{17}{40}=0.425.\\ Q_{indirect}(H)&= \frac{[\alpha _{2}^{E}P_{2}(E|H)+P_{2}(\lnot E|H)]P_{1}(H)}{[\alpha _{2}^{E}P_{2}(E|H)+P_{2}(\lnot E|H)]P_{1}(H)+[\alpha _{2}^{E}P_{2}(E|\lnot H)+P_{2}(\lnot E|\lnot H)]P_{1}(\lnot H)}\\&= \frac{\left[ 8\cdot \frac{2}{3}+\frac{1}{3}\right] \cdot \frac{1}{4}}{\left[ 8\cdot \frac{2}{3}+\frac{1}{3}\right] \cdot \frac{1}{4}+\left[ 8\cdot \frac{1}{6}+\frac{5}{6}\right] \cdot \frac{3}{4}}=\frac{34}{73}=0.466. \end{aligned}$$

So, \(Q_{direct}(H)=0.425\ne 0.466=Q_{indirect}(H)\). With the help of the above example, therefore, we can conclude that the log-probability ratio measure \(r\) fails to satisfy the Requirement of Collaboration.
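
The arithmetic of this counterexample can be reproduced directly from the credences in Table 3; a short sketch (Python) follows.

```python
# S1's and S2's old credences over {EH, E~H, ~EH, ~E~H}, as in Table 3.
P1 = {'EH': 1/6, 'EnH': 1/6, 'nEH': 1/12, 'nEnH': 7/12}
P2 = {'EH': 2/9, 'EnH': 1/9, 'nEH': 1/9, 'nEnH': 5/9}

# S2's credence in E moves from 1/3 to 4/5, so alpha_2^E = 8.
alpha_E = ((4/5) / (1/5)) / ((1/3) / (2/3))

def conditionals(P):
    """Return P(E|H), P(E|~H) and P(H) for a joint credence assignment."""
    h = P['EH'] + P['nEH']
    return P['EH'] / h, P['EnH'] / (1 - h), h

e_h1, e_nh1, h1 = conditionals(P1)
e_h2, e_nh2, _ = conditionals(P2)

def update(e_h, e_nh):
    """(7*)/(8*): update S1's credence in H with the given likelihoods and alpha_2^E."""
    num = (alpha_E * e_h + (1 - e_h)) * h1
    return num / (num + (alpha_E * e_nh + (1 - e_nh)) * (1 - h1))

print(update(e_h1, e_nh1))   # Q_direct(H) = 17/40 = 0.425
print(update(e_h2, e_nh2))   # Q_indirect(H) = 34/73, about 0.466
```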

Appendix 4

Suppose that experience directly changes an agent’s credence in \(E\) from \(P(E)\) to \(Q(E)\) and nothing else. From JC* and the probability calculus, then, it follows that: for any \(X\),

$$\begin{aligned} Q(X)&=\frac{\alpha ^{E}P(\textit{EX})+P(\lnot \textit{EX})}{\alpha ^{E}P(E)+P(\lnot E)}\nonumber \\&=\frac{[\alpha ^{E}P(E|X)+P(\lnot E|X)]P(X)}{[\alpha ^{E}P(E|X)\!+\!P(\lnot E|X)]P(X)\!+\![\alpha ^{E}P(E|\lnot X)\!+\!P(\lnot E|\lnot X)]P(\lnot X)}. \end{aligned}$$
(4a)

For any \(X\) that is probabilistically independent of \(E\) relative to \(P\), then, it follows from (4a) that:

$$\begin{aligned} Q(X)=P(X). \end{aligned}$$
(4b)

On the other hand, when \(X\) is not probabilistically independent of \(E\) relative to \(P\), it holds that:

$$\begin{aligned} P(E|X)&=\gamma _{1}\left( \frac{1-\gamma _{2}}{\gamma _{1}-\gamma _{2}}\right) , \,\, P(\lnot E|X)=\gamma _{2}\left( \frac{\gamma _{1}-1}{\gamma _{1}-\gamma _{2}}\right) ,\nonumber \\ P(E|\lnot X)&=\frac{1-\gamma _{2}}{\gamma _{1}-\gamma _{2}},\quad \mathrm{and}\,\, P(\lnot E|\lnot X)=\frac{\gamma _{1}-1}{\gamma _{1}-\gamma _{2}}. \end{aligned}$$
(4c)

where \(\gamma _{1}=e^{C_{P}(X,E)}=e^{l_{P}(X,E)}=P(E|X)/P(E|\lnot X)\) and \(\gamma _{2}=e^{C_{P}(X,\lnot E)}=e^{l_{P}(X,\lnot E)}=P(\lnot E|X)/P(\lnot E|\lnot X)\). Then, for any \(X\) that is not probabilistically independent of \(E\) relative to \(P\), (4a) implies that:

$$\begin{aligned} Q(X)&=\frac{\left[ \alpha ^{E}\left( \frac{1-\gamma _{2}}{\gamma _{1}-\gamma _{2}}\right) \gamma _{1}\!+\!\left( \frac{\gamma _{1}-1}{\gamma _{1}-\gamma _{2}}\right) \gamma _{2}\right] P(X)}{\left[ \alpha ^{E}\left( \frac{1-\gamma _{2}}{\gamma _{1}-\gamma _{2}}\right) \gamma _{1}+\left( \frac{\gamma _{1}-1}{\gamma _{1}-\gamma _{2}}\right) \gamma _{2}\right] P(X)+\left[ \alpha ^{E}\left( \frac{1-\gamma _{2}}{\gamma _{1}-\gamma _{2}}\right) +\left( \frac{\gamma _{1}-1}{\gamma _{1}-\gamma _{2}}\right) \right] P(\lnot X)}\nonumber \\&=\frac{\left[ \frac{\alpha ^{E}\left( \frac{\gamma _{1}}{1-\gamma _{1}}\right) -\frac{\gamma _{2}}{1-\gamma _{2}}}{\alpha ^{E}\left( \frac{1}{1-\gamma _{1}}\right) -\frac{1}{1-\gamma _{2}}}\right] P(X)}{\left[ \frac{\alpha ^{E}\left( \frac{\gamma _{1}}{1-\gamma _{1}}\right) -\frac{\gamma _{2}}{1-\gamma _{2}}}{\alpha ^{E}\left( \frac{1}{1-\gamma _{1}}\right) -\frac{1}{1-\gamma _{2}}}\right] P(X)+(1-P(X))}. \end{aligned}$$
(4d)

Note that \(X\)'s being probabilistically independent of \(E\) relative to \(P\) is equivalent to the proposition that \(\gamma _{1}=1\) or \(\gamma _{2}=1\). Using (4c) and (4d), thus, JC can be transformed into \(\mathrm{JC}^{\dagger }\).
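
As a check on this reparametrization, the following sketch (Python; illustrative credences) computes \(\gamma _{1}\) and \(\gamma _{2}\) from an old credence function \(P\), recovers the conditional probabilities via (4c), and confirms that (4d) returns the same \(Q(X)\) as JC* applied to \(P\) directly.

```python
# Illustrative old credences over {XE, X~E, ~XE, ~X~E}, with E not independent of X.
P = {'XE': 0.24, 'XnE': 0.16, 'nXE': 0.06, 'nXnE': 0.54}
P_X, P_E = P['XE'] + P['XnE'], P['XE'] + P['nXE']
alpha_E = 4.0                                      # Bayes factor of E against ~E

# The confirmation-theoretic parameters.
g1 = (P['XE'] / P_X) / (P['nXE'] / (1 - P_X))      # gamma_1 = P(E|X)/P(E|~X)
g2 = (P['XnE'] / P_X) / (P['nXnE'] / (1 - P_X))    # gamma_2 = P(~E|X)/P(~E|~X)

# (4c): conditional probabilities recovered from gamma_1 and gamma_2 alone.
e_x = g1 * (1 - g2) / (g1 - g2)                    # P(E|X)
e_nx = (1 - g2) / (g1 - g2)                        # P(E|~X)
print(e_x, P['XE'] / P_X)                          # both equal P(E|X)
print(e_nx, P['nXE'] / (1 - P_X))                  # both equal P(E|~X)

# (4d) computed from gamma_1 and gamma_2 versus JC* computed from P directly.
num = (alpha_E * e_x + (1 - e_x)) * P_X
q_4d = num / (num + (alpha_E * e_nx + (1 - e_nx)) * (1 - P_X))
q_jc = (alpha_E * P['XE'] + P['XnE']) / (alpha_E * P_E + (1 - P_E))
print(q_4d, q_jc)                                  # identical values
```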


Cite this article

Park, I. Confirmation measures and collaborative belief updating. Synthese 191, 3955–3975 (2014). https://doi.org/10.1007/s11229-014-0507-1
