Keynes’s Coefficient of Dependence Revisited

Abstract

Probabilistic dependence and independence are among the key concepts of Bayesian epistemology. This paper focuses on the study of one specific quantitative notion of probabilistic dependence. More specifically, section 1 introduces Keynes’s coefficient of dependence and shows how it is related to pivotal aspects of scientific reasoning such as confirmation, coherence, the explanatory and unificatory power of theories, and the diversity of evidence. The intimate connection between Keynes’s coefficient of dependence and scientific reasoning raises the question of how Keynes’s coefficient of dependence is related to truth, and how it can be made fruitful for epistemological considerations. This question is answered in section 2 of the paper. Section 3 outlines the consequences the results have for epistemology and the philosophy of science from a Bayesian point of view.

Notes

  1.

    Keynes (1921) attributes this measure to William Ernest Johnson’s manuscript Cumulative Formula which was unpublished at that time. The author could not verify whether the latter manuscript has been published since.

  2.

    Wheeler (2009) calls this the Wayne–Shogenji correlation measure, not because Wayne or Shogenji invented it, but because of the conflicting interpretations that have recently been attached to it by these authors. Wayne (1995) tentatively suggests \({{\mathfrak {pd}}}\) as a similarity measure. Shogenji (1999) interprets it as a coherence measure. In the following no such interpretation is presupposed. Since many philosophers before Shogenji and Wheeler used this measure—such as Keynes (1921), Mackie (1969) and Horwich (1982)—I refrain from following Wheeler (2009) in calling it the Wayne–Shogenji correlation measure. In contrast to Wheeler (2009) and Brössel (2013c) I also cautiously refrain from calling it a measure of correlation (I am grateful to an anonymous referee for comments which have made me cautious on this score). Finally, it is worth mentioning that one can also find alternative measures in the literature. In information theory the most common way to measure the distance between probability distributions is the Kullback–Leibler Divergence. Applied as a measure of deviation from independence, it measures the distance between the actual probability of the conjunction and the independent distribution. In particular, as a measure of deviation from independence the Kullback–Leibler Divergence looks like this:

    Definition (Probabilistic Dependence: Kullback–Leibler Divergence)

    $${{\mathfrak {pd}}}_{KL}(A_1,\ldots , A_n)=\big [\Pr (A_1\cap \cdots \cap A_n)\times \log [{\mathfrak {pd}}_K(A_1,\ldots , A_n)]\big ]. $$

    As one can see, this measure of probabilistic dependence depends crucially on Keynes’s coefficient of dependence \({\mathfrak {pd}}_K\). Thus, studying Keynes’s coefficient of dependence is more fundamental. A third measure of deviation from independence is the following:

    Definition (Deviation From Independence: Difference Measures)

    $${\mathfrak {pd}}_{d}(A_1, \ldots , A_n)=\Pr (A_1\cap \cdots \cap A_n)-\Pr (A_1)\times \cdots \times \Pr (A_n) $$

    The difference measure is usually applied when one is interested in the deviation from independence of two propositions. Its generalization for measuring the dependence of two random variables is known as the measure of covariance between the random variables. Unfortunately, discussing and comparing these measures of probabilistic dependence goes beyond the scope of the present paper. This topic will be the content of subsequent research.
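The three measures just defined can be contrasted on a toy finite probability space. The following Python sketch is my own illustration (the worlds, probabilities, and event names are not from the paper); it implements Keynes's coefficient \({\mathfrak {pd}}_K\), the Kullback–Leibler-style measure \({\mathfrak {pd}}_{KL}\) as given in this note, and the difference measure \({\mathfrak {pd}}_d\):

```python
import math

# Toy finite probability space; worlds and probabilities are illustrative.
P = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}

def pr(event):
    """Probability of an event, represented as a set of worlds."""
    return sum(P[w] for w in event)

def pd_K(*events):
    """Keynes's coefficient: Pr(A1 ∩ ... ∩ An) / (Pr(A1) × ... × Pr(An))."""
    inter = set.intersection(*(set(e) for e in events))
    return pr(inter) / math.prod(pr(e) for e in events)

def pd_KL(*events):
    """The Kullback–Leibler-style measure from this note: Pr(∩Ai) × log pd_K."""
    inter = set.intersection(*(set(e) for e in events))
    return pr(inter) * math.log(pd_K(*events))

def pd_d(A, B):
    """Difference measure for two propositions: Pr(A ∩ B) − Pr(A) × Pr(B)."""
    return pr(set(A) & set(B)) - pr(A) * pr(B)

A = {"w1", "w2"}   # Pr(A) = 0.7
B = {"w1", "w3"}   # Pr(B) = 0.6
print(pd_K(A, B))          # slightly below 1: weak negative dependence
print(pd_KL(A, B), pd_d(A, B))
```

Note that all three measures agree on the direction of dependence here: for negatively dependent events, \({\mathfrak {pd}}_K<1\) while \({\mathfrak {pd}}_{KL}\) and \({\mathfrak {pd}}_d\) are negative.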

  3.

    Keynes (1921) actually discusses primarily the conditional variant of \({\mathfrak {pd}}_K\).

  4.

    The following corollary and all other corollaries and theorems are proved in the Appendices.

  5.

    The original formulation of Popper’s (1959) measure of explanatory power is this: \(EP_{P}(T,E)=\frac{\Pr (E|T)-\Pr (E)}{\Pr (E|T)+\Pr (E)}.\)

  6.

    According to Hájek and Joyce (2008), a measure ordinally equivalent to \(EP_2\) can itself be considered a confirmation measure. However, apart from Hájek and Joyce (2008), I am not aware of any other philosophical text which suggests this.

  7.

    Independently of Myrvold, McGrew (2003) discusses an account of unification (or, as McGrew puts it, of theoretical consilience) that is effectively equivalent to Myrvold’s account. For a detailed discussion of both approaches and their connection see Schupbach (2005).

  8.

    This has been noted already by Shogenji (1999). Of course, Shogenji discusses this property in the context of Bayesian coherentism, and he takes \({\mathfrak {pd}}_K\) to be a measure of coherence.

  9.

    Branden Fitelson argued to this effect in a private e-mail exchange.

  10.

    Unfortunately, in such a short paper it is not possible to provide a detailed exposition of the mathematically intricate convergence theorems. Accordingly, this paper presupposes previous acquaintance with these convergence theorems. I refer the interested reader to Hawthorne’s (2014) very accessible exposition of one approach for arriving at such a convergence result.

  11.

    A sequence of pieces of evidence separates the set of possibilities \(W\) if and only if for every pair of worlds \(w_i\) and \(w_j \in W\) (with \(w_i\ne w_j\)) there is one piece of evidence in the sequence such that it is true in one of the possibilities and false in the other.
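This separation condition admits a direct finite sketch. The following Python example is my own illustration (the worlds and evidence sets are toy choices, not from the paper):

```python
from itertools import combinations

def separates(evidence_seq, worlds):
    """True iff for every pair of distinct worlds some piece of evidence
    is true in one of the worlds and false in the other."""
    return all(
        any((wi in e) != (wj in e) for e in evidence_seq)
        for wi, wj in combinations(sorted(worlds), 2)
    )

W = {"w1", "w2", "w3", "w4"}
e1, e2 = {"w1", "w2"}, {"w1"}
print(separates([e1, e2], W))          # w3 and w4 get the same answers: False
print(separates([e1, e2, {"w3"}], W))  # the added evidence tells them apart: True
```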

  12.

    This lemma and the following theorem have been anticipated by Huber (2005, 2008). See also Brössel (2008, 2013b).

  13.

    Note that Theorem 2 restricts these claims: \({\mathfrak {pd}}_K\) satisfies both conditions only almost surely, that is, only for every \(w\in W^\prime \) for some \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\); it does not necessarily hold for all \(w\in W\).

  14.

    For a more detailed discussion of convergence theorems and their philosophical implications especially for scientific realism and anti-realism see Brössel (2013b).

  15.

    Not all readers think that this result fits our intuitions. Accordingly, I at least want to show that it is supported by other intuitions shared by many Bayesian epistemologists, namely our intuitions about unification. More specifically, many Bayesian epistemologists share the intuition that, ceteris paribus, a hypothesis is the better confirmed by the evidence the more it unifies the single pieces of evidence (see Sect. 1.5), and they assume the following: (1) the unificatory power of a theory consists in its ability to increase the coefficient of dependence between the pieces of evidence (McGrew 2003; Myrvold 2003), and (2) the theory is the better confirmed by the evidence the higher the coefficient of dependence between them. Hence, the more the theory in question increases the coefficient of dependence between the pieces of evidence, the higher the coefficient of dependence between the theory and the conjunction of the pieces of evidence in question.

  16.

    Again, not all readers will think this result fits our intuitions. Accordingly, I at least want to show that it is supported by other intuitions shared by many Bayesian epistemologists, namely our intuitions about the diversity of evidence. More specifically, many epistemologists share the intuition that, ceteris paribus, hypotheses are confirmed more by diverse pieces of evidence than by uniform pieces of evidence (see Sect. 1.5), and they assume the following: (1) the evidence is the more uniform or similar the more strongly the pieces of evidence depend on each other probabilistically (e.g. Wayne 1995), and (2) the hypothesis is the better confirmed by the evidence the stronger the probabilistic dependence. Hence, ceteris paribus, a higher coefficient of dependence between the pieces of evidence has a negative effect on the coefficient of dependence between the theory and the evidence.

  17.

    In his paper, Shogenji (2007) wants to demonstrate why the coherence (in the sense of his coherence measure \(Coh_S\)) of mutually independent pieces of evidence seems to be truth-conducive for some hypothesis \(H\), even though various impossibility results demonstrate that this cannot be the case (Bovens and Hartmann 2003; Olsson 2002). To this end he considers to what extent the “degree of coherence between the focal piece (usually the new piece) of evidence and the rest of the evidence is a significant factor in the confirmation of the hypothesis” (Shogenji 2007: 367). Accordingly, Shogenji considers the interaction between the coherence/coefficient of dependence of the independent pieces of evidence \(e_1, \ldots ,e_m\) and the coherence/coefficient of dependence of the independent pieces of evidence \(e_1, \ldots ,e_{m-1}\), where \(e_m\) represents the focal or new piece of evidence, and proves that

    $$\begin{aligned} {\mathfrak {pd}}_K(e_1\cap \cdots \cap e_{m-1},e_m)=\dfrac{{\mathfrak {pd}}_K(e_1, \ldots ,e_m)}{{\mathfrak {pd}}_K(e_1, \ldots ,e_{m-1})} \end{aligned}$$

    if \(\Pr (e_1\cap \cdots \cap e_m)>0\).
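This identity is easy to confirm numerically. The following Python sketch uses a toy distribution of my own (not from the paper) with \(m=3\):

```python
import math

# Eight illustrative worlds with an uneven, strictly positive distribution.
P = {f"w{i}": p for i, p in
     enumerate([0.22, 0.18, 0.15, 0.13, 0.12, 0.10, 0.06, 0.04])}

def pr(event):
    return sum(P[w] for w in event)

def pd_K(*events):
    """Keynes's coefficient: Pr(∩Ai) / Π Pr(Ai)."""
    inter = set.intersection(*(set(e) for e in events))
    return pr(inter) / math.prod(pr(e) for e in events)

e1 = {"w0", "w1", "w2", "w3"}
e2 = {"w0", "w1", "w4", "w5"}
e3 = {"w0", "w2", "w4", "w6"}

lhs = pd_K(e1 & e2, e3)                       # focal evidence vs the rest
rhs = pd_K(e1, e2, e3) / pd_K(e1, e2)         # Shogenji's ratio
print(lhs, rhs)  # the two sides coincide
```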

    The conclusion Shogenji finally reaches is this:

    We have uncovered that when the pieces of evidence are independent with regard to the hypothesis and the rest of the evidence supports the hypothesis, the more coherent the focal piece of evidence is with the rest of the evidence, the more strongly the focal evidence supports the hypothesis. This leaves us with the impression that coherence is truth conducive. (Shogenji 2007: 371)

    In the light of Corollary 6 (which does not presuppose that the pieces of evidence are mutually independent) one can see that actually something stronger is true. Ceteris paribus the coefficient of dependence between the pieces of evidence has a negative effect on the coefficient of dependence between the theory and the conjunction of the evidence. However, ceteris paribus the coefficient of dependence between the pieces of evidence and the theory has a positive effect on the coefficient of dependence between the theory and the conjunction of the evidence.

References

  1. Bovens, L., & Hartmann, S. (2003). Bayesian epistemology. Oxford: Oxford University Press.

  2. Brössel, P. (2008). Theory assessment and coherence. Abstracta, 4, 57–71.

  3. Brössel, P. (2012). Rethinking Bayesian confirmation theory—Steps towards a new theory of confirmation. Unpublished PhD-thesis (Konstanz).

  4. Brössel, P. (2013a). The problem of measure sensitivity redux. Philosophy of Science, 80(3), 378–397.

  5. Brössel, P. (2013b). Assessing theories: The coherentist approach. Erkenntnis, 79, 593–623.

  6. Brössel, P. (2013c). Correlation and truth. In V. Karakostas & D. Dieks (Eds.), Recent progress in philosophy of science: Perspectives and foundational problems (pp. 41–54). Dordrecht: Springer.

  7. Christensen, D. (1999). Measuring confirmation. Journal of Philosophy, 96, 437–461.

  8. Carnap, R. (1962). The logical foundations of probability (2nd ed.). Chicago: University of Chicago Press.

  9. Crupi, V. (2013). Confirmation. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy. http://plato.stanford.edu/archives/win2013/entries/confirmation/.

  10. Crupi, V., & Tentori, K. (2012). A second look at the logic of explanatory power (with two novel representation theorems). Philosophy of Science, 79, 365–385.

  11. Crupi, V., Tentori, K., & Gonzalez, M. (2007). On Bayesian measures of evidential support: Theoretical and empirical issues. Philosophy of Science, 74, 229–252.

  12. Douven, I., & Meijs, W. (2007). Measuring coherence. Synthese, 156, 405–425.

  13. Fitelson, B. (1999). The plurality of Bayesian measures of confirmation and the problem of measure sensitivity. Philosophy of Science, 66, S362–S378.

  14. Fitelson, B. (2001). Studies in Bayesian confirmation theory. PhD. Dissertation, University of Wisconsin-Madison (Philosophy).

  15. Fitelson, B. (2003). A probabilistic theory of coherence. Analysis, 63, 194–199.

  16. Gaifman, H., & Snir, M. (1982). Probabilities over rich languages, testing, and randomness. Journal of Symbolic Logic, 47, 495–548.

  17. Good, I. J. (1960). Weight of evidence, corroboration, explanatory power, information and the utility of experiments. Journal of the Royal Statistical Society. Series B (Methodological), 22, 319–331.

  18. Hawthorne, J. (2014). Inductive logic. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2014 Edition). http://plato.stanford.edu/archives/sum2014/entries/logic-inductive/.

  19. Hempel, C. G. (1960). Inductive inconsistencies. Synthese, 12, 439–469.

  20. Horwich, P. (1982). Probability and evidence. Cambridge: Cambridge University Press.

  21. Huber, F. (2005). What is the point of confirmation? Philosophy of Science, 72, 1146–1159.

  22. Huber, F. (2008). Assessing theories, Bayes style. Synthese, 161, 89–118.

  23. Joyce, J. (1999). Foundations of causal decision theory. Cambridge: Cambridge University Press.

  24. Keynes, J. (1921). A treatise on probability. London: Macmillan.

  25. Levi, I. (1967). Gambling with truth. New York: A. A. Knopf.

  26. Mackie, J. (1969). The relevance criterion of confirmation. The British Journal for the Philosophy of Science, 20, 27–40.

  27. McGrew, T. (2003). Confirmation, heuristics, and explanatory reasoning. The British Journal for the Philosophy of Science, 54, 553–567.

  28. Milne, P. (1996). \(log[p(h/eb)/p(h/b)]\) is the one true measure of confirmation. Philosophy of Science, 63, 21–26.

  29. Mortimer, H. (1988). The logic of induction. Paramus, NJ: Prentice Hall.

  30. Myrvold, W. (1996). Bayesianism and diverse evidence: A reply to Andrew Wayne. Philosophy of Science, 63, 661–665.

  31. Nozick, R. (1981). Philosophical explanations. Oxford: Clarendon.

  32. Myrvold, W. (2003). A Bayesian account of the virtue of unification. Philosophy of Science, 70, 399–423.

  33. Olsson, E. (2002). What is the problem of coherence and truth? The Journal of Philosophy, 99, 246–272.

  34. Popper, K. (1959). The logic of scientific discovery. London: Hutchinson.

  35. Schervish, M., & Seidenfeld, T. (1990). An approach to consensus and certainty with increasing evidence. Journal of Statistical Planning and Inference, 25, 401–414.

  36. Shogenji, T. (1999). Is coherence truth conducive? Analysis, 59, 338–345.

  37. Shogenji, T. (2007). Why does coherence appear truth-conducive? Synthese, 157, 361–372.

  38. Schupbach, J. (2011a). Comparing probabilistic measures of explanatory power. Philosophy of Science, 78, 813–829.

  39. Schupbach, J. (2011b). New hope for Shogenji’s coherence measure. The British Journal for the Philosophy of Science, 62, 125–142.

  40. Schupbach, J. (2005). On a Bayesian analysis of unification. Philosophy of Science, 72, 594–607.

  41. Schupbach, J., & Sprenger, J. (2011). The logic of explanatory power. Philosophy of Science, 78, 105–127.

  42. Wayne, A. (1995). Bayesianism and diverse evidence. Philosophy of Science, 62, 111–121.

  43. Wheeler, G. (2009). Focused correlation and confirmation. The British Journal for the Philosophy of Science, 60, 79–100.

Author information

Corresponding author

Correspondence to Peter Brössel.

Additional information

The present paper has its origin in a short paper that appeared as “Correlation and Truth” in: V. Karakostas, D. Dieks (eds.). EPSA11 Recent Progress in Philosophy of Science: Perspectives and Foundational Problems. Dordrecht: Springer, 41–54. For their helpful remarks and suggestions on previous versions of the paper I am very thankful to Ralf Busse, Franz Huber, Gregory Wheeler, two anonymous referees of this journal, and (especially) Anna-Maria A. Eder and Ben Young.

Appendices

Appendix 1: Proofs of Corollaries

Proof of Corollary 1

  1.

    According to Definition 3:

    $$\begin{aligned} d(T,E)&=\Pr (T|E)-\Pr (T)\\&=\left[ \frac{\Pr (T|E)}{\Pr (T)}-1\right] \times \Pr (T)\\&=\left[ {\mathfrak {pd}}_K(T,E)-1\right] \times \Pr (T) \end{aligned}$$
  2.

    According to Definition 3:

    $$\begin{aligned} M(T,E)&=\Pr (E|T)-\Pr (E)\\&=\left[ \frac{\Pr (E|T)}{\Pr (E)}-1\right] \times \Pr (E)\\&=\left[ {\mathfrak {pd}}_K(T,E)-1\right] \times \Pr (E) \end{aligned}$$
  3.

    According to Definition 3:

    $$\begin{aligned} S(T,E)&=\Pr (T|E)-\Pr (T|(\overline{E}))\\&=\frac{\Pr (T|E)-\Pr (T)}{\Pr (\overline{E})}\\&=\left[ \frac{\frac{\Pr (T|E)}{\Pr (T)}-1}{\Pr (\overline{E})}\right] \times \Pr (T)\\&=\left[ \frac{{\mathfrak {pd}}_K(T,E)-1}{1-\Pr (E)}\right] \times \Pr (T) \end{aligned}$$
  4.

    According to Definition 3:

    $$\begin{aligned} M(T,E)&=\Pr (E|T)-\Pr (E|(\overline{T}))\\&=\frac{\Pr (E|T)-\Pr (E)}{\Pr (\overline{T})}\\&=\left[ \frac{\frac{\Pr (E|T)}{\Pr (E)}-1}{\Pr (\overline{T})}\right] \times \Pr (E)\\&=\left[ \frac{{\mathfrak {pd}}_K(T,E)-1}{1-\Pr (T)}\right] \times \Pr (E) \end{aligned}$$
  5.

    According to Definition 3:

    $$\begin{aligned} r(T,E)=\log \left[ \frac{\Pr (T|E)}{\Pr (T)}\right] , \end{aligned}$$

    if \(\Pr (T)>0\) and \(\Pr (E)>0\). Since \(\frac{\Pr (T|E)}{\Pr (T)}= \frac{\Pr (T\cap E)}{\Pr (T) \times \Pr (E)}\) it follows with Definition 1 that:

    $$\begin{aligned} r(T,E)=\log \left[ {\mathfrak {pd}}_K(T,E)\right] , \end{aligned}$$

    if \(1>\Pr (T)>0\) and \(\Pr (E)>0\).

  6.

    According to Definition 3:

    $$\begin{aligned} Z(T,E)&= {\left\{ \begin{array}{ll}\frac{d(T,E)}{1- \Pr (T)} &\quad \hbox{if}\, \Pr (T|E )\ge \Pr (T)>0\\ \frac{d(T,E)}{1- \Pr (\overline{T})} &\quad \hbox{if}\, \Pr (T|E )< \Pr (T)\\ 1 &\quad \hbox{if}\, \Pr (T)=0 \end{array}\right. }\\&= {\left\{ \begin{array}{ll}\frac{\left[ {\mathfrak {pd}}_K(T,E)-1\right] \times \Pr (T)}{1- \Pr (T)} &\quad \hbox{if}\, \Pr (T|E )\ge \Pr (T)>0\\ \frac{\left[ {\mathfrak {pd}}_K(T,E)-1\right] \times \Pr (T)}{\Pr (T)} &\quad \hbox{if}\, \Pr (T|E )< \Pr (T)\\ 1 &\quad \hbox{if}\, \Pr (T)=0 \end{array}\right. }\\&= {\left\{ \begin{array}{ll}\left[ {\mathfrak {pd}}_K(T,E)-1\right] \times \frac{\Pr (T)}{1- \Pr (T)} & {\hbox{if}}\, \Pr (T|E )\ge \Pr (T)>0\\ \left[ {\mathfrak {pd}}_K(T,E)-1\right] \times \frac{\Pr (T)}{\Pr (T)} & {\hbox{if }}\, \Pr (T|E )< \Pr (T)\\ 1 & {\hbox{if}}\, \Pr (T)=0 \end{array}\right. }\\\end{aligned}$$
  7.

    According to Definition 3:

    $$\begin{aligned} l(T,E)=\log \left[ \frac{\Pr (E|T)}{\Pr (E|\overline{T})}\right] , \end{aligned}$$

    if \(1>\Pr (T)>0\) and \(\Pr (E)>0\). It holds however that:

    $$\begin{aligned} \log \left[ \frac{\Pr (E|T)}{\Pr (E|\overline{T})}\right] =\log \left[ \frac{\frac{\Pr (E|T)}{\Pr (E)}}{\frac{\Pr (E|\overline{T})}{\Pr (E)}}\right] . \end{aligned}$$

    This implies

    $$\begin{aligned} l(T,E)=\log \left[ \dfrac{{\mathfrak {pd}}_K(T,E)}{{\mathfrak {pd}}_K(\overline{T},E)}\right] , \end{aligned}$$

    if \(1>\Pr (T)>0\) and \(\Pr (E)>0\).
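The identities of Corollary 1 can be spot-checked numerically. The following Python sketch uses an illustrative four-world space of my own (not from the paper) and verifies the claims for the measures \(d\), \(S\), and \(r\):

```python
import math

# Toy probability space; all names are illustrative.
P = {"w1": 0.35, "w2": 0.25, "w3": 0.25, "w4": 0.15}
W = set(P)

def pr(ev):
    return sum(P[w] for w in ev)

def cond(A, B):
    """Pr(A | B)."""
    return pr(set(A) & set(B)) / pr(B)

T, E = {"w1", "w2"}, {"w1", "w3"}
pdK = pr(T & E) / (pr(T) * pr(E))   # Keynes's coefficient of dependence

d = cond(T, E) - pr(T)              # difference measure d(T,E)
s = cond(T, E) - cond(T, W - E)     # measure S(T,E)
r = math.log(cond(T, E) / pr(T))    # log-ratio measure r(T,E)

print(abs(d - (pdK - 1) * pr(T)))                # ≈ 0
print(abs(s - (pdK - 1) * pr(T) / (1 - pr(E))))  # ≈ 0
print(abs(r - math.log(pdK)))                    # ≈ 0
```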

Proof of Corollary 2

  1.

    According to Definition 6:

    $$\begin{aligned} EP_1(T,E)&=\frac{\Pr (E|T)}{\Pr (E)}\\&=\frac{\Pr (E\cap T)}{\Pr (E)\times \Pr (T)}\\&={\mathfrak {pd}}_K(T,E)\quad {\hbox{(according to Definition 1)}} \end{aligned}$$
  2.

    According to Definition 7:

    $$\begin{aligned} EP_2(T,E)&= \left[ \frac{\Pr (T|E)-\Pr (T|\overline{E})}{\Pr (T|E)+\Pr (T|\overline{E})}\right] \\&= \left[ \frac{\frac{\Pr (T|E)-\Pr (T|\overline{E})}{\Pr (T)}}{\frac{\Pr (T|E)+\Pr (T|\overline{E})}{\Pr (T)}}\right] \\&= \left[ \frac{\frac{\Pr (T|E)}{\Pr (T)}-\frac{\Pr (T|\overline{E})}{\Pr (T)}}{\frac{\Pr (T|E)}{\Pr (T)}+\frac{\Pr (T|\overline{E})}{\Pr (T)}}\right] \\&= \frac{{\mathfrak {pd}}_K(E,T)-{\mathfrak {pd}}_K(\overline{E},T)}{{\mathfrak {pd}}_K(E,T)+{\mathfrak {pd}}_K(\overline{E},T)} \end{aligned}$$
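Both parts of Corollary 2 can likewise be spot-checked numerically; this Python sketch (toy space and names of my own) confirms that \(EP_1\) equals \({\mathfrak {pd}}_K\) and that \(EP_2\) equals its \({\mathfrak {pd}}_K\) reformulation:

```python
# Toy probability space; all names are illustrative.
P = {"w1": 0.35, "w2": 0.25, "w3": 0.25, "w4": 0.15}
W = set(P)

def pr(ev):
    return sum(P[w] for w in ev)

def cond(A, B):
    """Pr(A | B)."""
    return pr(set(A) & set(B)) / pr(B)

def pd_K(A, B):
    """Keynes's coefficient for two propositions."""
    return pr(set(A) & set(B)) / (pr(A) * pr(B))

T, E = {"w1", "w2"}, {"w1", "w3"}

EP1 = cond(E, T) / pr(E)   # Pr(E|T)/Pr(E)
EP2 = (cond(T, E) - cond(T, W - E)) / (cond(T, E) + cond(T, W - E))
EP2_pd = (pd_K(E, T) - pd_K(W - E, T)) / (pd_K(E, T) + pd_K(W - E, T))

print(EP1, pd_K(T, E))   # equal
print(EP2, EP2_pd)       # equal
```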

Proof of Corollary 3

According to Definition 8:

$$\begin{aligned} UP(e_1,e_2;T)&= \log \left[ \frac{\frac{\Pr (e_1\cap e_2|T)}{\Pr (e_1|T)\times \Pr (e_2|T)}}{\frac{\Pr (e_1\cap e_2)}{\Pr (e_1)\times \Pr (e_2)}}\right] \\&= \log \left[ \dfrac{{\mathfrak {pd}}_K(e_1,e_2|T)}{{\mathfrak {pd}}_K(e_1,e_2)}\right] \end{aligned}$$

Proof of Corollary 4

According to Definition 9:

$$\begin{aligned} SE(e_1,e_2)&=\frac{\Pr (e_1\cap e_2)}{\Pr (e_1)\times \Pr (e_2)}\\&={\mathfrak {pd}}_K(e_1,e_2) \end{aligned}$$

Proof of Corollary 5

$$\begin{aligned} {\mathfrak {pd}}_K(T,e_1\cap \cdots \cap e_m)&= \frac{\Pr (T\cap e_1\cap \cdots \cap e_m)}{\Pr (T)\times \Pr (e_1\cap \cdots \cap e_m)}\\&= \frac{\frac{\Pr (T\cap e_1\cap \cdots \cap e_m)}{\Pr (T)\times \Pr (e_1)\times \cdots \times \Pr (e_m) }}{\frac{\Pr (e_1\cap \cdots \cap e_m)}{\Pr (e_1)\times \cdots \times \Pr (e_m)}}\\&= \frac{{\mathfrak {pd}}_K(T,e_1, \ldots , e_m)}{{\mathfrak {pd}}_K(e_1, \ldots , e_m)} \end{aligned}$$
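Corollary 5's decomposition is easy to check numerically; here is a Python sketch with a toy space of my own (not from the paper):

```python
import math

# Toy probability space; all names are illustrative.
P = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}

def pr(ev):
    return sum(P[w] for w in ev)

def pd_K(*events):
    """Keynes's coefficient: Pr(∩Ai) / Π Pr(Ai)."""
    inter = set.intersection(*(set(e) for e in events))
    return pr(inter) / math.prod(pr(e) for e in events)

T  = {"w1", "w2"}
e1 = {"w1", "w3"}
e2 = {"w1", "w2", "w3"}

lhs = pd_K(T, e1 & e2)                 # dependence of T on the conjoined evidence
rhs = pd_K(T, e1, e2) / pd_K(e1, e2)   # the decomposition of Corollary 5
print(lhs, rhs)  # equal
```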

Proof of Corollary 6

$$\begin{aligned}& {\mathfrak {pd}}_K(T_1,e_1\cap \cdots \cap e_m)> {\mathfrak {pd}}_K(T_2,e_1\cap \cdots \cap e_m) \\ &\quad \Leftrightarrow \frac{{\mathfrak {pd}}_K(T_1,e_1, \ldots , e_m)}{{\mathfrak {pd}}_K(e_1, \ldots , e_m)}> \frac{{\mathfrak {pd}}_K(T_2,e_1, \ldots , e_m)}{{\mathfrak {pd}}_K(e_1, \ldots , e_m)} \\ &\quad \Leftrightarrow {\mathfrak {pd}}_K(T_1,e_1,\ldots , e_m)> {\mathfrak {pd}}_K(T_2,e_1,\ldots , e_m). \end{aligned}$$

Proof of Corollary 7

Corollary 7 follows trivially from Corollary 1 for the \(l\) confirmation measure and Theorem 3.

Proof of Corollary 8

$$\begin{aligned} {\mathfrak {pd}}_K(h_1\cap \cdots \cap h_n,E)&= \frac{\Pr (h_1\cap \cdots \cap h_n\cap E)}{\Pr (h_1\cap \cdots \cap h_n)\times \Pr (E)}\\&= \frac{\frac{\Pr (h_1\cap \cdots \cap h_n\cap E)}{\Pr (h_1)\times \cdots \times \Pr (h_n) \times \Pr (E)}}{\frac{\Pr (h_1\cap \cdots \cap h_n)}{\Pr (h_1)\times \cdots \times \Pr (h_n)}}\\&= \frac{{\mathfrak {pd}}_K(h_1, \ldots , h_n,E)}{{\mathfrak {pd}}_K(h_1, \ldots , h_n)} \end{aligned}$$

Appendix 2: Proofs of Theorems

Proof of Theorem 1

  1.
    $$\begin{aligned} \Pr (A_1\cap \cdots \cap A_n)&= \Pr (A_1\cap \cdots \cap A_n)\\ \Pr (A_1\cap \cdots \cap A_n)&= \dfrac{\Pr (A_1\cap \cdots \cap A_n)}{\prod _{1\le i\le n}\Pr (A_i)}\times \prod _{1\le i\le n}\Pr (A_i)\\ \Pr (A_1\cap \cdots \cap A_n)&= {\mathfrak {pd}}_K(A_1, \ldots A_n)\times \prod _{1\le i\le n}\Pr (A_i) \end{aligned}$$
  2.
    $$\begin{aligned} \Pr (A_1\cap \cdots \cap A_n|B)&= \Pr (A_1\cap \cdots \cap A_n|B)\\ \Pr (A_1\cap \cdots \cap A_n|B)&= \dfrac{\Pr (A_1\cap \cdots \cap A_n|B)}{\prod _{1\le i\le n}\Pr (A_i|B)}\times \prod _{1\le i\le n}\Pr (A_i|B)\\ \Pr (A_1\cap \cdots \cap A_n|B)&= {\mathfrak {pd}}_K(A_1, \ldots A_n|B)\times \prod _{1\le i\le n}\Pr (A_i|B) \end{aligned}$$

Proof of Theorem 2

The proof proceeds as follows: First, the Gaifman–Snir Theorem is presented (for a proof see Gaifman and Snir 1982). Second, Lemma 1 is proven. Third, Theorem 2 is completed.

1. The Gaifman–Snir Theorem: Let \(W\) be a set of possibilities and let \(\mathcal {A}\) be some algebra over \(W\). The elements of \(\mathcal {A}\) are interpreted as propositions expressible in some language \(\mathcal {L}\) suitable for arithmetic. In particular, let \(\mathcal {L}\) be some first order language containing the numerals ‘1’, ‘2’, ‘3’,… as names (i.e., individual constants) and symbols for addition, multiplication, identity, etc. In addition, let \(\mathcal {L}\) contain finitely many relation and function symbols; Gaifman and Snir (1982) call them the ‘empirical symbols’. Accordingly, one can think of the possibilities in \(W\) as models for the language \(\mathcal {L}\) (which agree on the interpretation of the mathematical symbols but can disagree on the interpretation of the empirical symbols).

Now let \(e_1\),…, \(e_n\),… be a sequence of propositions of \(\mathcal {A}\) that separates \(W\), and for all \(w\in W\) let \(e^w_i =e_i\), if \(w\,\vDash\, e_i\) and \(\overline{e_i}\) otherwise. Let \(\Pr \) be a regular (or strict) probability function on \(\mathcal {A}\). Let \(\Pr ^*\) be the unique probability function on the smallest \(\sigma \)-field \(\mathcal {A}^*\) containing the field \(\mathcal {A}\) satisfying \(\Pr ^*(A)=\Pr (A)\) for all \(A\in \mathcal {A}\).

Then there is a \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\) so that the following holds for every \(w\in W^\prime \) and all theories \(T\) of \(\mathcal {A}\):

$$\begin{aligned} \lim _{n\rightarrow \infty }\Pr (T|E^w_n)=\mathcal {I}(T,w) \end{aligned}$$

where \(\mathcal {I}(T,w)=1\), if \(w\,\vDash\, T\) and 0 otherwise.

2. Lemma 1: Let \(W\) be a set of possibilities and let \(\mathcal {A}\) be some algebra over \(W\). The elements of \(\mathcal {A}\) are interpreted as propositions expressible in some suitable language \(\mathcal {L}\) as specified in more detail above. The possibilities in \(W\) can be interpreted as models for \(\mathcal {L}\). Let \(e_0,\ldots , e_n,\ldots \) be a sequence of propositions of \(\mathcal {A}\) that separates \(W\), and let \(e^w_i =e_i\) if \(w\,\vDash\, e_i\) and \(\overline{e_i}\) otherwise. Let \(\Pr \) be a regular (or strict) probability function on \(\mathcal {A}\). Let \(\Pr ^*\) be the unique probability function on the smallest \(\sigma \)-field \(\mathcal {A}^*\) containing the field \(\mathcal {A}\) satisfying \(\Pr ^*(A)=\Pr (A)\) for all \(A\in \mathcal {A}\).

Then according to the Gaifman–Snir Theorem there is a \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\) so that the following holds for every \(w\in W^\prime \) and all theories \(T\) of \(\mathcal {A}\):

$$\begin{aligned} \lim _{n\rightarrow \infty }\Pr (T|E^w_n)=\mathcal {I}(T,w) \end{aligned}$$

where \(\mathcal {I}(T,w)=1\), if \(w\,\vDash\, T\) and 0 otherwise.

Now it holds that

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathfrak {pd}}_K(T,E^w_n)&= \lim _{n\rightarrow \infty }\dfrac{\Pr (T|E^w_n)}{\Pr (T)}\\&= \lim _{n\rightarrow \infty }\Pr (T|E^w_n)\times \lim _{n\rightarrow \infty }\dfrac{1}{\Pr (T)}\\&= \lim _{n\rightarrow \infty }\Pr (T|E^w_n)\times \dfrac{1}{\Pr (T)}\\&= {\left\{ \begin{array}{ll} \dfrac{1}{\Pr (T)}, & {\hbox{if}}\,\lim _{n\rightarrow \infty }\Pr (T|E^w_n)=1\\ 0, & {\hbox{if}}\, \lim _{n\rightarrow \infty }\Pr (T|E^w_n)=0\end{array}\right. }\\&= {\left\{ \begin{array}{ll} \dfrac{1}{\Pr (T)}, & {\hbox{if}}\,w\,\vDash\, T\\ 0, & {\hbox{otherwise.}} \end{array}\right. } \end{aligned}$$

3. Proof of Theorem 2: Let \(W\) be a set of possibilities and let \(\mathcal {A}\) be some algebra over \(W\). The elements of \(\mathcal {A}\) are interpreted as propositions expressible in some suitable language \(\mathcal {L}\) as specified in more detail above. The possibilities in \(W\) can be interpreted as models for \(\mathcal {L}\). Let \(e_0,\ldots , e_n,\ldots \) be a sequence of propositions of \(\mathcal {A}\) that separates \(W\), and let \(e^w_i =e_i\) if \(w\,\vDash\, e_i\) and \(\overline{e_i}\) otherwise. Let \(\Pr \) be a regular (or strict) probability function on \(\mathcal {A}\). Let \(\Pr ^*\) be the unique probability function on the smallest \(\sigma \)-field \(\mathcal {A}^*\) containing the field \(\mathcal {A}\) satisfying \(\Pr ^*(A)=\Pr (A)\) for all \(A\in \mathcal {A}\).

Then according to Lemma 1 there is a \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\) so that the following holds for every \(w\in W^\prime \) and all theories \(T\) of \(\mathcal {A}\):

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathfrak {pd}}_K(T,E^w_n)=\dfrac{1}{\Pr (T)}\quad \hbox{if }w\,\vDash\, T\hbox{, and 0 otherwise.} \end{aligned}$$

1. Let \(T_1\) and \(T_2\) be two theories of \(\mathcal {A}\) and suppose additionally that \(w \,\vDash\, T_1\) and \(w \,\vDash\, \overline{T_2}\).

We know that \(\lim _{n\rightarrow \infty }{\mathfrak {pd}}_K(T_1,E^w_n)=\frac{1}{\Pr (T_1)}\) since \(w \,\vDash\, T_1\). We also know that \(\lim _{n\rightarrow \infty }{\mathfrak {pd}}_K(T_2,E^w_n)=0\) since \(w \,\vDash\, \overline{T_2}\).

Let \(\epsilon =\frac{1}{2\Pr (T_1)}\). By the definition of \(\lim \) it holds that: \(\exists n\in \mathbb {N}\,\forall m\ge n : |\frac{1}{\Pr (T_1)}-{\mathfrak {pd}}_K(T_1,E^{w}_m)|<\epsilon \) and \(\exists n^\prime \in \mathbb {N}\,\forall m\ge n^\prime : |0-{\mathfrak {pd}}_K(T_2,E^{w}_m)|<\epsilon \).

Now let \(n_1=\max \{n, n^\prime \}\). Then it holds for all \(m\ge n_1\):

$$\begin{aligned} {\mathfrak {pd}}_K(T_1,E^{w}_m)> {\mathfrak {pd}}_K(T_2 ,E^{w}_m) \end{aligned}$$

2. Now assume that \(w \,\vDash\, T_1\) and \(w \,\vDash\, T_2\) and \(T_1\,\vDash\, T_2\) but \(T_2\,\nvDash \,T_1\).

Because of Lemma 1 we know that \(\lim _{n\rightarrow \infty }{\mathfrak {pd}}_K(T_1,E^w_n)=\frac{1}{\Pr (T_1)}\) since \(w \,\vDash\, T_1\). We also know that \(\lim _{n\rightarrow \infty }{\mathfrak {pd}}_K(T_2,E^w_n)=\frac{1}{\Pr (T_2)}\) since \(w \,\vDash\, T_2\).

The assumption is that \(\Pr \) is a strict probability function and \(T_1\,\vDash\, T_2\) but \(T_2\,\nvDash\, T_1\). It follows that \(\Pr (T_1)<\Pr (T_2)\) and \(\frac{1}{\Pr (T_1)}> \frac{1}{\Pr (T_2)}\).

Now let \(\epsilon =\frac{\frac{1}{\Pr (T_1)}-\frac{1}{\Pr (T_2)}}{2}\). Then it holds that: \(\exists n\in \mathbb {N}\,\forall m\ge n : |\frac{1}{\Pr (T_1)}-{\mathfrak {pd}}_K(T_1,E^w_m)|<\epsilon \) and \(\exists n^\prime \in \mathbb {N}\,\forall m\ge n^\prime : |\frac{1}{\Pr (T_2)}-{\mathfrak {pd}}_K(T_2,E^w_m)|<\epsilon \).

Now let \(n_1=\max \{n, n^\prime \}\). Then it holds for all \(m\ge n_1\):

$$\begin{aligned} {\mathfrak {pd}}_K(T_1,E^w_m)> {\mathfrak {pd}}_K(T_2 ,E^w_m) \end{aligned}$$
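A finite toy model can at least illustrate the limiting values Theorem 2 turns on: once a separating evidence sequence has pinned down the actual world, \({\mathfrak {pd}}_K(T,E^w_n)\) equals \(1/\Pr (T)\) for theories true in \(w\) and \(0\) for theories false in \(w\). The following Python sketch (worlds, prior, and theories are my own illustrative choices, not from the paper) shows this terminal stage:

```python
import itertools

# Worlds are 3-bit strings; a regular (strictly positive) prior, illustrative.
worlds = ["".join(b) for b in itertools.product("01", repeat=3)]
P = {w: (i + 1) / 36 for i, w in enumerate(worlds)}  # sums to 1, all > 0

def pr(ev):
    return sum(P[w] for w in ev)

# e_i: "the i-th bit is 1"; this evidence sequence separates the worlds.
e = [{w for w in worlds if w[i] == "1"} for i in range(3)]

def E_w(w, n):
    """Conjunction of the first n pieces of evidence as they hold in world w."""
    ev = set(worlds)
    for i in range(n):
        ev &= e[i] if w[i] == "1" else set(worlds) - e[i]
    return ev

w = "101"                                   # the actual world
T_true  = {u for u in worlds if u[0] == "1"}   # w ⊨ T_true
T_false = {u for u in worlds if u[0] == "0"}   # w ⊨ ¬T_false

def pd_K(T, E):
    return pr(set(T) & set(E)) / (pr(T) * pr(E))

print(pd_K(T_true, E_w(w, 3)), 1 / pr(T_true))  # equal: 1/Pr(T) for a true theory
print(pd_K(T_false, E_w(w, 3)))                 # 0 for a false theory
```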

Proof of Theorem 3

$$\begin{aligned} {\mathfrak {pd}}_K(T,e_1\cap \cdots \cap e_m)&= \frac{\Pr (e_1\cap \cdots \cap e_m\cap T)}{\Pr (e_1\cap \cdots \cap e_m)\times \Pr (T)}\\&= \frac{\Pr (T|e_1)}{\Pr (T)}\times \cdots \times \frac{\Pr (T|e_m)}{\Pr (T)}\times \frac{\frac{\Pr (e_1\cap \cdots \cap e_m|T)}{\Pr (e_1\cap \cdots \cap e_m)}}{ \frac{\Pr (T|e_1)}{\Pr (T)}\times \cdots \times \frac{\Pr (T|e_m)}{\Pr (T)}}\\&= \frac{\Pr (T|e_1)}{\Pr (T)}\times \cdots \times \frac{\Pr (T|e_m)}{\Pr (T)}\times \frac{\frac{\Pr (e_1\cap \cdots \cap e_m|T)}{\Pr (e_1\cap \cdots \cap e_m)}}{\frac{\Pr (e_1|T)\times \cdots \times \Pr (e_m|T)}{\Pr (e_1)\times \cdots \times \Pr (e_m)}}\\&= \frac{\Pr (T|e_1)}{\Pr (T)}\times \cdots \times \frac{\Pr (T|e_m)}{\Pr (T)}\times \frac{\frac{\Pr (e_1\cap \cdots \cap e_m|T)}{\Pr (e_1|T)\times \cdots \times \Pr (e_m|T) }}{\frac{\Pr (e_1\cap \cdots \cap e_m)}{\Pr (e_1)\times \cdots \times \Pr (e_m)}}\\&={\mathfrak {pd}}_K(T,e_1)\times \cdots \times {\mathfrak {pd}}_K(T,e_m)\times \dfrac{{\mathfrak {pd}}_K(e_1,\ldots , e_m|T)}{{\mathfrak {pd}}_K(e_1,\ldots ,e_m)} \end{aligned}$$
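Theorem 3's factorization can also be verified numerically. The following Python sketch (toy distribution and events of my own choosing) checks the identity for two pieces of evidence:

```python
import math

# Toy probability space; all names are illustrative.
P = {f"w{i}": p for i, p in
     enumerate([0.22, 0.18, 0.15, 0.13, 0.12, 0.10, 0.06, 0.04])}

def pr(ev):
    return sum(P[w] for w in ev)

def pd_K(*events):
    """Keynes's coefficient: Pr(∩Ai) / Π Pr(Ai)."""
    inter = set.intersection(*(set(e) for e in events))
    return pr(inter) / math.prod(pr(e) for e in events)

def pd_K_given(events, given):
    """Conditional coefficient: Pr(∩Ai | B) / Π Pr(Ai | B)."""
    inter = set.intersection(*(set(e) for e in events))
    num = pr(inter & set(given)) / pr(given)
    den = math.prod(pr(set(e) & set(given)) / pr(given) for e in events)
    return num / den

T  = {"w0", "w1", "w2", "w4"}
e1 = {"w0", "w1", "w2", "w3"}
e2 = {"w0", "w1", "w4", "w5"}

lhs = pd_K(T, e1 & e2)
rhs = pd_K(T, e1) * pd_K(T, e2) * pd_K_given([e1, e2], T) / pd_K(e1, e2)
print(lhs, rhs)  # equal
```

The same check, with the roles of hypotheses and evidence swapped, verifies the analogous factorization of Theorem 5.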

Proof of Theorem 4

Let \(W\) be a set of possibilities and let \(\mathcal {A}\) be some algebra over \(W\). The elements of \(\mathcal {A}\) are interpreted as propositions expressible in some suitable language \(\mathcal {L}\) as specified in more detail in Appendix 2 in connection with Theorem 2. The possibilities in \(W\) can be interpreted as models for \(\mathcal {L}\). Let \(e_0,\ldots , e_n,\ldots \) be a sequence of propositions of \(\mathcal {A}\) that separates \(W\), and let \(e^w_i =e_i\) if \(w\,\vDash\, e_i\) and \(\overline{e_i}\) otherwise. Let \(\Pr \) be a regular (or strict) probability function on \(\mathcal {A}\). Let \(\Pr ^*\) be the unique probability function on the smallest \(\sigma \)-field \(\mathcal {A}^*\) containing the field \(\mathcal {A}\) satisfying \(\Pr ^*(A)=\Pr (A)\) for all \(A\in \mathcal {A}\).

Then according to Theorem 2 there is a \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\) so that the following holds for every \(w\in W^\prime \) and all theories \(T_1\) and \(T_2\) of \(\mathcal {A}\).

  1.

    If \(w\,\vDash\, T_1\) and \(w\,\vDash\, \overline{T_2}\), then:

    $$\begin{aligned} \exists n \forall m\ge n{:}\, \left[{\mathfrak {pd}}_K\left(T_1, \bigcap _{0\le i\le m}e^w_i \right)>{\mathfrak {pd}}_K\left(T_2 , \bigcap _{0\le i\le m}e^w_i\right)\right] \end{aligned}$$
  2.

    If \(w\,\vDash\, T_1\cap T_2\) and \(T_1\,\vDash\, T_2\) but \(T_2\,\nvDash\, T_1\), then:

    $$\begin{aligned} \exists n \forall m\ge n{:}\, \left[{\mathfrak {pd}}_K\left(T_1,\bigcap _{0\le i\le m}e^w_i\right)> {\mathfrak {pd}}_K\left(T_2 ,\bigcap _{0\le i\le m}e^w_i\right)\right]. \end{aligned}$$

According to Corollary 6 it holds that

$$\begin{aligned} &{\mathfrak {pd}}_K\left(T_1,\bigcap _{0\le i\le m}e^w_i\right)> {\mathfrak {pd}}_K\left(T_2,\bigcap _{0\le i\le m}e^w_i\right) \\ &\quad \Leftrightarrow {\mathfrak {pd}}_K\left(T_1,e^w_1,\ldots , e^w_m\right)> {\mathfrak {pd}}_K\left(T_2,e^w_1,\ldots , e^w_m\right). \end{aligned}$$

Hence, there is a \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\) so that the following holds for every \(w\in W^\prime \) and all theories \(T_1\) and \(T_2\) of \(\mathcal {A}\).

  1.

    If \(w\,\vDash\, T_1\) and \(w\,\vDash\, \overline{T_2}\), then:

    $$\begin{aligned} \exists n \forall m\ge n{:}\, [{\mathfrak {pd}}_K(T_1,e^w_1 ,\ldots , e^w_m )>{\mathfrak {pd}}_K(T_2 ,e^w_1 ,\ldots , e^w_m )] \end{aligned}$$
  2.

    If \(w\,\vDash\, T_1\cap T_2\) and \(T_1\,\vDash\, T_2\) but \(T_2\,\nvDash\, T_1\), then:

    $$\begin{aligned} \exists n \forall m\ge n{:}\, [{\mathfrak {pd}}_K(T_1,e^w_1 ,\ldots , e^w_m )> {\mathfrak {pd}}_K(T_2 ,e^w_1 ,\ldots , e^w_m )]. \end{aligned}$$

Proof of Theorem 5

$$\begin{aligned} {\mathfrak {pd}}_K(h_1\cap \cdots \cap h_n,E)&= \frac{\Pr (h_1\cap \cdots \cap h_n\cap E)}{\Pr (h_1\cap \cdots \cap h_n)\times \Pr (E)}\\&= \frac{\Pr (h_1|E)}{\Pr (h_1)}\times \cdots \times \frac{\Pr (h_n|E)}{\Pr (h_n)}\times \frac{\frac{\Pr (h_1\cap \cdots \cap h_n|E)}{\Pr (h_1\cap \cdots \cap h_n)}}{ \frac{\Pr (h_1|E)}{\Pr (h_1)}\times \cdots \times \frac{\Pr (h_n|E)}{\Pr (h_n)}}\\&= \frac{\Pr (h_1|E)}{\Pr (h_1)}\times \cdots \times \frac{\Pr (h_n|E)}{\Pr (h_n)}\times \frac{\frac{\Pr (h_1\cap \cdots \cap h_n|E)}{\Pr (h_1\cap \cdots \cap h_n)}}{\frac{\Pr (h_1|E)\times \cdots \times \Pr (h_n|E)}{\Pr (h_1)\times \cdots \times \Pr (h_n)}}\\&={\mathfrak {pd}}_K(h_1,E)\times \cdots \times {\mathfrak {pd}}_K(h_n,E)\times \dfrac{{\mathfrak {pd}}_K(h_1,\ldots , h_n|E)}{{\mathfrak {pd}}_K(h_1,\ldots ,h_n)} \end{aligned}$$

Proof of Theorem 6

$$\begin{aligned} {\mathfrak {pd}}_K(h_1\cap \cdots \cap h_n,e_1\cap \cdots \cap e_m)&= \frac{\Pr (h_1\cap \cdots \cap h_n \cap e_1\cap \cdots \cap e_m)}{\Pr (h_1\cap \cdots \cap h_n)\times \Pr (e_1\cap \cdots \cap e_m)}\\&= \frac{\frac{\Pr (h_1\cap \cdots \cap h_n \cap e_1\cap \cdots \cap e_m)}{\Pr (h_1)\times \cdots \times \Pr (h_n)\times \Pr (e_1)\times \cdots \times \Pr (e_m)}}{\frac{\Pr (h_1\cap \cdots \cap h_n)}{\Pr (h_1)\times \cdots \times \Pr (h_n)}\times \frac{\Pr (e_1\cap \cdots \cap e_m)}{\Pr (e_1)\times \cdots \times \Pr (e_m)}}\\&=\dfrac{{\mathfrak {pd}}_K(h_1, \ldots ,h_n,e_1,\ldots , e_m)}{{\mathfrak {pd}}_K(h_1,\ldots , h_n)\times {\mathfrak {pd}}_K(e_1, \ldots , e_m)} \end{aligned}$$
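The identity of Theorem 6 can likewise be checked on a toy model. The sketch below (a hypothetical example, not from the paper) uses two hypotheses and two pieces of evidence on a four-coordinate space; all names (`pd`, `h1`, `h2`, `e1`, `e2`) are illustrative assumptions.

```python
from itertools import product
import random

# Toy probability space over bit-quadruples with random, strictly
# positive probabilities (an illustrative setup, not from the paper).
random.seed(1)
worlds = list(product([0, 1], repeat=4))
weights = [random.random() + 0.1 for _ in worlds]
total = sum(weights)
P = {w: wt / total for w, wt in zip(worlds, weights)}

def pr(event):
    """Pr of an event, represented as a set of worlds."""
    return sum(P[w] for w in event)

def pd(*events):
    """Keynes's coefficient: Pr(A1 ∩ … ∩ An) / (Pr(A1) × … × Pr(An))."""
    inter = set(worlds)
    denom = 1.0
    for e in events:
        inter &= e
        denom *= pr(e)
    return pr(inter) / denom

# Hypotheses h1, h2 and evidence e1, e2, one per coordinate.
h1 = {w for w in worlds if w[0] == 1}
h2 = {w for w in worlds if w[1] == 1}
e1 = {w for w in worlds if w[2] == 1}
e2 = {w for w in worlds if w[3] == 1}

# Theorem 6: pd(h1 ∩ h2, e1 ∩ e2)
#            = pd(h1, h2, e1, e2) / (pd(h1, h2) × pd(e1, e2))
lhs = pd(h1 & h2, e1 & e2)
rhs = pd(h1, h2, e1, e2) / (pd(h1, h2) * pd(e1, e2))
```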

Proof of Theorem 7

Let \(W\) be a set of possibilities and let \(\mathcal {A}\) be some algebra over \(W\). The elements of \(\mathcal {A}\) are interpreted as propositions expressible in some suitable language \(\mathcal {L}\), as specified in more detail in Appendix 2 in connection with Theorem 2. The possibilities in \(W\) can be interpreted as models for \(\mathcal {L}\). Let \(e_0,\ldots , e_n,\ldots \) be a sequence of propositions of \(\mathcal {A}\) that separates \(W\), and let \(e^w_i =e_i\) if \(w\,\vDash\, e_i\) and \(e^w_i=\overline{e_i}\) otherwise. Let \(\Pr \) be a regular (or strict) probability function on \(\mathcal {A}\). Let \(\Pr ^*\) be the unique probability function on the smallest \(\sigma \)-field \(\mathcal {A}^*\) containing the field \(\mathcal {A}\) satisfying \(\Pr ^*(A)=\Pr (A)\) for all \(A\in \mathcal {A}\).

According to Theorem 2 there is a \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\) so that the following holds for every \(w\in W^\prime \) and all hypotheses \(h_1, \ldots , h_n\) and \(h^\prime _1, \ldots , h^\prime _m\) of \(\mathcal {A}\).

  1.

    If \(w\,\vDash\, h_1\cap \cdots \cap h_n\) and \(w\,\vDash\, (\overline{h^\prime _1 \cap \cdots \cap h^\prime _m})\), then:

    $$\begin{aligned} \exists k \forall l\ge k{:}\, [{\mathfrak {pd}}_K(h_1\cap \cdots \cap h_n,E^w_l)> {\mathfrak {pd}}_K(h^\prime _1 \cap \cdots \cap h^\prime _m ,E^w_l)] \end{aligned}$$
  2.

    If \(w\,\vDash\, h_1\cap \cdots \cap h_n\cap h^\prime _1 \cap \cdots \cap h^\prime _m\) and \(h_1\cap \cdots \cap h_n\,\vDash\, h^\prime _1 \cap \cdots \cap h^\prime _m\) but \(h^\prime _1 \cap \cdots \cap h^\prime _m\,\nvDash\, h_1\cap \cdots \cap h_n\), then:

    $$\begin{aligned} \exists k \forall l\ge k{:}\, [{\mathfrak {pd}}_K(h_1\cap \cdots \cap h_n,E^w_l)> {\mathfrak {pd}}_K(h^\prime _1 \cap \cdots \cap h^\prime _m ,E^w_l)] \end{aligned}$$

    where \(E^w_l=\bigcap _{0\le i\le l}e^w_i\).

Corollary 8 implies that:

$$\begin{aligned}&{\mathfrak {pd}}_K(h_1\cap \cdots \cap h_n,E^w_l)> {\mathfrak {pd}}_K(h^\prime _1 \cap \cdots \cap h^\prime _m ,E^w_l)\\ &\quad\Leftrightarrow\frac{{\mathfrak {pd}}_K(h_1, \ldots , h_n,E^w_l)}{{\mathfrak {pd}}_K(h_1, \ldots , h_n)}>\frac{{\mathfrak {pd}}_K(h^\prime _1, \ldots , h^\prime _m,E^w_l)}{{\mathfrak {pd}}_K(h^\prime _1, \ldots , h^\prime _m)} \end{aligned}$$

Hence, there is a \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\) so that the following holds for every \(w\in W^\prime \) and all hypotheses \(h_1, \ldots , h_n\) and \(h^\prime _1, \ldots , h^\prime _m\) of \(\mathcal {A}\).

  1.

    If \(w\,\vDash\, h_1\cap \cdots \cap h_n\) and \(w\,\vDash\, (\overline{h^\prime _1 \cap \cdots \cap h^\prime _m})\), then:

    $$\begin{aligned} \exists k \forall l\ge k{:}\, \left[ \dfrac{{\mathfrak {pd}}_K(h_1, \ldots , h_n, E^w_l )}{{\mathfrak {pd}}_K(h_1, \ldots , h_n)}>\dfrac{{\mathfrak {pd}}_K(h^\prime _1, \ldots , h^\prime _m, E^w_l )}{{\mathfrak {pd}}_K(h^\prime _1, \ldots , h^\prime _m)}\right] . \end{aligned}$$
  2.

    If \(w\,\vDash\, h_1\cap \cdots \cap h_n\cap h^\prime _1 \cap \cdots \cap h^\prime _m\) and \(h_1\cap \cdots \cap h_n\,\vDash\, h^\prime _1 \cap \cdots \cap h^\prime _m\) but \(h^\prime _1 \cap \cdots \cap h^\prime _m\,\nvDash\, h_1\cap \cdots \cap h_n\), then:

    $$\begin{aligned} \exists k \forall l\ge k{:}\, \left[ \dfrac{{\mathfrak {pd}}_K(h_1, \ldots , h_n, E^w_l )}{{\mathfrak {pd}}_K(h_1, \ldots , h_n)}>\dfrac{{\mathfrak {pd}}_K(h^\prime _1, \ldots , h^\prime _m, E^w_l )}{{\mathfrak {pd}}_K(h^\prime _1, \ldots , h^\prime _m)}\right] \end{aligned}$$

    where \(E^w_l=\bigcap _{0\le i\le l}e^w_i\).

Cite this article

Brössel, P. Keynes’s Coefficient of Dependence Revisited. Erkenn 80, 521–553 (2015). https://doi.org/10.1007/s10670-014-9672-3