
On the role of explanatory and systematic power in scientific reasoning


Abstract

The paper investigates measures of explanatory power and how to define the inference schema “Inference to the Best Explanation” (IBE). It argues that these measures can also be used to quantify the systematic power of a hypothesis, and it defines the corresponding inference schema “Inference to the Best Systematization” (IBS). It demonstrates that systematic power is a fruitful criterion for theory choice and that IBS is truth-conducive. It also shows that even radical Bayesians must admit that systematic power is an integral component of Bayesian reasoning. Finally, the paper relates these results to van Fraassen’s famous criticism of IBE.


Notes

  1. Some philosophers argue that one must relativize the third premise to a set of available hypotheses: whether some hypothesis is the best explanation, so the argument goes, can be evaluated only in contrast to other hypotheses.

  2. Note that the following Requirement 2 seems to admit the possibility that some hypotheses explain the evidence while actually lowering its probability. (The requirement specifies how a measure of explanatory power behaves in case \(\Pr (E|H)<\Pr (E)\), even though we do not know whether the assumption that the hypothesis actually explains the evidence is compatible with the case of \(\Pr (E|H)<\Pr (E)\). A Gricean implicature of this specification is that advocates of the requirement are ready to admit the possibility that a hypothesis explains the evidence even though \(\Pr (E|H)<\Pr (E)\); if they did not admit that this is a possible case, why would they specify a necessary requirement for it?) Thus, it is an interesting question whether this is possible, and the answer certainly depends on the notion of explanation presupposed. For example, probabilistic theories of causation allow for the possibility of probability-lowering causes. Given an understanding of explanation as causal explanation, it therefore seems possible to explain some observational fact by citing a cause that decreases its probability. For a discussion of the formal requirement in the context of confirmation theory, see Crupi et al. (2007).

  3. I adopt the following notational convention with respect to function terms: I use gothic fonts for function variables and normal calligraphic fonts for function constants.

  4. A closely related requirement can already be found in the work of Harman (1967) in connection with the inference schema IBE (even though it is restricted to statistical probabilities and statistical hypotheses). Harman calls it the generalized maximum likelihood condition. Crupi calls a related requirement for measures of confirmation the Final Probability requirement. For a discussion of the latter requirement see Crupi (2013).

  5. The original formulation of Popper’s (1959) measure of explanatory power is this: \(ep_{\Pr }^{Popper}(E,H)=\frac{\Pr (E|H)-\Pr (H)}{\Pr (E|H)+\Pr (H)}\).

  6. Especially since such an undertaking would not only require a great deal of interpretational work but would also force us to elaborate the basis on which we disagree with Rescher’s arguments.

  7. In this theorem and in Theorem 3 and Corollary 1 below we assume that the probability function is a strict or regular probability function. Thus, in Howson’s words, the theorems again confirm

    Hume’s argument that there is no sound inductive argument from experiential data that does not incorporate an inductive premise, and it also tells us what the inductive premise will look like: it will be a probability assignment that is not deducible from the probability axioms. (Howson 2003, p. 134)

    Without the “inductive premise” that we are dealing with a strict or regular probability function, we would have to replace the ‘\(>\)’ in Point 2 of Theorems 1–3 by ‘\(\ge \)’. In consequence, considerations of systematic power could not distinguish between a logically stronger true hypothesis \(H_1\) and a logically weaker true hypothesis \(H_2\), if both of them had the same prior probability despite the difference in their logical strength. Indeed, no purely probabilistic inference rule could distinguish between them, since from a probabilistic perspective there would be no discernible difference between the hypotheses: in this case \(\Pr (H_2\rightarrow H_1)=1\) and, thus, Bayesians would treat \((H_2\rightarrow H_1\wedge H_2)\) as if it were a logical truth. For a longer and more comprehensive discussion of the connection between logical strength, prior probabilities (or informativity) and theory choice see Brössel (2014).

  8. Hempel (1960), on the one hand, and Levi (1967) and Huber (2008), on the other, disagree in their evaluation of false hypotheses: where Hempel prefers logically weaker false hypotheses to logically stronger ones, Levi and Huber prefer logically stronger false hypotheses to logically weaker ones. For discussion, see Brössel (2014).

References

  • Belot, G. (2013). Bayesian orgulity. Philosophy of Science, 80, 483–503.

  • Brössel, P. (2008). Theory assessment and coherence. Abstracta, 4, 57–71.

  • Brössel, P. (2012). Rethinking Bayesian confirmation theory—Steps towards a new Bayesian theory of confirmation. PhD thesis, University of Konstanz.

  • Brössel, P. (2014). Assessing theories: The coherentist approach. Erkenntnis, 79, 593–623.

  • Brössel, P. (2015). Keynes’s coefficient of dependence revisited. Erkenntnis, 80, 521–553.

  • Brössel, P., & Eder, A.-M. A. (2014). How to resolve doxastic disagreement. Synthese, 191, 2359–2381.

  • Carnap, R. (1950). Empiricism, semantics, and ontology. Revue Internationale de Philosophie, 4, 20–40. Reprinted in the Supplement to Carnap, R. (1956). Meaning and necessity: A study in semantics and modal logic. Chicago: University of Chicago Press.

  • Crupi, V. (2013). Confirmation. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy. http://plato.stanford.edu/archives/win2013/entries/confirmation/.

  • Crupi, V., Tentori, K., & Gonzalez, M. (2007). On Bayesian measures of evidential support: Theoretical and empirical issues. Philosophy of Science, 74, 229–252.

  • Crupi, V., & Tentori, K. (2012). A second look at the logic of explanatory power (with two novel representation theorems). Philosophy of Science, 79, 365–385.

  • Darwin, C. (1872). The origin of species by means of natural selection, or the preservation of favoured races in the struggle for life (6th ed.). London: John Murray.

  • Douven, I. (2011). Abduction. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2011 Edition). http://plato.stanford.edu/archives/spr2011/entries/abduction/.

  • Earman, J. (1992). Bayes or bust? A critical examination of Bayesian confirmation theory. Cambridge: MIT Press.

  • Friedman, M. (2002). Kant, Kuhn, and the rationality of science. Philosophy of Science, 69, 171–190.

  • Gaifman, H., & Snir, M. (1982). Probabilities over rich languages, testing, and randomness. Journal of Symbolic Logic, 47, 495–548.

  • Good, I. (1960). Weight of evidence, corroboration, explanatory power, information and the utility of experiments. Journal of The Royal Statistical Society. Series B (Methodological), 22, 319–331.

  • Harman, G. (1965). The inference to the best explanation. Philosophical Review, 74, 88–95.

  • Harman, G. (1967). Detachment, probability, and maximum likelihood. Noûs, 1, 401–411.

  • Hawthorne, J. (2014). Inductive logic. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2014 Edition). http://plato.stanford.edu/archives/sum2014/entries/logic-inductive/.

  • Hempel, C. (1958). The theoretician’s dilemma. In H. Feigl, M. Scriven, & G. Maxwell (Eds.), Minnesota studies in the philosophy of science (Vol. 2). Minneapolis: University of Minnesota Press.

  • Hempel, C. (1960). Inductive inconsistencies. Synthese, 12, 439–469.

  • Hempel, C., & Oppenheim, P. (1948). Studies in the logic of explanation. Reprinted in C. Hempel (1965), Aspects of scientific explanation and other essays in the philosophy of science (pp. 245–291). New York: Free Press.

  • Horwich, P. (1982). Probability and evidence. Cambridge: Cambridge University Press.

  • Howson, C. (2003). Hume’s problem. Oxford: Oxford University Press.

  • Huber, F. (2008). Assessing theories, Bayes’ style. Synthese, 161, 89–118.

  • Huttegger, S. (2015a). Merging of opinions and probability kinematics. Review of Symbolic Logic (Accepted).

  • Huttegger, S. (2015b). Bayesian convergence to the truth and the metaphysics of possible worlds. Philosophy of Science (Accepted).

  • Jeffrey, R. (1956). Valuation and acceptance of scientific hypotheses. Philosophy of Science, 23, 237–246.

  • Jeffrey, R. (1992). Radical probabilism (prospectus for a user’s manual). Philosophical Issues, 2, 193–204.

  • Kemeny, J., & Oppenheim, P. (1952). Degree of factual support. Philosophy of Science, 19, 307–324.

  • Keynes, J. (1921). A treatise on probability. London: Macmillan.

  • Kolmogorov, A. (1933). Grundbegriffe der Wahrscheinlichkeitsrechnung. Berlin: Julius Springer.

  • Levi, I. (1967). Gambling with truth. New York: A. A. Knopf.

  • Lewis, D. (1973). Counterfactuals. Cambridge, MA: Harvard University Press.

  • McMullin, E. (1992). The inference that makes science. Milwaukee: Marquette University Press.

  • Milne, P. (1996). \(log[p(h/eb)/p(h/b)]\) is the one true measure of confirmation. Philosophy of Science, 63, 21–26.

  • Newton, I. (1729). The mathematical principles of natural philosophy (A. Motte, Trans.). New York: Daniel Adee.

  • Niiniluoto, I. (2011). Scientific progress. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2011 Edition). http://plato.stanford.edu/archives/sum2011/entries/scientific-progress/.

  • Okasha, S. (2000). Van Fraassen’s critique of inference to the best explanation. Studies in History and Philosophy of Science, 31, 691–710.

  • Popper, K. (1959). The logic of scientific discovery. London: Hutchinson.

  • Psillos, S. (1996). On van Fraassen’s critique of abductive reasoning. The Philosophical Quarterly, 46, 31–47.

  • Rescher, N. (2005). Studies in pragmatism. Heusenstamm: Ontos.

  • Schervish, M., & Seidenfeld, T. (1990). An approach to consensus and certainty with increasing evidence. Journal of Statistical Planning and Inference, 25, 401–414.

  • Schupbach, J. (2011). Comparing probabilistic measures of explanatory power. Philosophy of Science, 78, 813–829.

  • Schupbach, J. (2014). Is the bad lot objection just misguided? Erkenntnis, 79, 55–64.

  • Schupbach, J., & Sprenger, J. (2011). The logic of explanatory power. Philosophy of Science, 78, 105–127.

  • van Fraassen, B. (1980). The scientific image. Oxford: Oxford University Press.

  • van Fraassen, B. (1989). Laws and symmetry. Oxford: Oxford University Press.

  • Vogel, J. (1990). Cartesian skepticism and inference to the best explanation. Journal of Philosophy, 87, 658–666.

  • Weisberg, J. (2009). Locating IBE in the Bayesian framework. Synthese, 167, 125–143.


Acknowledgments

I am indebted to Ralf Busse, Vincenzo Crupi, Markus Eronen, Branden Fitelson (for making me aware of the Harman (1967) paper), Albert Newen, Gerhard Schurz (and the members of his research colloquium), and especially Matteo Colombo, Anna-Maria A. Eder, Jan Sprenger, and Ben Young. Finally, I am also grateful to two (very challenging) referees of this journal.

Author information

Correspondence to Peter Brössel.

Appendix

1.1 Proof of Theorem 1

We have to show that \(ep^1\)–\(ep^3\) satisfy Requirements 1–3.

Proof for \(ep^1\)

  1. \(ep^1\) satisfies Requirement 1 trivially: it is defined in terms of probabilities.

  2. \(ep^1\) satisfies Requirement 2 with marker 1:

    $$\begin{aligned} ep_{\Pr }^1(H,E)= \frac{\Pr (E|H)}{\Pr (E)}= {\left\{ \begin{array}{ll} >1, &{} \Pr (E|H )>\Pr (E)\\ =1, &{} \Pr (E|H )=\Pr (E)\\ <1, &{} \Pr (E|H )<\Pr (E) \end{array}\right. } \end{aligned}$$
  3. \(ep^1\) satisfies Requirement 3: if \(\Pr (E|H_1)>\Pr (E|H_2)\), then

    $$\begin{aligned} ep_{\Pr }^1(H_1,E)=\frac{\Pr (E|H_1)}{\Pr (E)}>\frac{\Pr (E|H_2)}{\Pr (E)}=ep_{\Pr }^1(H_2,E). \end{aligned}$$
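To make the behaviour of \(ep^1\) concrete, here is a minimal Python sketch (the function name and the toy numbers are illustrative assumptions, not from the paper) that computes \(ep^1(H,E)=\Pr (E|H)/\Pr (E)\) and spot-checks Requirements 2 and 3:

```python
def ep1(p_e_given_h: float, p_e: float) -> float:
    """ep^1(H, E) = Pr(E|H) / Pr(E), with marker 1 for irrelevance."""
    return p_e_given_h / p_e

# Toy values (illustrative): Pr(E) = 0.4.
p_e = 0.4
print(ep1(0.8, p_e))  # 2.0  > 1: H raises the probability of E (Requirement 2)
print(ep1(0.4, p_e))  # 1.0  = 1: H is probabilistically irrelevant to E
print(ep1(0.2, p_e))  # 0.5  < 1: H lowers the probability of E
# Requirement 3: the higher the likelihood Pr(E|H), the higher ep^1.
assert ep1(0.8, p_e) > ep1(0.2, p_e)
```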

Proof for \(ep^2\)

  1. \(ep^2\) satisfies Requirement 1 trivially: it is defined in terms of probabilities.

  2. \(ep^2\) satisfies Requirement 2 with marker 0: First note that

    $$\begin{aligned} ep_{\Pr }^2(H,E)= \frac{\Pr (H|E)-\Pr (H|\lnot E)}{\Pr (H|E)+\Pr (H|\lnot E)}={\left\{ \begin{array}{ll} >0, &{} \Pr (H|E )>\Pr (H|\lnot E)\\ =0, &{} \Pr (H|E )=\Pr (H|\lnot E)\\ <0, &{} \Pr (H|E )<\Pr (H|\lnot E) \end{array}\right. } \end{aligned}$$

    Now we only have to see that

    $$\begin{aligned} \Pr (H|E )\begin{array}{rl} >\\ = \\ < \end{array} \Pr (H|\lnot E) \Leftrightarrow \Pr (H|E)\begin{array}{rl} >\\ = \\ < \end{array}\Pr (H)\Leftrightarrow \Pr (E|H)\begin{array}{rl} >\\ = \\ < \end{array}\Pr (E) \end{aligned}$$
  3. \(ep^2\) satisfies Requirement 3: if \(\Pr (E|H_1)>\Pr (E|H_2)\), then

    (a) \(\frac{\Pr (E|H_1)}{\Pr (E)}>\frac{\Pr (E|H_2)}{\Pr (E)}\)

    (b) \(\Pr (\lnot E|H_1)<\Pr (\lnot E|H_2)\) and, thus, also: \(\frac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}<\frac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\)

    (a) and (b) imply that:

    $$\begin{aligned} \left[ \frac{\Pr (E|H_1)}{\Pr (E)}\times \frac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\right]- & {} \left[ \frac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}\times \frac{\Pr (E|H_2)}{\Pr (E)}\right] \\> & {} \\ \left[ \frac{\Pr (E|H_2)}{\Pr (E)}\times \frac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}\right]- & {} \left[ \frac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\times \frac{\Pr (E|H_1)}{\Pr (E)}\right] \end{aligned}$$

    and that therefore:

    $$\begin{aligned} \bigg [\frac{\Pr (E|H_1)}{\Pr (E)}\times \frac{\Pr (E|H_2)}{\Pr (E)}+\frac{\Pr (E|H_1)}{\Pr (E)}\times \frac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\bigg ]- & {} \bigg [\frac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}\times \frac{\Pr (E|H_2)}{\Pr (E)}+\frac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}\times \frac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\bigg ]\\> & {} \\ \bigg [\frac{\Pr (E|H_1)}{\Pr (E)}\times \frac{\Pr (E|H_2)}{\Pr (E)}+\frac{\Pr (E|H_2)}{\Pr (E)}\times \frac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}\bigg ]- & {} \bigg [\frac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\times \frac{\Pr (E|H_1)}{\Pr (E)}+\frac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}\times \frac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\bigg ] \end{aligned}$$

    and

    $$\begin{aligned} \bigg [\frac{\Pr (E|H_1)}{\Pr (E)}-\frac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}\bigg ]\times & {} \bigg [\frac{\Pr (E|H_2)}{\Pr (E)}+\frac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\bigg ]\\> & {} \\ \bigg [\frac{\Pr (E|H_2)}{\Pr (E)}-\frac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\bigg ]\times & {} \bigg [\frac{\Pr (E|H_1)}{\Pr (E)}+\frac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}\bigg ] \end{aligned}$$

    which implies:

    $$\begin{aligned} \frac{\bigg [\dfrac{\Pr (E|H_1)}{\Pr (E)}-\dfrac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}\bigg ]}{\bigg [\dfrac{\Pr (E|H_1)}{\Pr (E)}+\dfrac{\Pr (\lnot E|H_1)}{\Pr (\lnot E)}\bigg ]} >\frac{\bigg [\dfrac{\Pr (E|H_2)}{\Pr (E)}-\dfrac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\bigg ]}{\bigg [\dfrac{\Pr (E|H_2)}{\Pr (E)}+\dfrac{\Pr (\lnot E|H_2)}{\Pr (\lnot E)}\bigg ]} \end{aligned}$$

    We can reformulate this as follows:

    $$\begin{aligned} \frac{\bigg [\dfrac{\Pr (H_1|E)}{\Pr (H_1)}-\dfrac{\Pr (H_1|\lnot E)}{\Pr (H_1)}\bigg ]}{\bigg [\dfrac{\Pr (H_1|E)}{\Pr (H_1)}+\dfrac{\Pr (H_1|\lnot E)}{\Pr (H_1)}\bigg ]} >\frac{\bigg [\dfrac{\Pr (H_2|E)}{\Pr (H_2)}-\dfrac{\Pr (H_2|\lnot E)}{\Pr (H_2)}\bigg ]}{\bigg [\dfrac{\Pr (H_2|E)}{\Pr (H_2)}+\dfrac{\Pr (H_2|\lnot E)}{\Pr (H_2)}\bigg ]} \end{aligned}$$

    Finally, by cancelling \(\Pr (H_1)\) and \(\Pr (H_2)\), respectively, out of these formulae we get the desired result:

    $$\begin{aligned} \frac{\big [\Pr (H_1|E)-\Pr (H_1|\lnot E)\big ]}{\big [\Pr (H_1|E)+\Pr (H_1|\lnot E)\big ]} >\frac{\big [\Pr (H_2|E)-\Pr (H_2|\lnot E)\big ]}{\big [\Pr (H_2|E)+\Pr (H_2|\lnot E)\big ]} \end{aligned}$$
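The pivotal step in this proof is the equivalence \(\Pr (H|E)\gtrless \Pr (H|\lnot E)\Leftrightarrow \Pr (E|H)\gtrless \Pr (E)\). A small Python sketch (random sampling over joint distributions is my own checking device, not part of the proof) confirms the sign agreement and the resulting marker-0 behaviour of \(ep^2\):

```python
import random

def ep2(p_h_e: float, p_h_note: float) -> float:
    """ep^2(H, E) = (Pr(H|E) - Pr(H|~E)) / (Pr(H|E) + Pr(H|~E))."""
    return (p_h_e - p_h_note) / (p_h_e + p_h_note)

def sign(x: float, eps: float = 1e-12) -> int:
    return 0 if abs(x) < eps else (1 if x > 0 else -1)

random.seed(0)
for _ in range(10_000):
    # Random joint distribution over the four cells H&E, H&~E, ~H&E, ~H&~E.
    cells = [random.random() for _ in range(4)]
    z = sum(cells)
    p_he, p_hne, p_nhe, p_nhne = (c / z for c in cells)
    p_h, p_e = p_he + p_hne, p_he + p_nhe
    p_h_e = p_he / p_e            # Pr(H|E)
    p_h_note = p_hne / (1 - p_e)  # Pr(H|~E)
    p_e_h = p_he / p_h            # Pr(E|H)
    # Pr(H|E) >/=/< Pr(H|~E) iff Pr(E|H) >/=/< Pr(E); hence ep^2 has marker 0.
    assert sign(p_h_e - p_h_note) == sign(p_e_h - p_e)
    assert sign(ep2(p_h_e, p_h_note)) == sign(p_e_h - p_e)
print("sign equivalence holds on all sampled joint distributions")
```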

Proof for \(ep^3\)

  1. \(ep^3\) satisfies Requirement 1 trivially: it is defined in terms of probabilities.

  2. \(ep^3\) satisfies Requirement 2 with marker 0: First note that

    $$\begin{aligned} ep_{\Pr }^3(H, E)= {\left\{ \begin{array}{ll}\frac{\Pr (E|H)-\Pr (E)}{1- \Pr (E)} &{} \hbox { if } \Pr (E|H )\ge \Pr (E)>0\\ \frac{\Pr (E|H)-\Pr (E)}{\Pr (E)} &{} \hbox { if } \Pr (E|H )< \Pr (E)\\ \end{array}\right. } \end{aligned}$$

    Thus, since both denominators are positive and the numerator \(\Pr (E|H)-\Pr (E)\) is positive, zero, or negative exactly when \(\Pr (E|H)\) is greater than, equal to, or less than \(\Pr (E)\), \(ep_{\Pr }^3(H,E)\) is \(>0\), \(=0\), or \(<0\) in precisely these cases.

  3. \(ep^3\) satisfies Requirement 3: if \(\Pr (E|H_1)>\Pr (E|H_2)\), then

    (a) \(ep_{\Pr }^3(H_1,E)=\frac{\Pr (E|H_1)-\Pr (E)}{1-\Pr (E)}>ep_{\Pr }^3(H_2,E)=\frac{\Pr (E|H_2)-\Pr (E)}{1-\Pr (E)}\), if \(\Pr (E|H_1)\ge \Pr (E)\) and \(\Pr (E|H_2)\ge \Pr (E)\).

    (b) \(ep_{\Pr }^3(H_1,E)=\frac{\Pr (E|H_1)-\Pr (E)}{\Pr (E)}>ep_{\Pr }^3(H_2,E)=\frac{\Pr (E|H_2)-\Pr (E)}{\Pr (E)}\), if \(\Pr (E|H_1)<\Pr (E)\) and \(\Pr (E|H_2)<\Pr (E)\).

    (c) \(ep_{\Pr }^3(H_1,E)=\frac{\Pr (E|H_1)-\Pr (E)}{1-\Pr (E)}>ep_{\Pr }^3(H_2,E)=\frac{\Pr (E|H_2)-\Pr (E)}{\Pr (E)}\), if \(\Pr (E|H_1)\ge \Pr (E)\) and \(\Pr (E|H_2)<\Pr (E)\).
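The case analysis can likewise be checked numerically; the sketch below (toy numbers are mine) implements the piecewise definition of \(ep^3\) and verifies the ordering in cases (a)–(c):

```python
def ep3(p_e_given_h: float, p_e: float) -> float:
    """ep^3(H, E): relevance normalized by 1 - Pr(E) above Pr(E), by Pr(E) below."""
    if p_e_given_h >= p_e:
        return (p_e_given_h - p_e) / (1 - p_e)
    return (p_e_given_h - p_e) / p_e

p_e = 0.4  # illustrative Pr(E)
assert ep3(0.9, p_e) > ep3(0.6, p_e)  # case (a): both likelihoods >= Pr(E)
assert ep3(0.3, p_e) > ep3(0.1, p_e)  # case (b): both likelihoods <  Pr(E)
assert ep3(0.6, p_e) > ep3(0.3, p_e)  # case (c): one above, one below
print(ep3(0.9, p_e), ep3(0.1, p_e))   # 0.833... and -0.75
```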

1.2 Proof of Theorem 2

Let W be a set of possible worlds and let \(\mathcal {A}\) be some algebra over W. The elements of \(\mathcal {A}\) are interpreted as propositions. Let \(e_0,\ldots , e_n,\ldots \) be a sequence of propositions of \(\mathcal {A}\) which separates W, and let \(e^w_i =e_i\) if \(w\vDash e_i\) and \(\lnot e_i\) otherwise. Let \(\Pr \) be a strict (or regular) probability function on \(\mathcal {A}\). Let \(\Pr ^*\) be the unique probability function on the smallest \(\sigma \)-field \(\mathcal {A}^*\) containing the field \(\mathcal {A}\) satisfying \(\Pr ^*(A)=\Pr (A)\) for all \(A\in \mathcal {A}\). Then there is a \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\) so that the following holds for every \(w\in W^\prime \) and all hypotheses \(H \in \mathcal {A}\) and for all \(\mathfrak {sp}_{\Pr }\) satisfying Requirements 1–3.

Then, according to the Gaifman–Snir Theorem (1982), there is a \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\) so that the following holds for every \(w\in W^\prime \) and all theories H of \(\mathcal {A}\):

$$\begin{aligned} \lim _{n\rightarrow \infty }\Pr (H|E^w_n)=\mathcal {I}(H,w) \end{aligned}$$

where \(\mathcal {I}(H,w)=1\), if \(w\vDash H\) and 0 otherwise.

  1. Suppose \(w\vDash H_1\) and \(w\vDash \lnot H_2\). Then \(\lim _{n\rightarrow \infty }\Pr (H_1|E^w_n)=1\) and \(\lim _{n\rightarrow \infty }\Pr (H_2|E^w_n)=0\), which implies that \( \exists n \forall m\ge n:\Pr (H_1|E^w_m)>\Pr (H_1) \& \Pr (H_2|E^w_m)<\Pr (H_2)\). The latter entails, by the symmetry of probabilistic relevance, that \( \exists n \forall m\ge n:\Pr (E^w_m|H_1)>\Pr (E^w_m) \& \Pr (E^w_m|H_2)<\Pr (E^w_m)\), and therefore that \(\exists n \forall m\ge n:\Pr (E^w_m|H_1)>\Pr (E^w_m|H_2)\). Thus, with Requirement 3 on measures of explanatory and systematic power we can conclude that \(\exists n \forall m\ge n: [\mathfrak {sp}_{\Pr }(H_1, E^w_m )>\mathfrak {sp}_{\Pr }(H_2 , E^w_m )]\).

  2. Suppose \(w\vDash H_1\cap H_2\) and \(H_1\vDash H_2\), but \(H_2\nvDash H_1\). We already know that

    $$\begin{aligned} \lim _{n\rightarrow \infty }\left[ \frac{\Pr (H|E_n^w)}{\Pr (H)}\right] = \dfrac{1}{\Pr (H)},\hbox { if }\lim _{n\rightarrow \infty }\Pr (H|E^w_n)=1. \end{aligned}$$

    and thus that

    $$\begin{aligned} \lim _{n\rightarrow \infty }\left[ \frac{\Pr (E_n^w|H)}{\Pr (E_n^w)}\right] = \dfrac{1}{\Pr (H)},\hbox { if }\lim _{n\rightarrow \infty }\Pr (H|E^w_n)=1. \end{aligned}$$

    The latter implies that

    $$\begin{aligned} \lim _{n\rightarrow \infty }\left[ \frac{\Pr (E_n^w|H_1)}{\Pr (E_n^w)}\right] = \dfrac{1}{\Pr (H_1)}>\lim _{n\rightarrow \infty }\left[ \frac{\Pr (E_n^w|H_2)}{\Pr (E_n^w)}\right] = \dfrac{1}{\Pr (H_2)} \end{aligned}$$

    since \(H_1\vDash H_2\) and \(H_2\nvDash H_1\) jointly imply, by the regularity of \(\Pr \), that \(\Pr (H_1)<\Pr (H_2)\). This means that

    $$\begin{aligned} \lim _{n\rightarrow \infty }\left[ \Pr (E_n^w|H_1)\right] >\lim _{n\rightarrow \infty }\left[ \Pr (E_n^w|H_2)\right] \end{aligned}$$

    and that therefore \(\exists n \forall m\ge n:\Pr (E^w_m|H_1)>\Pr (E^w_m|H_2)\). Thus, with Requirement 3 on measures of explanatory and systematic power we can conclude that:

    $$\begin{aligned} \exists n \forall m\ge n: [\mathfrak {sp}_{\Pr }(H_1, E^w_m )>\mathfrak {sp}_{\Pr }(H_2 , E^w_m )]. \end{aligned}$$

where \(E^w_m=\bigcap _{0\le i\le m}e^w_i\).
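The convergence the theorem exploits can be illustrated in a deliberately simple i.i.d. setting. The coin-flip model below is my own toy analogue (the theorem itself concerns separating sequences of propositions over a rich algebra); \(H_1\) is the hypothesis true in the simulated world:

```python
import math
import random

random.seed(1)
TRUE_P = 0.7                      # the bias of the coin in world w
hyps = {"H1": 0.7, "H2": 0.4}     # H1 is true in w, H2 false
prior = {"H1": 0.5, "H2": 0.5}

# log Pr(E^w_n | H): log-likelihood of the growing evidence sequence.
loglik = {h: 0.0 for h in hyps}
for _ in range(1000):
    x = random.random() < TRUE_P
    for h, p in hyps.items():
        loglik[h] += math.log(p if x else 1 - p)

# Posterior by Bayes' theorem (shifting by the max avoids underflow).
m = max(loglik.values())
w = {h: prior[h] * math.exp(loglik[h] - m) for h in hyps}
z = sum(w.values())
print({h: w[h] / z for h in hyps})  # Pr(H1|E^w_n) -> 1, Pr(H2|E^w_n) -> 0
# Eventually Pr(E^w_m|H1) > Pr(E^w_m|H2), so by Requirement 3 any sp
# ranks the true H1 above the false H2 from some point onwards.
print(loglik["H1"] > loglik["H2"])  # True (with probability approaching 1)
```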

1.3 Proof of Theorem 3

Let W be a set of possible worlds and let \(\mathcal {A}\) be some algebra over W. The elements of \(\mathcal {A}\) are interpreted as propositions. Let \(e_0,\ldots , e_n,\ldots \) be a sequence of propositions of \(\mathcal {A}\) which separates W, and let \(e^w_i =e_i\) if \(w\vDash e_i\) and \(\lnot e_i\) otherwise. Let \(\Pr \) be a strict (or regular) probability function on \(\mathcal {A}\). Let \(\Pr ^*\) be the unique probability function on the smallest \(\sigma \)-field \(\mathcal {A}^*\) containing the field \(\mathcal {A}\) satisfying \(\Pr ^*(A)=\Pr (A)\) for all \(A\in \mathcal {A}\). Then there is a \(W^\prime \subseteq W\) with \(\Pr ^*(W^\prime )=1\) so that the following holds for every \(w\in W^\prime \) and all hypotheses \(H \in \mathcal {A}\) and for all \(\mathfrak {sp}_{\Pr }\) satisfying Requirements 1–3.

  1. Suppose there is a \(H_j\in \{H_1, \ldots , H_n\}\) such that \(w\vDash H_j\). Then, according to Theorem 2, for every false hypothesis \(H_i\in \{H_1, \ldots , H_n\}\) and for all \(\mathfrak {sp}_{\Pr }\) satisfying Requirements 1–3: \(\exists n \forall m\ge n: [\mathfrak {sp}_{\Pr }(H_j, E^w_m )>\mathfrak {sp}_{\Pr }(H_i , E^w_m )]\) (note that Requirement 3 is the crucial requirement here). Since there are only finitely many false hypotheses in \(\{H_1, \ldots , H_n\}\) we can conclude that:

    $$\begin{aligned}&\exists n \forall m\ge n \hbox { such that }\exists H_j\in \{H_1, \ldots , H_n\}\hbox { with }w\vDash H_j\hbox { and } \forall H_i\in \{H_1, \ldots , H_n\}\hbox { with }w\vDash \lnot H_i:\\&\quad [\mathfrak {sp}_{\Pr }(H_j, E^w_m )>\mathfrak {sp}_{\Pr }(H_i , E^w_m )] \end{aligned}$$

    Thus, with Definition 9 we can conclude that: \(\exists n \forall m\ge n\) such that if \(H_i\) is the best systematization for \(E^w_m\) with respect to the set of hypotheses \(\{H_1, \ldots , H_n\}\), then \(w\vDash H_i\). For Definitions 10 and 11 we can show the same since according to Theorem 2 the true hypothesis will be more probable than the false ones after finitely many steps of observation and for every observation thereafter.

  2. The proof for the second part proceeds along the same lines.

where \(E^w_m=\bigcap _{0\le i\le m}e^w_i\).
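In the same toy i.i.d. setting used above for Theorem 2, the first part of this theorem can be illustrated by letting a finite pool of hypotheses compete: since all candidates face the same \(\Pr (E^w_m)\), ranking by the likelihood \(\Pr (E^w_m|H)\) reproduces the ranking induced by any \(\mathfrak {sp}\) satisfying Requirement 3 (the candidate pool and seed are illustrative assumptions):

```python
import math
import random

random.seed(2)
TRUE_P = 0.7
# A finite set of point hypotheses about the coin's bias; p=0.7 is true.
hyps = {f"p={p}": p for p in (0.3, 0.5, 0.7, 0.9)}

loglik = {h: 0.0 for h in hyps}
for _ in range(2000):
    x = random.random() < TRUE_P
    for h, p in hyps.items():
        loglik[h] += math.log(p if x else 1 - p)

# The best systematization: the hypothesis maximizing Pr(E^w_m|H).
print(max(loglik, key=loglik.get))  # 'p=0.7' -- eventually the true hypothesis
```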

1.4 Proof of Theorem 6

$$\begin{aligned}&\Pr _{t_0}(H|E)=\frac{\dfrac{e^{\tanh ^{-1}\left[ {sp^2}_{t_0}(H,E)\right] } \Pr _{t_0}(E)}{e^{\tanh ^{-1}\left[ {sp^2}_{t_0}(H,E)\right] } \Pr _{t_0}(E)+e^{\tanh ^{-1}{sp^2}_{t_0}(H,\lnot E)} \Pr _{t_0}(\lnot E)}}{\Pr _{t_0}(E)}\times \Pr _{t_0}(H)\\&\quad =\frac{\dfrac{e^{\frac{1}{2}\left[ \log \left[ {sp^2}_{t_0}(H,E)+1\right] -\log \left[ 1-{sp^2}_{t_0}(H,E)\right] \right] } \Pr _{t_0}(E)}{e^{\frac{1}{2}\left[ \log \left[ {sp^2}_{t_0}(H,E)+1\right] -\log \left[ 1-{sp^2}_{t_0}(H,E)\right] \right] } \Pr _{t_0}(E)+e^{\frac{1}{2}\left[ \log \left[ {sp^2}_{t_0}(H,\lnot E)+1\right] -\log \left[ 1-{sp^2}_{t_0}(H,\lnot E)\right] \right] } \Pr _{t_0}(\lnot E)}}{\Pr _{t_0}(E)}\\&\qquad \times \Pr _{t_0}(H)\\&\quad =\frac{\dfrac{e^{\frac{1}{2}\left[ \log \left[ \frac{{sp^2}_{t_0}(H,E)+1}{1-{sp^2}_{t_0}(H,E)}\right] \right] } \Pr _{t_0}(E)}{e^{\frac{1}{2}\left[ \log \left[ \frac{{sp^2}_{t_0}(H,E)+1}{1-{sp^2}_{t_0}(H,E)}\right] \right] } \Pr _{t_0}(E)+e^{\frac{1}{2}\left[ \log \left[ \frac{{sp^2}_{t_0}(H,\lnot E)+1}{1-{sp^2}_{t_0}(H,\lnot E)}\right] \right] } \Pr _{t_0}(\lnot E)}}{\Pr _{t_0}(E)}\times \Pr _{t_0}(H) \end{aligned}$$

Now we know that

$$\begin{aligned} \frac{\frac{\Pr _{t_0}(\lnot E|H)}{\Pr _{t_0}(\lnot E)}}{\frac{\Pr _{t_0}(E|H)}{\Pr _{t_0}(E)}}&=\frac{e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|\lnot E)}{\Pr _{t_0}(H| E)}\right] }}{e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] }}\quad (\hbox {by the definition of }e\hbox { and }\log )\\ \frac{\Pr _{t_0}(\lnot E|H)}{\Pr _{t_0}(E|H)}&=\frac{\Pr _{t_0}(\lnot E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|\lnot E)}{\Pr _{t_0}(H| E)}\right] }}{\Pr _{t_0}(E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] }}\\ \frac{\Pr _{t_0}(\lnot E|H)}{\Pr _{t_0}(E|H)}+\frac{\Pr _{t_0}(E|H)}{\Pr _{t_0}(E|H)}&=\frac{\Pr _{t_0}(\lnot E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|\lnot E)}{\Pr _{t_0}(H| E)}\right] }}{\Pr _{t_0}(E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] }}+\frac{\Pr _{t_0}(E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] }}{\Pr _{t_0}(E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] }}\\ \frac{1}{\Pr _{t_0}(E|H)}&=\frac{\Pr _{t_0}(\lnot E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|\lnot E)}{\Pr _{t_0}(H| E)}\right] }+\Pr _{t_0}(E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] }}{\Pr _{t_0}(E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] }}\\ \Pr _{t_0}(E|H)&=\frac{\Pr _{t_0}(E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] }}{\Pr _{t_0}(\lnot E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|\lnot E)}{\Pr _{t_0}(H| E)}\right] }+\Pr _{t_0}(E)\times e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] }}\\ \Pr _{t_0}(E|H)&=\frac{e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] } \Pr _{t_0}(E)}{e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\right] } \Pr _{t_0}(E)+e^{\frac{1}{2}\log \left[ \frac{\Pr _{t_0}(H|\lnot E)}{\Pr _{t_0}(H| E)}\right] } \Pr _{t_0}(\lnot E)} \end{aligned}$$

and since \(\frac{{sp^2}_{t_0}(H,E)+1}{1-{sp^2}_{t_0}(H,E)}=\frac{\Pr _{t_0}(H|E)}{\Pr _{t_0}(H|\lnot E)}\) we can conclude that

$$\begin{aligned} \Pr _{t_0}(H|E)&=\frac{\Pr _{t_0}(E|H)}{\Pr _{t_0}(E)}\times \Pr _{t_0}(H) \end{aligned}$$
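A quick numerical check of this derivation (the joint distribution below is an arbitrary strictly positive example of my own; `math.atanh` plays the role of \(\tanh ^{-1}\)): starting from \({sp^2}_{t_0}(H,E)\) alone, the displayed formula recovers \(\Pr _{t_0}(E|H)\) and hence \(\Pr _{t_0}(H|E)\) via Bayes' theorem:

```python
import math

# Arbitrary strictly positive joint distribution over H and E.
p_he, p_hne, p_nhe, p_nhne = 0.30, 0.10, 0.20, 0.40
p_h, p_e = p_he + p_hne, p_he + p_nhe
p_h_e, p_h_note = p_he / p_e, p_hne / (1 - p_e)  # Pr(H|E), Pr(H|~E)

def sp2(a: float, b: float) -> float:
    """sp^2(H, X) = (Pr(H|X) - Pr(H|~X)) / (Pr(H|X) + Pr(H|~X))."""
    return (a - b) / (a + b)

s_e = sp2(p_h_e, p_h_note)   # sp^2(H, E)
s_ne = sp2(p_h_note, p_h_e)  # sp^2(H, ~E) = -sp^2(H, E)

# Reconstruct Pr(E|H) as in the proof:
# e^{atanh(sp^2(H,E))} = sqrt(Pr(H|E) / Pr(H|~E)).
num = math.exp(math.atanh(s_e)) * p_e
den = num + math.exp(math.atanh(s_ne)) * (1 - p_e)
p_e_given_h = num / den

# The theorem's formula then yields Pr(H|E) exactly.
print((p_e_given_h / p_e) * p_h, p_h_e)  # both 0.6
assert abs((p_e_given_h / p_e) * p_h - p_h_e) < 1e-12
```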


Cite this article

Brössel, P. On the role of explanatory and systematic power in scientific reasoning. Synthese 192, 3877–3913 (2015). https://doi.org/10.1007/s11229-015-0870-6
