
Lewis’ Triviality for Quasi Probabilities

Abstract

According to Stalnaker’s Thesis (S), the probability of a conditional is the corresponding conditional probability. Under some mild conditions, the thesis trivialises probabilities and conditionals, as initially shown by David Lewis. This article asks whether (S) still leads to triviality if the probability function in (S) is replaced by a probability-like function. The article considers plausibility functions, in the sense of Friedman and Halpern, which additionally mimic probabilistic additivity and conditionalisation. These quasi probabilities comprise Friedman–Halpern’s conditional plausibility spaces, as well as other known representations of conditional doxastic states. The paper proves Lewis’ triviality for quasi probabilities and discusses the implications for three other prominent strategies for avoiding Lewis’ triviality: (1) Adams’ thesis, where the probability function on the left in (S) is replaced by a probability-like function, (2) abandoning conditionalisation, where probability conditionalisation on the right in (S) is replaced by another propositional update procedure, and (3) the approximation thesis, where equality in (S) is replaced by approximation. The paper also shows that Lewis’ triviality result is really about ‘additiveness’ and ‘conditionality’.


Notes

  1. \({\mathcal {A}}\) is an algebra over \(\varOmega \) iff \(\varOmega \in {\mathcal {A}}\) and \({\mathcal {A}}\) is closed under complements and finite unions.

  2. A similar proof can be found in Hájek (2011).

  3. For defences of the Stalnaker thesis, see Stalnaker (1968, 1970), Van Fraassen (1976), Rehder (1982), Edgington (1995), Bennett (2003) and Evans and Over (2004). For a criticism, other triviality results, and a discussion of why the thesis ‘seems right’, see Stalnaker (1976), Lewis (1986), Hájek (1989, 1994, 2011, 2012), Hájek and Hall (1994) and Milne (2003).

  4. Other studies confirm these results for many indicative conditionals. See Evans et al. (2003), Over et al. (2007), Politzer et al. (2010) and Fugard et al. (2011).

  5. Under negative relevance, or irrelevance, this equality is systematically violated. Compare Skovgaard-Olsen et al. (2017).

  6. See footnote 3 for references.

  7. For reasons to adopt Adams’ thesis, see Hájek (2012).

  8. For arguments against conditionalisation, see Meacham (2016).

  9. Gärdenfors (1982, 1988) generalises imaging to represent Stalnaker conditionals without the uniqueness assumption.

  10. Note that imaging satisfies success but violates certainty preservation as well as moderation (introduced below).

  11. Gärdenfors (1982, theorem 3) proved a similar result on stronger assumptions: there is no non-trivial finite certainty preserving and rich (probabilistic) belief-change model which satisfies the Stalnaker Thesis.

  12. Rott (2011) has devised other ways to avoid triviality for belief revision. See also Leitgeb (2010) for escape routes to Gärdenfors-like triviality.

  13. Options 1, 2 and 3 have similar approximate versions.

  14. The construction, however, depends on the assumption of independence and identical distribution for repeated events.

  15. They are equivalent when \(P_B\) is interpreted as conditionalisation—proving equivalence of (a) and (b)—and under the assumption of the simple Stalnaker thesis—proving equivalence of (b) and (c).

  16. I am thankful to an anonymous referee for this remark.

  17. For one such attempt, see Khoo (2013, 2016). But compare the triviality result for Kratzer semantics by Charlow (2016).

  18. For all \(x,y,z \in X\): \(x \le _X x\); if \(x\le _X y\) and \(y \le _X z\) then \(x \le _X z\); if \(x\le _X y\) and \(y \le _X x\) then \(x = y\).

  19. See Spohn (1988, 2012) for a definition.

  20. See Dubois and Prade (1988) for a definition.

  21. See Dempster (1967) and Shafer (1976).

  22. See Friedman and Halpern (1995).

  23. Friedman and Halpern (1995) showed that DECOMP is equivalent to \(\oplus \) being monotonic, and additionally forming a commutative category, where \(\mathbf {1}\) is absorbing. However, as we will show in the next section, these conditions already follow from \(\oplus \) being just separable and standard properties of the algebraic operations \(\cup , \cap , ^C\) with respect to \(\emptyset , \varOmega \).

  24. DECOMP is a weak variant of a property of qualitative probabilities called “disjoint unions” (cf. Friedman and Halpern 1995).

  25. From weak decomposability it follows that if, for disjoint \(A, B\), \(F(A)= \mathbf {0}= F(B)\), then \(F(A \cup B) = F(A \cup \emptyset ) =\mathbf {0}\). However, there are belief functions such that \(Bel(A)=0=Bel(B)\) and \(Bel(A \cup B)=1\). See Friedman and Halpern (1995) for an example of a non-decomposable belief function.

  26. For a probability function \(P\), \(\langle P(BA), P(A)\rangle \) is a \(P\)-consistent couple iff \(P(A)>0\). For a ranking function \(\kappa \), \(\langle \kappa (BA), \kappa (A) \rangle \) is a \(\kappa \)-consistent couple iff \(\kappa (A) < \infty \).

  27. Usually this is called ‘non-trivial’. But ‘trivial’ is reserved for another meaning. \(\mathbf {1}>\mathbf {0}\) ensures that \(F\) does not have the same values everywhere and that conditionalisation is at least defined for \(\varOmega \). Otherwise all values of all conditional plausibilities would collapse.

  28. This does not assume that sequential conditionalisation \((F_A)_B\) is defined, but only that it can be reduced to the cumulative conditionalisation \(F_{A \cap B}\) in the mentioned case in the premisses.

  29. The qualitative version of coincidence follows from the AGM postulates of sub-expansion and super-expansion, provided we consider expansion as the second revision step.

  30. For positive and two-sided ranking functions, see Spohn (2012, def. 5.20, 5.21).

  31. Note: if A is \(F\)-consistent then \(F_A\) is defined. If furthermore B is \(F_A\)-consistent then \((F_A)_B\) is defined. Coincidence then says that \((F_B)_A\) as well as \(F_{A \cap B}\) are also defined.

  32. Initially I called them “nice plausibilities” to turn a remark of Friedman and Halpern (1995) into a definition. However, their ‘nice’ plausibilities satisfy, in addition to C3, also two other properties (their C1 and C2).

  33. However, I shy away from adopting this terminology, since in just the same sense, \(\oplus \) and \(\displaystyle \oslash \) also behave almost like the known \(\min \) and −.

  34. They show that decomposability DECOMP is equivalent to the plausibility being \(\oplus \)-separable and \(\oplus \) satisfying (1,2,3,6) and being monotone in both arguments (when composition is defined), where they call ‘additive’ the properties (3,6). They do not prove the ortho-complement structure (5). However, note that by the previous Theorem 2 and the remark thereafter, separability is enough to obtain \(\hbox {DECOMP}_=\) and monotone separability would be enough for DECOMP. The other laws follow from Boolean laws of the algebra \({\mathcal {A}}\).

  35. X are the objects of the category, \(X \oplus X\) are the arrows, where \(f: a {\mathop {\rightarrow }\limits ^{b}} (a \oplus b)\) is an arrow iff \(a, b\) are disjoint, the source of f being a and the result \(a \oplus b\), and arrow composition is defined if the result of the first equals the source of the second. This forms a category over X, with the same right and left identities, by (2) and (3). It is commutative by (1) and it is ortho-complemented by (5). The term “ortho-complemented” is taken from lattice theory, but in our context it is much weaker. Ortho-complementation here only requires \(\oplus \) (\(\vee \) in lattice theory) to have the property (5.a, b), the complement law and involution, but not the additional order-inverting property, nor the three analogous properties for \(\otimes \) (\(\wedge \) in lattice theory).

  36. A monoid is defined like a group, except that inverses need not exist.

  37. \(\min \) forms a commutative monoid over the rank-values in \(\mathbb {N}_{\infty }\), with \(\mathbf {1}=0\) and \(\mathbf {0}=\infty \), which additionally is uniquely complemented, i.e., there is a unique ortho-complement for any number, namely 0. Similarly for a possibility measure.

  38. \(+\), although total over \(\mathbb {R}\), is not total over [0, 1].

  39. This was the reason for Friedman and Halpern (1995) to define standardness by \(X_A \sqsubseteq X\) or \(X_A = \{\mathbf {0}_A\}\). However, it is a curious consequence that conditioning on \(F(A)= \mathbf {0}\) trivialises in this sense, given that they started out with Popper-measure-like plausibility structures, for which one would have expected conditioning on \(F(A)= \mathbf {0}\) to be non-trivially defined.

  40. By the latter assumption, all axioms for \(\otimes \) are in fact implicitly restricted. \(\oplus ,\mathbf {0}\) forms a commutative (i.e., abelian) group, \(\otimes ,\mathbf {1}\) forms a commutative group over \(X {\setminus } \{\mathbf {0}\}\) and thus \(\otimes \) is invertible, and X is totally ordered by \(\le \) with minimum \(\mathbf {0}\) and maximum \(\mathbf {1}\), such that \(\oplus ,\otimes \) are monotone, and additionally \(\otimes \) distributes over \(\oplus \).

  41. This ‘undefinedness’ assumption is similar to the trivialisation assumption \(F(B|\emptyset )= \mathbf {0}\) in Friedman and Halpern (1995), and is explicitly avoided here, in the sense that it is left open, whether \(F(B \cap A) \displaystyle \oslash F(A)\) is defined or not for \(F(A)= \mathbf {0}\).

  42. For \(\displaystyle \oslash \) properly extended to \(\mathbf {0}\) in the first argument.

  43. The curious removal of \(\mathbf {0}\) from the \(\otimes \)-structure also makes conditional valuation functions ill defined, since then \(V(\emptyset |A)\) is undefined, even if \(V(A)>\mathbf {0}\).

  44. Although totality is not assumed in Darwiche and Ginsberg (1992), all examples involve total \(\oplus \) (cf. Friedman and Halpern 1995).

  45. In the footnotes here and in the proofs in the “Appendix” it is made precise where only conditionalisability is required.

  46. This is a plausibilistic analogue to the ‘import–export’ Lemma of Hájek (2011).

  47. \(\displaystyle \oslash \)-conditionalisability suffices.

  48. Again \(\displaystyle \oslash \)-conditionalisability suffices.

  49. Again \(\displaystyle \oslash \)-conditionalisability suffices.

  50. Here, for the first time, \(F\) needs to be separable and \(\displaystyle \oslash \)-conditionalisable.

  51. A weaker result can be proven: there is no class \(\mathbb {C}\) of plausibility functions such that \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\), unless any non-trivial \(F\in \mathbb {C}\) is not a quasi probability.

  52. I thank an anonymous referee who pressed me on this issue.

  53. As suggested by the referee in question: \(P(A>C) \ge P((A> C) \cap C) = P(A > C|C) P(C) =P(C)\), where the last equality follows from Lewis’ proof (i.e., the accept Lemma).

  54. A positive ranking function \(\beta \) satisfies this (\(\beta (A|A)= \kappa (\overline{A}|A)= \infty \)), as does a two-sided ranking function \(\tau \) (\(\tau (A|A)= \infty \)), a necessity measure \(\eta \) (\(\eta (A|A)= 1- \rho (\overline{A}|A) = 1 - 0 =1\)), and even imaging.

  55. For probabilistic versions of such updates and arguments for them, see Levi (1967), Williamson (1998), Franke and Jager (2010), Wenmackers and Romeijn (2016), Meacham (2016) and Raidl (2018).

  56. Not to be confused with those of Friedman and Halpern (1995).

References

  • Adams, E. (1965). The logic of conditionals. Inquiry, 8, 166–197.

  • Adams, E. (1975). The logic of conditionals. Dordrecht: Reidel.

  • Adams, E. (1998). A primer of probability logic. Stanford, CA: CSLI, Stanford University.

  • Arló-Costa, H. (1999). Belief revision conditionals: Basic iterated systems. Annals of Pure and Applied Logic, 96, 3–28.

  • Bennett, J. (2003). A philosophical guide to conditionals. Oxford: Oxford University Press.

  • Bradley, R. (2007). A defense of the Ramsey test. Mind, 116(461), 1–21.

  • Charlow, N. (2016). Triviality for restrictor conditionals. Noûs, 50(3), 533–564.

  • Darwiche, A., & Ginsberg, M. L. (1992). A symbolic generalization of probability theory. In Proceedings of the national conference on artificial intelligence AAAI’92 (pp. 622–627). Menlo Park, CA: AAAI Press.

  • Dempster, A. P. (1967). Upper and lower probabilities induced by a multivalued mapping. The Annals of Mathematical Statistics, 38(2), 325–339.

  • Dietz, R., & Douven, I. (2011). A puzzle about Stalnaker’s hypothesis. Topoi, 30(1), 31–37.

  • Douven, I. (2016). The epistemology of indicative conditionals: Combining formal and empirical approaches. Cambridge: Cambridge University Press.

  • Douven, I., & Verbrugge, S. (2010). The Adams family. Cognition, 117, 302–318.

  • Douven, I., & Verbrugge, S. (2013). The probabilities of conditionals revisited. Cognitive Science, 37, 711–730.

  • Dubois, D., & Prade, H. (1988). Possibility theory: An approach to computerized processing of uncertainty. New York: Plenum Press.

  • Edgington, D. (1995). On conditionals. Mind, 104(414), 235–329.

  • Evans, J., Handley, S., & Over, D. (2003). Conditionals and conditional probability. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 321–355.

  • Evans, J. S. B. T., & Over, D. (2004). If. Oxford: Oxford University Press.

  • Franke, M., & de Jager, T. (2010). Now that you mention it: Awareness dynamics in discourse and decisions. In A. Benz, et al. (Eds.), Language, games, and evolution. LNAI 6207 (pp. 60–91). Berlin: Springer.

  • Friedman, N., & Halpern, J. Y. (1995). Plausibility measures: A user’s guide. In Proceedings of the eleventh conference on uncertainty in artificial intelligence UAI’95 (pp. 175–184). San Francisco, CA: Morgan Kaufmann.

  • Fugard, A., Pfeifer, N., Mayerhofer, B., & Kleiter, G. (2011). How people interpret conditionals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 635–648.

  • Gärdenfors, P. (1982). Imaging and conditionalization. The Journal of Philosophy, 79(12), 747–760.

  • Gärdenfors, P. (1986). Belief revisions and the Ramsey test for conditionals. The Philosophical Review, 95, 81–93.

  • Gärdenfors, P. (1987). Variations on the Ramsey test: More triviality results. Studia Logica, 46(4), 321–327.

  • Gärdenfors, P. (1988). Knowledge in flux. Modeling the dynamics of epistemic states. Cambridge, MA: MIT Press.

  • Hájek, A. (1989). Probabilities of conditionals: Revisited. Journal of Philosophical Logic, 18, 423–428.

  • Hájek, A. (1994). Triviality on the cheap? In E. Eells & B. Skyrms (Eds.), Probability and conditionals (pp. 113–140). Cambridge: Cambridge University Press.

  • Hájek, A. (2011). Triviality pursuit. Topoi, 30(1), 3–15.

  • Hájek, A. (2012). The fall of ‘Adams’ thesis’? Journal of Logic, Language and Information, 21(2), 145–161.

  • Hájek, A., & Hall, N. (1994). The hypothesis of the conditional construal of conditional probability. In E. Eells & B. Skyrms (Eds.), Probability and conditionals (pp. 75–111). Cambridge: Cambridge University Press.

  • Jackson, F. (1987). Conditionals. Oxford: Blackwell.

  • Kaufmann, S. (2009). Conditionals right and left: Probabilities for the whole family. Journal of Philosophical Logic, 38, 1–53.

  • Kaufmann, S. (2015). Conditionals, conditional probabilities, and conditionalization. In H.-C. Schmitz & H. Zeevat (Eds.), Bayesian natural language semantics and pragmatics (pp. 71–94). Berlin: Springer.

  • Kern-Isberner, G. (2004). A thorough axiomatization of a principle of conditional preservation in belief revision. Annals of Mathematics and Artificial Intelligence, 40(1–2), 127–164.

  • Khoo, J. (2013). Conditionals, indeterminacy, and triviality. Philosophical Perspectives, 27(1), 260–287.

  • Khoo, J. (2016). Probabilities of conditionals in context. Linguistics and Philosophy, 39(1), 1–43.

  • Kratzer, A. (1986). Conditionals. Chicago Linguistics Society, 22(2), 1–15.

  • Kratzer, A. (1991). Modality. In A. von Stechow & D. Wunderlich (Eds.), Semantics: An international handbook of contemporary research (pp. 639–650). Berlin: de Gruyter.

  • Leitgeb, H. (2010). On the Ramsey test without triviality. Notre Dame Journal of Formal Logic, 51(1), 21–54.

  • Levi, I. (1967). Probability kinematics. British Journal for the Philosophy of Science, 18, 197–209.

  • Lewis, D. (1976). Probabilities of conditionals and conditional probabilities. Philosophical Review, 85, 297–315.

  • Lewis, D. (1986). Probabilities of conditionals and conditional probabilities II. Philosophical Review, 95, 581–589.

  • Meacham, C. J. G. (2016). Ur-priors, conditionalization and ur-prior conditionalization. Ergo, 3(17), 444–492.

  • Milne, P. (2003). The simplest Lewis-style triviality proof yet? Analysis, 63(4), 300–303.

  • Morgan, C. G. (1999). Conditionals, comparative probability, and triviality: The conditional of conditional probability cannot be represented in the object language. Topoi, 18, 97–116.

  • Morgan, C. G., & Mares, E. D. (1995). Conditionals, probability, and non-triviality. Journal of Philosophical Logic, 24(5), 455–467.

  • Oaksford, M., & Chater, N. (2007). Bayesian rationality: The probabilistic approach to human reasoning. Oxford: Oxford University Press.

  • Over, D. (2009). New paradigm psychology of reasoning. Thinking and Reasoning, 15, 431–438.

  • Over, D., Hadjichristidis, C., Evans, J., Handley, S., & Sloman, S. (2007). The probability of causal conditionals. Cognitive Psychology, 54, 62–97.

  • Politzer, G., Over, D., & Baratgin, J. (2010). Betting on conditionals. Thinking and Reasoning, 16, 172–197.

  • Popper, K. R. (1955). Two autonomous axiom systems for the calculus of probabilities. British Journal for the Philosophy of Science, 6, 51–57.

  • Raidl, E. (2018). Open-minded orthodox Bayesianism by Epsilon-conditionalisation. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy075.

  • Ramsey, F. P. (1926). Truth and probability. In R. B. Braithwaite (Ed.), The foundations of mathematics and other logical essays (pp. 156–198). London: Kegan, Paul, Trench, Trubner.

  • Rehder, W. (1982). Conditions for probabilities of conditionals to be conditional probabilities. Synthese, 53, 439–443.

  • Rott, H. (1986). Ifs, though, and because. Erkenntnis, 25, 345–370.

  • Rott, H. (2011). Reapproaching Ramsey: Conditionals and iterated belief change in the spirit of AGM. Journal of Philosophical Logic, 40, 155–191.

  • Shafer, G. (1976). A mathematical theory of evidence. Princeton: Princeton University Press.

  • Skovgaard-Olsen, N., Singmann, H., & Klauer, K. C. (2016). The relevance effect and conditionals. Cognition, 150, 26–36.

  • Skovgaard-Olsen, N., Kellen, D., Krahl, H., & Klauer, K. C. (2017). Relevance differently affects the truth, acceptability, and probability evaluations of ‘and’, ‘but’, ‘therefore’, and ‘if then’. Thinking and Reasoning, 23(4), 449–482.

  • Spohn, W. (1988). Ordinal conditional functions: A dynamic theory of epistemic states. In W. L. Harper & B. Skyrms (Eds.), Causation in decision, belief change, and statistics (Vol. 2, pp. 105–134). Dordrecht: Kluwer.

  • Spohn, W. (2012). The laws of belief: Ranking theory and its philosophical applications. Oxford: Oxford University Press.

  • Spohn, W. (2015). Conditionals: A unified ranking-theoretic perspective. Philosophers’ Imprint, 15(1), 1–30.

  • Stalnaker, R. (1968). A theory of conditionals. Studies in logical theory, American Philosophical Quarterly Monograph series (Vol. 2). Oxford: Blackwell.

  • Stalnaker, R. (1970). Probability and conditionals. Philosophy of Science, 37, 64–80.

  • Stalnaker, R. (1975). Indicative conditionals. In W. L. Harper, R. Stalnaker, & G. Pearce (Eds.), Ifs: Conditionals, belief, decision, chance and time. The University of Western Ontario series in philosophy of science (Vol. 15, pp. 193–210). Dordrecht: Springer.

  • Stalnaker, R. (1976). Letter to van Fraassen. In W. L. Harper & C. A. Hooker (Eds.), Foundations of probability theory, statistical inference and statistical theories of science (Vol. I, pp. 302–306). Dordrecht: Reidel.

  • Stalnaker, R. (2014). Context. Oxford: Oxford University Press.

  • Stalnaker, R., & Jeffrey, R. (1994). Conditionals as random variables. In E. Eells & B. Skyrms (Eds.), Probabilities and conditionals: Belief revision and rational decision (pp. 31–46). Cambridge: Cambridge University Press.

  • Van Fraassen, B. (1976). Probabilities of conditionals. In W. L. Harper & C. A. Hooker (Eds.), Foundations of probability theory, statistical inference, and statistical theories of science (Vol. I, pp. 261–301). Dordrecht: Reidel.

  • Wenmackers, S., & Romeijn, J.-W. (2016). New theory about old evidence. A framework for open-minded Bayesianism. Synthese, 193(4), 1225–1250.

  • Weydert, E. (1994). General belief measures. In R. López de Mántara & D. Poole (Eds.), Proceedings of the tenth conference on uncertainty in artificial intelligence, UAI ’94 (pp. 575–582). San Francisco: Morgan Kaufmann.

  • Williams, J. R. G. (2012). Counterfactual triviality: A Lewis-impossibility argument for counterfactuals. Philosophy and Phenomenological Research, 85, 648–670.

  • Williamson, T. (1998). Conditionalizing on knowledge. British Journal for the Philosophy of Science, 49, 89–121.


Acknowledgements

I would like to thank Alan Hájek for encouraging me to publish these ideas, Arno Göbel for several discussions on this topic, Niels Skovgaard-Olsen for helpful comments, the people present at the European Epistemology Workshop 2016 in Paris and colleagues from the DFG-funded ‘What-if?’ research group in Konstanz for their comments, as well as two anonymous referees who have significantly contributed to improving the quality of the paper.


Correspondence to Eric Raidl.


Funding for this research was provided by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Research Unit FOR 1614 and under Germany’s Excellence Strategy – EXC-Number 2064/1 – Project number 390727645.

Appendices

Proofs

1.1 Restricted Standard Conditional Plausibility Spaces

Lemma 7

Let \(F\) be weakly \(\displaystyle \oslash \)-conditionalisable and let A be \(F\)-consistent. Then \(X \sqsubseteq X'\) and \(F\) and \(F_A\) are plausibilities over \(X'\).

Proof

We may consider \(X' = X \cup \{F_A(B) : B \in {\mathcal {A}}, F(A)> \mathbf {0}\}\). \(X'\) is partially ordered by assumption.

  • Pl1: Let \(A \subseteq B\) and let \(C\) be \(F\)-consistent. Thus \(A \cap C \subseteq B \cap C\). Hence \(F(A \cap C) \le _X F(B \cap C)\) (Pl1). And since \(\displaystyle \oslash \) is monotone in the first argument, we have \(F(A|C) = F(A \cap C) \displaystyle \oslash F(C) \le _X F(B \cap C) \displaystyle \oslash F(C) = F(B|C)\).

  • Pl2: first w.r.t. \(\mathbf {0},\mathbf {1}\). \(F(\varOmega |A)= F(\varOmega \cap A) \displaystyle \oslash F(A) = F(A \cap A) \displaystyle \oslash F(A)= F(A|A)= F(\varOmega ) = \mathbf {1}\) by success. \(F(\emptyset |A) = F(\emptyset ) = \mathbf {0}\) (by same \(\mathbf {0}\)). By Pl1 (and for \(C\in {\mathcal {A}}\), s.t. \(F(C)> \mathbf {0}\)) we have for all \(B \in {\mathcal {A}}\), \(\mathbf {0}\le F_C(B) \le \mathbf {1}\), so that \(\mathbf {0}, \mathbf {1}\) are the minimum and maximum of \(X'\), which thereby is pointed and \(\mathbf {0}_C = \mathbf {0}, \mathbf {1}_C = \mathbf {1}\). This proves Pl2.

  • \(X \sqsubseteq X'\). \(X \subseteq X'\) by definition, \(X'\) is pointed by the above remark and \(X'\) has the same \(\mathbf {0}\) and \(\mathbf {1}\) as X. Finally, \(\le '\) restricted to X is \(\le \) by the above definition of \(X'\).

  • \(F=F(.|\varOmega )\) and \(F(\varOmega )=F(\varOmega |\varOmega )> \mathbf {0}\) by spreading. Thus by the above, \(F\) can also be seen as a plausibility over \(X'\). \(\square \)
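To see Lemma 7 at work in a familiar special case, here is a minimal sketch (not from the paper; the three worlds and their ranks are invented): \(F\) is instantiated by a negative ranking function \(\kappa \), for which \(\displaystyle \oslash \) is subtraction and the plausibility order is the reversed numerical order, and Pl1 and Pl2 are checked for both \(\kappa \) and its conditionalisation \(\kappa _A\).

```python
# A minimal sketch (not from the paper): a negative ranking function kappa as a
# quasi probability.  Lower rank = more plausible, so the plausibility order is
# the *reverse* numerical order, with 1 = rank 0 (maximum) and 0 = rank INF
# (minimum); oslash is subtraction and kappa_A is conditionalisation on A.
from itertools import chain, combinations

INF = float("inf")
rank = {"w1": 0, "w2": 1, "w3": 2}            # hypothetical ranks of three worlds

def kappa(A):
    """Rank of a proposition: the minimal rank of its worlds, INF for the empty set."""
    return min((rank[w] for w in A), default=INF)

def kappa_cond(B, A):
    """kappa(B|A) = kappa(A & B) - kappa(A); defined only for kappa-consistent A."""
    assert kappa(A) < INF, "A must be kappa-consistent"
    return kappa(A & B) - kappa(A)

def subsets(S):
    S = list(S)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

OMEGA = frozenset(rank)
A = frozenset({"w2", "w3"})                   # a kappa-consistent proposition

# Pl2: kappa and kappa_A share the same top (rank 0) and bottom (rank INF).
assert kappa(OMEGA) == 0 and kappa(frozenset()) == INF
assert kappa_cond(OMEGA, A) == 0 and kappa_cond(frozenset(), A) == INF

# Pl1: monotonicity with respect to the reversed order, for kappa and kappa_A.
for B in subsets(OMEGA):
    for C in subsets(OMEGA):
        if B <= C:
            assert kappa(B) >= kappa(C)       # B is at most as plausible as C
            assert kappa_cond(B, A) >= kappa_cond(C, A)

print("kappa and kappa_A both satisfy Pl1 and Pl2 on this toy space")
```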

Theorem 6

\(S=\{\langle \varOmega , X_A, F_A \rangle :A \in {\mathcal {A}}, F(A)>_{\varOmega } \mathbf {0}_{\varOmega }\}\) with \(F= F(.|\varOmega )\) and \(X=X_{\varOmega }\) is a proper conditional plausibility space satisfying C3.1 iff \(F\) is weakly \(\displaystyle \oslash \)-conditionalisable. S satisfies additionally C3.2 iff \(F\) additionally satisfies coincidence. (For C3.1, C3.2, see Theorem 3.)

Proof

\((\Rightarrow )\): Since we restrict to \(F_A\)'s such that \(F(A)>0\), by standardness all \(X_A\) have the same \(\mathbf {1}\) and \(\mathbf {0}\). Denote \({\mathrm{Im}}(F)= \text{ Im }[F(.|\varOmega )]\) and consider \(X' = \bigcup _{F_A \in S} \text{ Im }(F_A)\).

0. \(F=F(.|\varOmega )\) by convention, \(F(.|\varOmega ) \in S\) is spreading by assumption, and thus \(F\) is a spreading plausibility function over \(X=X_{\varOmega }\).

1. For \(\langle x,y \rangle \in {\mathrm{Im}}(F)^2\) \(F\)-consistent, i.e., such that there are \(A, C\) with \(x=F(A \cap C)\) and \(y = F(C) > \mathbf {0}\), define \(x \displaystyle \oslash y = F(A|C)\). Then \(\displaystyle \oslash \) is a partial function \(X^2 \longrightarrow X'\) (using C3.1 twice), defined (and total) over consistent couples in \({\mathrm{Im}}(F)\) (by the above definition) and \(X,X_A \sqsubseteq X'\) (by definition of \(X'\)). \(\displaystyle \oslash \) is monotone in the first argument by C3.1.

2. As a consequence, \(F(A|C)\) is defined for \(C\) \(F\)-consistent and \(F(A|C) = F(A \cap C) \displaystyle \oslash F(C)\).

3. Weak vacuity holds by convention.

4. Success: since \(F(\varOmega A) =F(AA)\) and \(F(A)= F(A)>0\), we have \(F(A|A) = F(\varOmega |A) =F(\varOmega |\varOmega ) = F(\varOmega )\) (C3.1 twice).

5. \(F(\emptyset |\varOmega ) = \mathbf {0}_{\varOmega } = \mathbf {0}=\mathbf {0}_A = F(\emptyset |A)\) by standardness.

\((\Leftarrow )\):

1. Partial conditional plausibility space: Consider \(S=\{\langle \varOmega , X_A, F_A \rangle : F(A)> \mathbf {0}, A \in {\mathcal {A}}\}\). By weak vacuity \(F= F(.|\varOmega )\), which is assumed to be spreading, and thus \(F\in S\). Additionally, \(F_A\) and \(F\) are plausibilities over \(X'\) for \(A\) \(F\)-consistent (see Lemma 7) and we can take \(X_A=X' = X_{\varOmega }\).

2. Standard: follows from \(X_A=X'\).

3. C3.1: follows from monotonicity in the first argument of \(\displaystyle \oslash \) and weak vacuity.

  • C3.2 implies that we can extend \(\displaystyle \oslash \) such that \(F(A|BC) = F(AB|C) \displaystyle \oslash F(B|C)\), for \(F(B|C)> \mathbf {0}\) and for \(C\) \(F\)-consistent. Therefore \(BC\) is \(F\)-consistent: \(F(B|C)>\mathbf {0}\) is defined and thus \(F(BC) \displaystyle \oslash F(C)>\mathbf {0}\). But then \(F(BC)> \mathbf {0}\); otherwise, i.e., if \(F(BC) = \mathbf {0}\), we would have \(F(B|C)= \mathbf {0}\) by the fact that \(\mathbf {0}\) is left \(\displaystyle \oslash \)-absorbing. Thus \(F(.|BC)\) is indeed defined. Consider now an \(F\)-consistent \(C\) and an \(F_C\)-consistent \(B\). Then by the above equation we obtain coincidence:

    $$\begin{aligned} (F_C)_B(A)= F(AB|C) \displaystyle \oslash F(B|C) = F(A|BC)= F_{BC}(A) \end{aligned}$$
  • Conversely. Suppose coincidence holds. Then for the above C and B, we obtain

    $$\begin{aligned} F(AB|C) \displaystyle \oslash F(B|C) = (F_C)_B(A) = F_{BC}(A)= F(A|BC) \end{aligned}$$

    Thus for such \(C, B\), we obtain the implication in C3.2, since \(\displaystyle \oslash \) is a function. \(\square \)

1.2 The Algebraic Structure of \(\oplus \) and \(\displaystyle \oslash \)

Lemma 8

Let \(F\) be a separable plausibility over the algebra \({\mathcal {A}}\). Then \(\oplus \) forms an ortho-complemented category over \({\mathrm{Im}}(F)\), with \(\mathbf {1}\) being \(\oplus \)-absorbing (as in Lemma 1).

Proof

  1. \(\mathbf {0}\) is the (unique) \(\oplus \)-Identity over \({\mathrm{Im}}(F)\): Let \(b \in {\mathrm{Im}}(F)\subseteq X\). Thus there is \(B \in {\mathcal {A}}\) with \(F(B)=b\).

    $$\begin{aligned} \begin{array}{rlr} \mathbf {0}\oplus b &{} = F(\emptyset ) \oplus F(B) &{} \text{(Pl2, } \text{ assumption) } \\ &{} = F(\emptyset \cup B) &{} \text{(separable) }\\ &{} = F(B) &{} \text{(algebra: } \emptyset \text{ identity } \text{ for } \cup \text{) }\\ &{} = b &{} \text{(assumption) } \end{array} \end{aligned}$$

    The same reasoning can be applied to obtain the left identity \(b \oplus \mathbf {0}= b\). Uniqueness: Suppose (for reductio) there is \(\mathbf {0}' \in {\mathrm{Im}}(F)\) such that \(\mathbf {0}' \ne \mathbf {0}\) and which also satisfies \(\mathbf {0}' \oplus b = b\) and \(b \oplus \mathbf {0}' = b\) for all \(b \in {\mathrm{Im}}(F)\). Then in particular \(\mathbf {0}\oplus \mathbf {0}' = \mathbf {0}\) as well as \(\mathbf {0}\oplus \mathbf {0}' =\mathbf {0}'\). Thus \(\mathbf {0}' = \mathbf {0}\). Contradiction.

  2. Existence of \(\oplus \)-complements over \({\mathrm{Im}}(F)\): Let \(a \in {\mathrm{Im}}(F)\subseteq X\), then there is \(A \in {\mathcal {A}}\), such that \(F(A)=a\). Thus there is \(\overline{a} \in X\) such that \(F(\overline{A}) = \overline{a}\) and \(a, \overline{a}\) are \(\oplus \) complements to each other:

    $$\begin{aligned} \begin{array}{rlr} a \oplus \overline{a} &{} = F(A) \oplus F(\overline{A}) &{} \text{(assumption) }\\ &{} = F( A\cup \overline{A}) &{} \text{(separable) }\\ &{} = F(\varOmega ) &{} \text{(algebra) }\\ &{} = \mathbf {1}&{} \text{(Pl2) } \end{array} \end{aligned}$$
  3. Ortho-complements over \({\mathrm{Im}}(F)\): Let \(a \in {\mathrm{Im}}(F)\subseteq X\). Then there is \(A \in {\mathcal {A}}\) such that \(F(A)=a\). And \(F(\overline{A}) = \overline{a}\) as well as \(F(\overline{\overline{A}})=\overline{\overline{a}}\), and from algebra:

    $$\begin{aligned} \overline{\overline{a}}=F(\overline{\overline{A}}) = F(A)=a \end{aligned}$$
  4. \(\oplus \) is associative over disjoint triples in \({\mathrm{Im}}(F)\): Let \(a,b,c \in {\mathrm{Im}}(F)\subseteq X\) be a disjoint triple. Thus there are \(A,B,C \in {\mathcal {A}}\), such that \(F(A) =a, F(B)=b, F(C) =c\) and \(A, B, C\) are mutually disjoint. Therefore \(A \cup B\) and C are disjoint. Similarly A and \(B \cup C\) are disjoint. Thus:

    $$\begin{aligned} \begin{array}{rlr} (a \oplus b) \oplus c &{} =(F(A) \oplus F(B)) \oplus F(C) &{} \text{(assumption) }\\ &{} = F(A \cup B) \oplus F(C) &{} \text{(separable) }\\ &{} = F((A \cup B) \cup C) &{} \text{(separable) }\\ &{} = F(A \cup (B \cup C)) &{} \text{(algebra: } \cup \text{ associative) }\\ &{} = F(A) \oplus F( B \cup C) &{} \text{(separable) }\\ &{} = F(A) \oplus (F( B) \oplus F(C)) &{} \text{(separable) }\\ &{} = a \oplus (b \oplus c) &{} \text{(assumption) } \end{array} \end{aligned}$$
  5. \(\oplus \) is commutative for disjoint couples over \({\mathrm{Im}}(F)\): Let \(a,b \in {\mathrm{Im}}(F)\subseteq X\) form a disjoint couple. Thus there are disjoint \(A,B \in {\mathcal {A}}\), such that \(F(A)=a\) and \(F(B)=b\). Thus:

    $$\begin{aligned} \begin{array}{rlr} a \oplus b &{} =F(A) \oplus F(B) &{} \text{(assumption) }\\ &{} = F(A \cup B) &{} \text{(separable) }\\ &{} = F(B \cup A) &{} \text{(algebra: } \cup \text{ commutes) }\\ &{} = F(B) \oplus F(A) &{} \text{(separable) }\\ &{} = b \oplus a &{} \text{(assumption) } \end{array} \end{aligned}$$
  6. \(\mathbf {1}\) is \(\oplus \)-absorbing over disjoint couples of \({\mathrm{Im}}(F)\): assume \(\langle a, \mathbf {1}\rangle \in X^2\) is a disjoint couple. Then there are disjoint \(A,B \in {\mathcal {A}}\) such that \(F(A)=a\), \(F(B)=\mathbf {1}\) (note B need not be \(\varOmega \) and therefore A need not be \(\emptyset \)). Thus:

    $$\begin{aligned} \begin{array}{rlr} \mathbf {1}&{}= F(B) &{} \text{(assumption) }\\ {} &{}\le _X F(A \cup B) &{} \text{(Pl1) }\\ &{}\le _X F(\varOmega ) &{} \text{(Pl1) }\\ {} &{}= \mathbf {1}&{} \text{(Pl2) } \end{array} \end{aligned}$$

    \(a \oplus \mathbf {1}= F(A) \oplus F(B) = F(A \cup B) = \mathbf {1}\) follows by antisymmetry of \(\le _X\). \(\square \)
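As a concrete check of Lemma 8, the following sketch (my own illustration, with made-up weights) takes the most familiar separable plausibility, a probability measure, where \(\oplus \) is addition restricted to disjoint propositions, and verifies the identity, complement, commutativity, absorption and associativity laws on a three-world space.

```python
# A minimal sketch (not from the paper): the oplus-structure of Lemma 8 for a
# probability measure, the most familiar separable plausibility, where oplus is
# ordinary addition restricted to disjoint propositions.  Weights are made up.
from itertools import chain, combinations
from fractions import Fraction

weight = {"w1": Fraction(1, 2), "w2": Fraction(1, 3), "w3": Fraction(1, 6)}

def P(A):
    return sum((weight[w] for w in A), Fraction(0))

def subsets(S):
    S = list(S)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

OMEGA = frozenset(weight)
props = subsets(OMEGA)

for A in props:
    assert P(A) + P(frozenset()) == P(A)                      # 0 is the oplus-identity
    assert P(A) + P(OMEGA - A) == P(OMEGA) == 1               # oplus-complements: a + a_bar = 1
    for B in props:
        if A & B:
            continue                                          # oplus is only used on disjoint couples
        assert P(A | B) == P(A) + P(B) == P(B) + P(A)         # separability and commutativity
        if P(B) == 1:
            assert P(A) + P(B) == 1                           # 1 is oplus-absorbing
        for C in props:
            if (A | B) & C:
                continue
            assert (P(A) + P(B)) + P(C) == P(A) + (P(B) + P(C))   # associativity on disjoint triples

print("oplus = + satisfies the laws of Lemma 8 on this toy probability space")
```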

Lemma 9

If \(F\) is a weakly \(\displaystyle \oslash \)-conditionalisable plausibility, then for \(a \in {\mathrm{Im}}(F)\), \(\displaystyle \oslash \) satisfies properties (1–4) of Lemma 2 (\(\mathbf {1}\) is the \(\displaystyle \oslash \)-right identity, \(\displaystyle \oslash \)-inverses, \(\mathbf {1}\) is unique, \(\mathbf {0}\) is left absorbing). If \(F\) is \(\displaystyle \oslash \)-conditionalisable, then \(\displaystyle \oslash \) also satisfies (5), \(\displaystyle \oslash \)-cancelling.

Proof

  1. \(\mathbf {1}\) is the \(\displaystyle \oslash \)-right identity: Let \(a \in {\mathrm{Im}}(F)\). Thus there is \(A \in {\mathcal {A}}\) such that \(a=F(A) =F(A | \varOmega ) = F(A \cap \varOmega ) \displaystyle \oslash F(\varOmega )= F(A) \displaystyle \oslash \mathbf {1}=a \displaystyle \oslash \mathbf {1}\) (by weak vacuity and \(A \cap \varOmega =A\)).

  2. \(\displaystyle \oslash \)-Inverses for \(F\)-consistent \(a \in {\mathrm{Im}}(F)\): Let \(a \in {\mathrm{Im}}(F)\) be \(F\)-consistent. Thus there is \(A \in {\mathcal {A}}\) such that \(a = F(A)>_X \mathbf {0}\). But \(F_A\) has the same \(\mathbf {1}\) as \(F\) (from success). Thus \(\mathbf {1}= F(\varOmega )= F(\varOmega |A)=F(A \cap \varOmega ) \displaystyle \oslash F(A) = F(A) \displaystyle \oslash F(A)\).

  3. \(\mathbf {1}\) uniqueness: Assume (for reductio) there is another \(\mathbf {1}' \in {\mathrm{Im}}(F)\) such that \(a \displaystyle \oslash \mathbf {1}' = a\) for all \(a \in {\mathrm{Im}}(F)\). But then \(\mathbf {1}' \displaystyle \oslash \mathbf {1}' = \mathbf {1}'\). Yet \(\mathbf {1}' \displaystyle \oslash \mathbf {1}' = \mathbf {1}\) by Inverses. Thus \(\mathbf {1}' = \mathbf {1}\).

  4. \(\mathbf {0}\) is left \(\displaystyle \oslash \)-absorbing for \(F\)-consistent \(a\in {\mathrm{Im}}(F)\): Let \(a \in {\mathrm{Im}}(F)\) be \(F\)-consistent. Thus there is \(A \in {\mathcal {A}}\) such that \(F(A)=a\) and \(F(.|A)\) is defined (\(F\)-consistency). \( \mathbf {0}\displaystyle \oslash a = F(\emptyset ) \displaystyle \oslash F(A) = F(\emptyset \cap A) \displaystyle \oslash F(A)= F(\emptyset | A) = F(\emptyset ) = \mathbf {0}\) (by Pl2, conditionalisation, closure and the fact that \(F\) and \(F(.|A)\) have the same \(\mathbf {0}\)).

  5. Cancelling: can be proven from coincidence; see the if-and Lemma 11. But since it is not directly required, we simply state it. \(\square \)
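The \(\displaystyle \oslash \)-laws of Lemma 9 can likewise be checked mechanically for the two standard instances, division for probabilities and subtraction for negative ranks; the following sketch (sample values are my own) does exactly that, including a purely arithmetical check of the cancelling property stated in item 5.

```python
# A minimal sketch (not from the paper): the oslash-laws of Lemma 9 for the two
# standard instances -- probability, where oslash is division (1 = 1, 0 = 0), and
# negative ranks, where oslash is subtraction (1 = 0, 0 = infinity).  The sample
# values below are made up.
from fractions import Fraction

INF = float("inf")

def check_oslash(oslash, one, zero, values):
    for a in values:
        assert oslash(a, one) == a                      # 1 is the oslash-right identity
        if a != zero:
            assert oslash(a, a) == one                  # oslash-inverses for consistent a
            assert oslash(zero, a) == zero              # 0 is left oslash-absorbing
        for b in values:
            if b == zero or a == zero:
                continue
            for x in values:
                # cancelling: (x oslash b) oslash (a oslash b) = x oslash a, where defined
                if oslash(a, b) != zero:
                    assert oslash(oslash(x, b), oslash(a, b)) == oslash(x, a)

# Probability values: oslash is division.
check_oslash(lambda a, b: a / b, Fraction(1), Fraction(0),
             [Fraction(0), Fraction(1, 4), Fraction(1, 2), Fraction(1)])
# Rank values: oslash is subtraction on the naturals plus infinity.
check_oslash(lambda a, b: a - b, 0, INF, [0, 1, 2, INF])

print("the oslash-laws of Lemma 9 hold in both sample structures")
```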

1.3 Pseudo-Inverse

Given \(\displaystyle \oslash \), let us define \(x \otimes b= a\) whenever \(a \displaystyle \oslash b\) is defined and \(a \displaystyle \oslash b =x\). \(\otimes \) is then a partial function \({\mathrm{Im}}(F)(\displaystyle \oslash ) \times {\mathrm{Im}}(F)\longrightarrow {\mathrm{Im}}(F)\), defined for all \(x \in {\mathrm{Im}}(F)(\displaystyle \oslash )\) and \(b \in {\mathrm{Im}}(F)\) such that \(b=F(B)\), \(B\) is \(F\)-consistent and \(x= F(A|B)\) for some A, and then \(x \otimes b = F(A \cap B)\). Thus, intuitively, \(\otimes \) is defined over conditional plausibility values \(x=F(A|B)\) and corresponding plausibility values \(b=F(B)\). Note that \(\otimes \) satisfies the following fundamental equation for \(\langle a,b\rangle \) an \(F\)-consistent couple:

$$\begin{aligned} (a \displaystyle \oslash b) \otimes b =a \end{aligned}$$
(A.1)
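The following sketch (not from the paper) illustrates the fundamental equation (A.1) for the two standard readings of \(\displaystyle \oslash \) and \(\otimes \): division and multiplication for probabilities, subtraction and addition for negative ranks. The sample value pairs are made up but chosen so that the couples are consistent.

```python
# A minimal sketch (not from the paper): the fundamental equation (A.1),
# (a oslash b) otimes b = a, for the two standard readings -- probability
# (oslash = division, otimes = multiplication) and negative ranks
# (oslash = subtraction, otimes = addition).  The value pairs are made up,
# each playing the role of a consistent couple <F(A & B), F(B)>.
from fractions import Fraction

def check_fundamental(oslash, otimes, pairs):
    for a, b in pairs:
        assert otimes(oslash(a, b), b) == a     # (a oslash b) otimes b = a

# Probability: 0 <= a <= b and b > 0.
check_fundamental(lambda a, b: a / b, lambda x, b: x * b,
                  [(Fraction(1, 6), Fraction(1, 2)),
                   (Fraction(0), Fraction(1, 3)),
                   (Fraction(1, 4), Fraction(1, 4))])
# Ranks: a >= b and b finite (a may be infinite).
check_fundamental(lambda a, b: a - b, lambda x, b: x + b,
                  [(3, 1), (2, 2), (float("inf"), 0)])

print("(a oslash b) otimes b = a holds in both sample structures")
```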

In this sense, \(\otimes \) partially inverts \(\displaystyle \oslash \). From this, we obtain

Lemma 10

(pseudo-inverses) Let \(F\) be a \(\displaystyle \oslash \)-conditionalisable plausibility. For \(F\)-consistent \(x\in {\mathrm{Im}}(F)\) we have

  1. \(\mathbf {1}\otimes x = x = x \otimes \mathbf {1}\).

  2. \(\mathbf {0}\otimes x = \mathbf {0}\).

Proof

Since \(x \in {\mathrm{Im}}(F)\) is \(F\)-consistent there is \(A \in {\mathcal {A}}\) such that \(F(A)=x\) and A is \(F\)-consistent, so that \(F_A\) is defined.

  1. \(\mathbf {1}\otimes x\) is defined for \(F(A)=x\) and \(A\) \(F\)-consistent: Choose \(B = \varOmega \). Then \(z=F(A \cap B) = F(A \cap \varOmega ) = F(A)=x\). Thus we can write \(\mathbf {1}= x \displaystyle \oslash x = z \displaystyle \oslash x\) (\(\displaystyle \oslash \)-inverses). Hence \(\mathbf {1}\otimes x = (x \displaystyle \oslash x) \otimes x = (z \displaystyle \oslash x) \otimes x = z = x\) by the fundamental equation. Similarly \(x \otimes \mathbf {1}\) is defined for \(x \in {\mathrm{Im}}(F)\): Let \(F(A)=x \in {\mathrm{Im}}(F)\). Thus \(x = x \displaystyle \oslash \mathbf {1}\) (by the \(\mathbf {1}\) right \(\displaystyle \oslash \)-identity) and \(x = F(A|\varOmega )\). Hence \(x \otimes \mathbf {1}= x\) by the fundamental equation.

  2. \(\mathbf {0}\otimes x\) is defined for \(A\) \(F\)-consistent and \(F(A)=x\): Choose \(B = \overline{A}\). Then \(A \cap B = \emptyset \) and thus \(\mathbf {0}= F(A \cap B) = z \) (Pl2). Hence we can rewrite \(\mathbf {0}= z \displaystyle \oslash x\). Thus \(\mathbf {0}\otimes x = (z \displaystyle \oslash x) \otimes x = z = \mathbf {0}\) by the fundamental equation. \(\square \)

1.4 Triviality

In all that follows, we assume \(F\) to be a quasi probability. The following lemma proves the plausibilistic analogue of what Douven called the Generalised Stalnaker (hypo)thesis:

Lemma 11

(if-and) Let \(F\) be a \(\displaystyle \oslash \)-conditionalisable plausibility and \(A,B \in {\mathcal {A}}\) such that \(A \cap B\) is \(F\)-consistent and assume \({\mathrm{S}}(F_B)\) (as in Lemma 3). Then

$$\begin{aligned} F(A > C| B) = F(C| A \cap B). \end{aligned}$$
(A.2)

Proof

Let \(F\) be a \(\displaystyle \oslash \)-conditionalisable plausibility, let \(A \cap B\) be \(F\)-consistent (i.e., \(F(A\cap B) >_X \mathbf {0}\)) and suppose \(F_B\) satisfies the Stalnaker Thesis. Then B as well as A are \(F\)-consistent (Pl1). Thus \(F_B\) and \(F_{A \cap B}\) are defined, the Stalnaker Thesis makes sense for \(F_B\), and both sides of the equation are defined. Furthermore, A remains \(F_B\)-consistent since \(\displaystyle \oslash \) is monotone in the first argument. In effect, consider \(F_B(A)= F(A \cap B) \displaystyle \oslash F(B)\). We have \(F(A \cap B) >_X \mathbf {0}= F(\emptyset \cap B)\). By monotonicity in the first argument, \(F(A \cap B) \displaystyle \oslash F(B) >_X F(\emptyset \cap B) \displaystyle \oslash F(B) = \mathbf {0}\). Thus A remains \(F_B\)-consistent. Hence coincidence applies, i.e., \((F_B)_A = F_{A \cap B}\). Thus

$$\begin{aligned} \begin{array}{rlr} F(A> C| B) &{} = F_B(A > C) &{} (B \text{ is } F\text{-consistent })\\ &{} = F_B(C|A) &{} ({\mathrm{S}}(F_B))\\ &{} = (F_B)_A(C) &{} (A \text{ is } F_B\text{-consistent })\\ &{} = F(C |A \cap B) &{} (\text{ coincidence }, F \text{ is } A \cap B \text{ conditionalisable }) \end{array} \end{aligned}$$

Note that thereby (lines 2 to 4) we have indirectly proven the cancellation property: \((x \displaystyle \oslash b) \displaystyle \oslash (y \displaystyle \oslash b) = x \displaystyle \oslash y\). For this, use coincidence and \(x = F(A \cap B \cap C), y = F(A \cap B)\) and \(b = F(B)\). \(\square \)

Lemma 12

(accept) Let \(F\) be a \(\displaystyle \oslash \)-conditionalisable plausibility and \(A,C \in {\mathcal {A}}\) such that \(A \cap C\) is \(F\)-consistent and assume \({\mathrm{S}}(F_C)\) (as in Lemma 4). Then

$$\begin{aligned} F(A > C| C) \otimes F(C) = F(C). \end{aligned}$$
(A.3)

Proof

Let \(F, A, C\) be as described; we may thus apply the if-and Lemma 11. Note that \(F(A> C|C)= F((A > C) \cap C) \displaystyle \oslash F(C)\), so that the left hand side of the above equation is defined. Note also that C and \(A \cap C\) are \(F\)-consistent, so that \(F(.|A \cap C)\) and \(F(.|C)\) are defined. Since \(F\) is \(\displaystyle \oslash \)-conditionalisable, the law of \(\displaystyle \oslash \)-inverses holds and \(\mathbf {1}\) is the left \(\otimes \)-identity. Thus

$$\begin{aligned} \begin{array}{lr} F(A > C| C) \otimes F(C) = F(C| A \cap C) \otimes F(C) &{} \text{(if-and } \text{ Lemma) }\\ \quad = [~F(C \cap (A \cap C)) \displaystyle \oslash F(A \cap C)~] \otimes F(C) &{} (\text{ conditionalisable })\\ \quad = [~F(A \cap C) \displaystyle \oslash F(A \cap C) ~] \otimes F(C) &{} (\text{ algebra })\\ \quad = \mathbf {1}\otimes F(C) &{} (\displaystyle \oslash \text{-inverses })\\ \quad = F(C) &{} (\mathbf {1}\otimes \text{-left } \text{ Identity) } \end{array} \end{aligned}$$

\(\square \)

Lemma 13

(reject) Let \(F\) be a \(\displaystyle \oslash \)-conditionalisable plausibility and \(A,\overline{C} \in {\mathcal {A}}\) such that \(A \cap \overline{C}\) is \(F\)-consistent and assume \({\mathrm{S}}(F_{\overline{C}})\) (as in Lemma 5). Then

$$\begin{aligned} F(A > C| \overline{C}) \otimes F(\overline{C}) = \mathbf {0}. \end{aligned}$$
(A.4)

Proof

Let \(F, A, \overline{C}\) be as described; we may thus apply the if-and Lemma 11. Note that \(F(A> C|\overline{C})= F((A > C) \cap \overline{C}) \displaystyle \oslash F(\overline{C})\), so that the left hand side of the above equation is defined. Note also that \(\overline{C}\) and \(A \cap \overline{C}\) are \(F\)-consistent, hence \(F(.|A \cap \overline{C})\) and \(F(.|\overline{C})\) are defined and by conditionalisability have the same \(\mathbf {0}\) as \(F\), so that \(\mathbf {0}\) is \(\displaystyle \oslash \)-left absorbing. Finally \(\mathbf {0}\) is \(\otimes \)-left absorbing. Thus

$$\begin{aligned} \begin{array}{lr} F(A > C| \overline{C}) \otimes F(\overline{C}) =F(C| A \cap \overline{C}) \otimes F(\overline{C}) &{} \text{(if-and } \text{ Lemma) }\\ \quad =[~F(C \cap (A \cap \overline{C})) \displaystyle \oslash F(A \cap \overline{C}) ~] \otimes F(\overline{C}) &{} (\text{ conditionalisable })\\ \quad = [~F(\emptyset ) \displaystyle \oslash F(A \cap \overline{C}) ~] \otimes F(\overline{C}) &{} \text{(algebra) }\\ \quad = [~\mathbf {0}\displaystyle \oslash F(A \cap \overline{C})~] \otimes F(\overline{C}) &{} (\text{ Pl2, } \text{ minimum })\\ \quad = \mathbf {0}\otimes F(\overline{C}) &{} (\mathbf {0}~\text{ is } \displaystyle \oslash \text{-left } \text{ absorbing })\\ \quad = \mathbf {0}&{} (\mathbf {0} \text{ is } \otimes \text{-left } \text{ absorbing }) \end{array} \end{aligned}$$

\(\square \)

Lemma 14

(totality) Let \(F\) be a quasi probability (\(\oplus \)-separable and \(\displaystyle \oslash \)-conditionalisable plausibility) and let \(C,\overline{C}\) be \(F\)-consistent (as in Lemma 6). Then

$$\begin{aligned} F(A) = \oplus [~F(A|C) \otimes F(C) ~,~F(A|\overline{C}) \otimes F(\overline{C})~]. \end{aligned}$$
(A.5)

Proof

Let \(F\) and \(A,C \in {\mathcal {A}}\) be as claimed and \(\otimes \) as previously defined. Setting \(a=F(A \cap C)\), \(a' = F(A \cap \overline{C})\), \(c = F(C)\) and \(c' = F(\overline{C})\), we obtain \((\sharp )\): \(F(A \cap C)= (F(A \cap C) \displaystyle \oslash F(C)) \otimes F(C) = F(A|C) \otimes F(C)\) and \(F(A\cap \overline{C})= (F(A \cap \overline{C}) \displaystyle \oslash F(\overline{C})) \otimes F(\overline{C}) = F(A|\overline{C}) \otimes F(\overline{C})\). Then

$$\begin{aligned} \begin{array}{rlr} F(A) &{} =F((A \cap C) \cup (A \cap \overline{C}) )&{} (\text{ algebra })\\ &{} =\oplus [~F(A \cap C)~,~F(A \cap \overline{C}) ~]&{} (\text{ separable })\\ &{}= \oplus [~F(A|C) \otimes F(C) ~,~F(A|\overline{C}) \otimes F(\overline{C})~]&{} (\sharp ) \end{array} \end{aligned}$$

\(\square \)
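As an illustration of the totality law (A.5), the following sketch (my own toy example, with invented ranks) checks it for a ranking-theoretic quasi probability, where \(\oplus \) is \(\min \) and \(\otimes \) is \(+\), over every proposition \(A\) of a four-world space.

```python
# A minimal sketch (not from the paper): the totality law (A.5) for a
# ranking-theoretic quasi probability, where oplus is min and otimes is +:
#   kappa(A) = min( kappa(A|C) + kappa(C), kappa(A|~C) + kappa(~C) ).
# Worlds and ranks are invented; C and its complement are both kappa-consistent.
from itertools import chain, combinations

INF = float("inf")
rank = {"w1": 0, "w2": 1, "w3": 2, "w4": 1}

def kappa(A):
    return min((rank[w] for w in A), default=INF)

def kappa_cond(B, A):
    return kappa(A & B) - kappa(A)              # fine here: kappa(C) and kappa(~C) are finite

def subsets(S):
    S = list(S)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

OMEGA = frozenset(rank)
C = frozenset({"w1", "w2"})
notC = OMEGA - C

for A in subsets(OMEGA):
    lhs = kappa(A)
    rhs = min(kappa_cond(A, C) + kappa(C), kappa_cond(A, notC) + kappa(notC))
    assert lhs == rhs, (A, lhs, rhs)

print("totality (A.5) holds for every proposition on this toy ranking space")
```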

Theorem 7

(triviality) Assume \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\) for the class of plausibilities \(\mathbb {C}\). Let \(F\) be a quasi probability (\(\displaystyle \oslash \)-conditionalisable and \(\oplus \)-separable plausibility) in \(\mathbb {C}\). Additionally, let \(A,C \in {\mathcal {A}}\) such that \(F(A\cap C), F(A \cap \overline{C})>_X \mathbf {0}\) (as in Theorem 4). Then

$$\begin{aligned} F(A > C) = F(C). \end{aligned}$$
(A.6)

Proof

Since \(A \cap C\) and \(A \cap \overline{C}\) are \(F\)-consistent, \(A,C,\overline{C}\) are \(F\)-consistent. Thus, by separability and conditionalisability, the totality Lemma 14 applies for the partition \(\{C, \overline{C}\}\). Additionally, since \(F\) is a \(\displaystyle \oslash \)-conditionalisable plausibility, A is \(F_C\)-consistent and \(F_{\overline{C}}\)-consistent (see the previous argument). Thus \(F_C\) as well as \(F_{\overline{C}}\) are defined and, by \({\mathrm{C}}(\mathbb {C})\), they are in \(\mathbb {C}\). By \({\mathrm{S}}(\mathbb {C})\), the Stalnaker Thesis holds for them. Thus we also have the conditions of the accept and reject Lemmas 12 and 13. Hence:

$$\begin{aligned} \begin{array}{lr} F(A> C) = \oplus ( F(A> C| C) \otimes F(C)~, ~F(A > C| \overline{C}) \otimes F(\overline{C})) &{} (\text{ totality } \text{ Lemma })\\ \quad = \oplus (F(C)~,~\mathbf {0}) &{} (\text{ accept, } \text{ reject } \text{ Lemma })\\ \quad = F(C) &{} (\mathbf {0}\oplus \text{-Identity }) \end{array} \end{aligned}$$

\(\square \)

Corollary 5

For any quasi probability \(F\in \mathbb {C}\), \(A,C \in {\mathcal {A}}\) such that \(F(AC), F(A \overline{C})>_X \mathbf {0}\): \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\) imply \(F(C|A) =F(A > C)=F(C)\).

Call a plausibility function \(F\) trivial if there are no three disjoint propositions \(C,D,E \in {\mathcal {A}}\) that are all \(F\)-possible (that is, \(F(C), F(D), F(E)>_X \mathbf {0}\)). We now prove Triviality (Corollary 4 in the text).

Corollary 6

(Lewis-like triviality) There is no class \(\mathbb {C}\) of quasi probability functions, such that \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\), unless all \(F\in \mathbb {C}\) are trivial.

Proof

Assume there is a class \(\mathbb {C}\) of quasi probability functions such that \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\). Assume (for reductio) that \(\mathbb {C}\) contains a non-trivial quasi probability \(F\). Therefore, there are disjoint \(C,D,E \in {\mathcal {A}}\) such that \(F(C), F(D), F(E) >_X \mathbf {0}\). Consider \(A = C \cup D\). Thus \(AC = C\) and \(A \overline{C} =D\). Hence \(F(AC),F(A\overline{C})>_X \mathbf {0}\), as well as \(F(A), F(C), F(\overline{C})>_X \mathbf {0}\) (Pl1).

Since \(F\in \mathbb {C}\) and \(A,C \in {\mathcal {A}}\), we have \(A > C \in {\mathcal {A}}\) (from \({\mathrm{S}}(\mathbb {C})\) and because \(F\) is only defined over elements of \({\mathcal {A}}\)). Thus, by Corollary 5 to the triviality Theorem 7:

$$\begin{aligned} F(C|A) = F(C). \end{aligned}$$

However, note that \(F(C|A) = F(CA) \displaystyle \oslash F(A) = F(C) \displaystyle \oslash (F(C) \oplus F(D)) \ne F(C)\). Otherwise \(F(C) \oplus F(D) =\mathbf {1}\) (uniqueness of \(\mathbf {1}\)). But then \((F(C) \oplus F(D)) \oplus F(E) = \mathbf {1}\) and thus \(\mathbf {1}\oplus F(E) = \mathbf {1}\) and therefore \(F(E) = \mathbf {0}\) (uniqueness of \(\mathbf {0}\)). But then it is not possible that \(F(E)>_X \mathbf {0}\). This contradicts the assumption. \(\square \)
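The numerical core of this reductio is easy to exhibit for an ordinary probability measure, which is of course a quasi probability. In the sketch below (weights invented), \(C, D, E\) are disjoint and \(F\)-possible, \(A = C \cup D\), and the triviality consequence \(F(C|A) = F(C)\) of Corollary 5 visibly fails, exactly as the proof predicts.

```python
# A minimal sketch (not from the paper): the numerical core of Corollary 6 for an
# ordinary probability measure (hence a quasi probability).  C, D, E are disjoint
# and F-possible, A = C u D; the triviality consequence F(C|A) = F(C) then fails,
# and it could only hold if P(C) + P(D) = 1, i.e. if P(E) = 0.  Weights are made up.
from fractions import Fraction

P = {"C": Fraction(1, 2), "D": Fraction(1, 3), "E": Fraction(1, 6)}   # disjoint, all positive

P_A = P["C"] + P["D"]               # A = C u D, so P(A) = P(C) oplus P(D)
P_C_given_A = P["C"] / P_A          # F(C|A) = F(CA) oslash F(A)

# Corollary 5 under S(C) and C(C) would force P(C|A) == P(C); here it fails:
assert P_C_given_A != P["C"]

print("P(C|A) =", P_C_given_A, "but P(C) =", P["C"], ": a non-trivial quasi probability contradicts S")
```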

Comparative Remarks

The main differences between the approach of Morgan and Mares (1995) and Morgan (1999) and the one presented here are the following. In Morgan and Mares (1995), \(P(C| \varGamma )\) is a previously understood function, defined for a sentence \(C\) and a set \(\varGamma \) of sentences. The generalised Stalnaker Thesis is directly assumed and, instead of exploiting the equality \(P(C) = P(C|A)\) to prove triviality, they exploit the inequality \(P(C) \le P(C|A)\) (as I did in Theorem 5). In their framework this is the inequality \(P(C|\varGamma ) \le P(C| \varGamma \cup \{A\})\) [theorem 2.4 in Morgan and Mares (1995) and theorem V.6 in Morgan (1999)]. They also prove a slightly different triviality result, namely \(P(A|\varGamma )= P(B|\varGamma )\) for all \(A,B, \varGamma \) such that \(P(A|\varGamma ), P(B|\varGamma )\) are not maximal [theorem 3.1 in Morgan and Mares (1995) and theorem V.7 in Morgan (1999)]. Their framework targets Adams’ Thesis and is built to allow formulating a non-standard probabilistic semantics. The logical relation between their assumptions and the assumptions made here is not clear-cut, because of the difference in the result and the difference in the form of the conditional plausibility.

Roughly, the differences are as follows: their assumptions (C1–C5, see footnote 56) used to prove a triviality in Morgan and Mares (1995) are on the one hand ‘stronger’ and on the other hand ‘weaker’ than the assumptions made here. In particular, their C1 assumes a total order, C2 is a success postulate, a version of \(\displaystyle \oslash \)-conditionalisability can be derived from C3, C4 is a version of the generalised Stalnaker thesis, and C5 is a weak assumption on negation. The assumptions are ‘stronger’ in the sense that neither totality, nor a form of C3, nor the generalised Stalnaker thesis is assumed here. Note also that if we consider their functions up to logical equivalence, then C1–C5 imply a version of \(\displaystyle \oslash \)-conditionalisability. On the other hand, C3 in particular cannot be derived from the assumptions made here: it would correspond to the cancelling property, but for arguments for which this property is not necessarily defined. The assumptions are ‘weaker’ because their P may treat logically equivalent sentences (or sets of sentences) differently and C5 is a weaker assumption than separability, i.e., it can be derived from the latter.

Similar things can be said about the weaker assumptions in Morgan (1999): only a transitive relation is assumed (CCP1), the success postulate is weakened to (CCP2), and the negation postulate is likewise weakened to (CCP5). However, the generalised Stalnaker thesis is still directly assumed with (CCP4) and a weak version of C3 is present in (CCP3), which is still not derivable from the assumptions made here. Note also that the triviality obtained by replacing (CCP4) with a simple Stalnaker Thesis (CCP4') still assumes a special form of conditionalisation (CCP0), which is built on the general iteration property \(P_{\varSigma }(A| \varGamma )= P(A | \varGamma \cup \varSigma )\) and which I only assume in special cases (in the property of coincidence).


Cite this article

Raidl, E. Lewis’ Triviality for Quasi Probabilities. J of Log Lang and Inf 28, 515–549 (2019). https://doi.org/10.1007/s10849-019-09289-0
