Abstract
According to Stalnaker’s Thesis (S), the probability of a conditional is the corresponding conditional probability. Under some mild conditions, the thesis trivialises probabilities and conditionals, as first shown by David Lewis. This article asks the following question: does (S) still lead to triviality if the probability function in (S) is replaced by a probability-like function? The article considers plausibility functions, in the sense of Friedman and Halpern, which additionally mimic probabilistic additivity and conditionalisation. These quasi probabilities comprise Friedman–Halpern’s conditional plausibility spaces, as well as other known representations of conditional doxastic states. The paper proves Lewis’ triviality for quasi probabilities and discusses the implications for three other prominent strategies for avoiding Lewis’ triviality: (1) Adams’ thesis, where the probability function on the left in (S) is replaced by a probability-like function, (2) abandoning conditionalisation, where probability conditionalisation on the right in (S) is replaced by another propositional update procedure, and (3) the approximation thesis, where equality in (S) is replaced by approximation. The paper also shows that Lewis’ triviality result is really about ‘additiveness’ and ‘conditionality’.
Notes
\({\mathcal {A}}\) is an algebra over \(\varOmega \) iff \(\varOmega \in {\mathcal {A}}\) and \({\mathcal {A}}\) is closed under complements and finite unions.
A similar proof can be found in Hájek (2011).
For defences of the Stalnaker thesis, see Stalnaker (1968, 1970), Van Fraassen (1976), Rehder (1982), Edgington (1995), Bennett (2003) and Evans and Over (2004). For a criticism, other triviality results, and a discussion of why the thesis ‘seems right’, see Stalnaker (1976), Lewis (1986), Hájek (1989, 1994, 2011, 2012), Hájek and Hall (1994) and Milne (2003).
Under negative relevance, or irrelevance, this equality is systematically violated. Compare Skovgaard-Olsen et al. (2017).
See footnote 3 for references.
For reasons to adopt Adams’ thesis, see Hájek (2012).
For arguments against conditionalisation, see Meacham (2016).
Note that imaging satisfies success but violates certainty preservation as well as moderation (introduced below).
Gärdenfors (1982, theorem 3) proved a similar result on stronger assumptions: there is no non-trivial finite certainty preserving and rich (probabilistic) belief-change model which satisfies the Stalnaker Thesis.
Options 1, 2 and 3 have similar approximate versions.
The construction, however, depends on the assumption of independence and identical distribution for repeated events.
They are equivalent when \(P_B\) is interpreted as conditionalisation—proving equivalence of (a) and (b)—and under the assumption of the simple Stalnaker thesis—proving equivalence of (b) and (c).
I am thankful to an anonymous referee for this remark.
For all \(x,y,z \in X\): \(x \le _X x\); if \(x\le _X y\) and \(y \le _X z\) then \(x \le _X z\); if \(x\le _X y\) and \(y \le _X x\) then \(x = y\).
See Dubois and Prade (1988) for a definition.
See Friedman and Halpern (1995).
Friedman and Halpern (1995) showed that DECOMP is equivalent to \(\oplus \) being monotonic and additionally forming a commutative category in which \(\mathbf {1}\) is absorbing. However, as we show in the next section, these conditions already follow from \(\oplus \) being just separable, together with standard properties of the algebraic operations \(\cup , \cap , ^C\) with respect to \(\emptyset , \varOmega \).
DECOMP is a weak variant of a property of qualitative probabilities called “disjoint unions” (cf. Friedman and Halpern 1995).
From weak decomposability it follows that if for disjoint A, B, \(F(A)= \mathbf {0}= F(B)\) then \(F(A \cup B) = F(A \cup \emptyset ) =\mathbf {0}\). However, there are belief functions such that \(Bel(A)=0=Bel(B)\) and \(Bel(A \cup B)=1\). See Friedman and Halpern (1995) for an example of a non-decomposable belief function.
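The footnote’s belief-function example can be made concrete. The following Python sketch (the frame, events and mass assignment are invented for illustration) builds a Dempster–Shafer belief function whose single focal set is \(A \cup B\); it is not weakly decomposable, since \(Bel(A)=0=Bel(B)\) but \(Bel(A \cup B)=1\).

```python
# Sketch: a non-decomposable Dempster-Shafer belief function.
# The mass function puts all its weight on the union of the two
# disjoint events A and B, so no strict subset of A | B gets belief.

def bel(event, mass):
    """Bel(E) = sum of the masses of the focal sets contained in E."""
    return sum(m for focal, m in mass.items() if focal <= event)

A = frozenset({"w1"})
B = frozenset({"w2"})
mass = {A | B: 1.0}              # single focal set: A union B

assert bel(A, mass) == 0.0       # Bel(A) = 0
assert bel(B, mass) == 0.0       # Bel(B) = 0
assert bel(A | B, mass) == 1.0   # but Bel(A u B) = 1
```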
For a probability function P, \(\langle P(BA), P(A)\rangle \) is a P-consistent couple iff \(P(A)>0\). For a ranking function \(\kappa \), \(\langle \kappa (BA), \kappa (A) \rangle \) is a \(\kappa \)-consistent couple iff \(\kappa (A) < \infty \).
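For illustration, the two notions of a consistent couple can be sketched side by side in Python (the toy distributions and world names are invented): probabilistic conditionalisation is defined exactly when \(P(A)>0\), and ranking-theoretic conditionalisation exactly when \(\kappa (A) < \infty \).

```python
import math

def p_cond(P, B, A):
    """P(B|A): defined exactly when the couple is P-consistent, i.e. P(A) > 0."""
    pa = sum(P[w] for w in A)
    if pa == 0:
        raise ValueError("P-inconsistent couple: P(A) = 0")
    return sum(P[w] for w in B & A) / pa

def k_cond(kappa, B, A):
    """kappa(B|A) = kappa(B & A) - kappa(A): defined iff kappa(A) < infinity."""
    ka = min((kappa[w] for w in A), default=math.inf)
    if math.isinf(ka):
        raise ValueError("kappa-inconsistent couple: kappa(A) = infinity")
    return min((kappa[w] for w in B & A), default=math.inf) - ka

P = {"w1": 0.5, "w2": 0.3, "w3": 0.2}        # probability mass function
kappa = {"w1": 0, "w2": 1, "w3": math.inf}   # negative ranking function

print(p_cond(P, {"w1"}, {"w1", "w2"}))       # defined, since P(A) = 0.8 > 0
print(k_cond(kappa, {"w2"}, {"w2", "w3"}))   # defined, since kappa(A) = 1
```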
Usually this is called ‘non-trivial’, but ‘trivial’ is reserved here for another meaning. \(\mathbf {1}>\mathbf {0}\) ensures that \(F\) does not have the same value everywhere and that conditionalisation is defined at least for \(\varOmega \). Otherwise all values of all conditional plausibilities would collapse.
This does not assume that sequential conditionalisation \((F_A)_B\) is defined, but only that it can be reduced to the cumulative conditionalisation \(F_{A \cap B}\) in the mentioned case in the premisses.
The qualitative version of coincidence follows from the AGM postulates of sub-expansion and super-expansion, provided we consider expansion as the second revision step.
For positive and two-sided ranking functions, see Spohn (2012, def. 5.20, 5.21).
Note: if A is \(F\)-consistent then \(F_A\) is defined. If furthermore B is \(F_A\)-consistent then \((F_A)_B\) is defined. Coincidence then says that \((F_B)_A\) as well as \(F_{A \cap B}\) are also defined.
Initially I called them “nice plausibilities” to turn a remark of Friedman and Halpern (1995) into a definition. However, their ‘nice’ plausibilities satisfy, in addition to C3, also two other properties (their C1 and C2).
However, I shy away from adopting this terminology, since in just the same sense, \(\oplus \) and \(\displaystyle \oslash \) also behave almost like the known \(\min \) and −.
They show that decomposability DECOMP is equivalent to the plausibility being \(\oplus \)-separable and \(\oplus \) satisfying (1,2,3,6) and monotone in both arguments (when composition is defined), where they call ‘additive’ the properties (3,6). They do not prove the ortho-complement structure (5). However, note that by the previous Theorem 2 and the remark thereafter, separability is enough to obtain \(\hbox {DECOMP}_=\) and monotone separability would be enough for DECOMP. The other laws follow from boolean laws of the algebra \({\mathcal {A}}\).
X are the objects of the category, \(X \oplus X\) are the arrows, where \(f: a {\mathop {\rightarrow }\limits ^{b}} (a \oplus b)\) is an arrow iff a, b are disjoint, the source of f being a and the result \(a \oplus b\), and arrow composition is defined if the result of the first equals the source of the second. This forms a category over X, with the same right and left identities, by (2) and (3). It is commutative by (1) and ortho-complemented by (5). The term “ortho-complemented” is taken from lattice theory, but in our context it is much weaker. Ortho-complementation here only requires \(\oplus \) (\(\vee \) in lattice theory) to have property (5.a, b)—complement law and involution—but not the additional order-inverting property, nor the three analogous properties for \(\otimes \) (\(\wedge \) in lattice theory).
A monoid is defined like a group, except that inverses need not exist.
\(\min \) forms a commutative monoid over the rank-values in \(\mathbb {N}_{\infty }\), with \(\mathbf {1}=0\) and \(\mathbf {0}=\infty \), which additionally is uniquely complemented, i.e., there is a unique ortho-complement for any number, namely 0. Similarly for a possibility measure.
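The monoid claim can be spot-checked numerically. In the following sketch, math.inf stands in for \(\infty \) and an arbitrary finite sample of rank values is used.

```python
import math
from itertools import product

INF = math.inf
ONE, ZERO = 0, INF            # for ranks: 1 = rank 0, 0 = rank infinity

sample = [0, 1, 2, 5, INF]    # a small, arbitrary sample of rank values

for a, b, c in product(sample, repeat=3):
    assert min(a, b) == min(b, a)                  # commutativity
    assert min(min(a, b), c) == min(a, min(b, c))  # associativity
for a in sample:
    assert min(a, ZERO) == a    # 0 (= infinity) is the min-identity
    assert min(a, ONE) == ONE   # 1 (= rank 0) is absorbing
```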
\(+\), although total over \(\mathbb {R}\), is not total over [0, 1]: the sum of two values in [0, 1] may exceed 1.
This was the reason for Friedman and Halpern (1995) to define standardness by \(X_A \sqsubseteq X\) or \(X_A = \{\mathbf {0}_A\}\). However, it is a curious consequence that conditioning on \(F(A)= \mathbf {0}\) trivialises in this sense, given that they started out with Popper-measure-like plausibility structures, for which one would have expected conditioning on \(F(A)= \mathbf {0}\) to be non-trivially defined.
By the latter assumption, all axioms for \(\otimes \) are in fact implicitly restricted. \(\oplus ,\mathbf {0}\) forms a commutative (i.e., abelian) group, \(\otimes ,\mathbf {1}\) forms a commutative group over \(X {\setminus } \{\mathbf {0}\}\) and thus \(\otimes \) is invertible, and X is totally ordered by \(\le \) with minimum \(\mathbf {0}\) and maximum \(\mathbf {1}\), such that \(\oplus ,\otimes \) are monotone, and additionally \(\otimes \) distributes over \(\oplus \).
This ‘undefinedness’ assumption is similar to the trivialisation assumption \(F(B|\emptyset )= \mathbf {0}\) in Friedman and Halpern (1995), and is explicitly avoided here, in the sense that it is left open, whether \(F(B \cap A) \displaystyle \oslash F(A)\) is defined or not for \(F(A)= \mathbf {0}\).
For \(\displaystyle \oslash \) properly extended to \(\mathbf {0}\) in the first argument.
The curious removal of \(\mathbf {0}\) from the \(\otimes \)-structure also makes conditional valuation functions ill-defined, since then \(V(\emptyset |A)\) is undefined, even if \(V(A)>\mathbf {0}\).
In the footnotes here and in the proofs in the “Appendix” it is made precise where only conditionalisability is required.
This is a plausibilistic analogue to the ‘import–export’ Lemma of Hájek (2011).
\(\displaystyle \oslash \)-conditionalisability suffices.
Again \(\displaystyle \oslash \)-conditionalisability suffices.
Again \(\displaystyle \oslash \)-conditionalisability suffices.
Here \(F\) needs, for the first time, to be separable and \(\displaystyle \oslash \)-conditionalisable.
A weaker result can be proven: there is no class \(\mathbb {C}\) of plausibility functions such that \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\), unless any non-trivial \(F\in \mathbb {C}\) is not a quasi probability.
I thank an anonymous referee who pressed me on this issue.
As suggested by the referee in question: \(P(A>C) \ge P((A> C) \cap C) = P(A > C|C) P(C) =P(C)\), where the last equality follows from Lewis’ proof (i.e., the accept Lemma).
A positive ranking function \(\beta \) satisfies this, \(\beta (A|A)= \kappa (\overline{A}|A)= \infty \); so does a two-sided ranking function \(\tau \), \(\tau (A|A)= \infty \), a necessity measure \(\eta \), \(\eta (A|A)= 1- \rho (\overline{A}|A) = 1 - 0 =1\), and even imaging.
Not to be confused with those of Friedman and Halpern (1995).
References
Adams, E. (1965). The logic of conditionals. Inquiry, 8, 166–197.
Adams, E. (1975). The logic of conditionals. Dordrecht: Reidel.
Adams, E. (1998). A primer of probability logic. Stanford, CA: CSLI, Stanford University.
Arló-Costa, H. (1999). Belief revision conditionals: Basic iterated systems. Annals of Pure and Applied Logic, 96, 3–28.
Bennett, J. (2003). A philosophical guide to conditionals. Oxford: Oxford University Press.
Bradley, R. (2007). A defense of the Ramsey test. Mind, 116(461), 1–21.
Charlow, N. (2016). Triviality for restrictor conditionals. Noûs, 50(3), 533–564.
Darwiche, A., & Ginsberg, M. L. (1992). A symbolic generalization of probability theory. In Proceedings of the national conference on artificial intelligence AAAI’92 (pp. 622–627). Menlo Park, CA: AAAI Press.
Dempster, A. P. (1967). Upper and lower probabilities induced by a multivalued mapping. The Annals of Mathematical Statistics, 38(2), 325–339.
Dietz, R., & Douven, I. (2011). A puzzle about Stalnaker’s hypothesis. Topoi, 30(1), 31–37.
Douven, I. (2016). The epistemology of indicative conditionals: Combining formal and empirical approaches. Cambridge: Cambridge University Press.
Douven, I., & Verbrugge, S. (2010). The Adams family. Cognition, 117, 302–318.
Douven, I., & Verbrugge, S. (2013). The probabilities of conditionals revisited. Cognitive Science, 37, 711–730.
Dubois, D., & Prade, H. (1988). Possibility theory: An approach to computerized processing of uncertainty. New York: Plenum Press.
Edgington, D. (1995). On conditionals. Mind, 104(414), 235–329.
Evans, J., Handley, S., & Over, D. (2003). Conditionals and conditional probability. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 321–355.
Evans, J. S. B. T., & Over, D. (2004). If. Oxford: Oxford University Press.
Franke, M., & de Jager, T. (2010). Now that you mention it: Awareness dynamics in discourse and decisions. In A. Benz, et al. (Eds.), Language, games, and evolution. LNAI 6207 (pp. 60–91). Berlin: Springer.
Friedman, N., & Halpern, J. Y. (1995). Plausibility measures: A user’s guide. In Proceedings of the eleventh conference on uncertainty in artificial intelligence UAI’95 (pp. 175–184). San Francisco, CA: Morgan Kaufmann.
Fugard, A., Pfeifer, N., Mayerhofer, B., & Kleiter, G. (2011). How people interpret conditionals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 635–648.
Gärdenfors, P. (1982). Imaging and conditionalization. The Journal of Philosophy, 79(12), 747–760.
Gärdenfors, P. (1986). Belief revisions and the Ramsey test for conditionals. The Philosophical Review, 95, 81–93.
Gärdenfors, P. (1987). Variations on the Ramsey test: More triviality results. Studia Logica, 46(4), 321–327.
Gärdenfors, P. (1988). Knowledge in flux. Modeling the dynamics of epistemic states. Cambridge, MA: MIT Press.
Hájek, A. (1989). Probabilities of conditionals: Revisited. Journal of Philosophical Logic, 18, 423–428.
Hájek, A. (1994). Triviality on the cheap? In E. Eells & B. Skyrms (Eds.), Probability and conditionals (pp. 113–140). Cambridge: Cambridge University Press.
Hájek, A. (2011). Triviality pursuit. Topoi, 30(1), 3–15.
Hájek, A. (2012). The fall of ‘Adams’ thesis’? Journal of Language, Logic and Information, 21(2), 145–161.
Hájek, A., & Hall, N. (1994). The hypothesis of the conditional construal of conditional probability. In E. Eells & B. Skyrms (Eds.), Probability and conditionals (pp. 75–111). Cambridge: Cambridge University Press.
Jackson, F. (1987). Conditionals. Oxford: Blackwell.
Kaufmann, S. (2009). Conditionals right and left: Probabilities for the whole family. Journal of Philosophical Logic, 38, 1–53.
Kaufmann, S. (2015). Conditionals, conditional probabilities, and conditionalization. In H.-C. Schmitz & H. Zeevat (Eds.), Bayesian natural language semantics and pragmatics (pp. 71–94). Berlin: Springer.
Kern-Isberner, G. (2004). A thorough axiomatization of a principle of conditional preservation in belief revision. Annals of Mathematics and Artificial Intelligence, 40(1–2), 127–164.
Khoo, J. (2013). Conditionals, indeterminacy, and triviality. Philosophical Perspectives, 27(1), 260–287.
Khoo, J. (2016). Probabilities of conditionals in context. Linguistics & Philosophy, 39(1), 1–43.
Kratzer, A. (1986). Conditionals. Chicago Linguistics Society, 22(2), 1–15.
Kratzer, A. (1991). Modality. In A. von Stechow & D. Wunderlich (Eds.), Semantics: An international handbook of contemporary research (pp. 639–650). Berlin: de Gruyter.
Leitgeb, H. (2010). On the Ramsey test without triviality. Notre Dame Journal of Formal Logic, 51(1), 21–54.
Levi, I. (1967). Probability kinematics. British Journal for the Philosophy of Science, 18, 197–209.
Lewis, D. (1976). Probabilities of conditionals and conditional probabilities. Philosophical Review, 85, 297–315.
Lewis, D. (1986). Probabilities of conditionals and conditional probabilities II. Philosophical Review, 95, 581–589.
Meacham, C. J. G. (2016). Ur-priors, conditionalization and ur-prior conditionalization. Ergo, 3(17), 444–492.
Milne, P. (2003). The simplest Lewis-style triviality proof yet? Analysis, 63(4), 300–303.
Morgan, C. G. (1999). Conditionals, comparative probability, and triviality: The conditional of conditional probability cannot be represented in the object language. Topoi, 18, 97–116.
Morgan, C. G., & Mares, E. D. (1995). Conditionals, probability, and non-triviality. Journal of Philosophical Logic, 24(5), 455–467.
Oaksford, M., & Chater, N. (2007). Bayesian rationality: The probabilistic approach to human reasoning. Oxford: Oxford University Press.
Over, D. (2009). New paradigm psychology of reasoning. Thinking and Reasoning, 15, 431–438.
Over, D., Hadjichristidis, C., Evans, J., Handley, S., & Sloman, S. (2007). The probability of causal conditionals. Cognitive Psychology, 54, 62–97.
Politzer, G., Over, D., & Baratgin, J. (2010). Betting on conditionals. Thinking and Reasoning, 16, 172–197.
Popper, K. R. (1955). Two autonomous axiom systems for the calculus of probabilities. British Journal for the Philosophy of Science, 6, 51–57.
Raidl, E. (2018). Open-minded orthodox Bayesianism by Epsilon-conditionalisation. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy075.
Ramsey, F. P. (1926). Truth and probability. In R. B. Braithwaite (Ed.), The foundations of mathematics and other logical essays (pp. 156–198). London: Kegan, Paul, Trench, Trubner.
Rehder, W. (1982). Conditions for probabilities of conditionals to be conditional probabilities. Synthese, 53, 439–443.
Rott, H. (1986). Ifs, though, and because. Erkenntnis, 25, 345–370.
Rott, H. (2011). Reapproaching Ramsey: Conditionals and iterated belief change in the spirit of AGM. Journal of Philosophical Logic, 40, 155–191.
Shafer, G. (1976). A mathematical theory of evidence. Princeton: Princeton University Press.
Skovgaard-Olsen, N., Kellen, D., Krahl, H., & Klauer, K. C. (2017). Relevance differently affects the truth, acceptability, and probability evaluations of ‘and’, ‘but’, ‘therefore’, and ‘if then’. Thinking and Reasoning, 23(4), 449–482.
Skovgaard-Olsen, N., Singmann, H., & Klauer, K. C. (2016). The relevance effect and conditionals. Cognition, 150, 26–36.
Stalnaker, R. (1968). A theory of conditionals. Studies in logical theory, American Philosophical Quarterly Monograph series (Vol. 2). Oxford: Blackwell.
Stalnaker, R. (1970). Probability and conditionals. Philosophy of Science, 37, 64–80.
Stalnaker, R. (1975). Indicative conditionals. In W. L. Harper, R. Stalnaker, & G. Pearce (Eds.), Ifs: Conditionals, belief, decision, chance and time. The University of Western Ontario series in philosophy of science (Vol. 15, pp. 193–210). Dordrecht: Springer.
Stalnaker, R. (1976). Letter to van Fraassen. In W. L. Harper & C. A. Hooker (Eds.), Foundations of probability theory, statistical inference and statistical theories of science (Vol. I, pp. 302–306). Dordrecht: Reidel.
Spohn, W. (2012). The laws of belief: Ranking theory and its philosophical applications. Oxford: Oxford University Press.
Stalnaker, R. (2014). Context. Oxford: Oxford University Press.
Spohn, W. (2015). Conditionals: A unifying ranking-theoretic perspective. Philosophers’ Imprint, 15(1), 1–30.
Stalnaker, R., & Jeffrey, R. (1994). Conditionals as random variables. In E. Eells & B. Skyrms (Eds.), Probabilities and conditionals: Belief revision and rational decision (pp. 31–46). Cambridge: Cambridge University Press.
Spohn, W. (1988). Ordinal conditional functions: A dynamic theory of epistemic states. In W. L. Harper & B. Skyrms (Eds.), Causation in decision, belief change, and statistics (Vol. 2, pp. 105–134). Dordrecht: Kluwer.
Van Fraassen, B. (1976). Probabilities of conditionals. In W. L. Harper & C. A. Hooker (Eds.), Foundations of probability theory, statistical inference, and statistical theories of science (Vol. I, pp. 261–301). Dordrecht: Reidel.
Wenmackers, S., & Romeijn, J.-W. (2016). New theory about old evidence. A framework for open-minded Bayesianism. Synthese, 193(4), 1225–1250.
Weydert, E. (1994). General belief measures. In R. López de Mántara & D. Poole (Eds.), Proceedings of the tenth conference on uncertainty in artificial intelligence, UAI ’94 (pp. 575–582). San Francisco: Morgan Kaufmann.
Williams, J. R. G. (2012). Counterfactual triviality: A Lewis-impossibility argument for counterfactuals. Philosophy and Phenomenological Research, 85, 648–670.
Williamson, T. (1998). Conditionalizing on knowledge. British Journal for the Philosophy of Science, 49, 89–121.
Acknowledgements
I would like to thank Alan Hájek for encouraging me to publish these ideas, Arno Göbel for several discussions on this topic, Niels Skovgaard-Olsen for helpful comments, the people present at the European Epistemology Workshop 2016 in Paris and colleagues from the DFG-funded ‘What-if?’ research group in Konstanz for their comments, as well as two anonymous referees who have significantly contributed to improving the quality of the paper.
Funding for this research was provided by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Research Unit FOR 1614 and under Germany’s Excellence Strategy – EXC-Number 2064/1 – Project number 390727645.
Appendices
Proofs
1.1 Restricted Standard Conditional Plausibility Spaces
Lemma 7
Let \(F\) be weakly \(\displaystyle \oslash \)-conditionalisable and A be \(F\)-consistent. Then \(X \sqsubseteq X'\) and \(F\) and \(F_A\) are plausibilities over \(X'\).
Proof
We may consider \(X' = X \cup \{F_A(B) : B \in {\mathcal {A}}, F(A)> \mathbf {0}\}\). \(X'\) is partially ordered by assumption.
- Pl1: Let \(A \subseteq B\) and C \(F\)-consistent. Thus \(A \cap C \subseteq B \cap C\). Hence \(F(A \cap C) \le _X F(B \cap C)\) (Pl1). And since \(\displaystyle \oslash \) is monotone in the first argument, we have \(F(A|C) = F(A \cap C) \displaystyle \oslash F(C) \le _X F(B \cap C) \displaystyle \oslash F(C) = F(B|C)\).
- Pl2: first w.r.t. \(\mathbf {0},\mathbf {1}\). \(F(\varOmega |A)= F(\varOmega \cap A) \displaystyle \oslash F(A) = F(A \cap A) \displaystyle \oslash F(A)= F(A|A)= F(\varOmega ) = \mathbf {1}\) by success. \(F(\emptyset |A) = F(\emptyset ) = \mathbf {0}\) (by same \(\mathbf {0}\)). By Pl1 (and for \(C\in {\mathcal {A}}\), s.t. \(F(C)> \mathbf {0}\)) we have for all \(B \in {\mathcal {A}}\), \(\mathbf {0}\le F_C(B) \le \mathbf {1}\), so that \(\mathbf {0}, \mathbf {1}\) are the minimum and maximum of \(X'\), which thereby is pointed and \(\mathbf {0}_C = \mathbf {0}, \mathbf {1}_C = \mathbf {1}\). This proves Pl2.
- \(X \sqsubseteq X'\): \(X \subseteq X'\) by definition, \(X'\) is pointed by the above remark and \(X'\) has the same \(\mathbf {0}\) and \(\mathbf {1}\) as X. Finally, \(\le '\) restricted to X is \(\le \) by the above definition of \(X'\).
- \(F=F(.|\varOmega )\) and \(F(\varOmega )=F(\varOmega |\varOmega )> \mathbf {0}\) by spreading. Thus, by the above, \(F\) can also be seen as a plausibility over \(X'\). \(\square \)
Theorem 6
\(S=\{\langle \varOmega , X_A, F_A \rangle :A \in {\mathcal {A}}, F(A)>_{\varOmega } \mathbf {0}_{\varOmega }\}\) with \(F= F(.|\varOmega )\) and \(X=X_{\varOmega }\) is a proper conditional plausibility space satisfying C3.1 iff \(F\) is weakly \(\displaystyle \oslash \)-conditionalisable. S satisfies additionally C3.2 iff \(F\) additionally satisfies coincidence. (For C3.1, C3.2, see Theorem 3.)
Proof
\((\Rightarrow )\): Since we restrict to \(F_A\)’s such that \(F(A)>\mathbf {0}\), by standardness all \(X_A\) have the same \(\mathbf {1}\) and \(\mathbf {0}\). Denote \({\mathrm{Im}}(F)= \text{ Im }[F(.|\varOmega )]\) and consider \(X' = \bigcup _{F_A \in S} \text{ Im }(F_A)\).

- 0. \(F=F(.|\varOmega )\) by convention and \(F(.|\varOmega ) \in S\) is spreading by assumption; thus \(F\) is a spreading plausibility function over \(X=X_{\varOmega }\).
- 1. For \(\langle x,y \rangle \in {\mathrm{Im}}(F)^2\) \(F\)-consistent, i.e., such that there are A, C with \(x=F(A \cap C)\) and \(y = F(C) > \mathbf {0}\), define \(x \displaystyle \oslash y = F(A|C)\). Then \(\displaystyle \oslash \) is a partial function \(X^2 \longrightarrow X'\) (using C3.1 twice), defined (and total) over consistent couples in \({\mathrm{Im}}(F)\) (by the above definition) and \(X,X_A \sqsubseteq X'\) (by definition of \(X'\)). \(\displaystyle \oslash \) is monotone in the first argument by C3.1.
- 2. As a consequence, \(F(A|C)\) is defined for C \(F\)-consistent and \(F(A|C) = F(A \cap C) \displaystyle \oslash F(C)\).
- 3. Weak vacuity holds by convention.
- 4. Success: since \(F(\varOmega A) =F(AA)\) and \(F(A)>\mathbf {0}\), we have \(F(A|A) = F(\varOmega |A) =F(\varOmega |\varOmega ) = F(\varOmega )\) (C3.1 twice).
- 5. \(F(\emptyset |\varOmega ) = \mathbf {0}_{\varOmega } = \mathbf {0}=\mathbf {0}_A = F(\emptyset |A)\) by standardness.
\((\Leftarrow )\):

- 1. Partial conditional plausibility space: consider \(S=\{\langle \varOmega , X_A, F_A \rangle : F(A)> \mathbf {0}, A \in {\mathcal {A}}\}\). By weak vacuity \(F= F(.|\varOmega )\), which is assumed spreading, and thus \(F\in S\). Additionally, \(F_A\) and \(F\) are plausibilities over \(X'\) for A \(F\)-consistent (see Lemma 7) and we can take \(X_A=X' = X_{\varOmega }\).
- 2. Standard: follows from \(X_A=X'\).
- 3. C3.1: follows from monotonicity in the first argument of \(\displaystyle \oslash \) and weak vacuity.

C3.2 implies that we can extend \(\displaystyle \oslash \) such that \(F(A|BC) = F(AB|C) \displaystyle \oslash F(B|C)\), for \(F(B|C)> \mathbf {0}\) and for C \(F\)-consistent. Then BC is \(F\)-consistent: \(F(B|C)>\mathbf {0}\) is defined and thus \(F(BC) \displaystyle \oslash F(C)>\mathbf {0}\). But then \(F(BC)> \mathbf {0}\); otherwise, i.e., if \(F(BC) = \mathbf {0}\), we would have \(F(B|C)= \mathbf {0}\), since \(\mathbf {0}\) is left absorbing for \(\displaystyle \oslash \). Thus \(F(.|BC)\) is in effect defined. Consider now an \(F\)-consistent C and an \(F_C\)-consistent B. Then by the above equation we obtain coincidence:

$$\begin{aligned} (F_C)_B(A)= F(AB|C) \displaystyle \oslash F(B|C) = F(A|BC)= F_{BC}(A) \end{aligned}$$

Conversely, suppose coincidence holds. Then for the above C and B, we obtain

$$\begin{aligned} F(AB|C) \displaystyle \oslash F(B|C) = (F_C)_B(A) = F_{BC}(A)= F(A|BC) \end{aligned}$$

Thus for such C and B, we obtain the implication in C3.2, since \(\displaystyle \oslash \) is a function. \(\square \)
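In the probabilistic instance, coincidence is the familiar fact that iterated conditionalisation agrees with conditionalising on the intersection, \((P_C)_B = P_{B \cap C}\). A small numerical check (the worlds and weights are invented):

```python
def cond(P, A):
    """Conditionalise a probability mass function P on event A (P(A) > 0)."""
    pa = sum(P[w] for w in A)
    return {w: (p / pa if w in A else 0.0) for w, p in P.items()}

P = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
B, C = {"w1", "w2"}, {"w1", "w3"}     # B & C = {"w1"}

lhs = cond(cond(P, C), B)             # (P_C)_B
rhs = cond(P, B & C)                  # P_{B & C}
assert all(abs(lhs[w] - rhs[w]) < 1e-12 for w in P)
```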
1.2 The Algebraic Structure of \(\oplus \) and \(\displaystyle \oslash \)
Lemma 8
Let \(F\) be a separable plausibility over the algebra \({\mathcal {A}}\). Then \(\oplus \) forms an ortho-complemented category over \({\mathrm{Im}}(F)\), with \(\mathbf {1}\) being \(\oplus \)-absorbing (as in Lemma 1).
Proof
1. \(\mathbf {0}\) is the (unique) \(\oplus \)-identity over \({\mathrm{Im}}(F)\): Let \(b \in {\mathrm{Im}}(F)\subseteq X\). Thus there is \(B \in {\mathcal {A}}\) with \(F(B)=b\).

$$\begin{aligned} \begin{array}{rlr} \mathbf {0}\oplus b &{} = F(\emptyset ) \oplus F(B) &{} \text{(Pl2, } \text{ assumption) } \\ &{} = F(\emptyset \cup B) &{} \text{(separable) }\\ &{} = F(B) &{} \text{(algebra: } \emptyset \text{ identity } \text{ for } \cup \text{) }\\ &{} = b &{} \text{(assumption) } \end{array} \end{aligned}$$

The same reasoning can be applied to obtain the left identity \(b \oplus \mathbf {0}= b\). Uniqueness: Suppose (for reductio) there is \(\mathbf {0}' \in {\mathrm{Im}}(F)\) such that \(\mathbf {0}' \ne \mathbf {0}\) and which also satisfies \(\mathbf {0}' \oplus b = b\) and \(b \oplus \mathbf {0}' = b\) for all \(b \in {\mathrm{Im}}(F)\). Then in particular \(\mathbf {0}\oplus \mathbf {0}' = \mathbf {0}\) as well as \(\mathbf {0}\oplus \mathbf {0}' =\mathbf {0}'\). Thus \(\mathbf {0}' = \mathbf {0}\). Contradiction.

2. Existence of \(\oplus \)-complements over \({\mathrm{Im}}(F)\): Let \(a \in {\mathrm{Im}}(F)\subseteq X\), then there is \(A \in {\mathcal {A}}\), such that \(F(A)=a\). Thus there is \(\overline{a} \in X\) such that \(F(\overline{A}) = \overline{a}\) and \(a, \overline{a}\) are \(\oplus \)-complements of each other:

$$\begin{aligned} \begin{array}{rlr} a \oplus \overline{a} &{} = F(A) \oplus F(\overline{A}) &{} \text{(assumption) }\\ &{} = F( A\cup \overline{A}) &{} \text{(separable) }\\ &{} = F(\varOmega ) &{} \text{(algebra) }\\ &{} = \mathbf {1}&{} \text{(Pl2) } \end{array} \end{aligned}$$

3. Ortho-complements over \({\mathrm{Im}}(F)\): Let \(a \in {\mathrm{Im}}(F)\subseteq X\). Then there is \(A \in {\mathcal {A}}\) such that \(F(A)=a\). And \(F(\overline{A}) = \overline{a}\) as well as \(F(\overline{\overline{A}})=\overline{\overline{a}}\), and from algebra:

$$\begin{aligned} \overline{\overline{a}}=F(\overline{\overline{A}}) = F(A)=a \end{aligned}$$

4. \(\oplus \) is associative over disjoint triples in \({\mathrm{Im}}(F)\): Let \(a,b,c \in {\mathrm{Im}}(F)\subseteq X\) be a disjoint triple. Thus there are \(A,B,C \in {\mathcal {A}}\), such that \(F(A) =a, F(B)=b, F(C) =c\) and A, B, C are mutually disjoint. Therefore \(A \cup B\) and C are disjoint. Similarly A and \(B \cup C\) are disjoint. Thus:

$$\begin{aligned} \begin{array}{rlr} (a \oplus b) \oplus c &{} =(F(A) \oplus F(B)) \oplus F(C) &{} \text{(assumption) }\\ &{} = F(A \cup B) \oplus F(C) &{} \text{(separable) }\\ &{} = F((A \cup B) \cup C) &{} \text{(separable) }\\ &{} = F(A \cup (B \cup C)) &{} \text{(algebra: } \cup \text{ associative) }\\ &{} = F(A) \oplus F( B \cup C) &{} \text{(separable) }\\ &{} = F(A) \oplus (F( B) \oplus F(C)) &{} \text{(separable) }\\ &{} = a \oplus (b \oplus c) &{} \text{(assumption) } \end{array} \end{aligned}$$

5. \(\oplus \) is commutative for disjoint couples over \({\mathrm{Im}}(F)\): Let \(a,b \in {\mathrm{Im}}(F)\subseteq X\) be a disjoint couple. Thus there are disjoint \(A,B \in {\mathcal {A}}\), such that \(F(A)=a\) and \(F(B)=b\). Thus:

$$\begin{aligned} \begin{array}{rlr} a \oplus b &{} =F(A) \oplus F(B) &{} \text{(assumption) }\\ &{} = F(A \cup B) &{} \text{(separable) }\\ &{} = F(B \cup A) &{} \text{(algebra: } \cup \text{ commutes) }\\ &{} = F(B) \oplus F(A) &{} \text{(separable) }\\ &{} = b \oplus a &{} \text{(assumption) } \end{array} \end{aligned}$$

6. \(\mathbf {1}\) is \(\oplus \)-absorbing over disjoint couples of \({\mathrm{Im}}(F)\): Assume \(\langle a, \mathbf {1}\rangle \in X^2\) is a disjoint couple. Then there are disjoint \(A,B \in {\mathcal {A}}\) such that \(F(A)=a\), \(F(B)=\mathbf {1}\) (note B need not be \(\varOmega \) and therefore A need not be \(\emptyset \)). Thus:

$$\begin{aligned} \begin{array}{rlr} \mathbf {1}&{}= F(B) &{} \text{(assumption) }\\ {} &{}\le _X F(A \cup B) &{} \text{(Pl1) }\\ &{}\le _X F(\varOmega ) &{} \text{(Pl1) }\\ {} &{}= \mathbf {1}&{} \text{(Pl2) } \end{array} \end{aligned}$$

\(a \oplus \mathbf {1}= F(A) \oplus F(B) = F(A \cup B) = \mathbf {1}\) follows by antisymmetry of \(\le _X\). \(\square \)
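For the probabilistic instance of Lemma 8, \(\oplus \) is \(+\) on the values of disjoint events; the identity, complement, involution, associativity and commutativity laws can be checked exactly with rational arithmetic (the sample values are arbitrary):

```python
from fractions import Fraction as Fr
from itertools import product

# Probabilistic instance: oplus is + on the values of disjoint events.
vals = [Fr(0), Fr(1, 10), Fr(3, 10), Fr(6, 10)]

for a in vals:
    assert a + 0 == a                 # 0 is the oplus-identity
    assert a + (1 - a) == 1           # complement law: a + (1 - a) = 1
    assert 1 - (1 - a) == a           # involution of the complement

for a, b, c in product(vals, repeat=3):
    if a + b + c <= 1:                # only disjoint triples compose
        assert (a + b) + c == a + (b + c)   # associativity
        assert a + b == b + a               # commutativity
```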
Lemma 9
If \(F\) is a weakly \(\displaystyle \oslash \)-conditionalisable plausibility, then for \(a \in {\mathrm{Im}}(F)\), \(\displaystyle \oslash \) satisfies the properties (1–4) of Lemma 2 (\(\mathbf {1}\)\(\displaystyle \oslash \)-right Identity, \(\displaystyle \oslash \)-Inverses, \(\mathbf {1}\) unique, \(\mathbf {0}\) left absorbing). If \(F\) is \(\displaystyle \oslash \)-conditionalisable, then \(\displaystyle \oslash \) also satisfies (5) \(\displaystyle \oslash \)-cancelling.
Proof
-
1.
\(\mathbf {1}\)\(\displaystyle \oslash \)-right identity: Let \(a \in {\mathrm{Im}}(F)\). Thus there is \(A \in {\mathcal {A}}\) such that \(a=F(A) =F(A | \varOmega ) = F(A \cap \varOmega ) \displaystyle \oslash F(\varOmega )= F(A) \displaystyle \oslash \mathbf {1}=a \displaystyle \oslash \mathbf {1}\) (by weak vacuity and \(A \cap \varOmega =A\)).
-
2.
\(\displaystyle \oslash \)-Inverses for\(F\)-consistent\(a \in {\mathrm{Im}}(F)\): Let \(a \in {\mathrm{Im}}(F)\)\(F\)-consistent. Thus there is \(A \in {\mathcal {A}}\) such that \(a = F(A)>_X \mathbf {0}\). But \(F_A\) has the same \(\mathbf {1}\) as \(F\) (from success). Thus \(\mathbf {1}= F(\varOmega )= F(\varOmega |A)=F(A \cap \varOmega ) \displaystyle \oslash F(A) = F(A) \displaystyle \oslash F(A)\).
-
3.
\(\mathbf {1}\)Uniqueness: Assume (for reductio) there is another \(\mathbf {1}' \in {\mathrm{Im}}(F)\) such that \(a \displaystyle \oslash \mathbf {1}' = a\) for all \(a \in {\mathrm{Im}}(F)\). But then \(\mathbf {1}' \displaystyle \oslash \mathbf {1}' = \mathbf {1}'\). Yet \(\mathbf {1}' \displaystyle \oslash \mathbf {1}' = \mathbf {1}\) by Inverses. Thus \(\mathbf {1}' = \mathbf {1}\).
4. \(\mathbf {0}\) is left \(\displaystyle \oslash \)-absorbing for \(F\)-consistent \(a\in {\mathrm{Im}}(F)\): Let \(a \in {\mathrm{Im}}(F)\) be \(F\)-consistent. Thus there is \(A \in {\mathcal {A}}\) such that \(F(A)=a\) and \(F(.|A)\) is defined (\(F\)-consistency). Hence \( \mathbf {0}\displaystyle \oslash a = F(\emptyset ) \displaystyle \oslash F(A) = F(\emptyset \cap A) \displaystyle \oslash F(A)= F(\emptyset | A) = F(\emptyset ) = \mathbf {0}\) (by Pl2, conditionalisation, closure and the fact that \(F\) and \(F(.|A)\) have the same \(\mathbf {0}\)).
5. \(\displaystyle \oslash \)-cancelling: This can be proven from coincidence; see the if-and Lemma 11. Since it is not directly required here, we merely state it. \(\square \)
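The properties of Lemma 9 can be sanity-checked in the prototypical quasi probability, an ordinary probability measure, where \(\displaystyle \oslash \) is division restricted to \(F\)-consistent pairs. This is only an illustrative instance of the abstract operation; the function `oslash` below is a hypothetical model, not the paper's general \(\displaystyle \oslash \).

```python
# Minimal sketch: model ⊘ as division over probability values, an
# assumed instance of the paper's abstract conditional operation.
from fractions import Fraction as Fr

def oslash(a, b):
    """a ⊘ b, defined only when b > 0 and a <= b (F-consistent pair)."""
    if b == 0 or a > b:
        raise ValueError("a ⊘ b undefined for this pair")
    return a / b

for a in (Fr(0), Fr(1, 4), Fr(2, 3), Fr(1)):
    assert oslash(a, Fr(1)) == a          # (1) 1 is a ⊘-right identity
    if a > 0:
        assert oslash(a, a) == Fr(1)      # (2) ⊘-inverses: a ⊘ a = 1
        assert oslash(Fr(0), a) == Fr(0)  # (4) 0 is left ⊘-absorbing
```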
1.3 Pseudo-Inverse
Given \(\displaystyle \oslash \), let us define \(x \otimes b= a\) whenever \(a \displaystyle \oslash b\) is defined and \(a\displaystyle \oslash b =x\). Then \(\otimes \) is a partial function \({\mathrm{Im}}(F)(\displaystyle \oslash ) \times {\mathrm{Im}}(F)\longrightarrow {\mathrm{Im}}(F)\), defined for all \(x \in {\mathrm{Im}}(F)(\displaystyle \oslash )\) and \(b \in {\mathrm{Im}}(F)\) such that \(b=F(B)\), B \(F\)-consistent, and \(x= F(A|B)\) for some A; in that case \(x \otimes b = F(A \cap B)\). Intuitively, \(\otimes \) is thus defined over conditional plausibility values \(x=F(A|B)\) and corresponding plausibility values \(b=F(B)\). Note that \(\otimes \) then satisfies the fundamental equation for an \(F\)-consistent couple \(\langle a,b\rangle \):
$$\begin{aligned} (a \displaystyle \oslash b) \otimes b = a. \end{aligned}$$
In this sense, \(\otimes \) partially inverts \(\displaystyle \oslash \). From this, we obtain
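Continuing the division model (an assumed instance of the abstract operations, with \(\otimes \) as multiplication), the fundamental equation and the identities of Lemma 10 can be checked as follows:

```python
# Sketch of the pseudo-inverse ⊗ in the division model: here x ⊗ b is
# plain multiplication, assumed to play the role of the abstract ⊗.
from fractions import Fraction as Fr

def oslash(a, b):
    """a ⊘ b, defined only when b > 0 and a <= b."""
    if b == 0 or a > b:
        raise ValueError("undefined")
    return a / b

def otimes(x, b):
    """x ⊗ b: the pseudo-inverse of ⊘ (multiplication in this model)."""
    return x * b

a, b = Fr(1, 6), Fr(1, 2)                 # a = F(A ∩ B), b = F(B)
assert otimes(oslash(a, b), b) == a       # fundamental equation

x = Fr(1, 3)
assert otimes(Fr(1), x) == x == otimes(x, Fr(1))  # Lemma 10.1
assert otimes(Fr(0), x) == Fr(0)                  # Lemma 10.2
```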
Lemma 10
(pseudo-inverses) Let \(F\) be a \(\displaystyle \oslash \)-conditionalisable plausibility. For \(F\)-consistent \(x\in {\mathrm{Im}}(F)\) we have
1. \(\mathbf {1}\otimes x = x = x \otimes \mathbf {1}\).
2. \(\mathbf {0}\otimes x = \mathbf {0}\).
Proof
Since \(x \in {\mathrm{Im}}(F)\) is \(F\)-consistent, there is \(A \in {\mathcal {A}}\) such that \(F(A)=x\) and A is \(F\)-consistent, so that \(F_A\) is defined.
1. \(\mathbf {1}\otimes x\) is defined for \(F(A)=x\) and A \(F\)-consistent: Choose \(B = \varOmega \). Then \(z=F(A \cap B) = F(A \cap \varOmega ) = F(A)=x\). Thus we can write \(\mathbf {1}= x \displaystyle \oslash x = z \displaystyle \oslash x\) (\(\displaystyle \oslash \)-inverses). Hence \(\mathbf {1}\otimes x = (x \displaystyle \oslash x) \otimes x = (z \displaystyle \oslash x) \otimes x = z = x\) by the fundamental equation. Similarly, \(x \otimes \mathbf {1}\) is defined for \(x \in {\mathrm{Im}}(F)\): Let \(F(A)=x \in {\mathrm{Im}}(F)\). Then \(x = x \displaystyle \oslash \mathbf {1}\) (by \(\mathbf {1}\) right \(\displaystyle \oslash \)-identity) and \(x = F(A|\varOmega )\). Hence \(x \otimes \mathbf {1}= x\) by the fundamental equation.
2. \(\mathbf {0}\otimes x\) is defined for A \(F\)-consistent and \(F(A)=x\): Choose \(B = \overline{A}\). Then \(A \cap B = \emptyset \) and thus \(\mathbf {0}= F(A \cap B) = z \) (Pl2). Hence we can rewrite \(\mathbf {0}= z \displaystyle \oslash x\). Thus \(\mathbf {0}\otimes x = (z \displaystyle \oslash x) \otimes x = z = \mathbf {0}\) by the fundamental equation. \(\square \)
1.4 Triviality
In all that follows, we assume \(F\) to be a quasi probability. The following lemma proves the plausibilistic analogue of what Douven called the Generalised Stalnaker (hypo)thesis:
Lemma 11
(if-and) Let \(F\) be a \(\displaystyle \oslash \)-conditionalisable plausibility and \(A,B \in {\mathcal {A}}\) such that \(A \cap B\) is \(F\)-consistent, and assume \({\mathrm{S}}(F_B)\) (as in Lemma 3). Then for all \(C \in {\mathcal {A}}\):
$$\begin{aligned} F(A > C \mid B) = F(C \mid A \cap B). \end{aligned}$$
Proof
Let \(F\) be a \(\displaystyle \oslash \)-conditionalisable plausibility, let \(A \cap B\) be \(F\)-consistent (i.e., \(F(A\cap B) >_X \mathbf {0}\)) and assume \(F_B\) satisfies the Stalnaker Thesis. Then B as well as A are \(F\)-consistent (Pl1). Thus \(F_B\) and \(F_{A \cap B}\) are defined, the Stalnaker Thesis makes sense for \(F_B\), and both sides of the equation are defined. Furthermore, A remains \(F_B\)-consistent, since \(\displaystyle \oslash \) is monotone in the first argument. In effect, consider \(F_B(A)= F(A \cap B) \displaystyle \oslash F(B)\). We have \(F(A \cap B) >_X \mathbf {0}= F(\emptyset \cap B)\). By monotonicity in the first argument, \(F(A \cap B) \displaystyle \oslash F(B) >_X F(\emptyset \cap B) \displaystyle \oslash F(B) = \mathbf {0}\). Thus A remains \(F_B\)-consistent. Hence coincidence applies, i.e., \((F_B)_A = F_{A \cap B}\). Thus
$$\begin{aligned} \begin{array}{rlr} F(A> C \mid B) &{}= F_B(A > C) = F_B(C \mid A) &{} ({\mathrm{S}}(F_B))\\ &{}= F_B(C \cap A) \displaystyle \oslash F_B(A) &{} \text{(conditionalisation) }\\ &{}= (F(C \cap A \cap B) \displaystyle \oslash F(B)) \displaystyle \oslash (F(A \cap B) \displaystyle \oslash F(B)) &{} \text{(conditionalisation) }\\ &{}= F(C \cap A \cap B) \displaystyle \oslash F(A \cap B) &{} \text{(coincidence) }\\ &{}= F(C \mid A \cap B). &{} \text{(conditionalisation) } \end{array} \end{aligned}$$
Note that thereby (lines 2 to 4) we have indirectly proven the cancelling property: \((x \displaystyle \oslash b) \displaystyle \oslash (y \displaystyle \oslash b) = x \displaystyle \oslash y\). For this, use coincidence and \(x = F(A \cap B \cap C)\), \(y = F(A \cap B)\) and \(b = F(B)\). \(\square \)
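The cancelling property extracted from this proof, \((x \displaystyle \oslash b) \displaystyle \oslash (y \displaystyle \oslash b) = x \displaystyle \oslash y\), can again be illustrated in the division model (an assumed instance of the abstract \(\displaystyle \oslash \), with toy values):

```python
# Check cancelling in the division model for a toy choice of
# x = F(A∩B∩C), y = F(A∩B), b = F(B) with x <= y <= b.
from fractions import Fraction as Fr

x, y, b = Fr(1, 12), Fr(1, 4), Fr(1, 2)
lhs = (x / b) / (y / b)   # (x ⊘ b) ⊘ (y ⊘ b)
rhs = x / y               # x ⊘ y
assert lhs == rhs == Fr(1, 3)
```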
Lemma 12
(accept) Let \(F\) be a \(\displaystyle \oslash \)-conditionalisable plausibility and \(A,C \in {\mathcal {A}}\) such that \(A \cap C\) is \(F\)-consistent, and assume \({\mathrm{S}}(F_C)\) (as in Lemma 4). Then
$$\begin{aligned} F(A > C \mid C) = \mathbf {1}. \end{aligned}$$
Proof
Let \(F, A,C\) be as described; we may thus apply the if-and Lemma 11. Note that \(F(A> C|C)= F((A > C) \cap C) \displaystyle \oslash F(C)\), so that the left hand side of the above equation is defined. Note also that C and \(A \cap C\) are \(F\)-consistent, so that \(F(.|A \cap C)\) and \(F(.|C)\) are defined. Since \(F\) is \(\displaystyle \oslash \)-conditionalisable, the law of \(\displaystyle \oslash \)-inverses holds and \(\mathbf {1}\) is the left \(\otimes \)-identity. Thus
$$\begin{aligned} F(A> C \mid C) = F(C \mid A \cap C) = F(C \cap A \cap C) \displaystyle \oslash F(A \cap C) = F(A \cap C) \displaystyle \oslash F(A \cap C) = \mathbf {1}\end{aligned}$$
by the if-and Lemma 11 and \(\displaystyle \oslash \)-inverses.
\(\square \)
Lemma 13
(reject) Let \(F\) be a \(\displaystyle \oslash \)-conditionalisable plausibility and \(A,\overline{C} \in {\mathcal {A}}\) such that \(A \cap \overline{C}\) is \(F\)-consistent, and assume \({\mathrm{S}}(F_{\overline{C}})\) (as in Lemma 5). Then
$$\begin{aligned} F(A > C \mid \overline{C}) = \mathbf {0}. \end{aligned}$$
Proof
Let \(F, A, \overline{C}\) be as described; we may thus apply the if-and Lemma 11. Note that \(F(A> C|\overline{C})= F((A > C) \cap \overline{C}) \displaystyle \oslash F(\overline{C})\), so that the left hand side of the above equation is defined. Note also that \(\overline{C}\) and \(A \cap \overline{C}\) are \(F\)-consistent, hence \(F(.|A \cap \overline{C})\) and \(F(.|\overline{C})\) are defined and by conditionalisability have the same \(\mathbf {0}\) as \(F\), so that \(\mathbf {0}\) is left \(\displaystyle \oslash \)-absorbing. Finally, \(\mathbf {0}\) is left \(\otimes \)-absorbing. Thus
$$\begin{aligned} F(A > C \mid \overline{C}) = F(C \mid A \cap \overline{C}) = F(C \cap A \cap \overline{C}) \displaystyle \oslash F(A \cap \overline{C}) = \mathbf {0}\displaystyle \oslash F(A \cap \overline{C}) = \mathbf {0}\end{aligned}$$
by the if-and Lemma 11, Pl2 and left \(\displaystyle \oslash \)-absorption of \(\mathbf {0}\).
\(\square \)
Lemma 14
(totality) Let \(F\) be a quasi probability (\(\oplus \)-separable and \(\displaystyle \oslash \)-conditionalisable plausibility), let \(A \in {\mathcal {A}}\) and let \(C,\overline{C}\) be \(F\)-consistent (as in Lemma 6). Then
$$\begin{aligned} F(A) = (F(A \mid C) \otimes F(C)) \oplus (F(A \mid \overline{C}) \otimes F(\overline{C})). \end{aligned}$$
Proof
Let \(F\) and \(A,C \in {\mathcal {A}}\) be as claimed and \(\otimes \) as previously defined. Setting \(a=F(A \cap C)\), \(a' = F(A \cap \overline{C})\), \(c = F(C), c' = F(\overline{C})\), we obtain \((\sharp )\): \(F(A \cap C)= (F(A \cap C) \displaystyle \oslash F(C)) \otimes F(C) = F(A|C) \otimes F(C)\) and \(F(A\cap \overline{C})= (F(A \cap \overline{C}) \displaystyle \oslash F(\overline{C})) \otimes F(\overline{C}) = F(A|\overline{C}) \otimes F(\overline{C})\). Then
$$\begin{aligned} F(A) = F(A \cap C) \oplus F(A \cap \overline{C}) = (F(A \mid C) \otimes F(C)) \oplus (F(A \mid \overline{C}) \otimes F(\overline{C})) \end{aligned}$$
by separability and \((\sharp )\).
\(\square \)
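In the probabilistic model (with \(\otimes \) as multiplication and \(\oplus \) as addition, assumed instances of the abstract operations), totality is just the law of total probability. A toy measure over the four cells of the partition generated by A and C illustrates it:

```python
# Toy joint measure over the partition cells generated by A and C;
# the cell masses are an assumed example assignment.
from fractions import Fraction as Fr

F = {('A', 'C'): Fr(1, 8), ('A', 'nC'): Fr(1, 4),
     ('nA', 'C'): Fr(3, 8), ('nA', 'nC'): Fr(1, 4)}

F_A  = F[('A', 'C')] + F[('A', 'nC')]
F_C  = F[('A', 'C')] + F[('nA', 'C')]
F_nC = 1 - F_C

cond_A_C  = F[('A', 'C')] / F_C    # F(A|C)  = F(A ∩ C)  ⊘ F(C)
cond_A_nC = F[('A', 'nC')] / F_nC  # F(A|C̄) = F(A ∩ C̄) ⊘ F(C̄)

# totality: F(A) = (F(A|C) ⊗ F(C)) ⊕ (F(A|C̄) ⊗ F(C̄))
assert F_A == cond_A_C * F_C + cond_A_nC * F_nC
```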
Theorem 7
(triviality) Assume \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\) for the class of plausibilities \(\mathbb {C}\). Let \(F\) be a quasi probability (\(\displaystyle \oslash \)-conditionalisable and \(\oplus \)-separable plausibility) in \(\mathbb {C}\). Additionally, let \(A,C \in {\mathcal {A}}\) such that \(F(A\cap C), F(A \cap \overline{C})>_X \mathbf {0}\) (as in Theorem 4). Then
$$\begin{aligned} F(A > C) = F(C). \end{aligned}$$
Proof
Since \(A \cap C\) and \(A \cap \overline{C}\) are \(F\)-consistent, \(A,C,\overline{C}\) are \(F\)-consistent. Thus, by separability and conditionalisability, the totality Lemma 14 applies for the partition \(\{C, \overline{C}\}\). Additionally, since \(F\) is a \(\displaystyle \oslash \)-conditionalisable plausibility, A is \(F_C\)-consistent and \(F_{\overline{C}}\)-consistent (see the previous argument). Thus \(F_C\) as well as \(F_{\overline{C}}\) are defined and by \({\mathrm{C}}(\mathbb {C})\) they are in \(\mathbb {C}\). By \({\mathrm{S}}(\mathbb {C})\), the Stalnaker Thesis holds for them. Thus we also have the conditions of the accept and reject Lemmas 12, 13. Hence:
$$\begin{aligned} F(A> C) = (F(A> C \mid C) \otimes F(C)) \oplus (F(A > C \mid \overline{C}) \otimes F(\overline{C})) = (\mathbf {1}\otimes F(C)) \oplus (\mathbf {0}\otimes F(\overline{C})) = F(C) \oplus \mathbf {0}= F(C). \end{aligned}$$
\(\square \)
Corollary 5
For any quasi probability \(F\in \mathbb {C}\), \(A,C \in {\mathcal {A}}\) such that \(F(AC), F(A \overline{C})>_X \mathbf {0}\): \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\) imply \(F(C|A) =F(A > C)=F(C)\).
Call a plausibility function \(F\) trivial, if there are no three disjoint propositions \(C,D,E \in {\mathcal {A}}\) that are \(F\)-possible (that is, \(F(C), F(D), F(E)>_X \mathbf {0}\)). We now prove Triviality (Corollary 4 in the text).
Corollary 6
(Lewis-like triviality) There is no class \(\mathbb {C}\) of quasi probability functions, such that \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\), unless all \(F\in \mathbb {C}\) are trivial.
Proof
Assume there is a class \(\mathbb {C}\) of quasi probability functions such that \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\). Assume (for reductio) that \(\mathbb {C}\) contains a non-trivial quasi probability \(F\). Then there are disjoint \(C,D,E \in {\mathcal {A}}\) such that \(F(C), F(D), F(E) >_X \mathbf {0}\). Consider \(A = C \cup D\). Thus \(AC = C\) and \(A \overline{C} =D\). Hence \(F(AC),F(A\overline{C})>_X \mathbf {0}\), as well as \(F(A), F(C), F(\overline{C})>_X \mathbf {0}\) (Pl1).
Since \(F\in \mathbb {C}\) and \(A,C \in {\mathcal {A}}\), we have \(A > C \in {\mathcal {A}}\) (from \({\mathrm{S}}(\mathbb {C})\) and because \(F\) is only defined over elements of \({\mathcal {A}}\)). Thus, by Corollary 5 to the triviality Theorem 7:
$$\begin{aligned} F(C \mid A) = F(A > C) = F(C). \end{aligned}$$
However, note that \(F(C|A) = F(CA) \displaystyle \oslash F(A) = F(C) \displaystyle \oslash (F(C) \oplus F(D)) \ne F(C)\). Otherwise \(F(C) \oplus F(D) =\mathbf {1}\) (uniqueness of \(\mathbf {1}\)). But then \((F(C) \oplus F(D)) \oplus F(E) = \mathbf {1}\) and thus \(\mathbf {1}\oplus F(E) = \mathbf {1}\) and therefore \(F(E) = \mathbf {0}\) (uniqueness of \(\mathbf {0}\)). But then it is not possible that \(F(E)>_X \mathbf {0}\). This contradicts the assumption. \(\square \)
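The reductio can be instantiated numerically: for an ordinary probability measure (a quasi probability) with three disjoint possible propositions, the conclusion \(F(C|A) = F(C)\) of Corollary 5 fails, so no such non-trivial \(F\) can satisfy \({\mathrm{S}}(\mathbb {C})\) and \({\mathrm{C}}(\mathbb {C})\). The masses below are an assumed toy assignment:

```python
# Non-trivial toy measure: three disjoint F-possible propositions C, D, E.
from fractions import Fraction as Fr

F_C, F_D, F_E = Fr(1, 2), Fr(1, 3), Fr(1, 6)
assert F_C > 0 and F_D > 0 and F_E > 0 and F_C + F_D + F_E == 1

# With A = C ∪ D we have F(AC) = F(C) and F(AC̄) = F(D), both > 0.
F_A = F_C + F_D
cond_C_A = F_C / F_A   # F(C|A) = F(C) ⊘ (F(C) ⊕ F(D))

# Corollary 5 would force F(C|A) = F(C); the measure refutes that.
assert cond_C_A == Fr(3, 5) != F_C
```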
Comparative Remarks
The main differences between the approach of Morgan and Mares (1995) and Morgan (1999) and the one presented here are the following. In Morgan and Mares (1995), \(P(C| \varGamma )\) is a previously understood function, defined for a sentence C and a set \(\varGamma \) of sentences. The generalised Stalnaker Thesis is directly assumed, and instead of exploiting the equality \(P(C) = P(C|A)\) to prove triviality, they exploit the inequality \(P(C) \le P(C|A)\) (as I did in Theorem 5). In their framework this is the inequality \(P(C|\varGamma ) \le P(C| \varGamma \cup \{A\})\) [theorem 2.4 in Morgan and Mares (1995) and theorem V.6 in Morgan (1999)]. They also prove a slightly different triviality result, namely \(P(A|\varGamma )= P(B|\varGamma )\) for all \(A,B, \varGamma \) such that \(P(A|\varGamma ), P(B|\varGamma )\) are not maximal [theorem 3.1 in Morgan and Mares (1995) and theorem V.7 in Morgan (1999)]. The framework targets Adams' Thesis and is built to allow formulating a non-standard probabilistic semantics. The logical relation between their assumptions and the assumptions made here is not clear cut, because of the difference in the result and the difference in the form of the conditional plausibility.
Roughly, the differences are as follows. Their assumptions (C1–C5) (Footnote 56) used to prove triviality in Morgan and Mares (1995) are on the one hand 'stronger' and on the other hand 'weaker' than the assumptions made here. In particular, their C1 assumes a total order, C2 is a success postulate, from C3 a version of \(\displaystyle \oslash \)-conditionalisability can be arrived at, C4 is a version of the generalised Stalnaker thesis, and C5 is a weak assumption on negation. The assumptions are 'stronger' in the sense that neither totality, nor a form of C3, nor the generalised Stalnaker thesis is assumed here. Note also that if we consider their functions up to logical equivalence, then C1–C5 imply a version of \(\displaystyle \oslash \)-conditionalisability. On the other hand, C3 in particular cannot be derived from the assumptions made here: it would correspond to the cancelling property, but for arguments for which this property is not necessarily defined. The assumptions are 'weaker' because their P may treat logically equivalent sentences (or sets of sentences) differently, and C5 is a weaker assumption than separability, i.e., it can be derived from the latter.
Similar things can still be said about the weaker assumptions in Morgan (1999): only a transitive relation is assumed (CCP1), the success postulate is weakened to (CCP2), and the negation postulate likewise to (CCP5). However, the generalised Stalnaker thesis is still directly assumed with (CCP4), and a weak version of C3 is present in (CCP3), which is still not derivable from the assumptions made here. Note also that the triviality obtained from replacing (CCP4) by a simple Stalnaker Thesis (CCP4') still assumes a special form of conditionalisation (CCP0), which is built on the general iteration property \(P_{\varSigma }(A| \varGamma )= P(A | \varGamma \cup \varSigma )\) and which I assume only in special cases (in the property of coincidence).
Raidl, E. Lewis’ Triviality for Quasi Probabilities. J of Log Lang and Inf 28, 515–549 (2019). https://doi.org/10.1007/s10849-019-09289-0