Appendix: Proofs for Section 2.1
Proof of the equivalence of (1) and (2). That (2) implies (1) (and even closure under infinite intersections) is trivial. For the converse, suppose that \(LS_t^P\) is closed under finite intersections. Since we are assuming that the field for P is finite, \(LS_t^P\) contains only finitely many propositions, so \(\bigcap LS_t^P = LC_t^P\) is a finite intersection and is thus in \(LS_t^P\). Thus, by the definition of \(LS_t^P\), \(P(LC_t^P) \underset{(-)}{>}t\). By the laws of probability (P is monotone with respect to set inclusion), we then get for all A such that \(LC_t^P \subseteq A\), \(P(A) \underset{(-)}{>}t\), that is, \(A \in LS_t^P\). This is one direction of (2). For the other direction, suppose that A is in \(LS_t^P\). Then \(\bigcap LS_t^P \subseteq A\), i.e., \(LC_t^P \subseteq A\), as desired.
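For a concrete illustration of both directions (the numbers here are chosen purely for illustration, reading the threshold comparison as \(\ge\), i.e., the (\({\hbox {LT}}_\ge ^t\)) version), let \(W=\{w_1,w_2,w_3\}\), \(P(w_1)=0.8\), \(P(w_2)=P(w_3)=0.1\), and \(t=0.7\). Then \(LS_t^P=\{\{w_1\},\{w_1,w_2\},\{w_1,w_3\},W\}\), which is closed under finite intersections, \(LC_t^P=\bigcap LS_t^P=\{w_1\}\) with \(P(LC_t^P)=0.8\ge t\), and indeed \(P(A)\ge 0.7\) holds exactly for those A with \(\{w_1\}\subseteq A\). By contrast, with \(P(w_1)=0.5\), \(P(w_2)=0.3\), \(P(w_3)=0.2\) and the same threshold, \(LS_t^P=\{\{w_1,w_2\},\{w_1,w_3\},W\}\) is not closed under intersections: \(\{w_1,w_2\}\cap \{w_1,w_3\}=\{w_1\}\) has probability \(0.5<t\).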
Proof of the equivalence of (4) and (5), with identical propositions X. Assume (4), fix a suitable proposition X from (4) and let \(w\in X\). Since \(w\in X\), X is not a subset of \(W-\{w\}\), so by contraposition of the left-to-right part of (4), \(P(W-\{w\}) \underset{)-(}{<}t\), i.e., \(P(w) \underset{)-(}{>}1-t\). By the right-to-left part of (4), applied to \(A=X\), \(P(X)\underset{(-)}{>}t\), i.e., \(P(\overline{X})\underset{(-)}{<}1-t\). Taking these together, we get (5), for the same proposition X as in (4). For the converse, assume (5) and fix a suitable proposition X from (5). In order to prove (4), let first \(P(A)\underset{(-)}{>}t\). Suppose for reductio that \(X \not \subseteq A\). Then there is some \(w\in X\) with \(w\notin A\), so \(A \subseteq W-\{w\}\) and thus \(P(A) \le 1-P(w)\). So, for this w, \(P(A) \le 1 - P(w) \underset{)-(}{<}1-(1-t) = t\), by (5), and we get a contradiction. For the other direction of (4), let \(X\subseteq A\). Then \(P(X)\le P(A)\). But by (5), \(P(X)\underset{(-)}{>}t\), so \(P(A)\underset{(-)}{>}t\), as desired for (4), with the same proposition X as in (5).
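To see the equivalence at work in a small example (again with illustrative numbers and the (\({\hbox {LT}}_\ge ^t\)) reading), take \(W=\{w_1,w_2,w_3\}\), \(P(w_1)=0.8\), \(P(w_2)=P(w_3)=0.1\), \(t=0.7\), and \(X=\{w_1\}\). The inequalities used from (5) hold, since \(P(w_1)=0.8>1-t=0.3\) and \(P(\overline{X})=0.2\le 0.3\); and so does the biconditional of (4): every A with \(w_1\in A\) has \(P(A)\ge 0.8\ge t\), while every A with \(w_1\notin A\) has \(P(A)\le 0.2<t\).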
Proof that (5) implies (6). Take the X from (5). We first check the range of the threshold t. For version (\({\hbox {LT}}_>^t\)) of the Lockean thesis, (5) requires us to have \(P(\overline{X}) < 1-t \le \min_{w\in X} P(w)\), which means that t is in the interval \([1-\min_{w\in X} P(w),P(X))\). For version (\({\hbox {LT}}_\ge ^t\)) of the Lockean thesis, (5) requires us to have \(P(\overline{X}) \le 1-t < \min_{w\in X} P(w)\), which means that t is in the interval \((1-\min_{w\in X} P(w),P(X)]\). But this is just what the first part of (6) says. For the second part of (6), let B be such that \(B\cap X \not = \emptyset\). Then, since by (5) \(P(w)>0\) for every \(w\in X\) and B contains at least one such w, \(P(B)>0\). Moreover, \(P(X\vert B) = \frac{P(X\cap B)}{P(B)} = \frac{P(X\cap B)}{P(X\cap B)+P(\overline{X}\cap B)}\). Now on the one hand \(P(\overline{X}\cap B) \le P(\overline{X}) \underset{(-)}{<}1-t\) by (5), and on the other hand, since \(B\cap X \not = \emptyset\), there is some \(w\in X\cap B\) with \(P(X\cap B) \ge P(w)\) and \(P(w)\underset{)-(}{>}1-t\) by (5) again, so \(P(X\cap B)\underset{)-(}{>}1-t\). Hence \(P(\overline{X}\cap B)<P(X\cap B)\), and therefore \(\frac{1}{2} < P(X\vert B) \le 1\).
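In the same illustrative example (\(P(w_1)=0.8\), \(P(w_2)=P(w_3)=0.1\), \(X=\{w_1\}\), the (\({\hbox {LT}}_\ge ^t\)) reading), the admissible interval is \((1-\min_{w\in X} P(w),P(X)]=(0.2,0.8]\), which indeed contains \(t=0.7\); and for, say, \(B=\{w_1,w_3\}\), which overlaps X, we get \(P(\overline{X}\cap B)=0.1<P(X\cap B)=0.8\) and hence \(P(X\vert B)=\frac{0.8}{0.9}>\frac{1}{2}\).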
Proof that (6) implies (5). Take X from (6), choose an arbitrary \(w \in X\) and put \(B:=\overline{X}\cup \{w\}\). From (6), we then get \(P(\overline{X}\cup \{w\})>0\) and \(P(X\vert \overline{X}\cup \{w\}) > \frac{1}{2}\), that is, \(\frac{P(X\cap (\overline{X}\cup \{w\}))}{P(\overline{X}\cup \{w\})} > \frac{1}{2}\). Since \(w\in X\), the numerator equals \(P(w)\), and since \(\overline{X}\) and \(\{w\}\) are disjoint, the denominator equals \(P(\overline{X})+P(w)\); so this reduces to \(\frac{P(w)}{P(\overline{X})+P(w)} > \frac{1}{2}\), or just \(P(w) > P(\overline{X})\). Since w was chosen arbitrarily from X, this holds for every \(w\in X\); and since the threshold condition is checked exactly as in the direction from (5) to (6), this establishes (5).
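For an illustration of this construction of B (with numbers chosen only for this purpose), let \(W=\{w_1,w_2,w_3,w_4\}\), \(P(w_1)=0.4\), \(P(w_2)=0.35\), \(P(w_3)=0.15\), \(P(w_4)=0.1\), and \(X=\{w_1,w_2\}\), so that \(P(\overline{X})=0.25\). For \(w=w_2\), \(B=\overline{X}\cup \{w_2\}=\{w_2,w_3,w_4\}\) has \(P(B)=0.6>0\), and the inequality \(P(X\vert B)=\frac{0.35}{0.6}>\frac{1}{2}\) unpacks, exactly as above, to \(P(w_2)=0.35>P(\overline{X})=0.25\); likewise for \(w=w_1\), where \(P(X\vert \overline{X}\cup \{w_1\})=\frac{0.4}{0.65}>\frac{1}{2}\) corresponds to \(P(w_1)=0.4>0.25\).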