Intuitionistic probability and the Bayesian objection to dogmatism

Given a few assumptions, the probability of a conjunction is raised, and the probability of its negation is lowered, by conditionalising upon one of the conjuncts. This simple result appears to bring Bayesian confirmation theory into tension with the prominent dogmatist view of perceptual justification—a tension often portrayed as a kind of ‘Bayesian objection’ to dogmatism. In a recent paper, David Jehle and Brian Weatherson observe that, while this crucial result holds within classical probability theory, it fails within intuitionistic probability theory. They conclude that the dogmatist who is willing to take intuitionistic logic seriously can make a convincing reply to the Bayesian objection. In this paper, I argue that this conclusion is premature—the Bayesian objection can survive the transition from classical to intuitionistic probability, albeit in a slightly altered form. I shall conclude with some general thoughts about what the Bayesian objection to dogmatism does and doesn’t show.

~(B  A): It's not the case that I am a brain in a vat and it appears to me that I have a mole on my left thigh.
According to the first dogmatist claim, learning A can provide me with justification for believing M without my having any antecedent justification for believing ~(B ∧ A). But I can clearly see that M entails ~B, which entails ~(B ∧ A), in which case, by the second dogmatist claim, once I learn A and acquire justification for believing M, I will also acquire justification for believing ~(B ∧ A).
It's strange to think, though, that I could acquire justification for believing that B ∧ A is false by learning that A is true. It's strange to think that noticing the appearance of a mole on my left thigh could provide me with justification for believing that I'm not a brain in a vat being supplied with an appearance of a mole on my left thigh. Suppose I do have a genuine paranoid concern that I may be a brain in a vat trapped in a simulated world in which I appear to have a mole on my left thigh. As I apprehensively roll up my trousers to look for a mole, I'm hardly going to be relieved if I actually find one. Far from relieving my anxieties, this is the very discovery that is going to stoke them.
One way to make these impressions more precise is by appealing to Bayesian confirmation theory. On the Bayesian picture, the degrees of support conferred upon propositions by a given body of evidence can be represented by a probability function, with a piece of evidence E confirming a hypothesis H just in case conditionalising on E raises the evidential probability of H, and a piece of evidence E disconfirming a hypothesis H just in case conditionalising on E lowers the evidential probability of H. More formally, if Pr is a prior evidential probability function representing the probabilities imposed by background evidence then, according to the Bayesian, E confirms H relative to Pr just in case Pr(H | E) > Pr(H) and disconfirms H relative to Pr just in case Pr(H | E) < Pr(H). Conditional probabilities are usually taken to be defined in terms of unconditional probabilities via the standard ratio formula (RF): Pr(H | E) = Pr(H ∧ E)/Pr(E) if Pr(E) > 0, and undefined in case Pr(E) = 0, but they could equally be taken as primitive, with the ratio formula treated as a theorem.
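The confirmation criterion and the ratio formula just described can be sketched computationally. The following is a minimal illustration, not anything from the paper: the probability space, outcome names, and numbers are all invented for the example.

```python
from fractions import Fraction

# Toy probability space: atomic outcomes with their probabilities.
# Outcomes and numbers are illustrative only.
space = {
    "h_and_e": Fraction(3, 10),   # hypothesis H and evidence E both true
    "h_only":  Fraction(2, 10),   # H true, E false
    "e_only":  Fraction(1, 10),   # E true, H false
    "neither": Fraction(4, 10),
}

def pr(outcomes):
    """Unconditional probability of a set of outcomes."""
    return sum(space[o] for o in outcomes)

def pr_given(a, b):
    """Ratio formula (RF): Pr(A | B) = Pr(A and B)/Pr(B), defined when Pr(B) > 0."""
    pb = pr(b)
    if pb == 0:
        raise ValueError("Pr(B) = 0: conditional probability undefined")
    return pr(set(a) & set(b)) / pb

H = {"h_and_e", "h_only"}
E = {"h_and_e", "e_only"}

# E confirms H relative to Pr just in case Pr(H | E) > Pr(H).
print(pr(H))                   # 1/2
print(pr_given(H, E))          # 3/4
print(pr_given(H, E) > pr(H))  # True: conditionalising on E raises Pr(H)
```

Here conditionalising on E moves the probability of H from 1/2 to 3/4, so E confirms H in the sense defined above.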
Prior to checking my left thigh, it is possible, but less than certain, that there will appear to me to be a mole there, in which case we have it that 0 < Pr(A) < 1. It is presumably also possible that I am a brain in a vat being supplied with an appearance as of a mole on my left thigh, in which case we also have it that Pr(B ∧ A) > 0. But, given just these assumptions, and provided that Pr is a classical (Kolmogorovian) probability function, it can be proved that Pr(~(B ∧ A) | A) < Pr(~(B ∧ A)). In addition to the ratio formula, the proof makes use of three well-known theorems of classical probability.

The Bayesian, then, is committed to the claim that learning A disconfirms ~(B ∧ A). This is just an instance of a more general principle that Bayesian confirmation theory vindicates: if a conjunction and one of its conjuncts both have probabilities strictly between 0 and 1, then learning that the conjunct is true will serve to confirm the conjunction and disconfirm its negation.

According to Just, if I acquire justification for believing a hypothesis H by learning a piece of evidence E, then the evidential probability of H given E cannot be any lower than the prior evidential probability of H. In previous work, I've defended a view on which Just can fail (Smith 2010, 2016), though I'm inclined to think that challenging Just may not be a promising path for the dogmatist to take. On my view, there are certain kinds of evidence that can probabilistically support a proposition without conferring justification for believing it. As such, it is possible to construct cases in which evidence of this sort is replaced by probabilistically weaker evidence that is justification conferring, making for counterexamples to Just. But this particular kind of structure is not (or not obviously) present in the cases of Just that would need to fail in order to preserve dogmatism.1
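The classical proof in question has not survived in the text, but it presumably runs along the following lines, using Complementation (in both unconditional and conditional form), the ratio formula, and the classical equivalence of (B ∧ A) ∧ A with B ∧ A — a reconstruction, not the author's own numbered derivation:

```latex
\begin{align*}
\Pr(\lnot(B \land A) \mid A)
  &= 1 - \Pr(B \land A \mid A)
     && \text{(Complementation, conditional form)}\\
  &= 1 - \frac{\Pr((B \land A) \land A)}{\Pr(A)}
     && \text{(RF, since } \Pr(A) > 0\text{)}\\
  &= 1 - \frac{\Pr(B \land A)}{\Pr(A)}
     && \text{(equivalence of } (B \land A) \land A \text{ with } B \land A\text{)}\\
  &< 1 - \Pr(B \land A)
     && \text{(since } \Pr(B \land A) > 0 \text{ and } \Pr(A) < 1\text{)}\\
  &= \Pr(\lnot(B \land A))
     && \text{(Complementation)}
\end{align*}
```

The strict inequality in the fourth line is where both stipulations do their work: dividing a positive quantity by a number strictly less than 1 strictly increases it.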
In any case, I won't explore this further here, as my concern is with another, rather unexpected, option that is available to the dogmatist - the option explored by Jehle and Weatherson. While the above proof is sound when Pr is interpreted as a classical probability function, it fails if Pr is interpreted as an intuitionistic probability function - that is, a probability function constrained by an intuitionistic logical consequence relation rather than a classical one. Indeed, as Jehle and Weatherson demonstrate, it is relatively easy to construct intuitionistic probability functions for which (1) is true and (9) is false.
The dogmatist, it seems, can embrace Just and acquiesce in the general Bayesian picture of confirmation, so long as he is prepared to question some of the strong classical assumptions that conventionally inform Bayesianism - and this may seem like a relatively low price to pay. The plan for the remainder of this paper is as follows: in the next section, I set out the framework of intuitionistic probability.

The conditional probability functor can be defined via RF or it can be supplied with its own axioms, with RF emerging as a theorem. Jehle and Weatherson prefer the latter approach but, for ease, I will opt for the former here. The only assumptions about conditional probability that are made in this paper are RF and its consequences and, as such, it makes no difference for present purposes whether RF is deemed a definition or a theorem.
The behaviour of a ├-probability function will, in effect, reflect the behaviour of the underlying logical consequence relation ├ in combination with the axioms P0-P3. As Weatherson (2003) shows, a probability function based upon classical logical consequence will be identical to a probability function as described by Kolmogorov's original axioms.
Classical probability functions can be constructed in the following, familiar way: Let W be a finite set of possible worlds and V a valuation function taking members of L into subsets of W and meeting the following constraints: V(φ ∧ ψ) = V(φ) ∩ V(ψ), V(φ ∨ ψ) = V(φ) ∪ V(ψ) and V(~φ) = W \ V(φ). Finally, let m be a probability mass distribution defined upon W - that is, a function from W into the real unit interval such that Σx∈W m(x) = 1. For each φ ∈ L, if we let Pr(φ) = Σx∈V(φ) m(x), it can be shown that Pr will qualify as a classical probability function.
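The classical construction can be sketched directly. The worlds, the atomic valuation, and the uniform mass distribution below are invented for illustration; only the clauses for the connectives come from the construction itself.

```python
from fractions import Fraction

# A minimal sketch of the classical construction: four invented worlds
# with uniform mass, and an invented valuation for two atomic sentences.
W = ["w1", "w2", "w3", "w4"]
m = {w: Fraction(1, 4) for w in W}          # mass distribution, sums to 1

V = {"p": {"w1", "w2"}, "q": {"w2", "w3"}}  # atomic valuations

# Classical valuation clauses: conjunction = intersection,
# disjunction = union, negation = complement relative to W.
def v_and(a, b): return V[a] & V[b]
def v_or(a, b):  return V[a] | V[b]
def v_not(a):    return set(W) - V[a]

def pr(worlds):
    """Pr of a sentence = total mass of the worlds verifying it."""
    return sum(m[w] for w in worlds)

print(pr(v_and("p", "q")))          # Pr(p ∧ q) = 1/4
print(pr(v_or("p", "q")))           # Pr(p ∨ q) = 3/4
print(pr(v_not("p")))               # Pr(~p)    = 1/2
print(pr(V["p"]) + pr(v_not("p")))  # Complementation holds: sums to 1
```

Because negation is set complement, Pr(φ) + Pr(~φ) = 1 automatically; it is exactly this feature that the intuitionistic construction below gives up.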
Intuitionistic logic is weaker than classical logic - that is, while all intuitionistic theorems are classical theorems, some classical theorems fail within intuitionistic logic, the most well-known being the Law of Excluded Middle (LEM): φ ∨ ~φ. In general, weaker logical consequence relations will give rise, via P0-P3, to probability functions that are less tightly constrained. As Jehle and Weatherson demonstrate, intuitionistic probability functions are freed up in just such a way as to tolerate the kind of probabilistic behaviour that the dogmatist seems to require.
Intuitionistic probability functions can be constructed by exploiting slightly simplified versions of the models developed by Kripke (1965) in his semantics for intuitionistic logic.
These models take the form ⟨W, R, V⟩ where W, as before, is a finite set of possible worlds and R is a reflexive, transitive relation on W. Let V be a valuation function taking sentences of L into subsets of W that are closed under R - that is, for any φ ∈ L, if w ∈ V(φ) and wRw′ then w′ ∈ V(φ). The valuation clauses for conjunction and disjunction are unchanged, but the clause for negation is amended as follows: w ∈ V(~φ) iff, for all w′ such that wRw′, w′ ∉ V(φ). Once again, let m be a probability mass distribution defined over the members of W, with Pr(φ) = Σx∈V(φ) m(x) for any φ ∈ L. As Weatherson (2003) demonstrates, any Pr, so defined, will meet the conditions for an intuitionistic probability function.
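The amended negation clause is the crux, and a tiny two-world example shows its effect. The model below (worlds, R, masses, atomic valuation) is invented for illustration: w1 can "see" a future world w2 at which p holds, so neither p nor ~p is verified at w1, and an instance of LEM receives probability less than 1.

```python
from fractions import Fraction

# Invented two-world Kripke model: R is reflexive, transitive,
# and w1 "sees" w2.
W = ["w1", "w2"]
R = {("w1", "w1"), ("w2", "w2"), ("w1", "w2")}
m = {"w1": Fraction(1, 2), "w2": Fraction(1, 2)}

V_p = {"w2"}  # closed under R: w2's only R-successor is w2

def v_not(vs):
    """w verifies ~phi iff no R-successor of w verifies phi."""
    return {w for w in W
            if all(y not in vs for (x, y) in R if x == w)}

def pr(vs):
    return sum(m[w] for w in vs)

# w1 doesn't verify p, but its successor w2 does, so w1 doesn't
# verify ~p either: the mass at w1 goes to neither disjunct.
print(pr(V_p))               # Pr(p)      = 1/2
print(pr(v_not(V_p)))        # Pr(~p)     = 0
print(pr(V_p | v_not(V_p)))  # Pr(p ∨ ~p) = 1/2 < 1
```

The mass sitting at w1 is the "reservoir of unclaimed mass" that the next section appeals to: it belongs neither to p nor to ~p.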
As I noted in the previous section, the derivation of Pr(~(B ∧ A) | A) < Pr(~(B ∧ A)) breaks down if Pr is interpreted as an intuitionistic, rather than a classical, probability function. Up until step (5) the proof survives the transition to intuitionistic probability, but it falters at step (6).

Thinking about intuitionistic probability in this way can also help us to appreciate why Pr(~(B ∧ A) | A) < Pr(~(B ∧ A)) fails to follow from 0 < Pr(B ∧ A) ≤ Pr(A) < 1 when Pr is interpreted as an intuitionistic probability function. If the probability of B ∧ A is greater than 0 and the probability of A is less than 1, then the effect of conditionalising upon A will be to redistribute probability mass to B ∧ A. This holds true for both classical and intuitionistic probability functions. In the case of a classical function, however, this additional mass must be drained from the mass assigned to ~(B ∧ A), as there is, in effect, no other source from which it can issue. In the case of an intuitionistic probability function, there may be a reservoir of unclaimed mass that can meet this need. Indeed, conditionalising upon A can have the effect of redistributing mass from the reservoir to both B ∧ A and ~(B ∧ A).

It is relatively straightforward to construct an intuitionistic probability function that behaves in just this way. (The Kripke model, its graphical representation, and the calculated probabilities are given in a figure at this point.)

Thinking of probability functions as tethered to logical consequence relations offers an intriguing new perspective on Bayesian confirmation theory - a perspective on which it incorporates two, potentially separable, commitments. First, there is a commitment to the idea that degrees of evidential support behave like probabilities and that confirmation and disconfirmation should be understood in terms of probability raising and lowering respectively. Second, there is a commitment to classical logic.
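A model with exactly the probabilities cited in the next section (Pr(~(B ∧ A)) moving from 1/3 to 1/2 under conditionalisation on A, and Pr((B ∧ A) ∨ ~(B ∧ A)) moving from 2/3 to 1) can be reconstructed as follows. To be clear, the particular worlds, relation, and masses are my reconstruction, not necessarily the one used in the original figure.

```python
from fractions import Fraction

# Reconstructed three-world Kripke model, uniform mass. R is reflexive,
# transitive, and w3 "sees" w1, so w3 verifies neither A nor ~(B & A):
# its mass is the "reservoir".
W = ["w1", "w2", "w3"]
R = {("w1", "w1"), ("w2", "w2"), ("w3", "w3"), ("w3", "w1")}
m = {w: Fraction(1, 3) for w in W}

V_A = {"w1", "w2"}   # closed under R
V_B = {"w1"}         # closed under R

def v_not(vs):
    """w verifies ~phi iff no R-successor of w verifies phi."""
    return {w for w in W
            if all(y not in vs for (x, y) in R if x == w)}

def pr(vs):
    return sum(m[w] for w in vs)

def pr_given(vs_a, vs_b):
    """Ratio formula."""
    return pr(vs_a & vs_b) / pr(vs_b)

BA = V_A & V_B        # B & A holds at w1 only
not_BA = v_not(BA)    # ~(B & A) holds at w2 only (w3 sees w1)
lem = BA | not_BA     # (B & A) v ~(B & A)

print(pr(not_BA))             # 1/3
print(pr_given(not_BA, V_A))  # 1/2: conditionalising on A RAISES it
print(pr(lem))                # 2/3
print(pr_given(lem, V_A))     # 1: and A confirms the LEM instance
```

Conditionalising on A throws out w3's reservoir mass, and that mass is redistributed to both B ∧ A and ~(B ∧ A), just as described above; no mass need be drained from ~(B ∧ A).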
The claim that A disconfirms ~(B ∧ A) emerges from the combination of both commitments, but could coherently be denied by someone who held on to the former commitment while opting for intuitionistic logic - someone who still deserves to be described as a 'Bayesian' in a suitably inclusive sense.

According to Jehle and Weatherson, then, the dogmatist who embraces intuitionistic logic has nothing to fear from the Bayesian objection. Further, Jehle and Weatherson claim that it isn't even necessary for the dogmatist to fully commit to intuitionistic logic in order to put the Bayesian objection to one side - it may be enough that the dogmatist be less than fully committed to classical logic. If one is uncertain whether intuitionistic or classical logic is correct then, on one natural way of understanding what this uncertainty amounts to, one should have a preference for intuitionistic over classical probability (Jehle and Weatherson, 2012, section 2).
I'm inclined to think, however, that these conclusions are premature - even the dogmatist who is fully confident that intuitionistic logic is correct does not yet have a viable response to the Bayesian objection. Consider the sentence 'Either I'm wearing socks or I'm not'. If I take intuitionistic logic seriously, then I may have reason to doubt that this sentence is true purely in virtue of its form, but I needn't, of course, have any reason to doubt that it's true. If I adhere to a theory that prevents me from accepting this sentence, then I can't get off the hook simply by expressing sympathy with intuitionistic logic. Similarly, if I take intuitionistic logic and probability seriously, I will have reason to doubt that Pr(~(B ∧ A) | A) < Pr(~(B ∧ A)) is true purely in virtue of its form, but I needn't have any reason to doubt that it's true, given the intended interpretations of A and B. The Bayesian objection to dogmatism requires only the latter claim, and not the former. As I shall argue in the remainder of this paper, the move from classical to intuitionistic probability leaves us, in fact, with very strong reasons for thinking that Pr(~(B ∧ A) | A) < Pr(~(B ∧ A)).
My argument works as follows: Given the stipulation that 0 < Pr(B ∧ A) ≤ Pr(A) < 1 and the supposition that Pr(~(B ∧ A) | A) ≥ Pr(~(B ∧ A)), I shall derive three further results within intuitionistic probability theory. Given the intended interpretations of A and B, each of these results is individually highly implausible. Taken together they amount, I think, to a strong reductio of the supposition. I shall conclude that even the dogmatist who embraces intuitionistic probability is under substantial pressure to concede that Pr(~(B ∧ A) | A) < Pr(~(B ∧ A)).

III. THREE TROUBLESOME RESULTS
Consider the following Additivity principle (AD): If φ ∧ ψ is an intuitionistic antitheorem, then Pr(φ ∨ ψ) = Pr(φ) + Pr(ψ). That this holds for intuitionistic probability functions can be proved straightforwardly from P0-P3.
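The proof of AD can be sketched as follows, assuming (as in Weatherson 2003) that the axioms P0-P3 include the assignment of probability 0 to ├-antitheorems and the general additivity law Pr(φ) + Pr(ψ) = Pr(φ ∧ ψ) + Pr(φ ∨ ψ):

```latex
\begin{align*}
\Pr(\varphi) + \Pr(\psi) &= \Pr(\varphi \land \psi) + \Pr(\varphi \lor \psi)
  && \text{(general additivity)}\\
\Pr(\varphi \land \psi) &= 0
  && (\varphi \land \psi \text{ an antitheorem})\\
\therefore\quad \Pr(\varphi \lor \psi) &= \Pr(\varphi) + \Pr(\psi)
\end{align*}
```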
While other well-known theorems of classical probability theory don't survive the transition to intuitionistic probability, it is often possible to identify weakened substitute theorems that do persevere. For instance, whilst Complementation fails for intuitionistic probability functions, the following theorem, which I shall call Weak Complementation (WC), holds: Pr(φ) = Pr(φ ∨ ~φ) − Pr(~φ). This follows immediately from AD, given that φ ∧ ~φ is an intuitionistic antitheorem.
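Spelled out, since φ ∧ ~φ is an intuitionistic antitheorem, AD applied to φ and ~φ gives WC in one step:

```latex
\begin{align*}
\Pr(\varphi \lor \lnot\varphi) &= \Pr(\varphi) + \Pr(\lnot\varphi)
  && \text{(AD)}\\
\Pr(\varphi) &= \Pr(\varphi \lor \lnot\varphi) - \Pr(\lnot\varphi)
  && \text{(rearranging)}
\end{align*}
```

Classical Complementation is recovered exactly when Pr(φ ∨ ~φ) = 1; intuitionistically that instance of LEM may receive probability less than 1.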
A Weak Conditional Complementation principle (WCC) can also be proved: Pr(φ | ψ) = Pr(φ ∨ ~φ | ψ) − Pr(~φ | ψ), provided Pr(ψ) > 0.

Proof
(1) Pr(ψ) > 0 Stipulation
(2) (φ ∨ ~φ) ∧ ψ is intuitionistically equivalent to (φ ∧ ψ) ∨ (~φ ∧ ψ) Distribution
(3) Pr(φ ∨ ~φ | ψ) = Pr((φ ∧ ψ) ∨ (~φ ∧ ψ))/Pr(ψ) 1, 2, RF
(4) (φ ∧ ψ) ∧ (~φ ∧ ψ) is an intuitionistic antitheorem
(5) Pr((φ ∧ ψ) ∨ (~φ ∧ ψ)) = Pr(φ ∧ ψ) + Pr(~φ ∧ ψ) 4, AD
(6) Pr(φ ∨ ~φ | ψ) = Pr(φ | ψ) + Pr(~φ | ψ) 1, 3, 5, RF

When Pr is an intuitionistic probability function, conditionalising upon A can raise the probability of (B ∧ A) ∨ ~(B ∧ A) - this is the second result to which I wish to draw attention. It is nicely illustrated by the intuitionistic probability function developed in the previous section. Relative to that function, as can be easily checked, conditionalising on A boosts the evidential probability of ~(B ∧ A) by 1/6 (moving it from 1/3 to 1/2) and boosts the evidential probability of (B ∧ A) ∨ ~(B ∧ A) by 1/3 (taking it from 2/3 to 1).3
Not only must the dogmatist grant that (B ∧ A) ∨ ~(B ∧ A) is doubtful or uncertain, he must, by this result, also grant that it is positively confirmed by A. Not only must the dogmatist harbour some suspicion that there is no fact of the matter as to whether or not I am a brain in a vat being supplied with an appearance as of a mole on my left thigh, the dogmatist must also hold that an appearance of a mole on my left thigh would actually help to put these suspicions to rest - would help to confirm that there really is a fact of the matter after all. This is surely very puzzling. And the verification transcendence motivation for doubting instances of LEM, as far as I can tell, would only serve to deepen our puzzlement over this.
Thinking about intuitionistic probability functions as arising from mass distributions over the points of Kripke models can, I think, help us to appreciate why these two results arise.

Certain theories of vagueness predict that LEM can fail in borderline cases of vague predicates - so a sentence such as 'The patch is red or the patch is not red' will fail to be true for borderline red patches. Plausibly, there can be 'borderline' moles and 'borderline' appearances, in which case there could be borderline cases for a sentence such as A which, on the present approach, will be cases in which A ∨ ~A fails to be true. Needless to say, this is a very controversial approach to vagueness,4 but even if we accept it, it provides little comfort to a dogmatist. This account of vagueness will only license doubts about A ∨ ~A in certain specialised cases, but the predictions made by dogmatism are in no way limited to such cases.
If I'm certain that the present case is not a borderline case for A, then the troublesome results derived above will still stand.

Any dogmatist who appeals to intuitionistic probability in order to try and circumvent the Bayesian objection is committed to the three consequences derived here. And, while I haven't discussed these consequences in great detail, to an extent they speak for themselves.
At the very least, it is incumbent upon the dogmatist who wishes to exploit the intuitionistic option to explain just how such consequences could be acceptable, otherwise the Bayesian objection retains its bite.

IV. CONCLUSION
I shall conclude with some brief, general thoughts about the Bayesian objection to dogmatism and, in particular, about the very idea of modifying Bayesian confirmation theory in an attempt to accommodate dogmatism.5 In a way, I think that the Bayesian objection to dogmatism is misnamed. It's a mistake to portray this objection as involving a clash between two theories - dogmatism and Bayesian confirmation theory - either of which might be fair game when it comes to effecting a resolution. The true clash, I think, is between dogmatism and a series of very intuitive claims, such as the claim I've focussed on here: by learning A (that it appears to me that I have a mole on my left thigh) I cannot acquire justification for believing ~(B ∧ A) (that I'm not a brain in a vat being supplied with an appearance of a mole on my left thigh).
It is true enough that this claim can be derived from classical Bayesian confirmation theory, along with a philosophical assumption such as Just - but it's not as though we need to derive the claim from anything in order to convince ourselves that it's true. To put things slightly differently, this claim is not some artefact of classical Bayesian confirmation theory - not some surprising prediction that the theory foists upon us. On the contrary, this is a claim that any adequate approach to confirmation and justification should arguably deliver - a claim that is plausible before we've engaged in any systematic theorising about these topics. And an intuitionistic Bayesian confirmation theory will deliver the claim - it's just that it needs to be supplemented by further, very plausible, assumptions in order to do so.
Modifying classical Bayesian confirmation theory is not, I think, a viable way to address the Bayesian objection to dogmatism. This is not because classical Bayesian confirmation theory is sacrosanct -rather, it is because any viable modification of the theory should continue to deliver the claims that clash with dogmatism. The Bayesian objection to dogmatism is not, at its heart, 'Bayesian' at all. In my view, classical Bayesian confirmation theory offers just one way of dramatising a problem that is, for all intents and purposes, internal to dogmatism itself.