Suppose we wish to model the total doxastic state of a typical (non-ideal) subject, whom we’ll call \(\upalpha \).Footnote 1 We’ll need two main ingredients: one, a way to represent potential objects of thought, the kinds of things fit to serve as the contents of some cognitive mental state; and two, a way to represent which of these are the contents of \(\upalpha \)’s attitudes.

If our model is to be faithful to the facts, then it’s important that we don’t end up representing \(\upalpha \) as being much more rational than she in fact is. What needs to be done to satisfy this desideratum depends on just how irrational we think non-ideal agents can be, and opinions vary widely on this matter. But here is something that almost everyone agrees on: we are not logically infallible. The total doxastic state of any ordinary agent will usually be logically incoherent in some respect or other. Total belief sets probably aren’t going to be closed under logical implication, even on those accounts that seem to make us look very rational indeed (e.g., Lewis 1982; Stalnaker 1984). And on the face of it, beliefs don’t appear to be closed under even logical equivalence. The same applies to other kinds of doxastic attitudes: prima facie, one can be fully confident that either it is raining or it’s not without thereby also being fully confident that it’s not the case that it’s raining and not raining. The intuitive data of logical incoherence and hyperintensionality needs to be accounted for—usually, by modelling the objects of belief using entities that cut finer than logical equivalence.

In this paper, I argue that one common strategy for modelling logically fallible agents and hyperintensional contents (viz., through the use of impossible worlds) does not sit nicely with another very common approach to modelling total doxastic states (viz., through the use of a numerically-valued function defined on a Boolean algebra of propositions; e.g., a probability function). Roughly, the source of the problem is that most of the propositions which can be constructed out of a sufficiently rich space of possible and impossible worlds are in a certain strong sense inexpressible, and any Boolean algebra defined on such a space will contain at least as many inexpressible propositions as expressible propositions. Since it’s reasonable to think that most (if not all) of our doxastic attitudes are expressible, a model which commits us to widespread inexpressibility looks problematic. We can impose restrictions on the space of worlds which would prevent the inclusion of inexpressible propositions in the algebra, but only at the cost of reintroducing (a strong degree of) infallibility.

In Sect. 1, I outline an assumption about the expressibility of thought which will be helpful in setting up my main argument. Then, in Sect. 2, I provide some background on the problems of logical omniscience as they apply to a standard way of modelling full belief, and discuss how the introduction of impossible worlds is supposed to help solve these problems. In Sect. 3, I introduce probabilistic analogues to the classical problems of logical omniscience, for which an analogous solution involving impossible worlds seems to apply. Finally, in Sect. 4 I present the central argument of the paper, and in Sects. 5 and 6 I discuss responses.

Before moving on, it’s worth noting some things that I’m not arguing. First, I do not think that the mere existence of inexpressible propositions should be considered problematic for the impossible worlds model—nor for that matter do I think that they would be especially problematic for the possible worlds model. I would not consider it a devastating problem if our formal models implied that inexpressible propositions exist, and could potentially serve as the objects of thought for some believers. I do, however, think that there is a serious issue when our models commit us to saying that inexpressibility is the norm, and it is this problem that I intend to highlight here. (See Sect. 6 for more discussion on this point.) And second, my argument should not be read as being against the intelligibility of impossible worlds in general, nor do I want to claim that there are no benefits to including them within our ontology.

1 The expressibility hypothesis

In setting up my argument, I will presuppose the existence of an artificial language, \({\varvec{\mathcal {L}}}\), about which I will make some assumptions. \({\varvec{\mathcal {L}}}\) can be thought of as a class of declarative sentences, each a (possibly infinite) string of symbols taken from a (possibly infinite) alphabet, with a corresponding interpretation. We suppose that every sentence in \({\varvec{\mathcal {L}}}\) is unambiguous, precise, and, for the sake of simplicity, context-independent. I’ll stick to characterising \({\varvec{\mathcal {L}}}\) at the sentential level, since it is here that the issues we will be interested in arise. Nothing in what follows should be taken to suggest that there can be no quantifiers, modal operators, and so on, in \({\varvec{\mathcal {L}}}\).

Next, we will want \({\varvec{\mathcal {L}}}\) to be as expressive as possible with respect to \(\upalpha \)’s (partial) beliefs, within the bounds allowed by the present assumptions.Footnote 2 The most straightforward version of my argument then proceeds on the basis of an assumption, which I will call the expressibility hypothesis: that \({\varvec{\mathcal {L}}}\) is maximally expressive, in the sense that for each distinct belief (or partial belief) that \(\upalpha \) has, there is a distinct sentence S in \({\varvec{\mathcal {L}}}\) which expresses the content of that exact belief and no other. \({\varvec{\mathcal {L}}}\) may be capable of saying much more than this as well, but to begin with we will assume that it is capable of saying at least this much.

Furthermore, besides having beliefs simpliciter, I assume that \(\upalpha \) can also have negative and conjunctive beliefs. For example, \(\upalpha \) might believe that roses are red, that violets are blue, and that roses are red and violets are blue, where the latter content intuitively has normative connections to the former two of the kind we might try to cash out in terms of conjunction introduction and elimination rules. If the content of the first belief is captured by a sentence \(\hbox {S}_{1}\) of \({\varvec{\mathcal {L}}}\), and the content of the second by \(\hbox {S}_{2}\), then we will use ‘\(\hbox {S}_{1} \wedge \hbox {S}_{2}\)’ to pick out the sentence (or a sentence) of \({\varvec{\mathcal {L}}}\) which expresses the third content. Likewise, if \(\upalpha \) later comes to believe that roses are not red, then there’s another sentence, ‘\(\lnot \hbox {S}_{1}\)’, which expresses her changed belief.

In saying this, I’m not making any strong commitments in relation to the syntax of \({\varvec{\mathcal {L}}}\), which may consist entirely of ‘atomic’ sentences for all I’ve said here. But if it is possible at all to have a language capable of expressing all of our beliefs, I see no good reason to think that there couldn’t also be such a language containing a unary connective and a binary connective corresponding to negation and conjunction respectively. Nor am I saying that \(\upalpha \) can only have atomic, negative, and conjunctive beliefs. She may also have conditional beliefs, e.g., a belief that if roses are red then violets are blue, where this is not just another way of saying that \(\upalpha \) believes that it’s not the case that: roses are red and violets are not blue. In that case, we may also want to have primitive conditional sentences in \({\varvec{\mathcal {L}}}\). Likewise, \(\upalpha \) may believe that roses are red or violets are blue, where this is not the same thing as believing that it’s not the case that: roses are not red and violets are not blue. We need not commit either way on these questions. It’s perfectly reasonable to think that \({\varvec{\mathcal {L}}}\) has some non-trivial syntax at the sentential level. But we may well find that just two connectives are too few to adequately distinguish between the full range of contents that a typical subject might believe, so we will remain neutral on just what that syntax is. (The upshot of these points will become apparent in the final paragraphs of Sect. 4.)

Whatever \({\varvec{\mathcal {L}}}\) is, it’s obviously not English, nor any other natural language. But there is no need to interpret my talk of ‘sentences’ and ‘languages’ too closely on the model of natural languages. The ‘language’ in question may not be the sort of thing that any human being could speak, nor need it correspond very closely to the structure of thought. The ‘sentences’ may be purely mathematical objects, or arbitrary sets of abstracta. For example, one might want to simply let every object of belief just be a sentence of \({\varvec{\mathcal {L}}}\), and stipulate that every sentence expresses itself.Footnote 3 Alternatively, perhaps an appropriately constructed Lagadonian language would be expressive enough for our purposes.Footnote 4 In a series of recent works, Mark Jago has defended just this idea (see esp. his 2012; 2015a; b; cf. also Berto 2010). Indeed, the expressive richness of Jago’s language is a central component of his use of ersatz possible and impossible worlds to model hyperintensional contents, in roughly the manner described in the next section. As he puts it, for sets of ersatz possible and/or impossible worlds to be an adequate model of hyperintensional content and to overcome the infamous ‘problem of descriptive power’, the world-building “language must be expressible enough to represent all of the possible and impossible situations we want to represent, and to represent distinct (possible or impossible) situations as distinct situations” (Jago 2015b, p. 718).

The reader may already be champing at the bit to deny the expressibility hypothesis. I ask that they hold off on their objections for now. I will return to discuss the matter in detail in Sect. 6, where I will argue three things—in order of importance,

(i) There are general, albeit inconclusive, reasons to accept the hypothesis.

(ii) There are prominent accounts of impossible worlds such that the hypothesis (or a close analogue thereof) is taken for granted, and would be difficult to deny.

(iii) Even if we ultimately ought to deny the hypothesis, the main thrust of the argument will be largely unchanged.

But that will have to wait until Sect. 6. The central thread of the argument goes through much more smoothly if we take the expressibility hypothesis for granted, and it’s better to discuss the consequences of denying the hypothesis once its importance to my argument is clear.

2 The problems of logical omniscience

Suppose that \(\Omega \) is a non-empty space of possible worlds. I remain neutral as to what worlds are; what’s important is just that a world \(\upomega \) is the kind of thing such that it makes sense to say of a declarative sentence S that S is true at \(\upomega \). In calling \(\Omega \) a set of possible worlds, I’m saying that for every \(\upomega \in \Omega \) and S, \(\hbox {S}_{1}\), \(\hbox {S}_{2},\,\ldots \,\in \,{\varvec{\mathcal {L}}}\):

  • Non-Contradiction: At most one of S or \(\lnot \hbox {S}\) is true at \(\upomega \)

  • Maximal Specificity: At least one of S or \(\lnot \hbox {S}\) is true at \(\upomega \)

  • Closure under Implication: If \(\hbox {S}_{1}, \hbox {S}_{2}, \ldots \) are true at \(\upomega \) and jointly imply S, then S is true at \(\upomega \)

What happens at worlds with respect to sentences that are not in \({\varvec{\mathcal {L}}}\) won’t be important for our purposes, so in the sequel it should be assumed that the sentences \(\hbox {S}_{1}, \hbox {S}_{2}\), etc., that I quantify over are always members of \({\varvec{\mathcal {L}}}\). I’ll also assume that the relevant notion of implication (here and throughout) is at least as strong as that of classical sentential logic. If need be, we can throw in some conceptual or metaphysical necessities as well, so as to rule out worlds with, e.g., married bachelors, four-sided triangles, non-dihydrogen oxide water, and the like.

Call any subset of \(\Omega \) a proposition. This is a stipulative usage: I do not assume that propositions are objects of thought. The powerset of \(\Omega ,\,\wp (\Omega )\), contains every proposition that can be formed from the worlds in \(\Omega \). Every sentence S in \({\varvec{\mathcal {L}}}\) can be mapped to some (possibly empty) member of \(\wp (\Omega )\) which contains all and only the worlds where what S says is true. Following standard notation, we’ll designate this ‘truth set’ of S using \(\Vert \hbox {S}\Vert \). Given the three assumptions I’ve made about \(\Omega \), logically equivalent sentences will always have the same truth sets. Moreover, \(\lnot \) and \(\wedge \) will correspond to the basic set operations of complementation and intersection in the following way (both identities are illustrated in the sketch after the list):

(i) \(\Vert \lnot \hbox {S}\Vert = \Vert \hbox {S}\Vert ^{\mathrm{C}}\)

(ii) \(\Vert \hbox {S}_{1}\,\wedge \,\hbox {S}_{2}\Vert = \Vert \hbox {S}_{1}\Vert \cap \Vert \hbox {S}_{2}\Vert \)
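To make these identities concrete, here is a minimal computational sketch. It is illustrative only: it assumes a toy language built from two atomic sentences plus negation and conjunction, and identifies each possible world with the set of atoms true at it.

```python
# A toy verification of (i) and (ii), assuming two atoms and the four
# classical valuations as our possible worlds.
from itertools import chain, combinations

ATOMS = ("A", "B")
OMEGA = frozenset(frozenset(s) for s in chain.from_iterable(
    combinations(ATOMS, k) for k in range(len(ATOMS) + 1)))

def true_at(S, w):
    """Classical truth at a world w (the set of atoms true there)."""
    if isinstance(S, str):
        return S in w
    if S[0] == "not":
        return not true_at(S[1], w)
    if S[0] == "and":
        return true_at(S[1], w) and true_at(S[2], w)

def truth_set(S):
    return frozenset(w for w in OMEGA if true_at(S, w))

S1, S2 = "A", "B"
assert truth_set(("not", S1)) == OMEGA - truth_set(S1)               # (i)
assert truth_set(("and", S1, S2)) == truth_set(S1) & truth_set(S2)   # (ii)
print("complementation and intersection identities hold")
```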

With this in place, we might make a start on modelling a total doxastic state. Begin with a model that originates with Hintikka (1962), which focuses solely on full belief. Again, \(\upalpha \) is our subject. For each world \(\upomega \), we can associate \(\upalpha \) with exactly one proposition in \(\wp (\Omega )\), which we’ll label \(R_{\upalpha }(\upomega )\). \(R_{\upalpha }(\upomega )\) is taken to represent the way the world must be given all of \(\upalpha \)’s beliefs at \(\upomega \). The worlds in \(R_{\upalpha }(\upomega )\) are \(\upalpha \)’s doxastically accessible worlds (at \(\upomega \)). In order to capture what \(\upalpha \) believes at \(\upomega \), we might first say that any proposition in \(\wp (\Omega )\) of which \(R_{\upalpha }(\upomega )\) is a subset represents a content that \(\upalpha \) believes. The upshot is a compact model of \(\upalpha \)’s total belief state. Fix an appropriate space of worlds \(\Omega \) and \(R_{\upalpha }(\upomega )\), and the rest of the work is done automatically by the subset relation.

But that’s a little too quick. Even supposing that there are enough propositions in \(\wp (\Omega )\) to represent all objects of belief, it may still be the case that \(\wp (\Omega )\) also contains many propositions that correspond to nothing that can properly be believed. Modelling objects of belief as sets of worlds does not commit one to saying that every set of worlds models an object of belief, and it shouldn’t be taken for granted that every way the world might be corresponds to something that \(\upalpha \) can believe.Footnote 5 So let’s make a very minor adjustment to the basic Hintikkan model. Suppose that \({\varvec{\mathcal {B}}} \subseteq \wp (\Omega )\) contains just those propositions that do model genuine objects of belief, and say:

\(\upalpha \) believes P iff \(R_{\upalpha }(\upomega ) \subseteq P\) and \(P \in {\varvec{\mathcal {B}}}\)

If every proposition is thinkable, then the inclusion of \({\varvec{\mathcal {B}}}\) adds nothing to the original model; if not, \({\varvec{\mathcal {B}}}\) serves to filter out any ‘unthinkable’ propositions.
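Before turning to the problems, here is a minimal sketch of the model just described. It is my own illustration, not part of the formal framework: it assumes a toy space of four possible worlds built from two atomic sentences, and takes \({\varvec{\mathcal {B}}}\) to be all of \(\wp (\Omega )\), so the filter is trivial.

```python
# A toy rendering of the Hintikkan model: four possible worlds, each
# identified with the set of atomic sentences ('A', 'B') true at it.
from itertools import chain, combinations

ATOMS = ("A", "B")
OMEGA = [frozenset(s) for s in chain.from_iterable(
    combinations(ATOMS, k) for k in range(len(ATOMS) + 1))]

def truth_set(atom):
    """The proposition ||atom||: all worlds where the atom is true."""
    return {w for w in OMEGA if atom in w}

# Suppose alpha's doxastically accessible worlds are those where 'A' holds.
R_alpha = truth_set("A")

def believes(P):
    """alpha believes P iff R_alpha is a subset of P."""
    return R_alpha <= P

print(believes(truth_set("A")))                   # True
print(believes(truth_set("A") | truth_set("B")))  # True: any superset of a
                                                  # believed proposition is
                                                  # automatically believed
print(believes(truth_set("B")))                   # False
```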

Now it’s well known that this model suffers from a cluster of issues that usually come under the heading of the problems of logical omniscience. Let me highlight three examples:

(i) If \(\hbox {S}_{1}\) implies \(\hbox {S}_{2}\) and \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert \in {\varvec{\mathcal {B}}}\), then \(\upalpha \) believes \(\hbox {S}_{1}\) only if she also believes \(\hbox {S}_{2}\)

(ii) If S is a tautology and \(\Vert \hbox {S}\Vert \in {\varvec{\mathcal {B}}}\), then \(\upalpha \) believes S

(iii) \(\upalpha \)’s beliefs are inconsistent only if \(R_{\upalpha }(\upomega )=\emptyset \) (so \(\upalpha \) believes everything in \({\varvec{\mathcal {B}}}\))

The first is a result of Closure under Implication, which ensures that if \(\hbox {S}_{1}\) implies \(\hbox {S}_{2}\), then \(\Vert \hbox {S}_{1}\Vert \subseteq \Vert \hbox {S}_{2}\Vert \). Corollary: if \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are logically equivalent, then \(\Vert \hbox {S}_{1}\Vert = \Vert \hbox {S}_{2}\Vert \). Maximal Specificity and Closure under Implication together imply that if S is a tautology, then \(\Vert \hbox {S}\Vert = \Omega \), and since \(\Omega \) is a superset of any proposition in \({\varvec{\mathcal {B}}}\), this gives rise to our second problem. With the addition of Non-Contradiction we also get that if S is a contradiction, then \(\Vert \hbox {S}\Vert = \emptyset \), which ultimately leads to the third problem. Indeed, Non-Contradiction alone says that \(\Vert \hbox {S}\Vert \) and \(\Vert \lnot \hbox {S}\Vert \) are disjoint, so \(\upalpha \) can believe both S and \(\lnot \hbox {S}\) only if \(R_{\upalpha }(\upomega )=\emptyset \).

There are a number of ways we might try to respond to these problems. Perhaps the error is in thinking that we can adequately model belief sets using unstructured sets of possible worlds and simple subset relations. Or, perhaps the error is in thinking that we can use a single set of worlds \(R_{\upalpha }(\upomega )\) to encode an agent’s total doxastic state at \(\upomega \), which may be better represented using multiple ‘fragments’. Or perhaps there isn’t really a problem here after all: we really are logically omniscient, and it is only the complexities of belief attribution in natural language and our imperfect access to our own beliefs which make it seem otherwise. I think that each of these captures part of the truth, but my intention for this paper is not to suggest a positive solution to the problems of logical omniscience. Instead, I wish to focus on one common response, which begins with the thought that perhaps there are not enough propositions in \(\wp (\Omega )\): we need to make our space of worlds bigger, to accommodate more fine-grained divisions amongst the objects of thought.

Suppose we extend \(\Omega \) so that it now contains not only all of the original possible worlds, but also worlds where various kinds of impossible affairs obtain.Footnote 6 To make sure that \(\Omega \) is rich enough, we will want worlds which are obviously inconsistent (where both S and \(\lnot \hbox {S}\) are true), as well as worlds which are inconsistent in more subtle ways (e.g., worlds where \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are true, but \(\hbox {S}_{1} \wedge \,\hbox {S}_{2}\) is not true). Indeed, if we think that agents are capable of extreme logical incoherence, then we will want to ensure that our worlds are not closed under any non-trivial consequence relation. It would not be very helpful to remove closure under classical consequence but retain closure under, e.g., intuitionistic consequence—otherwise, we’re just swapping one sort of logical omniscience for another.

To really free up the model, then, proponents of impossible worlds will typically posit a highly permissive comprehension principle, along the following lines:Footnote 7

  • Unrestricted Comprehension: For any maximal set of sentences \({\varvec{\mathcal {S}}} \subseteq {\varvec{\mathcal {L}}}\), there will be worlds in \(\Omega \) where every \(\hbox {S}\,\in {\varvec{\mathcal {S}}}\) is true and no \(\hbox {S}\, \in {\varvec{\mathcal {L}}} \backslash {\varvec{\mathcal {S}}}\) is true

Now \(\Omega \) contains every logically possible world, plus every maximally specific impossible world. (Some impossible worlds theorists choose to drop even Maximal Specificity to allow for incomplete worlds as well. Whether or not we include incomplete worlds in \(\Omega \) won’t make a difference to the arguments of this section.)

By building a model around this expanded space of worlds, it’s easy to block all three of the unwelcome ‘omniscience’ problems noted earlier. Indeed, we can say more than this. Let \(\{\hbox {S}_{1}, \,\hbox {S}_{2}, \ldots \}\) be any consistent or inconsistent set of sentences, and let \(R_{\upalpha }(\upomega )\) be the intersection of \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert \), .... Now \(R_{\upalpha }(\upomega )\) will be non-empty, and for any S that’s not in \(\{\hbox {S}_{1},\,\hbox {S}_{2}\), ...}, there will be at least one maximally specific world in \(R_{\upalpha }(\upomega )\) where S is not true. So, regardless of what we take \(\upalpha \)’s set of beliefs \(\{\hbox {S}_{1}, \hbox {S}_{2}\), ...} to be, we will be able to find some \(R_{\upalpha }(\upomega )\) such that \(R_{\upalpha }(\upomega ) \subseteq \Vert \hbox {S}\Vert \) if and only if \(\upalpha \) believes S. That looks like a nice property for our model to have, and all we had to do was load \(\Omega \) up with enough impossible worlds.

But note a consequence of Unrestricted Comprehension: there is no sentence S—at least, no sentence in \({\varvec{\mathcal {L}}}\)—such that S is true at all and only the worlds in \(R_{\upalpha }(\upomega )\) (assuming that \(\upalpha \) believes more than one thing). Say that a proposition P is expressible (relative to \({\varvec{\mathcal {L}}}\)) just in case there is a sentence \(\hbox {S } \in \,{\varvec{\mathcal {L}}}\) such that \(P = \Vert \hbox {S}\Vert \). The set of expressible propositions, \(\{\Vert \hbox {S}\Vert : \hbox {S } \in \,{\varvec{\mathcal {L}}}\}\), is an antichain of \(\langle \wp (\Omega ), \subseteq \rangle \): for any two distinct sentences \(\hbox {S}_{1}, \hbox {S}_{2}\), there will be worlds in \(\Omega \) where \(\hbox {S}_{1}\) is true and \(\hbox {S}_{2}\) isn’t true; so, \(\Vert \hbox {S}_{1}\Vert \) will never be a subset of \(\Vert \hbox {S}_{2}\Vert \). Suppose that \(\upalpha \) believes \(\hbox {S}_{1}\) and at least one other thing \(\hbox {S}_{2}\). Whatever \(R_{\upalpha }(\upomega )\) ends up being, it will have to be a proper subset of both \(\Vert \hbox {S}_{1}\Vert \) and \(\Vert \hbox {S}_{2}\Vert \). So, there’s no \(\hbox {S}_{3}\) such that \(\Vert \hbox {S}_{3}\Vert = R_{\upalpha }(\upomega )\). \(R_{\upalpha }(\upomega )\) is inexpressible in \({\varvec{\mathcal {L}}}\).Footnote 8
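Both the ‘nice property’ and the ensuing inexpressibility can be checked mechanically. The following sketch is my own illustration: it assumes a toy three-sentence language and one world for every set of sentences (slightly more permissive than Unrestricted Comprehension as stated, but the subset behaviour at issue is the same).

```python
# A toy check that R_alpha pins down alpha's beliefs exactly, yet is itself
# the truth set of no sentence. Worlds are arbitrary sets of sentences.
from itertools import chain, combinations

L = ("S1", "S2", "S3")
OMEGA = [frozenset(s) for s in chain.from_iterable(
    combinations(L, k) for k in range(len(L) + 1))]

def truth_set(S):
    return frozenset(w for w in OMEGA if S in w)

beliefs = {"S1", "S2"}                       # alpha's belief set
R_alpha = truth_set("S1") & truth_set("S2")  # intersection of the truth sets

# R_alpha is a subset of ||S|| exactly when alpha believes S ...
assert all((R_alpha <= truth_set(S)) == (S in beliefs) for S in L)

# ... but no sentence of the toy language expresses R_alpha itself:
assert all(truth_set(S) != R_alpha for S in L)
print("R_alpha is inexpressible")
```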

At this point, let me bring in the expressibility hypothesis: for every one of \(\upalpha \)’s beliefs, \({\varvec{\mathcal {L}}}\) includes a sentence S which expresses exactly that belief. If this is reasonable, then it’s only natural to suppose that a proposition should be found in \({\varvec{\mathcal {B}}}\) only if it is expressible in \({\varvec{\mathcal {L}}}\); that is, \({\varvec{\mathcal {B}}} \subseteq \{\Vert \hbox {S}\Vert : \hbox {S} \in {\varvec{\mathcal {L}}}\}\). After all, what could it mean to represent \(\upalpha \) as believing a proposition P, where P is not characterised by any sentence in a language which, ex hypothesi, is capable of expressing every one of \(\upalpha \)’s beliefs? And, since \(R_{\upalpha }(\upomega )\) is inexpressible, \(R_{\upalpha }(\upomega ) \notin {\varvec{\mathcal {B}}}\).

Is this a problem? I’m inclined to think that the inexpressibility of \(R_{\upalpha }(\upomega )\) is not by itself problematic. It would perhaps have been problematic if we were forced to assume that \(R_{\upalpha }(\upomega )\) must itself represent something that \(\upalpha \) believes, and hence that it should always be included within \({\varvec{\mathcal {B}}}\). However, nothing internal to the model I’ve described requires this to be the case. That \(R_{\upalpha }(\upomega )\) should itself be a proposition that \(\upalpha \) believes was never a commitment of the original model, even when we were working with just possible worlds. What’s needed for the representational system to work is that (a) if \({\varvec{\mathcal {P}}}_{{\varvec{\upalpha }}} \subseteq \wp (\Omega )\) is the set of all and only those propositions towards which some agent \(\upalpha \) has beliefs at \(\upomega \), then \({\varvec{\mathcal {P}}}_{{\varvec{\upalpha }}}\) has some lower bound with respect to \(\subseteq \) which we can designate as \(R_{\upalpha }(\upomega )\); and (b) if \({\varvec{\mathcal {P}}}_{{\varvec{\upalpha }}} \ne {\varvec{\mathcal {P}}}_{{\varvec{\upbeta }}}\), then \(R_{\upalpha }(\upomega ) \ne R_{\upbeta }(\upomega )\). That is, every distinct total belief state can be uniquely represented by (at least one) set of doxastically accessible worlds. We can satisfy this by letting \(R_{\upalpha }(\upomega )\) be the intersection of each proposition that \(\upalpha \) believes, without supposing that \(R_{\upalpha }(\upomega )\) is itself something that \(\upalpha \) believes.

None of this is to say that the impossible worlds model of belief just developed is without problems—just that it doesn’t commit us to saying that \(\upalpha \) believes something she cannot possibly believe. It is worth noting that if we can only believe expressible propositions, and no expressible proposition is a subset of any other expressible proposition, then there is a genuine question as to the point of using this kind of set-theoretic model to represent our beliefs in the first place. The machinery of set theory only comes into play at a single step, linking the (non-believed) proposition \(R_{\upalpha }(\upomega )\) to the set of expressible propositions that \(\upalpha \) believes, the latter of which has no interesting set-theoretic structure. The only thing which unites the worlds in the proposition \(R_{\upalpha }(\upomega )\) is that they are those worlds where each member of a set of sentences \(\hbox {S}_{1}, \hbox {S}_{2}, \hbox {S}_{3}\), ... is true—and characterising that proposition amounts to just listing all and only those sentences which express something \(\upalpha \) believes. What we’ve done with \(R_{\upalpha }(\upomega )\) and \(\subseteq \), we could have done more perspicuously with a simple list of sentences. We gain nothing in economy by the addition of \(R_{\upalpha }(\upomega )\), and modelling beliefs as supersets of \(R_{\upalpha }(\upomega )\) doesn’t seem to illuminate anything of interest.Footnote 9

3 The problems of probabilistic coherence

So much for full belief. But if you’re like me and you think that beliefs generally come in degrees (so that full belief is ultimately just a species of partial belief), then you will likely want your model of \(\upalpha \)’s doxastic states in general to represent all of her partial beliefs, not just those that qualify as full beliefs. Luckily enough, there are natural ways to generalise the basic model outlined in the previous section. As Lewis puts it,

[W]e must also provide for partial belief. Being a [doxastically accessible world] is not an all or nothing matter, rather it must admit of degree. The simplest picture, idealised to be sure, replaces the sharp-edged class of [doxastically accessible worlds] by a subjective probability distribution. ...We can say that a [doxastically accessible world] simpliciter is a possible [world which] gets a non-zero (though perhaps infinitesimal) share of probability, but the non-zero shares are not all equal. (1986, p. 30)

In the rest of this paper, I want to focus on partial belief. In the present section, I will note how problems analogous to the traditional (full belief) problems of logical omniscience arise under a probabilistic model, and how different assumptions about the structure of \(\Omega \) affect the resulting model.

For the sake of concreteness, I outline one way to generalise the full belief model to partial beliefs, along the lines suggested by Lewis. I want to stress that what follows is an illustrative example only: many of the specific details are not crucial to my main argument (e.g., the use of a probability mass function \(\mathcal {D}\) to induce the credence function \(\mathcal {C}r\)). Readers already familiar with the idea of extending probability theory to an impossible worlds framework may choose to skim this section.

Let \(\Omega \) be any non-empty space of possible and/or impossible worlds.Footnote 10 This time, instead of assigning a single proposition \(R_{\upalpha }(\upomega )\) as \(\upalpha \)’s doxastically accessible worlds, we will represent \(\upalpha \)’s total doxastic state using a probability distribution \(\mathcal {D}{:}\,\Omega \rightarrow [0, 1]\). One could interpret \(\mathcal {D}\) as representing \(\upalpha \)’s degree of belief that the actual world is \(\upomega \), for each \(\upomega \) in \(\Omega \), at least to the extent that (singleton sets of) worlds are to be included amongst the purported objects of partial belief. But this interpretation is unnecessary: \(\mathcal {D}\), like \(R_{\upalpha }(\upomega )\) earlier, should in the first instance be understood as a formal tool for modelling doxastic states in the manner to be outlined presently.

In the simplest case, \(\mathcal {D}\) assigns 0 to all but countably many \(\upomega \) in \(\Omega \), and a real value between 0 and 1 to the remaining worlds such that those values sum to unity. We can then use \(\mathcal {D}\) to induce a function \(\mathcal {C}r\) on any subset \({\varvec{\mathcal {B}}}\) of \(\wp (\Omega )\) by stipulating that for each \(P \in {\varvec{\mathcal {B}}}\),

$$\begin{aligned} \mathcal {C}r(P)=\mathop \sum \limits _{\upomega \in P} {\mathcal {D}}(\upomega ) \end{aligned}$$
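To illustrate how \(\mathcal {D}\) induces \(\mathcal {C}r\), here is a minimal sketch; the four worlds and the particular mass values are my own toy assumptions, and exact rationals are used to avoid floating-point noise.

```python
# A minimal sketch of inducing Cr from a mass function D over a toy space
# of four worlds. Fractions keep the arithmetic exact.
from fractions import Fraction as F

OMEGA = {"w1", "w2", "w3", "w4"}
D = {"w1": F(1, 2), "w2": F(3, 10), "w3": F(1, 5), "w4": F(0)}  # sums to 1

def Cr(P):
    """Cr(P) = sum of D(w) over the worlds w in P."""
    return sum(D[w] for w in P)

P1, P2 = {"w1"}, {"w2", "w3"}
print(Cr(set()) == 0)                  # Cr assigns 0 to the empty set
print(Cr(OMEGA) == 1)                  # Normalisation
print(Cr(P1) <= Cr(P1 | P2))           # Monotonicity
print(Cr(P1 | P2) == Cr(P1) + Cr(P2))  # Additivity for disjoint P1, P2
```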

Independent of any assumptions about what kinds of worlds are in \(\Omega \) and what propositions get into \({\varvec{\mathcal {B}}}\), we know that \(\mathcal {C}r\) will satisfy:

  • Nonnegativity: If \(\emptyset \) is in \({\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\emptyset ) = 0\)

  • Normalisation: If \(\Omega \) is in \({\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Omega ) = 1\)

  • Monotonicity: For all pairs \(P_{1}, P_{2}\) in \({\varvec{\mathcal {B}}}\), if \(P_{1} \subseteq P_{2}\), then \(\mathcal {C}r(P_{1}) \le \mathcal {C}r(P_{2})\)

  • \({\varvec{\Sigma }}\)-Additivity: If \({\varvec{\mathcal {P}}}\) is any countable set of disjoint propositions in \({\varvec{\mathcal {B}}}\) whose union (\(\bigcup \,\,{\varvec{\mathcal {P}}}\)) is also in \({\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\bigcup \,\,{\varvec{\mathcal {P}}}) = \mathop \sum \nolimits _{ P\in {\varvec{\mathcal {P}}}}\,\, \mathcal {C}r(P)\)

If \({\varvec{\mathcal {B}}}\) contains \(\emptyset \), then \(\mathcal {C}r\) is a measure on \({\varvec{\mathcal {B}}}\). But it’s not yet a probability function, as usually understood. For that, we need to make the additional assumption that \({\varvec{\mathcal {B}}}\) is some Boolean sub-algebra of \(\wp (\Omega )\). That is, given what we’ve just said, \(\mathcal {C}r\) is a probability function just in case:

  • Booleanism: For all \(P, P_{1}, P_{2} \in \wp (\Omega )\),

(i) If \(P \in {\varvec{\mathcal {B}}}\), then \(P^{\mathrm{C}} \in {\varvec{\mathcal {B}}}\)

(ii) If \(P_{1}, P_{2} \in {\varvec{\mathcal {B}}}\), then \(P_{1} \cap P_{2} \in {\varvec{\mathcal {B}}}\)

From (i) and (ii), it follows that if \(P_{1}, P_{2} \in {\varvec{\mathcal {B}}}\), then \(P_{1} \cup P_{2} \in {\varvec{\mathcal {B}}}\). Booleanism is standard for the large majority of models of partial belief and a background requirement for many of the results in probability theory. I return to discuss it again in Sects. 4 and 5. As we’ll see, it leads to problems if we assume that \(\Omega \) has a certain minimal structure, and that \({\varvec{\mathcal {B}}}\subseteq \{\Vert \hbox {S}\Vert : \hbox { S } \in \, {\varvec{\mathcal {L}}}\}\).
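To see what Booleanism adds, the following sketch (again an illustration of mine, over four hypothetical worlds) computes the smallest family containing some given propositions and closed under complement and binary intersection:

```python
# A sketch of Boolean closure: the smallest family containing the given
# propositions and closed under complement and (binary) intersection.
OMEGA = frozenset({"w1", "w2", "w3", "w4"})

def boolean_closure(props):
    algebra = set(props)
    while True:
        new = {OMEGA - P for P in algebra}                # complements
        new |= {P & Q for P in algebra for Q in algebra}  # intersections
        if new <= algebra:
            return algebra                                # fixpoint reached
        algebra |= new

generators = [frozenset({"w1", "w2"}), frozenset({"w2", "w3"})]
B = boolean_closure(generators)
print(len(B))  # 16: in this toy case the closure is the full powerset
```

The design point is simply that closing under the two Boolean operations typically drags in many propositions beyond those we started with; Sect. 4 turns on which propositions those are.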

But for now, suppose only that \({\varvec{\mathcal {B}}}\) includes all and only those propositions towards which \(\upalpha \) has partial beliefs, whatever they may be. In that case, a very natural way to read \(\mathcal {C}r\) is as a representation of \(\upalpha \)’s total degree of belief state:

P is believed by \(\upalpha \) to degree x if and only if \(\mathcal {C}r(P)=x\)

This generalises the earlier model of full belief quite nicely. On the simplest generalisation, say that full belief equates to degree of belief 1. Then, we will be able to characterise \(R_{\upalpha }(\upomega )\) as just that set of worlds which are assigned some positive value by \(\mathcal {D}\); thus, \(\mathcal {C}r(\Vert \hbox {S}\Vert ) = 1\) for every \(\Vert S\Vert \in {\varvec{\mathcal {B}}}\) such that \(R_{\upalpha }(\upomega ) \subseteq \Vert \hbox {S}\Vert \). But now we can also represent each of the many non-extremal grades of belief that \(\upalpha \) can have towards any proposition in \({\varvec{\mathcal {B}}}\), removing the sharp edges between belief and non-belief.

However, if \(\Omega \) is a space of possible worlds, then it’s easy to see that the new model will have its very own problems with logical omniscience. Corresponding to Nonnegativity, Normalisation, Monotonicity and \(\Sigma \)-Additivity respectively, we can quickly derive the following constraints of probabilistic coherence:

(i) If S is a contradiction and \(\Vert \hbox {S}\Vert \in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Vert \hbox {S}\Vert ) = 0\)

(ii) If S is a tautology and \(\Vert \hbox {S}\Vert \in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Vert \hbox {S}\Vert ) = 1\)

(iii) If \(\hbox {S}_{1}\) implies \(\hbox {S}_{2}\) and \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert \in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Vert \hbox {S}_{1}\Vert ) \le \mathcal {C}r(\Vert \hbox {S}_{2}\Vert )\)

(iv) If \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are inconsistent and \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert , \Vert \lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\Vert \in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Vert \lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\Vert ) = \mathcal {C}r(\Vert \hbox {S}_{1}\Vert ) + \mathcal {C}r(\Vert \hbox {S}_{2}\Vert )\)

Additionally, if full belief is degree of belief 1, then the new model implies that it’s not even possible for \(\upalpha \) to have inconsistent beliefs: \(\mathcal {D}\) must assign a positive value to at least one possible world \(\upomega \), and the set of propositions P such that \(\mathcal {C}r(P) = 1\) will be consistent. On an alternative account, full belief might be characterised in terms of exceeding some threshold degree t, for \(t < 1\). In that case, there may be no \(R_{\upalpha }(\upomega )\) such that \(\upalpha \) believes P if and only if \(R_{\upalpha }(\upomega ) \subseteq P\), and it may be possible for \(\upalpha \)’s beliefs to be inconsistent. However, if \(t > 0.5\), then it will be impossible for \(\upalpha \) to believe both S and \(\lnot \hbox {S}\) simultaneously; and as long as \(t > 0\), it will be impossible for \(\upalpha \) to believe any contradictions.
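The point is easy to confirm in miniature: on a purely possible-worlds space, the induced \(\mathcal {C}r\) must split all of its mass between \(\Vert \hbox {S}\Vert \) and \(\Vert \lnot \hbox {S}\Vert \), so no threshold \(t > 0.5\) permits believing both. A toy sketch, with illustrative masses of my own choosing:

```python
# A toy check of the coherence constraint on a possible-worlds space:
# Cr(||S||) + Cr(||not-S||) = 1 for any mass function D over valuations.
from fractions import Fraction as F
from itertools import chain, combinations

ATOMS = ("A", "B")
OMEGA = [frozenset(s) for s in chain.from_iterable(
    combinations(ATOMS, k) for k in range(len(ATOMS) + 1))]
D = dict(zip(OMEGA, (F(1, 8), F(1, 4), F(1, 8), F(1, 2))))  # toy masses

def Cr(P):
    return sum(D[w] for w in P)

S = "A"
truth = {w for w in OMEGA if S in w}        # ||S||
falsity = {w for w in OMEGA if S not in w}  # ||not-S|| = ||S||^C here
print(Cr(truth) + Cr(falsity) == 1)         # True, whatever D we chose
```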

If logical omniscience is bad, then strict probabilistic coherence seems much worse. And the problems aren’t limited to just probabilistic representations. For instance, Dubois and Prade’s (1988) possibility theory allows us to systematically construct a degree of belief function on the basis of what they call a possibility distribution; i.e., a function \(\mathcal {D}^\prime \) from \(\Omega \) into [0, 1] such that \(\mathcal {D}^\prime (\upomega ) = 1\) for at least one world \(\upomega \). Taking \(\mathcal {D}^\prime \) as the basis for our model instead of \(\mathcal {D}\), we can define \(\mathcal {C}r\) on any subset \({\varvec{\mathcal {B}}}\) of \(\wp (\Omega )\) as follows:

$$\begin{aligned} \mathcal {C}r(\emptyset ) = 0, \hbox { and if } P \ne \emptyset , \hbox { then } \mathcal {C}r(P) = sup\{\mathcal {D}^\prime (\upomega ): \upomega \in P\} \end{aligned}$$

Defining \(\mathcal {C}r\) in this way implies that it is sub-additive:

If \(P_{1}, P_{2}, P_{1} \cup P_{2} \in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(P_{1} \cup P_{2}) = {max}\{\mathcal {C}r(P_{1}), \mathcal {C}r(P_{2})\} \le \mathcal {C}r(P_{1}) + \mathcal {C}r(P_{2})\)

So, to a limited extent, using possibility distributions would let us avoid strict probabilistic coherence—though, sub-additivity is still a very strong constraint! More importantly, \(\mathcal {C}r\) so-defined will still satisfy Nonnegativity, Normalisation, and Monotonicity, and so \(\mathcal {C}r\) will still be constrained by (i)–(iii). In that sense, the possibilistic model still has to deal with a version of the problems of probabilistic coherence.
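A quick sketch of the possibilistic construction, with toy values of my own choosing; it shows the sub-additive union rule alongside the constraints that survive:

```python
# A sketch of the possibilistic alternative over a toy space of four worlds:
# Cr(P) is the supremum (here a max) of D'(w) for w in P.
Dprime = {"w1": 1.0, "w2": 0.6, "w3": 0.4, "w4": 0.0}  # D'(w) = 1 somewhere

def Cr(P):
    return max((Dprime[w] for w in P), default=0.0)  # Cr(empty set) = 0

P1, P2 = {"w2"}, {"w3"}
print(Cr(set(Dprime)) == 1.0)              # Normalisation still holds
print(Cr(P1 | P2) == max(Cr(P1), Cr(P2)))  # the sub-additive union rule
print(Cr(P1 | P2) <= Cr(P1) + Cr(P2))      # hence sub-additivity
```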

The same applies more generally: the vast majority of formal systems for the representation of partial beliefs will have \(\mathcal {C}r\) satisfy at least one of Nonnegativity, Normalisation, and Monotonicity (or something very similar): examples include Choquet capacities (Choquet 1954; applied in, e.g., Tversky and Kahneman 1992), Dempster–Shafer belief and plausibility functions (Dempster 1968; Shafer 1976), ranking functions (Spohn 2012), and the set-valued functions of Levi (1974) and Kyburg (1992). Where \(\Omega \) consists of only possible worlds, all of these models will have to deal with very strong coherence constraints.

But never fear—impossible worlds to the rescue! If we were to instead define the probability distribution \(\mathcal {D}\) on a space of worlds \(\Omega \) that satisfies Unrestricted Comprehension, then \(\mathcal {C}r\) need not satisfy any of the constraints (i)–(iv). Indeed, \(\mathcal {C}r\) can be almost as wild and wacky as we want it to be. For instance, suppose that \(\mathcal {D}\) assigns a positive value only to worlds where S and \(\hbox {S} \wedge \lnot \hbox {S}\) are both true, and never to worlds where \(\lnot \hbox {S}\) or \(\lnot (\hbox {S} \wedge \lnot \hbox {S})\) are true. Now, assuming that all of the relevant propositions are in \(\mathcal {C}r\)’s domain, \(\mathcal {C}r(\Vert \hbox {S}\Vert ) = \mathcal {C}r(\Vert \hbox {S} \wedge \lnot \hbox {S}\Vert ) = 1\), and \(\mathcal {C}r(\Vert \lnot \hbox {S}\Vert ) = \mathcal {C}r(\Vert \lnot (\hbox {S} \wedge \lnot \hbox {S})\Vert ) = 0\).

Proviso: if Maximal Specificity holds and \(\Vert S\Vert \), \(\Vert \lnot \hbox {S}\Vert \,\in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Vert S\Vert ) + \mathcal {C}r(\Vert \lnot \hbox {S}\Vert ) \ge 1\).Footnote 11 So Unrestricted Comprehension does not give us total freedom to let \(\mathcal {C}r\) assign values to expressible propositions however we like. But we can fix this if we’re willing to expand \(\Omega \) even further, to allow for non-maximal worlds:

  • Really Unrestricted Comprehension: For any set of sentences \({\varvec{\mathcal {S}}} \subseteq {\varvec{\mathcal {L}}}\), there will be worlds in \(\Omega \) where every \(\hbox {S} \in {\varvec{\mathcal {S}}}\) is true and no \(\hbox {S} \in {\varvec{\mathcal {L}}} \backslash {\varvec{\mathcal {S}}}\) is true

Now if you want \(\mathcal {C}r\) to assign 0 to both \(\Vert \hbox {S}\Vert \) and \(\Vert \lnot \hbox {S}\Vert \), you just need to make sure that \(\mathcal {D}\) assigns positive values only to worlds where neither S nor \(\lnot \hbox {S}\) is true. More generally, for any way you might want \(\mathcal {C}r\) to distribute values across a countable set of expressible propositions, we’ll be able to find a \(\mathcal {D}\) which generates exactly that distribution. A quick example to demonstrate the point. Let \(\hbox {S}_{1}, \hbox {S}_{2}\) and \(\hbox {S}_{3}\) be any three distinct sentences whatsoever. Suppose we want a \(\mathcal {C}r\) such that:

$$\begin{aligned} \mathcal {C}r(\Vert \hbox {S}_{1}\Vert ) = x, \mathcal {C}r(\Vert \hbox {S}_{2}\Vert ) = y, \mathcal {C}r(\Vert \hbox {S}_{3}\Vert ) = z, \hbox { and } \mathcal {C}r(P) = 0 \hbox { otherwise}, \end{aligned}$$

where \(x> y > z \ge 0\). To accomplish this, we let \(\mathcal {D}\) be as follows. Where \(\upomega _{1}\) is the world where only \(\hbox {S}_{1}, \hbox {S}_{2}\) and \(\hbox {S}_{3}\) are true, \(\mathcal {D}(\upomega _{1})=z\). Where \(\upomega _{2}\) is the world where only \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are true, \(\mathcal {D}(\upomega _{2})=y - z\). Where \(\upomega _{3}\) is the world where only \(\hbox {S}_{1}\) is true, \(\mathcal {D}(\upomega _{3})=x - y\). The ‘empty world’ (where no sentences whatsoever are true) is then assigned \(1 - x\), and every other world is assigned 0. It follows that \(\mathcal {C}r(\Vert \hbox {S}_{1}\Vert ) = x, \mathcal {C}r(\Vert \hbox {S}_{2}\Vert ) = y, \mathcal {C}r(\Vert \hbox {S}_{3}\Vert ) = z\), and \(\mathcal {C}r(P) = 0\) otherwise. Given my assumptions about \(\mathcal {D}\), the same basic trick can be adopted for any \(\mathcal {C}r\) that assigns a positive value to countably many expressible propositions.
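The arithmetic can be verified directly. Here is a sketch instantiating the construction with the illustrative values \(x = 1/2\), \(y = 3/10\), \(z = 1/5\) (my choice; any \(x> y > z \ge 0\) would do):

```python
# Verifying the construction: worlds are identified with the set of
# sentences true at them, and D is the mass function described above.
from fractions import Fraction as F

x, y, z = F(1, 2), F(3, 10), F(1, 5)
D = {
    frozenset({"S1", "S2", "S3"}): z,   # omega_1
    frozenset({"S1", "S2"}): y - z,     # omega_2
    frozenset({"S1"}): x - y,           # omega_3
    frozenset(): 1 - x,                 # the 'empty world'
}
assert sum(D.values()) == 1             # D is a probability distribution

def Cr(S):
    """Cr(||S||): total mass of the worlds where S is true."""
    return sum(mass for world, mass in D.items() if S in world)

print(Cr("S1"), Cr("S2"), Cr("S3"))     # 1/2 3/10 1/5
```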

The idea of using a probability function over a space of possible and impossible worlds in order to model probabilistically incoherent agents is common in conversation, but also shows up at several points in the literature. Cozic (2006) has recently advocated the strategy, and Halpern and Pucella (2011, §4) make similar points. Lipman (1997, 1999) attempts to deal with logical non-omniscience by deriving a probabilistic expected utility representation from an agent’s preferences, where the probability function in question is defined over a state-space involving both possibilities and impossibilities. Easwaran (2014, esp. pp. 1–2, 29) also suggests using impossible worlds in our probabilistic models of agents’ doxastic states, albeit in a slightly different context.

At the risk of belabouring a point that will already be clear to many, let me summarise the discussion of this section. We can see the ‘problems of probabilistic coherence’ as a consequence of a sequence of modelling choices. First, we need to choose what kinds of worlds get into \(\Omega \). Second, we need to define the function \(\mathcal {C}r\), and characterise the structure of its domain, \({\varvec{\mathcal {B}}}\). And finally, we need to say something about how we are going to interpret \(\mathcal {C}r\). In this respect, things are closely analogous to the problems of logical omniscience, and the same basic strategies for response are applicable. The response we’ve discussed centres upon the first modelling choice: by introducing enough impossible worlds into \(\Omega \), we can avoid all of the probabilistic coherence constraints (i) through (iv) above, and indeed, we can make \(\mathcal {C}r\) appear as irrational as we like.

4 The problem of inexpressibility

In this section, I will argue that if \(\Omega \) satisfies a very weak (and very plausible) richness assumption, then either Booleanism is false, or our model won’t plausibly represent highly logically fallible agents—which, of course, was the central motivation for introducing impossible worlds in the first place. The most straightforward way to make the argument begins with the premise that whatever \({\varvec{\mathcal {B}}}\) is, it should contain only propositions which are expressible in \({\varvec{\mathcal {L}}}\).

For any \(\hbox {S}_{1}\), take the set of all worlds in \(\Omega \) where \(\hbox {S}_{1}\) is true, and consider its complement \(\Vert \hbox {S}_{1}\Vert ^{\mathrm{C}}\). If Unrestricted Comprehension holds, then there is no \(\hbox {S}_{2}\) such that \(\Vert \hbox {S}_{2}\Vert = \Vert \hbox {S}_{1}\Vert ^{\mathrm{C}}\). As we’ve already noted, for any pair of sentences \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\), there will be worlds where \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are both true. And if Really Unrestricted Comprehension also holds, then there will also be worlds where neither \(\hbox {S}_{1}\) nor \(\hbox {S}_{2}\) is true. In either case, \(\Vert \hbox {S}_{1}\Vert \) and \(\Vert \hbox {S}_{2}\Vert \) cannot be complements of one another. Hence, if \(\Vert \hbox {S}_{1}\Vert \) is expressible, then \(\Vert \hbox {S}_{1}\Vert ^{\mathrm{C}}\) is inexpressible. And since we’ve assumed that \({\varvec{\mathcal {B}}}\) is closed under complementation, it follows that there must be at least as many inexpressible propositions in \(\mathcal {C}r\)’s domain as there are expressible propositions. And that’s not a nice result: \({\varvec{\mathcal {L}}}\) is supposed to include a sentence capable of expressing every object of thought towards which we might have partial beliefs, and yet the model we’ve now developed is assigning nonsensical values to propositions expressed by no sentences of \({\varvec{\mathcal {L}}}\).
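The complementation failure is easy to confirm computationally. A toy sketch of mine, assuming three sentences and one world for every set of sentences (as per Really Unrestricted Comprehension):

```python
# A toy check that no expressible proposition has an expressible complement,
# and that the expressible propositions form an antichain.
from itertools import chain, combinations

L = ("S1", "S2", "S3")
OMEGA = frozenset(frozenset(s) for s in chain.from_iterable(
    combinations(L, k) for k in range(len(L) + 1)))

def truth_set(S):
    return frozenset(w for w in OMEGA if S in w)

expressible = {truth_set(S) for S in L}
for P in expressible:
    assert (OMEGA - P) not in expressible   # ||S||^C is inexpressible
for S1 in L:
    for S2 in L:
        if S1 != S2:                        # antichain: no containments
            assert not truth_set(S1) <= truth_set(S2)
print("complements of expressible propositions are all inexpressible")
```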

We could get around the foregoing argument if (and only if) we adopt the following restriction on \(\Omega \):

  • Restriction R1: For every \(\hbox {S}_{1}\) such that \(\Vert \hbox {S}_{1}\Vert \in {\varvec{\mathcal {B}}}\), there is an \(\hbox {S}_{2}\) such that for any \(\upomega \in \Omega \), exactly one of \(\hbox {S}_{1}\) or \(\hbox {S}_{2}\) is true at \(\upomega \)

I’ll have more to say about R1 in a moment, but first, note that merely imposing R1 on \(\Omega \) won’t solve all our problems. We’ve also supposed that \({\varvec{\mathcal {B}}}\) is closed under (at least finite) intersections, and with only R1 in place the set of expressible propositions (in \({\varvec{\mathcal {B}}}\)) will still be an antichain of \(\langle \wp (\Omega ), \subseteq \rangle \). (The only difference from before is that \(\{\Vert \hbox {S}\Vert : \hbox {S} \in {\varvec{\mathcal {L}}}\}\) will now be closed under complementation.) So take any two sentences \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) such that \(\Vert \hbox {S}_{1}\Vert \ne \Vert \hbox {S}_{2}\Vert \): there is no \(\hbox {S}_{3}\) such that \(\Vert \hbox {S}_{3}\Vert = \Vert \hbox {S}_{1}\Vert \cap \Vert \hbox {S}_{2}\Vert \). After all, nothing about R1 implies that there must be any sentences in \({\varvec{\mathcal {L}}}\) which are true at a world if and only if two other sentences are true at that world. Likewise, there is no \(\hbox {S}_{3}\) such that \(\Vert \hbox {S}_{3}\Vert = \Vert \hbox {S}_{1}\Vert \cup \Vert \hbox {S}_{2}\Vert \). Consequence: even with R1 in place, there will still be at least as many inexpressible propositions in \(\mathcal {C}r\)’s domain as there are expressible propositions.

The following is necessary and sufficient for ensuring that the intersection of any two expressible propositions (in \({\varvec{\mathcal {B}}}\)) is itself expressible:

  • Restriction R2: For every pair \(\hbox {S}_{1}, \hbox {S}_{2}\) such that \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert \in {\varvec{\mathcal {B}}}\), there is an \(\hbox {S}_{3}\) such that for any \(\upomega \in \Omega , \hbox {S}_{1}\) and \(\hbox {S}_{2}\) are both true at \(\upomega \) if and only if \(\hbox {S}_{3}\) is true at \(\upomega \)

Given R1, R2 also implies that the union of any two expressible propositions (in \({\varvec{\mathcal {B}}}\)) is expressible. That is, for any pair of expressible propositions \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert \) in \({\varvec{\mathcal {B}}}\), there is some sentence \(\hbox {S}_{3}\) such that \(\hbox {S}_{3}\) is true at \(\upomega \) if and only if at least one of \(\hbox {S}_{1}\) or \(\hbox {S}_{2}\) is true at \(\upomega \).

Exactly how restrictive R1 and R2 end up being depends heavily on which expressible propositions end up included in \({\varvec{\mathcal {B}}}\). We can safely assume that whatever \({\varvec{\mathcal {B}}}\) is, it will be richly populated with plenty of expressible propositions, so R1 and R2 are never trivially satisfied. On the other hand, if there are sentences whose characteristic propositions are not in \({\varvec{\mathcal {B}}}\), then R1 and R2 are consistent with a certain degree of freedom in relation to those sentences. But this is not especially interesting: since \({\varvec{\mathcal {B}}}\) contains all of the propositions in \(\mathcal {C}r\)’s domain, whatever is true of the expressible propositions not in \({\varvec{\mathcal {B}}}\) will be irrelevant to the model of \(\upalpha \)’s degrees of belief that we are left with. Hence, we can simplify the discussion and pretend henceforth that \({\varvec{\mathcal {B}}} = \{\Vert \hbox {S}\Vert : \hbox {S} \in {\varvec{\mathcal {L}}}\}\).

The key point in what follows will be that how R1 and R2 can be implemented is constrained by what kinds of worlds we want to keep in \(\Omega \). For example, if we were to require that \(\Omega \) contains at least all of the logically possible worlds, then the \(\hbox {S}_{2}\) referred to in R1 must be logically equivalent to \(\lnot \hbox {S}_{1}\) (if not identical to \(\lnot \hbox {S}_{1})\): every logically possible world where \(\hbox {S}_{1}\) doesn’t hold is one where \(\lnot \hbox {S}_{1}\) holds, and if \(\hbox {S}_{2}\) and \(\lnot \hbox {S}_{1}\) are true at the very same logically possible worlds then they must be logically equivalent.

I will not assume that \(\Omega \) contains every logically possible world, though I think that something in the vicinity must be true if we want to use \(\mathcal {C}r\) as a model of ideal agents as well as non-ideal agents. Instead, I will assume something much weaker. Say that \(\hbox {S}_{1}\) is blatantly inconsistent with \(\hbox {S}_{2}\) just in case either \(\hbox {S}_{1} = \lnot \hbox {S}_{2}\) or \(\hbox {S}_{2} = \lnot \hbox {S}_{1}\). Then my assumption can be expressed as follows:

Minimal Richness:

For any consistent triple \(\hbox {S}_{1}, \hbox {S}_{2}\), \(\hbox {S}_{3}\), there is at least one world \(\upomega \in \Omega \) such that:

(i) \(\hbox {S}_{1}, \hbox {S}_{2}\), and \(\hbox {S}_{3}\) are all true at \(\upomega \), and

(ii) If \(\hbox {S}_{4}\) is blatantly inconsistent with any of \(\hbox {S}_{1}, \hbox {S}_{2}\), or \(\hbox {S}_{3}\), then \(\hbox {S}_{4}\) is not true at \(\upomega \)

Minimal Richness should be uncontroversial, especially since it can be motivated by precisely the same sorts of considerations which motivate including a rich space of impossible worlds into our models in the first place.Footnote 12 Consider: if \(\hbox {S}_{1}\), \(\hbox {S}_{2}\), and \(\hbox {S}_{3}\) are jointly consistent, then it is surely possible for \(\upalpha \) to have a confidence of, say, greater than 2/3 in their simultaneous truth, which will only be possible if there is a world in \(\Omega \) where each of the three sentences is true.Footnote 13 Similarly, it’s surely possible to have the same high degree of confidence regarding their simultaneous truth while having zero confidence towards any \(\hbox {S}_{4}\) that’s blatantly inconsistent with \(\hbox {S}_{1}, \hbox {S}_{2}\), or \(\hbox {S}_{3}\). And this would only be possible if there’s a world where \(\hbox {S}_{1}, \hbox {S}_{2}\), and \(\hbox {S}_{3}\) are all true and \(\hbox {S}_{4}\) isn’t—for otherwise, \(\Vert \hbox {S}_{1}\Vert \cap \Vert \hbox {S}_{2}\Vert \cap \Vert \hbox {S}_{3}\Vert \subseteq \Vert \hbox {S}_{4}\Vert \), and since \(\mathcal {C}r(\Vert \hbox {S}_{1}\Vert \cap \Vert \hbox {S}_{2}\Vert \cap \Vert \hbox {S}_{3}\Vert )\,> 0\), we know that \(\mathcal {C}r(\Vert \hbox {S}_{4}\Vert )\, > 0\).

So let’s consider R1, which states that every \(\hbox {S}_{1}\) can be paired with another sentence \(\hbox {S}_{2}\) which is true at a world \(\upomega \) if and only if \(\hbox {S}_{1}\) is not true at \(\upomega \). If Minimal Richness holds, then whatever \(\hbox {S}_{2}\) ends up being, it must be logically equivalent to \(\lnot \hbox {S}_{1}\). For suppose that \(\hbox {S}_{2}\) is not logically equivalent to \(\lnot \hbox {S}_{1}\). Then either \(\hbox {S}_{2}\) does not imply \(\lnot \hbox {S}_{1}\), or \(\lnot \hbox {S}_{1}\) does not imply \(\hbox {S}_{2}\) (or both). If \(\hbox {S}_{2}\) does not imply \(\lnot \hbox {S}_{1}\), then {\(\hbox {S}_{2}, \hbox {S}_{1}\)} is consistent, and there will be at least one world where \(\hbox {S}_{2}\) and \(\hbox {S}_{1}\) are both true, which contradicts R1. On the other hand, if \(\lnot \hbox {S}_{1}\) does not imply \(\hbox {S}_{2}\), then \(\{\lnot \hbox {S}_{1}, \lnot \hbox {S}_{2}\}\) is consistent and there will be worlds where \(\lnot \hbox {S}_{1}\) and \(\lnot \hbox {S}_{2}\) are both true. Since \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are blatantly inconsistent with \(\lnot \hbox {S}_{1}\) and \(\lnot \hbox {S}_{2}\) respectively, this would have to be a world where neither \(\hbox {S}_{1}\) nor \(\hbox {S}_{2}\) is true, which also contradicts R1. Hence, any sentence \(\hbox {S}_{2}\) that satisfies R1 must be logically equivalent to \(\lnot \hbox {S}_{1}\), if Minimal Richness is true.

This leaves us with a limited range of options for implementing R1. The most straightforward way would be to let the required sentence \(\hbox {S}_{2}\) just be \(\lnot \hbox {S}_{1}\). In effect, this is just to assume that the worlds in \(\Omega \) satisfy Non-Contradiction and Maximal Specificity. And it’s easy enough to think of some plausible motivations for assuming Non-Contradiction: one could argue that no model of a minimally rational agent’s doxastic state should represent her as having any degree of belief that both \(\hbox {S}_{1}\) and \(\lnot \hbox {S}_{1}\) could be true simultaneously (cf. Lewis 2004; Bjerring 2013; Jago 2014b). To the extent that we make errors of logical reasoning, they tend to be more subtle—e.g., a failure to deduce a downstream consequence of what we believe, rather than belief in blatant inconsistencies.

Motivating Maximal Specificity is a little more difficult, as it amounts to removing all incomplete worlds from \(\Omega \). Some are independently happy to do this (e.g., Bjerring 2014; Bjerring and Schwarz 2017, p. 28; cf. Stalnaker 1996). For others, incomplete worlds are a crucial aspect of the model (Jago 2014a, b). Furthermore, it’ll be a consequence of assuming Non-Contradiction and Maximal Specificity together that we lose the capacity to have \(\mathcal {C}r\) assign wholly independent values to the pairs \(\Vert \hbox {S}\Vert \) and \(\Vert \lnot \hbox {S}\Vert \). Indeed, the worlds we are left with are closed under the rules of double negation introduction and elimination, with \(\mathcal {C}r\) satisfying \(\mathcal {C}r(\Vert \hbox {S}\Vert ) = \mathcal {C}r(\Vert \lnot \lnot \hbox {S}\Vert \)) for all \(\Vert \hbox {S}\Vert \) in \({\varvec{\mathcal {B}}}\). This is already quite a strong restriction.

Nevertheless, there are good reasons to think that if the implementation of R1 is to be even remotely well-motivated, then \(\hbox {S}_{2}\) shouldn’t be anything other than \(\lnot \hbox {S}_{1}\). Suppose that \(\hbox {S}_{2}\) is any sentence that’s logically equivalent to \(\lnot \hbox {S}_{1}\) other than \(\lnot \hbox {S}_{1}\) itself—say, \(\lnot \lnot \lnot \hbox {S}_{1}\). We might then keep some non-maximally specific and/or contradictory worlds in \(\Omega \), but now our worlds will be closed under the rules of sextuple negation introduction (SNI) and elimination (SNE):

  • (SNI) From S, infer \(\lnot \lnot \lnot \lnot \lnot \lnot \hbox {S}\)

  • (SNE) From \(\lnot \lnot \lnot \lnot \lnot \lnot \hbox {S}\), infer S

Any reasons we might have had to avoid closing worlds under the (relatively simple) rules of double negation would apply with all the more force here: to the extent that ordinary agents might generally accept something like (SNI) and (SNE), it’s because they accept that \(\hbox {S}_{1}\) is true if and only if \(\lnot \hbox {S}_{1}\) is not true. Given Minimal Richness, the very best case we can make for implementing R1 involves letting \(\hbox {S}_{2}\) be \(\lnot \hbox {S}_{1}\). Anything else would look implausible and arbitrary.

But it is in combination with R2 that R1 is most worrisome. R2 states that every pair of sentences \(\hbox {S}_{1}, \hbox {S}_{2}\) can be paired with some \(\hbox {S}_{3}\) such that \(\hbox {S}_{3}\) is true at a world if and only if both \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are true at that world. Given Minimal Richness, we know that \(\hbox {S}_{3}\) must be logically equivalent to \(\hbox {S}_{1} \wedge \hbox {S}_{2}\). The argument here is similar to the one earlier with R1. Suppose that \(\hbox {S}_{3}\) is not logically equivalent to \(\hbox {S}_{1} \wedge \hbox {S}_{2}\). Then either \(\hbox {S}_{3}\) doesn’t imply \(\hbox {S}_{1} \wedge \hbox {S}_{2}\), or \(\hbox {S}_{1} \wedge \hbox {S}_{2}\) doesn’t imply \(\hbox {S}_{3}\). If \(\hbox {S}_{3}\) doesn’t imply \(\hbox {S}_{1} \wedge \hbox {S}_{2}\), then at least one of the following is consistent:

$$\begin{aligned} \{\hbox {S}_{3}, \lnot \hbox {S}_{1}, \lnot \hbox {S}_{2}\},\, \{\hbox {S}_{3}, \lnot \hbox {S}_{1}, \hbox {S}_{2}\},\, \{\hbox {S}_{3}, \hbox {S}_{1}, \lnot \hbox {S}_{2}\} \end{aligned}$$

In each case, there will be at least one world in \(\Omega \) where \(\hbox {S}_{3}\) is true and at least one of \(\hbox {S}_{1}\) or \(\hbox {S}_{2}\) is not true, which would contradict R2. If \(\hbox {S}_{1} \wedge \hbox {S}_{2}\) does not imply \(\hbox {S}_{3}\), then \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) do not jointly imply \(\hbox {S}_{3}\), so \(\{\hbox {S}_{1}, \hbox {S}_{2}, \lnot \hbox {S}_{3}\}\) is consistent and there is at least one world in \(\Omega \) where \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are both true and \(\hbox {S}_{3}\) is not. This would also contradict R2. So \(\hbox {S}_{3}\) must, at the very least, be logically equivalent to \(\hbox {S}_{1} \wedge \hbox {S}_{2}\).

An argument analogous to that given for R1 then immediately suggests how we ought to implement the restriction, if at all: require that all worlds in \(\Omega \) satisfy \(\wedge \)-Consistency:

  • \({\varvec{\wedge }}\)-Consistency: For all \(\hbox {S}_{1}, \hbox {S}_{2} \in {\varvec{\mathcal {L}}}\) and all \(\upomega \in \Omega \), \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are both true at \(\upomega \) if and only if \(\hbox {S}_{1} \wedge \hbox {S}_{2}\) is true at \(\upomega \)

Certainly, it would be absurd to suppose that R2 is satisfied not by \(\hbox {S}_{1} \wedge \hbox {S}_{2}\) itself, but rather by some other sentence equivalent to it. For suppose that R2 were satisfied by, say, \(\lnot (\lnot \hbox {S}_{1} \wedge \hbox {S}_{2}) \wedge \lnot (\lnot \hbox {S}_{2} \wedge \hbox {S}_{1}) \wedge \lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\). Then our models would have us representing \(\upalpha \) as someone who, without fail, always infers back and forth between \(\hbox {S}_{1}\), \(\hbox {S}_{2}\) and \(\lnot (\lnot \hbox {S}_{1} \wedge \hbox {S}_{2}) \wedge \lnot (\lnot \hbox {S}_{2} \wedge \hbox {S}_{1}) \wedge \lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\), while potentially skipping over the much more natural and direct inferences between \(\hbox {S}_{1}, \hbox {S}_{2}\) and \(\hbox {S}_{1} \wedge \hbox {S}_{2}\). But anyone who doesn’t reliably follow the rules of conjunction introduction and elimination is not going to unfailingly adhere to any inference rules which link \(\hbox {S}_{1}, \hbox {S}_{2}\) and \(\lnot (\lnot \hbox {S}_{1} \wedge \hbox {S}_{2}) \wedge \lnot (\lnot \hbox {S}_{2} \wedge \hbox {S}_{1}) \wedge \lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\) to one another. (To be sure, one could in principle describe a consequence relation such that the latter inferences are admitted but the former are not. But why would we think that closing the worlds in \(\Omega \) under that relation makes for a good model of any doxastic agent, let alone the typical believer?)

In conjunction with Non-Contradiction and Maximal Specificity, \({\wedge }\)-Consistency guarantees that \(\lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\) is true at any world where at least one of \(\hbox {S}_{1}\) or \(\hbox {S}_{2}\) is true: for any \(\Vert \hbox {S}_{1}\Vert \) and \(\Vert \hbox {S}_{2}\Vert \),

$$\begin{aligned} \Vert \hbox {S}_{1}\Vert ^{\mathrm{C}} = \Vert \lnot \hbox {S}_{1}\Vert , \hbox { and } \Vert \hbox {S}_{1}\Vert \cap \Vert \hbox {S}_{2}\Vert = \Vert \hbox {S}_{1} \wedge \hbox {S}_{2}\Vert , \end{aligned}$$

Hence,

$$\begin{aligned} (\Vert \hbox {S}_{1}\Vert ^{\mathrm{C}} \cap \Vert \hbox {S}_{2}\Vert ^{\mathrm{C}})^{\mathrm{C}} = \Vert \lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\Vert = \Vert \hbox {S}_{1}\Vert \cup \Vert \hbox {S}_{2}\Vert \end{aligned}$$

In fact, they imply that (a) every Boolean combination of expressible propositions will be expressible by some sentence involving \(\lnot \) and/or \(\wedge \), and more generally that (b) every world in \(\Omega \) will be closed under the \(\{\lnot , \wedge \}\) fragment of classical propositional logic. We’re fast running out of impossibilities—and with them, our capacity to represent logically non-ideal subjects.
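To make (a) and (b) concrete, here is a minimal Python sketch (my own illustration; the tuple encoding of sentences, the two atoms, and the finite fragment are stipulated purely for the example, and are no part of the formal model). It builds every ersatz world over a small \(\{\lnot , \wedge \}\) fragment that satisfies Non-Contradiction, Maximal Specificity, and \(\wedge \)-Consistency, and checks that the union of two expressible propositions is expressed by a \(\{\lnot , \wedge \}\) sentence:

from itertools import product

ATOMS = ['p', 'q']
def neg(s): return ('not', s)
def conj(a, b): return ('and', a, b)

# The fragment over which the three constraints are imposed: literals,
# conjunctions of literals, and the negations of those conjunctions.
LITERALS = ATOMS + [neg(a) for a in ATOMS]
CONJS = [conj(a, b) for a in LITERALS for b in LITERALS]

def close(literals_true):
    """Extend a choice of literals to the whole fragment: conjunctions are
    settled by wedge-Consistency, their negations by Non-Contradiction plus
    Maximal Specificity (exactly one of C, not-C is true at each world)."""
    w = set(literals_true)
    for c in CONJS:
        _, a, b = c
        if a in w and b in w:
            w.add(c)
        else:
            w.add(neg(c))
    return frozenset(w)

# Non-Contradiction + Maximal Specificity for literals: exactly one member
# of each pair {p, not-p}, {q, not-q} is true at any world.
WORLDS = [close({('p' if bp else neg('p')), ('q' if bq else neg('q'))})
          for bp, bq in product([True, False], repeat=2)]

def truth_set(s):
    """The proposition ||s||: the set of worlds at which s is true."""
    return frozenset(w for w in WORLDS if s in w)

# ||not(not-p and not-q)|| coincides with ||p|| union ||q||: within this
# fragment, no impossible behaviour survives the three constraints.
assert truth_set(neg(conj(neg('p'), neg('q')))) == truth_set('p') | truth_set('q')

The same check succeeds for any Boolean combination expressible in the fragment: claims (a) and (b) in miniature.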

Now I want to be clear that I’ve not yet said that \(\Omega \) contains no impossible worlds whatsoever. If there are irreducibly disjunctive sentences in \({\varvec{\mathcal {L}}}\), then a sentence like \(\hbox {S}_{1} \vee \hbox {S}_{2}\) may still behave erratically by, e.g., not being true at all and only the worlds where at least one of \(\hbox {S}_{1}\) or \(\hbox {S}_{2}\) is true. Likewise, if \({\varvec{\mathcal {L}}}\) contains a primitive conditional connective \(\rightarrow \) (i.e., where \(\hbox {S}_{1} \rightarrow \hbox {S}_{2}\) is not simply a shorthand for \(\lnot (\hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\)), then we’ve not said anything to guarantee that the worlds in \(\Omega \) must validate even very simple inference rules like modus ponens. Thus, there may still be plenty of logically impossible worlds in \(\Omega \). Nevertheless, with Non-Contradiction, Maximal Specificity, and \(\wedge \)-Consistency alone, we’ve managed to close \(\Omega \) under a very strong consequence relation. Indeed, \(\Omega \) is already only apt for modelling agents who are very good logical reasoners: for every classically valid inference pattern \(\hbox {S}_{1}\), \(\hbox {S}_{2}\), ... \(\Rightarrow \) S, the worlds in \(\Omega \) will be closed under a corresponding inference which replaces each of \(\hbox {S}_{1}, \hbox {S}_{2}\), ... and S with a classically equivalent sentence expressed using only \(\lnot \) and \(\wedge \). For instance, while \(\Omega \) might not be closed under disjunction introduction, we do know that at any world where either \(\hbox {S}_{1}\) or \(\hbox {S}_{2}\) is true, \(\lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\) will also be true. And at any world where \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are true, \(\lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2}) \wedge \lnot (\lnot \hbox {S}_{1} \wedge \hbox {S}_{2}) \wedge \lnot (\hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\) is true. What we have, in effect, is a model of an agent who is logically infallible with respect to a huge range of sometimes very complex inferences. That the agent might also be logically incompetent with respect to other very basic inferences hardly seems to help.

In summary: given Minimal Richness, if we want to preserve Booleanism alongside the expressibility hypothesis, then we have to close \(\Omega \) under some (classically valid) inferences. We have a certain degree of choice as to what those inferences might be (e.g., double negation elimination versus sextuple negation elimination). But closing \(\Omega \) under the simplest and most natural rules—that is, those rules which ordinary agents are most likely to consistently follow—leads us directly into closing \(\Omega \) under a complete fragment of classical logic, and, plausibly, under classical logic simpliciter.

5 Responses

At the end of Sect. 3, I noted that the problems of probabilistic coherence result from a sequence of choices about the formal properties and interpretation of \(\Omega \), \({\varvec{\mathcal {B}}}\), and \(\mathcal {C}r\). All standard models of partial belief presuppose that \({\varvec{\mathcal {B}}}\) satisfies Booleanism, and that \(\mathcal {C}r\) satisfies at least Nonnegativity, Normalisation, and Monotonicity (or something very similar); combined with a space of worlds limited only to the possible, these quickly get us to some very strong coherence constraints on degrees of belief. We can avoid these constraints without making any significant changes to the standard models if \(\Omega \) includes enough impossible worlds, but doing so will generate a problem with expressibility.
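As a reminder, on their usual formulations (close variants are possible) these constraints say, for all P, Q in \({\varvec{\mathcal {B}}}\):

$$\begin{aligned} \hbox {Nonnegativity: } \mathcal {C}r(P) \ge 0; \quad \hbox {Normalisation: } \mathcal {C}r(\Omega ) = 1; \quad \hbox {Monotonicity: } P \subseteq Q \Rightarrow \mathcal {C}r(P) \le \mathcal {C}r(Q) \end{aligned}$$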

There are a lot of moving parts here, and consequently, plenty of ways to respond. As a (non-exhaustive) list of options, we might:

  1. Keep the standard probabilistic model of partial belief, and bite the bullet on the matter of probabilistic coherence.

  2. Develop a non-standard model of partial belief which keeps Booleanism but avoids the problems of probabilistic coherence without resorting to impossible worlds.

  3. Develop a non-standard model of partial belief which involves impossible worlds but doesn’t presuppose Booleanism.

  4. Offer an alternative interpretation of \(\mathcal {C}r\) (i.e., such that \(\mathcal {C}r\) being defined for inexpressible propositions does not conflict with the expressibility hypothesis).

  5. Reject the expressibility hypothesis.

I’m inclined to think that some combination of the first strategy and (to a much lesser extent) the second strategy is our best bet. It would be better if we didn’t have to throw out most of what we’ve managed to achieve regarding the formal representation of partial belief, so making very significant changes to the basic model outlined in Sect. 3 seems like a rash decision. But moreover, just how bad the problems of probabilistic coherence actually are depends on just how probabilistically irrational the typical human is, and there are reasons to think that the probabilistic (possible worlds) model isn’t too far from the truth (appearances to the contrary notwithstanding). But that is a big debate, and arguing the point is best left for a different discussion. To conclude, then, I will in this section say a few words about the third and fourth types of response, and discuss the fifth type of response in Sect. 6.

With respect to the third strategy, it’s worth noting that Booleanism is not something to be given up lightly. To be sure, the definition of \(\mathcal {C}r\) in terms of a probability distribution \(\mathcal {D}\) that I gave in Sect. 3 in no way required any special assumptions about the structure of \({\varvec{\mathcal {B}}}\); so it’s clear that we can construct a recognisably ‘probabilistic’ model of partial belief without assuming Booleanism. But then we can raise a version of the point made at the end of Sect. 2: if we let \(\Omega \) satisfy Really Unrestricted Comprehension, and simply define \({\varvec{\mathcal {B}}}\) as \(\{\Vert \hbox {S}\Vert : \hbox {S} \in {\varvec{\mathcal {L}}}\}\), then while it’s true that \(\mathcal {D}\) will let us encode any arbitrary assignment of values into \(\mathcal {C}r\), it’s hard to see why we should want to use a probability distribution in the first place. \(\mathcal {D}\) itself doesn’t directly represent anything about \(\upalpha \)’s doxastic state—no S will be true at just one world \(\upomega \), so \(\mathcal {D}(\upomega )\) cannot be interpreted as a degree of belief towards the singleton proposition \(\{\upomega \}\). What we really have is just a complicated way of listing out \(\upalpha \)’s degrees of belief, with the probabilistic apparatus adding nothing by way of efficiency or illumination.
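For reference, the construction at issue has the familiar shape (I give it here in its usual form; the Sect. 3 version may differ in inessential details): a distribution \(\mathcal {D}\) over worlds determines \(\mathcal {C}r\) by summation,

$$\begin{aligned} \mathcal {C}r(P) = \sum _{\upomega \in P} \mathcal {D}(\upomega ), \quad \hbox {where } \mathcal {D}: \Omega \rightarrow [0,1] \hbox { and } \sum _{\upomega \in \Omega } \mathcal {D}(\upomega ) = 1 \end{aligned}$$

The worry above is then that once no sentence is true at exactly one world, the weights \(\mathcal {D}(\upomega )\) lose any doxastic interpretation of their own.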

But that isn’t the only worry in the vicinity. A more important concern, I think, arises from the fact that Booleanism frequently comes up as a basic assumption in various representation theorems, where the requirement that \({\varvec{\mathcal {B}}}\) has some minimally rich algebraic structure is a prerequisite for our being able to assign numerical values to the contents of \({\varvec{\mathcal {B}}}\) in a meaningful and systematic way. For example, the assumption plays a role throughout Jeffrey’s (1990) representation theorem for expected utility theory—where, if we were to assume that the space of thinkable propositions \({\varvec{\mathcal {B}}}\) was such that none of its members is a subset of any other members, almost all of his axioms would be either meaningless or trivial. Booleanism is a standard assumption for theories of decision making and uncertainty, with almost all axiomatic decision theories being built around it. Or consider the common approach to characterising numerical degrees of belief defined in terms of qualitative belief orderings over propositions, based on the work of de Finetti (1931) and Scott (1964). Representation theorems which take us from qualitative belief orderings to probabilities are importantly dependent on \({\varvec{\mathcal {B}}}\) having a rich algebraic structure. Without something like the axiom of qualitative additivity—that if \(P_{1}\) and \(P_{2}\) both have null intersection with \(P_{3}\), then one holds \(P_{1}\) to be more likely than \(P_{2}\) if and only if one holds \(P_{1} \cup P_{3}\) to be more likely than \(P_{2} \cup P_{3}\)—the qualitative belief ordering would lack a sufficiently rich structure to support anything more than a simple (and representationally inadequate) ordinal scale.Footnote 14
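Spelled out in symbols, with \(\succ \) for the agent’s ‘more likely than’ ordering, the axiom just described is:

$$\begin{aligned} \hbox {If } P_{1} \cap P_{3} = P_{2} \cap P_{3} = \emptyset \hbox {, then } P_{1} \succ P_{2} \iff P_{1} \cup P_{3} \succ P_{2} \cup P_{3} \end{aligned}$$

Note that even stating the axiom presupposes that \({\varvec{\mathcal {B}}}\) is closed under the relevant unions and intersections, which is precisely the Boolean structure at issue.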

With respect to the fourth strategy, we could perhaps keep the probabilistic model as it is (more or less), but make changes to how we interpret \(\mathcal {C}r\).Footnote 15 For instance, instead of saying that \(\mathcal {C}r(P)=x\) if and only if \(\upalpha \) has degree of belief x towards some object of belief represented by P, we might instead say that \(\mathcal {C}r\) represents \(\upalpha \)’s degrees of belief only where the propositions in question are expressible. But what then of the values that \(\mathcal {C}r\) assigns to inexpressible propositions? One thought would be to say that while \(\mathcal {C}r\) represents \(\upalpha \)’s degrees of belief when P is expressible, it represents some other propositional attitude \(\upphi \) when P is inexpressible. For instance, one might think that if P is expressible, then \(\mathcal {C}r(P^{\mathrm{C}})\) represents \(\upalpha \)’s degree of rejection towards P, which plausibly is \(1 - \mathcal {C}r(P)\). However, this kind of ‘rejectionist’ proposal will only work if the complement of every inexpressible proposition is expressible, which is not in general the case. In particular, the domain of \(\mathcal {C}r\) has to be closed under intersections and unions, and the complement of the (inexpressible) intersection or union of two expressible propositions will often itself be inexpressible.

Of course, there may exist some other broadly ‘doxastic’ attitude \(\upphi \) that I’ve not considered, which takes inexpressible propositions as its objects—but what reason do we have for positing the existence of this \(\upphi \), beyond the desire to preserve some modelling assumptions?

6 The expressibility hypothesis (again)

Finally, one may want to go after the assumption that there exists an \({\varvec{\mathcal {L}}}\) of the kind described in Sect. 1, in which everything that \(\upalpha \) believes or partially believes is expressible. If this is false, then the presence of inexpressible propositions in the domain of \(\mathcal {C}r\) is perhaps even to be expected, not shunned. Maybe we have just discovered that sometimes our partial beliefs towards expressible propositions come hand-in-hand with partial beliefs towards inexpressible propositions; the latter are perfectly legitimate objects of thought, but not all such objects are expressible.

First things first, it should be noted that there are accounts of what worlds are which cannot plausibly avoid a version of my argument by denying the expressibility hypothesis. For example, Nolan (1997) favours an approach where (in his terminology) ‘propositions’—the meanings of sentences and the objects of thought—are taken to be the fundamental entities from which worlds are constructed. On this picture, possible worlds are maximal consistent sets of propositions à la Adams (1974), while impossible worlds are those sets of propositions which are inconsistent and/or non-maximal. Adopting this view, we could let \({\varvec{\mathcal {L}}}\) simply be the class of all propositions qua objects of thought, trivialising the question as to whether \({\varvec{\mathcal {L}}}\) is ‘expressively rich enough’ to capture every belief that \(\upalpha \) might have. We can then easily see that once something like Unrestricted Comprehension holds, there will be sets of worlds with no proposition in common amongst their members. These sets of worlds will not only be linguistically inexpressible, but quite literally unthinkable.Footnote 16

Furthermore, I have already noted Jago’s work on the expressiveness of Lagadonian languages in Sect. 1, which undergirds his linguistic ersatz account of impossible worlds as arbitrary sets of sentences taken from a pre-specified ‘world-making’ language \({\varvec{\mathcal {L}}}\). And note the central importance of the expressibility hypothesis to the account, according to which a set-of-worlds proposition P represents some content C just in case, for every world \(\upomega \) in P, there is a sentence S in \(\upomega \) which expresses that C. In general, this brand of linguistic ersatzer argues for the representational adequacy of their propositions qua sets of ‘worlds’ by arguing first that the basic world-making language is up to the task of distinguishing between all possible contents of belief, from which it quickly follows that sets of sets of these sentences can distinguish between different belief contents—for the simple reason that each sentence S of a language \({\varvec{\mathcal {L}}}\) corresponds one-to-one with the set of \(P \subseteq {\varvec{\mathcal {L}}}\) such that \(\hbox {S} \in P\). The expressiveness of the ersatz sets-of-worlds model is directly grounded in the expressiveness of the language it’s built upon, with propositional representation achieved directly through the meanings of the sentences shared by the worlds within the propositions.

To be sure, one can imagine an ersatzer who begins with a language \({\varvec{\mathcal {L}}}\) which is expressively inadequate, and claims that those beliefs which cannot be represented by any sentence of \({\varvec{\mathcal {L}}}\) are nevertheless represented by those sets of worlds whose members have no sentences in common. But how is this representation achieved? Certainly, not in the standard way. Indeed, what reason could we have for thinking that sets of sets of world-making sentences which have nothing in common will do a reasonable job of representing the purportedly ‘inexpressible’ beliefs? What content would the inexpressible proposition \(\{\emptyset \}\) represent, and does it represent anything different than the distinct inexpressible proposition \(\emptyset \)? And what does \(\{\{\hbox {S}_{1}, \hbox {S}_{2}\}, \{\hbox {S}_{3}\}\}\) represent? That either \(\hbox {S}_{1} \wedge \hbox {S}_{2}\), or \(\hbox {S}_{3}\)? We already have a content for that:

$$\begin{aligned} \{\upomega \in \Omega : (\hbox {S}_{1} \wedge \hbox {S}_{2}) \vee \hbox {S}_{3} \in \upomega \} \end{aligned}$$

It’s hard to imagine any sort of systematic story about how ersatz propositions with nothing in common amongst their members could nevertheless serve to represent a genuine content. And absent such a story, we’re stuck with the standard approach, which presupposes the expressive adequacy of the world-making language \({\varvec{\mathcal {L}}}\).

But I don’t want my argument to rest upon specific approaches to characterising worlds. So, to conclude the discussion, I will proceed as follows. First, I’ll make a few general points in favour of the expressibility hypothesis. I don’t take any of these to be conclusive; much like the present state of the literature on the expressibility of thought, there is plenty of space for disagreement here. It is enough to show, however, that denying the expressibility hypothesis is no trivial matter. Secondly, and much more importantly, I’ll end by saying why I don’t think that denying the expressibility hypothesis is the right way to respond to the argument.

Let me start, then, by noting that although there are surprisingly few philosophical discussions regarding whether every possible object of thought is linguistically expressible, to the extent that the question has been discussed, the usual presumptive answer has been affirmative; e.g., Searle (1969, pp. 19ff), Katz (1978; 1981), Schiffer (2003, p. 71), Priest (2006, p. 54), and Hofweber (2006). Michael Dummett goes so far as to state a priori that:

Thoughts differ in all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought I have [...] It is of the essence of thought, not merely to be communicable, but to be communicable, without residue, by means of language. (1978, p. 142)

Most of these discussions focus on natural languages, which makes it a little hard to apply them to the non-natural language \({\varvec{\mathcal {L}}}\). Of particular note is that natural languages will contain a variety of context-dependent expressions which serve to expand their expressiveness, whereas I’ve stipulated that the sentences of \({\varvec{\mathcal {L}}}\) have their meanings independently of context. But since I’ve made very few substantive assumptions about \({\varvec{\mathcal {L}}}\), it’s hard to see why there would be any particular problems in applying lessons drawn from natural languages to a language like \({\varvec{\mathcal {L}}}\), besides those which arise from context-sensitivity. Certainly, the fact that the interpretation of \({\varvec{\mathcal {L}}}\)’s sentences is unambiguous and precise shouldn’t give us any reason to think that it’s less likely we’ll find the right sentences in \({\varvec{\mathcal {L}}}\).

We could re-run the argument without supposing that \({\varvec{\mathcal {L}}}\) contains only context-insensitive expressions. We would then need to speak not of expressibility and inexpressibility simpliciter, but rather of expressibility relative to a context. But if it’s not already plausible that every object of belief is expressible in a context-insensitive language, then it’s not clear why every content of belief should be expressible in a context-sensitive language in a specific context. A better option, if we thought that every belief were expressible in some natural language \({\varvec{\mathcal {L}}}_{n}\), would be to take \({\varvec{\mathcal {L}}}_{n}\) as the basis for the construction of \({\varvec{\mathcal {L}}}\), which proceeds by systematically eliminating the context-sensitivity of \({\varvec{\mathcal {L}}}_{n}\) while preserving overall expressiveness. The received view is that such an elimination is entirely possible—and indeed, easy. As Stalnaker puts it, it seems at first pass “easy to eliminate context-dependence [since for] any proposition expressed in context c by sentence S, we may simply stipulate that some other sentence \(\hbox {S}^\prime \) shall express, in all contexts, that same proposition” (Stalnaker 1984, pp. 151–152).Footnote 17 If this kind of elimination strategy is viable, then we have every reason to think that whatever we can say in, e.g., English, we can say in a spruced up and context-independent version of English.

But all this depends on a more general assumption that our beliefs ought to be linguistically expressible somehow or other, which the reader may very well doubt. Nevertheless, the existence of something much like \({\varvec{\mathcal {L}}}\) is strongly suggested by a wide variety of positions in philosophy. The assumption plays a role in important attempts to explain mental representation. If one accepts the arguments for the existence of a Language of Thought as the psychological basis for our capacity to have propositional attitudes, then the existence of a language like \({\varvec{\mathcal {L}}}\) seems hard to deny. According to this popular view, thinking in general is a computational process sensitive only to the (context-independent) syntax of strings of symbols in a compositional Language of Thought, and one has a belief with content P only if one is appropriately related to a sentence of this language which means that P. The existence of a language rich enough to express each of our beliefs is also presupposed by a number of models of mental content. For instance, and besides the Lagadonian approaches already mentioned, Chalmers models the contents of thoughts—including our partial beliefs—as sets of scenarios, with each scenario being an ‘epistemically complete’ description of a way the world might be for all we know a priori, given in an idealised language consisting of vocabulary for describing the microphysical and phenomenal characteristics of the world (see his 2011, 2012). That is, each scenario is a (potentially infinitary) conjunction of sentences in an ideal language, with each scenario being inconsistent with every other scenario. To express any set of scenarios in this language, a (potentially infinitary) disjunction of scenarios will suffice.

With all that said, the recent literature has seen some purported counterexamples to my assumption about the expressibility of belief. Shaw (2013) develops a variation on the Berry paradox to argue for the existence of a kind of inexpressible thought content—an instance of a case which he says “happens on extremely rare occasions due to a particular kind of linguistic technicality” (p. 70). Hellie (2004) has also argued that there may be truths about phenomenal experience which we can appreciate but cannot express linguistically. And if one thinks that there is a one-to-one correspondence between ways the world might be and possible belief contents, then there are also classic expressive inadequacy arguments involving qualitatively indiscernible individuals and alien properties, to the effect that no language can describe every possibility (e.g., Lewis 1986, p. 157ff; Bricker 1987). I will not discuss any of these points in detail. Perhaps each gives rise to a genuine problem for the expressibility hypothesis. But acquiescing on this point hardly seems to help with the problem currently at hand. The inexpressibility of most of \(\mathcal {C}r\)’s domain cannot be explained by an occasional linguistic technicality. And moreover, the inexpressible propositions that we have been describing are not plausibly about some ineffable aspect of our phenomenal experience, alien properties, or qualitatively indiscernible individuals.

If \({\varvec{\mathcal {L}}}\) lacks the expressive power to represent our thoughts about such things—so be it. Let \({\varvec{\mathcal {L}}}\) represent a language capable of expressing only those more mundane beliefs which are expressible, like the belief that roses are red. (If need be, let \({\varvec{\mathcal {L}}}\) be the set of declarative sentences of English, and fix a context.) What kind of content could the set of worlds where ‘Roses are red’ is not true represent, if not that roses are not red? Clearly, it has something to do with roses and redness—but what? We can’t express it, sure, but it doesn’t even seem like there’s anything content-like in the vicinity for us to believe. At best, the inexpressible propositions we’ve been talking about look like an artefact of the model, not some newly discovered kind of content towards which most of our beliefs are directed.

This is, of course, a version of the argument above against the hypothetical linguistic ersatzer who denies the expressibility hypothesis. The point here is general, and constitutes the central reason why going after the expressibility hypothesis looks like the wrong strategy. An adequate response to the argument of Sect. 4 can’t be to just point out that there may be some possible things that \(\upalpha \) could believe which are not expressible. The odd inexpressible object of thought here and there isn’t an immediate cause for concern: the underlying problem survives mere counterexamples to the existence of \({\varvec{\mathcal {L}}}\). Unless we make serious changes to the basic probabilistic model of our beliefs, then so long as Booleanism and (Really) Unrestricted Comprehension are true, if you have a degree of belief x towards \(\Vert \hbox {S}\Vert \) you will have a degree of belief \(1 - x\) towards the mysteriously inexpressible proposition \(\Vert \hbox {S}\Vert ^{\mathrm{C}}\); and if you have degrees of belief x and y towards \(\Vert \hbox {S}_{1}\Vert \) and \(\Vert \hbox {S}_{2}\Vert \), then you’ll have some degree of belief \(z \le x, y\) towards the inexpressible \(\Vert \hbox {S}_{1}\Vert \cap \Vert \hbox {S}_{2}\Vert \) and \((x+y) - z\) towards \(\Vert \hbox {S}_{1}\Vert \cup \Vert \hbox {S}_{2}\Vert \). Inexpressibility on this model is not some esoteric phenomenon resting on a technicality, nor does it seem to be limited to a specific kind of topic (e.g., phenomenology, alien properties, and indiscernible individuals) about which we might have beliefs.
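The arithmetic in the previous paragraph is just finite additivity at work. Assuming \(\mathcal {C}r\) behaves as a (finitely additive) probability function over \({\varvec{\mathcal {B}}}\):

$$\begin{aligned} \mathcal {C}r(\Vert \hbox {S}\Vert ^{\mathrm{C}}) = 1 - \mathcal {C}r(\Vert \hbox {S}\Vert ) = 1 - x, \quad \mathcal {C}r(\Vert \hbox {S}_{1}\Vert \cup \Vert \hbox {S}_{2}\Vert ) = \mathcal {C}r(\Vert \hbox {S}_{1}\Vert ) + \mathcal {C}r(\Vert \hbox {S}_{2}\Vert ) - \mathcal {C}r(\Vert \hbox {S}_{1}\Vert \cap \Vert \hbox {S}_{2}\Vert ) = (x + y) - z \end{aligned}$$

Nothing in these identities cares whether the propositions involved are expressible; that is precisely the problem.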

For similar reasons, I am not moved by simple cardinality arguments aimed at showing that we must accept the existence of inexpressible propositions, regardless of whether we adopt impossible worlds into our ontology or not. Some vigorously intuit that for any subset \({\varvec{\mathcal {S}}}\) of any language \({\varvec{\mathcal {L}}}\), \(\upalpha \) might (partially) believe that all and only the sentences of \({\varvec{\mathcal {S}}}\) are true. If \({\varvec{\mathcal {L}}}\) is set-sized, then the cardinality of \(\wp ({\varvec{\mathcal {L}}})\) is strictly greater than that of \({\varvec{\mathcal {L}}}\). It follows that \({\varvec{\mathcal {L}}}\) cannot contain a unique sentence S for each subset \({\varvec{\mathcal {S}}}\subseteq {\varvec{\mathcal {L}}}\) to the effect of ‘All and only the elements of \({\varvec{\mathcal {S}}}\) are true’. Thus, either the content in question is not expressible at all, or it cannot be expressed in \({\varvec{\mathcal {L}}}\)—either way, \({\varvec{\mathcal {L}}}\) is not up to the task of expressing everything that \(\upalpha \) might believe. But even if the intuition underlying this argument is correct—and it is by no means obvious that it is—the conclusion is merely that we must accept that we might have some inexpressible (partial) beliefs. What the argument doesn’t do is give us any reason to think that the algebra of propositions \({\varvec{\mathcal {B}}}\) that constitutes what \(\upalpha \) actually has partial beliefs towards is filled to the brim with inexpressible propositions. Indeed, it’s perfectly consistent with the argument’s conclusion that \({\varvec{\mathcal {B}}}\) contains no inexpressible propositions at all!

We get to keep the model only if we’re happy with the implication that thinkers systematically have at least as many partial beliefs towards inexpressible propositions as they do towards expressible propositions. And that is a hard pill to swallow. If we’re expected to swallow it, we’ll need good reasons to think that (a) these inexpressible propositions exist, (b) that they bear such-and-such systematic relations to the expressible propositions, and (c) that they can be, and indeed always are, believed. And those reasons can’t be just that these are consequences of a model which includes possible and impossible worlds.

The probabilistic analogues of the problems of logical omniscience require some response. The solution we end up with may involve the introduction of impossible worlds, but this looks to be a viable solution only if we drop the very standard—and very important—assumption of Booleanism, or if we embrace the inexpressibility of most of our thoughts. Neither option seems particularly appealing, and we may well do better to look for a solution without the impossible.