Impossible worlds and partial belief
Abstract
One response to the problem of logical omniscience in standard possible worlds models of belief is to extend the space of worlds so as to include impossible worlds. It is natural to think that essentially the same strategy can be applied to probabilistic models of partial belief, for which parallel problems also arise. In this paper, I note a difficulty with the inclusion of impossible worlds into probabilistic models. Under weak assumptions about the space of worlds, most of the propositions which can be constructed from possible and impossible worlds are in an important sense inexpressible; leaving the probabilistic model committed to saying that agents in general have at least as many attitudes towards inexpressible propositions as they do towards expressible propositions. If it is reasonable to think that our attitudes are generally expressible, then a model with such commitments looks problematic.
Keywords
Impossible worlds · Partial belief · Credence · Logical omniscience · Probabilistic coherence

Suppose we wish to model the total doxastic state of a typical (nonideal) subject, whom we’ll call \(\upalpha \).^{1} We’ll need two main ingredients: one, a way to represent potential objects of thought, the kinds of things fit to serve as the contents of some cognitive mental state; and two, a way of representing which of these are the contents of \(\upalpha \)’s attitudes.
If our model is to be faithful to the facts, then it’s important that we don’t end up representing \(\upalpha \) as being much more rational than she in fact is. What needs to be done to satisfy this desideratum depends on just how irrational we think nonideal agents can be, and opinions vary widely on this matter. But here is something that almost everyone agrees on: we are not logically infallible. The total doxastic state of any ordinary agent will usually be logically incoherent in some respect or other. Total belief sets probably aren’t going to be closed under logical implication, even on those accounts that seem to make us look very rational indeed (e.g., Lewis 1982; Stalnaker 1984). And on the face of it, beliefs don’t appear to be closed under even logical equivalence. The same applies to other kinds of doxastic attitudes: prima facie, one can be fully confident that either it is raining or it’s not without thereby also being fully confident that it’s not the case that it’s raining and not raining. The intuitive data of logical incoherence and hyperintensionality needs to be accounted for—usually, by modelling the objects of belief using entities that cut finer than logical equivalence.
In this paper, I argue that one common strategy for modelling logically fallible agents and hyperintensional contents (viz., through the use of impossible worlds) does not sit nicely with another very common approach to modelling total doxastic states (viz., through the use of a numerically-valued function defined on a Boolean algebra of propositions; e.g., a probability function). Roughly, the source of the problem is that most of the propositions which can be constructed out of a sufficiently rich space of possible and impossible worlds are in a certain strong sense inexpressible, and any Boolean algebra defined on such a space will contain at least as many inexpressible propositions as expressible propositions. Since it’s reasonable to think that most (if not all) of our doxastic attitudes are expressible, a model which commits us to widespread inexpressibility looks problematic. We can impose restrictions on the space of worlds which would prevent the inclusion of inexpressible propositions in the algebra, but only at the cost of reintroducing (a strong degree of) infallibility.
In Sect. 1, I outline an assumption about the expressibility of thought which will be helpful in setting up my main argument. Then, in Sect. 2, I provide some background on the problems of logical omniscience as they apply to a standard way of modelling full belief, and discuss how the introduction of impossible worlds is supposed to help solve these problems. In Sect. 3, I introduce probabilistic analogues to the classical problems of logical omniscience, for which an analogous solution involving impossible worlds seems to apply. Finally, in Sect. 4 I present the central argument of the paper, and in Sects. 5 and 6, discuss responses.
Before moving on, it’s worth noting some things that I’m not arguing. First, I do not think that the mere existence of inexpressible propositions should be considered problematic for the impossible worlds model—nor for that matter do I think that they would be especially problematic for the possible worlds model. I would not consider it a devastating problem if our formal models implied that inexpressible propositions exist, and could potentially serve as the objects of thought for some believers. I do, however, think that there is a serious issue when our models commit us to saying that inexpressibility is the norm, and it is this problem that I intend to highlight here. (See Sect. 6 for more discussion on this point.) And second, my argument should not be read as being against the intelligibility of impossible worlds in general, nor do I want to claim that there are no benefits to including them within our ontology.
1 The expressibility hypothesis
In setting up my argument, I will presuppose the existence of an artificial language, \({\varvec{\mathcal {L}}}\), about which I will make some assumptions. \({\varvec{\mathcal {L}}}\) can be thought of as a class of declarative sentences, each a (possibly infinite) string of symbols taken from a (possibly infinite) alphabet, with a corresponding interpretation. We suppose that every sentence in \({\varvec{\mathcal {L}}}\) is unambiguous, precise, and for the sake of simplicity, context-independent. I’ll stick to characterising \({\varvec{\mathcal {L}}}\) at the sentential level, since it is here that the issues we will be interested in arise. Nothing in what follows should be taken to suggest that there can be no quantifiers, modal operators, and so on, in \({\varvec{\mathcal {L}}}\).
Next, we will want \({\varvec{\mathcal {L}}}\) to be as expressive as possible with respect to \(\upalpha \)’s (partial) beliefs, within the bounds allowed by the present assumptions.^{2} The most straightforward version of my argument then proceeds on the basis of an assumption, which I will call the expressibility hypothesis: that \({\varvec{\mathcal {L}}}\) is maximally expressive, in the sense that for each distinct belief (or partial belief) that \(\upalpha \) has, there is a distinct sentence S in \({\varvec{\mathcal {L}}}\) which expresses the content of that exact belief and no other. \({\varvec{\mathcal {L}}}\) may be capable of saying much more than this as well, but to begin with we will assume that it is capable of saying at least this much.
Furthermore, besides having beliefs simpliciter, I assume that \(\upalpha \) can also have negative and conjunctive beliefs. For example, \(\upalpha \) might believe that roses are red, that violets are blue, and that roses are red and violets are blue, where the latter content intuitively has normative connections to the former two of the kind we might try to cash out in terms of conjunction introduction and elimination rules. If the content of the first belief is captured by a sentence \(\hbox {S}_{1}\) of \({\varvec{\mathcal {L}}}\), and the content of the second by \(\hbox {S}_{2}\), then we will use ‘\(\hbox {S}_{1} \wedge \hbox {S}_{2}\)’ to pick out the sentence (or a sentence) of \({\varvec{\mathcal {L}}}\) which expresses the third content. Likewise, if \(\upalpha \) later comes to believe that roses are not red, then there’s another sentence, ‘\(\lnot \hbox {S}_{1}\)’, which expresses her changed belief.
In saying this, I’m not making any strong commitments in relation to the syntax of \({\varvec{\mathcal {L}}}\), which may consist entirely of ‘atomic’ sentences for all I’ve said here. But I see no good reason to think, if it is possible to have a language capable of expressing all of our beliefs at all, that there couldn’t also be such a language which contains a unary connective and a binary connective corresponding to negation and conjunction respectively. Nor am I saying that \(\upalpha \) can only have atomic, negative, and conjunctive beliefs. She may also have conditional beliefs, e.g., a belief that if roses are red then violets are blue, where this is not just another way of saying that \(\upalpha \) believes that it’s not the case that: roses are red and violets are not blue. In that case, we may also want to have primitive conditional sentences in \({\varvec{\mathcal {L}}}\). Likewise, \(\upalpha \) may believe that roses are red or violets are blue, where this is not the same thing as believing that it’s not the case that: roses are not red and violets are not blue. We need not commit either way on these questions. It’s perfectly reasonable to think that \({\varvec{\mathcal {L}}}\) has some nontrivial syntax at the sentential level. But we may well find that two connectives are fewer than we need to adequately distinguish between the full range of contents that a typical subject might believe, so we will remain neutral on just what that syntax is. (The upshot of these points will become apparent in the final paragraphs of Sect. 4.)
Whatever \({\varvec{\mathcal {L}}}\) is, it’s obviously not English, nor any other natural language. But there is no need to interpret my talk of ‘sentences’ and ‘languages’ too closely on the model of natural languages. The ‘language’ in question may not be the sort of thing that any human being could speak, nor need it correspond very closely to the structure of thought. The ‘sentences’ may be purely mathematical objects, or arbitrary sets of abstracta. For example, one might want to simply let every object of belief just be a sentence of \({\varvec{\mathcal {L}}}\), and stipulate that every sentence expresses itself.^{3} Alternatively, perhaps an appropriately constructed Lagadonian language would be expressive enough for our purposes.^{4} In a series of recent works, Mark Jago has defended just this idea (see esp. his 2012; 2015a; b; cf. also Berto 2010). Indeed, the expressive richness of Jago’s language is a central component of his use of ersatz possible and impossible worlds to model hyperintensional contents, in roughly the manner described in the next section. As he puts it, for sets of ersatz possible and/or impossible worlds to be an adequate model of hyperintensional content and to overcome the infamous ‘problem of descriptive power’, the world-building “language must be expressive enough to represent all of the possible and impossible situations we want to represent, and to represent distinct (possible or impossible) situations as distinct situations” (Jago 2015b, p. 718).
I will take the expressibility hypothesis on board in what follows, for three reasons:

(i) Although inconclusive, there are general reasons to accept the hypothesis.

(ii) There are prominent accounts of impossible worlds on which the hypothesis (or a close analogue thereof) is taken for granted, and would be difficult to deny.

(iii) Even if we ultimately ought to deny the hypothesis, the main thrust of the argument will be largely unchanged.
2 The problems of logical omniscience

On the standard possible worlds model, every world \(\upomega \in \Omega \) satisfies three conditions:

NonContradiction: At most one of S or \(\lnot \hbox {S}\) is true at \(\upomega \)

Maximal Specificity: At least one of S or \(\lnot \hbox {S}\) is true at \(\upomega \)

Closure under Implication: If \(\hbox {S}_{1}, \hbox {S}_{2}, \ldots \) are true at \(\upomega \) and jointly imply S, then S is true at \(\upomega \)
Truth sets for the sentences of \({\varvec{\mathcal {L}}}\) then behave classically:

(i) \(\Vert \lnot \hbox {S}\Vert = \Vert \hbox {S}\Vert ^{\mathrm{C}}\)

(ii) \(\Vert \hbox {S}_{1}\,\wedge \,\hbox {S}_{2}\Vert = \Vert \hbox {S}_{1}\Vert \cap \Vert \hbox {S}_{2}\Vert \)
But that’s a little too quick. Even supposing that there are enough propositions in \(\wp (\Omega )\) to represent all objects of belief, it may still be the case that \(\wp (\Omega )\) also contains many propositions that correspond to nothing that can properly be believed. Modelling objects of belief as sets of worlds does not commit one to saying that every set of worlds models an object of belief, and it shouldn’t be taken for granted that every way the world might be corresponds to something that \(\upalpha \) can believe.^{5} So let’s make a very minor adjustment to the basic Hintikkan model. Suppose that \({\varvec{\mathcal {B}}} \subseteq \wp (\Omega )\) contains just those propositions that do model genuine objects of belief, and say:
\(\upalpha \) believes P iff \(R_{\upalpha }(\upomega ) \subseteq P\) and \(P \in {\varvec{\mathcal {B}}}\)
If every proposition is thinkable, then the inclusion of \({\varvec{\mathcal {B}}}\) adds nothing to the original model; if not, \({\varvec{\mathcal {B}}}\) serves to filter out any ‘unthinkable’ propositions.
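For concreteness, the Hintikkan definition above can be run as a small finite sketch (the atoms, helper names, and choice of \(R_{\upalpha }(\upomega )\) below are my own illustrative stand-ins, and I omit the \({\varvec{\mathcal {B}}}\)-membership filter): belief in a proposition P is just the subset test \(R_{\upalpha }(\upomega ) \subseteq P\).

```python
from itertools import product

# Worlds as classical valuations of two atomic sentences (illustrative names).
atoms = ("p", "q")
worlds = [dict(zip(atoms, vals)) for vals in product((True, False), repeat=2)]

def truth_set(formula):
    """||S||: the set of worlds (by index) at which the formula is true."""
    return frozenset(i for i, w in enumerate(worlds) if formula(w))

p_true = truth_set(lambda w: w["p"])
q_true = truth_set(lambda w: w["q"])
p_and_q = truth_set(lambda w: w["p"] and w["q"])

# R: the worlds doxastically accessible to alpha (here, the p-and-q worlds).
R = p_and_q

def believes(P):
    """alpha believes P iff R is a subset of P (the B-filter is omitted)."""
    return R <= P

# With only possible worlds, ||p and q|| is a subset of ||p||, so belief
# is closed under implication: the conjunction drags in each conjunct.
assert believes(p_and_q) and believes(p_true) and believes(q_true)
```

This is exactly why logical omniscience is built in: the subset relation cannot distinguish a believed sentence from its classical consequences.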
The model then immediately generates three ‘logical omniscience’ results:

(i) If \(\hbox {S}_{1}\) implies \(\hbox {S}_{2}\) and \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert \in {\varvec{\mathcal {B}}}\), then \(\upalpha \) believes \(\hbox {S}_{1}\) only if she also believes \(\hbox {S}_{2}\)

(ii) If S is a tautology and \(\Vert \hbox {S}\Vert \in {\varvec{\mathcal {B}}}\), then \(\upalpha \) believes S

(iii) \(\upalpha \)’s beliefs are inconsistent only if \(R_{\upalpha }(\upomega )=\emptyset \) (so \(\upalpha \) believes everything in \({\varvec{\mathcal {B}}}\))
There are a number of ways we might try to respond to these problems. Perhaps the error is in thinking that we can adequately model belief sets using unstructured sets of possible worlds and simple subset relations. Or, perhaps the error is in thinking that we can use a single set of worlds \(R_{\upalpha }(\upomega )\) to encode an agent’s total doxastic state at \(\upomega \), which may be better represented using multiple ‘fragments’. Or perhaps there isn’t really a problem here after all: we really are logically omniscient, and it is only the complexities of belief attribution in natural language and our imperfect access to our own beliefs which make it seem otherwise. I think that each of these captures part of the truth, but my intention for this paper is not to suggest a positive solution to the problems of logical omniscience. Instead, I wish to focus on one common response, which begins with the thought that perhaps there are not enough propositions in \(\wp (\Omega )\): we need to make our space of worlds bigger, to accommodate more fine-grained divisions amongst the objects of thought.
Suppose we make an extension to \(\Omega \), such that it now contains not only all of the original possible worlds, but also worlds where various kinds of impossible affairs obtain.^{6} To make sure that \(\Omega \) is rich enough, we will want worlds which are obviously inconsistent (where both S and \(\lnot \hbox {S}\) are true), as well as worlds which are inconsistent in more subtle ways (e.g., worlds where \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are true, but \(\hbox {S}_{1} \wedge \,\hbox {S}_{2}\) is not true). Indeed, if we think that agents are capable of extreme logical incoherence, then we will want to ensure that our worlds are not closed under any nontrivial consequence relation. It would not be very helpful to remove closure under classical consequence but retain closure under, e.g., intuitionistic consequence—otherwise, we’re just swapping one sort of logical omniscience for another.

Unrestricted Comprehension: For any maximal set of sentences \({\varvec{\mathcal {S}}} \subseteq {\varvec{\mathcal {L}}}\), there will be worlds in \(\Omega \) where every \(\hbox {S} \in {\varvec{\mathcal {S}}}\) is true and no \(\hbox {S} \in {\varvec{\mathcal {L}}} \backslash {\varvec{\mathcal {S}}}\) is true
By building a model around this expanded space of worlds, it’s easy to block all three of the unwelcome ‘omniscience’ problems noted earlier. Indeed, we can say more than this. Let \(\{\hbox {S}_{1}, \,\hbox {S}_{2}, \ldots \}\) be any consistent or inconsistent set of sentences, and let \(R_{\upalpha }(\upomega )\) be the intersection of \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert \), .... Now \(R_{\upalpha }(\upomega )\) will be nonempty, and for any S that’s not in \(\{\hbox {S}_{1},\,\hbox {S}_{2}\), ...}, there will be at least one maximally specific world in \(R_{\upalpha }(\upomega )\) where S is not true. So, regardless of what we take \(\upalpha \)’s set of beliefs \(\{\hbox {S}_{1}, \hbox {S}_{2}\), ...} to be, we will be able to find some \(R_{\upalpha }(\upomega )\) such that \(R_{\upalpha }(\upomega ) \subseteq \Vert \hbox {S}\Vert \) if and only if \(\upalpha \) believes S. That looks like a nice property for our model to have, and all we had to do was load \(\Omega \) up with enough impossible worlds.
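As a finite sketch of this construction (the sentence names are illustrative, and 'S1&S2' is deliberately treated as just another sentence with no syntactic tie to 'S1' and 'S2'; I take \(\Omega \) to contain a world for every subset of the toy language, as a finite stand-in for a comprehension-rich space):

```python
from itertools import chain, combinations

L = ("S1", "S2", "S1&S2")  # illustrative toy language

def powerset(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

# Comprehension, finitely: one world per way of assigning truth to sentences.
omega = powerset(L)

def truth_set(s):
    """||S||: the worlds at which sentence s is true."""
    return frozenset(w for w in omega if s in w)

# alpha believes S1 and S2; R is the intersection of the believed truth sets.
R = truth_set("S1") & truth_set("S2")

believes = lambda P: R <= P
assert believes(truth_set("S1")) and believes(truth_set("S2"))

# Some world makes S1 and S2 true without making "S1&S2" true, so belief
# is no longer closed under conjunction introduction.
assert not believes(truth_set("S1&S2"))
```

With impossible worlds present, the subset test no longer forces belief in the conjunction of things believed.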
But note a consequence of Unrestricted Comprehension: there is no sentence S—at least, no sentence in \({\varvec{\mathcal {L}}}\)—such that S is true at all and only the worlds in \(R_{\upalpha }(\upomega )\) (assuming that \(\upalpha \) believes more than one thing). Say that a proposition P is expressible (relative to \({\varvec{\mathcal {L}}}\)) just in case there is a sentence \(\hbox {S} \in {\varvec{\mathcal {L}}}\) such that \(P = \Vert \hbox {S}\Vert \). The set of expressible propositions, \(\{\Vert \hbox {S}\Vert : \hbox {S} \in {\varvec{\mathcal {L}}}\}\), is an antichain of \(\langle \wp (\Omega ), \subseteq \rangle \): for any two distinct sentences \(\hbox {S}_{1}, \hbox {S}_{2}\), there will be worlds in \(\Omega \) where \(\hbox {S}_{1}\) is true and \(\hbox {S}_{2}\) isn’t true; so, \(\Vert \hbox {S}_{1}\Vert \) will never be a subset of \(\Vert \hbox {S}_{2}\Vert \). Suppose that \(\upalpha \) believes \(\hbox {S}_{1}\) and at least one other thing \(\hbox {S}_{2}\). Whatever \(R_{\upalpha }(\upomega )\) ends up being, it will have to be a proper subset of both \(\Vert \hbox {S}_{1}\Vert \) and \(\Vert \hbox {S}_{2}\Vert \). So, there’s no \(\hbox {S}_{3}\) such that \(\Vert \hbox {S}_{3}\Vert = R_{\upalpha }(\upomega )\). \(R_{\upalpha }(\upomega )\) is inexpressible in \({\varvec{\mathcal {L}}}\).^{8}
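Both the antichain observation and the inexpressibility of \(R_{\upalpha }(\upomega )\) can be checked mechanically in the same finite setting (toy sentence names; \(\Omega \) again taken to be every subset of the toy language):

```python
from itertools import chain, combinations

L = ("S1", "S2", "S3")  # illustrative toy language

def powerset(xs):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

omega = powerset(L)

def truth_set(s):
    return frozenset(w for w in omega if s in w)

expressible = {s: truth_set(s) for s in L}

# Antichain: for distinct sentences there is always a world where the first
# is true and the second is not, so no ||S1|| is a subset of any ||S2||.
assert all(not expressible[s1] <= expressible[s2]
           for s1 in L for s2 in L if s1 != s2)

# If alpha believes S1 and S2, R is a proper subset of both truth sets,
# and so equals no ||S|| at all: R is inexpressible in the toy language.
R = expressible["S1"] & expressible["S2"]
assert all(R != P for P in expressible.values())
```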
At this point, let me bring in the expressibility hypothesis: for every one of \(\upalpha \)’s beliefs, \({\varvec{\mathcal {L}}}\) includes a sentence S which expresses exactly that belief. If this is reasonable, then it’s only natural to suppose that a proposition should be found in \({\varvec{\mathcal {B}}}\) only if it is expressible in \({\varvec{\mathcal {L}}}\); that is, \({\varvec{\mathcal {B}}} \subseteq \{\Vert \hbox {S}\Vert : \hbox {S} \in {\varvec{\mathcal {L}}}\}\). After all, what could it mean to represent \(\upalpha \) as believing a proposition P, where P is not characterised by any sentence in a language which, ex hypothesi, is capable of expressing every one of \(\upalpha \)’s beliefs? And, since \(R_{\upalpha }(\upomega )\) is inexpressible, \(R_{\upalpha }(\upomega ) \notin {\varvec{\mathcal {B}}}\).
Is this a problem? I’m inclined to think that the inexpressibility of \(R_{\upalpha }(\upomega )\) is not by itself problematic. It would perhaps have been problematic if we were forced to assume that \(R_{\upalpha }(\upomega )\) must itself represent something that \(\upalpha \) believes, and hence that it should always be included within \({\varvec{\mathcal {B}}}\). However, nothing internal to the model I’ve described requires this to be the case. That \(R_{\upalpha }(\upomega )\) should itself be a proposition that \(\upalpha \) believes was never a commitment of the original model, even when we were working with just possible worlds. What’s needed for the representational system to work is that (a) if \({\varvec{\mathcal {P}}}_{{\varvec{\upalpha }}} \subseteq \wp (\Omega )\) is the set of all and only those propositions towards which some agent \(\upalpha \) has beliefs at \(\upomega \), then \({\varvec{\mathcal {P}}}_{{\varvec{\upalpha }}}\) has some lower bound with respect to \(\subseteq \) which we can designate as \(R_{\upalpha }(\upomega )\); and (b) if \({\varvec{\mathcal {P}}}_{{\varvec{\upalpha }}} \ne {\varvec{\mathcal {P}}}_{{\varvec{\upbeta }}}\), then \(R_{\upalpha }(\upomega ) \ne R_{\upbeta }(\upomega )\). That is, every distinct total belief state can be uniquely represented by (at least one) set of doxastically accessible worlds. We can satisfy this by letting \(R_{\upalpha }(\upomega )\) be the intersection of each proposition that \(\upalpha \) believes, without supposing that \(R_{\upalpha }(\upomega )\) is itself something that \(\upalpha \) believes.
None of this is to say that the impossible worlds model of belief just developed is without problems—just that it doesn’t commit us to saying that \(\upalpha \) believes something she cannot possibly believe. It is worth noting that if we can only believe expressible propositions, and no expressible proposition is a subset of any other expressible proposition, then there is a genuine question as to the point of using this kind of settheoretic model to represent our beliefs in the first place. The machinery of set theory only comes into play at a single step, linking the (nonbelieved) proposition \(R_{\upalpha }(\upomega )\) to the set of expressible propositions that \(\upalpha \) believes, the latter of which has no interesting settheoretic structure. The only thing which unites the worlds in the proposition \(R_{\upalpha }(\upomega )\) is that they are those worlds where each member of a set of sentences \(\hbox {S}_{1}, \hbox {S}_{2}, \hbox {S}_{3}, \ldots \) is true—and characterising that proposition amounts to just listing all and only those sentences which express something \(\upalpha \) believes. What we’ve done with \(R_{\upalpha }(\upomega )\) and \(\subseteq \), we could have done more perspicuously with a simple list of sentences. We gain nothing in economy by the addition of \(R_{\upalpha }(\upomega )\), and modelling beliefs as supersets of \(R_{\upalpha }(\upomega )\) doesn’t seem to illuminate anything of interest.^{9}
3 The problems of probabilistic coherence
In the rest of this paper, I want to focus on partial belief. In the present section, I will note how problems analogous to the traditional (full belief) problems of logical omniscience arise under a probabilistic model, and how different assumptions about the structure of \(\Omega \) affect them. As Lewis puts it:

[W]e must also provide for partial belief. Being a [doxastically accessible world] is not an all or nothing matter, rather it must admit of degree. The simplest picture, idealised to be sure, replaces the sharp-edged class of [doxastically accessible worlds] by a subjective probability distribution. ...We can say that a [doxastically accessible world] simpliciter is a possible [world which] gets a nonzero (though perhaps infinitesimal) share of probability, but the nonzero shares are not all equal. (1986, p. 30)
For the sake of concreteness, I outline one way to generalise the full belief model to partial beliefs, along the lines suggested by Lewis. I want to stress that what follows is an illustrative example only: many of the specific details are not crucial to my main argument (e.g., the use of a probability mass function \(\mathcal {D}\) to induce the credence function \(\mathcal {C}r\)). Readers already familiar with the idea of extending probability theory to an impossible worlds framework may choose to skim this section.
Let \(\Omega \) be any nonempty space of possible and/or impossible worlds.^{10} This time, instead of assigning a single proposition \(R_{\upalpha }(\upomega )\) as \(\upalpha \)’s doxastically accessible worlds, we will represent \(\upalpha \)’s total doxastic state using a probability distribution \(\mathcal {D}: \Omega \rightarrow [0, 1]\). One could interpret \(\mathcal {D}(\upomega )\) as representing \(\upalpha \)’s degree of belief that the actual world is \(\upomega \), for each \(\upomega \) in \(\Omega \), to the extent at least that (singleton sets of) worlds are to be included amongst the purported objects of partial belief. But this interpretation is unnecessary: \(\mathcal {D}\), like \(R_{\upalpha }(\upomega )\) earlier, should in the first instance be understood as a formal tool for modelling doxastic states in the manner to be outlined presently.
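One natural way to induce a credence function from \(\mathcal {D}\), in keeping with the Lewisian picture above, is by summation; here is a minimal sketch (the world names and mass values are my own illustrative choices):

```python
# A mass function D over a toy space of worlds; values are illustrative.
omega = ("w1", "w2", "w3")
D = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
assert abs(sum(D.values()) - 1.0) < 1e-9  # D is a probability distribution

def Cr(P):
    """Credence in proposition P (a set of worlds): the total D-mass on P."""
    return sum(D[w] for w in P)

# Basic coherence properties fall out of the definition immediately:
assert Cr(frozenset()) == 0                      # the empty proposition gets 0
assert abs(Cr(frozenset(omega)) - 1.0) < 1e-9    # the whole space gets 1
P1, P2 = frozenset({"w1"}), frozenset({"w1", "w2"})
assert Cr(P1) <= Cr(P2)                          # monotone along subsets
```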

We can then define a credence function \(\mathcal {C}r\) on a domain of propositions \({\varvec{\mathcal {B}}} \subseteq \wp (\Omega )\) by letting \(\mathcal {C}r(P)\) be the sum of the \(\mathcal {D}\)-values of the worlds in P. So defined, \(\mathcal {C}r\) satisfies:

Nonnegativity: If \(\emptyset \) is in \({\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\emptyset ) = 0\)

Normalisation: If \(\Omega \) is in \({\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Omega ) = 1\)

Monotonicity: For all pairs \(P_{1}, P_{2}\) in \({\varvec{\mathcal {B}}}\), if \(P_{1} \subseteq P_{2}\), then \(\mathcal {C}r(P_{1}) \le \mathcal {C}r(P_{2})\)

\({\varvec{\Sigma }}\)-Additivity: If \({\varvec{\mathcal {P}}}\) is any countable set of disjoint propositions in \({\varvec{\mathcal {B}}}\) whose union (\(\bigcup {\varvec{\mathcal {P}}}\)) is also in \({\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\bigcup {\varvec{\mathcal {P}}}) = \sum \nolimits _{P\in {\varvec{\mathcal {P}}}} \mathcal {C}r(P)\)

We might additionally assume that \({\varvec{\mathcal {B}}}\) is closed under the Boolean operations:

Booleanism: For all \(P, P_{1}, P_{2} \in \wp (\Omega )\),

(i) If \(P \in {\varvec{\mathcal {B}}}\), then \(P^{\mathrm{C}} \in {\varvec{\mathcal {B}}}\)

(ii) If \(P_{1}, P_{2} \in {\varvec{\mathcal {B}}}\), then \(P_{1} \cap P_{2} \in {\varvec{\mathcal {B}}}\)
But for now, suppose only that \({\varvec{\mathcal {B}}}\) includes all and only those propositions towards which \(\upalpha \) has partial beliefs, whatever they may be. In that case, a very natural way to read \(\mathcal {C}r\) is as a representation of \(\upalpha \)’s total degree of belief state:
P is believed by \(\upalpha \) to degree x if and only if \(\mathcal {C}r(P)=x\)
This generalises the earlier model of full belief quite nicely. On the simplest generalisation, say that full belief equates to degree of belief 1. Then, we will be able to characterise \(R_{\upalpha }(\upomega )\) as just that set of worlds which are assigned some positive value by \(\mathcal {D}\); thus, \(\mathcal {C}r(\Vert \hbox {S}\Vert ) = 1\) for every \(\Vert S\Vert \in {\varvec{\mathcal {B}}}\) such that \(R_{\upalpha }(\upomega ) \subseteq \Vert \hbox {S}\Vert \). But now we can also represent each of the many nonextremal grades of belief that \(\upalpha \) can have towards any proposition in \({\varvec{\mathcal {B}}}\), removing the sharp edges between belief and nonbelief.
So long as \(\Omega \) contains only possible worlds, \(\mathcal {C}r\) will then be subject to strong coherence constraints, for example:

(i) If S is a contradiction and \(\Vert \hbox {S}\Vert \in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Vert \hbox {S}\Vert ) = 0\)

(ii) If S is a tautology and \(\Vert \hbox {S}\Vert \in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Vert \hbox {S}\Vert ) = 1\)

(iii) If \(\hbox {S}_{1}\) implies \(\hbox {S}_{2}\) and \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert \in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Vert \hbox {S}_{1}\Vert ) \le \mathcal {C}r(\Vert \hbox {S}_{2}\Vert )\)

(iv) If \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are inconsistent and \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert , \Vert \lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\Vert \in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(\Vert \lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\Vert ) = \mathcal {C}r(\Vert \hbox {S}_{1}\Vert ) + \mathcal {C}r(\Vert \hbox {S}_{2}\Vert )\)
There are, of course, less demanding alternatives to the probabilistic model. Suppose, for instance, that we instead induce \(\mathcal {C}r\) from a possibility distribution, letting \(\mathcal {C}r(P)\) be the maximum of the values that the distribution assigns to the worlds in P. Then:

If \(P_{1}, P_{2}, P_{1} \cup P_{2} \in {\varvec{\mathcal {B}}}\), then \(\mathcal {C}r(P_{1} \cup P_{2}) = \max \{\mathcal {C}r(P_{1}), \mathcal {C}r(P_{2})\} \le \mathcal {C}r(P_{1}) + \mathcal {C}r(P_{2})\)
So, to a limited extent, using possibility distributions would let us avoid strict probabilistic coherence—though, subadditivity is still a very strong constraint! More importantly, \(\mathcal {C}r\) sodefined will still satisfy Nonnegativity, Normalisation, and Monotonicity, and so \(\mathcal {C}r\) will still be constrained by (i)–(iii). In that sense, the possibilistic model still has to deal with a version of the problems of probabilistic coherence.
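For illustration, here is a minimal possibilistic sketch (world names and weights are illustrative choices of mine): credence in a union is the maximum rather than the sum, yet Monotonicity survives.

```python
omega = ("w1", "w2", "w3")
pi = {"w1": 1.0, "w2": 0.6, "w3": 0.3}  # possibility distribution; max value is 1

def Cr(P):
    """Possibilistic 'credence': the largest weight of any world in P."""
    return max((pi[w] for w in P), default=0.0)

P1, P2 = frozenset({"w2"}), frozenset({"w3"})

# Maxitivity replaces additivity, giving subadditivity at most:
assert Cr(P1 | P2) == max(Cr(P1), Cr(P2)) <= Cr(P1) + Cr(P2)

# But Monotonicity still holds, so constraints (i)-(iii) still bind:
assert all(Cr(A) <= Cr(A | B) for A in (P1, P2) for B in (P1, P2))
```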
The same applies more generally: the vast majority of formal systems for the representation of partial beliefs will have \(\mathcal {C}r\) satisfy at least one of Nonnegativity, Normalisation, and Monotonicity (or something very similar). Examples include Choquet capacities (Choquet 1954; applied in, e.g., Tversky and Kahneman 1992), Dempster–Shafer belief and plausibility functions (Dempster 1968; Shafer 1976), ranking functions (Spohn 2012), and the set-valued functions of Levi (1974) and Kyburg (1992). Where \(\Omega \) consists of only possible worlds, all of these models will have to deal with very strong coherence constraints.
But never fear—impossible worlds to the rescue! If we were to instead define the probability distribution \(\mathcal {D}\) on a space of worlds \(\Omega \) that satisfies Unrestricted Comprehension, then \(\mathcal {C}r\) need not satisfy any of the constraints (i)–(iv). Indeed \(\mathcal {C}r\) can be almost as wild and wacky as we want it to be. For instance, suppose that \(\mathcal {D}\) assigns a positive value only to worlds where S and \(\hbox {S} \wedge \lnot \hbox {S}\) are both true, and never to worlds where \(\lnot \hbox {S}\) or \(\lnot (\hbox {S} \wedge \lnot \hbox {S})\) are true. Now, assuming that all of the relevant propositions are in \(\mathcal {C}r\)’s domain, \(\mathcal {C}r(\Vert \hbox {S}\Vert ) = \mathcal {C}r(\Vert \hbox {S} \wedge \lnot \hbox {S}\Vert ) = 1\), and \(\mathcal {C}r(\Vert \lnot \hbox {S}\Vert ) = \mathcal {C}r(\Vert \lnot (\hbox {S} \wedge \lnot \hbox {S})\Vert ) = 0\).
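The example in this paragraph can be run directly in the earlier finite setting (sentence strings are illustrative, with '~S' and 'S&~S' syntactically independent names; all of \(\mathcal {D}\)'s mass sits on a single impossible world):

```python
from itertools import chain, combinations

L = ("S", "~S", "S&~S", "~(S&~S)")  # illustrative toy language

def powerset(xs):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

omega = powerset(L)

def truth_set(s):
    return frozenset(w for w in omega if s in w)

# All mass on one impossible world where S and S&~S are true,
# and neither ~S nor ~(S&~S) is.
star = frozenset({"S", "S&~S"})
D = {w: (1.0 if w == star else 0.0) for w in omega}

def Cr(P):
    return sum(D[w] for w in P)

# The induced Cr is maximally confident in a contradiction:
assert Cr(truth_set("S")) == Cr(truth_set("S&~S")) == 1.0
assert Cr(truth_set("~S")) == Cr(truth_set("~(S&~S)")) == 0.0
```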

Really Unrestricted Comprehension: For any set of sentences \({\varvec{\mathcal {S}}} \subseteq {\varvec{\mathcal {L}}}\), there will be worlds in \(\Omega \) where every \(\hbox {S} \in {\varvec{\mathcal {S}}}\) is true and no \(\hbox {S} \in {\varvec{\mathcal {L}}} \backslash {\varvec{\mathcal {S}}}\) is true
The idea of using a probability function over a space of possible and impossible worlds to model probabilistically incoherent agents is common in conversation, but it also shows up at several points in the literature. Cozic (2006) has recently advocated the strategy, and Halpern and Pucella (2011, §4) make similar points. Lipman (1997, 1999) attempts to deal with logical non-omniscience by deriving a probabilistic expected utility representation from an agent’s preferences, where the probability function in question is defined over a state-space involving both possibilities and impossibilities. Easwaran (2014, esp. pp. 1–2, 29) also suggests using impossible worlds in our probabilistic models of agents’ doxastic states, albeit in a slightly different context.
At the risk of belabouring a point that will already be clear to many, let me summarise the discussion of this section. We can see the ‘problems of probabilistic coherence’ as a consequence of a sequence of modelling choices. First, we need to choose what kinds of worlds get into \(\Omega \). Second, we need to define the function \(\mathcal {C}r\), and characterise the structure of its domain, \({\varvec{\mathcal {B}}}\). And finally, we need to say something about how we are going to interpret \(\mathcal {C}r\). In this respect, things are closely analogous to the problems of logical omniscience, and the same basic strategies for response are applicable. The response we’ve discussed centres upon the first modelling choice: by introducing enough impossible worlds into \(\Omega \), we can avoid all of the probabilistic coherence constraints (i) through (iv) above, and indeed, we can make \(\mathcal {C}r\) appear as irrational as we like.
4 The problem of inexpressibility
In this section, I will argue that if \(\Omega \) satisfies a very weak (and very plausible) richness assumption, then either Booleanism is false, or our model won’t plausibly represent highly logically fallible agents—which, of course, was the central motivation for introducing impossible worlds in the first place. The most straightforward way to make the argument begins with the premise that whatever \({\varvec{\mathcal {B}}}\) is, it should contain only propositions which are expressible in \({\varvec{\mathcal {L}}}\).
For any \(\hbox {S}_{1}\), take the set of all worlds in \(\Omega \) where \(\hbox {S}_{1}\) is true, and consider its complement \(\Vert \hbox {S}_{1}\Vert ^{\mathrm{C}}\). If Unrestricted Comprehension holds, then there is no \(\hbox {S}_{2}\) such that \(\Vert \hbox {S}_{2}\Vert = \Vert \hbox {S}_{1}\Vert ^{\mathrm{C}}\). As we’ve already noted, for any pair of sentences \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\), there will be worlds where \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are both true. And if Really Unrestricted Comprehension also holds, then there will also be worlds where neither \(\hbox {S}_{1}\) nor \(\hbox {S}_{2}\) is true. In either case, \(\Vert \hbox {S}_{1}\Vert \) and \(\Vert \hbox {S}_{2}\Vert \) cannot be complements of one another. Hence, if \(\Vert \hbox {S}_{1}\Vert \) is expressible, then \(\Vert \hbox {S}_{1}\Vert ^{\mathrm{C}}\) is inexpressible. And since we’ve assumed that \({\varvec{\mathcal {B}}}\) is closed under complementation, it follows that there must be at least as many inexpressible propositions in \(\mathcal {C}r\)’s domain as there are expressible propositions. And that’s not a nice result: \({\varvec{\mathcal {L}}}\) is supposed to include a sentence capable of expressing every object of thought towards which we might have partial beliefs, and yet the model we’ve now developed is assigning values to propositions expressed by no sentence of \({\varvec{\mathcal {L}}}\).
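The point is easy to check computationally. The following is a purely illustrative sketch (not part of the formal apparatus of the paper): worlds are modelled, per Really Unrestricted Comprehension, as arbitrary sets of sentences drawn from a toy three-sentence language, and no sentence of that language expresses the complement of \(\Vert \hbox {S}_{1}\Vert \).

```python
from itertools import chain, combinations

# Toy 'language' of three sentences; under Really Unrestricted
# Comprehension, a world is any subset of the language whatsoever.
L = ["S1", "S2", "S3"]
omega = {frozenset(c) for c in chain.from_iterable(
    combinations(L, r) for r in range(len(L) + 1))}

def expressed(s):
    """||S||: the set of worlds in omega at which sentence s is true."""
    return frozenset(w for w in omega if s in w)

# The complement of ||S1|| within omega...
complement = frozenset(omega) - expressed("S1")

# ...is expressed by no sentence: every ||S2|| contains worlds where S1
# is also true, so no ||S2|| can coincide with ||S1||'s complement.
assert all(expressed(s) != complement for s in L)
```

With three sentences there are \(2^{3} = 8\) worlds, only three of which are expressible propositions' worth of structure apart; the complement of any expressible proposition falls outside the expressible ones.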

Restriction R1:

For every \(\hbox {S}_{1}\) such that \(\Vert \hbox {S}_{1}\Vert \in {\varvec{\mathcal {B}}}\), there is an \(\hbox {S}_{2}\) such that for any \(\upomega \in \Omega \), exactly one of \(\hbox {S}_{1}\) or \(\hbox {S}_{2}\) is true at \(\upomega \)

Restriction R2:

For every pair \(\hbox {S}_{1}, \hbox {S}_{2}\) such that \(\Vert \hbox {S}_{1}\Vert , \Vert \hbox {S}_{2}\Vert \in {\varvec{\mathcal {B}}}\), there is an \(\hbox {S}_{3}\) such that for any \(\upomega \in \Omega , \hbox {S}_{1}\) and \(\hbox {S}_{2}\) are both true at \(\upomega \) if and only if \(\hbox {S}_{3}\) is true at \(\upomega \)
Exactly how restrictive R1 and R2 end up being depends heavily on which expressible propositions end up included in \({\varvec{\mathcal {B}}}\). We can safely assume that whatever \({\varvec{\mathcal {B}}}\) is, it will be richly populated with plenty of expressible propositions, so R1 and R2 are never trivially satisfied. On the other hand, if there are sentences whose characteristic propositions are not in \({\varvec{\mathcal {B}}}\), then R1 and R2 are consistent with a certain degree of freedom in relation to those sentences. But this is not especially interesting: since \({\varvec{\mathcal {B}}}\) contains all of the propositions in \(\mathcal {C}r\)’s domain, whatever is true of the expressible propositions not in \({\varvec{\mathcal {B}}}\) will be irrelevant to the model of \(\upalpha \)’s degrees of belief that we are left with. Hence, we can simplify the discussion and pretend henceforth that \({\varvec{\mathcal {B}}} = \{\Vert \hbox {S}\Vert : \hbox {S} \in {\varvec{\mathcal {L}}}\}\).
The key point in what follows will be that how R1 and R2 can be implemented is constrained by what kinds of worlds we want to keep in \(\Omega \). For example, if we were to require that \(\Omega \) contains at least all of the logically possible worlds, then the \(\hbox {S}_{2}\) referred to in R1 must be logically equivalent to \(\lnot \hbox {S}_{1}\) (if not identical to \(\lnot \hbox {S}_{1})\): every logically possible world where \(\hbox {S}_{1}\) doesn’t hold is one where \(\lnot \hbox {S}_{1}\) holds, and if \(\hbox {S}_{2}\) and \(\lnot \hbox {S}_{1}\) are true at the very same logically possible worlds then they must be logically equivalent.
I will not assume that \(\Omega \) contains every logically possible world, though I think that something in the vicinity must be true if we want to use \(\mathcal {C}r\) as a model of ideal agents as well as nonideal agents. Instead, I will assume something much weaker. Say that \(\hbox {S}_{1}\) is blatantly inconsistent with \(\hbox {S}_{2}\) just in case either \(\hbox {S}_{1} = \lnot \hbox {S}_{2}\) or \(\hbox {S}_{2} = \lnot \hbox {S}_{1}\). Then my assumption can be expressed as follows:
Minimal Richness:
For any consistent set of sentences \(\{\hbox {S}_{1}, \hbox {S}_{2}, \hbox {S}_{3}\}\), there is at least one world \(\upomega \in \Omega \) such that:
 (i)
\(\hbox {S}_{1}, \hbox {S}_{2}\), and \(\hbox {S}_{3}\) are all true at \(\upomega \), and
 (ii)
If \(\hbox {S}_{4}\) is blatantly inconsistent with any of \(\hbox {S}_{1}, \hbox {S}_{2}\), or \(\hbox {S}_{3}\), then \(\hbox {S}_{4}\) is not true at \(\upomega \)
So let’s consider R1, which states that every \(\hbox {S}_{1}\) can be paired with another sentence \(\hbox {S}_{2}\) which is true at a world \(\upomega \) if and only if \(\hbox {S}_{1}\) is not true at \(\upomega \). If Minimal Richness holds, then whatever \(\hbox {S}_{2}\) ends up being, it must be logically equivalent to \(\lnot \hbox {S}_{1}\). For suppose that \(\hbox {S}_{2}\) is not logically equivalent to \(\lnot \hbox {S}_{1}\). Then either \(\hbox {S}_{2}\) does not imply \(\lnot \hbox {S}_{1}\), or \(\lnot \hbox {S}_{1}\) does not imply \(\hbox {S}_{2}\) (or both). If \(\hbox {S}_{2}\) does not imply \(\lnot \hbox {S}_{1}\), then {\(\hbox {S}_{2}, \hbox {S}_{1}\)} is consistent, and there will be at least one world where \(\hbox {S}_{2}\) and \(\hbox {S}_{1}\) are both true, which contradicts R1. On the other hand, if \(\lnot \hbox {S}_{1}\) does not imply \(\hbox {S}_{2}\), then \(\{\lnot \hbox {S}_{1}, \lnot \hbox {S}_{2}\}\) is consistent and there will be worlds where \(\lnot \hbox {S}_{1}\) and \(\lnot \hbox {S}_{2}\) are both true. Since \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are blatantly inconsistent with \(\lnot \hbox {S}_{1}\) and \(\lnot \hbox {S}_{2}\) respectively, this would have to be a world where neither \(\hbox {S}_{1}\) nor \(\hbox {S}_{2}\) is true, which also contradicts R1. Hence, any sentence \(\hbox {S}_{2}\) that satisfies R1 must be logically equivalent to \(\lnot \hbox {S}_{1}\), if Minimal Richness is true.
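The reasoning just given can be illustrated with a small computational model (a toy sketch only, not part of the formal apparatus): take a language of four literals over two atoms, let \(\Omega \) contain every blatantly consistent set of literals, and check which sentences can be paired with ‘A’ under R1, before and after imposing NonContradiction and Maximal Specificity.

```python
from itertools import chain, combinations

# Two atoms; the sentences are the four literals. Blatant inconsistency
# is purely syntactic: S versus ~S.
atoms = ["A", "B"]
sentences = atoms + ["~" + a for a in atoms]

def neg(s):
    return s[1:] if s.startswith("~") else "~" + s

def blatantly_consistent(w):
    return not any(neg(s) in w for s in w)

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Minimal Richness guarantees (at least) every blatantly consistent set
# of literals as a world -- including incomplete worlds like the empty one.
worlds = [frozenset(w) for w in powerset(sentences)
          if blatantly_consistent(frozenset(w))]

def satisfies_R1(s1, s2, ws):
    # R1: at every world, exactly one of s1, s2 is true.
    return all((s1 in w) != (s2 in w) for w in ws)

# With incomplete worlds around, no sentence at all pairs with 'A':
assert not any(satisfies_R1("A", s, worlds) for s in sentences)

# Impose NonContradiction + Maximal Specificity (complete, blatantly
# consistent worlds only); now '~A' -- and only '~A' -- pairs with 'A'.
complete = [w for w in worlds if all((a in w) != (neg(a) in w) for a in atoms)]
assert [s for s in sentences if satisfies_R1("A", s, complete)] == ["~A"]
```

The sketch mirrors the two halves of the argument: so long as incomplete worlds remain in \(\Omega \), R1 cannot be satisfied at all, and once they are removed the only sentence that can do the job is the blatant negation.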
This leaves us with a limited range of options for implementing R1. The most straightforward way would be to let the required sentence \(\hbox {S}_{2}\) just be \(\lnot \hbox {S}_{1}\). In effect, this is just to assume that the worlds in \(\Omega \) satisfy NonContradiction and Maximal Specificity. And it’s easy enough to think of some plausible motivations for assuming NonContradiction: one could argue that no model of a minimally rational agent’s doxastic state should represent her as having any degree of belief that both \(\hbox {S}_{1}\) and \(\lnot \hbox {S}_{1}\) could be true simultaneously (cf. Lewis 2004; Bjerring 2013; Jago 2014b). To the extent that we make errors of logical reasoning, they tend to be more subtle—e.g., a failure to deduce a downstream consequence of what we believe, rather than believing in blatant inconsistencies.
Motivating Maximal Specificity is a little more difficult, as it amounts to removing all incomplete worlds from \(\Omega \). Some are independently happy to do this (e.g., Bjerring 2014; Bjerring and Schwarz 2017, p. 28; cf. Stalnaker 1996). For others, incomplete worlds are a crucial aspect of the model (Jago 2014a, b). Furthermore, it’ll be a consequence of assuming NonContradiction and Maximal Specificity together that we lose the capacity to have \(\mathcal {C}r\) assign wholly independent values to the pairs \(\Vert \hbox {S}\Vert \) and \(\Vert \lnot \hbox {S}\Vert \). Indeed, the worlds we are left with are closed under the rules of double negation introduction and elimination, with \(\mathcal {C}r\) satisfying \(\mathcal {C}r(\Vert \hbox {S}\Vert ) = \mathcal {C}r(\Vert \lnot \lnot \hbox {S}\Vert )\) for all \(\Vert \hbox {S}\Vert \) in \({\varvec{\mathcal {B}}}\). This is already quite a strong restriction.

(SNI) From S, infer \(\lnot \lnot \lnot \lnot \lnot \lnot \hbox {S}\)

(SNE) From \(\lnot \lnot \lnot \lnot \lnot \lnot \hbox {S}\), infer S

\({\varvec{\wedge }}\) Consistency:

For all \(\hbox {S}_{1}, \hbox {S}_{2} \in {\varvec{\mathcal {L}}}\) and all \(\upomega \in \Omega \), \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are both true at \(\upomega \) if and only if \(\hbox {S}_{1} \wedge \hbox {S}_{2}\) is true at \(\upomega \)
Now I want to be clear that I’ve not yet said that \(\Omega \) contains no impossible worlds whatsoever. If there are irreducibly disjunctive sentences in \({\varvec{\mathcal {L}}}\), then a sentence like \(\hbox {S}_{1} \vee \hbox {S}_{2}\) may still behave erratically by, e.g., not being true at all and only the worlds where at least one of \(\hbox {S}_{1}\) or \(\hbox {S}_{2}\) is true. Likewise, if \({\varvec{\mathcal {L}}}\) contains a primitive conditional connective \(\rightarrow \) (i.e., where \(\hbox {S}_{1} \rightarrow \hbox {S}_{2}\) is not simply a shorthand for \(\lnot (\hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\)), then we’ve not said anything to guarantee that the worlds in \(\Omega \) must validate even very simple inference rules like modus ponens. Thus, there may still be plenty of logically impossible worlds in \(\Omega \). Nevertheless, with NonContradiction, Maximal Specificity, and \(\wedge \) Consistency alone we’ve managed to close \(\Omega \) under a very strong consequence relation. Indeed, \(\Omega \) is already apt only for modelling agents who are very good logical reasoners: for every classically valid inference pattern \(\hbox {S}_{1}\), \(\hbox {S}_{2}\), ...\(\Rightarrow \) S, the worlds in \(\Omega \) will be closed under a corresponding inference which replaces each of \(\hbox {S}_{1}, \hbox {S}_{2}\), ...and S with a classically equivalent sentence expressed using only \(\lnot \) and \(\wedge \). For instance, while \(\Omega \) might not be closed under disjunction introduction, we do know that at any world where either \(\hbox {S}_{1}\) or \(\hbox {S}_{2}\) is true, \(\lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\) will also be true. And at any world where \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are true, \(\lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2}) \wedge \lnot (\lnot \hbox {S}_{1} \wedge \hbox {S}_{2}) \wedge \, \lnot (\hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\) is true.
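To make the closure point concrete, here is a small sketch (purely illustrative): on the assumption that worlds satisfying NonContradiction, Maximal Specificity, and \(\wedge \) Consistency behave like classical valuations on the \(\lnot /\wedge \) fragment, disjunction introduction survives in its surrogate form \(\lnot (\lnot \hbox {S}_{1} \wedge \lnot \hbox {S}_{2})\).

```python
# Sentences of the ¬/∧ fragment as nested tuples; a world is modelled as
# the set of atoms true at it, which (given NonContradiction, Maximal
# Specificity, and ∧ Consistency) fixes the truth of every ¬/∧ sentence.

def true_at(world, s):
    """s is an atom (str), ('not', s1), or ('and', s1, s2)."""
    if isinstance(s, str):
        return s in world
    if s[0] == "not":
        return not true_at(world, s[1])
    return true_at(world, s[1]) and true_at(world, s[2])

def surrogate_or(s1, s2):
    # The ¬/∧ surrogate of disjunction: ¬(¬S1 ∧ ¬S2)
    return ("not", ("and", ("not", s1), ("not", s2)))

# Wherever S1 or S2 holds, the surrogate disjunction holds as well.
for world in [set(), {"S1"}, {"S2"}, {"S1", "S2"}]:
    if true_at(world, "S1") or true_at(world, "S2"):
        assert true_at(world, surrogate_or("S1", "S2"))
```

The same translation scheme turns every classically valid pattern into a \(\lnot /\wedge \) pattern that these worlds validate, which is exactly the closure the main text describes.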
What we have, in effect, is a model of an agent who is logically infallible with respect to a huge range of sometimes very complex inferences. That the agent might also be logically incompetent with respect to other very basic inferences hardly seems to help.
In summary: given Minimal Richness, if we want to preserve Booleanism alongside the expressibility hypothesis, then we have to close \(\Omega \) under some (classically valid) inferences. We have a certain degree of choice as to what inferences these might be (e.g., double negation elimination versus sextuple negation elimination). But closing \(\Omega \) under the simplest and most natural rules—that is, those rules which ordinary agents are most likely to consistently follow—leads us directly into closing \(\Omega \) under a complete fragment of classical logic, and, plausibly, under classical logic simpliciter.
5 Responses
At the end of Sect. 3, I noted that the problems of probabilistic coherence result from a sequence of choices, about the formal properties and interpretation of \(\Omega \), \({\varvec{\mathcal {B}}}\), and \(\mathcal {C}r\). All standard models of partial belief presuppose that \({\varvec{\mathcal {B}}}\) satisfies Booleanism, and that \(\mathcal {C}r\) satisfies at least Nonnegativity, Normalisation, and Monotonicity, or something very similar; combined with a space of worlds limited only to the possible, these quickly get us to some very strong coherence constraints on degrees of belief. We can avoid these constraints without making any significant changes to the standard models if \(\Omega \) includes enough impossible worlds, but doing so will generate a problem with expressibility. That leaves us with the following broad options:
 1.
Keep the standard probabilistic model of partial belief, and bite the bullet on the matter of probabilistic coherence.
 2.
Develop a nonstandard model of partial belief which keeps Booleanism but avoids the probabilistic coherence constraints without resorting to impossible worlds.
 3.
Develop a nonstandard model of partial belief which involves impossible worlds but doesn’t presuppose Booleanism.
 4.
Offer an alternative interpretation of \(\mathcal {C}r\) (i.e., such that \(\mathcal {C}r\) being defined for inexpressible propositions does not conflict with the expressibility hypothesis).
 5.
Reject the expressibility hypothesis.
With respect to the third strategy, it’s worth noting that Booleanism is not something to be given up lightly. To be sure, the definition of \(\mathcal {C}r\) in terms of a probability distribution \(\mathcal {D}\) that I gave in Sect. 3 in no way required any special assumptions about the structure of \({\varvec{\mathcal {B}}}\); so it’s clear that we can construct a recognisably ‘probabilistic’ model of partial belief without assuming Booleanism. But then we can raise a version of the point made at the end of Sect. 2: if we let \(\Omega \) satisfy Really Unrestricted Comprehension, and simply define \({\varvec{\mathcal {B}}}\) as \(\{\Vert \hbox {S}\Vert : \hbox {S} \in {\varvec{\mathcal {L}}}\}\), then while it’s true that \(\mathcal {D}\) will let us encode any arbitrary assignment of values into \(\mathcal {C}r\), it’s hard to see why we should want to use a probability distribution in the first place. \(\mathcal {D}\) itself doesn’t directly represent anything about \(\upalpha \)’s doxastic state—no S will be true at just one world \(\upomega \), so \(\mathcal {D}(\upomega )\) cannot be interpreted as a degree of belief towards the singleton proposition {\(\upomega \)}. What we really have is just a complicated way of listing out \(\upalpha \)’s degree of belief states, with the probabilistic aspects adding nothing to efficiency or illumination.
But that isn’t the only worry in the vicinity. A more important concern, I think, arises from the fact that Booleanism frequently comes up as a basic assumption in various representation theorems, where the requirement that \({\varvec{\mathcal {B}}}\) has some minimally rich algebraic structure is a prerequisite for our being able to assign numerical values to the contents of \({\varvec{\mathcal {B}}}\) in a meaningful and systematic way. For example, the assumption plays a role throughout Jeffrey’s (1990) representation theorem for expected utility theory—where, if we were to assume that the space of thinkable propositions \({\varvec{\mathcal {B}}}\) was such that none of its members is a subset of any other members, almost all of his axioms would be either meaningless or trivial. Booleanism is a standard assumption for theories of decision making and uncertainty, with almost all axiomatic decision theories being built around it. Or consider the common approach to characterising numerical degrees of belief defined in terms of qualitative belief orderings over propositions, based on the work of de Finetti (1931) and Scott (1964). Representation theorems which take us from qualitative belief orderings to probabilities are importantly dependent on \({\varvec{\mathcal {B}}}\) having a rich algebraic structure. Without something like the axiom of qualitative additivity—that if \(P_{1}\) and \(P_{2}\) both have null intersection with \(P_{3}\), then one holds \(P_{1}\) to be more likely than \(P_{2}\) if and only if one holds \(P_{1} \cup P_{3}\) to be more likely than \(P_{2} \cup P_{3}\)—the qualitative belief ordering would lack a sufficiently rich structure to support anything more than a simple (and representationally inadequate) ordinal scale.^{14}
With respect to the fourth strategy, we could perhaps keep the probabilistic model as it is (more or less), but make changes to how we interpret \(\mathcal {C}r\).^{15} For instance, instead of saying that \(\mathcal {C}r(P)=x\) if and only if \(\upalpha \) has degree of belief x towards some object of belief represented by P, we might instead say that \(\mathcal {C}r\) represents \(\upalpha \)’s degrees of belief only where the propositions in question are expressible. But what then of the values that \(\mathcal {C}r\) assigns to inexpressible propositions? One thought would be to say that while \(\mathcal {C}r\) represents \(\upalpha \)’s degrees of belief when P is expressible, it represents some other propositional attitude \(\upphi \) when P is inexpressible. For instance, one might think that if P is expressible, then \(\mathcal {C}r(P^{\mathrm{C}})\) represents \(\upalpha \)’s degree of rejection towards P, which plausibly is \(1 - \mathcal {C}r(P)\). However, this kind of ‘rejectionist’ proposal will only work if the complement of every inexpressible proposition is expressible, which is not in general the case. In particular, the domain of \(\mathcal {C}r\) has to be closed under intersections and unions, and the complement of the (inexpressible) intersection or union of two expressible propositions will often be itself inexpressible.
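The failure of the rejectionist proposal is easy to exhibit in a toy model (a purely illustrative sketch, with worlds modelled, per Really Unrestricted Comprehension, as arbitrary sets of sentences from a three-sentence language): the intersection of two expressible propositions is inexpressible, and so is its complement, leaving no expressible P for the ‘degree of rejection’ reading to latch onto.

```python
from itertools import chain, combinations

# Toy model: a world is any subset of a three-sentence language.
L = ["S1", "S2", "S3"]
omega = {frozenset(c) for c in chain.from_iterable(
    combinations(L, r) for r in range(len(L) + 1))}

def expressed(s):
    """||S||: the set of worlds at which sentence s is true."""
    return frozenset(w for w in omega if s in w)

expressible = {expressed(s) for s in L}

# Booleanism puts ||S1|| ∩ ||S2|| into B, and closure under complements
# puts its complement there too; neither is expressed by any sentence.
meet = expressed("S1") & expressed("S2")
comp = frozenset(omega) - meet
assert meet not in expressible
assert comp not in expressible  # no expressible P for 'rejection' to target
```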
Of course, there may exist some other broadly ‘doxastic’ attitude \(\upphi \) that I’ve not considered, which takes inexpressible propositions as its objects—but what reason do we have for positing the existence of this \(\upphi \), beyond the desire to preserve some modelling assumptions?
6 The expressibility hypothesis (again)
Finally, one may want to go after the assumption that there exists an \({\varvec{\mathcal {L}}}\) of the kind described in Sect. 1, in which everything that \(\upalpha \) believes or partially believes is expressible. If this is false, then the presence of inexpressible propositions in the domain of \(\mathcal {C}r\) is perhaps even to be expected, not shunned. Maybe we have just discovered that sometimes our partial beliefs towards expressible propositions come hand in hand with partial beliefs towards inexpressible propositions; the latter are perfectly legitimate objects of thought, but not all such objects are expressible.
First things first, it should be noted that there are accounts of what worlds are which cannot plausibly avoid a version of my argument by denying the expressibility hypothesis. For example, Nolan (1997) favours an approach where (in his terminology) ‘propositions’—the meanings of sentences and the objects of thought—are taken to be the fundamental entities from which worlds are constructed. On this picture, possible worlds are maximal consistent sets of propositions à la Adams (1974), while impossible worlds are those sets of propositions which are inconsistent and/or nonmaximal. Adopting this view, we could let \({\varvec{\mathcal {L}}}\) simply be the class of all propositions qua objects of thought, trivialising the question as to whether \({\varvec{\mathcal {L}}}\) is ‘expressively rich enough’ to capture every belief that \(\upalpha \) might have. We can then easily see that once something like Unrestricted Comprehension holds, there will be sets of worlds with no proposition in common amongst their members. These sets of worlds will not only be linguistically inexpressible, but quite literally unthinkable.^{16}
Furthermore, I have already noted Jago’s work on the expressiveness of Lagadonian languages in Sect. 1, which undergirds his linguistic ersatz account of impossible worlds as arbitrary sets of sentences taken from a pre-specified ‘world-making’ language \({\varvec{\mathcal {L}}}\). And note the central importance of the expressibility hypothesis to the account, according to which a set-of-worlds proposition P represents some content C just in case, for every world \(\upomega \) in P, there is a sentence S in \(\upomega \) which expresses that C. In general, this brand of linguistic ersatzer argues for the representational adequacy of their propositions qua sets of ‘worlds’ by arguing first that the basic world-making language is up to the task of distinguishing between all possible contents of belief, from which it quickly follows that sets of sets of these sentences can distinguish between different belief contents—for the simple reason that there is a one-to-one correspondence between the set of sentences S of a language \({\varvec{\mathcal {L}}}\) and the set of \(P \subseteq {\varvec{\mathcal {L}}}\) such that \(\hbox {S} \in P\). The expressiveness of the ersatz sets-of-worlds model is directly grounded in the expressiveness of the language it’s built upon, with propositional representation achieved directly through the meanings of the sentences shared by the worlds within the propositions.
But I don’t want my argument to rest upon specific approaches to characterising worlds. So, to conclude the discussion, I will proceed as follows. First, I’ll make a few general points in favour of the expressibility hypothesis. I don’t take any of these to be conclusive; much like the present state of the literature on the expressibility of thought, there is plenty of space for disagreement here. It is enough to show, however, that denying the expressibility hypothesis is no trivial matter. Secondly, and much more importantly, I’ll end by saying why I don’t think that denying the expressibility hypothesis is the right way to respond to the argument.
Most of these discussions focus on natural languages, which makes it a little hard to apply them to the non-natural language \({\varvec{\mathcal {L}}}\). Of particular note is that natural languages will contain a variety of context-dependent expressions which serve to expand their expressiveness, whereas I’ve stipulated that the sentences of \({\varvec{\mathcal {L}}}\) have their meanings independent of context. Since I’ve made very few substantive assumptions about \({\varvec{\mathcal {L}}}\), it’s hard to see why there would be any particular problems for applying lessons drawn from natural languages to a language \({\varvec{\mathcal {L}}}\) besides those which arise from context-sensitivity. Certainly, the fact that the interpretation of \({\varvec{\mathcal {L}}}\)’s sentences is unambiguous and precise shouldn’t give us any reason to think that it’s less likely we’ll find the right sentences in \({\varvec{\mathcal {L}}}\).

Thoughts differ in all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought I have [...] It is of the essence of thought, not merely to be communicable, but to be communicable, without residue, by means of language. (1978, p. 142)
We could rerun the argument without supposing that \({\varvec{\mathcal {L}}}\) contains only context-insensitive expressions. We would then need to speak not of expressibility and inexpressibility simpliciter, but rather expressibility relative to a context. But, if it’s not already plausible that every object of belief is expressible in a context-insensitive language, then it’s not clear why every content of belief should be expressible in a context-sensitive language in a specific context. A better option, if we thought that every belief were expressible in some natural language \({\varvec{\mathcal {L}}}_{n}\), would be to take \({\varvec{\mathcal {L}}}_{n}\) as the basis for the construction of \({\varvec{\mathcal {L}}}\), which proceeds by systematically eliminating the context-sensitivity of \({\varvec{\mathcal {L}}}_{n}\) while preserving overall expressibility. The received view is that such an elimination is entirely possible—and indeed, easy. As Stalnaker puts it, it seems at first pass “easy to eliminate context-dependence [since for] any proposition expressed in context c by sentence S, we may simply stipulate that some other sentence \(\hbox {S}^\prime \) shall express, in all contexts, that same proposition” (Stalnaker 1984, pp. 151–152).^{17} If this kind of elimination strategy is viable, then we have every reason to think that whatever we can say in, e.g., English, we can say in a spruced-up and context-independent version of English.
But all this depends on a more general assumption that our beliefs ought to be linguistically expressible somehow or other, which the reader may very well doubt. Nevertheless, the existence of something much like \({\varvec{\mathcal {L}}}\) is strongly suggested by a wide variety of positions in philosophy. The assumption plays a role in important attempts to explain mental representation. If one accepts the arguments for the existence of a Language of Thought as the psychological basis for our capacity to have propositional attitudes, then the existence of a language like \({\varvec{\mathcal {L}}}\) seems hard to deny. According to this popular view, thinking in general is a computational process sensitive only to the (context-independent) syntax of strings of symbols in a compositional Language of Thought, and one has a belief with content P only in the event that they are appropriately related to a sentence in this language which means that P. The existence of a language rich enough to express each of our beliefs is also presupposed by a number of models of mental content. For instance, and besides the Lagadonian approaches already mentioned, Chalmers models the contents of thoughts—including our partial beliefs—as sets of scenarios, with each scenario being an ‘epistemically complete’ description of a way the world might be for all we know a priori, given in an idealised language consisting of vocabulary for describing the microphysical and phenomenal characteristics of the world (see his 2011, 2012). That is, each scenario is a (potentially infinitary) conjunction of sentences in an ideal language, with each scenario being inconsistent with every other scenario. To express any set of scenarios in this language, a (potentially infinitary) disjunction of scenarios will suffice.
With all that said, the recent literature has seen some purported counterexamples to my assumption about the expressibility of belief. Shaw (2013) develops a variation on the Berry paradox to argue for the existence of a kind of inexpressible thought content—an instance of a case which he says “happens on extremely rare occasions due to a particular kind of linguistic technicality” (p. 70). Hellie (2004) has also argued that there may be truths about phenomenal experience which we can appreciate but cannot express linguistically. And if one thinks that there is a onetoone correspondence between ways the world might be and possible belief contents, then there are also classic expressive inadequacy arguments involving qualitatively indiscernible individuals and alien properties, to the effect that no language can describe every possibility (e.g., Lewis 1986, p. 157ff; Bricker 1987). I will not discuss any of these points in detail. Perhaps each gives rise to a genuine problem for the expressibility hypothesis. But acquiescing on this point hardly seems to help with the problem currently at hand. The inexpressibility of most of \(\mathcal {C}r\)’s domain cannot be explained by an occasional linguistic technicality. And moreover, the inexpressible propositions that we have been describing are not plausibly about some ineffable aspect of our phenomenal experience, alien properties, or qualitatively indiscernible individuals.
If \({\varvec{\mathcal {L}}}\) lacks the expressive power to represent our thoughts about such things—so be it. Let \({\varvec{\mathcal {L}}}\) represent a language capable of expressing only those more mundane beliefs which are expressible, like the belief that roses are red. (If need be, let \({\varvec{\mathcal {L}}}\) be the set of declarative sentences of English, and fix a context.) What kind of content could the set of worlds where ‘Roses are red’ is not true represent, if not that roses are not red? Clearly, it has something to do with roses and redness—but what? We can’t express it, sure, but it doesn’t even seem like there’s anything content-like in the vicinity for us to believe. At best, the inexpressible propositions we’ve been talking about look like an artefact of the model, not some newly discovered kind of content towards which most of our beliefs are directed.
This is, of course, a version of the argument above against the hypothetical linguistic ersatzer who denies the expressibility hypothesis. The point here is general, and constitutes the central reason why going after the expressibility hypothesis looks like the wrong strategy. An adequate response to the argument of Sect. 4 can’t be to just point out that there may be some possible things that \(\upalpha \) could believe which are not expressible. The odd inexpressible object of thought here and there isn’t an immediate cause for concern: the underlying problem survives mere counterexamples to the existence of \({\varvec{\mathcal {L}}}\). Unless we make serious changes to the basic probabilistic model of our beliefs, so long as Booleanism and (Really) Unrestricted Comprehension are true, if you have a degree of belief x towards \(\Vert \hbox {S}\Vert \) you will have a degree of belief (1 − x) towards the mysteriously inexpressible proposition \(\Vert \hbox {S}\Vert ^{\mathrm{C}}\); and if you have degrees of belief x and y towards \(\Vert \hbox {S}_{1}\Vert \) and \(\Vert \hbox {S}_{2}\Vert \), then you’ll have some degree of belief \(z \le x, y\) towards the inexpressible \(\Vert \hbox {S}_{1}\Vert \cap \Vert \hbox {S}_{2}\Vert \) and \(((x+y) - z)\) towards \(\Vert \hbox {S}_{1}\Vert \cup \Vert \hbox {S}_{2}\Vert \). Inexpressibility on this model is not some esoteric phenomenon resting on a technicality, nor does it seem to be limited to a specific kind of topic (e.g., phenomenology, alien properties, and indiscernible individuals) about which we might have beliefs.
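These commitments can be made vivid with a small sketch (assuming, purely for illustration, a uniform distribution \(\mathcal {D}\) over a four-world toy space): once \(\mathcal {C}r\) is read off from \(\mathcal {D}\), the values assigned to the inexpressible complement, meet, and join are fixed by the values assigned to the expressible \(\Vert \hbox {S}_{1}\Vert \) and \(\Vert \hbox {S}_{2}\Vert \).

```python
from itertools import chain, combinations

# A four-world toy space over two sentences, with (for illustration) a
# uniform distribution D; Cr(P) is simply the D-mass of P.
L = ["S1", "S2"]
omega = [frozenset(c) for c in chain.from_iterable(
    combinations(L, r) for r in range(len(L) + 1))]
D = {w: 1.0 / len(omega) for w in omega}   # any distribution would do

def Cr(P):
    return sum(D[w] for w in omega if w in P)

P1 = frozenset(w for w in omega if "S1" in w)
P2 = frozenset(w for w in omega if "S2" in w)
x, y = Cr(P1), Cr(P2)
z = Cr(P1 & P2)

# The values on the inexpressible propositions are forced:
assert abs(Cr(frozenset(omega) - P1) - (1 - x)) < 1e-9   # complement: 1 - x
assert z <= min(x, y)                                    # meet: z <= x, y
assert abs(Cr(P1 | P2) - (x + y - z)) < 1e-9             # join: (x + y) - z
```

Whatever distribution is chosen, the agent is thereby represented as having attitudes towards at least as many inexpressible propositions as expressible ones.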
For similar reasons, I am not moved by simple cardinality arguments aimed at showing that we must accept the existence of inexpressible propositions, regardless of whether we adopt impossible worlds into our ontology or not. Some vigorously intuit that for any subset \({\varvec{\mathcal {S}}}\) of any language \({\varvec{\mathcal {L}}}\), \(\upalpha \) might (partially) believe that all and only the sentences of \({\varvec{\mathcal {S}}}\) are true. If \({\varvec{\mathcal {L}}}\) is set-sized, then the cardinality of \(\wp ({\varvec{\mathcal {L}}})\) is strictly greater than that of \({\varvec{\mathcal {L}}}\). It follows that \({\varvec{\mathcal {L}}}\) cannot contain a unique sentence S for each subset \({\varvec{\mathcal {S}}}\subseteq {\varvec{\mathcal {L}}}\) to the effect of ‘All and only the elements of \({\varvec{\mathcal {S}}}\) are true’. Thus, either the content in question is not expressible at all, or it cannot be expressed in \({\varvec{\mathcal {L}}}\)—either way, \({\varvec{\mathcal {L}}}\) is not up to the task of expressing everything that \(\upalpha \) might believe. But even if the intuition underlying this argument is correct—and it is by no means obvious that it is—the conclusion is merely that we must accept that we might have some inexpressible (partial) beliefs. What the argument doesn’t do is give us any reason to think that the algebra of propositions \({\varvec{\mathcal {B}}}\) that constitutes what \(\upalpha \) actually has partial beliefs towards is filled to the brim with inexpressible propositions. Indeed, it’s perfectly consistent with the argument’s conclusion that \({\varvec{\mathcal {B}}}\) contains no inexpressible propositions at all!
We get to keep the model only if we’re happy with the implication that thinkers systematically have at least as many partial beliefs towards inexpressible propositions as they do towards expressible propositions. And that is a hard pill to swallow. If we’re expected to swallow it, we’ll need good reasons to think (a) that these inexpressible propositions exist, (b) that they stand in such-and-such systematic relations to the expressible propositions, and (c) that they can be and indeed always are believed. And those reasons can’t be just that these are consequences of a model which includes possible and impossible worlds.
The probabilistic analogues of the problems of logical omniscience require some response. The solution we end up with may involve the introduction of impossible worlds, but this looks to be a viable solution only if we drop the very standard—and very important—assumption of Booleanism, or if we embrace the inexpressibility of most of our thoughts. Neither option seems particularly appealing, and we may well do better to look for a solution without the impossible.
Footnotes
 1.
By ‘total doxastic state’ I mean the sum total of facts about the subject’s doxastic attitudes broadly construed, i.e., \(\upalpha \)’s full beliefs, partial beliefs, comparative degrees of confidence, and so on—generally, those aspects of her mental life which characterise how she takes the world to be.
 2.
A note on this: I am ignoring any beliefs which might be, as Perry (1979) calls them, essentially indexical—e.g., the belief that I am here. The assumption that we can express irreducibly indexical beliefs in a language whose interpretation is by stipulation context-independent may rightly be doubted. But I am setting this complication aside because the arguments that follow can be naturally adapted to a centred worlds framework (see Lewis 1979), which would permit the inclusion of context-dependent sentences back into \({\varvec{\mathcal {L}}}\).
 3.
Compare the discussion on Daniel Nolan’s account of impossible worlds in Sect. 6. Nolan constructs his space of (possible and impossible) worlds out of a ‘language’ consisting of the objects of thought directly, using a version of the Really Unrestricted Comprehension principle that I discuss below.
 4.
A Lagadonian language is one wherein particulars are taken to be names of themselves, and properties and relations are taken to be predicates for themselves. For example, the content Frank is taller than Mary may (but need not) be treated as a construction out of Frank, Mary, and the is taller than relation.
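As a rough sketch of the idea (mine, not the author's), the Lagadonian naming map is just the identity function, and a content is a structure built from the things themselves rather than from separate symbols:

```python
# Toy sketch of a Lagadonian 'language': each item serves as its
# own name/predicate, so the naming map is the identity function.
def name_of(x):
    return x  # everything names itself

frank, mary = object(), object()           # stand-ins for two particulars
taller_than = ("relation", "taller-than")  # stand-in for the relation itself

# The content 'Frank is taller than Mary', built from the things
# themselves rather than from independent linguistic symbols.
content = (name_of(taller_than), name_of(frank), name_of(mary))
assert content == (taller_than, frank, mary)
```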
 5.
For instance, in response to the Russell–Kaplan paradox (see Davies 1981, p. 262; Kaplan 1995), Lewis (1986, pp. 104–107) argues that there are many more ways the world might be than there are possible functional roles, and hence more than there are possible belief contents—at least \(\beth _{3}\) for the former, and probably no more than \(\beth _{0}\) for the latter.
 6.
The use of impossible worlds to help solve the problem of logical omniscience and related problems in epistemic logic was explicitly introduced in Rantala (1982), although the idea can also be found in Hintikka (1975) and Cresswell (1973). Numerous authors have since made use of the idea, and a recent defence can be found in a series of works by Jago (2009, 2013, 2014a, 2015a, b) and Berto (2010). See also Nolan (1997, 2013), though Nolan’s general focus is on using impossible worlds to give a Lewisian semantics for counterpossible conditionals. I am focusing on proponents of the so-called “American stance” on impossible worlds; my arguments are not intended to touch upon the “Australasian” use of impossible worlds qua basis for an interpretation of some non-classical logic.
 7.
By ‘maximal set of sentences’ \({\varvec{\mathcal {S}}} \subseteq {\varvec{\mathcal {L}}}\), I mean a set such that for any \(S\,\in \,{\varvec{\mathcal {L}}}\), at least one of S or \(\lnot \hbox {S}\) is in \({\varvec{\mathcal {S}}}\).
 8.
Note that \(R_{\upalpha }(\upomega )\) can be inexpressible even if we have a name \(a_{i}\) for each of the worlds within \(R_{\upalpha }(\upomega )\) and \({\varvec{\mathcal {L}}}\) contains a way of saying “The actual world is \(a_{1}\) or \(a_{2}\) or ...” (or something to that effect). Assuming that such a sentence exists in \({\varvec{\mathcal {L}}}\), if an unrestricted comprehension principle holds then the sentence will be true at some of the worlds in \(R_{\upalpha }(\upomega )\), but it will also be false at some of those worlds (and true at some worlds outside of \(R_{\upalpha }(\upomega )\)).
 9.
 10.
To be clear: we are not making any assumptions yet about which worlds get into \(\Omega \); we will see how different assumptions about \(\Omega \) impact upon the probabilistic model as we go along.
 11.
Maximal Specificity says that \(\Vert \hbox {S}\Vert \cup \Vert \lnot \hbox {S}\Vert = \Omega \). Normalisation plus \(\varSigma \)-Additivity then imply that \(\mathcal {C}r(\Vert \hbox {S}\Vert \setminus \Vert \lnot \hbox {S}\Vert ) + \mathcal {C}r(\Vert \lnot \hbox {S}\Vert \setminus \Vert \hbox {S}\Vert ) + \mathcal {C}r(\Vert \hbox {S}\Vert \cap \Vert \lnot \hbox {S}\Vert ) = 1\). Since \(\mathcal {C}r(\Vert \hbox {S}\Vert \cap \Vert \lnot \hbox {S}\Vert ) \ge 0\), \(\mathcal {C}r(\Vert \hbox {S}\Vert ) \ge \mathcal {C}r(\Vert \hbox {S}\Vert \setminus \Vert \lnot \hbox {S}\Vert )\), and \(\mathcal {C}r(\Vert \lnot \hbox {S}\Vert ) \ge \mathcal {C}r(\Vert \lnot \hbox {S}\Vert \setminus \Vert \hbox {S}\Vert )\), it follows that \(\mathcal {C}r(\Vert \hbox {S}\Vert ) + \mathcal {C}r(\Vert \lnot \hbox {S}\Vert ) \ge 1\).
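The inequality can be checked on a small toy model (an illustration I am supplying, with an arbitrary uniform credence function): once a ‘glut’ world verifying both S and ¬S is admitted, the two truth sets overlap, and the credences assigned to them sum past 1.

```python
from fractions import Fraction

# Toy model: four worlds; at each world we record which of S, not-S hold.
# Maximal Specificity: every world verifies at least one of the pair;
# the 'glut' world w4 verifies both (an impossible world).
truths = {
    "w1": {"S"},
    "w2": {"S"},
    "w3": {"not-S"},
    "w4": {"S", "not-S"},
}
# An arbitrary credence function: equal weight on each world.
cr = {w: Fraction(1, 4) for w in truths}

def C(prop):
    """Credence of a set of worlds (additivity over singletons)."""
    return sum(cr[w] for w in prop)

S_set = {w for w, t in truths.items() if "S" in t}
notS_set = {w for w, t in truths.items() if "not-S" in t}

assert S_set | notS_set == set(truths)  # Maximal Specificity holds
total = C(S_set) + C(notS_set)
print(total)  # 5/4 -- strictly greater than 1, as the footnote predicts
```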
 12.
A referee suggests in response to this point that there may be limits on our capacity to believe or have varying degrees of belief towards multiple contents simultaneously even when they’re jointly consistent; e.g., if the contents expressed by \(\hbox {S}_{1}\), \(\hbox {S}_{2}\), and \(\hbox {S}_{3}\) are each particularly complex, then representational storage limits could prevent all three from being simultaneously believed to some positive degree or other. In that case, it may not be possible for \(\upalpha \) to have confidence regarding each of \(\hbox {S}_{1}, \hbox {S}_{2}\), and \(\hbox {S}_{3}\) at the same time, undercutting any immediate formal need for having a world \(\upomega \) in \(\Omega \) such that each of \(\hbox {S}_{1}\), \(\hbox {S}_{2}\), and \(\hbox {S}_{3}\) are true.
There may well be representational storage limits, as a contingent matter of fact, for certain kinds of nonideal agents. But suppose we restate Minimal Richness such that it quantifies only over triples \(\hbox {S}_{1}, \hbox {S}_{2}, \hbox {S}_{3}\) such that it is possible for \(\upalpha \) to have doxastic attitudes towards the contents expressed by \(\hbox {S}_{1}, \hbox {S}_{2}\), and \(\hbox {S}_{3}\) simultaneously. Now, the restricted richness condition in conjunction with R1 will imply that if \(\hbox {S}_{2}\) is true at all and only the worlds where \(\hbox {S}_{1}\) is not true, then either \(\hbox {S}_{2}\) is logically equivalent to \(\lnot \hbox {S}_{1}\), or it’s not possible for \(\upalpha \) to have attitudes regarding \(\hbox {S}_{2}\) while having attitudes regarding \(\hbox {S}_{1}\). Likewise, given R2 the restricted version of the condition implies that if \(\hbox {S}_{3}\) is true at all and only the worlds where \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are each true for a pair \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) which can be simultaneously entertained, then either \(\hbox {S}_{3}\) is logically equivalent to \(\hbox {S}_{1} \wedge \hbox {S}_{2}\), or \(\upalpha \) cannot have attitudes regarding \(\hbox {S}_{3}\) while also having attitudes towards \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\).
In each case, the latter disjunct would already be problematic given Booleanism. If \(\mathcal {C}r(\Vert \hbox {S}_{1}\Vert )\) is defined then so is \(\mathcal {C}r(\Vert \hbox {S}_{1}\Vert ^{\mathrm{C}})\). That is, if \(\Vert \hbox {S}_{1}\Vert \) is in \({\varvec{\mathcal {B}}}\), then \(\Vert \hbox {S}_{1}\Vert ^{\mathrm{C}}\) is in \({\varvec{\mathcal {B}}}\), so we should want to be able to say that \(\upalpha \) can have attitudes towards both propositions simultaneously. So, whatever sentence \(\hbox {S}_{2}\) holds at all and only the worlds where \(\hbox {S}_{1}\) doesn’t hold had better be logically equivalent to \(\lnot \hbox {S}_{1}\); and likewise regarding \(\Vert \hbox {S}_{3}\Vert = \Vert \hbox {S}_{1}\Vert \cap \Vert \hbox {S}_{2}\Vert \). Furthermore, the main upshot of the discussion that follows is that if the relevant sentences \(\hbox {S}_{2}\) and \(\hbox {S}_{3}\) are not \(\lnot \hbox {S}_{1}\) and \(\hbox {S}_{1} \wedge \hbox {S}_{2}\) respectively, then the worlds we are left with in \(\Omega \) are closed under apparently quite arbitrary inference rules which we have no good reason to believe are adhered to in general by ordinary agents. Were we to suppose that whenever \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\) are true, there’s a third sentence \(\hbox {S}_{3}\) which is also true such that (i) \(\hbox {S}_{3}\) is not logically equivalent to \(\hbox {S}_{1} \wedge \hbox {S}_{2}\) and (ii) \(\hbox {S}_{3}\) cannot be entertained by \(\upalpha \) alongside \(\hbox {S}_{1}\) and \(\hbox {S}_{2}\), then we won’t have closed \(\Omega \) under any less baffling inference patterns.
 13.
The proof of this is straightforward given that \(\mathcal {C}r\) is a probability function. Suppose that \(\mathcal {C}r(P_{1}) = \mathcal {C}r(P_{2}) = \mathcal {C}r(P_{3}) > 2/3\). Since \(\mathcal {C}r(P_{1} \cup P_{2}) \le 1\), we have \(\mathcal {C}r(P_{1} \cap P_{2}) = \mathcal {C}r(P_{1}) + \mathcal {C}r(P_{2}) - \mathcal {C}r(P_{1} \cup P_{2}) > 1/3\). And since \(\mathcal {C}r(P_{3} \cup (P_{1} \cap P_{2})) = \mathcal {C}r(P_{3}) + \mathcal {C}r(P_{1} \cap P_{2}) - \mathcal {C}r(P_{3} \cap (P_{1} \cap P_{2})) \le 1\), \(\mathcal {C}r(P_{3} \cap (P_{1} \cap P_{2}))\) must be greater than 0.
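The arithmetic can be sketched as follows (my illustration; the function name is invented), applying the Bonferroni-style bound \(\mathcal {C}r(X \cap Y) \ge \mathcal {C}r(X) + \mathcal {C}r(Y) - 1\) twice:

```python
from fractions import Fraction

def min_triple_overlap(c1, c2, c3):
    """Lower bound on Cr(P1 & P2 & P3), using Cr(X & Y) >= Cr(X) + Cr(Y) - 1
    twice (which follows from Cr(X | Y) <= 1 and inclusion-exclusion)."""
    pair = max(Fraction(0), c1 + c2 - 1)   # bound on Cr(P1 & P2)
    return max(Fraction(0), pair + c3 - 1)  # bound on Cr((P1 & P2) & P3)

c = Fraction(7, 10)                 # each credence 7/10 > 2/3
print(min_triple_overlap(c, c, c))  # 1/10: a positive overlap is forced

# At exactly 2/3 the bound vanishes, matching the footnote's threshold.
print(min_triple_overlap(Fraction(2, 3), Fraction(2, 3), Fraction(2, 3)))  # 0
```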
 14.
To be sure, there are non-Boolean ‘probability’ theories—for example, quantum probabilities are constructed around involutive algebras which need not satisfy Booleanism. I suspect that problems similar to those raised in Sect. 4 will also arise in most circumstances where \({\varvec{\mathcal {B}}}\) is taken to satisfy a number of basic algebraic closure conditions, but I have not argued for this.
 15.
Note that the interpretation of \(\mathcal {C}r\) will still have to be recognisably doxastic, otherwise we’re no longer dealing with a model of \(\upalpha \)’s doxastic states. I have nothing to say about nondoxastic interpretations of \(\mathcal {C}r\).
 16.
This point is not unknown to Nolan, who notes in his (1997, p. 563) that there will be sets of worlds on his account which correspond to no proposition qua object of thought. In personal correspondence, Nolan has also pointed out that any set of worlds containing only possible worlds will be inexpressible if \(\Omega \) satisfies Unrestricted Comprehension. For any set of possible worlds \(\{\upomega _{1}, \upomega _{2}, \ldots \}\) there will be an impossible world \(\upomega _{\mathrm{i}}\) such that (a) everything true at all of the worlds in \(\{\upomega _{1}, \upomega _{2}, \ldots \}\) is true at \(\upomega _{\mathrm{i}}\), and (b) some impossibility \(\bot \) is also true at \(\upomega _{\mathrm{i}}\). Since \(\bot \) isn’t true at any possible world, there is nothing that’s true at all and only the worlds in \(\{\upomega _{1}, \upomega _{2}, \ldots \}\).
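Nolan's observation can be checked in a toy model (my construction, using only two atomic sentences and their negations): once there is a world for every set of sentences, no single sentence is true at exactly the possible worlds.

```python
from itertools import chain, combinations

SENTENCES = ["A", "not-A", "B", "not-B"]

def all_subsets(xs):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

# Unrestricted Comprehension: a world (truth set) for EVERY set of sentences.
worlds = all_subsets(SENTENCES)

def possible(w):
    """A possible world verifies exactly one member of each S / not-S pair."""
    return all((s in w) != (("not-" + s) in w) for s in ["A", "B"])

possible_worlds = {w for w in worlds if possible(w)}

def truth_set(s):
    """The worlds at which sentence s is true (i.e., a member of the world)."""
    return {w for w in worlds if s in w}

# Every sentence is also true at some glutty impossible world, so no
# sentence is true at all and only the possible worlds:
inexpressible = all(truth_set(s) != possible_worlds for s in SENTENCES)
print(len(worlds), len(possible_worlds), inexpressible)  # 16 4 True
```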
 17.
Acknowledgements
Special thanks to Daniel Nolan and Robbie Williams for helpful discussions on this paper and closely related topics. Further thanks are due to Jessica Isserow for comments on numerous drafts, and to Thomas Brouwer, Paolo Santorio, the Leeds NatRep and CMM seminar groups, and two anonymous referees for Synthese. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007–2013) / ERC Grant Agreement n. 312938. In addition, this project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No 703959.
References
 Adams, R. M. (1974). Theories of actuality. Nous, 8, 211–231.
 Berto, F. (2010). Impossible worlds and propositions: Against the parity thesis. The Philosophical Quarterly, 60, 471–86.
 Bjerring, J. C. (2013). Impossible worlds and logical omniscience: An impossibility result. Synthese, 190, 2505–24.
 Bjerring, J. C. (2014). Problems in epistemic space. Journal of Philosophical Logic, 43, 153–170.
 Bjerring, J. C., & Schwarz, W. (2017). Granularity problems. The Philosophical Quarterly, 67(266), 22–37.
 Bricker, P. (1987). Reducing possible worlds to language. Philosophical Studies, 52(3), 331–355.
 Carston, R. (2002). Thoughts and utterances: The pragmatics of explicit communication. Oxford: Blackwell.
 Chalmers, D. (2011). The nature of epistemic space. In A. Egan & B. Weatherson (Eds.), Epistemic modality (pp. 60–107). Oxford: Oxford University Press.
 Chalmers, D. (2012). Constructing the world. Oxford: Oxford University Press.
 Choquet, G. (1954). Theory of capacities. Annales de l’institut Fourier, 5, 131–295.
 Cozic, M. (2006). Impossible states at work: Logical omniscience and rational choice. In R. Topol & B. Walliser (Eds.), Contributions to economic analysis (pp. 47–68). Amsterdam: Elsevier.
 Cresswell, M. J. (1973). Logics and languages. London: Methuen.
 Davies, M. (1981). Meaning, quantification, necessity: Themes in philosophical logic. London: Routledge & Kegan Paul.
 de Finetti, B. (1931). Sul significato soggettivo della probabilità. Fundamenta Mathematicae, 17(1), 298–329.
 Dempster, A. P. (1968). A generalization of Bayesian inference. Journal of the Royal Statistical Society Series B (Methodological), 30, 205–247.
 Dubois, D., & Prade, H. (1988). Possibility theory: An approach to computerized processing of uncertainty. New York: Plenum.
 Dummett, M. (1978). Truth and other enigmas. Cambridge: Harvard University Press.
 Easwaran, K. (2014). Regularity and hyperreal credences. Philosophical Review, 123(1), 1–41.
 Halpern, J. Y., & Pucella, R. (2011). Dealing with logical omniscience: Expressiveness and pragmatics. Artificial Intelligence, 175(1), 220–235.
 Hellie, B. (2004). Inexpressible truths and the allure of the knowledge argument. In Y. Nagasawa (Ed.), There’s something about Mary (pp. 333–64). Cambridge: MIT Press.
 Hintikka, J. (1962). Knowledge and belief: An introduction to the logic of the two notions. Ithaca: Cornell University Press.
 Hintikka, J. (1975). Impossible possible worlds vindicated. Journal of Philosophical Logic, 4, 475–84.
 Hofweber, T. (2006). Inexpressible properties and propositions. In D. Zimmerman (Ed.), Oxford studies in metaphysics. Oxford: Oxford University Press.
 Jago, M. (2009). Logical information and epistemic space. Synthese, 167, 327–341.
 Jago, M. (2012). Constructing worlds. Synthese, 189, 59–74.
 Jago, M. (2013). Are impossible worlds trivial? In V. Puncochar & P. Svarny (Eds.), The logica yearbook 2012 (pp. 35–50). London: College Publications.
 Jago, M. (2014a). The impossible: An essay on hyperintensionality. Oxford: Oxford University Press.
 Jago, M. (2014b). The problem of rational knowledge. Erkenntnis, 79, 1151–1168.
 Jago, M. (2015a). Hyperintensional propositions. Synthese, 192(3), 585–601.
 Jago, M. (2015b). Impossible worlds. Nous, 49(4), 713–728.
 Jeffrey, R. C. (1990). The logic of decision. Chicago: University of Chicago Press.
 Kaplan, D. (1995). A problem in possible worlds semantics. In W. Sinnott-Armstrong, et al. (Eds.), Modality, morality and belief: Essays in honor of Ruth Barcan Marcus (pp. 41–52). Cambridge: Cambridge University Press.
 Katz, J. (1978). Effability and translation. In F. Guenthner & M. Guenthner-Reutter (Eds.), Meaning and translation (pp. 157–189). New York: NYU Press.
 Katz, J. (1981). Language and other abstract objects. Oxford: Basil Blackwell.
 Kyburg, H. E. (1992). Getting fancy with probability. Synthese, 90, 189–203.
 Levi, I. (1974). On indeterminate probabilities. The Journal of Philosophy, 71(13), 391–418.
 Lewis, D. (1979). Attitudes de dicto and de se. The Philosophical Review, 88(4), 513–543.
 Lewis, D. (1982). Logic for equivocators. Nous, 16(3), 431–441.
 Lewis, D. (1986). On the plurality of worlds. Oxford: Blackwell.
 Lewis, D. (2004). Letters to Beall and Priest. In G. Priest (Ed.), The law of non-contradiction: New philosophical essays (pp. 176–177). Oxford: Clarendon Press.
 Lipman, B. L. (1997). Logics for nonomniscient agents: An axiomatic approach. In M. Bacharach (Ed.), Epistemic logic and the theory of games and decisions (pp. 193–216). Berlin: Springer.
 Lipman, B. L. (1999). Decision theory without logical omniscience: Toward an axiomatic framework for bounded rationality. The Review of Economic Studies, 66(2), 339–361.
 Nolan, D. (1997). Impossible worlds: A modest approach. Notre Dame Journal of Formal Logic, 38, 535–72.
 Nolan, D. (2013). Impossible worlds. Philosophy Compass, 8(4), 360–372.
 Perry, J. (1979). The problem of the essential indexical. Nous, 13(1), 3–21.
 Priest, G. (2006). In contradiction: A study of the transconsistent. Oxford: Oxford University Press.
 Rantala, V. (1982). Impossible worlds semantics and logical omniscience. Acta Philosophica Fennica, 35, 106–15.
 Recanati, F. (1994). Contextualism and anti-contextualism in the philosophy of language. In S. Tsohatzidis (Ed.), Foundations of speech act theory: Philosophical and linguistic perspectives (pp. 156–166). London: Routledge.
 Schiffer, S. (2003). The things we mean. Oxford: Oxford University Press.
 Scott, D. (1964). Measurement structures and linear inequalities. Journal of Mathematical Psychology, 1(2), 233–247.
 Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge: Cambridge University Press.
 Shafer, G. (1976). A mathematical theory of evidence. Princeton: Princeton University Press.
 Shaw, J. R. (2013). Truth, paradox, and ineffable propositions. Philosophy and Phenomenological Research, 86(1), 64–104.
 Spohn, W. (2012). The laws of belief: Ranking theory and its philosophical application. Oxford: Oxford University Press.
 Stalnaker, R. (1996). Impossibilities. Philosophical Topics, 24, 193–204.
 Stalnaker, R. C. (1984). Inquiry. London: The MIT Press.
 Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297–323.
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.