In this chapter, we lay the foundations for our enterprise. In particular, we explain how an information-based semantics, inquisitive semantics, allows us to interpret statements and questions in a uniform way and to define a general notion of entailment in which questions can occur as premises and conclusions. We explore in detail the significance of this generalized notion of entailment, showing in particular that it captures as a special case an important logical notion that we call dependency. We explain how questions can be viewed as denoting information types and how inquisitive entailment can thus be seen as generalizing entailment from a notion relating pieces of information to one relating information types. We also show how a logic based on inquisitive semantics can be equipped in a canonical way with an implication connective that internalizes, in a precise sense, the meta-language entailment relations into the object language. At the end of the chapter we motivate in more detail some of our setup choices and we relate our approach to previous work on questions in logic.

Throughout this chapter, we deliberately leave some notions underspecified. In particular, we will not specify a formal language or a precise notion of model. This will allow us to focus on the main general ideas underlying the approach and on those aspects of the theory that follow from these ideas. The missing details can then be filled in in different ways, thereby instantiating the general picture to many concrete logical systems. Thus, what we are describing in this chapter can be seen as a general template that underlies the different inquisitive logics to be investigated in the subsequent chapters, as well as many other inquisitive systems that we are not going to cover.

Our presentation of inquisitive semantics in this chapter differs from the one to be found in the more language-oriented expositions—in particular, from the one in the inquisitive semantics textbook [1]. The difference concerns how the semantics is motivated as well as how the basic notions are introduced. In terms of motivations, the presentation in Ciardelli et al. [1] is driven by considerations about natural language semantics and discourse. By contrast, in this chapter—and in the book at large—we will argue for the inquisitive approach purely on the basis of motivations stemming from formal logic. Relatedly, the presentation in Ciardelli et al. [1] proceeds by introducing inquisitive contents in terms of their discourse effects (namely, providing information and raising issues). As a derivative notion, one then obtains a semantic relation of support between information states and sentences. By contrast, here we will take the support relation as primitive and we will understand this notion in a way which is independent of discourse effects, and arguably more fundamental.

2.1 Dependency

Let us start out with a simple example which will help us illustrate the ideas introduced as we go through this chapter. Consider a regular die with six faces. Let us say that the outcomes 1 and 2 are in the low range, 3 and 4 in the middle range, and 5 and 6 in the high range. Now consider the following questions about the outcome of a die roll.

  • \(\textsf {parity}\): is the outcome even or odd?

  • \(\textsf {range}\): is the outcome in the low, middle, or high range?

  • \(\textsf {outcome}\): what is the outcome?

These three questions are logically related in an interesting way: as soon as the first two questions are settled, the third is bound to be settled as well; that is, as soon as we settle the outcome’s parity (even or odd) and its range (low, middle, or high), we thereby settle exactly what the outcome is. We say that in the given situation, the questions \(\textsf {parity}\) and \(\textsf {range}\) determine the question \(\textsf {outcome}\), and we refer to this relation as a dependency. Dependency is a notion of great importance. Let us briefly examine why.

Take the setting of experimental science. Consider the range of experiments that we can perform in a certain context. We can think of each experiment as a procedure that yields the answer to a certain question. Let us call a question experimental if it can be directly settled by performing an experiment. Then to ask whether a question is determined by the set of experimental questions is to ask whether it is possible, in principle, to answer that question by our empirical means. And of course, an analogous issue arises not just with experiments, but whenever there is a distinction between a range of “viable” questions, those that can actually be asked, and the “target” questions that one is interested in. One can also start from the other end: suppose we are interested in resolving a certain target question. Then it is interesting to ask if a certain set of viable questions logically determines the target question: for that tells us whether the set provides an inquiry strategy which is guaranteed to achieve our goal.

Turning to theoretical science, a crucial aspect of a scientific theory is its predictive power. We can characterize this as the power to yield answers to certain questions on the basis of answers to other questions. Thus, the predictive power of a theory lies precisely in the dependencies that hold on the basis of it. For example, classical mechanics can be characterized as predictive of a body’s position at a time t given (i) the body’s position and velocity at a different time \(t_0\), (ii) the body’s mass and (iii) the force field in which the body moves. What this amounts to is that against the background of classical mechanics, any way of settling the questions (i)–(iii) yields a corresponding answer to the question of where the body is located at time t.

In fact, much of the enterprise of natural sciences such as physics and chemistry consists in finding out what dependencies hold in nature: on the basis of what factors can we predict the trajectory of a planet, the temperature of a gas, or the speed of a certain chemical reaction? For instance, one of the earliest achievements of modern science was the discovery that, absent air resistance, the time that a body dropped near the Earth’s surface takes to reach the ground is completely determined by the height from which it is dropped. This is an instance of dependency in our sense: one question, from what height the body is dropped, determines another question, how long it takes to hit the ground.

A further illustration comes from database theory. A relational database (say, the database of a company) consists of entries (say, one for each employee) where each entry gives a value to a number of attributes (social security number, surname, department, salary, etc). We can think of the attributes as questions, with each entry providing an answer to each question. Certain dependencies are expected to hold between different attributes: for instance, the social security number of an employee should uniquely determine their surname, but not the other way around. Keeping track of these dependencies plays a crucial role in strategies designed to efficiently organize the data, which is why dependencies have received much attention in database theory (for a survey, see Fagin and Vardi [4]).

In this chapter, we will show that the relation of dependency is nothing but a facet of the central logical notion of entailment, once this notion is generalized so that it applies not only to statements, but also to questions. The study of dependency thus pertains to logic in the strictest sense, and many standard notions and techniques of logic can be fruitfully applied to study dependencies in the context of a logic equipped with questions.

We begin in the next section by explaining our strategy for bringing questions within the scope of logic.

2.2 From Truth Conditions to Support Conditions

The standard approach to logic is centered around the notion of truth. To give a semantics for a logical language is to give a recursive specification of truth conditions—to lay out, for each sentence of the language, what a state of affairs must be like in order for the sentence to count as true. Formally, semantics thus takes the form of a relation

$$w\models \alpha $$

where \(\alpha \) is a sentence and w is a semantic object that models a state of affairs. Let us refer to such an object as a possible world.

The exact nature of the objects that play the role of possible worlds in this schema varies. Often, a possible world may be identified simply with a model for the language at stake; for instance, if \(\alpha \) is a sentence of predicate logic, then w may be a standard relational structure. However, in this book we will build on intensional semantics, an approach which is designed to represent a whole variety of states of affairs in a single model. In this approach, a model M comes with an associated set \(W_M\) of possible worlds, primitive entities which stand for different states of affairs.

The central notion of logic, entailment, is then understood in terms of necessary preservation of truth: an entailment is valid if the conclusion is true whenever the premises are all true. Focusing on the case of a single premise:

$$\alpha \models _{\textsf {truth}}\beta \iff \text {for all models }M\text { and worlds }w\in W_M,\;w\models \alpha \,\text { implies }\,w\models \beta .$$

This perspective naturally leads to the view that the notion of entailment—arguably the central concern of the field of logic—is only meaningful for statements. After all, if entailment is defined as necessary preservation of truth, it is only applicable to sentences which are truth-apt, i.e., capable of being true or false. And, arguably, being truth-apt is a property that distinguishes statements from other sentence types, like questions and commands.

However, this truth-based construal of entailment is not the only possibility. An alternative conception arises from a more information-oriented perspective on semantics. Rather than taking semantics to specify in what circumstances a sentence is true, we may take it to specify what information it takes to settle, or establish, the sentence. On this view, semantics takes the form of a relation

$$s\models \varphi $$

where \(\varphi \) is a sentence and s is a semantic object that models a body of information. We will refer to s as an information state and we will read the expression \(s\models \varphi \) as “s supports \(\varphi \)”.

As in the case of possible worlds, different options are available as to the formal modeling of information states that play a role in this semantics. Intuitively, an information state encodes certain information about what things are like, and thereby it determines a distinction between two kinds of states of affairs: those that fit the available information—and which are, thus, live possibilities according to the state—and those that do not fit the available information—and which are ruled out by the state. Thus, at a minimum, we want an information state s to determine a corresponding set of live possibilities, \(\textsf {live}(s)\subseteq W_M\). For our purposes in this book, this is in fact all we need to know about an information state. Therefore, we may simply identify an information state with a set of possible worlds—the corresponding set of live possibilities. Conversely, given a set s of possible worlds, we can think of it as encoding a body of information: the information that the state of affairs corresponds to one of those in s. Thus, throughout this book, information states are simply modeled as sets of possible worlds.

Definition 2.2.1

(Information states) An information state in a model M is a subset \(s\subseteq W_M\).

Notice that no state of affairs fits an inconsistent body of information. Therefore, the set of live possibilities corresponding to inconsistent information is the empty set. Conversely, if a body of information is consistent, there is some state of affairs that fits that information; therefore, the corresponding set of live possibilities is non-empty.

Definition 2.2.2

(Inconsistent state) The inconsistent information state is the empty set of worlds, \(\emptyset \). An information state is consistent if it is non-empty.

Information states can be ordered naturally according to how much information they contain. If t contains at least as much information as s, then every world which is ruled out by s is also ruled out by t; therefore, the set of live possibilities for t is a subset of the set of live possibilities for s, and so \(t\subseteq s\). Conversely, if \(t\subseteq s\), then t rules out every world that s rules out and possibly more; given that the only aspect of information that we are taking into account is its potential to circumscribe a set of possibilities, we should count t as being at least as strong as s. Thus, we view t as containing at least as much information as s just in case \(t\subseteq s\); we then say that t is an enhancement of s, or that t implies s.

Definition 2.2.3

(Enhancement ordering) Given two information states s and t, we say that t is an enhancement of s in case \(t\subseteq s\).

Let us illustrate this with an example. We can model our die scenario as involving a logical space of six possible worlds \(w_1,\dots ,w_6\), corresponding to the six possible outcomes of the die roll. Now here are three things we might know about the outcome of the roll:

  • \(\lnot \textsf {six}\): the outcome is not six.

  • \(\textsf {odd}\): the outcome is odd.

  • \(\textsf {one}\): the outcome is one.

If taken as complete descriptions of the available information, these correspond to three information states \(s_{\lnot \textsf {six}},s_{\textsf {odd}},s_{\textsf {one}}\). The corresponding sets of live possibilities are shown in Fig. 2.1. Note that these states are ordered from the weakest, \(s_{\lnot \textsf {six}}\), to the strongest, \(s_{\textsf {one}}\). The latter is a state of complete information: it determines exactly what the actual state of affairs is, and it is maximally strong among the consistent states.

Fig. 2.1

Three information states in the die roll scenario, ordered from the weakest (\(s_{\lnot \textsf {six}}\)) to the strongest (\(s_{\textsf {one}}\))
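The definitions above are easy to experiment with computationally. The following sketch (our own illustration, not part of the formal development) encodes the three states of Fig. 2.1 in Python, representing world \(w_i\) by the integer i:

```python
# Worlds of the die model: outcome i is represented by the integer i.
W = frozenset({1, 2, 3, 4, 5, 6})

# Information states are subsets of W (Definition 2.2.1).
s_not_six = frozenset({1, 2, 3, 4, 5})  # the outcome is not six
s_odd = frozenset({1, 3, 5})            # the outcome is odd
s_one = frozenset({1})                  # the outcome is one

def is_enhancement(t, s):
    """t is an enhancement of s iff t is a subset of s (Definition 2.2.3)."""
    return t <= s

# The three states are linearly ordered from weakest to strongest:
assert is_enhancement(s_odd, s_not_six)
assert is_enhancement(s_one, s_odd)

# The empty set is the inconsistent state (Definition 2.2.2); it enhances
# every state, and every state enhances the trivial state W.
assert all(is_enhancement(frozenset(), s) for s in (s_not_six, s_odd, s_one))
assert all(is_enhancement(s, W) for s in (s_not_six, s_odd, s_one))
```

Python's `<=` on sets is exactly the subset relation, so the enhancement ordering comes for free.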

Importantly, a connection should obtain between the truth conditions of a statement \(\alpha \) and its support conditions: this is because, on the intended understanding of the support relation, to establish that \(\alpha \) is just to establish that the world is one where \(\alpha \) is true. This means that s should count as supporting \(\alpha \) just in case all live possibilities for s are worlds where \(\alpha \) is true. To formulate this precisely, it is useful to introduce the following technical notion.

Definition 2.2.4

(Truth set) The truth-set of a statement \(\alpha \) in a model M, denoted \(|\alpha |_M\), is the set of worlds in M where \(\alpha \) is true:

$$|\alpha |_M:=\{w\in W_M\mid w\models \alpha \}.$$

Then the intended connection between truth conditions and support conditions can be spelled out as follows.

Constraint 2.2.5

(Truth-Support Bridge) Let \(\alpha \) be a statement and M a model. For any information state \(s\subseteq W_M\) we should have:

$$s\models \alpha \iff \forall w\in s:w\models \alpha .$$

Or, equivalently:

$$s\models \alpha \iff s\subseteq |\alpha |_M.$$
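For a finite model, the Bridge can be checked mechanically. The sketch below (an illustration under our own encoding of the die model, with world \(w_i\) as the integer i) computes support in both formulations and confirms that they agree:

```python
# Toy die model: world i = outcome i.
W = frozenset({1, 2, 3, 4, 5, 6})

def truth_set(alpha):
    """|alpha|_M: the worlds where the statement alpha is true."""
    return frozenset(w for w in W if alpha(w))

def supports(s, alpha):
    """Truth-Support Bridge: s |= alpha iff alpha is true at every world in s."""
    return all(alpha(w) for w in s)

def odd(w):
    return w % 2 == 1

s_odd = frozenset({1, 3, 5})
s_not_six = frozenset({1, 2, 3, 4, 5})

# The two formulations agree: s |= alpha iff s is a subset of |alpha|_M.
for s in (s_odd, s_not_six, frozenset({1}), frozenset()):
    assert supports(s, odd) == (s <= truth_set(odd))

assert supports(s_odd, odd)          # settled: every live possibility is odd
assert not supports(s_not_six, odd)  # not settled: w2 is a live possibility
```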

As an illustration, consider the following statement in the die roll scenario:

figure c

This statement should count as supported by the information states \(s_{\textsf {one}}\) and \(s_{\textsf {odd}}\), since it is true at every live possibility in these states. But it should not be supported by \(s_{\lnot \textsf {six}}\), since it is not true at \(w_2\), which is a live possibility in \(s_{\lnot \textsf {six}}\).

The Truth-Support Bridge above implies that the truth conditions of a statement determine its support conditions. Moreover, if we spell out this connection in the special case that s is a singleton state \(\{w\}\), we find that the converse is also true, namely, that support conditions determine truth conditions:

$$w\models \alpha \iff \{w\}\subseteq |\alpha |_M\iff \{w\}\models \alpha .$$

Intuitively, this says that \(\alpha \) is true at a world w just in case the information that w is the actual world implies that \(\alpha \). Thus, for statements, truth conditions and support conditions are inter-definable.

Let us now turn to entailment. Our informational perspective comes with a natural construal of entailment as preservation of support: an entailment is valid if the conclusion is supported by any information state that supports the premises. Focusing for simplicity on the case of a single premise:

$$\alpha \models _{\textsf {info}}\beta \iff \text {for all models }M\text { and info states }s\subseteq W_M,\;s\models \alpha \text { implies }s\models \beta .$$

It follows from the Truth-Support Bridge that the two construals of entailment determine the same relation among statements:

$$\alpha \models _{\textsf {truth}}\beta \iff \alpha \models _{\textsf {info}}\beta .$$

To see this, suppose \(\alpha \models _{\textsf {truth}}\beta \). Consider an information state s supporting \(\alpha \): by the Truth-Support Bridge, this means that \(\alpha \) is true everywhere in s. Since \(\alpha \models _{\textsf {truth}}\beta \), \(\beta \) must be true everywhere in s, too. Thus, using again the Bridge, \(\beta \) must be supported in s. This shows that \(\alpha \models _{\textsf {info}}\beta \). Conversely, suppose \(\alpha \models _{\textsf {info}}\beta \). Consider a world w where \(\alpha \) is true. By the Truth-Support Bridge, \(\{w\}\) is a state which supports \(\alpha \). Since \(\alpha \models _{\textsf {info}}\beta \), \(\{w\}\) must also support \(\beta \). Thus, using again the Bridge, \(\beta \) is true at w. This shows that \(\alpha \models _{\textsf {truth}}\beta \).
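The argument above can also be confirmed by brute force in a finite model. The following sketch (our own encoding; the list of statements is an arbitrary sample) checks that the two construals of entailment agree on statements of the die model:

```python
from itertools import chain, combinations

# Finite die model: world i = outcome i.
W = list(range(1, 7))
STATES = [frozenset(c) for c in
          chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))]

# Some statements, given by their truth conditions at a world.
stmts = [lambda w: w % 2 == 1,  # the outcome is odd
         lambda w: w <= 2,      # the outcome is low
         lambda w: w != 6,      # the outcome is not six
         lambda w: w == 1]      # the outcome is one

def truth_entails(a, b):
    """Truth-based entailment: b is true at every world where a is true."""
    return all(b(w) for w in W if a(w))

def info_entails(a, b):
    """Support-based entailment, with support given by the Truth-Support Bridge."""
    def supports(s, stmt):
        return all(stmt(w) for w in s)
    return all(supports(s, b) for s in STATES if supports(s, a))

# The two construals coincide for statements:
for a in stmts:
    for b in stmts:
        assert truth_entails(a, b) == info_entails(a, b)
```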

This means that, given our understanding of the support relation, construing entailment as preservation of support does not lead to a non-classical logic; instead, it provides an alternative semantic foundation for classical logic. Given the equivalence between the truth-conditional construal of entailment and the informational one, it is not surprising that the former, which is arguably simpler, has been taken as the standard one. However, the informational approach has a crucial advantage for our purposes: it extends naturally beyond statements to cover also questions. Indeed, while it is not clear what it means for a question to be true or false in a given state of affairs, there is a clear sense in which a question can be said to be settled, or not settled, by a given body of information. To illustrate this point, consider again the three questions from our die roll example, repeated below.

  • \(\textsf {parity}\): is the outcome even or odd?

  • \(\textsf {range}\): is the outcome in the low, middle, or high range?

  • \(\textsf {outcome}\): what is the outcome?

Consider the model from Fig. 2.1. What information states from this model count as settling each of these questions? The answer is straightforward. To settle the first, we need either enough information to conclude that the outcome is even (\(s\subseteq \{w_2,w_4,w_6\}\)), or enough information to conclude that the outcome is odd (\(s\subseteq \{w_1,w_3,w_5\}\)). To settle the second, we need either the information that the outcome is in the low range (\(s\subseteq \{w_1,w_2\}\)), or that it is in the middle range (\(s\subseteq \{w_3,w_4\}\)), or that it is in the high range (\(s\subseteq \{w_5,w_6\}\)). To settle the third we need information that determines exactly which world obtains (\(s\subseteq \{w_i\}\) for some i). Thus, the support conditions of these questions in our model are:

$$\begin{aligned} s\models \textsf {parity}&\iff s\subseteq \{w_2,w_4,w_6\}\;\text { or }\;s\subseteq \{w_1,w_3,w_5\}\\ s\models \textsf {range}&\iff s\subseteq \{w_1,w_2\}\;\text { or }\;s\subseteq \{w_3,w_4\}\;\text { or }\;s\subseteq \{w_5,w_6\}\\ s\models \textsf {outcome}&\iff s\subseteq \{w_i\}\;\text { for some }i\in \{1,\dots ,6\} \end{aligned}$$

These support conditions are visualized in Fig. 2.2, which depicts the maximal supporting states for the three questions. In each case, the supporting states are the sets in the picture, as well as their subsets.

Fig. 2.2

The maximal supporting states for our three example questions
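These support conditions translate directly into code. In the sketch below (our encoding, with world \(w_i\) as the integer i), each question becomes a predicate on states, true exactly at its supporting states:

```python
# Worlds: outcome i = integer i.
EVEN, ODD = frozenset({2, 4, 6}), frozenset({1, 3, 5})
LOW, MID, HIGH = frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})

def parity(s):
    """s settles parity iff s implies 'even' or implies 'odd'."""
    return s <= EVEN or s <= ODD

def rng(s):
    """s settles range iff s implies 'low', 'middle', or 'high'."""
    return s <= LOW or s <= MID or s <= HIGH

def outcome(s):
    """s settles outcome iff s determines a single world (or is inconsistent)."""
    return len(s) <= 1

assert parity(frozenset({1, 3}))       # enough information to conclude 'odd'
assert not parity(frozenset({1, 2}))   # live possibilities of both parities
assert rng(frozenset({3, 4}))          # the middle range is settled
assert not outcome(frozenset({5, 6}))  # high range known, outcome still open
assert outcome(frozenset({5}))         # complete information
```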

This illustrates how support conditions are obviously meaningful for questions. Moreover, there are good reasons to regard support conditions as a natural candidate for the role of semantic contents of questions. Here is one: a key role for questions in communication is that they allow speakers to formulate requests for information. The semantic content of a question should play a crucial role in determining the satisfaction conditions for such a request—i.e., in specifying what information is being requested by asking it. If the content of the question lies in its support conditions, which specify what information must be available for the question to count as settled, then it is clear how this role is played: the request is satisfied just in case a supporting state is established.

2.3 A General Notion of Entailment

We saw that two different perspectives are possible on the relation of entailment: the standard one based on truth, and an informational one based on support. We saw that these two perspectives are extensionally equivalent for statements. However, we saw that the support relation can be used to interpret not only statements, but also questions. As a consequence, if entailment is characterized in terms of support, then it extends in a natural way to questions. We can thus consider a more general entailment relation, \(\Phi \models _{\textsf {info}}\psi \), holding between a set \(\Phi \) of sentences, which may include questions as well as statements, and a sentence \(\psi \), which may be either a statement or a question:

$$\Phi \models _{\textsf {info}}\psi \iff \text {for all models }M\text { and states }s\subseteq W_M,s\models \Phi \text { implies }s\models \psi $$

where \(s\models \Phi \) abbreviates ‘\(s\models \varphi \) for all \(\varphi \in \Phi \)’. Since this notion of entailment will be the one we will work with in the rest of the book, we will henceforth drop the subscript info whenever there is no risk of ambiguity. As usual, in terms of entailment we can also define notions of logical equivalence and logical validity:

  • \(\varphi \) and \(\psi \) are logically equivalent, denoted \(\varphi \equiv \psi \), if \(\varphi \models \psi \) and \(\psi \models \varphi \);

  • \(\varphi \) is logically valid, denoted \(\models \varphi \), if \(\varphi \) is entailed by the empty set.

Spelling out the definitions, we find that \(\varphi \) and \(\psi \) are logically equivalent if they are supported by the same information states in every model, and that \(\varphi \) is logically valid if it is supported by every state in every model.
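For a single finite model, this notion of entailment is decidable by enumerating all states. The sketch below is our own illustration; since it quantifies over the states of one model only, it computes entailment relative to that model rather than full logical entailment. Sentences are represented as support predicates on states:

```python
from itertools import chain, combinations

# Worlds of the finite die model; world i = outcome i.
W = frozenset(range(1, 7))

def states(worlds):
    """All information states of a finite model: all subsets of its worlds."""
    ws = list(worlds)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))]

def entails(Phi, psi, worlds=W):
    """Phi |= psi, restricted to this one model: every state supporting
    all premises in Phi also supports the conclusion psi."""
    return all(psi(s) for s in states(worlds) if all(phi(s) for phi in Phi))

def equivalent(phi, psi, worlds=W):
    return entails([phi], psi, worlds) and entails([psi], phi, worlds)

def valid(phi, worlds=W):
    return entails([], phi, worlds)

# Sentences as support predicates on states:
odd_stmt = lambda s: s <= frozenset({1, 3, 5})                # statement 'odd'
parity_q = lambda s: s <= frozenset({1, 3, 5}) or s <= frozenset({2, 4, 6})

assert entails([odd_stmt], parity_q)      # the statement resolves the question
assert not entails([parity_q], odd_stmt)  # settling parity need not settle 'odd'
assert valid(lambda s: True)              # a trivial sentence is valid
```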

2.3.1 Significance of Entailments Involving Questions

What is the significance of this more general entailment relation? Focusing for now on the case of a single premise, we have four possible entailment patterns. Let us examine and illustrate briefly each of them.

  • Statement-to-statement. If \(\alpha \) and \(\beta \) are statements, then \(\alpha \models \beta \) expresses the fact that settling that \(\alpha \) implies settling that \(\beta \). As we have already discussed, given the Truth-Support Bridge, this coincides with the familiar, truth-conditional notion of entailment: \(\alpha \models \beta \,\) holds in case \(\beta \) is true whenever \(\alpha \) is.

  • Statement-to-question. If \(\alpha \) is a statement and \(\mu \) is a question, then \(\alpha \models \mu \) means that settling that \(\alpha \) implies settling the question \(\mu \). Thus, we may regard \(\alpha \models \mu \) as expressing the fact that \(\alpha \) logically resolves \(\mu \). Example: the statement ‘Galileo discovered Ganymede in 1610’ entails the question ‘In what year did Galileo discover Ganymede?’, as well as the question ‘Did Galileo discover anything in 1610?’.

  • Question-to-statement. If \(\mu \) is a question and \(\alpha \) is a statement, then \({\mu \models \alpha }\) means that settling the question \(\mu \), no matter how, implies settling that \(\alpha \). We thus regard \(\mu \models \alpha \) as expressing the fact that \(\mu \) logically presupposes \(\alpha \). Example: the question ‘In what year did Galileo discover Ganymede?’ entails the statement ‘Galileo discovered Ganymede’.

  • Question-to-question. If \(\mu \) and \(\nu \) are both questions, then \(\mu \models \nu \) expresses the fact that settling \(\mu \) implies settling \(\nu \). This is just the relation of dependency that we discussed in the previous section, but now in its purely logical version, since all models—not just the intended one—are taken into account. Thus, \(\mu \models \nu \) expresses the fact that \(\mu \) logically determines \(\nu \). Example: the question ‘In what year did Galileo discover Ganymede?’ entails the question ‘In what century did Galileo discover Ganymede?’.

Thus, support semantics gives rise to an interesting general notion of entailment, which covers questions as well as statements and which unifies four interesting logical notions: (i) a statement being a logical consequence of another; (ii) a statement logically resolving a question; (iii) a question logically presupposing a statement; and, finally, (iv) a question logically determining another.

2.3.2 Entailment in Context

In ordinary situations, it is rarely the purely logical notion of consequence that we are concerned with. Rather, we typically take many facts about the world for granted and then assess whether on that basis, something follows from something else. We say, for instance, that the fact that Galileo discovered some celestial bodies follows from the fact that he discovered some of Jupiter’s moons; in doing so, we take for granted the fact that Jupiter’s moons are celestial bodies; worlds in which Jupiter’s moons are not celestial bodies are simply disregarded.

The same holds for questions: when we are concerned with dependency, it is rarely purely logical dependency that is at stake. Rather, we are usually concerned with the relations that one question bears to another, given certain background facts. In our initial example, the background facts include what the possible outcomes of the roll are, what outcomes count as low, middle, and high, etc. It is against this contextual background that the dependency holds.

In order to capture these relations, besides the purely logical notion of entailment that we discussed, we will also introduce notions of entailment relative to a given model M, and relative to a given context. We will model a context simply as an information state s. In assessing entailment relative to s, we take the information embodied by s for granted. This means that, to decide whether an entailment holds or not, only worlds in s, and states consisting of such worlds, are taken into account.

Definition 2.3.1

(Contextual entailment) Let M be a model and let \(s\subseteq W_M\) be an information state. We let:

$$\Phi \models _s\psi \iff \text {for all }t\subseteq s, t\models \Phi \text { implies }t\models \psi .$$

We write \(\varphi \equiv _s\psi \) in case \(\varphi \models _s\psi \) and \(\psi \models _s\varphi \), i.e., in case \(\varphi \) and \(\psi \) are supported by the same states \(t\subseteq s\). We write \(\models _M\) and \(\equiv _M\) instead of \(\models _{W_M}\) and \(\equiv _{W_M}\) for entailment and equivalence relative to the universe \(W_M\) of the model.

Contextual entailment captures relations of consequence, resolution, presupposition, and dependency which hold against the background of a specific context.

For an illustration, consider again our die roll example. Let M be the model formalizing the scenario. We have:

$$\textsf {parity},\;\textsf {range}\,\models _M\, \textsf {outcome}.$$

We can see this visually from Fig. 2.2: if a state settles \(\textsf {parity}\) then it is included in one of the rows of the model; if a state settles \(\textsf {range}\), it is included in one of the columns; thus, if a state settles both \(\textsf {parity}\) and \(\textsf {range}\), it must be included in a singleton, which means that it settles \(\textsf {outcome}\).

The fact that this contextual entailment holds amounts precisely to our initial observation that a certain dependency holds in the described context: the outcome’s parity and its range jointly determine what the outcome is. Thus, once we extend the notion of entailment to cover questions, dependencies turn out to be entailments—more precisely, question entailments in context.
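This contextual entailment can be verified exhaustively, since the model is finite. A sketch, using our encoding of the die model with questions as support predicates:

```python
from itertools import chain, combinations

# All information states of the six-world die model.
W = list(range(1, 7))
STATES = [frozenset(c) for c in
          chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))]

parity = lambda s: s <= {2, 4, 6} or s <= {1, 3, 5}
rng = lambda s: s <= {1, 2} or s <= {3, 4} or s <= {5, 6}
outcome = lambda s: len(s) <= 1

# parity, range |=_M outcome: every state settling both premises settles outcome.
assert all(outcome(s) for s in STATES if parity(s) and rng(s))

# Neither premise alone suffices:
assert not all(outcome(s) for s in STATES if parity(s))  # e.g. s = {w1, w3}
assert not all(outcome(s) for s in STATES if rng(s))     # e.g. s = {w1, w2}
```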

2.3.3 Conditional Dependencies

In our die scenario, the question \(\textsf {range}\) does not by itself determine the question \(\textsf {outcome}\). However, given the information that the outcome is a prime number, \(\textsf {range}\) does determine \(\textsf {outcome}\): if the outcome is in the low range it is two, if it is in the middle range it is three, and if it is in the high range it is five. If \(\textsf {prime}\) denotes the statement that the outcome is prime, then we can say that in the given context, \(\textsf {range}\) determines \(\textsf {outcome}\) conditionally on \(\textsf {prime}\) (see Fig. 2.3).

Fig. 2.3

Illustration of a conditional dependency: given that the outcome is prime, the range of the outcome determines the outcome

Generalizing, we can give the following definition.

Definition 2.3.2

(Conditional dependency) In a state s, a question \(\mu \) determines a question \(\nu \) conditionally on a statement \(\alpha \) if \(\mu \) determines \(\nu \) relative to the \(\alpha \)-worlds in s. In symbols:

$$\mu \models _{s}^{\alpha }\nu \iff \forall t\subseteq s\cap |\alpha |_M: t\models \mu \text { implies }t\models \nu .$$

It turns out that conditional dependencies can also be captured as instances of entailment: it suffices to regard the condition \(\alpha \) as an additional premise alongside the determining question. That is, we have:

$$\mu \models _{s}^{\alpha }\nu \iff \alpha ,\mu \models _s\nu .$$

We can show this as follows, using the Truth-Support Bridge, which applies since \(\alpha \) is a statement:

$$\begin{aligned} \mu \models _{s}^{\alpha }\nu &\iff \forall t\subseteq s\cap |\alpha |_M:\; t\models \mu \text { implies }t\models \nu \\ &\iff \forall t\subseteq s:\; t\subseteq |\alpha |_M\text { and } t\models \mu \text { implies }t\models \nu \\ &\iff \forall t\subseteq s:\; t\models \alpha \text { and } t\models \mu \text { implies }t\models \nu \\ &\iff \alpha ,\mu \models _s\nu . \end{aligned}$$

Thus, for instance, our example of conditional dependency above amounts to the entailment:

$$\textsf {prime},\;\textsf {range}\,\models _M\,\textsf {outcome}.$$
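The equivalence just derived can likewise be checked by enumeration. In the sketch below (our encoding of the die model), quantifying over all states that support \(\textsf {prime}\) and \(\textsf {range}\) gives the same verdict as restricting to states of prime worlds:

```python
from itertools import chain, combinations

W = list(range(1, 7))
STATES = [frozenset(c) for c in
          chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))]

prime = lambda s: s <= {2, 3, 5}  # statement: the outcome is prime
rng = lambda s: s <= {1, 2} or s <= {3, 4} or s <= {5, 6}
outcome = lambda s: len(s) <= 1

# range alone does not determine outcome in this model...
assert not all(outcome(s) for s in STATES if rng(s))

# ...but with prime as an extra premise it does: prime, range |=_M outcome.
lhs = all(outcome(s) for s in STATES if prime(s) and rng(s))

# The conditional-dependency formulation quantifies over states of prime worlds:
prime_states = [t for t in STATES if t <= {2, 3, 5}]
rhs = all(outcome(t) for t in prime_states if rng(t))

assert lhs and rhs and lhs == rhs  # the two formulations agree
```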

The story extends straightforwardly to dependencies involving multiple determining questions and multiple conditions. The upshot is that a contextual entailment 

$$\Gamma ,\Lambda \models _s\nu $$

where \(\Gamma \) is a set of statements, \(\Lambda \) a set of questions, and \(\nu \) a question, captures a conditional dependency: relative to s, the questions in \(\Lambda \) jointly determine the question \(\nu \) given the statements in \(\Gamma \).

A completely analogous story can be told when we replace contextual entailment by logical entailment: a relation

$$\Gamma ,\Lambda \models \nu $$

captures a purely logical conditional dependency: in any state of any model, the questions in \(\Lambda \) jointly determine the question \(\nu \) given the statements in \(\Gamma \).

2.4 Questions as Information Types

In this section we show that both statements and questions may be regarded as describing information types: statements describe singleton types, which may be identified with specific pieces of information, while questions describe proper, non-singleton information types. This shows how the support approach may be viewed as generalizing the classical notion of entailment from pieces of information to arbitrary information types.

2.4.1 Inquisitive Propositions

In truth-conditional semantics, the content of a sentence \(\alpha \) in a model M is encoded by its truth-set, that is, by the set of all worlds in M where \(\alpha \) is true:

$$|\alpha |_M=\{w\in W_M\,|\,w\models \alpha \}.$$

Similarly, in support-conditional semantics, the content of a sentence \(\varphi \) in a model M is encoded by its support-set, that is, the set of all states in M where \(\varphi \) is supported:

$$[\varphi ]_M=\{s\subseteq W_M\,|\,s\models \varphi \}.$$

The support-set of a formula is a set of information states of a special form. Indeed, suppose an information state s settles a sentence \(\varphi \): then, any information state t that enhances s will also settle \(\varphi \). That is, the relation of support is persistent.Footnote 7

 

Persistency: if \(t\subseteq s\), then \(s\models \varphi \) implies \(t\models \varphi \).

This implies that the support-set \([\varphi ]_M\) of a sentence \(\varphi \) is always downward closed, that is, if it contains a state s, it also contains all stronger states \(t\subseteq s\).

Downward closure: if \(t\subseteq s\), then \(s\in [\varphi ]_M\) implies \(t\in [\varphi ]_M\).

 

Another way to state this condition uses a downward closure operation \((\cdot )^{\downarrow }\). In words, the downward closure of a set S of information states, denoted \(S^{\downarrow }\), is the set of those states which imply some element of S, i.e., which are included in some element of S.

Definition 2.4.1

(Downward closure) If \(S\subseteq \wp (W_M)\), the downward closure of S is the set:

$$S^\downarrow =\{s\subseteq W_M\,|\,s\subseteq t\text { for some }t\in S\}.$$
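As an illustrative sketch (not part of the formal development; the helper names `subsets` and `down` are our own), the downward closure operation can be computed directly over a finite model, with information states modeled as frozensets of worlds:

```python
from itertools import combinations

def subsets(s):
    """All subsets of the frozenset s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

def down(S):
    """Downward closure of S: all states included in some element of S."""
    return {t for a in S for t in subsets(a)}

a_low = frozenset({1, 2})   # "the outcome is low", as a set of worlds
print(down({a_low}) == {frozenset(), frozenset({1}), frozenset({2}), a_low})  # True

# down(S) is itself downward closed (the smallest such set containing S):
P = down({a_low, frozenset({3, 4})})
print(all(t in P for s in P for t in subsets(s)))  # True
```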

It is easy to see that \(S^\downarrow \) is always a downward closed set of states, and moreover, it is the smallest downward closed set of states which contains S. The fact that the support-set \([\varphi ]_M\) of a formula is downward closed may then be expressed succinctly as follows.  

Downward closure, restated: \([\varphi ]_M=([\varphi ]_M)^\downarrow \).

 

Downward closure is not the only general feature of the support-set of a sentence. Consider the empty information state, \(\emptyset \). This represents an inconsistent body of information, which is not compatible with any possible world. It follows from the Truth-Support Bridge that \(\emptyset \) supports any statement. This may be seen as a natural semantic version of the ex-falso quodlibet principle of classical logic. Similarly, it is natural to assume that \(\emptyset \) also trivially supports any question, so that for all \(\varphi \) we have the following.Footnote 8  

Semantic ex-falso: \(\emptyset \in [\varphi ]_M\).

  We will refer to a set of states which contains \(\emptyset \) and satisfies downward closure as an inquisitive proposition. Now, for a downward closed set of states P, we have \(\emptyset \in P\iff P\ne \emptyset \). Thus, we can define inquisitive propositions as follows.

Definition 2.4.2

(Inquisitive propositions) An inquisitive proposition in a model M is a non-empty and downward closed set \(P\subseteq \wp (W_M)\) of information states.

We refer to the support-set \([\varphi ]_M\) of a sentence \(\varphi \) in a model M as the inquisitive proposition expressed by \(\varphi \) in M.

A special role is played by the maximal elements in an inquisitive proposition P. We will refer to these elements as the alternatives in P.

Definition 2.4.3

(Alternatives) \(\textsc {Alt}(P)=\{s\in P\,|\,\text {there is no }t\supset s\text { such that\ }t\in P\}\).

If \(\varphi \) is a sentence, we also refer to the alternatives in \([\varphi ]_M\) as the alternatives for \(\varphi \) in the model M. Thus, the alternatives for \(\varphi \) are the minimally informed states that support \(\varphi \), i.e., those that contain just enough information to settle \(\varphi \). We write \(\textsc {Alt}_M(\varphi )\) instead of \(\textsc {Alt}([\varphi ]_M)\).
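To make the notion concrete, here is a small sketch (our own helper names, over a finite model) computing the maximal elements of an inquisitive proposition:

```python
from itertools import combinations

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

def down(S):
    return {t for a in S for t in subsets(a)}

def alternatives(P):
    """Alt(P): elements of P not properly included in any other element of P."""
    return {s for s in P if not any(s < t for t in P)}

P = down({frozenset({1, 2}), frozenset({3, 4})})
print(alternatives(P) == {frozenset({1, 2}), frozenset({3, 4})})  # True
```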

2.4.2 Pieces and Types of Information

Consider the proposition expressed by a statement \(\alpha \). The Truth-Support Bridge requires the following connection:

$$s\in [\alpha ]_M\iff s\subseteq |\alpha |_M.$$

This implies that \([\alpha ]_M\) has a unique maximal supporting state—a unique alternative—namely \(|\alpha |_M\):

  • \(\textsc {Alt}_M(\alpha )=\{|\alpha |_M\}\).

This unique alternative for \(\alpha \) is naturally regarded as a piece of information: the information that \(\alpha \) is true. To settle \(\alpha \) is just to establish this piece of information. By means of downward closure, we can express this as follows:

  • \([\alpha ]_M=\{|\alpha |_M\}^\downarrow \).

Thus, a statement \(\alpha \) may be regarded as naming a specific piece of information; the statement is settled in an information state s if this specific piece of information is available in s, i.e., if \(s\subseteq |\alpha |_M\).

Things are different for questions. For an illustration, consider the question \(\textsf {range}\) of whether the outcome of the die roll is in the low, middle, or high range. This question has three alternatives in the intended model:

  • \(a_{\textsf {low}}=\{w_1,w_2\}\);

  • \(a_{\textsf {mid}}=\{w_3,w_4\}\);

  • \(a_{\textsf {high}}=\{w_5,w_6\}\).

These alternatives correspond to three different pieces of information about the range of the outcome. We can think of them as three pieces of information which instantiate an information type, and we can think of the question \(\textsf {range}\) as naming this information type. To settle the question is to establish some information or other of this type. We can express this using the downward closure operation as follows:

  • \([\textsf {range}]_M=\{a_{\textsf {low}},a_{\textsf {mid}},a_{\textsf {high}}\}^\downarrow \).
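These observations can be checked mechanically. In the following sketch (our own naming conventions, with worlds named 1 through 6 by outcome), the statement \(\textsf {low}\) has a single alternative while the question \(\textsf {range}\) has three:

```python
from itertools import combinations

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

def down(S):
    return {t for a in S for t in subsets(a)}

def alternatives(P):
    return {s for s in P if not any(s < t for t in P)}

a_low, a_mid, a_high = frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})
low_P = down({a_low})                    # statement: a singleton information type
range_P = down({a_low, a_mid, a_high})   # question: a three-element information type

print(alternatives(low_P) == {a_low})                    # True
print(alternatives(range_P) == {a_low, a_mid, a_high})   # True
```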

The difference between the case of a statement and that of a question is illustrated in Fig. 2.4.

Fig. 2.4 Singleton versus non-singleton information types

Similarly, the question \(\textsf {parity}\)—whether the outcome is even or odd—corresponds to an information type comprising the following pieces of information:

  • \(a_{\textsf {even}}=\{w_2,w_4,w_6\}\);

  • \(a_{\textsf {odd}}=\{w_1,w_3,w_5\}\).

These encode, respectively, the information that the outcome is even, and the information that it is odd.

Finally, the question \(\textsf {outcome}\), what the outcome is, corresponds to a type of information comprising the pieces of information \(a_{\textsf {one}}=\{w_1\},\dots ,a_{\textsf {six}}=\{w_6\}\), each providing complete information about what the outcome is.

To make these observations more general, we introduce the following notion.

Definition 2.4.4

(Normality) We say that an inquisitive proposition P is normal in case \(P=\textsc {Alt}(P)^\downarrow \). We say that a sentence \(\varphi \) is normal in a model M in case \([\varphi ]_M\) is normal.

Note that the inclusion \(\textsc {Alt}(P)^\downarrow \subseteq P\) holds for any inquisitive proposition P: indeed, if \(s\subseteq a\) for some \(a\in \textsc {Alt}(P)\), then since \(a\in P\) also \(s\in P\) by downward closure. Thus, the normality condition amounts to the inclusion \(P\subseteq \textsc {Alt}(P)^\downarrow \), i.e., to the requirement that any element of P be included in a maximal one.

A statement \(\alpha \) is always normal, since as we have seen, the Truth-Support Bridge implies that any state supporting \(\alpha \) is included in the truth-set \(|\alpha |_M\), which is the unique alternative for \(\alpha \). The questions in our example are also normal, as are many other natural classes of questions. However, there is no reason to suppose that questions will in general be normal. We will encounter some examples of non-normal questions in Chap. 5. If a question \(\mu \) is normal, then we can naturally think of it as describing, in a model M, a type of information whose instances are the alternatives \(a\in \textsc {Alt}_M(\mu )\). To settle the question is to establish the information corresponding to one of these alternatives.

2.4.3 Generators and Alternatives

We have seen that, for many examples of sentences \(\varphi \), we have \([\varphi ]_M=\textsc {Alt}_M(\varphi )^{\downarrow }\), and in this case, we can think of \(\varphi \) as describing a type of information \(\textsc {Alt}_M(\varphi )\). However, there are many other sets of information states T such that \([\varphi ]_M=T^\downarrow \). For instance, since \([\varphi ]_M\) is downward closed we have \([\varphi ]_M=([\varphi ]_M)^\downarrow \). So, what is so special about alternatives?

In this section we give an answer to this question having to do with the way in which an inquisitive proposition may be regarded as being generated from a given set of information states.

Definition 2.4.5

(Generators for an inquisitive proposition) A set T of information states is a generator for an inquisitive proposition P if \(P=T^\downarrow \).

If T is a generator for \([\varphi ]_M\), this means that a state s supports \(\varphi \) iff s implies some piece of information \(a\in T\). Therefore, we can regard \(\varphi \) as standing for the type of information T.

Any inquisitive proposition P admits a trivial generator, namely, P itself. However, the examples discussed in the previous subsection show that many inquisitive propositions admit much smaller generators. For instance, in the case of the statement \(\textsf {low}\) that the outcome is low, the proposition \([\textsf {low}]_M\) admits a singleton generator, namely, \(\textsc {Alt}_M(\textsf {low})=\{a_{\textsf {low}}\}\). In the case of the question \(\textsf {range}\), the proposition \([\textsf {range}]_M\) admits a generator consisting of only three elements, namely, \(\textsc {Alt}_M(\textsf {range})=\{a_{\textsf {low}},a_{\textsf {mid}},a_{\textsf {high}}\}\).

The generators \(\textsc {Alt}_M(\textsf {low})\) and \(\textsc {Alt}_M(\textsf {range})\) are very different from the trivial generators \([\textsf {low}]_M\) and \([\textsf {range}]_M\). First, it is easy to check that any element of \(\textsc {Alt}_M(\textsf {low})\) and \(\textsc {Alt}_M(\textsf {range})\) is essential to the representation of the corresponding proposition: if we were to remove it, the resulting set would no longer be a generator for the proposition. We say that these generators are minimal. Moreover, the elements of these generators are pairwise logically independent, in the sense that one element of the generator never implies another. We will say that these generators are independent.

Definition 2.4.6

(Minimal and independent generators) Let T be a generator for an inquisitive proposition P. We say that T is:

  • minimal if no proper subset \(T'\subset T\) is a generator for P;

  • independent if there are no \(t,t'\in T\) such that \(t\subset t'\).

The following proposition says that minimal and independent generators coincide.

Proposition 2.4.7

(Minimality and independence are equivalent) Let T be a generator for a proposition P. T is minimal iff it is independent.

Proof

Suppose T is independent and consider a proper subset \(T'\subset T\). Let \(s\in T-T'\). Since \(s\in T\) and T is a generator for P, we have \(s\in T^\downarrow =P\). However, since \(T'\subseteq T\), s cannot be a subset of any element of \(T'\): otherwise, it would be a subset of some element of T other than itself, contrary to the independence of T. This means that \(s\not \in T'^\downarrow \). Since \(s\in P\), this shows that \(T'^\downarrow \ne P\), i.e., that \(T'\) is not a generator for P. Since \(T'\subset T\) was arbitrary, this shows that T is minimal.

For the converse, suppose T is not independent. Then, there are two states \(s,t\in T\) such that \(s\subset t\). Now consider \(T':=T-\{s\}\). It is easy to verify that \(T'\) is still a generator for P, which shows that T is not minimal. \(\Box \)
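The equivalence of Proposition 2.4.7 can be verified on concrete examples. The following sketch (helper names ours) checks both properties for a minimal generator and for one padded with a redundant element; note that since \((\cdot )^\downarrow \) is monotone, checking one-element removals suffices for minimality:

```python
from itertools import combinations

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

def down(S):
    return {t for a in S for t in subsets(a)}

def is_generator(T, P):
    return down(T) == P

def is_independent(T):
    return not any(s < t for s in T for t in T)

def is_minimal(T, P):
    # down is monotone, so checking one-element removals suffices.
    return all(not is_generator(T - {x}, P) for x in T)

a_low, a_mid, a_high = frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})
P = down({a_low, a_mid, a_high})
T1 = {a_low, a_mid, a_high}
T2 = T1 | {frozenset({1})}   # redundant extra element

for T in (T1, T2):
    assert is_generator(T, P)
    assert is_minimal(T, P) == is_independent(T)
print(is_minimal(T1, P), is_minimal(T2, P))  # True False
```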

We will refer to a minimal generator for a proposition as a basis.Footnote 9

Definition 2.4.8

(Basis for an inquisitive proposition ) A basis for an inquisitive proposition P is a minimal generator for P.

The next proposition shows that there is something special about the alternatives for an inquisitive proposition: whenever a proposition P admits a basis, the unique basis for P is \(\textsc {Alt}(P)\).

Proposition 2.4.9

 

  • An inquisitive proposition P has a basis iff P is normal;

  • If P has a basis, then the unique basis for P is \(\textsc {Alt}(P)\).

Proof

First suppose P is normal, i.e., \(P=\textsc {Alt}(P)^\downarrow \). This means that \(\textsc {Alt}(P)\) is a generator for P. Moreover, notice that by definition, \(\textsc {Alt}(P)\) is independent. By the previous proposition, it follows that \(\textsc {Alt}(P)\) is a basis for P.

Moreover, suppose T is another basis for P. Consider any \(s\in \textsc {Alt}(P)\): since \(s\in P=T^\downarrow \), s must be a subset of some element \(t\in T\). But now, since t is in T, t must also be in \(T^\downarrow =P\). Since \(s\in \textsc {Alt}(P)\) is by definition a maximal element in P, we must have \(s=t\), which implies \(s\in T\). Since this holds for any \(s\in \textsc {Alt}(P)\), we have \(\textsc {Alt}(P)\subseteq T\). By the minimality of T, this implies \(T=\textsc {Alt}(P)\). This shows that, if P admits a basis, the unique basis is \(\textsc {Alt}(P)\).

Conversely, suppose P is not normal, i.e., \(P\ne \textsc {Alt}(P)^\downarrow \). Since \(\textsc {Alt}(P)^\downarrow \subseteq P\) by the downward closure of P, we must have \(P\not \subseteq \textsc {Alt}(P)^\downarrow \). This means that there must be a state \(s\in P\) which is not included in any maximal state \(t\in P\).

Now consider any generator T for P. Since T is a generator and \(s\in P\), we must have \(s\subseteq t\) for some \(t\in T\). Now, we know that t cannot be a maximal element of P. So, let \(u\in P\) be such that \(t\subset u\). Since \(u\in P\) and T is a generator, u must be a subset of some \(t'\in T\). But then we have \(t\subset u\subseteq t'\), that is, t is a proper subset of \(t'\), and both t and \(t'\) are in T. This shows that T is not independent, which by the previous proposition implies that T is not a basis. Since T was arbitrary, this shows that there is no basis for P.\(\Box \)

This result shows that, if P is normal, we can regard it as being generated in a canonical way by the set of its alternatives, while if P is non-normal, there is no minimal choice of a generator for P.Footnote 10

2.4.4 Entailment as a Relation Among Information Types

Consider again our initial example of a dependency. If we think of the questions in the examples as names for information types, we may phrase this relation as follows: information of type \(\textsf {parity}\), combined with information of type \(\textsf {range}\), yields information of type \(\textsf {outcome}\).

In this section, we discuss how bringing questions into play may be viewed as taking us from a logic of pieces of information to a logic of information types. To see this, take a specific model M and fix for any sentence \(\varphi \) a generator \(T(\varphi )\) of \([\varphi ]_M\), so that we can regard \(\varphi \) as describing the type of information \(T(\varphi )\). If our sentences are normal, we saw that the canonical and most parsimonious choice is \(T(\varphi )=\textsc {Alt}_M(\varphi )\), but we will not need this assumption.

The following proposition shows that, indeed, entailment holds just in case every piece of information of type \(\varphi \) implies some piece of information of type \(\psi \).

Proposition 2.4.10

\(\varphi \models _M\psi \iff \text { for every }a\in T(\varphi )\text { there is some }a'\in T(\psi )\text { such that }a\subseteq a'\).

Proof

Suppose \(\varphi \models _M\psi \). This amounts to the inclusion \([\varphi ]_M\subseteq [\psi ]_M\). Take an arbitrary \(a\in T(\varphi )\). Using the fact that \(T(\varphi )\) and \(T(\psi )\) are generators for \([\varphi ]_M\) and \([\psi ]_M\) we have:

$$a\in T(\varphi )^{\downarrow }=[\varphi ]_M\subseteq [\psi ]_M=T(\psi )^{\downarrow }.$$

So we have \(a\in T(\psi )^{\downarrow }\), which means that \(a\subseteq a'\) for some \(a'\in T(\psi )\).

Conversely, suppose for all \(a\in T(\varphi )\text { there is some }a'\in T(\psi )\text { such that }a\subseteq a'\). This means that if a state s is a subset of some state in \(T(\varphi )\), it is also a subset of some state in \(T(\psi )\). So we have \(T(\varphi )^{\downarrow }\subseteq T(\psi )^{\downarrow }\). Since \(T(\varphi )\) and \(T(\psi )\) are generators, it follows that \([\varphi ]_M\subseteq [\psi ]_M\), which means that \(\varphi \models _M\psi \). \(\Box \)
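As a sketch (our own helper names), the generator condition of Proposition 2.4.10 can be checked against the inclusion of the generated propositions in the die-roll model:

```python
from itertools import combinations

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

def down(S):
    return {t for a in S for t in subsets(a)}

a_low, a_mid, a_high = frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})
T_range = {a_low, a_mid, a_high}
T_outcome = {frozenset({w}) for w in range(1, 7)}

def entails(T_phi, T_psi):
    """Generator condition of Proposition 2.4.10: every piece of type phi
    implies (is included in) some piece of type psi."""
    return all(any(a <= b for b in T_psi) for a in T_phi)

# The condition agrees with inclusion of the generated propositions:
print(entails(T_outcome, T_range) == (down(T_outcome) <= down(T_range)))  # True
print(entails(T_outcome, T_range))  # True: settling outcome settles range
print(entails(T_range, T_outcome))  # False
```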

In case at least one of the formulas involved is a statement \(\alpha \), things can be simplified by taking \(T(\alpha )=\{|\alpha |_M\}\). First, suppose the formulas \(\alpha \) and \(\beta \) at stake are both statements. In this case, the entailment boils down to a relation between pieces of information: the information that \(\alpha \) is true implies the information that \(\beta \) is true.

Proposition 2.4.11

(Statement-to-statement entailment)

\(\alpha \models _M\beta \iff |\alpha |_M\subseteq |\beta |_M\)

Next, suppose \(\alpha \) is a statement and \(\mu \) a question. The entailment \(\alpha \models \mu \) holds in case the information that \(\alpha \) is true yields some piece of information of type \(\mu \).

Proposition 2.4.12

(Statement-to-question entailment)

\(\alpha \models _M\mu \iff |\alpha |_M\subseteq a\) for some \(a\in T(\mu )\)

For an example, consider again the question \(\textsf {parity}\), whether the outcome is even or odd. Then we can take \(T(\textsf {parity})=\{a_{\textsf {even}},a_{\textsf {odd}}\}\). So a statement \(\alpha \) entails \(\textsf {parity}\) just in case \(|\alpha |_M\subseteq a_{\textsf {even}}\) or \(|\alpha |_M\subseteq a_{\textsf {odd}}\), i.e., just in case \(\alpha \) implies that the outcome is even or it implies that the outcome is odd. This fits the idea that a statement entails a question if it resolves it.
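A brief sketch of this criterion (function name ours; worlds named by outcome):

```python
a_even, a_odd = frozenset({2, 4, 6}), frozenset({1, 3, 5})

def settles_parity(truth_set):
    """alpha |= parity iff |alpha|_M is included in some piece of type parity."""
    return truth_set <= a_even or truth_set <= a_odd

print(settles_parity(frozenset({2})))     # True: "the outcome is two"
print(settles_parity(frozenset({1, 2})))  # False: "the outcome is low"
```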

Conversely, if \(\mu \) is a question and \(\alpha \) a statement, the entailment \(\mu \models \alpha \) holds in case any information of type \(\mu \) yields the information that \(\alpha \) is true.

Proposition 2.4.13

(Question-to-statement entailment)

\(\mu \models _M\alpha \iff a\subseteq |\alpha |_M\) for all \(a\in T(\mu )\)

For an example, let \(\mu \) be the question which even number is the outcome.

Let \(T(\mu )=\{a_{\textsf {two}},a_{\textsf {four}},a_{\textsf {six}}\}\), where \(a_{\textsf {two}}=\{w_2\}\), \(a_{\textsf {four}}=\{w_4\}\), and \(a_{\textsf {six}}=\{w_6\}\). A statement \(\alpha \) is entailed by \(\mu \) just in case we have \(a_{\textsf {two}}\subseteq |\alpha |_M\), \(a_{\textsf {four}}\subseteq |\alpha |_M\), and \(a_{\textsf {six}}\subseteq |\alpha |_M\). This holds just in case \(\{w_2,w_4,w_6\}\subseteq |\alpha |_M\). Thus, a statement \(\alpha \) is entailed by \(\mu \) if and only if \(\alpha \) is entailed by the statement that the outcome is even. This fits the idea that a question entails a statement if it presupposes it. Figure 2.5 gives a graphical illustration.

Fig. 2.5 In this model, the question whether the outcome is even or odd is entailed by the statement that the outcome is even, which in turn is entailed by the question which even number is the outcome

Next, let us look at entailments with multiple premises. The following proposition states that such an entailment holds just in case the following condition is met: whenever we are given a piece of information for each premise type, the resulting combined body of information implies some piece of information of the conclusion type.

Proposition 2.4.14

\(\varphi _1,\dots ,\varphi _n\models _M\psi \iff \text {for every }a_1\in T(\varphi _1),\dots ,a_n\in T(\varphi _n):\; a_1\cap \dots \cap a_n\subseteq a'\text { for some }a'\in T(\psi )\)

We omit the proof, which is a variation of the one given above for the case of a single premise. As an illustration, consider our initial example of a dependency. The questions \(\textsf {parity},\,\textsf {range},\) and \(\textsf {outcome}\) are canonically associated with the following information types:

  • \(T(\textsf {parity})=\{a_{\textsf {even}},a_{\textsf {odd}}\}\);

  • \(T(\textsf {range})=\{a_{\textsf {low}},a_{\textsf {mid}},a_{\textsf {high}}\}\);

  • \(T(\textsf {outcome})=\{a_{\textsf {one}},\dots ,a_{\textsf {six}}\}\).

The above proposition tells us that the entailment holds just in case for any way of pairing a piece of information a of type \(\textsf {parity}\) with a piece of information \(a'\) of type \(\textsf {range}\), the joint information \(a\cap a'\) implies some piece of information \(a''\) of type \(\textsf {outcome}\). And this is indeed the case, as the following relations show.

  • \(a_{\textsf {even}}\cap a_{\textsf {low}}\subseteq a_{\textsf {two}}\)

  • \(a_{\textsf {even}}\cap a_{\textsf {mid}}\subseteq a_{\textsf {four}}\)

  • \(a_{\textsf {even}}\cap a_{\textsf {high}}\subseteq a_{\textsf {six}}\)

  • \(a_{\textsf {odd}}\cap a_{\textsf {low}}\subseteq a_{\textsf {one}}\)

  • \(a_{\textsf {odd}}\cap a_{\textsf {mid}}\subseteq a_{\textsf {three}}\)

  • \(a_{\textsf {odd}}\cap a_{\textsf {high}}\subseteq a_{\textsf {five}}\)

This illustrates how dependencies amount to relations between information types: in this case, the entailment \(\textsf {parity},\textsf {range}\models _M\textsf {outcome}\) captures the fact that in the given context, information of type parity and information of type range are guaranteed to jointly yield information of type outcome.
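The condition of Proposition 2.4.14 can be checked mechanically for this example; in the sketch below (function name ours), range alone fails to determine outcome, while parity and range jointly succeed:

```python
from itertools import product

a_even, a_odd = frozenset({2, 4, 6}), frozenset({1, 3, 5})
a_low, a_mid, a_high = frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})
T_parity = {a_even, a_odd}
T_range = {a_low, a_mid, a_high}
T_outcome = {frozenset({w}) for w in range(1, 7)}

def jointly_determine(premise_types, T_psi):
    """Condition of Proposition 2.4.14: every combination of premise
    pieces jointly implies some piece of the conclusion type."""
    return all(
        any(frozenset.intersection(*combo) <= b for b in T_psi)
        for combo in product(*premise_types)
    )

print(jointly_determine([T_parity, T_range], T_outcome))  # True
print(jointly_determine([T_range], T_outcome))            # False: range alone is not enough
```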

2.5 Inquisitive Implication

2.5.1 Internalizing Entailment

In a support-based semantics, the contexts to which entailment can be relativized are the same as the objects at which formulas are evaluated, namely, information states. This makes it possible to define an implication connective \(\rightarrow \) which internalizes the meta-language relation of entailment. The idea is to make \(\varphi \rightarrow \psi \) supported by an information state s if relative to s, \(\varphi \) entails \(\psi \):

$$\begin{aligned} s\models \varphi \rightarrow \psi \;\iff \; \varphi \models _s\psi . \end{aligned}$$
(2.1)

Simply by making explicit what the condition \(\varphi \models _s\psi \) amounts to, we get the support clause governing this operation:

$$s\models \varphi \rightarrow \psi \;\iff \;\text {for all }t\subseteq s,\;\, t\models \varphi \,\text { implies }\,t\models \psi .$$

That is, an implication is supported in s in case enhancing s so as to support the antecedent is bound to lead to an information state that supports the consequent. Interestingly, this is, mutatis mutandis, precisely the interpretation of implication that we also find in most information-based semantics, such as Beth and Kripke semantics for intuitionistic logic, Veltman’s data semantics, and Humberstone’s possibility semantics (cf. references in Footnote 5).

Let us first consider the result of applying this implication operation to a pair of statements \(\alpha \) and \(\beta \). Using the Truth-Support Bridge, and denoting by \(\supset \) the truth-functional material conditional, we get:

$$\begin{aligned} s\models \alpha \rightarrow \beta &\iff \forall t\subseteq s:\; t\models \alpha \text { implies }t\models \beta \\ &\iff \forall t\subseteq s:\; t\subseteq |\alpha |_M\text { implies }t\subseteq |\beta |_M\\ &\iff s\cap |\alpha |_M\subseteq |\beta |_M\\ &\iff s\subseteq |\alpha \supset \beta |_M. \end{aligned}$$

Thus, the semantics of \(\alpha \rightarrow \beta \) that our clause delivers is the same that we would have obtained by lifting the material conditional of classical logic in accordance with the Truth-Support Bridge: \(\alpha \rightarrow \beta \) is supported at a state if the material conditional is true at each world in the state.
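This equivalence can be verified exhaustively in a small finite model. The sketch below (our own naming; statements given by truth-sets, support given by inclusion) checks all 64 states of the die-roll model:

```python
from itertools import combinations

W = frozenset(range(1, 7))

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

# Two statements, given by their truth-sets; by the Truth-Support Bridge,
# a state supports a statement iff it is included in its truth-set.
alpha, beta = frozenset({1, 2}), frozenset({2, 3})

def supports_impl(s):
    """s |= alpha -> beta: every subset of s supporting alpha supports beta."""
    return all(t <= beta for t in subsets(s) if t <= alpha)

material = (W - alpha) | beta   # truth-set of the material conditional
print(all(supports_impl(s) == (s <= material) for s in subsets(W)))  # True
```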

However, our implication connective is applicable not only to statements, but also to questions. For instance, suppose \(\mu \) and \(\nu \) are questions. According to the clause above, we have:

$$s\models \mu \rightarrow \nu \;\iff \; \mu \models _s\nu .$$

The right-hand side amounts to the fact that \(\mu \) determines \(\nu \) relative to s. So \(\mu \rightarrow \nu \) is a formula that expresses within the object language that \(\mu \) determines \(\nu \); it is supported precisely by those information states in which this dependency holds. Similarly, if \(\alpha \) is a statement and \(\mu \) a question, then \(\alpha \rightarrow \mu \) expresses the fact that \(\alpha \) resolves \(\mu \), and \(\mu \rightarrow \alpha \) the fact that \(\mu \) presupposes \(\alpha \).

Entailment relations involving multiple premises can also be expressed in the object language by means of \(\rightarrow \), as shown by the following proposition.

Proposition 2.5.1

For any number n we have:

$$s\models \varphi _1\rightarrow (\dots \rightarrow (\varphi _n\rightarrow \psi ))\iff \varphi _1,\dots ,\varphi _n\models _s \psi .$$

Proof

For simplicity, we give the proof for the case of \(n=2\), but the argument generalizes straightforwardly.

$$\begin{aligned}&s\models \varphi _1\rightarrow (\varphi _2\rightarrow \psi )\\ \iff \;&\forall t\subseteq s:t\models \varphi _1\text { implies }\forall t'\subseteq t: (t'\models \varphi _2\text { implies }t'\models \psi )\\ \iff \;&\forall t\subseteq s:t\models \varphi _1\text { and }t\models \varphi _2\text { implies }t\models \psi \\ \iff \;&\varphi _1,\varphi _2\models _s\psi \end{aligned}$$

The crucial step in this derivation is the second biconditional. For the left-to-right direction, notice that we can take \(t'=t\). For the converse, note that by the persistency of support, if \(t\models \varphi _1\) and \(t'\subseteq t\) then also \(t'\models \varphi _1\).\(\Box \)

This has interesting repercussions for the expression of conditional dependencies. As we discussed above in Sect. 2.3.3, the fact that in a state s a question \(\mu \) determines a question \(\nu \) conditionally on a statement \(\alpha \) is captured by the contextual entailment \(\alpha ,\mu \models _s\nu \). By what we have just seen, this entailment can be expressed in the object language by the formula

$$\alpha \rightarrow (\mu \rightarrow \nu ).$$

So this formula expresses that the dependency \(\mu \rightarrow \nu \) holds conditionally on \(\alpha \). This is interesting, as it shows that conditional dependencies can indeed be seen as arising from conditionalizing dependencies in a precise sense. They can be expressed as conditionals having the condition as antecedent, and a dependence formula as consequent.

To summarize: we saw above that the classical meta-language entailment relation can be generalized to questions, allowing us to capture the relations of (conditional) dependency, resolution, and presupposition. Now we have seen that, in a parallel way, the classical implication connective can be generalized to questions, providing us with the linguistic resources to express such relations in the object language.

2.5.2 Generalizing the Ramsey Test

Perhaps the most influential insight in theorizing about conditionals is the Ramsey test idea, so called after a footnote in Ramsey [23]. According to this idea, interpreting a conditional on the basis of a certain body of information involves supposing the antecedent and then assessing whether (or to what extent) the resulting hypothetical state supports the conclusion.

In this section we are going to see that the semantics of our inquisitive conditional vindicates the Ramsey test idea and generalizes it to questions.

Let us start by asking how we can model the process of supposing a statement \(\alpha \) in an information state s.Footnote 11 The natural answer in our setting is that to suppose \(\alpha \) in s is to suppose that the world is one where \(\alpha \) is true, i.e., to enter a hypothetical state which extends s by incorporating the information \(|\alpha |_M\). In other words, supposing \(\alpha \) takes us from s to the hypothetical state \(s\cap |\alpha |_M\).

The following proposition shows that when the antecedent is a statement, our semantics yields a version of the Ramsey test idea: a conditional is supported by an information state just in case the consequent is supported by the hypothetical state obtained by supposing the antecedent.

Proposition 2.5.2

(Ramsey test, deterministic case) Suppose \(\alpha \) is a statement, and \(\psi \) a sentence which may be either a statement or a question. Then for any model M and state s:

$$s\models \alpha \rightarrow \psi \iff s\cap |\alpha |_M\models \psi .$$

Proof

We have:

$$\begin{aligned} s\models \alpha \rightarrow \psi &\iff \forall t\subseteq s:\; t\models \alpha \text { implies }t\models \psi \\ &\iff \forall t\subseteq s:\; t\subseteq |\alpha |_M\text { implies }t\models \psi \\ &\iff \forall t\subseteq s\cap |\alpha |_M:\; t\models \psi \\ &\iff s\cap |\alpha |_M\models \psi \end{aligned}$$

where the second biconditional uses the Truth-Support Bridge for \(\alpha \) and the last biconditional uses the persistency of support.\(\Box \)
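Proposition 2.5.2 can also be checked by brute force in the die-roll model. In this sketch (names ours), the antecedent is the statement that the outcome is even and the consequent is the question \(\textsf {range}\):

```python
from itertools import combinations

W = frozenset(range(1, 7))

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

alpha = frozenset({2, 4, 6})   # statement: "the outcome is even"
a_low, a_mid, a_high = frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})

def sup_range(t):
    """t settles range iff t implies one of its three alternatives."""
    return any(t <= a for a in (a_low, a_mid, a_high))

def sup_impl(s):
    """s |= alpha -> range, by the support clause for implication."""
    return all(sup_range(t) for t in subsets(s) if t <= alpha)

# Proposition 2.5.2: equivalent to supposing alpha, then checking the consequent.
print(all(sup_impl(s) == sup_range(s & alpha) for s in subsets(W)))  # True
```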

What about the case in which the antecedent is a question \(\mu \)? In that case, the antecedent does not identify a single piece of information and, therefore, it does not determine a single supposition. Instead, the antecedent determines an information type \(T(\mu )\), which is instantiated by multiple pieces of information \(a\in T(\mu )\). Each of these pieces of information can be supposed in s, leading to a corresponding hypothetical state \(s\cap a\). Thus, an interrogative antecedent does not determine a single hypothetical state, but multiple hypothetical states. It is then natural to generalize the Ramsey test idea in the following way: interpreting a conditional on the basis of a certain body of information involves supposing information of the antecedent type and then assessing whether each of the resulting hypothetical states supports the conclusion. We can think of an interrogative antecedent as inducing a non-deterministic supposition, and we can think of the conditional as claiming that this supposition is bound to lead to a state that supports the consequent. The following proposition shows that the semantics of our conditional is in line with this generalization of the Ramsey test idea.

Proposition 2.5.3

(Ramsey test, general case) Let \(\varphi ,\psi \) be either statements or questions. Let M be a model, \(T(\varphi )\) a generator for \([\varphi ]_M\), and s an information state in M. Then:

$$s\models \varphi \rightarrow \psi \iff s\cap a\models \psi \text { for every }a\in T(\varphi ).$$

Proof

Suppose \(s\models \varphi \rightarrow \psi \). Take any \(a\in T(\varphi )\). Since \(a\models \varphi \), by persistency \(s\cap a\) is a subset of s supporting \(\varphi \). Since \(s\models \varphi \rightarrow \psi \), \(s\cap a\) must support \(\psi \).

Conversely, suppose \(s\cap a\models \psi \text { for every }a\in T(\varphi )\). Take any \(t\subseteq s\) which supports \(\varphi \). Since \(T(\varphi )\) is a generator for \([\varphi ]_M\), this means that we must have \(t\subseteq a\) for some \(a\in T(\varphi )\). Therefore, \(t\subseteq s\cap a\). By our assumption, \(s\cap a\models \psi \), and so by persistency, \(t\models \psi \). This shows that \(s\models \varphi \rightarrow \psi \).\(\Box \)

To illustrate this generalization of the Ramsey test to questions, it is helpful to consider a concrete example.

Example 2.5.4

(Implication among questions) Consider again the questions range and outcome from our die roll scenario. Recall from Sect. 2.4.2 that the question range can be associated with the generator

$$T(\textsf {range})=\{a_{\textsf {low}},a_{\textsf {mid}},a_{\textsf {high}}\}$$

where \(a_{\textsf {low}}=\{w_1,w_2\}\), \(a_{\textsf {mid}}=\{w_3,w_4\}\), and \(a_{\textsf {high}}=\{w_5,w_6\}\). According to the previous proposition, a state s supports the conditional

$$\textsf {range}\rightarrow \textsf {outcome}$$

just in case the following three conditions hold:

  • \(s\cap a_{\textsf {low}}\models \textsf {outcome}\);

  • \(s\cap a_{\textsf {mid}}\models \textsf {outcome}\);

  • \(s\cap a_{\textsf {high}}\models \textsf {outcome}\).

So, our question antecedent is associated not with a single supposition, but with three suppositions. Graphically, each of these suppositions amounts to restricting the state s to a specific column. In order to support outcome, the resulting hypothetical states are required to contain at most one world. So, the states that support the implication \(\textsf {range}\rightarrow \textsf {outcome}\) are those that contain at most one world from each column. The maximal such states—the alternatives—are those that select exactly one world from each column. Since there are 3 columns and 2 worlds for each, there are \(2^3=8\) alternatives for the implication \(\textsf {range}\rightarrow \textsf {outcome}\). These are visualized in Fig. 2.6. Note that each of these alternatives corresponds to one particular way for the question \(\textsf {range}\) to determine the question \(\textsf {outcome}\), i.e., to one particular way for the dependency to obtain.
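The count of eight alternatives can be computed directly. This sketch (names ours) builds the support-set of \(\textsf {range}\rightarrow \textsf {outcome}\) from the implication clause and extracts its maximal elements:

```python
from itertools import combinations

W = frozenset(range(1, 7))

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

cols = (frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6}))

def sup_range(t):
    return any(t <= a for a in cols)

def sup_outcome(t):
    return len(t) <= 1   # only complete (or inconsistent) info settles outcome

def sup_impl(s):
    """s |= range -> outcome, by the support clause for implication."""
    return all(sup_outcome(t) for t in subsets(s) if sup_range(t))

P = {s for s in subsets(W) if sup_impl(s)}
alts = {s for s in P if not any(s < t for t in P)}
print(len(alts))                                         # 8
print(all(len(s & a) == 1 for s in alts for a in cols))  # True: one world per column
```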

Fig. 2.6
figure 6

The eight alternatives for the implication \(\textsf {range}\rightarrow \textsf {outcome}\). Each alternative represents a maximal state in which \(\textsf {range}\) determines \(\textsf {outcome}\), and corresponds to a way for the dependency to obtain
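The counting argument in this example can be checked mechanically. The following Python sketch is our own illustration, not part of the formal development: the support condition for outcome (a state settles it iff it contains at most one world) and the column generator are assumptions chosen to match the example. It enumerates all states and counts the maximal ones supporting the implication:

```python
from itertools import combinations

W = [1, 2, 3, 4, 5, 6]  # worlds w1..w6, one per die outcome

# Generator for the question `range`: the three "columns"
columns = [{1, 2}, {3, 4}, {5, 6}]

def supports_outcome(s):
    # assumed: `outcome` is settled iff the state pins down the exact outcome
    return len(s) <= 1

def supports_impl(s):
    # s supports range -> outcome iff each restriction of s
    # to a column supports outcome
    return all(supports_outcome(s & a) for a in columns)

states = [set(c) for r in range(7) for c in combinations(W, r)]
supporting = [s for s in states if supports_impl(s)]

# Alternatives: maximal supporting states (exactly one world per column)
alts = [s for s in supporting if not any(s < t for t in supporting)]
print(len(alts))  # 8
```

As expected, the maximal supporting states are exactly the \(2^3=8\) selections of one world per column.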

2.6 Truth and Question Presupposition

In Sect. 2.2, we saw that for statements, truth conditions and support conditions are inter-definable via the Truth-Support Bridge. In particular, if we are given the support conditions of a statement \(\alpha \), we can recover its truth conditions by means of the following connection: \(\alpha \) is true at a world if and only if it is supported at the corresponding singleton state. In symbols:

$$w\models \alpha \iff \{w\}\models \alpha .$$

In a semantics where support is the primitive semantic notion, we can take this relation to provide a definition of truth in terms of support. Since support is defined not only for statements, but also for questions, the resulting notion of truth is then defined for questions as well.

But intuitively, what does it mean for a question \(\mu \) to be true at a world w? The definition says that \(\mu \) is true at w in case \(\mu \) is settled by the information state \(\{w\}\). Now, \(\{w\}\) is a state of complete information: thus, if the question is not settled in this state, this cannot be because not enough information is available: it must be because the question does not admit any truthful resolution at w. We can read \(w\models \mu \) as capturing the fact that question \(\mu \) is soluble at w.

This interpretation is backed by the following proposition, which follows immediately from the persistency of support.

Fig. 2.7
figure 7

The proposition expressed by a question, and the corresponding truth-set

Proposition 2.6.1

For any sentence \(\varphi \) and any world w: \(w\models \varphi \iff (w\in s\text { for some }s\models \varphi )\). Equivalently, for any sentence \(\varphi \) and model M we have: \(|\varphi |_M=\bigcup [\varphi ]_M\).

For a question \(\mu \), this proposition states that \(\mu \) is true at w just in case there is some body of information s that settles \(\mu \) (\(s\models \mu \)) without ruling out w (\(w\in s\)).

As illustration, consider again the question in (4), repeated below:

figure g

This question is supported by the singletons \(\{w_2\}\), \(\{w_4\}\), \(\{w_6\}\), but not by the singletons \(\{w_1\}\), \(\{w_3\}\), \(\{w_5\}\). So, the question is true at \(w_2,w_4,w_6\), but not at \(w_1,w_3,w_5\). In other words, this question is true just in case the outcome is even—it has the same truth conditions as the statement:

figure h

Figure 2.7 illustrates the relation between the proposition expressed by (5), \([(5)]_M\), and the set of worlds where (5) is true, \(|(5)|_M\).
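Proposition 2.6.1 can be verified computationally for this example. In the sketch below, the generator for the question is a hypothetical reconstruction of ours, chosen only to match the singleton support facts reported in the text (supported at \(\{w_2\},\{w_4\},\{w_6\}\) but not at the odd singletons):

```python
from itertools import combinations

W = [1, 2, 3, 4, 5, 6]

# Hypothetical generator matching the singletons reported in the text
T = [{2}, {4}, {6}]
supports = lambda s: any(s <= a for a in T)

# Truth via singleton states (the Truth-Support Bridge)
truth_set = {w for w in W if supports({w})}

# Union of all supporting states (Proposition 2.6.1)
states = [set(c) for r in range(7) for c in combinations(W, r)]
union_supporting = set().union(*(s for s in states if supports(s)))

print(sorted(truth_set))               # [2, 4, 6]
print(truth_set == union_supporting)   # True
```

The truth-set computed world by world coincides with the union of all supporting states, exactly as the proposition requires.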

Interestingly, this way of extending the notion of truth to questions was already proposed by Belnap [5] (see also [3]), though in the context of a slightly different approach to questions (cf. Sect. 2.9). Belnap put it as follows:

I should like in conclusion to propose the following linguistic reform: that we all start calling a question “true” just when some direct answer thereto is true.[5]

We will also follow Belnap [5] in taking the truth conditions of a question to capture its presupposition. Belnap puts forward the following thesis (the exact phrasing is not Belnap’s, but it is one he could have subscribed to, given the “linguistic reform” he proposed in the above passage).

Belnap’s Thesis

Every question presupposes precisely that its truth conditions obtain.

The qualification precisely means that a question presupposes that its truth conditions obtain, and nothing more. Thus, what a question presupposes is captured entirely by its truth conditions.

Like Belnap, we will say that a statement \(\alpha \) expresses the presupposition of a question \(\mu \) in case \(\alpha \) and \(\mu \) have the same truth conditions. Thus, for instance, the statement (6) expresses the presupposition of the question (5). In the logical systems that we will develop, it will often be convenient to associate with each question \(\mu \) in the system a specific statement \(\pi _\mu \) which expresses the question’s presupposition; for brevity, we will also refer to \(\pi _\mu \) as the presupposition of \(\mu \).Footnote 12

In the discussion above, we said that, if \(\mu \) is a question and \(\alpha \) is a statement, we can view the entailment \(\mu \models \alpha \) as capturing the fact that \(\mu \) presupposes \(\alpha \). The following proposition says that \(\mu \) presupposes \(\alpha \) just in case \(\alpha \) follows from the presupposition \(\pi _\mu \) of \(\mu \). The proof is left as an exercise (Exercise 2.10.5).

Proposition 2.6.2

Let \(\mu \) be a question, \(\pi _\mu \) a statement that expresses the presupposition of \(\mu \), and \(\alpha \) an arbitrary statement. Then

$$\mu \models \alpha \iff \pi _\mu \models \alpha .$$

The same holds when logical entailment is replaced by contextual entailment.

To conclude this section, let us address a possible source of confusion. We started by claiming that, in order to bring questions within the scope of logic, we should move away from truth-conditional semantics, since questions cannot be interpreted in terms of truth conditions. But now we are saying that questions have truth conditions after all. Doesn’t this, then, undermine our argument?

It does not: although we can define a technical notion of truth that applies to questions, that does not mean that the semantics of questions can be captured in terms of truth conditions. On the contrary, the truth conditions of a question heavily underdetermine its semantics. To see this, consider again the three questions \(\textsf {parity},\textsf {range},\) and \(\textsf {outcome}\) from our die roll example. It is easy to check that these questions have the same truth conditions in our model: they are all true at every world. But obviously they are not equivalent in the model, and the logical relations among them are non-trivial. Thus, we cannot rely on our generalized truth conditions to extend logic to questions in a meaningful way. It is only at the level of support that this extension is possible.

This discussion highlights a fundamental semantic difference between statements and questions. The support conditions of a statement are fully determined by its truth conditions: support at a state just amounts to truth at each world. By contrast, the support conditions of a question are underdetermined by its truth conditions, which only capture the presupposition of the question. We express this by saying that statements, unlike questions, are truth-conditional.

Definition 2.6.3

(Truth-conditionality) We call a sentence \(\varphi \) truth-conditional if for all models M and states \(s\subseteq W_M\):

$$s\models \varphi \iff w\models \varphi \text { for all }w\in s.$$

In the formal systems of the next chapters, we will take truth-conditionality to be the fundamental semantic difference between statements and questions. Our languages will not incorporate a syntactic distinction between statements and questions (though such a distinction is compatible with the project of inquisitive logic; see for instance [29, 30]). Rather, we will regard truth-conditional formulas as statements, and non-truth-conditional formulas as questions.Footnote 13  The following proposition provides an alternative characterization: statements may be seen as describing specific pieces of information, whereas questions need to be regarded as describing proper information types.

Proposition 2.6.4

\(\varphi \) is truth-conditional \(\iff [\varphi ]_M\) admits a singleton generator in any model M.

Proof

If \(\varphi \) is truth-conditional, it is easy to see that \([\varphi ]_M\) admits the singleton generator \(\{|\varphi |_M\}\) in any model. Conversely, suppose \([\varphi ]_M\) always admits a singleton generator and consider a model M. Let \(\{a_\varphi \}\) be a singleton generator for \([\varphi ]_M\), which means that \([\varphi ]_M=\{a_\varphi \}^\downarrow \). We have:

$$w\in |\varphi |_M\iff \{w\}\in [\varphi ]_M=\{a_\varphi \}^\downarrow \iff \{w\}\subseteq a_{\varphi } \iff w\in a_{\varphi }.$$

This shows that \(|\varphi |_M=a_\varphi \), that is, the unique element of the generator must be precisely the truth-set of \(\varphi \). Finally, using this fact we have:

$$s\models \varphi \iff s\in [\varphi ]_M=\{a_\varphi \}^\downarrow =\{|\varphi |_M\}^\downarrow \iff s\subseteq |\varphi |_M.$$

That is, \(\varphi \) is supported at a state in M in case it is true everywhere in the state. Since this is true for any M, \(\varphi \) is truth-conditional.\(\Box \)
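Definition 2.6.3 can be tested directly on the die-roll example. In the sketch below, the support conditions for even and parity are our own assumptions, in line with the earlier discussion; a statement passes the truth-conditionality check while a question does not:

```python
from itertools import combinations

W = [1, 2, 3, 4, 5, 6]
states = [set(c) for r in range(7) for c in combinations(W, r)]

# Assumed support conditions from the die-roll example
even   = lambda s: s <= {2, 4, 6}                    # statement
parity = lambda s: s <= {2, 4, 6} or s <= {1, 3, 5}  # question

def truth_conditional(supp):
    # Definition 2.6.3: support at s iff truth (singleton support)
    # at every world in s
    return all(supp(s) == all(supp({w}) for w in s) for s in states)

print(truth_conditional(even))    # True
print(truth_conditional(parity))  # False
```

Note how the failure arises: parity is true (settled at the singleton) at every world, yet it is not supported at, say, \(\{w_1,w_2\}\), so its support conditions outrun its truth conditions.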

2.7 Summing Up

We have seen that classical logic can be given an alternative, informational semantics in terms of support conditions, which determines when a sentence is settled by a body of information, rather than when it is true at a world. This semantics can be extended to interpret questions in a natural way. Using this semantics, we can generalize the classical notion of entailment to questions. Several interesting logical notions, including the relation of dependency discussed in Sect. 2.1, turn out to be facets of this general entailment relation.

We have discussed how, based on our semantics, sentences may be regarded as denoting information types: statements denote singleton types, which can be identified with specific pieces of information; questions denote non-singleton types, which are instantiated by several different pieces of information. An entailment holds if information of the type described by the premises is guaranteed to yield information of the type described by the conclusion.

We saw that a logic formulated within this framework can be equipped with a conditional operator which captures the meta-language entailment relation within the object language—providing us, in particular, with the linguistic resources to express dependencies and conditional dependencies. The semantics of this operator amounts to a generalization of the Ramsey test idea: a conditional is supported by an information state if supposing information of the antecedent type leads to a hypothetical state that supports the consequent.

Finally, we saw that a support-based semantics suggests a natural way to extend the notion of truth to questions and that, under such an extension, the truth conditions of a question may be viewed as capturing its presupposition.

One important part of the conceptual picture is still missing. This has to do with the role of questions in inference. We postpone discussion of this important topic until Chap. 3, since the main points are best illustrated once we have a concrete proof system on the table. However, we may already anticipate the main idea: using questions in proofs allows us to reason with arbitrary information of a given type. For instance, by assuming the question range, one is supposing to have the information whether the outcome is low, middle, or high. One can then reason about what other information one has on that basis, and thereby formally prove that certain dependencies hold.

This completes our presentation of the foundations of inquisitive logic. The rest of this chapter contains two discussion sections: the first (Sect. 2.8) concerns some modeling choices we made in this chapter, the second (Sect. 2.9) the relations between the present framework and previous approaches to questions in logic. The contents of these sections are not presupposed in the following chapters, so the reader may choose to skip ahead to the exercises (Sect. 2.10) or to the next chapter.

2.8 Setup Choices

In this section we discuss in more detail some of the basic setup choices we made in this chapter, which will be reflected in the systems to be developed in the rest of the book. We focus on issues concerning the modeling of information states.

2.8.1 Modeling Information States: Explicit Versus Implicit

We have seen that, by moving from a semantics based on possible worlds to a semantics based on information states, we can interpret both statements and questions in a uniform way, and thereby we obtain an interesting generalization of the classical notion of entailment.

This approach is compatible with different ways to model information states. We have opted here for an explicit modeling of information states, which assigns a content to information states and then orders information states in terms of their content. In our case, the relevant content of an information state is its power to represent things as being in a certain way—thus circumscribing a set of worlds as live possibilities.Footnote 14 This allowed us to identify an information state with the corresponding set of live possibilities and, thus, to model information states in the context of standard possible-world models of the kind commonly used in intensional logic.

Many information-based semantics proceed differently: they take information states and the relation of enhancement between them as primitive objects in the model, possibly making additional assumptions about the resulting ordered set. We may call this an implicit modeling of information states, since there is no explicit modeling of the content of an information state: in this approach, the model does not explicitly specify what the information available in an information state is. One gets some implicit description of the relevant information via the semantics, but even this does not entirely reflect the content of the state, since not all aspects of the available information need to be expressible in the relevant language. For instance, consider Kripke semantics for intuitionistic logic: a model may consist of an infinite chain \(s_0,s_1,s_2,\dots \) of information states, where each state \(s_{n+1}\) is stronger than \(s_n\) and where each state satisfies the same sentences. The model represents \(s_1\) as being stronger than \(s_0\), but it does not tell us what information we have at \(s_1\) but not at \(s_0\).

In principle, the project of inquisitive logic is compatible with both ways of introducing information states into the picture, and in fact, it has been carried out within both kinds of approaches (for studies of inquisitive logics based on the implicit approach, see Punčochár [31,32,33,34] and Holliday [35]). Both approaches are valuable, for different reasons. For our goals in this book, an explicit modeling of information states has several advantages.

First, it allows us to better motivate and assess the semantics. A crucial feature of the explicit approach is that one can see what the content of an information state is independently of the semantics. This allows us to see whether the semantics itself makes reasonable predictions—that is, whether it declares a sentence to be supported by those states in which it is intuitively settled in view of the content of the state, which we can grasp independently of the semantics. We made use of this feature above, when we discussed what information states in our die roll scenario should count as supporting our questions. In doing this, we relied on intuitions such as: the question is settled if and only if the information state implies that the state of affairs is such-and-such. An inquisitive semantics can be motivated, and assessed, on the basis of such support intuitions, just like a truth-conditional semantics can be motivated and assessed on the basis of intuitions about truth. By contrast, in the implicit perspective, we have no direct way to motivate and assess the semantics, since we have no independent access to the information that a state is supposed to encode. We can only assess the semantics indirectly, via the logic that it yields.

Second, the explicit perspective comes with a natural way to model concrete scenarios of partial information, such as our die example above: just build a model with one world for each way things may be; at least in simple examples, we have a good grasp of what these are. By contrast, the abstract perspective underdetermines which model we are supposed to use to represent a concrete scenario, even one of the simplest kind.

Third, in the implicit approach, the logic one gets depends in part on one’s assumptions about the structure of the space of information states. If one already has a logic in mind and simply wants to provide a semantics for it, one just has to find the right set of assumptions. But if one wants to motivate a logic on the basis of the semantics, then one has to justify one’s choice of a particular set of assumptions; that means not just motivating the assumptions one makes, but also arguing that no further assumptions should be made. This is hard, however, and rarely even attempted. By contrast, in the explicit approach, the structure of the space of information states does not have to be stipulated, but is determined by the contents of the relevant states.

Finally, the explicit approach is more conservative. It is not part of our aim in this book to question the suitability of truth-conditional semantics for statements. At the same time, we argued for a departure from truth-conditional semantics in order to extend logic to questions. Our approach allows us to have our cake and eat it too: we can base our semantics on standard possible-world models for intensional semantics; we interpret sentences in terms of support relative to sets of possible worlds; and then for statements we can retrieve truth conditions relative to worlds in the way described in Sect. 2.6. One may have independent reason for abandoning truth-conditional semantics for statements; but the task of extending logic to questions does not necessitate this move.

Let us illustrate this point with an example. In Chap. 8 we will discuss modal sentences like \(\Box {?p}\), which on one interpretation can be read as “the agent knows whether p”. Here, the argument of the modality is a question, ?p, but the entire sentence is a statement. The intended truth conditions of this statement can be specified in the setting of a standard Kripke model for epistemic logic as follows:

$$w\models \Box {? p}\;\iff \; \forall v,v'\in R[w]: (v\models p\iff v'\models p).$$

In inquisitive modal logic, we can deliver these truth conditions compositionally. The semantics is given in the setting of a standard Kripke model, but now in terms of support conditions. From these we can then derive truth conditions relative to possible worlds. For our statement, these are exactly the ones given above. Interpreting questions need not prevent us from assigning standard truth conditions to statements; on the contrary, it allows us to explain how statements involving question constituents get their truth conditions compositionally.
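To make the truth clause above concrete, here is a toy computation in Python. The accessibility relation and the valuation of p are invented purely for illustration:

```python
# Toy Kripke model (accessibility R and valuation of p are invented)
R = {1: {1, 2}, 2: {2}, 3: {3, 4}, 4: {4}}
p = {2, 4}  # worlds where p holds

def knows_whether_p(w):
    # w |= []?p  iff  all worlds accessible from w agree on the value of p
    return len({v in p for v in R[w]}) <= 1

print([w for w in R if knows_whether_p(w)])  # [2, 4]
```

At worlds 1 and 3 the accessible worlds disagree on p, so the agent does not know whether p there; at worlds 2 and 4 all accessible worlds agree, so \(\Box {?p}\) is true.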

All this is not to say that the explicit approach is preferable in all respects. The implicit approach has its own benefits. An important one is that it allows us to consider things at a higher level of abstraction, from the perspective of a very general framework that can be further constrained in various ways. In this way, we can see just what assumptions about the space of information states are responsible for certain features of the resulting logic. Moreover, we can study the variety of different logics that can result from different assumptions—as well as what these logics all have in common. This interesting project has been pursued in some detail in a series of papers by Vit Punčochář [31,32,33,34].

2.8.2 “Accessible” Information States? Distinguishing Semantics and Epistemology

A body of information describes the world as being a certain way, and thus determines a set of worlds \(s\subseteq W\). Conversely, given a set of worlds \(s\subseteq W\), we can think of it as a body of information: namely, the information that the world is one of those in s.

But—one may object—should this always qualify as an information state? What if it is not actually possible, due to constraints on cognition or on inquiry, to be in a state which determines s as its set of live possibilities?

A first thing to say is that this objection presupposes a different conception of the notion of information state than the one which is relevant for our enterprise. It presupposes that “information state” means something like “possible belief state of the agent” or “possible outcome of inquiry”, and therefore must be constrained by considerations about the limits of cognition or inquiry. Our view of information states is more abstract: an information state is just something that partially determines how things are; it makes sense to ask whether such an object settles a question, or a statement, quite regardless of whether it is, or could be, the belief state of an agent in an inquiry scenario.

Moreover, from the point of view of our enterprise, restricting the semantics to information states that are ‘accessible’ in some epistemic sense would be a bad idea: it would mix semantics and epistemology in an unintended way. Since the suggestion to modify the semantics in this way regularly comes up, it is worth illustrating the consequences of this sort of restriction in some detail.

Suppose that, in the die scenario, things are set up in such a way that the outcome will be revealed to us at once. Given this setup, it is not possible for us to gain partial information about the outcome: the only ‘accessible’ information states in this scenario are the initial state, where every outcome is possible, and the six complete states in which the outcome is revealed. These are shown in the picture on the left in Fig. 2.8.

Fig. 2.8
figure 8

The “accessible” information states in the two scenarios discussed in this section

Let \(\textsf {parity}\) denote the question whether the outcome is even or odd, and let \(\textsf {range}\) denote the question whether the range is low, middle, or high. If only accessible information states are taken into account, then relative to the model M which represents our state prior to the roll we get the prediction that

$$\textsf {parity}\models _M\textsf {range},$$

since any accessible state that supports parity is a complete state, and therefore also supports range. What this relation captures is a pragmatic fact about the inquiry situation: learning the parity of the outcome implies learning the range.

However, by focusing on learning, this approach misses a more basic fact: the parity of the outcome does not determine its range: for instance, the outcome being even does not determine whether it is low, middle, or high. This fact has nothing to do with any agent, or with learning. It has to do with what the live possibilities are and with how the two questions are related relative to this set of possibilities. It is a purely semantic matter.

On the standard view of logic, it is this sort of basic semantic relation that entailment is supposed to track, and this is so in our view as well.
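The contrast can be made concrete with a small computation. In the following Python sketch (the support conditions are our own assumptions, matching the die-roll model), the restricted "entailment" holds over accessible states but fails over arbitrary states:

```python
# Accessible states in the first scenario: the initial state plus
# the six complete states (an assumption matching the discussion)
W = frozenset({1, 2, 3, 4, 5, 6})
accessible = [W] + [frozenset({w}) for w in W]

parity = lambda s: s <= {2, 4, 6} or s <= {1, 3, 5}
rng    = lambda s: s <= {1, 2} or s <= {3, 4} or s <= {5, 6}  # `range`

# Over accessible states only, every state supporting parity is complete,
# so the pragmatic "entailment" parity |= range holds:
print(all(rng(s) for s in accessible if parity(s)))  # True

# Over arbitrary states it fails: {2,4,6} settles parity but not range
print(rng(frozenset({2, 4, 6})))  # False
```

The state \(\{w_2,w_4,w_6\}\), which witnesses the failure, is exactly the kind of state that the restriction to accessible states excludes from view.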

Now consider a second variation of our die scenario. We insert a coin into a machine that rolls a die, which is hidden from us. Then the following happens:

  • if the outcome is 1, we lose and a red light appears;

  • if the outcome is 6, we win and a green light appears;

  • in all other cases, we get the coin back and a yellow light appears, but the outcome is not revealed.

Thus, the accessible information states are the set \(\{w_1,\dots ,w_6\}\) (the initial state) and the states \(\{w_1\}\) (red light), \(\{w_6\}\) (green light), and \(\{w_2,w_3,w_4,w_5\}\) (yellow light). These are represented by the picture on the right in Fig. 2.8.

Here, restricting the semantics to accessible information states leads to unintended results even for statements. To see this, let even stand for the statement that the outcome is even, and let high be the statement that the outcome is high. If we consider only accessible states we get:

$$\textsf {even}\models _M\textsf {high}.$$

However, it is obvious that from the fact that the outcome is even it does not follow that the outcome is high, since the outcome may well be two or four. Of course, learning that the outcome is even implies learning that it is high, but that is a different matter, and not the sort of matter that we normally take logic to be about. This shows that restricting the semantics to accessible information states would take us to a revisionary view of logic, even in the case of statements. Pursuing such a revisionary project is legitimate (in a sense, intuitionistic logic arises precisely from such a project) but our aim here is different: not to revise standard logic, but to generalize it.

Here is another way to put the point: in order to determine if an entailment holds on the basis of an information state, one looks at subsets of the state. In so doing, one is not asking what would happen if an agent were to learn something. One is, rather, testing certain features of the available information by exploring what it implies under certain suppositions—looking, for instance, at whether combining the given information with information of one type (say, parity) yields information of another type (range). In sum, whether an entailment holds or not relative to a body of information is an intrinsic property of the information state itself, which turns on what the given information implies when augmented with certain suppositions. So, whether the entailment holds should depend only on the content of the state—the set of live possibilities—and on nothing else.

To distinguish semantic entailments from their pragmatic cousins is especially important since these relations have different logical features. For an example, take again parity, i.e., the question whether the outcome is even or odd. Consider a statement \(\alpha \): in any given context s, the entailment \(\alpha \models _s\textsf {parity}\) captures the fact that relative to the set of possibilities s, the information that \(\alpha \) settles the question whether the outcome is even or odd; that happens just in case the information that \(\alpha \) implies that the outcome is even, or the information that \(\alpha \) implies that the outcome is odd. So, if \(\alpha \) is a statement, we must have:

$$\alpha \models _s\textsf {parity}\;\iff \; \alpha \models _s\textsf {even}\;\text { or }\;\alpha \models _s\textsf {odd}.$$

As we will see, this is an instance of a general principle, called Split, which regulates the interaction of statements with questions. This principle is a tenet of inquisitive logics: it reflects the important idea that statements denote specific pieces of information (as we discussed in Sect. 2.4). This implies that to check what follows from \(\alpha \) in s is to check what follows by strengthening s in a specific way, namely, by supposing that \(\alpha \) is true.

Things look different under the pragmatic notion. Consider again the situation in which the only thing one can learn is the exact outcome of the roll. Then, for instance, learning that the outcome is low implies learning whether it is even or odd. However, learning that the outcome is low does not imply learning that it is even (since we may learn that the outcome is 1), and it does not imply learning that it is odd (since we may learn that it is 2). Therefore, if we took entailment to be about learning, the split property would fail:

$$\textsf {low}\models _s\textsf {parity}\;\;\;\not \!\!\Longrightarrow \; \textsf {low}\models _s\textsf {even}\,\text { or }\,\textsf {low}\models _s\textsf {odd}.$$

This is because one may come to learn the statement \(\textsf {low}\) in different ways, which lead to different ways of resolving the question \(\textsf {parity}\). This difference has repercussions for the behavior of the implication connective, and thereby leads to a different logic.
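The failure of Split under the pragmatic notion can also be checked mechanically. The sketch below uses the same assumed support conditions as before and quantifies only over the accessible states of the first scenario:

```python
# Accessible states: initial state plus the six complete states (assumed)
W = frozenset({1, 2, 3, 4, 5, 6})
accessible = [W] + [frozenset({w}) for w in W]

low    = lambda s: s <= {1, 2}
even   = lambda s: s <= {2, 4, 6}
odd    = lambda s: s <= {1, 3, 5}
parity = lambda s: even(s) or odd(s)

def entails(prem, concl):
    # the pragmatic notion: quantify over accessible states only
    return all(concl(s) for s in accessible if prem(s))

print(entails(low, parity))                      # True
print(entails(low, even) or entails(low, odd))   # False: Split fails
```

The accessible states supporting low are \(\{w_1\}\) and \(\{w_2\}\): each settles parity, but the first fails to support even and the second fails to support odd, so neither disjunct of the Split condition holds.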

Later on, we will see that the fact that implication quantifies over sub-states leads to hard questions—many of which are currently open—about the meta-theoretic properties of inquisitive predicate logic. It is sometimes suggested that a simpler logic may be obtained by revising the semantics so that implication only quantifies over a restricted family of subsets specified by the model. This is analogous to the move that is made by weakening second-order logic by replacing the standard semantics by Henkin semantics. Given the discussion in this section, however, we can see that the simpler logic obtained from this move would not track the intended entailment relation, but rather some kind of pragmatic counterpart of it that has different and weaker logical features.

2.9 Relation with Previous Work

In this section, we situate the present proposal within the landscape of previous approaches to questions in logic.

2.9.1 Erotetic Logic Tradition

Although questions have received much less attention than statements in logic, there is nevertheless a rather large literature on them. Most work in this tradition goes under the heading of erotetic logic. Some key references: Prior and Prior [36], Hamblin [2], Kubiński [37, 38], Harrah [39, 40], Åqvist [41], Belnap and Steel [3], Tichy [42], Wiśniewski [26, 43, 44]. This is not the place for a detailed survey of this literature (for a valuable survey, see Harrah [45]). Our aim in this section is merely to explain how our approach relates to previous work in this tradition. A comparison with some more closely related theories is given in the next sections.

We can identify two fundamental differences between inquisitive logic and previous theories in erotetic logic. One difference concerns the way questions are analyzed, the other the aims of the theory.

Different approaches to the interpretation of questions. The most common approach to the interpretation of questions in the erotetic logic tradition is the answer-set approach. The idea, which goes back to Hamblin [2], is that a question is interpreted by specifying what statements count as answers to it. In inquisitive logic, as we saw, questions are instead interpreted by specifying their support conditions. There is an obvious relation between the two approaches: the support conditions of a question capture the conditions in which the question counts as settled—or, we may as well say, answered. Clearly, specifying what counts as an answer is tightly related to specifying in what circumstances the question counts as answered (though here one encounters some subtle issues concerning whether information resolving a question is always linguistically expressible, which we do not assume). However, there are also differences. Perhaps most importantly, the answer-set approach treats statements and questions differently. Statements are interpreted directly, via standard truth-conditional semantics, while questions are interpreted only derivatively, by interpreting their answers. Our approach, by contrast, treats statements and questions on a par: both are interpreted in terms of the same semantic notion—support relative to an information state. This feature is crucial to get a uniform logic in which statements and questions can participate in the same logical relations and be combined by the same logical operations. Moreover, our approach is similar to the truth-conditional one in that both interpret sentences in terms of a certain satisfaction relation (truth/support) relative to certain evaluation points (worlds/information states), and then define entailment as preservation of this satisfaction relation. 
As becomes clear by looking at the formal systems, this makes inquisitive logic much more similar to standard logic as compared to logics based on the answer-set approach.Footnote 15

Different aims. Perhaps most importantly, our enterprise here is somewhat different from that of previous theories in erotetic logic. Approaches in the erotetic logic tradition share the idea that dealing with questions in logic means turning attention away from those concerns that take center stage in standard logic, namely, the study of entailment, logical operators (connectives, quantifiers, modalities), and proofs.

In the introduction to what is perhaps the most well-developed erotetic logic in the literature, Belnap and Steel [3] state their aims as follows:

On the object-language level we want to create a carefully designed apparatus permitting the asking and answering of questions. On the meta-language level we want to elaborate a set of concepts useful for categorizing, evaluating, and relating questions and answers.

This description fits not just Belnap and Steel’s own work, but most of the early work in the erotetic logic tradition. Some more recent approaches have focused instead on the role of questions in processes of inquiry, either modeling inquiry itself as a sequence of questioning moves and inference moves, as in the interrogative model of inquiry of Hintikka [25], or characterizing how questions can be arrived at in an inquiry scenario, as in the inferential erotetic logic of Wiśniewski [26, 27, 28, 43, 44].Footnote 16

Our aim here, by contrast, is to extend logic to questions while staying close to the standard concerns of logic: entailment, logical operators, and proofs.

First, consider entailment. The logical relations involving questions that we considered in this section are not new to the erotetic logic tradition. The notion of dependency that we discussed above has been studied under the name containment ever since Hamblin [2]. The two ‘mixed’ notions—a statement resolving a question and a question presupposing a statement—are also standard (they are found, e.g., in Belnap and Steel [3], where the terminology ‘being a complete answer to’ is used for the first notion). The novelty of our approach, however, is that it allows us to recognize these notions as being different instances of a single general notion—a notion that can be viewed as a generalization of entailment to questions. This realization allows for a much more thorough deployment of the tools of logic: we already discussed in Sect. 2.5 how, by internalizing entailment, we get the tools to express the relevant relations in the object language by means of a well-behaved implication connective; moreover, the fact that the relevant relations are entailments means that they can be formally proved once we develop a proof system for our logics.

Second, consider logical operators. Standard logic only becomes interesting once we consider complex statements, built up by means of connectives, quantifiers, or modalities. By contrast, in much of the erotetic logic literature, no logical operators can be applied to questions. There are some exceptions: for instance, Belnap and Steel [3] allow the formation of conjunctive questions and conditional questions. But these are treated on an ad-hoc basis: the relevant logical operations are not unified with standard conjunction and implication, and the formal properties of these operations are not investigated at all. Our approach will be very different: we will define our operators so that they can be applied uniformly to statements and questions. Thus, e.g., conditional statements and conditional questions will be treated by using the very same implication connective—the one described above. The study of the properties of these generalized operators is also a central topic in our approach.

Finally, consider proofs. Belnap and Steel are very clear:

Absolutely the wrong thing is to think [the logic of questions] is a logic in the sense of a deductive system, since one would then be driven to the pointless task of inventing an inferential scheme in which questions, or interrogatives, could serve as premises and conclusions.

(Belnap and Steel [3], p. 1)

We disagree: since entailments involving questions are meaningful and interesting, it is natural to ask whether they can be established by means of proof systems in which we can manipulate statements and questions. In fact, we will see that questions are very interesting tools for logical inference: they allow us to reason with arbitrary information of a given type. Thus, studying the role of questions in logical proofs turns out to be far from pointless.

2.9.2 The Logic of Interrogation

To our knowledge, the first approach that allows for a generalization of the classical notion of entailment to questions is the Logic of Interrogation (LoI) of Groenendijk [48], based on the partition theory of questions of Groenendijk and Stokhof [49]. The original presentation of the semantics is a dynamic one, in which the meaning of a sentence is identified with its context-change potential. However, as pointed out by ten Cate and Shan [50], the dynamic coating is not essential. In essence, the system may be described as follows: both statements and questions are interpreted with respect to pairs \(\langle w,w'\rangle \) of possible worlds. A statement is satisfied by such a pair if it is true at both worlds, while a question is satisfied if the complete answer to the question is the same in w and \(w'\). In this approach, the content of a sentence \(\varphi \) is captured by the set of pairs \(\langle w,w'\rangle \) satisfying \(\varphi \); for any \(\varphi \), this set is an equivalence relation over a subset of \(W_M\), which we will denote as \(\sim _{\varphi }\). Such an equivalence relation may be equivalently regarded as a partition \(\Pi ^\varphi _M\) of a subset of the logical space, where the blocks of the partition are the equivalence classes \([w]^{\sim _\varphi }\) of those worlds in the domain of \(\sim _\varphi \). For a statement \(\alpha \), the partition \(\Pi _M^\alpha \) always consists of a unique block, namely, the truth-set \(|\alpha |_M\) of the statement. For a question \(\mu \), \(\Pi _M^\mu \) typically consists of several blocks, which are regarded as the possible complete answers to the question.
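These clauses are straightforward to prototype. The following Python sketch is our own toy illustration, not part of the formal system: worlds are die outcomes, a statement is encoded by its truth set, and a question by the function mapping each world to the complete answer true there. It evaluates both kinds of sentence at pairs of worlds and recovers the induced partition.

```python
# Toy LoI model: six worlds, one per outcome of a die roll
# (a hypothetical encoding; worlds are just the integers 1..6).
W = {1, 2, 3, 4, 5, 6}

# A statement is given by its truth set; a pair <w, v> satisfies it
# iff the statement is true at both worlds.
def stmt_sat(truth_set, w, v):
    return w in truth_set and v in truth_set

# A question is given by a function assigning to each world the complete
# answer true there; a pair satisfies it iff the answer is the same.
def quest_sat(answer, w, v):
    return answer(w) == answer(v)

# The question 'parity': the complete answer at w is w's parity.
parity = lambda w: w % 2

# Recover the induced partition: the equivalence classes of the
# relation {<w, v> : quest_sat(parity, w, v)}.
def induced_partition(answer, worlds):
    return {frozenset(v for v in worlds if quest_sat(answer, w, v))
            for w in worlds}

pi_parity = induced_partition(parity, W)
assert pi_parity == {frozenset({1, 3, 5}), frozenset({2, 4, 6})}

# For a statement, the satisfying pairs determine the single block
# given by its truth set: any two even-outcome worlds satisfy 'even'.
even = {2, 4, 6}
assert all(stmt_sat(even, w, v) for w in even for v in even)
```

Note that for a question the satisfying pairs automatically form an equivalence relation on the worlds where some answer is true, which is why the partition can be read off directly.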

Since statements and questions are interpreted by means of a uniform semantics, LoI allows for the definition of a notion of entailment in which both statements and questions can take part:

$$\varphi \models _{\textsf {LoI}}\psi \iff \text {for all }M\text { and all }w,w'\in W_M: \langle w,w'\rangle \models \varphi \text { implies }\langle w,w'\rangle \models \psi .$$

In terms of partitions, this notion of entailment may be cast as follows:

$$\varphi \models _{\textsf {LoI}}\psi \iff \text {for all }M,\text { for all }a\in \Pi _M^\varphi \text { there is an }a'\in \Pi _M^\psi \text { such that }a\subseteq a'.$$

This is clearly reminiscent of Proposition 2.4.10: here, too, we can think of a sentence as denoting a (possibly singleton) information type; \(\varphi \) entails \(\psi \) if any information of type \(\varphi \) always yields some corresponding information of type \(\psi \).
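On a fixed toy model, the agreement between the two formulations can be spot-checked. The Python sketch below uses our own illustrative die-roll encoding, with questions represented directly by the partitions they induce, and compares the pair-based definition of entailment with the block-inclusion one (both restricted to a single model).

```python
from itertools import product

W = {1, 2, 3, 4, 5, 6}

# Questions represented by the partitions they induce on W.
outcome = {frozenset({w}) for w in W}                   # what is the outcome?
parity  = {frozenset({1, 3, 5}), frozenset({2, 4, 6})}  # even or odd?

def pair_sat(pi, w, v):
    # <w, v> satisfies the question iff some block contains both worlds.
    return any(w in b and v in b for b in pi)

def entails_pairs(pi1, pi2):
    # Pair-based clause: every satisfying pair of pi1 satisfies pi2.
    return all(pair_sat(pi2, w, v)
               for w, v in product(W, W) if pair_sat(pi1, w, v))

def entails_blocks(pi1, pi2):
    # Partition-based clause: every block of pi1 is included in some
    # block of pi2.
    return all(any(a <= b for b in pi2) for a in pi1)

# In this model, outcome entails parity but not conversely,
# and the two formulations agree on both verdicts.
assert entails_pairs(outcome, parity) and entails_blocks(outcome, parity)
assert not entails_pairs(parity, outcome)
assert not entails_blocks(parity, outcome)
```

This is only a single-model illustration of the general equivalence, of course, since LoI entailment quantifies over all models.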

Groenendijk [48] applies this approach to a logical language which is an extension of first-order predicate logic with questions. This gives rise to an interesting combined logic of statements and questions, which was investigated and axiomatized by ten Cate and Shan [50]. As we will see in Chap. 4, this can be identified with a fragment of inquisitive predicate logic.

What is the relation between the LoI framework and the present approach? Consider a question \(\mu \). Given the LoI perspective, it is natural to assume that \(\mu \) is settled in an information state s in case s entails some complete answer to \(\mu \). This yields the following support conditions:

$$s\models \mu \iff s\subseteq a\text { for some }a\in \Pi _M^\mu .$$

Thus, the set of supporting states is precisely the downward closure of \(\Pi _M^\mu \):

$$[\mu ]_M=(\Pi _M^\mu )^\downarrow .$$

The same relation holds for a statement \(\alpha \): \(\alpha \) is settled in an information state s in case \(s\subseteq |\alpha |_M\); given that \(\Pi _M^\alpha =\{|\alpha |_M\}\), we have that \([\alpha ]_M=(\Pi _M^\alpha )^\downarrow \). Thus, for all sentences \(\varphi \) that can be interpreted in LoI, we have:

$$[\varphi ]_M=(\Pi _M^\varphi )^\downarrow .$$

That is, the set of blocks of the partition induced by \(\varphi \) is always a generator for the inquisitive proposition expressed by \(\varphi \). This allows us to move from the LoI-representation of a sentence to its support-based representation.

Conversely, the LoI-representation of a sentence \(\varphi \) can be retrieved from its support-based representation by taking the maximal supporting states for \(\varphi \), i.e., the alternatives for the proposition \([\varphi ]_M\):

$$\Pi _M^\varphi =\textsc {Alt}([\varphi ]_M).$$
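Both directions of this correspondence are easy to make concrete. The following Python sketch, for an assumed three-world toy question, computes the downward closure of a set of blocks and then recovers the blocks as the maximal supporting states.

```python
from itertools import combinations

def downward_closure(blocks):
    # All subsets of some block: the supporting states (Pi)-down-arrow.
    states = set()
    for b in blocks:
        for r in range(len(b) + 1):
            states.update(frozenset(c) for c in combinations(b, r))
    return states

def alternatives(prop):
    # Alt: the maximal elements of a downward-closed set of states.
    return {s for s in prop if not any(s < t for t in prop)}

# A hypothetical question partitioning three worlds into {w1} and {w2, w3}.
pi = {frozenset({'w1'}), frozenset({'w2', 'w3'})}
prop = downward_closure(pi)

# Every supporting state is included in a block...
assert frozenset({'w2'}) in prop and frozenset({'w1', 'w2'}) not in prop
# ...and taking maximal states recovers the partition.
assert alternatives(prop) == pi
```

The round trip works here because the blocks of a partition are pairwise incomparable, so each block is itself a maximal supporting state.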

In sum, for sentences that can be interpreted in LoI, we can go back and forth between the two semantics. Moreover, given that \(\Pi _M^\varphi \) is a generator for \([\varphi ]_M\), it follows from Proposition 2.4.10 that the notion of entailment that the two frameworks characterize is the same:

$$\varphi \models \psi \iff \varphi \models _{\textsf {LoI}}\psi .$$

Thus, the logic of interrogation and our own approach essentially agree with respect to those sentences that can be interpreted in LoI. However, the support approach that we discussed in this section is strictly more general than the LoI approach based on pairs of worlds. In order for a question to be analyzable in LoI, it must be a unique-answer question, in the sense of the following definition.

Definition 2.9.1

(Unique-answer questions) A question \(\mu \) is a unique-answer question if, in any model M, every \(w\in W_M\) is contained in at most one alternative for \(\mu \).

The class of unique-answer questions includes many natural kinds of questions, such as the questions \(\textsf {parity},\textsf {range}, \) and \(\textsf {outcome}\) from our initial example. But there are also important classes of questions that are not unique-answer and that can be analyzed in inquisitive semantics but not in partition semantics.

For an example, imagine a game in which a player has picked a secret two-digit code, where each digit is 1, 2, or 3. So, there are \(3\times 3=9\) possible codes (11, 12, 13, etc.), corresponding to 9 possible worlds. Now consider:

[(7): a mention-some question, asking for a digit that occurs in the code]

This question is completely resolved just in case one of the following pieces of information is available:

[the three answers: that 1 occurs in the code, that 2 occurs, and that 3 occurs]

Thus, we have three alternatives for the question, depicted in Fig. 2.9. As the figure shows, these alternatives overlap: this corresponds to the fact that the three pieces of information above are not mutually exclusive—on the contrary, they are pairwise compatible. So, (7) is not a unique-answer question and cannot be analyzed in partition semantics.

Fig. 2.9: Overlapping alternatives for a mention-some question
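The overlap, and the resulting failure of the unique-answer property, can be checked mechanically. In the Python sketch below, the encoding of codes as pairs of digits is our own illustrative choice.

```python
# Nine worlds: the possible two-digit codes, encoded as pairs.
W = {(i, j) for i in (1, 2, 3) for j in (1, 2, 3)}

# Alternative for "digit d occurs in the code": the worlds where d occurs.
def occurs(d):
    return frozenset(w for w in W if d in w)

alts = [occurs(1), occurs(2), occurs(3)]

# The alternatives overlap pairwise: e.g. world (1, 2) lies both in
# the alternative for "1 occurs" and in the one for "2 occurs".
assert all(a & b for a in alts for b in alts)

# Hence this is not a unique-answer question: some world is contained
# in more than one alternative.
assert any(sum(w in a for a in alts) > 1 for w in W)
```

Since the alternatives overlap, they cannot be the blocks of any partition of the logical space, which is exactly why partition semantics cannot represent this question.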

Questions like (7), which ask for an instance of a given property, are known as mention-some questions. Mention-some questions are not exotic—on the contrary, they occur frequently both in ordinary situations and in specialized scientific discourse. The following are examples.

[examples of mention-some questions]

Mention-some questions are not the only class of questions that can be analyzed in inquisitive semantics but not in partition semantics. Other examples are approximate value questions like (10-a) (cf. Yablo [51]), which ask for a value with a certain margin of error, and conditional questions like (10-b), which ask for the answer to a question under a supposition. We will see in the next chapter how conditional questions can be analyzed naturally as conditionals in inquisitive semantics.

[(10-a): an approximate value question; (10-b): a conditional question]

Summing up, the extra generality of inquisitive semantics allows us to represent, and reason with, a broader class of questions than is treatable in LoI.

However, there is also a second reason why this generality is important. We saw above that inquisitive semantics can be equipped with an implication operator \(\rightarrow \) that allows us to express the meta-language entailment relation within the object language. This operator plays a key role in inquisitive logics. But, as we will see in the next chapter, the inquisitive proposition expressed by implications is typically one that does not correspond to a partition of the logical space—it involves overlapping alternatives.

In partition semantics, it is provably impossible to define such a connective. Consider, for instance, the questions \(\textsf {parity}\) (whether the outcome is even or odd) and \(\textsf {outcome}\) (what the outcome is) in our die roll example. One can prove that in the model M of our running example there is no way to assign a partition to the implication \((\textsf {parity}\rightarrow \textsf {outcome})\) in such a way that for every unique-answer question \(\lambda \) we have the desired connection:

$$\lambda ,\textsf {parity}\models _M\textsf {outcome}\iff \lambda \models _M\textsf {parity}\rightarrow \textsf {outcome}.$$

In other words, due to its semantic assumptions, LoI does not allow us to access some important expressive means that will play an important role in the development of our inquisitive logics.

2.9.3 Inquisitive Pair Semantics

Starting with the work of Velissaratou [52] on conditional questions, the pursuit of greater generality led to the development of successors of LoI in which formulas are still evaluated relative to pairs of worlds, but the set of pairs satisfying a given formula is not necessarily an equivalence relation. In this setting, the natural way to read the relation \(\langle w,w'\rangle \models \mu \), where \(\mu \) is a question, is no longer “the complete answer to \(\mu \) is the same in w as in \(w'\)”, but rather “some complete answer to \(\mu \) is true at both w and \(w'\)”. This approach, laid out in Groenendijk [53] and Mascarenhas [54], was originally dubbed inquisitive semantics; it is now referred to as inquisitive pair semantics to distinguish it from the present support-based approach.

While Groenendijk [29] showed that this sort of semantics can indeed deal adequately with conditional questions, Ciardelli [21, 55], and later Ciardelli et al. [30], argued that no pair semantics can provide a satisfactory general framework for question semantics. To get an idea of the problem, consider again our mention-some question (7), and consider the set of possible worlds \(s=\{w_{12},w_{13},w_{23}\}\) in the model of Fig. 2.9 (i.e., the upper-right corner in the figure). In s, our mention-some question is not settled: the information available in s implies neither that 1 occurs in the code, nor that 2 occurs, nor that 3 occurs. However, this cannot be detected by looking at pairs of worlds, since each pair of worlds in s does settle the question. So if our semantics only looks at pairs of worlds, it fails to see that (7) is not settled in s and, thus, it fails to distinguish (7) from a different question which is settled at s. Examples of this kind motivated a shift from pairs to sets of worlds as points of evaluation, leading to the modern support-based version of inquisitive semantics.
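The problem can be verified concretely. In the following Python sketch (with codes again encoded as pairs of digits, an illustrative choice of our own), every pair of worlds drawn from s supports the mention-some question, while s itself does not.

```python
from itertools import combinations

# Nine worlds: the possible two-digit codes, encoded as pairs.
W = {(i, j) for i in (1, 2, 3) for j in (1, 2, 3)}

# Alternatives for the mention-some question: "digit d occurs in the code".
alts = [frozenset(w for w in W if d in w) for d in (1, 2, 3)]

def settled(state):
    # A state settles the question iff it is included in some alternative.
    return any(state <= a for a in alts)

# The problematic state: the codes 12, 13, 23.
s = frozenset({(1, 2), (1, 3), (2, 3)})

# Every two-element subset of s settles the question...
assert all(settled(frozenset(p)) for p in combinations(s, 2))
# ...but s itself does not, so a semantics that only inspects pairs
# of worlds cannot detect that the question is unsettled in s.
assert not settled(s)
```

This is precisely the counterexample described in the text: support at a state is not determined by support at its two-element substates.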

2.9.4 Nelken and Shan’s Modal Approach

A different uniform approach to statements and questions was proposed by Nelken and Shan [56]. In this approach, questions are translated as modal sentences, and they are interpreted by means of truth conditions: a question is true at a world w in case it is settled by an information state R[w] associated with the world (i.e., the set of successors given by an accessibility relation R). Thus, for instance, Nelken and Shan render the question whether p by the modal formula \(?p:=\Box p\vee \Box \lnot p\).

In one respect, this approach is similar to the approach proposed here, since the meaning of a question is essentially taken to be encoded by the conditions under which the question is settled. And indeed, if we consider entailments which involve only questions, the approach of Nelken and Shan would make the same predictions as ours. However, in their approach an important asymmetry between statements and questions is maintained: for questions, what matters is whether they are settled by a relevant information state, while for statements, what matters is whether they are true at the world of evaluation. This asymmetry creates problems the moment we start considering cases of entailment involving both statements and questions. It is easy to see that, if such entailments are to be meaningful at all, entailment cannot just amount to preservation of truth. Nelken and Shan propose to fix this by redefining entailment as modal consequence: \(\varphi \models \psi \) if, whenever \(\varphi \) is true at every possible world in a model, so is \(\psi \). However, this move has the unintended consequence of changing the consequence relation for modal statements. For instance, if our declarative language contains a Kripke modality, say a knowledge modality K, then redefining entailment as modal consequence yields undesirable predictions, such as \(p\models Kp\). Thus, this approach does not really allow us to extend classical logics with questions in a conservative way.Footnote 17
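The undesirable prediction \(p\models Kp\) under modal consequence is a general fact; the Python sketch below merely illustrates it on an assumed two-world Kripke model, contrasting it with the failure of truth-preservation entailment in the same model.

```python
# A toy Kripke model: two worlds; p true at w1 only; w1 sees both worlds.
W = {'w1', 'w2'}
R = {'w1': {'w1', 'w2'}, 'w2': {'w2'}}
V = {'p': {'w1'}}

def true_p(w):
    return w in V['p']

def true_Kp(w):
    # Kp is true at w iff p is true at every R-successor of w.
    return all(true_p(v) for v in R[w])

# Truth-preservation: p does NOT entail Kp (witness: world w1).
assert true_p('w1') and not true_Kp('w1')

# Modal consequence only inspects models where p is globally true;
# making p true at every world forces Kp to be true everywhere too,
# so the entailment p |= Kp goes through under that definition.
V['p'] = set(W)
assert all(true_Kp(w) for w in W)
```

Since Kp says at most that p holds at all accessible worlds, global truth of p trivially guarantees it, which is why modal consequence validates the unwanted entailment in every model.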

The asymmetry from which the problem originates can be eliminated by letting statements, too, be interpreted in terms of when they are settled by the state R[w], rather than in terms of when they are true at w: that is, we may render a basic statement not as a propositional formula \(\alpha \), but as a modal formula \(\Box \alpha \). If we made this move, we would arrive at a framework with a sensible logic, but with some unnecessary complexity: while sentences are interpreted with respect to a world w equipped with an information state R[w], it is only the state R[w] that matters for the interpretation of both statements and questions. We could thus get rid of the worlds altogether and interpret formulas directly relative to states. This would also allow us to work with simpler models and a simpler syntax, leading to an approach similar to the one taken here.

2.10 Exercises

Exercise 2.10.1

(Support conditions) Imagine a game in which a player has picked a secret two-digit code, where each digit is 1, 2, or 3. So, there are \(3\times 3=9\) possible codes, corresponding to 9 possible worlds, as follows:

[figure: the nine code worlds]
  1. For each of the following statements and questions, determine and draw a picture of the maximal information states in which it is supported.

     [list of statements and questions, labeled a–j]

  2. Find questions in English having the following sets as alternatives:

     [sets of alternatives]

     Note that one trivial option is to formulate the relevant question as: “is the code among those in \(\{\dots \}\), or among those in \(\{\dots \}\), etc.”; try to avoid such trivial solutions and to come up with more natural formulations.

  3. How many non-equivalent yes/no questions can we in principle ask about the code in this model? (Include the trivial yes/no question in the count.)

Exercise 2.10.2

(Inquisitive entailment in context) Consider the model and some of the sentences of the previous exercise, labeled by letters as above. We can draw a table showing at row x and column y whether or not sentence x entails sentence y in the model. For instance:

 

        a    b    c    f    h    i    j
   a    ✓    ✓    ✓
   b    ×    ✓    ✓
   c    ×    ×    ✓
   f
   h
   i
   j

Determine which of the remaining entailments hold and fill the table.

Exercise 2.10.3

(Polar questions) If \(\alpha \) is a statement, let \(?\alpha \) denote the polar question whether \(\alpha \), which is settled iff the available information determines whether \(\alpha \) is true or false:

$$s\models {?\alpha }\iff s\subseteq |\alpha |_M\text { or }s\cap |\alpha |_M=\emptyset .$$

Let M be an arbitrary model. Let us say that a sentence \(\varphi \) is a tautology in M if \(\varphi \) is supported by every information state in M. Prove the following claims:

  1. \(?\alpha \) is a tautology in \(M\iff \alpha \) or \(\lnot \alpha \) is a tautology in M.

  2. \({?\alpha }\equiv _M{?\beta }\iff \alpha \equiv _M\beta \) or \(\alpha \equiv _M\lnot \beta \).

  3. If \({?\alpha }\models _M{?\beta }\) then either \({?\alpha }\equiv _M{?\beta }\) or \({?\beta }\) is a tautology in M.

Notice in particular that item 3 implies that no proper entailment can hold among non-trivial polar questions.

Exercise 2.10.4

(Inquisitive implication) Consider again the model corresponding to our die roll scenario, and consider the following statements and questions.

\(\textsf {even}\): The outcome is even.

\(\textsf {low}\): The outcome is low.

\(\textsf {parity}\): Is the outcome even, or odd?

\(\textsf {range}\): Is the outcome low, middle, or high?

Determine and draw the alternatives for the following implications.

  1. \(\textsf {even}\rightarrow \textsf {low}\)

  2. \(\textsf {even}\rightarrow \textsf {range}\)

  3. \(\textsf {even}\rightarrow \textsf {parity}\)

  4. \(\textsf {range}\rightarrow \textsf {even}\)

  5. \(\textsf {parity}\rightarrow \textsf {range}\)

  6. \(\textsf {range}\rightarrow \textsf {parity}\)

Hint. Use the Ramsey test clause given by Propositions 2.5.2 and 2.5.3. Draw each alternative in a separate picture of the logical space, as in Fig. 2.6.

Exercise 2.10.5

(Entailments towards statements) Let \(\alpha \) be a statement and \(\Phi \) a set of sentences which may be either statements or questions. Using the Truth-Support Bridge, show that we have:

$$\Phi \models \alpha \iff \text {for all models }M\text { and worlds }w\in W_M: w\models \Phi \text { implies }w\models \alpha ,$$

where \(w\models \Phi \) means ‘\(w\models \varphi \) for all \(\varphi \in \Phi \)’. Using this, prove Proposition 2.6.2.