# Algebraic foundations for the semantic treatment of inquisitive content

DOI: 10.1007/s11229-013-0282-4

- Cite this article as:
- Roelofsen, F. Synthese (2013) 190(Suppl 1): 79. doi:10.1007/s11229-013-0282-4


## Abstract

In classical logic, the proposition expressed by a sentence is construed as a set of possible worlds, capturing the informative content of the sentence. However, sentences in natural language are not only used to *provide* information, but also to *request* information. Thus, natural language semantics requires a logical framework whose notion of meaning does not only embody informative content, but also inquisitive content. This paper develops the algebraic foundations for such a framework. We argue that propositions, in order to embody both informative and inquisitive content in a satisfactory way, should be defined as non-empty, downward closed sets of possibilities, where each possibility in turn is a set of possible worlds. We define a natural entailment order over such propositions, capturing when one proposition is at least as informative and inquisitive as another, and we show that this entailment order gives rise to a complete Heyting algebra, with meet, join, and relative pseudo-complement operators. Just as in classical logic, these semantic operators are then associated with the logical constants in a first-order language. We explore the logical properties of the resulting system and discuss its significance for natural language semantics. We show that the system essentially coincides with the simplest and most well-understood existing implementation of *inquisitive semantics*, and that its treatment of disjunction and existentials also concurs with recent work in *alternative semantics*. Thus, our algebraic considerations do not lead to a wholly new treatment of the logical constants, but rather provide more solid foundations for some of the existing proposals.

### Keywords

Algebraic semantics · Inquisitive semantics · Alternative semantics

## 1 Introduction

In classical logic, the proposition expressed by a sentence is construed as a set of possible worlds, embodying the informative content of the sentence. However, sentences in natural language are not only used to *provide* information, but also to *request* information. Thus, natural language semantics requires a logical framework whose notion of meaning embodies both informative and inquisitive content.

A natural starting point–rooted in the seminal work of Hamblin (1973) and Karttunen (1977) on the semantics of questions, and further pursued in recent work on *inquisitive semantics* (Groenendijk and Roelofsen 2009; Ciardelli 2009; Ciardelli and Roelofsen 2011, among others)–is to construe the proposition expressed by a sentence \(\varphi \), \([\varphi ]\), as a set of *possibilities*, where each possibility in turn is a set of possible worlds. In uttering \(\varphi \), a speaker can then be taken to provide the information that the actual world is located in at least one of the possibilities in \([\varphi ]\), i.e., in \(\bigcup [\varphi ]\), and at the same time she can be taken to request information from other conversational participants in order to locate the actual world inside a specific possibility in \([\varphi ]\).

As soon as we move from the classical notion of propositions as sets of possible worlds to the richer notion of propositions as sets of possibilities, two crucial questions arise. The first question is whether propositions should really be defined as *arbitrary* sets of possibilities, or whether we should adopt certain constraints on which sets of possibilities form suitable propositions and which don’t. The above discussion indicates that sets of possibilities are *sufficient* for the purpose at hand, which is to capture informative and inquisitive content. But we do not only want a notion of propositions that is sufficient for the given purpose, we want a notion that is *just right*. In particular, it should be the case that any two non-identical propositions really differ in informative and/or inquisitive content. Otherwise, we would have two representations for exactly the same content, which means that our notion of propositions would be too fine-grained. We will show that, in order to meet this criterion, propositions should not be defined as arbitrary sets of possibilities, but, instead, as sets of possibilities that are non-empty and downward closed (i.e., if \(\alpha \in [\varphi ]\) and \(\beta {\,\subseteq \,}\alpha \), then \(\beta \in [\varphi ]\) as well). This result is relevant for any Hamblin/Karttunen-style semantic account of questions, no matter whether such an account is cast within the framework of inquisitive semantics or not.

The second question that arises is how the propositions expressed by complex sentences should be defined in a compositional way. In particular, if we limit ourselves to a first-order language, what is the role of connectives and quantifiers in this richer setting? How do we define \([\lnot \varphi ]\), \([\varphi \wedge \psi ]\), \([\varphi \vee \psi ]\), etcetera, in terms of \([\varphi ]\) and \([\psi ]\)?

This issue has been addressed quite extensively in recent work on inquisitive semantics. It has also been addressed in a different setting, namely in work on so-called *alternative semantics* for disjunction and existentials (Kratzer and Shimoyama 2002; Simons 2005a, b; Alonso-Ovalle 2006, 2008, 2009; Aloni 2007a, b; Menéndez-Benito 2005, 2010, among others). In this framework, sets of possibilities–also known as *alternatives*–are not primarily used to capture inquisitive content, but rather to characterize the semantic contribution of disjunction and existentials in the process of meaning composition. Even though inquisitive and alternative semantics were motivated by rather different concerns, they essentially coincide in their treatment of disjunction and existentials.

It has been shown that the treatment of the logical constants in inquisitive and alternative semantics makes suitable predictions about the semantic behavior of the corresponding connectives and quantifiers in a variety of typologically unrelated natural languages. However, even though we have thus obtained a much more accurate characterization of the meaning of the relevant connectives and quantifiers in natural language, inquisitive and alternative semantics do not yet provide an explanation for why these constructions have the particular meanings that they have, and why constructions with these particular meanings are so widespread across languages.

After all, to justify their treatment of the logical constants, both frameworks directly rely on observations concerning the semantic behavior of the corresponding connectives and quantifiers in natural language. For instance, the treatment of \(\vee \) is justified by observations concerning the word *or* in English and similar words in other languages. The advantage of this approach is that it provides a very direct link between the formal treatment of the logical constants on the one hand, and intuitions about the natural language expressions that these logical constants are usually associated with on the other hand. Thereby, it immediately brings out the linguistic significance of the two frameworks. However, in order to gain *explanatory* power, the given treatment of the logical constants should be justified by considerations *independent* of the linguistic data themselves.

To this end, the present paper develops an inquisitive semantics whose treatment of the logical constants is motivated exclusively by *algebraic* considerations. Just as classical propositions can be shown to form a complete Boolean algebra, and classical logic can be obtained by associating the basic operations in this algebra with the logical constants, we will show that inquisitive propositions form a complete Heyting algebra, and we will obtain an inquisitive semantics for the language of first-order logic by associating the basic operations in this algebra with the logical constants. Crucially, the justification for the resulting system does not rely in any way on intuitions concerning specific linguistic constructions.

Still, the results of our algebraic enterprise will be highly relevant for natural language semantics, since it is to be expected that natural languages generally have constructions that are used to perform the basic algebraic operations on propositions. For instance, it is natural to expect that languages generally have a word that is used (possibly among other things) to construct the *join* of two propositions, and another word to construct the *meet* of two propositions. In English, the words *or* and *and* are usually taken to fulfill this purpose. If this general expectation is borne out, then our algebraic semantics does not only provide a precise characterization of the meaning of these words; it also provides an explanation for the ubiquity of words with these particular meanings across languages. ^{1}

Our algebraic semantics will essentially coincide with the simplest and most well-understood existing implementation of inquisitive semantics, and it will also concur with the treatment of disjunction and existentials in alternative semantics. Thus, our algebraic considerations will indeed converge with the linguistic intuitions that previously played a central role in justifying the treatment of the logical constants, and the main result of our work will not be a wholly new semantics, but rather a more solid foundation for some of the existing proposals.

The paper is structured as follows. Section 2 briefly reviews the algebraic foundations of classical logic; Sect. 3 develops an algebraically motivated inquisitive semantics, discussing its logical properties and significance for natural language semantics; and Sect. 4 concludes.

## 2 Algebraic foundations of classical logic

To illustrate our approach, let us briefly review the algebraic foundations of classical logic. ^{2} Throughout the paper we will assume a set \(W\) of possible worlds as our logical space. In classical logic, the proposition expressed by a sentence \(\varphi \) is a set of possible worlds, embodying the informative content of the sentence. We will denote this set of worlds as \([\varphi ]_c\), where the subscript \(c\) stands for classical. In asserting \(\varphi \), a speaker is taken to provide the information that the actual world is located in \([\varphi ]_c\). Given this way of thinking about propositions, there is a natural *entailment order* between them: \(A\models _c B\) iff \(A\) is at least as informative as \(B\), i.e., iff \(A\subseteq B\).

This entailment order in turn gives rise to certain algebraic operations on propositions. For instance, for any set of propositions \(\Sigma \), there is a unique proposition that (i) entails all the propositions in \(\Sigma \), and (ii) is entailed by all other propositions that entail all propositions in \(\Sigma \). This proposition is called the *greatest lower bound* of \(\Sigma \) w.r.t. \(\models _c\), or in algebraic jargon, its *meet*. It amounts to \(\bigcap \Sigma \) (given the stipulation that \(\bigcap \emptyset = W\)). Similarly, every set of propositions \(\Sigma \) also has a unique *least upper bound* w.r.t. \(\models _c\), which is called its *join*, and amounts to \(\bigcup \Sigma \). The existence of meets and joins for arbitrary sets of classical propositions implies that the set of all classical propositions, \(\Pi _c\), together with the entailment order \(\models _c\), forms a *complete lattice*.

This lattice is *bounded*. That is, it has a bottom element, \(\bot :=\emptyset \), and a top element, \(\top := W\), such that for every proposition \(A\), we have that \(\bot \models _c A\) and \(A\models _c\top \). Moreover, for every two propositions \(A\) and \(B\), there is a unique weakest proposition \(C\) such that \(A\cap C\models _c B\). This proposition is called the *pseudo-complement of* \(A\) *relative to* \(B\). It is denoted as \(A{\,\Rightarrow \,}B\) and amounts to \((W-A)\cup B\). Intuitively, the pseudo-complement of \(A\) relative to \(B\) is the weakest proposition such that if we ‘add’ it to \(A\), we get a proposition that is at least as strong as \(B\). The existence of relative pseudo-complements implies that \(\langle \Pi _c,\models _c\rangle \) forms a *Heyting algebra*.

If \(A\) is an element of a Heyting algebra, it is customary to refer to \(A^*:= (A{\,\Rightarrow \,}\bot )\) simply as the *pseudo-complement* of \(A\) (rather than the pseudo-complement of \(A\) relative to \(\bot \)). In the case of \(\langle \Pi _c,\models _c\rangle \), \(A^*\) amounts to \(W-A\). By definition of \({\,\Rightarrow \,}\), we always have that \(A \cap A^* = \bot \). In the specific case of \(\langle \Pi _c,\models _c\rangle \), we also always have that \(A\cup A^*=\top \). This means that \(A^*\) is in fact the *Boolean complement* of \(A\), and that \(\langle \Pi _c,\models _c\rangle \) forms a *Boolean algebra*, a special kind of Heyting algebra.
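For concreteness, the classical algebra can be sketched in a few lines of Python. This is our own illustration, not part of the paper: propositions are modeled as frozensets of worlds over a small three-world logical space, and the function names are ours.

```python
# Classical propositions over a tiny logical space W, modeled as frozensets.
W = frozenset({1, 2, 3})

def entails_c(a, b):
    """A |=_c B iff A is a subset of B."""
    return a <= b

def meet_c(props):
    """Greatest lower bound: intersection (the meet of the empty set is W)."""
    result = W
    for p in props:
        result = result & p
    return result

def join_c(props):
    """Least upper bound: union."""
    result = frozenset()
    for p in props:
        result = result | p
    return result

def rpc_c(a, b):
    """Pseudo-complement of a relative to b: (W - a) ∪ b."""
    return (W - a) | b

def complement_c(a):
    """a* := a ⇒ ⊥, which classically is the Boolean complement W - a."""
    return rpc_c(a, frozenset())

# In the classical algebra, a* is a genuine Boolean complement:
a = frozenset({1})
assert a & complement_c(a) == frozenset()   # a ∩ a* = ⊥
assert a | complement_c(a) == W             # a ∪ a* = ⊤
```

The two assertions at the end check exactly the two conditions that make \(A^*\) a Boolean complement; the analogous check will fail in the inquisitive algebra below.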

Classical propositional logic is now obtained by associating these three operations–*meet*, *join*, and *(relative) pseudo-complementation*–with the logical constants:

- 1.
\([\lnot \varphi ] \qquad := [\varphi ]^*\)

- 2.
\([\varphi \wedge \psi ]\,\,:= [\varphi ]\cap [\psi ]\)

- 3.
\([\varphi \vee \psi ]\,\,:= [\varphi ]\cup [\psi ]\)

- 4.
\([\varphi \rightarrow \psi ]\! := [\varphi ]{\,\Rightarrow \,}[\psi ]\)

Thus, classical propositions ordered by entailment form a complete Boolean algebra with three basic operations–*meet*, *join*, and *relative pseudo-complementation*–and classical propositional logic is obtained by associating these basic semantic operations with the logical constants.

## 3 Algebraic foundations for inquisitive semantics

Exactly the same strategy can be applied in the inquisitive setting. Only now we will have a richer notion of propositions, and a different entailment order on them, sensitive to both informative and inquisitive content.

### 3.1 Propositions and entailment

Let us first determine how propositions and entailment should be defined precisely. We will start with the following notion of propositions; this will be refined below, but it forms a natural point of departure.

**Definition 1**

A set of possible worlds \(\alpha {\,\subseteq \,}W\) is called a possibility.

A proposition is a non-empty set of possibilities. (to be refined)

Propositions of this kind can be taken to embody informative and inquisitive content in the following way. First, in uttering a sentence that expresses a proposition \(A\), a speaker can be taken to provide the information that the actual world lies in at least one of the possibilities in \(A\), i.e. in \(\bigcup A\). In view of this, we will refer to \(\bigcup A\) as the *informative content* of \(A\), and denote it as \(\mathsf{info}(A)\).

**Definition 2**

(Informative content). \(\mathsf{info}(A) := \bigcup A\)

Second, someone who utters a sentence that expresses a proposition \(A\) can also be taken to *request* certain information from other conversational participants. Namely, she can be taken to request enough information to locate the actual world in a specific possibility in \(A\), rather than just in the union of all the possibilities that \(A\) consists of.

We will say that a piece of information \(\beta \), modeled as a set of possible worlds, *settles* a proposition \(A\) just in case it is contained in one of the possibilities \(\alpha \) that \(A\) consists of, which means that it locates the actual world inside that possibility \(\alpha \).

**Definition 3**

(Settling a proposition). A piece of information \(\beta \) settles a proposition \(A\) if and only if \(\beta {\,\subseteq \,}\alpha \) for some \(\alpha \in A\).

Notice that propositions are defined as *non-empty* sets of possibilities. This reflects the assumption that for any proposition, there is at least one piece of information that settles that proposition (although there is one proposition, namely \(\{\emptyset \}\), which can only be settled by providing inconsistent information).

Propositions can be ordered in terms of the information that they provide, but also in terms of the information that they request. We say that one proposition \(A\) is at least as informative as another proposition \(B\), \(A{\,\models _\mathsf{info}\,}B\), just in case \(\mathsf{info}(A)\subseteq \mathsf{info}(B)\), as in the classical setting. On the other hand, we say that one proposition \(A\) is at least as inquisitive as another proposition \(B\), \(A{\,\models _\mathsf{inq}\,}B\), iff \(A\) requests at least as much information as \(B\), i.e., iff every piece of information that settles \(A\) also settles \(B\). This means that every possibility in \(A\) must be contained in some possibility in \(B\). Thus, \(A{\,\models _\mathsf{inq}\,}B\) if and only if \(\forall \alpha \in A.~\exists \beta \in B.~ \alpha \subseteq \beta \). These two orders can be combined into one overall entailment order: \(A\models B\) iff both \(A{\,\models _\mathsf{info}\,}B\) and \(A{\,\models _\mathsf{inq}\,}B\).

**Definition 4**

\(A{\,\models _\mathsf{info}\,}B\) iff \(\mathsf{info}(A)\subseteq \mathsf{info}(B)\)

\(A{\,\models _\mathsf{inq}\,}B\) iff \(\forall \alpha \in A.~\exists \beta \in B.~ \alpha \subseteq \beta \)

\(A\models B\) iff \(A{\,\models _\mathsf{info}\,}B\) and \(A{\,\models _\mathsf{inq}\,}B\)

Notice that \(A{\,\models _\mathsf{inq}\,}B\) actually *implies* that \(A{\,\models _\mathsf{info}\,}B\). After all, if every possibility in \(A\) is contained in some possibility in \(B\), then \(\bigcup A\) must also be contained in \(\bigcup B\). Thus, the overall entailment order can be simplified as follows.

**Fact 1**

(Entailment simplified). \(A\models B\) iff \(\forall \alpha \in A.~\exists \beta \in B.~ \alpha \subseteq \beta \)

Having established this notion of entailment, we are ready to examine whether our notion of propositions is really appropriate for the purpose at hand. As mentioned in the introduction, we would like to have that any two non-identical propositions really differ in informative and/or inquisitive content. Or, phrased the other way around, any two propositions \(A\) and \(B\) that are just as informative and just as inquisitive should be identical. In more technical terms, we want our entailment order to be *anti-symmetric*. That is, whenever \(A\models B\) and \(B\models A\), it should be the case that \(A=B\). We will show that this is *not* the case.

To see this, consider two possibilities \(\alpha \) and \(\beta \) such that \(\beta \subset \alpha \), and let \(A := \{\alpha ,\beta \}\) and \(B := \{\alpha \}\). First notice that \(\mathsf{info}(A)\) and \(\mathsf{info}(B)\), i.e., the union of the possibilities in \(A\) and the union of the possibilities in \(B\), clearly coincide: both amount to \(\alpha \). Thus, \(A\) and \(B\) are just as informative. To see that \(A\) and \(B\) also request just as much information, consider a piece of information that settles \(A\). Such a piece of information must either provide the information that the actual world lies in \(\alpha \) or it must provide the information that the actual world lies in \(\beta \). Either way, since \(\beta \subset \alpha \), it locates the actual world inside \(\alpha \), which means that it also settles \(B\). And vice versa, any piece of information that settles \(B\) also settles \(A\). Thus, \(A\) and \(B\) are also just as inquisitive, even though \(A\ne B\).

This shows that, as long as we are interested in capturing only informative and inquisitive content, our notion of propositions as arbitrary sets of possibilities is not quite appropriate. Rather, we would like to have a more restricted notion, such that any two non-identical propositions really differ in informative and/or inquisitive content.^{3}
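The failure of anti-symmetry can be checked mechanically. The following Python sketch (ours, not the paper's; helper names are our own) implements Definitions 2–4 and tests the counterexample with \(\beta \subset \alpha \), \(A = \{\alpha ,\beta \}\), and \(B = \{\alpha \}\):

```python
# Possibilities are frozensets of worlds; propositions are sets of possibilities.
def info(A):
    """info(A) := ∪A, the informative content of A (Definition 2)."""
    return frozenset().union(*A)

def entails_info(A, B):                  # A |=_info B
    return info(A) <= info(B)

def entails_inq(A, B):                   # A |=_inq B
    return all(any(a <= b for b in B) for a in A)

def entails(A, B):                       # A |= B (by Fact 1, |=_inq suffices)
    return entails_info(A, B) and entails_inq(A, B)

# Anti-symmetry fails: take beta ⊂ alpha, A = {alpha, beta}, B = {alpha}.
alpha, beta = frozenset({1, 2}), frozenset({1})
A, B = {alpha, beta}, {alpha}
assert entails(A, B) and entails(B, A)   # mutual entailment ...
assert A != B                            # ... between distinct propositions
```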

To this end, we will define propositions as non-empty, *downward closed* sets of possibilities.

**Definition 5**

A set of possibilities \(A\) is downward closed if and only if for every \(\alpha \in A\) and every \(\beta {\,\subseteq \,}\alpha \), we also have that \(\beta \in A\).

Propositions are non-empty, downward closed sets of possibilities.

We will use \(\Pi \) to denote the set of all propositions. To see that downward closedness is a natural constraint on propositions in the present setting, consider the following. We are conceiving of propositions as sets of possibilities, and these possibilities determine what it takes to settle a given proposition. Thus far, we have been assuming the following relationship between the pieces of information that settle a proposition \(A\) and the possibilities that \(A\) consists of: a piece of information \(\beta \) settles \(A\) iff it is contained in some possibility \(\alpha \in A\). But we could just as well assume a more direct relationship between the possibilities in \(A\) and the pieces of information that settle \(A\). Namely, we could simply think of the possibilities in \(A\) as corresponding precisely to the pieces of information that settle \(A\). But if we conceive of the possibilities in a proposition in this way, we are immediately forced to define propositions as downward closed sets of possibilities. After all, if \(\alpha \in A\), then, given the assumed conception of possibilities, \(\alpha \) is a piece of information that settles \(A\); but then any stronger piece of information \(\beta \subset \alpha \) also settles \(A\), and this means, again given the assumed conception of possibilities, that any \(\beta \subset \alpha \) must also be in \(A\).

Given this more restricted notion of propositions as non-empty, downward closed sets of possibilities, the characterization of \(\models \) can be further simplified. We said above that \(A\models B\) iff every piece of information that settles \(A\) also settles \(B\). Given our new conception of propositions, this simply amounts to inclusion: \(A{\,\subseteq \,}B\).

**Fact 2**

(Entailment further simplified). \(A\models B\) iff \(A{\,\subseteq \,}B\)

From this characterization it immediately follows that \(\models \) forms a partial order over \(\Pi \). This implies in particular that \(\models \) is anti-symmetric, which means that every two non-identical propositions really differ in informative and/or inquisitive content, as desired.
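Downward closure and Fact 2 can be illustrated with a short Python sketch (again our own illustration, with our own helper names). Note that closing the two propositions from the anti-symmetry counterexample downward makes them identical, which is exactly how the problem disappears:

```python
from itertools import combinations

def subsets(possibility):
    """All subsets of a possibility, itself a frozenset of worlds."""
    elems = sorted(possibility)
    return [frozenset(c) for r in range(len(elems) + 1)
                         for c in combinations(elems, r)]

def downward_closure(A):
    """Smallest downward-closed set of possibilities containing A."""
    return frozenset(b for a in A for b in subsets(a))

def is_downward_closed(A):
    """Definition 5: A contains every subset of each of its possibilities."""
    return all(b in A for a in A for b in subsets(a))

def entails(A, B):
    """Fact 2: for downward-closed A and B, entailment is inclusion."""
    return A <= B

# The two propositions from the anti-symmetry problem collapse, as desired:
alpha, beta = frozenset({1, 2}), frozenset({1})
assert downward_closure({alpha, beta}) == downward_closure({alpha})
```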

### 3.2 Algebraic operations

The next step is to see what kind of algebraic operations \(\models \) gives rise to. It turns out that, just as in the classical setting, any set of propositions \(\Sigma \) has a unique greatest lower bound (meet) and a unique least upper bound (join) w.r.t. \(\models \).

**Fact 3**

(Meet). For any set of propositions \(\Sigma \), \(\bigcap \Sigma \) is the meet of \(\Sigma \) w.r.t. \(\models \) (assuming that \(\bigcap \emptyset = \wp (W)\)).

*Proof*

First, let us show that \(\bigcap \Sigma \) is a proposition. If \(\Sigma =\emptyset \) then \(\bigcap \Sigma = \wp (W)\), which is indeed a proposition. If \(\Sigma \ne \emptyset \) then \(\bigcap \Sigma \) must contain \(\emptyset \), since all elements of \(\Sigma \) are non-empty and downward closed, which means that they must contain \(\emptyset \). So \(\bigcap \Sigma \) is non-empty. To see that it is also downward closed, suppose that \(\alpha \in \bigcap \Sigma \). Then \(\alpha \) must be in every proposition in \(\Sigma \). But then every \(\beta {\,\subseteq \,}\alpha \) must also be included in every proposition in \(\Sigma \), and therefore in \(\bigcap \Sigma \). So \(\bigcap \Sigma \) is indeed downward closed. Next, note that \(\bigcap \Sigma \models A\) for any \(A\in \Sigma \), which means that \(\bigcap \Sigma \) is a lower bound of \(\Sigma \). What remains to be shown is that \(\bigcap \Sigma \) is the *greatest* lower bound of \(\Sigma \). That is, for every \(B\) that is a lower bound of \(\Sigma \), we must show that \(B\models \bigcap \Sigma \). To see this let \(B\) be a lower bound of \(\Sigma \), and let \(\beta \) be a possibility in \(B\). Then, since \(B\models A\) for any \(A\in \Sigma \), \(\beta \) must be included in any \(A\in \Sigma \). But then \(\beta \) must also be in \(\bigcap \Sigma \). Thus, \(B\models \bigcap \Sigma \), which is exactly what we set out to show. So \(\bigcap \Sigma \) is indeed the greatest lower bound of \(\Sigma \). \(\square \)

**Fact 4**

(Join). For any set of propositions \(\Sigma \), \(\bigcup \Sigma \) is the join of \(\Sigma \) w.r.t. \(\models \) (assuming that \(\bigcup \emptyset = \{\emptyset \}\)).

*Proof*

We omit the proof that \(\bigcup \Sigma \) is a proposition. For any \(A\in \Sigma \), \(A\models \bigcup \Sigma \), which means that \(\bigcup \Sigma \) is an upper bound of \(\Sigma \). What remains to be shown is that \(\bigcup \Sigma \) is the *least* upper bound of \(\Sigma \). That is, for every \(B\) that is an upper bound of \(\Sigma \), we must show that \(\bigcup \Sigma \models B\). To see this let \(B\) be an upper bound of \(\Sigma \), and \(\alpha \) a possibility in \(\bigcup \Sigma \). Then \(\alpha \) must be in some proposition \(A\in \Sigma \). But then, since \(A\models B\), \(\alpha \) must also be in \(B\). And this establishes that \(\bigcup \Sigma \models B\), which is what we set out to show. Thus, \(\bigcup \Sigma \) is indeed the least upper bound of \(\Sigma \). \(\square \)

The existence of meets and joins for arbitrary sets of propositions implies that \(\langle \Pi ,\models \rangle \) forms a complete lattice. And again, this lattice is bounded, i.e., there is a bottom element, \(\bot :=\{\emptyset \}\), and a top element, \(\top :=\wp (W)\). Finally, as in the classical setting, for every two propositions \(A\) and \(B\), there is a unique weakest proposition \(C\) such that \(A\cap C\models B\). Recall that this proposition, which is called the pseudo-complement of \(A\) relative to \(B\), can be characterized intuitively as the weakest proposition such that if we add it to \(A\), we get a proposition that is at least as strong as \(B\). The only thing that has changed with respect to the classical setting is that strength is now measured both in terms of informative content and in terms of inquisitive content.

**Definition 6**

(Relative pseudo-complementation). \(A{\,\Rightarrow \,}B := \{\alpha \mid \text{for every } \beta {\,\subseteq \,}\alpha \text{, if } \beta \in A \text{ then } \beta \in B\}\)

**Fact 5**

(Relative pseudo-complement). For any two propositions \(A\) and \(B\), \(A{\,\Rightarrow \,}B\) is the pseudo-complement of \(A\) relative to \(B\).

*Proof*

We omit the proof that \(A{\,\Rightarrow \,}B\) is a proposition. To see that \(A\cap (A{\,\Rightarrow \,}B)\models B\), let \(\alpha \) be a possibility in \(A\cap (A{\,\Rightarrow \,}B)\). Then \(\alpha \) is both in \(A\) and in \(A{\,\Rightarrow \,}B\). Since \(\alpha \in A{\,\Rightarrow \,}B\), it must be the case that *if* \(\alpha \in A\) *then also* \(\alpha \in B\). But we know that \(\alpha \in A\). So \(\alpha \) must also be in \(B\). This establishes that \(A\cap (A{\,\Rightarrow \,}B)\models B\).

It remains to be shown that \(A\!{\,\Rightarrow \,}\! B\) is the *weakest* proposition \(C\) such that \(A\cap C\!\models \! B\). In other words, we must show that for any proposition \(C\) such that \(A\cap C\models B\), it holds that \(C\models (A{\,\Rightarrow \,}B)\). To see this, let \(C\) be a proposition such that \(A\cap C\models B\) and let \(\alpha \) be a possibility in \(C\). Towards a contradiction, suppose that \(\alpha \not \in (A{\,\Rightarrow \,}B)\). Then there must be some \(\beta {\,\subseteq \,}\alpha \) such that \(\beta \in A\) and \(\beta \not \in B\). Since \(C\) is downward closed, \(\beta \in C\). But that means that \(\beta \) is in \(A\cap C\), while \(\beta \not \in B\). Thus \(A\cap C\not \models B\), contrary to what we assumed. So \(A{\,\Rightarrow \,}B\) is indeed the pseudo-complement of \(A\) relative to \(B\). \(\square \)

The existence of relative pseudo-complements implies that \(\langle \Pi ,\models \rangle \) forms a Heyting algebra. Recall that in a Heyting algebra, \(A^*:= (A{\,\Rightarrow \,}\bot )\) is referred to as the *pseudo-complement* of \(A\). In the specific case of \(\langle \Pi ,\models \rangle \), pseudo-complements can be characterized as follows.

**Fact 6**

(Pseudo-complement). \(A^* = \{\alpha \mid \alpha \cap \bigcup A = \emptyset \}\)

Thus, \(A^*\) consists of all the possibilities that are disjoint from \(\bigcup A\). This means that a piece of information settles \(A^*\) just in case it locates the actual world outside \(\bigcup A\).

So far, then, everything works out just as in the classical setting. However, unlike in the classical setting, the pseudo-complement of a proposition is not always its *Boolean* complement. In fact, most propositions in \(\langle \Pi ,\models \rangle \) do not have a Boolean complement at all. To see this, suppose that \(A\) and \(B\) are Boolean complements. This means that (i) \(A\cap B = \bot \) and (ii) \(A\cup B = \top \). Condition (ii) can only be fulfilled if \(W\) is contained in either \(A\) or \(B\). Suppose \(W\in A\). Then, since \(A\) is downward closed, \(A=\wp (W)=\top \). But then, in order to satisfy condition (i), we must have that \(B = \{\emptyset \} = \bot \). So the only two elements of our algebra that have a Boolean complement are \(\top \) and \(\bot \). This implies that \(\langle \Pi ,\models \rangle \) does not form a Boolean algebra.

Thus, starting with a new notion of propositions and an entailment order on these propositions that takes both informative and inquisitive content into account, we have established an algebraic structure with three basic operations, *meet*, *join*, and *relative pseudo-complementation*. The only difference with the algebraic structure obtained in the classical setting is that, apart from the extremal elements of the algebra, propositions do not have Boolean complements. However, as in the classical setting, every proposition does have a pseudo-complement.
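A small Python sketch (our own, with our own helper names) makes the contrast with the classical algebra concrete: relative pseudo-complements exist, but the pseudo-complement of a proposition generally fails to be a Boolean complement.

```python
from itertools import combinations

W = frozenset({1, 2})
possibilities = [frozenset(c) for r in range(len(W) + 1)
                              for c in combinations(sorted(W), r)]

def subsets(a):
    return [frozenset(c) for r in range(len(a) + 1)
                         for c in combinations(sorted(a), r)]

def rpc(A, B):
    # A ⇒ B := {alpha | every beta ⊆ alpha with beta in A is also in B}
    return frozenset(al for al in possibilities
                     if all(b in B for b in subsets(al) if b in A))

bottom = frozenset({frozenset()})        # ⊥ = {∅}
top = frozenset(possibilities)           # ⊤ = ℘(W)

def pseudo_complement(A):                # A* := A ⇒ ⊥
    return rpc(A, bottom)

# A* collects exactly the possibilities disjoint from ∪A ...
A = frozenset({frozenset({1}), frozenset()})
assert pseudo_complement(A) == frozenset({frozenset({2}), frozenset()})
# ... so A ∪ A* falls short of ⊤: A* is not a Boolean complement.
assert A | pseudo_complement(A) != top
```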

### 3.3 Connectives

Now suppose that we have a language \(L\), whose sentences express the kind of propositions considered here. Then it is natural to assume that this language has certain sentential connectives which semantically behave like *meet*, *join*, and *(relative) pseudo-complement* operators. Below we define a semantics for the language of propositional logic, \(L_P\), that has exactly these characteristics: conjunction behaves semantically as a *meet* operator, disjunction behaves as a *join* operator, negation as a *pseudo-complement* operator, and implication as a *relative pseudo-complement* operator. The semantics assumes a valuation function which assigns a truth-value to every atomic sentence in every world. For any atomic sentence \(p\), the set of worlds where \(p\) is true is denoted by \(|p|\).

**Definition 7**

- 1.
\([p] := \wp (\, |p| \,)\)

- 2.
\([\lnot \varphi ] := [\varphi ]^*\)

- 3.
\([\varphi \wedge \psi ] := [\varphi ]\cap [\psi ]\)

- 4.
\([\varphi \vee \psi ] := [\varphi ]\cup [\psi ]\)

- 5.
\([\varphi \rightarrow \psi ] := [\varphi ]{\,\Rightarrow \,}[\psi ]\)

The clauses for the logical constants are completely determined by our algebraic considerations. Notice, however, that these considerations do not dictate a particular treatment of atomic sentences. We assume that in uttering an atomic sentence \(p\), a speaker provides the information that \(p\) is true, and does not request any further information from other participants. This assumption is directly reflected by the clause for atomic sentences given above, which defines \([p]\) as the set of all possibilities containing only worlds where \(p\) is true.
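Definition 7 can be implemented directly. The sketch below (ours, not the paper's) uses a four-world space for two proposition letters, with worlds encoded as pairs of truth values (an encoding of our own choosing), and illustrates why a disjunction is inquisitive:

```python
from itertools import combinations

# Worlds are (p-value, q-value) pairs; four worlds for two atoms.
worlds = [(p_val, q_val) for p_val in (True, False) for q_val in (True, False)]

def powerset(ws):
    return [frozenset(c) for r in range(len(ws) + 1)
                         for c in combinations(ws, r)]

possibilities = powerset(worlds)
bottom = frozenset({frozenset()})

def atom(index):
    """[p] := ℘(|p|), all possibilities containing only worlds where p is true."""
    return frozenset(powerset([w for w in worlds if w[index]]))

def meet(A, B):                          # [φ ∧ ψ] := [φ] ∩ [ψ]
    return A & B

def join(A, B):                          # [φ ∨ ψ] := [φ] ∪ [ψ]
    return A | B

def rpc(A, B):                           # [φ → ψ] := [φ] ⇒ [ψ]
    return frozenset(al for al in possibilities
                     if all(b in B for b in powerset(sorted(al)) if b in A))

def neg(A):                              # [¬φ] := [φ]*
    return rpc(A, bottom)

p, q = atom(0), atom(1)
# p ∨ q is inquisitive: its informative content |p| ∪ |q| is not itself one
# of its possibilities, so settling it means locating the world in |p| or |q|.
assert frozenset(w for w in worlds if w[0] or w[1]) not in join(p, q)
```

Note that the atomic clause already yields downward-closed propositions, so all five connective clauses stay inside \(\Pi \).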

### 3.4 Quantifiers

The approach taken here can straightforwardly be extended to obtain an inquisitive semantics for the language of first-order logic, \(L_{FO}\). The proposition expressed by a universally quantified formula \(\forall x.\varphi \), relative to an assignment \(g\), can be defined as the *meet* of all the propositions that \(\varphi \) expresses relative to assignment functions that differ from \(g\) at most in the value that they assign to \(x\). And similarly, the proposition expressed by an existentially quantified formula \(\exists x.\varphi \), relative to \(g\), can be defined as the *join* of all the propositions that \(\varphi \) expresses relative to assignment functions that differ from \(g\) at most in the value that they assign to \(x\).

As usual, the semantics for \(L_{FO}\) assumes a domain of individuals \(D\) and a world-dependent interpretation function \(I_w\) that maps every individual constant \(c\) to some individual in \(D\) and every \(n\)-place predicate symbol \(R\) to a set of \(n\)-tuples of individuals in \(D\). Formulas are interpreted relative to an assignment function \(g\), which maps every variable \(x\) to some individual in \(D\). For every individual constant \(c\), \([c]_{w,g} = I_w(c)\) and for every variable \(x\), \([x]_{w,g} = g(x)\). An atomic sentence \(Rt_1\ldots t_n\) is true in a world \(w\) relative to an assignment \(g\) iff \(\langle [t_1]_{w,g},\ldots ,[t_n]_{w,g}\rangle \in I_w(R)\). Given an assignment \(g\), the set of worlds \(w\) such that \(Rt_1\ldots t_n\) is true in \(w\) relative to \(g\) is denoted by \(|Rt_1\ldots t_n|_g\).

**Definition 8**

- 1.
\([Rt_1\ldots t_n]_g := \wp (~ |Rt_1\ldots t_n|_g ~)\)

- 2.
\([\lnot \varphi ]_g := [\varphi ]_g^*\)

- 3.
\([\varphi \wedge \psi ]_g := [\varphi ]_g\cap [\psi ]_g\)

- 4.
\([\varphi \vee \psi ]_g := [\varphi ]_g\cup [\psi ]_g\)

- 5.
\([\varphi \rightarrow \psi ]_g := [\varphi ]_g{\,\Rightarrow \,}[\psi ]_g\)

- 6.
\([\forall x.\varphi ]_g := \bigcap _{d\in D}~ [\varphi ]_{g[x/d]}\)

- 7.
\([\exists x.\varphi ]_g := \bigcup _{d\in D}~ [\varphi ]_{g[x/d]}\)
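The quantifier clauses can be illustrated on a finite toy model. The sketch below is our own illustration (the two-element domain and single unary predicate \(P\) are assumptions made purely for the example); worlds are identified with possible extensions of \(P\):

```python
from itertools import combinations

D = ['a', 'b']  # a toy two-element domain

# Worlds: the four possible extensions of a single unary predicate P.
W = [frozenset(s) for r in range(len(D) + 1) for s in combinations(D, r)]

def powerset(worlds):
    ws = list(worlds)
    return {frozenset(c) for r in range(len(ws) + 1) for c in combinations(ws, r)}

def P(d):
    """[P(d)]: the powerset of the worlds in which d is in the extension of P."""
    return powerset(w for w in W if d in w)

def big_meet(props):
    """Clause 6: [forall x.phi] as the intersection of all instances."""
    out = powerset(W)
    for A in props:
        out &= A
    return out

def big_join(props):
    """Clause 7: [exists x.phi] as the union of all instances."""
    out = set()
    for A in props:
        out |= A
    return out

forall_P = big_meet([P(d) for d in D])  # [forall x.P(x)]
exists_P = big_join([P(d) for d in D])  # [exists x.P(x)]
```

Here \(\forall x.P(x)\) denotes the powerset of the single world where both individuals satisfy \(P\), a plain assertion, while \(\exists x.P(x)\) comes out inquisitive: its informative content is not itself among its possibilities.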

Given its algebraic characterization, the status of this system among logical frameworks for the semantic treatment of informative and inquisitive content is precisely the same as that of classical first-order logic among logical frameworks for the semantic treatment of purely informative content. In this sense, the system may be regarded as the most basic inquisitive semantics. Just like classical logic in the purely informative setting, the system provides a suitable framework for the formulation and comparison of different theories of inquisitive constructions in natural language, and a common starting point for the development of even richer logical frameworks dealing with aspects of meaning that go beyond purely informative and inquisitive content (e.g. presuppositional aspects of meaning). We will therefore refer to the system as \(\mathsf {Inq}_\mathsf{B}\), where B stands for basic.

In the remainder of the paper we will relate \(\mathsf {Inq}_\mathsf{B}\) to earlier work on inquisitive semantics, identify its basic logical properties, and discuss its significance for natural language semantics.

### 3.5 Propositions and support

In previous work on inquisitive semantics, a number of different systems have been considered. We will focus here on the simplest and most well-understood system, where the proposition expressed by a sentence is defined in terms of the notion of *support* (just as in the classical setting, the proposition expressed by a sentence is usually defined in terms of *truth*). Support is a relation between sentences and information states (relativized to an assignment function in the first-order setting). Information states are modeled as sets of possible worlds (valuation functions in the propositional setting; first-order models in the first-order setting). Support for \(L_{FO}\) is defined recursively as follows. ^{4}

**Definition 9**

- 1.
\(s\models _{g}Rt_1\ldots t_n\) iff \(s{\,\subseteq \,}|Rt_1\ldots t_n|_g\)

- 2.
\(s\models _{g}\lnot \varphi \) iff \(\forall t\subseteq s:\) if \(t\ne \emptyset \) then \(t\not \models _{g}\varphi \)

- 3.
\(s\models _{g}\varphi \wedge \psi \) iff \(s\models _{g}\varphi \) and \(s\models _{g}\psi \)

- 4.
\(s\models _{g}\varphi \vee \psi \) iff \(s\models _{g}\varphi \) or \(s\models _{g}\psi \)

- 5.
\(s\models _{g}\varphi \rightarrow \psi \) iff \(\forall t\subseteq s:\) if \(t\models _{g}\varphi \) then \(t\models _{g}\psi \)

- 6.
\(s\models _{g}\forall x.\varphi \) iff \(s{\,\models \,}_{g[x/d]}\varphi \) for every \(d\in D\)

- 7.
\(s\models _{g}\exists x.\varphi \) iff \(s{\,\models \,}_{g[x/d]}\varphi \) for some \(d\in D\)

Now, it turns out that there is a very close connection between the information states that support a formula \(\varphi \), relative to an assignment \(g\), and the proposition \([\varphi ]_g\) that \(\varphi \) expresses relative to \(g\) in \(\mathsf {Inq}_\mathsf{B}\). Namely, the proposition expressed by \(\varphi \) relative to \(g\) in \(\mathsf {Inq}_\mathsf{B}\) is precisely the set of all states that support \(\varphi \) relative to \(g\).
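For the propositional fragment, this coincidence can be checked mechanically. The following sketch is our own (formulas are encoded as nested tuples, an ad hoc representation); it implements the support clauses and recovers the proposition as the set of all supporting states:

```python
from itertools import combinations

# Toy logical space over atoms p and q; worlds are the atoms true in them.
W = [frozenset(s) for s in [(), ('p',), ('q',), ('p', 'q')]]

def subsets(state):
    ws = list(state)
    return [frozenset(c) for r in range(len(ws) + 1) for c in combinations(ws, r)]

STATES = subsets(W)  # all information states over the toy space

def supports(s, phi):
    """The support clauses of Definition 9, propositional fragment.
    Formulas are tuples, e.g. ('or', ('atom', 'p'), ('atom', 'q'))."""
    op = phi[0]
    if op == 'atom':
        return all(phi[1] in w for w in s)      # s is a subset of |p|
    if op == 'not':
        return all(not supports(t, phi[1]) for t in subsets(s) if t)
    if op == 'and':
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == 'or':
        return supports(s, phi[1]) or supports(s, phi[2])
    if op == 'imp':
        return all(supports(t, phi[2])
                   for t in subsets(s) if supports(t, phi[1]))
    raise ValueError(op)

def proposition(phi):
    """[phi] as the set of all supporting states."""
    return {s for s in STATES if supports(s, phi)}
```

On any such toy space, the resulting propositions come out non-empty and downward closed, as the algebraic framework requires.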

**Fact 7**

For any \(\varphi \in L_{FO}\) and any assignment \(g\): \([\varphi ]_g = \{s \mid s\models _{g}\varphi \}\).

This result tells us that \(\mathsf {Inq}_\mathsf{B}\) essentially coincides with the existing support-based system. It must be noted that in most presentations of the support-based system, the proposition expressed by a sentence is defined as the set of *maximal* states supporting the sentence, rather than the set of *all* supporting states. ^{5} However, central logical notions like entailment and equivalence are directly defined in terms of support, which means that the logic that the two systems give rise to is exactly the same. Thus, all the logical results obtained for the support-based system immediately carry over to our algebraic system. In particular, we can import the following completeness result (Ciardelli 2009; Ciardelli and Roelofsen 2009, 2011). ^{6}

**Theorem 1**

(Completeness). \(\mathsf {Inq}_\mathsf{B}\) is completely axiomatized by a proof system with *modus ponens* as the only inference rule, and the following axioms:

All axioms for intuitionistic logic.

Kreisel-Putnam: \((\lnot \varphi \rightarrow \psi \vee \chi ) \longrightarrow (\lnot \varphi \rightarrow \psi )\vee (\lnot \varphi \rightarrow \chi )\)

Atomic double negation: \(\lnot \lnot p\rightarrow p\) (only for atomic \(p\))

Note that \(\mathsf {Inq}_\mathsf{B}\) is stronger than intuitionistic logic. Namely, besides the axioms of intuitionistic logic, which are valid on any Heyting algebra (Troelstra and van Dalen 1988), it also validates the Kreisel-Putnam axiom and the law of double negation for atomic sentences. The latter is evidently connected to the treatment of atomic sentences in \(\mathsf {Inq}_\mathsf{B}\). Recall that our algebraic considerations did not dictate a particular treatment of atomic sentences. We defined the proposition expressed by an atomic sentence \(p\) as the set of all possibilities consisting of worlds where \(p\) is true, reflecting the assumption that in uttering \(p\), a speaker provides the information that \(p\) is true, and does not request any further information from other participants. This particular treatment of atomic sentences results in the validity of \(\lnot \lnot p\rightarrow p\).

The validity of the Kreisel-Putnam axiom is connected to the fact that the space of propositions in \(\mathsf {Inq}_\mathsf{B}\) actually forms a specific kind of Heyting algebra. This additional structure is not directly relevant for the purposes of this paper, but clearly plays a crucial role in comparing the logic of \(\mathsf {Inq}_\mathsf{B}\) with intuitionistic logic. Ciardelli (2009) and Ciardelli and Roelofsen (2009, 2011) pursue such a comparison in more detail.

In the next two subsections we will introduce some additional notions, and highlight some specific features of \(\mathsf {Inq}_\mathsf{B}\). In doing so, we will mostly restrict our attention to the propositional setting. Everything we will say also applies to the first-order system, but formulating things in the first-order setting is a bit more cumbersome, because everything needs to be relativized to assignment functions.

### 3.6 Informativeness and inquisitiveness

Recall that we defined the informative content of a proposition \(A\), \(\mathsf{info}(A)\), as the union of all the possibilities in \(A\). Derivatively, we will say that the informative content of a *sentence* \(\varphi \), \(\mathsf{info}(\varphi )\), is the informative content of the proposition that it expresses, i.e., \(\bigcup [\varphi ]\).

It can be shown that the informative content of a sentence \(\varphi \) in \(\mathsf {Inq}_\mathsf{B}\) always coincides with the proposition \([\varphi ]_c\) expressed by that sentence in classical logic (see, e.g., Ciardelli and Roelofsen 2011, p. 62). This means that \(\mathsf {Inq}_\mathsf{B}\) forms a conservative extension of classical logic, in the sense that it leaves the treatment of informative content untouched.

**Fact 8**

(The treatment of informative content in \(\mathsf {Inq}_\mathsf{B}\) is classical). For any sentence \(\varphi \): \(\mathsf{info}(\varphi ) = [\varphi ]_c\).

We will say that a sentence is *informative* just in case its informative content does not cover the entire logical space, i.e., iff \(\mathsf{info}(\varphi )\ne W\). On the other hand, we will say that \(\varphi \) is *inquisitive* just in case accepting \(\mathsf{info}(\varphi )\) is not sufficient to settle \([\varphi ]\), i.e., iff \(\mathsf{info}(\varphi )\not \in [\varphi ]\). In uttering an inquisitive sentence, a speaker does not just ask other participants to accept the information that she herself provides in uttering that sentence, but also to supply additional information.

**Definition 10**

\(\varphi \) is informative iff \(\mathsf{info}(\varphi )\ne W\)

\(\varphi \) is inquisitive iff \(\mathsf{info}(\varphi )\not \in [\varphi ]\)

In terms of these notions of informativeness and inquisitiveness, we define questions, assertions, hybrids, and tautologies as follows.

**Definition 11**

\(\varphi \) is a question iff it is non-informative

\(\varphi \) is an assertion iff it is non-inquisitive

\(\varphi \) is hybrid iff it is both informative and inquisitive

\(\varphi \) is a tautology iff it is neither informative nor inquisitive

Recall that in the classical setting, a sentence is a tautology just in case it is non-informative. In \(\mathsf {Inq}_\mathsf{B}\), sentences can be meaningful by being informative, but also by being inquisitive. Thus, it is natural that in order to count as a tautology in \(\mathsf {Inq}_\mathsf{B}\), a sentence has to be neither informative nor inquisitive.

Notice that a question is tautological just in case it is non-inquisitive, and an assertion is tautological just in case it is non-informative. Thus, sentences that are neither informative nor inquisitive count both as tautological assertions and as tautological questions.

It can be shown that a sentence is tautological just in case it expresses the proposition \(\wp (W)\), which is the top element of our algebra.

**Fact 9**

\(\varphi \) is a tautology iff \([\varphi ] = \top = \wp (W)\)
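These definitions are easy to test on a toy model. The sketch below (the function names are our own) computes informative content and classifies propositions according to Definition 11:

```python
from itertools import combinations

# Toy logical space over atoms p and q.
W = [frozenset(s) for s in [(), ('p',), ('q',), ('p', 'q')]]
TOP_INFO = frozenset(W)  # the entire logical space

def powerset(worlds):
    ws = list(worlds)
    return {frozenset(c) for r in range(len(ws) + 1) for c in combinations(ws, r)}

def atom(a):
    return powerset(w for w in W if a in w)

def neg(A):
    """A*: possibilities disjoint from the union of A."""
    info_A = frozenset().union(*A)
    return powerset(w for w in W if w not in info_A)

def info(A):
    """Informative content: the union of all possibilities in A."""
    return frozenset().union(*A)

def informative(A):
    return info(A) != TOP_INFO

def inquisitive(A):
    return info(A) not in A

def classify(A):
    """Definition 11: questions, assertions, hybrids, tautologies."""
    if informative(A) and inquisitive(A):
        return 'hybrid'
    if informative(A):
        return 'assertion'
    if inquisitive(A):
        return 'question'
    return 'tautology'
```

On this model, \(p\) is classified as an assertion, \(p\vee \lnot p\) as a question, \(p\vee q\) as a hybrid, and \(\lnot \lnot (p\vee \lnot p)\) as a tautology expressing \(\wp (W)\), as Fact 9 predicts.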

### 3.7 Disjunction, existentials, and inquisitiveness

Consider the proposition expressed by a disjunction of two atomic sentences, \(p\vee q\). There are two *maximal* possibilities in \([p\vee q]\): the possibility that consists of all worlds where \(p\) is true, and the possibility that consists of all worlds where \(q\) is true. Since \(\mathsf{info}(p\vee q)\) does not cover the entire logical space, \(p\vee q\) is informative; and since \(\mathsf{info}(p\vee q)\not \in [p\vee q]\), it is also inquisitive. So \(p\vee q\) is an example of a hybrid sentence.

This example shows that disjunction is a source of inquisitiveness. It turns two atomic, non-inquisitive sentences into an inquisitive sentence. In the first-order setting, existential quantification behaves in a similar way and is also a source of inquisitiveness. It can in fact be shown that disjunction and existential quantification are the *only* sources of inquisitiveness in \(L_{FO}\) (see, e.g., Ciardelli and Roelofsen 2011, p. 62).

**Fact 10**

(Disjunction, existentials, and inquisitiveness). If a sentence in \(L_{FO}\) does not contain disjunction or existential quantification then it is not inquisitive. ^{7}

As mentioned in the introduction, a treatment of disjunction and existentials as introducing sets of possibilities has not only been developed in inquisitive semantics but also in *alternative semantics* (Kratzer and Shimoyama 2002; Simons 2005a, b; Alonso-Ovalle 2006, 2008, 2009; Aloni 2007a, b; Menéndez-Benito 2005, 2010, among others). This treatment has been motivated by a number of empirical phenomena, including free choice inferences, exclusivity implicatures, and conditionals with disjunctive antecedents. The proposed analysis of disjunction and indefinites led to new accounts of these phenomena which improved considerably on earlier analyses. However, as mentioned in the introduction as well, no motivation has so far been provided for this alternative treatment of disjunction and existentials *independently* of the linguistic phenomena at hand. Moreover, the treatment of disjunction and existentials in alternative semantics has been presented as a real *alternative* to the classical treatment of these logical constants as *join* operators. It seems, then, that anyone adopting the proposed alternative treatment of disjunction and existentials is forced to give up the classical treatment of these operators. One particular consequence of taking such a step is that the duality between disjunction and conjunction, and the corresponding duality between existential and universal quantification, is lost.

The algebraic inquisitive semantics developed in the present paper sheds new light on these issues. First, it shows that, once inquisitive content is taken into consideration besides informative content, general algebraic considerations lead essentially to the treatment of disjunction and existentials that was proposed in alternative semantics, thus providing exactly the independent motivation that has so far been missing. Moreover, it shows that the proposed ‘alternative’ treatment of disjunction and existentials is actually a natural generalization of the classical treatment: disjunction and existentials can still be taken to behave semantically as *join* operators, only now the propositions that they apply to are more fine-grained in order to capture both informative and inquisitive content. And once the algebraic underpinning is regained, the duality between disjunction and conjunction, and the corresponding duality between existential and universal quantification, are restored as well. So we can have our cake and eat it: we can maintain the idea that disjunction and existentials behave as *join* operators, and still treat them as introducing sets of alternatives. ^{8}

### 3.8 Projection operators

Given this picture, it is natural to think of *projection operators* that map any sentence onto the axes of the space. In particular, we may consider a *non-inquisitive projection operator* \(!\) that maps any sentence \(\varphi \) to an assertion \(!\varphi \) that is non-inquisitive but otherwise as similar as possible to \(\varphi \), and a *non-informative projection operator* \(?\) that maps every \(\varphi \) to a question \(?\varphi \) that is non-informative but otherwise as similar as possible to \(\varphi \). More precisely, we require that for every \(\varphi \):

- 1.
\(!\varphi \) is non-inquisitive;

- 2.
\(\mathsf{info}(!\varphi )=\mathsf{info}(\varphi )\), i.e., \(!\varphi \) preserves the informative content of \(\varphi \). ^{9}

**Theorem 2**

For every \(\varphi \), the operator defined by \([!\varphi ] := \wp (\mathsf{info}(\varphi ))\) satisfies these requirements, and it is the only operator that does so.

*Proof*

First, we show that \(!\), as defined here, satisfies the requirements. Notice that \(\mathsf{info}(!\varphi ) = \bigcup [!\varphi ] = \mathsf{info}(\varphi )\). So the second requirement is fulfilled. And since \(\mathsf{info}(\varphi )\in [!\varphi ]\), the first requirement is fulfilled as well.

Now let us show that any operator that meets the given requirements must behave exactly as \(!\) does. Let \(\nabla \) be an operator that meets the given requirements. Then, for every \(\varphi \), \(\nabla \varphi \) must be non-inquisitive. That is, \([\nabla \varphi ] = \wp (\mathsf{info}(\nabla \varphi ))\). But we must also have that \(\mathsf{info}(\nabla \varphi )=\mathsf{info}(\varphi )\), which means that \([\nabla \varphi ] = \wp (\mathsf{info}(\varphi )) = [!\varphi ]\). So \(\nabla \) must indeed behave exactly as \(!\) does. \(\square \)

Now let us consider \(?\), the non-informative projection operator. Clearly, we always want \(?\varphi \) to be non-informative. But what else do we want? We cannot demand that \(?\varphi \) is always just as inquisitive as \(\varphi \) itself, i.e., that \([?\varphi ]\) and \([\varphi ]\) are always settled by exactly the same pieces of information. After all, if we enforced this requirement, \(?\varphi \) would simply have to be equivalent to \(\varphi \). There is, however, a natural way to weaken this requirement. In order to do so, we should not only consider the pieces of information that settle \([\varphi ]\), but rather more generally the pieces of information that *decide* on \([\varphi ]\).

**Definition 12**

\(\beta \) contradicts \([\varphi ]\) iff \(\beta \cap \bigcup [\varphi ]=\emptyset \)

\(\beta \) decides on \([\varphi ]\) iff it settles \([\varphi ]\) or contradicts \([\varphi ]\)

\(\mathsf{D}(\varphi )\) denotes the set of all pieces of information that decide on \([\varphi ]\)

We require that for every \(\varphi \):

- 1.
\(?\varphi \) is non-informative

- 2.
\(\mathsf{D}(?\varphi ) = \mathsf{D}(\varphi )\)

**Theorem 3**

For every \(\varphi \), the operator defined by \([?\varphi ] := \mathsf{D}(\varphi )\) satisfies these requirements, and it is the only operator that does so.

*Proof*

First let us check that, given this definition, \(?\) satisfies the given requirements. First, we always have that \(\bigcup [?\varphi ] = W\), which means that \(?\varphi \) is never informative. Moreover, if \(\beta \) is a piece of information that decides on \([\varphi ]\) then it clearly settles, and therefore decides on \([?\varphi ]\). Vice versa, if \(\beta \) decides on \([?\varphi ]\) then, since there are no possibilities that are disjoint from \(\bigcup [?\varphi ]\), \(\beta \) must actually settle \([?\varphi ]\) and therefore be included in \([?\varphi ]\). And this means, given how \([?\varphi ]\) is defined, that \(\beta \) must decide on \([\varphi ]\). So \(?\) indeed meets the given requirements.

Now let us show that any operator that satisfies the given requirements must behave exactly as \(?\) does. Let \(\Delta \) be an operator that satisfies the requirements. Then, for every \(\varphi \), \(\Delta \varphi \) must be non-informative, which means that \(\mathsf{info}(\Delta \varphi ) = W\). Moreover, we must have that \(\mathsf{D}(\Delta \varphi )=\mathsf{D}(\varphi )\). Given that \(\mathsf{info}(\Delta \varphi ) = W\), there cannot be any possibilities that are disjoint from \(\bigcup [\Delta \varphi ]\). Thus, \(\mathsf{D}(\Delta \varphi )\) amounts to \([\Delta \varphi ]\). But then \([\Delta \varphi ]\) must be identical to \(\mathsf{D}(\varphi )\), which is \([?\varphi ]\). So \(\Delta \) must indeed behave exactly as \(?\) does. \(\square \)

Now, if \([!\varphi ]\) is defined as \(\wp (\mathsf{info}(\varphi ))\), and \([?\varphi ]\) as \(\mathsf{D}(\varphi )\), then the semantic behavior of these operators can actually be characterized in terms of our basic algebraic operations.

**Fact 11**

\([!\varphi ] = ([\varphi ]^*)^*\)

\([?\varphi ] = [\varphi ]\cup [\varphi ]^*\)

This also means that the projection operators can actually be expressed in terms of the basic connectives in our logical language. ^{10}

**Fact 12**

\(!\varphi \,\equiv \lnot \lnot \varphi \)

\(?\varphi \equiv \varphi \vee \lnot \varphi \)

Thus, rather than adding \(!\) and \(?\) as primitive logical constants to our language, we can simply introduce \(!\varphi \) as an abbreviation of \(\lnot \lnot \varphi \) and \(?\varphi \) as an abbreviation of \(\varphi \vee \lnot \varphi \). The logic that the system gives rise to is then fully determined by the behavior of our basic connectives, and in proving things about the system, we never need to consider \(!\) and \(?\) explicitly.

This is in fact exactly how \(!\varphi \) and \(?\varphi \) were defined in (Groenendijk and Roelofsen 2009; Ciardelli 2009; Ciardelli and Roelofsen 2011), i.e., as abbreviations of \(\lnot \lnot \varphi \) and \(\varphi \vee \lnot \varphi \). So again, our considerations in this section have not really led to a new treatment of projection operators, but rather to a more solid foundation for the existing treatment. ^{11}

Having established the connection between our characterization of the projection operators and the way they were defined in earlier work, we can immediately import a number of results. We mention here only the two most significant ones. First, there is a close correspondence between the projection operators and the semantic categories of questions and assertions.

**Fact 13**

\(\varphi \) is an assertion iff \(\varphi \equiv {!\varphi }\)

\(\varphi \) is a question iff \(\varphi \equiv {?\varphi }\)

Second, a sentence \(\varphi \) is always equivalent to the conjunction of its two projections, \(?\varphi \) and \(!\varphi \).

**Fact 14**

(Division). \(\varphi \equiv {?\varphi }\wedge {!\varphi }\)
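Facts 12–14 can likewise be checked on a toy model. In the sketch below (the names are our own; `bang` and `query` stand for \(!\) and \(?\)), the projections are defined via \(\lnot \lnot \varphi \) and \(\varphi \vee \lnot \varphi \):

```python
from itertools import combinations

# Toy logical space over atoms p and q.
W = [frozenset(s) for s in [(), ('p',), ('q',), ('p', 'q')]]

def powerset(worlds):
    ws = list(worlds)
    return {frozenset(c) for r in range(len(ws) + 1) for c in combinations(ws, r)}

def atom(a):
    return powerset(w for w in W if a in w)

def neg(A):
    """A*: possibilities disjoint from the union of A."""
    info_A = frozenset().union(*A)
    return powerset(w for w in W if w not in info_A)

def bang(A):
    """Fact 12: !phi is equivalent to double negation."""
    return neg(neg(A))

def query(A):
    """Fact 12: ?phi is equivalent to phi-or-not-phi."""
    return A | neg(A)

def conj(A, B):
    return A & B
```

For the hybrid \(p\vee q\), the division result of Fact 14 can then be verified directly: the proposition coincides with the conjunction of its two projections.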

In our view, these results are significant for the semantic analysis of declarative and interrogative *complementizers* in natural language. Just as it is to be expected that natural languages generally have connectives that behave semantically as *join*, *meet*, and *pseudo-complement* operators, it is also to be expected that natural languages generally have complementizers that behave semantically as non-informative or non-inquisitive projection operators, or combinations thereof. ^{12}

It is interesting to note in this regard that the non-informative projection operator, \(?\), which turns every sentence in our logical language into a question and would therefore naturally be associated with interrogative complementizers in natural languages, is closely related to disjunction and existential quantification. Namely, \([?\varphi ]\) is the *join* of \([\varphi ]\) and \([\varphi ]^*\), and the join operation, also associated with disjunction and existential quantification, is the essential source of inquisitiveness in \(\mathsf {Inq}_\mathsf{B}\). This fact may provide the basis for an explanation of the well-known observation that in many languages, question markers are homophonous with words for disjunction and/or indefinites (e.g., Japanese *ka*) (see Jayaseelan 2001, 2008; Bhat 2005; Haida 2007; AnderBois 2011, 2012, among others).

### 3.9 Maximal possibilities and compliance

Before concluding, we would like to briefly come back to the difference between the notion of a proposition in \(\mathsf {Inq}_\mathsf{B}\) and the one assumed in most previous work on the support-based system (see footnote 5).

As mentioned, the proposition expressed by a sentence \(\varphi \) in \(\mathsf {Inq}_\mathsf{B}\) coincides precisely with the set of all states that support \(\varphi \). However, in the support-based system the proposition expressed by \(\varphi \) is usually defined as the set of *maximal* states supporting \(\varphi \), i.e., the set of states that support \(\varphi \) and are not contained in any other state supporting \(\varphi \). We will use \([\![\varphi ]\!]\) to denote this set of maximal supporting states.

Now, if we restrict our attention to \(L_P\), it can in fact be shown that a sentence \(\varphi \) is supported by a state \(s\) if and only if \(s\) is contained in a maximal state supporting \(\varphi \) (see Ciardelli and Roelofsen 2011, p. 59).

**Fact 15**

For any \(\varphi \in L_P\) and any state \(s\): \(s\models \varphi \) iff \(s\subseteq t\) for some \(t\in [\![\varphi ]\!]\).

This means that, for any \(\varphi \in L_P\), \([\varphi ]\) can be fully recovered from \([\![\varphi ]\!]\), simply by taking its downward closure. Clearly, \([\![\varphi ]\!]\) can also always be obtained from \([\varphi ]\), by taking maximal elements. So at first sight there does not seem to be any reason to prefer one notion over the other.

However, there is a specific reason why \([\![\varphi ]\!]\) is usually adopted in the support-based system, rather than \([\varphi ]\). Namely, one of the main logical pragmatic notions that the semantics is intended to give rise to, i.e., the notion of *compliance* (Groenendijk and Roelofsen 2009), makes crucial reference to maximal supporting states and is therefore more straightforwardly characterized in terms of \([\![\varphi ]\!]\) than in terms of \([\varphi ]\). Compliance is a strict notion of logical relatedness. For instance, \(p\) is a compliant response to \(?p\), but \(p\wedge q\) is not, because \(q\) contributes information that is logically unrelated to \(?p\). Maximal supporting states play an important role in characterizing compliance because they correspond to pieces of information that are *just* sufficient to settle the given proposition, i.e., they settle the proposition without providing additional, possibly redundant and logically unrelated information (see Groenendijk and Roelofsen 2009).

Thus, if we want to characterize such a notion of compliance, there indeed seems to be a good reason to focus on maximal supporting states, and in the propositional setting this is unproblematic (although taking the proposition expressed by a sentence to consist of all supporting states, as in \(\mathsf {Inq}_\mathsf{B}\), does of course not prevent us from characterizing compliance, it just makes it slightly less straightforward).

However, it has been shown in great detail by Ciardelli (2009, 2010) that if we move to the first-order setting, compliance can no longer be defined in terms of maximal supporting states; in fact, in the first-order setting compliance cannot be defined in terms of support at all. Ciardelli’s argument starts with the following example.

*Example 1*

(The boundedness formula). Consider a first-order language which has a unary predicate symbol \(P\), a binary function symbol \(+\), and the set \(\mathbb{N}\) of natural numbers as its individual constants. Suppose that our logical space consists of first-order models \(M=\langle D,I\rangle \), where \(D=\mathbb{N}\), \(I\) maps every \(n\in \mathbb{N}\) to the corresponding \(n\in D\), and \(+\) is interpreted as addition. So the only difference between the models in our logical space is the way in which they interpret \(P\). Let \(x\le y\) abbreviate \(\exists z(x+z=y)\), let \(B(x)\) abbreviate \(\forall y(P(y)\rightarrow y\le x)\), and for every \(n\in \mathbb{N}\), let \(B(n)\) abbreviate \(\forall y(P(y)\rightarrow y\le n)\). Intuitively, \(B(n)\) says that \(n\) is greater than or equal to any number in \(P\). In other words, \(B(n)\) says that \(n\) is an *upper bound* for \(P\).

A state \(s\) supports a formula \(B(n)\), for some \(n\in \mathbb{N }\), iff \(B(n)\) is true in every model in \(s\), that is, iff \(n\) is an upper bound for \(P\) in every \(M\) in \(s\). Now consider the formula \(\exists x.B(x)\), which intuitively says that there is an upper bound for \(P\). This formula, which Ciardelli refers to as the *boundedness formula*, does not have a maximal supporting state. To see this, let \(s\) be an arbitrary state supporting \(\exists x. B(x)\). Then there must be a number \(n\in \mathbb{N }\) such that \(s\) supports \(B(n)\), i.e., \(B(n)\) must be true in all models in \(s\). Now let \(M^{\prime }\) be the model in which \(P\) denotes the singleton set \(\{n+1\}\). Then \(M^{\prime }\) cannot be in \(s\), because it does not make \(B(n)\) true. Thus, the state \(s^{\prime }\) which is obtained from \(s\) by adding \(M^{\prime }\) to it is a proper superset of \(s\) itself. However, \(s^{\prime }\) clearly supports \(B(n+1)\), and therefore also still supports \(\exists x.B(x)\). This shows that any state supporting \(\exists x.B(x)\) can be extended to a larger state which still supports \(\exists x.B(x)\), and therefore no state supporting \(\exists x.B(x)\) can be maximal. \(\square \)

This example shows that a general notion of compliance, applicable both in the propositional and in the first-order setting, should *not* make reference to maximal supporting states. Such a notion would give undesirable results for the boundedness formula and other cases where there are no maximal supporting states. Intuitively, this is because in these cases there are no pieces of information that provide *exactly* enough information to settle the given proposition. For every piece of information that settles the proposition, we can find a weaker piece of information that still settles the proposition. This means that maximal supporting states do not form a suitable basis for a general notion of compliance.

Ciardelli goes on to argue that a satisfactory notion of compliance can in fact not be defined in terms of support at all. This argument is based on the following example.

*Example 2*

(The positive boundedness formula). Consider the following variant of the boundedness formula: \(\exists x(x\ne 0\wedge B(x))\). This formula says that there is a *positive* upper bound for \(P\). Intuitively, it differs from the ordinary boundedness formula in that it does not license “Yes, zero is an upper bound for \(P\)” as a compliant response. However, in terms of support, \(\exists x(x\ne 0\wedge B(x))\) and \(\exists x.B(x)\) are entirely equivalent. Thus, support is not fine-grained enough to capture the intuition that these formulas do not license the same range of compliant responses. \(\square \)

This argument is relevant here, because it brings to light an important limitation of the support-based system, and therefore also of \(\mathsf {Inq}_\mathsf{B}\). The system does what it was meant to do, i.e., it provides a notion of meaning that embodies both informative and inquisitive content in a satisfactory way (also in the case of the boundedness formulas). However, this notion of meaning is not fine-grained enough to provide the basis for an adequate notion of compliance.

There have been several attempts to overcome this limitation (see, e.g., Ciardelli 2009, 2010; Westera 2012a; Ciardelli et al. 2013b). However, none of these attempts have so far been entirely conclusive. We hope that the algebraic approach developed here will shed new light on this issue as well. In principle, we could start out with a notion of meaning that is even richer than the one adopted here. Once we have a clear intuitive understanding of such a notion of meaning, and a suitable notion of entailment, we can follow essentially the same line of thought that has been pursued here to arrive at a system that adequately deals with compliance and possibly other aspects of meaning that are beyond the reach of \(\mathsf {Inq}_\mathsf{B}\). Initial work in this direction has been pursued in (Roelofsen 2011b) and (Westera 2012b).

## 4 Conclusion

In this paper we developed and investigated a framework for the semantic treatment of informative and inquisitive content, driven entirely by algebraic considerations. We proposed to define propositions as non-empty, downward closed sets of possibilities, and we showed that entailment can simply be defined as inclusion in this case, suitably capturing when one proposition is at least as informative and inquisitive as another. We showed that this entailment order gives rise to a complete Heyting algebra, with *meet*, *join*, and *relative pseudo-complement* operators. Just as in classical logic, these semantic operators were then associated with the logical constants in a first-order language.

We found that the resulting system essentially coincides with the simplest and most well-understood existing implementation of *inquisitive semantics*, and that its treatment of disjunction and existentials also concurs with that of *alternative semantics*. Thus, our algebraic considerations did not lead to a wholly new semantics, but rather to a more solid foundation for some of the existing systems. In future work, we hope to extend the approach to obtain an even more fine-grained framework, where propositions do not only embody informative and inquisitive content, but also further aspects of meaning.

It may be useful to draw an analogy with arithmetic operations here. Given that addition and subtraction are such basic operations on quantities, we expect that natural languages which have words in their vocabulary to talk about quantities will generally also have words that behave semantically as addition and subtraction operators (in English, these words are *plus* and *minus*). Similarly, given that *join* and *meet* are such basic operations on propositions, we expect that natural languages will generally have words whose semantic role is to effectuate these operations.

Our presentation will be self-contained but perhaps somewhat dense for readers with a limited background in algebraic semantics. Partee et al. (1990) provide a more comprehensive discussion of all the relevant algebraic notions, especially geared at a linguistically oriented readership.

As alluded to in the introduction, there is a large body of work on the semantics of questions, starting with Hamblin (1973) and Karttunen (1977), which assumes precisely the type of meanings that we have considered here, i.e., meanings as arbitrary sets of possibilities. All this work suffers from the anti-symmetry problem that we just pointed out. There is also a large body of work, starting with Groenendijk and Stokhof (1984), in which question-meanings are not taken to be arbitrary sets of possibilities, but rather sets of possibilities that *partition* the logical space. In this case the anti-symmetry problem does not arise. For arguments to move from a partition semantics to an inquisitive semantics of the kind developed here, we refer to Mascarenhas (2009) and Ciardelli et al. (2013a).

The definition of support assumed here was first proposed for \(L_P\) in (Groenendijk 2008; Ciardelli 2008). It was extended to \(L_{FO}\) in (Ciardelli 2009) and further investigated in (Groenendijk and Roelofsen 2009; Ciardelli and Roelofsen 2011). The definition differs subtly but crucially from the one proposed in (Groenendijk 2009; Mascarenhas 2009). For discussion of the differences and arguments in favor of the current notion of support, see (Ciardelli and Roelofsen 2011, §8). The system considered here has been extended in several ways in order to capture aspects of meaning that go beyond informative and inquisitive content (see, e.g., Ciardelli et al. 2009; Roelofsen and van Gool 2010, and Farkas and Roelofsen 2012). In these extended systems, the proposition expressed by a sentence is no longer defined via the notion of support, but rather by means of a direct recursive definition, as in the algebraic semantics presented in this paper.

Groenendijk (2008) actually makes a distinction between the *meaning* of a sentence (the set of all supporting states) and the *proposition* expressed by a sentence (the set of maximal supporting states). Ciardelli (2008) makes a similar distinction. In other work on the support-based system, the meaning/proposition associated with a sentence is defined as the set of maximal supporting states.

The completeness problem for the first-order case is still open. See Ciardelli (2009) for discussion.

Notice that the implication in the other direction does not always go through: a sentence that contains a disjunction or an existential quantifier is not necessarily inquisitive, as witnessed by sentences like \(\lnot (p\vee q)\), \((p\vee q)\rightarrow r\), and \(p\vee p\).
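These examples can be checked mechanically against the support semantics of \(\mathsf {Inq}_\mathsf{B}\). Below is a small Python sketch of the propositional fragment used in the examples; the tuple encoding of formulas and the function names are our own. A sentence counts as inquisitive just in case its informative content (the union of its supporting states) does not itself support the sentence.

```python
from itertools import chain, combinations

# Worlds over atoms p, q, r: a world is the frozenset of atoms true at it.
ATOMS = ("p", "q", "r")
WORLDS = [frozenset(s) for s in chain.from_iterable(
    combinations(ATOMS, k) for k in range(len(ATOMS) + 1))]

def substates(s):
    """All substates (subsets) of a state s, a frozenset of worlds."""
    xs = list(s)
    return [frozenset(t) for t in chain.from_iterable(
        combinations(xs, k) for k in range(len(xs) + 1))]

def supports(s, phi):
    """Support clauses of InqB; formulas are nested tuples,
    e.g. ("or", ("atom", "p"), ("atom", "q")) for p OR q."""
    op = phi[0]
    if op == "atom":   # every world in s makes the atom true
        return all(phi[1] in w for w in s)
    if op == "not":    # no non-empty substate supports the negated formula
        return all(not supports(t, phi[1]) for t in substates(s) if t)
    if op == "or":     # s supports one of the disjuncts
        return supports(s, phi[1]) or supports(s, phi[2])
    if op == "imp":    # substates supporting the antecedent support the consequent
        return all(supports(t, phi[2])
                   for t in substates(s) if supports(t, phi[1]))
    raise ValueError(op)

def inquisitive(phi):
    """phi is inquisitive iff its informative content does not support it."""
    prop = [s for s in substates(frozenset(WORLDS)) if supports(s, phi)]
    info = frozenset().union(*prop)
    return not supports(info, phi)
```

On this sketch, `inquisitive` returns `True` for \(p\vee q\) but `False` for all three counterexamples listed above, matching the observation in the text.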

There is one caveat here: if \(\psi \) entails \(\varphi \) then the disjunction \(\varphi \vee \psi \) is equivalent to just \(\varphi \) in \(\mathsf {Inq}_\mathsf{B}\), since propositions are downward closed. As a concrete example of this general fact, we have that \((p\vee q)\vee (p\wedge q)\) (read: *p or q or both*) is equivalent to just \(p\vee q\). Work on alternative semantics, in particular that of Alonso-Ovalle (2006, 2008, 2009), has shown that in order to account for certain phenomena, it is important to assign distinct semantic values to these two sentences. This cannot be achieved as long as propositions embody only informative and inquisitive content. However, it *is* achieved very naturally in an extension of \(\mathsf {Inq}_\mathsf{B}\), where besides informative and inquisitive content, propositions also embody *attentive content* (Ciardelli et al. 2009; Roelofsen 2011c). In this framework, the two sentences are indeed assigned distinct semantic values, intuitively because they draw attention to different possibilities. Preliminary investigations of the algebraic foundations of this framework can be found in (Roelofsen 2011b, c; Westera 2012b).
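The collapse of \((p\vee q)\vee (p\wedge q)\) to \(p\vee q\) is easy to verify computationally. The following Python sketch (the world encoding and function names are ours) builds \([p]\) and \([q]\) as downward-closed sets of states over a four-world model and confirms that adding the entailed disjunct \(p\wedge q\) leaves the proposition unchanged.

```python
from itertools import chain, combinations

# Four worlds over atoms p, q; a world is the frozenset of atoms true at it.
WORLDS = [frozenset(s) for s in [(), ("p",), ("q",), ("p", "q")]]

def powerset(xs):
    xs = list(xs)
    return [frozenset(t) for t in chain.from_iterable(
        combinations(xs, k) for k in range(len(xs) + 1))]

STATES = powerset(WORLDS)

def prop_atom(a):
    """[a]: the states at which a is true in every world (downward closed)."""
    return frozenset(s for s in STATES if all(a in w for w in s))

def join(a, b):   # [phi OR psi]: union of propositions
    return a | b

def meet(a, b):   # [phi AND psi]: intersection of propositions
    return a & b

P, Q = prop_atom("p"), prop_atom("q")

# [p AND q] is included in [p OR q], so the extra disjunct adds no states:
# (p OR q) OR (p AND q) collapses to p OR q.
assert meet(P, Q) <= join(P, Q)
assert join(join(P, Q), meet(P, Q)) == join(P, Q)
```

This is an instance of the general lattice fact that \(a \vee b = a\) whenever \(b \le a\); the attentive-content extension discussed in the footnote avoids the collapse precisely by giving up the purely downward-closed representation.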

In general, a representation theorem is a theorem that states that every abstract structure with certain properties must be isomorphic to some specific concrete structure. Our representation theorem states that in order to satisfy the above requirements, the non-inquisitive projection operator must be defined in a certain way.

We use \(\equiv \) here to denote equivalence, i.e., \(\varphi \equiv \psi \) iff \([\varphi ]=[\psi ]\).

Our considerations here can also be seen as providing motivation for the *existential closure* operator in alternative semantics (see the references given above), which behaves essentially in the same way as our non-inquisitive projection operator.

In English, we can think of the words *that* and *whether* as realizing declarative and interrogative complementizers, respectively, in embedded clauses (e.g., *John knows that/whether Mary is coming*). Even though these words do not occur in unembedded clauses, it is commonly assumed that the syntactic representations of unembedded clauses also involve complementizers. This assumption is also commonly made for *wh*-interrogatives, which, in English, do not exhibit overt complementizers even if they are embedded. In many other languages, complementizers are realized more overtly. It seems plausible to treat the declarative complementizer in English (*that*) as \(!\), the *wh*-interrogative complementizer as \(?\), and the polar interrogative complementizer (*whether*) as \(?!\). A detailed examination of this linguistic analysis is beyond the scope of this paper. Importantly, however, note that the framework developed here also allows us to formulate alternative analyses. The framework does not make any direct predictions about the semantic behavior of any specific construction in any specific natural language. It mainly offers the logical tools that are necessary to formulate such analyses, and gives rise to the expectation that, in general, natural languages will have ways to express the basic algebraic operations and the basic projection operations on propositions.

## Acknowledgments

First and foremost, I am very grateful to Ivano Ciardelli, Donka Farkas, Jeroen Groenendijk, and Matthijs Westera for our intensive collaboration on inquisitive semantics over the past few years. I am also very grateful to Gaëlle Fontaine, Bruno Jacinto, Theo Janssen, Morgan Mameni, Greg Restall, and Balder ten Cate for discussion of the ideas presented here, and to the anonymous reviewers of this paper, as well as its predecessor (Roelofsen 2011a), for very useful and constructive feedback. The research reported here was made possible through financial support from the Netherlands Organization for Scientific Research (NWO), which is gratefully acknowledged.

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.