Introduction

A motivating example

Suppose a certain disease may give rise to two symptoms, \(S_1\) and \(S_2\), the latter much more distressing than the former. Suppose the disease may be countered by means of a certain treatment, which however carries some associated risk. A hospital has the following protocol for dealing with the disease: if a patient presents symptom \(S_2\), the treatment is always prescribed. If the patient only presents symptom \(S_1\), however, the treatment is prescribed just in case the patient is in good physical condition; if not, the risks associated with the treatment outweigh the benefits, and the treatment is not prescribed.

Given the hospital’s protocol, whether or not the treatment is prescribed for a patient is determined by two things: (i) which symptoms the patient presents and (ii) whether the patient is in good physical condition. This means that, in the given context, a certain relation holds between the following questions:

figure a

  • \(\mu _1\): What symptoms, out of \(S_1\) and \(S_2\), does the patient present?

  • \(\mu _2\): Is the patient in good physical condition?

  • \(\nu \): Is the treatment prescribed?

This relation amounts to the following: in the given context, settling the questions \(\mu _1\) and \(\mu _2\) implies settling the question \(\nu \). We will say that the question \(\nu \) is determined by the questions \(\mu _1\) and \(\mu _2\) in the given context, and we will refer to this relation as a dependency.Footnote 1

This relation may also be viewed as connecting three different types of information: given the hospital’s protocol, complete information about a patient’s symptoms, combined with information about whether the patient is in good condition, yields information about whether the treatment is prescribed. Using the questions as labels, we may say that information of type \(\mu _1\), together with information of type \(\mu _2\), yields information of type \(\nu \).
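This determination can be checked mechanically on a small model. The following sketch (an encoding of our own; the paper keeps worlds abstract) represents a possible world as a tuple of four facts and verifies that, within the protocol context, the joint answers to \(\mu _1\) and \(\mu _2\) fix the answer to \(\nu \):

```python
from itertools import product

# A possible world fixes four facts: has S1, has S2, good condition,
# treatment prescribed.  (A hypothetical encoding for illustration.)
WORLDS = list(product([False, True], repeat=4))

def protocol(w):
    s1, s2, good, treated = w
    # Hospital protocol: treat iff S2 is present, or S1 together with
    # good physical condition.
    return treated == (s2 or (s1 and good))

CONTEXT = frozenset(w for w in WORLDS if protocol(w))

# A question is modelled by the feature it asks about; a state settles
# it iff all worlds in the state agree on that feature.
mu1 = lambda w: (w[0], w[1])   # which symptoms does the patient present?
mu2 = lambda w: w[2]           # is the patient in good condition?
nu  = lambda w: w[3]           # is the treatment prescribed?

def settles(state, question):
    return len({question(w) for w in state}) <= 1

# It suffices to check the maximal states settling both premises:
# the joint answer-cells of mu1 and mu2 within the context.
cells = {}
for w in CONTEXT:
    cells.setdefault((mu1(w), mu2(w)), set()).add(w)

assert all(settles(cell, nu) for cell in cells.values())
assert not settles(CONTEXT, nu)   # nu is not settled by the context alone
```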

The relevance of dependency

Dependencies are quite ubiquitous, both in ordinary contexts, such as the one of our example, and in specific scientific domains. In this section we mention three areas where this notion plays a role, although undoubtedly many others can be found.

Natural sciences

Much of the enterprise of natural sciences such as physics and chemistry consists in finding out what dependencies hold in nature: what are those factors that determine the trajectory of a planet, the temperature of a gas, or the speed of a certain chemical reaction?

One of the earliest achievements of modern science was the discovery that, absent air resistance, the time a body dropped near the Earth's surface takes to reach the ground is completely determined by the height from which it is dropped; another early realization of the modern theory of mechanics is that, over a flat surface, the distance at which a cannonball will land is completely determined by its initial velocity. Such relations are all cases of dependency in our sense: one question (say, what the initial velocity is) completely determines another question (say, how far the cannonball will land).

Indeed, the epistemic value of a scientific theory, such as classical mechanics or thermodynamics, lies at least to a large extent in its ability to establish such dependencies, which is often referred to as the theory’s predictive power. Our perspective allows us to make this very precise: we can say that a theory \(\varGamma \) is predictive of a question \(\nu \) given questions \(\mu _1,\dots ,\mu _n\) in case within the context of \(\varGamma \), \(\nu \) is determined by \(\mu _1,\dots ,\mu _n\). Thus, e.g., classical mechanics can be said to be predictive of a body’s position at a time t given (i) the body’s position and velocity at a different time \(t_0\), (ii) the body’s mass and (iii) the force field in which the body moves.

Linguistics

One of the goals of the theory of pragmatics is to understand when a certain sequence of conversational moves forms a coherent dialogue, and why. A crucial part of this task is to characterize what sentences count as acceptable replies to a question in a certain context. Now consider the following exchange:

figure b

In this dialogue, Bob's reply is as informative a response as Alice could possibly hope for. However, strictly speaking, it does not resolve Alice's question, since it does not provide a specific place where Bob can be found. Rather, what Bob's reply does is establish a dependency of Alice's question on another question, the question of whether it will be sunny. This illustrates the fact that in some cases, the optimal response to a given question may take the form of a dependency on another question.

Databases

A database is a relation, i.e., a collection of vectors of a given size. A vector in a database is called an entry, and its coordinates are called the entry’s attributes. E.g., the database of a university may contain one entry for each student, and the attributes may be student ID, last name, program, etc.

The traditional role of dependency in database theory is in the specification of constraints that the database should satisfy. Such constraints often take the form of dependencies. E.g., as a university we want an ID number to uniquely identify a student: this means that the value of the attribute student ID should completely determine the value of all other attributes in the database, such as last name, program, etc. Since it is important to verify that such a constraint is indeed satisfied by a database, reasoning about dependencies is a topic that has received much attention in the database community.
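Such a constraint (in database terms, a functional dependency) is easy to verify on a finite relation. A minimal sketch, with invented sample data:

```python
# A toy check (our own sketch) that one attribute functionally
# determines the others: student ID -> (last name, program).
rows = [
    ("s001", "Rossi", "Logic"),
    ("s002", "Verdi", "Math"),
    ("s001", "Rossi", "Logic"),    # duplicate entry, still consistent
]

def determines(rows, key_idx):
    # The key determines the rest iff no key value maps to two
    # different remainders.
    seen = {}
    for row in rows:
        key, rest = row[key_idx], row[:key_idx] + row[key_idx + 1:]
        if seen.setdefault(key, rest) != rest:
            return False
    return True

assert determines(rows, 0)
# A conflicting entry for s002 violates the dependency.
assert not determines(rows + [("s002", "Verdi", "Physics")], 0)
```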

A related domain where dependency plays a role is query answering. Queries can be thought of as questions in a specific formal language. When a query is asked by a user, a program accesses the database, computes an answer, and returns it to the user. However, databases are typically large, and consulting them is costly: thus, it is often useful to store the answers to particular queries after these have been answered. Such stored answers are called views. Ideally, when a query is asked, one would like to compute an answer just based on the available views, without having to reconsult the database. However, in general this is only possible if the new query is in fact determined by the previous ones, i.e., if a certain dependency relation holds.

Aim and structure of the paper

The first purpose of this paper is to demonstrate that the logical relation of dependency is nothing but a facet of the fundamental logical notion of entailment, once this is extended to cover not only statements, but also questions. As such, it can be investigated in an insightful way by means of the standard notions and techniques of logic, provided logic is extended to encompass questions.

The second purpose of the paper is to show that questions have an interesting role to play in inferences. When occurring in a logical proof, a question plays the role of a placeholder, standing for an arbitrary piece of information of a certain type. For instance, the question what the patient’s symptoms are stands for some complete specification of the patient’s symptoms. By using questions, we can thus manipulate indeterminate information, and this makes it possible to provide simple formal proofs of dependencies.

Finally, the third purpose of the paper is to show that such proofs admit a constructive interpretation, similar to the proofs-as-programs interpretation of intuitionistic logic: they do not just witness the existence of a dependency, but actually encode a method for computing the dependency, i.e., a method for turning information of the type described by the assumptions into information of the type described by the conclusion.

The paper is structured as follows: Section 2 discusses how questions can be brought within the scope of logic by moving from a truth-conditional semantics to an information-based semantics, and how dependency emerges as a facet of entailment in this generalized setting. Section 3 illustrates these ideas by means of a concrete formal system which extends classical propositional logic with questions. Section 4 deals with the role played by questions in proofs and brings out the constructive content of proofs involving questions. Section 5 situates the present contribution in a broader context, comparing it to other logical approaches to questions and dependency. Finally, Sect. 6 summarizes the main points of the paper and outlines directions for future work.

Entailment in the realm of questions

From truth conditions to support conditions

Traditionally, logic is concerned with relations between sentences of a particular kind, namely, statements. Classical logic arises from the default assumption that the meaning of a statement lies in its truth conditions, that is, in the conditions that a state of affairs must satisfy in order to qualify as one in which the statement is true.Footnote 2

We will refer to the formal representation of a (complete) state of affairs as a possible world, and we will denote the class of possible worlds by \(\omega \).Footnote 3 Thus, in the truth-conditional approach, semantics consists in the specification of a relation \(w\models \alpha \) between possible worlds w and statements \(\alpha \), which holds in case \(\alpha \) is true in the state of affairs described by w. The central notion of logic, the relation of entailment, can then be characterized as preservation of truth: \(\alpha \) entails \(\beta \) in case \(\beta \) is true whenever \(\alpha \) is.

$$\begin{aligned} \alpha \models \beta \iff \,\text {for all }w\in \omega :\;w\models \alpha \,\text { implies }\,w\models \beta \end{aligned}$$

This is, in a nutshell, the usual way to characterize the fundamental notion of entailment in classical logic. Given this perspective, questions seem to have no place in logic: after all, it is not even clear what it should mean for a question to be true or false in a certain state of affairs. Since entailment was characterized in terms of truth, it is thus also unclear what it would mean for a question to occur as an assumption or as a conclusion of an entailment relation.

However, it is possible to give an alternative semantic foundation for classical logic, which starts out from a more information-oriented perspective. Rather than taking the meaning of a statement to be given by laying out in which circumstances \(\alpha \) is true, we may take it to be given by laying out what information it takes to settle \(\alpha \), that is, to establish that \(\alpha \) is true. In this perspective, \(\alpha \) is evaluated not with respect to states of affairs, but instead with respect to pieces/bodies of information, whose formal counterpart we will call information states.Footnote 4

A simple and perspicuous way of modeling an information state s, which goes back at least to Hintikka (1962), and which is widely adopted both in logic and in linguistics, is to identify it with a set of possible worlds, namely, those worlds that are compatible with the information available in s. In other words, if s is a set of possible worlds, then s encodes the information that the actual state of affairs corresponds to one of the possible worlds in s. If \(t\subseteq s\), this means that in t, at least as much information as in s is available, and possibly more. We say that t is an enhancement of s, or that t yields s.

In the informational approach that we will explore, semantics will thus be given by a relation \(s\models \alpha \), called support, between information states s and statements \(\alpha \), which holds in case \(\alpha \) is settled in s. This semantic perspective brings along a corresponding notion of entailment as preservation of support: \(\alpha \) entails \(\beta \) in case settling \(\alpha \) implies settling \(\beta \).

$$\begin{aligned} \alpha \models \beta \iff \text {for all }s\subseteq \omega :\;s\models \alpha \text { implies }s\models \beta \end{aligned}$$

Now, we regard a statement \(\alpha \) as being settled in s just in case it follows from the information in s that \(\alpha \) is true, i.e., in case s is only compatible with worlds in which \(\alpha \) is true. But this means that s settles \(\alpha \) iff all the worlds in s are worlds where \(\alpha \) is true. Let us write \(|\alpha |\) for the set of worlds where \(\alpha \) is true, that is, \(|\alpha |=\{w\in \omega \,|\,w\models \alpha \}\). Then, for all information states s we have:

Relation 1

(Support conditions from truth conditions)

\( s\models \alpha \iff s\subseteq |\alpha |\)

Thus, the support conditions for a statement are completely determined by its truth conditions. On the other hand, if we consider this connection in the special case where s is a singleton \(\{w\}\), we obtain: \(w\models \alpha \iff w\in |\alpha |\iff \{w\}\subseteq |\alpha |\iff \{w\}\models \alpha \). Thus, the truth conditions of a statement are in turn determined by its support conditions.

Relation 2

(Truth conditions from support conditions)

\(w\models \alpha \iff \{w\}\models \alpha \)

These connections show that, for statements, the truth-conditional approach and the support-conditional approach are two sides of the same coin: support conditions and truth conditions are interdefinable.

What is more, truth-conditional semantics and support-conditional semantics provide two different characterizations of the same notion of entailment. To see this, suppose \(\alpha \) entails \(\beta \) in the truth-conditional sense, i.e., \(|\alpha |\subseteq |\beta |\). Suppose \(s\models \alpha \): according to Relation 1, this means that \(s\subseteq |\alpha |\). Since \(|\alpha |\subseteq |\beta |\), this implies \(s\subseteq |\beta |\), which again by Relation 1 gives \(s\models \beta \). This shows that \(\alpha \) entails \(\beta \) in the support-conditional sense. Conversely, suppose \(\alpha \) entails \(\beta \) in the support-conditional sense, and let \(w\models \alpha \). According to Relation 2, \(\{w\}\) is a state which supports \(\alpha \). But then, \(\{w\}\) must also support \(\beta \), which, again by Relation 2, implies \(w\models \beta \). This shows that \(\alpha \) entails \(\beta \) in the truth-conditional sense.
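This equivalence can also be verified exhaustively on a small model. The sketch below (a toy encoding of our own, with worlds as valuations of two atoms) checks that the truth-conditional and support-conditional characterizations of entailment agree on a handful of statements:

```python
from itertools import product, combinations

# Toy model: worlds are valuations of two atoms p, q (our own encoding).
WORLDS = list(product([False, True], repeat=2))

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

STATES = powerset(WORLDS)

def truth_set(stmt):                    # |alpha|
    return frozenset(w for w in WORLDS if stmt(w))

def supports(s, stmt):                  # Relation 1: s |= alpha iff s ⊆ |alpha|
    return s <= truth_set(stmt)

def entails_truth(a, b):                # truth-conditional entailment
    return truth_set(a) <= truth_set(b)

def entails_support(a, b):              # support-conditional entailment
    return all(supports(s, b) for s in STATES if supports(s, a))

statements = [lambda w: w[0], lambda w: w[1],
              lambda w: w[0] and w[1], lambda w: w[0] or not w[1]]

# The two characterizations agree on every pair of statements.
assert all(entails_truth(a, b) == entails_support(a, b)
           for a in statements for b in statements)
```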

What this means is that support semantics does not give rise to a new, non-standard notion of entailment, but instead provides an alternative, information-based characterization of entailment in classical logic.Footnote 5

Questions enter the stage

While truth-conditional semantics and support semantics are equivalent as far as statements are concerned, support semantics has an advantage which is not obvious at first: unlike truth-conditional semantics, it naturally accommodates not only statements, but also questions. For while it is not clear what it means for a question to be true or false in a certain state of affairs, there is a natural sense in which a question can be said to be settled in an information state s.

For a concrete example, consider one of the questions in our example, the question \(\mu _1\) of what symptoms, out of \(S_1\) and \(S_2\), the patient presents. This question is settled in an information state s in case either (i) s settles that the patient presents neither symptom, or (ii) s settles that the patient presents only \(S_1\), or (iii) s settles that the patient presents only \(S_2\), or finally, (iv) s settles that the patient presents both symptoms. This means that \(\mu _1\) is settled in a state s just in case s is included in one of the following four states:

  • \(a_{\emptyset \phantom {2}}=\,\{w\in \omega \,|\,\text {patient has no symptoms in }w\}\)

  • \(a_{1\phantom {2}}=\,\{w\in \omega \,|\,\text {patient has only symptom }S_1\text { in }w\}\)

  • \(a_{2\phantom {2}}=\,\{w\in \omega \,|\,\text {patient has only symptom }S_2\text { in }w\}\)

  • \(a_{12}=\,\{w\in \omega \,|\,\text {patient has both symptoms in }w\}\)

It is worth pointing out that not only are support conditions naturally defined for questions: there are also good reasons to regard them as a good candidate for the role of question meaning. For questions are used primarily, though not uniquely, in order to specify requests for information: it is therefore natural to expect that to know the meaning of a question is to know what information is requested by asking it, that is, what information state has to be brought about in order for the question to be settled. That is precisely what is encapsulated into the question’s support conditions.

Pieces and types of information

In truth-conditional semantics, the meaning of a sentence \(\varphi \) is encoded by its truth-set, i.e., by the set \(|\varphi |=\{w\in \omega \,|\,w\models \varphi \}\) of worlds at which \(\varphi \) is true. Similarly, in support semantics, the meaning of a sentence \(\varphi \) is encoded by its support-set, that is, the set \([\varphi ]=\{s\subseteq \omega \,|\,s\models \varphi \}\) of states which support \(\varphi \).

The support-set of a sentence is a set of information states of a special form. For suppose \(\varphi \) is settled in an information state s; then, \(\varphi \) will also be settled at any state that contains at least as much information as s. That is, the relation of support is persistent:Footnote 6

Persistency: if \(t\subseteq s\), \(\,s\models \varphi \,\text { implies }\,t\models \varphi \)

This implies that the support-set \([\varphi ]\) of a sentence is always downward closed, that is, if it contains a state s, it also contains all enhancements \(t\subseteq s\).

Downward closure: if \(t\subseteq s\), \(\,s\in [\varphi ]\,\text { implies }\,t\in [\varphi ]\)

It will be useful to introduce a notion of downward closure of a given set T of information states, defined as follows:

$$\begin{aligned} T^\downarrow =\{s\subseteq \omega \,|\,s\subseteq t\text { for some }t\in T\} \end{aligned}$$

Clearly, \(T^\downarrow \) is a downward closed set; in fact, \(T^\downarrow \) is always the smallest downward closed set which contains T. We say that the set of states \(T^\downarrow \) is generated by T, or that T is a generator for \(T^\downarrow \).
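The closure operation is straightforward to compute on finite models. A minimal sketch, over an abstract four-world space of our own choosing:

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return {frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)}

def down(T, worlds):
    # T-down: every state included in some member of T.
    return {s for s in powerset(worlds) if any(s <= t for t in T)}

WORLDS = [0, 1, 2, 3]                     # an abstract four-world space
T = [frozenset({0, 1}), frozenset({2})]
D = down(T, WORLDS)

# D is downward closed and contains T ...
assert all(sub in D for s in D for sub in powerset(s))
assert all(t in D for t in T)
# ... and it is the smallest such set: exactly the subsets of members of T.
assert D == set().union(*(powerset(t) for t in T))
```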

Now, the support set of a statement has another important feature besides downward closure. Relation 1 between the support conditions of a statement and its truth conditions implies that \(s\in [\alpha ]\iff s\subseteq |\alpha |\). That is, we have the following relation:

$$\begin{aligned} {[}\alpha ]=\{|\alpha |\}^\downarrow \end{aligned}$$

This shows that the support-set of a statement is always generated by a single state, namely, the truth-set \(|\alpha |\).Footnote 7 We may regard \(|\alpha |\) as a piece of information, namely, the information that \(\alpha \) is true. Thus, we may regard a statement \(\alpha \) as describing a specific piece of information. To say that \(\alpha \) is settled in a state is simply to say that this piece of information is available in s.

This is not the case for questions. For instance, consider again the question \(\mu _1\) of what symptoms the patient presents, in the context of our example. We saw above that a state s supports \(\mu _1\) just in case it is included in one of the following four states:

  • \(a_{\emptyset \phantom {2}}=\,\{w\in \omega \,|\,\text {patient has no symptoms in }w\}\)

  • \(a_{1\phantom {2}}=\,\{w\in \omega \,|\,\text {patient has only symptom }S_1\text { in }w\}\)

  • \(a_{2\phantom {2}}=\,\{w\in \omega \,|\,\text {patient has only symptom }S_2\text { in }w\}\)

  • \(a_{12}=\,\{w\in \omega \,|\,\text {patient has both symptoms in }w\}\)

That is, in this case the support-set of \(\mu _1\) is not generated by a single state. Rather, it is generated by four different states:

$$\begin{aligned} {[}\mu _1]=\{a_\emptyset ,a_{1},a_2,a_{12}\}^{\downarrow } \end{aligned}$$

This reflects the fact that settling a question, such as \(\mu _1\), does not amount to establishing a specific piece of information, as in the case of a statement, but rather to establishing one of several alternative pieces of information—which we may think of as the various ways in which the question may be resolved.

We may thus think of \(\mu _1\) as describing not a specific piece of information, as in the case of a statement, but rather a type of information. The elements of this type are the states \(a_\emptyset ,a_1,a_2,a_{12}\), regarded, respectively, as: the information that the patient has no symptoms; the information that the patient has only symptom \(S_1\); the information that the patient has only symptom \(S_2\); and the information that the patient has both symptoms. To say that \(\mu _1\) is settled in a state is to say that some piece of information of this type is available.

We may generalize these observations as follows. If T is a generator for the support-set \([\varphi ]\), we may regard \(\varphi \) as describing the type of information T. For \(\varphi \) is settled in a state s iff some piece of information \(a\in T\) is available in s:

$$\begin{aligned} s\models \varphi \iff s\subseteq a\text { for some }a\in T \end{aligned}$$

We may then take the following to be the fundamental property that distinguishes statements from questions.

Definition 1

(Determinacy) We call a sentence \(\varphi \) determinate in case \([\varphi ]\) admits a singleton generator. Otherwise, we call the sentence \(\varphi \) indeterminate.

Statements are determinate, that is, they may be regarded as describing a specific piece of information. Questions, on the other hand, are indeterminate: their support-set is not generated by any single information state, and they must thus be regarded as describing a non-singleton type of information.Footnote 8

In general, there will of course be many generators T for a given support-set \([\varphi ]\). However, in many cases there is a unique minimal generator.

Definition 2

(Alternatives) The alternatives for a sentence \(\varphi \) are the maximal information states supporting \(\varphi \).

$$\begin{aligned} \textsc {Alt}(\varphi )=\{s\,|\,s\models \varphi \text { and there is no }t\supset s\text { such that }t\models \varphi \} \end{aligned}$$

Proposition 1

  • If \([\varphi ]=\textsc {Alt}(\varphi )^\downarrow \), then \(\textsc {Alt}(\varphi )\) is the unique minimal generator for \([\varphi ]\).

  • If \([\varphi ]\ne \textsc {Alt}(\varphi )^\downarrow \), then \([\varphi ]\) has no minimal generator.

If \([\varphi ]=\textsc {Alt}(\varphi )^\downarrow \), we will say that a sentence \(\varphi \) is normal. Statements are always normal, since we have \([\alpha ]=\{|\alpha |\}^\downarrow =\textsc {Alt}(\alpha )^\downarrow \). The questions in our example are normal as well, and so are all the questions expressible in the propositional logic described in Sect. 3. The proposition ensures that each normal sentence \(\varphi \) can be construed in a canonical way as describing the type of information \(\textsc {Alt}(\varphi )\).Footnote 9
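On finite models, \(\textsc {Alt}(\varphi )\) can be computed directly as the set of inclusion-maximal supporting states. The sketch below (our own toy encoding of a "whether p" polar question) computes the alternatives and checks normality, i.e., that the support set is generated by them:

```python
from itertools import product, combinations

# Toy encoding of our own: worlds are valuations of atoms p, q, and we
# build the support set of the polar question "whether p".
WORLDS = list(product([False, True], repeat=2))

def powerset(xs):
    xs = list(xs)
    return {frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)}

p_true  = frozenset(w for w in WORLDS if w[0])
p_false = frozenset(w for w in WORLDS if not w[0])
support_set = {s for s in powerset(WORLDS)
               if s <= p_true or s <= p_false}

def alternatives(S):
    # Alt: the inclusion-maximal states in S.
    return {s for s in S if not any(s < t for t in S)}

alts = alternatives(support_set)
assert alts == {p_true, p_false}

# Normality: the support set is generated by its alternatives.
assert support_set == {s for s in powerset(WORLDS)
                       if any(s <= a for a in alts)}
```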

Summing up, then, the support-conditional approach allows us to think of sentences in general as describing information types; a sentence is settled in a state iff some information of the corresponding type is available. Statements can be taken to describe singleton types, consisting of a unique piece of information; questions, on the other hand, always describe proper types, consisting of several distinct pieces of information.Footnote 10

Logical entailment

In the truth-conditional approach, entailment is characterized as preservation of truth: a conclusion follows from a set of premises if it is true whenever all the premises are true. As a consequence, it is only statements, whose meaning can be captured in terms of truth conditions, that can meaningfully figure in an entailment relation. In the support-conditional approach, entailment is characterized as preservation of support: a conclusion follows from a set of premises if it is settled whenever all the premises are. In symbols:

$$\begin{aligned} \varPhi \models \psi \iff \text {for all }s\subseteq \omega :\;s\models \varPhi \text { implies }s\models \psi \end{aligned}$$

where \(s\models \varPhi \) is shorthand for ‘\(s\models \varphi \text { for all }\varphi \in \varPhi \)’. As we saw, support conditions are meaningful not only for statements, but also for questions. As a consequence, we can now make sense of entailment relations which involve sentences of both categories. Thus, characterizing entailment in terms of support allows for a substantial generalization of the classical notion of entailment.

Now, given a sentence \(\varphi \), let \(T\varphi \) be an arbitrary generator for \([\varphi ]\), so that we can think of \(\varphi \) as describing the type of information \(T\varphi \).Footnote 11 Then, it is easy to see that the entailment \(\varphi \models \psi \) holds iff any piece of information of type \(T\varphi \) yields some corresponding piece of information of type \(T\psi \).

$$\begin{aligned} \varphi \models \psi \iff \text {for every }a\in T\varphi \text { there exists }a^{\prime }\in T\psi \text { such that }a\subseteq a^{\prime } \end{aligned}$$

In the case of multiple premises, this generalizes as follows: \(\varphi _1,\dots ,\varphi _n\models \psi \) holds in case combining information of type \(T\varphi _i\) for \(1\le i\le n\) is guaranteed to yield some piece of information of type \(T\psi \).

$$\begin{aligned} \varphi _1,\dots ,\varphi _n\models \psi \iff \;&\text {for every }a_1\in T\varphi _1,\dots ,a_n\in T\varphi _n\\ &\text {there exists }a^{\prime }\in T\psi \text { such that }a_1\cap \dots \cap a_n\subseteq a^{\prime } \end{aligned}$$
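For finite models, this generator-based criterion can be checked against the support-based definition of entailment directly. In the sketch below (a toy encoding of ours, with each question given by its two-element generator), both criteria agree on whether two polar questions jointly determine a third:

```python
from itertools import product, combinations

# Toy model of our own: worlds are valuations of atoms p, q, r.
WORLDS = list(product([False, True], repeat=3))

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

STATES = powerset(WORLDS)

def whether(i):                        # the polar question about atom i
    yes = frozenset(w for w in WORLDS if w[i])
    return [yes, frozenset(WORLDS) - yes]

def supports(s, gen):                  # s settles a question given by gen
    return any(s <= a for a in gen)

def entails_support(premises, concl):
    return all(supports(s, concl) for s in STATES
               if all(supports(s, g) for g in premises))

def entails_generators(premises, concl):
    # The criterion above: every intersection of premise alternatives
    # is included in some alternative for the conclusion.
    return all(any(frozenset.intersection(*cells) <= a for a in concl)
               for cells in product(*premises))

# "whether p and q": determined by "whether p" and "whether q" jointly,
# but not by "whether p" alone.
pq = [frozenset(w for w in WORLDS if w[0] and w[1]),
      frozenset(w for w in WORLDS if not (w[0] and w[1]))]

for premises in ([whether(0), whether(1)], [whether(0)]):
    assert entails_support(premises, pq) == entails_generators(premises, pq)
assert entails_generators([whether(0), whether(1)], pq)
```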

To get acquainted with the significance of this generalized entailment relation, consider the case of a single premise. We have four possible entailment patterns: statement to statement, statement to question, question to statement, and question to question. Let us examine briefly the significance of each case.

  • If \(\alpha \) and \(\beta \) are statements, then \(\alpha \models \beta \) expresses the fact that settling \(\alpha \) implies settling \(\beta \). This simply amounts to \(|\alpha |\subseteq |\beta |\), that is, the information that \(\alpha \) is true yields the information that \(\beta \) is true. As we have pointed out above, this coincides with the standard truth-conditional notion of entailment.

  • If \(\alpha \) is a statement and \(\mu \) a question, \(\alpha \models \mu \) expresses the fact that settling \(\alpha \) implies settling \(\mu \). We may read \(\alpha \models \mu \) as “\(\alpha \) logically resolves \(\mu \)”. E.g., the statement Galileo discovered Jupiter’s moons entails the question whether Galileo discovered anything.

    It is easy to see that we have \(\alpha \models \mu \iff |\alpha |\subseteq a\) for some \(a\in T\mu \): that is, \(\alpha \) entails \(\mu \) iff the information that \(\alpha \) is true yields some information of type \(\mu \).

  • If \(\mu \) is a question and \(\alpha \) is a statement, then \(\mu \models \alpha \) expresses the fact that whenever we settle \(\mu \)—in any possible way—we also settle \(\alpha \); in other words, it is impossible to resolve the question without establishing that \(\alpha \). We may read \(\mu \models \alpha \) as “\(\mu \) presupposes \(\alpha \)”. E.g., the question in what year Galileo discovered Jupiter’s moons entails the statement Galileo discovered Jupiter’s moons.

    It is easy to see that we have \(\mu \models \alpha \,\iff \, a\subseteq |\alpha |\) for all \(a\in T\mu \): that is, \(\mu \) entails \(\alpha \) iff any information of type \(\mu \) yields the information that \(\alpha \).

  • If \(\mu \) and \(\nu \) are questions, \(\mu \models \nu \) expresses the fact that settling \(\mu \) implies settling \(\nu \). This is precisely the relation of dependency that we pointed out in our initial examples, but now in its purely logical version, since all worlds, not just some contextually relevant ones, are taken into account.

    We may read \(\mu \models \nu \) as “\(\mu \) logically determines \(\nu \)”. E.g., the question in what year Galileo discovered Jupiter’s moons entails the question in what century Galileo discovered Jupiter’s moons.

    In terms of information types, we have \(\mu \models \nu \iff \) for all \(\,a\in T\mu \,\) there is \(\,a^{\prime }\in T\nu \,\) such that \(\,a\subseteq a^{\prime }\): that is, \(\mu \) entails \(\nu \) iff any piece of information of type \(\mu \) yields some corresponding piece of information of type \(\nu \).

Thus, support semantics gives rise to an interesting generalization of classical entailment, which captures not only the logical connections existing between pieces of information (the standard consequence relation), but also those existing between pieces of information and types of information (resolution, presupposition), and between one type of information and another (dependency).

Entailment in context

When we think about a statement being a consequence of another, it is rarely the purely logical notion of consequence that we are concerned with. Rather, we typically take some facts for granted, and then assess whether on that basis, the truth of one statement implies the truth of the other. We say, e.g., that Galileo discovered some celestial body is a consequence of Galileo discovered Jupiter’s moons; in doing so, we take for granted that Jupiter’s moons are celestial bodies: worlds in which this is not the case are not taken into account.

The same holds for questions: when we are concerned with dependencies, it is rarely purely logical dependencies that are at stake. Rather, we are usually concerned with the relations that one question bears to another, given certain background facts about the world. In our initial example, for instance, it is the hospital’s protocol that provides the context relative to which the dependency holds.

In order to capture these relations, besides the absolute notion of logical entailment that we discussed, we will also introduce a relativized notion of contextual entailment. We will model a context simply as an information state s. In assessing entailment relative to s, we take the information embodied by s for granted. This means that only worlds in s, and states consisting of such worlds, need to be taken into account. Formally, we make the following definition.

$$\begin{aligned} \varPhi \models _s\psi \iff \text {for all }t\subseteq s:\;t\models \varPhi \,\text { implies }\,t\models \psi \end{aligned}$$
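On a finite context, contextual entailment can be decided by brute force over all enhancements of s. A minimal sketch, with a hypothetical four-world example of our own:

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def supports(s, alts):
    # A question is modelled by its set of alternatives.
    return any(s <= a for a in alts)

def entails_in_context(premises, concl, context):
    # Quantify only over enhancements t of the context.
    return all(supports(t, concl) for t in powerset(context)
               if all(supports(t, p) for p in premises))

# Hypothetical example of ours: worlds are the integers 0..3.
parity = [frozenset({0, 2}), frozenset({1, 3})]    # even or odd?
small  = [frozenset({0, 1}), frozenset({2, 3})]    # below 2 or not?

# Logically, parity does not determine small; in a context ruling out
# worlds 1 and 2, it does.
assert not entails_in_context([parity], small, frozenset({0, 1, 2, 3}))
assert entails_in_context([parity], small, frozenset({0, 3}))
```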

Contextual entailment captures relations of consequence, resolution, presupposition, and dependency which hold not purely logically, but against the background of a specific context.

Fig. 1
figure 1

The meanings of the three questions involved in our initial example, within the context s provided by the hospital's protocol. Each displayed set is the intersection \(a\cap s\) of an alternative a for the question with the context; to avoid clutter, we label each set simply by the name of the corresponding alternative a

Focusing on dependency, let us look in particular at how our initial example is captured as an instance of entailment in context. Let s denote our hospital protocol context, which consists of the set of worlds which are compatible with the protocol. Thus, e.g., s contains worlds where the patient has both symptoms and the treatment is prescribed, but not worlds where the patient has both symptoms and the treatment is not prescribed, since such worlds are incompatible with the protocol.

Now, we saw that a state \(t\subseteq s\) settles the question \(\mu _1\) of what symptoms the patient has in case it is included in one of the following four states, whose intersection with s is depicted in Fig. 1b:

  • \(a_{\emptyset \phantom {2}}=\,\{w\in \omega \,|\,\text {patient has no symptoms in }w\}\)

  • \(a_{1\phantom {2}}=\,\{w\in \omega \,|\,\text {patient has only symptom }S_1\text { in }w\}\)

  • \(a_{2\phantom {2}}=\,\{w\in \omega \,|\,\text {patient has only symptom }S_2\text { in }w\}\)

  • \(a_{12}=\,\{w\in \omega \,|\,\text {patient has both symptoms in }w\}\)

A state \(t\subseteq s\) settles the question \(\mu _2\) of whether the patient is in good condition in case it settles that the patient is in good condition, or it settles that the patient is not in good condition. This holds just in case t is included in either of the following states, whose intersection with s is depicted in Fig. 1c:

  • \(a_{g}\,=\,\{w\in \omega \,|\,\text {patient is in good condition in }w\}\)

  • \(a_{\overline{g}}\,=\,\{w\in \omega \,|\,\text {patient is not in good condition in }w\}\)

Finally, a state \(t\subseteq s\) settles the question \(\nu \) of whether the treatment is prescribed just in case it settles that the treatment is prescribed, or it settles that the treatment is not prescribed. That is, in case t is included in one of the following two states, whose intersection with s is depicted in Fig. 1d:

  • \(a_{t}\,=\,\{w\in \omega \,|\,\text {treatment is prescribed in }w\}\)

  • \(a_{\overline{t}}\,=\,\{w\in \omega \,|\,\text {treatment is not prescribed in }w\}\)

Now, clearly, relative to the context s, neither \(\mu _1\) nor \(\mu _2\) by itself entails \(\nu \). For instance, \(\mu _1\) is settled in the state \(a_1\cap s\), but \(\nu \) is not. This corresponds to the fact that the information that the patient has only symptom \(S_1\) is not sufficient to determine whether the treatment is prescribed or not. Similarly, \(\mu _2\) is settled in each of the states \(a_g\cap s\) and \(a_{\overline{g}}\cap s\), but \(\nu \) is not. This corresponds to the fact that information as to whether the patient is in good condition is not sufficient to determine whether the treatment is prescribed.

Hence, we have \(\mu _1\not \models _s\nu \) and \(\mu _2\not \models _s\nu \), which captures the fact that whether the treatment is prescribed is not fully determined by either the patient’s symptoms or the patient’s condition in the given context.

At the same time, \(\mu _1\) and \(\mu _2\) together do entail \(\nu \) relative to s. For consider a state \(t\subseteq s\) which settles both \(\mu _1\) and \(\mu _2\): since t settles \(\mu _1\), t must be included in one of the sets \(a_\emptyset ,\,a_1,\,a_2,\,a_{12}\); and since t settles \(\mu _2\), t must be included in one among \(a_g\) and \(a_{\overline{g}}\). It is clear by inspecting the figure that any such state must be included in one among \(a_t\) and \(a_{\overline{t}}\), which means that it also settles \(\nu \). Thus, we have \(\mu _1,\mu _2\models _s\nu \), which captures the fact that, in the given context, whether the treatment is prescribed is jointly determined by the patient’s symptoms and condition. In this way, the dependency relation of our initial example is captured as a particular instance of entailment—more precisely, as a case of question entailment in context.
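This verification can be carried out mechanically. The following Python sketch (the encoding of worlds as triples and of questions as labelling functions is our own illustration, not part of the formal framework) enumerates all substates of the protocol context and checks the three entailment claims:

```python
from itertools import combinations

# Worlds: (symptoms, good_condition, treated); symptoms is a subset of {'S1', 'S2'}.
SYMPTOMS = [frozenset(), frozenset({'S1'}), frozenset({'S2'}), frozenset({'S1', 'S2'})]
WORLDS = [(sym, good, treated) for sym in SYMPTOMS
          for good in (True, False) for treated in (True, False)]

def prescribed(sym, good):
    # The protocol: treat iff S2 is present, or S1 is present and the condition is good.
    return 'S2' in sym or ('S1' in sym and good)

# The context s: all worlds compatible with the protocol.
S = [w for w in WORLDS if w[2] == prescribed(w[0], w[1])]

# Each question is modelled by the feature it asks about; a state settles a
# question iff all of its worlds agree on that feature, i.e. the state is
# included in a single alternative.
mu1 = lambda w: w[0]   # what symptoms does the patient present?
mu2 = lambda w: w[1]   # is the patient in good condition?
nu  = lambda w: w[2]   # is the treatment prescribed?

def settles(t, question):
    return len({question(w) for w in t}) <= 1

def entails_in_context(premises, conclusion, context):
    # Phi |=_s psi: every substate settling all premises settles the conclusion.
    return all(settles(t, conclusion)
               for r in range(len(context) + 1)
               for t in combinations(context, r)
               if all(settles(t, q) for q in premises))
```

Enumerating all 256 substates of the 8-world context confirms that \(\mu _1,\mu _2\models _s\nu \), while \(\mu _1\not \models _s\nu \) and \(\mu _2\not \models _s\nu \).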

It is also natural to look at this relation in terms of information types. Since all three questions involved in the example are normal, we can associate them with the following information types.

  • \(\textsc {Alt}(\mu _1)=\{a_\emptyset ,\,a_1,\,a_2,\,a_{12}\}\)

  • \(\textsc {Alt}(\mu _2)=\{a_g,\,a_{\overline{g}}\}\)

  • \(\textsc {Alt}(\,\nu \,)\,=\{a_t,\,a_{\overline{t}}\}\)

Then, the entailment \(\mu _1,\mu _2\models _s\nu \) amounts to the following.

$$\begin{aligned} \mu _1,\,\mu _2\,\models _s\,\nu\iff & {} \text { for any }a\in \textsc {Alt}(\mu _1)\text { and any }a^{\prime }\in \textsc {Alt}(\mu _2)\\&\text { there is }a^{\prime \prime }\in \textsc {Alt}(\nu )\text { such that }s\cap a\cap a^{\prime }\subseteq a^{\prime \prime } \end{aligned}$$

That is, the entailment holds if, within the context s provided by the protocol, combining a piece of information of type symptoms (\(a\in \textsc {Alt}(\mu _1)\)) with one of type conditions (\(a^{\prime }\in \textsc {Alt}(\mu _2)\)) is bound to yield some piece of information of type treatment (\(a^{\prime \prime }\in \textsc {Alt}(\nu )\)). This shows how the contextual entailment \(\mu _1,\,\mu _2\models _s\nu \) captures precisely the relation that we observed to exist between the types of information \(\textsc {Alt}(\mu _1)\), \(\textsc {Alt}(\mu _2)\), and \(\textsc {Alt}(\nu )\) within the context s.

From contextual to logical entailment

Contextual entailments can be made into logical entailments by turning the relevant contextual material into an explicit premise. If \(\varGamma \) is a set of statements, and if \(|\varGamma |\) is the set of worlds at which these statements are all true, we have the following connection:

$$\begin{aligned} \varPhi \models _{|\varGamma |}\psi \iff \varGamma ,\varPhi \models \psi \end{aligned}$$

That is, if a context s is describable by a set \(\varGamma \) of statements, contextual entailment relative to s amounts to logical entailment with the statements in \(\varGamma \) as additional premises. In our example, the context s may be described by means of a statement such as the following.

figure c

Thus, the dependency in our example is not only captured by the contextual entailment \(\,\mu _1,\mu _2\models _s\nu \,\) relative to the protocol context, but also by its purely logical counterpart \(\,\gamma ,\mu _1,\mu _2\models \nu \,\) in which the hospital’s protocol is turned into an additional premise.

Internalizing entailment

In support semantics, the contexts to which entailment can be relativized are the same kind of objects at which sentences are evaluated, namely, information states. This ensures that a support-based logic can always be enriched with an operation of implication which internalizes the meta-language relation of entailment. In other words, a logical system whose semantics is given in terms of support may always be equipped with a connective \(\rightarrow \) such that, for any sentences \(\varphi \) and \(\psi \), \(\varphi \rightarrow \psi \) is settled in s iff \(\varphi \) entails \(\psi \) relative to s.

$$\begin{aligned} s\models \varphi \rightarrow \psi \;\iff \; \varphi \models _s\psi \end{aligned}$$

Simply by making explicit what the condition \(\varphi \models _s\psi \) amounts to, we get the inductive support clause governing this operation:

$$\begin{aligned} s\models \varphi \rightarrow \psi \iff \text {for all }t\subseteq s:\;\, t\models \varphi \text { implies }t\models \psi \end{aligned}$$

Interestingly, this is, mutatis mutandis, precisely the interpretation of implication that we find in most information-based semantics.

If we apply this clause to statements, what we get is simply the usual material conditional of classical logic. For, using Relation 1, we have:

$$\begin{aligned} s\models \alpha \rightarrow \beta\iff & {} \forall t\subseteq s,\; t\models \alpha \text { implies }t\models \beta \\\iff & {} \forall t\subseteq s,\; t\subseteq |\alpha |\text { implies }t\subseteq |\beta |\\\iff & {} {s\,}\cap {\,|\alpha |}\subseteq {|\beta |}\; \iff \; s\subseteq \overline{|\alpha |}\,\cup \,|\beta | \end{aligned}$$

where \(\overline{|\alpha |}=\omega -|\alpha |\) is the set of worlds where \(\alpha \) is false. Thus, the conditional \(\alpha \rightarrow \beta \) is supported in a state s iff the corresponding material conditional is true everywhere in s. This is interesting, as it shows that the standard material conditional may be seen as arising precisely by internalizing within the language the relation of contextual entailment between statements.
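The equivalence just derived is easy to confirm by brute force. Below is a minimal Python sketch, assuming worlds are encoded as pairs of truth values for two atoms p and q (our encoding, not the paper's): it checks, for every state, that the support clause for \(p\rightarrow q\) agrees with the material conditional holding throughout the state.

```python
from itertools import combinations

OMEGA = [(p, q) for p in (0, 1) for q in (0, 1)]   # worlds as (p, q) valuations

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supports_atom(s, i):            # s |= atom i: true at every world of s
    return all(w[i] == 1 for w in s)

def supports_imp(s):                # s |= p -> q, by the support clause for implication
    return all(supports_atom(t, 1) for t in subsets(s) if supports_atom(t, 0))

P = {w for w in OMEGA if w[0] == 1}     # |p|, the truth set of p
Q = {w for w in OMEGA if w[1] == 1}     # |q|, the truth set of q

# s |= p -> q iff the material conditional holds everywhere in s:
material_match = all(supports_imp(s) == ((s & P) <= Q) for s in subsets(OMEGA))
```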

What is more interesting, however, is that the clause above defines an operation which generalizes the material conditional. Above, we saw that support semantics is suitable for interpreting questions, besides statements. If our language contains questions, implication among them is naturally defined: given two questions \(\mu \) and \(\nu \), we thus have a formula \(\mu \rightarrow \nu \) which is supported by a state s in case \(\mu \) determines \(\nu \) relative to s.

What this shows is that the support approach does not only allow us to generalize the relation of entailment to questions, capturing dependencies: it also allows us to generalize in a parallel way the conditional operator to questions, enabling us to express these dependencies within the language.

Summing up

We have seen that classical logic can be given an alternative, informational semantics in terms of support conditions, which determines when a sentence is settled by a body of information, rather than when it is true at a world. Unlike truth-conditional semantics, support semantics can interpret questions in a natural way. In this approach, a formula may be regarded as describing a type of information: statements describe singleton types, which may be identified with specific pieces of information; questions describe non-singleton types, which are instantiated by several different pieces of information.

This unified semantic account of statements and questions allows for a generalization of the classical notion of entailment: while entailments among statements have the usual significance, entailments involving questions capture dependencies. In particular, an entailment of the form \(\,\alpha ,\,\mu \,\models \,\nu \) captures the fact that, in the context described by the statement \(\alpha \), the question \(\mu \) determines the question \(\nu \): that is, given the information that \(\alpha \) is true, any piece of information of type \(\mu \) yields some corresponding piece of information of type \(\nu \).

Questions in propositional logic

In this section, the ideas discussed abstractly so far will be illustrated by means of a concrete logical system. We will do this in the simplest possible setting, that of propositional logic. The system that we will discuss is the system InqB of propositional inquisitive logic (Ciardelli 2009; Groenendijk and Roelofsen 2009; Ciardelli and Roelofsen 2011). However, we will take a new perspective on this system. In previous work, the idea was that standard propositional formulas are given a more fine-grained semantics, adding an inquisitive dimension to purely truth-conditional meaning: as a consequence, InqB emerged as a non-classical logic. Here, we will show that the same system can also be regarded as an incarnation of the general approach described in the previous section. This means that we will first re-implement classical propositional logic based on support, and then extend this classical core by adding a new question-forming disjunction connective: as a consequence, InqB will now emerge as a conservative extension of classical propositional logic, CPL.

What this new perspective brings out is that InqB can be seen as adding expressive power to CPL: whereas classical logic can be regarded as a logic of pieces of information, inquisitive logic may be regarded as a logic of information types. In classical propositional logic, propositional formulas are viewed as statements, and thus, as describing specific pieces of information. In InqB, these formulas will be given an interpretation which is isomorphic to the standard one; in this way, the logic of statements will be preserved. However, this classical core will be augmented with questions—indeterminate formulas which describe proper information types. By allowing us to interpret such formulas, the support approach yields a logic that captures interesting logical relations, such as dependency, that are not covered by classical propositional logic.Footnote 12 \({^{,}}\) Footnote 13

For proofs of the technical results mentioned in this section, the reader is referred to Ciardelli (2009) and Ciardelli and Roelofsen (2011). Occasionally, proofs are provided for claims which lack a direct analogue in the literature.

Propositional information states

First, let us see how our informal talk of possible worlds and information states can be made precise in the propositional setting. Given a set \({\mathcal {P}}\) of propositional atoms, we will take a possible world to be a propositional valuation, that is, a function \(w:{\mathcal {P}}\rightarrow \{0,1\}\) which specifies which of the atoms are true. As a consequence, information states will be sets of propositional valuations. The set \(\omega \) containing all valuations represents the trivial state in which no information is present. At the opposite end of the spectrum, the empty set \(\emptyset \) represents the state of inconsistent information; non-empty states will be referred to as consistent states.

Support semantics for classical propositional logic

Next, let us see how we can give a support semantics for classical propositional logic. The set \({\mathcal {L}}_c\) of classical formulas is given by the following definition:

$$\begin{aligned} \varphi \;::=\;p\;|\;\bot \;|\;\varphi \wedge \varphi \;|\;\varphi \rightarrow \varphi \end{aligned}$$

That is, we take classical formulas to be built up from atoms and the falsum constant \(\bot \) by means of the primitive connectives \(\wedge \) and \(\rightarrow \). We take negation and disjunction to be defined from these primitive connectives as follows:

$$\begin{aligned} \lnot \varphi \;:=\;\varphi \rightarrow \bot&\qquad \qquad&\varphi \vee \psi \;:=\;\lnot (\lnot \varphi \wedge \lnot \psi ) \end{aligned}$$

Thus, our classical language is just a standard propositional language. However, we will give the semantics of this language not via a recursive definition of the relation of truth with respect to a world, but instead via a recursive definition of the relation of support with respect to information states.

Definition 3

(Support)

  • \(s\models p\iff w(p)=1\) for all \(w\in s\)

  • \(s\models \bot \iff s=\emptyset \)

  • \(s\models \varphi \wedge \psi \iff s\models \varphi \) and \(s\models \psi \)

  • \(s\models \varphi \rightarrow \psi \iff \) for all \(t\subseteq s\), \(t\models \varphi \) implies \(t\models \psi \)

Keeping in mind that we read support as capturing when a formula is settled in an information state, the clauses may be read as follows. An atom p is settled in s in case it is true at every world in s. The falsum constant \(\bot \) is only settled in the inconsistent state, \(\emptyset \). A conjunction is settled in s in case both conjuncts are. Finally, implication internalizes entailment in the way described in the previous section: an implication is settled in s in case the antecedent entails the consequent relative to s; that is, in case enhancing s so as to settle the antecedent is guaranteed to lead to a state which also settles the consequent.
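Definition 3 translates directly into code. The sketch below (our own encoding: formulas as nested tuples, worlds as pairs of truth values for two atoms p and q) implements the four support clauses together with the defined connectives:

```python
from itertools import combinations

OMEGA = [(p, q) for p in (0, 1) for q in (0, 1)]   # all valuations for atoms p, q
P, Q, BOT = ('atom', 0), ('atom', 1), ('bot',)

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supports(s, phi):
    op = phi[0]
    if op == 'atom':                       # settled iff true at every world in s
        return all(w[phi[1]] == 1 for w in s)
    if op == 'bot':                        # settled only in the inconsistent state
        return len(s) == 0
    if op == 'and':
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == 'imp':                        # antecedent entails consequent relative to s
        return all(supports(t, phi[2]) for t in subsets(s) if supports(t, phi[1]))
    raise ValueError(phi)

def neg(a):                                # defined connectives
    return ('imp', a, BOT)

def vee(a, b):                             # classical disjunction via De Morgan
    return neg(('and', neg(a), neg(b)))
```

For instance, the singleton state containing only the world (1, 1) supports \(p\wedge q\), while the trivial state OMEGA does not support p.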

We will say that a state s is compatible with a formula \(\varphi \), notation \(s\between \varphi \), in case s can be enhanced consistently to a state that supports \(\varphi \):

$$\begin{aligned} s\between \varphi \iff t\models \varphi \text { for some consistent } t\subseteq s \end{aligned}$$

Using this notion, the derived semantic clauses for negation and disjunction may be expressed as follows.

  • \(s\models \lnot \varphi \iff \) it is not the case that \(s\between \varphi \)

  • \(s\models \varphi \vee \psi \iff \) for all consistent \(t\subseteq s\), \(\,t\between \varphi \,\) or \(\,t\between \psi \)

That is, a negation \(\lnot \varphi \) is settled in s if s is incompatible with \(\varphi \), i.e., if s cannot be consistently enhanced to support \(\varphi \). As for disjunction, \(\varphi \vee \psi \) is settled in s if any consistent enhancement of s is bound to be compatible with either \(\varphi \) or \(\psi \)—that is, if s cannot be consistently enhanced to a state that rules out both \(\varphi \) and \(\psi \).

Now we can verify inductively that what we gave is a support-based formulation of classical propositional logic. Indeed, support conditions are related to standard truth conditions in accordance with Relation 1: a classical formula \(\varphi \in {\mathcal {L}}_c\) is supported by a state s if and only if it is true at every world in s.

Proposition 2

(Support conditions and truth conditions) For any state \(s\subseteq \omega \) and any classical formula \(\varphi \in {\mathcal {L}}_c\):

$$\begin{aligned} \;s\models \varphi \iff \text {for all }w\in s, \;w\models \varphi \text { in classical propositional logic} \end{aligned}$$
Fig. 2
figure 2

The alternatives for some classical formulas. 11 represents a world where p and q are both true, 10 a world where p is true and q is false, etc. As ensured by Proposition 2, each formula has a unique alternative, which coincides with its truth-set

Writing \(|\varphi |\) for the truth-set of \(\varphi \), i.e., the set of valuations at which \(\varphi \) is true, this property can be rewritten as \([\varphi ]=\{|\varphi |\}^{\downarrow }\). Thus, any classical formula is determinate in the sense of Definition 1, and may thus be regarded as a statement. An immediate consequence of this proposition is that a classical formula always has a unique alternative, which coincides with its truth-set. This is illustrated by Fig. 2.

Proposition 2 shows that the support semantics we have just given for our classical language is equivalent to the standard truth-conditional semantics: the two are inter-definable, and moreover, it is easy to see that they give rise to the same notion of entailment. Thus, what we have provided is simply an alternative semantic foundation for classical propositional logic.
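For a small language over two atoms, the equivalence stated in Proposition 2 can be checked exhaustively. The following sketch (same hypothetical tuple encoding as above) compares the support clauses with standard truth-conditional evaluation for a handful of classical formulas:

```python
from itertools import combinations

OMEGA = [(p, q) for p in (0, 1) for q in (0, 1)]   # worlds as (p, q) valuations
P, Q, BOT = ('atom', 0), ('atom', 1), ('bot',)

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supports(s, phi):
    op = phi[0]
    if op == 'atom':
        return all(w[phi[1]] == 1 for w in s)
    if op == 'bot':
        return len(s) == 0
    if op == 'and':
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == 'imp':
        return all(supports(t, phi[2]) for t in subsets(s) if supports(t, phi[1]))
    raise ValueError(phi)

def neg(a): return ('imp', a, BOT)
def vee(a, b): return neg(('and', neg(a), neg(b)))

def true_at(w, phi):                       # standard classical truth at a world
    op = phi[0]
    if op == 'atom': return w[phi[1]] == 1
    if op == 'bot':  return False
    if op == 'and':  return true_at(w, phi[1]) and true_at(w, phi[2])
    if op == 'imp':  return (not true_at(w, phi[1])) or true_at(w, phi[2])

FORMULAS = [P, ('and', P, Q), ('imp', P, Q), neg(P), vee(P, Q)]

# Proposition 2: s supports phi iff phi is true at every world of s.
prop2_holds = all(supports(s, phi) == all(true_at(w, phi) for w in s)
                  for phi in FORMULAS for s in subsets(OMEGA))
```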

Adding questions to propositional logic

Now that we have re-implemented classical propositional logic in terms of support, we can exploit the extra richness of the support framework over the truth-conditional framework to extend classical propositional logic with questions. We will do this by enriching our classical language with a new connective, called inquisitive disjunction. Thus, the full language \({\mathcal {L}}\) of our system is generated from atoms and \(\bot \) by means of the connectives \(\wedge \), \(\rightarrow \), and inquisitive disjunction.

Intuitively, we may regard the inquisitive disjunction of \(\varphi \) and \(\psi \) as standing for the question whether \(\varphi \) or \(\psi \), which is settled just in case one among \(\varphi \) and \(\psi \) is settled.Footnote 14

Definition 4

(Support for inquisitive disjunction) A state s supports an inquisitive disjunction of \(\varphi \) and \(\psi \) if and only if \(s\models \varphi \) or \(s\models \psi \).

It is easy to see that the resulting support relation satisfies the persistence property discussed in the previous section: if a formula is supported by a state s, then it remains supported by any enhancement of s. Moreover, support satisfies the empty state property, stating that the inconsistent information state supports any formula whatsoever. This is a semantic version of the familiar ex falso quodlibet principle.

Proposition 3

(Properties of the support relation)

  • Persistence property: if \(s\models \varphi \) and \(t\subseteq s\), then \(t\models \varphi \)

  • Empty state property: \(\emptyset \models \varphi \) for all \(\varphi \in {\mathcal {L}}\)
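Both properties of Proposition 3 can be verified exhaustively for small cases. The sketch below (our tuple encoding over two atoms, with a clause for inquisitive disjunction written as the tag 'idis') checks persistence and the empty state property for a few formulas:

```python
from itertools import combinations

OMEGA = [(p, q) for p in (0, 1) for q in (0, 1)]   # worlds as (p, q) valuations
P, Q, BOT = ('atom', 0), ('atom', 1), ('bot',)

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supports(s, phi):
    op = phi[0]
    if op == 'atom':
        return all(w[phi[1]] == 1 for w in s)
    if op == 'bot':
        return len(s) == 0
    if op == 'and':
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == 'imp':
        return all(supports(t, phi[2]) for t in subsets(s) if supports(t, phi[1]))
    if op == 'idis':                   # inquisitive disjunction: either disjunct settled
        return supports(s, phi[1]) or supports(s, phi[2])
    raise ValueError(phi)

def neg(a): return ('imp', a, BOT)
def qmark(a): return ('idis', a, neg(a))   # the polar question ?a

FORMULAS = [P, qmark(P), ('imp', P, qmark(Q)), ('imp', qmark(P), qmark(Q))]

# Persistence: support is preserved under enhancement.
persistence = all(not supports(s, phi) or supports(t, phi)
                  for phi in FORMULAS
                  for s in subsets(OMEGA) for t in subsets(s))

# Empty state property: the inconsistent state supports everything.
empty_state = all(supports(frozenset(), phi) for phi in FORMULAS)
```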

In the previous section, we have identified a fundamental semantic difference between statements and questions: statements are determinate, i.e., admit a singleton generator, while questions are indeterminate, i.e., they only have non-singleton generators. We can then categorize the formulas of our formal language as being statements or questions according to this characterization.Footnote 15 \(^,\) Footnote 16

Definition 5

(Statements and questions)

  • A formula \(\varphi \in {\mathcal {L}}\) is called a statement iff it is determinate.

  • A formula \(\varphi \in {\mathcal {L}}\) is called a question iff it is indeterminate.

Henceforth, we take the meta-variables \(\alpha ,\beta \) to range over statements, \(\mu ,\nu \) to range over questions, and \(\varphi ,\psi ,\chi \) to range over arbitrary formulas.

It is easy to see that formulas in \({\mathcal {L}}\) are always normal, that is, the set of alternatives for a formula is always a generator for the formula’s support-set.

Proposition 4

(Normality) For all \(\varphi \in {\mathcal {L}}\), \(\;[\varphi ]=\textsc {Alt}(\varphi )^\downarrow \)

This means that we may regard a formula \(\varphi \) in a canonical way as denoting the type of information \(\textsc {Alt}(\varphi )\). This allows us to give a very visual characterization of the classes of statements and questions.

Proposition 5

  • \(\varphi \) is a statement iff it has a unique alternative.

  • \(\varphi \) is a question iff it has two or more alternatives.
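This characterization can be tested computationally. The sketch below (our tuple encoding) computes the alternatives for a formula as the maximal supporting states, and confirms that a classical disjunction has a single alternative, its truth set, while a polar question has two:

```python
from itertools import combinations

OMEGA = [(p, q) for p in (0, 1) for q in (0, 1)]   # worlds as (p, q) valuations
P, Q, BOT = ('atom', 0), ('atom', 1), ('bot',)

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supports(s, phi):
    op = phi[0]
    if op == 'atom':
        return all(w[phi[1]] == 1 for w in s)
    if op == 'bot':
        return len(s) == 0
    if op == 'and':
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == 'imp':
        return all(supports(t, phi[2]) for t in subsets(s) if supports(t, phi[1]))
    if op == 'idis':
        return supports(s, phi[1]) or supports(s, phi[2])
    raise ValueError(phi)

def neg(a): return ('imp', a, BOT)
def vee(a, b): return neg(('and', neg(a), neg(b)))
def qmark(a): return ('idis', a, neg(a))

def alternatives(phi):
    # Alt(phi): supporting states with no supporting proper superset.
    sup = [s for s in subsets(OMEGA) if supports(s, phi)]
    return [s for s in sup if not any(s < t for t in sup)]

ALT_CLASSICAL = alternatives(vee(P, Q))    # a statement: one alternative, |p or q|
ALT_POLAR = alternatives(qmark(P))         # a question: two alternatives, |p| and |not p|
```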

Let us first focus on the class of statements. An interesting observation is that statements may be characterized as formulas whose semantics is completely determined at the level of singleton states.

Proposition 6

\(\varphi \) is a statement iff the following holds for all s:

$$\begin{aligned} s\models \varphi \iff \{w\}\models \varphi \text { for all }w\in s \end{aligned}$$

We may generalize the notion of truth from classical formulas to the whole language \({\mathcal {L}}\) by defining it in terms of support at singleton states: \(\varphi \) is true at w if it is supported at \(\{w\}\). Then, the previous proposition says that statements are precisely those formulas whose semantics is completely determined in terms of truth conditions.

Next, recall that Proposition 2 guarantees that any classical formula is a statement. Conversely, any statement is equivalent to a classical formula, which shows that, by adding inquisitive disjunction to CPL, we are enabling our logic to express questions, but not to express new statements.Footnote 17 To prove this, we can first associate with any formula a corresponding classical formula.

Definition 6

(Classical variant of a formula) The classical variant \(\varphi ^{cl}\) of a formula \(\varphi \) is obtained from \(\varphi \) by replacing all occurrences of inquisitive disjunction by classical disjunction.

Now, for any formula \(\varphi \), its classical variant \(\varphi ^{cl}\) is a classical formula having as its unique alternative the union of all the alternatives for \(\varphi \).

Proposition 7

For any \(\varphi \), \(\textsc {Alt}(\varphi ^{cl})=\{\bigcup \textsc {Alt}(\varphi )\}\).

It follows from this proposition that a formula \(\varphi \) is a statement iff it is equivalent to its classical variant \(\varphi ^{cl}\). As a consequence, we get the following corollary.

Corollary 1

Any statement is equivalent to a classical formula.

Another important observation is that, as a consequence of the semantic clause for negation, \(\lnot \varphi \) is always a statement. In particular, the double negation of a formula is always a statement, which is equivalent to the classical variant of the formula.

Proposition 8

\(\lnot \lnot \varphi \equiv \varphi ^{cl}\)

As a consequence, statements can also be characterized as being precisely those formulas which are equivalent to their own double negation. In other words, the double negation law is the hallmark of statements.
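Propositions 8 and 9 can likewise be checked by brute force. In the sketch below (our tuple encoding), the classical variant is computed by replacing inquisitive by classical disjunction, and the equivalence \(\lnot \lnot \varphi \equiv \varphi ^{cl}\) is verified exhaustively for a few formulas:

```python
from itertools import combinations

OMEGA = [(p, q) for p in (0, 1) for q in (0, 1)]   # worlds as (p, q) valuations
P, Q, BOT = ('atom', 0), ('atom', 1), ('bot',)

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supports(s, phi):
    op = phi[0]
    if op == 'atom':
        return all(w[phi[1]] == 1 for w in s)
    if op == 'bot':
        return len(s) == 0
    if op == 'and':
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == 'imp':
        return all(supports(t, phi[2]) for t in subsets(s) if supports(t, phi[1]))
    if op == 'idis':
        return supports(s, phi[1]) or supports(s, phi[2])
    raise ValueError(phi)

def neg(a): return ('imp', a, BOT)
def vee(a, b): return neg(('and', neg(a), neg(b)))
def qmark(a): return ('idis', a, neg(a))

def classical_variant(phi):
    op = phi[0]
    if op in ('atom', 'bot'):
        return phi
    if op == 'idis':                   # replace inquisitive by classical disjunction
        return vee(classical_variant(phi[1]), classical_variant(phi[2]))
    return (op, classical_variant(phi[1]), classical_variant(phi[2]))

TESTS = [qmark(P), ('and', P, Q), ('imp', P, qmark(Q))]
dne_equals_cl = all(supports(s, neg(neg(phi))) == supports(s, classical_variant(phi))
                    for phi in TESTS for s in subsets(OMEGA))
```

Note that the question ?p itself is not equivalent to its double negation: \(\lnot \lnot {?p}\) is supported at the trivial state, while ?p is not.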

Proposition 9

\(\varphi \equiv \lnot \lnot \varphi \iff \varphi \) is a statement.

Let us now turn our attention to questions. First, notice that, if \(\alpha \in {\mathcal {L}}_{c}\), the polar question whether \(\alpha \), which can be settled either by establishing \(\alpha \), or by establishing \(\lnot \alpha \), can be expressed as the inquisitive disjunction of \(\alpha \) and \(\lnot \alpha \). It will be convenient to abbreviate this formula as \(?\alpha \).

Definition 7

(Question mark operator) \(?\alpha \) is defined as the inquisitive disjunction of \(\alpha \) and \(\lnot \alpha \).

An attractive feature of the system InqB is the following: since the semantics of the connectives \(\wedge \) and \(\rightarrow \) is given in terms of support, these connectives can be used to combine not only statements, but also questions. In this way, the standard truth-conditional operations of conjunction and implication are extended to questions in a natural way. To see the effect of these operators when applied to questions, consider first conjunction: applying \(\wedge \) to two polar questions, such as ?p and ?q, results in a question \({?p}\wedge {?q}\) which is settled whenever both conjuncts are settled, as illustrated in Fig. 3c.

For implication, consider first the formula \(p\rightarrow {?q}\): as illustrated in Fig. 3d, this formula has two alternatives, corresponding to \(p\rightarrow q\) and \(p\rightarrow \lnot q\). Thus, the implication \(p\rightarrow {?q}\) is a question which is settled whenever ?q is settled conditionally on the assumption that p. This corresponds to a natural language question like (2).Footnote 18

figure d
Fig. 3
figure 3

The alternatives for some questions in InqB

Finally, consider the implication \({?p}\rightarrow {?q}\), having questions both as antecedent and as consequent: this formula is supported in s if \({?p}\models _s{?q}\), that is, in case ?p determines ?q relative to s. Thus, \({?p}\rightarrow {?q}\) is settled in a state s in case s contains enough information to establish a dependency of ?q on ?p. As shown in Fig. 3e, this formula is a question with four alternatives, corresponding to the ways in which such a dependency may obtainFootnote 19:

  1. \((p\rightarrow q)\wedge (\lnot p\rightarrow q)\equiv {q}\)

  2. \((p\rightarrow q)\wedge (\lnot p\rightarrow \lnot q)\equiv q\leftrightarrow p\)

  3. \((p\rightarrow \lnot q)\wedge (\lnot p\rightarrow q)\equiv q\leftrightarrow \lnot p\)

  4. \((p\rightarrow \lnot q)\wedge (\lnot p\rightarrow \lnot q)\equiv \lnot q\)
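The four alternatives just listed can be computed directly. In the sketch below (our tuple encoding, with worlds written as pairs of truth values for p and q), the alternatives for \({?p}\rightarrow {?q}\) come out as exactly the four sets corresponding to q, \(q\leftrightarrow p\), \(q\leftrightarrow \lnot p\), and \(\lnot q\):

```python
from itertools import combinations

OMEGA = [(p, q) for p in (0, 1) for q in (0, 1)]   # worlds as (p, q) valuations
P, Q, BOT = ('atom', 0), ('atom', 1), ('bot',)

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supports(s, phi):
    op = phi[0]
    if op == 'atom':
        return all(w[phi[1]] == 1 for w in s)
    if op == 'bot':
        return len(s) == 0
    if op == 'and':
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == 'imp':
        return all(supports(t, phi[2]) for t in subsets(s) if supports(t, phi[1]))
    if op == 'idis':
        return supports(s, phi[1]) or supports(s, phi[2])
    raise ValueError(phi)

def neg(a): return ('imp', a, BOT)
def qmark(a): return ('idis', a, neg(a))

def alternatives(phi):
    sup = [s for s in subsets(OMEGA) if supports(s, phi)]
    return [s for s in sup if not any(s < t for t in sup)]

DEP = ('imp', qmark(P), qmark(Q))      # the dependency question ?p -> ?q
ALTS = alternatives(DEP)
```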

Resolutions and inquisitive normal form

An important feature of the system InqB is that we can compute, recursively on the structure of a formula \(\varphi \), a set of classical formulas which can be taken to name the different pieces of information of type \(\varphi \). We refer to these formulas as the resolutions of \(\varphi \).

Definition 8

(Resolutions)

  • \({\mathcal {R}}(p)=\{p\}\)

  • \({\mathcal {R}}(\bot )=\{\bot \}\)

  • \({\mathcal {R}}(\varphi \wedge \psi )=\{\alpha \wedge \beta \,|\,\alpha \in {\mathcal {R}}(\varphi )\text { and }\beta \in {\mathcal {R}}(\psi )\}\)

  • \({\mathcal {R}}\) of an inquisitive disjunction is the union of the resolutions of the two disjuncts: \({\mathcal {R}}(\varphi )\cup {\mathcal {R}}(\psi )\)

  • \({\mathcal {R}}(\varphi \rightarrow \psi )=\{\bigwedge _{\alpha \in {\mathcal {R}}(\varphi )}(\alpha \rightarrow f(\alpha ))\,|\,f:{\mathcal {R}}(\varphi )\rightarrow {\mathcal {R}}(\psi )\}\)

Notice that resolutions are by definition classical formulas. Moreover, it is easy to show by induction that any classical formula is the only resolution of itself.

Proposition 10

If \(\alpha \in {\mathcal {L}}_c\), then \({\mathcal {R}}(\alpha )=\{\alpha \}\).

On the other hand, questions always have multiple resolutions: for instance, we have \({\mathcal {R}}({?p})=\{p,\lnot p\}\), and \({\mathcal {R}}(p\rightarrow {?q})=\{p\rightarrow q,\, p\rightarrow \lnot q\}\).
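Definition 8 can be implemented as a short recursive function. The sketch below (our tuple encoding of formulas, with inquisitive disjunction tagged 'idis') reproduces the examples just mentioned:

```python
from itertools import product

P, Q, BOT = ('atom', 0), ('atom', 1), ('bot',)

def neg(a): return ('imp', a, BOT)
def qmark(a): return ('idis', a, neg(a))   # the polar question ?a

def resolutions(phi):
    op = phi[0]
    if op in ('atom', 'bot'):
        return [phi]                                    # classical formulas resolve themselves
    if op == 'and':
        return [('and', a, b) for a in resolutions(phi[1]) for b in resolutions(phi[2])]
    if op == 'idis':                                    # a resolution of either disjunct
        return resolutions(phi[1]) + resolutions(phi[2])
    if op == 'imp':                                     # one conjunct per choice function f
        ra = resolutions(phi[1])
        out = []
        for bs in product(resolutions(phi[2]), repeat=len(ra)):
            conj = ('imp', ra[0], bs[0])
            for a, b in zip(ra[1:], bs[1:]):
                conj = ('and', conj, ('imp', a, b))
            out.append(conj)
        return out
    raise ValueError(phi)
```

For instance, \({\mathcal {R}}({?p})\) yields the two formulas p and \(\lnot p\), and \({\mathcal {R}}(p\rightarrow {?q})\) yields \(p\rightarrow q\) and \(p\rightarrow \lnot q\).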

The key property of resolutions is given by the following Proposition: to settle the formula \(\varphi \) is to establish that \(\alpha \) is true, for some resolution \(\alpha \) of \(\varphi \).

Proposition 11

For any formula \(\varphi \in {\mathcal {L}}\) and any state \(s\subseteq \omega \):

$$\begin{aligned} \;s\models \varphi \iff s\models \alpha \text { for some }\alpha \in {\mathcal {R}}(\varphi ) \end{aligned}$$

Since resolutions are classical formulas, \(s\models \alpha \) amounts to \(s\subseteq |\alpha |\). Thus, the above proposition can be re-stated as follows:

$$\begin{aligned} {[}\varphi ]=\{|\alpha |\,|\,\alpha \in {\mathcal {R}}(\varphi )\}^{\downarrow } \end{aligned}$$

This shows that a formula \(\varphi \in {\mathcal {L}}\) can always be viewed as denoting a type of information whose elements are named by the resolutions of \(\varphi \).Footnote 20
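Proposition 11 can be verified exhaustively over a two-atom language. The sketch below (our tuple encoding) checks that a state supports a formula exactly when it supports one of its resolutions:

```python
from itertools import combinations, product

OMEGA = [(p, q) for p in (0, 1) for q in (0, 1)]   # worlds as (p, q) valuations
P, Q, BOT = ('atom', 0), ('atom', 1), ('bot',)

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supports(s, phi):
    op = phi[0]
    if op == 'atom':
        return all(w[phi[1]] == 1 for w in s)
    if op == 'bot':
        return len(s) == 0
    if op == 'and':
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == 'imp':
        return all(supports(t, phi[2]) for t in subsets(s) if supports(t, phi[1]))
    if op == 'idis':
        return supports(s, phi[1]) or supports(s, phi[2])
    raise ValueError(phi)

def neg(a): return ('imp', a, BOT)
def qmark(a): return ('idis', a, neg(a))

def resolutions(phi):
    op = phi[0]
    if op in ('atom', 'bot'):
        return [phi]
    if op == 'and':
        return [('and', a, b) for a in resolutions(phi[1]) for b in resolutions(phi[2])]
    if op == 'idis':
        return resolutions(phi[1]) + resolutions(phi[2])
    if op == 'imp':
        ra = resolutions(phi[1])
        out = []
        for bs in product(resolutions(phi[2]), repeat=len(ra)):
            conj = ('imp', ra[0], bs[0])
            for a, b in zip(ra[1:], bs[1:]):
                conj = ('and', conj, ('imp', a, b))
            out.append(conj)
        return out
    raise ValueError(phi)

TESTS = [qmark(P), ('imp', P, qmark(Q)), ('imp', qmark(P), qmark(Q))]

# Proposition 11: s |= phi iff s |= alpha for some resolution alpha of phi.
prop11_holds = all(supports(s, phi) == any(supports(s, a) for a in resolutions(phi))
                   for phi in TESTS for s in subsets(OMEGA))
```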

Moreover, as a corollary of the above proposition we have the following normal form result for InqB, which shows that any formula is equivalent to an inquisitive disjunction of classical formulas.

Proposition 12

(Inquisitive normal form) Let \(\varphi \in {\mathcal {L}}\) and let \({\mathcal {R}}(\varphi )=\{\alpha _1,\dots ,\alpha _n\}\). Then \(\varphi \) is equivalent to the inquisitive disjunction of the resolutions \(\alpha _1,\dots ,\alpha _n\).

It is interesting to remark that there is a close similarity between the inductive definition of resolutions that we gave, and the inductive definition of proofs in the Brouwer–Heyting–Kolmogorov (BHK) interpretation of intuitionistic logic. In this interpretation, a proof of a conjunction is a pair of two proofs, one for each conjunct; a proof of a disjunction is a proof of either disjunct; and a proof of an implication is a function that turns any proof of the antecedent into a proof of the consequent. Similarly, here a resolution of a conjunction is a conjunction of two resolutions, one for each conjunct; a resolution of an inquisitive disjunction is a resolution of either disjunct; and a resolution of an implication corresponds to a function from resolutions of the antecedent to resolutions of the consequent. The main difference between the two notions is that, unlike proofs in the BHK interpretation, resolutions are in turn formulas, that is, syntactic objects within the same language in which the original formula lives. Thus, we can look at the definition of resolutions as a sort of language-internal version of the BHK interpretation.

By means of the notion of resolutions we can also re-state the support conditions for an implication in an interesting way. To spell this out, we will introduce the notion of a dependence function.

Definition 9

(Dependence function)

  • A function \(f:{\mathcal {R}}(\varphi )\rightarrow {\mathcal {R}}(\psi )\) is called a dependence function from \(\varphi \) to \(\psi \) in a context s, notation \(f:\varphi \leadsto _s\psi \), in case for any \(\alpha \in {\mathcal {R}}(\varphi )\), \(\alpha \models _s f(\alpha )\).

  • A function \(f:{\mathcal {R}}(\varphi )\rightarrow {\mathcal {R}}(\psi )\) is called a logical dependence function from \(\varphi \) to \(\psi \), notation \(f:\varphi \leadsto \psi \), if it is a dependence function in any context.

Now, the resolutions of an implication \(\varphi \rightarrow \psi \) are statements of the form:

$$\begin{aligned} \gamma _f\,=\bigwedge _{{\alpha \in {\mathcal {R}}(\varphi )}}(\alpha \rightarrow f(\alpha )) \end{aligned}$$

for a function \(f:{\mathcal {R}}(\varphi )\rightarrow {\mathcal {R}}(\psi )\). It is easy to verify that the statement \(\gamma _f\) is supported in a state s if and only if f is a dependence function in s.

Proposition 13

Let \(f:{\mathcal {R}}(\varphi )\rightarrow {\mathcal {R}}(\psi )\). For any state s, \(\;s\models \gamma _f\iff f:\varphi \leadsto _s\psi \)

Now, Proposition 12 tells us that \(\varphi \rightarrow \psi \) is supported in a state s in case some formula \(\gamma _f\in {\mathcal {R}}(\varphi \rightarrow \psi )\) is supported. By the previous proposition, this holds if and only if there exists a dependence function \(f:\varphi \leadsto _{s}\psi \). We have thus obtained the following result about the support conditions for an implication.

Proposition 14

(Implication and dependence functions) \(s\models \varphi \rightarrow \psi \iff \) there exists some \(f:{\varphi \leadsto _s\psi }\)

This shows that a state supports an implication \(\varphi \rightarrow \psi \) iff it admits a dependence function from \(\varphi \) to \(\psi \), i.e., if there exists a systematic way to use the information available in s to infer from any given resolution of \(\varphi \) a corresponding resolution of \(\psi \).
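Proposition 14 can also be checked mechanically. The sketch below (our tuple encoding) searches for a dependence function from ?p to ?q in each state and compares the result with the support conditions for \({?p}\rightarrow {?q}\):

```python
from itertools import combinations, product

OMEGA = [(p, q) for p in (0, 1) for q in (0, 1)]   # worlds as (p, q) valuations
P, Q, BOT = ('atom', 0), ('atom', 1), ('bot',)

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supports(s, phi):
    op = phi[0]
    if op == 'atom':
        return all(w[phi[1]] == 1 for w in s)
    if op == 'bot':
        return len(s) == 0
    if op == 'and':
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == 'imp':
        return all(supports(t, phi[2]) for t in subsets(s) if supports(t, phi[1]))
    if op == 'idis':
        return supports(s, phi[1]) or supports(s, phi[2])
    raise ValueError(phi)

def neg(a): return ('imp', a, BOT)
def qmark(a): return ('idis', a, neg(a))

def resolutions(phi):
    op = phi[0]
    if op in ('atom', 'bot'):
        return [phi]
    if op == 'and':
        return [('and', a, b) for a in resolutions(phi[1]) for b in resolutions(phi[2])]
    if op == 'idis':
        return resolutions(phi[1]) + resolutions(phi[2])
    if op == 'imp':
        ra = resolutions(phi[1])
        out = []
        for bs in product(resolutions(phi[2]), repeat=len(ra)):
            conj = ('imp', ra[0], bs[0])
            for a, b in zip(ra[1:], bs[1:]):
                conj = ('and', conj, ('imp', a, b))
            out.append(conj)
        return out
    raise ValueError(phi)

def entails_s(alpha, beta, s):             # alpha |=_s beta
    return all(supports(t, beta) for t in subsets(s) if supports(t, alpha))

def has_dependence_function(phi, psi, s):
    # Is there an f : R(phi) -> R(psi) with alpha |=_s f(alpha) for every alpha?
    rp, rpsi = resolutions(phi), resolutions(psi)
    return any(all(entails_s(a, bs[i], s) for i, a in enumerate(rp))
               for bs in product(rpsi, repeat=len(rp)))

DEP = ('imp', qmark(P), qmark(Q))
prop14_holds = all(supports(s, DEP) == has_dependence_function(qmark(P), qmark(Q), s)
                   for s in subsets(OMEGA))
```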

Entailment and propositional dependencies

Now that we have set up a logical system encompassing both statements and questions, let us take a look at the relation of entailment which arises from it. As we expect, on the classical fragment of the language, this relation coincides with truth-conditional entailment in classical propositional logic.

Proposition 15

(Entailment among classical formulas is classical) Let \(\varGamma \cup \{\alpha \}\subseteq {\mathcal {L}}_c\). Then \(\varGamma \models \alpha \iff \varGamma \) entails \(\alpha \) in CPL.

In this precise sense, InqB is a conservative extension of classical propositional logic. This fact shows that nothing is lost in the move from CPL to InqB: anything that could be formalized in classical propositional logic can still be formalized in exactly the same way in InqB.Footnote 21 On the other hand, more things can be formalized in InqB than in CPL, since now we can also look at relations of entailment which involve questions.

Let us examine the various ways in which questions can participate in an entailment relation. First, recall that an entailment \(\alpha \models \mu \) from a statement to a question holds if \(\alpha \) logically resolves \(\mu \). As an illustration, we have \(p\wedge q\models {?p}\), but \(p\vee q\not \models {?p}\): the question ?p is resolved by the statement \(p\wedge q\), but not by the statement \(p\vee q\).
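These two judgments can be checked mechanically. The following sketch implements the support clauses of InqB by brute force over a two-atom logical space; the tuple encoding of formulas is my own, and classical disjunction and the polar question are defined, as in the text, from the primitive connectives.

```python
from itertools import combinations, product

# Formulas: atoms as strings, 'bot', and tuples ('and', a, b),
# ('impl', a, b), ('ivee', a, b) for inquisitive disjunction.
def neg(a): return ('impl', a, 'bot')
def vee(a, b): return neg(('and', neg(a), neg(b)))   # classical disjunction
def q(a): return ('ivee', a, neg(a))                 # polar question ?a

ATOMS = ('p', 'q')
# a world = frozenset of the atoms true at it; a state = frozenset of worlds
WORLDS = [frozenset(a for a, v in zip(ATOMS, vs) if v)
          for vs in product([True, False], repeat=len(ATOMS))]

def substates(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def supports(s, phi):
    if phi == 'bot':
        return len(s) == 0
    if isinstance(phi, str):                       # atom
        return all(phi in w for w in s)
    op, a, b = phi
    if op == 'and':
        return supports(s, a) and supports(s, b)
    if op == 'ivee':
        return supports(s, a) or supports(s, b)
    if op == 'impl':                               # quantify over substates
        return all(supports(t, b) for t in substates(s) if supports(t, a))

def entails(premises, conclusion):
    return all(supports(s, conclusion)
               for s in substates(frozenset(WORLDS))
               if all(supports(s, p) for p in premises))

print(entails([('and', 'p', 'q')], q('p')))   # True:  p ∧ q resolves ?p
print(entails([vee('p', 'q')], q('p')))       # False: p ∨ q does not
```

The second entailment fails because the state of all \(p\vee q\)-worlds supports the premise while leaving ?p unsettled.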

The following proposition says that a statement entails an inquisitive disjunction if and only if it entails a specific one of the disjuncts. This holds not only for logical entailment, but more generally for entailment relative to an arbitrary context. The purely logical case is obtained by setting \(s=\omega \).

Proposition 16

(Split property) \(\alpha \models _s\varphi \,{⩒}\,\psi \iff \alpha \models _s\varphi \) or \(\alpha \models _s\psi \)

Notice that by setting \(\alpha =\top \) we obtain that InqB has the disjunction property for \({⩒}\): whenever \(\varphi \,{⩒}\,\psi \) is logically valid, either \(\varphi \) or \(\psi \) must be valid as well. Also, by combining the Split Property with the normal form result given by Proposition 12, we obtain the following corollary.

Corollary 2

\(\alpha \models _s\mu \iff \alpha \models _s\beta \) for some \(\beta \in {\mathcal {R}}(\mu )\)

What this corollary says is that in any context s, a statement \(\alpha \) resolves a question \(\mu \) if and only if it entails some specific resolution of \(\mu \).

Let us now consider an entailment \(\mu \models \alpha \) from a question to a statement. We said that such an entailment captures the fact that \(\mu \) logically presupposes \(\alpha \), i.e., that \(\mu \) can only be resolved provided \(\alpha \) is true. As an illustration, we have \(p\,{⩒}\,q\models p\vee q\), but \({?p}\,\not \models p\vee q\): the statement \(p\vee q\) is logically presupposed by the question \(p\,{⩒}\,q\), since this question can only be truthfully resolved provided one of p and q is indeed true; but \(p\vee q\) is not presupposed by the question ?p, since ?p can be truthfully resolved even in worlds where \(p\vee q\) is false.

The following proposition shows that \(\alpha \) is entailed by \(\mu \) iff it is entailed by the classical disjunction \(\bigvee {\mathcal {R}}(\mu )\) of the resolutions to \(\mu \), which may be seen as capturing the question’s presupposition.

Proposition 17

\(\mu \models _s\alpha \iff \bigvee {\mathcal {R}}(\mu )\models _s\alpha \)

Finally, consider the most interesting case from our perspective, namely, entailment between questions. We saw in Sect. 2.4 that \(\mu \models \nu \) captures the fact that \(\nu \) is logically determined by \(\mu \). Moreover, we saw in Sect. 2.6 that adding a statement \(\alpha \) as assumption, \(\alpha ,\mu \models \nu \) captures the fact that \(\mu \) determines \(\nu \) in the context described by \(\alpha \):

$$\begin{aligned} \alpha ,\,\mu \models \nu \iff \mu \models _{|\alpha |}\nu \end{aligned}$$

Of course, things are similar when we have several statements and questions as premises. For an illustration of how propositional dependencies are captured as entailment relations in InqB, let us formalize our initial example in the system. We will make use of four propositional atoms:

  • \(s_1\): the patient has symptom \(S_1\);

  • \(s_2\): the patient has symptom \(S_2\);

  • \(\,g\;\): the patient is in good physical condition;

  • \(\;t\;\): the treatment is prescribed.

Now the protocol of our hospital is encoded by the following classical formula:

$$\begin{aligned} \gamma \;:=\;t\;\leftrightarrow \;{s_2}\vee ({s_1}\wedge g) \end{aligned}$$

The question \(\mu _1\) of what symptoms, out of \(S_1\) and \(S_2\), the patient presents is captured by the formula \(?s_1\wedge {?s_2}\). The question \(\mu _2\) of whether the patient is in good physical condition is captured by the formula ?g, and the question \(\nu \) of whether the treatment is prescribed is captured by the formula ?t. Thus, the dependency which we observed to hold in our initial example is captured by the following entailment, which can indeed be checked to be valid.

$$\begin{aligned} \gamma ,\;\;{?s_1}\wedge {?s_2},\;\;{?g}\;\;\models \;\;{?t} \end{aligned}$$

This example illustrates how the relation of entailment in InqB captures not only the standard relation of consequence among propositional statements, which is treated as in classical logic, but also other interesting logical relations—in particular, the relation of dependency among propositional questions.
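The validity of this entailment can be verified by brute force. The sketch below (with my own encoding of formulas) enumerates all states over the four atoms, using the fact noted in the text that a classical formula is supported by a state just in case it is true at every world of the state; it also confirms that the premise ?g cannot be dropped.

```python
from functools import lru_cache
from itertools import combinations, product

ATOMS = ('s1', 's2', 'g', 't')
# a world = frozenset of the atoms true at it
WORLDS = tuple(frozenset(a for a, v in zip(ATOMS, vs) if v)
               for vs in product([True, False], repeat=len(ATOMS)))

def truth(w, phi):
    """Classical truth at a world; formulas are atoms or tuples."""
    if isinstance(phi, str):
        return phi in w
    op = phi[0]
    if op == 'not': return not truth(w, phi[1])
    if op == 'and': return truth(w, phi[1]) and truth(w, phi[2])
    if op == 'or':  return truth(w, phi[1]) or truth(w, phi[2])
    if op == 'iff': return truth(w, phi[1]) == truth(w, phi[2])

@lru_cache(maxsize=None)
def truth_set(phi):
    return frozenset(w for w in WORLDS if truth(w, phi))

def supports(s, phi):
    """Support for the fragment we need: a classical formula is supported
    iff true at every world of the state; a polar question ('?', alpha)
    iff alpha is constant on the state; 'qand' conjoins."""
    if isinstance(phi, tuple) and phi[0] == 'qand':
        return supports(s, phi[1]) and supports(s, phi[2])
    if isinstance(phi, tuple) and phi[0] == '?':
        t = truth_set(phi[1])
        return s <= t or s.isdisjoint(t)
    return s <= truth_set(phi)

def entails(premises, conclusion):
    states = (frozenset(c) for r in range(len(WORLDS) + 1)
              for c in combinations(WORLDS, r))
    return all(supports(s, conclusion) for s in states
               if all(supports(s, p) for p in premises))

gamma = ('iff', 't', ('or', 's2', ('and', 's1', 'g')))
mu1 = ('qand', ('?', 's1'), ('?', 's2'))   # which symptoms?
mu2 = ('?', 'g')                           # in good physical condition?
nu  = ('?', 't')                           # is the treatment prescribed?

print(entails([gamma, mu1, mu2], nu))   # True: the dependency holds
print(entails([gamma, mu1], nu))        # False: the premise ?g is needed
```

The failing case is witnessed, e.g., by a state containing a world where the patient has only \(S_1\) and is in good condition and one where the patient has only \(S_1\) and is not: both satisfy \(\gamma \) and agree on the symptoms, yet they disagree about t.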

In the next section we are going to see that the connection between entailment and dependency has a proof-theoretic counterpart: since entailments involving questions capture dependencies, by making inferences with questions we can provide formal proofs of dependencies; moreover, a proof of a dependency always has an interesting kind of constructive content, namely, it encodes an algorithm whereby the dependency can be effectively computed.

Reasoning with questions

In this section we turn our attention to the proof-theoretic side of logic, and discuss the novelties introduced by letting questions take part in logical proofs. In Sect. 4.1, we provide a sound and complete proof system for InqB, showing how questions can be handled in inferences in the specific setting of propositional logic. In Sect. 4.2, we look at the computational content of proofs involving questions: we will show that such proofs can be viewed as encoding programs that turn information of the types denoted by the assumptions into information of the type denoted by the conclusion. Finally, in Sect. 4.3 we zoom out from the specifics of our proof system, and turn to more general considerations concerning the role of questions in logical proofs, and the significance of inferential moves like making an assumption or drawing a conclusion when these moves are applied to questions.

Fig. 4
figure 4

A sound and complete natural deduction system for InqB. The variables \(\varphi ,\psi ,\) and \(\chi \) range over arbitrary formulas, while the variable \(\alpha \) ranges over classical formulas

A natural deduction system for InqB

In this subsection, a sound and complete proof system for InqB is described. Unlike the systems provided by Ciardelli and Roelofsen (2011), which are Hilbert-style, here we will work with a natural deduction system. This allows for more insightful formal proofs, providing a better grasp of the significance of reasoning with questions. Since it is straightforward to adapt the techniques in Ciardelli and Roelofsen (2011) to this system, we will not provide a proof of completeness. Instead, we will focus on an aspect which is not discussed in previous work: the conceptual significance of the inference rules, and the content of inquisitive proofs as a whole.

The inference rules of the system are listed in Fig. 4, where the variables \(\varphi ,\psi ,\) and \(\chi \) range over all formulas, while \(\alpha \) is restricted to classical formulas. As customary, we refer to the introduction rule for a connective \(\circ \) as \((\circ \textsf {i})\), and to the elimination rule as \((\circ \textsf {e})\). We write \(P:\varPhi \vdash \psi \) to mean that P is a proof whose set of undischarged assumptions is included in \(\varPhi \) and whose conclusion is \(\psi \), and we write \(\varPhi \vdash \psi \) to mean that a proof \(P:\varPhi \vdash \psi \) exists. Finally, we say that two formulas \(\varphi \) and \(\psi \) are provably equivalent, notation \(\varphi \dashv \vdash \psi \), in case \(\varphi \vdash \psi \) and \(\psi \vdash \varphi \). Let us comment briefly on each of the rules of this system.

Conjunction

Conjunction is governed by the standard inference rules. The soundness of these rules corresponds to the following feature of conjunction in InqB: a set of assumptions entails a conjunction iff it entails both conjuncts.

Proposition 18

\(\varPhi \models \varphi \wedge \psi \iff \varPhi \models \varphi \text { and }\varPhi \models \psi \)

These rules are not restricted to classical formulas: conjunctive questions like \({?p}\wedge { ?q}\) can be handled in inferences just like standard conjunctions.

Implication

Implication is also governed by the standard inference rules. The soundness of these rules corresponds to the following proposition, which captures the tight relation existing between implication and entailment.

Proposition 19

\(\varPhi \models \varphi \rightarrow \psi \iff \varPhi ,\varphi \models \psi \)

Again, these rules are not restricted to classical formulas: implications involving questions (including implications which capture dependencies, like \({?p}\rightarrow {?q}\)) can also be handled by means of the standard implication rules.

Falsum

As usual, \(\bot \) has no introduction rule, and can be eliminated to infer any formula. This corresponds to the fact that we have \(\bot \models \varphi \) for all formulas \(\varphi \), which in turn is a consequence of the fact that the inconsistent state \(\emptyset \) supports every formula.

Negation

Since \(\lnot \varphi \) is defined as \(\varphi \rightarrow \bot \), the usual intuitionistic rules for negation, given in Fig. 5, follow as particular cases of the rules for implication.

Inquisitive disjunction

Inquisitive disjunction is governed by the standard inference rules for disjunction. The soundness of these rules corresponds to the following fact, which follows from the support clause for \({⩒}\).

Proposition 20

\(\varPhi ,\varphi \,{⩒}\,\psi \models \chi \iff \varPhi ,\varphi \models \chi \,\) and \(\,\varPhi ,\psi \models \chi \).

Classical disjunction

Figure 5 shows the derived rules for \(\vee \). While the introduction rule is the standard one, the elimination rule is restricted to conclusions that are classical formulas. Without this restriction, the rule is not sound. E.g., we have \(p\models {?p}\) and \(\lnot p\models {?p}\), but \(p\vee \lnot p\not \models {?p}\): the question ?p is logically resolved by the statements p and \(\lnot p\), but not by the tautology \(p\vee \lnot p\).
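The counterexample can be replayed in a few lines over a one-atom logical space (a sketch, with classical formulas encoded as truth functions on worlds):

```python
# One atom p: a world is just p's truth value, a state a set of worlds.
WORLDS = (True, False)
STATES = [frozenset(s) for s in [(), (True,), (False,), WORLDS]]

def settles_stmt(s, f):
    """A classical formula is supported iff true at every world of the state."""
    return all(f(w) for w in s)

def settles_q(s, f):
    """The polar question ?f is supported iff f is constant on the state."""
    return settles_stmt(s, f) or settles_stmt(s, lambda w: not f(w))

p = lambda w: w
taut = lambda w: w or not w        # p ∨ ¬p

# p ⊨ ?p and ¬p ⊨ ?p, but p ∨ ¬p ⊭ ?p: the full state supports the
# tautology without settling the question.
full = frozenset(WORLDS)
print(settles_stmt(full, taut), settles_q(full, p))   # True False
```

Every state settling p (or settling ¬p) does settle ?p; it is only the unrestricted elimination step from \(p\vee \lnot p\) that fails.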

Fig. 5
figure 5

Derived rules for \(\vee \) and \(\lnot \), where \(\alpha \) is restricted to classical formulas

Double negation elimination

We saw in the previous section that the double negation law is characteristic of statements (Proposition 9). By allowing double negation elimination for all classical formulas, we are encoding the fact that all classical formulas in InqB are statements—and thus obey classical logic.

Notice that having this rule for all classical formulas means that our system is an extension of a standard natural deduction system for classical logic (as given, e.g., in Gamut 1991). This ensures that any natural deduction proof in classical logic is also a proof in our system.

Split

The only non-standard ingredient of our system is the split rule, which distributes a classical antecedent over an inquisitive disjunction. This rule encodes the Split Property of Proposition 16, repeated here.

Proposition 21

(Split Property) \(\alpha \models _s\varphi \,{⩒}\,\psi \iff \alpha \models _s\varphi \) or \(\alpha \models _s\psi \)

Indeed, by the connection between entailment in context and implication, the split property could be re-stated as follows: for any state s, if \(s\models \alpha \rightarrow (\varphi \,{⩒}\,\psi )\) then \(s\models \alpha \rightarrow \varphi \text { or }s\models \alpha \rightarrow \psi \). In turn, by the support clause for inquisitive disjunction, this amounts to \(s\models (\alpha \rightarrow \varphi )\,{⩒}\,(\alpha \rightarrow \psi )\), i.e., precisely to the validity of the split rule. Thus, we may regard the split rule as encoding the following fundamental feature of our logic: in any given context, if a statement resolves a question, it must resolve it in a particular way.Footnote 22

Inquisitive proofs and their constructive content

Now that we have discussed the individual inference rules, let us turn to examine the significance of inquisitive proofs as a whole. To see what proofs involving questions look like, let us consider once again our initial example of a dependency, corresponding to the following entailment:Footnote 23

$$\begin{aligned} \gamma ,\;\; {?s_1},\;\;{?s_2},\;\;{?g}\;\;\models \;\; {?t} \end{aligned}$$

where \(\gamma \) stands for the protocol description, \(\,t\,\leftrightarrow \, s_2\vee (s_1\wedge g)\). Since this entailment is valid, it must be possible to provide a proof for it in our system. Such a proof is displayed below, where sub-proofs involving only classical logic have been omitted and denoted by \((\textsf {C}_1),\dots ,(\textsf {C}_4)\). We will refer to this proof as P.

figure e

It is instructive to consider what argument is encoded by this proof. In words, this may be phrased roughly as follows. We are assuming information of type \(?s_2\). This means that we have either the information that \(s_2\), or the information that \(\lnot s_2\). If we have the information that \(s_2\), then by combining this information with \(\gamma \) we can infer t, and so we have some information of type ?t. On the other hand, if we have the information that \(\lnot s_2\), we have to rely on having information of type \(?s_1\). This means that we have either the information that \(s_1\), or the information that \(\lnot s_1\). If the information we have is \(\lnot s_1\), then by combining this with \(\lnot s_2\) and \(\gamma \) we can infer \(\lnot t\), and thus we have some information of type ?t. On the other hand, if the information we have is that \(s_1\), then we have to rely on having information of type ?g. Again, there are two possibilities: if the information we have is g, then by combining this with \(s_1\) and \(\gamma \) we can infer t, and so we have information of type ?t; if the information we have is \(\lnot g\), then by combining this with \(\lnot s_2\) and \(\gamma \) we can infer \(\lnot t\), and thus again we have information of type ?t. So, in any case, under the given assumptions we are assured to have information of type ?t.Footnote 24

Notice an interesting fact about this proof: the proof does not just witness that, given \(\gamma \), information of type \(?s_1,?s_2,\) and ?g yields information of type ?t: within its structure, it actually encodes how to obtain information of type ?t from information of the types \(?s_1,{?s_2},\) and ?g. This means that if we replace each of the indeterminate assumptions \({?s_1},{?s_2},?g\) by a corresponding determinate assumption, say \(s_1,\lnot s_2,\) and g respectively, the proof describes how to obtain a corresponding resolution of ?t—in this case, t. In other words, the proof provides us with an algorithm to compute the dependency at hand.
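For illustration, the dependence function encoded by P can also be computed semantically, by checking which value of t the protocol \(\gamma \) forces for each resolution of the assumptions. This is a brute-force sketch, not the proof transformation described below; the function names are my own.

```python
def protocol(s1, s2, g, t):
    """The classical formula gamma: t <-> s2 ∨ (s1 ∧ g)."""
    return t == (s2 or (s1 and g))

def resolve(s1, s2, g):
    """Given resolutions of ?s1, ?s2, ?g (as truth values), return the
    resolution of ?t that gamma forces: True for t, False for ¬t."""
    vals = {t for t in (True, False) if protocol(s1, s2, g, t)}
    assert len(vals) == 1    # the dependency makes t uniquely determined
    return vals.pop()

# The patient from the text: only symptom S1, in good physical condition.
print(resolve(True, False, True))    # True: treatment prescribed
print(resolve(True, False, False))   # False: not prescribed
```

The assertion inside `resolve` is exactly the dependency: without the premise ?g it would fail, since for a patient with only \(S_1\) both values of t are consistent with \(\gamma \).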

Fig. 6
figure 6

An illustration of the resolution algorithm: given a proof \(P:\overline{\varphi }\vdash \psi \) and resolutions \(\overline{\alpha }\) of \(\overline{\varphi }\), the algorithm builds a proof \(F_{P}(\overline{\alpha }):\overline{\alpha }\vdash \beta \) of a corresponding resolution \(\beta \) of \(\psi \)

This is not an accident, but a manifestation of a general fact concerning inquisitive proofs: given a proof P which witnesses a dependency, we can always see this proof as encoding a program that computes the dependency. To state this fact in a precise way, let us write \(\overline{\varphi }\) for a sequence \(\varphi _1,\dots ,\varphi _n\) of formulas, and \(\overline{\alpha }\in {\mathcal {R}}(\overline{\varphi })\) to mean that \(\overline{\alpha }\) is a sequence \(\alpha _1,\dots ,\alpha _n\) such that \(\alpha _i\in {\mathcal {R}}(\varphi _i)\). Then, we have the following theorem.

Theorem 1

(Existence of a resolution algorithm) Let \(P:\overline{\varphi }\vdash \psi \) and let \(\overline{\alpha }\in {\mathcal {R}}(\overline{\varphi })\). There is a procedure which, inductively on P, constructs a proof \(\,F_P(\overline{\alpha }):\overline{\alpha }\vdash \beta \) having as conclusion a resolution \(\beta \in {\mathcal {R}}(\psi )\).

The proof of this theorem is given in the appendix, where we explicitly describe how to construct the desired proof \(F_P(\overline{\alpha })\) inductively on P. We will refer to this inductive procedure as the resolution algorithm, and we will refer to \(F_P(\overline{\alpha })\) as the resolution of the proof P on input \(\overline{\alpha }\). The idea of the resolution algorithm is illustrated in Fig. 6.

The existence of this procedure shows that an inquisitive proof may be regarded as a template for classical proofs, where questions serve as placeholders for arbitrary information of the corresponding type. As soon as the indeterminate assumptions of the proof are instantiated to particular resolutions—say, as soon as we input the data relative to a specific patient—the template can be instantiated to a proof in classical logic, which infers some corresponding resolution of the conclusion—in our case, a deliberation about the treatment.

As an illustration, suppose we get the data relative to a certain patient: this patient has only symptom \(S_1\) and is in good physical condition. Then we can instantiate the question assumptions of our proof to \(s_1,\lnot s_2,\,g\). Applying the resolution algorithm yields the following proof (where \(\textsf {C}_3\) was a sub-proof of our original proof), witnessing that from this particular resolution of the assumptions it follows that the treatment is indeed prescribed.

figure f

To sum up, then, a proof \(P:\overline{\varphi }\vdash \psi \) in inquisitive logic can always be seen as encoding a program to turn each resolution of the set \(\overline{\varphi }\) of assumptions into a corresponding resolution of the conclusion \(\psi \): given resolutions \(\overline{\alpha }\in {\mathcal {R}}(\overline{\varphi })\), we can use the proof P to compute a corresponding resolution \(f_P(\overline{\alpha })\in {\mathcal {R}}(\psi )\) which follows from \(\overline{\alpha }\): for this, it suffices to let \(f_P(\overline{\alpha })\) be the conclusion of \(F_P(\overline{\alpha })\). In other words, from a proof of a dependency we can always extract a logical dependence function \(f_P:\overline{\varphi }\leadsto \psi \).Footnote 25

On the role of questions in inference

Let us now abstract away from the specific setting of InqB, and let us turn to examine what our investigations show about the role of questions in logical proofs. In the logic literature, the meaningfulness of making inferences with questions has been doubted, and sometimes overtly denied, as in the following passage, drawn from the introduction to Belnap and Steel (1976):

Absolutely the wrong thing is to think [the logic of questions] is a logic in the sense of a deductive system, since one would then be driven to the pointless task of inventing an inferential scheme in which questions, or interrogatives, could serve as premises and conclusions.

What we hope to have achieved in this section is to show that Belnap and Steel were too pessimistic: not only is it possible to give a deductive system in which questions serve as premises and conclusions, and a logically well-behaved one at that; but also, proofs in such a system are meaningful and worthy of investigation.

For one thing, we saw in the previous sections that entailments involving questions capture interesting logical relations. This, in itself, would suffice to warrant interest in a syntactic calculus that tracks this generalized notion of entailment. However, in this section we have seen that the role of questions in proofs goes beyond this: inferences involving questions are themselves interesting logical objects, which capture meaningful arguments.

In a nutshell, what we found is that questions make it possible to perform inferences with information which is not fully specified. To appreciate this point, it might be useful to draw a connection with the constants used for arbitrary individuals in natural deduction systems for standard first-order logic.Footnote 26 For instance, in order to infer \(\psi \) from \(\exists x\varphi (x)\), one can make a new assumption \(\varphi (c)\), where c is fresh in the proof and not occurring in \(\psi \), and then try to derive \(\psi \) from this assumption. Here, the idea is that c stands for an arbitrary object in the extension of \(\varphi (x)\). If \(\psi \) can be inferred from \(\varphi (c)\), then it must follow no matter which specific object “of type \(\varphi (x)\)” the constant c denotes, and thus it must follow from the mere existence of such an object. Questions allow us to do something similar, except that instead of an arbitrary individual, a question stands for an arbitrary piece of information of the corresponding type. For instance, the question what the patient’s symptoms are may be viewed as a placeholder standing for an arbitrary specification of the patient’s symptoms.

When assuming a question \(\mu \), what we are supposing is some indeterminate piece of information of type \(\mu \). Thus, e.g., by assuming the question what the patient’s symptoms are, we are supposing to be given a complete specification of the patient’s symptoms. We are not assuming anything specific about what these symptoms are—say, that the patient has only symptom \(S_1\); we are merely assuming some information of this type.

Similarly, consider the move of drawing a conclusion. In concluding a question \(\mu \), what we are establishing is that, under the given assumptions, we are guaranteed to have some information of type \(\mu \)—though precisely what information this is will in general depend on what specific information instantiates the indeterminate assumptions under which the conclusion \(\mu \) was drawn.

Summing up, then, questions may be used in logical inferences as placeholders for arbitrary information of the corresponding type. As we saw, by manipulating such placeholders it is possible to construct logical proofs that witness the existence of certain dependence relations between information types. Thus, far from being meaningless from a proof-theoretic perspective, questions turn out to be extremely interesting tools for logical inference.

Relation with previous work

Non entailment-directed approaches to questions

Throughout most of the history of logic, virtually no attention has been paid to questions. It is not until the second half of the 20th century that logical works devoted to questions have started to appear. In most of these works (e.g. Åqvist 1965; Harrah 1961, 1963; Belnap and Steel 1976; Tichy 1978) the emphasis has been on providing a logical language for questions, and on characterizing the relation of answerhood between statements and questions. Other approaches have focused instead on the role of questions in processes of inquiry, either modeling inquiry itself as a sequence of questioning moves and inference moves, as in the interrogative model of inquiry of Hintikka (1999), or characterizing how questions are arrived at in an inquiry scenario, as in the inferential erotetic logic of Wiśniewski (1995).Footnote 27 What all these theories have in common is the assumption that dealing with questions requires turning to relations other than logical entailment. Thus, they pursue enterprises which, while related, are also different in an important respect from the one that we have been concerned with here: incorporating questions on a par with statements in the very relation of entailment, and characterizing how they can be manipulated in entailment-tracking logical proofs.

The Logic of Interrogation

To the best of our knowledge, the first approach that allows for a generalization of the classical notion of entailment to questions is the Logic of Interrogation (LoI) of Groenendijk (1999), based on the partition theory of questions of Groenendijk and Stokhof (1984). The original presentation of the system is a dynamic one, in which entailment is defined in terms of context-change potential. However, as pointed out by ten Cate and Shan (2007), the dynamic coating is not essential. In its essence, the system may be described as follows: both statements and questions are interpreted with respect to pairs \(\langle w,w^{\prime }\rangle \) of possible worlds: a statement is satisfied by such a pair if it is true at both worlds, while a question is satisfied if the true answer to the question is the same in both worlds. In this approach, the meaning of a sentence \(\varphi \) is captured by the set of pairs \(\langle w,w^{\prime }\rangle \) satisfying \(\varphi \); for any \(\varphi \), this set is an equivalence relation over a subset of the logical space, which we will denote as \({\sim }_{\varphi }\). Such an equivalence relation may be equivalently regarded as a partition \(\varPi _\varphi \) of a subset of the logical space, where the blocks of the partition are the equivalence classes \([w]^{{\sim }_\varphi }\) of worlds modulo \({\sim }_\varphi \).

$$\begin{aligned} \varPi _\varphi =\{[w]^{{\sim }_\varphi }\,|\,w\in \omega \} \end{aligned}$$

For a statement \(\alpha \), the partition \(\varPi _\alpha \) always consists of a unique block, corresponding to the truth-set \(|\alpha |\) of the statement. For a question \(\mu \), \(\varPi _\mu \) consists of several blocks, which are regarded as the complete answers to the question.

Since statements and questions are interpreted by means of a uniform semantics, LoI allows for the definition of a notion of entailment in which both statements and questions can take part:

$$\begin{aligned} \varphi \models _{\textsf {LoI}}\psi \iff \text {for all }w,w^{\prime }\in \omega : \langle w,w^{\prime }\rangle \models \varphi \text { implies }\langle w,w^{\prime }\rangle \models \psi \end{aligned}$$

In terms of partitions, this notion of entailment may be cast as follows:

$$\begin{aligned} \varphi \models _{\textsf {LoI}}\psi \iff \text {for all }a\in \varPi _\varphi \text { there is }a^{\prime }\in \varPi _\psi \text { such that }a\subseteq a^{\prime } \end{aligned}$$

This shows that in LoI, too, we may view sentences as denoting information types; \(\varphi \) entails \(\psi \) in case information of type \(\varphi \) always yields information of type \(\psi \). Thus, while this has not been highlighted much in the literature, the unified view of entailment discussed in this paper already emerges in LoI.
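The partition formulation of LoI entailment is easy to implement directly. In the sketch below (my own encoding), a question is represented, per the partition theory, as a function mapping each world to its complete answer, and entailment is block inclusion.

```python
from itertools import product

ATOMS = ('p', 'q')
WORLDS = [dict(zip(ATOMS, vs)) for vs in product([True, False], repeat=2)]

def partition(answer):
    """The partition induced by a sentence, given a function mapping each
    world to its complete answer (any hashable label)."""
    blocks = {}
    for i, w in enumerate(WORLDS):
        blocks.setdefault(answer(w), set()).add(i)
    return list(blocks.values())

def loi_entails(phi, psi):
    """phi ⊨_LoI psi: every block of phi's partition is included in some
    block of psi's partition."""
    return all(any(a <= b for b in partition(psi)) for a in partition(phi))

whether_p    = lambda w: w['p']              # ?p
whether_both = lambda w: (w['p'], w['q'])    # ?p ∧ ?q

print(loi_entails(whether_both, whether_p))    # True
print(loi_entails(whether_p, whether_both))    # False
```

A statement is the special case of a one-block partition over its truth-set, so the same check covers entailments among statements as well.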

In Groenendijk (1999), this approach is applied to a particular logical language, which is an extension of first-order predicate logic with questions. This gives rise to an interesting combined logic of statements and questions, which was investigated and axiomatized by ten Cate and Shan (2007).

The relation between the LoI framework and the approach presented here can be characterized as follows. If a sentence \(\varphi \) is interpretable in LoI, then a state s supports \(\varphi \) in case s is included in one of the blocks of the partition induced by \(\varphi \). So, the support-set of \(\varphi \) may be obtained as the downward closure of the partition \(\varPi _\varphi \).

$$\begin{aligned} {[}\varphi ]=(\varPi _\varphi )^\downarrow \end{aligned}$$

Conversely, the elements of the partition \(\varPi _\varphi \) can always be characterized as the maximal elements of \((\varPi _\varphi )^\downarrow \). This means that the LoI-representation of a sentence \(\varphi \) can be recovered from its inquisitive representation as follows:

$$\begin{aligned} \varPi _\varphi =\textsc {Alt}(\varphi ) \end{aligned}$$

Thus, for sentences that can be interpreted in LoI, we can go back and forth between the two semantics. Furthermore, it is easy to see that the notion of entailment that the two frameworks characterize is the same.

$$\begin{aligned} \varphi \models \psi \iff \varphi \models _{\textsf {LoI}}\psi \end{aligned}$$

In spite of this tight connection, however, what we have done here is not merely to provide an alternative semantics for LoI, based on information states rather than pairs of worlds. The reason is that the support approach that we discussed in this paper is strictly more general than the LoI approach based on pairs of worlds. To see why, consider again the way in which a question \(\mu \) is interpreted in LoI: a pair of worlds \(\langle w,w^{\prime }\rangle \) satisfies \(\mu \) in case the complete answer to \(\mu \) is the same in w as in \(w^{\prime }\). Clearly, this interpretation only makes sense provided that for any world w, there is such a thing as the complete answer to \(\mu \) at w. Now, our analysis of the relation between complete answers and support conditions makes clear what this assumption amounts to: \(\mu \) must be a partition question, in the following sense.

Definition 10

(Partition questions) \(\mu \) is a partition question if any world is contained in a unique alternative for \(\mu \).

Fig. 7
figure 7

The alternatives for the mention-some question (3-a), where we have restricted to a small set of candidates \(\{\)Pierre, Manon, Lev\(\}\)

While the class of partition questions includes many natural kinds of questions, such as the questions that were at play in our hospital protocol example, there are also important types of questions that fall outside of this class. Most importantly, this class does not include so-called mention-some questions, that is, questions that ask for an instance of a certain property or relation. Under the most salient interpretation, the following are all examples of mention-some questions:

figure g

It is easy to see that such questions are not partition questions. Consider for example (3-a): as illustrated by Fig. 7, the alternatives for this question correspond to the possible witnesses for the property of being a typical French name. In a given world w, we may of course have several witnesses for this property, which means that w is contained in several alternatives for the question. This shows that a question like (3-a) is not a partition question, and thus it is not interpretable in LoI. On the other hand, since it is clear what information is needed in order for (3-a) to be settled, the inquisitive approach has no problem interpreting this question. Since mention-some questions are a broad and practically relevant class of questions, this is a significant advantage of support semantics over the LoI approach.

Another important class of non-partition questions that we briefly discussed in this paper is given by conditional questions, exemplified by (4).

figure h

Such questions, too, cannot be adequately represented in the LoI framework. Starting precisely with the problem of conditional questions, the pursuit of greater generality led Velissaratou (2000), Groenendijk (2009) and Mascarenhas (2009) to relax the constraints of the LoI framework, interpreting sentences by means of binary relations that are not necessarily transitive. This led to a first version of inquisitive semantics, now referred to as pair semantics. While Groenendijk (2011) showed that the pair semantics can indeed deal adequately with conditional questions, Ciardelli (2008, 2009), and later Ciardelli et al. (2015), argued that no pair semantics provides a satisfactory general framework for questions, and that an interpretation based on information states is needed instead, leading to the support-based approach that we discussed here.

Nelken and Shan’s modal approach

After the Logic of Interrogation, a different uniform approach to statements and questions was proposed by Nelken and Shan (2006). In this approach, questions are translated as modal sentences, and they are interpreted by means of truth conditions: a question is true at a world w in case it is settled by an information state R[w] associated with the world (i.e., the set of successors given by an accessibility relation R). Thus, for instance, Nelken and Shan render the question whether p by the modal formula \(?p:=\Box p\vee \Box \lnot p\).

In one respect, this approach is similar to the approach proposed in this paper, since the meaning of a question is taken to be encoded by the conditions under which the question is settled by a relevant body of information. And indeed, if we consider entailments which involve only questions, the approach of Nelken and Shan makes the expected predictions. However, an asymmetry between statements and questions is maintained in this approach: for questions, what matters is whether they are settled by a relevant information state, while for statements, what matters is whether they are true at the world of evaluation. This asymmetry creates problems the moment we start considering cases of entailment involving both statements and questions, such as the one corresponding to our protocol example. It is easy to see that, if such entailments are to be meaningful at all, entailment cannot just amount to preservation of truth. Nelken and Shan propose to fix this by re-defining entailment as modal consequence: \(\varphi \models \psi \) if, whenever \(\varphi \) is true at every possible world in a model, so is \(\psi \). However, this move has the odd consequence of changing the consequence relation for statements in an undesirable way. For instance, if our declarative language indeed contains a Kripke modality, say a knowledge modality K, then if our notion of entailment is redefined as modal consequence, we make undesirable predictions, such as \(p\models Kp\). Thus, this approach does not really allow us to extend classical logic with questions in a conservative way.Footnote 28

The modal translation of InqB

The asymmetry between statements and questions that is problematic for Nelken and Shan’s approach can be eliminated by letting statements, too, be interpreted in terms of when they are settled by the state R[w], rather than in terms of when they are true at w. That is, just as Nelken and Shan translate a question \(?p\in {\mathcal {L}}\) as \(\Box p\vee \Box \lnot p\), one may translate a statement \(p\in {\mathcal {L}}\) in modal logic as \(\Box p\). More generally, we may associate to any formula \(\varphi \in {\mathcal {L}}\) a modal formula \(\varphi ^\Box \) defined as follows:

$$\begin{aligned} \varphi ^\Box =\bigvee \Big \{\Box \alpha \,|\,\alpha \in {\mathcal {R}}(\varphi )\Big \} \end{aligned}$$

We will refer to \(\varphi ^\Box \) as the modal translation of \(\varphi \). It is then easy to show that this map is indeed a translation of InqB into the modal logic K, in the sense that for any \(\varPhi \cup \{\psi \}\subseteq {\mathcal {L}}\), we have:

$$\begin{aligned} \varPhi \models \psi \iff \varPhi ^\Box \models _{\textsf {K}}\psi ^\Box \end{aligned}$$

where \(\varPhi ^\Box =\{\varphi ^\Box \,|\,\varphi \in \varPhi \}\). This translation shows that it is possible to encode InqB within the modal logic K. We could compare this with the Gödel translation of intuitionistic logic into the modal logic S4. Thus, modal logic provides an alternative setup in which dependencies may be captured. E.g., instead of writing \(p\leftrightarrow q, \;{?p}\models {?q}\), we may write \({\Box (p\leftrightarrow q),\;\Box p\vee \Box \lnot p\;\models _{\textsf {K}}\;\Box q\vee \Box \lnot q}\). However, the approach presented in this paper has several important advantages over the modal approach. Let us discuss some of them.
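The entailment \(p\leftrightarrow q, \;{?p}\models {?q}\) can also be checked directly at the level of support semantics. The following sketch is our own illustration, not part of the official presentation of InqB: it represents worlds as sets of true atoms, information states as sets of worlds, and encodes the polar question \(?a\) as the inquisitive disjunction of \(a\) and \(\lnot a\).

```python
from itertools import chain, combinations

def subsets(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1)))

def supports(s, phi):
    """Support semantics: s is an information state (a set of worlds,
    each world a frozenset of true atoms)."""
    op = phi[0]
    if op == "atom":  # settled iff true throughout the state
        return all(phi[1] in w for w in s)
    if op == "bot":
        return len(s) == 0
    if op == "and":
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == "or":    # inquisitive disjunction
        return supports(s, phi[1]) or supports(s, phi[2])
    if op == "imp":   # every substate supporting the antecedent
        return all(supports(t, phi[2])  # must support the consequent
                   for t in subsets(s) if supports(t, phi[1]))
    raise ValueError(op)

def neg(a): return ("imp", a, ("bot",))
def pq(a):  return ("or", a, neg(a))   # the polar question ?a

p, q = ("atom", "p"), ("atom", "q")
iff = ("and", ("imp", p, q), ("imp", q, p))
WORLDS = list(subsets(["p", "q"]))     # the four valuations over p, q

def entails(premises, conclusion):
    """Entailment: every state supporting the premises supports the conclusion."""
    return all(supports(s, conclusion) for s in subsets(WORLDS)
               if all(supports(s, ph) for ph in premises))

print(entails([iff, pq(p)], pq(q)))  # p <-> q, ?p |= ?q : True
print(entails([pq(p)], pq(q)))       # ?p alone does not entail ?q : False
```

As the second check shows, dropping the premise \(p\leftrightarrow q\) breaks the dependency: the state containing just the worlds \(\{p\}\) and \(\{p,q\}\) settles \(?p\) without settling \(?q\).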

Parsimony

First of all, the modal approach is redundant. In this approach, a formula is evaluated at a world w equipped with an information state R[w]. However, the evaluation world w is completely unnecessary: only the content of the information state R[w] matters for the satisfaction of a formula \(\varphi ^\Box \). But if only the content of a certain information state matters, we can evaluate a formula directly with respect to that information state, without invoking a specific world of evaluation. This move allows for a significant simplification of our semantic structures: we no longer need an accessibility relation R, whose only purpose was to anchor the relevant state to a specific world.

Insight

By uncovering the connection between dependency and entailment, the present approach provides an insight that is missing in the modal approach. This insight also has practical consequences, since it allows us to use ideas and techniques of logic in the analysis of dependencies. For instance, since entailment can be generally internalized in the language by means of implication, dependencies can be expressed as implications between questions. This provides a well-behaved logical representation of dependencies, and suggests natural rules for reasoning with them. As an example of the explanatory power of the approach, this perspective shows that the well-known Armstrong axioms for dependency used in database theory are nothing but the familiar intuitionistic rules for implication in disguise (a point first made by Abramsky and Väänänen 2009).
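To illustrate, a functional dependency in a database table is exactly a dependency in the present sense: any two rows that settle the antecedent questions in the same way must settle the consequent question in the same way. A minimal sketch, with a hypothetical table mirroring the protocol example (the column names and values are our own):

```python
def fd_holds(rows, lhs, rhs):
    """A functional dependency lhs -> rhs holds iff any two rows that
    agree on all lhs attributes also agree on the rhs attribute."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = row[rhs]
        if seen.setdefault(key, val) != val:
            return False
    return True

# hypothetical patient table following the hospital protocol
rows = [
    {"symptoms": "S1",    "condition": "good", "treatment": "yes"},
    {"symptoms": "S1",    "condition": "poor", "treatment": "no"},
    {"symptoms": "S2",    "condition": "good", "treatment": "yes"},
    {"symptoms": "S1,S2", "condition": "poor", "treatment": "yes"},
]
print(fd_holds(rows, ["symptoms", "condition"], "treatment"))  # True
print(fd_holds(rows, ["symptoms"], "treatment"))               # False
```

Armstrong's transitivity axiom, for instance, then amounts to nothing more than the transitivity of implication: from \({?X}\rightarrow {?Y}\) and \({?Y}\rightarrow {?Z}\), infer \({?X}\rightarrow {?Z}\).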

Logical operations

In intuitionistic logic there is a wealth of interesting structure that becomes rather invisible from the standpoint of the S4 translation. Similarly, we have seen that the inquisitive approach leads to the discovery of interesting structural features at the support level: in particular, connectives such as conjunction and implication (as well as quantifiers and modalities, which we have not discussed here) generalize to this setting in a natural way, so that they can also manipulate questions. These generalizations not only give nice results: they are also natural from an algebraic point of view (Roelofsen 2013) and from a proof-theoretic one, as we saw. To give just one example, let us focus on the implication operation. Consider the following sentences:

figure i

In inquisitive logic, the interpretation of these sentences can be obtained in a simple, compositional way. If we translate (5-a) as p and (5-b) as q, then we can translate (5-c) as ?q, (5-d) as \(p\rightarrow q\), and (5-e) as \(p\rightarrow {?q}\). Notice that one and the same operator is at play in (5-d) and (5-e): this operator has a uniform semantics, a simple algebraic characterization, and a natural proof-theory.

In the modal approach, the translation of (5-a) is \(\Box p\), and the translation of (5-b) is \(\Box q\). The translation of (5-c) is \(\Box p\vee \Box \lnot p\). The translation of (5-d) is not \(\Box p\rightarrow \Box q\), as would be expected, but rather \(\Box (p\rightarrow q)\); similarly, the translation of (5-e) is not \(\Box p\rightarrow (\Box q\vee \Box \lnot q)\), but rather \(\Box (p\rightarrow q)\vee \Box (p\rightarrow \lnot q)\).

Although modal logic does have an implication, this cannot be used to interpret the conditional construction. More importantly, in this approach, it is not clear that there is any structural similarity between (5-d) and (5-e), nor that there is a fundamental relation between these two sentences and the simpler sentences (5-a-c). Clearly, some important piece of structure—in the case in point, the existence of a neat implication operation—is being missed from the modal perspective.

Inferences and computational interpretation of proofs

We saw in Sect. 4 that inquisitive logic allows us to manipulate questions in inference by means of simple and familiar logical rules. By using questions, we can then provide formal proofs of dependencies. Moreover, we saw that such proofs have a computational interpretation, encoding algorithms to compute the dependency at hand. It is not clear that the modal approach has an equally attractive framework to offer. First, it would require us to reason with a more complex language, including modalities in addition to just connectives (or, in addition to whatever other logical constants the language includes). Second, it is not clear whether a general constructive interpretation of proofs exists in this approach.

Dependence logic

The ideas discussed in this chapter are also deeply connected with the investigations undertaken in recent years within the framework of Dependence Logic (Väänänen 2007). Indeed, the dependencies considered in dependence logic are special instances of question entailment. In particular, the dependence atoms \(\,=(p_1,\dots ,p_n,q)\,\) of propositional dependence logic (Väänänen 2008; Yang 2014) capture the dependency of the atomic polar question ?q on the atomic polar questions \(?p_1,\dots ,?p_n\). As a consequence, in our language they may be expressed as \({?p_1}\wedge \dots \wedge {?p_n}\rightarrow {?q}\). This yields a decomposition of these atoms into more basic and better-behaved operations—which allows for a natural proof-theory. While the possibility of such a decomposition was noted by Abramsky and Väänänen (2009), the present work casts new light on this connection in several ways. First, we can now see that this decomposition reflects a fundamental connection between dependencies and questions: a dependency is a case of entailment having questions as its protagonists; since entailments can be internalized as implications, dependencies can be expressed as implications between questions. Second, it becomes clear that dependence atoms capture only a special case of a more general phenomenon: dependencies may involve all sorts of questions other than atomic polar questions, and implication gives us a fully general way to express them. Third, as we have seen, the connection between questions and dependencies has a proof-theoretic side to it: dependencies may be formally proved to hold in a system equipped with questions; and moreover, the resulting proofs do not just witness dependencies but actually encode specific dependence functions.
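The equivalence between dependence atoms and implications between polar questions can be verified mechanically: a team satisfies \(=(p,q)\) exactly when, regarded as an information state, it supports \({?p}\rightarrow {?q}\). The following sketch (our own illustration) checks this by brute force over all teams on two atoms:

```python
from itertools import chain, combinations

def subsets(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1)))

def dep_atom(team, xs, y):
    """Team semantics of =(xs, y): any two assignments (worlds) that
    agree on all atoms in xs also agree on y."""
    seen = {}
    for w in team:
        key = tuple(x in w for x in xs)
        if seen.setdefault(key, y in w) != (y in w):
            return False
    return True

def supports_dep_imp(s, xs, y):
    """Support of ?x1 /\ ... /\ ?xn -> ?y: every substate of s that
    settles each polar question ?xi also settles ?y."""
    def settles(t, a):  # a is true throughout t, or false throughout t
        return all(a in w for w in t) or all(a not in w for w in t)
    return all(settles(t, y) for t in subsets(s)
               if all(settles(t, x) for x in xs))

WORLDS = list(subsets(["p", "q"]))  # the four valuations over p, q
TEAMS = list(subsets(WORLDS))       # all sixteen information states
print(all(dep_atom(t, ["p"], "q") == supports_dep_imp(t, ["p"], "q")
          for t in TEAMS))          # True: the two notions coincide
```

The two sides fail together on exactly the same teams, e.g. on the team containing the worlds \(\emptyset \) and \(\{q\}\), which agrees on p but not on q.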

The relation between inquisitive logic and dependence logic is the subject of a separate paper (Ciardelli 2016), which develops these points in detail, and shows how they carry over to the setting of first-order logic.

Previous work on inquisitive semantics

Finally, within the landscape of recent work on inquisitive semantics (among others, Ciardelli et al. 2013a, 2015; Roelofsen 2013; Ciardelli and Roelofsen 2015; Groenendijk and Roelofsen 2013; Punčochář 2015a, b), the contribution of the present paper is threefold. In the first place, we have shown that propositional inquisitive logic may be regarded as a conservative extension of classical propositional logic with a question operator, and we saw that this perspective sheds new light on some logical features of the system. Secondly, we have investigated the role of questions in logical inferences, and we have established a new result, Theorem 1, which brings out the computational content of inquisitive proofs. Thirdly, and most importantly, while work on inquisitive semantics has so far been mainly driven by motivations stemming from linguistics and philosophy of language, we have argued that inquisitive semantics also has solid motivations stemming entirely from within the field of logic. As we saw, taking questions into account broadens the scope of classical logic in an exciting way, bringing within reach an elegant account of the logical relation of dependency.

Conclusion and further work

In this paper we have seen that, by moving from a truth-based view of meaning to an information-based view, we obtain a uniform semantic framework for statements and questions. This allows for a substantial generalization of the fundamental notions of classical logic. In particular, while classical logic is concerned with the relation of logical consequence, which relates specific pieces of information, the presence of questions allows us to also capture the relation of logical dependency, which connects different information types, and which plays an important role in a broad range of contexts.

Additionally, we saw that questions have a role to play in logical proofs. By using questions, we can manipulate indeterminate information, that is, information which is not fully specified, such as what the symptoms are or whether the treatment is prescribed. This allows us to formally prove that a certain dependency holds within a certain context. Moreover, we saw that, at least in the propositional setting, from such a proof we can extract an algorithm that computes this dependency, i.e., that turns information of the type described by the assumptions into information of the type described by the conclusion.
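As an illustration of what such an extracted algorithm amounts to, the dependence function for the protocol example can be written out directly. The sketch below is our own; in particular, the handling of symptom-free patients is an assumption, since the protocol leaves that case open.

```python
def treatment(symptoms, in_good_condition):
    """Turns answers to 'which symptoms?' and 'is the patient in good
    condition?' into an answer to 'is the treatment prescribed?'."""
    if "S2" in symptoms:
        return True               # S2 present: always treat
    if "S1" in symptoms:
        return in_good_condition  # only S1: treat iff condition is good
    return False                  # assumption: no symptoms, no treatment

print(treatment({"S1", "S2"}, False))  # True: S2 overrides condition
print(treatment({"S1"}, False))        # False: only S1, poor condition
```

A proof of the dependency in the natural-deduction system encodes exactly such a function, turning resolutions of the premises into a resolution of the conclusion.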

In this paper, we focused on the fundamental ideas of the inquisitive approach, and on the generalized view of logic to which they give rise. Now that the significance of inquisitive logic and its relevance for applications have become clearer, however, many technical questions also become more urgent. One class of questions is proof-theoretical: e.g., does the natural-deduction system introduced above admit a normalization theorem? If not, can inquisitive logic be regimented in a better-behaved proof system? The labeled sequent calculus developed by Sano (2009) for a related logic provides a starting point for an alternative approach. Other interesting questions concern the computational properties of the resolution algorithm, as well as the complexity of the problem of deciding if a given entailment holds in InqB.

A further question is whether an interpretation of the kind discussed here is available for the extension of InqB proposed and axiomatized by Punčochář (2015b). In this extension, the system is expanded with operators that yield non-persistent meanings. The resulting language includes not only formulas like p and ?p, which are supported if certain information is available, but also formulas like \({\sim }p\), which are supported if certain information is lacking. Understanding the role of such formulas in a logic of information and the properties of the resulting system is an interesting task for future work.

Perhaps most importantly, propositional logic is only a starting point towards a comprehensive theory of questions in logic. From the present perspective, an important result would be an axiomatization of first-order inquisitive logic (Ciardelli 2009; Roelofsen 2013), ideally paired with a resolution algorithm allowing us to regard proofs as programs for computing dependencies. Within a first-order language, many interesting kinds of questions become expressible, besides the disjunctive questions built up by means of inquisitive disjunction. This includes mention-all wh-questions like (6-a), which can be expressed as \(\forall x?Px\), and mention-some wh-questions like (6-b), which can be expressed as \(\overline{\exists }xPx\), where \(\overline{\exists }\) is the quantifier counterpart of inquisitive disjunction.

figure j

An axiomatization of inquisitive predicate logic, or a suitable fragment thereof, would provide the means to reason about dependencies between such questions, thus covering a broad spectrum of interesting informational scenarios.

Finally, an interesting task is to explore the repercussions of the new logical perspective on dependency described in this paper for specific fields in which this relation plays a role, such as database theory. One interesting observation we mentioned is that, given that dependencies can generally be expressed as implications among questions, the famous Armstrong axioms used for reasoning about dependencies in database theory (Armstrong 1974) turn out to be simply particular cases of the standard inference rules for implication; this illustrates how inquisitive logics might provide a general and logically transparent environment for reasoning about database dependencies. In this respect, the fact that proofs of dependencies have a computational interpretation seems of particular interest—especially if this should turn out to be the case also for first-order inquisitive logic, or interesting fragments thereof. It is also worth noting that while state-of-the-art techniques for optimizing query answering in databases, such as the notion of view determinacy (Segoufin and Vianu 2005), focus on partition questions, it seems that mention-some questions like those discussed above might also be interesting from the perspective of a query language. Thus, an axiomatization of fragments of first-order inquisitive logic might also provide useful tools to extend the methods currently used to optimize query answering to a broader class of queries.