Journal of Logic, Language and Information, Volume 25, Issue 1, pp 51–76 (Open Access)

Erotetic Search Scenarios and Three-Valued Logic
Abstract

Our aim is to model the behaviour of a cognitive agent trying to solve a complex problem by dividing it into sub-problems, but failing to solve some of these sub-problems. We use the powerful framework of erotetic search scenarios (ESS) combined with Kleene’s strong three-valued logic. ESS, defined on the grounds of Inferential Erotetic Logic, have proved to be a useful logical tool for modelling cognitive goal-directed processes. Using the tools of ESS and the three-valued logic, we show how an agent could solve the initial problem despite the fact that the sub-problems remain unsolved. Thus our model not only indicates missing information but also specifies the contexts in which the problem-solving process may end in success despite the lack of information. We also show that this model of problem solving may find use in the analysis of natural language dialogues.

Keywords

Erotetic search scenarios · Kleene’s strong three-valued logic · Inferential Erotetic Logic · Erotetic implication · Problem-solving

1 Introduction

We will assume the perspective of an agent trying to solve a compound problem by dividing it into sub-problems. To solve these sub-problems our agent needs to collect additional information. We will concern ourselves with situations in which the agent is not capable of solving the problem on his/her own; therefore we will assume that collecting information involves a questioning process. Thus we may imagine that the agent is engaged in a conversation with another agent (a human being or a machine) or even with multiple agents (sequentially or in a multiparty dialogue) and that he/she asks questions (queries) in order to gain the information needed to solve the initial problem. It can be expected that, during this process, he/she will encounter the following situations:
  • the questioned agent does not know the answer (there is not enough data in his/her knowledge base);

  • the questioned agent does not want to share his/her knowledge;

  • information provided by the questioned agent is not suited to our agent’s needs (e.g. the answer provides too little or too much information—therefore certain additional processing steps need to be taken).

In each of these cases our agent deals with a problem-solving process with some information gaps involved. In this paper we will model such a process by means of Erotetic Search Scenarios (e-scenarios, or simply ESS for short).

Generally speaking, ESS constitute a formally modelled “map”, or a search plan which

shows how an initial question can be answered on the basis of a given set of initial premises and by means of asking and answering auxiliary questions (Wiśniewski 2003, p. 391).

What is significant is that e-scenarios are defined by means of certain concepts borrowed from the logic of questions IEL,1 a logic which focuses on inferences whose premises and/or conclusion may be a question, and which provides criteria for the validity of such inferences. Up to now, ESS have proved to be a powerful logical tool for modelling cognitive goal-directed processes [cf. Wiśniewski (2001, 2003, 2014), Łupkowski (2010, 2012), Urbański et al. (2010); see also Urbański (2001) for the connection between ESS and proof theory].

The fact that the agent may come across information gaps and, more importantly, the process of reasoning involving such gaps will be modelled in this paper by the so-called strong three-valued logic of Kleene [cf. Urquhart (2002) for background]. In order to simplify the matter we shall assume that the queries asked by our agent may be answered with ‘yes’, ‘no’ or ‘it is not known’, where the third answer expresses an information gap. As may be expected, the “additional” third logical value is used to semantically evaluate the third answer. (The logical basis is described in detail in Sect. 4.)

Using the logical machinery briefly described above we model the process of problem-solving with information gaps. The pivotal characteristic of our model is that it enables:
  • the identification of missing information,

  • the identification of the contexts in which the process of solving the initial problem may end in success despite the lack of information.

2 Related Work

This paper is in line with works applying the IEL framework to the analysis of broadly understood problem-solving and dialogue modelling. One of the advantages of our approach is that it is applicable to real dialogues retrieved from natural language corpora (such as the British National Corpus or the Basic Electricity and Electronics Corpus; applications in this domain will be presented in Sect. 5.2). The use of ESS allows not only for modelling such dialogues (Łupkowski 2012), but also for generating certain linguistic events observed in them (Łupkowski et al. 2014).

There are also other approaches to problem-solving and dialogue modelling which apply the logic of questions. One of the closest to ours is the approach employed in Peliš and Majer (2010, 2011), Švarný et al. (2014), where the dynamic epistemic logic of questions combined with public announcement logic is applied to model the public communication process and the knowledge revisions occurring during this process (both in the case of individual agents’ knowledge and in the case of common knowledge). This approach is based on the so-called “set-of-answers methodology”, which is also employed on the grounds of IEL.2 The erotetic epistemic logic allows for modelling sincere agents involved in a card guessing game. The task of an agent is to infer the card distribution over agents on the basis of their announcements during the game. This allows for introducing many interesting concepts, like askability, answerhood and partial answerhood, and for building models for problems such as the Russian Cards Problem—see Švarný et al. (2014).

When it comes to different frameworks of the logic of questions one should mention inquisitive semantics (INQ) (Groenendijk 2009). INQ is also used to address the issue of modelling natural language dialogues and problem-solving. It introduces the notion of compliance in order to achieve this goal—cf. (Groenendijk and Roelofsen 2011). Roughly speaking, INQ treats questions as sets of possibilities or, in other words, as an issue to be resolved. The intuition behind the notion of compliance is to provide a criterion to “judge whether a certain conversational move makes a significant contribution to resolving a given issue” (Groenendijk and Roelofsen 2011, p. 167). Compliance allows for modelling interesting aspects of natural language dialogues and question processing; it can be shown, however, that erotetic implication has more expressive power—cf. (Łupkowski 2014, 2015).3

Let us now add a word on what this paper is not about. We do not aim at a specific description of a (human) cognitive agent and we do not use modal (epistemic) logics. It is common nowadays to use modal logics in modelling various aspects of the behaviour of epistemic/cognitive agents [e.g. Peliš and Majer (2010, 2011), Švarný et al. (2014), mentioned above]. Unfortunately, the approaches based on modal logics often run into the omniscience problem; our approach is free of that level of idealisation and imposes hardly any restrictions on agents. On the other hand, our approach may be contrasted with that of computational learning theory (Jain et al. 1999; Kelly 2014), where an epistemic agent is conceived as a learner processing information. What we aim at is a more abstract, and probably much more general, model which does not presuppose whether an agent is a human or not, and which does not specify the nature of the information sources.

3 Erotetic Search Scenarios for Classical Logic

Let us start with an example. Suppose that John wants to know whether a particular individual d is a local user of the Alpha computer system. John defines the ‘local user’ concept with the following rules (where usr(x) stands for ‘x is a user of the system’ and live(x, p) stands for ‘x lives in place p’):
$$\begin{aligned}&locusr(x) \rightarrow usr(x)\\&locusr(x) \rightarrow live(x,p)\\&usr(x) \wedge live(x,p) \rightarrow locusr(x) \end{aligned}$$
In order to establish a solution to his problem he will ask Ann for information. Let us assume that Ann knows the following facts about the system under discussion:
$$\begin{aligned} \begin{array}{lr} usr(a) &{} \qquad \qquad \qquad live(a,p)\\ usr(b) &{} \qquad \qquad \qquad live(b,z)\\ usr(c) &{} \qquad \qquad \qquad live(c,p)\\ &{} \qquad \qquad \qquad live(d,z) \end{array} \end{aligned}$$
As can be observed, asking Ann directly if d is a local user would not be very effective, since (given that the database represents the sum of her knowledge) Ann does not know the concept ‘local user’. Instead, John may devise a strategy of solving his initial problem by gathering solutions to some sub-problems connected with the initial one (and using concepts known to Ann). This strategy may be modelled by the tree-diagram presented in Fig. 1.
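John’s strategy of reducing his problem to concepts known to Ann can be sketched programmatically. This is a minimal illustration; the names (`facts`, `ask_ann`, `is_local_user`) are ours and purely hypothetical:

```python
# Ann's knowledge base: she knows 'usr' and 'live', but not 'locusr'.
facts = {
    "usr": {"a", "b", "c"},
    "live": {("a", "p"), ("b", "z"), ("c", "p"), ("d", "z")},
}

def ask_ann(predicate, *args):
    """Ann answers a query about a concept she knows with yes/no."""
    if predicate == "usr":
        return args[0] in facts["usr"]
    if predicate == "live":
        return args in facts["live"]
    raise ValueError("Ann does not know this concept")

def is_local_user(x):
    """John's strategy: reduce ?locusr(x) to the queries ?usr(x) and
    ?live(x, p), then apply his rule usr(x) & live(x, p) -> locusr(x)."""
    return ask_ann("usr", x) and ask_ann("live", x, "p")

print(is_local_user("d"))  # False: d is not even a user of the system
print(is_local_user("a"))  # True: usr(a) and live(a, p) both hold
```

Note that the negative case already reflects the rule locusr(x) → usr(x): once Ann denies usr(d), the query ?live(d, p) need not be asked at all.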
Fig. 1

Exemplary e-scenario

The initial question (root) of the diagram expresses John’s initial problem, and the leaves of the tree-diagram represent solutions to the initial problem. All the questions except the root are called auxiliary questions. Those occurring at the branching points are sub-questions which John will ask Ann (we call them queries). These questions are: ‘?usr(d)’ and ‘?live(d, p)’, so they concern concepts known to Ann.4

The tree presented in Fig. 1 is an e-scenario for the initial question.5 Each path of the e-scenario leads to an answer to the initial question through certain (at least one) queries asked of Ann. Obviously, the queries are not selected at random—they are chosen so as to bring John closer to the solution of the initial problem. Thus it may be observed that, first, if the question ‘? locusr(d)’ is sound (i.e. it has a true direct answer,6 which is the case here) and the declarative premises are true, then the queries must be sound as well. And second, each answer to a query (if true, and given that the declarative premises are true) leads John to a solution of the previously posed question. In other words, queries of this e-scenario are erotetically implied by some question occurring higher on the same path together with the declarative premises. A formal definition of erotetic implication (e-implication for short), as well as that of an e-scenario, will be given below; first, however, we need some logical preliminaries. [We shall follow Wiśniewski (2013) in technical notation.]

We will take the language of Classical Propositional Logic (CPL, for short) as the starting point; it will be called L. Language L contains the following connectives: \(\lnot \), \(\rightarrow \), \(\vee \), \(\wedge \), \(\leftrightarrow \); the concept of a well-formed formula (wff) is defined in the traditional manner. We assume that \(\vee \) and \(\wedge \) bind more strongly than \(\rightarrow \) and \(\leftrightarrow \), and we adopt the usual conventions concerning the omission of parentheses.

At this point, IEL introduces another object-level language, \(L^+\), which contains declarative well-formed formulas (d-wffs) and erotetic well-formed formulas (e-wffs, that is, questions).7 The categories of d-wffs and e-wffs are disjoint. D-wffs of \(L^+\) are simply well-formed formulas of L, and e-wffs of \(L^+\) are expressions of the form:
$$\begin{aligned} ? \{ A_1, \ldots , A_n \} \end{aligned}$$
(1)
where \(n \ge 2\) and \(A_1, \ldots , A_n\) are nonequiform (i.e. syntactically distinct) d-wffs of L. Thus questions of language \(L^+\) are distinguished syntactically. The set
$$\begin{aligned} \{ A_1, \ldots , A_n \} \end{aligned}$$
(2)
constitutes the set of all the direct answers to question (1). As we can see, each question of \(L^+\) has a finite set of direct answers and each question has at least two direct answers. If Q is a question of \(L^+\), then \(\mathbf {d}Q\) denotes the set of direct answers to Q. Polar questions of the form \(?\{ A, \lnot A\}\) are simply presented as:
$$\begin{aligned} ? A \end{aligned}$$
(3)
as it was done in our example above. We will say that question (3) is based on formula A. Later on we will also make use of questions of the following form, called binary conjunctive questions:
$$\begin{aligned} ? \{ A \wedge B, A \wedge \lnot B, \lnot A \wedge B, \lnot A \wedge \lnot B \} \end{aligned}$$
(4)
which will be abbreviated as \(? \pm |A,B|\).

Questions of \(L^+\) will be referred to by metavariables Q, \(Q^*\), \(Q^{**}\), etc. Metavariables \(A, B, C, \ldots \) refer to d-wffs of \(L^+\), and \(X, Y, \ldots \) to sets of d-wffs. Now we are ready to present:

Definition 1

(Erotetic implication) A question Q implies a question \(Q^*\) on the basis of a set of d-wffs X (in symbols, \(\mathbf {Im}(Q,X,Q^*)\)) iff:
  1. (1)

    for each \(A \in \mathbf {d}Q\), if A is true and all the d-wffs in X are true, then at least one element of \(\mathbf {d}Q^*\) is true;

     
  2. (2)

    for each \(B \in \mathbf {d}Q^*\), there exists a non-empty proper subset Y of \(\mathbf {d}Q\) such that if B is true and all the d-wffs in X are true, then at least one element of Y is true.

     

The first clause of the above definition warrants the transmission of soundness (of the implying question Q) and truth (of the declarative premises in X) into soundness (of the implied question \(Q^*\)). The second clause expresses the property of “open-minded cognitive usefulness” of e-implication, that is, the fact that each answer to the implied question \(Q^*\) narrows down the set of possibilities offered by the implying question Q.
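Both clauses of Definition 1 can be checked by brute force over classical valuations. The following sketch (all names are ours) confirms that the question based on \(p \wedge q\) e-implies the binary conjunctive question \(? \pm |p,q|\) with no declarative premises, while it does not e-imply the plain polar question ‘?p’, anticipating the non-transitivity phenomenon discussed later in this section:

```python
from itertools import product, combinations

def valuations(variables):
    """All classical truth-value assignments to the given variables."""
    for bits in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, bits))

def implies_erotetically(dQ, X, dQstar, variables):
    """Brute-force check of Im(Q, X, Q*) per Definition 1.
    Formulas are functions from a valuation (dict) to a bool."""
    vals = list(valuations(variables))
    # Clause (1): truth of any A in dQ plus X makes some B in dQ* true.
    for A in dQ:
        for v in vals:
            if A(v) and all(x(v) for x in X) and not any(B(v) for B in dQstar):
                return False
    # Clause (2): each B in dQ* plus X narrows dQ to a non-empty proper subset Y.
    for B in dQstar:
        narrows = any(
            all(any(a(v) for a in Y)
                for v in vals if B(v) and all(x(v) for x in X))
            for k in range(1, len(dQ))
            for Y in combinations(dQ, k))
        if not narrows:
            return False
    return True

conj = lambda v: v["p"] and v["q"]
d_conj = [conj, lambda v: not conj(v)]                 # dQ of ?(p & q)
d_four = [lambda v: v["p"] and v["q"],                 # dQ of ?+-|p,q|
          lambda v: v["p"] and not v["q"],
          lambda v: not v["p"] and v["q"],
          lambda v: not v["p"] and not v["q"]]
d_p = [lambda v: v["p"], lambda v: not v["p"]]         # dQ of ?p

print(implies_erotetically(d_conj, [], d_four, ["p", "q"]))  # True
print(implies_erotetically(d_conj, [], d_p, ["p", "q"]))     # False: the answer p fails clause (2)
```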

And finally:8

Definition 2

(E-scenario) A finite labelled tree \({\varPhi }\) is an erotetic search scenario for a question Q relative to a set of d-wffs X iff
  1. (1)

    the nodes of \({\varPhi }\) are labelled by questions and d-wffs; they are called e-nodes and d-nodes, respectively;

     
  2. (2)

    Q labels the root of \({\varPhi }\);

     
  3. (3)

    each leaf of \({\varPhi }\) is labelled by a direct answer to Q;

     
  4. (4)

    \(\mathbf d Q \cap X = \emptyset \);

     
  5. (5)
    for each d-node \(\gamma _\delta \) of \({\varPhi }\): if A is the label of \(\gamma _\delta \), then
    1. (a)

      \(A \in X\), or

       
    2. (b)

\(A \in \mathbf d Q^*\), where \(Q^* \ne Q\) and \(Q^*\) labels the immediate predecessor of \(\gamma _\delta \), or

       
    3. (c)

\(\{B_1, \ldots , B_n\} \models A\), where \(B_i\) \((1 \le i \le n)\) labels a d-node of \({\varPhi }\) that precedes the d-node \(\gamma _\delta \) in \({\varPhi }\);

       
     
  6. (6)

    each d-node of \({\varPhi }\) has at most one immediate successor;

     
  7. (7)

    there exists at least one e-node of \({\varPhi }\) which is different from the root;

     
  8. (8)
    for each e-node \(\gamma _\varepsilon \) of \({\varPhi }\) different from the root: if \(Q^*\) is the label of \(\gamma _\varepsilon \), then \(\mathbf d Q^* \ne \mathbf d Q\) and
    1. (a)

\(\mathbf{Im}(Q^{**}, Q^*)\) or \(\mathbf{Im}(Q^{**}, \{B_1, \ldots , B_n\}, Q^*)\), where \(Q^{**}\) labels an e-node of \({\varPhi }\) that precedes \(\gamma _\varepsilon \) in \({\varPhi }\) and \(B_i\) \((1 \le i \le n)\) labels a d-node of \({\varPhi }\) that precedes \(\gamma _\varepsilon \) in \({\varPhi }\), and

       
    2. (b)
      an immediate successor of \(\gamma _\varepsilon \) is either an e-node or is a d-node labelled by a direct answer to the question that labels \(\gamma _\varepsilon \), moreover
      • if an immediate successor of \(\gamma _\varepsilon \) is an e-node, it is the only immediate successor of \(\gamma _\varepsilon \),

      • if an immediate successor of \(\gamma _\varepsilon \) is not an e-node, then for each direct answer to the question that labels \(\gamma _\varepsilon \) there exists exactly one immediate successor of \(\gamma _\varepsilon \) labelled by the answer.

       
     

Branches of an e-scenario \(\varPhi \) will be called paths of this e-scenario.

Clauses (1)–(3) are self-explanatory. Clause (4) warrants that the solution to the initial problem represented by Q is not among the declarative premises in X (thus Q expresses a genuine problem). Clause (5) states that a d-wff may occur in an e-scenario only if (a) it is one of the initial declarative premises (e.g. a rule defining the ‘local user’ concept), (b) it is an answer to a query which labels a branching point of the e-scenario (like e.g. the answer ‘usr(d)’ in our example) or (c) it may be inferred from d-wffs which have already appeared on a given branch of the tree (e.g. formula ‘\(usr(d) \wedge live(d,p)\)’ was inferred from ‘usr(d)’ and ‘live(d, p)’). Clause (6) ensures that an e-scenario branches only on questions (not on d-wffs). Clause (8) guarantees that: (a) if a question (other than the root) appears in an e-scenario, its appearance must be properly justified: it should be erotetically implied by some previous question (and possibly some d-wffs), and (b) a question may be succeeded in an e-scenario only by another question (as is the case with question ‘\(? \{ locusr(d), \lnot locusr(d), \lnot usr(d) \}\)’ in our example) or—if it is a branching point—by its direct answers. In the latter situation each of the direct answers to such a query must immediately succeed the query. Finally, clause (7), together with (6), (8) and the fact that e-scenarios are finite, entails the existence of at least one branching point.

Now we are in a position to explain the role played in e-scenarios by questions like ‘\(? \{ locusr(d), \lnot locusr(d), \lnot usr(d) \}\)’. The point is that the relation of erotetic implication is not transitive. It holds between, e.g., questions ‘? locusr(d)’ and ‘\(? \{ locusr(d), \lnot locusr(d), \lnot usr(d) \}\)’, and also between questions ‘\(? \{ locusr(d)\), \(\lnot locusr(d), \lnot usr(d) \}\)’ and ‘? usr(d)’, but nevertheless it does not hold between ‘? locusr(d)’ and ‘? usr(d)’, for the following reason: the affirmative answer to the second question, i.e. ‘usr(d)’ (together with the declarative premises), does not warrant that the true answer to ‘? locusr(d)’ lies in some proper subset of the set of direct answers to this question. In other words, clause (2) of the definition of erotetic implication is not fulfilled. But we do want the erotetic part of an e-scenario to be regulated by this relation, and thus we introduce the auxiliary question ‘\(? \{ locusr(d), \lnot locusr(d), \lnot usr(d) \}\)’, which ensures that consecutive questions on the path are linked by erotetic implication.

As can be seen, e-scenarios are defined in terms of syntax and semantics, however:

Viewed pragmatically, an e-scenario provides us with conditional instructions which tell us what questions should be asked and when they should be asked. Moreover, an e-scenario shows where to go if such-and-such a direct answer to a query appears to be acceptable and goes so with respect to any direct answer to each query (Wiśniewski 2003, p. 422).

Taking into account their applications, one of the most important properties of e-scenarios is expressed by the Golden Path Theorem. We present here a somewhat simplified version of the theorem [for the original see Wiśniewski (2003, p. 411) or Wiśniewski (2013, p. 126)]:

Theorem 1

(Golden Path Theorem) Let \(\varPhi \) be an e-scenario for a question Q relative to a set of d-wffs X. Assume that Q is a sound question, and that all the d-wffs in X are true. Then e-scenario \(\varPhi \) contains at least one path \(\mathbf {s}\) such that:
  1. (1)

    each d-wff of \(\mathbf {s}\) is true,

     
  2. (2)

    each question of \(\mathbf {s}\) is sound, and

     
  3. (3)

    \(\mathbf {s}\) leads to a true direct answer to Q.

     

The theorem states that if an initial question is sound and all the initial premises are true, then an e-scenario contains at least one golden path, which leads to a true direct answer to the initial question of the e-scenario. Intuitively, such an e-scenario provides a search plan which might be described as safe and finite—i.e. it will end up in a finite number of steps and it will lead to an answer to the initial question (cf. Wiśniewski 2003, 2004).

E-scenarios may be complete or incomplete. An e-scenario is complete if each direct answer to the principal question is the endpoint of some path (the label of a leaf), and incomplete when at least one of the answers to the principal question does not label any leaf. Moreover, if \(\varPhi \) is an e-scenario relative to an empty set of d-wffs, then we say that \(\varPhi \) is a pure e-scenario.

What is important is that if a query is a complex question (that is, it has a complex logical structure) and the answerer finds it difficult to answer, one may try to execute another e-scenario with the troublesome query playing the role of the initial question. Then, again, when executing this e-scenario one may arrive at a query that has a complex logical structure and try yet another e-scenario with this query as the initial question. In other words, e-scenarios may be modified by the process of embedding one e-scenario into another. More formally, suppose that we have an e-scenario \(\varPhi \) for a question Q built on the basis of a set of premises X, and that a query \(Q_m\) appears on one of the paths of the scenario. Suppose also that we have an e-scenario \(\varPsi \) for question \(Q_m\) built on the basis of a set of premises Y, and that question \(Q^*\) is the first auxiliary question of \(\varPsi \). Now we may embed \(\varPsi \) into \(\varPhi \) (provided that certain conditions are met; cf. Wiśniewski 2003, pp. 413–414). The new e-scenario obtained in this way presents a modified search plan for question Q. The e-scenario embedding schema is presented in Fig. 2.
Fig. 2

E-scenarios embedding schema (cf. Wiśniewski 2008)

Using the mechanism of systematic embedding one may prove a very important result formulated below as Lemma 1. (An atomic polar question based on a propositional variable \(p_i\) is a question of the form: \(? \{ p_i, \lnot p_i \}\). A non-atomic polar question is a polar question based on a compound formula.)

Lemma 1

(Wiśniewski 2003, p. 419) If Q is a non-atomic polar question, then there exists a complete e-scenario for Q such that each query of this e-scenario is a polar question based on a propositional variable that occurs in Q.

Moreover, the above result may be generalised to non-polar questions in the following way:

Theorem 2

(Wiśniewski 2003, p. 421) If Q is not an atomic polar question based on a propositional variable, then there exists a complete e-scenario for Q relative to a disjunction of all the direct answers to Q such that each query of this scenario is a polar question based on a propositional variable that occurs in Q.

The reader may find more information on embedding and its applications in Wiśniewski (2013).

In Sect. 4.3 we will consider e-scenarios whose queries may be answered by three possible answers: ‘yes’, ‘no’ or ‘I don’t know’. First, however, we need some more technical details.

4 The Logical Basis for Information Gaps

4.1 Three-Valued Logic \(\mathbf {K}3\)

In order to express the third answer mentioned above we will introduce the third logical value. For this purpose we shall consider Kleene’s strong three-valued logic \(\mathbf {K}3\).9 Let L be defined as in Sect. 3. The connectives are defined by the following truth-tables (we write 1 for truth, 0 for falsity and \(\frac{1}{2}\) for the third value):
$$\begin{aligned} \begin{array}{c|c} A & \lnot A \\ \hline 1 & 0 \\ \frac{1}{2} & \frac{1}{2} \\ 0 & 1 \end{array} \qquad \begin{array}{c|ccc} \wedge & 1 & \frac{1}{2} & 0 \\ \hline 1 & 1 & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & 0 & 0 \end{array} \qquad \begin{array}{c|ccc} \vee & 1 & \frac{1}{2} & 0 \\ \hline 1 & 1 & 1 & 1 \\ \frac{1}{2} & 1 & \frac{1}{2} & \frac{1}{2} \\ 0 & 1 & \frac{1}{2} & 0 \end{array} \end{aligned}$$
$$\begin{aligned} \begin{array}{c|ccc} \rightarrow & 1 & \frac{1}{2} & 0 \\ \hline 1 & 1 & \frac{1}{2} & 0 \\ \frac{1}{2} & 1 & \frac{1}{2} & \frac{1}{2} \\ 0 & 1 & 1 & 1 \end{array} \qquad \begin{array}{c|ccc} \leftrightarrow & 1 & \frac{1}{2} & 0 \\ \hline 1 & 1 & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & \frac{1}{2} & 1 \end{array} \end{aligned}$$

We augment language L with two additional unary connectives ‘\(\boxminus \)’ and ‘\(\boxplus \)’, and call the new language L3. The set of wffs of L3 is defined as the smallest set containing the set of wffs of L and such that if A is a wff of L, then (i) ‘\(\boxminus (A)\)’ is a wff of L3 and (ii) ‘\(\boxplus (A)\)’ is a wff of L3. Observe that the new connectives never occur inside the wffs of L and they cannot be iterated.

The intended reading of the new connectives is the following:
  • \(\boxminus A\)—an agent \(\mathtt {a}\) cannot decide if it is the case that A using the knowledge base \(\mathsf {D}\).

  • \(\boxplus A\)—an agent \(\mathtt {a}\) can decide if it is the case that A using the knowledge base \(\mathsf {D}\).

(Obviously, the connectives are not syntactically relativised to an agent and a database.) The connectives are characterised by the following truth-tables:10
$$\begin{aligned} \begin{array}{c|c|c} A & \boxminus A & \boxplus A \\ \hline 1 & 0 & 1 \\ \frac{1}{2} & 1 & 0 \\ 0 & 0 & 1 \end{array} \end{aligned}$$
Thus, semantically speaking, the new connectives reduce the set of possible values to the classical ones. Let us also observe that if we allowed negation to be placed in front of ‘\(\boxminus \)’, then the second connective could be introduced by the following definition: \(\boxplus A =_{defn} \lnot \boxminus A\).
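Read numerically, with 1 for truth, 0 for falsity and \(\frac{1}{2}\) for the third value, the connectives of L3 can be sketched as follows (a hypothetical helper of ours, not part of the paper’s formal apparatus):

```python
from fractions import Fraction

HALF = Fraction(1, 2)  # the third value, read as "unknown"

# Kleene's strong connectives over the values {0, 1/2, 1}:
def k_not(a):     return 1 - a
def k_and(a, b):  return min(a, b)
def k_or(a, b):   return max(a, b)
def k_imp(a, b):  return max(1 - a, b)  # A -> B evaluated as ~A v B
def k_iff(a, b):  return min(k_imp(a, b), k_imp(b, a))

# The two new connectives always take a classical value:
def box_minus(a): return 1 if a == HALF else 0  # "cannot decide A"
def box_plus(a):  return 1 if a != HALF else 0  # "can decide A"

print(k_and(1, HALF))     # 1/2: one true conjunct cannot settle a conjunction
print(k_imp(HALF, HALF))  # 1/2: K3 has no tautologies, not even p -> p
print(box_minus(HALF))    # 1: the information gap itself gets a classical value
# Semantically, box_plus behaves as the (syntactically forbidden) negation of box_minus:
assert all(box_plus(a) == 1 - box_minus(a) for a in (0, HALF, 1))
```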

4.2 Questions Introduced—Language \(L3^+\) and its Semantics

We will define language \(L3^+\), built upon L3, in which questions may be formed. [The construction of the language follows analogous constructions from Wiśniewski (2013).] The vocabulary of \(L3^+\) contains the vocabulary of L3 (thus also the connectives ‘\(\boxplus \)’, ‘\(\boxminus \)’) and the signs: ‘?’, ‘\(\{\)’, ‘\(\}\)’. As in the classical case [see Sect. 3 and Wiśniewski (2003)], by the declarative well-formed formulas of \(L3^+\) (d-wffs) we mean the well-formed formulas of L3. The notion of an e-wff (question) of \(L3^+\) is also defined as in the classical case (see schema (1) in Sect. 3); this time, however, the direct answers to a question are formulated in L3. We apply all the previous notational conventions.

We are interested in a specific category of ternary questions, which may be viewed as the counterparts of polar (yes-no) questions, provided with the third possible answer “it is not known whether” (or “the machine does not know”, “it was not decided”, etc.). Intuitively, this refers to the situation in which John asks Ann whether A holds, and she may answer with ‘Yes, A holds.’, ‘No, it does not hold.’ or simply ‘I don’t know whether A holds or not.’ The third answer, expressing the lack of knowledge with respect to A, will be symbolized by ‘\(\boxminus A\)’. Thus the question will be represented in language \(L3^+\) as follows:
$$\begin{aligned} ? \{ A, \lnot A, \boxminus A \} \end{aligned}$$
(5)
Expressions of the form (5) will be called ternary questions of \(L3^+\). For the sake of simplicity, we will represent them as ‘? A’. If A in schema (5) is a propositional variable, then (5) is called an atomic ternary question of \(L3^+\) or, more specifically, an atomic ternary question of \(L3^+\) based on the propositional variable A.

We will use the general setting of Minimal Erotetic Semantics (MiES) here (cf. Wiśniewski 1996, 2001, 2013; Peliš 2011). Roughly speaking, MiES is a very rich source of concepts used in semantical analysis of both declaratives and questions. Here we present some of them. The basic semantic notion is that of partition.

Definition 3

Let \(\mathcal {D}_{L3^+}\) be the set of d-wffs of language \(L3^+\). By a partition of language \(L3^+\) we mean an ordered pair \(\mathbf {P} = \langle \mathbf {T_P}, \mathbf {U_P} \rangle \) such that:
  • \(\mathbf {T_P} \cap \mathbf {U_P} = \emptyset \)

  • \(\mathbf {T_P} \cup \mathbf {U_P} = \mathcal {D}_{L3^+}\)

By a partition of the set \(\mathcal {D}_{L3^+}\) we mean a partition of language \(L3^+\). If for a certain partition \(\mathbf {P}\) and a d-wff A, \(A \in \mathbf {T_P}\), then we say that A is true in partition \(\mathbf {P}\); otherwise, A is untrue in \(\mathbf {P}\). What is essential for the semantics of \(L3^+\) is the notion of a \(\mathbf {K}3\)-admissible partition. First, we define the notion of a \(\mathbf {K}3\)-assignment as a function \(VAR \longrightarrow \{0, \frac{1}{2}, 1\}\). Next, we extend \(\mathbf {K}3\)-assignments to \(\mathbf {K}3\)-valuations according to the truth-tables of \(\mathbf {K}3\). Now we are ready to present:

Definition 4

We will say that partition \(\mathbf {P}\) is \(\mathbf {K}3\)-admissible provided that for some \(\mathbf {K}3\)-valuation V, the set \(\mathbf {T_P}\) consists of formulas true under V and the set \(\mathbf {U_P}\) consists of formulas which are not true under V.

A question Q is called sound under a partition \(\mathbf {P}\) provided that some direct answer to Q is true in \(\mathbf {P}\). We will call a question Q safe if Q is sound under each \(\mathbf {K}3\)-admissible partition. Note that in the three-valued setting a polar question is not safe. However, each ternary question of \(L3^+\) is safe.

We will make use of the notion of multiple-conclusion entailment (mc-entailment, for short),11 which denotes a relation between sets of d-wffs generalising the standard relation of entailment.

Definition 5

(Multiple-conclusion entailment in \(L3^+\)) Let X and Y be sets of d-wffs of language \(L3^+\). We say that X mc-entails Y in \(L3^+\), in symbols \(X \Vvdash _{L3^+} Y\), iff for each \(\mathbf {K}3\)-admissible partition \(\mathbf {P}\) of \(L3^+\), if \(X \subseteq \mathbf {T_P}\), then \(Y \cap \mathbf {T_P} \ne \emptyset \).

As a special case of mc-entailment we obtain:

Definition 6

(Entailment in \(L3^+\)) Let X be a set of d-wffs and A a single d-wff of \(L3^+\). We say that X entails A in \(L3^+\), in symbols \(X \models _{L3^+} A\), iff \(X \Vvdash _{L3^+} \{A\}\), that is, iff for each \(\mathbf {K}3\)-admissible partition \(\mathbf {P}\) of \(L3^+\), if each formula from X is true in \(\mathbf {P}\), then A is true in \(\mathbf {P}\).
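Since only finitely many propositional variables matter, mc-entailment lends itself to a brute-force check: one enumerates all K3-valuations of the relevant variables. A sketch (function names are ours) which also illustrates the safety observation made above:

```python
from itertools import product
from fractions import Fraction

HALF = Fraction(1, 2)

def k3_valuations(variables):
    """All K3-assignments of the given propositional variables."""
    for values in product([0, HALF, 1], repeat=len(variables)):
        yield dict(zip(variables, values))

def mc_entails(X, Y, variables):
    """Definition 5 by brute force: every K3-valuation making all of X
    true (value 1) makes at least one member of Y true.
    Formulas are functions from a valuation to a value in {0, 1/2, 1}."""
    return all(
        any(B(v) == 1 for B in Y)
        for v in k3_valuations(variables)
        if all(A(v) == 1 for A in X)
    )

p = lambda v: v["p"]
not_p = lambda v: 1 - v["p"]
box_p = lambda v: 1 if v["p"] == HALF else 0

# The ternary question ?{p, ~p, [-]p} is safe: the set of its direct
# answers is mc-entailed by the empty set ...
print(mc_entails([], [p, not_p, box_p], ["p"]))  # True
# ... while the polar question ?{p, ~p} is not: v(p) = 1/2 is a countermodel.
print(mc_entails([], [p, not_p], ["p"]))         # False
```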

The crucial concept for ESS is the one of erotetic implication.

Definition 7

(Erotetic implication in \(L3^+\)) Let Q and \(Q^*\) stand for questions of \(L3^+\) and let X be a set of d-wffs of \(L3^+\). We will say that Q \(L3^+\)-implies \(Q^*\) on the basis of X, in symbols \(\mathbf {Im}_{L3^+}(Q,X,Q^*)\), iff
  1. 1.

    for each \(A \in \mathbf {d}Q\), \(X \cup \{A\} \Vvdash _{L3^+} \mathbf {d}Q^*\), and

     
  2. 2.

    for each \(B \in \mathbf {d}Q^*\), there is a non-empty proper subset Y of \(\mathbf {d}Q\) such that \(X \cup \{B\} \Vvdash _{L3^+} Y\).

     

4.3 Erotetic Search Scenarios with Ternary Questions

Let us now come back to John and Ann. Suppose that John asks Ann a question which is compound, e.g. whether \(A \wedge B\). Then the following tree-diagram may represent a possible course of events, where the expression:
$$\begin{aligned} ? \pm \boxminus |A, B| \end{aligned}$$
(6)
refers to a question of the form:
$$\begin{aligned} ? \{ A \wedge B, A \wedge \lnot B, A \wedge \boxminus B, \lnot A \wedge B, \lnot A \wedge \lnot B, \lnot A \wedge \boxminus B, \boxminus A \wedge B, \boxminus A \wedge \lnot B, \boxminus A \wedge \boxminus B \} \end{aligned}$$
(7)
Question (7) is a ternary counterpart of the binary conjunctive question (recall (4)). Again, its role in the above schema is technical—it warrants the transmission of erotetic implication between questions.

The interesting thing that happens here is that John and Ann simplify the question ‘\(? A \wedge B\)’ by going through questions concerning A and B separately.

It is easy to establish that the following holds true:
  1. (1.1)

    \(\mathbf{Im}(? \{ A \wedge B, \lnot (A \wedge B), \boxminus (A \wedge B) \}, ? \pm \boxminus |A, B|)\)

     
  2. (1.2)

    \(\mathbf{Im}(? \pm \boxminus |A, B|, ? \{ A, \lnot A, \boxminus A \})\)

     
  3. (1.3)

    \(\mathbf{Im}(? \pm \boxminus |A, B|, ? \{ B, \lnot B, \boxminus B \})\)

     
Thus we may say that the above tree-diagram represents a certain e-scenario, whose elements are expressed in language \(L3^+\). Below we present other e-scenarios for ternary questions whose answers are compound d-wffs. E-scenarios of this kind are called standard e-scenarios for connectives.12 Their characteristic feature is that they contain queries whose direct answers are subformulas of direct answers to the main question, or such subformulas in the scope of ‘\(\lnot \)’ or ‘\(\boxminus \)’. Thus the standard e-scenarios may be viewed as analysing the logical structure of the problem expressed by the main question. The standard e-scenarios presented in this section are complete, and they have ternary questions as their only queries.
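Claims like (1.1) and (1.2) can be verified mechanically by combining the brute-force mc-entailment check with the clauses of Definition 7. A sketch (all names are ours; `im_l3` with an empty premise list plays the role of the two-argument Im used above):

```python
from itertools import product, combinations
from fractions import Fraction

HALF = Fraction(1, 2)

def k3_valuations(variables):
    for values in product([0, HALF, 1], repeat=len(variables)):
        yield dict(zip(variables, values))

def im_l3(dQ, X, dQstar, variables):
    """Brute-force check of Definition 7 (pure e-implication when X = [])."""
    vals = list(k3_valuations(variables))
    def entails(prem, concl):  # mc-entailment restricted to vals
        return all(any(B(v) == 1 for B in concl)
                   for v in vals if all(A(v) == 1 for A in prem))
    # Clause 1: X together with each answer to Q mc-entails the answers to Q*.
    if not all(entails(X + [A], dQstar) for A in dQ):
        return False
    # Clause 2: X with each answer to Q* mc-entails a non-empty proper subset of dQ.
    return all(any(entails(X + [B], list(Y))
                   for k in range(1, len(dQ))
                   for Y in combinations(dQ, k))
               for B in dQstar)

a = lambda v: v["A"]
b = lambda v: v["B"]
neg = lambda f: (lambda v: 1 - f(v))
box = lambda f: (lambda v: 1 if f(v) == HALF else 0)
conj = lambda f, g: (lambda v: min(f(v), g(v)))

ternary = lambda f: [f, neg(f), box(f)]                      # dQ of ?{F, ~F, [-]F}
nine = [conj(f, g) for f in ternary(a) for g in ternary(b)]  # dQ of schema (7)

print(im_l3(nine, [], ternary(a), ["A", "B"]))           # True: claim (1.2)
print(im_l3(ternary(conj(a, b)), [], nine, ["A", "B"]))  # True: claim (1.1)
```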
It is the case that:
(1.4) \(\mathbf{Im}(? \{ \lnot A, \lnot \lnot A, \boxminus \lnot A \}, ? \{ A, \lnot A, \boxminus A \})\)
The following also holds true:
(1.5) \(\mathbf{Im}(? \{ A \vee B, \lnot (A \vee B), \boxminus (A \vee B) \}, ? \pm \boxminus |A, B|)\)

(1.6) \(\mathbf{Im}(? \{ A \rightarrow B, \lnot (A \rightarrow B), \boxminus (A \rightarrow B) \}, ? \pm \boxminus |A, B|)\)

(1.7) \(\mathbf{Im}(? \{ A \leftrightarrow B, \lnot (A \leftrightarrow B), \boxminus (A \leftrightarrow B) \}, ? \pm \boxminus |A, B|)\)
Fig. 3

ESS with an initial question based on a compound formula

It may also happen that the initial question concerns the very solvability of a problem A. This kind of question may be represented as ‘\(? \{ \boxminus A, \boxplus A \}\)’ and its analysis may be conducted according to:
It is the case that:
(1.8) \(\mathbf{Im}(? \{ \boxminus A, \boxplus A \}, ? \{ A, \lnot A, \boxminus A \})\)
What is important is that, using the logical apparatus introduced in Sect. 4.2, we may show that all of the above standard e-scenarios have the golden path property.
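The table facts behind (1.8) are easily made explicit: each direct answer to the ternary question about A settles the solvability question. A small Python check, where ‘\(\boxplus A\)’ is read as ‘A is decided’, i.e. as true iff A has a classical value (this dual reading of ‘\(\boxminus \)’ is our assumption):

```python
U = 0.5                                        # the value 'unknown'
VALS = (0.0, U, 1.0)

def neg(a): return 1 - a                       # Kleene negation
def undec(a): return 1.0 if a == U else 0.0    # '⊟A': A is undecided
def dec(a):   return 1.0 if a != U else 0.0    # '⊞A': A is decided (assumed reading)

for a in VALS:
    # the answers A and ¬A to ?{A, ¬A, ⊟A} both yield the answer ⊞A ...
    if a == 1.0 or neg(a) == 1.0:
        assert dec(a) == 1.0
    # ... while the answer ⊟A is itself a direct answer to ?{⊟A, ⊞A}
    if undec(a) == 1.0:
        assert dec(a) == 0.0
```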

We may also wonder what happens when the arguments A, B of connectives in standard e-scenarios are compound as well. Well, let us recall the idea of embedding. As long as we are dealing with e-scenarios whose queries are of the form \(? \{ A, \lnot A, \boxminus A \}\), if A is logically compound, then we may embed a standard e-scenario for this question into the initial e-scenario, thus going down to queries based on propositional variables. Figure 3 presents an example e-scenario constructed in this manner.

Moreover, we may observe that a user of the e-scenario from Fig. 3 will discover a solution to the initial problem even if some sub-problems remain unsolved. Consider, e.g., the path of the ESS going through:
$$\begin{aligned} ?p; \boxminus p; ?q; \boxminus q; \boxminus (p \wedge q); ? \lnot r; ?r; \lnot r; p \wedge q \rightarrow \lnot r \end{aligned}$$
(8)
Since we have the above standard e-scenarios at our disposal, we may generate, for each d-wff A (built by means of classical connectives and/or ‘\(\boxminus \)’), a complete e-scenario for the question \(? \{ A, \lnot A, \boxminus A \}\) which has as its queries only ternary questions based on propositional variables. Moreover, this may be done in a purely mechanical way. Below we state this result as a precisely formulated theorem (see Theorem 4). This result has also been used in a Prolog implementation of an algorithm generating such e-scenarios. The program is available at https://inquestpro.wordpress.com/resources/software/

4.4 On Standard e-Scenarios for \(L3^+\): Logical Details

The concept of erotetic search scenario for a question (e-wff) of \(L3^+\) relative to a set of d-wffs of \(L3^+\) is defined according to Definition 2, with the exception that ‘\(\models \)’ refers to ‘\(\models _{L3^+}\)’ and ‘\(\mathbf {Im}\)’ should be understood as ‘\(\mathbf {Im}_{L3^+}\)’.

As in the classical case (see Sect. 3) we arrive at:

Theorem 3

(Golden Path Theorem) Let \(\varPhi \) be an e-scenario for a question Q (of language \(L3^+\)) relative to a set X of d-wffs (of \(L3^+\)). Let \(\mathbf {P}\) be a K3-admissible partition of language \(L3^+\) such that Q is sound in \(\mathbf {P}\) and all the d-wffs in X are true in \(\mathbf {P}\). Then the e-scenario \(\varPhi \) contains at least one path \(\mathbf {s}\) such that:
  1. each d-wff of \(\mathbf {s}\) is true in \(\mathbf {P}\),

  2. each question of \(\mathbf {s}\) is sound in \(\mathbf {P}\), and

  3. \(\mathbf {s}\) leads to a direct answer to Q which is true in \(\mathbf {P}\).

Proof

The proof of this theorem is a straightforward reformulation and extension of the proof given by Wiśniewski (2003). We mention the basic facts on which it relies.

First, each d-wff which is not a direct answer to a query is entailed (in the sense of Definition 6) by some d-wffs which have already appeared on a given path. Similarly, auxiliary questions (and thus all queries) are erotetically implied (in the sense of Definition 7) by some wffs occurring on a given path. Second, the entailment relation preserves truth under \(\mathbf {K}3\)-admissible partitions, and the e-implication relation preserves soundness of questions under \(\mathbf {K}3\)-admissible partitions (more specifically, it transmits the soundness of the implying question and the truth of the d-wffs involved into the soundness of the implied question). Third, if a query is sound under a partition, then by clause 8b of Definition 2 of an e-scenario the direct answer which is true under this partition labels one of the immediate successors of this query.

All the details dependent on the \(\mathbf {K}3\)-semantics, especially the fact that the schemas presented in Sect. 4.3 are e-scenarios for questions of \(L3^+\), may easily be checked with the \(\mathbf {K}3\) tables. We leave this to the reader.
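For instance, the seven leaves of the standard e-scenario for conjunction can be checked mechanically. A sketch in Python, assuming that entailment (Definition 6) amounts here to preservation of the value 1 (truth) over all \(\mathbf {K}3\) valuations, and reading ‘\(\boxminus A\)’ as ‘A has the value \(\frac{1}{2}\)’:

```python
from itertools import product

U = 0.5
VALS = (0.0, U, 1.0)

def neg(a): return 1 - a                       # Kleene negation
def conj(a, b): return min(a, b)               # Kleene strong conjunction
def undec(a): return 1.0 if a == U else 0.0    # '⊟'

def entails(premises, conclusion):
    """Premises |= conclusion: whenever all premises have value 1,
    so does the conclusion (checked over all valuations of A, B)."""
    return all(conclusion(a, b) == 1.0
               for a, b in product(VALS, repeat=2)
               if all(p(a, b) == 1.0 for p in premises))

A  = lambda a, b: a
B  = lambda a, b: b
nA = lambda a, b: neg(a)
nB = lambda a, b: neg(b)
uA = lambda a, b: undec(a)
uB = lambda a, b: undec(b)
AandB = lambda a, b: conj(a, b)

cases = [                  # the seven paths of the conjunction scenario
    ([A, B],   lambda a, b: AandB(a, b)),
    ([A, nB],  lambda a, b: neg(AandB(a, b))),
    ([nA],     lambda a, b: neg(AandB(a, b))),
    ([uA, nB], lambda a, b: neg(AandB(a, b))),
    ([A, uB],  lambda a, b: undec(AandB(a, b))),
    ([uA, B],  lambda a, b: undec(AandB(a, b))),
    ([uA, uB], lambda a, b: undec(AandB(a, b))),
]
assert all(entails(p, c) for p, c in cases)
```

Each case records the answers collected on a path and the final answer the path leads to; the assertion confirms that the former always \(\mathbf {K}3\)-entail the latter.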

And finally:

Theorem 4

Let Q be a ternary question of language \(L3^+\), i.e., \(Q = ?\{A, \lnot A, \boxminus A\}\), where A is a compound d-wff of \(L3^+\). There exists a pure and complete e-scenario for Q such that each query of the e-scenario is an atomic ternary question based on a propositional variable that occurs in Q.

Proof

In Wiśniewski (2003) the reader may find the proof for the classical setting. Again, in our case there is not much to be added; thus, instead of presenting a reformulation of the existing proof, we describe the algorithm on which the Prolog program mentioned in Sect. 4.3 is based.

For our present purposes we call a sequence s of wffs a partial path whenever, for some i, \(s_i\) is a question of the form ? D, where D is a compound d-wff, and \(s_{i+1}\) is a direct answer to this question, i.e., \(s_{i+1}\) is either D, \(\lnot D\) or \(\boxminus D\). That is, a partial path is a sequence containing a non-atomic ternary query (since D is compound) followed by a direct answer to it.

In the first step, an e-scenario for a compound d-wff A is generated as a list of partial paths of the form:
$$\begin{aligned}&[? A, A]\end{aligned}$$
(9)
$$\begin{aligned}&[? A, \lnot A]\end{aligned}$$
(10)
$$\begin{aligned}&[? A, \boxminus A] \end{aligned}$$
(11)
(Observe that the set of three sequences of the above form is not an e-scenario yet, since there is no query. Thus the analysis in Prolog of question ‘\(? \{p, \lnot p, \boxminus p\}\)’ does not produce an e-scenario. This is how it should be.) Next, for each partial path the program analyses the structure of A and fits one of the standard e-scenarios to it. For example, if A is of the form \(B \wedge C\), then it replaces:
  1. a partial path (9) with the sequence:

     \([? (B \wedge C), ? (\pm \boxminus |B, C|), ? B, B, ? C, C, B \wedge C]\)

  2. a partial path (10) with three sequences:

     \([? (B \wedge C), ? (\pm \boxminus |B, C|), ? B, B, ? C, \lnot C, \lnot (B \wedge C)]\)
     \([? (B \wedge C), ? (\pm \boxminus |B, C|), ? B, \lnot B, \lnot (B \wedge C)]\)
     \([? (B \wedge C), ? (\pm \boxminus |B, C|), ? B, \boxminus B, ? C, \lnot C, \lnot (B \wedge C)]\)

  3. a partial path (11) with three sequences:

     \([? (B \wedge C), ? (\pm \boxminus |B, C|), ? B, B, ? C, \boxminus C, \boxminus (B \wedge C)]\)
     \([? (B \wedge C), ? (\pm \boxminus |B, C|), ? B, \boxminus B, ? C, C, \boxminus (B \wedge C)]\)
     \([? (B \wedge C), ? (\pm \boxminus |B, C|), ? B, \boxminus B, ? C, \boxminus C, \boxminus (B \wedge C)]\)
The reader may easily check that the set of seven sequences presented above constitutes an e-scenario (a pure scenario, actually) for question ? A. If A is of one of the forms: \(B \vee C\), \(B \rightarrow C\), \(B \leftrightarrow C\), \(\lnot B\), \(\boxminus B\), then the program proceeds in a similar way, according to a suitable schema of e-scenario.

The sequences of wffs created so far are saved in the form of a list of lists. Next, the following step is iterated: the first list (sequence of wffs) is analysed, and if it is a partial path (i.e. if it contains a non-atomic query followed by a direct answer to it), then it is replaced with a list (or with lists) according to the pattern illustrated above (that is, in accordance with the schemas of e-scenarios presented in Sect. 4.3). If it is not a partial path (i.e. it does not contain any non-atomic query), then it is removed from the list (to be returned at the end). The reader can see that during this stage the Prolog program actually applies the embedding procedure.\(\square \)
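The replacement step for conjunction can be sketched as follows. This is a toy reconstruction in Python, not the authors' Prolog program; the tuple representation of formulas and the treatment of \(? \pm \boxminus |B, C|\) as an unanalysed marker are our own choices:

```python
# Formulas as nested tuples: atoms are strings; ('not', A), ('box', A), ('and', A, B).
def q(d):                        # the ternary question ? {D, ¬D, ⊟D}
    return ('?', d)

def expand_conj(path):
    """If `path` is a partial path whose non-atomic query concerns B ∧ C,
    replace it by the sequences of the standard conjunction e-scenario."""
    for i, wff in enumerate(path):
        if isinstance(wff, tuple) and wff[0] == '?' and \
           isinstance(wff[1], tuple) and wff[1][0] == 'and':
            d = wff[1]
            _, b, c = d
            prefix, answer, suffix = path[:i + 1], path[i + 1], path[i + 2:]
            pm = ('?pm', b, c)                   # the question ? ± ⊟ |B, C|
            if answer == d:                      # answer B ∧ C: one sequence
                bodies = [[pm, q(b), b, q(c), c]]
            elif answer == ('not', d):           # answer ¬(B ∧ C): three sequences
                bodies = [[pm, q(b), b, q(c), ('not', c)],
                          [pm, q(b), ('not', b)],
                          [pm, q(b), ('box', b), q(c), ('not', c)]]
            else:                                # answer ⊟(B ∧ C): three sequences
                bodies = [[pm, q(b), b, q(c), ('box', c)],
                          [pm, q(b), ('box', b), q(c), c],
                          [pm, q(b), ('box', b), q(c), ('box', c)]]
            return [prefix + body + [answer] + suffix for body in bodies]
    return None

# the first step for A = p ∧ q: the three initial partial paths (9)-(11) ...
A = ('and', 'p', 'q')
paths = [[q(A), A], [q(A), ('not', A)], [q(A), ('box', A)]]
# ... expand into the seven sequences of the pure e-scenario for ? A
scenario = [s for p in paths for s in expand_conj(p)]
assert len(scenario) == 7
```

Iterating this step over every remaining partial path (with analogous rules for the other connectives) reproduces the descent to queries based on propositional variables described in the proof.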

Let us stress once again that due to the golden path property we are guaranteed that once a sound question is posed and the declaratives assumed are true, we will reach the final solution provided we gain true answers to the queries. This property relies primarily on the fact that the questions which occur in the scenarios are erotetically implied by the previous elements of the structure.

We have used Kleene's strong logic as the basis since we believe that this logic gives a characterisation of the logical connectives which suits our purposes very well; e.g. it gives the value \(\frac{1}{2}\) (unknown) to an implication whenever its antecedent and consequent have this value (see Sect. 4.1). Let us stress that we think of material implication here. If ‘\(A \rightarrow B\)’ is understood as ‘\(\lnot A \vee B\)’, and both A and B are undecided, then there is no basis for deciding either ‘\(\lnot A\)’ or ‘\(\lnot A \vee B\)’. Thus ‘\(A \rightarrow B\)’ must be left undecided.13
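This reading can be spelled out with the tables directly. A one-line check, writing the values 0, \(\frac{1}{2}\), 1 as 0, 0.5, 1 and defining ‘\(\rightarrow \)’ via ‘\(\lnot A \vee B\)’:

```python
def impl(a, b):
    # Kleene implication: A -> B read as ¬A ∨ B, i.e. max(1 - A, B)
    return max(1 - a, b)

assert impl(0.5, 0.5) == 0.5   # undecided antecedent and consequent: undecided
assert impl(0.0, 0.5) == 1.0   # false antecedent: true, as in the classical case
```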

Let us emphasize, however, that we could have employed some other three-valued logic and made the erotetic machinery fit. In other words, the level of erotetic concepts is largely independent of the logical basis. The former is adjusted to the latter by providing suitable definitions of admissible partitions.

5 Modelling of the Problem Solving Process

5.1 More on Embedding

In the previous section we introduced e-scenarios with ternary questions but with no declarative premises. Now we will consider a situation when the declarative premises are present, and moreover, an information gap will occur explicitly among the premises.

Let us imagine that an agent a wants to establish whether A is the case. The agent knows that \(A \leftrightarrow B \wedge C\), but knows nothing about B. We may now imagine that a solves his/her problem according to the following e-scenario (as can be observed, a's premise and the information gap \(\boxminus B\) are incorporated into the initial premises of the e-scenario):
In building this e-scenario we rely on the fact that: \(\mathbf{Im} (? \{A, \lnot A, \boxminus A \}, \{ A \leftrightarrow B \wedge C, \boxminus B \}, ?\{ C, \lnot C, \boxminus C \})\). In this example \(\boxminus B\) might be treated as an information gap. However, there is one possible course of events that leads to an answer to the initial question despite the lack of knowledge about B, namely the case where the answer to the question \(? \{ C, \lnot C, \boxminus C \}\) is negative (then the answer to the initial question is also negative). We have seen the same effect in the previous section: a definite solution of the main problem may be arrived at despite the lack of knowledge concerning the subproblems. We may say that the proposed e-scenario offers three cognitive situations (from the most to the least preferable):
  • A ‘maximal’ cognitive situation is represented by the path going through the answer \(\lnot C\), because it leads to \(\lnot A\), i.e. a definite answer to the initial question.

  • A ‘minimal’ one is reflected by the path which goes through the answer C, as in this situation the questioning process ends up with some knowledge gains (despite the fact that we did not manage to solve the initial problem, we know that C).

  • A ‘zero knowledge’ situation is represented by the third path going through \(\boxminus C\), because it finishes the questioning process without any knowledge gains.
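The fact behind the ‘maximal’ path can again be verified against the \(\mathbf {K}3\) tables: whenever the premise \(A \leftrightarrow B \wedge C\) and the answer \(\lnot C\) are true, so is \(\lnot A\). A sketch, with ‘\(\leftrightarrow \)’ taken as the conjunction of the two Kleene implications and entailment as preservation of the value 1 (both are our assumptions about the intended semantics):

```python
from itertools import product

VALS = (0.0, 0.5, 1.0)

def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def impl(a, b): return max(1 - a, b)
def iff(a, b): return min(impl(a, b), impl(b, a))   # Kleene biconditional

# whenever A <-> (B ∧ C) and ¬C are both true, ¬A is true as well
for a, b, c in product(VALS, repeat=3):
    if iff(a, conj(b, c)) == 1.0 and neg(c) == 1.0:
        assert neg(a) == 1.0
```

Note that the check succeeds regardless of the value of B, which is exactly why the answer \(\lnot C\) closes the problem despite the gap \(\boxminus B\).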

Now we will analyse in a similar manner the example presented in Sect. 3. Let us recall that John wanted to know if user d is a local user of the Alpha computer system, and that he would question Ann to check this. Ann's knowledge about the computer system in question is represented by the following facts: usr(a), live(a,p), usr(b), live(b,z), usr(c), live(c,p), live(d,z), while John's concept of a local user might be expressed by the rules: \(locusr(x) \rightarrow usr(x)\), \(locusr(x) \rightarrow live(x,p)\), \(usr(x) \wedge live(x,p) \rightarrow locusr(x)\).
Again, we assume that John does want to establish whether d is a local user but will not ask for this directly. This time, however, we may suppose that gaining an answer to this question is John's hidden agenda [so this time John does not ask Ann directly whether d is a local user because he wishes to hide his intentions; cf. Urbański and Łupkowski (2010) for this kind of ESS-analysis, cf. also Genot and Jacot (2012)]. Using the ternary questions introduced in Sect. 4.3, we may propose the following e-scenario for John to guide his questioning process:
In designing the schema we rely on the following logical fact:
$$\begin{aligned}&\mathbf{Im} (? \{ A, \lnot A, \boxminus A \}, \{A \rightarrow C_1, A \rightarrow C_2, C_1 \wedge C_2 \rightarrow A \}, ?\{ C_1 \wedge C_2, \lnot (C_1 \wedge C_2),\\&\quad \boxminus (C_1 \wedge C_2) \}) \end{aligned}$$
The paths of the scenario represent seven different courses of events (four of the ‘maximal’, one of the ‘minimal’, and two of the ‘zero knowledge’ type).
We may now imagine that—in order to solve the initial problem of the e-scenario—John will ask Ann auxiliary questions and, depending on the answers received, he will choose further auxiliary questions on the basis of the scenario. In our example the activated (executed) part of the e-scenario would be the following:

It can be noted that an information gap ‘\(\boxminus usr(d)\)’ has appeared, because Ann did not provide information concerning the fact usr(d) (it is not present in Ann’s knowledge base). Despite this, John reached the solution to the initial problem after obtaining the answer to the auxiliary question \(? \{ live(d,p)\), \(\lnot live(d,p)\), \(\boxminus live(d,p) \}\).

We may also suppose that John does not obtain enough information to solve his initial problem (the e-scenario execution activates a zero-knowledge path), or that he simply wants to establish if d is a user of the Alpha system. To do this John might change his information source. Let us assume that he will now ask Gill. Gill has some information about users of the Beta information system (rg stands for ‘is a registered Beta system user’):
$$\begin{aligned}&rg(d)\\&rg(e)\\&rg(f) \end{aligned}$$
John uses the following ‘user’ concept in this case: \(rg(x) \leftrightarrow \lnot usr(x)\) (i.e. x is a registered Beta system user iff x is not a user of the Alpha system). As in the previous case, we will assume that John will devise his strategy on the basis of an e-scenario.
Using the e-scenarios embedding procedure (cf. Fig. 2) John may now construct one e-scenario involving the concept of a registered Beta system user. Thanks to the properties of e-scenarios and the embedding procedure, the new concept is used in order to obtain an answer to John's initial question.

5.2 A Natural Language Analysis

Is the ESS-apparatus amenable to modelling natural language dialogues?14 Well, let us now consider a more sophisticated example extracted from the Basic Electricity and Electronics Corpus (BEE) (Rosé et al. 1999), which contains tutorial dialogues from an electronics course. Analysis of dialogues in educational contexts reveals that a teacher may use a strategy of eliciting ‘I don't know’ answers in order to identify a student's lack of knowledge or understanding. After such a situation the teacher might ask a series of sub-questions that may lead the student to a better understanding of the given topic. Often such auxiliary questions are accompanied by some additional information about the topic, as in the example below (BEE(F), stud48).15
As we can see, when the Tutor asks the Student the first question, each option seems equally possible from the Student's point of view. Thus the initial question might be reconstructed as follows:
$$\begin{aligned}&?\{ p \wedge q, p \wedge \lnot q, p \wedge \boxminus q, \lnot p \wedge q, \lnot p \wedge \lnot q, \lnot p \wedge \boxminus q, \boxminus p \wedge q, \boxminus p \wedge \lnot q,\\&\quad \boxminus p \wedge \boxminus q \} \end{aligned}$$
where p stands for ‘current flows through the wire’ and q for ‘current flows through the meter’. Remember that this type of question is symbolised by \(? \pm \boxminus |p, q|\). We can also identify the Tutor's premise explicated in the dialogue; it falls under the schema \(r \leftrightarrow \lnot (p \wedge q)\), where r stands for ‘the leads obstruct the current flow’. Now the dialogue may be modelled by the e-scenario presented in Fig. 4.
Fig. 4

ESS for the exemplary dialogue (BEE(F), stud48)

After the Tutor asks the first question, (s)he reformulates it, which is reflected in the e-scenario by the question \(? \{ p \wedge \lnot q, \lnot p \wedge q, p \wedge q, \lnot p \wedge \lnot q, \boxminus p \vee \boxminus q \}\). Now five answers are possible, and our Student chooses the last one (‘I don't know’ is represented by ‘\(\boxminus p \vee \boxminus q\)’). Then, to simplify the matter, the Tutor asks about p again, and after the Student's answer ‘\(\boxminus p\)’ (s)he introduces an additional declarative premise. The vertical dots indicate that something more would happen in the e-scenario if the Student's answers were different. The whole scenario relies on the following logical facts:
  • \(\mathbf {Im}(? \pm \boxminus |p, q|, \emptyset , ? \{ p \wedge \lnot q, \lnot p \wedge q, p \wedge q, \lnot p \wedge \lnot q, \boxminus p \vee \boxminus q \} )\)

  • \(\mathbf {Im}(? \{ p \wedge \lnot q, \lnot p \wedge q, p \wedge q, \lnot p \wedge \lnot q, \boxminus p \vee \boxminus q \}, \emptyset , ? \{ p, \lnot p, \boxminus p \} )\)

  • \(\mathbf {Im}(? \{ p \wedge \lnot q, \lnot p \wedge q, p \wedge q, \lnot p \wedge \lnot q, \boxminus p \vee \boxminus q \}, \{ r \leftrightarrow \lnot (p \wedge q) \}, ? \{ r, \lnot r, \boxminus r \} )\)
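The third implication above turns on the declarative premise \(r \leftrightarrow \lnot (p \wedge q)\): a definite answer concerning r settles the conjunction. A table check under the same assumed \(\mathbf {K}3\) readings as before (‘\(\leftrightarrow \)’ as the conjunction of the two Kleene implications, truth as the value 1):

```python
from itertools import product

VALS = (0.0, 0.5, 1.0)

def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def impl(a, b): return max(1 - a, b)
def iff(a, b): return min(impl(a, b), impl(b, a))   # Kleene biconditional

for p, q, r in product(VALS, repeat=3):
    if iff(r, neg(conj(p, q))) == 1.0:   # the Tutor's premise is true
        if r == 1.0:                      # answer r: ¬(p ∧ q) follows
            assert neg(conj(p, q)) == 1.0
        if neg(r) == 1.0:                 # answer ¬r: p ∧ q follows
            assert conj(p, q) == 1.0
```

Only the answer \(\boxminus r\) leaves the conjunction unsettled, which matches the role of the zero-knowledge branch in Fig. 4.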

6 Conclusions and Further Work

In this paper we have used erotetic search scenarios in order to model the behaviour of an agent solving a complex problem. The scenario assumed by the agent represents the strategy he/she applies to find the solution. The pivotal element of our model is the process of generating auxiliary questions: posing additional questions is a necessary element of the problem-solving process, and our model illustrates this. Last but not least, we have used a three-valued version of e-scenarios in order to show how the agent may proceed in gathering information and solving the problem even though some information is missing. Our model also explains how, in the presence of information gaps, the agent may arrive at a definite solution.

In his works (especially in Wiśniewski 2013) Wiśniewski provides a very general scheme for defining the concept of erotetic implication and that of erotetic search scenario via the tools of Minimal Erotetic Semantics. In our work we have instantiated this scheme with the three-valued logic K3. As far as the basic tools provided by IEL are concerned, we may say that, technically, we have merely instantiated the scheme; conceptually, however, we have redefined the notion of erotetic search scenario by adding a third answer expressing the lack of knowledge.

The most natural extension of our work seems to be a transition from Kleene's three-valued logic to Belnap's four-valued logic (see Belnap Jr 1977), where the fourth value is intuitively assigned to inconsistent pieces of information. A similar approach, though critical of Belnap's, may be found in Szałas (2013); it does not, however, deal with erotetic issues.

Another interesting issue to be examined in the future is the possibility of an algorithmic approach to problem solving with the use of erotetic search scenarios. Current work focuses on a more sophisticated implementation of the procedure of erotetic search scenario generation (in addition to the functionalities provided by the already mentioned Prolog implementation, it will also allow for dynamic ESS generation and modification, including the use of declarative premises). Such an algorithmic approach would also open the possibility of examining the computational complexity of the discussed procedures.

Footnotes

  1. IEL stands for Inferential Erotetic Logic. IEL was developed by Andrzej Wiśniewski in the 1990s. For more information see Wiśniewski (1995) or Wiśniewski (2013).

  2. The set-of-answers methodology in formalising questions is rooted in Hamblin's postulates. Its basic intuition is formulated as the first postulate, namely: “Knowing what counts as an answer is equivalent to knowing the question”; for wider discussion see e.g. Wiśniewski (2013, pp. 16–18), Peliš (2010).

  3. See also Wiśniewski and Leszczyńska-Jasion (2015) for a thorough analysis of the relations between the two paradigms: that of INQ and that of IEL.

  4. There is also one additional auxiliary question, of the form \(?\{ locusr(d), \lnot locusr(d), \lnot usr(d) \}\), which Ann is not actually asked. Its function is technical and will be explained later (see p. 8).

  5. One may observe that although we have used predicates here, the whole reasoning may be perfectly well modelled in the language of Classical Propositional Logic. We use this notation mainly for its readability. However, it is worth noticing that e-scenarios may also be defined and analysed for the non-trivial cases of questions expressed in the language of First-order Logic. The reader may find more information on this topic in Wiśniewski (2013), Chapter 7 and Part III.

  6. For the notion of ‘direct answer’, which will be used in this paper, see Chapter 2 of Wiśniewski (2013).

  7. For the details of such constructions see Wiśniewski (1995) or Wiśniewski (2013).

  8. E-scenarios are defined as trees in this case, but an alternative definition is possible. The reader may find both definitions and many examples in Wiśniewski (2001, 2003, 2013) or Wiśniewski (2014). For the very definitions of an e-scenario see also Leszczyńska-Jasion (2013).

  9. It is worth noticing that the motivation behind Kleene's strong logic is analogous to ours: the logical values are thought of as possible answers of a machine which may settle the answer as ‘true’ or ‘false’ or, for certain inputs, not settle any definite answer at all. Cf. Urquhart (2001, pp. 253–254), Bolc and Borowik (1992, p. 74).

  10. Semantically speaking, a connective having the same meaning as our ‘\(\boxplus \)’ has been introduced, for example, on the grounds of paraconsistent logics as a “consistency connective” ‘\(\circ \)’ (see Carnielli et al. 2007, p. 18). Our connectives may also be viewed as “characteristic functions” of definite logical values; compare Chapter 2 of Borowik (2003) and Bolc and Borowik (2003).

  11. Cf. Shoesmith and Smiley (1978).

  12. For standard e-scenarios see the literature concerning ESS.

  13. We do not aim at an adequate reconstruction of natural-language conditionals in our framework. See for example Priest (2008) for a good survey of the problems that such a reconstruction must encounter.

  14. For a more detailed discussion of this subject see also Łupkowski (2012).

  15. This notation indicates the BEE sub-corpus (F—Final) and the file number (stud48). Unfortunately no sentence numbering is available for the BEE corpus.

References

  1. Belnap, N. D. Jr. (1977). A useful four-valued logic. In J. M. Dunn & G. Epstein (Eds.), Modern uses of multiple-valued logic (pp. 5–37). Berlin: Springer.
  2. Bolc, L., & Borowik, P. (1992). Many-valued logics: Volume 1: Theoretical foundations. Berlin: Springer.
  3. Bolc, L., & Borowik, P. (2003). Many-valued logics: Volume 2: Automated reasoning and practical applications. Berlin: Springer.
  4. Borowik, P. (2003). Wybrane klasy logik skończenie wielowartościowych. Pewne formalizmy odrzucania wyrażeń. Częstochowa: Wydawnictwo WSP.
  5. Carnielli, W., Coniglio, M. E., & Marcos, J. (2007). Logics of formal inconsistency. In D. M. Gabbay & F. Guenthner (Eds.), Handbook of philosophical logic (Vol. 14, pp. 1–93). Berlin: Springer.
  6. Genot, E. J., & Jacot, J. (2012). How can questions be informative before they are answered? Strategic information in interrogative games. Episteme, 9(2), 189–204. doi:10.1017/epi.2012.8.
  7. Groenendijk, J. (2009). Inquisitive semantics: Two possibilities for disjunction. In P. Bosch, D. Gabelaia, & J. Lang (Eds.), Logic, language, and computation (Lecture notes in computer science, Vol. 5422, pp. 80–94). Berlin/Heidelberg: Springer.
  8. Groenendijk, J., & Roelofsen, F. (2011). Compliance. In A. Lecomte & S. Tronçon (Eds.), Ludics, dialogue and interaction (pp. 161–173). Berlin/Heidelberg: Springer.
  9. Jain, S., Osherson, D., Royer, J., & Sharma, A. (1999). Systems that learn: An introduction to learning theory (2nd ed.). New York: Bradford.
  10. Kelly, K. T. (2014). A computational learning semantics for inductive empirical knowledge. In A. Baltag & S. Smets (Eds.), Johan van Benthem on logic and information dynamics (pp. 289–337). Berlin: Springer.
  11. Leszczyńska-Jasion, D. (2013). Erotetic search scenarios as families of sequences and erotetic search scenarios as trees: Two different, yet equal accounts. Tech. rep., Adam Mickiewicz University.
  12. Łupkowski, P. (2010). Cooperative answering and inferential erotetic logic. In Łupkowski and Purver (2010), pp. 75–82.
  13. Łupkowski, P. (2012). Erotetic inferences in natural language dialogues. In Proceedings of the Logic & Cognition Conference, Poznań, pp. 39–48.
  14. Łupkowski, P. (2014). Compliance and pure erotetic implication. In V. Punčochář (Ed.), The logica yearbook 2013 (pp. 105–114). London: College Publications.
  15. Łupkowski, P. (2015). Question dependency in terms of compliance and erotetic implication. Logic and Logical Philosophy, 24(3), 357–376. http://apcz.pl/czasopisma/index.php/LLP/article/view/LLP.2015.002.
  16. Łupkowski, P., & Leszczyńska-Jasion, D. (2014). Generating cooperative question-responses by means of erotetic search scenarios. Logic and Logical Philosophy, 24(1), 61–78. http://apcz.pl/czasopisma/index.php/LLP/article/view/LLP.2014.017.
  17. Łupkowski, P., & Purver, M. (Eds.). (2010). Aspects of semantics and pragmatics of dialogue. SemDial 2010, 14th workshop on the semantics and pragmatics of dialogue. Poznań: Polish Society for Cognitive Science.
  18. Peliš, M. (2010). Set of answers methodology in erotetic epistemic logic. Acta Universitatis Carolinae Philosophica et Historica, 2, 61–74.
  19. Peliš, M. (2011). Logic of questions. PhD thesis, Charles University in Prague.
  20. Peliš, M., & Majer, O. (2010). Logic of questions from the viewpoint of dynamic epistemic logic. In M. Peliš (Ed.), The logica yearbook 2009 (pp. 157–172). London: College Publications.
  21. Peliš, M., & Majer, O. (2011). Logic of questions and public announcements. In N. Bezhanishvili, S. Löbner, K. Schwabe, & L. Spada (Eds.), Eighth international Tbilisi symposium on logic, language and computation 2009 (Lecture notes in computer science, pp. 145–157). Berlin: Springer.
  22. Priest, G. (2008). An introduction to non-classical logic: From if to is (2nd ed.). Cambridge: Cambridge University Press.
  23. Rosé, C. P., DiEugenio, B., & Moore, J. (1999). A dialogue-based tutoring system for basic electricity and electronics. In S. P. Lajoie & M. Vivet (Eds.), Artificial intelligence in education (pp. 759–761). Amsterdam: IOS.
  24. Shoesmith, D. J., & Smiley, T. J. (1978). Multiple-conclusion logic. Cambridge: Cambridge University Press.
  25. Švarný, P., Majer, O., & Peliš, M. (2014). Erotetic epistemic logic in private communication protocol. In M. Dančák & V. Punčochář (Eds.), The logica yearbook 2013 (pp. 223–237). London: College Publications.
  26. Szałas, A. (2013). How an agent might think. Logic Journal of the IGPL, 21(3), 515–535.
  27. Urbański, M. (2001). Synthetic tableaux and erotetic search scenarios: Extension and extraction. Logique et Analyse, 173–175, 69–91.
  28. Urbański, M., & Łupkowski, P. (2010). Erotetic search scenarios: Revealing interrogator's hidden agenda. In Łupkowski and Purver (2010), pp. 67–74.
  29. Urquhart, A. (2001). Basic many-valued logic. In D. M. Gabbay & F. Guenthner (Eds.), Handbook of philosophical logic (Vol. 2, pp. 249–295). Dordrecht: Kluwer.
  30. Wiśniewski, A. (1995). The posing of questions: Logical foundations of erotetic inferences. Dordrecht: Kluwer.
  31. Wiśniewski, A. (1996). The logic of questions as a theory of erotetic arguments. Synthese, 109(1), 1–25.
  32. Wiśniewski, A. (2001). Questions and inferences. Logique et Analyse, 173–175, 5–43.
  33. Wiśniewski, A. (2003). Erotetic search scenarios. Synthese, 134, 389–427.
  34. Wiśniewski, A. (2004). Erotetic search scenarios, problem-solving, and deduction. Logique et Analyse, 185–188, 139–166.
  35. Wiśniewski, A. (2008). Questions, inferences, and dialogues. Presentation at LONDIAL, SemDial workshop series on the semantics and pragmatics of dialogue, London, 2–4 June 2008.
  36. Wiśniewski, A. (2013). Questions, inferences, and scenarios (Studies in logic, logic and cognitive systems, Vol. 46). London: College Publications.
  37. Wiśniewski, A. (2014). Answering by means of questions in view of inferential erotetic logic. In J. Meheus, E. Weber, & D. Wouters (Eds.), Logic, reasoning and rationality (pp. 261–283). Berlin: Springer.
  38. Wiśniewski, A., & Leszczyńska-Jasion, D. (2015). Inferential erotetic logic meets inquisitive semantics. Synthese. doi:10.1007/s11229-013-0355-4.

Copyright information

© The Author(s) 2015

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

Department of Logic and Cognitive Science, Institute of Psychology, Adam Mickiewicz University, Poznań, Poland
