Reasoning About Agent Types and the Hardest Logic Puzzle Ever
Liu, F. & Wang, Y. Minds & Machines (2013) 23: 123. doi:10.1007/s11023-012-9287-x
Abstract
In this paper, we first propose a simple formal language to specify types of agents in terms of necessary conditions for their announcements. Based on this language, types of agents are treated as ‘first-class citizens’ and studied extensively in various dynamic epistemic frameworks which are suitable for reasoning about knowledge and agent types via announcements and questions. To demonstrate our approach, we discuss various versions of Smullyan’s Knights and Knaves puzzles, including the Hardest Logic Puzzle Ever (HLPE) proposed by Boolos (in Harv Rev Philos 6:62–65, 1996). In particular, we formalize HLPE and verify a classic solution to it. Moreover, we propose a spectrum of new puzzles based on HLPE by considering subjective (knowledge-based) agent types and relaxing the implicit epistemic assumptions in the original puzzle. The new puzzles are harder than the previously proposed ones in the literature, in the sense that they require deeper epistemic reasoning. Surprisingly, we also show that a version of HLPE in which the agents do not know the others’ types does not have a solution at all. Our formalism paves the way for studying these new puzzles using automatic model checking techniques.
Keywords
Agent types · Public announcement logic · Questioning strategy · Knights and Knaves · The hardest logic puzzle ever

Introduction
Three gods A, B, and C are called, in some order, True, False, and Random. True always speaks truly, False always speaks falsely, but whether Random speaks truly or falsely is a completely random matter. Your task is to determine the identities of A, B, and C by asking three yes/no questions; each question must be put to exactly one god. The gods understand English, but will answer all questions in their own language, in which the words for yes and no are da and ja, in some order. You do not know which word means which.
Boolos (1996) gave a lengthy solution which makes use of solutions to three simpler puzzles. Rabern and Rabern (2008) noticed that the puzzle may be trivialized under Boolos's original assumption about the behaviour of Random, and thus proposed an amended version of HLPE. Uzquiano (2010) gave a two-question solution to the amended version of HLPE and proposed an even harder one, which Wheeler and Barahona (2012) proved to be unsolvable in two questions. However, Wintein (2011) argues that the results of Wheeler and Barahona (2012) depend on a particular conception of answering self-referential questions truthfully or falsely, and proposes a two-question solution to Uzquiano's puzzle based on a different conception. Except for the formal truth theory presented in Wintein (2011), existing discussions of HLPE are mostly informal to some extent, featuring Boolean reasoning to find solutions expressed in natural language, which often involve self-referential questions. A complete formalization of such puzzles must take care of many different aspects that are hard to put together: questions and answers, liars and truth tellers, epistemic reasoning, and solution concepts for puzzles.
In this paper, we will give a purely formal, yet intuitive account of HLPE-like scenarios by introducing logical frameworks for reasoning about knowledge obtained through communication under uncertainty about agent types. As suggested by HLPE and other Knights and Knaves puzzles, people behave differently in their ways of exchanging information. The same utterance may carry different intended information depending on the type of the speaker. Here, by 'types', we mean the patterns that agents follow in communicating information. Knowledge of agent types is crucial in social communication, in particular in strategic settings where people have to interpret and predict the behaviours of their opponents. In developing our formal framework, our aim is not only to solve puzzles like HLPE, but also to handle general epistemic reasoning under uncertainty about agent types.
As for HLPE itself, there are several advantages to going purely formal. First of all, some of the existing solutions can be verified formally. More importantly, by making everything precise, we will discover the implicit epistemic assumptions behind those puzzles about agent types. As we will show, modifying those assumptions may change the nature of the puzzles, which also leads to even harder puzzles involving interesting and complicated epistemic reasoning. On the other hand, the formal approach also limits the language of questions that we can use in solving these puzzles. For example, the self-referential questions and temporal-related questions as in Wheeler and Barahona (2012) are not expressible in our frameworks due to difficulties in defining their semantics. The good aspect of such limitations is that we can now prove impossibility results, e.g., non-existence of solutions to certain harder puzzles. The ultimate goal behind the development of our formal framework is to automate the reasoning process and thus handle the puzzles and other applications in an automatic fashion using computational tools, without tedious analysis of combinatorics hidden behind the scenes.
Related work Our logical framework is based on Public Announcement Logic (PAL) (cf. Plaza 2007; Gerbrandy and Groeneveld 1997) where announcements update the knowledge of agents. The extra twist here is that who said what is important due to the different types of the speakers. Similar issues about agency have been considered in Liu (2004) and Liu (2009) where different revision policies of different agents towards new incoming information are studied. A particular type of agents, viz. the liar, has been studied in a dynamic epistemic framework similar to PAL in van Ditmarsch et al. (2011) and van Ditmarsch (2011), where the focus is on epistemic effects of lying. The aim of the current paper, however, is to move further by considering general agent types and epistemic reasoning about these. The treatment of the type language is inspired by the analysis of protocols in Wang (2011b) where agent types can be viewed as simple conditional protocol schemas.
We take agent types as first-class citizens in our logical frameworks by specifying them formally in a type language. Correspondingly, in the model we have type assignments for each agent. The interpretation of an announcement depends on its speaker’s type.
With both types and agents specified in our logical language, we can formulate complicated sentences and questions (e.g., ‘What would be his answer if he were asked whether he is a liar?’). On the other hand, from a technical point of view of expressive power, such intriguing formulas with complex questions and answers can be reduced to formulas of a simple epistemic logic (with types).
The puzzles are formalized in our framework as pairs consisting of a model and a goal formula. A solution is a questioning strategy that satisfies some conditions represented by model checking problems on the model.
In the rest of the paper, we will walk the reader through our technical developments step by step. Each step will be demonstrated by logic puzzles in the style of Knights and Knaves until we are ready to talk about HLPE and its variations. "Agent Types in Public Announcements" looks at agent types in public announcements. We propose the basic logical framework \({\tt PALT}^{\bf T}\) and provide a complete axiomatization via a reduction to \({\tt EL}^{\bf T},\) epistemic logic with type formulas. In "Agent Types in Questions and Answers", we enrich \({\tt PALT}^{\bf T}\) with question and answer operators to obtain a new logic \({\tt PQLT}^{\bf T}\). To formally discuss HLPE, we replace announcement-like answers in \({\tt PQLT}^{\bf T}\) by arbitrary utterances and obtain \({\tt PQLT}^{\bf T}_{\bf U},\) which also allows us to define solutions to the puzzles formally. \({\tt PQLT}^{\bf T}_{\bf U}\) is used in "Formalizing the Hardest Logic Puzzle Ever" to verify an existing solution to HLPE. Moreover, a spectrum of new, harder puzzles is proposed in "New Puzzles with Epistemic Twists", by considering subjective types instead of objective types and relaxing some of the epistemic assumptions in the original HLPE. We prove that a version of HLPE, where the agents do not know the others' types, does not have any solution at all. "Conclusion and Discussion" ends the paper with conclusions and further directions.
Agent Types in Public Announcements
Language and Semantics
In this work, an agent type specifies a necessary condition for an agent to announce a proposition. For example, a liar is someone who only announces false propositions, i.e., if he announces ϕ then ϕ must be false, but he does not need to announce every false proposition. We introduce the following type language to specify agent types formally.
Definition 1
Note that x and \({\varvec{\varphi}}\) are the only variables, thus \({K_{\user2{x}} {\varvec{\varphi}}\land K_{\user2{y}}{\varvec{\psi}}}\) is not a well-formed type. Each agent type η can also be viewed as a function assigning a precondition to each announcement made by an agent of this type.
We can use this type language to define many intuitive agent types.
Example 1
Type TT (truth teller): \({\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}} {\varvec{\varphi}}\)
Type LL (liar): \(\neg{\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\)
Type LT (bluffer): \(\top\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\).
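These three objective types can be mirrored in a few lines of code. The following Python sketch is our own illustrative encoding, not part of the formal system: a type is a function mapping an announced proposition, modelled as a predicate on worlds, to its precondition.

```python
# Illustrative encoding (ours): an agent type maps an announced
# proposition (a predicate on worlds) to its precondition.

def TT(phi):                     # truth teller: may announce phi only if phi holds
    return phi

def LL(phi):                     # liar: may announce phi only if phi fails
    return lambda w: not phi(w)

def LT(phi):                     # bluffer: may announce anything
    return lambda w: True

# A world where it is not raining: the truth teller cannot announce
# 'it is raining', but the liar and the bluffer can.
raining = lambda w: w["rain"]
world = {"rain": False}
print(TT(raining)(world), LL(raining)(world), LT(raining)(world))  # False True True
```

Note that, as in the formal definition, these are only necessary conditions: the functions say when an announcement is permitted, not that it must be made.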
Next, if the knowledge of the speaker is taken into account, we can define more realistic subjective types: whether a proposition can be announced depends on the knowledge of the speaker.
Example 2
Type STT (subjective truth teller): \(K_{\user2{x}}{\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\)
Type SLL (subjective liar): \(K_{\user2{x}}\neg{\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\).
Remark 1
Type PSTT (progressive subjective truth teller): \(K_{\user2{x}}{\varvec{\varphi}}\land K_{\user2{x}}\neg K_{\bf G}{\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\)
Type CSLL (cautious subjective liar): \(K_{\user2{x}}\neg{\varvec{\varphi}}\land K_{\user2{x}}\neg K_{\bf G}\neg{\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\)
Based on a finite set of agent types we can build our first logical language:
Definition 2
We call the announcement-free fragment of \({\tt PALT}^{\bf T}\) the epistemic language with type formulas \(({\tt EL}^{\bf T})\), and sometimes denote \({\tt PALT}^{\bf T}\) by \({\tt EL}^{\bf T}+[!_a\phi]\).
The superscript T in \({\tt PALT}^{\bf T}\) emphasises that the properties of \({\tt PALT}^{\bf T}\) may depend on the specific T that is selected. As usual, we have the following abbreviations: \( \bot:=\neg\top, \phi\vee\psi:=\neg(\neg \phi\wedge\neg \psi),\phi\rightarrow\psi:=\neg\phi\vee\psi, \langle {!_a\psi}\rangle\phi:=\neg[!_a\psi]\neg\phi, \hat{K}_a\phi:=\neg K_a\neg\phi \). We also write \(K^W_a\phi\) for \(K_a\phi\lor K_a\neg\phi,\) meaning that a knows whether ϕ. η(a) expresses that agent a is of type η, and [!_{a}ψ]ϕ says that if a can announce ψ, then ϕ holds after the announcement.
Recall that each η can be viewed as a function. Now given \(\eta=\psi({\varvec{\varphi}},{\user2{x}})\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}},\) let \(\eta(\phi,a)=\psi[\phi/{\varvec{\varphi}}, a/{\user2{x}}], \) i.e., replacing each occurrence of \({\varvec{\varphi}}\) in \(\psi({\varvec{\varphi}},{\user2{x}})\) with ϕ and each occurrence of x with a. Intuitively, an agent a of a type η can announce a concrete proposition ϕ only when η(ϕ, a) holds. Although two agents may announce the same proposition ϕ, the actual information that it carries can be different due to different agent types.
Definition 3
\({S'=\{t\mid t\in S \hbox{ and } {{\mathfrak{M}}}, t\,\vDash \, \lambda(t,a)(\psi,a)\}}\)
For each \(a\in {\bf G}, t\in S': \sim_a'=\sim_a|_{S'\times S'}, V'(t)=V(t)\) and λ′(t) = λ(t).
Remark 2
For generality, we do not assume that the agents always know their types, i.e., η(a)→ K_{a}η(a) is not valid, since in some cases an agent may not be aware of its own type although it behaves exactly according to this type.
The above semantics is similar to the one for the standard public announcement logic (PAL) (cf. Plaza 2007), where after an announcement of ϕ, we simply delete all the worlds that do not satisfy ϕ, namely all the worlds where ϕ cannot be truthfully announced. In our setting, under the extra information of agent types, after a's announcing ϕ we delete all worlds where a would not have been able to announce ϕ according to a's type.
\({S'=\{t\mid t\in S \hbox{ and } {{\mathfrak{M}}}, t \,\Vvdash\, \psi\}}\)
For each \(a\in {\bf G}, t\in S': \sim_a'=\sim_a|_{S'\times S'}, V'(t)=V(t)\) and λ′(t) = λ(t).
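The world-deletion reading of the type-sensitive update can be made concrete in a short sketch (our own encoding, with TT and LL as in Example 1): after \(!_a\psi\), a world t survives exactly when a's type at t permits announcing ψ.

```python
# Sketch (ours) of the type-sensitive update: after a announces psi,
# keep exactly the worlds t where a's type at t permits announcing psi.

def TT(phi):                         # truth teller precondition: phi itself
    return phi

def LL(phi):                         # liar precondition: the negation of phi
    return lambda w: not phi(w)

def update(worlds, a, psi, type_of):
    """Keep the worlds t with  t |= lambda(t, a)(psi, a)."""
    return [t for t in worlds if type_of[t][a](psi)(t)]

# Two worlds agreeing on the facts (psi holds in both) but not on a's
# type: a's announcing psi eliminates the world where a is a liar.
psi = lambda w: True
type_of = {"s": {"a": TT}, "t": {"a": LL}}
print(update(["s", "t"], "a", psi, type_of))  # ['s']
```

The design choice mirrors the semantics: the same announcement prunes different worlds depending on the speaker's type, which is exactly why who said what matters.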
It is a well-known result that public announcement logic can be translated back to epistemic logic qua expressiveness (cf. e.g., van Ditmarsch et al. 2007). This result clearly also holds in our setting with type formulas:
Proposition 1
\({\tt PAL}^{\bf T}\) is equally expressive as \({\tt PALT}^{\bf T}\) on S5 models with type assignments.
Proof
Knights and Knaves
Before moving on to technical results about \({\tt PALT}^{\bf T},\) we demonstrate the use of this simple yet powerful framework by some examples. Consider the following Knights and Knaves puzzle first introduced by Smullyan (1978).
Example 3
(Three inhabitants) On a fictional island, the inhabitants are either Knights, who always tell the truth, or Knaves, who always lie. A visitor D from the outside world meets three inhabitants A, B and C on the island. D asks them to tell their types. A says: B is a Knave. B says: C is a Knave. C says: A and B are Knaves. Now, is it possible for the visitor to find out the inhabitants' types from their statements?
\(\lambda(s,A)={\tt TT}\) and \({{{\mathfrak{M}}}_1,s\,\vDash \,{\tt LL}(B),}\)
\(\lambda(s,A)={\tt LL}\) and \({{{\mathfrak{M}}}_1,s\,\vDash \,\neg{\tt LL}(B)}\).
\(\lambda(s,A)={\tt TT}\) and \(\lambda(s,B)={\tt LL}\) (i.e., the worlds in the shape of TL_)
\(\lambda(s,A)={\tt LL}\) and \(\lambda(s,B)={\tt TT}\) (i.e., the worlds in the shape of LT_)
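Since only objective types are involved, the puzzle reduces to propositional reasoning and can be settled by brute force over the eight type assignments. The sketch below is our own encoding ('T' for Knight, 'L' for Knave), requiring that a speaker's statement be true iff the speaker is a Knight.

```python
from itertools import product

# Brute-force check of Example 3 (our encoding). Each inhabitant is a
# Knight ('T', always truthful) or a Knave ('L', always lying), and a
# speaker's statement must be true iff the speaker is a Knight.

def consistent(A, B, C):
    s1 = (B == 'L')               # A says: B is a Knave
    s2 = (C == 'L')               # B says: C is a Knave
    s3 = (A == 'L' and B == 'L')  # C says: A and B are Knaves
    return ((A == 'T') == s1) and ((B == 'T') == s2) and ((C == 'T') == s3)

solutions = [w for w in product('TL', repeat=3) if consistent(*w)]
print(solutions)  # [('L', 'T', 'L')]
```

Exactly one assignment survives, so D can indeed identify the inhabitants: B is a Knight while A and C are Knaves.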
Now let us consider another variation of the Knights and Knaves:
Example 4
(Death or Freedom) A and B are standing at a fork in the road. Now comes C. C knows that one of them is a Knight and the other is a Knave, but C does not know who is who. C also knows that one road leads to Death, and the other leads to Freedom. Suppose A is the honest Knight and knows which way leads to Freedom; how can A let C know the right way to go?
Note that this puzzle is not trivial, since although A can tell the truth, C may not be sure that A is telling the truth. To solve the puzzle, let us first prove a simple proposition:
Proposition 2
Given\({\bf T}=\{{\tt TT},{\tt LL}\}, a\in{\bf G}\)and any\({\tt PALT}^{\bf T}\)formula ϕ, let\(\phi^{\circ}_a=({\tt TT}(a)\to \phi)\land ({\tt LL}(a)\to \neg \phi)\). Now for any\({\tt PALT}^{\bf T}\)formula ϕ, and any model\({{{\mathfrak{M}}}, {{\mathfrak{M}}}|^a_{\phi^{\circ}_a}}\)is the submodel of\({{{\mathfrak{M}}}}\)obtained by keeping all the worlds that satisfy ϕ. Moreover, for any\(a,b\in{\bf G}\)and any modality-free\({\tt PALT}^{\bf T}\)formula ϕ we have: \(\,\vDash \,[!_a (\phi^{\circ}_a)]K_b\phi\).
Proof
\(\lambda(s,a)={\tt TT}\) and \({{{\mathfrak{M}}},s\,\vDash \,\phi^{\circ}_a}\)
\(\lambda(s,a)={\tt LL}\) and \({{{\mathfrak{M}}},s\,\vDash \,\neg\phi^{\circ}_a}\)
\({{{\mathfrak{M}}},s\,\vDash \,{\tt TT}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \, ({\tt TT}(a)\to \phi)\land ({\tt LL}(a)\to \neg \phi)}\)
\({{{\mathfrak{M}}},s\,\vDash \,{\tt LL}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \,\neg(({\tt TT}(a)\to \phi)\land ({\tt LL}(a)\to \neg \phi))}\)
\({{{\mathfrak{M}}},s\,\vDash \,{\tt TT}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \, {\tt TT}(a)\to \phi}\)
\({{{\mathfrak{M}}},s\,\vDash \,{\tt LL}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \, \neg({\tt LL}(a)\to \neg \phi)}\)
\({{{\mathfrak{M}}},s\,\vDash \,{\tt TT}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \, \phi}\)
\({{{\mathfrak{M}}},s\,\vDash \,{\tt LL}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \, \phi}\)
If the road behind the Knight is the one leading to Freedom (F_{A}) then he can say ‘if I am a Knight, then the road behind me leads to Freedom, and if I am a Knave, then the road behind me leads to Death’ (\(!_A (({\tt TT}(A)\to F_A)\land ({\tt LL}(A)\to \neg F_A))\)).
On the other hand if \(\neg F_A\) is true then \(!_A (({\tt TT}(A)\to \neg F_A)\land ({\tt LL}(A)\to F_A))\) is enough.
\(!_A \langle {!_B \neg F_A}\rangle\top\) reads: A announces that ‘The other guy would say that the road behind me leads to Death’ (similarly for \(!_A \langle {!_B F_A}\rangle\top\)). The verification of this solution is left to the reader as a simple exercise.
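The two-case announcement based on Proposition 2 can itself be checked by brute force. In this sketch (our own encoding), a world is a pair recording whether A's road leads to Freedom and A's type; announcing \(\phi^{\circ}_A\) with ϕ = F_A survives exactly in the worlds where F_A holds, whatever A's type is.

```python
# Brute-force check (our encoding) of announcing phi_A^o from
# Proposition 2, for phi = F_A: a world is a pair (f, ty) recording
# whether A's road leads to Freedom and A's type.

worlds = [(f, ty) for f in (True, False) for ty in ('TT', 'LL')]

def phi_circ(f, ty):                  # (TT(A) -> F_A) /\ (LL(A) -> ~F_A)
    return (ty != 'TT' or f) and (ty != 'LL' or not f)

def can_announce(f, ty):              # type precondition for announcing phi_circ
    v = phi_circ(f, ty)
    return v if ty == 'TT' else not v

kept = [w for w in worlds if can_announce(*w)]
print(kept)  # [(True, 'TT'), (True, 'LL')]
```

Every surviving world satisfies F_A, so C learns the road to Freedom without ever learning whether A is a Knight or a Knave.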
The above example demonstrates that subjective types and knowledge of the agents may make a difference. We will apply a similar modification to HLPE in the later part of the paper.
Axiomatization
Our language \({\tt PALT}^{\bf T}\) looks similar to \({\tt PAL}^{\bf T}\). In this section, we will make the link precise and use it to obtain a complete axiomatization of \({\tt PALT}^{\bf T}\). To ease the discussion, let us first define some useful notations.
Given a finite set of types T, let \(\delta^a_{\phi}\) be an abbreviation of \(\bigvee_{\eta\in {\bf T}} (\eta(a)\wedge \eta(\phi,a))\), where η(a) is a formula and η(ϕ, a) is the value (a formula) of the function η on the input (ϕ, a). Since in our models an agent has exactly one type at each state, each world can satisfy at most one disjunct of \(\bigvee_{\eta\in {\bf T}}(\eta(a)\wedge \eta(\phi,a))\).
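For instance, with T = {TT, LL} from Example 1, the abbreviation unfolds to

$$ \delta^a_{\psi} \;=\; \bigl({\tt TT}(a)\wedge \psi\bigr)\vee\bigl({\tt LL}(a)\wedge\neg\psi\bigr), $$

which a world satisfies exactly when a is a truth teller and ψ holds, or a is a liar and ψ fails, that is, exactly when a could have announced ψ.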
The result is a faithful \({\tt PAL}^{\bf T}\) translation of \({\tt PALT}^{\bf T}\) formulas.
Proposition 3
For any\({\tt PALT}^{\bf T}\)formula ϕ, and any pointed\({{{\mathfrak{M}}},s: {{\mathfrak{M}}},s\,\vDash \,\phi\iff {{\mathfrak{M}}},s\,\Vvdash\,t(\phi)}\).
Proof
We prove the proposition by induction on the structure of ϕ. The Boolean cases and the K_{a}ϕ case are trivial. Before we can approach the [!_{a}ψ]ϕ case, we need to prove the following claim within the induction for ϕ:
Here the crucial second ‘\(\iff\)’ is due to the following: (1) \({{{\mathfrak{M}}},s\,\Vvdash\,\eta^*(a)\iff{{\mathfrak{M}}},s\,\vDash \,\eta^*(a);}\) (2) the assumption that \({{{\mathfrak{M}}},s\,\vDash \,\psi\iff {{\mathfrak{M}}},s\,\Vvdash\, t(\psi); }\) (3) the fact that η^{*}(ψ, a) is constructed by Boolean connectives and epistemic operators based on ψ (by the definition of the type language); (4) the Boolean cases and the K_{a}ϕ case in the main inductive proof.
| Axiom schemas | (for arbitrary \(a,b\in{\bf G}\), \(p\in{\bf P}\cup{\bf P}_{\bf T}\)) |
| --- | --- |
| TAUT | all the instances of tautologies |
| MU | \(\bigwedge\nolimits_{a\in {\bf G}}(\bigwedge\nolimits_{\eta\in{\bf T}}(\eta(a)\leftrightarrow \bigwedge\nolimits_{\eta'\not=\eta,\eta'\in{\bf T}}\neg\eta'(a)))\) |
| DISTK | \(K_a(\phi\to\psi)\to (K_a\phi\to K_a\psi)\) |
| T | \(K_a\phi\to\phi\) |
| 4 | \(K_a\phi\to K_aK_a\phi\) |
| 5 | \(\neg K_a\phi\to K_a\neg K_a\phi\) |
| !ATOM | \([!_a\psi]p\leftrightarrow({\delta^a_{\psi}}\to p)\) |
| !NEG | \([!_a\psi]\neg \phi\leftrightarrow({\delta^a_{\psi}}\to \neg [!_a\psi]\phi)\) |
| !CON | \([!_a\psi](\phi\land\chi)\leftrightarrow([!_a\psi] \phi\land [!_a\psi]\chi)\) |
| !K | \([!_a\psi]K_b\phi\leftrightarrow ({\delta^a_{\psi}}\to K_b [!_a\psi]\phi)\) |

| Rules | |
| --- | --- |
| GENK | \({\frac{\phi}{K_a\phi}}\) |
| RE | \({\frac{\phi\leftrightarrow\psi}{\chi[\psi/\phi]\leftrightarrow\chi}}\) |
| MP | \({\frac{\phi,\ \phi\to\psi}{\psi}}\) |
Theorem 1
AT is sound and complete.
Proof
(Sketch) The soundness of MU is due to the fact that λ is a function, whence the basic type formulas of any agent are mutually exclusive and altogether exhaustive on each world of a model. The soundness of other axiom schemas and rules can be checked as for the standard axiomatization of PAL (cf. Plaza 2007) based on Proposition 3. The completeness is proved by a reduction argument that makes use of the reduction axiom schemas (!ATOM, !NEG, !CON, !K), and the rule RE to eliminate [!_{a}ψ] operators in an inside-out fashion (cf. Wang 2011a for a detailed discussion). The only difficulty here is assigning ‘announcement complexities’ to \({\tt PALT}^{\bf T}\) formulas in such a way that rewriting from the left-hand-side to the right-hand-side of !ATOM, !NEG, !CON, !K always reduces complexity. With a suitable complexity assignment, we can show that every \({\tt PALT}^{\bf T}\) formula can be reduced to an equivalent \({\tt EL}^{\bf T}\) formula by repeatedly applying the left-to-right rewriting rules specified by the reduction axiom schemas and the replacement of equals specified by the RE rule. It is not hard to see that the system AT without !ATOM, !NEG, !CON, !K can completely axiomatize \({\tt EL}^{\bf T}\). Now, if \(\vDash\phi,\) then \(\Vvdash\phi'\) for some \({\tt EL}^{\bf T}\) formula ϕ′ that can be obtained from ϕ by using the reduction axioms, and so \(\phi\leftrightarrow\phi'\) can be derived in AT. By the completeness of \({\tt EL}^{\bf T}\) we know that ϕ′ can also be derived in AT. Therefore ϕ can be derived in AT. Hence AT is complete. \(\square\)
The above proof shows that \({\tt PALT}^{\bf T}\) is equally expressive as \({\tt EL}^{\bf T}\). By Proposition 1, \({\tt PAL}^{\bf T}\) is equally expressive as \({\tt EL}^{\bf T}\). Therefore we have the following result:
Proposition 4
\({\tt EL}^{\bf T}, {\tt PAL}^{\bf T},\)and\({\tt PALT}^{\bf T}\)are equally expressive.
In particular, \({\tt PALT}^{\bf T}\) formulas without knowledge operators or subjective (knowledge-based) types can be translated into propositional formulas based on P∪P_{T}. This explains why solving puzzles like Example 3 normally only requires propositional reasoning. However, as we will show in the later part of the paper, knowledge-based subjective types make the story much more complicated and interesting, which will demonstrate the full power of our framework.
On the other hand, for some special types T, it is indeed possible to obtain a composition result.
Proposition 5
Proof
Due to Proposition 3, [!_{a}ϕ][!_{b}ψ]χ is equivalent to a \({\tt PAL}^{\bf T}\) formula of the shape [!ϕ^{*}][!ψ^{*}]t(χ) for some \({\tt PAL}^{\bf T}\) formulas ϕ^{*} and ψ^{*}. Since \([!\phi^*][!\psi^*]t(\chi)\leftrightarrow [!(\phi^*\land [!\phi^*]\psi^*)]t(\chi)\) is valid in \({\tt PAL}^{\bf T}\) semantics, [!_{a}ϕ][!_{b}ψ]χ is equivalent to the \({\tt PAL}^{\bf T}\) formula \([!(\phi^*\land [!\phi^*]\psi^*)]t(\chi)\). Now it is not hard to reduce \(\phi^*\land [!\phi^*]\psi^*\) into an \({\tt EL}^{\bf T}\) formula θ using our translation f as in Proposition 1, and so [!_{a}ϕ][!_{b}ψ]χ is equivalent to a \({\tt PAL}^{\bf T}\) formula [!θ]t(χ). Now by Proposition 2, truthful announcement of θ can be mimicked by an announcement of a \({\tt PALT}^{\bf T}\) formula θ°_{a} by agent a. Hence it is easy to see that the \({\tt PAL}^{\bf T}\) formula [!θ]t(χ) is equivalent to the \({\tt PALT}^{\bf T}\) formula [!_{a}θ°_{a}]χ. Taking things together, [!_{a}ϕ][!_{b}ψ]χ is equivalent to the \({\tt PALT}^{\bf T}\) formula [!_{a}θ°_{a}]χ. \(\square\)
‘I am a Liar’
Careful readers may have noticed that the language of \({\tt PALT}^{\bf T}\) allows us to express the following announcement: \(!_a{\tt LL}(a)\), which may be roughly read as 'I am a liar'. It sounds like a liar sentence. However, a closer look reveals that in our framework this is not a self-referential liar sentence such as 'This sentence is a lie'.
First note that \(!_a{\tt LL}(a)\) is not even a well-formed formula in \({\tt PALT}^{\bf T}\). Therefore, it does not make sense to talk about its truth value. On the other hand, \(!_a{\tt LL}(a)\) is viewed as an action in our framework and we may talk about its executability and update effects.
Now given T = {TT, LL}, from Proposition 3, \(!_a{\tt LL}(a)\) can be translated into a public announcement \(!(({\tt TT}(a)\land {\tt LL}(a))\lor ({\tt LL}(a)\land \neg{\tt LL}(a)))\) which amounts to the action of truthfully announcing \(\bot\). It is impossible to truthfully announce \(\bot, \) so \(!_a{\tt LL}(a)\) is not executable at all. According to the semantics, we can easily verify that \([!_a{\tt LL}(a)]\bot\) is valid, which is a formal way of saying \(!_a{\tt LL}(a)\) is not executable. Since \(!_a{\tt LL}(a)\) can never happen according to the types that govern the behaviours of agents, it has no non-trivial update effects.
On the other hand, if T = {TT, LL, LT} then \([!_a{\tt LL}(a)]\bot\) is not valid any more, and instead, \([!_a{\tt LL}(a)]K_b{\tt LT}(a)\) becomes valid for any \(a,b\in{\bf G}\). This is because only the bluffers can possibly execute \(!_a{\tt LL}(a),\) by the definition of LL, TT, and LT. This demonstrates that when bluffers are involved, successfully saying 'I am a liar' amounts to signalling that the speaker is a bluffer.
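Both claims are easy to check mechanically. In this sketch (our own encoding), we compute at which type-worlds the announcement \(!_a{\tt LL}(a)\) is executable, first with T = {TT, LL} and then with the bluffer added.

```python
# Sketch (our encoding): executability of the announcement !_a LL(a).
# Precondition per type: TT may announce phi iff phi holds, LL iff phi
# fails, and LT (the bluffer) may announce anything.

def can_say_i_am_a_liar(ty):
    phi = (ty == 'LL')            # the announced proposition LL(a) at this world
    if ty == 'TT':
        return phi                # truth teller: needs LL(a) to be true
    if ty == 'LL':
        return not phi            # liar: needs LL(a) to be false
    return True                   # bluffer: no constraint

print([ty for ty in ('TT', 'LL') if can_say_i_am_a_liar(ty)])        # []
print([ty for ty in ('TT', 'LL', 'LT') if can_say_i_am_a_liar(ty)])  # ['LT']
```

With two types no world survives, matching the validity of \([!_a{\tt LL}(a)]\bot\); with three types exactly the bluffer-worlds survive, matching \([!_a{\tt LL}(a)]K_b{\tt LT}(a)\).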
A final, related question is: can a liar tell others that he is a liar in some way? It is rather easy when T = {TT, LL}: the liar can just announce \(\bot\). What about T = {TT, LL, LT}? Unfortunately, it is no easy task without first signalling who the bluffers are: whatever the truth teller and the liar may say, the hearer just cannot rule out the possibility that the speaker is a bluffer.
Agent Types in Questions and Answers
Question-answer situations are typical interactive scenarios in which agents exchange information with each other. In this section, we extend the language of \({\tt PALT}^{\bf T}\) to handle questions and answers. Moreover, by formally defining puzzles and their solutions within our framework, we will apply our logic to HLPE-like puzzles.
A Question–Answer Logic
First, we extend \({{\tt PALT}^{\bf T}}\) with question modalities:
Definition 4
Intuitively, [?_{a}ψ]ϕ expresses that ‘After asking a whether ψ, ϕ holds’, and [!_{a}]ϕ says that ‘No matter what answer a gives (to the current question), afterwards ϕ holds’. Here we only focus on yes/no questions. Note that this language is expressive enough to express counterfactual questions. For example, \(?_a([?_ap]\langle {!_ap}\rangle\top)\) expresses the question ‘would you answer yes if you were asked whether p?’.
Definition 5
\({S'=\{t\mid t\in S \hbox{ and } {{\mathfrak{M}}}, t\,\Vdash\,_\# \lambda(t,a)(\psi,a)\}}\)
For each \(a\in {\bf G}, t\in S': \sim_a'=\sim_a|_{S'\times S'}, V'(t)=V(t),\) and λ′(t) = λ(t).
Initially no question is asked (the use of # in the first clause).
When a question ?_{a}ψ is asked, the question ψ and its answerer a are recorded (see the use of (a, ψ) in the clause for [?_{a}ψ]ϕ), replacing the previously unanswered one, if there is any.
A proposition can be announced by a (!_{a}ψ) only if ψ is a proper answer to the current question for a (the clause for [!_{a}ψ]ϕ). Thus no one can say anything before a question is raised.
After an answer is given, the record is set to #.
- Any question can be addressed to anyone, and the arbitrary answer operator can be split into the two concrete answers, as demonstrated by the following two valid formulas:$$ [?_a\phi]\chi\leftrightarrow \langle {?_a\phi}\rangle\chi \qquad\qquad [?_a\phi][!_a]\chi\leftrightarrow [?_a\phi]([!_a\phi]\chi\land [!_a\neg\phi]\chi) $$
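The bookkeeping with the record μ can be pictured as a tiny state machine. The sketch below is our own simplified encoding: it checks only the literal question, whereas in the logic both ψ and ¬ψ count as proper answers. It shows how asking overwrites the record and how a proper answer resets it to #.

```python
# Sketch (our encoding) of the question record mu: either '#' (no
# pending question) or a pair (agent, question). Asking overwrites the
# record; only a matching answer resets it to '#'.

class QARecord:
    def __init__(self):
        self.mu = '#'

    def ask(self, a, psi):
        self.mu = (a, psi)          # a new question replaces the old one

    def answer(self, a, psi):
        if self.mu != (a, psi):     # no one can speak before being asked
            return False
        self.mu = '#'               # after an answer, the record is reset
        return True

r = QARecord()
print(r.answer('A', 'p'))  # False: nothing has been asked yet
r.ask('A', 'p')
print(r.answer('A', 'p'))  # True: a proper answer; record reset to '#'
```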
Remark 3
Questions have been discussed in dynamic epistemic logic (van Benthem and Minică 2009; Minică 2011), where questions partition the set of possible worlds. Our treatment is simpler, due to our intended application in HLPE-like puzzles where a question is always answered before the next question is raised. Therefore we do not consider the effect of consecutive questions: a new question simply replaces the old one, so at any time there is at most one question, addressed to exactly one agent. This limitation can be overcome by using more complicated records μ, which we leave for future work.
By this translation we show that \({\tt PQLT}^{\bf T}\) is no more expressive than \({\tt PALT}^{\bf T}\):
Proposition 6
For any\({{{\mathfrak{M}}},s}\)and any\({\tt PQLT}^{\bf T}\)formula ϕ, the following holds:\({{{\mathfrak{M}}},s\,\Vdash\,\phi\iff {{\mathfrak{M}}},s\,\vDash\, g(\phi)}\)
Proof
We can actually prove the following stronger claim by a straightforward induction on the structure of the formulas:
For any \({{{\mathfrak{M}}},s,}\) any \({\tt PQLT}^{\bf T}\) formula ϕ, and any \({\mu\in \{\#\}\cup({\bf G}\times Form({\tt PQLT}^{\bf T})): {{\mathfrak{M}}},s\,\Vdash\,_\mu\phi\iff {{\mathfrak{M}}},s\,\vDash\, g_\mu(\phi)}\).
Note that although g translates [!_{a}]ϕ into a conjunction of two concrete formulas, we cannot eliminate the operator [!_{a}] in \({\tt PQLT}^{\bf T}\), since it depends on the previously asked question.
Proposition 7
For any\({{{\mathfrak{M}}},s}\)and any\({\tt PALT}^{\bf T}\)formula ϕ, the following holds: \({{{\mathfrak{M}}},s\,\Vdash\,g'(\phi)\iff {{\mathfrak{M}}},s\,\vDash \, \phi}\)
Therefore \({\tt PQLT}^{\bf T}\) is equally expressive as \({\tt PALT}^{\bf T}, {\tt PAL}^{\bf T}\) and \({\tt EL}^{\bf T},\) based on Proposition 4.
Again, although \({\tt PQLT}^{\bf T}\) does not increase the expressive power of the language, it eases the syntactic specification. Let us consider another (more popular) variation of the Knights and Knaves puzzle as follows.
Example 5
(Death or Freedom with questions) The setting is exactly the same as before in Example 4, but now C is allowed to ask a question to one of A and B. How should he ask his question in such a way that he will know the way to Freedom no matter what the answer is?
\(?_A([?_BF_A]\langle {!_BF_A}\rangle\top): \) ‘Will the other man tell me that your path leads to Freedom?’
\(?_A ([?_AF_A]\langle {!_AF_A}\rangle\top): \) ‘Will you say ‘yes’ if you are asked whether your path leads to Freedom?’
Therefore the worlds satisfying one of the following conditions are kept: \({{\mathfrak{M}},\_,{\tt TT},{\tt LL}\,\Vdash\,\neg [?_BF_A]\langle {!_BF_A}\rangle\top}\) or \({{\mathfrak{M}},\_,{\tt LL},{\tt TT}\,\Vdash\, [?_BF_A]\langle {!_BF_A}\rangle\top. }\)
Equivalently:\({{\mathfrak{M}},\_,{\tt TT},{\tt LL}\,\nVdash\,_{B,F_A} \langle {!_BF_A}\rangle\top}\) or \({{\mathfrak{M}},\_,{\tt LL},{\tt TT}\,\Vdash\,_{B,F_A} \langle {!_BF_A}\rangle\top}\)
Then it is not hard to see that \({{\mathfrak{M}}|^A_{\neg [?_BF_A]\langle {!_BF_A}\rangle\top}}\) only keeps the worlds \((F_A, {\tt TT},{\tt LL})\) and \((F_A, {\tt LL},{\tt TT}), \) thus \({{\mathfrak{M}}|^A_{\neg [?_BF_A]\langle {!_BF_A}\rangle\top}, (F_A,{\tt TT},{\tt LL})\,\Vdash\,_\#K^W_CF_A}\).
Handling Arbitrary Utterances
To formally discuss the original HLPE, we need one last technical preparation: the gods in the story of HLPE answer questions in their own language. In this subsection, we take this into account.
Definition 6
[!_{a}u]ϕ expresses that, if a says u, then ϕ is true.
A model \({{\mathfrak{M}}}\) for \({{\tt PQLT}^{\bf T}_{\bf U}}\) is a tuple \((S,\{\sim_a\mid a\in{\bf G}\},V,\lambda, I)\) where \(I: S\times Form({{\tt PQLT}^{\bf T}_{\bf U}})\times {\bf U} \to Form({{\tt PQLT}^{\bf T}_{\bf U}})\) is a function and I(s, ϕ, u) is the interpretation of an answer u at world s given the question ϕ. For example, if \({\bf U}=\{yes, no\},\) we can define a function I corresponding to the usual interpretation of yes and no as answers to questions: I(s, ϕ, yes) = ϕ and \(I(s,\phi, no)=\neg\phi\) for each s and each ϕ.
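For the two-word case of HLPE, the interpretation function must vary with the world, since which of da and ja means yes differs across worlds. The sketch below is our own encoding (the constructor name is ours); propositions are again modelled as predicates on worlds.

```python
# Sketch (our encoding) of the interpretation function I for
# U = {da, ja}: which word means 'yes' is fixed per world, so I
# depends on the world it is built for.

def make_I(da_means_yes):
    """Return I(s, phi, u): the question phi itself if u affirms it at
    this world, and its negation otherwise."""
    def I(s, phi, u):
        affirm = (u == 'da') == da_means_yes
        return phi if affirm else (lambda w: not phi(w))
    return I

p = lambda w: True                  # some true proposition
I1 = make_I(da_means_yes=True)      # a world where 'da' means yes
I2 = make_I(da_means_yes=False)     # a world where 'da' means no
print(I1(None, p, 'da')(None), I2(None, p, 'da')(None))  # True False
```

The same utterance 'da' is thus interpreted as p in one world and as ¬p in the other, which is exactly the uncertainty the puzzle exploits.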
The semantics of \({{\tt PQLT}^{\bf T}_{\bf U}}\) is mostly the same as that of \({{\tt PQLT}^{\bf T}}\), except for the formulas involving utterances, which depend on the interpretation function.
Definition 7
\({S'=\{t\mid t\in S \hbox{ and } {\mathfrak{M}}, t \,\Vdash\,_\# \lambda(t,a)(I(t,\chi,u),a)\}}\)
For each \(a\in {\bf G}, t\in S', u\in {\bf U}, \phi\in{{\tt PQLT}^{\bf T}_{\bf U}}: \sim_a'=\sim_a|_{S'\times S'}, V'(t)=V(t), \lambda'(t)=\lambda(t),\) and I′(t, ϕ, u) = I(t, ϕ, u).
Remark 4
It is important that we use \(\Vdash_\#\) in the third condition of the clause for [!_{a}u]ϕ. Replacing # by μ would cause circularity in the semantics. For instance, \(?_a\langle {!_au}\rangle\top\) may then express the self-referential question ‘Will you answer u (to this question)?’.
Questioning Strategy
In the previous sections, we talked about the notions of puzzles and solutions in a rather informal manner. In this subsection, we attempt to formalize them precisely in the framework of \({{\tt PQLT}^{\bf T}_{\bf U}}\).
Definition 8
Q is a non-empty finite set of question states and \(r\in Q \) is the initial state,
F is a non-empty finite set of final states such that F ∩ Q = ∅,
δ:Q × U→ Q ∪ F is a transition function,
\(L: Q\to {\bf G}\times Form({{\tt PQLT}^{\bf T}_{\bf U}})\) essentially assigns to each question state a question ?_{a}ϕ expressible in \({{\tt PQLT}^{\bf T}_{\bf U}}\) (formally represented as a pair (a, ϕ)).
For any questioning strategy π = (Q, F, r, δ, L) and any \(q\in Q, \) let L^{G}(q) and \(L^\Upphi(q)\) be the first and the second element of L(q), respectively. Note that every q node has one and only one u successor for each u in U. Two different question states may be assigned the same question (a, ϕ). Given a questioning strategy π, an execution of π is a path \(r\buildrel{{u_1}} \over {\rightarrow}q_1\cdots\buildrel{{u_n}} \over {\rightarrow}q_n\) in π such that \(q_i\in Q\) for i < n and \(q_n\in F\). Let P(π) be the collection of all the executions in π. The length of a strategy (|π|) is defined as the length of the longest execution of π (a natural number or ω).
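Definition 8 can be rendered as a small data structure. The following Python sketch is our illustration, with invented names: it encodes a strategy tree (Q, F, r, δ, L) and enumerates its set of executions P(π).

```python
from dataclasses import dataclass

# An illustrative encoding (ours, with invented names) of Definition 8:
# a questioning strategy as a finite labelled tree.
@dataclass
class Strategy:
    questions: set   # Q, the question states
    finals: set      # F, the final states (disjoint from Q)
    root: object     # r, the initial state, a member of Q
    delta: dict      # transition function: (state, utterance) -> successor
    label: dict      # L: question state -> (agent, formula), i.e. ?_a phi

    def executions(self, utterances=('ja', 'da')):
        """All root-to-final paths of P(pi), as lists of (state, utterance)."""
        paths = []
        def walk(q, path):
            if q in self.finals:
                paths.append(path)
                return
            for u in utterances:  # every question state has one u-successor per u
                walk(self.delta[(q, u)], path + [(q, u)])
        walk(self.root, [])
        return paths

# A one-question strategy: ask A whether p, then stop.
pi = Strategy(questions={'r'}, finals={'f1', 'f2'}, root='r',
              delta={('r', 'ja'): 'f1', ('r', 'da'): 'f2'},
              label={'r': ('A', 'p')})
assert len(pi.executions()) == 2  # one execution per possible answer
```

The length |π| is then simply the maximum length of a path returned by `executions`.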
Intuitively it says that for each execution \(?_{a_1}\phi_1!_{a_1}u_1\cdots ?_{a_n}\phi_n!_{a_n} u_n\in Seq(\pi), \) if the kth question \({?_{a_k}}{\phi_{k}} \) is asked then it must be answerable by some \(u\in{\bf U}, \) and if the answer is indeed \({!_{a_k}}u_{k} \) then we can proceed to the next question \({?_{a_{k+1}}}{\phi_{k+1}} \) and so on; eventually, if the last question \({?_{a_n}}{\phi_{n}}\) is answered, then ϕ holds. The idea behind the answerability condition \(\langle {!_{a_k}}\rangle\top\) is that we need to ask sensible questions that always have answers; otherwise [!_{a}]ψ may hold trivially. For example, if an agent is a subjective truth teller, he may not be able to answer ?ϕ if he does not know whether ϕ. If ‘no answer’ is also regarded as an answer, then the utterance ‘I don’t know’ should be included in U as well. See Remark 5 at the end of the next section for further discussion.
In the discussion of HLPE, we will only consider questions that are always answerable by ja or da, so the above simplified condition suffices.
Formalizing the Hardest Logic Puzzle Ever
In this section, we review one classic solution to the original HLPE in our formal framework.
- B1
Each god may get asked more than one question;
- B2
Later questions may depend on previous ones and their answers;
- B3
Whether Random speaks truly or not depends on the flip of a coin in his mind: if the coin comes down heads, he speaks truly; if tails, falsely.
- B4
Random will always answer ‘da’ or ‘ja’.
- B3’
Whether Random answers ‘ja’ or ‘da’ depends on the coin flip in his mind: if it comes down heads, he answers ‘ja’; if tails, he answers ‘da’.
- E0
A, B, and C are of the types in T = {TT, LL, LT} and this is common knowledge (to all of the agents including the questioner D).
- E1
A, B, and C are of different types and this is common knowledge.
- E2
A, B, and C know each other’s types and this is common knowledge.
- E3
A, B, and C know the meaning of ‘da’ and ‘ja’ and this is common knowledge.
- E4
D does not know the types of A, B, C and this is common knowledge.
- E5
D does not know the exact meanings of ‘da’ and ‘ja’ but he knows that one means ‘yes’ and the other means ‘no’, and this is common knowledge.
- Q1
All questions are asked and answered publicly.
- Q2
D does not mention himself in the questions.
- LS
We only consider solutions of length less than 4.
Formalizing HLPE
All the other assumptions E0–E4 can also be formalized and checked on \({{\mathfrak{M}}_0, }\) which we leave as an exercise for the interested reader.
This shows that the model \({{\mathfrak{M}}_0}\) complies with our assumptions. Now let χ(a) be the formula \(K_D {\tt LL}(a)\lor K_D {\tt TT}(a)\lor K_D {\tt LT}(a),\) and let χ be \(\chi(A)\land \chi(B)\land\chi(C)\). HLPE can then be formalized as \({({\mathfrak{M}}_0,\chi)}\).
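As a quick sanity check on the size of the initial model, the worlds compatible with the assumptions can be enumerated mechanically. The sketch below is ours; it assumes the worlds of \({{\mathfrak{M}}_0}\) are exactly the pairs of a type assignment (E0, E1) and a meaning for ja/da (E5).

```python
from itertools import permutations

# An illustrative enumeration (ours) of the worlds of the initial model:
# by E0 and E1, a world fixes a bijection from {A, B, C} to {TT, LL, LT};
# by E5 it also fixes which of ja/da means yes -- 3! x 2 = 12 worlds.
worlds = [(dict(zip('ABC', types)), yes_word)
          for types in permutations(['TT', 'LL', 'LT'])
          for yes_word in ('ja', 'da')]
assert len(worlds) == 12
```

Since three yes/no questions can distinguish at most 2^3 = 8 answer sequences, D cannot hope to pin down the whole world (12 possibilities), but only the type assignment (6 possibilities), which is what χ demands.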
Verification of a Classic Solution
Let E^{*} be the function that takes a question q to the question ‘If you were asked whether q, would you say “ja”?’. When either True or False is asked E^{*}(q), a response of ‘ja’ indicates that the correct answer to q is affirmative, and a response of ‘da’ indicates that the correct answer to q is negative.
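This invariance can be checked by brute force. The following sketch (ours, a simplification of the formal semantics) enumerates all eight combinations of god type, meaning of ja, and truth value of q, and confirms that the answer to E^{*}(q) is ‘ja’ exactly when q is true.

```python
# A brute-force check (ours, a simplification of the formal semantics):
# enumerate god type, the meaning of ja, and the truth value of q, and
# verify that the answer to E*(q) is 'ja' exactly when q is true.
def word_for(b, ja_means_yes):
    """The word that asserts truth value b in the gods' language."""
    return 'ja' if b == ja_means_yes else 'da'

def plain_answer(is_teller, q_truth, ja_means_yes):
    """Word uttered when asked q directly: True asserts q, False asserts not-q."""
    return word_for(q_truth if is_teller else not q_truth, ja_means_yes)

def embedded_answer(is_teller, q_truth, ja_means_yes):
    """Word uttered when asked E*(q): 'If you were asked q, would you say ja?'"""
    p = plain_answer(is_teller, q_truth, ja_means_yes) == 'ja'
    return word_for(p if is_teller else not p, ja_means_yes)

for is_teller in (True, False):
    for ja_means_yes in (True, False):
        for q_truth in (True, False):
            assert (embedded_answer(is_teller, q_truth, ja_means_yes) == 'ja') == q_truth
```

The two layers of lying and the unknown meaning of ja each flip the answer an even number of times, which is why they cancel out.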
Lemma 1
Proof
New Puzzles with Epistemic Twists
In the previous sections, we developed epistemic frameworks to handle various puzzles about agent types in question-answer scenarios such as the original HLPE. However, the power of our frameworks has not yet been fully demonstrated, since most of the previous examples can be treated as puzzles of Boolean algebra in the informal discussion style of the literature. This phenomenon has a technical explanation as we mentioned before: as long as we talk about objective types, the knowledge of agents is not really relevant and apparently complicated formulas can be translated back to Boolean formulas or simple epistemic formulas with no higher-order knowledge. Thus, existing puzzles are just too easy to require the full power of our \({{\tt PQLT}^{\bf T}_{\bf U}}\) framework. In this section, let us go a little bit further and consider some significantly harder puzzles where deeper epistemic reasoning is required.
One important underlying assumption in the original puzzle and its existing variations is that A, B, and C are gods. Intuitively, being gods, A, B, and C should know everything. Therefore their knowledge does not play a role in reasoning about their types. However, what if they are not gods but human beings? Being ordinary people, A, B, and C may not know everything and they will then behave according to their own knowledge.
It is commonly known (to A, B, C, and D) that agents A, B and C only know their own types.
It is commonly known that A knows everyone’s type, but B and C only know their own types.
It is commonly known that a bluffer knows everyone’s type, but truth tellers and liars only know their own types.
A knows everyone’s type, but B and C only know their own types and doubt whether A indeed knows their types. D is not sure whether any of the three know all the types of each other.
From the model we can read off that it is commonly known that A does not know the types of B or C, but both B and C know the type of A. Moreover, it is commonly known that ja means yes and da means no. Now, can D determine A, B, and C’s types by asking questions?
Surprisingly, the answer is negative. To prove it formally, we need Proposition 9, which will be proved later on. The intuition is this: first of all, asking A does not bring any new information since D knows everything that A knows. However, whatever D asks B or C, there is always a possibility that the answerer is a bluffer, and thus at least one of the answers does not give any useful information. For example, suppose D asks B ‘Are you a liar?’ If the answer is ja, we know B must be LT, since a (subjective) liar cannot answer ja. However, if the answer is da, we cannot learn anything, since both the liar and the bluffer can answer da. Note that if A had no uncertainty between the two worlds, then D could simply ask A about the types of B and C.
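The case analysis for ‘Are you a liar?’ can be tabulated mechanically. The sketch below is ours; it assumes ja means yes and da means no (as in this model), and that each agent knows the truth value of the question, since it concerns his own type.

```python
# An illustrative tabulation (ours), assuming ja means yes and da means no,
# of which words each subjective type can utter when asked ?phi, given that
# the agent knows the truth value of phi (here phi concerns his own type).
def possible_answers(agent_type, phi_truth):
    words = []
    for word, truth_of_claim in (('ja', phi_truth), ('da', not phi_truth)):
        if agent_type == 'STT' and truth_of_claim:        # asserts known truths
            words.append(word)
        elif agent_type == 'SLL' and not truth_of_claim:  # asserts known falsehoods
            words.append(word)
        elif agent_type == 'LT':                          # bluffer: either word
            words.append(word)
    return words

# phi = 'you are a (subjective) liar', put to B:
assert possible_answers('SLL', True)  == ['da']        # the liar can only say da
assert possible_answers('STT', False) == ['da']        # the teller can only say da
assert possible_answers('LT',  False) == ['ja', 'da']  # the bluffer can say either
# So ja pins down the bluffer, while da reveals nothing.
```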
Now we are ready to consider a particular variation of the HLPE:
Example 6
(HLPE with ignorance) A (subjective) liar, a (subjective) truth teller, and a bluffer are living on an island. They know their own types but do not know the others’ types. Moreover, it is commonly known that they are of different types. They understand English but can only answer questions in their own language, in which the words for yes and no are da and ja, in some order. Now the question is: can you determine their types by asking questions such that they are always able to answer ja or da?
- E0’
A, B, and C are of types in T = {STT,SLL,LT} and this is common knowledge (to all of the agents including the questioner D).
- E1
A, B, and C are of different types and this is common knowledge.
- E2’
A, B, and C know their own types but do not know others’ types, and this is also common knowledge.
- E3–E5
Q1, and Q2 are as before, but we do not constrain ourselves to 3-step solutions, thus giving up constraint LS.
It is not hard to check that E0’ and E3–E5 hold on \({{\mathfrak{M}}_1}\). For E2’, note that for any agent \(a\in\{A,B,C\}, \) at each world s, agent a cannot distinguish s from another world t where his own type and the interpretation function are the same as in s. For example, agent B cannot distinguish \(({\tt STT}, {\tt SLL}, {\tt LT}, {\tt JA})\) from \(({\tt LT}, {\tt SLL}, {\tt STT}, {\tt JA})\).
Let θ(a) be the formula \(K_D {\tt SLL}(a)\lor K_D {\tt STT}(a)\lor K_D {\tt LT}(a)\) and \(\theta=\theta(A)\land \theta(B)\land\theta(C)\). The puzzle is then formalized as \({({\mathfrak{M}}_1, \theta)}\).
Clearly, in the above model D also knows the exact meanings of da and ja and this is common knowledge.
Before we prove the following proposition, let us be more precise about answerable questions. We say that a question ?_{a}ϕ is answerable on a model \({{\mathfrak{M}}}\) if \({{\mathfrak{M}}\,\Vdash\,[?_a\phi]\langle {!_a}\rangle\top}\). Thus, at any world of \({{\mathfrak{M}}}\), a has at least one possible answer to the question ?_{a}ϕ. It is not hard to see that if ?_{a}ϕ is answerable on a submodel \({{\mathfrak{N}}}\) of \({{\mathfrak{M}}_1, }\) then \({{\mathfrak{N}}\,\Vdash\,\neg{\tt LT}(a)\to (K_a\phi\lor K_a\neg\phi)}\).
Proposition 8
There is a solution to \({({\mathfrak{M}}_1, \theta)}\) iff there is a solution to \({({\mathfrak{M}}_2, \theta)}\).
Proof
Proofs for this proposition and other results presented in this section are provided in the "Appendix".
This proposition says that we can actually ignore uncertainties about ja and da when searching for solutions to \({({\mathfrak{M}}_1,\theta)}\).
We say that a question ?_{a}ϕ is effective on a model \({{\mathfrak{M}}}\) if for any \({u\in\{ja,\,da\}: {\mathfrak{M}}|^a_{\phi,u}}\) is defined (i.e., the domain is not empty), and \({{\mathfrak{M}}|^a_{\phi,u}\not={\mathfrak{M}}: }\) that is, answers to the question will always update the model by deleting some worlds. Now we make one crucial observation before proving our main impossibility result.
Proposition 9
For any submodel \({{\mathfrak{N}}}\) of \({{\mathfrak{M}}_2}\) and any \({a\in\{A,B,C\}}\): \({{\mathfrak{N}}\,\Vdash\,\neg {\tt SLL}(a)}\) or \({{\mathfrak{N}}\,\Vdash\,\neg {\tt STT}(a)}\) implies that there is no effective question for a in model \({{\mathfrak{N}}}\).
Based on Proposition 9, we have the following theorem.
Theorem 2
There is no solution to \({({\mathfrak{M}}_2, \theta)}\), and therefore, there is no solution to \({({\mathfrak{M}}_1,\theta)}\).
The proof of Theorem 2 gives a further interesting result: Although we cannot guarantee that D knows all the types of the agents, we can guarantee that D always knows the type of one of the non-bluffers (but he cannot make sure which one)!
Remark 5
Note that in the above discussions, we only consider ja and da as well-formed answers and consider solutions with answerable questions only. In more realistic cases, agents should be able to answer ‘I don’t know’ [or keep silent as in Uzquiano (2010)]. However, the definition of types will be much more complicated, and there may be different options in redefining the liar and the bluffer. E.g., can a liar truthfully answer ‘I don’t know’ or just say a random ‘yes’ or ‘no’ instead? Moreover, can a bluffer also announce ‘I don’t know’ randomly? Given some acceptable new definitions of types involving ‘I don’t know’, is it possible for D to know the types of A, B, and C? We suspect that the answer is still negative, but leave this for future exploration.
Conclusion and Discussion
\({{\tt EL}}^{\bf T}\) Epistemic language (with type formulas),
\({{\tt PAL}}^{\bf T} ={{\tt EL}}^{\bf T}+[!\phi]\) Public announcement language (with type formulas),
\({{\tt PALT}^{\bf T}}={{\tt EL}}^{\bf T}+[!_a\phi]\) Public announcement language with types,
\({{\tt PQLT}^{\bf T}}={{\tt EL}}^{\bf T}+[!_a\phi]+[?_a\phi]+[!_a]\) Public question language with types,
\({{\tt PQLT}^{\bf T}_{\bf U}}={{\tt EL}}^{\bf T}+[!_au]+[?_a\phi]+[!_a]\) Public question language with types and arbitrary utterances.
The first four languages are interpreted on epistemic models with type assignments, while the last language \({{\tt PQLT}^{\bf T}_{\bf U}}\) is interpreted on epistemic models with type assignments and utterance interpretations. We have shown that the first four languages are equally expressive. This does not mean that we do not need \({{\tt PAL}}^{\bf T}, {{\tt PALT}^{\bf T}},\) and \({{\tt PQLT}^{\bf T}}\) any more: on the contrary, they allow us to express things more naturally. As with standard public announcement logic (cf. Lutz 2006 and French et al. 2011), we conjecture that \({{\tt PALT}^{\bf T}}\) enjoys an exponential gain in succinctness over \({{\tt EL}}^{\bf T}\). Moreover, the expressiveness results do not tell us everything about those logics, e.g., in \({{\tt PALT}^{\bf T}},\) two announcements cannot be composed into one in general, but only in special cases with certain T. We also showed that the public announcements in \({{\tt PAL}}^{\bf T}\) can be mimicked by typed announcements with T containing LL and TT. There is a lot more to be explored about these logics.
We studied several variations of the Knight and Knave puzzles within the logical frameworks that we developed. In particular, we formalized HLPE and verified a classic solution. It was also shown that puzzles involving only objective truth tellers and liars are usually simpler than those with subjective types and epistemic uncertainties. Following this insight, we proposed new harder puzzles based on the original HLPE with complicated epistemic reasoning involved. In particular, we showed that there is no solution to a variation of HLPE, when the gods in the original HLPE are replaced by humans who do not know each other’s types. However, there is a questioning strategy that can let the questioner know the type of one non-bluffer.
The discussion of HLPE has demonstrated the power of our formal approach in handling complicated epistemic reasoning based on types of agents. However, the proofs for most of our results about HLPE boil down to tedious combinatorial analysis. Actually, we can save effort here by using automatic model checking methods based on our logical frameworks (cf. e.g., Clarke et al. 1999). In this paper, we have formally defined what a puzzle is and what counts as a solution to a puzzle. The verification of a solution is then transformed into model checking problems for certain modal formulas. In principle, we can then use techniques from model checking in our setting. Our translations between languages allow us to do model checking of the complex languages by model checking the translated formulas in the simpler languages. Moreover, solutions to the puzzles can be found by a bounded search over the possible sequences of questions. Thus the spectrum of new puzzles need not be solved by hand but can be tackled automatically. A detailed discussion of computational issues of the model checking problem is beyond the scope of this paper, and is left for future work.
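The bounded search just mentioned can be sketched generically. In the fragment below (ours; the update, answerability check, and goal test are simplified stand-ins for the formal definitions), a model is a set of worlds, `answers(w, q)` gives the words the addressee of q may utter at world w, and the search asks whether the questioner can force the goal within a given number of questions, whatever the answers turn out to be.

```python
# A generic bounded-search sketch (ours; update, answerability and the goal
# test are simplified stand-ins for the formal definitions).  A model is a
# frozenset of worlds; the search checks whether some question forces the
# goal within 'bound' steps no matter which admissible answer is given.
def solvable(model, questions, answers, goal, bound):
    if goal(model):
        return True
    if bound == 0:
        return False
    for q in questions:
        if not all(answers(w, q) for w in model):
            continue  # q is not answerable on this model
        updates = [frozenset(w for w in model if u in answers(w, q))
                   for u in ('ja', 'da')]
        # every answer that can actually occur must lead to a solvable submodel
        if all(solvable(m, questions, answers, goal, bound - 1)
               for m in updates if m):
            return True
    return False

# Toy instance: a truthful oracle, one unknown bit, ja = yes.
bit_answers = lambda w, q: {'ja'} if w == 1 else {'da'}
identified = lambda m: len(m) == 1
assert solvable(frozenset({0, 1}), ['is the bit 1?'], bit_answers, identified, 1)
assert not solvable(frozenset({0, 1}), ['is the bit 1?'], bit_answers, identified, 0)
```

A solution extracted from a successful search is precisely a questioning strategy in the sense of Definition 8, with one branch per possible answer.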
The boundary between the solvable and the unsolvable We have shown that, if A, B, and C do not know each other’s types, then there is no solution to the revised HLPE puzzle. Since there are indeed solutions to the original HLPE puzzle, the natural question to ask is: Can we find a ‘minimal’ assumption on the knowledge of A, B, and C such that there is a solution? On the other hand, we can also keep the assumption of ignorance but allow agents to say ‘I do not know’ in some way to see whether this will lead to a solvable puzzle. As we discussed in Remark 5, this may raise several new possibilities for defining the types of subjective liars and bluffers.
From knowledge to belief and more In this paper we defined ‘subjective’ agent types by conditioning on the knowledge of agents. However, realistic agents often rely on their beliefs to make announcements or answer questions. We can certainly replace knowledge operators with belief operators in types. A further step is to consider probability distributions over propositions as preconditions of agent types, e.g., a liar is someone who tells lies 80% of the time.
Richer agent types In this work, we focused on agent types in terms of what agents deliver by their announcements. There are definitely richer types in real life. For example, agent types may be reflected in how much information they would like to deliver w.r.t. what they know. A conservative agent may only announce \(\phi\lor \psi\) even when he knows ϕ. We will leave those richer types for other occasions.
Boolos credits Raymond Smullyan as the originator of the puzzle and John McCarthy for adding the twist of ja and da.
Truth values of epistemic formulas may not be preserved after announcement. For a study in the setting of PAL, we refer to (van Ditmarsch and Kooi 2006) and (Holliday and Icard III 2010).
We conjecture that \({\tt PALT}^{\bf T}\) is at least exponentially more succinct than \({\tt PAL}^{\bf T},\) but leave the proof for future work.
See (van Benthem and Minică 2009) for a similar composition issue in dynamic-epistemic logics of questions and answers.
I.e., (Q∪ F, δ) is an acyclic graph where each node except r has one and only one u-predecessor for each \(u\in{\bf U}, \) and r can reach all other nodes.
We conjecture that even when D can ask questions privately, the puzzle \({({\mathfrak{M}}_1,\theta)}\) still does not have any solution.
Interested readers may consult (Blackburn et al. 2002) for the preservation result of positive formulas in the standard setting of modal logic.
For instance, according to the semantics when s is in the shape of \({\tt STT}\_\_{\tt JA}, \lambda(A,s)(I(s,\phi, ja),A)=K_A\phi\). Therefore when answering ja, the updated model keeps the worlds satisfying \(K_A\phi\land{\tt STT}(A)\).
Acknowledgments
The authors would like to thank Hans van Ditmarsch and Johan van Benthem for their detailed comments on earlier versions of this paper, and thank Gregory Wheeler for pointing out the literature on the HLPE, which helped to shape the development of this work. We are also grateful to two anonymous referees of this journal for their very valuable comments. Both authors are partially supported by the Major Program of National Social Science Foundation of China (NO.11&ZD088). Yanjing Wang is also supported by the MOE Project of Key Research Institute of Humanities and Social Sciences in Universities (No.12JJD720011).