Minds and Machines, Volume 23, Issue 1, pp 123–161

Reasoning About Agent Types and the Hardest Logic Puzzle Ever


  • Fenrong Liu
    • Department of Philosophy, Tsinghua University
  • Yanjing Wang
    • Department of Philosophy and Institute of Foreign Philosophy, Peking University

DOI: 10.1007/s11023-012-9287-x

Cite this article as:
Liu, F. & Wang, Y. Minds & Machines (2013) 23: 123. doi:10.1007/s11023-012-9287-x


In this paper, we first propose a simple formal language to specify types of agents in terms of necessary conditions for their announcements. Based on this language, types of agents are treated as ‘first-class citizens’ and studied extensively in various dynamic epistemic frameworks which are suitable for reasoning about knowledge and agent types via announcements and questions. To demonstrate our approach, we discuss various versions of Smullyan’s Knights and Knaves puzzles, including the Hardest Logic Puzzle Ever (HLPE) proposed by Boolos (in Harv Rev Philos 6:62–65, 1996). In particular, we formalize HLPE and verify a classic solution to it. Moreover, we propose a spectrum of new puzzles based on HLPE by considering subjective (knowledge-based) agent types and relaxing the implicit epistemic assumptions in the original puzzle. The new puzzles are harder than the previously proposed ones in the literature, in the sense that they require deeper epistemic reasoning. Surprisingly, we also show that a version of HLPE in which the agents do not know the others’ types does not have a solution at all. Our formalism paves the way for studying these new puzzles using automatic model checking techniques.


Agent types · Public announcement logic · Questioning strategy · Knights and Knaves · The hardest logic puzzle ever


In his popular book (Smullyan 1978), Raymond Smullyan proposed a series of puzzles called Knights and Knaves, where the usual goal is to determine who are the Knights (truth tellers) and who are the Knaves (liars) by asking them questions. One variation of such puzzles was made famous by Boolos (1996), where it is called the Hardest Logic Puzzle Ever (HLPE):1

Three gods A, B, and C are called, in some order, True, False, and Random. True always speaks truly, False always speaks falsely, but whether Random speaks truly or falsely is a completely random matter. Your task is to determine the identities of A, B, and C by asking three yes/no questions; each question must be put to exactly one god. The gods understand English, but will answer all questions in their own language, in which the words for yes and no are da and ja, in some order. You do not know which word means which.

Boolos (1996) gave a lengthy solution which makes use of solutions to three simpler puzzles. Rabern and Rabern (2008) noticed that the puzzle may be trivialized under Boolos’s original assumption on the behaviour of Random and thus proposed an amended version of HLPE. Uzquiano (2010) gave a two-question solution to the amended version of HLPE and proposed an even harder one, which was proven not to be solvable in two questions by Wheeler and Barahona (2012). However, Wintein (2011) argues that the results in Wheeler and Barahona (2012) depend on a particular conception of answering self-referential questions truthfully or falsely, and proposes a two-question solution to Uzquiano’s puzzle based on a different conception. Except for the formal truth theory presented in Wintein (2011), existing discussions on HLPE are mostly informal to some extent, featuring Boolean reasoning in finding solutions expressed in natural language which often involve self-referential questions. A complete formalization of such puzzles should take care of many different aspects which are hard to put together, such as questions and answers, liars and truth tellers, epistemic reasoning, and solution concepts for puzzles.

In this paper, we will give a purely formal, yet intuitive account of HLPE-like scenarios, by introducing logical frameworks for reasoning about knowledge by communication under uncertainty of various agent types. As suggested in HLPE and other Knights and Knaves puzzles, people behave differently in their ways of information exchange. The same utterance may carry different intended information due to different types of the speakers. Here, by ‘types’, we mean the patterns that agents follow in communicating information. Knowledge of agent types is crucial in social communication, in particular for strategic settings where people have to interpret and predict the behaviours of their opponents. By developing our formal framework, our aim is not only to solve puzzles like HLPE, but also to deal with general epistemic reasoning under uncertainty about agent types.

As for HLPE itself, there are several advantages to going purely formal. First of all, some of the existing solutions can be verified formally. More importantly, by making everything precise, we will discover the implicit epistemic assumptions behind those puzzles about agent types. As we will show, modifying those assumptions may change the nature of the puzzles, which also leads to even harder puzzles involving interesting and complicated epistemic reasoning. On the other hand, the formal approach also limits the language of questions that we can use in solving these puzzles. For example, the self-referential questions and temporal-related questions as in Wheeler and Barahona (2012) are not expressible in our frameworks due to difficulties in defining their semantics. The good aspect of such limitations is that we can now prove impossibility results, e.g., non-existence of solutions to certain harder puzzles. The ultimate goal behind the development of our formal framework is to automate the reasoning process and thus handle the puzzles and other applications in an automatic fashion using computational tools, without tedious analysis of combinatorics hidden behind the scenes.

Related work Our logical framework is based on Public Announcement Logic (PAL) (cf. Plaza 2007; Gerbrandy and Groeneveld 1997) where announcements update the knowledge of agents. The extra twist here is that who said what is important due to the different types of the speakers. Similar issues about agency have been considered in Liu (2004) and Liu (2009), where different revision policies of different agents towards new incoming information are studied. A particular type of agent, viz. the liar, has been studied in a dynamic epistemic framework similar to PAL in van Ditmarsch et al. (2011) and van Ditmarsch (2011), where the focus is on the epistemic effects of lying. The aim of the current paper, however, is to move further by considering general agent types and epistemic reasoning about them. The treatment of the type language is inspired by the analysis of protocols in Wang (2011b), where agent types can be viewed as simple conditional protocol schemas.

There are a few points worth mentioning about our approach:
  • We take agent types as first-class citizens in our logical frameworks by specifying them formally in a type language. Correspondingly, in the model we have type assignments for each agent. The interpretation of an announcement depends on its speaker’s type.

  • With both types and agents specified in our logical language, we can formulate complicated sentences and questions (e.g., ‘What would be his answer if he were asked whether he is a liar?’). On the other hand, from a technical point of view of expressive power, such intriguing formulas with complex questions and answers can be reduced to formulas of a simple epistemic logic (with types).

  • The puzzles are formalized in our framework as pairs consisting of a model and a goal formula. A solution is a questioning strategy that satisfies some conditions represented by model checking problems on the model.

In the rest of the paper, we will walk the reader through our technical developments step by step. Each step will be demonstrated by logic puzzles in the style of Knights and Knaves until we are ready to discuss HLPE and its variations. In "Agent Types in Public Announcements", we propose the basic logical framework \({\tt PALT}^{\tt T}\) and provide a complete axiomatization via a reduction to \({\tt EL}^{\bf T},\) epistemic logic with type formulas. In "Agent Types in Questions and Answers", we enrich \({\tt PALT}^{\tt T}\) with question and answer operators to obtain a new logic \({\tt PQLT}^{\tt T}\). To formally discuss HLPE, we replace announcement-like answers in \({\tt PQLT}^{\tt T}\) by arbitrary utterances and obtain \({\tt PQLT}^{\tt T}_{\tt U},\) which also allows us to define solutions to the puzzles formally. \({\tt PQLT}^{\tt T}_{\tt U}\) is used in "Formalizing the Hardest Logic Puzzle Ever" to verify an existing solution to HLPE. Moreover, a spectrum of new, harder puzzles is proposed in "New Puzzles with Epistemic Twists", by considering subjective types instead of objective types and relaxing some of the epistemic assumptions in the original HLPE. We prove that a version of HLPE where the agents do not know others’ types does not have any solution at all. "Conclusion and Discussion" ends the paper with conclusions and further directions.

Agent Types in Public Announcements

Language and Semantics

In this work, an agent type specifies a necessary condition for an agent to announce a proposition. For example, a liar is someone who only announces false propositions, i.e., if he announces ϕ then ϕ must be false, but he does not need to announce every false proposition. We introduce the following type language to specify agent types formally.

Definition 1

(Type language) Given a fixed agent variable x and a fixed formula variable \({\varvec{\varphi}},\) the set E of agent types η is recursively defined as:
$$ \begin{aligned} \eta &::= \psi\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}} \\ \psi &::= \top\mid {\varvec{\varphi}}\mid \neg \psi \mid \psi\wedge\psi\mid K_{\user2{x}}\psi \end{aligned} $$
where \(\top\) stands for tautologies.

Note that x and \({\varvec{\varphi}}\) are the only variables, thus \({K_{\user2{x}} {\varvec{\varphi}}\land K_{\user2{y}}{\varvec{\psi}}}\) is not a well-formed type. Each agent type η can also be viewed as a function assigning a precondition to each announcement made by an agent of this type.

We can use this type language to define many intuitive agent types.

Example 1

(objective truth teller, liar and bluffer)
  • Type TT (truth teller): \({\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}} {\varvec{\varphi}}\)

  • Type LL (liar): \(\neg{\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\)

  • Type LT (bluffer): \(\top\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\).
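Operationally, each type η acts as a function from an announced proposition (and its speaker) to a precondition. A minimal Python sketch of the three objective types, with propositions encoded as predicates on worlds (the encoding and names are ours, not the paper's):

```python
def TT(phi, a):
    """Objective truth teller: a may announce phi only if phi is true."""
    return lambda w: phi(w)

def LL(phi, a):
    """Objective liar: a may announce phi only if phi is false."""
    return lambda w: not phi(w)

def LT(phi, a):
    """Bluffer: the trivial precondition; any announcement is allowed."""
    return lambda w: True
```

Applying a type to a concrete announcement yields the precondition η(ϕ, a) as a predicate that can be evaluated at each world.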

Next, if the knowledge of the speaker is taken into account, we can define more realistic subjective types: whether a proposition can be announced depends on the knowledge of the speaker.

Example 2

(subjective truth teller and liar)
  • Type STT (subjective truth teller): \(K_{\user2{x}}{\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\)

  • Type SLL (subjective liar): \(K_{\user2{x}}\neg{\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\).

Remark 1

The above are just some examples of agent types. Other interesting types can be defined if we enrich the type language with other operators, e.g., KG (everyone knows that ...) or CG (it is common knowledge that ...). For example, a progressive speaker may only want to announce ϕ if ϕ is not known by all the audience. We may define the following types:
  • Type PSTT (progressive subjective truth teller): \(K_{\user2{x}}{\varvec{\varphi}}\land K_{\user2{x}}\neg K_{\bf G}{\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\)

  • Type CSLL (cautious subjective liar): \(K_{\user2{x}}\neg{\varvec{\varphi}}\land K_{\user2{x}}\neg K_{\bf G}\neg{\varvec{\varphi}}\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}}\)

Based on a finite set of agent types we can build our first logical language:

Definition 2

(Public announcement language with types) Given a finite set \({\bf T}\subseteq {\bf E}\) of agent types, a finite set G of agent names, a set P of basic proposition letters, the language \({\tt PALT}^{\tt T}\) is defined as:
$$ \phi::= \top\mid p\mid \eta(a) \mid \neg \phi \mid \phi\wedge\phi\mid K_a\phi \mid [!_a\phi]\phi $$
where \(p\in {\bf P}, a\in {\bf G}\) and \(\eta\in {\bf T}\).

We call the announcement-free fragment of \({\tt PALT}^{\tt T}\) the epistemic language with type formulas \(({\tt EL}^{\bf T})\) and sometimes denote \({\tt PALT}^{\tt T}\) by \({\tt EL}^{\bf T}+[!_a\phi]\).

The superscript T in \({\tt PALT}^{\tt T}\) emphasises that the properties of \({\tt PALT}^{\tt T}\) may depend on the specific T that is selected. As usual, we have the following abbreviations: \( \bot:=\neg\top, \phi\vee\psi:=\neg(\neg \phi\wedge\neg \psi),\phi\rightarrow\psi:=\neg\phi\vee\psi, \langle {!_a\psi}\rangle\phi:=\neg[!_a\psi]\neg\phi, \hat{K}_a\phi:=\neg K_a\neg\phi \). We also write \(K^W_a\phi\) for \(K_a\phi\lor K_a\neg\phi,\) meaning that a knows whether ϕ. The formula η(a) expresses that agent a is of type η, and [!aψ]ϕ says that if a can announce ψ then ϕ holds after the announcement.

Recall that each η can be viewed as a function. Now given \(\eta=\psi({\varvec{\varphi}},{\user2{x}})\twoheadleftarrow !_{\user2{x}}{\varvec{\varphi}},\) let \(\eta(\phi,a)=\psi[\phi/{\varvec{\varphi}}, a/{\user2{x}}], \) i.e., replacing each occurrence of \({\varvec{\varphi}}\) in \(\psi({\varvec{\varphi}},{\user2{x}})\) with ϕ and each occurrence of x with a. Intuitively, an agent a of type η can announce a concrete proposition ϕ only when η(ϕ, a) holds. Although two agents may announce the same proposition ϕ, the actual information that it carries can differ due to their different types.

Definition 3

(Semantics) A model for the language of \({\tt PALT}^{\tt T}\) is a tuple \({{{\mathfrak{M}}}=(S, \{\sim_a\mid a\in{\bf G}\}, V, \lambda),}\) where \((S, \{\sim_a\mid a\in{\bf G}\},V)\) is a standard multi-agent S5 Kripke model: S is a non-empty set of possible worlds, \(\sim_a\subseteq S\times S\) is an equivalence relation over S, and V:S→ 2P is a valuation function assigning to each world a set of basic propositions. The new component λ:S × GT assigns to each agent at each world a type in T. The semantics of \({\tt PALT}^{\tt T}\) formulas is defined as follows:
$$ \begin{array}{lcl} {{\mathfrak{M}}},s\,\vDash \,\top & & \hbox{always}\\ {{\mathfrak{M}}},s\,\vDash \, p &\Leftrightarrow& p\in V(s)\\ {{\mathfrak{M}}},s\,\vDash \,\eta(a) &\Leftrightarrow& \lambda(s,a)=\eta\\ {{\mathfrak{M}}},s\,\vDash \,\neg\phi &\Leftrightarrow& {{\mathfrak{M}}},s\,\nvDash\,\phi\\ {{\mathfrak{M}}},s\,\vDash \,\phi\wedge\chi &\Leftrightarrow& {{\mathfrak{M}}},s\,\vDash \,\phi \hbox{ and } {{\mathfrak{M}}},s\,\vDash \,\chi\\ {{\mathfrak{M}}},s\,\vDash \,K_a\phi &\Leftrightarrow& \hbox{for all } t: s\sim_a t \hbox{ implies } {{\mathfrak{M}}},t\,\vDash \,\phi\\ {{\mathfrak{M}}},s\,\vDash \,[!_a\psi]\phi &\Leftrightarrow& {{\mathfrak{M}}},s\,\vDash \,\lambda(s,a)(\psi,a) \hbox{ implies } {{\mathfrak{M}}}|^a_{\psi},s\,\vDash \,\phi \end{array} $$
where \({{\mathfrak{M}}|^a_{\psi}}\) is defined as \((S', \{\sim'_a\mid a\in{\bf G}\}, V', \lambda')\) where:
  • \({S'=\{t\mid t\in S \hbox{ and } {{\mathfrak{M}}}, t\,\vDash \, \lambda(t,a)(\psi,a)\}}\)

  • For each \(a\in {\bf G}, t\in S': \sim_a'=\sim_a|_{S'\times S'}, V'(t)=V(t)\) and λ′(t) = λ(t).

Note that \({{{\mathfrak{M}}}|^a_{\psi}}\) is well-defined if S′ is not empty, and \({{{\mathfrak{M}}}, s\,\vDash \, \lambda(s,a)(\psi,a)}\) in the clause of [!aψ]ϕ guarantees that. We say ϕ is valid on\({{{\mathfrak{M}}}}\) (\({{{\mathfrak{M}}}\,\vDash \,\phi}\)) if, for all s in \({{{\mathfrak{M}}}: {{\mathfrak{M}}},s\,\vDash \,\phi}\). We say ϕ is valid (\(\,\vDash \,\phi\)) if for all the models \({{{\mathfrak{M}}}: {{\mathfrak{M}}}\,\vDash \,\phi}\).

Remark 2

For generality, we do not assume that the agents always know their types, i.e., η(a)→ Kaη(a) is not valid, since in some cases an agent may not be aware of its own type although it behaves exactly according to this type.

The above semantics is similar to the one for the standard public announcement logic (PAL) (cf. Plaza 2007), where after an announcement of ϕ, we simply delete all the worlds that do not satisfy ϕ, namely all the worlds where ϕ cannot be truthfully announced. In our setting, under the extra information of agent types, after a’s announcing ϕ we delete all worlds where a would not have been able to announce ϕ according to a’s type.

To be more precise in later discussions, we define the language \({\tt PAL}^{\bf T}\) as \({\tt EL}^{\bf T}+[!\phi],\) public announcement logic with type formulas. Recall that \({\tt PALT}^{\bf T}\) is \({\tt EL}^{\bf T}+[!_a\phi],\) so the only difference between \({\tt PALT}^{\bf T}\) and \({\tt PAL}^{\bf T}\) is that announcements in \({\tt PAL}^{\bf T}\) are agent-less, i.e., announced by a single truth teller: ‘the god’. Correspondingly, the semantics of \({\tt PAL}^{\bf T}\) differs from the semantics of \({\tt PALT}^{\bf T}\) only in the clause for announcement (we write the relevant satisfaction relation as \(\Vvdash\)):
$$ {{\mathfrak{M}}},s\,\Vvdash\,[!\psi]\phi \Leftrightarrow {{\mathfrak{M}}},s\,\Vvdash\,\psi \hbox{ implies } {{\mathfrak{M}}}|_{\psi},s\,\Vvdash\,\phi $$
where \({{{\mathfrak{M}}}|_{\psi}}\) is defined as \((S', \{\sim'_a\mid a\in{\bf G}\}, V', \lambda')\) where:
  • \({S'=\{t\mid t\in S \hbox{ and } {{\mathfrak{M}}}, t \,\Vvdash\, \psi\}}\)

  • For each \(a\in {\bf G}, t\in S': \sim_a'=\sim_a|_{S'\times S'}, V'(t)=V(t)\) and λ′(t) = λ(t).

Note that types play no role in the semantics of announcements in \({\tt PAL}^{\bf T}\). Thus, \({\tt PAL}^{\bf T}\) behaves just like standard PAL equipped with a special set of basic propositions (the type formulas). In the rest of this paper, given a finite set of types T, we let \({\bf P}_{\bf T}\) be the set of type propositions, i.e., \(\{\eta(a)\mid\eta\in{\bf T}, a\in {\bf G}\}\).

It is a well-known result that public announcement logic can be translated back to epistemic logic qua expressiveness (cf. e.g., van Ditmarsch et al. 2007). This result clearly also holds in our setting with type formulas:

Proposition 1

\({\tt PAL}^{\bf T}\) is equally expressive as \({\tt PALT}^{\bf T}\) on S5 models with type assignments.


Proof (Sketch) We only define the relevant translation \(f:{\tt PAL}^{\bf T}\to{\tt EL}^{\bf T}\) (where \(p\in{\bf P}\cup{\bf P}_{\bf T}\)):
$$\begin{array}{rclrcl} f(\top) &=&\top & f([!\psi]\top) &=& f(\psi\to \top) \\ f(p) &=& p & f([!\psi]p) &=& f(\psi\to p) \\ f(\neg\phi) &=&\neg f(\phi) & f([!\psi]\neg\phi) &=& f(\psi\to \neg [!\psi]\phi)\\ f(\phi_1 \land \phi_2) &=& f(\phi_1) \land f(\phi_2)& f([!\psi](\phi_1 \land \phi_2)) &=& f([!\psi]\phi_1\land [!\psi]\phi_2)\\ f(K_a \phi) & = & K_a f(\phi)& f([!\psi]K_a \phi)&=&f(\psi\to K_a (\psi\to[!\psi]\phi))\\ & & & f([!\psi][!\chi]\phi)&=&f([!\psi]f([!\chi]\phi)) \end{array}$$
Based on a suitable definition of the complexity of formulas (cf. van Ditmarsch et al. 2007) we can show that the translation/rewriting always reduces the complexity. Hence, it will terminate at some point and eliminate all announcement operators in an inside-out fashion.\(\square\)

Knights and Knaves

Before moving on to technical results about \({\tt PALT}^{\bf T},\) we demonstrate the use of this simple yet powerful framework by some examples. Consider the following Knights and Knaves puzzle first introduced by Smullyan (1978).

Example 3

(Three inhabitants) On a fictional island, the inhabitants are either Knights, who always tell the truth, or Knaves, who always lie. A visitor D from the outside world meets three inhabitants A, B and C on the island. D asks them to tell their types. A says: B is a Knave. B says: C is a Knave. C says: A and B are Knaves. Now, is it possible for the visitor to find out the inhabitants’ types from their statements?

Let us start with the following model \({{\mathfrak{M}}}_1\) where A, B, and C know their own types (either TT or LL) but D knows nothing about the types of A, B, and C. Note that we write LLT for a world s where \(\lambda(s,A)={\tt LL}, \lambda(s,B)={\tt LL}\) and \(\lambda(s,C)={\tt TT}\) (similarly for other abbreviations).2 Following the usual convention in visualizing S5 models, the actual relations are the reflexive transitive closures of the (bidirectional) ones denoted in the following graphs. \({{\mathfrak{M}}}_2\) is the model after A’s announcement \(!_A{\tt LL}(B)\), \({{\mathfrak{M}}}_3\) is the model after the second announcement \(!_B{\tt LL}(C)\), and \({{\mathfrak{M}}}_4\) is the model after the third announcement \(!_C({\tt LL}(A)\land{\tt LL}(B))\). Thus \({{\mathfrak{M}}}_2={{\mathfrak{M}}}_1|^A_{{\tt LL}(B)}\), \({{\mathfrak{M}}}_3={{\mathfrak{M}}}_2|^B_{{\tt LL}(C)}\) and \({{\mathfrak{M}}}_4={{\mathfrak{M}}}_3|^C_{{\tt LL}(A)\land{\tt LL}(B)}\).
Note that by the definition of the updated model, \({{{\mathfrak{M}}}_2={{\mathfrak{M}}}_1|^A_{{\tt LL}(B)}}\) keeps the worlds s in \({{{\mathfrak{M}}}_1}\) where \({{{\mathfrak{M}}}_1,s\,\vDash \,\lambda(s,A)({\tt LL}(B),A),}\) that is: it keeps the worlds s satisfying one of the following conditions:
  • \(\lambda(s,A)={\tt TT}\) and \({{{\mathfrak{M}}}_1,s\,\vDash \,{\tt LL}(B),}\)

  • \(\lambda(s,A)={\tt LL}\) and \({{{\mathfrak{M}}}_1,s\,\vDash \,\neg{\tt LL}(B)}\).

Since T = {LL, TT}, the above two conditions are equivalent to the following:
  • \(\lambda(s,A)={\tt TT}\) and \(\lambda(s,B)={\tt LL}\) (i.e., the worlds in the shape of TL_)

  • \(\lambda(s,A)={\tt LL}\) and \(\lambda(s,B)={\tt TT}\) (i.e., the worlds in the shape of LT_)

It is clear that \({{{\mathfrak{M}}}_2}\) only contains TL_ and LT_. A similar reasoning works for \({{{\mathfrak{M}}_3}}\) and \({{{\mathfrak{M}}}_4}\) by the definition of the updated model.
Note that according to the semantics, for \(\langle {!_a\psi}\rangle\phi\) we have:
$$ {{{\mathfrak{M}}}}, s\,\vDash \, \langle {!_a\psi}\rangle\phi \Leftrightarrow {{{\mathfrak{M}}}}, s\,\vDash \, \lambda(s,a)(\psi,a) \quad {\hbox{and}} \quad {{{\mathfrak{M}}}}|^a_{\psi}, s\,\vDash \, \phi $$
Now it is easy to see that LTL is the only world s in \({{{\mathfrak{M}}}_1}\) such that all announcements in the story can be successfully announced in the given order:
$$ {{{\mathfrak{M}}}}_1, s\,\vDash \,\langle {!_A {\tt LL}(B)}\rangle\langle {!_B{\tt LL}(C)}\rangle\langle {!_C({\tt LL}(A)\land{\tt LL}(B))}\rangle\top $$
Moreover, since \({{{\mathfrak{M}}}_4}\) is a singleton model, it is clear that
$$ {{{\mathfrak{M}}}}_1, {\tt LTL}\,\vDash \,\langle {!_A {\tt LL}(B)}\rangle\langle {!_B{\tt LL}(C)}\rangle\langle {!_C({\tt LL}(A)\land{\tt LL}(B))}\rangle K_D({\tt LL}(A)\land{\tt TT}(B)\land {\tt LL}(C)) $$
Thus, after the three announcements, agent D knows that A and C are liars and B is a truth teller.
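The filtering argument above can also be replayed by brute force. A small verification script (ours, not the paper's) enumerates the eight type assignments and keeps those consistent with the three announcements under the TT/LL semantics:

```python
from itertools import product

def says(world, speaker, claim):
    # A TT speaker can announce a claim only if it is true at the world,
    # an LL speaker only if it is false.
    truthful = claim(world)
    return truthful if world[speaker] == 'TT' else not truthful

worlds = [dict(zip('ABC', ts)) for ts in product(['TT', 'LL'], repeat=3)]
survivors = [w for w in worlds
             if says(w, 'A', lambda w: w['B'] == 'LL')
             and says(w, 'B', lambda w: w['C'] == 'LL')
             and says(w, 'C', lambda w: w['A'] == 'LL' and w['B'] == 'LL')]
# Exactly one assignment survives: A and C are liars, B is a truth teller.
```

The unique survivor is precisely the world LTL singled out in the analysis above.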

Now let us consider another variation of the Knights and Knaves:

Example 4

(Death or Freedom) A and B are standing at a fork in the road. Now comes C. C knows that one of them is a Knight and the other is a Knave, but C does not know who is who. C also knows that one road leads to Death, and the other leads to Freedom. Suppose A is the honest Knight, and he knows which way leads to Freedom. How can A let C know the right way to go?

Note that this puzzle is not trivial, since although A can tell the truth, C may not be sure that A is telling the truth. To solve the puzzle, let us first prove a simple proposition:

Proposition 2

Given \({\bf T}=\{{\tt TT},{\tt LL}\}\), \(a\in{\bf G}\) and any \({\tt PALT}^{\bf T}\) formula ϕ, let \(\phi^{\circ}_a=({\tt TT}(a)\to \phi)\land ({\tt LL}(a)\to \neg \phi)\). Now for any \({\tt PALT}^{\bf T}\) formula ϕ and any model \({{\mathfrak{M}}}\), \({{\mathfrak{M}}}|^a_{\phi^{\circ}_a}\) is the submodel of \({{\mathfrak{M}}}\) obtained by keeping all the worlds that satisfy ϕ. Moreover, for any \(a,b\in{\bf G}\) and any modality-free \({\tt PALT}^{\bf T}\) formula ϕ we have: \(\,\vDash \,[!_a (\phi^{\circ}_a)]K_b\phi\).


For any model \({{\mathfrak{M}}, {{\mathfrak{M}}}|^a_{\phi^{\circ}_a}}\) only keeps the worlds s where \({{{\mathfrak{M}}},s\,\vDash \,\lambda(s,a)(\phi^\circ_a,a), }\) that is: it keeps the worlds satisfying one of the following conditions:
  • \(\lambda(s,a)={\tt TT}\) and \({{{\mathfrak{M}}},s\,\vDash \,\phi^{\circ}_a}\)

  • \(\lambda(s,a)={\tt LL}\) and \({{{\mathfrak{M}}},s\,\vDash \,\neg\phi^{\circ}_a}\)

This can be stated equivalently as:
  • \({{{\mathfrak{M}}},s\,\vDash \,{\tt TT}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \, ({\tt TT}(a)\to \phi)\land ({\tt LL}(a)\to \neg \phi)}\)

  • \({{{\mathfrak{M}}},s\,\vDash \,{\tt LL}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \,\neg(({\tt TT}(a)\to \phi)\land ({\tt LL}(a)\to \neg \phi))}\)

which is equivalent to:
  • \({{{\mathfrak{M}}},s\,\vDash \,{\tt TT}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \, {\tt TT}(a)\to \phi}\)

  • \({{{\mathfrak{M}}},s\,\vDash \,{\tt LL}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \, \neg({\tt LL}(a)\to \neg \phi)}\)

and this is again equivalent to:
  • \({{{\mathfrak{M}}},s\,\vDash \,{\tt TT}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \, \phi}\)

  • \({{{\mathfrak{M}}},s\,\vDash \,{\tt LL}(a)}\) and \({{{\mathfrak{M}}},s\,\vDash \, \phi}\)

Since \({\bf T}=\{{\tt TT},{\tt LL}\}\), \({{\mathfrak{M}}}|^a_{\phi^{\circ}_a}\) simply keeps all worlds where ϕ holds, no matter what the type of a is. Since the updates do not change the truth values of Boolean formulas, the validity of \([!_a (({\tt TT}(a)\to \phi)\land ({\tt LL}(a)\to \neg \phi))]K_b\phi\) is immediate.3 \(\square\)
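The four-way case analysis in this proof can be checked mechanically. A tiny sketch (ours) enumerates a's type and the truth value of ϕ and confirms that the announcement is licensed exactly at the ϕ-worlds:

```python
from itertools import product

# For each type of a and each truth value of phi, the announcement of
# phi_circ = (TT(a) -> phi) & (LL(a) -> ~phi) is licensed by a's type
# exactly when phi is true at the world.
for typ, phi in product(['TT', 'LL'], [True, False]):
    phi_circ = ((phi if typ == 'TT' else True)             # TT(a) -> phi
                and ((not phi) if typ == 'LL' else True))  # LL(a) -> ~phi
    licensed = phi_circ if typ == 'TT' else not phi_circ
    assert licensed == phi
```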
The preceding proposition says that given T = {LL, TT}, an agent a can actually mimic a truthful announcement of ϕ, qua epistemic update effects, by \(!_a\phi^{\circ}_a\), no matter what a’s type actually is. Now let us come back to Example 4. Let FA denote the proposition that the road behind A leads to Freedom; thus \(\neg F_A\) says that the road behind A leads to Death. A solution to the puzzle of Example 4 is simply as follows:
  • If the road behind the Knight is the one leading to Freedom (FA) then he can say ‘if I am a Knight, then the road behind me leads to Freedom, and if I am a Knave, then the road behind me leads to Death’ (\(!_A (({\tt TT}(A)\to F_A)\land ({\tt LL}(A)\to \neg F_A))\)).

  • On the other hand if \(\neg F_A\) is true then \(!_A (({\tt TT}(A)\to \neg F_A)\land ({\tt LL}(A)\to F_A))\) is enough.

To verify that the above solution indeed works, we first build the initial model \({{{\mathfrak{M}}}}\) as follows, where, for example, \((F_A, {\tt TT}, {\tt LL})\) denotes the world where FA is true and A is assigned TT while B is assigned LL (similarly for other states).
Then based on Proposition 2, we have:
$$ {{{\mathfrak{M}}}}\,\vDash \, {\tt TT}(A)\to \bigwedge_{\psi\in \{F_A, \neg F_A\}}(\psi\to \langle {!_A (({\tt TT}(A)\to \psi)\land ({\tt LL}(A)\to \neg \psi))}\rangle K^W_C F_A). $$
Since \({{{\mathfrak{M}}}\,\vDash \,{\tt TT}(A)\leftrightarrow{\tt LL}(B)}\) and \({{{\mathfrak{M}}}\,\vDash \, {\tt LL}(A)\leftrightarrow{\tt TT}(B),}\) we also have:
$$ {{{\mathfrak{M}}}}\,\vDash \,{\tt TT}(A)\to\bigwedge_{\psi\in \{F_A, \neg F_A\}}(\psi\to \langle {!_A (({\tt TT}(A)\to \psi)\land ({\tt TT}(B)\to \neg \psi))}\rangle K^W_C F_A) $$
which gives an alternative solution. In words, it lets the Knight say ‘The road behind the Knight leads to Freedom’ when FA is true, and ‘The road behind the Knight leads to Death’ when FA is false.
Yet another well-known solution is shorter in terms of announcements:
$$ {{{\mathfrak{M}}}}\,\vDash \,{\tt TT}(A)\to (F_A\to \langle {!_A \langle {!_B \neg F_A}\rangle\top}\rangle K_C^W F_A)\land (\neg F_A\to \langle {!_A \langle {!_B F_A}\rangle\top}\rangle K_C^W F_A) $$

\(!_A \langle {!_B \neg F_A}\rangle\top\) reads: A announces that ‘The other guy would say that the road behind me leads to Death’ (similarly for \(!_A \langle {!_B F_A}\rangle\top\)). The verification of this solution is left to the reader as a simple exercise.
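For the objective case, this exercise can be discharged by brute force. A sketch (ours, not the paper's) over the four relevant worlds, pairing the truth value of FA with A's type (B's type being the opposite):

```python
from itertools import product

def can_say(typ, claim_true):
    # TT may announce a claim only if it is true, LL only if it is false.
    return claim_true if typ == 'TT' else not claim_true

b_type = lambda a_t: 'LL' if a_t == 'TT' else 'TT'
worlds = list(product([True, False], ['TT', 'LL']))  # (F_A, type of A)

# chi = <!_B ~F_A>T : B's type licenses announcing 'my road leads to Death'
chi = {w: can_say(b_type(w[1]), not w[0]) for w in worlds}
# A announces chi; the worlds surviving the update:
survivors = [w for w in worlds if can_say(w[1], chi[w])]
# F_A holds in every survivor, so C comes to know F_A.
```

Both FA-worlds survive (one per type of A) and both ¬FA-worlds are deleted, which is exactly why C ends up knowing FA without learning A's type.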

However, the last solution no longer works if we make the puzzle harder by letting Knights and Knaves be ignorant of each other’s types and replacing the objective types TT, LL by subjective types (let T = {STT, SLL}). Then the appropriate initial model \({{\mathfrak{M}}}'\) may look as follows:
To see what this model says, note the following validity:
$$ {{{\mathfrak{M}}}}'\,\vDash \, \neg K^W_A{\tt STT}(B)\land K_C\neg K^W_A{\tt STT}(B)\land K^W_AF_A\land \neg K_C^WF_A\land \neg K_C^W{\tt STT}(A) $$
This says that A does not know B’s type and C knows this, but A does know whether his road leads to Freedom while C does not know, as before, whether A’s road leads to Freedom. (The case for B is similar.)
Now suppose the real situation is \((F_A,{\tt STT},{\tt SLL})\). Let us verify the previous short solution ‘The other guy would say that the road behind me leads to Death’ in this state.
$$ \begin{aligned} {{{\mathfrak{M}}}}',(F_A,{\tt STT},{\tt SLL})&\,\vDash \,\langle {!_A \langle {!_B \neg F_A}\rangle\top}\rangle K_C^W F_A\\ \Rightarrow {{{\mathfrak{M}}}}',(F_A,{\tt STT},{\tt SLL})&\,\vDash \,\langle {!_A \langle {!_B \neg F_A}\rangle\top}\rangle\top\\ \iff {{{\mathfrak{M}}}}',(F_A,{\tt STT},{\tt SLL})&\,\vDash \,{\tt STT}(\langle {!_B \neg F_A}\rangle\top, A)\\ \iff {{{\mathfrak{M}}}}',(F_A,{\tt STT},{\tt SLL})&\,\vDash \, K_A\langle {!_B \neg F_A}\rangle\top\\ \iff {{{\mathfrak{M}}}}',(F_A,{\tt STT},{\tt SLL})&\,\vDash \, \langle {!_B \neg F_A}\rangle\top\hbox{ and }{{{\mathfrak{M}}}}',(F_A,{\tt STT},{\tt STT})\,\vDash \, \langle {!_B \neg F_A}\rangle\top\\ \iff {{{\mathfrak{M}}}}',(F_A,{\tt STT},{\tt SLL})& \,\vDash\,K_B F_A\hbox{ and }{{{\mathfrak{M}}}}',(F_A,{\tt STT},{\tt STT})\,\vDash\,K_B \neg F_A \\ \end{aligned} $$
However, since \({{{\mathfrak{M}}}',(F_A,{\tt STT},{\tt STT})\,\nvDash\,K_B \neg F_A, }\) we have
$$ {{{\mathfrak{M}}}}',(F_A,{\tt STT},{\tt SLL})\,\nvDash\,\langle {!_A \langle {!_B \neg F_A}\rangle\top}\rangle K_C^W F_A $$
Therefore, A’s announcing ‘The other guy would say that the road behind me leads to Death’ does not work any more (assuming FA), since A does not know B’s type and as a truth teller he can only say what he knows.

The above example demonstrates that subjective types and knowledge of the agents may make a difference. We will apply a similar modification to HLPE in the later part of the paper.

In the present example, we can overcome the difficulties caused by the ignorance of other players’ types by modifying the previous short solution to (assuming FA): ‘I would say my path leads to Freedom (if I were asked)’ (\(!_A \langle {!_A F_A}\rangle\top\)). Note that this is different from simply announcing FA, for example:
$$ {{{\mathfrak{M}}}}', (F_A,{\tt SLL},{\tt STT})\,\vDash \,\langle {!_A\langle {!_AF_A}\rangle\top}\rangle\top\quad{\text{but}}\quad{{{\mathfrak{M}}}}', (F_A,{\tt SLL},{\tt STT})\,\nvDash\,\langle {!_AF_A}\rangle\top. $$
We can verify that this modified solution indeed works:
$$ {{{\mathfrak{M}}}}'\,\vDash \,{\tt STT}(A)\to ((F_A\to\langle {!_A \langle {!_A F_A}\rangle\top}\rangle K^W_CF_A)\land (\neg F_A\to\langle {!_A\langle {!_A \neg F_A}\rangle\top}\rangle K^W_C F_A)) $$


Axiomatization

Our language \({\tt PALT}^{\bf T}\) looks similar to \({\tt PAL}^{\bf T}\). In this section, we will make the link precise and use it to obtain a complete axiomatization of \({\tt PALT}^{\bf T}\). To ease the discussion, let us first define some useful notation.

Given a finite set of types T, let \(\delta^a_\phi\) be an abbreviation of \(\bigvee_{\eta\in {\bf T}} (\eta(a)\wedge \eta(\phi,a))\), where η(a) is a formula and η(ϕ, a) is the value (a formula) of the function η on the input (ϕ, a). Since in our models an agent can have only one type at each state, each world can satisfy at most one disjunct of \(\bigvee_{\eta\in {\bf T}}(\eta(a)\wedge \eta(\phi,a))\).
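As a quick illustration, \(\delta^a_\phi\) for T = {TT, LL} can be assembled mechanically. The nested-tuple AST below is a hypothetical encoding of ours, not the paper's:

```python
# Each type contributes one disjunct eta(a) & eta(phi, a); formulas are
# encoded as nested tuples (a minimal, hypothetical AST).
T = {'TT': lambda phi, a: phi,
     'LL': lambda phi, a: ('not', phi)}

def delta(phi, a, types=T):
    return ('or', [('and', ('type', name, a), eta(phi, a))
                   for name, eta in types.items()])

# delta('p', 'a') encodes (TT(a) & p) | (LL(a) & ~p)
```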

Now we can rewrite each \({\tt PALT}^{\bf T}\) formula into a \({\tt PAL}^{\bf T}\) formula by recursively replacing each [!aψ] modality in \({\tt PALT}^{\bf T}\) formulas by an announcement modality in \({\tt PAL}^{\bf T}\). Formally, we define a translation \(t:{\tt PALT}^{\bf T}\to{\tt PAL}^{\bf T}\) as follows:
$$ \begin{aligned} t(\top)&=\top \quad t(p)=p\quad t(\eta(a))=\eta(a) \\ t(\neg\phi)&=\neg t(\phi)\quad t(\phi\land \psi)=t(\phi)\land t(\psi) \quad t(K_a\phi)=K_a t(\phi)\\ t([!_a\psi]\phi)& =[!t({\delta^a_{\psi}})]t(\phi) \end{aligned} $$
For example, given T = {TT, LL}:
$$ t([!_a [!_b {\tt TT}(a)]\bot]\bot) =[{!}(({\tt TT}(a)\land t([!_b{\tt TT}(a)]\bot)) \lor ({\tt LL}(a)\land \neg t([!_b{\tt TT}(a)]\bot)))]\bot $$
where \(t([!_b{\tt TT}(a)]\bot)=[!(({\tt TT}(b)\land {\tt TT}(a))\lor({\tt LL}(b)\land\neg{\tt TT}(a)))]\bot\).

The result is a faithful \({\tt PAL}^{\bf T}\) translation of \({\tt PALT}^{\bf T}\) formulas.

Proposition 3

For any \({\tt PALT}^{\bf T}\) formula ϕ, and any pointed model \({{{\mathfrak{M}}},s}\): \({{{\mathfrak{M}}},s\,\vDash \,\phi\iff {{\mathfrak{M}}},s\,\Vvdash\,t(\phi)}\).


We prove the proposition by induction on the structure of ϕ. The Boolean cases and the Kaϕ case are trivial. Before we can approach the [!aψ]ϕ case, we need to prove the following claim within the induction for ϕ:

If \({{{\mathfrak{M}}},s\,\vDash \,\psi\iff {{\mathfrak{M}}},s\,\Vvdash\, t(\psi),}\) then \({{{\mathfrak{M}}},s\,\vDash \,\lambda(s,a)(\psi,a)\iff {{\mathfrak{M}}},s\,\Vvdash\,t(\delta^a_\psi)}\). The argument goes by the following chain of equivalences:
$$ \begin{aligned} {{{\mathfrak{M}}}},s&\,\vDash \,\lambda(s,a)(\psi,a)\\ \iff {{{\mathfrak{M}}}},s&\,\vDash \,\eta^*(a)\land \eta^*(\psi,a) \quad (\hbox{where}\, \eta^*=\lambda(s,a))\\ \iff {{{\mathfrak{M}}}},s&\,\Vvdash\,\eta^*(a)\land t(\eta^*(\psi,a))\quad (\hbox{see below})\\ \iff {{{\mathfrak{M}}}},s&\,\Vvdash\, \bigvee_{\eta\in {\bf T}} (\eta(a)\wedge t(\eta(\psi,a))) \quad (\hbox{since}\, a\, \hbox{has one and only one type on}\, s)\\ \iff {{\mathfrak{M}}},s &\,\Vvdash\, t \left(\bigvee_{\eta\in {\bf T}} (\eta(a)\wedge \eta(\psi,a))\right) \quad (\hbox{since}\,t\, \hbox{commutes with} \,\land\, \hbox{and} \neg)\\ \iff {{{\mathfrak{M}}}},s &\,\Vvdash\, t(\delta_\psi^a)\\ \end{aligned} $$

Here the crucial second ‘\(\iff\)’ is due to the following: (1) \({{{\mathfrak{M}}},s\,\Vvdash\,\eta^*(a)\iff{{\mathfrak{M}}},s\,\vDash \,\eta^*(a);}\) (2) the assumption that \({{{\mathfrak{M}}},s\,\vDash \,\psi\iff {{\mathfrak{M}}},s\,\Vvdash\, t(\psi); }\) (3) the fact that η*(ψ, a) is constructed by Boolean connectives and epistemic operators based on ψ (by the definition of the type language); (4) the Boolean cases and the Kaϕ case in the main inductive proof.

Now based on the above claim, we know that \({{{\mathfrak{M}}}|_{t({\delta^a_{\psi}})}}\) is exactly the same as \({{{\mathfrak{M}}}|^a_\psi}\). Then the following reasoning for [!aψ]ϕ is immediate:
$$ \begin{aligned} {{\mathfrak{M}}}, s &\,\vDash \, [!_a\psi]\phi\\ \hbox{ iff } {{\mathfrak{M}}}, s& \,\vDash \, \lambda(s,a)(\psi,a)\hbox{ implies } {{\mathfrak{M}}}|^a_{\psi}, s\,\vDash \, \phi\\ \hbox{ iff } {{\mathfrak{M}}}, s&\,\Vvdash\, t({\delta^a_{\psi}}) \hbox{ implies } {{\mathfrak{M}}}|_{t({\delta^a_{\psi}})}, s\,\Vvdash\, t(\phi)\\ \hbox{ iff } {{\mathfrak{M}}}, s &\,\Vvdash\, t([!_a\psi]\phi).\\ \end{aligned} $$
Note that the above proposition does not imply that we may just forget about \({\tt PALT}^{\bf T}\): the translation that we defined clearly introduces an exponential blow-up in the length of formulas. For example, the executability of the announcements in Example 3 can be translated into the following formula with standard public announcements:4
$$ \begin{aligned} &\langle {!(({\tt TT}(A)\land {\tt LL}(B))\lor ({\tt LL}(A)\land\neg {\tt LL}(B)))}\rangle\langle {!(({\tt TT}(B)\land {\tt LL}(C))\lor ({\tt LL}(B)\land\neg{\tt LL}(C)))}\rangle\\ &\langle {!(({\tt TT}(C)\land{\tt LL}(A)\land {\tt LL}(B))\lor ({\tt LL}(C)\land\neg ({\tt LL}(A)\land{\tt LL}(B))))}\rangle\top \end{aligned} $$
Based on Proposition 3 and the axiomatization of public announcement logic (cf. e.g., Plaza 2007), we axiomatize \({\tt PALT}^{\bf T}\) by the following Hilbert-style proof system AT where χ[ψ/ϕ] denotes any formula obtained by replacing some occurrences of ϕ in χ by ψ.

Axiom schemas

(\(\hbox{for arbitrary}\, a,b\in{\bf G}, p\in{\bf P}\cup{\bf P}_{\bf T}\))

TAUT  all the instances of tautologies

MU  \(\bigwedge\nolimits_{a\in {\bf G}}(\bigwedge\nolimits_{\eta\in{\bf T}}(\eta(a)\leftrightarrow \bigwedge\nolimits_{\eta'\not=\eta,\eta'\in{\bf T}}\neg\eta'(a) ))\)

DISTK  Ka(ϕ→ψ)→ (Kaϕ→ Kaψ)

T  Kaϕ→ϕ

4  Kaϕ→ KaKaϕ

5  \( \neg K_a\phi\to K_a\neg K_a\phi\)

!ATOM  \([!_a\psi]p\leftrightarrow({\delta^a_{\psi}}\to p)\)

!NEG  \([!_a\psi]\neg \phi\leftrightarrow({\delta^a_{\psi}}\to \neg [!_a\psi]\phi)\)

!CON  \([!_a\psi](\phi\land\chi)\leftrightarrow([!_a\psi] \phi\land [!_a\psi]\chi)\)

!K  \([!_a\psi]K_b\phi\leftrightarrow ({\delta^a_{\psi}}\to K_b [!_a\psi]\phi)\)

Rules

MP  \({\frac{\phi\quad\phi\to\psi}{\psi}}\)

NECK  \({\frac{\phi}{K_a\phi}}\)

RE  \({\frac{\phi\leftrightarrow\psi} {\chi[\psi/\phi]\leftrightarrow\chi}}\)



Theorem 1

AT is sound and complete.


(Sketch) The soundness of MU is due to the fact that λ is a function, whence the basic type formulas of any agent are mutually exclusive and altogether exhaustive on each world of a model. The soundness of other axiom schemas and rules can be checked as for the standard axiomatization of PAL (cf. Plaza 2007) based on Proposition 3. The completeness is proved by a reduction argument that makes use of the reduction axiom schemas (!ATOM, !NEG, !CON, !K), and the rule RE to eliminate [!aψ] operators in an inside-out fashion (cf. Wang 2011a for a detailed discussion). The only difficulty here is assigning ‘announcement complexities’ to \({\tt PALT}^{\bf T}\) formulas in such a way that rewriting from the left-hand-side to the right-hand-side of !ATOM, !NEG, !CON, !K always reduces complexity. With a suitable complexity assignment, we can show that every \({\tt PALT}^{\bf T}\) formula can be reduced to an equivalent \({\tt EL}^{\bf T}\) formula by repeatedly applying the left-to-right rewriting rules specified by the reduction axiom schemas and the replacement of equals specified by the RE rule. It is not hard to see that the system AT without !ATOM, !NEG, !CON, !K can completely axiomatize \({\tt EL}^{\bf T}\). Now, if \(\vDash\phi,\) then \(\Vvdash\phi'\) for some \({\tt EL}^{\bf T}\) formula ϕ′ that can be obtained from ϕ by using the reduction axioms, and so \(\phi\leftrightarrow\phi'\) can be derived in AT. By the completeness of \({\tt EL}^{\bf T}\) we know that ϕ′ can also be derived in AT. Therefore ϕ can be derived in AT. Hence AT is complete. \(\square\)

The above proof shows that \({\tt PALT}^{\bf T}\) is equally expressive as \({\tt EL}^{\bf T}\). By Proposition 1, \({\tt PAL}^{\bf T}\) is equally expressive as \({\tt EL}^{\bf T}\). Therefore we have the following result:

Proposition 4

\({\tt EL}^{\bf T}, {\tt PAL}^{\bf T},\)and\({\tt PALT}^{\bf T}\)are equally expressive.

In particular, \({\tt PALT}^{\bf T}\) formulas without knowledge operators or subjective (knowledge-based) types can be translated into propositional formulas based on PPT. This explains why solving puzzles like Example 3 normally only requires propositional reasoning. However, as we will show in the later part of the paper, knowledge-based subjective types make the story much more complicated and interesting, which will demonstrate the full power of our framework.

We end this subsection with a technical issue that has an interesting twist in the current context. In some axiomatizations of standard PAL, the following composition axiom schema is included instead of the inference rule RE (cf. e.g., van Ditmarsch et al. 2007; Wang 2011a):
$$ {\tt !COM}\quad [!\phi][!\psi]\chi\leftrightarrow [!(\phi\land [!\phi]\psi)]\chi $$
The idea is that one can always combine two announcements into one in PAL (and also in \({\tt PAL}^{\bf T}\)). It is natural to ask whether some form of the composition axiom schema is valid in \({\tt PALT}^{\bf T}\). However, the answer is negative in general.5 Suppose we have only a single subjective truth teller type: T = {STT}. Consider the following model, where a does not know whether q and b does not know whether p:
Clearly \({{{\mathfrak{M}}},s\,\vDash \,\langle {!_a p}\rangle\langle {!_b q}\rangle(K_a(p\land q)\land K_b(p\land q))}\). However, it is impossible to combine these two announcements into one announcement of the form \(\langle {!_a\phi}\rangle\) or \(\langle {!_b\phi}\rangle\) after which both a and b know p and q. To see this, note that agents can only announce something that they know, according to their type STT. Intuitively, you cannot let yourself know something new by just repeating things that you already know. Technically, a can only announce non-empty unions of equivalence classes w.r.t. ∼a, which allows him only three different formulas (modulo logical equivalence): \(p, \neg p,\) or \(\top\). None of these will let a know whether q.
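The impossibility argument can be checked mechanically on a small model. The following is a minimal sketch in our own encoding (not part of the paper): worlds are (p, q) truth-value pairs, announced propositions are sets of worlds, and an STT announcement is executable exactly at the worlds where the announcer knows the announced proposition.

```python
from itertools import product

# Minimal S5 model for the counterexample: worlds are (p, q) pairs,
# agent a is certain of p (index 0), agent b is certain of q (index 1).
# An STT announcement of a proposition P (a set of worlds) by agent x is
# executable at w iff x knows P at w; the update keeps the worlds where
# x knows P.

worlds = set(product([0, 1], repeat=2))
actual = (1, 1)
index = {'a': 0, 'b': 1}

def cell(ws, x, w):
    # x's information cell at w within the current set of worlds ws
    return {v for v in ws if v[index[x]] == w[index[x]]}

def knows(ws, x, w, P):
    return cell(ws, x, w) <= P

def stt_update(ws, x, P):
    return {w for w in ws if knows(ws, x, w, P)}

p = {w for w in worlds if w[0] == 1}
q = {w for w in worlds if w[1] == 1}

# Two consecutive announcements (!_a p, then !_b q) leave both agents
# knowing p and q, i.e. their cells shrink to the actual world.
ws2 = stt_update(stt_update(worlds, 'a', p), 'b', q)
both_know = all(knows(ws2, x, actual, {actual}) for x in 'ab')   # True

# But no single proposition that a can truthfully announce lets a know
# whether q afterwards.
single_works = any(
    knows(stt_update(worlds, 'a', P), 'a', actual,
          q & stt_update(worlds, 'a', P))
    for bits in product([0, 1], repeat=4)
    for P in [{w for w, bit in zip(sorted(worlds), bits) if bit}]
    if P and knows(worlds, 'a', actual, P))                      # False
```

The brute-force loop over all candidate propositions confirms the counting argument above: a's announceable propositions are unions of his two cells, and none of them separates the q-worlds inside his own cell.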

On the other hand, for some special types T, it is indeed possible to obtain a composition result.

Proposition 5

Given T = {TT, LL}, the following is valid:
$$ [!_a\phi][!_b\psi]\chi \leftrightarrow[!_a\phi']\chi $$
where ϕ′ depends only on ϕ, ψ, a and b.


Due to Proposition 3, [!aϕ][!bψ]χ is equivalent to a \({\tt PAL}^{\bf T}\) formula of the shape [!ϕ*][!ψ*]t(χ) for some \({\tt PAL}^{\bf T}\) formulas ϕ* and ψ*. Since \([!\phi^*][!\psi^*]t(\chi)\leftrightarrow [!(\phi^*\land [!\phi^*]\psi^*)]t(\chi)\) is valid in \({\tt PAL}^{\bf T}\) semantics, [!aϕ][!bψ]χ is equivalent to the \({\tt PAL}^{\bf T}\) formula \([!(\phi^*\land [!\phi^*]\psi^*)]t(\chi)\). Now it is not hard to reduce \(\phi^*\land [!\phi^*]\psi^*\) into an \({\tt EL}^{\bf T}\) formula θ using our translation f as in Proposition 1, and so [!aϕ][!bψ]χ is equivalent to a \({\tt PAL}^{\bf T}\) formula [!θ]t(χ). Now by Proposition 2, truthful announcement of θ can be mimicked by an announcement of a \({\tt PALT}^{\bf T}\) formula θ°a by agent a. Hence it is easy to see that the \({\tt PAL}^{\bf T}\) formula [!θ]t(χ) is equivalent to the \({\tt PALT}^{\bf T}\) formula [!aθ°a]χ. Taking things together, [!aϕ][!bψ]χ is equivalent to the \({\tt PALT}^{\bf T}\) formula [!aθ°a]χ. \(\square\)

‘I am a Liar’

Careful readers may have noticed that the language of \({\tt PALT}^{\bf T}\) allows us to express the following announcement: \(!_a{\tt LL}(a)\), which may be roughly read as ‘I am a liar’. It sounds like a liar sentence. However, a closer look reveals that in our framework this is not a self-referential liar sentence such as ‘This sentence is a lie’.

First note that \(!_a{\tt LL}(a)\) is not even a well-formed formula in \({\tt PALT}^{\bf T}\). Therefore, it does not make sense to talk about its truth value. On the other hand, \(!_a{\tt LL}(a)\) is viewed as an action in our framework and we may talk about its executability and update effects.

Now given T = {TT, LL}, from Proposition 3, \(!_a{\tt LL}(a)\) can be translated into a public announcement \(!(({\tt TT}(a)\land {\tt LL}(a))\lor ({\tt LL}(a)\land \neg{\tt LL}(a)))\) which amounts to the action of truthfully announcing \(\bot\). It is impossible to truthfully announce \(\bot, \) so \(!_a{\tt LL}(a)\) is not executable at all. According to the semantics, we can easily verify that \([!_a{\tt LL}(a)]\bot\) is valid, which is a formal way of saying \(!_a{\tt LL}(a)\) is not executable. Since \(!_a{\tt LL}(a)\) can never happen according to the types that govern the behaviours of agents, it has no non-trivial update effects.
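The unsatisfiability of the translated executability condition can be confirmed by a one-line truth-table check (a trivial sketch in our own encoding of types):

```python
# With T = {TT, LL}, agent a has exactly one type; the condition
# (TT(a) & LL(a)) v (LL(a) & ~LL(a)) is false either way, so the
# announcement !_a LL(a) is never executable.

def executable(ty):                  # ty is a's type: 'TT' or 'LL'
    tt, ll = ty == 'TT', ty == 'LL'
    return (tt and ll) or (ll and not ll)

never = not any(executable(ty) for ty in ('TT', 'LL'))   # True
```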

On the other hand, if T = {LL, TT,LT} then \([!_a{\tt LL}(a)]\bot\) is not valid any more, and instead, \([!_a{\tt LL}(a)]K_b{\tt LT}(a)\) becomes valid for any \(a,b\in{\bf G}\). This is because only the bluffers can possibly execute \(!_a{\tt LL}(a),\) by the definition of LL,TT, and LT. This demonstrates that when bluffers are involved, successfully saying ‘I am a liar’ amounts to signalling that the speaker is a bluffer.

A final, related question is: can a liar tell others that he is a liar in some way? It is rather easy when T = {TT, LL}: the liar can just announce \(\bot\). What about T = {TT, LL, LT}? Unfortunately, it is no easy task without first signalling who the bluffers are: whatever the truth teller and the liar may say, the hearer just cannot rule out the possibility that the speaker is a bluffer.

Agent Types in Questions and Answers

Question-answer situations are typical interactive scenarios in which agents exchange information with each other. In this section, we extend the language of \({\tt PALT}^{\bf T}\) to handle questions and answers. Moreover, by formally defining puzzles and their solutions within our framework, we will apply our logic to HLPE-like puzzles.

A Question–Answer Logic

First, we extend \({{\tt PALT}^{\bf T}}\) with question modalities:

Definition 4

(Public question logic with types\({\tt PQLT}^{\bf T}\)) Given T, P and G as before, the language \({\tt PQLT}^{\bf T}\) extends \({\tt PALT}^{\bf T}\) with question operators and arbitrary answer operators:
$$ \phi ::= \top\mid p\mid \neg \phi \mid \phi\wedge\phi\mid K_a\phi \mid \eta(a) \ \mid [!_a\phi]\phi \mid [?_a\phi]\phi \mid [!_a]\phi $$
where \(\eta\in{\bf T}\) and \(a\in {\bf G}\).

Intuitively, [?aψ]ϕ expresses that ‘After asking a whether ψ, ϕ holds’, and [!a]ϕ says that ‘No matter what answer a gives (to the current question), afterwards ϕ holds’. Here we only focus on yes/no questions. Note that this language is expressive enough to express counterfactual questions. For example, \(?_a([?_ap]\langle {!_ap}\rangle\top)\) expresses the question ‘would you answer yes if you were asked whether p?’.

Definition 5

(Semantics for\({\tt PQLT}^{\bf T}\)) The semantics of \({\tt PQLT}^{\bf T}\) formulas on a model \({{{\mathfrak{M}}}=(S,\sim,V,\lambda)}\) is defined as follows w.r.t. a context \(\mu\in\{\#\}\cup ({\bf G}\times Form({\tt PQLT}^{\bf T}))\) where \(Form({\tt PQLT}^{\bf T})\) is the set of \({{\tt PQLT}^{\bf T}}\) formulas.6 Intuitively, μ is used to record the current question: it can be of the form (a, ϕ) (a needs to answer whether ϕ) or simply # (there is currently no question to be answered).
where \(\psi=\pm\chi\) means \(\psi=\chi \hbox{ or }\psi=\neg\chi\). \({{{\mathfrak{M}}}|^a_{\psi}}\) is defined like before as \((S', \{\sim'_a\mid a\in{\bf G}\}, V', \lambda')\) with:
  • \({S'=\{t\mid t\in S \hbox{ and } {{\mathfrak{M}}}, t\,\Vdash\,_\# \lambda(t,a)(\psi,a)\}}\)

  • For each \(a\in {\bf G}, t\in S': \sim_a'=\sim_a|_{S'\times S'}, V'(t)=V(t),\) and λ′(t) = λ(t).

We say \({{{\mathfrak{M}}}|^a_\psi}\) is defined if \({\{t\mid t\in S \hbox { and } {{\mathfrak{M}}}, t \,\Vdash\,_\# \lambda(t,a)(\psi,a)\}}\) is not empty.
The ideas behind the above semantics can be summarized as follows:
  • Initially no question is asked (the use of # in the first clause).

  • When a question ?aψ is asked, the question ψ and its answerer a are recorded (see the use of (a, ψ) in the clause for [?aψ]ϕ), replacing the previously unanswered one, if there is any.

  • A proposition can be announced by a (!aψ) only if ψ is a proper answer to the current question for a (the clause for [!aψ]ϕ). Thus no one can say anything before a question is raised.

  • After an answer is given, the record is set to #.

  • Any question can be addressed to anyone, and the arbitrary answer operator can be split into two answers, as demonstrated by the following two valid formulas:
    $$ [?_a\phi]\chi\leftrightarrow \langle {?_a\phi}\rangle\chi \qquad\qquad [?_a\phi][!_a]\chi\leftrightarrow [?_a\phi]([!_a\phi]\chi\land [!_a\neg\phi]\chi) $$

Remark 3

Questions have been discussed in dynamic epistemic logic (van Benthem and Minică 2009; Minică 2011), where questions partition the set of possible worlds. Our treatment is simpler, due to our intended application in HLPE-like puzzles where a question is always answered before the next question is raised. Therefore we do not consider the effect of consecutive questions: a new question will simply replace the old one, thus there is at most just one question for exactly one of the agents. This limitation can be overcome by using more complicated records μ, which we leave for future work.

The language of \({\tt PQLT}^{\bf T}\) extends \({\tt PALT}^{\bf T}\). However, \({\tt PQLT}^{\bf T}\) formulas can be translated into \({\tt PALT}^{\bf T}\) by the following translation g:
$$ \begin{aligned} g(\phi) &= g_{\#}(\phi)\\ g_\mu(\top) &=\top\\ g_\mu(p) &= p\\ g_\mu(\eta(a)) &= \eta(a)\\ g_\mu(\neg\phi) &=\neg g_\mu(\phi)\\ g_\mu(\phi_1 \land \phi_2) &= g_\mu(\phi_1) \land g_\mu(\phi_2) \\ g_\mu([!_a\psi]\phi) & = \left\{\begin{array}{ll} [!_a g_{\#}(\psi)] g_{\#}(\phi) & \hbox{ if }\mu=(a,\chi) \hbox{ and } \psi=\pm\chi\\ \top & \hbox{if otherwise } \end{array}\right.\\ g_\mu([!_a]\phi) & = \left\{\begin{array}{ll} g_\mu([!_a \chi]\phi \land [!_a \neg \chi] \phi)& \hbox{ if }\mu=(a,\chi) \\ \top & \hbox{ if otherwise } \end{array}\right.\\ g_\mu([?_a\psi]\phi) & = g_{(a,\psi)}(\phi)\\ \end{aligned} $$
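The clauses of g displayed above can be sketched in code as follows. The tuple encoding is ours; the clause for the knowledge operator, not displayed above, is omitted from the sketch.

```python
# Sketch of the translation g from PQLT^T to PALT^T on tuple-encoded
# formulas: ('ask', a, psi, phi) stands for [?_a psi]phi, ('any', a, phi)
# for [!_a]phi, and ('ann', a, psi, phi) for [!_a psi]phi.
# The context mu is '#' (no pending question) or a pair (agent, question).

TOP = ('top',)

def g(phi, mu='#'):
    tag = phi[0]
    if tag in ('top', 'bot', 'atom', 'type'):
        return phi
    if tag == 'not':
        return ('not', g(phi[1], mu))
    if tag == 'and':
        return ('and', g(phi[1], mu), g(phi[2], mu))
    if tag == 'ask':                  # g_mu([?_a psi]phi) = g_(a,psi)(phi)
        _, a, psi, body = phi
        return g(body, (a, psi))
    if tag == 'any':                  # split [!_a]phi into the two answers
        _, a, body = phi
        if mu != '#' and mu[0] == a:
            chi = mu[1]
            return ('and', g(('ann', a, chi, body), mu),
                           g(('ann', a, ('not', chi), body), mu))
        return TOP
    if tag == 'ann':                  # an answer must be +/- the current question
        _, a, psi, body = phi
        ok = mu != '#' and mu[0] == a and psi in (mu[1], ('not', mu[1]))
        return ('ann', a, g(psi, '#'), g(body, '#')) if ok else TOP
    raise ValueError(phi)
```

For instance, g([?_a p][!_a]q) unfolds to [!_a p]q ∧ [!_a ¬p]q, matching the splitting of the arbitrary answer operator.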

By this translation we show that \({\tt PQLT}^{\bf T}\) is no more expressive than \({\tt PALT}^{\bf T}\):

Proposition 6

For any\({{{\mathfrak{M}}},s}\)and any\({\tt PQLT}^{\bf T}\)formula ϕ, the following holds:\({{{\mathfrak{M}}},s\,\Vdash\,\phi\iff {{\mathfrak{M}}},s\,\vDash\, g(\phi)}\)


We can actually prove the following stronger claim by a straightforward induction on the structure of the formulas:

For any \({{{\mathfrak{M}}},s,}\) any \({\tt PQLT}^{\bf T}\) formula ϕ, and any \({\mu\in \{\#\}\cup({\bf G}\times Form({\tt PQLT}^{\bf T}))}\): \({{{\mathfrak{M}}},s\,\Vdash\,_\mu\phi\iff {{\mathfrak{M}}},s\,\vDash\, g_\mu(\phi)}\).

Note that although g translates [!a]ϕ into a conjunction of two concrete formulas, we cannot eliminate the operator [!a] in \({\tt PQLT}^{\bf T}\), since it depends on the previously asked question.

On the other hand, we can also translate \({\tt PALT}^{\bf T}\) to \({\tt PQLT}^{\bf T}\) by g′:
$$ \begin{aligned} g'(\top) &=\top\\ g'(p) &= p\\ g'(\eta(a)) &= \eta(a)\\ g'(\neg\phi) &=\neg g'(\phi)\\ g'(\phi_1 \land \phi_2) &= g'(\phi_1) \land g'(\phi_2) \\ g'([!_a\psi]\phi) & = [?_a g'(\psi)][!_ag'(\psi)]g'(\phi)\\ \end{aligned} $$
Again, by a straightforward induction, we can show:

Proposition 7

For any\({{{\mathfrak{M}}},s}\)and any\({\tt PALT}^{\bf T}\)formula ϕ, the following holds: \({{{\mathfrak{M}}},s\,\Vdash\,g'(\phi)\iff {{\mathfrak{M}}},s\,\vDash \, \phi}\)

Therefore \({\tt PQLT}^{\bf T}\) is equally expressive as \({\tt PALT}^{\bf T}, {\tt PAL}^{\bf T}\) and \({\tt EL}^{\bf T},\) based on Proposition 4.

Again, although \({\tt PQLT}^{\bf T}\) does not increase the expressive power of the language, it eases the syntactic specification. Let us consider another (more popular) variation of the Knights and Knaves puzzle as follows.

Example 5

(Death or Freedom with questions) The setting is exactly the same as before in Example 4, but now C is allowed to ask a question to one of A and B. How should he ask his question in such a way that he will know the way to Freedom no matter what the answer is?

Again let T = {LL, TT}. We can express the following questions:
  • \(?_A([?_BF_A]\langle {!_BF_A}\rangle\top): \) ‘Will the other man tell me that your path leads to Freedom?’

  • \(?_A ([?_AF_A]\langle {!_AF_A}\rangle\top): \) ‘Will you say ‘yes’ if you are asked whether your path leads to Freedom?’

Recall the model \({{{\mathfrak{M}}}}\) of Example 4:
We can verify that
$$ {{{\mathfrak{M}}}}\,\Vdash\,[?_A([?_BF_A]\langle {!_BF_A}\rangle\top)][!_A]K^W_CF_A\land [?_A ([?_AF_A]\langle {!_AF_A}\rangle\top)][!_A]K^W_CF_A. $$
As an example, let us take the first conjunct and verify it at the world \((F_A,{\tt TT},{\tt LL})\):
$$ \begin{aligned} {{\mathfrak{M}}},(F_A,{\tt TT},{\tt LL}) &\,\Vdash\,[?_A([?_BF_A]\langle {!_BF_A}\rangle\top)][!_A]K^W_CF_A\\ \iff{{\mathfrak{M}}},(F_A,{\tt TT},{\tt LL})&\,\Vdash\,_{\#}[?_A([?_BF_A]\langle {!_BF_A}\rangle\top)][!_A]K^W_CF_A\\ \iff{{\mathfrak{M}}},(F_A,{\tt TT},{\tt LL}) &\,\Vdash\,_{(A,[?_BF_A]\langle {!_BF_A}\rangle\top)}[!_A]K^W_CF_A\\ \iff{{\mathfrak{M}}},(F_A,{\tt TT},{\tt LL}) &\,\Vdash\,_{\#}[!_A([?_BF_A]\langle {!_BF_A}\rangle\top)]K^W_CF_A {\hbox{ and}} \\ {{\mathfrak{M}}},(F_A,{\tt TT},{\tt LL}) &\,\Vdash\,_{\#}[!_A(\neg [?_BF_A]\langle {!_BF_A}\rangle\top)]K^W_CF_A\\ \end{aligned} $$
Now let us continue with the second conjunct of the final part (the first conjunct can be verified similarly):
$$ \begin{aligned} &{{\mathfrak{M}}},(F_A,{\tt TT},{\tt LL})\,\Vdash\,_{\#}[!_A(\neg [?_BF_A]\langle {!_BF_A}\rangle\top)]K^W_CF_A \\ \iff &{{\mathfrak{M}}},(F_A,{\tt TT},{\tt LL})\,\Vdash\,_{\#} {\tt TT}(\neg [?_BF_A]\langle {!_BF_A}\rangle\top, A)\\ &\hbox{ implies } {{\mathfrak{M}}}|^A_{\neg [?_BF_A]\langle {!_BF_A}\rangle\top}, (F_A,{\tt TT},{\tt LL})\,\Vdash\,_\# K^W_CF_A\\ \iff &{{\mathfrak{M}}},(F_A,{\tt TT},{\tt LL})\,\nVdash\,_{(B, F_A)} \langle {!_BF_A}\rangle\top\\ &\hbox{ implies } {{\mathfrak{M}}}|^A_{\neg [?_BF_A]\langle {!_BF_A}\rangle\top}, (F_A,{\tt TT},{\tt LL})\,\Vdash\,_\#K^W_CF_A\\ \iff & {{\mathfrak{M}}},(F_A,{\tt TT},{\tt LL})\,\nVdash\,_{\#} {\tt LL}(F_A,B)\\ &\hbox{ implies } {{\mathfrak{M}}}|^A_{\neg [?_BF_A]\langle {!_BF_A}\rangle\top}, (F_A,{\tt TT},{\tt LL})\,\Vdash\,_\#K^W_CF_A \\ \end{aligned} $$
where \({{\mathfrak{M}}|^A_{\neg [?_BF_A]\langle {!_BF_A}\rangle\top}}\) keeps the worlds s in \({{\mathfrak{M}}}\) such that
$$ {{\mathfrak{M}}},s\,\Vdash\, \lambda(s,A)(\neg [?_BF_A]\langle {!_BF_A}\rangle\top,A). $$

Therefore the worlds satisfying one of the following conditions are kept: \({{\mathfrak{M}},(\_,{\tt TT},{\tt LL})\,\Vdash\,\neg [?_BF_A]\langle {!_BF_A}\rangle\top}\) or \({{\mathfrak{M}},(\_,{\tt LL},{\tt TT})\,\Vdash\, [?_BF_A]\langle {!_BF_A}\rangle\top. }\)

Equivalently: \({{\mathfrak{M}},(\_,{\tt TT},{\tt LL})\,\nVdash\,_{(B,F_A)} \langle {!_BF_A}\rangle\top}\) or \({{\mathfrak{M}},(\_,{\tt LL},{\tt TT})\,\Vdash\,_{(B,F_A)} \langle {!_BF_A}\rangle\top}\)

Then it is not hard to see that \({{\mathfrak{M}}|^A_{\neg [?_BF_A]\langle {!_BF_A}\rangle\top}}\) only keeps the worlds \((F_A, {\tt TT},{\tt LL})\) and \((F_A, {\tt LL},{\tt TT}), \) thus \({{\mathfrak{M}}|^A_{\neg [?_BF_A]\langle {!_BF_A}\rangle\top}, (F_A,{\tt TT},{\tt LL})\,\Vdash\,_\#K^W_CF_A}\).

Alternatively, we can verify the above \({{\tt PQLT}^{\bf T}}\) formulas by using the translation g and the semantics for \({{\tt PALT}^{\bf T}}\), as we showed in Proposition 6:
$$ \begin{aligned} &g([?_A([?_BF_A]\langle {!_BF_A}\rangle\top)][!_A]K^W_CF_A)\\ &\quad=g_{\#}([?_A([?_BF_A]\langle {!_BF_A}\rangle\top)][!_A]K^W_CF_A)\\ &\quad=g_{\mu} ([!_A]K^W_CF_A) \quad {\hbox{where}}\quad \mu=(A,[?_BF_A]\langle {!_BF_A}\rangle\top)\\ &\quad=g_{\mu} ([!_A([?_BF_A]\langle {!_BF_A}\rangle\top) ]K^W_CF_A)\land g_{\mu} ([!_A(\neg[?_BF_A]\langle {!_BF_A}\rangle\top) ]K^W_CF_A)\\ &\quad= [!_A g_{\#}([?_BF_A]\langle {!_BF_A}\rangle\top) ]g_{\#}(K^W_CF_A)\land [!_A g_{\#}(\neg [?_BF_A]\langle {!_BF_A}\rangle\top) ]g_{\#}(K^W_CF_A) \\ &\quad = [!_A g_{(B,F_A)}(\langle {!_BF_A}\rangle\top) ]K^W_CF_A\land [!_A \neg g_{(B,F_A)}(\langle {!_BF_A}\rangle\top) ]K^W_CF_A \\ &\quad= [!_A \langle {!_BF_A}\rangle\top]K^W_CF_A\land [!_A \neg\langle {!_BF_A}\rangle\top ]K^W_CF_A \\ \end{aligned} $$
The announcements in the last line may look familiar: actually, under the translation g, the solutions to Example 5 are translated into solutions to Example 4 without using questions.

Handling Arbitrary Utterances

To formally discuss the original HLPE, we still need one last technical preparation, since the gods in the story of HLPE answer questions in their own language. In this subsection, we also take this into consideration.

Definition 6

(Public question language with types and utterances) Let U be a finite set of utterances; the language \({{\tt PQLT}^{\bf T}_{\bf U}}\) replaces the announcements !aϕ in \({{\tt PQLT}^{\bf T}}\) by utterances !au:
$$ \phi ::= \top\mid p\mid \neg \phi \mid \phi\wedge\phi\mid K_a\phi \mid \eta(a) \mid [!_a u]\phi \mid [?_a\phi]\phi \mid [!_a]\phi $$
where \(\eta\in{\bf T}, u\in {\bf U}\) and \(a\in {\bf G}\).

[!au]ϕ expresses that, if a says u, then ϕ is true.

A model \({{\mathfrak{M}}}\) for \({{\tt PQLT}^{\bf T}_{\bf U}}\) is a tuple: \((S,\{\sim_a\mid a\in{\bf G}\},V,\lambda, I)\) where \(I: S\times Form({{\tt PQLT}^{\bf T}_{\bf U}})\times {\bf U} \to Form({{\tt PQLT}^{\bf T}_{\bf U}})\) is a function and I(s, ϕ, u) is the interpretation of an answer u on world s given the question ϕ. For example, if U = {yes, no}, we can define a function I corresponding to the usual interpretation of yes and no as answers to questions: I(s, ϕ, yes) = ϕ and \(I(s,\phi, no)=\neg\phi\) for each s and each ϕ.

The semantics of \({{\tt PQLT}^{\bf T}_{\bf U}}\) is mostly the same as that of \({{\tt PQLT}^{\bf T}}\), except for the formulas involving utterances, which depend on the interpretation function.

Definition 7

(Semantics for\({{\tt PQLT}^{\bf T}_{\bf U}}\)) The semantics of \({{\tt PQLT}^{\bf T}_{\bf U}}\) formulas on the model \({{\mathfrak{M}}=(S,\{\sim_a\mid a\in{\bf G}\},V,\lambda, I)}\) is defined exactly as the semantics of \({{\tt PQLT}^{\bf T}}\) w.r.t. \(\mu\in\{\#\}\cup ({\bf G}\times Form({{{\tt PQLT}^{\bf T}_{\bf U}}})), \) except for the following clauses:
where \({{\mathfrak{M}}|^a_{\chi,u}}\) is defined as \((S', \{\sim'_a\mid a\in{\bf G}\}, V', \lambda', I')\) where:
  • \({S'=\{t\mid t\in S \hbox{ and } {\mathfrak{M}}, t \,\Vdash\,_\# \lambda(t,a)(I(t,\chi,u),a)\}}\)

  • For each \(a\in {\bf G}, t\in S', u\in {\bf U}, \phi\in{{\tt PQLT}^{\bf T}_{\bf U}}: \sim_a'=\sim_a|_{S'\times S'}, V'(t)=V(t), \lambda'(t)=\lambda(t),\) and I′(t, ϕ, u) = I(t, ϕ, u).

We say that \({{\mathfrak{M}}|^a_{\chi,u}}\) is defined if the set \({\{t\mid t\in S \hbox{ and } {\mathfrak{M}}, t\,\Vdash\,_\# \lambda(t,a)(I(t,\chi,u),a)\}}\) is not empty.
It is easy to see that:
$$ {{\mathfrak{M}}}, s\,\Vdash\,_\mu \langle {!_a}\rangle\phi \Leftrightarrow {{\mathfrak{M}}}, s\,\Vdash\,_\mu \neg [!_a]\neg\phi \Leftrightarrow \hbox{ there exists a } u\in{\bf U}: {{\mathfrak{M}}}, s\,\Vdash\,_\mu \langle {!_a u}\rangle\phi $$

Remark 4

It is important that we use \(\Vdash_\#\) in the third condition of the clause for [!au]ϕ. Replacing # by μ would cause circularity in the semantics. For instance, \(?_a\langle {!_au}\rangle\top\) may then express the self-referential question ‘Will you answer u (to this question)?’.

Questioning Strategy

In the previous sections, we talked about the notions of puzzles and solutions in a rather informal manner. In this subsection, we attempt to formalize them precisely in the framework of \({{\tt PQLT}^{\bf T}_{\bf U}}\).

Definition 8

(Questioning strategy) A questioning strategy π w.r.t. \({{\tt PQLT}^{\bf T}_{\bf U}}\) is a tuple (Q, F, r, δ, L) where
  • Q is a non-empty finite set of question states and \(r\in Q \) is the initial state,

  • F is a non-empty finite set of final states such that \(F\cap Q = \emptyset\), 

  • \(\delta: Q \times {\bf U}\to Q\cup F\) is a transition function,

  • \(L: Q\to {\bf G}\times Form({{\tt PQLT}^{\bf T}_{\bf U}})\) essentially assigns to each question state a question ?aϕ expressible in \({{\tt PQLT}^{\bf T}_{\bf U}}\) (formally represented as a pair (a, ϕ)).

In this work, we only consider the questioning strategies that are trees7.

For any questioning strategy π = (Q, F, r, δ, L) and any \(q\in Q, \) let LG(q) and \(L^\Upphi(q)\) be the first and the second element of L(q), respectively. Note that every q node has one and only one u successor for each u in U. Two different question states may be assigned the same question (a, ϕ). Given a questioning strategy π, an execution of π is a path \(r\buildrel{{u_1}} \over {\rightarrow}q_1\cdots\buildrel{{u_n}} \over {\rightarrow}q_n\) in π such that \(q_i\in Q\) for i < n and \(q_n\in F\). Let P(π) be the collection of all the executions in π. The length of a strategy (|π|) is defined as the length of the longest execution of π (a natural number or ω).

For example, given \({\bf G}=\{A,B,C\}, {\bf T}=\{{\tt TT},{\tt LL},{\tt LT}\}\) and \({\bf U}=\{ja, da\}, \) a simple questioning strategy π: ‘asking them one by one if they are bluffers’ is illustrated as follows:
where \(r:?_A{\tt LT}(A)\) means \(L(r)=(A, {\tt LT}(A)), \) similarly for other nodes.
Let Seq(π) be all the potential question-answer sequences of π, namely,
$$ \begin{aligned} Seq(\pi)=\{?_{a_1}\phi_1 !_{a_1}u_1\ldots ?_{a_n}\phi_n !_{a_n}u_n\mid q_0\buildrel{{u_1}} \over {\rightarrow}q_1\cdots \buildrel{{u_{n}}} \over {\rightarrow}q_{n}\in P(\pi) \hbox{ with } q_0=r,\\ \quad\quad\quad\quad\quad\quad\quad \forall i: a_i=L^{\bf G}(q_{i-1}), \phi_i=L^\Upphi(q_{i-1})\}. \end{aligned} $$
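The example strategy above, together with the enumeration of Seq(π), can be sketched as follows; the state names and string labels are illustrative only.

```python
# The strategy 'ask A, B, C in turn whether they are bluffers', with
# U = {ja, da}: L labels question states with (agent, question), delta is
# the transition function, and states outside L's domain are final.

U = ('ja', 'da')
L = {'q0': ('A', 'LT(A)'), 'q1': ('B', 'LT(B)'), 'q2': ('C', 'LT(C)')}
delta = {('q0', u): 'q1' for u in U}
delta.update({('q1', u): 'q2' for u in U})
delta.update({('q2', u): 'f_' + u for u in U})

def executions(state='q0', prefix=()):
    """Yield every question-answer sequence in Seq(pi)."""
    if state not in L:                        # a final state: one execution done
        yield prefix
        return
    a, phi = L[state]
    for u in U:
        yield from executions(delta[(state, u)],
                              prefix + (('?', a, phi), ('!', a, u)))

seqs = list(executions())                     # 2 answers at each of 3 states: 8 sequences
```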
A puzzle of \({{\tt PQLT}^{\bf T}_{\bf U}}\) is a pair consisting of a \({{\tt PQLT}^{\bf T}_{\bf U}}\) model and a \({{\tt PQLT}^{\bf T}_{\bf U}}\) formula as the goal: \({({\mathfrak{M}}, \phi)}\). Intuitively, a puzzle asks for a questioning strategy π such that ϕ is guaranteed after executing π. A questioning strategy π is a solution to a puzzle \({({\mathfrak{M}}, \phi)}\) if for all \(?_{a_1}\phi_1!_{a_1}u_1\cdots ?_{a_n}\phi_n!_{a_n} u_n\in Seq(\pi)\):
$$ \begin{aligned} {{\mathfrak{M}}} \,\vDash\, [?_{a_1}\phi_1](\langle {!_{a_1}}\rangle\top\land [!_{a_1}u_1][?_{a_2}\phi_2](\langle {!_{a_2}}\rangle\top\land [!_{a_2}u_2][?_{a_3}\phi_3](\cdots [?_{a_n}\phi_n]\\ \quad\quad\quad(\langle {!_{a_n}}\rangle\top\land[!_{a_n} u_n]\phi)\cdots))) \end{aligned} $$

Intuitively it says that for each execution \(?_{a_1}\phi_1!_{a_1}u_1\cdots ?_{a_n}\phi_n!_{a_n} u_n\in Seq(\pi), \) if the kth question \({?_{a_k}}{\phi_{k}} \) is asked then it must be answerable by some \(u\in{\bf U}, \) and if the answer is indeed \({!_{a_k}}u_{k} \) then we can proceed to the next question \({?_{a_{k+1}}}{\phi_{k+1}} \) and so on; eventually, if the last question \(?_{a_n}\phi_n\) is answered then ϕ holds. The idea behind the answerability condition \(\langle {!_{a_k}}\rangle\top\) is that we need to ask sensible questions that always have answers, otherwise [!a]ψ may hold trivially. For example, if an agent is a subjective truth teller, he may not be able to answer ?ϕ if he does not know whether ϕ. If no answer is also regarded as an answer, then the utterance ‘I don’t know’ should be included in U as well. See Remark 5 at the end of the next section for further discussion.

The above formal requirement looks complicated, but it can be simplified under certain conditions. If we are sure that every question in π is always answerable w.r.t. any world in \({{\mathfrak{M}},}\) then π is a solution to \({({\mathfrak{M}},\phi)}\) iff every executable path of π leads to ϕ: for any \(?_{a_1}\phi_1!_{a_1}u_1\cdots ?_{a_n}\phi_n!_{a_n} u_n\in Seq(\pi)\):
$$ {{\mathfrak{M}}}\,\Vdash\,[?_{a_1}\phi_1][!_{a_1}u_1]\cdots [?_{a_n}\phi_n][!_{a_n} u_n]\phi. $$

In the discussion of HLPE, we will only consider questions that are always answerable by ja or da, so the above simplified condition suffices.

Formalizing the Hardest Logic Puzzle Ever

In this section, we review one classic solution to the original HLPE in our formal framework.

Recall the story of HLPE mentioned at the beginning of this paper. Boolos provides the following guidelines in Boolos (1996):
  • B1 Each god may get asked more than one question;

  • B2 Later questions may depend on previous ones and their answers;

  • B3 Whether Random speaks truly or not depends on the flip of a coin in his mind: if the coin comes down heads, he speaks truly; if tails, falsely.

  • B4 Random will always answer ‘da’ or ‘ja’.

Rabern and Rabern (2008) first noticed that B3 may trivialize the puzzle, and therefore proposed an alternative assumption B3’ that we will follow in this work:
  • B3’ Whether Random answers ‘ja’ or ‘da’ depends on the coin flip in his mind: if it comes down heads, he answers ‘ja’; if tails, he answers ‘da’.

Note that B1 and B2 are already assumed implicitly in our formal definition of solutions to a puzzle, while B3’ and B4 actually say that Random is indeed of the type LT that we have defined, given any interpretation of da and ja.
However, to formalize the puzzle precisely, there is still a lot more left to be clarified about the knowledge of agents. Let us list the implicit (epistemic) assumptions as follows:
  • E0 A, B, and C are of the types in T = {TT, LL, LT} and this is common knowledge (to all of the agents, including the questioner D).

  • E1 A, B, and C are of different types and this is common knowledge.

  • E2 A, B, and C know each other’s types and this is common knowledge.

  • E3 A, B, and C know the meaning of ‘da’ and ‘ja’ and this is common knowledge.

  • E4 D does not know the types of A, B, C and this is common knowledge.

  • E5 D does not know the exact meanings of ‘da’ and ‘ja’ but he knows that one means ‘yes’ and the other means ‘no’, and this is common knowledge.

Moreover, we assume the following:
  1. Q1

    All questions are asked and answered publicly.

  2. Q2

    D does not mention himself in the questions.

  3. LS

    We only consider solutions of length less than 4.

Q1 and Q2 may look unnecessary but they do play a role in the analysis of HLPE within our framework: we only consider public questions and answers in our technical preparations, and Q2 will simplify our discussion later on in the paper.

Formalizing HLPE

In the sequel, we fix \({\bf U}=\{ja,da\}, {\bf T}=\{{\tt TT},{\tt LL},{\tt LT}\}\) and \({\bf G}=\{A,B,C,D\}\). According to the assumptions E0–E5 we can build the following model \({{\mathfrak{M}}_0}\) (as usual we omit the reflexive transitive arrows and also the type of D since it is irrelevant):
where JA at a world s denotes the interpretation that ja means yes, and da means no at world s, i.e., I(s, ϕ, ja) = ϕ and \(I(s,\phi,da)=\neg\phi\) for any \({{\tt PQLT}^{\bf T}_{\bf U}}\) formula ϕ. Similarly, DA at world s denotes that \(I(s,\phi, ja)=\neg\phi\) and I(s, ϕ, da) = ϕ for any ϕ.
Note that although we do not include a common knowledge operator \(C_{\bf G}\) in our logical language, we can define common knowledge of ϕ (\(C_{\bf G}\phi\)) as the conjunction of all formulas of the form \(K_{a_1}\dots K_{a_n}\phi\) where \(a_i\in{\bf G}\). We may write \({{\mathfrak{M}}\,\Vdash\,C_{\bf G}\phi}\) if all the formulas in this collection are true at all worlds in \({{\mathfrak{M}}}\). With the help of \(C_{\bf G}\), we can verify that \({{\mathfrak{M}}_0}\) indeed validates the formulas corresponding to the assumptions E0 to E5. Take E5 as a non-trivial example, and let
$$ \begin{aligned} \phi^{\tt JA}_x&={\tt TT}(x)\to([?_x {\tt TT}(x)]\langle {!_x\,ja}\rangle\top\land[?_x \neg{\tt TT}(x)]\langle {!_x \, da}\rangle\top)\\ \phi^{\tt DA}_x&={\tt TT}(x)\to([?_x {\tt TT}(x)]\langle {!_x\, da}\rangle\top\land[?_x \neg {\tt TT}(x)]\langle {!_x\,ja}\rangle\top) \end{aligned} $$
Intuitively, \(\bigwedge\nolimits_{x\in{\bf G}}\phi^{\tt JA}_x\) is a clumsy way of saying that ja means yes and da means no (we cannot express this directly in our language). Similarly for \(\bigwedge\nolimits_{x\in{\bf G}}\phi^{\tt DA}_x\). We can formalize E5 by the following formula (more precisely, an infinite set of formulas):
$$ \phi_{E5}= C_{\bf G}(K_D(\bigwedge_{x\in{\bf G}}\phi^{\tt JA}_x\lor\bigwedge_{x\in{\bf G}}\phi^{\tt DA}_x)\land \neg (K_D \bigwedge_{x\in{\bf G}}\phi^{\tt JA}_x \lor K_D \bigwedge_{x\in{\bf G}}\phi^{\tt DA}_x)) $$
We can then verify that \({{\mathfrak{M}}_0\,\vDash\,\phi_{E5}}\).

All the other assumptions E0–E4 can also be formalized and checked on \({{\mathfrak{M}}_0}\), which we leave as an exercise for the interested reader.
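Although we leave the formal check as an exercise, the structure of \({{\mathfrak{M}}_0}\) is easy to explore mechanically. The following Python sketch (our own illustration, not part of the formal framework; all names are ours) enumerates the twelve worlds of \({{\mathfrak{M}}_0}\) and confirms that D, who initially considers all of them possible, knows neither any agent's type (in the spirit of E4) nor the meaning of ja and da (E5):

```python
from itertools import permutations

# Worlds of M0: a type assignment for A, B, C (a permutation of TT, LL, LT,
# by E0 and E1) paired with an interpretation of ja/da (by E5).
AGENTS = ('A', 'B', 'C')
WORLDS = [(types, interp)
          for types in permutations(('TT', 'LL', 'LT'))
          for interp in ('JA', 'DA')]   # JA: ja means yes; DA: ja means no

assert len(WORLDS) == 12

# By E4 and E5, D initially considers all 12 worlds possible, so D knows
# neither any agent's type nor the meaning of ja/da.
for i, agent in enumerate(AGENTS):
    assert len({types[i] for types, _ in WORLDS}) == 3   # each type still possible
assert len({interp for _, interp in WORLDS}) == 2        # both interpretations possible
```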

This shows that the model \({{\mathfrak{M}}_0}\) complies with our assumptions. Now let χ(a) be the formula \(K_D {\tt LL}(a)\lor K_D {\tt TT}(a)\lor K_D {\tt LT}(a),\) and let χ be \(\chi(A)\land \chi(B)\land\chi(C)\). The HLPE puzzle can be formalized as \({({\mathfrak{M}}_0,\chi)}\).

Verification of a Classic Solution

Before verifying an existing solution, let us formally prove the following crucial result from (Rabern and Rabern 2008):

Let E* be the function that takes a question q to the question ‘If you were asked whether q, would you say “ja”?’. When either True or False is asked E*(q), a response of ‘ja’ indicates that the correct answer to q is affirmative and a response of ‘da’ indicates that the correct answer to q is negative.

Lemma 1

(Embedded question lemma8) For any modality-free formula ϕ of \({{\tt PQLT}^{\bf T}_{\bf U}}\), any \(a\in\{A,B,C\}\) and any submodel \({{\mathfrak{N}}}\) of \({{\mathfrak{M}}_0}\):
$$ {{\mathfrak{N}}}\,\Vdash\,[?_a [?_a\phi]\langle {!_a\, ja}\rangle\top]([!_a\, ja]K_D(\neg{\tt LT}(a)\to\phi)\land [!_a\, da]K_D(\neg{\tt LT}(a)\to\neg\phi)) $$
where \([?_a\phi]\langle {!_a\, ja}\rangle\top\) expresses ‘If I asked you ϕ would you say “ja?”’.


Without loss of generality, let a = A. Let \(\psi=[?_A\phi]\langle {!_A\, ja}\rangle\top , \phi^{\tt JA}_s=\lambda(s,A)(\psi, A)\) and \(\phi^{\tt DA}_s=\lambda(s,A)(\neg \psi,A)\) for any s in \({{\mathfrak{N}}}\). Then we have the following chain of equivalences:
$$ \begin{aligned} &{{\mathfrak{N}}},s\,\Vdash\,[?_A [?_A\phi]\langle {!_A \,ja}\rangle\top][!_A\,ja]K_D(\neg{\tt LT}(A)\to\phi)\\ \iff &{{\mathfrak{N}}},s\,\Vdash\,_{(A, [?_A\phi]\langle {!_A \,ja}\rangle\top)}[!_A\,ja]K_D(\neg{\tt LT}(A)\to\phi)\\ \iff &\left\{ \begin{array}{ll} {{\mathfrak{N}}},s\,\Vdash\,_\#\phi^{\tt JA}_s \hbox{ implies } {{\mathfrak{N}}}|^A_{(\psi,\,ja)},s\,\Vdash\,_\# K_D(\neg{\tt LT}(A)\to\phi) & \hbox{if}\,s=\_\_\_{\tt JA}\\ {{\mathfrak{N}}},s\,\Vdash\,_\#\phi^{\tt DA}_s \hbox{ implies } {{\mathfrak{N}}}|^A_{(\psi,ja)},s\,\Vdash\,_\# K_D(\neg{\tt LT}(A)\to\phi) & \hbox{if}\,s=\_\_\_{\tt DA} \end{array} \right.(\star). \end{aligned} $$
$$ \begin{aligned} &{{\mathfrak{N}}},\_\_\_{\tt JA}\,\Vdash\,_\#\phi^{\tt JA}_s \\ \iff& {{\mathfrak{N}}},s\,\Vdash\,_\#\lambda(s,A)([?_A\phi]\langle {!_A\, ja}\rangle\top,A)\quad\hbox{if}\,s=\_\_\_{\tt JA}\\ \iff&\left\{\begin{array}{ll} {{\mathfrak{N}}},s\,\Vdash\,_\#[?_A\phi]\langle {!_A\, ja}\rangle\top& \hbox{if}\,s={\tt TT}\_\_{\tt JA}\\ {{\mathfrak{N}}},s\,\Vdash\,_\#\neg [?_A\phi]\langle {!_A\, ja}\rangle\top& \hbox{if}\,s={\tt LL}\_\_{\tt JA}\\ {{\mathfrak{N}}},s\,\Vdash\,_\#\top& \hbox{if}\,s={\tt LT}\_\_{\tt JA} \end{array} \right.\\ \iff&\left\{\begin{array}{ll} {{\mathfrak{N}}},s\,\Vdash\,_\#\phi& \hbox{if}\,s={\tt TT}\_\_{\tt JA}\\ {{\mathfrak{N}}},s\,\nVdash\,_\#\neg\phi& \hbox{if}\,s={\tt LL}\_\_{\tt JA}\\ {{\mathfrak{N}}},s\,\Vdash\,_\#\top& \hbox{if}\,s={\tt LT}\_\_{\tt JA} \end{array} \right.\\ \iff&\left\{\begin{array}{ll} {{\mathfrak{N}}},s\,\Vdash\,_\#\phi& \hbox{if}\,s\not={\tt LT}\_\_{\tt JA}\\ {{\mathfrak{N}}},s\,\Vdash\,_\#\top& \hbox{if}\,s={\tt LT}\_\_{\tt JA} \end{array} \right. \end{aligned} $$
$$ \begin{aligned} &{{\mathfrak{N}}},\_\_\_{\tt DA}\,\Vdash\,_\#\phi^{\tt DA}_s \\ \iff&\left\{\begin{array}{ll} {{\mathfrak{N}}},s\,\Vdash\,_\#\neg [?_A\phi]\langle {!_A\,ja}\rangle\top& \hbox{if}\,s={\tt TT}\_\_{\tt DA}\\ {{\mathfrak{N}}},s\,\Vdash\,_\#\neg \neg [?_A\phi]\langle {!_A\, ja}\rangle\top& \hbox{if}\,s={\tt LL}\_\_{\tt DA}\\ {{\mathfrak{N}}},s\,\Vdash\,_\#\top& \hbox{if}\,s={\tt LT}\_\_{\tt DA} \end{array} \right.\\ \iff&\left\{\begin{array}{ll} {{\mathfrak{N}}},s\,\nVdash\,_\#\neg\phi& \hbox{if}\,s={\tt TT}\_\_{\tt DA}\\ {{\mathfrak{N}}},s\,\Vdash\,_\# \phi& \hbox{if}\,s={\tt LL}\_\_{\tt DA}\\ {{\mathfrak{N}}},s\,\Vdash\,_\#\top& \hbox{if}\,s={\tt LT}\_\_{\tt DA} \end{array} \right.\\ \iff&\left\{\begin{array}{ll} {{\mathfrak{N}}},s\,\Vdash\,_\#\phi& \hbox{if}\,s\not={\tt LT}\_\_{\tt DA}\\ {{\mathfrak{N}}},s\,\Vdash\,_\#\top& \hbox{if}\,s={\tt LT}\_\_{\tt DA} \end{array} \right. \end{aligned} $$
According to the semantics, \({{\mathfrak{N}}|^A_{\psi,\,ja}}\) retains the worlds t where:
$$ \left\{\begin{array}{ll} {{\mathfrak{N}}},t\,\Vdash\,_\#\phi_t^{\tt JA}& \hbox{if}\,t=\_\_\_{\tt JA}\\ {{\mathfrak{N}}},t\,\Vdash\,_\# \phi_t^{\tt DA}& \hbox{if}\,t=\_\_\_{\tt DA}\\ \end{array} \right. $$
Based on the above observations, \({{\mathfrak{N}}|^A_{\psi,\,ja}}\) retains the worlds satisfying \({\tt LT}(A)\lor\phi\), independently of the interpretation of ja and da. Now since ϕ is modality-free, all the worlds in \({{\mathfrak{N}}|^A_{\psi,\, ja}}\) satisfy \({\tt LT}(A)\lor\phi\). Therefore \((\star)\) is indeed true, and hence \({{\mathfrak{N}},s\,\Vdash\,[?_A [?_A\phi]\langle {!_A\, ja}\rangle\top][!_A\,ja]K_D(\neg{\tt LT}(A)\to\phi)}\) for an arbitrary s in \({{\mathfrak{N}}}\). Similarly we can show that
$$ {{\mathfrak{N}}}\,\Vdash\,[?_A [?_A\phi]\langle {!_A\,ja}\rangle\top] [!_A\, da]K_D(\neg{\tt LT}(A)\to\neg\phi). $$
Since the selection of A is arbitrary, the proof can be completed easily. \(\square\)
Let \(\phi={\tt LT}(A)\). Based on the above lemma, we have:
$$ \begin{aligned} {{\mathfrak{M}}}_0\,\Vdash\,[?_B [?_B{\tt LT}(A)]\langle {!_B \,ja}\rangle \top]([!_B\,ja]K_D(\neg{\tt LT}(B)\to{\tt LT}(A))\land\\ [!_B\, da]K_D(\neg{\tt LT}(B)\to\neg{\tt LT}(A))) \end{aligned} $$
Note that \(\neg{\tt LT}(B)\to{\tt LT}(A)\) is equivalent to \({\tt LT}(B)\lor{\tt LT}(A),\) and \(\neg{\tt LT}(B)\to\neg{\tt LT}(A)\) is equivalent to \({\tt LT}(B)\lor\neg{\tt LT}(A)\). Since it is commonly known that there is only one bluffer, the above result implies the following:
$$ {{\mathfrak{M}}}_0\,\Vdash\,[?_B [?_B{\tt LT}(A)]\langle {!_B\, ja}\rangle\top]([!_B\,ja]K_D(\neg {\tt LT}(C))\land [!_B\, da]K_D(\neg{\tt LT}(A))) $$
In words, this says that D will know that one of the agents is not a bluffer.
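The content of the lemma can also be checked by brute-force computation. The following Python sketch (our own illustration of the reasoning; the function names are ours) simulates how an objective truth teller or liar answers the embedded question E*(q), and verifies that the answer is ja exactly when q is true, under either interpretation of ja and da:

```python
# `interp_yes` is the word that means 'yes' at the current world.
def answer(agent_type, truth, interp_yes):
    """Word a TT/LL agent utters when asked a question whose answer is `truth`."""
    asserted = truth if agent_type == 'TT' else not truth  # LL asserts the negation
    words = {'ja': interp_yes == 'ja', 'da': interp_yes == 'da'}
    return next(w for w, meaning in words.items() if meaning == asserted)

def answer_embedded(agent_type, truth, interp_yes):
    """Answer to E*(q): 'If you were asked q, would you say ja?'."""
    inner = answer(agent_type, truth, interp_yes)          # hypothetical answer to q
    return answer(agent_type, inner == 'ja', interp_yes)   # answer about that answer

# For TT and LL under both interpretations: the answer is 'ja' iff q is true.
for t in ('TT', 'LL'):
    for interp_yes in ('ja', 'da'):
        for truth in (True, False):
            assert (answer_embedded(t, truth, interp_yes) == 'ja') == truth
```

Note that the two negations (the liar's and the 'wrong' interpretation's) cancel out in every combination, which is exactly why the interpretation of ja and da becomes irrelevant.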
Based on this result, Rabern and Rabern (2008) proposed a three-step solution as follows (following (Rabern and Rabern 2008) we use Ea*(ϕ) as shorthand for the formula \( [?_a\phi]\langle {!_a \, ja}\rangle\top\)):
In words, D first asks B whether A is a bluffer. Then depending on the answer, either A or C must be a non-bluffer. Thus D can then ask the non-bluffer about his own type and others’ types.
Call the above questioning strategy π. We can verify π formally. Note that all the questions in π can be answered by at least one of ja and da, thus we only need to check that for all \(?_{a_1}\phi_1!_{a_1}u_1\cdots ?_{a_n}\phi_n!_{a_n} u_n\in Seq(\pi)\):
$$ {{\mathfrak{M}}}_0 \,\vDash\, [?_{a_1}\phi_1][!_{a_1}u_1]\cdots [?_{a_n}\phi_n][!_{a_n} u_n]\chi. $$
Based on Lemma 1, the verification is immediate, and thus D will know the types of all the three agents.
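Since the verification amounts to checking finitely many update sequences, it can also be carried out mechanically. The sketch below uses our own reading of the strategy (first ask B the question E*(LT(A)); then ask the guaranteed non-bluffer x the questions E*(TT(x)) and E*(LT(B))); it relies only on the answering behaviour given by the embedded question lemma and checks that every possible run leaves D knowing all three types:

```python
from itertools import permutations

# Worlds of M0: types of A, B, C plus an interpretation tag for ja/da.
WORLDS = [t + (i,) for t in permutations(('TT', 'LL', 'LT'))
          for i in ('JA', 'DA')]
IDX = {'A': 0, 'B': 1, 'C': 2}

def possible(world, agent, q):
    """Possible answers of `agent` to E*(q) at `world`; q maps a world to a bool.
    By the embedded question lemma, TT/LL answer ja iff q holds, under either
    interpretation; the bluffer LT may answer either word."""
    if world[IDX[agent]] == 'LT':
        return {'ja', 'da'}
    return {'ja'} if q(world) else {'da'}

def update(ws, agent, q, ans):
    """D's public update: keep the worlds consistent with the observed answer."""
    return [w for w in ws if ans in possible(w, agent, q)]

is_lt = lambda a: (lambda w: w[IDX[a]] == 'LT')
is_tt = lambda a: (lambda w: w[IDX[a]] == 'TT')

for a1 in ('ja', 'da'):
    ws1 = update(WORLDS, 'B', is_lt('A'), a1)
    x = 'C' if a1 == 'ja' else 'A'          # x is guaranteed not to be the bluffer
    assert all(w[IDX[x]] != 'LT' for w in ws1)
    for a2 in ('ja', 'da'):
        ws2 = update(ws1, x, is_tt(x), a2)
        for a3 in ('ja', 'da'):
            ws3 = update(ws2, x, is_lt('B'), a3)
            # D now knows every god's type (the interpretation may stay unknown).
            assert all(len({w[i] for w in ws3}) <= 1 for i in range(3))
```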

New Puzzles with Epistemic Twists

In the previous sections, we developed epistemic frameworks to handle various puzzles about agent types in question-answer scenarios such as the original HLPE. However, the power of our frameworks has not yet been fully demonstrated, since most of the previous examples can be treated as puzzles of Boolean algebra in the informal discussion style of the literature. This phenomenon has a technical explanation as we mentioned before: as long as we talk about objective types, the knowledge of agents is not really relevant and apparently complicated formulas can be translated back to Boolean formulas or simple epistemic formulas with no higher-order knowledge. Thus, existing puzzles are just too easy to require the full power of our \({{\tt PQLT}^{\bf T}_{\bf U}}\) framework. In this section, let us go a little bit further and consider some significantly harder puzzles where deeper epistemic reasoning is required.

One important underlying assumption in the original puzzle and its existing variations is that A, B, and C are gods. Intuitively, being gods, A, B, and C should know everything. Therefore their knowledge does not play a role in reasoning about their types. However, what if they are not gods but human beings? Being ordinary people, A, B, and C may not know everything, and they will then behave according to their own knowledge.

In such a scenario, agents may not know each other’s types, and they should have subjective, instead of objective, types. Correspondingly, we should replace T in the assumption E0 by \({\bf T}'=\{{\tt STT},{\tt SLL},{\tt LT}\}\).9 Since we do not require the agents to know each other’s types, E2 should be abandoned. What would be an alternative to E2? Actually there are many possible assumptions. We just list a few examples:
  • It is commonly known (to A, B, C, and D) that agents A, B, and C only know their own types.

  • It is commonly known that A knows everyone’s type, but B and C only know their own types.

  • It is commonly known that a bluffer knows everyone’s type, but truth tellers and liars only know their own types.

  • A knows everyone’s type, but B and C only know their own types and doubt whether A indeed knows their types. D is not sure whether any of the three know all the types of each other.

To see that such epistemic assumptions can really make a difference, let us look at the following simple example \({{\mathfrak{N}}}\):

From the model we can read off that it is commonly known that A does not know the types of B or C, but both B and C know the type of A. Moreover, it is commonly known that ja means yes and da means no. Now, can D determine AB, and C’s types by asking questions?

Surprisingly, the answer is negative. To prove this formally, we need Proposition 9, which will be proved later on. The intuition is this: first of all, asking A does not bring any new information, since D knows everything that A knows. However, whatever D asks B or C, there is always a possibility that the answerer is a bluffer, and thus at least one of the answers does not give any useful information. For example, suppose D asks B ‘Are you a liar?’ If the answer is ja, we know B must be LT, since a (subjective) liar cannot answer ja. However, if the answer is da, we cannot learn anything, since both the liar and the bluffer can answer da. Note that if A had no uncertainty between the two worlds, then D could simply ask A about the types of B and C.
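To make the intuition concrete, the following sketch (our own simplified reading of the subjective types, assuming ja means yes and that agents know their own types) tabulates which answers each type can give to the question ‘Are you a liar?’:

```python
def possible_answers(own_type, phi_true):
    """Answers agent a can give to a question phi about a's own type (ja = yes)."""
    if own_type == 'STT':                 # subjective truth teller: asserts what he knows
        return {'ja'} if phi_true else {'da'}
    if own_type == 'SLL':                 # subjective liar: asserts the negation
        return {'da'} if phi_true else {'ja'}
    return {'ja', 'da'}                   # LT bluffer: may answer either word

# D asks a: 'Are you a liar?', i.e. phi = SLL(a).
for t in ('STT', 'SLL', 'LT'):
    ans = possible_answers(t, phi_true=(t == 'SLL'))
    if 'ja' in ans:
        assert t == 'LT'   # only the bluffer can answer ja
    assert 'da' in ans     # da is consistent with every type, so it is uninformative
```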

Now we are ready to consider a particular variation of the HLPE:

Example 6

(HLPE with ignorance) A (subjective) liar, a (subjective) truth teller and a bluffer are living on an island. They know their own types but do not know the others’ types. Moreover, it is commonly known that they are of different types. They understand English but can only answer questions in their own language, in which the words for yes and no are da and ja, in some order. Now the question is: can you determine their types by asking questions such that they are always able to answer ja or da?

Let us first list the new assumptions:
  1. E0’

A, B, and C are of types in T′ = {STT, SLL, LT} and this is common knowledge (to all of the agents including the questioner D).

  2. E1

A, B, and C are of different types and this is common knowledge.

  3. E2’

A, B, and C know their own types but do not know the others’ types, and this is also common knowledge.

  4. E3–E5

together with Q1 and Q2, are as before, but we no longer restrict ourselves to 3-step solutions, thus giving up constraint LS.

Based on the above assumption, we can build the following model \({{\mathfrak{M}}_1}\):

It is not hard to check that E0’ and E3–E5 hold on \({{\mathfrak{M}}_1}\). For E2’, note that for any agent \(a\in\{A,B,C\}\), at each world s, agent a cannot distinguish s from another world t where his own type and the interpretation function are the same as in s. For example, agent B cannot distinguish STT, SLL, LT, JA from LT, SLL, STT, JA.
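This check of E2’ can be made concrete as follows (a sketch with our own encoding of \({{\mathfrak{M}}_1}\); worlds are tuples of the three types together with the interpretation tag, and all names are ours):

```python
from itertools import permutations

# Worlds of M1: (type of A, type of B, type of C, interpretation of ja/da).
WORLDS = [t + (i,) for t in permutations(('STT', 'SLL', 'LT'))
          for i in ('JA', 'DA')]
assert len(WORLDS) == 12

AGENTS = {'A': 0, 'B': 1, 'C': 2}

def indist(agent, s, t):
    """E2'/E3: an agent cannot distinguish worlds that agree on his own type
    and on the interpretation of ja/da."""
    i = AGENTS[agent]
    return s[i] == t[i] and s[3] == t[3]

# The example from the text: B cannot distinguish these two worlds...
assert indist('B', ('STT', 'SLL', 'LT', 'JA'), ('LT', 'SLL', 'STT', 'JA'))
# ...but A can, since A's own type differs between them.
assert not indist('A', ('STT', 'SLL', 'LT', 'JA'), ('LT', 'SLL', 'STT', 'JA'))
```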

Let θ(a) be the formula \(K_D {\tt SLL}(a)\lor K_D {\tt STT}(a)\lor K_D {\tt LT}(a)\) and \(\theta=\theta(A)\land \theta(B)\land\theta(C)\). The puzzle is then formalized as \({({\mathfrak{M}}_1, \theta)}\).

First note that Lemma 1 does not hold any more if we consider the submodels of \({{\mathfrak{M}}_1}\) instead of the submodels of \({{\mathfrak{M}}_0}\). For example, we have:
$$ {{\mathfrak{M}}}_1,({\tt STT},{\tt SLL},{\tt LT},{\tt JA})\,\nVdash\, [?_A[?_A{\tt LT}(B)]\langle {!_A\,ja}\rangle\top][!_A\, da](K_D(\neg{\tt LT}(A)\to\neg{\tt LT}(B))) $$
To see this, observe that \({{\mathfrak{M}}_1|^A_{[?_A{\tt LT}(B)]\langle {!_A\,ja}\rangle\top,\, da}}\) keeps the world (STT, LT, SLL, JA) where \(\neg {\tt LT}(A)\to \neg {\tt LT}(B)\) does not hold:
$$ \begin{aligned} &{{\mathfrak{M}}}_1, ({\tt STT},{\tt LT},{\tt SLL},{\tt JA})\,\Vdash\,{\tt STT}(\neg [?_A{\tt LT}(B)]\langle {!_A\,ja}\rangle\top, A) \\ \iff &{{\mathfrak{M}}}_1, ({\tt STT},{\tt LT},{\tt SLL},{\tt JA})\,\Vdash\,_\# K_A\neg [?_A{\tt LT}(B)]\langle {!_A\, ja}\rangle\top\\ \iff & {{\mathfrak{M}}}_1, ({\tt STT},{\tt LT},{\tt SLL},{\tt JA})\,\nVdash\,_\# [?_A{\tt LT}(B)]\langle {!_A\, ja}\rangle\top \\ &\hbox{ and }{{\mathfrak{M}}}_1, ({\tt STT},{\tt SLL},{\tt LT},{\tt JA})\,\nVdash\,_\# [?_A{\tt LT}(B)]\langle {!_A\,ja}\rangle\top\\ \iff & {{\mathfrak{M}}}_1, ({\tt STT},{\tt LT},{\tt SLL},{\tt JA})\,\nVdash\,_{(A,{\tt LT}(B))}\langle {!_A\,ja}\rangle\top \\ &\hbox{ and }{{\mathfrak{M}}}_1, ({\tt STT},{\tt SLL},{\tt LT},{\tt JA})\,\nVdash\,_{(A,{\tt LT}(B))}\langle {!_A\, ja}\rangle\top\\ \iff & {{\mathfrak{M}}}_1, ({\tt STT},{\tt LT},{\tt SLL},{\tt JA})\,\nVdash\, K_A{\tt LT}(B) \\ &\hbox{ and }{{\mathfrak{M}}}_1, ({\tt STT},{\tt SLL},{\tt LT},{\tt JA})\,\nVdash\, K_A{\tt LT}(B)\\ \Longleftarrow &{{\mathfrak{M}}}_1, ({\tt STT},{\tt SLL},{\tt LT},{\tt JA})\,\nVdash\,{\tt LT}(B) \end{aligned} $$
The essential problem is that when a subjective truth teller answers ‘no’ to the question ‘Would you be able to answer “yes” to the question ?ψ?’, it does not mean that he will answer ‘no’ when he is actually asked whether ψ, because he might not be able to answer anything at all, according to his type. Note that the question ‘Will you answer “yes” to the question ?ψ?’ is always answerable, but the question ?ψ might not be. The classic solution to the original HLPE involves asking questions about others’ types. However, with the subjective liar and truth teller, such questions might not be answerable any more, since the agents may be ignorant about the others’ types.
Working toward solving the puzzle, we need a few new insights. Let \({{\mathfrak{M}}_1'}\) be the model just like \({{\mathfrak{M}}_1}\) but without D links between the JA zone and DA zone (that is, D knows the meanings of ja and da). Let \({{\mathfrak{M}}_2}\) be the upper part of \({{\mathfrak{M}}_1, }\) being the following model:

Clearly, in the above model D also knows the exact meanings of da and ja and this is common knowledge.

Before we prove the following proposition, let us be more precise about answerable questions. We say that a question ?aϕ is answerable on a model \({{\mathfrak{M}}}\) if \({{\mathfrak{M}}\,\Vdash\,[?_a\phi]\langle {!_a}\rangle\top}\). Thus, at any world in \({{\mathfrak{M}}}\), \(a\) has at least one possible answer to the question ?aϕ. It is not hard to see that if ?aϕ is answerable on a submodel \({{\mathfrak{N}}}\) of \({{\mathfrak{M}}_1}\), then \({{\mathfrak{N}}\,\Vdash\,\neg{\tt LT}(a)\to (K_a\phi\lor K_a\neg\phi)}\).

Proposition 8

There is a solution to \({({\mathfrak{M}}_1, \theta)}\) iff there is a solution to \({({\mathfrak{M}}_2, \theta)}\).


Proofs for this proposition and other results presented in this section are provided in the "Appendix".

This proposition says that we can actually ignore uncertainties about ja and da when searching for solutions to \({({\mathfrak{M}}_1,\theta)}\).

We say that a question ?aϕ is effective on a model \({{\mathfrak{M}}}\) if for any \(u\in\{ja,\,da\}\), \({{\mathfrak{M}}|^a_{\phi,u}}\) is defined (i.e., the domain is not empty) and \({{\mathfrak{M}}|^a_{\phi,u}\not={\mathfrak{M}}}\); that is, answers to the question will always update the model by deleting some worlds. Now we make one crucial observation before proving our main impossibility result.

Proposition 9

For any submodel \({{\mathfrak{N}}}\) of \({{\mathfrak{M}}_2}\) and any \(a\in\{A,B,C\}\): \({{\mathfrak{N}}\,\Vdash\,\neg {\tt SLL}(a)}\) or \({{\mathfrak{N}}\,\Vdash\,\neg {\tt STT}(a)}\) implies that there is no effective question for a in model \({{\mathfrak{N}}}\).

Based on Proposition 9, we have the following theorem.

Theorem 2

There is no solution to \({({\mathfrak{M}}_2, \theta)}\), and therefore there is no solution to \({({\mathfrak{M}}_1,\theta)}\).

The proof of Theorem 2 gives a further interesting result: Although we cannot guarantee that D knows all the types of the agents, we can guarantee that D always knows the type of one of the non-bluffers (but he cannot make sure which one)!

Now let θ′(a) be \(K_D{\tt SLL}(a) \lor K_D{\tt STT}(a)\) and let θ′ be \(\theta'(A)\lor\theta'(B)\lor\theta'(C)\). Although there is no solution to the original puzzle, we do have solutions to the puzzle \({({\mathfrak{M}}_2, \theta')}\). A simple solution is to ask each of A, B, and C ‘Are you a bluffer?’ The questioning strategy (and the outcomes at the final states) can be illustrated as follows:
Note that D cannot guarantee where he ends up in the questioning strategy tree, since the answer from a bluffer is essentially non-deterministic. Moreover, repeatedly using the strategy after D reaches one of the final states will not work, since we assume (Q1) that all the questions are asked and answered publicly. Thus when A and B get to know more, one cannot eliminate their knowledge.10 We can also turn the above solution into a solution for \({({\mathfrak{M}}_1,\theta')}\) by replacing each ?aψ in the above solution with \(?_a([?_a(({\tt STT}(a)\to \psi)\land ({\tt SLL}(a)\to\neg\psi))]\langle {!_a\,ja}\rangle\top)\) as used in the proof of Proposition 8.
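The claim that this strategy achieves θ′ can be verified by brute force over \({{\mathfrak{M}}_2}\). The sketch below (our own encoding; it checks the situation after all three public answers, under the assumed answering behaviour of the subjective types) confirms that every consistent answer sequence leaves D knowing the type of at least one non-bluffer:

```python
from itertools import permutations, product

# Worlds of M2: permutations of the types of A, B, C (ja = yes throughout).
WORLDS = list(permutations(('STT', 'SLL', 'LT')))

def possible_answers(own_type, phi_true):
    """Assumed behaviour: STT asserts what he knows, SLL asserts its negation,
    and the bluffer LT answers arbitrarily."""
    if own_type == 'STT':
        return {'ja'} if phi_true else {'da'}
    if own_type == 'SLL':
        return {'da'} if phi_true else {'ja'}
    return {'ja', 'da'}

# D asks each agent 'Are you a bluffer?' and updates publicly.
for answers in product(('ja', 'da'), repeat=3):     # one answer per agent
    remaining = [w for w in WORLDS
                 if all(answers[i] in possible_answers(w[i], w[i] == 'LT')
                        for i in range(3))]
    if not remaining:
        continue                                    # this answer sequence cannot occur
    # theta': D knows SLL(a) or STT(a) for some agent a.
    assert any(len({w[i] for w in remaining}) == 1 and remaining[0][i] != 'LT'
               for i in range(3))
```

For instance, the answer sequence (ja, ja, da) leaves only the worlds (SLL, LT, STT) and (LT, SLL, STT), so D knows that C is a subjective truth teller without knowing which of A and B is the bluffer, matching the discussion above.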

Remark 5

Note that in the above discussions, we only consider ja and da as well-formed answers and consider solutions with answerable questions only. In more realistic cases, agents should be able to answer ‘I don’t know’ [or keep silent as in Uzquiano (2010)]. However, the definition of types will be much more complicated, and there may be different options in redefining the liar and the bluffer. E.g., can a liar truthfully answer ‘I don’t know’ or just say a random ‘yes’ or ‘no’ instead? Moreover, can a bluffer also announce ‘I don’t know’ randomly? Given some acceptable new definitions of types involving ‘I don’t know’, is it possible for D to know the types of A, B, and C? We suspect that the answer is still negative, but leave this for future exploration.

Conclusion and Discussion

In this paper, we first proposed a simple type language to define agent types in terms of preconditions of announcements. Based on a finite set of types T defined in the type language, we introduced the following five logical languages:
  • \({{\tt EL}}^{\bf T}\) Epistemic language (with type formulas),

  • \({{\tt PAL}}^{\bf T} ={{\tt EL}}^{\bf T}+[!\phi]\) Public announcement language (with type formulas),

  • \({{\tt PALT}^{\bf T}}={{\tt EL}}^{\bf T}+[!_a\phi]\) Public announcement language with types,

  • \({{\tt PQLT}^{\bf T}}={{\tt EL}}^{\bf T}+[!_a\phi]+[?_a\phi]+[!_a]\) Public question language with types,

  • \({{\tt PQLT}^{\bf T}_{\bf U}}={{\tt EL}}^{\bf T}+[!_au]+[?_a\phi]+[!_a]\) Public question language with types and arbitrary utterances.

In \({{\tt PALT}^{\bf T}}, {{\tt PQLT}^{\bf T}},\) and \({{\tt PQLT}^{\bf T}_{\bf U}}\), who says what is important, due to the types of the speakers. These languages are very powerful in expressing complicated announcements and questions, such as the apparently paradoxical ‘I am a liar’ announcement and counterfactual questions like ‘Would he answer “yes” if he were asked ϕ?’.

The first four languages are interpreted on epistemic models with type assignments, while the last language \({{\tt PQLT}^{\bf T}_{\bf U}}\) is interpreted on epistemic models with type assignments and utterance interpretations. We have shown that the first four languages are equally expressive. This does not mean that we do not need \({{\tt PAL}}^{\bf T}, {{\tt PALT}^{\bf T}},\) and \({{\tt PQLT}^{\bf T}}\) any more: on the contrary, they allow us to express things more naturally. As with standard public announcement logic (cf. Lutz 2006 and French et al. 2011), we conjectured that \({{\tt PALT}^{\bf T}}\) is exponentially more succinct than \({{\tt EL}}^{\bf T}\). Moreover, the expressiveness results do not tell us everything about those logics, e.g., in \({{\tt PALT}^{\bf T}},\) two announcements cannot be composed into one in general, but only for special cases with certain T. We also showed that the public announcements in \({{\tt PAL}}^{\bf T}\) can be mimicked by typed announcements with T containing LL and TT. There is a lot more to be explored about these logics.

We studied several variations of the Knight and Knave puzzles within the logical frameworks that we developed. In particular, we formalized HLPE and verified a classic solution. It was also shown that puzzles involving only objective truth tellers and liars are usually simpler than those with subjective types and epistemic uncertainties. Following this insight, we proposed new harder puzzles based on the original HLPE with complicated epistemic reasoning involved. In particular, we showed that there is no solution to a variation of HLPE, when the gods in the original HLPE are replaced by humans who do not know each other’s types. However, there is a questioning strategy that can let the questioner know the type of one non-bluffer.

The discussion of HLPE has demonstrated the power of our formal approach in handling complicated epistemic reasoning based on types of agents. However, the proofs for most of our results about HLPE boil down to tedious combinatorial analysis. Actually, we can save effort here by using automatic model checking methods based on our logical frameworks (cf. e.g., Clarke et al. 1999). In this paper, we have formally defined what a puzzle is and what counts as a solution to it. The verification of a solution is then transformed into model checking problems for certain modal formulas. In principle, we can then use techniques from model checking in our setting. Our translations between languages allow us to do model checking of the complex languages by model checking the translated formulas in the simpler languages. Moreover, solutions to the puzzles can be found by a bounded search over the possible sequences of questions. Thus the spectrum of new puzzles need not be solved by hand, but can be tackled automatically. A detailed discussion of computational issues of the model checking problem is beyond the scope of this paper and is left for future work.

Finally, we end our paper with a list of important further issues:
  • The boundary between the solvable and the unsolvable We have shown that, if A, B, and C do not know each other’s types, then there is no solution to the revised HLPE puzzle. Since there are indeed solutions to the original HLPE puzzle, the natural question to ask is: can we find a ‘minimal’ assumption on the knowledge of A, B, and C such that there is a solution? On the other hand, we can also keep the assumption of ignorance but allow agents to say ‘I do not know’ in some way, to see whether this will lead to a solvable puzzle. As we discussed in Remark 5, this may raise several new possibilities for defining the types of subjective liars and bluffers.

  • From knowledge to belief and more In this paper we defined ‘subjective’ agent types by conditioning on the knowledge of agents. However, realistic agents often rely on their beliefs to make announcements or answer questions. We can certainly replace knowledge operators with belief operators in types. A further step is to consider probability distributions over propositions as preconditions of agent types; e.g., a liar is someone who tells lies 80% of the time.

  • Richer agent types In this work, we focused on agent types in terms of what agents deliver by their announcements. There are definitely richer types in real life. For example, agent types may be reflected in how much information they would like to deliver w.r.t. what they know. A conservative agent may only announce \(\phi\lor \psi\) even when he knows ϕ. We will leave those richer types for other occasions.


Boolos credits Raymond Smullyan as the originator of the puzzle and John McCarthy for adding the twist of ja and da.


Since D’s type is irrelevant, we omit it in the model.


Truth values of epistemic formulas may not be preserved after announcement. For a study in the setting of PAL, we refer to (van Ditmarsch and Kooi 2006) and (Holliday and Icard III 2010).


We conjecture that \({\tt PALT}^{\bf T}\) is at least exponentially more succinct than \({\tt PAL}^{\bf T},\) but leave the proof for future work.


See (van Benthem and Minică 2009) for a similar composition issue in dynamic-epistemic logics of questions and answers.


See (Wang 2011a, b) for other applications of the context dependent semantics in DEL.


I.e., (QF, δ) is an acyclic graph where each node except r has one and only one u-predecessor for each \(u\in{\bf U}, \) and r can reach all other nodes.


We adopt the name of the lemma from (Rabern and Rabern 2008).


A subjective bluffer is the same as an objective one.


We conjecture that even when D can ask questions privately, the puzzle \({({\mathfrak{M}}_1,\theta)}\) still does not have any solution.


Interested readers may consult (Blackburn et al. 2002) for the preservation result of positive formulas in the standard setting of modal logic.


For instance, according to the semantics when s is in the shape of \({\tt STT}\_\_{\tt JA}, \lambda(A,s)(I(s,\phi, ja),A)=K_A\phi\). Therefore when answering ja, the updated model keeps the worlds satisfying \(K_A\phi\land{\tt STT}(A)\).



The authors would like to thank Hans van Ditmarsch and Johan van Benthem for their detailed comments on earlier versions of this paper, and thank Gregory Wheeler for pointing out the literature on the HLPE, which helped to shape the development of this work. We are also grateful to two anonymous referees of this journal for their very valuable comments. Both authors are partially supported by the Major Program of National Social Science Foundation of China (NO.11&ZD088). Yanjing Wang is also supported by the MOE Project of Key Research Institute of Humanities and Social Sciences in Universities (No.12JJD720011).

Copyright information

© Springer Science+Business Media B.V. 2012