The Fundamental Problem of Logical Omniscience

We propose a solution to the problem of logical omniscience in what we take to be its fundamental version: as concerning arbitrary agents and the knowledge attitude per se. Our logic of knowledge is a spin-off from a general theory of thick content, whereby the content of a sentence has two components: (i) an intension, taking care of truth conditions; and (ii) a topic, taking care of subject matter. We present a list of plausible logical validities and invalidities for the logic of knowledge per se for arbitrary agents, and isolate three explanatory factors for them: (1) the topic-sensitivity of content; (2) the fragmentation of knowledge states; (3) the defeasibility of knowledge acquisition. We then present a novel dynamic epistemic logic that yields precisely the desired validities and invalidities, for which we provide expressivity and completeness results. We contrast this with related systems and address possible objections.


Introduction
Formal epistemology tends to bake in logical omniscience. Epistemic logic in the style of [33] models a body of knowledge as a set of possible worlds K and takes ϕ as known when ϕ is true at every world in K. It follows that every logical truth is known by every agent, and if ϕ is known then so are its logical consequences. Relatedly, a model going back to [11] takes propositional content to be a set of possible worlds. If an agent's doxastic state is then modeled as a mere set of contents, the resulting doxastic logic seems objectionably weak: since ϕ ∧ ψ and ϕ are generally true at different worlds, this picture allows for agents that believe the former without the latter. Taking belief as closed under entailment provides a simple fix, but renders all logical truths universally believed. Further, Bayesianism models an agent's credences relative to evidence as a probability function. As this standardly assigns 1 to every logical truth, the agent is represented as fully believing all of them, as well as every logical consequence of the evidence [12, Ch. 6].
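To make the closure problem concrete, here is a minimal executable sketch (our illustration, not any cited system's implementation) of the possible-worlds picture: worlds are truth-value assignments to atoms, a body of knowledge is a set of worlds, and ϕ counts as known when it holds throughout that set. Logical omniscience falls out immediately.

```python
# Hintikka-style epistemic logic in miniature: a body of knowledge is a set
# of possible worlds; phi is known iff phi is true at every world in it.
from itertools import product

atoms = ["p", "q"]
worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]

def knows(K, phi):
    """phi is known iff it holds at every world compatible with the agent's info."""
    return all(phi(w) for w in K)

# The agent's information: she knows p /\ q (only worlds where both hold survive).
K = [w for w in worlds if w["p"] and w["q"]]

p_and_q = lambda w: w["p"] and w["q"]
p_or_not_p = lambda w: w["p"] or not w["p"]   # a logical truth

# Omniscience: every consequence of p /\ q is known, and every tautology is known.
assert knows(K, p_and_q)
assert knows(K, lambda w: w["p"])    # consequence: p
assert knows(K, p_or_not_p)          # logical truth, known "for free"
```

Since every tautology holds at every world whatsoever, it is automatically "known" on this model, no matter how ignorant the agent.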
Fans of these formalisms have room to maneuver. One may revise logic or propositional content, or complicate the relationship between attitudes and content [13,42,51,56]. One may interpret the formalisms as being about a perfectly rational Bayesian believer, or about what an ideal agent may infer from its body of knowledge [33,Sect. 2.5].
But deep philosophical issues are nearby. Say that a problem of logical omniscience arises for attitude A, relative to a class of agents, when either: (i) it is unclear whether A is closed under deductive consequence at all; or, (ii) given that A is not so closed, it is unclear what restricted closure principles hold for it; or (iii) though it is reasonably clear what principles should hold, it is unclear how to generate them with extant formal tools. (i) and (ii) bear on our basic understanding of propositional attitudes. (i) is widespread: epistemologists debate whether skeptical paradoxes show that even ideal agents cannot know every deductive consequence of their knowledge [15,16,32,65]. Bayesians debate whether even rational credence is closed under logical consequence [59,Sect.5.4]. (ii) is widespread: everyday belief ascriptions seem not to be closed under entailment, but it is less clear what logic, if any, they do exhibit [48]. So problems of logical omniscience can be framed for ordinary knowers. Or, for various ideal knowers: with bounded but otherwise perfect cognitive powers [18,Ch. 10]; normatively ideal [59]; or cognitively ideal [35]. Or, for derivative attitudes, such as 'logical commitment given what one knows' [43], or 'knowability in principle given certain evidence or information' [4,29].
A fundamental version of the problem is hit when one focuses on arbitrary agents, abstracting away from cognitive skills and rational commitments; and on the knowledge attitude per se, as opposed to derivative ones. This is the topic of the present paper: the intrinsic logic of the knowledge attitude, based only on the literal content of knowledge ascriptions such as 'a knows that ϕ' and plausibly universal features of a body of knowledge, irrespective of what a can or cannot compute, its working memory, or normative constraints.
The fundamental problem has a static aspect and a dynamic aspect. The static: what further facts must an arbitrary agent know if she knows ϕ 1 , . . . , ϕ n ? The dynamic: how is an arbitrary agent's body of knowledge κ altered by her coming to know ϕ? (The dynamic may be less well-known than the static, but has been explored in the literature -we'll get back to this.) For both, we answer by identifying a number of pre-theoretic logical principles, addressing (ii) above. We then use them to motivate a novel logical system, addressing (iii).
One line in the literature takes the problem to be addressed dynamically. The treatment of logical omniscience in [60] distinguishes explicit and implicit knowledge [17,42]. The former is understood statically: it gives the agent's actual knowledge and has negligible logical structure. The latter is a derived attitude, understood dynamically and closed under full classical (monotonic) logical consequence. We argue below that our system gives a richer account of the logical structure of actual knowledge, and avoids over-estimating the fruits of inferential dynamics.
We proceed as follows. Section 2 explains the fundamental problem and critically discusses some attempted responses from the literature. Section 3 presents a list of intuitive logical validities and invalidities for the intrinsic logic of knowledge, taken as defeasible criteria of adequacy for an epistemic logic that solves the fundamental problem. Section 4 spells out our logical system and shows that it meets such criteria. Interesting divergences with traditional dynamic epistemic logic, and with topic-sensitive accounts of doxastic notions, are noted. Section 5 anticipates some objections and offers replies and, where necessary, refinements. A twofold appendix provides proofs for theorems from the philosophical discussion, and results concerning soundness and completeness.

The Goal
Start with three questions: (1) Internal Logic. What is the internal logic of knowledge? That is: given that an arbitrary agent knows certain facts, what further facts can we conclude she knows? (2) Potential Knowledge. What can be known on the basis of a body of knowledge? That is: what can an arbitrary agent know via inference from her existing knowledge? (3) Content. How best to model the contents of knowledge? In particular, how best to model a body of knowledge -the sum total of what an agent knows?
The three are related: the logic of the knowledge attitude depends on the nature of its possible contents. In particular, if contents can stand in mereological relations, then an arbitrary agent who stands in the knowledge relation to content P thereby knows every part of P: 1 [65, Sect. 7.3] calls this 'immanent closure'. In specific cases, it is obvious that one knowledge claim follows from another: one cannot know that Jones is horrendously late without knowing that Jones is late. Such intuitions are prima facie data for evaluating a general account.
We intend the answers to (1), (2), (3) to be purely descriptive and universal. We do not address normative issues, such as: given that an agent knows P , what is she thereby permitted to know, or obliged to know? Nor are we asking what a cognitively ideal agent knows given knowledge P , or what a computationally bounded agent thereby knows, inferentially or otherwise. These questions generate interesting issues for which the label 'problem of logical omniscience' might be apt [6,7,33,35,38,58,59]. But they are not our fundamental problem. This bears on the intrinsic features of knowledge, and how language reports on them.
Logical tools help with (1), (2), (3). Take a modal propositional language E L with knowledge modality K and update modality [·]. Read 'Kϕ' as 'the agent knows that ϕ'. Read '[ϕ]ψ' as 'ψ holds after the agent comes to know ϕ'. 2 Suppose we settle on an intuitive epistemic logic, IEL, encompassing a list of plausible validities and invalidities given our intended interpretation of E L . Then (1), (2), (3) compress into a challenge: (4) Goal. Provide a semantics for E L , such that the compositional meanings of Kϕ and [ϕ]ψ yield IEL.
To solve our fundamental problem is now to meet the Goal. The solution must be squeezed between two extremes: (5) Lower Bound: the logic of knowledge per se is non-trivial. (6) Upper Bound: knowledge per se is not closed under (classical, monotonic) entailment.
As for Lower Bound: given an operator O, a validity is in the single-premise internal logic of O just in case it is of the form Oϕ ⊃ Oψ, with ⊃ the material conditional. The single-premise internal logic of O is trivial when Oϕ ⊃ Oψ is a validity only if ϕ = ψ (ϕ and ψ are syntactically identical). The single-premise internal logic of knowledge is not trivial. Knowledge distributes over conjunction [64, pp. 276-9]: knowing both Jones and Smith are late entails knowing Jones is late. A satisfactory epistemic logic validates K(p ∧ q) ⊃ Kp, where p and q are atoms.
As for Upper Bound: an agent may know something without knowing all of its consequences. Mathematical knowledge on the basis of a conjunction of simple axioms is often elusive: one can know Peano's Axioms, and these can entail, say, Goldbach's Conjecture; yet one does not thereby know whether Goldbach's Conjecture is true.
As for the dynamics: add update modalities of the form [ϕ] to our language with operator O. Suppose the following schema is valid: [ϕ]Oϕ. Hence, [ϕ] is a 'bringing it about that Oϕ' operation. Then, the single-step internal dynamics of O is the set of validities of the form [ϕ]Oψ. The internal dynamics is trivial just in case [ϕ]Oψ is a validity only if ϕ = ψ. The internal dynamics of knowledge is not trivial. If one comes to know that Jones is an expert lawyer, then one comes to know that Jones is a lawyer. Yet for ordinary agents, coming to know ϕ needn't result in knowing every consequence of ϕ: complicated mathematical theorems may elude one even after one comes to know the conjunction of all relevant axioms.
The Lower and Upper Bound yield desiderata for any framework aiming to achieve the Goal: (7) Lower Desideratum: the semantics validates the non-trivial principles in the internal logic and internal dynamics of knowledge, such as K(p ∧ q) ⊃ Kp. (8) Upper Desideratum: the semantics does not validate closure of knowledge under classical, monotonic entailment.

Discussing Some Literature
Here are frameworks that fail to meet some desideratum. On one approach, content is unstructured: an agent's total body of knowledge at actuality @ is modeled as a set of possible worlds. Given that gaining information is narrowing the space of possibilities, the set represents the information the agent has about @ (compare the notion of 'information as range' in [24] and [60]). Kϕ is true at @ exactly when ϕ is true at every possible world compatible with the agent's information. This, of course, is the classic approach of [33]: epistemic logic is but a normal modal logic. Logical omniscience follows: Kϕ ⊃ Kψ is valid when ϕ entails ψ. The Upper Desideratum (8) is missed.

On another approach, content is as structured as syntax: an agent's total knowledge at @ is modeled as a set of sentences K_@ in a suitable formal language (say, E L itself). 3 (One may then understand the contents of a knowledge attitude as sentences in a language of thought; but see [13] for a critique.) Kϕ is true at @ exactly when ϕ ∈ K_@. If every set of sentences is a possible body of knowledge, then no significant logical constraints are imposed on the internal logic of knowledge. And any proposed restriction on the class of admissible sets either seems ad hoc, or betrays the leading idea that knowledge contents are as fine-grained as linguistic expressions. The Lower Desideratum (7) is missed. 4 This quandary has generated a wealth of literature we will not assess here. 5

A third approach combines the two above [17,42,61]. Unstructured content informs an implicit knowledge operator: Iϕ is true at w just in case ϕ is true at all worlds epistemically accessible from w. Structured content informs an explicit knowledge operator: Eϕ is true at w just in case ϕ is a member of a given set K_w, containing only sentences true at every world epistemically accessible from w.
Such models are sometimes given a finer structure in awareness logics: each w is assigned a set of sentences A_w, thought of as the sentences the agent is aware of at w. Then, Eϕ holds at w just in case ϕ ∈ A_w and Iϕ is true at w [17]. 6

3 See, for instance, [46] and [40].
4 One might compare a prominent line in the impossible worlds approach, according to which content is taken as a set of worlds, some possible, some impossible. Impossible worlds are treated as 'anarchic', allowing the violation of even basic logical truths. This results in a trivial, or nearly trivial, epistemic logic. See [53] and [52].
5 For instance, we pass over the impossible worlds approach and the non-standard logic approach. See [18, Ch. 9], [53], [63]. Other extant approaches are explicitly concerned to model bounded reasoning and bounded rationality. See [18, Ch. 10].
6 See [55] for a broad overview of this approach and further references.
One may interpret the framework as follows: Eϕ indicates that ϕ is actually known by the agent; I ϕ indicates that in principle the agent could come to know ϕ on the basis of her present knowledge, given sufficient conceptual and computational resources. Actual knowledge exhibits little logical structure; potential knowledge displays logical omniscience.
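The explicit/implicit distinction can be rendered as a toy model (our illustration under simplifying assumptions, not the exact machinery of the cited systems): implicit knowledge is truth throughout the accessible worlds, while explicit knowledge additionally requires membership in a stored stock of sentences.

```python
# I(phi): phi holds at all epistemically accessible worlds (potential knowledge).
# E(phi): phi is moreover among the sentences the agent actually stores.
from itertools import product

atoms = ["p", "q"]
worlds = [dict(zip(atoms, v)) for v in product([True, False], repeat=len(atoms))]
accessible = [w for w in worlds if w["p"]]       # the agent's information: p

stored = {"p"}                                    # sentences explicitly known

formulas = {"p": lambda w: w["p"],
            "p_or_q": lambda w: w["p"] or w["q"]}

def implicit(name):
    return all(formulas[name](w) for w in accessible)

def explicit(name):
    return name in stored and implicit(name)

assert implicit("p_or_q") and not explicit("p_or_q")   # the I-without-E gap
assert explicit("p")                                    # actual knowledge
```

The gap between the two operators is where the framework locates non-omniscience: p ∨ q is implicitly but not explicitly known until some inferential act stores it.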
One can enhance this theory with dynamic operators [60]: a 'realization' operator [# ] ensures that the truth of both Eϕ and E(ϕ ⊃ ψ) entails the truth of [#ψ]Eψ ('ψ is explicitly known upon realizing that ψ'). Technically, [#ϕ]ψ is true at w just in case the addition of ϕ to K w assures the truth of ψ. Since Eϕ and E(ϕ ⊃ ψ) need not entail Eψ, all this goes some way to representing the power of inference to extend knowledge. A candidate dissolution of the problem of logical omniscience emerges: the Lower Desideratum is met by I ; the Upper Desideratum is met by E. To think that a single logic of knowledge should meet both is to conflate actual and potential knowledge.
This, however, underestimates the intrinsic logic of actual knowledge. If Smith does not know that Jones is late then it must be that Smith does not know that Jones is horrendously late. Inferential relations, on the face of it, cannot explain this: if ψ can merely be inferred from ϕ, there is a possible situation -one where a given agent just fails to make the inference -where ϕ is known but not ψ. A better explanation: Jones' being late is part of the content that Jones is horrendously late. By Yablovian 'immanent closure', to know the latter is to have already secured knowledge of the former, no matter how cognitively impaired one (on occasion) is.
Further, the proposal overestimates the intrinsic logic of potential knowledge, on (at least) grounds of the defeasibility of knowledge. An ordinary agent may know ϕ and know ϕ ⊃ ψ, yet inferring ψ is not a path to knowing ψ, but to losing the knowledge that ϕ. Beth knows Theorem 1 on the basis of testimony from her trusted tutor. She also believes that Theorem 2 holds, on the basis of testimony from her trusted professor (the professor is mistaken: a rare occurrence). She then comes to know that Theorem 1 and Theorem 2 are inconsistent, on the basis of painstaking mathematical reasoning. She rightly concludes that her tutor might have been mistaken, so rationally suspends her belief in Theorem 1. She thus loses knowledge of Theorem 1. Before she draws her last inference or proves the inconsistency result, there seems no interesting sense in which she potentially or implicitly knows that Theorem 2 is false on the basis of her present knowledge. Notice that we did not appeal to any irrationality on the part of our agent, though examples along this line are easy to concoct.
This motivates a dynamic epistemic logic that better accounts for the delicacies of content and reasoning, towards meeting the Goal. In the next Section we describe the logical features that should be captured. 7

7 Our Lower and Upper Bounds above, in a sense, don't squeeze very tightly. There are proposals, such as that of [37,38], which resort to an impossible worlds semantics with a sort of logical plausibility metric on worlds, capable of marking a distinction between blatant (e.g., that 2 + 2 = 5) and subtle (e.g., that Fermat's Last Theorem fails) impossibilities. This models moderately competent, but not logically omniscient, agents capable of reasoning to rule out the former, though often incapable of ruling out the latter. The proposal navigates successfully between our two extremes, but the set-up is very different from ours: Jago's work models intermediately competent agents that sit strictly between the Lower Bound of complete logical incapacity and the Upper Bound of logical omniscience. It does not give a content-based semantics for 'x knows that ϕ' for arbitrary agents x, as we claim to do below.

Intuitive Epistemic Logic
We now propose a list IEL of intuitive validities and invalidities, generating defeasible criteria of success for an epistemic logic aimed at the Goal. While building the list, we propose three explanations for its items: (1) the topic-sensitivity of content; (2) the fragmentation of knowledge; (3) the defeasibility of its acquisition. One reason logical omniscience is difficult to deal with is its multifarious origin: arbitrary agents fail to be omniscient for different, irreducible reasons. We submit that (1)-(3) are three core ones.
The full definition of the formal language for IEL comes in Section 3.4, after some nuances have been motivated. This will suffice for now: p, q, r are atomic sentence symbols from set A T . The boolean connectives ¬, ∧, ∨, ⊃ and ≡ are standard. Read '□ϕ' as 'It is apriori that ϕ' and 'Kϕ' as 'The agent knows that ϕ'. Read '[ϕ]ψ' as 'After the agent comes to know ϕ, ψ holds'. 8 One can come to know ϕ via receipt of new empirical information, inference, and so on. We model a single agent, so all mention of the agent is suppressed. □ is a universal modality, quantifying over all possible worlds taken as epistemic scenarios. We model apriori entailment as □(ϕ ⊃ ψ). We use √ to indicate that a schema is intuitively valid, × that it isn't.

Topic-Sensitivity
If it is apriori that ϕ entails ψ, then one knows ϕ only if one knows ψ. In schematic form: (9) × Closure under Apriori Entailment: (□(ϕ ⊃ ψ) ∧ Kϕ) ⊃ Kψ. This is invalid: it is apriori that if 113 guests attended the ball, then the number of guests is prime; but one can know the antecedent without knowing the consequent. Borrowing from [57], p. 88: William III might have known that England could avoid war with France, but he needn't have thereby known that England could avoid nuclear war with France. He didn't have the concept nuclear, and so wasn't positioned to think about nuclear wars at all. (To bolster this: William III could know and therefore believe that England could avoid war. But, presumably, he didn't have beliefs about nuclear wars, and so didn't believe that nuclear war could be avoided, as [57] notes. As knowledge implies belief, we retrieve our example.) (9) is connected to two further invalidities: (10) × Apriori Omniscience: □ϕ ⊃ Kϕ. (11) × Omniscience Rule: If ϕ is valid then so is Kϕ.

8 Or, less tersely: 'After the agent comes to know the proposition expressed by ϕ in the prior discourse context, ψ holds in the posterior discourse context'. The prior context is the context before the proposition in question is added to the agent's body of knowledge; the posterior context is the context after this event.
As in standard dynamic epistemic logic, we interpret [ϕ]ψ with care in view of Moorean phenomena: see Section 3.6.
Cantor's Theorem is apriori, but an agent need not know it. William III failed to know that either England could avoid nuclear war with France, or England could not avoid nuclear war with France. Agents can be denied knowledge because they fail to properly grasp the relevant contents. One can grasp claims about the number 113 without being able to grasp claims about primeness; one can grasp claims about war without being able to grasp claims about nuclear war. Thus, one can know a claim about the former subject matter without knowing anything about the latter subject matter, despite that claim entailing something about the latter subject matter. This is not to say that if one comes to grasp a certain subject matter then one automatically knows every consequence of one's knowledge about that subject matter (at least not when 'grasping' a proposition is understood as merely understanding it, or being positioned to hold an attitude towards it). Presumably, an agent can know there are 113 guests at the ball, fully grasp the subject matter of 'the number of guests is prime' (including the topic of primeness) and still fail to know that the number of guests is prime. One can fully grasp 'the number e is irrational' without knowing it to be true (perhaps one is wondering whether it is true). So our conclusions are modest: subject matter is an aspect of content; hence, if ψ is about something that ϕ is not, then the content of ψ is not included in that of ϕ; hence, an agent can know ϕ without knowing ψ -even if ϕ entails ψ. In contrast, we note an intuitive validity: (12) √ (□ϕ ∧ □(ϕ ⊃ ψ)) ⊃ □ψ. This reflects a basic principle in the epistemology of mathematics: if a conjunction of axioms is apriori and it can be determined apriori that those axioms have a certain consequence, then that consequence is itself apriori. As a restricted quantifier over epistemically possible worlds, apriority is a normal modal operator.
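Topic-sensitivity can be given a toy operational sketch (our illustration, with an invented two-topic example): pair each content with a topic, and let knowledge require both truth throughout the agent's epistemic possibilities and inclusion of the content's topic among the topics her state covers.

```python
# Thick content in miniature: a content is (intension, topic). Knowing it
# requires truth at all epistemic possibilities AND topic inclusion.
epistemic_worlds = [{"guests113": True, "prime": True}]   # the info settles both facts

state_topics = {"guests", "number"}                        # no grasp of 'primeness'

def knows(intension, topic):
    truth_ok = all(intension(w) for w in epistemic_worlds)
    topic_ok = topic <= state_topics                       # topic inclusion
    return truth_ok and topic_ok

knows_113 = knows(lambda w: w["guests113"], {"guests", "number"})
knows_prime = knows(lambda w: w["prime"], {"guests", "number", "primeness"})

assert knows_113          # '113 guests attended' is known
assert not knows_prime    # its apriori consequence is not: topic not covered
```

Here the entailed content fails to be known purely for topical reasons, even though its intension is settled by the agent's information, mirroring the William III and prime-number cases.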

Thickness, Same-Saying
Section 1 proposed that: content is thick (truth conditions plus topic); the Boolean connectives are topic-transparent; one content includes another when there is both truth and topic preservation; and Kϕ ⊃ Kψ is valid just in case ϕ's content includes ψ's content. This predicts the following validities and invalidities: (13) √ K(ϕ ∧ ψ) ⊃ Kϕ. (14) × K¬ϕ ⊃ K¬(ϕ ∧ ψ). (15) × Kϕ ⊃ K¬(¬ϕ ∧ ψ). As for (13): one knows that Jane is a lawyer and a fisherman only if one knows Jane is a lawyer. Knowing the latter is part of knowing the former.
William III knew that France will not go to war but didn't know that France will not go to nuclear war. He knew that England will go to war without knowing that England won't both refrain from war and develop a nuclear arsenal. These judgments seem to hold even if we replace knowledge with what knowledge entails: belief. If (15) were valid, then the intentionality of content would be trivialized: in particular, if the logic of knowledge tracks closure under parthood, then everything one knows would be (partly) about every topic. For example: one knows that Jane is friendly. By (15), part of what one knows is that Jane is not both unfriendly and an admirer of Plutarch. Thus, one's knowledge is partly about Plutarch. (And 'Plutarch' could be replaced with any referring term.) To say that one does not know anything about Plutarch would be nonsensical on this view, since everything one knows is partly about Plutarch.
Our hypothesis also predicts that these patterns are reflected in judgments about indirect speech reports. Mary said that Smith and Jones are both on time; so part of what she said is that Smith is on time. William III said that England will go to war; he did not thereby say that England won't both refrain from war and develop a nuclear arsenal.

Fragmentation
A simple theory of thick content does not tell the whole non-omniscience story. If it did, the following principle would be valid: (Kϕ ∧ K(ϕ ⊃ ψ)) ⊃ Kψ. If an agent's body of knowledge can always be taken as one content, P, and ϕ and ϕ ⊃ ψ are both known, then knowledge of ϕ and ϕ ⊃ ψ must be included in that of P. Hence, P must both entail ψ and include the topics of ϕ and ψ in its subject matter. If this is enough for ψ to be part of P, then ψ must also be known, by immanent closure. However, one may know ϕ and ϕ ⊃ ψ but fail to put two and two together. 9 Jones knows that Mary lives in New York, that Fred lives in Boston and that Boston is north of New York. Yet Jones fails to realize the obvious: that Mary will have to travel north to visit Fred [9, p. 199].
A popular response is that Jones' knowledge state must be fragmented across various 'frames of mind' [17,44,57,66]. He knows certain facts about Mary in one frame of mind; certain facts about Fred in another; and certain facts relating Boston to New York in yet another. He may fail to put these bits of knowledge together. 10 Fragmentation also predicts that the following dynamic principles are invalid: × [ϕ][ϕ ⊃ ψ]Kψ and × [ϕ ⊃ ψ][ϕ]Kψ. One might come to know one premise of modus ponens, then come to know the other, but still fail to know the conclusion by failing to put two-and-two together.

9 What, intuitively, is it to put 'two and two together'? A weak reading understands this as merely contemplating ϕ and ψ at the same time, or being aware of one's knowledge of each at the same time. A strong reading understands it as 'merging', partly or fully, the knowledge of ϕ and ψ into a unified piece of knowledge, as presumably happens when one infers a joint consequence. We are here primarily interested in the latter reading.
10 Classic works on the functioning of human memory [36,39] provide support for fragmentation. Whether memories are stored or, rather, reconstructed from some sparse partial information base, it seems that our mental hard disk is too big a stock for us to perform an exhaustive search (of the relevant base) each time we aim at retrieving information. Human memory works in a partitioned way, storing in fragments. Intersectorial logical dependencies may pass quietly unnoticed.
Notice that fragmentation alone does not tell the whole non-omniscience story: topic-sensitivity is still needed. If contents were mere sets of possible worlds, and given that a knows ϕ just in case ϕ is true at every world in some frame of mind for a, fragmentation would still validate (9), (10) and (11).
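A minimal sketch of a fragmented state (our illustration): knowledge tout court is knowledge in some fragment, so each premise of a modus ponens can be known in its own fragment without the conclusion being known anywhere.

```python
# Fragmented knowledge: the state is a family of fragments (sets of worlds);
# phi is known tout court iff SOME fragment supports it.
from itertools import product

atoms = ["p", "q"]
worlds = [dict(zip(atoms, v)) for v in product([True, False], repeat=len(atoms))]

frag1 = [w for w in worlds if w["p"]]                    # supports p
frag2 = [w for w in worlds if (not w["p"]) or w["q"]]    # supports p -> q
fragments = [frag1, frag2]

def K(phi):
    return any(all(phi(w) for w in frag) for frag in fragments)

assert K(lambda w: w["p"])                     # K p
assert K(lambda w: (not w["p"]) or w["q"])     # K (p -> q)
assert not K(lambda w: w["q"])                 # no K q: the premises are unmerged
```

Within a single fragment, closure behaves classically; the failure arises only across fragments, which is exactly the fragmentationist diagnosis of the Jones case.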

Formal Language
We now properly introduce our formal language, E L . Let F = {1, 2, . . . , n} be a set of n frame symbols. Finitude is needed to keep the corresponding axiomatizations finitary, but does not bear on substantive philosophical points, except for the plausible claim that a finite cognitive agent will not host infinitely many frames of mind. We use i, j, k as metavariables for frame symbols, which index the agent's different frames of mind and are included as a class of atomic formulas in A T : atom i denotes the content of the agent's i-th frame of mind, so that 'i ⊃ p' reads 'the agent's i-th frame of mind implies p'. We use p, q, r as metavariables ranging over members of A T \ F . We use x, y, z as metavariables ranging over the whole of A T . The BNF for E L :

ϕ ::= x | ¬ϕ | (ϕ ∧ ϕ) | (ϕ ∨ ϕ) | (ϕ ⊃ ϕ) | (ϕ ≡ ϕ) | □ϕ | K_i ϕ | [ϕ]ϕ

Read K_i ϕ as 'the agent knows ϕ in frame of mind i', and, as a first pass, define Kϕ := K_1 ϕ ∨ . . . ∨ K_n ϕ. Kϕ says that knowledge tout court is knowledge in some frame of mind or other. Sentence [ϕ]ψ says that if the agent adds the content of ϕ (in context) to any of her frames of mind, then ψ holds. That is: her coming to know ϕ assures that ψ is true. (It does not merely express the following weaker claim: if the agent adds the content of ϕ to all of her frames of mind, then ψ holds.) Now for our third source of intuitive (in)validities.

Defeasibility
If topic-sensitivity and fragmentation were the only sources of non-omniscience, the following would be valid for i ≠ j: Closure under Merge: (K_i ϕ ∧ K_j (ϕ ⊃ ψ)) ⊃ [i : j]Kψ. This says: if in some frame of mind one knows that ϕ implies ψ, and in another ϕ, then one will come to know ψ after updating the first frame of mind with the knowledge contained in the second, or vice versa. Such a merge -or a partial merge -presumably results from inference. Hence closure under merge is a version of the proposal [60, Ch.5] that closure ('logical omniscience') holds when regulated by the action of inference.
But Closure under Merge is invalid. Consider a variation on a counterexample that [41] directs at the weak closure principle of [31, Sect. 2.3]: Edward knows, in different frames of mind, both p (facts from chemistry) and if p then q (if p, then this homeopathic medicine is too diluted to contain active ingredients). Edward deduces q. He needn't thereby come to know q: he is strongly committed to r ('My mother swears that homeopathy works'), which he sees as weighing against q. Rather than gaining knowledge of q, he loses knowledge of p.
Does the counterexample depend on Edward being irrational? Some think that one cannot know p if one has a defeater for p [1]: the agent has justified beliefs in d 1 , . . . , d n , which jointly render p subjectively improbable. If Edward were rational in believing that homeopathy is efficacious on the basis of his total evidence, including his mother's testimony, then he could not have known p in the first place, on the basis of having a defeater.
But can't a perfectly rational agent lose knowledge via competent deduction? In one frame of mind, Mary knows that if there is no mistake on her tax return, then a report to the contrary is mistaken (p ⊃ q). She knows in that frame of mind that her accountant is very reliable on tax matters. She knows that he has reported a mistake on her return. (As it happens, this is the one occasion when he is incorrect.) In a different frame of mind, Mary knows that there is no mistake on her tax return (p). This knowledge is based on the especially fastidious manner in which she completed her return. How should Mary merge? She rightly concludes, given strong reasons to take her accountant as reliable, that she must suspend her belief that there is no error on her return, and so call into question how fastidious her completion really was, or how much she can really trust her fastidious efforts. Thus, she loses the knowledge that there is no mistake, rather than acquiring or preserving the knowledge that report of a mistake must be false.
To generalize, start with two monotonicity principles for epistemic update: Kψ ⊃ [ϕ]Kψ and K_i ψ ⊃ [ϕ]K_i ψ. These say that knowledge cannot be lost as more knowledge is acquired. This is obviously false for agents with fallible memories. Other counterexamples have long been noted. Jane knows that she does not know whether p. Presumably, she no longer knows this after coming to know p. Monotonicity failure for epistemic formulas is thus a feature of paradigm systems in the dynamic epistemic logic tradition. However, [28] emphasizes the defeasibility of rational update in belief revision due neither to memory failure nor to epistemic formulas. Consider the atomic restrictions Kq ⊃ [p]Kq and K_i q ⊃ [p]K_i q, with p and q atoms: even standard dynamic epistemic logic systems validate these. But here's a counterexample. Mary knows that her tax return has no mistakes, on the basis of fastidious checking. But after she comes to know that her generally reliable accountant has reported a mistake in the return, she no longer knows that it contains no mistakes. An intuitive assessment: the newly acquired knowledge alters the space of reasons that bear on Mary's belief system, and if she reasonably suspends the belief that p, then she cannot know that p.

Monotonicity fails even within a single frame of mind: framed atomic monotonicity, K_i y ⊃ [x]K_i y with x and y atoms, is invalid. Our judgment in the previous example seems unaltered if we stipulate that it applies to a single known content. Further, suppose that framed atomic monotonicity is valid. Then the following is, too, since frame symbols are themselves atoms: K_i p ⊃ [i : j]K_i p, where [i : j] is the update of frame i with the content of frame j. But this is surely invalid. Edward knows, in frame of mind i, certain facts about chemistry that refute homeopathy. In another frame j, he knows that his mother swears by homeopathy. His irrational psychology is such that, were he to put two-and-two together, he would suspend belief and so lose his knowledge of chemistry. Now consider the following instance of closure under merge: (K_i p ∧ K_j (p ⊃ p)) ⊃ [i : j]Kp. There seems an easy path from counterexamples to framed atomic monotonicity to ones for the above schema.
One selects a countermodel so that (i) it verifies K_j (p ⊃ p); (ii) it falsifies K_i p ⊃ [i : j]K_i p (as in a counterexample to framed atomic monotonicity); (iii) it falsifies K_j p; (iv) i and j are the only frames in the model. Closure under merge should be rejected, due to the non-monotonicity of knowledge update.
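The shape of such a countermodel can be sketched operationally (our illustration; the merge policy below is stipulated for the example, not derived from the semantics given later):

```python
# Frame i knows p; frame j holds a defeater r. Merging j into i prompts
# suspension of p, so K_i p holds before the merge and fails after it.
worlds = [(p, r) for p in (True, False) for r in (True, False)]
is_p = lambda w: w[0]

frame_i = {w for w in worlds if is_p(w)}   # Edward's chemistry: knows p
frame_j = {w for w in worlds if w[1]}      # mother's testimony r, a defeater

def knows(frame, phi):
    return all(phi(w) for w in frame)

def merge(target, source):
    # Defeasible merge, stipulated for this example: the defeater in `source`
    # makes the agent retreat to the source-worlds rather than intersecting.
    return source

assert knows(frame_i, is_p)                      # K_i p before the merge
assert not knows(merge(frame_i, frame_j), is_p)  # ...but not after [i : j]
```

Any policy on which the merged frame is strictly larger than the intersection of the two frames can model such knowledge loss; intersection itself would instead enforce the rejected monotonicity.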

Successful Update, Moore Sentences
Nevertheless, [ϕ]ψ must sustain two simple validities: Learning p implies success: coming to know the proposition expressed by sentence p results in knowing that p. And though coming to know q might lead to losing the knowledge that p, it cannot lead to knowing that ¬p: this just is the factivity of knowledge. One shouldn't endorse these principles beyond atoms, due to Moorean phenomena: take p ∧ ¬Kp. One may come to know the proposition expressed by this, say, by testimony. But the acquired knowledge is linguistically elusive: coming to know it does not render the sentence K(p ∧ ¬Kp) true. Indeed, ¬Kp is rendered false by this epistemic event. So [p ∧ ¬Kp]K(p ∧ ¬Kp) is invalid. Our cautious interpretation of [ϕ]ψ in footnote 8 matters: if P is the proposition expressed by p ∧ ¬Kp in context, it is trivially true after coming to know P that one thereby knows P in the resulting context. What is not known is the false proposition expressed by p ∧ ¬Kp in the resulting context, since Kp there reports on the updated body of knowledge.

A Logic for Thick, Defeasible, Fragmented Knowledge
We now give our formal semantics for the language E L from Section 3.4, meeting the pattern IEL of validities and invalidities identified in Section 3. In Section 4.2, we draw comparisons with [66].
• W is a non-empty set: possible worlds. The power set of W is the set of intensions relative to W, denoted I.
• @ ∈ W is a designated actual world. I @ denotes the set of all intensions that include @.
• T is a non-empty set: possible topics.
We call μ(P , Q) the update of P given Q. We stipulate that μ(P , Q) ⊆ Q: an update with information Q eliminates every non-Q world. We also stipulate that P ∩ Q ⊆ μ(P , Q): at best, merging P and Q results in knowing their conjunction. It follows that @ ∈ μ(P , Q): intuitively, the update μ(P , Q) assures knowledge of a content with intension Q, so long as the agent's state is sensitive to the topic of that content.
• v : A T → I assigns an intension to each atom in the language (including frame symbols): the truth valuation function. By stipulation, this assignment obeys: Since knowledge is factive, this reflects our intention that we model the knowledge of each frame of mind.
• t : E L → T assigns a topic to each formula in the language, including frame symbols: the topic function. To ensure the topic-transparency of the Boolean connectives, the assignment obeys:

(T, ⊕, μ and t obey few constraints. Finer structure would be needed to explore the nature of subject matter and knowledge update more deeply. Our aim, however, is merely to recover IEL. Coarse representations of topic-sensitivity, fragmentation, and defeasibility will suffice.)

We define topic inclusion, a ≤ b, as a ⊕ b = b. We call intensions thin propositions. A thick proposition is an ordered pair ⟨P, a⟩ of an intension and a topic (cf. [65]). Thus, v and t together assign a thick proposition to each atom, including each frame symbol. We write κ(i) for the pair ⟨v(i), t(i)⟩, and refer to κ(i) as the knowledge state of the agent at @, in her i-th frame of mind. κ is the agent's total knowledge state at @: κ := {κ(1), . . . , κ(n)}.

We define the satisfaction relation via the following recursion (we define the intension of ϕ with respect to M as |ϕ| M := {w ∈ W : M, w ⊨ ϕ}, and omit the subscript M when the model is contextually clear):

There are two natural ways to define truth in a model and validity for this framework. On one hand, we can say that ϕ is true in M (and write M ⊨ ϕ) exactly when M, @ ⊨ ϕ. Then we can say that ϕ is valid just in case it is true in every epistemic update model, and write ⊨ ϕ.
Alternatively, we can say that ϕ is globally true in M exactly when M, w ⊨ ϕ for every w in M. Then we can say that ϕ is globally valid just in case it is globally true in every epistemic update model.
However, the two notions of validity agree with respect to the principles we canvassed in Section 3. We use EUL to refer to the set of all validities of epistemic update logic: {ϕ ∈ E L : ⊨ ϕ}.
The clause for K i ϕ reflects our two-component account of content: to know ϕ is to have a knowledge state that is both sensitive to the topic of ϕ and rules out all incompatible worlds. 11 Note that, on the above semantics, K i ϕ is best read as 'at @, the agent knows that ϕ in her i-th frame of mind'. On this reading, one expects the truth of K i ϕ to be invariant between worlds.
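To fix intuitions, the knowledge clause can be put in code. The following is a minimal sketch, not the paper's formal definition: topics are modelled as frozensets of atoms with fusion ⊕ realised as union (the paper leaves T and ⊕ abstract), and all names (`fuse`, `included`, `knows`) are ours.

```python
# Minimal sketch of (part of) an epistemic update model. Illustrative
# instance only: topics are frozensets of atoms and fusion is union,
# so topic inclusion a ≤ b comes out as set inclusion.

W = {"w0", "w1", "w2"}          # possible worlds; "w0" plays the role of @

# v: intensions of atoms, including the frame symbol "i" (factive: @ ∈ v(i))
v = {
    "p": frozenset({"w0", "w1"}),
    "q": frozenset({"w0"}),
    "i": frozenset({"w0", "w1"}),
}

# t: topics of atoms; frame i is sensitive to p's topic but not to q's
t = {
    "p": frozenset({"p"}),
    "q": frozenset({"q"}),
    "i": frozenset({"p", "i"}),
}

def fuse(*topics):
    """Topic fusion ⊕, here realised as union (an idempotent,
    commutative, associative join)."""
    return frozenset().union(*topics)

def included(a, b):
    """Topic inclusion: a ≤ b iff a ⊕ b = b."""
    return fuse(a, b) == b

def knows(frame, atoms, intension):
    """K_frame ϕ, for ϕ with atomic constituents `atoms` and intension
    `intension`: the frame must grasp ϕ's topic and rule out all ¬ϕ worlds."""
    topic_ok = included(fuse(*(t[x] for x in atoms)), t[frame])
    worlds_ok = v[frame] <= intension
    return topic_ok and worlds_ok

print(knows("i", ["p"], v["p"]))   # True: i covers p's topic and v(i) ⊆ |p|
print(knows("i", ["q"], v["q"]))   # False: i is not sensitive to q's topic
```

Note how the second check fails on the topic condition alone: v(i) ⊆ |q| is irrelevant once the frame lacks q's topic.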
Theorem 2 EUL satisfies IEL: every intuitive validity in Section 3 is in EUL; no intuitive invalidity in Section 3 is in EUL.
The proofs yielding Theorem 2 come in technical Appendix A.

Departures from Yalcin
We treat subject matters abstractly, representing only the mereological structure of topics and the transparency of the logical connectives. Otherwise, we pass in silence over what topics are. Also, we take subject matter as integral to content: since both knowledge states and interpreted sentences have content, both have subject matter. Knowledge that ϕ requires that the topic of ϕ be contained in that of (some fragment of) the agent's knowledge state.
Our setup is thus doubly divergent from [66], who deploys a specifically Lewisian account of subject matters [45] as partitions of the space of worlds, and treats the content of a claim as merely a set of worlds. On his account, a belief state is a partial function β accepting, if so defined, a subject matter π and returning an intension β(π), with the constraint that β(π) is at the resolution of π i.e. β(π) is a union of cells in π . The idea is that β(π), if defined, represents the agent's body of (explicit) belief about subject matter π : a thin proposition at the resolution of the subject matter. If β(π) is not defined, the agent's belief state is not sensitive to subject matter π and so the agent has no (explicit) beliefs about π . Proposition P is (explicitly) believed relative to the agent's belief state β just in case there exists a subject matter π such that (i) P is at the resolution of π and (ii) β(π) is defined and entails P (i.e. every β(π) world is a P world).
As [66] notes, the partition view of subject matter validates undesirable closure principles. (His Sect. 7 offers intriguing remarks on how to address this.) Let P be any necessary, apriori proposition, e.g., Cantor's Theorem. This is true at every possible world. Thus, given any partition π, the intension of P is identical to the union of some cells in π (all of them). Thus, on the account of [66], P is believed with respect to any belief state. A form of omniscience follows: agents believe every apriori necessary proposition.

11 Compare the clause for 'analytic implication' in the conceptivist logic literature (see [19,22]) and the clause for knowledge ascriptions in the awareness logic literature (see [17]). The version of awareness logic closest to our framework is the one in terms of 'awareness generated by primitive propositions' (see [55], p. 81), where an agent is aware of a formula ϕ just in case it is aware of all of its atomic constituents, taken together. We stress the syntactic features of this approach, not shared by ours: awareness is still given by a construction based on atomic formulas, whereas our topic function assigns topics, nonlinguistic items in the semantics, to formulas, with a recursion on the basic non-epistemic operators (negation and conjunction, thus disjunction).
Further, let P&Q and P be the non-identical, contingent propositions expressed, respectively, by p ∧ q and p. Let β be a belief state defined on exactly one partition: the division of logical space into the P&Q worlds and their complement. Suppose the state returns the former set of worlds, given this partition. The content of p ∧ q is thus explicitly believed by an agent with state β, but the content of p is not. So [66] counter-intuitively predicts that one can explicitly believe that Mary arrived at the meeting late and disheveled, without believing that Mary arrived late.
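The counterexample can be run. A sketch under the stated assumptions: four worlds, one partition, a belief state defined only on that partition; the helper names (`at_resolution`, `believes`) are ours, not [66]'s.

```python
# Sketch of a Yalcin-style belief state: a partial function from
# partitions to intensions, with explicit belief as defined in the text.

# Four worlds, distinguished by the truth of P and Q at them.
W = frozenset({"pq", "p", "q", "none"})   # "p" = P true, Q false, etc.

PQ = frozenset({"pq"})          # the P&Q worlds
P = frozenset({"pq", "p"})      # the P worlds

# One subject matter: the partition separating P&Q worlds from the rest.
pi = frozenset({PQ, W - PQ})

# Belief state beta, defined on pi only, returning the P&Q cell.
beta = {pi: PQ}

def at_resolution(prop, partition):
    """prop is a union of cells of the partition (each cell is either
    wholly inside prop or disjoint from it)."""
    return all(cell <= prop or not (cell & prop) for cell in partition)

def believes(beta, prop):
    """Explicit belief: some partition on which beta is defined, with
    prop at its resolution and beta's value entailing prop."""
    return any(at_resolution(prop, partition) and beta[partition] <= prop
               for partition in beta)

print(believes(beta, PQ))   # True: P&Q is explicitly believed
print(believes(beta, P))    # False: P is not at the resolution of pi
```

The second call fails purely because P cuts across a cell of the one partition beta is defined on, exactly the failure of simplification described above.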

Further (In)Validities
We note some further validities and invalidities for which intuition seems muddier: it is neither a clear cost nor a clear benefit that our system yields these patterns. Some validities: One knows that Smith and Jones aren't both on time exactly when one knows that either Smith is late or Jones is late. One knows that Smith and Jones are both late exactly when one knows that neither is on time. Compare the following invalidity: William III might have known that France will go to war without knowing that either they will go to war or develop a nuclear arsenal. Further, some valid restricted closure principles: If one both knows that ϕ and knows that ϕ ⊃ ψ within a unified frame of mind, then one knows that ψ. If one knows p after coming to know p ⊃ q within the same frame of mind, then one knows q within that same frame of mind.

Expressive Surprises
E L is strictly more expressive than the static fragment of the language (call it E L − ) obtained by removing the dynamic operator [i : ϕ]. This makes our framework differ crucially from traditional dynamic epistemic logics for which so-called reduction axioms are available. A standard technique for obtaining a complete axiomatization here is by adding a set of reduction axioms to a complete axiomatization of the underlying static logic. These provide a recursive step-by-step translation of every formula containing dynamic operators to a provably equivalent formula without them. Completeness follows from that of the underlying static logic, plus the soundness of the reduction axioms (see, e.g., [62], Sect. 7.4).
However, this only works when the language with dynamic operators is no more expressive than the language without. This is not the case for E L. Two examples reveal two different reasons for the expressive shortfall of E L −. The examples also motivate a way of extending E L − so that the resulting language is sufficiently expressive to (1) still state the validities and invalidities of interest and (2) admit sound and complete axiomatizations of its dynamic extensions by means of reduction axioms. We relegate the soundness and completeness proofs to Appendix B, but discuss the expressivity examples in some detail here, for we find them philosophically illuminating.
Some notation: for any ϕ ∈ E L, let A T (ϕ) denote the set of all atomic formulas occurring in ϕ, and for a topic fusion operator ⊕, let ⊕A T (ϕ) denote the fusion of the topics of the atoms in A T (ϕ). Consider the two single-world models M and M′ depicted in Fig. 1a and b, respectively. For any ϕ ∈ E L, we let t(ϕ) = ⊕A T (ϕ) and t′(ϕ) = ⊕′A T (ϕ). Finally, take some μ : I @ × I @ → I @ that fits the description in Definition 1, such as μ(P , Q) = P ∩ Q. (The way we define μ is not important here, since the domains of both models are singletons.) Both M and M′ are epistemic update models.
We now prove that, for all ϕ ∈ E L −, M, @ ⊨ ϕ iff M′, @ ⊨ ϕ, by induction on the structure of ϕ; the cases for the elements of A T, the Boolean connectives, and □ are elementary, since they are not concerned with the topic component. Assume inductively that the statement holds for ψ and show it for ϕ := K k ψ.
Case ϕ := K k ψ: Suppose that M, @ ⊨ K k ψ. This means, by the semantics of K k, that t(ψ) ≤ t(k) and M, @ ⊨ ψ. Note that every frame of mind in both models corresponds to the singleton set {@}. We have two cases. Case k = i: Then, t(ψ) ≤ t(k) implies that t(ψ) = t(i) = c. This means, by the definition of t(ψ), that A T (ψ) = {i}. Therefore, t′(ψ) = c′, implying that t′(ψ) ≤ t′(i). Also, M, @ ⊨ ψ implies, by the induction hypothesis, that M′, @ ⊨ ψ. Hence, as i = k, we obtain that M′, @ ⊨ K k ψ.
Case k ≠ i: Since M′, @ ⊨ ψ (by the induction hypothesis), we again obtain that M′, @ ⊨ K k ψ. The other direction follows similarly.
We then conclude that M and M′ are modally equivalent with respect to the language E L −. Now consider the sentence [i : j ]K i p with j ≠ i. We have M, @ ⊨ [i : j ]K i p but M′, @ ⊭ [i : j ]K i p. Therefore, [i : j ]K i p can distinguish M from M′; thus, it cannot be logically equivalent to any formula in E L −. This proves Lemma 3.
The counterexample shows that E L − is not as expressive as E L due to topicality. K i ϕ can express whether the topic of ϕ is included in the topic of the ith frame of mind. But K i ϕ cannot express anything concerning a comparison between the topics of arbitrary sentences ϕ and ψ. On the other hand, part of what [i : ϕ]K i ψ states is that the topic of ψ is included in the topic of the ith frame of mind after having grasped the topic of ϕ.
This motivates an extension, E L + −, of E L − with more general conditional knowledge operators of the form K ϕ ψ. 13 Here, ϕ is not necessarily a frame symbol, but can be an arbitrary sentence of the given language. The semantics are: The conditional knowledge operator K ϕ ψ will help us build the topic component of the canonical model in the completeness proof given in Appendix B.1. Counterexample 2: E L + − is still not sufficiently expressive to obtain reduction axioms for the dynamic operator [i : ϕ]ψ, since it cannot express the precondition of updates, namely truth at the actual state @. To see this, consider the models depicted in Fig. 2.
We show that for all ϕ ∈ E L + −, M, w 2 ⊨ ϕ iff M′, w 2 ⊨ ϕ, but, e.g., the dynamic sentence ¬[i : p]⊥ can distinguish the two pointed models (where ⊥ abbreviates some q ∧ ¬q). The former is easy to see: since every sentence of the language is assigned the same topic, the topic component plays no essential role in this particular case. That is, we have both M ⊨ K ϕ ψ ≡ (ϕ ⊃ ψ) and M′ ⊨ K ϕ ψ ≡ (ϕ ⊃ ψ). Moreover, M, w 2 and M′, w 2 cannot be distinguished within the basic modal language with the global modality □, since they are bisimilar (see, e.g., [...]). We therefore need to further extend E L + − with an operator that expresses 'ϕ is true at the actual state', that is, the actuality operator A [14]. The resulting extended static language of knowledge E L * − is defined recursively by the following grammar: where x ∈ A T. The dynamic language obtained by adding [i : ϕ]ψ to E L * − is denoted by E L *. In Appendix B, we prove soundness and completeness results for E L + −, E L * − and E L *. For the dynamic language E L *, we axiomatize two classes of epistemic update models that represent maximal and minimal knowledge updates. To anticipate: the former is modelled similarly to the so-called public announcements [25,49,50], and the latter constitutes an example of non-monotonic knowledge update.

13 Compare system KRI in [4].

Elusive Logical Knowledge
Our semantics validates: It seems clear that ordinary agents can fail to know that (p ∧ q) ⊃ (¬p ⊃ q) and p ⊃ ¬…¬p are tautologies. One might well describe oneself as knowing that roses are red and violets are blue, but deny knowing that 'Roses are not red' materially implies 'Violets are blue'. One might well describe oneself as knowing that 'Roses are red' is true but hesitate when asked whether its thousand-fold negation is true.
Stalnaker [57,Ch.5] came up with a strategy for these cases. (He applied it to the more controversial issue of elusive mathematical knowledge, to which we come below; we only need a milder version.) Ordinary agents can lack basic knowledge about propositional logic, which may influence what knowledge ascriptions they deploy. But, as Stalnaker pointed out, the ignorance in question is plausibly metalinguistic here, and due to difficulties in parsing the logical form of the relevant sentences. The beginning logic student does not know that certain sentence types must express truths in virtue of their form. She does not know that something she knows (roses are red or violets are blue) can be expressed by a sentence of the form ¬p ⊃ q. If her ignorance is relieved, she has not, after all, learned anything about the colour of flowers: she has learned something about the behavior of certain connectives.

Elusive Mathematical Knowledge I
One who knows ϕ, we claim, knows every consequence of ϕ that preserves subject matter. One might worry that the whole subject matter of a mathematical theory is encapsulated by its axioms. Then one who knows the conjunction of the ZF axioms knows every theorem of ZF set theory.
Whether this has force depends on answers to big questions we cannot address here for reasons of space: What is the nature of subject matter? How should we think about the subject matter of mathematical axioms? How do subject-matter relationships between quantified statements and their instances work? Consider the claim that 0 ≤ S(0) entails S(0) ≤ S(S(0)), where S is the successor function. One view of subject matter [47,54] has it that the topic of a claim is the set of objects and properties mentioned in that claim, or their associated concepts. Then '0 ≤ S(0)' has the same subject matter as 'S(0) ≤ S(S(0))': the set with zero, the successor function and the relation of numerical order as members. According to another view [23], the subject matter of ϕ is a situation-like object: a truthmaker, or fusion of truthmakers. Then the subject matter of '0 ≤ S(0)' may be represented as a structure of objects and relations: ⟨0, less, 1⟩. That of 'S(0) ≤ S(S(0))' may be represented by ⟨1, less, 2⟩. If subject matter inclusion amounts to one situation being part of the other, then the subject matter of S(0) ≤ S(S(0)) is not included in that of 0 ≤ S(0). A worry about mathematical omniscience does not arise.
Or, take the following set theoretic reasoning:  Is the subject matter of (40) included in that of (39), and of (41) included in that of the conjunction of (38) and (39)? Clearly, one can know the assumptions without knowing the conclusion. Indeed, one can know (39) without knowing (40).
Thus a proponent of our account of the logic of ordinary knowledge ascription must defend a theory of subject matter according to which the subject matter of a universal claim need not include the subject matter of all of its instances. This seems plausible, though the issue is murky. Suppose Jane is, in fact, in the Netherlands, but Beth has no reason to think so (usually Jane is in Mexico). Presumably, Beth can believe that everyone in the Netherlands is cold without believing Jane is cold.

Elusive Mathematical Knowledge II
Our semantics validates: On the assumption that ϕ ∨ ¬ϕ is apriori, this follows inescapably from several key theses:

A. Content is just thick content, i.e. truth conditions plus topic.
B. P is part of Q iff P entails Q and P's topic includes Q's topic.
C. Immanent closure: Kϕ entails Kψ when ψ's content is part of ϕ's content.
D. The logical connectives are topic-transparent.
E. All apriori truths have the same truth conditions.

(43) seems implausible for knowledge per se. Suppose that ϕ is a complicated tautology. Then (43) entails that ϕ is known by anyone who knows that ϕ is either true or false. Assume that Goldbach's Conjecture (GC) is true and apriori. Then (43) entails that GC is known by anyone who knows that GC is either true or false.
The meta-linguistic strategy of Section 5.1 finds some application here. Let ϕ be a tautology. An advocate of A-E judges that ϕ and ϕ ∨ ¬ϕ have identical content (call it P ). Further, she can accept that it is easy to know that the sentence ϕ ∨ ¬ϕ is true and, thus, it is often easy to know P . However, she need not accept that, in this case, it is also easy to know that sentence ϕ is true, or a tautology, or expresses P . The value in identifying ϕ as a tautology, she says, lies not in learning its austere content P , but in coming to recognize that ϕ expresses austere content.
However, (43) also demonstrates the limits of the Stalnakerian strategy. 14 It is implausible that 'Goldbach's conjecture' expresses austere content that is easy to know.
So our system needs refinement. B is a natural target. Assuming D, we have identified a plausible counter-example to its right-to-left direction: though GC and GC ∨ ¬GC plausibly have the same truth conditions and topic, it seems invariably odd to claim p is part of p ∨ ¬p. Knowing GC is not part of knowing GC ∨ ¬GC; to say GC ∨ ¬GC is not to (partly) say GC. E is also a natural target. Though GC is apriori, it is prima facie odd to claim that GC ∨ ¬GC entails GC. If this entailment fails, a counter-example exists: a circumstance c at which GC ∨ ¬GC is true but GC is false. Conclusion: GC ∨ ¬GC and GC differ in truth conditions. This involves an important admission: c is a mathematically impossible situation. Hence, this refinement admits 'impossible worlds' (though logically impossible worlds need not be admitted).
We reserve judgment on the best refinement. At any rate, our system is easily amended to reject E. It treats □ as the universal necessity modal. A refinement treats □ as a general S5 necessity modal, by extending our models with a transitive, reflexive, symmetric accessibility relation and interpreting □ by quantifying over the accessible worlds. In this case, (43) is invalidated when □p holds because p is true at every accessible world, K(p ∨ ¬p) holds because some frame of mind includes the topic of p, and Kp fails because no frame of mind contains only p-worlds.
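A toy instance of the S5 refinement can make the invalidating pattern concrete. This is a sketch under illustrative assumptions (two singleton equivalence classes, a single frame of mind; all names are ours):

```python
# Toy S5 refinement: □ quantifies over R-accessible worlds, K over the
# frame's intension plus topic. Illustrative model only.

W = {"w0", "w1"}
R = {("w0", "w0"), ("w1", "w1")}   # an equivalence relation: two cells
p_worlds = {"w0"}                   # |p|: p is false at w1

def box_p(w):
    """□p at w: p is true at every R-accessible world."""
    return all(v in p_worlds for (u, v) in R if u == w)

# One frame of mind: sensitive to p's topic, but its intension also
# contains the inaccessible ¬p world w1.
frame_topic = {"p"}
frame_intension = {"w0", "w1"}

def K(intension, atoms):
    """Kϕ: the frame grasps ϕ's topic and rules out all ¬ϕ worlds."""
    return frame_topic >= set(atoms) and frame_intension <= intension

print(box_p("w0"))          # True:  □p holds at w0
print(K(W, ["p"]))          # True:  K(p ∨ ¬p); its intension is all of W
print(K(p_worlds, ["p"]))   # False: Kp fails; the frame contains a ¬p world
```

At w0 we thus have □p and K(p ∨ ¬p) without Kp, matching the invalidating pattern described in the text.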

Knowledge of Conjunctions
Our semantics validates: Like (43), (44) is implausible for knowledge per se. 15 Assume that Goldbach's Conjecture (GC) is true and an apriori consequence of the (conjunction of the) Peano Axioms (α). (44) entails that GC is known by anyone who knows α conjoined with GC ∨ ¬GC. Like (43), this, at its core, is an inescapable consequence of key theses A-E listed in Section 5.3.
Unlike (43), a refinement based on rejecting E does not seem promising. Here, this amounts to admitting impossible worlds at which α ∧ (GC ∨ ¬GC) holds but GC does not. Such a world is not only mathematically impossible, but logically impossible: the Peano Axioms hold, yet one of their logical consequences does not. As such examples are easily multiplied, dropping E heads down the slippery slope toward the extreme liberalism of [51]: for any set of sentences, there exists a world at which exactly those sentences are true. On this view, content is as fine-grained as syntax, contra the recommendations of Section 2. 16 A more promising strategy is to fold the issue of (44) into a more general one concerning conjunctive knowledge. Consider:

15 Thanks to Alexandru Baltag for pressing us on this point.

16 Compare the logical framework of [53]. Of course, Priestian liberalism has the flexibility to recover any logical desiderata, by merely stipulating a restriction on the class of models that determine logical consequence. The Priestian identifies a content with a set of worlds (i.e., effectively, a set of sets of sentences), holds that Q is part of P just in case P ⊆ Q, and holds that ϕ is true at world w just in case ϕ is in the set of sentences associated with w. Thus, if the full range of Priestian models is admitted, part-hood has no logic. But suppose one wants to preserve simplification: the content of ϕ ∧ ψ must include that of ϕ, as a matter of logic. The Priestian can achieve this by restricting the admissible models to those where: if ϕ ∧ ψ is associated with world w then ϕ is associated with w. The proposal has an obvious worry: it yields the right logic via mere, ad hoc stipulation. It has little explanatory power: the set of admissible models amounts to a mere re-description of the logical desiderata. For a similar critique, see [37] and [5, Sect. 5.3].
Our system validates (45) (contrast (17)). Do ordinary knowledge ascriptions align? Jones knows that Mary lives in New York, knows that Fred lives in Boston and knows that Boston is north of New York, but fails to infer that Mary will have to travel north to visit Fred. Still, we seem comfortable in asserting 'Jones knows that Mary lives in New York and that Fred lives in Boston and that Boston is north of New York'. Thus, one might worry that, via our treatment of fragmentation, our system departs from intuitive epistemic logic by pulling Kϕ ∧ Kψ and K(ϕ ∧ ψ) apart: ordinary discourse attests that 'Jones knows ϕ and ψ' is interchangeable with 'Jones knows ϕ and Jones knows ψ'. In particular, one worries that our system errs in validating (45).
To handle such discourse, a proponent of fragmentation must deny that the ordinary claim 'Smith knows ϕ and ψ' invariably reports K i (ϕ ∧ ψ), i.e. that ϕ ∧ ψ is known in a single frame of mind i belonging to Smith. Rather, it may report K i ϕ ∧ K j ψ for distinct frames i and j. Hence, ascribing knowledge of a conjunction needn't communicate that the conjuncts are part of a unified piece of knowledge. Unification may be pending, subject to outstanding inferences. This would explain our intuitive distaste for (44): it needn't follow from the ordinary claim 'Smith knows both the Peano axioms and that either GC holds or it doesn't' that Smith has drawn the inferences that would unify her knowledge of the conjuncts, and so yield knowledge of GC.
How to systematize this position? A proponent of our system in Section 4 could bite the bullet, by appealing to the looseness of natural language. A bare claim 'Smith knows ϕ and ψ', she might posit, is ambiguous between communicating K i ϕ ∧K j ψ, for some i and j , and the stronger K i (ϕ ∧ ψ), for some i. Framed in terms of our proposal in Section 4: 'Smith knows ϕ and ψ' is ambiguous between Kϕ ∧ Kψ and K(ϕ ∧ ψ).
An appeal to ambiguity will strike some as cheap or otherwise ad hoc. A second strategy rejects B or C and revises the definition of Kϕ so as to invariably express: there exist i and j such that K i ϕ ∧ K j ψ. We sketch a tentative proposal that piggybacks on the logical system introduced in Section 4.1. In what follows, ϕ and ψ are Boolean constructions from ordinary proposition symbols. We delay complications raised by frame symbols, epistemic formulas and update formulas for elsewhere. We recursively define ϕ's basic conjunctive parts, denoted P(ϕ), with: P(ϕ) is thus a model-independent set of sentences. Given an epistemic update model M, P(ϕ) can be associated with a corresponding set of thick propositions: P M (ϕ). Conceptually, the basic conjunctive parts of ϕ are best thought of as the latter set.
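One natural way to realise the recursion in code treats conjunction as the sole part-generating connective, every other Boolean formula being its own sole part. This is an illustrative reconstruction only, not the paper's official definition:

```python
# Illustrative reconstruction of basic conjunctive parts P(ϕ):
# conjunctions are split recursively; any other formula (represented
# here as a plain string) is its own sole part.

from dataclasses import dataclass

@dataclass(frozen=True)
class And:
    """A conjunction node; non-conjunctive formulas are plain strings."""
    left: object
    right: object

def parts(phi):
    """Basic conjunctive parts of phi, as a set of subformulas."""
    if isinstance(phi, And):
        return parts(phi.left) | parts(phi.right)
    return {phi}

print(sorted(parts(And("p", And("q", "r")))))   # ['p', 'q', 'r']
print(parts("p ∨ q"))                           # {'p ∨ q'}
```

On this reconstruction, α ∧ (GC ∨ ¬GC) has the two parts α and GC ∨ ¬GC; the amended clause then demands knowledge of each part in some frame, but never knowledge of GC itself, which is how (44) can fail.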
We leave it ambiguous whether this proposal rejects B with a more refined account of content part, or rejects C with a more refined account of knowledge closure. It may be checked, by induction, that if ψ ∈ P(ϕ) then ϕ's intension is a subset of ψ's intension and t(ψ) ≤ t(ϕ), relative to M. 18 A sample: Now amend the definition of Kϕ from Section 3.4 as follows (with ϕ again ranging over Boolean complexes of ordinary proposition symbols): Kϕ now says that each basic conjunctive part of ϕ is known by the agent in some or other frame of mind. The curious reader may check that this does not validate (44) and (45). Likewise, it may be checked that the pattern of validities and invalidities from Section 3 is preserved. A pressing issue remains: how to extend the proposal to our full language, including when ϕ is an epistemic formula?

Lemma 4 For all epistemic update models M = ⟨W, @, T, ⊕, μ, v, t⟩, all ϕ ∈ E L, i ∈ F, and all Boolean sentences ψ, we have |ψ| M = |ψ| M^{i:ϕ}.
Proof Follows via an easy induction on the structure of ψ.

Proof of Theorem 2: Invalidities:
In figures of counterexamples, white nodes represent possible worlds, black nodes represent possible topics. Valuation and topic assignment are given by labelling each node with atomic formulae. We omit labelling when a node is assigned every element in A T .
Counterexample for (9)-(11) and (15): (We do not need to specify μ, since it is irrelevant for these schemas.) Then, (9)-(11), (15) and (16) are invalidated.

Counterexample for (17)-(19): (Since every ϕ ∈ E L is mapped to the same topic, the topicality constraints are trivially satisfied.) Then, for (17): M 2, @ ⊨ K i (p ⊃ q) and M 2, @ ⊨ K j p (since v(j) = |p|); thus, M 2, @ ⊨ K(p ⊃ q) and M 2, @ ⊨ Kp. However, M 2, @ ⊭ Kq, since M 2, @ ⊭ K i q and M 2, @ ⊭ K j q (as v(i) ⊈ |q| and v(j) ⊈ |q|, respectively). For (18) and (19), consider M 2 with μ(P , Q) = P ∩ Q for all P , Q ∈ I @. Then, (18) and (19) are invalidated.

Counterexample for (20) and (21): These schemas are invalid due to the non-monotonicity of knowledge update. The counterexample M 2 in Fig. 4 with μ(P , Q) = Q for all P , Q ∈ I @ invalidates (20) and (21).

Counterexample for (22)-(27): These principles are invalid due to the non-monotonicity of knowledge update. It is easy to see that counterexamples invalidating (26) and (27) are also counterexamples for (24) and (25), respectively. Moreover, (24) and (25) are special cases of (22) and (23), respectively. Consider the model with v(q) = W, t(ϕ) = a for all ϕ ∈ E L, and μ(P , Q) = Q for all P , Q ∈ I @ (Fig. 5). As every frame of mind is mapped to the same set of possible worlds, we do not need to consider fragmentation. Similarly, since every ϕ ∈ E L is mapped to the same topic, the topicality constraints are trivially satisfied. Then, (26) and (27) are invalidated.

Suppose M, w ⊨ □(ϕ ⊃ ψ) and M, w ⊨ □ϕ. While the former means that |ϕ| ⊆ |ψ|, the latter means |ϕ| = W. Therefore, |ψ| = W, i.e., M, w ⊨ □ψ. We can live with this idealization, as Theorem 2 still holds with respect to epistemic update models with the above constraint. In the remainder of this appendix, all models are implicitly assumed to obey this constraint on t.
We first provide a sound and complete axiomatization for E L * − (Appendix B.1). The completeness result for E L + − follows similarly, so we omit many details and only point out the differences (Appendix B.1.1). The completeness for E L * will follow from the completeness of E L * − via a set of sound reduction axioms (Appendix B.2).

B.1 The (static) Logic of Knowledge Over E L * −
Since E L * − does not have the dynamic operator, the update function μ does not play any role in its interpretation in epistemic update models. We therefore opt for simplicity and interpret E L * − in what we call epistemic models, M = W, @, T, ⊕, v, t , obtained by removing μ from epistemic update models.
To recap, given an epistemic model M = ⟨W, @, T, ⊕, v, t⟩ and w ∈ W, we define the satisfaction relation for the atomic formulae, Booleans, and □ϕ as in Definition 2; for K ψ ϕ and Aϕ we have: Truth in a model and validity are defined as before (see Section 4.1). Soundness and completeness are defined in the standard way with respect to the global notion of validity.
Proof Follows via an easy induction on the structure of ϕ as μ does not play any role in the interpretation of the sentences in E L * − .
(II) Axioms for A and □:
(III) Axioms for i and A:
(IV) Axioms connecting K and □:
Soundness of EL * is a matter of routine validity check, so we skip its proof. The rest of this section is devoted to the completeness proof of EL * , which is presented in full detail.
We say that a set of formulas Γ is EL*-consistent if Γ ⊬EL* ⊥, and EL*-inconsistent otherwise. We omit the tag EL* and say (in)consistent when the logic is contextually clear. A sentence ϕ is consistent with Γ if Γ ∪ {ϕ} is consistent (or, equivalently, if Γ ⊬EL* ¬ϕ). Finally, a set of formulas Γ is a maximally consistent set (or, in short, mcs) if it is consistent and any set of formulas properly containing Γ is inconsistent [8]. 20

Lemma 8 For every mcs Γ of EL* and ϕ, ψ ∈ E L * −, the following hold:

Proof Standard.
Lemma 9 (Lindenbaum's Lemma) Every EL * -consistent set can be extended to a maximally consistent one.
Proof Standard.
Let X be the set of all maximally consistent sets of EL*. Define ∼ on X in the usual way. It is standard to prove that ∼ is an equivalence relation, as □ is an S5 operator. To define the canonical model, we need some auxiliary definitions and lemmas.
In the following proofs, we make repeated use of Lemma 8 in a standard way as in the proof of Lemma 10 and omit mention of it.

Lemma 11
For all Γ ∈ X, ≈ Γ is an equivalence relation. Moreover, for all Γ, Δ ∈ X such that Γ ∼ Δ, we have ≈ Γ = ≈ Δ.
[ϕ] Γ is the equivalence class of ϕ with respect to ≈ Γ.
The topic inclusion relation ≤ c on the canonical model is defined in the usual way.

Lemma 13
The canonical model M c = ⟨W c, @ 0, T c, ⊕ c, v c, t c⟩ is an epistemic model.

Lemma 17 (Truth Lemma) Let Γ 0 be a mcs of EL* and M c = ⟨W c, @ 0, T c, ⊕ c, v c, t c⟩ be the canonical model for Γ 0. Then, for all Γ ∈ W c and ϕ ∈ E L * −, M c, Γ ⊨ ϕ iff ϕ ∈ Γ.
Proof The proof is by induction on the structure of ϕ. The cases for the atomic formulae, Booleans, and □ψ are standard, where the case for □ψ uses Lemma 16.
Proof We prove only completeness, since soundness is a matter of routine validity checking. Suppose ⊬_{EL*} ϕ. This means that {¬ϕ} is consistent and, by Lemma 9, can be extended to a mcs Γ_0. Then, by Lemma 17, we obtain M^c, Γ_0 ⊭ ϕ, where M^c is the canonical model for Γ_0.

B.1.1 The (static) Logic of Knowledge Over EL+−
We now provide a sound and complete axiomatization for the fragment EL+− of EL*− without the actuality operator.

Theorem 19 EL+, given in Table 2, is a sound and complete axiomatization of EL+− with respect to the class of epistemic models.

The proof of Theorem 19 follows similarly to the proof of Corollary 18, except that we need to replace Lemma 10 by the following lemma.
Lemma 20 guarantees the existence of an appropriate actual world @_0 in the canonical model for a maximally EL+-consistent set Γ_0.

B.2 The Logic of Knowledge Update Over EL*
This section presents soundness and completeness results for the dynamic language EL* of epistemic update. We axiomatize two classes of epistemic update models, representing maximal and minimal knowledge updates, respectively. These are the two extreme ways in which an epistemic update function μ (given in Definition 1) can be interpreted: the former is modelled similarly to so-called public announcements [25,49,50], while the latter represents an agent who learns the new piece of information without merging its intension with her prior information state.

Definition 4 (Maximal Epistemic Update Model)
A maximal epistemic update model is a tuple M = ⟨W, @, T, ⊕, μ, v, t⟩ where ⟨W, @, T, ⊕, v, t⟩ is an epistemic model and μ : I_@ × I_@ → I_@ is an update function such that μ(P, Q) = P ∩ Q.

Theorem 21 A sound and complete axiomatization EUL*_max of EL* with respect to the class of maximal epistemic update models is obtained by adding to EL* the set of axioms and rules in Table 3.

Definition 5 (Minimal Epistemic Update Model)
A minimal epistemic update model is a tuple M = ⟨W, @, T, ⊕, μ, v, t⟩ where ⟨W, @, T, ⊕, v, t⟩ is an epistemic model and μ : I_@ × I_@ → I_@ is an update function such that μ(P, Q) = Q.
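The contrast between the two update policies in Definitions 4 and 5 can be illustrated directly, with plain Python sets of worlds standing in for the intensions in I_@:

```python
def mu_max(P, Q):
    # Maximal update (Definition 4): intersect the new content
    # with the prior information state, as in public announcement.
    return P & Q

def mu_min(P, Q):
    # Minimal update (Definition 5): adopt the new content outright,
    # without merging it with the prior information state.
    return Q

prior = {"w1", "w2", "w3"}    # worlds compatible with prior information
learned = {"w2", "w3", "w4"}  # worlds where the new information is true
```

The maximal update retains only the worlds compatible with both bodies of information ({"w2", "w3"}), while the minimal update keeps exactly the worlds of the new content ({"w2", "w3", "w4"}), discarding the prior state.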

Theorem 22 A sound and complete axiomatization EUL*_min of EL* with respect to the class of minimal epistemic update models is obtained by replacing axiom (R_i) in Table 3 by [i : ϕ]i ≡ Aϕ ⊃ (ϕ ∧ (i ∨ ¬i)).

B.2.1 Proofs of Theorems 21 and 22
The proofs of Theorems 21 and 22 proceed by the standard DEL technique of completeness via reduction (or translation), as briefly explained in Section 4.4. For a detailed presentation of completeness by reduction, we refer the reader to [62, Chapter 7.4]. Here |AT(ϕ)| denotes the number of elements in AT(ϕ).
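The reduction mechanism can be sketched concretely. The rewrite rules below are those of public announcement logic, used purely as a familiar stand-in; they are NOT the axioms of Table 3. The point is the general method: every dynamic formula is rewritten to a provably equivalent static one, so static completeness transfers to the dynamic language.

```python
# Toy grammar: atoms (strings), ("not", f), ("and", f, g), ("K", f),
# ("implies", f, g), and ("announce", a, f) for the dynamic operator [!a]f.

def push(a, phi):
    # Push one announcement [!a] through an announcement-free formula,
    # following the PAL-style reduction equivalences.
    if isinstance(phi, str):                      # [!a]p  ~>  a -> p
        return ("implies", a, phi)
    op, *args = phi
    if op == "not":                               # [!a]~f  ~>  a -> ~[!a]f
        return ("implies", a, ("not", push(a, args[0])))
    if op == "and":                               # distributes over conjunction
        return ("and", push(a, args[0]), push(a, args[1]))
    if op == "K":                                 # [!a]Kf  ~>  a -> K[!a]f
        return ("implies", a, ("K", push(a, args[0])))
    if op == "implies":
        return ("implies", a, ("implies", push(a, args[0]), push(a, args[1])))

def translate(f):
    # Eliminate all dynamic operators bottom-up; the number of remaining
    # announcements strictly decreases, so the translation terminates.
    if isinstance(f, str):
        return f
    op, *args = f
    if op == "announce":
        return push(translate(args[0]), translate(args[1]))
    return (op, *map(translate, args))
```

For instance, translate applied to [!a]Kp yields a → K(a → p), an announcement-free equivalent; the complexity measure mentioned above plays the role of the decreasing quantity that guarantees termination.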