The Logic of Framing Effects

Framing effects concern agents' having different attitudes towards logically or necessarily equivalent contents, depending on how those contents are presented. Framing is of crucial importance for cognitive science, behavioral economics, decision theory, and the social sciences at large. We model a typical kind of framing, grounded in (i) the structural distinction between beliefs activated in working memory and beliefs left inactive in long-term memory, and (ii) the topic- or subject-matter-sensitivity of belief, a feature of propositional attitudes which is attracting growing research attention. We introduce a class of models featuring (i) and (ii) to represent, and reason about, agents whose belief states can be subject to framing effects. We axiomatize a logic which we prove to be sound and complete with respect to the class.


Framed Believers
Physicians tend to believe that some lung cancer patients should get surgery with a 90% one-month survival rate. Physicians tend not to believe that such patients should get surgery with a 10% first-month mortality [35, 367]. People often have different attitudes towards differently presented, logically or necessarily equivalent contents. This is called the framing effect [36]. A good deal of behavioral economics takes its cue from it. Unlike Econs, the fully consistent agents of classical economic theory who well-order their preferences and maximize expected utility, humans can be framed: nudged into believing different things depending on how equivalent options are presented to them [58]. People believe more in a certain economic policy when its employment rate is given than when the corresponding unemployment rate is [20]. Early student registration is boosted more by threatening a lateness penalty than by promising an early-bird discount [25]. Framing has momentous social consequences [14, 19, 40, 50]. We need a logic to represent and reason about framed believers.
Standard epistemic logic in the tradition of [32] won't do. It models agents closer to Econs than to humans: perfect thinkers whose belief or knowledge states are fully closed under logical consequence. Hintikkan agents cannot have different attitudes towards logically or necessarily equivalent propositional contents. Such contents are just sets of possible worlds: their identity is given by co-intensionality.
It is often granted, however, that human thought is hyperintensional: one can have an attitude towards some content without having it towards contents implied by, or necessarily equivalent to, it. One is not informed that one's neighbor John is Jack the Ripper, although one has no doubts about John's self-identity. One can know that 7 + 5 = 12 without knowing that xⁿ + yⁿ = zⁿ has no solutions in positive integers for n > 2; or know that p ∨ ¬p is a logical truth without knowing that a long and complicated propositional tautology is. One can desire that one's headache goes away, but one doesn't desire that one has a headache. More controversially, one can know that one has hands without being in a position to know that one is no brain in a vat. One can believe a proposition without believing another that is equivalent to it in classical and in various non-classical logics, because one lacks some concept needed to grasp the latter's content.
What kind of logical non-omniscience is involved in typical framing effects like those in the examples above? It cannot be tied to the a priori/a posteriori distinction, as in the 'John is John' vs. 'John is Jack the Ripper' case: that the survival rate is 90% is neither more nor less a priori than that mortality is 10%. Nor can it be due to computational limitations (it's easy to compute 7 + 5, whereas the proof of Fermat's Last Theorem baffled mathematicians for centuries), or to difficulties in parsing long and syntactically complex sentences (the p ∨ ¬p versus complicated-tautology case): the sentences 'The survival rate is 90%' and 'The mortality (rate) is 10%' are just as easy to parse as each other.
One may not want to assume at the outset that the problem lies with the nature of the attitude itself. Idealized, perfect reasoners can desire that their headache goes away without desiring that they have a headache to begin with. Perhaps, due to looming skeptical paradoxes, they don't know they are not handless, recently envatted brains, although they know they have hands and (they even know that) the former follows from the latter: perhaps logical closure fails even for their knowledge states. (The jury is out on this: see e.g. [29, 39, 46, 55] among the anti-closure camp, [31, 38, 61] among the pro-closure one.) What we are after now, however, is belief. When the case for knowledge not being closed under logical consequence even for perfect reasoners is presented, their being 'ideally astute' is usually defined in terms of belief: they believe all the logical consequences of what they know, and therefore believe [18, 33, 46]. The open issue is whether that is sufficient for the closure of their knowledge states.
Could the kind of logical non-omniscience displayed by agents with framed beliefs be due to a lack of concepts, as when one believes a proposition but not one it entails because one doesn't have some notion required to grasp the latter? To adapt [57, 88]: William III believed that England could avoid war without believing that England could avoid nuclear war. That's because he had no idea what a nuclear weapon might be. He could have no attitude towards propositions whose grasp involved a concept he lacked.
This gets us closer to the phenomenon we're after, but not close enough. Surely human thinkers have a limited repertoire of concepts, but that's not what is involved in typical framing effects. Framed physicians have all the concepts needed to fully grasp both the proposition that the survival rate is 90% and the proposition that the mortality is 10%. In particular, they are fully on top of both the concept survival and the concept mortality by any conceptual or semantic competence test. Still, only the former proposition gets them to believe that the patients should get surgery.
What is going on, framing theorists say [35, 36], is that 'The mortality is 10%', but not 'The survival rate is 90%', makes people think about mortality. The thought that the survival rate is 90% is not about that: on the face of it, it's about survival. Survival and death are deeply connected in anyone's mind. But, cognitively limited as we are, we may not think about mortality, and about much of what comes with it, when we think about survival rates, even if we have the concept mortality firmly in our repertoire. We leave it asleep. In order to think that the mortality is 10%, instead, we have to think about mortality, for that's what the proposition is about. As [62], Ch. 7, has it: an epistemic rift can open up between logically or necessarily equivalent propositions when they differ in subject matter, that is, in what they are about, even for thinkers who have all the relevant concepts.
The framing effects we aim to model, then, involve having different attitudes towards equivalent propositions one perfectly grasps, due to differences in what those propositions are about. We grant that this is not the only way agents can be framed: qua psychological phenomenon, framing may involve all sorts of subtle pragmatic cues and mental associations triggered by word order, emphasis, etc. We will see that approaches in doxastic logic different from ours may be even more suitable for modeling kinds of framing tied to the syntax-sensitivity of agents. But aboutness-based framing is a typical kind of framing, we conjecture, because it has deep roots, on the one hand, in the structure of our belief system and, on the other, in the nature of its contents. Our logic of framing will represent both roots. We introduce them in the following section.

Working Memory, Long-Term Memory, Aboutness
To model the structural features of our belief system responsible for aboutness-based framing, we should look at a widely accepted finding of cognitive psychology: the distinction between working and long-term memory [22, Part II]. Researchers disagree on the nature of both. Qua logical modelers, we don't want our account to be held hostage to the next empirical discovery, or consensus shift, in psychological research. Luckily, we can be neutral on the more controversial issues and take on board the less controversial ones. For instance, working memory (WM), which deals with the processing and short-term storage of information, is at times understood as encompassing a buffer of data at hand for performing cognitive tasks, plus a central executive unit, the locus of attention and cognitive control [2, 3]; at times, as a plurality of modules or structures [6]. For our purposes, we only need its most agreed-upon feature: it has limited capacity. Only a few chunks of information can be retained in WM, and only for a limited amount of time (see the views compared in [44]).
Long-term memory (LTM), instead (or, rather, the declarative part of it: [52, 56]), is the vast knowledge base where cognitive agents store, or encode, their beliefs and knowledge about specific events (so-called episodic memory) as well as general laws and principles (so-called semantic memory). There is a divide in cognitive psychology on whether WM and LTM are separate (contents are stored in LTM and retrieved from it for use in WM), or the former is just the activated part of the latter [1, 15, 44]. We can be neutral on this as well. Now our framed agents, we propose, can have the belief that patients should get surgery with a 90% one-month survival rate activated in their working memory, without having the intensionally equivalent belief that patients should get surgery with a 10% first-month mortality there. However, our agents can have all the relevant information and, in particular, the concept mortality in their (declarative) LTM. Let's call beliefs activated in WM active, and beliefs left asleep in LTM passive. A belief is active when it is available in WM for performing cognitive tasks with it. It is passive when it is stored, or encoded, in the agent's LTM, and left inactive there.
As for the contents of active and passive beliefs: we propose to embed topics or subject matters in the very notion of proposition. Our starting point is the venerable idea of intentionality: mental states such as belief bear aboutness, 'the relation that meaningful items bear to whatever it is that they are on or of or that they address or concern', as Yablo has it [62, p. 1]. This is their topic, or subject matter. The aboutness of an intentional state, such as believing a given proposition, must be in line with that of the proposition which makes for the content of the state. That proposition should not be understood, then, just as a set of possible worlds, but also in terms of its topic or subject matter. When one believes that John is handsome, one's belief is about John's looks insofar as that's what the believed proposition is about.
What is a topic or subject matter? We are familiar with truth conditions and truth sets (typically, sets of possible worlds) as specifying propositional contents. One may be less familiar with topics, although the literature is burgeoning. One way of understanding them links them to questions. Lewis [41] interpreted subject matters as partitions of modal space (see also [34]). Take the number of stars. The associated question is, 'How many stars are there?'. This splits the total set of worlds into equivalence classes. Two worlds are in the same cell when they agree on the answer: all zero-star worlds, all one-star worlds, etc. [62] goes for divisions (admitting overlap) rather than partitions. Others [24] understand topics in terms of truthmakers, interpreted as constructions out of states (think of something like the situations of [7]).
For our logical modeling purposes, however, we don't need to take a stance on what topics are. We just take on board three constraints needed for our logic of framing, following also [9-11, 30]. We will not defend the constraints, because researchers on subject matter generally agree on them, and we piggyback on their agreed-upon results.

1. Logically or necessarily equivalent sentences can differ in their propositional content because of differences in what they are about. E.g., in [62]'s version of subject matter theory, propositional content is hyperintensional because it is given not only by truth set, but also by aboutness. 'Equilateral triangles are equiangular' and 'Either John passes the exam or not' differ in content in spite of sameness of truth set, because only one is about equilateral triangles, and is made true by how such triangles are. Yablo calls mere truth sets 'thin propositions', and truth sets enriched with subject matters 'thick propositions'.

2. The space of topics has a mereological structure [24, 34, 62]. Topics can have proper parts, and distinct topics may have common parts. Mathematics includes arithmetic; mathematics and philosophy share subject matter, having (certain parts of) logic in common. Correspondingly, what a sentence is about can overlap with, or be properly included in, what another one is about. Conjunction and disjunction merge topics: the topic of a conjunction is the same as that of the corresponding disjunction. 'John is tall and handsome' and 'John is tall or handsome' are both about the height and looks of John; their topic is the fusion of the topic of 'John is tall' and that of 'John is handsome'.

3. The Boolean operators are subject-matter-transparent: they add no topic of their own [24, 49, 62]. 'John isn't handsome' is exactly about what 'John is handsome' is about, say, John's looks. It certainly does not speak about negation.

Supposing this is enough qua introduction to WM, LTM, and topics, here are some desiderata our logic of framing should comply with. First, an interpreted sentence expresses a thick proposition. Thick propositions are the contents of both active and passive belief. Belief as such, then, is, as [47, 63] have it, topic-sensitive. We evaluate ascriptions of active belief with respect to the agent's WM, and ascriptions of passive belief with respect to their LTM. In this sense, both WM and LTM are topic-sensitive.
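Before moving on, the second and third constraints can be put schematically. The symbols are ours, assumed for concreteness: t(φ) for the topic of φ, ⊕ for topic fusion, ⊑ for topic parthood.

t(φ ∧ ψ) = t(φ ∨ ψ) = t(φ) ⊕ t(ψ),   t(¬φ) = t(φ),   x ⊑ y iff x ⊕ y = y.

So Boolean structure never takes a sentence's subject matter beyond the fusion of its atoms' topics, while distinct topics may still overlap or include one another via ⊑.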
Next, a realistic framed believer should be non-omniscient with respect to both WM and LTM. Psychologists contrast the limited capacity of the former with the breadth of the latter. However, neither should host all the logical consequences of what it hosts, or display an omni-inclusive conceptual repertoire. In particular, both passive and active belief must be hyperintensional: framed agents are not logically closed with respect to either.
Next, whether WM is separate from LTM or just the activated part of the latter, no information or concept can be in WM unless it is in LTM to begin with: agents cannot have any attitude towards subject matters whose concepts they simply lack. They are as blind to them as William III was to the topic of nuclear war avoidance.
To get an idea of how such desiderata cooperate, consider the following two triplets of group-wise intensionally equivalent sentences.

A. 7 + 5 = 12.
B. xⁿ + yⁿ = zⁿ has no solutions in positive integers for n > 2.
C. Extremal disconnectedness is not a hereditary property of topological spaces.

D. Triangles have three sides.
E. Bachelors are unmarried.
F. Baryons are hadrons with odd numbers of valence quarks.

A.-C. express contents which are necessary, of the same kind of necessity (say, mathematical necessity). Ditto for D.-F. (say, definitional necessity). Our framed believers could find themselves in the following situation with respect to each triplet. (i) They passively believe the content expressed by the first item, A. or D.: they have the relevant information and they are on top of the basic arithmetical or geometrical subject matter involved, so it's all stored or encoded in LTM. They are just not thinking about arithmetic, or about triangles, at the moment. (ii) They actively believe the content expressed by the second item, B. or E.: they have the relevant proposition in their WM because they are currently engaged in thoughts about diophantine equations, or John's marital status. (iii) They neither actively nor passively believe the content expressed by the third item, C. or F.: they just have no idea what topological spaces are and what features they have, or they have never heard of exotic notions from particle physics. They are as blind to them as William III was to nuclear war avoidance.

Before we get to our own proposal to model agents of this kind, we briefly discuss some hyperintensional epistemic logics for non-logically omniscient agents already on the market, to see to what extent they could be used to represent framing.

Some Ideas on the Market
As far as we know, few hyperintensional epistemic logics have aimed at directly representing the difference between WM and LTM. One distinction which may look similar is that between explicit and implicit knowledge and belief, found in awareness logics developed with an eye on the logical omniscience problem [8, 23, 59]. Unawareness is lack of conception, rather than of information: being unaware of a content is understood as not having it present in the mind, or not thinking about it [53, 79-80]. Thus, it seems especially suitable to model framing.
In [23]'s approach, awareness is represented syntactically. One is aware of a sentence when it belongs to a set of formulas, thus of linguistic items: the agent's awareness set. Implicit knowledge and belief get the usual Hintikkan characterization, whereas the corresponding explicit attitudes are defined as the combination of the implicit ones with awareness: an agent has the explicit attitude towards a sentence when it has the implicit one and the sentence belongs to its awareness set. The view has been claimed to mix syntax and semantics, essentially imposing a syntactic filter over a standard Hintikkan semantics [37]. Resorting to syntax, however, allows very fine-grained distinctions: as any bunch of sentences can serve as the awareness set, explicit attitudes obey no non-trivial logical closure properties. This is all good and well if one has a syntactic or sentential conception of belief. However, this is not the conception of belief qua topic-sensitive intentional state we have endorsed above. Such a conception puts a limit on the amount of fine-grainedness one can plausibly assign to belief contents. In our approach, a framed agent who actively believes that John is tall and handsome should actively believe that John is handsome and tall, and should actively believe that John is tall, provided the agent has parsed the syntax of the sentences that express the relevant contents.
Although the sentences we may use in our logical language to ascribe beliefs to the agent are different, the propositional content that John is tall and handsome and the one that John is handsome and tall are intensionally equivalent, and the agent who actively believes either is already thinking about the other's topic, because it is the same topic, say, John's height and looks. That John is tall and handsome entails that John is tall, and one who actively believes the former is already thinking about the topic of the latter, as it is part of that of the former. As Williamson, a subscriber to the semantic conception of belief, has it:

If a positive propositional attitude is closed under at least some forms of logical consequence [...], we may expect it to be closed under a very intimate one such as the [conjunction]-elimination inference from [a conjunction] to [each of its conjuncts]. [...] There is no obstacle to the idea that knowing a conjunction constitutes knowing its conjuncts, just as, in mathematics, we may count a proof of a conjunction as a proof of its conjuncts, so that if [the conjunction] is proved then [each conjunct] is proved, not just provable. [61, 277, 283]

To be fair (and to follow a useful suggestion by a reviewer of our paper), syntactic approaches can easily mimic closure properties of belief, including conjunction elimination and inversion: see e.g. the recent cutting-edge work of [42, 43] on belief bases (classic works on belief bases include [28, 51]). Besides, works in the syntactic-sentential tradition can be useful in modeling some specific kinds of framing, e.g., presentation-order effects. We get a long list of search results on Amazon and we stop when we find an article we judge satisfactory. Had the same items in the list been arranged differently, with the article further down, we might never have bought it. If we take the list as one long conjunction of sentences, order matters [53, 83].
We have already seen, however, that this is not the kind of non-omniscience at work in our typical cases of framing above. In 'The survival rate is 90%' vs. 'The mortality (rate) is 10%', or 'You get an early-bird discount' vs. 'You get a late-registration surcharge', the sentences are no conjunct lists, and the syntax of either member of the pair is just as easy to parse as that of the other. The framed agents whose modeling we're after have correctly parsed the syntax of the relevant sentences and are fully on top of the expressed contents.
One variant of the awareness approach, 'propositionally determined awareness' (see [27, 327], focusing on knowledge, and [53, 84]), puts a constraint on awareness sets: one is aware of a sentence just in case one is aware of all of its atomic constituents taken together. This automatically delivers closure under conjunction elimination and other closure properties, taking the approach closer to the one we present below, but it still features a mixture of syntax and semantics to achieve the result. A properly semantic account of topics or subject matters as non-linguistic items, like the one we are pursuing, should allow for different (atomic) sentences to be assigned, on occasion, the same subject matter, just as they can be assigned, on occasion, the same truth set. Overall, explicit attitudes in the awareness setting do not map very neatly onto our active belief as a topic-sensitive attitude implemented in the agent's WM.
Nor do implicit attitudes map neatly onto our passive belief as implemented in LTM. Logics featuring the explicit/implicit distinction usually take the implicit attitude to be a normal Hintikkan modality. The attitude thus displays full logical omniscience: the agent implicitly knows or believes all logical truths, and all logical consequences of what it knows or believes. The agent has no awareness or conceptual limitations there: it is simply on top of all the relevant propositions. But, as we have remarked, LTM is not like that. Realistic agents don't possess all concepts, and don't have all the logical consequences of their passive beliefs stored or encoded in LTM: passive belief should be hyperintensional, too.
Balbiani et al. [4] present one of the few logical works that explicitly aim at modeling the WM/LTM distinction. It's a powerful framework in the tradition of dynamic epistemic logic [17], modeling the processes through which a non-omniscient agent forms its beliefs via operations of perception and inference in WM, and can store beliefs in LTM and retrieve them from it. Their language has an operator for explicit belief, tied to WM, and one expressing background knowledge, tied to LTM. The latter is a normal modality, and so faces the same issue as implicit knowledge in the awareness setting: the agent is logically omniscient with respect to its background knowledge.
What is more worrying for the prospects of applying the logic to framing is that explicit belief gets a Scott-Montague neighborhood semantics [48, 54]: one explicitly believes a content at a world when its truth set belongs to the neighborhood set assigned to that world. Famously, neighborhood semantics gives weak non-normal modal logics capable of breaking a number of logical closure features for their operators. In particular, one can explicitly believe a conjunction without explicitly believing the conjuncts. If one does want to enforce conjunction elimination for explicit belief, one can, of course, add conditions (specifically, one could close the neighborhoods under supersets: see [48, 81]). But even in the basic neighborhood setting, when two formulas are logically or necessarily equivalent, they will belong to the same neighborhoods. Thus, explicit belief in either will automatically entail explicit belief in the other. This is exactly what shouldn't happen if we want to capture framing for explicit beliefs. Neighborhoods alone don't deliver the right kind of hyperintensionality and non-omniscience.
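In schematic form (our notation, not that of [4]), the neighborhood clause for explicit belief is

w ⊨ Bφ iff |φ| ∈ N(w),

where |φ| is φ's truth set and N(w) the neighborhood family assigned to w. Since co-intensional φ and ψ have |φ| = |ψ|, explicit belief in either automatically carries over to the other, which is precisely the problem just noted.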
We now move on to our own proposal and start making things precise.

Language and Models
Our logic of belief for framed agents is based on a modal language with a countable set Prop of propositional variables. The language is defined recursively by a grammar whose clauses cover the propositional variables, negation, conjunction, an active-belief operator, a passive-belief operator, and an a priori box. We read the active-belief operator as 'the agent actively believes that ...', the passive-belief operator as 'the agent passively believes that ...', and the box as a normal epistemic-a priori modality, 'it is a priori that ...'. (Thus, the box does not represent what our modeled agents know a priori; rather, what logically omniscient Hintikkan agents may know a priori. We will use it in some logical invalidities below, to mark how our agents differ from the logically omniscient ones.) When we talk about both belief operators at once, we use a single neutral notation. We often use p, q, r, ... for propositional variables, employ the usual abbreviations for the propositional connectives (disjunction, the conditional, and the biconditional), and introduce duals for the belief operators and the box. We also define, for each formula, a canonical tautology built from exactly the propositional variables occurring in it; this abbreviation is put to work below, when formalizing validities and invalidities. We follow the usual rules for the elimination of parentheses. For any formula, we will also need the set of propositional variables occurring in it.
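For readability, the schematic glosses below use the following assumed notation: B for active belief, B̄ for passive belief, □ for the a priori modality, and p, φ, ψ for atoms and arbitrary formulas; these glyphs are illustrative stand-ins rather than the paper's own typography. Under that assumption, the grammar just described reads

φ ::= p | ¬φ | (φ ∧ φ) | Bφ | B̄φ | □φ,   with p ∈ Prop.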
We interpret this language on topic-sensitive subset space models, inspired by the original subset space semantics of [45] (see below for a comparison).

Definition 1 (Topic-sensitive Subset Space Model) A topic-sensitive subset space model (model, for short) is a tuple consisting of:

1. a non-empty set of possible worlds;
2. a non-empty, finite collection of non-empty subsets of the set of worlds; each such subset represents the informational content of a memory cell;
3. a non-empty set of topics;
4. a binary idempotent, commutative, and associative operation on topics: topic fusion. We assume unrestricted fusion, that is, the fusion of any two topics is always defined;
5. a topic function assigning a topic to each propositional variable in Prop and a non-empty, finite set of topics to each informational content from item 2. The topic function extends to the whole language by taking the topic of a sentence to be the fusion of the topics of the propositional variables occurring in it;
6. a valuation function mapping every propositional variable in Prop to a set of worlds.
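Gathering the six clauses under assumed notation (W for the worlds, 𝒰 for the family of informational contents, T for the topics, ⊕ for fusion, t for the topic function, V for the valuation; these symbols are our choice, not necessarily the original ones), a model is a tuple

M = ⟨W, 𝒰, T, ⊕, t, V⟩,

where 𝒰 is a non-empty, finite family of non-empty subsets of W; (T, ⊕) is a join semilattice of topics; t assigns a topic t(p) ∈ T to each p ∈ Prop and a non-empty finite set of topics t(U) ⊆ T to each U ∈ 𝒰, and extends to all formulas by

t(φ) = t(p₁) ⊕ … ⊕ t(p_k)   for Var(φ) = {p₁, …, p_k};

and V : Prop → ℘(W) is the valuation.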
In the metalanguage we use variables ranging over possible topics. We define topic parthood out of topic fusion in the standard mereological way: one topic is part of another just in case fusing them yields the latter. Thus, the set of topics ordered by parthood is a join semilattice and a partially ordered set; strict topic parthood is defined as usual, as parthood plus distinctness. Here is what the model represents. The agent's belief system is composed of memory cells. These are chunks of LTM which can be put into (or, if one prefers, activated as) WM, that is, made available for acts of cognitive processing. A memory cell is represented by an informational content from item 2 of Definition 1 indexed by one of the topics assigned to it: the cell is made up of that informational content and that topic.
Memory cells are topic-sensitive: when one is in (or activated as) WM, the agent is actively thinking about its subject matter, and has its informational content available for processing.
Both the collection of informational contents and the sets of topics assigned to them are assumed to be finite in order to make the framework realistic: our cognitive agents can only have finitely many memory cells.

Every informational content is assigned a set of topics, rather than a single topic (Definition 1.5), to capture the key idea that the same informational content can be associated with different topics. Take our triplet of intensionally equivalent, topic-diverging sentences A., B., and C. in Section 2. Intensional equivalence means that they have the same bunch of worlds as their truth set. Call that truth set their common informational content, and let each sentence contribute its own topic. Each resulting content-topic pair can make for a distinct memory cell, differing from the others in topic but not in informational content.
The agent's LTM, then, is easily defined. The information stored in LTM is the information available in all memory cells taken together, and the topic of LTM is the fusion of the topics of all memory cells: the total repertoire of topics or subject matters the agent has grasped. To simplify the notation, we give the LTM information and the LTM topic their own labels; the LTM topic is guaranteed to be a topic of the model, since there are only finitely many memory cells. LTM is at least as large as any single memory cell which can be activated as, or put into, WM, with respect to both information and topic. The agent passively believes, i.e., has in LTM, at least as much as it can actively believe, i.e., activate and process in WM: the latter has quite limited capacity compared to LTM, as cognitive psychology has taught us. This is made precise by spelling out the truth conditions for sentences of our language. We evaluate formulas with respect to world-memory pairs, consisting of a world, representing the actual world, and a memory cell. We denote the set of all world-memory pairs of a model by P (for pair). The working memory WM is the designated world-memory pair with respect to which we evaluate formulas.

Definition 2 (Semantics) Given a model and a world-memory pair, the semantics is defined recursively: a propositional variable is true at the pair iff the world belongs to its valuation; the clauses for negation and conjunction are classical; the box quantifies over all worlds of the model (it is a global modality); an active-belief ascription is true at the pair iff the cell's informational content is included in the truth set of the believed formula and the formula's topic is part of the cell's topic; a passive-belief ascription is true at the pair iff the LTM information is included in the truth set of the believed formula and the formula's topic is part of the LTM topic. We omit the subscript for models when the model is contextually clear, and we cross the satisfaction symbol when a formula fails at a pair. As the semantics has it, the agent actively believes whatever is entailed by their WM with respect to both informational content and topic, and passively believes whatever is entailed by their LTM (ditto).
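To make the evaluation procedure concrete, here is a minimal computational sketch, in Python, of the semantics as reconstructed above. It is our illustration, not part of the paper's formal apparatus: topics are finite sets (fusion is union, parthood is inclusion), a memory cell pairs an information set with a topic, LTM pools the cells' information (read here as intersecting their world sets) and fuses their topics, and the belief clauses follow Definition 2 under these assumptions. All identifiers (Model, holds, Bpas, and so on) are invented for the example.

# A toy topic-sensitive subset space model (illustrative only).
class Model:
    def __init__(self, worlds, cells, valuation, atom_topics):
        # worlds: set of world names
        # cells: list of (info, topic) pairs; info is a set of worlds,
        #        topic is a frozenset of basic topics (fusion = union)
        # valuation: dict mapping each atom to the set of worlds where it is true
        # atom_topics: dict mapping each atom to its topic (a frozenset)
        self.worlds = worlds
        self.cells = cells
        self.valuation = valuation
        self.atom_topics = atom_topics
        # LTM: pooled information (intersection) and fused topic (union) of all cells.
        self.ltm_info = set.intersection(*(set(info) for info, _ in cells))
        self.ltm_topic = frozenset().union(*(topic for _, topic in cells))

    def topic(self, phi):
        # Topic of a formula: fusion of the topics of its atoms
        # (Boolean operators and modalities are taken to be topic-transparent).
        if isinstance(phi, str):
            return self.atom_topics[phi]
        op = phi[0]
        if op in ('not', 'B', 'Bpas', 'box'):
            return self.topic(phi[1])
        return self.topic(phi[1]) | self.topic(phi[2])  # 'and'

    def holds(self, w, cell, phi):
        # Truth at a world-memory pair (w, cell); formulas are atoms (strings)
        # or tuples ('not', f), ('and', f, g), ('B', f), ('Bpas', f), ('box', f).
        info, topic = cell
        if isinstance(phi, str):
            return w in self.valuation[phi]
        op = phi[0]
        if op == 'not':
            return not self.holds(w, cell, phi[1])
        if op == 'and':
            return self.holds(w, cell, phi[1]) and self.holds(w, cell, phi[2])
        if op == 'box':   # a priori: global modality over all worlds
            return all(self.holds(v, cell, phi[1]) for v in self.worlds)
        if op == 'B':     # active belief: WM info entails phi, phi's topic in WM topic
            f = phi[1]
            return self.topic(f) <= topic and all(self.holds(v, cell, f) for v in info)
        if op == 'Bpas':  # passive belief: same clauses, relative to LTM
            f = phi[1]
            return (self.topic(f) <= self.ltm_topic
                    and all(self.holds(v, cell, f) for v in self.ltm_info))
        raise ValueError(op)

# The physician case: 's' and 'm' are intensionally equivalent (same truth set)
# but differ in topic; only the survival topic is active in working memory.
worlds = {'w1', 'w2'}
valuation = {'s': {'w1'}, 'm': {'w1'}}
atom_topics = {'s': frozenset({'survival'}), 'm': frozenset({'mortality'})}
wm_cell = ({'w1'}, frozenset({'survival'}))       # the cell currently in WM
stored_cell = ({'w1'}, frozenset({'mortality'}))  # another cell, asleep in LTM
model = Model(worlds, [wm_cell, stored_cell], valuation, atom_topics)

print(model.holds('w1', wm_cell, ('B', 's')))     # True: active belief, survival framing
print(model.holds('w1', wm_cell, ('B', 'm')))     # False: mortality topic not in WM
print(model.holds('w1', wm_cell, ('Bpas', 'm')))  # True: passively believed, topic in LTM

Running this reproduces the framing pattern: the survival framing is actively believed, the intensionally equivalent mortality framing is not, and yet the latter is passively believed, because its subject matter is stored in LTM.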
As the following lemma shows, only the truth value of an active-belief ascription depends on the chosen memory cell.

Lemma 1 Given a model, two world-memory pairs sharing the same world, and a formula with no occurrences of the active-belief operator, the formula is true at one of the pairs iff it is true at the other.
Proof See Appendix.
However, the agent can actively believe a content relative to one memory cell without actively believing the same content relative to another. That is, given a model and two world-memory pairs built on the same world, an active-belief ascription can be true at one and false at the other, as shown by the following example.
Example 1 Consider the model depicted in Fig. 1. There, an active-belief ascription can fail because the agent does not have the relevant subject matter in working memory; and an ascription can also fail for two reasons at once, namely because the informational content of working memory does not rule out all the relevant non-possibilities, and because the subject matter of the believed sentence is not part of the subject matter of working memory.

Fig. 1 Model in Example 1: white nodes represent possible worlds, black nodes represent possible topics, ellipses represent the informational contents of memory cells. Valuation and topic assignment are given by labelling each node with atomic formulas.
Next comes the definition of logical consequence. A set of premises entails a conclusion iff, for all models and all world-memory pairs in them, if every premise is true at the pair then so is the conclusion; for single-premise entailment we simply write the premise to the left of the entailment sign. As a special case, logical validity is truth at all world-memory pairs of all models, that is, entailment by the empty set of premises. A formula is called invalid if it is not a logical validity, that is, if there is a model and a world-memory pair at which it fails.
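Schematically, with Γ a set of formulas, P(M) the set of world-memory pairs of a model M, and cells written U^a (informational content U, topic a), the definitions just given amount to:

Γ ⊨ φ iff, for every model M and every (w, U^a) ∈ P(M), if (w, U^a) ⊨ ψ for all ψ ∈ Γ, then (w, U^a) ⊨ φ;

validity (⊨ φ) is the special case Γ = ∅, and φ is invalid (⊭ φ) iff some pair in some model falsifies it. Again, the notation is assumed for concreteness.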
The abbreviation defined above, the canonical tautology associated with each formula, will play a role in formalizing validities and invalidities (see footnote 1). Given a model, it is easy to see that this tautology is true at every world-memory pair, whichever formula it is associated with. Thus, we can talk in the language about what topics the agent is actively thinking about in WM, and what topics the agent has grasped and stored in LTM. Prefixing the abbreviation with the active-belief operator (or with its negation) expresses, within the language, statements such as 'The agent has (does not have) the subject matter of the given formula in WM': since the tautology is true everywhere and shares its topic with the formula it is associated with, the active-belief ascription holds just in case that topic is part of the WM topic.
Similarly, prefixing the abbreviation with the passive-belief operator (or with its negation) expresses, within the language, statements such as 'The agent has (does not have) the subject matter of the given formula in LTM' (the argument is analogous).
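One natural way to realize the abbreviation in question, consistent with footnote 1 but again in our assumed notation, is as a canonical tautology over exactly the variables of φ:

⊤_φ := ⋀_{p ∈ Var(φ)} (p ∨ ¬p).

Then ⊤_φ is true at every world-memory pair and t(⊤_φ) = t(φ), so B⊤_φ holds exactly when t(φ) ⊑ a (φ's subject matter is in WM), and B̄⊤_φ holds exactly when t(φ) ⊑ b (it is in LTM).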
Our semantics is structurally similar to the subset space semantics of [45], in that the informational contents of a topic-sensitive subset space model constitute a subset space and we evaluate sentences not at worlds but at world-set pairs. Subset space semantics was originally designed to model an evidence-based notion of absolutely certain knowledge and of epistemic effort. The evaluation pairs within that framework obey the constraint that the actual world belongs to the evidence set (for knowledge is veridical), and are often called 'epistemic scenarios'; the set component represents the agent's current truthful evidence.
Our framework comes with a distinct formalism, however, and a different interpretation of a subset space model's components. We focus on belief rather than knowledge, so the evaluation pairs are tailored accordingly: as belief is not factive, a memory cell does not have to contain the actual world (see also [12] for a different subset space semantics for belief without this constraint). More importantly, our subset spaces and the corresponding evaluation pairs are endowed with topics. This makes the resulting logic of belief hyperintensional, as opposed to the intensional epistemic logics of the traditional subset space semantics (see, e.g., [12, 16, 45, 60]).

Footnote 1: In order to have a unique definition of the abbreviation for each formula, we set the convention that the propositional variables occur in it from left to right in the order in which they are enumerated in Prop. This convention will eventually not matter, since our logic cannot differentiate two conjunctions of different order: conjunctions of the same conjuncts in different orders are provably and semantically equivalent.

Axiomatization, Soundness and Completeness
Table 1 gives a sound and complete axiomatization L of the logic of framed belief over our language.
The notion of derivation in L is defined as usual: a formula derivable from no premises is a theorem of L.
Theorem 2 L is a sound and complete axiomatization of the logic of framed belief with respect to the class of topic-sensitive subset space models: a formula is a theorem of L if and only if it is valid.
Proof See Appendix.
The axioms in Group I give general closure features of belief, both active and passive, for our framed agents. C ensures that beliefs are fully conjunctive: one who believes that John is tall and handsome believes both that John is tall and that John is handsome, and vice versa. Ax1 captures, as desired, the topic-sensitivity of belief: one can actively believe a content only if one is actively thinking about the relevant topic in WM; one can passively believe it only if one has the concepts for the relevant topic stored in LTM. Ax2 states a limited deductive-closure principle for both active and passive belief: if one content follows from another a priori, and one believes the latter, and one is on top of the subject matter of the former, then one does believe the former. Ax3 has it that beliefs are not world-relative.
In Group II, D states a consistency principle for active belief: one who has a content in WM will not also have its negation there. This does not hold for passive belief: a realistic agent may have all sorts of inconsistent beliefs stored or encoded in its LTM. They can stay there insofar as one does not think about them all together, i.e., the inconsistencies are shielded from the focus of attention in WM.
As for Group III, the Inc principle bridging active and passive belief guarantees, as desired, that whatever is activated in WM is available in LTM to begin with.
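Since Table 1 is not reproduced here, the following is one plausible schematic rendering of the principles just discussed, writing ∗ for either belief operator and using the assumed notation; it is our reconstruction and should be checked against the published table.

Group I:  C: ∗(φ ∧ ψ) ↔ (∗φ ∧ ∗ψ);  Ax1: ∗φ → ∗⊤_φ;  Ax2: (□(φ → ψ) ∧ ∗φ ∧ ∗⊤_ψ) → ∗ψ;  Ax3: ∗φ → □∗φ and ¬∗φ → □¬∗φ.
Group II:  D: Bφ → ¬B¬φ.
Group III:  Inc: Bφ → B̄φ.

These sit on top of classical propositional logic and an S5 axiomatization of □.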
Just as important as the validities are the invalidities, as they display the extent to which our framed agents are non-omniscient. We discuss a few prominent ones. The failure of invalidities 1-3 tells us that our agents, unlike logically omniscient Hintikkan agents, don't believe all (a priori) truths and that their beliefs are not closed under strict, a priori implication. 4 says that they also lack the wisdom of negative introspection: they can fail to believe that they don't believe something.
We've saved the most important bit for last: the final three invalidities, 5-7, capture typical framing. Framing a guarantees that agents can have different attitudes towards equivalent formulas. Framing b says that one can have the belief that, e.g., patients should get surgery with a 90% one-month survival rate activated in WM, without having the belief that patients should get surgery with a 10% first-month mortality there, even when one does have their equivalence in one's belief base: one is on top of all the relevant concepts and believes that either is true iff the other is. But all of this is left asleep in LTM: one is just not thinking about it. Framing c says that actively believing a content does not imply actively believing an equivalent one, even when one has the subject matter of the latter in one's LTM. And so, we claim, our models capture precisely the phenomenon of framing we were after.
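For concreteness, here are plausible shapes for the seven invalidities described, in the assumed notation; they are our reconstruction, from the surrounding prose, of the displayed list the discussion refers to.

1. ⊭ □φ → Bφ   2. ⊭ □φ → B̄φ   3. ⊭ (□(φ → ψ) ∧ Bφ) → Bψ   4. ⊭ ¬Bφ → B¬Bφ
Framing a: ⊭ □(φ ↔ ψ) → (Bφ → Bψ)
Framing b: ⊭ (B̄(φ ↔ ψ) ∧ Bφ) → Bψ
Framing c: ⊭ (□(φ ↔ ψ) ∧ B̄⊤_ψ ∧ Bφ) → Bψ

A countermodel for the framing schemas is exactly the physician configuration sketched earlier: two intensionally equivalent atoms with different topics, only one of which is part of the working memory topic.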

Conclusion and Further Work
We have presented a class of models and a logic, sound and complete with respect to the class, to reason about the beliefs of typical framed agents: non-logically omniscient agents who can have different attitudes towards logically or necessarily equivalent contents they perfectly grasp. The two key ingredients we have adopted are (i) the topic-sensitivity of belief states, mirroring that of the propositional contents of such states; and (ii) a distinction between WM and LTM, to model the idea that framed agents can actively think about a content in their WM without thinking about an equivalent one which they nonetheless have in their LTM belief base.
There are three directions for further investigation. First, both active and passive belief are plain, categorical forms of belief. It will be interesting to expand the language and formal semantics to include conditional, topic-sensitive belief.
Second, working memory is properly so called in cognitive psychology because it is the locus of cognitive activity: beliefs are there in order to be manipulated, expanded, and revised via operations of combination, deduction, etc. And there is a tradition of active logic (see e.g. [21]) which models the dynamic process of drawing inferences within the limitations of working memory. Active logic is a wide-ranging framework that can model the dynamics of commonsensical and episodic reasoning across time, whereas our approach to working memory in this paper has been rather static. One possible direction of expansion would then be the addition to our language of dynamic operators in the style of Dynamic Epistemic Logic [5, 17], following a pattern similar to that of [4]. This would allow us to properly model, e.g., how agents operate on their beliefs in the light of new incoming information, before storing the results in LTM.
Third, our subset space-style semantics suggests another natural dynamic extension: one could add the so-called effort modality of subset space logics as, e.g., a working memory improvement operator. The original effort modality is intended to capture a notion of epistemic effort that leads to acquiring more evidence. In our topic-sensitive, hyperintensional logic for reasoning about framed believers, we can read the effort modality applied to a formula as saying that the formula is true in a stronger memory cell (with respect to both information and topic), and interpret it accordingly: there is another memory cell whose informational content is at least as strong, and whose topic is at least as inclusive, at which the formula holds.
So, the effort modality is modeled as a working memory transformation operator that takes a memory cell and gives us another that better approximates the LTM.
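In the assumed notation, and writing ⟨e⟩ for the effort modality (again our symbol), the intended clause comes to roughly:

(w, U_i^a) ⊨ ⟨e⟩φ iff there are U_j ∈ 𝒰 and c ∈ t(U_j) such that U_j ⊆ U_i, a ⊑ c ⊑ b, and (w, U_j^c) ⊨ φ.

The bound c ⊑ b is automatic given the definition of the LTM topic, so the clause indeed describes a move to a cell that is at least as strong informationally and at least as inclusive topically.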

A.2.1 Soundness of L
Soundness is a matter of routine validity checking, so we spell out only the relatively tricky cases.
Proof Let a model and a world-memory pair in it be given. Checking the soundness of the system S5 for the box is standard: recall that the box is interpreted as the global modality. Validity of D is guaranteed since the informational content of every memory cell is non-empty by definition. Validity of Ax1 is an immediate consequence of the semantic clauses for the belief operators and the definition of the topic function. Ax3 is valid since the truth of a belief sentence is world-independent: it is easy to see that, for a fixed memory cell, a belief sentence is either true at every world or true at none. Here we spell out the details only for C, Ax2, and Inc; the validity proofs for the passive-belief versions follow those for the active-belief ones, replacing the working memory cell's information and topic by the LTM information and topic. Inc: Suppose an active-belief ascription is true at a world-memory pair, i.e., the cell's informational content is included in the believed formula's truth set and the formula's topic is part of the cell's topic. By the definitions of the LTM information and the LTM topic, the former is included in every cell's informational content and the latter has every cell's topic as a part. Therefore the LTM information is included in the formula's truth set and the formula's topic is part of the LTM topic, i.e., the corresponding passive-belief ascription is true at the pair.
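For concreteness, here is how these checks go under the reconstructed clauses and the plausible axiom shapes given earlier (our rendering, not the paper's verbatim proof; truth sets |·| are taken relative to the fixed cell).

C: (w, U_i^a) ⊨ B(φ ∧ ψ) iff U_i ⊆ |φ| ∩ |ψ| and t(φ) ⊕ t(ψ) ⊑ a, which, by the semilattice fact that x ⊕ y ⊑ z iff x ⊑ z and y ⊑ z, holds iff (w, U_i^a) ⊨ Bφ ∧ Bψ.
Ax2: if (w, U_i^a) ⊨ □(φ → ψ) ∧ Bφ ∧ B⊤_ψ, then |φ| ⊆ |ψ| (the box is global), U_i ⊆ |φ|, and t(ψ) ⊑ a; hence U_i ⊆ |ψ|, so (w, U_i^a) ⊨ Bψ.
Inc: if (w, U_i^a) ⊨ Bφ, then U_i ⊆ |φ| and t(φ) ⊑ a; since U_b ⊆ U_i and a ⊑ b, we get U_b ⊆ |φ| and t(φ) ⊑ b, i.e., (w, U_i^a) ⊨ B̄φ.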

A.2.2 Completeness of L
We establish the completeness result via a canonical model construction. While the construction of memory cells uses methods presented in [26], the construction of canonical topics is inspired by the canonical model construction for awareness models (see, e.g., [27]).

Auxiliary Definitions and Lemmas:
The notion of derivation in L is defined as usual: a formula derivable from no premises is a theorem of L.

Lemma 3
A number of auxiliary schemas, used in the completeness proof below, are derivable in L.

We say that a set of formulas is L-consistent if no contradiction is derivable from it, and L-inconsistent otherwise. A sentence is L-consistent with a set if adding it to the set yields an L-consistent set. Finally, a set of formulas is a maximally L-consistent set (mcs, in short) if it is L-consistent and any set of formulas properly containing it is L-inconsistent [13]. We drop mention of the logic L when it is clear from the context.

Lemma 4 Maximally L-consistent sets enjoy the standard membership properties of maximally consistent sets.
Proof Standard.
In the following proofs, we make repeated use of Lemma 4 in a standard way and often omit mention of it.
Lemma 5 (Lindenbaum's Lemma) Every L-consistent set can be extended to a maximally L-consistent one.

Proof Standard.

Let the set of all maximally L-consistent sets be given. For each such set, define the set of formulas whose active-belief ascriptions it contains and the set of formulas whose passive-belief ascriptions it contains. By axiom Inc, the former is included in the latter. Moreover, we define a binary relation on maximally consistent sets in terms of the box-prefixed formulas they contain, in the usual canonical-model way.
Since the box is an S5 modality, it is easy to see that this relation is an equivalence relation. For any maximally L-consistent set, we write its induced equivalence class in the usual way. It is easy to see that maximally consistent sets related to a given one all induce the same equivalence class.

Lemma 6
For any two maximally consistent sets related by the canonical relation, the active-belief ascriptions they contain coincide, and so do the passive-belief ascriptions. Therefore, the derived sets defined above also coincide.

Proof The first claim follows from the axioms and rules of S5 for the box. For the belief part, suppose one of the two sets contains a given belief ascription. Then, by Ax3 and S5 for the box, it also contains the corresponding box-prefixed belief ascription; since the two sets are related by the canonical relation, the belief ascription is in the other set as well. The other direction follows similarly, since the relation is symmetric. The coincidence of the derived sets then follows.
Given an mcs of L, fixed throughout, the canonical model for it is a tuple whose components are defined from that mcs as above. To simplify the notation, we suppress explicit reference to the chosen mcs where no confusion arises.
