Mighty Belief Revision

Belief revision theories standardly endorse a principle of intensionality to the effect that ideal doxastic agents do not discriminate between pieces of information that are equivalent within classical logic. I argue that this principle should be rejected. Its failure, on my view, does not require failures of logical omniscience on the part of the agent, but results from a view of the update as mighty: as encoding what the agent learns might be the case, as well as what must be. The view is motivated by consideration of a puzzle case, obtained by transposing into the context of belief revision a kind of scenario that Kit Fine has used to argue against intensionalism about counterfactuals. Employing the framework of truthmaker semantics, I go on to develop a novel account of belief revision, based on a conception of the update as mighty, which validates natural hyperintensional counterparts of the usual AGM postulates.


Introduction
Belief revision theories standardly endorse a principle of intensionality, according to which it is a requirement of rationality on ideal doxastic agents that they do not discriminate between pieces of information that are equivalent within classical logic: whatever they are disposed to (come to or continue to) believe upon receiving the one, they are disposed to believe upon receiving the other, and vice versa. In this paper, I argue that, subject to certain qualifications, that principle should be rejected.
Its argued failure does not require failures of logical omniscience on the part of the agent. It results instead from a view of the update as mighty: as encoding what the agent learns might be the case, as well as what must be. 1 Central to my argument is a puzzle case, obtained by transposing into the context of belief revision a kind of scenario that Kit Fine, in his 'Counterfactuals without Possible Worlds' ([7], see also his [8]), has used to argue against the principle of intensionality for (the antecedents of) counterfactuals.
The structure of the paper is as follows. Section 2 introduces some background assumptions and terminology and gives a more precise statement of the principle of intensionality. Section 3 describes an intensional account of rational belief revision-a form of the popular AGM approach-which is closely related to the standard possible worlds analysis of counterfactuals. Section 4 presents the puzzle cases. Section 5 applies the AGM approach to these cases and argues that it gives the wrong results. Section 6 examines and rejects some prima facie promising ways to respond to the difficulty while retaining intensionality. Section 7 introduces the basic ideas guiding my subsequent development of a truthmaker-based, hyperintensional approach. Section 8 introduces the conception of the update as mighty underlying the approach and explains how it leads to violations of the principle of intensionality. Section 9 formally articulates some constraints on the rational ways of revising by mighty updates. It is shown that the account delivers the intuitively correct verdicts in the problem cases while retaining those components of the AGM account that are not undermined by those examples. Section 10, finally, describes in more general terms the advantages I take the truthmaker-based approach to offer while identifying some open questions for future research to pursue.

Belief Revision and Intensionality
At any given time, doxastic agents like ourselves have a set of beliefs, and they have dispositions to revise their beliefs in certain ways under certain circumstances. For brevity, we shall refer to such dispositions simply as dispositions to revise, and we shall refer to relevant circumstances as occasions for revision. The combination of a total system of beliefs and a total set of dispositions to revise we may call a (complete) doxastic state. Call a complete doxastic state (ideally rationally) permissible iff it could be the doxastic state of an ideally rational doxastic agent (short: ideal agent). We may also call a partial doxastic state permissible iff it has a permissible complete extension. The aim of a theory of belief revision, as I here conceive of it, is to capture the general, logico-structural properties that are held by any permissible complete doxastic state.
It is standard to assume that for any ideal agent and for any possible occasion for revision, the agent's dispositions to revise determine a unique result, i.e. a unique set of beliefs comprising all and only those beliefs the agent would hold after exercising their dispositions. The dispositions to revise of an ideal agent may then be represented by a function mapping every possible occasion for revision to a revised belief system. 2 Let us call occasions for revision dynamically equivalent iff no permissible doxastic state discriminates between them. That is to say, occasions for revision o1 and o2 are dynamically equivalent just in case for any function f representing the dispositions to revise in some permissible doxastic state, f(o1) = f(o2). It is a standard (if often tacit) assumption that one way to characterize a sufficient condition for dynamic equivalence is in terms of a proposition suitably related to the occasion of revision; call this proposition the update. This seems quite plausible. Presumably, the rationality or otherwise of a possible response to an occasion for revision can depend only on what the doxastic agent learns, or what information they receive, on that occasion. If what the agent learns on occasion o1 is the same as what they learn on occasion o2, then rationality seems to require that the agent make the same adjustments to their beliefs in both situations. Assuming that the totality of what the agent learns can always be represented by a proposition, we may take that proposition to be the update and conclude that occasions for revision with the same update are dynamically equivalent. We shall later say more about how to make these ideas precise.
For now, note that given the dynamic equivalence of situations with the same update, for the purposes of a theory of belief revision, we may identify occasions for revision with their associated updates, and we may represent an agent's dispositions to revise as a function mapping each possible update 3 to a revised belief system. We shall also describe possible updates as dynamically equivalent when the associated occasions for revision are. A principle of intensionality for updates may now be stated as follows:

Intensionality: For any possible updates P and Q, if P is logically equivalent to Q, then P is dynamically equivalent to Q.
In this formulation, the principle presupposes a notion of logical equivalence for updates. The most common approach in the literature is to identify updates with sentences of some formal, propositional language. An alternative is to assume a notion of a logically possible world, and to identify updates with the sets of logically possible worlds in which they are true. Logical equivalence for updates is then simply the identity relation, and so adopting this kind of conception of the update will automatically ensure that Intensionality holds. The two approaches may be connected, relative to a chosen formal language, by identifying logically possible worlds with the corresponding maximal consistent sets of sentences of the language. 4

A Possible Worlds Approach
Within a possible worlds framework, we can formulate a prima facie attractive theory of rational belief revision that is closely related to the standard possible worlds account of counterfactuals. 5 On this account of counterfactuals, recall, we assume that for any given possible world w, there is an ordering of all worlds according to their comparative similarity, in some suitable sense, to w. A counterfactual A □→ C is then taken to be true at w iff all those worlds at which A is true which are closest, i.e. most similar, to w are worlds at which C is also true.
Under the analogous approach to belief revision, both belief systems and updates are identified with sets of logically possible worlds. A doxastic state accordingly consists of a set B of possible worlds representing the belief system, and a function mapping any set of possible worlds P (the update) to a set of possible worlds B * P (the revised belief system). It is assumed that in any ideally rational doxastic state, B is non-empty. The logical constraints on the revision function are stated by appeal to an ordering on the worlds, formally similar to the similarity orderings by which counterfactuals are interpreted. 6 Informally, we may think of the ordering as representing the comparative plausibility of the worlds by the lights of the agent, or perhaps the strength with which the worlds are excluded or disbelieved by the agent. The worlds at which the agent's beliefs are true are the most plausible ones, which are not excluded or disbelieved at all. All other worlds are excluded, but some more firmly than others, in which case they are treated as less plausible.

4 Analogous questions of granularity may also be raised with respect to the other component of a doxastic state, i.e. the total system of beliefs. Our focus in this paper, though, will be on the intensionality or otherwise of the update.

5 The classical sources are Stalnaker's [33] and Lewis's [25].

6 The idea of basing belief revision theory semantically on an ordering of worlds is familiar in the literature. My presentation here largely follows Huber [20]. The approach based on plausibility orderings can equivalently be stated in terms of plausibility spheres, just like the Lewis/Stalnaker semantics can be stated in terms of similarity spheres instead of similarity orderings. Modulo the subtleties surrounding the condition (≤4) mentioned below, the present approach is thus equivalent to the sphere-based approach first described by Grove [17]. The same kind of ordering of worlds, under the label of faithful assignments, is used by Katsuno and Mendelzon [22] to prove a representation theorem for AGM revision operations (the counterpart of (≤4) is not needed there, since the authors assume the underlying language to be based on a finite set of propositional letters). For a useful overview of equivalent characterizations of the AGM model, see chapter 4 of Fermé and Hansson's [6] and especially section 4.1, which discusses the various possible worlds based models.
More formally, given a belief system B, we call a plausibility ordering centered on B any two-place relation ≤ on the worlds such that for all worlds w, v, u:

(≤1) if w ≤ v and v ≤ u, then w ≤ u;
(≤2) w ≤ v or v ≤ w;
(≤3) w ∈ B iff w ≤ v for every world v;
(≤4) every non-empty set of worlds S has a member w such that w ≤ v for all v ∈ S.

Informally, w ≤ v means that w is at least as plausible as v. (≤1)-(≤3) ensure that the plausibility ordering is transitive, that any two worlds are comparable in terms of their plausibility, and that all and only the members of B are maximally plausible. The final condition (≤4), as we shall see, is of special importance for our purposes: it ensures that any non-empty set of worlds has a maximally plausible member. The crucial claim is now that for any ideally rational doxastic state with belief system B and revision function *, there exists a plausibility ordering ≤ of the worlds centered on B such that for every possible update P, B * P = {z ∈ P : z ≤ y whenever y ∈ P}: the revision by update P is always the set of the most plausible P-worlds.
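To fix ideas, the definition of revision from a plausibility ordering can be sketched in a few lines of Python. This is only an illustrative toy model, not part of the theory: the two atoms, the four worlds, and the particular rank values are arbitrary choices. A rank function stands proxy for ≤ (lower rank = more plausible), and finiteness makes (≤4) automatic.

```python
from itertools import product

# Worlds: valuations of two atoms p and q, encoded as (p, q) truth pairs.
worlds = list(product([False, True], repeat=2))

# A rank function stands proxy for the plausibility ordering: lower rank =
# more plausible, i.e. w <= v iff rank(w) <= rank(v). Condition (<=4) holds
# automatically because the set of worlds is finite. The rank values are an
# arbitrary choice for illustration.
rank = {(False, False): 0,   # the sole belief world, so B = {(F, F)}
        (True, False): 1,
        (False, True): 2,
        (True, True): 2}

B = [w for w in worlds if rank[w] == 0]

def revise(P):
    """B * P: the most plausible P-worlds under the rank function."""
    P_worlds = [w for w in worlds if P(w)]
    if not P_worlds:
        return []
    best = min(rank[w] for w in P_worlds)
    return [w for w in P_worlds if rank[w] == best]

p = lambda w: w[0]      # the update: p is true
print(revise(p))        # → [(True, False)]
```

Revising by the update that p holds returns the unique most plausible p-world, at which q fails: the agent comes to believe p while retaining the belief that ¬q.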
This account of belief revision is near-equivalent to the popular AGM theory of belief revision ([1]). 7 Within AGM, a belief system is modelled by a set K of sentences of a propositional language L, an update is modelled by a single sentence α from L, and the dispositions to revise are modelled by a function mapping K and any such α to a new belief system K * α. The theory then includes the following eight postulates, to be satisfied by any ideally rational belief set and revision function (where K + α is the closure under logical consequence of K ∪ {α}, and Cn is the operation of closure under logical consequence):

(K*1) K * α = Cn(K * α)
(K*2) α ∈ K * α
(K*3) K * α ⊆ K + α
(K*4) If ¬α ∉ K, then K + α ⊆ K * α
(K*5) K * α is inconsistent only if ¬α is a logical truth
(K*6) If α and β are logically equivalent, then K * α = K * β
(K*7) K * (α ∧ β) ⊆ (K * α) + β
(K*8) If ¬β ∉ K * α, then (K * α) + β ⊆ K * (α ∧ β)

We shall sometimes refer to the last two postulates as the supplementary AGM postulates, and to the other six as the basic AGM postulates. 8 It is known that from any AGM belief set K and revision function *, one can construct a possible worlds interpretation of L and an ordering ≤ on the worlds, centered on the set of worlds at which K is true, which satisfies conditions (≤1)-(≤3) as well as a weakened version of (≤4). 9 Say that a formula α ∈ L expresses a set of worlds (under the given interpretation) iff it is true at exactly those worlds. Then the relevant weakening of (≤4) says that any non-empty set of worlds expressed by some formula in L has a maximally plausible member. Conversely, given a plausibility ordering centered on a set of worlds B ⊆ W and an interpretation of L relative to W, one can define a corresponding AGM-style revision operator for the belief set true exactly at the members of B which satisfies the AGM postulates. 10 For most of the discussion to follow, we may treat AGM and the plausibility-based possible worlds approach as equivalent, and refer to them indiscriminately as the AGM approach or the possible worlds approach.
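The correspondence just described can be spot-checked mechanically. The following sketch (a brute-force check over a hypothetical four-world space, not a proof) verifies that every rank-based revision operator satisfies the possible worlds forms of the two supplementary postulates, standardly labelled (K*7) and (K*8). Since belief sets are represented by their sets of worlds, inclusion between belief sets reverses inclusion between world sets.

```python
from itertools import combinations, product

# In possible-worlds form, the supplementary postulates read:
#   (K*7) (B * P) ∩ Q ⊆ B * (P ∩ Q)
#   (K*8) if (B * P) ∩ Q ≠ ∅, then B * (P ∩ Q) ⊆ (B * P) ∩ Q
worlds = range(4)
subsets = [frozenset(c) for r in range(5) for c in combinations(worlds, r)]

def revise(rank, P):
    """The most plausible P-worlds (lower rank = more plausible)."""
    if not P:
        return frozenset()
    best = min(rank[w] for w in P)
    return frozenset(w for w in P if rank[w] == best)

ok = True
for ranks in product(range(4), repeat=4):    # every rank function on 4 worlds
    rank = dict(zip(worlds, ranks))
    for P, Q in product(subsets, repeat=2):
        if not P:
            continue                          # set aside the impossible update
        r1 = revise(rank, P) & Q
        r2 = revise(rank, P & Q)
        if not r1 <= r2:                      # (K*7) would fail
            ok = False
        if r1 and not r2 <= r1:               # (K*8) would fail
            ok = False
print(ok)  # → True
```

Every one of the 256 rank functions passes for every pair of updates, as the general correspondence result predicts.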

Of Dominos and Matches
In this section, I shall describe some partial doxastic states and argue that they are rationally permissible, i.e. that they have complete extensions that could be the doxastic state of an ideally rational agent. In the next section I will then show that the permissibility of these doxastic states is in conflict with the AGM approach.
For definiteness, imagine a particular doxastic agent, Dom. His relevant beliefs concern an infinite sequence of domino stones, arranged in a single row that extends without end to the right. 11 We assume that each stone can only fall to the right, not to the left. We refer to the stones as s1, s2, . . ., respectively, with s1 being the leftmost stone, and sn+1 the stone immediately to the right of sn. Let Fn be the proposition that stone n fell. We suppose that as a matter of fact, no stone fell.
For each n, Dom believes that ¬F n . Furthermore, he has the following dispositions to revise: If Dom were to learn that F n , then he would come to believe that F m for all m with m ≥ n. At the same time, he would retain the belief that ¬F m for all m with m < n.
It will be helpful to introduce some notation to describe the doxastic state more succinctly. Let us write P ⇒ Q for the claim that Dom is disposed to (come to or continue to) believe that Q upon learning that P, i.e. on any occasion for revision whose update is the proposition that P. 12

9 That only the weakened version of (≤4) is guaranteed is why I said the above account is near-equivalent to AGM. We will see below that this detail is somewhat relevant to our purposes.

10 These results are due to Adam Grove ([17]).

11 The scenario is essentially identical to the first example described by Fine [7], except that Fine's scenario features rocks instead of domino stones. Note that our case strictly requires only that our agent has the relevant beliefs about domino stones, not that these beliefs are accurate. But for presentational purposes it seemed helpful to me to suppose the situation to be as the agent believes it to be.

12 The reason for using this notation is that it helps bring out more clearly the connection to counterfactual logic. This will help relate the present discussion to Fine's, and in particular means that his central proofs carry over to our setting without any changes. The idea of interpreting a conditional in terms of belief revision in this way is again familiar from previous work, most notably in connection with the Ramsey Test; see e.g. [14][15][16] and [26]; see also [6, p. 85f].

Slightly artificially, we write ⇒ Q to say
that Dom believes that Q (since this is like saying that he is disposed to believe that Q upon learning nothing). We can now summarize the partial doxastic state D we have ascribed to Dom as follows:

⇒ ¬Fn for all n
(D.+) Fn ⇒ Fm for any m ≥ n
(D.−) Fn ⇒ ¬Fm for any m < n

Dom's dispositions to revise may be seen simply as reflecting an awareness of the nature of the setup as described above. Since each stone can only fall to the right, knocking over every subsequent stone, if Dom learns Fn he also comes to believe Fn+1, Fn+2, . . ., and accordingly gives up ¬Fn+1, ¬Fn+2, . . .; but since each stone can only fall to the right, he has no reason to give up ¬Fn−1, ¬Fn−2, . . . At first glance, it would therefore appear that the doxastic state is permissible.
At second glance, one might worry that perhaps Dom does have some reason to give up ¬F n−1 upon learning F n . For given that, say, the second stone fell, it is natural to ask what caused it to fall. And since one of the things that may have caused this is the first stone falling, perhaps Dom does have some reason to allow for the possibility that the first stone fell as well. This objection may be avoided, however, by modifying the example, at the cost of some additional complexity.
The difficulty arises because in the case of the dominos, the truth of F n would be responsible for the truth of F n+1 , and ultimately F m whenever m > n. But this is an inessential feature of the example. Indeed, for roughly similar reasons, Fine has already described a version of the example which lacks this feature ([7, p. 224f]). Transposed to the belief revision setting, the case runs as follows. We imagine another doxastic agent, Matt. His relevant beliefs are that there is an infinity of matches m 1 , m 2 , . . . , placed in causal isolation from one another, each of them in an environment maximally conducive to the match lighting upon being struck, but none of them actually struck. Now let S n be the proposition that match m n is struck, let L n be the proposition that match m n lights, and let W n be the proposition that match m n is wet. Let S be S 1 ∧ S 2 ∧ . . ., so S says that each match is struck. Then F n is S ∧ ((W n ∧ ¬L n ) ∧ (W n+1 ∧ ¬L n+1 ) ∧ . . .). So F n says that each match is struck, but every match from n onwards is wet and does not light.
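The two structural facts about this construction that the argument below relies on, namely that each Fn logically entails Fn+1, and that Ln logically entails ¬Fn, can be verified by truth table in a finite truncation. A sketch, where the truncation at N = 3 matches is an arbitrary choice:

```python
from itertools import product

# Finite truncation of the match case with N = 3 matches. A world settles,
# for each match i, whether it is struck (S), lights (L), and is wet (W).
N = 3
triples = list(product([False, True], repeat=N))
worlds = [(S, L, W) for S in triples for L in triples for W in triples]

def F(n, world):
    """F_n, truncated: every match is struck, and every match from the
    n-th onwards is wet and does not light."""
    S, L, W = world
    return all(S) and all(W[m] and not L[m] for m in range(n - 1, N))

# F_n contains F_{n+1}'s conjuncts among its own, so F_n entails F_{n+1}:
print(all(F(3, w) for w in worlds if F(2, w)))        # → True

# L_1 entails ¬F_1, since F_1 contains ¬L_1 as a conjunct:
print(all(not F(1, w) for w in worlds if w[1][0]))    # → True
```

Both checks range over all 512 valuations of the nine atoms, so they establish the entailments for the truncated case outright; the infinitary case holds for the same structural reason.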
Note that for all n, F n contains F n+1 as a conjunct. So in this version of the case, the dispositions ascribed in (D.+) are simply dispositions to believe conjuncts of conjunctive information received, and therefore clearly permissible. So let us turn to the dispositions ascribed in (D.−), and let us consider the instance F 2 ⇒ ¬F 1 . Note first that since Matt believes each match to be in an environment maximally conducive to its lighting upon being struck, learning that the first match is struck (S 1 ) would give Matt good reason to believe that match 1 lights (L 1 ). So it seems rational for Matt to believe that L 1 upon learning that S 1 . Now F 2 is the conjunction of S 1 with some information exclusively about other matches, believed by Matt to be causally isolated from match 1. None of this additional information seems in any way to undermine the support that S 1 -match 1 is struck-offers for L 1 -match 1 lights. 13 So it also seems rational for Matt to believe that L 1 upon learning that F 2 . But now note that L 1 logically entails ¬F 1 , since F 1 contains ¬L 1 as a conjunct. So since it seems clearly rational for Matt to form the belief that L 1 upon learning F 2 , and L 1 logically entails ¬F 1 , it also seems clearly rational for Matt to retain the belief that ¬F 1 upon learning that F 2 . In other words, learning that F 2 not only provides no reason for Matt to give up the belief that ¬F 1 , it gives Matt additional support for that belief. Parallel considerations apply with equal force to the other instances of (D.−). I conclude that at least in this more complicated variant, the beliefs and dispositions to revise we have ascribed to Matt are jointly rationally permissible. Now consider the infinite disjunction F 1 ∨ F 2 ∨ . . ., and let us use F to abbreviate it. Assuming that the proposition that F is also a possible update, how should Matt be disposed to revise his beliefs upon learning that F ? 
It is clear that there has to be some number n such that it is permissible for Matt to give up the belief that ¬F n upon learning that F . After all, if Matt were to retain each belief that ¬F n and add the belief that F , the resulting belief system would be inconsistent. We can also say something more specific, it seems to me. For it is hard to see how giving up ¬F n could be permissible for Matt for the case of, say, n = 17 but not for n = 1. So it also seems safe to assume that it is permissible for Matt to give up the belief that ¬F 1 upon learning that F .
We may summarize the central results of this section as follows: There is some permissible doxastic state which extends D and which, for some n, includes the disposition to give up the belief that ¬F n upon learning that F . In particular, there is some permissible doxastic state extending D and including the disposition to give up the belief that ¬F 1 upon learning that F .
In the next section, I will show that these results conflict with the AGM approach. Before that, let me address a kind of dismissive attitude towards these scenarios that some readers may be tempted to adopt. Clearly, both the domino- and the match-example are somewhat unrealistic. There are no infinite sequences of domino stones, and no infinite collections of matches in causal isolation from one another. So what, one might therefore ask, if our theory of belief revision has implausible implications with respect to such bizarre and silly cases? What matters, surely, is how belief systems relevantly similar to our own may be rationally revised, and the problematic kinds of doxastic states do not seem very similar to our own! In response, it should be noted, firstly, that the specific subject matter of the above examples is of course not essential to the problem that they give rise to. All we need to generate that problem is an instance of the general structure exhibited by the cases of the dominos and the matches. So the objection can succeed only if all instances of this structure are silly. But that is not so.

13 Perhaps one might object that even though the matches are assumed to be causally isolated from one another, since according to Fn, the fates of matches n and onwards are so similar, it is still rational to suspect some kind of systematic explanation, which could then also suggest that earlier matches suffered the same fate. But it is not even necessary to suppose that events in the different regions are similar in this way. All we need is that Sn always says that some 'trigger'-event occurred, that Ln says that the corresponding standard result occurred, and that Wn says that some corresponding 'blocker'-condition obtained. (Cf. [7, p. 225], see also [8, p. 35, fn. 1].)

As Fine ([8, p. 36]) points out, one way
to obtain more realistic instances is by considering, instead of infinite sequences of objects, infinite sequences of values of some quantity capable of continuous change, or at least taken by the agent to be so capable. Thus, we may consider an agent's beliefs concerning the flight of a missile believed to possess an automatic mechanism for correcting any deviations from its intended path (the example is Fine's). The propositions F 1 , F 2 , . . . are now to the effect that the missile deviated by 1 inch off course, that the missile deviated by 1/2 inch off course, . . . Since any deviation occurs in a continuous way, upon learning F n the agent will believe F m whenever m ≥ n. But they may rationally retain the belief ¬F m whenever m < n, taking the mechanism to have prevented any greater deviation.
Another idea, more promising in our context of belief revision than in Fine's context of counterfactuals, is to construct an example using actual infinite sequences of abstract objects, such as the sequence of the natural numbers. What we would need is an example of a property with respect to which an ideal agent might initially believe that no number has it, and be disposed, upon learning that n has the property, to form the belief that m has it for all m ≥ n, and to retain the belief that m does not have it for all m < n. Indeed, we might approximate the structure of the match example by letting Fn say that (a) for every number, attempts have been made to prove that it has the property, and (b) a proof has been found for each m ≥ n. The supposition that this kind of situation could arise for some complicated number-theoretic property does not appear problematically unrealistic.
Secondly, the objection overestimates the role that infinity plays for the problem. As we shall shortly see, the relevant assumptions of the intensional approaches yield highly counter-intuitive results even in application to related, finitary contexts. Roughly speaking, the role of infinity is only to turn counter-intuitive results into contradictory ones. Relatedly, the approach I shall eventually propose deviates from its intensional rivals even in finitary contexts, and may be argued to be superior to them even on the basis of considering only finitary contexts.

Against the Possible Worlds Approach
We shall now show that the AGM approach is incompatible with the results of the previous section. To start, let us assume that Matt is disposed to give up the belief that ¬F1 upon learning that F:

(X1) it is not the case that F ⇒ ¬F1.

This is rationally incompatible, given AGM, with the following instances of (D.+) and (D.−):

(1) F1 ⇒ F2
(2) F2 ⇒ ¬F1

To see this, note first that under the AGM approach, for any propositions P and Q, P ⇒ Q holds iff B * P entails Q, i.e. iff B * P ⊆ Q. So (X1) implies that B * F ⊄ ¬F1. By the definition of revision in terms of the plausibility ordering, B * F comprises exactly the maximally plausible F-worlds, so (X1) requires that some maximally plausible F-world be an F1-world. Call that world w. By (1), every maximally plausible F1-world is an F2-world, so w is also an F2-world. By (2), every maximally plausible F2-world is a ¬F1-world, so among the F2-worlds, some world v must be more plausible than w. But every F2-world is also an F-world, so v is a more plausible F-world than w, contrary to the assumption that w is a maximally plausible F-world. Since we found it to be rationally permissible for Matt to satisfy (X1), this is a problem.
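The incompatibility can also be confirmed by brute force. The following sketch enumerates every rank function on the four worlds that settle F1 and F2 (for the purposes of the check, F is read as F1 ∨ F2 and the remaining disjuncts are ignored) and finds none validating (1), (2) and (X1) together:

```python
from itertools import product

# Worlds are the four valuations of F1 and F2; a rank function encodes the
# plausibility ordering (lower rank = more plausible).
worlds = list(product([False, True], repeat=2))   # (F1, F2) pairs

def best(rank, P):
    """The maximally plausible P-worlds under the rank function."""
    Pw = [w for w in worlds if P(w)]
    m = min(rank[w] for w in Pw)
    return [w for w in Pw if rank[w] == m]

F1 = lambda w: w[0]
F2 = lambda w: w[1]
F  = lambda w: w[0] or w[1]

found = False
for ranks in product(range(4), repeat=4):                # every rank function
    rank = dict(zip(worlds, ranks))
    if (all(F2(w) for w in best(rank, F1))               # (1)  F1 => F2
            and all(not F1(w) for w in best(rank, F2))   # (2)  F2 => not F1
            and any(F1(w) for w in best(rank, F))):      # (X1) not F1 given up
        found = True
print(found)  # → False
```

No ordering survives all three constraints, matching the informal derivation: (1) forces the best F1-worlds below the F1-and-not-F2 worlds, (2) forces some not-F1 F2-world below those, and that world then beats every F1-world for the title of best F-world.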
Moreover, by similar reasoning we can show that any instance of (Xn), the assumption that it is not the case that F ⇒ ¬Fn, is rationally incompatible, under the possible worlds approach, with (D.+) and (D.−); for there is no ordering of the worlds that satisfies the conditions (≤1)-(≤4) on plausibility orderings and that validates the dispositions in (D.+) and (D.−). In particular, any such ordering that respects (D.+) and (D.−) is such that there is no maximal F-world. For suppose w is an F-world, and let m be some number such that w is an Fm-world. Suppose for contradiction that w is a maximal F-world. Then in particular, w is a maximal Fm-world. By (D.+), every maximal Fm-world is also an Fm+1-world. By (D.−), no maximal Fm+1-world is an Fm-world. So w is not a maximal Fm+1-world, and hence not a maximal F-world after all.
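The forced descent in plausibility can be made vivid in a finite truncation. The sketch below restricts attention, purely for illustration, to the candidate worlds the setup leaves open (an assumption of the sketch, not of the theory): w_0, at which no stone fell, and w_k, at which exactly stones k through N fell. Any ordering of these candidates respecting the truncated (D.+) and (D.−) must rank each w_{n+1} as strictly more plausible than w_n, so revision by the disjunction F gives up only ¬F_N:

```python
# Finite truncation with N stones, restricted to the candidate worlds
# w_0 ("no stone fell") and w_k ("exactly stones k, ..., N fell").
N = 5
candidates = list(range(N + 1))                  # 0 encodes w_0, k encodes w_k

def falls(world, stone):
    """Is F_stone true at the candidate world?"""
    return world != 0 and stone >= world

rank = {0: 0}                                    # belief world: no stone fell
rank.update({k: N + 1 - k for k in range(1, N + 1)})   # w_1 least plausible

def revise(P):
    Pw = [w for w in candidates if P(w)]
    m = min(rank[w] for w in Pw)
    return [w for w in Pw if rank[w] == m]

# This ordering validates the truncated (D.+) and (D.-): revising by F_n
# yields exactly w_n, at which F_m holds iff m >= n.
for n in range(1, N + 1):
    assert revise(lambda w, n=n: falls(w, n)) == [n]

# Revising by F = F_1 v ... v F_N then selects w_N: AGM permits giving up
# only the "last" belief, however large N is.
print(revise(lambda w: w != 0))  # → [5]
```

This is the counter-intuitive finitary verdict alluded to earlier: in the finite case AGM forces Dom to give up precisely ¬F_N and nothing else; in the infinite case there is no last stone, so no permissible verdict remains at all.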
Alternatively, as Fine shows ([7, pp. 244ff]), we can also derive all instances of F ⇒ ¬Fn from (D.+) and (D.−) using only a small set of inference rules, all of which are valid under the possible worlds approach. Those appealed to below are: Substitution (from P ⇒ R, infer Q ⇒ R whenever P and Q are logically equivalent); Entailment (P ⇒ Q whenever P logically entails Q); Conjunction 14 (from P ⇒ Q and P ⇒ R, infer P ⇒ Q ∧ R); and Disjunction 15 (from P ⇒ R and Q ⇒ R, infer P ∨ Q ⇒ R). The complete proof of this result is fairly long and complicated, so I shall refrain from reproducing it here. To see where the reasoning of the proof might best be resisted, and thus which rule might best be given up, it is more helpful to present it in more informal terms. And since the match case is rather complex and hard to think about, in commenting on the various steps, I will use the domino example again. Let me first explain why giving up ¬F1 in response to F would involve a violation of the rules. We may divide the reasoning into three main steps.
The first step is an application of Substitution, taking us from F2 ⇒ ¬F1 to (F1 ∧ F2) ∨ (¬F1 ∧ F2) ⇒ ¬F1, since F2 is logically equivalent to (F1 ∧ F2) ∨ (¬F1 ∧ F2). In the domino case, this says that given that Dom is disposed to retain the belief that the first stone stands given the information that the second fell, he must also be disposed to retain that belief given the information that either the first and second, or not the first but the second stone fell.

14 Fine also has an infinitary version of this rule, allowing us to infer P ⇒ Q1 ∧ Q2 ∧ . . . from P ⇒ Q1, P ⇒ Q2, . . . Using this rule we could show that conforming to the rules would lead, in the case at hand, to Matt's believing an outright contradiction upon learning F. But it seems bad enough if Matt ends up with an unsatisfiable belief system, accepting an infinite disjunction while rejecting each disjunct. For this result we only need the finitary rule.

15 As Fine points out, we actually require only a weaker rule with the added condition that P and Q be logically exclusive. The difference is not essential for present purposes, so for simplicity, I've here stated the stronger one.
The second step is to infer from (F1 ∧ F2) ∨ (¬F1 ∧ F2) ⇒ ¬F1 that F1 ∨ (¬F1 ∧ F2) ⇒ ¬F1: Dom must also retain the belief that the first stone stands upon learning that either the first stone fell or not the first but the second fell. The justification for this is that Dom is disposed to form the belief that F2 given the information that F1. Because of this, for Dom, learning F1 and learning F1 ∧ F2 effectively come to the same thing, and the same is then true for learning F1 ∨ (¬F1 ∧ F2) and learning (F1 ∧ F2) ∨ (¬F1 ∧ F2). The third step is to infer F ⇒ ¬F1 from F1 ∨ (¬F1 ∧ F2) ⇒ ¬F1. Here, the idea may be described as follows. Learning F presents Dom with a choice: he needs to pick some stone sn as the left-most stone for which to give up the belief that ¬Fn. Now F1 ∨ (¬F1 ∧ F2) says that either s1 or s2 is the first stone to fall. So learning F1 ∨ (¬F1 ∧ F2) presents Dom with a related choice: he needs to pick some stone sn ∈ {s1, s2} as the left-most stone for which to give up the belief that ¬Fn. Now the point is that if Dom does not pick s1 among the options s1 and s2, he cannot rationally pick s1 among the options s1, s2, . . . Put another way, that F1 ∨ (¬F1 ∧ F2) ⇒ ¬F1 means that Dom prefers the scenario in which s2 is the left-most stone to fall to the scenario in which s1 is the left-most stone to fall. But giving up ¬F1 in response to F would mean not preferring any alternative scenario to the scenario in which even s1 falls. So in particular, it would mean not preferring the scenario with s2 as the left-most stone to fall to the scenario in which even s1 falls.
So if Dom is to conform to the above rules, he must retain ¬F1 upon learning F, and hence conclude that one of the other stones fell, i.e. F2 ∨ F3 ∨ . . . But the same considerations that prevent him from giving up ¬F1 to accommodate F also prevent him from giving up ¬F2 to accommodate F2 ∨ F3 ∨ . . ., so in the end he is prevented from giving up any ¬Fn. Resisting this final part of the argument seems hopeless. As mentioned before, it simply beggars belief that general constraints of rationality should prevent Dom, in the case at hand, from giving up ¬F1 in response to F, while allowing him to give up, say, ¬F17.
Applying AGM theory proper to the examples is not completely straightforward, since the examples involve infinite (conjunctions and) disjunctions, and AGM, strictly speaking, is concerned only with finitary propositional languages. Still, we may consider a trivial extension of AGM to languages with infinite conjunction and disjunction, in which we simply retain all the usual postulates. In this extension of AGM, the above rules can all be derived, and thus the proof that Dom and Matt won't be allowed to give up any belief of the form ¬F n can be carried out.
But we can also adjust the example so as to do without any infinitely long sentences. Instead, we may replace each infinite conjunction and each infinite disjunction used in our argument by a propositional letter, interpreted as expressing the same proposition as the infinitary sentence it replaces. If it is objected that these propositions may not be graspable by finite thinkers, we can instead let the propositional letters express the universal quantifications corresponding to the infinite conjunctions and the existential quantifications corresponding to the infinite disjunctions.
Since the dispositions ascribed in (D.+) and (D.−), under this modification, concern the same propositions as before-or perhaps quantificational counterparts-they are no less reasonable than before. So we still find that there can be no maximally plausible F -worlds. Since our background language now has a propositional letter true in exactly the F -worlds, it follows that no ordering of the worlds can satisfy (≤1)-(≤3) together with the weakened version of (≤4). And so we can infer by the mentioned equivalence that no AGM-revision operation can accord with (D.+) and (D.−) under the finitary replacement.
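To see the structure of the failure concretely, here is a small illustrative sketch. The encoding is my own, not anything from the paper or from AGM itself: worlds are indexed by the left-most stone to fall, and on the natural reading of (D.+) and (D.−), each world w n+1 is strictly more plausible than w n , so while every finite truncation of the F -worlds has a most plausible member, the full infinite set cannot.

```python
# Illustrative toy encoding (mine, not the paper's formalism): world n is
# the world in which stone n is the left-most stone to fall. On the natural
# reading of (D.+)/(D.-), w_{n+1} is strictly more plausible than w_n.

def more_plausible(m: int, n: int) -> bool:
    """w_m is strictly more plausible than w_n (higher index = more plausible)."""
    return m > n

def has_maximally_plausible(world_indices) -> bool:
    """Does this collection of F-worlds contain a member no other member beats?"""
    return any(
        not any(more_plausible(m, n) for m in world_indices if m != n)
        for n in world_indices
    )

# Any finite truncation {w_1, ..., w_k} has a most plausible member (w_k) ...
assert has_maximally_plausible(range(1, 100))
# ... but each candidate w_n is beaten by w_{n+1}, so the full infinite set
# of F-worlds can have no maximally plausible member:
assert all(more_plausible(n + 1, n) for n in range(1, 10_000))
```

The second assertion only samples finitely many indices, of course; the point is that the beating pattern it exhibits holds for every n, which is exactly the failure of (≤4).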
Most of the derivation given by Fine also still goes through under this modification. Infinitary sentences are involved only in the third step of the argument as described above, in which we infer F ⇒ ¬F 1 . Formally, the way the derivation works is this. By Entailment, we also have ¬F 1 ∧ ¬F 2 ∧ F 3 ⇒ ¬F 1 , ¬F 1 ∧ ¬F 2 ∧ ¬F 3 ∧ F 4 ⇒ ¬F 1 , and so on. By Disjunction, we obtain F 1 ∨ ¬F 1 ∧ F 2 ∨ ¬F 1 ∧ ¬F 2 ∧ F 3 ∨ . . . ⇒ ¬F 1 . Now the point is that this big disjunction is logically equivalent to F = F 1 ∨ F 2 ∨ . . ., so that by Substitution we may infer F ⇒ ¬F 1 .
But now let F and F 3 be propositional letters expressing the proposition that some stone fell, and that some stone other than the first two fell, respectively. Then Entailment and Disjunction also give us F 1 ∨ ¬F 1 ∧ F 2 ∨ ¬F 1 ∧ ¬F 2 ∧ F 3 ⇒ ¬F 1 (5′). Now the antecedent in (5′) is not logically equivalent to F , so we cannot infer F ⇒ ¬F 1 simply by an application of Substitution. But it is very plausible to assume that it must always be rationally permissible for the agent to treat F 1 ∨ ¬F 1 ∧ F 2 ∨ ¬F 1 ∧ ¬F 2 ∧ F 3 as equivalent to F in his dispositions to revise. So we may simply make it a further non-logical assumption of the case, in addition to (D.+) and (D.−), that the agent's dispositions satisfy this condition. Given this assumption, we may then infer F ⇒ ¬F 1 , and similarly for all other ¬F n . In this way, even without the use of infinitary sentences, we obtain examples of ideally rational doxastic states that violate some of the AGM principles.

Against Intensionalist Responses
We saw that the doxastic states described, in virtue of satisfying (D.+) and (D.−), yield a violation of the condition (≤4) of the possible worlds approach, requiring each set of possible worlds-or each expressible set of worlds in case of the weakened version-to have a maximally plausible member. One obvious idea for responding to the problem while retaining much of the original framework is therefore to drop this condition. There is even a precedent for this move in the case of counterfactuals, as the counterpart to (≤4) in this setting is the so-called limit assumption, famously rejected by Lewis.

Without (≤4), we can no longer define the revision by update P as the set of the maximally plausible P -worlds. How are we to define it instead? Lewis's proposal for truth-conditions for counterfactuals is of no help. Lewis takes P □→ Q to be true iff Q is true in all sufficiently close P -worlds, i.e. iff by restricting attention more and more to ever closer P -worlds, eventually we will be left only with Q-worlds. The simplest way to see that this won't help is to note that the Lewis-style truth-conditions for P ⇒ Q actually validate all the above inference rules. 16

A natural idea at this point is that the new belief state, in cases where there are no maximally plausible update-worlds, should simply contain all the update-worlds. 17 At first glance, this may look attractive. It allows (D.+) and (D.−) to hold, while also allowing that the agent gives up ¬F 1 upon learning F , since some F -worlds are F 1 -worlds. But at second glance it becomes clear that this suggestion throws out the baby with the bathwater. For the proposal does not allow our agent to have any beliefs, post-revision, save for those entailed by the update F . For example, our agent is not allowed to believe, post-revision, that if F 1 then F 2 , since it is compatible with the truth of F that F 1 ∧ ¬F 2 .
But it is clearly rational in our scenario to retain the belief that F 2 if F 1 , and so the proposal still misclassifies rational doxastic states as irrational.
Perhaps, then, we might give up on the idea that every rational revision function must be definable in terms of a plausibility ordering. Instead, we might say merely that any rational revision function must conform, in some suitable sense, to a plausibility ordering, and allow that there may be more than one revision function conforming to a given plausibility ordering. A natural first suggestion would be to take a revision function to conform to a plausibility ordering iff it maps any update P to the set of maximally plausible P -worlds if that set is non-empty, and to some upwards closed non-empty subset of P if not, where a subset P′ of P is upwards closed iff P′ includes every P -world that is more plausible than some world in P′. 18 In terms of the inference rules employed in Fine's derivation, this proposal invalidates the Disjunction rule. In particular, it leads to the rejection of the inference from (3) and (4′) to (5′). In terms of the AGM postulates, the proposal invalidates the postulate of Superexpansion, which says that the result of revising with a proposition P , conjoined with Q, entails the result of revising with P ∧ Q.

16 There is a rule that is invalidated by adopting the Lewis-style truth-conditions, namely the infinitary version of the conjunction rule (cf. [7, p. 225]). As mentioned before, this rule is not required for our purposes.

17 This corresponds to the idea considered by Fine [7, p. 228f] of taking P □→ Q to be true iff Q is true in all the closest and all the stranded P -worlds, where a P -world is stranded iff there is no closest world closer than it. Fine's most important objection against the proposal is analogous to my criticism in the main text.

18 Probably, one should then impose some further constraints on how the choices of subsets for different updates have to relate. For instance, any world in the revision by F 2 ∨ F 3 ∨ . . . should probably also be included in the revision by F 1 ∨ F 2 ∨ F 3 ∨ . . .
To see how this fails, note that the result of revising with F , under the present proposal, is compatible with F 1 , and remains so when conjoined with F 2 . At the same time, since F ∧ F 2 is logically equivalent to F 2 , the result of revising with F ∧ F 2 is not compatible with F 1 , since the belief that ¬F 1 is retained in the revision by F 2 .
Although an improvement over the previous attempts, this strategy is still unsatisfactory. For the proposal to be adequate, two conditions must be satisfied. Firstly, the complete extensions of the doxastic state that it classifies as permissible must really be so. Secondly, it must classify every permissible extension of the state as permissible. With respect to both conditions, there are good reasons to be skeptical.
Regarding the first condition, the problem is that the rejected applications of Disjunction and Superexpansion are intuitively very plausible. In the case of Disjunction, we assume that upon learning that F 1 ∨ ¬F 1 ∧ F 2 -all matches are struck, but all matches from the first or the second onwards are wet and do not light-Matt retains the belief that ¬F 1 , and thus excludes the possibility that the first match is wet and does not light. He also retains that belief, obviously, upon learning that ¬F 1 ∧ ¬F 2 ∧ F 3 . How can it then be rational for Matt not to retain the same belief-and thus to allow for the possibility that the first match is wet-upon learning the disjunction of these two pieces of information?
The case of Superexpansion seems even more compelling. We take for granted that Matt gives up the belief that ¬F 1 -and so allows for the possibility that the first match is wet and does not light-upon learning that all matches are struck, but all matches from some match onwards are wet and do not light. But then how can it be rational to retain the belief that ¬F 1 -and thus exclude the possibility that the first match is wet and does not light-upon receiving the same information, with the addition that either match 1 or match 2 is the first match to be wet and fail to light?
Regarding the second condition, there are strong reasons to think that there are other permissible extensions of the doxastic state than those envisaged under the present proposal. For instance, it seems very plausible that it should be permissible for Matt's doxastic state to be such that

(6) It is not the case that F 1 ∨ . . . ∨ F 100 ⇒ ¬F 99 .

That is, it should be permissible for Matt to be disposed to give up the belief that ¬F 99 upon learning that F 1 ∨ . . . ∨ F 100 . This disjunction says that all matches are struck, and that for some match m k among the first 100, all matches from m k onwards are wet and do not light. It would seem quite bizarre for Matt, upon receiving this information, to retain the belief that ¬F 99 , and thus to conclude that m k must have been m 100 , i.e. that it must have been match 100 that is the first in the sequence to be wet and fail to light. It certainly does not seem as though having the dispositions in (D.+) and (D.−) requires Matt to respond in this way to the information that F 1 ∨ . . . ∨ F 100 . 19 Similarly, it seems that it should be permissible for Matt to be such that

(7) It is not the case that F 1 ∨ F 2 ⇒ ¬F 1 .

That is, it should be permissible for Matt to be disposed to give up the belief that ¬F 1 upon learning that all matches are struck, and either all the matches, or all matches from the second onwards, are wet and do not light.
These intuitions are in conflict with the principle of intensionality. For under the interpretation given in the match case, F 1 contains F 2 as a conjunct, and so F 1 ∨ F 2 is logically equivalent to F 2 -and upon learning that F 2 , by (D.−), Matt is disposed to retain the belief that ¬F 1 . Likewise, all of F 1 , . . . , F 99 contain F 100 as a conjunct, so F 1 ∨ . . . ∨ F 100 is logically equivalent to F 100 -and upon learning that F 100 , by (D.−), Matt is disposed to retain the belief that ¬F 99 . Let us see, then, where we can get by dropping the assumption of intensionality and trying to accommodate these intuitions.
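The collapse just described can be checked mechanically. The following toy snippet is my own encoding (the particular world-sets are invented for illustration): it treats propositions as sets of possible worlds, as the intensional framework does, and shows that once F 1 entails F 2 , the disjunction F 1 ∨ F 2 simply is F 2 .

```python
# Toy illustration (assumed encoding, not from the paper): propositions as
# sets of possible worlds. In the match case F1 has F2 as a conjunct, so
# every F1-world is an F2-world, and the disjunction collapses.

F2 = frozenset({0, 1, 2, 3})      # worlds where F2 holds (invented labels)
F1 = frozenset({0, 1})            # F1 entails F2: F1 is a subset of F2

assert F1 <= F2                   # the entailment
assert F1 | F2 == F2              # so, intensionally, F1 ∨ F2 just *is* F2

# Hence any revision policy defined purely on world-sets must treat learning
# F1 ∨ F2 and learning F2 exactly alike -- which is what the intuitions
# recorded in (6) and (7) speak against.
```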

Towards a Hyperintensional Solution
We begin by sketching a general method for revising one's beliefs that Matt might be seen to follow and that would lead to his conforming to the intuitions just observed. Both (6) and (7) concern how Matt revises his beliefs by a disjunctive piece of information. A very natural idea is that he does this by forming the disjunction of the results of revising his beliefs by each disjunct. Thus, if we write B for Matt's initial beliefs and * for his revision function, the idea is that

B * (P ∨ Q) = (B * P ) ∨ (B * Q). 20

If so, since B * F 99 , for example, does not entail that ¬F 99 , then neither does B * (F 1 ∨ . . . ∨ F 100 ), in line with (6). And likewise since B * F 1 does not entail that ¬F 1 , then neither does B * (F 1 ∨ F 2 ), in line with (7).
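As a rough illustration, the disjunct-by-disjunct method can be sketched in a toy possible-worlds model. The encoding and names below are mine, not the paper's (and the eventual truthmaker formulation will replace it); it contrasts standard most-plausible-worlds revision with revision that disjoins the results of revising by each disjunct.

```python
# Toy sketch (my own encoding): belief states and updates are sets of
# worlds; base revision picks the most plausible update-worlds, and a
# disjunctive update is revised disjunct by disjunct, with the results
# disjoined (here: unioned).

def revise(belief, update, plaus):
    """Base revision: the most plausible update-worlds under `plaus`.
    (In this toy the prior state is encoded in `plaus`, so `belief` is
    carried along only for shape.)"""
    best = max(plaus[w] for w in update)
    return frozenset(w for w in update if plaus[w] == best)

def wayward_revise(belief, disjuncts, plaus):
    """Revise by each disjunct separately and disjoin the outcomes."""
    return frozenset().union(*(revise(belief, d, plaus) for d in disjuncts))

# World n: "match n is the first to be wet and fail to light" (invented).
plaus = {1: 0, 2: 1}              # world 2 more plausible than world 1
B = frozenset()                   # prior beliefs, not further modelled here
F1, F2 = frozenset({1}), frozenset({2})

# Standard revision by the (intensionally collapsed) update F1 ∨ F2 keeps
# only the most plausible world, so ¬F1 is retained:
assert revise(B, F1 | F2, plaus) == frozenset({2})
# Disjunct-wise revision accommodates each disjunct, so world 1 survives:
assert wayward_revise(B, [F1, F2], plaus) == frozenset({1, 2})
```

The contrast in the two assertions is exactly the contrast between (D.−)-style retention of ¬F 1 under the update F 2 and the permission recorded in (7) for the update F 1 ∨ F 2 .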
Borrowing a term from Fine ([8, p. 52]), we may call this the method of wayward revision, since it involves revising, one by one, by each disjunct of the update, i.e. by each way for the update proposition to be true. (And here, as in Fine, waywardness is considered a good thing.) Revising in this way means that every disjunct of the update is accommodated by the agent in the sense that there is some way for the revised belief system to be true under which that disjunct of the update is true. In other words, for each disjunct Q of the update P , according to the wayward revision by P , it might be that Q. Now to adopt the view that it might be that Q on some occasion for revision-even if one's beliefs previously excluded the possibility that Q-is to treat the occasion as telling one that it might be that Q. In this sense, the method of wayward revision seems to depend on a principle about updates that we may roughly express like this:

19 We saw in the previous section that the role of infinity is merely to turn counter-intuitive results-like this one-into contradictory ones.

(M) A situation with update P ∨ Q is a situation telling the agent that it might be that P , and that it might be that Q.
Whenever a pair of situation and update satisfy (M) with respect to all disjuncts of the update, I shall say that the update is mighty in that situation. The fully general claim (M) is then that updates are always mighty. A central feature of the approach to belief revision that I want to propose is that it endorses principle (M).
Why should one endorse that principle? One consideration in favour of (M)-not the only one-is that it makes sense of the intuitions described above: our intuitive verdicts regarding the rational ways to revise by updates such as F 1 ∨ . . . ∨ F 100 or F 1 ∨ F 2 in our puzzle cases seem to arise from a tacit assumption that the updates are mighty in the situations under consideration. Now it might be objected that it is a mistake to let oneself be guided by these intuitions, since they are simply owed to certain pragmatic effects. The thought might be spelled out as follows: The update is supposed to capture the total information received by the agent in the relevant situation. To say that an agent receives the information that P ∨ Q pragmatically conveys that, in the situation in question, the agent is given some reason to allow for the possibility that Q. For suppose the agent is given no such reason. Then it will normally be wrong to say that the total information received is that P ∨ Q, since the agent will then also have received the information that P , which is normally stronger than the information that P ∨ Q. The exception is if, as in our examples, the propositions that P and that P ∨ Q are logically equivalent, since Q is of the form P ∧ R. But in such cases it will still be highly misleading to say that the information received is that P ∨ Q, since it is hard to see what the point could be of presenting the information in this disjunctive form except to convey that the agent is given some reason to allow for the possibility that Q. Still, that the agent is given such a reason is merely pragmatically conveyed by the statement that the total information they received is that P ∨ Q. It is not, or so the objection goes, part of the semantic content of that statement.
The objection misses the point. For all I want to argue here, it may well be that as a sentence of ordinary English, an instance of 'the total information the agent received is that P or Q' does not semantically imply that the agent is given reason to allow that it might be that P , and that it might be that Q. But our ultimate goal here is not to analyse ordinary discourse about people receiving information; it is to develop an adequate theory of belief revision, i.e. to adequately capture the general rationality constraints on doxastic states. As part of this, we require some means to pair occasions for revision with propositions-which we call the updates-in such a way that only dynamically equivalent occasions are assigned the same proposition. A rough and ready informal characterization of a suitable pairing uses talk of what the agent learns, or what information they receive. But in developing our theory of belief revision, we may have occasion to clarify or refine that rough characterization in certain ways. How this should be done depends more on the theoretical requirements of a theory of belief revision, and less on the available readings of the relevant locutions in ordinary English.
What I wish to claim is, firstly, that we can pair occasions for revision with propositions as their updates in such a way that (i) only dynamically equivalent situations are paired with the same update, and (ii) updates are always mighty. Secondly, I claim that for the purposes of theorizing about rational belief revision, it is beneficial to characterize occasions for revision in terms of these mighty updates. The distinction between pragmatic and semantic implications has little bearing on these claims. In defence of these claims, I will develop a conception of updates as mighty which is based on the framework of truthmaker semantics (Section 8), formally characterize a class of permissible doxastic states within the truthmaker framework and show that they satisfy versions of all the usual AGM postulates save for intensionality (Section 9), and finally highlight what I take to be the important general advantages, apart from our puzzle cases, of the resulting approach and especially the conception of updates as mighty (Section 10). 21

Mighty Truthmaker Updates
To begin, let me make two important initial clarifications regarding the notion of the update which are independent of any issues around mightiness or hyperintensionality. The first is that I take the update to represent the information the agent takes themselves to obtain in the given situation, or perhaps better: the information the agent treats the situation as providing them with. In particular, if there is also a distinct notion of what information a situation really provides a given agent with, whether or not the agent regards and treats the situation accordingly, then that is not what I intend to capture in the update. An example may help to make this clearer. Suppose I have the kind of visual experience that would normally lead to me coming to know that my neighbour is walking towards my house. The experience is caused in the appropriate sort of way by my neighbour walking towards my house, my visual system is as it should be, and so on. But suppose further that I have misleading evidence that leads me to take my visual system to be compromised, and thereby to doubt the veridicality of my experience. In one sense, perhaps, this is a situation in which I receive the information that my neighbour is walking towards my house-it is just that circumstances are such as to (rationally) prevent my uptake of that information. But in the sense I intend, this is not a situation in which I receive the information that my neighbour is walking towards my house. For it is not a situation which I treat as giving me this information. Conversely, a situation in which someone tells me that P , and I trust the speaker, would be a situation in which, in the intended sense, I receive the information that P , even if the speaker is actually lying, and it is false that P .
A second, in some ways complementary clarification is that I take the update to represent what the subject treats the situation-on its own, as it were-as telling them. Consider a version of the previous scenario in which I have no doubts about my visual system and accordingly come to the belief that my neighbour is walking towards my house. Suppose further that my wife previously told me that my neighbour is away on holiday, leading me to conclude that my wife was mistaken. In one sense, perhaps, I might be said to treat the situation as providing me with the information that my wife was mistaken. But this seems to be a case in which, in the course of revising my belief system in the light of the new information, I come to acquire this belief. It is not a case in which the relevant belief is part of the information I treat the situation on its own as providing me with.
Both these stipulations are reasonable independently of the questions of mightiness and hyperintensionality. Unless we make the first stipulation, it is doubtful that rationality requires the agent to come to believe the update. 22 Unless we make the second stipulation, we lose the distinction between the interpretation of a situation by an agent on the one hand and the resulting adjustment of their previous beliefs on the other.
We may thus think of the process of belief revision as divided into two stages. The first stage consists of the agent interpreting the situation in which they find themselves, and deciding what to take it as telling them. The second consists of the agent revising their beliefs in light of what they've taken the situation to tell them. The role of the update is to represent the outcome of stage one. In explaining our conception of the update, what we need to explain is therefore what it says about how the agent interprets the given situation that we are assigning to it a particular update.
The conception of updates I wish to propose is intended to render them mighty, so that by assigning to a situation the update P ∨ Q, we are saying, among other things, that the agent interprets the situation as telling them that it might be that P , and that it might be that Q. The condition of the situation telling the agent that it might be that P here is to be understood in a specific, comparatively demanding way. In a weak sense, we might say that the situation tells the agent that it might be that P whenever the situation, as interpreted by the agent, does not-actively and by itself, as it were-exclude the possibility that P . A more natural interpretation of the condition is more demanding. It requires, we might say, that the situation explicitly presents it as a possibility that P , that it being the case that P would (at least) help account for the situation, or that it being the case that P would (at least) partially constitute the truth of what the agent takes the situation to tell them.
The distinction is difficult to define in independent, non-metaphorical terms, but it is clear and familiar enough. An example may help to illustrate the idea. Suppose my neighbour has twin sons, Bob and Bill. Suppose further that I see someone walking towards my house, and that I see them well enough to be able to tell that it is definitely either Bob or Bill, but I can't tell which. So I take the situation to tell me, among other things, that Bob or Bill is coming over. Consequently, some propositions are incompatible with the situation as I interpret it, such as any proposition to the effect that both Bob and Bill are away on holiday. Some propositions are merely compatible with the situation as I interpret it, such as the proposition that it is sunny in Ohio. And some propositions are explicitly presented as possibilities by the situation, such as the proposition that Bob is coming over, and the proposition that Bill is coming over. These are propositions we might describe as (partially) accounting for the situation I find myself in, as I interpret it, as propositions whose truth would partially constitute the truth of what I take the situation to tell me.

22 This requirement is implicit in the rule of Entailment, and captured in the AGM postulate Success; cf. Stalnaker [34] for a similar approach to justifying the Success postulate. (I do not mean here to exclude the possibility of fruitfully theorizing about belief revision on the basis of a different conception of the update, not subject to the requirement that the agent takes themselves to come to know the update. But this would constitute a more radical departure from the AGM tradition than I wish here to consider. In the literature, approaches of this sort often go under the label of non-prioritized belief revision; for a brief introduction see [19, Section 6.3].)
Let us call propositions in this final category explicit possibilities of the situation (under the agent's interpretation 23 ), and those that are merely compatible with the situation merely implicit possibilities. 24 The distinction between explicit and implicit possibilities is relevant to how an agent may rationally revise their beliefs. If a proposition is an explicit possibility in a situation, then the situation provides some reason for the agent to allow for the possibility of the proposition's being true, even if their original belief system excludes that possibility. Thus, in the example, even if I initially believed both Bob and Bill to be away on holiday, the situation provides some reason for me to allow for the possibility that Bob is coming over, and it provides some reason for me to allow for the possibility that Bill is coming over. But if a proposition is a merely implicit possibility, then the situation does not give the agent reason to allow for the possibility that it is true. If in our example I originally believed it not to be sunny in Ohio, then the situation provides no grounds whatsoever to subsequently allow for the possibility of it being sunny in Ohio.
Crucially, the condition that the situation tells the agent that it might be that P in (M) is to be understood as requiring that the proposition that P is an explicit possibility in the situation. So under a conception of updates as mighty, to say that the update in a given situation is P ∨ Q is to say, among other things, that the agent is given some reason, in that situation, to allow for the possibility that P , and to allow for the possibility that Q.
We can now argue that if updates are mighty, they must be individuated in a hyperintensional way. For assuming intensionality, any given update P can also be written as P ∨ P ∧ Q , for arbitrary Q. Assuming mightiness, it follows that in any situation with update P , the agent is told that it might be that P ∧ Q, and hence that it might be that Q, for arbitrary Q. Whatever P is, there will be few if any such situations. Conversely, it seems most situations will not be representable by a mighty intensional update. It seems safe to conclude, therefore, that a conception of updates as mighty requires a hyperintensional way of individuating updates, in particular one that allows us to distinguish between pairs of the form P and P ∨ P ∧ Q . 25

23 This qualification will henceforth usually remain tacit.

24 The distinction between what I have called explicit and implicit possibilities in a situation may be compared to von Wright's distinction between the strong and weak permissions of a system of norms (cf. [36, p. 90]), where an action is weakly permitted iff it is compatible with the system of norms, and strongly permitted iff it is actively singled out, as it were, as permitted by the system of norms. The difficulties in capturing these distinctions within an intensional framework are likewise parallel. Fine [12] proposes a truthmaker semantics for statements of permission that is sensitive to the distinction, and captures it in much the same way that I propose below. In Section 8 of that paper, Fine also addresses the problem of deontic updating and notes the connection to belief revision. The approach to deontic updating Fine sketches is related to the approach to belief revision to be described below, but with a simple mereological construction taking the place of the transition relation invoked below. Related approaches to deontic updating are also pursued by Yablo [37] and Yablo & Rothschild [31], who likewise draw the connection to belief revision.
25 I propose that we model updates as propositions as conceived within the framework of truthmaker semantics. 26 Within this theory, propositions are characterized not (merely) in terms of the possible worlds at which they are true, but in terms of the possible states which make them true. 27 Informally, a possible state may be thought of as a (proper or improper) part or fragment of a possible world, but officially the notion is a primitive of the theory. States are taken to be ordered by part-whole ( ), and some states s 1 , s 2 , . . . are said to be compatible if there is a possible state that contains all of them as parts. It is assumed that there is always a smallest state to contain some given states s 1 , s 2 , . . . , which we call their fusion {s 1 , s 2 , . . .} = s 1 s 2 . . . 28 We may recover a notion of a possible world as the notion of a maximal possible state, i.e. a possible state that contains every state it is compatible with.
An exact truthmaker of a proposition is a state that is not only modally sufficient for the truth of the proposition, but also responsible for it. Thus, the state of it being sunny in New York is not an exact truthmaker of the proposition that 2+2=4. In addition, to be an exact truthmaker of a proposition, a state must be wholly relevant to the truth of the proposition. Thus, the state of it being sunny and cold in New York is not an exact truthmaker of the proposition that it is sunny in New York, since it contains an irrelevant part-the state of it being cold in New York-and therefore fails to be wholly relevant. The condition of being wholly relevant renders exact truthmaking non-monotonic: a given state may exactly verify, i.e. be an exact truthmaker of, a given proposition, without some bigger state also exactly verifying the same propo- 25 Since AGM is based on an intensional conception of the update, it would seem to follow from this that AGM updates cannot be considered mighty. On the other hand, one might argue that the AGM method of revision does reflect a conception of updates as mighty. The reasoning is this. As will become clearer in the next section, regarding an update as mighty means that for each disjunct P of the update, absent special reasons to the contrary, P must be accommodated as a possibility. Now under the AGM account, the agent must accommodate a disjunct P unless they consider no P -worlds to be among the most plausible updateworlds. To the extent that this is a special reason not to accommodate P , AGM revision embodies a view of the update as mighty. Indeed, one might think this is exactly what goes wrong in our puzzle cases. AGM lets us retain ¬F 1 upon revising by F 2 only if we have special reasons to discard the F 1 -worlds among the F 2 -worlds. So in this way, the update F 2 is treated as mighty, and as identical to (F 1 ∧ F 2 ) ∨ F 2 . 
But we can be in a situation where it is fine just by default to accept F 2 and retain ¬F 1 , because F 1 is merely compatible with the update F 2 , and not an explicit possibility. 26 A semantics of this sort was first formulated by Bas van Fraassen [35]. In recent years, the approach and its various applications have been further developed by Fine and others. Fine's [10,11] offer the best general presentation of the theory. The following brief introduction is indebted to these works. 27 A formally precise presentation of the framework is given in Appendix A. For many applications of truthmaker semantics-including, I believe, some applications related to belief revision-, it is useful also to allow for a multiplicity of impossible states. For our present concerns, however, impossible states are not essential, though it will be convenient to assume that there is a single impossible state. 28 When s 1 , s 2 , . . . are incompatible, this will be the impossible state. sition. Relatedly, if a state s is an exact truthmaker of some proposition, then we may conclude that the proposition is in some good sense about the whole state s (though not in general only about s). 29 This understanding of truthmaking suggests a particular account of the truthmakers of disjunctions and conjunction: A state makes a disjunction true iff it makes one of the disjuncts true, and it makes a conjunction true iff it is the fusion of truthmakers of the conjuncts. 30,31 Under this account, we can make the required distinction between P and P ∨ P ∧ Q . Any fusion of a truthmaker of P and a truthmaker of Q is a truthmaker of P ∨ P ∧ Q , but since a truthmaker of Q will not in general be relevant to the truth of P , such a fusion will not in general be a truthmaker of P . In particular, we can distinguish between, for example, the logically equivalent F 2 and F 1 ∨ F 2 in the match example. For by the clause for disjunction, every exact truthmaker of F 1 will be a truthmaker of F 1 ∨F 2 . 
But since F 1 by definition has W 1 ∧¬L 1 as a conjunct, by the clause for conjunction, any such truthmaker will contain a part that makes true W 1 ∧ ¬L 1 , the proposition that the first match is wet and does not light. That state will be irrelevant to the truth of F 2 , and therefore no truthmaker of F 1 will be an exact truthmaker of F 2 . 32 29 For much more on the relation of (non-monotonic) truthmaking to the notion of aboutness or subject matter, see Steve Yablo's [38] and Fine's [11,13]. 30 There is also an alternative, inclusive clause for disjunction, in which the fusion of truthmakers of each disjunct is also considered a truthmaker. In some applications of truthmaker semantics it is preferable to work with the inclusive conception of disjunction, but as we shall see shortly, for the present application there are specific reasons not to do so. 31 Readers may wonder about the case of negation. The simplest approach is to associate any given proposition with both a set of exact truthmakers, and a set of exact falsitymakers, and to let negation 'flip' the two sets. For now, since none of the AGM postulates involves negation, we may set negation to one side. (Negation does of course play an important role in the relation between AGM-style revision and another important AGM-operation, namely contraction, which corresponds to the mere removal of a belief. There are important questions about the treatment of contraction and similar operations under a truthmaker approach, as well as about the matter of negation, but discussion of these will have to wait for another occasion.) 32 The central feature of the truthmaker framework is thus its use of a concept of relevant truthmaking, which makes it possible to capture various relationships of relevance between propositions. Relatedly, the distinctive features of the truthmaker-based approach to belief revision developed here can also be put in terms of relevance. 
On the conception of the update as mighty, the update P ∨ (P ∧ Q) is relevant to a prior belief in ¬Q, whereas the corresponding update P need not be so relevant. In particular, in our example, the update F2 ∨ (F1 ∧ F2), but not the update F2, is regarded as relevant to the belief that ¬F1, and so the agent is permitted to be disposed to give up that belief in processing the former update while not being so disposed with regard to the latter update. The claim that AGM is not appropriately sensitive to the question of which existing beliefs a given update is relevant to has also been made by earlier authors, most notably by Parikh [28], whose proposal for extending AGM by a relevance axiom has been the subject of extensive discussion and refinement, cf. e.g. [23, 27]. A proper comparison of the present approach with this tradition, or with other 'relevantist' criticisms of AGM, is beyond the scope of this paper, but it may be worth mentioning two significant points of difference. Most of the work in the tradition initiated by Parikh embraces intensionality and accordingly does not adopt a conception of the update as mighty. That tradition also tends to follow a syntactically driven approach to understanding relevance (an exception is [29], which provides a system-of-spheres semantics for Parikh's relevance axiom), whereas the present approach is chiefly driven by semantic concepts and considerations. It would be very interesting to study the relation between these approaches more deeply. One might try, for example, to formulate a suitably hyperintensional version of the relevance axiom and investigate whether it can be satisfied under some version of the present approach.
We can now say which truthmaker proposition we take to be the update on a given occasion for revision. First, note that the division between explicit and implicit possibilities can also be made at the level of states. A state is an (at least) implicit possibility if it is compatible with what the agent takes the situation to tell them, and it is an explicit possibility if it also partially constitutes the truth of, i.e. partially makes true, what the agent takes the situation to tell them. 33 Among the explicit possibilities, we may then further distinguish between those that merely partially make true what the situation tells the agent and those that fully make it true. In our example, what I take the situation to tell me is perhaps not exhausted by the claim that either Bob or Bill is coming over. Perhaps I also see that Bob or Bill, whoever it happens to be, is wearing a black sweater and a red hat. Let us call explicit possibilities that fully make true what the situation tells the agent complete, and the others incomplete. The truthmaker update (tm-update for short) in a given situation, as interpreted by the agent, is then the set of all and only the situation's complete explicit possibilities. 34 Since the tm-update includes only explicit possibilities, a situation with tm-update P tells the agent, for each state s ∈ P, that s might obtain, and hence, for each disjunct Q of P, that it might be the case that Q. Since the tm-update comprises every complete explicit possibility, moreover, a situation with update P tells the agent that it must be the case that P: the situation is taken by the agent to rule out any scenario in which it is not the case that P. We may summarize the point by saying that tm-updates are both musty and mighty.
By way of comparison, consider how an intensional conception of the update might be obtained. The obvious answer would seem to be as follows. Given an agent's interpretation of a situation, we divide the possible worlds into two exclusive and exhaustive categories. To the first belong those worlds that are compatible with the situation, under the agent's interpretation; to the second belong the others. The possible worlds update (pw-update for short) is the set of the former worlds. Then pw-updates are certainly also musty: given that every world that is compatible with the situation is included in the update, we can conclude that the situation tells the agent that one of the update-worlds must obtain. But in contrast to tm-updates, which are musty and mighty, pw-updates are merely musty. For as we saw above, intensional updates cannot be mighty in the demanding sense in which tm-updates are.
Note that under our conception of tm-updates, assuming as given two situations with logically equivalent tm-updates P and Q that differ with respect to their truthmakers, there is nothing mysterious about why these situations can be dynamically inequivalent, even assuming the agent knows the updates to be logically equivalent. That the agent knows that P and Q are logically equivalent means they know that it is absolutely impossible for P to be true without Q being true as well, and vice versa. A situation with tm-update P is one in which the agent takes themselves to learn that P. Knowing P to be equivalent to Q, they will also conclude that Q. Similarly in a situation with update Q. But how the belief that P, or the belief that Q, may appropriately be incorporated in these situations depends also on what the situations tell the agent about what might be the case. Given that P and Q have different truthmakers, situations with tm-updates P and Q will differ in this regard, and may therefore differ with respect to their range of rational responses. 35

33 Note that 'partial' here means part of rather than has as part. Thus, by a partial truthmaker I mean something which is part of a truthmaker rather than something which has a truthmaker as a part.

34 Note that this set is plausibly not closed under fusion. For instance, Bill's coming over and Bob's coming over may each be explicit possibilities without Bill and Bob both coming over being one. That is the reason why I think that in the application to belief revision, we need to allow for truthmaker propositions that fail to be closed under fusion, and relatedly to opt for the non-inclusive clause for disjunction, on which fusions of verifiers of the disjuncts are not automatically verifiers of the disjunction; cf. Footnote 30 above.

Revision
Given the proposed conception of updates as sets of truthmakers, how can we characterize the rationally permissible ways to revise a belief system by an update? First, we need to decide how to model belief systems within our revised setting. Although the issue calls for extended discussion, for present purposes we may adopt a policy of keeping things as simple as possible, and of minimizing deviation from the AGM approach, so that we may see how much, or how little, of that approach we are forced to give up to accommodate the problem cases. We shall therefore continue to model a belief state by the set of possible worlds at which it is true. Thus, the update will be the only source of hyperintensionality under the resulting approach. 36 In imposing rationality constraints on doxastic states, we follow a strategy similar to that of the possible worlds approach, in that we demand that the revision function be definable in a certain way. We suggested above that Matt might plausibly be seen to revise by disjunctions by disjoining revisions by the disjuncts. Within the truthmaker framework, a disjunct of an update is any subset of the update, and the disjuncts of the update which are not themselves disjunctive are the subsets with exactly one truthmaker as a member. So the suggestion is, in effect, to take the revision by an update to be the disjunction of the revisions by the individual truthmakers of the update. In this way, we obtain what we called the wayward revision of a belief system by an update.
Under certain circumstances, however, it may be rationally permissible for an agent to deviate from the method of wayward revision. The idea is that one may take a situation to tell one that it might be that P, and at the same time reasonably hold that one knows better, as it were: information one possesses independently of the given occasion for revision, and that is not undermined by the new information obtained, may justify one in continuing to exclude the possibility that P, even if the situation on its own is taken to explicitly present P as a possibility. First of all, one might so interpret a situation as to assign it the update P ∨ Q, where P but not Q is compatible with one's previous beliefs. In the Bob-and-Bill case, for example, I might take the situation to tell me that Bob might be coming (P) and that Bill might be coming (Q), when my beliefs are compatible with the former but not the latter possibility.

35 We might compare the situation to the one in approaches to revision using belief bases, which are sets of sentences not (normally) closed under logical consequence. There, a distinction is made between, roughly speaking, sentences an agent believes to be true purely because they follow logically from other sentences the agent believes, and sentences an agent believes to be true on (partly) independent grounds. The view is that rational revision is sensitive to this difference, and different but logically equivalent belief bases may rationally be revised differently. Just as in our case, the view is fully compatible with a view of agents as logically omniscient. See e.g. [18, pp. 17ff].

36 That being said, I suspect that an ultimately more satisfactory approach may be obtained by also embracing hyperintensionality with respect to the belief system, representing an agent's beliefs by their exact truthmakers rather than by all the verifying worlds.
In such a case, it is permissible for me to disregard the revision by Q and simply select as my new belief system the revision by P. 37 Second of all, even if all disjuncts of the update are incompatible with the agent's current beliefs, those beliefs may exclude the revisions by some disjuncts much more firmly than others, and it may then be rational for the agent to disregard the latter. In the context of the dominos, a plausible example might be the update to the effect that either all the stones fell, or exactly the odd-numbered stones fell. Given the setup of the case, any world verifying the second disjunct might seem so much more remote a possibility than worlds verifying the first disjunct that it may justifiably be disregarded. This suggests a modification of the simple method of revision, whereby revisions by disjunctive updates are constructed by first forming the disjunction of the revisions by each disjunct, and then applying a "plausibility filter", discarding those disjuncts that are regarded as sufficiently less plausible than others. Just as we did under the possible worlds approach, therefore, we may appeal to a plausibility ordering of the worlds, and let B * P comprise only the most plausible worlds in the wayward revision of B by P.
It needs to be emphasized, however, that while from a formal perspective the plausibility orderings used here are just like those used in AGM, their representational role is quite different, and much less central to the overall account. In particular, under the present approach, an agent may consider two initially excluded worlds equally plausible and yet, after a rational revision, continue to exclude one of them, while no longer excluding the other. Indeed, as will become clearer below, this is exactly what allows us to deal in an intuitively satisfactory way with the puzzle cases.
It remains to characterize the rationally acceptable ways to revise a belief state by a single truthmaker. A natural idea is to once more take a leaf out of Fine's semantics for counterfactuals (cf. [7, pp. 236ff]), and to postulate a transition relation that encodes, roughly speaking, how each of the various worlds in the belief state may be adjusted upon revision by any given input state. 38 We write s →_b w to say that world w is a revision of world b by state s, and define the wayward revision B • P of belief state B by update P as {w : p →_b w for some p ∈ P and b ∈ B}. The final revision is then obtained by applying the plausibility filter. Where X is a set of worlds, we let g(X) be the set of the maximally plausible members of X, and define B * P as g(B • P).

37 Indeed, it is standardly assumed that this is not only permissible but mandatory. Specifically, the AGM postulate of Vacuity demands that no beliefs be given up in incorporating information compatible with the agent's current beliefs.

38 Although in its use of a transition relation the present approach thus maintains a strong parallel to Fine's semantics for counterfactuals, it should be noted that there is no counterpart in the latter to our use of plausibility orderings. Roughly speaking, while I propose to divide the work done by plausibility orderings under the possible worlds approach between plausibility orderings and a transition relation, Fine proposes to let transition do all the work of the similarity ordering in the possible worlds analysis of counterfactuals. I suspect that by using a similarity ordering in the account of counterfactuals, much as we use a plausibility ordering here, we might be able to avoid the difficulties for Fine's semantics raised by Embry [4].
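The two-step construction can be sketched as follows, under toy assumptions of my own: worlds and states are frozensets of atomic facts, trans(s, b) returns the set of worlds w with s →_b w, and plaus is a numeric stand-in for the plausibility ordering, with smaller values meaning more plausible.

```python
def wayward(B, P, trans):
    # B • P = {w : s →_b w for some s ∈ P and b ∈ B}
    return {w for s in P for b in B for w in trans(s, b)}

def filtered(B, P, trans, plaus):
    # B * P = g(B • P): keep only the maximally plausible worlds among
    # the wayward revision.
    candidates = wayward(B, P, trans)
    if not candidates:
        return set()
    best = min(plaus(w) for w in candidates)
    return {w for w in candidates if plaus(w) == best}
```

For instance, with the simple (and intensional) transition relation that adds the input state to the old world, revising by a two-verifier update yields one candidate world per verifier, and the filter then discards the less plausible ones.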
Note that the revision operations of the usual possible worlds account constitute a special case of our revision operations, corresponding to the condition that a world w is a revision of another world b by a consistent state s iff w contains s as a part. Then the wayward revision of any belief state is simply the set of worlds at which the update is true, and the final, filtered revision is the set of the maximally plausible update-worlds. Thus, the way our present account improves on the possible worlds account is by allowing the transition relation to narrow our focus from the start to some subset of the update-worlds, and to do so in a way that is sensitive to the exact truthmakers of the update.
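This special case can be made concrete in the same toy encoding (frozensets of atomic facts, a numeric plaus with smaller meaning more plausible; the function name is hypothetical). Note that, as the text says, the belief state drops out of the wayward step here: the candidates are simply the update-worlds.

```python
def pw_revision(P, worlds, plaus):
    # Transition relation of the special case: w is a revision of any
    # belief world by consistent s iff w contains s as a part. The
    # wayward revision is then just the set of update-worlds ...
    candidates = {w for w in worlds for s in P if s <= w}
    if not candidates:
        return set()
    # ... and the filtered revision keeps the most plausible of them.
    best = min(plaus(w) for w in candidates)
    return {w for w in candidates if plaus(w) == best}
```

This is exactly the classic system-of-spheres picture: select the most plausible worlds verifying the update, with no sensitivity to its exact truthmakers.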
To illustrate the idea, we sketch a truthmaker model of Dom's doxastic state in the dominos example. 40 For simplicity, we let our worlds be built up purely from states of the form f_n (stone n falls) and f̄_n (stone n does not fall). Then Dom's initial belief has just one verifier, the state b = {f̄_n : n ∈ N}. Its revision by the proposition that F2, with its sole verifier f2, will comprise exactly the maximally plausible worlds w with f2 →_b w. Its revision by the proposition that (F1 ∧ F2) ∨ F2, with its two verifiers f2 and f1 ⊔ f2, will comprise exactly the maximally plausible worlds w with either f2 →_b w or f1 ⊔ f2 →_b w. To accommodate the fact that Dom is disposed to make room for the possibility that F1 upon learning that F1 ∧ F2, or learning that (F1 ∧ F2) ∨ F2, we may stipulate that all regular worlds other than b are equally plausible, where a world is regular iff it is of the form {f̄_m : m < n} ∪ {f_m : m ≥ n} for some n.
Thus, in revising by F2, the world in which all stones fall is excluded. But it is not excluded because it is less plausible than the other F2-worlds. Instead, it does not even come up for consideration at the stage at which the plausibility filter is applied, because it is not among the worlds that are revisions of b by f2. Why is it not among those worlds? Because the state f2 of the second stone falling is taken by Dom to provide no grounds for replacing the state f̄1 of the first stone standing by f1. Such a reason to change the relevant state obtains only for the other, later stones in the sequence.
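To make the mechanics concrete, here is a minimal executable sketch of this model, under assumptions of my own: only N = 4 stones instead of infinitely many, worlds encoded as frozensets of tagged atoms ("fall", n) and ("stand", n) standing in for f_n and f̄_n, and a particular transition relation on which the reported fallen stones topple every later stone but leave earlier ones undisturbed.

```python
N = 4  # finite truncation of the model (assumption for illustration)

def world(first_fallen):
    # The regular world in which stones before first_fallen stand and
    # the rest fall; first_fallen = N + 1 gives the all-standing world b.
    return frozenset(("fall", n) if n >= first_fallen else ("stand", n)
                     for n in range(1, N + 1))

b = world(N + 1)                    # sole verifier of Dom's initial beliefs
f = {n: frozenset({("fall", n)}) for n in range(1, N + 1)}  # f_n states

def trans(s, w0):
    # Hypothetical transition relation: the fallen stones reported in s
    # topple every later stone, but give no ground to disturb earlier ones.
    if w0 != b:
        return set()
    return {world(min(n for (_, n) in s))}

def wayward(B, P):
    # B • P = {w : s →_b w for some s ∈ P and b ∈ B}
    return {w for s in P for w0 in B for w in trans(s, w0)}

F2 = {f[2]}                         # update F2
F12_or_F2 = {f[1] | f[2], f[2]}     # update (F1 ∧ F2) ∨ F2

# Revising by F2 keeps stone 1 standing; the logically equivalent
# disjunctive update also makes room for the all-fallen world.
assert wayward({b}, F2) == {world(2)}
assert wayward({b}, F12_or_F2) == {world(2), world(1)}
```

Since all regular worlds other than b are stipulated to be equally plausible, the plausibility filter leaves both candidate sets intact, so the filtered revisions differ in just the way described in the text.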
So we do not wish to hold that a world w transitions to another world v upon revision by s whenever v contains s: this would render our account intensional, and equivalent to AGM. But there are a number of weaker constraints that we may plausibly impose. In particular, for any consistent state s and any world b ∈ B, we shall require conditions (1)-(5) to hold. 41 These constraints, together with the familiar assumptions about plausibility orderings, ensure that filtered revision satisfies natural counterparts of all the basic AGM postulates except for Intensionality. They also ensure that under the natural interpretation of ⇒ in terms of filtered revision, all of Fine's rules from Section 5 are valid with the exception of the intensionalist rule of Substitution. 42 The situation is more complicated with respect to the postulates of Superexpansion and Subexpansion. These are usually stated in a form in which they relate revisions by conjunctions to revisions by their conjuncts. Superexpansion then says that (B * P) ∧ Q entails B * (P ∧ Q), and Subexpansion says that if B * P is compatible with Q, the converse entailment also holds, so that B * (P ∧ Q) entails (B * P) ∧ Q. Now as we have noted before, within an intensional framework, the relation between a conjunction and its conjuncts is simply the relation between a proposition and a proposition entailed by it, and thus the same as the relation between a proposition and a disjunction in which it is a disjunct. As a result, we can also formulate versions of Superexpansion and Subexpansion that relate revisions by disjunctions to revisions by their disjuncts. These versions will be equivalent to the usual ones under the assumption of intensionally individuated updates, but they will not be equivalent within our hyperintensional framework.
We may thus distinguish between the following four principles:

Superexpansion(∧): (B * P) ∧ Q entails B * (P ∧ Q).
Subexpansion(∧): If B * P is compatible with Q, then B * (P ∧ Q) entails (B * P) ∧ Q.
Superexpansion(∨): (B * (P ∨ Q)) ∧ P entails B * P.
Subexpansion(∨): If B * (P ∨ Q) is compatible with P, then B * P entails (B * (P ∨ Q)) ∧ P.

It turns out that conditions (1)-(5) on transition relations, together with the conditions on plausibility orderings, ensure that Superexpansion(∧) and Subexpansion(∨) are satisfied. Superexpansion(∨) and Subexpansion(∧) are not in general satisfied. This is a good thing, though. For as I show in Appendix B, there is no way to satisfy them, given the other principles and constraints, without the account collapsing again into AGM and thereby validating Intensionality. Moreover, we can construct compelling counter-examples to these postulates on the basis of our example cases. For simplicity, consider the dominos case again. For Superexpansion(∨), let P be the proposition that F2 and Q the proposition that F1 ∧ F2. Then as we have argued, it is permissible for B * P to rule out that F1 while B * (P ∨ Q), and then also (B * (P ∨ Q)) ∧ P, does not, and therefore fails to entail B * P, in violation of Superexpansion(∨). For Subexpansion(∧), let P be as before and let Q be the proposition that (F1 ∧ F2) ∨ F2. So P says that the second stone fell, and Q says that the second, or the first and the second, stone fell. B * P then says that the first stone stands, but the second stone and all subsequent ones fell. This is of course compatible with Q; indeed, it entails Q. P ∧ Q is equivalent, even in terms of its truthmakers, to Q. So B * (P ∧ Q) = B * Q. But given our assumptions, B * Q makes room for the possibility that all stones fell, and so it cannot entail B * P, which does not allow for that possibility. A fortiori, B * Q then does not entail (B * P) ∧ Q, in violation of Subexpansion(∧).
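The Superexpansion(∨) counterexample can be checked mechanically in the same toy dominos encoding used above (N = 4 stones, tagged atoms for f_n and f̄_n, and a transition relation of my own devising; entailment between belief states is simply set inclusion of their world sets).

```python
N = 4

def world(first_fallen):
    # Regular world: stones before first_fallen stand, the rest fall.
    return frozenset(("fall", n) if n >= first_fallen else ("stand", n)
                     for n in range(1, N + 1))

b = world(N + 1)
f = {n: frozenset({("fall", n)}) for n in range(1, N + 1)}

def trans(s, w0):
    # The reported fallen stones topple all later stones only.
    if w0 != b:
        return set()
    return {world(min(n for (_, n) in s))}

def revise(B, P):
    # With all regular worlds equally plausible, the filter is trivial
    # here, so B * P coincides with the wayward revision B • P.
    return {w for s in P for w0 in B for w in trans(s, w0)}

P = {f[2]}          # F2
Q = {f[1] | f[2]}   # F1 ∧ F2
B = {b}

# (B * (P ∨ Q)) ∧ P: the update-worlds of B * (P ∨ Q) that verify P.
lhs = {w for w in revise(B, P | Q) if any(s <= w for s in P)}

# Superexpansion(∨) fails: the left-hand side does not entail B * P.
assert not lhs <= revise(B, P)
```

The check confirms the argument in the text: B * (P ∨ Q) retains the all-fallen world, which verifies P but lies outside B * P.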

The Advantages of Mightiness
The results of the previous sections show that a viable, hyperintensional theory of rational belief revision can be developed within the framework of truthmaker semantics and on the basis of a conception of the update as mighty. Moreover, we saw that this kind of approach allows us to give a very natural account of what is going on in our puzzle cases, which is much more in line with an intuitive assessment of these cases than any account that could be given within an intensional framework. In this final section of the paper, I want to briefly indicate at a more general and abstract level some of the further advantages of the proposed approach and in particular the use of mighty updates.
The central requirement on a conception of the update is that dynamically inequivalent situations always be assigned distinct updates. As we have seen, there are logically equivalent tm-updates that represent dynamically inequivalent situations. At first glance, if the tm-updates associated with a pair of dynamically inequivalent situations are logically equivalent, it would seem that the pw-updates associated with those situations must be identical. This would show that pw-updates are plainly incapable of capturing the relevant features of occasions for revision. So the question arises how, if at all, intensionalists can avoid this conclusion.
It will be useful to consider a concrete example. Suppose I have hurt my ankle playing football. I take it to be nothing serious but go to the doctor just in case. After examining me, she tells me: 'Your ankle is sprained, or sprained and broken'. I trust the doctor and see no reason to suspect her of trying to mislead me. So I take the situation to tell me that my ankle must be sprained, and that it might in addition be broken. It is then reasonable for me to give up my belief that my ankle is not broken. 43 Now consider a version of the situation in which the doctor tells me simply: 'Your ankle is sprained'. Again, I trust the doctor and see no reason to suspect her of being anything less than fully perspicuous in sharing her opinion of my ankle. So I take the situation to tell me that my ankle is sprained, and I do not take it to tell me that my ankle might be broken. It is then reasonable for me to retain my belief that my ankle is not broken.
Clearly, we have a pair of dynamically inequivalent occasions for revision. Moreover, it seems plausible that under the specified interpretations of the situations, they are to be assigned logically equivalent tm-updates. The tm-update in the first situation-call it the sprained/broken scenario-might plausibly be taken to be the truthmaker proposition that my ankle is sprained, or sprained and broken. In the second situation-call it the sprained scenario-the tm-update is plausibly taken to be the truthmaker proposition that my ankle is sprained. These propositions, of course, are logically equivalent. Now if the corresponding pw-updates are simply the sets of worlds in which these tm-updates are true, then the two situations are assigned the same pw-update, in spite of their dynamic inequivalence. Can the intensionalist plausibly deny the claim that these are the pw-updates?
A natural idea is to point out that the update is supposed to capture the total information received by the agent, and that the updates we specified do not satisfy this condition. For example, in the first scenario, I presumably also obtain the information that the doctor assertorically utters the sentence 'Your ankle is sprained, or sprained and broken', and perhaps I obtain the information that the doctor is not convinced that my ankle is not broken. And in the second scenario, I obtain the information that the doctor utters 'Your ankle is sprained' instead, and perhaps take the situation to also tell me that the doctor confidently rules out that my ankle is broken. If we enrich the updates given above by these further bits of information, then the updates assigned to the two situations will not be logically equivalent. 44

In order to properly evaluate this response, we need to get clearer about the requirement that the update represent the total information received by the agent in the situation under consideration. On the one hand, it is uncontroversial that we need some form of such a completeness requirement: we simply cannot determine the rational responses to a situation purely on the basis of the fact that part of what the agent learns is that P, without being told what else the agent learns. On the other hand, a naïvely strict interpretation of the completeness requirement gives rise to severe methodological difficulties. For under such an interpretation, in more or less any realistic situation a doxastic agent might find themselves in, the total information received will be unmanageably rich and complex. For a start, as long as the agent has their eyes open, they would seem to receive, at any point in time, a very rich body of visual information that it is not even feasible to express in words. So if the update needs to capture the total information received in this very demanding sense, we lose the ability to test any proposed theory of belief revision by applying it to realistic scenarios and working out its implications.

43 Note that nothing I have said about this scenario depends on it being part of the semantic content of the doctor's utterance that my ankle might be broken. It is perfectly consistent with what I say that this is merely a pragmatic implication. But since I trust the doctor and assume that she is not trying to mislead me, I take on board not only the semantic but also the pragmatic implications of what she says.

44 While this seems to be the most natural response, it is perhaps not the only possible response. A more comprehensive and detailed examination of the options available here is beyond the scope of this paper, but let me mention one alternative strategy, hinted at by Wolfgang Spohn [32, Section 6] when discussing a somewhat similar example. The idea is to maintain that in the situation in question, the appropriate response by the agent consists not simply in a revision by some given update, but in a sequence of belief change operations: first simply removing my previous beliefs about the health of my ankle, including the belief that my ankle is not broken, and then revising with the proposition that my ankle is sprained. This suggestion may yield the right results in our example, but absent plausible general principles telling us what situations call for what combinations of operations, the response appears objectionably ad hoc.
In practice, belief revision theorists do not attempt to specify anything like an update that would be complete in this demanding sense. Nor, it might be added, do they normally attempt to fully specify anything like a realistic complete initial belief state that is to be revised, or a complete revised belief state. How is this practice to be justified? To a rough approximation, a natural idea is as follows. First of all, in considering examples, we usually limit attention to the evolution of a certain subset of an agent's beliefs, such as their beliefs concerning the status of certain domino stones or matches, or the whereabouts of their neighbour's twins. We specify those initial beliefs, and tacitly stipulate that in the kind of situation to be considered, any other beliefs the agent might have are irrelevant to how the subset we are considering can rationally be revised. With regard to the update, a related policy is in place: the update is assumed to be complete in the sense of encoding all the information received that is relevant to how the part of the agent's belief system under consideration may rationally be revised. What the example of my injured ankle helps bring out is that the truthmaker approach and the intensional AGM approach differ greatly with respect to how, and how easily, the demands of relevant completeness may be met.
Under the truthmaker approach, we can adequately model the example by specifying my initial beliefs about the health of my ankle, and by taking the updates in the two scenarios to be as described above-that my ankle is sprained in the sprained scenario, and that my ankle is sprained, or sprained and broken in the sprained/broken scenario. Given the assumptions of the example, there seems to be no reason to take these updates to be relevantly incomplete. Under the possible worlds approach, we need to work with a much more complicated model of the situation. In order to capture all the relevant differences about the information received, we have to incorporate in the updates information about which sentences were uttered, or perhaps about which beliefs the doctor holds or does not hold concerning my ankle. To make room for the fact that I can reasonably give up the belief that my ankle is not broken in the sprained/broken scenario while retaining the same belief in the sprained scenario, we should then say that, roughly speaking, I consider worlds in which the doctor's diagnosis is correct to be more plausible than ones in which it is mistaken.
At least in this kind of case, the truthmaker approach thus affords a simpler, more direct, and more elegant representation of the case. But more significantly, it also allows us to straightforwardly capture intuitive rational constraints that cannot be captured under the alternative, more complicated model. To see this, note that how I am disposed to revise in the sprained scenario imposes constraints on how I may rationally be disposed to revise in the sprained/broken scenario. In particular, it seems that my beliefs about the health of my ankle should be strictly weaker in the sprained/broken scenario than in the sprained scenario. Under the truthmaker approach, this constraint follows given the logical relationship between the associated tm-updates. For these constitute a pair of the form P and P ∨ (P ∧ Q), and we can show using the principles of Success, Consistency and Subexpansion(∨) that B * P entails B * (P ∨ (P ∧ Q)) whenever P is consistent. 45 But it is hard to see how a similar result could be obtained on the basis of a representation of the situations in terms of the associated pw-updates.
A proposition P is said to (loosely 47) entail (|=) a proposition Q iff every world containing a truthmaker of P contains a truthmaker of Q.
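This definition, too, can be sketched executably over the toy encoding of states and worlds as frozensets of atomic facts; the finite universe of worlds below is assumed purely for illustration.

```python
def contains_verifier(w, V):
    # Does world w contain some member of the verifier set V as a part?
    return any(v <= w for v in V)

def loosely_entails(V, W, worlds):
    # P |= Q iff every world containing a truthmaker of P contains a
    # truthmaker of Q.
    return all(contains_verifier(w, W)
               for w in worlds if contains_verifier(w, V))

worlds = [frozenset(), frozenset({"p"}), frozenset({"q"}), frozenset({"p", "q"})]
P = {frozenset({"p"})}
P_or_PQ = {frozenset({"p"}), frozenset({"p", "q"})}

# P and P ∨ (P ∧ Q) loosely entail each other despite having different
# exact verifiers:
assert loosely_entails(P, P_or_PQ, worlds)
assert loosely_entails(P_or_PQ, P, worlds)
```

This illustrates why loose entailment, unlike exact verification, cannot register the hyperintensional distinctions the account relies on.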
We now turn to the task of defining the class of permissible doxastic states. We do this using a notion of a coherent pair of a plausibility ordering and a transition relation.

Definition 2
A plausibility ordering is a two-place relation ≤ on S_w ∪ { } satisfying the following conditions, where g(X) := {w ∈ X : w ≤ v for all v ∈ X} for all X ⊆ S_w ∪ { }: These are exactly the conditions imposed under the possible worlds approach, except for the added clause dealing with the impossible world. Given any plausibility ordering, we often use B to refer to g(S_w), since this is the set of worlds at which the agent's beliefs are true.

PT-Existence: s →_b t for some t ∈ S
PT-Link: if w ∈ B • {s ⊔ t} then v ≤ w for some v ∈ B • {s}

… for some p′ ∈ P. It then suffices to show that w ≤ w′. Now either w′ ∈ B • P or w′ ∈ B • Q. If w′ ∈ B • P, then w ≤ w′ is immediate from w ∈ g(B • P). So suppose w′ ∈ B • Q, so q′ →_b w′ for some q′ ∈ Q. Since w′ contains p′, by (T-Incorporation), p′ ⊔ q′ →_b w′. Then by (PT-Link), u ∈ B • {p′} for some u ≤ w′, so u ∈ B • P, and hence w ≤ u. By (P-Transitivity), w ≤ w′, as desired.

Definition 3 A transition relation
(R-Success), (R-Vacuity), (R-Inclusion), and (R-Consistency) are the obvious counterparts in our (semantic) setting to the (syntactically formulated) AGM postulates of Success, Vacuity, Inclusion and Consistency. The postulate of Closure serves mainly to ensure intensionality with respect to belief states, which is guaranteed under our account by the identification of belief states with the set of possible worlds at which they are true. The Intensionality postulate, of course, does not hold. Within our semantic setting, the only valid version of this principle is the triviality that B * P = B * Q if P = Q. Under a syntactic formulation of the theory, though, we would have the non-trivial principle that K * α = K * β if α and β are exactly equivalent, i.e. have the same exact truthmakers. 48 Moreover, as expected, all Finean rules from Section 5 except for the intensionalist rule of Substitution are valid under the obvious interpretation of ⇒.
Theorem 2 Let * be the revision function induced by some coherent pair of a plausibility ordering and a transition relation. For any propositions P, Q, let P ⇒ Q hold iff B * P |= Q. Then:

R-Entailment: P ⇒ Q whenever P |= Q

R-Transitivity: If P ⇒ Q and P ∧ Q ⇒ R then P ⇒ R
R-Conjunction: If P ⇒ Q and P ⇒ R then P ⇒ Q ∧ R
R-Disjunction: If P ⇒ R and Q ⇒ R then P ∨ Q ⇒ R

Proof (R-Entailment) and (R-Conjunction) are immediate from the definition of ⇒ and the fact that the |=-consequences of a proposition are closed under conjunction. (R-Disjunction) is immediate from the observation that g(X ∪ Y) ⊆ g(X) ∪ g(Y). (R-Transitivity): Assume B * P |= Q and B * (P ∧ Q) |= R. If P is inconsistent, P ⇒ R follows immediately given (Success). So suppose P is consistent. Then B * P ⊆ S_w. So let w ∈ B * P, and suppose p →_b w with p ∈ P and b ∈ B. We need to show that w contains some r ∈ R. Since B * P |= Q, w contains some q ∈ Q. Then p ⊔ q is consistent. By (T-Incorporation), p ⊔ q →_b w, so w ∈ B • (P ∧ Q). Now let v ∈ B • (P ∧ Q), and let p′ ∈ P and q′ ∈ Q be such that v ∈ B • {p′ ⊔ q′}. By (PT-Link), u ∈ B • {p′} and hence u ∈ B • P for some u ≤ v. But since w ∈ B * P, w ≤ u and hence w ≤ v. So w ∈ B * (P ∧ Q). Since B * (P ∧ Q) |= R, w contains some r ∈ R, as desired.

48 On the logic of this equivalence relation, see [3, 9, 24].

… regular. For (3), note that if s ⊔ t →_b w with w consistent, then s ⊔ t is consistent, hence so is s, and so by consistency so is v with s →_b v.
We now show that our doxastic state satisfies the assumptions of the domino case under their obvious interpretation. Since some of these assumptions concern negated propositions, we will move to a bilateral conception of propositions, as pairs of a set of truthmakers and a set of falsitymakers. We shall take the revision of a belief state by a bilateral proposition to be simply the revision by its set of truthmakers, so our overall account of revision is not changed.
More precisely, we call a bilateral proposition P any pair of unilateral propositions. The first (second) coordinate of P is denoted by P+ (P−) and comprises the truthmakers (falsitymakers) of P. Let P ∧ Q = ⟨P+ ∧ Q+, P− ∨ Q−⟩, P ∨ Q = ⟨P+ ∨ Q+, P− ∧ Q−⟩, and ¬P = ⟨P−, P+⟩. P is said to be exhaustive iff every w ∈ S_w contains either a member of P+ or a member of P− as a part, and it is said to be exclusive iff no w ∈ S_w contains both a member of P+ and a member of P− as a part. Both properties can be shown to be preserved under the boolean operations, and it can also be shown that the logic of loose entailment over exclusive and exhaustive propositions is classical (cf. [10, pp. 665ff]). Now let F_n = ⟨{f_n}, {f̄_n}⟩ and B = g(S_w). Note that F_n is always exclusive and exhaustive. Let P ⇒ Q hold iff B * P+ |= Q+, and let ⇒ Q hold iff B |= Q+. Then:
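The bilateral operations just defined can be sketched executably; the encoding of states as frozensets of tagged atoms is an illustrative assumption of mine.

```python
from itertools import product

# Bilateral propositions as pairs (truthmakers, falsitymakers).
def fuse_all(V, W):
    return {v | w for v, w in product(V, W)}

def conj(P, Q):   # P ∧ Q = ⟨P+ ∧ Q+, P− ∨ Q−⟩
    return (fuse_all(P[0], Q[0]), P[1] | Q[1])

def disj(P, Q):   # P ∨ Q = ⟨P+ ∨ Q+, P− ∧ Q−⟩
    return (P[0] | Q[0], fuse_all(P[1], Q[1]))

def neg(P):       # ¬P = ⟨P−, P+⟩
    return (P[1], P[0])

def F(n):         # F_n = ⟨{f_n}, {f̄_n}⟩, with tagged atoms for f_n, f̄_n
    return ({frozenset({("fall", n)})}, {frozenset({("stand", n)})})

# Negation merely flips the two sets, so double negation is literal
# identity, and De Morgan holds by construction:
assert neg(neg(F(1))) == F(1)
assert neg(conj(F(1), F(2))) == disj(neg(F(1)), neg(F(2)))
```

Revision by a bilateral proposition, as stated in the text, simply ignores the falsitymaker coordinate and revises by the first component.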