
Two Approaches to Belief Revision

Abstract

In this paper, we compare and contrast two methods for the revision of qualitative (viz., “full”) beliefs. The first (“Bayesian”) method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent’s posterior credences after conditionalization. The second (“Logical”) method is the orthodox AGM approach to belief revision. Our primary aim is to determine when the two methods may disagree in their recommendations and when they must agree. We establish a number of novel results about their relative behavior. Our most notable (and mysterious) finding is that the inverse of the golden ratio emerges as a non-arbitrary bound on the Bayesian method’s free parameter—the Lockean threshold. This “golden threshold” surfaces in two of our results and turns out to be crucial for understanding the relation between the two methods.


Figs. 1–3 (images omitted).

Notes

  1.

    For the purposes of this paper, we remain neutral on ontological questions regarding the relationship between credences and beliefs. Although we are inclined towards a pluralistic approach admitting the existence and independence of both types of cognitive attitudes, none of the content of this paper requires the adoption of any particular view about their existence or relative fundamentality.

  2.

    We will continue to follow the conventions of letting \(\star \) denote an arbitrary belief revision operator, \({\mathbf {B}}\) denote the agent’s prior belief set, and \({\mathbf {B}}'\) denote the agent’s posterior belief set.

  3.

    Readers who are already familiar with the literature on belief revision will likely recognize that this update procedure on credences is the credal analogue of the qualitative update operation known as expansion. Qualitatively, an expansion is performed when an agent simply adds a proposition to her stock of beliefs. As such, expansions capture updates by propositions that are consistent with her prior belief set. Revisions, on the other hand, capture the more general case in which there is no guarantee that the new proposition is consistent with the agent’s priors. Since conditionalization is undefined when the learned proposition receives a prior probability of zero, it may be seen as the credal analogue of qualitative expansion. This point is relevant to the current application because the novel broadly-Bayesian approach to qualitative revision will be driven by credal expansion. Nonetheless, the new revision operator may be aptly viewed as a qualitative revision operator since it permits revision by a proposition that is logically (viz. qualitatively) inconsistent with the agent’s prior belief set.
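To make the credal side of this analogy concrete, here is a minimal sketch (in Python) of strict conditionalization together with the Lockean belief set it induces. The three-world model, the prior credences, and the threshold are invented purely for illustration; the paper itself works with abstract credence functions.

```python
from itertools import chain, combinations

# A toy three-world model (illustrative assumption, not from the paper).
prior = {"w1": 0.5, "w2": 0.25, "w3": 0.25}

def prob(b, X):
    """Credence in proposition X (a set of worlds) under credence function b."""
    return sum(p for w, p in b.items() if w in X)

def conditionalize(b, E):
    """Strict conditionalization on E; undefined when b(E) = 0."""
    pE = prob(b, E)
    if pE == 0:
        raise ValueError("cannot conditionalize on zero-probability evidence")
    return {w: (p / pE if w in E else 0.0) for w, p in b.items()}

def lockean_beliefs(b, t):
    """The Lockean belief set: every proposition X with b(X) >= t."""
    worlds = list(b)
    subsets = chain.from_iterable(
        combinations(worlds, k) for k in range(len(worlds) + 1))
    return {frozenset(X) for X in subsets if prob(b, set(X)) >= t}

t = 0.75
E = {"w1", "w2"}
posterior = conditionalize(prior, E)          # the credal "expansion" by E
prior_beliefs = lockean_beliefs(prior, t)     # B
new_beliefs = lockean_beliefs(posterior, t)   # B revised by E
```

On this toy model the proposition {w1, w3} is believed before revision (credence 0.75) but not after, even though it is consistent with the evidence E—a small illustration of the Preservation failures discussed later in the paper.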

  4.

    In an earlier draft, we had noted that the results hold provided the stronger requirements—which hold only for strict conditionalization—that (1) \(b'(E) > b(E)\), (2) \(b'(E) \ge t\) (where t is the agent’s Lockean threshold), and (3) \(b(E \supset X) \ge b'(X)\). However, Genin (2017) has generalized some of our more interesting results to the case of Jeffrey conditionalization by showing that they hold given the weaker pair of conditions mentioned above.

  5.

    Typically, it is required only that \(t\in (\frac{1}{2},1]\). However, we will see shortly that there are compelling diachronic reasons to further constrain this value-range for the Lockean threshold.

  6.

    Those interested in this aspect of the debate are invited to consult Christensen (2004), Foley (1992), and Easwaran and Fitelson (2015).

  7.

    Despite this criticism, we should note that some authors have argued in favor of adopting an extremal Lockean threshold, e.g. see Levi (1973) or, more recently, Dodd (2017).

  8.

    While discussion of Weak Preservation has primarily been provided in the literature surrounding the Ramsey test for conditionals (e.g. see Gärdenfors 1986; Rabinowicz 1995, or Levi 1996), it is interesting to see that this principle offers a contrastive case between the two revision methods considered here.

  9.

    Indeed, it is entailed by AGM’s characteristic postulate, Preservation. As we progress, we will see that Preservation plays a crucial role in understanding the differences between AGM and Lockean revision.

  10.

    We omit the proof since Lemma 1 offers a straightforward recipe for the construction of a counterexample. Simply let \(b(E)=b(X)=t\) so that \(E,X\in {\mathbf {B}}\), then by the lemma, we have a lower bound of \(\frac{2t-1}{t}\) for \(b(X{\mathbin {|}}E)\). To conclude, simply let \(t\ne 1\) so that \(\frac{2t-1}{t}<t\) and assign \(b(X{\mathbin {|}}E)\) its lower bound. Thus, we have a case where \(E,X\in {\mathbf {B}}\), but \(X\notin {\mathbf {B}}\divideontimes E\) and so \({\mathbf {B}}\nsubseteq {\mathbf {B}}\divideontimes E\).
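For concreteness, the footnote’s recipe can be checked numerically. The sketch below (Python, with the illustrative choice \(t=0.8\)) constructs a genuine probability function realizing the counterexample: \(b(E)=b(X)=t\) while \(b(X{\mathbin {|}}E)\) sits at its lower bound \(\frac{2t-1}{t}\).

```python
# Concrete instance of the footnote's recipe, with t = 0.8 chosen for
# illustration. Credences over the four state descriptions are engineered
# so that b(E) = b(X) = t while b(X|E) takes its lower bound (2t-1)/t.
t = 0.8
b = {
    ("E", "X"): 0.6,     # = 2t - 1
    ("E", "~X"): 0.2,    # = 1 - t
    ("~E", "X"): 0.2,    # = 1 - t
    ("~E", "~X"): 0.0,
}

b_E = b[("E", "X")] + b[("E", "~X")]    # b(E) = 0.8 = t
b_X = b[("E", "X")] + b[("~E", "X")]    # b(X) = 0.8 = t
b_X_given_E = b[("E", "X")] / b_E       # = (2t-1)/t = 0.75 < t

assert abs(sum(b.values()) - 1.0) < 1e-9   # a genuine probability function
assert b_E >= t and b_X >= t               # so E and X are both in B
assert b_X_given_E < t                     # but X drops out after revision by E
```

Since \(b(X{\mathbin {|}}E) = 0.75 < 0.8 = t\), we have \(X\notin {\mathbf {B}}\divideontimes E\) despite \(X\in {\mathbf {B}}\), just as the footnote describes.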

  11.

    We thank Hans Rott for this interesting observation.

  12.

    Since neither Lockean revision nor AGM permits belief in both a proposition and its negation after a revision, both approaches will regard Very Weak Preservation as strictly weaker than Weak Preservation.

  13.

    We thank Kenny Easwaran for this helpful observation.

  14.

    The decision to rely on sets of sentences to capture belief sets is largely a matter of historical accident and reflects Gärdenfors’s desire to avoid the use of possible worlds, which he viewed as philosophically suspect. Our choice to accord with this convention is made solely out of desire to remain consonant with the most common presentation of the theory.

  15.

    There are various ways to measure the distance between belief sets. The AGM postulates will follow from Conservativity for a very wide variety of such distance measures/geodesics (Georgatos 2009; Rodrigues and Gabbay 2011).

  16.

    Although it is conventional to claim that Conservativity provides the normative foundation for AGM, the statement that we have provided is not universally accepted. In particular, Rott (2000) has challenged this idea, suggesting that (1) and (2) are the fundamental requirements.

  17.

    Idempotence follows immediately from AGM’s Preservation axiom along largely the same lines as the Weak Preservation principle discussed in the previous section. On the other hand, Lockean revision satisfies Idempotence as a trivial consequence of the fact that \(b(\cdot {\mathbin {|}}\top )=b(\cdot )\); this straightforwardly implies that \({\mathbf {B}}\divideontimes \top ={\mathbf {B}}\).

  18.

    The arguments supporting this conclusion are provided in fn. 22 and 24.

  19.

    It is roughly along these lines that Rott (1999, 2001) has convincingly argued that AGM imposes synchronic coherence requirements in addition to diachronic ones. Rott actually argues for the still stronger conclusion that AGM also imposes dispositional requirements governing iterated revisions. However, since our current aim is only to contrast the recommendations of AGM and Lockean revision with respect to single revisions, attending to such considerations is beyond the intended scope of this paper. Thus, we will leave discussion of coherence requirements for iterated revision for future work.

  20.

    Aside from the axiomatic presentation that we rely on, AGM is well-known to be equivalent to structures provided in terms of revision based on a “selection function” (Alchourrón et al. 1985), a particular kind of entrenchment ordering (Gärdenfors and Makinson 1988), a Lewisian system of spheres (Grove 1988), the rational consequence relation of non-monotonic logic (Lehmann and Magidor 2002), the probability one part of a Popper function (Harper 1975), or in terms of minimal change updating (Georgatos 2009; Rodrigues and Gabbay 2011).

  21.

    As it turns out, the basic postulates (*1)–(*6) provide an axiomatization of partial-meet revision operators, which can be thought of as emerging from the minimally mutilating revision of some prior belief set \({\mathbf {B}}\) in accord with an entrenchment ordering on propositions. The addition of the supplementary postulates, (*7) and (*8), yields a characterization of a special class of partial meet revision operators whose entrenchment orderings are transitive. See Gärdenfors and Rott (1995) for an overview of the various ways of characterizing AGM belief revision operators. Finally, one can interpret these axioms more generally, in terms of a generalized entailment relation (which may be non-classical). For simplicity, we will assume a classical entailment relation here. What we say below can be generalized to non-classical (e.g., substructural) entailment relations.

  22.

    Consider the non-closed, but consistent (initial) belief set \({\mathbf {B}} = \{P,Q\}\), where P and Q are independent, contingent (atomic) claims. Closure implies that \({\mathbf {B}} * \top \) is closed. Thus, according to AGM, if an agent starts out with the prior belief set \({\mathbf {B}}\) and then revises by a tautology, they must (as a result of this “revision”) come to believe the (contingent) conjunction \(P\wedge Q\) (since, otherwise, the closure of \({\mathbf {B}} * \top \) will not be ensured). But, it is counter-intuitive that “learning” a tautology should provide an agent with a conclusive reason to accept a contingent claim. This drives home the point that AGM really needs to presuppose closure as a standing, synchronic constraint on all belief sets. See fn. 24 for a similar argument regarding consistency.

  23.

    In the original Gärdenfors postulates, (*4) is Vacuity, which says: if \({\mathbf {B}}\nvdash \lnot E\), then \({\mathbf {B}}*E\supseteq {{\mathsf {C}}}{{\mathsf {n}}}({\mathbf {B}}\cup \{E\})\). However, in the presence of the other postulates, Vacuity is equivalent to Preservation. To remain consonant with our previous discussion of Preservation’s variants, we prefer to rely on this alternative.

  24.

    Consider the closed, but inconsistent (initial) belief set \({\mathbf {B}} = \{P,\lnot P, \top , \bot \}\), where P is a contingent (atomic) claim. Consistency implies that \({\mathbf {B}} * \top \) is consistent. Thus, according to AGM, if an agent starts out with the prior belief set \({\mathbf {B}}\) and then revises by a tautology, they must (as a result of this “revision”) abandon either their belief in P or their belief in \(\lnot P\) (since, otherwise, the consistency of \({\mathbf {B}} * \top \) will not be ensured). But, it is counter-intuitive that “learning” a tautology should provide an agent with a conclusive reason to drop one of their contingent beliefs. This drives home the point that AGM theory really needs to presuppose consistency as a standing, synchronic constraint on all belief sets.

  25.

    For completeness we include statements of (*7) and (*8) below:

    1. (*7)

      \({\mathbf {B}}*(E\wedge E')\subseteq {{\mathsf {C}}}{{\mathsf {n}}}(({\mathbf {B}}*E)\cup \{E'\})\qquad \qquad \qquad \qquad \qquad \qquad {\mathbf{{Superexpansion}}}\)

    2. (*8)

      \(\hbox {If } {\mathbf {B}}*E\nvdash \lnot E', \hbox { then } {\mathbf {B}}*(E\wedge E') \supseteq {{\mathsf {C}}}{{\mathsf {n}}}(({\mathbf {B}}*E)\cup \{E'\})\qquad \qquad \qquad {\mathbf{{Subexpansion}}}\)

  26.

    In classical probability theory, conditionalization is undefined when the proposition that the agent conditionalizes on is assigned zero prior probability. For this reason, we must assume that our agents only learn things to which they assign non-zero credence. However, Gärdenfors’s Theorem generalizes to accommodate such cases when the agent’s credences are represented by Popper functions (Harper 1975; Makinson and Hawthorne 2015). Such generalizations allow for Bayesian style modeling of agents who learn propositions with zero credence.

  27.

    We assume that \(b(E) >0\) for all potential pieces of evidence—see fn. 26.

  28.

    Genin (2017) shows that this result generalizes to Jeffrey conditionalization, since it relies only on the conditional claim \(b(E \supset X) \ge t \Rightarrow b(X {\mathbin {|}}E) \ge t\), which is satisfied by both strict and Jeffrey conditionalization.

  29.

    Again, we are grateful to Genin and Kelly for pointing out that, on the face of it, this fact suggests that Lockeanism is committed to a kind of deductivism regarding ampliative inference. After all, you might think that if any proposition newly learned by a Lockean could have been learned by deduction using their new evidence \(+\) their old beliefs, then the inductive apparatus does not play an essential (viz., ineliminable) role in learning. However, this inference would be too fast. As we will see shortly, acquiring new evidence can undermine a Bayesian agent’s old beliefs and, thus, render them unfit for use in deductive inference.

  30.

    While many are sympathetic to Preservation, the literature (primarily on epistemic conditionals) contains a number of arguments against the principle, e.g. in Gärdenfors (1986), Rabinowicz (1995), Levi (1996), and Costa (1990). More recently (and more directly in the context of belief revision), Lin and Kelly (2012) have independently argued against Preservation on the basis of their own broadly Bayesian account of revision. However, though motivated by similar considerations, their alternative remains distinct from the Lockean approach and is instead based on odds-ratio thresholds rather than the Lockean’s conditional probability thresholds. Specifically, Lin and Kelly’s revision procedure (LK revision) differs from Lockean revision in two respects: (a) LK revision is partition-sensitive, and (b) LK permits agents to believe propositions in which they have arbitrarily low credence. Ultimately, the underlying reason that LK revision deviates in these ways from Lockean revision derives from their adoption of Cogency as a universal requirement of rational belief. Though we differ over the ultimate standing of Cogency, we remain sympathetic to the objections that they provide against AGM. Nonetheless, we take our counter-examples to be more direct and probative in appreciating the fundamental issues.

  31.

    This lemma and the route it provides towards establishing Theorem 1 was pointed out to us by Konstantin Genin in personal correspondence.

  32.

    Jonathan Weisberg is owed credit for first establishing this crucial theorem in personal correspondence; the simplified proof strategy that employs Lemma 2, however, is owed to Konstantin Genin.

  33.

    In fact, this motivation is explicitly offered by many AGM theorists including Gärdenfors himself in Gärdenfors (1988, p. 21).

  34.

    It is important to note that our treatment will assume that belief is the only qualitative attitude for which agents receive any epistemic utility. Accordingly, we treat suspension of belief as nothing more than lacking belief in both the proposition and its negation, and assign suspension of belief neither positive nor negative value.

  35.

    It is worth noting that a similar result is also proved (independently) in Easwaran (2016), although Easwaran’s applications of his result are quite different from Dorst’s. Historically, this method of deriving Lockean constraints traces back to the work of Hempel (1962).

  36.

    Conditionalization can itself be given a justification using epistemic utility theory—e.g. see Greaves and Wallace (2006).

  37.

    Lockeanism’s risk-aversion (in this sense) should not be wholly surprising, since it is driven by the expected utility calculus via a concave utility function u. Nonetheless, it is interesting to see that, from multiple perspectives, non-extremal Lockean revision is more risk-averse than AGM.

  38.

    That said, insofar as we have relied on conditionalization to define \(\divideontimes \), there is a problem with conditionalizing on any proposition assigned a prior probability of 0 (fn. 26).

  39.

    As we mentioned in the introduction, all of the results we reported here will continue to hold for any mechanical/minimal change Bayesian credal update procedure that satisfies the following two constraints: (1) \(b'(E) > b(E)\), (2) if \(b'(X) \ge t\), then \(b(E\supset X)\ge t\). It would be nice to explore these (and other) non-standard Bayesian updating procedures in conjunction with Lockeanism. In particular, Ben Eva has suggested to us the prospects of investigating “Lockean update”, which relies on the Bayesian version of imaging developed by Joyce (2010) in contrast with Katsuno and Mendelzon’s (1991) procedure for update.

  40.

    The basic idea behind our approach to “contracting a Bayesian belief set \({\mathbf {B}}\) on proposition E” would involve (a) defining \(b'\) as the closest probability function to b such that \(b'(E) \le t\), and then (b) checking which propositions X are such that \(b'(X) > t\). The set \({\mathbf {B}} \div E :=\{X \mathbin {|}b'(X) > t\}\) would be our (initial) explication of what it means to “contract a Bayesian agent’s belief set \({\mathbf {B}}\) on proposition E”.
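As a rough illustration of step (a), the sketch below (in Python) adopts, purely by assumption, one simple stand-in for “the closest probability function”: mix b with its conditionalization on not-E just enough to bring the credence in E down to t. Nothing in the footnote fixes this notion of distance; this is only one candidate among the measures/geodesics mentioned in fn. 15, and the numbers are invented for illustration.

```python
def contract(b, E, t):
    """A toy Bayesian contraction on E over a finite world space.

    'Closest probability function' is modelled here, purely by assumption,
    as the minimal mixture of b with its conditionalization on not-E that
    brings the credence in E down to the threshold t.
    """
    pE = sum(p for w, p in b.items() if w in E)
    p_notE = 1.0 - pE
    if p_notE == 0:
        raise ValueError("b(not-E) = 0: this toy contraction is undefined")
    # Minimal mixing weight so that b'(E) = (1 - lam) * b(E) <= t.
    lam = max(0.0, 1.0 - t / pE) if pE > 0 else 0.0
    return {w: (1.0 - lam) * p + lam * (0.0 if w in E else p / p_notE)
            for w, p in b.items()}

# Invented illustrative numbers: E starts out believed, since b(E) = 0.75 > t.
b = {"w1": 0.5, "w2": 0.25, "w3": 0.25}
t = 0.625
E = {"w1", "w2"}
b_prime = contract(b, E, t)
# b'(E) now sits exactly at t, so E no longer clears the strict bar b'(X) > t.
```

After contraction, \(b'(E) = t\), so E fails the strict condition \(b'(X) > t\) and drops out of \({\mathbf {B}} \div E\), as the footnote’s explication intends.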

  41.

    Although Leitgeb actually discusses these requirements as requirements on conditional belief—viz. belief given some proposition—it is unproblematic to translate his account of conditional belief into an account of belief revision. To remain consistent in our notation, we will explain his account in the latter terms; however, nothing rests on this. For a helpful overview of conditional belief, see Edgington (1995).

  42.

    This includes not only the basic postulates (*1)–(*6) on which we have focused, but also the supplementary postulates (*7) and (*8) mentioned in fn. 25.

  43.

    This is easily established by first noting that \(\circ \) satisfies Idempotence. Then, observe that Preservation can be inferred from General Revision by letting \(X=\top \).

  44.

    We have been assuming throughout the paper that an agent’s Lockean threshold \(t\) remains constant throughout learning events. By allowing t to change as a result of conditionalization on evidence, Leitgeb is able to ensure that all of the AGM principles (viz., Preservation) are preserved by his (variable-threshold) probabilistic update procedure. Our main Theorem reveals that even a constant-threshold approach to Lockean updating must satisfy all of the AGM postulates—provided only that the agent’s threshold falls within a particular range, viz., \(t\in (\frac{1}{2},\phi ^{-1}]\). We leave a more general study of the properties of variable-threshold Lockean updating procedures to future work.
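The algebra behind the golden threshold can be checked directly: \(\phi ^{-1}\) is the positive root of \(t^2 + t = 1\). The sketch below (Python) verifies only this defining identity and a simple consequence for thresholds in the range \((\frac{1}{2},\phi ^{-1}]\); it is a numerical sanity check, not a proof of the Theorem, and the sampled thresholds are arbitrary.

```python
import math

# phi is the golden ratio; its inverse is the "golden threshold".
phi = (1 + math.sqrt(5)) / 2
golden_threshold = 1 / phi                      # ~ 0.618

# phi^{-1} is the positive root of t^2 + t = 1 (equivalently t^2 = 1 - t),
# the identity that marks it off as a non-arbitrary boundary value.
assert abs(golden_threshold**2 + golden_threshold - 1.0) < 1e-12

# Thresholds in the range (1/2, phi^{-1}] all satisfy t^2 <= 1 - t, with
# equality exactly at the golden threshold.
for t in (0.51, 0.55, 0.6, golden_threshold):
    assert 0.5 < t <= golden_threshold
    assert t**2 <= 1.0 - t + 1e-12
```

Equivalently, \(\phi ^{-1} = \frac{\sqrt{5}-1}{2}\), so the admissible constant-threshold range has a closed-form upper endpoint.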

References

  1. Alchourrón, C., Gärdenfors, P., & Makinson, D. (1985). On the logic of theory change: Partial meet contraction and revision functions. The Journal of Symbolic Logic, 50(2), 510–530.

  2. Christensen, D. (2004). Putting logic in its place. Oxford: Oxford University Press.

  3. Costa, H. L. A. (1990). Conditionals and monotonic belief revisions: The success postulate. Studia Logica, 49, 557–566.

  4. Darwiche, A., & Pearl, J. (1996). On the logic of iterated belief revision. Artificial Intelligence, 89, 1–29.

  5. Dodd, D. (2017). Belief and certainty. Synthese, 194(11), 4597–4621.

  6. Dorst, K. (2014). An epistemic utility argument for the threshold view of outright belief (Forthcoming).

  7. Easwaran, K. (2016). Dr. Truthlove, or how I learned to stop worrying and love Bayesian probabilities. Noûs, 50(4), 816–853.

  8. Easwaran, K., & Fitelson, B. (2015). Accuracy, coherence, and evidence. Oxford Studies in Epistemology, 5, 61–96.

  9. Edgington, D. (1995). On conditionals. Mind, 104(414), 235–329.

  10. Foley, R. (1992). Working without a net. Oxford: Oxford University Press.

  11. Genin, K. (2017). How inductive is Bayesian conditioning? (Unpublished manuscript).

  12. Georgatos, K. (2009). Geodesic revision. Journal of Logic and Computation, 19(3), 447–459.

  13. Greaves, H., & Wallace, D. (2006). Justifying conditionalization: Conditionalization maximizes expected epistemic utility. Mind, 115(459), 607–632.

  14. Grove, A. (1988). Two modellings for theory change. Journal of Philosophical Logic, 17(2), 157–170.

  15. Gärdenfors, P. (1986). Belief revisions and the Ramsey test for conditionals. The Philosophical Review, 95(1), 81–93.

  16. Gärdenfors, P. (1986). The dynamics of belief: Contractions and revisions of probability functions. Topoi, 5, 29–37.

  17. Gärdenfors, P. (1988). Knowledge in flux. Cambridge: MIT Press.

  18. Gärdenfors, P., & Rott, H. (1995). Belief revision. Handbook of logic in artificial intelligence and logic programming (Vol. 4, pp. 35–132). Oxford: Oxford University Press.

  19. Gärdenfors, P., & Makinson, D. (1988). Revision of knowledge systems using epistemic entrenchment. In TARK ’88 Proceedings of the second conference of theoretical aspects of reasoning about knowledge (Vol. 3, pp. 83–95).

  20. Harper, W. (1975). Rational belief change, popper functions and counterfactuals. Synthese, 30(1–2), 221–262.

  21. Hawthorne, J. (2005). The case for closure. In M. Steup & E. Sosa (Eds.), Contemporary debates in epistemology (pp. 26–42). Oxford: Blackwell.

  22. Hempel, C. (1962). Deductive-nomological vs. statistical explanation. Minnesota Studies in the Philosophy of Science, 3, 98–169.

  23. James, W. (1896). The will to believe. The New World, 5, 327–347.

  24. Joyce, J. M. (2010). Causal reasoning and backtracking. Philosophical Studies, 147(1), 139–154.

  25. Katsuno, H., & Mendelzon, A. O. (1991). On the difference between updating a knowledge base and revising it. In J. Allen, R. Fikes, & E. Sandewall (Eds.), Principles of knowledge representation and reasoning: Proceedings of the second international conference (KR ’91) (pp. 387–394). Burlington: Morgan Kaufmann.

  26. Kyburg, H. (1961). Probability and the logic of rational belief. Middletown: Wesleyan University Press.

  27. Lehmann, D., & Magidor, M. (2002). What does a conditional knowledge base entail? Journal of Artificial Intelligence, 44(1), 167–207.

  28. Leitgeb, H. (2013). The review paradox: On the diachronic costs of not closing rational belief under conjunction. Noûs, 78(4), 781–793.

  29. Leitgeb, H. (2014). The stability theory of belief. Philosophical Review, 123(2), 131–171.

  30. Leitgeb, H. (2016). Stability theory of belief. Oxford: Oxford University Press.

  31. Levi, I. (1973). Gambling with truth: An essay on induction and the aims of science. Cambridge: MIT Press.

  32. Levi, I. (1996). For the sake of argument. Cambridge: Cambridge University Press.

  33. Lin, H., & Kelly, K. T. (2012). Propositional reasoning that tracks probabilistic reasoning. Journal of Philosophical Logic, 41(6), 957–981.

  34. Makinson, D., & Hawthorne, J. (2015). Lossy inference rules and their bounds: A brief review. In A. Koslow, & A. Buchsbaum (Eds.), The road to universal logic. Studies in universal logic. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-10193-4_18.

  35. Pettigrew, R. (2016). Jamesian epistemology formalised: An explication of ‘The Will to Believe’. Episteme, 13(3), 253–268.

  36. Pettigrew R. (2016). Epistemic utility arguments for probabilism. In: E. Zalta (Ed.) The Stanford encyclopedia of philosophy (Spring 2016 Edition). http://plato.stanford.edu/archives/spr2016/entries/epistemic-utility/.

  37. Rabinowicz, W. (1995). Stable revision, or is preservation worth preserving? In A. Fuhrmann, & H. Rott (Eds.) Logic, action and information: Essays on logic in philosophy and artificial intelligence (pp. 101–128). Berlin.

  38. Rodrigues, O., Gabbay, D. M., & Russo, A. (2011). Belief revision. In D. Gabbay & F. Guenthner (Eds.), Handbook of philosophical logic v. 16 (pp. 1–114). New York, NY: Springer.

  39. Rott, H. (1999). Coherence and conservatism in the dynamics of belief part I: Finding the right framework. Erkenntnis, 50, 387–412.

  40. Rott, H. (2000). Two dogmas of belief revision. Journal of Philosophy, 97(9), 503–522.

  41. Rott, H. (2001). Change, choice, and inference. Oxford: Oxford University Press.

  42. Steinberger, F. (2016). Explosion and the normativity of logic. Mind, 125(498), 385–419.

  43. van Eijck, J., & Renne, B. (2014). Belief as willingness to bet. http://arxiv.org/abs/1412.5090.


Author information

Correspondence to Ted Shear.

Additional information

This paper is very much the result of collaborative efforts by many contributors in our community. However, there are a few people whose contributions stand out. First and foremost, we owe the utmost thanks to Jonathan Weisberg, who is largely responsible for first establishing Theorem 1 and discovering the Golden Threshold. Due to the import of his result, we view Jonathan as a third co-author despite his modest insistence that his contribution does not warrant this. Additionally, special thanks are owed to Konstantin Genin for supplying us with an improved proof of Jonathan’s result and to Hans Rott for his generous and helpful comments on late drafts of this work. We would also like to thank audiences at Tilburg, Boulder, Maryland, Helsinki, Kent, Taipei, Toronto, Bristol, and Groningen for stimulating discussions. Individually, we must further single out Luc Bovens, Kenny Easwaran, Justin Fisher, Haim Gaifman, Konstantinos Georgatos, Jeremy Goodman, Kevin Kelly, Hannes Leitgeb, Isaac Levi, Hanti Lin, Eric Pacuit, Rohit Parikh, Richard Pettigrew, Eric Raidl, Jonah Schupbach, Teddy Seidenfeld, and Julia Staffel for providing useful feedback.


About this article


Cite this article

Shear, T., Fitelson, B. Two Approaches to Belief Revision. Erkenn 84, 487–518 (2019). https://doi.org/10.1007/s10670-017-9968-1
