Abstract
In a recent paper, Pettigrew (Philos Stud, 2019. https://doi.org/10.1007/s11098-019-01377-y) argues that the pragmatic and epistemic arguments for Bayesian updating are based on an unwarranted assumption, which he calls deterministic updating, and which says that your updating plan should be deterministic. In that paper, Pettigrew did not consider whether the symmetry arguments due to Hughes and van Fraassen make the same assumption (Hughes and van Fraassen in: Proceedings of the Biennial Meeting of the Philosophy of Science Association. pp. 851–869, 1984; van Fraassen in: Rescher N (ed) Scientific inquiry in philosophical perspective. University Press of America, Lanham, pp. 183–223, 1987). In this note, I show that they do.
According to Bayesians, when I learn a proposition to which I assign a positive credence, I should plan to update my credences so that my new unconditional credence in a proposition is my old conditional credence in that proposition conditional on the proposition I learned. In other words, if P is my credence function before I learn E, and \(P^\star \) is the credence function I plan to adopt in response to learning E, and \(P(E) > 0\), then it ought to be the case that, for all X in \({\mathcal{F}}\),
$$P^\star (X) = P(X \mid E).$$
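As a concrete illustration of the norm (my own sketch, not part of the original text; the helper names are hypothetical):

```python
# A toy illustration of the Bayesian updating norm (helper names are
# hypothetical, not from the paper). Credence functions are dicts from
# worlds to probabilities; propositions are sets of worlds.

P = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}

def credence(P, X):
    """P's credence in proposition X (a set of worlds)."""
    return sum(p for w, p in P.items() if w in X)

def conditionalize(P, E):
    """Planned posterior on learning E: P*(w) = P(w)/P(E) for w in E."""
    pE = credence(P, E)
    assert pE > 0, "the norm applies only when P(E) > 0"
    return {w: (p / pE if w in E else 0.0) for w, p in P.items()}

E = {"w1", "w2"}                     # the evidence learned
X = {"w1", "w3"}                     # an arbitrary proposition
P_star = conditionalize(P, E)
# New unconditional credence in X = old conditional credence P(X | E):
assert abs(credence(P_star, X) - credence(P, X & E) / credence(P, E)) < 1e-12
```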
There are many arguments for this Bayesian norm of updating. Some pay attention to the pragmatic costs of updating any other way (Brown 1976; Lewis 1999); others pay attention to the epistemic costs, which are spelled out in terms of the inaccuracy of the credences that result from the updating plans (Oddie 1997; Greaves and Wallace 2006; Briggs and Pettigrew 2018); some show that updating as the Bayesian requires, and only updating that way, preserves as much as possible about the prior credences while still respecting the new evidence (Diaconis and Zabell 1982; Dietrich et al. 2016). And then there are the symmetry arguments that are our focus here (Hughes and van Fraassen 1984; van Fraassen 1987; Grove and Halpern 1998).
In a recent paper, I argued that the pragmatic and epistemic arguments for Bayesian updating are based on an unwarranted assumption, which I called deterministic updating, and which says that your updating plan should be deterministic (Pettigrew 2019). An updating plan specifies how you’ll update in response to a specific piece of evidence. Such a plan is deterministic if there’s a single credence function that it says you’ll adopt in response to that evidence, rather than a range of different credence functions that you might adopt in response. That is, if E is a proposition you might learn, deterministic updating says that your plan for responding to receiving E as evidence should take the form:
If I learn E, I’ll adopt \(P^\star \).
It should not take the form:
If I learn E, I’ll adopt \(P^\star \) or I’ll adopt \(P^\dag \) or ... or I’ll adopt \(P^\circ \).
In that paper, I did not consider whether the symmetry arguments due to Hughes and van Fraassen make the same assumption. In this note, I show that they do.
1 The symmetry argument for conditionalization
Let’s start by laying out the symmetry argument. Suppose \({\mathcal{W}}\) is a set of possible worlds, and \({\mathcal{F}}\) is an algebra over \({\mathcal{W}}\). Then an updating plan on \({\mathcal{M}}= ({\mathcal{W}}, {\mathcal{F}})\) is a function \({\mathbf{U}}^{\mathcal{M}}\) that takes a credence function P defined on \({\mathcal{F}}\) and a proposition E in \({\mathcal{F}}\) and returns the set of credence functions that the updating plan endorses as responses to learning E for those with credence function P. A family of updating plans contains an updating plan for each \({\mathcal{M}}= ({\mathcal{W}}, {\mathcal{F}})\). Then we impose four conditions on a family of updating plans \({\mathbf{U}}\).
Coherence
An updating plan should take a probabilistic credence function and a proposition and return a set of probabilistic credence functions.
More precisely: If P is a probabilistic credence function, and \(P^\star \) is in \({\mathbf{U}}^{\mathcal{M}}(P, E)\), then \(P^\star \) is a probabilistic credence function.
Deterministic Updating
An updating plan should endorse only one credence function as a correct response to learning a piece of evidence to which the prior assigned positive credence.
More precisely: If \(P(E) > 0\), then \(|{\mathbf{U}}^{\mathcal{M}}(P, E)| = 1\).
Certainty
Any credence function that an updating plan endorses as a response to learning E should be certain of E.
More precisely: If \(P^\star \) is in \({\mathbf{U}}^{\mathcal{M}}(P, E)\), then \(P^\star (E) = 1\).
Symmetry
The way that an updating plan would have you update should not be sensitive to the way the possibilities are represented.
More precisely: Let \({\mathcal{M}}= ({\mathcal{W}}, {\mathcal{F}})\) and \({\mathcal{M}}' = ({\mathcal{W}}', {\mathcal{F}}')\). Suppose \(f : {\mathcal{W}}\rightarrow {\mathcal{W}}'\) is a surjective function. That is, for each \(w'\) in \({\mathcal{W}}'\), there is w in \({\mathcal{W}}\) such that \(f(w) = w'\). And suppose for each X in \({\mathcal{F}}'\), \(f^{-1}(X) = \{w \in {\mathcal{W}}\, |\, f(w) \in X\}\) is in \({\mathcal{F}}\). Then the worlds in \({\mathcal{W}}'\) are coarse-grained versions of the worlds in \({\mathcal{W}}\), and the propositions in \({\mathcal{F}}'\) are coarse-grained versions of those in \({\mathcal{F}}\). Now, given a credence function P on \({\mathcal{F}}\), let f(P) be the credence function over \({\mathcal{F}}'\) such that \(f(P)(X) = P(f^{-1}(X))\). Then the set of credence functions that \({\mathbf{U}}^{{\mathcal{M}}'}\) endorses as responses to learning \(E'\) for someone with prior f(P) is the image under f of the set of credence functions that \({\mathbf{U}}^{\mathcal{M}}\) endorses as responses to learning \(f^{-1}(E')\) for someone with prior P. That is,
$${\mathbf{U}}^{{\mathcal{M}}'}(f(P), E') = f({\mathbf{U}}^{\mathcal{M}}(P, f^{-1}(E'))).$$
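As a small illustration of this coarse-graining (my own sketch; the helper names are hypothetical):

```python
# Sketch of the coarse-graining in Symmetry (illustrative; names are
# hypothetical). f maps fine-grained worlds to coarse-grained ones, and
# f(P)(X) = P(f^{-1}(X)).

P = {"w1": 0.25, "w2": 0.25, "w3": 0.5}        # credences on W
f = {"w1": "u1", "w2": "u1", "w3": "u2"}       # surjection f : W -> W'

def preimage(X):
    """f^{-1}(X) = {w in W | f(w) in X}."""
    return {w for w, u in f.items() if u in X}

def pushforward(P):
    """f(P), a credence function on W': f(P)(u) = P(f^{-1}({u}))."""
    fP = {}
    for w, p in P.items():
        fP[f[w]] = fP.get(f[w], 0.0) + p
    return fP

fP = pushforward(P)
# f(P)(X) agrees with P(f^{-1}(X)) for every coarse-grained proposition X:
for X in [{"u1"}, {"u2"}, {"u1", "u2"}, set()]:
    assert sum(fP.get(u, 0.0) for u in X) == sum(P[w] for w in preimage(X))
```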
Now, van Fraassen proves that these four conditions together entail the Bayesian rule of updating:
Conditionalization
An updating plan should exhort you to update by conditionalizing on any piece of evidence to which your prior assigns positive credence.
More precisely: If \(P(E) > 0\), then \({\mathbf{U}}^{\mathcal{M}}(P, E)\) should contain only one credence function, namely, \(P_E(-) := P(-|E)\).
That is:
Theorem 1
(Hughes and van Fraassen) Deterministic Updating + Coherence + Certainty + Symmetry \(\Rightarrow \) Conditionalization.
2 The role of Deterministic Updating
The problem with van Fraassen’s argument is that, while Coherence and Certainty are uncontroversial, and Symmetry is very plausible, there is no good reason to assume Deterministic Updating. After all, there’s nothing irrational about making non-deterministic plans in general. On Friday, I might plan as follows: If it rains on Saturday, I’ll do the laundry or I’ll darn my socks. This plan is non-deterministic; but it seems quite reasonable. And the same goes for non-deterministic updating plans.
However, the symmetry argument cannot go through without assuming Deterministic Updating. To see this, consider the following updating rule, which I’ll call \({\mathbf{V}}\). To define it, we must first introduce some notation. Suppose \({\mathcal{M}}= ({\mathcal{W}}, {\mathcal{F}})\). If w is in \({\mathcal{W}}\), then define the following credence function \(v_w\) on \({\mathcal{F}}\):
$$v_w(X) = {\left\{ \begin{array}{ll} 1 & \text {if } w \in X \\ 0 & \text {if } w \notin X \end{array}\right. }$$
\(v_w\) is sometimes called the omniscient credence function at \({\mathcal{W}}\) or the valuation function of \({\mathcal{W}}\).
Then, if P is a credence function on \({\mathcal{F}}\) and E is in \({\mathcal{F}}\), let:
$${\mathbf{V}}^{\mathcal{M}}(P, E) = \{v_w \,|\, w \in E\}.$$
That is, \({\mathbf{V}}^{\mathcal{M}}\) takes P and E and returns the set of valuation functions of those worlds in \({\mathcal{W}}\) at which E is true.
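As an illustrative sketch (my own names, not the paper’s notation), \({\mathbf{V}}\) can be rendered as follows; note that it ignores the prior entirely and returns one posterior per \(E\)-world:

```python
# Illustrative sketch of the rule V (helper names hypothetical).
# Worlds are strings; E is a set of worlds; v_w is the valuation
# function that gives credence 1 to w and 0 to every other world.

def valuation(worlds, w):
    """The omniscient credence function v_w over the given worlds."""
    return {u: (1.0 if u == w else 0.0) for u in worlds}

def V(worlds, P, E):
    """V(P, E): the set of valuation functions of the E-worlds.
    (It ignores the prior P entirely -- hence it fails Responsiveness.)"""
    return [valuation(worlds, w) for w in sorted(E)]

worlds = {"w1", "w2", "w3"}
P = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
posteriors = V(worlds, P, {"w1", "w2"})
assert len(posteriors) == 2            # Deterministic Updating fails
for q in posteriors:
    assert sum(q[u] for u in {"w1", "w2"}) == 1.0   # Certainty holds
```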
Theorem 2
\({\mathbf{V}}\) satisfies Coherence + Certainty + Symmetry, but not Deterministic Updating or Conditionalization.
Proof in Appendix. So \({\mathbf{V}}\) satisfies Coherence, Certainty, and Symmetry, but it is not the Bayesian updating rule. And this is possible because it does not satisfy Deterministic Updating. The apparent upshot is that we must assume Deterministic Updating if we are to use van Fraassen’s symmetry considerations to establish Conditionalization. But, as I pointed out in my previous paper, there does not seem to be any principled reason to require it.
3 Other conditions on updating rules
But perhaps this is a little quick. After all, we have no reason to think that Coherence, Certainty, and Symmetry exhaust the conditions we should impose on updating rules. Van Fraassen lists only those because, together with Deterministic Updating, they are sufficient to pin down Conditionalization—he has no need to look for any further conditions. But if we drop Deterministic Updating and find that Conditionalization doesn’t follow, then perhaps we can nonetheless add further well-motivated conditions to make up for the shortfall.
So, are there further conditions that \({\mathbf{V}}\) fails to satisfy? Yes, there are. But we can easily find closely related updating plans that satisfy these further conditions, as well as Coherence, Certainty, and Symmetry, but which still do not satisfy Deterministic Updating or Conditionalization. We describe these below. After that, we look at a further condition that, along with the others, entails Deterministic Updating and Conditionalization. But we note that it may well be too strong: its intuitive force can be captured by a weaker principle that our amended versions of \({\mathbf{V}}\) satisfy.
Here’s the first of our further conditions:
Stationarity
Updating on E should have no effect if you are already certain of E.
More precisely: If \(P(E) = 1\), then \({\mathbf{U}}^{\mathcal{M}}(P, E) = \{P\}\).
\({\mathbf{V}}\) does not satisfy Stationarity. But we can alter it so that it does. Let \(\mathbf {W}\) be the following family of updating plans:
$$\mathbf {W}^{\mathcal{M}}(P, E) = {\left\{ \begin{array}{ll} \{P\} & \text {if } P(E) = 1 \\ \{v_w \,|\, w \in E\} & \text {if } P(E) < 1 \end{array}\right. }$$
Then \(\mathbf {W}\) satisfies Coherence + Certainty + Symmetry + Stationarity.
Next, here are two further conditions:
Evidential Regularity
If you assign positive credence to a possibility before updating, and the evidence you learn doesn’t rule out that possibility, then the updating rule should demand that you assign positive credence to that possibility after learning.
More precisely: If \(P(w) > 0\), and E is true at w, and \(P^\star \) is in \({\mathbf{U}}^{\mathcal{M}}(P, E)\), then \(P^\star (w) > 0\).
Responsiveness
It should at least be possible for your prior to make a difference to your posterior.
More precisely: There are \(P_1, P_2\) such that \(P_1(E), P_2(E) > 0\) and \({\mathbf{U}}^{\mathcal{M}}(P_1, E) \ne {\mathbf{U}}^{\mathcal{M}}(P_2, E)\).
\(\mathbf {W}\) does not satisfy Evidential Regularity or Responsiveness. But we can alter it so that it does. Given \(0 \le \lambda \le 1\), let \(\mathbf {X}_\lambda \) be the following family of updating plans:
$$\mathbf {X}_\lambda ^{\mathcal{M}}(P, E) = {\left\{ \begin{array}{ll} \{P\} & \text {if } P(E) = 1 \\ \{\lambda P_E + (1-\lambda )v_w \,|\, w \in E\} & \text {if } P(E) < 1 \end{array}\right. }$$
If \(0< \lambda < 1\), then \(\mathbf {X}_\lambda \) satisfies Coherence + Certainty + Symmetry + Stationarity + Evidential Regularity + Responsiveness.
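As a hedged sketch (the function names are mine, and the precise definition of \(\mathbf {X}_\lambda \) is reconstructed from the surrounding discussion: the \(\lambda \)-mixtures of \(P_E\) with the valuation functions of the \(E\)-worlds, collapsing to \(\{P\}\) once \(P(E) = 1\)):

```python
# Sketch of the family X_lambda as described in the text: when P(E) < 1,
# it endorses the lambda-mixture of P_E with each v_w for w in E; when
# P(E) = 1, it endorses P alone. (Names are my own, not the paper's.)

def conditionalize(P, E):
    pE = sum(P[w] for w in E)
    return {w: (P[w] / pE if w in E else 0.0) for w in P}

def X(P, E, lam):
    pE = sum(P[w] for w in E)
    if pE == 1.0:
        return [dict(P)]
    PE = conditionalize(P, E)
    return [{u: lam * PE[u] + (1 - lam) * (1.0 if u == w else 0.0)
             for u in P} for w in sorted(E)]

P = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
E = {"w1", "w2"}
for q in X(P, E, 0.5):
    assert abs(sum(q.values()) - 1.0) < 1e-12        # Coherence
    assert abs(q["w1"] + q["w2"] - 1.0) < 1e-12      # Certainty
    assert q["w1"] > 0 and q["w2"] > 0               # Evidential Regularity
```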
To state the next condition, we need a measure of distance between credence functions defined on the same algebra \({\mathcal{F}}\). There are many we could use, but for our purposes it will suffice to use the most natural, which sums the absolute differences between the credences they assign: \(D(P_1, P_2) = \sum _{X \in {\mathcal{F}}} |P_1(X) - P_2(X)|\). Clearly, this works only if \({\mathcal{F}}\) is finite.
Continuity
Small changes in the prior credence in E should not give rise to large changes in the set of credence functions that the updating rule endorses upon learning E.
More precisely: For all \(\varepsilon > 0\), there is \(\delta > 0\) such that, if \(D(P_1, P_2) < \delta \), then for all \(P^\star _2\) in \({\mathbf{U}}^{\mathcal{M}}(P_2, E)\), there is \(P^\star _1\) in \({\mathbf{U}}^{\mathcal{M}}(P_1, E)\) such that \(D(P^\star _1, P^\star _2) < \varepsilon \).
\(\mathbf {X}_\lambda \) does not satisfy Continuity. After all, if we aren’t fully certain of E, then the updating rule endorses the \(\lambda \)-mixture of \(P_E\) with every \(v_w\); but once we become certain of E, it endorses only \(P = P_E\). Again, we can alter it so that it does satisfy Continuity.
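To see the jump numerically, here is an illustrative check (my own code, not the paper’s; the distance is computed over worlds rather than the whole algebra, which is enough to exhibit the discontinuity):

```python
# Numeric check of the discontinuity just described (illustrative).
# As P(E) -> 1, X_lambda's endorsed set stays spread out; at P(E) = 1 it
# collapses to {P}. D here sums absolute differences over worlds.

def conditionalize(P, E):
    pE = sum(P[w] for w in E)
    return {w: (P[w] / pE if w in E else 0.0) for w in P}

def X(P, E, lam):
    pE = sum(P[w] for w in E)
    if pE == 1.0:
        return [dict(P)]
    PE = conditionalize(P, E)
    return [{u: lam * PE[u] + (1 - lam) * (1.0 if u == w else 0.0)
             for u in P} for w in sorted(E)]

def D(P1, P2):
    return sum(abs(P1[w] - P2[w]) for w in P1)

E = {"w1", "w2"}
P_near = {"w1": 0.5, "w2": 0.499, "w3": 0.001}   # P(E) just below 1
P_at   = {"w1": 0.5, "w2": 0.5,   "w3": 0.0}     # P(E) = 1
# The endorsed set jumps: near certainty there are two posteriors far
# from P_at; at certainty, only P_at itself.
assert len(X(P_near, E, 0.5)) == 2
assert X(P_at, E, 0.5) == [P_at]
assert min(D(q, P_at) for q in X(P_near, E, 0.5)) > 0.2
```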
Let \({\mathbf{Y}}\) be the following family of updating plans:
$${\mathbf{Y}}^{\mathcal{M}}(P, E) = \{P(E)P_E + (1-P(E))v_w \,|\, w \in E\}$$
Then:
Theorem 3
\({\mathbf{Y}}\) satisfies Coherence + Certainty + Symmetry + Stationarity + Evidential Regularity + Responsiveness + Continuity, but not Deterministic Updating or Conditionalization.
Proof in Appendix.
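The amended rule’s mixture weight can be read off the proof of Theorem 3, which uses \(P(E)P_E + (1-P(E))v_w\); here is a hedged sketch under that reconstruction (function names are mine):

```python
# Sketch of the rule Y used in Theorem 3 (my reconstruction from the
# proof, which mixes P_E with v_w using weight P(E)): for each E-world w,
# Y endorses P(E)*P_E + (1 - P(E))*v_w.

def conditionalize(P, E):
    pE = sum(P[w] for w in E)
    return {w: (P[w] / pE if w in E else 0.0) for w in P}

def Y(P, E):
    pE = sum(P[w] for w in E)
    PE = conditionalize(P, E)
    return [{u: pE * PE[u] + (1 - pE) * (1.0 if u == w else 0.0)
             for u in P} for w in sorted(E)]

P = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
E = {"w1", "w2"}
for q in Y(P, E):
    assert abs(q["w1"] + q["w2"] - 1.0) < 1e-12     # Certainty
# Stationarity: when P(E) = 1 the mixture weight on v_w vanishes, so
# every endorsed posterior is P itself; and as P(E) -> 1 the posteriors
# converge to P_E, which is why Continuity holds.
P1 = {"w1": 0.75, "w2": 0.25, "w3": 0.0}
for q in Y(P1, E):
    assert all(abs(q[u] - P1[u]) < 1e-12 for u in P1)
```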
Next, we come to a condition that, together with Stationarity, entails Deterministic Updating.
Strong order invariance
Updating first on E and then on EF should always result in the same posteriors as updating first on F and then on EF.
More precisely: If \(P(EF) > 0\), then for all \(P^\dag \) in \({\mathbf{U}}^{\mathcal{M}}(P, E)\) and \(P^\star \) in \({\mathbf{U}}^{\mathcal{M}}(P, F)\),
$${\mathbf{U}}^{\mathcal{M}}(P^\dag , EF) = {\mathbf{U}}^{\mathcal{M}}(P^\star , EF).$$
Theorem 4
Strong Order Invariance + Stationarity + Certainty \(\Rightarrow \) Deterministic Updating
Proof in Appendix. How should we respond to this? We might simply accept Strong Order Invariance and use it, together with Stationarity and Certainty, in place of Deterministic Updating to plug the hole in van Fraassen’s symmetry argument for Conditionalization. But another possibility is that we find Strong Order Invariance too strong. It may be too much to ask that every choice you make after learning E leads, once you then learn EF, to exactly the same options as you would have faced had you first learned F, made any choice in response, and then learned EF. Instead, we might think:
Weak Order Invariance
Updating first on E and then on EF should sometimes result in the same posteriors as updating first on F and then on EF.
More precisely: If \(P(EF) > 0\), then there are \(P^\dag \) in \({\mathbf{U}}^{\mathcal{M}}(P, E)\) and \(P^{\dag \dag }\) in \({\mathbf{U}}^{\mathcal{M}}(P^\dag , EF)\) as well as \(P^\star \) in \({\mathbf{U}}^{\mathcal{M}}(P, F)\) and \(P^{\star \star }\) in \({\mathbf{U}}^{\mathcal{M}}(P^\star , EF)\) such that \(P^{\dag \dag } = P^{\star \star }\).
Now, note that \({\mathbf{Y}}\) satisfies Coherence + Certainty + Symmetry + Stationarity + Evidential Regularity + Responsiveness + Continuity + Weak Order Invariance. So those conditions do not entail Deterministic Updating or Conditionalization.
There are no doubt further conditions on updating rules we might consider that would plug the gap left in the symmetry argument for Conditionalization when we reject Deterministic Updating, but I will not consider them. I will leave our investigation here, noting that, relative to the conditions we’ve considered, the success of the symmetry argument turns on the choice between Strong and Weak Order Invariance.
References
Briggs, R. A., & Pettigrew, R. (2018). An accuracy-dominance argument for conditionalization. Noûs, 54, 162–181.
Brown, P. M. (1976). Conditionalization and expected utility. Philosophy of Science, 43(3), 415–419.
Diaconis, P., & Zabell, S. L. (1982). Updating subjective probability. Journal of the American Statistical Association, 77(380), 822–830.
Dietrich, F., List, C., & Bradley, R. (2016). Belief revision generalized: A joint characterization of Bayes’s and Jeffrey’s rules. Journal of Economic Theory, 162, 352–371.
Greaves, H., & Wallace, D. (2006). Justifying conditionalization: Conditionalization maximizes expected epistemic utility. Mind, 115(459), 607–632.
Grove, A. J., & Halpern, J. Y. (1998). Updating sets of probabilities. In Proceedings of the 14th conference on uncertainty in AI (pp. 173–182). San Francisco, CA: Morgan Kaufman.
Hughes, R. I. G., & van Fraassen, B. C. (1984). Symmetry arguments in probability kinematics. In PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association (pp. 851–869).
Lewis, D. (1999). Why Conditionalize? In Papers in metaphysics and epistemology (pp. 403–407). Cambridge: Cambridge University Press.
Oddie, G. (1997). Conditionalization, cogency, and cognitive value. British Journal for the Philosophy of Science, 48, 533–541.
Pettigrew, R. (2019). What is conditionalization, and why should we do it? Philosophical Studies. https://doi.org/10.1007/s11098-019-01377-y.
van Fraassen, B. C. (1987). Symmetries of personal probability kinematics. In N. Rescher (Ed.), Scientific inquiry in philosophical perspective (pp. 183–223). Lanham: University Press of America.
Appendix: Proofs of results
Theorem 2
\({\mathbf{V}}\) satisfies Coherence + Certainty + Symmetry, but not Deterministic Updating.
Proof
It is easy to see that \({\mathbf{V}}\) satisfies Coherence and Certainty, since each \(v_w\) is a probabilistic credence function, and since each \(v_w\) is certain of E for each w at which E is true. To see that \({\mathbf{V}}\) satisfies Symmetry, the crucial fact is this: for each w in \({\mathcal{W}}\), \(f(v_w) = v_{f(w)}\). Proof of this little fact: for any X in \({\mathcal{F}}'\), \(f(v_w)(X) = 1 \Leftrightarrow v_w(f^{-1}(X)) = 1 \Leftrightarrow w \in f^{-1}(X) \Leftrightarrow f(w) \in X \Leftrightarrow v_{f(w)}(X) = 1\). Back to the main proof. First, take a credence function in \({\mathbf{V}}^{{\mathcal{M}}'}(f(P), E')\): that is, \(v_{w'}\) for some \(w'\) in \(E'\). If \(f(w) = w'\), then w is in \(f^{-1}(E')\) and so \(v_w\) is in \({\mathbf{V}}^{\mathcal{M}}(P, f^{-1}(E'))\). Now \(f(v_w) = v_{f(w)} = v_{w'}\), so \(v_{w'}\) is in \(f({\mathbf{V}}^{\mathcal{M}}(P, f^{-1}(E')))\). Next, take a credence function in \(f({\mathbf{V}}^{\mathcal{M}}(P, f^{-1}(E')))\). That is, \(f(v_w)\) for some w in \(f^{-1}(E')\). But \(f(v_w) = v_{f(w)}\), which is in \({\mathbf{V}}^{{\mathcal{M}}'}(f(P), E')\), since f(w) is in \(E'\). This completes the proof. \(\square \)
Theorem 3
\({\mathbf{Y}}\) satisfies Coherence + Certainty + Symmetry + Stationarity + Evidential Regularity + Responsiveness + Continuity, but not Deterministic Updating or Conditionalization.
Proof
\({\mathbf{Y}}\) satisfies Coherence because \(P_E\) and \(v_w\) are both probabilistic credence functions, and thus any mixture of them is as well. \({\mathbf{Y}}\) satisfies Certainty because \(P_E\) is certain of E and so is \(v_w\) whenever E is true at w, and so any mixture of them is certain of E. The proof that \({\mathbf{Y}}\) satisfies Symmetry proceeds similarly to the proof that \({\mathbf{V}}\) does. We simply add the fact that \(f(P_{f^{-1}(E')}) = f(P)_{E'}\). \({\mathbf{Y}}\) satisfies Stationarity because, if \(P(E) = 1\), then \(P(E)P_E + (1-P(E))v_w = P_E = P\), for all w. \({\mathbf{Y}}\) satisfies Evidential Regularity because, if \(P(w) > 0\) and E is true at w, then \(P(E)P_E(w) > 0\) and so \(P(E)P_E(w) + (1-P(E))v_{w'}(w) > 0\). \({\mathbf{Y}}\) satisfies Responsiveness because \(P_1(E)P_1(-|E) + (1-P_1(E))v_w(-) \ne P_2(E)P_2(-|E) + (1-P_2(E))v_w(-)\) for many priors \(P_1, P_2\). \({\mathbf{Y}}\) satisfies Continuity because, given \(\varepsilon > 0\), we let \(\delta = \frac{\varepsilon }{2|{\mathcal{F}}|}\). Then, if \(D(P_1, P_2) < \delta \), then \(|(1-P_1(E)) - (1-P_2(E))| = |P_1(E) - P_2(E)| < \frac{\varepsilon }{2|{\mathcal{F}}|}\) and \(|P_1(XE) - P_2(XE)| < \frac{\varepsilon }{2|{\mathcal{F}}|}\) for each X in \({\mathcal{F}}\). So, for each w in E,
$$\begin{aligned} D\big (P_1(E)P_1(-|E) + (1-P_1(E))v_w,\ P_2(E)P_2(-|E) + (1-P_2(E))v_w\big )&\le \sum _{X \in {\mathcal{F}}} \big (|P_1(XE) - P_2(XE)| + |P_1(E) - P_2(E)|\big ) \\&< |{\mathcal{F}}|\left( \frac{\varepsilon }{2|{\mathcal{F}}|} + \frac{\varepsilon }{2|{\mathcal{F}}|}\right) = \varepsilon . \end{aligned}$$
\(\square \)
Theorem 4
Strong Order Invariance + Stationarity + Certainty \(\Rightarrow \) Deterministic Updating.
Proof
Suppose \({\mathbf{U}}\) satisfies Strong Order Invariance and Stationarity. Suppose E is in \({\mathcal{F}}\) and \(\top \) is the top element of \({\mathcal{F}}\), so that \(\top \) is true at all worlds; and suppose P is defined on \({\mathcal{F}}\). We’re going to consider what happens when you update first on E, then on \(E \top \) (which is, of course, E), and what happens when you update first on \(\top \) and then on \(E \top \). Take \(P^\dag \) from \({\mathbf{U}}^{\mathcal{M}}(P, E)\). By Certainty, \(P^\dag (E) = 1\). Then, by Stationarity, \({\mathbf{U}}^{\mathcal{M}}(P^\dag , E \top ) = \{P^\dag \}\). Also by Stationarity, \({\mathbf{U}}^{\mathcal{M}}(P, \top ) = \{P\}\), since \(P(\top ) = 1\). So, by Strong Order Invariance,
$$\{P^\dag \} = {\mathbf{U}}^{\mathcal{M}}(P^\dag , E\top ) = {\mathbf{U}}^{\mathcal{M}}(P, E\top ) = {\mathbf{U}}^{\mathcal{M}}(P, E).$$
Since this holds for every \(P^\dag \) in \({\mathbf{U}}^{\mathcal{M}}(P, E)\), we have \(|{\mathbf{U}}^{\mathcal{M}}(P, E)| = 1\), as required. \(\square \)
Pettigrew, R. A note on deterministic updating and van Fraassen’s symmetry argument for conditionalization. Philos Stud 178, 665–673 (2021). https://doi.org/10.1007/s11098-020-01450-x