
Regression to the mean and Judy Benjamin


Abstract

Van Fraassen’s Judy Benjamin problem asks how one ought to update one’s credence in A upon receiving evidence of the sort “A may or may not obtain, but B is k times likelier than C”, where \(\{A,B,C\}\) is a partition. Van Fraassen’s solution, in the limiting case \(k\rightarrow \infty \), recommends a posterior converging to \(P(A|A\cup B)\) (where P is one’s prior probability function). Grove and Halpern, and more recently Douven and Romeijn, have argued that one ought to leave credence in A unchanged, i.e. fixed at P(A). We argue that while the former approach is superior, it brings about a reflection violation due in part to neglect of a “regression to the mean” phenomenon, whereby when C is eliminated by random evidence that leaves A and B alive, the ratio P(A) : P(B) ought to drift in the direction of 1 : 1.
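To fix numbers, here is a minimal sketch of the two rival recommendations, assuming (as in note 1 below) the prior \((P(A),P(B),P(C)) = \left({1\over 2},{1\over 4},{1\over 4}\right)\) with A = Blue:

```python
# Rival limiting-case posteriors for A, assuming the prior
# (A, B, C) = (Blue, Red HQ, Red 2nd) = (1/2, 1/4, 1/4).
prior = {"A": 0.5, "B": 0.25, "C": 0.25}

# Van Fraassen's recommendation as k -> infinity: the posterior
# converges to P(A | A-or-B), i.e. renormalize once C is driven to zero.
van_fraassen = prior["A"] / (prior["A"] + prior["B"])  # = 2/3

# Grove-Halpern / Douven-Romeijn recommendation: leave credence in A fixed.
fixed = prior["A"]  # = 1/2

print(van_fraassen, fixed)
```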


Fig. 1

Notes

  1.

    Other possibilities: (1) it is our expectation of Judy’s prior that is \(\left( {1\over 4}, {1\over 4}, {1\over 4}, {1\over 4}\right)\), and (2) Judy’s prior is simply \(( J ( Blue), J ( Red \; HQ ) , J ( Red \; 2nd ) ) = \left( {1\over 2}, {1\over 4}, {1\over 4}\right)\) (i.e. the subdivision of the Blue region is suppressed). Although these alternatives might affect a proper analysis in interesting ways, neither would materially affect the qualitative features that concern us here.

  2.

    Note however that van Fraassen also gives a “coarsest description” of the problem more along the lines of Assumption \(3_\mathbf{A}\). We don’t know which interpretation he intends his solution to address.

  3.

    We take \(P(Blue)\ne 0\) and “HQ has not ruled out Blue” to be equivalent. Also we take HQ to report a ratio 0 : 0 if and only if he has ruled out Red. Other practices are possible.

  4.

    Some authors would likely disavow both protocols. D&R (2011), for example, treat HQ’s message as a conditional rather than as a report of a conditional probability. We find this implausible. Indeed, we don’t think it is obvious which conditional “if you are in the Red area, the probability is \({3\over 4}\) that you are in the Red HQ area” is supposed to represent—in particular because we don’t know what proposition is intended to serve as consequent. (It can’t be “the probability is \({3\over 4}\) that you are in the Red HQ area”.) One might rephrase to something like “if I were to learn that you are in the Red area then my posterior probability in Red HQ would be \({3\over 4}\)”, but we don’t see how this would require analysis different from a conditional probability report. More natural, by far, is to simply assume that HQ’s message is a (fairly standard) species of vernacular for such a report.

  5.

    This argument needn’t vitiate the \({2\over 3}\) limiting case conclusion in general, although it does show that supporters of such a conclusion must endorse a posterior in Blue strictly less than \({1\over 2}\) at some middle range(s) of the reported conditional probability \(P(Red\; HQ| Red)\). The \({2\over 3}\) limiting case conclusion would be apt, in fact, for a variant of the problem in which Blue is subdivided into two regions and HQ reports that both are live. In this case the effect we are studying manifests at the middle ranges as a “progression from the mean” (Judy raises credence in both Red subregions above the mean value \({1\over 4}\)) rather than a “regression to the mean”. We reserve the latter notion (“regression”) for cases in which one learns nothing beyond which cells in some partition survive.

  6.

    On pain of inconsistency, anyone who subscribes to the general one-half solution must either deny this seeming truism or claim that \(Red\; 2nd\) is not eliminated almost surely. (Drawing semantic distinctions between the conditionals (2) and (2a) misses the point, for it is never the semantic content of a received message that is conditioned on, but the fact that it was received.) Grove and Halpern (1997) is the best attempt; we critique their solution in the next section. We concede that (2) and (2a) are assertible with positive probability and equivalent, both to each other and to (2b) You are not in the Red Second Company area, asserted by a duty officer whose protocol is to report the value of \(P(Red\; 2nd)\) and to indicate so if he knows which region Judy is in. Where we differ from van Fraassen and other \({2\over 3}\) limiting casers is in our admission of regression, whereby Blue and \(Red\; HQ\) trend closer in probability when \(Red\; 2nd\) is eliminated and they are not.
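The regression claimed here can be illustrated with a toy martingale model (our own sketch, not part of the paper): HQ’s credences perform a symmetric random walk on the probability simplex, and Judy conditions on \(Red\; 2nd\) being the first cell eliminated while Blue and \(Red\; HQ\) remain alive.

```python
import random

random.seed(1)

def run_walk():
    # Credences in units of 1/40; prior (Blue, Red HQ, Red 2nd) = (1/2, 1/4, 1/4).
    state = [20, 10, 10]
    pairs = [(0, 1), (0, 2), (1, 2)]
    while min(state) > 0:
        # Transfer one unit of credence between a uniformly chosen pair,
        # direction set by a fair coin: each coordinate is a martingale.
        i, j = random.choice(pairs)
        if random.random() < 0.5:
            i, j = j, i
        state[i] += 1
        state[j] -= 1
    return state

# Condition on Red 2nd (index 2) being the first cell eliminated,
# with Blue and Red HQ still alive.
blues = []
while len(blues) < 3000:
    state = run_walk()
    if state[2] == 0:
        blues.append(state[0] / 40)

mean_blue = sum(blues) / len(blues)
print(round(mean_blue, 3))  # strictly between 1/2 and 2/3: regression toward 1:1
```

The conditional mean for Blue lands above its prior \({1\over 2}\) but below the renormalized value \({2\over 3}\): eliminating \(Red\; 2nd\) by random evidence drags the Blue : \(Red\; HQ\) ratio toward 1 : 1.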

  7.

    Letting \(\epsilon \rightarrow 0\) would effectively fix Judy’s posterior at the conditional expectation of \(g(x,y)\) on the line segment \(y={3\over 4}(1-x),0\le x\le 1\), with respect to the \(\sigma \)-algebra generated by the triangles \(\{ a(1-x)\le y\le b(1-x):0\le x\le 1\},0\le a<b\le 1\). The inconsistency encountered in the last section concerning how to condition on the message \(P(Red\; 2nd) = 0\) and its various equivalent formulations would be, on this view, an artifact of the (\(y=1-x\)) hypotenuse’s status as an atom of at least three different \(\sigma \)-algebras of potential interest. G&H take this hypotenuse to be a null set, so Judy’s different values for the conditional expectation of \(g(x,y)\) on it with respect to the different algebras aren’t (so probabilists assure us) any cause for concern. (For discussion of a famous case of this “paradox”—conditionalization on a great circle—see, e.g., Jaynes 2003).
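For readers unfamiliar with this “paradox”, a self-contained toy version (our sketch, not Jaynes’s great-circle case): for a point uniform on the unit square, “conditioning on the null diagonal \(y=x\)” gives different answers depending on which shrinking family of events approximates that diagonal.

```python
import random

random.seed(0)
N = 200_000
pts = [(random.random(), random.random()) for _ in range(N)]

eps = 0.01
# Approximate the diagonal y = x by slabs |y - x| < eps: the limiting
# conditional distribution of x is uniform on [0, 1], mean 1/2.
slab = [x for x, y in pts if abs(y - x) < eps]
# Approximate the same diagonal by ratio wedges |y/x - 1| < eps: the
# limiting conditional density of x is proportional to x, mean 2/3.
wedge = [x for x, y in pts if x > 0 and abs(y / x - 1) < eps]

mean_slab = sum(slab) / len(slab)     # ~ 0.5
mean_wedge = sum(wedge) / len(wedge)  # ~ 0.667
print(mean_slab, mean_wedge)
```

Both families shrink to the same null set, yet the two “conditional expectations” disagree, which is exactly the atom-versus-null-set sensitivity at issue in this note.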

  8.

    There is good reason to believe that more “story completions” exhibit this property than exhibit the reverse. Certainly larger regions are dramatically less likely to be eliminated when they are “larger by disjunction”, since one must eliminate every disjunct in order to eliminate a disjunction. A 2-dimensional Brownian motion model (we explore a 3-dimensional Brownian motion below) provides another instance; as shown in (Ferguson, unpublished ms.), the probability that such a motion originating within an equilateral triangle at barycentric coordinates \(({1\over 2},{1\over 4},{1\over 4})\) exits the triangle out the farthest side is \(\approx .1421\). If such a motion is used to model the movement of HQ’s credences on \(\{Blue, Red\; HQ, Red\; 2nd\}\), Judy will adopt posterior probability \(\approx {1/2\over 1-.1421}\approx .5828<{2\over 3}\) for Blue conditional on a Red subregion being eliminated prior to elimination of Blue.
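The arithmetic behind the .5828 figure is an optional-stopping argument; a sketch, taking Ferguson’s exit probability .1421 as given:

```python
# Optional-stopping arithmetic for the .5828 posterior, assuming
# Ferguson's exit probability .1421 for the side farthest from the start.
p_blue_first = 0.1421  # probability the Blue coordinate is driven to 0 first
prior_blue = 0.5

# The Blue barycentric coordinate is a martingale, so its prior value
# equals its expected value at the first elimination:
#   0.5 = p_blue_first * 0
#         + (1 - p_blue_first) * E[Blue | a Red subregion dies first]
posterior_blue = prior_blue / (1 - p_blue_first)
print(round(posterior_blue, 4))  # 0.5828
```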

  9.

    The posteriors at \(R_{499} = (0,.001] \cup [.999, 1)\) were substantially lower, \(\approx .628\) and \(\approx .597\) for \(N=75{,}000\) and \(N=250{,}000\) respectively.

  10.

    It is natural to ask whether such effects arise in the incomplete information scenarios where Jeffrey conditionalization (as developed in Jeffrey 1965) is often taken to apply; if HQ reports that \(P(Red\; 2nd)={1\over 10}\), should Judy (as Jeffrey conditionalization suggests) adopt posterior credence in Blue equal to \({3\over 5}\), or to something else? If we treat the Blue subregions on a par with the Red ones, then the \({3\over 5}\) posterior is warranted by indifference. So such effects, if any, are sensitive to subdivision.
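The \({3\over 5}\) figure follows from Jeffrey conditionalization on the partition \(\{Red\; 2nd,\lnot Red\; 2nd\}\); a sketch, again assuming the \(\left({1\over 2},{1\over 4},{1\over 4}\right)\) prior from note 1:

```python
# Jeffrey conditionalization on {Red 2nd, not Red 2nd}, assuming prior
# (Blue, Red HQ, Red 2nd) = (1/2, 1/4, 1/4) and the report P(Red 2nd) = 1/10.
prior = {"Blue": 0.5, "RedHQ": 0.25, "Red2nd": 0.25}
new_red2nd = 0.1

# Jeffrey's rule preserves relative probabilities within each cell of the
# partition: Blue and Red HQ share the remaining 9/10 in their prior 2:1 ratio.
not_red2nd = prior["Blue"] + prior["RedHQ"]
post = {
    "Blue": (1 - new_red2nd) * prior["Blue"] / not_red2nd,    # 3/5
    "RedHQ": (1 - new_red2nd) * prior["RedHQ"] / not_red2nd,  # 3/10
    "Red2nd": new_red2nd,                                     # 1/10
}
print(post["Blue"])
```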

  11.

    Thanks to Michael Huemer and two anonymous referees for helpful suggestions, and also to the editors at Synthese for their persistence.

References

  1. Douven, I., & Romeijn, J.-W. (2011). A new resolution of the Judy Benjamin problem. Mind, 120, 637–670.

  2. Ferguson, T. Gambler’s ruin in three dimensions. Manuscript. https://www.math.ucla.edu/~tom/papers/unpublished/gamblersruin.pdf. Accessed February 13, 2018.

  3. Goodman, I. R., & Nguyen, H. T. (1999). Probability updating using second order probabilities and conditional event algebra. Information Sciences, 121, 295–347.

  4. Grove, A. J., & Halpern, J. Y. (1997). Probability: Conditioning vs. cross-entropy. In Proceedings of the 13th annual conference on uncertainty in artificial intelligence (pp. 208–214). San Francisco: Morgan Kaufmann.

  5. Jaynes, E. T. (2003). The Borel-Kolmogorov paradox. Probability theory: The logic of science (pp. 467–470). Cambridge: Cambridge University Press.

  6. Jeffrey, R. (1965). The logic of decision. Chicago: University of Chicago Press.

  7. Schervish, M. J., Seidenfeld, T., & Kadane, J. B. (2004). Stopping to reflect. Journal of Philosophy, 101, 315–322.

  8. Seidenfeld, T. (1986). Entropy and uncertainty. Philosophy of Science, 53, 467–491.

  9. van Fraassen, B. C. (1981). A problem for relative information minimizers in probability kinematics. The British Journal for the Philosophy of Science, 32, 375–379.

  10. van Fraassen, B. C. (1984). Belief and the will. Journal of Philosophy, 81, 235–256.


Author information

Correspondence to Randall G. McCutcheon.



Cite this article

McCutcheon, R.G. Regression to the mean and Judy Benjamin. Synthese (2018). https://doi.org/10.1007/s11229-018-1761-4


Keywords

  • Conditionalization
  • Regression
  • Reflection
  • Judy Benjamin problem