## Abstract

I show that centered propositions—also called *de se* propositions, and usually modeled as sets of centered worlds—pose a serious problem for various versions of Lewis’s Principal Principle. The problem, put roughly, is that in scenarios like Elga’s ‘Sleeping Beauty’ case, those principles imply that rational agents ought to have obviously irrational credences. To solve the problem, I propose a centered version of the Principal Principle. My version allows centered propositions to be objectively chancy.


## Notes

See Lewis (1980) for a discussion of admissibility.

To be clear: when Susie wakes up at time \(t=1\), she might not know that the *current* chance of the coin landing heads is \(\frac{1}{2}\). In other words, she might not know the following centered proposition: right now, \(\frac{1}{2}\) is the chance of the coin landing heads. But since Susie knows all the details of the experiment, she *always* knows the following: at time \(t=1\), the chance of the coin landing heads is \(\frac{1}{2}\). She always knows the uncentered proposition that \(Ch_{1}(H)=\frac{1}{2}\). And as shown in the Appendix, that suffices to derive equation (3).

When it comes to Susie’s experiment, these update rules are more plausible than, for example, standard update rules for uncentered propositions. In particular, they are more plausible than the standard Bayesian conditionalization rule which implies (4).

I include the qualifier ‘in everyday circumstances’ because there may be unusual circumstances in which the centered chance of “I am Beth”, relative to Beth, is less than 1. Suppose Beth briefly suffers amnesia, and cannot remember whether she is Beth or Bailey. In a circumstance like that, “I am Beth” may, relative to Beth, have a non-unit centered chance.

One might object by claiming that the proper response to Susie’s experiment does not relativize *chances* to agents. The proper response, one might claim, relativizes the *Principal Principle* to agents instead: rather than centered chances, there are a plurality of Principal Principles, one for each agent or community of agents. The problem with this response is that plausibly, centered chances can do things that a mere collection of agent-relative Principal Principles cannot. In particular, as I explain in Sect. 4, centered chances can be used to confirm or disconfirm certain physical theories: namely, cosmological theories and theories of quantum mechanics. It is not clear, however, whether a plurality of Principal Principles can do likewise. So it is better to posit centered chances, I think, than to posit a plurality of Principal Principles.

Because of this, I am guilty of an overly-hasty inference. In Sect. 2, I implicitly inferred that the centered chance of the coin landing heads is \(\frac{1}{2}\) from the fact that the uncentered chance of the coin landing heads is \(\frac{1}{2}\); that is, I implicitly inferred that \(Ch_{1}(H)=\frac{1}{2}\) from the fact that \(Ch_{1}(U)=\frac{1}{2}\). This overly-hasty inference is ubiquitous in the literature on Elga’s ‘Sleeping Beauty’ case: everyone assumes that in Elga’s case, the objective chance of the coin landing heads is \(\frac{1}{2}\). And it is this overly-hasty inference which leads to the principal problem. Presumably, this inference is so common in the literature because the distinction between centered chance and uncentered chance has been overlooked.

This follows from two plausible claims. The first is that \(Cr\big (Ch_{1}(H)=\frac{1}{3}\big )=1\). This is plausible because at the initial time, Susie knows the details of the experiment. The second is that \(Cr_{1}(H)=Cr(H\mid \mathcal {H}_{1})\), since Susie’s credence function at time \(t=1\) is her initial credence function conditional on the history of the world up to that time. I defend the second claim in the Appendix.

In this paper, I remain neutral on exactly what that proposition is. But just to give an illustrative example: perhaps *H* is \(\{\langle w,x\rangle \mid w\) is a world in which the coin lands heads, and *x* is either time \(t=1\) or time \(t=2\}\), while *U* is \(\{\langle w,t\rangle \mid w\) is a world in which the coin lands heads, and *t* is a time\(\}\).

This is why the principal problem cannot be reformulated merely by substituting *U* for *H* in equations (1), (2), and (3). In order to generate a version of the principal problem using *U*, one would first have to specify the algebra of propositions to which *U* belongs. This algebra will, plausibly, not feature centered propositions like *Mo*: for this algebra contains uncentered propositions rather than centered ones. So this algebra cannot be used to derive the equation \(Cr_{1}(Mo)=1\), since *Mo* is not in this algebra at all. And if one attempts to construct an algebra featuring both *Mo* and *U*, then complications arise; it becomes unclear whether that algebra really does contain *U*, or whether it just contains *H* in some other guise.

In particular, I think that centered chances and rational centered credences are connected by a relation of explanatory determination (Wilhelm 2020). More precisely, the fact that—relative to agent *A*—centered proposition *p* has chance *x* is an explanatory determiner of the fact that *x* is the rational credence for *A* to have in *p*. For the former fact explains the latter fact.

The best system at \(\mathfrak {w}\) also posits uncentered chances. For the best system at \(\mathfrak {w}\) also summarizes uncentered frequency facts, such as facts about the relative frequency with which the coin lands heads.

There might be cases in which the chances depart from the relative frequencies. For instance, arguably, this happens in some quantum theories. In such cases, there is a significant difference between (1) the summary of frequency facts that centered chances provide, and (2) any non-centered summary of those frequency facts. Thanks to a reviewer for drawing my attention to this.

Here is why. Let \(t=1\), and consider all the events in which Susie wakes up, but Susie is unaware of whether it is Monday or Tuesday. In each of those events, it is *not* the case that (1) it is Tuesday, and yet (2) the coin lands heads. So according to the best summary of that null frequency, the centered chance of it being Tuesday and the coin landing heads—at time \(t=1\), relative to Susie—is 0. That is, relative to Susie, \(Ch_{1}(H_{t})=0\). Now consider all the events in which (1) Susie wakes up, (2) Susie is unaware of whether it is Monday or Tuesday, but (3) it is, as a matter of fact, Monday. In approximately half of those events, the coin ends up landing heads. So according to the best summary of that frequency, the centered chance of the coin landing heads *given that it is Monday*—at time \(t=1\), relative to Susie—is \(\frac{1}{2}\). In other words, \(Ch_{1}(H\mid Mo)=\frac{1}{2}\).

This argument for being a thirder is somewhat similar to, yet importantly distinct from, an argument given by Elga (2000). Elga’s argument purports to derive the rational credences directly from the frequencies: at time \(t=1\), Susie ought to have credence \(\frac{1}{3}\) in the coin landing heads because in the long run, approximately one-third of her wakings would be Heads-wakings (2000, pp. 143–144). My argument for being a thirder is different, insofar as it relies on an important intermediary step. In my argument, the rational credence in *H* derives from (1) the centered chance of *H*, and (2) the Centered Principal Principle. The centered chance of *H* is determined by the best deductive system. So the rational credence in *H* is determined by more than just the frequencies. The rational credence in *H* is determined by (1) the best *summary* of those frequencies, and (2) a principle linking those summaries to the credences which agents ought to have.

This is compatible with Alice’s centered credences being justified in other ways. It is compatible, for example, with Alice’s centered credences being justified by principles of rationality, such as the epistemic separability principle endorsed by Sebens and Carroll (2018, p. 40).

Assume that the electron’s spin state before measurement is \(\frac{1}{\sqrt{2}}{|{\uparrow _{x}}\rangle }+\frac{1}{\sqrt{2}}{|{\downarrow _{x}}\rangle }\), where \({|{\uparrow _{x}}\rangle }\) represents the electron being in the ‘x-spin up’ state and \({|{\downarrow _{x}}\rangle }\) represents the electron being in the ‘x-spin down’ state.

The posit is basically a slightly reworded version of the Born rule: the centered chance (for the experimenters) of ending up on a branch corresponding to eigenvalue *a* of observable \(A\), given a system prepared in state \({|{\psi }\rangle }\), is \(\vert {\langle {a\vert \psi }\rangle }\vert ^{2}\), where \({|{a}\rangle }\) is the eigenvector corresponding to *a*.

So *Th* lists all the relevant history-to-chance conditionals (Lewis 1994, p. 487). A history-to-chance conditional is a specification of what the chances are, given a complete history of the world. It is a conditional of the form “If the complete history of the world is thus-and-so, then the chances are such-and-such.”

## References

Albert, D. Z. (2015). *After physics*. Cambridge, MA: Harvard University Press.

Dorst, C. (2019). Towards a best predictive system account of laws of nature. *The British Journal for the Philosophy of Science*, *70*, 877–900.

Elga, A. (2000). Self-locating belief and the Sleeping Beauty problem. *Analysis*, *60*(2), 143–147.

Elga, A. (2004). Infinitesimal chances and the laws of nature. *Australasian Journal of Philosophy*, *82*(1), 67–76.

Hall, N. (1994). Correcting the guide to objective chance. *Mind*, *103*(412), 505–517.

Hicks, M. T. (2018). Dynamic Humeanism. *The British Journal for the Philosophy of Science*, *69*, 983–1007.

Horgan, T. (2004). Sleeping Beauty awakened: New odds at the dawn of the new day. *Analysis*, *64*(1), 10–21.

Ismael, J. (2008). Raid! Dissolving the big, bad bug. *Noûs*, *42*(2), 292–307.

Lewis, D. (1980). A subjectivist’s guide to objective chance. In W. L. Harper, R. Stalnaker, & G. Pearce (Eds.), *Ifs* (pp. 267–297). Boston, MA: D. Reidel.

Lewis, D. (1994). Humean supervenience debugged. *Mind*, *103*(412), 473–490.

Lewis, D. (2001). Sleeping Beauty: Reply to Elga. *Analysis*, *61*(3), 171–176.

Loewer, B. (2004). David Lewis’s Humean theory of objective chance. *Philosophy of Science*, *71*(5), 1115–1125.

Meacham, C. J. G. (2008). Sleeping Beauty and the dynamics of de se beliefs. *Philosophical Studies*, *138*, 245–269.

Ross, J. (2010). Sleeping Beauty, countable additivity, and rational dilemmas. *The Philosophical Review*, *119*(4), 411–447.

Sebens, C. T., & Carroll, S. M. (2018). Self-locating uncertainty and the origin of probability in Everettian quantum mechanics. *The British Journal for the Philosophy of Science*, *69*, 25–74.

Srednicki, M., & Hartle, J. (2010). Science in a very large universe. *Physical Review D*, *81*, 123524.

Weatherson, B. (2013). Ross on Sleeping Beauty. *Philosophical Studies*, *163*(2), 503–512.

Weintraub, R. (2004). Sleeping Beauty: A simple solution. *Analysis*, *64*(1), 8–10.

Wilhelm, I. (2020). Explanatory priority monism. *Philosophical Studies*. https://doi.org/10.1007/s11098-020-01478-z.

## Acknowledgements

Thanks to Austin Baker, Laura Callahan, Lisa Cassell, Kevin Dorst, Kenny Easwaran, Andy Egan, Adam Elga, Nate Flores, Jimmy Goodrich, Alex Guerrero, Alan Hájek, Terence Horgan, Jenann Ismael, Jack Spencer, an anonymous referee, the audience at the 2018 course “The Nomological” at Rutgers, the audience at the 2019 Philosophy of Physics Workshop at CUNY, the audience at the 2020 Eastern APA, and especially Barry Loewer, for much helpful feedback and discussion.


## Appendix


In this Appendix, I derive equation (3) from the Principal Principle, the New Principle, and the General Recipe. Then I show how this equation, together with equations (1) and (2), implies that, when Susie wakes up on Monday but before she is told that it is Monday, she ought to be completely confident that it is Monday. Finally, I argue that equation (5) also follows from the Principal Principle; as discussed in Sect. 2.2, one of my arguments for (2) assumes this equation.

Here is the derivation of (3) from the Principal Principle. To start, recall that the Principal Principle concerns agents’ *initial* credence functions: their credence functions at some earlier time. So let *Cr* be Susie’s initial credence function; for simplicity, suppose that *Cr* is Susie’s credence function right after being told all the details of the experiment. Following Elga (2000) and Lewis (2001), we may stipulate that \(Cr_{1}\) is equal to Susie’s initial credence function *Cr* conditionalized on the proposition \(\mathcal {H}_{1}\), where \(\mathcal {H}_{1}\) expresses the complete history of the world up to time \(t=1\). That is, for all propositions *A*,

$$Cr_{1}(A)=Cr(A\mid \mathcal {H}_{1}). \qquad (6)$$

It follows, of course, that

$$Cr_{1}(H)=Cr(H\mid \mathcal {H}_{1}).$$

Moreover, according to the Principal Principle,

$$Cr\big (H\mid \mathcal {H}_{1}\;\& \;Ch_{1}(H)=\tfrac{1}{2}\big )=\tfrac{1}{2}. \qquad (7)$$

This assumes, of course, that \(\mathcal {H}_{1}\) is admissible. But that is quite reasonable. Propositions like \(\mathcal {H}_{1}\), about the history of the world, are paradigmatic examples of admissible propositions: Lewis, for example, says as much (1980, p. 276). So this is a reasonable assumption to adopt.

Equations (6) and (7) jointly imply equation (3). To see why, note that \(Cr\left( Ch_{1}(H)=\frac{1}{2}\right) =1\), since Susie is told that the coin is fair. Because of this, (7) reduces to \(Cr(H\mid \mathcal {H}_{1})=\frac{1}{2}\). And this, in conjunction with (6), implies (3).
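Spelled out, the reduction runs as follows. This is a sketch, assuming that the Principal Principle instance takes the form \(Cr\big (H\mid \mathcal {H}_{1}\;\& \;Ch_{1}(H)=\frac{1}{2}\big )=\frac{1}{2}\).

```latex
% Since Susie is certain the coin is fair, the chance conjunct can be dropped.
\begin{align*}
  Cr(H\mid \mathcal{H}_{1})
    &= Cr\big(H\mid \mathcal{H}_{1}\;\&\;Ch_{1}(H)=\tfrac{1}{2}\big)
       && \text{since } Cr\big(Ch_{1}(H)=\tfrac{1}{2}\big)=1\\
    &= \tfrac{1}{2}
       && \text{by the Principal Principle, (7)}\\
  Cr_{1}(H)
    &= Cr(H\mid \mathcal{H}_{1}) = \tfrac{1}{2}
       && \text{by conditionalization, (6); this is (3).}
\end{align*}
```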

One might object to this derivation by objecting to (6). Perhaps the credence function which Susie ought to have, at time \(t=1\), is not obtained by standard Bayesian conditionalization on a history proposition like \(\mathcal {H}_{1}\). Perhaps the credence function which Susie ought to have, at time \(t=1\), is obtained by some other update procedure. Or perhaps the credence function which Susie ought to have, at time \(t=1\), is obtained by conditionalizing her initial credence function on some proposition other than \(\mathcal {H}_{1}\).

Regardless, (6) still holds. For starters, other update procedures can be used to derive that equation. Given certain reasonable assumptions, for instance, (6) follows from Meacham’s compartmentalized conditionalization rule.

In addition, \(\mathcal {H}_{1}\) is the right proposition for Susie to use when updating her credences. For \(\mathcal {H}_{1}\) need not be an uncentered proposition: it need not describe the uncentered history of the world. \(\mathcal {H}_{1}\) may be a centered proposition: it can describe various centered and uncentered facts that obtain throughout the world’s history. So \(\mathcal {H}_{1}\) can be a history proposition which implies that at time \(t=1\), it is either Monday or Tuesday. \(\mathcal {H}_{1}\) could be Susie’s total evidence at \(t=1\), for instance.

I now derive equation (3) from a different version of the Principal Principle, one proposed by Hall (1994, p. 511) and Lewis (1994, p. 487). Let *A* be a proposition. Let *t* be a time. Let *Ch* be a chance function defined, at time *t*, over an algebra of which *A* is a member. Let \(\mathcal {H}_{t}\) be the proposition that completely characterizes the history of the world up to time *t*. Let *Th* be the proposition expressing the complete theory of chance at the actual world.^{Footnote 20} Then a rational agent’s initial credence function *Cr* ought to satisfy the following equation.

$$Cr(A\mid \mathcal {H}_{t}\;\& \;Th)=Ch_{t}(A\mid Th)$$

Call this the ‘New Principle’.

To streamline the derivation of (3) from the New Principle, suppose that the complete chance theory *Th* only specifies the chances of the coin landing heads; no other chancy events happen in the world. So *Th* is the proposition that the chance of the coin landing heads is \(\frac{1}{2}\) and the chance of the coin landing tails is \(\frac{1}{2}\). Therefore, \(Ch_{1}(H\mid Th)=\frac{1}{2}\). Since Susie was told all the details of the experiment, Susie knows the chance theory *Th*. In conjunction with (6), it follows that

$$Cr(H\mid \mathcal {H}_{1}\;\& \;Th)=Cr_{1}(H).$$

By the New Principle,

$$Cr(H\mid \mathcal {H}_{1}\;\& \;Th)=Ch_{1}(H\mid Th).$$

Therefore, \(Cr_{1}(H)=\frac{1}{2}\); that is, (3) holds.

Now consider a version of the Principal Principle due to Ismael (2008, p. 298). Let *A* and *t* be as before. For each complete theory of chance *Th*, let \(Ch_{Th}(A)\) be the chance of *A* at *t* according to *Th*, and let \(a_{Th}\) be an agent’s subjective assessment of the probability of *Th* at *t*. Then in order for this agent to be rational, her credence function *Cr* at *t* ought to satisfy the following equation.

$$Cr(A)=\sum _{Th} a_{Th}\cdot Ch_{Th}(A)$$

Call this the ‘General Recipe’.

To streamline the derivation of (3) from the General Recipe, suppose once again that the actual world’s complete chance theory *Th* only specifies the chances of the coin landing heads. Let \(Ch_{1,Th}\) be the chances, according to *Th*, at time \(t=1\). Then \(Ch_{1,Th}(H)=\frac{1}{2}\). Because Susie was told all the details of the experiment, it follows that at \(t=1\), her subjective assessment of the probability of *Th* is 1. So her subjective assessments of the probabilities of all other chance theories are equal to 0. Substituting these values into the General Recipe yields \(Cr_{1}(H)=\frac{1}{2}\); that is, (3) holds.
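The substitution can be written out explicitly, with *Th* the actual chance theory and \(a_{Th'}=0\) for every other theory \(Th'\):

```latex
\begin{align*}
  Cr_{1}(H)
    &= \sum_{Th'} a_{Th'}\cdot Ch_{1,Th'}(H)
       && \text{the General Recipe, applied at } t=1\\
    &= 1\cdot Ch_{1,Th}(H) \;+ \sum_{Th'\ne Th} 0\cdot Ch_{1,Th'}(H)
       && \text{since } a_{Th}=1\\
    &= \tfrac{1}{2}.
\end{align*}
```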

Now let us see why equations (1), (2), and (3) imply that \(Cr_{1}(Mo) = 1\). To start, note that since the conditional credence in (2) is well-defined, it follows that \(Cr_{1}(Mo)>0\). Therefore,

where (2) yields the first line, (1) yields the fifth line, and (3) yields the eighth line. Multiplying through by \(2Cr_{1}(Mo)\) yields \(Cr_{1}(Mo)=1\). In other words, at time \(t=1\)—that is, before Susie is told that it is Monday—Susie must be completely confident that it is Monday.
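The missing chain of equalities can be sketched as follows. This is a reconstruction under two assumptions about the main-text equations, which do not appear in this excerpt: that (1) is \(Cr_{1}(H\;\& \;Tu)=0\) and that (2) is \(Cr_{1}(H\mid Mo)=\frac{1}{2}\).

```latex
% A sketch of the derivation of Cr_1(Mo)=1 from (1), (2), and (3),
% under the stated assumptions about what (1) and (2) say.
\begin{align*}
  \tfrac{1}{2}
    &= Cr_{1}(H\mid Mo)
       && \text{by (2)}\\
    &= \frac{Cr_{1}(H\;\&\;Mo)}{Cr_{1}(Mo)}
       && \text{since } Cr_{1}(Mo)>0\\
    &= \frac{Cr_{1}(H)-Cr_{1}(H\;\&\;Tu)}{Cr_{1}(Mo)}
       && \text{Mo and Tu partition the cases}\\
    &= \frac{Cr_{1}(H)}{Cr_{1}(Mo)}
       && \text{by (1)}\\
    &= \frac{1/2}{Cr_{1}(Mo)}
       && \text{by (3)}
\end{align*}
```

Multiplying through by \(2Cr_{1}(Mo)\) then yields \(Cr_{1}(Mo)=1\).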

Now for the derivation of (5) from the Principal Principle.^{Footnote 21} To start, let \(Ch_{2}\) be the objective chance function at time \(t=2\). Because the coin is fair, the chance of heads at \(t=2\) is \(\frac{1}{2}\); that is, \(Ch_{2}(H)=\frac{1}{2}\). By definition, \(Cr_{2}\) is equal to *Cr* conditionalized on the proposition \(\mathcal {H}_{2}\), where \(\mathcal {H}_{2}\) expresses the complete history of the world up to time \(t=2\). That is, for all propositions *A*,

$$Cr_{2}(A)=Cr(A\mid \mathcal {H}_{2}). \qquad (8)$$

It follows, of course, that

$$Cr_{2}(H)=Cr(H\mid \mathcal {H}_{2}).$$

Moreover, according to the Principal Principle,

$$Cr\big (H\mid \mathcal {H}_{2}\;\& \;Ch_{2}(H)=\tfrac{1}{2}\big )=\tfrac{1}{2}. \qquad (9)$$

These two equations—along with the fact that Susie knows the chance, at time \(t=2\), of the coin landing heads—jointly imply (5).^{Footnote 22}
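The implication can be spelled out as a short chain. This is a sketch under two assumptions: that the Principal Principle instance here has the form \(Cr\big (H\mid \mathcal {H}_{2}\;\& \;Ch_{2}(H)=\frac{1}{2}\big )=\frac{1}{2}\), and that (5), which does not appear in this excerpt, is the equation \(Cr_{2}(H)=\frac{1}{2}\).

```latex
\begin{align*}
  Cr_{2}(H)
    &= Cr(H\mid \mathcal{H}_{2})
       && \text{conditionalization on } \mathcal{H}_{2}\\
    &= Cr\big(H\mid \mathcal{H}_{2}\;\&\;Ch_{2}(H)=\tfrac{1}{2}\big)
       && \text{since Susie knows } Ch_{2}(H)=\tfrac{1}{2}\\
    &= \tfrac{1}{2}
       && \text{by the Principal Principle.}
\end{align*}
```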

Equation (9) assumes, of course, that \(\mathcal {H}_{2}\) is admissible. And one might deny that. In fact, Lewis (2001) does. For according to Lewis, *Mo* is inadmissible. Since \(\mathcal {H}_{2}\) implies *Mo*, it follows that \(\mathcal {H}_{2}\) is inadmissible as well.

I disagree with Lewis: *Mo* is admissible, and \(\mathcal {H}_{2}\) is as well. For that is the most intuitively plausible view, and I see no good reasons to think otherwise. And in particular, I think that Lewis’s argument for the inadmissibility of *Mo* does not succeed. According to Lewis, *Mo* is inadmissible because (1) it is “about the future” (2001, p. 175), and (2) *Mo* changes Susie's credence in *H*, the proposition that the coin lands heads. But it is not clear that *Mo* is ‘about’ the future in any sense relevant to Susie’s experiment. *Mo* seems to be ‘about’ the present. And even if *Mo* is ‘about’ the future in some way, it does not follow that *Mo* must be inadmissible simply because it changes Susie’s credence in *H*. Plenty of propositions are about the future, and change agents’ credences in other propositions, but are perfectly admissible. For example, suppose Billy is told that two fair coins will be tossed in two days. Because of that, his credence in the proposition that at least one coin comes up heads—call this proposition *C*—is \(\frac{3}{4}\), as that is the current chance of *C*. The next day, Billy learns the proposition *L*: one of the coins has been lost, and so only one coin will be flipped. This proposition is ‘about’ the future, in the sense that it tells Billy something about the future coin-flipping event. And it changes Billy’s credence in *C* to \(\frac{1}{2}\), since it changes the chance of *C* to \(\frac{1}{2}\) as well. But *L* is not inadmissible. Billy does everything correctly when he uses the Principal Principle to adjust his credence in *C* to the new chance of *C*.
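Billy’s two credences in the example are fixed by simple chance computations:

```latex
\begin{align*}
  \text{Two fair coins:}\quad
    & Ch(C) = 1 - Ch(\text{both land tails}) = 1 - \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{3}{4}\\
  \text{One fair coin (after learning } L\text{):}\quad
    & Ch(C) = \tfrac{1}{2}
\end{align*}
```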

## About this article

### Cite this article

Wilhelm, I. Centering the Principal Principle. *Philos Stud* **178**, 1897–1915 (2021). https://doi.org/10.1007/s11098-020-01515-x