Volume 21, Issue 1, pp 193–196

Objective Bayesianism defended?

Jon Williamson: In defence of objective Bayesianism. Oxford: Oxford University Press, 2010, vi+185pp, £44.95 HB
Book Review

Objective Bayesianism is the view that an agent’s degrees of belief should satisfy three constraints in order to be rational. First, they should satisfy the probability calculus. Second, they should be sensitive to the agent’s evidence (e.g., of physical chances, frequencies or correlations). Third, and finally, they should otherwise be maximally non-committal (or ‘equivocate between basic outcomes’ (iii)). Williamson calls these the probability, calibration and equivocation norms.

To illustrate, imagine that an agent knows (or believes) of a die only that it is a regular tetrahedron, with sides labelled ‘i’, ‘ii’, ‘iii’ and ‘iv’.1 On the probability norm, her degree of belief that it lands on one of these numbers when rolled should be unity, on her background information, i.e., P(i, b) + P(ii, b) + P(iii, b) + P(iv, b) = 1; one of the possible outcomes must occur, and the probability of something that must occur is unity. Then on the equivocation norm, she should remain maximally non-committal about which outcome will occur. Hence, her degrees of belief in each of the possibilities should be equal if they are to be rational, i.e., P(i, b) = P(ii, b) = P(iii, b) = P(iv, b) = 0.25.

To see how the calibration norm works, adjust the previous scenario by letting the agent know (or believe), in addition, that the die has landed on ‘ii’ two fifths of the time in a large number of rolls. Now her degree of belief in a ‘ii’ result on a roll should match the data she has, P(ii, b′) = 0.4. Then she should also equivocate on the remaining possibilities, such that P(i, b′) = P(iii, b′) = P(iv, b′) = 0.2. As before, the rational degrees of belief in each of the possible outcomes have unique values.
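
These numbers are exactly what entropy maximisation delivers. As a rough illustration (my toy code, not Williamson's), here is a minimal sketch in Python: with P(ii) fixed at 0.4 by calibration, the distribution over the four outcomes with maximal Shannon entropy splits the remaining 0.6 evenly.

```python
# A sketch of the equivocation norm as entropy maximisation (my toy code,
# not Williamson's): calibration fixes P(ii) = 0.4, and we search for the
# distribution over {i, ii, iii, iv} with maximal Shannon entropy.
import math

def entropy(dist):
    # Shannon entropy H(p) = -sum_i p_i log p_i, with 0 log 0 read as 0.
    return -sum(p * math.log(p) for p in dist if p > 0)

# Coarse grid search over ways of splitting the remaining 0.6 mass
# between outcomes i and iii (p_iv is whatever mass is left over).
candidates = []
for j in range(1, 60):
    for k in range(1, 60):
        p_i, p_iii = j * 0.01, k * 0.01
        p_iv = 0.6 - p_i - p_iii
        if p_iv > 0:
            dist = (p_i, 0.4, p_iii, p_iv)
            candidates.append((entropy(dist), dist))

best_dist = max(candidates)[1]  # approximately (0.2, 0.4, 0.2, 0.2)
```

The even split wins because Shannon entropy is strictly concave: equivocating as far as the evidence allows means spreading the unconstrained probability mass uniformly.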

Objective Bayesianism is therefore a relative of the better-known logical interpretation developed by Keynes (1921). The equivocation-style rule of Keynes’s interpretation is the notorious principle of indifference—‘that equal probabilities must be assigned to each of several arguments, if there is an absence of positive ground for assigning unequal ones’ (Keynes 1921, 42)—whereas the maximum entropy principle of Jaynes (1957) fulfils this role in the objective Bayesian newcomer. Each rule gives the same result in some circumstances, e.g., when background information involves only an enumeration of possible outcomes. So we are left with the question ‘How close a relative to the logical interpretation is the objective Bayesian one?’ This is important not merely as a matter of historical curiosity—although Keynes’s Treatise on Probability, which is one of the greatest books on probability ever written, is typically underappreciated and often misunderstood2—but also because it is widely believed that the logical interpretation is untenable. Appreciating the relationship between the logical and the objective Bayesian views of probability will help us to ascertain which criticisms of the former are also salient criticisms of the latter.

I have a prior difference of opinion with Williamson on this question, which I do not want to revisit in any depth here. (The aim of his book is hardly to satisfy me!) So let me just say I disagree that Keynes disregarded empirical constraints (Williamson 2005, 68–70) or that adopting a logical interpretation entails doing so, and think that the maximum entropy principle could replace the principle of indifference in a logical view (cf. Rowbottom 2008). Williamson (22) now gives a little ground on the first issue, by saying only that the ‘logical interpretation typically focuses on equivocation at the expense of calibration’, but does not address the second. I think the real difference (between possible logical interpretations and possible objective Bayesian interpretations) is just that the former consider ‘probability to be fundamentally a logical relation that only indirectly concerns degree of belief’ whereas the latter interpret ‘probability directly in terms of degree of belief’ (22). Only as a result of this does the objective Bayesian take ‘probability to be relative to an agent’ (ibid.) in a way that a logical theorist might not. (Much will hinge on how logical relations between propositions are construed, e.g., on the ontology of propositions one employs.) The logical interpretation has many different (possible and actual) variants. Similarly, Williamson (2) ‘departs from orthodoxy’ in his ‘version of objective Bayesianism’.

I mention this to emphasise that a complete defence of objective Bayesianism would show that it does not succumb to the key objections to (historically significant versions of) the logical interpretation, such as the geometrical paradox of Bertrand (1889, 4): ‘Draw a random chord in a circle. What is the probability that it is shorter than the side of the inscribed equilateral triangle?’ [my translation]. Alas, Williamson provides only a brief mention of this paradox, which Shackel (2007) has recently argued is insoluble by the means proposed by Jaynes (1973), the architect of the maximum entropy principle.
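
For readers unfamiliar with the paradox, a quick Monte Carlo sketch (my code, not from the book) shows why it bites: three equally natural ways of ‘drawing a random chord’ in a unit circle yield three different probabilities that the chord is shorter than the side of the inscribed equilateral triangle (which has length √3).

```python
# Monte Carlo illustration of Bertrand's paradox (my toy simulation, not
# from the book): three natural samplings of a "random chord" in a unit
# circle disagree about P(chord shorter than the inscribed triangle's side).
import math
import random

random.seed(0)
N = 200_000
SIDE = math.sqrt(3)  # side length of the inscribed equilateral triangle

def frac_shorter(chords):
    return sum(1 for c in chords if c < SIDE) / N

# Method 1: pick two random endpoints on the circumference.
p1 = frac_shorter(
    2 * math.sin(abs(random.uniform(0, 2 * math.pi)
                     - random.uniform(0, 2 * math.pi)) / 2)
    for _ in range(N)
)  # tends to 2/3

# Method 2: pick a random distance d from the centre along a random radius.
p2 = frac_shorter(2 * math.sqrt(1 - random.random() ** 2) for _ in range(N))  # tends to 1/2

# Method 3: pick the chord's midpoint uniformly in the disc (r^2 uniform on [0, 1]).
p3 = frac_shorter(2 * math.sqrt(1 - random.random()) for _ in range(N))  # tends to 3/4
```

Since each sampling procedure looks like an equally legitimate way of equivocating, the principle of indifference underdetermines the answer; Jaynes (1973) argued that invariance requirements single out 1/2, and Shackel (2007) disputes that this works.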

Nevertheless, Williamson (§9.1.3) does explain his overall strategy for tackling this kind of paradox. His view is that sometimes there are several different permissible ways to equivocate, from which one may freely select on his equivocation norm. But this seems to be at odds with the main argument that Williamson gives for obeying the equivocation norm (in §3.4.4), namely that ‘the objective Bayesian decision is more cautious [than others that might legitimately be made in the absence of the equivocation norm]’. Wouldn’t it be more cautious still to refrain from equivocating in one way, rather than another, when one has recognised that there are multiple ways to equivocate? In short, why not equivocate over ways to equivocate? I suspect Williamson’s answer—which is a good answer, as far as it goes—would be that we have to choose a probability function when we’re forced to act. But I am not sure that this forces us to adopt relevant degrees of belief, unless these are construed purely as dispositions to bet (or forecast) in particular ways (following De Finetti).

Williamson (32–38) does indeed advocate a betting interpretation of degrees of belief. But it is then misleading to claim that ‘our degrees of belief guide our actions and our actions are tantamount to bets’ (57). (Or, for that matter, to say that ‘degrees of belief quantify strength of conviction’ (171).) On the contrary, on the betting interpretation, an agent’s degrees of belief just are either her betting quotients in actual betting scenarios or the betting quotients she is disposed to select in possible betting scenarios. (The problem with the first option is that degrees of belief don’t exist in the absence of actual bets. Williamson opts for the latter route.) In fact, Williamson explicitly denies that ‘betting behaviour could be used to measure an agent’s actual degrees of belief’ [my emphasis] (33) and emphasises that he uses ‘the betting set-up to specify the meaning of rational degree of belief’ (ibid.).

The resultant problem is that degrees of belief (and hence probabilities based on degrees of belief) only concern a peculiar class of actions, namely betting actions. Worse, only a special class of betting actions—e.g., where the stakes are not trifling or gigantic, and the bettor isn’t sure whether he’ll be betting for or against an event occurring—are covered by the kind of Dutch Book scenarios that Williamson appeals to in order to justify the probability norm for degrees of belief.3 In summary, when degrees of belief are defined in such a way, probability is not the very guide of life that Williamson would have it be.4
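
To make the Dutch Book justification concrete, here is a toy example with my own numbers (not Williamson’s): an agent whose betting quotients for an event and its negation sum to more than one can be booked for a sure loss.

```python
# Toy Dutch Book (illustrative numbers of my own, not from the book): the
# agent's betting quotients for A and not-A sum to 1.1, violating the
# probability norm, so a bookie can guarantee the agent a loss either way.
q_A, q_notA = 0.3, 0.8      # agent's betting quotients (incoherent: sum > 1)
stake = 10.0                # stake on each bet

# The bookie has the agent bet on both A and not-A: the agent pays
# q * stake up front for each bet, and receives the stake back for
# whichever of the two events occurs (exactly one must).
cost = (q_A + q_notA) * stake   # 11.0 paid out in total
payoff_if_A = stake - cost      # exactly one bet pays the 10.0 stake...
payoff_if_notA = stake - cost   # ...so the agent nets -1.0 in either state
```

A symmetric book exists against quotients summing to less than one (the agent is made to bet against both events); only probabilistically coherent quotients block every such book.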

Let’s return briefly to the maximum entropy principle, which prompted the discussion of degrees of belief. The concern is the exact connection between action and belief. If I’m forced to bet, I’ll bet as best I can. But this need not require selecting betting quotients that reflect what I really believe. So is the maximum entropy principle just a principle for selecting rational betting quotients? To settle this, and resolve the difficulty above, a careful and sustained treatment of how we should understand degrees of belief is required; Williamson’s book would have benefitted from this, and from engagement with the work of Eriksson and Hájek (2007) on the issue.

Having used so much of my limited space, I must now stop being critical (as is my wont) in order to avoid giving a false impression of this excellent book, which is ‘as much about theory-building as about meeting objections’ (2). Williamson (chapter 4) does a nice job, especially, of showing how objective Bayesian updating need not involve Bayesian conditionalisation. He also deals expertly with technical issues, such as showing how entropy maximisation can be achieved in a computationally tractable fashion (chapter 6). Overall, Williamson succeeds in the aim of defending objective Bayesianism against key criticisms and therefore makes an important contribution to the literature on epistemic interpretations of probability. There are also some interesting peripheral discussions, e.g., concerning how evidence (and what is sometimes, rather unfortunately, called ‘background knowledge’) need not be justified or even true (§1.4.1). Many formal epistemologists will heartily agree with Williamson on this.

In closing, I should like to congratulate Williamson for producing this fine book. I have found it entertaining and intellectually stimulating. Anyone interested in epistemic interpretations of probability should give it a look.


  1.

    Such ‘knows only’ (or ‘believes only’) scenarios, common in the literature on decision theory, are more troublesome than they first appear. It is difficult to specify what precisely we are to assume that the agent knows (or believes) in any given case. (Here, for example, I want you to assume that the agent isn’t familiar with rolls of similar objects, e.g., other regular dice, but is familiar with elementary mathematics.)

  2.

    For example, it was certainly not, in my view, part of a probabilistic ‘dark ages’ (24)!

  3.

    There are also several well-known problems with the Dutch book argument, which Williamson does not do justice to. For example, a bettor might rationally select a betting quotient of zero for an event that she is sure will occur; see Rowbottom (2007a). Similar problems have been discussed in considerable depth elsewhere, e.g. by Seidenfeld et al. (1990) and Hájek (2005). This does not raise an insurmountable obstacle for the objective Bayesian project, however, because Williamson could instead have appealed to De Finetti’s notion of a forecast, and appropriate scoring rules, as explained by Schervish et al. (2009).

  4.

    Williamson’s position appears to be better suited to a dispositional account of degrees of belief, analogous to the dispositional account of belief in the philosophy of mind. (Betting scenarios could then be used to indicate dispositional profiles.) See Schwitzgebel (2001) and Rowbottom (2007b).


  1. Bertrand, J. 1889. Calcul des probabilités. New York: Chelsea (3rd edn., c.1960).
  2. Eriksson, L., and A. Hájek. 2007. What are degrees of belief? Studia Logica 86: 183–213.
  3. Hájek, A. 2005. Scotching Dutch books. Philosophical Perspectives 19: 139–151.
  4. Jaynes, E.T. 1957. Information theory and statistical mechanics. Physical Review 106: 620–630.
  5. Jaynes, E.T. 1973. The well posed problem. Foundations of Physics 4: 477–492.
  6. Keynes, J.M. 1921. A treatise on probability. London: Macmillan.
  7. Rowbottom, D.P. 2007a. The insufficiency of the Dutch book argument. Studia Logica 87: 65–71.
  8. Rowbottom, D.P. 2007b. In-between believing and degrees of belief. Teorema 26: 131–137.
  9. Rowbottom, D.P. 2008. On the proximity of the logical and ‘objective Bayesian’ interpretations of probability. Erkenntnis 69: 335–349.
  10. Schervish, M.J., T. Seidenfeld, and J.B. Kadane. 2009. Proper scoring rules, dominated forecasts, and coherence. Decision Analysis 6: 202–221.
  11. Schwitzgebel, E. 2001. In-between believing. Philosophical Quarterly 51: 76–82.
  12. Seidenfeld, T., M.J. Schervish, and J.B. Kadane. 1990. When fair betting odds are not degrees of belief. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 517–524.
  13. Shackel, N. 2007. Bertrand’s paradox and the principle of indifference. Philosophy of Science 74: 150–175.
  14. Williamson, J.O.D. 2005. Bayesian nets and causality: Philosophical and computational foundations. Oxford: Oxford University Press.

Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

  1. Faculty of Philosophy, University of Oxford, Oxford, UK
