How to be imprecise and yet immune to sure loss

  • Decision-making and hypothetical reasoning
  • Published in: Synthese

Abstract

Towards the end of Decision Theory with a Human Face (2017), Richard Bradley discusses various ways a rational yet human agent, who, due to lack of evidence, is unable to make some fine-grained credibility judgments, may nonetheless make systematic decisions. One proposal is that such an agent can simply “reach judgments” on the fly, as needed for decision making. In effect, she can adopt a precise probability function to serve as proxy for her imprecise credences (or set of probability functions) at the point of decision, and then subsequently abandon the proxy as she proceeds to learn more about the world. Contra Bradley, I argue that an agent who employs this strategy does not necessarily act like a precise Bayesian, since she is not necessarily immune to sure loss in diachronic, as well as synchronic, settings. I go on to suggest a method for determining a proxy probability function (via geometric averaging) whereby the agent does act like a precise Bayesian, so understood.

Notes

  1. Note that the description of the “standard model” here follows Jeffrey’s (1965) formulation, as opposed to, say, that of Savage (1954). Indeed, Bradley argues from the outset of the book that Jeffrey’s model provides the best grounding for an account of rationality for human agents, in that it permits decision problems to be modelled at whatever level of granularity best reflects the agent’s viewpoint.

  2. Here Bradley speaks of opinions on “propositions”, but one can substitute the term “prospects”, in keeping with the terminology that we have been using thus far.

  3. Note that there are some further subtleties suppressed here. For instance, it is important that the agent’s preference ordering be coherently extendable in a non-trivial way, i.e., without relying on an attitude of indifference. Moreover, there is a question of whether it suffices for rationality that there merely exists a coherent extension of an agent’s attitudes, or whether every extension of an agent’s attitudes must be coherent. The question turns on whether or not rationality requires logical omniscience; Bradley thinks not but allows that this is debatable. This issue is orthogonal to our discussion and so will not be addressed here.

  4. Bradley provides a representation theorem for Imprecise Probabilism along these lines that is much simpler than other such representation theorems in the literature.

  5. Bradley elsewhere discusses more serious kinds of uncertainty and associated caution that cannot be reconciled with the standard model: so-called option uncertainty, when an agent cannot specify the possible consequences of performing one or another action, and modal uncertainty, when an agent is unsure of the space of relevant contingencies.

  6. Some do propose a more prescriptive version of Imprecise Probabilism, whereby the withholding of judgment is sometimes mandated. Bradley (p. 360) attributes such a view to Levi (e.g., Levi 1978), but that is a much more substantial position, requiring a lot more defence than is given above.

  7. Bradley suggests that the choice of O can be guided by, and thus dependent on, the decision problem at hand. This does not help to resolve, and in fact worsens, the problems that will be introduced in the next section. As such, it suffices for our purposes to assume that the relevant outcome partition, O, is constant across decision situations.

  8. See, e.g., Jon Williamson’s (2010) formulation of “Objective Bayesianism”.

  9. See Genest and Zidek (1986) for a classic survey of these methods.

  10. Following Bradley, the rule is stated here for the case of a finite set of probability functions. Typically, however, imprecise credal sets are taken to be convex, infinite sets. In that case, the weighting function is continuous and the sum must be replaced by an integral.
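As a concrete illustration of the finite case, here is a minimal sketch of weighted linear averaging, with probability functions represented as dictionaries over a discrete outcome space (the function name, the dictionary representation, and the equal-weights default are illustrative choices, not the paper’s notation):

```python
def linear_pool(credal_set, weights=None):
    """Weighted arithmetic average of a finite list of probability
    functions, each given as a dict mapping outcomes to probabilities."""
    n = len(credal_set)
    if weights is None:
        weights = [1.0 / n] * n  # equal weights by default
    return {o: sum(w * P[o] for w, P in zip(weights, credal_set))
            for o in credal_set[0]}

P1 = {"rain": 0.2, "dry": 0.8}
P2 = {"rain": 0.6, "dry": 0.4}
print(linear_pool([P1, P2]))  # equal-weight average: rain 0.4, dry 0.6
```

Since each output value is a convex combination of probabilities, the result is automatically a probability function; no renormalisation step is needed (unlike the geometric rule discussed later).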

  11. In other contexts, the interpretation and assignment of weights is considered a source of difficulty for linear and other pooling methods. Bradley (pp. 425–429) raises various concerns. But one could argue that equal weights are uniquely appropriate in the context of Imprecise Bayesianism. In any case, this is not the focus of our discussion.

  12. Aczél and Wagner (1980) and McConway (1981) show that only weighted linear average pooling functions satisfy both eventwise independence, which requires that the group probability for any event depend only on the individual probabilities for that event, and unanimity preservation, which requires that if all probability functions in the set \(\{P_i\}\) are equivalent, then \(P_0\) should also be equivalent to these probability functions.
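Stated symbolically (the notation is mine, in keeping with the note): for a pooling operator that maps \((P_1, \ldots, P_n)\) to \(P_0\), eventwise independence requires that for every event \(A\) there is some function \(f_A\) with \(P_0(A) = f_A(P_1(A), \ldots, P_n(A))\), while unanimity preservation requires that \(P_1 = \cdots = P_n = P\) implies \(P_0 = P\).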

  13. The expression “all those probability functions” is in inverted commas because this expression is not really apt in the case of a finite set. It serves to signify, however, that the imprecise credal set reflects the full scope of uncertainty and also the symmetries of the scenario, which is all that matters for Linear Averaging. Note that the inverted commas are omitted for subsequent uses of the expression in this section and the next.

  14. This is an ad hoc instance of the “pick the best” class of methods. It is also a trivial case of linear averaging, as it amounts to the dictatorial case whereby the probability function labelled a is assigned the full weight (of 1).

  15. This is known as point-wise conditionalisation, whether standard, or, say, Jeffrey conditionalisation. The point-wise approach is the most popular account of rational learning for imprecise agents.
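For the standard case, point-wise conditionalisation is simply ordinary conditionalisation applied to every member of the credal set. A minimal sketch, with the same illustrative dictionary representation as before (all names are mine):

```python
def condition(P, evidence):
    """Standard conditionalisation of a discrete probability function
    (a dict outcome -> probability) on an event, given as a set of outcomes."""
    pe = sum(p for o, p in P.items() if o in evidence)
    if pe == 0:
        raise ValueError("cannot conditionalise on a zero-probability event")
    return {o: (p / pe if o in evidence else 0.0) for o, p in P.items()}

def pointwise_condition(credal_set, evidence):
    """Point-wise conditionalisation: update every member of the set."""
    return [condition(P, evidence) for P in credal_set]

credal_set = [{"a": 0.1, "b": 0.3, "c": 0.6}, {"a": 0.4, "b": 0.4, "c": 0.2}]
updated = pointwise_condition(credal_set, {"a", "b"})
# First member becomes a: 0.25, b: 0.75; second becomes a: 0.5, b: 0.5
```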

  16. The specifics depend on how strongly a rational agent should believe she would choose respective future options that she is indifferent between, were she confronted with the choice in question. This is a complicated issue and deserves more thorough treatment than what can be given here.

  17. For formal results to this effect, see Hammond (e.g., 1988). See also Steele (2010) for discussion of how differing decision theories fare on the various proposed criteria for dynamic coherence.

  18. Note that Hammond’s results (e.g., 1988) concerning dynamic coherence also extend to learning rules.

  19. Note that Williamson’s (2011) demonstration that his maximum entropy rule does not generally coincide with Bayesian conditionalisation is not relevant here. As pointed out in Sect. 3, our MaxEnt rule differs from the one that Williamson articulates. On our MaxEnt rule, the candidate probability functions for maximum entropy do not change as the agent learns, apart from each being conditioned on the evidence. On Williamson’s rule, by contrast, the candidate probability functions change as the agent learns, to reflect the changing evidential constraints. These are very different approaches to belief revision.
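The MaxEnt rule as described in this note — a fixed candidate set, each member conditioned on the evidence, and the maximum-entropy member then selected as the precise proxy — can be sketched as follows for a finite candidate set (the helper names and dictionary representation are mine, not the paper’s or Williamson’s notation):

```python
from math import log

def shannon_entropy(P):
    """Shannon entropy of a discrete probability function."""
    return -sum(p * log(p) for p in P.values() if p > 0)

def condition(P, evidence):
    """Standard conditionalisation on an event (a set of outcomes)."""
    pe = sum(p for o, p in P.items() if o in evidence)
    return {o: (p / pe if o in evidence else 0.0) for o, p in P.items()}

def maxent_proxy(credal_set, evidence=None):
    """Condition each fixed candidate on the evidence (if any), then
    select the candidate with maximal entropy as the precise proxy."""
    candidates = credal_set
    if evidence is not None:
        candidates = [condition(P, evidence) for P in candidates]
    return max(candidates, key=shannon_entropy)
```

The key point of the note is reflected in the code: the candidate set `credal_set` itself never changes; learning only conditions its members before the maximum-entropy selection is made.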

  20. In fact, Genest (1984) shows that Geometric Averaging satisfies both external Bayesianity and unanimity preservation. These two axioms do not quite uniquely characterise geometric pooling, as shown by Genest et al. (1986).
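External Bayesianity — that pooling commutes with conditionalisation — can be checked numerically. Below is a minimal equal-weight sketch for a finite set over a discrete outcome space (all names and the worked numbers are illustrative, not the paper’s):

```python
from math import prod

def geometric_pool(credal_set):
    """Equal-weight geometric average of a finite list of discrete
    probability functions, renormalised to sum to one."""
    n = len(credal_set)
    raw = {o: prod(P[o] for P in credal_set) ** (1.0 / n)
           for o in credal_set[0]}
    Z = sum(raw.values())  # normalising constant
    return {o: r / Z for o, r in raw.items()}

def condition(P, evidence):
    """Standard conditionalisation on an event (a set of outcomes)."""
    pe = sum(p for o, p in P.items() if o in evidence)
    return {o: (p / pe if o in evidence else 0.0) for o, p in P.items()}

P1 = {"a": 0.1, "b": 0.3, "c": 0.6}
P2 = {"a": 0.4, "b": 0.4, "c": 0.2}
E = {"a", "b"}

pool_then_learn = condition(geometric_pool([P1, P2]), E)
learn_then_pool = geometric_pool([condition(P1, E), condition(P2, E)])
# The two orders agree up to rounding: external Bayesianity.
```

Running the same check with the weighted linear average instead generally gives different answers for the two orders, which is the contrast the main text exploits.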

  21. As per Linear Averaging, the rule is stated for the case of a finite set of probability functions. It also assumes a discrete outcome space. See Genest (1984) for a statement of the rule for a finite set of probability density functions over a continuous outcome space. It is not so straightforward to state the rule for an infinite set of probability (density) functions. This would require expressing the products as an exponential raised to the log of the product, which could itself be expressed as an integral over the log of the product terms.

  22. One might take the exception here to be moot if it is held that the outcome space excludes outcomes that all \(P_i\) assign zero probability. In that case, the outcome space would change over time if the agent were to learn that some proposition is true and thus has probability one (and that its complement is false and thus has zero probability).

  23. Dietrich and List (2016) propose a stronger constraint to ensure well-definedness: that the probability functions \(P_i\) that are inputs to geometric pooling must be regular (i.e., they must not assign probability zero to any outcome). (This is only a stronger constraint if outcomes that all \(P_i\) assign zero probability may yet be part of the outcome space.) The important thing, as Dietrich and List note, is to rule out the possibility that, for every outcome, there is some probability function \(P_i\) that assigns zero probability to that outcome, in which case the geometric average for all outcomes is zero and \(P_0\) is not a probability function. This scenario can in fact be avoided by an even weaker constraint than the one proposed here, but there would be a loss of simplicity, especially if one were to account for the changes that may result from learning.
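The failure mode that this constraint rules out is easy to exhibit: if every outcome is assigned zero probability by some \(P_i\), the unnormalised geometric average is identically zero and cannot be renormalised. A minimal check (names illustrative):

```python
from math import prod

def geometric_pool_checked(credal_set):
    """Equal-weight geometric pooling with an explicit well-definedness check."""
    n = len(credal_set)
    raw = {o: prod(P[o] for P in credal_set) ** (1.0 / n)
           for o in credal_set[0]}
    Z = sum(raw.values())
    if Z == 0:
        raise ValueError("every outcome has zero probability under some P_i; "
                         "the geometric average cannot be normalised")
    return {o: r / Z for o, r in raw.items()}

# Each outcome is ruled out by one of the two functions:
P1 = {"a": 0.0, "b": 1.0}
P2 = {"a": 1.0, "b": 0.0}
# geometric_pool_checked([P1, P2]) raises ValueError
```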

  24. One can interpret Hammond’s theorems (e.g., 1988) along these lines. See Steele (2010) for discussion.

References

  • Aczél, J., & Wagner, C. (1980). A characterisation of weighted arithmetic means. SIAM Journal on Algebraic and Discrete Methods, 1(3), 259–260.

  • Bradley, R. (2017). Decision theory with a human face. Cambridge: Cambridge University Press.

  • Dietrich, F., & List, C. (2016). Probabilistic opinion pooling. In A. Hájek & C. Hitchcock (Eds.), The Oxford handbook of probability and philosophy (Chapter 25). Oxford: Oxford University Press.

  • Genest, C. (1984). A characterisation theorem for externally Bayesian groups. Annals of Statistics, 12(3), 1100–1105.

  • Genest, C., McConway, K. J., & Schervish, M. J. (1986). Characterization of externally Bayesian pooling operators. Annals of Statistics, 14(2), 487–501.

  • Genest, C., & Zidek, J. V. (1986). Combining probability distributions: A critique and an annotated bibliography. Statistical Science, 1(1), 114–135.

  • Hammond, P. J. (1988). Consequentialist foundations for expected utility theory. Theory and Decision, 25, 25–78.

  • Jeffrey, R. (1965). The logic of decision. New York: McGraw-Hill.

  • Levi, I. (1978). On indeterminate probabilities. In C. A. Hooker, J. J. Leach, & E. F. McClennen (Eds.), Foundations and applications of decision theory (pp. 233–261). Dordrecht: Springer.

  • Madansky, A. (1964). Externally Bayesian groups. Technical Report RM-4141-PR, The RAND Corporation.

  • McConway, K. J. (1981). Marginalization and linear opinion pools. Journal of the American Statistical Association, 76(374), 410–414.

  • Savage, L. J. (1954). The foundations of statistics. New York: Wiley.

  • Skyrms, B. (1993). A mistake in dynamic coherence arguments? Philosophy of Science, 60(2), 320–328.

  • Steele, K. (2010). What are the minimal requirements of rational choice? Arguments from the sequential-decision setting. Theory and Decision, 68, 463–487.

  • Steele, K. (2018). Dynamic decision theory. In S. O. Hansson & V. F. Hendricks (Eds.), Introduction to formal philosophy (Chapter 35). Cham: Springer.

  • Williamson, J. (2010). In defence of objective Bayesianism. New York: Oxford University Press.

  • Williamson, J. (2011). Objective Bayesianism, Bayesian conditionalisation and voluntarism. Synthese, 178(1), 67–85.

Acknowledgements

Many thanks to Michael Nielsen and two anonymous reviewers for comments on earlier drafts of this paper. Research on this paper was supported by an Australian National University Futures grant and an Australian Research Council Discovery grant (Grant Number: 170101394).

Corresponding author

Correspondence to Katie Steele.

Cite this article

Steele, K. How to be imprecise and yet immune to sure loss. Synthese 199, 427–444 (2021). https://doi.org/10.1007/s11229-020-02665-5
