A Representation Theorem for Frequently Irrational Agents

Abstract

The standard representation theorem for expected utility theory tells us that if a subject’s preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities—and, moreover, that having those credences and utilities is the only way that she could be maximising her expected utility (given her preferences). However, the kinds of agents these theorems seem apt to tell us anything about are highly idealised, being (amongst other things) always probabilistically coherent with infinitely precise degrees of belief and full knowledge of all a priori truths. Ordinary subjects do not look very rational when compared to the kinds of agents usually talked about in decision theory. In this paper, I will develop an expected utility representation theorem aimed at the representation of those who are neither probabilistically coherent, logically omniscient, nor expected utility maximisers across the board—that is, agents who are frequently irrational. The agents in question may be deductively fallible, have incoherent credences, limited representational capacities, and fail to maximise expected utility for all but a limited class of gambles.

Notes

  1. Ramsey [34] developed the first expected utility representation theorem, which he intended as the basis for a definition of credences and utilities. Authors sympathetic to the metaphysical application of representation theorems include Cozic and Hill [8], Davidson [9, 10], Eells [13], Harsanyi [18], Jeffrey [20], Maher [29, 30], and Pettit [32, pp. 171–172]. Note that the issue here is not whether credences and utilities just are preference states, nor whether they are reducible to preferences alone; these are much stronger claims than we need commit ourselves to. See Section 2 for discussion.

  2. Here and throughout, I will use ‘epistemically necessary’ and ‘epistemically possible’ (or sometimes just ‘necessary’, ‘possible’) in more or less the sense explicated by Chalmers [6, 7]. Essentially: P is epistemically possible iff it can’t be ruled out a priori, and epistemically necessary iff it is a priori knowable.

  3. Representation theorems for non-expected utility theories often forgo probability functions in favour of non-additive Choquet capacities, Dempster-Shafer belief and plausibility functions, sets of probability functions, and so on. These models tend to be somewhat more realistic, but only marginally so—e.g., each implies that if P necessitates Q, then Cr(Q) ≥ Cr(P).
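
    To illustrate with the second of these models: a Dempster-Shafer belief function is non-additive yet still respects the monotonicity property just mentioned. Below is a minimal Python sketch; the three-world frame and the mass assignment are invented for illustration.

```python
# Toy Dempster-Shafer belief function (invented example): non-additive,
# but still monotone -- if P necessitates Q, then Bel(Q) >= Bel(P).

worlds = frozenset({"w1", "w2", "w3"})

# Mass function: weight on a non-singleton focal set is what blocks
# additivity across a proposition and its negation.
mass = {
    frozenset({"w1"}): 0.3,
    frozenset({"w1", "w2"}): 0.5,
    worlds: 0.2,
}

def bel(prop):
    """Belief in a proposition = total mass of focal sets entailing it."""
    return sum(m for focal, m in mass.items() if focal <= prop)

P = frozenset({"w1"})            # P necessitates Q: P's worlds are in Q
Q = frozenset({"w1", "w2"})
print(bel(P), bel(Q))            # 0.3 0.8 -- monotone, as required
print(bel(Q) + bel(worlds - Q))  # 0.8 -- less than 1, so non-additive
```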

  4. There is room for disagreement here. It’s easier to argue that a subject’s preferences can be very ill-behaved when these are thought of as representing choice dispositions. But things are not so straightforward when preferences are understood as mental states, for which we only have intuitive evidence to rely on. To be sure, it is certainly very hard to imagine a strict preference relation which is not asymmetric; likewise an indifference relation which is not symmetric. I’m inclined to think that these properties are constitutive of strict preference and indifference, respectively. But it is much more plausible that transitivity of preference can sometimes fail, and that is what I am mainly appealing to here. Where transitivity fails, one might argue that we can still make sense of local or context-dependent utilities, even though a global numerical representation of the subject’s preferences won’t be possible. I suspect that something like this is probably right, but it also very naturally fits the picture where preferences are prior to, and part of the grounds of, any correct assignment of utilities.

  5. It is sometimes said that where a representation theorem does not determine a unique Cr and U, we ought to take the entire set of admissible Cr and U functions as our representation of the subject’s credences and utilities respectively. Setting aside the sometimes questionable motivations for going this route, note that what’s really going on here is a re-interpretation of the original theorem—i.e., not as saying that S can be non-uniquely represented as an expected utility maximiser with such-and-such credences and utilities (each represented by a single real-valued function), but instead as saying that S can be uniquely represented as following a more complicated decision rule with such-and-such credences and utilities (represented by sets of real-valued functions). The more complicated decision rule may be something like: prefer P to Q just in case the Cr-weighted average utility of P is greater than that of Q for each/some admissible Cr-U pair.
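
    For concreteness, here is a minimal sketch of how the unanimity (‘each’) and existential (‘some’) versions of this rule would go; the propositions, outcomes, and admissible (Cr, U) pairs are invented for the example.

```python
# A sketch of the 'more complicated decision rule': compare Cr-weighted
# average utilities across a set of admissible (Cr, U) pairs. All names
# and numbers here are illustrative assumptions, not the paper's own.

def weighted_utility(cr, u, gamble):
    """Cr-weighted average utility of a gamble, represented as a list of
    (outcome, proposition) pairs: the outcome received if the proposition
    is true."""
    return sum(cr[prop] * u[outcome] for outcome, prop in gamble)

def prefers(p, q, admissible, unanimity=True):
    """P preferred to Q iff P's weighted average utility exceeds Q's for
    each (unanimity=True) or some (unanimity=False) admissible pair."""
    verdicts = [weighted_utility(cr, u, p) > weighted_utility(cr, u, q)
                for cr, u in admissible]
    return all(verdicts) if unanimity else any(verdicts)

# Two admissible pairs that disagree about how likely rain is.
admissible = [
    ({"rain": 0.6, "shine": 0.4}, {"wet": -1.0, "dry": 2.0}),
    ({"rain": 0.2, "shine": 0.8}, {"wet": -1.0, "dry": 2.0}),
]
umbrella = [("dry", "rain"), ("dry", "shine")]      # dry either way
no_umbrella = [("wet", "rain"), ("dry", "shine")]   # wet if it rains
print(prefers(umbrella, no_umbrella, admissible))   # True on both readings
```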

  6. I do not mean to imply that the deductive argument I have presented is the only way to put a representation theorem to work in fixing a subject’s credences and utilities. For instance, one might try to approach the matter via inference to the best explanation. In the event that S satisfies (or comes close to satisfying) \(\mathcal {A}\), perhaps the best explanation is that she follows \(\mathcal {R}\) with credences Cr and utilities U. The deductive argument I’ve given here is meant to be illustrative, to help us draw out the kinds of properties a theorem should have if it is to be usefully applied in the relevant way. Even on the IBE model, we’ll still want something like the desiderata (i) to (iv) I’ve outlined to hold—e.g., if \(\mathcal {R}\) were relatively implausible, or \(\langle R_{1}, R_{2}, \ldots , R_{n} \rangle\) excessively strong, then we wouldn’t have a very good explanation of S’s preferences.

  7. A similar point holds of course for the particular way in which the decision rule \(\mathcal {R}\) is formulated, which is naturally dependent on how Cr and U are characterised.

  8. See Meacham and Weisberg [31, pp. 657–659] for an argument that this restriction to probability functions in Savage is substantive, rather than merely notational. There are a number of issues here regarding what exactly Savage needed to assume about Cr, and what specific properties of his Cr might be conventional rather than substantive (e.g., whether additivity per se is conventional or not is controversial). I don’t want to rest too heavily on this one example; the other examples should suffice to make the point.

  9. Because we have to specify 𝓟 at the outset, the following theorem cannot really be thought of as giving us a way of deriving a subject’s credences from her preferences. Instead, we can say that given knowledge of which propositions S has some credences towards, the theorem allows us to work out just what degree of confidence she assigns to each. See Section 5 for further discussion.

  10. There may be some difficulties here regarding framing effects, whereby a choice might be evaluated differently depending on whether its outcomes are cast in a negative or a positive light (see [23, 40]). For example, a doctor might know that giving a population of 1000 deathly ill patients a particular treatment will cure 75% but kill the rest. When choosing whether to administer the treatment, it seems to make a difference whether this outcome is described as ‘750 lives are saved’ or as ‘250 people die’, although in both cases the doctor presumably recognises that 750 will live and 250 will die. We do not know the mechanisms underlying these effects, so it’s unclear whether they conflict with the assumption that U(P) = U(Q) whenever \(P \rightleftharpoons Q\). One plausible explanation which doesn’t obviously generate conflict is that the way in which a choice is framed can make particular aspects of a complex outcome more salient than other aspects [25, 41]. So, instead of representing the doctor as assigning different utilities to distinct but recognisably equivalent representations of one and the same outcome (750 will live & 250 will die), we see her as having different utilities towards non-equivalent aspects of the outcome (750 will live, 250 will die), with positive or negative descriptions of that outcome influencing which aspects get represented as ‘the’ outcome. If this kind of explanation is correct, then framing effects describe an error in how agents go from descriptions of choices to their own internal representations of those choices. Since my ≿ is defined over the representations directly, we do not have to worry about any potential cognitive biases that might influence how we go from a description of a gamble or outcome to the (mis-)representation thereof.

  11. I do not place very much weight on this assumption about the semantics of counterfactuals. For instance, if there can be non-vacuously true counterfactuals with impossible antecedents, then alternative conditions can be placed on 𝓖 to fix upon the appropriate set.

  12. A similar point holds, I think, for the equivalence between P, ¬¬P, ¬¬¬¬P (and so on). To the extent that these represent distinct objects of thought, it’s reasonable to think that most ordinary agents know (at least implicitly) that if the number of negations preceding a claim P is a multiple of two, then the proposition expressed is equivalent to P; otherwise it’s equivalent to ¬P.

  13. To ensure that Cr(P) ≥ Cr(Q), it suffices to assume that \(o_{3} \succsim o_{1} \succ o_{4} \succsim o_{2}\) and \([o_{1}, P; o_{2}] \succsim [o_{3}, Q; o_{4}]\). Letting \(o_{3}\) and \(o_{4}\) be \(o_{1}^{\prime }\) and \(o_{2}^{\prime }\) respectively makes the reasoning somewhat more transparent, especially when it comes to defining π.

  14. Note the role of S1.2 and S1.3 in this proof: they are used to establish that if \(P \rightleftharpoons Q\), then if \([o_{1}, P; o_{2}]\) is in \(\boldsymbol {\mathcal {G}}\), \([o_{1}, Q; o_{2}]\) will be in \(\boldsymbol {\mathcal {G}}\) too. Given A9, we could get away with dropping both conditions if we made the relatively weak partially structural assumption that when \(P \rightleftharpoons Q\), there is some pair of outcomes \(o_{1}, o_{2}\) such that there are \([o_{1}, P; o_{2}]\), \([o_{1}^{\prime }, Q; o_{2}^{\prime }] \in \boldsymbol {\mathcal {G}}\), and \([o_{1}, P; o_{2}] \sim [o_{1}^{\prime }, Q; o_{2}^{\prime }]\). Alternatively, we could tweak the second part of A10 to say that if \(P \rightleftharpoons Q\), then there will be a pair of gambles \([o_{1}, P; o_{2}]\), \([o_{3}, Q; o_{4}]\) in \(\boldsymbol {\mathcal {G}}\) such that:

    $$\frac{U([o_{1},\ P;\ o_{2}])\ -\ U(o_{2})}{U(o_{1})\ -\ U(o_{2})} \ = \ \frac{U([o_{3},\ Q;\ o_{4}])\ -\ U(o_{4})}{U(o_{3})\ -\ U(o_{4})} $$

    However, this latter option would result in an axiom somewhat less intuitive than A10 as stated. Finally, if we wanted to get rid of S1.2, S1.3 and A9 while preserving the result that \(P \rightleftharpoons Q\) implies Cr(P) = Cr(Q) (see Section 4.1), we’d need to posit that whenever \(P \rightleftharpoons Q\) and \([o_{1}, P; o_{2}] \in \boldsymbol {\mathcal {G}}\), there’s a \([o_{1}^{\prime }, Q; o_{2}^{\prime }]\) in \(\boldsymbol {\mathcal {G}}\).
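
    To make explicit why this condition would preserve the result: given the expected utility form that the representation assigns to U on binary gambles, each ratio of the kind displayed above just recovers a credence. Assuming \(U(o_{1}) \neq U(o_{2})\),

    $$U([o_{1},\ P;\ o_{2}]) \ = \ Cr(P)\, U(o_{1}) \ + \ (1 - Cr(P))\, U(o_{2}) \ \ \Longrightarrow \ \ \frac{U([o_{1},\ P;\ o_{2}])\ -\ U(o_{2})}{U(o_{1})\ -\ U(o_{2})} \ = \ Cr(P) $$

    and likewise with Q, \(o_{3}\), and \(o_{4}\); so the equality of the two ratios amounts directly to Cr(P) = Cr(Q).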

  15. How agents make decisions with imprecise credences is a matter of much contemporary discussion, so I cannot say anything very definite here. For an overview of the main approaches to decision-making with imprecise credences, see [39]. For a very natural model according to which imprecise credences will generate imprecise utilities for gambles, see [35]. Most descriptively-motivated models of decision-making with imprecise credences aim at representing the apparently risk-averse attitudes that ordinary subjects take towards gambles conditional on propositions with ‘ambiguous’ probabilities. As such, it is unclear how well they fit with the assumption that our subject is risk-neutral (Section 3.1). If it turns out that otherwise ordinary, risk-neutral agents with imprecise credences follow a rule quite unlike expected utility maximisation, like Γ-maximin (see [1, 38]), then Theorem 3 will likely have to be revised at a very fundamental level (e.g., the motivations for Definitions 1, 2 and 4 will be undermined).
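
    By way of illustration, here is a minimal sketch of Γ-maximin with a finite credal set; the states, options, and utilities are invented, and a finite list of probability functions stands in for what would ordinarily be a convex set.

```python
# Toy Gamma-maximin (invented example): choose the option whose
# worst-case expected utility, over a credal set of probability
# functions, is greatest. Contrast this with EU maximisation
# relative to a single Cr.

def expected_utility(cr, utility, option):
    """Cr-weighted average utility of an option across states."""
    return sum(cr[s] * utility[(option, s)] for s in cr)

def gamma_maximin(options, credal_set, utility):
    """Maximise the minimum expected utility over the credal set."""
    return max(options,
               key=lambda o: min(expected_utility(cr, utility, o)
                                 for cr in credal_set))

# An 'ambiguous' coin: Cr(heads) could be anywhere from 0.3 to 0.7.
credal_set = [{"H": p, "T": 1 - p} for p in (0.3, 0.5, 0.7)]
utility = {("risky", "H"): 10, ("risky", "T"): 0,
           ("safe", "H"): 4, ("safe", "T"): 4}

# Gamma-maximin picks 'safe' (worst-case EU 4 beats worst-case EU 3),
# even though 'risky' maximises EU relative to the middle credence.
print(gamma_maximin(["risky", "safe"], credal_set, utility))
```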

  16. With the exception of A9—which I’ll return to in a moment—none of the other axioms appear especially problematic if S has imprecise credences. A1.1 does presuppose that there are at least two propositions towards which S assigns a credence of exactly \(\frac {1}{2}\), but this seems a relatively minor idealisation.

  17. Taking this strategy also requires a slight re-interpretation of ≻ and ∼, as they were defined in Section 3.4. We can keep the definitions of ≻ and ∼ in terms of \(\succsim \), but we should only say that S strictly prefers P to Q (or is indifferent between them) if \(P \succ Q\) (or \(P \sim Q\)) on all coherent completions of \(\succsim \).

  18. The extension of U to \(\boldsymbol {\mathcal {O}} \cup \boldsymbol {\mathcal {G}}\) is an optional extra. It’s straightforward to restate the theorem such that U is only defined for outcomes, with a distinct function EU on \(\boldsymbol {\mathcal {G}}\) characterised in terms of Cr and U.

  19. In connection with this point, it’s worth pointing out that Savage’s credence functions are fundamentally incapable of representing subjects’ credences regarding their own actions and anything probabilistically dependent upon them [14]. The same applies to every theorem based on a similar kind of formal framework. If agents do have credences towards the relevant kinds of proposition, then no Savagean theorem will let us fully pin down all of the credence facts using information from preferences alone.

  20. This is essentially the case with, for example, Tversky and Kahneman’s [40] cumulative prospect theory, widely thought to be the most empirically accurate model of decision-making so far developed. Simplifying somewhat, CPT models agents as preferring acts with the greatest μ-weighted average utility, where μ is a monotonic function from a set of events to [0, 1]. The ‘decision-weight’ μ is usually taken to be decomposable into the subject’s credences and her attitude towards risk (cf. [43]).
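
    As a rough illustration of the μ-weighted average idea, here is a toy rank-dependent valuation for gambles over gains; the one-parameter weighting function and the numbers are invented for the example, and the sketch ignores CPT’s separate treatment of losses.

```python
# Toy rank-dependent valuation in the spirit of the note's simplified
# CPT. Decision weights come from a monotone distortion w of cumulative
# probability; w's form and its parameter are illustrative assumptions.

def w(p, gamma=0.61):
    """Inverse-S probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def rank_dependent_value(outcomes):
    """outcomes: list of (utility, probability) pairs, any order.
    Each outcome is weighted by the distorted probability of doing at
    least that well, minus the distorted probability of doing strictly
    better."""
    ranked = sorted(outcomes, key=lambda x: x[0], reverse=True)
    value, cum = 0.0, 0.0
    for u, p in ranked:
        value += u * (w(cum + p) - w(cum))
        cum += p
    return value

# A 50/50 gamble over utilities 10 and 0: the decision weights sum to 1
# but each differs from 0.5, so the value (about 4.2) falls short of the
# expected utility (5.0) -- the weights come apart from the credences.
print(rank_dependent_value([(10.0, 0.5), (0.0, 0.5)]))
```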

References

  1. Alon, S., & Schmeidler, D. (2014). Purely subjective maxmin expected utility. Journal of Economic Theory, 152, 382–412.

  2. Anscombe, F.J., & Aumann, R.J. (1963). A definition of subjective probability. The Annals of Mathematical Statistics, 34(2), 199–205.

  3. Aumann, R.J. (1962). Utility theory without the completeness axiom. Econometrica, 30(3), 445–462.

  4. Bradley, R. (2001). Ramsey and the measurement of belief. In Corfield, D., & Williamson, J. (Eds.) Foundations of Bayesianism (pp. 261–275). Kluwer Academic Publishers.

  5. Buchak, L. (2013). Risk and rationality. Oxford: Oxford University Press.

  6. Chalmers, D. (2011a). Frege’s puzzle and the objects of credence. Mind, 120 (479), 587–635.

  7. Chalmers, D. (2011b). The nature of epistemic space. In Egan, A., & Weatherson, B. (Eds.) Epistemic Modality (pp. 60–107). Oxford: Oxford University Press.

  8. Cozic, M., & Hill, B. (2015). Representation theorems and the semantics of decision-theoretic concepts. Journal of Economic Methodology, 22, 292–311.

  9. Davidson, D. (1980). Toward a unified theory of meaning and action. Grazer Philosophische Studien, 11, 1–12.

  10. Davidson, D. (1990). The structure and content of truth. The Journal of Philosophy, 87(6), 279–328.

  11. Dogramaci, S. (forthcoming). Knowing our degrees of belief. Episteme.

  12. Easwaran, K. (2014). Decision theory without representation theorems. Philosophers’ Imprint, 14(27), 1–30.

  13. Eells, E. (1982). Rational decision and causality. Cambridge: Cambridge University Press.

  14. Elliott, E. (forthcoming a). Probabilism, representation theorems, and whether deliberation crowds out prediction. Erkenntnis.

  15. Elliott, E. (forthcoming b). Ramsey without ethical neutrality: a new representation theorem. Mind.

  16. Fishburn, P.C. (1981). Subjective expected utility: a review of normative theories. Theory and Decision, 13, 139–199.

  17. Gilboa, I., Postlewaite, A., & Schmeidler, D. (2012). Rationality of belief or: why Savage’s axioms are neither necessary nor sufficient for rationality. Synthese, 187, 11–31.

  18. Harsanyi, J. (1977). On the rationale of the Bayesian approach: comments on Professor Watkins’s paper. In Butts, R. E., & Hintikka, J. (Eds.) Foundational Problems in the Special Sciences (pp. 381–392). Dordrecht: D. Reidel.

  19. Jeffrey, R. (1986). Bayesianism with a human face. Minnesota Studies in the Philosophy of Science, 10, 133–156.

  20. Jeffrey, R.C. (1968). Probable knowledge. Studies in Logic and the Foundations of Mathematics, 51, 166–190.

  21. Jeffrey, R.C. (1990). The logic of decision. Chicago: University of Chicago Press.

  22. Joyce, J. (2015). The value of truth: a reply to Howson. Analysis, 75, 413–424.

  23. Kahneman, D., & Tversky, A. (1979). Prospect theory: an analysis of decision under risk. Econometrica, 47, 263–291.

  24. Krantz, D.H., Luce, R.D., Suppes, P., & Tversky, A. (1971). Foundations of measurement: additive and polynomial representations Vol. I. Academic Press.

  25. Levin, I.P., Schneider, S.L., & Gaeth, G.J. (1998). All frames are not equal: a typology and critical analysis of framing effects. Organizational Behavior and Human Decision Processes, 76, 149–188.

  26. Lewis, D. (1974). Radical interpretation. Synthese, 27(3), 331–344.

  27. Luce, R.D. (1992). Where does subjective expected utility fail descriptively? Journal of Risk and Uncertainty, 5, 5–27.

  28. Luce, R.D., & Krantz, D.H. (1971). Conditional expected utility. Econometrica, 39(2), 253–271.

  29. Maher, P. (1993). Betting on theories. Cambridge: Cambridge University Press.

  30. Maher, P. (1997). Depragmatized Dutch book arguments. Philosophy of Science, 64(2), 291–305.

  31. Meacham, C.J.G., & Weisberg, J. (2011). Representation theorems and the foundations of decision theory. Australasian Journal of Philosophy, 89(4), 641–663. http://dx.doi.org/10.1080/00048402.2010.510529.

  32. Pettit, P. (1991). Decision theory and folk psychology. In Bacharach, M., & Hurley, S. (Eds.) Foundations of Decision Theory: Issues and Advances (pp. 147–175). Oxford: Basil Blackwell.

  33. Rabinowicz, W. (2012). Value relations revisited. Economics and Philosophy, 28, 133–164.

  34. Ramsey, F.P. (1931). Truth and probability. In Braithwaite, R. B. (Ed.) The foundations of mathematics and other logical essays (pp. 156–198). London: Routledge.

  35. Rinard, S. (2015). A decision theory for imprecise credences. Philosophers’ Imprint, 15, 1–16.

  36. Savage, L.J. (1954). The foundations of statistics. New York: Dover.

  37. Schervish, M.J., Seidenfeld, T., & Kadane, J.B. (1990). State-dependent utilities. Journal of the American Statistical Association, 85, 840–847.

  38. Seidenfeld, T. (2004). A contrast between two decision rules for use with (convex) sets of probabilities: gamma-maximin versus E-admissibility. Synthese, 140, 69–88.

  39. Troffaes, M.C.M. (2007). Decision making under uncertainty using imprecise probabilities. International Journal of Approximate Reasoning, 45, 17–29.

  40. Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453–458.

  41. Van Schie, E.C.M., & Van Der Pligt, J. (1995). Influencing risk preference in decision making: the effects of framing and salience. Organizational Behavior and Human Decision Processes, 63, 264–275.

  42. von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton: Princeton University Press.

  43. Wakker, P.P. (2004). On the composition of risk preference and belief. Psychological Review, 111, 236–241.

  44. Walley, P. (1999). Towards a unified theory of imprecise probability. International Journal of Approximate Reasoning, 24, 125–148.

  45. Weirich, P. (2004). Realistic decision theory: rules for nonideal agents in nonideal circumstances. Oxford: Oxford University Press.

  46. Zynda, L. (2000). Representation theorems and realism about degrees of belief. Philosophy of Science, 67(1), 45–69.

Acknowledgments

I would like to thank an anonymous referee for this journal for detailed and helpful comments. I am grateful to Ben Blumson, Rachael Briggs, David Chalmers, Daniel Elstein, Jessica Isserow, Al Hájek, James Joyce, and Robbie Williams, for helpful comments and discussion on the paper and its immediate predecessors. Thanks also to audiences at the ANU, the University of Leeds, the National University of Singapore, and the LSE. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 312938.

Cite this article

Elliott, E. A Representation Theorem for Frequently Irrational Agents. J Philos Logic 46, 467–506 (2017). https://doi.org/10.1007/s10992-016-9408-8
