The problems of transformative experience

Yoaav Isaacs

Philosophical Studies

Abstract

Laurie Paul has recently argued that transformative experiences pose a problem for decision theory. According to Paul, agents facing transformative experiences do not possess the states required for decision theory to formulate its prescriptions. Agents facing transformative experiences are impoverished relative to their decision problems, and decision theory doesn’t know what to do with impoverished agents. Richard Pettigrew takes Paul’s challenge seriously. He grants that decision theory (in its traditional state) cannot handle decision problems involving transformative experiences. To deal with the problems posed by transformative experiences, Pettigrew proposes two alterations to decision theory. The first alteration is meant to handle the problem posed by epistemically transformative experiences, and the second alteration is meant to handle the problem posed by personally transformative experiences. I argue that Pettigrew’s proposed alterations are untenable. Pettigrew’s novel decision theory faces both formal and philosophical problems. It is doubtful that Pettigrew can formulate the sort of decision theory he wants, and further doubtful that he should want such a decision theory in the first place. Moreover, the issues with Pettigrew’s proposed alterations help reveal issues with Paul’s initial challenge to decision theory. I suggest that transformative experiences should not be taken to pose a problem for decision theory, but should instead be taken to pose a topic for ethics.

Notes

  1. I will explicitly distinguish the formal problems from the philosophical ones. The more serious problems are the philosophical ones, as the formal problems can be palliated by formal means. Readers who are uninterested in formal issues should feel free to skip the formal sections of this paper.

  2. It remains to be seen how strictly the requirements for a transformative experience should be construed. It is, for example, a substantive question whether a Davidsonian swampman could know what it’s like to see red. For more, see Davidson (1987).

  3. Paul (2015a).

  4. To make a decision about whether or not to see red, he would need utilities defined over the various ways of seeing red and the various ways of not seeing red. To make a decision about whether or not to eat Vegemite, he would need utilities defined over the various ways of eating Vegemite and the various ways of not eating Vegemite. To make a decision about whether or not to have a child, he would need utilities defined over the various ways of having a child and the various ways of not having a child.

  5. Pettigrew (2015).

  6. Pettigrew (2016).

  7. Pettigrew (2015).

  8. Passages concerning a lack of access to certain utilities or an inability to see certain utilities suggest that the relevant utilities exist. But there is contrary textual evidence as well. For example, Paul writes, “[A]n agent without a value function for transformative outcomes is an agent without a (standard) model for a rational decision.”

  9. Note that this does not presuppose that agents assign utilities appropriate for the actual character of the epistemically transformative experience in question. It’s not as though an agent has to assign a utility that corresponds to the actual, unknown taste of Vegemite. Similarly, it’s not as though an agent has to assign a utility that corresponds to the actual, unknown effect of pushing a button. Instead, with both Vegemite and buttons, an agent can assign varying utilities to any possibilities that his credences differentiate. The unfathomability of an epistemically transformative experience might be taken to preclude such varying credences and utilities, but such a view takes the ontological interpretation rather than the epistemological interpretation, and will thus be treated in the next section.

  10. Barring a tie, in which case both choices maximize expected utility.

  11. It may be helpful to think about a very simplified case. Let us act as though there is only one possibility in which Marie has children and only one possibility in which she does not have children. The expected utility of a choice then trivially equals the utility of the relevant possibility. Suppose that Marie’s utility for having children is 4 and her utility for not having children is 3. Suppose also that Marie is uncertain about her utilities; she has credence .5 that her utility for having children is 4 and her utility for not having children is 3, and she has credence .5 that her utility for having children is 3 and that her utility for not having children is 4. So Marie is uncertain about which choice maximizes expected utility. It is nonetheless a fact that the choice to have children maximizes expected utility.
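
      To make the arithmetic of this note concrete, here is a minimal Python sketch (my illustration, not anything from the paper; the names are hypothetical scaffolding):

      ```python
      # Footnote 11's toy case: one possibility per choice, so expected utility
      # collapses to the utility of the relevant possibility.

      # Marie's actual utilities (unknown to her):
      actual = {"have children": 4, "no children": 3}

      # Her credences over which utility function is hers:
      hypotheses = [
          (0.5, {"have children": 4, "no children": 3}),
          (0.5, {"have children": 3, "no children": 4}),
      ]

      # There is a fact about which choice maximizes expected utility:
      print(max(actual, key=actual.get))  # "have children"

      # But under each hypothesis a different choice comes out on top, and Marie
      # has credence .5 in each, so she is uncertain which choice maximizes
      # expected utility even though there is a fact about which one does.
      for credence, u in hypotheses:
          print(credence, max(u, key=u.get))  # 0.5 have children / 0.5 no children
      ```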

  12. A normal decision-theoretic analysis requires only that an agent has credences and utilities. More on this shortly.

  13. Pettigrew’s proposal may therefore be incoherent if it’s necessary for utilities to serve a functional role.

  14. Note that the scope of the problem posed by epistemically transformative experiences will depend on how decision problems are individuated. If the possibilities under consideration are individuated coarsely, then the problems posed by epistemically transformative experiences will be relatively rare. If the possibilities under consideration are individuated finely, then the problems posed by epistemically transformative experiences will be ubiquitous. Suppose you’re facing a very pedestrian decision: whether or not to have a sip of water. If you have just one utility for having a sip and just one utility for not having a sip, then the problems posed by epistemically transformative experiences won’t arise. On the other hand, if you have utilities for all the worlds in which you have a sip and for all the worlds in which you don’t have a sip, then the problems posed by epistemically transformative experiences will arise, as you have epistemically transformative experiences in many of those worlds.

  15. Pettigrew (2015).

  16. Pettigrew (Forthcoming).

  17. This may well seems strange. One might think this vision of decision theory insufficiently “internalist”. Now admittedly, an agent’s credences and utilities plausibly supervene on that agent’s intrinsic properties, so there’s still a measure of internalism there. But more than that just isn’t part of the framework of decision theory. One can, of course, stipulate more about an agent’s relationship to his credences and utilities. And much fruitful theorizing has been done on the basis of such stipulations. (For example, Robert Aumann’s (1976) celebrated agreement theorem makes this stipulation along with several others.) But such stipulations are not essential to decision theory.

  18. For more about the epistemic significance of the limitations of ordinary agents see Williamson (2000). For a generalization of Williamson’s reasoning to probabilistic contexts see Williamson (2008).

  19. The philosophical and mathematical issues concerning averaging across utility functions are already widely understood, so I don’t wish to belabor them. But neither do I wish to presuppose familiarity with those issues. For an explanation of them, see the appendix. For even more on this issue, see Briggs (2015).

  20. Speaking for myself, I do not think that special maneuvering helps. I think that averages across different utility functions are insuperably ill-defined. But I appreciate that others may think that the structure of utilities is more flexible than I do. Thus I feel I should explore the uses that such flexibility could be put to.

  21. Given only one anchor it would be impossible to capture the relative scales of the two utility functions. And the scale is always the root of the problem; merely additive differences always wash out on their own.
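
      To illustrate the algebra behind this note: a positive affine transformation \(u_2 = a u_1 + b\) has two free parameters, so a single shared anchor leaves the relative scale \(a\) wholly undetermined, while two anchors fix both parameters. A hedged Python sketch (the function name is mine):

      ```python
      # Why two anchors are needed: the map u2 = a*u1 + b has two unknowns,
      # the scale a > 0 and the shift b.

      def affine_from_anchors(anchor1, anchor2):
          """Solve u2 = a*u1 + b given two (u1, u2) anchor pairs."""
          (x1, y1), (x2, y2) = anchor1, anchor2
          a = (y2 - y1) / (x2 - x1)  # the relative scale of the two functions
          b = y1 - a * x1            # the additive shift
          return a, b

      # A single anchor, say (0, 0), is satisfied by b = 0 together with *any*
      # positive scale a. Two anchors pin the transformation down:
      print(affine_from_anchors((0, 0), (1, 10)))  # (10.0, 0.0)
      ```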

  22. And likely will as the agent gets more evidence.

  23. Pettigrew (2016).

  24. Moreover, if one cares at all about the experiences of others, then the unfathomability of someone else’s experiences would pose an identical problem.

  25. Pettigrew (2015).

  26. Pettigrew (2015).

  27. Note that my point is not that Pettigrew’s modified decision theory cannot deliver the natural verdict about the case—the weighting function he employs can discount sufficiently abhorrent utilities. My point is instead that Pettigrew’s modifications are not necessary to deliver that verdict, that standard decision theory comes to it as well. There is no need to explicitly discount abhorrent future-utilities; current utilities can fully express the extent to which an agent cares or does not care about abhorrent future-utilities. For more on this sort of decision problem, see Elster (1979).

  28. At bottom, this case isn’t any more fraught than that of working up an appetite so that one will enjoy dinner more.

  29. The weighting function over the local utilities is also something of a blank check, and it is unclear how to deal with different possible life-spans.

  30. See Bricker (1980), Parfit (1984), Gibbard (1992) and Bykvist (2010) for more.

  31. See Harsanyi (1955) and Arrow (1963) for treatments of this issue by two eminent economists.

  32. Such as lotteries.

References

  • Arrow, K. (1963). Social choice and individual values. New York, NY: Wiley.

  • Aumann, R. (1976). Agreeing to disagree. The Annals of Statistics, 4(6), 1236–1239.

  • Bricker, P. (1980). Prudence. Journal of Philosophy, 77(7), 381–401.

  • Briggs, R. (2015). Transformative experience and interpersonal utility comparisons. Res Philosophica, 92(2), 189–216.

  • Bykvist, K. (2010). Can unstable preferences provide a stable standard of well-being? Economics and Philosophy, 26(1), 1–26.

  • Davidson, D. (1987). Knowing one’s own mind. Proceedings and Addresses of the American Philosophical Association, 60(3), 441–458.

  • Elster, J. (1979). Ulysses and the sirens. Cambridge: Cambridge University Press.

  • Gibbard, A. (1992). Interpersonal comparisons: Preference, good, and the intrinsic reward of a life. In J. Elster & A. Hylland (Eds.), Foundations of social choice theory. Cambridge: Cambridge University Press.

  • Harsanyi, J. C. (1955). Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility. Journal of Political Economy, 63(4), 309–321.

  • Parfit, D. (1984). Reasons and persons. Oxford: Oxford University Press.

  • Paul, L. (2014). Transformative experience. Oxford: Oxford University Press.

  • Paul, L. (2015a). Transformative choice: Discussion and replies. Res Philosophica, 92(2), 473–545.

  • Paul, L. (2015b). What you can’t expect when you’re expecting. Res Philosophica, 92(2), 1–23.

  • Pettigrew, R. (Forthcoming). Transformative experience and the knowledge norms for action: Moss on Paul’s challenge to decision theory. In E. Lambert & J. Schwenkler (Eds.), Becoming someone new: Essays on transformative experience, choice, and change. Oxford: Oxford University Press.

  • Pettigrew, R. (2015). Transformative experience and decision theory. Philosophy and Phenomenological Research, 91(3), 766–774.

  • Pettigrew, R. (2016). Review of Transformative Experience, by L.A. Paul. Mind, 125(499), 927–935.

  • Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.

  • Williamson, T. (2008). Why epistemology cannot be operationalized. In Q. Smith (Ed.), Epistemology: New essays (pp. 277–300). Oxford: Oxford University Press.


Acknowledgements

For helpful comments, I thank John Hawthorne, Alan Hájek, and Laurie Paul.

Appendix

It might seem easy to average the utilities of two different people. Utilities are numbers, and averaging numbers is easy. But averaging utilities is not so easy a thing. It’s well-known among economists (see note 31) that averaging the utilities of two different people—indeed, making any interpersonal comparisons of utility whatsoever—is not meaningful given the standard structure of decision theory. It’s also well-known that with some additional technical apparatus more can be meaningfully done. Such issues are not as well-known among philosophers, however. I therefore include this explanation of the philosophical issues at stake in the mathematical structure of utilities.

There are limits to what sorts of mathematical operations can be meaningfully performed. Suppose someone wondered, “What’s the height of a tree that’s as tall as the temperature of a hot summer day?” One could reply, “A hot summer day is around 95 degrees Fahrenheit, so the tree would be around 95 inches tall. It wouldn’t be a particularly tall tree, then, but more likely a sapling.” But this reply is obvious nonsense. The whole idea of directly equating some height with some temperature is absurd. Note that the numbers don’t help, even though the same numbers appear when measuring heights and temperatures. There’s no equivalence to be had between 95 degrees Fahrenheit and 95 inches. Degrees Fahrenheit and inches are just very different things; the common ‘95’ doesn’t help. Of course, the selection of those units was arbitrary. One could equally well have taken degrees Celsius and centimeters and gotten a different bogus answer. But the basic problem is not the plurality of units; if it were, the problem would merely be that there were too many viable answers to the question. There are, however, no viable answers to the question. The question itself is deeply misguided.

Numbers (specifically, the natural numbers) can be put to a great many uses. Different uses capitalize on different aspects of the numbers’ numerical structure. We can use the numbers 1, 2, and 3 to describe how many coins Alice, Bob, and Carol have in their pockets. In this case, it makes sense to use the additive structure of those numbers. Just as \(1 + 2 = 3\), it makes perfect sense to say that Alice and Bob together have as many coins in their pockets as Carol does in hers. We can also use the numbers 1, 2, and 3 to describe the order in which Alice, Bob, and Carol completed a race. But in this case it makes no sense to use the additive structure of those numbers. There’s no sense in which Alice’s race result and Bob’s race result, taken together, are equivalent to Carol’s race result. But the ordinal structure of the numbers still applies. Just as \(1 < 2 < 3\), it makes perfect sense to say that Alice finished before Bob, who in turn finished before Carol.

The mathematical structure of utility functions is quite modest, too modest to allow for interpersonal comparison. Utilities have the structure only of an interval scale. Utilities have an order, and comparisons of differences between utilities can be made, but that’s it. Utilities can express that an agent prefers A to B, and utilities can express that an agent prefers A to B by twice as much as he prefers C to D. But utilities standardly express nothing more. It is, for example, not meaningful to say that A has twice as much utility as B. The reason is that utilities are standardly meant to capture preferences and nothing more. The preferences defined over simple outcomes are purely ordinal. Preferences over mixtures (see note 32) of simple outcomes get only a little more structure. An agent prefers A to B by as much as he prefers B to C just in case the agent is indifferent between the certainty of B and a 50/50 mixture of A and C. And so on. The numerical representation of an interval scale is—as Pettigrew correctly notes—unique only up to positive affine transformation (that is, adding or subtracting any number to all utilities and multiplying all utilities by any positive number leaves those utilities unchanged). All the properties that matter in a utility function will be preserved by any positive affine transformation. Suppose an agent has three options, A, B, and C. Giving them utilities of 1, 2, 3 is the same as giving them utilities of 5, 6, 7 (just add 5), is the same as giving them utilities of 10, 20, 30 (multiply by 10), is the same as giving them utilities of 15, 25, 35 (multiply by 10 and then add 5), is the same as giving them utilities of 60, 70, 80 (add 5 and then multiply by 10). All the properties a utility function has are preserved in those assignments. Those utility functions all just represent preferring C to B to A, and preferring C to B and B to A by the same amount. Those utility functions are the same utility function, just as (\(2 + 2\)) and (\(1 + 3\)) are the same number. The representations are different, but it’s the same thing being represented.
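
The invariance claim is easy to check numerically. Here is a minimal sketch (my code, assuming nothing beyond credence-weighted expected utility; the option names are hypothetical):

```python
# A positive affine transformation of utilities preserves the
# expected-utility ranking of options.

def expected_utility(credences, utilities):
    return sum(c * u for c, u in zip(credences, utilities))

credences = [0.2, 0.5, 0.3]  # credences over three possibilities
option_A = [1, 2, 3]         # utilities of A's possible outcomes
option_B = [3, 1, 2]         # utilities of B's possible outcomes

def transform(utilities, a=10, b=5):
    # positive affine transformation: multiply by a > 0, then add b
    return [a * u + b for u in utilities]

before = expected_utility(credences, option_A) > expected_utility(credences, option_B)
after = (expected_utility(credences, transform(option_A))
         > expected_utility(credences, transform(option_B)))
assert before == after  # the ranking is the same under either representation
```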

On a mathematical level it makes sense that you can’t average across different utility functions. Suppose you’re trying to take a straight average of two utility functions, giving them each equal weight. What numerical representations of the two functions should you take? If you average 1, 2, 3 and 3, 2, 1, you get 2, 2, 2. But if you average 100, 200, 300 and 3, 2, 1, you get 51.5, 101, 150.5. The latter representation of the first utility function gives it massively more of an effect than the former representation does. But any representation-dependent operation is nonsense.
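
The arithmetic above can be reproduced directly; a small sketch of mine:

```python
# Averaging utility functions is representation-dependent: rescaling one
# input (a change that leaves the utility function itself untouched)
# changes the result, here even the ordering of the three options.

def average(u1, u2):
    return [(x + y) / 2 for x, y in zip(u1, u2)]

first = [1, 2, 3]
first_rescaled = [100 * u for u in first]  # the same utility function, x100
second = [3, 2, 1]

print(average(first, second))           # [2.0, 2.0, 2.0]
print(average(first_rescaled, second))  # [51.5, 101.0, 150.5]
```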

On a philosophical level it makes sense that you can’t average across different utility functions. Utilities don’t measure absolute amounts of desirability. It’s not as though you can average liking X a lot with liking X a little and thereby get liking X moderately. Two agents, one of whom likes every possibility and the other of whom dislikes every possibility, can easily have the same utility function for those possibilities. Utilities show only relative desirabilities, and standings in two different hierarchies of relative desirability can’t be averaged together.

Note that—even in the worst-case scenario—formal problems with averaging across different utility functions do not mean that information about other people’s utilities has no bearing on what you should think about your own utilities. Suppose you’re uncertain about your own preferences, but have reason to believe that your preferences are similar to a friend’s preferences. Then learning that your friend prefers A to B can easily give you evidence that you prefer A to B. There are plenty of ways that the utilities of others can bear on what one should think about one’s own utilities. The formal problems have a limited scope. The worry isn’t that information about other people’s utilities will be categorically unhelpful; the worry is that certain sorts of comparisons—including the sorts of comparisons that Pettigrew’s proposal requires—will not be possible.

One can, however, define non-standard utilities which express more than relative desirability according to some agent. The structural poverty of an interval scale can be enriched. And if one is to alter decision theory in the ways Pettigrew prescribes, then such enrichment is necessary.

Cite this article

Isaacs, Y. The problems of transformative experience. Philos Stud 177, 1065–1084 (2020). https://doi.org/10.1007/s11098-018-01235-3
