Hypothetical Retrospection

Ethical Theory and Moral Practice

Abstract

Moral theory has mostly focused on idealized situations in which the morally relevant properties of human actions can be known beforehand. Here, a framework is proposed that is intended to sharpen moral intuitions and improve moral argumentation in problems involving risk and uncertainty. In hypothetical retrospection, a decision is evaluated under the assumption that one of the branches of possible future developments has materialized. This evaluation is based on the deliberator’s present values, and each decision is judged in relation to the information available when it was taken. Guidelines are proposed for a systematic search of suitable future viewpoints for hypothetical retrospection. The basic decision rule is to choose an alternative that comes out as morally acceptable (permissible) from all hypothetical retrospections.
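
As a reading aid (not part of the article), the basic decision rule can be sketched in a few lines of Python. The sketch assumes that we can enumerate the alternatives, the branches of possible future development that may follow each of them, and an acceptability judgement made from each resulting viewpoint on the basis of the deliberator’s present values and the information available at the time of decision; all identifiers are hypothetical.

```python
from typing import Callable, Dict, List, Optional

def choose_alternative(
    branches: Dict[str, List[str]],
    acceptable: Callable[[str, str], bool],
) -> Optional[str]:
    """Return an alternative that is judged morally acceptable in every
    hypothetical retrospection, or None if no alternative passes all of them.

    branches maps each alternative to its possible future developments;
    acceptable(alternative, branch) is the moral judgement made from the
    viewpoint reached if that branch materializes, using present values and
    only the information available when the decision was taken.
    """
    for alternative, futures in branches.items():
        if all(acceptable(alternative, branch) for branch in futures):
            return alternative
    return None
```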

Notes

  1. Following convention, I use the term “risk” to denote lack of knowledge that can be expressed in probabilities, and “uncertainty” to denote lack of knowledge that cannot be so expressed. For simplicity, the word “risk” will sometimes be used in lieu of “risk or uncertainty.”

  2. The idealization used in most of moral theory is in fact much stronger than determinism. The consequences of an agent’s actions are assumed to be not only determined but also knowable at the point in time of deliberation. This corresponds fairly well with the standard decision-theoretical notion of decision-making under certainty. (Luce and Raiffa 1957, p 13) For reasons of convenience, the term “deterministic” is used here to denote such conditions for (moral) decision-making.

  3. See Hansson (1993) for some of the arguments against this way of evaluating losses in lives. Although it does not follow from expected utility maximization that deaths should be evaluated in this way, this evaluation method is almost universally applied in risk analysis and risk-benefit analysis.

  4. One well-known decision rule that employs such weights is Kahneman and Tversky’s ([1979] 1988) prospect theory. In that theory, the objective probability p(A) of an event A is replaced by π(p(A)), where π is an increasing function from the set of real numbers between 0 and 1 to itself. In prospect theory, π(p(A)) takes the place that p(A) has in expected utility theory. However, the composite function π(p( )) does not satisfy the axioms of probability, and therefore prospect theory is not a form of expected utility theory but an alternative to it.
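
    To make the role of π concrete, the following Python sketch (an illustration, not material from the article) implements one commonly used probability weighting function. The functional form and parameter value are assumptions borrowed from Tversky and Kahneman’s later work rather than from the 1979 paper; the point is only that the resulting weights violate the probability axioms.

```python
def pi(p: float, gamma: float = 0.61) -> float:
    """An increasing map from [0, 1] to [0, 1] that overweights small
    probabilities and underweights large ones (illustrative form)."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

# The composite weights need not behave like probabilities:
print(pi(0.5) + pi(0.5))   # ~0.84, not 1.0
print(pi(0.01), pi(0.99))  # ~0.055 and ~0.91: small p overweighted, large p underweighted
```

    Since π(0.5) + π(0.5) ≠ 1 here, π(p( )) cannot be a probability measure, which is the sense in which prospect theory is an alternative to expected utility theory rather than a version of it.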

  5. http://www.havkom.se/eng/pdf/AboutSHK.pdf. Cf. http://www.notisum.se/rnp/SLS/LAG/19900712.HTM.

  6. I use the noun “alternative” to denote an option that can be chosen in a particular decision. A “branch” (or “branch of possible future development”) is one of the possible developments after a particular event (typically after the choice of an alternative in a decision). For the present purposes, a branch is not followed indefinitely into the future but only to a certain point in time that, in combination with the branch itself, constitutes a “viewpoint” from which evaluations can be made.

  7. Much of this is applicable to self-regarding decision-making as well, but here the focus will be on decision-making that has morally relevant effects on people other than the decision-maker.

  8. The idea of hypothetical retrospection is an extension and regimentation of patterns of thought that are prevalent in everyday moral reasoning. Unsurprisingly, numerous instances of related ideas can be found in the philosophical literature. Careful consideration of one’s future interests was recommended by Plato’s Socrates (Prot. 356a–e). In classical accounts of prudence, the moral perspective from future hindsight was a key component. (Mulvaney 1992; Vanden Houten 2002) John Rawls’s concept of deliberative rationality, ascribed by him to Sidgwick, includes a notion of a rational plan of life and a requirement that “a rational individual is always to act so that he need never blame himself no matter how things finally transpire” (Rawls 1972, p 422; for comments see Williams 1976 and Larmore 1999). Similar ideas by Nagel (1986, p 127) have been interpreted by Dickenson (1991, p 51) as a remorse-avoiding strategy. In decision theory, regret-avoiding strategies have been given exact formal interpretations. (Bell 1982; Loomes and Sugden 1982; Sugden 1985) A proposal to use regret-avoidance as a moral principle was put forward by Ernst-Jan Wit (1997). Jeffrey’s (1983) criterion of ratifiability should also be mentioned, since it recommends what can be described as hypothetical retrospection with respect to probability assignments.

  9. Weber (1998, pp 105–106) distinguishes between outcome regret that refers to “how things turned out” and “decision-making regret” that requires “that one can, in hindsight, think that one had an available reason at the time of choice to choose other than as one did.” In hypothetical retrospection, only the latter form of regret should be (hypothetically) elicited.

  10. The same applies, for similar reasons, to several related notions such as a predicted wish to have acted differently.

  11. Williams (1976, pp 130–131) criticized Rawls for not paying attention to preference changes when proposing that a rational individual should act so that he will never need to blame himself. Humberstone (1980) and Weirich (1981) both argued convincingly, but with different arguments, that it may be rational to do something one knows that one will regret. More recently, Larmore (1999) has again emphasized that our decisions can never be immune to regret, since our later self judges earlier choices on the basis of changed preferences.

  12. It is not assumed that the decision has to be morally optimal, or satisfy moral requirements maximally. Hence, scope is left for choice between alternatives that are all acceptable but not all of the same moral value.

  13. Obviously, errors of prediction cannot be avoided. This is a problem shared by all decision rules.

  14. As one example of this, the fact that the accident took place gives us a reason to reconsider whether we were justified in believing it to be highly improbable.

  15. On the use of probabilistic arguments in hypothetical retrospection, see Section 4.

  16. To make this more precise (although admittedly somewhat overexact), let there be a set A of alternatives and a set V of viewpoints. Each viewpoint is constituted by a point in time in a branch of possible future development, as explained above in footnote 6. Let f(a,v) denote the degree to which the alternative a violates moral requirements as seen from the viewpoint v. If f(a,v) = 0 then a does not at all violate any moral requirements as seen from the viewpoint v. We have a moral dilemma if and only if for all a′ ∈ A:

    $$ \max_{v \in V} f(a',v) \;\geqslant\; \max_{v \in V} f(a,v) $$

    Although this rule does not explicitly mention probabilities, it is not a non-probabilistic decision rule in the same sense as, for instance, the maximin rule, whose standard version leaves us without resources for making use of probabilistic information. In contrast, for each a and v, f(a,v) represents an evaluation in which the probabilistic information available at the time of decision should be taken into account.
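
    As a companion to this footnote and footnote 18, the following Python sketch (my illustration, with hypothetical identifiers; only f, A and V are the article’s notation) implements the min–max reading of the rule: for each alternative a, compute the worst violation max over v ∈ V of f(a,v), and prefer an alternative for which this value is minimal, ideally 0, i.e. one that violates no moral requirement from any viewpoint.

```python
from typing import Callable, Dict, List, Tuple

def worst_case_violation(
    alternatives: List[str],
    viewpoints: List[str],
    f: Callable[[str, str], float],
) -> Dict[str, float]:
    """For each alternative a, compute max over v in V of f(a, v),
    the quantity referred to in footnote 18."""
    return {a: max(f(a, v) for v in viewpoints) for a in alternatives}

def evaluate(
    alternatives: List[str],
    viewpoints: List[str],
    f: Callable[[str, str], float],
) -> Tuple[str, float]:
    """Pick an alternative whose worst-case violation is minimal.
    A value of 0 means the alternative violates no moral requirement
    from any viewpoint; if no alternative reaches 0, every available
    choice violates some requirement in some branch."""
    worst = worst_case_violation(alternatives, viewpoints, f)
    best = min(worst, key=worst.get)
    return best, worst[best]
```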

  17. In contrast, it is often reasonable to take predicted or expected future changes in preferences into account. Admittedly, the distinction between moral values and personal preferences is not always crystal-clear.

  18. In the language of footnote 16, this corresponds to identifying, for each a, the value of

    $$ \max_{v \in V} f(a,v), $$

    which is exactly what we need to apply the decision rule described there.

  19. This is an instance of the general requirement for rational comparisons that they should refer to the same aspects of the comparanda. This is only possible if the same characteristics are available for the different comparanda. Hence, it would be difficult to compare two opera performances if you have only heard a sound recording of one of them and seen a silent film of the other.

  20. Rawls himself emphasized the independence of the two components, and noted that “[o]ne may accept the first part of the theory (or some variant thereof), but not the other, and conversely.” (Rawls 1972, p 15) In subsequent discussions the distinction between the two components has sometimes been less clear.

  21. An exception must be made if we extend the procedure of hypothetical retrospection to future viewpoints in which the agent is no longer present as a capable reasoner. In such cases, a hypothetical evaluator with the same moral standards as the agent can be used as a heuristic device.

  22. On the use of unrealistic examples in moral philosophy, see Lucey (1976), Lackey (1976), and Ward (1995).

  23. This is so even if the uncertainty is symmetric in the sense that it gives no reason to either raise or lower the estimate.

  24. For simplicity we may assume that there are no other differences, such as side effects, that can influence the decision.

  25. For a discussion of this argumentation, see Hansson (2006b).

  26. See Hansson (2004b) for some examples of this.

  27. The most common argument against compulsory seat belts is that such legislation is paternalistic. On paternalism and anti-paternalism in risk-related issues, see Hansson (2005b).

  28. See Hansson (1993). This also illustrates two essential differences between the present framework and that of expected utility maximization. The latter but not the former is committed to (1) assigning exact probabilities to all possible events and (2) using these probabilities as weights in all decisions to be made.

  29. On the former approach, see Bicevskis (1982) and Peterson (2002).

  30. For details, see Hansson (2003).

References

  • Bell DE (1982) Regret in decision making under uncertainty. Oper Res 30:961–981

  • Bicevskis A (1982) Unacceptability of acceptable risk. Search 13(1–2):31–34

  • Dickenson D (1991) Moral luck in medical ethics and practical politics. Avebury, Aldershot

  • Gilles SG (1994) The invisible hand formula. Va Law Rev 80:1015–1054

  • Hansson SO (1993) The false promises of risk analysis. Ratio 6:16–26

  • Hansson SO (1999) But what should I do? Philosophia 27:433–440

  • Hansson SO (2001) The modes of value. Philos Stud 104:33–46

  • Hansson SO (2003) Ethical criteria of risk acceptance. Erkenntnis 59:291–309

  • Hansson SO (2004a) Weighing risks and benefits. Topoi 23:145–152

  • Hansson SO (2004b) Great uncertainty about small things. Techne 8(2)

  • Hansson SO (2005a) Seven myths of risk. Risk Manage 7(2):7–17

  • Hansson SO (2005b) Extended antipaternalism. J Med Ethics 31:97–100

  • Hansson SO (2006a) Economic (ir)rationality in risk analysis. Econ Philos 22:231–241

  • Hansson SO (2006b) Uncertainty and the ethics of clinical trials. Theor Med Bioethics 27:149–167

  • Humberstone IL (1980) You’ll regret it. Analysis 40:175–176

  • Jeffrey RC (1983) The logic of decision, 2nd edn. University of Chicago Press, Chicago, IL

  • Kahneman D, Tversky A ([1979] 1988) Prospect theory: an analysis of decision under risk. In: Gärdenfors P, Sahlin N-E (eds) Decision, probability, and utility: selected readings. Cambridge University Press, Cambridge, UK, pp 183–214

  • Lackey D (1976) Empirical disconfirmation and ethical counter-example. J Value Inq 10:30–34

  • Larmore C (1999) The idea of a life plan. Soc Philos Policy 16:96–112

  • Loomes G, Sugden R (1982) Regret theory: an alternative theory of rational choice under uncertainty. Econ J 92:805–824

  • Luce RD, Raiffa H (1957) Games and decisions: introduction and critical survey. Wiley, New York

  • Lucey KG (1976) Counter-examples and borderline cases. Personalist 57:351–355

  • Mulvaney RJ (1992) Wisdom, time, and avarice in St Thomas Aquinas’s treatise on prudence. Mod Schman 69:443–462

  • Nagel T (1986) The view from nowhere. Oxford University Press, New York

  • Peterson M (2002) What is a de minimis risk? Risk Manage 4:47–55

  • Rawls J (1972) A theory of justice. Oxford University Press, Oxford

  • Sugden R (1985) Regret, recrimination and rationality. Theory Decis 19:77–99

  • Vanden Houten A (2002) Prudence in Hobbes’s political philosophy. Hist Polit Thought 23:266–287

  • Ward DE (1995) Imaginary scenarios, black boxes and philosophical method. Erkenntnis 43:181–198

  • Weber M (1998) The resilience of the Allais paradox. Ethics 109:94–118

  • Weirich P (1981) A bias of rationality. Australas J Philos 59:31–37

  • Williams BAO (1976) Moral luck. Proc Aristot Soc, Suppl Vol 50:115–135

  • Wit E-JC (1997) The ethics of chance. PhD thesis, Pennsylvania State University

Author information

Correspondence to Sven Ove Hansson.

Cite this article

Hansson, S.O. Hypothetical Retrospection. Ethic Theory Moral Prac 10, 145–157 (2007). https://doi.org/10.1007/s10677-006-9045-3
