How could the theory of responsibility-catering prioritarianism be put to practical use to manage societal risks? Much work remains before methods and tools exist that can compete with established decision-support approaches such as cost–benefit analysis, but the argument presented here aims to show that such methods seem feasible. For this purpose, the discussion will mainly rely on the groundbreaking work of Matthew D. Adler (2008, 2009, 2012), who has developed a social choice framework which, it will be suggested, can be used to operationalize responsibility-catering prioritarianism. Adler’s framework can be briefly summarized as follows.
Assume that a numerical measure of the well-being of an individual—a utility measure—can be defined. Let \( u_{i} (x) \) be the utility for individual \( i \) in outcome \( x \). Outcome \( x \) can then be described as a vector of individual utilities: \( \left( {u_{1} (x), u_{2} (x), \ldots ,u_{N} (x)} \right) \). Let \( W(x) \) be a function from a utility vector to a real number: \( W(x) = W\left( {u_{1} (x), u_{2} (x), \ldots ,u_{N} (x)} \right). \) A ranking of two different outcomes \( x \) and \( y \) can then be constructed by defining outcome \( x \) to be at least as good as outcome \( y \) if and only if \( W(x) \ge W(y) \). If different choice alternatives yield different outcomes, the alternative with the highest-ranking outcome is then chosen. The social welfare function (SWF) is simply the mathematical rule for ranking different outcomes, \( W(x) \).
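As a concrete sketch, the ranking rule can be written in a few lines of Python (the function name `rank_outcomes` is a hypothetical illustration, not part of Adler's framework):

```python
from typing import Callable, Sequence

def rank_outcomes(
    w: Callable[[Sequence[float]], float],  # the SWF W
    x: Sequence[float],                     # utility vector (u_1(x), ..., u_N(x))
    y: Sequence[float],                     # utility vector (u_1(y), ..., u_N(y))
) -> str:
    """Outcome x is at least as good as y if and only if W(x) >= W(y)."""
    wx, wy = w(x), w(y)
    if wx > wy:
        return "x better"
    if wx < wy:
        return "y better"
    return "equally good"

# With a utilitarian SWF, W is simply the sum of the utility vector.
assert rank_outcomes(sum, [3, 1], [2, 2]) == "equally good"
assert rank_outcomes(sum, [5, 1], [2, 2]) == "x better"
```

The SWF is passed in as a parameter, so the same ranking rule works unchanged for the utilitarian and prioritarian SWFs defined next.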
Given this social choice framework, different SWFs can be defined. A utilitarian SWF, which ranks highest the outcome with the greatest sum of individual utilities, is defined as \( W(x) = \mathop \sum \nolimits_{i = 1}^{N} u_{i} (x) \). A prioritarian SWF is of the form \( W(x) = \mathop \sum \nolimits_{i = 1}^{N} g(u_{i} (x)) \), where \( g( \cdot ) \) is a strictly increasing and strictly concave function. This means that benefits to worse-off individuals are given greater weight in the calculation. For example, a prioritarian SWF could use \( g\left( {u_{i} (x)} \right) = \sqrt {u_{i} (x)} \) (a strictly increasing and strictly concave function). Assume that there are only two individuals, with \( u_{1} = 3 \) and \( u_{2} = 1 \), and that one has to choose between Policy A, which increases the utility of individual 1 by one unit, and Policy B, which increases the utility of individual 2 by one unit. A prioritarian SWF would then rank Policy B as better than Policy A (because \( \sqrt 4 + \sqrt 1 \) < \( \sqrt 3 + \sqrt 2 \)), while a standard utilitarian SWF would be indifferent between Policy A and Policy B.
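The two-person example can be checked numerically; the following sketch assumes \( g(u) = \sqrt u \) as in the text:

```python
import math

def utilitarian(u):
    return sum(u)

def prioritarian(u):
    return sum(math.sqrt(ui) for ui in u)  # g(u) = sqrt(u)

policy_a = [4, 1]  # individual 1 (u = 3) gains one unit
policy_b = [3, 2]  # individual 2 (u = 1) gains one unit

# The utilitarian SWF is indifferent; the prioritarian SWF prefers B,
# since the gain goes to the worse-off individual.
assert utilitarian(policy_a) == utilitarian(policy_b) == 5
assert prioritarian(policy_b) > prioritarian(policy_a)
```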
If the prioritarian and utilitarian SWFs described above are to be meaningful, one needs a utility measure \( u( \cdot ) \) that is interpersonally comparable. This means that it should be possible to say that the well-being of individual \( i \) is greater than the well-being of individual \( j \). One also needs to specify the function \( g( \cdot ) \) that provides the prioritarian weighting of the SWF. Exactly how the utility measure \( u( \cdot ) \) and the prioritarian weighting function \( g( \cdot ) \) should best be defined is outside the scope of this article, but note that the feasibility of interpersonally comparable utilities is a common assumption in the literature on moral philosophy and social welfare functions. Adler’s (2012) approach to creating a utility measure builds on John Harsanyi’s (1986) idea of extended preferences. For the prioritarian weighting function, Adler argues for a set of continuous prioritarian SWFs commonly used in welfare economics, called Atkinsonian SWFs. A simple Atkinsonian SWF is \( W(x) = \frac{1}{1 - \gamma }\sum\nolimits_{i = 1}^{N} {u_{i}^{1 - \gamma } } \), where \( \gamma \) is an “inequality-aversion” parameter with \( \gamma > 0 \) and \( \gamma \ne 1 \) (for \( \gamma = 1 \), the logarithmic form \( W(x) = \sum\nolimits_{i = 1}^{N} {\ln u_{i} } \) is used instead).
The choice of \( \gamma \) determines how inequality-sensitive \( W \) is: the SWF approaches utilitarianism as \( \gamma \) approaches zero, and approaches absolute priority for the worst-off individual as \( \gamma \) approaches infinity. How inequality-averse the social policy should be (what value \( \gamma \) should have) is an inherently normative question.
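The effect of \( \gamma \) can be illustrated numerically; the following sketch uses the simple Atkinsonian form above with hypothetical utility vectors:

```python
def atkinson(u, gamma):
    # W(x) = 1/(1 - gamma) * sum of u_i^(1 - gamma), for gamma > 0, gamma != 1
    assert gamma > 0 and gamma != 1
    return sum(ui ** (1 - gamma) for ui in u) / (1 - gamma)

equal, unequal = [50, 50], [10, 91]  # the unequal vector has the larger sum

# Near gamma = 0 the ranking is close to utilitarian (the larger sum wins)...
assert atkinson(unequal, 0.01) > atkinson(equal, 0.01)
# ...while a large gamma gives strong priority to the worst-off individual.
assert atkinson(equal, 5) > atkinson(unequal, 5)
```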
The final and central element is a way to incorporate responsibility into this framework. Several approaches to modeling considerations of responsibility in social choice frameworks have been developed recently (for an overview, see Fleurbaey 2008, chapter 8; see also Adler 2012, pp. 579–584). It is beyond the scope of this paper to compare these approaches and assess which is best. Instead, the aim here is only to argue that it is indeed possible to operationalize responsibility in a prioritarian social choice framework.
The proposal described here is based on defining a “hypothetical utility” that is used in the calculation of the social welfare function (the proposal is similar to, but distinct from, Adler’s proposal). More specifically, the proposal here is that responsibility can be modeled by introducing a hypothetical utility measure called responsibility-adjusted utility, defined as follows:
responsibility-adjusted utility, \( v_{i} \left( x \right) \) = the utility individual \( i \) could normally be expected to achieve, given circumstances affecting \( i \) that are outside of \( i \)’s control.
The responsibility-catering prioritarian social welfare function then becomes \( W(x) = \sum\nolimits_{i = 1}^{N} {g\left( {v_{i} (x)} \right)} \). The social ranking of the outcomes \( x \) and \( y \) is done as before: \( x \) is at least as good as \( y \) if and only if \( W(x) \ge W(y) \). The relationship between the expected utility, \( u_{i} (x) \), and the responsibility-adjusted utility, \( v_{i} (x) \), is interpreted as follows: if \( u_{i} (x) > v_{i} (x) \), then individual \( i \) is seen as responsible for achieving a higher utility than could normally be expected. The implication is that individual \( i \) is treated as if she or he had a lower utility than in reality, which will be an advantage for \( i \) in the evaluation of policies that assign priority to the worst off. If, on the other hand, \( u_{i} (x) < v_{i} (x) \), then individual \( i \) is considered responsible for achieving a lower utility than could normally be expected. Individual \( i \) is then treated as if she or he had a higher utility than in reality, which will be a disadvantage for \( i \) in the evaluation of policies that assign priority to the worst off.
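A minimal sketch of the responsibility-catering prioritarian SWF (the function name `rcp_swf` and the choice \( g(u) = \sqrt u \) are illustrative assumptions, not taken from the text):

```python
import math

def rcp_swf(v, g=math.sqrt):
    # Apply the prioritarian transform g to the responsibility-adjusted
    # utilities v_i, rather than to the actual utilities u_i.
    return sum(g(vi) for vi in v)

# Two individuals whose responsibility-adjusted utility is v = 90:
# one with actual utility u = 95 (u > v: treated as worse off than in
# reality) and one with u = 70 (u < v: treated as better off).
u = [95, 70]
v = [90, 90]
assert rcp_swf(v) == 2 * math.sqrt(90)  # u plays no direct role in W
```

The only change relative to the plain prioritarian SWF is the substitution of \( v_{i}(x) \) for \( u_{i}(x) \); the ranking rule itself is untouched.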
The interpretation of \( v_{i} (x) \) above states that it should be set relative to circumstances outside the control of the individual. This means that different individuals may have different responsibility-adjusted utilities, depending on their circumstances. What are here called circumstances correspond to “brute luck” in Dworkin’s terminology (see “Introducing Responsibility-Catering Prioritarianism” above). Precisely what should count as circumstances, and how they should influence \( v_{i} (x) \), depends on the particular application (see Fleurbaey (2008) for a discussion of circumstances and non-circumstances concerning responsibility).
The proposed approach is perhaps easiest to understand through examples comparing the results of the social welfare function for normal utility numbers and for responsibility-adjusted utilities. The approach will therefore be illustrated by hypothetical examples of health policy decisions concerning conditions that are, to a greater or lesser degree, caused by personal risk-taking. A commonly used example in discussions of responsibility will be used: tobacco smoking. It is, of course, controversial whether taking up the habit of smoking is mainly a free choice, or whether it is primarily due to a biological disposition or to social and economic circumstances. It should also be stressed that this should not be taken as an argument that smokers ought to be treated in any particular way, but only as an example used as a basis for discussing how responsibility can be modeled by a social welfare function. The example is relevant to the topic of this paper because it involves managing risks from the perspective of the policy-maker. In the following examples, it is assumed that the decision-maker has access to the expected utility of the different alternatives, which is not an unreasonable assumption in the case of exposure to statistically well-documented health risks in large populations, such as smoking.
To make the example as simple as possible, calculations will be made for only two individuals: Jill, who is a smoker, and Jack, who is a non-smoker. The smoker, Jill, is expected on average to have a shorter life and hence a lower level of well-being than the non-smoker, Jack. (Jack and Jill could be seen as representing large groups of individuals, but the examples become more accessible if one simply assumes that there are two individuals.) The outcomes for the two policy alternatives and the status quo are given in utility numbers (expected utility, \( u_{i} \)) that allow for interpersonal comparisons of well-being levels, well-being differences, and comparisons to a neutral level of zero well-being.
The first example is as follows: Assume that the expected utility for Jack and Jill is 90 and 70, respectively, without a policy (the status quo). A choice can be made between the status quo and two alternatives: Policy A, which is a general health care program that improves the well-being of both individuals by equal amounts (+4 units), and Policy B, which focuses on smoking-related diseases and therefore only promotes the well-being of the smoker (+8 units). This example is summarized in Table 1.
Table 1 Numbers represent expected utility (\( u_{i} \))

         Status quo   Policy A   Policy B
Jack     90           94         90
Jill     70           74         78

Which alternative (Policy A, Policy B, or Status quo) should be chosen? It is easy to see that a prioritarian social welfare function \( W(x) = \mathop \sum \nolimits_{i = 1}^{N} g\left( {u_{i} (x)} \right) \), where \( g( \cdot ) \) is a strictly increasing and strictly concave function, would evaluate Policy B as better than Policy A.
For the sake of the example, assume that people who smoke should be considered responsible for smoking and the effects of smoking, and that this should influence the choice of health care options. For this purpose, a similar table containing responsibility-adjusted utilities (\( v_{i} \)) can be constructed. For instance, if the average expected utility for non-smokers is \( u = 90 \), it could be argued that this is what should normally be expected: \( v = 90 \). If Jack and Jill are seen as belonging to the same class of relevant circumstances, the responsibility-adjusted outcomes will be as shown in Table 2.
Table 2 Numbers represent responsibility-adjusted utility (\( v_{i} \))

         Status quo   Policy A   Policy B
Jack     90           94         90
Jill     90           94         98

In Table 2, Policy A would be preferred to Policy B from the point of view that gives sufficient priority to the worst off. The interpretation of the responsibility-adjusted utilities is that the difference between Jack and Jill is due to responsibility (they have the same circumstances). Hence, the preferred social policy is the one that does not take into account the lower expected well-being of Jill, because she is seen as entirely responsible for smoking.
What happens if differences in circumstances between Jack and Jill are caused by factors outside their control (i.e., they are lucky or unlucky)? For example, Jill may have been born with a handicap that reduces her general level of well-being. If the average utility for non-smokers with that handicap is 80, then the responsibility-adjusted utilities would instead be as in Table 3.
Table 3 Numbers represent responsibility-adjusted utility (\( v_{i} \))

         Status quo   Policy A   Policy B
Jack     90           94         90
Jill     80           84         88

In Table 3, Policy B is the preferred choice according to a prioritarian evaluation. The effect of assigning Jack to a circumstance class with a higher responsibility-adjusted value (e.g., 100) would be similar. These examples show how responsibility changes the evaluation when responsibility-adjusted utilities are used in a prioritarian social welfare function. This section has aimed to show, then, how responsibility-catering prioritarianism could be operationalized, although much work remains to elaborate all the details.
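The three examples can be verified with a short calculation. The following sketch uses \( g(u) = \sqrt u \) (one illustrative choice of strictly increasing, strictly concave function); the utility numbers follow the prose above: a status quo of (90, 70) for Jack and Jill, Policy A adding 4 to both, Policy B adding 8 to Jill only, and responsibility-adjusted baselines for Jill of 90 (Table 2) and 80 (Table 3) with the same policy increments applied:

```python
import math

def W(utilities, g=math.sqrt):
    # Prioritarian SWF: sum of g applied to each individual's utility.
    return sum(g(u) for u in utilities)

# Table 1, expected utilities (Jack, Jill).
assert W([90, 78]) > W([94, 74])   # prioritarian: Policy B beats Policy A

# Table 2, responsibility-adjusted utilities, Jill's baseline raised
# to the non-smoker expectation of 90.
assert W([94, 94]) > W([90, 98])   # prioritarian: Policy A beats Policy B

# Table 3, Jill's circumstances (a handicap) set her baseline at 80.
assert W([90, 88]) > W([94, 84])   # prioritarian: Policy B beats Policy A
```

Note that the last two rankings reverse depending only on whether Jill's lower baseline is attributed to responsibility or to circumstances, which is the central point of the examples.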