Philosophical Studies, Volume 162, Issue 2, pp 219–236

Rule consequentialism and disasters

L. Kahn, Department of Philosophy, United States Air Force Academy

DOI: 10.1007/s11098-011-9756-8

Cite this article as: Kahn, L. Philos Stud (2013) 162: 219. doi:10.1007/s11098-011-9756-8
Abstract

Rule consequentialism (RC) is the view that it is right for A to do F in C if and only if A’s doing F in C is in accordance with the set of rules which, if accepted by all, would have consequences which are better than any alternative set of rules (i.e., the ideal code). I defend RC from two related objections. The first objection claims that RC requires obedience to the ideal code even if doing so has disastrous results. Though some rule consequentialists embrace a disaster-clause which permits agents to disregard some of the rules in the ideal code as a necessary means of avoiding disasters, they have not adequately explained how this clause works. I offer such an explanation and show how it fits naturally with the rest of RC. The second disaster objection asserts that even if RC can legitimately invoke a disaster-clause, it lacks principled grounds for distinguishing disasters from non-disasters. In response, I explore Hooker’s suggestion that “disaster” is vague. I contend that every plausible ethical theory must invoke something similar to a disaster clause. So if “disaster” is vague, then every plausible ethical theory faces a difficulty with it. As a result, this vagueness is not a reason to prefer other theories to RC. However, I argue, contra Hooker, that the sense of “disaster” relevant to RC is not vague, and RC does indeed have principled grounds to distinguish disasters from non-disasters.

Keywords

Act consequentialism · Rule consequentialism · Agent-centered constraints · Disasters · Ideal code · Vagueness

1 Rule consequentialism

Let me begin with a few stipulations in order to avoid ambiguity. I use the term “consequentialism” to denote the diverse family of moral theories according to which the rightness and wrongness of actions is solely a function of the goodness or badness of their consequences, broadly conceived. Consequentialism, so understood, includes utilitarianism. While it is possible to distinguish many kinds of consequentialism (and utilitarianism), I mention here only those kinds which are central to my purposes in this paper. There are many points worth exploring here, but to explore them now would be to wander too far afield.1

I use the expression “direct consequentialism” to refer to the sub-family of consequentialist theories which hold that the rightness and wrongness of actions is solely a direct function of the goodness or badness of their consequences. The most commonly discussed form of direct consequentialism is

Act Consequentialism (AC): It is right for agent A to do action F in circumstances C if and only if the consequences of A’s doing F are at least as valuable as those of any other action which A could have done in C.

As is well known, AC is subject to a wide array of criticisms. It is especially relevant for my purposes that one genre of these is rooted in the tension between what is required by our ordinary conception of morality2 and what is required by AC. More specifically, our ordinary conception of morality appears to provide what Shelly Kagan (1989, p. 4) calls

Agent-Centered Constraints: There are some types of action, G, H, etc. such that it is wrong for A to do G, H, etc. in C even if A’s doing G, H, etc. is a necessary means to bringing about the most valuable consequences.3

Though agent-centered constraints are typically said to prohibit lying, stealing, breaking promises, and intentionally harming the innocent (among other things), the precise content of agent-centered constraints is of secondary importance to this particular line of attack on AC. For AC does not include room for any agent-centered constraints in its account of rightness and wrongness. On the contrary, it tells us that it is right to lie, steal, break promises, and intentionally harm the innocent whenever the value of the consequences even slightly outweighs the alternatives. Indeed, that is the force of the term “agent-centered.” These constraints do not require agents to minimize, e.g., the instances of lying among all agents; they prohibit each agent from lying. As a result, agent-centered constraints require us not to lie, even if we could thereby prevent two or more other lies (each with equally bad consequences, we can assume for the purposes of this example). Of course, act consequentialists offer a wide variety of responses to criticism based on agent-centered constraints, many of which exhibit considerable ingenuity and creativity. However, I do not attempt to evaluate them here since my interests are farther downstream. So I largely take the cogency of this genre of criticism for granted here.

Now, many remain sympathetic to consequentialism even though they think that the problems raised by agent-centered constraints are fatal to AC. As a result, they have attempted to push past direct consequentialism in general and AC in particular. While the classical utilitarians—Bentham, Austin, Mill, and Sidgwick—addressed this possibility, R.F. Harrod deserves much of the credit for framing this response in its current form. Harrod sought to provide “a theory of the nature of moral obligation, which is consistent with [consequentialism], but in closer conformity with the ordinary view of obligation” (1936, p. 137). Call Harrod’s alternative “indirect consequentialism.” According to indirect consequentialism the rightness and wrongness of actions is (unsurprisingly) an indirect function of the goodness or badness of their consequences. Though there are many forms of indirect consequentialism, Harrod outlined an influential theory which was indirect in the following sense: The rightness and wrongness of action is a function of its conformity with rules which lead to good results (1936, p. 155). To borrow another expression from Kagan (2000, pp. 135–136), rules are the “evaluative focal points” of this approach. Call the position Harrod had in mind “rule consequentialism.”4

Yet how, precisely, should we formulate rule consequentialism? Derek Parfit provides the basis of an answer when he calls it the theory which tells us that “Everyone ought to follow the principles whose universal acceptance would make things go best” (2011).5 On the basis of Parfit’s paraphrase, we might try the following stipulations:

Rule Consequentialism (RC): It is right for agent A to do action F in circumstances C if and only if A’s doing F in C is in accordance with the ideal code.

The Ideal Code (IC): The set of rules which, if accepted by all, would have consequences which are better than any alternative set of rules.6

RC and IC provide the bare bones of rule consequentialism. Of course, it is possible—indeed, necessary—to flesh out these bones if one is to offer a complete version of the theory.7 However, I won’t do that here. This is the case because—and let me stress this point as strongly as I can—I am primarily concerned with two aspects of a single line of criticism of RC, a line I begin to discuss in Sect. 2. The above statement of RC is sufficient for this purpose. While I do touch on other criticisms of RC occasionally, I can do no more than acknowledge them in passing and point to where I address them elsewhere.

At any rate, it should now be clear enough why some have been attracted to RC. We have already seen that AC does not have room for agent-centered constraints in its account of rightness and wrongness and, as a result, suffers from a mismatch with our ordinary conception of morality. However, RC does have room for such constraints. In order to find it, we need only reflect on the rules that are likely to appear in IC. I agree with both proponents and critics of RC in concluding that generally following rules against lying, stealing, breaking promises, and hurting others almost certainly leads to better results than the alternatives, so IC will include these rules.8 The bottom line is that some criticisms of AC which are quite potent are simply non-starters when applied to RC since the latter theory allows consequentialism and agent-centered constraints to live side-by-side. It is true, of course, that RC claims that these constraints are derivative rather than basic. That is to say, the agent-centered constraints are grounded in the general value of acting in accordance with them. But it is in no way clear that our ordinary conception of morality takes agent-centered constraints as basic. Moreover, few well-developed ethical theories hold that the wrongness of lying or stealing is basic, instead of being grounded in something deeper, e.g., respect for rational nature, human excellence, or some such. So the fact that RC sees agent-centered constraints as derivative hardly counts against the theory.

2 The first disaster objection

Nevertheless, RC is open to criticism as well. In the rest of this paper, I offer a defense of the theory from two manifestations of what I call the “disaster objection.” A thought experiment will be useful for introducing the first of these.

Torture-1: A demon will torture everyone in London for a year unless John lies to Harriet.9

It should be obvious that it is right for John to lie to Harriet because doing so is the only way he can prevent 8,000,000 people (roughly the population of London) from suffering unbearable agony. And it should also be obvious that this is exactly what AC tells John to do. Why? Recall that AC recognizes no agent-centered constraints, and saving 8,000,000 people from suffering terribly is obviously the best thing that John can do in the circumstances. According to AC, nothing (or at least nothing more than the easily outweighed harm to Harriet) stands in the way of the rightness of John’s lying to her.

The worry for those who find RC plausible is that this form of indirect consequentialism fails to yield a similar result. Why? John will have to lie to save the Londoners, but RC prohibits actions which violate IC, and we just saw that lying would do so. Thus, this line of reasoning concludes, RC does not permit actions such as lying even if they are the only way to prevent disasters. Peter Railton nicely sums up this reasoning, explaining that RC “could recommend acts that (subjectively or objectively) accord with the best set of rules even when the rules are not in fact generally accepted, and when as a result these acts would have devastatingly bad results” (1984, p. 169, italics in the original). RC is, this line concludes, an implausible theory. Call this the “first disaster objection” or simply “DO-1.”10 I should note that it does not matter for the purposes of this paper why the disaster is in the offing. It might or might not be the result of the failure of others to comply with IC.

It is probably not an accident that some of those who are sympathetic to AC (such as Railton and others to be considered below) have also pressed DO-1 against RC. For DO-1 derives some of its strength from the way in which RC is meant to avoid the problems of AC. Let me explain. On the one hand, recall that AC is committed to the rightness of doing whatever best promotes the value of the consequences of one’s actions. So, as we have already seen, if the value of the consequences of violating an agent-centered constraint even slightly outweighs not doing so, then AC claims that it is right that we do it. On the other hand, while RC avoids this requirement, as we have just seen it appears to face a difficulty at the opposite extreme. RC is committed to the rightness of doing only what is in conformity with IC. So even if the value of violating one of the elements of the code greatly outweighs not doing so, RC forbids us to do so. The very move that saves RC from the fate of AC sinks it in cases of disaster. Or so goes the argument.

But does Torture-1 really show that RC is implausible? To be sure, there is a sense in which Railton and other advocates of DO-1 are correct. If RC were formulated in such a way as to allow no exceptions to any of the rules in IC, then, in consequence, it “could” require us to act in a way that would lead to disaster. And it is true that in unguarded moments, some advocates of RC are tempted by this formulation. E.g., Richard Brandt once wrote that RC “may quite well” hold “that certain kinds of actions are morally out of bounds no matter what the circumstances” (1972, p. 337).

Nevertheless, rule consequentialists usually maintain that there is a far more important sense in which Railton and other advocates of DO-1 are wrong. There is no reason, they say, to assume that RC is so strict about rules that there are absolutely no exceptions, for instance, to the rule against lying, regardless of what would happen as a result. On the contrary, Brad Hooker among others maintains that “the idea that rule consequentialism would require blind obedience to such rules” in these situations is “absurd” (2000, p. 99). Indeed, in more sober moods, Brandt himself made clear that he thinks the rules in the ideal code are not absolute; see, e.g., his (1996, p. 148).

All that said, there is more work to do before we can acquit RC of this charge. It is one thing for rule consequentialists to say that it would be “absurd” for their theory to claim that we should never violate IC. It is another thing altogether to show that rule consequentialists have a non-ad hoc reason for avoiding this result that is supported by their theory. Showing this is vital to providing anything resembling a vindication of RC. For unless rule consequentialists can show that they have resources within their own theory to explain why it is permissible to violate certain rules in disaster circumstances, it would appear that RC can be defended only by implicitly appealing to the premises of another ethical theory, rendering the theory otiose. Can they do so? In fact, it is surprising that this question has not been explored in much detail. In the next section, I offer what I take to be a reasonable reconstruction of an answer that does just that.

3 Response to the first disaster objection

The answer centers on an element of IC that is often called the “disaster clause” (e.g., Hooker, 2000, p. 86). On first hearing, this phrase suggests that we should think of IC as a set of clauses or rules—including the disaster clause—all of which operate on the same level.11 Here is an example of such a set, which I call “Code 1”.
(i) Do not lie.

(ii) Do not steal.

(iii) Do not break promises.

(iv) Do not intentionally harm innocent persons.

(v) Do not allow disasters to occur. (The Disaster Clause)

[Further rules as necessary]
I take it for granted that the elements of the code would have to be fairly simple in order to facilitate widespread internalization.12 Let me stress that the sets of rules in Code 1 and its successors (to be considered in a moment) are merely meant to illustrate certain kinds of interactions among their elements. It is for this reason that I add qualifiers such as “further rules as necessary.” I do not claim that any of these even approximate a complete ideal code. The precise nature of the content of the code is, to a large degree, an empirical question, and I can at best gesture at details here. It is one thing to make seat-of-the-pants generalizations about the value of the consequences of honoring promises and respecting the property of others; it is another to go much farther.

What we need to attend to for my present purposes is that Code 1 cannot be correct because of its structure. For the disaster clause is meant to show how RC can make good sense of cases in which the only way to avoid a disaster is to breach one or more of the other rules in the ideal code. But there is a problem. If we apply Code 1 to Torture-1, then we are simply left saying that there is a conflict between rules (i) and (v). We are given no guidance about what to do when rules on the same level conflict. Code 1 does not tell us whether to violate the rule against lying or the rule against allowing disasters.

Hence, we might think that it is better to conceive of the disaster clause as operating on a higher level than the rules, i.e., as a principle rather than a rule, as in Code 2.13

Rules:

(i) Do not lie.

(ii) Do not steal.

(iii) Do not break your promises.

(iv) Do not intentionally harm innocent persons.

[Further rules as necessary]

Principles:

(I) Break one (or more) rules if and only if doing so will avoid a disaster. (The Disaster Clause)

[Further principles as necessary]

Unlike rules, principles grant individuals in special circumstances powers, immunities, and the like with regard to rules.

A word or two about rules and principles is in order. I am, of course, using the expressions “rules” and “principles” as terms of art, but nothing of any importance turns on this fact. My use of “rules” and “principles” has clear affinities with some uses of the terms “primary rules” and “secondary rules,” most obviously those of H.L.A. Hart (1961). However, there are two important differences. First, I am addressing moral, not legal, phenomena, and, as such, these rules and principles are not social matters of fact that are suitably understood along the lines on which legal positivism understands laws. Second, I do not see my term “principles” as picking out anything similar to Hart’s rule of recognition. If anything, it is RC, not anything in IC, that conforms best to the rule of recognition. Some other philosophers use the terms “primary rules” and “secondary rules” in a way very different from Hart’s but distinct from my own as well. Kagan (1998, pp. 68, 70, and 224) is a good example of someone who uses these terms to denote what many philosophers call a “decision procedure.” I shall have more to say about decision procedures in Sect. 5 of this paper.

Let me return now to Code 2, which is, I believe, an improvement over Code 1. This is the case because, unlike Code 1, Code 2 gives us guidance for avoiding disasters. Under normal conditions, Rule (i) requires us not to lie. So it requires that John not lie to Harriet for the fun of it. But under disaster conditions such as those in Torture-1, Principle (I) requires us to do certain things even if they contravene rules such as (i). Nevertheless, Code 2 fails as well. I said a moment ago that the right thing for John to do in Torture-1 is to lie to Harriet—and so it is. Yet it is instructive to note the need for moral reasoning in this case. There is a felt conflict in Torture-1 between two moral centers of gravity, and, even if one of these centers has a much stronger pull, we need to be able to explain this conflict.14 However, Code 2 does not provide the resources for doing so. Principle (I) simply allows agents to set aside Rule (i) in this case and thereby dispenses with any sense of moral conflict.

So let me offer a third and final way of understanding the disaster clause. Consider Code 3:

Rules:

(i) Do not lie.

(ii) Do not steal.

(iii) Do not break your promises.

(iv) Do not intentionally harm innocent persons.

(v) Promote the good of others at least to some minimum level L.

[Further rules as necessary]

Principles:

(I) Under ordinary conditions, agents are neither obligated nor empowered to violate (i)–(iv) in order to fulfill (v). However, under disaster conditions, agents are empowered and obligated to violate (i)–(iv) only if doing so is a necessary means to (v). (The Disaster Clause)

[Further principles as necessary]
What is vital to note here is that Code 3 both identifies an object-level conflict between two rules and provides practical guidance to agents faced with this conflict. To be sure, Principle (I) does not resolve the conflict without remainder, but it is not meant to. Rather, it is intended to tell us what it is right to do in the face of such conflict.

We can now acquit RC of the charge considered at the end of Sect. 2. For we can see how it provides a non-ad hoc explanation of why we are permitted to violate some elements of IC in certain situations. RC can provide for the avoidance of disasters without appealing to something outside of IC or itself. Moreover, we can even see why something like Code 3 would form part of IC since, by avoiding disasters, it would have better overall consequences than its rivals. So RC is not just consistent with a disaster-avoidance clause; it justifies it.
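The two-level structure of Code 3 can be made concrete with a short sketch. Nothing below comes from the paper itself: the rule names, the boolean condition flags, and the function `permitted` are all invented for illustration, and the sketch assumes the crude simplification that whether conditions count as a disaster is given as an input.

```python
# Illustrative sketch of Code 3's two-level structure (not the paper's own).
# Rules (i)-(iv) are agent-centered constraints; Principle (I) licenses
# violating them only under disaster conditions, and only when the violation
# is a necessary means to averting the disaster.

CONSTRAINED_ACTS = {"lie", "steal", "break_promise", "harm_innocent"}  # rules (i)-(iv)

def permitted(action, disaster_conditions, necessary_to_avert):
    """Does the action accord with Code 3?"""
    if action not in CONSTRAINED_ACTS:
        return True   # no constraint applies to this action
    if disaster_conditions and necessary_to_avert:
        return True   # Principle (I) sets the constraint aside
    return False      # under ordinary conditions the rule binds

# Torture-1: lying to Harriet is the only way to prevent the torture of
# 8,000,000 Londoners, so Principle (I) comes into play.
print(permitted("lie", disaster_conditions=True, necessary_to_avert=True))    # True
# Lying "for the fun of it" under ordinary conditions remains wrong.
print(permitted("lie", disaster_conditions=False, necessary_to_avert=False))  # False
```

Note that the sketch only reports what it is right to do; like Principle (I) itself, it does not pretend to dissolve the felt moral conflict that remains when a constraint is set aside.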

It is worth noting a few further points about disaster clauses here, though I shall have more to say about them later in this paper as well. First, avoiding disasters has a darker side than we have so far seen. Hooker asks us to imagine “that, because of some bizarre twist of fate, torturing to death a child who happens to be on a different planet is the only way to save the rest of the entire human species (and every other species) from excruciating suffering followed by a painful death” (2000, p. 129, italics in the original). He thinks that there are circumstances such as these in which “refusing to save the world at the cost of one innocent child would, I believe, be worse than sacrificing the child” (2000, p. 130). Hooker is surely right, and it is a feature of any plausible moral theory that it will license torturing and killing the innocent when the stakes are very high, but it is an ugly feature nonetheless, and we should not shut our eyes to it.

A second point concerns the possibility of being faced with multiple disasters. To repeat, the exact nature of the ideal code is beyond my power to specify in any detail here. Yet one can see that even Principle (I) needs to be fine-tuned further since it does not provide us with sufficient guidance if there is nothing that an agent can do in order to avoid disaster. E.g., in Torture-1, John’s choices are between lying to Harriet or saving 8,000,000 Londoners from a year of agony. But he might be faced with a further complication.

Choice: A demon will torture everyone in London for a year unless John lies to Harriet, and he will torture everyone in Tokyo for a year unless John lies to Helen. But the demon will torture both groups if John lies to both Harriet and Helen.

In Choice, it is not in John’s power to keep some disaster from occurring. Either 8,000,000 Londoners will suffer agony, or 13,000,000 residents of Tokyo will. But IC should direct John to prevent the worse of the disasters. As a result, Principle (I) needs to be modified as follows:

(I) Under ordinary conditions, agents are neither obligated nor empowered to violate (i)–(iv) in order to fulfill (v). However, under disaster conditions, agents are empowered and obligated to violate (i)–(iv) only if doing so is a necessary means to (v) and, when it is not possible to prevent every disaster from occurring, only if doing so brings about the less bad disaster. (The Disaster Clause)

The vital point to note is that Principle (I) can be modified in such a way that, in order to direct individuals to prevent the worse of two disasters, it does not have to appeal to a yet higher-order principle. There is no threat of an infinite regress facing RC.
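Structurally, the modified principle's lesser-disaster selection is just a minimization over unavoidable outcomes. Here is a sketch under the same caveats as before: the option names and badness figures simply encode the Choice case in the crudest possible currency (number of people tortured), and none of it comes from the paper itself.

```python
# Sketch of the modified Principle (I) for cases like Choice, where every
# available option leads to some disaster: pick the option whose resulting
# disaster is least bad. Badness is crudely measured by victims tortured.

def least_bad_option(options):
    """Return the option that brings about the least bad disaster."""
    return min(options, key=options.get)

# Choice: lying only to Harriet spares London (Tokyo suffers); lying only to
# Helen spares Tokyo (London suffers); lying to both condemns both cities.
choice_case = {
    "lie_to_Harriet_only": 13_000_000,  # Tokyo's residents are tortured
    "lie_to_Helen_only":    8_000_000,  # London's residents are tortured
    "lie_to_both":         21_000_000,  # both groups are tortured
}
print(least_bad_option(choice_case))  # lie_to_Helen_only
```

Because the selection is a simple minimization over the same first-order options, no further, higher-order principle is needed to adjudicate it.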

Here is a third point. It is natural to think of disasters as circumstances in which at least some individuals—perhaps a very large number of them—will suffer high costs unless someone acts to prevent it. E.g., it is highly intuitive to think of a situation in which 8,000,000 Londoners are tortured unless John prevents it as a disaster. But consider a situation which differs in some important ways but is similar in others.

Ecstasy: A demon will make everyone in London feel as much pleasure as they can possibly feel for a year but only if John lies to Harriet.

Must rule consequentialists consider Ecstasy to be a disaster case? If so, then the disaster clause appears to be a bad fit with our ordinary conception of morality since a situation in which a demon fails to make people ecstatic is not a disaster. But if not, then rule consequentialists seem to need to provide a reason for distinguishing between cases in which a great harm occurs and cases in which a great benefit does not. However, there is an alternative that I have already adumbrated. Rule (v) in Code 3 only requires that agents promote the good of others up to some minimal level. As before, it is an empirical question precisely what that level is. In particular, it is a question about what level of promotion would, if universally internalized, bring about the best results. Yet it is plausible to think that this level falls significantly short of providing others with the greatest amount of pleasure they can feel. So in Ecstasy, Principle (I) does not even come into play. For there is no conflict between Rules (i) and (v) since John is not required to make Londoners feel ecstatic.

4 The second disaster objection

While proponents of RC provide a plausible response to DO-1, at least once the nature of the disaster clause is clarified, they have not yet done so with regard to another, related objection. Call it the “second disaster objection” or “DO-2.” In order to illustrate DO-2, it will be useful to consider a few slight variations of Torture-1, namely

Torture-2: A demon will torture everyone in London, except for one person chosen at random, for a year unless John lies to Harriet.

What should John do? Surely, the answer in Torture-2 is the same as in Torture-1. John should lie to Harriet because doing so is the only way he can prevent 7,999,999 people from being tortured. And RC can explain why this is the case by invoking the disaster clause in IC. Now consider

Torture-3: A demon will torture everyone in London, except for two people chosen at random, for a year unless John lies to Harriet.

Again, John should lie to Harriet because doing so is the only way he can prevent 7,999,998 people from being tortured. And, just as before, RC can explain why this is the case by invoking the disaster clause in IC. However, a potential problem should be coming into focus. At some point, the number of Londoners who would be tortured if John does not lie to Harriet will reach a point that is too low to qualify as a disaster. Under those conditions, let us say, it qualifies as only a “near disaster.” But Richard Arneson asks, “If following the ideal code of rules even when doing so leads to disaster is irrational and morally wrong, why shouldn’t we agree that following the sophisticated ideal code of rules even when doing so leads to near disaster is also irrational and morally wrong?” (2005).15 It seems to me that rule consequentialists need either to provide a compelling answer to Arneson’s question or to show that it is somehow misconceived.

Before moving on to this task in the next section, it is important to pause in order to appreciate the dialectical force of DO-2. The point is not that RC will collapse into “extensional equivalence” with AC—that is to say, the point of DO-2 is not that it shows that AC and RC permit and require exactly the same actions. And the point is not that the disaster clause in IC will become so expansive as to make RC too demanding. The first point has, I think, been decisively addressed elsewhere.16 The second point is worth serious consideration but treating it adequately would require another paper at least as long as this one, though I have already hinted at how limitations on the degree to which one must promote the good of others can be built into the rules.17 Rather, what is supposed to be at stake in DO-2 is the fact that RC is forced to make a morally arbitrary distinction between John’s obligations in disasters and his obligations in near disasters. For the difference in the value of the consequences of John’s options in the least bad case of disaster and the worst case of near disaster is slight, and, therefore, it cannot justify requiring him to act in one case and requiring him not to act in the other.

5 Two replies to the second disaster objection

How should rule consequentialists respond to DO-2? Brandt (1996, p. 149) seems surprisingly unconcerned with this question; in order to determine whether something is a disaster situation, he says, one needs only “to supply numbers” as do courts of law, “which seem to have no difficulty with the situation.” Given the worries about vagueness in law raised by Timothy Endicott (2001) and others, it is striking that Brandt seems so relaxed about this analogy. Hooker clearly takes the situation more seriously, but his response is not persuasive. In his reply to Arneson, Hooker tells us that “what counts as a disaster is vague” (2005, p. 272). More recently, when pressed on the matter, Hooker added that “While [vagueness] is a worry, trying to eliminate all vagueness from ethics would be quixotic” (2010, p. 114). However, even Hooker seems to have his doubts about the cogency of this line of reply. In an earlier discussion, he acknowledges that RC “will not be plausible if it insists on precision” in disaster cases (2000, p. 135), and, as a result, the theory plausibly “comes up short” (2000, p. 136). It is not hard to see why Hooker might be skeptical about his own reply. For a proponent of DO-2 does not have to deny that trying to eliminate all vagueness from ethics would be a fool’s errand. The question is not whether vagueness can be eliminated altogether but how it can be minimized. Surely, the defender of DO-2 is right to say that less vagueness is preferable to more vagueness in any ethical theory since the less vague a theory is, the better it will be able to guide our actions. So if one shows that theory T1 needs to use a vague sense of a predicate while theory T2 need not, then one shows that T2 is, ceteris paribus, better than T1. Now, proponents of DO-2 who are sympathetic to AC (such as Arneson, de Lazari-Radek, and Singer) can conceivably argue that their theory has no need of anything like a disaster clause or the notion of a disaster more generally.

In order to be a credible candidate for a reply to DO-2, Hooker’s response needs to be developed further. Begin with a general principle about the evaluation of ethical theories in light of vagueness:

(P) If every plausible ethical theory helps itself to X and X is vague, then X is not a special problem for any plausible ethical theory, i.e., it is not a reason for preferring one theory to another.

In addition to (P), we need two further premises:

(1) Every plausible ethical theory must incorporate either a disaster clause or a relevantly similar alternative.

(2) “Disaster” is vague (as are all relevantly similar alternatives).

We can conclude from (P), (1), and (2):

(3) RC is not at a disadvantage relative to any plausible ethical theory with regard to the vagueness of the disaster clause.
Call this the “Vagueness Response.” This reply is an improvement on Hooker’s because a proponent of DO-2 cannot accept its premises (including (P)) without abandoning her objection to RC.

A point needs to be addressed before continuing. Obviously, vagueness is an exciting topic, and the literature on it has undergone an explosion of late.18 But talk of vagueness is not meant to pick out a general sense of imprecision. Rather, the sense of a predicate is vague if and only if there are borderline cases with respect to whether or not this sense is truly predicated of its subject. So “disaster” is vague if and only if there are some cases in which it is indeterminate whether or not, say, X is a disaster. Since DO-2 relies on the possibility of such borderline cases, vagueness is indeed the correct concept to invoke here.
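The borderline-case definition of vagueness just given can be illustrated with a toy three-valued predicate. The thresholds below are invented purely for illustration—the paper takes no stand on any such numbers—and the point is only the shape of the predicate: a vague predicate has a region where its application is indeterminate.

```python
# Toy illustration of a vague predicate: between two invented thresholds
# there are borderline cases where the predicate's application is
# indeterminate (represented here as None, i.e., neither True nor False).

def is_disaster(victims, clearly_not_below=1_000, clearly_is_at=1_000_000):
    """Three-valued verdict: True, False, or None for borderline cases."""
    if victims >= clearly_is_at:
        return True
    if victims < clearly_not_below:
        return False
    return None  # borderline: indeterminate whether this is a disaster

print(is_disaster(8_000_000))  # True  (a clear case, as in Torture-1)
print(is_disaster(3))          # False (a clear non-case)
print(is_disaster(50_000))     # None  (a borderline case)
```

DO-2 trades on exactly such borderline cases: if “disaster” admitted none, the charge of arbitrariness could not get started.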

Since my discussion of the Vagueness Response is somewhat complex, let me adumbrate. I argue that (1) is correct. In particular, I begin by assuming that any plausible ethical theory at some level incorporates (i) agent-centered restrictions as well as (ii) the means to set aside these restrictions in extraordinary circumstances. This is even true of AC, though, significantly, it incorporates agent-centered constraints into its decision procedure rather than its account of right and wrong action. (More about this distinction in a moment.) Now, I contend that the conjunction of (i) and (ii) commits any plausible ethical theory to either a disaster clause or something that is dialectically on a par. As a result, if “disaster” (or everything relevantly similar) is vague, then the vagueness of the disaster clause is not a special problem for RC. However, I also argue that (2) is false—or at least false once it is stated with sufficient care. So the Vagueness Response is unsound. Nevertheless, rule consequentialists have a better reply to DO-2. As I argue, they can show that the sense of “disaster” that is relevant to their theory is neither vague nor applied in a morally arbitrary manner, as Arneson maintains.

Return now to Premise (1). Recall from Sect. 1 that talk of “agent-centered constraints” is usually introduced in order to limit the ways in which agents can seek to promote the good. According to agent-centered constraints, there are some types of action, G, such that it is wrong for A to do G in C even if A’s doing G is a necessary means to bringing about the most valuable consequences. Of course, agent-centered constraints do not apply only to those who are seeking to promote the good; no one who accepts agent-centered constraints believes that one can lie, steal, break promises, etc. if one is not seeking to promote the good. Nevertheless, it is within the context of promoting the good that agent-centered constraints are thought to be most relevant. Now, let T be any theory that incorporates agent-centered constraints. For the purposes of example, I shall assume that these constraints in T prohibit lying. Now, return to Torture-1. I claimed—reasonably, I hope—that the only plausible thing for John to do is to lie to Harriet in order to save 8,000,000 Londoners. So it is not just RC which must find grounds to set aside its agent-centered constraints in Torture-1. To repeat, any plausible theory must do so. While RC is best understood as achieving this through a disaster clause, as I argued in Sect. 2 of this paper, other theories might do so in other ways. Nevertheless, they must do so, somehow or other, in order to avoid the wildly implausible idea that the badness of lying somehow trumps the badness of millions being tortured. Turn from Torture-1 to Torture-2. Once again, any plausible theory with agent-centered constraints must provide justification for setting aside the constraint against lying in order to save 7,999,999 Londoners. The same is true if we turn from Torture-2 to Torture-3.
But once again the number of Londoners who would be tortured if John does not lie will reach a point that is too low, according to T, to justify setting aside its constraint against lying. While nothing turns on calling this point a transition from “disaster circumstances” to “near-disaster circumstances,” I shall retain this terminology to emphasize the point I am making. Now, Arneson asked about RC, “If following the ideal code of rules even when doing so leads to disaster is irrational and morally wrong, why shouldn’t we agree that following the sophisticated ideal code of rules even when doing so leads to near disaster is also irrational and morally wrong?” It should be clear from the foregoing that if we substitute “agent-centered constraint” for “ideal code of rules,” Arneson’s question has no less force. Nothing about this natural extension of DO-2 makes reference to RC. To the extent it can be motivated at all (a subject to which I shall return shortly), it can be motivated against any theory that incorporates agent-centered constraints into its account of rightness and wrongness. Such theories include not only RC but other forms of indirect consequentialism, as well as many (if not all) versions of deontology and virtue ethics. Surely, they even include our ordinary conception of morality, if it counts as an ethical theory, since its place for agent-centered constraints was originally what motivated the move from AC to RC.

Let me pause to address a possible source of objection. One might think that any form of consequentialism is especially vulnerable to DO-2 because of the importance it places on the value of the consequences of actions. Merely accepting the existence of agent-centered constraints, this objection runs, does not commit one to the overwhelming importance of the good, but RC is not like that. It justifies agent-centered constraints solely, though indirectly, in terms of the good. So RC faces a special problem.

However, this objection is confused. The concern raised by DO-2 is about moral arbitrariness, not about value. It gathers its strength from the intuition that it is illegitimate to set aside rules in one case but not do so in another nearly identical case. Of course, some critics object to all forms of indirect consequentialism as being incoherent in their relationship with value. The idea here appears to be roughly this: If one accepts that the rightness and wrongness of actions is a function of the value of the consequences of these actions, then it is incoherent to hold that any action is right unless its consequences are at least as good as the consequences of any alternative action.19 Indeed, this criticism comes close to assuming that direct consequentialism is the only coherent form of the theory. In fact, I believe this criticism is mistaken. However, I do not need to make the case for that claim here. In any event, this line of criticism of RC is completely independent of DO-2. If it were correct, then there would be no need for DO-2, and if it is, as I believe, incorrect, it implies nothing for DO-2. We can, therefore, set it aside for the purposes of this paper.

Despite initial appearances, even act consequentialists cannot avoid helping themselves to a disaster clause. Any minimally credible form of AC recommends adopting a decision-making procedure that is distinct from the act consequentialist theory of right and wrong. Echoing many, Roger Crisp (1997, p. 106) puts the matter this way: AC

should be understood only to be a theory about the criterion of right action. Nothing follows from it alone concerning exactly how we should think about how to act in our daily lives.20

Although act consequentialists do not—and cannot—incorporate agent-centered restrictions into their account of right and wrong, they must take seriously the notion of agent-centered restrictions when they turn to decision procedures. Invariably, act consequentialists claim that such a decision procedure prohibits lying, stealing, harming the innocent, etc. However, these prohibitions cannot be absolute for reasons we have already discussed. As a result, act consequentialists also have to take disaster clauses seriously when formulating a decision-making procedure. For the procedure must require its own violation (or, more properly, its own partial violation) when doing so will avoid disaster. Hence, an act consequentialist has no advantage over a rule consequentialist (or the adherent of any other theory) with regard to whether or not they make use of a disaster clause.

Now it might appear that Premise (1) of the Vagueness Response hands act consequentialists precisely what they need to reply to the concerns raised in Sect. 1 about agent-centered constraints. For in that Section, I reported that at least one source of motivation to move from AC to RC was the inability of AC to accommodate the place that our ordinary conception of morality has for such constraints. But haven’t I just shown that AC can accommodate them after all? Not quite. The complaint raised in Sect. 1 is not simply that AC cannot find some place for agent-centered concerns; the complaint is that it cannot find a place for these concerns within its account of right and wrong. Nothing I have said in this section requires revising this point. To be sure, I agree with Crisp and others that AC is better off for finding some place for agent-centered concerns, but doing so does not itself solve the problem with which I began.

So Premise (1) appears to be correct, but what of Premise (2)? While it is natural to speak of words or, more specifically, predicates, as being vague, it is in fact the senses of predicates that are in play in the Vagueness Response. I do not mention this fact out of a spirit of pedantry: It is surely correct that if every sense of “disaster” is vague, then it is a problem for every plausible ethical theory, since every plausible theory must incorporate either it or something relevantly similar. Given the truth of that claim, the vagueness of “disaster” does not count against RC in particular. Moreover, there are certainly some senses of “disaster” that are vague. That is to say, there are some senses of the predicate that admit of borderline cases in which there is no fact of the matter whether it is true or false to say that “This is a disaster.” So Premise (2) can be reformulated to take into consideration the point that it is senses of predicates, rather than predicates, that are vague. However, this reformulation of Premise (2) is not strong enough, in conjunction with Premise (1), to satisfy the antecedent of (P), similarly reformulated. Rather, what is required to satisfy the antecedent is that every sense of “disaster” is vague.

Yet it is false that every sense of “disaster” has borderline cases. Let me explain. English contains many predicates which have both vague senses in their ordinary applications and non-vague senses within theoretical contexts. An obvious example is “light.” Suppose you are asked whether there is any light in a room with open windows at noon. Of course, this is almost certain to be a case in which it is correct to say that there is light in the room, at least in this ordinary sense of “light.” But if you are asked the same question at dusk, you might well be confronted with a borderline case in which it is indeterminate whether or not the room contains light. So this ordinary sense of “light” is vague. However, “light” also has a sense used in physics and in some related special sciences. This sense picks out electromagnetic radiation in a range from 380 to 780 nm. In this sense of “light,” there are no borderline cases. Either a particular phenomenon counts as being light, or it does not.

Now return to the term “disaster.” As I have already conceded, there are senses of “disaster” which are vague. Surely, the recent oil spill in the Gulf of Mexico is a disaster, in almost any sense of “disaster” used in ordinary language. But consider something far less damaging yet very bad nonetheless—a small earthquake in a third-world country. It might well be indeterminate whether or not it is a disaster. Ordinary language is no worse for this fact, yet we are not limited by it. Just as a physical theory can provide us with a non-vague sense of an ordinary term such as “light,” so too an ethical theory can provide us with a non-vague sense of “disaster.” I turn to such senses in just a moment, but first let me make it clear that I am not claiming that ethics is, or can be, as precise as a science such as physics. In fact, the comparison is misplaced. Ethics and physics deal with different subject matter, and I do not think there is a clear sense in which their precision can be compared fruitfully. However, this much is true: Physics and many other sciences currently have the instruments capable of measuring certain phenomena with great precision, while ethics largely lacks this now and perhaps always will.

So the Vagueness Response fails. As we have already seen, Premise (1) is correct, and any plausible ethical theory must incorporate either a disaster clause or something relevantly similar to it.

However, “disaster” has both vague and non-vague senses. So Premise (2) is incorrect, and any ethical theory might be able to help itself to non-vague senses of “disaster” in order to avoid something analogous to DO-2. Of course, matters are slightly more complicated. It is not enough for a theory to appeal to a non-vague sense of “disaster”; it must also show that the distinction implicit in it between disasters and non-disasters is not morally arbitrary.

Contrary to DO-2, RC can do both. In fact, the distinction that RC makes between disasters and non-disasters is straightforward. According to RC, whether a particular situation counts as a disaster is determined by whether or not everyone would be justified in violating one or more elements of IC in order to prevent it, if there were no other way to do so. More to the point, a situation counts as a disaster if and only if the net value (benefits minus costs) of the consequences of allowing everyone to violate one or more of the rules in the ideal code, given that there is no other way to do so, is greater than the net value of not allowing everyone to do so. There is nothing inherently vague about this sense of “disaster.” Of course, rule consequentialists face a difficulty getting access to the information necessary to make the relevant calculations. As a result, it might well be difficult to know whether this or that situation counts as a disaster in a way that is quite different from the difficulty in knowing whether this or that counts as an instance of light. However, that fact is beside the point. The question is whether the sense of “disaster” used by RC is vague, not how easily we can learn whether this or that event is a disaster. Moreover, the specific epistemic difficulty here is simply an instance of a far more general epistemic difficulty faced by all forms of consequentialism and, indeed, many other ethical theories: It is sometimes difficult to know what the consequences of our actions will be and what value they will have. Though critics are free to press this point against consequentialism, it has no special relevance for DO-2.
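The criterion just stated can be put schematically. The notation below is introduced here only for illustration and is not part of RC’s canonical formulation: S is a situation, IC is the ideal code, and V assigns net value (benefits minus costs) to outcomes.

```latex
% Schematic statement of RC's sense of "disaster."
% S: a situation; IC: the ideal code; V: net value (benefits minus costs).
\[
  \mathrm{Disaster}(S) \;\leftrightarrow\;
  V\bigl(\text{everyone may violate the relevant rules of } IC \text{ in } S\bigr)
  \;>\;
  V\bigl(\text{no one may violate them in } S\bigr),
\]
given that violating the rules is the only available means of preventing the
outcome in $S$.
```

On this statement, whether S is a disaster turns on a strict inequality between two net values; assuming those values are themselves determinate, the predicate admits no borderline cases, however difficult the values may be to estimate in practice.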

Furthermore, the charge that RC’s distinction between disasters and non-disasters is morally arbitrary also fails. A distinction is morally arbitrary only if there is not sufficient moral reason to make the distinction. As we just saw, RC makes the distinction between disasters and non-disasters on the basis of whether or not everyone would be justified in violating one or more elements of IC in order to prevent it, if there were no other way to do so. Of course, this distinction would appear morally arbitrary if one already accepted another account of rightness and wrongness—e.g., the account of AC. Yet it would be question-begging on a grand scale to ask us to accept AC, or any other account of right and wrong, as part of a criticism of RC. Rather, what is required is that RC make the distinction between disasters and non-disasters in a way that is consistent with its own account of right and wrong. If, e.g., it counted some situations as disasters but denied that it was permissible for everyone to violate one or more of its rules as a necessary means of avoiding those disasters, then it would fail its own test. But this it manifestly does not do. Here, then, are RC’s principled grounds which DO-2 claims it lacks.

Of course, one might go beyond DO-2 and claim that these grounds are mistaken. I began this paper with the assumption that AC is open to criticism, not because it is internally inconsistent, but because it is out of step with our ordinary conception of morality. So one might think that RC must live up to the same standard. And so it must. Yet we must tread cautiously here. Ethical theory would have little point if it did nothing more than reproduce the claims of our ordinary conception of morality. Rather, some features of our ordinary conception have greater weight in the evaluation of ethical theories because they are better supported, more resistant to revision in the judgment of competent agents, more central to the coherence of our ordinary conception, etc. AC is not only open to criticism because of its inability to incorporate agent-centered constraints into its account of rightness and wrongness—it is open to serious criticism because of the importance of this feature of our ordinary conception of morality. No one—and certainly not the advocates of DO-2—has made the case that the distinction between disasters and non-disasters is as well supported, as resistant to revision, or as central to our ordinary conception as the place of agent-centered constraints. Moreover, no one has made the case that RC’s distinction between disasters and non-disasters is seriously misaligned with, or arbitrary from the point of view of, our ordinary conception of morality. Of course, RC’s distinction is much more precise, but that is to its advantage since it provides a method of guiding our action in difficult circumstances.

Footnotes

1. For helpful discussion of some of the varieties of consequentialism, see Brink (2006) and Sinnott-Armstrong (2006).

2. On our ordinary conception of morality, see what Parfit (1984, pp. 95–116) calls “Common-Sense morality” or simply “M,” what Kagan (1989, pp. 1–46) calls “ordinary morality,” and what is often called “folk morality” by those working in and around experimental philosophy such as Knobe (2003).

3. In this paper, I limit myself to discussion of agent-centered restrictions and do not address the parallel issue of agent-centered prerogatives.

4. For discussions of alternative forms of indirect consequentialism, see, e.g., Adams (1976), Railton (1988), Crisp (1992), Brandt (1996), and—despite her own characterization—Driver (2003).

5. I omit all page numbers from references to this forthcoming book since I have only an MS copy of it.

6. A highly abbreviated list of philosophers working on rule consequentialism (or something very similar) includes Harsanyi (1977), Brandt (1979), Hooker (2000), Mulgan (2006; though compare his 2001, especially Chaps. 3 and 8), Ridge (2006), Parfit (2011), as well as Kahn (Unpublished 1) and Kahn (Unpublished 2).

7. Compare, e.g., Brandt (1979, pp. 193–199), Hooker (2000, p. 32), and Mulgan (2006, p. 184).

8. E.g., Kagan (1989, p. 37) and Brandt (1996, pp. 145–155).

9. Obviously, more realistic thought experiments of this sort are possible, but they require much more qualification and take up considerably more room as a result. So I avoid them here. I turn to variations of Torture-1 in Sect. 4.

10. Also see Smart (1956), Foot (1985), Brandt (1979, pp. 278–285), Parfit (1984, pp. 30–31), Brink (1989, p. 237), Kagan (1989, pp. 36–37, 1998, pp. 231–232), and de Lazari-Radek and Singer (2010).

11. This is the model suggested in Hooker (2008). See also Kagan (1998, p. 232).

12. See, e.g., Brandt (1979, p. 181), Kagan (1989, p. 37), and Hooker (2000, pp. 78–80).

13. This is the model suggested in Hooker (2000, pp. 98–99).

14. See especially Williams (1985, pp. 185–187) and Brandt (1996, pp. 70–71). I discuss this point at length in Kahn (2011) and Kahn (Forthcoming).

15. I omit all page numbers from references to this paper since at the moment I lack access to the journal and have only a text copy of it. See also Harrison (1979), Lyons (1994), and de Lazari-Radek and Singer (2010).

16. Lyons (1965). The literature on this subject is too large to cite in any detail here, but for an overview of more recent work see Kagan (1998, pp. 225–230), Hooker (2000, pp. 93–99), and Mulgan (2006, pp. 137–140).

17. Also see Driver (2002), Mulgan (2006, especially pp. 301–339), and Hooker (2009).

18. Beginning, it seems to me, primarily with Williamson (1994).

19. Discussed, e.g., by Smart (1956), Kagan (1989, pp. 32–37), and Darwall (1998, p. 137).

20. See also Hare (1981, p. 150), Parfit (1984, Chaps. 1, 4, and 5), Railton (1988, pp. 168–174), Brink (1989, pp. 216–217), Brandt (1996, pp. 142–145), and Kagan (1998, pp. 68, 70, and 224).

Acknowledgement

I am very grateful to Dale Miller for his helpful comments on this paper.

Copyright information

© Springer Science+Business Media B.V. (outside the USA)  2011