1 Competition or Co-operation?

There is something that might loosely be called a paradox at the heart of philosophy. Philosophy is the love – the philia – of wisdom – sophia. Having wisdom is having knowledge, and knowledge, if of propositions, must be true. And the truth is something on which rational beings might be expected to converge. But since its inception, philosophy has been characterized by, one might even say (again in a loose sense) defined by, disagreement. Socrates’ method of elenchus proceeds largely by attempts at refutation of, for example, the views that what we should aim for in life are money and power (Thrasymachus and Meno), or pleasure (Callicles). In the Meno (80a-d), Socrates defends himself against the charge that he is like a sting-ray, which numbs those with whom it comes into contact, by arguing that it is better to be in a state of being undecided (aporia) than in a state of false belief, since this gives one a greater chance of apprehending the truth, and – if others join you – of agreement. Sadly, of course, Socrates’ dream has become ever more distant, as more and more theories and views have developed in philosophy. But I shall end my paper with the suggestion that there is nevertheless something in the idea that aporia may have advantages over false belief, or even true belief if that belief is itself unjustified.

The situation in moral philosophy is as bleak as it has ever been. I shall return in my conclusion to Henry Sidgwick’s Methods of Ethics, first published in 1874 and revised several times until reaching the canonical and posthumous edition of 1907. But here I want to suggest that the essence of Sidgwick’s view of the moral – or, to speak more broadly, the ethical – landscape is broadly correct.

Sidgwick saw ethics as in large part a debate between egoists (who believe that one should maximize one’s own good), consequentialists (who believe that one should produce the best outcome for all), and deontologists (who believe that there is more to ethics than producing the best outcome and that, for example, one should keep a promise for reasons of justice rather than merely because of the good consequences of doing so). The terms here are not all used by Sidgwick, and he restricted his own discussion to specific forms of the views in question (so, for example, both the egoism and the consequentialism he discusses are hedonistic). Further, there are fewer philosophical egoists around now than there were in Sidgwick’s day, and even then they were on the wane. But in the main things have remained the same as they always were: the war in ethics is between these three parties, though of course there are many bloody internecine conflicts within each party, and there are other parties – moral sceptics, for example, who see the war itself as based on a deep misunderstanding.

This raises a central and vitally important question. Should we, rather than joining one side or another in these battles, step back and try to learn something from the various participants, constructing an ecumenical view which may have a stronger claim to justification and offer a better chance of convergence – that is, of the end of philosophy as we now know it, and the beginning of the philosophical contemplation of the truth practised by the lucky philosophers who have emerged from the cave described by Socrates (Republic 514a-520e)?

2 Questions and Concepts

I have just said that normative or first-order ethics is largely a war between three different parties. But, unfortunately perhaps, things are a good deal more complicated than that. You may have come across the stereotype of the Oxford don who is always telling their students: ‘Define your question!’. That is usually taken to mean that the students have to offer their own definitions of the terms in the question given them by their tutor. This seems to me good advice, up to a point, but sometimes tutors just give their students some suggested readings, and the students have to come up with their own question. That meta-question – what should my question be? – is, I think, remarkably rarely asked by philosophers, and it has to be said that they are not always good at defining their terms in the usual way either. Here again we can learn from Socrates, who insisted that all sides agree on the issue under discussion, and that no progress should be attempted until such agreement is reached (of course, he rarely if ever adhered to this principle himself, but it is still a good principle). So what question, or questions, are people asking in philosophical ethics, and which question, or questions, should they be asking?

There are many, and here are some of them.

(i) What makes actions right?

This is perhaps the most obvious question we might take typical moral theories to be addressing. Note the important distinction between this explanatory question and the substantive question: ‘Which actions are right?’. Two people might agree on some list of right actions – benefiting others, respecting justice, telling the truth, and so on – but disagree on what makes these actions the right ones. One might be a divine command theorist, who believes that the right-making property of these actions is their being commanded by God, while the other might be an atheistic Rossian pluralist (see Ross 1930), who believes they are right for other reasons (perhaps just because they are right).

(ii) What do we have strongest reason to do?

Some will see this question as equivalent to the first, since for them the rightness of an action just consists in its being the action one has strongest reason to do. But there are other conceptions of rightness, some of which will not overlap even substantively with the property of being the action one has strongest reason to do. Consider, for example, the view that moral reasons are just one category of reasons among others, such that they can come into conflict with reasons in other categories. One obvious non-moral category might be that of self-interested or prudential reasons. Take this case, from Brad Hooker (1986). You are in a front-line trench, and the enemy are approaching. You have promised your comrades not to run away, and so have a moral reason to stay where you are. But you also have a self-interested reason to run, which of course conflicts with, and may even outweigh, your moral reason. On some views you may even be rationally required to run, which brings us to our third question.

(iii) What is it most rational to do?

Again, it is possible to see this question as equivalent to either or both of the first two questions above. But they can come apart. Consider, for example, an objective consequentialist who believes that what makes an action right is that it maximizes overall utility. Suppose I know that one of two options, A and B, will produce 100 units of utility and the other none, but I have no idea which is which, and that a third option, C, is available which will produce 90 units. The right thing for me to do might be A (if in fact A is the option that will produce 100), whereas the rational choice is C. This consequentialist may use the language of objective and subjective reasons: I have objective reason to choose A, and subjective reason to choose C. But they still provide material for two independent questions.
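The point can be put in terms of expected utility. On the assumption – added here purely for illustration, and not part of the case as stated – that A and B are equally likely to be the 100-unit option, the calculation runs:

expected utility of A = (0.5 × 100) + (0.5 × 0) = 50
expected utility of B = (0.5 × 0) + (0.5 × 100) = 50
expected utility of C = 90 (certain)

C maximizes expected utility even though it is certainly not the option that maximizes actual utility; hence the rational choice and the (objectively) right choice come apart.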

(iv) What feelings, motivations, or emotions are right?

Similar questions about feelings can of course be asked in terms of reasons or rationality, or indeed other notions. It might be suggested, for example, that treating feelings as right or wrong is a mistake, since it is reasonable to blame someone only for what is under their control, and we do not have voluntary control over our feelings. Rather, perhaps, we should ask which feelings are admirable, and then ask what it is right to do in the light of our answer to that question (for example, adopt some strategy for dealing with proneness to anger).

(v) What kind of person should I be?

Here again overlaps with previous questions are possible and indeed likely. One common answer to this question is that I should be a person with certain traits or dispositions, and that at least some of these are dispositions to do what is right, to feel appropriately, and so on. These traits might be described as virtues, and the lack of them as vices, and questions will then arise about whether the mere possession of these traits is valuable, and if so in what way.

(vi) What decision-procedure should I use?

The distinction between the notion of what makes an action right (often called the ‘criterion of rightness’, though note that its role is not purely in identification of right actions) and that of the correct decision-procedure was especially clearly articulated in the last century, mainly in connection with consequentialism (e.g. Bales 1971). Consider utilitarianism. What makes an action of mine right is that it maximizes utility, but it is not unlikely that if I focus constantly on trying to maximize utility I will produce less than the best outcome. Rather, as most utilitarians have suggested, I should, with appropriate caution, follow the rules of common-sense morality – forbidding lying, encouraging assistance to others, and so on – on the ground that following that strategy is itself recommended by utilitarianism. It is true, of course, that choosing to follow that strategy is itself an action, but it is obvious that following the strategy is a different decision-procedure from that of applying utilitarianism on a case-by-case basis.

(vii) What is evil, and how does it relate to ‘mere’ wrongness?

I mention this question primarily to illustrate the richness of our conceptual resources in ethics. There will be many like it, concerning concepts that I have not yet mentioned, such as supererogation, rights, and so on. The three key moral theories I’ve mentioned may, or may not, provide resources for answering such questions, but as I have already said they are usually seen as answers to one or more of the questions at the start of my list.

3 Socrates’ Question

To provide focus, then, I must decide which question, or questions, I am going to see egoism, deontology, and consequentialism as attempts to answer. It will be closest to (ii) above: What are our reasons to act, and if there are several of them how do they relate to one another? I see this question as equivalent to what Socrates described as the most important philosophical question: how should one live? And it is reassuring to find that Bernard Williams, in his Ethics and the Limits of Philosophy (1985: 19), also elucidates Socrates’ question in terms of reasons. I restrict myself to reasons for action partly to avoid the worries about the voluntariness of feelings I mentioned above, which might also apply to reasons to believe or other epistemic reasons, but also because I see Socrates’ question as ultimately practical: what he really wants to know is what he should do.

The reasons in question are ultimate rather than derivative. Consider utilitarianism again. On that view, it may be true of some action ϕ that what makes it the action I have strongest reason to do is that it will maximize utility. But there will be other descriptions of that action available, as is always true of any action. Because of the enjoyment people find in friendship, for example, we can imagine that ϕ‑ing consists in my visiting my friend in hospital. But the fact that this person is my friend is not, on the utilitarian view, an ultimate reason to visit her. I have a derivative reason to visit my friend because, ultimately, it will maximize utility to do so (or perhaps its doing so is part of a general strategy which will maximize utility overall).

4 Disagreement in Practice and Pluralist Views

Our three theories are clearly different from one another, and from other philosophical views on Socrates’ question, such as that of the sceptic. But could the three of them agree substantively, that is, practically? Yes they could, on some conceptions. Consider a form of deontology which requires one to conform to some basic moral rules, such as those I mentioned above, forbidding lying, and so on. A consequentialist might argue that conforming to these rules is required as the best strategy to maximize utility. And an egoist may also require conforming to the rules on the grounds that the sanctions for immorality, such as guilt and legal and social punishment, are so great, and the pleasures of a good conscience so valuable. Or at a more theoretical level we can imagine a deontologist who holds that benevolence is prior to all other moral rules, and that well-being consists in virtue understood as conforming to the basic rule.

It has to be admitted, of course, that such universal practical agreement will be very rare, and that most egoists, consequentialists, and deontologists will frequently disagree with one another about what to do. Nor of course are we getting explanatory agreement, which is what philosophers, as philosophers, are presumably most interested in. This explains why the so-called ‘consequentializing’ project in normative ethics (see e.g. Dreier 2011) does not succeed: not because some deontological claims cannot be consequentialized (though that may perhaps be true), but because a consequentialized deontological theory is no longer a deontological, that is non-consequentialist, theory. The principle ‘keep your promises, because keeping promises is right’ is practically equivalent to ‘do not let it be a consequence of your act that you have broken a promise’, but it is not philosophically, that is explanatorily, equivalent.

But of course one might seek a pluralistic theory which was not explanatorily monistic or exclusive in the way that contemporary theories tend to be. Such a theory would then have the advantage of capturing the plausibility of principles accepted by more than one group of ethical theorists. The attractions of convergence, and the problems of disagreement, appear to be partly what motivated Derek Parfit to develop his so-called ‘Triple Theory’ in the second volume of On What Matters (2011). As Parfit sees it, at least some theorists can view themselves as climbing the same mountain from different sides, in such a way that when they reach the summit they recognize other routes to the top as acceptable as their own.

In the first volume of On What Matters, Parfit develops what he sees as the most plausible version of Kantian contractualism, which may be stated as follows:

KF4: When some act is disallowed by one of the principles whose universal acceptance everyone could rationally will, that makes this act wrong in the senses of being unjustifiable to others, blameworthy, and an act that gives its agent reasons to feel remorse and gives others reasons for indignation. (2011: 1.369)

T.M. Scanlon developed another version of contractualism, using the idea of what it is reasonable to reject. Parfit states this as:

SF4: When some act is disallowed by some principle that no one could reasonably reject, this fact makes this act unjustifiable to others, blameworthy, and an act that gives its agent reasons for remorse and gives others reasons for indignation. (2011: 2.214)

A little later, Parfit offers what he calls his ‘convergence argument’ (2011: 2.444-5). According to this argument, the only principles whose universal acceptance everyone could rationally choose would be the optimific ones. So Kantian Contractualism implies Rule Consequentialism. And since these are the only principles everyone could rationally choose, no one could reasonably reject them, so that Kantian Rule Consequentialism can be combined with Scanlonian Contractualism. This gives him:

The Triple Theory: Everyone ought to follow these optimific principles because these are the only principles whose universal acceptance everyone could rationally choose, and the only principles that no one could reasonably reject.

At this point, I have to confess that none of the elements of the Triple Theory strikes me as especially plausible, because I am inclined to see the reasons we have for particular actions as much more directly linked to the well-being produced by those actions. It may be, that is to say, that the Triple Theory is right substantively, but if so that would be only because following the principles in question would, say, maximize well-being. But my own view is of little significance here. Many in the Kantian tradition have seen morality in terms of rationally universalizable principles, so there may well be potential in seeking agreement between contractualists and consequentialists. Still, it has to be said that rule consequentialism has been far less popular in the consequentialist tradition than standard act consequentialism, and this is probably at least in part because it contains Kantian elements with little direct connection to the advancement of well-being. Consensus in this area might be more likely through attempts like those of R.M. Hare (1993) and David Cummiskey (1996) to combine Kantian positions with act rather than rule consequentialism – and if possible also with egoism. In general, then, what we see in these views is a worthwhile attempt to seek consensus, and this kind of research programme may well be developed further in future.

5 Syncretism

Another strategy might be inspired by a broadly Aristotelian approach to epistemology. Aristotle begins his Metaphysics with the claim that all human beings desire to know, and his dialectical method involves an attempt to find a view which will be acceptable to both the many and the wise (i.e. other philosophers) (see e.g. Nicomachean Ethics 1098b, 1145b). Could we take elements from those theories currently on offer, without having to see those theories as providing ultimate reasons for action or criteria of rightness, in such a way that the result might turn out to be more plausible than any of its various sources?

On one version of such a syncretic position, we will end up with a form of deontology containing either agent-centred options permitting one to give special weight to oneself or a robust virtue of prudence, so that some of the attractions of egoism are incorporated into the account. This may be the point to note that current proponents of so-called ‘virtue ethics’ (e.g. Hursthouse 1999) would be wise to see their view not as an alternative to deontology, but as a version of it (see Crisp 2015). According to such theorists, right actions are those that would be performed by a virtuous person. But this cannot sensibly provide an explanatory account of rightness. A virtuous person will not do an act because it is the kind of act people like her do, but because it is, say, just or generous. And deontology is the view that such actions are right, independently of their effect on overall well-being. It is also worth noting that this syncretic view can fully allow a place for the consequentialist principle, even in its act-utilitarian form, as long as that principle is stated with an ‘other things equal’ clause alongside other principles. The idea that, other things equal, one should maximize overall utility is not only harmless, but almost undeniable. Not acting on it when appropriate would be pointless and wasteful.

Nevertheless, it is unlikely that an act consequentialist will accept such an extreme modification to their position. Further, as I have already noted, they have their own syncretic form of argument, often employed to deal with objections to their position: act consequentialists will not recommend, say, the hanging of innocent people, slavery, torture, great inequality, and so on, because strategies designed to avoid such things are likely to produce the best consequences in the long term. This form of argument, however, is often unpersuasive, since it cannot incorporate the view that hanging an innocent person is wrong because it is unjust, not because it is generally forbidden by an optimific principle.

6 Moral Epistemology and Theoretical Disagreement

I have already made some possibly dispiriting remarks about the prevalence of disagreement in philosophy. So perhaps I should apologize for making another. One common occurrence in philosophy is that disagreements in one area are caused by disagreements in another. This certainly happens in ethics, where epistemological differences increase the chances of differences in normative ethics.

Perhaps the standard view in moral epistemology at present is the kind of reflective equilibrium championed by John Rawls (1999: 18-19, 42-5), in which we are required to seek equilibrium between our moral responses to particular cases and our ethical theories. This view allows epistemic weight to common-sense morality which those of a more foundationalist persuasion may wish to reject. There is a long-recognized problem with strongly coherentist epistemologies – that they involve a kind of bootstrapping, with no belief having credibility in itself but gaining such credibility from its relation to other beliefs. That has led some to seek priority for certain ethical beliefs which are plausible in themselves, and can then lend weight to other beliefs derived from or otherwise dependent on them. This seems to me a reasonable strategy, and it avoids giving weight to common-sense beliefs right from the start of enquiry, when we know that they vary greatly over time and space, are the result of biological and cultural evolution, and often emerge out of dubious power relationships.

One moral philosopher who has offered criteria for assessing such beliefs is Sidgwick, whose four tests any allegedly self-evident intuition must pass if it is to be judged of the ‘highest certainty’ (1907: 338-42). The first three are relatively straightforward:

(I) Clarity. ‘The terms of the proposition must be clear and precise.’

(II) Reflection. ‘The self-evidence of the proposition must be ascertained by careful reflection.’

(III) Consistency. ‘The propositions accepted as self-evident must be mutually consistent.’

The fourth is more tricky, and stated indirectly:

(IV) Non-dissensus.

Since it is implied in the very notion of Truth that it is essentially the same for all minds, the denial by another of a proposition that I have affirmed has a tendency to impair my confidence in its validity.... And it will be easily seen that the absence of such disagreement must remain an indispensable negative condition of the certainty of our beliefs. For if I find any of my judgments, intuitive or inferential, in direct conflict with a judgment of some other mind, there must be error somewhere: and if I have no more reason to suspect error in the other mind than in my own, reflective comparison between the two judgments necessarily reduces me temporarily to a state of neutrality. And though the total result in my mind is not exactly suspense of judgment, but an alternation and conflict between positive affirmation by one act of thought and the neutrality that is the result of another, it is obviously something very different from scientific certitude.

Test (IV) is open to a certain amount of interpretation. But it seems to me that Sidgwick is recommending that if I believe P, and come across someone who holds not-P and whom I have no reason to think epistemically inferior to myself, I should suspend judgement on P for the time being. And whether he is recommending this or not, this form of Pyrrhonist scepticism strikes me as highly plausible. Insisting that you are wrong when I have no reason to think you, in this very case, an epistemic inferior is a form of unreasonable dogmatism.

What are the implications for ethics (see Crisp 2006: 88-97)? On the face of it, they are rather worrying. Sidgwick himself did not apply the fourth test very carefully to his own views, at least in the text of the Methods, though he was fully aware that many of his contemporaries would, on reflection, have rejected both the hedonistic act utilitarianism and the hedonistic egoism which he himself found plausible candidates for self-evidence. At this point, consider what the deontologist W.D. Ross has to say about promising:

If, so far as I can see, I could bring equal amounts of good into being by fulfilling my promise and by helping some one to whom I had made no promise, I should not hesitate to regard the former as my duty…. [and] normally promise-keeping … should come before benevolence. (1930: 18-19)

Many will agree with Ross, against Sidgwick, that the fact that some action is the keeping of a promise is in itself a reason to perform it, and this form of disagreement between consequentialists and deontologists will be replicated across the whole of ethics. And of course egoists will disagree with both groups.

Is normative philosophical ethics, then, left in paralysis?

I think not. Even if all of us, as it seems we are required to do, were to suspend judgement on our basic ethical views, ethical debate could continue. That is, we can continue to report to one another how things appear to us, rather than insisting on our being right while others must be wrong. Such debate would be less adversarial and more constructive than much in philosophy at present. This would have several significant advantages. First, each participant would be more likely to notice the faults in her own position and the advantages in those of others. Second, philosophers would see that there is often greater epistemic benefit in discussing issues with those of radically different views than with some clique of one’s own. Third, the aim of debate would be not the victory of one’s own position but convergence on some truth, which might or might not turn out to be a syncretic conglomeration of various elements from several existing ethical theories. Ethical enquiry must be informed by a spirit of impartiality, in which those who propose normative principles are prepared both to hold up those principles to the light of rational reflection and the arguments of others, actual or imagined, and to look enthusiastically at the views of others, in search of enlightenment rather than dialectical victory. Critical argument, of course, would continue to be the mainstay of moral philosophical discussion, but if it were freed of its unjustified dogmatism there would be a greater likelihood of convergence on the truth. It might even be that philosophy would become more like science, in which researchers work together collaboratively on some common problem.

Consider the following analogy for contemporary philosophical ethics. A group of equally experienced cavers is lost underground in the darkness, and its members want to get out as soon as possible. They share ideas, and it turns out that they all disagree on the best escape strategy. Only one of them, at best, can be right. Now imagine that each decides to spend the time she has available trying hard to persuade her colleagues to agree with her, focusing only on what she sees as the weaknesses of their views and the strengths of her own. If I were lost, I would much prefer to be in a group whose members, though prepared to state their own views as well as possible, were also ready seriously to look for flaws in their own view and advantages in the views of others. Do we, or do we not, seriously want to find our way out of the cave? If so, then we should change the way we currently do moral philosophy.