Computation, Coherence, and Ethical Reasoning
Theories of moral and, more generally, practical reasoning sometimes draw on the notion of coherence. Admirably, Paul Thagard has attempted to give a computationally detailed account of the kind of coherence involved in practical reasoning, claiming that it will help overcome problems in foundationalist approaches to ethics. The arguments herein rebut the role Thagard claims for coherence in practical reasoning. While there are some general lessons to be drawn from these arguments, no attempt is made to argue against all forms of coherence in all contexts, nor is the usefulness of computational modelling called into question. The point is that coherence cannot be as useful in understanding moral reasoning as coherentists may think. This result has clear implications for the future of Machine Ethics, a newly emerging subfield of AI.
Keywords: Coherentism · Ethical reasoning · Foundationalism · Machine ethics · Practical reasoning · Underdetermination · Robot ethics · Unsupervised neural network
Computational modelling and ethical reasoning
As a field, Machine Ethics tends to be thought of as a branch of AI research, and a recently developed subfield at that. In some sense or other, it is concerned with adding an ethical dimension to machine behaviour. Robot ethics is concerned with the ethical dimension of a specific type of machine: robots. In the western philosophical tradition, the study of ethics has been around for over two millennia. Not surprisingly, researchers in AI are availing themselves of developments in the philosophical study of ethics in doing work in Machine Ethics. One result of this research is a feedback effect: in attempting to construct computational models of moral reasoning, insights may emerge about moral reasoning that are of interest to philosophers as well. This should not be surprising. A similar development has taken place in what is sometimes called Computational Epistemology; witness the work of John Pollock and Paul Thagard. This paper is a kind of case study that examines Paul Thagard's approach to modelling practical reasoning. By constructing a computational model, Thagard has provided a more detailed account of coherence-based reasoning than philosophers have traditionally done. The details, however, allow us to see the problems of the coherence-based methodology it is premised on. The insights gleaned by studying this model are of interest to philosophers, AI researchers, and anyone thinking about psychologically realistic models of moral reasoning.
Paul Thagard and Karsten Verbeurgt have developed a detailed account of coherence (which I will call TV coherence) that has applications to a variety of domains: theoretical reason, practical reason, and even sub- or non-propositional forms of cognition. Thagard has applied TV coherence to moral reasoning, and the second part of this paper provides a summary of Thagard's multi-coherence account of moral reasoning. The third part argues that an underdetermination problem undercuts the claim to prescriptive guidance. The fourth part raises concerns about whether TV coherence can underwrite the kind of understanding of reasoning that is required if it is to provide us or machines with prescriptive guidance in ethical reasoning. These points are of interest to both philosophers and AI researchers. As will be shown in part five of the paper, the arguments just mentioned have implications for the theory of moral justification, for attempts to construct an Artificial Moral Intelligence (AMI), and for attempts to construct computational models of human moral reasoning. The point of the paper is not to argue against the possibility of constructing an AMI, whether in robot form or some other form; rather, it is to state some (largely negative) constraints on what could count as a being (natural or artificial) that is morally justified. In other words, these constraints are about what moral justification could not be.
A multi-coherence model of practical reasoning
... given a finite set of elements ei and two disjoint sets, C+ of positive constraints, and C− of negative constraints, where a constraint is a pair of elements (ei, ej) and a weight wij. Denote a partition of the set of elements into two sets, A (accepted) and R (rejected) as (A, R), and the weight of the partition w(A, R) as the sum of the weights of the satisfied constraints, where a constraint is satisfied if either of the following holds:
1: if (ei, ej) is in C+, then ei is in A if and only if ej is in A.
2: if (ei, ej) is in C−, then ei is in A if and only if ej is in R.
Problem: find the partition with the maximum weight. (Millgram, 2000, pp. 83–84)
Given that we are concerned with practical and theoretical reasoning, the elements in question will be things like goals and propositions. The above tells us that there are two kinds of constraints. A positive constraint tells us that two elements should be accepted together or rejected together; a negative constraint tells us that if one element is accepted, then the other should be rejected. The constraints are assigned weights, which amounts to assigning a level of importance to each constraint. A coherence problem becomes the problem of satisfying as many constraints as possible while taking into consideration the level of importance indicated by the weight on each constraint.
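The definition above translates directly into a small program. The following sketch (my own illustration with invented elements and weights, not Thagard's code) enumerates every partition of a tiny element set into accepted and rejected and returns the one that maximizes the summed weight of satisfied constraints:

```python
from itertools import product

# Toy coherence problem. Constraints are (ei, ej, kind, weight), where
# kind is '+' (positive: accept or reject together) or '-' (negative:
# accept one if and only if the other is rejected).
elements = ["e1", "e2", "e3", "e4"]
constraints = [
    ("e1", "e2", "+", 0.5),
    ("e2", "e3", "-", 0.8),
    ("e3", "e4", "+", 0.3),
]

def partition_weight(accepted, constraints):
    """Sum the weights of the satisfied constraints for one partition."""
    w = 0.0
    for ei, ej, kind, weight in constraints:
        same = (ei in accepted) == (ej in accepted)
        if (kind == "+" and same) or (kind == "-" and not same):
            w += weight
    return w

def best_partition(elements, constraints):
    """Brute force: try all 2^n accepted/rejected partitions."""
    best, best_w = None, float("-inf")
    for bits in product([True, False], repeat=len(elements)):
        accepted = {e for e, b in zip(elements, bits) if b}
        w = partition_weight(accepted, constraints)
        if w > best_w:
            best, best_w = accepted, w
    return best, best_w

A, w = best_partition(elements, constraints)
```

The brute-force search makes vivid why the problem grows so quickly: n elements yield 2^n candidate partitions, which is why actual coherence programs use connectionist approximation algorithms instead.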
To understand Thagard’s multi-coherence account of moral reasoning, we need to examine each of the four kinds of coherence involved, starting with explanatory coherence. Thagard views theory selection in science as a coherence problem. Very roughly, the elements in this type of coherence problem consist of (a) the propositions making up the competing theories and (b) the propositions expressing the evidential support for the theory. Propositional elements are understood to correspond to the units of a neural network; positive constraints correspond to excitatory links; negative constraints to inhibitory links; an accepted element corresponds to a positively activated unit, and a rejected element corresponds to a negatively activated unit. ECHO is a program that has been used to model a variety of theory conflicts from the history of science, for example, Ptolemaic versus Copernican views of astronomy, phlogiston versus oxygen chemistry, Darwinian evolution versus creationism, and a variety of others (Nowak & Thagard, 1992a, b; Thagard, 1992a, b, 2000). In every case, using the same parameter values, ECHO settles on the correct theory. Thagard conceives of theory selection in science as inference to the best explanation, and inference to the best explanation is understood as maximizing global coherence. A brief example is in order.
The reason for including explanatory coherence in the account of moral reasoning is that some normative principles are tied to empirical claims. For example, the general principle that capital punishment is acceptable may be argued to depend on the deterrent effect that it has. But whether capital punishment has a deterrent effect is a largely empirical question. Just as ECHO can be used to model conflicting hypotheses from the history of science, it can be used to model conflicting hypotheses regarding the ability or inability of capital punishment to deter.
I have coined a new term to describe an approach that is intended to be both descriptive and prescriptive (normative). I shall say that a model is “biscriptive” if it describes how people make inferences when they are in accord with the best practices compatible with their cognitive capacities. Unlike a purely prescriptive approach, a biscriptive approach is intimately related to human performance. But unlike a purely descriptive approach, biscriptive models can be used to criticize and improve human performance. (Thagard, 1992b, p. 97)
Both the coherence approach to modelling reasoning and the biscriptive methodology are applied to Thagard’s attempt to model practical reasoning.
Deductive coherence is about finding a reasonable fit between general principles and particular judgements. ECHO can be set up to take general principles, empirical hypotheses, and particular judgements as a set of elements having various constraints in an attempt to maximize explanatory coherence. Inconsistencies yield negative constraints. ECHO can be run on a body of principles and judgements, attempting to yield as good a fit as possible between principles and judgements.
Deliberative coherence is about finding a reasonable fit between judgements and goals. Paul Thagard and Elijah Millgram (1995) have developed a coherence theory of decision-making that takes as its elements actions and goals. (See also Millgram & Thagard, 1996). The primary positive constraint between these elements is facilitation, and the negative constraint is the incompatibility of an action with a goal. For example, the goal of saving tax dollars may be facilitated by capital punishment, whereas imprisoning individuals for decades may not facilitate that goal. DECO is a program created by Thagard and Millgram that attempts to solve for coherence given goals and actions as elements. Intrinsic goals are given a kind of defeasible priority in a manner not unlike the way data are given priority in explanatory coherence.
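The DECO-style treatment of facilitation, incompatibility, and goal priority can be sketched with the same brute-force machinery. The following is my own toy illustration with invented weights, not Thagard and Millgram's program; the clamped SPECIAL unit mimics the defeasible priority given to intrinsic goals:

```python
from itertools import product

# Toy deliberative-coherence problem. Elements are actions and goals;
# facilitation yields a positive constraint, action-goal incompatibility
# a negative one. All weights are invented for illustration.
elements = ["SPECIAL", "capital_punishment", "long_imprisonment",
            "save_tax_dollars"]
constraints = [
    ("capital_punishment", "save_tax_dollars", "+", 0.5),   # facilitation
    ("long_imprisonment", "save_tax_dollars", "-", 0.4),    # incompatibility
    ("capital_punishment", "long_imprisonment", "-", 0.3),  # rival actions
    ("SPECIAL", "save_tax_dollars", "+", 0.2),  # intrinsic-goal priority
]

def coherence(accepted):
    """Summed weight of the satisfied constraints for one partition."""
    total = 0.0
    for ei, ej, kind, w in constraints:
        same = (ei in accepted) == (ej in accepted)
        if (kind == "+" and same) or (kind == "-" and not same):
            total += w
    return total

best, best_w = None, float("-inf")
for bits in product([True, False], repeat=len(elements)):
    accepted = {e for e, b in zip(elements, bits) if b}
    if "SPECIAL" not in accepted:   # SPECIAL is clamped: always accepted
        continue
    w = coherence(accepted)
    if w > best_w:
        best, best_w = accepted, w
```

On this toy problem, the maximally coherent partition accepts capital punishment together with the tax-saving goal and rejects long imprisonment, mirroring the facilitation relations described above.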
Analogical coherence is about finding a reasonable fit between judgements of some cases with the judgements of other cases. Interlocutors sometimes appeal to an agreed upon case (the source) to argue that some disputed case (the target) should be treated in the same way. Keith Holyoak and Paul Thagard (1995) have developed a coherence approach to determining the strength (or lack thereof) of analogical correspondence between two cases. The program that implements their approach to analogical mapping is called ACME, and it takes as its elements hypotheses about what features of the source and target cases correspond to one another.
Underdetermination and prescriptive guidance
the two major problems of foundationalist approaches to ethics and epistemology. The first problem is that, for epistemology as for ethics, no one has ever been able to find a set of foundations that even come close to receiving general assent.... The second problem is that proposed foundations are rarely substantial enough to support an attractive epistemic or ethical edifice, so that foundationalism degenerates into scepticism.
I will argue that these claims are, at best, premature.
My response is first to point out that the extreme versions of both these approaches [Kantian and utilitarian] have familiar incoherences with most people’s ethical judgements, and second to point to the multifarious nature of actual ethical arguments that embrace different kinds of ethical concerns, including both Kantian and utilitarian ones. I do not have an algorithm for establishing the weights on people’s constraints, only the hope that, once discussion establishes a common set of constraints, coherence algorithms can yield consensus.
While these are useful observations, they do not help his theory of ethical coherence overcome the two traditional problems of foundationalism in ethics. The most that follows from the considerations Thagard marshals is that some weight must be given to both Kantian and utilitarian types of considerations. What is not clear is how much weight must be assigned to each of the four types of coherence in calculating overall coherence. Moreover, even if discussion should yield agreement on which types of coherence should be involved in moral reasoning, and even if that discussion should yield agreement on a common set of correct answers in past moral disagreements, all that agreement may still underdetermine the weights to be assigned to the different types of coherence in the computation of overall coherence. This requires further explanation.
ECHO is used to establish links for explanatory and deductive coherence; DECO is used to establish links for deliberative coherence, and ACME is used to establish links for analogical coherence. There are weights associated with all these links. Even if we get agreement that all four types of coherence are involved in moral reasoning, we still have to decide whether the weights for some types of coherence should be greater or lesser than the weights for other types of coherence. That will not be an easy matter to settle even if there is agreement on how past disputes are to be resolved. The reason is that given a set of resolved moral disputes, there may be more than one way to set the weights so as to model those resolutions—the cases may underdetermine the selection of weights. One set of weights may favour, say, analogical coherence to a greater degree than another set of weights, but both sets may be in agreement on past cases. However, in future cases, the different sets of weights may lead to different results on a new case. This gives rise to two problems. First, it is not clear how effective past cases can be in helping to determine the weighting of different types of coherence. Second, and as a consequence of the preceding, the claim to prescriptive guidance is mitigated since it is not clear how useful a multi-coherence account of moral reasoning can be when wedded to a biscriptive methodology that commits it to modelling past reasoning in a way that may underprescribe future results. Finally, even if there is agreement on the relative importance of the four types of coherence with respect to one another, the underdetermination argument still applies since there are many different ways to set weights even if there is agreement on the relative importance of different types of coherence. 
For example, saying that deductive coherence is twice as important as analogical coherence tells us that the weights for the positive explanatory links will be twice as high as the weights for the positive analogical links, but there are still many possible sets of values for these weights.
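One way to see this point concretely: multiplying every weight by the same positive constant preserves every relative-importance ratio and leaves the winning partition unchanged, so indefinitely many numerically distinct weight assignments encode exactly the same verdicts on past cases. A small sketch (toy elements and invented weights, my own illustration):

```python
from itertools import product

elements = ["deductive", "analogical", "goal"]
# (ei, ej, kind, weight): '+' = accept together, '-' = accept one,
# reject the other. Weights are invented for illustration.
base = [("deductive", "goal", "+", 0.6),
        ("analogical", "goal", "-", 0.2)]

def best_partition(constraints):
    """Return the accepted set maximizing summed satisfied-constraint weight."""
    def weight(acc):
        w = 0.0
        for ei, ej, kind, wt in constraints:
            same = (ei in acc) == (ej in acc)
            if (kind == "+" and same) or (kind == "-" and not same):
                w += wt
        return w
    return max(
        ({e for e, b in zip(elements, bits) if b}
         for bits in product([True, False], repeat=len(elements))),
        key=weight,
    )

# Two numerically very different weight sets with the same ratios:
scaled = [(ei, ej, kind, 10 * wt) for ei, ej, kind, wt in base]
print(best_partition(base) == best_partition(scaled))  # True
```

Agreement on past verdicts therefore cannot, by itself, pin down the weights; and nothing guarantees that two weight sets agreeing on past cases must agree on a new one.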
The following objections might be made: (i) it has not been proven that different sets of weights will be compatible with a set of agreed-upon resolutions to past cases, and (ii) even if the preceding were proved, a further point would require proving: that the different sets of weights would diverge in the future. All that the above shows is that underdetermination is a logical possibility, not that it will pose a real problem. There are at least two ways to respond to this type of objection. The first response would consist in doing a mathematical proof to show that, given a past set of cases, what is demanded by (i) and (ii) can be supplied. The second response would consist in taking a set of agreed-upon past cases and running the neural net simulator supplied by Thagard to supply existence proofs of the sets of weights required by (i) and (ii). Notice that both of these strategies require an agreed-upon set of successfully resolved moral disputes, something Thagard has not supplied. This makes it difficult to meet the demands presented in (i) and (ii). However, it can be shown that different sets of weights in ECHO can come to the correct answer regarding scientific disputes ranging from the time of Ptolemy to the 19th century, but that not all of these weights will give exactly the same answer in a 20th-century case. Moreover, there is empirical evidence in cognitive science that neural nets are very powerful tools, and if a neural net can solve a problem with one set of weights, then there is often more than one set of weights to solve that problem. These considerations suggest that it is reasonable to believe that the demand placed on us by (i) can be satisfied. The demand placed on us by (ii) is a little trickier. It is hard to prove that weights will diverge on future cases if you do not know what those cases are.
Still, the fact that ECHO can use different sets of weights to get the right answer on very complex disputes such as Copernicus versus Ptolemy, Newton versus Descartes, Lavoisier versus his opponents, and so on, and still get different answers with respect to a 20th-century problem gives us some reason for concern. ECHO is used to construct the links for explanatory and deductive coherence in the multi-coherence account of moral reasoning. The way DECO and ACME establish weights for deliberative and analogical coherence is similar to ECHO, and the algorithm used by DECO and ACME to settle a neural net is identical to ECHO's; this is what makes it possible to combine them to construct the kind of net seen in Fig. 2 or to construct other networks with a different arrangement of elements. These similarities between ECHO, DECO, and ACME suggest that we should not be surprised if underdetermination problems generated for ECHO reappear when ECHO is used as part of a multi-coherence model of reasoning, where the other parts are algorithmically of the same kind as ECHO.
The problem of argumentative access
1: When applied to practical reasoning, TV coherence requires that the justifiability of one set of views, B1, over another set, B2, is derived from the relative coherence of B1 when in competition with B2.
2: In order to argue that B1 is more justified than B2, we would have to be able to give reasons to believe that B1 is more coherent than B2.
3: We do not, in general, have argumentative access to the relative coherence ratings of B1 and B2. In other words, we cannot, in general, give reasons to believe B1 is more coherent than B2.
Therefore, we cannot, in general, use TV coherence to argue that one set of views in practical reasoning is better justified than another.
The number of views expressed in B1 and B2 need not be very large, so the Bi in question need not state all the moral views an individual may possess. As we will see, even if the number of competing views under consideration is limited—such as the circumscribed set found in Thagard’s Bernardo example—the above argument has force.
The multi-coherence account of moral reasoning makes use of four different kinds of reasoning, each of which contributes to overall coherence. Justifiability is a function of coherence. However, it must be kept in mind that justification in terms of coherence is a relative matter. Thagard and Verbeurgt are careful to warn us that they do not have an absolute notion of coherence on offer. In other words, it is not possible on their theory of coherence to say something like, ‘As long as the coherence rating for belief set B is above some value v, then it is justified.’ Justification in terms of TV coherence is always justification in terms of the relative coherence of one set of elements when compared to another (possibly overlapping but not identical) set of elements.
Premise one states a relatively straightforward commitment of TV coherence when applied to moral reasoning. Premises two and three are not explicit commitments of the TV theory of coherence, so they will need greater explanation and defence. As we saw earlier, Thagard suggests that his multi-coherence account of moral reasoning can overcome some of the traditional problems of foundationalist ethics and provide normative guidance. I suggest that if TV coherence is to guide us in rational discourse (or even monolectical reasoning) about moral issues, we must have some reason for believing that one set of beliefs is more coherent than another, since justifiability is defined in terms of coherence. If we are not required to have some reason to believe that one set of beliefs is more coherent than another, it is hard to see how the multi-coherence theory of moral reasoning can provide guidance to interlocutors engaged in a disagreement or to an individual who is torn between different views.
With respect to the third premise, consider the network of elements in Fig. 2. This is a grossly oversimplified collection of considerations that may go into the assessment of whether or not Bernardo should be executed. In spite of that, there are 2,048 (that is, 2^11) possible ways to partition this collection of beliefs into accepted and rejected elements. The weights on the links between elements will be real numbers varying between −1 and 1. There are a total of 12 constraints, 4 negative and 8 positive. To test one way to partition this set of elements, many acts of multiplication and addition would have to be carried out. A brute-force approach would require that this process be carried out at least 2,048 times (assuming activation values of 1 or −1, which is very conservative given that activation is real-number valued). It is beyond the ability of most people to consciously carry out these operations and keep track of them. Connectionist algorithms are a useful approximation tool, but it is beyond the ability of most people to consciously make use of a connectionist algorithm. Once again, consider Fig. 2, and take each element to be represented by a neuron or unit. The activation value of each unit j, represented by aj, can range between −1 (rejected) and 1 (accepted) and is calculated as a function of the old value, ai, of every unit i linked to j. The following equation is used to carry out the calculations.
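Thagard's coherence programs settle their networks with a Grossberg-style updating rule. The following reconstruction is a sketch of the general form of such rules rather than a transcription of Thagard's exact equation (the decay parameter d and the limits max = 1 and min = −1 are the standard choices):

aj(t + 1) = aj(t)(1 − d) + netj(max − aj(t))   if netj > 0
aj(t + 1) = aj(t)(1 − d) + netj(aj(t) − min)   otherwise

where netj = Σi wij ai(t) is the net input to unit j from every unit i linked to it, d is a small decay parameter that pulls activations toward zero, max = 1, and min = −1.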
These computations would have to be carried out for every unit in a settling cycle, and dozens of cycles may be required to settle the network. The point here is that the average person cannot consciously carry out these computations. Premise three refers to our inability to have argumentative access to coherence ratings. By that I mean that when we make an argument to others (or ourselves) about coherence ratings, we will not be able to defend the view that one set of views has a greater coherence rating than another. It will not do to simply say that B1 feels more coherent than B2, since someone might counter that B2 feels more coherent than B1. A more compelling reason might be that the relative coherence rating of one set of beliefs is higher than the other, but most agents will not have argumentative access to those ratings.
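To make the scale of these computations concrete, here is a minimal Python sketch of a settling loop of this Grossberg-style kind. The three-unit network, its weights, and the decay value are my own toy assumptions, not Thagard's Bernardo network:

```python
# Minimal sketch of a connectionist settling loop of the kind used by
# ECHO-style coherence programs. The network below is a toy example
# (three units, hand-picked weights), not Thagard's actual model.

DECAY = 0.05            # decay parameter d (assumed value)
A_MAX, A_MIN = 1.0, -1.0

# Symmetric link weights: positive = excitatory, negative = inhibitory.
# Unit 0 plays the role of a clamped "special" (evidence) unit.
weights = {
    (0, 1): 0.4,        # evidence supports element 1
    (1, 2): -0.6,       # elements 1 and 2 are incompatible
}

def net_input(j, act):
    """Sum of w_ij * a_i over all units i linked to j."""
    total = 0.0
    for (i, k), w in weights.items():
        if i == j:
            total += w * act[k]
        elif k == j:
            total += w * act[i]
    return total

def settle(act, cycles=200):
    """Synchronously update every unit until activations stop changing."""
    for _ in range(cycles):
        new = {}
        for j in act:
            if j == 0:
                new[j] = 1.0    # evidence unit is clamped on
                continue
            n = net_input(j, act)
            if n > 0:
                a = act[j] * (1 - DECAY) + n * (A_MAX - act[j])
            else:
                a = act[j] * (1 - DECAY) + n * (act[j] - A_MIN)
            new[j] = max(A_MIN, min(A_MAX, a))
        if all(abs(new[j] - act[j]) < 1e-4 for j in act):
            return new
        act = new
    return act

act = settle({0: 1.0, 1: 0.01, 2: 0.01})
print(act)  # element 1 settles positive (accepted), element 2 negative
```

Even this three-unit toy requires several multiplications and additions per unit per cycle over dozens of cycles; scaled up to the Fig. 2 network, the point in the text stands: no one consciously tracks such a computation.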
Objections and replies
At this point, it might be objected that it is simply enough for neural nets in the brain to settle on the most coherent of the competing sets of propositions. The individuals to whom that happens are as justified as they can be, or so the objection might go. In his critique of Thagard on theoretical reason, Millgram points out that coherence is something that is supposed to be useful. I agree, and so does Thagard: coherence is supposed to provide us with prescriptive guidance. If we do not have argumentative access to coherence, it is difficult to see how we can make justificatory use of it in reasoning or argument. Without that ability, it is difficult to see how it can provide prescriptive guidance. If someone were prepared to say, in spite of the preceding, that all that is generally required to be justified is that an individual settles on the most coherent set of elements, even if the individual does not have reason to believe it is the most coherent of the available alternatives, there is still a problem. The point of the three cases that follow is to show that, in general, the coherentist should require reasons for believing a set of views has greater relative coherence than another if coherence is to play a justificatory role.
Case 1: Huey is a Total Turing Test passing android with conscious states who settles on B1 over B2, and he believes that he has the ability to select the most coherent set of elements when presented with different sets to choose from. He does so intuitively. In other words, he does not consciously carry out calculations when selecting one set of views over another. For the purpose of this example, assume that Huey is a good intuitive judge of coherence, and assume that B1 is more coherent than B2. However, while Huey believes he is a good judge of relative coherence, he has no reason to believe he is a good judge. In fact, he has reason to believe that he does not have the power to select the more coherent of two sets of views. To see how this could come to pass, imagine that he participates in some psychological tests designed to test for sensitivity to relative coherence, and when he is debriefed, he is given the incorrect results. Perhaps the lab assistant performing the debriefing was working from the wrong set of notes; further, assume that the lab assistant has years of experience and has been a perfectly reliable debriefer until this mishap. Huey knows of the assistant's excellent reputation when she informs him that he performed quite poorly and that he is not a good judge of relative coherence. In fact, he is told that he is almost always wrong; the set of beliefs he thinks is more coherent is usually less coherent. Huey ignores (out of stubbornness, not because he has any opposing evidence) what he is told during debriefing, continues to believe that he is a good judge of relative coherence, and opts for B1 over B2 on his intuitive sense that B1 seems more coherent than B2. Remember: B1 really is more coherent than B2, and Huey really is a good judge of relative coherence. In spite of the preceding, Huey does not appear to be justified in accepting B1 over B2.
Case 2: B1 is more coherent than B2; Louie is a Total Turing Test passing android with conscious states who accepts B1 over B2; Louie is a good intuitive judge of relative coherence, but Louie has no reason to believe that he is a good judge in general or that B1 is more coherent than B2. He has never inquired into whether he is a good judge of relative coherence. In this case, we will stipulate that there is an easy way for Louie to find out that B1 is more coherent than B2 (perhaps the computations are simple enough to do in his head), but he refuses to do them. Louie does not appear to be any more justified in accepting B1 over B2 than Huey. Together, Cases 1 and 2 suggest that if coherence is to do justificatory work, some reason is needed to think that B1 is more coherent than B2. Without such reasons, the behaviour of Huey and Louie just seems irresponsible.
Case 3: B1 is more coherent than B2; Dewey is a Total Turing Test passing android with conscious states who accepts B1 over B2; Dewey is a good intuitive judge of relative coherence, but Dewey has no reason to believe that he is a good judge in general or that B1 is more coherent than B2. He has never inquired into whether he is a good judge of relative coherence. If he is asked why he selects B1 over B2, he simply shrugs and says that B1 feels like the better view. In this case, unlike in Case 2, we will stipulate that reasons for believing that B1 is more coherent than B2 are not readily available: the calculations are too complex to carry out in one's head, and there is no time to do the calculations by hand.
One possible response to Case 3 is to say that Dewey, unlike Huey and Louie, is justified in making his selection. One way to argue for this point is to claim we ought not to require more from agents than is humanly possible. If Dewey is a good judge of relative coherence, and it is not within his (or any human’s) powers to consciously work out the relative coherence problem at issue, then we cannot expect him to have reason to believe that B1 is more coherent than B2. One possible concern with the preceding response is that it begs the question against a limited form of scepticism that claims that when there is reason to believe that one view is more coherent than another, then we can have prescriptive guidance and justification; when there is no such reason, then there is no prescriptive guidance or justification. While this reply has some force, I am suspicious of scepticism that is premised on the demand that we perform cognitive tasks involving memory or computation that are beyond our abilities. Without supporting argument, such scepticism strikes me as poorly motivated (since abandoning the superhuman cognitive demands in favour of human demands seems at least as plausible as scepticism). However, it is not my intent to suggest that scepticism can be dismissed easily. Indeed, forms of scepticism that are not based on transcending human cognitive powers (pertaining to memory and computation) need to be taken seriously, but that topic is beyond the scope of this paper.
I think there is a deeper concern with the view that Dewey can be seen as justified in situations where obtaining argumentative access to coherence exceeds human abilities. Cases 1 and 2 suggest that if relative coherence is to play a justificatory role, then there is a general obligation to have argumentative access to that coherence. Even counterfactual analysis of Case 3 carries that suggestion: if it were possible for Dewey to do the required calculations to solve for relative coherence, would he? Should he? If he would not, then Case 3 can be treated like Case 2. The only thing that makes these two cases relevantly different is the stipulation that in Case 3, no human or android could perform the relevant calculations in the required time. If that difference is eliminated, then Case 3 collapses into Case 2. If relative coherence plays a justificatory role, and if Dewey could obtain access to such coherence, then he should. If someone were to concede that argumentative access to coherence is required in Cases 1 and 2 but not in (the actual) Case 3, that would lead to the view that while there is a general obligation to have access to coherence, that obligation is frequently, and perhaps usually, defeated by the limits of human cognitive powers. I have argued that in many cases, access to the required coherence is not to be had; consequently, circumstances like those found in Case 3 will be commonplace. In short, it will frequently, if not usually, be the case that the required coherence will provide no prescriptive guidance to human practical reasoners and androids with similar cognitive powers. This is a high price to pay. Cases 1 and 2, together with the counterfactual Case 3, suggest that a kind of reflective access is, in general, a requirement of a coherence theory of practical reasoning. It is one thing to say that, on occasion, the general requirement may be defeated.
However, to say that the requirement is frequently or even usually defeated seems to do great violence to our intuitions pertaining to responsible reasoning. While I have been arguing about the limits of TV coherence, the lessons to be learned (here and below) are more general since the arguments in this section turn largely on the requirement for argumentative access and the problems that arise when a theory of coherence makes it impossible to obtain argumentative access. In other words, a constraint on an adequate theory of coherence in practical reasoning (whether done by humans or machines) is that it does not make argumentative access impossible.
Another objection to my position might go as follows: so we need to have some reason to believe that one set of views is more coherent than another if coherence is to do justificatory work; but look, it is easy to compute coherence with a computer, so what is the problem? Well, it is just this: the biscriptive methodology is supposed to capture how we reason when we reason at our best, and as a general rule, many good practical reasoners do not carry around computers to help them compute the coherence of positions they are considering. Part of what the biscriptive methodology is supposed to capture is what we have historically regarded as good reasoning, and much of that reasoning in the moral domain has not required external computational aids.
... the mind must proceed more sporadically, alternating between focusing on one kind of coherence and another. Instead of systematically identifying different kinds of constraints, people focus for a while on a particular kind of coherence such as the deductive fit between principles and judgments, then shift to other kinds of coherence such as deliberative. (Thagard, 1998, p. 414.)
A meta-ethical externalist might take issue with most of the above and insist that we simply do not need to be consciously aware of coherence values. As long as an agent settles on the most coherent set of beliefs given the available alternatives, then that agent is justified (even if the agent is not consciously aware of the coherence values). To such an individual, the thought experiments and arguments presented above would be unconvincing. The reply to such an unconstrained form of externalism is that meta-ethics has been concerned with (among other things) the issue of how (and if) it is possible to resolve normative disputes. The issue of which set of views is morally justified often arises precisely when there is disagreement between agents or confusion within an agent. When there is such disagreement, we seek to publicly or consciously articulate the reasons for our views. Perhaps there are contexts where such public or conscious articulation is not required,8 but there are many contexts where it is, and it is those contexts with which this paper is concerned.9 If we succeed in building artificially intelligent moral agents, they too could be party to disagreements or confusions, so the issue of articulating reasons will arise for them as well. Indeed, the issue appears especially pressing in practical reasoning.
While the problem of argumentative access has been used to show that there is an important class of problems in practical reasoning where TV coherence cannot be used to provide guidance, argumentative access has not been used to show that TV coherence is totally useless in accounting for anything that is going on in practical reasoning. Millgram (2000, pp. 84–85) gives an interesting example of a simple constraint satisfaction problem that many people could solve by performing simple computations in their heads. In other words, there are some problems for which we do have argumentative access to relative coherence ratings. The view I have attempted to defend in this paper is that TV coherence will not provide the level of prescriptive guidance that Thagard suggests: it will not help us to overcome the traditional problems of foundationalist approaches to ethics since argumentative access to relative coherence ratings is required if TV coherence is to be prescriptively useful, and there are conflicts between competing sets of views where we do not have such access. I also argued that using external computational aids will not usually provide argumentative access of the required type since biscriptive methodology requires that we account for what has traditionally been thought to be good reasoning. If we are impressed with someone’s ability to engage in reasoning about abortion, capital punishment, euthanasia, war, and the like, it is not usually because that individual wields a mean calculator. To be sure, these and other subject matters pertaining to action may require that we carry out computations, but we often recognize that people can reason well about these subjects without the aid of a computational device. Moreover, when we think a computational device is required, it is not usually for the purpose of computing overall coherence. 
For example, if someone is trying to defend (or criticize) the use of a certain weapon in war by claiming that it will not cause too much damage (or that it will), complex calculations may be required to determine the expected effects of the weapon, but this is not a computation of TV coherence. What has been claimed is that we do not generally find people to be deficient in practical reasoning because they do not have a computational aid to solve for TV coherence. There are important instances of practical reasoning where we do not insist on the use of computational aids to solve for relative coherence, and when the TV coherence approach is applied to these problems, such aids end up being required—a problematic result.
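To make concrete what "solving for TV coherence" involves, recall that Thagard characterizes coherence as constraint satisfaction: elements are partitioned into accepted and rejected sets, a positive constraint is satisfied when its two elements have the same status, a negative constraint when they differ, and the coherence of a partition is the sum of the weights of its satisfied constraints. The following is a minimal sketch of that computation by exhaustive search over partitions; the element labels and weights are hypothetical, and Thagard's own models use connectionist approximation rather than brute force, since the number of partitions grows as 2^n.

```python
from itertools import product

def coherence(accepted, positive, negative):
    """Sum the weights of satisfied constraints for a given accepted set.

    A positive constraint (e1, e2, w) is satisfied when e1 and e2 have
    the same status (both accepted or both rejected); a negative
    constraint is satisfied when they have opposite status.
    """
    total = 0.0
    for e1, e2, w in positive:
        if (e1 in accepted) == (e2 in accepted):
            total += w
    for e1, e2, w in negative:
        if (e1 in accepted) != (e2 in accepted):
            total += w
    return total

def best_partition(elements, positive, negative):
    """Exhaustively search all 2^n accept/reject partitions."""
    best, best_score = None, float("-inf")
    for bits in product([True, False], repeat=len(elements)):
        accepted = {e for e, b in zip(elements, bits) if b}
        score = coherence(accepted, positive, negative)
        if score > best_score:
            best, best_score = accepted, score
    return best, best_score

# Hypothetical toy network; labels and weights are illustrative only.
elements = ["CP-wrong", "execute-PB", "CP-deters"]
positive = [("CP-deters", "execute-PB", 1.0)]
negative = [("CP-wrong", "execute-PB", 2.0)]
accepted, score = best_partition(elements, positive, negative)
```

Even this toy network requires checking eight partitions; a network with thirty contested nodes would require over a billion, which is part of why unaided human reasoners cannot plausibly be asked to solve for overall TV coherence.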
The conservativeness complaint
It might be objected that the arguments presented above are premised on an unacceptable form of conservatism.10 It was assumed that part of what we want a theory of moral reasoning to capture is much of what we have traditionally considered acceptable moral reasoning. Perhaps this is a mistake. For example, perhaps it is perfectly acceptable to say that humans ought to make use of computers in arriving at relative coherence values, and if we do not do so, then we have not reasoned well. If such a view leads to the conclusion that most moral reasoning has been inadequate because it has not made use of detailed coherence calculations, then so be it. Moreover, it may be possible to build an AMI that can compute the required coherence values, and that AMI may then be able to present an argument that one set of views is superior to another based on the coherence values. Indeed, perhaps AMIs will be superior to humans precisely because they will be able to generate arguments based on detailed coherence calculations. Let us see why the preceding line of thought is unconvincing.
Consider the following claim:
Moral agent S (whether natural or artificial) is justified in believing one set of first order beliefs B1 over another set of first order beliefs B2 if and only if both (a) B1 has a higher coherence rating than B2, and (b) S has computed the coherence ratings of the Bi and recognizes that the rating of B1 is higher than that of B2.
P1 is sufficiently general that it captures both (i) the idea that a human working with a calculator to compute TV coherence could be justified while a human without such a calculator would not, and (ii) the idea that an android with greater conscious computational prowess than our own could be justified while a human who could not compute the appropriate TV coherence values would not.
A reply to the conservativeness complaint
If P1 is asserted sincerely, then how is it justified? Is it justified because it coheres well with the other views we hold? This seems implausible, since most (including coherentists) hold that some people have engaged in acceptable moral reasoning even though they have not satisfied P1. Imagine that one day you are having a moral conversation with a friend whose intellectual abilities you greatly respect. They persuade you of some position on an important moral issue. You both go home, and your friend works out the TV coherence values of the positions you were discussing that day. When you get together the next day, they inform you that they did the calculations. Do you now say, “Ahh, now you are justified; yesterday you were not”? No, but this is what P1 would require. Clearly, adding P1 to current beliefs will not increase coherence since P1 contradicts the view that people have reasoned well even if they have not (consciously) recognized that the coherence rating of one set of beliefs is higher than the ratings of other sets. What if we considered a set of beliefs that (i) included most of our current beliefs but excluded any beliefs pertaining to actual examples of what counts as justified or unjustified reasoning, and (ii) included P1? Perhaps this hypothetical set of beliefs might come out more coherent than our current set of beliefs. Perhaps.11 It is hard to say without a detailed model that would allow us to carry out calculations. Still, there is a worry: the preceding position assumes that coherence is valuable in reasoning and abandons a common methodology for evaluating claims about what is valuable in reasoning. It is a commonplace methodology to test theories of reasoning against highly intuitive examples of what we consider good or bad reasoning. 
The position just considered precludes such a method of evaluation, since it tries to secure an increased contribution to coherence for P1 by subtracting from our belief set all the cases that could be used to evaluate the alleged importance of coherence.
In reply to the above concerns, it might be claimed that P1 is foundational and thus not in need of any inferential justification. Of course, someone subscribing to P1 clearly has coherentist sympathies, so it would be odd of such an individual to claim that P1 is foundational, but oddity does not make a position false. In reply to such a foundationalist move, consider this claim:
Moral agents (natural or artificial) ought not to kill people simply for entertainment.
P2 appears more plausible than P1 as a foundational claim. If a moral theory or a general theory of reasoning, coherence-based or otherwise, asserted the negation of P2, we would likely reject that theory of reasoning before rejecting P2. I take this as evidence that P2 has more of a claim to being foundational than P1, but P1 precludes treating P2 as foundational.
Insisting on P1 appears to be problematic whether we seek its justification from a coherentist perspective or a foundationalist perspective.12 This is so whether (a) we are engaged in trying to better understand moral reasoning as humans do it, or (b) we are thinking about how an AMI might be built. The arguments above raise concerns for P1, but those concerns apply to rational agents in general (whether natural or artificial). Moving forward, it would appear that neither a better understanding of moral reasoning in general nor attempts to build an AMI should be concerned with satisfying P1.13
The goal of this paper is not to suggest that computational models will have no role to play in informing accounts of practical reasoning. On the contrary, for those of us who expect prescriptively adequate views not to exceed what is humanly possible with respect to various cognitive functions (involving memory and computation), psychologically accurate computational models might prove quite informative in constraining the demands of a prescriptive theory. Moreover, a well-worked-out theory about how we ought to reason should inform any attempt to build or assess an AMI. Millgram urged that the price of the ticket for invoking relative coherence in a theory of reasoning is that the notion of coherence be worked out in sufficient detail for it to be useful in assessing which of the positions in question is more coherent. This paper is an attempt to show that when TV coherence is applied to practical reasoning in a manner consistent with biscriptive methodology, the price, in many cases, is more than we can afford.
Figure 1 grossly oversimplifies Thagard’s account of explanatory coherence since it does not consider the role that is granted to simplicity, unification, and other factors.
Thagard frequently abbreviates in presenting constraint networks. For example, strictly speaking, “Paul Bernardo should not be executed” cannot be deduced from “Capital punishment is wrong”, which are two of the propositions in his constraint network. The proposition “Executing Paul Bernardo is an instance of capital punishment” needs to be added in order for the deduction to go through. The last proposition is not in the constraint network; I take this to be an abbreviation. Except where noted, I will follow this strategy myself, often speaking of entailments and assuming that it is understood that propositions need to be added for the deductions to hold. However, it should be kept in mind that Thagard is claiming to model deductive coherence.
Thagard suggests that deliberative coherence, with the goal of promoting the overall good, can be used to capture utilitarian concerns, and deductive coherence can be used to capture roughly Kantian concerns.
By use of the term “views” I am including all elements involved in practical reasoning, including not only beliefs but goals as well.
I am assuming that the evidence node in the network is an abbreviation and that there must be at least one piece of evidence in favour of capital punishment being a deterrent, and one further piece of evidence in favour of capital punishment not being a deterrent. A further abbreviation is that the special evidence unit (which would be connected to each unit representing an evidence statement) is omitted. Since the SEU has its value clamped at 1 and never changes during a simulation, its value is never at issue, and I omit it in calculations of the number of possible partitions. I consider only nodes whose values are at issue in determining the number of possible partitions.
The cases are inspired by chapter three of Laurence BonJour’s The Structure of Empirical Knowledge.
In empirical matters, we often defer to someone else’s expertise. Jasmine is a physics major and has performed impeccable experiments in a lab to determine the speed of light in different media. Jane may then acquire beliefs about the speed of light based on Jasmine’s testimony. Jane may be justified in her beliefs in some deferential sense. It is not that she has conducted any experiments or can consciously muster any direct (or non-deferential) evidence to support her views; rather, she defers to the judgement of someone who can, or at least at some point, has acquired the direct evidence. Perhaps something like this is the case in moral reasoning. Jane may defer to Jasmine’s judgement on some moral issue or set of moral issues, which she knows Jasmine has thought a lot about (and we will assume that Jasmine is an excellent moral reasoner). Perhaps Jane can acquire some sort of deferential justification in this way even though she cannot give direct reasons for why she holds some specific view. However, not all justification can be indefinitely deferred. At some point, there needs to be someone who can offer direct reasons and acquires justification in a non-deferential sense. It is this non-deferential sense of justification of which I write in this paper.
Ernest Sosa (1997) has made the distinction between animal or prereflective justification, on the one hand, and reflective justification on the other. Of course, Sosa is speaking of the epistemology of empirical knowledge, not of moral epistemology. Still, some might be tempted to suggest that coherence rankings can be arrived at unconsciously, yielding a kind of prereflective justification; that would be to miss the point of what is being argued for herein. Among the contributors to coherence and incoherence in Thagard’s model are arguments and counterarguments, which are offered and considered by reflective agents. While I do not wish to deny that there may be a prereflective (or subreflective) dimension to moral cognition, when the type of cognition we are examining begins to consider arguments and counterarguments, surely we have entered the realm of reflective cognition. The kind of justification being considered in this paper is reflective justification. There may well be different defensible senses of the term “justification”, and no claim is made here that responsible, conscious articulation of reasons captures all senses of that term. However, if global coherence theories (whether understood as internalist or externalist) are incapable of explaining much of what we consider important in reflective and responsible moral cognition, then so much the worse for such theories with respect to understanding reflective cognition and its normative dimensions.
Thagard has never said anything like this. The objections considered in this section come from informal exchanges with several colleagues over the last few years.
The coherentist could also argue that in the future we will acquire beliefs such that adding P1 to those beliefs will lead to a higher coherence rating than our current set of beliefs. Assuming that the preceding is possible, the burden is clearly on the coherentist to provide those further beliefs.
Trying to argue that the overall justification of P1 comes in part from its foundational status and in part from its coherence with other beliefs inherits the problems of both approaches: P2 still has a better claim to being foundational than P1, and P1 would not fit in well with the rest of our beliefs.
Notice: it has not been asserted that coherence can play no role whatsoever in constructing an AMI. The arguments in this paper are not strong enough to establish that point. To see why I am making the preceding qualification, consider an android that gets around in the world about as well as a 5-year-old child. Imagine that it is unable to linguistically articulate or consciously reflect on its moral standards, but it does have standards because it refrains from biting, kicking, or otherwise harming humans, and it even makes attempts to break up fights that occur among human children. In some sense, this android has a kind of moral intelligence (even if it cannot articulate or reflect on why it behaves the way it does). Could this kind of intelligence be generated by a coherence engine of some sort? I do not know. No argument in this paper is strong enough, on its own, to rule out such a possibility. The kind of intelligence that has been focused on herein is reflective intelligence, and it is with respect to such intelligence that I have raised concerns about the use of coherence.
I wish to thank Andrew Bailey, Pierre Boulos, and Paul Thagard for comments and questions during the early stages of the work that eventually led to this paper. I would also like to thank the participants at both the Dartmouth AI@50 conference (July 2006) and the North American Computing and Philosophy Conference (August 2006) for valuable input. For financial assistance during the writing of this paper, I am indebted to the Social Sciences and Humanities Research Council of Canada.