The so-called Responsibility Gap objection to AWS has been developed by several scholars, though the exact provenance of responsibility-based objections to AWS is debated. In our view, the most famous and influential version is due to Robert Sparrow (2007), who argued that if an AWS made a mistake in war and, say, killed a noncombatant, no one could legitimately be held morally responsible (not the commander, not the programmers, and so forth), resulting in an odd responsibility gap, the possibility of which makes deployment of AWS morally impermissible. This is discussed below. Several others have made similar points or developed responsibility-based objections to AWS. Andreas Matthias (2004) coined the term ‘responsibility gap’ as it pertains to autonomous systems. Heather M. Roff (2013) has made similar arguments, particularly concerning the technical aspects of control, as has Christian Enemark (2013). Alex Leveringhaus (2013) has argued that responsibility-based objections fail to rule out AWS as morally impermissible.
The autonomous weapons we have in mind are an example of “weak AI”: they boast sophisticated decision-making abilities, even to the extent that their ultimate decisions could be a mystery to their creators. But these capabilities are confined to a narrow domain of decision-making, unlike the capabilities of strong AI. The autonomous weapons we have in mind are not good at chess, they cannot play “Jeopardy!”, they cannot diagnose a medical condition from a list of symptoms, and they cannot pass the Turing test in conversation with a human.
To be clear, in our view the kinds of weapon technology in use today do not yet constitute what we mean by AWS, but several weapons point to the impending likelihood of AWS being developed and deployed. The most notable example in widespread use is likely the Phalanx CIWS (close-in weapon system) used on US Navy and Royal Navy surface vessels, and its land-based variant, the C-RAM (Counter Rocket, Artillery, and Mortar), when those systems are used in so-called ‘autonomous’ mode. But in this paper we are analyzing weapon systems that go beyond such present-day technology.
Of course, our arguments here would apply to many autonomous weapons at lower levels of autonomy as well.
There are several arguments that suggest that fully autonomous weapons will be deployed in the future (Sparrow 2007: 64). See, relatedly, the ‘Principle of Unnecessary Risk’ discussed by one of us, Strawser (2010: 344): “If X gives Y an order to accomplish good goal G, then X has an obligation, other things being equal, to choose a means to accomplish G that does not violate the demands of justice, make the world worse, or expose Y to potentially lethal risk unless incurring such risk aids in the accomplishment of G in some way that cannot be gained via less risky means.” While Strawser (2010) uses this premise in an argument for the obligation to deploy unmanned aerial vehicles, there is clearly an analogous argument to be made for the moral obligation to deploy fully autonomous weapons. We find these arguments compelling, but a fuller exploration is beyond the scope of this paper.
Of course, there are also reasons for militaries to be apprehensive about the deployment of autonomous weapons, namely, precisely that they are autonomous and therefore more difficult to control than human soldiers. We thank an anonymous referee for raising this point. Nevertheless, we believe that armies will face an increasing pressure to outsource the decisions of human soldiers to AWS. In particular, the corresponding decreased risk to our soldiers’ lives (and thus the decreased political cost of waging war), combined with the increased accuracy and reliability of AWS in some domains, will make their deployment an irresistible option.
This may not pose a decisive practical problem for AWS. In reality, many accepted practices of warfare such as bombing do not provide the option of surrender and do not require stopping an attack when somebody gets injured. Thank you to an anonymous referee for making this point.
Ronald Arkin has made similar points in conversation and conference presentations. Also see Arkin (2009).
We are heavily indebted to Sparrow (unpublished manuscript), for alerting us to these possible solutions.
As Sparrow puts it, even if AWS never commit an action “of the sort that would normally be described as a war crime” (2007: 66).
Supposing that AWS would become just as reliable as humans at making moral decisions (and not more), we generate another interesting worry that has not appeared in the literature. In these cases, we might encounter the inverse of Sparrow’s responsibility gaps, namely, merit gaps. Whereas Sparrow worries that we would have no one to blame or punish in the event a robot makes a mistake, we might just as well worry that we would have no one to praise or reward should an autonomous weapons system perform especially admirably. Worries about such a merit gap seem much less serious, and we wonder whether this points either to an asymmetry in our ascriptions of praise and blame in general or else to an inconsistency in our attribution of agency to autonomous systems. At any rate, such a discussion is outside the scope of this paper.
It would be a virtue but is not required, of course. That is, it may well be unavoidable that any legitimate moral objection against AWS also indicts non-weaponized autonomous technology, though we are hopeful that our objections offered here do not do that for the reasons given below.
See Louden (1992) for an influential account of “moral theorists” as writers who see the project of moral philosophy as including the development of a straightforwardly applicable moral code. What we are calling the necessity of moral judgment is the denial of Louden’s fourth tenet: “The correct method for reaching the one right answer [in some morally freighted situation] involves a computational decision procedure…” (1992: 8).
See McKeever and Ridge (2005) for an excellent cataloguing of the various species of anti-theory.
Dancy (1993) is the most famous proponent of this view.
This view represents the legacy of McDowell’s passage quoted above. See Little (2000: 280): “there is no cashing out in finite or helpful propositional form the context on which the moral meaning depends.” See also McNaughton, who says that moral principles are “at best useless, and at worst a hindrance” (1988: 191).
See Little (1997: 75): “The virtuous and nonvirtuous can alike believe that cruelty is bad, or conclude that some particular action is now called for. The virtuous person, however, holds the belief as part and parcel of the broad, uncodifiable, practical conception of how to live, while the nonvirtuous person holds it without so subsuming it. The two differ, if you like, in their conceptual gestalts of the situation… Virtue theory, then, does indeed claim that the virtuous person is in a cognitive state—a state satisfying a belief direction of fit—that guarantees moral motivation. But the guarantee is not located in any particular belief or piece of propositional knowledge. It is, instead, located in a way of conceiving a situation under the auspices of a broad conception of how to live.” See also Hursthouse (1995). This intellectual lineage also includes McDowell, whose aforementioned piece defends the Socratic thesis that virtue is a kind of knowledge.
“It is not the fault of any creed, but of the complicated nature of human affairs, that rules of conduct cannot be so framed as to require no exceptions, and that hardly any kind of action can safely be laid down as either always obligatory or always condemnable. There is no ethical creed which does not temper the rigidity of its laws, by giving a certain latitude, under the moral responsibility of the agent, for accommodation to peculiarities of circumstances…” (Mill 1863: 36).
See Scheffler (1992: 43): “If acceptance of the idea of a moral theory committed one to the in-principle availability of a moral decision procedure, then what it would commit one to is something along these lines. But even if it did so commit one, and even if it also committed one to thinking that it would be desirable for people to use such a procedure, it still would not commit one to thinking it either possible or desirable to eliminate the roles played in moral reasoning and decision by the faculties of moral sensitivity, perception, imagination, and judgment. On the contrary, a decision procedure of the kind we have described could not be put into operation without those faculties.” See Hooker (2000: 88): “Rule-consequentialists are as aware as anyone that figuring out whether a rule applies can require not merely attention to detail, but also sensitivity, imagination, interpretation, and judgment”. See also his (2000: 128–129, 133–134, 136).
See Vodehnal (2010: 28 n53): “On a Kantian account, significant amounts of moral judgment are required to formulate the maxim on which an agent intends to act, and which the agent can test using the Categorical Imperative. In addition, the entire class of imperfect duties leaves agents with extensive latitude in how these duties are fulfilled, requiring significant moral judgment as well.” See Ross (2002: 19): “When I am in a situation, as perhaps I always am, in which more than one of these prima facie duties is incumbent on me, what I have to do is to study the situation as fully as I can until I form the considered opinion (it is never more) that in the circumstances one of them is more incumbent than any other…” (emphasis added). See McNaughton, op. cit.
As James Griffin writes: “The best procedure for ethics… is the going back and forth between intuitions about fairly specific situations on the one side and the fairly general principles that we formulate to make sense of our moral practice on the other, adjusting either, until eventually we bring them all into coherence. This is, I think, the dominant view about method in ethics nowadays” (Griffin 1993). See also Van den Hoven (1997).
See, specifically, what Dreyfus terms the “epistemic assumption” of the project of artificial intelligence (1992: 189–206). That assumption is that a system of formalized rules could be used by a computer to reproduce a complex human behavior (for our purposes, moral deliberation). This is one of several assumptions underlying the project of artificial intelligence about which Dreyfus is deeply skeptical. On a fascinating tangential note, see the “ontological assumption,” the genus of which the anti-codifiability thesis is a species, and which Dreyfus terms “the deepest assumption underlying… the whole philosophical tradition” (1992: 205).
We thank an anonymous referee for highlighting this example of one kind of moral mistake that AWS might be prone to make.
Some transhumanists argue that we could one day replicate the human brain, and hence the human mind (including intentions, consciousness, and qualia). Advances in quantum computing or nanotechnology could allegedly make this possible. However, the transhumanists like Bostrom (2003) and Kurzweil (2000; 2005) who are confident about this prospect are a small minority, and there are several famous counterexamples to their views that many philosophers of mind take to be conclusive. See: Block (1978), Searle (1992), Schlagel (1999), and Bringsjord (2007), among others. We thank an anonymous referee for this journal for pressing us on this point.
We are here heavily indebted to Sparrow (unpublished manuscript) and much of this last point is rightfully attributed to him. Also consider, for example, the analogous case of driverless cars, which are safer and more efficient ‘drivers’ due in part to their ability to process much more information than a human driver and to react almost instantaneously (Del-Colle 2013).
There are many plausible reasons for this. For example, unlike humans, robots may never become fatigued, bored, distracted, hungry, tired, or irate. Thus they would probably be more efficient actors in a number of contexts common during warfighting. For more discussion on this point, see Arkin (2009).
George R. Lucas (2013) raises similar thoughts.
This constitutes yet another respect in which our argument is not ultimately contingent.
Our suggestion in this section aligns neatly with—and can be recast in terms of—Julia Markovitz’s account of morally worthy action (Markovitz 2010), according to which morally worthy actions are those performed for the reasons why they are right. To put our objection in these terms, AWS are morally problematic because they are incapable of performing morally worthy actions.
Scanlon (1998: 58–64), for instance, endorses the view that reason-taking consists basically in the possession of belief-like attitudes about what counts as a reason for acting. Schroeder has proposed that the considerations that one takes as reasons are considerations about the means to one’s ends that strike one with a ‘certain kind of salience’, in the sense that ‘you find yourself thinking about them’ when you think about the action (Schroeder 2007: 156). Schroeder’s account seems to require some kind of attitude of holding a consideration before one’s mind. AI cannot manifest either of these attitudes.
Our point in this argument runs directly contrary to Sparrow (2007: 65), which takes for granted that artificially intelligent systems will have ‘desires,’ ‘beliefs,’ and ‘values,’ at least in some inverted commas sense.
While it is controversial, it is widely held that psychopaths are incapable of appreciating characteristically moral reasons in their deliberations. However, the data also support the view that psychopaths can recognize moral facts but are simply not moved by them to act morally. See Borg and Sinnott-Armstrong (2013) for a survey of the relevant literature. Here we suppose the first of these views, which we think is acceptable since it is supported by some scientific findings.
The unease with which we still regard the sociopathic soldier recalls Williams’ objection to utilitarianism on the grounds that it discounts a person’s integrity. Williams regards the utilitarian’s answer in Jim’s case as “probably right” (Williams 1995: 117). One way of understanding the objection he famously elaborates here is not that utilitarianism gets the answer wrong, it is that it treats the answer as obvious (Williams 1995: 99). We think the sociopathic soldier example lends some credence to Williams’ original argument.
It is worth acknowledging the reality that sociopaths find their way into the military—it is often difficult to screen them out—but we take this fact to be regrettable. Suppose, however, that most will accept this regrettable fact as an unavoidable cost of doing business. There is nonetheless an important difference between foreseeing that some small number of human sociopaths will inevitably find their way into the military and adopting a national policy of deploying large numbers of sociopaths to fight our wars for us. Adopting such a policy is not an inevitable cost of doing business, nor is the deployment of AWS. We thank an anonymous referee for helpful discussion of this point.
Of course, there could potentially be some moral benefit to deploying the sociopath or AWS. For instance, neither a sociopath nor a machine is capable of feeling the force of moral dilemmas; they will therefore not suffer the psychological harm associated with making difficult moral choices. But this fact, on its own, does not mean that it is preferable to deploy sociopaths or AWS. Mike Robillard and one of us (Strawser) have recently written about a peculiar kind of moral exploitation that some soldiers experience (2014). They argue that when society imposes the difficult moral choices required by war on soldiers who are not properly equipped to handle them, this can result in certain unjust harms to the soldiers, a form of exploitation. While this sounds plausible, one should not thereby conclude that it would be better to use actors incapable of moral feelings in war, such as AWS. Rather, it simply raises the (already significant) burden on society to employ as soldiers only those who are reasonably capable of making the moral decisions necessary in war.
See Augustine’s (2004) Letter 189 to Boniface, §6.
Similarly, one might object that in prosecuting a war individual combatants do not act in a personal capacity but rather, as agents of the state, in a purely “professional” or “official” capacity and, as such, their intentions are not relevant. Such a view is highly controversial among moral philosophers writing about war, and we disagree that such a distinction can rule out the moral relevance of the intentions of the actors who carry out war (or any ‘official’ policy), for the reasons given below regarding in bello intentions. Consider: we still think the intentions of a police officer are morally relevant in our judgment of an action she may carry out, even if the action is taken as part of her official duties. Our thanks to an anonymous reviewer for help on this point.
What we say here—and quote others as saying—is a defense of the view that having the right intention is necessary for acting rightly. It should go without saying that having the right intention does not guarantee that an agent acts rightly.
We might also consider our reactions to another modification of the racist soldier case, call it the Racist Policy case. Imagine that it were official policy to deploy racist soldiers; take the policies of the Confederacy during the United States Civil War as a hypothetical example. Then, this distinction between personal behavior and official policy becomes blurred, once the racist motives of the soldier are endorsed and underwritten by the state. Considering that, for reasons we mention above, AWS could become widespread, i.e., their deployment could become official policy, even proponents of this more restrictive view have reason to be alarmed. We are grateful to an anonymous referee for this journal for drawing our attention to this point.
In fact, some have cited epistemic difficulties as a justification for leaving the jus ad bellum criterion of right intention out of the International Law of Armed Conflict (Orend 2006: 47).
Michael Slote (2001) takes a stronger position than Hurka on the moral significance of virtuous motivations: he allows that there can be virtuous motives that do not issue in right acts, but his approach implies that an act is right only if it is virtuously motivated.
For more arguments along these lines, see Kershnar (2013).
It is worth noting that, like unmanned drones, cruise missiles or landmines have the potential to be improved with respect to AI and sensors, which would make them better at discerning targets. Does the fact that such ‘autonomous’ landmines would fail to act for the right reasons mean that we are morally required to continue to use ‘dumb’ landmines instead? We concede that our argument entails that there would be at least one serious pro tanto moral objection to deploying autonomous landmines that does not apply to traditional landmines: only autonomous landmines would choose whom to kill, and they would do so without the right reasons. However, this pro tanto objection must be weighed against other considerations to arrive at an all-things-considered judgment about their deployment. For instance, traditional landmines are considered problematic by just war theorists because their use often violates the discrimination and non-combatant immunity criterion of jus in bello. Autonomous landmines, which, we are imagining, have the ability to distinguish between legitimate and illegitimate targets, would thus be a moral improvement in this regard. We thank an anonymous referee for pressing us on this point.
In this limited sense, all objections to AWS may be contingent in that none of them may justify an absolute prohibition on the deployment of AWS. Still, our objection remains non-contingent insofar as the reason against deploying AWS that we have identified is not dissolved in the presence of countervailing considerations; it is always present, but may simply be outweighed.
Recently, popular attention has been drawn to the possibility that driverless cars will soon replace the human-driven automobile. See, for example, Del-Colle (2013) and Lin (2013b). There are questions about whether driverless cars really would be safer than human-driven cars. But there are several reasons for thinking that driverless cars would be better at driving than humans, in some respects, in the same way that autonomous weapons would be better at soldiering than humans, in some respects. Their faster reaction times and improved calculative abilities are clear. Moreover, driverless cars would not get tired or fatigued, and they could be programmed to drive defensively, as Google’s car is, for example, by automatically avoiding other cars’ blind spots. Nor do driverless cars demonstrate these benefits only when they have the roads to themselves. Google’s driverless cars have already driven over 700,000 miles on public roads occupied almost exclusively by human drivers, and have never been involved in an accident (Anthony 2014). We would expect a human driver to experience about 2.5 accidents in that time (Rosen 2012). According to Bryant Walker Smith at Stanford Law School’s Center for Internet and Society, we are approaching the point at which we can confidently say that Google’s driverless car is significantly safer than human-driven cars (Smith 2012).
We might think that a person surrenders their autonomy in a problematic way if they board a driverless car, but this is not obviously true. Human passengers in driverless cars cannot always choose their routes, but they can choose their destination and can also retake control if they so choose. This is possible, for example, in Google’s driverless car. For this reason, getting into a driverless car surrenders a person’s autonomy less than does getting on a human-piloted airplane or a human-driven bus.
Thank you to an anonymous referee for highlighting this possibility.
We thank an anonymous reviewer on this point.
Again, see Lin (2013a) where this kind of moral dilemma is raised for a driverless car.
This may strike you as equivalent to asking, “What should the avalanche do?” and, like the question about avalanches, may seem confused. It may be more precise to ask, “Which movement of the car would result in a morally better outcome?” or, “What should the programmers of the car have designed the car to do in a case such as this?” Because it is simpler and, we think, coherent, we will continue to speak as if autonomous systems should do certain things.
To use our language above, the advantage of putting driverless cars into use would stem from their ability to avoid many of the empirical and practical mistakes that humans make, not from their (in)ability to make genuine moral mistakes.
We should be clear that we are not arguing for an absolute prohibition on AWS. We believe there is a strong pro tanto reason against using AWS in contexts where they would be making life or death decisions, but that reason could be outweighed. It could be outweighed if AWS were significantly morally better than human soldiers, for example, because they made fewer moral or practical mistakes. In contexts where this presumption is not outweighed, it could be all-things-considered wrong to deploy autonomous systems.
Sparrow (2007) has made a similar point.
Thanks to an anonymous referee for this journal for pointing out something similar to us.
See, for example, Persson and Savulescu (2010), which argues that we would be obligated to modify humans with AI, if doing so could make us morally better. We go further here and suggest that if AI could on its own be an excellent moral agent, we might be required to outsource all of our moral decisions to it.
Or, instead, should we pity the machine itself? Could it become so overburdened by contemplating the sorrows and tribulations of humanity that it would contrive to have itself destroyed, as did Isaac Asimov’s “Multivac,” a computer designed to solve every human problem (Asimov 1959)?
Some believe that this autonomy is so important that losing one’s autonomy could not be outweighed even by tremendous amounts of other goods. See on this point Valdman (2010).
Adams TK (2001) Future warfare and the decline of human decision-making. Parameters: US Army War College Q Winter 2001–2:57–71
Anscombe GEM (1979) Under a description. Noûs 13(2):219–233
Anthony S (2014) Google’s self-driving car passes 700,000 accident-free miles, can now avoid cyclists, stop at railroad crossings. ExtremeTech. http://www.extremetech.com/extreme/181508-googles-self-driving-car-passes-700000-accident-free-miles-can-now-avoid-cyclists-stop-for-trains. Accessed 29 April 2014
Aquinas T (1920) Summa theologica. 2nd edn. Fathers of the English Dominican Province
Arkin R (2009) Governing lethal behavior in autonomous robots. CRC Press, London
Asaro P (2012) On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making. Intl Rev Red Cross 94(886):687–709
Asimov I (1959) Nine tomorrows: tales of the near future. Fawcett, Robinsdale
Augustine (1887) Contra Faustum Manichaeum. In: Schaff (ed) From Nicene and post-Nicene fathers, first series, vol 4. Christian Literature Publishing Co, Buffalo
Augustine (2004) Letter 189 to Boniface. In Letters 156–210 (II/3) Works of Saint Augustine. New City Press, New York
Block N (1978) Troubles for functionalism. Minn Stud Philos Sci 9:261–325
Borg JS, Sinnott-Armstrong W (2013) Do psychopaths make moral judgments?. In: Kiehl K, Sinnott-Armstrong W (eds) The oxford handbook of psychopathy and law. Oxford, Oxford
Bostrom N (2003) Ethical issues in advanced artificial intelligence. In: Schneider S (ed) Science fiction and philosophy: from time travel to superintelligence. Wiley-Blackwell, 277–286
Bringsjord S (2007) Offer: one billion dollars for a conscious robot; if you’re honest, you must decline. J Conscious Stud 14(7):28–43
Brutzman D, Davis D, Lucas GR, McGhee R (2010) Run-time ethics checking for autonomous unmanned vehicles: developing a practical approach. Ethics 9(4):357–383
Crisp R (2000) Particularizing particularism. In: Hooker B, Little MO (eds) Moral particularism. Oxford, New York, 23–47
Dancy J (1993) Moral reasons. Wiley-Blackwell
Darwall S (1983) Impartial reason. Cornell, New York
Davidson D (1964) Actions, reasons, and causes. J Philos 60(23):685–700
Davidson D (1978) Intending. In: Yirmiahu (ed) Philosophy and history of action. Springer, p 41–60
De Greef TE, Arciszewski HF, Neerincx MA (2010) Adaptive automation based on an object-oriented task model: implementation and evaluation in a realistic C2 environment. J Cogn Eng Decis Making 31:152–182
Del-Colle A (2013) The 12 most important questions about self-driving cars. Popular Mechanics. http://www.popularmechanics.com/cars/news/industry/the-12-most-important-questions-about-self-driving-cars-16016418. Accessed 28 Oct 2013
Enemark C (2013) Armed drones and the ethics of war: military virtue in a post-heroic age. Routledge, New York
Gibbard A (1990) Wise choices, apt feelings. Clarendon, Oxford
Griffin J (1993) How we do ethics now. R Inst Philos Suppl 35:159–177
Guarini M, Bello P (2012) Robotic warfare: some challenges in moving from noncivilian to civilian theaters. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, pp 129–144
Hooker B (2000) Ideal code, real world: a rule-consequentialist theory of morality. Oxford University Press, Oxford
Hurka T (2010) Right act, virtuous motive. In: Battaly HD (ed) Virtue and vice, moral and epistemic. Wiley-Blackwell, 58–72
Hursthouse R (1995) Applying virtue ethics. In: Hursthouse R, Lawrence G, Quinn W (eds) Virtues and reasons: Philippa Foot and moral theory. Clarendon Press, Oxford, pp 57–75
Jaworska A, Tannenbaum J (2014) Person-rearing relationships as a key to higher moral status. Ethics 124(2):242–271
Kershnar S (2013) Autonomous weapons pose no moral problems. In: Strawser BJ (ed) Killing by remote control: the ethics of an unmanned military. Oxford University Press, Oxford
Korsgaard C (1996) The sources of normativity. Cambridge University Press, Cambridge
Kurzweil R (2000) The age of spiritual machines: when computers exceed human intelligence. Penguin
Kurzweil R (2005) The singularity is near: when humans transcend biology. Penguin
Lin P (2013a) The ethics of saving lives with autonomous cars are far murkier than you think. Wired magazine. July 30, 2013. http://www.wired.com/opinion/2013/07/the-surprising-ethics-of-robot-cars/. Accessed 28 Oct 2013
Lin P (2013b) The ethics of autonomous cars. The Atlantic. http://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/. Accessed 28 Oct 2013
Little M (1997) Virtue as knowledge: objections from the philosophy of mind. Noûs 31(1):59–79
Little M (2000) Moral generalities revisited. In: Hooker B, Little MO (eds) Moral particularism. Oxford University Press, New York, pp 276–304
Louden RB (1992) Morality and moral theory: a reappraisal and reaffirmation. Oxford University Press, New York
Lucas GR (2013) Engineering, ethics, and industry: the moral challenges of lethal autonomy. In: Strawser BJ (ed) Killing by remote control: the ethics of an unmanned military. Oxford University Press, Oxford, pp 211–228
Markovitz J (2010) Acting for the right reasons. Philos Rev 119(2):201–242
Matthias A (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6(3):175–183
McDowell J (1979) Virtue and reason. Monist 62(3):331–350
McKeever S, Ridge M (2005) The many moral particularisms. Can J Philos 35(1):83–106
McMahan J (2009) Killing in war. Oxford University Press, Oxford
McNaughton D (1988) Moral vision. Wiley-Blackwell
Mill JS (1863) Utilitarianism. Parker, Son, and Bourn, London
Nagel T (1972) War and massacre. Philos Publ Aff 1(2):123–144
Orend B (2006) The morality of war. Broadview Press
Orend B (2008) War. In: Zalta EN (ed) The Stanford encyclopedia of philosophy (Fall 2008 edition). URL = <http://plato.stanford.edu/archives/fall2008/entries/war/>
Persson I, Savulescu J (2010) Moral transhumanism. J Med Philos 35(6):656–669
Quinn W (1993) Morality and action. Cambridge University Press, Cambridge
Rawls J (1971) A theory of justice. Harvard University Press, Cambridge
Reichberg GM (2010) Thomas Aquinas on military prudence. J Mil Ethics 9(3):262–275
Roff H (2013) Killing in war: responsibility, liability, and lethal autonomous robots. In: Henschke A, Evans N, Allhoff F (eds) Routledge handbook for ethics and war: just war theory in the 21st century. Routledge, New York
Roff H, Momani B (2011) The morality of robotic warfare. National Post
Rosen R (2012) Google’s self-driving cars: 300,000 miles logged, not a single accident under computer control. Atlantic. http://www.theatlantic.com/technology/archive/2012/08/googles-self-driving-cars-300-000-miles-logged-not-a-single-accident-under-computer-control/260926/. Accessed 28 Oct 2014
Ross WD (2002) The right and the good. Oxford University Press, Oxford
Scanlon TM (1998) What we owe each other. Harvard University Press, Cambridge
Scheffler S (1992) Human morality. Oxford University Press, New York
Schlagel RH (1999) Why not artificial consciousness or thought? Mind Mach 9(1):3–28
Schmitt M (2013) Autonomous weapon systems and international humanitarian law: a reply to the critics. Harvard Natl Secur J 1–37
Schroeder M (2007) Slaves of the passions. Oxford University Press, Oxford
Searle J (1992) The rediscovery of the mind. MIT Press, Cambridge
Setiya K (2007) Reasons without rationalism. Princeton University Press, Princeton
Shafer-Landau R (1997) Moral rules. Ethics 107(4):584–611
Sharkey N (2010) Saying ‘no!’ to lethal autonomous targeting. J Mil Ethics 9(4):369–384
Slote MA (2001) Morals from motives. Oxford University Press, London
Smith BW (2012) Driving at perfection. Stanford Law School, Center for Internet and Society. http://cyberlaw.stanford.edu/blog/2012/03/driving-perfection. Accessed 28 Oct 2014
Sparrow R (2007) Killer robots. J Appl Philos 24(1):62–77
Strawser B (2010) Moral predators. J Mil Ethics 9(4):342–368
Strawson PF (1962) Freedom and resentment. Proc Br Acad 48:1–25
Valdman M (2010) Outsourcing self-government. Ethics 120(4):761–790
Van den Hoven J (1997) Computer ethics and moral methodology. Metaphilosophy 28(3):234–248
Vodehnal C (2010) Virtue, practical guidance, and practices. Electronic theses and dissertations. Paper 358. Washington University in St. Louis. http://openscholarship.wustl.edu/etd/358. Accessed 2/12/14
Walzer M (1977) Just and unjust wars: a moral argument with historical illustrations. Basic Books, New York
Williams B (1995) Making sense of humanity and philosophical papers. Cambridge University Press, Cambridge
The authors are indebted to many people for helpful contributions. In particular, we thank Rob Sparrow, David Rodin, Jonathan Parry, Cecile Fabre, Rob Rupert, Andrew Chapman, Leonard Kahn, and two anonymous referees for help on this paper.
Purves, D., Jenkins, R. & Strawser, B.J. Autonomous Machines, Moral Judgment, and Acting for the Right Reasons. Ethic Theory Moral Prac 18, 851–872 (2015). https://doi.org/10.1007/s10677-015-9563-y
Keywords:
- Autonomous weapons
- Just war theory
- Right reasons
- Moral judgment
- Driverless cars
- Artificial intelligence