A Challenge for Machine Ethics

Abstract

That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: to identify an ethical framework that is both implementable into machines and whose tenets permit the creation of such AMAs in the first place. Without consistency between ethics and engineering, the resulting AMAs would not be genuine ethical robots, and hence the discipline of Machine Ethics would be a failure in this regard. Here this challenge is articulated through a critical analysis of the development of Kantian AMAs, Kantian ethics being one of the leading contenders for the ethic to be implemented into machines. In the end, however, the development of Kantian artificial moral machines is found to be anti-Kantian. The upshot is that machine ethicists need to look elsewhere for an ethic to implement into their machines.

Notes

  1. The most thorough examination of Kantian ethics within the Machine Ethics literature thus far is offered by Powers (2006). Although such an analysis of Kant’s ethics is a step in the right direction, Powers’ discussion remains throughout at the level of implementation and never considers whether Kantian morality permits the development of Kantian AMAs in the first place.

  2. Nadeau (2006) even goes so far as to suggest that only androids could be ethical.

  3. For a nice review of these weapons, see Sparrow (2007).

  4. For a recent interdisciplinary discussion of creativity, see Boden (1994). For a discussion of the intersection of emotions and AI, see Picard (1997).

  5. For a more in-depth analysis of Kant’s moral philosophy, see O’Neill (1989) or Rawls (2000).

  6. The Metaphysics of Morals, p. 141. (Hereafter MM).

  7. Fundamental Principles of the Metaphysic of Morals, p. 80. (Hereafter FPMM).

  8. FPMM, p. 65.

  9. FPMM, p. 49.

  10. FPMM, p. 58.

  11. Kant puts the idea quite nicely in FPMM: “If now we attend to ourselves on occasion of any transgression of duty, we shall find that we in fact do not will that our maxim should be a universal law, for that is impossible for us; on the contrary we will that the opposite should be a universal law, only we assume the liberty of making an exception in our own favor or (just for this time only) in favor of our inclination” (52).

  12. FPMM, pp. 17–20.

  13. Here I assume that Kant was not a compatibilist. On this reading, if the will of an entity is determined, as both determinists and compatibilists suggest it to be (albeit in support of different views), then that entity does not possess unabated free will. It is worth noting that although Kant believed that the existence of free will could never be proven, he also believed it to be an indispensable element of genuine moral agency.

  14. This claim is admittedly controversial. Some have argued that free will could be instilled in robots. See especially McCarthy (2000).

  15. MM, p. 148.

  16. See Wallach and Allen (2009) for a discussion of the benefits and promise of developing virtuous artificial moral agents.

  17. MM, p. 186.

  18. See Kant’s Lectures on Ethics. There Kant distinguishes between heroic (supererogatory), blameworthy (abhorrent), and permissible (accidental) suicide. Heroic suicide is self-termination undertaken with the intent of maintaining morality in the world, most notably in cases where remaining alive would bring about a more severe moral violation.

  19. See Mill’s Utilitarianism for a classic Utilitarian account.

  20. It is worth noting that several authors have recognized the difficulties in implementing both Kantian and Utilitarian ethics into machines. See for example Anderson and Anderson (2007b), Wallach et al. (2008), Allen et al. (2000, 2005), and Gips (1995).

References

  • Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7, 149–155.

  • Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental and Theoretical Artificial Intelligence, 12(3), 251–261.

  • Allen, C., Wallach, W., & Smit, I. (2006). Why machine ethics? IEEE Intelligent Systems, 21(4), 12–17.

  • Anderson, M., & Anderson, S. L. (2006). Machine ethics. IEEE Intelligent Systems, 21(4), 10–11.

  • Anderson, M., & Anderson, S. L. (2007a). The status of machine ethics: A report from the AAAI symposium. Minds and Machines, 17, 1–10.

  • Anderson, M., & Anderson, S. L. (2007b). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15–26.

  • Boden, M. A. (Ed.). (1994). Dimensions of creativity. Cambridge: MIT Press.

  • Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.

  • Calverley, D. J. (2008). Imagining a non-biological machine as a legal person. AI & SOCIETY, 22(4), 523–537.

  • Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.

  • Gips, J. (1995). Towards the ethical robot. In K. Ford, C. Glymour, & P. Hayes (Eds.), Android epistemology (pp. 243–252). Cambridge: MIT Press.

  • Gips, J. (2005). Creating ethical robots: A grand challenge. AAAI symposium on machine ethics. Washington, DC.

  • Grau, C. (2006). There is no ‘I’ in ‘Robot’: Robots and utilitarianism. IEEE Intelligent Systems, 21(4), 52–55.

  • Guarini, M. (2006). Particularism and the classification and reclassification of moral cases. IEEE Intelligent Systems, 21(4), 22–28.

  • Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8, 195–204.

  • Kant, I. (1785/1988). Fundamental principles of the metaphysic of morals (T. K. Abbott, Trans.). New York: Prometheus Books.

  • Kant, I. (1797/1996). The metaphysics of morals (M. Gregor, Trans.). Cambridge: Cambridge University Press.

  • Kant, I. (1997). Lectures on ethics (P. Heath, Trans.). Cambridge: Cambridge University Press.

  • McCarthy, J. (2000). Free will—even for robots. Journal of Experimental and Theoretical Artificial Intelligence, 12(3), 341–352.

  • McLaren, B. (2006). Computational models of ethical reasoning: Challenges, initial steps, and future directions. IEEE Intelligent Systems, 21(4), 29–37.

  • Mill, J. S. (1871/2000). Utilitarianism. New York: Broadview.

  • Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.

  • Nadeau, J. E. (2006). Only androids can be ethical. In K. Ford, C. Glymour, & P. J. Hayes (Eds.), Thinking about android epistemology (pp. 241–248). Cambridge: MIT Press.

  • O’Neill, O. (1989). Constructions of reason: Explorations of Kant’s practical philosophy. New York: Cambridge University Press.

  • Picard, R. W. (1997). Affective computing. Cambridge: MIT Press.

  • Powers, T. (2006). Prospects for a Kantian machine. IEEE Intelligent Systems, 21(4), 46–51.

  • Rawls, J. (2000). Lectures on the history of moral philosophy. Cambridge: Harvard University Press.

  • Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.

  • The U.S. Army Future Combat Systems Program. (2006). Retrieved July 31, 2008, from www.cbo.gov/ftpdoc.cfm?index=7122.

  • Torrance, S. (2008). Ethics and consciousness in artificial agents. AI & SOCIETY, 22(4), 495–521.

  • Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press (forthcoming).

  • Wallach, W., Allen, C., & Smit, I. (2008). Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI & SOCIETY, 22, 565–582.

Acknowledgments

Thank you to Olaf Eleffson, Verena Gottschling, Marcello Guarini, and Hilary Martin for helpful discussion in the early stages of this research. An earlier version of this paper was published as part of the proceedings from the 2009 SSAISB Symposium on Computing and Philosophy in Edinburgh. Thank you to the engaging audience there for their comments, especially Steve Torrance.

Author information

Correspondence to Ryan Tonkens.

Cite this article

Tonkens, R. A Challenge for Machine Ethics. Minds & Machines 19, 421–438 (2009). https://doi.org/10.1007/s11023-009-9159-1
