Making moral machines: why we need artificial moral agents

Open Forum · AI & SOCIETY

Abstract

As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis of the relevant arguments for and against creating AMAs, and we argue that, all things considered, we have strong reasons to continue to responsibly develop AMAs. The key contributions of this paper are threefold. First, it provides the first comprehensive response to the important arguments made against AMAs by van Wynsberghe and Robbins (in “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics 25, 2019) and introduces several novel lines of argument in the process. Second, it collates and thematises for the first time the key arguments for and against AMAs in a single paper. Third, it recasts the debate away from blanket arguments for or against AMAs in general towards a more nuanced discussion of what sorts of AMAs it is morally appropriate to use, in what sorts of contexts, and for what sorts of purposes.

Notes

  1. Asaro’s (2006, p. 11) alternative five categories of AMAs roughly map onto Moor’s four categories as follows: “amoral” robots (Level 1), “robots with moral significance” (Level 2), “robots with moral intelligence” (Level 3a), “robots with dynamic moral intelligence” (Level 3b), and fully autonomous moral agents (Level 4).

  2. For the distinction between direct and indirect moral duties, see Formosa (2017).

References

  • Addyman C, French R (2012) Computational modeling in cognitive science. Top Cogn Sci 4:332–341

  • Allen C, Smit I, Wallach W (2005) Artificial morality. Ethics Inf Technol 7(3):149–155

  • Anderson M, Anderson S (2007) Machine ethics. AI Mag 28(4):15–26

  • Anderson S, Anderson M (2009) How machines can advance ethics. Philos Now 72:17–20

  • Anderson M, Anderson S (2018) GenEth. Paladyn J Behav Robot 9:337–357

  • Arkin R, Ulam P, Duncan B (2009) An ethical governor for constraining lethal action in an autonomous system, Technical report GIT-GVU-09-02

  • Arkin R, Ulam P, Wagner A (2012) Moral decision making in autonomous systems. Proc IEEE 100(3):571–589

  • Asaro PM (2006) What should we want from a robot ethic? Int Rev Inform Ethics 6:9–16

  • Bankins S, Formosa P (2019) When AI meets PC. Eur J Work Organ Psychol 26:1–15

  • Bekey GA (2012) Current trends in robotics. In: Lin P, Abney K, Bekey GA (eds) Robot ethics. MIT Press, Cambridge, MA, pp 17–34

  • Boden M (2016) AI. OUP, Oxford

  • Bonnemains V, Saurel C, Tessier C (2018) Embedded ethics. Ethics Inf Technol 20(1):41–58

  • Bostrom N (2014) Superintelligence. OUP, Oxford

  • Broadbent E (2017) Interactions with robots. Annu Rev Psychol 68(1):627–652

  • Brundage M (2014) Limitations and risks of machine ethics. J Exp Theor Artif Intell 26(3):355–372

  • Bryson J (2018) Patiency is not a virtue. Ethics Inf Technol 20(1):15–26

  • Chalmers D (2010) The singularity. J Conscious Stud 17(9):7–65

  • Danaher J (2016) Robots, law and the retribution gap. Ethics Inf Technol 18(4):299–309

  • Darling K (2017) Who’s Johnny? Anthropomorphic framing in human–robot interaction, integration, and policy. In: Lin P, Abney K, Jenkins R (eds) Robot ethics 2.0. OUP, New York, pp 173–188

  • Dietrich E (2001) Homo Sapiens 2.0. J Exp Theor Artif Intell 13(4):323–328

  • Etzioni A, Etzioni O (2016) AI assisted ethics. Ethics Inf Technol 18(2):149–156

  • Floridi L, Sanders JW (2004) On the morality of artificial agents. Minds Mach 14(3):349–379

  • Formosa P (2017) Kantian ethics, dignity and perfection. CUP, Cambridge

  • Gogoll J, Müller J (2017) Autonomous cars. Sci Eng Ethics 23(3):681–700

  • Greene J et al (2001) An FMRI investigation of emotional engagement in moral judgment. Science 293:2105–2108

  • Gunkel D (2014) A vindication of the rights of machines. Philos Technol 27(1):113–132

  • Gunkel D (2017) Mind the gap. Ethics Inf Technol. https://doi.org/10.1007/s10676-017-9428-2

  • Hevelke A, Nida-Rümelin J (2015) Responsibility for crashes of autonomous vehicles. Sci Eng Ethics 21(3):619–630

  • Himma K (2009) Artificial agency, consciousness, and the criteria for moral agency. Ethics Inf Technol 11(1):19–29

  • Himmelreich J (2018) Never mind the trolley. Ethical Theory Moral Pract. https://doi.org/10.1007/s10677-018-9896-4

  • Laukyte M (2017) Artificial agents among us. Ethics Inf Technol 19(1):1–17

  • Lin P (2015) Why ethics matters for autonomous cars. In: Maurer M et al (eds) Autonomes Fahren. Springer, Berlin, pp 69–85

  • McCauley L (2007) AI Armageddon and the three laws of robotics. Ethics Inf Technol 9(2):153–164

  • Miller K, Wolf M, Grodzinsky F (2017) This ‘Ethical Trap’ is for roboticists, not robots. Sci Eng Ethics 23(2):389–401

  • Moor J (2006) The nature, importance, and difficulty of machine ethics. IEEE Intell Syst 21(4):18–21

  • Moor J (2009) Four kinds of ethical robots. Philos Now 72:12–14

  • Nyholm S (2018) The ethics of crashes with self-driving cars. Philos Compass. https://doi.org/10.1111/phc3.12507

  • O’Neill O (1989) Constructions of reason. CUP, Cambridge

  • Peterson S (2012) Designing people to serve. In: Lin P, Abney K, Bekey GA (eds) Robot ethics. MIT Press, Cambridge, MA, pp 283–298

  • Powers T (2006) Prospects for a Kantian machine. IEEE Intell Syst 21(4):46–51

  • Robbins S (2020) AI and the path to envelopment. AI Soc 35(2):391–400

  • Roff HM, Danks D (2018) Trust but verify. J Mil Ethics 17(1):2–20

  • Scheutz M (2016) The need for moral competency in autonomous agent architectures. In: Müller V (ed) Fundamental issues of artificial intelligence. Springer, Cham, pp 517–527

  • Scheutz M (2017) The case for explicit ethical agents. AI Mag 38(4):57–64

  • Sharkey N (2012) The evitability of autonomous robot warfare. Int Rev Red Cross 94(886):787–799

  • Sharkey A (2017) Can robots be responsible moral agents? Connect Sci 29(3):210–216

  • Sparrow R (2012) Can machines be people? In: Lin P, Abney K, Bekey GA (eds) Robot ethics. MIT Press, Cambridge, MA, pp 301–316

  • Sparrow R (2016) Robots and respect. Ethics Int Aff 30(1):93–116

  • Staines D, Formosa P, Ryan M (2019) Morality play: a model for developing games of moral expertise. Games Cult 14(4):410–429

  • Tonkens R (2009) A challenge for machine ethics. Minds Mach 19(3):421–438

  • Tonkens R (2012) Out of character. Ethics Inf Technol 14(2):137–149

  • Torrance S (2008) Ethics and consciousness in artificial agents. AI Soc 22(4):495–521

  • Turkle S (2011) Alone together. Basic Books, New York

  • Vallor S (2015) Moral deskilling and upskilling in a new machine age. Philos Technol 28(1):107–124

  • van Wynsberghe A, Robbins S (2019) Critiquing the reasons for making artificial moral agents. Sci Eng Ethics 25:719–735

  • Voiklis J et al (2016) Moral judgments of human vs. robot agents. In: 25th IEEE international symposium on robot and human interactive communication, pp 775–780

  • Wallach W (2010) Robot minds and human ethics. Ethics Inf Technol 12(3):243–250

  • Wallach W, Allen C (2009) Moral machines. OUP, Oxford

Author information

Corresponding author

Correspondence to Paul Formosa.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Formosa, P., Ryan, M. Making moral machines: why we need artificial moral agents. AI & Soc 36, 839–851 (2021). https://doi.org/10.1007/s00146-020-01089-6
