Why machines cannot be moral

Abstract

The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” into machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of an ethical claim depends in part on the life history of the person who is making it. For both these reasons, machines could at best be engineered to provide a shallow simulacrum of ethics, one with limited utility in confronting the ethical and policy dilemmas associated with AI.

Notes

  1. Despite all the attention currently being paid to AI and ethics, there is surprisingly little recognition that there might be genuine intellectual content to the disputes amongst philosophers about the nature of ethics, as well as about what is ethical. I have lost count of the number of times I have sat on, or listened to, panels on AI and ethics that included no one else who had undertaken formal study of ethics, even though the engineers and the audience would have bristled at the thought of listening to discussions of engineering from those without any qualifications in that discipline.

  2. It is also worth pointing out that if there are no right or wrong answers to ethical questions, then there can be nothing wrong with declining to defer to the opinions of others on ethical questions, nor indeed with any other course of action.

  3. Of course, in practice some people are better at maths or engineering than others. That expertise, however, can be described and reproduced without making any essential reference to the individuals themselves or to their life histories (Gaita 1989).

  4. For an alternative, but not necessarily incompatible, account, see Pianalto (2011).

  5. Another way of making the same claim is to point out that, as I have argued at length elsewhere (Sparrow 2007), machines cannot be held responsible for moral decisions. This claim has been contested in the literature (see, for instance, Hellström 2013) but only, I believe, because the literature is operating with an attenuated notion of responsibility.

References

  • Anderson M, Anderson SL (2007) Machine ethics: Creating an ethical intelligent agent. AI Mag 28(4):15–26

  • Anderson M, Anderson SL (2018a) Machine ethics. Cambridge University Press, New York

  • Anderson M, Anderson SL (2018b) A prima facie duty approach to machine ethics: Machine learning of features of ethical dilemmas, prima facie duties, and decision principles through a dialogue with ethicists. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 476–494

  • Annas J (2011) Intelligent virtue. Oxford University Press, New York

  • Aristotle (1986) Ethics. Penguin Books, Harmondsworth

  • Arkin RC (2009) Governing lethal behavior in autonomous robots. CRC Press, Boca Raton

  • Awad E, Dsouza S, Kim R, Schulz J, Henrich J, Shariff A, Bonnefon JF, Rahwan I (2018) The moral machine experiment. Nature 563(7729):59–64

  • Bringsjord S, Taylor J, Van Heuveln B, Arkoudas K, Clark M, Wojtowicz R (2018) Piagetian roboethics via category theory: Moving beyond mere formal operations to engineer robots whose decisions are guaranteed to be ethically correct. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 361–374

  • Brundage M (2014) Limitations and risks of machine ethics. J Exp Theor Artif Intell 26(3):355–372

  • Cervantes JA, Rodríguez LF, López S, Ramos F, Robles F (2016) Autonomous agents and ethical decision-making. Cognit Comput 8(2):279–296

  • Cherry C (1991) Machines as persons? Philos Supp 29:11–24

  • Cockburn D (1985) The mind, the brain and the face. Philosophy 60(234):477–493

  • Cockburn D (1990) An attitude towards a soul. In: Cockburn D (ed) Other human beings. Palgrave Macmillan, London, pp 3–12

  • Cockburn D (1994) Human beings and giant squids. Philosophy 69(268):135–150

  • Cordner C (2014) Moral philosophy in the midst of things. In: Taylor C, Graefe M (eds) A sense for humanity: The ethical thought of Raimond Gaita. Monash University Publishing, Clayton, pp 125–140

  • Eubanks V (2018) Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, New York

  • Gaita R (1989) The personal in ethics. In: Phillips DZ, Winch P (eds) Wittgenstein: Attention to particulars. MacMillan, London, pp 124–150

  • Gaita R (2004) Good and evil: An absolute conception, 2nd edn. MacMillan, London

  • Gaita R (2011) Truth and truthfulness in narrative. In: Gaita R (ed) After Romulus. Text, Melbourne, pp 90–119

  • Giubilini A, Savulescu J (2017) The artificial moral advisor: The ‘ideal observer’ meets artificial intelligence. Philos Technol 31:1–20

  • Gunkel D (2012) The machine question: Critical perspectives on AI, robots, and ethics. MIT Press, Cambridge

  • Hellström T (2013) On the moral responsibility of military robots. Ethics Inf Technol 15(2):99–107

  • HLEGAI (High-Level Expert Group on Artificial Intelligence) (2018) Ethics guidelines for trustworthy AI. European Commission, Brussels

  • Hursthouse R (1999) On virtue ethics. Oxford University Press, Oxford

  • Lin P (2016) Why ethics matters for autonomous cars. In: Maurer M, Gerdes JC, Lenz B, Winner H (eds) Autonomous driving. Springer, Berlin, Heidelberg, pp 69–85

  • Miner AS, Milstein A, Schueller S, Hegde R, Mangurian C, Linos E (2016) Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Intern Med 176(5):619–625

  • Moor J (2018) The nature, importance, and difficulty of machine ethics. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 13–20

  • O’Neil C (2016) Weapons of math destruction: How big data increases inequality and threatens democracy. Allen Lane, London

  • Pereira LM, Saptawijaya A (2018) Modelling morality with prospective logic. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 398–421

  • Pianalto P (2011) Speaking for oneself: Wittgenstein on ethics. Inquiry 54(3):252–276

  • Plato (1961) The sophist. In: Klibansky R, Anscombe E (eds) The sophist; and the statesman. Translation and introduction by Taylor AE. T. Nelson, London

  • Powers TM (2006) Prospects for a Kantian machine. IEEE Intell Syst 21(4):46–51

  • Roff HM (2014) The strategic robot problem: Lethal autonomous weapons in war. J Mil Ethics 13(3):211–227

  • Savulescu J, Maslen H (2015) Moral enhancement and artificial intelligence: Moral AI? In: Romportl J, Zackova E, Kelemen J (eds) Beyond artificial intelligence: The disappearing human-machine divide. Springer, Cham, pp 79–95

  • Scharre P (2018) Army of none. WW Norton & Co, New York and London

  • Shapiro SC (1992) Artificial intelligence. In: Shapiro SC (ed) Encyclopedia of artificial intelligence, 2nd edn. Wiley, New York, pp 54–57

  • Scheutz M (2017) The case for explicit ethical agents. AI Mag 38(4):57–64

  • Skilbeck A (2014) The personal and impersonal in moral education. In: Lewin D, Guilherme A, White M (eds) New perspectives on philosophy of education: Ethics, politics and religion. Bloomsbury Academic, London, pp 59–76

  • Smith M (1991) Realism. In: Singer P (ed) A companion to ethics. Blackwell Reference, Cambridge, pp 399–410

  • Sparrow R (2004) The Turing triage test. Ethics Inf Technol 6(4):203–213

  • Sparrow R (2007) Killer robots. J Appl Philos 24(1):62–77

  • Sparrow R (2016) Robots and respect: Assessing the case against autonomous weapon systems. Ethics Int Aff 30(1):93–116

  • Taylor C (2014) Moral thought and ethical individuality. In: Taylor C, Graefe M (eds) A sense for humanity: The ethical thought of Raimond Gaita. Monash University Publishing, Clayton, pp 141–151

  • van den Hoven J, Lokhorst GJ (2002) Deontic logic and computer-supported ethics. Metaphilosophy 33(3):376–386

  • Wallach W, Allen C (2009) Moral machines: Teaching robots right from wrong. Oxford University Press, Oxford

  • Walzer M (2015) Just and unjust wars: A moral argument with historical illustrations, 5th edn. Basic Books, New York

  • Whitby B (2018) On computable morality: An examination of machines as moral advisors. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 138–150

  • Winch P (1980) The Presidential address: ‘Eine Einstellung zur Seele’. Proc Aristot Soc 81:1–15

  • Winfield AFT, Blum C, Liu W (2014) Towards an ethical robot: Internal models, consequences and ethical action selection. In: Mistry M, Leonard A, Witkowski M, Melhuish C (eds) Advances in autonomous robotics systems. Springer, Cham, pp 85–96

  • Wittgenstein L (1989) Philosophical investigations, 3rd edn. Basil Blackwell, Oxford

Acknowledgements

I would like to acknowledge helpful conversations with David Simpson over the course of drafting this manuscript. I am grateful to Joshua Hatherley for his assistance with bibliographic research. This paper also owes an obvious, and large, debt to the work of Raimond Gaita, and also to his personal example when he taught me in several seminars at the University of Melbourne. It is a cause of some discomfort to me that, besides bringing his arguments to bear on the case of AI, I am not sure how much I have added to them. Nevertheless, I hope that, at the very least, by bringing them to the attention of a larger audience and demonstrating the extent to which they illuminate the central questions of AI ethics, I will encourage abler minds to engage with, and perhaps extend, his work on the personal in ethics, the role of remorse, the concept of the person, and the nature of ethics itself.

Funding

None.

Author information

Corresponding author

Correspondence to Robert Sparrow.

Ethics declarations

Conflicts of interest/Competing interests

None.

About this article

Cite this article

Sparrow, R. Why machines cannot be moral. AI & Soc 36, 685–693 (2021). https://doi.org/10.1007/s00146-020-01132-6

Keywords

  • Artificial intelligence
  • Ethics
  • Machine ethics
  • Moral authority
  • Raimond Gaita