Moral dilemmas for moral machines

  • Original Research
  • Published in AI and Ethics

Abstract

Autonomous systems are being developed and deployed in situations that may require some degree of ethical decision-making ability. As a result, research in machine ethics has proliferated in recent years. This work has included using moral dilemmas as validation mechanisms for implementing decision-making algorithms in ethically loaded situations. Using trolley-style problems in the context of autonomous vehicles as a case study, I argue (1) that this is a misapplication of philosophical thought experiments because (2) it fails to appreciate the purpose of moral dilemmas, and (3) this has potentially catastrophic consequences; however, (4) there are uses of moral dilemmas in machine ethics that are appropriate, and the novel situations that arise in a machine-learning context can shed some light on philosophical work in ethics.

Notes

  1. So too have attempts to codify principles for ethical AI research, though largely to little effect. See Jobin et al. [58] for a recent survey; see also LaCroix and Mohseni [66] for a discussion of the efficacy of such proposals.

  2. For example, the goal of the Dartmouth Summer Research Project on Artificial Intelligence, held in 1956 and organised by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, was stated as follows:

    The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

    Of course, these are all still open problems today [86].

  3. The individual tasks—detection, recognition, localisation, prediction, action—can be accomplished using several different methods, including regression, pattern recognition, clustering, and decision matrices, often involving a plethora of state-of-the-art machine-learning techniques. For example, support vector machines and principal component analysis may be used for pattern recognition; K-means clustering may be used to identify data in low-resolution images; and gradient boosting may be used for decision-making, depending on confidence levels for detection, classification, and prediction. Advances are continually being made in this field, with some focus on end-to-end learning; for example, Bojarski et al. [19] and Yang et al. [106] employ convolutional neural networks to train their vehicles without requiring the highly complex suite of algorithms used in traditional methods, and Kuefler et al. [64] use generative adversarial networks to train their system by mimicking human behaviour.
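
    To make this division of labour concrete, the following is a minimal, purely illustrative Python sketch of such a staged pipeline, using scikit-learn on synthetic stand-in data. Every feature, label, and variable name here is a hypothetical placeholder; real systems are vastly more complex:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.cluster import KMeans
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-in for features extracted from sensor data.
    X = rng.normal(size=(200, 8))
    y_object = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy object labels
    y_action = (X[:, 2] > 0).astype(int)            # toy brake/no-brake labels

    # Pattern-recognition stage: an SVM classifies detected objects
    # and reports a confidence level for each classification.
    recogniser = SVC(probability=True).fit(X, y_object)
    confidence = recogniser.predict_proba(X)[:, 1]

    # Clustering stage: K-means groups detections (e.g., from
    # low-resolution images) into coarse categories.
    clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)

    # Decision stage: gradient boosting maps the raw features plus the
    # upstream outputs (confidence, cluster id) to a discrete action.
    X_decision = np.column_stack([X, confidence, clusters])
    policy = GradientBoostingClassifier().fit(X_decision, y_action)
    print(policy.predict(X_decision[:5]))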

  4. In fact, full automation is not necessary for the problems described here to arise, since human reaction time is of little use in split-second moral decisions.

  5. In the last few years alone, there have been dozens of articles that refer to Philippa Foot’s 1967 paper [44] in the context of autonomous vehicles; see, for example, Allen et al. [2], Wallach and Allen [102], Pereira and Saptawijaya [84], Pereira and Saptawijaya [83], Berreby et al. [15], Danielson [33], Lin [68], Malle et al. [73], Saptawijaya and Pereira [89], Saptawijaya and Pereira [90], Bentzen [14], Bhargava and Kim [16], Casey [26], Cointe et al. [29], Greene [48], Lindner et al. [69], Santoni de Sio [88], Welsh [103], Wintersberger et al. [104], Bjørgen et al. [17], Grinbaum [51], Misselhorn [75], Pardo [82], Sommaggio and Marchiori [93], Baum et al. [13], Cunneen et al. [31], Krylov et al. [63], Sans and Casacuberta [87], Wright [105], Agrawal et al. [1], Awad et al. [9], Banks [11], Bauer [12], Etienne [40], Gordon [46], Harris [52], Lindner et al. [70], Nallur [77]. Several more articles discuss trolley problems without citing Foot, e.g., Bonnefon et al. [20], Etzioni and Etzioni [41], Lim and Taeihagh [67], Evans et al. [42]; or appear to reinvent the trolley problem (again without citing Foot), e.g., Keeling [59].

  6. There are obvious and perhaps pressing philosophical questions that arise concerning some of these classifications, but we will put those aside for now.

  7. Of course, this approach raises all the usual problems of biased data, insofar as certain individuals will be overrepresented, namely, those from countries with easy access to the internet. Falbo and LaCroix [43] argue that these considerations may exacerbate structural inequalities and mechanisms of oppression, although they also note that ‘more data’ will not necessarily fix this, since the data reflect extant inequalities in society. However, it is crucial to note that the quantity being measured in this case is not how ethical the decision is, but how closely the decision accords with the opinions of humans, on average.
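
    The distinction is easy to make concrete in code. In the sketch below (all data and names are hypothetical), a system is scored on its agreement with the modal human response; nothing in the metric tracks whether that response, or the system’s, is ethically correct:

    from collections import Counter

    # Hypothetical crowd responses to three dilemma scenarios.
    crowd = {
        "dilemma_1": ["swerve", "swerve", "stay", "swerve"],
        "dilemma_2": ["stay", "stay", "swerve", "stay"],
        "dilemma_3": ["swerve", "stay", "swerve", "swerve"],
    }

    def majority(responses):
        """Return the modal (most common) response."""
        return Counter(responses).most_common(1)[0][0]

    def crowd_agreement(system_choices):
        """Fraction of dilemmas on which the system matches the majority.
        This measures accord with average opinion, not ethical correctness."""
        hits = sum(system_choices[d] == majority(r) for d, r in crowd.items())
        return hits / len(crowd)

    system_choices = {"dilemma_1": "swerve", "dilemma_2": "stay", "dilemma_3": "stay"}
    print(crowd_agreement(system_choices))  # 2 of 3: high accord, ethics unmeasured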

  8. There is a vast metaphilosophical literature on the role and purpose of thought experiments in philosophy, which I do not have the space to treat in adequate detail here; see Brown and Fehige [23] for an overview. See also Asikainen and Hirvonen [8] for a discussion of thought experiments in the context of science.

  9. See, for example, Greene et al. [47, 49, 50], Nichols and Mallon [79], Cushman et al. [32], Schaich Borg et al. [91], Ciaramelli et al. [28], Hauser et al. [53], Koenigs et al. [61], Waldmann and Dieterich [101], Moore et al. [76].

  10. Surveys have shown that a vast majority would choose to pull the switch in this scenario [22, 78].

  11. For example, if it were determined that a utility calculus is the ‘correct’ normative theory, then we could use moral dilemmas as a validation tool. However, no such determination has been made.
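
    For illustration only: under that (unestablished) assumption, a validation test would be trivial to write, as in this toy Python sketch with hypothetical, stand-in outcomes and utilities:

    # Assumes, purely for illustration, that a utility calculus is the
    # correct normative theory; the outcomes and utilities are stand-ins.
    actions = {
        "divert": {"lives_lost": 1},
        "do_nothing": {"lives_lost": 5},
    }

    def utility(outcome):
        return -outcome["lives_lost"]  # fewer deaths, higher utility

    def utilitarian_choice(options):
        return max(options, key=lambda a: utility(options[a]))

    def validate(system_choice):
        """Pass iff the system's choice maximises the utility calculus.
        Absent an agreed normative theory, no analogous test exists."""
        return system_choice == utilitarian_choice(actions)

    print(validate("divert"))      # True
    print(validate("do_nothing"))  # False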

  12. Note that in response to the problem of enabling autonomous systems to distinguish between available choices and to choose the ‘least unethical’ one, Dennis et al. [37] suggest that the pressing question to be resolved is ‘how can we constrain the unethical actions of autonomous systems but allow them to make justifiably unethical choices under certain circumstances?’ But this presupposes that we already know what it means for a decision to be the least unethical. As is common, Dennis et al. [37] seem to understand ‘least unethical’ in terms of ‘least unacceptable’ by the standards of some subset of society.
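
    The presupposition becomes visible when one tries to implement the suggestion: any procedure for selecting the ‘least unethical’ action must be handed an unethicality ordering from outside, as in this minimal Python sketch (action names and penalty values are hypothetical):

    def least_unethical(available_actions, unethicality):
        """Select the action minimising a *given* unethicality score.
        The ordering is an input: the procedure presupposes, rather than
        determines, what 'least unethical' means."""
        return min(available_actions, key=unethicality)

    # Hypothetical scores, e.g., derived from survey acceptability data.
    penalty = {"brake": 0.1, "swerve_left": 0.7, "swerve_right": 0.9}

    print(least_unethical(["brake", "swerve_left", "swerve_right"], penalty.get))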

References

  1. Agrawal, M., Peterson, J.C., Griffiths, T.L.: Scaling up psychology via scientific regret minimization. Proc. Natl. Acad. Sci. USA 117(16), 8825–8835 (2020)

  2. Allen, C., Wallach, W., Smit, I.: Why machine ethics? In: Anderson, M., Anderson, S.L. (eds.) Machine Ethics, pp. 51–61. Cambridge University Press, Cambridge (2011)

  3. Anderson, M., Anderson, S.L.: Ethical healthcare agents. In: Sordo, M., Vaidya, S., Jain, L.C. (eds.) Advanced Computational Intelligence Paradigms in Healthcare 3. Studies in Computational Intelligence, vol. 107, pp. 233–257. Springer, Berlin (2008)

  4. Anderson, M., Anderson, S.L., Armen, C.: MedEthEx: a prototype medical ethics advisor. In: Proceedings of the Eighteenth Conference on Innovative Applications of Artificial Intelligence (IAAI-06), Boston, MA. AAAI (2006)

  5. Arkin, R.C.: Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture—Part I: motivation and philosophy. In: HRI ’08: Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, pp. 121–128. Association for Computing Machinery (2008)

  6. Arkin, R.C.: Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture—Part II: formalization for ethical control. In: Proceedings of the 2008 Conference on Artificial General Intelligence 2008: Proceedings of the First AGI Conference, pp. 51–62. IOS Press (2008)

  7. Asaro, P.: Autonomous weapons and the ethics of artificial intelligence. In: Liao, S.M. (ed.) Ethics of Artificial Intelligence, pp. 212–236. Oxford University Press, Oxford (2020)

  8. Asikainen, M.A., Hirvonen, P.E.: Thought experiments in science and in science education. In: Matthews, M.R. (ed.) International Handbook of Research in History, Philosophy and Science Teaching, pp. 1235–1256. Springer, Dordrecht (2014)

  9. Awad, E., Dsouza, S., Bonnefon, J.-F., Shariff, A., Rahwan, I.: Crowdsourcing moral machines. Commun. ACM 63(3), 48–55 (2020)

  10. Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., Rahwan, I.: The moral machine experiment. Nature 563, 59–64 (2018)

  11. Banks, J.: Good robots, bad robots: morally valenced behavior effects on perceived mind, morality, and trust. Int. J. Soc. Robot. 13, 2021–2038 (2021)

  12. Bauer, W.A.: Virtuous vs. utilitarian artificial moral agents. AI Soc. 35(1), 263–271 (2020)

  13. Baum, K., Hermanns, H., Speith, T.: Towards a framework combining machine ethics and machine explainability. In: Finkbeiner, B., Kleinberg, S. (eds.) Third International Workshop on Formal Reasoning about Causation, Responsibility, and Explanations in Science and Technology (CREST 2018), vol. 286, pp. 34–49. Electronic Proceedings in Theoretical Computer Science, EPTCS (2019)

  14. Bentzen, M.M.: The principle of double effect applied to ethical dilemmas of social robots. In: Seibt, J., Nørskov, M., Schack Andersen, S. (eds.) Frontiers in Artificial Intelligence and Applications, vol. 290, pp. 268–279. IOS Press (2016)

  15. Berreby, F., Bourgne, G., Ganascia, J.-G.: Modelling moral reasoning and ethical responsibility with logic programming. In: Davis, M., Fehnker, A., McIver, A., Voronkov, A. (eds.) LPAR 2015: Logic for Programming, Artificial Intelligence, and Reasoning. Lecture Notes in Computer Science, vol. 9450, pp. 532–548. Springer, Berlin (2015)

  16. Bhargava, V., Kim, T.W.: Autonomous vehicles and moral uncertainty. In: Lin, P., Abney, K., Jenkins, R. (eds.) Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, pp. 5–19. Oxford University Press, Oxford (2017)

  17. Bjørgen, E.P., Madsen, S., Bjørknes, T.S., Heimsæter, F.V., Håvik, R., Linderud, M., Longberg, P.-N., Dennis, L.A., Slavkovik, M.: Cake, death, and trolleys: dilemmas as benchmarks of ethical decision-making. In: Furman, J., Marchant, G., Price, H., Rossi, F. (eds.) AIES 2018—Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 23–29. Association for Computing Machinery (2018)

  18. Blake, P.R., McAuliffe, K., Corbit, J., Callaghan, T.C., Barry, O., Bowie, A., Kleutsch, L., Kramer, K.L., Ross, E., Vongsachang, H., Wrangham, R., Warneken, F.: The ontogeny of fairness in seven societies. Nature 528(7581), 258–261 (2015)

  19. Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L.D., Monfort, M., Muller, U., Zhang, J., Zhang, X., Zhao, J., Zieba, K.: End to end learning for self-driving cars. 1–9. arXiv:1604.07316 (2016)

  20. Bonnefon, J.-F., Shariff, A., Rahwan, I.: The social dilemma of autonomous vehicles. Science 352(6293), 1573–1576 (2016)

  21. Bonnemains, V., Saurel, C., Tessier, C.: Embedded ethics: some technical and ethical challenges. Ethics Inf. Technol. 20, 41–58 (2018)

  22. Bourget, D., Chalmers, D.J.: What do philosophers believe? Philos. Stud. 170, 465–500 (2014)

  23. Brown, J.R., Fehige, Y.: Thought experiments. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2019 edition (2019)

  24. Brun, G.: Thought experiments in ethics. In: Stuart, M.T., Fehige, Y., Brown, J.R. (eds.) The Routledge Companion to Thought Experiments, pp. 195–210. Routledge, London and New York (2018)

  25. Bughin, J., Seong, J., Manyika, J., Chui, M., Joshi, R.: Notes from the AI Frontier: Modeling the Impact of AI on the World Economy. McKinsey Global Institute, New York (2019)

  26. Casey, B.: Amoral machines, or: how roboticists can learn to stop worrying and love the law. Northwest. Univ. Law Rev. 111(5), 1347–1366 (2017)

  27. Christian, B.: The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company, New York (2020)

  28. Ciaramelli, E., Muccioli, M., Ladavas, E., di Pellegrino, G.: Selective deficit in personal moral judgment following damage to ventromedial prefrontal cortex. Soc. Cognit. Affect. Neurosci. 2(2), 84–92 (2007)

  29. Cointe, N., Bonnet, G., Boissier, O.: Jugement éthique dans le processus de décision d’un agent BDI [Ethical judgement in the decision-making process of a BDI agent]. Revue d’Intelligence Artificielle 31(4), 471–499 (2017)

  30. Conti, A., Azzalini, E., Amici, C., Cappellini, V., Faglia, R., Delbon, P.: An ethical reflection on the application of cyber technologies in the field of healthcare. In: Ferraresi, C., Quaglia, G. (eds.) Advances in Service and Industrial Robotics. Proceedings of the 26th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2017, Volume 49 of Mechanisms and Machine Science, pp. 870–876. Springer, Cham (2017)

  31. Cunneen, M., Mullins, M., Murphy, F., Gaines, S.: Artificial driving intelligence and moral agency: examining the decision ontology of unavoidable road traffic accidents through the prism of the trolley dilemma. Appl. Artif. Intell. 33(3), 267–293 (2019)

  32. Cushman, F., Young, L., Hauser, M.: The role of conscious reasoning and intuition in moral judgment: testing three principles of harm. Psychol. Sci. 17(12), 1082–1089 (2006)

  33. Danielson, P.: Surprising judgments about robot drivers: experiments on raising expectations and blaming humans. Etikk i praksis. Nordic J. Appl. Ethics 9(1), 73–86 (2015)

  34. Dennett, D.C.: The milk of human intentionality. Behav. Brain Sci. 3, 428–430 (1980)

  35. Dennett, D.C.: Consciousness Explained. Little, Brown and Co., Boston (1991)

  36. Dennett, D.C.: Intuition Pumps: and Other Tools for Thinking. W. W. Norton & Company, New York and London (2013)

  37. Dennis, L., Fisher, M., Slavkovik, M., Webster, M.: Formal verification of ethical choices in autonomous systems. Robot. Auton. Syst. 77, 1–14 (2016)

  38. Döring, N., Rohangis Mohseni, M., Walter, R.: Design, use, and effects of sex dolls and sex robots: scoping review. J. Med. Internet Res. 22(7), e18551 (2020)

  39. Eichenberg, C., Khamis, M., Hübner, L.: The attitudes of therapists and physicians on the use of sex robots in sexual therapy: online survey and interview study. J. Med. Internet Res. 21(8), e13853 (2019)

  40. Etienne, H.: When AI ethics goes astray: a case study of autonomous vehicles. Soc. Sci. Comput. Rev. (2020). https://doi.org/10.1177/0894439320906508

  41. Etzioni, A., Etzioni, O.: Incorporating ethics into artificial intelligence. J. Ethics 21(4), 403–418 (2017)

  42. Evans, K., de Moura, N., Chauvier, S., Chatila, R., Dogan, E.: Ethical decision making in autonomous vehicles: the AV ethics project. Sci. Eng. Ethics 26(6), 3285–3312 (2020)

  43. Falbo, A., LaCroix, T.: Est-ce que vous compute? Code-switching, cultural identity, and AI. 1–19. arXiv:2112.08256 (2021)

  44. Foot, P.: The problem of abortion and the doctrine of double effect. Oxford Rev. 5, 5–15 (1967)

  45. Gettier, E.L.: Is justified true belief knowledge? Analysis 23(6), 121–123 (1963)

  46. Gordon, J.-S.: Building moral robots: ethical pitfalls and challenges. Sci. Eng. Ethics 26(1), 141–157 (2020)

  47. Greene, J., Morelli, S., Lowenberg, K., Nystrom, L., Cohen, J.: Cognitive load selectively interferes with utilitarian moral judgment. Cognition 107(3), 1144–1154 (2008)

  48. Greene, J.D.: The rat-a-gorical imperative: moral intuition and the limits of affective learning. Cognition 167, 66–77 (2017)

  49. Greene, J.D., Nystrom, L.E., Engell, A.D., Darley, J.M., Cohen, J.D.: The neural bases of cognitive conflict and control in moral judgment. Neuron 44(2), 389–400 (2004)

  50. Greene, J.D., Sommerville, R.B., Nystrom, L.E., Darley, J.M., Cohen, J.D.: An fMRI investigation of emotional engagement in moral judgment. Science 293(5537), 2105–2108 (2001)

  51. Grinbaum, A.: Chance as a value for artificial intelligence. J. Responsib. Innov. 5(3), 353–360 (2018)

  52. Harris, J.: The immoral machine. Camb. Q. Healthc. Ethics 29(1), 71–79 (2020)

  53. Hauser, M., Cushman, F., Young, L., Jin, R.K., Mikhail, J.: A dissociation between moral judgments and justifications. Mind Lang. 22(1), 1–21 (2007)

  54. Headleand, C.J., Teahan, W.J., Ap Cenydd, L.: Sexbots: a case for artificial ethical agents. Connect. Sci. 32(2), 204–221 (2020)

  55. Hellström, T.: On the moral responsibility of military robots. Ethics Inf. Technol. 15(2), 99–107 (2013)

  56. Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., McElreath, R.: In search of homo economicus: behavioral experiments in 15 small-scale societies. Am. Econ. Rev. 91(2), 73–78 (2001)

  57. House, B.R., Silk, J.B., Henrich, J., Barrett, H.C., Scelza, B.A., Boyette, A.H., Hewlett, B.S., McElreath, R., Laurence, S.: Ontogeny of prosocial behavior across diverse societies. Proc. Natl. Acad. Sci. USA 110(36), 14586–14591 (2013)

  58. Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389–399 (2019)

  59. Keeling, G.: Legal necessity, Pareto efficiency and justified killing in autonomous vehicle collisions. Ethical Theory Moral Pract. 21, 413–427 (2018)

  60. Kim, R., Kleiman-Weiner, M., Abeliuk, A., Awad, E., Dsouza, S., Tenenbaum, J.B., Rahwan, I.: A computational model of commonsense moral decision making. In: Furman, J., Marchant, G., Price, H., Rossi, F. (eds.) AIES 2018—Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 197–203. Association for Computing Machinery (2018)

  61. Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., Damasio, A.: Damage to the prefrontal cortex increases utilitarian moral judgements. Nature 446(7138), 908–911 (2007)

  62. Krishnan, A.: Killer Robots: Legality and Ethicality of Autonomous Weapons. Ashgate, Surrey (2009)

  63. Krylov, N.N., Panova, Y.L., Alekberzade, A.V.: Artificial morality for artificial intelligence. Hist. Med. 6(4), 191–199 (2019)

  64. Kuefler, A., Morton, J., Wheeler, T., Kochenderfer, M.: Imitating driver behavior with generative adversarial networks. In: IEEE Intelligent Vehicles Symposium (IV), pp. 204–211. IEEE (2017)

  65. Kuhn, T.S.: A function for thought experiments. In: The Essential Tension: Selected Studies in Scientific Tradition and Change, pp. 240–265. University of Chicago Press, Chicago (1977)

  66. LaCroix, T., Mohseni, A.: The tragedy of the AI commons. 1–40. arXiv:2006.05203 (2020)

  67. Lim, H.S.M., Taeihagh, A.: Algorithmic decision-making in AVs: understanding ethical and technical concerns for smart cities. Sustainability 11(20), 5791 (2019)

  68. Lin, P.: Why ethics matters for autonomous cars. In: Maurer, M., Gerdes, J., Lenz, B., Winner, H. (eds.) Autonomes Fahren, pp. 69–85. Springer Vieweg, Berlin and Heidelberg (2015)

  69. Lindner, F., Bentzen, M.M., Nebel, B.: The HERA approach to morally competent robots. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6991–6997. IEEE (2017)

  70. Lindner, F., Mattmüller, R., Nebel, B.: Evaluation of the moral permissibility of action plans. Artif. Intell. 287, 103350 (2020)

  71. Lourie, N., Bras, R.L., Choi, Y.: SCRUPLES: A corpus of community ethical judgments on 32,000 real-life anecdotes. 1–16. arXiv:2008.09094 (2020)

  72. Luccioni, A., Bengio, Y.: On the morality of artificial intelligence. 1–12. arXiv:1912.11945 (2019)

  73. Malle, B.F., Scheutz, M., Arnold, T., Voiklis, J., Cusimano, C.: Sacrifice one for the good of many?: people apply different moral norms to human and robot agents. In: HRI ’15: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human–Robot Interaction, pp. 117–124. Association for Computing Machinery (2015)

  74. Mayo-Wilson, C., Zollman, K.J.S.: The computational philosophy: simulation as a core philosophical method. PhilSci Arch. 18100:1–32. http://philsci-archive.pitt.edu/18100/ (2020)

  75. Misselhorn, C.: Artificial morality. Concepts, issues and challenges. Society 55(2), 161–169 (2018)

  76. Moore, A., Clark, B., Kane, M.: Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychol. Sci. 19(6), 549–557 (2008)

  77. Nallur, V.: Landscape of machine implemented ethics. Sci. Eng. Ethics 26(5), 2381–2399 (2020)

  78. Navarrete, C.D., McDonald, M.M., Mott, M.L., Asher, B.: Virtual morality: emotion and action in a simulated three-dimensional ‘Trolley Problem’. Emotion 12(2), 364–370 (2012)

  79. Nichols, S., Mallon, R.: Moral dilemmas and moral rules. Cognition 100(3), 530–542 (2005)

  80. Noothigattu, R., Gaikwad, S., Awad, E., Dsouza, S., Rahwan, I., Ravikumar, P., Procaccia, A.D.: A voting-based system for ethical decision making. In: The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pp. 1587–1594. Association for the Advancement of Artificial Intelligence (2018)

  81. Olson, R.S., La Cava, W., Orzechowski, P., Urbanowicz, R.J., Moore, J.H.: PMLB: a large benchmark suite for machine learning evaluation and comparison. BioData Min. 10(36), 1–13 (2017)

  82. Pardo, A.M.S.: Computational thinking between philosophy and STEM–programming decision making applied to the behavior of “Moral Machines” in ethical values classroom. Revista Iberoamericana de Tecnologias del Aprendizaje 13(1), 20–29 (2018)

  83. Pereira, L.M., Saptawijaya, A.: Modeling morality with prospective logic. In: Anderson, M., Anderson, S.L. (eds.) Machine Ethics, pp. 398–421. Cambridge University Press, Cambridge (2011)

  84. Pereira, L.M., Saptawijaya, A.: Bridging two realms of machine ethics. In: White, J., Searle, R. (eds.) Rethinking Machine Ethics in the Age of Ubiquitous Technology, pp. 197–224. Information Science Reference, Hershey (2015)

  85. Raji, I.D., Bender, E.M., Paullada, A., Denton, E., Hanna, A.: AI and the everything in the whole wide world benchmark. 1–20. arXiv:2111.15366 (2021)

  86. Russell, S.: Human Compatible: Artificial Intelligence and the Problem of Control. Viking, New York (2019)

  87. Sans, A., Casacuberta, D.: Remarks on the possibility of ethical reasoning in an artificial intelligence system by means of abductive models. In: Nepomuceno-Fernández, Á., Magnani, L., Salguero-Lamillar, F.J., Barés-Gómez, C., Fontaine, M. (eds.) MBR 2018: Model-Based Reasoning in Science and Technology. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol. 49, pp. 318–333. Springer, Cham (2019)

  88. Santoni de Sio, F.: Killing by autonomous vehicles and the legal doctrine of necessity. Ethical Theory Moral Pract. 20(2), 411–429 (2017)

  89. Saptawijaya, A., Pereira, L.M.: Logic programming applied to machine ethics. In: Pereira, F., Machado, P., Costa, E., Cardoso, A. (eds.) EPIA 2015: Progress in Artificial Intelligence. Lecture Notes in Computer Science, vol. 9273, pp. 414–422. Springer, Cham (2015)

  90. Saptawijaya, A., Pereira, L.M.: Logic programming for modeling morality. Logic J. IGPL 24(4), 510–525 (2016)

  91. Schaich Borg, J., Hynes, C., Van Horn, J.J., Grafton, S., Sinnott-Armstrong, W.: Consequences, action, and intention as factors in moral judgments: an fMRI investigation. J. Cogn. Neurosci. 18(5), 803–817 (2006)

  92. Sharkey, A., Sharkey, N.: Granny and the robots: ethical issues in robot care for the elderly. Ethics Inf. Technol. 14(1), 27–40 (2012)

  93. Sommaggio, P., Marchiori, S.: Break the chains: a new way to consider machine’s moral problems. BioLaw J. 2018(3), 241–257 (2018)

  94. Sütfeld, L.R., Gast, R., König, P., Pipa, G.: Using virtual reality to assess ethical decisions in road traffic scenarios: applicability of value-of-life-based models and influences of time pressure. Front. Behav. Neurosci. 11, 122 (2017)

  95. Szczepański, M.: Economic impacts of artificial intelligence (AI). Eur. Parliam. Res. Serv. PE 637(967), 1–8 (2019)

  96. Thomson, J.J.: Killing, letting die, and the trolley problem. Monist 59, 204–217 (1976)

  97. Tonkens, R.: The case against robotic warfare: a response to Arkin. J. Mil. Ethics 11(2), 149–168 (2012)

  98. Unger, P.: Living and Letting Die. Oxford University Press, Oxford (1996)

  99. Vincent, J.: Global preferences for who to save in self-driving car crashes revealed: congratulations to young people, large groups of people, and people who aren’t animals. The Verge, 24 Oct 2018. https://www.theverge.com/2018/10/24/18013392/self-driving-car-ethics-dilemma-mit-study-moral-machine-results (2018)

  100. Wakabayashi, D.: Self-driving Uber car kills pedestrian in Arizona, where robots roam. The New York Times, 19 Mar 2018. https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html (2018)

  101. Waldmann, M.R., Dieterich, J.H.: Throwing a bomb on a person versus throwing a person on a bomb: intervention myopia in moral intuitions. Psychol. Sci. 18(3), 247–253 (2007)

  102. Wallach, W., Allen, C.: Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, Oxford (2009)

  103. Welsh, S.: Ethics and Security Automata: Policy and Technical Challenges of the Robotic Use of Force. Routledge, London and New York (2017)

  104. Wintersberger, P., Frison, A.-K., Riener, A., Thakkar, S.: Do moral robots always fail? Investigating human attitudes towards ethical decisions of automated systems. In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 1438–1444. IEEE (2017)

  105. Wright, A.T.: Rightful machines and dilemmas. In: Conitzer, V., Hadfield, G., Vallor, S. (eds.) AIES ’19—Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 3–4. Association for Computing Machinery (2019)

  106. Yang, S., Wang, W., Liu, C., Deng, W., Hedrick, J.K.: Feature analysis and selection for training an end-to-end autonomous vehicle controller using deep learning approach. In: IEEE Intelligent Vehicles Symposium (IV), pp. 1033–1038. IEEE (2017)

Author information

Correspondence to Travis LaCroix.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

LaCroix, T. Moral dilemmas for moral machines. AI Ethics 2, 737–746 (2022). https://doi.org/10.1007/s43681-022-00134-y
