Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations

Abstract

Trolley cases are widely considered central to the ethics of autonomous vehicles. We caution against this view by identifying four problems. (1) Given technical limitations, trolley cases rest on assumptions that are in tension with one another. (2) Trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. (3) Trolley cases seem to demand a moral answer when a political answer is called for. (4) Trolley cases might be epistemically problematic in several ways. As a positive proposal, we illustrate how ethical challenges arise from mundane driving situations. We argue that mundane situations are relevant because of the specificity they require and the scale they exhibit. We then illustrate some of the ethical challenges that arise from optimizing for safety, balancing safety against other values such as mobility, and adjusting to the incentives of legal frameworks.


Notes

  1.

    We define an “autonomous vehicle” as a motorized ground vehicle with the capability of highly or fully automated driving, what is sometimes called automation levels 4 and 5 (SAE International 2016).

  2.

    Many contributions do not distinguish as strictly as we do between trolley cases and trolley problems. However, we think this distinction is important. We are grateful to anonymous reviewers for their encouragement to make this distinction clear upfront.

  3.

    We do not endorse this claim. For many authors, however, this claim motivates treating trolley cases as relevant to the ethics of autonomous vehicles (see Nyholm and Smids (2016) for an overview).

  4.

    The objections that we discuss in this paper largely supplement objections discussed by Goodall (2016) and Nyholm and Smids (2016). Trolley cases, according to Goodall (2016), are problematic in that they (1) pose a false dilemma (in fact, there are more than two options), (2) assume certainty over outcomes, (3) assume certainty over the environment, and (4) are in fact rare. Nyholm and Smids (2016) argue that trolley cases are not perfectly analogous to the situations of autonomous vehicles. This is because (5) the decision problem is different (e.g. with respect to when a decision is taken, and the number of agents involved), (6) the issues of moral and legal responsibility are in fact relevant but neglected by trolley cases, and (7) decisions in fact need to be made under uncertainty. We take on board the points about uncertainty, that is, points (2), (3), and (7), in our fourth objection. We also agree with points (5) and (6) as raised by Nyholm and Smids (2016) but do not pursue them in this paper (but see note 5). Our positive proposal on mundane situations incorporates the proposal made by Goodall (2016) on the importance of risk management, but it also extends that proposal in that we highlight considerations beyond risk and safety.

  5.

    Specifically, we are not aware of a full discussion elsewhere of our first objection (that given technical restrictions, trolley cases rest on assumptions that are in tension with one another) and our second objection (that trolley cases cohere with a certain design framework). Our third objection (that trolley cases look for a moral answer when a political answer is called for) can be seen as a version of point (5) made by Nyholm and Smids (2016). However, we focus on a specific instance of their point, highlighting a difference between moral and political philosophy. Our fourth objection (that trolley cases might be epistemically problematic) combines objections made by many others (Elster 2011; Fried 2012; Wood 2013; Kagan 2015).

  6.

    We restrict the discussion to collisions, given the context of autonomous vehicles. A more general definition would instead be formulated in terms of distributions of harms and benefits.

  7.

    Nyholm and Smids (2016) argue that each of these three assumptions is not met in the reality faced by autonomous vehicles and that trolley cases are therefore not a good analogy.

  8.

    For a helpful overview see Nyholm and Smids (2016: 1280).

  9.

    In the literature on autonomous vehicles, Lin (2014) argues that trolley problems are “meant to simplify the issues in order to isolate and study certain variables.”

  10.

    See http://moralmachine.mit.edu. For another example see Frison et al. (2016).

  11.

    Trolley cases in some ways resemble the party game of “would you rather” questions. Some of the reasons for which “would you rather” questions exert a certain attraction might also explain why trolley cases are captivating.

  12.

    However, it is an open question to what extent this situation – a paradigmatic instance of a collective action problem in which individual incentives lead to an outcome that is worse overall – is actually typical of politics.

  13.

    That trolley cases are not contradictory in this way is supported by the fact that they are clearly conceivable. Their conceivability suggests that trolley cases are epistemically possible.

  14.

    Because this scenario raises not only the question of to whom the harms accrue but also the question of whether harms should be minimized, it fails to isolate the two values. An intuition about it is hence no clear indication of the relative importance of the two values.

  15.

    Some argue against trolley cases on the basis that they are rare (Goodall 2016). We do not pursue this objection. Even if the situations that give rise to trolley cases are rare, they will occur with certainty over the long run. Moreover, regardless of whether these situations in fact occur, autonomous vehicles still need to be programmed to behave in one way or another to prepare for the eventuality of unavoidable collisions. In short, the low frequency of trolley cases is, as such, not yet an argument against their relevance for the ethics of autonomous vehicles.

  16.

    It should be noted that Nyholm and Smids (2016) discuss features of decision-making situations – such as the number of agents involved and the information available. They do not discuss different design approaches in artificial intelligence.

  17.

    Despite these limitations, trolley cases here play their role as a didactic device.

  18.

    In this way the methodology of trolley cases differs significantly from that of trolley problems, which aims at the formulation of moral principles. We thank an anonymous referee for pressing us to make this clear.

  19.

    Nyholm and Smids (2016: 1282) make a similar point in that they identify as a disanalogy between autonomous vehicles and the trolley problem the fact that the former is a decision situation involving “multiple stakeholders” whereas in the latter “the morally relevant decision-making is done by a single agent.” However, their objection is much more general: they do not highlight the distinction between moral and political philosophical approaches.

  20.

    Judith Jarvis Thomson reminds us that “we should be troubled by the fact that so many people have tried, for so many years—well over a quarter of a century by now—and come up wanting.” (2008)

  21.

    This question is the starting point of Gogoll and Müller (2017) for their discussion of whether ethics settings should be mandatory or personal.

  22.

    Nevertheless, trolley cases can play a useful role as a didactic device by illustrating the issue of the ethics of user settings (Millar 2014; Gogoll and Müller 2017; Millar 2017).

  23.

    Other examples of relevant values are values of social justice, such as sustainability, privacy, and equality of access (Mladenovic and McPherson 2016).

  24.

    In most jurisdictions, pedestrians bear greater responsibility when crossing the street outside of dedicated crossings. Unlike at crosswalks, drivers might not have to yield to pedestrians there.

References

  1. Achenbach J (2015) Driverless cars are colliding with the creepy trolley problem. Washington Post, December 29, 2015. https://www.washingtonpost.com/news/innovations/wp/2015/12/29/will-self-driving-cars-ever-solve-the-famous-and-creepy-trolley-problem/. Accessed 31 October 2017

  2. Bjorndahl A, London AJ, Zollman KJS (2017) Kantian decision making under uncertainty: dignity, price, and consistency. Philos Impr 17

  3. Bonnefon JF, Shariff A, Rahwan I (2016) The social dilemma of autonomous vehicles. Science 352:1573–1576

  4. Borenstein J, Herkert JR, Miller KW (2017) Self-driving cars and engineering ethics: the need for a system level analysis. Sci Eng Ethics. https://doi.org/10.1007/s11948-017-0006-0

  5. Casey BJ (2017) Amoral machines, or: how roboticists can learn to stop worrying and love the law. Northwest U Law Rev 11:231–250

  6. Coughenour C, Clark S, Singh A, Claw E, Abelar J, Huebner J (2017) Examining racial bias as a potential factor in pedestrian crashes. Accid Anal Prev 98:96–100. https://doi.org/10.1016/j.aap.2016.09.031

  7. Crane D, Logue K, Pilz B (2017) A survey of legal issues arising from the deployment of autonomous and connected vehicles. Mich Telecommun Technol Law Rev 23:191–320

  8. Elster J (2011) How outlandish can imaginary cases be? J Appl Philos 28:241–258. https://doi.org/10.1111/j.1468-5930.2011.00531.x

  9. Etzioni A, Etzioni O (2017) Incorporating ethics into artificial intelligence. J Ethics 21:403–418. https://doi.org/10.1007/s10892-017-9252-2

  10. Fleetwood J (2017) Public health, ethics, and autonomous vehicles. Am J Public Health 107:532–537. https://doi.org/10.2105/AJPH.2016.303628

  11. Foot P (1967) The problem of abortion and the doctrine of double effect. Oxford Rev 5:5–15

  12. Fraichard T, Asama H (2004) Inevitable collision states – a step towards safer robots? Adv Robot 18:1001–1024. https://doi.org/10.1163/1568553042674662

  13. Fried BH (2012) What does matter? The case for killing the trolley problem (or letting it die). Philos Q 62:505–529. https://doi.org/10.1111/j.1467-9213.2012.00061.x

  14. Frison AK, Wintersberger P, Riener A (2016) First person trolley problem: evaluation of drivers’ ethical decisions in a driving simulator. In: Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp 117–122. https://doi.org/10.1145/3004323.3004336

  15. Goddard T, Kahn KB, Adkins A (2015) Racial bias in driver yielding behavior at crosswalks. Transp Res Part F: Traffic Psychol Behav 33:1–6. https://doi.org/10.1016/j.trf.2015.06.002

  16. Gogoll J, Müller JF (2017) Autonomous cars: in favor of a mandatory ethics setting. Sci Eng Ethics 23:681–700. https://doi.org/10.1007/s11948-016-9806-x

  17. Goodall NJ (2016) Away from trolley problems and toward risk management. Appl Artif Intell 30(8):810–821. https://doi.org/10.1080/08839514.2016.1229922

  18. Greene JD, Sommerville RB, Nystrom LE, Darley JM, Cohen JD (2001) An fMRI investigation of emotional engagement in moral judgment. Science 293:2105–2108. https://doi.org/10.1126/science.1062872

  19. Greene JD, Cushman FA, Stewart LE, Lowenberg K, Nystrom LE, Cohen JD (2009) Pushing moral buttons: the interaction between personal force and intention in moral judgment. Cognition 111:364–371. https://doi.org/10.1016/j.cognition.2009.02.001

  20. Hansson SO (2013) The ethics of risk: ethical analysis in an uncertain world. Palgrave Macmillan, New York

  21. Jackson F, Smith M (2006) Absolutist moral theories and uncertainty. J Philos 103:267–283. https://doi.org/10.2307/20619943

  22. Jackson F, Smith M (2016) The implementation problem for deontology. In: Lord E, Maguire B (eds) Weighing reasons. Oxford University Press, Oxford, pp 279–292. https://doi.org/10.1093/acprof:oso/9780199315192.003.0014

  23. Kagan S (2015) Solving the trolley problem. In: Rakowski E (ed) The trolley problem mysteries. Oxford University Press, Oxford. https://doi.org/10.1093/acprof:oso/9780190247157.001.0001

  24. Kamm FM (2008) Intricate ethics: rights, responsibilities, and permissible harm. Oxford University Press, Oxford

  25. Kamm FM (2016) The trolley problem mysteries. In: Rakowski E (ed) The trolley problem mysteries. Oxford University Press, Oxford

  26. Lazar S (2018) In dubious battle: uncertainty and the ethics of killing. Philos Stud 175:859–883. https://doi.org/10.1007/s11098-017-0896-3

  27. Lazar S, Lee-Stronach C (2017) Axiological absolutism and risk. Noûs. https://doi.org/10.1111/nous.12210

  28. Lin P (2014) The robot car of tomorrow may just be programmed to hit you. WIRED. https://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/. Accessed 31 October 2017

  29. Luetge C (2017) The German ethics code for automated and connected driving. Philos Technol. https://doi.org/10.1007/s13347-017-0284-0

  30. Marchant GE, Lindor RA (2012) The coming collision between autonomous vehicles and the liability system. Santa Clara Law Rev 52:1321–1340

  31. Marcus G (2012) Moral machines. The New Yorker. https://www.newyorker.com/news/news-desk/moral-machines. Accessed 26 October 2017

  32. Millar J (2014) An ethical dilemma: when robot cars must kill, who should pick the victim? Robohub. http://robohub.org/an-ethical-dilemma-when-robot-cars-must-kill-who-should-pick-the-victim/. Accessed 31 October 2017

  33. Millar J (2017) Ethics settings for autonomous vehicles. In: Lin P, Jenkins R, Abney K (eds) Robot ethics 2.0: from autonomous cars to artificial intelligence. Oxford University Press, New York, pp 20–34

  34. Millard-Ball A (2016) Pedestrians, autonomous vehicles, and cities. J Plan Educ Res 38:6–12. https://doi.org/10.1177/0739456X16675674

  35. Mladenovic MN, McPherson T (2016) Engineering social justice into traffic control for self-driving vehicles? Sci Eng Ethics 22:1131–1149. https://doi.org/10.1007/s11948-015-9690-9

  36. Nyholm S, Smids J (2016) The ethics of accident-algorithms for self-driving cars: an applied trolley problem? Ethical Theory Moral Pract 19:1275–1289. https://doi.org/10.1007/s10677-016-9745-2

  37. Posner EA, Sunstein CR (2005) Dollars and death. Univ Chic Law Rev 72:537–598

  38. Rakowski E (2016) Introduction. In: Rakowski E (ed) The trolley problem mysteries. Oxford University Press, Oxford

  39. Rawls J (1993) Political liberalism. Columbia University Press, New York

  40. Rosenbloom T, Nemrodov D, Eliyahu AB (2006) Yielding behavior of Israeli drivers: interaction of age and sex. Percept Mot Skills 103:387–390. https://doi.org/10.2466/pms.103.2.387-390

  41. SAE International (2016) Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. http://standards.sae.org/j3016_201609/. Accessed 29 November 2017

  42. Santoni de Sio F (2017) Killing by autonomous vehicles and the legal doctrine of necessity. Ethical Theory Moral Pract 20:411–429. https://doi.org/10.1007/s10677-017-9780-7

  43. Schneider RJ, Sanders RL (2015) Pedestrian safety practitioners’ perspectives of driver yielding behavior across North America. Transp Res Rec: J Transp Res Board 2519:39–50. https://doi.org/10.3141/2519-05

  44. Shariff A, Bonnefon JF, Rahwan I (2016) Whose life should your car save? The New York Times. https://www.nytimes.com/2016/11/06/opinion/sunday/whose-life-should-your-car-save.html. Accessed 27 October 2017

  45. Tenenbaum S (2017) Action, deontology, and risk: against the multiplicative model. Ethics 127:674–707. https://doi.org/10.1086/690072

  46. Thomson JJ (1976) Killing, letting die, and the trolley problem. Monist 59:204–217. https://doi.org/10.5840/monist197659224

  47. Thomson JJ (1985) The trolley problem. Yale Law J 94:1395. https://doi.org/10.2307/796133

  48. Thomson JJ (2008) Turning the trolley. Philos Public Aff 36:359–374. https://doi.org/10.1111/j.1088-4963.2008.00144.x

  49. Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford

  50. Wood A (2013) Humanity as end in itself. In: Scheffler S (ed) On what matters, vol 2. Oxford University Press, Oxford, pp 58–82


Acknowledgements

For their helpful comments and discussions, I am grateful to Chris Gerdes, Geoff Keeling, Patrick Lin, Jason Millar, Jesse Saloom, and two anonymous referees of this journal.


Corresponding author

Correspondence to Johannes Himmelreich.


About this article


Cite this article

Himmelreich, J. Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations. Ethic Theory Moral Prac 21, 669–684 (2018). https://doi.org/10.1007/s10677-018-9896-4


Keywords

  • Applied ethics
  • Ethics of autonomous vehicles
  • Ethics of technology
  • Driverless cars
  • Ethics of self-driving cars
  • Methodology
  • Thought experiments
  • Autonomous vehicles
  • Self-driving cars
  • Trolley problem