Abstract
Self-driving cars currently face numerous technological problems that must be solved before the cars can be widely used. However, they also face ethical problems, among which the question of crash-optimization algorithms is the most prominently discussed. Reviewing current debates about whether we should use the ethics of the Trolley Dilemma as a guide towards designing self-driving cars will provide us with insights about what exactly ethical research does. It will result in the view that although we need the ethics of the Trolley Dilemma as important input for self-driving cars, the route towards simply implementing it into automated cars is blocked.
Notes
As of November 2016; see http://perma.cc/F9N2-N6QS.
This claim can be understood in more than one way, as one reviewer rightly pointed out. One reading would see a triviality in claiming that usefulness depends on use. Usefulness, however, is different from use, since the former, but not the latter, is an evaluative term. Here I want to express the idea that, particularly with regard to SDCs, their promise to reduce traffic fatalities depends on their widespread use, because we cannot reap the full benefits of SDCs if they are not used by everyone. The reason is that SDCs can work according to their purpose (which is: to make traffic safer) only to the extent that they interact with other SDCs on the road without the interference of humans, who happen to make mistakes. The same holds true with regard to vaccination, for example: only if a majority of people are vaccinated is the usefulness of vaccination fully realized.
Some of the voices discussed below were not raised in an academic journal, but, for example, in publicly accessible blogs. Nevertheless, they deserve serious attention since they reflect (and probably influence) the public opinion about SDCs.
This is not to say that just because humans happen to reason that way, it is the correct way, as one reviewer remarked. However, the argument pursued here goes a step further and argues that there is some reason behind the fact that humans sometimes judge and act in a utilitarian way.
Thanks to an anonymous reviewer for making this point.
One reason to assume the truth of this thesis is the fact that we are able to talk about morality, rightness and goodness, even in the face of widespread pluralism. The fact that we can and do discuss moral issues is explained by a shared concept of morality, as opposed to differing conceptions of morality. I want to thank an anonymous reviewer for pressing me to clarify this extremely important issue.
An argument somewhat similar to the one advanced by Nyholm and Smids can be found in Hevelke and Nida-Rümelin (2015). In their view, the ethics of SDCs is not accessible using the TD. They argue that a utilitarian algorithm can be ethically justified as long as the victims' identities (i.e. of those towards whom the SDC directs its trajectory) are not fixed prior to the crash and the algorithm reduces the total number of victims. In this case, SDCs realize a general decrease in the risk of dying in an accident, without being biased towards one group of persons, and without any individual having a higher risk of getting hurt by an SDC. Hevelke and Nida-Rümelin argue that this situation differs from a Trolley Dilemma, where the identities of the actual and possible victims are known; thus the ethics of SDCs should rather not make use of the Trolley Dilemma to come to an ethical conclusion about crash-optimizing algorithms. However, when a group decides about the rules guiding SDCs in their community, no one knows on which side s/he will end up when it comes to a crash: in the group consisting of five persons, as the single person on the track, or as the passenger of the SDC. This means that should we try to find a rule guiding the behavior of SDCs in advance, the TD can actually be used to describe that situation, with the same results that Hevelke and Nida-Rümelin find. See also Wolkenstein (2017b) for a discussion of this kind of argument.
That the TD is important might seem like a weak conclusion, as a reviewer has pointed out. However, since there are voices that wish to downplay the role of and research into the TD, this article has provided arguments to support the TD. Moreover, since it has revealed a few insights into what ethical research can do, it has cleared the path towards locating ethics more properly in technology development (see below).
“Widespread acceptance” has to be understood in the context of the promise that SDCs make: They can reduce traffic-related fatalities to the extent that the human factor is eliminated and only (or mostly) SDCs are on the streets.
References
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.
Brandom, R. (2016). Who will you decide to kill with your self-driving car? Let’s find out! The Verge. Retrieved August 31, 2016 from http://www.theverge.com/2016/8/9/12412190/moral-machine-trolley-problem-self-driving-car.
Brennan, J., & Jaworski, P. M. (2016). Markets without limits. Moral virtues and commercial interests. New York: Routledge.
Collingridge, D. (1980). The social control of technology. London: Frances Pinter.
Ethikkommission. (2017). Automatisiertes und vernetztes Fahren. Retrieved from http://www.bmvi.de/SharedDocs/DE/Anlage/Presse/084-dobrindt-bericht-der-ethik-kommission.pdf?__blob=publicationFile.
Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, 5, 5–15.
Greene, J. (2013). Moral tribes: Emotion, reason, and the gap between us and them. New York: Penguin.
Hevelke, A., & Nida-Rümelin, J. (2015). Selbstfahrende Autos und Trolley-Probleme: Zum Aufrechnen von Menschenleben im Falle unausweichlicher Unfälle. Jahrbuch für Wissenschaft und Ethik, 19(1), 5–23. https://doi.org/10.1515/jwiet-2015-0103.
Joyce, R. (2006). The evolution of morality. Cambridge, MA: MIT Press.
Kamm, F. (2015). The trolley problem mysteries. Oxford: Oxford University Press.
Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis, 63, 190–193.
Knobe, J. (2008). The concept of intentional action. A case study in the uses of folk psychology. In J. Knobe & S. Nichols (Eds.), Experimental philosophy (pp. 129–148). Oxford: Oxford University Press.
Lin, P. (2013, October 8). The ethics of autonomous cars. The Atlantic. Retrieved August 31, 2016 from http://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/.
Lin, P. (2014). The robot car of tomorrow may just be programmed to hit you. Wired. Retrieved from https://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/.
Lin, P. (2016). Why ethics matters for autonomous driving. In M. Maurer, J. C. Gerdes, B. Lenz & H. Winner (Eds.), Autonomous driving. Technical, legal and social aspects (pp. 69–85). Berlin: Springer.
Liu, H.-Y. (2016). Structural discrimination and autonomous vehicles: Immunity devices, trump cards and crash optimisation. In J. Seibt, M. Norskov & S. Schack Andersen (Eds.), What social robots can and should do (pp. 164–173). Amsterdam: IOS Press.
Malle, B. F. (2015). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology. https://doi.org/10.1007/s10676-015-9367-8.
Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. Paper presented at HRI '15, Portland, OR, USA.
Maurer, M., Gerdes, J. C., Lenz, B., & Winner, H. (Eds.). (2015). Autonomes Fahren. Technische, rechtliche und gesellschaftliche Aspekte. Berlin: Springer Vieweg.
Mikhail, J. (2013). Elements of moral cognition. Cambridge: Cambridge University Press.
Millar, J. (2014, June 11). An ethical dilemma: When robot cars must kill, who should pick the victim? robohub.org. Retrieved August 31, 2016 from http://robohub.org/an-ethical-dilemma-when-robot-cars-must-kill-who-should-pick-the-victim/.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679.
Navarrete, C. D., McDonald, M. M., Mott, M. L., & Asher, B. (2012). Virtual morality: Emotion and action in a simulated three-dimensional “trolley problem”. Emotion, 12(2), 364–370.
Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19(5), 1275–1289. https://doi.org/10.1007/s10677-016-9745-2.
Pan, X., & Slater, M. (2011a). Confronting a moral dilemma in virtual reality: A pilot study. In HCI 2011, the 25th BCS Conference on Human Computer Interaction, July 4–8, 2011, Newcastle Upon Tyne, 2011.
Pan, X., & Slater, M. (2011b). Computer-based video and virtual environments in the study of the role of emotions in moral behavior. In S. D’Mello, A. Graesser, B. Schuller, & J.-C. Martin (Eds.), Affective Computing and Intelligent Interaction. Fourth International Conference ACII 2011, October 9–12, 2011, Proceedings Part II, Memphis, TN, USA, 2011 (pp. 52–61).
Pan, X., & Slater, M. (2011c). Should you push the switch, and would you? An experimental study on a moral dilemma in virtual reality. Paper presented at the Moral Emotions and Intuitions (MEI), The Hague, Netherlands.
Smith, B. W. (2015, January 12). Slow down that runaway ethical trolley. CIS Blog. Retrieved August 31, 2016 from https://cyberlaw.stanford.edu/blog/2015/01/slow-down-runaway-ethical-trolley.
Society of Automotive Engineers. (2016). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles (J3016_201609): SAE International.
Thierer, A. (2015, January 13). Making sure the “trolley problem” doesn’t derail life-saving innovation. The Technology Liberation Front. Retrieved August 31, 2016 from https://techliberation.com/2015/01/13/making-sure-the-trolley-problem-doesnt-derail-life-saving-innovation/.
Thierer, A. (2016). Permissionless innovation. The continuing case for comprehensive technological freedom. Revised and expanded edition. Arlington, VA: Mercatus Center at George Mason University.
Thomson, J. J. (1985). The trolley problem. Yale Law Journal, 94(6), 1395–1415.
Thomson, J. J. (2008). Turning the trolley. Philosophy and Public Affairs, 36(4), 359–374.
Wolkenstein, A. (2017a). Automation without limits. Retrieved from https://andreaswolkenstein.com/2017/07/04/automation-without-limits/.
Wolkenstein, A. (2017b). Verrechnungsverbot, Autonomie und politische Ethik. Über drei Probleme mit den ethischen Regeln für das autonome Fahren. Retrieved from https://andreaswolkenstein.net/2017/06/23/verrechnungsverbot-autonomie-und-politische-ethik-ueber-drei-probleme-mit-den-ethischen-regeln-fuer-das-autonome-fahren/.
Acknowledgments
Funding was provided by FP7 Security (Grant No. 312745).
Ethics declarations
Conflict of interest
The author declares that he has no conflict of interest.
Cite this article
Wolkenstein, A. What has the Trolley Dilemma ever done for us (and what will it do in the future)? On some recent debates about the ethics of self-driving cars. Ethics Inf Technol 20, 163–173 (2018). https://doi.org/10.1007/s10676-018-9456-6