
Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for Autonomous Driving Vehicles

  • Anja K. Faulhaber
  • Anke Dittmer
  • Felix Blind
  • Maximilian A. Wächter
  • Silja Timm
  • Leon R. Sütfeld
  • Achim Stephan
  • Gordon Pipa
  • Peter König
Original Paper

Abstract

Ethical thought experiments such as the trolley dilemma have been investigated extensively in the past, showing that humans act in utilitarian ways, trying to cause as little overall damage as possible. These trolley dilemmas have gained renewed attention over the past few years, especially due to the necessity of implementing moral decisions in autonomous driving vehicles (ADVs). We conducted a set of experiments in which participants experienced modified trolley dilemmas as drivers in virtual reality environments. Participants had to choose between driving in one of two lanes where different obstacles came into view. Ultimately, participants had to decide which of the obstacles they would crash into. Obstacles included a variety of human-like avatars of different ages and group sizes. Furthermore, the influence of sidewalks as potential safe harbors and a condition involving self-sacrifice were tested. Results showed that participants, in general, decided in a utilitarian manner, sparing the largest possible number of avatars, with only limited influence of the other variables. Based on these findings, which are in line with the utilitarian approach to moral decision making, we argue for an obligatory ethics setting to be implemented in ADVs.

Keywords

Autonomous driving · Utilitarianism · Trolley problem · Moral dilemma

Notes

Acknowledgements

The authors would like to thank all study project members: Aalia Nosheen, Max Räuker, Juhee Jang, Simeon Kraev, Carmen Meixner, Lasse T. Bergmann and Larissa Schlicht. This study is complemented by a philosophical study with a broader scope (Larissa Schlicht, Carmen Meixner, Lasse T. Bergmann). The work in this paper was supported by the European Union through the H2020-FETPROACT-2014 grant SEP-210141273, ID: 641321, socializing sensorimotor contingencies (socSMCs), to PK.

Author Contributions

This study was planned and conducted in an interdisciplinary study project supervised by Prof. Dr. Peter König, Prof. Dr. Gordon Pipa, and Prof. Dr. Achim Stephan. Maximilian Alexander Wächter, Anja Faulhaber, and Silja Timm shaped the experimental design to a large degree. Leon René Sütfeld had a leading role in the implementation of the VR study design in Unity. Anke Dittmer and Felix Blind contributed to VR implementation. Anke Dittmer, Felix Blind, Silja Timm, and Maximilian Alexander Wächter contributed to the data acquisition, analysis, and writing process. Anja Faulhaber contributed to the data acquisition and the writing process.

Financial Interests

This publication presents part of the results of the study project “Moral decisions in the interaction of humans and a car driving assistant”. Such study projects are an obligatory component of the master’s degree in cognitive science at the University of Osnabrück. It was supervised by Prof. Dr. Peter König, Prof. Dr. Gordon Pipa, and Prof. Dr. Achim Stephan. Funders had no role in the study’s design, data collection and analysis, the decision to publish, or the preparation of the manuscript.


Copyright information

© Springer Science+Business Media B.V., part of Springer Nature 2018

Authors and Affiliations

  • Anja K. Faulhaber (1)
  • Anke Dittmer (1)
  • Felix Blind (1)
  • Maximilian A. Wächter (1)
  • Silja Timm (1)
  • Leon R. Sütfeld (1)
  • Achim Stephan (1)
  • Gordon Pipa (1)
  • Peter König (1, 2)

  1. Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
  2. Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
