MIT’s moral machine project is a psychological roadblock to self-driving cars

Original Research · Published in AI and Ethics

A Correction to this article was published on 02 November 2020

Abstract

In the moral machine project, participants are asked to form judgments about the well-known trolley example. The project is intended to serve as a starting point for public discussion that would eventually lead to a solution to the social dilemma of autonomous vehicles. The dilemma is that autonomous vehicles should, morally, be programmed to maximize the number of lives saved in trolley-style dilemmas, yet consumers will only purchase autonomous vehicles that are programmed to favor passenger safety in such dilemmas. We argue that the project is seriously misguided. There are relevant variants of the trolley example to which the project’s participants are never exposed. These variants make clear that the morally correct way to program autonomous vehicles is not at odds with what consumers will purchase. The project is hugely popular and dominates public discussion of this issue. We show that, ironically, the project itself is largely responsible for the dilemma.

Notes

  1. For representative papers, please see [1,2,3,11,14].

  2. This example is inspired by one used in Lin [9].

  3. The literature covering these issues is surveyed in Chapter 2 of Machery [11].

  4. See Petrinovich and O’Neill [15], Lanteri et al. [7], Lombrozo [10], Wiegmann et al. [18], and Liao et al. [8].

  5. In addition to the work discussed in the previous footnote, see Nadelhoffer and Feltz [12], Cikara et al. [5], Strohminger et al. [16], and Pastötter et al. [14].

References

  1. Shariff, A., Bonnefon, J.-F., Rahwan, I.: Psychological roadblocks to the adoption of self-driving vehicles. Nat. Hum. Behav. 1, 694–696 (2017)

  2. Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., Rahwan, I.: The moral machine experiment. Nature 563, 59–64 (2018)

  3. Bonnefon, J.-F., Shariff, A., Rahwan, I.: The social dilemma of autonomous vehicles. Science 352(6293), 1573–1576 (2016)

  4. Brennan, J.: The Best Moral Theory Ever: The Merits and Methodology of Moral Theorizing. Dissertation. University of Arizona (2007)

  5. Cikara, M., Farnsworth, R.A., Harris, L.T., Fiske, S.T.: On the wrong side of the trolley track: neural correlates of relative social valuation. Soc. Cogn. Affect. Neurosci. 5, 404–413 (2010)

  6. Furey, H., Hill, S., Bhatia, S.: Beyond the Code: A Philosophical Guide to Engineering Ethics. Routledge, London (2021)

  7. Lanteri, A., Chelini, C., Rizzello, S.: An experimental investigation of emotions and reasoning in the trolley problem. J. Bus. Ethics 83, 789–804 (2008)

  8. Liao, S.M., Wiegmann, A., Alexander, J., Vong, G.: Putting the trolley in order: experimental philosophy and the loop case. Philos. Psychol. 25, 661–671 (2012)

  9. Lin, P.: The ethical dilemma of self-driving cars [Video file]. Retrieved from https://www.ted.com/talks/patrick_lin_the_ethical_dilemma_of_self_driving_cars?language=en#t-140814 (2015)

  10. Lombrozo, T.: The role of moral commitments in moral judgment. Cogn. Sci. 33, 273–286 (2009)

  11. Machery, E.: Philosophy within its Proper Bounds. Oxford University Press, Oxford (2017)

  12. Nadelhoffer, T., Feltz, A.: The actor–observer bias and moral intuitions: adding fuel to Sinnott-Armstrong’s fire. Neuroethics 1, 133–144 (2008)

  13. Noothigattu, R., Gaikwad, S., Awad, E., Dsouza, S., Rahwan, I., Ravikumar, P., Procaccia, A.D.: A voting-based system for ethical decision making. arXiv preprint (2017)

  14. Pastötter, B., Gleixner, S., Neuhauser, T., Bäuml, K.H.T.: To push or not to push? Affective influences on moral judgment depend on decision frame. Cognition 126, 373–377 (2013)

  15. Petrinovich, L., O’Neill, P.: Influence of wording and framing effects on moral intuitions. Ethol. Sociobiol. 17, 145–171 (1996)

  16. Strohminger, N., Lewis, R.L., Meyer, D.E.: Divergent effects of different positive emotions on moral judgment. Cognition 119(2), 295–300 (2011)

  17. Temkin, L.: Rethinking the Good: Moral Ideals and the Nature of Practical Reasoning. Oxford University Press, Oxford (2012)

  18. Wiegmann, A., Okan, J., Nagel, J.: Order effects in moral judgment. Philos. Psychol. 25, 813–836 (2012)

Funding

This material is based upon work supported by the National Science Foundation under Grant No. SES–1734521.

Author information

Corresponding author

Correspondence to Scott Hill.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: a funding note had accidentally been omitted from the original publication. The missing funding note is given here: “This material is based upon work supported by the National Science Foundation under Grant No. SES–1734521.” The original article has been corrected.

Cite this article

Furey, H., Hill, S. MIT’s moral machine project is a psychological roadblock to self-driving cars. AI Ethics 1, 151–155 (2021). https://doi.org/10.1007/s43681-020-00018-z
