
Science and Engineering Ethics, Volume 23, Issue 4, pp 951–967

Who Should Decide How Machines Make Morally Laden Decisions?

  • Dominic Martin
Original Paper

Abstract

Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it is facing any kind of morally laden decision? More and more, machines are making complex decisions with a considerable level of autonomy. We should be far more preoccupied with this problem than we currently are. After a series of preliminary remarks, this paper goes over four possible answers to the question raised above. First, we may claim that it is the maker of a machine that gets to decide how it will behave in morally laden scenarios. Second, we may claim that the users of a machine should decide. Third, that decision may have to be made collectively or, fourth, by other machines built for this special purpose. The paper argues that each of these approaches suffers from its own shortcomings, and it concludes by showing, among other things, which approaches should be emphasized for different types of machines, situations, and/or morally laden decisions.

Keywords

Artificial intelligence · Ethics · Self-driving car · Collective decision-making · Regulation · Public policies · Market freedom · Economic efficiency · Moral agency

Notes

Acknowledgments

The ideas behind this paper were presented at the Centre de recherche en éthique at the Université de Montréal in Spring 2016. I would like to thank the members of the Centre for their useful comments.


Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. John Molson School of Business, Concordia University, Montréal, Canada
  2. Department of Philosophy, McGill University, Montréal, Canada
