Abstract
Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it is facing any kind of morally laden decision? More and more, machines are making complex decisions with a considerable level of autonomy. We should be much more preoccupied with this problem than we currently are. After a series of preliminary remarks, this paper will go over four possible answers to the question raised above. First, we may claim that it is the maker of a machine that gets to decide how it will behave in morally laden scenarios. Second, we may claim that the users of a machine should decide. Third, that decision may have to be made collectively or, fourth, by other machines built for this special purpose. The paper argues that each of these approaches suffers from its own shortcomings, and it concludes by showing, among other things, which approaches should be emphasized for different types of machines, situations, and/or morally laden decisions.
Notes
Providing a definition of intelligence is a difficult problem from a philosophical perspective, but it may be useful to point out that one common conception of intelligence relies on the notion of rationality. What is more, it is common in decision theory or rational choice theory to draw a distinction between different forms of rationality: practical rationality (choosing the right action to satisfy one's desires), volitional rationality (forming the right desires) and epistemic rationality (forming the right beliefs). Each of these forms of rationality can be seen as a form of intelligence. Interestingly, it is more common in the machine learning literature, as well as in the more general AI literature, to define intelligence as a form of practical rationality. See, for instance, the work of Orseau and Ring (2012) and Legg (2008). But we may wonder if this definition is sufficient and if we should not take into account other dimensions of rationality, such as the capacity to form proper desires and beliefs.
For the present purposes, I define autonomy as the ability of a system to perform a task without real-time human intervention. See Etzioni and Etzioni (2016, 149) for a similar definition.
Such as the ‘refurbished’ Watson system created by IBM that won first place at the game Jeopardy! in 2011. The system is now used as a clinical decision support system, among other things (Gantenbein 2014).
See also Kate Crawford (2016) for other cases of discrimination performed by computer algorithms. For instance, software used to assess the risk of recidivism in criminal defendants was twice as likely to mistakenly flag black defendants as being at higher risk of committing a future crime. Another study found that women were less likely than men to be shown ads on Google for highly paid jobs.
Drawing clear limits is likely to require developing some form of typology of morally laden decisions, which is not a trivial issue from a philosophical perspective. Another related issue goes as follows: how does a machine decide that it is faced with a morally laden decision? This is not a trivial issue either, from both an AI research and a philosophical perspective. Presumably, one needs to establish some form of typology of moral decisions before building a system that can operate on this typology. But some cases are more straightforward than others. When casualties may occur, for instance, a decision is likely to be morally laden, and a machine such as a self-driving car can be designed to identify these situations. A machine can also be designed to identify, to some extent, the other situations mentioned above (risks to physical and/or mental integrity, the destruction of buildings or infrastructure). But this list is by no means exhaustive.
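For concreteness, here is a minimal sketch, in Python, of how such a typology could be operationalized as a rule-based filter. Everything in it is hypothetical (the class, the field names, the categories drawn from the short list above); a real system would need a far richer perception layer and a far more complete typology.

```python
from dataclasses import dataclass

# Hypothetical situation descriptor: every name here is illustrative,
# not drawn from any real autonomous-vehicle API.
@dataclass
class Situation:
    humans_at_risk: int = 0               # people who could be injured or killed
    mental_integrity_at_risk: bool = False
    infrastructure_at_risk: bool = False

def is_morally_laden(s: Situation) -> bool:
    """Flag a situation as morally laden under the note's (non-exhaustive)
    typology: possible casualties, risks to physical or mental integrity,
    or the destruction of buildings or infrastructure."""
    return (
        s.humans_at_risk > 0
        or s.mental_integrity_at_risk
        or s.infrastructure_at_risk
    )

# A pedestrian detected in the planned trajectory triggers the flag.
print(is_morally_laden(Situation(humans_at_risk=1)))  # True
```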
See Debra Satz (2010) for an overview of contemporary arguments in favor of more market freedom.
A group of intelligent machines could also be connected together and share information on user choices. It seems likely that systems like these will be developed in the future. This option potentially mixes different approaches, depending on how these machines use the information they share. I will say more about these mixed options when I discuss the fourth approach: letting other machines decide.
Given that the two options in the trolley problem present a dilemma between deontological and consequentialist modes of moral reasoning (a consequentialist is more inclined to divert the trolley to spare as many lives as possible, which promotes the best consequences, while a deontological thinker would be more sensitive to the value of the act of diverting the trolley, which involves killing the man on the side track), the usual joke is to claim that users of self-driving cars should have access to a deontological/consequentialist configuration setting. Think of it as the 'balance' or 'fader' control on a car radio. But of course, this is just a joke. Moral configuration settings do not have to be that simplistic.
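Taken literally, the joke amounts to a single blending parameter. The sketch below (in Python, all names hypothetical) makes the fader idea concrete, and in doing so shows why it is simplistic: two rich moral traditions are reduced to a linear trade-off between two numerical scores.

```python
def blended_choice(options, deon_score, conseq_score, fader):
    """Pick the option maximizing a weighted mix of a deontological and a
    consequentialist score. fader in [0, 1]: 0 = fully deontological,
    1 = fully consequentialist. Purely illustrative."""
    assert 0.0 <= fader <= 1.0
    return max(
        options,
        key=lambda o: (1 - fader) * deon_score(o) + fader * conseq_score(o),
    )

# Trolley-style toy example: 'divert' saves more lives (higher
# consequentialist score) but involves actively killing one person
# (deontological penalty). All scores are made up for illustration.
options = ["divert", "stay"]
conseq = {"divert": 4, "stay": 1}.get   # net lives saved
deon = {"divert": -1, "stay": 0}.get    # penalty for killing as a means
print(blended_choice(options, deon, conseq, fader=0.9))  # 'divert'
print(blended_choice(options, deon, conseq, fader=0.1))  # 'stay'
```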
A related question goes as follows: is there such a thing as 'the heat of the moment' for an intelligent machine? One may be inclined to say no, since machines always make decisions at the same pace through similar processes, but this is far from certain. New and more advanced forms of AI may evolve into something similar to the human brain and use faster or slower decision systems depending on the circumstances, the latter being able to perform more thorough, but also slower, assessments (see also the next note). Intelligent machines may also rely on information available online, but be unable to access that information when they must make a quick decision. I will not address these questions directly here, for I am mostly interested in how circumstances influence human decision making, not machine decision making, but it is worth keeping in mind that machines may also make different decisions depending on how much time they have to make them.
See Daniel Kahneman (2011) for a particularly interesting account of the differences between fast and slow mental processes, as he calls them.
This may even be an advantage of some intelligent machines. Self-driving cars may be more advantageous than human-driven cars, precisely because it may be easier to decide collectively about their behavior.
This is in line with Etzioni and Etzioni's (2016, 151) suggestion that focus groups or public opinion polls could be used to determine the relevant values that should inform the behavior of intelligent machines. See also the studies by Jean-François Bonnefon et al. (2015, 2016) and an article in the MIT Technology Review (2015) for examples of this approach. The 2016 study suggests that most people think self-driving cars should minimize the total number of fatalities, even at the expense of the passengers in the car. But most people surveyed also claimed they would not buy such a car. They want a car that will protect them and their passengers before other people outside the car. Polls also raise other problems, starting with methodological questions. Who should be surveyed? How can we account for gender, age-related or cultural variations or biases in answers? How are we to use a poll result in which no clear trend can be identified?
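The last question can be made concrete: any use of poll results presupposes an aggregation rule, and that rule must itself decide what counts as a 'clear trend'. A minimal illustrative sketch in Python (the function name and the 10% margin are arbitrary assumptions, not drawn from any of the studies cited above):

```python
from collections import Counter
from typing import List, Optional

def aggregate_poll(answers: List[str], margin: float = 0.1) -> Optional[str]:
    """Return the plurality answer only if it beats the runner-up by at
    least `margin` (as a share of all answers); otherwise report that no
    clear trend can be identified."""
    if not answers:
        return None
    counts = Counter(answers).most_common()
    if len(counts) == 1:
        return counts[0][0]
    (top, n1), (_, n2) = counts[0], counts[1]
    return top if (n1 - n2) / len(answers) >= margin else None

print(aggregate_poll(["minimize deaths"] * 60 + ["protect passengers"] * 40))
# 'minimize deaths'
print(aggregate_poll(["minimize deaths"] * 51 + ["protect passengers"] * 49))
# None: no clear trend at this margin
```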
For a canonical formulation of such views, see, for instance, Milton Friedman (1962).
See Ian Carter (1995) for an argument in favor of more freedom for the sake of technological progress. Even though scientific and technological developments may have disadvantages, claims Carter, governments (and other regulating bodies) will not always be able to predict the disadvantageous outcomes of these developments, and they should therefore minimize interference during the development phase. The claim is not that developing clearly harmful technology, such as nuclear weapons, should be allowed; the risks of such technology are rather straightforward to determine. Rather, the idea is that in a situation in which clear indications of serious downside risks are so far lacking, government bans are premature. Carter suggests that we must see, in each case, if the burden of increased regulation is justified by the risk; see also de Bruin and Floridi (2016, 13).
As car lobbyists in the US pointed out every single time transport authorities tried to raise safety standards, a trend identified by Ralph Nader (1965) a long time ago.
On a potential paternalistic dimension, see also Millar (2014a).
Another proposal that may be interpreted in different ways is the proposal that machines should "teach themselves" what to decide (Metz 2016). The proposal overlaps with the first approach if the makers of these machines have an important influence on these self-teaching mechanisms. The proposal may overlap with the second approach if the machines are sensitive to users' behaviors. The proposal may overlap with the fourth approach, or be very similar to the fourth approach, if these machines can learn how to make morally laden decisions with a high degree of autonomy.
Susskind and Susskind (2015, 280–84) discuss this idea, though they do not necessarily endorse it.
References
Ackerman, B. A. (1980). Social justice in the liberal state. New Haven: Yale University Press.
Allen, C., Wallach, W., & Smith, I. (2006). Why machine ethics? IEEE Intelligent Systems, 21(4), 12–17.
Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15.
Appiah, A. (2008). Experiments in ethics. Cambridge: Harvard University Press.
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2015). Autonomous vehicles need experimental ethics: Are we ready for utilitarian cars? arXiv. http://arxiv.org/abs/1510.03346.
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576. doi:10.1126/science.aaf2654.
Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15–31. doi:10.1111/1758-5899.12002.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Burn-Murdoch, J. (2013). The problem with algorithms: Magnifying misbehaviour. The Guardian. http://www.theguardian.com/news/datablog/2013/aug/14/problem-with-algorithms-magnifying-misbehaviour. Accessed Aug 2014.
Carter, I. (1995). The independent value of freedom. Ethics, 105(4), 819–845.
Crawford, K. (2016). Artificial Intelligence’s white guy problem. The New York Times, June 25. http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.
de Bruin, B., & Floridi, L. (2016). The ethics of cloud computing. Science and Engineering Ethics,. doi:10.1007/s11948-016-9759-0.
Dworkin, G. (1972). Paternalism. The Monist, 56(1), 64–84.
Dworkin, R. M. (1979). Liberalism. In Public and private morality (pp. 113–143). Cambridge: Cambridge University Press.
Dworkin, G. (2005). Moral paternalism. Law and Philosophy, 24(3), 305–319. doi:10.1007/s10982-004-3580-7.
Dworkin, G. (2014). Paternalism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/sum2014/entries/paternalism/.
Economist. (2013). Robot recruiters. The Economist, April 6. http://www.economist.com/news/business/21575820-how-software-helps-firms-hire-workers-more-efficiently-robot-recruiters.
Economist. (2014). The computer will see you now. The Economist, August 16. http://www.economist.com/news/science-and-technology/21612114-virtual-shrink-may-sometimes-be-better-real-thing-computer-will-see.
Economist. (2016). Who wields the knife? The Economist, May 7. http://www.economist.com/news/science-and-technology/21698220-operations-performed-machines-could-one-day-be-commonplaceif-humans-are-willing.
Engel, J. (2016). Making the web more open: Drupal creator floats an ‘FDA for Data.’ Xconomy, March 2. http://www.xconomy.com/boston/2016/03/02/making-the-web-more-open-drupal-creator-floats-an-fda-for-data/.
Etzioni, A., & Etzioni, O. (2016). AI assisted ethics. Ethics and Information Technology, 18(2), 149–156. doi:10.1007/s10676-016-9400-6.
Evans, O., Stuhlmüller, A., & Goodman, N. D. (2015). Learning the preferences of bounded agents. In NIPS 2015 Workshop on Bounded Optimality. http://web.mit.edu/owain/www/nips-workshop-2015-website.pdf.
Evans, O., Stuhlmüller, A., & Goodman, N. D. (2016). Learning the preferences of ignorant, inconsistent agents. In Thirtieth AAAI Conference on Artificial Intelligence. http://web.mit.edu/owain/www/evans-stuhlmueller.pdf.
Fiala, B., Arico, A., & Nichols, S. (2014). You, robot. In E. Machery (Ed.), Current controversies in experimental philosophy. Abingdon-on-Thames: Routledge.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. doi:10.1023/B:MIND.0000035461.63578.9d.
Foot, P. (1978). Virtues and vices and other essays in moral philosophy. Berkeley: University of California Press.
Friedman, M. (1962). Capitalism and freedom. Chicago: University of Chicago Press.
Gantenbein, R. E. (2014). Watson, come here! The role of intelligent systems in health care. In 2014 World Automation Congress (WAC) (pp. 165–168). doi:10.1109/WAC.2014.6935748.
Garber, M. (2014). Would you want therapy from a computerized psychologist? The Atlantic, May 23. http://www.theatlantic.com/technology/archive/2014/05/would-you-want-therapy-from-a-computerized-psychologist/371552/.
Gubbi, J., Buyya, R., Marusic, S., & Palaniswami, M. (2013). Internet of things (IoT): A vision, architectural elements, and future directions. Future Generation Computer Systems, 29(7), 1645–1660. doi:10.1016/j.future.2013.01.010.
Holtug, N. (2002). The harm principle. Ethical theory and moral practice, 5(4), 357–389.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar Straus and Giroux.
Knight, W. (2015). How to help self-driving cars make ethical decisions. MIT Technology Review, July 29. https://www.technologyreview.com/s/539731/how-to-help-self-driving-cars-make-ethical-decisions/.
Legg, S. (2008). Machine super intelligence. Doctoral dissertation, University of Lugano.
Lewis, M. (2014). Flash boys: A Wall Street revolt. New York: W.W. Norton & Company.
Lin, P. (2013a). The ethics of saving lives with autonomous cars is far murkier than you think. WIRED, July 30. http://www.wired.com/2013/07/the-surprising-ethics-of-robot-cars/.
Lin, P. (2013b). The ethics of autonomous cars. The Atlantic, October 8. http://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/.
Lin, P. (2014). Here’s a terrible idea: Robot cars with adjustable ethics settings. WIRED, August 18. http://www.wired.com/2014/08/heres-a-terrible-idea-robot-cars-with-adjustable-ethics-settings/.
Lohr, S. (2015). Data-Ism: The revolution transforming decision making, consumer behavior, and almost everything else. New York: HarperBusiness.
Metz, C. (2016). Self-driving cars will teach themselves to save lives—But also to take them. WIRED, June 9. http://www.wired.com/2016/06/self-driving-cars-will-power-kill-wont-conscience/.
Mill, J. S. (1859). On liberty. London: John W. Parker and Son.
Millar, J. (2014a). Technology as moral proxy: Autonomy and paternalism by design. In 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering, 1–7. doi:10.1109/ETHICS.2014.6893388.
Millar, J. (2014b). You should have a say in your robot car’s code of ethics. Wired. September 2. http://www.wired.com/2014/09/set-the-ethics-robot-car/.
MIT Technology Review. (2015). Why Self-Driving Cars Must Be Programmed to Kill. October 22. https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/.
Nader, R. (1965). Unsafe at any speed; the designed-in dangers of the American automobile. New York: Grossman.
Orseau, L., & Ring, M. (2012). Space-time embedded intelligence. In J. Bach, B. Goertzel, & M. Iklé (Eds.), Artificial general intelligence (pp. 209–218). Lecture Notes in Computer Science 7716. Berlin: Springer. doi:10.1007/978-3-642-35506-6_22.
Purves, D., Jenkins, R., & Strawser, B. J. (2015). Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory and Moral Practice, 18(4), 851–872. doi:10.1007/s10677-015-9563-y.
Rawls, J. (1999). A theory of justice (Rev ed.). Cambridge: Belknap Press of Harvard University Press.
Rawls, J. (2005). Political liberalism (Exp ed.). Columbia Classics in Philosophy. New York: Columbia University Press.
Satz, D. (2010). Why some things should not be for sale: The moral limits of markets. Oxford Political Philosophy. New York: Oxford University Press.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. doi:10.1017/S0140525X00005756.
Searle, J. R. (1984). Minds, brains, and science. Cambridge: Harvard University Press.
Susskind, R., & Susskind, D. (2015). The future of the professions: How technology will transform the work of human experts. New York: Oxford University Press.
Transport Canada Civil Aviation. (2016). Flying a Drone or an Unmanned Air Vehicle (UAV) for Work or Research. Transport Canada. April 21. http://www.tc.gc.ca/eng/civilaviation/standards/general-recavi-uav-2265.htm.
Wall, S. (2012). Perfectionism in moral and political philosophy. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/win2012/entries/perfectionism-moral/.
Weng, Y.-H., Chen, C.-H., & Sun, C.-T. (2007). The legal crisis of next generation robots: On safety intelligence. In Proceedings of the 11th International Conference on Artificial Intelligence and Law (pp. 205–209). New York: ACM. doi:10.1145/1276318.1276358.
Acknowledgments
The ideas behind this paper were presented at the Centre de recherche en éthique at the Université de Montréal in Spring 2016. I would like to thank the members of the Centre for their useful comments.
Cite this article
Martin, D. Who should decide how machines make morally laden decisions? Sci Eng Ethics 23, 951–967 (2017). https://doi.org/10.1007/s11948-016-9833-7