
Who Should Decide How Machines Make Morally Laden Decisions?

  • Original Paper
  • Published in Science and Engineering Ethics

Abstract

Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it is facing any kind of morally laden decision? More and more, machines are making complex decisions with a considerable level of autonomy. We should be much more preoccupied by this problem than we currently are. After a series of preliminary remarks, this paper will go over four possible answers to the question raised above. First, we may claim that it is the maker of a machine that gets to decide how it will behave in morally laden scenarios. Second, we may claim that the users of a machine should decide. Third, that decision may have to be made collectively or, fourth, by other machines built for this special purpose. The paper argues that each of these approaches suffers from its own shortcomings, and it concludes by showing, among other things, which approaches should be emphasized for different types of machines, situations, and/or morally laden decisions.


Notes

  1. See Jason Millar (2014b), and Etzioni and Etzioni (2016). For an earlier formulation of a similar problem, see Patrick Lin (2013a; 2013b). See also two articles in the MIT Technology Review (2015; Knight 2015).

  2. For work dealing with a similar problem see, for instance, Lin (2014); Purves et al. (2015); Millar (2014a); and Weng et al. (2007).

  3. Providing a definition of intelligence is a difficult problem from a philosophical perspective, but it may be useful to point out that one common conception of intelligence relies on the notion of rationality. What is more, it is common in decision theory or rational choice theory to draw a distinction between different forms of rationality: practical rationality (choosing the right action to satisfy one’s desires), volitional rationality (forming the right desires) and epistemic rationality (forming the right beliefs). Each of these forms of rationality can be seen as a form of intelligence. Interestingly, it is more common in the machine learning literature, as well as in the more general AI literature, to define intelligence as a form of practical rationality. See, for instance, the work of Orseau and Ring (2012) and of Shane Legg (2008); a rough sketch of Legg’s measure appears after these notes. But we may wonder whether this definition is sufficient and whether we should not take into account other dimensions of rationality, such as the capacity to form proper desires and beliefs.

  4. For the present purposes, I define autonomy as the ability of a system to perform a task without real-time human intervention. See Etzioni and Etzioni (2016, 149) for a similar definition.

  5. Such as the ‘refurbished’ Watson system created by IBM that won first place at the game Jeopardy! in 2011. The system is now used as a clinical decision support system, among other things (Gantenbein 2014).

  6. See also Kate Crawford (2016) for other cases of discrimination performed by computer algorithms. For instance, some software used to assess the risk of recidivism in criminal defendants is twice as likely to mistakenly flag black defendants as being at higher risk of committing a future crime as it is to flag white defendants. Another study found that women were less likely than men to be shown Google ads for highly paid jobs.

  7. Drawing clear limits is likely to require developing some form of typology of morally laden decisions, which is not a trivial issue from a philosophical perspective. Another related issue goes as follows: how does a machine decide that it is faced with a morally laden decision? This is not a trivial issue either, from both an AI research and a philosophical perspective. Presumably, one needs to establish some form of typology of moral decisions before building a system that can operate on this typology (a minimal illustrative sketch of such a typology appears after these notes). But some cases are more straightforward than others. When casualties may occur, for instance, a decision is likely to be morally laden, and a machine such as a self-driving car can be designed to identify these situations. A machine can also be designed to identify, to some extent, the other situations mentioned above (risks to physical and/or mental integrity, the destruction of buildings or infrastructure). But this list is by no means exhaustive.

  8. See Debra Satz (2010) for an overview of contemporary arguments in favor of more market freedom.

  9. A group of intelligent machines could also be connected together and share information on user choices. It seems likely that systems like these will be developed in the future. This option potentially mixes different approaches, depending on how these machines use the information they share. I will say more about these mixed options when I discuss the fourth approach: letting other machines decide.

  10. The two options in the trolley problem flesh out a dilemma between deontological and consequentialist modes of moral reasoning: a consequentialist is more inclined to divert the trolley to spare as many lives as possible, which promotes the best consequences, while a deontological thinker would be more sensitive to the value of the action of diverting the trolley, which involves killing the man on the side track. Given this, the usual joke is to claim that users of self-driving cars should have access to a deontological/consequentialist configuration setting. Think of it as the ‘balance’ or ‘fader’ control on a car radio (a caricatural sketch of such a setting appears after these notes). But of course, this is just a joke. Moral configuration settings do not have to be that simplistic.

  11. A related question goes as follows: is there such a thing as ‘the heat of the moment’ for an intelligent machine? One may be inclined to say no, since machines always make decisions at the same pace through similar processes, but this is far from certain. New and more advanced forms of AI may evolve into something similar to the human brain and use faster or slower decision systems depending on the circumstances, the latter performing more thorough, but also slower, assessments (see also the next note). As well, intelligent machines may rely on information online, but not be able to access that information when they must make a quick decision. I will not address these questions directly here, for I am mostly interested in how circumstances influence human decision making, not machine decision making, but it is worth keeping in mind that machines may also make different decisions depending on how much time they have to make these decisions.

  12. See Daniel Kahneman (2011) for a particularly interesting account of the differences between fast and slow mental processes, as he calls them.

  13. This may even be an advantage of some intelligent machines. Self-driving cars may be preferable to human-driven cars precisely because it may be easier to decide collectively about their behavior.

  14. This is in line with Etzioni and Etzioni’s (2016, 151) suggestion that focus groups or public opinion polls could be used to determine the relevant values that should inform the behavior of intelligent machines. See also the studies by Jean-François Bonnefon et al. (2015, 2016) and an article in the MIT Technology Review (2015) for examples of this approach. The 2016 study suggests that most people think self-driving cars should minimize the total number of fatalities, even at the expense of the passengers in the car. But most people surveyed also claimed they would not buy such a car. They want a car that will protect them and their passengers before other people outside the car. Polls also raise other problems, starting with methodological questions. Who should be surveyed? How can we account for gender, age-related or cultural variations or biases in answers? How are we to use a poll result where no clear trend can be identified? (A toy sketch of one way to aggregate poll answers appears after these notes.)

  15. For a canonical formulation of such views, see, for instance, Milton Friedman (1962).

  16. See Ian Carter (1995) for an argument in favor of more freedom for the sake of technological progress. Even though scientific and technological developments may have disadvantages, claims Carter, governments (and other regulating bodies) won’t always be able to predict the disadvantageous outcomes of these developments, and they should therefore minimize interference during the development phase. The claim is not that developing clearly harmful technology, such as nuclear weapons, should be allowed; the risks of that technology are rather straightforward to determine. Rather, the idea is that where clear indications of serious downside risks are so far lacking, government bans are premature. Carter suggests that we must see, in each case, whether the burden of increased regulation is justified by the risk. See also de Bruin and Floridi (2016, 13).

  17. Car lobbyists in the US have made this very point every single time transport authorities tried to raise safety standards, a trend identified by Ralph Nader (1965) a long time ago.

  18. On a potential paternalistic dimension, see also Millar (2014a).

  19. See also the work of Evans et al. (2015, 2016) on learning the preferences of human agents.

  20. Another proposal that may be interpreted in different ways is that machines should "teach themselves" what to decide (Metz 2016). This proposal overlaps with the first approach if the makers of these machines have an important influence on these self-teaching mechanisms. It may overlap with the second approach if the machines are sensitive to users’ behaviors. And it may overlap with, or be very similar to, the fourth approach if these machines can learn how to make morally laden decisions with a high degree of autonomy.

  21. Susskind and Susskind (2015, 280–84) discuss this idea, though they do not necessarily endorse it.
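
For reference, and only as a rough sketch of the practical-rationality conception of intelligence mentioned in note 3, Legg’s (2008) universal intelligence measure scores an agent (a policy π) by its expected performance across computable environments, weighted by their simplicity. In the expression below, E is the class of environments, K(μ) the Kolmogorov complexity of environment μ, and V the expected cumulative reward of π in μ; the details of E and of the reward discounting are spelled out in the cited work and omitted here:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}$$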
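
The following is a purely illustrative sketch of the kind of typology discussed in note 7. The categories, inputs and thresholds are hypothetical assumptions made for this example; they are not taken from the paper or from any deployed system.

```python
# Illustrative sketch only: a crude, non-exhaustive typology of morally laden
# situations and a rule a self-driving car might use to flag them.
from dataclasses import dataclass
from enum import Enum, auto


class MoralCategory(Enum):
    RISK_TO_LIFE = auto()              # possible casualties
    RISK_TO_INTEGRITY = auto()         # physical or mental harm short of death
    DAMAGE_TO_INFRASTRUCTURE = auto()  # destruction of buildings or property


@dataclass
class Situation:
    humans_at_risk: int
    collision_probability: float
    property_at_risk: bool


def morally_laden_categories(s: Situation) -> set:
    """Return the categories (if any) that make the situation morally laden."""
    categories = set()
    if s.humans_at_risk > 0 and s.collision_probability > 0.0:
        categories.add(MoralCategory.RISK_TO_LIFE)
        categories.add(MoralCategory.RISK_TO_INTEGRITY)
    if s.property_at_risk and s.collision_probability > 0.0:
        categories.add(MoralCategory.DAMAGE_TO_INFRASTRUCTURE)
    return categories


# A decision is treated as morally laden when at least one category applies.
print(morally_laden_categories(Situation(2, 0.3, False)))
```

A realistic system would need far richer inputs and categories; the point is only that a typology can be made operational once its categories are fixed.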
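
Purely to make the joke in note 10 concrete, here is a caricatural sketch of a deontological/consequentialist ‘fader’. The scoring rule and the penalty value are invented for illustration and are not a proposal of this paper.

```python
# Caricature of the 'fader' joke in note 10: one parameter blends a
# consequentialist score (lives saved) with a deontological penalty for
# violating a duty. The weights and penalty are invented for illustration.

def score_option(lives_saved: int, violates_duty: bool, fader: float) -> float:
    """Score an option on a fader from 0.0 (deontological) to 1.0 (consequentialist)."""
    consequentialist_term = float(lives_saved)
    deontological_penalty = 5.0 if violates_duty else 0.0
    return fader * consequentialist_term - (1.0 - fader) * deontological_penalty


# Trolley-style choice: diverting saves four net lives but violates a duty not to kill.
for fader in (0.1, 0.9):
    divert = score_option(lives_saved=4, violates_duty=True, fader=fader)
    stay = score_option(lives_saved=0, violates_duty=False, fader=fader)
    print(fader, "divert" if divert > stay else "stay")  # 0.1 -> stay, 0.9 -> divert
```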
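
As a toy illustration of the last methodological question in note 14, one crude way to use a poll is to adopt the leading answer only when it wins by a clear margin and to fall back on another decision procedure otherwise. The 10% margin and the answer labels below are hypothetical assumptions made for this sketch.

```python
# Toy sketch for note 14: aggregate poll answers about a default machine
# behaviour, declaring 'no clear trend' when the leading margin is too small.
from collections import Counter
from typing import List, Optional


def aggregate_poll(answers: List[str], min_margin: float = 0.1) -> Optional[str]:
    """Return the leading answer, or None when no clear trend can be identified."""
    ranked = Counter(answers).most_common()
    top_answer, top_count = ranked[0]
    runner_up_count = ranked[1][1] if len(ranked) > 1 else 0
    if (top_count - runner_up_count) / len(answers) < min_margin:
        return None  # no clear trend: defer to another decision procedure
    return top_answer


# A 60/40 split yields a clear answer; a 52/48 split would return None.
print(aggregate_poll(["minimize_fatalities"] * 60 + ["protect_passengers"] * 40))
```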


Acknowledgments

The ideas behind this paper were presented at the Centre de recherche en éthique at the Université de Montréal in Spring 2016. I would like to thank the members of the Centre for their useful comments.

Author information

Correspondence to Dominic Martin.


About this article


Cite this article

Martin, D. Who Should Decide How Machines Make Morally Laden Decisions? Sci Eng Ethics 23, 951–967 (2017). https://doi.org/10.1007/s11948-016-9833-7

