1 Introduction

Uncertainty yields problems for moral decision-making in at least two ways. First, we have the issue of ‘moral uncertainty’, which is uncertainty about which normative principles should guide what we ought to do.Footnote 1 Second, we have the issue of ‘factual uncertainty’, which is uncertainty about the (possible) state of affairs or the (possible) consequence(s) of actions.Footnote 2 Both forms of uncertainty affect the question of what we ought to do in a given situation: moral uncertainty because we do not know which principles ought to guide us, or which facts matter for decision-making; factual uncertainty because we do not know all the morally relevant facts about the given situation. In this article, I will focus on factual uncertainty as it concerns the ethics of machine decisions (by which I mean decisions made by any input/output machine, though the focus will be on AI systems).Footnote 3

In ethics, a common method for dealing with factual uncertainty is to first analyze idealized cases. Once we know what we ought to do in idealized cases, we can analyze what to do based on a theory of rational decision-making for situations involving factual uncertainty. That is, the ethical analysis can be restricted to the idealized cases. I will call this method ‘the standard approach’.Footnote 4

By contrast, some argue that the standard approach is problematic because factual uncertainty changes the normative evaluation of a given situation.Footnote 5 Hence, if we want to determine what we ought to do in a situation involving factual uncertainty, we cannot idealize; rather, we must analyze the situation simpliciter, including the associated factual uncertainty. I call this ‘the uncertainty approach’.Footnote 6

We can see that this distinction plays a role in the applied debate on the ethics of machine decisions, irrespective of whether either approach is explicitly endorsed. For example, there is a large literature on the ethics of crashing with autonomous vehicles, which is concerned with the ethics of machine decision-making for autonomous vehicles in situations of an unavoidable crash.Footnote 7 In this context, proponents who explicitly or implicitly adhere to the standard approach mostly discuss so-called ‘applied trolley problems’. These exist in many variants but most commonly involve a single autonomous vehicle facing an unavoidable accident and a bivalent choice under idealized factual descriptions. On the other side, promoting the uncertainty approach, are those who argue that the idealization of these applied trolley problems is problematic because (1) it ignores risks and uncertainties, and (2) risks and uncertainties must be normatively evaluated. If we want to answer the question of what principles ought to guide autonomous vehicles’ decision-making in these situations, then it is necessary to determine which approach should be applied.Footnote 8

In this article, I will use examples from the discussion on the ethics of crashing to make a broader point about the methods used in applied ethics to address the ethics of machine decisions (i.e., AI decision-making). Thus, the point I am making here is not really concerned with the ethics of crashing; rather, it concerns the broader methodology for addressing the ethics of machine (AI) decision-making.

On the view I will defend, I concur with the uncertainty approach that we must normatively analyze the uncertainty component of machine decisions. Yet, I will argue that this analysis is insufficient because we are dealing with a moving target. That is, the ethics of machine decisions involves a threefold problem. First, the question of machine decisions is not only a question about what a machine ought to do in situations (of uncertainty), given its technical limits; it also concerns how the machine needs to be constituted to achieve the right decisions. That is, what inputs are needed to achieve the ethically correct—or a sufficiently correct—decision? Second, it concerns the question of how much decisional uncertainty we can accept and what inputs we need to achieve that. Third, given that increasing inputs in most cases implies various trade-offs or risks thereof, the question is what trade-offs are justified for reducing that decisional uncertainty. Thus, the ethics of machine decisions is a moving target insofar as all three aspects of the problem involve the question of how the machine ought to be constituted, because how the machine is constituted affects its decision-making abilities and—at the same time—can yield potential harms (or so I will argue). This trilemma is what I call the ‘input-selection problem’, which concerns the question of which inputs are needed (for ethical decision-making with sufficient certainty) and which inputs are acceptable (granted the possible harms of using those inputs). The conclusion of this article is that ethical machine decisions have to be analyzed as a response to the input-selection problem.

The remainder of the article is structured as follows. In Sect. 2, I will discuss the standard and uncertainty approaches and present some brief reasons why I think we ought to prefer the uncertainty approach to the standard approach. In doing so, I will also argue for the importance of inputs for yielding the right decisions. In Sect. 3, I will introduce what I call ‘the grandma problem’, which aims to illustrate that reducing decisional uncertainty involves trade-offs. My point is not that these trade-offs are inevitable (relatively harmless examples include, for instance, AIs playing chess), but that they are common for many types of AI applications; hence, an ethical evaluation of machine decisions must address such potential trade-offs. In Sect. 4, I sketch an idea of how the ethics of machine decisions should proceed in light of these insights. The article ends with a section summing up the main conclusions.

Lastly, in this article I will use terms such as ‘data’ and ‘information’ in a fairly colloquial sense (i.e., I will not clearly distinguish them). While more precise definitions, which clearly distinguish these concepts, are available in the literature, those definitions and distinctions will not matter for the issues I am addressing; hence, I will set them aside. Moreover, to simplify the language, I will often refer to normative ethical questions as normative questions (and likewise for similar formulations), even though normative questions are not limited to ethics.

2 The standard or the uncertainty approach?

In this section, I will defend the uncertainty approach against the standard approach. Moreover, the aim is also to establish that a machine needs to have access to the right inputs in order to achieve an acceptable level of certainty that the machine’s decisions are ethically correct. As noted in the introduction, I will use the literature on the ethics of crashing as an illustrative example.

Early writers on the ethics of crashing, such as Patrick Lin, argued that autonomous vehicles will unavoidably crash and, thus, that we must determine how they should crash. In determining how the vehicle should crash, the argument goes, the vehicle will face choices such as that between crashing into a kid and crashing into a grandmother.Footnote 9 As I mentioned in the introduction, these unavoidable accident scenarios are often called applied trolley problems due to their similarity with, and inspiration from, the trolley problem.Footnote 10

Critics have pointed out that applied trolley problems miss the important fact that AIs are dealing with probabilities. For example, Sven Nyholm and Jilles Smids note that “[an autonomous vehicle] necessarily has to work with […] estimate[s]”.Footnote 11 One way to read these critiques is to take their claim to be that we must include (ranges of) probabilities in our ethical evaluation. Yet, proponents of the standard approach would not necessarily deny that. Arguably, this is precisely how Geoff Keeling has responded to their arguments. Keeling defends the standard approach, arguing that we first ought to settle the question of what utility is; what remains is then a matter of decision-making under risk (or uncertainty), for which Keeling endorses expected utility maximization.Footnote 12
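Schematically (and this is my own rendering, not a formula taken from Keeling), expected utility maximization in this setting amounts to choosing the maneuver

$$a^{*} \;=\; \arg\max_{a \in A} \; \sum_{s \in S} P(s \mid a)\, U(s),$$

where $A$ is the set of available maneuvers, $S$ the set of possible outcomes, $P(s \mid a)$ the machine’s probability estimate of outcome $s$ given maneuver $a$, and $U(s)$ the utility assigned to $s$. On the standard approach, the normative work is done in fixing $U$; handling $P$ is treated as a matter of rational decision theory.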

Although Keeling’s point may satisfy some critics, it seems that it does not address the alternative reading of Nyholm and Smids—that risks and uncertainty must be normatively evaluated (as I have pointed out previously).Footnote 13 That is, another way of reading these critiques is that they adhere to the uncertainty approach. Given that Nyholm and Smids refer to Sven Ove Hansson—who strictly defends the uncertainty approach—this is arguably the best interpretation of their arguments.

So far, I have not said much to favor either the standard or the uncertainty approach. However, in the context of the ethics of machine decision-making, there are several reasons why I think we need to opt for the uncertainty approach. I will present three main reasons below; while doing so, I will also establish the importance of inputs for achieving an acceptable level of certainty in decision-making.

First, there is moral uncertainty about how to evaluate risks and uncertainties. That is, the standard approach seems to ignore that a large set of questions about decisions under risk and uncertainty must be normatively evaluated (in context). For example, is there a pro tanto right against risk exposure, or should the evaluation of risks be done according to purely consequentialist principles? Does fairness in the distribution of risks and rewards matter?

Remember that the idea behind the standard approach is that we can resolve all normative questions in idealized cases (i.e., uncertainties and risks can be handled by a theory of rational decision-making), but there seem to be substantive normative questions concerning decision-making under factual uncertainty that require contextual analysis. For example, what makes a machine decision right or wrong arguably depends on what factual uncertainty is normatively acceptable, which is a normative question. To see this, suppose that we are determining an appropriate speed for an autonomous vehicle in a given situation. How fast we ought to drive arguably depends on the uncertainty of the machine’s ability to identify and avoid objects in the proximity of its travel path and to determine whether these objects are trees, pedestrians, dogs, other vehicles, and so forth. However, the answers to those questions are not static. That is, in order to determine the appropriate speed in a given situation, we must normatively evaluate how much factual uncertainty is acceptable in the given context (e.g., how certain do we need to be that A will not crash into x in situation S?). This speaks in favor of the uncertainty approach, because it means that factual uncertainty changes the normative evaluation of the situation and cannot be analyzed separately.Footnote 14
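To make the interplay concrete, here is a minimal sketch of how a normatively chosen certainty threshold might feed into a technical parameter such as speed. The function, numbers, and stopping-distance model are illustrative assumptions of my own, not a proposal from the literature.

```python
import math

def max_permissible_speed(detection_range_m: float,
                          detection_confidence: float,
                          required_confidence: float,
                          reaction_time_s: float = 0.5,
                          max_deceleration_ms2: float = 6.0) -> float:
    """Illustrative only: return a maximum speed (m/s) at which the vehicle can
    still stop within its reliable detection range, but only if its confidence
    in identifying objects meets the normatively chosen threshold."""
    if detection_confidence < required_confidence:
        return 0.0  # the residual uncertainty is deemed unacceptable: do not drive
    # Solve v * t_r + v^2 / (2a) = detection_range for v (simple stopping-distance model).
    a, t = max_deceleration_ms2, reaction_time_s
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * detection_range_m)

# The same sensors license different behavior depending on the normative choice
# of required_confidence (0.99 vs. 0.999), not on the physics alone.
print(max_permissible_speed(60.0, 0.995, 0.99))   # permitted: 24 m/s
print(max_permissible_speed(60.0, 0.995, 0.999))  # not permitted: 0.0
```

The point of the sketch is only that the question “how certain do we need to be?” is answered outside the code, yet it determines what the machine may do.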

Second, while we might be tempted to think that a measure such as the one suggested by Keeling is acceptable when we are talking about decisions under known (ranges of) probabilities, we must keep in mind that uncertainty also includes the possibility of information gaps (and potentially false information). No machine can input all possible data; there are and always will be technical constraints. Because of these constraints, it is possible that ethically relevant data inputs are missing from the machine. Moreover, it is possible that ethically relevant data cannot be predicted from other available information. This is problematic because even if the machine makes the perfect decision based on the available inputs, it can still make the wrong decision, because the available inputs are flawed (i.e., a perfect reasoner may fail if she is reasoning based on false information).Footnote 15 This illustrates that decisions based on flawed information (information that may even be false) create special normative challenges.

Let us pause for a moment to look more closely at the details of this problem. To do so, I will construct a simplified thought experiment. Suppose that we know what the correct fact-relative ethical theory is. That is, suppose that we have a complete description of what is right/wrong “in the ordinary sense if we knew all of the morally relevant facts”.Footnote 16 Moreover, suppose that we could program, or train, an AI to apply it perfectly.Footnote 17 Lastly, suppose that we have created such an AI. Under these suppositions, the AI would be able, for each given description of a situation, to correctly determine the morally right action to perform. This raises the question of whether that resolves all ethical queries involving ethical machine decisions. Here is a generalizable illustration to show that it would not. Suppose that for any given situation the possible ethically relevant factors are A, B, and C. Suppose further that the machine can only input A and B. Suppose that two situations are ethically equivalent, but that they vary in relation to A and B (e.g., because a difference in C offsets the differences in A and B). The machine will incorrectly determine that these situations are ethically non-equivalent, because it can only describe the situations using A and B. Furthermore, suppose that two situations are ethically non-equivalent because they differ in relation to factor C (although they are equivalent relative to A and B). The machine will incorrectly determine that these situations are ethically equivalent.Footnote 18
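A minimal sketch of this illustration in code, where the factor names, numeric values, and the stand-in for the true evaluation are purely illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Situation:
    A: int
    B: int
    C: int  # ethically relevant, but not available to the machine

def machine_view(s: Situation):
    """The machine can only describe a situation via A and B."""
    return (s.A, s.B)

def truly_equivalent(s1: Situation, s2: Situation) -> bool:
    """Illustrative stand-in for the fact-relative evaluation, which here
    depends on all three factors taken together."""
    return (s1.A + s1.B + s1.C) == (s2.A + s2.B + s2.C)

def machine_judges_equivalent(s1: Situation, s2: Situation) -> bool:
    return machine_view(s1) == machine_view(s2)

# Case 1: ethically equivalent (C offsets the difference in A), yet the
# machine, seeing only A and B, judges the situations non-equivalent.
s1, s2 = Situation(A=1, B=1, C=1), Situation(A=2, B=1, C=0)
assert truly_equivalent(s1, s2) and not machine_judges_equivalent(s1, s2)

# Case 2: ethically non-equivalent (they differ only in C), yet the machine
# judges them equivalent.
s3, s4 = Situation(A=1, B=1, C=0), Situation(A=1, B=1, C=5)
assert not truly_equivalent(s3, s4) and machine_judges_equivalent(s3, s4)
```

Even a perfectly calibrated evaluation over (A, B) cannot recover the distinctions that only C marks.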

To give an example from the literature, suppose that an autonomous vehicle is facing the bivalent choice of crashing into a kid or a grandmother. Suppose that the correct priority, given the contextual factors—such as the speed of the vehicle—is to avoid crashing into the grandmother. Moreover, suppose that in a counterfactual situation—in which the machine faces the bivalent choice of crashing into a kid or an adult, ceteris paribus—the machine should prioritize avoiding the kid over avoiding the adult. Lastly, suppose that the machine cannot distinguish between different kinds of adults (i.e., it cannot separate adult grandmothers from other adults). If so, the machine mistakenly assumes that it is facing the choice in the counterfactual situation, and although it makes the right decision given the available information, it ends up making the wrong decision (all things considered).

This hopefully makes it clear that even a “perfect” decision-making machine (i.e., as defined earlier) would yield erroneous decisions if it is missing ethically relevant facts (mutatis mutandis for false information). One problem for the standard approach is that the possibility of false information and information gaps cannot be dealt with in the same way as risks and other uncertainties. For example, Keeling suggested a method that arguably depends on known possibilities, but given that we are dealing with information gaps (unknown unknowns) and potentially false information, this problem cannot be solved as Keeling suggests. Of course, one may modify Keeling’s proposal to suggest that there are other rational decision principles that apply in the case of unknown unknowns and false information. However, it is difficult to see how the choice of those principles will not involve ethical questions. For example, under which conditions should we accept the possibility that an autonomous vehicle fails to identify a pedestrian crossing the street? That seems to be an inherently normative question.

The third reason why I favor the uncertainty approach is tied to what I previously called the ‘science-fiction presumption’.Footnote 19 This is a name for various examples from the literature on the ethics of crashing in which these systems are presumed to have capabilities that are currently not available. This idealization is different from the idealization standardly used in the applied trolley problem, since it concerns an idealization of the features of the machine making the decision. For example, Derek Leben supposes “that it is possible for an autonomous vehicle to estimate the likelihood of survival for each person in each outcome”.Footnote 20 The problem is that proposals about what a machine with some specific capabilities ought to do tell us very little about what machines with other features ought to do. If the standard approach were correct, then one may argue that these examples could still be useful because they show us what the basic decision-making principles ought to be. However, decision-making principles based on science-fiction presumptions may be incongruent with the best available technical solutions. Furthermore, as I will argue in the upcoming section, technical choices must be normatively evaluated.

To sum up: In this section I have argued that we must adhere to the uncertainty approach, because the ethics of machine decision-making is not reducible to what a machine ought to do in a given situation; it also depends on the machine’s uncertainty about the facts of the situation. Moreover, what the machine ought to do can only be determined in relation to what degree of factual uncertainty is acceptable in the given situation (e.g., how fast an autonomous vehicle should be allowed to drive depends, in part, on its ability to correctly identify and avoid objects in its proximity). Lastly, we need to consider what inputs are needed to achieve an acceptable level of certainty that a machine is making the right decision.

3 The grandma problem

Based on the previous section, it should be clear that to achieve an acceptable level of certainty that a machine makes the right decisions we need to ensure that it has access to the relevant inputs. In this section, I will argue that adding inputs in most cases creates trade-offs. As previously noted, these trade-offs are not inevitable; what is important is that potential trade-offs are so common that they must be part of the normative evaluation. The main trade-offs are between the inputs needed to achieve an acceptable degree of certainty that the machine’s decisions are ethically correct and the risks of harm from using those inputs. Moreover, I will also argue that more inputs may yield problems in situations with time constraints. To establish all of this, I will start with an example from the discussion of the ethics of crashing.

Suppose an ethical machine decision depends on whether someone is a grandmother.Footnote 21 Granted this presumption, a machine must be able to model the property of being a grandmother in order to reach an ethical decision (i.e., without an evaluation of the relevant ethical factors, the machine will not be able to make the correct decision). The problem is that it is difficult to determine whether someone has the property of being a grandmother.Footnote 22 A simple model could predict that x is a grandmother by first determining that x is a human (a capability that autonomous vehicles arguably need anyway) and then using simple image analysis to determine that x is a woman and that x is old. Setting aside the uncertainty involved in these evaluations, it is clear what the problem with the proposed model is: the predictor is neither necessary nor sufficient (i.e., there are young grandmothers and there are old women who are not grandmothers). Hence, if determining whether someone is a grandmother is necessary in order to make an ethical decision, we need a model with a more complex informational input. One idea is to equip the autonomous vehicle with facial recognition capability and access to an appropriate database.Footnote 23 Although such a model could be highly successful (granted the completeness of the database and the ability to perform timely, accurate, and precise facial recognition), it should be obvious that equipping autonomous vehicles with such technologies would be highly detrimental. Not only would it invade the privacy of the individual; it would also enable an extreme mass surveillance system (making all vehicles moving parts of a joint visual surveillance system).Footnote 24
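As a minimal sketch of the simple model just described, and of why its predictor is neither necessary nor sufficient (the attributes and the age threshold are illustrative assumptions of my own):

```python
def simple_grandma_predictor(is_human: bool, appears_female: bool,
                             estimated_age: float, age_threshold: float = 65) -> bool:
    """Predict 'is a grandmother' from image-level proxies only."""
    return is_human and appears_female and estimated_age >= age_threshold

# Not sufficient: an 80-year-old woman without grandchildren is classified
# as a grandmother (a false positive).
assert simple_grandma_predictor(True, True, 80) is True

# Not necessary: a 45-year-old grandmother is missed (a false negative).
assert simple_grandma_predictor(True, True, 45) is False
```

The only way to do better is to add inputs that track the property more directly, which is exactly where the trade-off discussed below enters.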

This illustrates a trade-off between the ability to predict whether x has the property y and the risks of harm from the information needed to predict that x has the property y. On the one hand, we have a relatively (but not entirely) innocent image analysis that yields unsuccessful results. On the other hand, we have a potentially successful system with a genuinely dangerous and detrimental integrated information-analytic system. The question is whether the trade-offs involved in this example apply more generally and, if so, to what extent.Footnote 25 In two upcoming subsections I will aim to establish that we have at least two common types of trade-offs between the inputs that we may need for making the right decisions and the risks of using those inputs. As I have said before, the goal here is not to establish that these trade-offs apply to all decision-making machines. Arguably, they hold for many, if not most, machines that have a sufficiently broad application and capacity. I will not settle this distinction precisely, but it should be clear that there is a difference between an AI playing chess and an autonomous vehicle.

The grandma problem can also be used to illustrate a difficulty with time-sensitive decisions. Hence, in a third subsection I will turn to the trade-off between the inputs needed for right decision-making and the time needed to process those inputs.

Before turning to the subsections, it is worth pointing out that there is an overall trade-off underlying all these issues: cost and benefit. Arguably, adding an input often implies a cost (e.g., adding a sensor or processing the data); hence, there is a trade-off between that cost and the benefit of adding that input. Although such cost–benefit analyses are often performed according to various methodological rules, any such analysis is substantially normative in nature, and cost–benefit analyses are not without problems.Footnote 26 This first trade-off is quite simple, but I mention it since it enters all input-selection choices.

Lastly, to sum up: what the grandma problem showed was that inputs are needed to reduce decisional uncertainty. Specifically, inputs are necessary for the machine to know (with sufficient certainty) what is going on. Because it is difficult to determine a priori or ex ante which facts might be relevant in any possible situation, and because there is factual uncertainty about which facts may matter for instrumental reasons or serve as a proxy for some instrumental or intrinsic value, we face a tension between adding inputs broadly (because we have reason to believe that they may be relevant) and the potential trade-offs of adding those inputs. In the upcoming subsections I will deal with the trade-offs of adding inputs, as well as the problem of time-sensitive decisions (which may be considered a trade-off in its own right). In the next section, I will make a brief sketch of how the ethics of machine decision-making ought to proceed in light of the input-selection problem.

3.1 Transparency

I take for granted what I have argued previously, that is, that inputs are needed. In this subsection, I will deal with one negative aspect of adding inputs: how it affects the transparency of the system and why that matters. Simply put, I will argue that adding inputs—ceteris paribus—increases the complexity and sophistication of many AI systems (such as artificial neural networks), which in turn decreases the transparency of the system.Footnote 27 Hence, this creates a trade-off between inputs and transparency. In fact, some hold that the trade-off is a trilemma between transparency, accuracy, and robustness.Footnote 28 Moreover, I will give a few examples demonstrating why a lack of transparency may be a problem.

Generally speaking, a model can be opaque (i.e., non-transparent) or uninterpretable for two reasons: the internals of the model are unknown, or we cannot assign any meaning or understandable explanation to those internals.Footnote 29 The problem of understanding a system’s internals arguably has to do with its complexity. That is, while complexity is defined in different ways—relative to different techniques—it is standardly viewed as the opposite of interpretability.Footnote 30 Some “define the model complexity as the model’s size”,Footnote 31 which indicates that increasing the inputs increases the complexity and decreases interpretability (or transparency).

Understood in this rough and simplified way, transparency decreases as model size increases. Given that adding inputs increases the size of the model, adding inputs, ceteris paribus, generally decreases model transparency. To see this more clearly, it might be illustrative to consider that different authors have defined complexity in terms of the number of regions, non-zero weights, the depth of the decision tree, or “the length of the rule condition”.Footnote 32 So, for example, increased inputs would prima facie add to the length of the rule condition by adding criteria that must be considered in that condition (likewise for the other definitions). Thus, as a rule of thumb, adding inputs decreases transparency (i.e., there is a trade-off between inputs and transparency).Footnote 33
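As a minimal sketch of this rule of thumb, using rule-condition length as the complexity proxy (the rules and inputs here are illustrative and not taken from the cited definitions):

```python
# With two inputs, a decision rule may need only two conditions:
# IF object_is_human AND distance_m < 10 THEN brake
rule_two_inputs = ["object_is_human", "distance_m < 10"]

# Adding inputs (age estimate, trajectory, road friction) lengthens the rule
# condition; analogously, it tends to grow tree depth or non-zero weights.
rule_five_inputs = ["object_is_human", "distance_m < 10", "estimated_age > 65",
                    "trajectory == 'crossing'", "road_friction < 0.4"]

def rule_condition_length(rule) -> int:
    """One of the cited proxies for model complexity: the length of the rule condition."""
    return len(rule)

assert rule_condition_length(rule_five_inputs) > rule_condition_length(rule_two_inputs)
```

The sketch only makes vivid what the definitions already say: each added input is an extra element the rule (or tree, or weight vector) must accommodate, and hence an extra element a human reader must follow.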

Transparency is broadly promoted in the literature on ethical AI.Footnote 34 Ethically, we can distinguish between two different transparency demands, which we may desire for various reasons. On the one hand, we may demand that the system be explainable (i.e., that the machine decision, or the justification thereof, is understandable). On the other hand, we may demand that the system be traceable (i.e., that we have the ability to trace the decision from input to output).Footnote 35

Before explaining why these demands matter, it should be recognized that explainability, in all fairness, does not link directly to model complexity, since what we need to understand is not necessarily the model but the result of the model. Yet, the argument connecting increased inputs, complexity, and transparency arguably holds as a rule of thumb, which is sufficient. (Keep in mind that the point is to establish that these are concerns that deserve our attention when evaluating the ethics of machine decision-making.)

Explainability is important in legal, political, and medical contexts, for example. In a legal context, we usually want to avoid procedural opacity, because one has a right to understand and (in many cases, if it applies to oneself) appeal a legal decision, and in order to do so, one must understand the decision. Moreover, a legal decision is often strongly connected to the legal reasoning on which it is based.Footnote 36

Understanding political decisions is also important, at least in a democracy. It matters, too, for political participation: if you do not understand the political process or political decision-making, then participation will be difficult. Hence, political usage of algorithmic decision-making may make political participation more difficult.Footnote 37

In the medical context, informed consent is considered the gold standard for medical decision-making (e.g., because it protects individuals against harm and abuse; protects their autonomy, self-ownership, and personal integrity; and increases trust and decreases domination),Footnote 38 and decisional opacity is a problem for informed consent since it makes it difficult to inform the individual of the reasons for her treatment. Even if we hold that decisional accuracy is more important than explainability, that does not mean that there is no trade-off.Footnote 39

Explainability can also increase trust,Footnote 40 which may be important to alleviate fear of new technologies, such as the fear of riding a fully autonomous vehicle.Footnote 41

Traceability is important for responsibility and accountability (e.g., when something has gone wrong, or to increase trust in a system by allowing it to be monitored and evaluated), and for increasing safety and reliability (e.g., in the case of an autonomous vehicle crashing, we might not only need to determine who is responsible or accountable, but also how we can improve the system). Our ability to fully understand the system can also be important for revealing bias in algorithms.

These are just a few examples to illustrate the importance of transparency and that it must be part of the normative evaluation of ethical machine decisions.

In summation, this subsection has shown that we need to consider a possible trade-off between transparency and adding inputs. As previously noted, the point here is not that transparency necessarily matters in all situations, nor that adding inputs necessarily reduces transparency in a relevant way in any situation. The point is, as I just stated, that it needs to be part of the normative analyses of machine decisions.

3.2 Privacy and data protection

The grandma problem quite clearly illustrated a trade-off between privacy and data protection, on the one hand, and adding inputs to decrease decisional uncertainty, on the other. While privacy has been recognized as a substantial problem for autonomous vehicles,Footnote 42 it ought to be clear that issues relating to privacy and data protection apply much more broadly to most kinds of machine decisions.

However, one may think—based on the grandma problem—that privacy is only at risk when we are dealing with sensitive data. Thus, one may worry that the trade-off is restricted to situations in which one is dealing with sensitive information. For this reason, in this subsection I will aim to show that we have very broad reasons to minimize the use of data and information. Simply put, I will show that the idea of restricting machines to non-sensitive information inputs does not guarantee the protection of sensitive information. Moreover, I will exemplify why we should be concerned about an individual’s privacy, beyond a right to privacy or an individual’s desire to keep secrets.

There are various examples of how seemingly innocent data-sets can be used to predict fairly sensitive information. For example, it has been shown how ‘Likes’ on Facebook (i.e., giving a virtual thumbs-up to a social media posting) can be used to predict personal information such as political leaning (Republican/Democrat), sexuality, parental separation before the age of 21, etcetera.Footnote 43 Once we predict more substantial and/or sensitive information, there is a risk that such information could be used to manipulate individuals and blackmail them.Footnote 44 Manipulation based on information harvesting is arguably a business model used by many so-called “free” online services.
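The general mechanism can be sketched in a few lines. The data here are synthetic and the classifier generic; this is merely an illustration of the kind of inference at issue, not the model used in the cited study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary 'like' matrix: 1,000 users x 50 seemingly innocuous pages.
likes = rng.integers(0, 2, size=(1000, 50))

# Suppose a sensitive attribute happens to correlate with a handful of likes.
sensitive = (likes[:, 3] + likes[:, 17] + likes[:, 42] >= 2).astype(int)

# A plain off-the-shelf classifier recovers the attribute from the 'innocent' inputs.
model = LogisticRegression(max_iter=1000).fit(likes[:800], sensitive[:800])
print(f"held-out accuracy: {model.score(likes[800:], sensitive[800:]):.2f}")
```

The lesson is not about the particular classifier; it is that once inputs correlate with sensitive attributes, restricting a machine to “non-sensitive” inputs does not by itself prevent sensitive inferences.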

As this illustrates, there are further reasons for data protection beyond privacy concerns. For example, Jeroen van den Hoven argues that there are at least three reasons for data protection beyond privacy: information-based harm, informational inequality, and informational injustice. Information-based harm is harm done to an individual by making use of personal information; informational inequality is concerned with (a lack of) transparency and fairness in the informational marketplace (i.e., access to information is power); and informational injustice is concerned with how information is used to discriminate against an individual.Footnote 45 As you can see, some of these reasons overlap with transparency considerations.

There are also surveillance problems. For example, an autonomous vehicle must have the ability to track both the passenger(s) and people in its vicinity. The surveillance capability involved in tracking people in its vicinity is arguably an extremely substantial problem, since it also affects the privacy of non-users (meaning that they are negatively affected without receiving the benefits and without being able to properly opt out, which poses a problem for solving this through informed consent). If combined, these surveillance capabilities can also be used for undemocratic purposes, to control the population.

This can put individuals in a problematic situation where they have to choose between using AI services and protecting their privacy. For example, Dorothy J. Glancy discusses an example involving an autonomous vehicle in which you must choose between increasing mobility (which increases user autonomy) and giving up your informational privacy (because the service requires access to, e.g., travel data).Footnote 46

In summation, this subsection has shown that we need to consider a possible trade-off between, on the one hand, privacy and other informational wrongdoings and, on the other hand, adding inputs. As previously noted, the point here is not that all situations of adding inputs will affect an individual’s privacy or cause informational wrongdoings. Nevertheless, it is clear that adding inputs can affect an individual’s privacy not only when the data is sensitive; even seemingly non-sensitive data can be privacy-problematic. Moreover, information can be used and abused in various ways, which gives us reason to consider limits on information access (as well as creation, for that matter). Furthermore, AI systems can, when combined, also lead to a risk of mass surveillance. Thus, the choice of inputs must be evaluated against various privacy concerns and reasons for data protection, whether directly or indirectly, and more broadly against risks of mass surveillance. The overall point is, as I just stated, that these trade-offs need to be part of the normative analyses of machine decisions.

3.3 Time-sensitive decisions

Adding inputs not only adds a monetary cost, decreases transparency, and may affect an individual’s privacy; it also increases decision time (because the inputs must be processed). That is, adding inputs postpones the machine’s decision-making, ceteris paribus. The problem with postponing decisions in general is that it can—all things considered—lead to more harm, due to delayed response time.

In this subsection, I will argue that this implies an ethical trade-off, in the design process, between adding functionality that allows for a more fine-grained ethical analysis and making a decision in time. The problem is that we can end up with a machine that, although it makes “better decisions” (if allowed to run through the full process), ends up performing worse, because making better decisions takes more time. One may suppose that this is a matter of optimization, but that depends on knowing beforehand the trade-offs between time and best analysis, decision, and action in any given situation. That is, for systems that will be used in varied contexts with varied complexity, there will also be variation in decision time. Although this is partly an engineering problem, it is not only an engineering problem. It includes the normative choice of how much decisional uncertainty we can accept relative to how quickly decisions can be made in a given situation.

It is easy to see how this conflict may create a situation in which we end up with a suboptimal decision. I will establish this for both absolutist rule-based ethics and consequence-based ethics. In situations of uncertainty, absolutist decision rules (e.g., a constraint) are usually applied as follows: “it is permissible to Φ only if” the probability that Φing will breach the constraint “is lower than some threshold”.Footnote 47 Given that more inputs add processing time, this means that there would be some set(s) of input data and some time constraint(s) for a given machine-choice mechanism such that the machine would miscalculate the threshold at the time limit because of the added processing time from the added inputs, while a smaller set of inputs would yield the correct decision (even though, absent time constraints, the larger set of inputs would yield a more precise estimate of the probability that Φing breaches the constraint). That is, more is sometimes less (or worse).
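A minimal sketch of how a fixed time budget can interact with added inputs under such a threshold rule. The sampling model, costs, and numbers are illustrative assumptions of my own; the point is only the structural one that more inputs per sample means fewer samples before the deadline.

```python
import random

random.seed(1)

TRUE_BREACH_PROBABILITY = 0.04  # ground truth: Φ-ing breaches the constraint 4% of the time
THRESHOLD = 0.05                # Φ is permissible only if the estimated probability is below this
TIME_BUDGET_MS = 100.0          # hard deadline for the decision

def estimate_at_deadline(cost_per_sample_ms: float) -> float:
    """Monte Carlo estimate of the breach probability available at the time limit.
    More inputs -> higher cost per sample -> fewer samples before the deadline."""
    n = max(1, int(TIME_BUDGET_MS / cost_per_sample_ms))
    breaches = sum(random.random() < TRUE_BREACH_PROBABILITY for _ in range(n))
    return breaches / n

def wrong_verdict_rate(cost_per_sample_ms: float, trials: int = 2000) -> float:
    """How often the time-limited estimate flips the (truly permissible) verdict."""
    wrong = sum(estimate_at_deadline(cost_per_sample_ms) >= THRESHOLD for _ in range(trials))
    return wrong / trials

# Few inputs: ~1,000 samples fit in the budget; the verdict is wrong only a few percent of the time.
print(wrong_verdict_rate(cost_per_sample_ms=0.1))
# Many inputs: only ~10 samples fit; the verdict is wrong roughly a third of the time.
print(wrong_verdict_rate(cost_per_sample_ms=10.0))
```

Given unlimited time, the richer input set would of course support the better estimate; the miscalculation only arises at the deadline.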

Similarly, for a consequence-based ethics, evaluating the utility of the two actions can take more time because of added inputs. Within a specific time limit, this may skew the comparison so that it yields the incorrect result according to the theory applied, while an alternative with fewer inputs yields the correct outcome.

In this example, it is clear that the problem is that some inputs were not needed, for otherwise, fewer inputs could not possibly generate the correct decision. However, the fact that some inputs were not needed to reach a correct decision in this situation does not imply that they would not be necessary in another situation (e.g., if more precision would be needed and the time constraints would differ). Thus, this example illustrates that the selection of inputs is a substantial normative choice that we must engage with.

4 So what should we do?

Given that the current practices of applied ethics have largely ignored the role factual uncertainty plays in what I have called the input-selection problem, what should ethicists do differently henceforth? First, ethicists need to take inputs and input limitations into consideration when analyzing what constitutes ethical machine decisions. Although this may seem obvious, it is currently largely neglected (at least in the applied ethical literature on autonomous vehicles). Second, taking inputs and input limitations into consideration requires an analysis of the trade-off between the benefits, for ethical machine decisions, of including various inputs X1, …, Xn and the potential negative effects (e.g., for privacy) of including such inputs. Thus, the discussion of ethical machine decisions needs to change radically to take these potential trade-offs into consideration.

To illustrate the above point, consider the examples of unavoidable crash scenarios in which we must choose whether a vehicle should crash into A or B. Suppose, for example, that we think that in situations with a choice between accidentally killing individual A or B, the ethical choice depends on A’s and B’s individual properties. (As before, I am merely using this as an example, with all caveats of simplification.)

If we think that the individual’s properties matter, then we can attempt to put a value on this. For example, if A has property x and B has property y, then A should be prioritized over B. That is how it is commonly discussed in the literature, with the caveat of adding conditions for varying degrees of uncertainty. However, I have argued that these analyses are incomplete: we must also consider how the machine can conclude that A has the property x (with some sufficient degree of probability, whatever that is), and we must evaluate the risks of potential harms from using that kind of machine and from selecting the inputs it needs.
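One schematic way to render such a priority claim once probability estimates are added (my own hedged rendering, not a formula taken from the literature) is:

$$\text{prioritize sparing } A \text{ over } B \quad \text{iff} \quad P(x \mid I_A)\, v(x) \;>\; P(y \mid I_B)\, v(y),$$

where $v(\cdot)$ is the weight attached to the respective property and $I_A$, $I_B$ are the inputs the machine actually has about A and B. The terms $P(\cdot \mid I)$, and hence the verdict, are only fixed once we have settled which inputs the machine is given, which is precisely where the input-selection problem enters.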

There are different ways of doing this. One way would be to attempt to spell out all conditions that apply (all decisions about how to handle utilities and/or non-consequentialist values, uncertainties, and trade-offs). Alternatively, we can consider the available alternatives and see if any one of them is permissible and/or obligatory.

For example, considering the grandma problem, we may conclude that using facial recognition technologies implies too many risks of serious harm. Here we ought to consider not only the risks involved if the machine is used as intended, but also the risks involved with abuse and accidental misuse. Therefore, when evaluating potential trade-offs, we might conclude that the downsides of adding these inputs dominate, all things considered, the benefits of adding them. If so, we can consider other options, with a ceteris paribus lower degree of certainty in the evaluation. If an option is prima facie permissible in its own right (i.e., the benefits of the machine decisions seem to outweigh its risks), then we set that option aside so that it can be considered against other prima facie permissible options.

In any case, we have to repeat the process for a representative sample of alternatives. For each alternative, we must evaluate the trade-off against the different degrees of (un)certainty about the ethical machine decision (what that is may in itself be uncertain). For example, how important is it—from an ethical perspective—to be able to say that the probability that A has property x is between 0.99 and 1 rather than between 0.8 and 1? Furthermore, how would any new inputs and functionality affect the machine’s decision-making capabilities in time-sensitive situations?

Put simply, I suggest five steps that I believe need to be part of the ethical evaluation of machine decision-making henceforth. Note that we might have to go back and forth through the steps. First, we start with a normative investigation of the basic goals of machine decisions, which is already considered in the literature. Second, we need to find out what technical options there are for achieving those goals, which is something often ignored in the literature, sometimes—as previously noted—in favor of science-fiction presumptions. Third, for each option we need to identify the potential ethical trade-off between achieving the goal and the normative cost of doing so (e.g., how does the proposed functionality of the system affect its transparency or individuals’ privacy?). This is the core of what I argued for in the previous section. Fourth, with the trade-off in mind, we need to evaluate the normative value of the accuracy and robustness of machine decisions relative to the potential trade-offs. That is, what degree of certainty in achieving our goals justifies the associated risks? Are there alternatives that better protect data and achieve a higher degree of transparency? Fifth, for time-sensitive decision-making we need to evaluate all the previous considerations relative to the risks involved in time-limited decision-making. This fifth step is perhaps best considered not a separate step, but part of steps 3 and 4.

Currently, the ethical analysis of machine decisions focuses only on the first step. It ignores technical limitations; it ignores the potential trade-offs of having certain machine decision-making capabilities; proponents of the standard approach also largely ignore the normative evaluation needed of the value and disvalue of certainty and uncertainty; and it does not address the particulars of time-sensitive decision-making.

5 Summation and conclusions

In conclusion, to analyze the ethics of machine decisions we need to consider how much decisional uncertainty we can accept. Achieving an ethically perfect decision requires not only that the machine has a well-calibrated decisional algorithm, but also that it has access to all the ethically relevant facts (i.e., we need to consider which inputs are needed for decision-making). Thus, to achieve an acceptable level of decisional uncertainty, a central question is what inputs the machine needs access to. The problem is that it is prima facie difficult to know precisely which inputs are ethically relevant. More importantly, as I have argued in this article, adding inputs in most cases implies a trade-off (e.g., adding inputs puts various important values, such as transparency and privacy, at risk). These trade-offs are not inevitable, but they are so common that one must evaluate them when one evaluates what constitutes an ethical machine decision. Moreover, for decisions under time constraints, fewer inputs might yield a better output because added inputs require additional processing time. All these conclusions imply a revision of the way the ethics of machine decisions is currently being discussed: we need to take the machine, and the possible ways the machine could be constituted, into consideration; we have to consider potential trade-offs; and we have to pay particular attention to decisions under time constraints. Hence, we cannot think about machine ethics in isolated, idealized terms; we need to analyze it in context, together with the associated uncertainty and the trade-offs possibly involved in reducing that uncertainty to an acceptable level—and that is the input-selection problem.

Lastly, it should be mentioned that while I have focused on two trade-offs in this text, there are arguably other trade-offs that deserve attention in the normative analysis. For example, how should we deal with a data-set that is highly accurate but biased? That is, this article should not be read as an indication that transparency, privacy, cost–benefit, and time constraints are the only problems that need to be addressed.