1 Introduction

Sophisticated artificially intelligent (AI) systems are spreading to ever more sensitive areas of human life. More generally, (less sophisticated) software systems, including decision support systems (DSS), which have been used for decades at this point, influence our lives in countless ways.Footnote 1 They are used in autonomous vehicles (Levinson et al., 2011), to support hiring decisions (Langer et al., 2018), to interpret medical images in search of indications of cancer (Kourou et al., 2015), to determine recidivism scores for convicts and help determine their sentences (Hartmann & Wenzelburger, 2021), and so forth. Many of these applications are quite advanced and err less often than humans (McKinney et al., 2020). Their use not only saves their users’ time but often also helps to achieve appropriate outcomes and to prevent unwelcome or harmful consequences, e.g., car accidents or wrong medical treatments, even though these systems are not immune to error themselves.

However, many such systems are black boxes: while users can often access the systems’ inputs and outputs, they cannot access or understand, let alone reenact, what happens inside the system. One reason for this is that artificial neural networks and other non-linear machine learning systems usually employ models that involve subsymbolic representations such that even developers or data science experts cannot comprehend their inner workings (Bathaee, 2018).Footnote 2 This opacity is often taken to be problematic, especially in sensitive situations, and it has several detrimental consequences for the use of such systems: It is difficult to detect errors in the system’s operations, and (arguably) neither users nor affected parties can reasonably trust such systems or make well-founded decisions based on a decision support system’s recommendation, seeing as they cannot understand what underlies it. Moreover, one may worry that AI systems infringe on users’ autonomy (e.g., due to nudging or outright forms of manipulation) if the systems’ behavior is not interpretable to their users. And, given that it is impossible to recognize erroneous decisions or misleading recommendations, it may be difficult, and in some cases impossible, to appropriately attribute responsibility and hold anyone accountable, although in many sensitive situations it is desirable to be able to hold someone accountable (a claim we aim to support in Sect. 2). This problem of responsibility—the inability or difficulty of holding someone accountable even when doing so is desirable, which will be spelled out in more detail below—is the topic of this paper.Footnote 3

That the opacity of AI systems gives rise to these problems is intuitively plausible. Arguably, solutions have to put users and people affected by automated or algorithmically supported decisions in a position to understand what underlies the decisions of the systems. In other words, explanations must be provided.Footnote 4 The goal of research in explainable AI (XAI), consequently, is to open the black box, or at least to make it more translucent and perspicuous (Langer, Oster, et al., 2021). XAI, understood in a broad sense, is pursued by researchers from a range of disciplines.

This multidisciplinarity brings with it a variety of perspectives and emphases. For instance, a whole host of papers revolve around problems like those mentioned in the previous paragraphs; they provide arguments for XAI from the broader context of morality or society in general (e.g., Asaro, 2015; Binns et al., 2018; Cave et al., 2018; Floridi et al., 2018; Langer, Oster, et al., 2021; Lipton, 2018; Wachter et al., 2017). However, these discussions do not always tell us how exactly we can get from a need for reasonable trust, human autonomy, accountability, responsibility, or the like, to a requirement for explainable AI systems. Moreover, they typically do not tell us which kinds of explanations should be given to meet these concerns.

At the same time, there is a broad variety of technically minded papers from computer science introducing and discussing concrete methods for coaxing explanations out of AI systems (e.g., Bach et al., 2015; Kim et al., 2018; Montavon et al., 2017; Ribeiro et al., 2016; Selvaraju et al., 2017). These papers, however, simply presuppose that their results will help to fulfill the proclaimed requirement. This is not surprising, since they are usually not informed by the rich nature of concepts such as explanation, explaining, interpretation, or understanding (Miller, 2019; Miller et al., 2017).

Finally, there are a few papers, such as Wachter et al. (2018), Zerilli et al. (2018), and Miller (2019), that strive to provide a more philosophically and psychologically informed picture of the explanations that AI systems should give. However, despite proposing particular kinds of explanations (viz., intentional, counterfactual, or contrastive explanations), they remain silent on whether, or how, explanations of these kinds meet the needs which motivate the call for explainable AI to begin with, such as enabling reasonable trust, human autonomy, or responsibility. To sum up, there is little discussion of whether and how specific forms of explanations—to be provided by technical tools mentioned in the previous paragraph—deliver precisely what the arguments from a societal perspective in favor of XAI demand.

Against this backdrop, our aim is to combine ideas from all three types of papers: We begin by defending and clarifying the claim that there is a desideratum to be able to hold an individual morally responsible for morally problematic AI-supported decisions or actions in Sect. 2. We then argue that such decisions should often be made by a human in the loop who receives recommendations from a decision support system (Sect. 3). Next, by appealing to the epistemic condition on moral responsibility, we substantiate the claim that the outputs of many such decision support systems must be explainable for the human in the loop for her to bear responsibility (Sect. 4). By appealing to cases of disagreement between DSS and human in the loop, we argue that explanations of a certain kind, viz. reason explanations, are especially suitable for enabling morally responsible decision-making (Sect. 5). We conclude with some practical challenges for developing reason-giving XAI systems (Sect. 6).

2 The Challenge of Adequate Responsibility Attribution

The call for XAI is often motivated by appeal to worries about high-stakes situationsFootnote 5 in which moral harms may result from opaque systems, among them the worry that a lack of explainability leads to an inability to hold anyone accountable, or responsible, if something goes wrong. Let us first turn to why exactly it is important to be able to (appropriately) ascribe responsibility when AI systems are operating, and then to the question of what is needed to be able to do so. For this, we need to clarify what is meant by “responsibility” in the relevant contexts. To do so, we compare and contrast it with the related legal concept of accountability.

Problems of legal accountability are central to the legal concerns with XAI, for instance, in connection with discussions of an alleged EU Right to Explanation (Wachter et al., 2017).Footnote 6 Unfortunately, the term “accountability” is used in a variety of ways in this debate.Footnote 7 Decision-makers (and agents generally) are accountable, in the sense in which we are interested, when they can appropriately be held to account, i.e., when it is appropriate to demand that they explain or justify their conduct or, further, when they deserve reprimand or punishment, given that their decisions or actions are unlawful (Zarsky, 2013; Edwards & Veale, 2017; see Duff, 2007 and 2019 for a nuanced picture of criminal responsibility).

This legal term is structurally quite similar to philosophical notions of moral responsibility (Talbert, 2019).Footnote 8 Moral responsibility for an action, as discussed in philosophy, is often spelled out in terms of the agent’s blame- or praiseworthiness for the action, where this is understood in terms of its being fitting to have certain emotions towards the agent such as resentment, indignation, anger, or gratitude (Strawson, 1962). This approach has been developed in contemporary debates in various forms (see, e.g., Wallace, 1994; Watson, 1996; McKenna, 2012; Pereboom, 2014; Shoemaker, 2015). The corresponding notion of responsibility is often called “accountability,” and is distinguished from other notions of moral responsibility such as attributability or answerability. Though much of what we argue may hold for different forms of responsibility, we are concerned with responsibility primarily in the sense of appropriate praise- or blameworthiness, as exemplified by Shoemaker’s (2015, 113) notion of accountability: “One is an accountable agent just in case one is liable for being a fitting target of a subset of responsibility responses to one – a subset organized around the paradigm sentimental syndrome pair of agential anger/gratitude – in virtue of one’s quality of regard.”Footnote 9 In the following, when speaking of responsibility, accountability is what we have in mind.

Moreover, while it may be that moral responsibility presupposes causal—or more broadly—counterfactual responsibility, it goes beyond that concept: It may be that a moral harm would not have come about had I acted differently, but still I am not blameworthy, for example, because I was not aware of what I was doing. Relatedly, computer scientists will say that certain components of a system are accountable or responsible for its failure, i.e., the failure is counterfactually dependent on the performance of these components, but this does not amount to a claim that they are morally responsible or accountable in the sense we are concerned with (Chockler & Halpern, 2004; Halpern & Pearl, 2005).

Here is an everyday example of the kind of moral responsibility we are interested in. Imagine that human resources (HR) manager Herbert, who is tasked with deciding which applicant will get an important management position in the company, disqualifies April, a Black female applicant, because of her race and gender. Herbert is not seriously psychologically impaired. It is therefore appropriate to respond to Herbert’s action by blaming or reproaching him for his behavior, and in this case even by taking up legal measures against him for discriminating against the applicant (see, e.g., Title VII of the US Civil Rights Act of 1964 or the German Allgemeine Gleichbehandlungsgesetz). That is to say, Herbert is morally responsible and legally accountable for his action.

The use of AI systems can challenge the ascription of this kind of responsibility. Suppose Herbert’s company employs a fully automated hiring system to screen, rank, and select job applicants. Assume that the system ranks April in last place and excludes her from the further hiring process. Now maybe this ranking was decisively influenced by the fact that April is a Black woman, or by some other irrelevant information. If this is the case, this intuitively raises multiple concerns. One is the question of unfair algorithms and algorithmic bias and discrimination (e.g., Garcia, 2016).Footnote 10 Another is the worry that no one can be held morally responsible or legally accountable for excluding April, for there was no person who excluded her. Matthias (2004) calls this a “responsibility gap”.Footnote 11 This responsibility gap, understood as an accountability gap, will be the focus of our paper.Footnote 12 We will concentrate on cases of responsibility for biased AI-supported decisions since there is much discussion of algorithmic bias.Footnote 13

Let us sketch two motivations for closing the moral responsibility gap, that is, for making sure that there is a person who can be properly held responsible for such morally problematic decisions. We do so by focusing on the case of Herbert and April. On the one hand, there is a motivation from incentives: If someone like Herbert is morally responsible for the problematic decision or action, this means that he can fittingly be blamed for it. It is then, at least pro tanto, just to express blame or even to establish legal sanctions (McKenna, 2012, though there may be exceptions). This will plausibly motivate him to be diligent in making up his mind about whether to (follow the system’s recommendation and) disqualify the Black female applicant, so as to avoid negative consequences for himself. Such an incentive for diligent decision-making may lead to better hiring decisions and less wrongdoing (for empirical evidence, see Fehr & Gächter, 2002).Footnote 14

We acknowledge that this argument needs further detail in order to withstand counterarguments. For instance, it has been put to us that one may always be able to find someone responsible for producing or employing the DSS if it discriminates against applicants, and that this person will then have an incentive to be diligent in ensuring that no discrimination arises. However, as we point out in the next section, if the system’s discrimination is not foreseeable to anyone, there may be no one bearing indirect responsibility of this kind. Furthermore, it might be that only someone at the company that developed the system can be held indirectly responsible, but nobody at the companies that employ it. Then, the system might be applied carelessly by a great number of users who need not concern themselves as long as, for instance, the system is not taken off the market. In this case, there would be no incentive for users to avoid wrongdoing in hiring decisions.

On the other hand, there is a motivation from justice: If April suspects—or finds out—that she was discriminated against because of her race and gender, it would intuitively be desirable to enable her to blame someone for wronging her. It would be desirable to make it possible for her to get justice, in the sense of a person responsible for discriminating against her owning up to the fact that they did something wrong. She should be able to be fittingly angry with someone and to express this anger by demanding of a responsible decision-maker that they acknowledge their wrongdoing, that they apologize, make amends; it would be desirable to make it possible that they get sanctioned. To motivate this further, imagine that the responsibility gap cannot be closed. Then April’s situation is morally equivalent to the situation of another agent, call her Berta, who has been harmed by a natural disaster: Both April and Berta are harmed, nobody is responsible, and nobody is blameworthy. However, April’s and Berta’s situations are intuitively different. Many people were involved in setting up and using the system that harms April, but no human is involved in harming Berta. And it seems that this makes a difference in terms of justice, for Berta really cannot justly blame anyone, but intuitively April should be able to appropriately blame someone and may reasonably desire to do so.

Of course, here too one may raise doubts, for instance, by questioning whether justice requires being able to angrily blame someone or just being able to do something in the vicinity. Maybe all that is needed is someone who is answerable in the sense explicated in Shoemaker (2015, 82),Footnote 15 i.e., someone who is able to cite their reasons for the action and who is thereby liable for being a fitting target of responses like agential regret or pride in virtue of their quality of judgment. In our concrete case, this person would be expected to admit and regret a discriminatory hiring decision. Such answerability would not imply accountability.

Arguably, the fact that someone is answerable might already go some way toward ensuring justice for the wronged applicant. However, in the case of discrimination and other offensive treatment, it would further be desirable for an agent like April to be able to fully hold someone accountable. While it seems right that April deserves an explanation, she should also be able to be fittingly resentful for being disadvantaged based on her race and gender, and to be able to call for moral sanctions in terms of blame. This indicates that accountability and not just answerability is relevant (see Shoemaker, 2011, 616 and 621).Footnote 16 We acknowledge that these questions can be debated further. However, since our main focus is an argument to the effect that explanations are often the best way of closing the moral responsibility gap, it is sufficient for our purposes here to present these initial motivations, which could be spelled out further. In our view, the argument from incentives and the justice-based argument provide a compelling rationale for a desideratum to avoid high-stakes situations in which no one can be held responsible.

3 Why We Need Someone in the Loop

But how to make sure that there is a person who can properly be held responsible? In this section, we will argue that, if we want to ensure that a human can bear responsibility for morally problematic decisions, we often cannot—and, in fact, should not—delegate these to fully automated systems. Instead, we should keep a human in the loop: AI systems should be used merely to supply recommendations about what to do, but the final decision should be left to a human decision-maker—in our example, to Herbert.Footnote 17 However, keeping a human in the loop is, as we argue in the next section, not sufficient to ensure that there is someone who can bear responsibility. But before we can turn to the question of what is missing for a sufficient condition—and how this relates to explainability of a certain kind—we want first to give a convincing argument for requiring a human in the loop.

The obvious alternative to keeping a human in the loop to bear responsibility would be to find someone else, say, a person at the company who decided to purchase the system or (one of) the developers of the system, and to allocate moral responsibility for the specific fully automated decision to that person. What speaks against this alternative? In a fully automated decision process, no one made the decision or was able to influence it directly. So no person can bear direct responsibility for (the outcome of) the fully automated decision. A person at the company or a developer could, at most, bear responsibility indirectly given that the decision was fully automated. Indirect responsibility can be ascribed to an agent for an outcome where she is directly responsible for something else—such as her own ignorance or loss of control—which led to that outcome. In such cases, this “something else” is her fault, for instance, because she did not do enough to meet obligations to stay informed or in control (Mele, 2021; Rosen, 2003; Zimmerman, 1997).Footnote 18 One might think that someone will bear indirect responsibility for the fully automated decision by being responsible directly for employing—or designing—a faulty system, so we can also blame them, indirectly, for the particular decision made by the system.

The proposed assignment of indirect responsibility, however, runs up against an especially nasty variant of the problem of many hands (Thompson, 1980; van de Poel et al., 2015). The problem of many hands, as we understand it here, is quite generally that, in a complex situation, in which the contributions of many agents lead to moral harm, such as when large corporations and companies cause a problem, it is difficult to allocate direct or indirect moral responsibility to anyone in particular. The problem of many hands has an epistemic and a metaphysical dimension: On the one hand, it is concerned with difficulties in determining who is morally responsible and, on the other hand, with difficulties with respect to whether anyone actually is responsible.Footnote 19 It further has a practical-political dimension: Even if there is someone who is—directly or indirectly—responsible, complex situations with many contributors lend themselves to obfuscation, making it easy for companies and other agents to let themselves off the hook.

Adding a fully automated AI system to the mix compounds the problem. Suppose that a level 5 self-driving vehicleFootnote 20 kills a pedestrian. Should the blame and thus the moral responsibility for the accident be allocated to the company that built the car, to the company that supplied the car’s LiDAR (light detection and ranging), or to the company that owned and employed the vehicle for the mission it was undertaking, etc.? If one of the companies is held responsible, which person at the company is to bear responsibility? It appears that an already complex situation here is made even more confusing by the involvement of an autonomous AI system (Awad et al., 2018; Coeckelbergh, 2020; de Laat, 2018; Mittelstadt et al., 2016; Nissenbaum, 1996; Sparrow, 2007). Having a human in (or at least on) the loop, by contrast, alleviates the problem of many hands by providing at least one easily detectable and plausible candidate for bearing the direct responsibility for a particular decision that caused harm. After all, if the car were operated by a human who could reject any recommendation or decision of the system, that person would be an obvious candidate for blame.Footnote 21 This is not to deny that it may be important to allocate additional indirect responsibility to the companies involved if these could and should have done more to prevent accidents.Footnote 22

One central way in which the problem of many hands may arise is that, because of the many agents involved in a situation, agents contributing to a harm are unable to foresee that their combined actions will lead to a problematic outcome. We will explore the role of knowledge for responsibility in more detail in the next section, but even pre-theoretically, it seems problematic to hold someone responsible for a harmful outcome when it was not foreseeable by them, and so when they were not at fault for not foreseeing it. Transferring this worry to the case of fully autonomous AI systems like the hiring system used by Herbert’s company, it may well be the case that all people who might bear moral responsibility indirectly for the output of an AI system blamelessly lack relevant (fore-)knowledge regarding the system’s output. This may be true, e.g., of the developers of an AI system, of people at an accreditation agency, and of the companies employing the system. Even a thoroughly tested and generally reliable system may give a problematic output when certain features of the situation to which it responds combine in an unusual way (Edwards & Veale, 2017).Footnote 23 To focus on the issue of bias: systems cannot easily be tested for bias before or during employment because the biases for which they would be tested concern protected classes, such as sexual orientation, information about which is often not available to the developers in the data (Lepri et al., 2018). Furthermore, bias might be hidden in the statistics—even though a system, say, puts women at a disadvantage as compared to men, the overall statistics regarding its performance may look unproblematic. Thus, there are likely many cases in which neither developers nor the persons employing the system can be expected to foresee particular harmful future outputs of a system. And so they cannot properly be held (indirectly) responsible for them.
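To illustrate how bias can hide in aggregate statistics, here is a minimal, purely illustrative sketch in Python; all numbers are invented and merely stand in for the kind of summary figures a developer or auditor might inspect. The point is simply that a single overall accuracy figure can look acceptable while qualified applicants from one group receive negative recommendations far more often than those from another.

```python
# Toy illustration (invented numbers): aggregate performance can mask
# subgroup disparity in a hiring system's recommendations.

# Each record: (group, truly_qualified, system_recommends_hiring)
records = (
    [("man", True, True)] * 9 + [("man", True, False)] * 1 +
    [("woman", True, True)] * 6 + [("woman", True, False)] * 3 +
    [("woman", False, False)] * 1
)

def false_negative_rate(rows):
    """Share of qualified applicants who nevertheless get a negative recommendation."""
    qualified = [r for r in rows if r[1]]
    missed = [r for r in qualified if not r[2]]
    return len(missed) / len(qualified)

# One aggregate number that, taken on its own, looks unproblematic.
accuracy = sum(1 for _, qualified, rec in records if qualified == rec) / len(records)
print(f"overall accuracy: {accuracy:.2f}")

# Disaggregating by group reveals the disparity the aggregate hides:
# qualified women are rejected roughly three times as often as qualified men.
for group in ("man", "woman"):
    rows = [r for r in records if r[0] == group]
    print(f"{group}: false negative rate {false_negative_rate(rows):.2f}")
```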

A final problem for allocating indirect responsibility is that there may be cases in which, although we may succeed in finding a person (e.g., developers or employers of the AI system) to whom we can ascribe indirect responsibility for a particular output, it would be wrong to hold them responsible nonetheless. For it could be that the system is the best system that could have been developed—e.g., it is the least biased hiring system that it is possible to design—but still there are some fringe cases in which its output is biased, e.g., in that it puts Black queer women from a low socioeconomic background at a disadvantage. To put it differently, it may be that the overall great performance can only be achieved at the price of allowing some suboptimal outputs in rare cases. We can even imagine that the system is much better overall than a human decision-maker would be. If so, it seems that the developers or employers of the system have done nothing wrong, and so cannot be blamed. Nonetheless, it would be desirable to have someone who can bear responsibility for the individual biased hiring decision and its morally problematic outcome.

Even if a candidate for indirect responsibility could be identified (contrary to the epistemic dimension of the problem of many hands), the unpredictability of problematic outputs and the issue of overall optimal performance may prevent that candidate from bearing responsibility. This holds both for the developer of the AI system and for a customer who relies on an accredited system that she leaves to operate by itself. Bearing moral responsibility for a particular output of an AI system, even indirectly, requires that there is someone who is able to foresee it and who cannot evade blame by pointing out that they did the best they could. As argued, these conditions will often not be met.

With this argument in place, we need to add a qualification. Fully automated or autonomous AI systems may be acceptable in some cases. For instance, two of the most pressing and widely debated applications discussed in the context of responsibility gaps are autonomous driving and (lethal) autonomous weapons. Autonomous driving typically involves no human at all or at most a human on the loop. Similarly, while the mode of operation for drones in general has been moving more and more from human in the loop to human on the loop setups,Footnote 24 lethal autonomous drones involve at best a human on the loop, who can interfere with the decisions of some autonomous system that identifies potential targets. In both cases, the time available to make a decision may not be sufficient for an effective handover, let alone an “explained handover,” even if such a handover is technically possible. In light of this, a human in the loop and concurrent explainability of an output to this human may not be all things considered the best way to go, even if this entails pro tanto undesirable responsibility gaps. For example, assume that autonomous vehiclesFootnote 25 prove to be clearly superior to human drivers in certain contexts, so that critical situations only occur in a fraction of cases, while the time for an (explained) handover is too short. If this is the case, it may be that a fully automated set-up is in some cases significantly better than one involving a human in the loop, so that it may be all things considered permissible to leave corresponding responsibility gaps open.Footnote 26 However, we believe that this is true only of a limited number of cases (e.g., some cases with extreme time pressure or very low stakes), so that our argument gets a grip in a significant number of cases.

Next, are we not letting developers and companies employing decision support systems off the hook too easily? This is not so. Note the following two features of our argument. First, the strength of our claim: Our aim is to establish that keeping a human in the loop (and providing them with explanations) is one good way of ensuring that we can properly hold someone responsible. We suggest that, in some contexts, this may be the best or even the only way to go, but leave open that there may be other ways for ensuring responsibility more suitable for other situations (and some of these ways may rely on explainability or other perspicuity enhancing capabilities after the fact, to determine what went wrong in the relevant situation, see Sterz et al., 2021).

Second, the scope of our claim: Our focus is on moral responsibility, and how to ensure that there is an agent who can properly bear it in the context of AI-supported decision-making. Whether the same argument applies to related phenomena such as legal accountability is a further issue beyond the scope of this paper. One suggestion is that, even if it is not possible properly to ascribe moral responsibility to the developers or employers of an AI system, we may still be able to hold them accountable by law (e.g., by imposing a strict liability for damages arising from the operation of a car on its registered keeper). The current debate over the German law for regulating automated driving, which has been criticized for making vehicle owners liable for damages instead of manufacturers, indicates that similar problems arise in legal contexts.Footnote 27 Finally, we allow that there may be cases in which it is justifiable not to enable moral responsibility, e.g., where affected parties are compensated for not having someone to hold responsible.

Overall, we conclude that the allocation of indirect responsibility is often infeasible. Instead, we then need a person who is presented with the output during use and has the chance to interfere—a human in the loop. Since a human in the loop is made knowledgeable of the recommendation during use and makes the relevant decision herself, she is a candidate for direct responsibility for the outcome.Footnote 28

4 Connecting Responsibility to Explainability

A human in the loop is a candidate for responsibility, but there are further requirements to properly allocate responsibility to them. This is where the demand for explainability comes in. As Floridi et al. put it, ensuring “that the technology – or, more accurately, the people and organizations developing and deploying it – are held accountable in the event of a negative outcome, … would require … some understanding of why this outcome arose” (2018, p. 700, our italics).Footnote 29 To have such understanding, the human in the loop, at the time of the decision, needs access to an explanation of the DSS’s recommendation and possibly its overall functioning. Our aim in this section and the following is to motivate and substantiate the claim that explainability is needed to make the human-in-the-loop solution work, and to investigate what kind of explainability would do the job well. We do so by focusing on what a human in the loop needs in order to meet a standard condition for moral responsibility: This is, at bottom, an explanation of the system’s output.

The foundation for our reasoning lies in a necessary condition on direct moral responsibility which is widely discussed in the philosophical debate—the epistemic condition (Noorman, 2020; Rudy-Hiller, 2018). According to it, an agent is not directly morally responsible for an action unless she is aware, or in a position to be aware, of what she is doing, of the (probable) consequences of her action, of its moral significance, or of alternative options available to her. For instance, an agent who flips the switch to turn on the light and who thereby electrocutes her neighbor by an unfortunate combination of circumstances that was not foreseeable is not directly responsible for the harm caused.

One way to make this condition more tangible is to resort to a coarse-grained view of actions according to which one action can be picked out under a range of different descriptions (Anscombe, 1962; Davidson, 1963). In the example, the agent’s action can be described as flicking the switch or as turning on the light or as electrocuting the neighbor. Since she is not in a position to be aware that her action is one of electrocuting her neighbor, she is not directly morally responsible for it under that description, though she may still be responsible for flicking the switch. For Herbert and April, the crucial question then, in the context of the epistemic condition, is not whether Herbert is responsible for rejecting April, but whether he is responsible for discriminating against April.Footnote 30 In light of this distinction, the epistemic condition can then be spelled out thus:

(Epistemic Condition) An agent is morally responsible for her action or decision only if she has sufficient epistemic access to it. That she has sufficient epistemic access to it entails at least that she is in a position to know the action under relevant descriptions.Footnote 31

The epistemic condition on moral responsibility can be used to provide two motivations for making decision support systems explainable—the first motivation will be introduced by appealing to an initial case, and the second by appealing to a fleshed-out version of this case. Our initial case is the hiring case in which HR manager Herbert is a human in the loop and makes the final hiring decision, but does not have an explanation of the hiring system’s recommendation. Assume that, before his company started to employ the decision support system, Herbert used to be the HR manager who competently and responsibly made hiring decisions for his company, and that he will continue to do so, using the DSS’s output as one source of support. We focus on human in the loop cases like Herbert’s, in which the decision-maker relies on a DSS and no other AI systems play a role.

Imagine that Herbert decides to exclude April’s application because the hiring system recommended doing so. Imagine further that the system’s recommendation is due to its bias against Black female applicants, but that, since it is an accredited system, Herbert justifiably believes that it has no such problems. Herbert is therefore not indirectly responsible for discriminating against April—he is not to blame for being unaware of the system’s bias. If he is responsible, he must be directly responsible, which requires his being in a position to know what it is that he is doing, its probable consequences, and its moral significance. As described, if he does not have access to what moved the DSS to provide its recommendation, then his AI-supported decision will be made without him being in a position to know these things. Herbert is aware that he rejects April’s application, and so he is aware of his action under that description. But he is not in a position to know that what he is doing, under another description, is to discriminate against her. Nor is he in a position to know that he unfairly rejects her application and that this is an act of moral wrongdoing. Consequently, he is not morally responsible for discriminating against April.

Once a meaningful explanation of the recommendation is available to the decision-maker, we can more easily bridge the responsibility gap. For instance, assume that the system discriminates against April directly on the basis of her race and gender. Then, if Herbert has access to this fact, he does have access to—is in a position to know—the fact that to reject her application on this basis is to discriminate against her; and that it is unfair and an act of moral wrongdoing. But even in the case where the system discriminates against April based on a learned correlation involving some otherwise innocent proxy variables such as, say, April’s Alma Mater, her hobbies, and her zip code, explanations may enable Herbert to get the right kind of epistemic access. For the proxies will typically be either suspicious or seemingly irrelevant. In both cases, Herbert should doubt the system’s recommendation: If the system indicates that it considers the combination of April’s Alma Mater, her hobbies, and her current zip code to be particularly crucial, this may catch Herbert’s attention: Is this not one of the historically Black colleges and universities? And is that not a primarily Black neighborhood?

In any case, an explanation allows Herbert to become suspicious and to pay particular attention to the role played by other factors. Herbert can then check, if necessary, whether candidates with otherwise similar profiles are rated similarly. In this case of proxy-based discrimination, Herbert may not be sure that discrimination is present, but given sufficient background knowledge and awareness of the danger of discrimination by models, he can develop an initial distrust and at least begin to consider that other descriptions of the situation might be relevant. He is therefore in a position to know at least that a decision that follows the system’s recommendation may very well be discriminatory. So, while explanations may not guarantee in all cases that the epistemic condition on moral responsibility is met, they clearly facilitate its fulfillment.Footnote 32
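For readers who want to see the proxy mechanism spelled out, the following is a minimal synthetic sketch (all feature names and numbers are invented; it is not meant to model the system in our example). A classifier that never receives the protected attribute can still learn to penalize a correlated zip-code proxy, and even a crude coefficient-based explanation makes this visible to a human in the loop.

```python
# Minimal, self-contained sketch of proxy-based discrimination on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)                 # protected attribute (1 = disadvantaged group)
zip_proxy = (rng.random(n) < np.where(group == 1, 0.8, 0.1)).astype(int)  # correlates with group
experience = rng.normal(5, 2, n)              # legitimate feature

# Biased historical decisions: equally qualified applicants from the
# disadvantaged group were hired less often.
logits = 0.8 * experience - 4.0 - 2.0 * group
hired = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# The model never sees `group`, only the legitimate feature and the proxy ...
X = np.column_stack([experience, zip_proxy])
model = LogisticRegression().fit(X, hired)

# ... yet even a simple coefficient-based "explanation" reveals that the
# zip-code proxy carries substantial negative weight -- exactly the kind of
# cue that should make a human in the loop suspicious.
print(dict(zip(["experience", "zip_proxy"], model.coef_[0].round(2))))
```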

Let us turn to our second motivation. At least on one way of fleshing out Herbert’s situation further, his epistemic situation is even worse than has become apparent so far. Our fleshed-out scenario shows that, if a decision-maker cannot tell why a DSS provided the recommendation it did, then there may be situations, particularly situations of disagreement between system and decision-maker, in which he cannot tell whether his decisions bring him closer to his goals. As a consequence, he is unable to guide his decisions so as to pursue these goals, or to execute his intentions in acting. This gives rise to an especially threatening way in which an agent lacks epistemic access to his action, and thereby also lacks moral responsibility for it.

Here is the fleshed-out scenario. Imagine that Herbert, at the end of a lengthy selection procedure, is presented with a list of three applicants that the DSS ranks as the top candidates; the system recommends keeping them in the running for the position. April did not make the list, but made it into the top ten. However, Herbert, by going through the top ten applications independently, counted her among the top three applicants beforehand. So we have a case of disagreement between the system’s recommendation and Herbert’s initial judgment. Since there is no explanation of the system’s recommendation available, Herbert cannot reasonably resolve the disagreement.

Here is how this might happen: Say that his own assessment of April’s qualities is due to good, but not conclusive reasons—she has more relevant work experience than most; received great grades in her studies at Yale; speaks a foreign language, which is useful but not absolutely necessary for the job; and has work experience abroad. (By saying that his reasons are not conclusive, we mean that they are weak enough that he may reasonably question his own judgment if the system gives a contrary recommendation.) On the other hand, the system was accredited to be reliable by a trustworthy watchdog organization, though Herbert is aware that systems of this kind may have hidden bugs or biases. In this situation, the system’s countervailing recommendation leaves open both the possibility that Herbert correctly assesses the situation and the system is mistaken and the possibility that the system has a superior understanding of the situation, and Herbert is in the wrong. In the first possibility, the system’s recommendation may be due to some kind of bug, or to its bias against women of color; in the second possibility, the system’s recommendation may be due to the fact that it has access to information Herbert does not have, or detects patterns that Herbert misses. Say that the system relies on all of Herbert’s reasons for taking April to be among the top three candidates (her great grades from Yale, her foreign language competences, etc.). However, it has detected that applicants with these qualifications taken together tend to move on to other, better jobs very quickly. So the system detects a pattern which turns what would otherwise be great reasons for hiring a candidate into a reason against hiring her.

This illustrates that, in a particular situation, Herbert may be unable to tell whether he is in one of two relevant cases:

  • Case 1: The system’s recommendation is mistaken and Herbert’s assessment is right.

  • Case 2: The system’s recommendation is correct and Herbert’s assessment is wrong.

Given that the two cases are indistinguishable to him, he cannot reasonably resolve the disagreement. For he cannot compare or reconcile his own and the system’s reasons for or against keeping April in the running, and so cannot figure out which reasons are superior, e.g., by weighing them against each other. Consequently, if he decides to keep her in the running, this decision is arbitrary; but if he decides to exclude her from the short list, that decision is also arbitrary.Footnote 33 The lack of access to the system’s reasons undermines Herbert’s ability to come to a well-founded all-things-considered judgment about which applicants to keep in the running.

In light of his inability to come to a well-founded all-things-considered decision, Herbert is then unable to competently pursue his goal. Say he is genuinely trying to find the best candidate for this prestigious, responsible position at his company. Since he is unable to tell which is the proper means to doing so—keeping April in the running or excluding her—he is thereby unable to respond to pertinent reasons in pursuit of his goal. In other words, he cannot properly guide his decisions in light of his goals, so as to execute his intentions. This undermines his ability to find the best candidate or to reach various related goals. Imagine Herbert is instead trying to damage the company by hiring an unsuitable candidate. Again, since he cannot tell whether it is his or the system’s assessment of April that is right, he is unable to tell whether excluding April would be a good means to pursuing this goal, and this undermines his ability to guide his hiring decision in response to pertinent reasons.

In the fleshed-out scenario, Herbert is especially epistemically impaired: He is not in a position to know either of his options under the relevant descriptions. He cannot tell whether, if he complies with the system’s recommendation, his decision is one that wrongs April; but neither can he tell whether, if he goes with his own initial assessment, his decision can be described as one of harming his company. In this fleshed-out version of the scenario, then, Herbert’s access to his decision is undermined in a more severe way. Because of this more wide-ranging epistemic disconnect, Herbert is not directly morally responsible for his AI-supported decision.Footnote 34

Of course one might object that cases of disagreement are insignificant outliers. Typically, the decision-maker will agree with the system’s recommendation. However, if this objection were right, the use of decision support systems would be pointless. If the system’s recommendation allows for well-founded decision-making only where it supports what the decision-maker would choose anyway, then it is pointless to combine a DSS with a human in the loop for the hiring decision. From Herbert’s perspective, adding the DSS does not improve his decision-making; from the perspective of the company, keeping a human in the loop does not add an advantage over employing a fully automated system. The system can lead to better decision-making exactly by way of disagreement with the decision-maker where there is room for changing his mind. So, exactly when it counts—when the decision-maker has reasons that are not conclusive, and the system makes a recommendation that is potentially better than his take on the situation—the system undermines the decision-maker’s epistemic access to his decision, and thus his moral responsibility.

Without explainability, we face a dilemma for human in the loop scenarios: It is either pointless to have the system provide a recommendation to the human decision-maker (in cases where human and system agree, or when the decision-maker has conclusive reasons anyway), or the lack of explainability undermines his epistemic access to his decision and thus the moral responsibility which the human in the loop is supposed to bear (in cases where human and system disagree, while the human has non-conclusive reasons). Now the second horn of the dilemma is due to the fact that the decision-maker has no access to why the DSS provided a certain recommendation. If he had a suitable explanation of the system’s recommendation available, so that he would be able to compare his reasons with the system’s reasons, he would be in a better position to figure out whether it is the system’s or his own assessment of the situation that is correct. So, he would be able to resolve the disagreement in a non-arbitrary way, thereby be able to make the hiring decision that best suits his goal (finding the right person for the job), and thus be in a position to know his decisions and actions under the relevant descriptions. We conclude that, in many cases of disagreement where the decision-maker’s reasons are non-conclusive, he is in a position to bear direct responsibility for his decision just in case he has a suitable explanation of the system’s recommendation available.

To sum up, a human decision-maker needs explanations. These enable responsible AI-supported decision-making by enabling the agent to meet the epistemic condition in cases like the ones discussed in this section.

5 The Advantages of Reason Explanations

Which form should an explanation take to ensure that decision-makers are morally responsible for their AI-supported decisions? While different kinds of explanations could enable responsibility when properly interpreted by human decision-makers, reason explanations are particularly well-suited for this job. They are the ones that humans typically use when trying to understand and explain action, when exchanging justifications for actions and recommendations, and when trying to resolve disagreements (Alvarez, 2010; Hieronymi, 2011). Just like human experts would provide reasons for their recommendations, so should decision support systems. In this section, we spell out how reason explanations help to resolve different kinds of disagreement between humans in the loop and DSS and what kind of reasons are needed for the job.

Before returning to the disagreement case and illuminating what reason explanations for decision support systems should look like, let us first clarify what reasons are and which kinds of reasons figure in reason explanations. In the philosophy of action, reasons are categorized by the distinction between normative and motivating reasons (Alvarez, 2017; Hieronymi, 2011; Mantel, 2018). We here apply—without defending it—this widely accepted philosophical distinction to the recommendations of decision support systems. Normative reasons are facts that objectively favor or disfavor an action (such as the action recommended by a DSS). All normative reasons, taken together, make the action right or wrong. For instance, the fact that eating vegetables is healthy counts in favor of my eating vegetables. Applied to decision support systems, we may say that normative reasons are the facts which favor or disfavor a DSS’s recommendation and the recommended action. When a system’s input data contains information that fits the facts and supports the recommended action over another, we can say that the system has available normative reasons favoring a certain recommendation.

Although ideally a DSS has normative reasons available, reason explanations should focus on motivating reasons instead, because systems can make mistakes. A motivating reason is a consideration that an agent relies on in acting, a consideration “for which someone does something, a reason that, in the agent’s eyes, counts in favor of her acting in a certain way”—whether or not it is a fact and actually favors the action (Alvarez, 2017). Motivating reasons stand at the intersection between explanation and justification insofar as they help to explain the output in the light of what the decider took to justify or favor it (Hieronymi, 2011). Unlike normative reasons, motivating reasons can include merely apparent facts, i.e., non-obtaining states of affairs or false propositions that the agent falsely takes to obtain (Dancy, 2000; Schmidt, 2018). For instance, that spinach is a good source of iron is a merely apparent fact. Even though it is not the case that spinach is a good source of iron, this can be the reason which motivates me to eat spinach—since I mistakenly believe that spinach is a good source of iron, in my eyes, this favors the action, and it is the light in which I act. If a motivating reason is not mistaken, we say that it corresponds to a normative reason.

We suggest what one might call a functional picture of motivating reasons, on which “favoring in an agent’s eyes” is not interpreted as entailing awareness. We talk of motivating reasons more loosely to pick out information which plays a certain role in determining the output of a system, e.g., in whether or not it recommends a certain action. With this functional characterization in mind, it becomes feasible to transfer reasons to decision support systems. A DSS can then be described as providing recommendations on the basis of reasons available to it, or, to put it differently, as treating something within its inputs as reasons for its recommendation. For it can be correct that the system provides a certain recommendation because it has certain information (i.e., motivating reasons) available. Note that this does not yet commit us to the claim that there is a form of (non-deflationary) reasoning to be found within that system.

Turning next to reason explanations, a reason explanation explains an action in terms of an agent’s motivating reasons—that is, in terms of the information or misinformation that led her to the action. Ideally, then, a reason explanation of a system’s recommendation will include only the information on which the system relied in producing its output—the information contained in the data available to the system on which it relied in providing its recommendation. The explanation refers to the information which actually contributed to the system’s coming to a particular recommendation, and not to confabulations. This is not to say, however, that the reason explanation refers to all the information that made a contribution to the recommendation or decision. Although agents may be aware of a huge number of pro and contra considerations and may be led to an action by such a bundle of reasons, most reason explanations of human action focus on just one or a few contextually relevant motivating reasons. Even if a DSS takes into account much more information than a human would in providing a recommendation, this complexity therefore does not rule out providing a simple reason explanation for its recommendation, for such explanations typically do not require naming all of the motivating reasons but only the most relevant ones. What it does require, of course, is singling out some contextually relevant pieces of information, and especially the most significant ones.

Typically, humans have no access to the reasons on which a DSS bases its recommendations or to the roles they play in producing these recommendations. In order to be able to offer reason explanations, therefore, one would ideally be in a position to examine the actual decision-making processes of the system and to present the involved reasons and inferences accordingly. But presuming this would be naïve. More and more DSS are based on modern developments in AI. Neural networks and support vector machines, which operate on high-dimensional data spaces, seem to elude precisely this form of access and understanding of the internals, which has earned them the title of “black boxes” (Bathaee, 2018).

There are several obstacles to providing reason explanations for the recommendations of such DSS: First, there might simply be no decision process in the relevant sense. Perhaps a system learns to solve a particular task without any representation or structure at all. The concept of tacit knowledge (compare Polanyi’s paradox, Autor, 2014; Polanyi, 1966) and the distinction between “knowing how” and “knowing that” (Bathaee, 2018) may be relevant in explaining how such systems can prepare recommendations and make decisions without relying on reasoning processes. Importantly, though, such systems will still offer systematic, non-random outputs relative to inputs. Otherwise they would just be random generators. But they are not—many such systems work really well, i.e., reliably provide extremely useful and fitting outputs.

Second, however, our inability to provide explanations for a DSS’s outputs may be rooted in an epistemic deficiency: We simply do not gain access to hidden reasoning processes. A typical explanation of this is that the reasons and processes are represented in a distributed manner at the subsymbolic level of artificial neurons (Goodfellow et al., 2016). But if these processes elude our access, we can certainly not easily provide them or the reasons involved therein.

And even if we could access such reasoning processes, there is a third reason why we might fail to provide the right kind of reason explanations: It is possible that the actual reasons and reasoning processes simply cannot be processed and grasped by humans, i.e., that they are incomprehensible to us (Armstrong et al., 2012). This could be the case because they are too high-dimensional to be visualized or otherwise too complicated to be suitably represented. Alternatively, such systems might use a conceptual scheme that is too different from ours to be expressible in human terms and that therefore resists translation (for doubts concerning the meaningfulness of this last claim, see Davidson, 1973).

However, these obstacles do not render the pursuit of reason explanations an impossible, hopeless endeavor. For one, the reason explanations we give for human actions are useful even though they are often approximations of far more complex processes (and may similarly face problems such as members of different cultures or linguistic communities having different conceptual schemes, or the connectionist structure of and processing in the human brain). For another, even complex reasons and reasoning processes—given they do exist—can in principle be approximated. A satisfactory account of how this is possible lies beyond the scope of this paper, but discussions in the philosophy of science regarding the non-factivity of understanding (Elgin, 2007), surrogative reasoning (Contessa, 2007), as well as idealization and approximation with respect to models (Frigg & Hartmann, 2020; Potochnik, 2007; Strevens, 2017) indicate a way forward. Where such reasons and reasoning processes do not exist, we can still generate sufficiently good explanations externally by methodically interpreting the systematic behavior of the DSS.Footnote 35

Indeed, many existing explainability methods do something along these lines. LIME (Ribeiro et al., 2016) is a good example of this. To explain the prediction for some input, LIME approximates a complex model locally around this input by a simpler model that can then easily be explained. In other words, what is used to explain the prediction is not the original model (that may elude understanding because of its high-dimensionality), but a simpler model (with fewer dimensions) that behaves like the original model for inputs similar to the input in question. Similarly, we could generate reason explanations for a complex system by constructing a simpler system that locally approximates (relative to some observed prediction or recommendation) the original DSS. To do so, we would have to construct the simpler system in such a way that we can properly attribute reasons to its decision-making process, while staying sufficiently faithful to the behavior of the original DSS. That is to say that the simpler system has to give more or less the same recommendations for sufficiently similar inputs (for a suggestion along these lines, see Baum et al., 2017).Footnote 36
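To make the surrogate idea concrete, here is a minimal sketch of a LIME-style local approximation. It is only a sketch under stated assumptions: it does not use the LIME library itself, and the stand-in "black box" and feature names are invented for illustration. The idea is to perturb the input of interest, query the opaque model, and fit a simple distance-weighted linear model whose largest coefficients pick out the locally most influential pieces of information, which could then be verbalized as candidate motivating reasons.

```python
# Illustrative LIME-style local surrogate (not the original LIME implementation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Stand-in black-box DSS trained on synthetic data (purely for illustration).
X_train = rng.normal(size=(2000, 5))
y_train = ((X_train[:, 0] + 0.5 * X_train[:, 1] * X_train[:, 3]) > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def local_reasons(x, feature_names, n_samples=1000, scale=0.5, top_k=3):
    """Fit a simple weighted surrogate around x and return its top-weighted features."""
    perturbed = x + rng.normal(scale=scale, size=(n_samples, x.size))
    preds = black_box.predict_proba(perturbed)[:, 1]          # query the black box
    weights = np.exp(-np.sum((perturbed - x) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    ranked = sorted(zip(feature_names, surrogate.coef_), key=lambda p: -abs(p[1]))
    return ranked[:top_k]

# The top-weighted features are candidates for the locally most relevant
# "reasons" behind the recommendation for this particular input.
x = X_train[0]
print(local_reasons(x, ["experience", "grades", "languages", "tenure_risk", "other"]))
```

A reason-giving XAI system along the lines suggested above would still have to translate such locally influential features into contextually relevant considerations, but the sketch indicates how a simpler, locally faithful model can serve as the raw material for doing so.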

Reason explanations allow humans to assess a system. For example, such explanations make it in principle possible to assess whether the system’s motivating reasons are, or correspond to, normative reasons that favor the recommended action.Footnote 37 A well-working system responds to facts which are normative reasons.Footnote 38 This means that the system’s recommendation will be actually favored by the facts, and the system will, in general, be robustly responsive to the facts, so it is no mere luck that it provides a good recommendation, but it does so across a broad range of situations.Footnote 39 By contrast, if the system is completely off track, a reason explanation of its outputs may not mention any normative reasons at all but only non-obtaining or irrelevant considerations. Even so, the explanation would be very useful to the person in the loop—for instance, by revealing that the system is malfunctioning in a particular way.

Let us apply these thoughts to our example of a hiring system.Footnote 40 If a hiring system recommends hiring a certain candidate for a job, a normative reason for that recommendation would be any fact that indeed obtains and that objectively favors hiring the candidate, such as the fact that she is very clever, well educated, and works accurately even under pressure. Many normative reasons may not be available to a DSS, for instance, because a CV does not fully disclose a candidate’s personality and capacities. But given useful and human-processable reason explanations, a human in the loop should be able to incorporate further reasons available to her into her reasoning process—which is a crucial part of her role.

Now return to our disagreement case. Suppose Herbert has what he takes to be good, but not conclusive, reasons for keeping April in the running, whereas the system excludes her from the top three applicants. What exactly does Herbert need in order to resolve this disagreement in a non-arbitrary way? He needs to be able to determine which party to the disagreement is in the wrong: whether one party overlooked normative reasons that the other recognized, relied on mistaken motivating reasons, gave certain reasons too much or too little weight, or the like. In many situations, there are further features which modify normative reasons by disabling, attenuating, or strengthening them. They, too, need to be considered, as we will show below. In sum, the decision-maker has to be in a position to figure out which of the following possibilities are at the root of the disagreement:

  • Disagreement of fact: System and decision-maker represent reality differently. They treat different propositions as facts or assign different uncertainty measures to the same propositions.

  • Disagreement of relation: System and decision-maker treat different purported facts as favoring (or disfavoring) a course of action, assign different strengths to favoring (or disfavoring) relations, or differ over whether certain purported facts interfere with favoring (or disfavoring) relations, e.g., by disabling or attenuating them.

The human in the loop wants to check the motivating reasons on which the system relies and thus to identify disagreements of relation and disagreements of fact. For instance, there is a disagreement of fact if Herbert falsely believes that April has more relevant work experience than the others, whereas the system does not. If the system’s rejection of April is explained by appeal to the reason that she has less work experience, this will enable him to double-check his information and to reasonably revise his original judgment (or to stick with it, if the mistake lies with the system’s representation of the facts).

A disagreement of relation obtains, for example, if Herbert and the hiring system assume that the same facts obtain but assign them different roles or different relations to the decision in question. This may be the case when they take the reasons in a situation to have different weights or to count in favor of different, mutually exclusive decisions, or when they disagree over whether these facts are reasons at all, or over whether some features modify the given reasons as intensifiers, attenuators, enablers, or disablers. To illustrate a disagreement based on different assessments of modifiers, suppose that the DSS detects a pattern that Herbert misses: applicants with April’s (otherwise positive) traits taken together tend to move on to other, better jobs very quickly. Here, facts that would individually be strong reasons to hire April together constitute a reason against hiring her.
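To make the two kinds of disagreement more tangible, the following schematic sketch (our own toy encoding, not a proposal from the literature) represents the system’s and Herbert’s stances explicitly and separates disagreements of fact from disagreements of relation; all propositions, roles, and numerical values are illustrative assumptions.

```python
# A schematic sketch of how the two kinds of disagreement could be detected once
# both parties' stances are made explicit. Propositions, roles, credences, and
# the tolerance threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Stance:
    facts: dict        # proposition -> credence that it obtains
    relations: dict    # (proposition, role) -> strength of that role
                       # roles: "for", "against", "disabler", "attenuator", ...

def disagreements(system: Stance, human: Stance, tol: float = 0.2):
    of_fact, of_relation = [], []
    for p in set(system.facts) | set(human.facts):
        if abs(system.facts.get(p, 0.0) - human.facts.get(p, 0.0)) > tol:
            of_fact.append(p)
    for r in set(system.relations) | set(human.relations):
        if abs(system.relations.get(r, 0.0) - human.relations.get(r, 0.0)) > tol:
            of_relation.append(r)
    return of_fact, of_relation

system = Stance(
    facts={"more_work_experience": 0.1, "pattern_of_quick_job_changes": 0.9},
    relations={("more_work_experience", "for"): 0.0,
               ("pattern_of_quick_job_changes", "against"): 0.7},
)
herbert = Stance(
    facts={"more_work_experience": 0.9},   # Herbert's mistaken belief
    relations={("more_work_experience", "for"): 0.8},
)

facts, relations = disagreements(system, herbert)
print("disagreements of fact:", facts)
print("disagreements of relation:", relations)
```

In practice, of course, making the system’s stance explicit in this way is exactly what reason explanations are meant to provide.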

To take a more complex case (see Fig. 1), both Herbert (“H”) and the DSS (“S”) may be apprised of the fact that April was accepted at Yale after her mother bribed the school (“Bribing”).Footnote 41 The system counts this fact as a disabler (dashed arrow): given that she was accepted at Yale because of a bribe, the system does not take the fact that she studied at Yale (“Yale”) to be a reason in favor of hiring her (“High Rank”). Moreover, it takes the fact that her mother bribed the school as a reason against hiring her, treating it as evidence of a lack of moral integrity. By contrast, Herbert, a person of low moral character, takes the fact that April’s mother is willing and able to use bribes to pave her daughter’s way as a (prudential) reason to hire her (continuous arrow). To his mind, this fact indicates that April is from a rich, well-connected family and will therefore be an asset to the company. So, while he also treats this fact as a disabler, he treats it not as a reason against hiring her, but as a reason that favors hiring her.

Fig. 1 Yale bribing example

Again, if Herbert receives the system’s assessment that the fact that her mother bribed the school disables the fact that April went to Yale as a reason to hire her, he is in a position to integrate this knowledge into his own decision-making. For instance, he might then discount the system’s recommendation because the system is blind to the importance of coming from a well-connected family; or he might come to realize that it is more important to fill this position with someone who made it into an excellent university without bribery, and comply with the system’s recommendation. Either way, he will meet the epistemic condition with respect to his decision, and bear moral responsibility for it.

Generally speaking, disagreements are resolved by taking into account the reasons of the other party. The decision-maker needs a grasp of what reasons the system operated with and how it treated pieces of information, e.g., as reasons or as disablers. That is to say, a reason explanation needs to state explicitly which pieces of information served as reasons for or against a certain recommendation and which pieces of information served as modifiers of reasons. Furthermore, the explanation needs to include the strengths of these reasons. If the decision-maker has access to this information, he can reassess his information about the facts as well as the roles he assigns to them (reasons, disablers, attenuators, and so on). He can then come to an all-things-considered decision that integrates all relevant facts in a coherent way, weighing the relevant reasons against each other, and he is in a position to know his decision under the relevant descriptions and, thus, to be morally responsible for it.Footnote 42
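What such an explicit reason explanation could look like is sketched below for the Yale case; the encoding, the chosen roles, and the numerical strengths are illustrative assumptions rather than a fixed proposal.

```python
# A minimal sketch of a reason explanation that states reasons, modifiers, and
# strengths explicitly, using the Yale bribing example. All items and numbers
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ReasonItem:
    fact: str          # the piece of information
    role: str          # "for", "against", or "disables: <target fact>"
    strength: float    # weight attached to the role

def render(recommendation: str, items: list) -> str:
    lines = [f"Recommendation: {recommendation}"]
    for item in sorted(items, key=lambda i: -i.strength):
        lines.append(f"  - {item.fact}: {item.role} (strength {item.strength:.1f})")
    return "\n".join(lines)

system_explanation = [
    ReasonItem("mother bribed the school", "against", 0.8),
    ReasonItem("mother bribed the school", "disables: studied at Yale", 0.9),
    ReasonItem("studied at Yale", "for", 0.0),   # disabled, hence no weight
    ReasonItem("works accurately under pressure", "for", 0.6),
]

print(render("do not rank April in the top three", system_explanation))
```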

Reason explanations are the kind of explanation we typically use to communicate reasons among humans. Other forms of explanation, by contrast, would seem to make resolving disagreements much harder. Imagine that Herbert is provided with the following explanation of why the system recommended against April: if her mother had not bribed the school, it would have recommended hiring April. Such a counterfactual explanation, as suggested by Wachter et al. (2018), indicates that the facts mentioned in the explanation were taken either as reasons against hiring April or as disablers of other reasons to hire her (in the example, both are the case). But as this example illustrates, Herbert may still be unable to tell which of the two roles a fact played (reason against, disabler, or both); with one role assignment he agrees, with the other he does not. It will therefore take extra work for him to assign the facts their proper roles and to integrate them correctly into his own reasoning.
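The underdetermination can be made vivid with a small sketch: two toy systems that treat “Bribing” differently (as a reason against hiring in one, as a mere disabler of “Yale” in the other) nevertheless yield the same single-fact counterfactual. All names, weights, and thresholds are illustrative assumptions, and the counterfactual generator is a bare-bones stand-in for the proposal of Wachter et al., not their method.

```python
# Two toy systems with the same inputs and outputs but different reason
# structures. A single-fact counterfactual explanation cannot tell them apart.
def system_A(facts):
    # "Bribing" is only a reason against hiring (illustrative weights).
    score = (1.0 if facts["Yale"] else 0.0) - (1.5 if facts["Bribing"] else 0.0)
    return "hire" if score > 0 else "reject"

def system_B(facts):
    # "Bribing" only disables "Yale" as a reason; it carries no weight of its own.
    yale_counts = facts["Yale"] and not facts["Bribing"]
    return "hire" if yale_counts else "reject"

def counterfactuals(decide, facts):
    """Single-fact flips that would change the recommendation."""
    base = decide(facts)
    return [f for f in facts if decide({**facts, f: not facts[f]}) != base]

april = {"Yale": True, "Bribing": True}
for name, decide in [("A", system_A), ("B", system_B)]:
    print(f"system {name}: {decide(april)}; flips if we change: "
          f"{counterfactuals(decide, april)}")
```

Both systems return the same counterfactual (the recommendation would flip if “Bribing” were false), even though the role the fact plays differs between them; a reason explanation would make that difference explicit.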

6 Open Questions and Future Work

In this paper, we have argued that, to close responsibility gaps, we often need a human in the loop who is in a position to bear direct responsibility for her AI-supported decisions. However, for a human in the loop to be in a position of directly responsible decision-making, she needs the right kind of epistemic access to the relevant features of her action. We have argued that the epistemic condition on moral responsibility often cannot be met by the human in the loop if she has no access to the system’s motivating reasons for its recommendation. We have explained how meeting the epistemic condition translates into certain abilities in practice, first and foremost the ability to recognize and resolve disagreements of different kinds between human and machine. And we have argued that reason explanations are theoretically well suited to restore epistemic access, drawing on a background picture of motivating and normative reasons from the philosophy of action, which we have begun to transfer to decision support systems and their recommendations.

However, all this can only be a starting point. Several empirical and technical tasks remain on the path to useful machine-generated reason explanations. In a further, empirical step, one could ask how many and which motivating reasons need to be provided (especially when a DSS processes large amounts of information) and how they need to be presented in the explanation of a recommendation so that the human can make the best use of it. This includes the question of how the strength of reasons and the different roles of modifiers, such as disabling, attenuating, or intensifying, should be represented. What is needed is an explanation that users can understand, and often one that they can comprehend quickly. When humans give explanations, they intuitively present information selectively and focus on the information that seems relevant in the context of a given question. Providing more information than necessary can be distracting, and it leaves the recipient of the explanation with the time-consuming task of singling out the bits that are most relevant; this can be counterproductive.Footnote 43 Hence, the explanation presented initially would ideally involve only the most relevant motivating considerations, while flagging their respective roles. Less relevant motivating reasons would be provided only upon request for more detailed information. But which reasons are relevant, and why? This calls for further research, especially of a psychological and normative kind.

Relatedly, it seems problematic to try to provide general principles determining which reasons will be the most relevant elements of a reason explanation. For the relevance of a reason is not determined solely by its significance within the specific reasoning process, but may well also be a function of the aims and background knowledge of the human who receives the explanation. This seems to call for an interactive way of explaining that allows the human to dive deeper into the why, a typical challenge for human–computer interaction.Footnote 44

This brings us to the question of how, quite generally, reason explanations of AI systems’ outputs should be represented to their addressees. What is a suitable data structure for reason explanations? We need a way to represent not only reasons, but also their relations and quantitative information such as their weights and any associated uncertainty. Formal, graph-based approaches to reasons (Horty, 2012) as well as argumentation and dialectical frameworks (Amgoud & Prade, 2009; Baum et al., 2018, 2019; Dung, 1995) might lead the way. Further research along these lines calls for input from both theoretical and applied computer scientists.
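As a very first illustration of what such a formal substrate could look like, the following sketch implements an abstract argumentation framework in the style of Dung (1995) and computes its grounded extension by iterating the characteristic function. The arguments and attacks are illustrative; the weights, reason roles, and uncertainty that our account additionally requires would still have to be layered on top.

```python
# A minimal Dung-style abstract argumentation framework. The grounded extension
# (the least fixed point of the characteristic function) collects the arguments
# that are defensible without making any contentious commitments.
def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, attacked) pairs."""
    def defended(candidate, accepted):
        # An argument is acceptable w.r.t. `accepted` if every attacker is
        # itself attacked by some already-accepted argument.
        return all(any((d, a) in attacks for d in accepted)
                   for (a, b) in attacks if b == candidate)
    accepted = set()
    while True:
        new = {arg for arg in arguments if defended(arg, accepted)}
        if new == accepted:
            return accepted
        accepted = new

# Illustrative arguments loosely inspired by the Yale example.
args = {"hire_april", "bribing_objection", "bribe_was_coerced"}
attacks = {("bribing_objection", "hire_april"),
           ("bribe_was_coerced", "bribing_objection")}
print(sorted(grounded_extension(args, attacks)))
# -> ['bribe_was_coerced', 'hire_april']: with the objection itself defeated,
#    the argument to hire April is reinstated.
```

How such frameworks should be extended to carry strengths of reasons and the modifier roles discussed above is precisely one of the open research questions.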

In short, we believe that equipping decision support systems with the ability to give reason explanations is not only imperative, but also opens up several interesting and highly interdisciplinary lines of research for the future.