The problem of explainability of tools
Not every technological invention yields genuinely new ethical and epistemological challenges. However, philosophers agree that explainability (or, as Floridi et al. (2018) put it, "explicability"), or rather the lack thereof, constitutes such a challenge (for an overview of the different dimensions of this debate, see Mittelstadt et al., 2019). AI, especially deep learning, has led to largely autonomous decision-making processes in algorithms that can rarely be fully explained in mechanistic-causal terms. This raises the questions of to what degree we should be able to explain these tools and under which conditions we ought to use them, if at all.
Different positions have been put forward in response to this question, with most granting that the explainability of AI, or its absence, poses serious ethical and epistemological challenges that ought to be addressed before implementation. Especially in discussions surrounding medical decision-making (in diagnostics, risk assessments for operations and other medical interventions, as well as prescribing certain treatments like medical drugs), the use of AI offers a high-stakes opportunity to save lives. However, without being able to properly explain the tools used to make these decisions, AI appears to simultaneously endanger long-held norms of responsibility attribution. In the following, we reconstruct the goods that increased explainability is supposed to provide to medical decision-making.
Four goods of explainability
The assumption that we ought to be able to explain how a tool works before and while we use it has immediate intuitive appeal. It seems prima facie irresponsible to even consider using a machine the workings of which we do not fully understand, let alone to use it in high-stakes contexts like medical diagnostics. From the dangers of misuse to the misunderstanding of its results, there are many reasons to be concerned by the lack of such understanding, and to value it where it is present.
This intuitive appeal of insisting on high levels of explainability for the AI in use, however, can lead to insufficient differentiation between the different types of good provided by explanations. Additionally, the question of what actually triggers the strong expectation that experts be able to explain the tools they are using may require closer examination than intuitions provide at first glance.
Rather than approaching this debate from the different kinds of explanations, as Mittelstadt et al. (2019) do, we propose to disentangle four different goods that increased explainability of medical AI can provide: preconditional, consequentialist-instrumental, deontological-conditional, and democratic goods. This distinction allows us to analyze exactly what kind of good is provided by increased explainability of an AI, and how these different goods are usually invoked in arguments for or against certain methods of AI (e.g., deep learning), the underlying training data, or the tools (e.g., diagnostic algorithms), especially with respect to medical uses. From these goods, we usually arrive at value judgments about the inclusion of explainability norms in medical decision-making. We do not claim that these are the only four goods, or that they cannot be subsumed under more inclusive terms. Rather, we chose these four goods as especially relevant to explainability.
The distinction between kinds of explanations, such as causality, simulatability, decompositionality, or post-hoc rationalizability (Lipton, 2016), is very useful to the issue of explainability: by providing different methods to increase our understanding of how specific machines work, we can achieve the desired explainability for differently programmed, machine-learned algorithms or meet our use-specific knowledge requirements in using them. Different ways of programming algorithms, their uses, and our knowledge requirements can necessitate different kinds of explanations.
However, we stress that approaching the issue of explainability from a value-theory perspective can clarify why explainability as such, i.e., indifferent to its different forms, is considered a morally desirable endeavor, and can introduce arguments that uncover tacit presumptions about our standards of medical explanations. Thus, the debates around the kinds of explanations and the value of explanations are not mutually exclusive but pursue different, equally important purposes: distinguishing kinds of explanations can help explain more algorithms in different forms, while distinguishing goods of explainability can help explain why we should explain in the first place.
Preconditional good
As a general epistemic rule, experts ought to understand the tools they are using. Even without looking at the consequences of using a machine, it appears a priori correct that it is better to know more about the machine's functions than less when operating it. This preconditional understanding or knowledge of the tools in use can be understood as a conceptual connection between "expertise" and tool-knowledge as an epistemic virtue. This condition probably has the greatest intuitive appeal ("an expert should know what they are doing!"). We can question the strength of this preconditional knowledge, however, as experts in using a machine are not the same as experts in constructing, and hence explaining, the workings of a machine. In medical contexts this shared labor of medical and technical experts becomes more apparent: the operator of an MRI machine may know how the machine works but not how to diagnose a patient based on the machine's results, whereas a radiologist may be able to explain how to diagnose the patient without being able to explain the machine that provides the patient's data.
An expert who not only knows what a machine does, but also how the machine does it, is usually considered better equipped to deal with the machine than someone who can "merely operate" it. Both kinds of knowledge are considered valuable: knowing how the machine does it usually falls under the explainability requirement, while knowing what it does is less about the machine and more about the discipline, in our case medical diagnostics. And though this may constitute a supererogatory duty of experts, in high-stakes contexts like medical diagnostics, these epistemic duties of physicians to possess the know-what and the know-how regarding the machines they are using seem to have a certain intuitive appeal.
The issue with medical AI currently lies in the lack of such assistants or technical staff who can be called upon to explain the workings of the machine. This is why some have called for specifically educated medical-technical staff who work on medical AI alone (Jha & Topol, 2016, e.g., call for conceptualizing radiologists and other diagnostic professionals as "information specialists").
Another example demonstrating the value of preconditional knowledge emerges in the case of the "never-failing oracle": if we imagine an oracle whose diagnoses are always correct without fail, the practical need to explain its workings is diminished—we would simply ask it to diagnose patient after patient and work from there. However, even then philosophical curiosity will insist on answers that explain how the oracle works and how it can be correct, as it must have access to knowledge that we would like to share in. Theoretically speaking, then, if a machine never failed at a given task, there would be no practical need to explain its workings. And yet humanity's natural inquisitiveness to explain it would persist nevertheless.
Consequentialist-instrumental good
Increased explainability comes, so goes the argument, with a better understanding of the machine's potential errors, biases, and malfunctions (Lombrozo, 2011). Therefore, in order to further improve the performance of the diagnostic process at large, and to avoid introducing new issues not previously present, we ought to achieve a certain level of explainability of the AI used in clinical contexts. This is specifically independent of current methods—the goal of constantly improving diagnostic methods will potentially require improving an AI's explainability.
Further, the higher the level of explainability, the better the AI can be adjusted for specific purposes, increasing its usefulness even more. Deep-learned AI is usually fairly inflexible, which has led to misleading promises about its performance in non-ideal (i.e., non-laboratory) contexts. With increased knowledge about the pathways and patterns of decision-making processes within the AI, we may be able to adjust its decision-making to real-life conditions.
Much of the technical research on explainability for performance improvement is primarily consequentialist, as even non-ethical considerations of performance are affected by explainability: for example, the ability to sell algorithms for a wider range of applications and increased robustness both widen markets, and both require an extended understanding of how algorithms work.
Medical AI can also profit from increased explainability of this kind, as the parameters of medical care vary heavily. Even small issues such as indoor lighting can influence the performance of an AI at this stage, as the attempt to use laboratory-tested eye-scanning algorithms in real-world settings has impressively, if unfortunately, shown (Heaven, 2020). These adjustments are best implemented with an increased understanding of how the AI works, and of what it gets wrong and why.
Deontological-conditional good
If the decision-making process of an AI directly or indirectly affects a patient, it appears reasonable to assume that explainability of the decision-making process is a condition for informed consent. While many patients probably do not care about the technological details of the diagnosing machine, just as they do not care about the technological details of current technology (but, if at all, about the diagnostic system of human physicians and their use of such technology; Montague et al., 2010), the point of the conditional good of explainability is its potentiality. It is at least theoretically possible to explain the processes of the current technology used in diagnostics, whereas this explainability is both lost in current methods of training AI and ever more needed, as the AI no longer only provides evidence on which a physician can base their diagnosis but may provide a (preliminary) diagnosis of its own.
The ability of a machine not only to take over more complex, pre-instructed cognitive tasks, but to perform most parts of the decision-making process, appears to increase the conditional requirements for explainability from the patient's perspective, which is usually connected to making informed decisions about the risks and benefits of certain treatments, decisions to which an explainable AI can contribute (Bjerring & Busch, 2020).
However, the question of responsibility attribution may arise here as well. The explainability of the tools seems to be normatively connected to the explainability of the decisions made with them (Grote & Di Nucci, 2020). In the case of medical AI, the physician ought to be able to explain their decision, and if medical AI was part of the decision-making process, physicians ought to be able to explain how the AI's process features in their own decisions. And while this may be less relevant in a context where role-responsibilities demand that physicians take responsibility even for processes partly outside of their control, the demand to lower this burden by making medical AI "inspectable" (Zerilli et al. (2019) use this term) to the degree that physicians can take responsibility seems justified.
Democratic good
Next to the specific goods that can emerge from interactions between technology and its users (physicians and their patients), the context in which these machines are created can be the source of a democratic good (e.g., Robinson, 2020). The higher the explainability of a machine, the more it can be subject to democratic discourse about its constitution.
In AI, these democratic goods of explainability extend on the one hand to its transparency, i.e., which data were used, how they were sourced, and how the algorithm was trained, and on the other hand to increased bias detection. The High-Level Expert Group on AI (HLEGAI) accordingly includes explainability and transparency as part of its "trustworthy AI" approach (HLEGAI, 2019).
Especially given the racially and gender-biased history of medicine, we should expect traces of these biases to be present in the medical training data of these algorithms. Only an open, democratically controlled and supervised process of training can ensure a decrease in biased diagnoses, which we consider distinct from the consequentialist and deontological goods. Explainability can thus be a source of improving the social conditions of human–machine interactions, as increased explainability requires transparency about potential biases.
Relative versus absolute explainability
These types of good are usually taken to establish explainability as necessary for the ethically justified use of AI in medical practice and elsewhere. However, this necessity presupposes that the explainability of AI is a goal to be achieved in its own right, as we ought to explain how the machine makes decisions in order to achieve the aforementioned goods. Following this assumption of the context-free relevance of AI explainability, several authors conclude that certain uses are ethically (im-)permissible or formulate ethical concerns about the use of AI that is insufficiently explainable. McDougall (2019), for example, claims that an unexplainable AI may reach levels of decision-making autonomy that can reintroduce an unjustified soft paternalism (Grote & Berens, 2020, concurring), while Bjerring and Busch (2020) are equally concerned about a patient's ability to consent. We call the view that AI needs to be explainable "no matter what" the "absolute explainability view."
However, the ethical contribution of the explainability of AI can only be properly understood in comparison with other methods of decision-making and their explainability, without having to deny the goods it can provide. The value of explainability is therefore best thought of as relative, such that different decision-making processes and their levels of explainability ought to be compared (the "relative explainability view"). In the context of medical diagnostics, the explainability of AI ought to be assessed in light of the explainability of physicians' decision-making processes and the norms governing them.
This is not to say that the different types of good of explainability are relative; in fact, increased explainability appears to be a purpose worth striving for independently of other practical considerations. However, in assessing just how much we have to be able to explain the decision-making processes of AI, it is possible to impose an undue double standard on AI when compared to established standards of medical explanations. While higher standards for AI (in contrast to physicians) might be justifiable, double standards are to be avoided. The harm of double standards, i.e., unduly high standards for explainable AI, becomes apparent when they are used to discredit the potential of a promising technology because it does not achieve these unduly high standards, or to prolong the regulatory process for implementing it in clinical contexts.
While normative standards should not be lowered merely to dream up a more technologically advanced, automated, but ultimately unethical clinic, the chances of improving medical care, both for the caring and the cared-for, ought to be taken into consideration when determining explainability standards.