In the context of this normative approach, we will now explore how diffusions of responsibility unfold in the use of AI-CDSS along three dimensions of the attribution of responsibility. These dimensions, drawn from the three fundamental changes in decision-making described above as impacts of AI-CDSS, are the causal, the moral, and the legal. This outline also highlights and maps current debates in the field of AI ethics across these three levels to illustrate how they address diffusions of responsibility and their causes in the ongoing scientific ethical discourse.
Causal attribution of responsibility
The endeavor of attributing causal responsibility for harm sustained from misdiagnosis or misguided treatment due to the use of AI-CDSS faces the difficulty of locating and tracing errors in AI-driven processes. Questions that may arise include: Did the error occur in the system, in the machine learning processes driven by adaptive learning algorithms? Or was the system trained with a biased data set? Did the error occur during the data collection and acquisition process? Did something go wrong when the system was in use? Or did the user cause the error? Which algorithm, software, mechanism, or tool caused the error? Even the designers of a system will struggle, in the face of the “black box” processes undertaken by AI-CDSS and on account of the system’s adaptive learning processes, to analyze and retrace the decision made. In the case of a “digital tumor board” that runs through a tremendous amount of data and complex data processes over which it is virtually impossible to exercise knowledge or control, it will be challenging to attribute responsibility for the system’s decision or its potential consequences to any specific agent. This leads inevitably to a second pair of questions: What may the object be for which responsibility can be attributed—the algorithm, the data set, the processing of the data, the system statements, the consequences of decisions arrived at by AI-driven systems? And for which actions or AI-driven system operations may responsibility be attributed?
This “black box” decision-making process imposes severe limitations on control and on the epistemic conditions of knowledge, which, at least in an Aristotelian tradition, are essential to any assumption of responsibility [29, 42,43,44]. The assumption of responsibility, in this tradition, is indissolubly linked either to the free will to act or to the causation of the act and the existence of sufficient control over that act. Further, responsibility in this school of thought depends on relevant knowledge about the decision to be made and its possible consequences. To manage this challenge of controllability and the epistemic uncertainties around “black box” decision-making, recently issued ethical guidelines and codices give prominence to the issue of transparency and thus, in technical, legal, economic, political, and social terms, to the call for the development of explainable AI (XAI). XAI appears to have evolved into a new moral principle for the development and design of responsible AI [46, 47]. Applying this concept to the idea of the “digital tumor board”, it seems clear that explainability is a central aspect of a clinician’s assessment and is particularly key to the justification of a decision to the patient affected. To obtain informed patient consent to a course of treatment, it seems reasonable to ensure the technical and organizational disclosure of the machine-based and physician-related decision criteria and parameters. Explainability in practice would therefore center the perspective of patients as data subjects by providing them with reasons to contest decisions if the outcome is not as desired or has caused harm. The provision of access to an understanding of algorithmic decision-making is thus also a basis for action and change, as Wachter et al. argue. We might state in this context that explainability attempts to bridge the epistemic and controllability gaps so as to enable data subjects to manage their data responsibly.
Notwithstanding the drive to eliminate or at least contain the epistemic and control uncertainties associated with the “black box problem”, it is debatable whether “opening” the black box is a feasible or desirable aim in the medical context. The question arises as to the extent to which, and the instances in which, explainability is required or, looked at the other way around, opacity is acceptable. According to Alex London, the demand for explainability of ML may even reproduce the misconception that medical decision-making by physicians is in any way more consistent or explainable than ML-based decision-making. Epistemic questions of explainability and comprehensibility arise with respect to decisions taken by physicians, just as they do with ML; London argues that such decisions are influenced by a mixture of the physician’s experience, associations, and causal evidence. London insists that empirical validation of the reliability and accuracy of ML models should take precedence over their explainability, and asserts that explainability in the sense of interpretability can prove deceptive or harmful in certain situations. If we were to concur with this line of argument in the case of the “digital tumor board”, we might conclude that the matter of how the AI-CDSS reaches its conclusions may not be of interest to clinicians or patients, as long as the system is precise and accurate, takes medical parameters sufficiently into account, and is embedded as one component in a comprehensive decision-making process. In line with this thought, it might be worth comparing transparency claims and requirements with cases of non-transparency in the same or similar contexts. In other words, we propose evaluating the design of transparency and its technical or social requirements in light of the needs of the relevant parties and contexts. There are areas, especially in the medical context, in which opacity is readily accepted. Most patients, for instance, even when provided with medical information, will have little knowledge of or even interest in the exact biochemical workings of a medication. As long as the hoped-for effect occurs, they are likely to accept the risk of side-effects. Analogously, very few clinicians are likely to be well-informed regarding the functioning of and the technical processes underlying software used in the healthcare system. As long as the software fulfills its purpose—such as image recognition in computed tomography—clinicians will typically consider precise knowledge and explainability of data processing procedures to be irrelevant. From this perspective, expectations that an AI-driven system be explainable require an assessment of these contextual needs of patients and settings in coherence with other instruments used in medicine. All this raises the question of whether there are areas in which “black box” AI processes are acceptable, given appropriate evidence-based reasons. Managing the diffusion of responsibility on a causal level, and thus containing the epistemic problems, therefore requires a discussion of criteria for the explainability of AI-CDSS in relation to its specific context.
At the threshold from causal to moral attribution of responsibility, the implementation of controllability functions and tools to ensure a certain level of control or to avoid failures or malfunctions emerges as an important complement to the epistemic challenges outlined above. Accordingly, alongside considerations around XAI, it will be necessary to take into account the robustness, reliability, and accuracy of the system’s output, in terms of control in the sense of monitoring or oversight of AI-CDSS. Opportunities for control are particularly relevant where causal responsibility implies or gives rise to moral attributions of responsibility. Should a “digital tumor board” prove prone to malfunctions and errors or exhibit IT security flaws with respect to patient data, the causal attribution of responsibility will change accordingly; depending on the cause of the malfunctions and on who has knowledge of and could be expected to exercise control over them, responsibility may shift to the developers, providers, or users of the AI-CDSS. The attribution of causal responsibility in this sense crucially depends on robust performance, accurate output, and IT security measures such as the prevention of hacker attacks. The establishment of tools for controllability, in the sense of opportunities for monitoring or oversight, is an important aspect of maintaining knowledge of and control over the AI-driven system, managing the threshold from causal to moral attribution of responsibility, and preventing that attribution from becoming diffuse.
Moral attribution of responsibility
The implementation of AI-CDSS in the medical context raises the question of the extent of the moral responsibility for the outcomes of their use that may be attributable to clinicians, software and tech companies as providers, computer scientists as developers, the system itself, or indeed patients or other moral agents. Suppose a clinician relies on the recommendation of the “digital tumor board”, as generated by AI-driven data evaluation, and adjusts a patient’s treatment accordingly. The patient subsequently suffers harm, and the recommendation and the treatment decision that ensued from it turn out to be mistaken. For example, the AI-generated recommendation was for surgery as opposed to radiotherapy, but the individual health status of the patient in question made surgery a risk, and significant harm was the result. At this point, alongside causal questions, moral issues arise as to the extent to which the clinician is morally responsible for the harm sustained, especially if, for instance, they would have made a different decision without the support of the system. It is even imaginable that they fundamentally disagreed with the system’s conclusion but still followed its recommendation, because their previous experience was that the system had always been correct. Is it fair in this situation to hold only the clinician accountable? And what about instances in which a highly automated system interacts directly with the patient and gives medical recommendations and advice? In this scenario, if harm occurred, would the patient be at fault for following the system’s recommendations, or would the system’s developers, its deployers, or other stakeholders be held responsible?
A key point of discussion in this context is whether moral responsibility can legitimately attach to the system itself. The academic debate has linked this question closely to that of whether an AI system can be considered a “moral agent” [50,51,52]. The issue of moral agency arises frequently in relation to the automated activities or, as we often hear in the debate, the so-called “autonomy” of AI-driven systems; the term “autonomy” in this context signifies these systems’ use of automated data processing and evaluation mechanisms through which they adaptively, that is, via self-learning algorithms or neural networks, arrive at intelligent, unpredictable evaluations of data. With each set of data they process, they learn and improve their decision-making. These evaluations of data can then lead to system operations and actions (as in robotics) or to data analysis and therefrom to decisions (as in the case of AI-CDSS) on the basis of data correlations and probabilistic calculations. This form of “intelligence” adds pertinence to the debate on the moral agency of robots and AI-driven systems, not least because a key property of this type of machine is its interactivity. Some of these systems are sensitive to their environment, process live data, and can thus respond directly to external inputs, people, animals, things, movements, and situational changes in their surroundings. The view taken on these automated system activities and the system’s ability to interact determines the attribution or negation of moral agency or moral consideration to or in AI-driven systems [26, 50,51,52,53,54]. A prevailing opinion asserts that robots or similar automated systems—among them the “digital tumor board”—cannot be regarded as full moral agents due to (for example) their lack of emotionality, freedom, or awareness of self [42, 53, 55]. Responding to this view, Sven Nyholm explains that, notwithstanding the refutation of full moral agency, it is possible to identify “acceptable ways of interpreting robots as some sorts of agent” [, p. 42]. An analogous position emerges in the argument put forward by Behdadi and Munthe as well as by Coeckelbergh that artificial entities, while not full moral agents, are morally relevant to considerations of the attribution of responsibility and definable as “quasi-moral” agents: “We should stop asking questions of what the conditions for being a moral agent are, and whether or not artificial entities may meet those conditions. Instead, we should ask how and to what extent artificial entities should be incorporated into human and social practices that would normally have us ascribe moral agency and responsibility to participants” [, p. 32]. These views appear to indicate the emergence of an overall “relational turn” in debates on the moral status and agency of automated systems [56,57,58].
Against this background, we argue that the simple ascription of a particular moral status to an AI-driven system such as the “digital tumor board” does not clarify how we are to attribute responsibility on this basis, especially in cases where neither the “humans in the loop” nor the system itself can be held solely responsible for harm sustained. Suppose, for instance, the system processes ran precisely, the users did everything correctly and interpreted the data sensibly, but the data set used caused bias in the decision-making process—perhaps the historical data used by the “digital tumor board” is biased and the data analysis arrived at discriminatory conclusions founded in socio-economic conditions or relating to specific groups of patients due to their having a rare genetic predisposition or being overweight, of a certain age, female or male (a matter of particular pertinence in relation to treatment decisions for breast cancer with the exemplified “digital tumor board”). Although such specific patient characteristics may enable the drawing up of individualized treatment proposals, there is also a risk that particular groups of people may not receive the treatment most effective for them due to a lack of diversified comparative data and training data sets. Who is to blame if bias has led to damaging correlations, discriminatory conclusions, and ultimately harm to individuals?
Both the issue of moral agency and the difficulty of attributing moral responsibility in instances of bias illustrate the central problem of the attribution of responsibility at the moral level, namely, to use the terminology employed by Coeckelbergh, that of “many hands” and “many things”—the expansion of the possibilities for attribution of moral responsibility to include more agents and the accompanying proliferation, in light of the relational turn, of “things” requiring moral consideration. In the case of the “digital tumor board”, the problem of “many hands” and “many things” manifests in two ways: First, the arrival of the “digital tumor board” on the decision-making scene expands the setting to encompass a whole range of new stakeholders, including developers, providers, and the data subjects of the historical data set used. Second, the “digital tumor board” combines various “things” in terms of technologies and software that are interconnected and require consideration in matters of agency and responsibility, as does the social interaction undertaken with and by these technologies. The attribution of moral responsibility to multiple people and things may accumulate or shift depending on the particular manner of the AI-CDSS’ incorporation into the specific medical context and on the social interactions that take place within this setting.
Legal attribution of responsibility
Where damage has occurred, diffusions of responsibility are particularly problematic from a legal point of view with regard to compensation. The discussion that now follows will refer to European jurisprudence and outline the current state of the European discourse on these matters, which appears amply illustrative, particularly with regard to liability issues, for our purpose of highlighting the legal implications of diffused responsibility. The use of AI-CDSS presents a challenge to frameworks of liability just as it challenges the causal and moral attribution of responsibility. The associated debate has encompassed considerations of whether it is accurate or permissible to regard an AI system such as a robot as an e-person in matters of liability, with the consequence that an “autonomous” machine (whatever that may mean) would receive the status of a legal entity [61,62,63,64]. Other discussions have examined whether AI systems call for a more comprehensive definition of product liability to the extent of, for example, obliging manufacturers to stop the system or remove it from circulation if damage or harm occurs. Considerations have also taken place around the potential regulation of user liability in terms of strict liability, as in the case of vehicle-driver or pet-owner liability. The Report on Safety and Liability Aspects of AI recently issued by the European Commission’s Expert Group on Liability and New Technologies discusses provisions for covering the risks associated with emerging digital technologies [65, 66]. According to the European Commission’s White Paper on AI, considerations on the development of a predictable and sound framework for legal liability in this area are guided by the view that such a framework requires the capacity to address risks and situations such as “[u]ncertainty as regards the allocation of responsibilities” among different agents. This suggests that there is awareness at the political level of the specific dynamics of responsibility diffusions and of their disruptive potential as regards the legal allocation and distribution of responsibility.
The fundamental changes in the roles of the agents involved, and the likewise changing relationships among them, are likely to be of particular relevance to legal considerations around the application of AI-CDSS in the medical context. There may also be concomitant transformations and modifications in the requirements and claims of stakeholders, especially in the case of harm. Depending on the level of automation, the use of AI-CDSS can differ in the modes of interaction between clinicians (including physicians and nursing staff) and the system, clinicians and patients, and patients and the system [22, 68]. In this context, significant questions arise that affect the self-images of clinicians and patients in particular and exert an impact on their expectations of one another and of the system. Clinicians may find themselves required to consider whether they are to trust their own experience and contextual knowledge or decisions made by the system that contradict their own assessment. Matters of trust and authority also come to the fore, varying in accordance with the particular configuration of clinician/patient/machine interaction at work in each instance. For example, the more autonomously a system acts, the greater the difficulty for the patient in identifying the physician’s part in the treatment process, the extent to which they can rely on the decisions of the system, and which participant is the more reliable. Further, the use of AI-CDSS such as the “digital tumor board” alters tasks and roles: patients may, for instance, co-generate data by using wearables and smart devices that monitor their health status during chemotherapy and supply the “digital tumor board” with data; physicians may delegate repetitive tasks such as recording diagnostic results to intelligent devices that deliver their analyses directly to the tumor board database; and AI-driven systems may take over the analysis of data from clinicians and even, given an appropriate degree of automation, independently conduct patient consultations on cancer treatment options. Changes in clinicians’ and patients’ self-conceptions and role perceptions, shifts in authority associated with altered degrees of faith in clinicians or technologies or with re-adjustments in expectations, and re-allocations of tasks may all call into question established conditions of the attribution of responsibility in causal and moral terms, and consequently in legal terms.
The changed expectations associated with these human–machine interaction scenarios are themselves indicators of where institutionally embedded clinical decision-making processes are changing. In this respect, they are relevant for considerations of responsibility, in particular from a legal point of view, because they indicate which diffusions need to be discussed politically and contained legally. It is, after all, the political legislative arena that serves as a space for the negotiation of the needs and interests expressed by stakeholders and “patients” of responsibility, with the ultimate aim of striking a balance between the interests of users, developers, and providers of AI-driven systems. In light of the relational turn mentioned above at the level of moral attribution, efforts to find ways of articulating the claims and demands of “patients” as well as “subjects” of responsibility thus come to the fore, in order to make legal attributions tenable and to legitimate them politically.
Admittedly, these considerations cannot answer the question of the extent to which non-human agents can assume legal responsibility, nor can they determine to whom or what we attribute or distribute responsibility. Instead, our thoughts on diffusions of responsibility in a legal sense illustrate the importance of opportunities for the articulation of needs and interests on the part of all agents and “patients” of responsibility, especially in changing settings. It therefore appears imperative to broaden the field of negotiation in this regard. Empirical studies of how established clinical decision-making processes change can, in this sense, indicate which transformations are relevant for legal debates. Considering changing role configurations and modes of interaction thus paves the way for answers to the question of how we can or should attribute responsibility in a legal sense.