Introduction

Healthcare is poised to be a critical consumer of artificial intelligence (AI). There are many advantages to integrating AI—or clinical AI, for this article’s purposes—into healthcare services. In addition to its great potential to process and interpret large, complex datasets quickly and objectively, and in some tasks to surpass health professionals, clinical AI can identify or predict essential patterns that may be unrecognised by, or too subtle for, humans. In the process, clinical AI can radically enhance precision care, detect diseases, optimise workflows, and increase efficiency in healthcare while reducing personnel costs and healthcare waste (Afnan et al., 2021; Campbell et al., 2020; Liu et al., 2019). Notably, studies have found that clinical AI performs better than human surgeons, for example, in safely executing laparoscopic suturing (Leonard et al., 2014). An advanced AI can function autonomously, that is, without any human in the loop.

However, to foster the uptake of AI-based approaches to healthcare and reap their benefits, multiple perspectives are required to address the problems that different clinical AI formats generate, particularly black boxes. Many scholars recognise that black box clinical AI is indeed problematic. However, missing from this conversation is how it creates this problem and the specific nature of the problem from the African perspective. This article draws on the thinking about trust in African scholarship to describe the nature of the problem black box clinical AI generates in health professional-patient relationships.

Research design and method

The design and method of this enquiry deserve to be explained, even if briefly. This article is an exercise in ethics that is mostly descriptive or exploratory. My (descriptive) claim is that dominant views of trust grounded primarily in African scholarship can enhance our thinking about the specific problems black box clinical AI causes, at least from the African perspective. In this regard, the manuscript draws on an equally acceptable philosophical method to push the boundary of knowledge in the current discourse on the problems black box clinical AI creates within the clinical context (Gorard and Tan, 2022).

This philosophical approach is not uncommon, since the method has been adopted by different ethicists (Behrens, 2013; Cordeiro-Rodrigues and Ewuoso, 2022; Fayemi, 2018). Within the context of this article, the approach consists of drawing on the thinking about a value (trust) in Afro-communitarianism to explain the specific problems that black box clinical AI causes within the healthcare context. Notably, this article’s overarching question is: what do African views of trust tell us about the specific problems black box clinical AI generates? The article does not only draw on the relevant views of trust in African scholarship to explore or explain the problems black box clinical AI causes; it also suggests how these problems may be addressed in ways that align with these African views of trust.

Ethics papers that adopt the philosophical method I propose for this article’s objective have three main sections: introduction, discussion and conclusion. The discussion section draws on the thinking about trust to deeply interrogate the question I raise and addresses potential objections to the manuscript’s central descriptive claim. To outline the relevant moral norms and views of trust, I adopted a non-systematic approach—using phrases like “trust and Afro-communitarianism”, “trust in Africa”, “exploitation, trust, and Africa”, “African scholarship on trust”, etc.—to retrieve relevant articles from important databases, not limited to PhilPapers, PubMed, and Google Scholar. The same process was repeated for the discussion on black box problems in the healthcare context. My search yielded over 200 articles that were analysed carefully to realise the relevant objective in each section.

There are many reasons to justify the importance of addressing the black box problem. In healthcare, an error could cost lives. Regulations in many regions still lag behind technological development; in particular, technologies evolve faster than regulatory bodies can respond. New ethical ways of living in a technologised world are required to address the pressing challenges these disruptive technologies raise. This article contributes to this critical task.

To realise the aim of this article, the first section clarifies the black box. I will also explain how a black box creates a black box problem in this section. In the second section, the article justifies the approach of drawing on trust in African scholarship to increase our understanding of the specific nature of the problem that the black box creates. Since descriptive arguments can be sources of contestation (Gerring, 2012), in the final section the article addresses two important potential criticisms: (i) that a black box problem will occur only when conditions like the patient’s capacity to choose—from among clinical AI predictions—in ways that align with their values are not available, and (ii) that there are far more critical ways a black box can create a black box problem, such as when systems are proprietary, and that these extrinsic factors deserve greater consideration than the more intrinsic factors that I focus on. In response to the former, I contend that it matters for epistemic justice that patients can contribute to the knowledge production that informs the predictions themselves. To address the latter objection, I demonstrate how extrinsic factors that may render black-box clinical AI problematic imply that regulations should be in place to foster relations between the industry and key stakeholders in the healthcare system.

Discussion

Clarifying concepts: black box, black box problem and un-explainability

This section clarifies the black box/black box problem and justifies the importance of explainability to realise design publicity or fulfil an essential duty—of disclosure of material information—in health professional-patient relationships. To clarify, a clinical AI is black-boxed if it is uninterpretable or unexplainable. A black box clinical AI may produce a black box problem, but not all black box clinical AI generates a black box problem. The reader would be correct to observe that there are many ways a black-boxed clinical AI may be unexplainable. First, black-box clinical AI is unexplainable when its post-hoc interpretability is impossible. This condition has been explained by Michele Loi and colleagues (2021) and Zachary Lipton (2016). Post-hoc interpretability of clinical AI aims to achieve normative justification, whereby the AI provides clear, open, and helpful information about its purpose in ways that help particular individuals (like health professionals who use the system to inform their practice and patients for whom it is applied) form informed judgements regarding whether the clinical AI’s deployment and predictions are justifiable (Loi et al., 2021). In other words, design transparency and publicity are at the heart of the normative justification. Specifically, suppose an AI can realise its purpose and information about this (that is, how the AI was intentionally deployed to realise this goal or how the goal was translated into the programme) has been communicated or offered to key stakeholders to form informed opinions about the clinical AI (Loi et al., 2021). In that case, it has realised design publicity and transparency. Michele Loi and colleagues (2021: p. 260) define design publicity as “adequate communication of the essential elements needed for determining if a decision driven or made by an algorithm system is justified.” Design publicity aims to provide basic and essential information for forming informed judgements about a system (or its decisions). Essentially, what are the reasons for this output? Is the outcome justified? Suppose an AI output discriminates. In that case, it is crucial to interrogate whether such discrimination is justified or unethical; hence the importance of design publicity.

It is essential to clarify that AI outputs themselves are not clinical decisions. Instead, they are recommended predictions, interventions or treatment options concerning what the AI estimates will likely enhance a patient’s care. Within the modern context of patient-centred care, patients have a right to make informed decisions about which recommendations should be implemented in their care (Lewis, 2009).

The reader would also be correct to observe that a black-box clinical AI that fails to realise design transparency and publicity does not, by itself, raise a black-box problem, since acquiring the relevant knowledge and skills may be sufficient to address the situation. That is not the claim I make in this section; I have merely justified the importance of design publicity. Equally, the view of black box clinical AI in the former paragraph hardly has anything to do with understanding or with whether a clinical AI performs well. In other words, design publicity and transparency do not mean those receiving such information will understand it. There are different levels of understanding, just as there are different levels of explanation. Even if clinical AI can realise design publicity and transparency and, thus, avoid being black-boxed, it may still be understandable to no one, understandable to everyone, or understandable only to designers and engineers. In other words, the capacity to explain does not always imply understanding. However, within the health professional-patient context, where informed consent is a key requirement, it is vital that the clinician can explain to the patient health predictions that have significant implications for them, in order to fulfil the requirement of disclosure of material information.

Against this background, although the clinician implementing AI predictions may not understand the internal operations of the AI in the same way that an AI engineer would, AI design publicity would still be necessary for the reasons mentioned in the previous paragraph. Moreover, design publicity helps to shift clinical practice towards a more patient-centred approach that meets the demand for justification and reasoning, instead of leaving patients in the dark about healthcare predictions that have implications for them. A failure to realise design publicity will likely foster machine paternalism. I provide other justifications in subsequent paragraphs.

Second, a black box clinical AI is unexplainable if its simulations and internal operations are un-surveyable or opaque. This conception of unexplainable black box clinical AI implies that the system itself is—in Jordan Wadden’s (2021) view—presently uninterpretable, unexplainable and unexaminable by the developers who create these systems and, within the healthcare context, by the patients who must make informed decisions and the health professionals involved in the patient’s care. One study has found that communicating to patients how AI makes predictions is the most frequent ethical challenge with this new model (Jobin et al., 2019). It is less relevant if the system becomes interpretable tomorrow. What matters—in this second way a black box clinical AI may be unexplainable—is that the AI’s inner operations are currently, rather than essentially, uninterpretable. Why does the AI recommend this prediction rather than others? How did it come to this decision? In this regard, an unexplainable clinical AI tends to entail an incapacity to piece together, work through—within the neural network—and interpret every step leading to the clinical AI’s predictions or recommendations. It entails a failure to explain how a clinical AI works and why it makes certain predictions or associations, the processes for reaching certain predictions, and the predictions themselves.

Black-boxed clinical AI, like that based on deep neural networks, is said to create a black-box problem in this second sense since it remains theory agnostic, notwithstanding our current capacity to observe, learn and understand its inputs and outputs (London, 2019). In other words, given that these tools are adaptive and may become increasingly complex over time, neither designers, end-users, nor potential beneficiaries of these technologies can explain the process by which the clinical AI makes statistical associations, or how it leverages such associations—or what variables—to make or rank predictions. What are the technical, ethical and logical reasonings behind the outputs? Concretely, this implies that a clinician cannot explain to a Jehovah’s Witness patient why the clinical AI settled on blood transfusion as a necessary intervention after blood loss. Did it consider the patient’s religious background? If it did, what level of importance did the clinical AI give this variable? Would this be justified?

Under the assumption that a black box clinical AI creates a problem, can it be trusted? This is a separate question from whether, and in what ways, a black box problem will undermine the use of AI for healthcare. Notice that some scholars, like Alex John London (2019) and Ryan Felder (2021), have pointed out that black-box clinical AI that is problematic in the relevant sense may still be used if there is high statistical validation. Specifically, the unexplainability of clinical AI should not prevent its use, since several common clinical practices are, in some sense, black-boxed. Yet many health professionals do not often have problems adopting such practices for reasons of their statistical validation. Many drugs, like lithium for mood stabilisation, are black box treatments: for the most part, one cannot explain how the drug works (Felder, 2021; London, 2019). Yet they have been permissibly used for patient care. In a future study, I will draw on key values in Afro-communitarianism to address this key question. In this current article, I focus on how the key ideas about trust in dominant African scholarship can help us think about the black box problem that clinical AI presents.

Justifying African scholarship

This section justifies the importance of grounding the discussion on the black box problem in African scholarship. Notably, no work has addressed how African thinking about trust can impact African attitudes towards clinical AI. Equally, reflections on how clinical AI will impact trust have focused mainly on Western views. Some examples of these scholars in the West include Annette Baier, Alvin Goldman, Juan Duran, Andrea Ferrario, Joshua Hatherley, Richard Spoor and Michele Loi, to name a few (Baier, 1986; Durán and Jongsma, 2021; Ferrario et al., 2021; Goldman, 2001; Hatherley, 2020; Spoor and WaterNaude, 2020). There are many reasons to justify the importance of drawing on the opinions about trust dominant in African scholarship to think critically about the black box clinical AI problem. One reason is that studies continue to reveal plummeting trust in medical scientists, professionals and elected officials (Kennedy et al., 2022). The thinking is that many professionals, including medical experts, are failing to build trust in their relationships with the public. In the United States, the decline of trust is rooted in histories of racism and exploitation (Warren et al., 2020). In Africa, this decline of trust, particularly trust in the Western world, is rooted, amongst other reasons, in histories of betrayal, slavery and the pillage of the continent (Nunn and Wantchekon, 2011). This explains why surveys continue to demonstrate that Africans prefer caution to trust when relating to groups and individuals outside of the continent (Inglehart et al., 2014). Are the consequences of an absence of trust grave for integrating AI in patient care? Will the public accept clinical AI if they do not trust it? Are there unique ways Africans conceptualise trust? Is the African view of trust mere reliance?

These questions are important since—for example—some scholars link trust to reliance and demonstrate how this is not problematic for the public’s trust in AI. Notably, Andrea Ferrario and colleagues (2021) believe AI systems can still be trusted despite the black box problem. In their view, trust results from social interactions that occur over time. Through interactions with the AI, the health professional first acquires beliefs about the performance of the AI. This performance-belief acquisition continues until the health professional eventually comes to trust the AI without necessarily needing to update or acquire more beliefs. Hence, they describe trust as entailing a reliance on AI with little control and monitoring “of the elements that make the AI trustworthy” (Ferrario et al., 2021, p. 437). Similarly, Zachary Lipton (2018) thinks trust is the assurance that an AI will perform well. How is the African conception different from these views?

Moreover, it is common to believe that trust tends to differ among groups and is shaped by varying factors like culture and modes of being (Schoorman et al., 2007). Given that the conditions that shape trust can differ amongst individuals, and supposing that trust can influence and inform attitudes towards others and things (Schoorman et al., 2007), knowing how Africans think about trust will be a helpful first step towards improving and assessing attitudes toward clinical AI and advancing knowledge on how clinical AI may be successfully and effectively integrated into medical care in the Global South.

The reader should notice that the relevant aspects of trust in African scholarship that this article outlines do not exhaust all possible formulations of and thinking about trust in Africa. There are diverse opinions on trust among African scholars. One reason for this diversity could be the highly multicultural nature of the African continent. Nonetheless, a systematic review will be required to increase our understanding of the various ways of thinking about trust in African scholarship and the reasons for this diversity. The subsequent sections merely draw on the relevant aspects in African scholarship, particularly in the works of African political scientists, philosophers, sociologists and psychologists. Equally, note that though this article highlights different aspects of trust in African scholarship, I have not claimed that all African scholars believe these aspects to be true of trust, or that scholars who defend one aspect of trust necessarily agree that other aspects may reasonably be accepted as essential ways trust has been conceptualised in African scholarship.

Trust, relationships and fiduciary relations

In this section and subsequent ones, I draw on the core views about trust, primarily in African scholarship, to explain the specific problems black box clinical AI causes. Specifically, these sections highlight three views about trust: relational, experience-based and normative. Many scholars have articulated how Africans conceptualise trust. One thought about trust in African scholarship is that it is critical to, and inheres in, relationships of interdependence and interconnection. Precisely, trust is necessary to foster relationships and, at the same time, is a reason for the existence of the relationship. Trust is strengthened and brought about by frequent interactions. A study of trust among African populations found higher levels of trust among co-ethnics owing to frequent contact, while distance, separation and corruption fostered distrust (Addai et al., 2013).

As an essentially relational concept, trust requires that individuals in the interdependent relationship be transparent about the terms of their interactions. Precisely, individuals’ capacity to truly relate with one another is severely compromised if the terms of that interaction are unclear. As Cornelius Ewuoso (2021: p. 34) remarks, “[African view of relationship often] requires subjects and objects to be clear about the terms of their communal interactions, wherever possible.” In other words, transparency enhances genuine relationships. Individuals can hardly relate genuinely in the absence of transparency.

Applied to a clinical AI that generates a black box problem, notice that health professional-patient relationships are by nature fiduciary (Bernabe et al., 2014). Transparency—about patients’ care and how health professionals respect patients’ right to self-determination—is essential to foster genuine fiduciary relationships. Suppose clinical AI creates a black box problem; that is, it is intrinsically unexplainable. In that case, health professionals cannot be transparent regarding whether and how a clinical AI respects patients to foster genuine fiduciary relationships.

In addition to undermining the health professional’s capacity to be a good fiduciary by being transparent about the terms of the caring relationship with the patient, AI unexplainability, or the black box problem, also undermines a patient’s autonomy in other ways. Precisely, the complexities inherent to most clinical AI designs and programming, like those based on artificial neural networks, do not offer the level of transparency necessary for inquiring about their inclusiveness and diversity so as to promote the self-determination necessary for genuine fiduciary relations. It undermines the capacity to evaluate if and how a patient’s values are considered or engaged in the internal process of the machine. How are patients’ individualised preferences considered in these processes? What significance is given to them? As the South African philosopher Kevin Behrens (2017) remarks, respect for persons [and human relations] is hardly complete without honouring an individual’s values.

The challenges the black box problem creates for fiduciary relations may be addressed through explainable clinical AI. There are three justifications for this claim. First, suppose cooperating genuinely requires transparency. In that case, transparency can be enhanced if clinical AI is un-black-boxed and its inner operations are made less opaque. Second, as an essentially relational concept, trust is also grounded in a form of social exchange entailed by the obligation of reciprocity. Specifically, relationalism in African moral philosophy entails reciprocity (Ewuoso et al., 2022). And reciprocity involves a two-fold willingness: to make oneself vulnerable and to accept vulnerability. As the South African scholars Marita Heyns and Sebastiaan Rothmann (2015) observe, this willingness motivates a trustor to cooperate and allows a trustee to play significant roles in their life. Individuals will likely be unwilling to cooperate or give vulnerability to people or entities they do not trust.

Although I discuss vulnerability in a different section, it is important to state here that the social exchange of giving and accepting vulnerability requires crucial principles. They include (i) intentionality, which requires the trustee to act with goodwill and purpose, and in ways that increase the opportunities the trustor has to enjoy a good life; (ii) transparency: as previously stated, individuals in an interdependent relationship can hardly genuinely relate if they are not clear about the terms of engagement or if one party fails to disclose information that a reasonable person might consider necessary; and (iii) dependability and expectation, implying that the trustor has good reasons to expect the trustee to act in their best interest. These principles explain why several African cultures describe trust the way they do. Amongst the Yoruba people of South West Nigeria, trust is igbekele (or dependability), whilst among Setswana speakers of Southern Africa it is tshepa (or the expectation that others will behave responsibly). Explainable AI, or less opaque AI, can enhance health professionals’ capacity to act more intentionally to foster patient care or meet the expectations of fiduciary relations if they can control how clinical AI works or ensure that the important values of patients are given significant consideration.

Finally, one feature of trust as essentially relational is that the deeper the relationship, the more intense the trust among individuals: higher levels of trust are associated with deeper relationships, and lower levels of trust correlate with weak relationships (Idemudia and Olawa, 2021). Some African scholars believe that higher levels of trust may be fostered through agreeableness. Agreeableness describes the lengths to which a trustee is willing to go to maintain fiduciary relations with the trustor and act for the trustor’s sake (Idemudia and Olawa, 2021). The reader would be correct to observe that this conception of agreeableness is related to the willingness (particularly, of the trustee) to maintain a genuine relationship with the trustor. Agreeable individuals are benevolent and seek social connections for their own sake (Ezirim et al., 2021). Precisely, the Nigerian political scientist Gerald Ezirim and colleagues (2021) have found among undergraduate students in Nigeria (West Africa) that agreeableness is highly correlated with fostering trust, whilst openness (described as the propensity to take risks, embark on new activities, seek novelties, engage in unfamiliar tasks, explore new territories, etc.) had little or no effect on maintaining or building trust, and may, in fact, undermine it. Suppose trust and trustworthiness are predicated on agreeableness. Furthermore, suppose a willingness to seek to relate genuinely is essential for being agreeable. In that case, as demonstrated in a former paragraph, acting genuinely is impossible without explainability, or, at the very least, explainability will enhance genuineness. An explainable clinical AI creates opportunities for health professionals to clarify the terms of interaction or justify clinical AI decisions (as well as how those decisions were reached and what values were considered). It also provides opportunities for patients to validate the system and assess whether it is transparent.

Experience-based trust and accountability

In African scholarship, trust is also connected to trustworthiness. Notably, trust is the outcome of a judgement of the trustee’s trustworthiness, a judgement that accumulates from the trustor’s personal knowledge and previous range of life experiences (as well as the experiences of others with whom both trustor and trustee share social connections) of the trustee’s endeavours to meet expectations (even if they sometimes fail) and behave responsibly. In this regard, this conception of trust is related to the conception of trust as relational, since it is the outcome of the experiences or antecedents in that relationship. It is important to clarify here that this thinking about trust does not require the trustee always to meet these expectations. It is essential that the trustee endeavours to, or demonstrates enough goodwill to want to, meet these expectations. The former point is corroborated by the finding of the South African psychologists Marita Heyns and Sebastiaan Rothmann (2015) that ability (that is, the trustee’s competence in the relevant field), benevolence (the intention to act well and with goodwill), and integrity (the trustor’s belief that the trustee will abide by the values that the trustor finds acceptable, and act with justice and accountability) remain vital to one’s judgement of the trustworthiness of others.

A critic may point out here that competence, integrity and benevolence are also epistemic indicators of a trustworthy expert in the philosophy of trust literature. This is a well-known notion in that domain, particularly in the scholarship of Alvin Goldman (Goldman, 2001). Thus, the critic may conclude that the features of trust that I mention in this section are found not only in African scholars’ works but also elsewhere.

In response, although the features of trust I mention in this section may be found elsewhere (and I have not claimed that they are found only in Africa), the intuitions that underlie these features have not come to the African continent from elsewhere. For example, the intuition that underlies benevolence is the exhibition of goodwill, which is a dominant intuition, as confirmed by the famous African moral philosopher Thaddeus Metz (2022), one of Prospect magazine’s top 50 thinkers of 2021. This intuition differentiates this conception of trust from the evidence-based account of trust in the philosophy of trust literature (Simpson, 2017). Notably, while the dominant evidence-based account in the philosophy of trust literature requires that trustees always meet expectations, the experience-based account in African literature requires that they exhibit goodwill and endeavour to meet these expectations. As Thaddeus Metz (2007: p. 336) explains,

One has a relationship of good-will insofar as one: wishes another person well (conation); believes that another person is worthy of help (cognition); aims to help another person (intention); acts so as to help another person (volition); acts for the other’s sake (motivation); and, finally, feels good upon the knowledge that another person has benefited and feels bad upon learning she has been harmed (affection).

Suppose this conception of trust requires the health professional to consistently act in specific ways (with competence, integrity and benevolence) to earn the trustor’s trust. In that case, black-box clinical AI creates a real problem for acting in those ways. Health professionals do not have the competence to understand the internal process of the AI, which undermines their capacity to act with goodwill, nor will they be able to guarantee the trustor that they will give serious consideration to their values in the intervention recommendations. How does a health professional tell a Jehovah’s Witness patient that their religious values were seriously considered when an AI strongly recommends blood transfusion for their blood loss without alternatives?

There are important points to note about experience-based trust. First, it is essential to clarify that this view of trust (given that it is context-specific) can be consistent with forms of non-trusting behaviour. For example, a patient can trust a health professional to use their knowledge to improve the patient’s well-being but refuse to board a plane piloted by the same health professional if they are not convinced of the health professional’s capacity to fly a plane. Relatedly, a failure to trust someone does not necessarily imply distrust. In other words, those who are not trusted are not, by that fact, necessarily distrusted. Trust and distrust are not necessarily binaries. Similarly, trust can be misguided (Hatherley, 2020). However, this does not impact the nature of trust as experience-based. I can trust a building engineer to act in my best health interest. Though my expectation that the engineer will act in my health interest may be mistaken (or not welcomed by the engineer), this does not undermine the nature of trust itself as something one reposes in individuals who exercise the power of agency to fulfil specific responsibilities toward others.

Second, though there is an expectation that the trustee will act benevolently and interact with integrity, this does not imply that the trustor is certain about the trustee’s motivations. The trustor is relatively ignorant of this. However, a mere belief that the trustee will act in their best interest is sufficient to generate trusting relations until the trustee fails to meet this expectation and, thus, fosters discord or distance.

Third, thinking about trust as entailing having and (endeavouring to) meet expectations is critical, since expectations allow the trustor to open up to the trustee. Notably, the trustor believes that the trustee will exhibit goodwill and avoid ill will; this is what fiduciary relations require. Within the context of this article, expectation allows a patient to disclose confidential information about themselves to the professional. Suppose a patient cannot open up to their health professional. In that case, the ability of the health professional to aid the patient will be undermined. As Paul Hofman and colleagues have found in their study on trust in African villages, “sharing [understood broadly] positively correlates with trusting behaviour” (2017, p. 19). Patients who trust their health professionals are more likely to share sensitive information with them, give vulnerability and voluntarily comply with medical advice.

Having and (endeavouring to) meet expectations also implies that health professionals must be accountable for the trust that patients repose in them. Accountability is worth discussing at some length since it is essential for acting with integrity, which is a vital feature of the experience-based conception of trust in African scholarship. Notably, accountability is crucial in health professional-patient relationships. Specifically, it (accountability) is essential to ensure that health professionals abide by professional standards. It can equally incentivize health professionals to behave ethically, provide strong justifications for why it seems reasonable for patients to give vulnerability or trust their health professionals with confidential/sensitive information (required for their care), and why the decisions health professionals make on their behalf may be rational (Ferrario et al., 2021; Hatherley, 2020). Suppose a patient experiences harm that results directly from taking instructions from a clinical AI. In that case, the black box problem renders accountability in health professional-patient relationships nearly impossible. Who should be held accountable for the harm: the clinical AI, the manufacturers/engineers, the medical institution, the maintenance agent, the operators or the health professional who grounds their decisions in the AI? Suppose an action can impact others, especially negatively. In that case, someone needs to be accountable. Indeed, there are good reasons to hold manufacturers and engineers responsible for their products’ behaviour, especially when they are defective.

Moreover, a clinical AI that is programmed and functions adequately could still perform inhumanely or fail to take the wishes of end-users and potential beneficiaries seriously. Given this assumption, responsibility and accountability-related questions matter for health professional-patient contexts where patients can reasonably expect their health professionals to be accountable for instances of the failure to promote non-maleficence or patient’s health benefits.

A critic would be correct to point out that many people are involved in a patient’s care or act in ways that impact the patient’s health, implying that accountability should not reside only in the attending health professional but should be conceptualised as an aggregate concept (Felder, 2021). Whilst this is true, this article focuses more on the actual health professional-patient context, where health professionals must act with competence, integrity and benevolence towards their patients. In that context, the black box problem is problematic since it undermines two critical requirements for accountability: first, that an agent has sufficient control over the action for which they are accountable; second, that they are aware of their actions and can explain them (Neri et al., 2020). Concerning the former, many health professionals neither have control over the internal operations of clinical AI that generates a black box problem nor can they explain how the clinical AI combines or mines data. Additionally, when harm occurs, the black box problem makes it nearly impossible to identify or isolate the exact point in the decision-making process at which the harm was introduced into the machine, in order to correct it or prevent a recurrence. Moreover, individuals would be unable to tell whether the harm resulted from a security breach (hacking) or a minor error in the system. In this way, the harm would not only be perpetuated but could also be mechanised and amplified, given that these models are adaptive.

Normative view of trust and justification for vulnerability

Another belief about trust in African scholarship is that it is a normative concept. Given this, it is more than reliance. This view of trust aligns with those expressed by Joshua Hatherley (2020) and Andrea Ferrario and colleagues (2021). Precisely, trust conveys an idea about how a trustee ought to act in light of expectations from the trustor and about what should motivate such action. The normative conception of trust is related to the experience-based account in that it supplies the moral standards by which the trustee’s trustworthiness is assessed. For example, a survey conducted by Mikhail Moosa and Jan Hofmeyr (Moosa and Hofmeyr, 2021) to describe the level of South Africans’ trust in institutions found that many South Africans have low levels of trust in public officials because of the failure of these elected officials to meet the ideals of South Africans or to act for the good of South Africans for their own sake.

The normative conception of trust is equally related to the thinking about trust as essentially relational. Specifically, as a normative concept, trust is a moral implication of interdependent relationships that morally require individuals in relationships to exhibit other-regarding behaviours towards one another. The normative conception of trust explains Martin Luther King Jnr’s (2001) distrust of White moderates, as repeatedly confirmed by many African historians and Africana scholars like Kevin Gaines (2007). Voicing his concerns about the impunity of racist attacks in Birmingham, as well as the failure of mostly well-meaning White moderates to condemn these attacks directly, King Jnr remarks, “[the] ultimate tragedy of Birmingham was not the brutality of the bad people, but the silence of the good people” (2001, p. 48). As a normative concept, trust is action-oriented rather than a matter of silence or inaction. Trust gives rise to duties that require the trustee to endeavour and/or act to fulfil what a trustor trusts them to do. x trusts y because x has good reasons to believe that y would act in specific ways (Ferrario et al., 2021; Hatherley, 2020).

The “good reasons to believe” in the previous paragraph imply that one constituent of the normative conception of trust is that it is based on subjective factors and may be eroded if the trustee fails to act in the appropriate ways or to develop goodwill. Notably, in this context, one reason patients make themselves vulnerable is that they trust that health professionals will act with goodwill rather than ill will. As Joshua Hatherley remarks, “trust is inseparable from vulnerability, in that there is no need for trust in the absence of vulnerability” (2020, p. 478). For example, sickness places patients in a position of vulnerability. As vulnerable individuals, patients often turn to their health professionals, whom they trust to attend to their health needs competently. Patients believe that health professionals will seek their (health) interests precisely because these are the patients’ interests.

A second constituent of this normative conception is that trust equally entails the trustee’s willingness to accept patients’ vulnerability and, thereby, act in relevant ways. Specifically, health professionals often believe that health professional-patient relationships require them to act in certain ways towards their patients.

In summary, trust as a normative concept allows patients to give discretion/vulnerability to their health professionals because they believe that health professionals will exhibit goodwill. Equally, it enables health professionals to accept patients’ vulnerability by acting in relevant ways or exhibiting goodwill to patients.

The normative conception of trust that I articulate in this section is a vital notion that African scholarship brings to the philosophy of trust literature. In the existing literature, trust as a normative concept deals with how the trustee is morally responsible for what they are trusted with by the trustor (Agassi, 2016; Faulkner and Simpson, 2017). Hence, when trust is violated, the trustor has the right to feel betrayed. In other words, the dominant normative conception of trust in the philosophy of trust literature tends to be unidirectional, emphasising to a higher degree the moral responsibilities of the trustee. This differs from the normative conception of trust in African scholarship, which is bidirectional, focusing on two core constituents: having and meeting expectations, or giving and accepting vulnerabilities. Specifically, the African normative conception of trust creates an enabling environment for patients to be vulnerable and for health professionals to resist the seductive temptation of exploiting patients’ vulnerabilities because this is not who we are (Metz, 2022).

Concerning meeting expectations or accepting vulnerabilities, suppose the black box problem is true; that is, a health professional cannot know the internal operations of a clinical AI well enough to provide material information that can aid informed decisions. In that case, the professional’s capacity to accept vulnerability will be undermined. To understand how, notice that the black box problem renders it (nearly) impossible for the health professional to fulfil their professional duties to a patient, since one vital reason patients often give vulnerability to health professionals is the belief that health professionals have knowledge that they (patients) lack. In other words, being the health professional in the health professional-patient relationship requires these professionals to exercise knowledge that patients lack. However, the black box problem implies that health professionals themselves would also lack knowledge. As a result, they do not have any epistemological advantage over the patients that would allow them to accept patients’ vulnerabilities. Having the epistemological advantage is important in a health professional-patient relationship. For example, suppose clinical AI can have life-or-death implications. In that case, it matters that health professionals are in an epistemological position to know when to reject harmful clinical AI predictions.

There are other ways the black box problem undermines health professionals’ ability to accept vulnerability (or meet expectations). Precisely, it (the black box problem) limits the number of options a health professional can offer their patients through the assignment of probabilities (Tibbels et al., 2022; Wadden, 2021). The reader may point out that health professionals could discuss the sorted probability distribution of the possible options with the patients in order of magnitude. However, without the advantage of epistemological positioning, it is hardly conceivable that this suggestion will go far enough. There are other problems. What happens when the clinician disagrees with how the AI ranks magnitudes or believes that the AI has excluded certain other equally good options? It would also be difficult to have other discussions without the epistemological advantage. For example, the health professional cannot inform patients of the values the black box clinical AI considers if these patients require information about the exact values and preferences considered in determining magnitudes. This explains why Jordan Wadden (Wadden, 2021: p. 3) remarks, “Simply using an algorithm already limits potential choices through categorisation and the assignment of probabilities which remove some potential options from the clinician’s consideration.”

In addition to the challenges the black box problem has for accepting vulnerability, it also undermines a patient’s capacity to give vulnerability to the health professional. Notably, giving vulnerability requires a patient to exercise adequate authority (or autonomy) over how their lives will go. Yet exercising adequate authority over oneself necessarily requires—within this context—a patient (i) to disclose sensitive information about themselves (to their health professionals) and (ii) to inform their health decisions with expert opinions or relevant information they receive from health professionals.

These two requirements for exercising adequate authority over oneself imply three conditions. One condition is the ability to be heard. This enables the fulfilment of requirement (i). A second condition is receiving relevant expert information from the health professional that can enhance one’s decisions (or informed decision-making). How much information should be considered material for informed decisions tends to vary among individuals. A third condition is understanding why one diagnosis/prognosis was recommended rather than another.

Both conditions 2 and 3 are necessary to fulfil requirement (ii). Patients who do not understand certain information can at least discuss it with their health professionals. I conceptualise good communication as consisting of conditions 1, 2 and 3. The black box problem undermines conditions 2 and 3 and, by extension, requirement (ii). By undermining conditions 2 and 3, it undermines good communication. Specifically, it undermines the possibility of the good communication or dialogue that allows patients to interrogate a prognosis and/or health professionals to clarify why the decision was made (conditions 2 and 3): what are the risks? What are the benefits? There is also another way it undermines good communication. Notably, a patient’s ability to receive relevant information from a health professional is undermined by the black box problem because health professionals cannot give what they do not have or know. As previously stated, they lack the epistemic condition necessary for providing relevant information, thus undermining both conditions 2 and 3 and, by extension, requirement (ii).

Suppose good communication is vital for exercising authority over oneself adequately. And suppose exercising authority over oneself adequately is a core feature of giving vulnerability. In that case, by undermining good communication, the black box problem also undermines the capacity of patients to give vulnerability. To give and accept vulnerability within the health professional-patient context, clinical AI must be explainable.

Notice that there are other ways clinical AI that creates a black box problem can undermine patients’ adequate exercise of their authority (or autonomy). For example, a clinical AI may fail to incorporate the wishes and values of patients or may misrepresent them. As Thomas Quinn and colleagues observe, “though there is one right diagnosis, there may be many possible treatments. The right treatment depends on the needs and autonomous wishes of…patients” (2022, p. 6). The black box problem implies that a patient will be unable to assess how their wishes were taken into consideration or whether these wishes were considered at all, whether the correct procedures were followed, and the ways their preferences may have been regarded or disregarded. This is important in light of the Protection of Personal Information Act in South Africa and the European Union’s General Data Protection Regulation, which accord a right to explanation to their citizens (Watson et al., 2019).

By contrast, an explainable clinical AI offers the best opportunity to promote a patient’s right to self-determination. Moreover, explainability is intrinsically valuable and epistemically satisfying; knowing how a clinical AI arrives at its outcomes can enhance one’s capacity to assess its merits and demerits (Tables 1 and 2).

Table 1 Key African features of trust and implications for the black box problem.
Table 2 Relationship among the features of African trust.

Is there a black box problem?

It is important to reiterate my central descriptive argument. Specifically, under the assumption of a black box problem, the African view of trust as inherently relational implies that health professionals cannot explain whether and how a clinical AI incorporates a patient’s values or leverages them (in its outputs) to honour fiduciary relations. Additionally, the conception of trust as experience-based and as accepting responsibility implies that health professionals can hardly be held accountable for black box clinical AI outputs that they can neither control nor explain. Finally, given the understanding of trust as a normative concept, health professionals cannot accept patients’ vulnerabilities, and patients cannot give the same. This section and the subsequent ones will address some potential objections.

Notably, a critic may contend that the specific form of black box clinical AI, which I described as problematic, is only problematic if (i) humans are not in the loop, (ii) only one outcome is suggested, and (iii) the individual patient cannot accept or reject outputs altogether. This appears to be the position of Juan Duran and Karin Jongsma (2021). A clinical AI ranks the recommended interventions, thus allowing patients to choose any that align with their values. Moreover, clinical AI is only a supportive tool, implying that it does not currently function autonomously and humans are still in the loop. These points, the critic might conclude, demonstrate that black box clinical AI is not, in fact, problematic, or that black box clinical AI problems can be addressed through meaningful conversations with the patient (which ought to be encouraged to ensure that patients’ choices align with their values) and by establishing regulations that limit clinical AI use to supportive tools.

Relatedly, another critic may point out that I am demanding from clinical AI what is often not demanded of humans. As previously mentioned, many routine aspects of medicine are black-boxed, meaning we cannot rationalise certain decisions or recommendations given by health professionals. In most cases, we cannot understand why they make these recommendations. In this regard, emphasising explainability over care is not warranted (London, 2019). Moreover, the primary goal of health professional-patient relationships is care, implying that in some circumstances, the ability to explain how a clinical AI makes predictions is less relevant than whether the proposed intervention would work or enhance patient care. On the contrary, the reliance, or even overreliance, on explanation can prevent us from enjoying the benefits of clinical AI (London, 2019).

I acknowledge that most clinical AI often ranks recommendations, and patients can choose from any of these treatment options. However, the reader should notice that intrinsic to enhancing an individual’s right to an informed decision is epistemic justice, that is, the capacity to be respected as an equal contributor to the knowledge production that informs—within this context—the clinical AI-recommended treatment options. The former is especially important when a decision will significantly impact a patient. In other words, suppose a clinical AI makes a mistake. In that case, epistemic justice requires explaining why to the patient and whether the knowledge production that underlies these recommended options incorporates their values (to what extent?). In other words, the more modern patient-centred healthcare model mandates patients’ contributions to their health care. However, black-box clinical AI, whose operations are unknowable and opaque, undermines patients’ capacity to contribute to decision-making about how their lives would go.

Equally, suppose a clinical AI functions autonomously. In that case, humans will be involved in the AI’s operations only tangentially (if they are involved at all), implying that they will be in the dark about its process and about what knowledge production informs its recommended options. Why does the AI rank these options in the way it did? The same question could also be raised when AI functions as a supportive tool. Given the black box problem, even when a patient can choose, from among several options, a treatment option that may align with their values, they would be unable to tell the extent to which their modes of being or of encountering the world formed part of the process leading to the recommended treatment options or the knowledge production that underlies how these options are ranked. Knowing the basis and process leading to these treatment options is as important as the options themselves for enhancing informed decisions, at least in light of the requirements for giving vulnerability that I discussed in a previous section. Moreover, it is common to assert in African moral philosophy that individuals can hardly interact genuinely with one another when the rules of interaction are unclear or non-transparent (Ewuoso, 2021). For example, if patients believe that clinical AI predictions do not integrate their values, they cannot be offered an explanation that addresses their concerns. Suppose individuals are denied epistemic access to the knowledge production that underlies predictions concerning how their health will go. In that case, they are epistemically wronged; that is, they are wronged as individuals who have this right and as competent sources of knowledge, or knowers, of their experience. This is indeed the nature of the problem black box clinical AI generates.

Is extrinsic unexplainability more important?

Another critic may point out that my conception of the black box problem lacks depth since it focuses mainly on intrinsic opaqueness or non-transparency. Focusing on intrinsic unexplainability is insufficient to increase our understanding of how trust may be undermined by a clinical AI that creates a black box problem. Other extrinsic factors can render a black-box clinical AI problematic. This is the case for the black box problem that arises because systems are proprietary. Proprietary secrets create extrinsic non-transparency, implying that this has essentially nothing to do with the unexplainability of the clinical AI and more to do with the corporate secrets that must be protected to maintain advantage (Quinn et al., 2022). This extrinsic opaqueness deserves consideration since it raises far-reaching problems beyond intrinsic opaqueness. Specifically, even if health professionals and patients could overcome the inherent epistemic and methodological limitations of black-box clinical AI that generate black-box problems, extrinsic factors could still render explainability impossible. Companies will lose their marketable products if corporate secrets regarding clinical AI’s internal operations are disclosed (Klugman, 2021). The preceding criticism implies that extrinsic unexplainability is as important as (or even more important than) intrinsic unexplainability. And to deeply increase our understanding of the implications of African views of trust for the black box problem, I will also need to interrogate the implications of these views of trust for extrinsic unexplainability. Precisely, increasing our understanding of how the black box problem will undermine trust in health professional-patient relationships will also require us to interrogate the exact ways extrinsic factors cause an imbalance between the intellectual property rights of corporate institutions and rights to informed decision-making. This article has not done this; at least, not to a significant degree.

In response, it is critical to state that a black box clinical AI that creates a problem because it is proprietary can be addressed by obtaining the relevant rights or reaching agreements with the designers or corporations. In this regard, it seems intuitive that such extrinsic factors, though problematic in some ways, do not necessarily raise serious ethical dilemmas for accountability, professional autonomy and informed consent. Nonetheless, this paper acknowledges that such an issue ought to be proactively addressed, since corporations can refuse to give up their rights to guard AI secrets and, in this way, cause a black box problem. For example, suppose a clinical AI suggests interventions that discriminate, as when it suggests that the only available ventilator ought to be given to a 12-year-old child with an existing liver condition rather than a 63-year-old adult with no existing comorbidities. In that case, it would be difficult to know what specific variables about the child it considered. End-users will suffer harm without knowing how and/or when the harm occurred because the clinical AI is proprietary.

Nonetheless, the extrinsic factors highlight the importance of greater collaboration between key stakeholders in healthcare, like health professionals, and the developers of clinical AI. Suppose developers have good reasons to protect their intellectual property. In that case, health professional bodies and other stakeholders in healthcare can engage developers and designers of clinical AI to establish minimum ethical standards that should inform clinical AI (from development to deployment). This will ensure that clinical AI is not driven only by commercial interests but is informed by important ethical standards. Alternatively, rather than provide the technical details of how their algorithms work (which may be legally protected), companies may also share feature attributions detailing which specific input features most informed which predictions. Such feature attributions should be attached to each output, as illustrated in the sketch below.
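To make the preceding recommendation concrete, the following is a minimal, hypothetical sketch of what a per-output feature attribution might look like. It is not drawn from any of the works cited in this article, and the dataset, model, feature names and occlusion-style attribution method are illustrative assumptions; a deployed system could use any comparable attribution technique that discloses which inputs drove a given output without revealing the model’s internal parameters.

```python
# Hypothetical occlusion-style feature attribution for a single prediction.
# Everything here (toy dataset, model, feature names) is an illustrative assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy stand-in for clinical data: five hypothetical input features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["age", "haemoglobin", "blood_pressure", "heart_rate", "bmi"]

model = LogisticRegression(max_iter=1000).fit(X, y)

def occlusion_attribution(model, X_background, x_patient):
    """Change in predicted probability when each feature is replaced by its mean."""
    baseline = model.predict_proba(x_patient.reshape(1, -1))[0, 1]
    attributions = []
    for j in range(x_patient.shape[0]):
        x_masked = x_patient.copy()
        x_masked[j] = X_background[:, j].mean()      # "occlude" feature j
        masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
        attributions.append(baseline - masked)       # how much feature j pushed this output
    return baseline, attributions

# Attribution attached to one (hypothetical) patient's output.
prob, attrs = occlusion_attribution(model, X, X[0])
print(f"Predicted probability of the recommended intervention: {prob:.3f}")
for name, value in sorted(zip(feature_names, attrs), key=lambda t: -abs(t[1])):
    print(f"  {name:>15}: {value:+.3f}")
```

On this sketch, the attribution attached to each output would indicate the direction and relative weight of each input feature’s contribution to that particular prediction, without disclosing the protected technical details of the algorithm itself; this is the trade-off the recommendation above envisions.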

Furthermore, to safeguard the values of the patients for whom most clinical AI is applied, independent clinical review ought to be implemented to ensure that essential values can be integrated into the development of clinical AI. Other recommendations by Juan Duran and Karin Jongsma (2021) are worth mentioning here and include training on the responsible use of clinical AI for health professionals. For example, as users, health professionals are well placed to be the first to identify when clinical AI recommendations are discriminatory. We can enhance their capacity to quickly recognise harm if they are trained in the responsible use of clinical AI, including bias evaluation for unjustified discrimination.

Conclusion

This paper has contributed an African perspective to the discourse on the nature of the problem that black-box clinical AI creates. Notably, under the assumption of a black box clinical AI problem, one conception of trust grounded in African scholarship implies that giving and receiving vulnerabilities cannot occur in health professional-patient fiduciary relations. Professional and patient autonomy would equally be undermined. Trust will be vital to the global acceptance and successful implementation of clinical AI. Thus, more perspectives are required to increase our understanding of how the black box problem will challenge health professional-patient relationships. How would different groups react to different formats of clinical AI? What more is needed to foster their acceptance globally? And under the assumption that automating certain aspects of healthcare is a moral imperative, at what point does automating healthcare become impermissible? Future studies can address these critical questions.