1 Introduction

Uncertainty in medicine is as ubiquitous as it is challenging [1]. It occurs, for example, where patients’ symptoms are ambiguous, so that no definite diagnosis can be made, or where physicians do not have sufficient knowledge of their patients to accurately assess their current health status, or enough information to determine which treatment would work best. Such uncertainties can lead physicians to misdiagnose their patients or to recommend incorrect, ineffective, or even harmful treatments. Medical uncertainties, in short, prove to be major challenges to medical practice.

Accordingly, it is not surprising that there are many attempts to reduce such uncertainties as far as possible, e.g., through continuous medical research, the introduction of evidence-based procedures, or, most recently, the use of Artificial Intelligence (AI) in medicine. However, the use of AI technologies in medicine to reduce uncertainties and produce new medical certainties also poses some challenges. Our paper delves into these challenges and aims to answer the question: what are the challenges of using AI to reduce medical uncertainties and produce new certainties for medical practice?

In our paper, we will explore this question on a conceptual level. To do this, we will use a multimethod approach that combines conceptual analyses, philosophical reasoning, and ethical considerations to identify key challenges that arise when AI is used for medical certainty purposes. As a starting point, we will delve into the concepts of uncertainty and certainty in detail. Engaging with core philosophical accounts of both, we will cultivate a pragmatic understanding of uncertainty and certainty, followed by a brief examination of how the former negatively affects medical practice. Next, we will turn our focus to the role of AI in medicine. By looking at three specific AI applications—used to support medical diagnosis, decision-making, and predictions—we aim to illustrate how AI can reduce medical uncertainties and produce new certainties. Guided by philosophical reasoning and continuously informed by contemporary literature, we will then address several challenges posed by these AI-driven (un-)certainty shifts in medicine. Among these challenges are: (a) patients are stripped down to their measurable data points and made disambiguous, (b) human physicians are pushed out of the loop and patient participation is limited, and (c) surveillance as well as privacy and security issues emerge. We will then discuss these challenges, which are direct consequences of AI-driven certainty efforts. Grounded in a reflective equilibrium [2, 3], we will offer some ethical considerations, making brief suggestions on how to tackle these challenges. Although medical certainties are desirable, it will be of paramount importance to address the challenges that come along with them and their production using AI.

2 Uncertainty and certainty: and how the former challenges medicine

What is uncertainty? What do we mean by certainty? Defining both terms is not easy. There are countless definitions of uncertainty spanning a range of disciplines, from epistemological concepts that delineate between various shades of ignorance, non-knowledge, and knowledge [4, 5], to probability and statistical models used in fields like computer science or management studies [6, 7]. Many of these concepts are intricate and domain-specific, making them difficult to grasp or apply outside their original context. In this article, we embrace a pragmatic approach to understanding uncertainty and certainty, as developed in a representative way by John Dewey. Such a pragmatic understanding has the advantage of being easily comprehensible even for laypeople and of being applicable to different professional contexts—without, on the other hand, being imprecise or superficial.

To truly grasp uncertainty from a pragmatic perspective, it is best to begin with the idea of decision-making. For individuals to make informed decisions, they first need information about their current situation [8, 9]. This means they must be aware of their present circumstances, as well as have some inkling of what will happen next. Second, individuals must know and assess their options for action. This means they should know which actions they can currently take and estimate the consequences that will follow. When people struggle to understand their current situation or to predict the consequences of potential actions, they are in a state of uncertainty [10, 11]. This uncertainty can arise because individuals lack information about their situation, or because the information they have is ambiguous, statistical, or vague [12]. Hence, uncertainty can be described as an epistemic state in which individuals lack the reliable information necessary to assess their situation, weigh various action options against each other, and make informed decisions [13]. In situations of uncertainty, people are forced to make their decisions “partially blind”—which is not only highly unpleasant but also makes decision-making considerably harder. Due to the unpleasantness of such uncertainty, individuals, as Dewey articulates most prominently in his Quest for Certainty, have always sought to reduce their uncertainties by doing research, gathering ever more precise information, and getting to know their surroundings better [10, 14].

Certainty is commonly understood as the counterpart to uncertainty. However, it is important to emphasize that certainty is not just the mere absence of not-knowing—since that could also mean being unaware of the things one does not know [4]. Conversely, certainty does not mean having complete information about a situation that allows for a flawless grasp of it. Instead, from a pragmatic viewpoint, certainty describes a state in which an individual has enough information to assess their situation and make highly reliable predictions about what the future holds [10, 15]. This predictability enables individuals to weigh their various action options and their consequences against each other, allowing them to make informed decisions [10]. The exact amount and precision of the information needed for a situation to be deemed “certain” cannot be universally set—it depends on the complexity of the given situation and the individuals involved. Therefore, certainty can be defined as an epistemic state in which there is sufficient information for individuals to make precise, albeit never comprehensive, predictions and, based on that, informed decisions [13].

As previously indicated, uncertainty and certainty are counterparts. However, they should not be understood in black and white terms, as if a situation could be either entirely uncertain or totally certain. Instead, uncertainty and certainty are better understood as the ends of a continuum [13, 16, 17]. Every situation can be placed on this continuum and, depending on the available information and its ability to enable predictions and decisions, can be more or less uncertain or certain. Absolute uncertainty and total certainty are never attainable. As Wittgenstein notes in his prominent investigations On Certainty, a person never entirely lacks knowledge, as some level of understanding and anticipation always exists [18, 19]. Similarly, complete knowledge and understanding are unattainable, as there will always be an element of uncertainty present [10]. People move along this continuum, constantly balancing their level of understanding and knowledge with the unknowns that exist.

These forms of certainty and uncertainty can be found in the field of medicine as well, for example as “medical uncertainty” [20]. Medical uncertainty is not a standalone concept, but simply an application of the aforementioned uncertainty concept to the medical realm. It might appear when there is a lack of information about a patient’s health, or when physicians have only a vague idea of the outcome of a treatment [1, 21,22,23]. The same goes for “medical certainty” [24], which can manifest, e.g., in evidence-based diagnoses and assessments of patients’ health, as well as in educated predictions about the effectiveness of treatments and the potential future health outcomes for patients.

Medical uncertainty presents a significant challenge for the field of medicine [1]. Patients typically expect their physicians to accurately diagnose their conditions and provide effective treatments to alleviate their symptoms. They may also expect physicians to administer the treatments themselves. However, physicians often lack information about their patients, their symptoms, and their health trajectories, or the information physicians have about them is vague or ambiguous [25]. Despite this lack of information, physicians are still expected, sometimes even required, to make diagnoses anyway and to recommend or carry out treatments [21]. This uncertainty, resulting from a lack of situational knowledge combined with the need to take action, means that the diagnoses given and the treatments recommended are provisional to a certain extent [26]. That is: they may appear to be correct at the time—but there is a possibility that they will be proven incorrect, ineffective, or even harmful in the future [21].

3 The use of AI in medicine—and how it shifts medical (un-)certainties

One approach to managing uncertainty, particularly in the field of medicine, is to convert it into quantifiable risks [24, 27]. By using the information available, probabilities of a correct diagnosis or successful treatment can be determined, along with the potential consequences if they are not. This approach offers statistical insights into both the likelihood and severity of specific outcomes, enhancing our understanding of the situation and allowing for a nuanced evaluation of potential actions [28]. It enables the comparison of different options, thereby reducing uncertainties and creating a form of medical certainty [29]. Given the complexity of medical risk assessment, new technologies are continually being developed to aid in this process, to assist physicians in managing uncertainty, and to prevent possible negative consequences [29]. Currently, these include AI technologies in particular [30]. We will introduce three such AI-technologies and show how they reduce medical uncertainties and produce risk-based certainties.
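To make this risk-quantification step concrete, the following minimal Python sketch compares two treatment options by their expected risk. It is an illustration only: the option names, success probabilities, and severity weights are invented stand-ins for the evidence-based estimates such a system would actually draw on.

```python
# Illustrative sketch of risk-based comparison. Each option carries an
# estimated probability of success and a severity weight for failure
# (0 = harmless, 1 = severe). All numbers are invented.
options = {
    "treatment_A": {"p_success": 0.85, "failure_severity": 0.7},
    "treatment_B": {"p_success": 0.70, "failure_severity": 0.2},
}

def expected_risk(option: dict) -> float:
    """Probability of failure, weighted by how severe a failure would be."""
    return (1.0 - option["p_success"]) * option["failure_severity"]

# Ranking the options turns diffuse uncertainty into comparable risk figures.
for name, opt in sorted(options.items(), key=lambda kv: expected_risk(kv[1])):
    print(f"{name}: expected risk = {expected_risk(opt):.3f}")
```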

A prominent example is AI assessment tools for skin cancer screening [31]. When dermatologists detect a suspicious spot on a patient’s skin, they can photograph the spot from various angles. A specialized AI-tool can then analyze the photos, examine the shape of the skin anomaly, its hue, edges, and other characteristics, and compare it with a large database of similar images and the associated diagnoses [30]. That way, the tool can assess with a high degree of precision—one for which dermatologists need many years of training—whether the skin anomaly is potentially dangerous and requires further dermatological examination, or whether it is probably harmless and presumably no action needs to be taken [32]. By providing these precise assessments of the risk potential of skin anomalies, AI-driven assessment tools can aid dermatologists’ diagnoses and treatment recommendations [33]—and reduce medical uncertainties.
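As a rough illustration of how such an assessment tool operates internally, the sketch below scores a lesion photo with a binary image classifier in PyTorch. The checkpoint file, the single-logit output format, and the 0.5 referral threshold are our own assumptions for the example, not features of any specific cited tool.

```python
# Sketch of inference in an image-based triage tool. Assumptions: a
# fine-tuned binary classifier saved as "lesion_model.pt" that returns
# a single logit; the 0.5 referral threshold is likewise illustrative.
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def malignancy_risk(model: torch.nn.Module, photo_path: str) -> float:
    """Return the model's estimated probability that the lesion is malignant."""
    image = Image.open(photo_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(batch)
    return torch.sigmoid(logit).item()

model = torch.load("lesion_model.pt")        # hypothetical checkpoint
model.eval()
risk = malignancy_risk(model, "lesion.jpg")
print("refer for further examination" if risk > 0.5 else "probably harmless")
```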

AI technologies have the potential to not only alleviate uncertainty in (more or less) acute medical situations but also to generate predictive certainties. This is particularly true when it comes to making predictions about the health outcomes for individuals (or populations). An example of this is AI-Clinical Decision Support Systems (AI-CDSS) [34]. These are often used during tumor board conferences when different experts come together to decide on how to treat a patient diagnosed with cancer [35]. AI-CDSS can use both a patient’s health data and a vast database of other cancer patients, their treatments, and treatment outcomes to simulate various treatment options in complex cases. By assessing the data, the AI-CDSS can provide an estimation of which treatment option has the highest probability of success and the lowest associated risks in the long run—thus producing predictive certainties and allowing medical professionals to make more informed decisions [36].
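One way to picture the core mechanism of such a system is cohort matching: estimate, for each candidate treatment, how often it succeeded in previously treated patients who resemble the current one. The following sketch shows this in plain Python; the cohort records, features, and similarity measure are all invented for illustration and are vastly simpler than a real AI-CDSS.

```python
# Illustrative cohort-matching step of a decision support system.
# Records, features, and weighting are invented toy data.
import math

cohort = [
    {"age": 61, "tumor_size": 3.1, "treatment": "chemo",   "success": True},
    {"age": 58, "tumor_size": 2.8, "treatment": "chemo",   "success": False},
    {"age": 64, "tumor_size": 3.4, "treatment": "surgery", "success": True},
    {"age": 59, "tumor_size": 3.0, "treatment": "surgery", "success": True},
]

def similarity(a: dict, b: dict) -> float:
    """Inverse distance over the two toy features (tumor size upweighted)."""
    d = math.hypot(a["age"] - b["age"], 10 * (a["tumor_size"] - b["tumor_size"]))
    return 1.0 / (1.0 + d)

def success_estimate(patient: dict, treatment: str) -> float:
    """Similarity-weighted success rate of one treatment in the cohort."""
    matches = [r for r in cohort if r["treatment"] == treatment]
    weights = [similarity(patient, r) for r in matches]
    successes = [w for w, r in zip(weights, matches) if r["success"]]
    return sum(successes) / sum(weights)

patient = {"age": 60, "tumor_size": 3.2}
for treatment in ("chemo", "surgery"):
    print(f"{treatment}: estimated success = {success_estimate(patient, treatment):.2f}")
```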

Medical Digital Twins (DT) [37] are another example of predictive AI-technologies. These are virtual representations of individuals or specific organs, composed of data that are analyzed in real time by AI and used for personalized health predictions. Similar to AI-CDSS, DT are used in part to simulate how well patients might tolerate medical interventions or certain medications and how this might affect their health in the medium to long term [38, 39]. But it is also envisaged that in the near future, as Matthias Braun outlines in his ethical considerations, DT will act predictively and warn individuals in real time when they are likely to suffer a medical complication in the near future and should seek medical treatment as a precaution [40]. With their predictions—whether in the context of clinical treatment or as a kind of medical “early warning system” [41]—DT provide patients and physicians alike with a greater understanding of expected health outcomes, thus creating a form of medical certainty.
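A minimal sketch of such an early warning mechanism, under heavily simplifying assumptions: one vital sign, a rolling baseline, and an alert whenever a new reading deviates strongly from that baseline. The readings and the z-score threshold of 3 are invented for illustration; a real digital twin would model far richer physiology.

```python
# Toy "early warning" loop: flag a heart-rate reading that deviates by more
# than three standard deviations from the rolling baseline. All values and
# thresholds are illustrative.
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=60)   # the most recent readings serve as the baseline

def check_reading(hr):
    """Return a warning string if the reading is anomalous, else None."""
    alert = None
    if len(window) >= 10:
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(hr - mu) / sigma > 3:
            alert = f"warning: heart rate {hr} deviates from baseline {mu:.0f}"
    window.append(hr)
    return alert

stream = [72, 74, 71, 73, 75, 72, 70, 74, 73, 72, 71, 118]   # sudden spike
for reading in stream:
    message = check_reading(reading)
    if message:
        print(message)
```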

The examples of AI assessment tools for skin screening, AI-CDSS, and DT demonstrate how the incorporation of AI-technologies can help mitigate medical uncertainties, generate new certainties, and, as various authors suggest, contribute to improving and personalizing medicine and making it more precise [30].

4 Shifts of medical (un-)certainties and their challenges for medicine

However, the AI-driven reduction of medical uncertainties and creation of medical certainties does not only have advantages; it also comes with some challenges. We will now outline several of these—two per heading—and explain why they are indeed challenges. We will delve into the first four challenges in more detail, as there is already significantly more literature on the last two—data protection and security—than on the first ones.

4.1 Stripping down humans to their measurable data points and making them disambiguous

To reduce medical uncertainties and produce new certainties, AI-technologies work with data from patients: primarily their vital data (blood pressure, pulse, blood oxygen saturation, etc.), lifestyle data (physical activity, duration and quality of sleep, time spent sitting down, etc.), or genetic data. From these data, AI-technologies create patient health profiles that they can analyze and match with a variety of other patient profiles—or from which they can make predictions about patients’ future health trajectories. In general, the more data and the better their quality, the better the results, i.e., the more AI-technologies can reduce uncertainties and produce stronger certainties.

However, AI’s reliance on health data comes with two challenges. First, there is the danger that data collection and processing will increasingly reduce humans to their data [42, 43]. What counts for AI-technologies are parameters that can be quantified, i.e., measured and converted into a numerical value, because these are the parameters that AI-technologies can work with. Parameters that cannot be measured or quantified accurately, such as the individual well-being of people or their social support, are more or less useless for AI-technologies. They cannot work with them—and in turn ignore and exclude them. This focus on AI-processable parameters gradually turns people into mere “bundles of information” [44], i.e., they are reduced to the sum of their measurable data.
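To make this structural point tangible, consider a minimal sketch of the kind of patient representation such a pipeline operates on. The fields are our own illustrative choices; the point is that only what fits into a numeric field enters the analysis at all.

```python
# Illustrative patient representation: only quantifiable parameters get a
# field, and the feature vector below is all a downstream model ever "sees".
from dataclasses import dataclass, astuple

@dataclass
class PatientProfile:
    systolic_bp: float     # mmHg
    heart_rate: float      # beats per minute
    spo2: float            # blood oxygen saturation (fraction)
    sleep_hours: float     # per night
    steps_per_day: int
    # Deliberately absent: subjective wellbeing, suffering, social support.
    # What cannot be encoded as a number is invisible to the analysis.

patient = PatientProfile(128.0, 72.0, 0.97, 6.5, 4200)
feature_vector = astuple(patient)
print(feature_vector)   # (128.0, 72.0, 0.97, 6.5, 4200)
```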

It may be asked what is wrong with such “medical reductionism” [45] that increasingly sees people as bundles of information in medicine. If this data-focused perspective helps to achieve more medical certainty and improves medical practice, why not welcome it? Such objections must be countered by pointing out that there are several parameters that cannot be quantified but are indeed quite relevant for medicine. For example, neither pain [46] nor suffering [47] can be measured and quantified completely objectively, nor can the subjective experience and wellbeing of patients [48]—yet they play a vital role for patients and their medical treatment nevertheless. Medicine that ignores these aspects and focuses only on objectively measurable parameters and “fixing” them becomes increasingly reductionist [44, 49]. The fact that such an approach may lead to wrong diagnoses, misguided treatment recommendations, or other kinds of harm [50]—although not always so—and may displace the subjective perspective with a data perspective [51]—although the two are not necessarily incompatible—shows why reducing patients to their data is, indeed, a challenge [45].

The second challenge, closely related to reductionism, is that measuring, quantifying, and datafying persons deprives them of their ambiguity. Ambiguous or inconclusive physical states—e.g., whether a person is ill or not, or whether she suffers or not and, if so, how much—are converted into fixed values in order to evaluate them and make them comparable [52].

Here, too, the question arises whether such “disambiguation” [53] is bad per se. Is not unambiguity to be welcomed if it contributes to medical certainty and improves medical practice? In response, one could again ask whether making people, their bodies, and their health disambiguous and comparable does justice to the complexity and multidimensionality of their individuality. Is not ambiguity sometimes part of being human [54, 55]? E.g., the question whether a person with a body temperature of 38.2 °C is ill or not, or whether a person suffers from slight pain or not, or even perceives it as pleasurable, often depends on the individual person—and sometimes cannot even be answered by the patient herself [56]. Referencing Thomas Bauer, one could ask whether such ambiguities are not only a large part of what it means to be human, but whether it is not exactly these ambiguities that make the world and being human exciting and worth living in [53]. With Hartmut Rosa, one could state that human ambiguity is part of persons’ “unavailability” [57], and that “disambiguating” [53] them would rob them of their humanity, vitality, and even dignity [58]. Against this background, making persons disambiguous certainly proves to be a challenge.

4.2 Pushing human physicians “out of the loop” and limiting patient participation

Furthermore, it can be observed that certainties quite often have a conclusive effect. What physician dares to question an AI that has access to all available patient data, can evaluate it highly efficiently, and can compare it with a large number of other patient profiles [59]—or to doubt its diagnoses, which are quite often better than their own [60]? And what patient wants to be involved in decision-making if their involvement can only make outcomes worse? This reliance on AI has the potential to increasingly push physicians “out of the loop”—that is, out of medical decision-making—and might have negative implications for patient engagement.

Physicians may be increasingly sidelined by the use of AI, becoming less and less actively involved in diagnosing and finding the best treatment option, and merely “signing off” on the AI’s diagnoses and recommendations in the end [30, 61]—whether by actively agreeing with the results or simply by not disagreeing with them. Although there might not be an intrinsic value in human agency per se, it might cause some major challenges if human physicians are totally “outlooped.” For example, if medical AI-technologies increasingly take over all professional tasks and physicians are—at most—only responsible for medical follow-ups and communication with the patient [62]: will this not slowly but surely lead to a deskilling of physicians [63, 64]? And who is morally and legally responsible if the AI’s diagnoses are wrong and the treatments it recommends are harmful [35, 65]? Both questions become even more challenging, since even the use of AI-technologies cannot reduce medical uncertainties to zero and create total certainties [66]—which is why they will inevitably still make mistakes sometimes [67].

Second, it is a high medical ideal to involve patients in finding the right diagnoses and treatments [68]. And even if some scholars assume that AI will make medicine more participatory [69], one might ask: apart from providing data, what role do patients play in the medical process? In the end, are not AI-technologies single-handedly making precise diagnoses and suggesting optimal treatments with a high degree of certainty? Would not patient involvement that goes beyond the mere production of data just worsen the outcomes—which would inevitably lead to a reduction in patient participation in the long term?

Again, one might ask: if patients taking an active role in the medical process would only make the results of AI worse, is it not more effective to exclude them in the first place? Would it not be more efficient to focus on transparency and explainability [70], and simply explain to patients afterwards how and why the AI arrived at its diagnoses and decisions? Even though this query is valid and the proposal quite tempting, it must be objected that the participation of patients is an important principle of medical ethics [71, 72]. Actively involving patients in their entire medical treatment and giving them the opportunity to participate in every aspect of diagnosing and finding the right treatment is an expression of respect for their autonomy and their notion of the good life [73]. Merely explaining how the AI came to a certain diagnosis and why it proposes a certain treatment, after everything has been decided, does not appreciate the patient’s autonomy in the same way.

4.3 Surveillance of patients, privacy and security issues

Last, there is the fact that the AI-driven production of medical certainty relies on substantial amounts of data. The best way to produce as much and as good data as possible, and thereby ensure the best AI results, is to use a variety of sensors that continuously monitor a person’s activities and vital signs. However, as has been pointed out several times [74], constant medical monitoring is a form of surveillance—which, in turn, poses the risk of the monitored person being nudged [75] or unknowingly having their freedom and autonomy restricted [76].

In addition, there are privacy and security issues [77]. Since medical data are extremely sensitive and allow conclusions to be drawn about a person’s private life and health, it can be very harmful if the data fall—whether legitimately (privacy issues) or through data leaks and hacks (security issues)—into the wrong hands: for example, into the hands of insurance companies or employers who might change their contracts’ conditions as a result [78], into the hands of private companies who might use them for their own purposes [79], or into the hands of criminals who might use them for criminal purposes [80].

5 Discussion

The two previous chapters suggest that the production of medical certainties through AI is a kind of tradeoff. The benefits of shifting (un-)certainties, as well as the improvement, personalization, and precision of medicine, come along with the several new challenges we have outlined above. This brings into focus the ambivalence of AI-driven certainty efforts and raises the question of how they can and should be dealt with. In this chapter, we will explore this question and briefly discuss how one might tackle these challenges. Then, we will touch upon the limitations of our study.

5.1 Suggestions on how to deal with the challenges of AI-driven certainty efforts

There are several alternatives for dealing with the challenges posed by AI-driven certainty efforts. To contemplate these alternatives, we will employ a wide reflective equilibrium [2, 3]. Using this approach, we will aim for a grounded evaluation of using AI for certainty purposes in medicine, taking into account the particular concerns and potentials of these technologies while incorporating ethical guidelines, moral theories, and legal benchmarks [81] to formulate recommendations.

The first alternative would be to refrain from using AI-technologies for medical (un-)certainty purposes altogether in order to sidestep these challenges. However, this radical approach would also forgo the potential benefits that come along with them and, proverbially, “throw the baby out with the bathwater.” Moreover, it completely ignores the fact that all technologies are to a certain extent ambivalent [82] and that every technical advance brings its own risks and challenges [83]. That is why, apart from the fact that it would be almost impossible to implement, we consider this approach too black and white and pragmatically half-baked.

Rather than questioning whether AI-technologies should be used at all for uncertainty-reducing and certainty-producing purposes in medicine, it is better to ask how the resulting challenges can be addressed as effectively as possible while still utilizing AI-technologies’ benefits. We would like to make some suggestions on how to achieve both—while at the same time addressing the limits of our suggestions. That is why, in Table 1, we offer, for each challenge, a concise suggestion followed by its potential constraints, i.e., factors that might impede the success of that recommendation.

Table 1 The various challenges associated with producing medical certainties through AI, suggestions on how to address these challenges, and their limitations (created by the authors)

Without lapsing into Neo-Luddism on the one hand and renouncing AI-technologies as a whole [95], or lapsing into various forms of solutionism on the other hand and ignoring the challenges [96], these suggestions can help to find a way of dealing with AI-technologies in medicine and to utilize their (un-)certainty potentials for the best of the patients.

5.2 Limitations

Wrapping up our paper, it is pivotal to address a few critical questions and thereby pinpoint some limitations of our study.

The first question might be: What is genuinely novel about this research? Many of the issues highlighted, such as the risk of medical reductionism or concerns over privacy and data security in AI-driven medicine, have been analyzed in prior studies. While this criticism holds some merit, it is essential to recognize that these topics, though individually explored, have not been presented together under the perspective of (un-)certainty. Drawing from Aristotle’s idea that the whole is greater than the sum of its parts [97], and inspired by Richard Rorty’s insight that progress often involves crafting new perspectives on familiar subjects [98], this unique assembly of points and their recontextualization are what make our paper distinctive and contribute something fresh to the discourse.

Another critical question might be why we have not engaged more thoroughly with the existing literature on the mentioned challenges. For instance, there are already comprehensive debates on medical reductionism [45, 49, 99,100,101,102] and publications addressing autonomy [103, 104] or (meaningful) control within the realm of AI [40, 105,106,107]. Indeed, while numerous discussions exist, the primary aim of this contribution was to draw attention to the multifaceted challenges of AI-driven certainty efforts in medicine. We only cited aspects of the existing discussions when they served this purpose; otherwise, we chose not to elaborate on them.

Further criticism might highlight our lack of original data to support our claims. While this is true, we have referenced empirical studies at relevant points in the text to support our claims and arguments—providing an empirically backed rationale.

Lastly, one could argue that our recommendations are too brief and require further elaboration on how they should be implemented. We fully agree. Before the recommendations can be implemented, they need to be further developed. However, our primary aim in this paper was to highlight the challenges arising from AI-driven certainty efforts in medicine and to chart an initial course on how (not) to approach them. For further nuances and implementation, more research is required.

6 Conclusion

Starting from the observation that uncertainties in medicine are omnipresent and prove to be a great challenge, we have outlined how AI-technologies could be used to reduce these uncertainties and produce new medical certainties. Nonetheless, there are not only advantages to using AI-technologies for this purpose. There are also several challenges, which this contribution focuses on.

After introducing the concepts of uncertainty and certainty, showing where they occur in medicine, and how AI-technology helps to reduce the former and produce the latter, we have gone into detail about these challenges. We have shown that the use of AI-technologies tends to strip humans down to their measurable data points and to make them disambiguous. Furthermore, we have shown how the use of AI-technologies pushes human physicians out of the loop of medical decision-making and has negative implications for patient engagement. Last, the use of AI implies constantly surveilling patients and bears the risk of privacy and security issues.

Having outlined these challenges, we have made some suggestions on how to address them. Although none of these suggestions will completely solve the challenges, they can help mitigate the potentially harmful effects of AI-driven certainty efforts and show how certainty-producing AI may be beneficially used in medicine. We hope that this analysis of AI-generated certainties in medicine, the challenges associated with producing them, and the suggestions for addressing them will contribute to expanding perspectives on the use of AI in medicine and to furthering discussions within the medical community on the topic of medical (un-)certainties.