Critical questions in the era of AI-driven health care

Integrating the CARS community's rich pool of modelling expertise into intelligent clinical decision-making is a challenge that requires mindsets cultivated at CARS over the last 40 years, mindsets which now need attention more than ever before. This applies in particular to addressing questions and problems relating to the complexity and transparency (or lack thereof) of IT systems and their application in health care, such as intelligent robotic systems for the operating room or AI-assisted clinical decision-making in radiology.

Assuming that many engineers and scientists consider certain state-of-the-art AI algorithms and systems to be incomprehensibly complex [1], how can one expect patients, physicians and health care providers to be well advised when actually using them? Some critical questions need to be addressed, such as:

1. What basic value system (e.g. of patients, physicians/medical staff, health care providers, researchers, public solidarity-based, profit-oriented, control-driven), if any, should be reflected in AI based IT systems that are designed to assist in clinical decision-making, specifically in the domain of CARS?

2. Why do we need to re-examine the communication behaviour of humans with intelligent and networked machines?

3. How should IT systems be designed so that they record and (transparently) display a reproducible path of clinical decision-making?

4. How can possible negative side effects in the use of AI based IT systems be minimized?

5. Who assumes responsibility for damages incurred through the use of AI systems in health care, specifically in the domain of CARS?

6. Where and when can different concepts and models relating to AI based IT systems be realized in a controlled (certified?) and verifiable manner?

During the Clinical Day at CARS 2023, a start was made on the challenging adventure of integrating AI-related systems and Model-Guided Medicine (MGM) into CARS. This was expressed and discussed in a dedicated panel by a distinguished group of individuals who have been concerned for some time with the pros and cons of AI in the domain of Computer-Assisted Radiology and Surgery.

The first three questions (#1, #2 and #3) relate to AI based IT system design (e.g. value base, intelligence and communication aspects, transparency with respect to complexity and incomprehensibility), whereas the last three questions (#4, #5 and #6) relate more to AI system applications and their social and legal implications. Central to all six questions is the concern that an AI-driven health care (HC) system may not necessarily be optimal for a patient-oriented HC system, as aimed for by the CARS community in its R&D activities over the last 40 years. To address this uncertainty, an additional question needs to be considered:

7. How should AI based IT systems be employed as an empowering tool for all stakeholders involved in the domain of radiology and surgery, in order to enable a wisdom-oriented health care system?

This editorial is an introduction to several follow-up editorials and panels at the CARS congress, planned to selectively address the critical questions outlined above. Historically, a related set of questions was discussed some 20 years ago in a CARS-supported panel in Dresden, Germany, on the topics of telemedicine, robotics and AI. An important signal from this panel, consisting of physicians, (computer) scientists, engineers, health care providers, philosophers and theologians, was that the different professions involved in HC have to:

1. Work closely together,

2. Respect each other's point of view, and

3. Balance viewpoint summaries reflecting society as a whole.

Another strong recommendation that came from this panel was: “Nemo est judex in causa sua” (no one should be a judge in his own cause). In other words, a new direction in ethics is called for, one that does not focus only on the codices of particular professions or stakeholders but is oriented towards society as a whole. Considering in this editorial, and within the tradition of CARS, the four stakeholder groups of patients, physicians, researchers and health care providers is a first step in this direction. Hopefully, follow-up editorials, full papers and future panel discussions relating to the seven critical questions will respect the above signals and recommendations.

Stakeholders in health care and their expectations

It can be assumed, though not conclusively proven, that each of the prime stakeholders in a patient-oriented health care system, i.e. patients, physicians/medical staff, health care providers and researchers (see Fig. 1), will eventually have access to an AI system that has been designed to reflect their specific value system (bias). In the respective AI system, this will be reflected in the algorithmic steps for selecting the relevant corpus of knowledge and in the tuning of the parameters of the corresponding ANN models. In some cases, subgroups of the different stakeholder categories will also have their own specific AI systems, for example for different clinical disciplines of physicians such as radiologists and surgeons. Market forces will likely determine the granularity level of stakeholder/user specific AI systems.
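Purely as an illustrative sketch (the stakeholder labels, document fields and value profiles below are hypothetical assumptions, not part of any existing system), the two bias-carrying steps named above, corpus selection and subsequent parameter tuning, could look as follows:

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str  # e.g. "peer_reviewed", "guideline", "cost_report"

# Hypothetical value profiles: the evidence sources each stakeholder
# group treats as admissible -- an explicit, inspectable encoding of bias.
VALUE_PROFILES = {
    "patient": {"peer_reviewed", "guideline"},
    "provider": {"peer_reviewed", "guideline", "cost_report"},
}

def select_corpus(documents, stakeholder):
    """Corpus-selection step: keep only documents admissible under the
    stakeholder's value profile; the ANN parameter tuning (fine-tuning)
    would then run on this filtered corpus and diverge accordingly."""
    trusted = VALUE_PROFILES[stakeholder]
    return [d for d in documents if d.source in trusted]

if __name__ == "__main__":
    docs = [
        Document("Trial results ...", "peer_reviewed"),
        Document("Reimbursement data ...", "cost_report"),
    ]
    # The provider model sees both documents, the patient model only one.
    for group in ("patient", "provider"):
        print(group, [d.source for d in select_corpus(docs, group)])
```

Making the value profile an explicit data structure, rather than an implicit property of the training data, is one way such bias could at least be made inspectable.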

Fig. 1 Perceptions of stakeholders in health care in the domain of CARS and some of their viewpoint summaries

It can also be assumed that answers to questions from user-specific AI systems are more correct, i.e. have a higher truth value, than those generated by generalized AI systems. In particular, this may be achieved through an intelligent AI system interface designed to enhance/augment truth finding and handling in the spirit of the philosophy of fallibilism.

How far and for how long the different stakeholders in HC will remain in control of searching for, finding and handling truth in their respective domains is anybody's guess, but it is still worth paying attention to in order to support the notion of “Der Mensch, nicht die Maschine, ist das Maß …” (“The human being, not the machine, is the term of reference …”) [1].

This raises the question of how the human-oriented “term of reference” is defined, as compared to the characteristic results/behaviour displayed by the machine. One possibility of defining the human being in this context is the characteristically human ability to think freely (not necessarily algorithmically) about complex situations and to express the corresponding thoughts in spoken and/or written words. Consequently, the origins of possible truth and wisdom become traceable and may be considered a personal marker of a specific human being.

When employing AI based results/behaviour in the era of an AI-driven HC, a prime concern should be to ensure that any claims to truth and wisdom are made transparent with regard to the origin of the sources used by the AI system (e.g. type of data, information, knowledge and/or models [3]) and are verifiable with respect to the algorithmic processes that have been applied. Depending on how AI is employed, this may lead to an augmentation or an erosion of personal human markers.
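As a minimal sketch of what such source and process transparency might look like in practice (the record structure, field names and values below are hypothetical assumptions, not an existing standard), each AI-generated answer could carry a machine-readable provenance record:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    """Hypothetical provenance attached to every AI-generated answer."""
    answer: str
    source_types: list   # e.g. data, information, knowledge, model
    sources: list        # identifiers of the corpus entries actually used
    pipeline: list       # ordered algorithmic steps that were applied
    model_version: str   # exact model the answer is attributable to

record = ProvenanceRecord(
    answer="Lesion consistent with ...",
    source_types=["knowledge", "model"],
    sources=["guideline:XYZ-2023", "corpus:doc-0421"],
    pipeline=["retrieval", "ranking", "generation"],
    model_version="radiology-assistant-0.1",
)

# A verifiable trail: a reviewer can check each listed source and
# re-run each listed pipeline step against the same model version.
print(json.dumps(asdict(record), indent=2))
```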

Each of the four prime stakeholders of a patient-oriented HC system shown in Fig. 1, and possibly some other stakeholders with an impact on HC, has more or less developed specific codices and/or expectations of how health care should be practised. Hopefully, these are also based on a reproducible and truth-related set of data, information, knowledge and model entities. If we can assume that all stakeholders have the search for truth in common, scholarly publishing and communication (SPC) positions itself as a common denominator and universal tool to assist in searching for, finding and handling this most treasurable good (Fig. 2). It is therefore in the interest of all stakeholders that SPC remains free of fake truths, fake papers and hallucinations that may be introduced by AI based IT systems.

Fig. 2 Joseph Weizenbaum (pioneer in computer science, AI and sociological thinking): „Der Mensch, nicht die Maschine, ist das Maß!“ (“The human being, not the machine, is the term of reference”) [1]

Results of R&D published in peer-reviewed scientific/medical journals and communicated at congresses, as practised by the CARS community, hopefully fulfil the quest for objective knowledge and truth management, with the aim of minimizing erroneous assumptions and dogma, for example with minimal or no bias, predetermined opinions, beliefs or fake truths. This applies in particular to identifying biases that may be introduced by AI.

In the interest of a patient-oriented HC system supported by all four stakeholders, there are a number of important, if not vital, requirements on how truth relating to HC should be discovered, unfolded and managed; for example, truth should be:

1. Scientifically based, verified and documented,

2. Medically relevant and significant, and

3. Accessible and affordable when transformed into clinical tools and workflows.

This is not an easy undertaking, as the quest for truth implies, for example:

  • To pay attention to and make transparent as many of the relevant variables as possible,

  • To determine their importance rating, taking into account the “viewpoints” of all stakeholders impacted by the result, including the taxpayer, and

  • To enable holistic medicine by means of multidisciplinary networking, in which each of the stakeholders feels empowered through having access to truth-finding tools.

It remains to be seen whether and how truth-finding tools based on MGM and AI can assist in the reliable and transparent linking of all variables relating to clinical situational and workflow models for computer-assisted diagnosis and therapy.

Last but not least, user-specific AI systems based on a truth reflected in a related SPC-based corpus of knowledge are likely to help ensure fairness, in the sense of adapting to the information and communication needs of the specific user for prediction-based decision-making, sometimes also referred to as algorithmic fairness [2].
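As a hedged illustration of one way algorithmic fairness is commonly formalized (the groups, prediction values and tolerance below are hypothetical illustration values, not drawn from [2]), a demographic-parity check compares a predictor's positive rates across groups:

```python
# Minimal sketch of a demographic-parity check, one common (and
# contested) formalization of algorithmic fairness.

def positive_rate(predictions):
    """Fraction of cases for which the model recommends intervention."""
    return sum(predictions) / len(predictions)

# Hypothetical model outputs (1 = intervention recommended) for two
# patient groups that a stakeholder might care about.
group_a = [1, 0, 1, 1, 0, 1]
group_b = [0, 0, 1, 0, 0, 1]

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"demographic parity gap: {gap:.2f}")

# A gap above the chosen tolerance flags the predictor for human review.
if gap > 0.1:
    print("warning: predictor treats the two groups very differently")
```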

Discussion

Questions that surface from the theme of MGM relate to why, how, where and when MGM methods and tools will impact an increasingly AI based (biased!) decision-making process in health care. This is particularly relevant as there are tendencies to move from a patient-oriented HC system to an AI-driven HC system, rather than towards a wisdom-oriented health care system that may benefit all stakeholders.

For example, how can MGM become an enabler for moving from data-driven machine learning/AI to transparent model-driven machine learning/AI in medicine [3] and, hopefully, thereby to a wisdom-oriented health care system? In particular, how can desirable AI concepts such as transparency, predictability, cause–effect reasoning, cooperativeness, agent- and safety-driven design, and data and model interoperability be promoted with MGM? Should model-driven machine learning be the basis for a transparent machine intelligence and replace a rather black-box-based artificial intelligence?
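To make the contrast concrete, here is a minimal, hypothetical sketch (the washout example, data and fitting formula are illustrative assumptions, not from this editorial or [3]) of model-driven versus purely data-driven fitting:

```python
import math

# Synthetic observations of, say, contrast-agent washout over time.
times = [0, 1, 2, 3, 4, 5]
values = [1.00, 0.61, 0.37, 0.22, 0.14, 0.08]

# Model-driven: assume the mechanistic form v(t) = exp(-k * t) and
# estimate the single, clinically interpretable parameter k by
# least squares on log(v) = -k * t.
k = -sum(math.log(v) * t for t, v in zip(times, values)) / sum(t * t for t in times)
print(f"estimated washout rate k = {k:.2f} (interpretable, checkable)")

# Data-driven: a nearest-neighbour "black box" reproduces the data
# but exposes no parameter a clinician could inspect or question.
def black_box(t):
    return values[min(range(len(times)), key=lambda i: abs(times[i] - t))]

print(f"black box at t=2.5 -> {black_box(2.5):.2f} (no inspectable parameter)")
```

The model-driven variant exposes a single parameter with clinical meaning that can be verified against domain evidence, which is precisely the kind of transparency the questions above call for.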

Finally, what role will model-based medical domain evidence play when it comes to verifying, validating and evaluating AI algorithms? Related to this is the question of whether we need a new professional expertise in health care, perhaps entitled “Medical Intelligence Consultant”, for example for AI and MGM related problems in radiology and surgery. These and related questions will become the focus of attention of future editorials planned for IJCARS and of panel discussions at CARS congresses. As a starter, this editorial is meant to raise awareness of what lies ahead for the prime stakeholders in health care and society at large.