
1 Introduction

Artificial intelligence (AI) generally describes “the capability of a computer program to perform tasks or reasoning processes that we usually associate with the intelligence of a human being (Lupton 2018).” Although it is unlikely that AI will completely replace human physicians anytime soon, it is now possible for AI to independently perform tasks that fall squarely within the scope of medical practice, most notably diagnosis, prognosis, and consultation in response to individualized medical information.Footnote 1 In April 2018, the U.S. Food and Drug Administration (FDA) approved IDx-DR, the first artificially intelligent device capable of autonomously diagnosing patients with diabetic retinopathy without the input of a human doctor (U.S. Food & Drug Administration 2018). In Europe, Oxipit, an AI capable of autonomously producing “final reports for healthy patient X-ray studies,” received a CE mark, clearing the way for its use in clinical practice (Oxipit 2022). Deep learning (DL) is the subset of AI most likely to produce technologies, like IDx-DR and Oxipit, capable of autonomous medical decision making by training machines with artificial neural networks to analyze large amounts of medical and health data to detect patterns.Footnote 2

The introduction of artificial intelligence using DL in modern medicine holds promise for improving the accuracy, efficacy, and efficiency of medical diagnosis, prognosis, therapeutic decision making, image analysis, and patient monitoring (Chang et al. 2019; Jackson et al. 2021). On the other hand, it also introduces a host of ethical and legal concerns surrounding safety, transparency, bias and discrimination, data privacy, consent and autonomy, and responsibility and accountability (Lawry et al. 2018; Gerke et al. 2020; Jackson et al. 2021). Further amplifying these concerns is the fact that the existing ethical and legal regimes that govern medical practice and medical malpractice are not designed for nonhuman doctors.

Pathology, as a data-rich subspecialty of medicine, is a hotbed for the development and implementation of medical AI (Chauhan and Gullapalli 2021). As a result, Jackson et al. (2021) call on pathologists to provide developmental as well as regulatory and ethical leadership for the uptake of AI in clinical and laboratory medicine (Jackson et al. 2021). This chapter combines the medical and legal expertise of its authors to recommend parameters for the Autonomous AI Physician and identify the ethical and legal issues that arise from the practice of medicine by the Autonomous AI Physician. Following this, the authors identify and suggest the potential application of concepts from the various regulatory and legal regimes that currently govern medical practice and medical malpractice to the future practice of medicine by the Autonomous AI Physician.Footnote 3

2 Artificial Intelligence in Pathology

Pathology uses a data-intensive, complex, and comprehensive workflow to diagnose and study disease processes (Pallua et al. 2020). Both anatomic and clinical pathological findings and data heavily inform diagnosis, prognosis, and therapeutic recommendations of all medical specialties (Chauhan and Gullapalli 2021). Digitization has already improved workflows in pathology by allowing virtual microscopic analysis of whole slide imaging, which has proven to be comparable to the conventional microscope, long considered the gold-standard for detecting pathological changes in tissues and cells (Pallua et al. 2020). The introduction of AI into this morphologic analysis promises to further improve accuracy by reducing diagnostic inconsistency caused by human observer variability (Chang et al. 2019). For example, AI can be trained with digitized images and associated diagnoses rendered by human pathologists to analyze new images for pathological patterns that lead to quicker and more accurate diagnoses (Jackson et al. 2021). Using DL techniques, AI has recently proven that it can outperform human physicians in even more complex tasks, including predicting the stage and grade of lung cancer (Chang et al. 2019).
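To make this training process concrete, the following is a minimal illustrative sketch, not drawn from any system cited in this chapter, of how a convolutional network might be trained on digitized slide patches labeled with pathologist diagnoses. The data here is synthetic, and the model choice, tensor sizes, and labels are placeholder assumptions.

```python
# Minimal sketch (not a clinical system): training a small CNN to classify
# digitized slide patches using labels derived from pathologist diagnoses.
# Synthetic tensors stand in for whole-slide image patches.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Synthetic stand-ins: 32 RGB patches (224x224) with binary labels
# (e.g., 1 = pathological pattern present, 0 = absent).
patches = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,))
loader = DataLoader(TensorDataset(patches, labels), batch_size=8, shuffle=True)

model = models.resnet18(weights=None, num_classes=2)  # untrained CNN backbone
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for x, y in loader:  # a real system would train far longer, with validation
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Inference on a new patch: the model outputs class probabilities, which a
# deployed system would aggregate across an entire slide before reporting.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(torch.randn(1, 3, 224, 224)), dim=1)
print(probs)
```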

Pathology’s digitization combined with its generation of large amounts of medical data make the field, along with radiology, a “prime target[] for disruptive innovation of health care AI applications over the next decade (Chauhan and Gullapalli 2021).” Allen articulates three progressive levels of AI integration in pathology (Allen 2019). The first level keeps pathologists in the workflow loop by integrating AI as one of the many diagnostic tools that pathologists use for medical decision making (Allen 2019). The second level describes AI that can independently render pathologist reports but keeps human pathologists on the workflow loop to provide quality oversight for AI-generated medical decisions (Allen 2019). The third level of AI involvement removes human pathologists from the workflow loop, which is entirely controlled by autonomous AI (Allen 2019).

Other medical experts doubt a future in which AI completely replaces human pathologists, noting that AI has yet to master the unique ability of the human brain to synthesize information across various sectors of knowledge (Chauhan and Gullapalli 2021). Though Chauhan and Gullapalli (2021) admit that the future role of AI in pathology is unpredictable, they dare to make one prediction: “The need for a wary and cautious eye on the quality and process control by pathologists is unlikely to be automated anytime soon (Chauhan and Gullapalli 2021).” Pathologists in clinical laboratories are also responsible for the generation and safekeeping of “one of the largest single sources of objective and structured patient-level data within the healthcare system (Jackson et al. 2021).” As a result, pathologists are not only well-positioned but, as the custodians of highly coveted medical data, ethically obligated to help usher in a new age of AI.Footnote 4

3 The Autonomous AI Physician: Parameters

While the concept of a self-sufficient robot doctor may be the stuff of science fiction, AI is already capable of autonomously practicing medicine, including diagnosis, prognosis, and provision of treatment recommendations.Footnote 5 Deep learning allows AI to mimic human brain function to independently process data and reach decisions using algorithmic reasoning that continuously improves as the AI collects more data (Ahmad et al. 2021). Although AI manufacturers may attempt to describe AI as “cognitive computing” or medical support tools, the reality is that AI can now independently consult millions of pages of literature to suggest individualized medical treatments (Chung and Zink 2018), analyze and interpret radiology images and pathology slides (Griffin 2021; Oxipit 2022), diagnose and stage cancer (Ahmad et al. 2021), and predict patient outcomes (Ahmad et al. 2021). And with investment in healthcare AI outperforming any other sector in the global economy, the capability of medical AI will only continue to grow (Griffin 2021). Some futurists predict artificial general intelligence to be a reality by 2029 (Chung and Zink 2018).

Now is the time to set parameters for the autonomous practice of medicine by AI. While AI may be able to “fill much of the gap between human performance and perfection (Jorstad 2020),” it will never be capable of providing the integral human components of medical practice, “like touch, compassion, and empathy (Griffin 2021).” Griffin explains: “Medicine is not purely a science that can be managed with statistics, mathematics, and computer algorithms, and overreliance on AI may lead to harm in instances when human compassion, human touch, or human interpretation of data context is necessary (Griffin 2021).” From the perspective of diagnostic pathology, Ahmad et al. note that, “[t]he diagnostic process is too complicated and diverse to be trusted to hard-wired algorithms alone. It is hoped that AI and human pathologists will be natural cooperators, not natural competitors (Ahmad et al. 2021).” As recognized by the EU’s Special Committee on Artificial Intelligence in a Digital Age, human oversight of autonomous AI medical decisions is indispensable (European Parliament Special Committee of Artificial Intelligence in a Digital Age 2021). As a result, the Autonomous AI Physician, as used in this Chapter, describes artificial intelligence that is capable of performing acts ordinarily considered medical practice (diagnosis, prognosis, development of a treatment plan, etc.) using algorithmic reasoning to make medical decisions without a human involved in that medical decision-making process. Additionally, the Autonomous AI Physician should currently not stand alone as the sole medical decision maker for an individual patient but should instead be situated within a larger treatment team that includes human medical practitioners.

4 Ethical and Legal Implications of the Autonomous AI Physician

The proliferation of AI technologies capable of performing tasks typically reserved for human medical professionals can translate to cheaper, more accessible, and higher quality healthcare (See Jackson et al. 2021). In addition to diagnostic AI ranging from IDx-DR’s ophthalmologic diagnoses to Oxipit’s radiology reports, AI applications in medicine can read eye scans, predict early-stage coronary artery disease, and detect cardiac arrest over the phone in real time (Gerke et al. 2020). Gains realized by innovative AI technologies in medicine, however, do not come without risks to patient safety and privacy. As a result, medical, legal, and data experts call for robust ethical and regulatory oversight of AI in the health sector to ensure that new technologies are implemented fairly, safely, and securely (Lawry et al. 2018; Allen 2019; Jackson et al. 2021). Although regulatory agencies are now attempting to address AI risks, the early “development of AI, broadly speaking, has occurred substantially outside of any regulatory environment (Allen 2019).”

Regulating the development of AI in any sector is inhibited by the “pacing problem,” which describes the proclivity of technological innovation to disengage from regulatory regimes and social norms that lag behind the development of new technology.Footnote 6 Additionally, innovation in the tech industry is driven by values markedly different from those in the healthcare industry (Jackson et al. 2021). “Mov[ing] fast and break[ing] things” doesn’t exactly translate to an acceptable patient safety strategy.Footnote 7 Still, successful ethical, regulatory, and legal strategies for guiding the implementation of AI in medical practice will need to balance the benefits of encouraging innovation in the health sector with the risks to patient safety and privacy (Allen 2019). Interestingly, the tech industry, including AI researchers and data scientists, has led initial discussions surrounding the importance of ethical and responsible AI, but experts warn that the industry in charge of developing AI cannot alone guide the ethical implementation of the same technology (Chauhan and Gullapalli 2021). Allen opines that accomplishing such a task “is likely to require an unprecedented level of governmental, professional societal, and industrial cooperation and trust-building (Allen 2019).”

Though the ethical and legal aspects of integrating autonomous AI in medicine cannot be neatly separated, we nevertheless organize the ethical discussion considering the core principles of medical ethics—autonomy, beneficence, nonmaleficence, and justice—to conclude that ethical AI must be transparent, reliable, safe, and free of bias, while organizing the legal discussion around data privacy and liability for patient harm.Footnote 8

4.1 Ethical Consideration: Transparency

The requirement that medical AI maintain a level of transparency sufficient to ensure patient autonomy is twofold. First, AI developers should be transparent about the use of patient data for training medical AI systems (Jackson et al. 2021). Second, patients should have sufficient information about the use of AI in clinical care, including the risks and benefits of AI-based medical decisions as well as information about how those decisions are made (Jackson et al. 2021).

The question of whether patients need to give permission for AI developers to use health data generated during their medical treatment depends on the jurisdiction’s rules governing disclosure of personal health information. In the United States, because patient data is typically deidentified before being shared with AI developers for training new technologies, the primary law protecting health data, the Health Insurance Portability and Accountability Act (HIPAA), does not prevent disclosure (Jackson et al. 2021). Jackson, et al. argue that the risk of reidentification of patient data by cross-referencing multiple data sets mandates that a patient’s consent be obtained prior to using their health data for training AI (Jackson et al. 2021). This consent requirement, should it be recognized, cannot be satisfied by obtaining patient consent for processing data for individual medical treatment and payment, but instead requires additional consent (Jackson et al. 2021). In Europe, the General Data Protection Regulation (GDPR) generally requires explicit patient consent for a specifically identified purpose before an individual patient’s health data is processed for any reason (The European Parliament and the Council of the European Union 2016, Art. 9). Although the GDPR does not regulate “anonymized data,” which can no longer be connected to an identifiable person, it does restrict the use of data that has a reasonable likelihood of being re-identified (The European Parliament and the Council of the European Union 2016, Recital 26).
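The re-identification risk that Jackson et al. describe can be illustrated with a toy example using entirely hypothetical records: joining a “de-identified” clinical dataset with a separate public dataset on shared quasi-identifiers, such as ZIP code, birth year, and sex, can restore the link between names and diagnoses.

```python
# Toy illustration (hypothetical data only): "de-identified" records can
# sometimes be re-identified by joining on quasi-identifiers that also appear
# in another available dataset.
import pandas as pd

# De-identified clinical records: names removed, quasi-identifiers retained.
deidentified = pd.DataFrame({
    "zip": ["10001", "10001", "94105"],
    "birth_year": [1980, 1975, 1980],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetic retinopathy", "hypertension", "asthma"],
})

# A separate, publicly available dataset (e.g., a voter roll) with names.
public = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "zip": ["10001", "94105"],
    "birth_year": [1980, 1980],
    "sex": ["F", "F"],
})

# Cross-referencing on the shared quasi-identifiers links names to diagnoses.
reidentified = public.merge(deidentified, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```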

In addition to having control over their health data, patients should also be informed about the role of AI-sourced decision making in their medical care. When and to what extent patients are informed about the use of AI in making diagnoses and therapeutic decisions is unsettled, leaving medical and legal experts concerned about infringement upon a patient’s ability to exercise the autonomy needed to make informed decisions about their treatment (Gerke et al. 2020). Minimally, a patient should be informed when AI is used to generate diagnoses or treatment recommendations, including an explanation of any risks attendant with accepting AI-sourced medical decisions. Ideally, patients should also be given a plain-language explanation of how the AI reached its conclusions; however, the reality is that AI technology that uses DL can conceal algorithmic decision-making criteria from even the AI’s developer, creating the problem of opaque “black-box” AI, which describes the inability of humans to understand the basis for the AI’s decision (Chauhan and Gullapalli 2021). Nevertheless, disclosures about data used to train the AI as well as the AI’s pre-market performance statistics should be given to patients whose care is influenced by AI medical decision making.Footnote 9
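As one illustration of what such disclosure might look like in practice, the sketch below outlines a hypothetical, structured “model card”-style record of training data provenance and pre-market performance. The field names and values are illustrative assumptions only, not a prescribed or regulatory format.

```python
# Hypothetical sketch of a structured disclosure ("model card" style)
# summarizing training data provenance and pre-market performance.
# All field names and values are placeholders, not a mandated format.
from dataclasses import dataclass

@dataclass
class ModelDisclosure:
    intended_use: str
    training_data_sources: list[str]
    training_population: str                  # demographics represented in the data
    known_limitations: list[str]
    premarket_performance: dict[str, float]   # e.g., sensitivity/specificity
    last_updated: str

disclosure = ModelDisclosure(
    intended_use="Example: triage of chest X-rays for radiologist review",
    training_data_sources=["De-identified images from two hypothetical centers"],
    training_population="Adults 18-90; limited pediatric representation",
    known_limitations=["Lower sensitivity on portable X-rays (illustrative)"],
    premarket_performance={"sensitivity": 0.94, "specificity": 0.89},
    last_updated="2023-01-15",
)
print(disclosure.premarket_performance)
```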

4.2 Ethical Considerations: Reliability and Safety

The “black-box” problem also presents an impediment to ensuring that AI-sourced medical decision making is reliable and safe for patients because it conceals the process by which the AI system reached a decision, preventing analysis and oversight of the decision-making process. Some experts argue that the accuracy of the AI’s decisions is what matters regardless of the hidden process it used to reach those decisions (Gerke et al. 2020). Although the algorithmic functions used in the AI’s analysis of data are not, and cannot possibly be, completely transparent with “black-box” AI, developers must still provide crucial information about how the AI was trained and potential biases of the software for independent oversight and analysis (Gerke et al. 2020). Typically, the quality of the training data given to the AI will directly correlate to the quality of the AI’s medical decisions (Gerke et al. 2020).

However, even when AI produces technically reliable decisions based upon the data it received, those decisions may be clinically unreliable and threaten patient safety (Lawry et al. 2018). While DL can enable an AI to develop and apply rules to detect patterns using large data sets, AI still cannot exercise the clinical reasoning of a human doctor to distinguish causation from correlation (Lawry et al. 2018). For example, an AI system trained to triage patients with pneumonia determined that asthmatic patients were low risk because they had better recovery outcomes following a pneumonia diagnosis than the general population (Caruana et al. 2015). While the system was trained to consider underlying risk in its decision making, it failed to recognize that patients with a history of asthma, who were considered high-risk pneumonia patients, received a higher level of care, thereby producing better outcomes. This inability of AI to properly recognize cause and effect can make the system unreliable, and therefore, unsafe (Lawry et al. 2018).Footnote 10
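The pattern Caruana et al. describe can be reproduced with a toy simulation (synthetic data, not the original study): when the treatment pathway is omitted from the training data, a model can learn that asthma appears “protective” even though its assumed true direct effect on mortality in the simulation is harmful.

```python
# Toy reconstruction (synthetic data) of the confounding pattern described by
# Caruana et al.: asthmatic pneumonia patients receive more aggressive care,
# so a model trained only on outcomes learns that asthma "lowers" risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
asthma = rng.binomial(1, 0.2, n)
severity = rng.normal(0, 1, n)

# Clinicians treat asthmatics as high risk and escalate care (ICU admission).
icu = rng.binomial(1, np.where(asthma == 1, 0.9, 0.2))
# Simulated truth: death risk rises with severity and with asthma itself,
# but falls sharply with ICU-level care.
p_death = 1 / (1 + np.exp(-(severity + 0.5 * asthma - 2.0 * icu + 0.3)))
death = rng.binomial(1, p_death)

# The model sees only asthma and severity, not the treatment pathway.
X = np.column_stack([asthma, severity])
model = LogisticRegression().fit(X, death)
# Typically negative, i.e., asthma looks "protective" despite its harmful
# direct effect, because the ICU pathway is hidden from the model.
print("asthma coefficient:", model.coef_[0][0])
```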

4.3 Ethical Consideration: Bias

Patient safety can also be compromised by systemic biases that manifest in AI medical decision making. Despite being nonhuman, AI can express subjective biases as a result of human-generated algorithms and data used to develop the AI (Chauhan and Gullapalli 2021). Algorithmic bias describes the systemic bias of AI decisions that reflect the human biases of the AI’s programmer (Nelson 2019). Algorithmic bias can be introduced through, “the data algorithm authors choose to use, as well as their data blending methods, model construction practices, and how results are applied and interpreted (Nelson 2019).” Chauhan and Gullapalli explain how AI algorithms with a “tunable variable” require researchers to make conscious choices that present an entry point for biases that can have “cascading effects downstream.”

Another source of AI bias comes from the data used to train the AI, which consists primarily of data from electronic medical and billing records. Nelson describes the data used to train AI as, “the data that we have as opposed to the data that is ‘right (Nelson 2019).’” Because data in electronic health records is not generated for the specific analytic functions of modern AI technology, but rather for medical treatment and billing, it reflects systemic biases—including racial, gender, geographic, and economic biases—that operate to further disadvantage underrepresented populations (Nelson 2019). For example, black women have historically been, and continue to be, victims of obstetric racism and subjected to unnecessary medical procedures (Campbell 2021). When data used to train AI contain bias, the AI will generate biased medical decisions in the absence of adequate measures designed to both identify and eliminate such bias (Lawry et al. 2018).

An overarching bias in health data used to train AI is that it comes from populations who have access to healthcare and is typically not representative of minorities and other marginalized subpopulations (Jackson et al. 2021). This problem is further exacerbated when wearable technologies source AI training data (Lawry et al. 2018). As a result, data used to train AI suffers from category imbalance and underspecification, both of which can make the AI’s decisions unreliable and unsafe when applied to members of a minority or underrepresented population (Chauhan and Gullapalli 2021).
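A simple illustration of why aggregate performance can mask these harms is a per-subgroup audit. The sketch below uses synthetic predictions and hypothetical group sizes to show how sensitivity can differ sharply for an underrepresented group even when overall accuracy looks acceptable.

```python
# Illustrative sketch (synthetic predictions): auditing error rates per
# subgroup can surface harms hidden by a single aggregate accuracy number,
# e.g., when one group is underrepresented in the training data.
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)

def audit(group_name, n, true_positive_rate):
    y_true = rng.binomial(1, 0.3, n)  # actual disease status (simulated)
    # Simulate a model that misses more positives in the smaller group.
    y_pred = np.where(y_true == 1, rng.binomial(1, true_positive_rate, n), 0)
    sensitivity = recall_score(y_true, y_pred)
    print(f"{group_name:>16}: n={n:6d}  sensitivity={sensitivity:.2f}")

audit("majority group", 10_000, 0.92)
audit("minority group", 500, 0.70)  # fewer training examples, more missed cases
```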

4.4 Legal Considerations: Data Privacy

Though most data used to train AI come from electronic health records, other sources of training data include purchasing records, income data, criminal records, and social media (Hoffman 2021). Third-party access to such data can reach far beyond the harm associated with an initial privacy violation to impact employment and credit decisions and insurance access and rates (Jackson et al. 2021). Additional negative impacts of third-party access to health data can include social stigma and psychological harm (Hoffman 2021). Hoffman notes that AI predictions regarding future medical conditions, including cognitive decline, substance abuse, and even suicide can cause both discrimination and psychological harm for individuals who are not offered counseling to manage the impacts of such findings (Hoffman 2021). Although the GDPR offers more protection to individuals in the European Union than HIPAA offers to Americans, the cross-border capabilities of AI require international regulations to protect personal data used by AI developers (Gerke et al. 2020).

4.5 Legal Consideration: Liability

Though AI is currently capable of independently performing tasks, like diagnosis, that fall squarely within the practice of medicine, a clear legal framework to directly address legal liability for patient injuries caused by AI-based medical decision making does not exist (Lupton 2018). The introduction of AI into clinical decision-making upsets the traditional notions of negligence by asking questions like: Can a computer be unreasonable? (Chung and Zink 2018). The European Commission (EC) has recently introduced proposed Directives to govern liability for AI-caused harm generally (European Commission 2022a (PLD); European Commission 2022b (AILD)). However, these proposed Directives still do not provide a clear liability framework for medical technologies like the Autonomous AI Physician in cases where the AI’s algorithmic medical decision making is designed to be unpredictable and opaque and cannot be sufficiently connected to either (1) a defect in the AI’s creation or (2) human fault (or negligence) as judged under existing law (Duffourc and Gerke 2023).

Currently, there are several existing legal frameworks within which courts might assign liability for injuries caused by an Autonomous AI Physician, including strict liability, enterprise liability, vicarious liability, negligence, and no-fault liability. Some legal scholars question the extent to which AI-inflicted damages can be compensated by machines, which have no financial assets (Allen 2019). Chung, who advocates for legal personhood for AI, argues that risks can be assessed and insured to compensate injured patients within the existing medical malpractice and products liability frameworks (Chung and Zink 2018). To some extent, a negligence-based regime can deter bad behavior by only punishing actions that are found to fall below a medically acceptable standard of care. On the other hand, a strict or no-fault liability regime can force the industries responsible for creating and employing AI in healthcare to absorb the risk of injury caused by that technology. Applying vicarious liability or corporate negligence law can shift liability to the institutions that “hire” AI under causes of action like negligent hiring or negligent credentialing (Gerke et al. 2020).

The answer to legal liability for autonomous AI probably lies in the combination of several existing legal approaches depending on the cause of the injury. The uncertainty surrounding legal liability for autonomous AI in healthcare will likely inhibit the uptake of emerging AI technologies, which, if sufficiently regulated, can improve patient care (Lupton 2018; European Commission 2022b (AILD)). As a result, legal scholars in the U.S. and Europe call upon lawmakers to provide clarity regarding liability for medical injuries caused by AI (Lupton 2018; Gerke et al. 2020).

5 Regulating the Autonomous AI Physician

Proper regulation of the Autonomous AI Physician through a careful combination of governmental, industry, and legal rules and regulations must address the ethical and legal concerns identified in order to promote the successful integration of safe, reliable, and fair autonomous AI in medicine.Footnote 11 To guide the development of AI in medicine, all stakeholders must collaborate to develop both ethical norms to govern the creation, implementation, and maintenance of AI as well as legal and regulatory mechanisms to ensure accountability for ethical violations and responsibility for injuries caused by the Autonomous AI Physician.Footnote 12

Robust industry and governmental guidance and regulations that aim to provide transparency, safety, reliability, fairness, and privacy are the first line of defense for patients of the Autonomous AI Physician. However, it is inevitable that patients will incur damages because of autonomous AI integration into healthcare. When damages manifest, the legal system must ensure accountability and compensation for injured patients. Because the Autonomous AI Physician is both algorithm and doctor, it should be regulated under regimes that both ensure ethical development and implementation of the software and hold it accountable as a self-learning autonomous decision maker. Since autonomous AI is “educated” and “trained” by software developers and engineers who write algorithms, regulatory bodies that test, approve, and oversee the quality of the AI software and its development process are akin to medical boards that test, license, and oversee the practice of physicians. On the other hand, liability regimes that govern damages caused by products are generally not suited to encompass liability for damages caused by the Autonomous AI Physician. Instead, a combination of medical negligence, organizational negligence, vicarious liability, and enterprise liability is better equipped to handle patient damages caused by autonomous AI decisions, with products liability governing the small portion of cases that involve damage caused by the AI’s design and physical components.

5.1 Healthcare Industry Regulation

Medical experts can help ensure the ethical and safe development of autonomous AI in healthcare. Some medical professional and regulatory organizations have already begun to tackle this challenge. In the U.S., the American Medical Association seeks to “[p]romote the development of thoughtfully designed, high-quality, clinically validated health care AI,” which includes AI conformity with best practices, transparency, reproducibility, fairness, privacy, and security (American Medical Association 2018). In the U.K., the National Health Service seeks to prevent unintended harm caused by data-driven technology in healthcare, including AI, by providing a framework for AI developers that addresses, “issues such as transparency, accountability, safety, efficacy, explicability, fairness, equity, and bias (Department of Health and Social Care and National Health Service 2021).” The Royal Australian and New Zealand College of Radiologists (RANZCR) drafted AI Standards of Practice to guide the development, regulation, and integration of AI into radiology practice according to similar ethical principles (RANZCR 2020). The Digital Pathology Association has established an AI/ML taskforce that seeks to aid the development of artificial intelligence and machine learning in pathology by providing its members with information and resources regarding, “regulatory insight, best practices, scholarly activity, vendor relationships, and ethics (Digital Pathology Association 2020).”

At the provider level, healthcare organizations can implement several practices to help achieve the safe, ethical, and accountable AI envisioned by these professional societies. First, organizations should establish an institutional review board (IRB) to assess the scientific value, validity, and reliability of medical AI, the risks to patients’ health, autonomy, and privacy, and fairness and accountability associated with using AI to provide patient care (Jackson et al. 2021). Second, organizations should adopt policies, procedures, and protocols that clearly delineate levels of responsibility for ensuring that AI implementation reflects values driving the IRB’s assessments (Chauhan and Gullapalli 2021). These protocols should be continuously reviewed and updated to “reflect the current state of knowledge in healthcare practices (Chauhan and Gullapalli 2021).” One practical way to incorporate ethical values surrounding medical AI is to write them into transparent contracts with AI developers and vendors, which can include provisions regarding data quality, privacy, and sharing as well as mechanisms for oversight and audits of AI performance (Jackson et al. 2021).

Another practical recommendation is the creation of patient-facing Health Information Counselors (HICs) to provide patients with information regarding the use of AI in their healthcare, including AI performance, risks, benefits, and costs (Jorstad 2020). HICs would be a new class of interdisciplinary healthcare professionals who are trained to understand the technological, analytical, and medical capacities of autonomous AI as well as the clinical and financial impacts of an individual patient’s care (Jorstad 2020). Jorstad cautions that while HICs “might prove an invaluable resource as a mediary between patients and medical professionals,” under current liability standards, physicians must still understand and explain the risks of medical AI necessary to obtain informed consent (Jorstad 2020). As such, practical implementation of HICs will “require broader structural and policy changes (Jorstad 2020).”

If the healthcare industry takes a proactive role in implementing ethical AI in patient care, it can also guide the development of AI regulatory regimes to prevent misregulation, which could act as a barrier to the continued uptake of future AI technology in healthcare, including the Autonomous AI Physician (See Jorstad 2020).

5.2 Government Regulation

Governmental regulation of AI falls broadly under two spheres: (1) safety and (2) data privacy and security.

5.2.1 Safety Regulation

Government regulation of the Autonomous AI Physician to ensure its safety is a complex endeavor. The self-learning capability of autonomous AI that makes it a valuable asset to healthcare delivery is also the feature that makes it difficult to regulate. Current government regulatory systems were designed for static medical devices and products, not the ever-changing deep learning Autonomous AI Physician (Jorstad 2020). Additionally, as Jorstad points out, AI does not have the historical benefit of proving itself through decades of peer review, scientific research, and clinical trials, which underlie traditional government regulation in the health sector (Jorstad 2020). Instead, “deep learning has turned the scientific process on its end,” as Ahmad et al. explain, by using data to generate, rather than prove, hypotheses (Ahmad et al. 2021). As a result, regulatory bodies need to develop a more dynamic approach to pre-market authorization and post-market monitoring to ensure ethical and responsible adoption of autonomous AI in the healthcare industry. Lawry et al. propose a regulatory regime that includes: “systematic evaluation of the quality and suitability of the data and models used to train AI-driven systems; adequate explanation of the system operation including disclosure of potential limitations or inadequacies in the training data; medical specialist involvement in the design and operation process; evaluation of the role of medical professional input and control in the deployment of the systems; and a robust feedback mechanism from users to developers (Lawry et al. 2018).”

Regulating “moving target” AI that involves self-learning algorithms that continue to change after being placed in the healthcare market requires an approach focused on the quality of the development process pre-market and continued performance monitoring post-market (Homeyer et al. 2021). In the U.S., the Food and Drug Administration has already introduced a pilot certification program to streamline approval for software as a medical device (SaMD) (U.S. Food & Drug Administration 2021). The pilot program uses a “Total Product Lifecycle” approach, which consists of pre-market evaluation of companies that develop AI as well as continuous post-market product performance oversight of SaMD (U.S. Food & Drug Administration 2021). Under the program, a company can achieve “precertified status” if it can, “establish trust that they have a culture of quality and organizational excellence such that they can develop high quality SaMD products, leverages transparency of organizational excellence and product performance across the entire lifecycle of SaMD, utilizes a tailored streamlined premarket review, and leverages unique postmarket opportunities available in software to verify the continued safety, effectiveness, and performance of SaMD in the real-world (U.S. Food & Drug Administration 2021).”
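A minimal sketch of what continuous post-market performance monitoring could look like appears below. It assumes a hypothetical monitor that compares the AI’s agreement with confirmed outcomes over a rolling window against the accuracy established at authorization; the class, thresholds, and window size are illustrative assumptions, not a regulatory requirement.

```python
# Hypothetical sketch of post-market performance monitoring: compare the
# model's agreement with confirmed outcomes over a rolling window against the
# accuracy claimed at authorization, and flag apparent drift for human review.
from collections import deque

class PostMarketMonitor:
    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy   # accuracy established pre-market
        self.window = deque(maxlen=window)  # most recent confirmed cases
        self.tolerance = tolerance          # allowed drop before alerting

    def record(self, ai_decision, confirmed_outcome):
        # Store whether the AI's decision matched the later-confirmed outcome.
        self.window.append(ai_decision == confirmed_outcome)

    def check(self):
        if len(self.window) < self.window.maxlen:
            return None  # not enough post-market data yet
        observed = sum(self.window) / len(self.window)
        if observed < self.baseline - self.tolerance:
            return f"ALERT: observed accuracy {observed:.2f} below baseline {self.baseline:.2f}"
        return f"OK: observed accuracy {observed:.2f}"

# In practice, each recorded case would come from follow-up or expert review.
monitor = PostMarketMonitor(baseline_accuracy=0.93)
```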

In Europe, the EC’s proposed AI ActFootnote 13 attempts to provide uniform governance of AI to ensure, “a high level of protection of health, safety and fundamental rights,” and “free movement of AI-based goods and services cross-border (European Commission 2021).” The proposal classifies AI used in health care as a high-risk medical device that must comply with existing regulations, for example, the Medical Device Regulation (MDR) and the Regulation on in vitro diagnostic medical devices (IVDR), as well as the AI-specific requirements contained in the proposal (European Commission 2021). The IVDR controls the certification process for AI in pathology and already requires an assessment of technical development, performance, and a post-market surveillance plan (European Commission 2017). The new proposal imposes additional “requirements of high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness (European Commission 2021).” While the proposed AI Act is designed to establish public trust in technology, some experts view the new proposal as overregulation that will require duplicate certifications under various EU regulations and stifle innovation in the market (Taylor 2021). Indeed, striking the delicate balance between protecting patients and encouraging innovation is essential to the successful development and implementation of the Autonomous AI Physician.

5.2.2 Data Regulation

Regulators must also attempt to protect personal data used to develop and train AI. The framework for regulating health data in the U.S. is insufficient to address the ethical and legal concerns regarding data privacy and security raised by the Autonomous AI physician. On the other hand, Europe has taken a more proactive approach to regulating big data, which includes protecting health data of EU citizens from exploitation by the tech industry.

American legal scholars have highlighted HIPAA’s inability to adequately protect individual health data in the United States (Gerke et al. 2020; Hoffman 2021). HIPAA’s failure to regulate data sharing by entities other than healthcare providers and insurers is the law’s most glaring weakness when it comes to data privacy. For example, technology companies are free to share individual health data for research or commercial purposes because they are not considered “covered entities” under the law (Gerke et al. 2020). HIPAA also fails to regulate user-generated health data or data that can be used to make inferences about health, leaving social media posts concerning health conditions or internet purchasing data up for grabs by tech companies for medical AI research and development (Gerke et al. 2020). Finally, de-identified data that would otherwise be protected under HIPAA’s privacy rules can be shared by covered entities for research and commercial purposes. However, de-identification can be insufficient to protect patients’ privacy when data can be re-identified by cross-reference to other available databases (Gerke et al. 2020). Although states are free to impose stricter privacy protections than HIPAA requires for personalized health information, the failure to enact a comprehensive data protection framework at the federal level may both stifle the development of innovative AI health technologies and compromise individuals’ privacy rights (Gerke et al. 2020). Some legal experts call for expansion of HIPAA and the Americans with Disabilities Act to protect data and prevent discrimination based on future health conditions (See Hoffman and Podgurski 2007; Hoffman 2017).

The GDPR in Europe offers a higher level of protection for personal data concerning European Union data subjects. The regulation’s general prohibition on sharing genetic data, biometric data, and data concerning health applies to any entity that handles personal data, including natural persons and business entities (The European Parliament and the Council of the European Union 2016, Sect. 4). The GDPR also prevents the processing of data for “automated individual decision making,” which can have legal or other significant consequences for the data subject, absent necessity for entrance into a legal contract, authorization by the member state and measures to safeguard individual freedoms and privacy interests, or explicit consent (The European Parliament and the Council of the European Union 2016, Art. 22). Finally, the GDPR’s required impact assessments, including risk assessments and anticipated risk mitigation and data protection efforts, apply to the introduction of new AI-based technology in clinical health settings (Gerke et al. 2020).

5.3 Liability for Injuries

Current legal regimes for medical liability are not designed for the Autonomous AI Physician’s expression of both software and human qualities. Current theories of liability for medical injury are either “human-centric” or “machine-centric,” and fail to provide a workable framework for liability of a hybrid entity (Chung and Zink 2018). Nevertheless, we agree with Griffin that, “[c]urrent legal frameworks are likely to provide the foundation of liability analysis of AI systems with some twists specific to AI (Griffin 2021).” Identifying the proper modification of legal frameworks prior to “a med-mal claim involving AI misdiagnosis arriving in court” is crucial to prevent courts from either banning the Autonomous AI Physician or creating, “such significant restrictions that AI’s functionality becomes more trouble to implement than it is worth (Jorstad 2020).”

To date, no courts have directly addressed liability for injury caused by autonomous medical AI (Jackson et al. 2021). Liability for damages caused by the Autonomous AI Physician will likely be distributed among AI manufacturers and developers, individual healthcare providers, and healthcare organizations (Schweikart 2021). Jorstad predicts that healthcare organizations will primarily bear the costs of injuries caused by their employment of an Autonomous AI Physician (Jorstad 2020). Maliha et al. believe that, under the current liability scheme in the U.S., physicians who rely on AI decision making will be the primary targets, but question whether it is fair to hold providers accountable for unpredictable autonomous AI decisions that are made using “black-box” deep learning algorithms (Maliha et al. 2021). Of course, the continuous self-learning features of the Autonomous AI Physician are precisely what makes it valuable in clinical practice (Maliha et al. 2021).

Ultimately, the question of liability assignment is answered by asking: who has control over the particular function(s) of the Autonomous AI Physician that lead to a patient injury (Schweikart 2021)? Control can manifest in several ways. First, AI developers and manufacturers have control over the physical components of the Autonomous AI Physician as well as control over its “education and training” through the algorithmic development of the AI. Second, healthcare organizations exhibit control over “hiring” and organizational oversight through the selection and implementation of AI in clinical practice. Third, individual healthcare providers have limited control over AI recommendations for clinical action through human oversight and quality control. This, of course, leaves a gap in control for the Autonomous AI Physician’s independent medical decision-making, which can be opaque and obscured by the “black-box” problem.Footnote 14

Jorstad opines that given the “limited to nonexistent control physicians, hospitals, or even AI manufacturers exert over the machine’s diagnosing, it may be unreasonable to hold them liable when error surfaces (Jorstad 2020).” Schweikart (2021) agrees that “black-box” AI decision-making makes it nearly impossible to fairly assign liability under tort law. The logical conclusion is that the Autonomous AI Physician itself controls its own decisions, but this presents a problem in the current liability framework because AI does not have legal personhood and is therefore incapable of being assigned liability (Chung and Zink 2018). Chung and Zink solve this problem by suggesting the creation of limited legal personhood for medical AI, which would allow the Autonomous AI Physician to be held legally responsible for harms caused by its independent medical decisions (Chung and Zink 2018). Once the Autonomous AI Physician is assigned limited legal personhood and its risks can be insured like those of an individual healthcare provider, the existing medical liability system can effectively compensate patients for AI-caused injuries under the control paradigm outlined above, using a combination of products liability, organizational liability, vicarious liability, enterprise liability, and medical malpractice liability. Additionally, potentially liable entities can choose to allocate liability among themselves through contractual agreement. Finally, in countries that opt for no-fault liability regimes, special adjudication systems can compensate patients for AI-induced injuries; however, for negligence-based regimes, such a broad structural change is probably not feasible.

5.3.1 Products Liability

Products liability operates to hold manufacturers liable for inherently dangerous products by imposing a strict liability standard for injuries caused by defective products and for failures to warn consumers of those defects (Schweikart 2021). Products liability for damages caused by the Autonomous AI Physician is difficult to prove absent evidence of a human-driven design element. While it is true that manufacturers are in the best position to explain “black-box” technology of autonomous AI (Jorstad 2020), the AI’s decision cannot always be logically traced and is generally not foreseeable, even by its creators (Schweikart 2021). As a result, it would be difficult for patients to prove that the AI was defective and that an alternative design was available and feasible, as required under a products liability cause of action (Maliha et al. 2021). Additionally, the learned intermediary doctrine holds healthcare providers, rather than manufacturers, responsible for informing patients about risks disclosed to providers (Schweikart 2021). Jorstad notes that even holding providers liable for failure to disclose the unforeseeable risks associated with autonomous AI medical decision-making is “difficult to rationalize (Jorstad 2020).” Finally, using a strict products liability regime for the Autonomous AI Physician can hamper the development of beneficial AI technology (Jorstad 2020).

The imposition of binding regulations on the pre-market development and post-market monitoring of AI should provide limited immunity from liability for manufacturers who receive the proper authorizations.Footnote 15 Still, AI manufacturers should be held strictly liable for defects concerning data input, original software code, output display, or mechanical failure (Maliha et al. 2021).Footnote 16

5.3.2 Organizational, Vicarious, and Enterprise Liability

Organizational liability can include direct liability for a healthcare organization that fails to exercise due care in selecting and retaining competent physicians, maintaining appropriate facilities and equipment, training and supervising employees, and implementing appropriate protocols and procedures.Footnote 17 These organizational duties can require comprehensive vetting of the Autonomous AI Physician’s capabilities prior to using it in clinical practice (Maliha et al. 2021). Once implemented, the organization can also be held liable for failing to continually monitor the AI’s quality and train the AI as needed to keep it up to date. Maliha et al. recommend administration of “stress tests” to test the AI’s ability to produce reliable and accurate decisions in response to difficult situations not considered by the AI’s developers (Maliha et al. 2021). Additionally, organizations should be required to utilize the rich learning opportunities made available by the Autonomous AI Physician’s near-miss errors—errors that do not cause damage—to retrain and update the Autonomous AI Physicians to prevent error repetition.Footnote 18

Healthcare organizations can also be held responsible for negligence of their employees under the vicarious liability doctrine (Schweikart 2021). As a result, if the Autonomous AI Physician is considered an agent or employee of the healthcare organization, damages caused by the autonomous AI decision-making could be covered by the organization. Such coverage would operate like a hospital’s vicarious liability for its nurses and staff doctors. Of course, the healthcare organization would have to maintain sufficient insurance coverage for acts of the Autonomous AI Physician.

Enterprise liability can hold all entities engaged “in pursuit of a common aim” jointly and severally liable for damages caused by that common enterprise (Schweikart 2021). This arrangement could allow for cost sharing between AI developers and healthcare providers and organizations who implement AI in clinical practice (Jorstad 2020). Allen believes that enterprise liability is a strong option for spreading risk associated with the “unpreventable calculable harm” that will occur as a result of autonomous AI medical decision making (Allen 2019).

5.3.3 Medical Malpractice

Assigning limited legal personhood is necessary to hold the Autonomous AI Physician accountable for medical malpractice. Chung emphasizes that personhood for AI is a legal fiction to be distinguished from the colloquial understanding of what it means to be a person (Chung and Zink 2018). Giving legal rights and responsibilities to a non-human is not a novel concept. As Schweikart points out, both ships and corporations are assigned legal personhood (Schweikart 2021). Chung and Zink argue that IBM’s former AI, Watson, could have been given limited legal personhood considering its ability to work as an integral member of a patient care team capable of providing individualized interpretation and analysis of patients’ medical conditions and giving treatment recommendations (Chung and Zink 2018). They compared Watson to a medical student with specialized education and training, who is capable of making independent medical decisions but requires a level of supervision and oversight (Chung 2017). Based on this comparison, the framework for insuring risks and evaluating liability for damages caused by medical AI is already in place, eliminating the need for establishing new insurance and liability systems, an unlikely endeavor (Chung and Zink 2018). Chung and Zink further point out that limited legal personhood for AI is flexible enough to encompass future smarter and more independent AI (Chung and Zink 2018).

Of course, allowing the Autonomous AI Physician to be held liable for its own medical decision making under the current medical malpractice regime requires some discussion of the applicable standard of care. The liability regime already applies heightened standards of care to specialists with extensive training in a specific medical field (Jorstad 2020). The Autonomous AI Physician is already capable of exceeding humans’ ability to review and process big data and has, in some instances, even surpassed the diagnostic abilities of human clinicians (Jorstad 2020). On the other hand, it lacks the ability to physically examine patients with human senses, synthesize information across various knowledge sectors, prescribe medication, or order tests. As a result, the Autonomous AI Physician would need to be considered a unique medical specialist that requires unique corresponding standards of care.

Jorstad provides some options for determining when the Autonomous AI Physician breaches the applicable standard of care (Jorstad 2020). The first option is to use the “nearest neighbor” method, which involves looking at the AI’s diagnostic history for comparable cases to compute the AI’s gross accuracy rate (Jorstad 2020). This case-based analysis should provide some measurement by which the alleged error can be compared and judged under the reasonableness standard (Jorstad 2020). The second option is “AI cross-testing,” which involves running the data from an injured patient’s case through other AI algorithms to discover whether the machines arrive at comparable results (Jorstad 2020). Two additional options involve human testimony of AI programmers or human medical experts to independently evaluate the AI’s decisions and opine regarding whether the AI’s processes and results, respectively, are reasonable (Jorstad 2020). In reality, attorneys will likely try some combination of these methods, and as a result, the standards of care will develop organically over time as the Autonomous AI Physician becomes a common litigant in medical malpractice cases. Alternatively, professional and industry organizations can attempt to proactively establish standards of care by drafting AI practice guidelines. Still, courts will likely view non-compliance with such guidelines as evidence of negligence rather than being dispositive of the issue.
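As a rough illustration of the “nearest neighbor” option, the sketch below (synthetic case data, with a hypothetical numeric encoding of case features) retrieves the AI’s most similar past cases and computes its accuracy on those comparable cases, which could serve as one benchmark in a reasonableness analysis.

```python
# Illustrative sketch (synthetic data) of the "nearest neighbor" idea:
# retrieve the AI's most similar past cases and compute its accuracy on those
# comparable cases as one benchmark for judging the disputed decision.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
past_case_features = rng.normal(size=(5_000, 10))  # e.g., encoded findings
past_ai_correct = rng.binomial(1, 0.9, 5_000)      # 1 = AI matched ground truth

# Build an index of the AI's diagnostic history.
index = NearestNeighbors(n_neighbors=100).fit(past_case_features)

disputed_case = rng.normal(size=(1, 10))            # the injured patient's case
_, neighbor_ids = index.kneighbors(disputed_case)
local_accuracy = past_ai_correct[neighbor_ids[0]].mean()
print(f"Accuracy on the 100 most comparable past cases: {local_accuracy:.2%}")
```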

Individual healthcare providers can still be liable under the current medical malpractice regime for failure to properly supervise or oversee autonomous AI. Such causes of action are already recognized in relation to subordinate medical providers. One area of human medical liability that requires special attention is informed consent. Although the Autonomous AI Physician can render independent medical decisions, it should remain within the scope of a human provider’s responsibility to inform patients of the risks and benefits of the AI’s medical decisions. Although, as discussed above, the law could change to allow delegation of this duty to HICs, under the current law, physicians must consult with patients to provide information necessary to obtain informed consent. At a minimum, this information should include notice that a medical decision was generated by an Autonomous AI Physician, the right to a second opinion by a human clinician when feasible, and disclosure of possible uses of health information for future AI training (Jorstad 2020).

5.3.4 Contractual Assignment of Liability

Even with a legal framework for assigning liability following a patient injury, healthcare providers and AI manufacturers can still contractually divide or assign liability and insurance obligations for the Autonomous AI Physician. Jorstad opines that such agreements are the simplest option for dividing responsibility for AI-induced injuries (Jorstad 2020).

5.3.5 Special Adjudication Systems

Special adjudication systems can provide a no-fault approach to compensation for damages caused by the Autonomous AI physician. This can include compensation from an established fund and/or mandatory binding arbitration to determine damages caused by AI medical decision making (Jorstad 2020). The benefits of no-fault systems include streamlined adjudication and increased access to recovery for those injured by an Autonomous AI Physician (Maliha et al. 2021). Additionally, all stakeholders would share in the costs of risks posed by AI in healthcare delivery by contributing to a common fund (Gerke et al. 2020). While there are some examples of no-fault systems like vaccine injury compensation in the U.S., incorporating medical injuries caused by autonomous AI into those systems may require large structural changes that cannot be easily or quickly developed and implemented. No-fault systems also fail to provide the benefit of deterring sub-standard behavior during a time of rapid development and implementation of new technology.

6 Conclusion

The Autonomous AI Physician is here, and it will only get smarter and faster as DL technology improves at an alarming pace. While AI holds great promise for improving healthcare access and quality, patient care cannot and should not be left exclusively to machines. All stakeholders in the development and use of the Autonomous AI Physician have an obligation to ensure that AI is implemented in a safe and responsible way, including through regulatory and legal mechanisms that provide the requisite levels of safety, reliability, transparency, fairness, and accountability.Footnote 19