Introduction

AI applications in musculoskeletal (MSK) radiology abound [1]. They may play key clinical decision support (CDS) roles in modulating radiation dose, protocoling, and scheduling, in addition to acquiring and interpreting images [2]. Indeed, some algorithms have demonstrated performance comparable or superior to that of human physicians in detecting proximal humeral fractures and vertebral body compression fractures [3, 4], identifying osteoarthritis of the hip and knee [5, 6], and estimating bone age [7], among other tasks.

In evaluating these products, the U.S. Food and Drug Administration (FDA) is confronted with the limitations of existing regulatory paradigms [8]. Unlike conventional medical devices, AI programs demand large volumes of exogenous training data, are potentially capable of self-modification, and, in the case of so-called “black-box” algorithms, may lack the ability to explain their findings [9]. These features complicate disclosure of information typically required by the FDA and do not fit neatly within traditional categories of regulatory review. The legal ramifications of clinical AI applications are similarly unresolved. This is particularly so with medical professional liability, a significant concern among practicing radiologists given its attendant litigation, productivity, and financial costs [10]. This paper seeks to outline the dimensions of legal risk stemming from the use of AI products in an MSK radiology context. It proceeds by introducing the relevant theories of legal liability through illustrative examples.

The liability of radiologists

General negligence

Suppose an MSK radiologist employs an AI tool for rapid fracture detection. The program is applied to a pelvic radiograph of an elderly patient status post fall; it returns a false negative reading. The radiologist relies upon the program to affirm normal findings in their impression. The patient is subsequently discharged, develops avascular necrosis of the femoral head, and eventually requires arthroplasty. Is this radiologist liable for malpractice?

The theory of negligence has traditionally governed medical malpractice disputes. It demands that plaintiffs demonstrate (1) the existence of a duty of care, (2) a breach of that duty, and (3) that damages were sustained (4) through a causal relationship between the breach and those damages. As will be described at length in this paper, the sheer novelty of clinical AI, the lack of related case law, and the multiplicity of parties involved in its use leave standard tort doctrine hard-pressed to offer adequate solutions [11]. In evaluating (1) duty and (2) breach, courts look to standards of care articulated by professional societies, community practices (under what is known as the “locality rule”), and departmental protocols [12, 13, 14]. Carefully drafted guidelines, sufficient user training, and appropriate integration of AI into imaging work patterns are thus critical. Moreover, each of these exists within a dynamic system and can be expected to change over time; as AI products gain traction in the MSK radiology space, stakeholders will become more familiar with their deployment and establish increasingly sophisticated standards for use.

It would be relatively straightforward for a plaintiff to allege (3) damages from undetected radiological features. The issue of (4) causation is a different matter. Here, the absence of clinical AI-related case law points to a gray zone that courts have not yet considered. The issue cuts both ways: physicians may miss findings as a result of their reliance upon AI, or they may override ultimately accurate AI output with their own erroneous judgments. In the former scenario, the physician would likely face greater exposure to malpractice risk. If it can be demonstrated that a human physician could have identified the finding which the AI failed to detect, a compelling case for causation can be made. Moreover, CDS applications are regarded as implements to aid in diagnosis, rather than substitutes for the diagnosticians themselves. As currently written, federal law requires that “healthcare professionals [be capable of] independently review[ing] the basis for such recommendations that such software presents so that it is not the intent that such health care professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision.” [15] In other words, physicians remain the final arbiters of clinical judgment [16]. This language would preclude the use of black-box algorithms, which display results without explaining their decision-making process [9]. In fact, interrogating the “basis” of their findings may prove technically infeasible [17]. In the latter scenario, the physician’s liability exposure is somewhat weaker. Overriding AI output would not necessarily represent a breach of clinical duty; radiologists can compellingly defend their findings on the basis of their professional judgment. AI algorithms, at least for the time being, have not been integrated into radiology standards of care [18]. Should that change, deviations from an AI readout may indeed prompt liability.

Returning to the radiologist introduced above, an apparently blind deference to AI may suffice to establish causation. They should instead have performed a primary read of the pelvic radiograph and utilized the AI tool in a confirmatory capacity. Where their impression conflicted with the program’s readout, the radiologist would have been better advised to rely upon their own independent judgment, consult with colleagues, and adhere to established protocols in arriving at a final finding.
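Purely as an illustration, the following minimal Python sketch shows how such a reconciliation step might be encoded within a departmental workflow; every name here (Finding, Read, reconcile) is a hypothetical assumption made for the example and does not refer to any existing product, statute, or guideline.

```python
from dataclasses import dataclass
from enum import Enum


class Finding(Enum):
    NORMAL = "normal"
    FRACTURE = "fracture"


@dataclass
class Read:
    source: str       # e.g., "radiologist" or "ai_tool" (illustrative labels)
    finding: Finding
    rationale: str    # documented basis for the impression


def reconcile(primary: Read, ai: Read) -> dict:
    """Hypothetical reconciliation step: the radiologist's primary read governs,
    and any human-AI discrepancy is flagged for consultation and documented."""
    discrepancy = primary.finding != ai.finding
    return {
        "final_finding": primary.finding.value,   # physician remains the final arbiter
        "discrepancy_flagged": discrepancy,       # triggers peer consultation per protocol
        "primary_rationale": primary.rationale,
        "ai_output": ai.finding.value,
    }


# Example: the radiologist suspects a femoral neck fracture; the AI tool reads normal.
primary_read = Read("radiologist", Finding.FRACTURE, "cortical step-off at the femoral neck")
ai_read = Read("ai_tool", Finding.NORMAL, "model score below detection threshold")
print(reconcile(primary_read, ai_read))
```

The point of the structure is simply that the physician’s read remains determinative and that any discrepancy is documented and escalated rather than silently resolved.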

Informed consent

Now imagine that the MSK radiologist did not reveal their use of AI tools, either in their formal report or to the patient prior to radiography. Would this compound liability? Informed consent raises a number of thorny issues relating to (1) the duty of care and (2) its possible breach. Two questions follow: should the use of AI compel disclosure to the patient and, if so, what exactly should be disclosed? The scope of each is unclear; courts and state legislatures have yet to articulate an AI-specific standard for informed consent, a matter which remains contested in the literature [19]. One can appreciate the significant distance between merely announcing that AI tools were employed in the imaging process and disclosing a comprehensive listing of false positive and negative rates, risks, potential issues with data collection and privacy, financial relationships among the AI vendor, hospital, and radiology group, and so on [20].
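To make the gap between the two disclosure postures concrete, the following is a minimal, hypothetical Python sketch of what each might record; the field names are assumptions drawn only from the items listed above and do not reflect any statutory or professional requirement.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class MinimalDisclosure:
    # Bare announcement that AI tools were employed in the imaging process.
    ai_used: bool = True


@dataclass
class ComprehensiveDisclosure(MinimalDisclosure):
    # Items a fuller consent process might additionally cover (all illustrative).
    false_positive_rate: Optional[float] = None
    false_negative_rate: Optional[float] = None
    known_risks: list = field(default_factory=list)
    data_privacy_notes: Optional[str] = None
    vendor_financial_relationships: list = field(default_factory=list)


# Illustrative only: the values below are placeholders, not real performance data.
consent_record = ComprehensiveDisclosure(
    false_negative_rate=0.05,
    known_risks=["missed fracture when the tool is used in a confirmatory capacity"],
    vendor_financial_relationships=["licensing agreement between vendor and radiology group"],
)
print(consent_record)
```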

Currently, these concerns play a limited role in existing AI applications, which tend to be CDS tools functioning as adjuncts to human radiologists. However, this would change with the introduction of autonomously functioning algorithms. The adequacy of informed consent will likely hinge on the degree to which AI substitutes for, rather than merely augments, clinical decision-making by a human radiologist. States adopt one of two doctrinal approaches to informed consent, each of which will be discussed in turn: the provider-based or the patient-based standard [21].

The first mandates release of “those disclosures which a reasonable medical practitioner would make under the same or similar circumstances,” thereby tying the substance of consent procedures to a professional baseline [22]. Although an AI-specific standard of care does not yet exist, a more coherent radiology standard may develop as AI tools proliferate across imaging departments. This process could be further aided by professional bodies, such as the American College of Radiology (ACR) or the Society of Skeletal Radiology (SSR), which can articulate a specialty-wide protocol for obtaining informed consent. The second relates to the patient’s “right to self-decision,” for which “the scope of the physician’s communications to the patient…must be measured by the patient’s need [for] information material to the decision,” with “materiality” arising “when a reasonable person, in what the physician knows or should know to be the patient’s position, would be likely to attach significance to the risk or cluster of risks in deciding whether or not to forego the proposed therapy.” [23] As opposed to the provider-based approach, the patient-based standard asks the radiologist to place themselves in their patient’s shoes, evaluate the patient’s decision-making rubric, and inquire after those facts which would encourage or dissuade them from proceeding. In practice, the distinction between the two is hazy. The crux of the matter is that radiologists deploying AI should be cognizant of their peers’ disclosures and anticipate the sorts of information patients would find important. The coming debate may track the controversy surrounding informed consent for contrast administration: during the 1980s, jurisdictions were divided regarding its necessity and scope; as it was gradually incorporated into the general standard of care, such consent processes tapered off [24].

The liability of radiology groups

Consider a radiologist performing a CT-guided biopsy for spondylodiscitis. They rely upon an AI tool which provides depth and orientation information for needle placement; on this occasion, its readout is incorrect and the patient suffers damage to adjacent nerve roots. The radiologist is employed, supervised, and directed by a multispecialty radiology group, which routinely utilizes this AI program. Is the practice as a whole exposed to liability?

Rather than target individual physicians, injured plaintiffs may opt to sue hospitals, outpatient clinics, and radiology practices, thereby indirectly exposing peer radiologists to liability. As a litigation strategy, this could permit greater recovery of damages or encourage early settlement. Healthcare systems would face an altogether different theory from the negligence framework discussed above: that of vicarious liability. It arises under the doctrine of respondeat superior, which dictates that the faults of subordinates ascend the organizational ladder to attach to their principals. Vicarious liability operates on a “strict” basis, meaning that only a showing of harm is necessary to trigger liability. In this sense, it functions as a legal “circuit breaker” that actuates when individuals sustain injury as a result of an agent’s actions. This underscores the need for carefully drafted practice and departmental protocols.

The liability of radiologist-developers

MSK radiologists may increasingly find themselves involved not only in the use, but also in the development, of AI products. Suppose one founds a startup marketing a novel AI-enhanced bone imaging tool. Over the course of its deployment, the tool mislabels an osteosarcoma as a benign osteochondroma, and biopsy is not suggested for the patient in question. This patient subsequently develops extensive disease which is not detected in a timely manner. In addition to the interpreting radiologist, is the radiologist-developer subject to suit?

The nature of their liability exposure remains unclear. Some scholars have suggested that product liability theories would be most appropriate, despite the fact that courts have historically not regarded software as “products” per se [25, 26]. This may change as courts develop a body of AI jurisprudence, and as AI systems increasingly merge with hardware capable of inflicting bodily harm, self-driving cars and CT scanners among them [27]. Should courts apply theories of product liability to AI technologies, either the negligence or the strict liability principles outlined above would attach.

Under a strict liability framework, courts would analyze characteristics of the AI product itself, evaluating whether the particular program contained a “manufacturing defect” that consequently led to the plaintiff’s injury. If the plaintiff is able to demonstrate the application’s deficiency, they would prevail in their case. However, this may prove difficult in practice. The inherent sophistication of AI products, which would have undergone rigorous FDA review as a prerequisite to market entry, may stymie attempts to prove technical inadequacy [28, 29].

By contrast, negligence principles consider features of the enterprise rather than of the product. This does not simplify matters; it instead invites a host of further questions. To whom, if anyone, would the radiologist-developer owe a “duty of care”? [30] Without a coherent “AI industry” to speak of, which professional standards would apply? Do startups face different standards of care than blue-chip tech conglomerates? It remains to be seen how courts will answer these questions, apportion fault, and analyze chains of causation.

Conclusion

This piece introduces the basic mechanics of liability related to AI products in MSK radiology. Radiologists, their practices, and AI enterprises face differing sets of risk exposure which dynamically interact. Given the paucity of legal precedent, all parties confront an uncertain liability landscape. There is, however, a silver lining: AI tools are unlikely to supplant human radiologists any time soon. In this, MSK radiologists find two unlikely allies: federal law and the tort system. Both emphasize the primacy of physician decision-making and establish a subordinate, adjunctive niche for AI products. Looking forward, imaging departments should articulate clear protocols for AI use, including procedures to follow in the event of human-AI discrepancy. They can be aided by ACR- and SSR-validated guidelines, training programs, and model practices for deploying AI technologies.