Use of the “Black box” in clinical practice
In radiology, the clinical use of computer-aided diagnosis (CAD) is well established; multiple studies have shown the advantages, limitations, and risks of image interpretation in a CAD paradigm, whether as first reader, second reader, or in concurrent reading [12,13,14,15]. In all cases, the final responsibility lies with the radiologist, and the debate is still open on whether to include CAD results in the radiology report and whether to inform patients that the diagnosis was supported by automated software.
However, a clear distinction must be drawn between CAD and AI: CAD is designed to perform specific tasks on the basis of a training set, whereas the power and promise of AI lie in its ability to exploit useful features that are not currently known or are beyond the limits of human detection. In practice, AI extracts new features from images that radiologists, as human beings, cannot detect or quantify. A typical example is radiomics, where texture analysis can generate hundreds of features that a human being could neither compute nor interpret.
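To make concrete why such features lie beyond visual assessment, the following is a minimal, illustrative sketch of one classical texture measure used in radiomics: a gray-level co-occurrence matrix (GLCM) and three features derived from it. The function name, quantization scheme, and synthetic image are assumptions for illustration only; a real radiomics pipeline would extract hundreds of such features across many distances, angles, and filters.

```python
import numpy as np

def glcm_features(image, levels=8):
    """Compute a few texture features from a gray-level co-occurrence
    matrix (horizontal neighbors at distance 1) -- a tiny subset of the
    hundreds of features a full radiomics pipeline would extract."""
    img = np.asarray(image, dtype=float)
    # Quantize intensities into a small number of gray levels.
    q = np.minimum((img / (img.max() + 1e-9) * levels).astype(int), levels - 1)
    # Count co-occurrences of gray-level pairs in horizontally adjacent pixels.
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()  # normalize to a joint probability distribution
    i, j = np.indices(p.shape)
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),      # local intensity variation
        "energy": float((p ** 2).sum()),                  # textural uniformity
        "homogeneity": float((p / (1 + np.abs(i - j))).sum()),
    }

# Synthetic stand-in for an image region of interest.
rng = np.random.default_rng(0)
feats = glcm_features(rng.integers(0, 256, size=(64, 64)))
```

None of these quantities can be estimated by eye, which is precisely the point: the radiologist is asked to validate conclusions drawn from measurements no human reader can reproduce.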
In clinical practice, radiologists will be asked to monitor AI system outputs and validate AI interpretations; they therefore risk carrying the ultimate responsibility for validating what they cannot understand.
Key point 4
The radiologist risks bearing responsibility for validating the unknown (the black box).
The recent North American and European multi-society paper on the “Ethics of artificial intelligence in radiology” reports the risk of automation bias.
Automation bias is the tendency of humans to favor machine-generated decisions, ignoring contrary data or conflicting human judgments. It leads to errors of omission and commission. Omission errors occur when a human fails to notice, or disregards, the failure of the AI tool; high decision flow rates, in which the radiologist reads examinations rapidly, predispose to such errors. This is compounded when AI decisions are based on features too subtle for humans to detect.
Commission errors occur when the radiologist erroneously accepts or implements a machine’s decision despite other evidence to the contrary.
The defining features of the act performed by the radiologist, as a professional, are autonomy in deciding how the service is provided and which technical tools are used, and the personal nature of the service. Clearly, these peculiarities must be harmonized with the automation brought by AI.
Key point 5
The radiologist’s decision may be biased by AI automation.
Shortage of radiologists and inappropriate training
Much discussion has been raised in the media about the introduction of artificial intelligence into radiological practice, suggesting that radiologists could become redundant or even disappear as a specialty [19,20,21]. This could demotivate young doctors from pursuing a career in radiology, with a real and imminent risk of not having enough radiology specialists.
An additional risk for young doctors in training is the reduction in training opportunities. If the use of artificial intelligence systems in clinical practice can accelerate interpretation times, it can also reduce the number of examinations and images that a radiologist in training will interpret. In other words, the more tasks artificial intelligence performs automatically, the fewer the radiologist in training performs.
The paradox of this situation would be the need to implement ever more AI tasks to compensate for the progressive shortage of radiologists.
Key point 6
Risk of a paradox: increasing AI tools to compensate for the shortage of radiologists.
Radiologist–patient relationship and data integrity
A related problem raised by the introduction of AI, and one of no less importance, is transparency toward patients: they must be informed that the diagnosis was obtained with the help of AI.
All this raises issues concerning the patient’s informed consent and the adoption of quality measures for AI systems by the patient’s care providers (public or private institutions).
Quality measures should address software robustness and data security, as well as the constant updating of software and hardware to avoid equipment obsolescence. Particular attention should also be paid to image processing, guaranteeing the integrity of the images throughout analysis with neural networks and thus avoiding any modification of the raw data.
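One simple way to verify that raw data survive an analysis pipeline unmodified is a cryptographic checksum taken before and after processing. The sketch below is illustrative only, using a synthetic array in place of scanner output; the function name and image dimensions are assumptions, and a clinical system would integrate such checks into its audit trail rather than inline assertions.

```python
import hashlib
import numpy as np

def fingerprint(pixels: np.ndarray) -> str:
    """SHA-256 digest of the raw pixel buffer, recorded before analysis."""
    return hashlib.sha256(np.ascontiguousarray(pixels).tobytes()).hexdigest()

# Raw image as acquired (synthetic stand-in for scanner output).
raw = np.random.default_rng(1).integers(0, 4096, size=(128, 128), dtype=np.uint16)
before = fingerprint(raw)

# The AI pipeline works on a derived copy; the raw data stay untouched.
processed = raw.astype(float) / raw.max()  # e.g., normalization for a network

after = fingerprint(raw)
# Matching digests confirm the raw data were not altered by the analysis.
```

The same principle underlies integrity fields in imaging archives: any silent modification of the raw pixel data changes the digest and is immediately detectable.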
Key point 7
Need for informed consent and quality measures.