Artificial Intelligence is defined as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” Others use the term more narrowly, referring to scenarios in which a machine mimics cognitive functions of the human brain, such as learning and problem solving. With the advent of computers that are increasingly capable of handling enormous amounts of data with rapid processing, there has been growing interest in using “big data” and “machine learning” to facilitate everything from self-driving cars to modernizing the practice of medicine. There has been much debate about the use of artificial intelligence in medicine, particularly in radiology and image interpretation. Can we create artificial intelligence programs that interpret cardiac nuclear scans in the same manner as a physician, and if so, what role will these programs play in the future of nuclear cardiology?

In the field of nuclear cardiology, a picture is worth a thousand words. Semi-quantitative visual analysis of perfusion and function has been the cornerstone of image interpretation. Quantitative analysis is now an integral part of nuclear imaging, with multiple software algorithms that can automatically segment the left ventricle, assess perfusion at rest and stress against normal databases, and quantify ejection fraction.1,2,3 These programs have been validated, shown to be reproducible, and have diagnostic accuracy similar to visual analysis by expert readers.4,5 Currently, these programs are used as an adjunct to clinical interpretation, given the inherent need for physician supervision to ensure proper contours and alignment and to review for artifacts. Moreover, final interpretation of a cardiac scan requires integration of multiple factors, in particular the correlation among perfusion, function, and artifacts.
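
Although the commercial algorithms differ in their details, the idea behind a defect score derived from a normal-limits database can be sketched in a few lines of code. The Python snippet below is a simplified illustration only; the polar-map sampling, the cutoff, and the severity weighting are assumptions for illustration and do not reproduce any vendor's actual implementation.

```python
import numpy as np

def total_perfusion_deficit(polar_map, normal_mean, normal_sd, z_cutoff=3.0):
    """Toy TPD-style score: percentage of the sampled myocardium, weighted by
    how far each sample falls below a matched normal-limits database.

    polar_map   : 2D array of relative tracer counts (normalized so peak = 100)
    normal_mean : per-sample mean from the normal-limits database
    normal_sd   : per-sample standard deviation from the same database
    """
    z = (normal_mean - polar_map) / normal_sd            # standardized deficit per sample
    severity = np.clip((z - z_cutoff) / 4.0, 0.0, 1.0)   # 0 for normal samples, up to 1 for severe defects
    return 100.0 * severity.mean()                       # severity-weighted % of sampled myocardium
```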

Preliminary data suggest that these quantitative programs may require progressively less physician supervision for proper alignment, and there has been increasing interest in fully automated quantification.6 In this issue of the Journal, Motwani et al. investigated the feasibility of large-scale, fully automated quantitative analysis of SPECT myocardial perfusion imaging to predict acute myocardial infarction (AMI).7 They had several notable findings. First, in a review of almost 6,000 patients, they found that fully automated (fully unsupervised) analysis was indeed feasible, with about 10% of studies flagged as “potential error” by their left ventricular contour quality control (QC) program and requiring visual supervision by an experienced technician.8 Second, after the QC step, they demonstrated that batch processing could be used to quantify stress total perfusion deficit (sTPD) and ischemic total perfusion deficit (iTPD) in these studies, with modest predictive accuracy for future AMI over long-term follow-up. Annualized AMI rates increased in proportion to the magnitude of abnormality, with automated sTPD being a stronger predictor than iTPD. Both sTPD and iTPD predicted AMI better in the short term (1 year) than over the long term (5 years).
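
The workflow described above can be pictured as a QC-gated batch pipeline: studies that pass automated contour QC are quantified without human input, and the remainder are routed to a technologist. The sketch below is purely illustrative; the function names, the result fields, and the simple person-years calculation are assumptions and do not reproduce the authors' statistical methodology.

```python
def batch_quantify(studies, quantify, qc_check):
    """Hypothetical QC-gated batch pipeline: studies failing automated
    contour QC are set aside for technologist review instead of being scored."""
    results, flagged = [], []
    for study in studies:
        if qc_check(study):                      # e.g., left ventricular contour QC passes
            results.append(quantify(study))      # dict with 'sTPD', 'iTPD', 'AMI', 'follow_up_years'
        else:
            flagged.append(study)                # ~10% required review in the present study
    return results, flagged

def annualized_ami_rate(results, lo, hi):
    """Crude annualized AMI rate among studies whose sTPD falls in [lo, hi)."""
    selected = [r for r in results if lo <= r["sTPD"] < hi]
    events = sum(r["AMI"] for r in selected)
    person_years = sum(r["follow_up_years"] for r in selected)
    return events / person_years if person_years else float("nan")
```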

Automated batch processing made it feasible to analyze large numbers of studies in two distinct ways: with attenuation correction (AC) and non-corrected (NC). The study showed that AC made no significant difference to the predictive accuracy of sTPD or iTPD for AMI or death. Notably, separate processing of AC and NC images does not take into account the impact of AC on each individual attenuation artifact. Unlike PET-MPI, where only AC images are analyzed, in SPECT-MPI the AC and NC images should be interpreted side by side and in the context of each other.9 Expert comparison of AC and NC images allows recognition of abnormalities that might be introduced by the AC process or of “overcorrection” of true perfusion abnormalities. In the present study, separate analysis of AC and NC images may have undercut the incremental value of attenuation correction, which most experts view as a valuable tool to enhance the diagnostic accuracy of SPECT-MPI and therefore its prognostic value.10,11 A high level of expertise and human intelligence is needed to analyze the impact of AC on a given defect and to provide an interpretation that takes both the NC and AC datasets into account. However, previous work from the same group argues that fully automated analysis is at least as good as expert readers at identifying patients with ≥70% angiographic coronary stenoses. This held true even when AC, computer analysis, and clinical data were taken into account by the expert readers.11 Given rapid advances in computer technology, software that can integrate AC, NC, and functional data in a single automated interpretation is not out of reach, if the clinical need is there. Whether such an advancement can further improve the clinical utility of SPECT-MPI remains to be seen.
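
To make the side-by-side reasoning concrete, the toy rule set below loosely mimics how a reader might reconcile paired NC and AC findings for a single defect. It is a hypothetical illustration, not part of any vendor software or of the algorithm studied here.

```python
def reconcile_defect(nc_abnormal, ac_abnormal, attenuation_prone_territory):
    """Toy rule-based reconciliation of one defect across paired NC/AC datasets."""
    if nc_abnormal and not ac_abnormal and attenuation_prone_territory:
        return "likely attenuation artifact"          # defect resolves with AC where attenuation is expected
    if not nc_abnormal and ac_abnormal:
        return "possible AC-introduced abnormality"   # consider overcorrection or AC map misregistration
    if nc_abnormal and ac_abnormal:
        return "likely true perfusion abnormality"    # concordant NC and AC findings
    return "normal"
```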

From a research perspective, there are several notable applications for the information derived from this study. In an era of evidence-based medicine, large-scale outcomes studies are critically important for diagnostic and treatment algorithms and for patient care. These large outcomes studies have utilized nuclear core labs, which have relied on “expert interpretation” by at least three experienced readers. Fully automated quantitative analysis of nuclear imaging could become the mainstay for core labs in large-scale studies, providing rapid, consistent, automated reads. In fact, fully quantitative MPI analysis is increasingly being used in the recent nuclear cardiology literature.12,13,14,15 In addition, it could be applied to large patient databases in a batch-processing fashion, allowing rapid analysis of large volumes of studies. The integration of clinical parameters, other imaging studies, and angiographic data with the quantitative nuclear data could have significant diagnostic and prognostic power and could be important for population-based research.16,17 From a clinical perspective, automated quantification and artificial intelligence could allow earlier detection of disease, improved risk prediction, automated serial assessment of imaging studies to assess response to therapy, and integration of the SPECT imaging data with other imaging studies and clinical data to drive patient-specific treatment decisions. The demand for personalized medicine is growing, and automated quantitative analysis may help bring cardiac SPECT imaging to the forefront of patient-specific decision-making.

As exciting as this may be, there are legitimate concerns about the quality of the data being used for automated processing. In the current study, approximately 10% of studies were flagged by the automated contour QC program and required readjustment by an experienced technician, although this rate is likely to decrease as the technology evolves. The SPECT studies analyzed were acquired between 2001 and 2008, and extrapolation of the results to studies acquired with newer camera and software technology would be better supported by more recent data. The use of a dated line-source AC method, rather than state-of-the-art x-ray-based AC, may also limit the applicability of the study to more modern instrumentation. Furthermore, there was no integration of perfusion and function. Although automated analysis may be proficient at assessing perfusion and function separately, integrating the two enhances the ability to distinguish an attenuation artifact from a true perfusion abnormality; this may be a target for future development of automated analysis.18 Lastly, because quantitative programs rely heavily on normal perfusion databases that are specific to patient demographics, scanner, tracer, and acquisition algorithm, one must question the applicability of this technology to large datasets with significant variability in these factors.
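
The dependence on matched normal limits can be made concrete with a small sketch. The registry keys (sex, tracer, camera, protocol) and file names below are illustrative assumptions; the point is simply that quantification against a mismatched database should fail loudly rather than proceed silently.

```python
# Hypothetical registry illustrating why normal-limits databases are not interchangeable:
# limits are typically derived per sex, tracer, camera type, and acquisition protocol.
NORMAL_LIMITS = {
    ("male", "Tc-99m", "Anger", "supine"): "limits_m_tc_anger_supine.npz",
    ("female", "Tc-99m", "Anger", "supine"): "limits_f_tc_anger_supine.npz",
    # additional combinations would be needed for CZT cameras, Tl-201, prone imaging, etc.
}

def select_normal_limits(sex, tracer, camera, protocol):
    """Return the matched normal-limits file, or raise rather than quantify
    against a mismatched database."""
    key = (sex, tracer, camera, protocol)
    if key not in NORMAL_LIMITS:
        raise KeyError(f"No normal-limits database for {key}; quantification would be unreliable")
    return NORMAL_LIMITS[key]
```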

From a clinical perspective, the factors mentioned above highlight the limitations of using this technology at the present time. In particular, caution is warranted against overuse of, and overreliance on, quantitative analysis. It is still best considered a clinical tool that can augment physician interpretation and should not be used in lieu of it. The ability of an experienced nuclear reader, in combination with clinical judgment, to detect artifacts, attenuation, and subtle nuances remains invaluable for the final interpretation of the study and, ultimately, for patient care.

Artificial Intelligence in medicine has many novel and exciting applications from both a research and a clinical perspective. Many physicians are understandably wary, fearing that it could supplant the need for clinicians. Rather than focusing on fear, we should embrace how the development of these automated technologies could augment research and patient care. The integration of automated MPI analysis and clinical data using machine learning could further augment the diagnostic and prognostic utility of MPI and facilitate decision-making.16,17,19 Fully automated analysis of SPECT clearly requires further study, optimization, and prospective validation; nonetheless, one can imagine the possibilities that such software could bring to the future of patient care. Although automated analysis is currently viewed as a supplement that enhances physician interpretation, we can envision a future in which fully automated MPI analysis coupled with machine learning plays a larger role in MPI interpretation and decision-making.
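
As a concrete, if greatly simplified, illustration of what such integration might look like, the sketch below combines automated perfusion scores with a few clinical variables in a standard logistic-regression pipeline. The feature set, the toy data, and the choice of model are assumptions for illustration and are not drawn from the machine-learning approaches used in the studies cited above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative feature matrix: [sTPD, iTPD, age, diabetes, known CAD] per patient (toy values).
X = np.array([
    [12.0, 8.0, 67, 1, 1],
    [ 2.0, 1.0, 54, 0, 0],
    [ 7.5, 5.0, 71, 1, 0],
    [ 0.5, 0.0, 49, 0, 0],
])
y = np.array([1, 0, 1, 0])   # observed AMI during follow-up (toy labels)

# Standardize the features, then fit a simple logistic-regression risk model.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Estimate AMI risk for a hypothetical new patient from imaging plus clinical data.
new_patient = np.array([[9.0, 6.0, 63, 1, 0]])
print(f"Predicted AMI risk: {model.predict_proba(new_patient)[0, 1]:.2f}")
```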