Over the last 50 years, nuclear medicine has undergone a profound evolution, marked in particular by the technological progress of scanning systems: increasingly high-performance scintillation detectors, geometries transformed from planar to cylindrical and from two-dimensional to three-dimensional configurations, and ever more accurate reconstruction algorithms, all aimed at making the scanner’s eye increasingly powerful in terms of resolution and sensitivity.

Notwithstanding the inherent potential of PET, the most advanced of the nuclear medicine techniques for the quantification of physiological variables, this potential has never been fully exploited in clinical practice. In fact, quantification has not been deemed necessary by most nuclear medicine specialists, who have relied mostly on their own experience-trained eyes, i.e., something hard to convert into validated and user-friendly software for clinical practice. Moreover, there has been a lack of convincing evidence that quantitative PET measures of biochemical variables under pathological conditions were (1) feasible and accurate and (2) more effective than “eye-based” analysis for patient management; as a result, absolute quantification has been set aside and replaced by semi-quantitative indices such as the Standardized Uptake Value (SUV) or the Metabolic Tumor Volume (MTV), or their derivatives.
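For reference, the body-weight SUV mentioned above is simply the measured tissue activity concentration divided by the injected dose per unit body weight, and MTV is typically obtained by thresholding the SUV volume. The sketch below is a minimal illustration of these semi-quantitative indices; the function names, the fixed fraction of SUVmax used as threshold, and the assumption that the image values are already decay-corrected to scan start are ours, chosen for illustration rather than taken from any specific guideline.

```python
import numpy as np

F18_HALF_LIFE_MIN = 109.8  # physical half-life of fluorine-18, in minutes

def suv_bw(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg, uptake_time_min):
    """Body-weight SUV for an FDG PET volume (illustrative sketch).

    `activity_kbq_per_ml` is the reconstructed voxel array, assumed already
    decay-corrected to scan start; the injected dose is decay-corrected here
    to the same time point.
    """
    dose_at_scan_kbq = injected_dose_mbq * 1e3 * 0.5 ** (uptake_time_min / F18_HALF_LIFE_MIN)
    # SUV_bw = tissue concentration / (injected dose / body weight),
    # assuming a tissue density of 1 g/mL so that the units cancel.
    return activity_kbq_per_ml / (dose_at_scan_kbq / (body_weight_kg * 1e3))

def metabolic_tumor_volume(suv_volume, voxel_volume_ml, threshold_fraction=0.41):
    """Threshold-based MTV: total volume of voxels above a fraction of SUVmax."""
    mask = suv_volume >= threshold_fraction * suv_volume.max()
    return mask.sum() * voxel_volume_ml
```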

Nowadays, 40 years after the invention of PET and after more than 20 years of its clinical use, the time has come for a major leap forward based on a paradigm shift. Over the last 10 years, in view of the different prognoses and outcomes observed in patients with apparently similar diagnoses, the potential of artificial intelligence (AI) has been explored for an innovative, more practical, and more effective analysis of diagnostic imaging data. This revolutionary approach is based on the hypothesis that the analysis of the results of diagnostic investigations, including imaging data, when integrated with information on treatment response and clinical endpoints obtained in large populations, could offer a totally new array of results and information for pursuing the goals of personalized precision medicine. Such an approach may completely revolutionize the way the primary objectives of an individual patient’s workup are pursued, including early diagnosis and accurate staging, personalized treatment planning, prognostic stratification, prediction of the response to treatment, and interim follow-up and restaging.

The use of technologies that allow medical images to be “read” in a more objective and quantitative way, using AI and its subfield called machine learning (ML), paves the way for a major advancement in medicine. According to a series of papers published by the AI pioneer Arthur Samuel back in 1960 [1, 2], ML gives computers the ability to learn (and subsequently to perform) a specific task without being explicitly programmed. The use of these intelligent systems makes it possible to completely rethink medical imaging, in particular for two tasks: (1) automating the “reading” of medical images, to speed up labor-intensive processes and to reduce the number of subjective errors made even by expert clinicians, and (2) improving medical performance by going beyond what clinicians’ eyes, however expert, can “see” in medical images, integrating into an intelligent artificial system the multiple pieces of information collected across all phases of the patient’s disease, along with the personal and family medical history.

As a consequence, a number of scientific papers have been published in recent years showing the potential of ML, and of its subfield called deep learning, to create “intelligent eyes” for many applications and purposes in the field of nuclear medicine. In this field, more than in any other medical imaging field, reading images is difficult for clinicians because of the limited spatial resolution and the low signal-to-noise ratio, which makes the automation of image reading, analysis, and interpretation even more impactful than in higher-resolution, higher-contrast imaging modalities.

State-of-the-art examples of a first category of ML algorithms, focused on the automation of specific radiology tasks, include the automatic detection of disease lesions [3,4,5,6] and segmentation or tissue differentiation [7,8,9,10,11]. A second category of ML algorithms, aimed at improving clinician performance when applied to nuclear medicine images, includes automatic diagnosis and prognosis of specific diseases and prediction of treatment response [12,13,14,15,16]. Another recently emerging application of such algorithms is the improvement of image quality [17, 18].
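As a purely illustrative example of the second category, the sketch below trains a generic classifier to predict a binary treatment-response label from precomputed, radiomics-style image features. The data are synthetic and every name and parameter is a placeholder; it is meant only to show the shape of such a workflow, not the method of any of the cited studies.

```python
# Toy illustration only: synthetic "radiomic" features and response labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_features = 200, 30                 # e.g., SUV statistics, texture features
X = rng.normal(size=(n_patients, n_features))    # one feature vector per patient
# Synthetic response label loosely driven by two of the features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=0)
# Cross-validated AUC gives a first estimate of discriminative power; a real
# study would also require an independent (ideally external) validation cohort.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {scores.mean():.2f}")
```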

The variety and the potential of the published applications clearly show how important it is for clinicians to become familiar with the use of ML-based tools. Whereas some feared that AI could mark the end of their profession, the time has now come to recognize the usefulness of ML. In fact, given the increasing number of examinations carried out in each patient, radiologists are among the first professionals to be affected by AI. The progressive recognition of different forms and expressions of disease, as well as the aging of the population, has increased the demand for diagnostic procedures, with substantial additional costs. In the “big data” and precision medicine era, a scanning procedure every 15 min entails the analysis of about two thousand images daily, a task that is becoming unsustainable if founded only on the physician’s eye-based approach [19]. In this respect, AI will help radiologists, as these technologies can support clinicians throughout the workup and follow-up phases, leading to personalized and cost-effective management of patients.
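The order of magnitude of that workload can be checked with a back-of-the-envelope calculation; the assumptions below (an 8-hour working day and roughly 60 images per cross-sectional study) are ours, used only to show how a figure of this size can arise, not values taken from [19].

```python
# Rough daily workload estimate under assumed values.
minutes_per_study = 15
working_minutes_per_day = 8 * 60                                  # assumed 8-hour working day
images_per_study = 62                                             # assumed order of magnitude

studies_per_day = working_minutes_per_day // minutes_per_study    # 32 studies
print(studies_per_day * images_per_study)                         # roughly 2,000 images per day
```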

However, we are not there yet. Important barriers affect the role of the radiologist in this new context. For example, it is important to underline that the development of an ML system requires a large amount of properly “labeled” images, which makes the contribution of radiologists fundamental to overcoming several challenges specific to the nature of medicine. Furthermore, radiologists should guide the development of ML tools that produce transparent and understandable outputs, avoiding “black-box” software, in order to favor their clinical use and allow the results to be verified.

Last, but not least, it should be highlighted that one of the main limitations of AI applications in nuclear medicine, as in other medical specialties, is related to ethical issues. Indeed, the employment of ML algorithms for diagnosis, prognosis, and prediction of treatment response strongly shifts the relationship between patients, healthcare providers, and medical data [20]. As stated by Morris et al. [21], in this new era of nuclear medicine, with the advent of AI and personalized medicine, the patient’s individual data are now used as part of “big data” to extract information useful for understanding pathological processes and for the care of other patients. This secondary use of patient data imposes legal and ethical rules that have only just begun to be reformulated by the General Data Protection Regulation (GDPR (EU) 2016/679) [22]. The European Regulation entrusts to an increasingly informed and aware patient the consent to the processing of his or her data, also for the benefit of other patients, for example to train intelligent systems that assist the doctor in making diagnoses and prognoses. On the one hand, this new context protects the patient from unauthorized use of his or her medical data; on the other hand, it entails the risk of slowing down, if not preventing, the use of the large amount of data already available in hospitals and clinics, a priceless heritage for accelerating the building and testing of new intelligent systems [20]. Finally, there is also a major new question to face: how to classify these new “eyes” at the regulatory level, given their increasingly autonomous and high-performing role in medicine. Let us “see” what will happen in the future.