
Clinical and Translational Imaging, Volume 7, Issue 4, pp 233–235

The eye of nuclear medicine

  • Annalisa Polidori
  • Christian Salvatore
  • Isabella Castiglioni
  • Antonio Cerasa
Open Access
Editorial
Part of the following topical collections:
  1. Instrumentation and physics

Over the last 50 years, nuclear medicine has undergone a profound evolution, marked in particular by the technological progress of its scanning systems. These have been equipped with increasingly high-performance scintillation detectors, have moved from planar to cylindrical geometries and from two-dimensional to three-dimensional configurations, and have been coupled with ever more accurate reconstruction algorithms, all with the aim of making the scanner’s eye increasingly powerful in terms of resolution and sensitivity.

Notwithstanding the inherent potential of PET, the most advanced nuclear medicine technique for the quantification of physiological variables, that potential has never been fully exploited in clinical practice. Quantification has not been deemed necessary by most nuclear medicine specialists, who have relied mostly on their own experience-trained eyes, an expertise that is hard to convert into validated, user-friendly software for clinical practice. Moreover, there has been a lack of convincing evidence that quantitative PET measures of biochemical variables under pathological conditions were (1) possible and accurate and (2) more effective than “eye-based” analysis for patient management. Absolute quantification has therefore been set aside in favor of semi-quantitative analyses such as the Standardized Uptake Value (SUV) or the Metabolic Tumor Volume (MTV), and their derivatives.
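For reference, the semi-quantitative measures mentioned above have simple definitions; the commonly used body-weight-normalized SUV, for instance, is computed as:

```latex
\mathrm{SUV}_{\mathrm{bw}}(t) \;=\; \frac{C(t)}{D_{\mathrm{inj}} / W}
```

where \(C(t)\) is the decay-corrected activity concentration measured in the tissue (e.g., kBq/mL), \(D_{\mathrm{inj}}\) the injected dose (kBq), and \(W\) the patient's body weight (g). The Metabolic Tumor Volume is then typically obtained as the volume of tumor tissue whose SUV exceeds a chosen threshold.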

Nowadays, 40 years after the invention of PET and more than 20 years into its clinical use, the time has come for a major leap forward based on a paradigm shift. Over the last 10 years, in view of the different prognoses and outcomes observed in patients with similar diagnoses, the potential of artificial intelligence (AI) has been explored for an innovative, more practical, and more effective analysis of diagnostic imaging data. This revolutionary approach is based on the hypothesis that the analysis of the results of diagnostic investigations, including imaging data, when integrated with the information on treatment response and clinical endpoints obtained in large populations, could offer an entirely new array of results and information for pursuing the goals of personalized precision medicine. It may completely change the way some primary objectives are pursued in the workup of individual patients, including early diagnosis and accurate staging, personalized treatment planning, prognostic stratification, prediction of the response to treatment, as well as interim follow-up and restaging.

The use of technologies that allow medical images to be “read” in a more objective and quantitative way, using AI and its subfield machine learning (ML), paves the way for a major advance in medicine. According to a series of papers published by the AI pioneer Arthur Samuel in 1959 and 1967 [1, 2], ML gives computers the ability to learn (and to subsequently perform) a specific task without being explicitly programmed. These intelligent systems make it possible to completely rethink medical imaging, in particular for two tasks: (1) automating the “reading” of medical images, to speed up labor-intensive processes and to reduce the number of subjective errors made even by expert clinicians, and (2) improving medical performance by going beyond what the eyes of clinicians, however expert, can “see” in medical images, integrating into an intelligent artificial system multiple pieces of information collated across all phases of the patient’s disease, along with personal and family medical history.
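Samuel’s idea of “learning without being explicitly programmed” can be sketched in a few lines. The toy classifier below learns class prototypes from labeled examples rather than from hand-written rules; the two feature values and the class names are entirely hypothetical and stand in for quantities such as mean uptake and lesion volume, not real PET data.

```python
# Minimal sketch of supervised learning: a nearest-centroid classifier.
# No decision rule is written by hand; the "rule" (one centroid per
# class) is computed from the labeled training examples themselves.

def train(samples):
    """Compute one centroid (mean feature vector) per class label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the class whose centroid is closest (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Synthetic training set: (feature vector, label). The features might
# represent, e.g., mean uptake and lesion volume -- purely illustrative.
training = [
    ([2.1, 5.0], "benign"), ([1.8, 4.5], "benign"),
    ([7.9, 12.0], "malignant"), ([8.4, 11.2], "malignant"),
]
model = train(training)
print(predict(model, [8.0, 11.5]))  # classifies a new, unseen case
```

Real ML systems for imaging replace the two hand-picked features with thousands of learned ones and the centroid rule with deep networks, but the principle is the same: the behavior is induced from labeled data, not programmed explicitly.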

As a consequence, a number of scientific papers have been published in recent years showing the potential of ML, and of its subfield deep learning, to create “intelligent eyes” for many applications and purposes in nuclear medicine. In this sector, more than in any other medical imaging sector, reading images is difficult for clinicians, owing to the limited spatial resolution and the low signal-to-noise ratio; this makes the automation of image reading, analysis, and interpretation even more impactful than in higher-resolution, higher-contrast imaging modalities.

State-of-the-art examples in a first category of ML algorithms, focused on the automation of specific radiology tasks, include the automatic detection of disease lesions [3, 4, 5, 6] and segmentation or tissue differentiation [7, 8, 9, 10, 11]. A second category, aimed at improving clinicians’ performance on nuclear medicine images, includes automatic diagnosis and prognosis of specific diseases and prediction of treatment response [12, 13, 14, 15, 16]. Another recently emerging application of such algorithms is the improvement of image quality [17, 18].

The variety and potential of the published applications clearly show how important it is for clinicians to become familiar with ML-based tools. Where they once feared that AI could mark their professional end, the time has now come to recognize its usefulness. Given the increasing number of examinations carried out on each patient, radiologists are among the first professionals to be affected by AI. The progressive recognition of different forms and expressions of disease, together with the aging of the population, has raised the demand for diagnostic procedures, with important additional costs. A scanning procedure every 15 min, in the “big data” and precision medicine era, entails the analysis of about two thousand images daily, a task that is becoming unsustainable if founded only on the physician’s eye-based approach [19]. To this extent, AI will help radiologists: these technologies can support clinicians throughout the workup and follow-up phases, leading to personalized and cost-effective management of patients.

However, we are not there yet. Important barriers still affect the role of the radiologist in this new context. For example, the development of an ML system requires a large amount of properly “labeled” images, which makes the contribution of radiologists fundamental to overcoming several challenges specific to medicine. Furthermore, radiologists should guide the development of ML tools that produce transparent and understandable outputs, avoiding “black-box” software, so as to favor their clinical use and allow results to be checked. Last, but not least, one of the main limitations of AI in nuclear medicine, as in other medical specialties, relates to ethical issues. Indeed, the use of ML algorithms to make diagnoses, establish prognoses, and predict treatment response strongly shifts the relationship between patients, healthcare providers, and medical data [20]. As stated by Morris et al. [21], in this new era of nuclear medicine, with the advent of AI and personalized medicine, the patient’s individual data are now used as part of “big data” to extract information useful for understanding pathological processes and caring for other patients. This secondary use of patient data imposes legal and ethical rules that have only recently begun to be reformulated by the General Data Protection Regulation (GDPR (EU) 2016/679) [22]. The European Regulation delegates to an increasingly informed and aware patient the consent to the processing of their data, also for the benefit of other patients, for example to train intelligent systems that assist the doctor in making diagnoses and prognoses.
This new context protects the patient from unauthorized use of their medical data on the one hand, but on the other risks slowing down, if not preventing, the use of the large amount of data already available in hospitals and clinics, a priceless heritage for accelerating the building and testing of new intelligent systems [20]. Finally, there is also a great new question to face: how to classify these new “eyes” at a regulatory level, given their increasingly autonomous and capable role in medicine. Let us “see” what the future will bring.

Notes

Compliance with ethical standards

Conflict of interest

Annalisa Polidori, Christian Salvatore, Isabella Castiglioni, and Antonio Cerasa declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

References

  1. Samuel AL (1959) Some studies in machine learning using the game of checkers. IBM J Res Dev 3:210–229 (reprinted in: Feigenbaum EA, Feldman J (eds) (1963) Computers and thought. McGraw-Hill, New York, pp 71–105)
  2. Samuel AL (1967) Some studies in machine learning using the game of checkers. II—Recent progress. IBM J Res Dev 11(6):601–617
  3. Gutte H, Jakobsson D, Olofsson F, Ohlsson M, Valind S, Loft A, Edenbrandt L, Kjaer A (2007) Automated interpretation of PET/CT images in patients with lung cancer. Nucl Med Commun 28(2):79–84
  4. Schwyzer M, Ferraro DA, Muehlematter UJ, Curioni-Fontecedro A, Huellner MW, von Schulthess GK, Kaufmann PA, Burger IA, Messerli M (2018) Automated detection of lung cancer at ultralow dose PET/CT by deep neural networks–Initial results. Lung Cancer 126:170–173. https://doi.org/10.1016/j.lungcan.2018.11.001
  5. Teramoto A, Fujita H, Yamamuro O, Tamaki T (2016) Automated detection of pulmonary nodules in PET/CT images: ensemble false-positive reduction using a convolutional neural network technique. Med Phys 43(6):2821–2827. https://doi.org/10.1118/1.4948498
  6. Li S, Jiang H, Wang Z, Zhang G, Yao YD (2018) An effective computer aided diagnosis model for pancreas cancer on PET/CT images. Comput Methods Programs Biomed 165:205–214. https://doi.org/10.1016/j.cmpb.2018.09.001
  7. Berthon B, Marshall C, Evans M, Spezi E (2016) ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography. Phys Med Biol 61(13):4855–4869. https://doi.org/10.1088/0031-9155/61/13/4855
  8. Blanc-Durand P, Van Der Gucht A, Schaefer N, Itti E, Prior JO (2018) Automatic lesion detection and segmentation of 18F-FET PET in gliomas: a full 3D U-Net convolutional neural network study. PLoS One 13(4):e0195798
  9. Vogl WD, Pinker K, Helbich TH, Bickel H, Grabner G, Bogner W, Gruber S, Bago-Horvath Z, Dubsky P, Langs G (2019) Automatic segmentation and classification of breast lesions through identification of informative multiparametric PET/MRI features. Eur Radiol Exp 3(1):18. https://doi.org/10.1186/s41747-019-0096-3
  10. Lindgren Belal S, Sadik M, Kaboteh R, Enqvist O, Ulén J, Poulsen MH, Simonsen J, Høilund-Carlsen PF, Edenbrandt L, Trägårdh E (2019) Deep learning for segmentation of 49 selected bones in CT scans: first step in automated PET/CT-based 3D quantification of skeletal metastases. Eur J Radiol 113:89–95. https://doi.org/10.1016/j.ejrad.2019.01.028
  11. Chen L, Shen C, Zhou Z, Maquilan G, Albuquerque K, Folkert MR, Wang J (2019) Automatic PET cervical tumor segmentation by combining deep learning and anatomic prior. Phys Med Biol 64(8):085019. https://doi.org/10.1088/1361-6560/ab0b64
  12. Gomez J, Doukky R, Germano G, Slomka P (2018) New trends in quantitative nuclear cardiology methods. Curr Cardiovasc Imaging Rep 11(1):1. https://doi.org/10.1007/s12410-018-9443-7
  13. Wang H, Zhou Z, Li Y, Chen Z, Lu P, Wang W, Liu W, Yu L (2017) Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images. EJNMMI Res 7(1):11. https://doi.org/10.1186/s13550-017-0260-9
  14. Ahn HK, Lee H, Kim SG, Hyun SH (2019) Pre-treatment 18F-FDG PET-based radiomics predict survival in resected non-small cell lung cancer. Clin Radiol 74(6):467–473. https://doi.org/10.1016/j.crad.2019.02.008
  15. Milgrom SA, Elhalawani H, Lee J, Wang Q, Mohamed ASR, Dabaja BS, Pinnix CC, Gunther JR, Court L, Rao A, Fuller CD, Akhtari M, Aristophanous M, Mawlawi O, Chuang HH, Sulman EP, Lee H, Hagemeister FB, Oki Y, Fanale M, Smith GL (2019) A PET radiomics model to predict refractory mediastinal Hodgkin lymphoma. Sci Rep 9(1):1322. https://doi.org/10.1038/s41598-018-37197-z
  16. Li S, Wang K, Hou Z, Yang J, Ren W, Gao S, Meng F, Wu P, Liu B, Liu J, Yan J (2018) Use of radiomics combined with machine learning method in the recurrence patterns after intensity-modulated radiotherapy for nasopharyngeal carcinoma: a preliminary study. Front Oncol 8:648. https://doi.org/10.3389/fonc.2018.00648
  17. Xiang L, Qiao Y, Nie D, An L, Wang Q, Shen D (2017) Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI. Neurocomputing 267:406–416
  18. Zaharchuk G (2019) Next generation research applications for hybrid PET/MR and PET/CT imaging using deep learning. Eur J Nucl Med Mol Imaging. https://doi.org/10.1007/s00259-019-04374-9
  19. Waite S, Kolla S, Jeudy J, Legasto A, Macknik SL, Martinez-Conde S, Reede DL (2017) Tired in the reading room: the influence of fatigue in radiology. J Am Coll Radiol 14(2):191–197
  20. Jaremko JL, Azar M, Bromwich R, Lum A, Alicia Cheong LH et al (2019) Canadian Association of Radiologists white paper on ethical and legal issues related to artificial intelligence in radiology. Can Assoc Radiol J 70(2):107–118
  21. Morris MA, Saboury B, Burkett B, Gao J, Siegel EL (2018) Reinventing radiology: big data and the future of medical imaging. J Thorac Imaging 33(1):4–16
  22. GDPR—General Data Protection Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (OJ L 119, 4.5.2016, p. 1; cor. OJ L 127, 23.5.2018)

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Milan, Italy
  2. Neuroimaging Research Unit, Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Catanzaro, Italy
  3. S. Anna Institute and Research in Advanced Neurorehabilitation (RAN), Crotone, Italy
