Background

FDG PET/CT is widely used to detect metabolically active lesions, especially in oncology [1, 2]. PET/CT scanners have become widespread because of their usefulness, and the number of FDG PET/CT examinations has increased accordingly. In Japan, the number of institutes with an installed PET/CT scanner increased by 177 (from 212 to 389) between 2007 and 2017, while the number of examinations increased by 72%, from 414,300 to 711,800 [3]. In current clinical practice, FDG PET/CT images require interpretation by specialists in nuclear medicine. As the physicians' burden of interpreting images increases, so does the risk of oversight or misdiagnosis. There is therefore a demand for an automated system that can prevent such incidents.

Image analysis using a convolutional neural network (CNN), a machine learning method, has attracted a great deal of attention as an artificial intelligence (AI) method in the medical field [4,5,6,7]. A CNN is a type of deep neural network (so-called deep learning) technique and is well suited to image analysis because of its high performance in image recognition [8]. In a previous study using a CNN, tuberculosis was automatically detected on chest radiographs [9]. A CNN has also enabled brain tumor segmentation and prediction of genotype from magnetic resonance images [10]. Another study showed high diagnostic performance in the differentiation of liver masses on dynamic contrast-enhanced computed tomography [11]. CNN methods have also been applied to PET/CT, with successful results [12,13,14].

We hypothesized that introducing an automated system to detect malignant findings would prevent human oversight/misdiagnosis. In addition, the system would be useful to select patients who need urgent interpretation by radiologists. Physicians who are inexperienced in nuclear medicine would particularly benefit from such a system.

In this research, we aimed to develop a CNN-based diagnosis system that classifies whole-body FDG PET images into 3 categories: 1) benign, 2) malignant and 3) equivocal; such a system would allow physicians performing radiology-based diagnosis to double-check their opinions. In addition, we examined region-based predictions in the head and neck, chest, abdomen, and pelvis regions.

Methods

Subjects

This retrospective study included 3485 consecutive patients (mean age ± SD, 63.9 ± 13.6 y; range, 24–95 y) who underwent whole-body FDG PET/CT (Table 1). All patients were scanned on either Scanner 1 (N = 2864; Biograph 64 PET/CT scanner, Asahi-Siemens Medical Technologies Ltd., Tokyo) or Scanner 2 (N = 621; GEMINI TF64 PET/CT scanner, Philips Japan, Ltd., Tokyo) at our institute between January 2016 and December 2017.

The institutional review board of Hokkaido University Hospital approved the study (#017–0365) and waived the need for written informed consent from each patient because the study was conducted retrospectively.

Model training and testing

Experiment 1 (Whole-body): First, input images were resampled to 224 × 224 pixels to match the input size of the network. The CNN was then trained on the FDG PET images. The CNN was trained and validated using 70% of the patients (N = 2440; 896 benign, 1015 malignant, and 529 equivocal), who were randomly selected. After the training process, the remaining 30% of the patients (N = 1045; 384 benign, 435 malignant, and 226 equivocal) were used for testing. A 5-fold cross-validation scheme was used to validate the model, followed by testing. In the model-training phase, we used "early stopping" and "dropout" to prevent overfitting. Early stopping monitors the training and validation loss and stops learning before the model overfits [15]. Early stopping and dropout have been widely adopted in various machine-learning methods [16, 17].
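The training setup can be sketched as follows. This is only an illustrative example written against the Keras/TensorFlow versions listed under "Hardware and software environments"; the file names and the `build_resnet3()` helper are placeholders, and the patient-level 70/30 split used in the study is simplified here to an array-level split.

```python
# Illustrative sketch of the training procedure (not the authors' actual code).
# Assumes `build_resnet3()` returns the 3-class network described under "Architectures",
# and that the MIP images have already been resampled to 224 x 224 pixels.
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold
from keras.callbacks import EarlyStopping

images = np.load("mip_images_224.npy")   # hypothetical file of resampled MIP images
labels = np.load("labels_onehot.npy")    # one-hot labels: benign / malignant / equivocal

# 70% of the data for training/validation, 30% held out for testing
x_trainval, x_test, y_trainval, y_test = train_test_split(
    images, labels, test_size=0.3, random_state=42)

# 5-fold cross-validation; early stopping monitors the validation loss
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, val_idx in kfold.split(x_trainval, y_trainval.argmax(axis=1)):
    model = build_resnet3()              # dropout layers are built into the model
    early_stop = EarlyStopping(monitor="val_loss", patience=5,
                               restore_best_weights=True)
    model.fit(x_trainval[train_idx], y_trainval[train_idx],
              validation_data=(x_trainval[val_idx], y_trainval[val_idx]),
              epochs=100, batch_size=32, callbacks=[early_stop])
```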

Experiment 2 (Region-based analysis): In this experiment, neural networks with the same architecture as in Experiment 1 were trained on 4 datasets consisting of differently cropped images: A) head and neck, B) chest, C) abdomen, and D) pelvic region. An image was labeled malignant when a malignancy existed in the corresponding region, equivocal when equivocal uptake existed in the corresponding region, and benign otherwise. The configuration of the network was the same as in Experiment 1.
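The region-level labeling rule can be summarized by the following sketch; the function and input names are illustrative only, not part of the study's code.

```python
# Hedged sketch of the region-level labeling rule described above.
# `malignant_regions` and `equivocal_regions` are hypothetical per-patient sets of
# regions ("head_neck", "chest", "abdomen", "pelvis") derived from the reports.
def region_label(region, malignant_regions, equivocal_regions):
    """Return the label of one cropped region for one patient."""
    if region in malignant_regions:
        return "malignant"
    if region in equivocal_regions:
        return "equivocal"
    return "benign"
```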

Experiment 3 (Grad-CAM [18]): We carried out additional experiments using the Grad-CAM technique, which visualizes the parts of an image that activate the neural network; in other words, Grad-CAM highlights the image regions to which the network responds. The same images as in Experiment 1 were used as input. To evaluate the Grad-CAM results, we randomly extracted 100 malignant patients from the test cohort. Grad-CAM provides a continuous value for each pixel, and we set 2 different cut-offs (70 and 90% of the maximum) to contour the activated area. Each Grad-CAM result was judged correct or incorrect by a nuclear medicine physician.
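The Grad-CAM computation and the cut-off-based contouring can be sketched as follows. This is a minimal example for a Keras model with a TensorFlow backend; the layer name and variable names are assumptions, not the authors' implementation.

```python
# Illustrative Grad-CAM sketch (not the authors' code). `model` is the trained 3-class
# network, `last_conv_name` the name of its final convolutional layer, and `img` one
# preprocessed 224 x 224 input image.
import numpy as np
from keras import backend as K

def grad_cam_mask(model, img, class_idx, last_conv_name, cutoff=0.7):
    class_output = model.output[:, class_idx]
    conv_output = model.get_layer(last_conv_name).output
    grads = K.gradients(class_output, conv_output)[0]
    pooled_grads = K.mean(grads, axis=(0, 1, 2))             # channel-wise gradient weights
    fetch = K.function([model.input], [pooled_grads, conv_output[0]])
    weights, fmap = fetch([img[np.newaxis]])
    cam = np.maximum(np.dot(fmap, weights), 0)               # weighted sum of feature maps + ReLU
    cam /= cam.max() + 1e-8                                   # normalize to [0, 1]
    return cam >= cutoff                                      # activated area at the chosen cut-off (0.7 or 0.9)
```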

Labeling

An experienced nuclear medicine physician classified all the patients into 3 categories: 1) benign, 2) malignant and 3) equivocal, based on the FDG PET maximum intensity projection (MIP) images and diagnostic reports. The criteria of classification were as follows.

  1) The patient was labeled as malignant when the radiology report described any malignant uptake masses and the labeling physician confirmed that the masses were visually recognizable.

  2) The patient was labeled as benign when the radiology report described no malignant uptake masses and the labeling physician confirmed that there was no visually recognizable uptake indicating a malignant tumor.

  3) The patient was labeled as equivocal when the radiology report was inconclusive between malignant and benign and the labeling physician agreed with the radiology report. When the labeling physician disagreed with the radiology report, the physician further investigated the electronic medical record and categorized the patient as malignant, benign, or equivocal.

Finally, 1280 (37%) patients were labeled "benign", 1450 (42%) "malignant", and 755 (22%) "equivocal". Note that the number of patients labeled malignant was smaller than the number of pretest diagnoses in Table 1, mainly because Table 1 includes patients who were suspected of cancer recurrence before the examination but showed no malignant findings on PET.

Table 1 Patient characteristics

The location of any malignant uptake was categorized as A) head and neck, B) chest, C) abdomen, or D) pelvic region. For this classification, the physician was blinded to the CT images and to parameters such as the maximum standardized uptake value (SUVmax). The diagnostic reports had been made by 2 or more physicians, each with more than 8 years of experience in nuclear medicine, based on several factors including SUVmax, tumor diameter, visual contrast of the tumors, tumor location, and changes over time.

Image acquisition and reconstruction

All clinical PET/CT studies were performed with either Scanner 1 or Scanner 2. All patients fasted for ≥6 h before the injection of FDG (approx. 4 MBq/kg), and emission scanning was initiated 60 min post-injection. For Scanner 1, the transaxial and axial fields of view were 68.4 cm and 21.6 cm, respectively. For Scanner 2, the transaxial and axial fields of view were 57.6 cm and 18.0 cm, respectively. Emission scanning in 3D mode was performed for 3 min per bed position. Attenuation was corrected with CT images acquired without contrast media. Images were reconstructed with an iterative method with (Scanner 1) or without (Scanner 2) point-spread-function modeling. For Scanner 2, image reconstruction was reinforced with the time-of-flight algorithm.

Each reconstructed image had a matrix size of 168 × 168 with a voxel size of 4.1 × 4.1 × 2.0 mm for Scanner 1, and a matrix size of 144 × 144 with a voxel size of 4.0 × 4.0 × 4.0 mm for Scanner 2. MIP images (matrix size 168 × 168) were generated by linear interpolation. MIP images were created at 10-degree rotation increments over 180 or 360 degrees; therefore, 18 or 36 MIP angles were generated per patient. In this study, CT images were used only for attenuation correction, not for classification.
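As an illustration of how such rotated MIP images could be generated from a reconstructed volume, a minimal sketch is given below; it is not the console software actually used, and the axis convention is an assumption.

```python
# Illustrative generation of rotated MIP images from a PET volume (axes assumed z, y, x).
import numpy as np
from scipy.ndimage import rotate

def rotated_mips(volume, step_deg=10, total_deg=360):
    """Return MIP images at `step_deg` increments around the body axis."""
    mips = []
    for angle in range(0, total_deg, step_deg):       # 18 angles for 180 deg, 36 for 360 deg
        rot = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)  # linear interpolation
        mips.append(rot.max(axis=2))                   # maximum intensity projection
    return np.stack(mips)
```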

Convolutional neural network (CNN)

A neural network is a computational system that simulates the neurons of the brain. Every neural network has input, hidden, and output layers. Each layer has a structure in which multiple nodes are connected by edges. A "deep neural network" is one that uses multiple hidden layers, and machine learning using a deep neural network is called "deep learning." A convolutional neural network (CNN) is a type of deep neural network that has proven to be highly efficient in image recognition. A CNN does not require predefined image features. We propose the use of a CNN to classify FDG PET images.

Architectures

In this study, we used a network model with the same configuration as ResNet [19]. In the original ResNet, the output layer classifies images into 1000 classes; we modified the number of classes to 3. We used this network model to classify whole-body FDG PET images into the 1) benign, 2) malignant, and 3) equivocal categories. Here we provide details of the CNN architecture and the techniques used in this study. The detailed architecture is shown in Fig. 1 and Table 2. Convolution layers create feature maps that extract image features. Pooling layers reduce the amount of data and improve robustness against misregistration by down-sampling the feature maps. The "residual" block, the characteristic feature of ResNet, combines several layers with a skip connection and thereby mitigates the conventional vanishing-gradient problem. Each neuron in a layer is connected to the corresponding neurons in the previous layer. The architecture of the CNN used in the present study contained five convolutional layers. The network also applied a rectified linear unit (ReLU) function, local response normalization, and softmax layers. The softmax function is defined as follows:

$$ F(x_i) = \frac{\exp(x_i)}{\sum_{j}\exp(x_j)} $$

where $x_i$ is the output of neuron $i$ ($i = 1, 2, \dots, n$, with $n$ being the number of neurons in the layer).
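For illustration, a residual block and the modified 3-class softmax output can be written in Keras as follows; the filter sizes, channel counts, and input shape here are placeholders, and the actual configuration is the one listed in Table 2.

```python
# Hedged Keras sketch of a residual block and the 3-class softmax head (illustrative only).
from keras.layers import (Input, Conv2D, BatchNormalization, Activation, Add,
                          GlobalAveragePooling2D, Dense)
from keras.models import Model

def residual_block(x, filters):
    shortcut = x
    y = Conv2D(filters, (3, 3), padding="same")(x)
    y = BatchNormalization()(y)
    y = Activation("relu")(y)
    y = Conv2D(filters, (3, 3), padding="same")(y)
    y = BatchNormalization()(y)
    y = Add()([y, shortcut])            # skip connection that mitigates vanishing gradients
    return Activation("relu")(y)

inputs = Input(shape=(224, 224, 3))     # channel count is an assumption
x = Conv2D(64, (7, 7), strides=2, padding="same", activation="relu")(inputs)
x = residual_block(x, 64)
x = GlobalAveragePooling2D()(x)
outputs = Dense(3, activation="softmax")(x)   # benign / malignant / equivocal
model = Model(inputs, outputs)
```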

Fig. 1

The functional architecture of the CNN. a The detailed structure of the CNN used in this study. b An internal structure of the residual layer

Table 2 Details of architecture

Patient-based classification

The patient-based classification was performed only in the test phase. After the test images were classified by the CNN, each patient was classified according to 2 different algorithms (A and B); a code sketch of both rules is given after the lists below.

Algorithm A:

  1) If one or more images of the patient were judged as malignant, the patient was judged as malignant.

  2) If all the images of the patient were judged as benign, the patient was judged as benign.

  3) If neither of the above was satisfied, the patient was judged as equivocal.

Algorithm B:

  1) If more than 1/3 of all the images of the patient were judged as malignant, the patient was judged as malignant.

  2) If less than 1/3 of the images were judged as malignant and more than 1/3 were judged as equivocal, the patient was judged as equivocal.

  3) If neither of the above was satisfied, the patient was judged as benign.
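The two aggregation rules can be written compactly as follows (an illustrative sketch; `image_labels` is the list of per-image CNN predictions for one patient):

```python
# Sketch of the patient-level aggregation rules A and B (illustrative only).
def algorithm_a(image_labels):
    if "malignant" in image_labels:
        return "malignant"
    if all(label == "benign" for label in image_labels):
        return "benign"
    return "equivocal"

def algorithm_b(image_labels):
    n = len(image_labels)
    if image_labels.count("malignant") > n / 3:
        return "malignant"
    if image_labels.count("equivocal") > n / 3:   # fewer than 1/3 malignant, by the rule above
        return "equivocal"
    return "benign"
```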

Hardware and software environments

This experiment was performed under the following environment:

Operating system, Windows 10 Pro 64-bit; CPU, Intel Core i7-6700K; GPU, NVIDIA GeForce GTX 1070 8 GB; framework, Keras 2.2.4 and TensorFlow 1.11.0; language, Python 3.6.7; CNN, the same configuration as ResNet; optimizer, Adam [20].
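As a hedged illustration of this configuration, the model defined in the Architectures sketch could be compiled as follows (the learning rate is a placeholder, not a reported value):

```python
# Compiling the sketched model with the Adam optimizer (Keras 2.2.4 / TensorFlow 1.11).
from keras.optimizers import Adam

model.compile(optimizer=Adam(lr=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```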

Results

Figure 2 shows typical images of each category. A total of 76,785 maximum intensity projection (MIP) images were investigated. The number of images of benign patients, malignant patients, and equivocal patients was 28,688, 31,751 and 16,346, respectively.

Fig. 2

Typical cases in this study. (1) a benign patient with physiological uptake in the larynx, (2) a patient with malignant uptake and multiple metastases to bones and other organs, and (3) an equivocal patient with abdominal uptake that was indeterminate between malignant and inflammatory foci

Experiment 1 (whole-body analysis)

In the image-based prediction, the model was trained for 30 epochs using the early stopping algorithm. The CNN took 3.5 h for training and < 0.1 s/image for prediction. When images of benign patients were given to the trained model, the accuracy was 96.6%. Similarly, the accuracies for images of malignant and equivocal patients were 97.3 and 77.8%, respectively. The results are shown in Table 3 (a). In addition, Table 3 (b) shows the corresponding recall, precision, and F-value calculations.
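For reference, per-class recall, precision, and F-values of this kind can be computed from the test-set predictions as sketched below; scikit-learn is an assumed dependency, and the variable names refer to the training sketch in the Methods section.

```python
# Illustrative computation of per-class recall, precision, and F-value on the test set.
from sklearn.metrics import classification_report

y_pred = model.predict(x_test).argmax(axis=1)
print(classification_report(y_test.argmax(axis=1), y_pred,
                            target_names=["benign", "malignant", "equivocal"]))
```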

Table 3 Details of Results of Experiments 1 and 2

In the patient-based classification, we applied algorithms A and B. When algorithm A was applied, 91.0% of benign patients, 100% of malignant patients, and 57.5% of equivocal patients were correctly predicted. When algorithm B was applied, 99.4% of benign patients, 99.4% of malignant patients, and 87.5% of equivocal patients were correctly predicted (Table 3c and d). The predictions tended to fail especially when strong physiological accumulation (e.g., in the larynx) or mild malignant accumulation was present. Typical cases in which the neural network failed to predict the proper category are shown in Fig. 3.

Fig. 3

Typical cases that were incorrectly classified (a, false-positive case; b, false-negative case)

Experiment 2 (region-based analysis)

The same population as in Experiment 1 was used in this experiment. The model was trained for 33–45 epochs for each dataset using the early stopping algorithm. The CNN took 4–5 h for training and < 0.1 s/image for prediction.

In the experiment for the head-and-neck region, a new labeling system was introduced to classify the images into 3 categories: 1) benign in the head-and-neck region, 2) malignant in the head-and-neck region, and 3) equivocal in the head-and-neck region. When images from "malignant in the head-and-neck region" patients were given to the trained model, the accuracy was 97.3%. The accuracy was 97.8 and 96.2% for "benign in the head-and-neck region" and "equivocal in the head-and-neck region" patients, respectively.

Similar experiments were performed for the chest, abdominal, and pelvic regions. The details of the results are shown in Table 3 (g)-(j). The accuracy was higher for the pelvic region (95.3–99.7%) than for the abdominal region (91.0–94.9%).

Experiment 3 (Grad-CAM [18])

We employed Grad-CAM to identify the parts of the images from which the neural network extracted the most information. Typical examples are shown in Fig. 4. When the activated area was defined with a cut-off of 70% of maximum, 93% of patients had at least one image in which the activated area covered part of the tumor. Similarly, when the activated area was defined with a cut-off of 90% of maximum, 72% of patients had at least one image in which the activated area covered part of the tumor.

Fig. 4

Visualization of the classification basis of the CNN. a Examples of original images input to the CNN. b Examples of activated areas obtained by Grad-CAM with a cut-off of 70% of maximum, highlighting the area of malignant uptake. c Examples of activated areas obtained by Grad-CAM with a cut-off of 90% of maximum, highlighting the area of malignant uptake

Discussion

In the patient-based classification, the neural network correctly predicted both the malignant and benign categories with 99.4% accuracy, although the accuracy for equivocal patients was 87.5%. The resulting average accuracy of 95.4% suggests that a CNN may be useful for the 3-category classification of FDG PET MIP images. Furthermore, malignant uptake regions were classified correctly with accuracies of 97.3% (head and neck), 96.6% (chest), 92.8% (abdomen), and 99.6% (pelvic region). These results suggest that the system may have the potential to help radiologists avoid oversight and misdiagnosis.

To clarify the reasons for the classification failures, we investigated some cases that were incorrectly predicted in Experiment 1. As expected, the most frequent patterns were strong physiological uptake and weak pathological uptake. In the case shown in Fig. 3a, the physiological accumulation in the oral region was relatively high, which might have caused the erroneous prediction. In contrast, another case (Fig. 3b) showed many small lesions with low-to-moderate accumulation and was erroneously predicted as benign despite the true label being malignant. The equivocal category was more difficult for the neural network to predict; its accuracy was lower than that of the other categories. This may be due to the definition: though common in clinical settings, "equivocal" is a kind of catch-all or "garbage" category for all images not clearly belonging to "malignant" or "benign", so a greater variety of images was included in it. We speculate that such a wide range made it difficult for the neural network to extract consistent features.

We also conducted patient-based predictions in this study. In the patient-based prediction, the accuracy was higher than in the image-based prediction owing to an ensemble effect. This approach takes advantage of the MIP images generated from various angles. More specifically, we applied 2 different algorithms: the more sensitive Algorithm A and the more specific Algorithm B. The choice of algorithm may depend on the purpose of the FDG PET/CT examination.

In general, a CNN is said to classify images based on certain features of the images. Grad-CAM is a technique that visualizes "the region of the AI's interest". It could be useful for building explainable AI instead of a black box and thus for gaining the trust of users. The results of Experiment 3 suggested that, in many cases, the CNN responded to the area of malignant uptake when it existed. However, in the quantitative assessment, when the cut-off of 70% of maximum was used to segment the highlighted regions, the location of the actual tumor was covered in only 93% of cases. There were cases in which the AI's diagnosis was correct although Grad-CAM highlighted non-relevant areas of the images. More studies are needed to clarify whether Grad-CAM or other methods are useful for establishing explainable AI.

The computational complexity becomes enormous when a CNN directly learns from 3D images [21,22,23,24,25]. Although we employed MIP images in the current study, an alternative approach would be to provide each slice to the CNN. However, even in "malignant" or "equivocal" cases, the tumor is usually localized in a small area, and thus most slices contain no abnormal findings. Consequently, a positive-versus-negative imbalance problem would disturb efficient learning. In this context, MIP seems advantageous for a CNN, as most MIP images of malignant patients contain the accumulation somewhere in the image unless a stronger physiological accumulation (e.g., brain or bladder) hides the malignant uptake. In contrast, in 2D axial images or 3D images, tumor uptake is not hidden by physiological uptake. Therefore, we speculate that the prediction accuracy could be improved by using 2D axial images or 3D images with an appropriate neural network architecture.

In this study, we used only 2 scanners, and further studies are needed to reveal what happens when more scanners are included. For instance, what if the numbers of examinations from various scanners are imbalanced? What if a particular disease is imaged by some scanners but not by others? The AI system might not make correct evaluations in such cases. The AI system should be tested with "real-world data" before it is used in clinical settings.

Some approaches could further improve the accuracy. In this research, to reduce the learning cost, we used a network equivalent to ResNet-50 [19], which is a relatively simple member of the ResNet family. Deeper ResNet variants can be built. More recently, various networks based on ResNet have been developed and have demonstrated high performance [26, 27]. From the viewpoint of big-data science, it is also important to increase the number of images to further improve diagnostic accuracy.

There are many other AI algorithms that could be used for PET image classification and detection. In a recent study, Zhao et al. used a so-called 2.5D U-Net to detect lesions on 68Ga-PSMA-11 PET/CT images of prostate cancer [28]. They trained the CNN not on fully 3D images but on axial, coronal, and sagittal images in order to simulate the workflow of physicians and to save computational and memory resources. They reported that the network achieved 99% precision, 99% recall, and a 99% F1 score. Not only U-Net [29], an image segmentation method, but also region-based CNN (RCNN) and M2Det [30], object detection methods, may be useful for localizing lesions. In a study by Yan et al., MR image segmentation was performed using a deep learning-based technology named the propagation deep neural network (P-DNN); using P-DNN, the prostate was successfully extracted from MR images with a dice similarity coefficient of 84.13 ± 5.18% [31]. On the other hand, these methods share the problem that enormous time is required to create the training data.

The oversight rate (i.e., the rate of misclassifying malignant images as benign) was 0.6%. We consider this rate small but not yet satisfactory. Because the current system is intended to serve radiologists as a double-checking system, decreasing oversight is more important than decreasing the false-positive rate. We are planning experiments to decrease the oversight rate by adding CT data to the CNN.

This study has some limitations. First, the model can only deal with FDG PET MIP images covering the range from the head to the knees; correct prediction is much more difficult when spot images or whole-body images from the head to the toes are given. Future studies will use RCNN to address this problem. Second, less FDG-avid lesions such as pancreatic cancer cannot be classified with MIP images alone, and such cases may not be labeled correctly. Third, we applied patient-based labeling rather than image-based labeling. Thus, some MIP images at particular angles may be labeled as "malignant" even though the tumor is hidden by physiological uptake and not visualized. To improve the quality of the training data, each image of each patient should be labeled separately, although this would be time-consuming. Finally, the cases were classified by a nuclear medicine physician and were not based on a pathological diagnosis.

Conclusion

The CNN-based system successfully classified whole-body FDG PET images into 3 categories in both the whole-body and region-based analyses. These data suggest that MIP images are useful for classifying PET images and that the AI could help physicians as a double-checking system to prevent oversight and misdiagnosis. Before AI is used in clinical settings, more advanced CNN architectures and prospective studies are needed to improve and validate the results.