Introduction

Turning raw CT projection data into volumetric maps (i.e., images) of patient attenuation has changed substantially since the beginnings of CT in the 1970s [1, 2]. One way to understand the differences between CT reconstruction methods is by the assumptions they make. The first two methods used for CT image reconstruction were the algebraic reconstruction technique (ART) and filtered back-projection (FBP). Neither method makes assumptions about the absolute attenuation values of a patient or about the patient's frequency content. In layman's terms, these methods are incapable of recognizing a relatively "flat" portion of patient anatomy (e.g., a urine-filled bladder) and preferentially reducing noise in that "flat" region. Similarly, ART and FBP cannot recognize regions that should not undergo noise reduction, and may thereby degrade spatial resolution over patient anatomy with high frequency content (e.g., the bony anatomy of the inner ear). While ART was used on the first commercial CT scanner, it was quickly replaced by FBP as reconstruction computer hardware and software improved. Today, FBP remains a standard option on all CT scanners.

In response to increasing fear of ionizing radiation driven by scientifically debatable applications of the BEIR VII report to CT radiation [3, 4] and by tissue effects from inappropriately performed brain perfusion exams [5, 6], CT manufacturers developed commercial iterative reconstruction (IR) methods in 2009. IR methods assuaged fears over radiation because they offset the noise increases usually encountered when lowering radiation dose. Today, all major manufacturers and many third-party companies have IR offerings. IR makes assumptions about the imaged object's signal level and content. In layman's terms, IR methods can identify the regions of an image that are likely smooth (e.g., a urine-filled bladder) and apply noise reduction to those specific regions. Unfortunately, noise reduction is commonly accompanied by spatial resolution degradation and changes to image noise texture. IR methods therefore also attempt to identify regions of high spatial frequency content (e.g., the bony anatomy of the inner ear) and minimize the application of noise reduction, and its attendant spatial resolution penalty, to those regions. This ability to selectively process different regions within an image makes IR algorithms powerful noise-reducing methods, but their behavior is highly dependent on CT ionizing radiation dose [7], object size [8], object contrast level [7], and background patient anatomical noise [9]. Furthermore, the noise reduction methods used by the majority of IR algorithms result in suboptimal image texture, often described as "plastic" or "blotchy" in the literature [8, 10,11,12,13]. See Fig. 1 for an example of such an objectionable image noise texture.

Fig. 1

Image reproduced with permission from Ref. [18]

Axial slices of the abdomen of the same patient with the (left) highest and (right) lowest level of iterative denoising applied. The inset images are zoomed-in views of the kidneys and the surrounding tissue. Note the "plastic" or "blotchy" texture of the image on the left relative to the image on the right.

More advanced forms of IR are often referred to as "model-based" reconstruction (MBR). Although MBR performance has been shown to be superior to that of IR for some image quality facets [14], MBR methods also suffer from nonlinearities in their performance and from noise texture issues [8, 15,16,17].

The latest advancement in the field of CT reconstruction is the focus of this article. Deep learning (DL) approaches also make assumptions about the patient, although their assumptions have nothing to do with how or where to apply denoising or edge preservation. DL methods assume the patient is similar in size, attenuation, and frequency content to the data used to train the DL model. If this assumption is valid, the DL method should produce an image similar to the data used to train the model (Table 1).

Table 1 List of CT reconstruction and/or denoising methods categorized by type and assumptions made

Issues with Filtered Back-Projection

Filtered back-projection (FBP) makes no assumptions about the imaged object and fails to account for several realities of CT data acquisition [20]. FBP does not account for noise in the projection data, the polychromatic nature of the imaging spectrum, the finite size of the x-ray source, or the size and shape of the detector elements [20,21,22]. A major limitation of FBP is that it also fails to account for the variable, Poisson-distributed photon count statistics across the imaging plane; in light of these simplifications, FBP is quite susceptible to noise [22]. With FBP, noise scales as the inverse square root of dose or slice thickness [2, 23]. This fundamental property means large dose increases are needed to lower noise when employing FBP; for example, lowering image noise by a factor of 2 requires 4 times more radiation dose. This explains why dose levels vary by hundreds of percent over the same body region for the same patient size across different clinical indications. For example, a CT protocol for interpreting the bony detail of the lumbar spine (i.e., an indication that requires relatively low noise) will require several times the dose of a virtual colonoscopy exam (i.e., an indication that can tolerate a high amount of noise) [18]. Clinically speaking, this means that FBP does not allow for potentially meaningful dose reductions unless image noise significantly increases or spatial resolution is degraded [24]. Similarly, photon starvation encountered when imaging a morbidly obese patient cannot be overcome without markedly escalating radiation dose [25].
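
The scaling described above can be stated explicitly. In the standard first-order relation below (not specific to any scanner), σ is the image noise standard deviation, D the radiation dose, and T the slice thickness:

```latex
\sigma \propto \frac{1}{\sqrt{D\,T}}
\qquad\Longrightarrow\qquad
\frac{D_2}{D_1} = \left(\frac{\sigma_1}{\sigma_2}\right)^{2}
```

Setting σ₂ = σ₁/2 gives D₂ = 4D₁, the factor-of-4 dose increase cited above.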

Another undesirable byproduct of FBP is loss of low-contrast lesion detail [26]. Image noise degrades image quality, an effect that is accentuated at lower radiation doses [26]. Noise can interfere with the detection of a low-contrast lesion (e.g., hypoenhancing liver metastases) [27, 28]. Unless a certain CT radiation dose threshold is met, low-contrast lesions can be masked by image noise when employing FBP [26].

Issues with Iterative and Model-Based Reconstruction

Iterative reconstruction (IR) and model-based reconstruction (MBR) algorithms reduce image noise by applying nonlinear mathematical functions to the variable signal intensities within a defined region of interest [19]. The assumptions described in the previous section are implemented mathematically through a regularization term. Regularization terms in IR or MBR frameworks are needed to reduce noise and stabilize the solution during iterative noise reduction or image reconstruction. This stabilization essentially limits the space of possible solutions to those that match certain assumptions about the object being reconstructed [29]. As discussed in the introduction, these methods accomplish noise reduction by assuming that "flat" regions of the patient can be smoothed, thereby reducing noise; this assumption introduces nonlinearities into the reconstruction process. Estimating the degree of "flatness" involves estimating local image value changes (i.e., gradients in CT number), which inherently depend on contrast level and noise magnitude, making IR and MBR performance dependent on contrast level and noise/dose level [7,8,9, 15,16,17]. Intuitively, as dose decreases, noise increases, and increased noise makes it more difficult for IR and MBR methods to distinguish uniform from non-uniform regions; hence, their performance worsens at lower dose levels. Another byproduct of the noise reduction employed by IR and MBR methods is that their noise textures tend to peak at lower spatial frequencies (i.e., a leftward shift in the noise power spectrum (NPS)), a phenomenon not observed with FBP. The leftward NPS shift visually translates to an artificially "cartoony" or "plastic" noise texture, a displeasing characteristic to some radiologists, as shown in Fig. 1.
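
As a generic, vendor-agnostic sketch of how the regularization term enters, many MBR methods can be written as minimizing a penalized weighted least-squares cost, where A is the forward projection operator, y the measured sinogram, W a statistical weighting matrix, β the regularization strength, and ψ an edge-preserving potential (e.g., a Huber function) applied to differences between neighboring voxels:

```latex
\hat{x} \;=\; \arg\min_{x}\;
\underbrace{\lVert y - Ax \rVert_{W}^{2}}_{\text{data fidelity}}
\;+\;
\beta \underbrace{\sum_{j}\sum_{k \in \mathcal{N}_j} \psi\!\left(x_j - x_k\right)}_{\text{regularization } R(x)}
```

Small neighbor differences (likely noise) are penalized strongly, while large differences (likely edges) are penalized weakly; this is the mathematical source of the contrast- and noise-dependent behavior described above.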

The nonlinear behavior of IR and MBR with respect to noise texture is a primary reason why these reconstruction methods exhibit contrast-dependent spatial resolution [26]. Contrast-dependent spatial resolution means that a lesion's spatial resolution is a function of its contrast with neighboring background tissue [30]. For many low-attenuation diagnostic tasks scanned at clinical doses (e.g., low-attenuation hypovascular metastases), contrast-dependent spatial resolution is not deleterious to a radiologist's diagnostic accuracy [31]. However, overaggressive dose reductions have been shown to be detrimental to the diagnostic accuracy of low-contrast lesion detection tasks [32,33,34]. This effect has been studied in numerous phantom-based and human observer models [32, 33, 35]. The most recent literature shows that IR permits only modest radiation dose reductions (e.g., ~25%) if low-contrast lesion detection accuracy is to be preserved.

In addition to noise texture changes, spatial resolution may also change relative to FBP when IR and MBR methods are applied. Several studies have demonstrated the dose/noise and contrast-level dependencies of IR and MBR algorithms. For the majority of IR and MBR methods to date, performance is better at higher contrast levels and higher doses; unfortunately, this is exactly the opposite of the clinically desired behavior. This was demonstrated by Baker et al., who used a liver phantom containing lesions of decreasing size and contrast, imaged at decreasing radiation dose, and reconstructed with a variety of IR strengths and techniques [31]. That study showed that at low radiation doses, small low-contrast objects can be invisible regardless of reconstruction technique. Multiple other phantom studies demonstrated similar findings, which were confirmed in prospective human studies [31, 32, 36, 37]. For example, Pooler et al. showed that very aggressive dose reduction (in the 70% range) led to decreased diagnostic accuracy and confidence in identifying and characterizing metastatic liver lesions regardless of the image reconstruction algorithm used [33]. A review of this literature was conducted by Mileto et al., who concluded, "Radiologists need to be aware that use of IR can result in a decline of spatial resolution for low-contrast structures and degradation of low-contrast detectability when radiation dose reductions exceed approximately 25%" [35]. We can intuitively understand the degradation of spatial resolution at lower contrast levels: IR and MBR methods are better able to identify higher contrast edges than lower contrast edges. Because these methods seek to preserve edges by avoiding filtering across them (i.e., along the local gradient direction), the higher an edge's contrast, the more likely that edge is to be spared noise reduction (and its attendant blurring) and therefore to retain its detail.
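
To make the edge-gradient intuition concrete, the sketch below implements classic Perona-Malik anisotropic diffusion, a textbook edge-preserving denoiser that serves only as a stand-in for (not a reconstruction of) any vendor's proprietary regularizer. The kappa threshold, step size, and iteration count are illustrative choices, and the np.roll boundary handling wraps around for simplicity.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion: smooth flat regions, preserve strong edges.

    kappa is the gradient magnitude (here in HU) above which smoothing is
    suppressed, a crude analog of an IR edge-preservation threshold.
    """
    def g(d):
        # Conductance: ~1 for small (noise-like) gradients,
        # ~0 for large (edge-like) gradients.
        return np.exp(-(d / kappa) ** 2)

    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u  # difference toward "north" neighbor
        ds = np.roll(u, 1, axis=0) - u   # "south"
        de = np.roll(u, -1, axis=1) - u  # "east"
        dw = np.roll(u, 1, axis=1) - u   # "west"
        # Update: strongly diffuse where gradients are small, barely at edges.
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Edges whose contrast is small relative to kappa are treated like noise and smoothed, reproducing in miniature the contrast-dependent resolution loss reported above.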

Introduction to Deep Learning CT Image Reconstruction

The known shortcomings of iterative reconstruction discussed above have motivated alternative methods for noise reduction in CT. This section introduces artificial intelligence (AI)-based methods built with the goal of preserving the noise reduction capabilities of IR and MBR while mitigating their negative image texture and nonlinear spatial resolution properties.

While progress was being made in IR for CT, parallel efforts improved the power of AI and its application to an ever-growing set of practical problems. AI most commonly uses neural networks that crudely model the neurons of the brain and their synaptic connections [38]. Machine learning (ML) algorithms are a subset of AI in which the algorithm developer must specify a set of features to be included in the learning process. Deep learning (DL) offers a framework in which the algorithm learns the features itself during training. One widely used type of deep learning employs convolutional neural networks (CNNs).
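
As a deliberately minimal illustration of the CNN building blocks involved, the PyTorch sketch below defines a small residual denoising network. Commercial DLR networks are far deeper and may operate on projection data, so nothing here should be read as any vendor's architecture; the channel count and depth are arbitrary illustrative values.

```python
import torch
import torch.nn as nn

class TinyResidualDenoiser(nn.Module):
    """Minimal residual CNN: predicts the noise in an image and subtracts it."""

    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, noisy):
        # Residual learning: output = input minus the predicted noise map.
        return noisy - self.net(noisy)
```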

Deep learning reconstruction (DLR) has been implemented by multiple CT vendors and third-party software providers [39,40,41]. In each case, the DL neural networks have been trained to reduce noise. In theory, DLR can be used to solve a variety of image reconstruction problems, including cone-beam artifacts, motion artifacts, and truncation artifacts. However, currently available commercial solutions focus on reducing image noise.

As with any learning process, the network architecture (i.e., the connection of the neurons in the network) is important, but even more crucial is the training data used to build the network. Typically, ground truth training data are used to teach the network the properties of a CT image, and noise is then introduced into those data, for instance through simulation. In this manner, the network has paired examples of noisy and clean data, and its aim is to learn a method to remove the noise. It has also been shown that denoising can be achieved by training a network on multiple independent noise realizations of the same object, in so-called noise2noise training [42]. However, to our knowledge, this method has not yet been implemented clinically.
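
One common way to create such paired examples is to inject Poisson noise into high-dose line integrals to mimic a reduced-dose acquisition. The sketch below makes simplifying assumptions (a monoenergetic beam, and illustrative values for the incident photon count I0 and the dose fraction):

```python
import numpy as np

def simulate_low_dose_sinogram(sinogram, I0=1e5, dose_fraction=0.25, rng=None):
    """Inject Poisson noise into line integrals to mimic a reduced-dose scan.

    sinogram: array of line integrals (unitless attenuation path lengths).
    I0: incident photons per detector element at full dose (assumed value).
    """
    rng = rng or np.random.default_rng()
    counts = dose_fraction * I0 * np.exp(-sinogram)   # expected transmitted photons
    noisy_counts = rng.poisson(counts).clip(min=1)    # Poisson counting noise
    return -np.log(noisy_counts / (dose_fraction * I0))  # back to line integrals
```

The resulting (low-dose, full-dose) pair then serves as the (input, target) example during training.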

FBP, IR, and MBR reconstruction methods require a back-projection operation that maps from the detector space to the image space. The analogous term in DLR is back-propagation, which describes the method used to update the network coefficients during the learning process. The back-propagation step is shown in Fig. 2. Importantly, this step is where the image quality characteristics of the ground truth images are encoded into the network, allowing future sinogram data to pass through the network and inherit the characteristics of the ground truth images used to train it.
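
A minimal training loop makes the role of back-propagation concrete. Here TinyResidualDenoiser is the sketch network defined earlier, and the synthetic (noisy, clean) pairs are a hypothetical stand-in for real paired training data:

```python
import torch

model = TinyResidualDenoiser()                  # sketch network defined above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

# Stand-in for real training data: synthetic (noisy, clean) image pairs.
clean_batches = [torch.randn(4, 1, 64, 64) for _ in range(8)]
paired_loader = [(c + 0.1 * torch.randn_like(c), c) for c in clean_batches]

for noisy, clean in paired_loader:
    optimizer.zero_grad()
    denoised = model(noisy)
    loss = loss_fn(denoised, clean)   # distance from the ground truth image
    loss.backward()                   # back-propagation: gradient of the loss
                                      # with respect to every network weight
    optimizer.step()                  # nudge the weights toward the ground truth
```

It is in the loss.backward() / optimizer.step() pair that the characteristics of the ground truth images are encoded into the network weights.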

Fig. 2

An overview of how a deep neural network is trained to take noisy CT projection data and reconstruct high-quality images. In this example, scan data are generated along two pathways, one leading to a "low-quality noisy sinogram" and one leading to a "high-quality sinogram." Traditional (i.e., non-DL) image reconstruction methods are shown turning the high-quality sinogram data into a ground truth image. This ground truth image is then used during back-propagation to train the network to transform the low-quality sinogram data into a high-quality image

In practice, there are choices to be made in selecting the "clean" or "target" training data, and these choices affect the output of the network. If relatively high-dose FBP images are used as the training targets, the network output will have a noise texture that more closely resembles FBP. If IR images are instead used as the "clean" training data, the noise texture of the output images may more closely resemble that of IR [43, 44].

Each CT vendor or third-party denoising vendor performs validation, testing, and quality assurance to ensure their denoising solutions will perform well in practice. After the networks have been tested and are ready for clinical implementation, their architecture and weights are saved as a static software instance. The networks therefore provide reproducible and stable results when used in the field. Two different workflows for clinical implementation of DLR are shown in Fig. 3. The process of using a DL network to reconstruct noisy data is referred to as inference, because we are inferring what the clean data should be from the noisy data. In the future, it may be possible for networks to learn in real time or to make network parameter adjustments tailored to specific patients, but all currently FDA-cleared methods use networks with locked weights.
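
In deployment terms, "locked weights" simply means loading a fixed, validated parameter file and running the network in inference mode. The file name, model class, and placeholder input below are illustrative assumptions, not any vendor's implementation:

```python
import torch

# "trained_denoiser.pt" stands in for a vendor's validated weight file.
model = TinyResidualDenoiser()
model.load_state_dict(torch.load("trained_denoiser.pt"))
model.eval()                          # disable any training-only behavior

with torch.no_grad():                 # no gradients: the weights cannot change
    noisy_images = torch.randn(1, 1, 512, 512)   # placeholder noisy input
    denoised = model(noisy_images)    # a single forward pass; no iterations
```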

Fig. 3

Figure 2 depicts a generalized method for training a DL network to transform noisy projection data into an image; this figure depicts how such a network would be used clinically. In the top row, the network shown in Fig. 2 transforms low-quality noisy sinogram data into a high-quality image. The bottom row depicts a slightly different network design in which the input is a noisy image and the output is a higher quality image. The "lock" symbols denote that the networks are fixed; no changes to the weights are made during reconstruction once deployed in the clinical setting

The inference step takes advantage of modern graphics processing unit (GPU) hardware and is typically much faster than traditional IR algorithms because it does not require any iterations. The major computational hurdle of DLR, defining the hyperparameters and network node weights, is cleared during the training stage. This contrasts with the computational challenges of MBR algorithms, which are present for every reconstruction. As with IR, DLR typically offers multiple levels of denoising strength to suit user preferences [39,40,41].

Technical Review of Deep Learning Performance

DLR promises to be fast and to produce high-quality images (i.e., lower noise and higher spatial resolution) at lower doses [45]. As with previous generations of nonlinear reconstruction algorithms, the performance of DLR is not adequately quantified by classical image quality metrics such as noise, contrast-to-noise ratio, and signal-to-noise ratio. More advanced metrics such as the noise power spectrum (NPS), the task-based modulation transfer function, and model observer metrics are required [46]. Subjective image quality assessment has also been essential for characterizing the preferences of radiologists. However, this labor-intensive assessment is now routinely automated with model observer studies, and more recently, artificial intelligence models have also been proposed toward this goal [47].
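
For readers less familiar with the NPS, it can be estimated from repeated scans of a uniform phantom by ensemble-averaging the squared Fourier magnitudes of mean-subtracted regions of interest (ROIs). The sketch below follows the standard 2D definition, with pixel spacings dx and dy in mm, and makes no vendor-specific assumptions:

```python
import numpy as np

def nps_2d(rois, dx, dy):
    """Estimate the 2D noise power spectrum from uniform-phantom ROIs.

    rois: array of shape (n_rois, ny, nx) containing noise-only ROIs
          (e.g., each ROI minus the ensemble mean image).
    dx, dy: pixel spacing in mm. Returns the NPS in HU^2 * mm^2.
    """
    n, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove residual mean
    dft2 = np.abs(np.fft.fft2(rois)) ** 2                # |DFT|^2 per ROI
    return (dx * dy) / (nx * ny) * dft2.mean(axis=0)     # ensemble average, scaled
```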

The NPSs of DLR images have been shown to be similar to those of FBP [48,49,50,51]. Measurements of the mean NPS frequency of images reconstructed with GE Healthcare's TrueFidelity (i.e., deep learning image reconstruction, DLIR) show only a marginal shift across the range of diagnostic dose levels; only at relatively low dose levels was a meaningful shift measured for DL relative to FBP [48, 50]. The DLR solution from Canon Medical Systems, termed the Advanced intelligent Clear-IQ Engine (AiCE), showed NPS behavior similar to MBR [44], but recent changes to the AiCE method have made the texture more FBP-like [49]. There is evidence that the strength (or weight) of DLR processing impacts noise texture, with a shift to lower frequencies at high DLR strengths for both on-scanner algorithms currently available [52], although the work of Hasegawa et al. demonstrating this used phantom sizes and dose levels not representative of clinical reality. There is also evidence that this shift is alleviated in newer versions of DLR [46]. In clinical use, DLR images are described as having a more "natural" appearance [45, 46, 53]. In summary, DLR does not suffer from the "plastic" noise texture shown in Fig. 1, given an NPS that does not skew toward low frequencies.
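
The "mean frequency" metric referenced above is the NPS-weighted average of spatial frequency, computed from the radially averaged 1D NPS; a lower mean frequency corresponds to the coarser, blotchier texture of Fig. 1:

```latex
f_{\mathrm{mean}} \;=\; \frac{\displaystyle\int f\,\mathrm{NPS}(f)\,\mathrm{d}f}{\displaystyle\int \mathrm{NPS}(f)\,\mathrm{d}f}
```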

In assessments of absolute image noise, DLR achieves the same or higher levels of noise reduction compared to MBR, and the relative noise increase as dose is lowered is muted compared to FBP [48, 49]. Performance evaluation of one commercial product shows DL outperforming all other reconstruction methods at low doses, while being outperformed only by MBR at higher doses [44, 49]. The noise reduction capabilities of DLR enable the exploration of new spatial resolution limits, such as deploying detectors with smaller pixels and utilizing sharper kernels with larger image matrices, without suffering a noise penalty [44].

DLR similarly exhibits high-contrast spatial resolution comparable to MBR methods, yielding similar results at the 50% and 10% modulation transfer function (MTF) points [44]. However, as previously noted, spatial resolution can be influenced by both the contrast level of the target and the dose level. More appropriately, a task-based MTF must be considered to fully characterize the performance of a reconstruction algorithm [7]. While there are slight variations in the task-based MTFs across varying levels of tissue contrast with DLR, the variations measured were comparable to measurement-to-measurement variation, and in general, DLR was found to have similar or better MTF values relative to FBP [48]. This is a departure from IR/MBR methods, which have been shown across multiple vendor implementations to exhibit lower MTF values relative to FBP, especially for lower contrast tasks [7, 17]. The task-based MTF for DLR was also observed to be robust across a range of doses, with spatial resolution preserved as dose was decreased [48], though with a tendency to drop off at very low doses and for lower contrast tissues [49, 50]. This behavior is reported to be less of an issue in newer versions of DLR [46]. Vendor-neutral, image-domain denoising methods have also demonstrated superiority over IR and MBR methods with respect to noise texture. PixelShine from AlgoMedica has been characterized as producing less central frequency shift in the NPS than IR-based methods from multiple scanner vendors [51]. In summary, DLR methods characterized to date using phantom data do not appear to suffer from the same MTF degradation at low contrast levels as IR and MBR methods.

Moving a step further to the task-based detectability index, the performance of all algorithms is related to both tissue contrast and dose level, as expected, with MBR and DLR algorithms significantly outperforming FBP at all dose levels, and with DLR and MBR performance changing relative to each other and to FBP at very low doses [49, 50]. However, the relative differences in performance between DL and MBR as a function of dose and diagnostic task should be considered in the context of vendor-specific implementations, and not as an indication of the inherent performance of DLR versus MBR.
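
For reference, one common formulation of the detectability index is that of the non-prewhitening (NPW) model observer, in which W(u, v) is the task function (the Fourier transform of the signal to be detected) and TTF is the task-based transfer function; other observer models weight these terms differently:

```latex
{d'}^{2}_{\mathrm{NPW}} \;=\;
\frac{\left[\displaystyle\iint \lvert W(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,\mathrm{d}u\,\mathrm{d}v\right]^{2}}
     {\displaystyle\iint \lvert W(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,\mathrm{NPS}(u,v)\,\mathrm{d}u\,\mathrm{d}v}
```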

CT number accuracy is well preserved with DLR as a function of contrast and dose level. At routine dose levels, CT numbers of phantom inserts at 340 HU, 120 HU, and -35 HU are within 4 HU of the CT numbers measured with FBP at the same dose level. Even at 25% of the routine dose level, CT numbers measured on DLR images are still within 4 HU of those measured with FBP at routine dose levels [44]. These results are consistent with another study [48] that also evaluated CT number accuracy and found no statistical difference across reconstruction algorithms or reconstructed slice thicknesses. Additionally, multiple papers examining nonphantom (i.e., in vivo) CT number measurements have demonstrated no clinically significant change in CT number with DLR versus FBP and IR/MBR methods [54, 55, 56, 57, 58, 59, 60•, 61, 62, 63].

Clinical Review of Deep Learning Performance

Figures 4, 5, and 6 depict clinical cases reconstructed with three different commercial DLR solutions.

Fig. 4

From top left clockwise: ASiR-V 20%, DLIR-low, DLIR-medium, and DLIR-high 1.25 mm axial slices (displayed at window width = 380 HU, window level = 40 HU) reconstructed with a standard kernel. 70-year-old patient with a history of cirrhosis presenting with abdominal pain. Contrast-enhanced CT of the abdomen and pelvis shows marked cecal wall thickening, new compared to the prior exam. The finding was attributed to hepatic congestive colopathy, but a differential of colitis was provided. Image noise standard deviation (measured in a relatively uniform region of the bowel) was 19, 14, 12, and 8 HU for the ASiR-V 20%, DLIR-low, DLIR-medium, and DLIR-high images, respectively

Fig. 5

From top left clockwise: FBP (i.e., ASiR-V 0%), PixelShine "S" applied to the FBP (ASiR-V 0%) image, DLIR-low, and PixelShine "S" applied to the DLIR-low image. All images are 1.25 mm axial slices (displayed at window width = 355 HU, window level = 50 HU) reconstructed with a standard kernel. Image noise standard deviation (measured in a relatively uniform region of the stomach) was 24, 18, 16, and 12 HU, respectively, clockwise from the top left. Note how the noise magnitude is reduced in all images relative to the FBP image, and how none of the DLR-based reconstructions degrades the "FBP-like" noise texture

Fig. 6

Images of a similar anatomical location reconstructed using FBP (FC18 kernel), AIDR3D (FC18 kernel, Standard setting), and AiCE (body sharp, Standard setting), from left to right, with zoomed-in images shown along the bottom row. Notice that the noise texture of the AiCE image is not plastic in appearance, while the noise magnitude is markedly reduced compared to the FBP reconstruction. The AIDR3D image has a lower noise magnitude than FBP, but a slightly "patchy" noise texture is apparent. All images are 0.5 mm slice thickness displayed at window width = 380 HU, window level = 340 HU. Images courtesy of Canon Medical Systems USA

Improvements in contrast-to-noise ratio, noise texture, and spatial resolution often correlate with improved subjectively rated radiologist image quality [26]. This is a tried-and-true pattern that typically follows most image reconstruction algorithm launches. However, DL may provide unique opportunities compared to the IR and MBR generation, given that some phantom studies suggest DLR circumvents the limitations of contrast-dependent spatial resolution and the overly smoothed noise texture profiles associated with IR and MBR algorithms [48,49,50,51, 54, 55]. Still, it is imperative to show that figure-of-merit improvements augment clinical decision-making and diagnosis. Such findings could support further reductions in benchmark CT radiation doses while improving or maintaining diagnostic accuracy.

The majority of published clinical studies assess DLR algorithm performance with subjective image quality (IQ) scores, typically assigned by radiologists, and with contrast-to-noise measurements [54,55,56,57]. For instance, Bernard et al. assessed the performance of AiCE against IR (AIDR 3D) in a CT angiography acute stroke imaging protocol [55]. They concluded that a 40% reduction in radiation dose resulted in a 50% increase in IQ scores. Although such IQ study results are encouraging, their generalizability to diagnostic accuracy is limited. A CT imaging practice should be cautious about reducing exam doses when employing DLR based on IQ superiority alone.

Introducing a diagnostic task into the experimental design increases the external validity of using DLR in lieu of currently accepted IR methods. For example, Jensen and colleagues compared portal venous phase images reconstructed with IR (ASiR-V 30%) to those reconstructed with DL (TrueFidelity at low, medium, and high weights) in patients undergoing routine oncologic staging exams of the abdomen [54]. A total of 193 lesions were identified in these patients. Reader-assigned lesion diagnostic confidence, conspicuity, and artifact scores were all significantly improved at all DLR weights compared to IR. Again, such results are encouraging. Another interesting example is a study by Benz et al. in which coronary luminal narrowing was evaluated on coronary CT angiography [58]. Images were reconstructed with DLIR (TrueFidelity medium and high strength) and IR (ASiR-V 70%) using standard and high-definition (HD) kernels. They found no differences in sensitivity, specificity, or diagnostic accuracy between ASiR-V HD and TrueFidelity high weighting. It should be noted that coronary luminal narrowing is considered a high-contrast task, in which the reader measures the diameter of a contrast-filled coronary artery. However, neither of these examples tests diagnostic confidence at reduced radiation doses.

A more rigorous approach to DLR algorithm testing requires a specially tailored experimental design in which a reduced-dose CT exam is performed in tandem with a routine clinical-dose exam. The reduced-dose exam can be reconstructed with various DLR weights and then compared to full-dose images reconstructed with FBP and routine IR strengths. Instead of having radiologist observers grade image quality alone, the study design should introduce a predefined diagnostic task, such as low-contrast lesion detection, so that accuracy can be properly measured.

Assessment of low-contrast metastases is a challenging clinical task that becomes more difficult at reduced radiation doses. A recent study by Jensen and colleagues prospectively analyzed detection of low-contrast colorectal liver metastases, comparing benchmark radiation dose images to 65%-reduced-dose images reconstructed with FBP, IR (ASiR-V 60%), and TrueFidelity medium strength [60•]. They demonstrated that TrueFidelity improved subjective reader image quality and preserved detection of liver metastases measuring > 0.5 cm compared to benchmark exposures (lesion accuracy 67.1% versus 80.1% for TrueFidelity and FBP, respectively). All radiologist readers subjectively rated reduced-dose DLR images superior to standard-dose FBP images (odds ratio, 1.6; P = 0.02). In a similarly designed study, Singh et al. demonstrated equivalent liver lesion and pulmonary nodule detection using low-dose (83% reduced) AiCE compared to standard-dose IR (AIDR 3D), and superior detection compared to low-dose FBP and IR (AIDR 3D and FIRST) [62]. The study's low-dose protocol had a mean volume CT dose index of 2.1 ± 0.8 mGy, compared to 13 ± 4.4 mGy in the standard protocol, an 83% reduction.

In conclusion, these studies suggest that 65–83% dose reductions may be feasible for low-contrast liver lesion detection; however, only the study by Jensen et al. [60•] specifically evaluated low-contrast liver metastases. More clinical work on diagnostic accuracy is needed before radiologists begin lowering doses for routine exams.

Conclusion

In summary, CT image reconstruction using deep learning is an active area of research and productization. This article outlined the predecessors of DLR (i.e., FBP, IR, and MBR) and reviewed some of the currently available commercial DLR solutions; we expect more solutions to follow. At the time of this writing, Philips has issued a press release stating that AI-enabled image reconstruction is now available on its Incisive platform, and Siemens has released a product for MR called "Deep Resolve," but nothing yet for CT. Other facets of image quality, such as artifact mitigation and spatial resolution enhancement, were not discussed in this article (Table 2); they are being developed, however, as shown in Figs. 7 and 8. The future of CT image reconstruction seems bright considering the positive comparisons of DLR to FBP, IR, and MBR made here. The first photon-counting CT scanner, released in 2021 [64], will likely open new avenues for DLR and remains an exciting future prospect.

Table 2 A comparison of CT reconstruction methods: iterative reconstruction (IR), model-based reconstruction (MBR), and deep learning reconstruction (DLR)
Fig. 7

Example of deep learning used for extended field-of-view reconstruction. The top row depicts external patient contours; the bottom row, axial CT slices. Images are reconstructed out to 50 cm on the left and 80 cm on the right. The right column was reconstructed using a deep learning-based method from GE Healthcare called "MaxFOV 2". Note the mitigation of truncation artifacts (i.e., the bright edges) and the accurate representation of the patient outside the 50 cm field of view with the deep learning method. Images provided courtesy of GE Healthcare

Fig. 8

Example of deep learning-enabled super spatial resolution [65]. In this clinical example, "normal resolution" data acquired at a spatial resolution of 0.5 mm are fed into a neural network trained to transform "normal resolution" data into "super high resolution" 0.25 mm data. Images courtesy of Dr. Marcus Chen, NHLBI, National Institutes of Health, USA