Over the past years, automated quantitative analysis of the coronary artery system has been developed and successfully applied in clinical practice, in particular for X-ray coronary angiography [1]. More recently, computer-aided analysis of the coronary arteries has been developed for ultrasound [2, 3], magnetic resonance imaging (MRI) [4], and lately also for computed tomography (CT) techniques [5, 6]. At present, the major bottleneck of multi-slice computed tomography (MSCT) imaging of the coronary arteries is the potential lack of image quality due to limitations in spatial and temporal resolution, irregular or high heart rates, respiratory effects, and variations in the distribution of the contrast agent. The number of rejected vessel segments in diagnostic studies is currently still too high for implementation in routine clinical practice. Until now, stenoses of the coronary arteries have been evaluated visually with CT angiography [7–33]. The results are therefore highly dependent on subjective factors inherent to the examiner. New software tools for semi-quantitative analysis (CT-QCA, quantitative coronary assessment) may improve diagnostic accuracy and reproducibility [6, 34]. However, high image quality is also required for automated quantitative analysis of the coronary arteries. Given the trend in the technological development of MSCT scanners, there is no doubt that quantitative analysis of MSCT coronary angiography will benefit from these advances.

Fischbach et al. [35] compared quantitative and qualitative information on global and regional left ventricular (LV) function obtained with MSCT with that obtained with MRI in 30 patients with a variety of cardiac diseases and a high prevalence of LV wall motion abnormalities. Global LV function parameters from the MSCT studies were measured using a commercially available software package for cardiac function analysis (CT MASS 6.1, Medis, Leiden, The Netherlands) supporting automatic endo- and epicardial contour detection. Global LV function parameters and wall thickness measurements from the MRI studies were determined using the MRI-compatible version of the analysis software (MR MASS suite 6.1, Medis) on an offline workstation, employing criteria identical to those used for the CT evaluation. Normokinetic segments were reliably identified with MSCT, but the sensitivity for detection and accurate classification of LV wall motion abnormalities needs to be improved. Better temporal resolution of the CT system seems to be the most important factor for enhancing MSCT performance.
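As a point of reference (exact definitions vary between software packages), quantitative coronary analysis conventionally expresses lesion severity as the percent diameter stenosis, derived from the minimal lumen diameter (MLD) and an interpolated reference diameter (RD):

%DS = (1 − MLD / RD) × 100 %,

with the percent area stenosis following analogously from the cross-sectional lumen areas. CT-QCA tools aim to reproduce these X-ray QCA conventions on the reconstructed CT lumen.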

One of the recent technological advances in CT imaging is dual source computed tomography (DSCT), which has been on the market since 2006. DSCT improves image quality because of its reduced heart rate dependency and offers a spatial resolution of 0.4 mm and a temporal resolution of 83 ms [34]. Busch et al. [36] compared 64-slice MSCT and DSCT with cardiac catheterization and showed a good correlation in stenosis grading between the software-assisted evaluation and the results of coronary catheter angiography. The promising results of DSCT are due to its superior temporal resolution compared with 64-slice MSCT. Van der Vleuten et al. [37] compared LV function assessed by DSCT with MRI as the reference standard in 34 patients. Global LV functional parameters calculated from DSCT datasets acquired in daily clinical practice correlated well with MRI and may be considered interchangeable. However, the authors found that visual assessment of the image quality of the short-axis cine slices should be performed to detect any artifacts in the DSCT data that could influence accuracy. Piers et al. [38] evaluated non-invasive angiography using DSCT for determining the most appropriate therapeutic strategy in 60 patients with suspected coronary artery disease (CAD). Although image quality improved considerably, DSCT cannot be used for definitive therapeutic decision-making with regard to revascularization procedures in patients with suspected CAD.
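The 83 ms figure follows directly from the dual source geometry: with two source-detector systems mounted roughly 90° apart, only a quarter gantry rotation is needed for image reconstruction, so that, assuming the 330 ms rotation time of the first-generation dual source scanners,

temporal resolution ≈ rotation time / 4 = 330 ms / 4 ≈ 83 ms,

roughly half of the approximately 165 ms (half-rotation reconstruction) achievable with a comparable single-source 64-slice system.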

Burgstahler et al. [39] showed that the improved spatial and temporal resolution of DSCT was associated with better opacification of the coronary arteries and better contrast with the myocardium, independent of heart rate. In contrast to MSCT, opacification of the coronary arteries at DSCT was not affected by body mass index (BMI). The main advantage of DSCT lies in its heart rate independence, which may have a positive impact on diagnostic accuracy. Groen et al. [40] compared MRI, 64-slice MSCT, and DSCT in assessing global LV function parameters using a moving heart phantom. A good correlation was found between DSCT and MRI for LV ejection fraction and cardiac output. MRI systematically underestimated the functional cardiac parameters, LV ejection fraction and cardiac output, of the moving heart phantom. 64-slice MSCT underestimated or overestimated these functional parameters depending on the heart rate because of its limited spatial resolution. DSCT showed minimal deviations in these functional parameters compared with MRI, electron beam tomography, and 64-slice MSCT.
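For reference, the global LV function parameters compared in these studies follow directly from the end-diastolic volume (EDV), end-systolic volume (ESV), and heart rate (HR), irrespective of the imaging modality used to measure the volumes:

SV = EDV − ESV, EF = SV / EDV × 100 %, CO = SV × HR,

where SV is the stroke volume, EF the ejection fraction, and CO the cardiac output; modality-dependent errors therefore propagate directly from the volume measurements into the derived parameters.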

Alkadhi et al. [41] determined the radiation doses and image quality of different DSCT protocols tailored to heart rate and BMI in 200 patients. DSCT was associated with radiation doses ranging between 1.3 and 9.0 mSv, depending on the protocol used. Tailoring the DSCT protocol to the heart rate and BMI of the individual patient resulted in dose reductions of up to 86% while maintaining diagnostic image quality. This finding was underscored by Weustink et al. [42], who showed that with optimal ECG pulsing, radiation exposure to patients, particularly those with low or high heart rates, can be reduced while image quality is preserved. Juergens et al. [43] evaluated software for threshold-based 3D segmentation of the left ventricle in comparison with traditional 2D short-axis-based planimetry (Simpson method) for measurement of LV volume and global LV function with state-of-the-art DSCT in 50 patients. Inter-observer variation with the 3D segmentation analysis was significantly smaller than with the 2D technique, and the mean analysis time was significantly shorter for the 3D analysis. It was concluded that automated threshold-based 3D segmentation enables accurate and reproducible DSCT assessment of LV volume and function, with excellent correlation with the results of 2D short-axis analysis. Exclusion of the papillary muscles from the LV volume resulted in small systematic differences in the quantitative values. Confirmation of these data by trials in larger patient cohorts is warranted.
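To illustrate the difference between the two volumetric approaches compared by Juergens et al., the following is a minimal sketch, not the implementation used in the commercial analysis software: the 2D Simpson approach sums traced endocardial areas over the short-axis stack, whereas the threshold-based 3D approach counts contrast-enhanced blood-pool voxels. The function names, the region-of-interest mask, and the 150 HU blood-pool threshold are illustrative assumptions.

```python
import numpy as np

def lv_volume_simpson(endocardial_areas_mm2, slice_thickness_mm):
    """2D short-axis planimetry (Simpson method, summation of discs):
    LV cavity volume = sum of traced endocardial areas x slice thickness."""
    return float(np.sum(endocardial_areas_mm2)) * slice_thickness_mm / 1000.0  # ml

def lv_volume_threshold_3d(hu_volume, voxel_volume_mm3, lv_roi, hu_threshold=150.0):
    """Threshold-based 3D segmentation (illustrative): count voxels inside a
    rough LV region of interest whose attenuation exceeds a blood-pool
    threshold, then multiply by the voxel volume."""
    blood_pool = (hu_volume >= hu_threshold) & lv_roi
    return float(blood_pool.sum()) * voxel_volume_mm3 / 1000.0  # ml

# Hypothetical example: 12 short-axis slices of 8 mm thickness
areas_mm2 = np.array([1450, 1520, 1580, 1600, 1590, 1550,
                      1480, 1380, 1250, 1080, 860, 600], dtype=float)
print(f"EDV (Simpson): {lv_volume_simpson(areas_mm2, 8.0):.1f} ml")
```

The practical difference reported by Juergens et al., smaller inter-observer variation and shorter analysis time for the 3D approach, stems from the fact that the threshold-based method replaces slice-by-slice manual contouring with a largely automatic voxel classification.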

In the current issue of the International Journal of Cardiovascular Imaging, Reimann et al. [44] analyzed the diagnostic efficacy of computer-aided analysis of relevant coronary artery stenoses using DSCT. Based on a 13-segment model, 30 CT scans were analyzed for significant stenoses using conventional 3D charts as well as a specialized cardiac analysis tool (CAT). Diagnostic accuracy and time to diagnosis were recorded for each vessel separately, as was the confidence of the three readers. In the presence of severe coronary artery calcifications, 53 false interpretations were found among the total of 390 coronary segments analyzed. Similar negative and positive predictive values were found for the 3D-chart and CAT analyses. Analysis of the 3D charts took a mean of 5.2 min (range 3–10 min) versus a mean of 8.2 min (range 4–12 min) for the CAT analysis. No significant differences in reading time or confidence level were found between readers or between the two analyses. It was concluded that a specialized CAT of the coronary tree shows accuracy comparable to that of manual 3D analysis but needs improvement with respect to coronary tree segmentation times.
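For readers less familiar with the per-segment performance measures reported here, the sketch below shows how sensitivity, specificity, and the positive and negative predictive values follow from a segment-by-segment comparison against the reference standard; the counts used are placeholders, not the actual figures of the Reimann et al. study.

```python
def per_segment_performance(tp, fp, fn, tn):
    """Diagnostic performance of a reader or analysis tool against the
    reference standard, evaluated per coronary segment."""
    return {
        "sensitivity": tp / (tp + fn),   # detected stenoses / all true stenoses
        "specificity": tn / (tn + fp),   # correctly cleared / all normal segments
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Placeholder counts for 390 analyzed segments (not the study's actual data)
print(per_segment_performance(tp=45, fp=28, fn=25, tn=292))
```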

In summary, the study of Reimann et al. [44] clearly shows that automated and computer-aided quantitative approaches are very promising in CT imaging, certainly in combination with state-of-the-art CT scanners, but they are still time-consuming and currently not suitable for establishing a first-line diagnosis in patients with suspected CAD. In the near future, computer analysis times will likely shorten to such an extent that automated approaches will provide excellent alternatives to, or may even be preferred over, purely visual approaches. At the moment, dual source systems still require at least a 'dual' analysis.