Background

An estimated 17.7 million people died from cardiovascular disease (CVD) in 2015, representing 31% of all global deaths [1]. More people die annually from CVD than from any other cause. Technological advances in medical imaging have led to a number of options for non-invasive investigation of CVD, including echocardiography, computed tomography (CT) and cardiovascular magnetic resonance (CMR), each with its own advantages and disadvantages. Due to its good image quality, excellent soft tissue contrast and absence of ionising radiation, CMR has established itself as the non-invasive gold standard for assessing cardiac chamber volume and mass for a wide range of CVD [2–4]. To derive quantitative measures such as volume and mass, clinicians have relied on manually tracing the cardiac chamber contours. It typically takes a trained expert 20 minutes to analyse the images of a single subject at two time points of the cardiac cycle, end-diastole (ED) and end-systole (ES). This is time-consuming, tedious and prone to subjective errors.

Here we propose a computational method which can automatically analyse images at all time points across the cardiac cycle and derive clinical measures within seconds, with accuracy comparable to human expert performance. The method would assist clinicians in CMR image analysis and diagnosis by providing an automated, objective way to derive clinical measures, thereby reducing cost and improving work efficiency. It would also facilitate large-population imaging studies, such as the UK Biobank study, which aims to conduct imaging scans of vital organs for 100,000 subjects [5]. An automated method is crucial for analysing such a large number of images and extracting clinically relevant information for subsequent clinical studies.

Machine learning algorithms, especially deep neural networks, have demonstrated great potential, achieving or surpassing human performance in a number of visual tasks, including object recognition in natural images [6], playing the game of Go [7], skin cancer classification [8] and ocular image analysis [9]. Previously, neural networks have been explored for CMR image analysis [10–13]. Most of these studies either used relatively shallow network architectures or were limited by the size of the dataset, and none of them compared neural networks against human performance on this task. In 2016, Kaggle organised the second Data Science Bowl for left ventricular (LV) volume assessment [14]: images from 700 subjects were provided with their LV volumes, but none of the images were annotated. In 2017, MICCAI organised the ACDC challenge [15], where a training set of 100 subjects was provided with manual annotations. Lieman-Sifry et al. curated a dataset of 1,143 short-axis image scans [13], in which most of the images had LV endocardial and right ventricular (RV) endocardial contours annotated, but only 22% had LV epicardial contours annotated.

In this paper, we utilise a large dataset of 4,875 subjects with 93,500 images, one to two orders of magnitude larger than previous datasets, in which all the images have been pixelwise annotated by clinical experts. We trained fully convolutional networks for both short-axis and long-axis CMR image analysis. By combining the power of deep learning with a large annotated dataset for training and evaluation, we demonstrate that the proposed automated method can match human-level performance.

Methods

Dataset

The dataset consists of short-axis and long-axis cine CMR images of 5,008 subjects (61.2 ± 7.2 years, 52.5% female), acquired from the UK Biobank. The baseline characteristics of the UK Biobank cohort can be viewed in the data showcase at [16]. For short-axis images, the in-plane image resolution is 1.8 × 1.8 mm², with a slice thickness of 8.0 mm and a slice gap of 2 mm. A short-axis image stack typically consists of 10 image slices. For long-axis images, the in-plane image resolution is 1.8 × 1.8 mm² and only 1 image slice is acquired. Each cardiac cycle consists of 50 time frames. For both short-axis and long-axis views, the balanced steady-state free precession (bSSFP) magnitude images were used for analysis. Details of the image acquisition protocol can be found in [17].

Manual image annotation was undertaken by a team of eight observers under the guidance of three principal investigators and following a standard operating procedure [18]. For short-axis images, the LV endocardial and epicardial borders and the RV endocardial borders were manually traced at ED and ES time frames using the cvi42 software (version 5.1.1, Circle Cardiovascular Imaging Inc., Calgary, Alberta, Canada). For long-axis 2-chamber view (2Ch) images, the left atrium (LA) endocardial border was traced. For long-axis 4-chamber view (4Ch) images, the LA and the right atrium (RA) endocardial borders were traced.

In pre-processing, the CMR DICOM images were converted into NIfTI format. The manual annotations from the cvi42 software were exported as XML files and also converted into NIfTI format. The images and annotations were quality controlled to ensure that annotations covered both ED and ES frames, with no missing slices or missing anatomical structures. For short-axis images, 4,875 subjects (with 93,500 annotated image slices) were available after quality control, which were randomly split into three sets of 3,975/300/600 for training/validation/test, i.e. 3,975 subjects for training the neural network, 300 validation subjects for tuning model parameters, and finally 600 test subjects for evaluating performance. For long-axis 2Ch images, 4,723 subjects were available after quality control, which were split into 3,823/300/600. For long-axis 4Ch images, 4,682 subjects were available, which were split into 3,782/300/600.
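
As an illustration of this conversion step, the following is a minimal sketch using the pydicom and nibabel libraries; it is not the exact conversion pipeline used in the study, the file paths are hypothetical and the affine handling is deliberately simplified.

```python
import numpy as np
import pydicom
import nibabel as nib
from pathlib import Path

# Read one short-axis stack and order the slices by position along the
# slice axis (simplified: a real conversion also needs series and time
# frame sorting).
files = Path('subject/dicom').glob('*.dcm')   # hypothetical path
slices = sorted((pydicom.dcmread(f) for f in files),
                key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices], axis=-1).astype('float32')

# Save as NIfTI; an identity affine is used here for brevity, whereas a
# faithful conversion would build the affine from the DICOM orientation
# and spacing tags.
nib.save(nib.Nifti1Image(volume, affine=np.eye(4)), 'subject/sa.nii.gz')
```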

Automated image analysis

For automated CMR image analysis, we utilise a fully convolutional network (FCN) architecture, a type of neural network that can predict a pixelwise image segmentation by applying a number of convolutional filters to an input image [19]. The network architecture is illustrated in Fig. 1. The FCN learns image features from fine to coarse scales using convolutions and combines multi-scale features to predict the label class at each pixel.

Fig. 1

The network architecture. A fully convolutional network (FCN) is used, which takes the cardiovascular magnetic resonance (CMR) image as input, learns image features from fine to coarse scales through a series of convolutions, concatenates multi-scale features and finally predicts a pixelwise image segmentation

The network is adapted from the VGG-16 network [20] and consists of a number of convolutional layers for extracting image features. Each convolution uses a 3 × 3 kernel and is followed by batch normalisation and ReLU. After every two or three convolutions, the feature map is downsampled by a factor of 2 so as to learn features at a more global scale. The feature maps learnt at the different scales are upsampled back to the original resolution using transposed convolutions and then concatenated. Finally, three convolutional layers of kernel size 1 × 1, followed by a softmax function, are used to predict a probabilistic label map. The segmentation is determined at each pixel by the label class with the highest softmax probability. The mean cross entropy between the probabilistic label map and the manually annotated label map is used as the loss function. Excluding the transposed convolutional layers, the network has 16 convolutional layers in total. Details of the network architecture can be found in Table 1. This architecture is similar to the U-Net [21]. The main difference is that U-Net upsamples step by step: it iteratively upsamples the feature map at each scale by a factor of 2 and concatenates it with the feature map at the next scale. In contrast, the proposed network has a simpler upsampling path: it upsamples the feature map from each scale to the finest resolution in one go and then concatenates all of them.
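
To make the architecture concrete, here is a minimal sketch in tf.keras (the paper's implementation used TensorFlow). The filter counts, the use of max pooling for downsampling and the label convention are illustrative assumptions; the exact configuration is the one given in Table 1.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, n_convs):
    """n_convs repetitions of 3x3 convolution + batch norm + ReLU."""
    for _ in range(n_convs):
        x = layers.Conv2D(filters, 3, padding='same', use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x

def build_fcn(input_size=192, n_classes=4):
    """Multi-scale FCN: features at 5 scales, each upsampled to the
    finest resolution in one go, concatenated, then 1x1 convolutions."""
    inputs = layers.Input((input_size, input_size, 1))
    x, upsampled = inputs, []
    for i, (filters, n_convs) in enumerate(
            [(16, 2), (32, 2), (64, 3), (128, 3), (256, 3)]):
        if i > 0:
            x = layers.MaxPooling2D(2)(x)   # downsample by 2 between scales
        x = conv_block(x, filters, n_convs)
        # One transposed convolution brings this scale straight back to
        # the original resolution (upsampling factor 2**i).
        up = layers.Conv2DTranspose(64, 2 ** i, strides=2 ** i)(x) if i > 0 else x
        upsampled.append(up)
    x = layers.Concatenate()(upsampled)     # fuse multi-scale features
    x = layers.Conv2D(64, 1, activation='relu')(x)
    x = layers.Conv2D(64, 1, activation='relu')(x)
    # Pixelwise softmax over the label classes (assumed here: background,
    # LV cavity, LV myocardium, RV cavity).
    outputs = layers.Conv2D(n_classes, 1, activation='softmax')(x)
    return tf.keras.Model(inputs, outputs)

model = build_fcn()
# Mean cross entropy loss, optimised with Adam (learning rate 0.001).
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='sparse_categorical_crossentropy')
```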

Table 1 The network architecture

Network training and testing

Three networks were trained, for segmenting short-axis images, long-axis 2Ch images and long-axis 4Ch images respectively. For training each network, all images were cropped to the same size of 192 × 192 and intensity normalised to the range of [0, 1]. Data augmentation was performed on-the-fly, applying random translation, rotation, scaling and intensity variation to each mini-batch of images before feeding them to the network. Each mini-batch consisted of 20 image slices. The Adam method [22] was used for optimising the loss function, with a learning rate of 0.001 and 50,000 iterations. The method was implemented in Python using TensorFlow. It took about 10 hours to train the VGG-16 network on an Nvidia Tesla K80 GPU.
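
A minimal sketch of this on-the-fly augmentation, using NumPy/SciPy; the parameter ranges are illustrative assumptions, not the exact values used for training.

```python
import numpy as np
from scipy import ndimage

def augment(image, label, rng=np.random.default_rng()):
    """Randomly translate, rotate, scale and intensity-shift an
    image/label pair; parameter ranges are illustrative."""
    angle = np.deg2rad(rng.uniform(-15, 15))
    scale = rng.uniform(0.9, 1.1)
    shift = rng.uniform(-10, 10, size=2)           # pixels
    c, s = np.cos(angle), np.sin(angle)
    matrix = scale * np.array([[c, -s], [s, c]])
    # Keep the transform centred on the image centre. Note that
    # affine_transform maps output to input coordinates, which is
    # fine for random augmentation.
    centre = 0.5 * np.array(image.shape)
    offset = centre - matrix @ centre + shift
    # Bilinear interpolation for the image, nearest neighbour for the
    # label map so that label values stay integral.
    image = ndimage.affine_transform(image, matrix, offset, order=1)
    label = ndimage.affine_transform(label, matrix, offset, order=0)
    # Random intensity variation on the [0, 1]-normalised image.
    return np.clip(image * rng.uniform(0.9, 1.1), 0, 1), label
```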

During the testing stage, it took ∼2.2 seconds to analyse the ED and ES time frames of the short-axis images for one subject, and 9.5 seconds to analyse a full sequence of 50 time frames. For long-axis images, it took ∼0.2 seconds to analyse the ED and ES time frames for one subject and 1.4 seconds to analyse a full sequence. Analysing the short-axis images took longer because each short-axis image stack typically has 10 slices, whereas a long-axis image stack has only 1 slice.

Evaluation of the method

For quantitative assessment, we evaluated the performance of the automated method in two ways: first, using commonly used segmentation accuracy metrics, including the Dice metric, mean contour distance and Hausdorff distance; and second, using clinical measures derived from the segmentations, including ventricular volume and mass.

Figure 2 illustrates the definitions of the Dice metric and contour distance metrics. The Dice metric evaluates the overlap between automated segmentation A and manual segmentation B and it is defined as,

$$\text{Dice} = \frac{2\,|A \cap B|}{|A| + |B|}.$$
Fig. 2

Illustration of the Dice metric and contour distance metrics. A and B are two sets representing the automated and manual segmentations. The Dice metric calculates the ratio of the intersection |A ∩ B| over the average area of the two sets, (|A| + |B|)/2. The mean contour distance first calculates, for each point p on one contour, its distance d(p, ∂B) to the other contour, then takes the mean across all points p. The Hausdorff distance calculates the maximum distance between the two contours

It is a value between 0 and 1, with 0 denoting no overlap and 1 denoting perfect agreement. The higher the Dice metric, the better the agreement.

The mean contour distance and the Hausdorff distance evaluate, respectively, the mean and the maximum distance between the segmentation contours ∂A and ∂B. They are defined as,

$$\begin{aligned} \text{mean dist.} &= \frac{1}{2\,|\partial A|} \sum_{p \in \partial A} d(p, \partial B) + \frac{1}{2\,|\partial B|} \sum_{q \in \partial B} d(q, \partial A), \\ \text{Haus. dist.} &= \max\left(\max_{p \in \partial A} d(p, \partial B),\; \max_{q \in \partial B} d(q, \partial A)\right), \end{aligned}$$

where d(p, ∂B) denotes the minimal distance from point p to contour ∂B. The lower the distance metric, the better the agreement.
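
For concreteness, all three metrics can be computed from binary masks as in the following sketch (NumPy/SciPy); this is an illustration under stated assumptions, not the exact evaluation code used in the study.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice overlap between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def contour(mask):
    """Boundary pixels of a binary mask (the mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)

def contour_distances(a, b, spacing=1.8):
    """Mean contour distance and Hausdorff distance in mm, assuming
    isotropic in-plane pixel spacing (1.8 mm for this dataset)."""
    ca, cb = contour(a), contour(b)
    # Distance from every pixel to the nearest pixel of each contour.
    d_to_b = ndimage.distance_transform_edt(~cb) * spacing
    d_to_a = ndimage.distance_transform_edt(~ca) * spacing
    d_ab, d_ba = d_to_b[ca], d_to_a[cb]       # d(p, ∂B) and d(q, ∂A)
    mean_dist = 0.5 * (d_ab.mean() + d_ba.mean())
    haus_dist = max(d_ab.max(), d_ba.max())
    return mean_dist, haus_dist
```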

We also evaluated the accuracy of clinical measures derived from the image segmentations. We calculated the LV end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV myocardial mass (LVM), and RV end-diastolic volume (RVEDV) and end-systolic volume (RVESV) from the automated segmentations and compared them to measurements from the manual segmentations. The LV and RV volumes were calculated by summing the number of voxels belonging to the corresponding label class in the segmentation and multiplying by the volume per voxel. The LV mass was calculated by multiplying the LV myocardial volume by a density of 1.05 g/mL [23].
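
A short sketch of this computation in NumPy; the label convention and the dummy segmentation are assumptions for illustration (slice thickness plus slice gap gives 10 mm between slice centres):

```python
import numpy as np

def volume_ml(seg, label, voxel_mm3):
    """Volume of one label class: voxel count times volume per voxel, in mL."""
    return np.sum(seg == label) * voxel_mm3 / 1000.0

# Assumed label convention for illustration: 1 = LV cavity,
# 2 = LV myocardium, 3 = RV cavity.
voxel_mm3 = 1.8 * 1.8 * 10.0   # in-plane spacing x (slice thickness + slice gap)
seg_ed = np.random.default_rng(0).integers(0, 4, (192, 192, 10))  # dummy ED segmentation
lvedv = volume_ml(seg_ed, 1, voxel_mm3)        # LV end-diastolic volume
lvm = volume_ml(seg_ed, 2, voxel_mm3) * 1.05   # LV mass, density 1.05 g/mL [23]
```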

Evaluation of human performance

For quantitative evaluation of human performance, we assessed the inter-observer variability between manual segmentations by different clinical experts. A set of 50 subjects was randomly selected and each subject was analysed by three expert observers (O1, O2, O3) independently. The Dice metric, contour distance metrics and differences in clinical measures were evaluated between each pair of observers (O1 vs O2, O2 vs O3, O3 vs O1).

Qualitative assessment

As an additional qualitative assessment, two experienced image analysts (with over ten years' and four years' experience in cardiovascular image analysis, respectively) visually assessed the segmentations for 250 test subjects [see Additional file 1]. Following an in-house standard operating procedure for image analysis and drawing on their experience, the analysts visually compared each automated segmentation to the corresponding manual segmentation and assessed whether the two achieved good agreement (visually close to each other) or not. If the two disagreed, the analysts scored the case in one of three categories: automated segmentation performs better; manual segmentation performs better; not sure which one is better. The visual assessment was performed for basal, mid-ventricular and apical slices.

Exemplar clinical study

We demonstrated the application of the method in an exemplar clinical study. Using automatically derived clinical measures, we investigated the association between cardiac function and obesity, similar to a previous study [24]. We compared ventricular volume and mass between two groups of subjects: the normal weight group (18.5 ≤ body mass index (BMI) < 25) and the obese group (BMI ≥ 30). Pathological cases with CVD were excluded. The two groups were matched for sex, age, height, and diastolic and systolic blood pressure using nearest neighbour propensity score matching, implemented with the MatchIt package in R. After matching, each group consisted of 867 subjects. The clinical measures were then compared between the matched groups using two-sided t-tests.
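
The study used the MatchIt package in R; purely for illustration, a minimal Python sketch of nearest neighbour propensity score matching without replacement might look as follows, where the covariate matrices (rows of sex, age, height and blood pressures) are assumed inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X_treated, X_control):
    """Match each treated subject to the nearest unmatched control by
    propensity score (probability of belonging to the treated group)."""
    X = np.vstack([X_treated, X_control])
    y = np.r_[np.ones(len(X_treated)), np.zeros(len(X_control))]
    # Propensity scores from a logistic regression on the covariates.
    scores = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    s_treated, s_control = scores[y == 1], scores[y == 0]
    available, matches = list(range(len(s_control))), []
    for s in s_treated:
        j = min(available, key=lambda k: abs(s_control[k] - s))
        matches.append(j)
        available.remove(j)      # match without replacement
    return matches               # indices into X_control
```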

Results

Short-axis image analysis

Figure 3a illustrates the predicted segmentation of the LV and RV on short-axis images. It shows that automated segmentation agrees well with manual segmentation by a clinical expert at both ED and ES time frames. Additional movie files demonstrate automated segmentation across a cardiac cycle [see Additional files 2, 3 and 4].

Fig. 3

Illustration of the segmentation results for short-axis and long-axis images. The top row shows the automated segmentation, whereas the bottom row shows the manual segmentation. The automated method segments all the time frames. However, only end-diastolic (ED) and end-systolic (ES) frames are shown, as manual analysis only annotates ED and ES frames. The cardiac chambers are represented by different colours. a short-axis. b long-axis (2 chamber view). c long-axis (4 chamber view)

Table 2(a) reports the Dice metric, mean contour distance and Hausdorff distance between automated and manual segmentations, evaluated on a test set of 600 subjects, which the network has never seen before. The table shows a mean Dice value of 0.94 for the LV cavity, 0.88 for the LV myocardium and 0.90 for the RV cavity, demonstrating a good agreement between automated and manual segmentations. The mean contour distance is 1.04 mm for the LV cavity, 1.14 mm for the LV myocardium and 1.78 mm for the RV cavity, all of which are smaller than the in-plane pixel spacing of 1.8 mm. The Hausdorff distance ranges from 3.16 mm to 7.25 mm across the classes.

Table 2 The Dice metric, mean contour distance (MCD) and Hausdorff distance (HD) between automated segmentation and manual segmentation for short-axis images

Of the 600 test subjects, 39 have CVD. These pathological cases were identified using the following criteria: an International Classification of Diseases, 10th Revision (ICD-10) code of I21 (acute myocardial infarction), I22 (subsequent myocardial infarction), I23 (certain current complications following acute myocardial infarction), I25 (chronic ischaemic heart disease), I42 (cardiomyopathy) or I50 (heart failure); or a self-reported heart attack. Table 2(b) reports the Dice and distance metrics on these pathological cases. It shows segmentation performance consistent with that on the full test set for the Dice metric, and only slightly larger errors for the contour distance metrics.

For evaluating human performance, Table 3 compares the Dice and distance metrics between automated segmentation and manual segmentation, as well as between segmentations by different human observers. It demonstrates that the computer-human difference is close to or even smaller than the human-human difference for all the metrics.

Table 3 The Dice metric and contour distance metrics between automated segmentation and manual segmentation for short-axis images, as well as between segmentations by different human observers

As an additional qualitative assessment, two image analysts visually compared automated segmentation to manual segmentation for 250 test subjects. Table 4 shows that for mid-ventricular slices, automated segmentation agreed well with manual segmentation for 84.8% and 91.6% of the cases respectively, by visual inspection of the two analysts. For basal slices, where the ventricular contours are more complex and thus more difficult to segment, the percentage of agreement was lower: for example, Analyst 1 scored automated segmentation as agreeing well with manual segmentation for only 40.0% of the cases. Where the two disagreed, however, automated segmentation performed similarly to manual segmentation; Analyst 1 scored automated segmentation as better for 26.2% of the cases and manual segmentation as better for 20.6% of the cases.

Table 4 Qualitative visual assessment of automated segmentation

Next, we evaluate the accuracy of the clinical measures: LVEDV, LVESV, LVM, RVEDV and RVESV. Table 5 reports the mean absolute difference and relative difference between automated and manual measurements, and between measurements by different expert observers. It shows that for these clinical measures, the computer-human difference is on par with the human-human difference.

Table 5 The difference in clinical measures between automated segmentation and manual segmentation, as well as between measurements by different human observers

Figure 4 shows the Bland-Altman plots of the clinical measures. The Bland-Altman plot is commonly used for analysing the agreement and bias between two measurements. The first column of the figure compares automated measurements to manual measurements on the 600 test subjects. These subjects were annotated by a group of eight observers, with each subject annotated only once by one observer. The first column shows that the mean difference is centred close to zero, which suggests that the automated measurement is almost unbiased relative to the group of observers. There is also no evidence of bias across hearts of different sizes or volumes. By contrast, the bias between different pairs of human observers (second to fourth columns) is often larger, especially for RVEDV and RVESV, which indicates that individual observers may be biased. As the automated method is trained with annotations from multiple observers, it learns a consensus estimate across the group of observers and may thus be less susceptible to individual bias.
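
A Bland-Altman plot is straightforward to produce; the sketch below (matplotlib, with synthetic data for illustration) mirrors the layout described in the Fig. 4 caption.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(auto, manual, label):
    """Bland-Altman plot: average of paired measurements on the x-axis,
    their difference on the y-axis, with bias and 1.96 SD limits."""
    mean, diff = (auto + manual) / 2, auto - manual
    bias, sd = diff.mean(), diff.std()
    plt.scatter(mean, diff, s=8)
    plt.axhline(bias, ls='--', c='k')                  # mean difference (bias)
    for k in (-1.96, 1.96):
        plt.axhline(bias + k * sd, ls='--', c='grey')  # limits of agreement
    plt.xlabel(f'Mean {label}')
    plt.ylabel(f'Difference in {label}')
    plt.show()

# Demo with synthetic paired measurements (illustration only).
rng = np.random.default_rng(0)
manual = rng.normal(150, 30, 600)                      # e.g. LVEDV in mL
bland_altman(manual + rng.normal(0, 5, 600), manual, 'LVEDV (mL)')
```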

Fig. 4

Bland-Altman plots of clinical measures between automated measurement and manual measurement, as well as between measurements by different human observers. The first column shows the agreement between automated and manual measurements on a test set of 600 subjects. The second to fourth columns show the inter-observer variability evaluated on the randomly selected set of 50 subjects. In each Bland-Altman plot, the x-axis denotes the average of the two measurements and the y-axis denotes the difference between them. The dark dashed line denotes the mean difference (bias) and the two light dashed lines denote ± 1.96 standard deviations from the mean

Long-axis image analysis

We further demonstrate the performance of the method on long-axis CMR images, which are commonly used for assessing the cardiac chambers from a different angle. Figure 3b and c illustrate the segmentations of the LA and RA for the long-axis 2Ch and 4Ch images respectively. Additional movie files demonstrate automated segmentation across a cardiac cycle [see Additional files 5 and 6].

We evaluate the Dice metric and the contour distances on a test set of 600 subjects, as reported in Table 6. The mean Dice metric is 0.93 for the LA (2Ch), 0.95 for the LA (4Ch) and 0.96 for the RA (4Ch), whereas the mean contour distance is smaller than the in-plane pixel spacing of 1.8 mm, demonstrating good segmentation accuracy on long-axis images. Table 7 shows that for long-axis images, the computer-human difference is also on par with or smaller than the human-human difference.

Table 6 The Dice metric, mean contour distance (MCD) and Hausdorff distance (HD) between automated segmentation and manual segmentation for long-axis images
Table 7 The Dice metric and contour distance metrics between automated segmentation and manual segmentation for long-axis images, as well as between segmentations by different human observers

Exemplar clinical study

The proposed automated method enables clinical studies on large-scale datasets. Table 8 compares ventricular volume and mass, derived from automated segmentation, between the normal weight group and the obese group. The table shows that obesity is associated with statistically significantly increased ventricular volume and mass. This is consistent with a previous finding [24], obtained on a dataset of 54 subjects with manual segmentation. Here we confirm the finding with automated analysis on a much larger dataset of 1,734 subjects.

Table 8 An exemplar study of cardiac function on large-scale datasets using automatically derived clinical measures

Discussion

By training and evaluating on a large-scale annotated dataset, we demonstrate that the proposed method matches human expert performance in CMR image segmentation accuracy and clinical measurement accuracy. In terms of speed, it can analyse the short-axis and long-axis images of one subject in a few seconds. The method is fast and scalable, overcoming the limitations of the current clinical routine for CMR image analysis, which is manual, time-consuming and prone to subjective errors. It has great potential for improving work efficiency, assisting clinicians in diagnosis and enabling large-scale clinical research.

Residual networks

We also experimented with a deeper network by replacing the convolutional layers from scales 3 to 5 in Table 1 with residual blocks as described in [25], constructing a residual network with 33 convolutional layers. In our experiments, the residual network achieved similar performance to the VGG-16 network, so we report only the results from the VGG-16 network in this paper.

Other clinical measures

The LV and RV volumes are calculated directly from the image segmentations. Several other clinical measures of cardiac function are in turn derived from the LV and RV volumes, including the LV stroke volume (LVSV), LV ejection fraction (LVEF), LV cardiac output (LVCO), RV stroke volume (RVSV), RV ejection fraction (RVEF) and RV cardiac output (RVCO). Table 9 reports the differences between automated and manual measurements, and between measurements by different expert observers, for these measures. It shows that for these derived clinical measures, the computer-human difference is also comparable to the human-human difference.
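
For reference, each derived measure follows directly from the corresponding ventricular volumes, with HR denoting the heart rate:

$$\text{SV} = \text{EDV} - \text{ESV}, \qquad \text{EF} = \frac{\text{SV}}{\text{EDV}} \times 100\%, \qquad \text{CO} = \text{SV} \times \text{HR}.$$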

Table 9 The difference in derived clinical measures between automated segmentation and manual segmentation, as well as between measurements by different human observers

Limitations

A major limitation of our work is that the neural network was trained on a single, relatively homogeneous dataset, the UK Biobank dataset. The majority of the subjects are healthy and in middle and later life, and only a small proportion have self-reported CVD [26]. Although we have demonstrated that the method works well on a subset of pathological cases (Table 2(b)), in the clinical environment there can be a variety of pathological patterns that are not currently represented in the UK Biobank cohort.

In addition, the UK Biobank dataset was acquired using a standard imaging protocol and the same scanner model [17]. This guarantees that the derived image phenotypes are consistent across the UK Biobank study, without being biased by the imaging protocol or scanner model. However, it also means that the neural network we have trained is adapted to the image characteristics of the UK Biobank dataset and might not generalise well to datasets from other vendors or sequences. We explored how the network performs on two additional datasets: the MICCAI 2009 Left Ventricle Segmentation Challenge (LVSC 2009) dataset [27] and the MICCAI 2017 Automated Cardiac Diagnosis Challenge (ACDC 2017) dataset [28]. These two datasets were acquired using different scanners or protocols [15, 29] from the UK Biobank dataset, and most of the LVSC 2009 and ACDC 2017 data are pathological cases.

Figure 5 shows the segmentation results for four exemplar cases, two from the LVSC 2009 dataset and two from the ACDC 2017 dataset, respectively exhibiting heart failure, LV hypertrophy, dilated cardiomyopathy and an abnormal RV. The top row shows the results of directly applying the UK Biobank-trained network to the LVSC and ACDC data. Without any tuning, the network performs well for Cases 1 and 3 but fails for Cases 2 and 4, probably because the image patterns or intensity distributions in Cases 2 and 4 are not covered by the UK Biobank dataset.

Fig. 5

Segmentation results on other datasets. The first two cases come from the LVSC 2009 dataset, whereas the last two cases come from the ACDC 2017 dataset. The four cases are respectively of heart failure, LV hypertrophy, dilated cardiomyopathy and abnormal right ventricle. The top row shows the segmentation results by directly applying the UK Biobank-trained network to the LVSC and ACDC data. The bottom row shows the segmentation results after fine-tuning the network to the new data

We then fine-tuned the network by training it for another 10,000 iterations on the new datasets, which took about 2 hours. For LVSC 2009, we fine-tuned on the challenge training set (15 subjects) and evaluated performance on the challenge validation set (15 subjects). The LVSC 2009 training set only annotates the LV cavity and myocardium; as a result, during fine-tuning we only trained the network to segment the LV and ignored the RV. For ACDC 2017, we randomly split the challenge training set (100 subjects) into 80 subjects for fine-tuning and 20 subjects for evaluation. The bottom row of Fig. 5 shows the segmentation results on the LVSC and ACDC data after fine-tuning: performance is substantially improved for Cases 2 and 4 once the network has adjusted its parameters to the new data. Table 10 reports the Dice overlap metrics before and after fine-tuning. On both the LVSC and ACDC datasets, the Dice metrics are substantially improved after fine-tuning.
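
A minimal sketch of this fine-tuning step, reusing the build_fcn architecture sketched earlier; the weight file path and the data loading are assumptions for illustration, and the dummy arrays stand in for the real ACDC or LVSC training images.

```python
import numpy as np
import tensorflow as tf

# Dummy stand-in for the new dataset's mini-batches (illustration only).
images = np.random.rand(20, 192, 192, 1).astype('float32')
labels = np.random.randint(0, 4, (20, 192, 192)).astype('int32')
new_data = tf.data.Dataset.from_tensor_slices((images, labels)).batch(20).repeat()

model = build_fcn()                          # architecture sketched earlier
model.load_weights('ukbb_fcn_weights.h5')    # pretrained UK Biobank weights (assumed path)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='sparse_categorical_crossentropy')
# Roughly 10,000 further iterations; for LVSC 2009, RV pixels would
# additionally be excluded from the loss, as only the LV is annotated.
model.fit(new_data, steps_per_epoch=1000, epochs=10)
```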

Although the network works well after fine-tuning, this means that each time we obtain new data acquired with a different protocol or scanner model, we might need to label some of the new data to fine-tune the network parameters. It would be interesting to explore whether a large-scale heterogeneous dataset covering typical CMR imaging protocols and scanner types could be created for training and evaluation, or whether novel machine learning techniques that generalise better could be developed, which is an important research topic in its own right [30].

Future directions

Future research will explore developing more generalisable methods for analysing a wider range of CMR images, such as multi-site images acquired from different scanners and with different imaging protocols, and integrating automated segmentation results into diagnostic reports. The current method trains networks for short-axis and long-axis images separately. It would be interesting to combine the two views for image analysis, as they provide complementary information about the anatomy of the heart. Finally, we believe that a benchmark platform based on this annotated dataset is needed; it would benefit the whole community and greatly advance the development of CMR image analysis algorithms.

Conclusions

We have proposed an automated method using deep fully convolutional networks for short-axis and long-axis CMR image analysis and demonstrated human-level performance on the UK Biobank dataset. We anticipate this to be a starting point for automated CMR analysis, facilitated by machine learning.