Abstract
In recent years, convolutional neural networks have transformed the field of medical image analysis due to their capacity to learn discriminative image features for a variety of classification and regression tasks. However, successfully learning these features requires a large amount of manually annotated data, which is expensive to acquire and limited by the available resources of expert image analysts. Unsupervised, weakly-supervised and self-supervised feature learning techniques have therefore received considerable attention, as they aim to exploit the vast amount of available unannotated data while avoiding, or substantially reducing, the effort of manual annotation. In this paper, we propose a novel way of training a cardiac MR image segmentation network in which features are learnt in a self-supervised manner by predicting anatomical positions. The anatomical positions serve as a supervisory signal and require no extra manual annotation. We demonstrate that this seemingly simple task provides a strong signal for feature learning, and that with self-supervised pretraining we achieve a segmentation accuracy that is better than or comparable to a U-net trained from scratch, especially in the small-data setting. When only five annotated subjects are available, the proposed method improves the mean Dice metric from 0.811 to 0.852 for short-axis image segmentation, compared with the baseline U-net.
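To make the two-stage idea concrete, the sketch below illustrates one possible realisation: an encoder is first trained on a position-prediction pretext task whose labels can be derived automatically, and its weights are then reused in a small U-Net-style segmentation network fine-tuned on the few annotated subjects. This is a minimal PyTorch illustration, not the authors' implementation; the coarse grid-cell position target, the network sizes and the toy training steps are assumptions made purely for demonstration.

```python
# Minimal sketch (PyTorch), NOT the authors' code: pretrain an encoder on a
# position-prediction pretext task, then reuse it for segmentation.
# Assumption: the "anatomical position" target is modelled here as a coarse
# 3x3 grid-cell index; the paper derives positions from the cardiac images
# themselves without manual annotation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    """Shared feature extractor reused by both the pretext and segmentation models."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(self.pool(f1))
        return f1, f2

class PositionPredictor(nn.Module):
    """Pretext head: classify which of 9 grid cells an image crop came from."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(32, 9)
    def forward(self, x):
        _, f2 = self.encoder(x)
        return self.head(f2.mean(dim=(2, 3)))  # global average pooling

class SegmentationNet(nn.Module):
    """U-Net-style segmentation model that reuses the (pretrained) encoder."""
    def __init__(self, encoder, n_classes=4):
        super().__init__()
        self.encoder = encoder
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(16 + 32, 16)
        self.out = nn.Conv2d(16, n_classes, 1)
    def forward(self, x):
        f1, f2 = self.encoder(x)
        d = self.dec(torch.cat([f1, self.up(f2)], dim=1))
        return self.out(d)

# Stage 1: self-supervised pretraining on unlabelled images (dummy tensors here).
encoder = Encoder()
pretext = PositionPredictor(encoder)
opt = torch.optim.Adam(pretext.parameters(), lr=1e-3)
images = torch.randn(8, 1, 96, 96)        # stand-in for unlabelled MR crops
positions = torch.randint(0, 9, (8,))     # stand-in for auto-derived position labels
loss = nn.functional.cross_entropy(pretext(images), positions)
opt.zero_grad()
loss.backward()
opt.step()

# Stage 2: fine-tune for segmentation on the few annotated subjects.
seg = SegmentationNet(encoder)            # pretrained encoder weights carry over
seg_opt = torch.optim.Adam(seg.parameters(), lr=1e-4)
labelled = torch.randn(2, 1, 96, 96)
masks = torch.randint(0, 4, (2, 96, 96))
seg_loss = nn.functional.cross_entropy(seg(labelled), masks)
seg_opt.zero_grad()
seg_loss.backward()
seg_opt.step()
```

The intended benefit of such pretraining, as reflected in the results above, is that the encoder has already learnt anatomy-aware features from unlabelled images, so the segmentation stage needs fewer annotated examples to reach a given accuracy.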
Acknowledgements
This research has been conducted using the UK Biobank Resource under Application Numbers 2964 and 18545 and supported by the SmartHeart EPSRC Programme Grant (EP/P001009/1). We would like to thank NVIDIA Corporation for donating a Titan Xp for this research.
Cite this paper
Bai, W., et al. (2019). Self-Supervised Learning for Cardiac MR Image Segmentation by Anatomical Position Prediction. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. Lecture Notes in Computer Science, vol. 11765. Springer, Cham. https://doi.org/10.1007/978-3-030-32245-8_60