Automatic biplane left ventricular ejection fraction estimation with mobile point-of-care ultrasound using multi-task learning and adversarial training
Left ventricular ejection fraction (LVEF) is one of the key metrics for assessing heart function, and cardiac ultrasound (echo) is a standard imaging modality for LVEF measurement. Interest in point-of-care ultrasound (POCUS) is growing due to its low cost and ease of access. In this work, we present a computationally efficient mobile application for accurate LVEF estimation.
Our proposed mobile application for LVEF estimation runs in real time on Android devices with either a wired or wireless connection to a cardiac POCUS probe. We propose a pipeline for biplane ejection fraction estimation using the apical two-chamber (AP2) and apical four-chamber (AP4) echo views. A computationally efficient multi-task deep fully convolutional network performs simultaneous LV segmentation and landmark detection in these views and is integrated into the LVEF estimation pipeline. An adversarial critic model is used during training to impose a shape prior on the LV segmentation output.
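Biplane LVEF estimation from paired AP2/AP4 segmentations is conventionally done with the modified Simpson's (biplane method-of-disks) rule: the LV long axis is divided into equal-height disks, each modeled as an ellipse whose two diameters come from the two orthogonal views. The sketch below illustrates that computation; it is a minimal illustration of the standard formula, not the authors' implementation, and the function names and inputs (per-disk diameters already extracted from the segmentation masks) are assumptions.

```python
import numpy as np

def biplane_volume(diams_ap2, diams_ap4, length):
    """Modified Simpson's rule (biplane method of disks).

    The LV long axis of the given length is split into n equal disks,
    where n = len(diams_ap2) = len(diams_ap4). Each disk is an ellipse
    whose two diameters are taken from the AP2 and AP4 views.
    """
    n = len(diams_ap2)
    h = length / n  # height of each disk along the long axis
    # elliptical disk volume: (pi/4) * d_ap2 * d_ap4 * h
    return sum(np.pi / 4.0 * d2 * d4 * h
               for d2, d4 in zip(diams_ap2, diams_ap4))

def ejection_fraction(edv, esv):
    """LVEF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv
```

For example, with end-diastolic and end-systolic volumes of 100 mL and 40 mL, `ejection_fraction(100.0, 40.0)` yields 60%, within the normal range.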
The system is evaluated on a dataset of 427 patients. Each patient has a pair of captured AP2 and AP4 echo studies, resulting in a total of more than 40,000 echo frames. The mobile system reaches a high average Dice score of 92% for LV segmentation, an average Euclidean distance error of 2.85 pixels for detecting the anatomical landmarks used in LVEF calculation, and a median absolute error of 6.2% for LVEF estimation against the expert cardiologist's annotations and measurements.
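The two per-frame metrics reported above are standard and straightforward to compute. As a hedged sketch (not the authors' evaluation code; array shapes are assumptions), the Dice score measures overlap between the predicted and reference binary LV masks, and the landmark error is the mean Euclidean distance between predicted and annotated landmark coordinates in pixels:

```python
import numpy as np

def dice_score(pred, target):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def landmark_error(pred_pts, gt_pts):
    """Mean Euclidean distance (pixels) between predicted and
    ground-truth landmarks, each an array of shape (k, 2)."""
    return float(np.mean(np.linalg.norm(pred_pts - gt_pts, axis=1)))
```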
The proposed system runs in real time on mobile devices. The experiments show the effectiveness of the proposed system for automatic LVEF estimation by demonstrating an adequate correlation with the cardiologist’s examination.
Keywords: Mobile application · Deep learning · Adversarial training · Cardiac ejection fraction · Image segmentation · Echocardiography
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and in part by the Canadian Institutes of Health Research (CIHR).
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Informed consent was obtained from all individual participants included in the study.