Automatic biplane left ventricular ejection fraction estimation with mobile point-of-care ultrasound using multi-task learning and adversarial training

  • Mohammad H. Jafari
  • Hany Girgis
  • Nathan Van Woudenberg
  • Zhibin Liao
  • Robert Rohling
  • Ken Gin
  • Purang Abolmaesumi
  • Teresa Tsang
Original Article



Left ventricular ejection fraction (LVEF) is one of the key metrics for assessing heart function, and cardiac ultrasound (echo) is a standard imaging modality for LVEF measurement. There is emerging interest in exploiting point-of-care ultrasound (POCUS) because of its low cost and ease of access. In this work, we present a computationally efficient mobile application for accurate LVEF estimation.


Our proposed mobile application for LVEF estimation runs in real time on Android mobile devices connected, by wire or wirelessly, to a cardiac POCUS device. We propose a pipeline for biplane ejection fraction estimation using the apical two-chamber (AP2) and apical four-chamber (AP4) echo views. A computationally efficient multi-task deep fully convolutional network performs simultaneous LV segmentation and landmark detection in these views and is integrated into the LVEF estimation pipeline. An adversarial critic model is used during training to impose a shape prior on the LV segmentation output.
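The standard biplane calculation that such a pipeline feeds is the method of disks (modified Simpson's rule), which combines orthogonal LV diameters from the AP2 and AP4 views. The sketch below is illustrative only, not the authors' implementation; the 20-disk default follows common echocardiography convention, and diameters are assumed to be extracted from the per-view segmentations already.

```python
import numpy as np

def biplane_simpson_volume(diam_ap4, diam_ap2, length, n_disks=20):
    """LV volume via the biplane method of disks (modified Simpson's rule).

    diam_ap4, diam_ap2: n_disks orthogonal disk diameters (cm) measured
        at matching levels along the LV long axis in the AP4 and AP2 views.
    length: LV long-axis length (cm), conventionally the longer of the
        two views' lengths.
    Returns volume in mL (cm^3).
    """
    a = np.asarray(diam_ap4, dtype=float)
    b = np.asarray(diam_ap2, dtype=float)
    disk_height = length / n_disks
    # Each disk is modeled as an ellipse with semi-axes a_i/2 and b_i/2.
    return float(np.pi / 4.0 * np.sum(a * b) * disk_height)

def ejection_fraction(edv, esv):
    """LVEF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv
```

For a sanity check, uniform diameters reduce the sum to a cylinder: with all diameters 4 cm and length 8 cm, the volume is pi/4 * 4 * 4 * 8, about 100.5 mL.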


The system is evaluated on a dataset of 427 patients. Each patient has a pair of captured AP2 and AP4 echo studies, for a total of more than 40,000 echo frames. The mobile system achieves an average Dice score of 92% for LV segmentation, an average Euclidean distance error of 2.85 pixels for detection of the anatomical landmarks used in LVEF calculation, and a median absolute error of 6.2% for LVEF estimation compared with the expert cardiologist's annotations and measurements.
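The two headline metrics are standard and simple to state precisely; the following is a minimal sketch of how they are conventionally computed (not taken from the paper's code):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Both masks empty: treat as perfect agreement.
    return 2.0 * intersection / denom if denom else 1.0

def median_abs_error(pred_ef, true_ef):
    """Median absolute error between predicted and reference LVEF values."""
    pred_ef = np.asarray(pred_ef, dtype=float)
    true_ef = np.asarray(true_ef, dtype=float)
    return float(np.median(np.abs(pred_ef - true_ef)))
```

The median, rather than the mean, of the absolute LVEF errors is robust to the occasional frame where segmentation fails badly.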


The proposed system runs in real time on mobile devices. The experiments demonstrate the effectiveness of the proposed system for automatic LVEF estimation, showing adequate correlation with the cardiologist's measurements.


Keywords: Mobile application · Deep learning · Adversarial training · Cardiac ejection fraction · Image segmentation · Echocardiography



This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and in part by the Canadian Institutes of Health Research (CIHR).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.



Copyright information

© CARS 2019

Authors and Affiliations

  1. The University of British Columbia, Vancouver, Canada
  2. Vancouver General Hospital, Vancouver, Canada
