
Echocardiographic Image Quality Assessment Using Deep Neural Networks

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12722)

Abstract

Echocardiographic image quality assessment is not a trivial issue in transthoracic examination. As in vivo examination of heart structures has gained prominence in cardiac diagnosis, it has become clear that accurate assessment of left ventricular function depends heavily on the quality of echo images. To date, visual assessment of echo images has been highly subjective and requires specific definitions under clinical pathologies. While poor-quality images impair quantification and diagnosis, the inherent variation in echocardiographic image quality standards indicates the difficulty faced by different observers and provides clear evidence of inconsistent assessment in clinical trials, especially among less experienced cardiologists. In this research, our aim was to analyse and define the specific quality attributes most often discussed by experts and to present a fully trained convolutional neural network model for assessing these quality features objectively. A total of 1,650 anonymized B-mode cines of varying frame lengths were stratified across equipment from the most widely used ultrasound vendors, and clinical quality scores for each echo cine were provided by cardiologists at Hammersmith Hospital, England; these data were used to train our multi-stream architecture. The regression model assesses quality features for depth gain, chamber clarity, interventricular (on-axis) orientation, and foreshortening of the left ventricle. Four independent scores are thus displayed on each frame and compared against the cardiologists' manually assigned scores to validate objective accuracy in terms of absolute error. Absolute errors were found to be ±0.02 and ±0.12 for the model and for inter-observer variability, respectively. We achieved a computation speed of 0.0095 ms per frame on a GeForce 970 GPU, making 2D/3D real-time deployment feasible. The outcome of this research establishes a modality for objective standardization of 2D echocardiographic image quality and provides a consistent, objective scoring mechanism for echo image reliability and diagnosis.
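To make the multi-stream regression idea concrete, the following is a minimal sketch of a per-frame quality-scoring network with four parallel streams, one per attribute (depth gain, chamber clarity, on-axis orientation, foreshortening). The abstract does not specify layer sizes, fusion strategy, or training details, so every architectural choice below is an assumption for illustration only, not the authors' actual model.

```python
# Hypothetical sketch of a multi-stream CNN regression model for per-frame
# echo quality scoring. Layer sizes, stream design, and the mapping of
# attributes to streams are ASSUMED; the paper's abstract does not give them.
import torch
import torch.nn as nn


class QualityStream(nn.Module):
    """One convolutional stream producing a single quality score in [0, 1]."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.regressor(self.features(x))


class MultiStreamQualityNet(nn.Module):
    """Four parallel streams, one per quality attribute:
    depth gain, chamber clarity, on-axis orientation, LV foreshortening."""

    def __init__(self):
        super().__init__()
        self.streams = nn.ModuleList([QualityStream() for _ in range(4)])

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale B-mode frame
        # returns (batch, 4): one independent score per attribute
        return torch.cat([stream(x) for stream in self.streams], dim=1)


# Example: score a single 224x224 frame (input size also assumed).
model = MultiStreamQualityNet()
scores = model(torch.rand(1, 1, 224, 224))
print(scores.shape)  # torch.Size([1, 4])
```

In such a setup, each predicted score can be compared directly with the corresponding expert-assigned score, and per-attribute absolute error can be reported alongside inter-observer variability, in the spirit of the evaluation described above.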

Keywords

Medical imaging · Echocardiography · Quality assessment

Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. School of Computing and Engineering, University of West London, London, UK
  2. National Heart and Lung Institute, Imperial College, London, UK
