User Guidance for Point-of-Care Echocardiography Using a Multi-task Deep Neural Network

  • Grzegorz Toporek (corresponding author)
  • Raghavendra Srinivasa Naidu
  • Hua Xie
  • Adriana Simicich
  • Tony Gades
  • Balasundar Raju
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

Echocardiography is a challenging sonographic examination that is highly user-dependent and requires significant training and experience. To improve the use of ultrasound in emergency management, especially by non-expert users, we propose a purely image-based machine-learning algorithm that does not rely on any external tracking devices. The algorithm guides the motion of the probe towards clinically relevant views, such as the apical four-chamber or parasternal long-axis view, using a multi-task deep convolutional neural network (CNN). The network was trained on 27 human subjects under a multi-task learning paradigm to: (a) detect and exclude ultrasound frames whose quality is insufficient for guidance, (b) identify one of three typical imaging windows (apical, parasternal, and subcostal) to guide the user through the exam workflow, and (c) predict the 6-DOF motion of the transducer towards a target view, i.e., its rotational and translational motion. In addition, by deploying a relatively lightweight architecture, we ensured that the algorithm runs at approximately 25 frames per second on a commercially available mobile device. Evaluation on three unseen human subjects demonstrated that the method can guide an ultrasound transducer to a target view with an average rotational and translational accuracy of 3.3 ± 2.6° and 2.0 ± 1.6 mm, respectively, when the probe is close to the target (<5 mm). We believe that this accuracy is sufficient to find an image on which the user can make quick qualitative evaluations, such as detection of pericardial effusion or assessment of cardiac activity (squeeze, mitral valve motion, cardiac arrest, etc.), as well as quantitative calculations such as ejection fraction.
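To make the multi-task setup concrete, the sketch below shows one way such a network could be laid out in PyTorch: a shared lightweight convolutional backbone feeding (a) a frame-quality gate, (b) a three-way imaging-window classifier, and (c) a 6-DOF pose regressor, trained with a weighted sum of per-task losses. The backbone, layer sizes, and loss weights here are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch of a multi-task guidance CNN (illustrative, not the authors' model):
# a shared lightweight backbone with three heads for quality gating, imaging-window
# classification, and 6-DOF pose regression.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidanceCNN(nn.Module):
    def __init__(self, n_windows: int = 3):
        super().__init__()
        # Assumed lightweight backbone (stand-in for a SqueezeNet-style network).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.quality = nn.Linear(128, 1)          # (a) frame usable for guidance? (logit)
        self.window = nn.Linear(128, n_windows)   # (b) apical / parasternal / subcostal
        self.pose = nn.Linear(128, 6)             # (c) 3 rotations + 3 translations

    def forward(self, x):
        features = self.backbone(x)
        return self.quality(features), self.window(features), self.pose(features)

def multitask_loss(outputs, targets, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three task losses (weights are hypothetical)."""
    q_logit, w_logits, pose = outputs
    q_gt, w_gt, pose_gt = targets
    loss_quality = F.binary_cross_entropy_with_logits(q_logit.squeeze(1), q_gt)
    loss_window = F.cross_entropy(w_logits, w_gt)
    loss_pose = F.mse_loss(pose, pose_gt)
    return weights[0] * loss_quality + weights[1] * loss_window + weights[2] * loss_pose

# Example forward pass on a single-channel B-mode frame (batch of 4, 128x128 pixels).
model = GuidanceCNN()
frames = torch.randn(4, 1, 128, 128)
quality_logit, window_logits, pose_pred = model(frames)
```

Sharing a single lightweight backbone across the three tasks keeps the parameter count and per-frame compute small, which is consistent with the reported operation at approximately 25 frames per second on a mobile device.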

Keywords

Ultrasound · Deep learning · Navigation · Echocardiography

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Grzegorz Toporek (1)
  • Raghavendra Srinivasa Naidu (1)
  • Hua Xie (1)
  • Adriana Simicich (2)
  • Tony Gades (2)
  • Balasundar Raju (1)
  1. Philips Research North America, Cambridge, USA
  2. Philips Healthcare, Bothell, USA
