Can Dilated Convolutions Capture Ultrasound Video Dynamics?

  • Mohammad Ali Maraci
  • Weidi Xie
  • J. Alison Noble
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11046)

Abstract

Automated analysis of free-hand ultrasound video sweeps is an important topic in diagnostic and interventional imaging; however, standard plane detection in such sweeps is notoriously challenging due to low-quality data and high variability in the contrast, appearance, and placement of anatomical structures. Sequential data of this kind is conventionally modelled with computationally heavy Recurrent Neural Networks (RNNs). In this paper, we instead propose a convolutional architecture (CNN) for standard plane detection in free-hand ultrasound videos. Our contributions are twofold. First, we show that a simple convolutional architecture based on dilated convolutions can characterize long-range dependencies in challenging ultrasound video sequences, outperforming both canonical LSTMs and the recently proposed two-stream spatial ConvNet by a large margin (89% versus 83% and 84%, respectively). Second, to understand what evidence the model uses for decision making, we experiment with soft-attention layers for feature pooling, training the entire model end-to-end with only standard classification losses. We find that the resulting input-dependent attention maps not only boost the network's performance but also highlight patterns in the data that are deemed important for particular structures, providing interpretability when the models are deployed.
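To make the two ideas above concrete, the following is a minimal PyTorch sketch (not the authors' released code) of a temporal classifier that stacks dilated 1-D convolutions over per-frame features and pools them with a soft-attention layer; all layer names, sizes, and the dilation schedule are illustrative assumptions.

    # A minimal sketch of the two ideas described above: stacked dilated 1-D
    # temporal convolutions over per-frame CNN features, followed by
    # soft-attention pooling and a standard classification head. All layer
    # names, sizes, and the dilation schedule are illustrative assumptions,
    # not the authors' exact architecture.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DilatedTemporalClassifier(nn.Module):
        def __init__(self, feat_dim=512, hidden=256, num_classes=4):
            super().__init__()
            # Exponentially increasing dilations: with kernel size 3, these
            # four layers give a temporal receptive field of
            # 1 + 2 * (1 + 2 + 4 + 8) = 31 frames.
            self.temporal = nn.Sequential(
                nn.Conv1d(feat_dim, hidden, 3, padding=1, dilation=1), nn.ReLU(),
                nn.Conv1d(hidden, hidden, 3, padding=2, dilation=2), nn.ReLU(),
                nn.Conv1d(hidden, hidden, 3, padding=4, dilation=4), nn.ReLU(),
                nn.Conv1d(hidden, hidden, 3, padding=8, dilation=8), nn.ReLU(),
            )
            # Soft attention: one scalar score per time step, softmax-normalised,
            # so the attention map can be inspected for interpretability.
            self.attention = nn.Conv1d(hidden, 1, kernel_size=1)
            self.classifier = nn.Linear(hidden, num_classes)

        def forward(self, x):
            # x: (batch, time, feat_dim) per-frame features from a spatial CNN.
            h = self.temporal(x.transpose(1, 2))            # (batch, hidden, time)
            weights = F.softmax(self.attention(h), dim=-1)  # (batch, 1, time)
            pooled = (h * weights).sum(dim=-1)              # attention-weighted pooling
            return self.classifier(pooled), weights         # logits + attention map

    # Usage: a batch of 2 clips, 64 frames each, trained with plain cross-entropy.
    model = DilatedTemporalClassifier()
    logits, attn = model(torch.randn(2, 64, 512))
    loss = F.cross_entropy(logits, torch.tensor([0, 3]))

Because everything after the per-frame feature extractor is convolutional, the temporal receptive field grows exponentially with depth at a constant parameter count, which is the property such architectures exploit in place of recurrence.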

Acknowledgments

The National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (grant BRC-1215-20008), EPSRC grant EP/M013774/1, MRC grant MR/P027938/1, ERC Advanced Grant 694581 (PULSE), and an NVIDIA Corporation GPU grant are acknowledged.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Mohammad Ali Maraci (1)
  • Weidi Xie (1)
  • J. Alison Noble (1)

  1. Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, UK
