
Fine Tuning U-Net for Ultrasound Image Segmentation: Which Layers?

  • Mina Amiri
  • Rupert Brooks
  • Hassan Rivaz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11795)

Abstract

Fine-tuning a network that has been trained on a large dataset is an alternative to full training and a way to overcome the scarcity and cost of data in medical applications. Usually, the shallow layers of the network are kept unchanged, while the deeper layers are adapted to the new dataset. This approach may not work for ultrasound images because of their drastically different appearance. In this study, we investigated the effect of fine-tuning different layers of a U-Net, originally trained to segment natural images, on breast ultrasound image segmentation. Tuning the contracting part while fixing the expanding part yielded substantially better results than fixing the contracting part and tuning the expanding part. Furthermore, we showed that starting to fine-tune the U-Net from the shallow layers and gradually including deeper layers leads to better performance than fine-tuning from the deep layers back toward the shallow layers. We did not observe the same behavior on segmentation of X-ray images, which have different salient features than ultrasound; for ultrasound it may therefore be more appropriate to fine-tune the shallow layers rather than the deep layers. Shallow layers learn lower-level features (including the speckle pattern, and probably the noise and artifact properties) that are critical for automatic segmentation in this modality.
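The layer-freezing strategy the abstract describes (fix one path of the U-Net, fine-tune the other) can be sketched as follows. This is a minimal illustrative stand-in in PyTorch, not the authors' actual architecture or code; the `TinyUNet` class, its layer sizes, and the `freeze` helper are assumptions made only to show the mechanism of marking parameters non-trainable.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a U-Net: a contracting (encoder) and an
# expanding (decoder) path, without skip connections for brevity.
class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # contracting part
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(          # expanding part
            nn.ConvTranspose2d(16, 8, 2, stride=2),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def freeze(module: nn.Module) -> None:
    """Exclude a module's parameters from gradient updates."""
    for p in module.parameters():
        p.requires_grad = False

model = TinyUNet()
# The configuration the paper found effective on ultrasound:
# fix the expanding part, fine-tune the contracting part.
freeze(model.decoder)

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
# Only encoder weights remain trainable; the optimizer then receives
# just these parameters, e.g.
# torch.optim.Adam(p for p in model.parameters() if p.requires_grad)
```

Gradual fine-tuning from the shallow layers, as the paper also compares, would amount to starting with everything frozen except the first encoder block and unfreezing successively deeper blocks over training.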

Keywords

Ultrasound imaging · Segmentation · Transfer learning · U-Net

Notes

Acknowledgment

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant RGPIN-2015-04136.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Concordia University, Montreal, Canada
  2. Nuance Communications, Montreal, Canada
