
Mixed-Supervised Dual-Network for Medical Image Segmentation

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 (MICCAI 2019)

Abstract

Deep-learning-based medical image segmentation models usually require large training datasets with high-quality dense segmentations, which are time-consuming and expensive to prepare. One way to tackle this difficulty is mixed-supervised learning, in which only part of the data is densely annotated with segmentation labels and the rest is weakly labeled with bounding boxes; the model is then trained jointly in a multi-task learning setting. In this paper, we propose the Mixed-Supervised Dual-Network (MSDN), a novel architecture consisting of two separate networks for the detection and segmentation tasks, respectively, and a series of connection modules between the layers of the two networks. These connection modules transfer useful information from the auxiliary detection task to help the segmentation task. We propose to use the recent ‘Squeeze and Excitation’ technique in the connection modules to boost this transfer. We conduct experiments on two medical image segmentation datasets, on which the proposed MSDN model outperforms multiple baselines.
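The connection modules described above can be thought of as a channel-wise ‘Squeeze and Excitation’ gate in which statistics of the detection features recalibrate the segmentation features. The NumPy sketch below is illustrative only: the function name `se_connection`, the bottleneck weights `w1`/`w2`, and the reduction ratio `r` are hypothetical stand-ins, not the authors' actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_connection(det_feat, seg_feat, w1, w2):
    """Squeeze-and-Excitation style gate between two feature streams.

    det_feat, seg_feat: (C, H, W) feature maps from the detection and
    segmentation networks at the same layer.
    w1: (C // r, C) and w2: (C, C // r) form the bottleneck MLP.
    """
    # Squeeze: global average pooling over the spatial dimensions.
    z = det_feat.mean(axis=(1, 2))                 # shape (C,)
    # Excite: bottleneck MLP (ReLU then sigmoid) produces per-channel gates.
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))      # shape (C,), values in (0, 1)
    # Scale: channel-wise recalibration of the segmentation stream.
    return seg_feat * s[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
det = rng.standard_normal((C, H, W))
seg = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = se_connection(det, seg, w1, w2)
print(out.shape)  # (8, 4, 4)
```

In this view, channels of the segmentation features that the detection stream finds informative are amplified, while the rest are suppressed, before the result is passed on to the next segmentation layer.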



Acknowledgement

This project was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health through Grant Numbers P41EB015898 and R01EB025964, and by the China Scholarship Council (CSC). Unrelated to this publication, Jayender Jagadeesan owns equity in Navigation Sciences, Inc. He is a co-inventor of a navigation device to assist surgeons in tumor excision that is licensed to Navigation Sciences. Dr. Jagadeesan’s interests were reviewed and are managed by BWH and Partners HealthCare in accordance with their conflict of interest policies.

Author information

Correspondence to Jagadeesan Jayender.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, D. et al. (2019). Mixed-Supervised Dual-Network for Medical Image Segmentation. In: Shen, D., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. Lecture Notes in Computer Science, vol 11765. Springer, Cham. https://doi.org/10.1007/978-3-030-32245-8_22


  • DOI: https://doi.org/10.1007/978-3-030-32245-8_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32244-1

  • Online ISBN: 978-3-030-32245-8

  • eBook Packages: Computer Science, Computer Science (R0)
