FS-Net: A New Paradigm of Data Expansion for Medical Image Segmentation

  • Conference paper
  • First Online:
Deep Generative Models, and Data Augmentation, Labelling, and Imperfections (DGM4MICCAI 2021, DALI 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 13003)


Abstract

Pre-training can reduce the amount of labeled data required for a new task. However, pre-training is a form of sequential learning and typically suffers from forgetting earlier tasks, a problem that is especially pronounced in complex medical image segmentation. To address this, we propose a network structure based on feature space transformation (FS-Net) for data expansion in medical image segmentation. FS-Net shares parameters during training, which helps exploit regularities present across tasks and improves performance by constraining the learned representation. In our experiments, we use M&Ms as the expansion dataset for HVSMR; the two tasks have the same segmentation target (the heart). The segmentation accuracy of FS-Net is up to 7.12% higher than that of the baseline network, significantly outperforming pre-training. In addition, we use BraTS2019 as the expansion dataset for WMH, even though BraTS2019 (glioma) and WMH (white matter hyperintensities) have different segmentation targets, and segmentation accuracy improves by 0.77% over the baseline network.
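
The abstract only outlines the parameter-sharing idea; as a rough illustration, the following minimal PyTorch sketch shows joint training of a shared encoder with one lightweight head per dataset, as an alternative to sequential pre-training and fine-tuning. The names (SharedEncoderSegmenter, head_target, head_expansion) and the toy architecture are hypothetical assumptions for illustration and do not reproduce the authors' actual FS-Net or its feature space transformation module.

    import torch
    import torch.nn as nn

    class SharedEncoderSegmenter(nn.Module):
        """Toy two-task segmenter: one shared encoder, one head per task.
        Joint training on both datasets stands in for the 'data expansion'
        idea described in the abstract (hypothetical sketch, not FS-Net)."""

        def __init__(self, in_channels: int = 1, num_classes: int = 2):
            super().__init__()
            # Shared feature extractor: its parameters are updated by both tasks.
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            )
            # Task-specific heads: target dataset vs. expansion dataset.
            self.head_target = nn.Conv2d(64, num_classes, 1)
            self.head_expansion = nn.Conv2d(64, num_classes, 1)

        def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
            features = self.encoder(x)
            head = self.head_target if task == "target" else self.head_expansion
            return head(features)

    if __name__ == "__main__":
        model = SharedEncoderSegmenter()
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

        # One joint step: alternate batches from the two datasets so the shared
        # encoder sees both tasks, instead of learning them sequentially.
        for task in ("target", "expansion"):
            images = torch.randn(2, 1, 64, 64)           # dummy image batch
            labels = torch.randint(0, 2, (2, 64, 64))    # dummy segmentation masks
            logits = model(images, task)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

Because both losses update the shared encoder in the same training loop, there is no separate fine-tuning stage in which the earlier task can be forgotten; this is the general multi-task motivation behind the abstract, not the specific mechanism of FS-Net.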



Acknowledgment

This study is supported by grants from the National Key Research and Development Program of China (2018YFC1312000) and the Basic Research Foundation of Shenzhen Science and Technology Stable Support Program (GXWD20201230155427003-20200822115709001).

Author information


Corresponding author

Correspondence to Ting Ma.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Guo, X., Yang, Y., Ma, T. (2021). FS-Net: A New Paradigm of Data Expansion for Medical Image Segmentation. In: Engelhardt, S., et al. (eds.) Deep Generative Models, and Data Augmentation, Labelling, and Imperfections. DGM4MICCAI 2021, DALI 2021. Lecture Notes in Computer Science, vol. 13003. Springer, Cham. https://doi.org/10.1007/978-3-030-88210-5_21


  • DOI: https://doi.org/10.1007/978-3-030-88210-5_21

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88209-9

  • Online ISBN: 978-3-030-88210-5

  • eBook Packages: Computer Science (R0)
