
FBA-Net: Foreground and Background Aware Contrastive Learning for Semi-Supervised Atrium Segmentation

  • Conference paper
Medical Image Learning with Limited and Noisy Data (MILLanD 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14307)


Abstract

Medical image segmentation of gadolinium-enhanced magnetic resonance imaging (GE MRI) is an important task in clinical applications. However, manual annotation is time-consuming and requires specialized expertise. Semi-supervised segmentation methods that leverage both labeled and unlabeled data have shown promise, with contrastive learning emerging as a particularly effective approach. In this paper, we propose FBA-Net, a contrastive learning strategy that learns foreground and background representations for semi-supervised 3D medical image segmentation. Specifically, we apply a contrastive loss to representations of both the foreground and background regions of the images. By training the network to distinguish between foreground-background pairs, we aim to learn a representation that effectively captures the anatomical structures of interest. Experiments on three medical segmentation datasets demonstrate state-of-the-art performance. Notably, our method achieves a Dice score of 91.31% with only 20% labeled data, remarkably close to the 91.62% of the fully supervised method that uses 100% labeled data on the left atrium dataset. Our framework has the potential to advance semi-supervised 3D medical image segmentation and enable more efficient and accurate analysis of medical images with a limited amount of annotated labels. Our code is available at https://github.com/cys1102/FBA-Net.
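The core idea the abstract describes, pulling foreground-region embeddings together while pushing them away from background embeddings, can be sketched as a minimal InfoNCE-style contrastive loss. This is an illustrative assumption, not the paper's exact formulation: the function names, the pooling of each region into a single embedding vector, and the temperature value are all hypothetical.

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def fb_contrastive_loss(fg_embs, bg_embs, tau=0.1):
    """Toy InfoNCE-style loss over pooled region embeddings.

    For each foreground embedding (anchor), the other foreground
    embeddings are positives and all background embeddings are
    negatives, so minimizing the loss separates the two regions
    in embedding space.
    """
    loss = 0.0
    for i, anchor in enumerate(fg_embs):
        pos = [math.exp(cosine(anchor, p) / tau)
               for j, p in enumerate(fg_embs) if j != i]
        neg = [math.exp(cosine(anchor, n) / tau) for n in bg_embs]
        loss += -math.log(sum(pos) / (sum(pos) + sum(neg)))
    return loss / len(fg_embs)
```

Under this sketch, well-separated foreground and background embeddings yield a small loss, while mixed ones yield a large loss; in the actual network, such a term would be computed on projected feature maps alongside the usual segmentation loss.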


Notes

  1. https://www.cardiacatlas.org/atriaseg2018-challenge/
  2. https://wiki.cancerimagingarchive.net/display/Public/Pancreas-CT
  3. https://www.creatis.insa-lyon.fr/Challenge/acdc/databases.html


Author information


Corresponding author

Correspondence to Yunsung Chung.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Chung, Y., Lim, C., Huang, C., Marrouche, N., Hamm, J. (2023). FBA-Net: Foreground and Background Aware Contrastive Learning for Semi-Supervised Atrium Segmentation. In: Xue, Z., et al. Medical Image Learning with Limited and Noisy Data. MILLanD 2023. Lecture Notes in Computer Science, vol 14307. Springer, Cham. https://doi.org/10.1007/978-3-031-44917-8_10


  • DOI: https://doi.org/10.1007/978-3-031-44917-8_10


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47196-4

  • Online ISBN: 978-3-031-44917-8

  • eBook Packages: Computer Science, Computer Science (R0)
