Multi-Modal Brain Tumor Data Completion Based on Reconstruction Consistency Loss


Abstract

Multi-modal brain magnetic resonance imaging (MRI) data has been widely applied in vision-based brain tumor segmentation methods because different modalities provide complementary diagnostic information. In practice, however, multi-modal image data are often corrupted by noise or artifacts during scanning, which makes it difficult to build a universal model for subsequent segmentation and diagnosis from incomplete inputs. Image completion has therefore become one of the most active fields in medical image pre-processing: it not only helps clinicians observe a patient’s lesion area more intuitively and comprehensively, but also reduces the cost and psychological burden that tedious pathological examinations impose on patients. Recently, many deep learning-based methods have been proposed to complete multi-modal image data and have shown good performance. However, current methods do not fully capture the continuous semantic information between adjacent slices or the structural information of intra-slice features, which limits their completion quality and efficiency. To solve these problems, in this work we propose a novel generative adversarial network (GAN) framework, named the random generative adversarial network (RAGAN), to complete the missing T1, T1ce, and FLAIR data from the given T2 modal data in real brain MRI. It consists of the following parts: (1) for the generator, we use T2 modal images and multi-modal classification labels from the same sample for cyclically supervised training of image generation, which enables the restoration of images of arbitrary modalities; (2) for the discriminator, we propose a multi-branch network in which the primary branch judges whether a generated modal image is similar to the target modal image, while the auxiliary branch judges whether its essential visual features are similar to those of the target modal image. We conduct qualitative and quantitative experimental validation on the BraTS2018 dataset, generating 10,686 MRI images for each missing modality, and compare real brain tumor morphology images with synthetic ones using PSNR and SSIM as evaluation metrics. The experiments demonstrate that the brightness, resolution, location, and morphology of brain tissue under different modalities are well reconstructed. As a further validation, we feed a mixture of synthetic and real images into a segmentation network: with the classic UNet as the segmentation model, the Dice score is 77.58%; with the stronger RES_UNet with deep supervision, the Dice score reaches 88.76%. Although our method does not significantly outperform other algorithms overall, its Dice score is 2% higher than that of the current state-of-the-art data completion algorithm TC-MGAN.
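The abstract describes the architecture only at a high level. As a concrete illustration of the general pattern — a label-conditioned generator trained with a cycle reconstruction-consistency loss and a two-branch discriminator — the following minimal PyTorch sketch may help. Every module definition, layer size, name, and loss weight here is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of a label-conditioned generator step with a cycle
# reconstruction-consistency loss and a two-branch discriminator.
# All modules and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_MODALITIES = 4  # T1, T1ce, T2, FLAIR, encoded as one-hot condition labels


class TinyGenerator(nn.Module):
    """Maps (image, target-modality label) -> image of the target modality."""

    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + N_MODALITIES, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, label):
        # Broadcast the one-hot modality label over the spatial grid
        # and concatenate it with the input image as extra channels.
        b, _, h, w = x.shape
        lab = label.view(b, N_MODALITIES, 1, 1).expand(b, N_MODALITIES, h, w)
        return self.net(torch.cat([x, lab], dim=1))


class TwoBranchDiscriminator(nn.Module):
    """Shared trunk with a primary real/fake branch and an auxiliary
    branch over modality-level visual features."""

    def __init__(self, ch=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.adv_head = nn.Conv2d(ch, 1, 3, padding=1)             # real vs. fake
        self.cls_head = nn.Conv2d(ch, N_MODALITIES, 3, padding=1)  # modality features

    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.cls_head(h).mean(dim=(2, 3))


def generator_step(G, D, x_t2, t2_label, tgt_label, lambda_rec=10.0):
    """One generator update: adversarial + auxiliary-classification
    + cycle reconstruction-consistency (L1) losses."""
    fake = G(x_t2, tgt_label)   # T2 -> target modality
    rec = G(fake, t2_label)     # target modality -> back to T2
    adv_out, cls_out = D(fake)
    loss_adv = F.binary_cross_entropy_with_logits(adv_out, torch.ones_like(adv_out))
    loss_cls = F.cross_entropy(cls_out, tgt_label.argmax(dim=1))
    loss_rec = F.l1_loss(rec, x_t2)  # reconstruction consistency
    return loss_adv + loss_cls + lambda_rec * loss_rec
```

Cycling the generated modality back to T2 and penalizing the L1 distance to the original slice is what ties each synthesized modality to the anatomy of the input, in the spirit of the reconstruction-consistency loss named in the title.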
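The quantitative comparison of real versus synthesized slices uses PSNR and SSIM. The paper does not state which implementation was used; the sketch below, using scikit-image's reference metrics on slices assumed to be normalized to [0, 1], is one plausible setup.

```python
# Sketch of the PSNR/SSIM evaluation of real vs. synthesized slices.
# scikit-image and the [0, 1] normalization are our assumptions.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(real: np.ndarray, fake: np.ndarray) -> tuple[float, float]:
    """Return (PSNR, SSIM) for a single 2-D slice pair in [0, 1]."""
    psnr = peak_signal_noise_ratio(real, fake, data_range=1.0)
    ssim = structural_similarity(real, fake, data_range=1.0)
    return psnr, ssim


# Example: average the metrics over a stack of slices.
reals = np.random.rand(8, 240, 240)  # placeholder for real MRI slices
fakes = np.clip(reals + 0.05 * np.random.randn(8, 240, 240), 0, 1)
scores = [evaluate_pair(r, f) for r, f in zip(reals, fakes)]
print("mean PSNR %.2f dB, mean SSIM %.3f" % tuple(np.mean(scores, axis=0)))
```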
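The segmentation validation (77.58% with UNet, 88.76% with RES_UNet) is reported as a Dice score. For reference, a minimal binary-mask Dice computation is shown below; the authors' exact per-class protocol on BraTS tumor regions is not specified, so this is only the underlying formula.

```python
# Minimal Dice overlap for boolean masks: Dice = 2|A ∩ B| / (|A| + |B|).
# The per-class evaluation protocol is our assumption.
import numpy as np


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))


# Toy example: a 2x2 prediction against a 3x3 ground truth.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
target = np.zeros((4, 4), dtype=bool); target[1:4, 1:4] = True
print(f"Dice = {dice_coefficient(pred, target):.3f}")  # 2*4/(4+9) ≈ 0.615
```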


Data Availability

The data that support the findings of this study are openly available at www.med.upenn.edu/sbia/brats2018/data.html.


Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 61901098 and 61971118.

Author information


Contributions

All authors contributed to the study’s conception and design. Material preparation, data collection, and analysis were performed by Shuang Zhang, Jianning Chi, and Yang Jiang. The first draft of the manuscript was written by Yang Jiang and Shuang Zhang, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jianning Chi.

Ethics declarations

Ethics Approval

The study did not require ethics approval.

Competing Interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jiang, Y., Zhang, S. & Chi, J. Multi-Modal Brain Tumor Data Completion Based on Reconstruction Consistency Loss. J Digit Imaging 36, 1794–1807 (2023). https://doi.org/10.1007/s10278-022-00697-6

