
Generative Adversary Network Based on Cross-Modal Transformer for CT to MR Images Transformation

  • Conference paper
Advances in Applied Nonlinear Dynamics, Vibration, and Control – 2023 (ICANDVC 2023)

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 1152)


Abstract

Acquiring Magnetic Resonance (MR) images in current medical imaging practice is expensive and time-consuming, so techniques for obtaining multi-contrast MR images are needed. Synthesizing MR images with deep learning algorithms to improve diagnostic efficiency is an active research topic, but cross-modal translation remains very challenging. This paper proposes an efficient and effective generative adversarial network based on a Cross-Modal Transformer (C-M Transformer) to address blurred synthetic images and unstable training, and to achieve the conversion of Computed Tomography (CT) images to MR images. First, the original input CT image is filtered to generate a high-frequency detail image; the detail image and the original image are then fed separately into a U-shaped network for feature extraction. After four downsampling stages, the features are sent to the C-M Transformer for fusion, in which the detail feature stream serves as the query (Q) and the original feature stream serves as the key (K) and value (V). We add a Masked Attention to provide an average feature representation of the two streams. The fused feature map is fed to the upsampling part of the U-shaped structure to generate the MR image. Experimental results show that the method outperforms mainstream algorithms in terms of Mean absolute error (MAE), Peak signal-to-noise ratio (PSNR), and Structural similarity (SSIM). The generated MR images show the bone marrow signal in the vertebral body more clearly and accurately than other methods, and clearly depict structures such as the position of the lumbar vertebral plate. The results of this method can be used to assist orthopedic diagnosis after approval by a physician.
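A minimal sketch of the fusion step described in the abstract, assuming a PyTorch implementation: the high-pass filter is approximated here by a box-blur subtraction, and the names and settings (high_frequency_detail, CrossModalAttention, num_heads, token shapes) are illustrative assumptions rather than the authors' code. Only the Q/K/V assignment (detail stream as Q, original CT stream as K and V) and the averaging of the two streams follow the abstract; the paper's actual Masked Attention is not reproduced.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def high_frequency_detail(ct: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
    """Rough high-pass filter: subtract a local average from the CT slice.

    `ct` is (batch, channels, height, width). The paper's exact filter is not
    given here, so a box-blur subtraction stands in for it.
    """
    blur = F.avg_pool2d(ct, kernel_size, stride=1, padding=kernel_size // 2)
    return ct - blur


class CrossModalAttention(nn.Module):
    """Cross-attention: the detail stream supplies Q, the original CT feature
    stream supplies K and V, as stated in the abstract."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.to_q = nn.Linear(dim, dim)  # queries from the high-frequency detail stream
        self.to_k = nn.Linear(dim, dim)  # keys from the original CT stream
        self.to_v = nn.Linear(dim, dim)  # values from the original CT stream
        self.proj = nn.Linear(dim, dim)

    def forward(self, detail_tokens: torch.Tensor, original_tokens: torch.Tensor) -> torch.Tensor:
        # Both inputs: (batch, tokens, dim), e.g. flattened feature maps taken
        # after the four downsampling stages of the U-shaped encoder.
        b, n, d = detail_tokens.shape
        h = self.num_heads

        def split_heads(x: torch.Tensor) -> torch.Tensor:
            return x.view(b, -1, h, d // h).transpose(1, 2)  # (b, h, n, d/h)

        q = split_heads(self.to_q(detail_tokens))
        k = split_heads(self.to_k(original_tokens))
        v = split_heads(self.to_v(original_tokens))

        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        fused = (attn @ v).transpose(1, 2).reshape(b, n, d)

        # Stand-in for the paper's Masked Attention: add an equal-weight average
        # of the two streams so both modalities contribute directly.
        avg = 0.5 * (detail_tokens + original_tokens)
        return self.proj(fused) + avg
```

In a full model this block would sit at the bottleneck of the U-shaped generator, with the fused tokens reshaped back into a spatial feature map before the upsampling path that produces the synthetic MR image.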



Acknowledgments

This work was supported in part by the Youth Foundation of Shandong Province under Grant No. ZR2021QF100, the National Natural Science Foundation of China under Grant No. 62273163, the Outstanding Youth Foundation of Shandong Province under Grant No. ZR2023YQ056, and the Key R&D Project of Shandong Province under Grant No. 2022CXGC010503.

Author information


Correspondence to Weijie Huang or Xingong Cheng.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wu, Z., Huang, W., Cheng, X., Wang, H. (2024). Generative Adversary Network Based on Cross-Modal Transformer for CT to MR Images Transformation. In: Jing, X., Ding, H., Ji, J., Yurchenko, D. (eds) Advances in Applied Nonlinear Dynamics, Vibration, and Control – 2023. ICANDVC 2023. Lecture Notes in Electrical Engineering, vol 1152. Springer, Singapore. https://doi.org/10.1007/978-981-97-0554-2_32


  • DOI: https://doi.org/10.1007/978-981-97-0554-2_32


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0553-5

  • Online ISBN: 978-981-97-0554-2

  • eBook Packages: Engineering, Engineering (R0)
