
Image synthesis in contrast MRI based on super resolution reconstruction with multi-refinement cycle-consistent generative adversarial networks

  • Kun Wu
  • Yan Qiang
  • Kai Song
  • Xueting Ren
  • WenKai Yang
  • Wanjun Zhang
  • Akbar Hussain
  • Yanfen Cui
Article

Abstract

In medical image processing, particularly magnetic resonance imaging (MRI), synthesizing a complementary target contrast for a patient from an existing contrast has clear clinical value in assisting diagnosis. To address the image-translation problem between different MRI contrasts (T1 and T2), a generative adversarial network is proposed that works end-to-end at the image level. Multi-stage optimization guided by an adversarial loss, a perceptual-consistency loss, and a cycle-consistency loss preserves both the low-frequency and high-frequency information of the image: the anatomical structure of the source contrast is retained in a supervised manner while the pixel distribution of the target contrast is learned. To integrate the L1 and L2 penalties organically, adaptive weights are assigned to the error sensitivity of the penalty function in the total loss, so that each stage of high-resolution image generation is optimized adaptively. In addition, a new network structure, the multi-skip-connection residual network, is proposed to refine medical image details step by step across the optimization stages. The method outperforms existing techniques on the T1 and T2 contrast-conversion task, which can help shorten imaging time, improve imaging quality, and effectively assist doctors with diagnosis.
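The loss structure described in the abstract can be sketched as follows. This is an illustrative, framework-free sketch, not the authors' code: the function names (adaptive_penalty, total_loss), the convention that alpha blends the L1 and L2 terms, and the lambda weights are all assumptions for demonstration; the paper's actual stage-wise weighting scheme may differ.

```python
# Sketch of a total loss combining adversarial, perceptual-consistency,
# and cycle-consistency terms with an adaptively weighted L1/L2 pixel penalty.
# Pure Python on flat pixel lists, to show the arithmetic only.

def adaptive_penalty(pred, target, alpha):
    """Blend the L1 and L2 pixel errors: alpha weights L1, (1 - alpha) weights L2."""
    n = len(pred)
    l1 = sum(abs(p - t) for p, t in zip(pred, target)) / n
    l2 = sum((p - t) ** 2 for p, t in zip(pred, target)) / n
    return alpha * l1 + (1 - alpha) * l2

def total_loss(adv, perceptual, cycle, pred, target,
               lam_perc=1.0, lam_cyc=10.0, alpha=0.5):
    """Weighted sum of the three consistency terms plus the pixel penalty.
    The lambda values here are placeholders, not the paper's settings."""
    return (adv
            + lam_perc * perceptual
            + lam_cyc * cycle
            + adaptive_penalty(pred, target, alpha))
```

In a training loop, alpha would be scheduled per refinement stage, e.g. favouring the outlier-robust L1 term early and the smoother L2 term in later high-resolution stages; that schedule is the "adaptive optimization of each stage" the abstract refers to.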

Keywords

Synthesis · Contrast MRI · Generative adversarial network · Multi-stage · Cyclic consistency · Multi-skip

Notes

Acknowledgements

This work was funded in part by the National Natural Science Foundation of China (Grant Number 61872261) and the Natural Science Foundation of Shanxi Province, China (Grant Number 201801D121139). The authors thank their partners in these projects for their contributions.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
  2. Beijing Institute of Technology, Beijing, China
  3. Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan, China
  4. College of Information and Computer, Taiyuan University of Technology, Jinzhong, China
