
Multi image super resolution of MRI images using generative adversarial network

  • Original Research
  • Published:
Journal of Ambient Intelligence and Humanized Computing Aims and scope Submit manuscript

Abstract

In recent decades, computer-aided medical image analysis has become a popular technique for disease detection and diagnosis. Deep learning-based image processing techniques have gained popularity over conventional techniques in areas such as remote sensing, computer vision, and healthcare. However, hardware limitations, acquisition time, low radiation dose, and patient motion can degrade image quality, resulting in low-resolution (LR) medical images, whereas high-resolution images localize disease regions more accurately. To enhance the quality of LR medical images, we propose a multi-image super-resolution architecture using a generative adversarial network (GAN), with a generator that employs multi-stage feature extraction incorporating both residual blocks and an attention network, and a discriminator with fewer convolutional layers to reduce computational complexity. The method enhances the resolution of LR prostate cancer MRI images by combining multiple MRI slices with slight spatial shifts, using shared weights for feature extraction across the MRI images. Unlike super-resolution techniques in the literature, the network uses a perceptual loss computed from a VGG19 network fine-tuned with sparse categorical cross-entropy loss; the features for the perceptual loss are extracted from the final dense layer rather than from a convolutional block of VGG19, as is common in the literature. Our experiments were conducted on MRI images with a resolution of \(80\times 80\) for low resolution and \(320\times 320\) for high resolution, achieving an upscaling factor of \(\times 4\).
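The perceptual loss described above compares features of the high-resolution reference and the super-resolved output taken from the final dense layer of a fine-tuned VGG19. A minimal sketch of this idea in plain NumPy, with a hypothetical toy feature extractor standing in for the fine-tuned VGG19 (loading real VGG19 weights, and the fine-tuning itself, are omitted):

```python
import numpy as np

def perceptual_loss(feature_extractor, hr_batch, sr_batch):
    """Perceptual loss: mean squared error between feature vectors of the
    high-resolution (hr_batch) and super-resolved (sr_batch) images.
    feature_extractor maps a batch of images to dense-layer features."""
    f_hr = feature_extractor(hr_batch)
    f_sr = feature_extractor(sr_batch)
    return float(np.mean((f_hr - f_sr) ** 2))

# Toy stand-in extractor (hypothetical): global average over spatial
# dimensions, used only to make the sketch runnable without VGG19 weights.
toy_extractor = lambda x: x.mean(axis=(1, 2))

hr = np.ones((2, 8, 8, 3))   # batch of 2 "HR" images
sr = np.zeros((2, 8, 8, 3))  # batch of 2 "SR" images
print(perceptual_loss(toy_extractor, hr, sr))  # feature difference is 1 everywhere -> 1.0
```

In practice the extractor would be the fine-tuned VGG19 truncated at its last fully connected layer, with its weights frozen while training the GAN generator.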
The experimental analysis shows that the proposed model outperforms the existing deep learning architectures for super-resolution with an average peak-signal-to-noise ratio (PSNR) of 30.58 ± 0.76 dB and average structural similarity index measure (SSIM) of 0.8105 ± 0.0656 for prostate MRI images. The application of a CNN-based SVM classifier confirmed that enhancing the resolution of normal LR brain MRI images using super-resolution techniques did not result in any false positive cases. This same architecture has the potential to be extended to other medical imaging modalities as well.
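The PSNR figures reported above follow the standard definition \(\mathrm{PSNR} = 10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})\). A minimal NumPy sketch (the 8-bit intensity scale and toy images are assumptions for illustration; SSIM is typically computed with a library such as scikit-image rather than by hand):

```python
import numpy as np

def psnr(hr, sr, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference HR image and a
    super-resolved image, both on the same intensity scale."""
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a uniform error of 16 gray levels on an 8-bit scale.
hr = np.full((320, 320), 128.0)
sr = hr + 16.0
print(round(psnr(hr, sr), 2))  # 10*log10(255^2/256) ~ 24.05 dB
```

Higher PSNR and SSIM both indicate closer agreement with the ground-truth high-resolution image, which is how the reported 30.58 dB / 0.8105 averages are to be read.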


Data availability

The datasets used for this research are available within the PROSTATE-DIAGNOSIS dataset, accessible at https://doi.org/10.7937/K9/TCIA.2015.FOQEUJVT.


Author information

Corresponding author

Correspondence to U. Nimitha.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Nimitha, U., Ameer, P.M. Multi image super resolution of MRI images using generative adversarial network. J Ambient Intell Human Comput 15, 2241–2253 (2024). https://doi.org/10.1007/s12652-024-04751-9



  • DOI: https://doi.org/10.1007/s12652-024-04751-9
