SAR-to-optical image translation for quality enhancement

  • Original Research
  • Published in Journal of Ambient Intelligence and Humanized Computing

Abstract

Synthetic aperture radar (SAR) plays an important role in monitoring the geographical environment thanks to its all-day, all-weather imaging capability, but improving the readability of SAR images remains a challenge. Generative adversarial networks are widely used in image translation and can also enhance the quality of cross-domain image translation. In this paper, we propose an image translation method based on generative adversarial networks for SAR image quality enhancement, which innovatively fuses U-Net and T-Net as the generator and adds perceptual loss and structural loss to the objective function. Features extracted from SAR images by residual networks are fed into the T-Net branch for feature compensation and into the U-Net branch for deep feature extraction. Experimental results on the SEN1-2 dataset show the advantage of the proposed UTGAN model in both traditional quality assessment metrics such as SSIM (Structural Similarity) and PSNR (Peak Signal-to-Noise Ratio) and deep learning-based metrics such as FID (Fréchet Inception Distance). Ablation experiments show that the perceptual loss and the structural similarity loss both have a positive effect on translation quality. In the objective analysis, the proposed model achieves 0.73 in SSIM, 22.31 in PSNR, and 205.75 in FID, outperforming existing models. In the subjective analysis, the images generated by the proposed model are more consistent with human visual perception, with clearer textures and richer details.
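The abstract describes a composite objective that augments the adversarial loss with a perceptual loss and a structural similarity (SSIM) loss. Below is a minimal PyTorch sketch of such an objective; it is not the authors' implementation, and the VGG-16 feature extractor, window size, and lambda weights are illustrative assumptions (the paper itself extracts features with residual networks and fuses U-Net and T-Net branches in the generator).

```python
# A minimal sketch (not the authors' code) of a generator objective that
# combines an adversarial term with a perceptual loss and an SSIM loss,
# as the abstract describes. Layer choices and weights are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen VGG-16 feature extractor for the perceptual term (an assumption;
# the paper extracts features with residual networks).
vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(fake, real):
    # L1 distance between deep features of generated and target images.
    return F.l1_loss(vgg(fake), vgg(real))

def ssim_loss(fake, real, c1=0.01**2, c2=0.03**2):
    # Single-scale SSIM over local 11x11 windows, returned as 1 - SSIM
    # so that minimizing the loss maximizes structural similarity.
    mu_f = F.avg_pool2d(fake, 11, 1, 5)
    mu_r = F.avg_pool2d(real, 11, 1, 5)
    var_f = F.avg_pool2d(fake * fake, 11, 1, 5) - mu_f**2
    var_r = F.avg_pool2d(real * real, 11, 1, 5) - mu_r**2
    cov = F.avg_pool2d(fake * real, 11, 1, 5) - mu_f * mu_r
    ssim = ((2 * mu_f * mu_r + c1) * (2 * cov + c2)) / (
        (mu_f**2 + mu_r**2 + c1) * (var_f + var_r + c2))
    return 1 - ssim.mean()

def generator_loss(disc_fake_logits, fake, real,
                   lambda_perc=1.0, lambda_ssim=1.0):
    # Adversarial term plus the two quality-oriented terms; the lambda
    # weights are placeholders, not values from the paper.
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return (adv + lambda_perc * perceptual_loss(fake, real)
                + lambda_ssim * ssim_loss(fake, real))
```

The ablation results reported in the abstract suggest why both auxiliary terms are kept: the perceptual term encourages realistic textures at the feature level, while the SSIM term preserves local structure that pixel-wise losses tend to blur.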

References

  • Alharbi Y, Smith NG, Wonka P (2019) Latent Filter Scaling for Multimodal Unsupervised Image-To-Image Translation. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 1458–1466

  • Bermudez JD, Happ PN, Oliveira DAB, Feitosa RQ (2018) SAR to Optical Image Synthesis for Cloud Removal with Generative Adversarial Networks. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 5–11

  • Chen L, Wu L, Hu Z, Wang M (2019) Quality-aware unpaired image-to-image translation. IEEE Trans Multimed 21:2664–2674

  • Doi K, Sakurada K, Onishi M, Iwasaki A (2020) GAN-Based SAR-to-Optical Image Translation with Region Information. IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, pp 2069–2072

  • Enomoto K, Sakurada K, Wang W, Kawaguchi N, Matsuoka M, Nakamura R (2018) Image Translation Between SAR and Optical Imagery with Generative Adversarial Nets. IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, pp 1752–1755

  • Heusel M, Ramsauer H, et al (2017) GANs Trained by a Two Time-Scale Update Rule Converge to a Nash Equilibrium. arXiv preprint arXiv:1706.08500

  • Hughes LH, Schmitt M (2019) A semi-supervised approach to SAR-optical image matching. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 71–78

  • Hwang J, Yu C, Shin Y (2020) SAR-to-Optical Image Translation Using SSIM and Perceptual Loss Based Cycle-Consistent GAN. In: 2020 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, Jeju, Korea (South), pp 191–194

  • Isola P, Zhu J-Y, Zhou T, Efros AA (2017) Image-to-Image Translation with Conditional Adversarial Networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Honolulu, HI, pp 5967–5976

  • Jiang X, Liu M, Zhao F et al (2020) A novel super-resolution CT image reconstruction via semi-supervised generative adversarial network. Neural Comput & Applic 32:14563–14578

  • Karras T, Laine S, Aila T (2019) A Style-Based Generator Architecture for Generative Adversarial Networks. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Long Beach, CA, USA, pp 4396–4405

  • Kim T, Cha M, Kim H, et al (2017) Learning to Discover Cross-Domain Relations with Generative Adversarial Networks. In: International Conference on Machine Learning, pp 1857–1865

  • Kuang P, Ma T, Chen Z, Li F (2018) Image super-resolution with densely connected convolutional networks. Appl Intell 49:125–136

  • Ledig C, Theis L, Huszar F, et al (2017) Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Honolulu, HI, pp 105–114

  • Li X, Du Z, Huang Y, Tan Z (2021) A deep translation (GAN) based change detection network for optical and SAR remote sensing images. ISPRS J Photogramm Remote Sens 179:14–34

  • Lin T-Y, Dollar P, Girshick R, et al. (2017) Feature Pyramid Networks for Object Detection. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Honolulu, HI, pp 936–944

  • Ma J, Yu W, Chen C, Liang P, Guo X, Jiang J (2020) Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion. Inf Fusion 62:110–120

  • Merkle N, Auer S, Muller R, Reinartz P (2018) Exploring the potential of conditional adversarial networks for optical and SAR image matching. IEEE J Sel Top Appl Earth Observ Remote Sens 11:1811–1820

  • Moreira A, Prats-Iraola P, Younis M et al (2013) A tutorial on synthetic aperture radar. IEEE Geosci Remote Sens Mag 1:6–43

  • Richardson E, Alaluf Y, Patashnik O, et al (2020) Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation. arXiv preprint arXiv:2008.00951

  • Ronneberger O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention, pp 234–241

  • Sara U, Akter M, Uddin MS (2019) Image quality assessment through FSIM, SSIM, MSE and PSNR—a comparative study. J Comput Commun 7:8–18

  • Schmitt M, Hughes LH, Zhu XX (2018) The SEN1-2 dataset for deep learning in SAR-optical data fusion. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp 141–146

  • Sheikh HR, Bovik AC (2006) Image information and visual quality. IEEE Trans Image Process 15:430–444. https://doi.org/10.1109/TIP.2005.859378

  • Wang Z, Bovik AC (2002) A universal image quality index. IEEE Signal Process Lett 9:81–84. https://doi.org/10.1109/97.995823

  • Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13:600–612

  • Wang C, Xu C, Wang C, Tao D (2018a) Perceptual adversarial networks for image-to-image transformation. IEEE Trans Image Process 27:4066–4079

  • Wang P, Patel VM (2018b) Generating high quality visible images from SAR images using CNNs. In: 2018 IEEE Radar Conference (RadarConf18). IEEE, Oklahoma City, OK, pp 0570–0575

  • Wang T-C, Liu M-Y, Zhu J-Y et al (2018c) High-resolution image synthesis and semantic manipulation with conditional GANs. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, Salt Lake City, UT, USA, pp 8798–8807

  • Wang P, Li Y, Vasconcelos N (2021) Rethinking and Improving the Robustness of Image Style Transfer. arXiv preprint arXiv:2104.05623

  • Yi Z, Zhang H, Tan P, Gong M (2017) DualGAN: Unsupervised Dual Learning for Image-to-Image Translation. In: 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, Venice, pp 2868–2876

  • Yim C, Bovik AC (2011) Quality assessment of deblocked images. IEEE Trans Image Process 20:88–98. https://doi.org/10.1109/TIP.2010.2061859

  • Yu T, Zhang J, Zhou J (2021) Conditional GAN with Effective Attention for SAR-to-Optical Image Translation. In: 2021 3rd International Conference on Advances in Computer Technology, Information Science and Communication (CTISC), pp 7–11

  • Zhang L, Zhang L, Mou X, Zhang D (2011) FSIM: a feature similarity index for image quality assessment. IEEE Trans Image Process 20:2378–2386

  • Zhang R, Isola P, Efros AA, et al (2018) The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, Salt Lake City, UT, pp 586–595

  • Zhu J-Y, Park T, Isola P, Efros AA (2017) Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In: 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, Venice, pp 2242–2251

  • Zuo Z, Li Y (2021) A SAR-to-optical image translation method based on PIX2PIX. In: 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, pp 3026–3029

Author information

Corresponding author

Correspondence to Dechang Pi.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest with any person(s) or organization(s).

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Luo, Y., Pi, D. SAR-to-optical image translation for quality enhancement. J Ambient Intell Human Comput 14, 9985–10000 (2023). https://doi.org/10.1007/s12652-021-03665-0
