
Reciprocal translation between SAR and optical remote sensing images with cascaded-residual adversarial networks

  • Research Paper
  • Published in: Science China Information Sciences

Abstract

Despite the advantages of all-weather, all-day, high-resolution imaging, synthetic aperture radar (SAR) images are viewed and used far less by the general public because human vision is not adapted to the microwave scattering phenomenon. However, expert interpreters can be trained to learn the mapping rules from SAR to optical by comparing side-by-side SAR and optical images. This paper attempts to develop machine intelligence, trainable with large volumes of co-registered SAR and optical images, that translates SAR images into their optical counterparts for assisted SAR image interpretation. Reciprocal SAR-optical image translation is a challenging task because it is a raw-data translation between two physically very different sensing modalities. Inspired by recent progress in image translation studies in computer vision, this paper tackles the problem of SAR-optical reciprocal translation with an adversarial network scheme that employs cascaded residual connections and a hybrid L1-GAN loss. The network is trained and tested on both spaceborne Gaofen-3 (GF-3) and airborne Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) images. Results are presented for datasets of different resolutions and polarizations and compared with other state-of-the-art methods. The Fréchet inception distance (FID) is used to quantitatively evaluate translation performance. The possibility of unsupervised learning with unpaired/unregistered SAR and optical images is also explored. Results show that the proposed translation network works well under many scenarios and could potentially be used for assisted SAR interpretation.
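To illustrate the hybrid L1-GAN loss mentioned in the abstract, the sketch below combines a pix2pix-style adversarial term with an L1 reconstruction term on paired images. This is a minimal NumPy illustration, not the paper's implementation: the function name, the logit-based adversarial term, and the weight `lam` are assumptions chosen for clarity.

```python
import numpy as np

def hybrid_l1_gan_generator_loss(d_fake_logits, fake_img, real_img, lam=100.0):
    """Hybrid L1-GAN generator loss (illustrative sketch).

    d_fake_logits: discriminator logits on translated (fake) images.
    fake_img, real_img: translated image and its co-registered target.
    lam: weight of the L1 term relative to the adversarial term
         (hypothetical value; pix2pix-style losses often use ~100).
    """
    # Adversarial term: binary cross-entropy with target label 1,
    # computed from logits, so the generator is rewarded when the
    # discriminator scores its output as real.
    gan_term = np.mean(np.log1p(np.exp(-d_fake_logits)))
    # L1 reconstruction term keeps the translation pixel-wise close
    # to the paired ground-truth image and reduces blurring compared
    # to an L2 penalty.
    l1_term = np.mean(np.abs(fake_img - real_img))
    return gan_term + lam * l1_term
```

In such hybrid objectives the adversarial term pushes outputs toward the target image distribution while the L1 term anchors them to the specific paired target, which is why the loss is well suited to co-registered SAR-optical pairs.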



Acknowledgements

This work was supported in part by National Key R&D Program of China (Grant No. 2017YFB0502703) and Natural Science Foundation of China (Grant Nos. 61822107, 61571134).

Author information


Corresponding author

Correspondence to Feng Xu.



About this article


Cite this article

Fu, S., Xu, F. & Jin, YQ. Reciprocal translation between SAR and optical remote sensing images with cascaded-residual adversarial networks. Sci. China Inf. Sci. 64, 122301 (2021). https://doi.org/10.1007/s11432-020-3077-5

