Abstract
Synthetic aperture radar (SAR) images are widely used for Earth observation to complement optical imaging. By combining information on the polarization and the phase shift of the radar echoes, SAR images offer high sensitivity to the geometry and materials that compose a scene. This information richness comes with a drawback inherent to all coherent imaging modalities: a strong signal-dependent noise called “speckle.” This paper addresses the mathematical issues of performing speckle reduction in a transformed domain: the matrix-log domain. Rather than directly estimating noiseless covariance matrices, recasting the denoising problem in terms of the matrix-log of the covariance matrices stabilizes noise fluctuations and makes it possible to apply off-the-shelf denoising algorithms. We refine the method MuLoG by replacing heuristic procedures with exact expressions and improving the estimation strategy. This corrects a bias of the original method and should facilitate and encourage the adaptation of general-purpose processing methods to SAR imaging.
Notes
The projection is applied entrywise: whenever \(\varvec{G}_{i,j}\notin [e^{\lambda _j},\,e^{\lambda _i}]\), the entry is replaced by the nearest endpoint of the interval, i.e., \(\varvec{G}_{i,j}\leftarrow e^{\lambda _i}\) if \(|\varvec{G}_{i,j}-e^{\lambda _i}|<|\varvec{G}_{i,j}-e^{\lambda _j}|\), and \(\varvec{G}_{i,j}\leftarrow e^{\lambda _j}\) otherwise.
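For readers implementing this step, the rule above is a per-entry clip to the nearest endpoint, which a few lines of NumPy capture. This is a minimal sketch, not the authors' code; `project_G` and its arguments are illustrative names:

```python
import numpy as np

def project_G(G, lam):
    """Clip each entry G[i, j] to the segment whose endpoints are
    exp(lam[i]) and exp(lam[j]); entries inside the segment are kept,
    entries outside are moved to the nearest endpoint."""
    end = np.exp(lam)
    lo = np.minimum(end[:, None], end[None, :])  # lower endpoint for each (i, j)
    hi = np.maximum(end[:, None], end[None, :])  # upper endpoint for each (i, j)
    return np.clip(G, lo, hi)

# Small example: with lam = (0, 1), the admissible segment for the
# off-diagonal entries is [1, e]; out-of-range values snap to an endpoint.
lam = np.array([0.0, 1.0])
G = np.array([[5.0, 0.5],
              [10.0, 1.0]])
P = project_G(G, lam)
```

Note that `np.clip` with array-valued bounds implements exactly the two cases of the note: values below the interval go to its lower endpoint, values above go to its upper endpoint.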
Provided by CNES under Creative Commons Attribution-Share Alike 3.0 Unported license. See: https://commons.wikimedia.org/wiki/File:Bratislava_SPOT_1027.jpg.
References
Aubert, G., Aujol, J.F.: A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 68(4), 925–946 (2008)
Baqué, R., du Plessis, O.R., Castet, N., Fromage, P., Martinot-Lagarde, J., Nouvel, J.F., Oriot, H., Angelliaume, S., Brigui, F., Cantalloube, H., et al.: SETHI/RAMSES-NG: New performances of the flexible multi-spectral airborne remote sensing research platform. In: 2017 European Radar Conference (EURAD), pp. 191–194. IEEE (2017)
Boyd, S., Parikh, N., Chu, E.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
Buades, A., Coll, B., Morel, J.M.: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005)
Candes, E.J., Sing-Long, C.A., Trzasko, J.D.: Unbiased risk estimates for singular value thresholding and spectral estimators. IEEE Trans. Signal Process. 61(19), 4643–4657 (2013)
Chan, S.H., Wang, X., Elgendy, O.A.: Plug-and-play ADMM for image restoration: fixed-point convergence and applications. IEEE Trans. Comput. Imag. 3(1), 84–98 (2016)
Chen, J., Chen, Y., An, W., Cui, Y., Yang, J.: Nonlocal filtering for polarimetric SAR data: A pretest approach. IEEE Trans. Geosci. Remote Sens. 49(5), 1744–1754 (2010)
Chierchia, G., Cozzolino, D., Poggi, G., Verdoliva, L.: SAR image despeckling through convolutional neural networks. In: 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 5438–5441. IEEE (2017)
Combettes, P.L., Pesquet, J.C.: Proximal splitting methods in signal processing. In: Fixed-point algorithms for inverse problems in science and engineering, pp. 185–212. Springer (2011)
Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
Dalsasso, E., Denis, L., Tupin, F.: As if by magic: self-supervised training of deep despeckling networks with MERLIN. IEEE Trans. Geosci. Remote Sens. (2021). https://doi.org/10.1109/TGRS.2021.3128621
Dalsasso, E., Denis, L., Tupin, F.: SAR2SAR: a semi-supervised despeckling algorithm for SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. (2021)
Dalsasso, E., Yang, X., Denis, L., Tupin, F., Yang, W.: SAR image despeckling by deep neural networks: from a pre-trained model to an end-to-end training strategy. Remote Sens. 12(16), 2636 (2020)
Deledalle, C.A., Denis, L., Tabti, S., Tupin, F.: MuLoG, or How to apply Gaussian denoisers to multi-channel SAR speckle reduction? IEEE Trans. Image Process. 26(9), 4389–4403 (2017)
Deledalle, C.A., Denis, L., Tupin, F.: Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. IEEE Trans. Image Process. 18(12), 2661–2672 (2009)
Deledalle, C.A., Denis, L., Tupin, F.: NL-InSAR: nonlocal interferogram estimation. IEEE Trans. Geosci. Remote Sens. 49(4), 1441–1452 (2010)
Deledalle, C.A., Denis, L., Tupin, F., Reigber, A., Jäger, M.: NL-SAR: a unified nonlocal framework for resolution-preserving (Pol)(In)SAR denoising. IEEE Trans. Geosci. Remote Sens. 53(4), 2021–2038 (2014)
Deledalle, C.A., Vaiter, S., Peyré, G., Fadili, J.M., Dossal, C.: Risk estimation for matrix recovery with spectral regularization. In: ICML’2012 workshop on Sparsity, Dictionaries and Projections in Machine Learning and Signal Processing (2012)
Denis, L., Tupin, F., Darbon, J., Sigelle, M.: SAR image regularization with fast approximate discrete minimization. IEEE Trans. Image Process. 18(7), 1588–1600 (2009)
Durand, S., Fadili, J., Nikolova, M.: Multiplicative noise removal using L1 fidelity on frame coefficients. J. Math. Imag. Vis. 36(3), 201–226 (2010)
Even, M., Schulz, K.: InSAR deformation analysis with distributed scatterers: A review complemented by new advances. Remote Sensing 10(5), 744 (2018)
Goodman, J.: Some fundamental properties of speckle. J. Opt. Soc. Am. 66(11), 1145–1150 (1976)
Goodman, J.W.: Statistical properties of laser speckle patterns. In: Laser speckle and related phenomena, pp. 9–75. Springer (1975)
Guo, B., Han, Y., Wen, J.: AGEM: solving linear inverse problems via deep priors and sampling. Adv. Neural. Inf. Process. Syst. 32, 547–558 (2019)
Hertrich, J., Neumayer, S., Steidl, G.: Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra and its Applications (2021)
Kadkhodaie, Z., Simoncelli, E.P.: Solving linear inverse problems using the prior implicit in a denoiser. arXiv preprint arXiv:2007.13640 (2020)
Kawar, B., Vaksman, G., Elad, M.: Stochastic image denoising by sampling from the posterior distribution. arXiv preprint arXiv:2101.09552 (2021)
Knaus, C., Zwicker, M.: Dual-domain image denoising. In: 2013 IEEE International Conference on Image Processing, pp. 440–444. IEEE (2013)
Laumont, R., De Bortoli, V., Almansa, A., Delon, J., Durmus, A., Pereyra, M.: Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie. arXiv preprint arXiv:2103.04715 (2021)
Lee, J.S.: Digital image smoothing and the sigma filter. Comput. Vis. Graph. Image Process. 24(2), 255–269 (1983)
Lewis, A.S., Sendov, H.S.: Twice differentiable spectral functions. SIAM J. Matrix Anal. Appl. 23(2), 368–386 (2001)
Lopes, A., Touzi, R., Nezry, E.: Adaptive speckle filters and scene heterogeneity. IEEE Trans. Geosci. Remote Sens. 28(6), 992–1000 (1990)
Meinhardt, T., Moller, M., Hazirbas, C., Cremers, D.: Learning proximal operators: Using denoising networks for regularizing inverse imaging problems. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1781–1790 (2017)
Molini, A.B., Valsesia, D., Fracastoro, G., Magli, E.: Speckle2Void: deep self-supervised SAR despeckling with blind-spot convolutional neural networks. IEEE Trans. Geosci. Remote Sens. (2021)
Monga, V., Li, Y., Eldar, Y.C.: Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. IEEE Signal Process. Mag. 38(2), 18–44 (2021)
Moreira, A., Prats-Iraola, P., Younis, M., Krieger, G., Hajnsek, I., Papathanassiou, K.P.: A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 1(1), 6–43 (2013)
Parameswaran, S., Deledalle, C.A., Denis, L., Nguyen, T.Q.: Accelerating GMM-based patch priors for image restoration: three ingredients for a \(100 \times \) speed-up. IEEE Trans. Image Process. 28(2), 687–698 (2018)
Parrilli, S., Poderico, M., Angelino, C.V., Verdoliva, L.: A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 50(2), 606–616 (2011)
Rasti, B., Chang, Y., Dalsasso, E., Denis, L., Ghamisi, P.: Image restoration for remote sensing: overview and toolbox. IEEE Geosci. Remote Sens. Mag. 2–31 (2021). https://doi.org/10.1109/MGRS.2021.3121761
Reehorst, E.T., Schniter, P.: Regularization by denoising: clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
Romano, Y., Elad, M., Milanfar, P.: The little engine that could: regularization by denoising (RED). SIAM J. Imag. Sci. 10(4), 1804–1844 (2017)
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60(1–4), 259–268 (1992)
Ryu, E., Liu, J., Wang, S., Chen, X., Wang, Z., Yin, W.: Plug-and-play methods provably converge with properly trained denoisers. In: International Conference on Machine Learning, pp. 5546–5557. PMLR (2019)
Steidl, G., Teuber, T.: Removing multiplicative noise by Douglas-Rachford splitting methods. J. Math. Imag. Vis. 36(2), 168–184 (2010)
Terris, M., Repetti, A., Pesquet, J.C., Wiaux, Y.: Enhanced convergent PnP algorithms for image restoration. In: 2021 IEEE International Conference on Image Processing (ICIP), pp. 1684–1688. IEEE (2021)
Touzi, R., Lopes, A., Bruniquel, J., Vachon, P.W.: Coherence estimation for SAR imagery. IEEE Trans. Geosci. Remote Sens. 37(1), 135–149 (1999)
Vasile, G., Trouvé, E., Lee, J.S., Buzuloiu, V.: Intensity-driven adaptive-neighborhood technique for polarimetric and interferometric SAR parameters estimation. IEEE Trans. Geosci. Remote Sens. 44(6), 1609–1621 (2006)
Venkatakrishnan, S.V., Bouman, C.A., Wohlberg, B.: Plug-and-play priors for model based reconstruction. In: 2013 IEEE Global Conference on Signal and Information Processing, pp. 945–948. IEEE (2013)
Xie, H., Pierce, L.E., Ulaby, F.T.: Statistical properties of logarithmically transformed speckle. IEEE Trans. Geosci. Remote Sens. 40(3), 721–727 (2002)
Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
Zhang, K., Zuo, W., Gu, S., Zhang, L.: Learning deep CNN denoiser prior for image restoration. In: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 3929–3938 (2017)
Zhang, Y., Zhu, D.: Height retrieval in postprocessing-based VideoSAR image sequence using shadow information. IEEE Sens. J. 18(19), 8108–8116 (2018)
Zhao, W., Deledalle, C.A., Denis, L., Maître, H., Nicolas, J.M., Tupin, F.: Ratio-based multitemporal SAR images denoising: RABASAR. IEEE Trans. Geosci. Remote Sens. 57(6), 3552–3565 (2019)
Acknowledgements
The airborne SAR images processed in this paper were provided by ONERA, the French aerospace lab, within the project ALYS ANR-15-ASTR-0002 funded by the DGA (Direction Générale de l'Armement) and the ANR (Agence Nationale de la Recherche). This work has been supported in part by the French space agency CNES under project R-S19/OT-0003-086.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
A Gradient of the Objective F
The gradient of (10) is given by:
Proof of Eq. (12)
Applying the chain rule to eq. (10) leads to the following decomposition
Using that, for any matrices \(\varvec{A}\) and \(\varvec{B}\),
concludes the proof. \(\square \)
B Proof of Proposition 1
Proposition 1
Let \(\varvec{\Sigma }\) be a \(D \times D\) Hermitian matrix with distinct eigenvalues. Let \(\varvec{\Sigma }= \varvec{E}{{\,\mathrm{diag}\,}}(\varvec{\Lambda }) \varvec{E}^*\) be its eigendecomposition where \(\varvec{E}\) is a unitary matrix of eigenvectors and \(\varvec{\Lambda }= (\lambda _1, \ldots , \lambda _D)\) the vector of corresponding eigenvalues. Then, for any \(D \times D\) Hermitian matrix \(\varvec{A}\), denoting \({\bar{\varvec{A}}} = \varvec{E}^* \varvec{A}\varvec{E}\), we have
where \(\odot \) is the element-wise (a.k.a. Hadamard) product, and, for all \(1 \le i, j \le D\), we have defined
Proof
Let us start by recalling the following lemma, whose proof can be found in [5, 18, 31].
Lemma 1
Let \(\varvec{\Sigma }\) be a Hermitian matrix with distinct eigenvalues. Let \(\varvec{\Sigma }= \varvec{E}{{\,\mathrm{diag}\,}}(\varvec{\Lambda }) \varvec{E}^*\) be its eigendecomposition where \(\varvec{E}\) is a unitary matrix of eigenvectors and \(\varvec{\Lambda }= (\lambda _1, \ldots , \lambda _D)\) the vector of corresponding eigenvalues. We have for a Hermitian matrix \(\varvec{A}\)
where \({\bar{\varvec{A}}} = \varvec{E}^* \varvec{A}\varvec{E}\) and \(\varvec{J}\) is the skew-symmetric matrix
Recall that \( \exp \varvec{\Sigma }= \varvec{E}{{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) \varvec{E}^* \). From Lemma 1, by applying chain rule, we have
We have for \(i \ne j\)
For \(i = j\), since \(\varvec{J}_{ii} = 0\), we conclude the proof. \(\square \)
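Proposition 1 is the classical Daleckii–Krein divided-difference formula for functions of Hermitian matrices, here specialized to \(\exp\). As a numerical sanity check, the sketch below assumes the standard divided-difference form \(\varvec{G}_{ij} = (e^{\lambda _i}-e^{\lambda _j})/(\lambda _i-\lambda _j)\) for \(i \ne j\) and \(\varvec{G}_{ii} = e^{\lambda _i}\) (an assumption consistent with the note on the projection interval \([e^{\lambda _j}, e^{\lambda _i}]\)), and compares the closed-form directional derivative \(\varvec{E}[\varvec{G}\odot \bar{\varvec{A}}]\varvec{E}^*\) against a central finite difference of the matrix exponential:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4

def random_hermitian(rng, D):
    M = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
    return (M + M.conj().T) / 2

def expm_h(S):
    """Matrix exponential of a Hermitian matrix via its eigendecomposition."""
    lam, E = np.linalg.eigh(S)
    return (E * np.exp(lam)) @ E.conj().T

Sigma = random_hermitian(rng, D)   # almost surely has distinct eigenvalues
A = random_hermitian(rng, D)       # Hermitian direction

lam, E = np.linalg.eigh(Sigma)     # Sigma = E diag(lam) E*

# Assumed divided-difference matrix G of Proposition 1:
# G[i, j] = (e^{lam_i} - e^{lam_j}) / (lam_i - lam_j) off the diagonal, e^{lam_i} on it.
G = np.empty((D, D))
for i in range(D):
    for j in range(D):
        G[i, j] = (np.exp(lam[i]) if i == j
                   else (np.exp(lam[i]) - np.exp(lam[j])) / (lam[i] - lam[j]))

Abar = E.conj().T @ A @ E
deriv = E @ (G * Abar) @ E.conj().T  # closed-form directional derivative of expm

t = 1e-6
fd = (expm_h(Sigma + t * A) - expm_h(Sigma - t * A)) / (2 * t)  # central difference
```

The two matrices agree to finite-difference accuracy, and the derivative is Hermitian, as expected since \(\varvec{G}\) is real symmetric and \(\bar{\varvec{A}}\) is Hermitian.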
C Proof of Corollary 1
Corollary 1
Let \(\varvec{\Sigma }\) be a Hermitian matrix with distinct eigenvalues. The Jacobian of the matrix exponential is a self-adjoint operator
Proof
We need to prove that for any two \(D \times D\) Hermitian matrices \(\varvec{A}\) and \(\varvec{B}\), we have
for the matrix dot product \(\left\langle \varvec{X},\,\varvec{Y} \right\rangle = {{\,\mathrm{tr}\,}}[ \varvec{X}\varvec{Y}^* ]\). According to Proposition 1, this amounts to showing
where \(\varvec{E}\) and \(\varvec{G}\) are defined from \(\varvec{\Sigma }\) as in Proposition 1. Denoting \({\bar{\varvec{A}}} = \varvec{E}^* \varvec{A}\varvec{E}\) and \({\bar{\varvec{B}}} = \varvec{E}^* \varvec{B}\varvec{E}\), this can be recast as
Expanding the left-hand side allows us to conclude the proof as follows
\(\square \)
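The self-adjointness of Corollary 1 can likewise be checked numerically. The sketch below again assumes the divided-difference form of \(\varvec{G}\) (real and symmetric, which is what makes the operator self-adjoint) and verifies \(\langle \frac{\partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}[\varvec{A}],\,\varvec{B}\rangle = \langle \varvec{A},\,\frac{\partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}[\varvec{B}]\rangle\) for random Hermitian \(\varvec{A}\) and \(\varvec{B}\):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4

def random_hermitian(rng, D):
    M = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
    return (M + M.conj().T) / 2

Sigma = random_hermitian(rng, D)
A = random_hermitian(rng, D)
B = random_hermitian(rng, D)

lam, E = np.linalg.eigh(Sigma)

# Assumed divided-difference matrix of exp (see Proposition 1); real and symmetric.
lam_i, lam_j = np.meshgrid(lam, lam, indexing="ij")
diag = np.eye(D, dtype=bool)
safe_den = np.where(diag, 1.0, lam_i - lam_j)  # avoid 0/0 on the diagonal
G = np.where(diag, np.exp(lam_i), (np.exp(lam_i) - np.exp(lam_j)) / safe_den)

def jac(X):
    """Closed-form Jacobian-vector product of expm at Sigma (Proposition 1 form)."""
    return E @ (G * (E.conj().T @ X @ E)) @ E.conj().T

# Matrix dot product <X, Y> = tr[X Y*]
lhs = np.trace(jac(A) @ B.conj().T)   # <J[A], B>
rhs = np.trace(A @ jac(B).conj().T)   # <A, J[B]>
```

Both inner products coincide (and are real, since both arguments are Hermitian), matching the corollary.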
D Hessian of the Objective F
The second-order derivative used in the approximation of the Hessian given in (23) takes the form:
Proof of Eq. (24)
Applying the chain rule to eq. (12) leads to
It follows that
and thus, as \(|\!| \varvec{d} |\!|_2 = 1\), it follows that
which concludes the proof.\(\square \)
E Proof of Proposition 2
Proposition 2
Let \(\varvec{\Sigma }\) be a \(D \times D\) Hermitian matrix with distinct eigenvalues. Let \(\varvec{\Sigma }= \varvec{E}{{\,\mathrm{diag}\,}}(\varvec{\Lambda }) \varvec{E}^*\) be its eigendecomposition where \(\varvec{E}\) is a unitary matrix of eigenvectors and \(\varvec{\Lambda }= (\lambda _1, \ldots , \lambda _D)\) the vector of corresponding eigenvalues. For any \(D \times D\) Hermitian matrices \(\varvec{A}\) and \(\varvec{B}\), denoting \({\bar{\varvec{A}}} = \varvec{E}^* \varvec{A}\varvec{E}\) and \({\bar{\varvec{B}}} = \varvec{E}^* \varvec{B}\varvec{E}\), we have
where, for all \(1 \le i, j \le D\), we have
Proof of Proposition 2
The second directional derivative can be defined from the adjoint of the directional derivative as
By virtue of Corollary 1, we have \( \left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{\varvec{\Sigma }} ^* = \left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{\varvec{\Sigma }} \), and then from Proposition 1 it follows that
In order to apply the chain rule on \(\varvec{E}[ \varvec{G}\odot {\bar{\varvec{A}}}] \varvec{E}^*\), let us first rewrite \(\varvec{J}\) and \(\varvec{G}\) in Lemma 1 and Proposition 1 as
where \(\varvec{1}_D\) is a D dimensional column vector of ones, \(\oslash \) denotes the element-wise division and \({e^{\varvec{\Lambda }}}^*\) must be understood as the row vector \(({e^{\varvec{\Lambda }}})^*\). From Lemma 1, we have for a Hermitian matrix \(\varvec{B}\)
where \({\bar{\varvec{B}}} = \varvec{E}^* \varvec{B}\varvec{E}\). By application of the chain rule, we get
where we used that \(\big ( e^{\varvec{\Lambda }} \varvec{1}_D^* - \varvec{1}_D {e^{\varvec{\Lambda }}}^* \big ) \odot \varvec{J}= -\varvec{G}\odot \varvec{J}\). Let \(\varvec{A}\) be a Hermitian matrix and \({\bar{\varvec{A}}} = \varvec{E}^* \varvec{A}\varvec{E}\), by Lemma 1, we have
We are now equipped to apply the chain rule to \(\varvec{E}[ \varvec{G}\odot {\bar{\varvec{A}}}] \varvec{E}^*\) in the direction \(\varvec{B}\), which leads us to
We have for all \(1 \le i \le D\) and \(1 \le j \le D\)
Hence, we get
\(\bullet \) Assume \(i \ne j\). We have
For \(k \ne i\) and \(k \ne j\), we have
Similarly, we have \(\varvec{J}_{jk} (\varvec{G}_{ik} - \varvec{G}_{ij}) = \varvec{J}_{ij} (\varvec{G}_{jk} - \varvec{G}_{ik})\). Hence, we get the following
It follows that
\(\bullet \) Now assume that \(i = j\). We have \([\varvec{F}_3]_{ii} = \varvec{G}_{ii} {\bar{\varvec{A}}}_{ii} {\bar{\varvec{B}}}_{ii}\). It follows that
which concludes the proof. \(\square \)
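Although the closed-form expressions of Proposition 2 are not reproduced here, the object they compute, the second directional derivative of the matrix exponential, can be cross-checked numerically: differentiating the Proposition 1 form (under the same assumed divided-difference \(\varvec{G}\)) in the direction \(\varvec{B}\) by finite differences should match a mixed second difference of \(\exp\) itself. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4

def random_hermitian(rng, D):
    M = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
    return (M + M.conj().T) / 2

def expm_h(S):
    """Matrix exponential of a Hermitian matrix via its eigendecomposition."""
    lam, E = np.linalg.eigh(S)
    return (E * np.exp(lam)) @ E.conj().T

def dexp(S, X):
    """First directional derivative of expm at S in direction X,
    using the (assumed) divided-difference form of Proposition 1."""
    lam, E = np.linalg.eigh(S)
    li, lj = np.meshgrid(lam, lam, indexing="ij")
    diag = np.eye(len(lam), dtype=bool)
    den = np.where(diag, 1.0, li - lj)
    G = np.where(diag, np.exp(li), (np.exp(li) - np.exp(lj)) / den)
    return E @ (G * (E.conj().T @ X @ E)) @ E.conj().T

Sigma = random_hermitian(rng, D)
A = random_hermitian(rng, D)
B = random_hermitian(rng, D)

t = 1e-4
# Finite difference of the Proposition 1 form along B: ~ D^2 exp(Sigma)[A, B]
d2_from_prop1 = (dexp(Sigma + t * B, A) - dexp(Sigma - t * B, A)) / (2 * t)
# Mixed second difference of expm itself, same mixed partial derivative
d2_fd = (expm_h(Sigma + t*A + t*B) - expm_h(Sigma + t*A - t*B)
         - expm_h(Sigma - t*A + t*B) + expm_h(Sigma - t*A - t*B)) / (4 * t * t)
```

Agreement of the two estimates (to finite-difference accuracy) is a useful regression test when implementing the exact Hessian terms of Proposition 2.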
Deledalle, CA., Denis, L. & Tupin, F. Speckle Reduction in Matrix-Log Domain for Synthetic Aperture Radar Imaging. J Math Imaging Vis 64, 298–320 (2022). https://doi.org/10.1007/s10851-022-01067-1