Speckle Reduction in Matrix-Log Domain for Synthetic Aperture Radar Imaging

Abstract

Synthetic aperture radar (SAR) images are widely used for Earth observation to complement optical imaging. By combining information on the polarization and the phase shift of the radar echoes, SAR images offer high sensitivity to the geometry and materials that compose a scene. This information richness comes with a drawback inherent to all coherent imaging modalities: a strong signal-dependent noise called “speckle.” This paper addresses the mathematical issues of performing speckle reduction in a transformed domain: the matrix-log domain. Rather than directly estimating noiseless covariance matrices, recasting the denoising problem in terms of the matrix-log of the covariance matrices stabilizes noise fluctuations and makes it possible to apply off-the-shelf denoising algorithms. We refine the method MuLoG by replacing heuristic procedures with exact expressions and improving the estimation strategy. This corrects a bias of the original method and should facilitate and encourage the adaptation of general-purpose processing methods to SAR imaging.

Notes

  1. The projection is applied as follows: \(\varvec{G}_{i,j}\leftarrow e^{\lambda _i}\) if \(\varvec{G}_{i,j}\notin [e^{\lambda _j},\,e^{\lambda _i}]\) and \(|\varvec{G}_{i,j}-e^{\lambda _i}|<|\varvec{G}_{i,j}-e^{\lambda _j}|\), or \(\varvec{G}_{i,j}\leftarrow e^{\lambda _j}\) if \(\varvec{G}_{i,j}\notin [e^{\lambda _j},\,e^{\lambda _i}]\) and \(|\varvec{G}_{i,j}-e^{\lambda _i}|>|\varvec{G}_{i,j}-e^{\lambda _j}|\) (see the code sketch after these notes).

  2. Provided by CNES under Creative Commons Attribution-Share Alike 3.0 Unported license. See: https://commons.wikimedia.org/wiki/File:Bratislava_SPOT_1027.jpg.
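The clipping described in note 1 amounts to projecting each entry of \(\varvec{G}\) onto the interval spanned by \(e^{\lambda _i}\) and \(e^{\lambda _j}\). Below is a minimal NumPy sketch; the helper name project_G and the use of np.clip are ours, not the paper's implementation. Taking the entrywise minimum and maximum of the two endpoints handles either ordering of the exponentials.

```python
import numpy as np

def project_G(G, lam):
    """Clip each G[i, j] onto the interval spanned by exp(lam[j]) and exp(lam[i]),
    snapping to the nearer endpoint when the value falls outside (note 1)."""
    e = np.exp(lam)
    lo = np.minimum(e[:, None], e[None, :])  # entrywise lower endpoint
    hi = np.maximum(e[:, None], e[None, :])  # entrywise upper endpoint
    return np.clip(G, lo, hi)                # values already inside are left unchanged
```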

References

  1. Aubert, G., Aujol, J.F.: A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 68(4), 925–946 (2008)

  2. Baqué, R., du Plessis, O.R., Castet, N., Fromage, P., Martinot-Lagarde, J., Nouvel, J.F., Oriot, H., Angelliaume, S., Brigui, F., Cantalloube, H., et al.: SETHI/RAMSES-NG: New performances of the flexible multi-spectral airborne remote sensing research platform. In: 2017 European Radar Conference (EURAD), pp. 191–194. IEEE (2017)

  3. Boyd, S., Parikh, N., Chu, E.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Now Publishers Inc (2011)

  4. Buades, A., Coll, B., Morel, J.M.: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005)

  5. Candes, E.J., Sing-Long, C.A., Trzasko, J.D.: Unbiased risk estimates for singular value thresholding and spectral estimators. IEEE Trans. Signal Process. 61(19), 4643–4657 (2013)

  6. Chan, S.H., Wang, X., Elgendy, O.A.: Plug-and-play ADMM for image restoration: fixed-point convergence and applications. IEEE Trans. Comput. Imag. 3(1), 84–98 (2016)

  7. Chen, J., Chen, Y., An, W., Cui, Y., Yang, J.: Nonlocal filtering for polarimetric SAR data: A pretest approach. IEEE Trans. Geosci. Remote Sens. 49(5), 1744–1754 (2010)

  8. Chierchia, G., Cozzolino, D., Poggi, G., Verdoliva, L.: SAR image despeckling through convolutional neural networks. In: 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 5438–5441. IEEE (2017)

  9. Combettes, P.L., Pesquet, J.C.: Proximal splitting methods in signal processing. In: Fixed-point algorithms for inverse problems in science and engineering, pp. 185–212. Springer (2011)

  10. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)

  11. Dalsasso, E., Denis, L., Tupin, F.: As if by magic: self-supervised training of deep despeckling networks with MERLIN. IEEE Trans. Geosci. Remote Sens. (2021). https://doi.org/10.1109/TGRS.2021.3128621

  12. Dalsasso, E., Denis, L., Tupin, F.: SAR2SAR: a semi-supervised despeckling algorithm for SAR images. IEEE J. Select. Top. Appl. Earth Observat. Remote Sens. (2021)

  13. Dalsasso, E., Yang, X., Denis, L., Tupin, F., Yang, W.: SAR image despeckling by deep neural networks: from a pre-trained model to an end-to-end training strategy. Remote Sens. 12(16), 2636 (2020)

  14. Deledalle, C.A., Denis, L., Tabti, S., Tupin, F.: MuLoG, or How to apply Gaussian denoisers to multi-channel SAR speckle reduction? IEEE Trans. Image Process. 26(9), 4389–4403 (2017)

  15. Deledalle, C.A., Denis, L., Tupin, F.: Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. IEEE Trans. Image Process. 18(12), 2661–2672 (2009)

  16. Deledalle, C.A., Denis, L., Tupin, F.: NL-InSAR: nonlocal interferogram estimation. IEEE Trans. Geosci. Remote Sens. 49(4), 1441–1452 (2010)

  17. Deledalle, C.A., Denis, L., Tupin, F., Reigber, A., Jäger, M.: NL-SAR: a unified nonlocal framework for resolution-preserving (Pol)(In)SAR denoising. IEEE Trans. Geosci. Remote Sens. 53(4), 2021–2038 (2014)

  18. Deledalle, C.A., Vaiter, S., Peyré, G., Fadili, J.M., Dossal, C.: Risk estimation for matrix recovery with spectral regularization. In: ICML’2012 workshop on Sparsity, Dictionaries and Projections in Machine Learning and Signal Processing (2012)

  19. Denis, L., Tupin, F., Darbon, J., Sigelle, M.: SAR image regularization with fast approximate discrete minimization. IEEE Trans. Image Process. 18(7), 1588–1600 (2009)

  20. Durand, S., Fadili, J., Nikolova, M.: Multiplicative noise removal using L1 fidelity on frame coefficients. J. Math. Imag. Vis. 36(3), 201–226 (2010)

  21. Even, M., Schulz, K.: InSAR deformation analysis with distributed scatterers: A review complemented by new advances. Remote Sensing 10(5), 744 (2018)

  22. Goodman, J.: Some fundamental properties of speckle. J. Opt. Soc. Am. 66(11), 1145–1150 (1976)

  23. Goodman, J.W.: Statistical properties of laser speckle patterns. In: Laser speckle and related phenomena, pp. 9–75. Springer (1975)

  24. Guo, B., Han, Y., Wen, J.: AGEM: solving linear inverse problems via deep priors and sampling. Adv. Neural Inf. Process. Syst. 32, 547–558 (2019)

  25. Hertrich, J., Neumayer, S., Steidl, G.: Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra and its Applications (2021)

  26. Kadkhodaie, Z., Simoncelli, E.P.: Solving linear inverse problems using the prior implicit in a denoiser. arXiv preprint arXiv:2007.13640 (2020)

  27. Kawar, B., Vaksman, G., Elad, M.: Stochastic image denoising by sampling from the posterior distribution. arXiv preprint arXiv:2101.09552 (2021)

  28. Knaus, C., Zwicker, M.: Dual-domain image denoising. In: 2013 IEEE International Conference on Image Processing, pp. 440–444. IEEE (2013)

  29. Laumont, R., De Bortoli, V., Almansa, A., Delon, J., Durmus, A., Pereyra, M.: Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie. arXiv preprint arXiv:2103.04715 (2021)

  30. Lee, J.S.: Digital image smoothing and the sigma filter. Comput. Vis. Graph. Image Process. 24(2), 255–269 (1983)

  31. Lewis, A.S., Sendov, H.S.: Twice differentiable spectral functions. SIAM J. Matrix Anal. Appl. 23(2), 368–386 (2001)

  32. Lopes, A., Touzi, R., Nezry, E.: Adaptive speckle filters and scene heterogeneity. IEEE Trans. Geosci. Remote Sens. 28(6), 992–1000 (1990)

  33. Meinhardt, T., Moller, M., Hazirbas, C., Cremers, D.: Learning proximal operators: Using denoising networks for regularizing inverse imaging problems. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1781–1790 (2017)

  34. Molini, A.B., Valsesia, D., Fracastoro, G., Magli, E.: Speckle2Void: deep self-supervised SAR despeckling with blind-spot convolutional neural networks. IEEE Trans. Geosci. Remote Sens. (2021)

  35. Monga, V., Li, Y., Eldar, Y.C.: Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. IEEE Signal Process. Mag. 38(2), 18–44 (2021)

  36. Moreira, A., Prats-Iraola, P., Younis, M., Krieger, G., Hajnsek, I., Papathanassiou, K.P.: A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Magaz. 1(1), 6–43 (2013)

  37. Parameswaran, S., Deledalle, C.A., Denis, L., Nguyen, T.Q.: Accelerating GMM-based patch priors for image restoration: three ingredients for a \(100 \times \) speed-up. IEEE Trans. Image Process. 28(2), 687–698 (2018)

  38. Parrilli, S., Poderico, M., Angelino, C.V., Verdoliva, L.: A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 50(2), 606–616 (2011)

  39. Rasti, B., Chang, Y., Dalsasso, E., Denis, L., Ghamisi, P.: Image restoration for remote sensing: overview and toolbox. IEEE Geosci. Remote Sens. Magaz. 2–31 (2021). https://doi.org/10.1109/MGRS.2021.3121761

  40. Reehorst, E.T., Schniter, P.: Regularization by denoising: clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)

  41. Romano, Y., Elad, M., Milanfar, P.: The little engine that could: regularization by denoising (red). SIAM J. Imag. Sci. 10(4), 1804–1844 (2017)

  42. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60(1–4), 259–268 (1992)

  43. Ryu, E., Liu, J., Wang, S., Chen, X., Wang, Z., Yin, W.: Plug-and-play methods provably converge with properly trained denoisers. In: International Conference on Machine Learning, pp. 5546–5557. PMLR (2019)

  44. Steidl, G., Teuber, T.: Removing multiplicative noise by Douglas-Rachford splitting methods. J. Math. Imag. Vis. 36(2), 168–184 (2010)

  45. Terris, M., Repetti, A., Pesquet, J.C., Wiaux, Y.: Enhanced convergent PnP algorithms for image restoration. In: 2021 IEEE International Conference on Image Processing (ICIP), pp. 1684–1688. IEEE (2021)

  46. Touzi, R., Lopes, A., Bruniquel, J., Vachon, P.W.: Coherence estimation for SAR imagery. IEEE Trans. Geosci. Remote Sens. 37(1), 135–149 (1999)

  47. Vasile, G., Trouvé, E., Lee, J.S., Buzuloiu, V.: Intensity-driven adaptive-neighborhood technique for polarimetric and interferometric SAR parameters estimation. IEEE Trans. Geosci. Remote Sens. 44(6), 1609–1621 (2006)

  48. Venkatakrishnan, S.V., Bouman, C.A., Wohlberg, B.: Plug-and-play priors for model based reconstruction. In: 2013 IEEE Global Conference on Signal and Information Processing, pp. 945–948. IEEE (2013)

  49. Xie, H., Pierce, L.E., Ulaby, F.T.: Statistical properties of logarithmically transformed speckle. IEEE Trans. Geosci. Remote Sens. 40(3), 721–727 (2002)

  50. Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)

  51. Zhang, K., Zuo, W., Gu, S., Zhang, L.: Learning deep CNN denoiser prior for image restoration. In: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 3929–3938 (2017)

  52. Zhang, Y., Zhu, D.: Height retrieval in postprocessing-based VideoSAR image sequence using shadow information. IEEE Sens. J. 18(19), 8108–8116 (2018)

  53. Zhao, W., Deledalle, C.A., Denis, L., Maître, H., Nicolas, J.M., Tupin, F.: Ratio-based multitemporal SAR images denoising: RABASAR. IEEE Trans. Geosci. Remote Sens. 57(6), 3552–3565 (2019)


Acknowledgements

The airborne SAR images processed in this paper were provided by ONERA, the French aerospace lab, within the project ALYS ANR-15-ASTR-0002 funded by the DGA (Direction Générale de l’Armement) and the ANR (Agence Nationale de la Recherche). This work has been supported in part by the French space agency CNES under project R-S19/OT-0003-086.

Corresponding author

Correspondence to Charles-Alban Deledalle.

Appendices

A Gradient of the Objective F

The gradient of (10) is given by:

$$\begin{aligned} \varvec{g} = \beta (\varvec{x} - \varvec{z}) + L \Theta \left( \mathrm {Id}_D - \left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{-\Omega (\varvec{x})} ^*[e^{\Omega (\varvec{y})}] \right) ~. \end{aligned}$$
(12)

Proof of eq. (12)

Applying the chain rule to eq. (10) leads to the following decomposition

$$\begin{aligned} \varvec{g} =\;&\nabla _{\varvec{x}} \left[ \frac{\beta }{2} \Vert \varvec{x} - \varvec{z}\Vert ^2 + L {{\,\mathrm{tr}\,}}(\Omega (\varvec{x}) + e^{\Omega (\varvec{y})} e^{-\Omega (\varvec{x})}) \right] \end{aligned}$$
(37)
$$\begin{aligned} =\;&\beta (\varvec{x} - \varvec{z}) + L \left. \frac{ \partial \Omega (\varvec{x})}{\partial \varvec{x}}\right| _{\varvec{x}} ^* \Bigg ( \nabla _{\varvec{X}} {{\,\mathrm{tr}\,}}\varvec{X}\bigg |_{\Omega (\varvec{x})}\nonumber \\&- \left. \frac{ \partial e^{\varvec{X}}}{\partial \varvec{X}}\right| _{-\Omega (\varvec{x})} ^* \left. \frac{ \partial e^{\Omega (\varvec{y})} \varvec{X}}{\partial \varvec{X}}\right| _{e^{-\Omega (\varvec{x})}} ^* \nabla _{\varvec{X}} {{\,\mathrm{tr}\,}}\varvec{X}\bigg \vert _{e^{\Omega (\varvec{y})} e^{-\Omega (\varvec{x})}} \Bigg )~. \end{aligned}$$
(38)

Using that, for any matrices \(\varvec{A}\) and \(\varvec{B}\),

$$\begin{aligned} {\displaystyle \left. \frac{ \partial \varvec{A}\varvec{X}}{\partial \varvec{X}}\right| _{e^{-\Omega (\varvec{x})}} ^*[\varvec{B}]}&= \varvec{B}\varvec{A}^*~, \end{aligned}$$
(39)
$$\begin{aligned} {\displaystyle \left. \frac{ \partial \Omega (\varvec{x})}{\partial \varvec{x}}\right| _{\varvec{x}} ^*[\varvec{B}]}&= \Theta (\varvec{B})~, \end{aligned}$$
(40)
$$\begin{aligned} {\displaystyle \nabla _{\varvec{X}} {{\,\mathrm{tr}\,}}\varvec{X}\bigg |_{e^{\Omega (\varvec{y})} e^{-\Omega (\varvec{x})}}}&= \mathrm {Id}_D~. \end{aligned}$$
(41)

concludes the proof. \(\square \)
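To make eq. (12) concrete, here is a small numerical sketch (ours, not the authors' code). The mapping omega below is a hypothetical orthonormal parameterization of Hermitian matrices that stands in for the paper's \(\Omega \), theta is its adjoint (playing the role of \(\Theta \)), and dexp implements the Jacobian of the matrix exponential from Proposition 1; SciPy's expm is assumed available.

```python
import numpy as np
from scipy.linalg import expm

def omega(x, D):
    """Hypothetical Omega: map a real vector of length D*D to a D x D Hermitian matrix
    (diagonal entries first, then real and imaginary off-diagonal parts divided by sqrt(2))."""
    S = np.zeros((D, D), dtype=complex)
    S[np.diag_indices(D)] = x[:D]
    iu = np.triu_indices(D, k=1)
    n = iu[0].size
    S[iu] = (x[D:D + n] + 1j * x[D + n:]) / np.sqrt(2)
    S[(iu[1], iu[0])] = np.conj(S[iu])
    return S

def theta(S):
    """Adjoint of the hypothetical omega (omega is an isometry, so theta inverts it)."""
    iu = np.triu_indices(S.shape[0], k=1)
    return np.concatenate([S.diagonal().real,
                           np.sqrt(2) * S[iu].real,
                           np.sqrt(2) * S[iu].imag])

def dexp(S, A):
    """Jacobian of the matrix exponential at S applied to A (Proposition 1, eqs. (17)-(18))."""
    lam, E = np.linalg.eigh(S)
    elam = np.exp(lam)
    dl = lam[:, None] - lam[None, :]
    G = np.where(np.isclose(dl, 0.0),
                 elam[:, None] * np.ones_like(dl),
                 (elam[:, None] - elam[None, :]) / np.where(np.isclose(dl, 0.0), 1.0, dl))
    return E @ (G * (E.conj().T @ A @ E)) @ E.conj().T

def grad_F(x, y, z, beta, L, D):
    """Gradient of eq. (12): beta (x - z) + L Theta(Id_D - dexp|_{-Omega(x)}[e^{Omega(y)}]);
    the adjoint of dexp is dropped because dexp is self-adjoint (Corollary 1)."""
    M = np.eye(D) - dexp(-omega(x, D), expm(omega(y, D)))
    return beta * (x - z) + L * theta(M)
```

As a quick check, with x = y = z = 0 the returned gradient is the zero vector, since \(e^{\Omega (\varvec{y})} e^{-\Omega (\varvec{x})} = \mathrm {Id}_D\) and the fidelity term vanishes.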

B Proof of Proposition 1

Proposition 1

Let \(\varvec{\Sigma }\) be a \(D \times D\) Hermitian matrix with distinct eigenvalues. Let \(\varvec{\Sigma }= \varvec{E}{{\,\mathrm{diag}\,}}(\varvec{\Lambda }) \varvec{E}^*\) be its eigendecomposition where \(\varvec{E}\) is a unitary matrix of eigenvectors and \(\varvec{\Lambda }= (\lambda _1, \ldots , \lambda _D)\) the vector of corresponding eigenvalues. Then, for any \(D \times D\) Hermitian matrix \(\varvec{A}\), denoting \({\bar{\varvec{A}}} = \varvec{E}^* \varvec{A}\varvec{E}\), we have

$$\begin{aligned}&\left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{\varvec{\Sigma }} \left[ \varvec{A}\right] = \varvec{E}\left[ \varvec{G}\odot {\bar{\varvec{A}}} \right] \varvec{E}^* \end{aligned}$$
(17)

where \(\odot \) is the element-wise (a.k.a. Hadamard) product, and, for all \(1 \le i, j \le D\), we have defined

$$\begin{aligned} \varvec{G}_{i,j} = \left\{ \begin{array}{ll} \frac{e^{\lambda _i} - e^{\lambda _j}}{\lambda _i - \lambda _j} &{} {\text {if} \quad }i \ne j~,\\ e^{\lambda _i} &{} {\text {otherwise}}~. \end{array} \right. \end{aligned}$$
(18)

Proof

Let us start by recalling the following lemma, whose proof can be found in [5, 18, 31].

Lemma 1

Let \(\varvec{\Sigma }\) be a Hermitian matrix with distinct eigenvalues. Let \(\varvec{\Sigma }= \varvec{E}{{\,\mathrm{diag}\,}}(\varvec{\Lambda }) \varvec{E}^*\) be its eigendecomposition where \(\varvec{E}\) is a unitary matrix of eigenvectors and \(\varvec{\Lambda }= (\lambda _1, \ldots , \lambda _D)\) the vector of corresponding eigenvalues. We have for a Hermitian matrix \(\varvec{A}\)

$$\begin{aligned} \frac{ \partial \varvec{\Lambda }}{\partial \varvec{\Sigma }} [\varvec{A}] = {{\,\mathrm{diag}\,}}({\bar{\varvec{A}}}) \quad \text {and} \quad \frac{ \partial \varvec{E}}{\partial \varvec{\Sigma }} [\varvec{A}] = \varvec{E}\left( \varvec{J}\odot {\bar{\varvec{A}}} \right) \end{aligned}$$
(42)

where \({\bar{\varvec{A}}} = \varvec{E}^* \varvec{A}\varvec{E}\) and \(\varvec{J}\) is the skew-symmetric matrix

$$\begin{aligned} \varvec{J}_{ij} = \left\{ \begin{array}{ll} \frac{1}{\lambda _j - \lambda _i} &{} {\text {if} \quad }i \ne j~, \\ 0 &{} {\text {otherwise}}~. \end{array} \right. \end{aligned}$$
(43)

Recall that \( \exp \varvec{\Sigma }= \varvec{E}{{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) \varvec{E}^* \). From Lemma 1, by applying the chain rule, we have

$$\begin{aligned} \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }} \left[ \varvec{A}\right]&= \varvec{E}\left( \varvec{J}\odot {\bar{\varvec{A}}} \right) {{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) \varvec{E}^* \end{aligned}$$
(44)
$$\begin{aligned}&+ \varvec{E}{{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) \left( \varvec{J}\odot {\bar{\varvec{A}}} \right) ^* \varvec{E}^* \end{aligned}$$
(45)
$$\begin{aligned}&+ \varvec{E}{{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) {{\,\mathrm{diag}\,}}({\bar{\varvec{A}}}) \varvec{E}^*~. \end{aligned}$$
(46)

We have for \(i \ne j\)

$$\begin{aligned}&\left[ \left( \varvec{J}\odot {\bar{\varvec{A}}} \right) {{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) + {{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) \left( \varvec{J}\odot {\bar{\varvec{A}}} \right) ^* \right] _{ij} \end{aligned}$$
(47)
$$\begin{aligned} =&\varvec{J}_{ij} {\bar{\varvec{A}}}_{ij} e^{\lambda _j} + e^{\lambda _i} \varvec{J}_{ji} {\bar{\varvec{A}}}_{ji} = \varvec{J}_{ij} {\bar{\varvec{A}}}_{ij} e^{\lambda _j} - e^{\lambda _i} \varvec{J}_{ij} {\bar{\varvec{A}}}_{ij} \end{aligned}$$
(48)
$$\begin{aligned} =&\varvec{J}_{ij} {\bar{\varvec{A}}}_{ij} (e^{\lambda _j} - e^{\lambda _i}) = {\bar{\varvec{A}}}_{ij} \varvec{G}_{ij} = [\varvec{G}\odot {\bar{\varvec{A}}}]_{ij}~. \end{aligned}$$
(49)

For \(i = j\), since \(\varvec{J}_{ii} = 0\), we conclude the proof. \(\square \)
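The closed form (17)–(18) is easy to check numerically. The following standalone sketch (our code, assuming SciPy's expm) compares it against a central finite difference of the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
D = 4
M = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
Sigma = (M + M.conj().T) / 2        # Hermitian with (almost surely) distinct eigenvalues
N = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
A = (N + N.conj().T) / 2            # Hermitian direction

lam, E = np.linalg.eigh(Sigma)      # Sigma = E diag(lam) E*
elam = np.exp(lam)
dl = lam[:, None] - lam[None, :]
G = np.where(np.isclose(dl, 0.0),
             elam[:, None] * np.ones_like(dl),                        # diagonal: e^{lambda_i}
             (elam[:, None] - elam[None, :]) / np.where(np.isclose(dl, 0.0), 1.0, dl))
Abar = E.conj().T @ A @ E
jac = E @ (G * Abar) @ E.conj().T   # eq. (17): E [G (elementwise*) Abar] E*

t = 1e-6                            # central finite-difference reference for the same derivative
fd = (expm(Sigma + t * A) - expm(Sigma - t * A)) / (2 * t)
print(np.max(np.abs(jac - fd)))     # expected to be small (finite-difference accuracy)
```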

C Proof of Corollary 1

Corollary 1

Let \(\varvec{\Sigma }\) be a Hermitian matrix with distinct eigenvalues. The Jacobian of the matrix exponential is a self-adjoint operator

$$\begin{aligned} \left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{\varvec{\Sigma }} ^* = \left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{\varvec{\Sigma }} ~. \end{aligned}$$
(20)

Proof

We need to prove that for any two \(D \times D\) Hermitian matrices \(\varvec{A}\) and \(\varvec{B}\), we have

$$\begin{aligned} \left\langle \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }} [ \varvec{A}],\,\varvec{B} \right\rangle = \left\langle \varvec{A},\, \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }} [ \varvec{B}] \right\rangle \end{aligned}$$
(50)

for the matrix dot product \(\left\langle \varvec{X},\,\varvec{Y} \right\rangle = {{\,\mathrm{tr}\,}}[ \varvec{X}\varvec{Y}^* ]\). According to Proposition 1, this amounts to showing

$$\begin{aligned} {{\,\mathrm{tr}\,}}( \varvec{E}[ \varvec{G}\odot ( \varvec{E}^* \varvec{A}\varvec{E}) ] \varvec{E}^* \varvec{B}^* ) = {{\,\mathrm{tr}\,}}( \varvec{E}[ \varvec{G}\odot ( \varvec{E}^* \varvec{B}\varvec{E}) ] \varvec{E}^* \varvec{A}^* ) \end{aligned}$$
(51)

where \(\varvec{E}\) and \(\varvec{G}\) are defined from \(\varvec{\Sigma }\) as in Proposition 1. Denoting \({\bar{\varvec{A}}} = \varvec{E}^* \varvec{A}\varvec{E}\) and \({\bar{\varvec{B}}} = \varvec{E}^* \varvec{B}\varvec{E}\), this can be recast as

$$\begin{aligned} {{\,\mathrm{tr}\,}}( (\varvec{G}\odot {\bar{\varvec{A}}}) {\bar{\varvec{B}}}^* ) = {{\,\mathrm{tr}\,}}( (\varvec{G}\odot {\bar{\varvec{B}}}) {\bar{\varvec{A}}}^* ) \end{aligned}$$
(52)

Expanding the left-hand side allows us to conclude the proof as follows

$$\begin{aligned} {{\,\mathrm{tr}\,}}( (\varvec{G}\odot {\bar{\varvec{A}}}) {\bar{\varvec{B}}}^* )&= \sum _{k=1}^D \sum _{l=1}^D (\varvec{G}\odot {\bar{\varvec{A}}})_{kl} ({\bar{\varvec{B}}}^*)_{lk} \end{aligned}$$
(53)
$$\begin{aligned}&= \sum _{k=1}^D \sum _{l=1}^D \varvec{G}_{kl} {\bar{\varvec{A}}}_{kl} {\bar{\varvec{B}}}_{kl} \end{aligned}$$
(54)
$$\begin{aligned}&= \sum _{k=1}^D \sum _{l=1}^D \varvec{G}_{kl} {\bar{\varvec{B}}}_{kl} ({\bar{\varvec{A}}})^*_{lk} \end{aligned}$$
(55)
$$\begin{aligned}&= {{\,\mathrm{tr}\,}}( (\varvec{G}\odot {\bar{\varvec{B}}}) {\bar{\varvec{A}}}^* )~. \end{aligned}$$
(56)

\(\square \)
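A quick numerical sanity check of this identity, in the eigenbasis form of eq. (52) (a sketch of ours, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4
herm = lambda M: (M + M.conj().T) / 2
Sigma, A, B = (herm(rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
               for _ in range(3))

lam, E = np.linalg.eigh(Sigma)
elam = np.exp(lam)
dl = lam[:, None] - lam[None, :]
G = np.where(np.isclose(dl, 0.0),
             elam[:, None] * np.ones_like(dl),
             (elam[:, None] - elam[None, :]) / np.where(np.isclose(dl, 0.0), 1.0, dl))
Abar, Bbar = E.conj().T @ A @ E, E.conj().T @ B @ E

lhs = np.trace((G * Abar) @ Bbar.conj().T)   # <dexp[A], B> expressed in the eigenbasis, eq. (52)
rhs = np.trace((G * Bbar) @ Abar.conj().T)   # <A, dexp[B]> expressed in the eigenbasis
print(np.isclose(lhs, rhs))                  # True: the Jacobian is self-adjoint
```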

D Hessian of the Objective F

The second-order derivative used in the approximation of the Hessian given in (23) takes the form:

$$\begin{aligned} \varvec{d}^* \varvec{H} \varvec{d} = \beta + L \left\langle \Theta ^*[\varvec{d}],\, \left. \frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \right| _{-\Omega (\varvec{x})} \left[ e^{\Omega (\varvec{y})}, \Theta ^*[\varvec{d}] \right] \right\rangle ~. \end{aligned}$$
(24)

Proof of eq. (24)

Applying the chain rule to eq. (12) leads to

$$\begin{aligned} \varvec{H} =\;&\frac{ \partial }{\partial \varvec{x}} \left[ \beta (\varvec{x} - \varvec{z}) + L \Theta \left( \mathrm {Id}_D - \left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{-\Omega (\varvec{x})} ^*[e^{\Omega (\varvec{y})}] \right) \right] \end{aligned}$$
(57)
$$\begin{aligned} =\;&\beta \mathrm {Id}_D - L \Theta \left( \frac{ \partial }{\partial \varvec{x}} \left( \left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{-\Omega (\varvec{x})} ^*[e^{\Omega (\varvec{y})}] \right) [\;\cdot \;]\right) \end{aligned}$$
(58)
$$\begin{aligned} =\;&\beta \mathrm {Id}_D - L \Theta \left( \left. \frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \right| _{-\Omega (\varvec{x})} \left[ e^{\Omega (\varvec{y})}, \frac{ \partial -\Omega (\varvec{x})}{\partial \varvec{x}} [\;\cdot \;] \right] \right) \end{aligned}$$
(59)
$$\begin{aligned} =\;&\beta \mathrm {Id}_D + L \Theta \left( \left. \frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \right| _{-\Omega (\varvec{x})} \left[ e^{\Omega (\varvec{y})}, \Theta ^*[\;\cdot \;] \right] \right) . \end{aligned}$$
(60)

It follows that

$$\begin{aligned} \varvec{H} \varvec{d} = \beta \varvec{d} + L \Theta \left( \left. \frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \right| _{-\Omega (\varvec{x})} \left[ e^{\Omega (\varvec{y})}, \Theta ^*[\varvec{d}] \right] \right) \end{aligned}$$
(61)

and thus, as \(|\!| \varvec{d} |\!|_2 = 1\), it follows that

$$\begin{aligned}&\varvec{d}^* \varvec{H} \varvec{d} = \beta + L \varvec{d}^* \Theta \left( \left. \frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \right| _{-\Omega (\varvec{x})} \left[ e^{\Omega (\varvec{y})}, \Theta ^*[\varvec{d}] \right] \right) \end{aligned}$$
(62)
$$\begin{aligned}&= \beta + L \left\langle \varvec{d},\,\Theta \left( \left. \frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \right| _{-\Omega (\varvec{x})} \left[ e^{\Omega (\varvec{y})}, \Theta ^*[\varvec{d}] \right] \right) \right\rangle \end{aligned}$$
(63)
$$\begin{aligned}&= \beta + L \left\langle \Theta ^*[\varvec{d}],\, \left. \frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \right| _{-\Omega (\varvec{x})} \left[ e^{\Omega (\varvec{y})}, \Theta ^*[\varvec{d}] \right] \right\rangle \end{aligned}$$
(64)

which concludes the proof.\(\square \)

E Proof of Proposition 2

Proposition 2

Let \(\varvec{\Sigma }\) be a \(D \times D\) Hermitian matrix with distinct eigenvalues. Let \(\varvec{\Sigma }= \varvec{E}{{\,\mathrm{diag}\,}}(\varvec{\Lambda }) \varvec{E}^*\) be its eigendecomposition where \(\varvec{E}\) is a unitary matrix of eigenvectors and \(\varvec{\Lambda }= (\lambda _1, \ldots , \lambda _D)\) the vector of corresponding eigenvalues. For any \(D \times D\) Hermitian matrices \(\varvec{A}\) and \(\varvec{B}\), denoting \({\bar{\varvec{A}}} = \varvec{E}^* \varvec{A}\varvec{E}\) and \({\bar{\varvec{B}}} = \varvec{E}^* \varvec{B}\varvec{E}\), we have

$$\begin{aligned} \frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \big [&\varvec{A}, \varvec{B}\big ] = \varvec{E}[ \varvec{F}({\bar{\varvec{A}}}, {\bar{\varvec{B}}}) ] \varvec{E}^* \end{aligned}$$
(26)

where, for all \(1 \le i, j \le D\), we have

$$\begin{aligned} \varvec{F}({\bar{\varvec{A}}}, {\bar{\varvec{B}}})_{i,j}&= \sum _{k = 1}^D \phi _{i,j,k} ({\bar{\varvec{A}}}_{ik} {\bar{\varvec{B}}}_{jk}^* + {\bar{\varvec{B}}}_{ik} {\bar{\varvec{A}}}_{jk}^*) \end{aligned}$$
(27)
$$\begin{aligned} \text {with} \quad \phi _{i,j,k}&= \left\{ \begin{array}{ll} \frac{\varvec{G}_{ik} - \varvec{G}_{jk}}{\lambda _i - \lambda _j} &{} {\text {if} \quad }i \ne j~,\\ \frac{\varvec{G}_{ii} - \varvec{G}_{ik}}{\lambda _i - \lambda _k} &{} {\text {if} \quad }i = j \quad \text {and} \quad k \ne i~,\\ \frac{\varvec{G}_{ii}}{2} &{} {\text {if} \quad }i = j = k~.\\ \end{array} \right. \end{aligned}$$
(28)

Proof of Proposition 2

The second directional derivative can be defined from the adjoint of the directional derivative as

$$\begin{aligned} \frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \left[ \varvec{A}, \varvec{B}\right] = \frac{ \partial }{\partial \varvec{\Sigma }} \left( \left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{\varvec{\Sigma }} ^*\left[ \varvec{A}\right] \right) \left[ \varvec{B}\right] ~. \end{aligned}$$
(65)

By virtue of Corollary 1, we have \( \left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{\varvec{\Sigma }} ^* = \left. \frac{ \partial e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }}\right| _{\varvec{\Sigma }} \), and then from Proposition 1 it follows that

$$\begin{aligned} \frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \left[ \varvec{A}, \varvec{B}\right] = \frac{ \partial }{\partial \varvec{\Sigma }} \left( \varvec{E}[ \varvec{G}\odot {\bar{\varvec{A}}}] \varvec{E}^* \right) \left[ \varvec{B}\right] ~. \end{aligned}$$
(66)

In order to apply the chain rule to \(\varvec{E}[ \varvec{G}\odot {\bar{\varvec{A}}}] \varvec{E}^*\), let us first rewrite \(\varvec{J}\) and \(\varvec{G}\) in Lemma 1 and Proposition 1 as

$$\begin{aligned} \varvec{J}&= (\varvec{1}_D \varvec{1}_D^* - \mathrm {Id}_D) \oslash (\mathrm {Id}_D + \varvec{1}_D \varvec{\Lambda }^* - \varvec{\Lambda }\varvec{1}_D^*) \end{aligned}$$
(67)
$$\begin{aligned} \varvec{G}&= {{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) - (e^{\varvec{\Lambda }} \varvec{1}_D^* - \varvec{1}_D {e^{\varvec{\Lambda }}}^*) \odot \varvec{J}\end{aligned}$$
(68)

where \(\varvec{1}_D\) is a \(D\)-dimensional column vector of ones, \(\oslash \) denotes the element-wise division, and \({e^{\varvec{\Lambda }}}^*\) must be understood as the row vector \(({e^{\varvec{\Lambda }}})^*\). From Lemma 1, we have for a Hermitian matrix \(\varvec{B}\)

$$\begin{aligned}&\frac{ \partial }{\partial \varvec{\Sigma }} {{\,\mathrm{diag}\,}}\left( e^{\varvec{\Lambda }} \right) \left[ \varvec{B}\right] = {{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) \odot {\bar{\varvec{B}}} \end{aligned}$$
(69)
$$\begin{aligned}&\frac{ \partial }{\partial \varvec{\Sigma }} \big ( e^{\varvec{\Lambda }} \varvec{1}_D^* - \varvec{1}_D {e^{\varvec{\Lambda }}}^* \big ) \left[ \varvec{B}\right] \end{aligned}$$
(70)
$$\begin{aligned}& = {{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) {{\,\mathrm{diag}\,}}({\bar{\varvec{B}}}) \varvec{1}_D^* - \varvec{1}_D {{\,\mathrm{diag}\,}}({\bar{\varvec{B}}})^* {{\,\mathrm{diag}\,}}({e^{\varvec{\Lambda }}}) \nonumber \\&\frac{ \partial }{\partial \varvec{\Sigma }} \left( \mathrm {Id}_D + \varvec{\Lambda }\varvec{1}_D^* - \varvec{1}_D \varvec{\Lambda }^* \right) \left[ \varvec{B}\right] = {{\,\mathrm{diag}\,}}({\bar{\varvec{B}}}) \varvec{1}_D^* - \varvec{1}_D {{\,\mathrm{diag}\,}}({\bar{\varvec{B}}})^* \end{aligned}$$
(71)

where \({\bar{\varvec{B}}} = \varvec{E}^* \varvec{B}\varvec{E}\). By application of the chain rule, we get

$$\begin{aligned} \frac{ \partial \varvec{J}}{\partial \varvec{\Sigma }}&\left[ \varvec{B}\right] = -( {{\,\mathrm{diag}\,}}({\bar{\varvec{B}}}) \varvec{1}_D^* - \varvec{1}_D {{\,\mathrm{diag}\,}}({\bar{\varvec{B}}})^*) \odot \varvec{J}\odot \varvec{J}\end{aligned}$$
(72)
$$\begin{aligned} \frac{ \partial \varvec{G}}{\partial \varvec{\Sigma }}&\left[ \varvec{B}\right] = {{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) \odot {\bar{\varvec{B}}} \end{aligned}$$
(73)
$$\begin{aligned}&- \big [ {{\,\mathrm{diag}\,}}(e^{\varvec{\Lambda }}) {{\,\mathrm{diag}\,}}({\bar{\varvec{B}}}) \varvec{1}_D^* - \varvec{1}_D {{\,\mathrm{diag}\,}}({\bar{\varvec{B}}})^* {{\,\mathrm{diag}\,}}({e^{\varvec{\Lambda }}}) \nonumber \\& - \varvec{G}\odot \left( {{\,\mathrm{diag}\,}}({\bar{\varvec{B}}}) \varvec{1}_D^* - \varvec{1}_D {{\,\mathrm{diag}\,}}({\bar{\varvec{B}}})^* \right) \big ] \odot \varvec{J}\nonumber \end{aligned}$$
(74)

where we used that \(\big ( e^{\varvec{\Lambda }} \varvec{1}_D^* - \varvec{1}_D {e^{\varvec{\Lambda }}}^* \big ) \odot \varvec{J}= -\varvec{G}\odot \varvec{J}\). Let \(\varvec{A}\) be a Hermitian matrix and \({\bar{\varvec{A}}} = \varvec{E}^* \varvec{A}\varvec{E}\). By Lemma 1, we have

$$\begin{aligned} \frac{ \partial {\bar{\varvec{A}}}}{\partial \varvec{\Sigma }} \left[ \varvec{B}\right] =&{\bar{\varvec{A}}} \left( \varvec{J}\odot {\bar{\varvec{B}}} \right) - \left( \varvec{J}\odot {\bar{\varvec{B}}} \right) {\bar{\varvec{A}}}~. \end{aligned}$$
(75)

We are now equipped to apply the chain rule to \(\varvec{E}[ \varvec{G}\odot {\bar{\varvec{A}}}] \varvec{E}^*\) in the direction \(\varvec{B}\), which leads us to

$$\begin{aligned}&\frac{ \partial ^2 e^{\varvec{\Sigma }}}{\partial \varvec{\Sigma }^2} \left[ \varvec{A}, \varvec{B}\right] = \varvec{E}[ \varvec{F}({\bar{\varvec{A}}}, {\bar{\varvec{B}}}) ] \varvec{E}^* \end{aligned}$$
(76)
$$\begin{aligned} \text {with} \quad&\varvec{F}({\bar{\varvec{A}}}, {\bar{\varvec{B}}}) = \varvec{F}_1 + \varvec{F}_2 + \varvec{F}_3\\ \text {and} \quad&\varvec{F}_1 = \left( \varvec{J}\odot {\bar{\varvec{B}}} \right) \left( \varvec{G}\odot {\bar{\varvec{A}}} \right) - \left( \varvec{G}\odot {\bar{\varvec{A}}} \right) \left( \varvec{J}\odot {\bar{\varvec{B}}} \right) \end{aligned}$$
(77)
$$\begin{aligned}&\varvec{F}_2 = \varvec{G}\odot \left[ {\bar{\varvec{A}}} \left( \varvec{J}\odot {\bar{\varvec{B}}} \right) - \left( \varvec{J}\odot {\bar{\varvec{B}}} \right) {\bar{\varvec{A}}} \right] \end{aligned}$$
(78)
$$\begin{aligned}&\varvec{F}_3 = \frac{ \partial \varvec{G}}{\partial \varvec{\Sigma }} \left[ \varvec{B}\right] \odot \bar{\varvec{A}}~. \end{aligned}$$
(79)

We have for all \(1 \le i \le D\) and \(1 \le j \le D\)

$$\begin{aligned}&[\varvec{F}_1]_{ij} = \sum _{k = 1}^D \varvec{J}_{jk} \varvec{G}_{ik} {\bar{\varvec{A}}}_{ik} {\bar{\varvec{B}}}_{jk}^* + \varvec{J}_{ik} \varvec{G}_{jk} {\bar{\varvec{B}}}_{ik} {\bar{\varvec{A}}}_{jk}^* \end{aligned}$$
(80)
$$\begin{aligned} \text {and} \quad&[\varvec{F}_2]_{ij} = -\varvec{G}_{ij} \sum _{k = 1}^D \varvec{J}_{jk} {\bar{\varvec{A}}}_{ik} {\bar{\varvec{B}}}_{jk}^* + \varvec{J}_{ik} {\bar{\varvec{B}}}_{ik} {\bar{\varvec{A}}}_{jk}^*~. \end{aligned}$$
(81)

Hence, we get

$$\begin{aligned}{}[\varvec{F}_1 + \varvec{F}_2]_{ij} = \sum _{k = 1}^D&\varvec{J}_{jk} (\varvec{G}_{ik} - \varvec{G}_{ij}) {\bar{\varvec{A}}}_{ik} {\bar{\varvec{B}}}_{jk}^*\\ + \sum _{k = 1}^D&\varvec{J}_{ik} (\varvec{G}_{jk} - \varvec{G}_{ij}) {\bar{\varvec{B}}}_{ik} {\bar{\varvec{A}}}_{jk}^*~. \nonumber \end{aligned}$$
(82)

\(\bullet \) Assume \(i \ne j\). We have

$$\begin{aligned}{}[\varvec{F}_3]_{ij} = -&[\varvec{G}_{ii} {\bar{\varvec{B}}}_{ii} - \varvec{G}_{jj} {\bar{\varvec{B}}}_{jj}\nonumber \\&- \varvec{G}_{ij} ({\bar{\varvec{B}}}_{ii} - {\bar{\varvec{B}}}_{jj})] \varvec{J}_{ij} {\bar{\varvec{A}}}_{ij} \end{aligned}$$
(83)
$$\begin{aligned} = \quad&\varvec{J}_{ij} (\varvec{G}_{jj} - \varvec{G}_{ij}) {\bar{\varvec{B}}}_{jj} {\bar{\varvec{A}}}_{ij}\nonumber \\ -&\varvec{J}_{ij} (\varvec{G}_{ii} - \varvec{G}_{ij}) {\bar{\varvec{A}}}_{ij} {\bar{\varvec{B}}}_{ii}. \end{aligned}$$
(84)

For \(k \ne i\) and \(k \ne j\), we have

$$\begin{aligned}&\varvec{J}_{ik} (\varvec{G}_{jk} - \varvec{G}_{ij}) - \varvec{J}_{ij} (\varvec{G}_{jk} - \varvec{G}_{ik}) \end{aligned}$$
(85)
$$\begin{aligned}&= \quad \frac{ e^{\lambda _j} - e^{\lambda _k} }{ (\lambda _j - \lambda _k)(\lambda _k - \lambda _i) } - \frac{ e^{\lambda _i} - e^{\lambda _j} }{ (\lambda _i - \lambda _j)(\lambda _k - \lambda _i) } \end{aligned}$$
(86)
$$\begin{aligned}&- \frac{ e^{\lambda _j} - e^{\lambda _k} }{ (\lambda _j - \lambda _k)(\lambda _j - \lambda _i) } + \frac{ e^{\lambda _i} - e^{\lambda _k} }{ (\lambda _i - \lambda _k)(\lambda _j - \lambda _i) } \nonumber \\&=\quad \frac{ (\lambda _i - \lambda _j)(e^{\lambda _j} - e^{\lambda _k}) - (\lambda _j - \lambda _k)(e^{\lambda _i} - e^{\lambda _j}) }{ (\lambda _i - \lambda _j)(\lambda _j - \lambda _k)(\lambda _k - \lambda _i) } \end{aligned}$$
(87)
$$\begin{aligned}&+ \frac{ (\lambda _k - \lambda _i)(e^{\lambda _j} - e^{\lambda _k}) + (\lambda _j - \lambda _k)(e^{\lambda _i} - e^{\lambda _k}) }{ (\lambda _i - \lambda _j)(\lambda _j - \lambda _k)(\lambda _k - \lambda _i) } \nonumber \\&=\quad \frac{ e^{\lambda _i}(\lambda _k - \lambda _j + \lambda _j - \lambda _k) }{ (\lambda _i - \lambda _j)(\lambda _j - \lambda _k)(\lambda _k - \lambda _i) } \end{aligned}$$
(88)
$$\begin{aligned}&+ \frac{ e^{\lambda _j}(\lambda _i - \lambda _j + \lambda _j - \lambda _k + \lambda _k - \lambda _i) }{ (\lambda _i - \lambda _j)(\lambda _j - \lambda _k)(\lambda _k - \lambda _i) } \nonumber \\&+ \frac{ e^{\lambda _k}(\lambda _j - \lambda _i + \lambda _i - \lambda _k + \lambda _k - \lambda _j) }{ (\lambda _i - \lambda _j)(\lambda _j - \lambda _k)(\lambda _k - \lambda _i) } \nonumber \\&=\quad 0 . \end{aligned}$$
(89)

Similarly, we have \(\varvec{J}_{jk} (\varvec{G}_{ik} - \varvec{G}_{ij}) = \varvec{J}_{ij} (\varvec{G}_{jk} - \varvec{G}_{ik})\). Hence, we get the following

$$\begin{aligned}{}[\varvec{F}_1 + \varvec{F}_2]_{ij} = \sum _{k \ne i, k \ne j}&\varvec{J}_{ij} (\varvec{G}_{jk} - \varvec{G}_{ik}) ({\bar{\varvec{A}}}_{ik} {\bar{\varvec{B}}}_{jk}^* + {\bar{\varvec{B}}}_{ik} {\bar{\varvec{A}}}_{jk}^*) \nonumber \\ - \;&\varvec{J}_{ij} (\varvec{G}_{ii} - \varvec{G}_{ij}) {\bar{\varvec{A}}}_{ii} {\bar{\varvec{B}}}_{ji}^* \nonumber \\ + \;&\varvec{J}_{ij} (\varvec{G}_{jj} - \varvec{G}_{ij}) {\bar{\varvec{B}}}_{ij} {\bar{\varvec{A}}}_{jj}^*~. \end{aligned}$$
(90)

It follows that

$$\begin{aligned}{}[\varvec{F}_1 + \varvec{F}_2 \;+\;&\varvec{F}_3]_{ij} \end{aligned}$$
(91)
$$\begin{aligned} = \sum _{k \ne i, k \ne j}&\varvec{J}_{ij} (\varvec{G}_{jk} - \varvec{G}_{ik}) ({\bar{\varvec{A}}}_{ik} {\bar{\varvec{B}}}_{jk}^* + {\bar{\varvec{B}}}_{ik} {\bar{\varvec{A}}}_{jk}^*) \nonumber \\ + \;&\varvec{J}_{ij} (\varvec{G}_{ji} - \varvec{G}_{ii}) ({\bar{\varvec{A}}}_{ii} {\bar{\varvec{B}}}_{ji}^* + {\bar{\varvec{B}}}_{ii} {\bar{\varvec{A}}}_{ji}^*) \nonumber \\ + \;&\varvec{J}_{ij} (\varvec{G}_{jj} - \varvec{G}_{ij}) ({\bar{\varvec{A}}}_{ij} {\bar{\varvec{B}}}_{jj}^* + {\bar{\varvec{B}}}_{ij} {\bar{\varvec{A}}}_{jj}^*) \nonumber \\ = \sum _{k=1}^D&\underbrace{\varvec{J}_{ij} (\varvec{G}_{jk} - \varvec{G}_{ik})}_{\phi _{i,j,k}} ({\bar{\varvec{A}}}_{ik} {\bar{\varvec{B}}}_{jk}^* + {\bar{\varvec{B}}}_{ik} {\bar{\varvec{A}}}_{jk}^*) ~. \end{aligned}$$
(92)

\(\bullet \) Now assume that \(i = j\). We have \([\varvec{F}_3]_{ii} = \varvec{G}_{ii} {\bar{\varvec{A}}}_{ii} {\bar{\varvec{B}}}_{ii}\). It follows that

$$\begin{aligned}{}[\varvec{F}_1 \;+\;&\varvec{F}_2 + \varvec{F}_3]_{ii} \end{aligned}$$
(93)
$$\begin{aligned} =&\sum _{k = 1}^D \varvec{J}_{ik} (\varvec{G}_{ik} - \varvec{G}_{ii}) ({\bar{\varvec{A}}}_{ik} {\bar{\varvec{B}}}_{ik}^* + {\bar{\varvec{B}}}_{ik} {\bar{\varvec{A}}}_{ik}^*) \end{aligned}$$
(94)
$$\begin{aligned}&\quad + \varvec{G}_{ii} {\bar{\varvec{A}}}_{ii} {\bar{\varvec{B}}}_{ii} \end{aligned}$$
(95)
$$\begin{aligned} =&\sum _{k \ne i} \underbrace{\varvec{J}_{ik} (\varvec{G}_{ik} - \varvec{G}_{ii})}_{\phi _{i,i,k}} ({\bar{\varvec{A}}}_{ik} {\bar{\varvec{B}}}_{ik}^* + {\bar{\varvec{B}}}_{ik} {\bar{\varvec{A}}}_{ik}^*)\\&\quad + \underbrace{\frac{1}{2} \varvec{G}_{ii}}_{\phi _{i,i,i}} ({\bar{\varvec{A}}}_{ii} {\bar{\varvec{B}}}_{ii}^* + {\bar{\varvec{B}}}_{ii} {\bar{\varvec{A}}}_{ii}^*) \nonumber \end{aligned}$$
(96)

which concludes the proof. \(\square \)
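To make Proposition 2 concrete, here is a standalone numerical sketch (our code, not the authors'). It assembles \(\varvec{F}({\bar{\varvec{A}}}, {\bar{\varvec{B}}})\) from the coefficients \(\phi _{i,j,k}\) of eq. (28) and checks the result against a central finite difference of the first-order Jacobian of Proposition 1:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 3
herm = lambda M: (M + M.conj().T) / 2
Sigma, A, B = (herm(rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
               for _ in range(3))

def divided_differences(lam):
    """First divided differences of exp: the matrix G of eq. (18)."""
    elam = np.exp(lam)
    dl = lam[:, None] - lam[None, :]
    return np.where(np.isclose(dl, 0.0),
                    elam[:, None] * np.ones_like(dl),
                    (elam[:, None] - elam[None, :]) / np.where(np.isclose(dl, 0.0), 1.0, dl))

def dexp(S, A):
    """First-order directional derivative of the matrix exponential (Proposition 1)."""
    lam, E = np.linalg.eigh(S)
    G = divided_differences(lam)
    return E @ (G * (E.conj().T @ A @ E)) @ E.conj().T

def d2exp(S, A, B):
    """Second-order directional derivative built from phi_{i,j,k} (eqs. (26)-(28))."""
    lam, E = np.linalg.eigh(S)
    G = divided_differences(lam)
    Abar, Bbar = E.conj().T @ A @ E, E.conj().T @ B @ E
    n = S.shape[0]
    F = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if i != j:
                    phi = (G[i, k] - G[j, k]) / (lam[i] - lam[j])
                elif k != i:
                    phi = (G[i, i] - G[i, k]) / (lam[i] - lam[k])
                else:
                    phi = G[i, i] / 2
                F[i, j] += phi * (Abar[i, k] * np.conj(Bbar[j, k])
                                  + Bbar[i, k] * np.conj(Abar[j, k]))
    return E @ F @ E.conj().T

t = 1e-5   # central finite difference of the Proposition 1 Jacobian, used as a reference
fd = (dexp(Sigma + t * B, A) - dexp(Sigma - t * B, A)) / (2 * t)
print(np.max(np.abs(d2exp(Sigma, A, B) - fd)))   # expected to be small
```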

Cite this article

Deledalle, CA., Denis, L. & Tupin, F. Speckle Reduction in Matrix-Log Domain for Synthetic Aperture Radar Imaging. J Math Imaging Vis 64, 298–320 (2022). https://doi.org/10.1007/s10851-022-01067-1
