
Abstract

Plug-and-Play (PnP) and Regularization-by-Denoising (RED) are recent paradigms for image reconstruction that leverage the power of modern denoisers for image regularization. In particular, they have been shown to deliver state-of-the-art reconstructions with CNN denoisers. Since the regularization is performed in an ad hoc manner, understanding the convergence of PnP and RED has been an active research area. Recent works have shown that iterate convergence can be guaranteed if the denoiser is averaged or nonexpansive. However, integrating nonexpansivity with gradient-based learning is challenging; the core issue is that testing nonexpansivity is intractable. Using numerical examples, we show that existing CNN denoisers tend to violate the nonexpansive property, which can cause PnP or RED to diverge. Indeed, existing algorithms for training nonexpansive denoisers either cannot guarantee nonexpansivity or are computationally intensive. In this work, we construct contractive and averaged image denoisers by unfolding splitting-based optimization algorithms applied to wavelet denoising, and we demonstrate that their regularization capacity for PnP and RED matches that of CNN denoisers. To our knowledge, this is the first work to propose a simple framework for training contractive denoisers using network unfolding.


References

1. Ribes, A., Schmitt, F.: Linear inverse problems in imaging. IEEE Signal Process. Mag. 25(4), 84–99 (2008)
2. Engl, H.W., Hanke, M., Neubauer, A.: Regularization of Inverse Problems. Kluwer Academic Publishers, Dordrecht, Netherlands (1996)
3. Dong, W., Zhang, L., Shi, G., Wu, X.: Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 20(7), 1838–1857 (2011)
4. Jagatap, G., Hegde, C.: Algorithmic guarantees for inverse imaging with untrained network priors. Proc. Adv. Neural Inf. Process. Syst. 32, 14832–14842 (2019)
5. Chan, S.H., Wang, X., Elgendy, O.A.: Plug-and-play ADMM for image restoration: fixed-point convergence and applications. IEEE Trans. Comput. Imaging 3(1), 84–98 (2017)
6. Rond, A., Giryes, R., Elad, M.: Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016)
7. Bioucas-Dias, J.M., Figueiredo, M.: Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 19(7), 1720–1730 (2010)
8. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60(1–4), 259–268 (1992)
9. Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted \(l_1\) minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008)
10. Dong, W., Li, X., Zhang, L., Shi, G.: Sparsity-based image denoising via dictionary learning and structural clustering. Proc. IEEE Conf. Comp. Vis. Pattern Recognit., pp. 457–464 (2011)
11. Zhang, L., Zuo, W.: Image restoration: from sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Process. Mag. 34(5), 172–179 (2017)
12. Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979)
13. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, New York, NY, USA (2017)
14. Ryu, E., Boyd, S.: Primer on monotone operator methods. Appl. Comput. Math. 15(1), 3–43 (2016)
15. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
16. Sreehari, S., Venkatakrishnan, S.V., Wohlberg, B., Buzzard, G.T., Drummy, L.F., Simmons, J.P., Bouman, C.A.: Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging 2(4), 408–423 (2016)
17. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
18. Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
19. Ryu, E., Liu, J., Wang, S., Chen, X., Wang, Z., Yin, W.: Plug-and-play methods provably converge with properly trained denoisers. Proc. Intl. Conf. Mach. Learn. 97, 5546–5557 (2019)
20. Sun, Y., Wohlberg, B., Kamilov, U.S.: An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging 5(3), 395–408 (2019)
21. Zhang, K., Zuo, W., Gu, S., Zhang, L.: Learning deep CNN denoiser prior for image restoration. Proc. IEEE Conf. Comp. Vis. Pattern Recognit., pp. 3929–3938 (2017)
22. Tirer, T., Giryes, R.: Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process. 28(3), 1220–1234 (2019)
23. Zhang, K., Li, Y., Zuo, W., Zhang, L., Van Gool, L., Timofte, R.: Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
24. Hurault, S., Leclaire, A., Papadakis, N.: Gradient step denoiser for convergent plug-and-play. Proc. Int. Conf. Learn. Represent. (2022)
25. Hurault, S., Leclaire, A., Papadakis, N.: Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization. arXiv:2201.13256 (2022)
26. Romano, Y., Elad, M., Milanfar, P.: The little engine that could: regularization by denoising (RED). SIAM J. Imaging Sci. 10(4), 1804–1844 (2017)
27. Reehorst, E.T., Schniter, P.: Regularization by denoising: clarifications and new interpretations. IEEE Trans. Comput. Imaging 5(1), 52–67 (2018)
28. Sun, Y., Liu, J., Kamilov, U.S.: Block coordinate regularization by denoising. Proc. Adv. Neural Inf. Process. Syst., pp. 380–390 (2019)
29. Sun, Y., Wu, Z., Xu, X., Wohlberg, B., Kamilov, U.S.: Scalable plug-and-play ADMM with convergence guarantees. IEEE Trans. Comput. Imaging 7, 849–863 (2021)
30. Cohen, R., Elad, M., Milanfar, P.: Regularization by denoising via fixed-point projection (RED-PRO). SIAM J. Imaging Sci. 14(3), 1374–1406 (2021)
31. Metzler, C., Schniter, P., Veeraraghavan, A., et al.: prDeep: robust phase retrieval with a flexible deep network. Proc. Intl. Conf. Mach. Learn., pp. 3501–3510 (2018)
32. Wu, Z., Sun, Y., Matlock, A., Liu, J., Tian, L., Kamilov, U.S.: SIMBA: scalable inversion in optical tomography using deep denoising priors. IEEE J. Sel. Top. Signal Process. 14(6), 1163–1175 (2020)
33. Wu, Z., Sun, Y., Liu, J., Kamilov, U.: Online regularization by denoising with applications to phase retrieval. Proc. IEEE Conf. Comp. Vis. Pattern Recognit. Wkshp. (2019)
34. Mataev, G., Milanfar, P., Elad, M.: DeepRED: deep image prior powered by RED. Proc. IEEE Intl. Conf. Comp. Vis. Wkshp. (2019)
35. Hu, Y., Liu, J., Xu, X., Kamilov, U.S.: Monotonically convergent regularization by denoising. arXiv:2202.04961 (2022)
36. Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. Proc. Intl. Conf. Mach. Learn., pp. 399–406 (2010)
37. Sun, J., Li, H., Xu, Z., et al.: Deep ADMM-Net for compressive sensing MRI. Proc. Adv. Neural Inf. Process. Syst. 29 (2016)
38. Zhang, J., Ghanem, B.: ISTA-Net: interpretable optimization-inspired deep network for image compressive sensing. Proc. IEEE Intl. Conf. Comp. Vis., pp. 1828–1837 (2018)
39. Repetti, A., Terris, M., Wiaux, Y., Pesquet, J.C.: Dual forward-backward unfolded network for flexible plug-and-play. Proc. Eur. Signal Process. Conf., pp. 957–961 (2022)
40. Athalye, C.D., Chaudhury, K.N., Kumar, B.: On the contractivity of plug-and-play operators. IEEE Signal Process. Lett. 30, 1447–1451 (2023)
41. Nair, P., Gavaskar, R.G., Chaudhury, K.N.: Fixed-point and objective convergence of plug-and-play algorithms. IEEE Trans. Comput. Imaging 7, 337–348 (2021)
42. Gavaskar, R.G., Athalye, C.D., Chaudhury, K.N.: On plug-and-play regularization using linear denoisers. IEEE Trans. Image Process. 30, 4802–4813 (2021)
43. Liu, J., Asif, S., Wohlberg, B., Kamilov, U.: Recovery analysis for plug-and-play priors using the restricted eigenvalue condition. Proc. Adv. Neural Inf. Process. Syst. 34, 5921–5933 (2021)
44. Raj, A., Li, Y., Bresler, Y.: GAN-based projector for faster recovery with convergence guarantees in linear inverse problems. Proc. IEEE Intl. Conf. Comp. Vis., pp. 5602–5611 (2019)
45. Cohen, R., Blau, Y., Freedman, D., Rivlin, E.: It has potential: gradient-driven denoisers for convergent solutions to inverse problems. Proc. Adv. Neural Inf. Process. Syst. 34, 18152–18164 (2021)
46. Pesquet, J.C., Repetti, A., Terris, M., Wiaux, Y.: Learning maximally monotone operators for image recovery. SIAM J. Imaging Sci. 14(3), 1206–1237 (2021)
47. Laumont, R., De Bortoli, V., Almansa, A., Delon, J., Durmus, A., Pereyra, M.: On maximum a posteriori estimation with plug & play priors and stochastic gradient descent. J. Math. Imaging Vision 65(1), 140–163 (2023)
48. Virmaux, A., Scaman, K.: Lipschitz regularity of deep neural networks: analysis and efficient estimation. Proc. Adv. Neural Inf. Process. Syst. 31 (2018)
49. Sedghi, H., Gupta, V., Long, P.M.: The singular values of convolutional layers. Proc. Int. Conf. Learn. Represent. (2019)
50. Hertrich, J., Neumayer, S., Steidl, G.: Convolutional proximal neural networks and plug-and-play algorithms. Linear Algebra Appl. 631, 203–234 (2021)
51. Terris, M., Repetti, A., Pesquet, J.C., Wiaux, Y.: Building firmly nonexpansive convolutional neural networks. Proc. IEEE Int. Conf. Acoust. Speech Signal Process., pp. 8658–8662 (2020)
52. Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1(3), 127–239 (2014)
53. Eckstein, J., Bertsekas, D.P.: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
54. Pata, V.: Fixed Point Theorem and Applications. Springer, Berlin (2019)
55. Baillon, J.B., Haddad, G.: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26, 137–150 (1977)
56. Chambolle, A., De Vore, R.A., Lee, N.Y., Lucier, B.J.: Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
57. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv:1412.6980 (2014)
58. Levin, A., Weiss, Y., Durand, F., Freeman, W.T.: Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Comp. Vis. Pattern Recognit., pp. 1964–1971 (2009)


Funding

K. N. Chaudhury was supported by research grants CRG/2020/000527 and STR/2021/000011 from the Science and Engineering Research Board, Government of India.

Author information


Contributions

In this work, we construct a deep denoiser that is provably contractive or averaged. To the best of our knowledge, we are the first to construct a learning-based deep denoiser that can be formally proven to be contractive. By plugging our denoisers into the Plug-and-Play (PnP) and Regularization-by-Denoising (RED) frameworks for image restoration, we are able to match the results obtained with top-performing CNN denoisers. The highlight of our approach is that we can guarantee theoretical convergence of the PnP and RED iterates for our denoiser. We believe that this should be of interest to the mathematical imaging community, since certifying and training a nonexpansive CNN is known to be a challenging task. Indeed, we present counterexamples showing that existing CNN denoisers often violate the nonexpansive property.

Corresponding author

Correspondence to Pravin Nair.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


We prove Proposition 11 in this section. First, we recall some notation from Sect. 4.4.

1. For integers \(p \leqslant q\), we denote \([p,q]=\{p,p+1,\ldots ,q\}, \, [p,q]_s=\{ps,(p+1)s,\ldots , qs\}, \, [p,q]^2=[p,q] \times [p,q]\), and \([p,q]_s^2=[p,q]_s \times [p,q]_s\).

2. The input to the denoiser is an image \(\textbf{X}: \Omega \rightarrow \mathbb {R}\), where \(\Omega =[0,q-1]^2\). We also consider the image in matrix form as \(\textbf{X}\in \mathbb {R}^{q \times q}\).

3. We use circular shifts: for \(\varvec{i}\in \Omega \) and \(\varvec{\tau }\in \mathbb {Z}^2\), \(\varvec{i}- \varvec{\tau }\) is defined as \(\left( (i_1 - \tau _1) \ \textrm{mod} \ q, (i_2 - \tau _2) \ \textrm{mod} \ q\right) \).

4. We extract patches with stride s. More precisely, the starting coordinates of the patches are

    $$\begin{aligned} \mathcal {J}= [0, (q_s-1)]_s^2, \end{aligned}$$

    where \(q_s = q/s \). The cardinality of \(\mathcal {J}\) is \(q_s^2\) (total number of patches), and each pixel belongs to \(k_s^2\) patches, where \(k_s = k/s\).

5. For \(\varvec{i}\in \Omega \), the patch operator \(\mathcal {P}_{\varvec{i}}: \mathbb {R}^{q \times q} \rightarrow \mathbb {R}^{p}\) is defined in (17).

We now elaborate on the definition of the adjoint in (18). For a patch vector \(\varvec{z}\in \mathbb {R}^p\), if we let \(\textbf{Y}=\mathcal {P}^*_{\varvec{i}}(\varvec{z})\), then \(\mathcal {P}_{\varvec{i}}(\textbf{Y}) = \varvec{z}\) and \(\textbf{Y}(\varvec{j}) = 0\) for all \(\varvec{j}\notin \Omega _{\varvec{i}}\), where

$$\begin{aligned} \Omega _{\varvec{i}}= \left\{ \varvec{i}+ \varvec{\tau }: \ \varvec{\tau }\in [0,k-1]^2\right\} . \end{aligned}$$

This completely specifies \(\textbf{Y}\). In particular, we have the following property of an adjoint: for all \(\textbf{X}\in \mathbb {R}^{q \times q}\) and \(\varvec{z}\in \mathbb {R}^{p}\),

$$\begin{aligned} \langle \varvec{z}, \mathcal {P}_{\varvec{i}}(\textbf{X}) \rangle _{\mathbb {R}^p} = \langle \mathcal {P}_{\varvec{i}}^*(\varvec{z}), \textbf{X}\rangle _{\mathbb {R}^{q \times q}}, \end{aligned}$$
(21)

where the inner product on the left (resp. right) is the standard Euclidean inner product on \(\mathbb {R}^p\) (resp. \(\mathbb {R}^{q \times q}\)).
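As a sanity check, the patch operator and its adjoint can be implemented and tested numerically. The sketch below assumes, consistent with the circular shifts and the support set \(\Omega _{\varvec{i}}\) above, that \(\mathcal {P}_{\varvec{i}}\) reads the \(k \times k\) patch whose top-left corner is \(\varvec{i}\), with indices wrapped modulo q; the helper names extract_patch and adjoint_patch are illustrative and not from the paper.

```python
# Minimal sketch of the patch operator P_i and its adjoint P_i^* (circular
# indexing), together with a numerical check of the adjoint identity (21).
import numpy as np

q, k = 8, 4   # image size and patch size (q a multiple of k)
p = k * k     # patch dimension

def extract_patch(X, i):
    """P_i: read the k x k patch starting at i = (i1, i2), wrapping mod q."""
    rows = (i[0] + np.arange(k)) % q
    cols = (i[1] + np.arange(k)) % q
    return X[np.ix_(rows, cols)].reshape(p)

def adjoint_patch(z, i):
    """P_i^*: place the patch vector z on the support Omega_i, zeros elsewhere."""
    Y = np.zeros((q, q))
    rows = (i[0] + np.arange(k)) % q
    cols = (i[1] + np.arange(k)) % q
    Y[np.ix_(rows, cols)] = z.reshape(k, k)
    return Y

# Check (21): <z, P_i(X)> on R^p equals <P_i^*(z), X> on R^{q x q}.
rng = np.random.default_rng(0)
X = rng.standard_normal((q, q))
z = rng.standard_normal(p)
i = (6, 3)  # a location where the patch wraps around the boundary
assert np.isclose(z @ extract_patch(X, i), np.sum(adjoint_patch(z, i) * X))
```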

The main idea behind Proposition 11 is to express the original system of overlapping patches in terms of non-overlapping patches. In particular, let \(q_k=q/k\); since we assume q to be a multiple of k, \(q_k\) is an integer. Let

$$\begin{aligned} \mathcal {J}_0=[0, (q_k-1)]_k^2. \end{aligned}$$

By construction, \(\mathcal {J}_0 \subseteq \mathcal {J}\) and the points in \(\mathcal {J}_0\) are the starting coordinates of non-overlapping patches. It is not difficult to see that \(\mathcal {J}\) is the disjoint union of (circular) shifts of \(\mathcal {J}_0\):

$$\begin{aligned} \mathcal {J}= \bigcup _{\varvec{i}\in [0,k_s-1]^2} \, \big ( \mathcal {J}_0 + s \varvec{i}\big ), \end{aligned}$$
(22)

where \([0,k_s-1]:=\{0, 1, \ldots , k_s-1\}\).

Using (22), we can decompose (18) as follows:

$$\begin{aligned} \textrm{D}= \frac{1}{k_s^2} \sum _{ \varvec{i}\in [0, k_s-1]^2 } \textrm{D}_{\varvec{i}}, \end{aligned}$$
(23)

where

$$\begin{aligned} \textrm{D}_{\varvec{i}} = \sum _{\varvec{j}\in \mathcal {J}_0} \ \left( \mathcal {P}^*_{\varvec{j}+s\varvec{i}} \circ \textrm{D}_{\textrm{patch}} \circ \mathcal {P}_{\varvec{j}+s\varvec{i}}\right) . \end{aligned}$$
(24)
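As a concrete illustration of (23)-(24), the sketch below assembles the image denoiser \(\textrm{D}\) by averaging, over the \(k_s^2\) shifts, the operators \(\textrm{D}_{\varvec{i}}\) built from non-overlapping patches. Soft-thresholding is used only as a stand-in for \(\textrm{D}_{\textrm{patch}}\) (it is 1-Lipschitz, so the assembled \(\textrm{D}\) should be nonexpansive, which the last lines probe empirically); all helper names are illustrative, and the patch helpers repeat the previous sketch so that this block is self-contained.

```python
# Sketch of the aggregation (23)-(24): circular k x k patches, stride s,
# and a placeholder 1-Lipschitz patch denoiser (soft-thresholding).
import numpy as np

q, k, s = 8, 4, 2
q_k, k_s, p = q // k, k // s, k * k

def extract_patch(X, i):
    rows, cols = (i[0] + np.arange(k)) % q, (i[1] + np.arange(k)) % q
    return X[np.ix_(rows, cols)].reshape(p)

def adjoint_patch(z, i):
    Y = np.zeros((q, q))
    rows, cols = (i[0] + np.arange(k)) % q, (i[1] + np.arange(k)) % q
    Y[np.ix_(rows, cols)] = z.reshape(k, k)
    return Y

def D_patch(z, lam=0.1):
    # placeholder patch denoiser; any 1-Lipschitz map would do here
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# J_0: starting coordinates of the non-overlapping patches.
J0 = [(a * k, b * k) for a in range(q_k) for b in range(q_k)]

def D_i(X, shift):
    # D_i in (24): patch starts are J_0 shifted by s * shift (circularly).
    starts = [((j0 + s * shift[0]) % q, (j1 + s * shift[1]) % q) for (j0, j1) in J0]
    return sum(adjoint_patch(D_patch(extract_patch(X, i)), i) for i in starts)

def D(X):
    # (23): average of D_i over the k_s^2 shifts.
    shifts = [(a, b) for a in range(k_s) for b in range(k_s)]
    return sum(D_i(X, sh) for sh in shifts) / (k_s ** 2)

rng = np.random.default_rng(1)
X, Xp = rng.standard_normal((q, q)), rng.standard_normal((q, q))
ratio = np.linalg.norm(D(X) - D(Xp)) / np.linalg.norm(X - Xp)
print(ratio <= 1.0)  # D inherits the nonexpansivity of D_patch (Proposition 11)
```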

The proof of Proposition 11 rests on a couple of observations. The first is an observation about operator averaging. We say that an operator \(\textrm{T}\) is L-contractive if its Lipschitz constant is at most L.

Lemma 12

Let \(\textrm{T}_1, \ldots , \textrm{T}_k\) be L-contractive. Then, their average,

$$\begin{aligned} \frac{1}{k} (\textrm{T}_1 + \cdots + \textrm{T}_k), \end{aligned}$$
(25)

is L-contractive. On the other hand, if \(\textrm{T}_1, \ldots , \textrm{T}_k\) are \(\theta \)-averaged, then (25) is a \(\theta \)-averaged operator.
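For linear operators, Lemma 12 amounts to the triangle inequality for the operator norm, and it is easy to probe numerically. A minimal sketch (an illustration, not part of the paper):

```python
# Average of linear maps with spectral norm <= L again has spectral norm <= L.
import numpy as np

rng = np.random.default_rng(0)
L, n, m = 0.9, 20, 5
Ts = []
for _ in range(m):
    A = rng.standard_normal((n, n))
    Ts.append(L * A / np.linalg.norm(A, 2))  # rescale to spectral norm exactly L

T_avg = sum(Ts) / m
print(np.linalg.norm(T_avg, 2) <= L + 1e-12)  # True
```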

The second observation relates the contractive (averaged) property of the patch denoiser to that of the image denoisers in (24).

Lemma 13

For \(\varvec{i}\in [0,k_s-1]^2\), the following hold.

1. If \(\textrm{D}_{\textrm{patch}}\) is L-contractive, then \(\textrm{D}_{\varvec{i}}\) is L-contractive.

2. If \(\textrm{D}_{\textrm{patch}}\) is \(\theta \)-averaged, then \(\textrm{D}_{\varvec{i}}\) is \(\theta \)-averaged.

To establish Proposition 11, note from (23) that \(\textrm{D}\) is the average of \(\{\textrm{D}_{\varvec{i}}\}\). The desired conclusion follows immediately from Lemmas 12 and 13.

We now give the proofs of Lemmas 12 and 13.

For Lemma 12, the second part follows from [13, Proposition 4.30]; however, we give the proofs of both parts for completeness.

Proof of Lemma 12

Let \(\textrm{T}=(1/k)(\textrm{T}_1 + \cdots + \textrm{T}_k)\), where \(\textrm{T}_1, \ldots , \textrm{T}_k\) are L-contractive. Then, for \(i=1,\ldots ,k\) and for all \(\varvec{x},\varvec{x}' \in \mathbb {R}^n\),

$$\begin{aligned} \Vert \textrm{T}_i(\varvec{x}) - \textrm{T}_i(\varvec{x}') \Vert \leqslant L \, \Vert \varvec{x}- \varvec{x}' \Vert . \end{aligned}$$
(26)

We can write

$$\begin{aligned} \textrm{T}(\varvec{x}) - \textrm{T}(\varvec{x}') = \frac{1}{k} \sum _{i=1}^k \, \left( \textrm{T}_{i}(\varvec{x}) - \textrm{T}_{i}(\varvec{x}') \right) . \end{aligned}$$

Using the triangle inequality and (26), we get

$$\begin{aligned} \Vert \textrm{T}(\varvec{x}) - \textrm{T}(\varvec{x}') \Vert \leqslant \frac{1}{k} \sum _{i=1}^k \, L \, \Vert \varvec{x}- \varvec{x}' \Vert = L \Vert \varvec{x}- \varvec{x}' \Vert . \end{aligned}$$

This establishes the first part of Lemma 12.

Next, suppose that \(\textrm{T}_1, \ldots , \textrm{T}_k\) are \(\theta \)-averaged, i.e., there exist nonexpansive operators \(\textrm{N}_1,\ldots ,\textrm{N}_k\) such that

$$\begin{aligned} \textrm{T}_i= (1-\theta ) \textrm{I}+\theta \textrm{N}_i \qquad (i=1,\ldots ,k). \end{aligned}$$

Then

$$\begin{aligned} \textrm{T}= \frac{1}{k}(\textrm{T}_1 + \cdots + \textrm{T}_k)= (1-\theta ) \textrm{I}+ \theta \textrm{N}, \end{aligned}$$

where \(\textrm{N}=(1/k)(\textrm{N}_1 + \cdots + \textrm{N}_k)\).

We can again use the triangle inequality to show that \(\textrm{N}\) is nonexpansive. Thus, we have shown that \(\textrm{T}\) is \(\theta \)-averaged, which completes the proof. \(\square \)

We need the following result to establish Lemma 13.

Lemma 14

For any fixed s and \(\varvec{i}\in [0, k_s-1]^2\), we have the following orthogonality properties:

1. For all \(\textbf{X}: \Omega \rightarrow \mathbb {R}\),

    $$\begin{aligned} \sum _{\varvec{j}\in \mathcal {J}_0} \ \Vert \mathcal {P}_{\varvec{j}+s\varvec{i}}(\textbf{X}) \Vert ^2 = {\Vert \textbf{X}\Vert }^2. \end{aligned}$$
    (27)

    We can equivalently write this as

    $$\begin{aligned} \sum _{\varvec{j}\in \mathcal {J}_0} \ \mathcal {P}_{\varvec{j}+s\varvec{i}}^* \mathcal {P}_{\varvec{j}+s \varvec{i}}=\textrm{I}, \end{aligned}$$
    (28)

    where \(\textrm{I}\) is the identity operator on \(\mathbb {R}^{q\times q}\).

2. For all \(\{\varvec{z}_{\varvec{j}} \in \mathbb {R}^p: \, \varvec{j}\in \mathcal {J}_0\}\),

    $$\begin{aligned} \big \Vert \sum _{\varvec{j}\in \mathcal {J}_0} \ \mathcal {P}_{\varvec{j}+s \varvec{i}}^*(\varvec{z}_{\varvec{j}}) \big \Vert ^2 = \sum _{\varvec{j}\in \mathcal {J}_0}\ \Vert \varvec{z}_{\varvec{j}} \Vert ^2. \end{aligned}$$
    (29)

It is understood that the norm on the left in (27) is the Euclidean norm on \(\mathbb {R}^{p}\), while the norm on the right is the Euclidean norm on \(\mathbb {R}^{q \times q}\).

We can establish Lemma 14 using property (21) and the fact that the patches starting at the points of \(\mathcal {J}_0\) do not overlap; we skip this straightforward calculation, but sketch a numerical sanity check below before turning to the proof of Lemma 13.
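The sketch verifies (27) and (29) for the shift \(\varvec{i}=\varvec{0}\) (other shifts follow by circularly rolling the image); the reshaping below is just a compact way of listing the non-overlapping \(k \times k\) blocks, and is an illustration rather than part of the paper.

```python
# Verify the orthogonality identities (27) and (29) for non-overlapping patches.
import numpy as np

q, k = 8, 4
q_k, p = q // k, k * k
rng = np.random.default_rng(2)

# (27): the squared norms of the non-overlapping patches sum to ||X||^2.
X = rng.standard_normal((q, q))
patches = X.reshape(q_k, k, q_k, k).transpose(0, 2, 1, 3).reshape(-1, p)
assert np.isclose(np.sum(patches ** 2), np.sum(X ** 2))

# (29): assembling arbitrary patch vectors on disjoint supports adds their norms.
Z = rng.standard_normal((q_k * q_k, p))
Y = Z.reshape(q_k, q_k, k, k).transpose(0, 2, 1, 3).reshape(q, q)
assert np.isclose(np.sum(Y ** 2), np.sum(Z ** 2))
```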

Proof of Lemma 13

Let \(\textrm{D}_{\textrm{patch}}\) be L-contractive, i.e., for all \(\varvec{z},\varvec{z}' \in \mathbb {R}^p\),

$$\begin{aligned} \Vert \textrm{D}_{\textrm{patch}}(\varvec{z}) - \textrm{D}_{\textrm{patch}}(\varvec{z}') \Vert \leqslant L \Vert \varvec{z}-\varvec{z}' \Vert . \end{aligned}$$
(30)

We have to show that, for all \(\textbf{X}, \textbf{X}' \in \mathbb {R}^{q \times q}\),

$$\begin{aligned} \Vert \textrm{D}_{\varvec{i}}(\textbf{X}) - \textrm{D}_{\varvec{i}}(\textbf{X}') \Vert ^2 \leqslant L^2 \Vert \textbf{X}-\textbf{X}' \Vert ^2. \end{aligned}$$
(31)

From (27) and (29), we can write

$$\begin{aligned} \Vert \textrm{D}_{\varvec{i}}(\textbf{X}) - \textrm{D}_{\varvec{i}}(\textbf{X}') \Vert ^2 = \sum _{\varvec{j}\in \mathcal {J}_0}{\Vert \textrm{D}_{\textrm{patch}} (\varvec{z}_{\varvec{j}}) -\textrm{D}_{\textrm{patch}} (\varvec{z}'_{\varvec{j}}) \Vert }^2 , \end{aligned}$$

where \(\varvec{z}_{\varvec{j}} = \mathcal {P}_{\varvec{j}+s\varvec{i}}(\textbf{X})\) and \(\varvec{z}'_{\varvec{j}} = \mathcal {P}_{\varvec{j}+s\varvec{i}}(\textbf{X}')\). From (30),

$$\begin{aligned} \Vert \textrm{D}_{\textrm{patch}} (\varvec{z}_{\varvec{j}}) -\textrm{D}_{\textrm{patch}} (\varvec{z}'_{\varvec{j}}) \Vert&\leqslant L \, \Vert \varvec{z}_{\varvec{j}} -\varvec{z}'_{\varvec{j}} \Vert \\&= L \, \Vert \mathcal {P}_{\varvec{j}+s\varvec{i}}(\textbf{X}-\textbf{X}') \Vert . \end{aligned}$$

Moreover, we have from (27) that

$$\begin{aligned} \sum _{\varvec{j}\in \mathcal {J}_0} \ \Vert \mathcal {P}_{\varvec{j}+s\varvec{i}}(\textbf{X}-\textbf{X}') \Vert ^2 = \Vert \textbf{X}-\textbf{X}' \Vert ^2. \end{aligned}$$

Combining the above, we get (31). This establishes part 1 of Lemma 13.

Now, let \(\textrm{D}_{\textrm{patch}}\) be \(\theta \)-averaged, i.e.,

$$\begin{aligned} \textrm{D}_{\textrm{patch}} = (1-\theta )\textrm{I}+ \theta \textrm{N}, \end{aligned}$$
(32)

where \(\textrm{N}\) is nonexpansive. Substituting (32) in (24), and using (28), we can write

$$\begin{aligned} \textrm{D}_{\varvec{i}} = (1 - \theta ) \textrm{I}+ \theta \textrm{G}_{\varvec{i}}, \end{aligned}$$

where

$$\begin{aligned} \textrm{G}_{\varvec{i}} = \sum _{\varvec{j}\in \mathcal {J}_0} \ \left( \mathcal {P}_{\varvec{j}+s\varvec{i}}^* \circ \textrm{N}\circ \mathcal {P}_{\varvec{j}+s\varvec{i}} \right) . \end{aligned}$$
(33)

Using (27) and (29) and the fact that \(\textrm{N}\) is nonexpansive, we can show (the calculation is similar to the one used earlier) that, for all \(\textbf{X}, \textbf{X}' \in \mathbb {R}^{q \times q}\),

$$\begin{aligned} \Vert \textrm{G}_{\varvec{i}}(\textbf{X}) - \textrm{G}_{\varvec{i}}(\textbf{X}') \Vert ^2 \leqslant \Vert \textbf{X}- \textbf{X}' \Vert ^2. \end{aligned}$$

That is, \(\textrm{G}_{\varvec{i}}\) is nonexpansive, and hence \(\textrm{D}_{\varvec{i}}\) is \(\theta \)-averaged. This establishes part 2 of Lemma 13. \(\square \)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Nair, P., Chaudhury, K.N. Averaged Deep Denoisers for Image Regularization. J Math Imaging Vis (2024). https://doi.org/10.1007/s10851-024-01181-2

