Abstract
Discretization schemes commonly used for total variation regularization lead to images that are difficult to interpolate, which is a real issue for applications requiring subpixel accuracy and aliasing control. In the present work, we reconcile total variation with Shannon interpolation and study a Fourier-based estimate that behaves much better in terms of grid invariance, isotropy, artifact removal and subpixel accuracy. We show that this new variant (called Shannon total variation) can easily be handled with classical primal–dual formulations, and we illustrate its efficiency on several image processing tasks, including deblurring, spectrum extrapolation and a new aliasing reduction algorithm.
References
Abergel, R., Louchet, C., Moisan, L., Zeng, T.: Total variation restoration of images corrupted by Poisson noise with iterated conditional expectations. In: Proceedings of the International Conference on Scale Space and Variational Methods in Computer Vision, Lecture Notes in Computer Science, vol. 9087, pp. 178–190. Springer (2015)
Aldroubi, A., Unser, M., Eden, M.: Cardinal spline filters: stability and convergence to the ideal sinc interpolator. Signal Process. 28(2), 127–138 (1992)
Arrow, K.J., Hurwicz, L., Uzawa, H., Chenery, H.B.: Studies in Linear and Non-linear Programming. Stanford University Press, Stanford (1958)
Aujol, J.-F., Chambolle, A.: Dual norms and image decomposition models. Int. J. Comput. Vis. 63(1), 85–104 (2005)
Babacan, S., Molina, R., Katsaggelos, A.: Total variation super resolution using a variational approach. In: Proceedings of the International Conference on Image Processing, pp. 641–644 (2008)
Beck, A., Teboulle, M.: Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18(11), 2419–2434 (2009)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3(3), 492–526 (2010)
Briand, T., Vacher, J.: How to apply a filter defined in the frequency domain by a continuous function. Image Process. On Line 6, 183–211 (2016)
Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)
Candocia, F., Principe, J.C.: Comments on “sinc interpolation of discrete periodic signals”. IEEE Trans. Signal Process. 46(7), 2044–2047 (1998)
Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20(1–2), 89–97 (2004)
Chambolle, A.: Total variation minimization and a class of binary MRF models. In: Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 136–152. Springer (2005)
Chambolle, A., Caselles, V., Cremers, D., Novaga, M., Pock, T.: An introduction to total variation for image analysis. Theor. Found. Numer. Methods Sparse Recov. 9, 263–340 (2010)
Chambolle, A., Levine, S.E., Lucier, B.J.: An upwind finite-difference method for total variation-based image smoothing. SIAM J. Imaging Sci. 4(1), 277–299 (2011)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
Chan, T., Marquina, A., Mulet, P.: High-order total variation-based image restoration. SIAM J. Sci. Comput. 22(2), 503–516 (2000)
Chan, T.F., Wong, C.-K.: Total variation blind deconvolution. IEEE Trans. Image Process. 7(3), 370–375 (1998)
Chan, T.F., Yip, A.M., Park, F.E.: Simultaneous total variation image inpainting and blind deconvolution. Int. J. Imaging Syst. Technol. 15(1), 92–102 (2005)
Combettes, P.L., Pesquet, J.-C.: Proximal splitting methods in signal processing. In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer (2011)
Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)
Condat, L.: Discrete Total Variation: New Definition and Minimization. Preprint GIPSA-lab (2016)
Darbon, J., Sigelle, M.: Image restoration with discrete constrained total variation part i: fast and exact optimization. J. Math. Imaging Vis. 26(3), 261–276 (2006)
DeVore, R.A.: Deterministic constructions of compressed sensing matrices. J. Complex. 23(4), 918–925 (2007)
Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)
Drori, Y., Sabach, S., Teboulle, M.: A simple algorithm for a class of nonsmooth convex-concave saddle-point problems. Oper. Res. Lett. 43(2), 209–214 (2015)
Ekeland, I., Témam, R.: Convex Analysis and Variational Problems, vol. 28. SIAM, Philadelphia (1999)
Eldar, Y.C., Kutyniok, G.: Compressed Sensing: Theory and Applications. Cambridge University Press, Cambridge (2012)
Facciolo Furlan, G., Almansa, A., Aujol, J.-F., Caselles, V.: Irregular to regular sampling, denoising and deconvolution. Multiscale Model. Simul. 7(4), 1574–1608 (2009)
Fadili, J.M., Peyré, G.: Total variation projection with first order schemes. IEEE Trans. Image Process. 20(3), 657–669 (2011)
Frigo, M., Johnson, S.G.: The design and implementation of FFTW3. Proc. IEEE 93(2), 216–231 (2005) (Special issue on “Program Generation, Optimization, and Platform Adaptation”)
Getreuer, P.: Linear methods for image interpolation. Image Process. On Line 1 (2011)
Gilboa, G.: A spectral approach to total variation. In: Proceedings of the International Conference on Scale Space and Variational Methods in Computer Vision, Lecture Notes in Computer Science, vol. 7893, pp. 36–47 (2013)
Guerquin-Kern, M., Lejeune, L., Pruessmann, K.P., Unser, M.: Realistic analytical phantoms for parallel magnetic resonance imaging. IEEE Trans. Med. Imaging 31(3), 626–636 (2012)
Guichard, F., Malgouyres, F.: Total variation based interpolation. In: Proceedings of the European Signal Processing Conference, vol. 3, pp. 1741–1744 (1998)
Huber, P.J.: Robust estimation of a location parameter. Ann. Math. Stat. 35(1), 73–101 (1964)
Huber, P.J.: Robust regression: asymptotics, conjectures and Monte Carlo. Ann. Stat. 1(5), 799–821 (1973)
Indyk, P.: Explicit constructions for compressed sensing of sparse signals. In: Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 30–33. Society for Industrial and Applied Mathematics, Philadelphia (2008)
Kotelnikov, V.A.: On the capacity of the ‘ether’ and of cables in electrical communication. In: Proceedings of the 1st All-Union Conference on Technological Reconstruction of the Communications Sector and Low-Current Engineering (1933)
Lanzavecchia, S., Bellon, P.L.: A moving window Shannon reconstruction algorithm for image interpolation. J. Vis. Commun. Image Represent. 5(3), 255–264 (1994)
Louchet, C., Moisan, L.: Posterior expectation of the total variation model: properties and experiments. SIAM J. Imaging Sci. 6(4), 2640–2684 (2013)
Louchet, C., Moisan, L.: Total variation denoising using iterated conditional expectation. In: Proceedings of the European Signal Processing Conference, pp. 1592–1596. IEEE (2014)
Malgouyres, F., Guichard, F.: Edge direction preserving image zooming: a mathematical and numerical analysis. SIAM J. Numer. Anal. 39(1), 1–37 (2001)
Marks, R.: Introduction to Shannon Sampling and Interpolation Theory. Springer, Berlin (2012)
Matei, B.: Model sets and new versions of Shannon sampling theorem. In: New Trends in Applied Harmonic Analysis, pp. 215–279. Springer, Berlin (2016)
Matei, B., Meyer, Y.: A variant of compressed sensing. Rev. Math. Iberoamericana 25(2), 669–692 (2009)
Miled, W., Pesquet, J., Parent, M.: A convex optimization approach for depth estimation under illumination variation. IEEE Trans. Image Process. 18(4), 813–830 (2009)
Moisan, L.: How to discretize the total variation of an image? In: The 6th International Congress on Industrial and Applied Mathematics, Proceedings in Applied Mathematics and Mechanics, vol. 7, no. 1, pp. 1041907–1041908 (2007)
Moisan, L.: Periodic plus smooth image decomposition. J. Math. Imaging Vis. 39(2), 161–179 (2011)
Moreau, J.-J.: Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France 93, 273–299 (1965)
Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^2)\). Sov. Math. Dokl. 27, 372–376 (1983)
Nikolova, M.: Local strong homogeneity of a regularized estimator. SIAM J. Appl. Math. 61(2), 633–658 (2000)
Ochs, P., Chen, Y., Brox, T., Pock, T.: iPiano: inertial proximal algorithm for nonconvex optimization. SIAM J. Imaging Sci. 7(2), 1388–1419 (2014)
Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1(3), 123–231 (2013)
Preciozzi, J., Musé, P., Almansa, A., Durand, S., Khazaal, A., Rougé, B.: SMOS images restoration from L1A data: A sparsity-based variational approach. In: IEEE International Geoscience and Remote Sensing Symposium, pp. 2487–2490. IEEE (2014)
Raguet, H., Fadili, J.M., Peyré, G.: A generalized forward–backward splitting. SIAM J. Imaging Sci. 6(3), 1199–1226 (2013)
Ring, W.: Structural properties of solutions to total variation regularization problems. ESAIM: Modél. Math. Anal. Numér. 34(4), 799–810 (2000)
Rockafellar, R.T.: Convex Analysis. Princeton University Press (1997) (Reprint of the 1970 original, Princeton Paperbacks)
Rougé, B., Seghier, A.: Nonlinear spectral extrapolation: new results and their application to spatial and medical imaging. In: Proceedings of the SPIE’s 1995 International Symposium on Optical Science, Engineering, and Instrumentation, pp. 279–289. International Society for Optics and Photonics (1995)
Ruderman, D.L.: The statistics of natural images. Netw. Comput. Neural Syst. 5(4), 517–548 (1994)
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D: Nonlinear Phenomena 60(1), 259–268 (1992)
Schanze, T.: Sinc interpolation of discrete periodic signals. IEEE Trans. Signal Process. 43(6), 1502–1503 (1995)
Shannon, C.E.: Communication in the presence of noise. Proc. Inst. Radio Eng. 37(1), 10–21 (1949)
Simon, L., Morel, J.-M.: Influence of unknown exterior samples on interpolated values for band-limited images. SIAM J. Imaging Sci. 9(1), 152–184 (2016)
Unser, M.: Sampling-50 years after Shannon. Proc. IEEE 88(4), 569–587 (2000)
Unser, M., Aldroubi, A., Eden, M.: Fast B-spline transforms for continuous image representation and interpolation. IEEE Trans. Pattern Anal. Mach. Intell. 13(3), 277–285 (1991)
Unser, M.A.: Ten good reasons for using spline wavelets. In: Optical Science, Engineering and Instrumentation’97, pp. 422–431. International Society for Optics and Photonics (1997)
Vese, L.A., Osher, S.J.: Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput. 19(1–3), 553–572 (2003)
Vogel, C., Oman, M.: Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans. Image Process. 7(6), 813–824 (1998)
Weiss, P., Blanc-Féraud, L.: A proximal method for inverse problems in image processing. In: Proceedings of the European Signal Processing Conference, pp. 1374–1378. IEEE (2009)
Weiss, P., Blanc-Féraud, L., Aubert, G.: Efficient schemes for total variation minimization under constraints in image processing. SIAM J. Sci. Comput. 31(3), 2047–2080 (2009)
Werlberger, M., Trobin, W., Pock, T., Wedel, A., Cremers, D., Bischof, H.: Anisotropic Huber-L1 Optical Flow. In: Proceedings of the British Machine Vision Conference, vol. 1, p. 3 (2009)
Whittaker, E.T.: On the functions which are represented by the expansions of the interpolation-theory. Proc. R. Soc. Edinb. 35, 181–194 (1915)
Yaroslavsky, L.: Signal sinc-interpolation: a fast computer algorithm. Bioimaging 4(4), 225–231 (1996)
Yosida, K.: Functional Analysis. Springer, Berlin, Heidelberg (1980) (Originally published as volume 123 in the series: Grundlehren der mathematischen Wissenschaften, 1968)
Yuen, C.K., Fraser, D.: Digital Spectral Analysis. Pitman Publishing, London (1979)
Zhu, M., Chan, T.: An efficient primal–dual hybrid gradient algorithm for total variation image restoration. UCLA CAM Report (2008)
Acknowledgements
We would like to thank the anonymous reviewers for their valuable suggestions.
Appendices
Appendix 1: Proof of Proposition 1
Let us consider, for \(x\in {\mathbb R}\setminus {\mathbb Z}\),
Writing \(x= \frac{M t}{\pi }\), we obtain
and the limit \(\mathrm {sincd}_M(x) = \lim _{n\rightarrow \infty } S_n(x)\) can be computed explicitly using classical series expansions (due to Euler):
If M is odd, \((-1)^{pM}=(-1)^p\) and we obtain
and if M is even, \((-1)^{pM}=1\) and the other series yields
as announced. \(\square \)
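For reference, the two Euler partial-fraction expansions invoked in this computation, and the closed forms they produce, can be summarized as follows (a recollection of standard discrete-sinc identities, cf. [16, 67], matching the odd/even case distinction above):

```latex
\[
\frac{\pi}{\tan(\pi t)} \;=\; \lim_{n\to\infty}\sum_{p=-n}^{n}\frac{1}{t+p},
\qquad
\frac{\pi}{\sin(\pi t)} \;=\; \lim_{n\to\infty}\sum_{p=-n}^{n}\frac{(-1)^p}{t+p},
\]
so that, for \(x \in \mathbb{R}\setminus M\mathbb{Z}\),
\[
\mathrm{sincd}_M(x) \;=\;
\begin{cases}
\dfrac{\sin(\pi x)}{M\,\sin(\pi x/M)} & \text{if } M \text{ is odd (alternating series)},\\[2mm]
\dfrac{\sin(\pi x)}{M\,\tan(\pi x/M)} & \text{if } M \text{ is even (non-alternating series)}.
\end{cases}
\]
```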
Appendix 2: Proof of Theorem 2
Since each operator \(T_z\) is linear and translation-invariant (Hypothesis (ii)), it can be written as a convolution, that is,
where \(\psi _z\) is an element of \(\mathcal S\). Taking the DFT of (67), we obtain
Now, from Hypothesis (iii) we immediately get
and by continuity of \(z\mapsto \widehat{\psi }_z(\alpha )\) (deduced from Hypothesis (i)) we obtain
for some \(\gamma (\alpha ) \in {\mathbb C}\). Since \(\widehat{\psi }_1(\alpha ) = \mathrm{e}^\frac{-2i\pi \alpha }{M}\), we have
where \(p(\alpha ) \in {\mathbb Z}\) and \(p(-\alpha )=-p(\alpha )\) (the fact that \(T_z u\) is real-valued implies that \(\widehat{\psi }_z(-\alpha )=\widehat{\psi }_z(\alpha )^*\)).
Last, we compute
Hence,
and since \(\frac{\alpha }{M}\in (-\frac{1}{2},\frac{1}{2})\) and \(p(\alpha )\in {\mathbb Z}\) for any \(\alpha \in \widehat{I}_M\), the right-hand term of (72) is minimal if and only if \(p(\alpha )=0\) for all \(\alpha \in \widehat{I}_M\). We conclude from (71) and (70) that
and thus (68) can be rewritten as
which is exactly \(S(k-z)\) thanks to (13) (recall that the real part is not needed because M is odd). Therefore, (24) is a necessary form for a set of operators \((T_z)\) satisfying Hypotheses (i) to (iv).
Conversely, one easily checks that the operators \((T_z)\) defined by (24) satisfy the Hypotheses (i) to (iv). \(\square \)
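Theorem 2 thus characterizes the Shannon translation \(T_z\) as a pure phase shift in the DFT domain. A minimal numerical sketch of this operator in one dimension (not the authors' code; the function name is ours, and we assume an odd length M so that no Nyquist coefficient arises and the real part removes only round-off error):

```python
import numpy as np

def shannon_translate(u, z):
    """Translate a real 1-D signal u of odd length M by a real shift z,
    by multiplying each DFT coefficient by exp(-2*i*pi*alpha*z/M)."""
    M = len(u)
    assert M % 2 == 1, "odd length: frequencies alpha in {-(M-1)/2, ..., (M-1)/2}"
    alpha = np.fft.fftfreq(M, d=1.0 / M)        # integer frequencies, FFT ordering
    hat_u = np.fft.fft(u)
    shifted = np.exp(-2j * np.pi * alpha * z / M) * hat_u
    return np.real(np.fft.ifft(shifted))        # imaginary part is round-off only

# For an integer shift, T_z reduces to a circular shift of the samples:
u = np.cos(2 * np.pi * np.arange(7) / 7)
# shannon_translate(u, 1.0) agrees with np.roll(u, 1) up to machine precision
```

Composing two opposite shifts returns the original samples, reflecting the group property \(T_z \circ T_{z'} = T_{z+z'}\) of Hypothesis (iii).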
Appendix 3: Proof of Proposition 8
Let us denote by \(\nabla _{n,x} u \) and \(\nabla _{n,y} u\) the two elements of \({\mathbb R}^{\Omega _n}\) such that \(\nabla _nu = \left( \nabla _{n,x}u, \nabla _{n,y} u \right) \). In the following, the notation \(\langle \cdot ,\cdot \rangle _X\) stands for the usual Euclidean (respectively, Hermitian) inner product over the real (respectively, complex) Hilbert space X. We have
Recall that we defined \(\mathrm {div}_{n} = -\nabla _n^*\), the opposite of the adjoint of \(\nabla _n\). Noting \(\mathrm {div}_{n,x} = - \nabla _{n,x}^*\) and \(\mathrm {div}_{n,y} = - \nabla _{n,y}^*\), we have
so that we identify \(\mathrm {div}_n(p) = \mathrm {div}_{n,x}(p_x)+\mathrm {div}_{n,y}(p_y)\). Let us focus on the computation of \(\mathrm {div}_{n,x}(p_x)\). Let \(\widehat{\Omega _1}, \widehat{\Omega _2}, \widehat{\Omega _3}, \widehat{\Omega _4}\) be the sets defined by
Notice that some sets among \(\widehat{\Omega _2}, \widehat{\Omega _3}\) and \(\widehat{\Omega _4}\) may be empty according to the parity of M and N. Now, let \(h_{\widehat{p_x}}\) be the function defined in Proposition 8 and let us show that
Given \(z \in {\mathbb C}\), we denote as usual by \(z^*\) the conjugate of \(z\). Thanks to the Parseval identity, and using Proposition 6 (since we assumed \(n\ge 2\)), we have
It follows that
where for all \(k \in \{1,2,3,4\}\), we have set
Consider \(S_1\) first. Since we have \(\varepsilon _M(\alpha ) = \varepsilon _N(\beta ) = 1\) and \(h_{\widehat{p_x}}(\alpha ,\beta ) = \widehat{p_x}(\alpha ,\beta )\) for all \((\alpha ,\beta ) \in \widehat{\Omega _1}\), we recognize
Now consider \(S_2\). If M is odd, \(\widehat{\Omega _2}\) is empty and \(S_2 = 0\). Otherwise, since \(\varepsilon _M(\alpha ) \varepsilon _N(\beta ) = 1/2\) for all \((\alpha ,\beta ) \in \widehat{\Omega _2}\), by grouping together the terms \(\left( -\frac{M}{2},\beta \right) \) and \(\left( \frac{M}{2},\beta \right) \), we get
since we have set \(h_{\widehat{p_x}}(-\frac{M}{2},\beta ) = \frac{1}{2} \left( \widehat{p_x}(-\frac{M}{2},\beta ) - \widehat{p_x}(\frac{M}{2},\beta )\right) \) for \(|\beta | < N/2\).
The term \(S_3\) is handled similarly. When N is odd, \(\widehat{\Omega _3}= \emptyset \) and \(S_3 = 0\). Otherwise, when N is even, we have \(\varepsilon _M(\alpha )\varepsilon _N(\beta ) = 1/2\) for all \((\alpha ,\beta ) \in \widehat{\Omega _3}\), thus, by grouping together the terms \(\left( \alpha ,-\frac{N}{2}\right) \) and \(\left( \alpha ,\frac{N}{2}\right) \), we get
since we have set \(h_{\widehat{p_x}}(\alpha ,-\frac{N}{2}) = \frac{1}{2} \left( \widehat{p_x}(\alpha ,-\frac{N}{2}) + \widehat{p_x}(\alpha ,\frac{N}{2})\right) \) for \(|\alpha | < M/2\).
Lastly, let us consider \(S_4\). When M and N are both even (otherwise \(\widehat{\Omega _4} = \emptyset \) and \(S_4=0\)), we immediately get, for \(\alpha =-\frac{M}{2}\) and \(\beta =-\frac{N}{2}\),
since for all \((\alpha ,\beta ) \in \widehat{\Omega _4}\), we have \(\varepsilon _M(\alpha ) \varepsilon _N(\beta ) = 1/4\) and we have set \(h_{\widehat{p_x}}(\alpha ,\beta ) = \frac{1}{4} \sum _{s_1 = \pm 1, s_2 = \pm 1} s_1 \widehat{p_x}(s_1 \alpha ,s_2 \beta )\).
Finally, we can write \(S_1 + S_2 + S_3 + S_4\) as a sum over \(\widehat{\Omega }\), indeed,
and using again the Parseval identity, we get (75). With a similar approach, one can check that
where \(h_{\widehat{p_y}}\) is defined in Proposition 8. Consequently, for any \((\alpha ,\beta ) \in \widehat{\Omega }\), we have
which ends the proof of Proposition 8. \(\square \)
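The identity \(\mathrm {div}_n = -\nabla _n^*\) underlying this computation can also be checked numerically. The sketch below is a deliberate simplification (not the paper's exact operators: it uses a plain spectral derivative with \(n=1\) and odd sizes M and N, so that the Nyquist sets \(\widehat{\Omega _2}, \widehat{\Omega _3}, \widehat{\Omega _4}\) are empty and \(h_{\widehat{p_x}}\) reduces to \(\widehat{p_x}\)):

```python
import numpy as np

def spectral_deriv(u, axis):
    """Exact derivative of the Shannon interpolate of u along one axis
    (the size along that axis must be odd, so no Nyquist frequency occurs)."""
    n = u.shape[axis]
    assert n % 2 == 1
    freq = np.fft.fftfreq(n, d=1.0 / n)            # integer frequencies, FFT ordering
    shape = [1, 1]
    shape[axis] = n
    mult = (2j * np.pi * freq / n).reshape(shape)  # Fourier multiplier of d/dx
    return np.real(np.fft.ifft(mult * np.fft.fft(u, axis=axis), axis=axis))

def grad(u):
    return spectral_deriv(u, 0), spectral_deriv(u, 1)

def div(px, py):
    return spectral_deriv(px, 0) + spectral_deriv(py, 1)

# adjoint identity  <grad u, p> = -<u, div p>,  up to round-off
rng = np.random.default_rng(0)
u = rng.standard_normal((7, 9))
px, py = rng.standard_normal((7, 9)), rng.standard_normal((7, 9))
gx, gy = grad(u)
lhs = np.sum(gx * px) + np.sum(gy * py)
rhs = -np.sum(u * div(px, py))
```

The multiplier \(2i\pi \alpha /M\) is purely imaginary and odd in \(\alpha\), so the operator is real-valued and skew-adjoint, which is exactly why the divergence appears with a minus sign.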
Appendix 4: Proof of Theorem 3
Recall that for any integer M, we denote by \(T_M\) the real vector space of real-valued trigonometric polynomials that can be written as a complex linear combination of the family \(( x \mapsto \mathrm{e}^{2 i \pi \frac{\alpha x}{M}} )_{-\frac{M}{2} \le \alpha \le \frac{M}{2}}\). In order to prove Theorem 3, we need the following lemma.
Lemma 1
Let \(M=2m+1\) be an odd positive integer. The functions F and G defined by,
are both in \(T_M\) and G satisfies
Proof
F is in \(T_M\) by construction, and so is G as the difference of two elements of \(T_M\). Writing \(\omega = \frac{\pi }{M}\), we can notice that \(F(0)=1\) and
so that \(F(k) = 0\) for all integers \(k \in [1,M-1]\). Consequently, \(G(0)=1, G(1)=-1\) and \(G(k)=0\) for all integers \(k \in [2,M-1]\), thus
yielding the first announced result of the Lemma. Now, remark that the sign changes of G in \((0,2m+1)\) occur at the integer points \(2,3,\ldots ,2m\) and at \(\frac{1}{2}\) (by symmetry). Thus, we have
since for all \(x \in [0,M]\), we have \(G(x) = F(x)-F(x-1)\) and (because M is odd) \(F(x) = F(M-x)\). It follows that
since \(|F| \le 1\) everywhere.
Consequently, by isolating the index \(\alpha =0\) in the definition of F, we get \(J \ge 2\left( J^\prime +\frac{1}{M}\right) -2\), with
By exchanging the sums and grouping identical terms, we obtain
After summation of the geometric progression
Equation (76) finally leads to
where \(g= t \mapsto \frac{\tan t}{t}\). Now since g is positive and increasing on \((0,\frac{\pi }{2})\), we have
Using the lower bound \(g(t)\ge \frac{2}{\pi } \tan t\) for \(t\in (0,\frac{\pi }{2})\), we finally get
and thus \(\displaystyle J^\prime \ge \frac{4}{\pi ^2} \log \left( \frac{2}{\omega } \right) \), from which the inequality announced in Lemma 1 follows. \(\square \)
Now, let us prove Theorem 3 by building a discrete image u such that \({\textsc {STV}}_1(u)\) is fixed but \({\textsc {STV}}_\infty (u)\) increases with the image size. We consider the function H defined by
where \(G \in T_M\) is the real-valued M-periodic trigonometric polynomial defined in Lemma 1 (\(M=2m+1\)). Since the integral of G over one period is zero (\(\int _0^M G(t) \, dt = 0\)), H is also an element of \(T_M\). Consequently, the bivariate trigonometric polynomial defined by
belongs to \(T_M \otimes T_M\), and since M is odd it is exactly the Shannon interpolate of the discrete image defined by
In particular, by definition of \({\textsc {STV}}_1\) and \({\textsc {STV}}_\infty \), we have
From Lemma 1, we have on the one hand,
and on the other hand,
which cannot be bounded from above by a constant independent of M. \(\square \)
Appendix 5: Proof of Proposition 10
Let \(u \in {\mathbb R}^{\Omega }, n \in {\mathbb N}\) and \(\alpha \in {\mathbb R}\) such that \(n \ge 1\) and \(\alpha > 0\). One can rewrite \({\textsc {HSTV}}_{\alpha ,n}(u) = \tfrac{1}{n^2} H_\alpha (\nabla _nu)\), where
Let us show that the Legendre–Fenchel transform of \(H_\alpha \) is
One easily checks that \(\mathcal {H}_\alpha \in \Gamma ({\mathbb R}^2)\), and it follows that \(H_\alpha \in \Gamma ({\mathbb R}^{\Omega _n} \times {\mathbb R}^{\Omega _n})\). Thus, for any image \(u\in {\mathbb R}^\Omega \), we have \(H_\alpha (\nabla _nu) = H_\alpha ^{\star \star }(\nabla _nu)\) and
Besides, we have \(H_\alpha ^\star (p) = \sum _{(x,y) \in \Omega _n} \mathcal {H}^\star _\alpha (p(x,y))\), and the Legendre–Fenchel transform of \(\mathcal {H}_\alpha \) is the function \(\mathcal {H}_\alpha ^\star (z) = \delta _{|\cdot | \le 1}(z) + \tfrac{\alpha }{2}|z|^2\), where \(\delta _{|\cdot | \le 1}\) denotes the indicator function of the unit ball for the \(\ell ^2\) norm in \({\mathbb R}^2\). Indeed, it is proven in [55] that \(\mathcal {H}_\alpha \) is the Moreau envelope (or Moreau–Yosida regularization) [51, 76] with parameter \(\alpha \) of the \(\ell ^2\) norm \(|\cdot |\), or equivalently the infimal convolution (see [59]) between the two proper, convex and l.s.c. functions \(f_1(x) = |x|\) and \(f_2(x) = \tfrac{1}{2\alpha } |x|^2\), that is
Thus, we have \(\mathcal {H}_\alpha ^\star = \left( f_1 \Box f_2 \right) ^\star = f_1^\star + f_2^\star \) (see [55, 59]), leading exactly to \(\mathcal {H}_\alpha ^\star (z) = \delta _{|\cdot | \le 1}(z) + \tfrac{\alpha }{2}|z|^2 \) for any \(z \in {\mathbb R}^2\), since we have \(f_1^\star = z \mapsto \delta _{|\cdot |\le 1}(z)\) and \(f_2^\star = z \mapsto \tfrac{\alpha }{2} |z|^2\). It follows that for any \(p \in {\mathbb R}^{\Omega _n} \times {\mathbb R}^{\Omega _n}\), we have
and the supremum (78) is a maximum for the same reason as in the proof of Proposition 9. Finally, writing \({\textsc {HSTV}}_{\alpha ,n}(u) = \tfrac{1}{n^2} H_\alpha (\nabla _nu) = \tfrac{1}{n^2} H_\alpha ^{\star \star }(\nabla _nu)\) using (78) and (79) leads to the announced result. \(\square \)
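The conjugacy relation between \(\mathcal H_\alpha\) and \(z \mapsto \delta _{|\cdot | \le 1}(z) + \tfrac{\alpha }{2}|z|^2\) is easy to verify numerically. Below, `huber` implements the standard closed form of the Moreau envelope of the Euclidean norm, and `huber_via_duality` recovers it by brute-force maximization over a grid of the unit disk (a sketch for illustration only; the names and the grid size are ours):

```python
import numpy as np

def huber(x, alpha):
    """Moreau envelope of |.| with parameter alpha: quadratic near 0, linear at infinity."""
    r = float(np.linalg.norm(x))
    return r * r / (2 * alpha) if r <= alpha else r - alpha / 2

def huber_via_duality(x, alpha, m=801):
    """Approximate sup_{|z| <= 1} <x, z> - (alpha/2)|z|^2 on an m x m grid of the disk."""
    t = np.linspace(-1.0, 1.0, m)
    zx, zy = np.meshgrid(t, t)
    inside = zx**2 + zy**2 <= 1.0
    vals = x[0] * zx + x[1] * zy - 0.5 * alpha * (zx**2 + zy**2)
    return float(vals[inside].max())

# both regimes agree with the dual formula up to the grid resolution:
# |x| <= alpha (maximizer z = x/alpha, interior) and |x| > alpha (maximizer on the boundary)
```

The interior maximizer \(z = x/\alpha\) gives the quadratic branch \(|x|^2/(2\alpha)\), while the boundary maximizer \(z = x/|x|\) gives the linear branch \(|x| - \alpha/2\), matching the two cases of the Huber function.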
Abergel, R., Moisan, L. The Shannon Total Variation. J Math Imaging Vis 59, 341–370 (2017). https://doi.org/10.1007/s10851-017-0733-5