The Shannon Total Variation

Abstract

Discretization schemes commonly used for total variation regularization lead to images that are difficult to interpolate, which is a real issue for applications requiring subpixel accuracy and aliasing control. In the present work, we reconcile total variation with Shannon interpolation and study a Fourier-based estimate that behaves much better in terms of grid invariance, isotropy, artifact removal and subpixel accuracy. We show that this new variant (called Shannon total variation) can be easily handled with classical primal–dual formulations and illustrate its efficiency on several image processing tasks, including deblurring, spectrum extrapolation and a new aliasing reduction algorithm.
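
For readers who want to experiment, the construction can be sketched numerically. Below is a minimal numpy sketch (not the authors' code) of the Shannon total variation estimate \({\textsc {STV}}_n\): the gradient of the Shannon (trigonometric) interpolate of the image is sampled on an n-times oversampled grid by spectral zero-padding, and the normalized sum of its magnitudes is returned. The frequency placement, normalization and restriction to odd image dimensions are simplifying assumptions made here; even dimensions require the symmetrized boundary coefficients of Proposition 8 (see Appendix 3).

```python
import numpy as np

def stv(u, n=2):
    """Sketch of the Shannon total variation STV_n of a real image u.

    Assumes odd dimensions M, N so that the spectrum placement is
    unambiguous; this is an illustration, not the authors' code."""
    M, N = u.shape
    a = np.fft.fftfreq(M)[:, None]                   # alpha / M
    b = np.fft.fftfreq(N)[None, :]                   # beta / N
    U = np.fft.fft2(u)
    Ux, Uy = 2j * np.pi * a * U, 2j * np.pi * b * U  # spectral derivatives

    def upsample(V):
        # evaluate the trigonometric polynomial with DFT coefficients V
        # on the n-times finer grid, by embedding V in a larger spectrum
        W = np.zeros((n * M, n * N), dtype=complex)
        ia = (np.fft.fftfreq(M) * M).astype(int)     # negative indices wrap
        ib = (np.fft.fftfreq(N) * N).astype(int)
        W[np.ix_(ia, ib)] = V
        return n * n * np.real(np.fft.ifft2(W))      # n^2 restores sample values

    gx, gy = upsample(Ux), upsample(Uy)
    return np.sum(np.hypot(gx, gy)) / n ** 2
```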

References

  1. Abergel, R., Louchet, C., Moisan, L., Zeng, T.: Total variation restoration of images corrupted by Poisson noise with iterated conditional expectations. In: Proceedings of the International Conference on Scale Space and Variational Methods in Computer Vision, Lecture Notes in Computer Science, vol. 9087, pp. 178–190. Springer (2015)

  2. Aldroubi, A., Unser, M., Eden, M.: Cardinal spline filters: stability and convergence to the ideal sinc interpolator. Signal Process. 28(2), 127–138 (1992)

  3. Arrow, K.J., Hurwicz, L., Uzawa, H., Chenery, H.B.: Studies in Linear and Non-linear Programming. Stanford University Press, Stanford (1958)

  4. Aujol, J.-F., Chambolle, A.: Dual norms and image decomposition models. Int. J. Comput. Vis. 63(1), 85–104 (2005)

  5. Babacan, S., Molina, R., Katsaggelos, A.: Total variation super resolution using a variational approach. In: Proceedings of the International Conference on Image Processing, pp. 641–644 (2008)

  6. Beck, A., Teboulle, M.: Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18(11), 2419–2434 (2009)

  7. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)

  8. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  9. Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3(3), 492–526 (2010)

  10. Briand, T., Vacher, J.: How to apply a filter defined in the frequency domain by a continuous function. Image Process. On Line 6, 183–211 (2016)

  11. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)

  12. Candocia, F., Principe, J.C.: Comments on “sinc interpolation of discrete periodic signals”. IEEE Trans. Signal Process. 46(7), 2044–2047 (1998)

  13. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20(1–2), 89–97 (2004)

  14. Chambolle, A.: Total variation minimization and a class of binary MRF models. In: Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 136–152. Springer (2005)

  15. Chambolle, A., Caselles, V., Cremers, D., Novaga, M., Pock, T.: An introduction to total variation for image analysis. Theor. Found. Numer. Methods Sparse Recov. 9, 263–340 (2010)

  16. Chambolle, A., Levine, S.E., Lucier, B.J.: An upwind finite-difference method for total variation-based image smoothing. SIAM J. Imaging Sci. 4(1), 277–299 (2011)

  17. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)

  18. Chan, T., Marquina, A., Mulet, P.: High-order total variation-based image restoration. SIAM J. Sci. Comput. 22(2), 503–516 (2000)

  19. Chan, T.F., Wong, C.-K.: Total variation blind deconvolution. IEEE Trans. Image Process. 7(3), 370–375 (1998)

  20. Chan, T.F., Yip, A.M., Park, F.E.: Simultaneous total variation image inpainting and blind deconvolution. Int. J. Imaging Syst. Technol. 15(1), 92–102 (2005)

  21. Combettes, P.L., Pesquet, J.-C.: Proximal splitting methods in signal processing. In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer (2011)

  22. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)

  23. Condat, L.: Discrete Total Variation: New Definition and Minimization. Preprint GIPSA-lab (2016)

  24. Darbon, J., Sigelle, M.: Image restoration with discrete constrained total variation part i: fast and exact optimization. J. Math. Imaging Vis. 26(3), 261–276 (2006)

  25. DeVore, R.A.: Deterministic constructions of compressed sensing matrices. J. Complex. 23(4), 918–925 (2007)

  26. Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)

  27. Drori, Y., Sabach, S., Teboulle, M.: A simple algorithm for a class of nonsmooth convex-concave saddle-point problems. Oper. Res. Lett. 43(2), 209–214 (2015)

  28. Ekeland, I., Témam, R.: Convex Analysis and Variational Problems, vol. 28. SIAM, Philadelphia (1999)

  29. Eldar, Y.C., Kutyniok, G.: Compressed Sensing: Theory and Applications. Cambridge University Press, Cambridge (2012)

  30. Facciolo Furlan, G., Almansa, A., Aujol, J.-F., Caselles, V.: Irregular to regular sampling, denoising and deconvolution. Multiscale Model. Simul. 7(4), 1574–1608 (2009)

  31. Fadili, J.M., Peyré, G.: Total variation projection with first order schemes. IEEE Trans. Image Process. 20(3), 657–669 (2011)

  32. Frigo, M., Johnson, S.G.: The design and implementation of FFTW3. Proc. IEEE 93(2), 216–231 (2005) (Special issue on “Program Generation, Optimization, and Platform Adaptation”)

  33. Getreuer, P.: Linear methods for image interpolation. Image Process. On Line 1 (2011)

  34. Gilboa, G.: A spectral approach to total variation. In: Proceedings of the International Conference on Scale Space and Variational Methods in Computer Vision, Lecture Notes in Computer Science, vol. 7893, pp. 36–47 (2013)

  35. Guerquin-Kern, M., Lejeune, L., Pruessmann, K.P., Unser, M.: Realistic analytical phantoms for parallel magnetic resonance imaging. IEEE Trans. Med. Imaging 31(3), 626–636 (2012)

  36. Guichard, F., Malgouyres, F.: Total variation based interpolation. In: Proceedings of the European Signal Processing Conference, vol. 3, pp. 1741–1744 (1998)

  37. Huber, P.J.: Robust estimation of a location parameter. Ann. Math. Stat. 35(1), 73–101 (1964)

  38. Huber, P.J.: Robust regression: asymptotics, conjectures and Monte Carlo. Ann. Stat. 1(5), 799–821 (1973)

  39. Indyk, P.: Explicit constructions for compressed sensing of sparse signals. In: Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 30–33. Society for Industrial and Applied Mathematics, Philadelphia (2008)

  40. Kotelnikov, V.A.: On the capacity of the ’ether’ and of cables in electrical communication. In: Proceedings of the 1st All-Union Conference on Technological Reconstruction of the Communications Sector and Low-Current Engineering (1933)

  41. Lanzavecchia, S., Bellon, P.L.: A moving window Shannon reconstruction algorithm for image interpolation. J. Vis. Commun. Image Represent. 5(3), 255–264 (1994)

  42. Louchet, C., Moisan, L.: Posterior expectation of the total variation model: properties and experiments. SIAM J. Imaging Sci. 6(4), 2640–2684 (2013)

  43. Louchet, C., Moisan, L.: Total variation denoising using iterated conditional expectation. In: Proceedings of the European Signal Processing Conference, pp. 1592–1596. IEEE (2014)

  44. Malgouyres, F., Guichard, F.: Edge direction preserving image zooming: a mathematical and numerical analysis. SIAM J. Numer. Anal. 39(1), 1–37 (2001)

  45. Marks, R.: Introduction to Shannon Sampling and Interpolation Theory. Springer, Berlin (2012)

  46. Matei, B.: Model sets and new versions of Shannon sampling theorem. In: New Trends in Applied Harmonic Analysis, pp. 215–279. Springer, Berlin (2016)

  47. Matei, B., Meyer, Y.: A variant of compressed sensing. Rev. Math. Iberoamericana 25(2), 669–692 (2009)

  48. Miled, W., Pesquet, J., Parent, M.: A convex optimization approach for depth estimation under illumination variation. IEEE Trans. on Image Process. 18(4), 813–830 (2009)

  49. Moisan, L.: How to discretize the total variation of an image? In: The 6th International Congress on Industrial Applied Mathematics, Proceedings in Applied Mathematics and Mechanics, vol. 7, no. 1, pp. 1041907–1041908 (2007)

  50. Moisan, L.: Periodic plus smooth image decomposition. J. Math. Imaging Vis. 39(2), 161–179 (2011)

  51. Moreau, J.-J.: Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France 93, 273–299 (1965)

  52. Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^2)\). Sov. Math. Dokl. 27, 372–376 (1983)

  53. Nikolova, M.: Local strong homogeneity of a regularized estimator. SIAM J. Appl. Math. 61(2), 633–658 (2000)

  54. Ochs, P., Chen, Y., Brox, T., Pock, T.: iPiano: inertial proximal algorithm for nonconvex optimization. SIAM J. Imaging Sci. 7(2), 1388–1419 (2014)

  55. Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1(3), 123–231 (2013)

  56. Preciozzi, J., Musé, P., Almansa, A., Durand, S., Khazaal, A., Rougé, B.: SMOS images restoration from L1A data: A sparsity-based variational approach. In: IEEE International Geoscience and Remote Sensing Symposium, pp. 2487–2490. IEEE (2014)

  57. Raguet, H., Fadili, J.M., Peyré, G.: A generalized forward–backward splitting. SIAM J. Imaging Sci. 6(3), 1199–1226 (2013)

  58. Ring, W.: Structural properties of solutions to total variation regularization problems. ESAIM: Modél. Math. Anal. Numér. 34(4), 799–810 (2000)

  59. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1997) (Reprint of the 1970 original, Princeton Paperbacks)

  60. Rougé, B., Seghier, A.: Nonlinear spectral extrapolation: new results and their application to spatial and medical imaging. In: Proceedings of the SPIE’s 1995 International Symposium on Optical Science, Engineering, and Instrumentation, pp. 279–289. International Society for Optics and Photonics (1995)

  61. Ruderman, D.L.: The statistics of natural images. Netw. Comput. Neural Syst. 5(4), 517–548 (1994)

  62. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D: Nonlinear Phenomena 60(1), 259–268 (1992)

  63. Schanze, T.: Sinc interpolation of discrete periodic signals. IEEE Trans. Signal Process. 43(6), 1502–1503 (1995)

  64. Shannon, C.E.: Communication in the presence of noise. Proc. Inst. Radio Eng. 37(1), 10–21 (1949)

  65. Simon, L., Morel, J.-M.: Influence of unknown exterior samples on interpolated values for band-limited images. SIAM J. Imaging Sci. 9(1), 152–184 (2016)

  66. Unser, M.: Sampling-50 years after Shannon. Proc. IEEE 88(4), 569–587 (2000)

  67. Unser, M., Aldroubi, A., Eden, M.: Fast B-spline transforms for continuous image representation and interpolation. IEEE Trans. Pattern Anal. Mach. Intell. 13(3), 277–285 (1991)

  68. Unser, M.A.: Ten good reasons for using spline wavelets. In: Optical Science, Engineering and Instrumentation’97, pp. 422–431. International Society for Optics and Photonics (1997)

  69. Vese, L.A., Osher, S.J.: Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput. 19(1–3), 553–572 (2003)

  70. Vogel, C., Oman, M.: Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans. Image Process. 7(6), 813–824 (1998)

  71. Weiss, P., Blanc-Féraud, L.: A proximal method for inverse problems in image processing. In: Proceedings of the European Signal Processing Conference, pp. 1374–1378. IEEE (2009)

  72. Weiss, P., Blanc-Féraud, L., Aubert, G.: Efficient schemes for total variation minimization under constraints in image processing. SIAM J. Sci. Comput. 31(3), 2047–2080 (2009)

  73. Werlberger, M., Trobin, W., Pock, T., Wedel, A., Cremers, D., Bischof, H.: Anisotropic Huber-L1 Optical Flow. In: Proceedings of the British Machine Vision Conference, vol. 1, p. 3 (2009)

  74. Whittaker, E.T.: On the functions which are represented by the expansions of the interpolation theory. Proc. R. Soc. Edinb. 35, 181–194 (1915)

  75. Yaroslavsky, L.: Signal sinc-interpolation: a fast computer algorithm. Bioimaging 4(4), 225–231 (1996)

  76. Yosida, K.: Functional Analysis. Springer, Berlin, Heidelberg (1980) (Originally published as volume 123 in the series: Grundlehren der mathematischen Wissenschaften, 1968)

  77. Yuen, C.K., Fraser, D.: Digital Spectral Analysis. Pitman Publishing, London (1979)

  78. Zhu, M., Chan, T.: An efficient primal–dual hybrid gradient algorithm for total variation image restoration. UCLA CAM Report (2008)

Acknowledgements

We would like to thank the anonymous reviewers for their valuable suggestions.

Author information

Correspondence to Rémy Abergel.

Appendices

Appendix 1: Proof of Proposition 1

Let us consider, for \(x\in {\mathbb R}\setminus {\mathbb Z}\),

$$\begin{aligned} S_n(x)= & {} \sum _{p=-n}^{n} \mathrm {sinc}(x-pM)\\= & {} \mathrm {sinc}(x) + \sum _{p=1}^{n} \left( \mathrm {sinc}(x-pM) + \mathrm {sinc}(x+pM)\right) \\= & {} \frac{\sin \pi x}{\pi x} + \sum _{p=1}^{n} (-1)^{pM} \left( \frac{\sin \pi x}{\pi (x-pM)} + \frac{\sin \pi x}{\pi (x+pM)} \right) . \end{aligned}$$

Writing \(x= \frac{M t}{\pi }\), we obtain

$$\begin{aligned} S_n(x) = \frac{\sin Mt}{M} \left( \frac{1}{t} + \sum _{p=1}^{n} (-1)^{pM} \left( \frac{1}{t-p\pi } + \frac{1}{t+p \pi } \right) \right) \end{aligned}$$

and the limit \(\mathrm {sincd}_M(x) = \lim _{n\rightarrow \infty } S_n(x)\) can be computed explicitly using classical series expansions (due to Euler):

$$\begin{aligned} \forall t\in {\mathbb R}\setminus \pi {\mathbb Z}, \;\;\; \displaystyle \frac{1}{\tan t}= & {} \frac{1}{t} + \sum _{p=1}^\infty \left( \frac{1}{t-p\pi } + \frac{1}{t+p\pi }\right) ,\\ \displaystyle \frac{1}{\sin t}= & {} \frac{1}{t} + \sum _{p=1}^\infty (-1)^p \left( \frac{1}{t-p\pi } + \frac{1}{t+p\pi }\right) . \end{aligned}$$

If M is odd, \((-1)^{pM}=(-1)^p\) and we obtain

$$\begin{aligned} \mathrm {sincd}_M(x) = \frac{\sin Mt}{M \sin t} = \frac{\sin \pi x}{M\sin \frac{\pi x}{M}}, \end{aligned}$$

and if M is even, \((-1)^{pM}=1\) and the other series yields

$$\begin{aligned} \mathrm {sincd}_M(x) = \frac{\sin Mt}{M \tan t} = \frac{\sin \pi x}{M\tan \frac{\pi x}{M}} \end{aligned}$$

as announced. \(\square \)
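
As a quick numerical sanity check (ours, not part of the paper), the two closed forms can be compared with the truncated symmetric sums \(S_n(x)\); the discrepancy shrinks as n grows:

```python
import numpy as np

def S(x, M, n):
    """Symmetric partial sum S_n(x) = sum_{|p| <= n} sinc(x - pM)."""
    p = np.arange(-n, n + 1)
    return np.sum(np.sinc(x - p * M))       # np.sinc(t) = sin(pi t) / (pi t)

def sincd(x, M):
    """Closed forms of Proposition 1, for x not a multiple of M."""
    if M % 2:                               # M odd
        return np.sin(np.pi * x) / (M * np.sin(np.pi * x / M))
    return np.sin(np.pi * x) / (M * np.tan(np.pi * x / M))

for M in (5, 8):                            # one odd and one even period
    for n in (10, 1000, 100000):
        print(M, n, abs(S(1.3, M, n) - sincd(1.3, M)))   # decreases with n
```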

Appendix 2: Proof of Theorem 2

Since each operator \(T_z\) is linear and translation-invariant (Hypothesis (ii)), it can be written as a convolution, that is,

$$\begin{aligned} T_z s(k) = (\psi _z \star s)(k) := \sum _{l\in I_M} \psi _z(k-l) s(l), \end{aligned}$$
(67)

where \(\psi _z\) is an element of \(\mathcal S\). Taking the DFT of (67), we obtain

$$\begin{aligned} \forall \alpha \in {\mathbb Z}, \quad \widehat{T_z s}(\alpha ) = \widehat{\psi }_z(\alpha ) \widehat{s}(\alpha ). \end{aligned}$$
(68)

Now, from Hypothesis (iii) we immediately get

$$\begin{aligned} \forall z,w \in {\mathbb R},\; \forall \alpha \in {\mathbb Z},\quad \widehat{\psi }_{z+w}(\alpha ) = \widehat{\psi }_z(\alpha ) \widehat{\psi }_w(\alpha ), \end{aligned}$$
(69)

and by continuity of \(z\mapsto \widehat{\psi }_z(\alpha )\) (deduced from Hypothesis (i)) we obtain

$$\begin{aligned} \forall \alpha \in {\mathbb Z}, \quad \widehat{\psi }_z(\alpha ) = \mathrm{e}^{\gamma (\alpha ) z} \end{aligned}$$
(70)

for some \(\gamma (\alpha ) \in {\mathbb C}\). Since \(\widehat{\psi }_1(\alpha ) = \mathrm{e}^\frac{-2i\pi \alpha }{M}\), we have

$$\begin{aligned} \gamma (\alpha ) = -2i\pi \left( \frac{\alpha }{M} + p(\alpha ) \right) , \end{aligned}$$
(71)

where \(p(\alpha ) \in {\mathbb Z}\) and \(p(-\alpha )=-p(\alpha )\) (the fact that \(T_z u\) is real-valued implies that \(\widehat{\psi }_z(-\alpha )=\widehat{\psi }_z(\alpha )^*\)).

Last, we compute

$$\begin{aligned} \Vert T_z - id \Vert _2^2= & {} \sup _{\Vert s\Vert _2=1}\Vert T_z s- s \Vert _2^2 \\= & {} \frac{1}{M}\sup _{\Vert s\Vert _2=1}\Vert \widehat{T_z s}- \hat{s} \Vert _2^2\\= & {} \frac{1}{M} \sup _{\Vert \hat{s}\Vert _2^2=M} \sum _{\alpha \in \widehat{I}_M} |\mathrm{e}^{-2i\pi \left( \frac{\alpha }{M} + p(\alpha ) \right) z} -1|^2 \cdot |\hat{s}(\alpha )|^2\\= & {} 4 \max _{\alpha \in \widehat{I}_M} \sin ^2 \left( \pi \left( \frac{\alpha }{M} + p(\alpha ) \right) z \right) \\= & {} 4 \pi ^2 z^2 \max _{\alpha \in \widehat{I}_M} \left( \frac{\alpha }{M} + p(\alpha ) \right) ^2 + \underset{z\rightarrow 0}{o}(z^2). \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{z\rightarrow 0} \;|z|^{-1} \Vert T_z - id \Vert _2 = 2 \pi \max _{\alpha \in \widehat{I}_M} \left| \frac{\alpha }{M} + p(\alpha ) \right| \end{aligned}$$
(72)

and since \(\frac{\alpha }{M}\in (-\frac{1}{2},\frac{1}{2})\) and \(p(\alpha )\in {\mathbb Z}\) for any \(\alpha \in \widehat{I}_M\), the right-hand side of (72) is minimal if and only if \(p(\alpha )=0\) for all \(\alpha \in \widehat{I}_M\). We conclude from (71) and (70) that

$$\begin{aligned} \forall \alpha \in \widehat{I}_M, \quad \widehat{\psi }_z(\alpha ) = \mathrm{e}^{-2i\pi \alpha z/M}, \end{aligned}$$
(73)

and thus (68) can be rewritten as

$$\begin{aligned} T_z s(k) = \frac{1}{M} \sum _{\alpha \in \widehat{I}_M} \widehat{s}(\alpha ) \mathrm{e}^{-2i\pi \alpha z/M} \mathrm{e}^{2i\pi \alpha k/M}, \end{aligned}$$
(74)

which is exactly \(S(k-z)\) thanks to (13) (recall that the real part is not needed because M is odd). Therefore, (24) is a necessary form for a set of operators \((T_z)\) satisfying Hypotheses (i) to (iv).

Conversely, one easily checks that the operators \((T_z)\) defined by (24) satisfy the Hypotheses (i) to (iv). \(\square \)
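
Concretely, for odd M the admissible operators are exactly the DFT phase shifts. The following sketch (our illustration, using numpy's DFT conventions) checks the integer-shift normalization and the semigroup Hypothesis (iii):

```python
import numpy as np

def T(s, z):
    """Fractional translation of a real signal s of odd length M,
    implemented as the DFT phase shift of Theorem 2 (a sketch)."""
    M = len(s)
    alpha = np.fft.fftfreq(M) * M           # integer frequencies, M odd
    phase = np.exp(-2j * np.pi * alpha * z / M)
    return np.real(np.fft.ifft(np.fft.fft(s) * phase))

s = np.random.randn(7)
print(np.allclose(T(s, 1), np.roll(s, 1)))        # T_1 s(k) = s(k-1)
print(np.allclose(T(T(s, 0.3), 0.7), T(s, 1.0)))  # T_{z+w} = T_z T_w
```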

Appendix 3: Proof of Proposition 8

Let us denote by \(\nabla _{n,x} u \) and \(\nabla _{n,y} u\) the two elements of \({\mathbb R}^{\Omega _n}\) such that \(\nabla _nu = \left( \nabla _{n,x}u, \nabla _{n,y} u \right) \). In the following, the notation \(\langle \cdot ,\cdot \rangle _X\) stands for the usual Euclidean (respectively, Hermitian) inner product over the real (respectively, complex) Hilbert space X. We have

$$\begin{aligned} \langle \nabla _nu, p \rangle _{{\mathbb R}^{\Omega _n} \times {\mathbb R}^{\Omega _n}} = \langle \nabla _{n,x} u, p_x \rangle _{{\mathbb R}^{\Omega _n}} + \langle \nabla _{n,y} u, p_y \rangle _{{\mathbb R}^{\Omega _n}}. \end{aligned}$$

Recall that we defined \(\mathrm {div}_{n} = -\nabla _n^*\), the opposite of the adjoint of \(\nabla _n\). Noting \(\mathrm {div}_{n,x} = - \nabla _{n,x}^*\) and \(\mathrm {div}_{n,y} = - \nabla _{n,y}^*\), we have

$$\begin{aligned} \langle \nabla _nu, p \rangle _{{\mathbb R}^{\Omega _n} \times {\mathbb R}^{\Omega _n}} = \langle u, -\mathrm {div}_{n,x}(p_x)-\mathrm {div}_{n,y}(p_y) \rangle _{{\mathbb R}^\Omega }, \end{aligned}$$

so that we identify \(\mathrm {div}_n(p) = \mathrm {div}_{n,x}(p_x)+\mathrm {div}_{n,y}(p_y)\). Let us focus on the computation of \(\mathrm {div}_{n,x}(p_x)\). Let \(\widehat{\Omega _1}, \widehat{\Omega _2}, \widehat{\Omega _3}, \widehat{\Omega _4}\) be the sets defined by

$$\begin{aligned} \widehat{\Omega _1}= & {} \left\{ \left( \alpha ,\beta \right) \in {\mathbb R}^2,~ |\alpha |< \frac{M}{2}, ~|\beta |< \frac{N}{2} \right\} \cap {\mathbb Z}^2 \\ \widehat{\Omega _2}= & {} \left\{ \left( \pm \frac{M}{2}, \beta \right) \in {\mathbb R}^2, ~ |\beta |< \frac{N}{2} \right\} \cap {\mathbb Z}^2 \\ \widehat{\Omega _3}= & {} \left\{ \left( \alpha ,\pm \frac{N}{2}\right) \in {\mathbb R}^2,~ |\alpha | < \frac{M}{2} \right\} \cap {\mathbb Z}^2 \\ \widehat{\Omega _4}= & {} \left\{ \left( \pm \frac{M}{2}, \pm \frac{N}{2}\right) \right\} \cap {\mathbb Z}^2. \\ \end{aligned}$$

Notice that some sets among \(\widehat{\Omega _2}, \widehat{\Omega _3}\) and \(\widehat{\Omega _4}\) may be empty, depending on the parity of M and N. Now, let \(h_{\widehat{p_x}}\) be the function defined in Proposition 8 and let us show that

$$\begin{aligned} \forall (\alpha ,\beta ) \in \widehat{\Omega }, \quad \widehat{\mathrm {div}_{n,x}(p_x)}(\alpha ,\beta ) = 2 i \pi \frac{\alpha }{M} h_{\widehat{p_x}}(\alpha ,\beta ). \end{aligned}$$
(75)

Given \(z \in {\mathbb C}\), we denote as usual by \(z^*\) the conjugate of \(z\). Thanks to the Parseval identity, and using Proposition 6 (because we assumed \(n\ge 2\)), we have

$$\begin{aligned}&\langle \nabla _{n,x} u, p_x \rangle _{{\mathbb R}^{\Omega _n}}\\&\quad = \dfrac{1}{n^2 M N} \langle \widehat{\nabla _{n,x} u}, \widehat{p_x} \rangle _{{\mathbb C}^{\Omega _n}} \\&\quad = \dfrac{1}{n^2 MN} \displaystyle {\sum _{(\alpha ,\beta ) \in \widehat{\Omega }_n}} \widehat{\nabla _{n,x} u}(\alpha ,\beta ) \, \left( \widehat{p_x}(\alpha ,\beta )\right) ^* \\&\quad = \dfrac{1}{MN} \displaystyle {\sum \limits _{\begin{array}{c} -\frac{M}{2} \le \alpha \le \frac{M}{2} \\ -\frac{N}{2} \le \beta \le \frac{N}{2} \end{array}}} -\widehat{u}(\alpha ,\beta ) \left( 2 i \pi \varepsilon _M(\alpha ) \varepsilon _N(\beta ) \frac{\alpha }{M} \widehat{p_x}(\alpha ,\beta ) \right) ^*. \end{aligned}$$

It follows that

$$\begin{aligned} \langle \nabla _{n,x} u, p_x \rangle _{{\mathbb R}^{\Omega _n}} = S_1 + S_2 + S_3 + S_4, \end{aligned}$$

where for all \(k \in \{1,2,3,4\}\), we have set

$$\begin{aligned} S_k = \dfrac{1}{MN} \! \displaystyle {\sum _{(\alpha ,\beta ) \in \widehat{\Omega }_k}} \! -\widehat{u}(\alpha ,\beta ) \left( 2 i \pi \varepsilon _M(\alpha ) \varepsilon _N(\beta ) \frac{\alpha }{M} \widehat{p_x}(\alpha ,\beta ) \right) ^*. \end{aligned}$$

Consider \(S_1\) first. Since we have \(\varepsilon _M(\alpha ) = \varepsilon _N(\beta ) = 1\) and \(h_{\widehat{p_x}}(\alpha ,\beta ) = \widehat{p_x}(\alpha ,\beta )\) for all \((\alpha ,\beta ) \in \widehat{\Omega _1}\), we recognize

$$\begin{aligned} S_1 = \dfrac{1}{MN} \displaystyle {\sum _{|\alpha |< \frac{M}{2},~|\beta | < \frac{N}{2}}} -\widehat{u}(\alpha ,\beta ) \left( 2i\pi \frac{\alpha }{M} h_{\widehat{p_x}}(\alpha ,\beta )\right) ^*. \end{aligned}$$

Now consider \(S_2\). If M is odd, \(\widehat{\Omega }_2\) is empty and \(S_2 = 0\). Otherwise, since \(\varepsilon _M(\alpha ) \varepsilon _N(\beta ) = 1/2\) for all \((\alpha ,\beta ) \in \widehat{\Omega _2}\), by grouping together the terms \(\left( -\frac{M}{2},\beta \right) \) and \(\left( \frac{M}{2},\beta \right) \), we get

$$\begin{aligned} S_2= & {} \dfrac{1}{MN} \displaystyle {\sum _{\alpha = -\frac{M}{2}, |\beta |< \frac{N}{2} }} -\widehat{u}(\alpha ,\beta ) \\&\times \,\left( 2 i \pi \frac{1}{2} \frac{\alpha }{M} \widehat{p_x}(\alpha ,\beta ) - 2 i \pi \frac{1}{2} \frac{\alpha }{M} \widehat{p_x}(-\alpha ,\beta ) \right) ^*\\= & {} \dfrac{1}{MN} \displaystyle {\sum _{\alpha = -\frac{M}{2}, |\beta | < \frac{N}{2} }} -\widehat{u}(\alpha ,\beta ) \left( 2i\pi \frac{\alpha }{M} h_{\widehat{p_x}}(\alpha ,\beta )\right) ^*,~~~ \end{aligned}$$

since we have set \(h_{\widehat{p_x}}(-\frac{M}{2},\beta ) = \frac{1}{2} \left( \widehat{p_x}(-\frac{M}{2},\beta ) - \widehat{p_x}(\frac{M}{2},\beta )\right) \) for \(|\beta | < N/2\).

The term \(S_3\) is handled similarly. When N is odd, \(\widehat{\Omega _3}= \emptyset \) and \(S_3 = 0\). Otherwise, when N is even, we have \(\varepsilon _M(\alpha )\varepsilon _N(\beta ) = 1/2\) for all \((\alpha ,\beta ) \in \widehat{\Omega _3}\), thus, by grouping together the terms \(\left( \alpha ,-\frac{N}{2}\right) \) and \(\left( \alpha ,\frac{N}{2}\right) \), we get

$$\begin{aligned} S_3= & {} \dfrac{1}{MN} \displaystyle {\sum _{|\alpha |< \frac{M}{2}, \beta =-\frac{N}{2}}} -\widehat{u}(\alpha ,\beta ) \\&\times \left( 2 i \pi \frac{1}{2} \frac{\alpha }{M} \widehat{p_x}(\alpha ,\beta ) + 2 i \pi \frac{1}{2} \frac{\alpha }{M} \widehat{p_x}(\alpha ,-\beta ) \right) ^*\\= & {} \dfrac{1}{MN} \displaystyle {\sum _{|\alpha | < \frac{M}{2}, \beta = -\frac{N}{2}}} -\widehat{u}(\alpha ,\beta ) \left( 2i\pi \frac{\alpha }{M} h_{\widehat{p_x}}(\alpha ,\beta )\right) ^*, \end{aligned}$$

since we have set \(h_{\widehat{p_x}}(\alpha ,-\frac{N}{2}) = \frac{1}{2} \left( \widehat{p_x}(\alpha ,-\frac{N}{2}) + \widehat{p_x}(\alpha ,\frac{N}{2})\right) \) for \(|\alpha | < M/2\).

Lastly, let us consider \(S_4\). When M and N are both even (otherwise \(\widehat{\Omega _4} = \emptyset \) and \(S_4=0\)), we immediately get, for \(\alpha =-\frac{M}{2}\) and \(\beta =-\frac{N}{2}\),

$$\begin{aligned} S_4= & {} - \widehat{u}(\alpha ,\beta ) \left( \sum _{s_1 = \pm 1, s_2 = \pm 1} 2 i \pi \frac{1}{4} s_1 \frac{\alpha }{M} \widehat{p_x}(s_1 \alpha ,s_2 \beta ) \right) ^* \\= & {} - \widehat{u}(\alpha ,\beta ) \left( 2 i \pi \frac{\alpha }{M} h_{\widehat{p_x}}(\alpha ,\beta ) \right) ^*, \end{aligned}$$

since for all \((\alpha ,\beta ) \in \widehat{\Omega _4}\), we have \(\varepsilon _M(\alpha ) \varepsilon _N(\beta ) = 1/4\) and we have set \(h_{\widehat{p_x}}(\alpha ,\beta ) = \frac{1}{4} \sum _{s_1 = \pm 1, s_2 = \pm 1} s_1 \widehat{p_x}(s_1 \alpha ,s_2 \beta )\).

Finally, we can write \(S_1 + S_2 + S_3 + S_4\) as a single sum over \(\widehat{\Omega }\); indeed,

$$\begin{aligned} \langle \nabla _{n,x} u, p_x \rangle _{{\mathbb R}^{\Omega _n}}= & {} S_1 + S_2 + S_3 + S_4 \\= & {} \frac{1}{MN} \sum _{(\alpha ,\beta ) \in \widehat{\Omega }} - \widehat{u}(\alpha ,\beta ) \left( 2 i \pi \frac{\alpha }{M} h_{\widehat{p_x}}(\alpha ,\beta ) \right) ^*, \end{aligned}$$

and using again the Parseval identity, we get (75). With a similar approach, one can check that

$$\begin{aligned} \forall (\alpha ,\beta ) \in \widehat{\Omega }, \quad \widehat{\mathrm {div}_{n,y}(p_y)}(\alpha ,\beta ) = 2 i \pi \frac{\beta }{N} h_{\widehat{p_y}}(\alpha ,\beta ), \end{aligned}$$

where \(h_{\widehat{p_y}}\) is defined in Proposition 8. Consequently, for any \((\alpha ,\beta ) \in \widehat{\Omega }\), we have

$$\begin{aligned} \widehat{\mathrm {div}_n(p)}(\alpha ,\beta ) = 2 i \pi \left( \frac{\alpha }{M} h_{\widehat{p_x}}(\alpha ,\beta ) + \frac{\beta }{N} h_{\widehat{p_y}}(\alpha ,\beta ) \right) , \end{aligned}$$

which ends the proof of Proposition 8. \(\square \)
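
When M and N are both odd, the boundary sets \(\widehat{\Omega _2}, \widehat{\Omega _3}, \widehat{\Omega _4}\) are empty and \(h_{\widehat{p_x}} = \widehat{p_x}\), so the divergence reduces to a plain spectral formula. The sketch below (our simplification to that case, on the original grid) checks the defining adjoint relation \(\mathrm {div} = -\nabla ^*\) numerically:

```python
import numpy as np

M, N = 7, 9                                # both odd: no boundary frequencies
fa = np.fft.fftfreq(M)[:, None]            # alpha / M
fb = np.fft.fftfreq(N)[None, :]            # beta / N

def grad(u):
    U = np.fft.fft2(u)
    return (np.real(np.fft.ifft2(2j * np.pi * fa * U)),
            np.real(np.fft.ifft2(2j * np.pi * fb * U)))

def div(px, py):
    # DFT of div(p) = 2 i pi (alpha/M) px_hat + 2 i pi (beta/N) py_hat
    return np.real(np.fft.ifft2(2j * np.pi * (fa * np.fft.fft2(px)
                                              + fb * np.fft.fft2(py))))

u = np.random.randn(M, N)
px, py = np.random.randn(M, N), np.random.randn(M, N)
gx, gy = grad(u)
lhs = np.sum(gx * px + gy * py)            # <grad u, p>
rhs = -np.sum(u * div(px, py))             # <u, -div p>
print(np.allclose(lhs, rhs))               # True: div = -grad*
```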

Appendix 4: Proof of Theorem 3

Recall that for any integer M, we denote by \(T_M\) the real vector space of real-valued trigonometric polynomials that can be written as a complex linear combination of the family \(( x \mapsto \mathrm{e}^{2 i \pi \frac{\alpha x}{M}} )_{-\frac{M}{2} \le \alpha \le \frac{M}{2}}\). In order to prove Theorem 3, we need the following lemma.

Lemma 1

Let \(M=2m+1\) be an odd positive integer. The functions F and G defined by,

$$\begin{aligned} \forall x \in {\mathbb R}, \quad F(x) = \frac{1}{M} \sum _{\alpha =-m}^m \! \mathrm{e}^{\frac{2i\pi \alpha x}{M}}, \quad G(x) = F(x)-F(x-1), \end{aligned}$$

are both in \(T_M\) and G satisfies

$$\begin{aligned} \sum _{k=0}^{M-1} |G(k)| = 2, \quad \int _1^{M} |G(x)|\,\mathrm{d}x \ge \frac{8}{\pi ^2} \log \left( \frac{2M}{\pi }\right) -2. \end{aligned}$$

Proof

F is in \(T_M\) by construction, and so is G as the difference of two elements of \(T_M\). Writing \(\omega = \frac{\pi }{M}\), we note that \(F(0)=1\) and

$$\begin{aligned} \forall x \in (0,M), \quad F(x) = \frac{\mathrm{e}^{2 i \omega (-m) x}}{M} \cdot \frac{1-\mathrm{e}^{2i\pi x}}{1-\mathrm{e}^{2i\omega x}} = \frac{\sin {(\pi x)}}{M \sin {(\omega x)}}, \end{aligned}$$

so that \(F(k) = 0\) for all integers \(k \in [1,M-1]\). Consequently, \(G(0)=1, G(1)=-1\) and \(G(k)=0\) for all integers \(k \in [2,M-1]\), thus

$$\begin{aligned} \sum _{k=0}^{M-1} |G(k)| = |G(0)| + |G(1)| = 2, \end{aligned}$$

yielding the first announced result of the Lemma. Now, remark that the sign changes of G in \((0,2m+1)\) occur at the integer points \(2,3,\ldots ,2m\) and at \(x=\frac{1}{2}\) (by symmetry). Thus, we have

$$\begin{aligned} J := \displaystyle {\int _1^{M}} |G(x)|\,\mathrm{d}x= & {} \displaystyle {\sum _{k=1}^{2m} (-1)^k \int _k^{k+1}} G(x)\,\mathrm{d}x \\= & {} \displaystyle {2 \sum _{k=0}^{2m-1} (-1)^k \int _k^{k+1}F(x)\,\mathrm{d}x }, \end{aligned}$$

since for all \(x \in [0,M]\), we have \(G(x) = F(x)-F(x-1)\) and (because M is odd) \(F(x) = F(M-x)\). It follows that

$$\begin{aligned} J \ge \displaystyle {2 \left( \sum _{k=0}^{2m} (-1)^k \int _k^{k+1}F(x)\,\mathrm{d}x \right) }-2, \end{aligned}$$

since \(|F| \le 1\) everywhere.

Consequently, by isolating the index \(\alpha =0\) in the definition of F, we get \(J \ge 2\left( J^\prime +\frac{1}{M}\right) -2\), with

$$\begin{aligned} J^\prime = \sum _{k=0}^{2m} \frac{ (-1)^k }{M} \sum \limits _{\begin{array}{c} -m\le \alpha \le m \\ \alpha \ne 0 \end{array}} \int _k^{k+1} \mathrm{e}^{2i\omega \alpha x} \,\mathrm{d}x. \end{aligned}$$

By exchanging the sums and grouping identical terms, we obtain

$$\begin{aligned} J^\prime= & {} \displaystyle {\frac{1}{M} \sum \limits _{\begin{array}{c} -m\le \alpha \le m \\ \alpha \ne 0 \end{array}} \sum _{k=0}^{2m} (-1)^k \cdot \frac{\mathrm{e}^{2i\omega \alpha (k+1)}- \mathrm{e}^{2i\omega \alpha k} }{2i\omega \alpha }} \nonumber \\= & {} \displaystyle {\sum \limits _{\begin{array}{c} -m\le \alpha \le m \\ \alpha \ne 0 \end{array}} \frac{-1}{i\pi \alpha } \sum _{k=1}^{2m} \left( -\mathrm{e}^{2i\omega \alpha }\right) ^k}. \end{aligned}$$
(76)

After summation of the geometric progression

$$\begin{aligned}&\displaystyle {\sum _{k=1}^{2m}} \left( -\mathrm{e}^{2i\omega \alpha }\right) ^k = -\mathrm{e}^{2i\omega \alpha }\cdot \dfrac{1-\mathrm{e}^{2i\omega \alpha (2m)}}{1+\mathrm{e}^{2i\omega \alpha }} \\&\quad = \mathrm{e}^{i\pi \alpha } \dfrac{i \sin (2\omega m\alpha )}{\cos (\omega \alpha )} = \dfrac{i \sin (2\omega m\alpha -\pi \alpha )}{\cos (\omega \alpha )} = -i\tan (\omega \alpha ), \end{aligned}$$

Equation (76) finally leads to

$$\begin{aligned} J^\prime = \sum \limits _{\begin{array}{c} -m\le \alpha \le m \\ \alpha \ne 0 \end{array}} \frac{1}{\pi \alpha } \cdot \tan (\omega \alpha ) = \frac{2}{M} \sum _{\alpha =1}^m g(\omega \alpha ) \end{aligned}$$

where \(g :t \mapsto \frac{\tan t}{t}\). Now since g is positive and increasing on \((0,\frac{\pi }{2})\), we have

$$\begin{aligned} \sum _{\alpha =1}^m g(\omega \alpha ) \ge \int _0^m g(\omega x) \,\mathrm{d}x = \frac{1}{\omega } \int _0^{\omega m} g(t)\,\mathrm{d}t. \end{aligned}$$

Using the lower bound \(g(t)\ge \frac{2}{\pi } \tan t\) for \(t\in (0,\frac{\pi }{2})\), we finally get

$$\begin{aligned} J^\prime\ge & {} \frac{4}{\pi ^2} \int _0^{\omega m} \! \tan t\,\mathrm{d}t = -\frac{4}{\pi ^2} \log \cos (\omega m)\\= & {} -\frac{4}{\pi ^2}\log \sin \left( \frac{\omega }{2} \right) \end{aligned}$$

and thus \(\displaystyle J^\prime \ge \frac{4}{\pi ^2} \log \left( \frac{2}{\omega } \right) \), from which the inequality announced in Lemma 1 follows. \(\square \)

Now, let us prove Theorem 3 by building a discrete image u such that \({\textsc {STV}}_1(u)\) is fixed while \({\textsc {STV}}_\infty (u)\) increases with the image size. We consider the function H defined by

$$\begin{aligned} \forall x \in {\mathbb R}, \quad H(x) = \int _0^x G(t)\,\mathrm{d}t, \end{aligned}$$

where \(G \in T_M\) is the real-valued M-periodic trigonometric polynomial defined in Lemma 1 (\(M=2m+1\)). Since the integral of G over one period is zero (\(\int _0^M G(t) \, \mathrm{d}t = 0\)), H is also an element of \(T_M\). Consequently, the bivariate trigonometric polynomial defined by

$$\begin{aligned} \forall (x,y) \in {\mathbb R}^2, \quad U(x,y) = \frac{1}{M} H(x), \end{aligned}$$

belongs to \(T_M \otimes T_M\), and since M is odd it is exactly the Shannon interpolate of the discrete image defined by

$$\begin{aligned} \forall (k,l) \in I_M\times I_M, \quad u(k,l) = U(k,l). \end{aligned}$$
(77)

In particular, by definition of \({\textsc {STV}}_1\) and \({\textsc {STV}}_\infty \), we have

$$\begin{aligned} {\textsc {STV}}_1(u)= & {} \sum _{(k,l) \in \Omega } |\nabla U(k,l)|, \quad \\ \text {and} \quad {\textsc {STV}}_\infty (u)= & {} \int _{[0,M]^2} |\nabla U(x,y)| \, \mathrm{d}x \mathrm{d}y. \end{aligned}$$

From Lemma 1, we have on the one hand,

$$\begin{aligned} {\textsc {STV}}_1(u)= & {} \sum _{(k,l) \in \Omega } |\nabla U(k,l)| \\= & {} \sum _{k=0}^{2m} |H'(k)| = \sum _{k=0}^{2m} |G(k)|= 2, \end{aligned}$$

and on the other hand,

$$\begin{aligned} {\textsc {STV}}_\infty (u)= & {} \int _{[0,M]^2} |\nabla U(x,y)| \, \mathrm{d}x\mathrm{d}y = \int _0^M |H'(x)|\,\mathrm{d}x \\= & {} \int _0^M |G(x)|\,\mathrm{d}x \ge \frac{8}{\pi ^2} \log \left( \frac{2M}{\pi }\right) -2, \end{aligned}$$

which cannot be bounded from above by a constant independent of M. \(\square \)
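
Both conclusions are easy to reproduce numerically. In the sketch below (our check, not part of the paper), the Dirichlet kernel F is evaluated in closed form; the grid samples of G always sum to 2 in absolute value, while the integral of |G| over one period grows with M and stays above the announced logarithmic bound:

```python
import numpy as np

def F(x, M):
    """Dirichlet kernel sin(pi x) / (M sin(pi x / M)); equals 1 at
    multiples of M (np.sinc handles the removable singularity at 0)."""
    num, den = np.sinc(x), np.sinc(x / M)
    out = np.ones_like(np.asarray(x, dtype=float))
    np.divide(num, den, out=out, where=np.abs(den) > 1e-12)
    return out

for m in (5, 50, 500):
    M = 2 * m + 1
    G = lambda x: F(x, M) - F(x - 1, M)
    k = np.arange(M)
    x = np.linspace(1, M, 100 * M)
    integral = np.mean(np.abs(G(x))) * (M - 1)        # quadrature on [1, M]
    bound = 8 / np.pi ** 2 * np.log(2 * M / np.pi) - 2
    print(M, np.sum(np.abs(G(k))), integral, bound)   # 2.0, growing, lower bound
```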

Appendix 5: Proof of Proposition 10

Let \(u \in {\mathbb R}^{\Omega }, n \in {\mathbb N}\) and \(\alpha \in {\mathbb R}\) such that \(n \ge 1\) and \(\alpha > 0\). One can rewrite \({\textsc {HSTV}}_{\alpha ,n}(u) = \tfrac{1}{n^2} H_\alpha (\nabla _nu)\), where

$$\begin{aligned} \forall g \in {\mathbb R}^{\Omega _n} \times {\mathbb R}^{\Omega _n}, \quad H_\alpha (g) = \sum _{(x,y) \in \Omega _n} \mathcal {H}_{\alpha }(g(x,y)). \end{aligned}$$

Let us show that the Legendre–Fenchel transform of \(H_\alpha \) is

$$\begin{aligned} H_\alpha ^\star (p)= \delta _{\Vert \cdot \Vert _{\infty ,2} \le 1}(p) + \tfrac{\alpha }{2} \Vert p\Vert _2^2. \end{aligned}$$

One easily checks that \(\mathcal {H}_\alpha \in \Gamma ({\mathbb R}^2)\), and it follows that \(H_\alpha \in \Gamma ({\mathbb R}^{\Omega _n} \times {\mathbb R}^{\Omega _n})\). Thus, for any image \(u\in {\mathbb R}^\Omega \), we have \(H_\alpha (\nabla _nu) = H_\alpha ^{\star \star }(\nabla _nu)\) and

$$\begin{aligned} H_\alpha ^{\star \star }(\nabla _nu) = \sup _{p \in {\mathbb R}^{\Omega _n} \times {\mathbb R}^{\Omega _n}} \langle \nabla _nu, p \rangle - H_\alpha ^\star (p). \end{aligned}$$
(78)

Besides, we have \(H_\alpha ^\star (p) = \sum _{(x,y) \in \Omega _n} \mathcal {H}^\star _\alpha (p(x,y))\), and the Legendre–Fenchel transform of \(\mathcal {H}_\alpha \) is the function \(\mathcal {H}_\alpha ^\star (z) = \delta _{|\cdot | \le 1}(z) + \tfrac{\alpha }{2}|z|^2\), where \(\delta _{|\cdot | \le 1}\) denotes the indicator function of the unit ball for the \(\ell ^2\) norm in \({\mathbb R}^2\). Indeed, it is proven in [55] that \(\mathcal {H}_\alpha \) is the Moreau envelope (or Moreau–Yosida regularization) [51, 76] with parameter \(\alpha \) of the \(\ell ^2\) norm \(|\cdot |\), or equivalently the infimal convolution (see [59]) between the two proper, convex and l.s.c. functions \(f_1(x) = |x|\) and \(f_2(x) = \tfrac{1}{2\alpha } |x|^2\), that is

$$\begin{aligned} \forall y \in {\mathbb R}^2, \quad \mathcal {H}_\alpha (y) = \left( f_1 \Box f_2 \right) (y) := \inf _{x \in {\mathbb R}^2} f_1(x) + f_2(y-x). \end{aligned}$$

Thus, we have \(\mathcal {H}_\alpha ^\star = \left( f_1 \Box f_2 \right) ^\star = f_1^\star + f_2^\star \) (see [55, 59]), leading exactly to \(\mathcal {H}_\alpha ^\star (z) = \delta _{|\cdot | \le 1}(z) + \tfrac{\alpha }{2}|z|^2 \) for any \(z \in {\mathbb R}^2\), since we have \(f_1^\star = z \mapsto \delta _{|\cdot |\le 1}(z)\) and \(f_2^\star = z \mapsto \tfrac{\alpha }{2} |z|^2\). It follows that for any \(p \in {\mathbb R}^{\Omega _n} \times {\mathbb R}^{\Omega _n}\), we have

$$\begin{aligned} H_\alpha ^\star (p) = \! \sum _{(x,y) \in \Omega _n} \!\! \mathcal {H}^\star _\alpha (p(x,y)) = \delta _{\Vert \cdot \Vert _{\infty ,2} \le 1}(p) + \tfrac{\alpha }{2}\Vert p\Vert _2^2, \end{aligned}$$
(79)

and the supremum (78) is a maximum for the same reason as in the proof of Proposition 9. Finally, writing \({\textsc {HSTV}}_{\alpha ,n}(u) = \tfrac{1}{n^2} H_\alpha (\nabla _nu) = \tfrac{1}{n^2} H_\alpha ^{\star \star }(\nabla _nu)\) using (78) and (79) leads to the announced result. \(\square \)
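
In dimension one these objects are easy to verify numerically. The sketch below (our check; the closed form of the Huber function is the standard one, not quoted from the paper) confirms that \(\mathcal {H}_\alpha \) coincides both with a brute-force evaluation of the infimal convolution \(f_1 \Box f_2\) and with the scalar dual expression \(\max _{|z| \le 1} yz - \tfrac{\alpha }{2}z^2\) obtained from \(\mathcal {H}_\alpha ^\star \):

```python
import numpy as np

alpha = 0.7

def huber(r):
    """Standard closed form of the Huber function, r = |y| >= 0."""
    return np.where(r <= alpha, r ** 2 / (2 * alpha), r - alpha / 2)

def inf_conv(y):
    """Brute-force (f1 box f2)(y) = inf_x |x| + |y - x|^2 / (2 alpha)."""
    x = np.linspace(-10.0, 10.0, 20001)
    return np.min(np.abs(x) + (y - x) ** 2 / (2 * alpha))

def dual(y):
    """max over |z| <= 1 of y z - (alpha/2) z^2, the conjugate expression."""
    z = np.linspace(-1.0, 1.0, 2001)
    return np.max(y * z - alpha / 2 * z ** 2)

for y in (0.1, 0.7, 3.0):
    print(y, huber(y), inf_conv(y), dual(y))   # the three values agree
```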


Cite this article

Abergel, R., Moisan, L. The Shannon Total Variation. J Math Imaging Vis 59, 341–370 (2017). https://doi.org/10.1007/s10851-017-0733-5
