Non-convex Total Variation Regularization for Convex Denoising of Signals

Abstract

Total variation (TV) signal denoising is a popular nonlinear filtering method for estimating piecewise constant signals corrupted by additive white Gaussian noise. Following a ‘convex non-convex’ strategy, recent papers have introduced non-convex regularizers for signal denoising that preserve the convexity of the cost function to be minimized. In this paper, we propose a non-convex TV regularizer, defined using concepts from convex analysis, that unifies, generalizes, and improves upon these regularizers. In particular, we use the generalized Moreau envelope which, unlike the usual Moreau envelope, incorporates a matrix parameter. We describe a novel approach to setting the matrix parameter, which is essential for realizing the improvement we demonstrate. Additionally, we describe a new set of algorithms for non-convex TV denoising, elucidate the relationships among them, and show how they build upon fast exact algorithms for classical TV denoising.
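
As background, classical 1-D TV denoising estimates x from y by minimizing \( \tfrac{1}{2}\Vert y - x \Vert _2^2 + \lambda \Vert D x \Vert _1 \), where D is the first-order difference matrix. The sketch below (all names are our own) solves this by projected gradient on the dual problem, a Chambolle-type scheme; it is illustrative only, and far slower than the fast exact solvers this paper builds upon (e.g., Condat [17] or the taut string method [21]).

```python
import numpy as np

def Dt(z):
    """Adjoint of the first-order difference operator D (where D @ x == np.diff(x))."""
    return np.concatenate(([-z[0]], -np.diff(z), [z[-1]]))

def tv_denoise(y, lam, n_iter=1000):
    """Classical TV denoising: argmin_x 0.5*||y - x||^2 + lam*||Dx||_1,
    via projected gradient on the dual problem (illustrative, not fast)."""
    z = np.zeros(len(y) - 1)        # dual variable, one entry per difference
    step = 0.25                     # <= 1/||D D^T||, since ||D D^T|| < 4
    for _ in range(n_iter):
        x = y - Dt(z)               # primal estimate for the current dual z
        z = np.clip(z + step * np.diff(x), -lam, lam)   # project onto [-lam, lam]
    return y - Dt(z)

# Example: denoise a noisy piecewise constant signal.
rng = np.random.default_rng(0)
x_true = np.repeat([0.0, 2.0, -1.0, 1.0], 50)
y = x_true + 0.5 * rng.standard_normal(x_true.size)
x_hat = tv_denoise(y, lam=2.0)
```

With lam large enough, x_hat is piecewise constant, but the jump amplitudes are underestimated; this amplitude bias of the convex penalty is what the non-convex regularizers studied in this paper aim to reduce.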

References

  1. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)

  2. Bayram, I.: Correction for "On the convergence of the iterative shrinkage/thresholding algorithm with a weakly convex penalty". IEEE Trans. Signal Process. 64(14), 3822–3822 (2016)

  3. Bayram, I.: On the convergence of the iterative shrinkage/thresholding algorithm with a weakly convex penalty. IEEE Trans. Signal Process. 64(6), 1597–1608 (2016)

  4. Becker, S., Combettes, P.L.: An algorithm for splitting parallel sums of linearly composed monotone operators, with applications to signal recovery. J. Nonlinear Convex Anal. 15(1), 137–159 (2014)

  5. Blake, A., Zisserman, A.: Visual Reconstruction. MIT Press, Cambridge (1987)

  6. Burger, M., Papafitsoros, K., Papoutsellis, E., Schönlieb, C.-B.: Infimal convolution regularisation functionals of BV and Lp spaces. J. Math. Imaging Vis. 55(3), 343–369 (2016)

  7. Cai, G., Selesnick, I.W., Wang, S., Dai, W., Zhu, Z.: Sparsity-enhanced signal decomposition via generalized minimax-concave penalty for gearbox fault diagnosis. J. Sound Vib. 432, 213–234 (2018)

  8. Candès, E.J., Wakin, M.B., Boyd, S.: Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008)

  9. Carlsson, M.: On convexification/optimization of functionals including an l2-misfit term. arXiv preprint arXiv:1609.09378 (2016)

  10. Castella, M., Pesquet, J.-C.: Optimization of a Geman-McClure like criterion for sparse signal deconvolution. In: IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, pp. 309–312 (2015)

  11. Chambolle, A., Lions, P.-L.: Image recovery via total variation minimization and related problems. Numerische Mathematik 76, 167–188 (1997)

  12. Chan, R., Lanza, A., Morigi, S., Sgallari, F.: Convex non-convex image segmentation. Numerische Mathematik 138(3), 635–680 (2017)

  13. Chan, T.F., Osher, S., Shen, J.: The digital TV filter and nonlinear denoising. IEEE Trans. Image Process. 10(2), 231–241 (2001)

  14. Chartrand, R.: Shrinkage mappings and their induced penalty functions. In: International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1026–1029 (2014)

  15. Chouzenoux, E., Jezierska, A., Pesquet, J., Talbot, H.: A majorize-minimize subspace approach for \(\ell _2-\ell _0\) image regularization. SIAM J. Imag. Sci. 6(1), 563–591 (2013)

  16. Combettes, P.L., Pesquet, J.-C.: Proximal splitting methods in signal processing. In: Bauschke, H.H., et al. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer, Berlin (2011)

  17. Condat, L.: A direct algorithm for 1-D total variation denoising. IEEE Signal Process. Lett. 20(11), 1054–1057 (2013)

  18. Ding, Y., Selesnick, I.W.: Artifact-free wavelet denoising: non-convex sparse regularization, convex optimization. IEEE Signal Process. Lett. 22(9), 1364–1368 (2015)

  19. Donoho, D., Maleki, A., Shahram, M.: Wavelab 850 (2005). http://www-stat.stanford.edu/%7Ewavelab/

  20. Du, H., Liu, Y.: Minmax-concave total variation denoising. Signal Image Video Process. 12(6), 1027–1034 (2018)

  21. Dümbgen, L., Kovac, A.: Extensions of smoothing via taut strings. Electron. J. Stat. 3, 41–75 (2009)

  22. Frecon, J., Pustelnik, N., Dobigeon, N., Wendt, H., Abry, P.: Bayesian selection for the l2-Potts model regularization parameter: 1D piecewise constant signal denoising. IEEE Trans. Signal Process. 65(19), 5215–5224 (2017)

  23. Friedrich, F., Kempe, A., Liebscher, V., Winkler, G.: Complexity penalized M-estimation: fast computation. J. Comput. Graph. Stat. 17(1), 201–224 (2008)

  24. Huska, M., Lanza, A., Morigi, S., Sgallari, F.: Convex non-convex segmentation of scalar fields over arbitrary triangulated surfaces. J. Comput. Appl. Math. 349, 438–451 (2019)

  25. Lanza, A., Morigi, S., Selesnick, I., Sgallari, F.: Nonconvex nonsmooth optimization via convex-nonconvex majorization-minimization. Numerische Mathematik 136(2), 343–381 (2017)

  26. Lanza, A., Morigi, S., Selesnick, I., Sgallari, F.: Sparsity-inducing nonconvex nonseparable regularization for convex image processing. SIAM J. Imag. Sci. 12(2), 1099–1134 (2019)

  27. Lanza, A., Morigi, S., Sgallari, F.: Constrained TVp-l2 model for image restoration. J. Sci. Comput. 68(1), 64–91 (2016)

  28. Lanza, A., Morigi, S., Sgallari, F.: Convex image denoising via non-convex regularization with parameter selection. J. Math. Imaging Vis. 56(2), 195–220 (2016)

  29. Little, M.A., Jones, N.S.: Generalized methods and solvers for noise removal from piecewise constant signals: part I - background theory. Proc. R. Soc. A 467, 3088–3114 (2011)

  30. Malek-Mohammadi, M., Rojas, C.R., Wahlberg, B.: A class of nonconvex penalties preserving overall convexity in optimization-based mean filtering. IEEE Trans. Signal Process. 64(24), 6650–6664 (2016)

  31. Möllenhoff, T., Strekalovskiy, E., Moeller, M., Cremers, D.: The primal-dual hybrid gradient method for semiconvex splittings. SIAM J. Imag. Sci. 8(2), 827–857 (2015)

  32. Nikolova, M.: Estimation of binary images by minimizing convex criteria. In: Proceedings of IEEE International Conference on Image Processing (ICIP), vol. 2, pp. 108–112 (1998)

  33. Nikolova, M.: Energy minimization methods. In: Scherzer, O. (ed.) Handbook of Mathematical Methods in Imaging, Chapter 5, pp. 138–186. Springer, Berlin (2011)

  34. Nikolova, M., Ng, M., Zhang, S., Ching, W.: Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization. SIAM J. Imag. Sci. 1(1), 2–25 (2008)

  35. Nikolova, M., Ng, M.K., Tam, C.-P.: Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Trans. Image Process. 19(12), 3073–3088 (2010)

  36. Parekh, A., Selesnick, I.W.: Convex denoising using non-convex tight frame regularization. IEEE Signal Process. Lett. 22(10), 1786–1790 (2015)

  37. Parekh, A., Selesnick, I.W.: Enhanced low-rank matrix approximation. IEEE Signal Process. Lett. 23(4), 493–497 (2016)

  38. Parks, T.W., Burrus, C.S.: Digital Filter Design. Wiley, Hoboken (1987)

  39. Portilla, J., Mancera, L.: L0-based sparse approximation: two alternative methods and some applications. In: Proceedings of SPIE, vol. 6701 (Wavelets XII), San Diego, CA, USA (2007)

  40. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992)

  41. Selesnick, I.: Sparse regularization via convex analysis. IEEE Trans. Signal Process. 65(17), 4481–4494 (2017)

  42. Selesnick, I.: Total variation denoising via the Moreau envelope. IEEE Signal Process. Lett. 24(2), 216–220 (2017)

  43. Selesnick, I.W., Bayram, I.: Sparse signal estimation by maximally sparse convex optimization. IEEE Trans. Signal Process. 62(5), 1078–1092 (2014)

  44. Selesnick, I.W., Parekh, A., Bayram, I.: Convex 1-D total variation denoising with non-convex regularization. IEEE Signal Process. Lett. 22(2), 141–144 (2015)

  45. Setzer, S., Steidl, G., Teuber, T.: Infimal convolution regularizations with discrete l1-type functionals. Commun. Math. Sci. 9(3), 797–827 (2011)

  46. Shen, L., Suter, B.W., Tripp, E.E.: Structured sparsity promoting functions. J. Optim. Theory Appl. 183(2), 386–421 (2019)

  47. Shen, L., Xu, Y., Zeng, X.: Wavelet inpainting with the l0 sparse regularization. J. Appl. Comp. Harm. Anal. 41(1), 26–53 (2016)

  48. Sidky, E.Y., Chartrand, R., Boone, J.M., Pan, X.: Constrained TpV minimization for enhanced exploitation of gradient sparsity: application to CT image reconstruction. IEEE J. Transl. Eng. Health Med. 2, 1–18 (2014)

  49. Soubies, E., Blanc-Féraud, L., Aubert, G.: A continuous exact \( \ell _0 \) penalty (CEL0) for least squares regularized problem. SIAM J. Imag. Sci. 8(3), 1607–1639 (2015)

  50. Storath, M., Weinmann, A., Demaret, L.: Jump-sparse and sparse recovery using Potts functionals. IEEE Trans. Signal Process. 62(14), 3654–3666 (2014)

  51. Strang, G.: The discrete cosine transform. SIAM Rev. 41(1), 135–147 (1999)

  52. Wang, S., Selesnick, I.W., Cai, G., Ding, B., Chen, X.: Synthesis versus analysis priors via generalized minimax-concave penalty for sparsity-assisted machinery fault diagnosis. Mech. Syst. Signal Process. 127, 202–233 (2019)

  53. Zhang, C.-H.: Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 38(2), 894–942 (2010)

  54. Zou, J., Shen, M., Zhang, Y., Li, H., Liu, G., Ding, S.: Total variation denoising with non-convex regularizers. IEEE Access 7, 4422–4431 (2019)

Funding

This study was funded by the National Science Foundation (Grant No. CCF-1525398), by the University of Bologna (Grant No. ex 60%), and by the National Group for Scientific Computation (GNCS-INDAM), research projects 2018–19.

Author information

Correspondence to Fiorella Sgallari.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

In this Appendix, we present technical results and their proofs, which are needed for the main results of the paper.

Lemma 2

Let \( y \in \mathbb {R}^N \) and \( {\lambda } > 0 \). Let \( f \in \Gamma _0(\mathbb {R}^N) \) and \( B \in \mathbb {R}^{M \times N} \). Define \( g :\mathbb {R}^N \rightarrow \mathbb {R}\) as

$$\begin{aligned} g(x) = \tfrac{1}{2}\Vert y - x \Vert _2^2 - {\lambda } f^{\mathsf {M}}_{ B } (x) \end{aligned}$$
(81)

where \( f^{\mathsf {M}}_B \) is the generalized Moreau envelope of f. If \( B^{\mathsf {T}}\! B \preccurlyeq (1 / {\lambda } ) I \), then g is convex. If \( B^{\mathsf {T}}\! B \prec (1 / {\lambda } ) I \), then g is strongly convex.

Proof

We write

$$\begin{aligned} g(x)&= \tfrac{1}{2}\Vert y - x \Vert _2^2 - {\lambda } \inf _{ v \in \mathbb {R}^N } \bigl \{ f(v) + \tfrac{1}{2}\Vert B ( x - v ) \Vert _2^2 \bigr \} \nonumber \\&= \tfrac{1}{2}\Vert y - x \Vert _2^2 - \tfrac{ {\lambda } }{2} \Vert B x \Vert _2^2\nonumber \\&\quad - {\lambda } \inf _{ v \in \mathbb {R}^N } \bigl \{ f(v) - v^{\mathsf {T}}\! B^{\mathsf {T}}\! B x + \tfrac{1}{2}\Vert B v \Vert _2^2 \bigr \} \end{aligned}$$
(82)
$$\begin{aligned}&= \tfrac{1}{2}x^{\mathsf {T}}\! (I - {\lambda } B^{\mathsf {T}}\! B) x + \tfrac{1}{2}\Vert y \Vert _2^2 - y^{\mathsf {T}}\! x\nonumber \\&\quad + {\lambda } \sup _{ v \in \mathbb {R}^N } \bigl \{ -f(v) + v^{\mathsf {T}}\! B^{\mathsf {T}}\! B x - \tfrac{1}{2}\Vert B v \Vert _2^2 \bigr \}. \end{aligned}$$
(83)

The function in the curly braces is affine in x (hence convex in x). Since the supremum of a family of convex functions (here indexed by v) is itself convex, the final term of (83) is convex in x. Hence, g is convex if \( I - {\lambda } B^{\mathsf {T}}\! B \) is positive semidefinite; and g is strongly convex if \( I - {\lambda } B^{\mathsf {T}}\! B \) is positive definite. \(\square \)
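To make the proof concrete, here is a minimal numerical sketch of Lemma 2, assuming the illustrative choice \( f = \Vert \cdot \Vert _1 \) (any \( f \in \Gamma _0(\mathbb {R}^N) \) would do). It evaluates the generalized Moreau envelope \( f^{\mathsf {M}}_B \) by solving the inner minimization with ISTA, then spot-checks the midpoint convexity of g from (81) under \( B^{\mathsf {T}}\! B \prec (1/\lambda ) I \). All function names are our own.

```python
import numpy as np

def soft(v, t):
    """Soft thresholding: the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def gen_moreau_env(x, B, n_iter=5000):
    """Generalized Moreau envelope of f = ||.||_1 (illustrative choice of f):
    inf_v ||v||_1 + 0.5*||B(x - v)||^2, inner problem solved by ISTA."""
    BtB = B.T @ B
    L = max(np.linalg.eigvalsh(BtB).max(), 1e-12)   # Lipschitz const of the smooth part
    v = np.zeros_like(x)
    for _ in range(n_iter):
        v = soft(v - (BtB @ (v - x)) / L, 1.0 / L)
    return np.abs(v).sum() + 0.5 * np.linalg.norm(B @ (x - v))**2

# Midpoint-convexity spot check of g(x) = 0.5*||y - x||^2 - lam * envelope (Lemma 2).
rng = np.random.default_rng(0)
N, lam = 8, 2.0
B = rng.standard_normal((5, N))
B *= 0.99 / (np.sqrt(lam) * np.linalg.norm(B, 2))   # enforce B^T B < (1/lam) I
y = rng.standard_normal(N)
g = lambda x: 0.5 * np.linalg.norm(y - x)**2 - lam * gen_moreau_env(x, B)
for _ in range(20):
    a, b = rng.standard_normal(N), rng.standard_normal(N)
    # tolerance absorbs the inexact inner ISTA solves
    assert g(0.5 * (a + b)) <= 0.5 * (g(a) + g(b)) + 1e-4
```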

Lemma 3

In the context of Lemma 2, let \(e_{\max }\) denote the maximum eigenvalue of \( B^{\mathsf {T}}\! B \). If \( B^{\mathsf {T}}\! B \prec (1 / {\lambda } ) I \) (that is, \( e_{\max } < 1/{\lambda } \)), then g in (81) is \( \delta \)-strongly convex with (positive) modulus of strong convexity (at least) equal to

$$\begin{aligned} \delta = 1 - {\lambda } e_{\max }. \end{aligned}$$
(84)

Proof

It follows from Definition 3 that the function g in (81) is \( \delta \)-strongly convex if and only if the function \( {\widetilde{g}} \), defined by

$$\begin{aligned} {\widetilde{g}}(x)&= g(x) - \frac{\delta }{2} \Vert x \Vert _2^2 \end{aligned}$$
(85)
$$\begin{aligned}&= \tfrac{1}{2}x^{\mathsf {T}}\! ((1- \delta )I - {\lambda } B^{\mathsf {T}}\! B) x + \tfrac{1}{2}\Vert y \Vert _2^2 - y^{\mathsf {T}}\! x \nonumber \\&\quad + {\lambda } \sup _{ v \in \mathbb {R}^N } \bigl \{ -f(v) + v^{\mathsf {T}}\! B^{\mathsf {T}}\! B x - \tfrac{1}{2}\Vert B v \Vert _2^2 \bigr \}, \end{aligned}$$
(86)

is convex. Hence, \( {\widetilde{g}} \) in (86) is convex if \( (1- \delta )I - {\lambda } B^{\mathsf {T}}\! B\) is positive semidefinite. Let \( e_i \) be the real nonnegative eigenvalues of \( B^{\mathsf {T}}B\). We have

$$\begin{aligned}&(1- \delta )I - {\lambda } B^{\mathsf {T}}\!B \succcurlyeq 0\\&\quad \iff 1-\delta - {\lambda } e_i \geqslant 0, \ \forall \, i \in \{1,2,\ldots ,N\} \\&\quad \iff \delta \leqslant \min _i \left\{ 1 - {\lambda } e_i \right\} \\&\quad \iff \delta \leqslant 1 - {\lambda } e_{\max } \end{aligned}$$

which completes the proof. \(\square \)
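A direct numerical check of (84), under the same illustrative setup as in the sketch after Lemma 2 (names our own):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0
B = rng.standard_normal((5, 8))
B *= 0.9 / (np.sqrt(lam) * np.linalg.norm(B, 2))      # ensure e_max < 1/lam

e_max = np.linalg.eigvalsh(B.T @ B).max()
delta = 1.0 - lam * e_max                             # modulus from (84)
assert delta > 0
# As in the proof, (1 - delta) I - lam B^T B must be positive semidefinite:
M = (1.0 - delta) * np.eye(B.shape[1]) - lam * (B.T @ B)
assert np.linalg.eigvalsh(M).min() >= -1e-12
```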

In this paper, we use the forward-backward splitting (FBS) algorithm, whose step size is governed by a Lipschitz constant of the gradient of the smooth term of the cost function. The following two lemmas concern Lipschitz continuity. Lemma 4 is part [the equivalence (i) \( \Leftrightarrow \) (vi)] of Theorem 18.15 of Ref. [1]. Our use of this result follows the reasoning of Ref. [2].
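Schematically, an FBS iteration for a cost split as h + g, with h differentiable and \( \nabla h \) being \( \rho \)-Lipschitz, looks as follows. This generic sketch (names our own) is not the specific splitting developed in the main text.

```python
def fbs(x0, grad_h, prox_g, rho, n_iter=200):
    """Generic forward-backward splitting for min_x h(x) + g(x):
    x <- prox_{gamma*g}(x - gamma * grad_h(x)), step gamma in (0, 2/rho),
    where rho is a Lipschitz constant of grad_h (cf. Lemmas 4 and 5)."""
    gamma = 1.0 / rho                 # a safe choice of step size
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - gamma * grad_h(x), gamma)
    return x

# When g(x) = lam * ||Dx||_1, prox_{gamma*g}(u) is exactly classical TV
# denoising of u with parameter gamma*lam, e.g., the tv_denoise sketch
# given after the abstract:
#   x = fbs(y, grad_h=lambda x: x - y,
#           prox_g=lambda u, t: tv_denoise(u, t * lam), rho=1.0)
```

This is how fast exact solvers for classical TV denoising plug into FBS as the backward (proximal) step.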

Lemma 4

Let \( f :\mathbb {R}^N \rightarrow \mathbb {R}\) be convex and differentiable. Then the gradient \( \nabla f \) is \( \rho \)-Lipschitz continuous if and only if \( (\rho /2) \Vert {\,\cdot \,} \Vert _2^2 - f \) is convex.

Lemma 5

Let \( y \in \mathbb {R}^N \) and \( {\lambda } > 0 \). Let \( B = C D \in \mathbb {R}^{M \times N} \) with \( B^{\mathsf {T}}\! B \preccurlyeq (1 / {\lambda } ) I \). Define \( f :\mathbb {R}^N \rightarrow \mathbb {R}\) as

$$\begin{aligned} f(x) = \tfrac{1}{2}\Vert y - x \Vert _2^2 - {\lambda } S_{ C } ( D x ) \end{aligned}$$
(87)

where \( S_C \) is the generalized Huber function (34). Then the gradient \( \nabla f \) is Lipschitz continuous with a Lipschitz constant of 1.

Proof

The proof uses Lemma 4. Since both terms in (87) are differentiable, f is differentiable. Next, we show f is convex. Using (35), we write f as

$$\begin{aligned} f(x)&= \tfrac{1}{2}\Vert y - x \Vert _2^2 - {\lambda } \min _{ v \in \mathbb {R}^{N-1} } \bigl \{ \Vert v \Vert _1 + \tfrac{1}{2}\Vert C ( D x - v) \Vert _2^2 \bigr \} \\&= \tfrac{1}{2}x^{\mathsf {T}}\! ( I - {\lambda } B^{\mathsf {T}}\! B ) x - y^{\mathsf {T}}\! x + \tfrac{1}{2}\Vert y \Vert _2^2 \\&\qquad + {\lambda } \max _{ v \in \mathbb {R}^{N-1} } \bigl \{ -\Vert v \Vert _1 - \tfrac{1}{2}\Vert C v \Vert _2^2 + v^{\mathsf {T}}\! C^{\mathsf {T}}\! B x \bigr \}. \end{aligned}$$

The first term is convex because \( B^{\mathsf {T}}\! B \preccurlyeq (1 / {\lambda } ) I \). The term inside the curly braces is affine in x (hence convex in x). Since the maximum of a family of convex functions (here indexed by v) is itself convex, the last term is convex in x; hence f, being a sum of convex functions, is convex. By Lemma 4, it remains to show \( (1/2) \Vert {\,\cdot \,} \Vert _2^2 - f \) is convex. We have

$$\begin{aligned} \tfrac{1}{2}\Vert x \Vert _2^2 - f(x)&= \tfrac{1}{2}\Vert x \Vert _2^2 - \tfrac{1}{2}\Vert y - x \Vert _2^2 + {\lambda } S_{ C } ( Dx ) \end{aligned}$$
(88)
$$\begin{aligned}&= -\tfrac{1}{2}\Vert y \Vert _2^2 + y^{\mathsf {T}}\! x + {\lambda } S_{ C } ( Dx ). \end{aligned}$$
(89)

By Proposition 3, the generalized Huber function is convex. Hence, the right-hand side is convex in x, which completes the proof. \(\square \)
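As a numerical companion to Lemma 5, the sketch below spot-checks the 1-Lipschitz property of \( \nabla f \). It assumes the gradient expression \( \nabla S_C(u) = C^{\mathsf {T}} C (u - v^*(u)) \), with \( v^* \) the inner minimizer in (34), which follows from the envelope theorem; the inner problem is again solved by ISTA, and all names are our own.

```python
import numpy as np

def soft(v, t):
    """Soft thresholding: the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def grad_S(u, C, n_iter=5000):
    """Gradient of the generalized Huber function S_C at u, assumed to be
    C^T C (u - v*) with v* the inner minimizer (envelope theorem)."""
    CtC = C.T @ C
    L = max(np.linalg.eigvalsh(CtC).max(), 1e-12)
    v = np.zeros_like(u)
    for _ in range(n_iter):                  # ISTA on the inner problem
        v = soft(v - (CtC @ (v - u)) / L, 1.0 / L)
    return CtC @ (u - v)

# Spot check: grad f is 1-Lipschitz when B = C D satisfies B^T B <= (1/lam) I.
rng = np.random.default_rng(2)
N, lam = 8, 2.0
D = np.diff(np.eye(N), axis=0)               # (N-1) x N first-order differences
C = rng.standard_normal((5, N - 1))
C *= 0.99 / (np.sqrt(lam) * np.linalg.norm(C @ D, 2))  # enforce condition on B = C D
y = rng.standard_normal(N)
grad_f = lambda x: (x - y) - lam * (D.T @ grad_S(D @ x, C))
for _ in range(20):
    a, b = rng.standard_normal(N), rng.standard_normal(N)
    # tolerance absorbs the inexact inner ISTA solves
    assert np.linalg.norm(grad_f(a) - grad_f(b)) <= np.linalg.norm(a - b) + 1e-4
```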

About this article

Cite this article

Selesnick, I., Lanza, A., Morigi, S. et al. Non-convex Total Variation Regularization for Convex Denoising of Signals. J Math Imaging Vis 62, 825–841 (2020). https://doi.org/10.1007/s10851-019-00937-5
