
Maximal Solutions of Sparse Analysis Regularization

Journal of Optimization Theory and Applications

Abstract

This paper deals with the non-uniqueness of the solutions of analysis-Lasso regularization. Most previous works in this area are concerned with the case where the solution set is a singleton, or with deriving guarantees that enforce uniqueness. Our main contribution is a geometrical interpretation of a solution with maximal analysis support: such a solution lies in the relative interior of the solution set. This result yields a way to exhibit a maximal solution using a primal-dual interior point algorithm.


Notes

  1. We return to this example in Sect. 6.

  2. A generalization of the central path and the analytic center is proposed in [21] by using the so-called concave gauge functions.

References

  1. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1999)

  2. Mallat, S.G.: A Wavelet Tour of Signal Processing, 3rd edn. Academic Press, Amsterdam (2009)

  3. Tibshirani, R.: Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B Methodol. 58(1), 267–288 (1996)

  4. Elad, M., Milanfar, P., Rubinstein, R.: Analysis versus synthesis in signal priors. Inverse Probl. 23(3), 947 (2007)

  5. Vaiter, S., Peyré, G., Dossal, C., Fadili, M.J.: Robust sparse analysis regularization. IEEE Trans. Inf. Theory 59(4), 2001–2016 (2013)

  6. Nam, S., Davies, M.E., Elad, M., Gribonval, R.: The cosparse analysis model and algorithms. Appl. Comput. Harmon. Anal. 34(1), 30–56 (2013)

  7. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60(1), 259–268 (1992)

  8. Steidl, G., Weickert, J., Brox, T., Mrázek, P., Welk, M.: On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs. SIAM J. Numer. Anal. 42(2), 686–713 (2004)

  9. Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., Knight, K.: Sparsity and smoothness via the fused Lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 67(1), 91–108 (2005)

  10. Zhang, H., Yin, W., Cheng, L.: Necessary and sufficient conditions of solution uniqueness in 1-norm minimization. J. Optim. Theory Appl. 164(1), 109–122 (2015)

  11. Zhang, H., Yan, M., Yin, W.: One condition for solution uniqueness and robustness of both l1-synthesis and l1-analysis minimizations. arXiv preprint arXiv:1304.5038 (2013)

  12. Gilbert, J.C.: On the solution uniqueness characterization in the \(\ell ^1\) norm and polyhedral gauge recovery. Technical report, INRIA Paris-Rocquencourt (2015)

  13. Tibshirani, R.J.: The lasso problem and uniqueness. Electron. J. Stat. 7, 1456–1490 (2013)

  14. Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)

  15. Mallat, S.G., Zhang, Z.: Matching pursuits with time–frequency dictionaries. IEEE Trans. Signal Process. 41(12), 3397–3415 (1993)

  16. Pati, Y.C., Rezaiifar, R., Krishnaprasad, P.S.: Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In: Conference on Signals, Systems and Computers, pp. 40–44. IEEE (1993)

  17. Needell, D., Tropp, J.A.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)

  18. Attouch, H., Cominetti, R.: \(l^p\) approximation of variational problems in \(l^1\) and \(l^\infty \). Nonlinear Anal. TMA 36(3), 373–399 (1999)

  19. Rockafellar, R.T.: Convex Analysis, vol. 28. Princeton University Press, Princeton (1996)

  20. Frisch, K.R.: The logarithmic potential method of convex programming. Technical report, University Institute of Economics, Oslo (1955)

  21. Barbara, A.: Strict quasi-concavity and the differential barrier property of gauges in linear programming. Optimization 64(12), 2649–2677 (2015)

  22. Barbara, A., Crouzeix, J.P.: Concave gauge functions and applications. Math. Methods Oper. Res. 40(1), 43–74 (1994)

  23. Roos, C., Terlaky, T., Vial, J.P.: Interior Point Methods for Mathematical Programming. Wiley, New York (2009)

  24. Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992)


Corresponding author

Correspondence to Abderrahim Jourani.

Additional information

Communicated by Asen L. Dontchev.

Appendix

In this appendix, we express some of our results in a general framework. More precisely, we consider the optimization problem

$$\begin{aligned} \min _{x\in E} \{f(x) + g(x)\} , \end{aligned}$$
(14)

where E is a Banach space and \(f, g : E\rightarrow {\mathbb {R}}\cup \{+\infty \}\) are convex lower semicontinuous functions. The dual space of E and the pairing between E and \(E^*\) will be denoted by \(E^*\) and \(\langle \cdot , \cdot \rangle \), respectively. The Fenchel subdifferential of f at \({\bar{x}}\) is defined by

$$\begin{aligned} \partial f({\bar{x}}) \mathrel {\mathop :}=\{ x^*\in E^*: \, \langle x^*, x-{\bar{x}}\rangle \le f(x) - f({\bar{x}}) \, \forall x\in E\}. \end{aligned}$$
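For instance, on \(E = {\mathbb {R}}\), this definition recovers the familiar subdifferential of the absolute value (a standard computation, recalled here only for illustration):

$$\begin{aligned} \partial |\cdot |({\bar{x}}) = \{\mathrm{sign}({\bar{x}})\} \text { if } {\bar{x}} \ne 0, \qquad \partial |\cdot |(0) = [-1,1]. \end{aligned}$$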

The following proposition characterizes the solutions of problem (14).

Proposition A.1

Let \({\bar{x}}\in E\) be a fixed solution of the problem (14) and \(x^*\in \partial g({\bar{x}})\) be such that \(-x^* \in \partial f({\bar{x}})\). Then the following assertions are equivalent:

  (1) u is a solution of the problem (14);

  (2) \(g(u) \le g({\bar{x}}) + \langle x^*, u-{\bar{x}}\rangle \) and u is a solution of the problem

    $$\begin{aligned} \min _{x\in E} \{f(x) +\langle x^*, x\rangle \}. \end{aligned}$$
    (15)

Consequently, if \(\{ x\in E : \, g(x) \le g({\bar{x}}) + \langle x^*, x-{\bar{x}}\rangle \}\) is a polyhedral set and the function f is polyhedral (a supremum of finitely many affine functions), then so is \(\underset{x \in E}{{{\mathrm{Argmin}}}}\;\{f(x)+g(x)\}\).

Proof

Since the implication \((2) \Longrightarrow (1)\) is obvious, we only establish the implication \((1) \Longrightarrow (2)\). First note that, because of our assumptions, assertion (2) is equivalent to saying that \(x^*\in \partial g(u)\) and \(-x^*\in \partial f(u)\). Thus, if u is a solution of the problem (14), we have

$$\begin{aligned} f(u)+g(u) = f({\bar{x}})+g({\bar{x}}). \end{aligned}$$
(16)

Since \(x^*\in \partial g({\bar{x}})\) and \(-x^*\in \partial f({\bar{x}})\), we have \(g(u) \ge g({\bar{x}}) + \langle x^*, u-{\bar{x}}\rangle \) and \(f(u) \ge f({\bar{x}}) - \langle x^*, u-{\bar{x}}\rangle \). Summing these two inequalities and using relation (16), we see that both must hold with equality. Substituting, for instance, \(g(u) = g({\bar{x}}) + \langle x^*, u-{\bar{x}}\rangle \) into the subgradient inequality of g at \({\bar{x}}\) gives \(\langle x^*, x-u\rangle \le g(x) - g(u)\) for all \(x\in E\), that is, \(x^*\in \partial g(u)\); the same argument yields \(-x^*\in \partial f(u)\), and the proof is completed. \(\square \)
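As an illustration of the polyhedrality statement (a standard fact, recalled here rather than taken from the paper), the \(\ell ^1\) norm on \({\mathbb {R}}^n\) is polyhedral, since

$$\begin{aligned} \Vert x\Vert _1 = \max _{s \in \{-1,1\}^n} \langle s, x\rangle \end{aligned}$$

is a supremum of finitely many linear functions. Hence, for the Lasso, where f is a positive multiple of \(\Vert \cdot \Vert _1\) and the set \(\{ x : \, g(x) \le g({\bar{x}}) + \langle x^*, x-{\bar{x}}\rangle \}\) is the affine subspace \(\{ x : \, {\varPhi }x = {\varPhi }{\bar{x}}\}\) (see Corollary A.1 below), the proposition shows that the whole solution set is polyhedral.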

A particular and interesting case is the Hilbert setting with a quadratic data-fidelity term g.

Corollary A.1

Suppose that E (resp. F) is a Hilbert space endowed with a scalar product denoted by \(\langle \cdot , \cdot \rangle \) and the associated norm \(\Vert \cdot \Vert \). Let \({\varPhi }: E\rightarrow F\) be a continuous linear operator and \(y\in F\). Define the function \(g : E \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} g(x) = {1\over 2}\Vert {\varPhi }x-y\Vert ^2. \end{aligned}$$

Let \({\bar{x}}\in E\) be a fixed solution of the problem (14) and put \(x^* = {\varPhi }^*({\varPhi }{\bar{x}}-y)\). Then the following assertions are equivalent:

  (1) u is a solution of the problem (14);

  (2) \({\varPhi }u={\varPhi }{\bar{x}}\) and u is a solution of the problem

    $$\begin{aligned} \min _{x\in E} \{f(x) +\langle x^*, x\rangle \}. \end{aligned}$$
    (17)

Consequently, each solution u of the problem (14) satisfies \({\varPhi }u={\varPhi }{\bar{x}}\), hence \(g(u)=g({\bar{x}})\), and therefore \(f(u) = f({\bar{x}})\), since the optimal values of (14) at u and \({\bar{x}}\) coincide.

Proof

With \(x^* = {\varPhi }^*({\varPhi }{\bar{x}}-y)\), a direct expansion gives

$$\begin{aligned} g(u) - g({\bar{x}}) - \langle x^*, u-{\bar{x}}\rangle = {1\over 2}\Vert {\varPhi }u-{\varPhi }{\bar{x}}\Vert ^2, \end{aligned}$$

so the inequality \(g(u) \le g({\bar{x}}) + \langle x^*, u-{\bar{x}}\rangle \) holds if and only if \({\varPhi }u={\varPhi }{\bar{x}}\). It then suffices to apply Proposition A.1. \(\square \)

The following corollary asserts that, knowing one solution of (14), we can describe the entire solution set.

Corollary A.2

Let the assumptions of Corollary A.1 be satisfied. Then

$$\begin{aligned} \underset{x \in E}{{{\mathrm{Argmin}}}}\;\{f(x)+g(x)\} = \{ x \in E: \, {\varPhi }x={\varPhi }{\bar{x}}, \, f(x) = f({\bar{x}})\}. \end{aligned}$$
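Corollary A.2 can be checked numerically on the Lasso. The minimal sketch below is illustrative code, not the authors' implementation: the duplicated column of \({\varPhi }\), the choice \(\lambda = 0.1\), and the use of an ISTA solver are all assumptions made for the example. It builds a problem whose solution set is not a singleton, computes two minimizers from different starting points, and verifies that they share the same image \({\varPhi }u\) and the same \(\ell ^1\) norm.

import numpy as np

def ista(Phi, y, lam, x0, n_iter=20000):
    """Proximal gradient (ISTA) for 0.5*||Phi x - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        z = x - step * (Phi.T @ (Phi @ x - y))                    # gradient step on g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
    return x

rng = np.random.default_rng(0)
a = rng.standard_normal(5)
Phi = np.column_stack([a, a, rng.standard_normal((5, 2))])  # duplicated first column
y = rng.standard_normal(5)
lam = 0.1                                                   # assumed regularization level

u1 = ista(Phi, y, lam, x0=np.array([1.0, 0.0, 0.0, 0.0]))
u2 = ista(Phi, y, lam, x0=np.array([0.0, 1.0, 0.0, 0.0]))

print("||u1 - u2||      =", np.linalg.norm(u1 - u2))          # may be > 0: non-unique
print("Phi u1 == Phi u2 :", np.allclose(Phi @ u1, Phi @ u2))  # invariant of Corollary A.2
print("l1 norms equal   :", np.isclose(np.abs(u1).sum(), np.abs(u2).sum()))

On this example, the two runs typically return different vectors, since the duplicated column makes the minimizer non-unique, while both printed invariants hold up to numerical tolerance, as Corollary A.2 predicts.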


Cite this article

Barbara, A., Jourani, A. & Vaiter, S. Maximal Solutions of Sparse Analysis Regularization. J Optim Theory Appl 180, 374–396 (2019). https://doi.org/10.1007/s10957-018-1385-3
