
New Insights on the Optimality Conditions of the \(\ell _2-\ell _0\) Minimization Problem

  • Emmanuel Soubies
  • Laure Blanc-Féraud
  • Gilles Aubert
Article

Abstract

This paper is devoted to the analysis of necessary (but not sufficient) optimality conditions for the \(\ell _0\)-regularized least-squares minimization problem. Such conditions are at the root of the plethora of algorithms that have been designed to cope with this NP-hard problem. Indeed, since global optimality is, in general, intractable, these algorithms only ensure convergence to suboptimal points that satisfy some necessary optimality condition. The restrictiveness of these conditions is thus directly related to the performance of the algorithms. Within this context, our first goal is to provide a comprehensive review of commonly used necessary optimality conditions as well as known relationships between them. We then complete this hierarchy of conditions by proving new inclusion properties between the sets of candidate solutions associated with them. Moreover, we go one step further by providing a quantitative analysis of these sets. Finally, we report the results of a numerical experiment dedicated to the comparison of several algorithms with different optimality guarantees. In particular, this illustrates the fact that the performance of an algorithm is related to the restrictiveness of the optimality condition satisfied by the point it converges to.
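For concreteness, the problem referred to throughout is the \(\ell _0\)-regularized least-squares problem, which, in standard notation, reads

\[
\hat{x} \in \arg\min_{x \in \mathbb{R}^N} \; \frac{1}{2} \Vert Ax - y \Vert _2^2 + \lambda \Vert x \Vert _0,
\]

where \(A \in \mathbb{R}^{M \times N}\) is the observation matrix, \(y \in \mathbb{R}^M\) the data, \(\Vert x \Vert _0\) the number of nonzero entries of \(x\), and \(\lambda > 0\) the regularization parameter.

As a minimal sketch of the abstract's point that practical algorithms only reach points satisfying a necessary optimality condition, the following NumPy snippet implements iterative hard thresholding, a proximal-gradient method for this problem whose fixed points satisfy such a condition without being globally optimal in general. The function name, step-size rule, iteration budget, and synthetic data are illustrative assumptions, not the experimental setup of the paper.

    import numpy as np

    def iht(A, y, lam, n_iter=500):
        """Iterative hard thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_0."""
        # Step size below 1/L, with L = ||A||_2^2 the Lipschitz constant
        # of the gradient of the quadratic data term.
        step = 0.99 / np.linalg.norm(A, 2) ** 2
        thresh = np.sqrt(2.0 * lam * step)  # hard-threshold level of the l0 prox
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - step * A.T @ (A @ x - y)          # gradient step on the data term
            x = np.where(np.abs(z) > thresh, z, 0.0)  # prox of (step*lam)*||.||_0
        return x  # a fixed point satisfies a necessary, not sufficient, condition

    # Illustrative use on synthetic data (all values are assumptions).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 256))
    x_true = np.zeros(256)
    x_true[rng.choice(256, size=5, replace=False)] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(64)
    x_hat = iht(A, y, lam=0.05)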

Keywords

\(\ell _0\)-regularized least-squares · CEL0 · Exact relaxation · Minimizers · Optimality conditions



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. IRIT, Université de Toulouse, CNRS, Toulouse, France
  2. Université Côte d’Azur, CNRS, INRIA, I3S, Sophia-Antipolis, France
  3. Université Côte d’Azur, UNS, LJAD, Nice, France
