
Doubly iteratively reweighted algorithm for constrained compressed sensing models


Abstract

We propose a new algorithmic framework for constrained compressed sensing models whose objectives are nonconvex sparsity-inducing regularizers, including the log-penalty function, and whose constraints involve nonconvex loss functions such as the Cauchy loss function and the Tukey biweight loss function. Our framework employs iteratively reweighted \(\ell _1\) and \(\ell _2\) schemes to construct subproblems that can be efficiently solved by well-developed solvers for basis pursuit denoising such as SPGL1 by van den Berg and Friedlander (SIAM J Sci Comput 31:890–912, 2008). We propose a new termination criterion for the subproblem solvers that allows them to return a possibly infeasible solution, from which we construct a feasible point satisfying a suitable descent condition. This feasible point construction step is the key to establishing the well-definedness of our proposed algorithm, and we prove that, under suitable assumptions, any accumulation point of the sequence of feasible points is a stationary point of the constrained compressed sensing model. Finally, we numerically compare our algorithm (with subproblems solved by SPGL1 or the alternating direction method of multipliers) against SCP\(_\textrm{ls}\) of Yu et al. (SIAM J Optim 31:2024–2054, 2021) on constrained compressed sensing models with the log-penalty function as the objective and the Cauchy loss function in the constraint, for badly scaled measurement matrices. Our computational results show that our approaches return solutions with better recovery errors and are always faster.
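
To make the doubly reweighted scheme concrete, the following MATLAB sketch shows one outer iteration under illustrative assumptions that go beyond what the abstract specifies: the objective is the log-penalty \(\sum _i\log (1+|x_i|/\varepsilon )\), reweighted in \(\ell _1\) as in [13]; the constraint uses the Cauchy loss \(\phi (s)=\log (1+s/\delta ^2)\) on the squared residuals, reweighted in \(\ell _2\) via \(\phi '\); and the weighted basis pursuit denoising subproblem is handed to the MATLAB interface of SPGL1 (spg_bpdn and spgSetParms with its weights option). The variables A, b, x_k and the parameters eps1, delta, sigma are placeholders; the paper's actual subproblems, scalings and termination tests differ in details.

    % One outer iteration of a doubly reweighted scheme (illustrative sketch only).
    eps1 = 1e-2; delta = 1; sigma = 1e-1;          % placeholder model parameters
    w     = 1 ./ (abs(x_k) + eps1);                % IRL1 weights from the log-penalty
    theta = 1 ./ (delta^2 + (A*x_k - b).^2);       % IRL2 weights: phi'((a_i'*x - b_i)^2)
    Abar  = diag(sqrt(theta)) * A;                 % reweighted measurement matrix
    bbar  = sqrt(theta) .* b;                      % reweighted observations
    opts  = spgSetParms('weights', w, 'verbosity', 0);
    x_next = spg_bpdn(Abar, bbar, sigma, opts);    % weighted BPDN subproblem via SPGL1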


Data availability

The codes for generating the random data and implementing the algorithms in the numerical section are available at https://www.polyu.edu.hk/ama/profile/pong/, the webpage of the corresponding author.

Notes

  1. In particular, our setting does not cover the cases where \({\lim _{t\downarrow {0}}\psi '}(t) = \infty \) or \({\lim _{t\downarrow {0}}\phi '}(t) = \infty \), which have also been extensively studied in the literature; see, for example, [18, 29, 42].

  2. As we shall see later, their continuities are important in establishing convergence of our proposed algorithm; see the proof of Theorem 3.1(v) below.

  3. The iteratively reweighted schemes on \(l_p\) also involve an additional smoothing parameter; see, for example, [18, 29, 42].

  4. Note that \({{\overline{\phi }}} > 0\) because \(\lim _{t\downarrow 0}\phi '(t) > 0\) and \(\phi (0)= 0\) in view of Assumption 1.1(ii).

  5. One possible choice is to set \(\rho ={\bar{L}}\beta \), where \({\bar{L}}:=\max \limits _{i}\{\phi '_+((a^T_ix^k - b_i)^2)\}\lambda _{\max }(A^TA)\). With this choice of \({{\bar{L}}}\), one can see from the definition of \({\bar{A}}\) in (4.1) that \({\bar{L}}\ge {\lambda _{\max }({\bar{A}}^T{\bar{A}})}\), and thus \(\rho I-\beta {\bar{A}}^T{\bar{A}}=\beta ({\bar{L}}I-{\bar{A}}^T{\bar{A}})\succeq {0}\).
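
     For concreteness, this choice can be computed along the following lines in MATLAB, where phi_prime_plus, x_k and beta are hypothetical names for \(\phi '_+\), \(x^k\) and \(\beta \) above, and the Cauchy loss is used as an illustrative instance of \(\phi \):

     % Sketch of the choice rho = Lbar*beta from this footnote (hypothetical names).
     phi_prime_plus = @(s) 1 ./ (delta^2 + s);              % phi'_+ for the Cauchy loss (illustrative)
     r2   = (A*x_k - b).^2;                                 % squared residuals (a_i^T x^k - b_i)^2
     Lbar = max(phi_prime_plus(r2)) * eigs(A'*A, 1, 'LM');  % max_i phi'_+(...) * lambda_max(A^T A)
     rho  = Lbar * beta;                                    % ensures rho*I - beta*Abar'*Abar is PSD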

  6. This means that while \({{\tilde{\varphi }}}_l\) and \({{\tilde{\varphi }}}_l'\) will be generated as in the default setting of SPGL1, we may need to run the SPG method for more iterations when approximately solving (4.10) in order to obtain a candidate that satisfies our inexact criteria (3.6), (3.7) and (3.8). For the validity of our subsequent arguments, we emphasize that this more “accurate” solution will not be used to construct \({{\tilde{\varphi }}}_l\) and \({{\tilde{\varphi }}}_l'\).

  7. The nonemptiness follows in particular from \(\Vert {{\bar{b}}}\Vert > 0\); see the discussion following (4.8).

  8. In our numerical experiments, \(L=\lambda _{\max }(AA^{T})\) is computed using the following MATLAB commands: if m > 2000, opts.issym = 1; L = eigs(A*A', 1, 'LM', opts); else, L = norm(A*A'); end

  9. It is indeed not known whether \(\lim _{k\rightarrow \infty }\Vert x^{k+1}-x^k\Vert =0\) for the sequence \(\{x^k\}\) generated by these algorithms. We use this criterion as a heuristic and it appears to work well.
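
     In code, this heuristic takes a form like the following sketch, where tol and the normalization by max(1, \(\Vert x^k\Vert \)) are our own illustrative choices rather than the paper's exact test:

     % Heuristic stopping rule based on successive changes (cf. this footnote);
     % assumed to sit inside the outer iteration loop.
     tol = 1e-6;                                    % illustrative tolerance
     if norm(x_next - x_k) <= tol * max(1, norm(x_k))
         break;                                     % iterates have stabilized; exit loop
     end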

  10. The codes were downloaded from https://github.com/mpf/spgl1.

  11. Nevertheless, there is no guarantee that the \(\{x^k\}\) thus generated will cluster at a stationary point of (5.1). We include this version in our experiment as a demonstration of how our framework can be used when only a black-box subproblem solver is available.

  12. For fair comparison, we report the total number of inner iterations for \(\textbf{IR}^{\ell _1}_{\ell _2}\) \(_\text {ADMM}\) and \(v\textbf{IR}^{\ell _1}_{\ell _2}\) \(_\text {SPGL1}\), i.e., the total number of iterations used by the subproblem solvers to solve the subproblems.

References

  1. Attouch, H., Bolte, J.: On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Math. Program. 116, 5–16 (2009)

  2. Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Łojasiewicz inequality. Math. Oper. Res. 35, 438–457 (2010)

  3. Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math. Program. 137, 91–129 (2013)

  4. Aravkin, A.Y., Burke, J.V., Drusvyatskiy, D., Friedlander, M.P., Roy, S.: Level-set methods for convex optimization. Math. Program. 174, 359–390 (2019)

  5. Barron, J. T.: A general and adaptive robust loss function. In: IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 4331–4339 (2019)

  6. Beck, A.: First-order Methods in Optimization. SIAM (2017)

  7. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)

  8. Becker, S.R., Candès, E.J., Grant, M.C.: Templates for convex cone problems with applications to sparse signal recovery. Math. Program. Comput. 3, 165–218 (2011)

  9. Birgin, E.G., Martínez, J.M., Raydan, M.: Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim. 10, 1196–1211 (2000)

  10. Bolte, J., Pauwels, E.: Majorization-minimization procedures and convergence of SQP methods for semi-algebraic and tame programs. Math. Oper. Res. 41, 442–465 (2016)

  11. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3, 1–122 (2010)

  12. Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51, 4203–4215 (2005)

  13. Candès, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted \(\ell _1\) minimization. J. Fourier Anal. Appl. 14, 877–905 (2008)

  14. Carrillo, R.E., Barner, K.E., Aysal, T.C.: Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise. IEEE J. Sel. Top. Signal Process. 4, 392–408 (2010)

  15. Carrillo, R.E., Ramirez, A.B., Arce, G.R., Barner, K.E., Sadler, B.M.: Robust compressive sensing of sparse signals: a review. EURASIP J. Adv. Signal Process. 108, 1–17 (2016)

  16. Charbonnier, P., Blanc-Féraud, L., Aubert, G., Barlaud, M.: Deterministic edge-preserving regularization in computed imaging. IEEE Trans. Image Process. 6, 298–311 (1997)

  17. Chartrand, R.: Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 14, 707–710 (2007)

  18. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: IEEE Int. Conf. Acoust. Speech Signal Process. pp. 3869–3872 (2008)

  19. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)

  20. Dennis, J.E., Jr., Welsch, R.E.: Techniques for nonlinear least squares and robust regression. Commun. Stat. Simul. Comput. 7, 345–359 (1978)

  21. Fazel, M., Pong, T.K., Sun, D., Tseng, P.: Hankel matrix rank minimization with applications to system identification and realization. SIAM J. Matrix Anal. Appl. 34, 946–977 (2013)

  22. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Springer, New York (2013)

  23. Geman, S., McClure, D. E.: Bayesian image analysis: An application to single photon emission tomography. Proc. Stat. Comput. Sect. Amer. Statist. Assoc. pp. 12–18 (1985)

  24. Hiriart-Urruty, J.-B., Lemaréchal, C.: Fundamentals of Convex Analysis. Springer, New York (2001)

  25. Huber, P.J.: Robust estimation of a location parameter. Ann. Math. Statist. 35, 73–101 (1964)

  26. Kassam, S.A., Poor, H.V.: Robust techniques for signal processing: a survey. Proc. IEEE 73, 433–481 (1985)

  27. Lange, K.: MM Optimization Algorithms. SIAM (2016)

  28. Le Thi, H.A., Tao, P.D.: DC programming and DCA: thirty years of developments. Math. Program. 169, 5–68 (2018)

  29. Lu, Z.: Iterative reweighted minimization methods for \(l_p\) regularized unconstrained nonlinear programming. Math. Program. 147, 277–307 (2014)

  30. Lu, Z., Zhang, Y.: An augmented Lagrangian approach for sparse principal component analysis. Math. Program. 135, 149–193 (2012)

  31. Mosteller, F., Tukey, J.W.: Data Analysis and Regression: A Second Course in Statistics. Addison-Wesley, Sydney (1977)

  32. Nesterov, Y.: A method for solving a convex programming problem with convergence rate \(O(1/k^2)\). Soviet Math. Dokl. 27, 372–376 (1983)

  33. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, Boston (2004)

  34. Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103, 127–152 (2005)

  35. Nikolova, M., Ng, M.K., Zhang, S., Ching, W.-K.: Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 1, 2–25 (2008)

  36. Riani, M., Cerioli, A., Atkinson, A.C., Perrotta, D.: Monitoring robust regression. Electron. J. Statist. 8, 646–677 (2014)

  37. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton, NJ (1970)

  38. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)

  39. Tseng, P., Yun, S.: A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. 117, 387–423 (2009)

  40. van den Berg, E., Friedlander, M.P.: Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31, 890–912 (2008)

  41. Villa, S., Salzo, S., Baldassarre, L., Verri, A.: Accelerated and inexact forward-backward algorithms. SIAM J. Optim. 23, 1607–1633 (2013)

  42. Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. J. Glob. Optim. 81, 717–748 (2021)

  43. Yang, J., Zhang, Y.: Alternating direction algorithms for \(\ell _1\)-problems in compressive sensing. SIAM J. Sci. Comput. 33, 250–278 (2011)

  44. Yang, L., Toh, K.-C.: Bregman proximal point algorithm revisited: A new inexact version and its inertial variant. SIAM J. Optim. 32, 1523–1554 (2022)

  45. Yang, L., Toh, K.-C.: An inexact Bregman proximal gradient method and its inertial variant. Preprint (2021). Available at arXiv: https://arxiv.org/abs/2109.05690

  46. Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex \(\ell _p\) ball projection: algorithm and analysis. J. Mach. Learn. Res. 23, 1–31 (2022)

  47. Yu, P., Pong, T.K.: Iteratively reweighted \(\ell _1\) algorithms with extrapolation. Comput. Optim. Appl. 73, 353–386 (2019)

  48. Yu, P., Pong, T.K., Lu, Z.: Convergence rate analysis of a sequential convex programming method with line search for a class of constrained difference-of-convex optimization problems. SIAM J. Optim. 31, 2024–2054 (2021)

  49. Zoubir, A.M., Koivunen, V., Chakhchoukh, Y., Muma, M.: Robust estimation in signal processing: a tutorial-style treatment of fundamental concepts. IEEE Signal Process. Mag. 29, 61–80 (2012)

Author information

Corresponding author

Correspondence to Ting Kei Pong.

Ethics declarations

Conflicts of interest

The corresponding author is an editorial board member of this journal.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Ting Kei Pong was supported in part by Hong Kong Research Grants Council PolyU153000/20p.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Sun, S., Pong, T.K. Doubly iteratively reweighted algorithm for constrained compressed sensing models. Comput Optim Appl 85, 583–619 (2023). https://doi.org/10.1007/s10589-023-00468-1

