
Accelerated Smoothing Hard Thresholding Algorithms for \(\ell _0\) Regularized Nonsmooth Convex Regression Problem

Journal of Scientific Computing

Abstract

We study a class of constrained sparse optimization problems with a cardinality penalty, where the feasible set is defined by a box constraint and the loss function is convex but not necessarily smooth. First, we propose an accelerated smoothing hard thresholding (ASHT) algorithm for solving such problems, which combines a smoothing approximation, an extrapolation technique, and the iterative hard thresholding method. The extrapolation coefficients can be chosen to satisfy \(\sup _k \beta _k=1\). We discuss the convergence of the ASHT algorithm under different choices of extrapolation coefficients, and give a sufficient condition ensuring that any accumulation point of the iterates is a local minimizer of the original problem. For a class of special updating schemes for the extrapolation coefficients, we show that the iterates converge to a local minimizer of the problem, and that the loss and objective function values converge at the rate \(o(\ln ^{\sigma } k/k)\) with \(\sigma \in (1/2, 1]\). Second, we consider the case in which the loss function is Lipschitz continuously differentiable, and develop an accelerated hard thresholding (AHT) algorithm for it. We prove that the iterates of the AHT algorithm converge to a local minimizer of the problem that satisfies a desirable lower bound property, and that the loss and objective function values converge at the rate \(o(k^{-2})\). Finally, numerical examples are presented to illustrate the theoretical results.
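To make the scheme concrete, below is a minimal sketch of an ASHT-style iteration in Python, written purely to illustrate how smoothing, extrapolation, and hard thresholding fit together; it is not the authors' exact method. The helper names (`grad_smooth`, `hard_threshold_box`), the extrapolation rule \(\beta_k = (k-1)/(k+2)\), the smoothing schedule \(\mu_k = 1/k\), and the step size \(\mu_k/L_0\) are all assumptions chosen for the sketch.

```python
import numpy as np

def hard_threshold_box(z, lam, step, lo, hi):
    """Coordinatewise minimizer of (x - z)^2/(2*step) + lam*1{x != 0}
    over {0} U [lo, hi]; assumes lo <= 0 <= hi so that x = 0 is feasible."""
    x_box = np.clip(z, lo, hi)                       # best nonzero candidate
    keep = (x_box - z) ** 2 / (2.0 * step) + lam < z ** 2 / (2.0 * step)
    return np.where(keep, x_box, 0.0)

def asht(grad_smooth, n, lam, L0, max_iter=500, lo=-10.0, hi=10.0):
    """Illustrative accelerated smoothing hard-thresholding loop.
    grad_smooth(x, mu) returns the gradient of a smoothing approximation
    f_mu of the nonsmooth convex loss f, with mu driven to 0."""
    x_prev = x = np.zeros(n)
    for k in range(1, max_iter + 1):
        beta = (k - 1) / (k + 2)           # extrapolation coefficient (assumed schedule)
        mu = 1.0 / k                       # smoothing parameter, mu_k -> 0
        step = mu / L0                     # step size; L0/mu bounds the smoothed gradient's Lipschitz constant
        y = x + beta * (x - x_prev)        # extrapolation point
        z = y - step * grad_smooth(y, mu)  # gradient step on the smoothed loss
        x_prev, x = x, hard_threshold_box(z, lam, step, lo, hi)
    return x

# Possible use: Huber-type smoothing of f(x) = ||A x - b||_1 (illustrative choice),
# whose smoothed gradient is Lipschitz with constant ||A||_2^2 / mu.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
g = lambda x, mu: A.T @ np.clip((A @ x - b) / mu, -1.0, 1.0)
x_hat = asht(g, n=50, lam=0.1, L0=np.linalg.norm(A, 2) ** 2)
```

For a Lipschitz continuously differentiable loss, the same loop with the smoothing removed (a gradient that does not depend on \(\mu\) and a fixed step size \(1/L_0\)) gives the flavor of the AHT variant.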



Data Availability

The datasets generated during the current study are available from the corresponding author on reasonable request.


Funding

This work is supported by the National Natural Science Foundation of China grants (No. 12271127, 62176073), the National Key Research and Development Program of China (No. 2021YFA1003500) and the Fundamental Research Funds for the Central Universities (No. 2022FRFK0600XX).

Author information


Corresponding author

Correspondence to Wei Bian.

Ethics declarations

Conflict of interest

The authors have not disclosed any competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Bian, W., Wu, F. Accelerated Smoothing Hard Thresholding Algorithms for \(\ell _0\) Regularized Nonsmooth Convex Regression Problem. J Sci Comput 96, 33 (2023). https://doi.org/10.1007/s10915-023-02249-8

