
Smoothing Accelerated Proximal Gradient Method with Fast Convergence Rate for Nonsmooth Convex Optimization Beyond Differentiability

Journal of Optimization Theory and Applications

Abstract

We propose a smoothing accelerated proximal gradient (SAPG) method with a fast convergence rate for finding a minimizer of a decomposable nonsmooth convex function over a closed convex set. The proposed algorithm combines a smoothing method with the proximal gradient algorithm, using the extrapolation coefficient \(\frac{k-1}{k+\alpha -1}\) with \(\alpha > 3\). The updating rule for the smoothing parameter \(\mu _k\) is carefully designed and guarantees a global convergence rate of \(o(\ln ^{\sigma }k/k)\), with \(\sigma \in (\frac{1}{2},1]\), on the objective function values. Moreover, we prove that the sequence of iterates converges to an optimal solution of the problem. We then introduce an error term into the SAPG algorithm to obtain an inexact smoothing accelerated proximal gradient algorithm, which achieves the same convergence results under a summability condition on the errors. Finally, numerical experiments demonstrate the effectiveness and efficiency of the proposed algorithm.
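To make the iteration concrete, the following is a minimal Python sketch of a smoothing accelerated proximal gradient loop of the kind the abstract describes. Only the extrapolation coefficient \(\frac{k-1}{k+\alpha -1}\) with \(\alpha > 3\) is taken from the abstract; the Huber smoothing function, the update \(\mu _k = \mu _0/(k+1)\), and the step-size rule are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def huber_grad(r, mu):
    """Gradient of the Huber smoothing of the absolute value, elementwise."""
    return np.where(np.abs(r) <= mu, r / mu, np.sign(r))

def sapg_sketch(x0, project, grad_smooth, L0, alpha=4.0, mu0=1.0, iters=500):
    """Minimal sketch of a smoothing accelerated proximal gradient loop.

    The extrapolation coefficient (k-1)/(k+alpha-1), alpha > 3, follows the
    abstract. The smoothing update mu_k = mu0/(k+1) and the step size
    mu_k/L0 (i.e., 1/L(mu_k) with L(mu) = L0/mu, the usual Lipschitz bound
    for smoothing functions) are hypothetical choices for illustration.
    """
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, iters + 1):
        beta = (k - 1.0) / (k + alpha - 1.0)   # extrapolation coefficient
        y = x + beta * (x - x_prev)            # extrapolated point
        mu_k = mu0 / (k + 1)                   # hypothetical smoothing update
        step = mu_k / L0                       # step size 1/L(mu_k)
        # Proximal step for the indicator of the feasible set = projection.
        x_prev, x = x, project(y - step * grad_smooth(y, mu_k))
    return x
```

For instance, to minimize \(\Vert Ax-b\Vert _1\) over the box \([-1,1]^n\), one could call this with `project = lambda z: np.clip(z, -1.0, 1.0)`, `grad_smooth = lambda y, mu: A.T @ huber_grad(A @ y - b, mu)`, and `L0` an upper bound on \(\Vert A\Vert ^2\).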



Acknowledgements

This work was supported by National Natural Science Foundation of China grants (Nos. 12271127 and 62176073), the National Key Research and Development Program of China (No. 2021YFA1003500), and the Fundamental Research Funds for the Central Universities (No. 2022FRFK0600XX). The authors are grateful to the editor and the two anonymous reviewers for their valuable comments and suggestions.

Author information


Corresponding author

Correspondence to Wei Bian.

Additional information

Communicated by Radu Ioan Boţ.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wu, F., Bian, W. Smoothing Accelerated Proximal Gradient Method with Fast Convergence Rate for Nonsmooth Convex Optimization Beyond Differentiability. J Optim Theory Appl 197, 539–572 (2023). https://doi.org/10.1007/s10957-023-02176-6

