
Convergence and rate analysis of a proximal linearized ADMM for nonconvex nonsmooth optimization


Abstract

In this paper, we consider a proximal linearized alternating direction method of multipliers (PL-ADMM) for solving linearly constrained nonconvex and possibly nonsmooth optimization problems. The algorithm is generalized by using variable-metric proximal terms in the primal updates and an over-relaxation step in the multiplier update. Extended results based on the augmented Lagrangian, including a subgradient bound, limiting continuity, and descent and monotonicity properties, are established. We prove that the PL-ADMM sequence is bounded. Under the powerful Kurdyka-Łojasiewicz inequality we show that the PL-ADMM sequence has finite length and thus converges, and we derive its convergence rates.
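The abstract does not spell out the updates, so the following is a minimal numerical sketch of a proximal linearized ADMM of the kind described, for minimizing f(x) + g(y) subject to Ax + By = b. All names and parameters here (pl_admm, prox_f, prox_g, the scalar step sizes mu and nu that stand in for the paper's variable-metric proximal terms, and the default values) are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1; used below as an example prox_f.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pl_admm(A, B, b, prox_f, prox_g, rho=1.0, gamma=1.0, iters=500):
    # Sketch (not the paper's exact scheme) of a proximal linearized
    # ADMM for   min_{x,y} f(x) + g(y)   s.t.   A x + B y = b.
    # Each primal step linearizes the smooth (multiplier + penalty)
    # part of the augmented Lagrangian at the current iterate and
    # takes one prox step; the dual step is over-relaxed by gamma,
    # assumed to lie in (0, 2).
    m, n = A.shape
    p = B.shape[1]
    # Scalar step sizes exceeding the Lipschitz constants of the
    # linearized terms; they stand in for variable-metric prox terms.
    mu = 1.1 * rho * np.linalg.norm(A, 2) ** 2
    nu = 1.1 * rho * np.linalg.norm(B, 2) ** 2
    x, y, lam = np.zeros(n), np.zeros(p), np.zeros(m)
    for _ in range(iters):
        # x-step: gradient of <lam, Ax+By-b> + (rho/2)||Ax+By-b||^2
        # in x, followed by a prox step on f
        grad_x = A.T @ (lam + rho * (A @ x + B @ y - b))
        x = prox_f(x - grad_x / mu, 1.0 / mu)
        # y-step: same linearization, now at the updated x
        grad_y = B.T @ (lam + rho * (A @ x + B @ y - b))
        y = prox_g(y - grad_y / nu, 1.0 / nu)
        # over-relaxed multiplier update
        lam = lam + gamma * rho * (A @ x + B @ y - b)
    return x, y, lam
```

For instance, passing prox_f = soft_threshold together with the prox of a quadratic g recovers a standard ℓ1-regularized test instance.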
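For context, the Kurdyka-Łojasiewicz inequality invoked above can be sketched as follows; the definition is the standard one from the KL literature, and the rate regimes listed are those such analyses typically yield, not figures quoted from this paper.

```latex
% KL property: a proper lower semicontinuous f has the KL property at
% \bar x if there exist \eta > 0, a neighborhood U of \bar x, and a
% concave desingularizing function \varphi such that
\[
  \varphi'\bigl(f(x)-f(\bar x)\bigr)\,
  \operatorname{dist}\bigl(0,\partial f(x)\bigr) \;\ge\; 1,
  \qquad \varphi(s) = c\,s^{1-\theta},\quad c>0,\ \theta\in[0,1),
\]
% for all x in U with f(\bar x) < f(x) < f(\bar x) + \eta.
% Standard rate regimes for KL-based descent schemes:
%   \theta = 0           : convergence in finitely many iterations,
%   \theta \in (0, 1/2]  : linear (geometric) rate,
%   \theta \in (1/2, 1)  : sublinear rate O(k^{-(1-\theta)/(2\theta-1)}).
```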



Author information

Correspondence to Maryam Yashtini.


About this article

Cite this article

Yashtini, M. Convergence and rate analysis of a proximal linearized ADMM for nonconvex nonsmooth optimization. J Glob Optim 84, 913–939 (2022). https://doi.org/10.1007/s10898-022-01174-8

