
Convergence study of indefinite proximal ADMM with a relaxation factor

Computational Optimization and Applications

Abstract

The alternating direction method of multipliers (ADMM) is widely used to solve separable convex programming problems. At each iteration, the classical ADMM solves its subproblems exactly. For many problems arising from practical applications, however, obtaining the exact solution of a subproblem is impossible or too expensive. To overcome this, a special proximal term is added to ease the solvability of the subproblem. In the literature, this proximal term can be relaxed to be indefinite while retaining a convergence guarantee; the relaxation permits larger step sizes for solving the subproblem, which in turn accelerates the method. A large value of the relaxation factor introduced in the dual step of ADMM also plays a vital role in accelerating its performance. However, it has remained unclear whether these two acceleration strategies can be used simultaneously without restricting the penalty parameter. In this paper, we answer this question affirmatively. We conduct a rigorous convergence analysis of the indefinite proximal ADMM with a relaxation factor (IP-ADMM\(_{r}\)), reveal the relationship between the parameter in the indefinite proximal term and the relaxation factor that guarantees global convergence, and establish the worst-case convergence rate in the ergodic sense. Finally, numerical results on basis pursuit and box-constrained total variation-based denoising problems are presented to verify the efficiency of IP-ADMM\(_{r}\).
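
To make the two acceleration strategies concrete, the following is a minimal sketch of a proximal ADMM iteration that combines a linearizing, possibly indefinite proximal term with a relaxation factor in the dual update, applied to a 1D box-constrained total variation denoising model in the spirit of the paper's second experiment. The reformulation, the proximal matrix \(P = \tau r I - \beta D^{\top}D\), and all parameter values (\(\tau\), \(\gamma\), \(\beta\), \(\mu\)) are illustrative assumptions; the admissible \((\tau,\gamma)\) region and the exact experimental setup are those established in the paper and are not reproduced here.

```python
import numpy as np


def soft_threshold(v, t):
    """Component-wise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def ip_admm_r_tv1d(f, mu=0.5, beta=1.0, tau=0.9, gamma=1.5, iters=300):
    """Sketch of an indefinite proximal ADMM with relaxation factor gamma for
        min_{0 <= u <= 1}  0.5*||u - f||^2 + mu*||D u||_1,
    reformulated as
        min  0.5*||u - f||^2 + I_[0,1](u) + mu*||y||_1   s.t.   D u - y = 0.
    The u-subproblem is linearized with P = tau*r*I - beta*D^T*D, which may be
    indefinite when tau < 1.  All parameter values here are illustrative only.
    """
    n = f.size
    # 1D forward-difference operator D of shape (n-1, n); ||D^T D|| < 4.
    D = np.diff(np.eye(n), axis=0)
    r = 4.0 * beta                     # so that r*I - beta*D^T*D is positive definite
    u = np.clip(f, 0.0, 1.0)
    y = D @ u
    lam = np.zeros(n - 1)
    for _ in range(iters):
        # u-step: with the proximal term P the quadratic has Hessian (1 + tau*r)*I,
        # so the box-constrained minimizer is a clipped explicit formula.
        c = y + lam / beta
        u_free = (f + tau * r * u - beta * (D.T @ (D @ u - c))) / (1.0 + tau * r)
        u = np.clip(u_free, 0.0, 1.0)
        # y-step: solved exactly by soft-thresholding.
        y = soft_threshold(D @ u - lam / beta, mu / beta)
        # dual step with relaxation factor gamma.
        lam = lam - gamma * beta * (D @ u - y)
    return u


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.repeat([0.2, 0.8, 0.4], 50)        # piecewise-constant signal in [0, 1]
    noisy = np.clip(truth + 0.05 * rng.standard_normal(truth.size), 0.0, 1.0)
    denoised = ip_admm_r_tv1d(noisy)
    print("noisy error   :", np.linalg.norm(noisy - truth))
    print("denoised error:", np.linalg.norm(denoised - truth))
```

With \(\tau < 1\) the matrix \(P\) can fail to be positive semidefinite, yet the linearized u-step still reduces to a projection onto the box, and \(\gamma\) simply scales the dual update; how large \(\gamma\) may be taken for a given \(\tau\) is exactly the question the paper's analysis settles.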




Acknowledgements

The author is grateful to the anonymous referees and the associate editor for their valuable comments and suggestions, which have helped to improve the presentation of this paper.

Author information

Corresponding author

Correspondence to Min Tao.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Min Tao was supported partially by National Key Research and Development Program of China (No. 2018AAA0101100), the Natural Science Foundation of China (No. 11971228) and the Jiangsu Provincial National Natural Science Foundation of China (No. BK20181257).


Cite this article

Tao, M. Convergence study of indefinite proximal ADMM with a relaxation factor. Comput Optim Appl 77, 91–123 (2020). https://doi.org/10.1007/s10589-020-00206-x
