
On the equivalence of inexact proximal ALM and ADMM for a class of convex composite programming

  • Full Length Paper
  • Series A
  • Published in Mathematical Programming

Abstract

In this paper, we show that for a class of linearly constrained convex composite optimization problems, an (inexact) symmetric Gauss–Seidel based majorized multi-block proximal alternating direction method of multipliers (ADMM) is equivalent to an inexact proximal augmented Lagrangian method. This equivalence not only provides new perspectives for understanding some ADMM-type algorithms but also supplies meaningful guidelines on implementing them to achieve better computational efficiency. Even for the two-block case, a by-product of this equivalence is the convergence of the whole sequence generated by the classic ADMM with a step-length that exceeds the conventional upper bound of \((1+\sqrt{5})/2\), if one part of the objective is linear. This is exactly the problem setting in which the very first convergence analysis of ADMM was conducted by Gabay and Mercier (Comput Math Appl 2(1):17–40, 1976), but, even under notably stronger assumptions, only the convergence of the primal sequence was known. A collection of illustrative examples is provided to demonstrate the breadth of applications for which our results can be used. Numerical experiments on solving a large number of linear and convex quadratic semidefinite programming problems are conducted to illustrate how the theoretical results established here can lead to improvements in the corresponding practical implementations.
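
The two-block by-product mentioned above concerns the classic ADMM with an enlarged dual step-length when one part of the objective is linear. The following minimal sketch illustrates that setting numerically; it is not the sGS-based multi-block method analyzed in the paper, and the toy problem, the penalty parameter sigma, the step-length tau, and all variable names are assumptions chosen purely for illustration rather than the paper's experiments.

```python
import numpy as np

# Toy instance:  minimize  0.5*||x - d||^2 + <c, z>   subject to  x + z = b,
# i.e. a two-block problem whose second objective part is linear.
rng = np.random.default_rng(0)
n = 50
d, c, b = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)

sigma = 1.0          # penalty parameter of the augmented Lagrangian
tau = 1.9            # dual step-length, beyond (1 + sqrt(5))/2 ~ 1.618
x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)      # scaled multiplier y / sigma

for _ in range(300):
    # x-step: argmin_x 0.5*||x - d||^2 + (sigma/2)*||x + z - b + u||^2
    x = (d + sigma * (b - z - u)) / (1.0 + sigma)
    # z-step: argmin_z <c, z> + (sigma/2)*||x + z - b + u||^2 (closed form, linear in z)
    z = b - x - u - c / sigma
    # multiplier update with step-length tau
    u = u + tau * (x + z - b)

x_star, z_star = d + c, b - d - c    # solution of the toy instance, from its KKT conditions
print(np.linalg.norm(x - x_star), np.linalg.norm(z - z_star))   # both errors should be tiny
```

On this toy instance the whole sequence converges even with tau = 1.9; the general statement for the linear-block setting is the theoretical result summarized in the abstract.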


Notes

  1. This question was first resolved in [48] when the initial multiplier \(x^0\) satisfies \(\mathcal {G}x^0-b=0\) and all the subproblems are solved exactly.

  2. One may refer to [29] for the details motivating the use of indefinite proximal terms in the 2-block majorized proximal ADMM, especially [29, Section 6] on their computational merits, as well as [57] for similar results in the multi-block case.

  3. This is equivalent to saying that \(\mathcal {R}^{-1}\) is calm at \(0\in \mathbb {U}\) for \({\overline{u}}\in \mathbf {K}\) with the same modulus \(\kappa >0\); see [11, Theorem 3H.3].

  4. This lemma can be directly verified via the singular value decomposition of the linear operator \(\mathcal {G}\) and some basic calculations from linear functional analysis.

  5. This can be routinely derived by using the singular value decomposition of \(\mathcal {G}\) and the definition of the Moore–Penrose pseudoinverse.
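
Notes 4 and 5 invoke the singular value decomposition of \(\mathcal {G}\) and the definition of the Moore–Penrose pseudoinverse. The short sketch below illustrates that route numerically on an arbitrary rank-deficient matrix; the matrix G, its dimensions, and the cutoff rule are assumptions made only for illustration, not objects from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# An arbitrary rank-deficient 8 x 6 matrix (rank <= 5), used only as a stand-in for G.
G = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 6))

# Reduced SVD: G = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(G, full_matrices=False)

# Moore-Penrose pseudoinverse: invert only the singular values above a cutoff.
cutoff = 1e-10 * s[0]
s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
G_pinv = Vt.T @ np.diag(s_inv) @ U.T

# Sanity checks: agreement with NumPy's pinv (same cutoff) and a Penrose identity.
print(np.allclose(G_pinv, np.linalg.pinv(G, rcond=1e-10)))   # True
print(np.allclose(G @ G_pinv @ G, G))                        # True
```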

References

  1. Bai, M., Zhang, X., Ni, G., Cui, C.: An adaptive correction approach for tensor completion. SIAM J. Imaging Sci. 9, 1298–1323 (2016)

  2. Bai, S., Qi, H.-D.: Tackling the flip ambiguity in wireless sensor network localization and beyond. Digit. Signal Process. 55, 85–97 (2016)

  3. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont (1997)

  4. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3, 1–122 (2011)

  5. Chen, L., Sun, D.F., Toh, K.-C.: A note on the convergence of ADMM for linearly constrained convex optimization problems. Comput. Optim. Appl. 66, 327–343 (2017)

  6. Chen, L., Sun, D.F., Toh, K.-C.: An efficient inexact symmetric Gauss-Seidel based majorized ADMM for high-dimensional convex composite conic programming. Math. Program. 161(1–2), 237–270 (2017)

  7. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM Rev. 43, 129–159 (2001)

  8. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)

  9. Ding, C., Qi, H.-D.: Convex optimization learning of faithful Euclidean distance representations in nonlinear dimensionality reduction. Math. Program. 164(1–2), 341–381 (2017)

  10. Ding, C., Sun, D.F., Sun, J., Toh, K.-C.: Spectral operators of matrices. Math. Program. 168, 509–531 (2018)

  11. Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings, 2nd edn. Springer, New York (2014)

  12. Du, M.Y.: A two-phase augmented Lagrangian method for convex composite quadratic programming. Ph.D. thesis, Department of Mathematics, National University of Singapore (2015)

  13. Eckstein, J.: Augmented Lagrangian and alternating direction methods for convex optimization: a tutorial and some illustrative computational results. RUTCOR Research Reports (2012)

  14. Eckstein, J., Yao, W.: Understanding the convergence of the alternating direction method of multipliers: theoretical and computational perspectives. Pac. J. Optim. 11, 619–644 (2015)

  15. Eisenblätter, A., Grötschel, M., Koster, A.: Frequency planning and ramification of coloring. Discuss. Math. Graph Theory 22, 51–88 (2002)

  16. Fazel, M., Pong, T.K., Sun, D.F., Tseng, P.: Hankel matrix rank minimization with applications to system identification and realization. SIAM J. Matrix Anal. Appl. 34(3), 946–977 (2013)

  17. Ferreira, J., Khoo, Y., Singer, A.: Semidefinite programming approach for the quadratic assignment problem with a sparse graph. Comput. Optim. Appl. 69(3), 677–712 (2018)

  18. Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2(1), 17–40 (1976)

  19. Gaines, B.R., Kim, J., Zhou, H.: Algorithms for fitting the constrained lasso. J. Comput. Graph. Stat. 27(4), 861–871 (2018)

  20. Glowinski, R.: Lectures on Numerical Methods for Non-Linear Variational Problems. Tata Institute of Fundamental Research, Bombay; Springer (1980)

  21. Glowinski, R., Marroco, A.: Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de Dirichlet non linéaires. Revue française d’automatique, Informatique, Recherche Opérationnelle. Analyse Numérique 9(2), 41–76 (1975)

  22. Han, D.R., Sun, D.F., Zhang, L.W.: Linear rate convergence of the alternating direction method of multipliers for convex composite programming. Math. Oper. Res. 43(2), 622–637 (2018)

  23. Hestenes, M.: Multiplier and gradient methods. J. Optim. Theory Appl. 4(5), 303–320 (1969)

  24. Huber, P.J.: Robust estimation of a location parameter. Ann. Math. Stat. 35, 73–101 (1964)

  25. James, G.M., Paulson, C., Rusmevichientong, P.: Penalized and constrained optimization: an application to high-dimensional website advertising. J. Am. Stat. Assoc. (2019). https://doi.org/10.1080/01621459.2019.1609970

  26. Klopp, O.: Noisy low-rank matrix completion with general sampling distribution. Bernoulli 20(1), 282–303 (2014)

  27. Lam, X.Y., Marron, J.S., Sun, D.F., Toh, K.-C.: Fast algorithms for large scale generalized distance weighted discrimination. J. Comput. Graph. Stat. 27(2), 368–379 (2018)

  28. Lemaréchal, C., Sagastizábal, C.: Practical aspects of the Moreau–Yosida regularization: theoretical preliminaries. SIAM J. Optim. 7(2), 367–385 (1997)

  29. Li, M., Sun, D.F., Toh, K.-C.: A majorized ADMM with indefinite proximal terms for linearly constrained convex composite optimization. SIAM J. Optim. 26(2), 922–950 (2016)

  30. Li, X.D., Sun, D.F., Toh, K.-C.: A Schur complement based semi-proximal ADMM for convex quadratic conic programming and extensions. Math. Program. 155, 333–373 (2016)

  31. Li, X.D., Sun, D.F., Toh, K.-C.: QSDPNAL: a two-phase augmented Lagrangian method for convex quadratic semidefinite programming. Math. Program. Comput. 10(4), 703–743 (2018)

  32. Li, X.D., Sun, D.F., Toh, K.-C.: A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications. Math. Program. 175, 395–418 (2019)

  33. Liu, J., Musialski, P., Wonka, P., Ye, J.: Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35, 208–220 (2013)

  34. Malick, J., Povh, J., Rendl, F., Wiegele, A.: Regularization methods for semidefinite programming. SIAM J. Optim. 20, 336–356 (2009)

  35. Mateos, G., Bazerque, J.-A., Giannakis, G.B.: Distributed sparse linear regression. IEEE Trans. Signal Process. 58, 5262–5276 (2010)

  36. Miao, W.M., Pan, S.H., Sun, D.F.: A rank-corrected procedure for matrix completion with fixed basis coefficients. Math. Program. 159, 289–338 (2016)

  37. Negahban, S., Wainwright, M.J.: Restricted strong convexity and weighted matrix completion: optimal bounds with noise. J. Mach. Learn. Res. 13, 1665–1697 (2012)

  38. Nie, J., Wang, L.: Regularization methods for SDP relaxations in large-scale polynomial optimization. SIAM J. Optim. 22, 408–428 (2012)

  39. Nie, J., Wang, L.: Semidefinite relaxations for best rank-\(1\) tensor approximations. SIAM J. Matrix Anal. Appl. 35, 1155–1179 (2014)

  40. Peng, J., Wei, Y.: Approximating k-means-type clustering via semidefinite programming. SIAM J. Optim. 18, 186–205 (2007)

  41. Potra, F.A.: Weighted complementarity problems—a new paradigm for computing equilibria. SIAM J. Optim. 22, 1634–1654 (2012)

  42. Powell, M.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed.) Optimization, pp. 283–298. Academic Press, New York (1969)

  43. Povh, J., Rendl, F., Wiegele, A.: A boundary point method to solve semidefinite programs. Computing 78, 277–286 (2006)

  44. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  45. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1, 97–116 (1976)

  46. Schizas, I.D., Ribeiro, A., Giannakis, G.B.: Consensus in ad hoc WSNs with noisy links—part I: distributed estimation of deterministic signals. IEEE Trans. Signal Process. 56, 350–364 (2008)

  47. Sloane, N.: Challenge problems: independent sets in graphs. https://oeis.org/A265032/a265032.html. Accessed 16 Aug 2019

  48. Sun, D.F., Toh, K.-C., Yang, L.Q.: A convergent 3-block semi-proximal alternating direction method of multipliers for conic programming with 4-type constraints. SIAM J. Optim. 25(2), 882–915 (2015)

  49. Teo, C.H., Vishwanathan, S.V.N., Smola, A., Le, Q.V.: Bundle methods for regularized risk minimization. J. Mach. Learn. Res. 11, 313–365 (2010)

  50. Toh, K.-C.: Solving large scale semidefinite programs via an iterative solver on the augmented systems. SIAM J. Optim. 14, 670–698 (2004)

  51. Toh, K.-C.: An inexact primal–dual path-following algorithm for convex quadratic SDP. Math. Program. 112(1), 221–254 (2008)

  52. Trick, M., Chvatal, V., Cook, W., Johnson, D., McGeoch, C., Tarjan, R.: The Second DIMACS implementation challenge: NP hard problems: maximum clique, graph coloring, and satisfiability. Rutgers University (1992). http://dimacs.rutgers.edu/Challenges/. Accessed 16 Aug 2019

  53. Wang, B., Zou, H.: Another look at distance-weighted discrimination. J. R. Stat. Soc. B 80, 177–198 (2018)

  54. Wiegele, A.: Biq Mac library—a collection of Max-Cut and quadratic \(0-1\) programming instances of medium size. Technical report (2007). http://biqmac.uni-klu.ac.at/biqmaclib.pdf. Accessed 16 Aug 2019

  55. Yan, Z., Gao, S.Y., Teo, C.P.: On the design of sparse but efficient structures in operations. Manag. Sci. 64, 2973–3468 (2018)

  56. Yang, L.Q., Sun, D.F., Toh, K.-C.: SDPNAL+: a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints. Math. Program. Comput. 7, 331–366 (2015)

  57. Zhang, N., Wu, J., Zhang, L.W.: A linearly convergent majorized ADMM with indefinite proximal terms for convex composite programming and its applications (2018). arXiv:1706.01698v2

  58. Zhao, X.Y., Sun, D.F., Toh, K.-C.: A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 20, 1737–1765 (2010)

  59. Zhu, H., Cano, A., Giannakis, G.B.: Distributed consensus-based demodulation: algorithms and error analysis. IEEE Trans. Wirel. Commun. 9, 2044–2054 (2010)

Acknowledgements

We would like to thank the two anonymous referees for their careful reading of this paper and for their insightful comments and suggestions, which have helped to improve its quality.

Author information

Corresponding author

Correspondence to Xudong Li.

Additional information

The research of Liang Chen was supported by the National Natural Science Foundation of China (11801158, 11871205), the Hunan Provincial Natural Science Foundation of China (2019JJ50040), and the Fundamental Research Funds for the Central Universities in China. The research of Xudong Li was supported by the National Natural Science Foundation of China (11901107), the Shanghai Sailing Program (19YF1402600), and the Fundamental Research Funds for the Central Universities in China. The research of Defeng Sun was supported in part by a start-up research grant from the Hong Kong Polytechnic University. The research of Kim-Chuan Toh was supported in part by the Ministry of Education, Singapore, Academic Research Fund (R-146-000-257-112).

About this article

Cite this article

Chen, L., Li, X., Sun, D. et al. On the equivalence of inexact proximal ALM and ADMM for a class of convex composite programming. Math. Program. 185, 111–161 (2021). https://doi.org/10.1007/s10107-019-01423-x
