Abstract
We study the convergence of the augmented decomposition algorithm (ADA) proposed in Rockafellar et al. (Problem decomposition in block-separable convex optimization: ideas old and new, https://www.washington.edu/, 2017) for solving multi-block separable convex minimization problems subject to linear constraints. We show that the global convergence rate of the exact ADA is \(o(1/\nu)\) under the assumption that a saddle point exists. We then consider an inexact augmented decomposition algorithm and establish global and local convergence results under mild assumptions, by providing a stability result for the maximal monotone operator \(\mathcal{T}\) associated with the perturbation, from both primal and dual perspectives. This result implies the local linear convergence of the inexact ADA for many applications, such as the lasso, total variation reconstruction, the exchange problem, and many other problems from statistics, machine learning and engineering with \(\ell_1\) regularization.
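The ADA itself is not specified in this front matter, but the lasso application mentioned above can be illustrated with a closely related augmented-Lagrangian splitting method, ADMM (Boyd et al., cited in the references below). The sketch below is ours, not the paper's algorithm: it splits the lasso into the two-block problem \(\min \tfrac12\|Ax-b\|^2 + \lambda\|z\|_1\) subject to \(x - z = 0\), and the function names and the small synthetic instance are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Solve min 0.5||Ax-b||^2 + lam*||x||_1 via an ADMM-style
    augmented-Lagrangian splitting of the constraint x - z = 0."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled multiplier
    # Cache the Cholesky factor used by every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: minimize the augmented Lagrangian in x (a linear solve).
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal step on the l1 term.
        z = soft_threshold(x + u, lam / rho)
        # Multiplier (dual) update on the constraint residual x - z.
        u = u + x - z
    return z

# Tiny instance: a sparse signal recovered up to the l1 shrinkage bias.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[[1, 4]] = [3.0, -2.0]
b = A @ x_true
x_hat = lasso_admm(A, b, lam=0.1)
```

On a small, well-conditioned instance like this, the iterates reach the support of the true signal quickly; the linear convergence established in the paper concerns the (different) inexact ADA iteration applied to such \(\ell_1\)-regularized problems.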
Notes
Available at http://web.stanford.edu/boyd/papers/admm/.
References
Bai, J., Zhang, H., Li, J.: A parameterized proximal point algorithm for separable convex optimization. Optim. Lett. 12(7), 1–20 (2017)
Beck, A., Nedic, A., Ozdaglar, A., Teboulle, M.: An \(O (1/k) \) gradient method for network resource allocation problems. IEEE Trans. Control Netw. Syst. 1(1), 64–73 (2014)
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
Chang, T.H., Nedic, A., Scaglione, A.: Distributed constrained optimization by consensus-based primal-dual perturbation method. IEEE Trans. Autom. Control 59(6), 1524–1538 (2014)
Chatzipanagiotis, N., Dentcheva, D., Zavlanos, M.M.: An augmented Lagrangian method for distributed optimization. Math. Program. 152(1–2), 405–434 (2015)
Chen, C., He, B., Ye, Y., Yuan, X.: The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Math. Program. 155(1–2), 57–79 (2016)
Chen, G., Teboulle, M.: A proximal-based decomposition method for convex minimization problems. Math. Program. 64(1–3), 81–101 (1994)
Cui, Y., Sun, D., Toh, K.C.: On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming (2017). arXiv preprint arXiv:1706.08800
Deng, W., Lai, M.J., Peng, Z., Yin, W.: Parallel multi-block ADMM with o(1/k) convergence. J. Sci. Comput. 71(2), 712–736 (2017)
Deng, W., Yin, W.: On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66(3), 889–916 (2016)
Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings. Springer, New York (2009)
Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)
Güler, O.: New proximal point algorithms for convex minimization. SIAM J. Optim. 2(4), 649–664 (1992)
Han, D., Sun, D., Zhang, L.: Linear rate convergence of the alternating direction method of multipliers for convex composite quadratic and semi-definite programming (2015). arXiv preprint arXiv:1508.02134
Han, D., Yuan, X.: A note on the alternating direction method of multipliers. J. Optim. Theory Appl. 155(1), 227–238 (2012)
He, B., Liao, L.Z., Han, D., Yang, H.: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 92(1), 103–118 (2002)
He, B., Yuan, X.: On the acceleration of augmented Lagrangian method for linearly constrained optimization. Optimization Online (2010)
He, B., Yuan, X.: On the O(1/n) convergence rate of the Douglas–Rachford alternating direction method. SIAM J. Numer. Anal. 50(2), 700–709 (2012)
He, B., Yuan, X.: On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers. Numer. Math. 130(3), 567–577 (2015)
Hoffman, A.J.: On approximate solutions of systems of linear inequalities. In: Selected Papers of Alan J. Hoffman: With Commentary, pp. 174–176. World Scientific (2003)
Hong, M., Luo, Z.Q.: On the linear convergence of the alternating direction method of multipliers. Math. Program. 162(1–2), 165–199 (2017)
Li, X., Sun, D., Toh, K.C.: A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems (2016). arXiv preprint arXiv:1607.05428
Liu, Y.J., Sun, D., Toh, K.C.: An implementable proximal point algorithmic framework for nuclear norm minimization. Math. Program. 133(1), 399–436 (2012)
Luo, Z.Q., Tseng, P.: On the convergence rate of dual ascent methods for linearly constrained convex minimization. Math. Oper. Res. 18(4), 846–867 (1993)
Luque, F.J.: Asymptotic convergence analysis of the proximal point algorithm. SIAM J. Control Optim. 22(2), 277–293 (1984)
Ma, S.: Alternating proximal gradient method for convex minimization. J. Sci. Comput. 68(2), 546–572 (2016)
Mulvey, J.M., Ruszczyński, A.: A diagonal quadratic approximation method for large scale linear programs. Oper. Res. Lett. 12(4), 205–215 (1992)
Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^2)\). Sov. Math. Dokl. 27(2), 372–376 (1983)
Robinson, S.M.: Some continuity properties of polyhedral multifunctions. In: König, H., Korte, B., Ritter, K. (eds.) Mathematical Programming at Oberwolfach, pp. 206–214. Springer, Berlin (1981)
Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)
Rockafellar, R.T.: Problem decomposition in block-separable convex optimization: ideas old and new (2017). https://www.washington.edu/
Shefi, R., Teboulle, M.: Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization. SIAM J. Optim. 24(1), 269–297 (2014)
Spingarn, J.E.: Applications of the method of partial inverses to convex programming: decomposition. Math. Program. 32(2), 199–223 (1985)
Tseng, P.: Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM J. Control Optim. 29(1), 119–138 (1991)
Wang, X., Hong, M., Ma, S., Luo, Z.Q.: Solving multiple-block separable convex minimization problems using two-block alternating direction method of multipliers (2013). arXiv preprint arXiv:1308.5294
Wright, S.J.: Accelerated block-coordinate relaxation for regularized optimization. SIAM J. Optim. 22(1), 159–186 (2012)
Xiao, L., Boyd, S.: Optimal scaling of a gradient method for distributed resource allocation. J. Optim. Theory Appl. 129(3), 469–488 (2006)
You, K., Xie, L.: Network topology and communication data rate for consensusability of discrete-time multi-agent systems. IEEE Trans. Autom. Control 56(10), 2262–2275 (2011)
Acknowledgements
The authors are grateful to Professor R. Tyrrell Rockafellar for suggestions on this research project. Shu Lu's research is supported by the National Science Foundation under Grant DMS-1407241.
Cite this article
Liu, H., Lu, S. Convergence of the augmented decomposition algorithm. Comput Optim Appl 72, 179–213 (2019). https://doi.org/10.1007/s10589-018-0039-6