
Convergence of the augmented decomposition algorithm

Computational Optimization and Applications

Abstract

We study the convergence of the augmented decomposition algorithm (ADA) proposed in Rockafellar (Problem decomposition in block-separable convex optimization: ideas old and new, https://www.washington.edu/, 2017) for solving multi-block separable convex minimization problems subject to linear constraints. We show that the global convergence rate of the exact ADA is \(o(1/\nu)\) under the assumption that a saddle point exists. We then consider the inexact augmented decomposition algorithm and establish global and local convergence results under mild assumptions, by providing a stability result for the maximal monotone operator \(\mathcal{T}\) associated with the perturbation, from both primal and dual perspectives. This result implies the local linear convergence of the inexact ADA for many applications, such as the lasso, total variation reconstruction, the exchange problem, and many other problems from statistics, machine learning, and engineering with \(\ell_1\) regularization.
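
For orientation, the problem class can be recorded in standard form; the following is a sketch in generic notation (the exact formulation and the ADA update rules are those of the Rockafellar manuscript cited above). The problem is to minimize \(\sum_{i=1}^{N} f_i(x_i)\) over blocks \(x_1,\dots,x_N\) subject to \(\sum_{i=1}^{N} A_i x_i = b\), with each \(f_i\) closed proper convex. Augmented-Lagrangian-based decomposition methods of this kind work with the augmented Lagrangian

\[
\mathcal{L}_c(x,y) \;=\; \sum_{i=1}^{N} f_i(x_i) \;+\; \Big\langle y,\; \sum_{i=1}^{N} A_i x_i - b \Big\rangle \;+\; \frac{c}{2}\,\Big\| \sum_{i=1}^{N} A_i x_i - b \Big\|^2,
\]

where \(y\) is the Lagrange multiplier and \(c>0\) a penalty parameter; the rate \(o(1/\nu)\) above is stated in terms of the iteration counter \(\nu\).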


Notes

  1. Available at http://web.stanford.edu/boyd/papers/admm/.
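
The abstract names the lasso among the applications covered by the local linear convergence result. Purely as an illustrative sketch, and not the ADA of the paper, the following Python code solves the lasso with a generic two-block ADMM of the kind distributed at the URL above; every name and parameter here (soft_threshold, lasso_admm, rho, and so on) is hypothetical and chosen for the example only.

```python
import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of kappa*||.||_1: componentwise shrinkage.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=500, tol=1e-8):
    # Two-block splitting of the lasso:
    #   minimize 0.5*||A x - b||^2 + lam*||z||_1  subject to  x - z = 0.
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor (A^T A + rho I) once; the factor is reused in every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: exact minimization of the augmented Lagrangian in x.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal step on the l1 term (soft-thresholding).
        z_old = z
        z = soft_threshold(x + u, lam / rho)
        # u-update: scaled dual ascent on the constraint x - z = 0.
        u = u + x - z
        if np.linalg.norm(z - z_old) < tol:
            break
    return z

# Tiny usage example on synthetic data (shapes chosen arbitrarily).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = lasso_admm(A, b, lam=0.1)
```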

References

  1. Bai, J., Zhang, H., Li, J.: A parameterized proximal point algorithm for separable convex optimization. Optim. Lett. 12(7), 1–20 (2017)

  2. Beck, A., Nedic, A., Ozdaglar, A., Teboulle, M.: An \(O(1/k)\) gradient method for network resource allocation problems. IEEE Trans. Control Netw. Syst. 1(1), 64–73 (2014)

  3. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)

  4. Chang, T.H., Nedic, A., Scaglione, A.: Distributed constrained optimization by consensus-based primal-dual perturbation method. IEEE Trans. Autom. Control 59(6), 1524–1538 (2014)

  5. Chatzipanagiotis, N., Dentcheva, D., Zavlanos, M.M.: An augmented Lagrangian method for distributed optimization. Math. Program. 152(1–2), 405–434 (2015)

  6. Chen, C., He, B., Ye, Y., Yuan, X.: The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Math. Program. 155(1–2), 57–79 (2016)

  7. Chen, G., Teboulle, M.: A proximal-based decomposition method for convex minimization problems. Math. Program. 64(1–3), 81–101 (1994)

  8. Cui, Y., Sun, D., Toh, K.C.: On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming (2017). arXiv preprint arXiv:1706.08800

  9. Deng, W., Lai, M.J., Peng, Z., Yin, W.: Parallel multi-block ADMM with o(1/k) convergence. J. Sci. Comput. 71(2), 712–736 (2017)

  10. Deng, W., Yin, W.: On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66(3), 889–916 (2016)

  11. Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings. Springer, New York (2009)

  12. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1), 293–318 (1992)

  13. Güler, O.: New proximal point algorithms for convex minimization. SIAM J. Optim. 2(4), 649–664 (1992)

  14. Han, D., Sun, D., Zhang, L.: Linear rate convergence of the alternating direction method of multipliers for convex composite quadratic and semi-definite programming (2015). arXiv preprint arXiv:1508.02134

  15. Han, D., Yuan, X.: A note on the alternating direction method of multipliers. J. Optim. Theory Appl. 155(1), 227–238 (2012)

  16. He, B., Liao, L.Z., Han, D., Yang, H.: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 92(1), 103–118 (2002)

  17. He, B., Yuan, X.: On the acceleration of augmented Lagrangian method for linearly constrained optimization. Optimization Online 3 (2010)

  18. He, B., Yuan, X.: On the O(1/n) convergence rate of the Douglas–Rachford alternating direction method. SIAM J. Numer. Anal. 50(2), 700–709 (2012)

  19. He, B., Yuan, X.: On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers. Numer. Math. 130(3), 567–577 (2015)

  20. Hoffman, A.J.: On approximate solutions of systems of linear inequalities. In: Selected Papers of Alan J. Hoffman: With Commentary, pp. 174–176 (2003)

  21. Hong, M., Luo, Z.Q.: On the linear convergence of the alternating direction method of multipliers. Math. Program. 162(1–2), 165–199 (2017)

  22. Li, X., Sun, D., Toh, K.C.: A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems (2016). arXiv preprint arXiv:1607.05428

  23. Liu, Y.J., Sun, D., Toh, K.C.: An implementable proximal point algorithmic framework for nuclear norm minimization. Math. Program. 133(1), 399–436 (2012)

  24. Luo, Z.Q., Tseng, P.: On the convergence rate of dual ascent methods for linearly constrained convex minimization. Math. Oper. Res. 18(4), 846–867 (1993)

  25. Luque, F.J.: Asymptotic convergence analysis of the proximal point algorithm. SIAM J. Control Optim. 22(2), 277–293 (1984)

  26. Ma, S.: Alternating proximal gradient method for convex minimization. J. Sci. Comput. 68(2), 546–572 (2016)

  27. Mulvey, J.M., Ruszczyński, A.: A diagonal quadratic approximation method for large scale linear programs. Oper. Res. Lett. 12(4), 205–215 (1992)

  28. Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^2)\). Sov. Math. Dokl. 27(2), 372–376 (1983)

  29. Robinson, S.M.: Some continuity properties of polyhedral multifunctions. In: König, H., Korte, B., Ritter, K. (eds.) Mathematical Programming at Oberwolfach, pp. 206–214. Springer, Berlin (1981)

  30. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976)

  31. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)

  32. Rockafellar, R.T.: Problem decomposition in block-separable convex optimization: ideas old and new (2017). https://www.washington.edu/

  33. Shefi, R., Teboulle, M.: Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization. SIAM J. Optim. 24(1), 269–297 (2014)

  34. Spingarn, J.E.: Applications of the method of partial inverses to convex programming: decomposition. Math. Program. 32(2), 199–223 (1985)

  35. Tseng, P.: Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM J. Control Optim. 29(1), 119–138 (1991)

  36. Wang, X., Hong, M., Ma, S., Luo, Z.Q.: Solving multiple-block separable convex minimization problems using two-block alternating direction method of multipliers (2013). arXiv preprint arXiv:1308.5294

  37. Wright, S.J.: Accelerated block-coordinate relaxation for regularized optimization. SIAM J. Optim. 22(1), 159–186 (2012)

  38. Xiao, L., Boyd, S.: Optimal scaling of a gradient method for distributed resource allocation. J. Optim. Theory Appl. 129(3), 469–488 (2006)

  39. You, K., Xie, L.: Network topology and communication data rate for consensusability of discrete-time multi-agent systems. IEEE Trans. Autom. Control 56(10), 2262–2275 (2011)

Acknowledgements

The authors are grateful to Professor R. Tyrrell Rockafellar for suggestions on this research project. Shu Lu’s research is supported by the National Science Foundation under Grant DMS-1407241.

Author information

Correspondence to Hongsheng Liu.

About this article

Cite this article

Liu, H., Lu, S. Convergence of the augmented decomposition algorithm. Comput Optim Appl 72, 179–213 (2019). https://doi.org/10.1007/s10589-018-0039-6
