
Golden Ratio Proximal Gradient ADMM for Distributed Composite Convex Optimization

Journal of Optimization Theory and Applications

Abstract

This paper introduces a golden ratio proximal gradient alternating direction method of multipliers (GRPG-ADMM) for distributed composite convex optimization. When applied to the consensus optimization problem, GRPG-ADMM offers stronger safety protection when computing an optimal decision for a network of agents connected over an undirected graph. To ensure convergence, we expand the admissible parameter range in the convex combination step. Applying the algorithm to a decentralized composite convex consensus optimization problem yields a single-loop decentralized golden ratio proximal gradient algorithm. Notably, the agents need not communicate with their neighbors before launching the algorithm. The algorithm guarantees convergence of both primal and dual iterates, with an ergodic convergence rate of \(\mathcal O(1/k)\) measured by the function value residual and the consensus violation. To demonstrate its efficiency, we compare the algorithm with two state-of-the-art methods and report numerical results on the decentralized sparse group LASSO problem and the decentralized compressed sensing problem.
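The golden-ratio construction underlying methods of this family can be illustrated with a minimal sketch. The code below is not the paper's GRPG-ADMM; it is a simplified, single-loop golden-ratio proximal-gradient consensus iteration for a decentralized LASSO-type problem, written as an assumption-laden illustration. The function names (`decentralized_gr_prox_grad`, `soft_threshold`), the doubly stochastic mixing matrix `W`, and the step-size choice are all our own hypothetical choices, not taken from the paper.

```python
import numpy as np


def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def decentralized_gr_prox_grad(A_list, b_list, W, lam, tau, iters=500):
    """Illustrative golden-ratio proximal-gradient consensus sketch
    (NOT the paper's GRPG-ADMM).

    Each agent i holds a smooth local term 0.5*||A_i x - b_i||^2 plus a
    nonsmooth lam*||x||_1, mixes iterates with its neighbors through the
    doubly stochastic matrix W, and uses the golden-ratio convex
    combination step z <- ((phi-1) x + z) / phi before each update.
    """
    n = len(A_list)
    d = A_list[0].shape[1]
    phi = (1 + np.sqrt(5)) / 2          # golden ratio
    X = np.zeros((n, d))                # agent iterates, one per row
    Z = X.copy()                        # golden-ratio auxiliary iterates
    for _ in range(iters):
        # Golden-ratio convex combination step.
        Z = ((phi - 1) * X + Z) / phi
        # Local gradients of the smooth least-squares terms.
        G = np.stack([A.T @ (A @ x - b)
                      for A, b, x in zip(A_list, b_list, X)])
        # Mix with neighbors, take a gradient step, then the l1 prox.
        X = soft_threshold(W @ Z - tau * G, tau * lam)
    return X
```

A conservative step size such as `tau = 0.5 / L`, with `L` the largest spectral norm squared among the local data matrices, keeps this sketch stable; the paper's actual parameter conditions for GRPG-ADMM are given in the full text.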


Data Availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

We thank the associate editor and the two anonymous referees for their constructive comments and suggestions, which helped improve this paper significantly.

Author information


Corresponding author

Correspondence to Junfeng Yang.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Communicated by Alexander Vladimirovich Gasnikov.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Research supported by the National Natural Science Foundation of China (NSFC-12371301).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yin, C., Yang, J. Golden Ratio Proximal Gradient ADMM for Distributed Composite Convex Optimization. J Optim Theory Appl 200, 895–922 (2024). https://doi.org/10.1007/s10957-023-02336-8
