
Computational Optimization and Applications, Volume 74, Issue 3, pp. 703–728

Exact spectral-like gradient method for distributed optimization

  • Dušan Jakovetić
  • Nataša Krejić
  • Nataša Krklec Jerinkić

Abstract

Since their initial proposal in the late 1980s, spectral gradient methods have continued to receive significant attention, especially due to their excellent numerical performance on various large-scale applications. However, to date, they have not been sufficiently explored in the context of distributed optimization. In this paper, we consider unconstrained distributed optimization problems in which n nodes constitute an arbitrary connected network and collaboratively minimize the sum of their local convex cost functions. In this setting, building on existing exact distributed gradient methods, we propose a novel exact distributed gradient method in which the nodes' step-sizes are designed according to novel rules akin to those in spectral gradient methods. We refer to the proposed method as the Distributed Spectral Gradient method. The method exhibits R-linear convergence under standard assumptions on the nodes' local costs and safeguarding of the algorithm step-sizes. We illustrate the method's performance through simulation examples.
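As background for the step-size rule the abstract alludes to, the following is a minimal sketch of the classical (centralized) Barzilai–Borwein spectral gradient method with safeguarded step-sizes, applied to a strongly convex quadratic. All function and variable names here are illustrative; this is not the paper's distributed algorithm, only the classical building block whose step-size rule it adapts.

```python
import numpy as np

def spectral_gradient(grad, x0, sigma_min=1e-10, sigma_max=1e10,
                      alpha0=1.0, tol=1e-8, max_iter=500):
    """Barzilai-Borwein (spectral) gradient descent with safeguarding.

    The BB1 step-size alpha_k = (s^T s) / (s^T y), with s = x_{k+1} - x_k
    and y = g_{k+1} - g_k, approximates the inverse curvature along the
    last step. It is clipped to the safeguard interval
    [sigma_min, sigma_max] so the iteration stays well defined.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # BB1 rule; fall back to the upper safeguard if curvature is non-positive
        alpha = (s @ s) / sy if sy > 0 else sigma_max
        alpha = min(max(alpha, sigma_min), sigma_max)
        x, g = x_new, g_new
    return x

# Strongly convex quadratic f(x) = 0.5 x^T A x - b^T x; minimizer solves Ax = b
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
x_star = spectral_gradient(lambda x: A @ x - b, np.zeros(3))
```

For quadratics such as this one, the iteration is known to converge R-linearly even though it is nonmonotone; the safeguard interval plays the same role as the step-size safeguarding assumed in the convergence analysis described in the abstract.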

Keywords

Distributed optimization · Spectral gradient · R-linear convergence

Mathematics Subject Classification

90C25 · 90C53 · 65K05


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  • Dušan Jakovetić (1)
  • Nataša Krejić (1)
  • Nataša Krklec Jerinkić (1)

  1. Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Novi Sad, Serbia
