Abstract
We consider the problems of consensus optimization and resource allocation, and we discuss decentralized algorithms for solving them. By “decentralized”, we mean that the algorithms are implemented on a set of networked agents, where each agent is able to communicate with its neighboring agents. In both problems, every agent in the network seeks to collaboratively minimize a function that involves global information, while having access to only partial information. We first introduce the two problems in the context of distributed optimization, review the related literature, and discuss an interesting “mirror relation” between the problems. We then discuss some state-of-the-art algorithms for solving the decentralized consensus optimization problem and, based on the “mirror relation”, develop algorithms for solving the decentralized resource allocation problem. Finally, we provide numerical experiments that demonstrate the efficacy of the algorithms and validate the methodology of using the “mirror relation”.
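As a concrete illustration of the consensus-optimization setting described above, the following minimal sketch runs a decentralized gradient iteration, \(x_i(k+1)=\sum_j w_{ij}x_j(k) - \alpha \nabla f_i(x_i(k))\), one of the classical schemes in this literature (the three-agent network, local data `a`, and step size are hypothetical choices for illustration, not the chapter's own algorithm):

```python
# Decentralized gradient descent (DGD) on a 3-agent complete graph.
# Agent i holds the private function f_i(x) = (x - a[i])^2 / 2; the network
# collaboratively minimizes sum_i f_i, whose minimizer is the average of a.
a = [1.0, 2.0, 3.0]           # hypothetical local data held by each agent
W = [[1/3, 1/3, 1/3]] * 3     # doubly stochastic mixing (consensus) matrix
alpha = 0.05                  # constant step size
x = [0.0, 0.0, 0.0]           # each agent's local estimate

for _ in range(500):
    # each agent averages its neighbors' estimates ...
    mixed = [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]
    # ... then takes a gradient step using only its local function
    x = [mixed[i] - alpha * (x[i] - a[i]) for i in range(3)]

print(x)  # all entries end up close to the global minimizer 2.0
```

With a constant step size, DGD converges only to a neighborhood of the minimizer (its size proportional to α), which is precisely the limitation that exact first-order methods such as EXTRA, discussed in this chapter, remove.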
Notes
1. Even for special topologies such as a “star”-shaped network, the setting is considered decentralized, since the agent at the “center” has the same computational ability as the others in the network and is not responsible for any extra coordination work.
2. Suppose that a sequence {x(k)} converges to x* in some norm ∥⋅∥. The convergence is: (i) Q-linear if there is λ ∈ (0, 1) such that \(\frac{\|x(k+1)-x^*\|}{\|x(k)-x^*\|} \leq \lambda\) for all k; (ii) R-linear if there are λ ∈ (0, 1) and a positive constant C such that \(\|x(k)-x^*\| \leq C\lambda^k\) for all k. Both of these rates are geometric. They are often referred to as global rates, to distinguish them from the case when the given relations hold only for all sufficiently large k. The difference between these two types of geometric rate is that a Q-linear rate implies a monotonic decrease of ∥x(k) − x*∥, while an R-linear rate does not. We will use “geometric(ally)” and “linear(ly)” interchangeably when this causes no confusion.
3. A nonnegative sequence {a_k} is said to converge to 0 at an O(1∕k) rate if \(\limsup_{k\rightarrow \infty} k a_k < +\infty\). In contrast, it is said to have an o(1∕k) rate if \(\limsup_{k\rightarrow \infty} k a_k = 0\).
4. When an algorithm has a convergence rate of O(θ(k)), we say that the rate is sublinear if \(\lim_{k\rightarrow +\infty}\frac{\lambda^k}{\theta(k)}=0\) for every constant λ ∈ (0, 1). Typical sublinear rates include O(1∕k^p) with p > 0.
5. See Ref. [78] for the definition of “dual friendly”.
6. A description of the range for δ can be found in [49].
7. A more specific description of the range for δ can be found in [49].
8. Copyright © 2017 Society for Industrial and Applied Mathematics. Reprinted with permission. All rights reserved.
9. To guarantee connectedness of the graph, we first grow a random tree; we then add edges uniformly at random until the graph reaches the specified connectivity ratio.
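The graph-generation procedure in footnote 9 can be sketched as follows. This is a minimal sketch with assumed details: the connectivity ratio is taken to mean |E| divided by the n(n−1)∕2 possible edges, the random tree is grown by attaching each new node to a uniformly chosen node already in the tree, and the function name `random_connected_graph` is hypothetical:

```python
import random

def random_connected_graph(n, ratio, seed=0):
    """Return the edge set of a connected graph on n nodes with roughly
    ratio * n*(n-1)/2 edges (but at least n-1, so the graph stays connected)."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    edges = set()
    # Step 1: grow a random spanning tree -- attach each new node to a
    # uniformly random node that is already in the tree.
    for i in range(1, n):
        u, v = order[i], order[rng.randrange(i)]
        edges.add((min(u, v), max(u, v)))
    # Step 2: add edges uniformly at random (duplicates are ignored by the
    # set) until the target connectivity ratio is reached.
    target = max(n - 1, int(ratio * n * (n - 1) / 2))
    while len(edges) < target:
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    return edges

edges = random_connected_graph(20, 0.3)
```

Seeding the tree first guarantees connectedness by construction, so no rejection or re-sampling of disconnected graphs is needed.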
References
N.S. Aybat, E.Y. Hamedani, A distributed ADMM-like method for resource sharing under conic constraints over time-varying networks (2016). arXiv preprint arXiv:1611.07393
T. Başar, S.R. Etesami, A. Olshevsky, Convergence time of quantized Metropolis consensus over time-varying networks. IEEE Trans. Autom. Control 61(12), 4048–4054 (2016)
J. Bazerque, G. Giannakis, Distributed spectrum sensing for cognitive radio networks by exploiting sparsity. IEEE Trans. Signal Process. 58, 1847–1862 (2010)
D. Bertsekas, Distributed asynchronous computation of fixed points. Math. Program. 27(1), 107–120 (1983)
D.P. Bertsekas, Incremental proximal methods for large scale convex optimization. Math. Program. 129, 163–195 (2011)
D.P. Bertsekas, Incremental aggregated proximal and augmented Lagrangian algorithms. Technical report, Laboratory for Information and Decision Systems Report LIDS-P-3176, MIT (2015)
D.P. Bertsekas, J. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, 2nd edn. (Athena Scientific, Nashua, 1997)
D. Bertsekas, A. Nedić, A. Ozdaglar, Convex Analysis and Optimization (Athena Scientific, Belmont, 2003)
S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
K. Cai, H. Ishii, Average consensus on arbitrary strongly connected digraphs with time-varying topologies. IEEE Trans. Autom. Control 59(4), 1066–1071 (2014)
V. Cevher, S. Becker, M. Schmidt, Convex optimization for big data: scalable, randomized, and parallel algorithms for big data analytics. IEEE Signal Process. Mag. 31(5), 32–43 (2014)
A.K. Chandra, P. Raghavan, W.L. Ruzzo, R. Smolensky, P. Tiwari, The electrical resistance of a graph captures its commute and cover times. Comput. Complex. 6(4), 312–340 (1996)
T.-H. Chang, M. Hong, X. Wang, Multi-agent distributed optimization via inexact consensus ADMM. IEEE Trans. Signal Process. 63(2), 482–497 (2015)
A. Chen, A. Ozdaglar, A fast distributed Proximal-Gradient method, in The 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (2012), pp. 601–608
A. Cherukuri, J. Cortés, Initialization-free distributed coordination for economic dispatch under varying loads and generator commitment. Automatica 74, 183–193 (2016)
D. Coppersmith, U. Feige, J. Shearer, Random walks on regular and irregular graphs. SIAM J. Discret. Math. 9(2), 301–308 (1996)
T.T. Doan, C.L. Beck, Distributed primal dual methods for economic dispatch in power networks (2016). arXiv preprint arXiv:1609.06287
T.T. Doan, A. Olshevsky, Distributed resource allocation on dynamic networks in quadratic time. Syst. Control Lett. 99, 57–63 (2017)
J. Duchi, A. Agarwal, M. Wainwright, Dual averaging for distributed optimization: convergence analysis and network scaling. IEEE Trans. Autom. Control 57(3), 592–606 (2012)
P. Forero, A. Cano, G. Giannakis, Consensus-based distributed support vector machines. J. Mach. Learn. Res. 59, 1663–1707 (2010)
L. Gan, U. Topcu, S. Low, Optimal decentralized protocol for electric vehicle charging. IEEE Trans. Power Syst. 28(2), 940–951 (2013)
B. Gharesifard, J. Cortes, Distributed strategies for generating weight-balanced and doubly stochastic digraphs. Eur. J. Control 18(6), 539–557 (2012)
F. Guo, C. Wen, J. Mao, Y. Song, Distributed economic dispatch for smart grids with random wind power. IEEE Trans. Smart Grid 7(3), 1572–1583 (2016)
M. Gurbuzbalaban, A. Ozdaglar, P. Parrilo, On the convergence rate of incremental aggregated gradient algorithms (2015). arXiv preprint arXiv:1506.02081
M. Hong, T. Chang, Stochastic proximal gradient consensus over random networks (2015). arXiv preprint arXiv:1511.08905
H. Huang, Q. Ling, W. Shi, J. Wang, Collaborative resource allocation over a hybrid cloud center and edge server network. Int. J. Comput. Math. 35(4), 421–436 (2017)
D. Jakovetic, J. Xavier, J. Moura, Fast distributed gradient methods. IEEE Trans. Autom. Control 59, 1131–1146 (2014)
S. Kar, G. Hug, Distributed robust economic dispatch in power systems: a consensus + innovations approach, in IEEE Power and Energy Society General Meeting (2012), pp. 1–8
D. Kempe, A. Dobra, J. Gehrke, Gossip-based computation of aggregate information, in Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science (2003), pp. 482–491
J. Koshal, A. Nedić, U.V. Shanbhag, Distributed algorithms for aggregative games on graphs. Oper. Res. 64(3), 680–704 (2016)
H. Lakshmanan, D.P. De Farias, Decentralized resource allocation in dynamic networks of agents. SIAM J. Optim. 19(2), 911–940 (2008)
D.A. Levin, Y. Peres, E.L. Wilmer, Markov Chains and Mixing Times (American Mathematical Society, Providence, 2009)
Z. Li, W. Shi, M. Yan, A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates (2017). arXiv preprint arXiv:1704.07807
T. Lindvall, Lectures on the Coupling Method (Dover Publications, Newburyport, 2002)
Q. Ling, W. Shi, G. Wu, A. Ribeiro, DLM: decentralized linearized alternating direction method of multipliers. IEEE Trans. Signal Process. 63(15), 4051–4064 (2015)
P. Lorenzo, G. Scutari, Distributed nonconvex optimization over networks, in IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP) (2015), pp. 229–232
P. Lorenzo, G. Scutari, Distributed nonconvex optimization over time-varying networks, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2016), pp. 4124–4128
P. Lorenzo, G. Scutari, NEXT: in-network nonconvex optimization. IEEE Trans. Signal Inf. Process. Netw. 2(2), 120–136 (2016)
Q. Lü, H. Li, Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizes (2016). arXiv preprint arXiv:1611.00990
G. Mateos, J. Bazerque, G. Giannakis, Distributed sparse linear regression. IEEE Trans. Signal Process. 58, 5262–5276 (2010)
A. Mokhtari, W. Shi, Q. Ling, A. Ribeiro, DQM: decentralized quadratically approximated alternating direction method of multipliers (2015). arXiv preprint arXiv:1508.02073
A. Mokhtari, W. Shi, Q. Ling, A. Ribeiro, A decentralized second-order method with exact linear convergence rate for consensus optimization (2016). arXiv preprint arXiv:1602.00596
I. Necoara, Random coordinate descent algorithms for multi-agent convex optimization over networks. IEEE Trans. Autom. Control 58(8), 2001–2012 (2013)
A. Nedić, Asynchronous broadcast-based convex optimization over a network. IEEE Trans. Autom. Control 56(6), 1337–1351 (2011)
A. Nedić, D.P. Bertsekas, Convergence rate of incremental subgradient algorithms, in Stochastic Optimization: Algorithms and Applications (Kluwer Academic Publishers, Boston, 2000), pp. 263–304
A. Nedić, D.P. Bertsekas, Incremental subgradient methods for nondifferentiable optimization. SIAM J. Optim. 12(1), 109–138 (2001)
A. Nedić, A. Olshevsky, Stochastic gradient-push for strongly convex functions on time-varying directed graphs (2014). arXiv preprint arXiv:1406.2075
A. Nedić, A. Olshevsky, Distributed optimization over time-varying directed graphs. IEEE Trans. Autom. Control 60(3), 601–615 (2015)
A. Nedić, A. Olshevsky, W. Shi, Improved convergence rates for distributed resource allocation (2017). arXiv preprint arXiv:
A. Nedić, A. Ozdaglar, Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54, 48–61 (2009)
A. Nedić, D.P. Bertsekas, V. Borkar, Distributed asynchronous incremental subgradient methods, in Proceedings of the March 2000 Haifa Workshop “Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications” (Elsevier, Amsterdam, 2001)
A. Nedić, A. Olshevsky, A. Ozdaglar, J. Tsitsiklis, On distributed averaging algorithms and quantization effects. IEEE Trans. Autom. Control 54(11), 2506–2517 (2009)
A. Nedić, A. Olshevsky, C. Uribe, Fast convergence rates for distributed non-Bayesian learning (2015). arXiv preprint arXiv:1508.05161
A. Nedić, A. Olshevsky, W. Shi, Achieving geometric convergence for distributed optimization over time-varying graphs (2016). arXiv preprint arXiv:1607.03218
A. Nedić, A. Olshevsky, W. Shi, C.A. Uribe, Geometrically convergent distributed optimization with uncoordinated step-sizes (2016). arXiv preprint arXiv:1609.05877
Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, vol. 87 (Springer Science and Business Media, Berlin, 2013)
A. Olshevsky, Efficient information aggregation strategies for distributed control and signal processing, Ph.D. thesis, Massachusetts Institute of Technology, 2010
A. Olshevsky, Linear time average consensus on fixed graphs and implications for decentralized optimization and multi-agent control (2016). arXiv preprint arXiv:1411.4186v6
G. Qu, N. Li, Harnessing smoothness to accelerate distributed optimization (2016). arXiv preprint arXiv:1605.07112
G. Qu, N. Li, Accelerated distributed Nesterov gradient descent (2017). arXiv preprint arXiv:1705.07176
M. Rabbat, R. Nowak, Distributed optimization in sensor networks, in Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks (ACM, New York, 2004), pp. 20–27
S. Ram, V. Veeravalli, A. Nedić, Distributed non-autonomous power control through distributed convex optimization, in INFOCOM (2009), pp. 3001–3005
S. Ram, A. Nedić, V. Veeravalli, Incremental stochastic subgradient algorithms for convex optimization. SIAM J. Optim. 20(2), 691–717 (2009)
S.S. Ram, A. Nedić, V. Veeravalli, Distributed stochastic subgradient projection algorithms for convex optimization. J. Optim. Theory Appl. 147(3), 516–545 (2010)
S.S. Ram, A. Nedić, V.V. Veeravalli, A new class of distributed optimization algorithms: application to regression of distributed data. Optim. Methods Softw. 27(1), 71–88 (2012)
W. Ren, Consensus based formation control strategies for multi-vehicle systems, in Proceedings of the American Control Conference (2006), pp. 4237–4242
R. Rockafellar, Convex Analysis (Princeton University Press, Princeton, 1970)
A. Sayed, Diffusion adaptation over networks. Acad. Press Libr. Signal Process. 3, 323–454 (2013)
K. Scaman, F. Bach, S. Bubeck, Y. Lee, L. Massoulié, Optimal algorithms for smooth and strongly convex distributed optimization in networks (2017). arXiv preprint arXiv:1702.08704
H. Seifi, M. Sepasian, Electric Power System Planning: Issues, Algorithms and Solutions (Springer Science and Business Media, Berlin, 2011)
W. Shi, Q. Ling, K. Yuan, G. Wu, W. Yin, On the linear convergence of the ADMM in decentralized consensus optimization. IEEE Trans. Signal Process. 62(7), 1750–1761 (2014)
W. Shi, Q. Ling, G. Wu, W. Yin, A proximal gradient algorithm for decentralized composite optimization. IEEE Trans. Signal Process. 63(22), 6013–6023 (2015)
W. Shi, Q. Ling, G. Wu, W. Yin, EXTRA: an exact first-order algorithm for decentralized consensus optimization. SIAM J. Optim. 25(2), 944–966 (2015)
Y. Sun, G. Scutari, D. Palomar, Distributed nonconvex multiagent optimization over time-varying networks (2016). arXiv preprint arXiv:1607.00249
H. Terelius, U. Topcu, R. Murray, Decentralized multi-agent optimization via dual decomposition, in 18th IFAC World Congress (2011)
T.M.D. Tran, A.Y. Kibangou, Distributed estimation of graph Laplacian eigenvalues by the alternating direction of multipliers method. IFAC Proc. Vol. 47(3), 5526–5531 (2014)
J. Tsitsiklis, D. Bertsekas, M. Athans, Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Trans. Autom. Control 31(9), 803–812 (1986)
C.A. Uribe, S. Lee, A. Gasnikov, A. Nedić, Optimal algorithms for distributed optimization (2017). arXiv preprint arXiv:1712.00232
D. Varagnolo, F. Zanella, A. Cenedese, G. Pillonetto, L. Schenato, Newton-Raphson consensus for distributed convex optimization. IEEE Trans. Autom. Control 61(4), 994–1009 (2016)
M. Wang, D.P. Bertsekas, Incremental constraint projection-proximal methods for non-smooth convex optimization. Technical report, Laboratory for Information and Decision Systems Report LIDS-P-2907, MIT (2013)
E. Wei, A. Ozdaglar, On the O(1∕k) convergence of asynchronous distributed alternating direction method of multipliers (2013). arXiv preprint arXiv:1307.8254
T. Wu, K. Yuan, Q. Ling, W. Yin, A.H. Sayed, Decentralized consensus optimization with asynchrony and delays, in 50th Asilomar Conference on Signals, Systems and Computers (2016), pp. 992–996
C. Xi, U. Khan, DEXTRA: a fast algorithm for optimization over directed graphs. IEEE Trans. Autom. Control PP(99), 1–1 (2017)
L. Xiao, S. Boyd, S. Lall, Distributed average consensus with time-varying Metropolis weights (2006). Available: http://www.stanford.edu/boyd/papers/avg_metropolis
L. Xiao, S. Boyd, S. Kim, Distributed average consensus with least-mean-square deviation. J. Parallel Distrib. Comput. 67(1), 33–46 (2007)
J. Xu, Augmented distributed optimization for networked systems. Ph.D. thesis, Nanyang Technological University, 2016
J. Xu, S. Zhu, Y. Soh, L. Xie, Augmented distributed gradient methods for multi-agent optimization under uncoordinated constant stepsizes, in Proceedings of the 54th IEEE Conference on Decision and Control (CDC) (2015), pp. 2055–2060
T. Yang, J. Lu, D. Wu, J. Wu, G. Shi, Z. Meng, K.H. Johansson, A distributed algorithm for economic dispatch over time-varying directed networks with delays. IEEE Trans. Ind. Electron. 64(6), 5095–5106 (2017)
K. Yuan, Q. Ling, W. Yin, On the convergence of decentralized gradient descent (2013). arXiv preprint arXiv:1310.7063
F. Zanella, D. Varagnolo, A. Cenedese, G. Pillonetto, L. Schenato, Multidimensional Newton-Raphson consensus for distributed convex optimization, in American Control Conference (ACC), 2012 (IEEE, Piscataway, 2012), pp. 1079–1084
J. Zeng, W. Yin, ExtraPush for convex smooth decentralized optimization over directed networks. J. Comput. Math. 35(4), 381–394 (2017)
Y. Zhang, G. Giannakis, Efficient decentralized economic dispatch for microgrids with wind power integration, in 6th Annual IEEE Green Technologies Conference (GreenTech) (2014), pp. 7–12
M. Zhu, S. Martinez, Discrete-time dynamic average consensus. Automatica 46(2), 322–329 (2010)
M. Zhu, S. Martinez, On distributed convex optimization under inequality and equality constraints. IEEE Trans. Autom. Control 57(1), 151–164 (2012)
Acknowledgements
The work of A.N. and A.O. was supported by the Office of Naval Research grant N000014-16-1-2245. The work of A.O. was also supported by the NSF award CMMI-1463262 and AFOSR award FA-95501510394.
© 2018 Springer Nature Switzerland AG
Cite this chapter
Nedić, A., Olshevsky, A., Shi, W. (2018). Decentralized Consensus Optimization and Resource Allocation. In: Giselsson, P., Rantzer, A. (eds) Large-Scale and Distributed Optimization. Lecture Notes in Mathematics, vol 2227. Springer, Cham. https://doi.org/10.1007/978-3-319-97478-1_10
Print ISBN: 978-3-319-97477-4
Online ISBN: 978-3-319-97478-1