Abstract
This chapter is concerned with the design of distributed discrete-time algorithms that cooperatively solve an additive cost optimization problem in multi-agent networks. The striking feature of our distributed algorithms is that each agent uses only the sign of the relative state between itself and its neighbors, which substantially differentiates them from existing algorithms in the literature. Moreover, the algorithms do not require the interaction matrix to be doubly stochastic. We first interpret the proposed algorithms in terms of the penalty method from optimization theory and then perform a non-asymptotic analysis of their convergence over static network graphs. Compared with the celebrated distributed subgradient algorithms, which use exact relative state information, the convergence speed is essentially unaffected by this loss of information. We also extend our results to deterministically and randomly time-varying graphs. Finally, we validate the theoretical results by simulations.
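To make the idea concrete, the following is a minimal simulation sketch of a sign-based penalized update of the form \(x_i^{k+1}=x_i^k-\varepsilon^k\big(\nabla f_i(x_i^k)+\lambda \sum_{j} \mathrm{sgn}(x_i^k-x_j^k)\big)\). The graph, local costs, penalty weight, and diminishing step size below are illustrative assumptions, not the chapter's exact parameters.

```python
import numpy as np

# Ring graph of n agents; agent i privately holds f_i(x) = 0.5*(x - b[i])^2,
# so the minimizer of sum_i f_i is mean(b).
n = 10
rng = np.random.default_rng(0)
b = rng.uniform(-1, 1, n)
neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]

x = np.zeros(n)
lam = 10.0                        # penalty weight (assumed large enough for exact penalty)
for k in range(1, 20001):
    step = 1.0 / (k + 100)        # diminishing step size (an assumption)
    g = x - b                     # local gradients f_i'(x_i)
    s = np.array([sum(np.sign(x[i] - x[j]) for j in nbrs)
                  for i, nbrs in enumerate(neighbors)])
    x = x - step * (g + lam * s)  # only the sign of relative states is exchanged

print(np.max(np.abs(x - b.mean())))  # all agents end up near the optimum mean(b)
```

Note that each agent only ever evaluates `np.sign(x[i] - x[j])`, i.e., one bit of relative state per neighbor, and no doubly-stochastic weight matrix appears in the update.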
Acknowledgements
The authors would very much like to thank Professor Tamer Başar for the stimulating discussions on this topic. This work was supported by the National Natural Science Foundation of China under Grant No. 61722308, and the National Key Research and Development Program of China under Grant No. 2017YFC0805310.
Appendix: Proof of Theorem 4
We first show that \(\widetilde{d}(\rho )<\infty \). Since \(\widetilde{f}_\lambda (x)\) is convex, \(\widetilde{\mathscr {X}}(\rho )\) is convex and \(\mathscr {X}^\star \subseteq \widetilde{\mathscr {X}}(\rho )\) for any \(\rho >0\). One can verify that \(\widetilde{\mathscr {X}}(\rho )-\mathscr {X}^\star \) is bounded. If \(\widetilde{\mathscr {X}}(\rho )-\mathscr {X}^\star \) is empty, then \(\widetilde{d}(\rho )=0\); otherwise, \(0\le \widetilde{d}(\rho )=\max _{x\in \widetilde{\mathscr {X}}(\rho )}\min _{x^\star \in \mathscr {X}^\star }|x-x^\star |=\max _{x\in \widetilde{\mathscr {X}}(\rho )-\mathscr {X}^\star }\min _{x^\star \in \mathscr {X}^\star }|x-x^\star |<\infty \).
Then, we claim the following.
Claim 1: If \(\Vert \mathbf {x}^{k}-x^\star {\mathbbm {1}}\Vert >c_\rho \) for all \(x^\star \in \mathscr {X}^\star \), then \(\widetilde{f}_\lambda (\mathbf {x}^k)-f^\star >{\rho c_a^2}/{2}\).
Recall from (15) that
This implies that if either \(f(\bar{x}^k)-f^\star >{\rho c_a^2}/{2}\) or \(v(\mathbf {x}^k)>\frac{\rho c_a^2}{2\lambda a_\text {min}^{(l)}- cn}\), then \(\widetilde{f}_\lambda (\mathbf {x}^k)-f^\star >{\rho c_a^2}/{2}\). Let
Since
we obtain that \(v(\mathbf {x}^k)>c_\rho /(2\sqrt{n})\ge \frac{\rho c_a^2}{2\lambda a_\text {min}^{(l)}- cn}\) or \(|\bar{x}^k-x^\star |>c_\rho /(2\sqrt{n})\ge \widetilde{d}(\rho )\). For the former case we have \(\widetilde{f}_\lambda (\mathbf {x}^k)-f^\star >{\rho c_a^2}/{2}\). For the latter case, \(\bar{x}^k\notin \widetilde{\mathscr {X}}(\rho )\), which by the definition of \(\widetilde{\mathscr {X}}(\rho )\) implies \(\widetilde{f}_\lambda (\mathbf {x}^k)-f^\star >{\rho c_a^2}/{2}\).
Claim 2: There is \(x_0^\star \in \mathscr {X}^\star \) such that \(\liminf _{k\rightarrow \infty } \Vert \mathbf {x}^{k}-x_0^\star {\mathbbm {1}}\Vert \le c_\rho \).
Otherwise, there exists \(k_0>0\) such that
By Claim 1, there exists some \(\varepsilon >0\) such that \(\widetilde{f}_\lambda (\mathbf {x}^k)-f^\star >{\rho c_a^2}/{2}+\varepsilon \) for all \(k>k_0\). Together with (18), it yields that
Summing this relation implies that for all \(k>k_0\),
which clearly cannot hold for sufficiently large \(k\). Thus, we have verified Claim 2.
Claim 3: There is \(x^\star \in \mathscr {X}^\star \) such that \(\limsup _{k\rightarrow \infty } \Vert \mathbf {x}^{k}-x^\star {\mathbbm {1}}\Vert \le c_\rho +\rho c_a\).
Otherwise, for any \(x^\star \in \mathscr {X}^\star \), there must exist a subsequence \(\{\mathbf {x}^k\}_{k\in \mathscr {K}}\) (which depends on \(x^\star \)) such that for all \(k\in \mathscr {K}\),
Notice that the penalty function \(h(\mathbf {x})\) can be represented as
where \(a_e\) is the weight of edge e. The subdifferential of \(h(\mathbf {x})\) is then given by
where \(A_e=\text {diag}\{a_1,...,a_m\}\). Then, it follows from (40) that
where the second inequality follows from
Thus, we obtain that for all \(k\in \mathscr {K}\),
By Claim 2, there must exist some \(k_1\in \mathscr {K}\) with \(k_1>k_0\) such that
Together with (41), it implies that
Hence, it follows from Claim 1 that \(\widetilde{f}_\lambda (\mathbf {x}^{k_1-1})-f^\star >{\rho c_a^2}/{2}\), which together with (38) and (42) yields that
Setting \(x^\star =x_0^\star \) in (39), we have \(\Vert \mathbf {x}^{k_1}-x_0^\star {\mathbbm {1}}\Vert >c_\rho +\rho c_a.\) This contradicts (43), and hence verifies Claim 3.
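For reference, the edge-based representation of the penalty function \(h(\mathbf {x})\) invoked above can be sketched in incidence-matrix form; this is a reconstruction consistent with the definitions of \(a_e\) and \(A_e=\text {diag}\{a_1,...,a_m\}\), not necessarily the chapter's exact display:

```latex
h(\mathbf{x}) \;=\; \sum_{e=(i,j)\in\mathscr{E}} a_e\,|x_i-x_j|
\;=\; \bigl\|A_e B^{\mathsf{T}}\mathbf{x}\bigr\|_1,
\qquad
\partial h(\mathbf{x}) \;=\; B A_e\,\mathrm{sgn}\bigl(A_e B^{\mathsf{T}}\mathbf{x}\bigr),
```

where \(B\in \mathbb {R}^{n\times m}\) is an incidence matrix of the graph and \(\mathrm{sgn}(\cdot )\) is applied elementwise (set-valued at zero), which is exactly where the sign of relative states enters the subgradient.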
In view of (19), the proof is completed. \(\square \)
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this chapter
Zhang, J., You, K. (2018). Distributed Optimization in Multi-agent Networks Using One-bit of Relative State Information. In: Başar, T. (eds) Uncertainty in Complex Networked Systems. Systems & Control: Foundations & Applications. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-04630-9_13
Print ISBN: 978-3-030-04629-3
Online ISBN: 978-3-030-04630-9