Abstract
In this paper, we consider a distributed resource allocation problem: minimizing a global convex function, formed as the sum of local convex functions, subject to coupling constraints. Based on neighbor-to-neighbor communication and stochastic gradients, we design a distributed stochastic mirror descent algorithm for this problem. We establish sublinear convergence of the algorithm to an optimal solution when the second moments of the gradient noises are summable. A numerical example illustrates the effectiveness of the proposed algorithm.
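To make the problem setting concrete, the following is a minimal numerical sketch, not the paper's exact algorithm. It uses the Euclidean special case of mirror descent (plain stochastic gradient steps) on the dual of a resource allocation problem with quadratic local costs and a single coupling constraint; the cost coefficients, mixing weights, noise level, and step-size rule are all illustrative assumptions.

```python
import random

# Hypothetical problem data (illustrative only): quadratic local costs
# f_i(x) = a_i * x^2 + b_i * x and local demands d_i, coupled by the
# constraint sum_i x_i = sum_i d_i.
a = [1.0, 2.0, 0.5]
b = [0.0, 1.0, -1.0]
d = [1.0, 1.0, 1.0]
n = 3

# Doubly stochastic mixing matrix for a fully connected 3-agent network.
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]

def x_opt(i, lam):
    """Local minimizer of f_i(x) - lam * x (closed form for quadratics)."""
    return (lam - b[i]) / (2.0 * a[i])

def run(T=5000, seed=0):
    rng = random.Random(seed)
    lam = [0.0] * n  # each agent's local copy of the dual variable
    for k in range(1, T + 1):
        alpha = 1.0 / (k + 10.0)  # diminishing step size
        # Consensus step: average the dual estimates of the neighbors.
        mixed = [sum(W[i][j] * lam[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            # Noisy local dual gradient: d_i - x_i(lam_i) plus zero-mean noise,
            # mimicking a stochastic gradient oracle.
            g = d[i] - x_opt(i, mixed[i]) + rng.gauss(0.0, 0.01)
            lam[i] = mixed[i] + alpha * g
    return lam

lam = run()
alloc = [x_opt(i, lam[i]) for i in range(n)]
```

Running `run()` drives the local dual variables toward a common value at which the total allocation matches the total demand, which is the qualitative behavior the paper's mirror descent algorithm achieves with general Bregman divergences.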
Acknowledgements
This work was supported by the National Key Research and Development Program of China (No. 2016YFB0901900), the National Natural Science Foundation of China (No. 61733018) and the China Special Postdoctoral Science Foundation Funded Project (No. Y990075G21).
Appendix
Proof of Lemma 4
According to Lemma 2, we have
and
Applying (A1), (A2) and (A3) to (12), we get
where
Summing \(\theta _{k}\) over \(k=1,2,\ldots , T\), we obtain
where (b) follows from \({\varvec{y}}-\varvec{y^{k}}={\varvec{y}}-\varvec{y^{k-1}}+\varvec{y^{k-1}}-\varvec{y^{k}}\), (c) follows from \({\varvec{x}}^{-1}={\varvec{x}}^{0}\) and \({\varvec{z}}^{-1}={\varvec{z}}^{0}\), (d) and (f) follow from \(b\langle u,v \rangle \leqslant \dfrac{a}{2}\Vert v\Vert ^{2}+\dfrac{b^{2}\Vert u\Vert ^{2}}{2a}\) for all \(a>0\) and from (14), and (e) follows from \(\Vert {\varvec{y}}-{\varvec{y}}^{0}\Vert ^{2}-\Vert {\varvec{y}}-{\varvec{y}}^{T}\Vert ^{2}=\Vert {\varvec{y}}^{0}\Vert ^{2}-\Vert {\varvec{y}}^{T}\Vert ^{2}-2\langle {\varvec{y}}, {\varvec{y}}^{0}-{\varvec{y}}^{T}\rangle\). Then (15) follows from (A4) and (A6).
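For completeness, the weighted inequality used in steps (d) and (f) is an instance of Young's inequality and follows from expanding a square: for any \(a>0\),
\[
0 \leqslant \Big\Vert \sqrt{a}\, v - \frac{b}{\sqrt{a}}\, u \Big\Vert^{2}
= a \Vert v \Vert^{2} - 2 b \langle u, v \rangle + \frac{b^{2}}{a}\Vert u \Vert^{2},
\]
so rearranging and dividing by two gives \(b\langle u,v\rangle \leqslant \dfrac{a}{2}\Vert v\Vert^{2} + \dfrac{b^{2}\Vert u\Vert^{2}}{2a}\).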
From (A6) (d), Assumption 1 and the fact that \(\textstyle \sum \limits _{k=1}^{T}\mathrm {E}Q(\varvec{w^{k}},\varvec{w^{*}})\geqslant 0\), if we fix \({\varvec{w}}={\varvec{w}}^{*}\), then
and
from which (16) follows.\(\square\)
Cite this article
Wang, Y., Tu, Z. & Qin, H. Distributed stochastic mirror descent algorithm for resource allocation problem. Control Theory Technol. 18, 339–347 (2020). https://doi.org/10.1007/s11768-020-00018-8