
Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme

Published in: Control Theory and Technology

Abstract

In this paper, we consider a distributed convex optimization problem for a multi-agent system, where the global objective function is the sum of the agents' individual objective functions. To solve this problem, we propose a distributed stochastic sub-gradient algorithm with a random sleep scheme, in which each agent independently and randomly decides at each iteration whether to query the sub-gradient of its local objective function. The algorithm not only generalizes distributed algorithms with variable working nodes and multi-step consensus-based algorithms, but also extends some existing randomized convex set intersection results. We investigate the convergence properties of the algorithm under two types of stepsizes: a randomized diminishing stepsize that is heterogeneous and computed by each agent individually, and a fixed stepsize that is homogeneous across agents. Under the randomized stepsize, we prove that the agents' estimates reach consensus almost surely and in mean, and that the consensus point is the optimal solution with probability 1. Under the fixed homogeneous stepsize, we analyze the error bound of the algorithm and show how the error depends on the stepsize and the update rates.



Author information

Correspondence to Yiguang Hong.

Additional information

This work was supported by the Beijing Natural Science Foundation (No. 4152057), the National Natural Science Foundation of China (No. 61333001), and the National Basic Research Program of China (973 Program) (No. 2014CB845301/2/3).

Peng YI is a Ph.D. candidate at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. He received his B.Sc. degree in Automation from the University of Science and Technology of China in 2011. His research interests cover multi-agent systems, distributed optimization, hybrid systems, and smart grid.

Yiguang HONG received his B.Sc. and M.Sc. degrees from Peking University, Beijing, China, and his Ph.D. degree from the Chinese Academy of Sciences (CAS), Beijing. He is currently a professor at the Academy of Mathematics and Systems Science, CAS, where he serves as the Director of the Key Lab of Systems and Control, CAS, and the Director of the Information Technology Division, National Center for Mathematics and Interdisciplinary Sciences, CAS. His research interests include nonlinear dynamics and control, multi-agent systems, distributed optimization, social networks, and software reliability.


About this article


Cite this article

Yi, P., Hong, Y. Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme. Control Theory Technol. 13, 333–347 (2015). https://doi.org/10.1007/s11768-015-5100-8

