Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization

Published in: Journal of Optimization Theory and Applications

Abstract

We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set.

The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
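The per-agent update described above can be sketched in code. The quadratic objectives, the box constraint set, the ring network, the weights, and all parameter values below are hypothetical choices for illustration only, not taken from the paper: each agent mixes its neighbors' iterates with doubly stochastic weights, takes a subgradient step corrupted by zero-mean stochastic error, and projects back onto the constraint set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: minimize sum_i f_i(x), f_i(x) = ||x - c_i||^2,
# over the box X = [-1, 1]^d.  The targets c_i and the box are illustrative.
d, m = 2, 4                                   # dimension, number of agents
c = rng.uniform(-0.5, 0.5, (m, d))
x_star = np.clip(c.mean(axis=0), -1.0, 1.0)   # minimizer of the sum on the box


def project(x):
    """Euclidean projection onto the box [-1, 1]^d."""
    return np.clip(x, -1.0, 1.0)


# Doubly stochastic mixing weights on a ring: each agent averages
# its own iterate with those of its two neighbors.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = 0.25
    W[i, (i + 1) % m] = 0.25

x = rng.uniform(-1.0, 1.0, (m, d))            # one iterate per agent
for k in range(1, 5001):
    alpha = 1.0 / k                           # diminishing stepsize
    mixed = W @ x                             # combine received iterates
    for i in range(m):
        grad = 2.0 * (mixed[i] - c[i])        # (sub)gradient of f_i
        noise = 0.01 * rng.standard_normal(d) # zero-mean stochastic error
        x[i] = project(mixed[i] - alpha * (grad + noise))

consensus_gap = np.max(np.linalg.norm(x - x.mean(axis=0), axis=1))
error = np.linalg.norm(x.mean(axis=0) - x_star)
```

With zero-mean errors of bounded variance and the diminishing stepsize `1/k`, the agents' iterates cluster together (small `consensus_gap`) and approach the constrained optimum, consistent with the convergence regime the abstract describes.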



Author information

Corresponding author

Correspondence to A. Nedić.

Additional information

Communicated by P.M. Pardalos.

This work was supported by NSF CAREER Grant CMMI 07-42538.

About this article

Cite this article

Sundhar Ram, S., Nedić, A. & Veeravalli, V.V. Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization. J Optim Theory Appl 147, 516–545 (2010). https://doi.org/10.1007/s10957-010-9737-7
