Journal of Optimization Theory and Applications, Volume 147, Issue 3, pp 516–545

Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization

Article

Abstract

We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set.
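
As an illustration of the iteration just described, the sketch below shows one synchronous round in which every agent averages the iterates it received, takes a step along a noisy subgradient of its own objective, and projects back onto the common constraint set. It is a minimal sketch under assumed interfaces: the weight matrix W, the noisy subgradient oracles, and the projection routine are placeholders supplied by the user, and none of the names are taken from the paper.

```python
import numpy as np

def distributed_step(x, W, noisy_subgrads, project, alpha):
    """One synchronous round of a distributed stochastic subgradient
    projection iteration (illustrative sketch only).

    x              : (m, n) array; row i is agent i's current iterate
    W              : (m, m) row-stochastic weight matrix; W[i, j] is the
                     weight agent i places on agent j's iterate
    noisy_subgrads : list of m callables; noisy_subgrads[i](v) returns a
                     subgradient of agent i's objective at v, corrupted
                     by a stochastic error
    project        : callable computing the Euclidean projection onto the
                     common constraint set
    alpha          : stepsize used in this round
    """
    # Step 1: each agent forms a weighted average of its own and its
    # neighbors' iterates.
    v = W @ x
    # Step 2: each agent takes a noisy subgradient step on its own
    # objective and projects back onto the constraint set.
    x_next = np.empty_like(x)
    for i in range(x.shape[0]):
        g = noisy_subgrads[i](v[i])
        x_next[i] = project(v[i] - alpha * g)
    return x_next
```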

The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
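
For a concrete toy illustration of the diminishing-stepsize regime, the snippet below reuses the distributed_step sketch above on a small scalar problem with zero-mean subgradient noise and stepsize α_k = 1/(k+1), a typical diminishing choice in this literature. The problem data, noise model, and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy problem: 5 agents, f_i(x) = 0.5 * (x - c_i)^2 on the set [-1, 1];
# the sum of the f_i is minimized at the projection of mean(c) onto [-1, 1].
rng = np.random.default_rng(0)
c = np.array([-2.0, -0.5, 0.3, 1.0, 2.5])
m = len(c)
W = np.full((m, m), 1.0 / m)               # complete graph, equal weights
project = lambda v: np.clip(v, -1.0, 1.0)  # Euclidean projection onto [-1, 1]
noisy_subgrads = [                         # exact gradient plus zero-mean noise
    (lambda v, ci=ci: (v - ci) + 0.1 * rng.standard_normal(v.shape))
    for ci in c
]

x = rng.uniform(-1.0, 1.0, size=(m, 1))    # random initial iterates
for k in range(5000):
    alpha = 1.0 / (k + 1)                  # diminishing stepsize
    x = distributed_step(x, W, noisy_subgrads, project, alpha)

print(x.ravel())  # agents' iterates cluster near the common optimum (~0.26)
```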

Keywords

Distributed algorithm · Convex optimization · Subgradient methods · Stochastic approximation

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Electrical and Computer Engineering Dept., University of Illinois at Urbana-Champaign, Urbana, USA
  2. Industrial and Enterprise Systems Engineering Dept., University of Illinois at Urbana-Champaign, Urbana, USA
