
Gradient-free method for nonsmooth distributed optimization

Published in the Journal of Global Optimization.

Abstract

In this paper, we consider a distributed nonsmooth optimization problem over a multi-agent computational network. We first extend Nesterov's (centralized) random gradient-free algorithm and its Gaussian smoothing technique to the distributed case. We then prove convergence of the resulting algorithm and give an explicit convergence rate in terms of the network size and topology. The proposed method is gradient-free, requiring only cost-function values, which may make it preferable in practical engineering settings where (sub)gradients are unavailable or expensive to compute. In theory, this comes at a price: the convergence rate may be worse by a factor of up to \(d\) (the dimension of each agent's decision variable) than that of distributed subgradient-based methods. Our numerical simulations show, however, that for some nonsmooth problems our method can even outperform subgradient-based methods, possibly because of the slow convergence those methods exhibit when only subgradients are available.
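To make the mechanism concrete, the following is a minimal Python sketch, not the authors' implementation, of the kind of update the abstract describes: each agent averages its iterate with its neighbors' via a doubly stochastic weight matrix, then steps along Nesterov's random gradient-free oracle, which queries only cost-function values. The weight matrix W, the smoothing parameter mu, the step-size schedule, and the toy local costs are all illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

def gf_oracle(f, x, mu, rng):
    # Nesterov-style random gradient-free oracle: a finite difference of f
    # along a random Gaussian direction u estimates the gradient of the
    # smoothed surrogate f_mu(x) = E_u[f(x + mu*u)], which is differentiable
    # even when f itself is nonsmooth.
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def distributed_gf_step(X, costs, W, mu, alpha, rng):
    # One round: each agent first averages its neighbors' iterates with the
    # doubly stochastic weights W (consensus step), then takes a step along
    # its local gradient-free oracle (local descent step).
    Y = W @ X
    G = np.stack([gf_oracle(f, y, mu, rng) for f, y in zip(costs, Y)])
    return Y - alpha * G

# Toy demo: 4 agents on a ring jointly minimize sum_i |x - b_i| (nonsmooth);
# any point in the median interval [1, 2] is optimal.
rng = np.random.default_rng(0)
b = [0.0, 1.0, 2.0, 3.0]
costs = [lambda x, bi=bi: float(np.abs(x - bi).sum()) for bi in b]
I = np.eye(4)
W = 0.5 * I + 0.25 * (np.roll(I, 1, axis=0) + np.roll(I, -1, axis=0))
X = rng.standard_normal((4, 1))  # each row is one agent's iterate
for k in range(1, 5001):
    X = distributed_gf_step(X, costs, W, mu=1e-3, alpha=0.5 / np.sqrt(k), rng=rng)
print(X.ravel())  # agents' iterates approach the optimal interval [1, 2]
```

Because each round needs only two function evaluations per agent, the scheme applies even when subgradients of the local costs are unavailable; the smoothing parameter mu trades off the bias of the oracle against its variance.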



Author information


Corresponding author

Correspondence to Zhiyou Wu.

Additional information

This research was partially supported by the Natural Science Foundation of Chongqing (Grants cstc2013jjB00001, cstc2011jjA00010, cstc2013jjB0149, and cstc2013jcyjA1338), by the Chongqing Municipal Education Commission (Grants KJ120616 and KJ090802), by a Postgraduate Scholarship of Federation University Australia, by NSFC Grant 11001288, and by the Key Project of the Chinese Ministry of Education (Grant 210179).


Cite this article

Li, J., Wu, C., Wu, Z. et al. Gradient-free method for nonsmooth distributed optimization. J Glob Optim 61, 325–340 (2015). https://doi.org/10.1007/s10898-014-0174-2

