
Random Block Coordinate Descent Methods for Linearly Constrained Optimization over Networks

  • Ion Necoara
  • Yurii Nesterov
  • François Glineur

Abstract

In this paper we develop random block coordinate descent methods for solving large-scale linearly constrained convex optimization problems over networks. Because the constraints couple the variables, we devise an algorithm that, at each iteration, updates in parallel at least two randomly chosen components of the solution, selected according to a given probability distribution. These computations can be performed in a distributed fashion, following the structure of the network. The per-iteration cost of the proposed methods is usually much lower than that of the full gradient method whenever the number of nodes in the network far exceeds the number of updated components. For smooth convex problems, we prove that these methods exhibit a sublinear worst-case convergence rate in the expected objective value; moreover, this rate depends linearly on the number of updated components. For smooth strongly convex problems, we prove linear convergence. We also study how to choose the probabilities so that the randomized algorithms converge as fast as possible, which leads us to solving a sparse semidefinite program. We then describe several applications that fit our framework, in particular the convex feasibility problem. Finally, numerical experiments illustrate the behaviour of our methods, showing in particular that updating more than two components in parallel accelerates convergence.
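
To make the basic update concrete, below is a minimal Python sketch of the pairwise variant on the simplest instance of this problem class: minimizing a smooth convex function subject to the single coupling constraint 1ᵀx = b. Moving along the direction e_i − e_j lies in the null space of the constraint, so feasibility is preserved, and the step length exactly minimizes the standard quadratic upper bound built from the coordinate Lipschitz constants L_i and L_j. All names are illustrative assumptions, not the authors' implementation, and the uniform pair sampling shown is just one admissible probability distribution.

```python
import numpy as np

def random_pair_cd(grad, lips, x0, n_iters=20000, seed=0):
    """Random 2-coordinate descent for min f(x) s.t. sum(x) = b.

    Hypothetical sketch: each iteration samples a pair (i, j)
    uniformly and moves along e_i - e_j, a direction in the null
    space of the coupling constraint, so feasibility is preserved.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = x.size
    for _ in range(n_iters):
        i, j = rng.choice(n, size=2, replace=False)
        # Only components i and j of the gradient are needed,
        # which is what makes distributed updates cheap; the full
        # gradient is computed here purely for brevity.
        g = grad(x)
        # Exact minimizer of the quadratic upper bound of f along
        # t * (e_i - e_j), using the coordinate Lipschitz constants.
        t = -(g[i] - g[j]) / (lips[i] + lips[j])
        x[i] += t
        x[j] -= t
    return x

# Toy instance: project c onto the hyperplane sum(x) = b by
# minimizing f(x) = 0.5 * ||x - c||^2, for which L_i = 1.
n, b = 100, 5.0
c = np.random.default_rng(1).normal(size=n)
x = random_pair_cd(lambda x: x - c, np.ones(n), np.full(n, b / n))
print(np.isclose(x.sum(), b))  # constraint maintained throughout
```

Updating more than two randomly chosen components per iteration follows the same pattern: any direction in the null space of the constraint matrix keeps the iterate feasible, which is what permits the parallel, distributed updates described above.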

Keywords

Convex optimization over networks · Linear coupled constraints · Random coordinate descent · Distributed computations · Convergence analysis

Mathematics Subject Classification

90C06 · 90C25 · 90C35

Acknowledgements

This research received funding from UEFISCDI Romania, PNII-RU-TE, project MoCOBiDS, no. 176/01.10.2015. It presents research results of the Belgian Network DYSCO (Dynamical Systems, Control, and Optimization), funded by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office, and of the Concerted Research Action (ARC) programme supported by the Federation Wallonia-Brussels (contract ARC 14/19-060). Support from two WBI-Romanian Academy grants is also acknowledged. Scientific responsibility rests with the authors.


Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Automatic Control and Systems Engineering Department, University Politehnica Bucharest, Bucharest, Romania
  2. Center for Operations Research and Econometrics, ICTEAM Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium
