
A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints

Computational Optimization and Applications

Abstract

In this paper we propose a variant of the random coordinate descent method for solving linearly constrained convex optimization problems with composite objective functions. If the smooth part of the objective function has a Lipschitz continuous gradient, we prove that our method obtains an ϵ-optimal solution in \(\mathcal{O}(n^{2}/\epsilon)\) iterations, where n is the number of blocks. For the class of problems with cheap coordinate derivatives, we show that the new method is faster than methods based on full-gradient information. We also analyze the rate of convergence in probability. For strongly convex functions, our method converges linearly. Extensive numerical tests confirm that, on very large problems, our method is much more efficient than methods based on full-gradient information.
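To make the per-iteration mechanics concrete, the following is a minimal sketch of the kind of 2-coordinate update such a method performs; it is not the authors' exact algorithm. It assumes a single coupling constraint sum(x) = b, a smooth quadratic objective f(x) = ½xᵀQx + qᵀx, and no nonsmooth term. For a randomly chosen pair (i, j), moving along the direction e_i − e_j leaves sum(x) unchanged, so feasibility is preserved at every step.

```python
import numpy as np

def rcd_pair_step(x, Q, q, rng):
    """One sketch iteration: pick a random pair (i, j) and minimize the
    quadratic f(x) = 0.5 x^T Q x + q^T x exactly along d = e_i - e_j,
    a direction that keeps sum(x) constant."""
    n = x.size
    i, j = rng.choice(n, size=2, replace=False)
    # Directional derivative of f along e_i - e_j; in a genuine
    # coordinate method only these two partial derivatives are computed,
    # not the full gradient (formed here only for clarity).
    grad = Q @ x + q
    g = grad[i] - grad[j]
    # Exact curvature of the quadratic along the direction.
    L = Q[i, i] + Q[j, j] - 2.0 * Q[i, j]
    if L <= 0.0:
        return x  # degenerate direction; skip this pair
    t = -g / L    # exact line-search step along e_i - e_j
    x[i] += t
    x[j] -= t     # the pairwise update preserves sum(x) = b
    return x

# Hypothetical usage on a random strongly convex quadratic.
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))
Q = A.T @ A + np.eye(n)       # strongly convex Hessian
q = rng.standard_normal(n)
x = np.full(n, 1.0 / n)       # feasible start: sum(x) = 1
for _ in range(20000):
    x = rcd_pair_step(x, Q, q, rng)
print("constraint residual:", abs(x.sum() - 1.0))
```

Since each step touches only two coordinates and needs only two coordinate derivatives, its cost is independent of n; this is the source of the advantage over full-gradient methods claimed in the abstract when coordinate derivatives are cheap.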




Acknowledgements

The research leading to these results has received funding from the European Union (FP7/2007–2013) under grant agreement no. 248940; CNCS (project TE-231, 19/11.08.2010); ANCS (project PN II, 80EU/2010); and POSDRU/89/1.5/S/62557.

The authors thank Y. Nesterov and F. Glineur for inspiring discussions and the two anonymous reviewers for their valuable comments.

Author information


Corresponding author

Correspondence to Ion Necoara.


About this article

Cite this article

Necoara, I., Patrascu, A. A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints. Comput Optim Appl 57, 307–337 (2014). https://doi.org/10.1007/s10589-013-9598-8

