
The 2-Coordinate Descent Method for Solving Double-Sided Simplex Constrained Minimization Problems

Journal of Optimization Theory and Applications

Abstract

This paper considers the problem of minimizing a continuously differentiable function with a Lipschitz continuous gradient subject to a single linear equality constraint and additional bound constraints on the decision variables. We introduce and analyze several variants of a 2-coordinate descent method: a block descent method that performs an optimization step with respect to only two variables at each iteration. Based on two new optimality measures, we establish convergence to stationary points for general nonconvex objective functions. In the convex case, when all the variables are lower bounded but not upper bounded, we show that the sequence of function values converges at a sublinear rate. Several illustrative numerical examples demonstrate the effectiveness of the method.
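To make the scheme concrete, the following is a minimal Python sketch of such a method under the stated assumptions: a known Lipschitz constant of the gradient and a feasible starting point. The greedy pair selection, the fixed step size, and all names below are illustrative assumptions, not the paper's exact variants.

```python
import numpy as np

def two_coordinate_descent(grad, x0, lo, hi, lip, iters=1000, tol=1e-10):
    """Sketch of a 2-coordinate descent method for
        min f(x)  s.t.  sum(x) = b,  lo <= x <= hi,
    where grad is the gradient of f and lip is a Lipschitz constant
    of grad. Each step changes two coordinates by opposite amounts,
    so the equality constraint is preserved exactly."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        # Restrict the pair to coordinates that can still move, then
        # shift mass from the largest partial derivative to the smallest.
        i = int(np.argmax(np.where(x > lo, g, -np.inf)))
        j = int(np.argmin(np.where(x < hi, g, np.inf)))
        if g[i] - g[j] <= tol:              # approximate stationarity
            break
        # Minimizer of the quadratic upper bound along d = e_j - e_i
        # (||d||^2 = 2), clipped so the iterate stays inside the box.
        t = (g[i] - g[j]) / (2.0 * lip)
        t = min(t, x[i] - lo[i], hi[j] - x[j])
        x[i] -= t
        x[j] += t
    return x

# Example: a convex quadratic over the unit simplex (lo = 0, hi = 1).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A.T @ A                                 # positive semidefinite
lip = float(np.linalg.eigvalsh(Q)[-1])      # largest eigenvalue of Q
x0 = np.full(5, 0.2)                        # feasible: sums to 1
sol = two_coordinate_descent(lambda v: Q @ v, x0,
                             np.zeros(5), np.ones(5), lip)
```

Because each step moves two coordinates by opposite amounts, the linear equality constraint holds at every iterate; only the bound constraints need to be enforced, which the clipping of the step size does.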



Acknowledgements

I would like to thank the three anonymous reviewers for their useful comments and additional references, which helped to improve the presentation of the paper. This work was partially supported by ISF grant #25312 and by BSF grant 2008100.

Author information

Correspondence to Amir Beck.

Additional information

Communicated by Gianni Di Pillo.


About this article

Cite this article

Beck, A. The 2-Coordinate Descent Method for Solving Double-Sided Simplex Constrained Minimization Problems. J Optim Theory Appl 162, 892–919 (2014). https://doi.org/10.1007/s10957-013-0491-5
