Computational Optimization and Applications

Volume 72, Issue 3, pp. 707–726

A random block-coordinate Douglas–Rachford splitting method with low computational complexity for binary logistic regression

  • Luis M. Briceño-Arias
  • Giovanni Chierchia
  • Emilie Chouzenoux
  • Jean-Christophe Pesquet


In this paper, we propose a new optimization algorithm for sparse logistic regression, based on a stochastic version of the Douglas–Rachford splitting method. Our algorithm performs both function and variable splitting: it sweeps the training set by randomly selecting a mini-batch of data at each iteration, and it updates the variables in a block-coordinate manner. Our approach leverages the proximity operator of the logistic loss, which can be expressed in closed form using the generalized Lambert W function. Experiments carried out on standard datasets demonstrate the efficiency of our approach with respect to stochastic gradient-like methods.
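The computational primitive highlighted in the abstract is the proximity operator of the logistic loss, i.e. the solution of min_x γ log(1 + e^x) + (1/2)(x − y)², whose optimality condition is x + γσ(x) = y with σ the sigmoid. The paper evaluates this root in closed form via the generalized Lambert W function; the sketch below is only an illustrative numerical stand-in that solves the same scalar equation with Newton's method (the function names are ours, not the authors').

```python
import math


def sigmoid(x: float) -> float:
    """Numerically stable logistic sigmoid."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)


def prox_logistic(y: float, gamma: float = 1.0,
                  tol: float = 1e-12, max_iter: int = 100) -> float:
    """Proximity operator of x -> gamma * log(1 + exp(x)).

    Solves the optimality condition x + gamma * sigmoid(x) = y by
    Newton's method. (The paper instead expresses this root in closed
    form with the generalized Lambert W function.)
    """
    x = y  # warm start; the solution lies in [y - gamma, y]
    for _ in range(max_iter):
        g = x + gamma * sigmoid(x) - y  # monotone increasing in x
        if abs(g) < tol:
            break
        # derivative is >= 1, so the Newton step is well defined
        dg = 1.0 + gamma * sigmoid(x) * (1.0 - sigmoid(x))
        x -= g / dg
    return x
```

Since the left-hand side x + γσ(x) is strictly increasing with derivative at least 1, the root is unique and the iteration converges quickly from the warm start x = y.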


Keywords: Proximity operator · Douglas–Rachford splitting · Block-coordinate descent · Logistic regression




Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Departamento de Matemática, Universidad Técnica Federico Santa María, San Joaquín, Chile
  2. LIGM, CNRS UMR 8049, ESIEE Paris, Université Paris-Est (UPEM), Noisy-le-Grand, France
  3. Center for Visual Computing, INRIA Saclay, CentraleSupélec, University Paris-Saclay, Gif-sur-Yvette, France