Mathematical Programming, Volume 159, Issue 1–2, pp 137–164

A dual method for minimizing a nonsmooth objective over one smooth inequality constraint

  • Ron Shefi
  • Marc Teboulle
Full Length Paper, Series A


We consider the class of nondifferentiable convex problems in which a nonsmooth convex objective is minimized over a single smooth inequality constraint. Exploiting the smoothness of the feasible set and using duality, we introduce a simple first-order algorithm proven to converge globally to an optimal solution with an \(\mathcal {O}(1/\varepsilon )\) efficiency estimate. The performance of the algorithm is demonstrated by solving large instances of the convex sparse recovery problem.
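To make the problem class concrete: in sparse recovery one minimizes the nonsmooth \(l_1\)-norm subject to a single smooth quadratic constraint, \(\min_x \|x\|_1\) s.t. \(\|Ax-b\|^2 \le \varepsilon\). The paper's dual method is not reproduced here; as an illustrative sketch of this problem family, the closely related penalized (LASSO) form can be solved with a basic proximal-gradient (ISTA) loop, where the step size, penalty weight, and problem sizes below are arbitrary choices for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    """Proximal gradient (ISTA) on min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    a penalized relative of the constrained sparse recovery problem.
    This is a generic baseline, NOT the paper's dual algorithm."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth quadratic part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Small synthetic instance: recover a 3-sparse signal from 40 measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.linalg.norm(x_hat - x_true))      # small recovery error
```

The \(l_1\) term promotes sparsity, so with enough random measurements the recovered vector concentrates on the true support, up to the shrinkage bias induced by the penalty weight.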


Keywords: Nonsmooth convex minimization · First-order methods · Duality · Complexity/rate of convergence analysis · \(l_1\)-norm minimization · Sparse recovery



Copyright information

© Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society 2015

Authors and Affiliations

  1. School of Mathematical Sciences, Tel Aviv University, Tel Aviv, Israel
