An infeasible-point subgradient method using adaptive approximate projections

Abstract

We propose a new subgradient method for the minimization of nonsmooth convex functions over a convex set. To speed up computations, we use adaptive approximate projections, which only require the iterates to lie within a certain distance of the exact projections (a distance that decreases in the course of the algorithm). In particular, the iterates in our method can be infeasible throughout the whole procedure. Nevertheless, we provide conditions that ensure convergence to an optimal feasible point under suitable assumptions. One convergence result deals with step size sequences that are fixed a priori. Two other results handle dynamic Polyak-type step sizes depending on a lower or upper estimate of the optimal objective function value, respectively. Additionally, we briefly sketch two applications: optimization with convex chance constraints, and finding the minimum 1-norm solution of an underdetermined linear system, an important problem in Compressed Sensing.
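
To make the approach concrete, below is a minimal numerical sketch, in Python with NumPy, of a subgradient iteration with an inexact projection and a Polyak-type step size, applied to the minimum 1-norm (Basis Pursuit) problem min ||x||_1 s.t. Ax = b mentioned above. Everything here is an illustration under simplifying assumptions: the names approx_project and isa_sketch, the Landweber-type inner loop standing in for the adaptive approximate projection, and the accuracy schedule 1/k are ours, not the paper's exact algorithm.

```python
import numpy as np

def approx_project(x, A, b, tol, max_inner=1000):
    """Inexact projection onto the affine set {z : A z = b}.

    For consistent systems, Landweber-type gradient steps on
    0.5*||A z - b||^2 converge to the exact projection of the starting
    point; we stop early once the residual (a computable proxy for the
    distance to the set) drops below tol. This inner loop is an
    illustrative stand-in for an adaptive approximate projection."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(max_inner):
        r = A @ x - b
        if np.linalg.norm(r) <= tol:
            break
        x = x - (A.T @ r) / L
    return x

def isa_sketch(A, b, f_lower, iters=500):
    """Sketch of an infeasible-point subgradient iteration for
        min ||x||_1  s.t.  A x = b,
    with a Polyak-type step driven by a lower estimate f_lower of the
    optimal value:  alpha_k = (f(x_k) - f_lower) / ||g_k||^2.
    Iterates may stay infeasible throughout; the projection accuracy
    1/k tightens as the method proceeds (an assumed schedule)."""
    x = A.T @ b                            # arbitrary, generally infeasible start
    best, best_val = None, np.inf          # best (nearly) feasible point seen
    for k in range(1, iters + 1):
        g = np.sign(x)                     # a subgradient of ||.||_1 at x
        num = max(np.linalg.norm(x, 1) - f_lower, 0.0)  # clip negative steps
        alpha = num / max(float(g @ g), 1e-12)
        x = approx_project(x - alpha * g, A, b, tol=1.0 / k)
        val = np.linalg.norm(x, 1)
        if np.linalg.norm(A @ x - b) <= 1e-6 and val < best_val:
            best, best_val = x.copy(), val
    return best, best_val
```

For example, with a random Gaussian A of size 50×200 and b = A @ x_sparse for some sparse x_sparse, calling isa_sketch(A, b, f_lower=0.0) uses the trivial lower bound 0 on the optimal 1-norm; the dynamic step size rules analyzed in the paper refine such estimates as the iteration proceeds.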


Acknowledgements

This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) within the project “Sparse Exact and Approximate Recovery” under grants LO 1436/3-1 and PF 709/1-1. D. Lorenz further acknowledges support from the DFG project “Sparsity and Compressed Sensing in Inverse Problems” under grant LO 1436/2-1. Moreover, we thank the anonymous referees for their numerous helpful comments, which greatly helped to improve this paper.


Corresponding author

Correspondence to Andreas M. Tillmann.


Cite this article

Lorenz, D.A., Pfetsch, M.E. & Tillmann, A.M. An infeasible-point subgradient method using adaptive approximate projections. Comput Optim Appl 57, 271–306 (2014). https://doi.org/10.1007/s10589-013-9602-3
