Abstract
We propose a new subgradient method for the minimization of nonsmooth convex functions over a convex set. To speed up computations, we use adaptive approximate projections, which only require the iterates to move to within a certain distance of the exact projections; this distance decreases in the course of the algorithm. In particular, the iterates in our method may be infeasible throughout the whole procedure. Nevertheless, we provide conditions which ensure convergence to an optimal feasible point under suitable assumptions. One convergence result deals with step size sequences that are fixed a priori. Two other results handle dynamic Polyak-type step sizes depending on a lower or upper estimate of the optimal objective function value, respectively. Additionally, we briefly sketch two applications: optimization with convex chance constraints, and finding the minimum ℓ1-norm solution to an underdetermined linear system, an important problem in Compressed Sensing.
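The scheme described above can be illustrated with a minimal sketch. The code below is not the paper's implementation; it is a hypothetical toy instance assuming a single linear equality constraint, a Polyak-type step size with known optimal value `f_opt`, and an "approximate projection" simulated by perturbing the exact projection within a decreasing accuracy `eps_k`:

```python
import numpy as np

# Toy instance (illustrative, not the paper's code):
# minimize ||x||_1 subject to a^T x = b, via subgradient steps combined with
# approximate projections that only need to land within eps_k of the exact
# projection, where eps_k decreases over the iterations.

a = np.array([1.0, 1.0])
b = 1.0
f_opt = 1.0  # known optimal value of ||x||_1 over {x : x1 + x2 = 1}

def approx_project(x, eps, rng):
    # exact projection onto the hyperplane {x : a^T x = b} ...
    p = x - (a @ x - b) / (a @ a) * a
    # ... perturbed within distance eps to mimic an inexact projection
    d = rng.standard_normal(x.shape)
    return p + eps * d / np.linalg.norm(d)

rng = np.random.default_rng(0)
x = np.array([3.0, -2.0])
for k in range(1, 2001):
    g = np.sign(x)                       # a subgradient of ||x||_1 at x
    g[g == 0] = 1.0                      # any value in [-1, 1] is valid at 0
    # Polyak-type step size, clamped since perturbed iterates may dip below f_opt
    alpha = max((np.linalg.norm(x, 1) - f_opt) / (g @ g), 0.0)
    eps_k = 1.0 / k**2                   # decreasing (summable) accuracy
    x = approx_project(x - alpha * g, eps_k, rng)

print("residual:", abs(a @ x - b), "objective:", np.linalg.norm(x, 1))
```

Note that the iterates are infeasible at every step (the perturbation leaves the hyperplane), yet the shrinking accuracy sequence drives both the constraint residual and the objective gap toward zero, mirroring the convergence behavior claimed in the abstract.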
Acknowledgements
This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) within the project "Sparse Exact and Approximate Recovery" under grants LO 1436/3-1 and PF 709/1-1. D. Lorenz further acknowledges support from the DFG project "Sparsity and Compressed Sensing in Inverse Problems" under grant LO 1436/2-1. Moreover, we thank the anonymous referees for their numerous helpful comments, which greatly helped to improve this paper.
Lorenz, D.A., Pfetsch, M.E. & Tillmann, A.M. An infeasible-point subgradient method using adaptive approximate projections. Comput Optim Appl 57, 271–306 (2014). https://doi.org/10.1007/s10589-013-9602-3