Computational Optimization and Applications, Volume 57, Issue 1, pp 1–25

A superlinearly convergent R-regularized Newton scheme for variational models with concave sparsity-promoting priors



A general class of variational models with concave priors is considered for computing certain sparse solutions. The nonsmoothness and non-Lipschitz continuity of the objective functions pose significant analytical as well as numerical challenges. For computing a stationary point of the underlying variational problem, a Newton-type scheme with provable convergence properties is proposed. The possible non-positive-definiteness of the generalized Hessian is handled by a tailored regularization technique, motivated both by reweighting and by the classical trust-region method. Numerical experiments demonstrate selected applications in image processing, support vector machines, and the optimal control of partial differential equations.
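The regularization idea can be illustrated on a smoothed finite-dimensional model problem. The sketch below is not the paper's R-regularized scheme; it applies a Newton iteration to min_x ½‖Ax−b‖² + λ Σ_i (x_i² + ε)^(p/2) with 0 < p < 1, a smoothed concave sparsity prior. The prior's Hessian splits into a positive reweighting term and a negative term coming from concavity; dropping the negative term leaves a positive definite system, in the spirit of the reweighting-motivated regularization described above. The function name and all parameter choices are illustrative.

```python
import numpy as np

def reg_newton_sparse(A, b, lam=0.1, p=0.5, eps=1e-4, tol=1e-8, maxit=200):
    """Regularized Newton iteration for
        min_x  0.5*||A x - b||^2 + lam * sum_i (x_i^2 + eps)^(p/2),
    a smoothed concave (0 < p < 1) sparsity-promoting prior.

    The exact Hessian of the prior is  diag(w_i + lam*p*(p-2)*x_i^2*s_i^(p/2-2)),
    whose second term is negative for p < 1 and can make the Newton system
    indefinite.  Dropping it keeps only the positive IRLS-type weights w_i,
    a reweighting-style regularization that guarantees a SPD system."""
    m, n = A.shape
    x = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(maxit):
        s = x**2 + eps
        w = lam * p * s**(p / 2 - 1)      # positive reweighting weights
        grad = AtA @ x - Atb + w * x      # gradient of the smoothed objective
        H = AtA + np.diag(w)              # SPD surrogate of the Hessian
        dx = np.linalg.solve(H, -grad)    # regularized Newton step
        x = x + dx
        if np.linalg.norm(dx) < tol * (1 + np.linalg.norm(x)):
            break
    return x
```

For A = I the iteration reduces to the classical IRLS fixed point x_i ← b_i/(1 + w_i), which shrinks small entries of b strongly (large weights) while leaving large entries nearly untouched, i.e. the concave prior promotes sparsity more aggressively than an ℓ1 penalty would.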


Keywords: Sparsity · Concave priors · Nonconvex minimization · Semismooth Newton method · Superlinear convergence



This research was supported by the Austrian Science Fund (FWF) through START project Y305 “Interfaces and Free Boundaries” and through SFB project F3204 “Mathematical Optimization and Applications in Biomedical Sciences”. The authors would like to thank T. Pock (TU Graz) for communication on the problem concerning the overcomplete dictionary. We also thank the referee for the pointer to a recent thesis work [28], which contains an extensive algorithmic study of nonconvex-regularization based sparse optimization.


References

  1.
  2. Attouch, H., Buttazzo, G., Michaille, G.: Variational Analysis in Sobolev and BV Spaces: Applications to PDEs and Optimization. SIAM, Philadelphia (2006)
  3. Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing. Springer, New York (2002)
  4. Bruckstein, A.M., Donoho, D.L., Elad, M.: From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51, 34–81 (2009)
  5. Burke, J.V., Lewis, A.S., Overton, M.L.: A robust gradient sampling algorithm for nonsmooth, nonconvex optimization. SIAM J. Optim. 15, 751–779 (2005)
  6. Cai, J.F., Osher, S., Shen, Z.: Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 8, 337–369 (2009)
  7. Candès, E.J., Tao, T.: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52, 5406–5425 (2006)
  8. Chan, T.F., Mulet, P.: On the convergence of the lagged diffusivity fixed point method in total variation image restoration. SIAM J. Numer. Anal. 36, 354–367 (1999)
  9. Chapelle, O.: Training a support vector machine in the primal. Neural Comput. 19, 1155–1178 (2007)
  10. Charbonnier, P., Blanc-Féraud, L., Aubert, G., Barlaud, M.: Deterministic edge-preserving regularization in computed imaging. IEEE Trans. Image Process. 6, 298–311 (1997)
  11. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008)
  12. Chen, X.: Smoothing methods for nonsmooth, nonconvex minimization. Math. Program., Ser. B 134, 71–99 (2012)
  13. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM Rev. 43, 129–159 (2001)
  14. Chen, X., Xu, F., Ye, Y.: Lower bound theory of nonzero entries in solutions of ℓ2-ℓp minimization. SIAM J. Sci. Comput. 32, 2832–2852 (2010)
  15. Clason, C., Kunisch, K.: A measure space approach to optimal source placement. Comput. Optim. Appl. 53, 155–171 (2012)
  16. Conn, A.R., Gould, N.I.M., Toint, P.L.: Trust-Region Methods. SIAM, Philadelphia (2000)
  17. Daubechies, I., DeVore, R., Fornasier, M., Güntürk, C.: Iteratively reweighted least squares minimization for sparse recovery. Commun. Pure Appl. Math. 63, 1–38 (2010)
  18. Dennis, J.E. Jr., Schnabel, R.B.: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. SIAM, Philadelphia (1996)
  19. Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96, 1348–1360 (2001)
  20. Fischer, A.: Solution of monotone complementarity problems with locally Lipschitzian functions. Math. Program. 76, 513–532 (1997)
  21. Geman, D., Reynolds, G.: Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Anal. Mach. Intell. 14, 367–383 (1992)
  22. Hintermüller, M., Wu, T.: Nonconvex TV^q-models in image restoration: analysis and a trust-region regularization based superlinearly convergent solver. SIAM J. Imaging Sci. (to appear)
  23. Hintermüller, M., Ito, K., Kunisch, K.: The primal-dual active set strategy as a semismooth Newton method. SIAM J. Optim. 13, 865–888 (2003)
  24. Huang, J., Horowitz, J.L., Ma, S.: Asymptotic properties of bridge estimators in sparse high-dimensional regression models. Ann. Stat. 36, 587–613 (2008)
  25. Huber, P.J.: Robust estimation of a location parameter. Ann. Math. Stat. 35, 73–101 (1964)
  26. Knight, K., Fu, W.: Asymptotics for lasso-type estimators. Ann. Stat. 28, 1356–1378 (2000)
  27. Kunisch, K., Pock, T.: A bilevel optimization approach for parameter learning in variational models. SIAM J. Imaging Sci. 6, 938–983 (2013)
  28. Lin, Q.: Sparsity and nonconvex nonsmooth optimization. Ph.D. thesis, University of Washington (2009)
  29. Nikolova, M.: Analysis of the recovery of edges in images and signals by minimizing nonconvex regularized least-squares. Multiscale Model. Simul. 4, 960–991 (2005)
  30. Nikolova, M., Chan, R.H.: The equivalence of half-quadratic minimization and the gradient linearization iteration. IEEE Trans. Image Process. 16, 1623–1627 (2007)
  31. Nikolova, M., Ng, M.K., Zhang, S., Ching, W.K.: Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 1, 2–25 (2008)
  32. Nikolova, M., Ng, M.K., Tam, C.P.: Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Trans. Image Process. 19, 3073–3088 (2010)
  33. Nocedal, J., Wright, S.: Numerical Optimization, 2nd edn. Springer, New York (2006)
  34. Qi, L., Sun, J.: A nonsmooth version of Newton's method. Math. Program. 58, 353–367 (1993)
  35. Ramlau, R., Zarzer, C.A.: On the minimization of a Tikhonov functional with a non-convex sparsity constraint. Electron. Trans. Numer. Anal. 39, 476–507 (2012)
  36. Stadler, G.: Elliptic optimal control problems with L1-control cost and applications for the placement of control devices. Comput. Optim. Appl. 44, 159–181 (2009)
  37. Vogel, C.R., Oman, M.E.: Iterative methods for total variation denoising. SIAM J. Sci. Comput. 17, 227–238 (1996)
  38. Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T., Vapnik, V.: Feature selection for SVMs. Adv. Neural Inf. Process. Syst. 13, 668–674 (2000)

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Department of Mathematics, Humboldt-University of Berlin, Berlin, Germany
  2. Institute for Mathematics and Scientific Computing, Karl-Franzens-University of Graz, Graz, Austria
