A superlinearly convergent R-regularized Newton scheme for variational models with concave sparsity-promoting priors

Abstract

A general class of variational models with concave priors is considered for obtaining certain sparse solutions, for which the nonsmoothness and non-Lipschitz continuity of the objective functions pose significant analytical and numerical challenges. For computing a stationary point of the underlying variational problem, a Newton-type scheme with provable convergence properties is proposed. The possible non-positive definiteness of the generalized Hessian is handled by a tailored regularization technique, motivated both by reweighting and by the classical trust-region method. Our numerical experiments demonstrate selected applications in image processing, support vector machines, and optimal control of partial differential equations.
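
To make the mechanism concrete, below is a minimal Python sketch, under illustrative assumptions, of the kind of iteration the abstract describes. It treats a 1-D denoising instance min_u 0.5*||u - f||^2 + mu * sum_i phi(|(Du)_i|) with the concave prior phi(t) = t^q, 0 < q < 1. The eps-smoothing, the particular safeguard weight, and all names are assumptions made for exposition, not the authors' exact R-regularization; the sketch shows only the core idea of replacing the negative curvature weights contributed by the concave prior with nonnegative reweighting (IRLS) weights, so that each Newton system is positive definite.

```python
# Illustrative sketch only (NOT the authors' exact algorithm): a
# reweighting-regularized Newton iteration for the smoothed 1-D model
#   min_u 0.5*||u - f||^2 + mu * sum_i phi(s_i),
#   s_i = sqrt(((Du)_i)^2 + eps^2),  phi(t) = t**q,  0 < q < 1.
import numpy as np

def r_regularized_newton(f, mu=0.5, q=0.5, eps=1e-3, iters=30, tol=1e-8):
    n = f.size
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator, (n-1) x n
    u = f.copy()
    for _ in range(iters):
        t = D @ u
        s = np.sqrt(t**2 + eps**2)        # smoothed |t|
        dphi = q * s**(q - 1)             # phi'(s) > 0
        ddphi = q * (q - 1) * s**(q - 2)  # phi''(s) < 0 since phi is concave
        grad = u - f + mu * (D.T @ (dphi * t / s))
        if np.linalg.norm(grad) < tol:
            break
        # Exact curvature weights of the smoothed prior; these can be
        # negative, making the plain generalized Hessian indefinite.
        w_newton = ddphi * (t / s)**2 + dphi * eps**2 / s**3
        # Safeguard (reweighting-motivated): never let a weight drop below
        # the nonnegative IRLS/lagged-diffusivity weight phi'(s)/s.
        w_r = np.maximum(w_newton, dphi / s)
        H = np.eye(n) + mu * (D.T @ (w_r[:, None] * D))   # SPD Newton matrix
        u = u - np.linalg.solve(H, grad)
    return u

# Usage: denoise a noisy piecewise-constant signal.
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.2], 40)
denoised = r_regularized_newton(clean + 0.05 * rng.standard_normal(clean.size))
```

The paper's actual R-regularization couples the reweighting idea with a trust-region-type control of the regularization strength; the fixed elementwise lower bound above is used purely to keep the example short.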

References

  1. ℓ1-magic. http://users.ece.gatech.edu/~justin/l1magic/

  2. Attouch, H., Buttazzo, G., Michaille, G.: Variational Analysis in Sobolev and BV Spaces: Applications to PDEs and Optimization. SIAM, Philadelphia (2006)

  3. Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing. Springer, New York (2002)

  4. Bruckstein, A.M., Donoho, D.L., Elad, M.: From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51, 34–81 (2009)

  5. Burke, J.V., Lewis, A.S., Overton, M.L.: A robust gradient sampling algorithm for nonsmooth, nonconvex optimization. SIAM J. Optim. 15, 751–779 (2005)

  6. Cai, J.F., Osher, S., Shen, Z.: Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 8, 337–369 (2009)

  7. Candès, E.J., Tao, T.: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52, 5406–5425 (2006)

  8. Chan, T.F., Mulet, P.: On the convergence of the lagged diffusivity fixed point method in total variation image restoration. SIAM J. Numer. Anal. 36, 354–367 (1999)

  9. Chapelle, O.: Training a support vector machine in the primal. Neural Comput. 19, 1155–1178 (2007)

  10. Charbonnier, P., Blanc-Féraud, L., Aubert, G., Barlaud, M.: Deterministic edge-preserving regularization in computed imaging. IEEE Trans. Image Process. 6, 298–311 (1997)

  11. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008)

  12. Chen, X.: Smoothing methods for nonsmooth, nonconvex minimization. Math. Program., Ser. B 134, 71–99 (2012)

  13. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM Rev. 43, 129–159 (2001)

  14. Chen, X., Xu, F., Ye, Y.: Lower bound theory of nonzero entries in solutions of ℓ2-ℓp minimization. SIAM J. Sci. Comput. 32, 2832–2852 (2010)

  15. Clason, C., Kunisch, K.: A measure space approach to optimal source placement. Comput. Optim. Appl. 53, 155–171 (2012)

  16. Conn, A.R., Gould, N.I.M., Toint, P.L.: Trust-Region Methods. SIAM, Philadelphia (2000)

  17. Daubechies, I., DeVore, R., Fornasier, M., Güntürk, C.: Iteratively reweighted least squares minimization for sparse recovery. Commun. Pure Appl. Math. 63, 1–38 (2010)

  18. Dennis, J.E. Jr., Schnabel, R.B.: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. SIAM, Philadelphia (1996)

  19. Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96, 1348–1360 (2001)

  20. Fischer, A.: Solution of monotone complementarity problems with locally Lipschitzian functions. Math. Program. 76, 513–532 (1997)

  21. Geman, D., Reynolds, G.: Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Anal. Mach. Intell. 14, 367–383 (1992)

  22. Hintermüller, M., Wu, T.: Nonconvex TV^q-models in image restoration: analysis and a trust-region regularization-based superlinearly convergent solver. SIAM J. Imaging Sci. (to appear)

  23. Hintermüller, M., Ito, K., Kunisch, K.: The primal-dual active set strategy as a semismooth Newton method. SIAM J. Optim. 13, 865–888 (2003)

  24. Huang, J., Horowitz, J.L., Ma, S.: Asymptotic properties of bridge estimators in sparse high-dimensional regression models. Ann. Stat. 36, 587–613 (2008)

  25. Huber, P.J.: Robust estimation of a location parameter. Ann. Math. Stat. 35, 73–101 (1964)

  26. Knight, K., Fu, W.: Asymptotics for lasso-type estimators. Ann. Stat. 28, 1356–1378 (2000)

  27. Kunisch, K., Pock, T.: A bilevel optimization approach for parameter learning in variational models. SIAM J. Imaging Sci. 6, 938–983 (2013)

  28. Lin, Q.: Sparsity and nonconvex nonsmooth optimization. Ph.D. thesis, University of Washington (2009)

  29. Nikolova, M.: Analysis of the recovery of edges in images and signals by minimizing nonconvex regularized least-squares. Multiscale Model. Simul. 4, 960–991 (2005)

  30. Nikolova, M., Chan, R.H.: The equivalence of half-quadratic minimization and the gradient linearization iteration. IEEE Trans. Image Process. 16, 1623–1627 (2007)

  31. Nikolova, M., Ng, M.K., Zhang, S., Ching, W.K.: Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 1, 2–25 (2008)

  32. Nikolova, M., Ng, M.K., Tam, C.P.: Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Trans. Image Process. 19, 3073–3088 (2010)

  33. Nocedal, J., Wright, S.: Numerical Optimization, 2nd edn. Springer, New York (2006)

  34. Qi, L., Sun, J.: A nonsmooth version of Newton’s method. Math. Program. 58, 353–367 (1993)

  35. Ramlau, R., Zarzer, C.A.: On the minimization of a Tikhonov functional with a non-convex sparsity constraint. Electron. Trans. Numer. Anal. 39, 476–507 (2012)

  36. Stadler, G.: Elliptic optimal control problems with L^1-control cost and applications for the placement of control devices. Comput. Optim. Appl. 44, 159–181 (2009)

  37. Vogel, C.R., Oman, M.E.: Iterative methods for total variation denoising. SIAM J. Sci. Comput. 17, 227–238 (1996)

  38. Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T., Vapnik, V.: Feature selection for SVMs. Adv. Neural Inf. Process. Syst. 13, 668–674 (2000)

Acknowledgements

This research was supported by the Austrian Science Fund (FWF) through START project Y305 "Interfaces and Free Boundaries" and through SFB project F3204 "Mathematical Optimization and Applications in Biomedical Sciences". The authors would like to thank T. Pock (TU Graz) for communicating the problem concerning the overcomplete dictionary. We also thank the referee for pointing out the recent thesis [28], which contains an extensive algorithmic study of sparse optimization based on nonconvex regularization.

Author information

Correspondence to Michael Hintermüller.

About this article

Cite this article

Hintermüller, M., Wu, T. A superlinearly convergent R-regularized Newton scheme for variational models with concave sparsity-promoting priors. Comput Optim Appl 57, 1–25 (2014). https://doi.org/10.1007/s10589-013-9583-2
