Computational Optimization and Applications, Volume 65, Issue 1, pp 205–259

Conjugate gradient acceleration of iteratively re-weighted least squares methods

  • Massimo Fornasier
  • Steffen Peter
  • Holger Rauhut
  • Stephan Worm

Abstract

Iteratively re-weighted least squares (IRLS) is a method for solving minimization problems with non-quadratic cost functions, possibly non-convex and non-smooth, that can nevertheless be described as the infimum of a family of quadratic functions. This description suggests an algorithmic scheme that solves a sequence of quadratic problems, each of which can be tackled efficiently by tools of numerical linear algebra. Its general scope and usually simple implementation, transforming the initial non-convex and non-smooth minimization problem into a more familiar and easily solvable quadratic one, make IRLS a versatile algorithm. Despite this simplicity, versatility, and elegant analysis, however, the complexity of IRLS depends strongly on how the solution of the successive quadratic optimizations is addressed. For the important special case of compressed sensing and sparse recovery problems in signal processing, we investigate theoretically and numerically how accurately the quadratic problems need to be solved by the conjugate gradient (CG) method in each iteration in order to guarantee convergence. Using the CG method may significantly speed up the numerical solution of the quadratic subproblems, in particular when fast matrix-vector multiplication (exploiting, for instance, the FFT) is available for the matrix involved. In addition, we study convergence rates. Our modified IRLS method outperforms state-of-the-art first-order methods such as Iterative Hard Thresholding (IHT) and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) in many situations, especially in high dimensions. Moreover, IRLS is often able to recover sparse vectors from fewer measurements than IHT and FISTA require.
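
To make the quadratic reformulation behind IRLS concrete, here is a worked identity that is standard in the IRLS literature (stated as an illustration, not quoted from the paper): for \(0 < \tau < 2\), each term \(|t|^{\tau}\) is an infimum of parabolas in \(t\),

\[
|t|^{\tau} \;=\; \min_{w > 0}\;\Big(\tfrac{\tau}{2}\, w\, t^{2} \;+\; \big(1 - \tfrac{\tau}{2}\big)\, w^{\tau/(\tau - 2)}\Big),
\qquad w^{*} = |t|^{\tau - 2},
\]

so that

\[
\|x\|_{\tau}^{\tau} \;=\; \min_{w_{1},\dots,w_{N} > 0}\; \sum_{i=1}^{N} \Big(\tfrac{\tau}{2}\, w_{i}\, x_{i}^{2} + \big(1 - \tfrac{\tau}{2}\big)\, w_{i}^{\tau/(\tau - 2)}\Big).
\]

Minimizing over \(x\) with the weights held fixed is a weighted least-squares (quadratic) problem, and alternating the two partial minimizations is precisely the IRLS scheme; in practice a smoothing parameter \(\varepsilon > 0\) in \(w_{i} = (x_{i}^{2} + \varepsilon^{2})^{\tau/2 - 1}\) keeps the weights bounded.

The following is a minimal numerical sketch of such a CG-accelerated IRLS iteration for the equality-constrained problem \(\min \|x\|_{\tau}\) subject to \(Ax = y\). It is written under simplifying assumptions (a fixed inner CG tolerance, a simple nonincreasing \(\varepsilon\)-rule using an assumed sparsity level `K`); it is not the paper's exact algorithm, and all names and parameters are illustrative.

```python
import numpy as np

def cg(apply_op, b, x0, tol=1e-8, maxiter=200):
    """Plain conjugate gradient for an SPD operator, stopped early at `tol`."""
    x = x0.copy()
    r = b - apply_op(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) <= tol:
            break
        Ap = apply_op(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def irls_cg(A, y, tau=1.0, K=10, n_iter=50, cg_tol=1e-8):
    """IRLS for min ||x||_tau s.t. Ax = y, inner systems solved inexactly by CG.

    Each sweep solves the weighted least-squares problem
        min sum_i w_i x_i^2  s.t.  Ax = y,
    whose solution is x = D A^T z with D = diag(1/w_i) and (A D A^T) z = y.
    """
    m, N = A.shape
    x = A.T @ np.linalg.solve(A @ A.T, y)   # minimum-norm starting point
    eps, z = 1.0, np.zeros(m)
    for _ in range(n_iter):
        d = (x**2 + eps**2) ** (1.0 - tau / 2.0)          # d_i = 1 / w_i
        z = cg(lambda v: A @ (d * (A.T @ v)), y, z, tol=cg_tol)  # warm start
        x = d * (A.T @ z)
        # epsilon-rule in the spirit of the IRLS literature: track the
        # (K+1)-st largest magnitude, never increasing eps (K assumed known)
        eps = min(eps, np.sort(np.abs(x))[-(K + 1)] / N)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, N, K = 80, 200, 10
    A = rng.standard_normal((m, N)) / np.sqrt(m)
    x_true = np.zeros(N)
    x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    x_rec = irls_cg(A, A @ x_true, tau=1.0, K=K)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

The trade-off highlighted in the abstract appears here in `cg_tol`: solving each inner system only approximately is what makes the method fast, and the paper's analysis concerns how loose this tolerance may be while still guaranteeing convergence.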

Keywords

Iteratively re-weighted least squares · Conjugate gradient method · \(\ell _{\tau }\)-norm minimization · Compressed sensing · Sparse recovery

Acknowledgments

Massimo Fornasier acknowledges the support of the ERC Starting Grant HDSPCONTR “High-Dimensional Sparse Optimal Control” and the DFG project “Optimal Adaptive Numerical Methods for p-Poisson Elliptic Equations”. Steffen Peter acknowledges the support of the project “SparsEO: Exploiting the Sparsity in Remote Sensing for Earth Observation”, funded by Munich Aerospace. Holger Rauhut would like to thank the European Research Council (ERC) for support through the Starting Grant StG 258926 SPALORA (Sparse and Low Rank Recovery) and the Hausdorff Center for Mathematics at the University of Bonn, where this project started.


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Massimo Fornasier (1)
  • Steffen Peter (1, corresponding author)
  • Holger Rauhut (2)
  • Stephan Worm (3)

  1. Fakultät für Mathematik, Technische Universität München, Garching bei München, Germany
  2. Lehrstuhl C für Mathematik (Analysis), RWTH Aachen University, Aachen, Germany
  3. Bonn, Germany