Abstract
This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved within this framework. These include models with objective functionals such as the total-variation norm, \(\|Wx\|_1\) where W is arbitrary, or a combination thereof. In addition, the paper introduces a number of technical contributions such as a novel continuation scheme and a novel approach for controlling the step size, and applies results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable, and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied by a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms.
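For concreteness, the four-step recipe can be sketched schematically as follows. This is an illustrative summary in generic notation, not a verbatim statement from the paper: \(f\) denotes the convex objective, \(\mathcal{A}\) a linear operator, \(b\) a fixed vector, \(\mathcal{K}\) a closed convex cone with dual cone \(\mathcal{K}^*\), and \(d\) a strongly convex proximity function.
\begin{align*}
&\text{(1) Conic form:} && \min_{x}\ f(x) \quad \text{subject to} \quad \mathcal{A}(x) + b \in \mathcal{K},\\
&\text{(2) Dual:} && \max_{\lambda \in \mathcal{K}^*}\ g(\lambda), \qquad g(\lambda) = \inf_{x}\ f(x) - \langle \lambda,\, \mathcal{A}(x) + b \rangle,\\
&\text{(3) Smoothing:} && \text{replace } f \text{ by } f_\mu = f + \mu\, d,\ \mu > 0, \text{ so the smoothed dual } g_\mu \text{ is differentiable},\\
&\text{(4) First-order method:} && \text{maximize } g_\mu \text{ by an accelerated gradient scheme with the optimal } \mathcal{O}(1/k^2) \text{ rate}.
\end{align*}
The point of step (3), in the spirit of Nesterov's smoothing by conjugation, is that adding the strongly convex term \(\mu\, d\) to the primal objective gives the dual a Lipschitz-continuous gradient, which is exactly the regularity that optimal first-order methods require.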
Cite this article
Becker, S.R., Candès, E.J. & Grant, M.C. Templates for convex cone problems with applications to sparse signal recovery. Math. Prog. Comp. 3, 165–218 (2011). https://doi.org/10.1007/s12532-011-0029-5
Keywords
- Optimal first-order methods
- Nesterov’s accelerated descent algorithms
- Proximal algorithms
- Conic duality
- Smoothing by conjugation
- The Dantzig selector
- The LASSO
- Nuclear-norm minimization