
Sparse solutions to an underdetermined system of linear equations via penalized Huber loss

  • Research Article
  • Published in: Optimization and Engineering

Abstract

We investigate the computation of a sparse solution to an underdetermined system of linear equations using the Huber loss function as a proxy for the 1-norm, together with a quadratic error term à la Lasso. We term this approach "penalized Huber loss". The results of the paper allow one to calculate a sparse solution via a simple extrapolation formula under a sign-constancy condition, which can be removed if one works with extreme points. We present conditions leading to sign constancy, necessary and sufficient conditions for the computation of a sparse solution by penalized Huber loss, and relationships among the different solutions.
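The paper's exact formulation and extrapolation formula are not reproduced on this page. As a rough, illustrative sketch only (not the authors' method), the penalized Huber loss idea can be rendered as minimizing a coordinatewise Huber function of x (a smooth proxy for the 1-norm) plus a quadratic data-fit term, here by plain gradient descent; the function names, the parameters `gamma` and `lam`, and the solver choice are all assumptions made for illustration.

```python
import numpy as np

def huber(x, gamma):
    # Coordinatewise Huber function, summed: quadratic near 0, linear in
    # the tails; as gamma -> 0 it approaches the 1-norm of x.
    a = np.abs(x)
    return np.where(a <= gamma, x ** 2 / (2 * gamma), a - gamma / 2).sum()

def huber_grad(x, gamma):
    # Gradient of the summed Huber function: x/gamma in the quadratic
    # region, sign(x) in the linear region.
    return np.clip(x / gamma, -1.0, 1.0)

def penalized_huber(A, b, gamma=1e-2, lam=10.0, steps=5000, lr=None):
    # Minimize  H_gamma(x) + (lam/2) * ||A x - b||^2  by gradient descent.
    # (Illustrative solver only; the paper works with the structure of the
    # minimizers, not with this particular algorithm.)
    m, n = A.shape
    x = np.zeros(n)
    if lr is None:
        # Step size from a crude Lipschitz bound: 1/gamma + lam * ||A||_2^2.
        L = 1.0 / gamma + lam * np.linalg.norm(A, 2) ** 2
        lr = 1.0 / L
    for _ in range(steps):
        g = huber_grad(x, gamma) + lam * A.T @ (A @ x - b)
        x -= lr * g
    return x
```

With a small smoothing parameter `gamma` and a large penalty weight `lam`, the minimizer tends toward a sparse, near-feasible solution, mimicking the 1-norm minimization that the Huber function approximates.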



Author information

Corresponding author

Correspondence to Mustafa Ç. Pınar.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

C. Kızılkale: The research of this author was supported in part by the Applied Mathematics program of the DOE Office of Advanced Scientific Computing Research under Contract No. DE-AC02-05CH11231.

About this article

Cite this article

Kızılkale, C., Pınar, M. Sparse solutions to an underdetermined system of linear equations via penalized Huber loss. Optim Eng 22, 1521–1537 (2021). https://doi.org/10.1007/s11081-020-09577-w

