
Adaptive Iterative Hard Thresholding for Least Absolute Deviation Problems with Sparsity Constraints

Journal of Fourier Analysis and Applications

Abstract

Constrained least absolute deviation (LAD) problems often arise in sparse regression in statistics and in the compressed sensing literature. Solving LAD problems with sparsity constraints directly is challenging because the objective function is non-smooth and the feasible set is non-convex. We propose an adaptive iterative hard thresholding (\({{\,\textrm{AIHT}\,}}_1\)) method for LAD problems with sparsity constraints. The sequence generated by \({{\,\textrm{AIHT}\,}}_1\) converges linearly to the ground truth under the \(l_1\) restricted isometry property condition. We then apply our analysis to the binary iterative hard thresholding (BIHT) algorithm in one-bit compressed sensing and obtain a tighter error bound than in our previous work on BIHT. To some extent, our results explain the efficiency of BIHT in recovering sparse vectors and help fill the gap in its theoretical guarantees. Finally, numerical examples demonstrate the validity of our convergence analysis.
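The abstract describes an iterative-hard-thresholding-style scheme for the sparsity-constrained LAD problem, i.e., minimizing \(\|Ax - y\|_1\) subject to \(\|x\|_0 \le s\). The Python sketch below illustrates the generic template only: a subgradient step on the \(l_1\) loss followed by hard thresholding onto the set of s-sparse vectors. The function names and the simple diminishing step-size rule are illustrative assumptions for this sketch; the paper's actual adaptive step-size choice is developed in the full text and is not reproduced here.

import numpy as np

def hard_threshold(x, s):
    # Keep the s largest-magnitude entries of x and zero out the rest.
    keep = np.argsort(np.abs(x))[::-1][:s]
    z = np.zeros_like(x)
    z[keep] = x[keep]
    return z

def iht_lad(A, y, s, n_iter=300):
    # Generic IHT-style iteration for  min ||A x - y||_1  s.t.  ||x||_0 <= s:
    # a subgradient step on the l1 loss followed by hard thresholding.
    m, n = A.shape
    x = np.zeros(n)
    for k in range(n_iter):
        g = A.T @ np.sign(A @ x - y)        # subgradient of ||A x - y||_1
        mu = 1.0 / (m * np.sqrt(k + 1))     # illustrative diminishing step size (assumption)
        x = hard_threshold(x - mu * g, s)
    return x

# Small demo on synthetic data (sizes are arbitrary):
rng = np.random.default_rng(0)
m, n, s = 100, 200, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
y = A @ x_true
x_hat = iht_lad(A, y, s)

For the one-bit setting mentioned in the abstract, BIHT follows the same template with the measurements replaced by their signs, \(y = \operatorname{sign}(Ax)\), and a thresholded (sub)gradient step on the corresponding one-bit loss.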



Acknowledgements

The authors would like to thank the referees for their valuable comments. Yi Shen would like to thank Dr. Rui Zhang for his valuable comments on Lemma 3.1. This work was supported in part by the NSFC under Grant Nos. U21A20426, 12022112, and 12071426, by the Zhejiang Provincial Natural Science Foundation of China under Grant No. LR19A010001, and by the National Key Research and Development Program of China under Grant No. 2021YFA1003500.

Author information

Corresponding author

Correspondence to Yi Shen.

Additional information

Communicated by Thomas Strohmer.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Li, S., Liu, D. & Shen, Y. Adaptive Iterative Hard Thresholding for Least Absolute Deviation Problems with Sparsity Constraints. J Fourier Anal Appl 29, 5 (2023). https://doi.org/10.1007/s00041-022-09984-w

