Abstract
Constrained least absolute deviation (LAD) problems often arise in sparse regression for statistical prediction and in the compressed sensing literature. Solving LAD problems with sparsity constraints directly is challenging because the objective function is non-smooth and the feasible set is non-convex. We propose an adaptive iterative hard thresholding (\({{\,\textrm{AIHT}\,}}_1\)) method for LAD problems with sparsity constraints. The sequence generated by \({{\,\textrm{AIHT}\,}}_1\) converges linearly to the ground truth under the \(l_1\) restricted isometry property condition. We then apply our analysis to the binary iterative hard thresholding (BIHT) algorithm in one-bit compressed sensing and obtain a tighter error bound than in our previous work on BIHT. To some extent, our results explain the efficiency of BIHT in recovering sparse vectors and fill a gap in its theoretical guarantees. Finally, numerical examples demonstrate the validity of our convergence analysis.
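To make the setting concrete, the following is a minimal sketch of a generic iterative hard thresholding scheme for the sparsity-constrained LAD problem \(\min \|Ax-y\|_1\) s.t. \(\|x\|_0 \le s\): a subgradient step on the \(l_1\) residual followed by projection onto the set of \(s\)-sparse vectors. This is an illustrative sketch only, not the paper's exact \({{\,\textrm{AIHT}\,}}_1\) algorithm; in particular, the Polyak-style adaptive step size (assuming a noiseless model with optimal value zero) and all function names here are assumptions for illustration.

```python
import numpy as np

def hard_threshold(x, s):
    """Project onto s-sparse vectors: keep the s largest-magnitude entries."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def iht_lad(A, y, s, iters=300):
    """Subgradient-based IHT sketch for min ||Ax - y||_1 s.t. ||x||_0 <= s.

    Uses a Polyak-type step size f(x)/||g||^2, which assumes the noiseless
    optimal objective value is zero. This is a heuristic sketch, not the
    paper's AIHT_1 method.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = A @ x - y
        f = np.abs(r).sum()          # current LAD objective value
        if f < 1e-12:                # exact fit reached
            break
        g = A.T @ np.sign(r)         # subgradient of ||Ax - y||_1
        step = f / (g @ g)           # Polyak-style adaptive step (f* = 0)
        x = hard_threshold(x - step * g, s)
    return x
```

On well-conditioned Gaussian measurements with enough rows, this sketch typically recovers the support of a sparse ground truth from noiseless data, consistent with the linear-convergence behavior the paper establishes for its adaptive variant.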
Acknowledgements
The authors would like to thank the referees for their valuable comments. Yi Shen would like to thank Dr. Rui Zhang for his valuable comments on Lemma 3.1. This work was supported in part by the NSFC under Grant nos. U21A20426, 12022112, and 12071426, the Zhejiang Provincial Natural Science Foundation of China under Grant no. LR19A010001, and the National Key Research and Development Program of China under Grant no. 2021YFA1003500.
Communicated by Thomas Strohmer.
About this article
Cite this article
Li, S., Liu, D. & Shen, Y. Adaptive Iterative Hard Thresholding for Least Absolute Deviation Problems with Sparsity Constraints. J Fourier Anal Appl 29, 5 (2023). https://doi.org/10.1007/s00041-022-09984-w