
The finite steps of convergence of the fast thresholding algorithms with f-feedbacks in compressed sensing

  • Original Paper
  • Published:
Numerical Algorithms

Abstract

Iterative algorithms based on thresholding, feedback, and null space tuning (NST+HT+FB) for sparse signal recovery are exceedingly effective and efficient, particularly for large-scale problems. The core algorithm is shown to converge in finitely many steps under a (preconditioned) restricted isometry condition. In this article, we derive the number of iterations that guarantees the convergence of the NST+HT+FB algorithm. Moreover, an accelerated class of adaptive feedback schemes, termed NST+HT+f-FB, is proposed and analyzed. The NST+HT+f-FB scheme uses a variable/adaptive index selection and a different feedback principle at each iteration, defined by a function f(k). It is even more effective, both in its ability to recover sparse signals with larger numbers of non-zeros and in its rate of convergence. The convergence of the accelerated scheme is established, and the finite number of iterations that guarantees convergence of the NST+HT+f-FB scheme is also obtained. Furthermore, it is possible to accelerate the rate of convergence and relax the condition for convergence by selecting an appropriate size of the thresholding index set Tk at each iteration. The theoretical findings are validated through extensive numerical experiments, which show that the proposed algorithm has a clearly advantageous balance of efficiency, adaptivity, and accuracy compared with other state-of-the-art greedy algorithms. Detailed comparisons are provided.
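The core NST+HT+FB iteration can be sketched numerically. The following is a minimal sketch, not the authors' code, assuming the null-space-tuning / hard-thresholding / feedback form of Li, Liu, and Mi [50] in the noiseless case. Since the NST step makes the iterate feasible (Au = y), the feedback step on the selected support T reduces to a least-squares fit of y on the columns A_T; the function name and stopping rule below are our simplifications.

```python
import numpy as np

def nst_ht_fb(A, y, s, max_iter=100, tol=1e-12):
    """Sketch of the NST+HT+FB iteration: null space tuning (NST),
    hard thresholding (HT), and least-squares feedback (FB) on the
    selected support. Simplified stopping rule; illustration only."""
    M, N = A.shape
    P = A.T @ np.linalg.inv(A @ A.T)      # A^*(AA^*)^{-1}
    mu = np.zeros(N)
    for _ in range(max_iter):
        u = mu + P @ (y - A @ mu)         # NST: now A u = y exactly
        T = np.argsort(np.abs(u))[-s:]    # HT: indices of the s largest entries
        mu = np.zeros(N)
        # FB: on a feasible iterate this equals the least-squares fit on T
        mu[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
        if np.linalg.norm(y - A @ mu) <= tol * max(1.0, np.linalg.norm(y)):
            break
    return mu
```

On a well-conditioned Gaussian problem this typically terminates after a handful of iterations, consistent with the finite-step convergence analyzed in the paper.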

Fig. 1, Fig. 2, Fig. 3, Fig. 4


References

  1. Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)

  2. Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)

  3. Candès, E.J., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math. 59(8), 1207–1223 (2006)

  4. Nyquist, H.: Certain topics in telegraph transmission theory. Trans. A.I.E.E. 47(2), 617–644 (1928)

  5. Duarte, M.F., Davenport, M.A., Takhar, D., Laska, J.N., Sun, T., Kelly, K.F., Baraniuk, R.G.: Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 25(2), 83–91 (2008)

  6. Yang, J., Wright, J., Huang, T.S., Ma, Y.: Image super-resolution via sparse representation. IEEE Trans. Image Process. 19(11), 2861–2873 (2010)

  7. Malioutov, D., Cetin, M., Willsky, A.S.: A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Process. 53(8), 3010–3022 (2005)

  8. Donatelli, M., Huckle, T., Mazza, M., Sesana, D.: Image deblurring by sparsity constraint on the Fourier coefficients. Numer. Algorithms 72(2), 341–361 (2016)

  9. Duarte, M.F., Baraniuk, R.G.: Spectral compressive sensing. Appl. Comput. Harmon. Anal. 35(1), 111–129 (2013)

  10. Candès, E.J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM 58(3), 1–37 (2011)

  11. Recht, B., Fazel, M., Parrilo, P.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(3), 471–501 (2010)

  12. Gandy, S., Recht, B., Yamada, I.: Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Probl. 27(2), 025010 (2011)

  13. Jia, Z.G., Ng, M.K., Song, G.J.: Lanczos method for large-scale quaternion singular value decomposition. Numer. Algorithms 82(2), 699–717 (2019)

  14. Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)

  15. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM Rev. 43(1), 129–159 (2001)

  16. Candès, E.J.: The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris, Ser. I 346, 589–592 (2008)

  17. Foucart, S.: A note on guaranteed sparse recovery via ℓ1 minimization. Appl. Comput. Harmon. Anal. 29(1), 97–103 (2010)

  18. Davies, M., Gribonval, R.: Restricted isometry constants where ℓp sparse recovery can fail for 0 < p ≤ 1. IEEE Trans. Inf. Theory 55(5), 2203–2214 (2009)

  19. Foucart, S.: A note on guaranteed sparse recovery via ℓ1 minimization. Appl. Comput. Harmon. Anal. 29(1), 97–103 (2010)

  20. Cai, T.T., Zhang, A.: Sharp RIP bound for sparse signal and low-rank matrix recovery. Appl. Comput. Harmon. Anal. 35(1), 74–93 (2013)

  21. Zhang, R., Li, S.: A proof of conjecture on restricted isometry property constants δtk \((0< t<\frac {4}{3})\). IEEE Trans. Inf. Theory 64(3), 1699–1705 (2018)

  22. Chartrand, R.: Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 14(10), 707–710 (2007)

  23. Sun, Q.Y.: Recovery of sparsest signals via ℓq-minimization. Appl. Comput. Harmon. Anal. 32(3), 329–341 (2012)

  24. Wu, R., Chen, D.R.: The improved bounds of restricted isometry constant for recovery via ℓp-minimization. IEEE Trans. Inf. Theory 59(9), 6142–6147 (2013)

  25. Gao, Y., Peng, J.G., Yue, S.G.: Stability and robustness of the ℓ2/ℓq-minimization for block sparse recovery. Signal Process. 137, 287–297 (2017)

  26. Zhang, R., Li, S.: Optimal RIP bounds for sparse signals recovery via ℓp minimization. Appl. Comput. Harmon. Anal. 47(3), 566–584 (2019)

  27. Zheng, L., Maleki, A., Weng, H.L., Wang, X.D., Long, T.: Does ℓp-minimization outperform ℓ1-minimization? IEEE Trans. Inf. Theory 63(11), 6896–6935 (2017)

  28. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008)

  29. Candès, E.J., Wakin, M., Boyd, S.: Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008)

  30. Asif, M., Romberg, J.: Fast and accurate algorithms for re-weighted ℓ1-norm minimization. IEEE Trans. Signal Process. 61(23), 5905–5916 (2013)

  31. Zhao, Y.B., Li, D.: Reweighted ℓ1-minimization for sparse solutions to underdetermined linear systems. SIAM J. Optim. 22(3), 1065–1088 (2012)

  32. Daubechies, I., DeVore, R., Fornasier, M., Güntürk, C.S.: Iteratively reweighted least squares minimization for sparse recovery. Commun. Pure Appl. Math. 63(1), 1–38 (2010)

  33. Lai, C.K., Li, S.D., Mondo, D.: Spark-level sparsity and the ℓ1 tail minimization. Appl. Comput. Harmon. Anal. 45(1), 206–215 (2018)

  34. Tropp, J.A.: Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004)

  35. Tropp, J.A., Gilbert, A.C.: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007)

  36. Needell, D., Vershynin, R.: Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Topics Signal Process. 4(2), 310–316 (2010)

  37. Donoho, D.L., Tsaig, Y., Drori, I., Starck, J.: Sparse solutions of underdetermined linear equations by stagewise orthogonal matching pursuit. [Online]. Available: http://www-stat.stanford.edu/~donoho/Reports/2006/StOMP-20060403.pdf (2006)

  38. Dai, W., Milenkovic, O.: Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55(5), 2230–2249 (2009)

  39. Needell, D., Tropp, J.A.: CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)

  40. Wang, J., Kwon, S., Shim, B.: Generalized orthogonal matching pursuit. IEEE Trans. Signal Process. 60(12), 6202–6216 (2012)

  41. Blumensath, T., Davies, M.E.: Iterative thresholding for sparse approximations. J. Fourier Anal. Appl. 14(5-6), 629–654 (2008)

  42. Blumensath, T., Davies, M.E.: Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009)

  43. Blumensath, T., Davies, M.E.: Normalized iterative hard thresholding: Guaranteed stability and performance. IEEE J. Sel. Top. Signal Process. 4(2), 298–309 (2010)

  44. Blumensath, T.: Accelerated iterative hard thresholding. Signal Process. 92(3), 752–756 (2012)

  45. Blanchard, J.D., Tanner, J., Wei, K.: CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion. Inf. Inference 4(4), 289–327 (2015)

  46. Blanchard, J.D., Tanner, J., Wei, K.: Conjugate gradient iterative hard thresholding: Observed noise stability for compressed sensing. IEEE Trans. Signal Process. 63(2), 528–537 (2015)

  47. Foucart, S.: Hard thresholding pursuit: An algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011)

  48. Bouchot, J.-L., Foucart, S., Hitczenko, P.: Hard thresholding pursuit algorithms: Number of iterations. Appl. Comput. Harmon. Anal. 41(2), 412–435 (2016)

  49. Bouchot, J.-L.: A generalized class of hard thresholding algorithms for sparse signal recovery. In: Approximation Theory XIV: San Antonio 2013, pp. 45–63. Springer (2014)

  50. Li, S.D., Liu, Y.L., Mi, T.B.: Fast thresholding algorithms with feedbacks for sparse signal recovery. Appl. Comput. Harmon. Anal. 37(1), 69–88 (2014)


Funding

This work was supported by the Foundation for Distinguished Young Talents of Guangdong under grant 2021KQNCX075, the Guangdong Basic and Applied Basic Research Foundation under grant 2021A1515110530, the National Natural Science Foundation of China under grants 61972265, 11871348 and 61373087, the Natural Science Foundation of Guangdong Province of China under grant 2020B1515310008, the Educational Commission of Guangdong Province of China under grant 2019KZDZX1007, the Guangdong Key Laboratory of Intelligent Information Processing, China, and the NSF of the USA (DMS-1313490, DMS-1615288).

Author information


Corresponding author

Correspondence to Shidong Li.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Preliminary results

Definition 7.1

[2]. For each integer s = 1, 2,..., the matrix A is said to satisfy the RIP of order s with constant δs if

$$ \left( 1-\delta_{s}\right)\|x\|_{2}^{2}\leq\|Ax\|_{2}^{2}\leq\left( 1+\delta_{s}\right)\|x\|_{2}^{2} $$

holds for all s-sparse vectors x. Equivalently, it is given by

$$ \delta_{s}=\max_{|S|\leq s}\|I-A_{S}^{\ast}A_{S}\|_{2}. $$
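This equivalent formula can be checked directly on small matrices. Below is a brute-force sketch (ours, not from the paper); it is exponential in N and intended only for sanity checks. Since for a Hermitian matrix the spectral norm of any principal submatrix is bounded by the norm of the full matrix, it suffices to scan supports of size exactly s.

```python
import itertools
import numpy as np

def rip_constant(A, s):
    """Brute-force RIP constant of Definition 7.1:
    delta_s = max over supports S with |S| = s of ||I - A_S^* A_S||_2.
    Supports of size < s are dominated by size-s supports."""
    N = A.shape[1]
    return max(
        np.linalg.norm(np.eye(s) - A[:, S].conj().T @ A[:, S], 2)
        for S in itertools.combinations(range(N), s)
    )
```

For a matrix with orthonormal columns every A_S^* A_S is the identity, so delta_s = 0; duplicating a column forces delta_2 = 1.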

Definition 7.2

[50]. For each integer s = 1, 2,..., the matrix A is said to satisfy the P-RIP of order s with constant γs if

$$ (1-\gamma_{s})\|x\|_{2}^{2}\leq\|(AA^{\ast})^{-\frac{1}{2}}Ax\|_{2}^{2} $$

holds for all s-sparse vectors x. In fact, the preconditioned restricted isometry constant γs characterizes the restricted isometry property of the preconditioned matrix \((AA^{\ast })^{-\frac {1}{2}}A\). Since

$$ \|(AA^{\ast})^{-\frac{1}{2}}Ax\|_{2}\leq\|(AA^{\ast})^{-\frac{1}{2}}A\|_{2}\|x\|_{2}=\|x\|_{2}, $$

γs is actually the smallest number such that, for all s-sparse vectors x,

$$ (1-\gamma_{s})\|x\|_{2}^{2}\leq\|(AA^{\ast})^{-\frac{1}{2}}Ax\|_{2}^{2}\leq(1+\gamma_{s})\|x\|_{2}^{2}. $$

It indicates \(\gamma _{s}(A)=\delta _{s}\left (\left (AA^{\ast }\right )^{-\frac {1}{2}}A\right )\). Equivalently, it is given by

$$ \gamma_{s}=\max_{|S|\leq s}\|I-A_{S}^{\ast}\left( AA^{\ast}\right)^{-1}A_{S}\|_{2}. $$
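In the same brute-force spirit one can compute γs from the last display and verify numerically the identity \(\gamma_{s}(A)=\delta_{s}\left(\left(AA^{\ast}\right)^{-\frac{1}{2}}A\right)\) stated above. The helper names below are ours; the sketch assumes a full-row-rank A so that AA* is invertible.

```python
import itertools
import numpy as np

def precond_rip_constant(A, s):
    """Brute-force P-RIP constant of Definition 7.2:
    gamma_s = max over |S| = s of ||I - A_S^*(AA^*)^{-1} A_S||_2."""
    N = A.shape[1]
    W = np.linalg.inv(A @ A.conj().T)
    return max(
        np.linalg.norm(np.eye(s) - A[:, S].conj().T @ W @ A[:, S], 2)
        for S in itertools.combinations(range(N), s)
    )

def preconditioned(A):
    """Return (AA^*)^{-1/2} A via an eigendecomposition of AA^*."""
    w, V = np.linalg.eigh(A @ A.conj().T)
    return (V @ np.diag(w ** -0.5) @ V.conj().T) @ A

def rip_constant(A, s):
    """delta_s of Definition 7.1, brute force over supports of size s."""
    N = A.shape[1]
    return max(
        np.linalg.norm(np.eye(s) - A[:, S].conj().T @ A[:, S], 2)
        for S in itertools.combinations(range(N), s)
    )
```

Because the rows of the preconditioned matrix are whitened, each I − A_S^*(AA^*)^{-1}A_S is positive semidefinite with norm at most 1, so γs always lies in [0, 1].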

Lemma 7.1

Let δt be the RIP constant of A. For \(u,v\in \mathbb {C}^{N}\), if |supp(u) ∪ supp(v)|≤ t, then \(|\langle u,\left (I-A^{\ast }A\right )v\rangle |\leq \delta _{t}\|u\|_{2}\|v\|_{2}\). Suppose \(|T^{\prime } \cup supp (v)|\leq t\), then \(\|\left (\left (I-A^{\ast }A\right )v\right )_{T^{\prime }}\|_{2}\leq \delta _{t}\|v\|_{2}\).

Proof

Setting T = supp(v) ∪ supp(u), |T|≤ t, one has

$$ \begin{aligned} |\langle u,\left( I-A^{\ast}A\right)v\rangle| & =|\langle u,v\rangle-\langle Au,Av\rangle| \\ &=|\langle u_{T},v_{T}\rangle-\langle A_{T}u_{T},A_{T}v_{T}\rangle|\\ &=|\langle u_{T},(I-A_{T}^{\ast}A_{T})v_{T}\rangle|\\ &\leq\|u_{T}\|_{2}\|(I-A_{T}^{\ast}A_{T})v_{T}\|_{2}\\ &\leq\|u_{T}\|_{2}\|I-A_{T}^{\ast}A_{T}\|_{2}\|v_{T}\|_{2}\\ &\leq\delta_{t}\|u\|_{2}\|v\|_{2}. \end{aligned} $$

The first inequality is due to the Cauchy-Schwarz inequality and the second inequality is due to the submultiplicativity of matrix norms, while the last step is based on Definition 7.1. Since

$$ \begin{aligned} \|\left( \left( I-A^{\ast}A\right)v\right)_{T^{\prime}}\|_{2}^{2} &=\langle\left( \left( I-A^{\ast}A\right)v\right)_{T^{\prime}},\left( I-A^{\ast}A\right)v\rangle\\ &\leq\delta_{t}\|\left( \left( I-A^{\ast}A\right)v\right)_{T^{\prime}}\|_{2}\|v\|_{2}, \end{aligned} $$

one can obtain that \(\|\left (\left (I-A^{\ast }A\right )v\right )_{T^{\prime }}\|_{2}\leq \delta _{t}\|v\|_{2}\). □
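The inequality just proved, \(|\langle u,\left(I-A^{\ast}A\right)v\rangle|\leq\delta_{t}\|u\|_{2}\|v\|_{2}\), can be stress-tested numerically: with δt computed by brute force, every random pair (u, v) with |supp(u) ∪ supp(v)| ≤ t must give a ratio at most 1. The setup below (dimensions, seed) is illustrative only.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
M, N, t = 5, 8, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)

# brute-force delta_t of Definition 7.1 (size-t supports dominate)
delta_t = max(
    np.linalg.norm(np.eye(t) - A[:, S].T @ A[:, S], 2)
    for S in itertools.combinations(range(N), t)
)

# ratios |<u,(I-A^*A)v>| / (delta_t ||u||_2 ||v||_2) over random trials,
# where supp(u) and supp(v) both lie inside a common set T of size t
ratios = []
for _ in range(200):
    T = rng.choice(N, size=t, replace=False)
    u = np.zeros(N)
    v = np.zeros(N)
    u[T[:2]] = rng.standard_normal(2)
    v[T[1:]] = rng.standard_normal(t - 1)
    lhs = abs(u @ (v - A.T @ (A @ v)))  # |<u, (I - A^*A) v>|
    ratios.append(lhs / (delta_t * np.linalg.norm(u) * np.linalg.norm(v)))
```

The lemma guarantees that max(ratios) never exceeds 1, up to floating-point error.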

Remark 7.1

Let γt be the P-RIP constant of A, i.e., \(\gamma _{t}(A)=\delta _{t}\left (\left (AA^{\ast }\right )^{-\frac {1}{2}}A\right )\). For \(u,v\in \mathbb {C}^{N}\), if |supp(u) ∪ supp(v)|≤ t, then \(|\langle u,\left (I-A^{\ast }\left (AA^{\ast }\right )^{-1}A\right )v\rangle |\leq \gamma _{t}\|u\|_{2}\|v\|_{2}\). Suppose \(|T^{\prime } \cup supp (v)|\leq t\), then \(\|\left (\left (I-A^{\ast }\left (AA^{\ast }\right )^{-1}A\right )v\right )_{T^{\prime }}\|_{2}\leq \gamma _{t}\|v\|_{2}\).

Lemma 7.2

For \(e\in \mathbb {C}^{M}\), \(\|\left (A^{\ast }e\right )_{T}\|_{2}\leq \sqrt {1+\delta _{t}}\|e\|_{2}\), when |T|≤ t.

Proof

$$ \begin{aligned} \|\left( A^{\ast}e\right)_{T}\|_{2}^{2}&=\langle A^{\ast}e,\left( A^{\ast}e\right)_{T}\rangle=\langle e,A\left( A^{\ast}e\right)_{T}\rangle\\ &\leq\|e\|_{2}\|A\left( A^{\ast}e\right)_{T}\|_{2}\leq\|e\|_{2}\sqrt{1+\delta_{t}}\|\left( A^{\ast}e\right)_{T}\|_{2}, \end{aligned} $$

Dividing both sides by \(\|\left (A^{\ast }e\right )_{T}\|_{2}\) gives, for all \(e\in \mathbb {C}^{M}\), \(\|\left (A^{\ast }e\right )_{T}\|_{2}\leq \sqrt {1+\delta _{t}}\|e\|_{2}\). □
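This bound is likewise easy to check exhaustively on a small instance: for every support T of size t and every test vector e, the ratio \(\|\left(A^{\ast}e\right)_{T}\|_{2}/\left(\sqrt{1+\delta_{t}}\|e\|_{2}\right)\) must stay at most 1. The dimensions and seed below are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
M, N, t = 4, 7, 2
A = rng.standard_normal((M, N)) / np.sqrt(M)

# brute-force delta_t of Definition 7.1
delta_t = max(
    np.linalg.norm(np.eye(t) - A[:, S].T @ A[:, S], 2)
    for S in itertools.combinations(range(N), t)
)

# worst observed ratio ||(A^* e)_T||_2 / (sqrt(1 + delta_t) ||e||_2)
worst = 0.0
for _ in range(100):
    e = rng.standard_normal(M)
    c = A.T @ e  # A^* e
    for T in itertools.combinations(range(N), t):
        ratio = np.linalg.norm(c[list(T)]) / (np.sqrt(1 + delta_t) * np.linalg.norm(e))
        worst = max(worst, ratio)
```

By the lemma, `worst` cannot exceed 1 apart from rounding.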

Remark 7.2

If \(\delta _{t}\left (\left (AA^{\ast }\right )^{-1}A\right )=\theta _{t}\), then for \(e\in \mathbb {C}^{M}\), \(\|\left (A^{\ast }\left (AA^{\ast }\right )^{-1}e\right )_{T}\|_{2}\leq \sqrt {1+\theta _{t}}\|e\|_{2}\), when |T|≤ t.

1.2 Proof of Lemma 3.1.

Proof

We first have that

\(\|[\mu ^{\prime }+A^{\ast }(AA^{\ast })^{-1}(y-A\mu ^{\prime })]_{T}\|_{2}\geq \|[\mu ^{\prime }+A^{\ast }(AA^{\ast })^{-1}(y-A\mu ^{\prime })]_{S}\|_{2}.\)

Eliminating the common terms over \(T\bigcap S\), one has

\(\|[\mu ^{\prime }+A^{\ast }(AA^{\ast })^{-1}(y-A\mu ^{\prime })]_{T\setminus S}\|_{2}\geq \|[\mu ^{\prime }+A^{\ast }(AA^{\ast })^{-1}(y-A\mu ^{\prime })]_{S\setminus T}\|_{2}.\)

For the left-hand side,

$$ \begin{aligned} &\|[\mu^{\prime}+A^{\ast}(AA^{\ast})^{-1}(y-A\mu^{\prime})]_{T\setminus S}\|_{2}\\ =&\|[\mu^{\prime}-x+A^{\ast}(AA^{\ast})^{-1}(Ax+e-A\mu^{\prime})]_{T\setminus S}\|_{2}\\ =&\|[(I-A^{\ast}(AA^{\ast})^{-1}A)(\mu^{\prime}-x)+A^{\ast}(AA^{\ast})^{-1}e]_{T\setminus S}\|_{2}. \end{aligned} $$

The right-hand side satisfies

$$ \begin{aligned} &\|[\mu^{\prime}+A^{\ast}\left( AA^{\ast}\right)^{-1}\left( y-A\mu^{\prime}\right)]_{S\setminus T}\|_{2}\\ =&\|[\mu^{\prime}+A^{\ast}\left( AA^{\ast}\right)^{-1}\left( Ax+e-A\mu^{\prime}\right)+x-x]_{S\setminus T}\|_{2}\\ \geq&\|x_{S\setminus T}\|_{2}-\|[\left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\prime}-x\right)+A^{\ast}\left( AA^{\ast}\right)^{-1}e]_{S\setminus T}\|_{2}. \end{aligned} $$

Consequently,

$$ \begin{aligned} &\|x_{S\setminus T}\|_{2}\\ \leq& \|[\left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\prime}-x\right)+A^{\ast}\left( AA^{\ast}\right)^{-1}e]_{S\setminus T}\|_{2}\\ &+\|[\left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\prime}-x\right)+A^{\ast}\left( AA^{\ast}\right)^{-1}e]_{T\setminus S}\|_{2}\\ \leq&\sqrt{2}\|[\left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\prime}-x\right)+A^{\ast}\left( AA^{\ast}\right)^{-1}e]_{T\triangle S}\|_{2}\\ \leq&\sqrt{2}\|[\left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\prime}-x\right)]_{T\triangle S}\|_{2}+\sqrt{2}\|[A^{\ast}\left( AA^{\ast}\right)^{-1}e]_{T\triangle S}\|_{2}\\ \leq&\sqrt{2}\left( \gamma_{s+s^{\prime}+t}\|x-\mu^{\prime}\|_{2}+\sqrt{1+\theta_{t+s}}\|e\|_{2}\right). \end{aligned} $$

The last step is due to Remark 7.1 and Remark 7.2. □

1.3 Proof of Lemma 3.2.

Proof

For any \(z\in \mathbb {C}^{N}\) supported on T,

$$ \begin{aligned} \langle A\mu^{\prime}-y,Az\rangle&=\langle A_{T}x_{T}^{\prime}+A_{T}(A_{T}^{\ast} A_{T})^{-1}A_{T}^{\ast} A_{T^{c}}x_{T^{c}}^{\prime}-y,A_{T}z_{T}\rangle\\ &=\langle A_{T}^{\ast}\left( A_{T}x_{T}^{\prime}+A_{T^{c}}x_{T^{c}}^{\prime}-y\right),z_{T}\rangle\\ &=\langle A_{T}^{\ast}\left( Ax^{\prime}-y\right),z_{T}\rangle\\ &=0 \end{aligned} $$

The last step is due to the feasibility of \(x^{\prime }\), i.e., \(y=Ax^{\prime }\). The inner product can also be written as

\(\langle A\mu ^{\prime }-y,Az\rangle =\langle (A\mu ^{\prime }-Ax-e),Az\rangle =0\).

Therefore,

\(\langle (\mu ^{\prime }-x),A^{\ast }Az\rangle =\langle e,Az\rangle ,~~\forall z\in \mathbb {C}^{N}\) supported on T.

Since \((\mu ^{\prime }-x)_{T}\) is supported on T, one has

\(\langle (\mu ^{\prime }-x),A^{\ast }A(\mu ^{\prime }-x)_{T}\rangle =\langle e,A(\mu ^{\prime }-x)_{T}\rangle .\)

Consequently,

$$ \begin{aligned} \|\left( \mu^{\prime}-x\right)_{T}\|_{2}^{2}&=\langle\left( \mu^{\prime}-x\right),\left( \mu^{\prime}-x\right)_{T}\rangle\\ &=|\langle \left( x-\mu^{\prime}\right),\left( I-A^{\ast}A\right)\left( x-\mu^{\prime}\right)_{T}\rangle+\langle e,A\left( \mu^{\prime}-x\right)_{T}\rangle|\\ &\leq\delta_{s+t}\|x-\mu^{\prime}\|_{2}\|(x-\mu^{\prime})_{T}\|_{2}+\sqrt{1+\delta_{t}}\|e\|_{2}\|\left( x-\mu^{\prime}\right)_{T}\|_{2} \end{aligned} $$

The last inequality follows from Lemma 7.2 together with the Cauchy-Schwarz inequality and Definition 7.1. Therefore, we have

\(\|(x-\mu ^{\prime })_{T}\|_{2}\leq \delta _{s+t}\|x-\mu ^{\prime }\|_{2}+\sqrt {1+\delta _{t}}\|e\|_{2}\).

It then follows that

$$ \begin{aligned} \|(x-\mu^{\prime})\|_{2}^{2}&=\|(x-\mu^{\prime})_{T}\|_{2}^{2}+\|(x-\mu^{\prime})_{T^{c}}\|_{2}^{2}\\ &\leq\left( \delta_{s+t}\|x-\mu^{\prime}\|_{2}+\sqrt{1+\delta_{t}}\|e\|_{2}\right)^{2}+\|x_{T^{c}}\|_{2}^{2}. \end{aligned} $$
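Expanding the square and rearranging makes the step to the next display explicit:

$$ \left( 1-\delta_{s+t}^{2}\right)\|x-\mu^{\prime}\|_{2}^{2}-2\delta_{s+t}\sqrt{1+\delta_{t}}\|e\|_{2}\|x-\mu^{\prime}\|_{2}\leq\left( 1+\delta_{t}\right)\|e\|_{2}^{2}+\|x_{T^{c}}\|_{2}^{2}, $$

and completing the square in \(\|x-\mu^{\prime}\|_{2}\) on the left adds \(\frac{\delta_{s+t}^{2}(1+\delta_{t})}{1-\delta_{s+t}^{2}}\|e\|_{2}^{2}\) to both sides, which combines with \((1+\delta_{t})\|e\|_{2}^{2}\) into \(\frac{1+\delta_{t}}{1-\delta_{s+t}^{2}}\|e\|_{2}^{2}\).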

In other words,

\(\left (\sqrt {1-\delta _{s+t}^{2}}\|(x-\mu ^{\prime })\|_{2}-\frac {\delta _{s+t}\sqrt {1+\delta _{t}}}{\sqrt {1-\delta _{s+t}^{2}}}\|e\|_{2}\right )^{2} \leq \frac {1+\delta _{t}}{1-\delta _{s+t}^{2}}\|e\|_{2}^{2}+\|x_{T^{c}}\|_{2}^{2}.\)

It means that

$$ \begin{aligned} \|(x-\mu^{\prime})\|_{2}&\leq\frac{\delta_{s+t}\sqrt{1+\delta_{t}}\|e\|_{2}+\sqrt{(1+\delta_{t})\|e\|_{2}^{2}+({1-\delta_{s+t}^{2})\|x_{T^{c}}\|_{2}^{2}}}}{1-\delta_{s+t}^{2}}\\ &\leq\frac{\|x_{T^{c}}\|_{2}}{\sqrt{1-\delta_{s+t}^{2}}}+\frac{\sqrt{1+\delta_{t}}\|e\|_{2}}{1-\delta_{s+t}}. \end{aligned} $$

1.4 Proof of Lemma 4.1.

Proof

As defined, \(\widehat {x}\in \mathbb {C}^{N}_{+}\) is the nonincreasing rearrangement of a vector \(x=(x_{1}, x_{2},\cdots ,x_{N})\in \mathbb {C}^{N}\), i.e., \(\widehat {x}_{1}\geq \widehat {x}_{2}\geq \cdots \geq \widehat {x}_{N}\geq 0\), and there exists a permutation π of {1,…,N} such that \(\widehat {x}_{i}=|x_{\pi (i)}|\) for all i ∈{1,…,N}. For NST+HT+FB, the hypothesis is \(\pi (\{1,\ldots ,p\})\subseteq T_{k}\) and the goal is to prove that \(\pi (\{1,\ldots ,p+q\})\subseteq T_{k+\ell }\). That is to say, the entries \(|\left (\mu ^{\ell +k-1}+A^{\ast }\left (AA^{\ast }\right )^{-1}\left (y-A\mu ^{\ell +k-1}\right )\right )_{\pi (j)}|\) for j ∈{1,…,p + q} are among the s largest entries of \(|\left (\mu ^{\ell +k-1}+A^{\ast }\left (AA^{\ast }\right )^{-1}\left (y-A\mu ^{\ell +k-1}\right )\right )_{i}|\) for i ∈{1,…,N}. Since supp(x) = S and |S| = s, it is then enough to prove that

$$ \begin{aligned} &\min \limits_{j\in\{1,\ldots,p+q\}} |\left( \mu^{\ell+k-1}+A^{\ast}\left( AA^{\ast}\right)^{-1}\left( y-A\mu^{\ell+k-1}\right)\right)_{\pi(j)}|\\ >&\max \limits_{i\in S^{c}} |\left( \mu^{\ell+k-1}+A^{\ast}\left( AA^{\ast}\right)^{-1}\left( y-A\mu^{\ell+k-1}\right)\right)_{i}|. \end{aligned} $$
(1)

For \(j\in \{1,\ldots ,p+q\}\) and \(i\in S^{c}\), in view of

$$ \begin{aligned} &|\left( \mu^{\ell+k-1}+A^{\ast}\left( AA^{\ast}\right)^{-1}\left( y-A\mu^{\ell+k-1}\right)\right)_{\pi(j)}|\\ \geq&|x_{\pi(j)}|-|\left( -x+\mu^{\ell+k-1}+A^{\ast}\left( AA^{\ast}\right)^{-1}\left( y-A\mu^{\ell+k-1}\right)\right)_{\pi(j)}|\\ \geq& \widehat{x}_{p+q}-|\left( \left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\ell+k-1}-x\right)+A^{\ast}\left( AA^{\ast}\right)^{-1}e\right)_{\pi(j)}|,\\ &|\left( \mu^{\ell+k-1}+A^{\ast}\left( AA^{\ast}\right)^{-1}\left( y-A\mu^{\ell+k-1}\right)\right)_{i}|\\ =&|\left( -x+\mu^{\ell+k-1}+A^{\ast}\left( AA^{\ast}\right)^{-1}\left( y-A\mu^{\ell+k-1}\right)\right)_{i}|\\ =&|\left( \left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\ell+k-1}-x\right)+A^{\ast}\left( AA^{\ast}\right)^{-1}e\right)_{i}|, \end{aligned} $$

it suffices for the proof of (1) to show that, for all \(j\in \{1,\ldots ,p+q\}\) and \(i\in S^{c}\),

$$ \begin{aligned} \widehat{x}_{p+q}>&|\left( \left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\ell+k-1}-x\right)+A^{\ast}\left( AA^{\ast}\right)^{-1}e\right)_{\pi(j)}|\\ &+|\left( \left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\ell+k-1}-x\right)+A^{\ast}\left( AA^{\ast}\right)^{-1}e\right)_{i}|. \end{aligned} $$
(2)

The right-hand side can be bounded by

$$ \begin{aligned} &\sqrt{2}\|\left( \left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\ell+k-1}-x\right)+A^{\ast}\left( AA^{\ast}\right)^{-1}e\right)_{\{\pi(j),i\}}\|_{2}\\ \leq&\sqrt{2}\|\left( \left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\ell+k-1}-x\right)\right)_{\{\pi(j),i\}}\|_{2}\\&+\sqrt{2}\|\left( A^{\ast}\left( AA^{\ast}\right)^{-1}e\right)_{\{\pi(j),i\}}\|_{2}\\ \leq&\sqrt{2}\gamma_{2s+2}\|\mu^{\ell+k-1}-x\|_{2}+\sqrt{2(1+\theta_{2})}\|e\|_{2}\\ \leq&\sqrt{2}\gamma_{2s+2}\left( \rho_{3s}^{\ell-1}\|u^{k}-x\|_{2}+\frac{\tau_{2s}(1-\rho_{3s}^{\ell-1})\|e\|_{2}}{1-\rho_{3s}}\right)+\sqrt{2(1+\theta_{2})}\|e\|_{2}, \end{aligned} $$

where Remark 3.1 was used \(\ell -1\) times in the last step. Furthermore, considering Lemma 3.2 and the assumption that \(\pi (\{1,\ldots ,p\})\subseteq T_{k}\), we have

$$ \begin{aligned} &\sqrt{2}\|[\left( I-A^{\ast}\left( AA^{\ast}\right)^{-1}A\right)\left( \mu^{\ell+k-1}-x\right)+A^{\ast}\left( AA^{\ast}\right)^{-1}e]_{\{\pi(j),i\}}\|_{2}\\ \leq&\sqrt{2}\gamma_{2s+2}\rho_{3s}^{\ell-1}\left( \frac{1 }{\sqrt{\left( 1-\delta_{2s}^{2}\right)}}\|x_{{T^{c}_{k}}}\|_{2}+\frac{\sqrt{1+\delta_{s}}}{1-\delta_{2s}}\|e\|_{2}\right)\\&+\frac{\sqrt{2}\gamma_{2s+2}\tau_{2s}\left( 1-\rho_{3s}^{\ell-1}\right)\|e\|_{2}}{1-\rho_{3s}}+\sqrt{2\left( 1+\theta_{2}\right)}\|e\|_{2} \end{aligned} $$
$$ \begin{aligned} \leq&\rho_{3s}^{\ell}\|x_{{T^{c}_{k}}}\|_{2}+\left( \sqrt{2}\gamma_{2s+2}\rho_{3s}^{\ell-1}\frac{\sqrt{1+\delta_{s}}}{1-\delta_{2s}}\right.\\&\left.+\frac{\sqrt{2}\gamma_{2s+2}\tau_{2s}(1-\rho_{3s}^{\ell-1})}{1-\rho_{3s}}+\sqrt{2(1+\theta_{2})}\right)\|e\|_{2}\\ \leq&\rho_{3s}^{\ell}\|x_{\pi(\{1,\ldots,p\})^{c}}\|_{2}+\left( \sqrt{2}\gamma_{3s}\frac{\sqrt{1+\delta_{3s}}}{1-\delta_{3s}}+\frac{\sqrt{2}\gamma_{3s}\tau_{3s}}{1-\rho_{3s}}+\sqrt{2(1+\theta_{3s})}\right)\|e\|_{2}\\ \leq&\rho_{3s}^{\ell}\|\widehat{x}_{\{p+1,\ldots,s\}}\|_{2}+\left( \sqrt{2}\gamma_{3s}\frac{\sqrt{1+\delta_{3s}}}{1-\delta_{3s}}+\frac{\sqrt{2}\gamma_{3s}\tau_{3s}}{1-\rho_{3s}}+\sqrt{2(1+\theta_{3s})}\right)\|e\|_{2}\\ \leq&\rho_{3s}^{\ell}\|\widehat{x}_{\{p+1,\ldots,s\}}\|_{2}+\omega_{3s}\|e\|_{2}. \end{aligned} $$

Therefore, if the condition (4.1) holds, then (2) is proved. □


About this article


Cite this article

Han, N., Lu, J. & Li, S. The finite steps of convergence of the fast thresholding algorithms with f-feedbacks in compressed sensing. Numer Algor 90, 1197–1223 (2022). https://doi.org/10.1007/s11075-021-01227-1

