Abstract
This paper gives new bounds for the restricted isometry constant (RIC) in compressed sensing. Let \(\varPhi\) be an \(m\times n\) real matrix and k a positive integer with \(k\leqslant n\). The main results of this paper show that if the restricted isometry constant of \(\varPhi\) satisfies \(\delta_{8ak}<1\) and
for \(a>\frac{3}{8}\), then every k-sparse solution can be recovered exactly via \(l_{1}\) minimization in the noiseless case. In particular, when a=1, 1.5, 2 and 3, we have \(\delta_{2k}<0.5746\) and \(\delta_{8k}<1\), or \(\delta_{2.5k}<0.7046\) and \(\delta_{12k}<1\), or \(\delta_{3k}<0.7731\) and \(\delta_{16k}<1\), or \(\delta_{4k}<0.8445\) and \(\delta_{24k}<1\).
1 Introduction
The concept of compressed sensing (CS) was first introduced by Donoho [12], Candès, Romberg and Tao [8], and Candès and Tao [9]. The essential idea is to recover an original n-dimensional but sparse signal/image from a linear measurement of dimension far fewer than n. Recently, a large number of researchers, including applied mathematicians, computer scientists, and engineers, have turned their attention to this area owing to its wide applications in signal processing, communications, astronomy, biology, medicine, seismology, and so on; see, e.g., the survey papers [1, 2, 19] and the monograph [14].
The fundamental problem in compressed sensing is to reconstruct a high-dimensional sparse signal from a remarkably small number of measurements. We aim to recover a sparse solution \(x\in\mathbb{R}^{n}\) of the underdetermined system Φx=y, where \(y\in\mathbb{R}^{m}\) is the available measurement and \(\varPhi\in\mathbb{R}^{m\times n}\) is a known measurement matrix (with m≪n). The mathematical model is to minimize the number of nonzero components of x, i.e., to solve the following \(l_{0}\)-norm optimization problem:
where \(\Vert x\Vert_{0}\) is the \(l_{0}\)-norm of the vector \(x\in\mathbb{R}^{n}\), i.e., the number of nonzero entries in x (this is not a true norm, as \(\Vert\cdot\Vert_{0}\) is not positively homogeneous). A vector x with at most k nonzero entries, \(\Vert x\Vert_{0}\leqslant k\), is called k-sparse. However, (1) is combinatorial and computationally intractable, and one popular and powerful approach is to solve its convex relaxation, the \(l_{1}\) minimization problem
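To make the combinatorial nature of (1) concrete, the following sketch (with hypothetical toy sizes, not taken from the paper) solves the \(l_{0}\) problem by exhaustive search over candidate supports; the cost grows like \(\binom{n}{k}\) and quickly becomes intractable, which is what motivates the \(l_{1}\) relaxation.

```python
import itertools
import numpy as np

def l0_brute_force(Phi, y, max_k=3, tol=1e-9):
    """Solve min ||x||_0 s.t. Phi x = y by enumerating supports.

    Exponential cost: checks C(n, k) supports for each k.
    """
    m, n = Phi.shape
    for k in range(1, max_k + 1):
        for support in itertools.combinations(range(n), k):
            sub = Phi[:, list(support)]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            if np.linalg.norm(sub @ coef - y) < tol:
                x = np.zeros(n)
                x[list(support)] = coef
                return x
    return None

rng = np.random.default_rng(0)
Phi = rng.standard_normal((5, 10))   # m = 5 measurements, n = 10 unknowns
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]         # a 2-sparse signal
y = Phi @ x_true

x_hat = l0_brute_force(Phi, y)
print(np.allclose(x_hat, x_true))    # exhaustive search recovers the sparse solution
```

For a generic Gaussian matrix of this size every 4 columns are linearly independent, so the 2-sparse solution is the unique sparsest one and the search finds it; already at moderate n the number of supports explodes.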
One of the most commonly used frameworks for sparse recovery via \(l_{1}\) minimization is the Restricted Isometry Property (RIP) introduced by Candès and Tao [9]. For an integer k∈{1,2,⋯,n}, the k-restricted isometry constant (RIC) \(\delta_{k}\) of a matrix \(\varPhi\) is the smallest number in (0,1) such that
holds for all k-sparse vectors. We say that \(\varPhi\) has the k-RIP if there is a k-RIC \(\delta_{k}\in(0,1)\) such that the above inequalities hold. Furthermore, if for integers \(k_{1},k_{2},\cdots,k_{s}\) there exist \(\delta_{k_{1}}, \delta_{k_{2}},\cdots,\delta_{k_{s}}\in(0,1)\) such that the corresponding inequalities hold, we say that \(\varPhi\) has the \(\{k_{1},k_{2},\cdots,k_{s}\}\)-RIP. Here, \(\delta_{k}\in(0,1)\) is often used in the literature, see, e.g., [13, 14], and \(\delta_{k}\) is monotone in k (see, e.g., [3, 4]), i.e.,
Thus, saying that \(\varPhi\) has the \(\{k_{1},k_{2},\cdots,k_{s}\}\)-RIP is the same as saying that \(\varPhi\) has the \(\max\{k_{1},k_{2},\cdots,k_{s}\}\)-RIP. In addition, if \(k+k'\leqslant n\), the (k,k′)-restricted orthogonality constant (ROC) \(\theta_{k,k'}\) is the smallest number that satisfies
for all k-sparse x and k′-sparse x′ with disjoint supports. Candès and Tao [9] showed the following link between the RIC and ROC:
By definition (3), one observes that
where \(\Vert\cdot\Vert\) denotes the spectral norm of a matrix (see, e.g., [18]). Clearly, it is hard to compute RICs for a given matrix \(\varPhi\), because it essentially requires that every subset of columns of \(\varPhi\) with a certain cardinality approximately behaves like an orthonormal system. Moreover, as shown by Zhang [20], for a nonsingular matrix (transformation) \(Q\in\mathbb{R}^{n\times n}\), the RIP constants of \(\varPhi\) and \(Q\varPhi\) can be very different. However, a widely used technique for avoiding checking the RIP condition directly is to generate the matrix randomly and to show that the resulting random matrix satisfies the RIP with high probability [17].
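The combinatorial cost can be seen directly: the following sketch (toy sizes, illustrative only) computes \(\delta_{k}\) exactly for a small matrix by enumerating all \(\binom{n}{k}\) submatrices \(\varPhi_{T}\) and taking the extreme eigenvalues of \(\varPhi_{T}^{\top}\varPhi_{T}\); by eigenvalue interlacing this also exhibits the monotonicity of \(\delta_{k}\) in k noted above.

```python
import itertools
import numpy as np

def exact_ric(Phi, k):
    """Exact k-restricted isometry constant of Phi:
    max over all k-subsets T of the deviation of the eigenvalues
    of Phi_T^T Phi_T from 1 (exponential cost in n)."""
    n = Phi.shape[1]
    delta = 0.0
    for T in itertools.combinations(range(n), k):
        gram = Phi[:, list(T)].T @ Phi[:, list(T)]
        eigs = np.linalg.eigvalsh(gram)
        delta = max(delta, 1.0 - eigs[0], eigs[-1] - 1.0)
    return delta

rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 8)) / np.sqrt(20)  # columns roughly unit-norm
d1, d2, d3 = exact_ric(Phi, 1), exact_ric(Phi, 2), exact_ric(Phi, 3)
print(d1 <= d2 <= d3)  # monotonicity of delta_k in k
```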
Although the RIP condition is difficult to check, it is of independent interest to study bounds for the RIC in CS, since \(l_{1}\)-norm minimization can recover a sparse signal under various conditions on \(\delta_{k}\), \(\delta_{2k}\) and \(\theta_{k,k'}\), such as the condition \(\delta_{k}+\theta_{k,k}+\theta_{k,2k}<1\) in [9], \(\delta_{2k}+\theta_{k,2k}<1\) in [10], and \(\delta_{1.25k}+\theta_{k,1.25k}<1\) in [4].
Many previous results in compressed sensing refer to \(\delta_{2k}\), probably because it implies that k-sparse signals remain well separated in the measurement space. The first major result of this sort was established by Candès [6], namely, \(\delta_{2k}\leqslant\sqrt{2}-1\) is sufficient for k-sparse signal reconstruction. Recently, Cai and Zhang [5] obtained the sufficient condition \(\delta_{2k}\leqslant 1/2\). To the best of our knowledge, the bound for \(\delta_{2k}\) in sparse recovery has gradually improved from \(\sqrt{2}-1\ ({\approx}0.4142)\) to 0.5 in recent years. The details are listed in Table 1 below.
The main contribution of the present paper is to give new bounds for the RIC in CS in the following theorem. Here, for \(x\in\mathbb{R}^{n}\), the best k-sparse approximation \(x^{(k)}\in\mathbb{R}^{n}\) is obtained from x by setting all but the k largest entries (in absolute value) to zero.
Theorem 1
Let x be a feasible solution to (1) and \(x^{(k)}\) be the best k-sparse approximation of x. If the following inequalities hold
and
for a>3/8, then the solution \(\hat{x}\) to the \(l_{1}\) minimization problem (2) satisfies
for some positive constant \(C_{0}<1\) given explicitly by (16). In particular, if x is k-sparse, the recovery is exact.
From Theorem 1, when a=1, 1.5, 2 and 3, we get \(\delta_{2k}<0.5746\), \(\delta_{2.5k}<0.7046\), \(\delta_{3k}<0.7731\) and \(\delta_{4k}<0.8445\), with the corresponding assumption \(\delta_{8ak}<1\). As Table 1 shows, under the extra assumption \(\delta_{8ak}<1\), our conditions are all weaker than the ones known in the literature.
Note that the k-RIP condition implies that every subset of columns of \(\varPhi\) with cardinality less than k approximately behaves like an orthonormal system. In the context of (large-scale) sparse optimization, one usually has k≪n. Recently, Candès and Recht [7] showed that a k-sparse vector in \(\mathbb{R}^{n}\) can be efficiently recovered from \(2k\log n\) measurements with high probability, i.e., \(m=\mathcal{O}(2k\log n)\). In this case, 8ak is less than m for smaller a. Thus, 8ak<m and 8ak≪n make sense, and our extra assumption is meaningful and valuable in large-scale sparse optimization.
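As a quick illustration (with hypothetical problem sizes), for \(n=10^{6}\) and \(k=100\) the measurement count \(m\approx 2k\log n\approx 2763\) comfortably exceeds \(8ak=800\) when a=1, so the extra assumption \(\delta_{8ak}<1\) concerns a sparsity level still far below both m and n:

```python
import math

n, k, a = 10**6, 100, 1        # hypothetical large-scale problem sizes
m = 2 * k * math.log(n)        # measurement count ~ 2k log n, cf. [7]
print(round(m))                # about 2763 measurements
print(8 * a * k < m < n)       # 8ak = 800 < m << n
```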
The organization of this paper is as follows. In the next section, we establish some key inequalities. In Sect. 3, we prove our main result. In Sect. 4, we conclude this paper with some remarks.
2 Key Inequalities
In this section, we give some inequalities that play an important role in improving the RIC bound for sparse recovery.
We begin with the following interesting and important inequality, which states the connection between the norms \(l_{0}\), \(l_{1}\), \(l_{2}\), \(l_{\infty}\) and \(l_{-\infty}\). Here, we define \(\Vert x\Vert_{-\infty}:=\min_{i}\{|x_{i}|\}\). (In fact, \(l_{-\infty}\) is not a norm, since the triangle inequality does not hold.) For convenience, we call (11) the Norm Inequality; it is essentially (6) in [3].
Proposition 1
(Norm Inequality)
For any \(x\in\mathbb {R}^{n}\) and x≠0,
Furthermore, we obtain the following general inequality,
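The display for (11) is not reproduced above; in [3] the inequality takes the form \(\Vert x\Vert_{2}\leqslant \frac{\Vert x\Vert_{1}}{\sqrt{\Vert x\Vert_{0}}}+\frac{\sqrt{\Vert x\Vert_{0}}}{4}\bigl(\Vert x\Vert_{\infty}-\Vert x\Vert_{-\infty}\bigr)\) (stated here as an assumption about the elided display, not a quotation of it). A quick numerical sanity check on random vectors:

```python
import numpy as np

def norm_ineq_gap(x):
    """RHS - LHS of the Norm Inequality of [3]:
    ||x||_2 <= ||x||_1/sqrt(||x||_0) + sqrt(||x||_0)/4 * (||x||_inf - ||x||_-inf).
    A nonnegative gap means the inequality holds for this x."""
    s = np.count_nonzero(x)               # ||x||_0
    lhs = np.linalg.norm(x)
    rhs = (np.abs(x).sum() / np.sqrt(s)
           + np.sqrt(s) / 4 * (np.abs(x).max() - np.abs(x).min()))
    return rhs - lhs

rng = np.random.default_rng(2)
gaps = [norm_ineq_gap(rng.standard_normal(rng.integers(1, 50)))
        for _ in range(1000)]
print(min(gaps) >= -1e-12)                # holds on every random sample
```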
Throughout the paper, let \(\hat{x}\) be a solution to the minimization problem (2), and let \(x\in\mathbb{R}^{n}\) be a feasible one, i.e., Φx=y. Clearly, \(\Vert \hat{x}\Vert _{1}\leqslant\Vert x\Vert _{1}\). Let \(x^{(k)}\in\mathbb{R}^{n}\) be the best k-sparse approximation of x, as defined above. Without loss of generality, we assume that the support of \(x^{(k)}\) is \(T_{0}\).
Denote \(h=\hat{x}-x\), and let \(h_{T}\) be the vector equal to h on an index set T and zero elsewhere. We decompose h into a sum of vectors \(h_{T_{0}},h_{T_{1}},h_{T_{2}},\cdots\), where \(T_{1}\) corresponds to the locations of the ak largest coefficients of \(h_{T_{0}^{C}}\) (\(T_{0}^{C}=T_{1}\cup T_{2}\cup\cdots\)); \(T_{2}\) to the locations of the 4ak largest coefficients of \(h_{(T_{0}\cup T_{1})^{C}}\); \(T_{3}\) to the locations of the next 4ak largest coefficients of \(h_{(T_{0}\cup T_{1})^{C}}\); and so on. That is
Here, the sparsity of \(h_{T_{0}}\) is at most k; the sparsity of \(h_{T_{1}}\) is at most ak; and the sparsity of each \(h_{T_{j}}\ (j\geqslant2)\) is at most 4ak.
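The partition of the residual h described above can be sketched as follows (toy sizes, illustrative only; \(T_{0}\) is taken as the indices of the k largest entries, matching the support convention above):

```python
import numpy as np

def decompose(h, T0, a, k):
    """Split the indices outside T0 into T1 (the ak largest |h| entries)
    followed by T2, T3, ... (blocks of the next 4ak largest entries each)."""
    rest = np.setdiff1d(np.arange(h.size), T0)
    rest = rest[np.argsort(-np.abs(h[rest]))]   # sort T0^C by decreasing |h|
    blocks = [rest[:a * k]]                     # T1: ak largest
    rest = rest[a * k:]
    while rest.size:                            # T2, T3, ...: 4ak each
        blocks.append(rest[:4 * a * k])
        rest = rest[4 * a * k:]
    return blocks

rng = np.random.default_rng(3)
h, k, a = rng.standard_normal(40), 4, 1
T0 = np.argsort(-np.abs(h))[:k]                 # indices of the k largest entries
blocks = decompose(h, T0, a, k)
sizes = [b.size for b in blocks]
print(sizes)                                    # [4, 16, 16] with these toy sizes
```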
In order to get a new bound on RIC, for the above decomposition (12), we define
Obviously, ρ∈[0,1] and
Applying the Norm Inequality, we can derive some inequalities on h, which are very useful in the proof of our main result.
Lemma 1
Let \(h_{T_{0}}, h_{T_{1}}, h_{T_{2}}, \cdots\), and ρ be given by (12) and (13), respectively. Then
and
Proof
By the definitions of \(h_{T_{j}}(j=1,2,\cdots)\) and ρ, direct calculation yields
Thus, (14) holds. It remains to show (15). Applying the Norm Inequality (11), we obtain that
for j=2,3,⋯, where for the last \(h_{T_{j}}\), we set \(\Vert h_{T_{j+1}}\Vert _{\infty}:=0\). Adding up all the inequalities for j=2,3,⋯, we get that
The desired conclusion holds immediately. □
At the end of this section, we give two lemmas that connect the norms of \(\varPhi h_{T_{j}}\) and \(h_{T_{j}}\).
Lemma 2
Let \(h_{T_{0}}\) and \(h_{T_{1}}\) be given by (12). Then
Proof
From (3), we obtain
Because the supports T 0 and T 1 are disjoint, the following equality holds
Therefore
where the second inequality is derived from (11). □
Lemma 3
Let \(h_{T_{0}}, h_{T_{1}}, h_{T_{2}}, \cdots\), and ρ be given by (12) and (13), respectively. Then
Proof
By direct calculations, we obtain that
where the first inequality holds by the triangle inequality, the second holds due to (3) and (6), the third follows from (14) and (15), and the first equality holds from
Hence, the desired result follows. □
3 Proof of the Main Result
In this section, we prove our main result. For simplicity, we first define a quadratic function of the variable ρ,
Clearly, it is a strictly concave function, so the maximum value of f(ρ) is easily obtained by setting its derivative to zero, that is
where
Moreover, we denote that
Before proving our main results, we show that the RIP bound in (9) is a sufficient condition for \(C_{0}<1\).
Lemma 4
If (8) and (9) hold, then \(C_{0}<1\).
Proof
From (9), it is easy to verify that
which is equivalent to
Since \(0\leqslant\delta_{8ak}\leqslant 1\), and by (16), we have
Thus, if (9) holds, then \(C_{0}<1\). □
Now we begin to prove our main result.
Proof of Theorem 1
The proof proceeds in two steps, which is a common approach in the literature [4, 6].
The first step is to prove that
The second step shows that \(\Vert \hat{x}-x\Vert _{1}\) is appropriately small.
For the first step, we note that Φh=0, which implies that
From Lemmas 2 and 3, the following inequality holds
Then we get
where the first inequality is derived from (13). Combining this with (16), we get (17).
For the second step, we have
where the first inequality follows from (12) in [6]. Then
This together with (17) yields
This completes the proof of (10).
In particular, if x is k-sparse, then x−x (k)=0, and hence \(x=\hat{x}\) from (10). □
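For the reader's convenience, the cone constraint invoked in the second step (inequality (12) of [6]) follows from the feasibility of x and the optimality of \(\hat{x}\); a sketch of the standard argument:

```latex
% From \|\hat{x}\|_1 \le \|x\|_1 and h = \hat{x} - x,
\|x\|_{1} \;\geqslant\; \|x + h\|_{1}
  \;=\; \sum_{i \in T_{0}} |x_{i} + h_{i}| + \sum_{i \in T_{0}^{C}} |x_{i} + h_{i}|
  \;\geqslant\; \|x_{T_{0}}\|_{1} - \|h_{T_{0}}\|_{1}
        + \|h_{T_{0}^{C}}\|_{1} - \|x_{T_{0}^{C}}\|_{1},
% and since x_{T_0^C} = x - x^{(k)}, rearranging gives
\|h_{T_{0}^{C}}\|_{1} \;\leqslant\; \|h_{T_{0}}\|_{1} + 2\,\|x - x^{(k)}\|_{1}.
```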
4 Conclusion
In this paper, we have shown that, when a>3/8, conditions (8) and (9) yield several interesting RIC bounds for measurement matrices, such as \(\delta_{2k}\), \(\delta_{2.5k}\), \(\delta_{3k}\), \(\delta_{4k}\), and so on. For an intuitive analysis, we plot the connection between t (:=a+1) and the bound for \(\delta_{tk}\).
From Fig. 1, it is easy to see that the bounds for \(\delta_{tk}\) increase fast for 1.75⩽t⩽3 and exceed 0.9 when t⩾6. In addition, Davies and Gribonval [11] have given detailed counter-examples showing that the bound of \(\delta_{2k}\) cannot exceed \(1/\sqrt{2}\approx 0.7071\). Since 0.5746<0.7071, we wonder whether there is a better way to improve the bound 0.5746 for \(\delta_{2k}\) without the extra assumption \(\delta_{8k}<1\). Further research topics are therefore to remove the extra assumption \(\delta_{8ak}<1\) and to reduce the gap between 0.5746 and 0.7071.
References
Bruckstein, A.M., Donoho, D.L., Elad, M.: From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51, 34–81 (2009)
Baraniuk, R., Davenport, M.A., Duarte, M.F., Hegde, C.: An Introduction to Compressive Sensing (2011)
Cai, T., Wang, L., Xu, G.: New bounds for restricted isometry constants. IEEE Trans. Inform. Theory 56, 4388–4394 (2010)
Cai, T., Wang, L., Xu, G.: Shifting inequality and recovery of sparse signals. IEEE Trans. Signal Process. 58, 1300–1308 (2010)
Cai, T., Zhang, A.: Sharp RIP bound for sparse signal and low-rank matrix recovery. Appl. Comput. Harmon. Anal. (2012)
Candès, E.J.: The restricted isometry property and its implications for compressed sensing. C. R. Acad. Sci., Ser. I 346, 589–592 (2008)
Candès, E.J., Recht, B.: Simple bounds for low-complexity model reconstruction. Math. Program. Ser. A (2012, to appear)
Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)
Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51, 4203–4215 (2005)
Candès, E.J., Tao, T.: The Dantzig selector: statistical estimation when p is much larger than n (with discussion). Ann. Statist. 35, 2313–2351 (2007)
Davies, M.E., Gribonval, R.: Restricted isometry constants where l p sparse recovery can fail for 0<p⩽1. IEEE Trans. Inf. Theory 55, 2203–2214 (2009)
Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)
Divekar, A., Ersoy, O.: Theory and Applications of Compressive Sensing, Electrical and Computer Engineering (2010)
Eldar, Y.C., Kutyniok, G.: Compressed Sensing: Theory and Applications. Cambridge University Press, Cambridge (2012)
Foucart, S.: A note on guaranteed sparse recovery via l q -minimization. Appl. Comput. Harmon. Anal. 29, 97–103 (2010)
Foucart, S., Lai, M.: Sparsest solutions of underdetermined linear systems via l q -minimization for 0<q⩽1. Appl. Comput. Harmon. Anal. 26, 395–407 (2009)
Li, S.X., Gao, F., Ge, G.N., Zhang, S.Y.: Deterministic construction of compressed sensing matrices via algebraic curves. IEEE Trans. Inf. Theory 58, 5035–5041 (2012)
Mo, Q., Li, S.: New bounds on the restricted isometry constant δ 2k . Appl. Comput. Harmon. Anal. 31, 460–468 (2011)
Rauhut, H.: Compressive sensing and structured random matrices. Radon Ser. Comp. Appl. Math. 9, 1–92 (2010)
Zhang, Y.: On theory of compressive sensing via l 1-minimization: simple derivations and extensions. Technical Report TR08-11, Department of Computational and Applied Mathematics, Rice University, Houston, Texas (2008)
Acknowledgements
We thank the two anonymous referees for their very useful comments.
This work was partially supported by the National Basic Research Program of China (No. 2010CB732501), the National Natural Science Foundation of China (No. 11171018), and the Fundamental Research Funds for the Central Universities (No. 2013JBM095).
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Zhou, S., Kong, L. & Xiu, N. New Bounds for RIC in Compressed Sensing. J. Oper. Res. Soc. China 1, 227–237 (2013). https://doi.org/10.1007/s40305-013-0013-z