New Bounds for RIC in Compressed Sensing

This paper gives new bounds for the restricted isometry constant (RIC) in compressed sensing. Let Φ be an m×n real matrix and k a positive integer with k ≤ n. The main results of this paper show that if the restricted isometry constant of Φ satisfies δ_{8ak} < 1 and

$$\delta_{k+ak} < \frac{3}{2} - \frac{1+\sqrt{(4a+3)^2-8}}{8a}$$

for $a > \frac{3}{8}$, then a k-sparse solution can be recovered exactly via l1 minimization in the noiseless case. In particular, for a = 1, 1.5, 2 and 3, the conditions read δ_{2k} < 0.5746 and δ_{8k} < 1; δ_{2.5k} < 0.7046 and δ_{12k} < 1; δ_{3k} < 0.7731 and δ_{16k} < 1; and δ_{4k} < 0.8445 and δ_{24k} < 1, respectively.

Compressed sensing (CS) is concerned with recovering an original n-dimensional but sparse signal from linear measurements of dimension far fewer than n. Recently, a large number of researchers, including applied mathematicians, computer scientists and engineers, have turned their attention to this area owing to its wide applications in signal processing, communications, astronomy, biology, medicine, seismology and so on; see, e.g., the survey papers [1,2,19] and the monograph [14].
The fundamental problem in compressed sensing is to reconstruct a high-dimensional sparse signal from a remarkably small number of measurements. We aim to recover a sparse solution x ∈ R^n of the underdetermined system Φx = y, where y ∈ R^m is the available measurement and Φ ∈ R^{m×n} is a known measurement matrix (with m ≪ n). The mathematical model is to minimize the number of nonzero components of x, i.e., to solve the following l0-norm optimization problem:

min_x ||x||_0  subject to  Φx = y,  (1)

where ||x||_0 is the l0-norm of the vector x ∈ R^n, i.e., the number of nonzero entries in x (this is not a true norm, as ||·||_0 is not positively homogeneous). A vector x with at most k nonzero entries, ||x||_0 ≤ k, is called k-sparse. However, (1) is combinatorial and computationally intractable, and one popular and powerful approach is to solve it via its convex relaxation, the l1 minimization problem

min_x ||x||_1  subject to  Φx = y.  (2)

One of the most commonly used frameworks for sparse recovery via l1 minimization is the Restricted Isometry Property (RIP) introduced by Candès and Tao [9]. For an integer k ∈ {1, 2, ..., n}, the k-restricted isometry constant (RIC) δ_k of a matrix Φ is the smallest number in (0, 1) such that

(1 − δ_k) ||x||_2^2 ≤ ||Φx||_2^2 ≤ (1 + δ_k) ||x||_2^2  (3)

holds for all k-sparse vectors x. We say that Φ has the k-RIP if there is a k-RIC δ_k ∈ (0, 1) such that the above inequalities hold. Furthermore, if for integers k_1, k_2, ..., k_s there exist δ_{k_1}, δ_{k_2}, ..., δ_{k_s} ∈ (0, 1) such that the corresponding inequalities hold, we say that Φ has the {k_1, k_2, ..., k_s}-RIP. Here, the range δ_k ∈ (0, 1) is the one commonly used in the literature (see, e.g., [13,14]), and δ_k is monotone in k (see, e.g., [3,4]), i.e., δ_{k_1} ≤ δ_{k_2} whenever k_1 ≤ k_2. Thus, Φ having the {k_1, k_2, ..., k_s}-RIP is the same as Φ having the max{k_1, k_2, ..., k_s}-RIP.
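As a concrete illustration of the relaxation from (1) to (2), the l1 problem can be solved as a linear program by splitting x into positive and negative parts. The following sketch (the function name and the problem sizes are our own choices, not from the paper) uses SciPy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(Phi, y):
    """Solve (2): min ||x||_1 subject to Phi x = y, as a linear program.

    Standard reformulation: write x = u - v with u, v >= 0, so that
    ||x||_1 = sum(u + v) and the constraint becomes Phi u - Phi v = y."""
    m, n = Phi.shape
    c = np.ones(2 * n)                       # objective: sum of u_i + v_i
    A_eq = np.hstack([Phi, -Phi])            # Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# Try to recover a 2-sparse vector in R^20 from m = 10 Gaussian measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 20)) / np.sqrt(10)
x_true = np.zeros(20)
x_true[[3, 11]] = [1.0, -2.0]
x_hat = l1_minimize(Phi, Phi @ x_true)
print("recovery error:", np.max(np.abs(x_hat - x_true)))
```

The returned vector is feasible and has l1 norm no larger than that of the true signal; for Gaussian measurement matrices of this size, exact recovery is typical.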
In addition, if k + k′ ≤ n, the (k, k′)-restricted orthogonality constant (ROC) θ_{k,k′} is the smallest number satisfying

|⟨Φx, Φx′⟩| ≤ θ_{k,k′} ||x||_2 ||x′||_2

for all k-sparse x and k′-sparse x′ with disjoint supports. Candès and Tao [9] showed the link between the RIC and the ROC: θ_{k,k′} ≤ δ_{k+k′}.
By the definition (3), one observes that

δ_k = max_{|T| ≤ k} ||Φ_T^T Φ_T − I||,

where ||·|| denotes the spectral norm of a matrix (see, e.g., [18]) and Φ_T is the submatrix formed by the columns of Φ indexed by T. Clearly, it is hard to compute RICs for a given matrix Φ, because it essentially requires that every subset of columns of Φ with a certain cardinality approximately behave like an orthonormal system. Moreover, as shown by Zhang [20], for a nonsingular matrix (transformation) Q ∈ R^{n×n}, the RIP constants of Φ and QΦ can be very different. However, a widely used technique for avoiding checking the RIP condition directly is to generate the matrix randomly and to show that the resulting random matrix satisfies the RIP with high probability [17].
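Since δ_k is characterized through the Gram matrices of column submatrices, it can be computed exactly for small matrices by enumeration. A brute-force sketch (illustrative only, as the enumeration is exponential in n):

```python
import itertools
import numpy as np

def ric(Phi, k):
    """Compute the exact k-restricted isometry constant of Phi by brute force.

    Uses delta_k = max over |T| = k of the spectral norm of Phi_T^T Phi_T - I;
    by the monotonicity of delta_k, subsets of size exactly k attain the max."""
    n = Phi.shape[1]
    delta = 0.0
    for T in itertools.combinations(range(n), k):
        gram = Phi[:, list(T)].T @ Phi[:, list(T)]
        eigs = np.linalg.eigvalsh(gram - np.eye(k))   # symmetric, so eigvalsh
        delta = max(delta, float(np.max(np.abs(eigs))))
    return delta

# A matrix with orthonormal columns has delta_k = 0 for every k.
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((8, 5)))
print(ric(Q, 2))   # essentially 0 (up to rounding)
```

For instance, ric(np.sqrt(2) * np.eye(4), 2) equals 1, since every 2-column Gram matrix is 2I.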
Although the RIP condition is difficult to check, it is of independent interest to study bounds for the RIC in CS, since l1-norm minimization can recover a sparse signal under various conditions on δ_k, δ_{2k} and θ_{k,k′}, such as the condition δ_k + θ_{k,k} + θ_{k,2k} < 1 in [9], δ_{2k} + θ_{k,2k} < 1 in [10], and δ_{1.25k} + θ_{k,1.25k} < 1 in [4].
Many previous results in compressed sensing refer to δ_{2k}, probably because a bound on δ_{2k} implies that k-sparse signals remain well separated in the measurement space. The first major result of this sort was established by Candès [6], namely, that δ_{2k} < √2 − 1 is sufficient for k-sparse signal reconstruction. Recently, Cai and Zhang [5] obtained the sufficient condition δ_{2k} < 1/2. To the best of our knowledge, the bound on δ_{2k} for sparse recovery has gradually improved from √2 − 1 (≈ 0.4142) to 0.5 in recent years. The details are listed in Table 1 below.
The main contribution of the present paper is to give new bounds for the RIC in CS in the following theorem. Here, for x ∈ R^n, we define the best k-sparse approximation x^(k) ∈ R^n of x as the vector obtained from x by setting all but its k largest entries (in absolute value) to zero.

Theorem 1 Let x be a feasible solution to (1) and x^(k) be the best k-sparse approximation of x. If the following inequalities hold,

δ_{8ak} < 1  (8)

and

δ_{k+ak} < 3/2 − (1 + √((4a+3)^2 − 8)) / (8a)  (9)

for a > 3/8, then the solution x̂ to the l1 minimization problem (2) satisfies the error bound (10) for some positive constant C_0 < 1 given explicitly by (16). In particular, if x is k-sparse, the recovery is exact.
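The numerical values quoted below for particular choices of a can be checked directly from (9); a small script (the function name ric_bound is ours):

```python
import math

def ric_bound(a):
    """Right-hand side of condition (9): the bound on delta_{(1+a)k}, a > 3/8."""
    return 1.5 - (1.0 + math.sqrt((4 * a + 3) ** 2 - 8)) / (8 * a)

for a in (1, 1.5, 2, 3):
    print(f"a = {a}: delta_{{{1 + a}k}} < {ric_bound(a):.4f} "
          f"(with delta_{{{8 * a}k}} < 1)")
```

For a = 1 this evaluates to 0.5746, and for a = 2 and 3 to 0.7731 and 0.8445, matching the theorem's discussion.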
From Theorem 1, when a = 1, 1.5, 2 and 3, we get the bounds δ_{2k} < 0.5746, δ_{2.5k} < 0.7046, δ_{3k} < 0.7731 and δ_{4k} < 0.8445 under the corresponding assumption δ_{8ak} < 1. Observing Table 1, under the extra assumption δ_{8ak} < 1, our conditions are all weaker than the ones known in the literature. Note that the k-RIP condition implies that every subset of columns of Φ with cardinality at most k approximately behaves like an orthonormal system. In the context of (large-scale) sparse optimization, one typically has k ≪ n. Recently, Candès and Recht [7] showed that a k-sparse vector in R^n can be efficiently recovered from 2k log n measurements with high probability, i.e., m = O(2k log n). In this case, 8ak is less than m for small enough a. Thus, 8ak < m and 8ak ≪ n make sense, and our extra assumption is meaningful and valuable in large-scale sparse optimization.
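A quick back-of-the-envelope check of this scaling, with illustrative sizes n = 10^6 and k = 100 (our own choices) and the natural logarithm:

```python
import math

# Illustrative problem sizes: ambient dimension n, sparsity k.
n, k = 10**6, 100
m = math.ceil(2 * k * math.log(n))     # m = O(2k log n) measurements as in [7]
for a in (1, 1.5, 2, 3):
    print(f"a = {a}: 8ak = {8 * a * k:g} < m = {m} << n = {n}: "
          f"{8 * a * k < m < n}")
```

Here 8ak stays below m for each value of a considered in Theorem 1, and far below n.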
The organization of this paper is as follows. In the next section, we establish some key inequalities. In Sect. 3, we prove our main result. In Sect. 4, we conclude this paper with some remarks.

Key Inequalities
In this section, we give some inequalities that play an important role in improving the RIC bound for sparse recovery in this paper.
We begin with the following interesting and important inequality, which connects the norms l0, l1, l2, l∞ and l−∞. Here, we define the ||x||_{−∞} norm as ||x||_{−∞} := min_i {|x_i|}. (In fact, l−∞ is not a norm, since the triangle inequality does not hold.) For convenience, we call (11) the Norm Inequality; it is essentially (6) in [3].

Proposition 1 (Norm Inequality)
For any x ∈ R^n with x ≠ 0,

||x||_2 ≤ ||x||_1 / √(||x||_0) + (√(||x||_0) / 4) (||x||_∞ − ||x||_{−∞}).  (11)

Furthermore, a more general inequality of the same type can be obtained.

Throughout the paper, let x̂ be a solution to the minimization problem (2), and let x ∈ R^n be a feasible one, i.e., Φx = y. Clearly, ||x̂||_1 ≤ ||x||_1. We let x^(k) ∈ R^n be the best k-sparse approximation of x, as defined above. Without loss of generality, we assume that the support of x^(k) is T_0.
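Assuming (11) takes the form ||x||_2 ≤ ||x||_1/√(||x||_0) + (√(||x||_0)/4)(||x||_∞ − ||x||_{−∞}), with ||x||_{−∞} taken over the nonzero entries (our reading of the Norm Inequality of [3]), a Monte Carlo sanity check:

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    n, s = 20, int(rng.integers(1, 11))          # sparsity level s in 1..10
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x[support] = rng.standard_normal(s)
    lhs = np.linalg.norm(x)                       # ||x||_2
    rhs = (np.abs(x).sum() / np.sqrt(s)           # ||x||_1 / sqrt(||x||_0)
           + np.sqrt(s) / 4
           * (np.abs(x).max() - np.abs(x[support]).min()))
    assert lhs <= rhs + 1e-12
print("Norm Inequality holds on 1000 random sparse draws")
```

Equality is attained, e.g., for vectors whose nonzero entries all have the same magnitude.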
Denote h = x̂ − x, and let h_T be the vector equal to h on an index set T and zero elsewhere. We decompose h into a sum of vectors h_{T_0}, h_{T_1}, h_{T_2}, ..., where T_1 corresponds to the locations of the ak largest coefficients of h_{T_0^C} (so that T_0^C = T_1 ∪ T_2 ∪ ···), T_2 to the locations of the 4ak largest coefficients of h_{(T_0∪T_1)^C}, T_3 to the locations of the next 4ak largest coefficients of h_{(T_0∪T_1)^C}, and so on. That is,

h = h_{T_0} + h_{T_1} + h_{T_2} + ···.  (12)

Here, the sparsity of h_{T_0} is at most k, the sparsity of h_{T_1} is at most ak, and the sparsity of each h_{T_j} (j ≥ 2) is at most 4ak.
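The block decomposition (12) is easy to carry out numerically. In the sketch below (our own helper; for illustration, T_0 is taken as the k largest entries of |h| rather than the support of x^(k)):

```python
import numpy as np

def partition(h, k, ak):
    """Partition the indices of h as in (12): T0, then T1 (the ak largest
    coefficients outside T0), then blocks T2, T3, ... of the next 4*ak
    largest coefficients each."""
    order = np.argsort(-np.abs(h))        # all indices, by decreasing magnitude
    blocks = [order[:k], order[k:k + ak]]
    rest = order[k + ak:]
    while rest.size > 0:
        blocks.append(rest[:4 * ak])
        rest = rest[4 * ak:]
    return blocks

h = np.random.default_rng(3).standard_normal(20)
T = partition(h, k=2, ak=2)
print([len(t) for t in T])    # block sizes: [2, 2, 8, 8]
```

The blocks form a disjoint cover of {1, ..., n}, and every coefficient in T_1 dominates (in magnitude) every coefficient in T_2, as the decomposition requires.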
In order to obtain a new bound on the RIC, for the above decomposition (12) we define the quantity ρ in (13). Obviously, ρ ∈ [0, 1], and the inequalities (14) and (15) hold. Applying the Norm Inequality, we can derive several inequalities for h that are very useful in the proof of our main results.
At the end of this section, we give two lemmas that relate the norms of Φh_{T_j} and h_{T_j}.

Lemma 2
Let h_{T_0} and h_{T_1} be given by (12). Then the following estimate holds.

Proof Because the supports T_0 and T_1 are disjoint, we have the equality ||h_{T_0} + h_{T_1}||_2^2 = ||h_{T_0}||_2^2 + ||h_{T_1}||_2^2; the second inequality in the chain is derived from (11). □

Lemma 3
Let h_{T_0}, h_{T_1}, h_{T_2}, ..., and ρ be given by (12) and (13), respectively. Then the following estimate holds.

Proof By direct calculation, we obtain a chain of estimates in which the first inequality holds by the triangle inequality, the second holds due to (3) and (6), and the third follows from (14) and (15); the first equality holds by direct expansion. Hence, the desired result follows. □

Proof of the Main Result
In this section, we prove our main result. For simplicity, we first define a strictly concave quadratic function f(ρ) of the variable ρ; its maximum value is easily obtained by setting its derivative to zero. Moreover, we denote by C_0 the constant given explicitly in (16). Before proving our main result, we show that the RIP bound (9) is a sufficient condition for C_0 < 1.

Lemma 4 If (8) and (9) hold, then C_0 < 1.

Proof From (9), it is easy to verify the equivalent inequality. Since 0 ≤ δ_{8ak} < 1, by (16) we obtain C_0 < 1 whenever (9) holds. □
Now we begin to prove our main result.
Proof of Theorem 1 The proof proceeds in two steps, which is a common approach in the literature [4,6]. The first step is to prove (17); the second step shows that ||x̂ − x||_1 is appropriately small. For the first step, we note that Φh = 0, which implies ||Φ(h_{T_0} + h_{T_1})||_2 = ||Φ(h_{T_2} + h_{T_3} + ···)||_2. From Lemmas 2 and 3 we then obtain an inequality whose first inequality is derived from (13); combining it with (16), we get (17).
For the second step, the first inequality holds by (12) in [6]; this together with (17) yields (10). This completes the proof of (10).
In particular, if x is k-sparse, then x − x^(k) = 0, and hence x = x̂ by (10). □

Conclusion
In this paper, we have shown that, for a > 3/8, conditions (8) and (9) yield several interesting RIC bounds for measurement matrices, such as the bounds on δ_{2k}, δ_{2.5k}, δ_{3k} and δ_{4k}. For an intuitive illustration, we plot the curve relating t (:= a + 1) to the bound for δ_{tk} in Fig. 1 (the curve of bounds for δ_{tk}). From Fig. 1, it is easy to see that the bounds for δ_{tk} increase quickly for 1.75 ≤ t ≤ 3 and exceed 0.9 when t ≥ 6. In addition, Davies and Gribonval [11] have given detailed counterexamples showing that the bound for δ_{2k} cannot exceed 1/√2 ≈ 0.7071. Since 0.5746 < 0.7071, we wonder whether there is a better way to improve the bound 0.5746 for δ_{2k} without the extra assumption δ_{8k} < 1. Further research topics are therefore to remove the extra assumption δ_{8ak} < 1 and to reduce the gap between 0.5746 and 0.7071.
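The behavior of the curve in Fig. 1 can be reproduced numerically from (9) with t = a + 1; a short sketch (the function name bound is ours):

```python
import math

def bound(t):
    """Bound on delta_{tk} from (9), reparameterized by t = a + 1 (t > 11/8)."""
    a = t - 1
    return 1.5 - (1 + math.sqrt((4 * a + 3) ** 2 - 8)) / (8 * a)

# Tabulate the bound along the range shown in Fig. 1.
for t in (1.75, 2, 2.5, 3, 4, 6, 8):
    print(f"t = {t}: delta_{{tk}} < {bound(t):.4f}")
```

The tabulated values confirm the observations above: the bound rises quickly between t = 1.75 and t = 3, and passes 0.9 by t = 6.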