A linearly convergent algorithm for sparse signal reconstruction

For the sparse signal reconstruction problem arising in compressive sensing, we propose a projection-type algorithm without any backtracking line search, based on a new formulation of the problem. Under suitable conditions, the global convergence and linear convergence rate of the designed algorithm are established. The efficiency of the algorithm is illustrated through numerical experiments on sparse signal reconstruction problems.

Obviously, the function ‖x‖_1 is convex although it is not differentiable. For the convex optimization problem (1.1), there are some standard methods, such as smoothing Newton-type methods and interior-point methods, for solving the ℓ1-minimization. Yin et al. [66] proposed an efficient method for solving the ℓ1-minimization problem based on Bregman iterative regularization. Hale et al. [16] presented a framework for solving the large-scale ℓ1-regularized convex minimization problem based on operator splitting and continuation. However, these solvers are not tailored for large-scale cases of CS, and they become inefficient as the dimension n increases. To overcome this drawback, Figueiredo et al. [14] proposed a gradient projection-type algorithm with a backtracking line search for a bound-constrained quadratic programming formulation of (1.1). A similar algorithm based on a conjugate gradient technique was proposed by Xiao and Zhu [61]. For more details, see [4,5,9-12,17,20,24,26,31,34,36,39,41,42,56,58,59,62,68]. Due to the high computing cost of the line search procedure, in this paper we propose a new type of projection algorithm for problem (1.1) without a line search at each iteration, which reduces the per-iteration computing cost of the algorithm.

(This project is supported by the Natural Science Foundation of China (Grants no. 11801309, 11671228, 11601261) and the Natural Science Foundation of Shandong Province (Grant no. ZR2016AQ12).)
The remainder of this paper is organized as follows. Some equivalent reformulations of problem (1.1) are established in Sect. 2. In Sect. 3, we propose a new projection-type algorithm without line search, and establish the global convergence of the new algorithm and its linear convergence rate. In Sect. 4, some numerical experiments on compressive sensing are given to illustrate the efficiency of the proposed method. Some concluding remarks are drawn in Sect. 5.
To end this section, some notations used in this paper are in order. We use R^n_+ to denote the nonnegative orthant in R^n, and use x_+ to denote the orthogonal projection of a vector x ∈ R^n onto R^n_+, that is, (x_+)_i := max{x_i, 0}, 1 ≤ i ≤ n; the norms ‖·‖ and ‖·‖_1 denote the Euclidean 2-norm and the 1-norm, respectively.
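The notation above can be made concrete with a few lines of NumPy. This is only an illustrative sketch of the operations just defined (the componentwise projection x_+ and the two norms); the function name `project_nonneg` is ours, not the paper's.

```python
import numpy as np

def project_nonneg(x):
    """Orthogonal projection of x onto the nonnegative orthant R^n_+:
    (x_+)_i = max{x_i, 0} componentwise."""
    return np.maximum(x, 0.0)

x = np.array([1.5, -2.0, 0.0, 3.0])
print(project_nonneg(x))        # componentwise max with 0
print(np.linalg.norm(x))        # Euclidean 2-norm ||x||
print(np.linalg.norm(x, 1))     # 1-norm ||x||_1 = 6.5
```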

New formulation and algorithm
To propose a new projection-type algorithm for problem (1.1), we first establish a new equivalent reformulation. To this end, we define two nonnegative auxiliary variables μ_i and ν_i (i = 1, 2, . . . , n) such that

x_i = μ_i − ν_i, μ_i = (x_i)_+, ν_i = (−x_i)_+, i = 1, 2, . . . , n,

so that ‖x‖_1 = e^T μ + e^T ν. Then, problem (1.1) can be reformulated as

min_{μ ≥ 0, ν ≥ 0} (1/2)‖y − A(μ − ν)‖^2 + ρ e^T μ + ρ e^T ν, (2.1)

where e ∈ R^n denotes the vector with all entries being 1, i.e., e = (1, 1, . . . , 1)^T. Based on this, the problem can be simplified as

min { f(μ; ν) := (1/2) z^T M z + q^T z : z = (μ; ν) ∈ R^{2n}_+ }, (2.2)

where

M = [ A^T A, −A^T A; −A^T A, A^T A ], q = ρ e + (−A^T y; A^T y).

Obviously, the Hessian matrix M of the quadratic function f(μ; ν) is positive semi-definite. By optimization theory [1], we know that the stationary points of (2.2) coincide with its solutions, which in turn coincide with the solution set of the following linear variational inequality problem: find (μ; ν)* ∈ R^{2n}_+ such that

⟨M(μ; ν)* + q, (μ; ν) − (μ; ν)*⟩ ≥ 0 for all (μ; ν) ∈ R^{2n}_+. (2.3)

Obviously, the solution set of (2.3), denoted by Ω*, is nonempty provided that the solution set of (1.1) is nonempty.
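The reformulation above can be checked numerically. The sketch below builds M and q from the standard splitting x = μ − ν (as in the bound-constrained QP formulation of [14]) and verifies that M is positive semi-definite; the function name `qp_data` and the small random instance are ours, for illustration only.

```python
import numpy as np

def qp_data(A, y, rho):
    """Data of the reformulation min (1/2) z^T M z + q^T z, z = (mu; nu) >= 0,
    with M = [[A^T A, -A^T A], [-A^T A, A^T A]] and q = rho*e + (-A^T y; A^T y)."""
    AtA = A.T @ A
    Aty = A.T @ y
    M = np.block([[AtA, -AtA], [-AtA, AtA]])
    q = rho * np.ones(2 * A.shape[1]) + np.concatenate([-Aty, Aty])
    return M, q

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
y = rng.standard_normal(5)
M, q = qp_data(A, y, 0.01)

# M is positive semi-definite by construction (eigenvalues >= 0 up to round-off)
assert np.min(np.linalg.eigvalsh(M)) > -1e-10
```

With μ = x_+ and ν = (−x)_+, a direct computation shows f(μ; ν) equals (1/2)‖y − Ax‖^2 + ρ‖x‖_1 up to the constant −(1/2)‖y‖^2, so minimizing (2.2) recovers a solution of (1.1).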
To proceed, we give the definition of the projection operator and some related properties. For a nonempty closed convex set K ⊂ R^n and a vector x ∈ R^n, the orthogonal projection of x onto K, i.e., arg min{‖y − x‖ : y ∈ K}, is denoted by P_K(x).

Proposition 2.1 [1,67]. Let K be a closed convex subset of R^n. For any x, y ∈ R^n and z ∈ K, the following statements hold:
(i) ⟨P_K(x) − x, z − P_K(x)⟩ ≥ 0;
(ii) ‖P_K(x) − P_K(y)‖ ≤ ‖x − y‖;
(iii) ‖P_K(x) − z‖^2 ≤ ‖x − z‖^2 − ‖x − P_K(x)‖^2.
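The statement list of Proposition 2.1 is garbled in this extract; the properties invoked later (via "Proposition 2.1 (i)") are the standard variational characterization and nonexpansiveness of P_K. A quick numerical sanity check for K = R^n_+, where P_K is just the componentwise positive part (random points of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
K_proj = lambda v: np.maximum(v, 0.0)   # P_K for K = R^n_+

x = rng.standard_normal(6)
y = rng.standard_normal(6)
z = np.abs(rng.standard_normal(6))      # an arbitrary point of K

# (i) variational characterization: <P_K(x) - x, z - P_K(x)> >= 0 for all z in K
assert (K_proj(x) - x) @ (z - K_proj(x)) >= -1e-12

# (ii) nonexpansiveness: ||P_K(x) - P_K(y)|| <= ||x - y||
assert np.linalg.norm(K_proj(x) - K_proj(y)) <= np.linalg.norm(x - y) + 1e-12
```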
For problem (2.3) and (μ; ν) ∈ R^{2n}, define the projection residue

r(μ; ν) := (μ; ν) − ((μ; ν) − (M(μ; ν) + q))_+.

The projection residue is intimately related to the solution of (2.3), as shown in the following conclusion [35].

Proposition 2.2. (μ; ν)* is a solution of (2.3) if and only if its projection residue vanishes, i.e., r((μ; ν)*) = 0.
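A short sketch of this optimality test (the function name `residual` is ours; the paper's residue may additionally carry a step-size parameter, so this is the unit-step form):

```python
import numpy as np

def residual(z, M, q):
    """Projection residue r(z) = z - (z - (M z + q))_+ for the LVI (2.3)
    over the nonnegative orthant. By Proposition 2.2, z* solves (2.3)
    iff r(z*) = 0."""
    return z - np.maximum(z - (M @ z + q), 0.0)

# sanity check: for M = I and q >= 0, z* = 0 satisfies the LVI, so r(0) = 0
M, q = np.eye(4), np.ones(4)
print(np.linalg.norm(residual(np.zeros(4), M, q)))  # 0.0
```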
Based on the discussion above, we may formally state our algorithm. Algorithm 3.1.
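The explicit steps of Algorithm 3.1 are lost in this extract (the analysis below refers to a hyperplane H_k and a projection step (2.7) that are not reproduced here). As a hedged stand-in, the following is a minimal fixed-step projected-gradient sketch for reformulation (2.2), using a constant step β as in the experiments of Sect. 4 and no line search; it illustrates the line-search-free flavor of the method but is not the authors' exact scheme.

```python
import numpy as np

def projected_gradient(M, q, z0, beta, tol=1e-8, max_iter=5000):
    """Fixed-step projected gradient for min (1/2) z^T M z + q^T z, z >= 0:
    z_{k+1} = (z_k - beta * (M z_k + q))_+, with no line search."""
    z = np.maximum(z0, 0.0)
    for _ in range(max_iter):
        z_new = np.maximum(z - beta * (M @ z + q), 0.0)
        if np.linalg.norm(z_new - z) <= tol:
            return z_new
        z = z_new
    return z

# tiny separable instance: min (1/2)||z||^2 + q^T z, z >= 0, whose
# solution is z* = (-q)_+ componentwise
M = np.eye(4)
q = np.array([1.0, -1.0, 2.0, -0.5])
z = projected_gradient(M, q, np.ones(4), beta=0.8 / np.linalg.norm(M, 2))
```

For this instance the iterates contract linearly toward z* = (0, 1, 0, 0.5), mirroring the linear convergence rate established for Algorithm 3.1.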

Convergence
To establish the convergence and convergence rate of Algorithm 3.1, we need the following conclusions.
Proof. Since the matrix M is positive semi-definite, one has

Combining this with (2.3) yields

Then, by Proposition 2.1 (i), a direct computation gives

Proof. By a direct computation, one has

where the first equality follows from (2.7), the first inequality follows from Proposition 2.1, the second inequality follows from (3.1), the third inequality follows from the fact that (μ; ν)^{k+1} ∈ H_k, and the fourth inequality uses the Cauchy–Schwarz inequality.

Now, we are in a position to state our main results in this section.
In the following analysis, we assume that the sequence {(μ; ν)^k} is an infinite sequence. From Theorem 3.1, we know that

lim_{k→∞} (μ; ν)^k = (μ̄; ν̄). (3.7)

Let x̄ = μ̄ − ν̄. Then a direct computation gives

where the second and third inequalities use the fact that

Thus, the sequence {x^k} converges globally to a solution of (1.1).

Numerical experiments
In this section, we provide some numerical tests to show the efficiency of the proposed method. In our numerical experiments, we set ρ = 0.01, n = 2^11, m = floor(n/a), k = floor(m/b), and the measurement matrix A is generated by Matlab scripts. The original signal x̄ is generated by p = randperm(n); x(p(1:k)) = randn(k,1), and the observed signal y is generated by y = A x̄ + n̄, where n̄ is drawn from a standard Gaussian distribution N(0, 1) and then normalized to have norm σ = 0.01 or 0.001. In our numerical experiments, the stopping criterion is stated in terms of f_k, the objective value of (1.1) at the iterate x^k. For Algorithm 3.1, we set t = 0.4 and β = 0.8/‖M‖. In addition, the initial points are μ^0 = max{0, A^T y} and ν^0 = max{0, −A^T y}. For the conjugate gradient descent (denoted by CGD) method proposed recently by Xiao and Zhu in [61], we set ξ = 10, σ = 10^{−4} and ρ = 0.5 in the line search (2.9) of CGD, and the initial points μ^0, ν^0 are set the same as in Algorithm 3.1. In each test, we calculate the relative error

RelErr := ‖x̃ − x̄‖ / ‖x̄‖,

where x̃ denotes the recovered signal. The numerical results are reported in Tables 1 and 2, from which we can see that Algorithm 3.1 performs much better than the CGD method for all σ and (a, b).
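The data-generation protocol above translates directly to NumPy. The Matlab script for A is elided in this extract, so a plain Gaussian matrix (a common choice in CS experiments) is assumed below; the pair (a, b) = (4, 8) is an illustrative value of ours, not necessarily one the paper tests.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 11
a, b = 4, 8                       # illustrative (a, b); the paper varies these
m = n // a                        # floor(n/a)
k = m // b                        # floor(m/b)

# the extract elides the Matlab script for A; a Gaussian matrix is assumed
A = rng.standard_normal((m, n)) / np.sqrt(m)

# original k-sparse signal: randn values at k random positions
x_bar = np.zeros(n)
p = rng.permutation(n)
x_bar[p[:k]] = rng.standard_normal(k)

# observation y = A x_bar + noise, with the noise rescaled to norm sigma
sigma = 0.01
noise = rng.standard_normal(m)
noise *= sigma / np.linalg.norm(noise)
y = A @ x_bar + noise

# relative error of a recovered signal x_tilde
rel_err = lambda x_tilde: np.linalg.norm(x_tilde - x_bar) / np.linalg.norm(x_bar)
```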

Conclusion
In this paper, we proposed a new projection-type algorithm without backtracking line search for solving the compressive sensing (CS) problem (1.1). Its global convergence and linear convergence rate were established. Numerical results were provided to illustrate the efficiency of the proposed method.