New inertial relaxed method for solving split feasibilities

In this paper, we introduce a relaxed CQ method with an alternated inertial step for solving split feasibility problems. We prove convergence of the sequence generated by our method under suitable assumptions. Numerical experiments in sparse signal recovery and image deblurring are reported to demonstrate the efficiency of our method.


Introduction
Censor and Elfving [12] introduced the following split convex feasibility problem (SCFP), see also [11]:

find x* ∈ C such that Ax* ∈ Q, (1)

where A : R^k → R^m is a bounded linear operator and C ⊆ R^k and Q ⊆ R^m are nonempty, closed and convex sets. Hereafter, we let S denote the solution set of the SCFP (1).
The SCFP was originally introduced in Euclidean spaces, and was later extended to infinite-dimensional spaces and applied successfully to intensity-modulated radiation therapy (IMRT) treatment planning; see [11-13,15].
Weak convergence of the CQ method

x_{n+1} = P_C(x_n − λ A^t(I − P_Q)Ax_n), (2)

is guaranteed under the assumption that λ ∈ (0, 2/‖A‖²). Hence, an implementation of (2) requires an estimate of the norm of the bounded linear operator A, or of the spectral radius of the matrix A^t A in the finite-dimensional setting. This fact might affect the applicability of the method in practice; see [26, Theorem 2.3]. To circumvent this difficulty, López et al. [30] introduced a modification of the CQ method (2), replacing the step size λ in (2) with the adaptive step

λ_n = ρ_n f(x_n)/‖∇f(x_n)‖², (3)

where ρ_n ∈ (0, 4), f(x_n) = (1/2)‖(I − P_Q)Ax_n‖² and ∇f(x_n) = A^t(I − P_Q)Ax_n for all n ≥ 1. There exist many other modifications of the CQ algorithm; see, for example, [20,23,24,44,49]. Following the heavy-ball method of Polyak [39], Nesterov [37] introduced the inertial iterative step

y_n = x_n + θ_n(x_n − x_{n−1}), x_{n+1} = y_n − λ_n ∇F(y_n), (4)

for minimizing a smooth convex function F, where θ_n ∈ [0, 1) is an inertial factor and λ_n is a positive step-size sequence. Numerical experiments in the field of image reconstruction have shown that (4) and related methods, such as [1,2,6-8,18,21,31,32,34], greatly improve the performance of their non-inertial counterparts, that is, the case θ_n = 0. Hence such schemes are referred to as inertial algorithms.
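The adaptive step of López et al. avoids any estimate of ‖A‖: the step size is built from quantities already computed in the iteration. A minimal sketch in Python (the function name and the toy projections in the test below are our own illustrative assumptions, not the paper's code):

```python
import numpy as np

def cq_step_adaptive(x, A, proj_C, proj_Q, rho=1.0):
    """One CQ iteration with the adaptive step size of Lopez et al.:
    lambda_n = rho * f(x_n) / ||grad f(x_n)||^2, with rho in (0, 4)."""
    Ax = A @ x
    r = Ax - proj_Q(Ax)                 # (I - P_Q) A x
    f_val = 0.5 * np.dot(r, r)          # f(x) = 0.5 ||(I - P_Q) A x||^2
    grad = A.T @ r                      # grad f(x) = A^t (I - P_Q) A x
    g2 = np.dot(grad, grad)
    if g2 == 0.0:                       # x already solves the SCFP
        return x
    lam = rho * f_val / g2              # no knowledge of ||A|| needed
    return proj_C(x - lam * grad)
```

Note how the step size adapts automatically: far from the solution f(x) is large and the step is long; near the solution the step shrinks.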
In this spirit, several inertial-type methods for solving SCFPs have been proposed recently; see [16,44-47,51], to name just a few. In particular, Dang et al. [18] (see also [17]) proposed inertial relaxed CQ algorithms, (5) and (6), for solving SCFPs. An important observation regarding the above inertial methods [16-18,44-47,51] is that the sequence {x_n} they generate does not behave monotonically with respect to x* ∈ S and can move or swing back and forth around S; see, for example, [5,31]. This may explain why such an inertial extrapolation step does not always converge faster than its non-inertial counterpart; see, e.g., [33]. To resolve this issue, an alternated inertial method was introduced recently in [36]. This alternated inertial method was shown to exhibit attractive behaviour in practice, including monotonicity of {‖x_{2n} − x*‖}; see [27,28] for more details. Motivated by the above works, we propose a new relaxed CQ method with an alternated inertial procedure for solving SCFPs. We establish global convergence of our scheme under easily verifiable assumptions. Moreover, the parameter controlling the inertial factor θ_n can be chosen as close to 1 as desired (when μ tends to zero in (10)). This contrasts with many related methods that restrict it to be strictly less than 1; see, e.g., [16-18,44-47,51]. The outline of the paper is as follows. Definitions, basic concepts and useful results are presented in Sect. 2. The method and its analysis are given in Sect. 3, and numerical experiments in signal processing that illustrate the effectiveness and applicability of the proposed scheme are presented in Sect. 4. Final remarks are given in Sect. 5.

Preliminaries
We start by recalling some definitions and basic results.
It is shown in [25] that T is firmly nonexpansive if and only if I − T is firmly nonexpansive.
Let C be a nonempty, closed and convex subset of R^k. For any point u ∈ R^k, there exists a unique point P_C u ∈ C such that

‖u − P_C u‖ ≤ ‖u − w‖ for all w ∈ C.

Some important properties of the metric projection are listed next; for these and more, see [4]. It is known that P_C is a firmly nonexpansive mapping of R^k onto C. It is also known that P_C satisfies

‖P_C u − w‖² ≤ ‖u − w‖² − ‖u − P_C u‖² for all w ∈ C.

Furthermore, P_C u is characterized by the property

⟨u − P_C u, w − P_C u⟩ ≤ 0 for all w ∈ C.

Let f : R^k → R be a convex function. An element g ∈ R^k is said to be a subgradient of f at x ∈ R^k if

f(y) ≥ f(x) + ⟨g, y − x⟩ for all y ∈ R^k.

The subdifferential of f at x, ∂f(x), is defined as the set of all subgradients of f at x. The next basic lemma is useful for our analysis.

Lemma 2.2 Let x, y ∈ R^k. Then ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩.

The algorithm
In light of [22], we consider a relaxed CQ method with an alternated inertial extrapolation step in which C and Q in (1) are level sets of convex functions, namely

C = {x ∈ R^k : c(x) ≤ 0} and Q = {y ∈ R^m : q(y) ≤ 0},

where c : R^k → R and q : R^m → R are convex functions. By [3, Fact 7.2 (iii)], c and q are subdifferentiable on C and Q, respectively, and ∂c and ∂q are bounded on bounded sets.
For n ≥ 1, define the relaxed half-space sets

C_n = {x ∈ R^k : c(w_n) + ⟨ξ_n, x − w_n⟩ ≤ 0}, ξ_n ∈ ∂c(w_n),

and

Q_n = {y ∈ R^m : q(Aw_n) + ⟨η_n, y − Aw_n⟩ ≤ 0}, η_n ∈ ∂q(Aw_n).

Since C_n and Q_n are half-spaces, the projections onto these sets have closed formulas and hence are easy to compute. From now on we define, for all x ∈ R^k,

f_n(x) = (1/2)‖(I − P_{Q_n})Ax‖² and ∇f_n(x) = A^t(I − P_{Q_n})Ax.

Algorithm 1.
1: Choose starting points x_0, x_1 ∈ R^k and set n := 1.
2: Given the iterates x_n and x_{n−1}, compute w_n by the alternated inertial step (11).
3: Compute x_{n+1} by (12), where τ_n = γ l^{m_n} and m_n is the smallest non-negative integer m such that the Armijo-type line-search condition holds.
4: Set n ← n + 1, and go to Step 2.
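The overall structure of the scheme can be sketched as follows. This is a sketch only: the specific backtracking test used here is a standard one from the relaxed CQ literature, and the toy projections and parameter values are our own assumptions, not the paper's exact specification.

```python
import numpy as np

def alternated_inertial_relaxed_cq(x0, x1, A, proj_Cn, proj_Qn, theta=0.9,
                                   gamma=1.0, l=0.5, mu=0.5, iters=100):
    """Sketch of an alternated inertial relaxed CQ iteration: the inertial
    extrapolation is applied only on alternate iterations, and the step
    size tau is found by Armijo-type backtracking (an assumed test)."""
    def grad_f(w):
        Aw = A @ w
        return A.T @ (Aw - proj_Qn(Aw))   # grad f_n(w) = A^t (I - P_{Q_n}) A w
    x_prev, x = x0, x1
    for n in range(1, iters + 1):
        # alternated inertia: extrapolate only every other iteration
        w = x + theta * (x - x_prev) if n % 2 == 1 else x
        g = grad_f(w)
        tau = gamma
        z = proj_Cn(w - tau * g)
        # backtrack until tau ||grad f(w) - grad f(z)|| <= mu ||w - z||
        while np.linalg.norm(w - z) > 0 and \
              tau * np.linalg.norm(g - grad_f(z)) > mu * np.linalg.norm(w - z):
            tau *= l
            z = proj_Cn(w - tau * g)
        x_prev, x = x, z
    return x
```

Because the extrapolation is applied only on alternate iterations, the even-indexed subsequence retains the monotone (Fejér-type) behaviour discussed in Remark 3.1.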

Remark 3.1 (a)
As mentioned in the introduction, adding the inertial extrapolation steps in (5) and (6) to the classical CQ algorithm (2) yields a sequence {x_n} that can move or swing back and forth around S, so that monotonicity of {‖x_n − x*‖}, x* ∈ S, is lost. This can affect the convergence speed of CQ methods with an inertial extrapolation step, which sometimes do not even converge faster than the original CQ methods. In order to circumvent this and regain monotonicity to some extent (see Lemma 3.2 below), we introduce the alternated inertial extrapolation step (11).
(b) Observe that if θ_n = 0, then Algorithm 1 reduces to the methods proposed in [23,40,50]. (c) Our scheme allows the parameter controlling the inertial factor θ_n to be chosen as close to 1 as desired, when μ tends to zero in (10). This is more flexible than the methods in [16-18,44-47,51]. In general, a judicious choice of θ_n in Step 2 of Algorithm 1 accelerates our method. (d) Observe that we use an Armijo line-search rule in Algorithm 1, similar to that of [23]; hence, following [23, Lemma 3.1], the search rule in Algorithm 1 terminates after a finite number of inner iterations.
We next show that the sequence of odd-indexed terms {x_{2n+1}} converges to x*. Note that since lim_{n→∞} ‖x_{2n} − x*‖ exists and lim_{j→∞} ‖x_{2n_j} − x*‖ = 0, we get lim_{n→∞} ‖x_{2n} − x*‖ = 0.
Therefore, x* is unique. Following the same arguments as in (14)-(18), one can show that the corresponding estimates hold for the odd-indexed subsequence as well, and the desired convergence follows. We give the following remark on our results.

Remark 3.4 (a)
When the vanilla inertial extrapolation step (the case where w_n in (11) is computed as w_n = x_n + θ_n(x_n − x_{n−1}) for every n) is added to methods for solving the SCFP (1), the Fejér monotonicity of the generated sequence {x_n} with respect to S is lost. In our results in Lemma 3.2, we recover the Fejér monotonicity of {x_{2n}} with respect to S. This is one of the interesting properties of methods with an alternated extrapolation step for solving the SCFP (1). (b) Our proofs of Lemma 3.2 and Theorem 3.3 are simpler than, and different from, the proofs given in other papers (see, e.g., [16-18,44-47,51]) that solve the SCFP (1) using the vanilla inertial extrapolation step. ♦

Numerical experiments
In this section, we use the SCFP (1) to model two real-world problems: the recovery of a sparse signal and image deblurring.
We make use of the well-known LASSO problem [48]:

min_{x ∈ R^k} (1/2)‖Ax − b‖² subject to ‖x‖₁ ≤ t, (33)

where A ∈ R^{m×k} with m < k, b ∈ R^m and t > 0. Problem (33) offers the potential of finding a sparse solution of the SCFP (1) due to the ℓ₁ constraint.

Example 4.1
The first problem focuses on finding a sparse solution of the SCFP (1).
We illustrate the advantages of our proposed scheme by comparing it with related methods from the literature, namely those of [23,40,50]. For the experiments, the matrix A is generated from a normal distribution with zero mean and unit variance. The true sparse signal x* has K nonzero entries, drawn uniformly from the interval [−2, 2] at randomly chosen positions, while the remaining entries are zero. The sample data are b = Ax* (no noise is assumed).
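The experimental data described above can be generated as follows (a sketch; the random seed and variable names are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, K = 120, 512, 20                   # dimensions used in the experiments
A = rng.standard_normal((m, k))          # entries ~ N(0, 1)
x_true = np.zeros(k)
support = rng.choice(k, size=K, replace=False)   # K random positions
x_true[support] = rng.uniform(-2.0, 2.0, size=K) # nonzeros uniform in [-2, 2]
b = A @ x_true                            # noiseless observations b = A x*
```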
In the implementations we choose the parameters γ = 1, l = μ = 0.5, and the constant step size 0.9·(2/L) for the relaxed CQ algorithm of [50]. These parameter choices are arbitrary but theoretically valid; the goal here is simply to illustrate the performance of the methods. In a real-world scenario, one would of course carry out a deeper investigation, involving intensive numerical simulations, to ensure optimal performance. We limit the number of iterations to 1000 and report "Err", defined as ‖x_{n+1} − x_n‖, as well as the objective function value ("Obj"). Under certain conditions on the matrix A, the solution of the minimization problem (33) coincides with the ℓ₀-norm solution of the underdetermined linear system. For the SCFP (1) under consideration, we define C = {x ∈ R^k : ‖x‖₁ ≤ t} and Q = {b}. Since there is no closed formula for the projection onto the closed and convex set C, we use a subgradient projection instead. Define the convex function c(x) := ‖x‖₁ − t and let

C_n = {x ∈ R^k : c(w_n) + ⟨ξ_n, x − w_n⟩ ≤ 0},

where ξ_n ∈ ∂c(w_n). It is easy to see that the subdifferential ∂c at x ∈ R^k is given element-wise by

(∂c(x))_i = {sign(x_i)} if x_i ≠ 0, and [−1, 1] if x_i = 0.

The orthogonal projection of a point x ∈ R^k onto C_n can then be calculated as

P_{C_n}(x) = x − ((c(w_n) + ⟨ξ_n, x − w_n⟩)/‖ξ_n‖²) ξ_n if c(w_n) + ⟨ξ_n, x − w_n⟩ > 0, and P_{C_n}(x) = x otherwise.

In Table 1 we summarize the results, and in Figs. 1, 2 and 3 we plot the exact K-sparse signal against the recovered signals and the objective function values obtained by the different methods. One can clearly see that the inertial term plays a significant role in achieving a better solution, with a lower objective value and CPU time for the same number of iterations.
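The subgradient projection described above admits a direct implementation. In this sketch (the function names are ours), sign(w) is taken as the subgradient, which is a valid element of ∂c(w) (the choice 0 at zero components lies in [−1, 1]):

```python
import numpy as np

def subgrad_l1(w):
    """One subgradient of c(x) = ||x||_1 - t at w: sign(w) componentwise."""
    return np.sign(w)

def project_Cn(x, w, t):
    """Projection onto C_n = {z : c(w) + <xi, z - w> <= 0} with xi in dc(w).
    If w = 0 then c(w) = -t < 0, the violation is nonpositive and x is
    returned unchanged, so no division by zero can occur."""
    xi = subgrad_l1(w)
    viol = (np.linalg.norm(w, 1) - t) + xi @ (x - w)
    if viol <= 0:
        return x.copy()                   # x already lies in C_n
    return x - (viol / (xi @ xi)) * xi    # closed-formula half-space projection
```

After the projection, the linearized constraint c(w) + ⟨ξ, P_{C_n}x − w⟩ ≤ 0 holds with equality whenever x was outside C_n.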
Next, for K = 20, we illustrate the influence of the inertial parameter θ_n as it approaches 1, as a function of μ → 0 (taken as 1/n). In Fig. 4 we plot the value of the objective function (1/2)‖Ax − b‖₂² after 1000 iterations for the different choices of {θ_n}; the other parameters are chosen as above.
Example 4.2 In this example we apply our algorithm to the image deblurring problem. Given a convolution matrix A ∈ R^{m×k} and an unknown original image x ∈ R^k, we observe the known degraded image b ∈ R^m. Including unknown additive random noise v ∈ R^m, we obtain the image recovery problem

b = Ax + v.
This problem clearly fits the setting of the SCFP with C = R^k; if no noise is included in the observed image b, then Q = {b} is a singleton, and otherwise Q = {y ∈ R^m : ‖y − (b + v)‖ ≤ ε} for small enough ε > 0. We illustrate the effectiveness and performance of the proposed Algorithm 1 in comparison with [40, Alg. 4.1] and the very recent result of Padcharoen et al. [35, Alg. 1], which is an inertial Tseng-type method. The test image is the Lenna image (https://en.wikipedia.org/wiki/Lenna), degraded by a 9 × 9 Gaussian random blur and random noise. Although this problem's structure differs from that of Example 4.1, for simplicity we use the same parameter settings for Algorithm 1 and [40, Alg. 4.1], and for [35, Alg. 1] we use the same choices as its authors, namely the inertial term α_n = 0.9 and the step size λ_n = 0.5 − 150n/(1000n + 100). In Figs. 5(a)-(k) we report the recovered images obtained by the different algorithms, the difference between successive iterates, and the signal-to-noise ratio, SNR = 10 log₁₀(‖x‖²/‖x − x_n‖²), as functions of the number of iterations. The CPU times (in seconds) of the tested algorithms are reported in Table 2. From Fig. 5 and Table 2, it can be seen that the inertial methods, Algorithm 1 and [35, Alg. 1], generate reasonable and comparable results after only 30 iterations, in contrast with the non-inertial method [40, Alg. 4.1]. The two major advantages of the proposed Algorithm 1 over the other two algorithms are the higher SNR value and the lower CPU time required to generate the recovered image.
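The signal-to-noise ratio used to compare the recovered images can be computed as follows (a sketch; the function name is ours):

```python
import numpy as np

def snr_db(x_true, x_rec):
    """Signal-to-noise ratio in decibels: 10 log10(||x||^2 / ||x - x_rec||^2).
    Larger values indicate a recovery closer to the original image."""
    err = np.linalg.norm(x_true - x_rec) ** 2
    return 10.0 * np.log10(np.linalg.norm(x_true) ** 2 / err)
```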

Final remarks
In this paper, we established a global convergence result for the split convex feasibility problem using a relaxed CQ method with an alternated inertial extrapolation step. Our result extends and generalizes existing results in the literature, and the preliminary numerical results indicate that our proposed method outperforms several existing relaxed CQ methods for solving the SCFP.

Lemma 2.1 ([10]) Let C be a nonempty, closed and convex subset of R^k and let x ∈ R^k.

Fig. 2 The recovered sparse signal versus the original for the 4 CQ variants with m = 120, n = 512 and K = 20

Fig. 5 Recovered images via the different algorithms

Table 1 Numerical results obtained by all 4 CQ variants with m = 120, n = 512