1 Introduction

The split feasibility problem (SFP) was first introduced by Censor and Elfving [1] and has received much attention since its inception in 1994. This is due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy; see, e.g., [2-6].

Since the SFP is a special case of the convex feasibility problem (CFP), which is to find a point in the nonempty intersection of finitely many closed and convex sets, we next briefly review some historic approaches related to the CFP. The CFP is an important problem because many real-world inversion or estimation problems in engineering as well as in mathematics can be cast into this framework; see, e.g., Combettes [7], Bauschke and Borwein [8] and Kiwiel [9]. Traditionally, iterative projection methods for solving the CFP employ orthogonal projections onto convex sets (i.e., nearest point projections with respect to the Euclidean distance); see, e.g., [10-14]. Much work has also been done with generalized distance functions and the generalized projections associated with them, as suggested by Bregman [15].

In 1994, Censor and Elfving [1] investigated the use of different kinds of generalized projections in a single iterative process for solving the SFP. Their proposal is an iterative algorithm that involves computing the inverse of a matrix, which is known to be a difficult task. For this reason, Byrne [16, 17] proposed the so-called CQ algorithm, which generates a sequence by a recursive procedure with a suitable step-size. The CQ algorithm only involves computations of the projections onto the sets C and Q, and is therefore implementable whenever these projections have closed-form expressions (e.g., C and Q are closed balls or half-spaces). There is a large number of references on the CQ method in the literature; see, for instance, [18-34]. However, we have to remark that the determination of the step-size depends on the operator (matrix) norm (or the dominant eigenvalue of a matrix product). This means that in order to implement the CQ algorithm, one has first to compute (or, at least, to estimate) the norm of the operator, which is in general not an easy task in practice.

To overcome the above difficulty, the so-called self-adaptive method, in which the step-size is selected self-adaptively, was developed. Note that this method is the application of the projection method of Goldstein [35] and Levitin and Polyak [36] to a suitable variational inequality problem, and it is among the simplest numerical methods for solving variational inequality problems. Nevertheless, the efficiency of this projection method depends strongly on the choice of the step-size parameter. If one chooses a parameter small enough to guarantee convergence of the iterative sequence, the recursion converges slowly. On the other hand, if one chooses a large step-size to improve the speed of convergence, the generated sequence may fail to converge. In real applications to variational inequality problems, the Lipschitz constant may be difficult to estimate, even when the underlying mapping is linear, as is the case for the SFP. Several self-adaptive methods for solving variational inequality problems have been developed from the original Goldstein-Levitin-Polyak method [35, 36]; see, e.g., [37-45].

Motivated by the self-adaptive strategy, Zhang et al. [45] proposed a method using variable step-sizes instead of the fixed step-sizes of Censor et al. [46]. A self-adaptive projection method employing Armijo-like searches was also introduced by Zhao and Yang [29]. The advantage of these algorithms lies in the fact that neither prior information about the operator norm $\|A\|$ nor any other conditions on $Q$ and $A$ are required, and convergence is still guaranteed.

In this paper, we further develop and improve self-adaptive methods for solving the SFP by introducing an improved self-adaptive algorithm. As a special case, the minimum norm solution of the SFP can be approximated iteratively.

2 Framework and preliminary results

Let $H_1$ and $H_2$ be two Hilbert spaces, and let $C$ and $Q$ be two closed and convex subsets of $H_1$ and $H_2$, respectively. Let $A\colon H_1\to H_2$ be a bounded linear operator. The split feasibility problem (SFP) is to find a point $x^*$ such that

x^* \in C \quad\text{and}\quad Ax^* \in Q.
(1)

Next, we use $\Gamma$ to denote the solution set of the SFP, i.e., $\Gamma=\{x\in C : Ax\in Q\}$.

In 1994, Censor and Elfving [1] investigated the use of different kinds of generalized projections in a single iterative process for solving the SFP. They were the first to propose the following algorithm, which involves the computation of the inverse $A^{-1}$:

x_{k+1} = A^{-1}P_Q\bigl(P_{A(C)}(Ax_k)\bigr), \quad k\ge 0,

where $C$ and $Q$ are closed and convex sets in $\mathbb{R}^n$, $A$ is a full rank $n\times n$ matrix and $A(C)=\{y\in\mathbb{R}^n : y=Ax,\ x\in C\}$. Note that computing $A^{-1}$ is not an easy task. Consequently, Byrne [16, 17] proposed the so-called CQ algorithm, which generates a sequence $\{x_n\}$ by the recursive procedure

x_{n+1} = P_C\bigl(x_n-\tau_n A^*(I-P_Q)Ax_n\bigr),
(2)

where the step-size $\tau_n$ is chosen in the interval $(0,2/\|A\|^2)$. It is remarkable that the CQ algorithm only involves the computations of the projections $P_C$ and $P_Q$ onto the sets $C$ and $Q$, respectively, and is therefore implementable in the case where $P_C$ and $P_Q$ have closed-form expressions (e.g., $C$ and $Q$ are closed balls or half-spaces). However, we observe that the determination of the step-size $\tau_n$ depends on the operator (matrix) norm $\|A\|$ (or the largest eigenvalue of $A^*A$). This means that for practical implementation of the CQ algorithm, one has first to compute (or, at least, to estimate) the norm of $A$, which is in general not an easy task in practice.
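For illustration, a minimal NumPy sketch of the CQ iteration (2) follows. It assumes that closed-form projections onto $C$ and $Q$ are supplied as callables; the names (cq_algorithm, proj_C, proj_Q, n_iter) and the fixed iteration count are ours, not part of the original method.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=500):
    """Sketch of the CQ iteration (2): x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n).

    proj_C, proj_Q: callables returning the metric projections onto C and Q.
    """
    tau = 1.0 / np.linalg.norm(A, 2) ** 2      # a fixed step inside (0, 2/||A||^2)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))         # gradient of f(x) = 0.5*||Ax - P_Q(Ax)||^2
        x = proj_C(x - tau * grad)
    return x
```

Note where the knowledge of $\|A\|$ enters: the fixed step-size must lie in $(0,2/\|A\|^2)$, which is exactly the difficulty the self-adaptive methods below remove.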

To overcome the above difficulty, the so-called self-adaptive method, in which the step-size $\tau_n$ is selected self-adaptively, was developed. If we set

f(x) := \frac12\,\|Ax-P_QAx\|^2,

then the convex objective $f$ is differentiable and has a Lipschitz continuous gradient given by

\nabla f(x) = A^*(I-P_Q)Ax.
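The following short computation, sketched here for the reader's convenience (it is standard and not part of the original argument), explains the formula above and identifies the Lipschitz constant: the map $y\mapsto\frac12\|y-P_Qy\|^2$ is differentiable with gradient $(I-P_Q)y$, and $I-P_Q$ is nonexpansive, so

```latex
\nabla f(x) = A^*\bigl(Ax - P_Q(Ax)\bigr) = A^*(I-P_Q)Ax,
\qquad
\|\nabla f(x)-\nabla f(y)\|
  \le \|A\|\,\bigl\|(I-P_Q)Ax-(I-P_Q)Ay\bigr\|
  \le \|A\|^2\,\|x-y\|.
```

Hence $L=\|A\|^2$ is a Lipschitz constant of $\nabla f$, which is what lies behind the step-size range $(0,2/\|A\|^2)$ used in the CQ algorithm.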

Thus, the CQ algorithm (2) can be obtained by applying the gradient projection method to the convex minimization problem

\min_{x\in C} f(x).
(3)

We know that a point $x^*\in C$ is a stationary point of problem (3) if it satisfies

\bigl\langle\nabla f(x^*),\,x-x^*\bigr\rangle \ge 0, \quad \forall x\in C.
(4)

Thus, we can use the following gradient projection algorithm to solve the SFP:

x_{n+1} = P_C\bigl(x_n-\tau_n\nabla f(x_n)\bigr),
(5)

where $\tau_n$, the step-size at iteration $n$, is chosen in the interval $(0,2/L)$, with $L$ the Lipschitz constant of $\nabla f$.

The above method (5) can be viewed as the application of the projection method of Goldstein [35] and Levitin and Polyak [36] to the variational inequality problem (4), and it is among the simplest numerical methods for solving variational inequality problems. Nevertheless, the efficiency of this projection method depends greatly on the choice of the parameter $\tau_n$. A small $\tau_n$ guarantees the convergence of the iterative sequence, but the recursion converges slowly. On the other hand, a large step-size improves the speed of convergence, but the generated sequence may fail to converge. In real applications, the Lipschitz constant may be difficult to estimate, even when the underlying mapping is linear, as is the case for the SFP.

The methods of Zhang et al. [45] and Censor et al. [46] were originally proposed for solving the multiple-sets split feasibility problem; the former reads as follows.

Algorithm 2.1 S1. Given a nonnegative sequence $\{\tau_n\}$ such that $\sum_{n=0}^{\infty}\tau_n<\infty$, $\delta\in(0,1)$, $\mu\in(0,1)$, $\rho\in(0,1)$, $\epsilon>0$, $\beta_0>0$, and an arbitrary initial point $x_0$. Set $\gamma_0=\beta_0$ and $n=0$.

S2. Find the smallest nonnegative integer $l_n$ such that $\beta_{n+1}=\mu^{l_n}\gamma_n$ and

x_{n+1} = P_C\bigl(x_n-\beta_{n+1}\nabla f(x_n)\bigr),

which satisfies

\beta_{n+1}\bigl\|\nabla f(x_n)-\nabla f(x_{n+1})\bigr\|^2 \le (2-\delta)\bigl\langle x_n-x_{n+1},\,\nabla f(x_n)-\nabla f(x_{n+1})\bigr\rangle.

S3. If

\beta_{n+1}\bigl\|\nabla f(x_n)-\nabla f(x_{n+1})\bigr\|^2 \le \rho\bigl\langle x_n-x_{n+1},\,\nabla f(x_n)-\nabla f(x_{n+1})\bigr\rangle,

then set $\gamma_{n+1}=(1+\tau_{n+1})\beta_{n+1}$; otherwise, set $\gamma_{n+1}=\beta_{n+1}$.

S4. If $\|e(x_n,\beta_n)\|\le\epsilon$, stop; otherwise, set $n:=n+1$ and go to S2.
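A rough NumPy sketch of this step-size rule is given below. It is an interpretation rather than a faithful transcription of [45]: we take $\tau_n=1/(n+1)^2$ as the summable sequence and assume the stopping residual $e(x,\beta)=x-P_C(x-\beta\nabla f(x))$, which is the usual choice in such methods but is not defined explicitly above; all function and parameter names are ours.

```python
import numpy as np

def self_adaptive_step_method(grad_f, proj_C, x0, beta0=1.0, delta=0.5, mu=0.5,
                              rho=0.3, eps=1e-8, max_iter=1000):
    """Sketch of the self-adaptive step-size rule of Algorithm 2.1 (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    gamma = beta0
    for n in range(max_iter):
        g = grad_f(x)
        # S2: backtrack beta = mu**l * gamma until the (2 - delta)-test is satisfied
        beta = gamma
        for _ in range(60):
            x_new = proj_C(x - beta * g)
            g_new = grad_f(x_new)
            lhs = beta * np.dot(g - g_new, g - g_new)
            rhs = np.dot(x - x_new, g - g_new)
            if lhs <= (2.0 - delta) * rhs:
                break
            beta *= mu
        # S3: if the stricter rho-test also holds, allow a larger trial step next time
        tau = 1.0 / (n + 1) ** 2                      # assumed summable sequence tau_n
        gamma = (1.0 + tau) * beta if lhs <= rho * rhs else beta
        # S4: stop when the assumed projection residual e(x, beta) is small
        if np.linalg.norm(x - proj_C(x - beta * g)) <= eps:
            return x_new
        x = x_new
    return x
```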

The following self-adaptive projection method, which adopts Armijo-like searches, was introduced by Zhao and Yang [29].

Algorithm 2.2 Given constants $\beta>0$, $\sigma\in(0,1)$ and $\gamma\in(0,1)$. Let $x_0$ be arbitrary. For $n=0,1,\dots$, calculate

x_{n+1} = P_C\bigl(x_n-\tau_n\nabla f(x_n)\bigr),

where $\tau_n=\beta\gamma^{l_n}$ and $l_n$ is the smallest nonnegative integer $l$ such that

f\bigl(P_C\bigl(x_n-\beta\gamma^{l}\nabla f(x_n)\bigr)\bigr) \le f(x_n)-\sigma\bigl\langle\nabla f(x_n),\,x_n-P_C\bigl(x_n-\beta\gamma^{l}\nabla f(x_n)\bigr)\bigr\rangle.
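The Armijo-like search of Algorithm 2.2 can be sketched as follows; the callables f, grad_f and proj_C, as well as the safeguard max_backtracks, are assumptions of this illustration and not part of the algorithm as stated.

```python
import numpy as np

def armijo_step(x, f, grad_f, proj_C, beta=1.0, sigma=0.5, gamma=0.5, max_backtracks=50):
    """Return tau_n = beta * gamma**l and the trial point, with the smallest l such that
    f(P_C(x - tau*grad)) <= f(x) - sigma * <grad, x - P_C(x - tau*grad)>."""
    g = grad_f(x)
    fx = f(x)
    tau = beta
    for _ in range(max_backtracks):
        y = proj_C(x - tau * g)
        if f(y) <= fx - sigma * np.dot(g, x - y):
            return tau, y
        tau *= gamma
    return tau, proj_C(x - tau * g)     # fallback if the search did not terminate
```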

The advantage of Algorithms 2.1 and 2.2 lies in the fact that neither prior information about the operator norm $\|A\|$ nor any other conditions on $Q$ and $A$ are required, and convergence is still guaranteed.

We shall now introduce our improved self-adaptive method for solving the SFP. To this end, we need the following ingredients.

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T\colon C\to C$ is called nonexpansive if

\|Tx-Ty\| \le \|x-y\|, \quad \forall x,y\in C.

A mapping $\psi\colon C\to C$ is said to be $\delta$-contractive if there exists a constant $\delta\in[0,1)$ such that

\|\psi(x)-\psi(y)\| \le \delta\|x-y\|, \quad \forall x,y\in C.

Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted by $P_C$, assigns to each $x\in H$ the unique point $P_C(x)\in C$ with the property

\|x-P_C(x)\| = \inf\bigl\{\|x-y\| : y\in C\bigr\}.

It is well known that the metric projection $P_C$ of $H$ onto $C$ has the following basic properties:

  (a) $\|P_C(x)-P_C(y)\| \le \|x-y\|$ for all $x,y\in H$;

  (b) $\langle x-y,\,P_C(x)-P_C(y)\rangle \ge \|P_C(x)-P_C(y)\|^2$ for every $x,y\in H$;

  (c) $\langle x-P_C(x),\,y-P_C(x)\rangle \le 0$ for all $x\in H$ and $y\in C$.
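For concreteness, the following standard closed-form projections can play the role of $P_C$ or $P_Q$ in the algorithms above when the sets are closed balls, half-spaces or boxes; the formulas are classical, and the function names are ours.

```python
import numpy as np

def project_ball(x, center, radius):
    """P_C for the closed ball C = {y : ||y - center|| <= radius}."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + (radius / dist) * d

def project_halfspace(x, a, b):
    """P_C for the half-space C = {y : <a, y> <= b}, with a != 0."""
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - (viol / np.dot(a, a)) * a

def project_box(x, lower, upper):
    """P_C for the box C = {y : lower <= y <= upper} (componentwise)."""
    return np.clip(x, lower, upper)
```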

Next we adopt the following notation:

  • $x_n\to x$ means that $\{x_n\}$ converges strongly to $x$;

  • $x_n\rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$;

  • $\omega_w(x_n):=\{x : x_{n_j}\rightharpoonup x \text{ for some subsequence } \{x_{n_j}\} \text{ of } \{x_n\}\}$ is the weak $\omega$-limit set of the sequence $\{x_n\}$.

Recall that a function $f\colon H\to\mathbb{R}$ is called convex if

f\bigl(\lambda x+(1-\lambda)y\bigr) \le \lambda f(x)+(1-\lambda)f(y), \quad \forall\lambda\in(0,1),\ \forall x,y\in H.

It is known that a differentiable function $f$ is convex if and only if the following relation holds:

f(z) \ge f(x)+\bigl\langle\nabla f(x),\,z-x\bigr\rangle, \quad \forall x,z\in H.

Recall that an element $g\in H$ is said to be a subgradient of $f\colon H\to\mathbb{R}$ at $x$ if

f(z) \ge f(x)+\langle g,\,z-x\rangle, \quad \forall z\in H.

If the function $f\colon H\to\mathbb{R}$ has at least one subgradient at $x$, it is said to be subdifferentiable at $x$. The set of subgradients of $f$ at the point $x$ is called the subdifferential of $f$ at $x$ and is denoted by $\partial f(x)$. A function $f$ is called subdifferentiable if it is subdifferentiable at all $x\in H$. If $f$ is convex and differentiable, then its gradient and subgradient coincide, i.e., $\partial f(x)=\{\nabla f(x)\}$. A function $f\colon H\to\mathbb{R}$ is said to be weakly lower semi-continuous (w-lsc) at $x$ if $x_n\rightharpoonup x$ implies

f(x) \le \liminf_{n\to\infty} f(x_n).

$f$ is said to be w-lsc on $H$ if it is w-lsc at every point $x\in H$.

The first lemma is easy to prove.

Lemma 2.1 [14]

Let $f(x):=\frac12\|Ax-P_QAx\|^2$. Then

  (i) $f$ is convex and differentiable;

  (ii) $f$ is w-lsc on $C$.

Lemma 2.2 [47]

Given $x^*\in H_1$. Then $x^*$ solves the SFP if and only if $x^*$ solves the fixed point equation

x^* = P_C\bigl(x^*-\gamma A^*(I-P_Q)Ax^*\bigr),

where $\gamma>0$.

Lemma 2.3 [48]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

a_{n+1} \le (1-\gamma_n)a_n+\delta_n, \quad n\ge 0,

where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

  (1) $\sum_{n=1}^{\infty}\gamma_n=\infty$;

  (2) $\limsup_{n\to\infty}\delta_n/\gamma_n\le 0$ or $\sum_{n=1}^{\infty}|\delta_n|<\infty$.

Then $\lim_{n\to\infty}a_n=0$.

Lemma 2.4 [49]

Let $\{s_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{s_{n_i}\}$ of $\{s_n\}$ such that $s_{n_i}\le s_{n_i+1}$ for all $i\ge 0$. For every $n\ge n_0$, define an integer sequence $\{\tau(n)\}$ as

\tau(n) = \max\{k\le n : s_k\le s_{k+1}\}.

Then $\tau(n)\to\infty$ as $n\to\infty$ and, for all $n\ge n_0$,

\max\{s_{\tau(n)},\ s_n\} \le s_{\tau(n)+1}.

3 Main results

In this section we state and prove our main results.

Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. Let $\psi\colon C\to H_1$ be a $\delta$-contraction with $\delta\in[0,\frac{\sqrt{2}}{2})$. Let $A\colon H_1\to H_2$ be a bounded linear operator.

Algorithm 3.1 For given $x_0\in C$, assume that $\{x_n\}$ has been constructed. If $f(x_n)=0$, then stop and $x_n$ is a solution of SFP (1). Otherwise, continue and compute $x_{n+1}$ by the recursion

x_{n+1} = P_C\Bigl[\alpha_n\psi(x_n)+(1-\alpha_n)\Bigl(x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)\Bigr)\Bigr], \quad n\ge 0,
(6)

where $\{\alpha_n\}\subset(0,1)$ and $\{\rho_n\}\subset(0,2)$.
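A minimal NumPy sketch of recursion (6) follows. It uses the illustrative choices $\alpha_n=1/(n+2)$ and $\rho_n\equiv\rho$, which satisfy conditions (a) and (b) of Theorem 3.1 below, and the names (algorithm_31, proj_C, proj_Q, psi, tol) are ours; taking psi to be the zero mapping yields Algorithm 3.2 below.

```python
import numpy as np

def algorithm_31(A, proj_C, proj_Q, psi, x0, n_iter=1000, rho=1.0, tol=1e-12):
    """Sketch of recursion (6) with alpha_n = 1/(n+2) and rho_n = rho (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        Ax = A @ x
        r = Ax - proj_Q(Ax)
        fx = 0.5 * np.dot(r, r)                   # f(x_n) = 0.5 * ||(I - P_Q) A x_n||^2
        if fx <= tol:                             # f(x_n) = 0: x_n already solves the SFP
            return x
        g = A.T @ r                               # grad f(x_n) = A^T (I - P_Q) A x_n
        gg = np.dot(g, g)
        if gg == 0.0:                             # stationary point; stop
            return x
        alpha = 1.0 / (n + 2)                     # alpha_n -> 0 and sum alpha_n = infinity
        y = x - rho * (fx / gg) * g               # self-adaptive step: no norm of A needed
        x = proj_C(alpha * psi(x) + (1.0 - alpha) * y)
    return x
```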

Theorem 3.1 Suppose that the SFP is consistent, that is, $\Gamma\neq\emptyset$. Assume that the following conditions hold:

  (a) $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=1}^{\infty}\alpha_n=\infty$;

  (b) $\inf_n\rho_n(2-\rho_n)>0$.

Then $\{x_n\}$ defined by (6) converges strongly to $z$, which solves the following variational inequality:

z\in\Gamma \quad\text{such that}\quad \bigl\langle z-\psi(z),\,z-x\bigr\rangle \le 0 \quad\text{for all } x\in\Gamma.
(7)

Proof First, the solution of the variational inequality (7) is unique (by the strong monotonicity of $I-\psi$); we denote it by $z$. Then $z=P_\Gamma(\psi(z))$. We may assume that the sequence $\{x_n\}$ is infinite, that is, Algorithm 3.1 does not terminate after finitely many iterations. Thus, $f(x_n)\neq 0$, and hence (since the SFP is consistent) $\nabla f(x_n)\neq 0$, for all $n$. From (6), we have

\begin{aligned}
\|x_{n+1}-z\|^2 &= \Bigl\|P_C\Bigl[\alpha_n\psi(x_n)+(1-\alpha_n)\Bigl(x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)\Bigr)\Bigr]-z\Bigr\|^2 \\
&\le \Bigl\|\alpha_n\bigl(\psi(x_n)-z\bigr)+(1-\alpha_n)\Bigl(x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)-z\Bigr)\Bigr\|^2 \\
&\le \alpha_n\|\psi(x_n)-z\|^2+(1-\alpha_n)\Bigl\|x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)-z\Bigr\|^2 \\
&\le (1-\alpha_n)\Bigl[\|x_n-z\|^2+\rho_n^2\frac{f^2(x_n)}{\|\nabla f(x_n)\|^2}-2\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\bigl\langle\nabla f(x_n),\,x_n-z\bigr\rangle\Bigr] \\
&\quad+\alpha_n\bigl(\|\psi(x_n)-\psi(z)\|+\|\psi(z)-z\|\bigr)^2.
\end{aligned}
(8)

By the convexity of $f$ (Lemma 2.1) and the fact that $f(z)=0$ for $z\in\Gamma$, we deduce that

f(x_n) = f(x_n)-f(z) \le \bigl\langle\nabla f(x_n),\,x_n-z\bigr\rangle.
(9)

Using the inequality $(a+b)^2\le 2(a^2+b^2)$ for all $a,b\in\mathbb{R}$, we have

\begin{aligned}
\bigl(\|\psi(x_n)-\psi(z)\|+\|\psi(z)-z\|\bigr)^2 &\le 2\|\psi(x_n)-\psi(z)\|^2+2\|\psi(z)-z\|^2 \\
&\le 2\delta^2\|x_n-z\|^2+2\|\psi(z)-z\|^2.
\end{aligned}
(10)

From (8)-(10), we get

\begin{aligned}
\|x_{n+1}-z\|^2 &\le (1-\alpha_n)\Bigl[\|x_n-z\|^2-\rho_n(2-\rho_n)\frac{f^2(x_n)}{\|\nabla f(x_n)\|^2}\Bigr]+2\delta^2\alpha_n\|x_n-z\|^2+2\alpha_n\|\psi(z)-z\|^2 \\
&\le \bigl[1-(1-2\delta^2)\alpha_n\bigr]\|x_n-z\|^2+(1-2\delta^2)\alpha_n\,\frac{2\|\psi(z)-z\|^2}{1-2\delta^2} \\
&\le \max\Bigl\{\|x_n-z\|^2,\ \frac{2\|\psi(z)-z\|^2}{1-2\delta^2}\Bigr\}.
\end{aligned}

By induction, we deduce

\|x_{n+1}-z\|^2 \le \max\Bigl\{\|x_0-z\|^2,\ \frac{2\|\psi(z)-z\|^2}{1-2\delta^2}\Bigr\}.

Hence, $\{x_n\}$ is bounded.

By using the firm nonexpansivity of $P_C$, we derive that

\begin{aligned}
\|x_{n+1}-z\|^2 &= \Bigl\|P_C\Bigl[\alpha_n\psi(x_n)+(1-\alpha_n)\Bigl(x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)\Bigr)\Bigr]-P_Cz\Bigr\|^2 \\
&\le \alpha_n\bigl\langle\psi(x_n)-z,\,x_{n+1}-z\bigr\rangle+(1-\alpha_n)\Bigl\langle x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)-z,\,x_{n+1}-z\Bigr\rangle \\
&= \alpha_n\bigl\langle\psi(x_n)-\psi(z),\,x_{n+1}-z\bigr\rangle+\alpha_n\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle \\
&\quad+(1-\alpha_n)\Bigl\langle x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)-z,\,x_{n+1}-z\Bigr\rangle \\
&\le \alpha_n\delta\|x_n-z\|\,\|x_{n+1}-z\|+\alpha_n\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle \\
&\quad+(1-\alpha_n)\Bigl\|x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)-z\Bigr\|\,\|x_{n+1}-z\| \\
&= \Bigl(\alpha_n\delta\|x_n-z\|+(1-\alpha_n)\Bigl\|x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)-z\Bigr\|\Bigr)\|x_{n+1}-z\|+\alpha_n\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle \\
&\le \frac12\Bigl(\alpha_n\delta\|x_n-z\|+(1-\alpha_n)\Bigl\|x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)-z\Bigr\|\Bigr)^2+\frac12\|x_{n+1}-z\|^2+\alpha_n\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle.
\end{aligned}

It follows that

\begin{aligned}
\|x_{n+1}-z\|^2 &\le \Bigl(\alpha_n\delta\|x_n-z\|+(1-\alpha_n)\Bigl\|x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)-z\Bigr\|\Bigr)^2+2\alpha_n\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle \\
&\le \alpha_n\delta^2\|x_n-z\|^2+(1-\alpha_n)\Bigl\|x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)-z\Bigr\|^2+2\alpha_n\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle \\
&\le \alpha_n\delta^2\|x_n-z\|^2+(1-\alpha_n)\Bigl[\|x_n-z\|^2-\rho_n(2-\rho_n)\frac{f^2(x_n)}{\|\nabla f(x_n)\|^2}\Bigr]+2\alpha_n\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle \\
&= \bigl[1-(1-\delta^2)\alpha_n\bigr]\|x_n-z\|^2+2\alpha_n\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle-(1-\alpha_n)\rho_n(2-\rho_n)\frac{f^2(x_n)}{\|\nabla f(x_n)\|^2}.
\end{aligned}
(11)

Next, we will prove that $x_n\to z$ by following the ideas in [49]. Set $s_n=\|x_n-z\|^2$ for all $n\ge 0$. Since $\alpha_n\to 0$ and $\inf_n\rho_n(2-\rho_n)>0$, we may assume, without loss of generality, that $(1-\alpha_n)\rho_n(2-\rho_n)\ge\sigma$ for some $\sigma>0$. Thus, we can rewrite (11) as

s_{n+1}-s_n+(1-\delta^2)\alpha_n s_n+\sigma\frac{f^2(x_n)}{\|\nabla f(x_n)\|^2} \le 2\alpha_n\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle.
(12)

Now, we consider two possible cases.

Case 1. Assume that $\{s_n\}$ is eventually decreasing, i.e., there exists $N>0$ such that $\{s_n\}$ is decreasing for $n\ge N$. In this case, $\{s_n\}$ must be convergent, and from (12) it follows that

\begin{aligned}
0 \le \sigma\frac{f^2(x_n)}{\|\nabla f(x_n)\|^2} &\le s_n-s_{n+1}-(1-\delta^2)\alpha_n s_n+2\alpha_n\|\psi(z)-z\|\,\|x_{n+1}-z\| \\
&\le s_n-s_{n+1}+M\alpha_n,
\end{aligned}
(13)

where $M>0$ is a constant such that $\sup_n\{2\|\psi(z)-z\|\,\|x_{n+1}-z\|\}\le M$. Letting $n\to\infty$ in (13), and using that $\{\|\nabla f(x_n)\|\}$ is bounded (since $\{x_n\}$ is bounded, $\nabla f$ is Lipschitz continuous and $\nabla f(z)=0$), we get

\lim_{n\to\infty} f(x_n)=0.

Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ converging weakly to some $\tilde{x}\in C$.

From the weak lower semicontinuity of f, we have

0 \le f(\tilde{x}) \le \liminf_{k\to\infty} f(x_{n_k}) = \lim_{n\to\infty} f(x_n) = 0.

Hence, $f(\tilde{x})=0$, i.e., $A\tilde{x}\in Q$. This indicates that

\omega_w(x_n)\subset\Gamma.

Furthermore, by property (c) of the metric projection,

\limsup_{n\to\infty}\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle=\max_{\omega\in\omega_w(x_n)}\bigl\langle\psi(z)-P_\Gamma(\psi(z)),\,\omega-P_\Gamma(\psi(z))\bigr\rangle\le 0.

From (12), we obtain

s_{n+1} \le \bigl[1-(1-\delta^2)\alpha_n\bigr]s_n+2\alpha_n\bigl\langle\psi(z)-z,\,x_{n+1}-z\bigr\rangle.
(14)

Applying Lemma 2.3 to (14), we get $s_n\to 0$.

Case 2. Assume that $\{s_n\}$ is not eventually decreasing. That is, there exists an integer $n_0$ such that $s_{n_0}\le s_{n_0+1}$. Thus, we can define an integer sequence $\{\tau(n)\}$ for all $n\ge n_0$ as follows:

\tau(n) = \max\{k\in\mathbb{N} : n_0\le k\le n,\ s_k\le s_{k+1}\}.

Clearly, $\tau(n)$ is a non-decreasing sequence such that $\tau(n)\to+\infty$ as $n\to\infty$ and

s_{\tau(n)} \le s_{\tau(n)+1}

for all $n\ge n_0$. In this case, we derive from (13) that

\sigma\frac{f^2(x_{\tau(n)})}{\|\nabla f(x_{\tau(n)})\|^2} \le M\alpha_{\tau(n)}\to 0.

It follows that

\lim_{n\to\infty} f(x_{\tau(n)})=0.

This implies that every weak cluster point of $\{x_{\tau(n)}\}$ is in the solution set $\Gamma$; i.e., $\omega_w(x_{\tau(n)})\subset\Gamma$.

On the other hand, we note that

\|x_{\tau(n)+1}-x_{\tau(n)}\| \le \alpha_{\tau(n)}\|\psi(x_{\tau(n)})-x_{\tau(n)}\|+(1-\alpha_{\tau(n)})\rho_{\tau(n)}\frac{f(x_{\tau(n)})}{\|\nabla f(x_{\tau(n)})\|}\to 0,

from which we can deduce that

\begin{aligned}
\limsup_{n\to\infty}\bigl\langle\psi(z)-z,\,x_{\tau(n)+1}-z\bigr\rangle &= \limsup_{n\to\infty}\bigl\langle\psi(z)-z,\,x_{\tau(n)}-z\bigr\rangle \\
&= \max_{\omega\in\omega_w(x_{\tau(n)})}\bigl\langle\psi(z)-P_\Gamma(\psi(z)),\,\omega-P_\Gamma(\psi(z))\bigr\rangle \le 0.
\end{aligned}
(15)

Since $s_{\tau(n)}\le s_{\tau(n)+1}$, we have from (12) that

s_{\tau(n)} \le \frac{2}{1-\delta^2}\bigl\langle\psi(z)-z,\,x_{\tau(n)+1}-z\bigr\rangle.
(16)

Combining (15) and (16) yields

\limsup_{n\to\infty} s_{\tau(n)} \le 0,

and hence

\lim_{n\to\infty} s_{\tau(n)} = 0.

From (14), we have

\limsup_{n\to\infty} s_{\tau(n)+1} \le \limsup_{n\to\infty} s_{\tau(n)}.

Thus,

\lim_{n\to\infty} s_{\tau(n)+1} = 0.

From Lemma 2.4, we have

0 \le s_n \le \max\{s_{\tau(n)},\ s_{\tau(n)+1}\}.

Therefore, $s_n\to 0$, that is, $x_n\to z$. This completes the proof. □

From Theorem 3.1, we can easily deduce the following algorithm and convergence result.

Algorithm 3.2 For given $x_0\in C$, assume that $\{x_n\}$ has been constructed. If $f(x_n)=0$, then stop and $x_n$ is a solution of SFP (1). Otherwise, continue and compute $x_{n+1}$ by the recursion

x_{n+1} = P_C\Bigl[(1-\alpha_n)\Bigl(x_n-\rho_n\frac{f(x_n)}{\|\nabla f(x_n)\|^2}\nabla f(x_n)\Bigr)\Bigr], \quad n\ge 0,
(17)

where $\{\alpha_n\}\subset(0,1)$ and $\{\rho_n\}\subset(0,2)$. Note that (17) is exactly (6) with $\psi\equiv 0$.

Theorem 3.2 Suppose that the SFP is consistent, that is, $\Gamma\neq\emptyset$. Assume that the following conditions hold:

  (a) $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=1}^{\infty}\alpha_n=\infty$;

  (b) $\inf_n\rho_n(2-\rho_n)>0$.

Then $\{x_n\}$ defined by (17) converges strongly to the minimum norm solution of the SFP. Indeed, applying Theorem 3.1 with $\psi\equiv 0$ (a $0$-contraction), the limit $z$ satisfies $\langle z,\,z-x\rangle\le 0$ for all $x\in\Gamma$, that is, $z=P_\Gamma(0)$, the element of $\Gamma$ of minimum norm.

4 Concluding remarks

In this work we have developed and improved self-adaptive methods for solving the split feasibility problem. We introduced an improved self-adaptive method for solving the SFP; as a special case, the minimum norm solution of the SFP can be approximated iteratively. This study is motivated by relevant applications in which real-world problems give rise to mathematical models in the sphere of variational inequality problems.