1 Introduction and preliminaries

Let $H_1$, $H_2$ be real Hilbert spaces, let $C \subseteq H_1$ be a nonempty closed convex set with $0 \notin C$, and let $A, B\colon H_1 \to H_2$ be two bounded linear operators. We consider the problem of finding $x \in C$ such that

$$Ax = Bx \quad (\text{or } 0 = Ax - Bx).$$
(1.1)

For convenience, we denote this problem by P.

For problems of this type it is generally difficult to find zeros of A and B separately. To overcome this difficulty, Eckstein and Svaiter [1] presented splitting methods for finding a zero of the sum of two monotone operators A and B. Three basic families of splitting methods for this problem were identified in [1]:

(i) The Douglas/Peaceman-Rachford family, whose iteration is given by

$$y_k = \big[2(I + \xi B)^{-1} - I\big]x_k, \qquad z_k = \big[2(I + \xi A)^{-1} - I\big]y_k, \qquad x_{k+1} = (1 - \rho_k)x_k + \rho_k z_k,$$

where $\xi > 0$ is a fixed scalar and $\{\rho_k\} \subset (0,1]$ is a sequence of relaxation parameters.

(ii) The double backward splitting method, with iteration given by

$$y_k = (I + \lambda_k B)^{-1}x_k, \qquad x_{k+1} = (I + \lambda_k A)^{-1}y_k,$$

where $\{\lambda_k\} \subset (0,\infty)$ is a sequence of regularization parameters.

(iii) The forward-backward splitting method, with iteration given by

$$y_k = (I - \lambda_k A)x_k, \qquad x_{k+1} = (I + \lambda_k B)^{-1}y_k,$$

where $\{\lambda_k\} \subset (0,\infty)$ is a sequence of regularization parameters.

Convergence results for scheme (i), in the case in which $\{\rho_k\}$ is contained in a compact subset of $(0,1)$, can be found in [2]; the convergence analysis of the double backward scheme (ii) can be found in [3] and [4]; for the standard convergence analysis of (iii), see [5]. However, these convergence results depend heavily on the maximal monotonicity of A and B. It is therefore the aim of this paper to construct new algorithms for problem P which do not require the maximal monotonicity of A and B.
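To fix ideas, here is a minimal numerical sketch of the forward-backward iteration (iii). Everything in it is an illustrative assumption of ours, not part of the paper: we realize A and B as positive semidefinite matrices on $\mathbb{R}^n$ (hence maximal monotone linear operators), so that the resolvent $(I + \lambda B)^{-1}$ reduces to a linear solve.

```python
import numpy as np

# Sketch of the forward-backward iteration (iii) for a zero of A + B.
# Assumption (ours): A, B are positive semidefinite matrices, which makes
# them maximal monotone and their resolvents linear solves.
rng = np.random.default_rng(0)
n = 5
M1 = rng.standard_normal((n, n))
M2 = rng.standard_normal((n, n))
A, B = M1.T @ M1, M2.T @ M2
I = np.eye(n)

def resolvent(M, lam, x):
    # J_{lam M}(x) = (I + lam*M)^{-1} x
    return np.linalg.solve(I + lam * M, x)

lam = 1.0 / np.linalg.norm(A, 2)   # step size below 2/||A||
x = rng.standard_normal(n)
for _ in range(2000):
    y = x - lam * (A @ x)          # forward (explicit) step on A
    x = resolvent(B, lam, y)       # backward (resolvent) step on B

print("residual ||(A+B)x|| =", np.linalg.norm((A + B) @ x))  # ~ 0
```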

The paper is organized as follows. In Section 2, we define the concept of the minimal norm solution of the problem P (1.1); using Tychonov regularization, we obtain a net of solutions of certain minimization problems approximating this minimal norm solution (see Theorem 2.4). In Section 3, we introduce an algorithm and prove its strong convergence; more importantly, its limit is the minimum-norm solution of the problem P (1.1) (see Theorem 3.2). In Section 4, we introduce a KM-CQ-like iterative algorithm which converges strongly to a solution of the problem P (1.1) (see Theorem 4.3).

Throughout the rest of this paper, $I$ denotes the identity operator on a Hilbert space $H$, $\operatorname{Fix}(T)$ the set of fixed points of an operator $T$, and $\nabla f$ the gradient of a functional $f\colon H \to \mathbb{R}$. An operator $T$ on a Hilbert space $H$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$. $T$ is said to be averaged if there exist $0 < \alpha < 1$ and a nonexpansive operator $N$ such that $T = (1 - \alpha)I + \alpha N$.

We know that the projection $P_C$ from $H$ onto a nonempty closed convex subset $C$ of $H$ is a typical example of a nonexpansive and averaged mapping; it is defined by

$$P_C(w) = \arg\min_{x \in C}\|x - w\|.$$

It is well known that $P_C(w)$ is characterized by the inequality

$$\langle w - P_C(w),\, x - P_C(w)\rangle \le 0, \quad \forall x \in C.$$
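As a concrete illustration (the choice of $C$ here is ours, purely for the example): for the closed unit ball the projection has a closed form, and the characterizing inequality can be verified numerically.

```python
import numpy as np

# Projection onto the closed unit ball C = {x : ||x|| <= 1}, together with a
# numerical check of <w - P_C(w), x - P_C(w)> <= 0 for sampled x in C.
def proj_ball(w):
    nw = np.linalg.norm(w)
    return w if nw <= 1.0 else w / nw

rng = np.random.default_rng(1)
w = 3.0 * rng.standard_normal(4)                 # a point outside C
p = proj_ball(w)
xs = [proj_ball(rng.standard_normal(4)) for _ in range(1000)]  # points of C
print(max((x - p) @ (w - p) for x in xs) <= 1e-12)             # True
```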

We now collect some elementary facts which will be used in the proofs of our main results.

Lemma 1.1 [6, 7]

Let $X$ be a Banach space, $C$ a closed convex subset of $X$, and $T\colon C \to C$ a nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in $C$ converging weakly to $x$ and $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$.

Lemma 1.2 [8]

Let $\{s_n\}$ be a sequence of nonnegative real numbers, $\{\alpha_n\}$ a sequence of real numbers in $[0,1]$ with $\sum_{n=1}^{\infty}\alpha_n = \infty$, $\{u_n\}$ a sequence of nonnegative real numbers with $\sum_{n=1}^{\infty}u_n < \infty$, and $\{t_n\}$ a sequence of real numbers with $\limsup_{n\to\infty}t_n \le 0$. Suppose that

$$s_{n+1} \le (1 - \alpha_n)s_n + \alpha_n t_n + u_n, \quad \forall n \in \mathbb{N}.$$

Then $\lim_{n\to\infty}s_n = 0$.

Lemma 1.3 [9]

Let $\{w_n\}$, $\{z_n\}$ be bounded sequences in a Banach space and let $\{\beta_n\}$ be a sequence in $[0,1]$ which satisfies the condition $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that $w_{n+1} = (1 - \beta_n)w_n + \beta_n z_n$ and $\limsup_{n\to\infty}\big(\|z_{n+1} - z_n\| - \|w_{n+1} - w_n\|\big) \le 0$. Then $\lim_{n\to\infty}\|z_n - w_n\| = 0$.

Lemma 1.4 [10]

Let $f$ be a convex and differentiable functional and let $C$ be a closed convex subset of $H$. Then $x \in C$ is a solution of the problem

$$\min_{x \in C} f(x)$$

if and only if $x \in C$ satisfies the following optimality condition:

$$\langle \nabla f(x),\, v - x\rangle \ge 0, \quad \forall v \in C.$$

Moreover, if $f$ is, in addition, strictly convex and coercive, then the minimization problem has a unique solution.

Lemma 1.5 [11]

Let $A$ and $B$ be averaged operators and suppose that $\operatorname{Fix}(A) \cap \operatorname{Fix}(B)$ is nonempty. Then $\operatorname{Fix}(A) \cap \operatorname{Fix}(B) = \operatorname{Fix}(AB) = \operatorname{Fix}(BA)$.

2 The minimum-norm solution of the problem P

In this section, we introduce the concept of the minimal norm solution of P (1.1). Then, using Tychonov regularization, we obtain this minimal norm solution as the limit of a net of solutions of regularized minimization problems.

We use $\Gamma$ to denote the solution set of P, i.e.,

$$\Gamma = \{x \in C : Ax = Bx\},$$

and assume the consistency of P, i.e., $\Gamma \ne \emptyset$. Since $\Gamma = C \cap \{x \in H_1 : (A - B)x = 0\}$ is the intersection of two closed convex sets, $\Gamma$ is closed, convex, and nonempty.

Let $H = H_1 \times H_1$, let $M = \{(x,x) : x \in H_1\} \subseteq H$, and let $P$ be the linear operator from $H_1$ onto $M$ with the matrix form

$$P = \begin{bmatrix} I \\ I \end{bmatrix},$$

that is to say, $P(x) = (x,x)$ for all $x \in H_1$.

Define $G\colon H \to H_2$ by $G((x,y)) = Ax - By$ for all $(x,y) \in H$. Then $G$ has the matrix form $G = [A, -B]$, and $GP = A - B$, $P^*G^*GP = A^*A - A^*B - B^*A + B^*B$.
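In finite dimensions these operators are just block matrices; the following sketch (with sizes of our own choosing, purely for illustration) verifies the identities $GP = A - B$ and $P^*G^*GP = A^*A - A^*B - B^*A + B^*B$ numerically.

```python
import numpy as np

# Block-matrix realization of the product-space operators P and G, assuming
# (for illustration) H_1 = R^n and H_2 = R^m.
rng = np.random.default_rng(2)
m, n = 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
I = np.eye(n)

P = np.vstack([I, I])        # P x = (x, x)
G = np.hstack([A, -B])       # G((x, y)) = A x - B y

print(np.allclose(G @ P, A - B))                           # GP = A - B
print(np.allclose(P.T @ G.T @ (G @ P),
                  A.T @ A - A.T @ B - B.T @ A + B.T @ B))  # P*G*GP
```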

The problem P can now be reformulated as finding $x \in C$ with $GPx = 0$, or, equivalently, solving the following minimization problem:

$$\min_{x \in C} f(x) = \frac{1}{2}\|GPx\|^2,$$
(2.1)

which is ill-posed. A classical way out is the well-known Tychonov regularization, which approximates a solution of problem (2.1) by the unique minimizer of the regularized problem

$$\min_{x \in C} f_\alpha(x) = \frac{1}{2}\|GPx\|^2 + \frac{1}{2}\alpha\|x\|^2,$$
(2.2)

where $\alpha > 0$ is the regularization parameter. Denote by $x_\alpha$ the unique solution of (2.2).

Proposition 2.1 For every $\alpha > 0$, the solution $x_\alpha$ of (2.2) is uniquely defined, and $x_\alpha$ is characterized by the inequality

$$\langle P^*G^*GPx_\alpha + \alpha x_\alpha,\, x - x_\alpha\rangle \ge 0, \quad \forall x \in C.$$

Proof Obviously, $f(x) = \frac{1}{2}\|GPx\|^2$ is convex and differentiable with gradient $\nabla f(x) = P^*G^*GPx$. Recalling that $f_\alpha(x) = f(x) + \frac{1}{2}\alpha\|x\|^2$, we see that $f_\alpha$ is strictly convex, coercive, and differentiable with gradient

$$\nabla f_\alpha(x) = P^*G^*GPx + \alpha x.$$

According to Lemma 1.4, $x_\alpha$ exists, is unique, and is characterized by the inequality

$$\langle P^*G^*GPx_\alpha + \alpha x_\alpha,\, x - x_\alpha\rangle \ge 0, \quad \forall x \in C.$$
(2.3)

 □

Definition 2.2 An element $\tilde{x} \in \Gamma$ is said to be the minimal norm solution of the problem P (1.1) if $\|\tilde{x}\| = \inf_{x \in \Gamma}\|x\|$.

The following proposition collects some useful properties of the net $\{x_\alpha\}$ of unique solutions of (2.2).

Proposition 2.3 Let $x_\alpha$ be given as the unique solution of (2.2). Then we have:

(i) $\|x_\alpha\|$ is decreasing for $\alpha \in (0,\infty)$;

(ii) $\alpha \mapsto x_\alpha$ defines a continuous curve from $(0,\infty)$ to $H_1$.

Proof Let $\alpha > \beta > 0$. Since $x_\alpha$ and $x_\beta$ are the unique minimizers of $f_\alpha$ and $f_\beta$, respectively, we get

$$\frac{1}{2}\|GPx_\alpha\|^2 + \frac{1}{2}\alpha\|x_\alpha\|^2 \le \frac{1}{2}\|GPx_\beta\|^2 + \frac{1}{2}\alpha\|x_\beta\|^2, \qquad \frac{1}{2}\|GPx_\beta\|^2 + \frac{1}{2}\beta\|x_\beta\|^2 \le \frac{1}{2}\|GPx_\alpha\|^2 + \frac{1}{2}\beta\|x_\alpha\|^2.$$

Adding these two inequalities gives $(\alpha - \beta)(\|x_\alpha\|^2 - \|x_\beta\|^2) \le 0$, whence $\|x_\alpha\| \le \|x_\beta\|$. Thus $\|x_\alpha\|$ is decreasing for $\alpha \in (0,\infty)$.

According to Proposition 2.1, we get

$$\langle P^*G^*GPx_\alpha + \alpha x_\alpha,\, x_\beta - x_\alpha\rangle \ge 0$$

and

$$\langle P^*G^*GPx_\beta + \beta x_\beta,\, x_\alpha - x_\beta\rangle \ge 0.$$

Adding these inequalities and using the positivity of $P^*G^*GP$, it follows that

$$\langle x_\alpha - x_\beta,\, \alpha x_\alpha - \beta x_\beta\rangle \le -\langle x_\alpha - x_\beta,\, P^*G^*GP(x_\alpha - x_\beta)\rangle \le 0.$$

Thus

$$\alpha\|x_\alpha - x_\beta\|^2 \le (\beta - \alpha)\langle x_\alpha - x_\beta,\, x_\beta\rangle.$$

It turns out that

$$\|x_\alpha - x_\beta\| \le \frac{|\alpha - \beta|}{\alpha}\|x_\beta\|.$$

Hence $\alpha \mapsto x_\alpha$ is a continuous curve from $(0,\infty)$ to $H_1$. □

Theorem 2.4 Let $x_\alpha$ be the unique solution of (2.2). Then $x_\alpha$ converges strongly to the minimum-norm solution $\tilde{x}$ of the problem P (1.1) as $\alpha \to 0$.

Proof For any $\alpha > 0$, since $x_\alpha$ is the minimizer in (2.2), we get

$$\frac{1}{2}\|GPx_\alpha\|^2 + \frac{1}{2}\alpha\|x_\alpha\|^2 \le \frac{1}{2}\|GP\tilde{x}\|^2 + \frac{1}{2}\alpha\|\tilde{x}\|^2.$$

Since $\tilde{x} \in \Gamma$ is a solution of P, $GP\tilde{x} = 0$, and hence

$$\frac{1}{2}\|GPx_\alpha\|^2 + \frac{1}{2}\alpha\|x_\alpha\|^2 \le \frac{1}{2}\alpha\|\tilde{x}\|^2.$$

It follows that $\|x_\alpha\| \le \|\tilde{x}\|$ for all $\alpha > 0$. Thus $\{x_\alpha\}$ is a bounded net in $H_1$.

All we need to prove is that, for any sequence $\{\alpha_n\}$ with $\alpha_n \to 0$, the sequence $\{x_{\alpha_n}\}$ contains a subsequence converging strongly to $\tilde{x}$. For convenience, we set $x_n = x_{\alpha_n}$.

Since $\{x_n\}$ is bounded, by passing to a subsequence if necessary we may assume that $\{x_n\}$ converges weakly to a point $\hat{x} \in C$. Due to Proposition 2.1, we get

$$\langle P^*G^*GPx_n + \alpha_n x_n,\, \tilde{x} - x_n\rangle \ge 0.$$

It turns out that

$$\langle GPx_n,\, GP\tilde{x} - GPx_n\rangle \ge \alpha_n\langle x_n,\, x_n - \tilde{x}\rangle.$$

Since $\tilde{x} \in \Gamma$, we have $GP\tilde{x} = 0$, and it follows that

$$\|GPx_n\|^2 \le \alpha_n\langle x_n,\, \tilde{x} - x_n\rangle.$$

Noting that $\|x_n\| \le \|\tilde{x}\|$, so that $\langle x_n,\, \tilde{x} - x_n\rangle \le \|x_n\|\|\tilde{x}\| \le \|\tilde{x}\|^2$, we have

$$\|GPx_n\|^2 \le \alpha_n\|\tilde{x}\|^2 \to 0.$$

Moreover, note that $\{x_n\}$ converges weakly to the point $\hat{x} \in C$; thus $\{GPx_n\}$ converges weakly to $GP\hat{x}$. By the weak lower semicontinuity of the norm, $\|GP\hat{x}\| \le \liminf_{n\to\infty}\|GPx_n\| = 0$. It follows that $GP\hat{x} = 0$, i.e., $\hat{x} \in \Gamma$.

Finally, we prove that $\hat{x} = \tilde{x}$, which finishes the proof.

Recalling that $\{x_n\}$ converges weakly to $\hat{x}$ and $\|x_n\| \le \|\tilde{x}\|$, one can deduce that

$$\|\hat{x}\| \le \liminf_{n\to\infty}\|x_n\| \le \|\tilde{x}\| = \min\{\|x\| : x \in \Gamma\}.$$

This shows that $\hat{x}$ is also a point of $\Gamma$ with minimum norm. By the uniqueness of the minimum-norm element, we get $\hat{x} = \tilde{x}$. Moreover, the above inequalities give $\|x_n\| \to \|\tilde{x}\|$, which, combined with the weak convergence $x_n \rightharpoonup \tilde{x}$, yields the strong convergence $x_n \to \tilde{x}$. □
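The following sketch illustrates Theorem 2.4 on a synthetic instance that is entirely our own construction (not from the paper): $D$ plays the role of $GP = A - B$, $C$ is a ball shifted away from the origin, and each $x_\alpha$ is computed by projected gradient descent on $f_\alpha$. For this instance the minimum-norm solution is known in closed form, so the convergence $x_\alpha \to \tilde{x}$ can be observed directly.

```python
import numpy as np

# Illustration of Theorem 2.4 (synthetic instance, our own construction).
# Ax = Bx iff D x = 0, where D stands for A - B = GP.
rng = np.random.default_rng(3)
m, n = 2, 4
D = rng.standard_normal((m, n))
x_star = np.linalg.svd(D)[2][-1]       # unit vector with D @ x_star = 0
center, radius = 2.0 * x_star, 1.5     # C = {x : ||x - center|| <= 1.5}
# Here Gamma = C ∩ null(D) and its minimum-norm point is 0.5 * x_star.

def P_C(w):
    d = w - center
    nd = np.linalg.norm(d)
    return w if nd <= radius else center + radius * d / nd

Q = D.T @ D                            # the matrix P*G*GP

def x_alpha(alpha, iters=50000):
    t = 1.0 / (np.linalg.norm(Q, 2) + alpha)      # step < 2/(L + alpha)
    x = P_C(rng.standard_normal(n))
    for _ in range(iters):
        x = P_C(x - t * (Q @ x + alpha * x))      # projected gradient on f_alpha
    return x

for alpha in (1.0, 0.1, 0.01):
    print(alpha, np.linalg.norm(x_alpha(alpha) - 0.5 * x_star))
# the error decreases as alpha -> 0, as Theorem 2.4 predicts
```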

Finally, we introduce another method to obtain the minimum-norm solution of the problem P.

Lemma 2.5 Let $T = I - \gamma P^*G^*GP$, where $0 < \gamma < 2/\rho(P^*G^*GP)$ with $\rho(P^*G^*GP)$ being the spectral radius of the self-adjoint operator $P^*G^*GP$ on $H_1$. Then we have the following:

(1) $\|T\| \le 1$ (i.e., $T$ is nonexpansive) and $T$ is averaged;

(2) $\operatorname{Fix}(T) = \{x \in H_1 : Ax = Bx\}$ and $\operatorname{Fix}(P_CT) = \operatorname{Fix}(P_C) \cap \operatorname{Fix}(T) = \Gamma$;

(3) $x \in \operatorname{Fix}(P_CT)$ if and only if $x$ is a solution of the variational inequality $\langle P^*G^*GPx,\, v - x\rangle \ge 0$, $\forall v \in C$.

Proof (1) It is easily proved that $\|T\| \le 1$; we only need to prove that $T = I - \gamma P^*G^*GP$ is averaged. Indeed, choose $0 < \beta < 1$ such that $\gamma/(1-\beta) < 2/\rho(P^*G^*GP)$; then $T = I - \gamma P^*G^*GP = \beta I + (1-\beta)V$, where $V = I - \frac{\gamma}{1-\beta}P^*G^*GP$ is a nonexpansive mapping. That is to say, $T$ is averaged.

(2) If $x \in \{x \in H_1 : Ax = Bx\}$, then $GPx = Ax - Bx = 0$, and it is obvious that $x \in \operatorname{Fix}(T)$. Conversely, assume that $x \in \operatorname{Fix}(T)$. Then $x = x - \gamma P^*G^*GPx$, hence $\gamma P^*G^*GPx = 0$, and therefore $\|GPx\|^2 = \langle P^*G^*GPx,\, x\rangle = 0$; we get $x \in \{x \in H_1 : Ax = Bx\}$. Hence $\operatorname{Fix}(T) = \{x \in H_1 : Ax = Bx\}$.

Now we prove $\operatorname{Fix}(P_CT) = \operatorname{Fix}(P_C) \cap \operatorname{Fix}(T) = \Gamma$. Since $\operatorname{Fix}(T) = \{x \in H_1 : Ax = Bx\}$ and $\operatorname{Fix}(P_C) = C$, the identity $\operatorname{Fix}(P_C) \cap \operatorname{Fix}(T) = \Gamma$ is obvious. On the other hand, since $\operatorname{Fix}(P_C) \cap \operatorname{Fix}(T) = \Gamma \ne \emptyset$ and both $P_C$ and $T$ are averaged, Lemma 1.5 gives $\operatorname{Fix}(P_CT) = \operatorname{Fix}(P_C) \cap \operatorname{Fix}(T)$.

(3)

$$\langle P^*G^*GPx,\, v - x\rangle \ge 0,\ \forall v \in C \iff \langle x - (x - \gamma P^*G^*GPx),\, v - x\rangle \ge 0,\ \forall v \in C \iff x = P_C(x - \gamma P^*G^*GPx) \iff x \in \operatorname{Fix}(P_CT).$$

 □

Remark 2.6 Choose a constant $\gamma$ satisfying $0 < \gamma < 2/\rho(P^*G^*GP)$. For $\alpha \in \big(0, \frac{2 - \gamma\|P^*G^*GP\|}{2\gamma}\big)$, we define a mapping

$$W_\alpha(x) := P_C\big[(1 - \alpha\gamma)I - \gamma P^*G^*GP\big]x.$$

It is clear that $W_\alpha$ is a contraction. Hence $W_\alpha$ has a unique fixed point $x_\alpha$, i.e.,

$$x_\alpha = P_C\big[(1 - \alpha\gamma)I - \gamma P^*G^*GP\big]x_\alpha.$$
(2.4)
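Since $W_\alpha$ is a contraction, $x_\alpha$ in (2.4) can be computed by plain fixed-point iteration. The sketch below does this on the same kind of synthetic instance as before (again entirely our own construction); by Theorem 2.7 below, the computed $x_\alpha$ approximates the minimum-norm solution when $\alpha$ is small.

```python
import numpy as np

# Fixed-point iteration x <- W_alpha(x) from Remark 2.6 (synthetic instance).
rng = np.random.default_rng(3)
m, n = 2, 4
D = rng.standard_normal((m, n))        # D stands for A - B = GP
x_star = np.linalg.svd(D)[2][-1]       # D @ x_star = 0, ||x_star|| = 1
center, radius = 2.0 * x_star, 1.5     # C = shifted ball; min-norm solution 0.5*x_star

def P_C(w):
    d = w - center
    nd = np.linalg.norm(d)
    return w if nd <= radius else center + radius * d / nd

Q = D.T @ D
rho = np.linalg.norm(Q, 2)             # spectral radius of the PSD matrix Q
gamma = 1.0 / rho                      # 0 < gamma < 2/rho
alpha = 1e-3                           # inside (0, (2 - gamma*rho)/(2*gamma))

x = rng.standard_normal(n)
for _ in range(100000):
    x = P_C((1.0 - alpha * gamma) * x - gamma * (Q @ x))   # x = W_alpha(x)

print(np.linalg.norm(x - 0.5 * x_star))   # small for small alpha (Theorem 2.7)
```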

Theorem 2.7 Let $x_\alpha$ be given as in (2.4). Then $x_\alpha$ converges strongly to the minimum-norm solution $\tilde{x}$ of the problem P (1.1) as $\alpha \to 0$.

Proof Choose $\check{x} \in \Gamma$. Noting that $\alpha \in \big(0, \frac{2 - \gamma\|P^*G^*GP\|}{2\gamma}\big)$, the operator $I - \frac{\gamma}{1-\alpha\gamma}P^*G^*GP$ is nonexpansive, and it turns out that

$$\begin{aligned} \|x_\alpha - \check{x}\| &= \big\|P_C\big[(1-\alpha\gamma)I - \gamma P^*G^*GP\big]x_\alpha - P_C\big[\check{x} - \gamma P^*G^*GP\check{x}\big]\big\| \\ &\le \big\|\big[(1-\alpha\gamma)I - \gamma P^*G^*GP\big]x_\alpha - \big[\check{x} - \gamma P^*G^*GP\check{x}\big]\big\| \\ &= \Big\|(1-\alpha\gamma)\Big[x_\alpha - \tfrac{\gamma}{1-\alpha\gamma}P^*G^*GPx_\alpha\Big] - (1-\alpha\gamma)\Big[\check{x} - \tfrac{\gamma}{1-\alpha\gamma}P^*G^*GP\check{x}\Big] - \alpha\gamma\check{x}\Big\| \\ &\le (1-\alpha\gamma)\Big\|\Big(x_\alpha - \tfrac{\gamma}{1-\alpha\gamma}P^*G^*GPx_\alpha\Big) - \Big(\check{x} - \tfrac{\gamma}{1-\alpha\gamma}P^*G^*GP\check{x}\Big)\Big\| + \alpha\gamma\|\check{x}\| \\ &\le (1-\alpha\gamma)\|x_\alpha - \check{x}\| + \alpha\gamma\|\check{x}\|. \end{aligned}$$

That is,

$$\|x_\alpha - \check{x}\| \le \|\check{x}\|.$$

Hence $\{x_\alpha\}$ is bounded.

Taking (2.4) into account, we have

$$\big\|x_\alpha - P_C\big[I - \gamma P^*G^*GP\big]x_\alpha\big\| \le \alpha\gamma\|x_\alpha\| \to 0.$$

We assert that $\{x_\alpha\}$ is relatively norm compact as $\alpha \to 0^+$. In fact, assume that $\{\alpha_n\} \subset \big(0, \frac{2 - \gamma\|P^*G^*GP\|}{2\gamma}\big)$ and $\alpha_n \to 0^+$ as $n \to \infty$. For convenience, we put $x_n := x_{\alpha_n}$; then

$$\big\|x_n - P_C\big[I - \gamma P^*G^*GP\big]x_n\big\| \le \alpha_n\gamma\|x_n\| \to 0.$$

Since $P_C$ is firmly nonexpansive, one concludes that

$$\begin{aligned} \|x_\alpha - \check{x}\|^2 &= \big\|P_C\big[(1-\alpha\gamma)I - \gamma P^*G^*GP\big]x_\alpha - P_C\big[\check{x} - \gamma P^*G^*GP\check{x}\big]\big\|^2 \\ &\le \big\langle\big[(1-\alpha\gamma)I - \gamma P^*G^*GP\big]x_\alpha - \big[\check{x} - \gamma P^*G^*GP\check{x}\big],\, x_\alpha - \check{x}\big\rangle \\ &= (1-\alpha\gamma)\Big\langle\Big[x_\alpha - \tfrac{\gamma}{1-\alpha\gamma}P^*G^*GPx_\alpha\Big] - \Big[\check{x} - \tfrac{\gamma}{1-\alpha\gamma}P^*G^*GP\check{x}\Big],\, x_\alpha - \check{x}\Big\rangle - \alpha\gamma\langle\check{x},\, x_\alpha - \check{x}\rangle \\ &\le (1-\alpha\gamma)\|x_\alpha - \check{x}\|^2 - \alpha\gamma\langle\check{x},\, x_\alpha - \check{x}\rangle. \end{aligned}$$

That is,

$$\|x_\alpha - \check{x}\|^2 \le -\langle\check{x},\, x_\alpha - \check{x}\rangle.$$

Thus,

$$\|x_n - \check{x}\|^2 \le -\langle\check{x},\, x_n - \check{x}\rangle, \quad \forall \check{x} \in \Gamma.$$

Since $\{x_n\}$ is bounded, there exists a subsequence of $\{x_n\}$ which converges weakly to a point $\tilde{x}$. Without loss of generality, we may assume that $\{x_n\}$ itself converges weakly to $\tilde{x}$. Noting that

$$\big\|x_n - P_C\big[I - \gamma P^*G^*GP\big]x_n\big\| \le \alpha_n\gamma\|x_n\| \to 0,$$

and applying Lemma 1.1, we obtain $\tilde{x} \in \operatorname{Fix}(P_C[I - \gamma P^*G^*GP]) = \Gamma$.

Since

$$\|x_n - \check{x}\|^2 \le -\langle\check{x},\, x_n - \check{x}\rangle, \quad \forall \check{x} \in \Gamma,$$

taking $\check{x} = \tilde{x}$ we conclude that

$$\|x_n - \tilde{x}\|^2 \le -\langle\tilde{x},\, x_n - \tilde{x}\rangle.$$

Hence, as $\{x_n\}$ converges weakly to $\tilde{x}$, the right-hand side tends to zero, and $\{x_n\}$ converges strongly to $\tilde{x}$. That is to say, $\{x_\alpha\}$ is relatively norm compact as $\alpha \to 0^+$.

Moreover, again using

$$\|x_n - \check{x}\|^2 \le -\langle\check{x},\, x_n - \check{x}\rangle, \quad \forall \check{x} \in \Gamma,$$

and letting $n \to \infty$, we have

$$\|\tilde{x} - \check{x}\|^2 \le -\langle\check{x},\, \tilde{x} - \check{x}\rangle, \quad \forall \check{x} \in \Gamma.$$

This implies that

$$\langle\check{x},\, \check{x} - \tilde{x}\rangle \ge 0, \quad \forall \check{x} \in \Gamma,$$

which, by the convexity of $\Gamma$ (replace $\check{x}$ by $t\check{x} + (1-t)\tilde{x}$ and let $t \to 0^+$), is equivalent to

$$\langle\tilde{x},\, \check{x} - \tilde{x}\rangle \ge 0, \quad \forall \check{x} \in \Gamma.$$

It turns out that $\tilde{x} = P_\Gamma(0)$. Consequently, each weak cluster point of $\{x_\alpha\}$ equals $\tilde{x}$. Thus $x_\alpha \to \tilde{x}$ as $\alpha \to 0$, the minimum-norm solution of the problem P. □

3 Iterative algorithm for the minimum-norm solution of the problem P

In this section, we introduce the following algorithm and prove its strong convergence; more importantly, its limit is the minimum-norm solution of the problem P.

Algorithm 3.1 For an arbitrary point $x_0 \in H_1$, the sequence $\{x_n\}$ is generated by the iterative algorithm (a numerical sketch is given after the conditions below)

$$x_{n+1} = P_C\big\{(1 - \alpha_n)\big[I - \gamma P^*G^*GP\big]x_n\big\},$$
(3.1)

where $\{\alpha_n\}$ is a sequence in $(0,1)$ such that

(i) $\lim_{n\to\infty}\alpha_n = 0$;

(ii) $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(iii) $\sum_{n=0}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n\to\infty}|\alpha_{n+1} - \alpha_n|/\alpha_n = 0$.
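Before turning to the convergence proof, here is a small runnable sketch of iteration (3.1) on the synthetic instance used in Section 2 (again our own construction, with the admissible choice $\alpha_n = 1/(n+2)$, which satisfies (i)-(iii)).

```python
import numpy as np

# Run of iteration (3.1) on a synthetic instance; D stands for A - B = GP.
rng = np.random.default_rng(3)
m, n = 2, 4
D = rng.standard_normal((m, n))
x_star = np.linalg.svd(D)[2][-1]       # D @ x_star = 0
center, radius = 2.0 * x_star, 1.5     # C = shifted ball; min-norm solution 0.5*x_star

def P_C(w):
    d = w - center
    nd = np.linalg.norm(d)
    return w if nd <= radius else center + radius * d / nd

Q = D.T @ D
gamma = 1.0 / np.linalg.norm(Q, 2)     # 0 < gamma < 2/rho(Q)

x = rng.standard_normal(n)             # arbitrary x_0
for k in range(200000):
    alpha = 1.0 / (k + 2)              # satisfies (i)-(iii)
    x = P_C((1.0 - alpha) * (x - gamma * (Q @ x)))   # iteration (3.1)

# approaches the minimum-norm solution 0.5*x_star (Theorem 3.2); convergence
# is slow because alpha_n decays like 1/n
print(np.linalg.norm(x - 0.5 * x_star))
```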

Now, we prove the strong convergence of the iterative algorithm.

Theorem 3.2 The sequence $\{x_n\}$ generated by algorithm (3.1) converges strongly to the minimum-norm solution $\tilde{x}$ of the problem P (1.1).

Proof Let $R_n$ and $R$ be defined by

$$R_nx := P_C\big\{(1 - \alpha_n)\big[I - \gamma P^*G^*GP\big]x\big\} = P_C\big[(1 - \alpha_n)Tx\big], \qquad Rx := P_C\big(I - \gamma P^*G^*GP\big)x = P_C(Tx),$$

where $T = I - \gamma P^*G^*GP$. By Lemma 2.5, it is easy to see that $R_n$ is a contraction with contraction constant $1 - \alpha_n$. Algorithm (3.1) can be written as $x_{n+1} = R_nx_n$.

For any $\hat{x} \in \Gamma$ (so that $T\hat{x} = \hat{x}$ and $P_C(T\hat{x}) = \hat{x}$), we have

$$\|R_n\hat{x} - \hat{x}\| = \big\|P_C\big[(1-\alpha_n)T\hat{x}\big] - P_C(T\hat{x})\big\| \le \big\|(1-\alpha_n)T\hat{x} - T\hat{x}\big\| = \alpha_n\|T\hat{x}\| = \alpha_n\|\hat{x}\|.$$

Hence,

$$\|x_{n+1} - \hat{x}\| = \|R_nx_n - \hat{x}\| \le \|R_nx_n - R_n\hat{x}\| + \|R_n\hat{x} - \hat{x}\| \le (1-\alpha_n)\|x_n - \hat{x}\| + \alpha_n\|\hat{x}\| \le \max\{\|x_n - \hat{x}\|, \|\hat{x}\|\}.$$

It follows that $\|x_n - \hat{x}\| \le \max\{\|x_0 - \hat{x}\|, \|\hat{x}\|\}$. So $\{x_n\}$ is bounded.

Next we prove that $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$.

Indeed,

$$\|x_{n+1} - x_n\| = \|R_nx_n - R_{n-1}x_{n-1}\| \le \|R_nx_n - R_nx_{n-1}\| + \|R_nx_{n-1} - R_{n-1}x_{n-1}\| \le (1-\alpha_n)\|x_n - x_{n-1}\| + \|R_nx_{n-1} - R_{n-1}x_{n-1}\|.$$

Notice that

$$\|R_nx_{n-1} - R_{n-1}x_{n-1}\| = \big\|P_C\big[(1-\alpha_n)Tx_{n-1}\big] - P_C\big[(1-\alpha_{n-1})Tx_{n-1}\big]\big\| \le |\alpha_n - \alpha_{n-1}|\,\|Tx_{n-1}\| \le |\alpha_n - \alpha_{n-1}|\,\|x_{n-1}\|.$$

Hence

$$\|x_{n+1} - x_n\| \le (1-\alpha_n)\|x_n - x_{n-1}\| + |\alpha_n - \alpha_{n-1}|\,\|x_{n-1}\|.$$

By virtue of assumptions (i)-(iii) and Lemma 1.2, we have

$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0.$$

Therefore,

$$\|x_n - Rx_n\| \le \|x_{n+1} - x_n\| + \|R_nx_n - Rx_n\| \le \|x_{n+1} - x_n\| + \big\|(1-\alpha_n)Tx_n - Tx_n\big\| \le \|x_{n+1} - x_n\| + \alpha_n\|x_n\| \to 0.$$

The demiclosedness principle (Lemma 1.1) then ensures that each weak limit point of $\{x_n\}$ is a fixed point of the nonexpansive mapping $R = P_CT$, that is, a point of the solution set $\Gamma$ of the problem P (1.1).

Finally, we prove that $\lim_{n\to\infty}\|x_{n+1} - \tilde{x}\| = 0$.

Choose $0 < \beta < 1$ such that $\gamma/(1-\beta) < 2/\rho(P^*G^*GP)$; then $T = I - \gamma P^*G^*GP = \beta I + (1-\beta)V$, where $V = I - \frac{\gamma}{1-\beta}P^*G^*GP$ is a nonexpansive mapping. Taking $z \in \Gamma$ (so that $Tz = Vz = z$), we deduce that

$$\begin{aligned} \|x_{n+1} - z\|^2 &= \big\|P_C\big[(1-\alpha_n)Tx_n\big] - z\big\|^2 \le \big\|(1-\alpha_n)(Tx_n - z) - \alpha_nz\big\|^2 \\ &\le (1-\alpha_n)\|Tx_n - z\|^2 + \alpha_n\|z\|^2 \le \|Tx_n - z\|^2 + \alpha_n\|z\|^2 \\ &= \big\|\beta(x_n - z) + (1-\beta)(Vx_n - z)\big\|^2 + \alpha_n\|z\|^2 \\ &= \beta\|x_n - z\|^2 + (1-\beta)\|Vx_n - z\|^2 - \beta(1-\beta)\|x_n - Vx_n\|^2 + \alpha_n\|z\|^2 \\ &\le \|x_n - z\|^2 - \beta(1-\beta)\|x_n - Vx_n\|^2 + \alpha_n\|z\|^2. \end{aligned}$$

Then

$$\begin{aligned} \beta(1-\beta)\|x_n - Vx_n\|^2 &\le \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + \alpha_n\|z\|^2 \\ &\le \big(\|x_n - z\| + \|x_{n+1} - z\|\big)\big(\|x_n - z\| - \|x_{n+1} - z\|\big) + \alpha_n\|z\|^2 \\ &\le \big(\|x_n - z\| + \|x_{n+1} - z\|\big)\|x_n - x_{n+1}\| + \alpha_n\|z\|^2 \to 0. \end{aligned}$$

Noting that $Tx_n - x_n = (1-\beta)(Vx_n - x_n)$, it follows that $\lim_{n\to\infty}\|Tx_n - x_n\| = 0$.

Take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that

$$\limsup_{n\to\infty}\langle x_n - \tilde{x},\, -\tilde{x}\rangle = \lim_{k\to\infty}\langle x_{n_k} - \tilde{x},\, -\tilde{x}\rangle.$$

By virtue of the boundedness of $\{x_n\}$, we may further assume with no loss of generality that $\{x_{n_k}\}$ converges weakly to a point $\check{x}$. Since $\|Rx_n - x_n\| \to 0$, the demiclosedness principle gives $\check{x} \in \operatorname{Fix}(R) = \operatorname{Fix}(P_CT) = \Gamma$. Noticing that $\tilde{x}$ is the projection of the origin onto $\Gamma$, we get

$$\limsup_{n\to\infty}\langle x_n - \tilde{x},\, -\tilde{x}\rangle = \lim_{k\to\infty}\langle x_{n_k} - \tilde{x},\, -\tilde{x}\rangle = \langle\check{x} - \tilde{x},\, -\tilde{x}\rangle \le 0.$$

Finally, we compute

$$\begin{aligned} \|x_{n+1} - \tilde{x}\|^2 &= \big\|P_C\big[(1-\alpha_n)Tx_n\big] - P_C(T\tilde{x})\big\|^2 \le \big\|(1-\alpha_n)Tx_n - T\tilde{x}\big\|^2 \\ &= \big\|(1-\alpha_n)(Tx_n - \tilde{x}) - \alpha_n\tilde{x}\big\|^2 \\ &= (1-\alpha_n)^2\|Tx_n - \tilde{x}\|^2 + \alpha_n^2\|\tilde{x}\|^2 + 2\alpha_n(1-\alpha_n)\langle Tx_n - \tilde{x},\, -\tilde{x}\rangle \\ &\le (1-\alpha_n)\|x_n - \tilde{x}\|^2 + \alpha_n\big[\alpha_n\|\tilde{x}\|^2 + 2(1-\alpha_n)\langle Tx_n - \tilde{x},\, -\tilde{x}\rangle\big]. \end{aligned}$$

Since $\limsup_{n\to\infty}\langle x_n - \tilde{x},\, -\tilde{x}\rangle \le 0$ and $\|x_n - Tx_n\| \to 0$, we know that $\limsup_{n\to\infty}\big(\alpha_n\|\tilde{x}\|^2 + 2(1-\alpha_n)\langle Tx_n - \tilde{x},\, -\tilde{x}\rangle\big) \le 0$. By Lemma 1.2, we conclude that $\lim_{n\to\infty}\|x_{n+1} - \tilde{x}\| = 0$. This completes the proof. □

4 KM-CQ-like iterative algorithm for the problem P

In this section, we establish a KM-CQ-like algorithm which converges strongly to a solution of the problem P.

Algorithm 4.1 For an arbitrary initial point $x_0$, the sequence $\{x_n\}$ is generated by the iteration (a numerical sketch is given after the conditions below):

$$x_{n+1} = (1 - \beta_n)x_n + \beta_nP_C\big[(1 - \alpha_n)\big(I - \gamma P^*G^*GP\big)\big]x_n,$$
(4.1)

where $\{\alpha_n\}$ is a sequence in $(0,1)$ and $\{\beta_n\}$ is a sequence in $[0,1]$ such that

(i) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(ii) $\lim_{n\to\infty}|\alpha_{n+1} - \alpha_n| = 0$;

(iii) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$.
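A runnable sketch of iteration (4.1), on the same synthetic instance as in the previous sections (our own construction), with the admissible choices $\alpha_n = 1/(n+2)$ and $\beta_n \equiv 1/2$:

```python
import numpy as np

# Run of the KM-CQ-like iteration (4.1); D stands for A - B = GP.
rng = np.random.default_rng(3)
m, n = 2, 4
D = rng.standard_normal((m, n))
x_star = np.linalg.svd(D)[2][-1]       # D @ x_star = 0
center, radius = 2.0 * x_star, 1.5     # C = shifted ball

def P_C(w):
    d = w - center
    nd = np.linalg.norm(d)
    return w if nd <= radius else center + radius * d / nd

Q = D.T @ D
gamma = 1.0 / np.linalg.norm(Q, 2)
beta = 0.5                             # beta_n, constant, satisfies (iii)

x = rng.standard_normal(n)
for k in range(200000):
    alpha = 1.0 / (k + 2)              # satisfies (i)-(ii)
    z = P_C((1.0 - alpha) * (x - gamma * (Q @ x)))
    x = (1.0 - beta) * x + beta * z    # iteration (4.1)

print(np.linalg.norm(D @ x))                          # ~ 0: Ax = Bx (Theorem 4.3)
print(np.linalg.norm(x - center) <= radius + 1e-9)    # True: x in C
```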

Lemma 4.2 If $z \in \operatorname{Fix}(T) = \operatorname{Fix}(I - \gamma P^*G^*GP)$, then for any $x \in H_1$ we have $\|Tx - z\|^2 \le \|x - z\|^2 - \beta(1-\beta)\|Vx - x\|^2$, where $\beta$ and $V$ are the same as in Lemma 2.5(1).

Proof By Lemma 2.5(1), we know that $T = \beta I + (1-\beta)V$, where $0 < \beta < 1$ and $V$ is nonexpansive. It is clear that $z \in \operatorname{Fix}(T) = \operatorname{Fix}(V)$, and

$$\begin{aligned} \|Tx - z\|^2 &= \big\|\beta(x - z) + (1-\beta)(Vx - z)\big\|^2 \\ &= \beta\|x - z\|^2 + (1-\beta)\|Vx - z\|^2 - \beta(1-\beta)\|Vx - x\|^2 \\ &\le \beta\|x - z\|^2 + (1-\beta)\|x - z\|^2 - \beta(1-\beta)\|Vx - x\|^2 \\ &= \|x - z\|^2 - \beta(1-\beta)\|Vx - x\|^2. \end{aligned}$$

 □

Theorem 4.3 The sequence $\{x_n\}$ generated by algorithm (4.1) converges strongly to a solution of the problem P.

Proof For any solution $\hat{x}$ of the problem P, according to Lemma 2.5, $\hat{x} \in \operatorname{Fix}(P_CT) = \operatorname{Fix}(P_C) \cap \operatorname{Fix}(T)$, where $T = I - \gamma P^*G^*GP$, and

$$\begin{aligned} \|x_{n+1} - \hat{x}\| &= \big\|(1-\beta_n)x_n + \beta_nP_C\big[(1-\alpha_n)T\big]x_n - \hat{x}\big\| \\ &= \big\|(1-\beta_n)(x_n - \hat{x}) + \beta_n\big(P_C\big[(1-\alpha_n)T\big]x_n - \hat{x}\big)\big\| \\ &\le (1-\beta_n)\|x_n - \hat{x}\| + \beta_n\big\|P_C\big[(1-\alpha_n)T\big]x_n - P_C\big[(1-\alpha_n)T\big]\hat{x}\big\| + \beta_n\big\|P_C\big[(1-\alpha_n)T\big]\hat{x} - \hat{x}\big\| \\ &\le (1-\beta_n)\|x_n - \hat{x}\| + \beta_n(1-\alpha_n)\|x_n - \hat{x}\| + \beta_n\alpha_n\|\hat{x}\| \\ &= (1-\beta_n\alpha_n)\|x_n - \hat{x}\| + \beta_n\alpha_n\|\hat{x}\| \le \max\{\|x_n - \hat{x}\|, \|\hat{x}\|\}. \end{aligned}$$

One can deduce that

$$\|x_n - \hat{x}\| \le \max\{\|x_0 - \hat{x}\|, \|\hat{x}\|\}.$$

Hence, $\{x_n\}$ is bounded. Moreover,

$$\begin{aligned} \big\|P_C\big[(1-\alpha_n)T\big]x_n - \hat{x}\big\| &\le \big\|(1-\alpha_n)Tx_n - \hat{x}\big\| = \big\|(1-\alpha_n)(Tx_n - \hat{x}) - \alpha_n\hat{x}\big\| \\ &\le (1-\alpha_n)\|x_n - \hat{x}\| + \alpha_n\|\hat{x}\| \le \max\{\|x_n - \hat{x}\|, \|\hat{x}\|\}. \end{aligned}$$

Since $\{x_n\}$ is bounded, we see that $\{Tx_n\}$, $\{(1-\alpha_n)Tx_n\}$, and $\{P_C[(1-\alpha_n)T]x_n\}$ are also bounded.

Let $z_n = P_C[(1-\alpha_n)T]x_n$ and let $M > 0$ be such that $M = \sup_{n\ge1}\|Tx_n\|$. Noting that

$$\big\|P_C\big[(1-\alpha_{n+1})T\big]x_n - P_C\big[(1-\alpha_n)T\big]x_n\big\| \le \big\|(1-\alpha_{n+1})Tx_n - (1-\alpha_n)Tx_n\big\| = |\alpha_n - \alpha_{n+1}|\,\|Tx_n\| \le M|\alpha_n - \alpha_{n+1}|,$$

one concludes that

$$\begin{aligned} \|z_{n+1} - z_n\| &= \big\|P_C\big[(1-\alpha_{n+1})T\big]x_{n+1} - P_C\big[(1-\alpha_n)T\big]x_n\big\| \\ &\le \big\|P_C\big[(1-\alpha_{n+1})T\big]x_{n+1} - P_C\big[(1-\alpha_{n+1})T\big]x_n\big\| + \big\|P_C\big[(1-\alpha_{n+1})T\big]x_n - P_C\big[(1-\alpha_n)T\big]x_n\big\| \\ &\le (1-\alpha_{n+1})\|x_{n+1} - x_n\| + M|\alpha_n - \alpha_{n+1}|. \end{aligned}$$

Since $0 < \alpha_n < 1$ and $\lim_{n\to\infty}|\alpha_{n+1} - \alpha_n| = 0$, we have

$$\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \le M|\alpha_n - \alpha_{n+1}|,$$

and

$$\limsup_{n\to\infty}\big(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\big) \le 0.$$

Applying Lemma 1.3, we get

$$\lim_{n\to\infty}\big\|P_C\big[(1-\alpha_n)T\big]x_n - x_n\big\| = \lim_{n\to\infty}\|z_n - x_n\| = 0.$$

Hence,

$$\|x_{n+1} - x_n\| = \big\|(1-\beta_n)x_n + \beta_nP_C\big[(1-\alpha_n)T\big]x_n - x_n\big\| = \beta_n\big\|P_C\big[(1-\alpha_n)T\big]x_n - x_n\big\| \to 0.$$

Let $R_n$ and $R$ be defined by

$$R_nx := P_C\big\{(1-\alpha_n)\big[I - \gamma P^*G^*GP\big]\big\}x = P_C\big[(1-\alpha_n)Tx\big], \qquad Rx := P_C\big(I - \gamma P^*G^*GP\big)x = P_C(Tx).$$

Noting that

$$\begin{aligned} \|x_n - Rx_n\| &\le \|x_n - x_{n+1}\| + \|x_{n+1} - Rx_n\| \\ &= \|x_n - x_{n+1}\| + \big\|(1-\beta_n)x_n + \beta_nR_nx_n - Rx_n\big\| \\ &\le \|x_n - x_{n+1}\| + (1-\beta_n)\|x_n - Rx_n\| + \beta_n\|R_nx_n - Rx_n\|, \end{aligned}$$

we have

$$\begin{aligned} \|x_n - Rx_n\| &\le \|x_n - x_{n+1}\|/\beta_n + \|R_nx_n - Rx_n\| \\ &= \|x_n - x_{n+1}\|/\beta_n + \big\|P_C\big[(1-\alpha_n)T\big]x_n - P_C(Tx_n)\big\| \\ &\le \|x_n - x_{n+1}\|/\beta_n + \big\|(1-\alpha_n)Tx_n - Tx_n\big\| \le \|x_n - x_{n+1}\|/\beta_n + M\alpha_n. \end{aligned}$$

By the assumptions, we have

$$\lim_{n\to\infty}\|x_n - Rx_n\| = 0.$$

Furthermore, since $\{x_n\}$ is bounded, there exists a subsequence of $\{x_n\}$ which converges weakly to a point $\check{x}$; without loss of generality, we may assume that $\{x_n\}$ converges weakly to $\check{x}$. Since $\|Rx_n - x_n\| \to 0$, the demiclosedness principle shows that $\check{x} \in \operatorname{Fix}(R) = \operatorname{Fix}(P_CT) = \operatorname{Fix}(P_C) \cap \operatorname{Fix}(T) = \Gamma$.

Finally, we prove that $\lim_{n\to\infty}\|x_{n+1} - \check{x}\| = 0$. In fact,

$$\begin{aligned} \|x_{n+1} - \check{x}\|^2 &= \big\|(1-\beta_n)x_n + \beta_nP_C\big[(1-\alpha_n)T\big]x_n - P_C(T\check{x})\big\|^2 \\ &\le (1-\beta_n)\|x_n - \check{x}\|^2 + \beta_n\big\|P_C\big[(1-\alpha_n)T\big]x_n - P_C(T\check{x})\big\|^2 \\ &\le (1-\beta_n)\|x_n - \check{x}\|^2 + \beta_n\big\|(1-\alpha_n)(Tx_n - \check{x}) - \alpha_n\check{x}\big\|^2 \\ &= (1-\beta_n)\|x_n - \check{x}\|^2 + \beta_n\big[(1-\alpha_n)^2\|Tx_n - \check{x}\|^2 + \alpha_n^2\|\check{x}\|^2 + 2\alpha_n(1-\alpha_n)\langle Tx_n - \check{x},\, -\check{x}\rangle\big] \\ &\le (1-\beta_n)\|x_n - \check{x}\|^2 + \beta_n\big[(1-\alpha_n)\|x_n - \check{x}\|^2 + \alpha_n^2\|\check{x}\|^2 + 2\alpha_n(1-\alpha_n)\langle Tx_n - \check{x},\, -\check{x}\rangle\big] \\ &= (1-\alpha_n\beta_n)\|x_n - \check{x}\|^2 + \alpha_n\beta_n\big[2(1-\alpha_n)\langle Tx_n - \check{x},\, -\check{x}\rangle + \alpha_n\|\check{x}\|^2\big]. \end{aligned}$$

Using Lemma 1.2, we only need to prove that

$$\limsup_{n\to\infty}\langle Tx_n - \check{x},\, -\check{x}\rangle \le 0.$$

Applying Lemma 2.5, $T$ is averaged, that is, $T = \beta I + (1-\beta)V$, where $0 < \beta < 1$ and $V$ is nonexpansive. Hence, for $z \in \operatorname{Fix}(P_CT)$, we have

$$\begin{aligned} \|x_{n+1} - z\|^2 &= \big\|(1-\beta_n)x_n + \beta_nP_C\big[(1-\alpha_n)T\big]x_n - z\big\|^2 \\ &\le (1-\beta_n)\|x_n - z\|^2 + \beta_n\big\|(1-\alpha_n)(Tx_n - z) - \alpha_nz\big\|^2 \\ &\le (1-\beta_n)\|x_n - z\|^2 + \beta_n\big[(1-\alpha_n)\|Tx_n - z\|^2 + \alpha_n\|z\|^2\big] \\ &\le (1-\beta_n)\|x_n - z\|^2 + \beta_n\big[\|Tx_n - z\|^2 + \alpha_n\|z\|^2\big]. \end{aligned}$$

By Lemma 4.2, we have

$$\begin{aligned} \|x_{n+1} - z\|^2 &\le (1-\beta_n)\|x_n - z\|^2 + \beta_n\big[\|x_n - z\|^2 - \beta(1-\beta)\|Vx_n - x_n\|^2 + \alpha_n\|z\|^2\big] \\ &\le \|x_n - z\|^2 - \beta_n\beta(1-\beta)\|Vx_n - x_n\|^2 + \beta_n\alpha_n\|z\|^2. \end{aligned}$$

Let $N > 0$ be such that $\|x_n - z\| \le N$ for all $n$; then it follows that

$$\begin{aligned} \beta_n\beta(1-\beta)\|Vx_n - x_n\|^2 &\le \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + \beta_n\alpha_n\|z\|^2 \\ &\le 2N\big|\|x_n - z\| - \|x_{n+1} - z\|\big| + \beta_n\alpha_n\|z\|^2 \\ &\le 2N\|x_n - x_{n+1}\| + \beta_n\alpha_n\|z\|^2. \end{aligned}$$

Hence,

$$\beta(1-\beta)\|Vx_n - x_n\|^2 \le \frac{2N\|x_n - x_{n+1}\|}{\beta_n} + \alpha_n\|z\|^2.$$

Since $\|x_n - x_{n+1}\| \to 0$ and $\liminf_{n\to\infty}\beta_n > 0$, we get $\|Vx_n - x_n\| \to 0$, and therefore, as $Tx_n - x_n = (1-\beta)(Vx_n - x_n)$,

$$\|Tx_n - x_n\| \to 0.$$

It follows that

$$\limsup_{n\to\infty}\langle Tx_n - \check{x},\, -\check{x}\rangle = \limsup_{n\to\infty}\langle x_n - \check{x},\, -\check{x}\rangle.$$

Since $\{x_n\}$ converges weakly to $\check{x}$, it follows that

$$\limsup_{n\to\infty}\langle Tx_n - \check{x},\, -\check{x}\rangle \le 0.$$

 □

Similarly to the proof of Theorem 4.3, one can show that the following iterative algorithm also converges strongly to a solution of the problem P; since the proof is similar to that of Theorem 4.3, we omit it.

Algorithm 4.4 For an arbitrary initial point $x_0$, the sequence $\{x_n\}$ is generated by the iteration (a numerical sketch is given after the conditions below):

$$x_{n+1} = (1-\beta_n)(1-\alpha_n)\big(I - \gamma P^*G^*GP\big)x_n + \beta_nP_C\big[(1-\alpha_n)\big(I - \gamma P^*G^*GP\big)\big]x_n,$$
(4.2)

where $\{\alpha_n\}$ is a sequence in $(0,1)$ and $\{\beta_n\}$ is a sequence in $[0,1]$ such that

(i) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=0}^{\infty}\alpha_n = \infty$;

(ii) $\lim_{n\to\infty}|\alpha_{n+1} - \alpha_n| = 0$;

(iii) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$.
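For completeness, here is the same synthetic experiment with the update changed to (4.2); only the loop body differs from the sketch given after Algorithm 4.1.

```python
import numpy as np

# Run of iteration (4.2) on the synthetic instance; D stands for A - B = GP.
rng = np.random.default_rng(3)
m, n = 2, 4
D = rng.standard_normal((m, n))
x_star = np.linalg.svd(D)[2][-1]
center, radius = 2.0 * x_star, 1.5     # C = shifted ball

def P_C(w):
    d = w - center
    nd = np.linalg.norm(d)
    return w if nd <= radius else center + radius * d / nd

Q = D.T @ D
gamma = 1.0 / np.linalg.norm(Q, 2)
beta = 0.5

x = rng.standard_normal(n)
for k in range(200000):
    alpha = 1.0 / (k + 2)
    y = (1.0 - alpha) * (x - gamma * (Q @ x))   # (1 - alpha_n) T x_n
    x = (1.0 - beta) * y + beta * P_C(y)        # iteration (4.2)

print(np.linalg.norm(D @ x))                    # ~ 0: a solution of problem P
```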