1 Introduction

Let E be a real Banach space with norm ∥·∥ and let E^* denote the dual space of E. We use → and ⇀ to denote strong and weak convergence, respectively. We denote the value of f ∈ E^* at x ∈ E by ⟨x, f⟩.

We use J to denote the normalized duality mapping from E to 2^{E^*}, which is defined by

Jx := { f ∈ E^* : ⟨x, f⟩ = ∥x∥² = ∥f∥² },  x ∈ E.

It is well known that J is single-valued if E^* is strictly convex. Moreover, J(cx) = cJx, for x ∈ E and c ∈ ℝ^1. We call J weakly sequentially continuous if, for each sequence {x_n} ⊆ E which converges weakly to x, the sequence {Jx_n} converges to Jx in the weak^* sense.

Let C be a nonempty, closed, and convex subset of E and let Q be a mapping of E onto C. Then Q is said to be sunny [1] if Q(Q(x) + t(x − Q(x))) = Q(x), for all x ∈ E and t ≥ 0.

A mapping Q of E into E is said to be a retraction [1] if Q² = Q. If a mapping Q is a retraction, then Q(z) = z for every z ∈ R(Q), where R(Q) is the range of Q.

A mapping T: C → C is said to be nonexpansive if ∥Tx − Ty∥ ≤ ∥x − y∥, for x, y ∈ C. We use F(T) to denote the fixed point set of T, that is, F(T) := {x ∈ C : Tx = x}. A mapping T: E ⊇ D(T) → R(T) ⊆ E is said to be demiclosed at p if whenever {x_n} is a sequence in D(T) such that x_n ⇀ x ∈ D(T) and Tx_n → p, then Tx = p.

A subset C of E is said to be a sunny nonexpansive retract of E [2] if there exists a sunny nonexpansive retraction of E onto C, and it is called a nonexpansive retract of E if there exists a nonexpansive retraction of E onto C. If E reduces to a Hilbert space H, then the metric projection P_C is a sunny nonexpansive retraction from H onto any closed and convex subset C of H, but this is not true in a general Banach space. We note that if E is smooth and Q is a retraction of C onto F(T), then Q is sunny and nonexpansive if and only if, for x ∈ C and z ∈ F(T), ⟨Qx − x, J(Qx − z)⟩ ≤ 0 [3].

A mapping T: C → C is called pseudo-contractive [2] if there exists j(x − y) ∈ J(x − y) such that ⟨Tx − Ty, j(x − y)⟩ ≤ ∥x − y∥² holds for all x, y ∈ C.

Interest in pseudo-contractive mappings stems mainly from their firm connection with the important class of nonlinear accretive mappings. A mapping A: D(A) ⊆ E → E is said to be accretive if ∥x_1 − x_2∥ ≤ ∥x_1 − x_2 + r(y_1 − y_2)∥, for x_i ∈ D(A), y_i ∈ Ax_i, i = 1, 2, and r > 0. If A is accretive, then we can define, for each r > 0, a nonexpansive single-valued mapping J_r^A: R(I + rA) → D(A) by J_r^A := (I + rA)^{-1}, which is called the resolvent of A. We also know that, for an accretive mapping A, N(A) = F(J_r^A), where N(A) = {x ∈ D(A) : Ax = 0}. An accretive mapping A is said to be m-accretive if R(I + λA) = E, for all λ > 0.

It is well known that if A is an accretive mapping, then the solutions of the problem 0 ∈ Ax correspond to the equilibrium points of some evolution equations. Hence, the problem of finding a solution x ∈ E with 0 ∈ Ax has been studied by many researchers (see [4–12] and the references contained therein).

One classical method for studying the problem 0 ∈ Ax, where A is an m-accretive mapping, is the following so-called proximal method (cf. [4]), presented in a Hilbert space:

x_0 ∈ H,  x_{n+1} ≈ J_{r_n}^A x_n,  n ≥ 0,
(1.1)

where J_{r_n}^A := (I + r_nA)^{-1}. It was shown that the sequence generated by (1.1) converges weakly or strongly to a zero point of A under some conditions.
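To make the scheme concrete, the following is a minimal numerical sketch of (1.1) in the Hilbert space ℝ³, assuming the illustrative choice A x = Mx with M positive semidefinite (so that A is m-accretive and the resolvent reduces to a linear solve); the matrix, step sizes, and variable names are illustrative assumptions, not data from the original text.

```python
import numpy as np

# Illustrative m-accretive mapping A x = M x on R^3 with M positive semidefinite;
# N(A) is the null space of M (here, the x3-axis).
M = np.diag([2.0, 1.0, 0.0])

def resolvent(r, x):
    """J_r^A x = (I + r A)^{-1} x, computed by a linear solve."""
    return np.linalg.solve(np.eye(3) + r * M, x)

x = np.array([1.0, -2.0, 3.0])          # x_0
for n in range(200):
    x = resolvent(1.0, x)               # x_{n+1} = J_{r_n}^A x_n with r_n = 1

print(x)                                # approx [0, 0, 3], a zero point of A
print(M @ x)                            # approx the zero vector
```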

On the other hand, one explicit iterative process was first introduced, in 1967, by Halpern [13] in the frame of Hilbert spaces:

u ∈ C,  x_0 ∈ C,  x_{n+1} = α_n u + (1 − α_n)Tx_n,  n ≥ 0,
(1.2)

where {α_n} ⊆ [0,1] and T: C → C is a nonexpansive mapping. It was proved that, under some conditions, the sequence {x_n} produced by (1.2) converges strongly to a point in F(T).
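As an illustration of (1.2), here is a small hedged sketch in ℝ², taking T to be the metric projection onto the closed unit ball (a nonexpansive mapping) and α_n = 1/(n + 2); all concrete choices are assumptions made only for the example.

```python
import numpy as np

def T(x):                               # nonexpansive: metric projection onto the unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

u = np.array([2.0, 0.5])                # anchor point u
x = np.array([-3.0, 4.0])               # x_0
for n in range(20000):
    alpha = 1.0 / (n + 2)               # alpha_n -> 0 and sum alpha_n = +infinity
    x = alpha * u + (1 - alpha) * T(x)  # scheme (1.2)

print(x, np.linalg.norm(x - T(x)))      # x is approximately a fixed point of T
```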

In 2007, Qin and Su [6] presented the following iterative algorithm:

x_1 ∈ C,  y_n = β_n x_n + (1 − β_n)J_{r_n}^A x_n,  x_{n+1} = α_n u + (1 − α_n)y_n.
(1.3)

They showed that { x n } generated by (1.3) converges strongly to a point in N(A).

Motivated by iterative algorithms (1.1) and (1.2), Zegeye and Shahzad extended their discussion to the case of finite m-accretive mappings. They presented in [14] the following iterative algorithm:

x_0 ∈ C,  x_{n+1} = α_n u + (1 − α_n)S_r x_n,  n ≥ 0,
(1.4)

where S_r = a_0I + a_1J_{A_1} + a_2J_{A_2} + ⋯ + a_lJ_{A_l} with J_{A_i} = (I + A_i)^{-1} and ∑_{i=0}^{l} a_i = 1. If ∩_{i=1}^{l} N(A_i) ≠ ∅, they proved that {x_n} generated by (1.4) converges strongly to a common point of the N(A_i) (i = 1, 2, …, l) under some conditions.
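The next sketch illustrates (1.4) in ℝ³ for two illustrative m-accretive mappings A_i x = M_i x whose null spaces share a common line; the weights a_i, the matrices, and the parameter choices are assumptions for the example, not data from [14].

```python
import numpy as np

I3 = np.eye(3)
# Two illustrative accretive mappings A_i x = M_i x; both matrices vanish on the
# x3-axis, so N(A_1) ∩ N(A_2) is that axis.
M1 = np.diag([1.0, 0.0, 0.0])
M2 = np.diag([0.0, 3.0, 0.0])
a = [0.2, 0.4, 0.4]                     # weights a_0, a_1, a_2 with a_0 + a_1 + a_2 = 1

def J(M, x):                            # resolvent (I + M)^{-1} x
    return np.linalg.solve(I3 + M, x)

def S(x):                               # S_r x = a_0 x + a_1 J_{A_1} x + a_2 J_{A_2} x
    return a[0] * x + a[1] * J(M1, x) + a[2] * J(M2, x)

u = np.array([1.0, 1.0, 1.0])
x = np.array([5.0, -4.0, 2.0])          # x_0
for n in range(20000):
    alpha = 1.0 / (n + 2)
    x = alpha * u + (1 - alpha) * S(x)  # scheme (1.4)

print(x)                                # approx a common zero of A_1 and A_2 (the x3-axis)
print(M1 @ x, M2 @ x)                   # both approx the zero vector
```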

The work in [14] was then extended to the following one presented by Hu and Liu in [15]:

x_0 ∈ C,  x_{n+1} = α_n u + β_n x_n + ϑ_n S_{r_n} x_n,  n ≥ 0,
(1.5)

where S_{r_n} = a_0I + a_1J_{r_n}^{A_1} + a_2J_{r_n}^{A_2} + ⋯ + a_lJ_{r_n}^{A_l} with J_{r_n}^{A_i} = (I + r_nA_i)^{-1} and ∑_{i=0}^{l} a_i = 1. Here {α_n}, {β_n}, {ϑ_n} ⊆ (0,1) and α_n + β_n + ϑ_n = 1. If ∩_{i=1}^{l} N(A_i) ≠ ∅, they proved that {x_n} converges strongly to a common point of the N(A_i) (i = 1, 2, …, l) under some conditions.

In 2009, Yao et al. presented the following iterative algorithm in the frame of Hilbert space in [16]:

x_1 ∈ C,  y_n = P_C[(1 − α_n)x_n],  x_{n+1} = (1 − β_n)x_n + β_n T y_n,  n ≥ 1.
(1.6)

Here T: C → C is a nonexpansive mapping with F(T) ≠ ∅. Suppose {α_n} and {β_n} are two real sequences in (0,1) satisfying

  1. (a)

    ∑_{n=1}^{∞} α_n = +∞ and lim_{n→∞} α_n = 0;

  2. (b)

    0 < liminf_{n→∞} β_n ≤ limsup_{n→∞} β_n < 1.

Then { x n } constructed by (1.6) converges strongly to a point in F(T).
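For intuition, here is a hedged numerical sketch of (1.6) in ℝ² with C = [0,1]², P_C the metric projection onto C, and a simple nonexpansive self-mapping T of C; the concrete T, C, and parameter sequences are illustrative assumptions.

```python
import numpy as np

def P_C(x):                             # metric projection onto C = [0,1] x [0,1]
    return np.clip(x, 0.0, 1.0)

target = np.array([0.5, 0.25])

def T(x):                               # nonexpansive self-mapping of C with F(T) = {target}
    return 0.5 * (x + target)

x = np.array([1.0, 1.0])                # x_1
for n in range(1, 20001):
    alpha, beta = 1.0 / (n + 1), 0.5    # conditions (a) and (b)
    y = P_C((1 - alpha) * x)            # y_n = P_C[(1 - alpha_n) x_n]
    x = (1 - beta) * x + beta * T(y)    # x_{n+1} = (1 - beta_n) x_n + beta_n T y_n

print(x, np.linalg.norm(x - T(x)))      # x approaches the fixed point of T
```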

The following lemma is commonly used in proving the convergence of the iterative algorithms in a Banach space.

Lemma 1.1 ([17])

Let E be a real uniformly smooth Banach space. Then there exists a nondecreasing continuous function β: [0, +∞) → [0, +∞) with lim_{t→0^+} β(t) = 0 and β(ct) ≤ cβ(t) for c ≥ 1, such that for all x, y ∈ E the following inequality holds:

∥x + y∥² ≤ ∥x∥² + 2⟨y, Jx⟩ + max{∥x∥, 1}∥y∥β(∥y∥).

Motivated by the work in [14] and [16], and after imposing an additional condition on the function β in Lemma 1.1 that

β(t) ≤ t / max{1, 2r_1},
(1.7)

where r 1 >0 is a constant satisfying some conditions, Shehu and Ezeora presented the following result.

Theorem 1.1 ([2])

Let E be a real uniformly smooth and uniformly convex Banach space, and let C be a nonempty, closed, and convex sunny nonexpansive retract of E, where Q_C is the sunny nonexpansive retraction of E onto C. Suppose the duality mapping J: E → E^* is weakly sequentially continuous. For each i = 1, 2, …, N, let A_i: C → E be an m-accretive mapping such that ∩_{i=1}^{N} N(A_i) ≠ ∅. Let {α_n}, {β_n} ⊆ (0,1) satisfy (a) and (b). Let {x_n} be generated iteratively by

x_1 ∈ C,  y_n = Q_C[(1 − α_n)x_n],  x_{n+1} = (1 − β_n)x_n + β_n S_N y_n,  n ≥ 1.
(1.8)

Here S_N := a_0I + a_1J_{A_1} + a_2J_{A_2} + ⋯ + a_NJ_{A_N} with J_{A_i} = (I + A_i)^{-1}, for i = 1, 2, …, N, 0 < a_k < 1, for k = 0, 1, 2, …, N, and ∑_{k=0}^{N} a_k = 1. Then {x_n} converges strongly to a common point of the N(A_i), i = 1, 2, …, N.

How can we show the convergence of the iterative sequence {x_n} in (1.8) if β no longer satisfies the additional condition (1.7)? And what can be said about the convergence of {x_n} if each A_i is allowed a different resolvent parameter in (1.8)?

To answer these questions, Wei and Tan presented the following iterative scheme in [18]:

x_1 ∈ C,  u_n = Q_C[(1 − α_n)(x_n + e_n)],  v_n = (1 − β_n)x_n + β_n S_n u_n,  x_{n+1} = γ_n x_n + (1 − γ_n)S_n v_n,  n ≥ 1,
(1.9)

where {e_n} ⊆ E is the error sequence and {A_i}_{i=1}^{N} is a finite family of m-accretive mappings, S_n := a_0I + a_1J_{r_{n,1}}^{A_1} + a_2J_{r_{n,2}}^{A_2} + ⋯ + a_NJ_{r_{n,N}}^{A_N}, J_{r_{n,i}}^{A_i} = (I + r_{n,i}A_i)^{-1}, for i = 1, 2, …, N, ∑_{k=0}^{N} a_k = 1, and 0 < a_k < 1, for k = 0, 1, 2, …, N. Some strong convergence theorems are obtained.

In this paper, our main purpose is to extend the discussion of (1.9) from one family of m-accretive mappings {A_i}_{i=1}^{N} to two families of m-accretive mappings {A_i}_{i=1}^{N} and {B_j}_{j=1}^{M}. We shall first present and study the following three-step iterative algorithm (A) with errors {e_n} ⊆ E:

x_1 ∈ C,  u_n = Q_C[(1 − α_n)(x_n + e_n)],  v_n = (1 − β_n)x_n + β_n S_n u_n,  x_{n+1} = γ_n x_n + (1 − γ_n)W_n S_n v_n,  n ≥ 1,
(A)

where S_n := a_0I + a_1J_{r_{n,1}}^{A_1} + a_2J_{r_{n,2}}^{A_2} + ⋯ + a_NJ_{r_{n,N}}^{A_N} and W_n := b_0I + b_1J_{s_{n,1}}^{B_1} + b_2J_{s_{n,2}}^{B_2} + ⋯ + b_MJ_{s_{n,M}}^{B_M}. For i = 1, 2, …, N, J_{r_{n,i}}^{A_i} = (I + r_{n,i}A_i)^{-1}; for j = 1, 2, …, M, J_{s_{n,j}}^{B_j} = (I + s_{n,j}B_j)^{-1}. Here a_0, a_1, …, a_N and b_0, b_1, …, b_M are real numbers in (0,1) with ∑_{i=0}^{N} a_i = 1 and ∑_{j=0}^{M} b_j = 1, and r_{n,i} > 0, for i = 1, 2, …, N, and s_{n,j} > 0, for j = 1, 2, …, M and n ≥ 1.
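The following is a hedged numerical sketch of algorithm (A) in the Hilbert space ℝ³ (where the sunny nonexpansive retraction Q_C is simply the metric projection); the families A_i x = M_i x and B_j x = N_j x, the weights, and the parameter sequences are illustrative assumptions chosen so that conditions (i)–(vi) of Theorem 3.1 below hold.

```python
import numpy as np

I3 = np.eye(3)
# Two illustrative families of m-accretive mappings A_i x = M_i x and B_j x = N_j x;
# every matrix vanishes on the x3-axis, so D = (cap N(A_i)) cap (cap N(B_j)) is that axis.
A = [np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 2.0, 0.0])]
B = [np.diag([3.0, 0.0, 0.0]), np.diag([0.0, 0.5, 0.0])]
a = [0.2, 0.4, 0.4]                     # weights a_0, a_1, a_2 for S_n
b = [0.3, 0.3, 0.4]                     # weights b_0, b_1, b_2 for W_n

def resolvent(M, r, x):                 # (I + r M)^{-1} x via a linear solve
    return np.linalg.solve(I3 + r * M, x)

def S(x, r=1.0):                        # S_n x = a_0 x + a_1 J^{A_1}_r x + a_2 J^{A_2}_r x
    return a[0] * x + a[1] * resolvent(A[0], r, x) + a[2] * resolvent(A[1], r, x)

def W(x, s=1.0):                        # W_n x = b_0 x + b_1 J^{B_1}_s x + b_2 J^{B_2}_s x
    return b[0] * x + b[1] * resolvent(B[0], s, x) + b[2] * resolvent(B[1], s, x)

def Q_C(x):                             # C = closed ball of radius 10; metric projection
    nrm = np.linalg.norm(x)
    return x if nrm <= 10.0 else 10.0 * x / nrm

x = np.array([4.0, -3.0, 2.0])          # x_1
for n in range(1, 20001):
    alpha = beta = 1.0 / (n + 1) ** 0.4          # (i), (ii)
    gamma = 0.5                                  # (iii)
    e = np.zeros(3)                              # error terms e_n = 0, so (vi) holds
    u = Q_C((1 - alpha) * (x + e))
    v = (1 - beta) * x + beta * S(u)             # r_{n,i} = 1 for all n, so (iv) holds
    x = gamma * x + (1 - gamma) * W(S(v))        # s_{n,j} = 1 for all n, so (v) holds

print(x)                                         # approx a point of D (here, near the origin)
print(max(np.linalg.norm(M @ x) for M in A + B)) # approx 0
```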

Later, we introduce and study the following one:

x_1 ∈ C,  u_n = Q_C[(1 − α_n)(x_n + e_n)],  v_n = (1 − β_n)x_n + β_n S_n u_n,  x_{n+1} = γ_n x_n + (1 − γ_n)U_n S_n v_n,  n ≥ 1,
(B)

where U_n := c_0I + c_1J_{t_{n,1}}^{B_1} + c_2J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1} + ⋯ + c_MJ_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}, c_0, c_1, …, c_M are real numbers in (0,1), ∑_{j=0}^{M} c_j = 1, J_{t_{n,j}}^{B_j} = (I + t_{n,j}B_j)^{-1}, and t_{n,j} > 0, for j = 1, 2, …, M and n ≥ 1.
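The only structural difference between (A) and (B) is that W_n averages the individual resolvents, while U_n averages the successively composed resolvents. The short sketch below, with illustrative matrices B_j x = N_j x and weights c_j assumed only for the example, shows how U_n can be assembled and that common zeros of the B_j remain fixed points of U_n.

```python
import numpy as np

I3 = np.eye(3)
B = [np.diag([3.0, 0.0, 0.0]), np.diag([0.0, 0.5, 0.0])]   # illustrative B_j x = N_j x
c = [0.3, 0.3, 0.4]                                         # weights c_0, c_1, c_2

def resolvent(N, t, x):                 # (I + t N)^{-1} x
    return np.linalg.solve(I3 + t * N, x)

def U(x, t=1.0):
    # U_n x = c_0 x + c_1 J^{B_1}_t x + c_2 J^{B_2}_t J^{B_1}_t x (composed resolvents)
    y1 = resolvent(B[0], t, x)          # J^{B_1}_t x
    y2 = resolvent(B[1], t, y1)         # J^{B_2}_t J^{B_1}_t x
    return c[0] * x + c[1] * y1 + c[2] * y2

z = np.array([0.0, 0.0, 5.0])           # a common zero of B_1 and B_2
print(np.allclose(U(z), z))             # True: common zeros are fixed points of U_n
```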

More details will be presented in Section 3. Some strong convergence theorems are obtained, which can be regarded as extensions of the work done in [2, 6, 14, 15, 18], etc. As a consequence, some new iterative algorithms are constructed that converge strongly to a common fixed point of two finite families of pseudo-contractive mappings from C to E.

2 Preliminaries

Now, we list some results we shall need in the sequel.

Lemma 2.1 ([19])

Let E be a real uniformly convex Banach space, let C be a nonempty, closed, and convex subset of E, and let T: C → C be a nonexpansive mapping such that F(T) ≠ ∅. Then I − T is demiclosed at zero.

Lemma 2.2 ([15])

Let E be a strictly convex Banach space with a uniformly Gâteaux differentiable norm, and let C be a nonempty, closed, and convex subset of E. Let {A_i}_{i=1}^{N} be a finite family of accretive mappings with ∩_{i=1}^{N} N(A_i) ≠ ∅, satisfying the following range conditions:

\overline{D(A_i)} ⊆ C ⊆ ∩_{r>0} R(I + rA_i),  i = 1, 2, …, N.

Let a_0, a_1, …, a_N be real numbers in (0,1) such that ∑_{i=0}^{N} a_i = 1, and let S_{r_n} = a_0I + a_1J_{r_n}^{A_1} + a_2J_{r_n}^{A_2} + ⋯ + a_NJ_{r_n}^{A_N}, where J_{r_n}^{A_i} = (I + r_nA_i)^{-1} and r_n > 0. Then S_{r_n} is nonexpansive and F(S_{r_n}) = ∩_{i=1}^{N} N(A_i).

Lemma 2.3 ([12])

In a real Banach space E, the following inequality holds:

∥x + y∥² ≤ ∥x∥² + 2⟨y, j(x + y)⟩,  x, y ∈ E,

where j(x + y) ∈ J(x + y).

Lemma 2.4 ([20])

Let { a n }, { b n }, and { c n } be three sequences of nonnegative real numbers satisfying

a_{n+1} ≤ (1 − c_n)a_n + b_nc_n,  n ≥ 1,

where {c_n} ⊆ (0,1) is such that (i) c_n → 0 and ∑_{n=1}^{∞} c_n = +∞, and (ii) either limsup_{n→∞} b_n ≤ 0 or ∑_{n=1}^{∞} |b_nc_n| < +∞. Then lim_{n→∞} a_n = 0.

Lemma 2.5 ([21])

Let {x_n} and {y_n} be two bounded sequences in a Banach space E such that x_{n+1} = β_n x_n + (1 − β_n)y_n, for n ≥ 1. Suppose {β_n} ⊆ (0,1) satisfies 0 < liminf_{n→+∞} β_n ≤ limsup_{n→+∞} β_n < 1. If limsup_{n→+∞}(∥y_{n+1} − y_n∥ − ∥x_{n+1} − x_n∥) ≤ 0, then lim_{n→+∞} ∥y_n − x_n∥ = 0.

Lemma 2.6 ([22])

Let E be a Banach space and let A be an m-accretive mapping. For λ > 0, μ > 0, and x ∈ E, we have

J_λ x = J_μ( (μ/λ)x + (1 − μ/λ)J_λ x ),

where J_λ = (I + λA)^{-1} and J_μ = (I + μA)^{-1}.
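As a quick sanity check, the resolvent identity of Lemma 2.6 can be verified numerically for an illustrative linear m-accretive mapping A x = Mx with M positive semidefinite; the matrix and the values of λ and μ below are assumptions for the example.

```python
import numpy as np

# Check J_lam x = J_mu((mu/lam) x + (1 - mu/lam) J_lam x) for A x = M x, M PSD.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
I3 = np.eye(3)

def J(r, x):                            # J_r = (I + r A)^{-1}
    return np.linalg.solve(I3 + r * M, x)

lam, mu = 2.0, 0.5
x = np.array([1.0, -2.0, 3.0])
lhs = J(lam, x)
rhs = J(mu, (mu / lam) * x + (1 - mu / lam) * J(lam, x))
print(np.allclose(lhs, rhs))            # True
```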

3 Main results

Lemma 3.1 ([2])

Let E be a real uniformly smooth and uniformly convex Banach space. Let C be a nonempty, closed, and convex sunny nonexpansive retract of E, and let Q_C be the sunny nonexpansive retraction of E onto C. Let T: C → C be nonexpansive with F(T) ≠ ∅. Suppose that the duality mapping J: E → E^* is weakly sequentially continuous. For each t ∈ (0,1), define T_t: C → C by

T_t x := TQ_C[(1 − t)x].
(3.1)

Then T_t is a contraction and has a fixed point z_t, which satisfies ∥z_t − Tz_t∥ → 0, as t → 0.

Lemma 3.2 ([2])

Under the assumptions of Lemma 3.1, suppose further that β in Lemma 1.1 satisfies (1.7), where r_1 > 0 is a sufficiently large constant such that z_t ∈ C ∩ {z ∈ E : ∥z − x∥ ≤ r_1}, with x ∈ F(T) and t ∈ (0,1). Then lim_{t→0} z_t = z_0 ∈ F(T).

Remark 3.1 Lemma 1.1 with additional condition (1.7) is employed as a key tool to prove Lemma 3.2. In the following lemma, we shall show that Lemma 2.3 can be used instead of Lemma 1.1, which simplifies the proof and weakens the assumption.

Lemma 3.3 Under the assumptions of Lemma 3.1 alone, the conclusion of Lemma 3.2 remains true; in other words, the additional condition (1.7) imposed in Lemma 3.2 can be dropped.

Proof To show that lim_{t→0} z_t = z_0 ∈ F(T), it suffices to show that, for any sequence {t_n} with t_n → 0, we have lim_{n→∞} z_{t_n} = z_0 ∈ F(T).

In fact, Lemma 3.1 implies that there exists z_t ∈ C such that z_t = TQ_C[(1 − t)z_t], t ∈ (0,1). By using Lemma 2.3, we have, for p ∈ F(T),

∥z_t − p∥² = ∥TQ_C[(1 − t)z_t] − TQ_Cp∥² ≤ ∥z_t − p − tz_t∥² ≤ ∥z_t − p∥² − 2t⟨z_t, J(z_t − p − tz_t)⟩.

This implies that ⟨z_t, J(p + tz_t − z_t)⟩ ≥ 0 and, therefore, since ∥z_t − p − tz_t∥² = ⟨p + tz_t − z_t, J(p + tz_t − z_t)⟩,

∥z_t − p − tz_t∥² ≤ ⟨p, J(p + tz_t − z_t)⟩ + t⟨z_t, J(p + tz_t − z_t)⟩.
(3.2)

In particular,

∥z_{t_n} − p − t_nz_{t_n}∥² ≤ ⟨p, J(p + t_nz_{t_n} − z_{t_n})⟩ + t_n⟨z_{t_n}, J(p + t_nz_{t_n} − z_{t_n})⟩.
(3.3)

Since p ∈ F(T),

∥z_t − p∥ = ∥TQ_C[(1 − t)z_t] − TQ_Cp∥ ≤ ∥Q_C[(1 − t)z_t] − Q_Cp∥ ≤ ∥(1 − t)z_t − p∥ = ∥(1 − t)(z_t − p) − tp∥ ≤ (1 − t)∥z_t − p∥ + t∥p∥,

so ∥z_t − p∥ ≤ ∥p∥ and {z_t} is bounded.

Without loss of generality, we can assume that { z t n } converges weakly to z 0 . Using Lemma 3.1 and Lemma 2.1, we have z 0 F(T).

Substituting z 0 for p in (3.3), we obtain

∥z_{t_n} − z_0 − t_nz_{t_n}∥² ≤ ⟨z_0, J(z_0 + t_nz_{t_n} − z_{t_n})⟩ + t_n⟨z_{t_n}, J(z_0 + t_nz_{t_n} − z_{t_n})⟩.
(3.4)

Then, since z_0 + t_nz_{t_n} − z_{t_n} ⇀ 0 and J is weakly sequentially continuous, (3.4) gives ∥z_{t_n} − z_0 − t_nz_{t_n}∥ → 0, as n → ∞.

Then from ∥z_{t_n} − z_0∥ ≤ ∥z_{t_n} − z_0 − t_nz_{t_n}∥ + t_n∥z_{t_n}∥, we see that z_{t_n} → z_0, as n → ∞.

Suppose there exists another sequence {z_{t_m}} with z_{t_m} ⇀ x_0, as t_m → 0 and m → ∞. Then, since Lemma 3.1 gives ∥z_{t_m} − Tz_{t_m}∥ → 0 and I − T is demiclosed at zero, we have x_0 ∈ F(T). Moreover, repeating the above proof, we have z_{t_m} → x_0, as m → ∞. Next, we want to show that z_0 = x_0.

Using (3.2), we have

∥z_{t_m} − z_0 − t_mz_{t_m}∥² ≤ ⟨z_0, J(z_0 + t_mz_{t_m} − z_{t_m})⟩ + t_m⟨z_{t_m}, J(z_0 + t_mz_{t_m} − z_{t_m})⟩.
(3.5)

By letting m → ∞, (3.5) implies that

∥x_0 − z_0∥² ≤ ⟨z_0, J(z_0 − x_0)⟩.
(3.6)

Interchanging x 0 and z 0 in (3.6), we obtain

∥z_0 − x_0∥² ≤ ⟨x_0, J(x_0 − z_0)⟩.
(3.7)

Then (3.6) and (3.7) ensure

2∥x_0 − z_0∥² ≤ ∥x_0 − z_0∥²,
(3.8)

which implies that x 0 = z 0 .

Therefore, lim_{t→0} z_t = z_0 ∈ F(T).

This completes the proof. □

Lemma 3.4 Let E be a strictly convex Banach space and let C be a nonempty, closed, and convex subset of E. Let A_i: C → E (i = 1, 2, …, N) be a finite family of m-accretive mappings such that ∩_{i=1}^{N} N(A_i) ≠ ∅.

Let a_0, a_1, …, a_N be real numbers in (0,1) such that ∑_{i=0}^{N} a_i = 1 and S_n = a_0I + a_1J_{r_{n,1}}^{A_1} + a_2J_{r_{n,2}}^{A_2} + ⋯ + a_NJ_{r_{n,N}}^{A_N}, where J_{r_{n,i}}^{A_i} = (I + r_{n,i}A_i)^{-1} and r_{n,i} > 0, for i = 1, 2, …, N and n ≥ 1. Then S_n: C → C is nonexpansive and F(S_n) = ∩_{i=1}^{N} N(A_i), for n ≥ 1.

Proof The proof is from [18]. For later use, we present the proof in the following.

It is easy to check that S_n: C → C is nonexpansive and ∩_{i=1}^{N} N(A_i) ⊆ F(S_n).

On the other hand, let p ∈ F(S_n); then p = S_np = a_0p + a_1J_{r_{n,1}}^{A_1}p + a_2J_{r_{n,2}}^{A_2}p + ⋯ + a_NJ_{r_{n,N}}^{A_N}p.

For q ∈ ∩_{i=1}^{N} N(A_i) ⊆ F(S_n), we have

∥p − q∥ ≤ a_0∥p − q∥ + a_1∥J_{r_{n,1}}^{A_1}p − q∥ + ⋯ + a_N∥J_{r_{n,N}}^{A_N}p − q∥ ≤ (a_0 + a_1 + ⋯ + a_{N−1})∥p − q∥ + a_N∥J_{r_{n,N}}^{A_N}p − q∥ = (1 − a_N)∥p − q∥ + a_N∥J_{r_{n,N}}^{A_N}p − q∥ ≤ ∥p − q∥.

Therefore, ∥p − q∥ = (1 − a_N)∥p − q∥ + a_N∥J_{r_{n,N}}^{A_N}p − q∥, which implies that ∥p − q∥ = ∥J_{r_{n,N}}^{A_N}p − q∥. Similarly, ∥p − q∥ = ∥J_{r_{n,1}}^{A_1}p − q∥ = ⋯ = ∥J_{r_{n,N}}^{A_N}p − q∥.

Then p − q = (a_1/∑_{i=1}^{N} a_i)(J_{r_{n,1}}^{A_1}p − q) + (a_2/∑_{i=1}^{N} a_i)(J_{r_{n,2}}^{A_2}p − q) + ⋯ + (a_N/∑_{i=1}^{N} a_i)(J_{r_{n,N}}^{A_N}p − q), which implies, by the strict convexity of E, that p − q = J_{r_{n,1}}^{A_1}p − q = J_{r_{n,2}}^{A_2}p − q = ⋯ = J_{r_{n,N}}^{A_N}p − q.

Therefore, J_{r_{n,i}}^{A_i}p = p, for i = 1, 2, …, N. We have p ∈ ∩_{i=1}^{N} N(A_i), which completes the proof. □

Similar to Lemma 3.4, we have the following lemma.

Lemma 3.5 Let E and C be the same as those in Lemma 3.4. Let {B_j}_{j=1}^{M} be a finite family of m-accretive mappings such that ∩_{j=1}^{M} N(B_j) ≠ ∅.

Let b_0, b_1, …, b_M be real numbers in (0,1) such that ∑_{j=0}^{M} b_j = 1 and W_n = b_0I + b_1J_{s_{n,1}}^{B_1} + b_2J_{s_{n,2}}^{B_2} + ⋯ + b_MJ_{s_{n,M}}^{B_M}, where J_{s_{n,j}}^{B_j} = (I + s_{n,j}B_j)^{-1} and s_{n,j} > 0, for j = 1, 2, …, M. Then W_n: C → C is nonexpansive and F(W_n) = ∩_{j=1}^{M} N(B_j), for n ≥ 1.

Lemma 3.6 Let E, C, S_n, and W_n be the same as those in Lemmas 3.4 and 3.5. Suppose D := (∩_{i=1}^{N} N(A_i)) ∩ (∩_{j=1}^{M} N(B_j)) ≠ ∅. Then W_nS_n, S_nW_n: C → C are nonexpansive and F(W_nS_n) = F(S_nW_n) = D.

Proof From Lemmas 3.4 and 3.5, we can easily check that W_nS_n, S_nW_n: C → C are nonexpansive and F(S_n) ∩ F(W_n) = D. So, it suffices to show that F(W_nS_n) ⊆ F(S_n) ∩ F(W_n), since the inclusion F(S_n) ∩ F(W_n) ⊆ F(W_nS_n) is trivial.

Let p ∈ F(W_nS_n); then p = W_nS_np.

For q ∈ F(S_n) ∩ F(W_n) ⊆ F(W_nS_n), we have q = W_nS_nq. Now,

∥p − q∥ ≤ ∥S_np − S_nq∥ ≤ a_0∥p − q∥ + a_1∥J_{r_{n,1}}^{A_1}p − q∥ + ⋯ + a_N∥J_{r_{n,N}}^{A_N}p − q∥.

Then, repeating the discussion in Lemma 3.4, we know that p ∈ F(S_n). Hence p = W_nS_np = W_np, thus p ∈ F(W_n), which completes the proof. □

Theorem 3.1 Let E be a real uniformly smooth and uniformly convex Banach space. Let C be a nonempty, closed, and convex sunny nonexpansive retract of E, where Q_C is the sunny nonexpansive retraction of E onto C. Let A_i, B_j: C → E be m-accretive mappings, where i = 1, 2, …, N and j = 1, 2, …, M. Suppose that the duality mapping J: E → E^* is weakly sequentially continuous and D := (∩_{i=1}^{N} N(A_i)) ∩ (∩_{j=1}^{M} N(B_j)) ≠ ∅. Let {x_n} be generated by the iterative algorithm (A), where S_n := a_0I + a_1J_{r_{n,1}}^{A_1} + a_2J_{r_{n,2}}^{A_2} + ⋯ + a_NJ_{r_{n,N}}^{A_N}, with J_{r_{n,i}}^{A_i} = (I + r_{n,i}A_i)^{-1}, for i = 1, 2, …, N, 0 < a_k < 1, for k = 0, 1, 2, …, N, and ∑_{k=0}^{N} a_k = 1; and W_n = b_0I + b_1J_{s_{n,1}}^{B_1} + b_2J_{s_{n,2}}^{B_2} + ⋯ + b_MJ_{s_{n,M}}^{B_M}, with J_{s_{n,j}}^{B_j} = (I + s_{n,j}B_j)^{-1}, for j = 1, 2, …, M, 0 < b_k < 1, for k = 0, 1, 2, …, M, and ∑_{k=0}^{M} b_k = 1. Suppose {e_n} ⊆ E, {α_n}, {β_n}, and {γ_n} are three sequences in (0,1), and {r_{n,i}}, {s_{n,j}} ⊆ (0,+∞) satisfy the following conditions:

  1. (i)

    α_n → 0, β_n → 0, as n → ∞;

  2. (ii)

    ∑_{n=1}^{∞} α_nβ_n = +∞;

  3. (iii)

    0 < liminf_{n→+∞} γ_n ≤ limsup_{n→+∞} γ_n < 1;

  4. (iv)

    ∑_{n=1}^{∞} |r_{n+1,i} − r_{n,i}| < +∞ and r_{n,i} ≥ ε > 0, for n ≥ 1 and i = 1, 2, …, N;

  5. (v)

    ∑_{n=1}^{∞} |s_{n+1,j} − s_{n,j}| < +∞ and s_{n,j} ≥ ε > 0, for n ≥ 1 and j = 1, 2, …, M;

  6. (vi)

    ∥e_n∥/α_n → 0, as n → +∞, and ∑_{n=1}^{∞} ∥e_n∥ < +∞.

Then {x_n} converges strongly to a point p_0 ∈ D.

Proof We shall split the proof into five steps:

Step 1. { x n }, { u n }, { S n u n }, { v n }, and { S n x n } are all bounded.

We shall first show that, for p ∈ D,

∥x_{n+1} − p∥ ≤ M_1 + ∑_{i=1}^{n} ∥e_i∥,
(3.9)

where M_1 = max{∥x_1 − p∥, ∥p∥}.

We use induction. For n = 1 and p ∈ D,

∥x_2 − p∥ ≤ γ_1∥x_1 − p∥ + (1 − γ_1)∥W_1S_1v_1 − p∥ ≤ γ_1∥x_1 − p∥ + (1 − γ_1)∥v_1 − p∥ ≤ γ_1∥x_1 − p∥ + (1 − γ_1)(1 − β_1)∥x_1 − p∥ + β_1(1 − γ_1)∥u_1 − p∥ ≤ γ_1∥x_1 − p∥ + (1 − γ_1)(1 − β_1)∥x_1 − p∥ + β_1(1 − γ_1)∥(1 − α_1)(x_1 + e_1) − p∥ ≤ [1 − α_1β_1(1 − γ_1)]∥x_1 − p∥ + α_1β_1(1 − γ_1)∥p∥ + (1 − α_1)β_1(1 − γ_1)∥e_1∥ ≤ M_1 + ∥e_1∥.

Suppose that (3.9) is true for n=k. Then, for n=k+1,

∥x_{k+2} − p∥ ≤ γ_{k+1}∥x_{k+1} − p∥ + (1 − γ_{k+1})∥v_{k+1} − p∥ ≤ γ_{k+1}∥x_{k+1} − p∥ + (1 − γ_{k+1})[(1 − β_{k+1})∥x_{k+1} − p∥ + β_{k+1}∥u_{k+1} − p∥] ≤ γ_{k+1}∥x_{k+1} − p∥ + (1 − γ_{k+1})[(1 − β_{k+1})∥x_{k+1} − p∥ + β_{k+1}∥(1 − α_{k+1})(x_{k+1} + e_{k+1}) − p∥] ≤ [1 − α_{k+1}β_{k+1}(1 − γ_{k+1})]∥x_{k+1} − p∥ + α_{k+1}β_{k+1}(1 − γ_{k+1})∥p∥ + β_{k+1}(1 − α_{k+1})(1 − γ_{k+1})∥e_{k+1}∥ ≤ M_1 + [1 − α_{k+1}β_{k+1}(1 − γ_{k+1})]∑_{i=1}^{k} ∥e_i∥ + (1 − α_{k+1})β_{k+1}(1 − γ_{k+1})∥e_{k+1}∥ ≤ M_1 + ∑_{i=1}^{k+1} ∥e_i∥.

Thus (3.9) is true for all n ∈ ℕ. Since ∑_{n=1}^{∞} ∥e_n∥ < +∞, (3.9) ensures that {x_n} is bounded.

For p ∈ D, from ∥u_n − p∥ ≤ ∥(1 − α_n)(x_n + e_n) − p∥ ≤ ∥x_n∥ + ∥e_n∥ + ∥p∥, we see that {u_n} is bounded.

Since ∥S_nu_n∥ ≤ ∥S_nu_n − S_np∥ + ∥p∥ ≤ ∥u_n − p∥ + ∥p∥, {S_nu_n} is bounded. Since both {S_nu_n} and {x_n} are bounded, {v_n} is bounded. Similarly, {S_nx_n}, {S_nv_n}, {J_{r_{n,i}}^{A_i}u_n}, {J_{r_{n,i}}^{A_i}v_n}, and {J_{s_{n,j}}^{B_j}S_nv_n} are all bounded, for i = 1, 2, …, N and j = 1, 2, …, M.

Then we set M_2 = sup{∥u_n∥, ∥J_{r_{n,i}}^{A_i}u_n∥, ∥v_n∥, ∥J_{r_{n,i}}^{A_i}v_n∥, ∥S_nu_n∥, ∥S_nv_n∥, ∥x_n∥, ∥J_{s_{n,j}}^{B_j}S_nv_n∥ : n ≥ 1, i = 1, 2, …, N; j = 1, 2, …, M}.

Step 2. lim_{n→∞} ∥x_n − W_nS_nv_n∥ = 0 and lim_{n→∞} ∥x_{n+1} − x_n∥ = 0.

In fact,

∥W_{n+1}S_{n+1}v_{n+1} − W_nS_nv_n∥ ≤ b_0∥S_{n+1}v_{n+1} − S_nv_n∥ + ∑_{j=1}^{M} b_j∥J_{s_{n+1,j}}^{B_j}S_{n+1}v_{n+1} − J_{s_{n,j}}^{B_j}S_nv_n∥.
(3.10)

Next, we estimate ∥J_{s_{n+1,j}}^{B_j}S_{n+1}v_{n+1} − J_{s_{n,j}}^{B_j}S_nv_n∥.

If s_{n,j} ≤ s_{n+1,j}, then, using Lemma 2.6,

∥J_{s_{n+1,j}}^{B_j}S_{n+1}v_{n+1} − J_{s_{n,j}}^{B_j}S_nv_n∥ = ∥J_{s_{n,j}}^{B_j}( (s_{n,j}/s_{n+1,j})S_{n+1}v_{n+1} + (1 − s_{n,j}/s_{n+1,j})J_{s_{n+1,j}}^{B_j}S_{n+1}v_{n+1} ) − J_{s_{n,j}}^{B_j}S_nv_n∥ ≤ ∥(s_{n,j}/s_{n+1,j})S_{n+1}v_{n+1} + (1 − s_{n,j}/s_{n+1,j})J_{s_{n+1,j}}^{B_j}S_{n+1}v_{n+1} − S_nv_n∥ ≤ (s_{n,j}/s_{n+1,j})∥S_{n+1}v_{n+1} − S_nv_n∥ + (1 − s_{n,j}/s_{n+1,j})∥J_{s_{n+1,j}}^{B_j}S_{n+1}v_{n+1} − S_nv_n∥ ≤ ∥S_{n+1}v_{n+1} − S_nv_n∥ + 2M_2 (s_{n+1,j} − s_{n,j})/ε.
(3.11)

If s_{n+1,j} ≤ s_{n,j}, then, imitating the proof of (3.11), we have

∥J_{s_{n+1,j}}^{B_j}S_{n+1}v_{n+1} − J_{s_{n,j}}^{B_j}S_nv_n∥ ≤ ∥S_{n+1}v_{n+1} − S_nv_n∥ + 2M_2 (s_{n,j} − s_{n+1,j})/ε.
(3.12)

Combining (3.11) and (3.12), we have

∥J_{s_{n+1,j}}^{B_j}S_{n+1}v_{n+1} − J_{s_{n,j}}^{B_j}S_nv_n∥ ≤ ∥S_{n+1}v_{n+1} − S_nv_n∥ + 2M_2 |s_{n,j} − s_{n+1,j}|/ε.
(3.13)

Putting (3.13) into (3.10), we have

∥W_{n+1}S_{n+1}v_{n+1} − W_nS_nv_n∥ ≤ ∥S_{n+1}v_{n+1} − S_nv_n∥ + (2M_2/ε)∑_{j=1}^{M} |s_{n,j} − s_{n+1,j}|.
(3.14)

Similarly, we have

∥S_{n+1}u_{n+1} − S_nu_n∥ ≤ ∥u_{n+1} − u_n∥ + (2M_2/ε)∑_{i=1}^{N} |r_{n,i} − r_{n+1,i}|
(3.15)

and

∥S_{n+1}v_{n+1} − S_nv_n∥ ≤ ∥v_{n+1} − v_n∥ + (2M_2/ε)∑_{i=1}^{N} |r_{n,i} − r_{n+1,i}|.
(3.16)

Therefore,

∥W_{n+1}S_{n+1}v_{n+1} − W_nS_nv_n∥ ≤ ∥v_{n+1} − v_n∥ + (2M_2/ε)∑_{j=1}^{M} |s_{n,j} − s_{n+1,j}| + (2M_2/ε)∑_{i=1}^{N} |r_{n,i} − r_{n+1,i}| ≤ ∥x_{n+1} − x_n∥ + β_n∥x_n∥ + β_{n+1}∥x_{n+1}∥ + |β_{n+1} − β_n|∥S_{n+1}u_{n+1}∥ + β_n∥S_{n+1}u_{n+1} − S_nu_n∥ + (2M_2/ε)∑_{j=1}^{M} |s_{n,j} − s_{n+1,j}| + (2M_2/ε)∑_{i=1}^{N} |r_{n,i} − r_{n+1,i}| ≤ ∥x_{n+1} − x_n∥ + β_n∥x_n∥ + β_{n+1}∥x_{n+1}∥ + |β_{n+1} − β_n|∥S_{n+1}u_{n+1}∥ + β_n∥u_{n+1} − u_n∥ + (4M_2/ε)∑_{i=1}^{N} |r_{n,i} − r_{n+1,i}| + (2M_2/ε)∑_{j=1}^{M} |s_{n,j} − s_{n+1,j}| ≤ ∥x_{n+1} − x_n∥ + β_n∥x_n∥ + β_{n+1}∥x_{n+1}∥ + |β_{n+1} − β_n|∥S_{n+1}u_{n+1}∥ + β_n∥(1 − α_{n+1})(x_{n+1} + e_{n+1}) − (1 − α_n)(x_n + e_n)∥ + (4M_2/ε)∑_{i=1}^{N} |r_{n,i} − r_{n+1,i}| + (2M_2/ε)∑_{j=1}^{M} |s_{n,j} − s_{n+1,j}| ≤ (1 + β_n)∥x_{n+1} − x_n∥ + (β_n + α_nβ_n)∥x_n∥ + (β_{n+1} + α_{n+1}β_n)∥x_{n+1}∥ + |β_{n+1} − β_n|∥S_{n+1}u_{n+1}∥ + β_n∥e_{n+1} − e_n∥ + β_n∥α_{n+1}e_{n+1} − α_ne_n∥ + (4M_2/ε)∑_{i=1}^{N} |r_{n,i} − r_{n+1,i}| + (2M_2/ε)∑_{j=1}^{M} |s_{n,j} − s_{n+1,j}|.
(3.17)

Thus limsup_{n→+∞}(∥W_{n+1}S_{n+1}v_{n+1} − W_nS_nv_n∥ − ∥x_{n+1} − x_n∥) ≤ 0. Using Lemma 2.5, we obtain from (3.17) lim_{n→∞} ∥x_n − W_nS_nv_n∥ = 0, and then lim_{n→∞} ∥x_{n+1} − x_n∥ = lim_{n→∞} (1 − γ_n)∥W_nS_nv_n − x_n∥ = 0.

Step 3. lim_{n→∞} ∥x_n − W_nS_nx_n∥ = 0.

In fact,

∥x_n − W_nS_nx_n∥ ≤ ∥x_{n+1} − x_n∥ + ∥x_{n+1} − W_nS_nx_n∥ ≤ ∥x_{n+1} − x_n∥ + ∥γ_nx_n + (1 − γ_n)W_nS_nv_n − W_nS_nx_n∥ ≤ ∥x_{n+1} − x_n∥ + γ_n∥x_n − W_nS_nv_n∥ + ∥W_nS_nv_n − W_nS_nx_n∥ ≤ ∥x_{n+1} − x_n∥ + γ_n∥x_n − W_nS_nv_n∥ + β_n∥S_nu_n − x_n∥ ≤ ∥x_{n+1} − x_n∥ + γ_n∥x_n − W_nS_nv_n∥ + 2β_nM_2.
(3.18)

Then (3.18) and step 2 imply that ∥x_n − W_nS_nx_n∥ → 0, as n → +∞, since β_n → 0.

Step 4. limsup_{n→+∞} ⟨p_0, J(p_0 − x_n)⟩ ≤ 0, where p_0 is an element of D.

From Lemma 3.6, we know that W_nS_n: C → C is nonexpansive and F(W_nS_n) = D. Then Lemma 3.1 and Lemma 3.3 imply that there exists z_t ∈ C such that z_t = W_nS_nQ_C[(1 − t)z_t] for t ∈ (0,1). Moreover, z_t → p_0 ∈ D, as t → 0.

Since ∥z_t − p_0∥ ≤ ∥(1 − t)z_t − p_0∥ ≤ (1 − t)∥z_t − p_0∥ + t∥p_0∥, {z_t} is bounded. Let M_3 = sup{∥z_t − x_n∥ : n ≥ 1, t ∈ (0,1)}. Then from step 1, we know that M_3 is a positive constant. Using Lemma 2.3, we have

∥z_t − x_n∥² = ∥z_t − W_nS_nx_n + W_nS_nx_n − x_n∥² ≤ ∥z_t − W_nS_nx_n∥² + 2⟨W_nS_nx_n − x_n, J(z_t − x_n)⟩ ≤ ∥z_t − W_nS_nx_n∥² + 2∥W_nS_nx_n − x_n∥∥z_t − x_n∥ ≤ ∥(1 − t)z_t − x_n∥² + 2∥W_nS_nx_n − x_n∥∥z_t − x_n∥ ≤ ∥z_t − x_n∥² − 2t⟨z_t, J[(1 − t)z_t − x_n]⟩ + 2M_3∥W_nS_nx_n − x_n∥.

So ⟨z_t, J[(1 − t)z_t − x_n]⟩ ≤ (M_3/t)∥W_nS_nx_n − x_n∥, which implies that lim_{t→0} limsup_{n→+∞} ⟨z_t, J[(1 − t)z_t − x_n]⟩ ≤ 0 in view of step 3.

Since {x_n} is bounded and J is uniformly continuous on each bounded subset of E, ⟨p_0, J(p_0 − x_n) − J[(1 − t)z_t − x_n]⟩ → 0, as t → 0.

Moreover, noticing the fact that

⟨p_0, J(p_0 − x_n)⟩ = ⟨p_0, J(p_0 − x_n) − J[(1 − t)z_t − x_n]⟩ + ⟨p_0 − z_t, J[(1 − t)z_t − x_n]⟩ + ⟨z_t, J[(1 − t)z_t − x_n]⟩,

we have limsup_{n→+∞} ⟨p_0, J(p_0 − x_n)⟩ ≤ 0.

Since ⟨p_0, J[p_0 − x_n − (1 − α_n)e_n + α_nx_n]⟩ = ⟨p_0, J[p_0 − x_n − (1 − α_n)e_n + α_nx_n] − J(p_0 − x_n)⟩ + ⟨p_0, J(p_0 − x_n)⟩, and J is uniformly continuous on each bounded subset of E,

limsup_{n→+∞} ⟨p_0, J[p_0 − x_n − (1 − α_n)e_n + α_nx_n]⟩ ≤ 0.
(3.19)

Step 5. x_n → p_0, as n → +∞, where p_0 ∈ D is the same as in step 4.

Let M_4 = sup{∥(1 − α_n)(x_n + e_n) − p_0∥ : n ≥ 1}. By using Lemma 2.3 again, we have

∥x_{n+1} − p_0∥² ≤ γ_n∥x_n − p_0∥² + (1 − γ_n)∥v_n − p_0∥² ≤ γ_n∥x_n − p_0∥² + (1 − γ_n)(1 − β_n)∥x_n − p_0∥² + (1 − γ_n)β_n∥u_n − p_0∥² = (1 − β_n + β_nγ_n)∥x_n − p_0∥² + (1 − γ_n)β_n∥u_n − p_0∥² ≤ (1 − β_n + β_nγ_n)∥x_n − p_0∥² + (1 − γ_n)β_n∥(1 − α_n)(x_n + e_n) − p_0∥² ≤ (1 − β_n + β_nγ_n)∥x_n − p_0∥² + (1 − γ_n)β_n(1 − α_n)∥x_n − p_0∥² + 2(1 − γ_n)β_n(1 − α_n)⟨e_n, J[(1 − α_n)(x_n + e_n) − p_0]⟩ + 2α_nβ_n(1 − γ_n)⟨p_0, J[p_0 − x_n − (1 − α_n)e_n + α_nx_n]⟩ ≤ [1 − α_nβ_n(1 − γ_n)]∥x_n − p_0∥² + 2(1 − γ_n)(1 − α_n)β_nM_4∥e_n∥ + 2α_nβ_n(1 − γ_n)⟨p_0, J[p_0 − x_n − (1 − α_n)e_n + α_nx_n]⟩.
(3.20)

Let c_n = (1 − γ_n)α_nβ_n; then (3.20) reduces to ∥x_{n+1} − p_0∥² ≤ (1 − c_n)∥x_n − p_0∥² + 2c_n{⟨p_0, J[p_0 − x_n − (1 − α_n)e_n + α_nx_n]⟩ + (1 − α_n)M_4∥e_n∥/α_n}.

From (3.19), (3.20), and the assumptions, by using Lemma 2.4, we know that x_n → p_0, as n → +∞.

This completes the proof. □

If in Theorem 3.1, C=E, then we have the following theorem.

Theorem 3.2 Let E and D be the same as those in Theorem 3.1. Suppose that the duality mapping J: E → E^* is weakly sequentially continuous. Let A_i: E → E (i = 1, 2, …, N) and B_j: E → E (j = 1, 2, …, M) be two finite families of m-accretive mappings. Let {e_n} ⊆ E, {α_n}, {β_n}, {γ_n} ⊆ (0,1), and {r_{n,i}}, {s_{n,j}} ⊆ (0,+∞) satisfy the same conditions as in Theorem 3.1.

Let { x n } be generated by the following scheme:

x_1 ∈ E,  u_n = (1 − α_n)(x_n + e_n),  v_n = (1 − β_n)x_n + β_n S_n u_n,  x_{n+1} = γ_n x_n + (1 − γ_n)W_n S_n v_n,  n ≥ 1.
(C)

Then {x_n} converges strongly to a point p_0 ∈ D, where S_n and W_n are the same as those in Theorem 3.1.

Lemma 3.7 Let E, C, and {B_j}_{j=1}^{M} be the same as those in Lemma 3.5, with ∩_{j=1}^{M} N(B_j) ≠ ∅.

Let c_0, c_1, …, c_M be real numbers in (0,1) such that ∑_{j=0}^{M} c_j = 1 and U_n = c_0I + c_1J_{t_{n,1}}^{B_1} + c_2J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1} + ⋯ + c_MJ_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}, where J_{t_{n,j}}^{B_j} = (I + t_{n,j}B_j)^{-1} and t_{n,j} > 0, for j = 1, 2, …, M and n ≥ 1. Then U_n: C → C is nonexpansive and F(U_n) = ∩_{j=1}^{M} N(B_j), for n ≥ 1.

Proof It is easy to check that U_n: C → C is nonexpansive and ∩_{j=1}^{M} N(B_j) ⊆ F(U_n).

On the other hand, let p ∈ F(U_n); then p = U_np = c_0p + c_1J_{t_{n,1}}^{B_1}p + c_2J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1}p + ⋯ + c_MJ_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}p.

For q ∈ ∩_{j=1}^{M} N(B_j) ⊆ F(U_n), we have

∥p − q∥ ≤ c_0∥p − q∥ + c_1∥J_{t_{n,1}}^{B_1}p − q∥ + ⋯ + c_M∥J_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}p − q∥ ≤ (c_0 + c_2 + ⋯ + c_M)∥p − q∥ + c_1∥J_{t_{n,1}}^{B_1}p − q∥ = (1 − c_1)∥p − q∥ + c_1∥J_{t_{n,1}}^{B_1}p − q∥ ≤ ∥p − q∥.

Therefore, ∥p − q∥ = (1 − c_1)∥p − q∥ + c_1∥J_{t_{n,1}}^{B_1}p − q∥, which implies that ∥p − q∥ = ∥J_{t_{n,1}}^{B_1}p − q∥. Similarly, ∥p − q∥ = ∥J_{t_{n,1}}^{B_1}p − q∥ = ∥J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1}p − q∥ = ⋯ = ∥J_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}p − q∥.

Then p − q = (c_1/∑_{j=1}^{M} c_j)(J_{t_{n,1}}^{B_1}p − q) + (c_2/∑_{j=1}^{M} c_j)(J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1}p − q) + ⋯ + (c_M/∑_{j=1}^{M} c_j)(J_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}p − q), which implies, by the strict convexity of E, that p − q = J_{t_{n,1}}^{B_1}p − q = J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1}p − q = ⋯ = J_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}p − q.

Therefore, J_{t_{n,1}}^{B_1}p = p, and then we can easily see that J_{t_{n,j}}^{B_j}p = p, for j = 2, …, M. Thus p ∈ ∩_{j=1}^{M} N(B_j), which completes the proof. □

Lemma 3.8 Let E and C be the same as those in Lemma 3.4. Let S_n and U_n be the same as those in Lemmas 3.4 and 3.7, respectively. Suppose D := (∩_{i=1}^{N} N(A_i)) ∩ (∩_{j=1}^{M} N(B_j)) ≠ ∅. Then S_nU_n, U_nS_n: C → C are nonexpansive and F(U_nS_n) = F(S_nU_n) = D.

Proof From Lemmas 3.4 and 3.7, we can easily check that U_nS_n, S_nU_n: C → C are nonexpansive and F(S_n) ∩ F(U_n) = D. So, it suffices to show that F(U_nS_n) ⊆ F(S_n) ∩ F(U_n), since the inclusion F(S_n) ∩ F(U_n) ⊆ F(U_nS_n) is trivial.

Let p ∈ F(U_nS_n); then p = U_nS_np.

For q ∈ F(S_n) ∩ F(U_n) ⊆ F(U_nS_n), we have q = U_nS_nq. Now,

∥p − q∥ = ∥U_nS_np − q∥ ≤ ∥S_np − S_nq∥ ≤ ∥p − q∥.

Then, repeating the discussion in Lemma 3.4, we know that p ∈ F(S_n). Hence p = U_nS_np = U_np, thus p ∈ F(U_n), which completes the proof. □

Theorem 3.3 Let E, C, Q_C, S_n, and D be the same as those in Theorem 3.1. Let A_i, B_j: C → E be m-accretive mappings, for i = 1, 2, …, N and j = 1, 2, …, M. Suppose that the duality mapping J: E → E^* is weakly sequentially continuous and D ≠ ∅. Let {x_n} be generated by the iterative algorithm (B), where U_n := c_0I + c_1J_{t_{n,1}}^{B_1} + c_2J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1} + ⋯ + c_MJ_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}, with J_{t_{n,j}}^{B_j} = (I + t_{n,j}B_j)^{-1}, for j = 1, 2, …, M, 0 < c_k < 1, for k = 0, 1, 2, …, M, and ∑_{k=0}^{M} c_k = 1. Suppose {e_n} ⊆ E, {α_n}, {β_n}, and {γ_n} are three sequences in (0,1) and {r_{n,i}}, {t_{n,j}} ⊆ (0,+∞) satisfy the following conditions:

  1. (i)

    α_n → 0, β_n → 0, as n → ∞;

  2. (ii)

    ∑_{n=1}^{∞} α_nβ_n = +∞;

  3. (iii)

    0 < liminf_{n→+∞} γ_n ≤ limsup_{n→+∞} γ_n < 1;

  4. (iv)

    ∑_{n=1}^{∞} |r_{n+1,i} − r_{n,i}| < +∞ and r_{n,i} ≥ ε > 0, for n ≥ 1 and i = 1, 2, …, N;

  5. (v)

    ∑_{n=1}^{∞} |t_{n+1,j} − t_{n,j}| < +∞ and t_{n,j} ≥ ε > 0, for n ≥ 1 and j = 1, 2, …, M;

  6. (vi)

    ∥e_n∥/α_n → 0, as n → +∞, and ∑_{n=1}^{∞} ∥e_n∥ < +∞.

Then {x_n} converges strongly to a point p_0 ∈ D.

Proof We shall split the proof into five steps:

Step 1. { x n }, { u n }, { S n u n }, { v n }, { S n v n } and { S n x n } are all bounded.

Similar to the proof of step 1 in Theorem 3.1, we can get the result of step 1.

Then {J_{t_{n,1}}^{B_1}S_nv_n}, {J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1}S_nv_n}, …, {J_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}S_nv_n} are all bounded.

Step 2. lim_{n→∞} ∥x_n − U_nS_nv_n∥ = 0 and lim_{n→∞} ∥x_{n+1} − x_n∥ = 0.

In fact,

∥U_{n+1}S_{n+1}v_{n+1} − U_nS_nv_n∥ ≤ c_0∥S_{n+1}v_{n+1} − S_nv_n∥ + c_1∥J_{t_{n+1,1}}^{B_1}S_{n+1}v_{n+1} − J_{t_{n,1}}^{B_1}S_nv_n∥ + c_2∥J_{t_{n+1,2}}^{B_2}J_{t_{n+1,1}}^{B_1}S_{n+1}v_{n+1} − J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1}S_nv_n∥ + ⋯ + c_M∥J_{t_{n+1,M}}^{B_M}J_{t_{n+1,M−1}}^{B_{M−1}}⋯J_{t_{n+1,1}}^{B_1}S_{n+1}v_{n+1} − J_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}S_nv_n∥.
(3.21)

Similar to (3.13), we know that

∥J_{t_{n+1,1}}^{B_1}S_{n+1}v_{n+1} − J_{t_{n,1}}^{B_1}S_nv_n∥ ≤ ∥S_{n+1}v_{n+1} − S_nv_n∥ + 2M_5 |t_{n+1,1} − t_{n,1}|/ε,
(3.22)

where M_5 = sup{∥S_nv_n∥, ∥J_{t_{n,1}}^{B_1}S_nv_n∥, ∥J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1}S_nv_n∥, …, ∥J_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}S_nv_n∥ : n ≥ 1}.

Repeating (3.22), we have

∥J_{t_{n+1,2}}^{B_2}J_{t_{n+1,1}}^{B_1}S_{n+1}v_{n+1} − J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1}S_nv_n∥ ≤ ∥J_{t_{n+1,1}}^{B_1}S_{n+1}v_{n+1} − J_{t_{n,1}}^{B_1}S_nv_n∥ + 2M_5 |t_{n+1,2} − t_{n,2}|/ε.
(3.23)

Then (3.22) and (3.23) imply that

∥J_{t_{n+1,2}}^{B_2}J_{t_{n+1,1}}^{B_1}S_{n+1}v_{n+1} − J_{t_{n,2}}^{B_2}J_{t_{n,1}}^{B_1}S_nv_n∥ ≤ ∥S_{n+1}v_{n+1} − S_nv_n∥ + (2M_5/ε)(|t_{n+1,2} − t_{n,2}| + |t_{n+1,1} − t_{n,1}|).
(3.24)

By induction, we have

∥J_{t_{n+1,M}}^{B_M}J_{t_{n+1,M−1}}^{B_{M−1}}⋯J_{t_{n+1,1}}^{B_1}S_{n+1}v_{n+1} − J_{t_{n,M}}^{B_M}J_{t_{n,M−1}}^{B_{M−1}}⋯J_{t_{n,1}}^{B_1}S_nv_n∥ ≤ ∥S_{n+1}v_{n+1} − S_nv_n∥ + (2M_5/ε)(|t_{n+1,M} − t_{n,M}| + ⋯ + |t_{n+1,2} − t_{n,2}| + |t_{n+1,1} − t_{n,1}|).
(3.25)

Going back to (3.21), we have

∥U_{n+1}S_{n+1}v_{n+1} − U_nS_nv_n∥ ≤ ∥S_{n+1}v_{n+1} − S_nv_n∥ + (2M_5/ε)(∑_{j=1}^{M} c_j|t_{n,1} − t_{n+1,1}| + ∑_{j=2}^{M} c_j|t_{n,2} − t_{n+1,2}| + ⋯ + c_M|t_{n,M} − t_{n+1,M}|).
(3.26)

Therefore, similar to (3.17), we have

∥U_{n+1}S_{n+1}v_{n+1} − U_nS_nv_n∥ ≤ ∥v_{n+1} − v_n∥ + (2M_2/ε)∑_{i=1}^{N} |r_{n,i} − r_{n+1,i}| + (2M_5/ε)(∑_{j=1}^{M} c_j|t_{n,1} − t_{n+1,1}| + ∑_{j=2}^{M} c_j|t_{n,2} − t_{n+1,2}| + ⋯ + c_M|t_{n,M} − t_{n+1,M}|) ≤ (1 + β_n)∥x_{n+1} − x_n∥ + (β_n + α_nβ_n)∥x_n∥ + (β_{n+1} + α_{n+1}β_n)∥x_{n+1}∥ + |β_{n+1} − β_n|∥S_{n+1}u_{n+1}∥ + β_n∥e_{n+1} − e_n∥ + β_n∥α_{n+1}e_{n+1} − α_ne_n∥ + (4M_2/ε)∑_{i=1}^{N} |r_{n,i} − r_{n+1,i}| + (2M_5/ε)(∑_{j=1}^{M} c_j|t_{n,1} − t_{n+1,1}| + ∑_{j=2}^{M} c_j|t_{n,2} − t_{n+1,2}| + ⋯ + c_M|t_{n,M} − t_{n+1,M}|).
(3.27)

Thus limsup_{n→+∞}(∥U_{n+1}S_{n+1}v_{n+1} − U_nS_nv_n∥ − ∥x_{n+1} − x_n∥) ≤ 0. Using Lemma 2.5, we obtain from (3.27) lim_{n→∞} ∥x_n − U_nS_nv_n∥ = 0, and then lim_{n→∞} ∥x_{n+1} − x_n∥ = lim_{n→∞} (1 − γ_n)∥U_nS_nv_n − x_n∥ = 0.

Similar to Theorem 3.1, we have

Step 3. lim_{n→∞} ∥x_n − U_nS_nx_n∥ = 0.

Step 4. limsup_{n→+∞} ⟨p_0, J(p_0 − x_n)⟩ ≤ 0, where p_0 is an element of D.

From Lemma 3.8, we know that U_nS_n: C → C is nonexpansive and F(U_nS_n) = D. Then Lemma 3.1 and Lemma 3.3 imply that there exists z_t ∈ C such that z_t = U_nS_nQ_C[(1 − t)z_t] for t ∈ (0,1). Moreover, z_t → p_0 ∈ D, as t → 0. Then, copying step 4 of the proof of Theorem 3.1, the result follows.

Step 5. x_n → p_0 ∈ D, where p_0 is the same as that in step 4.

Copying step 5 of the proof of Theorem 3.1, the result follows.

This completes the proof. □

If in Theorem 3.3, C=E, then we have the following theorem.

Theorem 3.4 Let E and D be the same as those in Theorem 3.3. Suppose that the duality mapping J: E → E^* is weakly sequentially continuous. Let A_i: E → E (i = 1, 2, …, N) and B_j: E → E (j = 1, 2, …, M) be two finite families of m-accretive mappings. Let {e_n} ⊆ E, {α_n}, {β_n}, {γ_n} ⊆ (0,1), and {r_{n,i}}, {t_{n,j}} ⊆ (0,+∞) satisfy the same conditions as in Theorem 3.3.

Let { x n } be generated by the following scheme:

x_1 ∈ E,  u_n = (1 − α_n)(x_n + e_n),  v_n = (1 − β_n)x_n + β_n S_n u_n,  x_{n+1} = γ_n x_n + (1 − γ_n)U_n S_n v_n,  n ≥ 1.
(D)

Then {x_n} converges strongly to a point p_0 ∈ D, where S_n and U_n are the same as those in Theorem 3.3.

Next, we apply Theorems 3.1 and 3.3 to the cases of finite pseudo-contractive mappings.

Theorem 3.5 Let E be a real uniformly smooth and uniformly convex Banach space. Let C be a nonempty, closed, and convex sunny nonexpansive retract of E, where Q_C is the sunny nonexpansive retraction of E onto C. Let T_i^{(1)}, T_j^{(2)}: C → E be pseudo-contractive mappings such that I − T_i^{(1)} and I − T_j^{(2)} are m-accretive, where i = 1, 2, …, N and j = 1, 2, …, M. Suppose that the duality mapping J: E → E^* is weakly sequentially continuous and D := (∩_{i=1}^{N} F(T_i^{(1)})) ∩ (∩_{j=1}^{M} F(T_j^{(2)})) ≠ ∅. Let {x_n} be generated by the iterative algorithm (A), where S_n := a_0I + a_1J_{r_{n,1}}^{I−T_1^{(1)}} + a_2J_{r_{n,2}}^{I−T_2^{(1)}} + ⋯ + a_NJ_{r_{n,N}}^{I−T_N^{(1)}}, with J_{r_{n,i}}^{I−T_i^{(1)}} = [I + r_{n,i}(I − T_i^{(1)})]^{-1}, for i = 1, 2, …, N, 0 < a_k < 1, for k = 0, 1, 2, …, N, and ∑_{k=0}^{N} a_k = 1; and W_n = b_0I + b_1J_{s_{n,1}}^{I−T_1^{(2)}} + b_2J_{s_{n,2}}^{I−T_2^{(2)}} + ⋯ + b_MJ_{s_{n,M}}^{I−T_M^{(2)}}, where J_{s_{n,j}}^{I−T_j^{(2)}} = [I + s_{n,j}(I − T_j^{(2)})]^{-1}, for j = 1, 2, …, M, 0 < b_k < 1, for k = 0, 1, 2, …, M, and ∑_{k=0}^{M} b_k = 1. Suppose {e_n} ⊆ E, {α_n}, {β_n}, and {γ_n} are three sequences in (0,1) and {r_{n,i}}, {s_{n,j}} ⊆ (0,+∞) satisfy the following conditions:

  1. (i)

    α_n → 0, β_n → 0, as n → ∞;

  2. (ii)

    ∑_{n=1}^{∞} α_nβ_n = +∞;

  3. (iii)

    0 < liminf_{n→+∞} γ_n ≤ limsup_{n→+∞} γ_n < 1;

  4. (iv)

    ∑_{n=1}^{∞} |r_{n+1,i} − r_{n,i}| < +∞ and r_{n,i} ≥ ε > 0, for n ≥ 1 and i = 1, 2, …, N;

  5. (v)

    ∑_{n=1}^{∞} |s_{n+1,j} − s_{n,j}| < +∞ and s_{n,j} ≥ ε > 0, for n ≥ 1 and j = 1, 2, …, M;

  6. (vi)

    ∥e_n∥/α_n → 0, as n → +∞, and ∑_{n=1}^{∞} ∥e_n∥ < +∞.

Then {x_n} converges strongly to a point p_0 ∈ D.

Proof Let A_i = I − T_i^{(1)} and B_j = I − T_j^{(2)}, for i = 1, 2, …, N and j = 1, 2, …, M. Then the result follows from Theorem 3.1. □
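In Theorems 3.5 and 3.6 the resolvents involved are those of I − T. When T is nonexpansive (hence pseudo-contractive), J_r^{I−T}x can be evaluated by solving y + r(y − Ty) = x, since the equivalent map y ↦ (x + rTy)/(1 + r) is a contraction with constant at most r/(1 + r) < 1; the following hedged sketch, with an illustrative T assumed only for the example, demonstrates this.

```python
import numpy as np

# For A = I - T with T nonexpansive (hence pseudo-contractive), the resolvent
# J_r^{I-T} x solves y + r(y - T y) = x, i.e. y = (x + r T y)/(1 + r); the
# right-hand side is a contraction in y, so fixed-point iteration computes it.
target = np.array([1.0, -1.0])

def T(y):                               # illustrative nonexpansive mapping; F(T) = {target}
    return 0.5 * (y + target)

def resolvent_I_minus_T(r, x, iters=100):
    y = x.copy()
    for _ in range(iters):
        y = (x + r * T(y)) / (1.0 + r)
    return y

r = 1.0
x = np.array([4.0, 3.0])
y = resolvent_I_minus_T(r, x)
print(np.allclose(y + r * (y - T(y)), x))   # True: y = J_r^{I-T} x
print(resolvent_I_minus_T(r, target))       # target: fixed points of T are zeros of I - T
```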

Similarly, from Theorem 3.3, we have the following result.

Theorem 3.6 Let E, C, Q_C, and D be the same as those in Theorem 3.5. Let T_i^{(1)}, T_j^{(2)}: C → E be pseudo-contractive mappings such that I − T_i^{(1)} and I − T_j^{(2)} are m-accretive mappings, where i = 1, 2, …, N and j = 1, 2, …, M. Suppose that the duality mapping J: E → E^* is weakly sequentially continuous and D ≠ ∅. Let {x_n} be generated by the iterative algorithm (B), where S_n is the same as that in Theorem 3.5 and U_n = c_0I + c_1J_{t_{n,1}}^{I−T_1^{(2)}} + c_2J_{t_{n,2}}^{I−T_2^{(2)}}J_{t_{n,1}}^{I−T_1^{(2)}} + ⋯ + c_MJ_{t_{n,M}}^{I−T_M^{(2)}}J_{t_{n,M−1}}^{I−T_{M−1}^{(2)}}⋯J_{t_{n,1}}^{I−T_1^{(2)}}, where J_{t_{n,j}}^{I−T_j^{(2)}} = [I + t_{n,j}(I − T_j^{(2)})]^{-1}, for j = 1, 2, …, M, 0 < c_k < 1, for k = 0, 1, 2, …, M, and ∑_{k=0}^{M} c_k = 1. Suppose {e_n} ⊆ E, {α_n}, {β_n}, and {γ_n} are three sequences in (0,1) and {r_{n,i}}, {t_{n,j}} ⊆ (0,+∞) satisfy the following conditions:

  1. (i)

    α_n → 0, β_n → 0, as n → ∞;

  2. (ii)

    ∑_{n=1}^{∞} α_nβ_n = +∞;

  3. (iii)

    0 < liminf_{n→+∞} γ_n ≤ limsup_{n→+∞} γ_n < 1;

  4. (iv)

    ∑_{n=1}^{∞} |r_{n+1,i} − r_{n,i}| < +∞ and r_{n,i} ≥ ε > 0, for n ≥ 1 and i = 1, 2, …, N;

  5. (v)

    ∑_{n=1}^{∞} |t_{n+1,j} − t_{n,j}| < +∞ and t_{n,j} ≥ ε > 0, for n ≥ 1 and j = 1, 2, …, M;

  6. (vi)

    ∥e_n∥/α_n → 0, as n → +∞, and ∑_{n=1}^{∞} ∥e_n∥ < +∞.

Then {x_n} converges strongly to a point p_0 ∈ D.