1 Introduction

In this paper, we are concerned with the problem of finding zero points of a mapping $A: E \to 2^{E}$; that is, finding a point $x$ in the domain of $A$ such that $0 \in Ax$. The domain of a mapping $A$ is defined by the set $\{x \in E : Ax \neq \emptyset\}$. Many important problems can be reformulated as problems of finding zero points, for instance, evolution equations, complementarity problems, minimax problems, variational inequalities, and optimization problems. It is well known that minimizing a convex function $f$ can be reduced to finding zero points of the subdifferential mapping $A = \partial f$. One of the most popular techniques for solving the inclusion problem goes back to the work of Browder [1]. One of the basic ideas in the case of a Hilbert space $H$ is to reduce the inclusion problem to a fixed point problem for the operator $R_A$ defined by $R_A = (I + A)^{-1}$, which is called the classical resolvent of $A$. If $A$ satisfies suitable monotonicity conditions, the classical resolvent of $A$ has full domain and is firmly nonexpansive, that is, $\|R_A x - R_A y\|^{2} \leq \langle R_A x - R_A y, x - y \rangle$ for all $x, y \in H$. This property of the resolvent ensures that the Picard iterative algorithm $x_{n+1} = R_A x_n$ converges weakly to a fixed point of $R_A$, which is necessarily a zero point of $A$. Rockafellar studied this iteration method and called it the proximal point algorithm; for more detail, see [2–4] and the references therein. Methods for finding zero points of monotone mappings in the framework of Hilbert spaces rely on these good properties of the resolvent $R_A$, but such properties are not available in the framework of Banach spaces.
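
For a concrete feel for this scheme, the following minimal Python sketch (our own illustration, not part of the original argument) runs the proximal point algorithm in the Hilbert space $\mathbb{R}^{2}$, with $A$ taken to be a positive definite matrix so that the resolvent is simply a matrix inverse:

    import numpy as np

    # Take H = R^2 and A = M, a positive definite matrix; A is then maximal
    # monotone and its unique zero point is the origin.
    M = np.array([[2.0, 0.0],
                  [0.0, 0.5]])
    R_A = np.linalg.inv(np.eye(2) + M)   # classical resolvent R_A = (I + A)^{-1}

    x = np.array([5.0, -3.0])            # arbitrary starting point
    for _ in range(200):
        x = R_A @ x                      # Picard iteration x_{n+1} = R_A x_n
    print(x)                             # approaches the zero point (0, 0) of A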

In this paper, we study a viscosity algorithm with a computational error. A strong convergence theorem for zero points of accretive operators is established in a reflexive Banach space. The organization of this paper is as follows. In Section 2, we provide some necessary preliminaries. In Section 3, a strong convergence theorem is established in a reflexive Banach space. Two applications of the main results are also discussed in this section.

2 Preliminaries

In what follows, we always assume that $E$ is a Banach space with the dual space $E^{*}$. Let $U_E = \{x \in E : \|x\| = 1\}$. $E$ is said to be smooth, or said to have a Gâteaux differentiable norm, if the limit $\lim_{t \to 0} \frac{\|x + ty\| - \|x\|}{t}$ exists for each $x, y \in U_E$. $E$ is said to have a uniformly Gâteaux differentiable norm if for each $y \in U_E$, the limit is attained uniformly for all $x \in U_E$. $E$ is said to be uniformly smooth, or said to have a uniformly Fréchet differentiable norm, if the limit is attained uniformly for $x, y \in U_E$. Let $\langle \cdot, \cdot \rangle$ denote the pairing between $E$ and $E^{*}$. The normalized duality mapping $J: E \to 2^{E^{*}}$ is defined by $J(x) = \{f \in E^{*} : \langle x, f \rangle = \|x\|^{2} = \|f\|^{2}\}$ for all $x \in E$; in a Hilbert space, for example, $J$ is the identity mapping. In the sequel, we use $j$ to denote the single-valued normalized duality mapping. It is known that if the norm of $E$ is uniformly Gâteaux differentiable, then the duality mapping $j$ is single-valued and uniformly norm-to-weak$^{*}$ continuous on each bounded subset of $E$.

Let $C$ be a nonempty closed convex subset of $E$, and let $T: C \to C$ be a mapping. In this paper, we use $F(T)$ to denote the set of fixed points of $T$. Recall that $T$ is said to be $\alpha$-contractive if there exists a constant $\alpha \in (0,1)$ such that $\|Tx - Ty\| \leq \alpha \|x - y\|$ for all $x, y \in C$. $T$ is said to be nonexpansive if this inequality holds with $\alpha = 1$, that is, $\|Tx - Ty\| \leq \|x - y\|$ for all $x, y \in C$. $T$ is said to be pseudocontractive if there exists some $j(x - y) \in J(x - y)$ such that $\langle Tx - Ty, j(x - y) \rangle \leq \|x - y\|^{2}$ for all $x, y \in C$.

Recall that a closed convex subset $C$ of a Banach space $E$ is said to have normal structure if for each bounded closed convex subset $K$ of $C$ which contains at least two points, there exists an element $x$ of $K$ which is not a diametral point of $K$, i.e., $\sup\{\|x - y\| : y \in K\} < d(K)$, where $d(K)$ is the diameter of $K$. Let $D$ be a nonempty subset of $C$, and let $Q: C \to D$. $Q$ is said to be a retraction if $Q^{2} = Q$; sunny if for each $x \in C$ and $t \in (0,1)$, we have $Q(tx + (1 - t)Qx) = Qx$; and a sunny nonexpansive retraction if $Q$ is sunny, nonexpansive, and a retraction. $D$ is said to be a nonexpansive retract of $C$ if there exists a nonexpansive retraction from $C$ onto $D$; for more details, see [5] and the references therein.

Let $I$ denote the identity operator on $E$. An operator $A \subset E \times E$ with domain $D(A) = \{z \in E : Az \neq \emptyset\}$ and range $R(A) = \bigcup \{Az : z \in D(A)\}$ is said to be accretive if for each $x_i \in D(A)$ and $y_i \in A x_i$, $i = 1, 2$, there exists $j(x_1 - x_2) \in J(x_1 - x_2)$ such that $\langle y_1 - y_2, j(x_1 - x_2) \rangle \geq 0$. An accretive operator $A$ is said to be $m$-accretive if $R(I + rA) = E$ for all $r > 0$. In a real Hilbert space, an operator $A$ is $m$-accretive if and only if $A$ is maximal monotone. In this paper, we use $A^{-1}(0)$ to denote the set of zero points of $A$. For an accretive operator $A$, we can define a nonexpansive single-valued mapping $J_r: R(I + rA) \to D(A)$ by $J_r = (I + rA)^{-1}$ for each $r > 0$, which is called the resolvent of $A$.

One of the classical methods for studying the problem $0 \in Ax$, where $A \subset E \times E$ is an accretive operator, is the proximal point algorithm (PPA), which was initiated by Martinet [6] and further developed by Rockafellar [3]. It is known that the PPA is, in general, only weakly convergent; see Güler [7]. In many disciplines, including economics, image recovery, quantum physics, and control theory, problems arise in infinite-dimensional spaces. In such problems, strong convergence (norm convergence) is often much more desirable than weak convergence, for it translates into the physically tangible property that the energy $\|x_n - x\|$ of the error between the iterate $x_n$ and the solution $x$ eventually becomes arbitrarily small. The importance of strong convergence is also underlined in [7], where a convex function $f$ is minimized via the proximal point algorithm: it is shown that the rate of convergence of the value sequence $\{f(x_n)\}$ is better when $\{x_n\}$ converges strongly than when it converges weakly. Such properties have a direct impact when the process is executed directly in the underlying infinite-dimensional space.

Regularization methods have recently been investigated for treating zero points of accretive operators; see [8–22] and the references therein. In this paper, zero points of $m$-accretive operators are investigated via a viscosity iterative algorithm with a computational error. A strong convergence theorem for zero points of $m$-accretive operators is established in a reflexive Banach space.

In order to state our main results, we need the following lemmas.

Lemma 2.1 [23]

Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space $E$, and let $\{\beta_n\}$ be a sequence in $(0,1)$ with $0 < \liminf_{n \to \infty} \beta_n \leq \limsup_{n \to \infty} \beta_n < 1$. Suppose that $x_{n+1} = (1 - \beta_n) y_n + \beta_n x_n$ for all $n \geq 1$ and $\limsup_{n \to \infty} ( \|y_{n+1} - y_n\| - \|x_{n+1} - x_n\| ) \leq 0$. Then $\lim_{n \to \infty} \|y_n - x_n\| = 0$.
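
As a toy numerical illustration of Lemma 2.1 (our own example, with the assumed choices $y_n = T x_n$ for a planar rotation $T$ and $\beta_n \equiv 1/2$): since $T$ is nonexpansive, $\|y_{n+1} - y_n\| \leq \|x_{n+1} - x_n\|$, so the hypothesis of the lemma holds, and the residual $\|y_n - x_n\| = \|T x_n - x_n\|$ indeed vanishes.

    import numpy as np

    theta = 1.0                                     # T = rotation by theta; an isometry,
    T = np.array([[np.cos(theta), -np.sin(theta)],  # hence nonexpansive, with F(T) = {0}
                  [np.sin(theta),  np.cos(theta)]])

    x = np.array([4.0, 1.0])
    for _ in range(2000):
        y = T @ x                     # y_n = T x_n
        x = 0.5 * y + 0.5 * x         # x_{n+1} = (1 - beta_n) y_n + beta_n x_n
    print(np.linalg.norm(T @ x - x))  # approximately 0: ||y_n - x_n|| -> 0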

Lemma 2.2 [21]

Let $E$ be a real reflexive Banach space with a uniformly Gâteaux differentiable norm and normal structure, and let $C$ be a nonempty closed convex subset of $E$. Let $T: C \to C$ be a nonexpansive mapping with a fixed point, and let $f: C \to C$ be a fixed contraction with coefficient $\alpha \in (0,1)$. Let $\{x_t\}$ be defined by $x_t = t f(x_t) + (1 - t) T x_t$, where $t \in (0,1)$. Then $\{x_t\}$ converges strongly as $t \to 0$ to a fixed point $x^{*}$ of $T$, which is the unique solution in $F(T)$ to the following variational inequality: $\langle f(x^{*}) - x^{*}, j(x^{*} - p) \rangle \geq 0$ for all $p \in F(T)$.

Lemma 2.3 [24]

Let $E$ be a Banach space, and let $A$ be an $m$-accretive operator on $E$. For $\lambda > 0$, $\mu > 0$, and $x \in E$, we have
$$J_\lambda x = J_\mu \Bigl( \frac{\mu}{\lambda} x + \Bigl( 1 - \frac{\mu}{\lambda} \Bigr) J_\lambda x \Bigr),$$
where $J_\lambda = (I + \lambda A)^{-1}$ and $J_\mu = (I + \mu A)^{-1}$.
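
The resolvent identity is easy to check numerically. In the sketch below (our example, assuming $A = M$ for a positive definite matrix $M$ on $\mathbb{R}^{2}$, which is $m$-accretive), both sides of the identity agree to machine precision:

    import numpy as np

    M = np.array([[3.0, 1.0],
                  [1.0, 2.0]])                      # symmetric positive definite
    J = lambda r: np.linalg.inv(np.eye(2) + r * M)  # J_r = (I + rA)^{-1}

    lam, mu = 2.0, 0.5
    x = np.array([1.0, -4.0])
    lhs = J(lam) @ x
    rhs = J(mu) @ ((mu / lam) * x + (1.0 - mu / lam) * (J(lam) @ x))
    print(np.allclose(lhs, rhs))                    # True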

Lemma 2.4 [25]

Let $\{a_n\}$ be a sequence of nonnegative numbers satisfying the condition $a_{n+1} \leq (1 - t_n) a_n + t_n b_n + c_n$, $n \geq 0$, where $\{t_n\}$ is a number sequence in $(0,1)$ such that $\lim_{n \to \infty} t_n = 0$ and $\sum_{n=0}^{\infty} t_n = \infty$, $\{b_n\}$ is a number sequence such that $\limsup_{n \to \infty} b_n \leq 0$, and $\{c_n\}$ is a positive number sequence such that $\sum_{n=0}^{\infty} c_n < \infty$. Then $\lim_{n \to \infty} a_n = 0$.
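
As a quick sanity check (a toy choice of sequences of ours, not taken from [25]), let $t_n = b_n = 1/(n+2)$ and $c_n = 1/(n+2)^{2}$; then $t_n \to 0$ with $\sum_n t_n = \infty$, $\limsup_n b_n \leq 0$, and $\sum_n c_n < \infty$, and the recursion drives $a_n$ to zero, slowly, as the lemma predicts.

    a = 10.0
    for n in range(10**6):
        t = 1.0 / (n + 2)
        b = 1.0 / (n + 2)
        c = 1.0 / (n + 2) ** 2
        a = (1.0 - t) * a + t * b + c   # a_{n+1} = (1 - t_n) a_n + t_n b_n + c_n
    print(a)                            # close to 0, as guaranteed by the lemma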

3 Main results

Theorem 3.1 Let $E$ be a real reflexive Banach space with a uniformly Gâteaux differentiable norm, and let $A$ be an $m$-accretive operator in $E$. Assume that $C := \overline{D(A)}$ is convex and has normal structure. Let $f: C \to C$ be a fixed $\alpha$-contraction. Let $\{x_n\}$ be a sequence generated in the following manner: $x_0 \in C$ and

$$x_{n+1} = \beta_n x_n + (1 - \beta_n) J_{r_n} \bigl( \alpha_n f(x_n) + (1 - \alpha_n) x_n + e_{n+1} \bigr), \quad n \geq 0,$$

where $\{\alpha_n\}$ and $\{\beta_n\}$ are real number sequences in $(0,1)$, $\{e_n\}$ is a sequence in $E$, $\{r_n\}$ is a positive real number sequence, and $J_{r_n} = (I + r_n A)^{-1}$. Assume that $A^{-1}(0)$ is not empty and that the control sequences satisfy the following restrictions:

(a) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;

(b) $0 < \liminf_{n \to \infty} \beta_n \leq \limsup_{n \to \infty} \beta_n < 1$;

(c) $\sum_{n=1}^{\infty} \|e_n\| < \infty$;

(d) $r_n \geq r > 0$ and $\lim_{n \to \infty} |r_n - r_{n+1}| = 0$.

Then the sequence $\{x_n\}$ converges strongly to $\bar{x} \in A^{-1}(0)$, which is the unique solution to the following variational inequality: $\langle f(\bar{x}) - \bar{x}, j(p - \bar{x}) \rangle \leq 0$ for all $p \in A^{-1}(0)$.

Proof Fixing $p \in A^{-1}(0)$, we find that

$$\begin{aligned}
\|x_{n+1} - p\| &\leq \beta_n \|x_n - p\| + (1 - \beta_n) \bigl\| J_{r_n} \bigl( \alpha_n f(x_n) + (1 - \alpha_n) x_n + e_{n+1} \bigr) - p \bigr\| \\
&\leq \beta_n \|x_n - p\| + (1 - \beta_n) \bigl( \alpha_n \|f(x_n) - p\| + (1 - \alpha_n) \|x_n - p\| + \|e_{n+1}\| \bigr) \\
&\leq \bigl( 1 - \alpha_n (1 - \beta_n)(1 - \alpha) \bigr) \|x_n - p\| + \alpha_n (1 - \beta_n) \|f(p) - p\| + \|e_{n+1}\| \\
&\leq \max \Bigl\{ \|x_n - p\|, \frac{\|f(p) - p\|}{1 - \alpha} \Bigr\} + \|e_{n+1}\| \\
&\leq \cdots \leq \max \Bigl\{ \|x_0 - p\|, \frac{\|f(p) - p\|}{1 - \alpha} \Bigr\} + \sum_{i=1}^{n+1} \|e_i\| \\
&\leq \max \Bigl\{ \|x_0 - p\|, \frac{\|f(p) - p\|}{1 - \alpha} \Bigr\} + \sum_{i=1}^{\infty} \|e_i\| < \infty.
\end{aligned}$$

This proves that the sequence $\{x_n\}$ is bounded. Put $y_n = \alpha_n f(x_n) + (1 - \alpha_n) x_n + e_{n+1}$. It follows that

$$\begin{aligned}
\|y_{n+1} - y_n\| &\leq \alpha_{n+1} \|f(x_{n+1}) - f(x_n)\| + |\alpha_{n+1} - \alpha_n| \|f(x_n) - x_n\| + (1 - \alpha_{n+1}) \|x_{n+1} - x_n\| \\
&\quad + \|e_{n+2}\| + \|e_{n+1}\| \\
&\leq \|x_{n+1} - x_n\| + |\alpha_{n+1} - \alpha_n| \|f(x_n) - x_n\| + \|e_{n+2}\| + \|e_{n+1}\|.
\end{aligned}$$
(3.1)

In view of Lemma 2.3, we find that

$$\begin{aligned}
\|J_{r_{n+1}} y_{n+1} - J_{r_n} y_n\| &= \Bigl\| J_{r_n} \Bigl( \frac{r_n}{r_{n+1}} y_{n+1} + \Bigl(1 - \frac{r_n}{r_{n+1}}\Bigr) J_{r_{n+1}} y_{n+1} \Bigr) - J_{r_n} y_n \Bigr\| \\
&\leq \Bigl\| \frac{r_n}{r_{n+1}} y_{n+1} + \Bigl(1 - \frac{r_n}{r_{n+1}}\Bigr) J_{r_{n+1}} y_{n+1} - y_n \Bigr\| \\
&\leq \|y_{n+1} - y_n\| + \frac{|r_{n+1} - r_n|}{r} M,
\end{aligned}$$
(3.2)

where $M$ is an appropriate constant such that $M \geq \sup_{n \geq 0} \|J_{r_{n+1}} y_{n+1} - y_{n+1}\|$. Substituting (3.1) into (3.2), we find that

$$\|J_{r_{n+1}} y_{n+1} - J_{r_n} y_n\| - \|x_{n+1} - x_n\| \leq |\alpha_{n+1} - \alpha_n| \|f(x_n) - x_n\| + \|e_{n+2}\| + \|e_{n+1}\| + \frac{|r_{n+1} - r_n|}{r} M.$$

In view of the restrictions (a), (c) and (d), we find that

$$\limsup_{n \to \infty} \bigl( \|J_{r_{n+1}} y_{n+1} - J_{r_n} y_n\| - \|x_{n+1} - x_n\| \bigr) \leq 0.$$

It follows from Lemma 2.1 that

$$\lim_{n \to \infty} \|J_{r_n} y_n - x_n\| = 0.$$
(3.3)

Notice that $\|y_n - x_n\| \leq \alpha_n \|f(x_n) - x_n\| + \|e_{n+1}\|$. It follows from the restrictions (a) and (c) that

$$\lim_{n \to \infty} \|y_n - x_n\| = 0.$$
(3.4)

In view of $\|J_{r_n} y_n - y_n\| \leq \|J_{r_n} y_n - x_n\| + \|x_n - y_n\|$, we find from (3.3) and (3.4) that

$$\lim_{n \to \infty} \|J_{r_n} y_n - y_n\| = 0.$$
(3.5)

Take a fixed number $s$ such that $r > s > 0$. It follows from Lemma 2.3 that

$$\begin{aligned}
\|y_n - J_s y_n\| &\leq \|y_n - J_{r_n} y_n\| + \Bigl\| J_s \Bigl( \frac{s}{r_n} y_n + \Bigl(1 - \frac{s}{r_n}\Bigr) J_{r_n} y_n \Bigr) - J_s y_n \Bigr\| \\
&\leq \|y_n - J_{r_n} y_n\| + \Bigl(1 - \frac{s}{r_n}\Bigr) \|J_{r_n} y_n - y_n\| \\
&\leq 2 \|y_n - J_{r_n} y_n\|.
\end{aligned}$$

It follows from (3.5) that

$$\lim_{n \to \infty} \|y_n - J_s y_n\| = 0.$$
(3.6)

Now, we are in a position to claim that $\limsup_{n \to \infty} \langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x}) \rangle \leq 0$, where $\bar{x} = \lim_{t \to 0} x_t$ and $x_t$ solves the fixed point equation $x_t = t f(x_t) + (1 - t) J_s x_t$, $t \in (0,1)$; such a net $\{x_t\}$ exists and converges by Lemma 2.2, applied to the nonexpansive mapping $J_s$, whose fixed point set is $A^{-1}(0)$. It follows that

$$\begin{aligned}
\|x_t - y_n\|^{2} &\leq (1 - t) \bigl( \|x_t - y_n\|^{2} + \|J_s y_n - y_n\| \|x_t - y_n\| \bigr) + t \langle f(x_t) - x_t, j(x_t - y_n) \rangle + t \|x_t - y_n\|^{2} \\
&\leq \|x_t - y_n\|^{2} + \|J_s y_n - y_n\| \|x_t - y_n\| + t \langle f(x_t) - x_t, j(x_t - y_n) \rangle.
\end{aligned}$$

This implies that $\langle x_t - f(x_t), j(x_t - y_n) \rangle \leq \frac{1}{t} \|J_s y_n - y_n\| \|x_t - y_n\|$ for all $t \in (0,1)$. In view of (3.6), we find that

$$\limsup_{n \to \infty} \langle x_t - f(x_t), j(x_t - y_n) \rangle \leq 0.$$
(3.7)

Since $x_t \to \bar{x}$ as $t \to 0$ and $j$ is uniformly norm-to-weak$^{*}$ continuous on bounded subsets of $E$, we see that

$$\begin{aligned}
\bigl| \langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x}) \rangle - \langle x_t - f(x_t), j(x_t - y_n) \rangle \bigr| &\leq \bigl| \langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x}) \rangle - \langle f(\bar{x}) - \bar{x}, j(y_n - x_t) \rangle \bigr| \\
&\quad + \bigl| \langle f(\bar{x}) - \bar{x}, j(y_n - x_t) \rangle - \langle x_t - f(x_t), j(x_t - y_n) \rangle \bigr| \\
&\leq \bigl| \langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x}) - j(y_n - x_t) \rangle \bigr| + \bigl| \langle f(\bar{x}) - \bar{x} + x_t - f(x_t), j(y_n - x_t) \rangle \bigr| \\
&\leq \|f(\bar{x}) - \bar{x}\| \|j(y_n - \bar{x}) - j(y_n - x_t)\| + \|f(\bar{x}) - \bar{x} + x_t - f(x_t)\| \|y_n - x_t\| \\
&\to 0 \quad \text{as } t \to 0.
\end{aligned}$$

Hence, for any $\epsilon > 0$, there exists $\lambda > 0$ such that for all $t \in (0, \lambda)$ the inequality $\langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x}) \rangle \leq \langle x_t - f(x_t), j(x_t - y_n) \rangle + \epsilon$ holds. Taking the limit superior as $n \to \infty$ in this inequality, we find that $\limsup_{n \to \infty} \langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x}) \rangle \leq \limsup_{n \to \infty} \langle x_t - f(x_t), j(x_t - y_n) \rangle + \epsilon$. Since $\epsilon$ is arbitrary, we obtain from (3.7) that $\limsup_{n \to \infty} \langle f(\bar{x}) - \bar{x}, j(y_n - \bar{x}) \rangle \leq 0$.

Finally, we prove that $x_n \to \bar{x}$ as $n \to \infty$. Note that

$$\|y_n - \bar{x}\|^{2} \leq 2 \alpha_n \langle f(x_n) - \bar{x}, j(y_n - \bar{x}) \rangle + (1 - \alpha_n) \|x_n - \bar{x}\|^{2} + 2 \|e_{n+1}\| \|y_n - \bar{x}\|.$$
(3.8)

On the other hand, we have

$$\begin{aligned}
\|x_{n+1} - \bar{x}\|^{2} &= \beta_n \langle x_n - \bar{x}, j(x_{n+1} - \bar{x}) \rangle + (1 - \beta_n) \langle J_{r_n} y_n - \bar{x}, j(x_{n+1} - \bar{x}) \rangle \\
&\leq \frac{\beta_n}{2} \bigl( \|x_n - \bar{x}\|^{2} + \|x_{n+1} - \bar{x}\|^{2} \bigr) + \frac{1 - \beta_n}{2} \bigl( \|y_n - \bar{x}\|^{2} + \|x_{n+1} - \bar{x}\|^{2} \bigr).
\end{aligned}$$

It follows from (3.8) that

$$\|x_{n+1} - \bar{x}\|^{2} \leq \bigl( 1 - \alpha_n (1 - \beta_n) \bigr) \|x_n - \bar{x}\|^{2} + 2 \alpha_n (1 - \beta_n) \langle f(x_n) - \bar{x}, j(y_n - \bar{x}) \rangle + 2 \|e_{n+1}\| \|y_n - \bar{x}\|.$$

In view of Lemma 2.4, we find the desired conclusion immediately. □
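
To see the scheme of Theorem 3.1 at work numerically, the following Python sketch (an illustration with parameter choices of our own) runs the iteration in the Hilbert space $\mathbb{R}^{2}$ with $A = \operatorname{diag}(1, 0)$, whose zero set $A^{-1}(0)$ is the second coordinate axis, the constant contraction $f(x) = v$, $\alpha_n = 1/(n+1)$, $\beta_n = 1/2$, $r_n = 1$, and a summable error sequence. In this Hilbert setting, the solution of the variational inequality is the metric projection of $v$ onto $A^{-1}(0)$.

    import numpy as np

    M = np.diag([1.0, 0.0])               # m-accretive; A^{-1}(0) = the x2-axis
    v = np.array([3.0, 2.0])              # f(x) = v is an alpha-contraction
    J_r = np.linalg.inv(np.eye(2) + M)    # resolvent J_1 = (I + A)^{-1}

    x = np.array([5.0, -5.0])
    for n in range(1, 20001):
        alpha, beta = 1.0 / (n + 1), 0.5
        e = np.array([1.0, 1.0]) / n**2   # summable computational error e_{n+1}
        x = beta * x + (1 - beta) * (J_r @ (alpha * v + (1 - alpha) * x + e))
    print(x)                              # approaches (0, 2) = proj_{A^{-1}(0)} v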

4 Applications

In this section, we give two applications of our main result in the framework of Hilbert spaces.

First, we consider, in the framework of Hilbert spaces, solutions of a Ky Fan inequality, which is known as an equilibrium problem in the terminology of Blum and Oettli; see [26] and [27] and the references therein.

Let C be a nonempty closed and convex subset of a Hilbert space H. Let F be a bifunction of C×C into ℝ, where ℝ denotes the set of real numbers. Recall the following equilibrium problem:

Find $x \in C$ such that $F(x, y) \geq 0$ for all $y \in C$.
(4.1)

To study equilibrium problem (4.1), we may assume that F satisfies the following restrictions:

(A1) $F(x, x) = 0$ for all $x \in C$;

(A2) $F$ is monotone, i.e., $F(x, y) + F(y, x) \leq 0$ for all $x, y \in C$;

(A3) for each $x, y, z \in C$, $\limsup_{t \downarrow 0} F(tz + (1 - t)x, y) \leq F(x, y)$;

(A4) for each $x \in C$, $y \mapsto F(x, y)$ is convex and lower semicontinuous.

The following lemma can be found in [27].

Lemma 4.1 Let $C$ be a nonempty, closed, and convex subset of $H$, and let $F: C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A4). Then, for any $s > 0$ and $x \in H$, there exists $z \in C$ such that $F(z, y) + \frac{1}{s} \langle y - z, z - x \rangle \geq 0$ for all $y \in C$. Further, define

$$T_s x = \Bigl\{ z \in C : F(z, y) + \frac{1}{s} \langle y - z, z - x \rangle \geq 0, \ \forall y \in C \Bigr\}$$
(4.2)

for all $s > 0$ and $x \in H$. Then (1) $T_s$ is single-valued and firmly nonexpansive; (2) $F(T_s) = EP(F)$ is closed and convex.
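
For the special bifunction $F(x, y) = \langle Mx, y - x \rangle$ with $M$ positive semidefinite and $C = H = \mathbb{R}^{2}$ (an illustrative choice of ours, which satisfies (A1)-(A4)), the defining inequality of $T_s$ reduces to the first-order condition $Mz + (z - x)/s = 0$, so that $T_s x = (I + sM)^{-1} x$. The sketch below verifies this numerically:

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])                   # positive definite => F is monotone
    s = 0.7
    x = np.array([1.0, -2.0])
    z = np.linalg.solve(np.eye(2) + s * M, x)    # z = T_s x = (I + sM)^{-1} x

    # With C = R^2, "F(z, y) + (1/s)<y - z, z - x> >= 0 for all y" is equivalent
    # to the stationarity condition M z + (z - x)/s = 0:
    print(np.allclose(M @ z + (z - x) / s, 0.0))   # True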

Lemma 4.2 [28]

Let $F$ be a bifunction from $C \times C$ to $\mathbb{R}$ which satisfies (A1)-(A4), and let $A_F$ be a multivalued mapping of $H$ into itself defined by

$$A_F x = \begin{cases} \{ z \in H : F(x, y) \geq \langle y - x, z \rangle, \ \forall y \in C \}, & x \in C, \\ \emptyset, & x \notin C. \end{cases}$$
(4.3)

Then $A_F$ is a maximal monotone operator with domain $D(A_F) \subset C$ and $EP(F) = A_F^{-1}(0)$, where $EP(F)$ stands for the solution set of (4.1), and

$$T_s x = (I + s A_F)^{-1} x, \quad x \in H, \ s > 0,$$

where T s is defined as in (4.2).

Theorem 4.3 Let $F: C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A4). Let $f: C \to C$ be a fixed $\alpha$-contraction. Let $\{x_n\}$ be a sequence generated in the following manner: $x_0 \in C$ and

$$x_{n+1} = \beta_n x_n + (1 - \beta_n) T_{r_n} \bigl( \alpha_n f(x_n) + (1 - \alpha_n) x_n + e_{n+1} \bigr), \quad n \geq 0,$$

where $\{\alpha_n\}$ and $\{\beta_n\}$ are real number sequences in $(0,1)$, $\{e_n\}$ is a sequence in $H$, $\{r_n\}$ is a positive real number sequence, and $T_{r_n} = (I + r_n A_F)^{-1}$. Assume that $EP(F)$ is not empty and that the control sequences satisfy the restrictions (a), (b), (c), and (d) in Theorem 3.1. Then the sequence $\{x_n\}$ converges strongly to $\bar{x} \in EP(F)$, which is the unique solution to the following variational inequality: $\langle f(\bar{x}) - \bar{x}, p - \bar{x} \rangle \leq 0$ for all $p \in EP(F)$.

Next, we consider the problem of finding a minimizer of a proper convex lower semicontinuous function.

For a proper lower semicontinuous convex function $g: H \to (-\infty, +\infty]$, the subdifferential mapping $\partial g$ of $g$ is defined by

$$\partial g(x) = \bigl\{ x^{*} \in H : g(x) + \langle y - x, x^{*} \rangle \leq g(y), \ \forall y \in H \bigr\}, \quad x \in H.$$

Rockafellar [2] proved that $\partial g$ is a maximal monotone operator. It is easy to verify that $0 \in \partial g(v)$ if and only if $g(v) = \min_{x \in H} g(x)$.
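
In particular, the resolvent $(I + r \partial g)^{-1}$ is the proximal operator of $g$. A standard worked example (ours, not from the paper): for $g(x) = |x|$ on $H = \mathbb{R}$, the resolvent is the soft-thresholding map, and its only fixed point is the minimizer $0$ of $g$.

    import numpy as np

    def prox_abs(x, r):
        # (I + r * d|.|)^{-1} x = sign(x) * max(|x| - r, 0), componentwise
        return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

    print(prox_abs(np.array([3.0, -0.2, 0.0]), 1.0))   # [ 2. -0.  0.]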

Theorem 4.4 Let $g: H \to (-\infty, +\infty]$ be a proper convex lower semicontinuous function such that $(\partial g)^{-1}(0)$ is not empty. Let $f: H \to H$ be a $\kappa$-contraction, and let $\{x_n\}$ be a sequence generated in $H$ by the following process: $x_0 \in H$ and

$$\begin{cases} y_n = \operatorname{arg\,min}_{z \in H} \Bigl\{ g(z) + \dfrac{\|z - \alpha_n f(x_n) - (1 - \alpha_n) x_n - e_{n+1}\|^{2}}{2 r_n} \Bigr\}, \\ x_{n+1} = \beta_n x_n + (1 - \beta_n) y_n, \quad n \geq 0, \end{cases}$$

where $\{\alpha_n\}$ and $\{\beta_n\}$ are real number sequences in $(0,1)$, $\{e_n\}$ is a sequence in $H$, and $\{r_n\}$ is a positive real number sequence. Assume that the control sequences satisfy the restrictions in Theorem 3.1. Then the sequence $\{x_n\}$ converges strongly to $\bar{x} \in (\partial g)^{-1}(0)$, which is the unique solution to the following variational inequality: $\langle f(\bar{x}) - \bar{x}, p - \bar{x} \rangle \leq 0$ for all $p \in (\partial g)^{-1}(0)$.

Proof Since $g: H \to (-\infty, +\infty]$ is a proper convex lower semicontinuous function, the subdifferential $\partial g$ of $g$ is maximal monotone. Note that

$$y_n = \operatorname{arg\,min}_{z \in H} \Bigl\{ g(z) + \frac{\|z - \alpha_n f(x_n) - (1 - \alpha_n) x_n - e_{n+1}\|^{2}}{2 r_n} \Bigr\}$$

is equivalent to

$$0 \in \partial g(y_n) + \frac{1}{r_n} \bigl( y_n - \alpha_n f(x_n) - (1 - \alpha_n) x_n - e_{n+1} \bigr).$$

It follows that

$$\alpha_n f(x_n) + (1 - \alpha_n) x_n + e_{n+1} \in y_n + r_n \partial g(y_n),$$ that is, $y_n = J_{r_n} \bigl( \alpha_n f(x_n) + (1 - \alpha_n) x_n + e_{n+1} \bigr)$ with $A = \partial g$.

Following the proof of Theorem 3.1, we obtain the desired conclusion immediately. □
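
A runnable toy instance of the scheme in Theorem 4.4 (the parameter choices below are illustrative assumptions of ours): take $H = \mathbb{R}$ and $g(z) = |z|$, so that $y_n$ is given by soft thresholding and $(\partial g)^{-1}(0) = \{0\}$, with $f(x) = x/2$, $\alpha_n = 1/(n+1)$, $\beta_n = 1/2$, $r_n = 1$, and $e_{n+1} = 1/n^{2}$.

    import numpy as np

    def prox_abs(u, r):
        # argmin_z { |z| + |z - u|^2 / (2r) } = soft thresholding
        return np.sign(u) * max(abs(u) - r, 0.0)

    x = 10.0
    for n in range(1, 5001):
        alpha, beta, r = 1.0 / (n + 1), 0.5, 1.0
        e = 1.0 / n**2
        y = prox_abs(alpha * (0.5 * x) + (1 - alpha) * x + e, r)   # y_n
        x = beta * x + (1 - beta) * y                              # x_{n+1}
    print(x)   # approximately 0, the unique minimizer of g(z) = |z|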