1 Introduction

Variational inclusion theory has become a rich source of inspiration in pure and applied mathematics. In recent years, classical variational inclusion problems have been extended and generalized to study a large variety of problems arising in image recovery, economics, and signal processing; for more details, see [1–14]. Based on the projection technique, it has been shown that variational inclusion problems are equivalent to fixed point problems. This alternative formulation has played a fundamental and significant part in the development of several numerical methods for solving variational inclusion problems and related optimization problems.

The purpose of this paper is to study the zero point problem for the sum of a maximal monotone mapping and an inverse-strongly monotone mapping, together with the fixed point problem for a nonexpansive mapping. The paper is organized as follows. In Section 2, we provide the necessary preliminaries. In Section 3, a Mann-type iterative algorithm with mixed errors is investigated and a weak convergence theorem is established. Applications of the main results are also discussed in that section.

2 Preliminaries

Throughout this paper, we always assume that $H$ is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$ and let $P_C$ be the metric projection from $H$ onto $C$.

Let $S: C \to C$ be a mapping. $F(S)$ stands for the fixed point set of $S$; that is, $F(S) := \{x \in C : x = Sx\}$.

Recall that S is said to be nonexpansive iff

$\|Sx - Sy\| \le \|x - y\|, \quad \forall x, y \in C.$

If $C$ is a bounded, closed, and convex subset of $H$, then $F(S)$ is nonempty, closed, and convex; see [15].

Let $A: C \to H$ be a mapping. Recall that $A$ is said to be monotone iff

$\langle Ax - Ay, x - y\rangle \ge 0, \quad \forall x, y \in C.$

$A$ is said to be strongly monotone iff there exists a constant $\alpha > 0$ such that

$\langle Ax - Ay, x - y\rangle \ge \alpha\|x - y\|^2, \quad \forall x, y \in C.$

For such a case, $A$ is also said to be $\alpha$-strongly monotone. $A$ is said to be inverse-strongly monotone iff there exists a constant $\alpha > 0$ such that

$\langle Ax - Ay, x - y\rangle \ge \alpha\|Ax - Ay\|^2, \quad \forall x, y \in C.$

For such a case, A is also said to be α-inverse-strongly monotone. It is not hard to see that inverse-strongly monotone mappings are monotone and Lipschitz continuous.
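The last claim deserves a one-line justification. A sketch (standard, using only the Cauchy–Schwarz inequality):

```latex
% If A is alpha-inverse-strongly monotone, then for all x, y in C:
\alpha \|Ax - Ay\|^{2} \le \langle Ax - Ay,\, x - y \rangle \le \|Ax - Ay\|\,\|x - y\|,
\qquad\text{hence}\qquad
\|Ax - Ay\| \le \frac{1}{\alpha}\,\|x - y\|.
```

Monotonicity is immediate since $\alpha\|Ax - Ay\|^2 \ge 0$, and the display shows that $A$ is $\frac{1}{\alpha}$-Lipschitz continuous.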

Recall that the classical variational inequality problem is to find an $x \in C$ such that

$\langle Ax, y - x\rangle \ge 0, \quad \forall y \in C.$
(2.1)

In this paper, we use $VI(C,A)$ to denote the solution set of (2.1). It is known that $\omega \in C$ is a solution to (2.1) iff $\omega$ is a fixed point of the mapping $P_C(I - \lambda A)$, where $\lambda > 0$ is a constant and $I$ stands for the identity mapping. If $A$ is $\alpha$-inverse-strongly monotone and $\lambda \in (0, 2\alpha]$, then the mapping $P_C(I - \lambda A)$ is nonexpansive. Indeed, we have

$\begin{aligned} \|(I - \lambda A)x - (I - \lambda A)y\|^2 &= \|(x - y) - \lambda(Ax - Ay)\|^2 \\ &= \|x - y\|^2 - 2\lambda\langle x - y, Ax - Ay\rangle + \lambda^2\|Ax - Ay\|^2 \\ &\le \|x - y\|^2 - \lambda(2\alpha - \lambda)\|Ax - Ay\|^2. \end{aligned}$

This shows that $P_C(I - \lambda A)$ is nonexpansive. It follows that $VI(C,A)$ is closed and convex.
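The fixed-point characterization $x = P_C(I - \lambda A)x$ can be seen at work in a small numerical sketch (the set $C$, the mapping $A$, and the step size below are assumptions chosen for this illustration, not part of the paper):

```python
import numpy as np

# Assumed example: C = [0,1]^2 and A(x) = x - b with b = (2, -1).
# This A is 1-inverse-strongly monotone, so P_C(I - lam*A) is
# nonexpansive for lam in (0, 2], and its fixed point solves the VI.

def proj_box(x, lo=0.0, hi=1.0):
    """Metric projection P_C onto the box C = [lo, hi]^2."""
    return np.clip(x, lo, hi)

b = np.array([2.0, -1.0])
A = lambda x: x - b          # 1-inverse-strongly monotone
lam = 0.5                    # step size in (0, 2*alpha) with alpha = 1

x = np.zeros(2)
for _ in range(100):
    # fixed-point iteration for x = P_C(I - lam*A)x
    x = proj_box(x - lam * A(x))
# the iterates reach the VI solution P_C(b) = (1, 0)
```

Because $A = I - b$ here, the VI solution is simply the projection of $b$ onto $C$, which makes the limit easy to verify by hand.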

A multivalued operator $T: H \to 2^H$ with domain $D(T) = \{x \in H : Tx \ne \emptyset\}$ and range $R(T) = \bigcup\{Tx : x \in D(T)\}$ is said to be monotone if for $x_1 \in D(T)$, $x_2 \in D(T)$, $y_1 \in Tx_1$, and $y_2 \in Tx_2$, we have $\langle x_1 - x_2, y_1 - y_2\rangle \ge 0$. A monotone operator $T$ is said to be maximal if its graph $G(T) = \{(x, y) : y \in Tx\}$ is not properly contained in the graph of any other monotone operator. Let $I$ denote the identity operator on $H$ and let $T: H \to 2^H$ be a maximal monotone operator. Then we can define, for each $\lambda > 0$, a nonexpansive single-valued mapping $J_\lambda: H \to H$ by $J_\lambda = (I + \lambda T)^{-1}$, called the resolvent of $T$. We know that $T^{-1}(0) = F(J_\lambda)$ for all $\lambda > 0$ and that $J_\lambda$ is firmly nonexpansive, that is,

$\|J_\lambda x - J_\lambda y\|^2 \le \langle J_\lambda x - J_\lambda y, x - y\rangle, \quad \forall x, y \in H;$

for more details, see [16–22] and the references therein.

In [19], Kamimura and Takahashi investigated the problem of finding zero points of a maximal monotone operator based on the following algorithm:

$x_0 \in H, \quad x_{n+1} = \alpha_n x_n + (1 - \alpha_n) J_{\lambda_n} x_n, \quad n = 0, 1, 2, \ldots,$

where $\{\alpha_n\}$ is a sequence in $(0,1)$, $\{\lambda_n\}$ is a positive sequence, $T: H \to 2^H$ is maximal monotone and $J_{\lambda_n} = (I + \lambda_n T)^{-1}$. They showed that the sequence $\{x_n\}$ converges weakly to some $z \in T^{-1}(0)$ provided that the control sequences satisfy some restrictions. Further, using this result, they also investigated the case $T = \partial f$, where $f: H \to (-\infty, \infty]$ is a proper lower semicontinuous convex function. Convergence theorems were established in the framework of real Hilbert spaces.
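For intuition, here is a minimal instance of this Mann-type proximal point scheme (an assumed toy example, not from [19]): take $H = \mathbb{R}$ and $T = \partial f$ with $f(x) = |x|$, whose resolvent $(I + \lambda T)^{-1}$ is the soft-thresholding map, so the iterates drift to the unique zero point $0$ of $T$.

```python
# Mann-type proximal point iteration x_{n+1} = a_n*x_n + (1 - a_n)*J_lam(x_n)
# for T = subdifferential of f(x) = |x| on H = R (assumed toy example).

def soft_threshold(x, lam):
    """Resolvent (I + lam*d|.|)^{-1}: shrink x toward 0 by lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

x = 5.0
alpha, lam = 0.5, 1.0        # a_n in (0,1) and lam_n > 0, held constant here
for _ in range(100):
    x = alpha * x + (1 - alpha) * soft_threshold(x, lam)
# the iterates converge to 0, the unique zero of T
```

Once $|x_n| \le \lambda$ the resolvent maps $x_n$ to $0$ exactly, after which the iteration halves the distance to the zero point at every step.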

In [16], Takahashi and Toyoda investigated the problem of finding a common solution of the variational inequality problem (2.1) and a fixed point problem of a nonexpansive mapping based on the following algorithm:

$x_0 \in C, \quad x_{n+1} = \alpha_n x_n + (1 - \alpha_n) S P_C(x_n - \lambda_n A x_n), \quad n \ge 0,$

where $\{\alpha_n\}$ is a sequence in $(0,1)$, $\{\lambda_n\}$ is a positive sequence, $S: C \to C$ is a nonexpansive mapping and $A: C \to H$ is an inverse-strongly monotone mapping. They showed that the sequence $\{x_n\}$ converges weakly to some $z \in VI(C,A) \cap F(S)$ provided that the control sequences satisfy some restrictions.
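A toy instance of this scheme (all data below are assumptions chosen for illustration, not from [16]) can be run in a few lines:

```python
import numpy as np

# Assumed data: C = [0,1]^2, A(x) = x - b with b = (2,-1)
# (1-inverse-strongly monotone, VI(C,A) = {(1,0)}), and
# S(x) = (x + p)/2 with p = (1,0): a nonexpansive self-map
# of C with F(S) = {p}.  The common solution is p.
p = np.array([1.0, 0.0])
b = np.array([2.0, -1.0])
S = lambda x: 0.5 * (x + p)
A = lambda x: x - b
proj = lambda x: np.clip(x, 0.0, 1.0)   # metric projection onto C

alpha, lam = 0.5, 0.5                    # constant control parameters
x = np.array([0.3, 0.9])
for _ in range(200):
    x = alpha * x + (1 - alpha) * S(proj(x - lam * A(x)))
# the iterates converge to the common solution p = (1, 0)
```

In $\mathbb{R}^2$ weak and strong convergence coincide, so the limit can be checked directly.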

In [23], Tada and Takahashi investigated the problem of finding common solutions of an equilibrium problem and a fixed point problem of nonexpansive mappings based on the following algorithm: $x_0 \in H$ and

$\begin{cases} u_n \in C \ \text{such that} \ F(u_n, u) + \frac{1}{r_n}\langle u - u_n, u_n - x_n\rangle \ge 0, \quad \forall u \in C, \\ x_{n+1} = \alpha_n x_n + (1 - \alpha_n) S u_n, \quad n \ge 0, \end{cases}$

where $\{\alpha_n\}$ is a sequence in $(0,1)$, $\{r_n\}$ is a positive sequence, $S: C \to C$ is a nonexpansive mapping and $F: C \times C \to \mathbb{R}$ is a bifunction. They showed that the sequence $\{x_n\}$ converges weakly to some $z \in EP(F) \cap F(S)$ provided that the control sequences satisfy some restrictions.

Recently, fixed point and zero point problems have been studied by many authors using iterative methods; see, for example, [23–34] and the references therein. In this paper, motivated by the above results, we consider the problem of finding a common solution to the zero point problem and the fixed point problem based on Mann-type iterative methods with errors. Weak convergence theorems are established in the framework of Hilbert spaces.

To obtain our main results in this paper, we need the following lemmas.

Recall that a space is said to satisfy Opial’s condition [35] if, for any sequence $\{x_n\} \subset H$ with $x_n \rightharpoonup x$, where $\rightharpoonup$ denotes weak convergence, the inequality

$\liminf_{n\to\infty} \|x_n - x\| < \liminf_{n\to\infty} \|x_n - y\|$

holds for every $y \in H$ with $y \ne x$. Every Hilbert space satisfies Opial’s condition. Indeed, the above inequality is equivalent to the following:

$\limsup_{n\to\infty} \|x_n - x\| < \limsup_{n\to\infty} \|x_n - y\|.$

Lemma 2.1 [34]

Let $C$ be a nonempty, closed, and convex subset of $H$, let $A: C \to H$ be a mapping, and let $B: H \to 2^H$ be a maximal monotone operator. Then $F\big(J_\lambda(I - \lambda A)\big) = (A + B)^{-1}(0)$, where $J_\lambda = (I + \lambda B)^{-1}$ is the resolvent of $B$ for $\lambda > 0$.

Lemma 2.2 [36]

Let $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ be three nonnegative sequences satisfying the following condition:

$a_{n+1} \le (1 + b_n) a_n + c_n, \quad \forall n \ge n_0,$

where $n_0$ is some nonnegative integer, $\sum_{n=1}^{\infty} b_n < \infty$ and $\sum_{n=1}^{\infty} c_n < \infty$. Then the limit $\lim_{n\to\infty} a_n$ exists.

Lemma 2.3 [37]

Suppose that $H$ is a real Hilbert space and $0 < p \le t_n \le q < 1$ for all $n \ge 1$. Suppose further that $\{x_n\}$ and $\{y_n\}$ are sequences in $H$ such that

$\limsup_{n\to\infty} \|x_n\| \le r, \qquad \limsup_{n\to\infty} \|y_n\| \le r$

and

$\lim_{n\to\infty} \big\| t_n x_n + (1 - t_n) y_n \big\| = r$

hold for some $r \ge 0$. Then $\lim_{n\to\infty} \|x_n - y_n\| = 0$.

Lemma 2.4 [15]

Let $C$ be a nonempty, closed, and convex subset of $H$. Let $S: C \to C$ be a nonexpansive mapping. Then the mapping $I - S$ is demiclosed at zero; that is, if $\{x_n\}$ is a sequence in $C$ such that $x_n \rightharpoonup \bar{x}$ and $x_n - Sx_n \to 0$, then $\bar{x} \in F(S)$.

3 Main results

Theorem 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $A: C \to H$ be an $\alpha$-inverse-strongly monotone mapping, let $S: C \to C$ be a nonexpansive mapping and let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Assume that $\mathcal{F} = F(S) \cap (A + B)^{-1}(0) \ne \emptyset$. Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be real number sequences in $(0,1)$ such that $\alpha_n + \beta_n + \gamma_n = 1$. Let $\{\lambda_n\}$ be a positive real number sequence and let $\{e_n\}$ be a bounded error sequence in $C$. Let $\{x_n\}$ be a sequence in $C$ generated by the following iterative process:

$x_1 \in C, \quad x_{n+1} = \alpha_n S x_n + \beta_n J_{\lambda_n}(x_n - \lambda_n A x_n) + \gamma_n e_n$
(3.1)

for all $n \in \mathbb{N}$, where $J_{\lambda_n} = (I + \lambda_n B)^{-1}$. Assume that the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\lambda_n\}$ satisfy the following restrictions:

  1. (a)

    $0 < a \le \beta_n \le b < 1$,

  2. (b)

    $0 < c \le \lambda_n \le d < 2\alpha$,

  3. (c)

    $\sum_{n=1}^{\infty} \gamma_n < \infty$,

where $a$, $b$, $c$, and $d$ are some real numbers. Then the sequence $\{x_n\}$ generated in (3.1) converges weakly to some point in $\mathcal{F}$.
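A minimal numerical sketch of iteration (3.1) (every piece of data below is an assumption chosen for illustration, not part of the theorem): take $H = \mathbb{R}$, $Bx = x$ (maximal monotone, with resolvent $J_\lambda y = y/(1+\lambda)$), $Ax = x - 1$ ($1$-inverse-strongly monotone) and $Sx = 1 - x$ (nonexpansive with $F(S) = \{1/2\}$), so that $\mathcal{F} = F(S) \cap (A+B)^{-1}(0) = \{1/2\}$.

```python
# Iteration (3.1): x_{n+1} = a_n*S(x_n) + b_n*J_lam(x_n - lam*A(x_n)) + g_n*e_n
# with a_n + b_n + g_n = 1, summable g_n, and bounded errors e_n
# (assumed one-dimensional toy instance).
S = lambda x: 1.0 - x                 # nonexpansive, F(S) = {0.5}
A = lambda x: x - 1.0                 # 1-inverse-strongly monotone
J = lambda y, lam: y / (1.0 + lam)    # resolvent of B(x) = x

x, lam = 3.0, 0.5                     # lam in (0, 2*alpha) with alpha = 1
for n in range(1, 1001):
    g = 0.1 / n**2                    # summable error weights gamma_n
    a, b = 0.4, 0.6 - g               # keeps a_n + b_n + g_n = 1
    e = 1.0                           # bounded error term e_n
    x = a * S(x) + b * J(x - lam * A(x), lam) + g * e
# x approaches the unique common solution 1/2
```

Since the error weights $\gamma_n$ are summable, the perturbation they inject vanishes and does not destroy convergence, which is exactly the role restriction (c) plays in the proof.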

Proof Notice that $I - \lambda_n A$ is nonexpansive. Indeed, we have

$\begin{aligned} \|(I - \lambda_n A)x - (I - \lambda_n A)y\|^2 &= \|(x - y) - \lambda_n(Ax - Ay)\|^2 \\ &= \|x - y\|^2 - 2\lambda_n\langle x - y, Ax - Ay\rangle + \lambda_n^2\|Ax - Ay\|^2 \\ &\le \|x - y\|^2 - \lambda_n(2\alpha - \lambda_n)\|Ax - Ay\|^2. \end{aligned}$

In view of restriction (b), we find that $I - \lambda_n A$ is nonexpansive. Fixing $p \in \mathcal{F}$, we find from Lemma 2.1 that

$p = Sp = J_{\lambda_n}(p - \lambda_n A p).$

Put $y_n = J_{\lambda_n}(x_n - \lambda_n A x_n)$. Since $J_{\lambda_n}$ and $I - \lambda_n A$ are nonexpansive, we have

$\|y_n - p\| \le \big\|(x_n - \lambda_n A x_n) - (p - \lambda_n A p)\big\| \le \|x_n - p\|.$
(3.2)

On the other hand, we have

$\|x_{n+1} - p\| \le \alpha_n\|Sx_n - p\| + \beta_n\|y_n - p\| + \gamma_n\|e_n - p\| \le \|x_n - p\| + \gamma_n\|e_n - p\|.$
(3.3)

We find that $\lim_{n\to\infty}\|x_n - p\|$ exists with the aid of Lemma 2.2. This in turn implies that $\{x_n\}$ and $\{y_n\}$ are bounded. Put $\lim_{n\to\infty}\|x_n - p\| = L \ge 0$. Notice that

$\big\|Sx_n - p + \gamma_n(e_n - Sx_n)\big\| \le \|Sx_n - p\| + \gamma_n\|e_n - Sx_n\| \le \|x_n - p\| + \gamma_n\|e_n - Sx_n\|.$

This implies from restriction (c) that

$\limsup_{n\to\infty} \big\|Sx_n - p + \gamma_n(e_n - Sx_n)\big\| \le L.$

We also have

$\big\|y_n - p + \gamma_n(e_n - Sx_n)\big\| \le \|y_n - p\| + \gamma_n\|e_n - Sx_n\| \le \|x_n - p\| + \gamma_n\|e_n - Sx_n\|.$

This implies from restriction (c) that

$\limsup_{n\to\infty} \big\|y_n - p + \gamma_n(e_n - Sx_n)\big\| \le L.$

On the other hand, we have

$x_{n+1} - p = (1 - \beta_n)\big(Sx_n - p + \gamma_n(e_n - Sx_n)\big) + \beta_n\big(y_n - p + \gamma_n(e_n - Sx_n)\big).$

It follows from Lemma 2.3 that

$\lim_{n\to\infty} \|Sx_n - y_n\| = 0.$
(3.4)

Notice that

$\begin{aligned} \|y_n - p\|^2 &\le \big\|(x_n - \lambda_n A x_n) - (p - \lambda_n A p)\big\|^2 \\ &\le \|x_n - p\|^2 - 2\alpha\lambda_n\|Ax_n - Ap\|^2 + \lambda_n^2\|Ax_n - Ap\|^2 \\ &= \|x_n - p\|^2 - \lambda_n(2\alpha - \lambda_n)\|Ax_n - Ap\|^2. \end{aligned}$
(3.5)

This implies that

$\begin{aligned} \|x_{n+1} - p\|^2 &\le \alpha_n\|x_n - p\|^2 + \beta_n\|y_n - p\|^2 + \gamma_n\|e_n - p\|^2 \\ &\le \alpha_n\|x_n - p\|^2 + \beta_n\|x_n - p\|^2 - \beta_n\lambda_n(2\alpha - \lambda_n)\|Ax_n - Ap\|^2 + \gamma_n\|e_n - p\|^2 \\ &\le \|x_n - p\|^2 - \beta_n\lambda_n(2\alpha - \lambda_n)\|Ax_n - Ap\|^2 + \gamma_n\|e_n - p\|^2. \end{aligned}$
(3.6)

It follows that

$\beta_n\lambda_n(2\alpha - \lambda_n)\|Ax_n - Ap\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \gamma_n\|e_n - p\|^2.$

In view of the restrictions (a), (b), and (c), we obtain that

$\lim_{n\to\infty} \|Ax_n - Ap\| = 0.$
(3.7)

Notice that

$\begin{aligned} \|y_n - p\|^2 &= \big\|J_{\lambda_n}(x_n - \lambda_n A x_n) - J_{\lambda_n}(p - \lambda_n A p)\big\|^2 \\ &\le \big\langle (x_n - \lambda_n A x_n) - (p - \lambda_n A p), y_n - p \big\rangle \\ &= \tfrac{1}{2}\Big( \big\|(x_n - \lambda_n A x_n) - (p - \lambda_n A p)\big\|^2 + \|y_n - p\|^2 - \big\|(x_n - \lambda_n A x_n) - (p - \lambda_n A p) - (y_n - p)\big\|^2 \Big) \\ &\le \tfrac{1}{2}\Big( \|x_n - p\|^2 + \|y_n - p\|^2 - \big\|x_n - y_n - \lambda_n(Ax_n - Ap)\big\|^2 \Big) \\ &= \tfrac{1}{2}\Big( \|x_n - p\|^2 + \|y_n - p\|^2 - \|x_n - y_n\|^2 - \lambda_n^2\|Ax_n - Ap\|^2 + 2\lambda_n\langle x_n - y_n, Ax_n - Ap\rangle \Big) \\ &\le \tfrac{1}{2}\Big( \|x_n - p\|^2 + \|y_n - p\|^2 - \|x_n - y_n\|^2 + 2\lambda_n\|x_n - y_n\|\|Ax_n - Ap\| \Big). \end{aligned}$

It follows that

$\|y_n - p\|^2 \le \|x_n - p\|^2 - \|x_n - y_n\|^2 + 2\lambda_n\|x_n - y_n\|\|Ax_n - Ap\|.$
(3.8)

On the other hand, we have

$\|x_{n+1} - p\|^2 \le \alpha_n\|x_n - p\|^2 + \beta_n\|y_n - p\|^2 + \gamma_n\|e_n - p\|^2.$
(3.9)

Substituting (3.8) into (3.9), we arrive at

$\begin{aligned} \|x_{n+1} - p\|^2 &\le \alpha_n\|x_n - p\|^2 + \beta_n\|x_n - p\|^2 - \beta_n\|x_n - y_n\|^2 + 2\beta_n\lambda_n\|x_n - y_n\|\|Ax_n - Ap\| + \gamma_n\|e_n - p\|^2 \\ &\le \|x_n - p\|^2 - \beta_n\|x_n - y_n\|^2 + 2\beta_n\lambda_n\|x_n - y_n\|\|Ax_n - Ap\| + \gamma_n\|e_n - p\|^2. \end{aligned}$

It follows that

$\beta_n\|x_n - y_n\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + 2\beta_n\lambda_n\|x_n - y_n\|\|Ax_n - Ap\| + \gamma_n\|e_n - p\|^2.$

In view of the restrictions (a) and (c), we find from (3.7) that

$\lim_{n\to\infty} \|x_n - y_n\| = 0.$
(3.10)

Notice that

$\|Sx_n - x_n\| \le \|Sx_n - y_n\| + \|y_n - x_n\|.$

It follows from (3.4) and (3.10) that

$\lim_{n\to\infty} \|Sx_n - x_n\| = 0.$
(3.11)

Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that $x_{n_i} \rightharpoonup \omega \in C$, where $\rightharpoonup$ denotes weak convergence. From Lemma 2.4 and (3.11), we find that $\omega \in F(S)$. In view of (3.10), we also have $y_{n_i} \rightharpoonup \omega$. Notice that

$y_n = J_{\lambda_n}(x_n - \lambda_n A x_n).$

This implies that

$x_n - \lambda_n A x_n \in (I + \lambda_n B) y_n.$

That is,

$\dfrac{x_n - y_n}{\lambda_n} - A x_n \in B y_n.$

Since $B$ is monotone, we get for any $(u, v) \in G(B)$ that

$\Big\langle y_n - u, \dfrac{x_n - y_n}{\lambda_n} - A x_n - v \Big\rangle \ge 0.$
(3.12)

Replacing $n$ by $n_i$ and letting $i \to \infty$, we obtain from (3.10) that

$\langle \omega - u, -A\omega - v \rangle \ge 0.$

This means $-A\omega \in B\omega$, that is, $0 \in (A + B)(\omega)$. Hence, we get $\omega \in (A + B)^{-1}(0)$. This completes the proof that $\omega \in \mathcal{F}$.

Suppose that there is another subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j} \rightharpoonup \omega'$. Then we can show that $\omega' \in \mathcal{F}$ in exactly the same way. Assume that $\omega \ne \omega'$. Since $\lim_{n\to\infty}\|x_n - p\|$ exists for every $p \in \mathcal{F}$, we may put $\lim_{n\to\infty}\|x_n - \omega\| = d$. Since the space satisfies Opial’s condition, we see that

$\begin{aligned} d &= \liminf_{i\to\infty}\|x_{n_i} - \omega\| < \liminf_{i\to\infty}\|x_{n_i} - \omega'\| = \lim_{n\to\infty}\|x_n - \omega'\| \\ &= \liminf_{j\to\infty}\|x_{n_j} - \omega'\| < \liminf_{j\to\infty}\|x_{n_j} - \omega\| = \lim_{n\to\infty}\|x_n - \omega\| = d. \end{aligned}$

This is a contradiction, which shows that $\omega = \omega'$. Hence the sequence $\{x_n\}$ converges weakly to $\omega \in \mathcal{F}$. This completes the proof. □

From Theorem 3.1 we obtain the following result on the inclusion problem.

Corollary 3.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $A: C \to H$ be an $\alpha$-inverse-strongly monotone mapping, and let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Assume that $(A + B)^{-1}(0) \ne \emptyset$. Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be real number sequences in $(0,1)$ such that $\alpha_n + \beta_n + \gamma_n = 1$. Let $\{\lambda_n\}$ be a positive real number sequence and let $\{e_n\}$ be a bounded error sequence in $C$. Let $\{x_n\}$ be a sequence in $C$ generated by the following iterative process:

$x_1 \in C, \quad x_{n+1} = \alpha_n x_n + \beta_n J_{\lambda_n}(x_n - \lambda_n A x_n) + \gamma_n e_n$

for all $n \in \mathbb{N}$, where $J_{\lambda_n} = (I + \lambda_n B)^{-1}$. Assume that the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\lambda_n\}$ satisfy the following restrictions:

  1. (a)

    $0 < a \le \beta_n \le b < 1$,

  2. (b)

    $0 < c \le \lambda_n \le d < 2\alpha$,

  3. (c)

    $\sum_{n=1}^{\infty} \gamma_n < \infty$,

where $a$, $b$, $c$, and $d$ are some real numbers. Then the sequence $\{x_n\}$ converges weakly to some point in $(A + B)^{-1}(0)$.

Let $f: H \to (-\infty, \infty]$ be a proper lower semicontinuous convex function. Define the subdifferential

$\partial f(x) = \big\{ z \in H : f(x) + \langle y - x, z\rangle \le f(y), \ \forall y \in H \big\}$

for all $x \in H$. Then $\partial f$ is a maximal monotone operator of $H$ into itself; for more details, see [38]. Let $C$ be a nonempty closed convex subset of $H$ and let $i_C$ be the indicator function of $C$, that is,

$i_C(x) = \begin{cases} 0, & x \in C, \\ \infty, & x \notin C. \end{cases}$

Furthermore, we define the normal cone $N_C(v)$ of $C$ at $v$ as follows:

$N_C(v) = \big\{ z \in H : \langle z, y - v\rangle \le 0, \ \forall y \in C \big\}$

for any $v \in C$. Then $i_C: H \to (-\infty, \infty]$ is a proper lower semicontinuous convex function on $H$ and $\partial i_C$ is a maximal monotone operator. Let $J_\lambda x = (I + \lambda\,\partial i_C)^{-1}x$ for any $\lambda > 0$ and $x \in H$. Noting that $\partial i_C(x) = N_C(x)$ for $x \in C$, we get

$v = J_\lambda x \ \Longleftrightarrow\ x \in v + \lambda N_C(v) \ \Longleftrightarrow\ \langle x - v, y - v\rangle \le 0, \ \forall y \in C \ \Longleftrightarrow\ v = P_C x,$

where $P_C$ is the metric projection from $H$ onto $C$. Similarly, we can get that $x \in (A + \partial i_C)^{-1}(0) \Longleftrightarrow x \in VI(C, A)$. Putting $B = \partial i_C$ in Theorem 3.1, we find that $J_{\lambda_n} = P_C$. The following results are not hard to derive.
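The identity $(I + \lambda\,\partial i_C)^{-1} = P_C$ can be sanity-checked numerically through the projection characterization $\langle x - P_C x, y - P_C x\rangle \le 0$ for all $y \in C$ (an illustrative check with an assumed box $C = [0,1]^2$; note the resolvent does not depend on $\lambda$ because $N_C(v)$ is a cone):

```python
import numpy as np

# Assumed example: C = [0,1]^2 in H = R^2.  The resolvent of the maximal
# monotone operator d(i_C) = N_C is the metric projection P_C.
rng = np.random.default_rng(0)
P_C = lambda x: np.clip(x, 0.0, 1.0)

worst = -np.inf
for _ in range(1000):
    x = rng.uniform(-3.0, 3.0, size=2)   # arbitrary point of H
    v = P_C(x)                            # candidate v = J_lam(x)
    y = rng.uniform(0.0, 1.0, size=2)     # arbitrary point of C
    # projection characterization: <x - v, y - v> <= 0 for all y in C
    worst = max(worst, float(np.dot(x - v, y - v)))
# worst stays non-positive over all random trials
```

Each coordinate contributes a non-positive term, which is why a coordinatewise clip realizes the metric projection onto a box.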

Theorem 3.3 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $A: C \to H$ be an $\alpha$-inverse-strongly monotone mapping and let $S: C \to C$ be a nonexpansive mapping. Assume that $\mathcal{F} = F(S) \cap VI(C, A) \ne \emptyset$. Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be real number sequences in $(0,1)$ such that $\alpha_n + \beta_n + \gamma_n = 1$. Let $\{\lambda_n\}$ be a positive real number sequence and let $\{e_n\}$ be a bounded error sequence in $C$. Let $\{x_n\}$ be a sequence in $C$ generated by the following iterative process:

$x_1 \in C, \quad x_{n+1} = \alpha_n S x_n + \beta_n P_C(x_n - \lambda_n A x_n) + \gamma_n e_n$

for all $n \in \mathbb{N}$. Assume that the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\lambda_n\}$ satisfy the following restrictions:

  1. (a)

    $0 < a \le \beta_n \le b < 1$,

  2. (b)

    $0 < c \le \lambda_n \le d < 2\alpha$,

  3. (c)

    $\sum_{n=1}^{\infty} \gamma_n < \infty$,

where $a$, $b$, $c$, and $d$ are some real numbers. Then the sequence $\{x_n\}$ converges weakly to some point in $\mathcal{F}$.

In view of Theorem 3.3, we have the following result.

Corollary 3.4 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $A: C \to H$ be an $\alpha$-inverse-strongly monotone mapping such that $VI(C, A) \ne \emptyset$. Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be real number sequences in $(0,1)$ such that $\alpha_n + \beta_n + \gamma_n = 1$. Let $\{\lambda_n\}$ be a positive real number sequence and let $\{e_n\}$ be a bounded error sequence in $C$. Let $\{x_n\}$ be a sequence in $C$ generated by the following iterative process:

$x_1 \in C, \quad x_{n+1} = \alpha_n x_n + \beta_n P_C(x_n - \lambda_n A x_n) + \gamma_n e_n$

for all $n \in \mathbb{N}$. Assume that the sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\lambda_n\}$ satisfy the following restrictions:

  1. (a)

    $0 < a \le \beta_n \le b < 1$,

  2. (b)

    $0 < c \le \lambda_n \le d < 2\alpha$,

  3. (c)

    $\sum_{n=1}^{\infty} \gamma_n < \infty$,

where $a$, $b$, $c$, and $d$ are some real numbers. Then the sequence $\{x_n\}$ converges weakly to some point in $VI(C, A)$.

Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$, where $\mathbb{R}$ denotes the set of real numbers. Recall the following equilibrium problem:

Find $x \in C$ such that $F(x, y) \ge 0$, $\forall y \in C$.

In this paper, we use $EP(F)$ to denote the solution set of the equilibrium problem.

To study the equilibrium problems, we may assume that F satisfies the following conditions:

  1. (A1)

    $F(x, x) = 0$ for all $x \in C$;

  2. (A2)

    $F$ is monotone, i.e., $F(x, y) + F(y, x) \le 0$ for all $x, y \in C$;

  3. (A3)

    for each $x, y, z \in C$,

    $\limsup_{t \downarrow 0} F\big(tz + (1 - t)x, y\big) \le F(x, y);$
  4. (A4)

    for each $x \in C$, $y \mapsto F(x, y)$ is convex and weakly lower semicontinuous.

Putting $F(x, y) = \langle Ax, y - x\rangle$ for every $x, y \in C$, we see that the equilibrium problem reduces to the variational inequality (2.1).

The following lemma can be found in [39].

Lemma 3.5 Let $C$ be a nonempty closed convex subset of $H$ and let $F: C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A4). Then, for any $r > 0$ and $x \in H$, there exists $z \in C$ such that

$F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \quad \forall y \in C.$

Further, define

$T_r x = \Big\{ z \in C : F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \ \forall y \in C \Big\}$
(3.13)

for all r>0 and xH. Then the following hold:

  1. (a)

    $T_r$ is single-valued,

  2. (b)

    $T_r$ is firmly nonexpansive, i.e., for any $x, y \in H$,

    $\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y\rangle,$
  3. (c)

    $F(T_r) = EP(F)$,

  4. (d)

    $EP(F)$ is closed and convex.

Lemma 3.6 [5]

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $F$ be a bifunction from $C \times C$ to $\mathbb{R}$ which satisfies (A1)-(A4) and let $A_F$ be a multivalued mapping of $H$ into itself defined by

$A_F x = \begin{cases} \{ z \in H : F(x, y) \ge \langle y - x, z\rangle, \ \forall y \in C \}, & x \in C, \\ \emptyset, & x \notin C. \end{cases}$

Then $A_F$ is a maximal monotone operator with domain $D(A_F) \subset C$, $EP(F) = A_F^{-1}(0)$ and

$T_r x = (I + r A_F)^{-1} x, \quad \forall x \in H, \ r > 0,$

where $T_r$ is defined as in (3.13).

Theorem 3.7 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $S: C \to C$ be a nonexpansive mapping and let $F$ be a bifunction from $C \times C$ to $\mathbb{R}$ which satisfies (A1)-(A4). Assume that $\mathcal{F} = F(S) \cap EP(F) \ne \emptyset$. Let $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ be real number sequences in $(0,1)$ such that $\alpha_n + \beta_n + \gamma_n = 1$. Let $\{\lambda_n\}$ be a positive real number sequence and let $\{e_n\}$ be a bounded error sequence in $C$. Let $\{x_n\}$ be a sequence in $C$ generated by the following iterative process:

$x_1 \in C, \quad x_{n+1} = \alpha_n S x_n + \beta_n y_n + \gamma_n e_n$

for all $n \in \mathbb{N}$, where $y_n \in C$ is such that

$F(y_n, u) + \frac{1}{\lambda_n}\langle u - y_n, y_n - x_n\rangle \ge 0, \quad \forall u \in C.$

Assume that the sequences { α n }, { β n }, { γ n }, and { λ n } satisfy the following restrictions:

  1. (a)

    0<a β n b<1,

  2. (b)

    0<c λ n d<,

  3. (c)

    n = 1 γ n <,

where $a$, $b$, $c$, and $d$ are some real numbers. Then the sequence $\{x_n\}$ converges weakly to some point in $\mathcal{F}$.
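To see this scheme in action, consider an assumed toy instance (not from the paper): $C = [-1, 1]$, $F(x, y) = x(y - x)$ (the bifunction induced by $Ax = x$, so $EP(F) = \{0\}$) and $Sx = -x$ (nonexpansive with $F(S) = \{0\}$). For this $F$, the resolvent step has the closed form $y_n = P_C\big(x_n / (1 + \lambda_n)\big)$, which can be verified against the defining inequality.

```python
# Assumed toy instance of Theorem 3.7 on H = R: C = [-1, 1],
# F(x, y) = x*(y - x)  (so EP(F) = {0}),  S(x) = -x  (so F(S) = {0}).
# For this F, the resolvent y_n solving
#   F(y_n, u) + (1/lam)*(u - y_n)*(y_n - x_n) >= 0  for all u in C
# is y_n = P_C(x_n / (1 + lam)).
clip = lambda t: max(-1.0, min(1.0, t))   # P_C for C = [-1, 1]
S = lambda t: -t

x, lam = 0.9, 1.0
for n in range(1, 1001):
    g = 0.1 / n**2                        # summable gamma_n
    a, b = 0.4, 0.6 - g                   # a_n + b_n + g_n = 1
    e = 1.0                               # bounded error e_n
    y = clip(x / (1.0 + lam))             # resolvent (equilibrium) step
    x = a * S(x) + b * y + g * e
# x approaches the unique common solution 0
```

Since $F(z, u) + \frac{1}{\lambda}(u - z)(z - x)$ is affine in $u$, checking the defining inequality at the endpoints $u = \pm 1$ suffices to confirm the closed-form resolvent.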