1 Introduction

In the real world, many important problems have reformulations which require finding zero points of some nonlinear operator; for instance, evolution equations, complementarity problems, mini-max problems, variational inequalities, and optimization problems; see [1–13] and the references therein. It is well known that minimizing a convex function $f$ can be reduced to finding zero points of the subdifferential mapping $A=\partial f$. Splitting methods have recently received much attention due to the fact that many nonlinear problems arising in applied areas such as image recovery, signal processing, and machine learning are mathematically modeled as a nonlinear operator equation whose operator is decomposed as the sum of two nonlinear operators. The central problem is to iteratively find a zero point of the sum of two monotone operators; that is, $0\in(A+B)(x)$. Many problems can be formulated in this form. For instance, a stationary solution to the initial value problem of the evolution equation $0\in Fu+\frac{\partial u}{\partial t}$, $u_0=u(0)$, can be recast as the above inclusion problem when the governing maximal monotone operator $F$ is of the form $F=A+B$; for more details, see [14] and the references therein.

In this paper, we study a regularization method for finding zero points of the sum of an inverse-strongly monotone operator and a maximal monotone operator. The main contribution of the paper is to establish a strong convergence theorem for such zero points, based on a viscosity approximation, under mild restrictions imposed on the control sequences. The main results include the corresponding results of Xu [15] as a special case. The organization of this paper is as follows. In Section 2, we provide some necessary preliminaries. In Section 3, a regularization method is investigated and a strong convergence theorem for zero points of the sum of the two operators is established. In Section 4, applications of the main results are discussed.

2 Preliminaries

In what follows, we always assume that $H$ is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Let $C$ be a nonempty, closed, and convex subset of $H$. Let $S:C\to C$ be a mapping. $F(S)$ stands for the fixed point set of $S$; that is, $F(S):=\{x\in C: x=Sx\}$. Recall that $S$ is said to be contractive iff there exists a constant $\kappa\in(0,1)$ such that

$$\|Sx-Sy\|\le\kappa\|x-y\|,\quad \forall x,y\in C.$$

It is well known that every contractive mapping on a complete metric space has a unique fixed point. $S$ is said to be nonexpansive iff

$$\|Sx-Sy\|\le\|x-y\|,\quad \forall x,y\in C.$$

If $C$ is a bounded, closed, and convex subset of $H$, then $F(S)$ is nonempty, closed, and convex; see [16] and the references therein.

Let $A:C\to H$ be a mapping. Recall that $A$ is said to be monotone iff

$$\langle Ax-Ay, x-y\rangle\ge 0,\quad \forall x,y\in C.$$

Recall that $A$ is said to be inverse-strongly monotone iff there exists a constant $\alpha>0$ such that

$$\langle Ax-Ay, x-y\rangle\ge \alpha\|Ax-Ay\|^{2},\quad \forall x,y\in C.$$

For such a case, $A$ is also said to be $\alpha$-inverse-strongly monotone. It is not hard to see that every $\alpha$-inverse-strongly monotone mapping is monotone and $\frac{1}{\alpha}$-Lipschitz continuous.
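
Indeed, the Lipschitz constant follows from the Cauchy–Schwarz inequality (a one-line computation recorded here for convenience):

$$\alpha\|Ax-Ay\|^{2}\le\langle Ax-Ay, x-y\rangle\le\|Ax-Ay\|\,\|x-y\|\quad\Longrightarrow\quad\|Ax-Ay\|\le\frac{1}{\alpha}\|x-y\|.$$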

Recall that a set-valued mapping $B:H\to 2^{H}$ is said to be monotone iff, for all $x,y\in H$, $f\in Bx$ and $g\in By$ imply $\langle x-y, f-g\rangle\ge 0$. In this paper, we use $B^{-1}(0)$ to stand for the zero point set of $B$. A monotone mapping $B:H\to 2^{H}$ is maximal iff the graph $\operatorname{Graph}(B)$ of $B$ is not properly contained in the graph of any other monotone mapping. It is well known that a monotone mapping $B$ is maximal if and only if, for any $(x,f)\in H\times H$, $\langle x-y, f-g\rangle\ge 0$ for all $(y,g)\in\operatorname{Graph}(B)$ implies $f\in Bx$. For a maximal monotone operator $B$ on $H$ and $r>0$, we may define the single-valued resolvent $J_r:=(I+rB)^{-1}:H\to\operatorname{Dom}(B)$, where $\operatorname{Dom}(B)$ denotes the domain of $B$. It is well known that $J_r$ is firmly nonexpansive and $B^{-1}(0)=F(J_r)$.
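
To make the resolvent concrete, here is a minimal numerical sketch (an illustration we add here, not part of the original text), taking $H=\mathbb{R}$ and $B=\partial g$ with $g(x)=\lambda|x|$; in this case $J_r$ is the soft-thresholding operator, and the function name `resolvent_abs` and the parameter `lam` are ours.

```python
# Illustrative sketch (assumed setting): H = R, B = ∂(lam*|.|), whose resolvent
# J_r = (I + r*B)^(-1) is soft-thresholding by r*lam.  Then B^(-1)(0) = {0} = F(J_r).
import numpy as np

def resolvent_abs(x, r, lam=1.0):
    """Resolvent of B = ∂(lam*|.|) on R, i.e. soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - r * lam, 0.0)

print(resolvent_abs(3.7, 0.5))   # 3.2: shrinks the input toward 0 by r*lam
print(resolvent_abs(0.0, 0.5))   # 0.0: the zero point of B is a fixed point of J_r
```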

Recently, many authors have studied zero points of monotone operators based on different regularization methods; see [17–29] and the references therein. Our main motivation comes from Xu [15]. We propose a regularization method for finding zero points of the sum of two monotone operators, and strong convergence theorems are established in the framework of Hilbert spaces.

In order to prove our main results, we also need the following lemmas.

Lemma 2.1 [30]

Let $A:C\to H$ be a mapping and $B:H\to 2^{H}$ a maximal monotone operator. Then, for any $r>0$, $F(J_r(I-rA))=(A+B)^{-1}(0)$, where $J_r=(I+rB)^{-1}$.

Lemma 2.2 [31]

Let $\{a_n\}$ be a sequence of nonnegative numbers satisfying the condition $a_{n+1}\le(1-t_n)a_n+t_nb_n+c_n$, $\forall n\ge 0$, where $\{t_n\}$ is a number sequence in $(0,1)$ such that $\lim_{n\to\infty}t_n=0$ and $\sum_{n=0}^{\infty}t_n=\infty$, $\{b_n\}$ is a number sequence such that $\limsup_{n\to\infty}b_n\le 0$, and $\{c_n\}$ is a positive number sequence such that $\sum_{n=0}^{\infty}c_n<\infty$. Then $\lim_{n\to\infty}a_n=0$.

Lemma 2.3 [32]

Let $H$ be a Hilbert space and $A$ a maximal monotone operator on $H$. For $\lambda>0$, $\mu>0$, and $x\in H$, we have

$$J_\lambda x=J_\mu\Big(\frac{\mu}{\lambda}x+\Big(1-\frac{\mu}{\lambda}\Big)J_\lambda x\Big),$$

where $J_\lambda=(I+\lambda A)^{-1}$ and $J_\mu=(I+\mu A)^{-1}$.
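
As a quick sanity check of Lemma 2.3 (our illustration, under the assumed simple setting $A=I$ on $H=\mathbb{R}$, for which $J_\lambda x=x/(1+\lambda)$), the resolvent identity can be verified numerically:

```python
# Verify J_lambda(x) = J_mu((mu/lambda)*x + (1 - mu/lambda)*J_lambda(x))
# for the maximal monotone operator A = I on R, where J_t(x) = x / (1 + t).
def J(t, x):
    return x / (1.0 + t)

lam, mu, x = 2.0, 0.5, 3.0
lhs = J(lam, x)
rhs = J(mu, (mu / lam) * x + (1.0 - mu / lam) * J(lam, x))
print(lhs, rhs)                  # both equal 1.0
assert abs(lhs - rhs) < 1e-12
```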

3 Main results

Theorem 3.1 Let $A:C\to H$ be an $\alpha$-inverse-strongly monotone mapping and let $B$ be a maximal monotone operator on $H$. Assume that $\operatorname{Dom}(B)\subset C$ and $(A+B)^{-1}(0)$ is not empty. Let $f:C\to C$ be a fixed $\kappa$-contraction and let $J_{r_n}=(I+r_nB)^{-1}$. Let $\{x_n\}$ be a sequence generated in $C$ by the following process: $x_0\in C$ and

$$\begin{cases} y_n=\alpha_nf(x_n)+(1-\alpha_n)x_n,\\ x_{n+1}=J_{r_n}(y_n-r_nAy_n+e_n),\quad n\ge 0, \end{cases}$$

where $\{\alpha_n\}$ is a real number sequence in $(0,1)$, $\{e_n\}$ is a sequence in $H$, and $\{r_n\}$ is a positive real number sequence in $(0,2\alpha)$. If the control sequences satisfy the following restrictions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^{\infty}\alpha_n=\infty$ and $\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<\infty$;

(b) $0<a\le r_n\le b<2\alpha$ and $\sum_{n=1}^{\infty}|r_n-r_{n-1}|<\infty$;

(c) $\sum_{n=0}^{\infty}\|e_n\|<\infty$,

then $\{x_n\}$ converges strongly to a point $\bar{x}\in(A+B)^{-1}(0)$, where $\bar{x}=\operatorname{Proj}_{(A+B)^{-1}(0)}f(\bar{x})$.
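
Before turning to the proof, we record a minimal numerical sketch of the scheme (our illustration, under assumptions not made in the theorem): $H=\mathbb{R}^d$, $Ax=x-b$ (which is $1$-inverse-strongly monotone), $B=\partial(\lambda\|\cdot\|_1)$ so that $J_{r}$ is componentwise soft-thresholding, $f(x)=\kappa x$, and $e_n=0$; the unique zero point of $A+B$ is then the soft-thresholded vector.

```python
# Sketch of the regularized forward-backward scheme of Theorem 3.1 (illustration only).
# Assumed data: A(x) = x - b (alpha = 1), B = ∂(lam*||.||_1), f(x) = kappa*x,
# alpha_n = 1/(n+2), r_n = 0.5 in (0, 2*alpha), e_n = 0.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
d, lam, kappa = 5, 0.1, 0.5
b = rng.standard_normal(d)
A = lambda x: x - b                       # gradient of (1/2)||x - b||^2
f = lambda x: kappa * x                   # a kappa-contraction

x = np.zeros(d)
for n in range(2000):
    alpha_n, r_n, e_n = 1.0 / (n + 2), 0.5, 0.0
    y = alpha_n * f(x) + (1 - alpha_n) * x
    x = soft_threshold(y - r_n * A(y) + e_n, r_n * lam)

# The unique zero point of A + B is soft_threshold(b, lam); x_n approaches it.
print(np.max(np.abs(x - soft_threshold(b, lam))))
```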

Proof First, we show that $\{x_n\}$ is bounded. Notice that $I-r_nA$ is nonexpansive. Indeed, we have

$$\begin{aligned} \|(I-r_nA)x-(I-r_nA)y\|^{2} &=\|x-y\|^{2}-2r_n\langle x-y, Ax-Ay\rangle+r_n^{2}\|Ax-Ay\|^{2}\\ &\le\|x-y\|^{2}-r_n(2\alpha-r_n)\|Ax-Ay\|^{2}. \end{aligned}$$

In view of restriction (b), we find that $I-r_nA$ is nonexpansive. Fixing $p\in(A+B)^{-1}(0)$, we find that

$$\|y_n-p\|\le\alpha_n\|f(x_n)-p\|+(1-\alpha_n)\|x_n-p\|\le\big(1-\alpha_n(1-\kappa)\big)\|x_n-p\|+\alpha_n\|f(p)-p\|.$$

It follows that

$$\begin{aligned} \|x_{n+1}-p\| &\le\|(y_n-r_nAy_n+e_n)-(I-r_nA)p\|\\ &\le\|(I-r_nA)y_n-(I-r_nA)p\|+\|e_n\|\\ &\le\big(1-\alpha_n(1-\kappa)\big)\|x_n-p\|+\alpha_n\|f(p)-p\|+\|e_n\|\\ &\le\max\Big\{\|x_n-p\|,\frac{\|f(p)-p\|}{1-\kappa}\Big\}+\|e_n\|\\ &\le\max\Big\{\|x_{n-1}-p\|,\frac{\|f(p)-p\|}{1-\kappa}\Big\}+\|e_n\|+\|e_{n-1}\|\\ &\ \ \vdots\\ &\le\max\Big\{\|x_0-p\|,\frac{\|f(p)-p\|}{1-\kappa}\Big\}+\sum_{i=0}^{n}\|e_i\|\\ &\le\max\Big\{\|x_0-p\|,\frac{\|f(p)-p\|}{1-\kappa}\Big\}+\sum_{i=0}^{\infty}\|e_i\|<\infty. \end{aligned}$$

This proves that the sequence $\{x_n\}$ is bounded, and so is $\{y_n\}$. Notice that

$$\|y_n-y_{n-1}\|\le\big(1-\alpha_n(1-\kappa)\big)\|x_n-x_{n-1}\|+|\alpha_n-\alpha_{n-1}|\,\|f(x_{n-1})-x_{n-1}\|.$$

Putting $z_n=y_n-r_nAy_n+e_n$, we find that

$$\begin{aligned} \|z_n-z_{n-1}\| &\le\|y_n-y_{n-1}\|+|r_n-r_{n-1}|\,\|Ay_{n-1}\|+\|e_n\|+\|e_{n-1}\|\\ &\le\big(1-\alpha_n(1-\kappa)\big)\|x_n-x_{n-1}\|+|\alpha_n-\alpha_{n-1}|\,\|f(x_{n-1})-x_{n-1}\|\\ &\quad+|r_n-r_{n-1}|\,\|Ay_{n-1}\|+\|e_n\|+\|e_{n-1}\|. \end{aligned}$$

It follows from Lemma 2.3 that

$$\begin{aligned} \|x_{n+1}-x_n\| &=\|J_{r_n}z_n-J_{r_{n-1}}z_{n-1}\|\\ &=\Big\|J_{r_{n-1}}\Big(\frac{r_{n-1}}{r_n}z_n+\Big(1-\frac{r_{n-1}}{r_n}\Big)J_{r_n}z_n\Big)-J_{r_{n-1}}z_{n-1}\Big\|\\ &\le\Big\|\frac{r_{n-1}}{r_n}(z_n-z_{n-1})+\Big(1-\frac{r_{n-1}}{r_n}\Big)(J_{r_n}z_n-z_{n-1})\Big\|\\ &=\Big\|(z_n-z_{n-1})+\Big(1-\frac{r_{n-1}}{r_n}\Big)(J_{r_n}z_n-z_n)\Big\|\\ &\le\|z_n-z_{n-1}\|+\frac{|r_n-r_{n-1}|}{a}\|J_{r_n}z_n-z_n\|\\ &\le\big(1-\alpha_n(1-\kappa)\big)\|x_n-x_{n-1}\|+f_n, \end{aligned}$$

where

$$f_n=|\alpha_n-\alpha_{n-1}|\,\|f(x_{n-1})-x_{n-1}\|+|r_n-r_{n-1}|\Big(\|Ay_{n-1}\|+\frac{\|J_{r_n}z_n-z_n\|}{a}\Big)+\|e_n\|+\|e_{n-1}\|.$$

It follows from restrictions (a), (b), and (c) that $\sum_{n=1}^{\infty}f_n<\infty$. In view of Lemma 2.2, we find that $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$. In view of $\|y_n-x_n\|\le\alpha_n\|f(x_n)-x_n\|$, we find from the above that

$$\lim_{n\to\infty}\|y_n-x_{n+1}\|=\lim_{n\to\infty}\|y_n-x_n\|=0.$$
(3.1)

Next, we show that

$$\limsup_{n\to\infty}\langle f(\bar{x})-\bar{x}, y_n-\bar{x}\rangle\le 0,$$
(3.2)

where $\bar{x}$ is the unique fixed point of the mapping $\operatorname{Proj}_{(A+B)^{-1}(0)}f$. To show this inequality, we choose a subsequence $\{y_{n_i}\}$ of $\{y_n\}$ such that

$$\limsup_{n\to\infty}\langle f(\bar{x})-\bar{x}, y_n-\bar{x}\rangle=\lim_{i\to\infty}\langle f(\bar{x})-\bar{x}, y_{n_i}-\bar{x}\rangle.$$

Since $\{y_{n_i}\}$ is bounded, there exists a subsequence $\{y_{n_{i_j}}\}$ of $\{y_{n_i}\}$ which converges weakly to some $\hat{x}$. Without loss of generality, we may assume that $y_{n_i}\rightharpoonup\hat{x}$.

Now, we show that $\hat{x}\in(A+B)^{-1}(0)$. Notice that $y_n-r_nAy_n+e_n\in x_{n+1}+r_nBx_{n+1}$; that is,

$$y_n-r_nAy_n+e_n-x_{n+1}\in r_nBx_{n+1}.$$

Let $\mu\in B\nu$. Since $B$ is monotone, we find that

$$\Big\langle\frac{y_n+e_n-x_{n+1}}{r_n}-Ay_n-\mu,\ x_{n+1}-\nu\Big\rangle\ge 0.$$

In view of restriction (b), we see from (3.1) that $\langle-A\hat{x}-\mu,\hat{x}-\nu\rangle\ge 0$ for all $(\nu,\mu)\in\operatorname{Graph}(B)$. Since $B$ is maximal monotone, this implies that $-A\hat{x}\in B\hat{x}$; that is, $\hat{x}\in(A+B)^{-1}(0)$. Since $\bar{x}=\operatorname{Proj}_{(A+B)^{-1}(0)}f(\bar{x})$, we have $\langle f(\bar{x})-\bar{x},\hat{x}-\bar{x}\rangle\le 0$, which together with $y_{n_i}\rightharpoonup\hat{x}$ proves that (3.2) holds. Notice that

$$\|y_n-\bar{x}\|^{2}\le\alpha_n\kappa\|x_n-\bar{x}\|\,\|y_n-\bar{x}\|+\alpha_n\langle f(\bar{x})-\bar{x}, y_n-\bar{x}\rangle+(1-\alpha_n)\|x_n-\bar{x}\|\,\|y_n-\bar{x}\|.$$

It follows that $\|y_n-\bar{x}\|^{2}\le\big(1-\alpha_n(1-\kappa)\big)\|x_n-\bar{x}\|^{2}+2\alpha_n\langle f(\bar{x})-\bar{x}, y_n-\bar{x}\rangle$. On the other hand, we have

$$\begin{aligned} \|x_{n+1}-\bar{x}\|^{2} &=\|J_{r_n}(y_n-r_nAy_n+e_n)-\bar{x}\|^{2}\\ &\le\|(y_n-r_nAy_n)-(I-r_nA)\bar{x}\|^{2}+\|e_n\|\big(\|e_n\|+2\|(y_n-r_nAy_n)-(I-r_nA)\bar{x}\|\big)\\ &\le\|y_n-\bar{x}\|^{2}+\|e_n\|\big(\|e_n\|+2\|(y_n-r_nAy_n)-(I-r_nA)\bar{x}\|\big)\\ &\le\big(1-\alpha_n(1-\kappa)\big)\|x_n-\bar{x}\|^{2}+2\alpha_n\langle f(\bar{x})-\bar{x}, y_n-\bar{x}\rangle\\ &\quad+\|e_n\|\big(\|e_n\|+2\|(y_n-r_nAy_n)-(I-r_nA)\bar{x}\|\big). \end{aligned}$$

An application of Lemma 2.2 to the above inequality yields $\lim_{n\to\infty}\|x_n-\bar{x}\|=0$. This completes the proof. □

4 Applications

First, we consider the problem of finding a minimizer of a proper convex lower semicontinuous function.

For a proper lower semicontinuous convex function $g:H\to(-\infty,+\infty]$, the subdifferential mapping $\partial g$ of $g$ is defined by

$$\partial g(x)=\big\{x^{*}\in H: g(x)+\langle y-x, x^{*}\rangle\le g(y),\ \forall y\in H\big\},\quad\forall x\in H.$$

Rockafellar [33] proved that $\partial g$ is a maximal monotone operator. It is easy to verify that $0\in\partial g(v)$ if and only if $g(v)=\min_{x\in H}g(x)$.
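
For instance (a standard one-dimensional illustration we add here), take $H=\mathbb{R}$ and $g(x)=|x|$; then

$$\partial g(x)=\begin{cases}\{-1\}, & x<0,\\ [-1,1], & x=0,\\ \{1\}, & x>0,\end{cases}$$

so $0\in\partial g(0)$, and indeed $g(0)=0=\min_{x\in\mathbb{R}}g(x)$.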

Theorem 4.1 Let $g:H\to(-\infty,+\infty]$ be a proper convex lower semicontinuous function such that $(\partial g)^{-1}(0)$ is not empty. Let $f:H\to H$ be a $\kappa$-contraction and let $\{x_n\}$ be a sequence generated in $H$ by the following process: $x_0\in H$ and

$$\begin{cases} y_n=\alpha_nf(x_n)+(1-\alpha_n)x_n,\\ x_{n+1}=\arg\min_{z\in H}\Big\{g(z)+\frac{\|z-y_n-e_n\|^{2}}{2r_n}\Big\},\quad n\ge 0, \end{cases}$$

where $\{\alpha_n\}$ is a real number sequence in $(0,1)$, $\{e_n\}$ is a sequence in $H$, and $\{r_n\}$ is a positive real number sequence. If the control sequences satisfy the following restrictions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^{\infty}\alpha_n=\infty$ and $\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<\infty$;

(b) $0<a\le r_n$;

(c) $\sum_{n=0}^{\infty}\|e_n\|<\infty$,

then $\{x_n\}$ converges strongly to a point $\bar{x}\in(\partial g)^{-1}(0)$, where $\bar{x}=\operatorname{Proj}_{(\partial g)^{-1}(0)}f(\bar{x})$.

Proof Since $g:H\to(-\infty,+\infty]$ is a proper convex lower semicontinuous function, its subdifferential $\partial g$ is maximal monotone. Notice that

$$x_{n+1}=\arg\min_{z\in H}\Big\{g(z)+\frac{\|z-y_n-e_n\|^{2}}{2r_n}\Big\}$$

is equivalent to

$$0\in\partial g(x_{n+1})+\frac{1}{r_n}(x_{n+1}-y_n-e_n).$$

It follows that

$$y_n+e_n\in x_{n+1}+r_n\partial g(x_{n+1}).$$

Putting A=0, we immediately derive from Theorem 3.1 the desired conclusion. □
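
As a concrete instance of the scheme in Theorem 4.1 (our illustration with assumed data), take $g(z)=\frac{1}{2}\|z-c\|^{2}$; the argmin step then has the closed form $x_{n+1}=(y_n+e_n+r_nc)/(1+r_n)$, and $(\partial g)^{-1}(0)=\{c\}$.

```python
# Sketch of Theorem 4.1 with the assumed choice g(z) = 0.5*||z - c||^2, for which
# argmin_z { g(z) + ||z - y - e||^2/(2r) } = (y + e + r*c)/(1 + r) and (∂g)^(-1)(0) = {c}.
import numpy as np

c = np.array([1.0, -2.0, 0.5])
f = lambda x: 0.5 * x                      # a 0.5-contraction
x = np.zeros_like(c)
for n in range(5000):
    alpha_n, r_n, e_n = 1.0 / (n + 2), 1.0, 0.0
    y = alpha_n * f(x) + (1 - alpha_n) * x
    x = (y + e_n + r_n * c) / (1.0 + r_n)  # closed-form proximal (argmin) step

print(np.max(np.abs(x - c)))               # tends to 0 as n grows
```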

Next, we consider the problem of finding a solution of a classical variational inequality.

Let $C$ be a nonempty closed and convex subset of a Hilbert space $H$. Let $i_C$ be the indicator function of $C$, that is,

$$i_C(x)=\begin{cases} 0, & x\in C,\\ +\infty, & x\notin C. \end{cases}$$

Since $i_C$ is a proper lower semicontinuous convex function on $H$, the subdifferential $\partial i_C$ of $i_C$ is maximal monotone. So, we can define the resolvent $J_r$ of $\partial i_C$ for $r>0$, i.e., $J_r:=(I+r\partial i_C)^{-1}$. Letting $x=J_ry$, we find that

$$\begin{aligned} y\in x+r\partial i_Cx\ &\Longleftrightarrow\ y\in x+rN_Cx\\ &\Longleftrightarrow\ \langle y-x, v-x\rangle\le 0,\quad\forall v\in C\\ &\Longleftrightarrow\ x=\operatorname{Proj}_Cy, \end{aligned}$$

where $\operatorname{Proj}_C$ is the metric projection from $H$ onto $C$ and $N_Cx:=\{e\in H:\langle e, v-x\rangle\le 0,\ \forall v\in C\}$.

Theorem 4.2 Let $A:C\to H$ be an $\alpha$-inverse-strongly monotone mapping. Assume that $VI(C,A)$, the solution set of the variational inequality of finding $x\in C$ such that $\langle Ax, v-x\rangle\ge 0$ for all $v\in C$, is not empty. Let $f:C\to C$ be a fixed $\kappa$-contraction. Let $\{x_n\}$ be a sequence generated in $C$ by the following process: $x_0\in C$ and

$$\begin{cases} y_n=\alpha_nf(x_n)+(1-\alpha_n)x_n,\\ x_{n+1}=\operatorname{Proj}_C(y_n-r_nAy_n+e_n),\quad n\ge 0, \end{cases}$$

where $\{\alpha_n\}$ is a real number sequence in $(0,1)$, $\{e_n\}$ is a sequence in $H$, and $\{r_n\}$ is a positive real number sequence in $(0,2\alpha)$. If the control sequences satisfy the following restrictions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^{\infty}\alpha_n=\infty$ and $\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<\infty$;

(b) $0<a\le r_n\le b<2\alpha$ and $\sum_{n=1}^{\infty}|r_n-r_{n-1}|<\infty$;

(c) $\sum_{n=0}^{\infty}\|e_n\|<\infty$,

then $\{x_n\}$ converges strongly to a point $\bar{x}\in VI(C,A)$, where $\bar{x}=\operatorname{Proj}_{VI(C,A)}f(\bar{x})$.

Proof Putting $B=\partial i_C$ in Theorem 3.1, we find that $J_{r_n}=\operatorname{Proj}_C$. We can draw the desired conclusion from Theorem 3.1. □
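
As a toy illustration of Theorem 4.2 (ours, with assumed data), take $C=[0,1]^d$ and $Ax=x-b$ with $b$ partly outside $C$; then $VI(C,A)=\{\operatorname{Proj}_Cb\}$ and the iteration reduces to a projected, regularized gradient step.

```python
# Sketch of Theorem 4.2 with assumed data: C = [0,1]^d, A(x) = x - b (alpha = 1),
# so the solution set of the variational inequality is the singleton {Proj_C(b)}.
import numpy as np

def proj_C(x):                             # metric projection onto the box [0,1]^d
    return np.clip(x, 0.0, 1.0)

b = np.array([1.7, -0.3, 0.4])
A = lambda x: x - b
f = lambda x: 0.5 * x                      # a 0.5-contraction

x = np.zeros_like(b)
for n in range(2000):
    alpha_n, r_n, e_n = 1.0 / (n + 2), 0.5, 0.0
    y = alpha_n * f(x) + (1 - alpha_n) * x
    x = proj_C(y - r_n * A(y) + e_n)

print(x, proj_C(b))                        # x is close to Proj_C(b) = [1, 0, 0.4]
```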

Next, we consider the problem of finding a solution of a Ky Fan inequality, which is known as an equilibrium problem in the terminology of Blum and Oettli; see [34] and [35] and the references therein.

Let F be a bifunction of C×C into ℝ, where ℝ denotes the set of real numbers. Recall the following equilibrium problem:

Find $x\in C$ such that $F(x,y)\ge 0$, $\forall y\in C$.
(4.1)

To study the equilibrium problem (4.1), we may assume that F satisfies the following restrictions:

(A1) $F(x,x)=0$ for all $x\in C$;

(A2) $F$ is monotone, i.e., $F(x,y)+F(y,x)\le 0$ for all $x,y\in C$;

(A3) for each $x,y,z\in C$, $\limsup_{t\downarrow 0}F(tz+(1-t)x,y)\le F(x,y)$;

(A4) for each $x\in C$, $y\mapsto F(x,y)$ is convex and lower semicontinuous.


The following lemma can be found in [35].

Lemma 4.3 Let $F:C\times C\to\mathbb{R}$ be a bifunction satisfying (A1)-(A4). Then, for any $r>0$ and $x\in H$, there exists $z\in C$ such that $F(z,y)+\frac{1}{r}\langle y-z, z-x\rangle\ge 0$, $\forall y\in C$. Further, define

$$T_rx=\Big\{z\in C: F(z,y)+\frac{1}{r}\langle y-z, z-x\rangle\ge 0,\ \forall y\in C\Big\}$$
(4.2)

for all $r>0$ and $x\in H$. Then (1) $T_r$ is single-valued and firmly nonexpansive; (2) $F(T_r)=EP(F)$ is closed and convex.

Lemma 4.4 [36]

Let $F$ be a bifunction from $C\times C$ to $\mathbb{R}$ which satisfies (A1)-(A4), and let $A_F$ be a multivalued mapping of $H$ into itself defined by

$$A_Fx=\begin{cases} \{z\in H: F(x,y)\ge\langle y-x, z\rangle,\ \forall y\in C\}, & x\in C,\\ \emptyset, & x\notin C. \end{cases}$$
(4.3)

Then $A_F$ is a maximal monotone operator with the domain $D(A_F)\subset C$, $EP(F)=A_F^{-1}(0)$, where $EP(F)$ stands for the solution set of (4.1), and $T_rx=(I+rA_F)^{-1}x$, $\forall x\in H$, $r>0$, where $T_r$ is defined as in (4.2).

Theorem 4.5 Let $F:C\times C\to\mathbb{R}$ be a bifunction satisfying (A1)-(A4). Assume that $EP(F)$ is not empty. Let $f:C\to C$ be a fixed $\kappa$-contraction and let $T_{r_n}=(I+r_nA_F)^{-1}$. Let $\{x_n\}$ be a sequence generated in $C$ by the following process: $x_0\in C$ and

$$x_{n+1}=T_{r_n}\big(\alpha_nf(x_n)+(1-\alpha_n)x_n+e_n\big),\quad n\ge 0,$$

where $\{\alpha_n\}$ is a real number sequence in $(0,1)$, $\{e_n\}$ is a sequence in $H$, and $\{r_n\}$ is a positive real number sequence. If the control sequences satisfy the following restrictions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^{\infty}\alpha_n=\infty$ and $\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<\infty$;

(b) $0<a\le r_n\le b<\infty$ and $\sum_{n=1}^{\infty}|r_n-r_{n-1}|<\infty$;

(c) $\sum_{n=0}^{\infty}\|e_n\|<\infty$,

then $\{x_n\}$ converges strongly to a point $\bar{x}\in EP(F)$, where $\bar{x}=\operatorname{Proj}_{EP(F)}f(\bar{x})$.

Proof Putting $A=0$ in Theorem 3.1, we find that $J_{r_n}=T_{r_n}$. From Theorem 3.1, we can draw the desired conclusion immediately. □

Recall that a mapping $T:C\to C$ is said to be $\alpha$-strictly pseudocontractive if there exists a constant $\alpha\in[0,1)$ such that

$$\|Tx-Ty\|^{2}\le\|x-y\|^{2}+\alpha\|(I-T)x-(I-T)y\|^{2},\quad\forall x,y\in C.$$

The class of strictly pseudocontractive mappings was first introduced by Browder and Petryshyn [37]. It is well known that if $T$ is $\alpha$-strictly pseudocontractive, then $I-T$ is $\frac{1-\alpha}{2}$-inverse-strongly monotone.
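
For completeness, here is the short computation behind this fact (a standard derivation, added here for the reader): writing $A=I-T$ and expanding $\|Tx-Ty\|^{2}=\|(x-y)-(Ax-Ay)\|^{2}$, the definition of $\alpha$-strict pseudocontractivity gives

$$\|x-y\|^{2}-2\langle Ax-Ay, x-y\rangle+\|Ax-Ay\|^{2}\le\|x-y\|^{2}+\alpha\|Ax-Ay\|^{2},$$

that is, $\langle Ax-Ay, x-y\rangle\ge\frac{1-\alpha}{2}\|Ax-Ay\|^{2}$.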

Finally, we consider the fixed point problem for $\alpha$-strictly pseudocontractive mappings.

Theorem 4.6 Let $T:C\to C$ be an $\alpha$-strictly pseudocontractive mapping with a nonempty fixed point set and let $f:C\to C$ be a fixed $\kappa$-contraction. Let $\{x_n\}$ be a sequence generated in the following manner: $x_0\in C$ and

$$\begin{cases} y_n=\alpha_nf(x_n)+(1-\alpha_n)x_n,\\ x_{n+1}=(1-r_n)y_n+r_nTy_n,\quad n\ge 0, \end{cases}$$

where $\{\alpha_n\}$ is a real number sequence in $(0,1)$ and $\{r_n\}$ is a positive real number sequence in $(0,1-\alpha)$. If the control sequences satisfy the following restrictions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\sum_{n=0}^{\infty}\alpha_n=\infty$ and $\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<\infty$;

(b) $0<a\le r_n\le b<1-\alpha$ and $\sum_{n=1}^{\infty}|r_n-r_{n-1}|<\infty$;

then $\{x_n\}$ converges strongly to a point $\bar{x}\in F(T)$, where $\bar{x}=\operatorname{Proj}_{F(T)}f(\bar{x})$.

Proof Putting $A=I-T$, we find that $A$ is $\frac{1-\alpha}{2}$-inverse-strongly monotone. We also have $F(T)=VI(C,A)$ and $\operatorname{Proj}_C(y_n-r_nAy_n)=(1-r_n)y_n+r_nTy_n$. In view of Theorem 4.2 (with $e_n=0$ for all $n$), we obtain the desired result. □