1 Introduction

Variational inequalities, introduced by Hartman and Stampacchia [1] in the early sixties, form one of the most interesting and intensively studied classes of mathematical problems. They are a powerful tool of modern mathematics and have been extended to study a considerable number of problems arising in mechanics, physics, optimization and control, nonlinear programming, transportation equilibrium and the engineering sciences (see, e.g., [2–4]). As a useful and important generalization of variational inequalities, quasi-variational inclusion problems have also been studied extensively (see, e.g., [5–14] and the references therein).

Throughout this paper, we assume that $H$ is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$, and we let $C$ be a nonempty closed convex subset of $H$. $F(T)$ denotes the fixed point set of a mapping $T$.

Let $\Phi$ be a single-valued mapping of $C$ into $H$ and let $M$ be a multi-valued mapping with domain $D(M) = C$. The quasi-variational inclusion problem is to find $u \in C$ such that

$$0 \in \Phi(u) + Mu.$$
(1.1)

The solution set of the quasi-variational inclusion problem (1.1) is denoted by $\operatorname{VI}(C,\Phi,M)$. In particular, if $M = \partial\delta_C$, where $\delta_C : H \to [0,\infty]$ is the indicator function of $C$, i.e.,

$$\delta_C(x) = \begin{cases} 0, & x \in C, \\ +\infty, & x \notin C, \end{cases}$$

then the variational inclusion problem (1.1) is equivalent to finding $u \in C$ such that

$$\langle \Phi(u), v - u \rangle \ge 0, \quad \forall v \in C.$$
(1.2)

This problem is called the Hartman-Stampacchia variational inequality problem [1]. The solution set of problem (1.2) is denoted by VI(C,Φ).
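
To make (1.2) concrete, the following is a minimal numerical sketch, assuming $H = \mathbb{R}^2$, $C = [0,1]^2$ and $\Phi(u) = u - b$; all of these concrete choices are illustrative and not taken from the paper. It anticipates the fixed-point characterization of Lemma 2.3 below: the projected iteration $u \leftarrow P_C(u - \mu\Phi(u))$ converges here because this particular $\Phi$ is strongly monotone and Lipschitz.

```python
# A minimal sketch of the variational inequality (1.2), assuming H = R^2,
# C = [0,1]^2 and Phi(u) = u - b; the solution is P_C(b).  Illustrative only.
import numpy as np

b = np.array([1.7, -0.4])
P_C = lambda u: np.clip(u, 0.0, 1.0)       # metric projection onto the box C
Phi = lambda u: u - b
u, mu = np.zeros(2), 0.5
for _ in range(100):
    u = P_C(u - mu * Phi(u))               # projected fixed-point iteration
print(u, P_C(b))                           # both ~ [1.0, 0.0]
# Verify <Phi(u), v - u> >= 0 for all v in C; it suffices to test the corners.
corners = [np.array([i, j], float) for i in (0, 1) for j in (0, 1)]
print(all(Phi(u) @ (v - u) >= -1e-9 for v in corners))
```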

Recall that $T : C \to C$ is called a $k$-strictly pseudo-contractive mapping if there exists a constant $k \in [0,1)$ such that

$$\|Tx - Ty\|^2 \le \|x - y\|^2 + k\|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C,$$
(1.3)

and T is called a pseudo-contractive mapping if

$$\|Tx - Ty\|^2 \le \|x - y\|^2 + \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C.$$

It is obvious that if $k = 0$, then the mapping $T$ is nonexpansive, that is,

$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C.$$

It is well known that finding fixed points of nonexpansive mappings is an important topic in the theory of nonexpansive mappings and has wide applications in a number of applied areas, such as the convex feasibility problem [15, 16], the split feasibility problem [17], and image recovery and signal processing [18]. As an important generalization of nonexpansive mappings, strictly pseudo-contractive mappings have since become one of the most intensively studied classes of mappings (see, e.g., [19–22]). In fact, strictly pseudo-contractive mappings have more powerful applications than nonexpansive mappings, for instance in solving inverse problems [23].

In order to find a common element of the solution set of the quasi-variational inclusion problem (1.1) and the fixed point set of a $k$-strictly pseudo-contractive mapping satisfying (1.3), which is also a solution of the following constrained convex minimization problem:

$$\min_{x \in C} f(x),$$
(1.4)

where $f : C \to \mathbb{R}$ is a real-valued convex function, and assuming that problem (1.4) is consistent (i.e., its solution set is nonempty), let $\Omega$ denote its solution set. Ceng et al. [24] studied the following algorithm: take $x_0 = x \in C$ arbitrarily and

$$\begin{cases} y_n = P_C\big(x_n - \lambda_n \nabla f(x_n)\big), \\ t_n = P_C\big(x_n - \lambda_n \nabla f(y_n)\big), \\ z_n = (1 - \alpha_n - \hat{\alpha}_n)x_n + \alpha_n J_{M,\mu_n}\big(t_n - \mu_n \Phi(t_n)\big) + \hat{\alpha}_n S J_{M,\mu_n}\big(t_n - \mu_n \Phi(t_n)\big), \\ C_n = \{z \in C : \|z_n - z\| \le \|x_n - z\|\}, \\ Q_n = \{z \in C : \langle x_n - z, x - x_n \rangle \ge 0\}, \\ x_{n+1} = P_{C_n \cap Q_n} x, \quad n \ge 0. \end{cases}$$

Under appropriate conditions, they obtained a strong convergence theorem.

In this paper, motivated and inspired by the above facts, we propose a new algorithm as follows: take $x_0 \in C$ arbitrarily, set $C_0 = C$, and

$$\begin{cases} y_n = P_C\big(x_n - \lambda_n \nabla f(x_n)\big), \\ z_n = P_C\big(x_n - \lambda_n \nabla f(y_n)\big), \\ t_n = J_{M,\mu_n}\big(z_n - \mu_n \Phi(z_n)\big), \\ w_n = \alpha_n t_n + (1 - \alpha_n) S t_n, \\ C_{n+1} = \{w \in C_n : \|w_n - w\| \le \|x_n - w\|\}, \\ x_{n+1} = P_{C_{n+1}} x_0, \quad n \ge 0, \end{cases}$$

and we also establish a strong convergence theorem under certain conditions.

The remainder of this paper is organized as follows. In Section 2, some definitions and lemmas are provided that are needed for the main results of this paper. In Section 3, we state and prove a strong convergence theorem for the proposed algorithm. Finally, in Section 4, we apply our conclusion to some special cases.

2 Preliminaries

Let H be a real Hilbert space. It is well known that

$$\|x - y\|^2 = \|x\|^2 - 2\langle x, y \rangle + \|y\|^2$$

and

$$\|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2$$

for all $x, y \in H$ and $t \in [0,1]$.

Now, we recall some definitions and useful results which will be used in the next section.

Definition 2.1 Let $T : C \to H$ be a nonlinear operator.

  (1) $T$ is Lipschitz continuous if there exists a constant $L > 0$ such that

      $$\|Tx - Ty\| \le L\|x - y\|, \quad \forall x, y \in C.$$

  (2) $T$ is monotone if

      $$\langle Tx - Ty, x - y \rangle \ge 0, \quad \forall x, y \in C.$$

  (3) $T$ is $\rho$-strongly monotone if there exists a number $\rho > 0$ such that

      $$\langle Tx - Ty, x - y \rangle \ge \rho\|x - y\|^2, \quad \forall x, y \in C.$$

  (4) $T$ is $\eta$-inverse-strongly monotone if there exists a number $\eta > 0$ such that

      $$\langle Tx - Ty, x - y \rangle \ge \eta\|Tx - Ty\|^2, \quad \forall x, y \in C.$$

It is easy to see that the following results hold: (i) a strongly monotone mapping is monotone; (ii) an $\eta$-inverse-strongly monotone mapping is monotone and $\frac{1}{\eta}$-Lipschitz continuous; (iii) if $T$ is $k$-strictly pseudo-contractive, then $I - T$ is $\frac{1-k}{2}$-inverse-strongly monotone.
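
As a sanity check of property (ii), the following sketch verifies inverse-strong monotonicity and the Lipschitz bound numerically, assuming $\Phi(x) = Ax$ with $A$ symmetric positive semidefinite, for which $\eta = 1/\lambda_{\max}(A)$ works; this choice of mapping is illustrative and not taken from the paper.

```python
# Numerical check of property (ii): an eta-inverse-strongly monotone mapping
# is monotone and (1/eta)-Lipschitz.  Here Phi(x) = A x with A symmetric PSD,
# which is eta-ism for eta = 1/lambda_max(A).  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B.T @ B                                       # symmetric positive semidefinite
eta = 1.0 / np.linalg.eigvalsh(A)[-1]
for _ in range(1000):
    d = rng.standard_normal(4)                    # d = x - y
    Ad = A @ d                                    # Phi(x) - Phi(y)
    assert Ad @ d >= eta * (Ad @ Ad) - 1e-9       # inverse-strong monotonicity
    assert np.linalg.norm(Ad) <= np.linalg.norm(d) / eta + 1e-9  # Lipschitz bound
print("property (ii) verified on random samples")
```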

Definition 2.2 A multi-valued mapping $M : D(M) \subset H \to 2^H$ is called monotone if its graph $G(M) = \{(x,f) \in H \times H : x \in D(M), f \in Mx\}$ is a monotone set in $H \times H$, that is, $M$ is monotone if and only if

$$(x,f), (y,g) \in G(M) \quad \Longrightarrow \quad \langle x - y, f - g \rangle \ge 0.$$

A monotone multi-valued mapping $M$ is called maximal if, for any $(x,f) \in H \times H$, $\langle x - y, f - g \rangle \ge 0$ for every $(y,g) \in G(M)$ implies $f \in Mx$.

Remark 2.1 [24]

The following statements are equivalent:

  (1) A multi-valued mapping $M$ is maximal monotone;

  (2) $M$ is monotone and $(I + \lambda M)D(M) = H$ for each $\lambda > 0$;

  (3) $M$ is monotone and its graph $G(M)$ is not properly contained in the graph of any other monotone mapping in $H$.

Definition 2.3 $P_C : H \to C$ is called the metric projection if, for every point $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that

$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$$

Lemma 2.1 Let $C$ be a nonempty closed convex subset of $H$ and let $P_C : H \to C$ be the metric projection; then

  (1) $\|P_C x - P_C y\|^2 \le \langle x - y, P_C x - P_C y \rangle$, $\forall x, y \in H$;

  (2) moreover, $P_C$ is a nonexpansive mapping, i.e., $\|P_C x - P_C y\| \le \|x - y\|$, $\forall x, y \in H$;

  (3) $\langle x - P_C x, y - P_C x \rangle \le 0$, $\forall x \in H$, $y \in C$;

  (4) $\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2$, $\forall x \in H$, $y \in C$.

Definition 2.4 Let $M : D(M) \subset H \to 2^H$ be a multi-valued maximal monotone mapping; then the single-valued mapping $J_{M,\mu} : H \to H$ defined by

$$J_{M,\mu} x = (I + \mu M)^{-1} x, \quad \forall x \in H,$$

is called the resolvent operator associated with M, where μ is any positive number and I is the identity mapping.

Lemma 2.2 [5, 24, 25]

The resolvent operator $J_{M,\mu}$ associated with $M$ is single-valued and firmly nonexpansive, i.e.,

$$\|J_{M,\mu} x - J_{M,\mu} y\|^2 \le \langle J_{M,\mu} x - J_{M,\mu} y, x - y \rangle, \quad \forall x, y \in H.$$

Consequently, $J_{M,\mu}$ is nonexpansive and monotone.

Lemma 2.3 [24]

Let $M$ be a multi-valued maximal monotone mapping with $D(M) = C$. Then, for any given $\mu > 0$, $u \in C$ is a solution of problem (1.1) if and only if $u$ satisfies

$$u = J_{M,\mu}\big(u - \mu\Phi(u)\big).$$
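
For a concrete instance of Lemma 2.3: assuming $H = C = \mathbb{R}$, $M = \partial|\cdot|$ (so the resolvent $J_{M,\mu}$ is the soft-thresholding operator) and $\Phi(u) = u - b$, the inclusion $0 \in \Phi(u) + Mu$ is solved by $u = \operatorname{soft}(b, 1)$, and the fixed-point iteration $u \leftarrow J_{M,\mu}(u - \mu\Phi(u))$ recovers it. These choices are illustrative, not taken from the paper.

```python
# Sketch of Lemma 2.3 with M the subdifferential of |.| on R, whose resolvent
# J_{M,mu} is soft-thresholding, and Phi(u) = u - b.  Illustrative only.
import numpy as np

soft = lambda x, mu: np.sign(x) * max(abs(x) - mu, 0.0)  # J_{M,mu}
b, mu = 2.3, 0.7
u = 0.0
for _ in range(200):
    u = soft(u - mu * (u - b), mu)      # u <- J_{M,mu}(u - mu*Phi(u))
print(u, soft(b, 1.0))                  # both ~ 1.3; u solves 0 in (u - b) + d|u|
```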

Lemma 2.4 [24]

Let $M$ be a multi-valued maximal monotone mapping with $D(M) = C$ and let $\Phi : C \to H$ be a monotone, continuous and single-valued mapping. Then $M + \Phi$ is maximal monotone.

In the sequel, we use $x_n \rightharpoonup x$ and $x_n \to x$ to denote the weak convergence and the strong convergence of the sequence $\{x_n\}$ in $H$, respectively.

Lemma 2.5 [26]

Let $C$ be a nonempty closed convex subset of $H$ and let $T : C \to C$ be a $k$-strictly pseudo-contractive mapping; then the following results hold:

  (1) Inequality (1.3) is equivalent to

      $$\langle Tx - Ty, x - y \rangle \le \|x - y\|^2 - \frac{1 - k}{2}\|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C.$$

  (2) $T$ is Lipschitz continuous with constant $\frac{1 + k}{1 - k}$, i.e.,

      $$\|Tx - Ty\| \le \frac{1 + k}{1 - k}\|x - y\|, \quad \forall x, y \in C.$$

  (3) (Demi-closedness principle) $I - T$ is demi-closed on $C$, that is,

      if $x_n \rightharpoonup x^* \in C$ and $(I - T)x_n \to 0$, then $x^* = Tx^*$.

Lemma 2.6 Let $C$ be a nonempty closed convex subset of $H$ and let $T : C \to H$ be an $\eta$-inverse-strongly monotone mapping; then, for all $x, y \in C$ and $\lambda > 0$, we have

$$\begin{aligned} \|(I - \lambda T)x - (I - \lambda T)y\|^2 &= \|(x - y) - \lambda(Tx - Ty)\|^2 \\ &= \|x - y\|^2 - 2\lambda\langle Tx - Ty, x - y \rangle + \lambda^2\|Tx - Ty\|^2 \\ &\le \|x - y\|^2 + \lambda(\lambda - 2\eta)\|Tx - Ty\|^2. \end{aligned}$$

So, if $0 < \lambda \le 2\eta$, then $I - \lambda T$ is a nonexpansive mapping from $C$ to $H$.
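
A quick numerical confirmation of Lemma 2.6 follows, again assuming $T(x) = Ax$ with $A$ symmetric positive semidefinite, so that $T$ is $\eta$-inverse-strongly monotone with $\eta = 1/\lambda_{\max}(A)$; an illustrative choice:

```python
# Check that I - lam*T is nonexpansive for 0 < lam <= 2*eta (Lemma 2.6),
# with T(x) = A x, A symmetric PSD, eta = 1/lambda_max(A).  Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
A = B.T @ B
eta = 1.0 / np.linalg.eigvalsh(A)[-1]
lam = 2 * eta                                    # boundary of the admissible range
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lhs = np.linalg.norm((x - lam * A @ x) - (y - lam * A @ y))
    assert lhs <= np.linalg.norm(x - y) + 1e-9   # nonexpansiveness
print("I - lam*T is nonexpansive on random samples")
```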

Lemma 2.7 [24, 27]

Let $A : C \to H$ be a monotone, Lipschitz continuous mapping, and let $N_C v$ be the normal cone to $C$ at $v \in C$, i.e.,

$$N_C v = \{z \in H : \langle v - u, z \rangle \ge 0, \ \forall u \in C\}.$$

Define

$$Tv = \begin{cases} Av + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$

Then $T$ is maximal monotone and $0 \in Tv$ if and only if $v \in \operatorname{VI}(C,A)$.

For the minimization problem (1.4), if $f$ is (Fréchet) differentiable, then we have the following lemma.

Lemma 2.8 [28] (Optimality condition)

A necessary condition of optimality for a point $x^* \in C$ to be a solution of the minimization problem (1.4) is that $x^*$ solves the variational inequality

$$\langle \nabla f(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C.$$
(2.1)

Equivalently, $x^* \in C$ solves the fixed point equation

$$x^* = P_C\big(x^* - \lambda \nabla f(x^*)\big)$$

for every constant λ>0. If, in addition, f is convex, then the optimality condition (2.1) is also sufficient.
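
The fixed-point characterization in Lemma 2.8 is easy to test numerically. The sketch below assumes $f(x) = \frac{1}{2}\|x - a\|^2$ over the box $C = [0,1]^2$ (illustrative choices): the minimizer is $x^* = P_C(a)$, and it satisfies $x^* = P_C(x^* - \lambda\nabla f(x^*))$ for every $\lambda > 0$.

```python
# Check of Lemma 2.8 for f(x) = 0.5*||x - a||^2 over C = [0,1]^2, whose
# minimizer over C is x* = P_C(a).  Illustrative only.
import numpy as np

a = np.array([1.8, 0.3])
P_C = lambda x: np.clip(x, 0.0, 1.0)
gradf = lambda x: x - a
x_star = P_C(a)                                   # constrained minimizer
for lam in (0.1, 1.0, 10.0):
    assert np.allclose(x_star, P_C(x_star - lam * gradf(x_star)))
print(x_star)                                     # [1.0, 0.3]
```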

3 Main results

In this section, we prove a strong convergence theorem by an iterative algorithm for finding a solution of the constrained convex minimization problem (1.4), which is also a common solution of the quasi-variational inclusion problem (1.1) and the fixed point problem of a k-strictly pseudo-contractive mapping (1.3) in a real Hilbert space.

Theorem 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. For the minimization problem (1.4), assume that $f$ is (Fréchet) differentiable and that the gradient $\nabla f$ is a $\rho$-inverse-strongly monotone mapping. Let $\Phi : C \to H$ be an $\eta$-inverse-strongly monotone mapping, let $M$ be a maximal monotone mapping with $D(M) = C$, and let $S : C \to C$ be a $k$-strictly pseudo-contractive mapping such that $F = F(S) \cap \Omega \cap \operatorname{VI}(C,\Phi,M) \ne \emptyset$. Pick any $x_0 \in C$ and set $C_0 = C$. Let $\{x_n\} \subset C$ be a sequence generated by

$$\begin{cases} y_n = P_C\big(x_n - \lambda_n \nabla f(x_n)\big), \\ z_n = P_C\big(x_n - \lambda_n \nabla f(y_n)\big), \\ t_n = J_{M,\mu_n}\big(z_n - \mu_n \Phi(z_n)\big), \\ w_n = \alpha_n t_n + (1 - \alpha_n) S t_n, \\ C_{n+1} = \{w \in C_n : \|w_n - w\| \le \|x_n - w\|\}, \\ x_{n+1} = P_{C_{n+1}} x_0, \quad n \ge 0, \end{cases}$$
(3.1)

where the following conditions hold:

  (i) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < \rho$;

  (ii) $\epsilon \le \mu_n \le 2\eta$ for some $\epsilon \in (0, 2\eta]$;

  (iii) $k < \liminf_{n\to\infty} \alpha_n \le \limsup_{n\to\infty} \alpha_n < 1$.

Then the sequence $\{x_n\}$ converges strongly to $P_F x_0$.
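
Before turning to the proof, the following is a minimal numerical sketch of scheme (3.1), under heavily simplifying assumptions that are ours, not the paper's: $H = \mathbb{R}^2$, $C = [-5,5]^2$ (so $P_C$ is coordinatewise clipping), $M = 0$ (so $J_{M,\mu_n}$ is the identity), $f(x) = \frac{1}{2}\|x - a\|^2$ and $\Phi(x) = x - a$ (both $1$-inverse-strongly monotone), and $S$ a contraction about $a$ (hence a $0$-strict pseudo-contraction). Here $F = \{a\}$, and $P_{C_{n+1}} x_0$ is a small quadratic program, because each $C_{n+1}$ is the box intersected with the accumulated half-spaces.

```python
# Sketch of algorithm (3.1) under the simplifying assumptions stated above;
# all concrete names and parameter values below are illustrative.
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, -2.0])                       # the common solution, F = {a}
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P_C   = lambda x: np.clip(x, -5.0, 5.0)         # projection onto the box C
gradf = lambda x: x - a                         # rho = 1
Phi   = lambda x: x - a                         # eta = 1
S     = lambda x: a + 0.3 * R @ (x - a)         # nonexpansive, maps C into C, F(S) = {a}
lam, mu, alpha = 0.5, 0.8, 0.5                  # satisfy (i)-(iii) with k = 0

def project_onto_C_next(x0, halfspaces):
    """P_{C_{n+1}} x0: each cut ||w_n - w|| <= ||x_n - w|| is the linear
    constraint 2(x_n - w_n).w <= ||x_n||^2 - ||w_n||^2, so this is a small QP."""
    cons = [{'type': 'ineq', 'fun': lambda w, c=c, b=b: b - c @ w}
            for (c, b) in halfspaces]
    res = minimize(lambda w: np.sum((w - x0) ** 2), x0, method='SLSQP',
                   bounds=[(-5.0, 5.0)] * 2, constraints=cons)
    return res.x

x0 = np.array([4.0, 4.0])
x, halfspaces = x0.copy(), []
for n in range(40):
    y = P_C(x - lam * gradf(x))
    z = P_C(x - lam * gradf(y))
    t = z - mu * Phi(z)                         # J_{M,mu} = identity since M = 0
    w = alpha * t + (1 - alpha) * S(t)
    halfspaces.append((2 * (x - w), x @ x - w @ w))
    x = project_onto_C_next(x0, halfspaces)

print(x, "->", a)                               # x_n approaches P_F x0 = a
```

In this toy instance the iterates approach $a = P_F x_0$, in line with Theorem 3.1; of course, a two-dimensional experiment only illustrates the mechanics of the half-space cuts, it proves nothing.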

Proof It is obvious that $C_n$ is closed for each $n \in \mathbb{N}$. Since $\|w_n - w\| \le \|x_n - w\|$ is equivalent to

$$\|w_n - x_n\|^2 + 2\langle w_n - x_n, x_n - w \rangle \le 0,$$

we have that $C_n$ is convex for each $n \in \mathbb{N}$. Therefore $\{x_n\}$ is well defined. We divide the proof into five steps.

Step 1. We show by induction that $F \subset C_n$ for each $n \in \mathbb{N}$.

It is obvious that $F \subset C_0 = C$. Suppose that $F \subset C_n$ for some $n \in \mathbb{N}$. Let $p \in F$; we have

$$\begin{aligned} \|w_n - p\|^2 &= \|\alpha_n t_n + (1 - \alpha_n) S t_n - p\|^2 \\ &= \|\alpha_n (t_n - p) + (1 - \alpha_n)(S t_n - p)\|^2 \\ &= \alpha_n \|t_n - p\|^2 + (1 - \alpha_n)\|S t_n - p\|^2 - \alpha_n (1 - \alpha_n)\|t_n - S t_n\|^2 \\ &\le \alpha_n \|t_n - p\|^2 + (1 - \alpha_n)\big[\|t_n - p\|^2 + k\|t_n - S t_n\|^2\big] - \alpha_n (1 - \alpha_n)\|t_n - S t_n\|^2 \\ &= \|t_n - p\|^2 + (1 - \alpha_n)(k - \alpha_n)\|t_n - S t_n\|^2 \\ &\le \|t_n - p\|^2. \end{aligned}$$
(3.2)

According to Lemma 2.3, Lemma 2.2 and Lemma 2.6, we get

$$\begin{aligned} \|t_n - p\|^2 &= \big\|J_{M,\mu_n}\big(z_n - \mu_n \Phi(z_n)\big) - J_{M,\mu_n}\big(p - \mu_n \Phi(p)\big)\big\|^2 \\ &\le \big\|z_n - \mu_n \Phi(z_n) - \big(p - \mu_n \Phi(p)\big)\big\|^2 \\ &\le \|z_n - p\|^2 + \mu_n(\mu_n - 2\eta)\|\Phi(z_n) - \Phi(p)\|^2 \\ &\le \|z_n - p\|^2. \end{aligned}$$
(3.3)

Since the gradient $\nabla f$ is a $\rho$-inverse-strongly monotone mapping and $p \in F \subset \Omega$, from Lemma 2.8 we have

$$\big\langle \nabla f(y_n) - \nabla f(p), y_n - p \big\rangle \ge 0 \quad \text{and} \quad \big\langle \nabla f(p), y_n - p \big\rangle \ge 0.$$
(3.4)

From Lemma 2.1(4) and (3.4), we obtain

$$\begin{aligned} \|z_n - p\|^2 &= \big\|P_C\big(x_n - \lambda_n \nabla f(y_n)\big) - p\big\|^2 \\ &\le \big\|x_n - \lambda_n \nabla f(y_n) - p\big\|^2 - \big\|x_n - \lambda_n \nabla f(y_n) - z_n\big\|^2 \\ &= \|x_n - p\|^2 - \|x_n - z_n\|^2 + 2\lambda_n \big\langle \nabla f(y_n), p - z_n \big\rangle \\ &= \|x_n - p\|^2 - \|x_n - z_n\|^2 + 2\lambda_n \big[\big\langle \nabla f(y_n) - \nabla f(p), p - y_n \big\rangle + \big\langle \nabla f(p), p - y_n \big\rangle + \big\langle \nabla f(y_n), y_n - z_n \big\rangle\big] \\ &\le \|x_n - p\|^2 - \|x_n - z_n\|^2 + 2\lambda_n \big\langle \nabla f(y_n), y_n - z_n \big\rangle \\ &= \|x_n - p\|^2 - \|x_n - y_n\|^2 - 2\langle x_n - y_n, y_n - z_n \rangle - \|y_n - z_n\|^2 + 2\lambda_n \big\langle \nabla f(y_n), y_n - z_n \big\rangle \\ &= \|x_n - p\|^2 - \|x_n - y_n\|^2 - \|y_n - z_n\|^2 + 2\big\langle x_n - \lambda_n \nabla f(y_n) - y_n, z_n - y_n \big\rangle. \end{aligned}$$
(3.5)

It is easy to see that the $\rho$-inverse-strongly monotone mapping $\nabla f$ is $\frac{1}{\rho}$-Lipschitz continuous. Further, since $y_n = P_C(x_n - \lambda_n \nabla f(x_n))$, by Lemma 2.1(3) we have

$$\begin{aligned} \big\langle x_n - \lambda_n \nabla f(y_n) - y_n, z_n - y_n \big\rangle &= \big\langle x_n - \lambda_n \nabla f(x_n) - y_n, z_n - y_n \big\rangle + \big\langle \lambda_n \nabla f(x_n) - \lambda_n \nabla f(y_n), z_n - y_n \big\rangle \\ &\le \big\langle \lambda_n \nabla f(x_n) - \lambda_n \nabla f(y_n), z_n - y_n \big\rangle \\ &\le \lambda_n \big\|\nabla f(x_n) - \nabla f(y_n)\big\| \|z_n - y_n\| \\ &\le \frac{\lambda_n}{\rho} \|x_n - y_n\| \|z_n - y_n\|. \end{aligned}$$
(3.6)

Substituting (3.6) into (3.5), we obtain

$$\begin{aligned} \|z_n - p\|^2 &\le \|x_n - p\|^2 - \|x_n - y_n\|^2 - \|y_n - z_n\|^2 + 2\big\langle x_n - \lambda_n \nabla f(y_n) - y_n, z_n - y_n \big\rangle \\ &\le \|x_n - p\|^2 - \|x_n - y_n\|^2 - \|y_n - z_n\|^2 + \frac{2\lambda_n}{\rho} \|x_n - y_n\| \|z_n - y_n\| \\ &\le \|x_n - p\|^2 - \|x_n - y_n\|^2 - \|y_n - z_n\|^2 + \frac{\lambda_n^2}{\rho^2} \|x_n - y_n\|^2 + \|z_n - y_n\|^2 \\ &= \|x_n - p\|^2 + \Big(\frac{\lambda_n^2}{\rho^2} - 1\Big) \|x_n - y_n\|^2 \\ &\le \|x_n - p\|^2. \end{aligned}$$
(3.7)

From (3.2), (3.3) and (3.7), we have

$$\|w_n - p\| \le \|x_n - p\|.$$
(3.8)

Hence $p \in C_{n+1}$, and so $F \subset C_{n+1}$. By induction, $F \subset C_n$ for all $n \in \mathbb{N}$.

Step 2. We prove that $\lim_{n\to\infty} \|x_{n+1} - x_n\| = 0$ and $\lim_{n\to\infty} \|x_n - w_n\| = 0$.

Let $x^* = P_F x_0$. From $x_n = P_{C_n} x_0$ and $x^* \in F \subset C_n$, we obtain

$$\|x_n - x_0\| \le \|x^* - x_0\|.$$
(3.9)

Then $\{x_n\}$ is bounded. This implies that $\{z_n\}$, $\{t_n\}$ and $\{w_n\}$ are also bounded. Since $x_n = P_{C_n} x_0$ and $x_{n+1} \in C_{n+1} \subset C_n$, we have

$$\|x_n - x_0\| \le \|x_{n+1} - x_0\|.$$

Therefore $\lim_{n\to\infty} \|x_n - x_0\|$ exists. From $x_n = P_{C_n} x_0$, $x_{n+1} \in C_{n+1} \subset C_n$ and Lemma 2.1(4), we obtain

$$0 \le \|x_{n+1} - x_n\|^2 \le \|x_0 - x_{n+1}\|^2 - \|x_0 - x_n\|^2,$$

which implies

$$\lim_{n\to\infty} \|x_{n+1} - x_n\| = 0.$$
(3.10)

It follows from $x_{n+1} \in C_{n+1}$ that $\|w_n - x_{n+1}\| \le \|x_n - x_{n+1}\|$, and hence

$$\|x_n - w_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - w_n\| \le 2\|x_n - x_{n+1}\|.$$
(3.11)

From (3.10) and (3.11), we have

$$\lim_{n\to\infty} \|x_n - w_n\| = 0.$$
(3.12)

Step 3. We show that $\lim_{n\to\infty} \|t_n - S t_n\| = 0$, $\lim_{n\to\infty} \|x_n - z_n\| = 0$ and $\lim_{n\to\infty} \|x_n - t_n\| = 0$.

For $p \in F$, from (3.2), (3.3) and (3.7), we have

$$\begin{aligned} \|w_n - p\|^2 &\le \|t_n - p\|^2 + (1 - \alpha_n)(k - \alpha_n)\|t_n - S t_n\|^2 \\ &\le \|z_n - p\|^2 + (1 - \alpha_n)(k - \alpha_n)\|t_n - S t_n\|^2 \\ &\le \|x_n - p\|^2 + \Big(\frac{\lambda_n^2}{\rho^2} - 1\Big)\|x_n - y_n\|^2 + (1 - \alpha_n)(k - \alpha_n)\|t_n - S t_n\|^2. \end{aligned}$$

Then

$$\Big(1 - \frac{\lambda_n^2}{\rho^2}\Big)\|x_n - y_n\|^2 + (1 - \alpha_n)(\alpha_n - k)\|t_n - S t_n\|^2 \le \|x_n - p\|^2 - \|w_n - p\|^2 \le \big(\|x_n - p\| + \|w_n - p\|\big)\|x_n - w_n\|.$$
(3.13)

Since $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < \rho$ and $k < \liminf_{n\to\infty} \alpha_n \le \limsup_{n\to\infty} \alpha_n < 1$, from (3.12) and (3.13) we get

$$\lim_{n\to\infty} \|x_n - y_n\| = 0$$
(3.14)

and

$$\lim_{n\to\infty} \|t_n - S t_n\| = 0.$$
(3.15)

As $\nabla f$ is $\frac{1}{\rho}$-Lipschitz continuous, we have

$$\|z_n - y_n\| = \big\|P_C\big(x_n - \lambda_n \nabla f(y_n)\big) - P_C\big(x_n - \lambda_n \nabla f(x_n)\big)\big\| \le \big\|x_n - \lambda_n \nabla f(y_n) - \big(x_n - \lambda_n \nabla f(x_n)\big)\big\| = \lambda_n \big\|\nabla f(y_n) - \nabla f(x_n)\big\| \le \frac{\lambda_n}{\rho}\|x_n - y_n\|.$$

Hence

$$\lim_{n\to\infty} \|z_n - y_n\| = 0.$$
(3.16)

From (3.14) and (3.16), we obtain

$$\lim_{n\to\infty} \|x_n - z_n\| = 0.$$
(3.17)

We observe

$$\|w_n - t_n\| = \big\|\alpha_n t_n + (1 - \alpha_n) S t_n - t_n\big\| = (1 - \alpha_n)\|S t_n - t_n\| \le \|S t_n - t_n\|.$$
(3.18)

From (3.15), we get

$$\lim_{n\to\infty} \|w_n - t_n\| = 0.$$
(3.19)

Combining (3.12) and (3.19), we have

$$\lim_{n\to\infty} \|x_n - t_n\| = 0.$$
(3.20)

Step 4. Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ which converges weakly to $u$. We show that

$$u \in F = F(S) \cap \Omega \cap \operatorname{VI}(C,\Phi,M).$$

Indeed, we first show that $u \in F(S)$. Since $\|x_n - t_n\| \to 0$ and $x_{n_i} \rightharpoonup u$, we have $t_{n_i} \rightharpoonup u$. From $\|S t_n - t_n\| \to 0$, we obtain $\|S t_{n_i} - t_{n_i}\| \to 0$ as $i \to \infty$. By Lemma 2.5 (demi-closedness principle), we conclude that $u \in F(S)$.

Secondly, we show that $u \in \Omega$. Since $z_n = P_C(x_n - \lambda_n \nabla f(y_n))$, by Lemma 2.1(3) we have, for all $v \in C$,

$$\big\langle x_n - \lambda_n \nabla f(y_n) - z_n, z_n - v \big\rangle \ge 0,$$

that is,

$$\Big\langle v - z_n, \frac{z_n - x_n}{\lambda_n} + \nabla f(y_n) \Big\rangle \ge 0.$$

Let

$$Tv = \begin{cases} \nabla f(v) + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$

Then, from Lemma 2.7, we know that $T$ is maximal monotone and $0 \in Tv$ if and only if $v \in \operatorname{VI}(C, \nabla f)$. Let $G(T)$ be the graph of $T$ and let $(v,h) \in G(T)$. Then we have $h \in Tv = \nabla f(v) + N_C v$, and hence $h - \nabla f(v) \in N_C v$. So we have

$$\langle v - z, h - \nabla f(v) \rangle \ge 0, \quad \forall z \in C.$$

Therefore,

$$\begin{aligned} \langle v - z_{n_i}, h \rangle &\ge \big\langle v - z_{n_i}, \nabla f(v) \big\rangle \\ &\ge \big\langle v - z_{n_i}, \nabla f(v) \big\rangle - \Big\langle v - z_{n_i}, \frac{z_{n_i} - x_{n_i}}{\lambda_{n_i}} + \nabla f(y_{n_i}) \Big\rangle \\ &= \big\langle v - z_{n_i}, \nabla f(v) - \nabla f(z_{n_i}) \big\rangle + \big\langle v - z_{n_i}, \nabla f(z_{n_i}) - \nabla f(y_{n_i}) \big\rangle - \Big\langle v - z_{n_i}, \frac{z_{n_i} - x_{n_i}}{\lambda_{n_i}} \Big\rangle \\ &\ge \big\langle v - z_{n_i}, \nabla f(z_{n_i}) - \nabla f(y_{n_i}) \big\rangle - \Big\langle v - z_{n_i}, \frac{z_{n_i} - x_{n_i}}{\lambda_{n_i}} \Big\rangle. \end{aligned}$$
(3.21)

Since $\|x_n - y_n\| \to 0$ and $\|x_n - z_n\| \to 0$, we have $y_{n_i} \rightharpoonup u$ and $z_{n_i} \rightharpoonup u$. Then, from (3.21), we obtain $\langle v - u, h - 0 \rangle = \langle v - u, h \rangle \ge 0$ as $i \to \infty$. Since $T$ is maximal monotone, we have $0 \in Tu$ and hence $u \in \operatorname{VI}(C, \nabla f)$. According to Lemma 2.8, we obtain $u \in \Omega$.

Finally, let us show that $u \in \operatorname{VI}(C,\Phi,M)$. Since $\Phi : C \to H$ is $\eta$-inverse-strongly monotone and $M$ is maximal monotone, by Lemma 2.4 we know that $M + \Phi$ is maximal monotone. Take a fixed $(y,g) \in G(M + \Phi)$ arbitrarily. Then we have $g \in \Phi(y) + My$, that is,

$$g - \Phi(y) \in My.$$

Since $t_{n_i} = J_{M,\mu_{n_i}}\big(z_{n_i} - \mu_{n_i}\Phi(z_{n_i})\big)$, we have

$$\frac{1}{\mu_{n_i}}\big(z_{n_i} - \mu_{n_i}\Phi(z_{n_i}) - t_{n_i}\big) \in M t_{n_i}.$$

Therefore, by the monotonicity of $M$,

$$\Big\langle y - t_{n_i},\ g - \Phi(y) - \frac{1}{\mu_{n_i}}\big(z_{n_i} - \mu_{n_i}\Phi(z_{n_i}) - t_{n_i}\big) \Big\rangle \ge 0,$$

which hence yields

$$\begin{aligned} \langle y - t_{n_i}, g \rangle &\ge \Big\langle y - t_{n_i},\ \Phi(y) + \frac{1}{\mu_{n_i}}\big(z_{n_i} - \mu_{n_i}\Phi(z_{n_i}) - t_{n_i}\big) \Big\rangle \\ &= \big\langle y - t_{n_i}, \Phi(y) - \Phi(z_{n_i}) \big\rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle \\ &= \big\langle y - t_{n_i}, \Phi(y) - \Phi(t_{n_i}) \big\rangle + \big\langle y - t_{n_i}, \Phi(t_{n_i}) - \Phi(z_{n_i}) \big\rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle \\ &\ge \eta\big\|\Phi(y) - \Phi(t_{n_i})\big\|^2 + \big\langle y - t_{n_i}, \Phi(t_{n_i}) - \Phi(z_{n_i}) \big\rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle \\ &\ge \big\langle y - t_{n_i}, \Phi(t_{n_i}) - \Phi(z_{n_i}) \big\rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle. \end{aligned}$$
(3.22)

Observe that

$$\begin{aligned} \Big|\big\langle y - t_{n_i}, \Phi(t_{n_i}) - \Phi(z_{n_i}) \big\rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle\Big| &\le \|y - t_{n_i}\| \big\|\Phi(t_{n_i}) - \Phi(z_{n_i})\big\| + \frac{1}{\mu_{n_i}}\|y - t_{n_i}\| \|z_{n_i} - t_{n_i}\| \\ &\le \frac{1}{\eta}\|y - t_{n_i}\| \|t_{n_i} - z_{n_i}\| + \frac{1}{\epsilon}\|y - t_{n_i}\| \|z_{n_i} - t_{n_i}\| \\ &= \Big(\frac{1}{\eta} + \frac{1}{\epsilon}\Big)\|y - t_{n_i}\| \|z_{n_i} - t_{n_i}\|. \end{aligned}$$

By $\|x_n - z_n\| \to 0$ and $\|x_n - t_n\| \to 0$, we have $\|z_n - t_n\| \to 0$. Then

$$\lim_{i\to\infty} \Big|\big\langle y - t_{n_i}, \Phi(t_{n_i}) - \Phi(z_{n_i}) \big\rangle + \Big\langle y - t_{n_i}, \frac{z_{n_i} - t_{n_i}}{\mu_{n_i}} \Big\rangle\Big| = 0.$$

Letting $i \to \infty$, from (3.22) we get

$$\langle y - u, g - 0 \rangle = \langle y - u, g \rangle \ge 0.$$

Since $(y,g) \in G(M + \Phi)$ was arbitrary and $M + \Phi$ is maximal monotone, this implies that $0 \in \Phi(u) + Mu$, and hence $u \in \operatorname{VI}(C,\Phi,M)$. Therefore,

$$u \in F = F(S) \cap \Omega \cap \operatorname{VI}(C,\Phi,M).$$

Step 5. We show that $x_n \to x^*$, where $x^* = P_F x_0$.

Indeed, from $x^* = P_F x_0$, $u \in F = F(S) \cap \Omega \cap \operatorname{VI}(C,\Phi,M)$, the weak lower semicontinuity of the norm and (3.9), we have

$$\|x^* - x_0\| \le \|u - x_0\| \le \liminf_{i\to\infty} \|x_{n_i} - x_0\| \le \limsup_{i\to\infty} \|x_{n_i} - x_0\| \le \|x^* - x_0\|.$$

Then

$$\lim_{i\to\infty} \|x_{n_i} - x_0\| = \|u - x_0\|.$$

From $x_{n_i} - x_0 \rightharpoonup u - x_0$ and the Kadec-Klee property of $H$, we have $x_{n_i} - x_0 \to u - x_0$, and hence $x_{n_i} \to u$. Since $x_{n_i} = P_{C_{n_i}} x_0$ and $x^* \in F \subset C_{n_i}$, by Lemma 2.1(3) we have

$$-\|x^* - x_{n_i}\|^2 = \big\langle x^* - x_{n_i}, x_{n_i} - x_0 \big\rangle + \big\langle x^* - x_{n_i}, x_0 - x^* \big\rangle \ge \big\langle x^* - x_{n_i}, x_0 - x^* \big\rangle.$$

Letting $i \to \infty$, by $u \in F$, $x^* = P_F x_0$ and Lemma 2.1(3), we have

$$-\|x^* - u\|^2 \ge \langle x^* - u, x_0 - x^* \rangle \ge 0.$$

Hence $u = x^*$, which implies that $x_n \to x^*$. This completes the proof. □

4 Applications

From Theorem 3.1, we can obtain the following theorems.

Theorem 4.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. For the minimization problem (1.4), assume that $f$ is (Fréchet) differentiable and that the gradient $\nabla f$ is a $\rho$-inverse-strongly monotone mapping. Let $S : C \to C$ be a $k$-strictly pseudo-contractive mapping such that $F = F(S) \cap \Omega \ne \emptyset$. Pick any $x_0 \in C$ and set $C_0 = C$. Let $\{x_n\} \subset C$ be a sequence generated by

$$\begin{cases} y_n = P_C\big(x_n - \lambda_n \nabla f(x_n)\big), \\ z_n = P_C\big(x_n - \lambda_n \nabla f(y_n)\big), \\ w_n = \alpha_n z_n + (1 - \alpha_n) S z_n, \\ C_{n+1} = \{w \in C_n : \|w_n - w\| \le \|x_n - w\|\}, \\ x_{n+1} = P_{C_{n+1}} x_0, \quad n \ge 0, \end{cases}$$
(4.1)

where the following conditions hold:

  (i) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < \rho$;

  (ii) $k < \liminf_{n\to\infty} \alpha_n \le \limsup_{n\to\infty} \alpha_n < 1$.

Then the sequence $\{x_n\}$ converges strongly to $P_F x_0$.

Proof Let $\Phi = M = 0$ in Theorem 3.1; then $\operatorname{VI}(C,0,0) = C$ and $F = F(S) \cap \Omega \cap \operatorname{VI}(C,0,0) = F(S) \cap \Omega$. Let $\eta$ be any positive number in $(0,\infty)$ and take any sequence $\{\mu_n\} \subset [\epsilon, 2\eta]$ for some $\epsilon \in (0, 2\eta]$. In addition, we have

$$t_n = J_{M,\mu_n}\big(z_n - \mu_n \Phi(z_n)\big) = (I + \mu_n M)^{-1} z_n = z_n.$$

Therefore, by Theorem 3.1 we obtain the expected result. □

Theorem 4.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $S : C \to C$ be a nonexpansive mapping such that $F(S) \ne \emptyset$. Pick any $x_0 \in C$ and set $C_0 = C$. Let $\{x_n\} \subset C$ be a sequence generated by

$$\begin{cases} w_n = \alpha_n x_n + (1 - \alpha_n) S x_n, \\ C_{n+1} = \{w \in C_n : \|w_n - w\| \le \|x_n - w\|\}, \\ x_{n+1} = P_{C_{n+1}} x_0, \quad n \ge 0, \end{cases}$$
(4.2)

where the following condition holds: $0 < \liminf_{n\to\infty} \alpha_n \le \limsup_{n\to\infty} \alpha_n < 1$. Then the sequence $\{x_n\}$ converges strongly to $P_{F(S)} x_0$.
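
In one dimension the sets $C_{n+1}$ in (4.2) are nested intervals, so the projection $P_{C_{n+1}} x_0$ reduces to clipping, and the scheme can be sketched in a few lines. The choices below ($C = \mathbb{R}$, $S(x) = -x/2$, $\alpha_n \equiv 1/2$) are illustrative only.

```python
# One-dimensional sketch of scheme (4.2): S(x) = -x/2 is nonexpansive with
# F(S) = {0}, and each C_{n+1} is a half-line, so C_n stays an interval.
x0, x = 4.0, 4.0
lo, hi = float("-inf"), float("inf")
alpha = 0.5                                   # 0 < alpha < 1
for n in range(60):
    w = alpha * x + (1 - alpha) * (-x / 2)    # w_n
    mid = (w + x) / 2                         # boundary of {v : |w - v| <= |x - v|}
    if w < x:
        hi = min(hi, mid)
    elif w > x:
        lo = max(lo, mid)
    x = min(max(x0, lo), hi)                  # x_{n+1} = P_{C_{n+1}} x0
print(x)                                      # -> 0 = P_{F(S)} x0
```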

Proof Let $f = \Phi = M = 0$ and $k = 0$ in Theorem 3.1. Let $\rho$, $\eta$ be any positive numbers in $(0,\infty)$. Take any sequence $\{\lambda_n\}$ which satisfies $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < \rho$ and take any sequence $\{\mu_n\} \subset [\epsilon, 2\eta]$ for some $\epsilon \in (0, 2\eta]$. In this case, we have

$$\begin{cases} y_n = P_C\big(x_n - \lambda_n \nabla f(x_n)\big) = x_n, \\ z_n = P_C\big(x_n - \lambda_n \nabla f(y_n)\big) = x_n, \\ t_n = J_{M,\mu_n}\big(z_n - \mu_n \Phi(z_n)\big) = z_n, \\ w_n = \alpha_n t_n + (1 - \alpha_n) S t_n = \alpha_n x_n + (1 - \alpha_n) S x_n. \end{cases}$$
(4.3)

Therefore, by Theorem 3.1 we obtain the expected result. □

Theorem 4.3 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. For the minimization problem (1.4), assume that $f$ is (Fréchet) differentiable and that the gradient $\nabla f$ is a $\rho$-inverse-strongly monotone mapping. Let $\Gamma : C \to C$ be a $\gamma$-strictly pseudo-contractive mapping and let $S : C \to C$ be a $k$-strictly pseudo-contractive mapping such that $F = F(S) \cap \Omega \cap F(\Gamma) \ne \emptyset$. Pick any $x_0 \in C$ and set $C_0 = C$. Let $\{x_n\} \subset C$ be a sequence generated by

$$\begin{cases} y_n = P_C\big(x_n - \lambda_n \nabla f(x_n)\big), \\ z_n = P_C\big(x_n - \lambda_n \nabla f(y_n)\big), \\ t_n = (1 - \mu_n) z_n + \mu_n \Gamma(z_n), \\ w_n = \alpha_n t_n + (1 - \alpha_n) S t_n, \\ C_{n+1} = \{w \in C_n : \|w_n - w\| \le \|x_n - w\|\}, \\ x_{n+1} = P_{C_{n+1}} x_0, \quad n \ge 0, \end{cases}$$
(4.4)

where the following conditions hold:

  (i) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < \rho$;

  (ii) $\epsilon \le \mu_n \le 1 - \gamma$ for some $\epsilon \in (0, 1 - \gamma]$;

  (iii) $k < \liminf_{n\to\infty} \alpha_n \le \limsup_{n\to\infty} \alpha_n < 1$.

Then the sequence $\{x_n\}$ converges strongly to $P_F x_0$.

Proof Let $\Phi = I - \Gamma$ and $M = 0$ in Theorem 3.1; then $\Phi$ is $\eta$-inverse-strongly monotone with $\eta = \frac{1 - \gamma}{2}$. Now we show that $\operatorname{VI}(C,\Phi,M) = F(\Gamma)$. In fact, since $\Phi = I - \Gamma$ and $M = 0$, we obtain

$$u \in \operatorname{VI}(C,\Phi,M) \iff 0 \in \Phi(u) + Mu \iff 0 = \Phi(u) \iff 0 = u - \Gamma(u) \iff u \in F(\Gamma).$$

Thus,

$$F = F(S) \cap \Omega \cap \operatorname{VI}(C,\Phi,M) = F(S) \cap \Omega \cap F(\Gamma).$$

Note that $\mu_n \in [\epsilon, 1 - \gamma] \subset [0,1]$, hence $(1 - \mu_n) z_n + \mu_n \Gamma(z_n) \in C$. In this case, we have

$$t_n = J_{M,\mu_n}\big(z_n - \mu_n \Phi(z_n)\big) = (I + \mu_n M)^{-1}\big(z_n - \mu_n \Phi(z_n)\big) = z_n - \mu_n \Phi(z_n) = z_n - \mu_n (I - \Gamma)(z_n) = (1 - \mu_n) z_n + \mu_n \Gamma(z_n).$$

Therefore, by Theorem 3.1 we obtain the expected result. □