1 Introduction

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ with the inner product $\langle\cdot,\cdot\rangle$ and the norm $\|\cdot\|$, and let $f$ be a bifunction from $C\times C$ into $\mathbb{R}$ such that $f(x,x)=0$ for all $x\in C$. We consider the equilibrium problem in the sense of Blum and Oettli [1]: Find $x^*\in C$ such that

\[ f(x^*,y)\ge 0 \quad\text{for all } y\in C. \tag{EP(f)} \]

We denote by Sol(EP(f)) the set of solutions of the equilibrium problem EP(f).

It is well known that the problem EP($f$) covers many important problems in optimization and nonlinear analysis, and it has found many applications in economics, transportation and engineering (see [1, 2] and the references quoted therein). Theory and methods for solving this problem have been developed by many authors [3–7]. On the other hand, the problem of finding a common fixed point of a finite family of self-mappings $\{S_i\}_{i=1}^N$ ($N\ge 1$) is described as follows: Find $x^*\in C$ such that

\[ x^*\in\bigcap_{i=1}^N F(S_i), \]

where $F(S_i)$ is the set of fixed points of the mapping $S_i$ ($i=1,\dots,N$) on $C$. This problem has become a mature subject in nonlinear analysis; its theory and solution methods can be found in many research papers and monographs (see [8–10]).

We are interested in the problem of finding a common element of the set of solutions of the equilibrium problem EP($f$) and the set of solutions of the fixed point problem above, namely: Find $x^*\in C$ such that

\[ x^*\in\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr). \tag{1.1} \]

A special case of problem (1.1) arises when $f(x,y)=\langle F(x),y-x\rangle$; then the problem reduces to finding a common element of the set of solutions of the variational inequality, i.e., find $x^*\in C$ such that

\[ \bigl\langle F(x^*),y-x^*\bigr\rangle\ge 0 \quad\text{for all } y\in C, \]

and the set of solutions of a fixed point problem (see [11–17]).

In this paper, we introduce a new iterative scheme for solving problem (1.1). The method can be regarded as an improvement of the viscosity approximation methods in [15, 18, 19] and of the iterative method in [20], obtained via an improvement of the extragradient methods [3, 4, 21–23].

The paper is organized as follows. Section 2 recalls some concepts on equilibrium problems and fixed point problems that are used in the sequel and presents an iterative algorithm for solving problem (1.1). In Section 3, we prove a strong convergence theorem for this algorithm, which is the main result of the paper. In Section 4, we consider variational inequality problems as an application of the main theorem.

2 Preliminaries

We first recall the following definitions that will be used for the main theorem.

Definition 2.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A bifunction $f:C\times C\to\mathbb{R}$ is said to be

  (a) monotone on $C$ if $f(x,y)+f(y,x)\le 0$ for all $x,y\in C$;

  (b) pseudomonotone on $C$ if $f(x,y)\ge 0$ implies $f(y,x)\le 0$ for all $x,y\in C$;

  (c) Lipschitz-type continuous on $C$ with two constants $c_1>0$ and $c_2>0$ if

\[ f(x,y)+f(y,z)\ge f(x,z)-c_1\|x-y\|^2-c_2\|y-z\|^2,\quad \forall x,y,z\in C. \tag{2.1} \]

We know that every monotone bifunction f is pseudomonotone, but the converse is not true (see [24]).
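For instance, the following one-dimensional bifunction (our illustration, not taken from [24]) is pseudomonotone but not monotone:

\[ C=[0,\infty)\subset\mathbb{R},\qquad f(x,y)=\frac{y-x}{1+x}. \]

Indeed, $f(x,y)\ge 0$ forces $y\ge x$, and then $f(y,x)=\frac{x-y}{1+y}\le 0$; on the other hand,

\[ f(x,y)+f(y,x)=\frac{(y-x)^2}{(1+x)(1+y)}>0\quad\text{whenever } x\ne y, \]

so the monotonicity inequality fails.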

Definition 2.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $S:C\to C$ is said to be a strict pseudocontraction if there exists a constant $0\le L<1$ such that

\[ \|S(x)-S(y)\|^2\le\|x-y\|^2+L\|(I-S)(x)-(I-S)(y)\|^2,\quad \forall x,y\in C, \]

where $I$ is the identity mapping on $H$. If $L=0$, then $S$ is called nonexpansive on $C$.
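A simple one-dimensional example (our illustration) of a strict pseudocontraction that is not nonexpansive is $S(x)=-2x$ on $H=C=\mathbb{R}$:

\[ |S(x)-S(y)|^2=4|x-y|^2,\qquad |(I-S)(x)-(I-S)(y)|^2=9|x-y|^2, \]

so the defining inequality $4|x-y|^2\le|x-y|^2+9L|x-y|^2$ holds precisely when $L\ge\tfrac13$. Hence $S$ is a $\tfrac13$-strict pseudocontraction with $F(S)=\{0\}$, but it is not nonexpansive.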

Now, we define the projection onto $C$, denoted by $\Pr_C(\cdot)$, i.e.,

\[ \Pr_C(x)=\operatorname{argmin}\bigl\{\|y-x\| : y\in C\bigr\},\quad x\in H. \]
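In general, evaluating $\Pr_C(\cdot)$ amounts to solving a convex program, but for simple sets it has a closed form. A minimal Python sketch (our illustration) for the case where $C$ is a closed ball:

    import numpy as np

    def project_ball(x, center, radius):
        """Euclidean projection of x onto C = {y : ||y - center|| <= radius}."""
        d = x - center
        dist = np.linalg.norm(d)
        if dist <= radius:
            return x.copy()                      # x already lies in C
        return center + (radius / dist) * d      # move x radially onto the boundary of C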

We use the symbols ⇀ and → to denote weak convergence and strong convergence, respectively. The following proposition gives some useful properties of strict pseudocontractions.

Proposition 2.3 [25]

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $S:C\to C$ be an $L$-strict pseudocontraction, and, for each $i=1,\dots,N$, let $S_i:C\to C$ be an $L_i$-strict pseudocontraction for some $0\le L_i<1$. Then we have the following.

  (a) $S$ satisfies the Lipschitz condition

\[ \|S(x)-S(y)\|\le\frac{1+L}{1-L}\|x-y\|,\quad \forall x,y\in C; \]

  (b) $I-S$ is demiclosed at zero; that is, if $\{x^k\}$ is a sequence in $C$ such that $x^k\rightharpoonup\bar x$ and $(I-S)(x^k)\to 0$, then $(I-S)(\bar x)=0$;

  (c) the set $F(S)$ is closed and convex;

  (d) if $\lambda_i>0$ ($i=1,\dots,N$) and $\sum_{i=1}^N\lambda_i=1$, then $\sum_{i=1}^N\lambda_i S_i$ is an $\bar L$-strict pseudocontraction, where $\bar L:=\max\{L_i : 1\le i\le N\}$;

  (e) if the $\lambda_i$ are as in (d) and $\{S_i : i=1,\dots,N\}$ has a common fixed point, then

\[ F\Bigl(\sum_{i=1}^N\lambda_i S_i\Bigr)=\bigcap_{i=1}^N F(S_i). \]

Many authors have studied the problem of finding a common fixed point of a finite family of mappings. For instance, Marino and Xu [26] constructed an iterative algorithm for finding a common fixed point of $N$ strict pseudocontractions $S_i$ ($i=1,\dots,N$). They defined the sequence $\{x^k\}$ starting from $x^0\in H$ by

\[ x^{k+1}=\alpha_k x^k+(1-\alpha_k)\sum_{i=1}^N\lambda_{k,i}S_i(x^k), \tag{2.2} \]

where the control sequences $\{\alpha_k\}$ and $\{\lambda_{k,i}\}$ are chosen so as to guarantee the convergence of the iterative sequence $\{x^k\}$, and they proved that $\{x^k\}$ converges weakly to a point $\bar x\in\bigcap_{i=1}^N F(S_i)$.
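For concreteness, one update of scheme (2.2) can be sketched in Python as follows (our illustration; the mappings $S_i$ are supplied as callables and the weights $\lambda_{k,i}$ are assumed to be nonnegative and to sum to one):

    def marino_xu_step(x, mappings, alpha, weights):
        """One update of (2.2): x_{k+1} = alpha*x + (1 - alpha) * sum_i lambda_i * S_i(x).

        mappings : list of callables S_i acting on the current iterate x
        weights  : nonnegative numbers lambda_{k,i} summing to one
        """
        combined = sum(w * S(x) for w, S in zip(weights, mappings))
        return alpha * x + (1.0 - alpha) * combined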

Recently, Chen et al. [20] introduced a new iterative scheme for finding a common element of the set of common fixed points of a sequence of strict pseudocontractions $\{\bar S_i\}$ and the set of solutions of the equilibrium problem EP($f$) in a real Hilbert space $H$. Given a starting point $x^0\in H$, three iterative sequences $\{x^k\}$, $\{y^k\}$ and $\{z^k\}$ are generated by the scheme

\[
\begin{cases}
\text{Compute } y^k=\alpha_k x^k+(1-\alpha_k)\bar S_k(x^k);\\[2pt]
\text{Find } z^k\in C \text{ such that } f(z^k,y)+\dfrac{1}{r_k}\langle y-z^k,\,z^k-y^k\rangle\ge 0,\ \forall y\in C;\\[2pt]
\text{Compute } x^{k+1}=\Pr_{C_k}(x^0), \text{ where } C_k:=\{v\in C : \|z^k-v\|\le\|x^k-v\|\}.
\end{cases}
\tag{2.3}
\]

Here, the two sequences $\{\alpha_k\}$ and $\{r_k\}$ serve as control parameters. The authors proved that, under certain conditions on $\{\alpha_k\}$ and $\{r_k\}$, the sequences $\{x^k\}$, $\{y^k\}$ and $\{z^k\}$ converge strongly to the same point

\[ x^*=\Pr_{\mathrm{Sol}(\mathrm{EP}(f))\cap F(S)}(x^0), \]

where $S$ is the nonexpansive mapping of $C$ into itself defined by

\[ S(x)=\lim_{j\to\infty}\bar S_j(x) \]

for all $x\in C$.

Methods for finding a common element of the sets $\mathrm{Sol}(\mathrm{EP}(f))$ and $\bigcap_{i=1}^N F(S_i)$ in a real Hilbert space have been studied in many research papers (see [7, 17, 21, 22, 27–30]).

We need the following assumptions for the main theorems.

Assumption 2.4 The bifunction $f$ satisfies the following conditions:

  (i) $f$ is pseudomonotone and weakly continuous on $C$;

  (ii) $f$ is Lipschitz-type continuous on $C$;

  (iii) for each $x\in C$, $f(x,\cdot)$ is convex and subdifferentiable on $C$.

Assumption 2.5 Every $S_i$ is an $L_i$-strict pseudocontraction for some $0\le L_i<1$.

Assumption 2.6 The solution set of (1.1) is nonempty, i.e.,

\[ \bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr)\ne\emptyset. \]

Note that if $C\subset\operatorname{ri}(\operatorname{dom}f(x,\cdot))$, where $\operatorname{ri}(\operatorname{dom}f(x,\cdot))$ is the set of relative interior points of the domain of $f(x,\cdot)$, then Assumption 2.4(iii) is satisfied. We now construct the new algorithm as follows.

Algorithm 2.7 Initialization: Choose positive sequences $\{\lambda_k\}$, $\{\alpha_k\}$, $\{\beta_k\}$, $\{\gamma_k\}$ and $\{\lambda_{k,i}\}$ satisfying the following conditions:

\[
\begin{cases}
\alpha_k+\beta_k\le 1,\ \forall k\ge 0,\\
\liminf_{k\to\infty}\beta_k\in(0,1),\\
\liminf_{k\to\infty}\dfrac{\alpha_k}{\alpha_k+\beta_k}\in(\bar L,1), \text{ where } \bar L:=\max\{L_i : 1\le i\le N\},\\
\liminf_{k\to\infty}\bigl(\gamma_k+(1-\gamma_k)(\alpha_k+\beta_k)\bigr)>0,\ \{\gamma_k\}\subset(0,1),\\
\{\lambda_k\}\subset[a,b] \text{ for some } a,b\in\bigl(0,\tfrac1L\bigr), \text{ where } L:=\max\{2c_1,2c_2\},\\
\sum_{i=1}^N\lambda_{k,i}=1 \text{ for all } k\ge 0.
\end{cases}
\]

Take an initial point $x^0\in C$ and set $k:=0$.

Iteration $k$: Perform the three steps below.

  • Step 1. Solve two strongly convex programs:

\[
\begin{cases}
y^k:=\operatorname{argmin}\bigl\{\lambda_k f(x^k,y)+\tfrac12\|y-x^k\|^2 : y\in C\bigr\},\\
t^k:=\operatorname{argmin}\bigl\{\lambda_k f(y^k,y)+\tfrac12\|y-x^k\|^2 : y\in C\bigr\}.
\end{cases}
\]

  • Step 2. Compute the iterates

\[
\begin{cases}
\bar y^k:=(1-\gamma_k)x^k+\gamma_k t^k,\\
z^k:=(1-\alpha_k-\beta_k)\bar y^k+\alpha_k t^k+\beta_k\sum_{i=1}^N\lambda_{k,i}S_i(t^k).
\end{cases}
\]

  • Step 3. Set

\[
\begin{cases}
C_k:=\Bigl\{z\in C : \|z^k-z\|^2\le\|x^k-z\|^2-\beta_k\Bigl(\dfrac{\alpha_k}{\alpha_k+\beta_k}-\bar L\Bigr)\|\bar S_k(t^k)-t^k\|^2\Bigr\}, \text{ where } \bar S_k:=\sum_{i=1}^N\lambda_{k,i}S_i,\\
Q_k:=\bigl\{z\in C : \langle x^k-z,\,x^0-x^k\rangle\ge 0\bigr\}.
\end{cases}
\]

Compute $x^{k+1}:=\Pr_{C_k\cap Q_k}(x^0)$.

Increase $k$ by one and go back to Step 1.
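The structure of one iteration of Algorithm 2.7 can be summarized in the following Python sketch (our illustration). The two strongly convex subproblems of Step 1 and the projection onto $C_k\cap Q_k$ in Step 3 have no closed form in general, so they are delegated to user-supplied routines prox_f and project_Ck_Qk; these names are ours, not part of the algorithm.

    def algorithm_2_7_iteration(x_k, x0, prox_f, S_list, weights,
                                lam, alpha, beta, gamma, project_Ck_Qk):
        """One iteration of Algorithm 2.7 (sketch).

        prox_f(u, x, lam) is assumed to return
            argmin_{y in C} { lam * f(u, y) + 0.5 * ||y - x||^2 },
        and project_Ck_Qk(...) is assumed to return Pr_{C_k ∩ Q_k}(x0);
        both depend on the concrete set C and bifunction f.
        """
        # Step 1: two strongly convex subproblems.
        y_k = prox_f(x_k, x_k, lam)        # y^k
        t_k = prox_f(y_k, x_k, lam)        # t^k

        # Step 2: convex combinations.
        y_bar = (1 - gamma) * x_k + gamma * t_k
        S_bar_tk = sum(w * S(t_k) for w, S in zip(weights, S_list))
        z_k = (1 - alpha - beta) * y_bar + alpha * t_k + beta * S_bar_tk

        # Step 3: project the starting point onto C_k ∩ Q_k.
        x_next = project_Ck_Qk(x0, x_k, z_k, t_k, S_bar_tk, alpha, beta)
        return x_next, y_k, t_k, z_k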

3 Convergence of the algorithms

In this section, we study the convergence of Algorithm 2.7. We need the following useful lemmas for the main theorems.

Lemma 3.1 [2]

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $g:C\to\mathbb{R}$ be subdifferentiable on $C$. Then $x^*$ is a solution of the convex problem

\[ \min\{g(x) : x\in C\} \]

if and only if

\[ 0\in\partial g(x^*)+N_C(x^*), \]

where $\partial g(\cdot)$ denotes the subdifferential of $g$ and $N_C(x^*)$ is the (outward) normal cone of $C$ at $x^*\in C$.

Lemma 3.2 [8]

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $x^0\in H$. Let $\{x^k\}$ be a bounded sequence such that every weak cluster point $\bar x$ of $\{x^k\}$ belongs to $C$ and

\[ \|x^k-x^0\|\le\|x^0-\Pr_C(x^0)\|,\quad \forall k\ge 0. \]

Then $\{x^k\}$ converges strongly to $\Pr_C(x^0)$ as $k\to\infty$.

Now, we are in a position to prove the main theorem.

Theorem 3.3 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Suppose that Assumptions 2.4–2.6 are satisfied. Then the sequences $\{x^k\}$, $\{y^k\}$ and $\{z^k\}$ generated by Algorithm 2.7 converge strongly to the same point $x^*\in\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{EP}(f))$, where

\[ x^*=\Pr_{\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{EP}(f))}(x^0). \tag{3.1} \]

Proof The proof of this theorem is divided into several steps.

Step 1. Suppose that $x^*\in\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{EP}(f))$. Then we have

\[ \|t^k-x^*\|^2\le\|x^k-x^*\|^2-(1-2\lambda_k c_2)\|t^k-y^k\|^2-(1-2\lambda_k c_1)\|x^k-y^k\|^2,\quad \forall k\ge 0. \tag{3.2} \]

Since $f(x,\cdot)$ is convex on $C$ for each $x\in C$, by Lemma 3.1 we see that

\[ t^k=\operatorname{argmin}\bigl\{\tfrac12\|t-x^k\|^2+\lambda_k f(y^k,t) : t\in C\bigr\} \]

if and only if

\[ 0\in\partial_2\bigl(\lambda_k f(y^k,y)+\tfrac12\|y-x^k\|^2\bigr)(t^k)+N_C(t^k), \tag{3.3} \]

where $N_C(x)$ is the (outward) normal cone of $C$ at $x\in C$.

Since $f(y^k,\cdot)$ is subdifferentiable on $C$, by the well-known Moreau–Rockafellar theorem (see [31]), there exists $w\in\partial_2 f(y^k,t^k)$ such that

\[ f(y^k,t)-f(y^k,t^k)\ge\langle w,t-t^k\rangle,\quad \forall t\in C. \]

Substituting $t=x^*$ into this inequality, we obtain

\[ f(y^k,x^*)-f(y^k,t^k)\ge\langle w,x^*-t^k\rangle. \tag{3.4} \]

It also follows from (3.3) that $0=\lambda_k w+t^k-x^k+\bar w$, where $w\in\partial_2 f(y^k,t^k)$ and $\bar w\in N_C(t^k)$. By the definition of the normal cone $N_C$, we have

\[ \langle t^k-x^k,\,t-t^k\rangle\ge\lambda_k\langle w,t^k-t\rangle,\quad \forall t\in C. \tag{3.5} \]

Substituting $t=x^*\in C$ into the last inequality, we obtain

\[ \langle t^k-x^k,\,x^*-t^k\rangle\ge\lambda_k\langle w,t^k-x^*\rangle. \tag{3.6} \]

Combining (3.4) and (3.6), we have

\[ \langle t^k-x^k,\,x^*-t^k\rangle\ge\lambda_k\bigl(f(y^k,t^k)-f(y^k,x^*)\bigr). \tag{3.7} \]

Since $x^*\in\mathrm{Sol}(\mathrm{EP}(f))$, we have $f(x^*,y)\ge 0$ for all $y\in C$, and hence, since $f$ is pseudomonotone on $C$, $f(y^k,x^*)\le 0$. Therefore (3.7) implies that

\[ \langle t^k-x^k,\,x^*-t^k\rangle\ge\lambda_k f(y^k,t^k). \tag{3.8} \]

From the Lipschitz-type condition (2.1) for $f$ with $x=x^k$, $y=y^k$ and $z=t^k$, we have

\[ f(y^k,t^k)\ge f(x^k,t^k)-f(x^k,y^k)-c_1\|y^k-x^k\|^2-c_2\|t^k-y^k\|^2. \tag{3.9} \]

Combining (3.8) and (3.9), we get

\[ \langle t^k-x^k,\,x^*-t^k\rangle\ge\lambda_k\bigl(f(x^k,t^k)-f(x^k,y^k)-c_1\|y^k-x^k\|^2-c_2\|t^k-y^k\|^2\bigr). \tag{3.10} \]

Similarly, since $y^k$ is the unique solution of the strongly convex program

\[ \min\bigl\{\tfrac12\|y-x^k\|^2+\lambda_k f(x^k,y) : y\in C\bigr\}, \]

we have

\[ \lambda_k\bigl(f(x^k,y)-f(x^k,y^k)\bigr)\ge\langle y^k-x^k,\,y^k-y\rangle,\quad \forall y\in C. \]

Substituting $y=t^k\in C$ into the last inequality, we have

\[ \lambda_k\bigl(f(x^k,t^k)-f(x^k,y^k)\bigr)\ge\langle y^k-x^k,\,y^k-t^k\rangle. \tag{3.11} \]

Since

\[ 2\langle t^k-x^k,\,x^*-t^k\rangle=\|x^k-x^*\|^2-\|t^k-x^k\|^2-\|t^k-x^*\|^2, \]

it follows from (3.10) and (3.11) that

\[ \|x^k-x^*\|^2-\|t^k-x^k\|^2-\|t^k-x^*\|^2\ge 2\langle y^k-x^k,\,y^k-t^k\rangle-2\lambda_k c_1\|x^k-y^k\|^2-2\lambda_k c_2\|t^k-y^k\|^2. \]

Hence, we have

\[
\begin{aligned}
\|t^k-x^*\|^2&\le\|x^k-x^*\|^2-\|t^k-x^k\|^2-2\langle y^k-x^k,\,y^k-t^k\rangle+2\lambda_k c_1\|x^k-y^k\|^2+2\lambda_k c_2\|t^k-y^k\|^2\\
&=\|x^k-x^*\|^2-\|(t^k-y^k)+(y^k-x^k)\|^2-2\langle y^k-x^k,\,y^k-t^k\rangle+2\lambda_k c_1\|x^k-y^k\|^2+2\lambda_k c_2\|t^k-y^k\|^2\\
&=\|x^k-x^*\|^2-\|t^k-y^k\|^2-\|x^k-y^k\|^2+2\lambda_k c_1\|x^k-y^k\|^2+2\lambda_k c_2\|t^k-y^k\|^2\\
&=\|x^k-x^*\|^2-(1-2\lambda_k c_1)\|x^k-y^k\|^2-(1-2\lambda_k c_2)\|y^k-t^k\|^2.
\end{aligned}
\]

This implies that inequality (3.2) holds.

Step 2. Next, we show that

\[ \bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr)\subset C_k \]

for all $k\ge 0$.

Using Step 1 and $\bar y^k=(1-\gamma_k)x^k+\gamma_k t^k$, we have

\[
\begin{aligned}
\|\bar y^k-x^*\|^2&=\|(1-\gamma_k)(x^k-x^*)+\gamma_k(t^k-x^*)\|^2\\
&\le(1-\gamma_k)\|x^k-x^*\|^2+\gamma_k\|t^k-x^*\|^2\\
&\le(1-\gamma_k)\|x^k-x^*\|^2+\gamma_k\bigl\{\|x^k-x^*\|^2-(1-2\lambda_k c_1)\|x^k-y^k\|^2-(1-2\lambda_k c_2)\|y^k-t^k\|^2\bigr\}\\
&=\|x^k-x^*\|^2-\gamma_k(1-2\lambda_k c_1)\|x^k-y^k\|^2-\gamma_k(1-2\lambda_k c_2)\|y^k-t^k\|^2,
\end{aligned}
\tag{3.12}
\]

where $x^*\in\mathrm{Sol}(\mathrm{EP}(f))$.

Set

\[ \bar S_k:=\sum_{i=1}^N\lambda_{k,i}S_i. \]

Let $x^*\in\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{EP}(f))$. Using Proposition 2.3(d), (3.12), the relation

\[ \|\lambda x+(1-\lambda)y\|^2=\lambda\|x\|^2+(1-\lambda)\|y\|^2-\lambda(1-\lambda)\|x-y\|^2,\quad \forall x,y\in H,\ \lambda\in[0,1], \]

and $z^k=(1-\alpha_k-\beta_k)\bar y^k+\alpha_k t^k+\beta_k\bar S_k(t^k)$, we have

\[
\begin{aligned}
\|z^k-x^*\|^2&=\Bigl\|(1-\alpha_k-\beta_k)(\bar y^k-x^*)+(\alpha_k+\beta_k)\frac{1}{\alpha_k+\beta_k}\bigl\{\alpha_k(t^k-x^*)+\beta_k(\bar S_k(t^k)-x^*)\bigr\}\Bigr\|^2\\
&\le(1-\alpha_k-\beta_k)\|\bar y^k-x^*\|^2+(\alpha_k+\beta_k)\Bigl\|\frac{\alpha_k}{\alpha_k+\beta_k}(t^k-x^*)+\frac{\beta_k}{\alpha_k+\beta_k}(\bar S_k(t^k)-x^*)\Bigr\|^2\\
&=(1-\alpha_k-\beta_k)\|\bar y^k-x^*\|^2+\alpha_k\|t^k-x^*\|^2+\beta_k\|\bar S_k(t^k)-x^*\|^2-\frac{\alpha_k\beta_k}{\alpha_k+\beta_k}\|\bar S_k(t^k)-t^k\|^2\\
&\le(1-\alpha_k-\beta_k)\|\bar y^k-x^*\|^2+\alpha_k\|t^k-x^*\|^2+\beta_k\bigl(\|t^k-x^*\|^2+\bar L\|(I-\bar S_k)(t^k)-(I-\bar S_k)(x^*)\|^2\bigr)-\frac{\alpha_k\beta_k}{\alpha_k+\beta_k}\|\bar S_k(t^k)-t^k\|^2\\
&=(1-\alpha_k-\beta_k)\|\bar y^k-x^*\|^2+(\alpha_k+\beta_k)\|t^k-x^*\|^2+\Bigl(\beta_k\bar L-\frac{\alpha_k\beta_k}{\alpha_k+\beta_k}\Bigr)\|\bar S_k(t^k)-t^k\|^2\\
&\le(1-\alpha_k-\beta_k)\bigl(\|x^k-x^*\|^2-\gamma_k(1-2\lambda_k c_1)\|x^k-y^k\|^2-\gamma_k(1-2\lambda_k c_2)\|y^k-t^k\|^2\bigr)\\
&\quad+(\alpha_k+\beta_k)\bigl(\|x^k-x^*\|^2-(1-2\lambda_k c_1)\|x^k-y^k\|^2-(1-2\lambda_k c_2)\|y^k-t^k\|^2\bigr)+\Bigl(\beta_k\bar L-\frac{\alpha_k\beta_k}{\alpha_k+\beta_k}\Bigr)\|\bar S_k(t^k)-t^k\|^2\\
&=\|x^k-x^*\|^2-m_k(1-2\lambda_k c_1)\|x^k-y^k\|^2-m_k(1-2\lambda_k c_2)\|y^k-t^k\|^2-\beta_k\Bigl(\frac{\alpha_k}{\alpha_k+\beta_k}-\bar L\Bigr)\|\bar S_k(t^k)-t^k\|^2\\
&\le\|x^k-x^*\|^2-\beta_k\Bigl(\frac{\alpha_k}{\alpha_k+\beta_k}-\bar L\Bigr)\|\bar S_k(t^k)-t^k\|^2,
\end{aligned}
\tag{3.13}
\]

where $m_k=\gamma_k+(1-\gamma_k)(\alpha_k+\beta_k)$. This means that $x^*\in C_k$. Hence

\[ \bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr)\subset C_k,\quad \forall k\ge 0. \]

Step 3. Now we prove that

\[ \bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr)\subset C_k\cap Q_k \]

for all $k\ge 0$.

We show this assertion by mathematical induction. For $k=0$ we have $Q_0=C$. Hence, by Step 2, we obtain

\[ \bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr)\subset C_0\cap Q_0. \]

Assume that for some $k\ge 0$,

\[ \bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr)\subset C_k\cap Q_k. \tag{3.14} \]

From $x^{k+1}=\Pr_{C_k\cap Q_k}(x^0)$ it follows that

\[ \langle x^{k+1}-x,\,x^0-x^{k+1}\rangle\ge 0,\quad \forall x\in C_k\cap Q_k. \]

Using this and (3.14), we have

\[ \langle x^{k+1}-x,\,x^0-x^{k+1}\rangle\ge 0,\quad \forall x\in\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr), \]

and hence

\[ \bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr)\subset Q_{k+1}. \]

Then it follows from Step 2 that

\[ \bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr)\subset C_{k+1}\cap Q_{k+1}. \]

Consequently,

\[ \bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr)\subset C_k\cap Q_k,\quad \forall k\ge 0. \]

Step 4. Next, we claim that

\[ \lim_{k\to\infty}\|x^{k+1}-x^k\|=\lim_{k\to\infty}\|x^k-z^k\|=\lim_{k\to\infty}\|x^k-y^k\|=\lim_{k\to\infty}\|x^k-t^k\|=\lim_{k\to\infty}\|\bar S_k(t^k)-t^k\|=0. \]

It follows from Step 3 and $x^{k+1}=\Pr_{C_k\cap Q_k}(x^0)$ that

\[ \|x^{k+1}-x^0\|\le\bigl\|\Pr_{\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{EP}(f))}(x^0)-x^0\bigr\|,\quad \forall k\ge 0. \tag{3.15} \]

Hence $\{x^k\}$ is bounded, and, by Step 1, the sequences $\{t^k\}$ and $\{z^k\}$ are bounded as well. On the other hand, by the definition of $Q_k$, we have

\[ \langle x^k-x,\,x^0-x^k\rangle\ge 0,\quad \forall x\in Q_k, \]

and hence $x^k=\Pr_{Q_k}(x^0)$. Using this and $x^{k+1}\in C_k\cap Q_k\subset Q_k$, we have

\[ \|x^k-x^0\|\le\|x^{k+1}-x^0\|,\quad \forall k\ge 0. \]

Therefore, the limit

\[ A=\lim_{k\to\infty}\|x^k-x^0\| \tag{3.16} \]

exists.

Using $x^k=\Pr_{Q_k}(x^0)$, $x^{k+1}\in Q_k$ and the property of projections

\[ \|\Pr_{Q_k}(x)-x\|^2\le\|x-y\|^2-\|\Pr_{Q_k}(x)-y\|^2,\quad \forall x\in H,\ y\in Q_k, \]

we have

\[ \|x^{k+1}-x^k\|^2\le\|x^{k+1}-x^0\|^2-\|x^k-x^0\|^2,\quad \forall k\ge 0. \]

Combining this and (3.16), we get

\[ \lim_{k\to\infty}\|x^{k+1}-x^k\|=0. \tag{3.17} \]

It follows from $x^{k+1}=\Pr_{C_k\cap Q_k}(x^0)$ that $x^{k+1}\in C_k$, i.e.,

\[ \|z^k-x^{k+1}\|\le\|x^k-x^{k+1}\|. \]

Hence

\[ \|x^k-z^k\|\le\|x^k-x^{k+1}\|+\|x^{k+1}-z^k\|\le 2\|x^k-x^{k+1}\|,\quad \forall k\ge 0. \]

Then, by (3.17), we have

\[ \lim_{k\to\infty}\|x^k-z^k\|=0. \tag{3.18} \]

As noted above, $\{t^k\}$ and $\{z^k\}$ are bounded, and hence $\{\bar S_k(t^k)-t^k\}$ is bounded as well.

By (3.13), we have

\[
\begin{aligned}
\beta_k\Bigl(\frac{\alpha_k}{\alpha_k+\beta_k}-\bar L\Bigr)\|\bar S_k(t^k)-t^k\|^2
&\le\|x^k-x^*\|^2-\|z^k-x^*\|^2\\
&=\bigl(\|x^k-x^*\|-\|z^k-x^*\|\bigr)\bigl(\|x^k-x^*\|+\|z^k-x^*\|\bigr)\\
&\le\|x^k-z^k\|\bigl(\|x^k-x^*\|+\|z^k-x^*\|\bigr).
\end{aligned}
\]

From this and (3.18), we obtain

\[ \lim_{k\to\infty}\|\bar S_k(t^k)-t^k\|=0. \tag{3.19} \]

Using (3.13), we also have

\[ m_k(1-2\lambda_k c_1)\|x^k-y^k\|^2\le\|x^k-z^k\|\bigl(\|x^k-x^*\|+\|z^k-x^*\|\bigr), \]

and hence

\[ \lim_{k\to\infty}\|x^k-y^k\|=0. \tag{3.20} \]

Similarly, we have

\[ \lim_{k\to\infty}\|t^k-y^k\|=0. \tag{3.21} \]

Combining (3.20), (3.21) and $\|x^k-t^k\|\le\|x^k-y^k\|+\|y^k-t^k\|$, we have

\[ \lim_{k\to\infty}\|x^k-t^k\|=0. \tag{3.22} \]

This completes the proof of Step 4.

In Step 5 and Step 6, we consider the weak cluster points of $\{x^k\}$. It follows from (3.15) that the sequence $\{x^k\}$ is bounded, and hence there exists a subsequence $\{x^{k_j}\}$ converging weakly to some $\bar x$ as $j\to\infty$. By Step 4, the sequences $\{y^{k_j}\}$, $\{t^{k_j}\}$ and $\{z^{k_j}\}$ also converge weakly to $\bar x$.

Step 5. We claim that $\bar x\in\bigcap_{i=1}^N F(S_i)$.

For each $i=1,\dots,N$, we may assume (passing to a further subsequence if necessary) that $\{\lambda_{k_j,i}\}$ converges to some $\bar\lambda_i$ as $j\to\infty$; then $\sum_{i=1}^N\bar\lambda_i=1$. Hence we have

\[ \bar S_{k_j}(x)\to S(x):=\sum_{i=1}^N\bar\lambda_i S_i(x)\quad(\text{as } j\to\infty),\ \forall x\in C. \]

Since $\sum_{i=1}^N\bar\lambda_i=1$, from Step 4 and

\[
\begin{aligned}
\|t^{k_j}-S(t^{k_j})\|&\le\|t^{k_j}-\bar S_{k_j}(t^{k_j})\|+\|\bar S_{k_j}(t^{k_j})-S(t^{k_j})\|\\
&=\|t^{k_j}-\bar S_{k_j}(t^{k_j})\|+\Bigl\|\sum_{i=1}^N\lambda_{k_j,i}S_i(t^{k_j})-\sum_{i=1}^N\bar\lambda_i S_i(t^{k_j})\Bigr\|\\
&=\|t^{k_j}-\bar S_{k_j}(t^{k_j})\|+\Bigl\|\sum_{i=1}^N(\lambda_{k_j,i}-\bar\lambda_i)S_i(t^{k_j})\Bigr\|\\
&\le\|t^{k_j}-\bar S_{k_j}(t^{k_j})\|+\sum_{i=1}^N|\lambda_{k_j,i}-\bar\lambda_i|\,\|S_i(t^{k_j})\|,
\end{aligned}
\]

we obtain that $\lim_{j\to\infty}\|t^{k_j}-S(t^{k_j})\|=0$. By Proposition 2.3(b), we have

\[ \bar x\in F(S)=F\Bigl(\sum_{i=1}^N\bar\lambda_i S_i\Bigr). \]

Then Proposition 2.3(e) implies that $\bar x\in\bigcap_{i=1}^N F(S_i)$.

Step 6. Now we prove that if $x^{k_j}\rightharpoonup\bar x$ as $j\to\infty$, then $\bar x\in\mathrm{Sol}(\mathrm{EP}(f))$.

Since $y^k$ is the unique solution of the strongly convex problem

\[ \min\bigl\{\lambda_k f(x^k,y)+\tfrac12\|y-x^k\|^2 : y\in C\bigr\}, \]

it follows from Lemma 3.1 that

\[ 0\in\partial_2\bigl(\lambda_k f(x^k,y)+\tfrac12\|y-x^k\|^2\bigr)(y^k)+N_C(y^k). \]

Hence

\[ 0=\lambda_k w+y^k-x^k+\bar w, \]

where $w\in\partial_2 f(x^k,y^k)$ and $\bar w\in N_C(y^k)$. The definition of the normal cone $N_C$ implies that

\[ \langle y^k-x^k,\,y-y^k\rangle\ge\lambda_k\langle w,y^k-y\rangle,\quad \forall y\in C. \tag{3.23} \]

On the other hand, since $f(x^k,\cdot)$ is subdifferentiable on $C$, by the Moreau–Rockafellar theorem [32] there exists $w\in\partial_2 f(x^k,y^k)$ such that

\[ f(x^k,y)-f(x^k,y^k)\ge\langle w,y-y^k\rangle,\quad \forall y\in C. \]

Combining this with (3.23), we have

\[ \lambda_k\bigl(f(x^k,y)-f(x^k,y^k)\bigr)\ge\langle y^k-x^k,\,y^k-y\rangle,\quad \forall y\in C, \]

and in particular

\[ \lambda_{k_j}\bigl(f(x^{k_j},y)-f(x^{k_j},y^{k_j})\bigr)\ge\langle y^{k_j}-x^{k_j},\,y^{k_j}-y\rangle,\quad \forall y\in C. \]

Then, using $\{\lambda_k\}\subset[a,b]\subset(0,\tfrac1L)$, Step 4, $x^{k_j}\rightharpoonup\bar x$ as $j\to\infty$ and the weak continuity of $f$, we obtain

\[ f(\bar x,y)\ge 0,\quad \forall y\in C. \]

This means that $\bar x\in\mathrm{Sol}(\mathrm{EP}(f))$.

Step 7. Finally, we claim that the sequences $\{x^k\}$, $\{y^k\}$, $\{z^k\}$ and $\{t^k\}$ converge strongly to the same point $x^*$, where

\[ x^*=\Pr_{\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{EP}(f))}(x^0). \]

From Step 5 and Step 6 it follows that every weak cluster point $\bar x$ of the sequence $\{x^k\}$ satisfies

\[ \bar x\in\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{EP}(f)\bigr). \]

On the other hand, using the definition of $Q_k$, we have $x^k=\Pr_{Q_k}(x^0)$. Combining this with (3.15), we obtain

\[ \|x^0-x^k\|\le\|x^0-x\| \]

for all $x\in\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{EP}(f))$. In particular, for $x=x^*$ we have

\[ \|x^0-x^k\|\le\|x^0-x^*\|=\bigl\|x^0-\Pr_{\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{EP}(f))}(x^0)\bigr\|. \]

By Lemma 3.2, the sequence $\{x^k\}$ converges strongly to $x^*$ as $k\to\infty$, where

\[ x^*=\Pr_{\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{EP}(f))}(x^0). \]

By Step 4, the sequences $\{y^k\}$, $\{z^k\}$ and $\{t^k\}$ also converge strongly to $x^*$ as $k\to\infty$. □

4 Applications

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $F$ be a mapping from $C$ into $H$. In this section, we consider the variational inequality problem:

Find $x^*\in C$ such that

\[ \bigl\langle F(x^*),y-x^*\bigr\rangle\ge 0\quad\text{for all } y\in C. \tag{VI(F)} \]

Let $f:C\times C\to\mathbb{R}$ be defined by $f(x,y):=\langle F(x),y-x\rangle$. Then the problem EP($f$) becomes the problem VI($F$). The set of solutions of VI($F$) is denoted by $\mathrm{Sol}(\mathrm{VI}(F))$.

The mapping $F$ is called

  • strongly monotone on $C$ with modulus $\beta>0$ if

\[ \langle F(x)-F(y),\,x-y\rangle\ge\beta\|x-y\|^2,\quad \forall x,y\in C; \]

  • monotone on $C$ if

\[ \langle F(x)-F(y),\,x-y\rangle\ge 0,\quad \forall x,y\in C; \]

  • pseudomonotone on $C$ if

\[ \langle F(y),\,x-y\rangle\ge 0\ \Longrightarrow\ \langle F(x),\,x-y\rangle\ge 0,\quad \forall x,y\in C; \]

  • Lipschitz continuous on $C$ with constant $L>0$ if

\[ \|F(x)-F(y)\|\le L\|x-y\|,\quad \forall x,y\in C. \]
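In particular, if $F$ is $L$-Lipschitz continuous on $C$, then the bifunction $f(x,y)=\langle F(x),y-x\rangle$ is Lipschitz-type continuous in the sense of (2.1); a short verification (our addition):

\[
\begin{aligned}
f(x,y)+f(y,z)-f(x,z)&=\langle F(x),y-x\rangle+\langle F(y),z-y\rangle-\langle F(x),z-x\rangle\\
&=\langle F(x)-F(y),\,y-z\rangle\\
&\ge -L\|x-y\|\,\|y-z\|\ \ge\ -\frac{L}{2}\|x-y\|^2-\frac{L}{2}\|y-z\|^2,
\end{aligned}
\]

so (2.1) holds with $c_1=c_2=L/2$, and the step-size bound $\frac{1}{\max\{2c_1,2c_2\}}$ of Algorithm 2.7 becomes $\frac1L$ below.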

Since

\[
\begin{aligned}
y^k&=\operatorname{argmin}\bigl\{\lambda_k f(x^k,y)+\tfrac12\|y-x^k\|^2 : y\in C\bigr\}\\
&=\operatorname{argmin}\bigl\{\lambda_k\langle F(x^k),y-x^k\rangle+\tfrac12\|y-x^k\|^2 : y\in C\bigr\}\\
&=\Pr_C\bigl(x^k-\lambda_k F(x^k)\bigr),
\end{aligned}
\]

and similarly $t^k=\Pr_C\bigl(x^k-\lambda_k F(y^k)\bigr)$, Algorithm 2.7 yields the following algorithm for finding a common element of the set of common fixed points of $N$ strict pseudocontractions and the solution set of the variational inequality problem VI($F$).

Algorithm 4.1 Initialization: Choose positive sequences $\{\lambda_k\}$, $\{\alpha_k\}$, $\{\beta_k\}$, $\{\gamma_k\}$ and $\{\lambda_{k,i}\}$ satisfying the conditions

\[
\begin{cases}
\alpha_k+\beta_k\le 1,\ \forall k\ge 0,\\
\liminf_{k\to\infty}\beta_k\in(0,1),\\
\liminf_{k\to\infty}\dfrac{\alpha_k}{\alpha_k+\beta_k}\in(\bar L,1), \text{ where } \bar L:=\max\{L_i : 1\le i\le N\},\\
\liminf_{k\to\infty}\bigl(\gamma_k+(1-\gamma_k)(\alpha_k+\beta_k)\bigr)>0,\ \{\gamma_k\}\subset(0,1),\\
\{\lambda_k\}\subset[a,b] \text{ for some } a,b\in\bigl(0,\tfrac1L\bigr),\\
\sum_{i=1}^N\lambda_{k,i}=1 \text{ for all } k\ge 0.
\end{cases}
\]

Take an initial point $x^0\in C$ and set $k:=0$.

Iteration $k$: Perform the three steps below.

  • Step 1. Compute the two projections

\[
\begin{cases}
y^k:=\Pr_C\bigl(x^k-\lambda_k F(x^k)\bigr),\\
t^k:=\Pr_C\bigl(x^k-\lambda_k F(y^k)\bigr).
\end{cases}
\]

  • Step 2. Compute the iterates

\[
\begin{cases}
\bar y^k:=(1-\gamma_k)x^k+\gamma_k t^k,\\
z^k:=(1-\alpha_k-\beta_k)\bar y^k+\alpha_k t^k+\beta_k\sum_{i=1}^N\lambda_{k,i}S_i(t^k).
\end{cases}
\]

  • Step 3. Set

\[
\begin{cases}
C_k:=\Bigl\{z\in C : \|z^k-z\|^2\le\|x^k-z\|^2-\beta_k\Bigl(\dfrac{\alpha_k}{\alpha_k+\beta_k}-\bar L\Bigr)\|\bar S_k(t^k)-t^k\|^2\Bigr\}, \text{ where } \bar S_k:=\sum_{i=1}^N\lambda_{k,i}S_i,\\
Q_k:=\bigl\{z\in C : \langle x^k-z,\,x^0-x^k\rangle\ge 0\bigr\}.
\end{cases}
\]

Compute $x^{k+1}:=\Pr_{C_k\cap Q_k}(x^0)$.

Increase $k$ by one and go back to Step 1.
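Step 1 of Algorithm 4.1 is the classical extragradient (double-projection) step. A minimal Python sketch (our illustration), assuming for concreteness that $C$ is a closed ball so that $\Pr_C$ has a closed form; Steps 2 and 3 are then carried out exactly as in Algorithm 2.7:

    import numpy as np

    def project_ball(x, center, radius):
        """Projection onto C = {y : ||y - center|| <= radius}."""
        d = x - center
        dist = np.linalg.norm(d)
        return x.copy() if dist <= radius else center + (radius / dist) * d

    def extragradient_step(x_k, F, lam, center, radius):
        """Step 1 of Algorithm 4.1:
        y^k = Pr_C(x^k - lam*F(x^k)),  t^k = Pr_C(x^k - lam*F(y^k))."""
        y_k = project_ball(x_k - lam * F(x_k), center, radius)
        t_k = project_ball(x_k - lam * F(y_k), center, radius)
        return y_k, t_k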

We can now state the following convergence theorem for VI($F$), which follows from Theorem 3.3.

Theorem 4.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F$ be a mapping from $C$ into $H$ such that $F$ is pseudomonotone, weakly continuous and $L$-Lipschitz continuous on $C$. Suppose that, for each $i=1,\dots,N$, $S_i:C\to C$ is an $L_i$-strict pseudocontraction for some $0\le L_i<1$ and that

\[ \bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}\bigl(\mathrm{VI}(F)\bigr)\ne\emptyset. \]

Then the sequences $\{x^k\}$, $\{y^k\}$ and $\{z^k\}$ generated by Algorithm 4.1 converge strongly to the same point $x^*\in\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{VI}(F))$, where

\[ x^*=\Pr_{\bigcap_{i=1}^N F(S_i)\cap\mathrm{Sol}(\mathrm{VI}(F))}(x^0). \]