1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let $C$ be a nonempty, closed and convex subset of $H$ and let $A: C \to H$ be a nonlinear operator. The variational inequality problem for $A$ and $C$, denoted by $VI(C,A)$, is the problem of finding a point $x^* \in C$ satisfying

$$\langle Ax^*, x - x^* \rangle \ge 0, \quad \forall x \in C.$$
(1)

We denote the solution set of this problem by $SVI(C,A)$. When $A$ is monotone, $SVI(C,A)$ is always closed and convex.

The variational inequality problem is a fundamental problem in variational analysis and, in particular, in optimization theory, and several iterative methods are available for solving it; see, e.g., [1–38]. The basic idea consists of extending the projected gradient method for constrained optimization, i.e., for the problem of minimizing $f(x)$ subject to $x \in C$: for $x_0 \in C$, compute the sequence $\{x_n\}$ by

$$x_{n+1} = P_C\big[x_n - \alpha_n \nabla f(x_n)\big], \quad n \ge 0,$$
(2)

where $\alpha_n > 0$ is the stepsize and $P_C$ is the metric projection onto $C$. See [1] for convergence properties of this method in the case where $f: \mathbb{R}^n \to \mathbb{R}$ is convex and differentiable, which are related to the results in this article. An immediate extension of method (2) to $VI(C,A)$ is the iterative procedure

$$x_{n+1} = P_C[x_n - \alpha_n A x_n], \quad n \ge 0.$$
(3)

Convergence results for this method require monotonicity properties of $A$ stronger than plain monotonicity; indeed, for the method given by (3) there is no chance of relaxing the assumption on $A$ to plain monotonicity. The typical counterexample takes $C = \mathbb{R}^2$ and $A$ a rotation by the angle $\pi/2$. Then $A$ is monotone and the unique solution of $VI(C,A)$ is $x^* = 0$. However, it is easy to check that $\|x_n - \alpha_n A x_n\| > \|x_n\|$ for all $x_n \ne 0$ and all $\alpha_n > 0$, so the sequence generated by (3) moves away from the solution regardless of the choice of the stepsize $\alpha_n$, as the sketch below illustrates.
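To see this failure numerically, here is a minimal Python sketch (the stepsize $0.1$ and the starting point are arbitrary assumptions made for illustration):

```python
import numpy as np

# A = rotation by pi/2 on R^2: monotone, since <Ax - Ay, x - y> = 0,
# and the unique solution of VI(C, A) with C = R^2 is x* = 0.
def A(x):
    return np.array([-x[1], x[0]])

x = np.array([1.0, 0.0])
alpha = 0.1  # any fixed positive stepsize exhibits the same behavior
for n in range(5):
    x = x - alpha * A(x)          # P_C is the identity since C = R^2
    print(n, np.linalg.norm(x))   # grows by the factor sqrt(1 + alpha^2) each step
```

Since $Ax \perp x$ here, each step satisfies $\|x - \alpha A x\|^2 = (1 + \alpha^2)\|x\|^2$, so the iterates drift away from $x^* = 0$ no matter how small $\alpha$ is.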

To overcome this weakness of the method defined by (3), Korpelevich [20] proposed a modification of the method, called the extragradient algorithm. It generates iterates using the following formulae:

$$\begin{cases} y_n = P_C[x_n - \lambda A x_n], \\ x_{n+1} = P_C[x_n - \lambda A y_n], \end{cases} \quad n \ge 0,$$
(4)

where $\lambda > 0$ is a fixed parameter. In (4), $A$ is evaluated twice and the projection is computed twice at each iteration, but the benefit is significant: the resulting algorithm is applicable to the whole class of monotone variational inequalities. However, we note that Korpelevich assumed that $A$ is Lipschitz continuous and that an estimate of its Lipschitz constant is available. When $A$ is not Lipschitz continuous, or is Lipschitz but the constant is not known, the fixed parameter $\lambda$ must be replaced by stepsizes computed through an Armijo-type search, as in the following method, presented in [39] (see also [18] for another related approach).
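For concreteness, here is a minimal Python sketch of the extragradient iteration (4); the operator, the projection onto the unit ball, and the stepsize are illustrative assumptions (for a Lipschitz constant $L$, one typically takes $\lambda \in (0, 1/L)$):

```python
import numpy as np

def extragradient(A, proj_C, x0, lam, n_iters=1000):
    """Korpelevich's extragradient method (4): a prediction step with A(x_n)
    followed by a correction step with A(y_n)."""
    x = x0
    for _ in range(n_iters):
        y = proj_C(x - lam * A(x))   # prediction
        x = proj_C(x - lam * A(y))   # correction
    return x

# Illustrative run: the rotation operator (Lipschitz with L = 1) on the unit ball.
A = lambda x: np.array([-x[1], x[0]])
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
print(extragradient(A, proj_ball, np.array([1.0, 0.0]), lam=0.1))  # tends to x* = 0
```

In contrast to (3), the correction step uses $A(y_n)$, and for this rotation example the iterates contract toward the solution $x^* = 0$.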

Let $\delta \in (0,1)$, $\{\beta_n\} \subset [\hat{\beta}, \bar{\beta}]$ and $x_0 \in C$. Given $x_n$, define

$$z_n = x_n - \beta_n A x_n.$$

If $x_n = P_C[z_n]$, then stop. Otherwise, let

$$j(n) = \min\Big\{\, j \ge 0 : \Big\langle A\Big(\frac{1}{2^j} P_C[z_n] + \Big(1 - \frac{1}{2^j}\Big) x_n\Big),\, x_n - P_C[z_n] \Big\rangle \ge \frac{\delta}{\beta_n} \big\| x_n - P_C[z_n] \big\|^2 \Big\}$$
(5)

and

$$\alpha_n = \frac{1}{2^{j(n)}}, \qquad y_n = \alpha_n P_C[z_n] + (1 - \alpha_n) x_n.$$

Define

$$H_n = \big\{ z \in H : \langle A y_n, z - y_n \rangle \le 0 \big\}, \qquad W_n = \big\{ z \in H : \langle x_0 - x_n, z - x_n \rangle \le 0 \big\}, \qquad x_{n+1} = P_{H_n \cap W_n \cap C}\, x_0.$$
(6)

It is proved that if $A$ is maximal monotone, point-to-point (i.e., single-valued) and uniformly continuous on bounded sets, and if $SVI(C,A)$ is nonempty, then $\{x_n\}$ converges strongly to $P_{SVI(C,A)}\, x_0$. A sketch of the backtracking search (5) is given below.
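The search (5) itself is straightforward to implement. Here is a minimal Python sketch under the notation of (5); the callable A, the argument pz standing for $P_C[z_n]$, and the iteration cap max_j are assumptions made for illustration:

```python
import numpy as np

def armijo_j(A, x, pz, delta, beta, max_j=60):
    """Find j(n) of (5): the smallest j >= 0 with
    <A(2^{-j} pz + (1 - 2^{-j}) x), x - pz> >= (delta / beta) * ||x - pz||^2,
    where x = x_n and pz = P_C[z_n]."""
    d = x - pz
    threshold = (delta / beta) * np.dot(d, d)
    for j in range(max_j):
        t = 2.0 ** (-j)
        if np.dot(A(t * pz + (1.0 - t) * x), d) >= threshold:
            return j
    raise RuntimeError("search did not terminate within max_j trials")

# Then alpha_n = 2 ** (-j) and y_n = alpha_n * pz + (1 - alpha_n) * x.
```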

The main difficulty with these methods is computational. First, in order to obtain $\alpha_n$ we have to compute $j(n)$, which may be time-consuming. Moreover, (6) involves two half-spaces $H_n$ and $W_n$. If the sets $C$, $H_n$ and $W_n$ are simple enough, then $P_C$, $P_{H_n}$ and $P_{W_n}$ are easy to execute, but the intersection $H_n \cap W_n \cap C$ may be complicated, so the projection $P_{H_n \cap W_n \cap C}$ may be hard to compute. This can seriously affect the efficiency of the method.

The literature on $VI(C,A)$ is vast, and Korpelevich's method has received much attention from many authors, who have improved it in various ways; see, e.g., [33, 39–44] and the references therein. It is known that Korpelevich's method (4) has only weak convergence in infinite-dimensional Hilbert spaces (see the recent results of Censor et al. [40, 41]). So, to obtain strong convergence, the original method has been modified by several authors. For example, in [4, 43] it was proved that some very interesting Korpelevich-type algorithms converge strongly to a solution of $VI(C,A)$. Very recently, Yao et al. [33] suggested a modified Korpelevich method which converges strongly to the minimum-norm solution of the variational inequality (1) in infinite-dimensional Hilbert spaces.

Motivated by the works mentioned above, in the present paper we propose a variant extragradient-type method for solving monotone variational inequalities. A strong convergence analysis of the method is presented under reasonable assumptions on the problem data in infinite-dimensional Hilbert spaces.

2 Preliminaries

In this section, we present some definitions and results that are needed for the convergence analysis of the proposed method. Let $C$ be a closed convex subset of a real Hilbert space $H$.

A mapping $F: C \to H$ is said to be Lipschitz if there exists a constant $L > 0$ such that

$$\|F(x) - F(y)\| \le L \|x - y\|$$

for all $x, y \in C$. In the case $L \in (0,1)$, $F$ is called $L$-contractive. A mapping $A: C \to H$ is called $\alpha$-inverse-strongly-monotone if there exists a positive real number $\alpha$ such that

$$\langle Au - Av, u - v \rangle \ge \alpha \|Au - Av\|^2, \quad \forall u, v \in C.$$

The following result is well known.

Proposition 1 [45]

Let $C$ be a bounded, closed and convex subset of a real Hilbert space $H$ and let $A$ be an $\alpha$-inverse-strongly-monotone operator from $C$ into $H$. Then $SVI(C,A)$ is nonempty.

For any $u \in H$, there exists a unique $u_0 \in C$ such that

$$\|u - u_0\| = \inf\big\{ \|u - x\| : x \in C \big\}.$$

We denote $u_0$ by $P_C u$, where $P_C$ is called the metric projection of $H$ onto $C$. The following is a useful characterization of projections.

Proposition 2 Given $x \in H$, we have

$$\langle x - P_C x, y - P_C x \rangle \le 0, \quad \forall y \in C,$$

which is equivalent to

$$\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2, \quad \forall x, y \in H.$$

Consequently, we deduce immediately that $P_C$ is nonexpansive, that is,

$$\|P_C x - P_C y\| \le \|x - y\|$$

for all $x, y \in H$.

It is well known that $2P_C - I$ is nonexpansive.
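As a quick numerical illustration of the first characterization in Proposition 2 (a sketch; the closed unit ball is chosen as a concrete $C$ whose projection has a closed form):

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x):
    """Metric projection onto the closed unit ball of R^3."""
    return x / max(1.0, np.linalg.norm(x))

x = 5.0 * rng.normal(size=3)           # a point typically outside C
px = proj_ball(x)
for _ in range(1000):
    y = proj_ball(rng.normal(size=3))  # a sample point of C
    # Proposition 2: <x - P_C x, y - P_C x> <= 0 for every y in C
    assert np.dot(x - px, y - px) <= 1e-12
```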

Lemma 1 [45]

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let the mapping $A: C \to H$ be $\alpha$-inverse-strongly-monotone and let $r > 0$ be a constant. Then we have

$$\big\|(I - rA)x - (I - rA)y\big\|^2 \le \|x - y\|^2 + r(r - 2\alpha)\|Ax - Ay\|^2, \quad \forall x, y \in C.$$

In particular, if $0 \le r \le 2\alpha$, then $I - rA$ is nonexpansive.
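Lemma 1 can likewise be checked numerically. In the sketch below, $A(x) = Qx$ with $Q$ symmetric positive semidefinite is used; such an operator is $(1/L)$-inverse-strongly-monotone, where $L$ is the largest eigenvalue of $Q$ (a standard fact, stated here as an assumption of the example):

```python
import numpy as np

rng = np.random.default_rng(1)

B = rng.normal(size=(3, 3))
Q = B.T @ B                          # symmetric positive semidefinite
L = np.linalg.eigvalsh(Q)[-1]
alpha = 1.0 / L                      # inverse-strong-monotonicity constant
A = lambda x: Q @ x

r = 1.5 * alpha                      # any r in [0, 2 * alpha] works
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = np.linalg.norm((x - r * A(x)) - (y - r * A(y))) ** 2
    rhs = (np.linalg.norm(x - y) ** 2
           + r * (r - 2 * alpha) * np.linalg.norm(A(x) - A(y)) ** 2)
    assert lhs <= rhs + 1e-9         # the inequality of Lemma 1
```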

Lemma 2 [46]

Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0,1]$ with $0 < \liminf_{n\to\infty} \beta_n \le \limsup_{n\to\infty} \beta_n < 1$.

Suppose that

(1) $x_{n+1} = (1 - \beta_n) y_n + \beta_n x_n$ for all $n \ge 0$;

(2) $\limsup_{n\to\infty} \big( \|y_{n+1} - y_n\| - \|x_{n+1} - x_n\| \big) \le 0$.

Then $\lim_{n\to\infty} \|y_n - x_n\| = 0$.

Lemma 3 [47]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers which satisfies

$$a_{n+1} \le (1 - \gamma_n) a_n + \delta_n, \quad n \ge 0,$$

where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

(1) $\sum_{n=1}^{\infty} \gamma_n = \infty$;

(2) $\limsup_{n\to\infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$.

Then $\lim_{n\to\infty} a_n = 0$.
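As a concrete (hypothetical) instance of Lemma 3: with $\gamma_n = \frac{1}{n+2}$ we have $\sum_n \gamma_n = \infty$, and $\delta_n = \gamma_n / (n+2)$ gives $\delta_n / \gamma_n \to 0$, so $a_n \to 0$. A quick numerical check:

```python
a = 1.0
for n in range(200000):
    gamma = 1.0 / (n + 2)        # gamma_n in (0, 1), sum diverges
    delta = gamma / (n + 2)      # delta_n / gamma_n -> 0
    a = (1.0 - gamma) * a + delta
print(a)  # approaches 0 as the number of iterations grows
```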

3 Algorithm and its convergence analysis

In this section, we present the formal statement of our proposed algorithm.

Variant extragradient-type method

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $A: C \to H$ be an $\alpha$-inverse-strongly-monotone mapping and let $F: C \to H$ be a $\rho$-contractive mapping. Consider sequences $\{\alpha_n\} \subset [0,1]$, $\{\lambda_n\} \subset [0,2\alpha]$, $\{\mu_n\} \subset [0,2\alpha]$ and $\{\gamma_n\} \subset [0,1]$.

1. Initialization: $x_0 \in C$.

2. Iterative step: given $x_n$, define

$$\begin{cases} y_n = P_C\big[x_n - \lambda_n A x_n + \alpha_n (F x_n - x_n)\big], \\ x_{n+1} = P_C\big[x_n - \mu_n A y_n + \gamma_n (y_n - x_n)\big], \end{cases} \quad n \ge 0.$$
(7)

Remark 1 Note that algorithm (7) includes Korpelevich's method (4) as a special case (take $\alpha_n \equiv 0$, $\gamma_n \equiv 0$ and $\lambda_n = \mu_n \equiv \lambda$). A minimal implementation sketch follows.
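The sketch below (in Python) is one way to realize (7); the callables and the parameter schedules are user-supplied assumptions and should satisfy conditions (C1)-(C3) of Theorem 1 below:

```python
import numpy as np

def variant_extragradient(A, F, proj_C, x0, alpha, lam, mu, gamma, n_iters=5000):
    """Variant extragradient-type method (7).

    A, F, proj_C : callables for the operator, the contraction, and P_C
    alpha, lam, mu, gamma : parameter sequences, given as functions of n
    """
    x = np.asarray(x0, dtype=float)
    for n in range(n_iters):
        y = proj_C(x - lam(n) * A(x) + alpha(n) * (F(x) - x))   # first step of (7)
        x = proj_C(x - mu(n) * A(y) + gamma(n) * (y - x))       # second step of (7)
    return x
```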

Next, we turn to the convergence analysis of the proposed algorithm (7).

Theorem 1 Suppose that $SVI(C,A) \ne \emptyset$. Assume that the algorithm parameters $\{\alpha_n\}$, $\{\lambda_n\}$, $\{\mu_n\}$ and $\{\gamma_n\}$ satisfy the following conditions:

(C1) $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;

(C2) $\lambda_n \in [a,b] \subset (0,2\alpha)$ and $\lim_{n\to\infty} (\lambda_{n+1} - \lambda_n) = 0$;

(C3) $\gamma_n \in (0,1)$, $\mu_n \le 2\alpha\gamma_n$ and $\lim_{n\to\infty} (\gamma_{n+1} - \gamma_n) = \lim_{n\to\infty} (\mu_{n+1} - \mu_n) = 0$.

Then the sequence $\{x_n\}$ generated by (7) converges strongly to $\tilde{x} \in SVI(C,A)$, which solves the following variational inequality:

$$\langle \tilde{x} - F\tilde{x}, \tilde{x} - x^* \rangle \le 0 \quad \text{for all } x^* \in SVI(C,A).$$
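For instance (a hypothetical parameter choice, not taken from the source), the schedules $\alpha_n = \frac{1}{n+2}$, $\lambda_n \equiv \alpha$, $\gamma_n \equiv \frac{1}{2}$ and $\mu_n \equiv \frac{\alpha}{2}$ satisfy (C1)-(C3): indeed, $\alpha_n \to 0$ with $\sum_n \alpha_n = \infty$, $\lambda_n \in [\alpha, \alpha] \subset (0, 2\alpha)$, $\mu_n = \frac{\alpha}{2} \le 2\alpha\gamma_n = \alpha$, and all consecutive differences vanish.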

We shall prove our main result in several steps, organized in the propositions given below.

Proposition 3 The sequences $\{x_n\}$ and $\{y_n\}$ are bounded. Consequently, the sequences $\{Fx_n\}$, $\{Ax_n\}$ and $\{Ay_n\}$ are all bounded.

Proof From conditions (C1) and (C2), since $\alpha_n \to 0$ and $\lambda_n \in [a,b] \subset (0,2\alpha)$, we have $\alpha_n < 1 - \frac{\lambda_n}{2\alpha}$ for $n$ large enough. Without loss of generality, we may assume that $\alpha_n < 1 - \frac{\lambda_n}{2\alpha}$ for all $n \in \mathbb{N}$. Then $\frac{\lambda_n}{1-\alpha_n} \in (0, 2\alpha)$.

Consider any $x^* \in SVI(C,A)$. By the property of the metric projection, we know that $x^* = P_C[x^* - \delta A x^*]$ for any $\delta > 0$. Hence,

$$x^* = P_C\Big[x^* - \frac{\lambda_n}{1-\alpha_n} A x^*\Big] = P_C\big[x^* - \lambda_n A x^*\big] = P_C\Big[\alpha_n x^* + (1-\alpha_n)\Big(x^* - \frac{\lambda_n}{1-\alpha_n} A x^*\Big)\Big], \quad n \ge 0.$$
(8)

Thus, by (7) and (8), we have

$$\begin{aligned} \|y_n - x^*\| &= \big\| P_C\big[\alpha_n F x_n + (1-\alpha_n) x_n - \lambda_n A x_n\big] - x^* \big\| \\ &= \Big\| P_C\Big[\alpha_n F x_n + (1-\alpha_n)\Big(x_n - \tfrac{\lambda_n}{1-\alpha_n} A x_n\Big)\Big] - P_C\Big[\alpha_n x^* + (1-\alpha_n)\Big(x^* - \tfrac{\lambda_n}{1-\alpha_n} A x^*\Big)\Big] \Big\| \\ &\le \Big\| \alpha_n (F x_n - x^*) + (1-\alpha_n)\Big[\Big(x_n - \tfrac{\lambda_n}{1-\alpha_n} A x_n\Big) - \Big(x^* - \tfrac{\lambda_n}{1-\alpha_n} A x^*\Big)\Big] \Big\|. \end{aligned}$$
(9)

Since $\frac{\lambda_n}{1-\alpha_n} \in (0,2\alpha)$, from Lemma 1 we know that $I - \frac{\lambda_n}{1-\alpha_n} A$ is nonexpansive. From (9), we get

$$\begin{aligned} \|y_n - x^*\| &\le \alpha_n \|F x_n - x^*\| + (1-\alpha_n) \Big\| \Big(I - \tfrac{\lambda_n}{1-\alpha_n} A\Big) x_n - \Big(I - \tfrac{\lambda_n}{1-\alpha_n} A\Big) x^* \Big\| \\ &\le \alpha_n \|F x_n - F x^*\| + \alpha_n \|F x^* - x^*\| + (1-\alpha_n) \|x_n - x^*\| \\ &\le \alpha_n \rho \|x_n - x^*\| + \alpha_n \|F x^* - x^*\| + (1-\alpha_n) \|x_n - x^*\| \\ &= \big[1 - (1-\rho)\alpha_n\big] \|x_n - x^*\| + \alpha_n \|F x^* - x^*\|. \end{aligned}$$

By (C3), we have $\frac{\mu_n}{\gamma_n} \le 2\alpha$, so $I - \frac{\mu_n}{\gamma_n} A$ is also nonexpansive. Therefore,

$$\begin{aligned} \|x_{n+1} - x^*\| &= \big\| P_C\big[x_n - \mu_n A y_n + \gamma_n (y_n - x_n)\big] - x^* \big\| \\ &= \Big\| P_C\Big[(1-\gamma_n) x_n + \gamma_n \Big(y_n - \tfrac{\mu_n}{\gamma_n} A y_n\Big)\Big] - P_C\Big[(1-\gamma_n) x^* + \gamma_n \Big(x^* - \tfrac{\mu_n}{\gamma_n} A x^*\Big)\Big] \Big\| \\ &\le (1-\gamma_n) \|x_n - x^*\| + \gamma_n \Big\| \Big(y_n - \tfrac{\mu_n}{\gamma_n} A y_n\Big) - \Big(x^* - \tfrac{\mu_n}{\gamma_n} A x^*\Big) \Big\| \\ &\le (1-\gamma_n) \|x_n - x^*\| + \gamma_n \|y_n - x^*\| \\ &\le (1-\gamma_n) \|x_n - x^*\| + \gamma_n \alpha_n \|F x^* - x^*\| + \gamma_n \big[1 - (1-\rho)\alpha_n\big] \|x_n - x^*\| \\ &= \big[1 - (1-\rho)\gamma_n\alpha_n\big] \|x_n - x^*\| + \gamma_n\alpha_n \|F x^* - x^*\| \\ &\le \max\Big\{ \|x_n - x^*\|, \frac{\|F x^* - x^*\|}{1-\rho} \Big\}. \end{aligned}$$
(10)

By induction, we get

$$\|x_{n+1} - x^*\| \le \max\Big\{ \|x_0 - x^*\|, \frac{\|F x^* - x^*\|}{1-\rho} \Big\}.$$

Then $\{x_n\}$ is bounded, and so are $\{y_n\}$, $\{Fx_n\}$, $\{Ax_n\}$ and $\{Ay_n\}$. Therefore, the proof is complete. □

Proposition 4 The following two properties hold:

$$\lim_{n\to\infty} \|x_{n+1} - x_n\| = 0, \qquad \lim_{n\to\infty} \|x_n - y_n\| = 0.$$

Proof Let $S = 2P_C - I$. From the property of the metric projection, we know that $S$ is nonexpansive and $P_C = \frac{I+S}{2}$. To shorten the formulas, set $u_n = y_n - \frac{\mu_n}{\gamma_n} A y_n$ and $v_n = (1-\gamma_n) x_n + \gamma_n u_n$. Then we can rewrite $x_{n+1}$ in (7) as

$$x_{n+1} = \frac{I+S}{2}\, v_n = \frac{1-\gamma_n}{2} x_n + \frac{\gamma_n}{2} u_n + \frac{1}{2} S v_n = \frac{1-\gamma_n}{2} x_n + \frac{1+\gamma_n}{2} z_n,$$

where

$$z_n = \frac{\frac{\gamma_n}{2} u_n + \frac{1}{2} S v_n}{\frac{1+\gamma_n}{2}} = \frac{\gamma_n u_n + S v_n}{1+\gamma_n}.$$

It follows that

$$z_{n+1} - z_n = \frac{\gamma_{n+1} u_{n+1} + S v_{n+1}}{1+\gamma_{n+1}} - \frac{\gamma_n u_n + S v_n}{1+\gamma_n}.$$

So,

$$\begin{aligned} \|z_{n+1} - z_n\| \le{}& \frac{\gamma_{n+1}}{1+\gamma_{n+1}} \|u_{n+1} - u_n\| + \Big| \frac{\gamma_{n+1}}{1+\gamma_{n+1}} - \frac{\gamma_n}{1+\gamma_n} \Big| \|u_n\| \\ &+ \frac{1}{1+\gamma_{n+1}} \|S v_{n+1} - S v_n\| + \Big| \frac{1}{1+\gamma_{n+1}} - \frac{1}{1+\gamma_n} \Big| \|S v_n\|. \end{aligned}$$

By the nonexpansivity of $I - \frac{\mu_{n+1}}{\gamma_{n+1}} A$ and of $S$, we have

$$\|u_{n+1} - u_n\| \le \Big\| \Big(I - \tfrac{\mu_{n+1}}{\gamma_{n+1}} A\Big) y_{n+1} - \Big(I - \tfrac{\mu_{n+1}}{\gamma_{n+1}} A\Big) y_n \Big\| + \Big| \frac{\mu_{n+1}}{\gamma_{n+1}} - \frac{\mu_n}{\gamma_n} \Big| \|A y_n\| \le \|y_{n+1} - y_n\| + \Big| \frac{\mu_{n+1}}{\gamma_{n+1}} - \frac{\mu_n}{\gamma_n} \Big| \|A y_n\|$$

and, using the decomposition $v_{n+1} - v_n = (1-\gamma_{n+1})(x_{n+1} - x_n) + \gamma_{n+1}\big[\big(I - \tfrac{\mu_{n+1}}{\gamma_{n+1}} A\big) y_{n+1} - \big(I - \tfrac{\mu_{n+1}}{\gamma_{n+1}} A\big) y_n\big] + (\gamma_{n+1} - \gamma_n)(y_n - x_n) + (\mu_n - \mu_{n+1}) A y_n$,

$$\|S v_{n+1} - S v_n\| \le \|v_{n+1} - v_n\| \le (1-\gamma_{n+1}) \|x_{n+1} - x_n\| + \gamma_{n+1} \|y_{n+1} - y_n\| + |\gamma_{n+1} - \gamma_n| \big( \|x_n\| + \|y_n\| \big) + |\mu_{n+1} - \mu_n| \|A y_n\|.$$

Next, we estimate $\|y_{n+1} - y_n\|$. By (7), we have

$$\begin{aligned} \|y_{n+1} - y_n\| &= \big\| P_C\big[x_{n+1} - \lambda_{n+1} A x_{n+1} + \alpha_{n+1}(F x_{n+1} - x_{n+1})\big] - P_C\big[x_n - \lambda_n A x_n + \alpha_n (F x_n - x_n)\big] \big\| \\ &\le \big\| [x_{n+1} - \lambda_{n+1} A x_{n+1}] - [x_n - \lambda_n A x_n] \big\| + \alpha_{n+1} \|F x_{n+1} - x_{n+1}\| + \alpha_n \|F x_n - x_n\| \\ &= \big\| (I - \lambda_{n+1} A) x_{n+1} - (I - \lambda_{n+1} A) x_n + (\lambda_n - \lambda_{n+1}) A x_n \big\| + \alpha_{n+1} \|F x_{n+1} - x_{n+1}\| + \alpha_n \|F x_n - x_n\| \\ &\le \|x_{n+1} - x_n\| + |\lambda_{n+1} - \lambda_n| \|A x_n\| + \alpha_{n+1} \|F x_{n+1} - x_{n+1}\| + \alpha_n \|F x_n - x_n\|. \end{aligned}$$

Combining the last three estimates, and noting that $\frac{2\gamma_{n+1}}{1+\gamma_{n+1}} \le 1$ and $\frac{1-\gamma_{n+1}}{1+\gamma_{n+1}} + \frac{2\gamma_{n+1}}{1+\gamma_{n+1}} = 1$, we deduce

$$\begin{aligned} \|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \le{}& \Big| \frac{\gamma_{n+1}}{1+\gamma_{n+1}} - \frac{\gamma_n}{1+\gamma_n} \Big| \|u_n\| + \frac{\gamma_{n+1}}{1+\gamma_{n+1}} \Big| \frac{\mu_{n+1}}{\gamma_{n+1}} - \frac{\mu_n}{\gamma_n} \Big| \|A y_n\| \\ &+ \frac{|\gamma_{n+1} - \gamma_n|}{1+\gamma_{n+1}} \big( \|x_n\| + \|y_n\| \big) + \frac{|\mu_{n+1} - \mu_n|}{1+\gamma_{n+1}} \|A y_n\| \\ &+ \Big| \frac{1}{1+\gamma_{n+1}} - \frac{1}{1+\gamma_n} \Big| \|S v_n\| + |\lambda_{n+1} - \lambda_n| \|A x_n\| \\ &+ \alpha_{n+1} \|F x_{n+1} - x_{n+1}\| + \alpha_n \|F x_n - x_n\|. \end{aligned}$$

Since $\lim_{n\to\infty} (\gamma_{n+1} - \gamma_n) = 0$ and $\lim_{n\to\infty} (\mu_{n+1} - \mu_n) = 0$, we derive that

$$\lim_{n\to\infty} \Big| \frac{\gamma_{n+1}}{1+\gamma_{n+1}} - \frac{\gamma_n}{1+\gamma_n} \Big| = 0, \qquad \lim_{n\to\infty} \Big| \frac{\mu_{n+1}}{\gamma_{n+1}} - \frac{\mu_n}{\gamma_n} \Big| = 0, \qquad \lim_{n\to\infty} \Big| \frac{1}{1+\gamma_{n+1}} - \frac{1}{1+\gamma_n} \Big| = 0.$$

At the same time, note that $\{x_n\}$, $\{Fx_n\}$, $\{y_n\}$ and $\{Ay_n\}$ are bounded, and hence so are $\{u_n\}$ and $\{S v_n\}$ (since $\frac{\mu_n}{\gamma_n} \le 2\alpha$). Together with $\alpha_n \to 0$ and $\lambda_{n+1} - \lambda_n \to 0$, this yields

$$\limsup_{n\to\infty} \big( \|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \big) \le 0.$$

By Lemma 2, we obtain

$$\lim_{n\to\infty} \|z_n - x_n\| = 0.$$

Hence,

$$\lim_{n\to\infty} \|x_{n+1} - x_n\| = \lim_{n\to\infty} \frac{1+\gamma_n}{2} \|z_n - x_n\| = 0.$$

From (9), (10), Lemma 1 and the convexity of the norm, we deduce

$$\begin{aligned} \|x_{n+1} - x^*\|^2 &\le (1-\gamma_n) \|x_n - x^*\|^2 + \gamma_n \|y_n - x^*\|^2 \\ &\le (1-\gamma_n) \|x_n - x^*\|^2 + \gamma_n \Big\| \alpha_n (F x_n - x^*) + (1-\alpha_n)\Big[\Big(x_n - \tfrac{\lambda_n}{1-\alpha_n} A x_n\Big) - \Big(x^* - \tfrac{\lambda_n}{1-\alpha_n} A x^*\Big)\Big] \Big\|^2 \\ &\le (1-\gamma_n) \|x_n - x^*\|^2 + \gamma_n \Big[ \alpha_n \|F x_n - x^*\|^2 + (1-\alpha_n) \Big\| \Big(I - \tfrac{\lambda_n}{1-\alpha_n} A\Big) x_n - \Big(I - \tfrac{\lambda_n}{1-\alpha_n} A\Big) x^* \Big\|^2 \Big] \\ &\le (1-\gamma_n) \|x_n - x^*\|^2 + \alpha_n\gamma_n \|F x_n - x^*\|^2 + (1-\alpha_n)\gamma_n \Big[ \|x_n - x^*\|^2 + \frac{\lambda_n}{1-\alpha_n} \Big( \frac{\lambda_n}{1-\alpha_n} - 2\alpha \Big) \|A x_n - A x^*\|^2 \Big] \\ &\le \alpha_n\gamma_n \|F x_n - x^*\|^2 + \|x_n - x^*\|^2 + \gamma_n a \Big( \frac{b}{1-\alpha_n} - 2\alpha \Big) \|A x_n - A x^*\|^2. \end{aligned}$$

Therefore, we have

$$\gamma_n a \Big( 2\alpha - \frac{b}{1-\alpha_n} \Big) \|A x_n - A x^*\|^2 \le \alpha_n\gamma_n \|F x_n - x^*\|^2 + \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 \le \alpha_n\gamma_n \|F x_n - x^*\|^2 + \big( \|x_n - x^*\| + \|x_{n+1} - x^*\| \big) \|x_n - x_{n+1}\|.$$

Since $\lim_{n\to\infty} \alpha_n = 0$, $\lim_{n\to\infty} \|x_n - x_{n+1}\| = 0$ and $\liminf_{n\to\infty} \gamma_n a \big( 2\alpha - \frac{b}{1-\alpha_n} \big) > 0$, we deduce

$$\lim_{n\to\infty} \|A x_n - A x^*\| = 0.$$

By the second inequality of Proposition 2 and the identity $2\langle a, b \rangle = \|a\|^2 + \|b\|^2 - \|a - b\|^2$, we have

$$\begin{aligned} \|y_n - x^*\|^2 &= \big\| P_C\big[\alpha_n F x_n + (1-\alpha_n) x_n - \lambda_n A x_n\big] - P_C\big[x^* - \lambda_n A x^*\big] \big\|^2 \\ &\le \big\langle \alpha_n F x_n + (1-\alpha_n) x_n - \lambda_n A x_n - (x^* - \lambda_n A x^*),\, y_n - x^* \big\rangle \\ &= \frac{1}{2} \Big\{ \big\| x_n - \lambda_n A x_n - (x^* - \lambda_n A x^*) + \alpha_n (F x_n - x_n) \big\|^2 + \|y_n - x^*\|^2 \\ &\qquad - \big\| \alpha_n F x_n + (1-\alpha_n) x_n - \lambda_n A x_n - (x^* - \lambda_n A x^*) - (y_n - x^*) \big\|^2 \Big\} \\ &\le \frac{1}{2} \Big\{ \big\| (x_n - \lambda_n A x_n) - (x^* - \lambda_n A x^*) \big\|^2 + 2\alpha_n \|F x_n - x_n\| \big\| x_n - \lambda_n A x_n - (x^* - \lambda_n A x^*) + \alpha_n (F x_n - x_n) \big\| \\ &\qquad + \|y_n - x^*\|^2 - \big\| (x_n - y_n) - \lambda_n (A x_n - A x^*) + \alpha_n (F x_n - x_n) \big\|^2 \Big\} \\ &\le \frac{1}{2} \Big\{ \big\| (x_n - \lambda_n A x_n) - (x^* - \lambda_n A x^*) \big\|^2 + \alpha_n M + \|y_n - x^*\|^2 - \big\| (x_n - y_n) - \lambda_n (A x_n - A x^*) + \alpha_n (F x_n - x_n) \big\|^2 \Big\} \\ &\le \frac{1}{2} \Big\{ \|x_n - x^*\|^2 + \alpha_n M + \|y_n - x^*\|^2 - \|x_n - y_n\|^2 + 2\lambda_n \langle x_n - y_n, A x_n - A x^* \rangle \\ &\qquad - 2\alpha_n \langle F x_n - x_n, x_n - y_n \rangle - \big\| \lambda_n (A x_n - A x^*) - \alpha_n (F x_n - x_n) \big\|^2 \Big\} \\ &\le \frac{1}{2} \Big\{ \|x_n - x^*\|^2 + \alpha_n M + \|y_n - x^*\|^2 - \|x_n - y_n\|^2 + 2\lambda_n \|x_n - y_n\| \|A x_n - A x^*\| + 2\alpha_n \|F x_n - x_n\| \|x_n - y_n\| \Big\}, \end{aligned}$$

where $M > 0$ is a constant satisfying

$$\sup_n \Big\{ 2 \|F x_n - x_n\| \big\| x_n - \lambda_n A x_n - (x^* - \lambda_n A x^*) + \alpha_n (F x_n - x_n) \big\| \Big\} \le M.$$

It follows that

$$\|y_n - x^*\|^2 \le \|x_n - x^*\|^2 + \alpha_n M - \|x_n - y_n\|^2 + 2\lambda_n \|x_n - y_n\| \|A x_n - A x^*\| + 2\alpha_n \|F x_n - x_n\| \|x_n - y_n\|,$$

and hence

$$\|x_{n+1} - x^*\|^2 \le (1-\gamma_n) \|x_n - x^*\|^2 + \gamma_n \|y_n - x^*\|^2 \le \|x_n - x^*\|^2 + \alpha_n M - \gamma_n \|x_n - y_n\|^2 + 2\lambda_n \|x_n - y_n\| \|A x_n - A x^*\| + 2\alpha_n \|F x_n - x_n\| \|x_n - y_n\|,$$

which implies that

$$\gamma_n \|x_n - y_n\|^2 \le \big( \|x_n - x^*\| + \|x_{n+1} - x^*\| \big) \|x_{n+1} - x_n\| + 2\lambda_n \|x_n - y_n\| \|A x_n - A x^*\| + \alpha_n M + 2\alpha_n \|F x_n - x_n\| \|x_n - y_n\|.$$

Since $\lim_{n\to\infty} \alpha_n = 0$, $\lim_{n\to\infty} \|x_n - x_{n+1}\| = 0$ and $\lim_{n\to\infty} \|A x_n - A x^*\| = 0$, we derive

$$\lim_{n\to\infty} \|x_n - y_n\| = 0,$$

and this concludes the proof. □

Proposition 5 $\limsup_{n\to\infty} \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle \le 0$, where $\tilde{x} = P_{SVI(C,A)} F\tilde{x}$.

Proof In order to show that $\limsup_{n\to\infty} \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle \le 0$, we choose a subsequence $\{y_{n_i}\}$ of $\{y_n\}$ such that

$$\limsup_{n\to\infty} \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle = \lim_{i\to\infty} \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_{n_i} \rangle.$$

As $\{y_{n_i}\}$ is bounded, it has a subsequence which converges weakly to some point $z$; without loss of generality, we may assume that $y_{n_i} \rightharpoonup z$.

Next, we show that $z \in SVI(C,A)$. The argument is similar to the one in [45]; since the algorithms involved are different, we still give the details. Define a set-valued mapping $T$ by

$$Tv = \begin{cases} Av + N_C v, & v \in C, \\ \emptyset, & v \notin C, \end{cases}$$

where $N_C v = \{ w \in H : \langle w, u - v \rangle \le 0, \ \forall u \in C \}$ denotes the normal cone to $C$ at $v \in C$. Then $T$ is maximal monotone.

Let $(v,w) \in G(T)$. Since $w - Av \in N_C v$ and $y_n \in C$, we have $\langle v - y_n, w - Av \rangle \ge 0$. On the other hand, from $y_n = P_C[\alpha_n F x_n + (1-\alpha_n) x_n - \lambda_n A x_n]$, we obtain

$$\big\langle v - y_n,\, y_n - \alpha_n F x_n - (1-\alpha_n) x_n + \lambda_n A x_n \big\rangle \ge 0,$$

that is,

$$\Big\langle v - y_n,\, \frac{y_n - x_n}{\lambda_n} + A x_n - \frac{\alpha_n}{\lambda_n} (F x_n - x_n) \Big\rangle \ge 0.$$

Therefore, we have

$$\begin{aligned} \langle v - y_{n_i}, w \rangle &\ge \langle v - y_{n_i}, Av \rangle \\ &\ge \langle v - y_{n_i}, Av \rangle - \Big\langle v - y_{n_i},\, \frac{y_{n_i} - x_{n_i}}{\lambda_{n_i}} + A x_{n_i} - \frac{\alpha_{n_i}}{\lambda_{n_i}} (F x_{n_i} - x_{n_i}) \Big\rangle \\ &= \Big\langle v - y_{n_i},\, Av - A x_{n_i} - \frac{y_{n_i} - x_{n_i}}{\lambda_{n_i}} + \frac{\alpha_{n_i}}{\lambda_{n_i}} (F x_{n_i} - x_{n_i}) \Big\rangle \\ &= \langle v - y_{n_i}, Av - A y_{n_i} \rangle + \langle v - y_{n_i}, A y_{n_i} - A x_{n_i} \rangle - \Big\langle v - y_{n_i},\, \frac{y_{n_i} - x_{n_i}}{\lambda_{n_i}} - \frac{\alpha_{n_i}}{\lambda_{n_i}} (F x_{n_i} - x_{n_i}) \Big\rangle \\ &\ge \langle v - y_{n_i}, A y_{n_i} - A x_{n_i} \rangle - \Big\langle v - y_{n_i},\, \frac{y_{n_i} - x_{n_i}}{\lambda_{n_i}} - \frac{\alpha_{n_i}}{\lambda_{n_i}} (F x_{n_i} - x_{n_i}) \Big\rangle, \end{aligned}$$

where the last inequality uses the monotonicity of $A$.

Noting that $\alpha_{n_i} \to 0$, $\|y_{n_i} - x_{n_i}\| \to 0$ and that $A$ is Lipschitz continuous (every $\alpha$-inverse-strongly-monotone mapping is $\frac{1}{\alpha}$-Lipschitz), we obtain $\langle v - z, w \rangle \ge 0$. Since $T$ is maximal monotone, we have $z \in T^{-1}(0)$ and hence $z \in SVI(C,A)$. Therefore, by the characterization of $\tilde{x} = P_{SVI(C,A)} F\tilde{x}$ in Proposition 2,

$$\limsup_{n\to\infty} \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle = \lim_{i\to\infty} \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_{n_i} \rangle = \langle \tilde{x} - F\tilde{x}, \tilde{x} - z \rangle \le 0.$$

The proof of this proposition is now complete. □

Finally, by using Propositions 3-5, we prove Theorem 1.

Proof By the property of the metric projection $P_C$, we have

$$\begin{aligned} \|y_n - \tilde{x}\|^2 &= \Big\| P_C\Big[\alpha_n F x_n + (1-\alpha_n)\Big(x_n - \tfrac{\lambda_n}{1-\alpha_n} A x_n\Big)\Big] - P_C\Big[\alpha_n \tilde{x} + (1-\alpha_n)\Big(\tilde{x} - \tfrac{\lambda_n}{1-\alpha_n} A \tilde{x}\Big)\Big] \Big\|^2 \\ &\le \Big\langle \alpha_n (F x_n - \tilde{x}) + (1-\alpha_n)\Big[\Big(x_n - \tfrac{\lambda_n}{1-\alpha_n} A x_n\Big) - \Big(\tilde{x} - \tfrac{\lambda_n}{1-\alpha_n} A \tilde{x}\Big)\Big],\, y_n - \tilde{x} \Big\rangle \\ &\le \alpha_n \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle + \alpha_n \langle F\tilde{x} - F x_n, \tilde{x} - y_n \rangle + (1-\alpha_n) \Big\| \Big(x_n - \tfrac{\lambda_n}{1-\alpha_n} A x_n\Big) - \Big(\tilde{x} - \tfrac{\lambda_n}{1-\alpha_n} A \tilde{x}\Big) \Big\| \|y_n - \tilde{x}\| \\ &\le \alpha_n \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle + \alpha_n \|F\tilde{x} - F x_n\| \|\tilde{x} - y_n\| + (1-\alpha_n) \|x_n - \tilde{x}\| \|y_n - \tilde{x}\| \\ &\le \alpha_n \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle + \big[1 - (1-\rho)\alpha_n\big] \|x_n - \tilde{x}\| \|\tilde{x} - y_n\| \\ &\le \alpha_n \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle + \frac{1 - (1-\rho)\alpha_n}{2} \|x_n - \tilde{x}\|^2 + \frac{1}{2} \|y_n - \tilde{x}\|^2. \end{aligned}$$

Hence,

$$\|y_n - \tilde{x}\|^2 \le \big[1 - (1-\rho)\alpha_n\big] \|x_n - \tilde{x}\|^2 + 2\alpha_n \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle.$$

Therefore,

$$\|x_{n+1} - \tilde{x}\|^2 \le (1-\gamma_n) \|x_n - \tilde{x}\|^2 + \gamma_n \|y_n - \tilde{x}\|^2 \le \big[1 - (1-\rho)\alpha_n\gamma_n\big] \|x_n - \tilde{x}\|^2 + 2\alpha_n\gamma_n \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle.$$

We apply Lemma 3 to the last inequality, with $a_n = \|x_n - \tilde{x}\|^2$, $\gamma_n' = (1-\rho)\alpha_n\gamma_n$ and $\delta_n = 2\alpha_n\gamma_n \langle \tilde{x} - F\tilde{x}, \tilde{x} - y_n \rangle$; its hypotheses hold by (C1), (C3) and Proposition 5. We deduce that $x_n \to \tilde{x}$.

This completes the proof of our main result. □

Remark 2 Our algorithm (7) includes Korpelevich's method (4) as a special case. It is well known that Korpelevich's algorithm (4) has only weak convergence in the setting of infinite-dimensional Hilbert spaces, whereas our algorithm (7) converges strongly in this setting.

If we take $F \equiv 0$, then we obtain the following algorithm:

1. Initialization: $x_0 \in C$.

2. Iterative step: given $x_n$, define

$$\begin{cases} y_n = P_C\big[x_n - \lambda_n A x_n - \alpha_n x_n\big], \\ x_{n+1} = P_C\big[x_n - \mu_n A y_n + \gamma_n (y_n - x_n)\big], \end{cases} \quad n \ge 0.$$
(11)
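For illustration, algorithm (11) amounts to running the variant_extragradient sketch from the beginning of this section with $F$ the zero mapping. The quadratic operator, the ball constraint and the parameter schedules below are assumptions chosen so that conditions (C1)-(C3) hold:

```python
import numpy as np

Q = np.array([[2.0, 0.0],
              [0.0, 1.0]])                 # A(x) = Qx is (1/2)-inverse-strongly monotone
x_tilde = variant_extragradient(
    A=lambda x: Q @ x,
    F=lambda x: np.zeros_like(x),          # F = 0 selects the minimum-norm solution
    proj_C=lambda x: x / max(1.0, np.linalg.norm(x)),  # C = closed unit ball
    x0=np.array([1.0, 0.5]),
    alpha=lambda n: 1.0 / (n + 2),         # (C1)
    lam=lambda n: 0.5,                     # (C2): in (0, 2 * 0.5)
    mu=lambda n: 0.25,                     # (C3): mu_n <= 2 * 0.5 * gamma_n
    gamma=lambda n: 0.5,
)
print(x_tilde)  # approaches 0, the minimum-norm element of SVI(C, A)
```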

Corollary 1 Suppose that $SVI(C,A) \ne \emptyset$. Assume that the algorithm parameters $\{\alpha_n\}$, $\{\lambda_n\}$, $\{\mu_n\}$ and $\{\gamma_n\}$ satisfy the following conditions:

(C1) $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;

(C2) $\lambda_n \in [a,b] \subset (0,2\alpha)$ and $\lim_{n\to\infty} (\lambda_{n+1} - \lambda_n) = 0$;

(C3) $\gamma_n \in (0,1)$, $\mu_n \le 2\alpha\gamma_n$ and $\lim_{n\to\infty} (\gamma_{n+1} - \gamma_n) = \lim_{n\to\infty} (\mu_{n+1} - \mu_n) = 0$.

Then the sequence $\{x_n\}$ generated by (11) converges strongly to the minimum-norm element $\tilde{x}$ of $SVI(C,A)$.

Proof It is clear that algorithm (11) is the special case of algorithm (7) with $F \equiv 0$. So, from Theorem 1, the sequence $\{x_n\}$ defined by (11) converges strongly to $\tilde{x} \in SVI(C,A)$, which solves

$$\langle \tilde{x}, \tilde{x} - x^* \rangle \le 0, \quad \text{for all } x^* \in SVI(C,A).$$
(12)

Applying the characterization of the metric projection, we deduce from (12) that

$$\tilde{x} = P_{SVI(C,A)}(0).$$

This indicates that $\tilde{x}$ is the minimum-norm element of $SVI(C,A)$. This completes the proof. □

Remark 3 Corollary 1 includes the main result in [1] as a special case.