1 Introduction

Let $F$ be a multi-valued mapping from $\mathcal{H}$ into $2^{\mathcal{H}}$ with nonempty values, where $\mathcal{H}$ is a real Hilbert space. Let $X$ be a nonempty, closed, and convex subset of the Hilbert space $\mathcal{H}$. The generalized variational inequality problem, abbreviated as GVIP, is to find a vector $x^* \in X$ such that there exists $\omega^* \in F(x^*)$ satisfying

$$\langle \omega^*, x - x^* \rangle \ge 0, \quad \forall x \in X,$$
(1.1)

where $\langle \cdot, \cdot \rangle$ stands for the inner product of vectors in $\mathcal{H}$. If the multi-valued mapping $F$ is a single-valued mapping from $\mathcal{H}$ to $\mathcal{H}$, then the GVIP collapses to the classical variational inequality problem [1, 2].

Generalized variational inequalities find applications in economics, transportation equilibrium, engineering sciences, etc., and have received much attention in the past decades [3–11]. It is well known that the extra-gradient method [5, 12] is a popular solution method with a contraction property, i.e., the sequence $\{x^k\}_{k=0}^{\infty}$ generated by the method satisfies

$$\|x^{k+1} - x^*\| \le \|x^k - x^*\|, \quad \forall k \ge 0,$$

for any solution $x^*$ of the GVIP. It should be noted that the proximal point algorithm also possesses this property [13]. In this paper, inspired by the work in [14] on finding the zeros of maximal monotone operators in a real Hilbert space, we propose a new type of extra-gradient method for the GVIP which has the following expansion property with respect to the initial point:

$$\|x^k - x^0\| \le \|x^{k+1} - x^0\|, \quad \forall k.$$

Furthermore, we establish the strong convergence of the method in the case that the solution set $X^*$ is nonempty, and we show that the generated sequence $\{x^k\}_{k=0}^{\infty}$ diverges to infinity if the solution set is empty.

The rest of this paper is organized as follows. In Section 2, we recall some concepts and results needed in the subsequent analysis. In Section 3, we present the designed algorithm and establish its convergence.

2 Preliminaries

Let $x \in \mathcal{H}$ and let $K$ be a nonempty, closed, and convex subset of $\mathcal{H}$. A point $y_0 \in K$ is said to be the orthogonal projection of $x$ onto $K$ if it is the closest point to $x$ in $K$, i.e.,

$$y_0 = \operatorname{argmin} \{\|y - x\| \mid y \in K\},$$

and we denote $y_0$ by $P_K(x)$. The well-known properties of the projection operator are as follows.
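For intuition, here is a minimal numerical sketch of the projection operator, assuming $\mathcal{H} = \mathbb{R}^n$ and $K$ a box; both choices, and all names below, are illustrative and not part of the paper's setting.

```python
import numpy as np

def project_box(x, lo, hi):
    """Orthogonal projection of x onto the box K = {y : lo <= y <= hi}.

    For a box, argmin{ ||y - x|| : y in K } decomposes coordinate-wise,
    so the projection is computed by clipping each component.
    """
    return np.clip(x, lo, hi)

# Example: projecting (2, -3) onto K = [0, 1]^2 gives (1, 0).
print(project_box(np.array([2.0, -3.0]), 0.0, 1.0))
```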

Lemma 2.1 [15]

Let $K$ be a nonempty, closed, and convex subset of $\mathcal{H}$. Then for any $x, y \in \mathcal{H}$ and $z \in K$, the following statements hold:

(i) $\langle P_K(x) - x, z - P_K(x) \rangle \ge 0$;

(ii) $\|P_K(x) - P_K(y)\|^2 \le \|x - y\|^2 - \|P_K(x) - x + y - P_K(y)\|^2$.

Remark 2.1 In fact, (i) in Lemma 2.1 also characterizes the projection of a vector $x$ onto $K$, i.e., $u = P_K(x)$ if and only if

$$\langle u - x, z - u \rangle \ge 0, \quad \forall z \in K.$$

Definition 2.1 Let $K$ be a nonempty subset of $\mathcal{H}$. The multi-valued mapping $F: K \to 2^{\mathcal{H}}$ is said to be

(i) monotone if and only if

$$\langle u - v, x - y \rangle \ge 0, \quad \forall x, y \in K, \forall u \in F(x), \forall v \in F(y);$$

(ii) pseudo-monotone if and only if, for any $x, y \in K$, $u \in F(x)$, $v \in F(y)$,

$$\langle u, y - x \rangle \ge 0 \quad \Longrightarrow \quad \langle v, y - x \rangle \ge 0.$$
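For a concrete instance of Definition 2.1(i), the affine single-valued map $F(x) = \{Ax + b\}$ with a symmetric positive semidefinite $A$ is monotone, since $\langle u - v, x - y \rangle = \langle A(x - y), x - y \rangle \ge 0$. The sketch below checks this numerically; the matrix, vector, and sample points are illustrative choices.

```python
import numpy as np

# F(x) = {Ax + b} with A symmetric positive semidefinite
# (here the eigenvalues of A are 1 and 3).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 0.5])
F = lambda x: A @ x + b

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    # Monotonicity: <u - v, x - y> = <A(x - y), x - y> >= 0.
    assert np.dot(F(x) - F(y), x - y) >= 0.0
```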

To proceed, we present the definition of a maximal monotone multi-valued mapping $F$.

Definition 2.2 Let $K$ be a nonempty subset of $\mathcal{H}$. The multi-valued mapping $F: K \to 2^{\mathcal{H}}$ is said to be a maximal monotone operator if $F$ is monotone and the graph

$$G(F) = \{(x, u) \in K \times \mathcal{H} \mid u \in F(x)\}$$

is not properly contained in the graph of any other monotone operator.

It is clear that a monotone multi-valued mapping $F$ is maximal if and only if, for any $(x, u) \in K \times \mathcal{H}$ such that $\langle u - v, x - y \rangle \ge 0$ for all $(y, v) \in G(F)$, it holds that $u \in F(x)$.

Definition 2.3 Let $K$ be a nonempty, closed, and convex subset of the Hilbert space $\mathcal{H}$. A multi-valued mapping $F: K \to 2^{\mathcal{H}}$ is said to be

(i) upper semi-continuous at $x \in K$ if for every open set $V$ containing $F(x)$, there is an open set $U$ containing $x$ such that $F(y) \subset V$ for all $y \in K \cap U$;

(ii) lower semi-continuous at $x \in K$ if for any sequence $x^k$ converging to $x$ and any $y \in F(x)$, there exists a sequence $y^k \in F(x^k)$ that converges to $y$;

(iii) continuous at $x \in K$ if it is both upper semi-continuous and lower semi-continuous at $x$.

Throughout this paper, we assume that the multi-valued mapping $F: X \to 2^{\mathcal{H}}$ is maximal monotone and continuous on $X$ with nonempty compact convex values, where $X \subset \mathcal{H}$ is a nonempty, closed, and convex set.

3 Main results

For any $x \in \mathcal{H}$ and $\xi \in F(x)$, set

$$r(x, \xi) = x - P_X(x - \xi).$$

Then the projection residue $r(x, \xi)$ characterizes the solution set of problem (1.1).

Proposition 3.1 A point $x \in X$ solves problem (1.1) if and only if $r(x, \xi) = 0$, i.e.,

$$x = P_X(x - \xi) \quad \text{for some } \xi \in F(x).$$
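Computationally, Proposition 3.1 provides the stopping test used in the algorithm below. The following sketch assumes a finite-dimensional single-valued $F$ and a user-supplied projector `project_X` onto $X$; these names and the tolerance are assumptions for illustration.

```python
import numpy as np

def residue(x, xi, project_X):
    """Projection residue r(x, xi) = x - P_X(x - xi)."""
    return x - project_X(x - xi)

def is_solution(x, F, project_X, tol=1e-10):
    """Proposition 3.1 (single-valued case): x solves problem (1.1)
    if and only if the residue at xi = F(x) vanishes."""
    return np.linalg.norm(residue(x, F(x), project_X)) <= tol
```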

Now we describe the designed algorithm for problem (1.1), whose basic idea is as follows. At each step, compute the projection residue $r(x^k, \xi^k)$ at the iterate $x^k$. If it is the zero vector for some $\xi^k \in F(x^k)$, then stop with $x^k$ being a solution of problem (1.1); otherwise, find a trial point $y^k$ by a back-tracking search at $x^k$ along the residue $r(x^k, \xi^k)$, and obtain the new iterate by projecting $x^0$ onto the intersection of $X$ with two halfspaces associated with $y^k$ and $x^k$, respectively. Repeat this process until the projection residue is the zero vector. (A numerical sketch follows the statement of the algorithm.)

Algorithm 3.1

Step 0: Choose $\sigma, \gamma \in (0, 1)$, $x^0 \in X$, and set $k = 0$.

Step 1: Given the current iterate $x^k$, if $r(x^k, \xi) = 0$ for some $\xi \in F(x^k)$, stop; else take any $\xi^k \in F(x^k)$ and compute

$$z^k = P_X(x^k - \xi^k).$$

Take

$$y^k = (1 - \eta_k) x^k + \eta_k z^k,$$

where $\eta_k = \gamma^{m_k}$, with $m_k$ being the smallest non-negative integer $m$ for which there exists $\zeta^k \in F(x^k - \gamma^m r(x^k, \xi^k))$ such that

$$\langle \zeta^k, r(x^k, \xi^k) \rangle \ge \sigma \|r(x^k, \xi^k)\|^2.$$
(3.1)

Step 2: Let $x^{k+1} = P_{H_k^1 \cap H_k^2 \cap X}(x^0)$, where

$$H_k^1 = \{x \in \mathcal{H} \mid \langle x - y^k, \zeta^k \rangle \le 0, \forall \zeta^k \in F(y^k)\}, \qquad H_k^2 = \{x \in \mathcal{H} \mid \langle x - x^k, x^0 - x^k \rangle \le 0\}.$$

Set k=k+1 and go to Step 1.
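Below is a minimal runnable sketch of Algorithm 3.1 under simplifying assumptions: $\mathcal{H} = \mathbb{R}^n$, $F$ single-valued, continuous, and monotone, and $X$ a box, so that $P_X$ is a clipping. The projection in Step 2 is posed as a small quadratic program and solved with SciPy's SLSQP routine; all names, parameters, and these concretizations are illustrative, not part of the algorithm's statement.

```python
import numpy as np
from scipy.optimize import minimize

def extragradient_gvip(F, lo, hi, x0, sigma=0.5, gamma=0.5,
                       tol=1e-8, max_iter=100):
    """Sketch of Algorithm 3.1 for single-valued F on the box
    X = {x : lo <= x <= hi} (lo, hi arrays).  Returns the final
    iterate together with the iterate history."""
    project_X = lambda v: np.clip(v, lo, hi)
    x = x0.astype(float)
    history = [x.copy()]
    for _ in range(max_iter):
        xi = F(x)
        r = x - project_X(x - xi)        # residue r(x^k, xi^k)
        if np.linalg.norm(r) <= tol:     # Step 1: stop at a solution
            break
        # Back-tracking search for eta_k = gamma^{m_k} satisfying (3.1);
        # Lemma 3.1 guarantees termination, the cap is only a safeguard.
        for m in range(64):
            zeta = F(x - gamma**m * r)
            if np.dot(zeta, r) >= sigma * np.dot(r, r):
                break
        eta = gamma**m
        y = x - eta * r                  # y^k = (1 - eta_k) x^k + eta_k z^k
        zeta = F(y)                      # single-valued: zeta^k = F(y^k)
        # Step 2: x^{k+1} = P_{H_k^1 ∩ H_k^2 ∩ X}(x^0), i.e. the QP
        #   min ||v - x0||^2  s.t.  <v - y, zeta> <= 0,
        #                           <v - x, x0 - x> <= 0,  lo <= v <= hi.
        # SciPy 'ineq' constraints require fun(v) >= 0, hence the signs.
        cons = [{'type': 'ineq',
                 'fun': lambda v, z=zeta, yk=y: -np.dot(z, v - yk)},
                {'type': 'ineq',
                 'fun': lambda v, xk=x: -np.dot(x0 - xk, v - xk)}]
        res = minimize(lambda v: np.dot(v - x0, v - x0), x,
                       method='SLSQP', bounds=list(zip(lo, hi)),
                       constraints=cons)
        x = res.x
        history.append(x.copy())
    return x, history

# Usage sketch: the linear monotone problem F(x) = Ax + b on X = [0, 1]^2.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
sol, traj = extragradient_gvip(lambda v: A @ v + b,
                               lo=np.zeros(2), hi=np.ones(2),
                               x0=np.array([1.0, 0.0]))
```

Since the feasible set of the Step 2 subproblem is only a box intersected with two halfspaces, any small QP solver would serve equally well; SLSQP is used here just to keep the sketch self-contained.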

The following conclusion addresses the feasibility of the stepsize rule (3.1), i.e., the existence of the point $\zeta^k$.

Lemma 3.1 If $x^k$ is not a solution of problem (1.1), then there exists a smallest non-negative integer $m$ satisfying (3.1).

Proof By the definition of $r(x^k, \xi^k)$ and Lemma 2.1(i), it follows that

$$\langle P_X(x^k - \xi^k) - (x^k - \xi^k), x^k - P_X(x^k - \xi^k) \rangle \ge 0,$$

which, after substituting $P_X(x^k - \xi^k) = x^k - r(x^k, \xi^k)$ and noting that $r(x^k, \xi^k) \ne 0$, implies

$$\langle \xi^k, r(x^k, \xi^k) \rangle \ge \|r(x^k, \xi^k)\|^2 > 0.$$
(3.2)

Since $\gamma \in (0, 1)$, we get

$$\lim_{m \to \infty} \bigl(x^k - \gamma^m r(x^k, \xi^k)\bigr) = x^k.$$

Combining this with the fact that $F$ is continuous, we know that there exists a sequence $\zeta^m \in F(x^k - \gamma^m r(x^k, \xi^k))$ such that

$$\lim_{m \to \infty} \zeta^m = \xi^k;$$

hence, by (3.2), one has

$$\lim_{m \to \infty} \langle \zeta^m, r(x^k, \xi^k) \rangle = \langle \xi^k, r(x^k, \xi^k) \rangle \ge \|r(x^k, \xi^k)\|^2 > 0.$$

Since $\sigma \in (0, 1)$, inequality (3.1) therefore holds for all sufficiently large $m$, and a smallest such $m$ exists. This completes the proof. □

Lemma 3.2 Suppose the solution set $X^*$ is nonempty. Then the halfspace $H_k^1$ in Algorithm 3.1 separates the point $x^k$ from the set $X^*$. Moreover,

$$X^* \subset H_k^1 \cap X, \quad \forall k \ge 0.$$

Proof By the definition of $r(x^k, \xi^k)$ and Algorithm 3.1, we have

$$y^k = (1 - \eta_k) x^k + \eta_k z^k = x^k - \eta_k r(x^k, \xi^k),$$

which can be written as

$$\eta_k r(x^k, \xi^k) = x^k - y^k.$$

Then, by this and (3.1), one has

$$\langle \zeta^k, x^k - y^k \rangle > 0,$$
(3.3)

where $\zeta^k$ is the vector in $F(y^k)$ chosen in (3.1). So, by the definition of $H_k^1$ and (3.3), it follows that $x^k \notin H_k^1$.

On the other hand, for any $x^* \in X^*$ and $x \in X$, we have

$$\langle \omega^*, x - x^* \rangle \ge 0, \quad \forall \omega^* \in F(x^*).$$

Since F is monotone on X, one has

$$\langle \omega, x - x^* \rangle \ge \langle \omega^*, x - x^* \rangle \ge 0, \quad \forall \omega \in F(x).$$
(3.4)

Let $x = y^k$ in (3.4). Then for any $\zeta^k \in F(y^k)$,

$$\langle \zeta^k, y^k - x^* \rangle \ge 0,$$

which implies $x^* \in H_k^1$. Moreover, since $x^* \in X$, we obtain

$$X^* \subset H_k^1 \cap X, \quad \forall k \ge 0,$$

and the desired result follows. □

Regarding the projection step, we shall prove that the set $H_k^1 \cap H_k^2 \cap X$ is always nonempty, even when the solution set $X^*$ is empty. Therefore, the whole algorithm is well defined in the sense that it generates an infinite sequence $\{x^k\}_{k=0}^{\infty}$.

Lemma 3.3 If the solution set $X^* \ne \emptyset$, then $X^* \subset H_k^1 \cap H_k^2 \cap X$ for all $k \ge 0$.

Proof From the analysis in Lemma 3.2, it is sufficient to prove that $X^* \subset H_k^2$ for all $k \ge 0$. The proof is given by induction. Obviously, if $k = 0$,

$$X^* \subset H_0^2 = \mathcal{H}.$$

Now, suppose that

$$X^* \subset H_k^2$$

holds for $k = l \ge 0$. Then

$$X^* \subset H_l^1 \cap H_l^2 \cap X.$$

For any $x^* \in X^*$, by Lemma 2.1 and the fact that

$$x^{l+1} = P_{H_l^1 \cap H_l^2 \cap X}(x^0),$$

we have

$$\langle x^* - x^{l+1}, x^0 - x^{l+1} \rangle \le 0.$$

Thus $X^* \subset H_{l+1}^2$. This shows that $X^* \subset H_k^2$ for all $k \ge 0$, and the desired result follows. □

Lemma 3.4 Suppose that $X^* = \emptyset$. Then $H_k^1 \cap H_k^2 \cap X \ne \emptyset$ for all $k \ge 0$.

Proof On the contrary, suppose that $k_0$ is the smallest non-negative integer such that

$$H_{k_0}^1 \cap H_{k_0}^2 \cap X = \emptyset.$$

Then $x^k$, $y^k$, $\zeta^k$ are defined for $k = 0, 1, \ldots, k_0$, and there exists a positive number $M$ such that

$$\{x^k \mid 0 \le k \le k_0\} \subset B(x^0, M)$$

and

$$\{x^k - \xi^k \mid 0 \le k \le k_0, \xi^k \in F(x^k)\} \subset B(x^0, M),$$

where

$$B(x^0, M) = \{x \in \mathcal{H} \mid \|x - x^0\| \le M\}.$$

Set

$$h(x) = \begin{cases} 0, & \text{if } x \in \operatorname{int}(X) \cap B(x^0, 2M), \\ +\infty, & \text{otherwise}. \end{cases}$$

Then $h: \mathcal{H} \to \mathbb{R} \cup \{+\infty\}$ is a lower semi-continuous proper convex function. By the definition of the subgradient, we have

$$\partial h(x) = \begin{cases} \{0\}, & \text{if } x \in \operatorname{int}(X) \cap \{x \mid \|x - x^0\| < 2M\}, \\ \{\lambda (x - x^0) \mid \lambda \ge 0\}, & \text{if } x \in \operatorname{int}(X) \cap \{x \mid \|x - x^0\| = 2M\}, \\ \emptyset, & \text{otherwise}. \end{cases}$$

So, $\partial h(x)$ and

$$\widetilde{F} = F + \partial h$$

are both maximal monotone mappings [16]. Furthermore,

$$\widetilde{F}(x) = F(x), \quad \text{if } x \in \operatorname{int}(X) \cap \{x \mid \|x - x^0\| < 2M\},$$

and $x^k$, $y^k$, $\zeta^k$ for $k = 0, 1, \ldots, k_0$ also satisfy the conditions of Algorithm 3.1. Since the domain of $\widetilde{F}$ is bounded, by the proof of Theorem 2 in [14], we know that $\widetilde{F}$ has a zero point, i.e., there exists a point $\bar{x} \in \operatorname{int}(X) \cap \{x \mid \|x - x^0\| < 2M\}$ such that

$$0 \in \widetilde{F}(\bar{x}) = F(\bar{x}),$$

which implies that the solution set $X^*$ is nonempty. We arrive at a contradiction, and the desired result follows. □

In order to establish the convergence of the algorithm, we first show the expansion property of the algorithm with respect to the initial point.

Lemma 3.5 Suppose Algorithm 3.1 reaches iteration $k + 1$. Then

$$\|x^{k+1} - x^k\|^2 + \|x^k - x^0\|^2 \le \|x^{k+1} - x^0\|^2.$$

Proof By the iterative process of Algorithm 3.1, one has

$$x^{k+1} = P_{H_k^1 \cap H_k^2 \cap X}(x^0).$$

So $x^{k+1} \in H_k^2$ and

$$P_{H_k^2}(x^{k+1}) = x^{k+1}.$$

From the definition of $H_k^2$, it follows that

$$\langle z - x^k, x^0 - x^k \rangle \le 0, \quad \forall z \in H_k^2.$$

Thus, $x^k = P_{H_k^2}(x^0)$ by Remark 2.1. Then, from Lemma 2.1(ii), we have

$$\|P_{H_k^2}(x^{k+1}) - P_{H_k^2}(x^0)\|^2 \le \|x^{k+1} - x^0\|^2 - \|P_{H_k^2}(x^{k+1}) - x^{k+1} + x^0 - P_{H_k^2}(x^0)\|^2,$$

which can be written as

$$\|x^{k+1} - x^k\|^2 \le \|x^{k+1} - x^0\|^2 - \|x^k - x^0\|^2,$$

i.e.,

$$\|x^{k+1} - x^k\|^2 + \|x^k - x^0\|^2 \le \|x^{k+1} - x^0\|^2,$$

and the proof is completed. □
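Lemma 3.5 can also be observed numerically. Continuing the usage sketch given after Algorithm 3.1 (so `traj`, the iterate history, is an assumption carried over from there), the claimed inequality holds along the trajectory up to the QP solver's tolerance.

```python
import numpy as np

# Check ||x^{k+1} - x^k||^2 + ||x^k - x^0||^2 <= ||x^{k+1} - x^0||^2
# along the iterates 'traj' produced by the earlier usage sketch.
for k in range(len(traj) - 1):
    lhs = (np.linalg.norm(traj[k + 1] - traj[k]) ** 2
           + np.linalg.norm(traj[k] - traj[0]) ** 2)
    rhs = np.linalg.norm(traj[k + 1] - traj[0]) ** 2
    assert lhs <= rhs + 1e-7   # slack for the numerical QP solution
```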

From Lemma 3.4, Algorithm 3.1 generates an infinite sequence if the solution set of problem (1.1) is empty. More precisely, we have the following conclusion.

Theorem 3.1 Suppose Algorithm 3.1 generates an infinite sequence $\{x^k\}_{k=0}^{\infty}$, and assume the sequence $\{\eta_k\}_{k=0}^{\infty}$ is bounded away from zero. If the solution set $X^*$ is nonempty, then the generated sequence $\{x^k\}_{k=0}^{\infty}$ is bounded and each of its weak accumulation points is a solution of problem (1.1). Otherwise,

$$\lim_{k \to +\infty} \|x^k - x^0\| = +\infty$$

if the solution set $X^*$ is empty.

Proof For the case that $X^* \ne \emptyset$, by Lemma 3.3 and

$$x^{k+1} = P_{H_k^1 \cap H_k^2 \cap X}(x^0),$$

we know that

$$\|x^{k+1} - x^0\| \le \|x^* - x^0\|$$

for any $x^* \in X^*$. So, $\{x^k\}_{k=0}^{\infty}$ is a bounded sequence.

Then, by Lemma 3.5, the sequence $\{\|x^k - x^0\|\}_{k=0}^{\infty}$ is nondecreasing and bounded, hence convergent, which implies that

$$\lim_{k \to \infty} \|x^{k+1} - x^k\|^2 = 0.$$
(3.5)

On the other hand, by the fact that $x^{k+1} \in H_k^1$, we have

$$\langle x^{k+1} - y^k, \zeta^k \rangle \le 0,$$
(3.6)

where $\zeta^k$ can be chosen as in (3.1). Since

$$y^k = (1 - \eta_k) x^k + \eta_k z^k = x^k - \eta_k r(x^k, \xi^k),$$

by (3.6), one has

$$\langle x^{k+1} - y^k, \zeta^k \rangle = \langle x^{k+1} - x^k + \eta_k r(x^k, \xi^k), \zeta^k \rangle \le 0,$$

which can be written as

$$\eta_k \langle r(x^k, \xi^k), \zeta^k \rangle \le \langle x^k - x^{k+1}, \zeta^k \rangle.$$

Using the Cauchy-Schwarz inequality and (3.1), we obtain

$$\eta_k \sigma \|r(x^k, \xi^k)\|^2 \le \eta_k \langle r(x^k, \xi^k), \zeta^k \rangle \le \|x^{k+1} - x^k\| \|\zeta^k\|.$$
(3.7)

Since $F$ is continuous with compact values, Proposition 3.11 in [17] implies that $\{F(y^k) \mid k \in \mathbb{N}\}$ is a bounded set, and hence the sequence $\{\zeta^k \mid \zeta^k \in F(y^k)\}$ is bounded. Thus, by (3.5) and (3.7), it follows that

$$\lim_{k \to \infty} \eta_k \|r(x^k, \xi^k)\|^2 = 0.$$

By the assumption that $\{\eta_k\}_{k=0}^{\infty}$ is bounded away from zero, we have

$$\lim_{k \to \infty} \|r(x^k, \xi^k)\|^2 = 0.$$
(3.8)

Since the sequence $\{x^k\}_{k=0}^{\infty}$ is bounded, it has weak accumulation points. Without loss of generality, assume that the subsequence $\{x^{k_j}\}$ weakly converges to $\bar{x}$, i.e.,

$$x^{k_j} \rightharpoonup \bar{x}, \quad \text{as } j \to \infty.$$

Since $r(x, \xi)$ is a continuous single-valued operator, from Theorem 2 of [18] we know that $r(x, \xi)$ is weakly continuous. Thus,

$$r(\bar{x}, \xi) = \lim_{j \to \infty} r(x^{k_j}, \xi^{k_j}) = 0$$

for some $\xi \in F(\bar{x})$, and hence $\bar{x}$ is a solution of problem (1.1) by Proposition 3.1.

Now, consider the case that the solution set is empty. In this case, the inequality

$$\|x^{k+1} - x^k\|^2 + \|x^k - x^0\|^2 \le \|x^{k+1} - x^0\|^2$$

still holds. Thus, the sequence $\{\|x^k - x^0\|\}_{k=0}^{\infty}$ is again nondecreasing. Now, we claim that

$$\lim_{k \to +\infty} \|x^k - x^0\| = +\infty.$$

Otherwise, the nondecreasing sequence $\{\|x^k - x^0\|\}_{k=0}^{\infty}$ would be bounded, so (3.5) would still hold, and an argument similar to the one above would show that any weak accumulation point of $\{x^k\}_{k=0}^{\infty}$ is a solution of problem (1.1), which contradicts the emptiness of the solution set. The conclusion follows. □

We are now in a position to prove the strong convergence of Algorithm 3.1.

Theorem 3.2 Suppose Algorithm 3.1 generates an infinite sequence $\{x^k\}_{k=0}^{\infty}$. If the solution set $X^*$ is nonempty and the sequence $\{\eta_k\}$ is bounded away from zero, then the sequence $\{x^k\}_{k=0}^{\infty}$ converges strongly to a solution $x^*$ such that $x^* = P_{X^*}(x^0)$; otherwise, $\lim_{k \to +\infty} \|x^k - x^0\| = +\infty$. That is, the solution set of problem (1.1) is empty if and only if the sequence generated by Algorithm 3.1 diverges to infinity.

Proof For the case that the solution set is nonempty, from Theorem 3.1 we know that the sequence $\{x^k\}_{k=0}^{\infty}$ is bounded and that every weak accumulation point of $\{x^k\}_{k=0}^{\infty}$ is a solution of problem (1.1). Let $\{x^{k_j}\}_{j=0}^{\infty}$ be a weakly convergent subsequence of $\{x^k\}_{k=0}^{\infty}$, and let $x^* \in X^*$ be its weak limit. Let $\bar{x} = P_{X^*}(x^0)$. Then, by Lemma 3.3,

$$\bar{x} \in H_{k_j - 1}^1 \cap H_{k_j - 1}^2 \cap X$$

for all $j$. So, from the iterative procedure of Algorithm 3.1,

$$x^{k_j} = P_{H_{k_j - 1}^1 \cap H_{k_j - 1}^2 \cap X}(x^0),$$

one has

$$\|x^{k_j} - x^0\| \le \|\bar{x} - x^0\|.$$
(3.9)

Thus,

$$\begin{aligned} \|x^{k_j} - \bar{x}\|^2 &= \|x^{k_j} - x^0 + x^0 - \bar{x}\|^2 \\ &= \|x^{k_j} - x^0\|^2 + \|x^0 - \bar{x}\|^2 + 2 \langle x^{k_j} - x^0, x^0 - \bar{x} \rangle \\ &\le \|\bar{x} - x^0\|^2 + \|x^0 - \bar{x}\|^2 + 2 \langle x^{k_j} - x^0, x^0 - \bar{x} \rangle, \end{aligned}$$

where the inequality follows from (3.9). Letting $j \to \infty$, it follows that

$$\limsup_{j \to \infty} \|x^{k_j} - \bar{x}\|^2 \le 2 \|\bar{x} - x^0\|^2 + 2 \langle x^* - x^0, x^0 - \bar{x} \rangle = 2 \langle x^* - \bar{x}, x^0 - \bar{x} \rangle.$$
(3.10)

Due to Lemma 2.1 and the facts that $\bar{x} = P_{X^*}(x^0)$ and $x^* \in X^*$, we have

$$\langle x^* - \bar{x}, x^0 - \bar{x} \rangle \le 0.$$

Combining this with (3.10) and the fact that $x^*$ is the weak limit of $\{x^{k_j}\}_{j=0}^{\infty}$, we conclude that the sequence $\{x^{k_j}\}_{j=0}^{\infty}$ converges strongly to $\bar{x}$ and

$$x^* = \bar{x} = P_{X^*}(x^0).$$

Since $x^*$ was taken as an arbitrary weak accumulation point of $\{x^k\}_{k=0}^{\infty}$, it follows that $\bar{x}$ is the unique weak accumulation point of this sequence. Since $\{x^k\}_{k=0}^{\infty}$ is bounded, the whole sequence $\{x^k\}_{k=0}^{\infty}$ converges weakly to $\bar{x}$. On the other hand, we have shown that every weakly convergent subsequence of $\{x^k\}_{k=0}^{\infty}$ converges strongly to $\bar{x}$. Hence, the whole sequence $\{x^k\}_{k=0}^{\infty}$ converges strongly to $\bar{x} \in X^*$.

For the case that the solution set is empty, the conclusion can be obtained directly from Theorem 3.1. □