1 Introduction

Let $F$ be a multi-valued mapping from $\mathbb{R}^n$ into $2^{\mathbb{R}^n}$ with nonempty values, and let $X$ be a nonempty closed convex set in $\mathbb{R}^n$. The problem of generalized variational inequalities (GVI) [1, 2] is to find $x^* \in X$ such that there exists $\omega^* \in F(x^*)$ satisfying

\[ \langle \omega^*, x - x^* \rangle \ge 0, \quad \forall x \in X, \]
(1.1)

where $\langle \cdot, \cdot \rangle$ stands for the Euclidean inner product of vectors in $\mathbb{R}^n$. The solution set of problem (1.1) is denoted by $X^*$. Clearly, the GVI reduces to the classical variational inequality (VI) when $F$ is a single-valued mapping; the VI has been well studied in the past decades [3, 4].

For the GVI, theories and solution methods have been extensively studied in the literature [2, 5–12], and the existence of solutions is an important topic for the GVI [1]. Generally, there are two main approaches to the solution-existence problem for the GVI. The first is an analytic approach, which reformulates the GVI as a well-studied mathematical problem and then invokes an existence theorem for the latter [13]. The second is a constructive approach, in which existence is verified through the behavior of a proposed algorithm. The algorithm considered in this paper belongs to the second approach.

First, we give a short summary of the constructive approach to existence theory for the VI. The equivalence between the existence of solutions to the VI and the boundedness of the sequence generated by certain modified extragradient methods was first established by Sun in [14]. Later, Wang et al. [4] established the same theory with a new type of extragradient method. Furthermore, the generated sequence possesses an expansion property with respect to the starting point and converges to a solution point if the solution set of the VI is nonempty. A question then arises naturally: since the GVI problem extends the VI, can this theory be extended to the GVI? This question constitutes the main motivation of the paper.

In this paper, inspired by the work in [4], we propose a new type of extragradient projection method for the GVI. We first establish the existence results for the GVI under pseudomonotonicity and continuity assumptions on the underlying mapping $F$, and then show the global convergence of the proposed method.

The rest of this paper is organized as follows. In Section 2, we recall some related concepts and results. In Section 3, we describe the algorithm, establish some of its properties, and prove the global convergence of the generated sequence.

2 Preliminaries

For a nonempty closed convex set $K \subseteq \mathbb{R}^n$ and a vector $x \in \mathbb{R}^n$, the orthogonal projection of $x$ onto $K$, i.e.,

\[ \operatorname{argmin} \{ \| y - x \| \mid y \in K \}, \]

is denoted by $P_K(x)$. In what follows, we state some well-known properties of the projection operator which will be used in the sequel.

Lemma 2.1 [15]

Let $K$ be a nonempty, closed and convex subset of $\mathbb{R}^n$. Then, for any $x, y \in \mathbb{R}^n$ and $z \in K$, the following statements hold:

  1. (i)

    $\langle P_K(x) - x,\; z - P_K(x) \rangle \ge 0$;

  2. (ii)

    $\| P_K(x) - P_K(y) \|^2 \le \| x - y \|^2 - \| P_K(x) - x + y - P_K(y) \|^2$.

Remark 2.1 In fact, (i) in Lemma 2.1 also provides a sufficient and necessary condition for a vector $u \in K$ to be the projection of the vector $x$; i.e., $u = P_K(x)$ if and only if

\[ \langle u - x, z - u \rangle \ge 0, \quad \forall z \in K. \]
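As a quick numerical illustration of Remark 2.1 (our own sketch, not part of the paper's development), the snippet below takes $K$ to be a box, for which the projection is coordinatewise clipping, and spot-checks the characterization $\langle u - x, z - u \rangle \ge 0$ at a few points $z \in K$. The helper names `project_box` and `inner` are ours.

```python
# Numerical spot-check of Remark 2.1 (illustration only, not from the paper).
# K is a box, so P_K clips each coordinate; we verify <u - x, z - u> >= 0.

def project_box(x, lo, hi):
    """Projection onto the box {z : lo_i <= z_i <= hi_i}: clip coordinatewise."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def inner(a, b):
    """Euclidean inner product."""
    return sum(ai * bi for ai, bi in zip(a, b))

x = [2.0, -3.0, 0.5]                        # a point outside the unit box
lo, hi = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]
u = project_box(x, lo, hi)                  # u = P_K(x)

# Remark 2.1: <u - x, z - u> >= 0 must hold for every z in K; spot-check a few.
for z in ([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.5, 0.2, 0.9]):
    d1 = [ui - xi for ui, xi in zip(u, x)]
    d2 = [zi - ui for zi, ui in zip(z, u)]
    assert inner(d1, d2) >= 0.0
```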

Definition 2.1 Let $K$ be a nonempty subset of $\mathbb{R}^n$. A multi-valued mapping $F: K \to 2^{\mathbb{R}^n}$ is said to be

  1. (i)

    monotone if and only if

    \[ \langle u - v, x - y \rangle \ge 0, \quad \forall x, y \in K,\ u \in F(x),\ v \in F(y); \]
  2. (ii)

    pseudomonotone if and only if, for any $x, y \in K$, $u \in F(x)$, $v \in F(y)$,

    \[ \langle u, y - x \rangle \ge 0 \;\Longrightarrow\; \langle v, y - x \rangle \ge 0. \]
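To see that pseudomonotonicity is strictly weaker than monotonicity, a standard single-valued example on $K = \mathbb{R}$ is $f(x) = e^{-x}$: since $f > 0$ everywhere, $\langle f(x), y - x \rangle \ge 0$ forces $y \ge x$, and then $\langle f(y), y - x \rangle \ge 0$ as well, while $f$ is decreasing and hence not monotone. The sketch below (our own illustration, not from the paper) checks both properties on sample pairs.

```python
# Illustration (not from the paper): on K = R, the single-valued map
# f(x) = exp(-x) is pseudomonotone but not monotone.
import math

def f(x):
    return math.exp(-x)

def is_monotone_pair(x, y):
    # monotonicity on the pair (x, y): (f(x) - f(y)) * (x - y) >= 0
    return (f(x) - f(y)) * (x - y) >= 0

def is_pseudomonotone_pair(x, y):
    # the implication <f(x), y - x> >= 0  =>  <f(y), y - x> >= 0
    return (f(x) * (y - x) < 0) or (f(y) * (y - x) >= 0)

pairs = [(-1.0, 2.0), (0.5, -0.5), (3.0, 3.5)]
assert all(is_pseudomonotone_pair(x, y) for x, y in pairs)
assert not is_monotone_pair(0.0, 1.0)   # (1 - e^{-1}) * (0 - 1) < 0
```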

Now let us recall the definition of a continuous multi-valued mapping F.

Definition 2.2 Assume that $F: X \to 2^{\mathbb{R}^n}$ is a multi-valued mapping. Then

  1. (i)

    $F$ is said to be upper semicontinuous at $x \in X$ if, for every open set $V$ containing $F(x)$, there is an open set $U$ containing $x$ such that $F(y) \subseteq V$ for all $y \in X \cap U$;

  2. (ii)

    $F$ is said to be lower semicontinuous at $x \in X$ if, given any sequence $\{x^k\}$ converging to $x$ and any $y \in F(x)$, there exists a sequence $\{y^k\}$ with $y^k \in F(x^k)$ that converges to $y$.

$F$ is said to be continuous at $x \in X$ if it is both upper semicontinuous and lower semicontinuous at $x$.

For the simplicity of our description, we list the assumptions needed in the sequel.

Assumption 2.1 Suppose that $X$ is a nonempty closed convex set in $\mathbb{R}^n$, and that the multi-valued mapping $F: X \to 2^{\mathbb{R}^n}$ is pseudomonotone and continuous on $X$ with nonempty compact convex values.

3 Main results

For $x \in \mathbb{R}^n$ and $\xi \in F(x)$, we first define the projection residue

\[ r(x, \xi) = x - P_X(x - \xi). \]

It is well known that the projection residue is intimately related to the solution set $X^*$.

Proposition 3.1 For $x \in X$ and $\xi \in F(x)$, the point $x$ solves problem (1.1) if and only if

\[ x = P_X(x - \xi). \]
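For intuition, the sketch below (our own toy instance, not the paper's) evaluates the residue for the single-valued map $f(x) = x - 2$ on $X = [0, 1]$, whose unique solution is $x^* = 1$; Proposition 3.1 then says the residue vanishes exactly at the solution.

```python
# Toy instance for Proposition 3.1 (illustration only): f(x) = x - 2 on X = [0, 1].

def proj_interval(t, lo=0.0, hi=1.0):
    """Projection onto the interval [lo, hi]."""
    return min(max(t, lo), hi)

def residue(x, xi):
    """Projection residue r(x, xi) = x - P_X(x - xi)."""
    return x - proj_interval(x - xi)

f = lambda x: x - 2.0
assert residue(1.0, f(1.0)) == 0.0   # x = 1 solves the problem: residue vanishes
assert residue(0.5, f(0.5)) != 0.0   # x = 0.5 does not
```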

The basic idea of the algorithm is as follows. At each step, compute the projection residue $r(x^k, \xi^k)$ at the iterate $x^k$. If it is the zero vector, stop: $x^k$ is a solution of the GVI. Otherwise, find a trial point $y^k$ by a back-tracking search at $x^k$ along the residue $r(x^k, \xi^k)$, and obtain the new iterate by projecting $x^0$ onto the intersection of $X$ with two halfspaces associated with $y^k$ and $x^k$, respectively. Repeat this process until the projection residue is the zero vector.

Algorithm 3.1 Choose $\sigma, \gamma \in (0, 1)$ and $x^0 \in X$, and set $k = 0$.

Step 1: Given the current iterate $x^k \in X$, if $r(x^k, \xi) = 0$ for some $\xi \in F(x^k)$, stop; otherwise, take any $\xi^k \in F(x^k)$ and compute

\[ z^k = P_X(x^k - \xi^k). \]

Then let

\[ y^k = (1 - \eta_k) x^k + \eta_k z^k, \]

where $\eta_k = \gamma^{m_k}$, with $m_k$ being the smallest nonnegative integer $m$ for which there exists $\zeta^k \in F(x^k - \gamma^m r(x^k, \xi^k))$ such that

\[ \langle \zeta^k, r(x^k, \xi^k) \rangle \ge \sigma \| r(x^k, \xi^k) \|^2. \]
(3.1)

Step 2: Let $x^{k+1} = P_{H_k^1 \cap H_k^2 \cap X}(x^0)$, where

\[ H_k^1 = \{ x \in \mathbb{R}^n \mid \langle x - y^k, \zeta^k \rangle \le 0,\ \zeta^k \in F(y^k) \}, \qquad H_k^2 = \{ x \in \mathbb{R}^n \mid \langle x - x^k, x^0 - x^k \rangle \le 0 \}. \]

Set $k := k + 1$ and go to Step 1.
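Step 1 can be sketched in a few lines for a single-valued $f$ (so $F(x) = \{f(x)\}$ and the existential quantifier in (3.1) is trivial). This is an illustrative sketch, not the paper's implementation; in particular, Step 2's projection onto $H_k^1 \cap H_k^2 \cap X$ is a convex quadratic program in general and is omitted here. The names `step1` and `proj` are ours.

```python
# One-dimensional sketch of Step 1 of Algorithm 3.1 for a single-valued f
# (illustration only; Step 2 is omitted).

def step1(x, f, proj, sigma=0.5, gamma=0.5, max_m=60):
    xi = f(x)
    z = proj(x - xi)                     # z^k = P_X(x^k - xi^k)
    r = x - z                            # projection residue r(x^k, xi^k)
    if r == 0.0:
        return x, None                   # x already solves the problem
    m = 0
    # back-tracking: smallest m with <zeta, r> >= sigma * ||r||^2, see (3.1)
    while m < max_m and f(x - gamma**m * r) * r < sigma * r * r:
        m += 1
    eta = gamma**m                       # eta_k = gamma^{m_k}
    y = x - eta * r                      # trial point y^k
    return y, f(y)                       # (y^k, zeta^k in F(y^k))

# On the toy instance f(x) = x - 2, X = [0, 1], the first trial point
# already lands on the solution x* = 1.
proj01 = lambda t: min(max(t, 0.0), 1.0)
y, zeta = step1(0.2, lambda x: x - 2.0, proj01)
assert abs(y - 1.0) < 1e-12
```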

Now, we first discuss the feasibility of the stepsize rule (3.1).

Lemma 3.1 If $x^k$ is not a solution of problem (1.1), then there exists a smallest nonnegative integer $m$ satisfying (3.1).

Proof By the definition of $r(x^k, \xi^k)$ and Lemma 2.1, we know that

\[ \langle P_X(x^k - \xi^k) - (x^k - \xi^k),\; x^k - P_X(x^k - \xi^k) \rangle \ge 0, \]

which implies

\[ \langle \xi^k, r(x^k, \xi^k) \rangle \ge \| r(x^k, \xi^k) \|^2 > 0.
(3.2)

Since $\gamma \in (0, 1)$, we get

\[ \lim_{m \to \infty} \bigl( x^k - \gamma^m r(x^k, \xi^k) \bigr) = x^k. \]

By this and the fact that $F$ is lower semicontinuous, there exist $\zeta^m \in F(x^k - \gamma^m r(x^k, \xi^k))$ such that

\[ \lim_{m \to \infty} \zeta^m = \xi^k. \]

So,

\[ \lim_{m \to \infty} \langle \zeta^m, r(x^k, \xi^k) \rangle = \langle \xi^k, r(x^k, \xi^k) \rangle \ge \| r(x^k, \xi^k) \|^2 > 0, \]

which implies the conclusion. □

The following lemma shows that the halfspace $H_k^1$ in Algorithm 3.1 strictly separates $x^k$ from the solution set $X^*$ whenever $X^*$ is nonempty.

Lemma 3.2 If $X^* \neq \emptyset$, the halfspace $H_k^1$ in Algorithm 3.1 separates the point $x^k$ from the set $X^*$. Moreover,

\[ X^* \subseteq H_k^1 \cap X, \quad \forall k \ge 0. \]

Proof By the definition of $r(x^k, \xi^k)$ and Algorithm 3.1, we know

\[ y^k = (1 - \eta_k) x^k + \eta_k z^k = x^k - \eta_k r(x^k, \xi^k), \]

which can be written as

\[ \eta_k r(x^k, \xi^k) = x^k - y^k. \]

Then, by this and (3.1), we get

\[ \langle \zeta^k, x^k - y^k \rangle > 0, \]
(3.3)

where $\zeta^k$ is a vector in $F(y^k)$. So, by the definition of $H_k^1$ and (3.3), we get $x^k \notin H_k^1$.

On the other hand, for any $x^* \in X^*$ and $x \in X$, there exists $\omega^* \in F(x^*)$ such that

\[ \langle \omega^*, x - x^* \rangle \ge 0. \]

Since $F$ is pseudomonotone on $X$, we get

\[ \langle \omega, x - x^* \rangle \ge 0, \quad \forall \omega \in F(x), \]
(3.4)

Letting $x = y^k$ in (3.4), for any $\zeta^k \in F(y^k)$ we have

\[ \langle \zeta^k, y^k - x^* \rangle \ge 0, \]

which implies $x^* \in H_k^1$. Moreover, it is easy to see that $X^* \subseteq H_k^1 \cap X$, $\forall k \ge 0$. □

The following lemma says that if the solution set is nonempty, then $X^* \subseteq H_k^1 \cap H_k^2 \cap X$, and thus $H_k^1 \cap H_k^2 \cap X$ is a nonempty set.

Lemma 3.3 If the solution set $X^* \neq \emptyset$, then $X^* \subseteq H_k^1 \cap H_k^2 \cap X$ for all $k \ge 0$ under Assumption 2.1.

Proof From the analysis above, it suffices to prove that $X^* \subseteq H_k^2$ for all $k \ge 0$. The proof is by induction. Obviously, if $k = 0$,

\[ X^* \subseteq H_0^2 = \mathbb{R}^n. \]

Now suppose that

\[ X^* \subseteq H_k^2 \]

holds for $k = l \ge 0$. Then

\[ X^* \subseteq H_l^1 \cap H_l^2 \cap X. \]

For any $x^* \in X^*$, by Lemma 2.1 and the fact that

\[ x^{l+1} = P_{H_l^1 \cap H_l^2 \cap X}(x^0), \]

it holds that

\[ \langle x^* - x^{l+1}, x^0 - x^{l+1} \rangle \le 0. \]

Thus $X^* \subseteq H_{l+1}^2$. This shows that $X^* \subseteq H_k^2$ for all $k \ge 0$, and the desired result follows. □

For the case where the solution set is empty, the following lemma shows that $H_k^1 \cap H_k^2 \cap X$ is still nonempty, which guarantees that Algorithm 3.1 is well defined.

Lemma 3.4 Suppose that $X^* = \emptyset$. Then $H_k^1 \cap H_k^2 \cap X \neq \emptyset$ for all $k \ge 0$ under Assumption 2.1.

Before proving Lemma 3.4, we present a fundamental existence result for the GVI problem defined over a compact convex region [16]. For the sake of completeness, we give its proof.

Lemma 3.5 Let $X \subseteq \mathbb{R}^n$ be a nonempty bounded closed convex set, and let the multi-valued mapping $F: X \to 2^{\mathbb{R}^n}$ be lower semicontinuous with nonempty closed convex values. Then the solution set $X^*$ is nonempty.

Proof Since the multi-valued mapping $F$ is lower semicontinuous and has nonempty closed convex values, by Michael’s selection theorem (see, for instance, Theorem 24.1 in [17]) it admits a continuous selection; that is, there exists a continuous mapping $G: X \to \mathbb{R}^n$ such that $G(x) \in F(x)$ for every $x \in X$. Since $X$ is a nonempty bounded closed convex set, the problem VI$(X, G)$, which consists of finding $x^* \in X$ such that

\[ \langle G(x^*), y - x^* \rangle \ge 0, \quad \forall y \in X, \]

has a solution (see Lemma 3.1 in [18]). Since $G(x) \in F(x)$ for every $x \in X$, any solution of VI$(X, G)$ also solves the GVI, so $X^*$ is nonempty. □

Proof of Lemma 3.4 Suppose, on the contrary, that there exists $k_0 \ge 1$ such that $H_{k_0}^1 \cap H_{k_0}^2 \cap X = \emptyset$. Then there exists a positive number $M$ such that

\[ \{ x^k \mid 0 \le k \le k_0 \} \subseteq B(x^0, M) \]

and

\[ \{ x^k - \xi^k \mid 0 \le k \le k_0,\ \xi^k \in F(x^k) \} \subseteq B(x^0, M), \]

where

\[ B(x^0, M) = \{ x \in \mathbb{R}^n \mid \| x - x^0 \| \le M \}. \]

Let $Y = X \cap B(x^0, 2M)$ and consider GVI$(F, Y)$. From Lemma 3.5, we know that the solution set $Y^*$ of GVI$(F, Y)$ is nonempty. In order to avoid confusion with the sequences $\{H_k^1\}$, $\{H_k^2\}$, and $\{x^k\}$, we denote the three corresponding sequences by $\{\bar H_k^1\}$, $\{\bar H_k^2\}$, and $\{\bar x^k\}$, respectively, when Algorithm 3.1 is applied to GVI$(F, Y)$ with the starting point $x^0$. We claim that

  1. (i)

     the sequence $\{\bar x^k\}$ has at least $k_0 + 1$ elements $\bar x^0, \bar x^1, \ldots, \bar x^{k_0}$;

  2. (ii)

     $\bar x^k = x^k$, $\bar H_k^1 = H_k^1$, $\bar H_k^2 = H_k^2$ for $0 \le k \le k_0$;

  3. (iii)

     $x^{k_0}$ is not a solution of GVI$(F, Y)$.

Since $Y^* \neq \emptyset$, using Lemma 3.3 we know that $\bar H_{k_0}^1 \cap \bar H_{k_0}^2 \cap X \neq \emptyset$, so $H_{k_0}^1 \cap H_{k_0}^2 \cap X \neq \emptyset$, which contradicts the supposition that $H_{k_0}^1 \cap H_{k_0}^2 \cap X = \emptyset$. □

From Lemma 3.4, if the solution set of problem (1.1) is empty, then Algorithm 3.1 generates an infinite sequence. More generally, we have the following conclusion.

Theorem 3.1 Suppose that Assumption 2.1 holds and that Algorithm 3.1 generates an infinite sequence $\{x^k\}$. If the solution set $X^*$ is nonempty, then the sequence $\{x^k\}$ is bounded and all its cluster points belong to the solution set. Otherwise, i.e., if the solution set $X^*$ is empty,

\[ \lim_{k \to +\infty} \| x^k - x^0 \| = +\infty. \]

Proof First, we suppose that the solution set is nonempty. Since

\[ x^{k+1} = P_{H_k^1 \cap H_k^2 \cap X}(x^0), \]

by Lemma 3.3 and the definition of the projection, it holds that

\[ \| x^{k+1} - x^0 \| \le \| x^* - x^0 \| \]

for any $x^* \in X^*$. So $\{x^k\}$ is a bounded sequence.

Since $x^{k+1} \in H_k^2$, it is obvious that

\[ P_{H_k^2}(x^{k+1}) = x^{k+1} \]

from the definition of the projection operator. For $x^k$, since

\[ \langle z - x^k, x^0 - x^k \rangle \le 0, \quad \forall z \in H_k^2, \]

it holds that $x^k = P_{H_k^2}(x^0)$ by Remark 2.1. Thus, using Lemma 2.1, one has

\[ \| P_{H_k^2}(x^{k+1}) - P_{H_k^2}(x^0) \|^2 \le \| x^{k+1} - x^0 \|^2 - \| P_{H_k^2}(x^{k+1}) - x^{k+1} + x^0 - P_{H_k^2}(x^0) \|^2; \]

i.e.,

\[ \| x^{k+1} - x^k \|^2 \le \| x^{k+1} - x^0 \|^2 - \| x^k - x^0 \|^2, \]

which can be written as

\[ \| x^{k+1} - x^k \|^2 + \| x^k - x^0 \|^2 \le \| x^{k+1} - x^0 \|^2. \]

Thus the sequence $\{ \| x^k - x^0 \| \}$ is nondecreasing and bounded, hence convergent, which implies that

\[ \lim_{k \to \infty} \| x^{k+1} - x^k \|^2 = 0.
(3.5)

On the other hand, since $x^{k+1} \in H_k^1$, we get

\[ \langle x^{k+1} - y^k, \zeta^k \rangle \le 0.
(3.6)

Since

\[ y^k = (1 - \eta_k) x^k + \eta_k z^k = x^k - \eta_k r(x^k, \xi^k), \]

by (3.6) we have

\[ \langle x^{k+1} - y^k, \zeta^k \rangle = \langle x^{k+1} - x^k + \eta_k r(x^k, \xi^k), \zeta^k \rangle \le 0, \]

which implies

\[ 0 \le \eta_k \langle r(x^k, \xi^k), \zeta^k \rangle \le \langle x^k - x^{k+1}, \zeta^k \rangle. \]

Using the Cauchy–Schwarz inequality and (3.1), we obtain

\[ \eta_k \sigma \| r(x^k, \xi^k) \|^2 \le \eta_k \langle r(x^k, \xi^k), \zeta^k \rangle \le \| x^{k+1} - x^k \| \, \| \zeta^k \|.
(3.7)

Since $F$ is continuous with compact values, Proposition 3.11 in [19] implies that $\{ F(y^k) : k \in \mathbb{N} \}$ is a bounded set, and so the sequence $\{ \zeta^k : \zeta^k \in F(y^k) \}$ is bounded. By (3.5) and (3.7), it follows that

\[ \lim_{k \to \infty} \eta_k \| r(x^k, \xi^k) \|^2 = 0. \]

Let $\{x^{k_j}\}$ be any convergent subsequence of $\{x^k\}$ and denote its limit by $\bar x$; i.e.,

\[ \lim_{j \to \infty} x^{k_j} = \bar x. \]

Without loss of generality, suppose that $\{\eta_{k_j}\}$ has a limit. Then either

\[ \lim_{j \to \infty} \eta_{k_j} = 0 \]

or

\[ \lim_{j \to \infty} \| r(x^{k_j}, \xi^{k_j}) \|^2 = 0. \]

In the first case, by the choice of $\eta_{k_j}$ in Algorithm 3.1, we know that

\[ \langle \zeta, r(x^{k_j}, \xi^{k_j}) \rangle < \sigma \| r(x^{k_j}, \xi^{k_j}) \|^2
(3.8)

for all $\zeta \in F\bigl( x^{k_j} - \frac{\eta_{k_j}}{\gamma}\, r(x^{k_j}, \xi^{k_j}) \bigr)$.

Since

\[ \lim_{j \to \infty} \Bigl( x^{k_j} - \frac{\eta_{k_j}}{\gamma}\, r(x^{k_j}, \xi^{k_j}) \Bigr) = \bar x \]

and $F$ is lower semicontinuous on $X$, there exist $\zeta^{k_j} \in F\bigl( x^{k_j} - \frac{\eta_{k_j}}{\gamma}\, r(x^{k_j}, \xi^{k_j}) \bigr)$ such that

\[ \lim_{j \to \infty} \zeta^{k_j} = \xi, \]

where $\xi$ is a vector in $F(\bar x)$. So, by (3.8) we obtain

\[ \lim_{j \to \infty} \langle \zeta^{k_j}, r(x^{k_j}, \xi^{k_j}) \rangle = \langle \xi, r(\bar x, \xi) \rangle \le \lim_{j \to \infty} \sigma \| r(x^{k_j}, \xi^{k_j}) \|^2 = \sigma \| r(\bar x, \xi) \|^2.
(3.9)

By arguments similar to those used for (3.2), we have

\[ \langle \xi, r(\bar x, \xi) \rangle \ge \| r(\bar x, \xi) \|^2. \]

Combining this with (3.9) gives $\| r(\bar x, \xi) \|^2 \le \sigma \| r(\bar x, \xi) \|^2$; since $\sigma \in (0, 1)$, we conclude that $r(\bar x, \xi) = 0$, and thus $\bar x$ is a solution of problem (1.1).

In the second case,

\[ \lim_{j \to \infty} \| r(x^{k_j}, \xi^{k_j}) \|^2 = 0, \]

it is easy to see that the limit point $\bar x$ of $\{x^{k_j}\}$ is a solution of problem (1.1).

Now we consider the case where the solution set is empty. Since the inequality

\[ \| x^{k+1} - x^k \|^2 \le \| x^{k+1} - x^0 \|^2 - \| x^k - x^0 \|^2 \]

also holds in this case, the sequence $\{ \| x^k - x^0 \| \}$ is still nondecreasing. Next, we claim that

\[ \lim_{k \to +\infty} \| x^k - x^0 \| = +\infty. \]

Otherwise, $\{ \| x^k - x^0 \| \}$ would be bounded, and a discussion similar to the one above would show that every cluster point of $\{x^k\}$ is a solution of problem (1.1), which contradicts the emptiness of the solution set. □

Theorem 3.2 Under the assumptions of Theorem 3.1, if the solution set $X^*$ is nonempty, then the sequence $\{x^k\}$ converges globally to the solution $x^* = P_{X^*}(x^0)$; otherwise, $\lim_{k \to +\infty} \| x^k - x^0 \| = +\infty$. That is, the solution set of problem (1.1) is empty if and only if the generated sequence diverges to infinity.

Proof First, we suppose that the solution set is nonempty. From Theorem 3.1, we know that the sequence $\{x^k\}$ is bounded and that every cluster point $x^*$ of $\{x^k\}$ is a solution of problem (1.1). Suppose that the subsequence $\{x^{k_j}\}$ converges to $x^*$; i.e.,

\[ \lim_{j \to \infty} x^{k_j} = x^*. \]

Let $\bar x = P_{X^*}(x^0)$. Since $\bar x \in X^*$, by Lemma 3.3 we have

\[ \bar x \in H_{k_j - 1}^1 \cap H_{k_j - 1}^2 \cap X \]

for all $j$. So, by the construction of the iterates in Algorithm 3.1, we have

\[ \| x^{k_j} - x^0 \| \le \| \bar x - x^0 \|. \]

Thus,

\[ \| x^{k_j} - \bar x \|^2 = \| x^{k_j} - x^0 \|^2 + \| x^0 - \bar x \|^2 + 2 \langle x^{k_j} - x^0, x^0 - \bar x \rangle \le \| \bar x - x^0 \|^2 + \| x^0 - \bar x \|^2 + 2 \langle x^{k_j} - x^0, x^0 - \bar x \rangle. \]

Letting $j \to \infty$, we have

\[ \| x^* - \bar x \|^2 \le 2 \| \bar x - x^0 \|^2 + 2 \langle x^* - x^0, x^0 - \bar x \rangle = 2 \langle x^* - \bar x, x^0 - \bar x \rangle \le 0, \]

where the last inequality is due to Lemma 2.1 and the facts that $\bar x = P_{X^*}(x^0)$ and $x^* \in X^*$. So,

\[ x^* = \bar x = P_{X^*}(x^0). \]

Thus the sequence $\{x^k\}$ has the unique cluster point $P_{X^*}(x^0)$, which establishes the global convergence of $\{x^k\}$.

For the case that the solution set is empty, the conclusion can be obtained directly from Theorem 3.1. □
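In one dimension the whole method, including Step 2, becomes elementary: each halfspace $H_k^1$, $H_k^2$ is a ray, so their intersection with $X = [lo, hi]$ is an interval and the projection of $x^0$ onto it is a clip. The following self-contained sketch (our own illustration; `solve_gvi_1d` is not from the paper) runs Algorithm 3.1 on the single-valued instance $f(x) = x - 2$ over $X = [0, 1]$, whose unique solution is $x^* = 1$.

```python
# 1-D sketch of Algorithm 3.1 for a single-valued f (illustration only).

def solve_gvi_1d(f, lo, hi, x0, sigma=0.5, gamma=0.5, max_iter=100, max_m=60):
    proj = lambda t: min(max(t, lo), hi)      # P_X on X = [lo, hi]
    x = x0
    for _ in range(max_iter):
        xi = f(x)
        r = x - proj(x - xi)                  # projection residue r(x^k, xi^k)
        if abs(r) < 1e-12:
            return x                          # r = 0  =>  x solves the problem
        m = 0                                 # back-tracking search (3.1)
        while m < max_m and f(x - gamma**m * r) * r < sigma * r * r:
            m += 1
        y = x - gamma**m * r                  # trial point y^k
        zeta = f(y)                           # zeta^k in F(y^k)
        # H^1 = {t : (t - y) * zeta <= 0}, H^2 = {t : (t - x) * (x0 - x) <= 0};
        # in R^1 each is a ray, so the feasible set is an interval [L, U].
        L, U = lo, hi
        if zeta > 0:
            U = min(U, y)
        elif zeta < 0:
            L = max(L, y)
        if x0 > x:
            U = min(U, x)
        elif x0 < x:
            L = max(L, x)
        x = min(max(x0, L), U)                # x^{k+1} = P_{H^1 ∩ H^2 ∩ X}(x^0)
    return x

# f(x) = x - 2 on X = [0, 1] has the unique solution x* = 1.
assert abs(solve_gvi_1d(lambda t: t - 2.0, 0.0, 1.0, 0.2) - 1.0) < 1e-9
```

On this instance the very first iteration already produces $x^1 = 1$, and the residue test then terminates the loop, consistent with Proposition 3.1.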

4 Discussion

Certainly, the proposed extragradient method for the GVI has good theoretical properties: the generated sequence not only possesses an expansion property with respect to the starting point, but the existence of a solution to the problem can also be verified through the behavior of the generated sequence. However, the proposed algorithm is not easy to realize in practice, as its termination criterion is difficult to execute. This is an interesting topic for further research.