1 Introduction

Let C be a nonempty closed convex subset of a real Hilbert space ℋ with the inner product $\langle \cdot, \cdot \rangle$ and the norm $\|\cdot\|$. We denote weak convergence and strong convergence by ⇀ and →, respectively. The bilevel variational inequalities, shortly (BVI), are formulated as follows:

$$\text{Find } x^* \in \operatorname{Sol}(G,C) \text{ such that } \langle F(x^*), x - x^* \rangle \ge 0, \quad \forall x \in \operatorname{Sol}(G,C),$$

where $G: \mathcal{H} \to \mathcal{H}$ and $\operatorname{Sol}(G,C)$ denotes the set of all solutions of the variational inequality:

$$\text{Find } y^* \in C \text{ such that } \langle G(y^*), y - y^* \rangle \ge 0, \quad \forall y \in C,$$

and $F: C \to \mathcal{H}$. We denote the solution set of problem (BVI) by Ω.

Bilevel variational inequalities are special classes of quasivariational inequalities (see [1–4]) and of equilibrium problems with equilibrium constraints considered in [5]. Moreover, they cover some classes of mathematical programs with equilibrium constraints (see [6]), bilevel minimization problems (see [7]), variational inequalities (see [8–13]), minimum-norm problems over the solution set of variational inequalities (see [14, 15]), bilevel convex programming models (see [16]) and bilevel linear programming (see [17]).

Suppose that $f: \mathcal{H} \to \mathbb{R}$. It is well known in convex programming that if f is convex and differentiable on $\operatorname{Sol}(G,C)$, then $x^*$ is a solution to

$$\min \{ f(x) : x \in \operatorname{Sol}(G,C) \}$$

if and only if $x^*$ is a solution to the variational inequality $\mathrm{VI}(\nabla f, \operatorname{Sol}(G,C))$, where ∇f is the gradient of f. The bilevel variational inequalities (BVI) can thus be written in the form of a mathematical program with equilibrium constraints:

$$\begin{cases} \min f(x), \\ x \in \{ y \in C : \langle G(y), z - y \rangle \ge 0, \ \forall z \in C \}. \end{cases}$$

If f, g are two convex and differentiable functions, then problem (BVI) with $F := \nabla f$ and $G := \nabla g$ becomes the following bilevel minimization problem (see [7]):

$$\begin{cases} \min f(x), \\ x \in \operatorname{argmin} \{ g(y) : y \in C \}. \end{cases}$$

In the special case $F(x) = x$ for all $x \in C$, problem (BVI) becomes the minimum-norm problem over the solution set of variational inequalities:

$$\text{Find } x^* \in C \text{ such that } x^* = \Pr_{\operatorname{Sol}(G,C)}(0),$$

where $\Pr_{\operatorname{Sol}(G,C)}(0)$ is the projection of 0 onto $\operatorname{Sol}(G,C)$. A typical example is the least-squares solution to the constrained linear inverse problem in [18]. For solving this problem under the assumptions that the subset $C \subseteq \mathcal{H}$ is nonempty closed convex, $G: C \to \mathcal{H}$ is α-inverse strongly monotone and $\operatorname{Sol}(G,C) \neq \emptyset$, Yao et al. in [14] introduced the following extended extragradient method:

$$\begin{cases} x^0 \in C, \\ y^k = \Pr_C(x^k - \lambda G(x^k) - \alpha_k x^k), \\ x^{k+1} = \Pr_C[x^k - \lambda G(x^k) + \mu(y^k - x^k)], \quad k \ge 0. \end{cases}$$

They showed that, under certain conditions on the parameters, the sequence $\{x^k\}$ converges strongly to $\hat{x} = \Pr_{\operatorname{Sol}(G,C)}(0)$.
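In a finite-dimensional setting, this scheme is straightforward to implement. Below is a minimal Python sketch assuming a user-supplied projection oracle `proj_C` and operator `G`; the function name and the concrete regularization sequence $\alpha_k = 1/(k+2)$ are our illustrative choices, not prescribed in [14].

```python
import numpy as np

def yao_extragradient(proj_C, G, x0, lam, mu, num_iters=1000):
    """Sketch of the extended extragradient method of Yao et al. [14].

    proj_C : callable, metric projection onto the closed convex set C
    G      : callable, an alpha-inverse strongly monotone operator
    lam, mu: step-size parameters as in the scheme above
    """
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        alpha_k = 1.0 / (k + 2)          # regularization sequence, alpha_k -> 0
        d = x - lam * G(x)
        y = proj_C(d - alpha_k * x)      # y^k = Pr_C(x^k - lam*G(x^k) - alpha_k*x^k)
        x = proj_C(d + mu * (y - x))     # x^{k+1} = Pr_C[x^k - lam*G(x^k) + mu*(y^k - x^k)]
    return x
```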

Recently, Anh et al. in [19] introduced an extragradient algorithm for solving problem (BVI) in the Euclidean space $\mathbb{R}^n$. Roughly speaking, the algorithm consists of two loops. At each iteration k of the outer loop, they applied the extragradient method to the lower variational inequality problem and then, starting from the obtained iterate, computed an $\epsilon_k$-solution of problem $\mathrm{VI}(G,C)$ in the inner loop. The convergence of the algorithm crucially depends on the starting point $x^0$ and on the parameters chosen in advance. More precisely, they presented the following scheme:

Step 1. Compute $y^k := \Pr_C(x^k - \alpha_k G(x^k))$ and $z^k := \Pr_C(x^k - \alpha_k G(y^k))$.

Step 2. Inner iterations $j = 0, 1, \dots$ : compute

$$\begin{cases} x^{k,0} := z^k - \lambda F(z^k), \\ y^{k,j} := \Pr_C(x^{k,j} - \delta_j G(x^{k,j})), \\ x^{k,j+1} := \alpha_j x^{k,0} + \beta_j x^{k,j} + \gamma_j \Pr_C(x^{k,j} - \delta_j G(y^{k,j})). \end{cases}$$

If $\|x^{k,j+1} - \Pr_{\operatorname{Sol}(G,C)}(x^{k,0})\| \le \bar{\epsilon}_k$, then set $h^k := x^{k,j+1}$ and go to Step 3. Otherwise, increase $j$ by 1.

Step 3. Set $x^{k+1} := \alpha_k u + \beta_k x^k + \gamma_k h^k$. Increase $k$ by 1 and go to Step 1.

Under the assumptions that F is strongly monotone and Lipschitz continuous and G is pseudomonotone and Lipschitz continuous on C, with the parameter sequences chosen appropriately, they showed that the iterative sequences $\{x^k\}$ and $\{z^k\}$ converge to the same point $x^*$, which is a solution of problem (BVI). However, at each iteration of the outer loop, the scheme requires computing an approximate solution of a variational inequality problem; a schematic sketch of this two-loop structure is given below.
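The following Python sketch mimics the two-loop scheme above. The stopping test involves the projection onto $\operatorname{Sol}(G,C)$, which is not computable in general, so we replace it with a fixed inner-iteration budget; this surrogate, the anchor point `u`, and all parameter sequences are our illustrative assumptions, not the exact choices of [19].

```python
import numpy as np

def anh_two_loop(proj_C, F, G, x0, u, lam, num_outer=100, num_inner=50):
    """Schematic sketch of the two-loop extragradient scheme of Anh et al. [19]."""
    x = np.asarray(x0, dtype=float)
    for k in range(num_outer):
        a_k = 1.0 / (k + 2)                      # alpha_k -> 0
        # Step 1: extragradient step for the lower problem VI(G, C).
        y = proj_C(x - a_k * G(x))
        z = proj_C(x - a_k * G(y))
        # Step 2: inner loop; the true stopping test compares against
        # Pr_{Sol(G,C)}(x^{k,0}), which is uncomputable, so we simply run
        # a fixed number of inner iterations as a practical surrogate.
        x_k0 = z - lam * F(z)
        x_kj = x_k0
        for j in range(num_inner):
            a_j = 1.0 / (j + 2)
            b_j = 0.5 * (1.0 - a_j)
            g_j = 1.0 - a_j - b_j                # a_j + b_j + g_j = 1
            d_j = 1.0 / (j + 2)                  # delta_j
            y_kj = proj_C(x_kj - d_j * G(x_kj))
            x_kj = a_j * x_k0 + b_j * x_kj + g_j * proj_C(x_kj - d_j * G(y_kj))
        h = x_kj
        # Step 3: anchored convex combination toward u.
        b_k = 0.5 * (1.0 - a_k)
        x = a_k * u + b_k * x + (1.0 - a_k - b_k) * h
    return x
```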

There exist other solution methods for bilevel variational inequalities when the cost operator enjoys some monotonicity (see [16, 19–21]). All of these methods require solving auxiliary variational inequalities. In order to avoid this requirement, we combine the projected gradient method in [10] for solving variational inequalities with the fixed point property that x is a solution of problem $\mathrm{VI}(F,C)$ if and only if it is a fixed point of the mapping $\Pr_C(x - \lambda F(x))$, where $\lambda > 0$. The strong convergence of the proposed sequences is then established in a real Hilbert space.

In this paper, we are interested in finding a solution to bilevel variational inequalities (BVI), where the operators F and G satisfy the following usual conditions:

(A1) G is η-inverse strongly monotone on ℋ and F is β-strongly monotone on C.

(A2) F is L-Lipschitz continuous on C.

(A3) The solution set Ω of problem (BVI) is nonempty.

The purpose of this paper is to propose an algorithm for directly solving bilevel pseudomonotone variational inequalities by using the projected gradient method and fixed point techniques.

The rest of this paper is divided into two sections. In Section 2, we recall some properties of monotone operators and of the metric projection onto a closed convex set, and we introduce in detail a new algorithm for solving problem (BVI). Section 3 is devoted to the convergence analysis of the algorithm.

2 Preliminaries

We list some well-known definitions and properties of the metric projection which will be used in our analysis.

Definition 2.1 Let C be a nonempty closed convex subset of ℋ. We denote the projection onto C by $\Pr_C(\cdot)$, i.e.,

$$\Pr_C(x) = \operatorname{argmin} \{ \|y - x\| : y \in C \}, \quad \forall x \in \mathcal{H}.$$

The operator $\varphi: C \to \mathcal{H}$ is said to be

(i) γ-strongly monotone on C if for each $x, y \in C$,

$$\langle \varphi(x) - \varphi(y), x - y \rangle \ge \gamma \|x - y\|^2;$$

(ii) η-inverse strongly monotone on C if for each $x, y \in C$,

$$\langle \varphi(x) - \varphi(y), x - y \rangle \ge \eta \|\varphi(x) - \varphi(y)\|^2;$$

(iii) Lipschitz continuous with constant $L > 0$ (shortly, L-Lipschitz continuous) on C if for each $x, y \in C$,

$$\|\varphi(x) - \varphi(y)\| \le L \|x - y\|.$$

If $\varphi: C \to C$ and $L = 1$, then φ is called nonexpansive on C.
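All algorithms considered below access the set C only through the projection $\Pr_C(\cdot)$. For many standard sets this projection is available in closed form; the following is a small illustrative sketch (our examples, not taken from the paper):

```python
import numpy as np

def proj_box(x, lo, hi):
    """Pr_C for the box C = {y : lo <= y <= hi}: clip coordinatewise."""
    return np.clip(x, lo, hi)

def proj_ball(x, center, radius):
    """Pr_C for the closed ball C = {y : ||y - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + (radius / n) * d
```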

We know that the projection Pr C () has the following well-known basic properties.

Property 2.2

(a) $\|\Pr_C(x) - \Pr_C(y)\| \le \|x - y\|$, $\forall x, y \in \mathcal{H}$.

(b) $\langle x - \Pr_C(x), y - \Pr_C(x) \rangle \le 0$, $\forall y \in C, x \in \mathcal{H}$.

(c) $\|\Pr_C(x) - \Pr_C(y)\|^2 \le \|x - y\|^2 - \|\Pr_C(x) - x + y - \Pr_C(y)\|^2$, $\forall x, y \in \mathcal{H}$.

To prove the main theorem of this paper, we need the following lemma.

Lemma 2.3 (see [21])

Let $A: \mathcal{H} \to \mathcal{H}$ be β-strongly monotone and L-Lipschitz continuous, $\lambda \in (0,1]$ and $\mu \in (0, \frac{2\beta}{L^2})$. Then the mapping $T(x) := x - \lambda\mu A(x)$ for all $x \in \mathcal{H}$ satisfies the inequality

$$\|T(x) - T(y)\| \le (1 - \lambda\tau)\|x - y\|, \quad \forall x, y \in \mathcal{H},$$

where $\tau = 1 - \sqrt{1 - \mu(2\beta - \mu L^2)} \in (0,1]$.
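As a quick numerical sanity check of Lemma 2.3, the snippet below verifies the contraction inequality for an illustrative linear operator $A(x) = Mx$; the matrix, constants and test points are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.diag([2.0, 3.0])           # A(x) = Mx: beta-strongly monotone, L-Lipschitz
beta, L = 2.0, 3.0
mu = beta / L**2                  # mu in (0, 2*beta/L^2)
lam = 0.5                         # lam in (0, 1]
tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * beta - mu * L**2))

T = lambda x: x - lam * mu * (M @ x)
x, y = rng.standard_normal(2), rng.standard_normal(2)
# ||T(x) - T(y)|| <= (1 - lam*tau) ||x - y|| as the lemma asserts
assert np.linalg.norm(T(x) - T(y)) <= (1.0 - lam * tau) * np.linalg.norm(x - y)
```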

Lemma 2.4 (see [22])

Let ℋ be a real Hilbert space, C be a nonempty closed and convex subset of ℋ and $S: C \to \mathcal{H}$ be a nonexpansive mapping. Then $I - S$ (where I is the identity operator on ℋ) is demiclosed at $y \in \mathcal{H}$, i.e., for any sequence $(x^k)$ in C such that $x^k \rightharpoonup \bar{x}$ and $(I - S)(x^k) \to y$, we have $(I - S)(\bar{x}) = y$.

Lemma 2.5 (see [19])

Let $\{a_n\}$ be a sequence of nonnegative real numbers such that

$$a_{n+1} \le (1 - \gamma_n) a_n + \delta_n, \quad \forall n \ge 0,$$

where $\{\gamma_n\} \subset (0,1)$ and $\{\delta_n\}$ is a sequence in ℝ such that

(a) $\sum_{n=0}^{\infty} \gamma_n = \infty$,

(b) $\limsup_{n\to\infty} \frac{\delta_n}{\gamma_n} \le 0$ or $\sum_{n=0}^{\infty} |\delta_n| < +\infty$.

Then $\lim_{n\to\infty} a_n = 0$.
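For intuition, here is a toy instance of Lemma 2.5 with the illustrative choices $\gamma_n = 1/(n+1)$ and $\delta_n = 1/(n+1)^2$, for which $\sum_n \gamma_n = \infty$ and $\delta_n/\gamma_n \to 0$:

```python
# gamma_n = 1/(n+1) in (0,1) with divergent sum; delta_n = 1/(n+1)^2,
# so delta_n / gamma_n = 1/(n+1) -> 0. Lemma 2.5 then gives a_n -> 0.
a = 1.0
for n in range(1, 10**5):
    a = (1.0 - 1.0 / (n + 1)) * a + 1.0 / (n + 1) ** 2
print(a)  # ~1e-4, tending to 0 at roughly a log(n)/n rate
```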

Now we are in a position to describe an algorithm for problem (BVI). The proposed algorithm can be considered as a combination of the projected gradient and fixed point methods. Roughly speaking, the algorithm consists of two steps. First, we use the well-known projected gradient method for solving the variational inequality $\mathrm{VI}(G,C)$: $x^{k+1} = \Pr_C(x^k - \lambda G(x^k))$ $(k = 0, 1, \dots)$, where $\lambda > 0$ and $x^0 \in C$. This method generates a sequence $(x^k)$ converging strongly to the unique solution of problem $\mathrm{VI}(G,C)$ under the assumptions that G is L-Lipschitz continuous and α-strongly monotone on C, with step size $\lambda \in (0, \frac{2\alpha}{L^2})$. Next, we use the Banach contraction-mapping fixed-point principle to find the unique fixed point of the contraction mapping $T_\lambda = I - \lambda\mu F$, where F is β-strongly monotone and L-Lipschitz continuous, I is the identity mapping, $\mu \in (0, \frac{2\beta}{L^2})$ and $\lambda \in (0,1]$. The algorithm is presented in detail as follows.

Algorithm 2.6 (Projection algorithm for solving (BVI))

Step 0. Choose $x^0 \in C$, set $k = 0$, and choose a positive sequence $\{\alpha_k\}$ and parameters λ, μ such that

$$\begin{cases} 0 < \alpha_k \le \min\{1, \frac{1}{\tau}\}, \quad \tau = 1 - \sqrt{1 - \mu(2\beta - \mu L^2)}, \\ \lim_{k\to\infty} \alpha_k = 0, \quad \lim_{k\to\infty} \left| \frac{1}{\alpha_{k+1}} - \frac{1}{\alpha_k} \right| = 0, \quad \sum_{k=0}^{\infty} \alpha_k = \infty, \\ 0 < \lambda \le 2\eta, \quad 0 < \mu < \frac{2\beta}{L^2}. \end{cases} \tag{2.1}$$

Step 1. Compute

$$\begin{cases} y^k := \Pr_C(x^k - \lambda G(x^k)), \\ x^{k+1} = y^k - \mu\alpha_k F(y^k). \end{cases}$$

Update k:=k+1, and go to Step 1.

Note that in the case $F(x) = 0$ for all $x \in C$, Algorithm 2.6 reduces to the projected gradient algorithm

$$x^{k+1} := \Pr_C(x^k - \lambda G(x^k)).$$
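A minimal Python sketch of Algorithm 2.6 is given below, assuming a projection oracle for C; the concrete step sizes and the choice $\alpha_k = \min\{1, 1/\tau\}/\sqrt{k+2}$, which satisfies all conditions in (2.1), are ours:

```python
import numpy as np

def algorithm_2_6(proj_C, F, G, x0, beta, L, eta, num_iters=10000):
    """Sketch of Algorithm 2.6 for problem (BVI) under (A1)-(A3).

    F  : beta-strongly monotone, L-Lipschitz operator (upper level)
    G  : eta-inverse strongly monotone operator (lower level)
    """
    lam = eta                     # any lam in (0, 2*eta]
    mu = beta / L**2              # any mu in (0, 2*beta/L^2)
    tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * beta - mu * L**2))
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        alpha_k = min(1.0, 1.0 / tau) / np.sqrt(k + 2)   # satisfies (2.1)
        y = proj_C(x - lam * G(x))       # projected-gradient step for VI(G, C)
        x = y - mu * alpha_k * F(y)      # damped step toward the fixed point of I - mu*F
    return x
```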

3 Convergence results

In this section, we state and prove our main results.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space ℋ. Let the mappings $F: C \to \mathcal{H}$ and $G: \mathcal{H} \to \mathcal{H}$ satisfy assumptions (A1)-(A3). Then the sequences $(x^k)$ and $(y^k)$ generated by Algorithm 2.6 converge strongly to the same point $x^* \in \Omega$.

Proof Under conditions (2.1), we consider the mapping $S_k: \mathcal{H} \to \mathcal{H}$ defined by

$$S_k(x) := \Pr_C(x - \lambda G(x)) - \mu\alpha_k F\bigl[\Pr_C(x - \lambda G(x))\bigr], \quad \forall x \in \mathcal{H}.$$

Using Property 2.2(a), the η-inverse strong monotonicity of G and conditions (2.1), for each $x, y \in \mathcal{H}$ we have

$$\begin{aligned} \|\Pr_C(x - \lambda G(x)) - \Pr_C(y - \lambda G(y))\|^2 &\le \|x - \lambda G(x) - y + \lambda G(y)\|^2 \\ &= \|x - y\|^2 + \lambda^2 \|G(x) - G(y)\|^2 - 2\lambda \langle x - y, G(x) - G(y) \rangle \\ &\le \|x - y\|^2 + \lambda(\lambda - 2\eta) \|G(x) - G(y)\|^2 \\ &\le \|x - y\|^2. \end{aligned} \tag{3.1}$$

Combining this and Lemma 2.3, we get

$$\begin{aligned} \|S_k(x) - S_k(y)\| &= \bigl\|\Pr_C(x - \lambda G(x)) - \mu\alpha_k F[\Pr_C(x - \lambda G(x))] - \Pr_C(y - \lambda G(y)) + \mu\alpha_k F[\Pr_C(y - \lambda G(y))]\bigr\| \\ &\le (1 - \alpha_k \tau)\|\Pr_C(x - \lambda G(x)) - \Pr_C(y - \lambda G(y))\| \\ &\le (1 - \alpha_k \tau)\|x - y\|, \end{aligned} \tag{3.2}$$

where $\tau := 1 - \sqrt{1 - \mu(2\beta - \mu L^2)}$. Thus, $S_k$ is a contraction on ℋ. By the Banach contraction principle, there exists a unique fixed point $\xi^k$ such that $S_k(\xi^k) = \xi^k$. For each $\hat{x} \in \operatorname{Sol}(G,C)$, set

$$\hat{C} := \left\{ x \in \mathcal{H} : \|x - \hat{x}\| \le \frac{\mu \|F(\hat{x})\|}{\tau} \right\}.$$

By this and Property 2.2(a), the mapping $S_k \circ \Pr_{\hat{C}}$ is contractive on ℋ, so there exists a unique point $z^k$ such that $S_k[\Pr_{\hat{C}}(z^k)] = z^k$. Set $\bar{z}^k = \Pr_{\hat{C}}(z^k)$. It follows from (3.2) that

$$\begin{aligned} \|z^k - \hat{x}\| &= \|S_k(\bar{z}^k) - \hat{x}\| \le \|S_k(\bar{z}^k) - S_k(\hat{x})\| + \|S_k(\hat{x}) - \hat{x}\| \\ &= \|S_k(\bar{z}^k) - S_k(\hat{x})\| + \|S_k(\hat{x}) - \Pr_C(\hat{x} - \lambda G(\hat{x}))\| \\ &\le (1 - \alpha_k\tau)\|\bar{z}^k - \hat{x}\| + \mu\alpha_k \bigl\|F[\Pr_C(\hat{x} - \lambda G(\hat{x}))]\bigr\| \\ &\le (1 - \alpha_k\tau)\frac{\mu\|F(\hat{x})\|}{\tau} + \mu\alpha_k\|F(\hat{x})\| = \frac{\mu\|F(\hat{x})\|}{\tau}. \end{aligned}$$

Thus, $z^k \in \hat{C}$, $S_k[\Pr_{\hat{C}}(z^k)] = S_k(z^k) = z^k$, and hence $\xi^k = z^k \in \hat{C}$. In particular, the sequence $(\xi^k)$ is bounded, so there exists a subsequence $(\xi^{k_i})$ such that $\xi^{k_i} \rightharpoonup \bar{\xi}$. Combining this with the assumption $\lim_{k\to\infty} \alpha_k = 0$, we get

$$\bigl\|\Pr_C(\xi^{k_i} - \lambda G(\xi^{k_i})) - \xi^{k_i}\bigr\| = \bigl\|\Pr_C(\xi^{k_i} - \lambda G(\xi^{k_i})) - S_{k_i}(\xi^{k_i})\bigr\| = \mu\alpha_{k_i}\bigl\|F[\Pr_C(\xi^{k_i} - \lambda G(\xi^{k_i}))]\bigr\| \to 0 \quad \text{as } i \to \infty. \tag{3.3}$$

It follows from (3.1) that the mapping $\Pr_C(\cdot - \lambda G(\cdot))$ is nonexpansive on ℋ. Using Lemma 2.4, (3.3) and $\xi^{k_i} \rightharpoonup \bar{\xi}$, we obtain $\Pr_C(\bar{\xi} - \lambda G(\bar{\xi})) = \bar{\xi}$, which implies $\bar{\xi} \in \operatorname{Sol}(G,C)$. Now we prove that $\lim_{j\to\infty} \xi^{k_j} = x^* \in \Omega$.

Set $\bar{z}^k = \Pr_C(\xi^k - \lambda G(\xi^k))$, $v^* = (\mu F - I)(x^*)$ and $v^k = (\mu F - I)(\bar{z}^k)$, where I is the identity mapping. Since $S_{k_j}(\xi^{k_j}) = \xi^{k_j}$ and $x^* = \Pr_C(x^* - \lambda G(x^*))$, we have

$$(1 - \alpha_{k_j})(\xi^{k_j} - \bar{z}^{k_j}) + \alpha_{k_j}(\xi^{k_j} + v^{k_j}) = 0$$

and

$$(1 - \alpha_{k_j})\bigl[I - \Pr_C(\cdot - \lambda G(\cdot))\bigr](x^*) + \alpha_{k_j}(x^* + v^*) = \alpha_{k_j}(x^* + v^*).$$

Then

$$\alpha_{k_j}\langle x^* + v^*, \xi^{k_j} - x^* \rangle = -(1 - \alpha_{k_j})\bigl\langle \xi^{k_j} - x^* - (\bar{z}^{k_j} - x^*), \xi^{k_j} - x^* \bigr\rangle - \alpha_{k_j}\bigl\langle \xi^{k_j} - x^* + v^{k_j} - v^*, \xi^{k_j} - x^* \bigr\rangle. \tag{3.4}$$

By the Schwarz inequality, we have

$$\begin{aligned} \bigl\langle \xi^{k_j} - x^* - (\bar{z}^{k_j} - x^*), \xi^{k_j} - x^* \bigr\rangle &\ge \|\xi^{k_j} - x^*\|^2 - \|\bar{z}^{k_j} - x^*\|\,\|\xi^{k_j} - x^*\| \\ &\ge \|\xi^{k_j} - x^*\|^2 - \|\xi^{k_j} - x^*\|^2 = 0 \end{aligned} \tag{3.5}$$

and

$$\begin{aligned} \bigl\langle \xi^{k_j} - x^* + v^{k_j} - v^*, \xi^{k_j} - x^* \bigr\rangle &\ge \|\xi^{k_j} - x^*\|^2 - \|v^{k_j} - v^*\|\,\|\xi^{k_j} - x^*\| \\ &\ge \|\xi^{k_j} - x^*\|^2 - (1 - \tau)\|\xi^{k_j} - x^*\|^2 = \tau\|\xi^{k_j} - x^*\|^2. \end{aligned} \tag{3.6}$$

Combining (3.4), (3.5) and (3.6), we get

$$\begin{aligned} \tau\|\xi^{k_j} - x^*\|^2 &\le -\langle x^* + v^*, \xi^{k_j} - x^* \rangle = -\mu\langle F(x^*), \xi^{k_j} - x^* \rangle \\ &= -\mu\langle F(x^*), \xi^{k_j} - \bar{\xi} \rangle - \mu\langle F(x^*), \bar{\xi} - x^* \rangle \\ &\le -\mu\langle F(x^*), \xi^{k_j} - \bar{\xi} \rangle, \end{aligned}$$

where the last inequality holds because $\bar{\xi} \in \operatorname{Sol}(G,C)$ and $x^* \in \Omega$, so that $\langle F(x^*), \bar{\xi} - x^* \rangle \ge 0$.

Then we have

$$\tau\|\xi^{k_j} - x^*\|^2 \le \mu\langle F(x^*), \bar{\xi} - \xi^{k_j} \rangle.$$

Letting $j \to \infty$, the right-hand side tends to 0 because $\xi^{k_j} \rightharpoonup \bar{\xi}$; hence the subsequence $\{\xi^{k_j}\}$ converges strongly to $x^*$. Since the weak cluster point was arbitrary, the whole sequence $\{\xi^k\}$ converges strongly to $x^*$.

On the other hand, by using (3.2), we have

$$\begin{aligned} \|x^k - \xi^k\| &\le \|x^k - \xi^{k-1}\| + \|\xi^{k-1} - \xi^k\| = \|S_{k-1}(x^{k-1}) - S_{k-1}(\xi^{k-1})\| + \|\xi^{k-1} - \xi^k\| \\ &\le (1 - \alpha_{k-1}\tau)\|x^{k-1} - \xi^{k-1}\| + \|\xi^{k-1} - \xi^k\|. \end{aligned} \tag{3.7}$$

Moreover, by Lemma 2.3, we have

$$\begin{aligned} \|\xi^{k-1} - \xi^k\| &= \|S_{k-1}(\xi^{k-1}) - S_k(\xi^k)\| = \bigl\|(1 - \alpha_k)\bar{z}^k - \alpha_k v^k - (1 - \alpha_{k-1})\bar{z}^{k-1} + \alpha_{k-1} v^{k-1}\bigr\| \\ &= \bigl\|(1 - \alpha_k)(\bar{z}^k - \bar{z}^{k-1}) - \alpha_k(v^k - v^{k-1}) + (\alpha_{k-1} - \alpha_k)(\bar{z}^{k-1} + v^{k-1})\bigr\| \\ &\le (1 - \alpha_k)\|\bar{z}^k - \bar{z}^{k-1}\| + \alpha_k\|v^k - v^{k-1}\| + |\alpha_{k-1} - \alpha_k|\,\mu\|F(\bar{z}^{k-1})\| \\ &\le (1 - \alpha_k)\|\xi^k - \xi^{k-1}\| + \alpha_k\sqrt{1 - \mu(2\beta - \mu L^2)}\,\|\xi^k - \xi^{k-1}\| + |\alpha_{k-1} - \alpha_k|\,\mu\|F(\bar{z}^{k-1})\|. \end{aligned}$$

This implies that

$$\alpha_k\tau\|\xi^{k-1} - \xi^k\| \le |\alpha_{k-1} - \alpha_k|\,\mu\|F(\bar{z}^{k-1})\|,$$

and hence

$$\|\xi^k - \xi^{k-1}\| \le \frac{\mu|\alpha_{k-1} - \alpha_k|\,\|F(\bar{z}^{k-1})\|}{\alpha_k\tau}.$$

So, we have

$$\|x^k - \xi^k\| \le (1 - \alpha_{k-1}\tau)\|x^{k-1} - \xi^{k-1}\| + \frac{\mu|\alpha_{k-1} - \alpha_k|\,\|F(\bar{z}^{k-1})\|}{\alpha_k\tau}.$$

Let

$$\delta_k := \frac{\mu|\alpha_k - \alpha_{k+1}|\,\|F(\bar{z}^k)\|}{\alpha_k\alpha_{k+1}\tau^2}, \quad \forall k \ge 0.$$

Then

$$\|x^k - \xi^k\| \le (1 - \alpha_{k-1}\tau)\|x^{k-1} - \xi^{k-1}\| + \alpha_{k-1}\tau\,\delta_{k-1}, \quad \forall k \ge 1.$$

Since $\{F(\bar{z}^k)\}$ is bounded, say $\|F(\bar{z}^k)\| \le K$ for all $k \ge 0$, we have

$$\lim_{k\to\infty}\delta_k = \lim_{k\to\infty}\frac{\mu|\alpha_k - \alpha_{k+1}|\,\|F(\bar{z}^k)\|}{\alpha_k\alpha_{k+1}\tau^2} \le \frac{\mu K}{\tau^2}\lim_{k\to\infty}\left|\frac{1}{\alpha_{k+1}} - \frac{1}{\alpha_k}\right| = 0.$$

Applying Lemma 2.5, we obtain $\lim_{k\to\infty}\|x^k - \xi^k\| = 0$. Combining this with the strong convergence of $\{\xi^k\}$ to $x^*$, the sequence $\{x^k\}$ also converges strongly to the unique solution of problem (BVI). □

Now we consider the special case $F(x) = x$ for all $x \in \mathcal{H}$. It is easy to see that F is Lipschitz continuous with constant $L = 1$ and strongly monotone with constant $\beta = 1$ on ℋ. Problem (BVI) then becomes the minimum-norm problem over the solution set of the variational inequalities.

Corollary 3.2 Let C be a nonempty closed convex subset of a real Hilbert space ℋ. Let $G: \mathcal{H} \to \mathcal{H}$ be η-inverse strongly monotone. The iteration sequence $(x^k)$ is defined by

$$\begin{cases} y^k := \Pr_C(x^k - \lambda G(x^k)), \\ x^{k+1} = (1 - \mu\alpha_k) y^k. \end{cases}$$

The parameters satisfy the following:

$$\begin{cases} 0 < \alpha_k \le \min\{1, \frac{1}{\tau}\}, \quad \tau = 1 - |1 - \mu|, \\ \lim_{k\to\infty}\alpha_k = 0, \quad \lim_{k\to\infty}\left|\frac{1}{\alpha_{k+1}} - \frac{1}{\alpha_k}\right| = 0, \quad \sum_{k=0}^{\infty} \alpha_k = \infty, \\ 0 < \lambda \le 2\eta, \quad 0 < \mu < 2. \end{cases}$$

Then the sequences $\{x^k\}$ and $\{y^k\}$ converge strongly to the same point $\hat{x} = \Pr_{\operatorname{Sol}(G,C)}(0)$.
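As a worked illustration of Corollary 3.2 (the operator, the set and all parameters below are our assumptions): take ℋ = ℝ², $C = [-1,1]^2$ and $G(x) = A(x - c)$ with $A = \operatorname{diag}(1, 0)$ and $c = (0.5, 0.7)$. Then G is 1-inverse strongly monotone, $\operatorname{Sol}(G,C) = \{x \in C : x_1 = 0.5\}$, and the minimum-norm solution is $(0.5, 0)$.

```python
import numpy as np

A = np.diag([1.0, 0.0])
c = np.array([0.5, 0.7])
G = lambda x: A @ (x - c)                   # eta = 1 (inverse strongly monotone)
proj_C = lambda x: np.clip(x, -1.0, 1.0)    # C = [-1, 1]^2

lam, mu = 1.0, 1.0                          # lam in (0, 2], mu in (0, 2); tau = 1 - |1 - mu| = 1
x = np.array([-1.0, 1.0])
for k in range(200000):
    alpha_k = 1.0 / np.sqrt(k + 2)          # satisfies the parameter conditions above
    y = proj_C(x - lam * G(x))
    x = (1.0 - mu * alpha_k) * y
print(x)  # -> approximately [0.5, 0.0] = Pr_{Sol(G,C)}(0)
```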