1 Introduction

Variational inequality problems are among the most interesting and intensively studied classes of mathematical problems and have wide applications in optimization and control, economics, transportation equilibrium and engineering science. In recent years, a substantial number of numerical methods have been developed for solving variational inequalities and related problems, including fixed point methods, projection operator techniques, Wiener-Hopf equations, the auxiliary principle, the KKM technique, linear approximation, decomposition methods, penalty functions, splitting methods, inertial proximal methods, dynamical systems and well-posedness approaches (see [1–13] and the references therein).

One of the most common methods for solving a variational problem is to transform the variational inequality into an operator equation, and then to transform the operator equation into a fixed point problem. In the present paper, we introduce and study a class of new systems of generalized set-valued nonlinear quasi-variational inequalities in a Hilbert space. We prove that the system of generalized set-valued nonlinear quasi-variational inequalities is equivalent to a fixed point problem and to a system of Wiener-Hopf equations. By using the projection operator technique and the system of Wiener-Hopf equations technique, we suggest several new iterative algorithms for finding approximate solutions to these problems and prove the convergence of the different types of iterative sequences. This is the first time that the system of Wiener-Hopf equations technique has been used to solve systems of variational inequality problems, and the technique is more general than the projection operator technique. Our results improve and extend some known results in the literature.

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ respectively. Let $K$ be a nonempty closed convex set in $H$ and $C(H)$ be the family of all nonempty compact subsets of $H$. Given two nonlinear mappings $A_i: K\times K\to H$ and two set-valued mappings $B_i: H\times H\to C(H)$, $i=1,2$, we consider the following problem of finding $(x,y)\in K\times K$ such that $\omega_1\in B_1(x,y)$, $\omega_2\in B_2(x,y)$ and

$$
\begin{cases}
\langle A_1(x,y)+\omega_1,\; u-x\rangle \ge 0, & \forall u\in K,\\
\langle A_2(x,y)+\omega_2,\; v-y\rangle \ge 0, & \forall v\in K,
\end{cases}
$$
(1.1)

which is called the system of generalized set-valued nonlinear quasi-variational inequalities.

It is worth mentioning that in many important problems, the closed convex set $K$ also depends upon the solution explicitly or implicitly. Given two point-to-set mappings $K_1: x\mapsto K_1(x)$ and $K_2: y\mapsto K_2(y)$, which associate two closed convex sets $K_1(x)$ and $K_2(y)$ with any elements $x$, $y$ of $H$, we consider the problem of finding $(x,y)\in K_1(x)\times K_2(y)$ such that $\omega_1\in B_1(x,y)$, $\omega_2\in B_2(x,y)$ and

$$
\begin{cases}
\langle A_1(x,y)+\omega_1,\; u-x\rangle \ge 0, & \forall u\in K_1(x),\\
\langle A_2(x,y)+\omega_2,\; v-y\rangle \ge 0, & \forall v\in K_2(y),
\end{cases}
$$
(1.2)

which is called the system of generalized set-valued nonlinear implicit quasi-variational inequalities. We remark that if $K_1(x)=K_2(y)=K$, a nonempty closed convex set in $H$, then the problem (1.2) reduces to the problem (1.1).

If the closed convex sets $K_1(x)$ and $K_2(y)$ are of the form $K_1(x)=m_1(x)+K_1$ and $K_2(y)=m_2(y)+K_2$, where $K_1$ and $K_2$ are two nonempty closed convex sets and $m_1$, $m_2$ are two point-to-point mappings, then the problem (1.2) is equivalent to finding $(x,y)\in K_1(x)\times K_2(y)=(m_1(x)+K_1)\times(m_2(y)+K_2)$ such that $\omega_1\in B_1(x,y)$, $\omega_2\in B_2(x,y)$ and

$$
\begin{cases}
\langle A_1(x,y)+\omega_1,\; u-x\rangle \ge 0, & \forall u\in K_1(x)=m_1(x)+K_1,\\
\langle A_2(x,y)+\omega_2,\; v-y\rangle \ge 0, & \forall v\in K_2(y)=m_2(y)+K_2.
\end{cases}
$$
(1.3)

If $A_1=A_2=V\!A+B$, $B_1=B_2=T$ and $K_1(x)=K_2(y)=K(x)$, then the problem (1.2) is equivalent to finding $x\in K(x)$ such that $\omega\in T(x)$, $y\in A(x)$ and

$$
\langle \omega+Vy+Bx,\; v-x\rangle \ge 0, \quad \forall v\in K(x),
$$
(1.4)

which is due to Noor [1].

If $A_1=A_2=A$, $B_1=B_2=B$, then the problem (1.1) is equivalent to finding $x\in K$ such that $\omega\in B(x)$ and

$$
\langle A(x)+\omega,\; v-x\rangle \ge 0, \quad \forall v\in K.
$$
(1.5)

2 Preliminaries

We need the following known concepts and results.

Definition 2.1 (see [2–6])

Let $H$ be a Hilbert space and $A(\cdot,\cdot): H\times H\to H$ be a nonlinear mapping. $A$ is said to be

(i) $\gamma$-strongly monotone with respect to the first argument, if there exists a constant $\gamma>0$ such that

$$
\langle A(x_1,\cdot)-A(x_2,\cdot),\; x_1-x_2\rangle \ge \gamma\|x_1-x_2\|^2, \quad \forall x_1,x_2\in H;
$$

similarly, we can define $A$ to be strongly monotone with respect to the second argument;

(ii) $(\tau,\zeta)$-relaxed co-coercive, if there exist constants $\tau>0$, $\zeta>0$ such that

$$
\langle A(x_1,\cdot)-A(x_2,\cdot),\; x_1-x_2\rangle \ge -\tau\|A(x_1,\cdot)-A(x_2,\cdot)\|^2 + \zeta\|x_1-x_2\|^2, \quad \forall x_1,x_2\in H;
$$

(iii) $(\xi,\eta)$-Lipschitz continuous, if there exist constants $\xi>0$, $\eta>0$ such that

$$
\|A(x_1,y_1)-A(x_2,y_2)\| \le \xi\|x_1-x_2\| + \eta\|y_1-y_2\|, \quad \forall x_1,x_2,y_1,y_2\in H.
$$
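The constants in Definition 2.1 can be checked numerically for a concrete map. The following sketch is not part of the original paper: it takes $H=\mathbb{R}^5$ and a hypothetical affine map $A(x,y)=2x+0.1y$, which is $\gamma$-strongly monotone in the first argument with $\gamma=2$ and $(\xi,\eta)$-Lipschitz continuous with $(\xi,\eta)=(2,0.1)$.

```python
import numpy as np

# Hypothetical affine map A(x, y) = 2x + 0.1y on H = R^5 (illustration only):
# gamma-strongly monotone in the first argument with gamma = 2, and
# (xi, eta)-Lipschitz continuous with (xi, eta) = (2, 0.1).
A = lambda x, y: 2.0 * x + 0.1 * y

rng = np.random.default_rng(2)
x1, x2, y1, y2 = rng.normal(size=(4, 5))
y = rng.normal(size=5)

# gamma-strong monotonicity: <A(x1, .) - A(x2, .), x1 - x2> >= gamma ||x1 - x2||^2.
assert np.dot(A(x1, y) - A(x2, y), x1 - x2) >= 2.0 * np.dot(x1 - x2, x1 - x2) - 1e-9

# (xi, eta)-Lipschitz continuity: ||A(x1,y1) - A(x2,y2)|| <= xi ||x1-x2|| + eta ||y1-y2||.
lhs = np.linalg.norm(A(x1, y1) - A(x2, y2))
assert lhs <= 2.0 * np.linalg.norm(x1 - x2) + 0.1 * np.linalg.norm(y1 - y2) + 1e-9
```

Note that, as Remark 4.8 observes, such a strongly monotone map is also $(\tau,\zeta)$-relaxed co-coercive.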

Definition 2.2 (see [7])

Let $B(\cdot,\cdot): H\times H\to C(H)$ be a set-valued mapping. $B$ is said to be $(\alpha,\beta)$-$H$-Lipschitz continuous if there exist constants $\alpha>0$, $\beta>0$ such that

$$
H\bigl(B(x_1,y_1), B(x_2,y_2)\bigr) \le \alpha\|x_1-x_2\| + \beta\|y_1-y_2\|, \quad \forall x_1,x_2,y_1,y_2\in H,
$$

where $H(\cdot,\cdot)$ is the Hausdorff metric on $C(H)$.

Lemma 2.3 (see [8, 9])

Let $H$ be a Hilbert space and $K$ be a nonempty closed convex set in $H$. Then, for a given $z\in H$, $u\in K$ satisfies the inequality

$$
\langle u-z,\; v-u\rangle \ge 0, \quad \forall v\in K,
$$

if and only if

$$
u = P_K z,
$$

where $P_K$ is the projection of $H$ onto $K$. Furthermore, the operator $P_K$ is nonexpansive, i.e.,

$$
\|P_K(u) - P_K(v)\| \le \|u-v\|, \quad \forall u,v\in H.
$$
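For intuition, the characterization in Lemma 2.3 can be verified numerically when $H=\mathbb{R}^n$ and $K$ is a box, in which case $P_K$ is a componentwise clip; the helper `project_box` below is a hypothetical name, not from the paper.

```python
import numpy as np

def project_box(z, lo, hi):
    """Projection P_K of z onto the box K = [lo, hi]^n (componentwise clip)."""
    return np.clip(z, lo, hi)

rng = np.random.default_rng(0)
z = 3.0 * rng.normal(size=5)          # a point typically outside K = [-1, 1]^5
u = project_box(z, -1.0, 1.0)

# Characterization of Lemma 2.3: <u - z, v - u> >= 0 for all v in K.
for _ in range(1000):
    v = rng.uniform(-1.0, 1.0, size=5)
    assert np.dot(u - z, v - u) >= -1e-12

# Nonexpansiveness: ||P_K(a) - P_K(b)|| <= ||a - b||.
a, b = rng.normal(size=5), rng.normal(size=5)
assert (np.linalg.norm(project_box(a, -1.0, 1.0) - project_box(b, -1.0, 1.0))
        <= np.linalg.norm(a - b) + 1e-12)
```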

Assumption 2.4 Let $H$ be a real Hilbert space, $K_1(x)$ and $K_2(y)$ be two nonempty closed convex sets. For all $x,y,z\in H$, the operators $P_{K_1(x)}$ and $P_{K_2(y)}$ satisfy the relations

$$
\|P_{K_1(x)} z - P_{K_1(y)} z\| \le s_1\|x-y\|,
$$
(2.1)
$$
\|P_{K_2(x)} z - P_{K_2(y)} z\| \le s_2\|x-y\|,
$$
(2.2)

where $s_1>0$, $s_2>0$ are two constants.

Remark 2.5 We remark that Assumption 2.4 also holds in the case $K_1(x)=m_1(x)+K_1$, $K_2(y)=m_2(y)+K_2$, when the point-to-point mappings $m_1$, $m_2$ are $\mu_1$-, $\mu_2$-Lipschitz continuous respectively. For all $x,y,z\in H$, it is well known that

$$
P_{K_1(x)} z = P_{m_1(x)+K_1} z = m_1(x) + P_{K_1}\bigl[z - m_1(x)\bigr], \qquad
P_{K_2(y)} z = P_{m_2(y)+K_2} z = m_2(y) + P_{K_2}\bigl[z - m_2(y)\bigr],
$$

and

$$
\begin{aligned}
\|P_{K_1(x)} z - P_{K_1(y)} z\|
&= \bigl\|m_1(x)-m_1(y) + P_{K_1}[z-m_1(x)] - P_{K_1}[z-m_1(y)]\bigr\|\\
&\le \|m_1(x)-m_1(y)\| + \bigl\|P_{K_1}[z-m_1(x)] - P_{K_1}[z-m_1(y)]\bigr\|\\
&\le 2\|m_1(x)-m_1(y)\| \le 2\mu_1\|x-y\|,
\end{aligned}
$$

which shows that (2.1) holds with $s_1=2\mu_1>0$. Similarly, (2.2) holds with $s_2=2\mu_2>0$.
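The translation identity $P_{m+K}z = m + P_K[z-m]$ and the resulting bound in Remark 2.5 can be checked numerically for a box $K$ in $\mathbb{R}^3$; this is an illustrative sketch, not from the paper, with `project_shifted_box` a hypothetical helper.

```python
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    """P_K for the box K = [-1, 1]^3 (componentwise clip)."""
    return np.clip(z, lo, hi)

def project_shifted_box(z, m):
    """Translation identity: P_{m + K} z = m + P_K[z - m]."""
    return m + project_box(z - m)

rng = np.random.default_rng(1)
z, m1x, m1y = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# Remark 2.5 bound with hypothetical values m1(x) = m1x, m1(y) = m1y:
# ||P_{K1(x)} z - P_{K1(y)} z|| <= 2 ||m1(x) - m1(y)||.
lhs = np.linalg.norm(project_shifted_box(z, m1x) - project_shifted_box(z, m1y))
assert lhs <= 2.0 * np.linalg.norm(m1x - m1y) + 1e-12
```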

Lemma 2.6 The system of generalized set-valued nonlinear implicit quasi-variational inequalities (1.2) has a solution $(x,y,\omega_1,\omega_2)$ if and only if $(x,y,\omega_1,\omega_2)$ satisfies the relations

$$
\begin{cases}
x = P_{K_1(x)}\bigl[x - \rho_1(A_1(x,y)+\omega_1)\bigr],\\
y = P_{K_2(y)}\bigl[y - \rho_2(A_2(x,y)+\omega_2)\bigr],
\end{cases}
$$
(2.3)

where $(x,y)\in K_1(x)\times K_2(y)$, $\omega_1\in B_1(x,y)$, $\omega_2\in B_2(x,y)$, $\rho_1,\rho_2>0$ are two constants, and $P_{K_1(x)}: H\to K_1(x)$, $P_{K_2(y)}: H\to K_2(y)$ are two projection operators.

Proof The conclusion follows directly from Lemma 2.3. □

Lemma 2.7 (see [10, 11])

Let $E$ be a complete metric space and $CB(E)$ be the family of all nonempty closed bounded subsets of $E$, and let $T: E\to CB(E)$ be a set-valued mapping. Then for any given $\varepsilon>0$ and any given $x,y\in E$, $u\in Tx$, there exists $v\in Ty$ such that

$$
d(u,v) \le (1+\varepsilon)\, H(Tx,Ty),
$$

where $H(\cdot,\cdot)$ is the Hausdorff metric on $CB(E)$.

3 Projection operator technique

Using the projection operator technique, Lemma 2.6 and Lemma 2.7, we construct the following iterative algorithms.

Algorithm 3.1 Let $H$ be a real Hilbert space, $K_1(x)$ and $K_2(y)$ be two nonempty closed convex sets in $H$, $A_i: K_1(x)\times K_2(y)\to H$ be two nonlinear mappings and $B_i: H\times H\to C(H)$ be two set-valued mappings, $i=1,2$. For any given $(x_0,y_0)\in K_1(x_0)\times K_2(y_0)$, take $\omega_0^1\in B_1(x_0,y_0)$, $\omega_0^2\in B_2(x_0,y_0)$, and let

$$
\begin{aligned}
x_1 &= (1-\lambda_1)x_0 + \lambda_1 P_{K_1(x_0)}\bigl[x_0 - \rho_1(A_1(x_0,y_0)+\omega_0^1)\bigr],\\
y_1 &= (1-\lambda_2)y_0 + \lambda_2 P_{K_2(y_0)}\bigl[y_0 - \rho_2(A_2(x_0,y_0)+\omega_0^2)\bigr].
\end{aligned}
$$

Since $\omega_0^1\in B_1(x_0,y_0)$ and $\omega_0^2\in B_2(x_0,y_0)$, by Lemma 2.7 there exist $\omega_1^1\in B_1(x_1,y_1)$, $\omega_1^2\in B_2(x_1,y_1)$ such that

$$
\begin{aligned}
\|\omega_1^1-\omega_0^1\| &\le (1+1)\, H\bigl(B_1(x_1,y_1), B_1(x_0,y_0)\bigr),\\
\|\omega_1^2-\omega_0^2\| &\le (1+1)\, H\bigl(B_2(x_1,y_1), B_2(x_0,y_0)\bigr).
\end{aligned}
$$

Let

$$
\begin{aligned}
x_2 &= (1-\lambda_1)x_1 + \lambda_1 P_{K_1(x_1)}\bigl[x_1 - \rho_1(A_1(x_1,y_1)+\omega_1^1)\bigr],\\
y_2 &= (1-\lambda_2)y_1 + \lambda_2 P_{K_2(y_1)}\bigl[y_1 - \rho_2(A_2(x_1,y_1)+\omega_1^2)\bigr].
\end{aligned}
$$

Since $\omega_1^1\in B_1(x_1,y_1)$ and $\omega_1^2\in B_2(x_1,y_1)$, by Lemma 2.7 there exist $\omega_2^1\in B_1(x_2,y_2)$, $\omega_2^2\in B_2(x_2,y_2)$ such that

$$
\begin{aligned}
\|\omega_2^1-\omega_1^1\| &\le \Bigl(1+\tfrac12\Bigr) H\bigl(B_1(x_2,y_2), B_1(x_1,y_1)\bigr),\\
\|\omega_2^2-\omega_1^2\| &\le \Bigl(1+\tfrac12\Bigr) H\bigl(B_2(x_2,y_2), B_2(x_1,y_1)\bigr).
\end{aligned}
$$

By induction, we can define iterative sequences $\{x_n\}$, $\{y_n\}$, $\{\omega_n^1\}$ and $\{\omega_n^2\}$ satisfying

$$
x_{n+1} = (1-\lambda_1)x_n + \lambda_1 P_{K_1(x_n)}\bigl[x_n - \rho_1(A_1(x_n,y_n)+\omega_n^1)\bigr],
$$
(3.1)
$$
y_{n+1} = (1-\lambda_2)y_n + \lambda_2 P_{K_2(y_n)}\bigl[y_n - \rho_2(A_2(x_n,y_n)+\omega_n^2)\bigr],
$$
(3.2)
$$
\omega_n^1\in B_1(x_n,y_n), \qquad \|\omega_{n+1}^1-\omega_n^1\| \le \Bigl(1+\tfrac{1}{n+1}\Bigr) H\bigl(B_1(x_{n+1},y_{n+1}), B_1(x_n,y_n)\bigr),
$$
(3.3)
$$
\omega_n^2\in B_2(x_n,y_n), \qquad \|\omega_{n+1}^2-\omega_n^2\| \le \Bigl(1+\tfrac{1}{n+1}\Bigr) H\bigl(B_2(x_{n+1},y_{n+1}), B_2(x_n,y_n)\bigr),
$$
(3.4)

where $n=0,1,2,\ldots$.

If $K_1(x)=K_2(y)=K$, we obtain the following algorithm from Algorithm 3.1.

Algorithm 3.2 We define iterative sequences $\{x_n\}$, $\{y_n\}$, $\{\omega_n^1\}$ and $\{\omega_n^2\}$ satisfying

$$
\begin{aligned}
x_{n+1} &= (1-\lambda_1)x_n + \lambda_1 P_K\bigl[x_n - \rho_1(A_1(x_n,y_n)+\omega_n^1)\bigr],\\
y_{n+1} &= (1-\lambda_2)y_n + \lambda_2 P_K\bigl[y_n - \rho_2(A_2(x_n,y_n)+\omega_n^2)\bigr],\\
\omega_n^1 &\in B_1(x_n,y_n), \qquad \|\omega_{n+1}^1-\omega_n^1\| \le \Bigl(1+\tfrac{1}{n+1}\Bigr) H\bigl(B_1(x_{n+1},y_{n+1}), B_1(x_n,y_n)\bigr),\\
\omega_n^2 &\in B_2(x_n,y_n), \qquad \|\omega_{n+1}^2-\omega_n^2\| \le \Bigl(1+\tfrac{1}{n+1}\Bigr) H\bigl(B_2(x_{n+1},y_{n+1}), B_2(x_n,y_n)\bigr),
\end{aligned}
$$

where $n=0,1,2,\ldots$.
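As an illustrative numerical sketch of Algorithm 3.2 (not from the paper, under the assumptions $H=\mathbb{R}^4$, $K=[-1,1]^4$, the single-valued case with each $B_i(x,y)=\{0\}$, and hypothetical strongly monotone affine maps $A_1$, $A_2$), the relaxed projection iteration converges to a point satisfying the fixed point relations of Lemma 2.6.

```python
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    """P_K for the box K = [-1, 1]^4 (componentwise clip)."""
    return np.clip(z, lo, hi)

# Hypothetical data: B1(x, y) = B2(x, y) = {0}, strongly monotone affine A1, A2.
n = 4
A1 = lambda x, y: 2.0 * x + 0.1 * y - 1.0
A2 = lambda x, y: 2.0 * y + 0.1 * x + 1.0
rho1 = rho2 = 0.3
lam1 = lam2 = 0.8

x, y = np.zeros(n), np.zeros(n)
for _ in range(200):
    # Schemes (3.1)-(3.2) with omega_n^1 = omega_n^2 = 0.
    x_new = (1 - lam1) * x + lam1 * project_box(x - rho1 * A1(x, y))
    y_new = (1 - lam2) * y + lam2 * project_box(y - rho2 * A2(x, y))
    x, y = x_new, y_new

# The limit satisfies the fixed point relations x = P_K[x - rho1(A1 + 0)], etc.
assert np.allclose(x, project_box(x - rho1 * A1(x, y)), atol=1e-6)
assert np.allclose(y, project_box(y - rho2 * A2(x, y)), atol=1e-6)
```

The relaxation parameters $\lambda_i$ damp the projection step exactly as in (3.1)-(3.2); with set-valued $B_i$ one would additionally select $\omega_n^i$ via the Nadler-type estimate of Lemma 2.7.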

Theorem 3.3 Let $H$ be a real Hilbert space, $K_1(x)$ and $K_2(y)$ be two nonempty closed convex sets in $H$. For $i=1,2$, let the nonlinear mappings $A_i: K\times K\to H$ be $(\xi_i,\eta_i)$-Lipschitz continuous and $\gamma_i$-strongly monotone with respect to the $i$th argument, and let $B_i: H\times H\to C(H)$ be $(\alpha_i,\beta_i)$-$H$-Lipschitz continuous. If Assumption 2.4 holds and there exist constants $\rho_i>0$, $0<\lambda_i<1$ such that

$$
0 < \max\Bigl\{
1-\lambda_1\Bigl[1-\Bigl(s_1+\sqrt{1-2\rho_1\gamma_1+\rho_1^2\xi_1^2}+\rho_1\alpha_1\Bigr)\Bigr] + \lambda_2\rho_2(\alpha_2+\xi_2),\;
1-\lambda_2\Bigl[1-\Bigl(s_2+\sqrt{1-2\rho_2\gamma_2+\rho_2^2\eta_2^2}+\rho_2\beta_2\Bigr)\Bigr] + \lambda_1\rho_1(\beta_1+\eta_1)
\Bigr\} < 1,
$$
(3.5)

then the problem (1.2) admits a solution $(x,y,\omega_1,\omega_2)$ and the sequences $\{x_n\}$, $\{y_n\}$, $\{\omega_n^1\}$ and $\{\omega_n^2\}$ generated by Algorithm 3.1 converge to $x$, $y$, $\omega_1$ and $\omega_2$ respectively.

Proof By Lemma 2.3, (2.1) and (3.1), we have

$$
\begin{aligned}
\|x_{n+1}-x_n\|
&= \bigl\|(1-\lambda_1)x_n + \lambda_1 P_{K_1(x_n)}\bigl[x_n-\rho_1(A_1(x_n,y_n)+\omega_n^1)\bigr] - (1-\lambda_1)x_{n-1} - \lambda_1 P_{K_1(x_{n-1})}\bigl[x_{n-1}-\rho_1(A_1(x_{n-1},y_{n-1})+\omega_{n-1}^1)\bigr]\bigr\|\\
&\le (1-\lambda_1)\|x_n-x_{n-1}\| + \lambda_1\bigl\|P_{K_1(x_n)}\bigl[x_n-\rho_1(A_1(x_n,y_n)+\omega_n^1)\bigr] - P_{K_1(x_{n-1})}\bigl[x_n-\rho_1(A_1(x_n,y_n)+\omega_n^1)\bigr]\bigr\|\\
&\quad + \lambda_1\bigl\|P_{K_1(x_{n-1})}\bigl[x_n-\rho_1(A_1(x_n,y_n)+\omega_n^1)\bigr] - P_{K_1(x_{n-1})}\bigl[x_{n-1}-\rho_1(A_1(x_{n-1},y_{n-1})+\omega_{n-1}^1)\bigr]\bigr\|\\
&\le (1-\lambda_1+\lambda_1 s_1)\|x_n-x_{n-1}\| + \lambda_1\bigl\|\bigl[x_n-\rho_1(A_1(x_n,y_n)+\omega_n^1)\bigr] - \bigl[x_{n-1}-\rho_1(A_1(x_{n-1},y_{n-1})+\omega_{n-1}^1)\bigr]\bigr\|\\
&\le (1-\lambda_1+\lambda_1 s_1)\|x_n-x_{n-1}\| + \lambda_1\bigl\|x_n-x_{n-1}-\rho_1\bigl(A_1(x_n,y_n)-A_1(x_{n-1},y_n)\bigr)\bigr\|\\
&\quad + \lambda_1\rho_1\bigl[\|\omega_n^1-\omega_{n-1}^1\| + \|A_1(x_{n-1},y_n)-A_1(x_{n-1},y_{n-1})\|\bigr].
\end{aligned}
$$
(3.6)

Since $A_1$ is $\gamma_1$-strongly monotone with respect to the first argument and Lipschitz continuous, we have

$$
\begin{aligned}
\bigl\|x_n-x_{n-1}-\rho_1\bigl(A_1(x_n,y_n)-A_1(x_{n-1},y_n)\bigr)\bigr\|^2
&= \|x_n-x_{n-1}\|^2 - 2\rho_1\bigl\langle A_1(x_n,y_n)-A_1(x_{n-1},y_n),\; x_n-x_{n-1}\bigr\rangle + \rho_1^2\bigl\|A_1(x_n,y_n)-A_1(x_{n-1},y_n)\bigr\|^2\\
&\le \bigl(1-2\rho_1\gamma_1+\rho_1^2\xi_1^2\bigr)\|x_n-x_{n-1}\|^2,
\end{aligned}
$$
(3.7)

and

$$
\|A_1(x_{n-1},y_n)-A_1(x_{n-1},y_{n-1})\| \le \eta_1\|y_n-y_{n-1}\|.
$$
(3.8)

By (3.3) and the $(\alpha_1,\beta_1)$-$H$-Lipschitz continuity of $B_1$, we have

$$
\begin{aligned}
\|\omega_n^1-\omega_{n-1}^1\|
&\le \Bigl(1+\tfrac1n\Bigr) H\bigl(B_1(x_n,y_n), B_1(x_{n-1},y_{n-1})\bigr)\\
&\le \Bigl(1+\tfrac1n\Bigr)\bigl[\alpha_1\|x_n-x_{n-1}\| + \beta_1\|y_n-y_{n-1}\|\bigr].
\end{aligned}
$$
(3.9)

Combining (3.6), (3.7), (3.8) and (3.9), we obtain

$$
\begin{aligned}
\|x_{n+1}-x_n\|
&\le \Bigl\{1-\lambda_1\Bigl[1-\Bigl(s_1+\sqrt{1-2\rho_1\gamma_1+\rho_1^2\xi_1^2}+\rho_1\Bigl(1+\tfrac1n\Bigr)\alpha_1\Bigr)\Bigr]\Bigr\}\|x_n-x_{n-1}\|\\
&\quad + \lambda_1\rho_1\Bigl[\Bigl(1+\tfrac1n\Bigr)\beta_1+\eta_1\Bigr]\|y_n-y_{n-1}\|.
\end{aligned}
$$
(3.10)

Similarly, we have

$$
\begin{aligned}
\|y_{n+1}-y_n\|
&\le \Bigl\{1-\lambda_2\Bigl[1-\Bigl(s_2+\sqrt{1-2\rho_2\gamma_2+\rho_2^2\eta_2^2}+\rho_2\Bigl(1+\tfrac1n\Bigr)\beta_2\Bigr)\Bigr]\Bigr\}\|y_n-y_{n-1}\|\\
&\quad + \lambda_2\rho_2\Bigl[\Bigl(1+\tfrac1n\Bigr)\alpha_2+\xi_2\Bigr]\|x_n-x_{n-1}\|.
\end{aligned}
$$
(3.11)

Adding (3.10) and (3.11), we have

$$
\begin{aligned}
\|x_{n+1}-x_n\| + \|y_{n+1}-y_n\|
&\le \Bigl\{1-\lambda_1\Bigl[1-\Bigl(s_1+\sqrt{1-2\rho_1\gamma_1+\rho_1^2\xi_1^2}+\rho_1\Bigl(1+\tfrac1n\Bigr)\alpha_1\Bigr)\Bigr] + \lambda_2\rho_2\Bigl[\Bigl(1+\tfrac1n\Bigr)\alpha_2+\xi_2\Bigr]\Bigr\}\|x_n-x_{n-1}\|\\
&\quad + \Bigl\{1-\lambda_2\Bigl[1-\Bigl(s_2+\sqrt{1-2\rho_2\gamma_2+\rho_2^2\eta_2^2}+\rho_2\Bigl(1+\tfrac1n\Bigr)\beta_2\Bigr)\Bigr] + \lambda_1\rho_1\Bigl[\Bigl(1+\tfrac1n\Bigr)\beta_1+\eta_1\Bigr]\Bigr\}\|y_n-y_{n-1}\|\\
&\le \theta_n\bigl(\|x_n-x_{n-1}\| + \|y_n-y_{n-1}\|\bigr),
\end{aligned}
$$
(3.12)

where

$$
\theta_n = \max\Bigl\{
1-\lambda_1\Bigl[1-\Bigl(s_1+\sqrt{1-2\rho_1\gamma_1+\rho_1^2\xi_1^2}+\rho_1\Bigl(1+\tfrac1n\Bigr)\alpha_1\Bigr)\Bigr] + \lambda_2\rho_2\Bigl[\Bigl(1+\tfrac1n\Bigr)\alpha_2+\xi_2\Bigr],\;
1-\lambda_2\Bigl[1-\Bigl(s_2+\sqrt{1-2\rho_2\gamma_2+\rho_2^2\eta_2^2}+\rho_2\Bigl(1+\tfrac1n\Bigr)\beta_2\Bigr)\Bigr] + \lambda_1\rho_1\Bigl[\Bigl(1+\tfrac1n\Bigr)\beta_1+\eta_1\Bigr]
\Bigr\}.
$$

Let

$$
\theta = \max\Bigl\{
1-\lambda_1\Bigl[1-\Bigl(s_1+\sqrt{1-2\rho_1\gamma_1+\rho_1^2\xi_1^2}+\rho_1\alpha_1\Bigr)\Bigr] + \lambda_2\rho_2(\alpha_2+\xi_2),\;
1-\lambda_2\Bigl[1-\Bigl(s_2+\sqrt{1-2\rho_2\gamma_2+\rho_2^2\eta_2^2}+\rho_2\beta_2\Bigr)\Bigr] + \lambda_1\rho_1(\beta_1+\eta_1)
\Bigr\};
$$

then $\theta_n\to\theta$ as $n\to\infty$. By (3.5), we know that $0<\theta<1$. So (3.12) implies that $\{x_n\}$ and $\{y_n\}$ are both Cauchy sequences. Thus, there exist $x\in K_1(x)$ and $y\in K_2(y)$ such that $x_n\to x$, $y_n\to y$ as $n\to\infty$.

Now, we prove that $\omega_n^1\to\omega_1\in B_1(x,y)$ and $\omega_n^2\to\omega_2\in B_2(x,y)$. In fact, since $\{x_n\}$ and $\{y_n\}$ are both Cauchy sequences, by (3.9) we know that $\{\omega_n^1\}$ is a Cauchy sequence. Similarly, $\{\omega_n^2\}$ is also a Cauchy sequence. Therefore, there exist $\omega_1\in H$ and $\omega_2\in H$ such that $\omega_n^1\to\omega_1$ and $\omega_n^2\to\omega_2$. Further,

$$
\begin{aligned}
d\bigl(\omega_1, B_1(x,y)\bigr)
&\le \|\omega_1-\omega_n^1\| + d\bigl(\omega_n^1, B_1(x,y)\bigr)\\
&\le \|\omega_1-\omega_n^1\| + H\bigl(B_1(x_n,y_n), B_1(x,y)\bigr)\\
&\le \|\omega_1-\omega_n^1\| + \alpha_1\|x_n-x\| + \beta_1\|y_n-y\| \to 0 \quad (n\to\infty).
\end{aligned}
$$

Since $B_1(x,y)$ is compact, we have $\omega_1\in B_1(x,y)$. Similarly, $\omega_2\in B_2(x,y)$.

By the continuity of $A_1$, $A_2$, $B_1$, $B_2$, $P_{K_1(x)}$, $P_{K_2(y)}$ and Algorithm 3.1, we know that $(x,y,\omega_1,\omega_2)$ satisfies the relations (2.3). By Lemma 2.6, we conclude that $(x,y,\omega_1,\omega_2)$ is a solution of the problem (1.2). This completes the proof. □

If $K_1(x)=K_2(y)=K$, we do not need Assumption 2.4, and we can obtain the following theorem from Theorem 3.3.

Theorem 3.4 Let $H$ be a real Hilbert space and $K$ be a nonempty closed convex set in $H$. For $i=1,2$, let the nonlinear mappings $A_i: K\times K\to H$ be $(\xi_i,\eta_i)$-Lipschitz continuous and $\gamma_i$-strongly monotone with respect to the $i$th argument, and let $B_i: H\times H\to C(H)$ be $(\alpha_i,\beta_i)$-$H$-Lipschitz continuous. If there exist constants $\rho_i>0$, $0<\lambda_i<1$ such that

$$
0 < \max\Bigl\{
1-\lambda_1\Bigl[1-\Bigl(\sqrt{1-2\rho_1\gamma_1+\rho_1^2\xi_1^2}+\rho_1\alpha_1\Bigr)\Bigr] + \lambda_2\rho_2(\alpha_2+\xi_2),\;
1-\lambda_2\Bigl[1-\Bigl(\sqrt{1-2\rho_2\gamma_2+\rho_2^2\eta_2^2}+\rho_2\beta_2\Bigr)\Bigr] + \lambda_1\rho_1(\beta_1+\eta_1)
\Bigr\} < 1,
$$

then the problem (1.1) admits a solution $(x,y,\omega_1,\omega_2)$ and the sequences $\{x_n\}$, $\{y_n\}$, $\{\omega_n^1\}$ and $\{\omega_n^2\}$ generated by Algorithm 3.2 converge to $x$, $y$, $\omega_1$ and $\omega_2$ respectively.

4 System of Wiener-Hopf equations technique

Related to the system of generalized set-valued nonlinear implicit quasi-variational inequalities (1.2), we now consider a new system of generalized implicit Wiener-Hopf equations (4.1) and establish the equivalence between them. This equivalence is then used to suggest a number of new iterative algorithms for solving the given systems of variational inequalities.

To be more precise, let $Q_{K_1(x)} = I - P_{K_1(x)}$ and $Q_{K_2(y)} = I - P_{K_2(y)}$, where $I$ is the identity operator, $P_{K_1(x)}: H\to K_1(x)$ and $P_{K_2(y)}: H\to K_2(y)$ are two projection operators, and $K_1(x)$ and $K_2(y)$ are two convex sets. We consider the following problem of finding $(x,y)\in K_1(x)\times K_2(y)$, $(z_1,z_2)\in H\times H$ such that $\omega_1\in B_1(P_{K_1(x)} z_1, P_{K_2(y)} z_2)$, $\omega_2\in B_2(P_{K_1(x)} z_1, P_{K_2(y)} z_2)$ and

$$
\begin{cases}
A_1\bigl(P_{K_1(x)} z_1, P_{K_2(y)} z_2\bigr) + \omega_1 + \rho_1^{-1} Q_{K_1(x)} z_1 = 0,\\
A_2\bigl(P_{K_1(x)} z_1, P_{K_2(y)} z_2\bigr) + \omega_2 + \rho_2^{-1} Q_{K_2(y)} z_2 = 0,
\end{cases}
$$
(4.1)

where $\rho_1>0$, $\rho_2>0$ are constants. The system (4.1) is called the system of generalized implicit Wiener-Hopf equations.

If $K_1(x)=K_2(y)=K$, we obtain from (4.1) the following system of generalized Wiener-Hopf equations, which consists of finding $(z_1,z_2)\in H\times H$ such that $\omega_1\in B_1(P_K z_1, P_K z_2)$, $\omega_2\in B_2(P_K z_1, P_K z_2)$ and

$$
\begin{cases}
A_1(P_K z_1, P_K z_2) + \omega_1 + \rho_1^{-1} Q_K z_1 = 0,\\
A_2(P_K z_1, P_K z_2) + \omega_2 + \rho_2^{-1} Q_K z_2 = 0,
\end{cases}
$$
(4.2)

where $\rho_1>0$, $\rho_2>0$ are constants.

If $A_1=A_2=A$, $B_1=B_2=B$, we obtain from (4.2) the following Wiener-Hopf equation, which consists of finding $z\in H$ such that $\omega\in B(P_K z)$ and

$$
A(P_K z) + \omega + \rho^{-1} Q_K z = 0,
$$
(4.3)

where $\rho>0$ is a constant.

Lemma 4.1 The system of generalized set-valued nonlinear implicit quasi-variational inequalities (1.2) has a solution $(x,y)\in K_1(x)\times K_2(y)$ with $\omega_1\in B_1(x,y)$, $\omega_2\in B_2(x,y)$ if and only if the system of generalized implicit Wiener-Hopf equations (4.1) has a solution $(x,y)\in K_1(x)\times K_2(y)$, $(z_1,z_2)\in H\times H$ with $\omega_1\in B_1(x,y)$, $\omega_2\in B_2(x,y)$, where

$$
\begin{cases}
x = P_{K_1(x)} z_1,\\
y = P_{K_2(y)} z_2,\\
z_1 = x - \rho_1\bigl(A_1(x,y)+\omega_1\bigr),\\
z_2 = y - \rho_2\bigl(A_2(x,y)+\omega_2\bigr),
\end{cases}
$$
(4.4)

and $\rho_1>0$, $\rho_2>0$ are constants.

Proof Let $(x,y)\in K_1(x)\times K_2(y)$ with $\omega_1\in B_1(x,y)$, $\omega_2\in B_2(x,y)$ be a solution of (1.2); then by Lemma 2.6, we know that $(x,y)$ satisfies (2.3).

Let $z_1 = x - \rho_1(A_1(x,y)+\omega_1)$ and $z_2 = y - \rho_2(A_2(x,y)+\omega_2)$; then by (2.3), we have $x = P_{K_1(x)} z_1$ and $y = P_{K_2(y)} z_2$, which is just (4.4). Hence

$$
\begin{cases}
z_1 = P_{K_1(x)} z_1 - \rho_1\bigl(A_1(x,y)+\omega_1\bigr),\\
z_2 = P_{K_2(y)} z_2 - \rho_2\bigl(A_2(x,y)+\omega_2\bigr).
\end{cases}
$$

Using the facts $Q_{K_1(x)} = I - P_{K_1(x)}$ and $Q_{K_2(y)} = I - P_{K_2(y)}$, we obtain (4.1). That is to say, $(x,y)\in K_1(x)\times K_2(y)$ and $(z_1,z_2)\in H\times H$ with $\omega_1\in B_1(P_{K_1(x)} z_1, P_{K_2(y)} z_2)$, $\omega_2\in B_2(P_{K_1(x)} z_1, P_{K_2(y)} z_2)$ is also a solution of (4.1).

Conversely, let $(x,y)\in K_1(x)\times K_2(y)$ and $(z_1,z_2)\in H\times H$ with $\omega_1\in B_1(x,y)$, $\omega_2\in B_2(x,y)$ be a solution of (4.1). Then we have

$$
\begin{cases}
\rho_1\bigl(A_1(P_{K_1(x)} z_1, P_{K_2(y)} z_2)+\omega_1\bigr) + z_1 = P_{K_1(x)} z_1,\\
\rho_2\bigl(A_2(P_{K_1(x)} z_1, P_{K_2(y)} z_2)+\omega_2\bigr) + z_2 = P_{K_2(y)} z_2.
\end{cases}
$$

Now, by invoking Lemma 2.3 and the above relations, we have

$$
\begin{cases}
0 \le \bigl\langle P_{K_1(x)} z_1 - z_1,\; u - P_{K_1(x)} z_1\bigr\rangle, & \forall u\in K_1(x),\\
0 \le \bigl\langle P_{K_2(y)} z_2 - z_2,\; v - P_{K_2(y)} z_2\bigr\rangle, & \forall v\in K_2(y).
\end{cases}
$$

Since $P_{K_1(x)} z_1 - z_1 = \rho_1(A_1(x,y)+\omega_1)$ and $P_{K_2(y)} z_2 - z_2 = \rho_2(A_2(x,y)+\omega_2)$ with $\rho_1,\rho_2>0$, these inequalities are exactly those of (1.2). Thus $(x,y,\omega_1,\omega_2)$, where

$$
\begin{cases}
x = P_{K_1(x)} z_1,\\
y = P_{K_2(y)} z_2,
\end{cases}
$$

is a solution of (1.2). □

If $K_1(x)=K_2(y)=K$, we obtain the following lemma from Lemma 4.1.

Lemma 4.2 The system of generalized set-valued nonlinear quasi-variational inequalities (1.1) has a solution $(x,y)\in K\times K$ with $\omega_1\in B_1(x,y)$, $\omega_2\in B_2(x,y)$ if and only if the system of generalized Wiener-Hopf equations (4.2) has a solution $(x,y)\in K\times K$, $(z_1,z_2)\in H\times H$ with $\omega_1\in B_1(x,y)$, $\omega_2\in B_2(x,y)$, where

$$
\begin{cases}
x = P_K z_1,\\
y = P_K z_2,\\
z_1 = x - \rho_1\bigl(A_1(x,y)+\omega_1\bigr),\\
z_2 = y - \rho_2\bigl(A_2(x,y)+\omega_2\bigr),
\end{cases}
$$
(4.5)

and $\rho_1>0$, $\rho_2>0$ are constants.

Using the system of Wiener-Hopf equations technique, Lemma 4.1 and Lemma 2.7, we construct the following iterative algorithms.

Algorithm 4.3 Let $H$ be a real Hilbert space, $K_1(x)$ and $K_2(y)$ be two nonempty closed convex sets in $H$, $A_i: K_1(x)\times K_2(y)\to H$ be two nonlinear mappings and $B_i: H\times H\to C(H)$ be two set-valued mappings, $i=1,2$. For any given $(z_0^1, z_0^2)\in H\times H$ such that $x_0 = P_{K_1(x_0)} z_0^1\in K_1(x_0)$, $y_0 = P_{K_2(y_0)} z_0^2\in K_2(y_0)$, $\omega_0^1\in B_1(x_0,y_0)$, $\omega_0^2\in B_2(x_0,y_0)$, we compute $\{x_n\}$, $\{y_n\}$, $\{z_n^1\}$, $\{z_n^2\}$, $\{\omega_n^1\}$ and $\{\omega_n^2\}$ by the following iterative schemes:

$$
x_n = P_{K_1(x_n)} z_n^1,
$$
(4.6)
$$
y_n = P_{K_2(y_n)} z_n^2,
$$
(4.7)
$$
z_{n+1}^1 = x_n - \rho_1\bigl(A_1(x_n,y_n)+\omega_n^1\bigr),
$$
(4.8)
$$
z_{n+1}^2 = y_n - \rho_2\bigl(A_2(x_n,y_n)+\omega_n^2\bigr),
$$
(4.9)
$$
\omega_n^1\in B_1(x_n,y_n), \qquad \|\omega_{n+1}^1-\omega_n^1\| \le \Bigl(1+\tfrac{1}{n+1}\Bigr) H\bigl(B_1(x_{n+1},y_{n+1}), B_1(x_n,y_n)\bigr),
$$
(4.10)
$$
\omega_n^2\in B_2(x_n,y_n), \qquad \|\omega_{n+1}^2-\omega_n^2\| \le \Bigl(1+\tfrac{1}{n+1}\Bigr) H\bigl(B_2(x_{n+1},y_{n+1}), B_2(x_n,y_n)\bigr),
$$
(4.11)

where $n=0,1,2,\ldots$.

If $K_1(x)=K_2(y)=K$, we obtain the following iterative algorithm from Algorithm 4.3.

Algorithm 4.4 For any given $(z_0^1,z_0^2)\in H\times H$ such that $x_0=P_K z_0^1\in K$, $y_0=P_K z_0^2\in K$, $\omega_0^1\in B_1(x_0,y_0)$, $\omega_0^2\in B_2(x_0,y_0)$, we compute $\{x_n\}$, $\{y_n\}$, $\{z_n^1\}$, $\{z_n^2\}$, $\{\omega_n^1\}$ and $\{\omega_n^2\}$ by the following iterative schemes:

$$
\begin{aligned}
x_n &= P_K z_n^1,\\
y_n &= P_K z_n^2,\\
z_{n+1}^1 &= x_n - \rho_1\bigl(A_1(x_n,y_n)+\omega_n^1\bigr),\\
z_{n+1}^2 &= y_n - \rho_2\bigl(A_2(x_n,y_n)+\omega_n^2\bigr),\\
\omega_n^1 &\in B_1(x_n,y_n), \qquad \|\omega_{n+1}^1-\omega_n^1\| \le \Bigl(1+\tfrac{1}{n+1}\Bigr) H\bigl(B_1(x_{n+1},y_{n+1}), B_1(x_n,y_n)\bigr),\\
\omega_n^2 &\in B_2(x_n,y_n), \qquad \|\omega_{n+1}^2-\omega_n^2\| \le \Bigl(1+\tfrac{1}{n+1}\Bigr) H\bigl(B_2(x_{n+1},y_{n+1}), B_2(x_n,y_n)\bigr),
\end{aligned}
$$

where $n=0,1,2,\ldots$.
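The Wiener-Hopf scheme of Algorithm 4.4 can be sketched numerically under hypothetical single-valued data (not from the paper): $H=\mathbb{R}^4$, $K=[-1,1]^4$, each $B_i(x,y)=\{0\}$, and affine maps $A_1$, $A_2$. The limit then satisfies the system (4.2) with $Q_K=I-P_K$.

```python
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    """P_K for the box K = [-1, 1]^4 (componentwise clip)."""
    return np.clip(z, lo, hi)

# Hypothetical single-valued data: B1(x, y) = B2(x, y) = {0}.
n = 4
A1 = lambda x, y: 2.0 * x + 0.1 * y - 1.0
A2 = lambda x, y: 2.0 * y + 0.1 * x + 1.0
rho1 = rho2 = 0.3

z1, z2 = np.zeros(n), np.zeros(n)
for _ in range(200):
    x, y = project_box(z1), project_box(z2)   # x_n = P_K z_n^1, y_n = P_K z_n^2
    z1 = x - rho1 * A1(x, y)                  # z_{n+1}^1 = x_n - rho1 (A1(x_n, y_n) + 0)
    z2 = y - rho2 * A2(x, y)                  # z_{n+1}^2 = y_n - rho2 (A2(x_n, y_n) + 0)

x, y = project_box(z1), project_box(z2)
# The limit satisfies (4.2): A_i(P_K z_1, P_K z_2) + rho_i^{-1} Q_K z_i = 0.
assert np.allclose(A1(x, y) + (z1 - x) / rho1, 0.0, atol=1e-6)
assert np.allclose(A2(x, y) + (z2 - y) / rho2, 0.0, atol=1e-6)
```

Note that, unlike Algorithm 3.2, the iteration runs in the unconstrained variables $z_n^1$, $z_n^2$ and recovers the constrained solution through the projections, which is what makes the Wiener-Hopf formulation attractive.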

Theorem 4.5 Let $H$ be a real Hilbert space, $K_1(x)$ and $K_2(y)$ be two nonempty closed convex sets in $H$. For $i=1,2$, let the nonlinear mappings $A_i: K_1(x)\times K_2(y)\to H$ be $(\xi_i,\eta_i)$-Lipschitz continuous and $(\tau_i,\zeta_i)$-relaxed co-coercive with respect to the $i$th argument, and let $B_i: H\times H\to C(H)$ be $(\alpha_i,\beta_i)$-$H$-Lipschitz continuous. If Assumption 2.4 holds and there exist constants $\rho_i>0$ such that

$$
0 < \max\Bigl\{
\frac{1}{1-s_1}\Bigl[\sqrt{1+2\rho_1(\tau_1\xi_1^2-\zeta_1)+\rho_1^2\xi_1^2}+\rho_1\alpha_1+\rho_2(\alpha_2+\xi_2)\Bigr],\;
\frac{1}{1-s_2}\Bigl[\sqrt{1+2\rho_2(\tau_2\eta_2^2-\zeta_2)+\rho_2^2\eta_2^2}+\rho_2\beta_2+\rho_1(\beta_1+\eta_1)\Bigr]
\Bigr\} < 1,
$$
(4.12)

then there exists $(x,y,\omega_1,\omega_2,z_1,z_2)$ satisfying the system of generalized implicit Wiener-Hopf equations (4.1). Hence, the problem (1.2) admits a solution $(x,y,\omega_1,\omega_2)$ and the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n^1\}$, $\{z_n^2\}$, $\{\omega_n^1\}$ and $\{\omega_n^2\}$ generated by Algorithm 4.3 converge to $x$, $y$, $z_1$, $z_2$, $\omega_1$ and $\omega_2$ respectively.

Proof By (4.8), we have

$$
\begin{aligned}
\|z_{n+1}^1 - z_n^1\|
&= \bigl\|x_n - \rho_1\bigl(A_1(x_n,y_n)+\omega_n^1\bigr) - \bigl[x_{n-1} - \rho_1\bigl(A_1(x_{n-1},y_{n-1})+\omega_{n-1}^1\bigr)\bigr]\bigr\|\\
&\le \bigl\|x_n - x_{n-1} - \rho_1\bigl(A_1(x_n,y_n)-A_1(x_{n-1},y_n)\bigr)\bigr\| + \rho_1\bigl[\|\omega_n^1-\omega_{n-1}^1\| + \|A_1(x_{n-1},y_n)-A_1(x_{n-1},y_{n-1})\|\bigr].
\end{aligned}
$$
(4.13)

Since $A_1$ is $(\tau_1,\zeta_1)$-relaxed co-coercive with respect to the first argument and Lipschitz continuous, we have

$$
\begin{aligned}
\bigl\|x_n-x_{n-1}-\rho_1\bigl(A_1(x_n,y_n)-A_1(x_{n-1},y_n)\bigr)\bigr\|^2
&= \|x_n-x_{n-1}\|^2 - 2\rho_1\bigl\langle A_1(x_n,y_n)-A_1(x_{n-1},y_n),\; x_n-x_{n-1}\bigr\rangle + \rho_1^2\bigl\|A_1(x_n,y_n)-A_1(x_{n-1},y_n)\bigr\|^2\\
&\le \|x_n-x_{n-1}\|^2 - 2\rho_1\bigl[(-\tau_1)\bigl\|A_1(x_n,y_n)-A_1(x_{n-1},y_n)\bigr\|^2 + \zeta_1\|x_n-x_{n-1}\|^2\bigr] + \rho_1^2\xi_1^2\|x_n-x_{n-1}\|^2\\
&\le \bigl[1+2\rho_1(\tau_1\xi_1^2-\zeta_1)+\rho_1^2\xi_1^2\bigr]\|x_n-x_{n-1}\|^2,
\end{aligned}
$$
(4.14)

and

$$
\|A_1(x_{n-1},y_n)-A_1(x_{n-1},y_{n-1})\| \le \eta_1\|y_n-y_{n-1}\|.
$$
(4.15)

From the $(\alpha_1,\beta_1)$-$H$-Lipschitz continuity of $B_1$ and (4.10), we have

$$
\begin{aligned}
\|\omega_n^1-\omega_{n-1}^1\|
&\le \Bigl(1+\tfrac1n\Bigr) H\bigl(B_1(x_n,y_n), B_1(x_{n-1},y_{n-1})\bigr)\\
&\le \Bigl(1+\tfrac1n\Bigr)\bigl[\alpha_1\|x_n-x_{n-1}\| + \beta_1\|y_n-y_{n-1}\|\bigr].
\end{aligned}
$$
(4.16)

Combining (4.13), (4.14), (4.15) and (4.16), we obtain

$$
\begin{aligned}
\|z_{n+1}^1-z_n^1\|
&\le \Bigl[\sqrt{1+2\rho_1(\tau_1\xi_1^2-\zeta_1)+\rho_1^2\xi_1^2}+\rho_1\Bigl(1+\tfrac1n\Bigr)\alpha_1\Bigr]\|x_n-x_{n-1}\|\\
&\quad + \rho_1\Bigl[\Bigl(1+\tfrac1n\Bigr)\beta_1+\eta_1\Bigr]\|y_n-y_{n-1}\|.
\end{aligned}
$$
(4.17)

Similarly, we have

$$
\begin{aligned}
\|z_{n+1}^2-z_n^2\|
&\le \rho_2\Bigl[\Bigl(1+\tfrac1n\Bigr)\alpha_2+\xi_2\Bigr]\|x_n-x_{n-1}\|\\
&\quad + \Bigl[\sqrt{1+2\rho_2(\tau_2\eta_2^2-\zeta_2)+\rho_2^2\eta_2^2}+\rho_2\Bigl(1+\tfrac1n\Bigr)\beta_2\Bigr]\|y_n-y_{n-1}\|.
\end{aligned}
$$
(4.18)

By (4.6), Lemma 2.3 and Assumption 2.4,

$$
\begin{aligned}
\|x_n-x_{n-1}\|
&= \bigl\|P_{K_1(x_n)} z_n^1 - P_{K_1(x_{n-1})} z_{n-1}^1\bigr\|\\
&\le \bigl\|P_{K_1(x_n)} z_n^1 - P_{K_1(x_{n-1})} z_n^1\bigr\| + \bigl\|P_{K_1(x_{n-1})} z_n^1 - P_{K_1(x_{n-1})} z_{n-1}^1\bigr\|\\
&\le s_1\|x_n-x_{n-1}\| + \|z_n^1-z_{n-1}^1\|,
\end{aligned}
$$

which implies that

$$
\|x_n-x_{n-1}\| \le \frac{1}{1-s_1}\|z_n^1-z_{n-1}^1\|.
$$
(4.19)

Similarly, we can obtain

$$
\|y_n-y_{n-1}\| \le \frac{1}{1-s_2}\|z_n^2-z_{n-1}^2\|.
$$
(4.20)

By (4.17)-(4.20), we have

$$
\begin{aligned}
\|z_{n+1}^1-z_n^1\| + \|z_{n+1}^2-z_n^2\|
&\le \frac{1}{1-s_1}\Bigl\{\sqrt{1+2\rho_1(\tau_1\xi_1^2-\zeta_1)+\rho_1^2\xi_1^2}+\rho_1\Bigl(1+\tfrac1n\Bigr)\alpha_1 + \rho_2\Bigl[\Bigl(1+\tfrac1n\Bigr)\alpha_2+\xi_2\Bigr]\Bigr\}\|z_n^1-z_{n-1}^1\|\\
&\quad + \frac{1}{1-s_2}\Bigl\{\sqrt{1+2\rho_2(\tau_2\eta_2^2-\zeta_2)+\rho_2^2\eta_2^2}+\rho_2\Bigl(1+\tfrac1n\Bigr)\beta_2 + \rho_1\Bigl[\Bigl(1+\tfrac1n\Bigr)\beta_1+\eta_1\Bigr]\Bigr\}\|z_n^2-z_{n-1}^2\|\\
&\le \theta_n\bigl(\|z_n^1-z_{n-1}^1\| + \|z_n^2-z_{n-1}^2\|\bigr),
\end{aligned}
$$
(4.21)

where

$$
\theta_n = \max\Bigl\{
\frac{1}{1-s_1}\Bigl[\sqrt{1+2\rho_1(\tau_1\xi_1^2-\zeta_1)+\rho_1^2\xi_1^2}+\rho_1\Bigl(1+\tfrac1n\Bigr)\alpha_1+\rho_2\Bigl(\Bigl(1+\tfrac1n\Bigr)\alpha_2+\xi_2\Bigr)\Bigr],\;
\frac{1}{1-s_2}\Bigl[\sqrt{1+2\rho_2(\tau_2\eta_2^2-\zeta_2)+\rho_2^2\eta_2^2}+\rho_2\Bigl(1+\tfrac1n\Bigr)\beta_2+\rho_1\Bigl(\Bigl(1+\tfrac1n\Bigr)\beta_1+\eta_1\Bigr)\Bigr]
\Bigr\}.
$$

Let

$$
\theta = \max\Bigl\{
\frac{1}{1-s_1}\Bigl[\sqrt{1+2\rho_1(\tau_1\xi_1^2-\zeta_1)+\rho_1^2\xi_1^2}+\rho_1\alpha_1+\rho_2(\alpha_2+\xi_2)\Bigr],\;
\frac{1}{1-s_2}\Bigl[\sqrt{1+2\rho_2(\tau_2\eta_2^2-\zeta_2)+\rho_2^2\eta_2^2}+\rho_2\beta_2+\rho_1(\beta_1+\eta_1)\Bigr]
\Bigr\};
$$

then $\theta_n\to\theta$ as $n\to\infty$. By (4.12), we know that $0<\theta<1$. So (4.21) implies that $\{z_n^1\}$ and $\{z_n^2\}$ are both Cauchy sequences. By (4.19) and (4.20), we know that $\{x_n\}$ and $\{y_n\}$ are both Cauchy sequences. So there exist $(z_1,z_2)\in H\times H$ and $(x,y)\in K_1(x)\times K_2(y)$ such that $z_n^1\to z_1$, $z_n^2\to z_2$, $x_n\to x$ and $y_n\to y$ as $n\to\infty$. In the same way as in Theorem 3.3, we know that $\{\omega_n^1\}$ and $\{\omega_n^2\}$ are also Cauchy sequences, and there exist $\omega_1\in B_1(x,y)$ and $\omega_2\in B_2(x,y)$ such that $\omega_n^1\to\omega_1$ and $\omega_n^2\to\omega_2$.

By the continuity of the mappings $A_1$, $A_2$, $B_1$, $B_2$, $P_{K_1(x)}$, $P_{K_2(y)}$ and Algorithm 4.3, letting $n\to\infty$, we have

$$
\begin{cases}
x = P_{K_1(x)} z_1,\\
y = P_{K_2(y)} z_2,\\
z_1 = x - \rho_1\bigl(A_1(x,y)+\omega_1\bigr),\\
z_2 = y - \rho_2\bigl(A_2(x,y)+\omega_2\bigr),
\end{cases}
$$

where $\rho_1>0$, $\rho_2>0$ are constants, which is just (4.4). By Lemma 4.1, we know that $(x,y,z_1,z_2,\omega_1,\omega_2)$ satisfies the generalized implicit Wiener-Hopf equations (4.1). So we conclude that $(x,y,\omega_1,\omega_2)$ is a solution of the problem (1.2). This completes the proof. □

If $K_1(x)=K_2(y)=K$, we do not need Assumption 2.4, and we can obtain the following theorem from Theorem 4.5.

Theorem 4.6 Let $H$ be a real Hilbert space and $K$ be a nonempty closed convex set in $H$. For $i=1,2$, let the nonlinear mappings $A_i: K\times K\to H$ be $(\xi_i,\eta_i)$-Lipschitz continuous and $(\tau_i,\zeta_i)$-relaxed co-coercive with respect to the $i$th argument, and let $B_i: H\times H\to C(H)$ be $(\alpha_i,\beta_i)$-$H$-Lipschitz continuous. If there exist constants $\rho_i>0$ such that

$$
0 < \max\Bigl\{
\sqrt{1+2\rho_1(\tau_1\xi_1^2-\zeta_1)+\rho_1^2\xi_1^2}+\rho_1\alpha_1+\rho_2(\alpha_2+\xi_2),\;
\sqrt{1+2\rho_2(\tau_2\eta_2^2-\zeta_2)+\rho_2^2\eta_2^2}+\rho_2\beta_2+\rho_1(\beta_1+\eta_1)
\Bigr\} < 1,
$$

then there exists $(x,y,\omega_1,\omega_2,z_1,z_2)$ satisfying (4.5). Hence the generalized Wiener-Hopf equations (4.2) and the problem (1.1) admit the same solution $(x,y,\omega_1,\omega_2)$, and the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n^1\}$, $\{z_n^2\}$, $\{\omega_n^1\}$ and $\{\omega_n^2\}$ generated by Algorithm 4.4 converge to $x$, $y$, $z_1$, $z_2$, $\omega_1$ and $\omega_2$ respectively.

Remark 4.7 This is the first time that the system of generalized Wiener-Hopf equations technique has been used to solve a system of generalized variational inequality problems. Moreover, for suitable and appropriate choices of the mappings $A_i$, $B_i$ and the sets $K_i$, Theorem 3.3 and Theorem 4.5 include many important known results on variational inequalities as special cases.

Remark 4.8 It is easy to see that a $\gamma$-strongly monotone mapping must be a $(\tau,\zeta)$-relaxed co-coercive mapping for any $\tau>0$ with $\zeta=\gamma$, since the term $-\tau\|A(x_1,\cdot)-A(x_2,\cdot)\|^2$ only weakens the inequality. Therefore, the class of $(\tau,\zeta)$-relaxed co-coercive mappings is more general. Hence, the results presented in this paper include many known results as special cases.