1 Introduction

Throughout this paper, let $K$ be a closed convex subset of the Euclidean space $\mathbb{R}^n$ and let $F:\mathbb{R}^n\to B(\mathbb{R}^n,\mathbb{R}^m)$ be a continuously differentiable mapping. We consider the weak vector variational inequality (WVVI) of finding $x^*\in K$ such that

$$\bigl\langle F(x^*),\,x-x^*\bigr\rangle\notin-\operatorname{int}C,\quad\forall x\in K,$$

where $C\subseteq\mathbb{R}^m$ is a closed, convex, and pointed cone with nonempty interior $\operatorname{int}C$. The (WVVI) was first introduced by Giannessi [1]. It has been shown to have many applications in vector optimization problems and traffic equilibrium problems (e.g., [2, 3]).

Error bounds estimate the distance from a feasible point to the solution set; they play an important role not only in sensitivity analysis but also in the convergence analysis of iterative algorithms. Recently, various error bounds have been presented for weak vector variational inequalities in [4-7]. By using a scalarization approach of Konnov [8], Li and Mastroeni [5] established error bounds for two kinds of (WVVIs) with set-valued mappings. By means of a regularized gap function and a D-gap function for a weak vector variational inequality, Charitha and Dutta [4] obtained respective error bounds for (WVVI). By virtue of regularized gap functions, Sun and Chai [6] studied error bounds for generalized (WVVIs). Using image space analysis, Xu and Li [7] derived a gap function for (WVVI) and then established an error bound for (WVVI) without convexity of the constraint set. These papers share a common feature: either the solution set of (WVVI) is a singleton [6, 7], or, even when it is not [4, 5], the solution set of the corresponding variational inequality (VI) obtained by reduction is a singleton.

In this paper, we study a global error bound of (WVVI) by the nonlinear scalarization method. The paper is organized as follows. In Section 2, we establish a global error bound of (VI) via generalized regularized gap functions. In Section 3, we derive a global error bound of (WVVI) by the nonlinear scalarization method.

2 A global error bound of (VI)

Let $h:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ be a proper lower semicontinuous function, and let $S=\{x\in\mathbb{R}^n\mid h(x)\le 0\}$. We say that $h$ has a global error bound if there exists $\tau>0$ such that

$$d(x,S)\le\tau h(x)_+,\quad\forall x\in K,$$

where $h(x)_+:=\max\{h(x),0\}$, and $d(x,S):=\inf\{\|x-s\|\mid s\in S\}$ if $S$ is nonempty, $d(x,S):=+\infty$ if $S$ is empty. A mapping $f:\mathbb{R}^n\to\mathbb{R}^n$ is said to be coercive on $K$ if

$$\lim_{\substack{x\in K\\ \|x\|\to+\infty}}\frac{\langle f(x),x-y\rangle}{\|x\|}=+\infty,\quad\forall y\in K.$$

A mapping $f:\mathbb{R}^n\to\mathbb{R}^n$ is said to be strongly monotone on $\mathbb{R}^n$ with modulus $\lambda>0$ if

$$\bigl\langle f(x)-f(x'),\,x-x'\bigr\rangle\ge\lambda\|x-x'\|^2,\quad\forall x,x'\in\mathbb{R}^n.$$
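These two definitions can be sanity-checked numerically. The sketch below is our illustration, not code from the paper: it builds the affine map $f(x)=Ax$ with $A=\lambda I+N$ and $N$ skew-symmetric, which is strongly monotone with modulus $\lambda$ because the skew part contributes nothing to the quadratic form.

```python
import numpy as np

# Illustrative check (not from the paper): f(x) = A x with A = lam*I + N,
# N skew-symmetric, is strongly monotone with modulus lam, since
# <f(x) - f(x'), x - x'> = (x - x')^T A (x - x') = lam*||x - x'||^2.
rng = np.random.default_rng(0)
lam, n = 2.0, 3
N = rng.standard_normal((n, n))
N = N - N.T                      # skew-symmetric part: z^T N z = 0 for all z
A = lam * np.eye(n) + N
f = lambda x: A @ x

x, xp = rng.standard_normal(n), rng.standard_normal(n)
lhs = (f(x) - f(xp)) @ (x - xp)  # <f(x) - f(x'), x - x'>
assert lhs >= lam * np.linalg.norm(x - xp) ** 2 - 1e-10
```

Such a map is also coercive, as noted later for strongly monotone mappings.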

In this section, we establish a global error bound of (VI), the problem of finding $x^*\in K$ such that

$$\bigl\langle f(x^*),\,y-x^*\bigr\rangle\ge 0,\quad\forall y\in K,$$

where $f:\mathbb{R}^n\to\mathbb{R}^n$ is a continuously differentiable mapping.

To study the error bound of (VI), we need a class of merit functions that reformulate (VI) as an optimization problem; see [9-16]. One such function is the generalized regularized gap function [17] defined by

$$f_\gamma(x):=-\inf_{y\in K}\bigl\{\langle f(x),y-x\rangle+\gamma\varphi(x,y)\bigr\},\quad x\in\mathbb{R}^n,\ \gamma>0,$$
(1)

where $\varphi:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$ is a real-valued function with the following properties:

(P1) $\varphi$ is continuously differentiable on $\mathbb{R}^n\times\mathbb{R}^n$.

(P2) $\varphi(x,y)\ge 0$ for all $x,y\in\mathbb{R}^n$, and equality holds if and only if $x=y$.

(P3) $\varphi(x,\cdot)$ is uniformly strongly convex on $\mathbb{R}^n$ with modulus $\beta>0$, in the sense that

$$\varphi(x,y_1)-\varphi(x,y_2)\ge\bigl\langle\nabla_2\varphi(x,y_2),\,y_1-y_2\bigr\rangle+\beta\|y_1-y_2\|^2,\quad\forall x,y_1,y_2\in\mathbb{R}^n,$$

where $\nabla_2\varphi$ denotes the partial derivative of $\varphi$ with respect to the second variable.

(P4) $\nabla_2\varphi(x,\cdot)$ is uniformly Lipschitz continuous on $\mathbb{R}^n$ with modulus $\alpha$, i.e., for all $x\in\mathbb{R}^n$,

$$\bigl\|\nabla_2\varphi(x,y_1)-\nabla_2\varphi(x,y_2)\bigr\|\le\alpha\|y_1-y_2\|,\quad\forall y_1,y_2\in\mathbb{R}^n.$$
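Properties (P1)-(P4) hold for the prototypical choice $\varphi(x,y)=\frac{1}{2}\|x-y\|^2$, with $\beta=\frac{1}{2}$ and $\alpha=1$; in that case the infimum in (1) is attained at $\operatorname{Proj}_K(x-f(x)/\gamma)$. The following minimal Python sketch, for a one-dimensional interval $K=[a,b]$, is our illustration under these assumptions, not code from the paper:

```python
# Sketch for K = [a, b] and phi(x, y) = 0.5*(x - y)**2 (so beta = 0.5, alpha = 1).
# The infimum in (1) is then attained at y = Proj_K(x - f(x)/gamma).
def proj(z, a, b):
    """Projection onto the interval [a, b]."""
    return min(max(z, a), b)

def gap(f, x, gamma, a, b):
    """Generalized regularized gap function f_gamma(x) and its minimizer y."""
    y = proj(x - f(x) / gamma, a, b)
    return -(f(x) * (y - x) + 0.5 * gamma * (y - x) ** 2), y

# Example: f(x) = x, gamma = 1, K = [0, 1e6] (mimicking [0, +inf)):
fgamma, y = gap(lambda x: x, 2.0, 1.0, 0.0, 1e6)
# f_gamma(2) = -(2*(0 - 2) + 0.5*4) = 2, attained at y = 0.
```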

Now we recall some properties of φ in (1).

Proposition 2.1 The following statements hold for each $x,y\in K$:

(i) $\langle\nabla_2\varphi(x,y),u\rangle\le\alpha\|x-y\|\|u\|$, $\forall u\in\operatorname{span}(K-x)$;

(ii) $\beta\|x-y\|^2\le\varphi(x,y)\le(\alpha-\beta)\|x-y\|^2$;

(iii) $\varphi(x,y)\ge\langle\nabla_2\varphi(x,y),\,y-x\rangle-(\alpha-\beta)\|x-y\|^2$;

(iv) $\nabla_2\varphi(x,y)=0$ if and only if $x=y$.

Proof Parts (i)-(iii) are taken from [[14], Lemma 2.1] and part (iv) is from [[17], Lemma 2.1]. □

Remark 2.1 In light of (ii) in Proposition 2.1, it holds that $\alpha\ge 2\beta$.

We now list some basic properties of the generalized regularized gap function $f_\gamma$.

Proposition 2.2 The following conclusions are valid for (VI):

(i) For every $x\in\mathbb{R}^n$, there exists a unique vector $y_\gamma^\varphi(x)\in K$ at which the infimum in (1) is attained, i.e.,
$$f_\gamma(x)=-\bigl\langle f(x),\,y_\gamma^\varphi(x)-x\bigr\rangle-\gamma\varphi\bigl(x,y_\gamma^\varphi(x)\bigr).$$

(ii) $f_\gamma$ is a gap function of (VI).

(iii) $x=y_\gamma^\varphi(x)$ if and only if $x$ is a solution of (VI).

(iv) $f_\gamma$ is continuously differentiable on $\mathbb{R}^n$ with
$$\nabla f_\gamma(x)=-\nabla f(x)^{T}\bigl(y_\gamma^\varphi(x)-x\bigr)+f(x)-\gamma\nabla_1\varphi\bigl(x,y_\gamma^\varphi(x)\bigr).$$

(v) $y_\gamma^\varphi$ and $f_\gamma$ are both locally Lipschitz on $\mathbb{R}^n$.

(vi) If $f$ is coercive on $K$, then (VI) has a nonempty compact solution set.

(vii) $f_\gamma(x)\ge\beta\gamma\|y_\gamma^\varphi(x)-x\|^2$, $\forall x\in K$.

Proof Parts (i)-(iv) are from [16], part (v) is from [[18], Lemma 3.1], and part (vi) is from [[11], Proposition 2.2.7].

It follows from (ii) and (iii) that we only need to prove (vii) for $x\in K\setminus S$. Since $y_\gamma^\varphi(x)$ is the minimizer over $K$ of the function

$$G(\cdot):=\bigl\langle f(x),\,\cdot-x\bigr\rangle+\gamma\varphi(x,\cdot),$$

the first-order optimality condition implies that

$$\bigl\langle\nabla G\bigl(y_\gamma^\varphi(x)\bigr),\,y-y_\gamma^\varphi(x)\bigr\rangle\ge 0,\quad\forall y\in K.$$

Letting $y=x$, we get

$$\bigl\langle\nabla G\bigl(y_\gamma^\varphi(x)\bigr),\,x-y_\gamma^\varphi(x)\bigr\rangle\ge 0,$$

i.e.,

$$\bigl\langle f(x)+\gamma\nabla_2\varphi\bigl(x,y_\gamma^\varphi(x)\bigr),\,y_\gamma^\varphi(x)-x\bigr\rangle\le 0.$$

It follows from (P3) that the function $G$ is strongly convex on $\mathbb{R}^n$ with modulus $\beta\gamma>0$, i.e., for all $y_1,y_2\in\mathbb{R}^n$,

$$G(y_1)-G(y_2)\ge\bigl\langle f(x)+\gamma\nabla_2\varphi(x,y_2),\,y_1-y_2\bigr\rangle+\beta\gamma\|y_1-y_2\|^2.$$

Letting $y_1=x$ and $y_2=y_\gamma^\varphi(x)$, and noting that $G(x)=0$ and $f_\gamma(x)=-G(y_\gamma^\varphi(x))$, we obtain

$$f_\gamma(x)\ge\bigl\langle f(x)+\gamma\nabla_2\varphi\bigl(x,y_\gamma^\varphi(x)\bigr),\,x-y_\gamma^\varphi(x)\bigr\rangle+\beta\gamma\bigl\|y_\gamma^\varphi(x)-x\bigr\|^2.$$

Since the first term on the right-hand side is nonnegative by the optimality condition above, one has $f_\gamma(x)\ge\beta\gamma\|y_\gamma^\varphi(x)-x\|^2$. □
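Property (vii) can be spot-checked numerically. The sketch below uses $\varphi(x,y)=\frac{1}{2}|x-y|^2$ (so $\beta=\frac{1}{2}$), $K=[0,1]$, and an arbitrary illustrative $f$; all of these choices are our assumptions, not data from the paper.

```python
import numpy as np

# Spot-check of (vii): f_gamma(x) >= beta*gamma*|y_gamma(x) - x|^2 on K = [0, 1],
# with phi(x, y) = 0.5*(x - y)**2 (beta = 0.5) and the illustrative f(x) = x**3 - 0.5.
beta, gamma = 0.5, 0.7
f = lambda x: x ** 3 - 0.5
proj = lambda z: min(max(z, 0.0), 1.0)

for x in np.linspace(0.0, 1.0, 101):
    y = proj(x - f(x) / gamma)                         # minimizer in (1)
    fg = -(f(x) * (y - x) + 0.5 * gamma * (y - x) ** 2)
    assert fg >= beta * gamma * (y - x) ** 2 - 1e-12   # property (vii)
```

When the projection is inactive the two sides coincide; clipping makes the inequality strict, matching the proof's use of the optimality condition.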

Theorem 2.1 Let $f$ be coercive on $K$. Assume that $\varphi$ satisfies

(P5) $\bigl\langle\nabla_1\varphi\bigl(x,y_\gamma^\varphi(x)\bigr)+\nabla_2\varphi\bigl(x,y_\gamma^\varphi(x)\bigr),\,y_\gamma^\varphi(x)-x\bigr\rangle\ge 0$, $\forall x\in K$.

Suppose further that the following condition holds:

$$\mu:=\inf\Bigl\{\bigl\langle d,\nabla f(x)d\bigr\rangle\Bigm|x\in K\setminus S,\ d=\frac{y_\gamma^\varphi(x)-x}{\|y_\gamma^\varphi(x)-x\|}\Bigr\}>0,$$
(2)

where $S$ is the solution set of (VI), and that $\gamma(\alpha-2\beta)<\mu$. Then $f_\gamma$ has a global error bound with the modulus

$$\tau:=\max\Bigl\{\frac{2\sqrt{\beta\gamma}}{\mu+2\gamma\beta-\gamma\alpha},\ \frac{2}{\sqrt{\beta\gamma}}\Bigr\},$$

that is, $d(x,S)\le\tau\sqrt{f_\gamma(x)}$ for all $x\in K$.

Proof It follows from (vi) of Proposition 2.2 that $S$ is a nonempty compact subset of $K$. If $x\in S$, then the assertion obviously holds. Let $x\in K\setminus S$; then $f_\gamma(x)>0$. For brevity, we write $w:=y_\gamma^\varphi(x)-x$ and $d:=\frac{w}{\|w\|}$. It follows from [[19], Theorem 2.5] that we only need to prove

$$\frac{\bigl\langle\nabla f_\gamma(x),d\bigr\rangle}{2\sqrt{f_\gamma(x)}}\le-\min\Bigl\{\frac{\mu+2\gamma\beta-\gamma\alpha}{2\sqrt{\beta\gamma}},\ \frac{\sqrt{\beta\gamma}}{2}\Bigr\}.$$
(3)

It follows from (iv) of Proposition 2.2 that

$$\begin{aligned}
\bigl\langle\nabla f_\gamma(x),w\bigr\rangle&=\bigl\langle-\nabla f(x)^{T}w+f(x)-\gamma\nabla_1\varphi\bigl(x,y_\gamma^\varphi(x)\bigr),\,w\bigr\rangle\\
&=-\bigl\langle\nabla f(x)w,w\bigr\rangle+\bigl\langle f(x),w\bigr\rangle+\gamma\varphi\bigl(x,y_\gamma^\varphi(x)\bigr)-\gamma\bigl[\bigl\langle\nabla_1\varphi\bigl(x,y_\gamma^\varphi(x)\bigr),w\bigr\rangle+\varphi\bigl(x,y_\gamma^\varphi(x)\bigr)\bigr]\\
&=-\bigl\langle\nabla f(x)w,w\bigr\rangle-f_\gamma(x)-\gamma\bigl[\bigl\langle\nabla_1\varphi\bigl(x,y_\gamma^\varphi(x)\bigr),w\bigr\rangle+\varphi\bigl(x,y_\gamma^\varphi(x)\bigr)\bigr].
\end{aligned}$$

By (P5) and (2), we have

$$\bigl\langle\nabla f_\gamma(x),w\bigr\rangle\le-\mu\|w\|^2-f_\gamma(x)+\gamma\bigl[\bigl\langle\nabla_2\varphi\bigl(x,y_\gamma^\varphi(x)\bigr),w\bigr\rangle-\varphi\bigl(x,y_\gamma^\varphi(x)\bigr)\bigr].$$

It follows from (iii) of Proposition 2.1 that

$$\bigl\langle\nabla f_\gamma(x),w\bigr\rangle\le-\mu\|w\|^2-f_\gamma(x)+\gamma(\alpha-\beta)\|w\|^2.$$

Thus,

$$\frac{\bigl\langle\nabla f_\gamma(x),d\bigr\rangle}{2\sqrt{f_\gamma(x)}}=\frac{\bigl\langle\nabla f_\gamma(x),w\bigr\rangle}{2\sqrt{f_\gamma(x)}\,\|w\|}\le-\frac{[\mu-\gamma(\alpha-\beta)]\|w\|}{2\sqrt{f_\gamma(x)}}-\frac{\sqrt{f_\gamma(x)}}{2\|w\|}.$$
(4)

In light of (4) and (vii) of Proposition 2.2, we have

$$\frac{\bigl\langle\nabla f_\gamma(x),d\bigr\rangle}{2\sqrt{f_\gamma(x)}}\le-\frac{[\mu-\gamma(\alpha-\beta)]\|w\|}{2\sqrt{f_\gamma(x)}}-\frac{\sqrt{\beta\gamma}}{2}.$$

If $\mu<\gamma(\alpha-\beta)$, then, using (vii) again and $\gamma(\alpha-2\beta)<\mu$,

$$\frac{\bigl\langle\nabla f_\gamma(x),d\bigr\rangle}{2\sqrt{f_\gamma(x)}}\le\frac{\gamma(\alpha-\beta)-\mu}{2\sqrt{\beta\gamma}}-\frac{\sqrt{\beta\gamma}}{2}=\frac{\gamma\alpha-\mu-2\gamma\beta}{2\sqrt{\beta\gamma}}<0.$$
(5)

If $\mu\ge\gamma(\alpha-\beta)$, then

$$\frac{\bigl\langle\nabla f_\gamma(x),d\bigr\rangle}{2\sqrt{f_\gamma(x)}}\le-\frac{\sqrt{\beta\gamma}}{2}.$$
(6)

Hence, (3) follows from (5) and (6). The proof is complete. □

We now give two examples showing, respectively, that condition (2) cannot be dropped and that Theorem 2.1 is applicable.

Example 2.1 Consider $K=\mathbb{R}$, $\varphi(x,y)=\frac{1}{2}|x-y|^2$, $\gamma=\frac{1}{2}$ and $f(x)=x^3$. Then we easily compute $\alpha=2\beta=1$, $y_\gamma^\varphi(x)=x-2f(x)$, $f_\gamma(x)=x^6$ and $S=\{0\}$. Clearly $f$ is coercive on $K$, while $\langle d,\nabla f(x)d\rangle=3x^2$ gives $\mu=0$, so (2) does not hold. Moreover, $f_\gamma$ does not have a global error bound.
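A numerical spot-check of Example 2.1 (our illustration, not part of the paper): it confirms $f_\gamma(x)=x^6$ and shows that $d(x,S)/\sqrt{f_\gamma(x)}=1/x^2$ blows up as $x\to 0$, so no finite modulus can work.

```python
import numpy as np

# Example 2.1: K = R, phi(x, y) = 0.5*(x - y)**2, gamma = 0.5, f(x) = x**3, S = {0}.
gamma = 0.5
f = lambda x: x ** 3

for x in [0.5, 0.1, 0.01]:
    y = x - f(x) / gamma                     # y_gamma(x) = x - 2*x**3 (K = R: no projection)
    fg = -(f(x) * (y - x) + 0.5 * gamma * (y - x) ** 2)
    assert abs(fg - x ** 6) < 1e-15          # f_gamma(x) = x**6
    ratio = abs(x) / np.sqrt(fg)             # d(x, S)/sqrt(f_gamma(x)) = 1/x**2
    assert abs(ratio - 1.0 / x ** 2) < 1e-6  # unbounded as x -> 0
```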

Example 2.2 Consider $K=[0,+\infty)$, $\varphi(x,y)=\frac{1}{2}|x-y|^2$, $\gamma=1$ and $f(x)=x$. Then we easily compute $\alpha=2\beta=1$, $\nabla f(x)=1$, $y_\gamma^\varphi(x)=0$, $f_\gamma(x)=\frac{1}{2}x^2$ and $S=\{0\}$. Clearly $f$ is coercive on $K$ and (2) holds with $\mu=1$. Thus, it follows from Theorem 2.1 that $f_\gamma$ has a global error bound.
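Example 2.2 can likewise be verified numerically (our illustration). With $\mu=1$ the modulus of Theorem 2.1 evaluates to $\tau=\max\{\sqrt{2},2\sqrt{2}\}=2\sqrt{2}$, and the bound $d(x,S)\le\tau\sqrt{f_\gamma(x)}=2x$ indeed holds on $K$.

```python
import numpy as np

# Example 2.2: K = [0, inf), phi(x, y) = 0.5*(x - y)**2, gamma = 1, f(x) = x, S = {0}.
beta, gamma, alpha, mu = 0.5, 1.0, 1.0, 1.0
tau = max(2 * np.sqrt(beta * gamma) / (mu + 2 * gamma * beta - gamma * alpha),
          2 / np.sqrt(beta * gamma))        # modulus from Theorem 2.1 = 2*sqrt(2)

for x in [0.01, 0.5, 3.0, 100.0]:
    y = max(x - x / gamma, 0.0)             # y_gamma(x) = Proj_K(x - f(x)/gamma) = 0
    fg = -(x * (y - x) + 0.5 * gamma * (y - x) ** 2)
    assert abs(fg - 0.5 * x ** 2) < 1e-9    # f_gamma(x) = 0.5*x**2
    assert x <= tau * np.sqrt(fg) + 1e-12   # d(x, S) <= tau*sqrt(f_gamma(x)) = 2x
```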

Using [[21], Proposition 2.3(ii)], Huang and Ng [[14], Theorem 2.1] obtained the following conclusion. We now give a slightly different proof based on Theorem 2.1.

Corollary 2.1 Let $f$ be strongly monotone on $\mathbb{R}^n$ with modulus $\lambda>0$ and let $\gamma(\alpha-2\beta)<\lambda$. Assume that $\varphi$ satisfies (P5). Then $f_\gamma$ has a global error bound with the modulus

$$\max\Bigl\{\frac{2\sqrt{\beta\gamma}}{\lambda+2\gamma\beta-\gamma\alpha},\ \frac{2}{\sqrt{\beta\gamma}}\Bigr\}.$$

Proof Let $x\in K\setminus S$ and $w=y_\gamma^\varphi(x)-x$. Since $f$ is continuously differentiable,

$$f(x+tw)=f(x)+t\nabla f(x)w+o(t),$$

where $\frac{o(t)}{t}\to 0$ as $t\to 0$. Since $f$ is strongly monotone with modulus $\lambda$, one has

$$\bigl\langle f(x+tw)-f(x),\,tw\bigr\rangle\ge\lambda\|tw\|^2.$$

Dividing by $t^2$ and letting $t\to 0$ yields

$$\bigl\langle w,\nabla f(x)w\bigr\rangle\ge\lambda\|w\|^2.$$

Thus, (2) holds with $\mu\ge\lambda$. Moreover, the strong monotonicity of $f$ implies the coerciveness of $f$ (cf. [[22], Remark 2.1]). Hence, by Theorem 2.1, $f_\gamma$ has a global error bound. □
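The limiting argument in this proof can be illustrated with a finite difference. The sketch below uses our own illustrative data (a strongly monotone affine map) and checks $\langle w,\nabla f(x)w\rangle\ge\lambda\|w\|^2$ via $\langle f(x+tw)-f(x),tw\rangle/t^2$ for small $t$:

```python
import numpy as np

# Illustration of the proof step: strong monotonicity with modulus lam forces
# <w, grad f(x) w> >= lam*||w||^2.  Here f(x) = A x + b with A = lam*I + N, N skew.
rng = np.random.default_rng(1)
lam, n = 1.5, 4
N = rng.standard_normal((n, n))
N = N - N.T
A = lam * np.eye(n) + N
f = lambda x: A @ x + np.ones(n)

x = rng.standard_normal(n)
w = rng.standard_normal(n)
t = 1e-6
# <f(x + t*w) - f(x), t*w> / t**2  ->  <w, grad f(x) w> as t -> 0:
quad = (f(x + t * w) - f(x)) @ (t * w) / t ** 2
assert quad >= lam * (w @ w) - 1e-6
```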

3 A global error bound of (WVVI)

In this section, by the nonlinear scalarization method and Theorem 2.1, we discuss a global error bound of (WVVI). The dual cone of $C$ is defined by $C^*:=\{\xi\in\mathbb{R}^m:\langle\xi,z\rangle\ge 0,\ \forall z\in C\}$. For each $\xi\in\mathbb{R}^m$, $\|\xi\|:=\sup\{|\langle\xi,z\rangle|:\|z\|\le 1\}$, where $\langle\xi,z\rangle$ denotes the value of $\xi$ at $z$. Let $e\in\operatorname{int}C$ and $B_e:=\{\xi\in C^*:\langle\xi,e\rangle=1\}$. It is well known that $B_e$ is a compact convex base of $C^*$.

Lemma 3.1 [20]

$$S=\bigcup_{\xi\in C^*\setminus\{0\}}S_\xi=\bigcup_{\xi\in B_e}S_\xi,$$

where $S_\xi:=\bigl\{x\in K:\bigl\langle\sum_{i=1}^m\xi_iF_i(x),\,y-x\bigr\rangle\ge 0,\ \forall y\in K\bigr\}$ and $S$ is the solution set of (WVVI).

Recall the generalized regularized gap function for (WVVI), defined by

$$\phi_\gamma(x):=\min_{\xi\in B_e}f_\gamma(x,\xi),$$

where $f_\gamma(x,\xi):=\max_{y\in K}\bigl\{\bigl\langle\sum_{i=1}^m\xi_iF_i(x),\,x-y\bigr\rangle-\gamma\varphi(x,y)\bigr\}$. When $\varphi(x,y)=\frac{1}{2}\|x-y\|^2$, it reduces to the regularized gap function defined in [4].
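For instance, with $m=2$, $C=\mathbb{R}^2_+$ and $e=(1,1)$, one has $B_e=\{(t,1-t):t\in[0,1]\}$, and $\phi_\gamma$ can be approximated by discretizing $B_e$. The sketch below uses our own illustrative data ($F_1$, $F_2$, $K$ and the grid are assumptions, not from the paper):

```python
import numpy as np

# Sketch: phi_gamma(x) = min over xi in B_e of f_gamma(x, xi), for m = 2, C = R^2_+,
# e = (1, 1), so B_e = {(t, 1 - t) : t in [0, 1]}.  Illustrative data:
# K = [0, inf), F1(x) = x, F2(x) = 2*x, phi(x, y) = 0.5*(x - y)**2, gamma = 1.
gamma = 1.0
F = lambda x: np.array([x, 2.0 * x])

def f_gamma(x, xi):
    g = xi @ F(x)                    # scalarized operator <xi, F(x)>
    y = max(x - g / gamma, 0.0)      # closed-form minimizer for this phi
    return g * (x - y) - 0.5 * gamma * (x - y) ** 2

def phi_gamma(x, grid=1001):
    ts = np.linspace(0.0, 1.0, grid)
    return min(f_gamma(x, np.array([t, 1.0 - t])) for t in ts)

# For x >= 0 the minimum over B_e is attained at xi = (1, 0), giving
# phi_gamma(x) = 0.5*x**2 for this data.
```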

Theorem 3.1 Let $\gamma(\alpha-2\beta)<\min_{\xi\in B_e}\mu_\xi$. Assume that $\varphi$ satisfies (P5). For each $\xi\in B_e$, suppose that $\xi F$ is coercive on $K$ and that the following condition holds:

$$\mu_\xi:=\inf\Bigl\{\bigl\langle d,\nabla(\xi F)(x)d\bigr\rangle\Bigm|x\in K\setminus S_\xi,\ d=\frac{y_\gamma^\varphi(x)-x}{\|y_\gamma^\varphi(x)-x\|}\Bigr\}>0,$$
(7)

where $y_\gamma^\varphi(x)$ denotes the minimizer in (1) associated with $f=\xi F$. Then $\phi_\gamma$ has a global error bound with the modulus

$$\max\Bigl\{\max_{\xi\in B_e}\frac{2\sqrt{\beta\gamma}}{\mu_\xi+2\gamma\beta-\gamma\alpha},\ \frac{2}{\sqrt{\beta\gamma}}\Bigr\}.$$

Proof It follows from (vi) of Proposition 2.2 that $S_\xi$ is a nonempty compact subset of $K$ for each $\xi\in B_e$. If $x\in S$, then the assertion obviously holds. Let $x\in K\setminus S$. Then $\phi_\gamma(x)>0$ and, since $B_e$ is compact, there exists $\xi_0\in B_e$ such that $f_\gamma(x,\xi_0)=\phi_\gamma(x)$. It follows from Theorem 2.1 that

$$d(x,S_{\xi_0})\le\tau_{\xi_0}\sqrt{f_\gamma(x,\xi_0)},$$

where $\tau_\xi:=\max\bigl\{\frac{2\sqrt{\beta\gamma}}{\mu_\xi+2\gamma\beta-\gamma\alpha},\,\frac{2}{\sqrt{\beta\gamma}}\bigr\}$. Thus, by Lemma 3.1, one has

$$d(x,S)\le d(x,S_{\xi_0})\le\tau_{\xi_0}\sqrt{f_\gamma(x,\xi_0)}\le\Bigl(\max_{\xi\in B_e}\tau_\xi\Bigr)\sqrt{\phi_\gamma(x)}.$$

Hence, $\phi_\gamma$ has a global error bound with the modulus $\max_{\xi\in B_e}\tau_\xi$. □

Remark 3.1 If $F_i$ is strongly monotone with modulus $\lambda_i$ for $i=1,2,\dots,m$ and $C=\mathbb{R}^m_+$, it follows from [[21], Proposition 2.3] that

$$\bigl\langle d,\nabla(\xi F)(x)d\bigr\rangle\ge\lambda\|d\|^2,\quad\forall d\in\mathbb{R}^n,\ \xi\in B_e,$$

where $\lambda:=\min_{1\le i\le m}\lambda_i$. Moreover, the strong monotonicity of $F_i$ implies the coerciveness of $\xi F$ (cf. [[22], Remark 2.1]) and that each scalarized (VI) has a unique solution (cf. [[11], Theorem 2.3.3]). Thus, by Theorem 3.1, $\phi_\gamma$ has a global error bound. Hence, our results extend [[4], Theorem 2.9].
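The inequality in Remark 3.1 can be spot-checked for $m=2$ under illustrative assumptions of ours (Jacobians of the form $\lambda_i I$ plus a skew-symmetric part):

```python
import numpy as np

# Spot-check for Remark 3.1 with C = R^2_+: if grad F_i = lam_i*I + skew part, then
# for xi = (t, 1 - t) in B_e, <d, grad(xi F)(x) d> >= min_i lam_i for unit d.
rng = np.random.default_rng(2)
n, lams = 3, [1.0, 2.5]
Js = []
for lam in lams:
    N = rng.standard_normal((n, n))
    N = N - N.T
    Js.append(lam * np.eye(n) + N)   # illustrative Jacobian of F_i at a fixed x

d = rng.standard_normal(n)
d /= np.linalg.norm(d)               # unit direction
for t in np.linspace(0.0, 1.0, 11):
    J = t * Js[0] + (1 - t) * Js[1]  # grad(xi F)(x) for xi = (t, 1 - t)
    assert d @ (J @ d) >= min(lams) - 1e-10
```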