1 Introduction

As is well known, notions of well-posedness can be divided into two groups: Hadamard type and Tykhonov type [1]. Roughly speaking, Hadamard-type well-posedness of a problem means the continuous dependence of the optimal solution on the data of the problem. Tykhonov-type well-posedness notions, such as Tykhonov and Levitin-Polyak well-posedness, are based on the convergence of approximating solution sequences of the problem. Some researchers have investigated the relations between them for different problems (see [1–4]). The notion of extended well-posedness was proposed by Zolezzi [5] in the context of scalar optimization. In some sense this notion unifies the ideas of Tykhonov and Hadamard well-posedness. Moreover, the notion of extended well-posedness has been generalized to vector optimization problems by Huang [6–8].

On the other hand, the vector equilibrium problem provides a very general model for a wide range of problems, for example, the vector optimization problem, the vector variational inequality problem, the vector complementarity problem, and the vector saddle point problem. In the literature, existence results for various types of vector equilibrium problems have been investigated intensively; see, e.g., [9, 10] and the references therein. The study of well-posedness for vector equilibrium problems is another important topic in vector optimization theory. Recently, Tykhonov-type well-posedness for vector optimization problems, vector variational inequality problems, and vector equilibrium problems has been intensively studied in the literature; see, e.g., [11–17]. Among those papers, we observe that scalarization is an efficient approach to Tykhonov-type well-posedness for vector optimization problems. As noted in [12, 16], the notions of well-posedness in the scalar case can be extended to the vector case and, to this end, one needs an appropriate scalarization technique. Such a technique is supposed to preserve some well-posedness properties when one passes from the vector to the scalar case, and simple examples show that linear scalarization is not useful from this point of view even in the convex case. Efforts in this direction were made in [11, 12, 16, 18]. Miglierina et al. [11] investigated several types of well-posedness concepts for vector optimization problems by using a nonlinear scalarization procedure, and gave some equivalences between well-posedness of vector optimization problems and well-posedness of the corresponding scalar optimization problems. By virtue of a nonlinear scalarization function, Durea [12] proved that the Tykhonov well-posedness of the scalarized optimization problems is equivalent to the Tykhonov well-posedness of the original vector optimization problems. Very recently, Li and Xia [16] investigated Levitin-Polyak well-posedness for vector optimization problems by using a nonlinear scalarization function. They also established the equivalence between the Levitin-Polyak well-posedness of the scalar optimization problems and that of the vector optimization problems.

Motivated and inspired by the research work mentioned above, we introduce a new well-posedness concept for generalized vector equilibrium problems (in short, (GVEP)), which unifies their Hadamard and Levitin-Polyak well-posedness. The well-posedness of (GVEP) is investigated by a new method which is different from the ones used in [5–8]. Our method is based on a nonlinear scalarization technique and the bounded rationality model $M$ (see [19–22]). Furthermore, we give some sufficient conditions for various types of well-posedness of (GVEP). Finally, we apply these results to generalized equilibrium problems (in short, (GEP)).

2 Preliminaries

Let $X$ be a nonempty subset of a Hausdorff topological space $H$ and let $Y$ be a Hausdorff topological vector space. Assume that $C$ is a nonempty, closed, convex, and pointed cone in $Y$ with apex at the origin and $\operatorname{int}C \neq \emptyset$, where $\operatorname{int}C$ denotes the topological interior of $C$.

Let $G : X \rightrightarrows X$ be a set-valued mapping and $\varphi : X \times X \to Y$ be a vector-valued mapping. The problem of interest, called the generalized vector equilibrium problem (in short, (GVEP)), consists of finding an element $x \in X$ such that $x \in G(x)$ and
\[
\varphi(x,y) \notin -\operatorname{int}C, \quad \forall y \in G(x).
\]

When $Y = \mathbb{R}$ and $C = [0, +\infty[$, the generalized vector equilibrium problem becomes the generalized equilibrium problem (in short, (GEP)): find an element $x \in X$ such that $x \in G(x)$ and
\[
\varphi(x,y) \geq 0, \quad \forall y \in G(x).
\]

Now we introduce the notion of Levitin-Polyak approximating solution sequence for (GVEP).

Definition 2.1 A sequence $\{x_n\} \subseteq X$ is called a Levitin-Polyak approximating solution sequence (in short, LP sequence) for (GVEP) if there exists $\{\epsilon_n\} \subseteq \mathbb{R}_+$ with $\epsilon_n \to 0$ such that
\[
d\bigl(x_n, G(x_n)\bigr) \leq \epsilon_n
\]
and
\[
\varphi(x_n, y) + \epsilon_n e \notin -\operatorname{int}C, \quad \forall y \in G(x_n),
\]
where $e \in \operatorname{int}C$ is a fixed element.

Next, we introduce a nonlinear scalarization function and its related properties.

Lemma 2.1 ([10, 18, 23])

For a fixed $e \in \operatorname{int}C$, the nonlinear scalarization function $\xi_e : Y \to \mathbb{R}$ is defined by
\[
\xi_e(y) = \inf\{r \in \mathbb{R} : y \in re - C\}, \quad \forall y \in Y.
\]

The nonlinear scalarization function $\xi_e$ has the following properties (a numerical illustration follows the lemma):

(i) $\xi_e(y) \geq r \Longleftrightarrow y \notin re - \operatorname{int}C$;

(ii) $\xi_e(re) = r$;

(iii) $\xi_e(y_1 + y_2) \leq \xi_e(y_1) + \xi_e(y_2)$ for all $y_1, y_2 \in Y$.
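To make the scalarization concrete, here is a minimal numerical sketch (an illustration added for this text, not taken from the references): take $Y = \mathbb{R}^2$, $C = \mathbb{R}^2_+$ and $e = (1,1)$, in which case $\xi_e(y) = \inf\{r : y \in re - C\} = \max(y_1, y_2)$, and properties (i)-(iii) can be checked numerically.

```python
import numpy as np

def xi_e(y):
    """Gerstewitz scalarization for Y = R^2, C = R^2_+, e = (1, 1):
    xi_e(y) = inf{ r : y in r*e - C } = max(y_1, y_2)."""
    return float(np.max(y))

e = np.array([1.0, 1.0])

# Property (ii): xi_e(r*e) = r
r = 0.7
assert abs(xi_e(r * e) - r) < 1e-12

rng = np.random.default_rng(0)

# Property (iii): subadditivity, xi_e(y1 + y2) <= xi_e(y1) + xi_e(y2)
for _ in range(1000):
    y1, y2 = rng.normal(size=2), rng.normal(size=2)
    assert xi_e(y1 + y2) <= xi_e(y1) + xi_e(y2) + 1e-12

# Property (i): xi_e(y) >= r  <=>  y not in r*e - int C,
# where y in r*e - int C means y_1 < r and y_2 < r.
for _ in range(1000):
    y, r = rng.normal(size=2), rng.normal()
    in_shifted_open_cone = bool(np.all(y < r))   # y in r*e - int C
    assert (xi_e(y) >= r) == (not in_shifted_open_cone)
```

For this particular choice of $C$ and $e$, property (i) simply says that $\max(y_1, y_2) \geq r$ exactly when $y$ lies outside the open shifted cone $re - \operatorname{int}C$.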

Definition 2.2 Let $\varphi : X \to Y$ be a vector-valued mapping.

(i) $\varphi$ is said to be $C$-upper semicontinuous at $x$ if, for any open neighborhood $V$ of the zero element in $Y$, there is an open neighborhood $U$ of $x$ in $X$ such that $\varphi(x') \in \varphi(x) + V - C$ for all $x' \in U$;

(ii) $\varphi$ is said to be $C$-upper semicontinuous on $X$ if $\varphi$ is $C$-upper semicontinuous at each $x \in X$;

(iii) $\varphi$ is said to be $C$-lower semicontinuous at $x$ if, for any open neighborhood $V$ of the zero element in $Y$, there is an open neighborhood $U$ of $x$ in $X$ such that $\varphi(x') \in \varphi(x) + V + C$ for all $x' \in U$;

(iv) $\varphi$ is said to be $C$-lower semicontinuous on $X$ if $\varphi$ is $C$-lower semicontinuous at each $x \in X$.

Remark 2.1 In Definition 2.2, when $Y = \mathbb{R}$ and $C = [0, +\infty[$, $C$-upper semicontinuity reduces to upper semicontinuity and $C$-lower semicontinuity reduces to lower semicontinuity.

Lemma 2.2 If $\varphi : X \times X \to Y$ is $C$-upper semicontinuous on $X \times X$, then $\xi_e \circ \varphi : X \times X \to \mathbb{R}$ is upper semicontinuous on $X \times X$.

Proof In order to show that $\xi_e \circ \varphi : X \times X \to \mathbb{R}$ is upper semicontinuous on $X \times X$, we must check that, for any $r \in \mathbb{R}$, the set $L = \{(x,y) \in X \times X : \xi_e(\varphi(x,y)) \geq r\}$ is closed.

Let $(x_n, y_n) \in L$ with $(x_n, y_n) \to (x_0, y_0)$. Then $\xi_e(\varphi(x_n, y_n)) \geq r$, that is, by Lemma 2.1(i), $\varphi(x_n, y_n) \notin re - \operatorname{int}C$. We only need to prove that $\xi_e(\varphi(x_0, y_0)) \geq r$, that is, $\varphi(x_0, y_0) \notin re - \operatorname{int}C$. By way of contradiction, assume that $\varphi(x_0, y_0) \in re - \operatorname{int}C$; then there exists an open neighborhood $V$ of the zero element in $Y$ such that
\[
\varphi(x_0, y_0) + V \subseteq re - \operatorname{int}C.
\]

Since $\varphi$ is $C$-upper semicontinuous at $(x_0, y_0) \in X \times X$, for all sufficiently large $n$ we have
\[
\varphi(x_n, y_n) \in \varphi(x_0, y_0) + V - C \subseteq re - \operatorname{int}C - C \subseteq re - \operatorname{int}C.
\]

This contradicts $\varphi(x_n, y_n) \notin re - \operatorname{int}C$. Hence $L$ is closed, which shows that $\xi_e \circ \varphi : X \times X \to \mathbb{R}$ is upper semicontinuous on $X \times X$. □

Finally, we recall some useful definitions and lemmas.

Let $F : X \rightrightarrows Y$ be a set-valued mapping. $F$ is said to be upper semicontinuous at $x \in X$ if, for any open set $U \supseteq F(x)$, there is an open neighborhood $O(x)$ of $x$ such that $U \supseteq F(x')$ for each $x' \in O(x)$; $F$ is said to be lower semicontinuous at $x$ if, for any open set $U$ with $U \cap F(x) \neq \emptyset$, there is an open neighborhood $O(x)$ of $x$ such that $U \cap F(x') \neq \emptyset$ for each $x' \in O(x)$; $F$ is said to be a usco mapping if $F$ is upper semicontinuous on $X$ and $F(x)$ is nonempty and compact for each $x \in X$; $F$ is said to be closed if $\operatorname{Graph}(F)$ is closed, where $\operatorname{Graph}(F) = \{(x,y) \in X \times Y : x \in X,\ y \in F(x)\}$ is the graph of $F$.

Lemma 2.3 [22]

Let $X$ and $Y$ be two metric spaces. Suppose that $F : Y \rightrightarrows X$ is a usco mapping. Then for any $y_n \to y$ and any $x_n \in F(y_n)$, there is a subsequence $\{x_{n_k}\} \subseteq \{x_n\}$ such that $x_{n_k} \to x \in F(y)$.

Lemma 2.4 [24]

If $F : Y \rightrightarrows X$ is closed and $X$ is compact, then $F$ is upper semicontinuous on $Y$.

Let $(X,d)$ be a metric space. Denote by $K(X)$ the collection of all nonempty compact subsets of $X$. For arbitrary $C_1, C_2 \in K(X)$, define
\[
h(C_1, C_2) = \max\bigl\{h_0(C_1, C_2),\ h_0(C_2, C_1)\bigr\},
\]
where
\[
h_0(C_1, C_2) = \sup\bigl\{d(b, C_2) : b \in C_1\bigr\}
\]
and
\[
d(b, C_2) = \inf\bigl\{d(b, c) : c \in C_2\bigr\}.
\]

It is obvious that $h$ is a Hausdorff metric on $K(X)$.
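For finite (hence compact) subsets of a metric space, the quantities above can be evaluated directly by enumeration. The following sketch (illustrative only; the particular sets are made up for the example) computes $h$ for finite subsets of the real line with $d(x,y) = |x - y|$.

```python
def dist_to_set(b, C2, d):
    """d(b, C2) = inf{ d(b, c) : c in C2 }."""
    return min(d(b, c) for c in C2)

def h0(C1, C2, d):
    """h0(C1, C2) = sup{ d(b, C2) : b in C1 }."""
    return max(dist_to_set(b, C2, d) for b in C1)

def hausdorff(C1, C2, d=lambda x, y: abs(x - y)):
    """h(C1, C2) = max{ h0(C1, C2), h0(C2, C1) }."""
    return max(h0(C1, C2, d), h0(C2, C1, d))

# Two compact (finite) subsets of the real line
C1, C2 = {0.0, 1.0}, {0.0, 0.5, 2.0}
print(hausdorff(C1, C2))   # h0(C1, C2) = 0.5, h0(C2, C1) = 1.0, so h = 1.0
```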

Lemma 2.5 [25]

Let $(X,d)$ be a metric space and let $h$ be the Hausdorff metric on $K(X)$. Then $(K(X), h)$ is complete if and only if $(X,d)$ is complete.

3 Bounded rationality model and definition of well-posedness for (GVEP)

Let $(X,d)$ be a metric space. The problem space $\Lambda$ of (GVEP) is given by
\[
\Lambda = \left\{ \lambda = (\varphi, G) :
\begin{array}{l}
\varphi : X \times X \to Y \text{ is } C\text{-upper semicontinuous on } X \times X,\\
\forall x \in X,\ \varphi(x,x) = 0,\ \sup_{(x,y) \in X \times X} \|\varphi(x,y)\| < +\infty,\\
G : X \rightrightarrows X \text{ is continuous with nonempty compact values on } X,\\
\exists x \in X \text{ such that } x \in G(x) \text{ and } \varphi(x,y) \notin -\operatorname{int}C,\ \forall y \in G(x)
\end{array}
\right\}.
\]

For any $\lambda_1 = (\varphi_1, G_1)$, $\lambda_2 = (\varphi_2, G_2) \in \Lambda$, define
\[
\rho(\lambda_1, \lambda_2) := \sup_{(x,y) \in X \times X} \bigl\|\varphi_1(x,y) - \varphi_2(x,y)\bigr\| + \sup_{x \in X} h\bigl(G_1(x), G_2(x)\bigr),
\]
where $h$ denotes the Hausdorff distance on $K(X)$. Clearly, $(\Lambda, \rho)$ is a metric space.
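As a small illustration of the metric $\rho$ (a sketch under simplifying assumptions: $X$ finite and $Y = \mathbb{R}$ with $\|\cdot\|$ the absolute value; the particular $\varphi_i$ and $G_i$ below are made up for the example), both suprema can be computed by enumeration, reusing the Hausdorff distance recalled in Section 2.

```python
def hausdorff(A, B, d=lambda x, y: abs(x - y)):
    """Hausdorff distance between finite sets, as defined in Section 2."""
    h0 = lambda C1, C2: max(min(d(b, c) for c in C2) for b in C1)
    return max(h0(A, B), h0(B, A))

# rho(l1, l2) = sup_{(x,y)} |phi1(x,y) - phi2(x,y)| + sup_x h(G1(x), G2(x))
X = [0.0, 0.5, 1.0]
phi1 = lambda x, y: y - x
phi2 = lambda x, y: 0.9 * (y - x)                # slightly perturbed bifunction
G1 = lambda x: {z for z in X}                    # constant constraint map
G2 = lambda x: {z for z in X if z >= x}          # perturbed constraint map

def rho(phi_a, G_a, phi_b, G_b, X):
    term_phi = max(abs(phi_a(x, y) - phi_b(x, y)) for x in X for y in X)
    term_G = max(hausdorff(G_a(x), G_b(x)) for x in X)
    return term_phi + term_G

print(rho(phi1, G1, phi2, G2, X))   # 0.1 + 1.0 = 1.1
```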

Next, the bounded rationality model $M = \{\Lambda, X, f, \Phi\}$ for (GVEP) is defined as follows:

(i) $\Lambda$ and $X$ are two metric spaces;

(ii) the feasible set of the problem $\lambda \in \Lambda$ is defined by
\[
f(\lambda) := \bigl\{x \in X : x \in G(x)\bigr\};
\]

(iii) the solution set of the problem $\lambda \in \Lambda$ is defined by
\[
E(\lambda) := \bigl\{x \in G(x) : \varphi(x,y) \notin -\operatorname{int}C,\ \forall y \in G(x)\bigr\};
\]

(iv) the rationality function of the problem $\lambda \in \Lambda$ (illustrated in the sketch after this list) is defined by
\[
\Phi(\lambda, x) := \sup_{y \in G(x)} \bigl\{-\xi_e\bigl(\varphi(x,y)\bigr)\bigr\}.
\]
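As an illustration of items (ii)-(iv), the following sketch (added here for illustration; it assumes a finite $X$, $Y = \mathbb{R}^2$, $C = \mathbb{R}^2_+$ and $e = (1,1)$, so that $\xi_e(y) = \max(y_1, y_2)$ as in Section 2) computes the feasible set, the rationality function, and the solution set by enumeration.

```python
import numpy as np

def xi_e(y):                       # Gerstewitz function for C = R^2_+, e = (1, 1)
    return float(np.max(y))

X = [0.0, 0.5, 1.0]
G = lambda x: [z for z in X if z >= x]                 # constraint map with compact values
phi = lambda x, y: np.array([y - x, 2.0 * (y - x)])    # phi(x, x) = 0

def feasible(x):                   # x in f(lambda)  <=>  x in G(x)
    return x in G(x)

def Phi(x):                        # Phi(lambda, x) = sup_{y in G(x)} { -xi_e(phi(x, y)) }
    return max(-xi_e(phi(x, y)) for y in G(x))

# E(lambda) = { x in G(x) : phi(x, y) not in -int C for all y in G(x) },
# which by Lemma 3.1(3) below coincides with { x in f(lambda) : Phi(lambda, x) = 0 }.
E = [x for x in X if feasible(x) and Phi(x) <= 1e-12]
print(E)   # here every x in X is a solution: phi(x, y) >= 0 componentwise when y >= x
```

The numerical test `Phi(x) <= 1e-12` is the computational counterpart of the exact condition $\Phi(\lambda, x) = 0$ used below.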

Lemma 3.1

(1) For all $x \in f(\lambda)$, $\Phi(\lambda, x) \geq 0$.

(2) For all $\lambda \in \Lambda$, $E(\lambda) \neq \emptyset$.

(3) For all $\lambda \in \Lambda$ and $\epsilon \geq 0$, $\Phi(\lambda, x) = \sup_{y \in G(x)} \{-\xi_e(\varphi(x,y))\} \leq \epsilon$ if and only if $\varphi(x,y) + \epsilon e \notin -\operatorname{int}C$, $\forall y \in G(x)$. In particular, $x \in E(\lambda)$ if and only if $x \in f(\lambda)$ and $\Phi(\lambda, x) = 0$.

Proof (1) If $x \in f(\lambda)$, then $x \in G(x)$. Since $\varphi(x,x) = 0$ and $\xi_e(0) = 0$ by Lemma 2.1(ii), we have
\[
\Phi(\lambda, x) \geq -\xi_e\bigl(\varphi(x,x)\bigr) = 0.
\]

(2) Obvious.

(3) If $\varphi(x,y) + \epsilon e \notin -\operatorname{int}C$, $\forall y \in G(x)$, then by Lemma 2.1(i), $\xi_e(\varphi(x,y)) \geq -\epsilon$, $\forall y \in G(x)$. Thus we have $\Phi(\lambda, x) = \sup_{y \in G(x)} \{-\xi_e(\varphi(x,y))\} \leq \epsilon$.

Conversely, if $\Phi(\lambda, x) = \sup_{y \in G(x)} \{-\xi_e(\varphi(x,y))\} \leq \epsilon$, then $\xi_e(\varphi(x,y)) \geq -\epsilon$, $\forall y \in G(x)$. By Lemma 2.1(i), we get $\varphi(x,y) + \epsilon e \notin -\operatorname{int}C$, $\forall y \in G(x)$. □

Remark 3.1 By Definition 2.1 and Lemma 3.1, for all $\epsilon_n > 0$ with $\epsilon_n \to 0$, the set of LP approximating solutions of the problem $\lambda$ is defined as
\[
E(\lambda, \epsilon_n) := \bigl\{x \in X : d\bigl(x, f(\lambda)\bigr) \leq \epsilon_n,\ \Phi(\lambda, x) \leq \epsilon_n\bigr\};
\]
the set of solutions of the problem $\lambda$ is defined as
\[
E(\lambda) = E(\lambda, 0) := \bigl\{x \in X : x \in f(\lambda),\ \Phi(\lambda, x) = 0\bigr\}.
\]
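To visualize these sets in the scalar case (an illustrative sketch only: $Y = \mathbb{R}$, $C = [0, +\infty[$ and $e = 1$, so that $\xi_e$ is the identity and $\Phi(\lambda, x) = \sup_{y \in G(x)}\{-\varphi(x,y)\}$; the grid, $\varphi$ and $G$ below are chosen only for the example), one can compute $E(\lambda, \epsilon_n)$ on a discretization of $X = [0,1]$ and watch the approximating sets shrink toward $E(\lambda)$ as $\epsilon_n \to 0$.

```python
import numpy as np

# Scalar case (GEP): Y = R, C = [0, +inf[, e = 1, so xi_e(t) = t and
# Phi(lambda, x) = sup_{y in G(x)} { -phi(x, y) }.
X = np.linspace(0.0, 1.0, 101)            # grid discretization of X = [0, 1]
G = lambda x: X                            # G(x) = X, hence f(lambda) = X
phi = lambda x, y: x - y                   # phi(x, x) = 0; E(lambda) = {1}

def Phi(x):
    return float(np.max(-phi(x, G(x))))    # here Phi(lambda, x) = 1 - x

def E_approx(eps):
    """E(lambda, eps) = { x : d(x, f(lambda)) <= eps, Phi(lambda, x) <= eps };
    d(x, f(lambda)) = 0 on this grid because f(lambda) = X."""
    return X[np.array([Phi(x) <= eps for x in X])]

for eps in [0.5, 0.1, 0.01, 0.0]:
    sol = E_approx(eps)
    print(f"eps = {eps:5}: E(lambda, eps) = [{sol.min():.2f}, {sol.max():.2f}]")
# The approximating sets shrink to E(lambda) = {1}, as expected for LP well-posedness.
```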

Hence, Levitin-Polyak well-posedness for (GVEP) is defined as follows.

Definition 3.1

(i) If for every sequence $x_n \in E(\lambda, \epsilon_n)$ with $\epsilon_n > 0$ and $\epsilon_n \to 0$ there exists a subsequence $\{x_{n_k}\} \subseteq \{x_n\}$ such that $x_{n_k} \to x \in E(\lambda)$, then the problem $\lambda \in \Lambda$ is said to be generalized Levitin-Polyak well-posed (in short, GLP-wp);

(ii) If $E(\lambda) = \{x\}$ (a singleton) and for every sequence $x_n \in E(\lambda, \epsilon_n)$ with $\epsilon_n > 0$ and $\epsilon_n \to 0$ we have $x_n \to x$, then the problem $\lambda \in \Lambda$ is said to be Levitin-Polyak well-posed (in short, LP-wp).

Referring to [3], Hadamard well-posedness for (GVEP) is defined as follows.

Definition 3.2

(i) If for every sequence $\lambda_n \in \Lambda$ with $\lambda_n \to \lambda$ and every $x_n \in E(\lambda_n)$ there exists a subsequence $\{x_{n_k}\} \subseteq \{x_n\}$ such that $x_{n_k} \to x \in E(\lambda)$, then the problem $\lambda \in \Lambda$ is said to be generalized Hadamard well-posed (in short, GH-wp);

(ii) If $E(\lambda) = \{x\}$ (a singleton) and for every sequence $\lambda_n \in \Lambda$ with $\lambda_n \to \lambda$ and every $x_n \in E(\lambda_n)$ we have $x_n \to x$, then the problem $\lambda \in \Lambda$ is said to be Hadamard well-posed (in short, H-wp).

Next, we establish a new well-posedness concept for (GVEP), which unifies its Hadamard and Levitin-Polyak well-posedness.

Definition 3.3

(i) If for every sequence $\lambda_n \in \Lambda$ with $\lambda_n \to \lambda$ and every $x_n \in E(\lambda_n, \epsilon_n)$ with $\epsilon_n > 0$ and $\epsilon_n \to 0$ there exists a subsequence $\{x_{n_k}\} \subseteq \{x_n\}$ such that $x_{n_k} \to x \in E(\lambda)$, then the problem $\lambda \in \Lambda$ is said to be generalized well-posed (in short, G-wp);

(ii) If $E(\lambda) = \{x\}$ (a singleton) and for every sequence $\lambda_n \in \Lambda$ with $\lambda_n \to \lambda$ and every $x_n \in E(\lambda_n, \epsilon_n)$ with $\epsilon_n > 0$ and $\epsilon_n \to 0$ we have $x_n \to x$, then $\lambda \in \Lambda$ is said to be well-posed (in short, wp).

By Definitions 3.1, 3.2 and 3.3, it is easy to check the following.

Lemma 3.2

(1) If the problem $\lambda \in \Lambda$ is G-wp, then $\lambda$ must be GLP-wp.

(2) If the problem $\lambda \in \Lambda$ is wp, then $\lambda$ must be LP-wp.

Lemma 3.3

(1) If the problem $\lambda \in \Lambda$ is G-wp, then $\lambda$ must be GH-wp.

(2) If the problem $\lambda \in \Lambda$ is wp, then $\lambda$ must be H-wp.

4 Sufficient conditions for well-posedness of (GVEP)

Assume that the bounded rationality model $M = \{\Lambda, X, f, \Phi\}$ for (GVEP) is given. Now let $(X,d)$ be a compact metric space, let $(Y, \|\cdot\|)$ be a Banach space, and let $C$ be a nonempty, closed, convex, and pointed cone in $Y$ with apex at the origin and $\operatorname{int}C \neq \emptyset$.

In order to establish sufficient conditions for the well-posedness of (GVEP), we first give the following lemmas.

Lemma 4.1 (Λ,ρ) is a complete metric space.

Proof Let $\{\lambda_n = (\varphi_n, G_n)\}$ be a Cauchy sequence in $\Lambda$. Then for any $\epsilon > 0$ there is a positive integer $N$ such that, for any $n, m \geq N$,
\[
\rho(\lambda_n, \lambda_m) = \sup_{(x,y) \in X \times X} \bigl\|\varphi_n(x,y) - \varphi_m(x,y)\bigr\| + \sup_{x \in X} h\bigl(G_n(x), G_m(x)\bigr) < \epsilon.
\]

Then, for any fixed $(x,y) \in X \times X$, $\{\varphi_n(x,y)\}$ is a Cauchy sequence in $Y$ and $\{G_n(x)\}$ is a Cauchy sequence in $K(X)$. By Lemma 2.5, $(K(X), h)$ is complete, and $(Y, \|\cdot\|)$ is also complete. It follows that there exist $\varphi(x,y) \in Y$ and $G(x) \in K(X)$ such that $\lim_{m \to \infty} \varphi_m(x,y) = \varphi(x,y)$ and $\lim_{m \to \infty} G_m(x) = G(x)$. Thus, for all $n \geq N$, we have
\[
\sup_{(x,y) \in X \times X} \bigl\|\varphi_n(x,y) - \varphi(x,y)\bigr\| + \sup_{x \in X} h\bigl(G_n(x), G(x)\bigr) \leq \epsilon.
\]

Next, we prove that $\lambda = (\varphi, G) \in \Lambda$; this shows that $(\Lambda, \rho)$ is a complete metric space.

(i) For any open convex neighborhood $V$ of the zero element in $Y$, there is a positive integer $n_0$ such that, for all $x, y \in X$,
\[
\varphi(x,y) \in \varphi_{n_0}(x,y) + \tfrac{V}{3} \tag{1}
\]
and
\[
\varphi_{n_0}(x,y) \in \varphi(x,y) + \tfrac{V}{3}. \tag{2}
\]

Since $\lambda_{n_0} = (\varphi_{n_0}, G_{n_0}) \in \Lambda$, $\varphi_{n_0}$ is $C$-upper semicontinuous on $X \times X$; thus there are an open neighborhood $U_1$ of $x$ and an open neighborhood $U_2$ of $y$ such that
\[
\varphi_{n_0}(x', y') \in \varphi_{n_0}(x,y) + \tfrac{V}{3} - C, \quad \forall x' \in U_1,\ \forall y' \in U_2. \tag{3}
\]

By (1), (2), and (3), for all $x' \in U_1$ and $y' \in U_2$ we have
\[
\varphi(x', y') \in \varphi_{n_0}(x', y') + \tfrac{V}{3} \subseteq \varphi_{n_0}(x,y) + \tfrac{2}{3}V - C \subseteq \varphi(x,y) + V - C.
\]

This shows that $\varphi : X \times X \to Y$ is $C$-upper semicontinuous on $X \times X$.

(ii) It is easy to check that $\varphi(x,x) = 0$ for all $x \in X$, $\sup_{(x,y) \in X \times X} \|\varphi(x,y)\| < +\infty$, $G : X \rightrightarrows X$ is continuous on $X$, and $G(x)$ is a nonempty compact set for each $x \in X$.

(iii) Since $\lambda_n = (\varphi_n, G_n) \in \Lambda$, there exists $x_n \in X$ such that $x_n \in G_n(x_n)$ and $\varphi_n(x_n, y) \notin -\operatorname{int}C$, $\forall y \in G_n(x_n)$. Since $X$ is a compact metric space, we may assume that $x_n \to x$. For all $n \geq N$,
\[
h\bigl(G_n(x_n), G(x)\bigr) \leq h\bigl(G_n(x_n), G(x_n)\bigr) + h\bigl(G(x_n), G(x)\bigr) \leq \epsilon + h\bigl(G(x_n), G(x)\bigr). \tag{4}
\]

Since $G$ is continuous on $X$, we have
\[
h\bigl(G(x_n), G(x)\bigr) \to 0. \tag{5}
\]

By (4) and (5), we get
\[
d\bigl(x, G(x)\bigr) \leq d(x, x_n) + d\bigl(x_n, G_n(x_n)\bigr) + h\bigl(G_n(x_n), G(x)\bigr) = d(x, x_n) + h\bigl(G_n(x_n), G(x)\bigr) \to 0.
\]

Hence, $x \in G(x)$.

Finally, we only need to prove that $\varphi(x,y) \notin -\operatorname{int}C$, $\forall y \in G(x)$. By way of contradiction, assume that there exists $y_0 \in G(x)$ such that $\varphi(x, y_0) \in -\operatorname{int}C$. Then there exists an open convex neighborhood $V$ of the zero element in $Y$ such that $\varphi(x, y_0) + V \subseteq -\operatorname{int}C$.

Since $\sup_{(x,y) \in X \times X} \|\varphi_n(x,y) - \varphi(x,y)\| \to 0$, there is a positive integer $N_1$ such that, for all $n \geq N_1$,
\[
\varphi_n(x,y) \in \varphi(x,y) + \tfrac{V}{2}, \quad \forall x, y \in X. \tag{6}
\]

By virtue of $h(G_n(x_n), G(x)) \to 0$, there exist $y_n \in G_n(x_n)$ such that $y_n \to y_0$. Since $\varphi : X \times X \to Y$ is $C$-upper semicontinuous on $X \times X$, there exists a positive integer $N_2$ such that, for all $n \geq N_2$,
\[
\varphi(x_n, y_n) \in \varphi(x, y_0) + \tfrac{V}{2} - C. \tag{7}
\]

Let $N = \max\{N_1, N_2\}$. For all $n \geq N$, by (6) and (7), we have
\[
\varphi_n(x_n, y_n) \in \varphi(x_n, y_n) + \tfrac{V}{2} \subseteq \varphi(x, y_0) + V - C \subseteq -\operatorname{int}C - C \subseteq -\operatorname{int}C.
\]

This contradicts $\varphi_n(x_n, y) \notin -\operatorname{int}C$, $\forall y \in G_n(x_n)$. □

Lemma 4.2 $f : \Lambda \rightrightarrows X$ is a usco mapping.

Proof Since $X$ is a compact metric space, by Lemma 2.4 it suffices to show that $\operatorname{Graph}(f)$ is closed, where $\operatorname{Graph}(f) = \{(\lambda, x) \in \Lambda \times X : x \in f(\lambda)\}$. That is to say, for any $\lambda_n \in \Lambda$ with $\lambda_n \to \lambda$ and any $x_n \in f(\lambda_n)$ with $x_n \to x$, we need to show that $x \in f(\lambda)$.

In fact, for each $n = 1, 2, 3, \ldots$, since $x_n \in f(\lambda_n)$, we have $x_n \in G_n(x_n)$. Let $\sup_{x \in X} h(G_n(x), G(x)) = \epsilon_n$; then $\epsilon_n \to 0$ and $h(G_n(x_n), G(x_n)) \leq \epsilon_n$. Since $x_n \in G_n(x_n)$, there exists $x_n' \in G(x_n)$ such that $d(x_n, x_n') \leq \epsilon_n$. By
\[
d(x_n', x) \leq d(x_n', x_n) + d(x_n, x) \to 0 \tag{8}
\]
we get $x_n' \to x$. Since the set-valued mapping $G$ is continuous on $X$, we have
\[
h\bigl(G(x_n), G(x)\bigr) \to 0. \tag{9}
\]

By (8) and (9), we get
\[
d\bigl(x, G(x)\bigr) \leq d(x, x_n') + d\bigl(x_n', G(x_n)\bigr) + h\bigl(G(x_n), G(x)\bigr) \to 0. \tag{10}
\]

Since $G(x)$ is a nonempty compact subset of $X$, by (10) we have $x \in G(x)$. This shows that $x \in f(\lambda)$. □

Lemma 4.3 For all $(\lambda, x) \in \Lambda \times X$, $\Phi(\lambda, x)$ is lower semicontinuous at $(\lambda, x)$.

Proof By Lemma 4.1, we only need to show that, for any $\epsilon > 0$ and any sequences $\lambda_n = (\varphi_n, G_n) \in \Lambda$ with $\lambda_n \to \lambda = (\varphi, G) \in \Lambda$ and $x_n \in X$ with $x_n \to x \in X$, there exists a positive integer $N$ such that, for all $n \geq N$,
\[
\Phi(\lambda_n, x_n) > \Phi(\lambda, x) - \epsilon. \tag{11}
\]

By the definition of the least upper bound, there exists $y_0 \in G(x)$ such that
\[
-\xi_e\bigl(\varphi(x, y_0)\bigr) > \Phi(\lambda, x) - \tfrac{\epsilon}{2}. \tag{12}
\]

Note that $\sup_{x \in X} h(G_n(x), G(x)) \to 0$ and $h(G(x_n), G(x)) \to 0$; hence
\[
h\bigl(G_n(x_n), G(x)\bigr) \leq h\bigl(G_n(x_n), G(x_n)\bigr) + h\bigl(G(x_n), G(x)\bigr) \to 0. \tag{13}
\]

From (13), there exist $y_n \in G_n(x_n)$ such that $d(y_n, y_0) \to 0$.

Since $\sup_{(x,y) \in X \times X} \|\varphi_n(x,y) - \varphi(x,y)\| \to 0$, there exists a positive integer $N_1$ such that, for all $n \geq N_1$,
\[
\varphi_n(x_n, y_n) - \varphi(x_n, y_n) \in re, \tag{14}
\]
where $r \in \left]-\tfrac{\epsilon}{4}, \tfrac{\epsilon}{4}\right[$, $e \in \operatorname{int}C$, and $re$ lies in an open neighborhood of the zero element in $Y$. Thus, by (14) and Lemma 2.1(ii), we get
\[
-\tfrac{\epsilon}{4} \leq \xi_e\bigl(\varphi_n(x_n, y_n) - \varphi(x_n, y_n)\bigr) \leq \tfrac{\epsilon}{4}. \tag{15}
\]

By (15) and Lemma 2.1(iii), we get
\[
\begin{aligned}
-\xi_e\bigl(\varphi_n(x_n, y_n)\bigr) &= -\xi_e\bigl(\varphi_n(x_n, y_n) - \varphi(x_n, y_n) + \varphi(x_n, y_n)\bigr) \\
&\geq -\xi_e\bigl(\varphi(x_n, y_n)\bigr) - \xi_e\bigl(\varphi_n(x_n, y_n) - \varphi(x_n, y_n)\bigr) \\
&\geq -\xi_e\bigl(\varphi(x_n, y_n)\bigr) - \tfrac{\epsilon}{4}.
\end{aligned} \tag{16}
\]

By Lemma 2.2, $\xi_e \circ \varphi$ is upper semicontinuous on $X \times X$. Then there exists a positive integer $N_2$ such that, for all $n \geq N_2$,
\[
-\xi_e\bigl(\varphi(x_n, y_n)\bigr) > -\xi_e\bigl(\varphi(x, y_0)\bigr) - \tfrac{\epsilon}{4}. \tag{17}
\]

Let $N = \max\{N_1, N_2\}$. For all $n \geq N$, by (16), (17), and (12), we get inequality (11):
\[
\begin{aligned}
\Phi(\lambda_n, x_n) &= \sup_{y \in G_n(x_n)} \bigl\{-\xi_e\bigl(\varphi_n(x_n, y)\bigr)\bigr\} \geq -\xi_e\bigl(\varphi_n(x_n, y_n)\bigr) \\
&\geq -\xi_e\bigl(\varphi(x_n, y_n)\bigr) - \tfrac{\epsilon}{4} > -\xi_e\bigl(\varphi(x, y_0)\bigr) - \tfrac{\epsilon}{2} > \Phi(\lambda, x) - \epsilon.
\end{aligned}
\]
 □

Next, we give sufficient conditions for the G-wp and wp of (GVEP).

Theorem 4.1

(1) For all $\lambda \in \Lambda$, the problem $\lambda$ is G-wp.

(2) For all $\lambda \in \Lambda$, if $E(\lambda) = \{x\}$ (a singleton), then the problem $\lambda$ is wp.

Proof (1) Let $\lambda_n \in \Lambda$ with $\lambda_n \to \lambda$ and let $x_n \in E(\lambda_n, \epsilon_n)$ with $\epsilon_n > 0$ and $\epsilon_n \to 0$. Then
\[
d\bigl(x_n, f(\lambda_n)\bigr) \leq \epsilon_n \tag{18}
\]
and
\[
\Phi(\lambda_n, x_n) \leq \epsilon_n. \tag{19}
\]

From (18), there exist $u_n \in f(\lambda_n)$ such that $d(u_n, x_n) \to 0$ as $n \to \infty$. It follows from Lemma 4.2 and Lemma 2.3 that there exists a subsequence $\{u_{n_k}\} \subseteq \{u_n\}$ such that $u_{n_k} \to x \in f(\lambda)$. By
\[
d(x_{n_k}, x) \leq d(x_{n_k}, u_{n_k}) + d(u_{n_k}, x) \to 0,
\]
we get
\[
x_{n_k} \to x \in f(\lambda). \tag{20}
\]

By Lemma 4.3 and (19), we have
\[
0 \leq \Phi(\lambda, x) \leq \liminf_{k \to \infty} \Phi(\lambda_{n_k}, x_{n_k}) \leq \liminf_{k \to \infty} \epsilon_{n_k} = 0, \tag{21}
\]
that is, $\Phi(\lambda, x) = 0$.

By (20) and (21), we have $x \in E(\lambda)$. This shows that $\lambda$ is G-wp.

(2) By way of contradiction, suppose that the sequence $\{x_n\}$ does not converge to $x$. Then there exist an open neighborhood $O$ of $x$ and a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \notin O$ for all $k$. Since $E(\lambda) = \{x\}$ (a singleton), by the proof of (1), $\{x_{n_k}\}$ has a subsequence converging to $x$. This contradicts $x_{n_k} \notin O$. □

By Theorem 4.1 and Lemmas 3.2 and 3.3, it is easy to check the following.

Theorem 4.2

(1) For all $\lambda \in \Lambda$, the problem $\lambda$ must be GLP-wp and GH-wp.

(2) For all $\lambda \in \Lambda$, if $E(\lambda) = \{x\}$ (a singleton), then the problem $\lambda$ must be LP-wp and H-wp.

Finally, we apply these results to (GEP). Let $Y = \mathbb{R}$ and $C = [0, +\infty[$. The problem space of (GEP) is defined as
\[
\Lambda' = \left\{ \lambda = (\varphi, G) :
\begin{array}{l}
\varphi : X \times X \to \mathbb{R} \text{ is upper semicontinuous on } X \times X,\\
\forall x \in X,\ \varphi(x,x) = 0,\ \sup_{(x,y) \in X \times X} |\varphi(x,y)| < +\infty,\\
G : X \rightrightarrows X \text{ is continuous with nonempty compact values on } X,\\
\exists x \in X \text{ such that } x \in G(x) \text{ and } \varphi(x,y) \geq 0,\ \forall y \in G(x)
\end{array}
\right\}.
\]

For any $\lambda_1 = (\varphi_1, G_1)$, $\lambda_2 = (\varphi_2, G_2) \in \Lambda'$, define
\[
\varrho(\lambda_1, \lambda_2) := \sup_{(x,y) \in X \times X} \bigl|\varphi_1(x,y) - \varphi_2(x,y)\bigr| + \sup_{x \in X} h\bigl(G_1(x), G_2(x)\bigr).
\]

It is easy to check that $(\Lambda', \varrho)$ is a complete metric space. Hence, we have the following.

Corollary 4.1

(1) For all $\lambda \in \Lambda'$, the problem $\lambda$ is G-wp.

(2) For all $\lambda \in \Lambda'$, if $E(\lambda) = \{x\}$ (a singleton), then the problem $\lambda$ is wp.

Corollary 4.2

(1) For all $\lambda \in \Lambda'$, the problem $\lambda$ must be GLP-wp and GH-wp.

(2) For all $\lambda \in \Lambda'$, if $E(\lambda) = \{x\}$ (a singleton), then the problem $\lambda$ must be LP-wp and H-wp.