1 Introduction

In this paper, we consider the following generalized semi-infinite multiobjective programming problem:

$$\begin{aligned} (\mathrm{GSIMP}) \quad& \operatorname{Min}_{C} f(x):= \bigl(f_{1}(x),f_{2}(x),\ldots,f_{p} (x)\bigr),\quad \mbox{s.t. } x\in M, \\ &\quad \mbox{with the feasible set } M:=\bigl\{ x\in\mathbb{R}^{n}:g(x,y) \leq0, \forall y\in Y(x)\bigr\} , \\ &\quad \mbox{and the index set } Y(x):=\bigl\{ y\in\mathbb{R}^{m}:h_{l}(x,y) \leq0, l\in L\bigr\} , \end{aligned}$$

where \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p}\) is a vector-valued function, \(C\subseteq\mathbb{R}^{p}\) is a closed convex cone, \(g, h_{l}:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\) are real-valued functions, and \(L=\{1,2,\ldots,s\}\) with \(s<+\infty\) is a finite index set.

If \(p=1\) and \(C=\mathbb{R}_{+}\), then (GSIMP) reduces to a generalized semi-infinite programming problem (GSIP for short). If the index set does not depend on the decision variable x, i.e., \(Y(x)=Y\) for some nonempty set Y, then (GSIP) reduces to a standard semi-infinite programming problem; if the index set is finite, say \(Y(x)=\{y_{1},y_{2},\ldots,y_{t}\}\) for all \(x\in\mathbb{R}^{n}\), then (GSIP) reduces to a finite programming problem.

In recent years, generalized semi-infinite programming problems have become an active research topic in mathematical programming due to their extensive applications in many fields such as reverse Chebyshev approximation, robust optimization, minimax problems, design centering and disjunctive programming; see [1–3]. A large number of results have appeared in the literature; see, e.g., [4–9] and the references therein. Recently, standard semi-infinite programming problems have been generalized to the multiobjective case. Chuong et al. [10] derived necessary and sufficient conditions for lower and upper semi-continuity of Pareto solution maps for parametric semi-infinite multiobjective optimization problems. Chuong et al. [11] obtained the pseudo-Lipschitz property of Pareto solution maps for the parametric linear semi-infinite multiobjective optimization problem. Chuong and Yao [12] derived necessary and sufficient optimality conditions for strongly isolated solutions and positively properly efficient solutions of nonsmooth semi-infinite multiobjective optimization problems. Huy and Kim [13] established sufficient conditions for the Aubin Lipschitz-like property for nonconvex semi-infinite multiobjective optimization problems. Goberna et al. [14] derived some optimality conditions for linear semi-infinite vector optimization problems by using constraint qualifications.

On the other hand, it is well known that well-posedness is very important for both optimization theory and numerical methods of optimization problems, since it guarantees that every approximating solution sequence has a subsequence which converges to a solution. The notion of well-posedness was first introduced and studied by Tykhonov [15] for unconstrained optimization problems. One limitation of Tykhonov well-posedness is that every minimizing sequence needs to satisfy the feasibility conditions. To overcome this drawback, Levitin and Polyak [16] introduced a notion of well-posedness which does not necessarily require the feasibility of the minimizing sequences. Konsulova and Revalski [17] investigated the Levitin-Polyak well-posedness for convex optimization problems with functional constraints. Huang and Yang [18] extended the results of Konsulova and Revalski [17] to the nonconvex case. Huang and Yang [19, 20] studied the Levitin-Polyak well-posedness for vector optimization problems with functional constraints. They also derived characterizations for the nonemptiness and compactness of the set of weakly efficient solutions of a convex vector optimization problem with functional constraints in finite-dimensional spaces. Lalitha and Chatterjee [21] gave some characterizations for the Levitin-Polyak well-posedness of quasiconvex vector optimization problems in terms of efficient solutions. Long et al. [22] introduced several types of Levitin-Polyak well-posedness for equilibrium problems with functional constraints and obtained criteria and characterizations for these types of well-posedness. For other types of well-posedness of optimization problems, we refer the reader to [23–29] and the references therein.

Very recently, Wang et al. [30] considered the generalized Levitin-Polyak well-posedness for generalized semi-infinite programming problems and established criteria and characterizations for this kind of well-posedness.

We remark that, to the best of our knowledge, there are no papers dealing with the Levitin-Polyak well-posedness for generalized semi-infinite multiobjective programming problems. This paper is an effort in this direction.

The rest of this article is organized as follows. In Section 2, we recall some basic definitions required in the sequel. In Section 3, we introduce a notion of Levitin-Polyak well-posedness for generalized semi-infinite multiobjective programming problems and give some criteria and characterizations for this kind of well-posedness. In Section 4, we discuss the relations between the Levitin-Polyak well-posedness and the upper semi-continuity of approximate solution maps for generalized semi-infinite multiobjective programming problems.

2 Preliminaries

Let \(C\subseteq\mathbb{R}^{p}\) be a closed convex cone with nonempty interior \(\operatorname{int}C\), which induces an order in \(\mathbb{R}^{p}\): for any \(x,y\in\mathbb{R}^{p}\), \(x\leq_{C} y\) if and only if \(y-x\in C\). The corresponding ordered vector space is denoted by \((\mathbb{R}^{p}, C)\). Arbitrarily fix an \(e\in\operatorname{int}C\). Let \((\mathbb{R}^{n},d)\) be a metric space and \(K\subset\mathbb{R}^{n}\). We denote by \(d(a,K):=\inf_{b\in K}\|a-b\|\) the distance from a point a to the set K.

Definition 2.1

A point \(x_{0}\in M\) is said to be a weakly efficient solution for problem (GSIMP) iff for any \(x\in M\),

$$f(x)-f(x_{0})\notin-\operatorname{int}C. $$

Denote by S the set of weakly efficient solutions of problem (GSIMP).

Remark 2.1

From Definition 2.1, we have

$$ S=\bigl\{ x'\in{\mathbb{R}^{n}}:f(x)-f \bigl(x'\bigr)\notin-\operatorname{int}C, \forall x\in M, g \bigl(x',y\bigr)\leq0,\forall y\in Y\bigl(x'\bigr)\bigr\} . $$

To reformulate problem (GSIMP) as a finite nonlinear multiobjective programming problem, we define the value function of the lower-level problem by

$$ \varphi(x):=\left \{ \textstyle\begin{array}{l@{\quad}l} \sup_{y\in Y(x)}g(x,y), &\mbox{if } Y(x)\neq\emptyset; \\ -\infty,&\mbox{else}. \end{array}\displaystyle \right . $$

Let \(X=\{x\in\mathbb{R}^{n}: Y(x)\neq\emptyset\}\). It is easy to see that problem (GSIMP) can be equivalently reformulated as the following multiobjective programming problem with a single nonsmooth constraint:

$$ \operatorname{Min}_{C} f(x), \quad \mbox{s.t. } \varphi(x)\leq0. $$
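For readers who want to experiment with this reformulation, the lower-level supremum can be approximated by sampling the index set. The following Python sketch is purely illustrative: the function name value_function, the grids, and the concrete g and h below (which merely anticipate the data of Example 3.1) are our own assumptions and are not part of the problem data.

```python
import numpy as np

def value_function(x, g, h, y_grid):
    """Approximate phi(x) = sup_{y in Y(x)} g(x, y) over a finite sample of y values.

    Y(x) is approximated by the grid points y satisfying h(x, y) <= 0; if no sampled
    point is feasible, Y(x) is treated as empty and -inf is returned.
    """
    feasible = [y for y in y_grid if h(x, y) <= 0.0]
    if not feasible:
        return -np.inf
    return max(g(x, y) for y in feasible)

# Illustrative lower-level data (the same structure reappears in Example 3.1 below).
def g(x, y):
    return x - y**2 - 1.0

def h(x, y):
    return y - x**2

y_grid = np.linspace(-5.0, 5.0, 2001)
for x in (-1.0, 0.5, 1.0, 1.5):
    # x is feasible for the reformulated problem iff the printed value is <= 0
    print(x, value_function(x, g, h, y_grid))
```

Of course, this finite-sample surrogate for φ is used here only to make the reformulation concrete; it plays no role in the analysis below.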

We will use the following definitions of continuity for a set-valued map.

Definition 2.2

[31]

Let \(G:K\rightrightarrows\mathbb{R}^{m}\) be a set-valued map. G is said to be upper semi-continuous at \(x_{0}\in {K}\) iff for any open set V containing \(G(x_{0})\), there exists an open set U containing \(x_{0}\) such that, for all \(t\in{U}\cap K\), \(G(t)\subset{V}\). G is said to be upper semi-continuous on K iff it is upper semi-continuous at all \(x\in{K}\).

Definition 2.3

[31]

Let \(G:K\rightrightarrows\mathbb{R}^{m}\) be a set-valued map. G is said to be lower semi-continuous at \(x_{0}\in K\) iff for any \(y_{0}\in G(x_{0})\) and any neighborhood \(V(y_{0})\) of \(y_{0}\), there exists a neighborhood \(U(x_{0})\) of \(x_{0}\) such that \(G(x)\cap V(y_{0})\neq\emptyset\), \(\forall x\in U(x_{0})\cap K\). G is said to be lower semi-continuous on K iff it is lower semi-continuous at each \(x\in K\).

Remark 2.2

[31]

G is lower semi-continuous at \(x_{0}\in K\) if and only if for any \(x_{n}\rightarrow x_{0}\) and any \(y\in G(x_{0})\), there exists \(y_{n}\in G(x_{n})\) such that \(y_{n}\rightarrow y\).

Definition 2.4

[32]

Let \(G:K\rightrightarrows\mathbb {R}^{m}\) be a set-valued map. We say that G is Hausdorff upper continuous at \(x_{0}\in K\) iff for any neighborhood \(V(0)\) of 0, there exists a neighborhood \(W(x_{0})\) of \(x_{0}\) such that

$$G(x)\subset G(x_{0})+V(0), \quad \mbox{for all } x\in W(x_{0})\cap K. $$

We say that G is Hausdorff upper continuous iff G is Hausdorff upper continuous at every point of K.

Remark 2.3

If G is upper semi-continuous at \(x_{0}\in{K}\), then G is Hausdorff upper continuous at \(x_{0}\in{K}\); the converse implication is true when \(G(x_{0})\) is compact (see [33]).

Remark 2.4

For the index set \(Y(x)\) in problem (GSIMP), Wang et al. [30] gave a condition ensuring that the set-valued mapping Y is lower semi-continuous on X. They also proved that if Y is lower semi-continuous on X and g is lower semi-continuous, then φ is lower semi-continuous on X.

Definition 2.5

[34]

Let A be a nonempty subset of \(\mathbb{R}^{n}\). The Kuratowski measure of non-compactness μ of the set A is defined by

$$\mu(A)=\inf\Biggl\{ \varepsilon>0:A\subseteq\bigcup_{i=1}^{k}A_{i}, \operatorname{diam}A_{i}< \varepsilon, i=1,2,\ldots,k, \mbox{ for some } k\in\mathbb{N}\Biggr\} , $$

where \(\operatorname{diam}A_{i}\) is the diameter of \(A_{i}\) defined by \(\operatorname{diam}A_{i}=\sup\{d(x_{1},x_{2}):x_{1},x_{2}\in{A_{i}}\}\).

Definition 2.6

Let A and B be two nonempty subsets of \(\mathbb{R}^{n}\). The Hausdorff distance between A and B is defined by

$$H(A,B)=\max\bigl\{ e(A,B),e(B,A)\bigr\} , $$

where \(e(A,B)=\sup_{a\in{A}}d(a,B)\). Let \(\{A_{n}\}\) be a sequence of nonempty subsets of \(\mathbb{R}^{n}\). We say that \(A_{n}\) converges to A in the sense of the Hausdorff distance if \(H(A_{n},A)\rightarrow0\). It is easy to see that \(e(A_{n},A)\rightarrow0\) if and only if \(d(a_{n},A)\rightarrow0\) for every sequence \(\{a_{n}\}\) with \(a_{n}\in A_{n}\). For more details on this topic, we refer the reader to [34].
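As a small illustration of the excess \(e(A,B)\) and the Hausdorff distance used throughout the next section, the following sketch computes both quantities for finite point samples of two intervals; the intervals and sample sizes are arbitrary choices of ours.

```python
import numpy as np

def excess(A, B):
    """e(A, B) = sup_{a in A} d(a, B), computed for finite samples A, B."""
    return max(min(np.linalg.norm(a - b) for b in B) for a in A)

def hausdorff(A, B):
    """H(A, B) = max{e(A, B), e(B, A)}."""
    return max(excess(A, B), excess(B, A))

# Finite samples of the intervals [0, 1] and [-0.3, 1.2] on the real line.
A = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
B = np.linspace(-0.3, 1.2, 151).reshape(-1, 1)
print(excess(A, B), excess(B, A), hausdorff(A, B))  # approx 0.0, 0.3, 0.3
```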

3 Metric characterizations of Levitin-Polyak well-posedness

In this section, we introduce a notion of Levitin-Polyak well-posedness for generalized semi-infinite multiobjective programming problems. We also obtain some metric characterizations of this kind of well-posedness in terms of the Kuratowski measure of non-compactness of the approximate solution set.

We first introduce the notion of Levitin-Polyak well-posedness for problem (GSIMP).

Definition 3.1

A sequence \(\{x_{n}\}\subseteq\mathbb{R}^{n}\) is said to be a Levitin-Polyak minimizing sequence of problem (GSIMP) iff there exists a sequence \(\varepsilon_{n}>0\) with \(\varepsilon_{n}\rightarrow0\) such that

$$f(x)-f(x_{n})+\varepsilon_{n} e\notin-\operatorname{int}C, \quad \forall x\in M $$

and

$$g(x_{n},y)\leq\varepsilon_{n},\quad \forall y\in Y(x_{n}). $$

Definition 3.2

Problem (GSIMP) is said to be Levitin-Polyak well-posed iff the solution set S is nonempty and every Levitin-Polyak minimizing sequence has a subsequence which converges to an element of S.

Remark 3.1

We remark that:

  1. (i)

    The Levitin-Polyak well-posedness implies that the set S of weakly efficient solutions of problem (GSIMP) is nonempty and compact.

  2. (ii)

    When f is a real-valued function and \(C=\mathbb{R}_{+} ^{1}\), the Levitin-Polyak well-posedness reduces to generalized type II Levitin-Polyak well-posedness for generalized semi-infinite programming problems considered by Wang et al. [30].

  3. (iii)

    When the index set is finite, e.g., \(Y(x)=\{ y_{1},y_{2},\ldots,y_{t}\}\) for all \(x\in\mathbb{R}^{n}\), the concept of the Levitin-Polyak well-posedness for problem (GSIMP) is similar to the definition introduced by Huang and Yang [20].

Consider the following statement:

$$\begin{aligned}& \bigl\{ S\neq\emptyset\mbox{ and, for any Levitin-Polyak minimizing sequence } \{x_{n}\}, \\& \quad \mbox{we have } d(x_{n},S)\rightarrow0\bigr\} . \end{aligned}$$
(1)

The proof of the following proposition is easy and so we omit it.

Proposition 3.1

If problem (GSIMP) is Levitin-Polyak well-posed, then (1) holds. Conversely, if (1) holds and S is nonempty compact, then problem (GSIMP) is Levitin-Polyak well-posed.

For any \(\varepsilon>0\), we consider the following approximating solution set:

$$ \Omega(\varepsilon)=\bigl\{ x'\in{\mathbb{R}^{n}}:f(x)-f \bigl(x'\bigr)+\varepsilon e\notin -\operatorname{int}C,\forall x\in M, g\bigl(x',y\bigr)\leq\varepsilon, \forall{y\in {Y \bigl(x'\bigr)}}\bigr\} . $$
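In the examples below, membership in \(\Omega(\varepsilon)\) is determined in closed form; for general data it can only be tested approximately. The following sketch checks the two defining conditions on finite samples of M and of \(Y(x')\), with the cone fixed to \(C=\mathbb{R}^{p}_{+}\) for concreteness; the function names and the sampling interface are our own illustrative choices, not part of the paper's development.

```python
import numpy as np

def outside_minus_int_C(v):
    """For C = R^p_+ (an illustrative choice of cone), a vector v lies outside
    -int C iff at least one of its components is nonnegative."""
    return np.any(np.asarray(v) >= 0.0)

def in_omega(xp, eps, f, g, e, M_sample, Y_sample):
    """Sampled test of x' in Omega(eps).

    M_sample is a finite sample of M and Y_sample(x) a finite sample of Y(x);
    both defining conditions are verified only on these samples, so a True
    answer is numerical evidence rather than a proof of membership."""
    cond1 = all(outside_minus_int_C(f(x) - f(xp) + eps * e) for x in M_sample)
    cond2 = all(g(xp, y) <= eps for y in Y_sample(xp))
    return cond1 and cond2
```

The numerical sketches accompanying Examples 3.1-3.3 below carry out exactly these two checks for the concrete data of those examples.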

Theorem 3.1

Problem (GSIMP) is Levitin-Polyak well-posed if and only if the solution set S is nonempty compact and

$$ e\bigl(\Omega(\varepsilon),S\bigr)\rightarrow0 \quad \textit{as } \varepsilon \rightarrow0. $$
(2)

Proof

Suppose that problem (GSIMP) is Levitin-Polyak well-posed. Then S is nonempty and compact. We now prove that (2) holds. Suppose by contradiction that there exist \(\alpha>0\), a sequence \(\varepsilon_{n}>0\) with \(\varepsilon_{n}\rightarrow0\), and \(x_{n}\in\Omega(\varepsilon_{n})\) such that

$$ d(x_{n}, S)>\alpha. $$
(3)

As \(x_{n}\in\Omega(\varepsilon_{n})\) for every n, we know that \(\{x_{n}\}\) is a Levitin-Polyak minimizing sequence for problem (GSIMP). By the Levitin-Polyak well-posedness of problem (GSIMP), there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) converging to some point of S. This contradicts (3). It follows that (2) holds.

Conversely, suppose that S is nonempty compact and (2) holds. Let \(\{x_{n}\}\) be a Levitin-Polyak minimizing sequence for problem (GSIMP). Then there exists a sequence \(\varepsilon_{n}>0\) with \(\varepsilon_{n}\rightarrow0\) such that

$$\begin{aligned}& f(x)-f(x_{n})+\varepsilon_{n} e\notin-\operatorname{int}C, \quad \forall x\in M, \\& g(x_{n},y)\leq\varepsilon_{n}, \quad \forall y\in Y(x_{n}). \end{aligned}$$

It follows that \(x_{n}\in\Omega(\varepsilon_{n})\) for every n. By (2), there exists a sequence \(\{z_{n}\}\subset S\) such that

$$\|x_{n}-z_{n}\|\rightarrow0. $$

Note that S is compact. Then there exists a subsequence \(\{z_{n_{k}}\}\) of \(\{z_{n}\}\) converging to \(x_{0}\in S\). Thus, the corresponding subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) converges to \(x_{0}\). Therefore, problem (GSIMP) is Levitin-Polyak well-posed. The proof is complete. □

The following theorem shows that the Levitin-Polyak well-posedness of problem (GSIMP) can be characterized by means of the Kuratowski measure of non-compactness of the approximate solution set.

Theorem 3.2

Assume that f is continuous, g is lower semi-continuous and the set-valued mapping Y is lower semi-continuous. Then, problem (GSIMP) is Levitin-Polyak well-posed if and only if

$$ \Omega(\varepsilon)\neq\emptyset,\quad \forall\varepsilon>0\quad \textit{and} \quad \lim_{\varepsilon\rightarrow0}\mu\bigl(\Omega(\varepsilon) \bigr)=0. $$
(4)

Proof

Let problem (GSIMP) be Levitin-Polyak well-posed. By Theorem 3.1, S is nonempty compact and

$$ e\bigl(\Omega(\varepsilon),S\bigr)\rightarrow0 \quad \mbox{as } \varepsilon \rightarrow 0. $$
(5)

Clearly, \(\Omega(\varepsilon)\neq\emptyset\) for any \(\varepsilon>0\), since \(S\subset\Omega(\varepsilon)\). Observe that for \(\varepsilon>0\), we have

$$H\bigl(\Omega(\varepsilon), S\bigr)=\max\bigl\{ e\bigl(\Omega(\varepsilon), S \bigr), e\bigl(S,\Omega (\varepsilon)\bigr)\bigr\} =e\bigl(\Omega(\varepsilon), S \bigr). $$

Since S is compact, \(\mu(S)=0\). It follows that

$$\mu\bigl(\Omega(\varepsilon)\bigr)\leq2H\bigl(\Omega(\varepsilon), S\bigr)+ \mu(S)=2H\bigl(\Omega (\varepsilon), S\bigr)=2e\bigl(\Omega(\varepsilon), S\bigr). $$

This fact together with (5) implies that (4) holds.

Conversely, assume that (4) holds. We first show that \(\Omega(\varepsilon)\) is closed for any \(\varepsilon>0\). Let \(x_{n}\in\Omega(\varepsilon)\) with \(x_{n}\rightarrow x_{0}\). Then, for every n,

$$\begin{aligned}& f(x)-f(x_{n})+\varepsilon e\notin-\operatorname{int}C, \quad \forall x \in M, \\& g(x_{n},y)\leq\varepsilon, \quad \forall y\in Y(x_{n}). \end{aligned}$$
(6)

By (6), we have

$$f(x)-f(x_{n})+\varepsilon e\in\mathbb{R}^{p}\backslash(-\operatorname{int}C),\quad \forall x\in M. $$

Since f is continuous and \(\mathbb{R}^{p}\backslash(-\operatorname{int}C)\) is closed,

$$f(x)-f(x_{0})+\varepsilon e\in\mathbb{R}^{p}\backslash(-\operatorname{int}C),\quad \forall x\in M, $$

or equivalently,

$$ f(x)-f(x_{0})+\varepsilon e\notin-\operatorname{int}C, \quad \forall x\in M. $$
(7)

On the other hand, for any \(y'\in Y(x_{0})\), since Y is lower semi-continuous, there exists a sequence \(\{y_{n}\}\) with \(y_{n}\in Y(x_{n})\) converging to \(y'\). As \(x_{n}\in\Omega(\varepsilon)\), we have

$$g(x_{n}, y_{n})\leq\varepsilon. $$

By the lower semi-continuity of g, we have

$$g\bigl(x_{0}, y'\bigr)\leq\varepsilon. $$

This fact together with (7) yields \(x_{0}\in\Omega(\varepsilon)\). It follows that \(\Omega(\varepsilon)\) is closed.

We next prove that

$$ S=\bigcap_{\varepsilon>0}\Omega(\varepsilon). $$
(8)

Obviously, \(S\subset\bigcap_{\varepsilon>0}\Omega(\varepsilon)\). Conversely, let \(x_{0}\in\bigcap_{\varepsilon>0}\Omega(\varepsilon)\), and take any sequence \(\varepsilon_{n}>0\) with \(\varepsilon_{n}\rightarrow0\). Then, for any n,

$$\begin{aligned}& f(x)-f(x_{0})+\varepsilon_{n} e\notin-\operatorname{int}C, \quad \forall x\in M, \\& g(x_{0},y)\leq\varepsilon_{n},\quad \forall y\in Y(x_{0}). \end{aligned}$$

Since \(\mathbb{R}^{p}\backslash(-\operatorname{int}C)\) is closed and \(\varepsilon_{n}\rightarrow0\),

$$\begin{aligned}& f(x)-f(x_{0})\notin-\operatorname{int}C,\quad \forall x\in M, \\& g(x_{0},y)\leq0, \quad \forall y\in Y(x_{0}). \end{aligned}$$

This implies that \(x_{0}\in S\). Therefore, (8) holds.

By (4), the sets \(\Omega(\varepsilon)\) are nonempty and closed, and \(\Omega(\varepsilon_{1})\subset\Omega(\varepsilon_{2})\) whenever \(0<\varepsilon_{1}<\varepsilon_{2}\). By the Kuratowski theorem ([34], p.412) and (8),

$$ \lim_{\varepsilon\rightarrow0}H\bigl(\Omega(\varepsilon),S\bigr)=0 $$
(9)

and S is nonempty and compact.

Let \(\{x_{n}\}\) be a Levitin-Polyak minimizing sequence for problem (GSIMP). Then there exists a sequence \(\varepsilon_{n}>0\) with \(\varepsilon_{n}\rightarrow0\) such that

$$\begin{aligned} \begin{aligned} &f(x)-f(x_{n})+\varepsilon_{n} e\notin-\operatorname{int}C, \quad \forall x\in M, \\ &g(x_{n},y)\leq\varepsilon_{n}, \quad \forall y\in Y(x_{n}). \end{aligned} \end{aligned}$$

Thus, \(x_{n}\in\Omega(\varepsilon_{n})\) for every n. This fact together with (9) yields that \(d(x_{n},S)\rightarrow0\). By Proposition 3.1, problem (GSIMP) is Levitin-Polyak well-posed. This completes the proof. □

We now give an example to illustrate Theorem 3.2.

Example 3.1

Let \(C=\mathbb{R}_{+}^{2}\) and \(e=(1,1)\). We consider the following generalized semi-infinite multiobjective programming problem:

$$\begin{aligned} (\mathrm{GSIMP})\quad & \operatorname{Min}_{C} f(x)=\left \{ \textstyle\begin{array}{l@{\quad}l} (0,0),& \mbox{if } x\geq0, \\ (x^{2},x^{2}),& \mbox{if } x< 0, \end{array}\displaystyle \right . \\ &\quad \mbox{s.t. } g(x,y)=x-y^{2}-1\leq0,\quad \forall y\in Y(x), \\ &\quad \mbox{where } Y(x)=\bigl\{ y\in\mathbb{R}:h(x,y)=y-x^{2}\leq0 \bigr\} . \end{aligned}$$

By simple calculations, \(Y(x)=(-\infty,x^{2}]\) and \(M=(-\infty,1]\). It is easy to verify that f and g are continuous and the set-valued mapping Y is lower semi-continuous. It is clear that \(S=[0,1]\) and

$$\begin{aligned} \Omega(\varepsilon)&=\bigl\{ x'\in{\mathbb{R}}:f(x)-f \bigl(x'\bigr)+\varepsilon e\notin -\operatorname{int}C, \forall x\in M, g\bigl(x',y\bigr)\leq\varepsilon,\forall y\in Y \bigl(x'\bigr)\bigr\} \\ &=[-\sqrt{\varepsilon},1+\varepsilon]. \end{aligned}$$

Since each \(\Omega(\varepsilon)\) is compact, \(\mu(\Omega(\varepsilon))=0\) for every \(\varepsilon>0\), and hence \(\lim_{\varepsilon\rightarrow0}\mu(\Omega(\varepsilon))=0\). By Theorem 3.2, problem (GSIMP) is Levitin-Polyak well-posed.
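As a complement to the closed-form computation above, here is a minimal numerical sketch of \(\Omega(\varepsilon)\) for one value of ε; the finite grids and the componentwise test for \(C=\mathbb{R}^{2}_{+}\) are our own discretization choices.

```python
import numpy as np

# Numerical check of Example 3.1 for eps = 0.04 (a sketch: M and Y(x') are replaced
# by finite grids, and membership in -int C is the componentwise test for C = R^2_+).
eps = 0.04
e = np.array([1.0, 1.0])

def f(x):
    return np.array([0.0, 0.0]) if x >= 0 else np.array([x**2, x**2])

def g(x, y):
    return x - y**2 - 1.0

M_sample = np.linspace(-3.0, 1.0, 401)           # finite sample of M = (-inf, 1]
def Y_sample(x):
    return np.linspace(-3.0, x**2, 301)          # finite sample of Y(x) = (-inf, x^2]

def is_member(xp):
    cond1 = all(np.any(f(x) - f(xp) + eps * e >= 0.0) for x in M_sample)
    cond2 = all(g(xp, y) <= eps for y in Y_sample(xp))
    return cond1 and cond2

members = [x for x in np.linspace(-0.5, 1.5, 201) if is_member(x)]
print(min(members), max(members))   # approx -sqrt(eps) = -0.2 and 1 + eps = 1.04
```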

The following example illustrates that the continuity of f in Theorem 3.2 is essential.

Example 3.2

Let C, e, g, and Y be as in Example 3.1. Let \(f:\mathbb{R}\rightarrow\mathbb{R}^{2}\) be defined by

$$ f(x)=\left \{ \textstyle\begin{array}{l@{\quad}l} (0,0), &\mbox{if } x\geq0, \\ (-x,-x),& \mbox{if } {-}1\leq x< 0, \\ (-1-x,-1-x),& \mbox{if } x< -1. \end{array}\displaystyle \right . $$

Then \(Y(x)=(-\infty,x^{2}]\), \(M=(-\infty,1]\), and \(S=[0,1]\). It is easy to see that g is continuous and the set-valued mapping Y is lower semi-continuous. Moreover,

$$ \Omega(\varepsilon)=[-\varepsilon,1+\varepsilon]\cup[-1-\varepsilon,-1). $$

Obviously, f is not continuous, although condition (4) still holds, since \(\Omega(\varepsilon)\) is nonempty and bounded and hence \(\mu(\Omega(\varepsilon))=0\) for every \(\varepsilon>0\). Nevertheless, problem (GSIMP) is not Levitin-Polyak well-posed: the sequence \(\{x_{n}\}=\{-1-1/n\}\) is a Levitin-Polyak minimizing sequence for problem (GSIMP), but every subsequence of \(\{x_{n}\}\) converges to \(-1\notin S\). This shows that the continuity assumption on f in Theorem 3.2 cannot be dropped.
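A corresponding numerical sketch, under the same discretization caveats as before (finite grids for M and Y(x), componentwise test for \(C=\mathbb{R}^{2}_{+}\)), confirms that \(\{-1-1/n\}\) satisfies the defining inequalities of a Levitin-Polyak minimizing sequence:

```python
import numpy as np

# Sketch for Example 3.2: with eps_n = 1/n, the sequence x_n = -1 - 1/n satisfies
# both defining inequalities of a Levitin-Polyak minimizing sequence, yet
# x_n -> -1 and -1 is not in S = [0, 1].
e = np.array([1.0, 1.0])

def f(x):
    if x >= 0:
        return np.array([0.0, 0.0])
    if x >= -1:
        return np.array([-x, -x])
    return np.array([-1.0 - x, -1.0 - x])

def g(x, y):
    return x - y**2 - 1.0

M_sample = np.linspace(-3.0, 1.0, 401)           # finite sample of M = (-inf, 1]
def Y_sample(x):
    return np.linspace(-3.0, x**2, 301)          # finite sample of Y(x) = (-inf, x^2]

for n in (1, 10, 100, 1000):
    x_n, eps_n = -1.0 - 1.0 / n, 1.0 / n
    cond1 = all(np.any(f(x) - f(x_n) + eps_n * e >= 0.0) for x in M_sample)
    cond2 = all(g(x_n, y) <= eps_n for y in Y_sample(x_n))
    print(n, cond1 and cond2)                    # True for every n
```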

Theorem 3.3

Assume that f is continuous, g is lower semi-continuous and the set-valued mapping Y is lower semi-continuous. If there exists some \(\varepsilon>0\) such that \(\Omega(\varepsilon)\) is nonempty bounded, then problem (GSIMP) is Levitin-Polyak well-posed.

Proof

Let \(\{x_{n}\}\) be a Levitin-Polyak minimizing sequence for problem (GSIMP). Then there exists a sequence \(\varepsilon_{n}>0\) with \(\varepsilon_{n}\rightarrow0\) such that

$$\begin{aligned}& f(x)-f(x_{n})+\varepsilon_{n} e\notin-\operatorname{int}C, \quad \forall x\in M, \end{aligned}$$
(10)
$$\begin{aligned}& g(x_{n},y)\leq\varepsilon_{n}, \quad \forall y\in Y(x_{n}). \end{aligned}$$
(11)

Let \(\varepsilon>0\) be such that \(\Omega(\varepsilon)\) is nonempty and bounded. By (10) and (11), \(x_{n}\in\Omega(\varepsilon_{n})\) for every n, and since \(\varepsilon_{n}\rightarrow0\), there exists \(n_{0}\) such that \(x_{n}\in\Omega(\varepsilon_{n})\subset\Omega(\varepsilon)\) for all \(n>n_{0}\). This implies that \(\{x_{n}\}\) is bounded. It follows that there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightarrow x_{0}\). From (10), we have

$$f(x)-f(x_{n_{k}})+\varepsilon_{n_{k}} e\in\mathbb{R}^{p} \backslash(-\operatorname {int}C),\quad \forall x\in M. $$

Since \(\mathbb{R}^{p}\backslash(-\operatorname{int}C)\) is closed, f is continuous and \(\varepsilon_{n_{k}}\rightarrow0\),

$$f(x)-f(x_{0})\in\mathbb{R}^{p}\backslash(- \operatorname{int}C),\quad \forall x\in M, $$

which implies that

$$ f(x)-f(x_{0})\notin-\operatorname{int}C,\quad \forall x\in M. $$
(12)

Note that (11) also holds for \(x_{n_{k}}\) and \(\varepsilon_{n_{k}}\). For any \(y'\in Y(x_{0})\), by the lower semi-continuity of Y, there exists a sequence \(\{y_{n_{k}}\}\) with \(y_{n_{k}}\in Y(x_{n_{k}})\) converging to \(y'\). By (11), we have

$$g(x_{n_{k}}, y_{n_{k}})\leq\varepsilon_{n_{k}}. $$

Since g is lower semi-continuous and \(\varepsilon_{n_{k}}\rightarrow0\),

$$g\bigl(x_{0}, y'\bigr)\leq0. $$

Thus, \(x_{0}\in M\). This fact together with (12) yields \(x_{0}\in S\). Therefore, problem (GSIMP) is Levitin-Polyak well-posed. This completes the proof. □

Remark 3.2

Theorem 3.3 illustrates that under suitable conditions, Levitin-Polyak well-posedness of problem (GSIMP) is equivalent to the existence of solutions.

The following example illustrates that the boundedness condition in Theorem 3.3 is essential.

Example 3.3

Let \(C=\mathbb{R}_{+}^{2}\) and \(e=(1,1)\). We consider the following generalized semi-infinite multiobjective programming problem:

$$\begin{aligned} (\mathrm{GSIMP})\quad & \operatorname{Min}_{C} f(x)=\left \{ \textstyle\begin{array}{l@{\quad}l} (x,-x), &\mbox{if } x\geq0, \\ (-x,-x),& \mbox{if } x< 0, \end{array}\displaystyle \right . \\ &\quad \mbox{s.t. } g(x,y)=-x-y^{2}-1\leq0,\quad \forall y\in Y(x), \\ &\quad \mbox{where } Y(x)=\bigl\{ y\in\mathbb{R}:h(x,y)=x-y\leq0\bigr\} . \end{aligned}$$

Then it is easy to check that \(Y(x)=[x,+\infty)\) and \(M=[-1,+\infty)\). Clearly, f and g are continuous and the set-valued mapping Y is lower semi-continuous. By simple calculations, \(S=[0,+\infty)\) and, for any \(\varepsilon>0\),

$$\Omega(\varepsilon)=[-\varepsilon,+\infty). $$

It follows that \(\Omega(\varepsilon)\) is unbounded for every \(\varepsilon>0\), so Theorem 3.3 does not apply. Indeed, problem (GSIMP) is not Levitin-Polyak well-posed: the sequence \(\{x_{n}\}=\{n\}\) is a Levitin-Polyak minimizing sequence for problem (GSIMP), but it does not have any subsequence which converges to an element of S. This shows that the boundedness assumption in Theorem 3.3 cannot be dropped.
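Again under the same discretization caveats (finite grids for M and Y(x), componentwise test for \(C=\mathbb{R}^{2}_{+}\), all chosen by us for illustration), a short sketch confirms that \(\{n\}\) satisfies the defining inequalities of a Levitin-Polyak minimizing sequence while escaping to infinity:

```python
import numpy as np

# Sketch for Example 3.3: with eps_n = 1/n, the sequence x_n = n satisfies the two
# defining inequalities of a Levitin-Polyak minimizing sequence, yet {x_n} is
# unbounded and so has no subsequence converging to an element of S = [0, +inf).
e = np.array([1.0, 1.0])

def f(x):
    return np.array([x, -x]) if x >= 0 else np.array([-x, -x])

def g(x, y):
    return -x - y**2 - 1.0

M_sample = np.linspace(-1.0, 50.0, 511)          # finite sample of M = [-1, +inf)
def Y_sample(x):
    return np.linspace(x, x + 50.0, 501)         # finite sample of Y(x) = [x, +inf)

for n in (1, 5, 25, 40):
    x_n, eps_n = float(n), 1.0 / n
    cond1 = all(np.any(f(x) - f(x_n) + eps_n * e >= 0.0) for x in M_sample)
    cond2 = all(g(x_n, y) <= eps_n for y in Y_sample(x_n))
    print(n, cond1 and cond2)                    # True for every n
```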

Remark 3.3

It is worth mentioning that Huang and Yang [20] established the equivalence between the generalized type I Levitin-Polyak well-posedness and the nonemptiness and compactness of the weakly efficient solution set for convex vector optimization problems with a cone constraint by the linear scalarization method (see Theorem 3.1 in [20]). However, since the problems considered and the approaches used are different, their result and ours do not include each other; for more details, see [20].

4 Links with upper semi-continuity of approximate solution maps

In this section, we investigate the relationship between the Levitin-Polyak well-posedness of problem (GSIMP) and the upper semi-continuity of approximate solution maps. We first give a necessary condition for problem (GSIMP) to be Levitin-Polyak well-posed.

Theorem 4.1

If problem (GSIMP) is Levitin-Polyak well-posed, then the set-valued map \(\Omega:\mathbb{R}_{+}\rightrightarrows\mathbb {R}^{n}\) is upper semi-continuous at \(\varepsilon=0\).

Proof

Let problem (GSIMP) be Levitin-Polyak well-posed. Suppose by contradiction that Ω is not upper semi-continuous at \(\varepsilon=0\). Then there exist an open set U with \(\Omega(0)\subset U\), a sequence \(\varepsilon_{n}>0\) with \(\varepsilon_{n}\rightarrow0\), and \(x_{n}\in\Omega(\varepsilon_{n})\) such that \(x_{n}\notin U\). Since \(x_{n}\in\Omega(\varepsilon_{n})\), we have

$$\begin{aligned}& f(x)-f(x_{n})+\varepsilon_{n} e\notin-\operatorname{int}C, \quad \forall x\in M, \\& g(x_{n},y)\leq\varepsilon_{n},\quad \forall y\in Y(x_{n}). \end{aligned}$$

It follows that \(\{x_{n}\}\) is a Levitin-Polyak minimizing sequence for problem (GSIMP). Note that problem (GSIMP) is Levitin-Polyak well-posed. Then there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) which converges to some point \(x_{0}\in S\). It is easy to see that \(S=\Omega(0)\). This implies \(x_{0}\in\Omega(0)\). It follows that

$$x_{n_{k}}\rightarrow x_{0}\in S=\Omega(0)\subset U. $$

As \(x_{n}\notin U\), we have \(x_{n}\in\mathbb{R}^{n}\backslash U\). By the closedness of \(\mathbb{R}^{n}\backslash U\) and \(x_{n_{k}}\rightarrow x_{0}\), we get \(x_{0}\in\mathbb{R}^{n}\backslash U\). This gives a contradiction. Therefore, Ω is upper semi-continuous at \(\varepsilon=0\). This completes the proof. □

By Theorem 4.1 and Remark 2.3, we have the following corollary.

Corollary 4.1

If problem (GSIMP) is Levitin-Polyak well-posed, then for every Levitin-Polyak minimizing sequence \(\{x_{n}\} \subset\mathbb{R}^{n}\) and for every neighborhood W of 0, there exists \(n_{0}\in\mathbb{N}\) such that \(x_{n}\in S+W\) for all \(n>n_{0}\).

The next theorem gives a sufficient condition for problem (GSIMP) to be Levitin-Polyak well-posed.

Theorem 4.2

If S is nonempty compact and Ω is upper semi-continuous at \(\varepsilon=0\), then problem (GSIMP) is Levitin-Polyak well-posed.

Proof

Let B be the open unit ball in \(\mathbb{R}^{n}\). For any \(\rho>0\), \(\Omega(0)+\rho B\) is a neighborhood of \(\Omega(0)\). Since Ω is upper semi-continuous at \(\varepsilon=0\), there exists a neighborhood V of 0 such that

$$\Omega(v)\subset\Omega(0)+\rho B,\quad \forall v\in V. $$

Let \(\{x_{n}\}\) be a Levitin-Polyak minimizing sequence for problem (GSIMP), with the corresponding sequence \(\varepsilon_{n}>0\), \(\varepsilon_{n}\rightarrow0\). Then \(x_{n}\in\Omega(\varepsilon_{n})\) for every n, and there exists \(n_{0}\in\mathbb{N}\) such that \(\varepsilon_{n}\in V\) for all \(n>n_{0}\). It follows that, for all \(n>n_{0}\),

$$x_{n}\in\Omega(\varepsilon_{n})\subset\Omega(0)+\rho B=S+\rho B. $$

Let \(s_{n}\in S\) and \(b_{n}\in\rho B\) be such that

$$x_{n}=s_{n}+b_{n}. $$

Since S is nonempty compact, there exists a subsequence \(\{s_{n_{k}}\}\) of \(\{s_{n}\}\) which converges to some point \(s_{0}\in S\), and for the above \(\rho>0\), there exists \(N\in\mathbb{N}\) such that \(\|s_{n_{k}}-s_{0}\| <\rho\) for all \(k>N\). It follows that

$$\|x_{n_{k}}-s_{0}\|=\|s_{n_{k}}+b_{n_{k}}-s_{0} \|\leq\|s_{n_{k}}-s_{0}\|+\|b_{n_{k}}\| < 2\rho,\quad \forall k>N. $$

By the arbitrariness of ρ, we get \(x_{n_{k}}\rightarrow s_{0}\in S\). Hence, problem (GSIMP) is Levitin-Polyak well-posed. This completes the proof. □

Remark 4.1

It is worth mentioning that the compactness assumption on S cannot be dropped in the above theorem. Let us consider Example 3.3. Clearly, \(S=\Omega(0)=[0,+\infty)\) is not compact. Moreover, any open set U containing \(\Omega(0)\) contains an interval \((-\rho,+\infty)\) for some \(\rho>0\), and hence contains \(\Omega(\varepsilon)=[-\varepsilon,+\infty)\) whenever \(\varepsilon<\rho\); thus Ω is upper semi-continuous at \(\varepsilon=0\). However, the problem is not Levitin-Polyak well-posed.

As a consequence of Theorem 4.2 and Remark 2.3, we have the following corollary.

Corollary 4.2

If S is nonempty compact and Ω is Hausdorff upper continuous at \(\varepsilon=0\), then problem (GSIMP) is Levitin-Polyak well-posed.

From Theorems 4.1 and 4.2, we obtain the equivalence between the Levitin-Polyak well-posedness of problem (GSIMP) and the upper semi-continuity of approximate solution maps.

Corollary 4.3

If S is nonempty compact, then problem (GSIMP) is Levitin-Polyak well-posed if and only if Ω is upper semi-continuous at \(\varepsilon=0\).

5 Conclusion

The purpose of this paper is to study the Levitin-Polyak well-posedness for generalized semi-infinite multiobjective programming problems, where the objective function is vector-valued and the generalized semi-infinite constraint functions are real-valued. Metric characterizations for this kind of Levitin-Polyak well-posedness are obtained. The relations between the Levitin-Polyak well-posedness and the upper semi-continuity of approximate solution maps for generalized semi-infinite multiobjective programming problems are established. It would be interesting to consider the Levitin-Polyak well-posedness for semi-infinite vector optimization problems, where the objective function and the semi-infinite constraint functions are also vector-valued. This may be the topic of some of our forthcoming papers.