Introduction

An improvement set behaves much like a cone and, in essence, generalizes the notion of a cone. The notion of a cone [4, 5] is a cornerstone of optimization theory, and almost every optimality notion uses it to introduce its associated optimal elements. Just as with cones, improvement sets are the basis of the E-optimal points of a set, and many authors [3, 7,8,9, 11, 12] have employed this kind of optimality notion in vector optimization, where it generalizes the notions of efficiency and weak efficiency. In this work, we consider \(\epsilon -E\) optimal points instead of E-optimal points of a set, in locally convex spaces, since this approach is very common in the literature. Indeed, for \(\epsilon\) appropriately chosen, the set of all \(\epsilon -E\) optimal points is larger (possibly strictly larger) than the set of all E-optimal points of any set. Let us stress that this approach does not produce complicated situations, and the conclusions obtained remain as simple as those achieved in [8].

The paper is organized as follows. In Sect. 2, we present the definitions and notation needed for our study. In particular, this section includes a simple formulation of the celebrated Fan lemma [1, 14] and a more tractable version of this tool. In Sect. 3, some existence theorems for E-optimal and \(\epsilon -E\)-optimal points of a set are established. In Sect. 4, we state our main results, which revisit and improve the results obtained by Lalitha and Chatterjee [8]. For a complete history of the present work, we refer the reader to [8] and the references therein.

Preliminaries

Let X and Y be two Hausdorff locally convex spaces and let C be a proper pointed closed convex cone in Y with nonempty interior. Let us first recall the definition of an improvement set.

Definition 1

[2, 6, 8] A nonempty set \(E\subseteq Y\) is said to be an improvement set with respect to C (or simply an improvement set, if there is no danger of confusion) if it satisfies the following

  1. (i)

    \(0\notin E\);

  2. (ii)

    \(E+C=E\).

It is worth observing that if C has nonempty interior, then the interior of the improvement set E is nonempty too. Moreover, the following equality holds true: \(E+{\mathrm{int}} C={\mathrm{int}} E\), where \({\mathrm{int}} C\) denotes the topological interior of C [6].
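To make these properties concrete, the following small Python sketch (purely illustrative and not part of the formal development; the membership tests and sample sizes are our own choices) checks conditions (i)-(ii) of Definition 1, together with the inclusion \(E+{\mathrm{int}} C\subseteq {\mathrm{int}} E\), on sampled points for the concrete data \(C=[0,\infty )^2\) and \(E=[1,\infty )\times [0,\infty )\), which also appear in Example 1 below.

```python
# Illustrative sketch only: check Definition 1 (i)-(ii) and E + int C subset of int E
# for the hypothetical data C = [0, inf)^2, E = [1, inf) x [0, inf) in R^2,
# by pointwise membership tests on a random sample.
import random

def in_E(v):        # v in E = [1, inf) x [0, inf)
    return v[0] >= 1 and v[1] >= 0

def in_int_E(v):    # v in int E = (1, inf) x (0, inf)
    return v[0] > 1 and v[1] > 0

random.seed(0)
sample_E = [(1 + 3 * random.random(), 3 * random.random()) for _ in range(200)]
sample_C = [(3 * random.random(), 3 * random.random()) for _ in range(200)]
sample_int_C = [(0.01 + 3 * random.random(), 0.01 + 3 * random.random()) for _ in range(200)]

assert not in_E((0.0, 0.0))                                   # (i): 0 is not in E
for e in sample_E:
    for c in sample_C:
        assert in_E((e[0] + c[0], e[1] + c[1]))               # E + C subset of E (with 0 in C, E + C = E)
    for c in sample_int_C:
        assert in_int_E((e[0] + c[0], e[1] + c[1]))           # E + int C subset of int E
print("improvement-set properties hold on the sample")
```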

We now state the notions of minima associated with an improvement set E. Throughout, let \(\epsilon \in C\).

Definition 2

[2, 6, 8] Let \(A\subseteq Y\) be a nonempty set. An element \({\bar{a}}\in A\) is said to be

  1. (i)

    an E-minimal solution iff \((A-{\bar{a}})\cap (-E)=\emptyset\);

  2. (ii)

    a weak E-minimal solution iff \((A-{\bar{a}})\cap (-{\mathrm{int}} E)=\emptyset\);

  3. (iii)

    an \(\epsilon\)-E-minimal solution iff \((A-{\bar{a}}+\epsilon )\cap (-E)=\emptyset\);

  4. (iv)

    an \(\epsilon\)-weak E-minimal solution iff \((A-{\bar{a}}+\epsilon )\cap (-{\mathrm{int}} E)=\emptyset\).

For a set \(A\subseteq Y\), the sets of all E-minimal, weak E-minimal, \(\epsilon\)-E-minimal and \(\epsilon\)-weak E-minimal solutions of A are denoted, respectively, by \({\mathrm{Min}}(A,E)\), \(W{\mathrm{Min}}(A,E)\), \(\epsilon -{\mathrm{Min}}(A,E)\) and \(\epsilon -W{\mathrm{Min}}(A,E)\). Obviously, for any set A, we have \({\mathrm{Min}}(A,E)\subseteq W{\mathrm{Min}}(A,E)\) and \(\epsilon -{\mathrm{Min}}(A,E)\subseteq \epsilon -W{\mathrm{Min}}(A,E)\). But more can be said.
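For finite subsets of \({{\mathbb {R}}}^2\), the conditions in Definition 2 can be tested pointwise. The following Python helpers (hypothetical names, illustrative only, not part of the formal development) implement the four membership tests; Example 1 below is checked numerically in the same spirit.

```python
# Illustrative helpers (not part of the paper): for a finite set A in R^2 and an
# improvement set E given by a membership test in_E (with interior test in_int_E),
# decide whether a candidate point a_bar satisfies Definition 2 (i)-(iv).
def diff(u, v):
    return (u[0] - v[0], u[1] - v[1])

def is_E_minimal(A, a_bar, in_E):
    # (A - a_bar) cap (-E) = empty  <=>  there is no a in A with a_bar - a in E
    return all(not in_E(diff(a_bar, a)) for a in A)

def is_weak_E_minimal(A, a_bar, in_int_E):
    return all(not in_int_E(diff(a_bar, a)) for a in A)

def is_eps_E_minimal(A, a_bar, eps, in_E):
    # (A - a_bar + eps) cap (-E) = empty  <=>  no a in A with a_bar - a - eps in E
    return all(not in_E(diff(diff(a_bar, a), eps)) for a in A)

def is_eps_weak_E_minimal(A, a_bar, eps, in_int_E):
    return all(not in_int_E(diff(diff(a_bar, a), eps)) for a in A)
```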

Proposition 1

Let E be an improvement set and let \(A\subseteq Y\). Then

  1. (i)

    if \(\epsilon \in C\), then \({\mathrm{Min}}(A,E)\subseteq \epsilon -{\mathrm{Min}}(A,E)\);

  2. (ii)

    if \(\epsilon \in {\mathrm{int}} C\), then \(W{\mathrm{Min}}(A,E)\subseteq \epsilon -W{\mathrm{Min}}(A,E)\).

Proof

Since E is an improvement set, both \(E+C=E\) and \(E+{\mathrm{int}} C={\mathrm{int}} E\) hold. If \(\epsilon \in C\), then \(E+\epsilon \subseteq E\); thus, if \(a-{\bar{a}}+\epsilon \in -E\) for some \(a\in A\), then \(a-{\bar{a}}\in -(E+\epsilon )\subseteq -E\), which proves the first item. If \(\epsilon \in {\mathrm{int}} C\), then \(E+\epsilon \subseteq {\mathrm{int}} E\), and the analogous argument proves the second item. \(\square\)

The following example shows that the inclusions in Proposition 1 may be strict.

Example 1

Let \(Y={{\mathbb {R}}}^2\) and \(A=[-2,-1]\times \{0\}\). Suppose that \(C=[0,\infty )^2\), the natural cone in \({{\mathbb {R}}}^2\). Let \(E=[1,\infty )\times [0,\infty )\) and \(\epsilon =(\varepsilon ,0)\) where \(\varepsilon\) is a fixed positive real number. Obviously, E is an improvement set and \(\epsilon \in C\). One can easily verify that \({\mathrm{Min}}(A,E)=[-2,-1)\times \{0\}\) and \(\epsilon -{\mathrm{Min}}(A,E)=A\). Notice that if \(\epsilon =(\varepsilon _1, \varepsilon _2)\) where \(\varepsilon _1\) and \(\varepsilon _2\) are two fixed positive real numbers, then \(\epsilon \in {\mathrm{int}} (C)\) and the previous conclusion still holds true.
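The computations in Example 1 can also be checked numerically. The following sketch (illustrative only; the grid, the value \(\epsilon =(0.3,0)\) and the helper names are our own choices) discretizes A and recovers the two solution sets described above.

```python
# Illustrative numerical check of Example 1 on a grid (not a proof).
# A = [-2,-1] x {0} is sampled, E = [1, inf) x [0, inf), eps = (0.3, 0).
in_E = lambda v: v[0] >= 1 and v[1] >= 0
A = [(-2.0 + k / 100.0, 0.0) for k in range(101)]          # 101 grid points of [-2,-1] x {0}

def E_minimal(A, eps=(0.0, 0.0)):
    # a_bar is (eps-)E-minimal iff there is no a in A with a_bar - a - eps in E
    return [a_bar for a_bar in A
            if not any(in_E((a_bar[0] - a[0] - eps[0], a_bar[1] - a[1] - eps[1])) for a in A)]

print(len(E_minimal(A)))                 # 100: every grid point except (-1, 0), matching [-2,-1) x {0}
print(len(E_minimal(A, (0.3, 0.0))))     # 101: the whole grid, matching eps-Min(A, E) = A
# Taking eps = (0.3, 0.3), an interior point of C, leads to the same conclusion, as noted above.
```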

We now recall the notion of Painlevé-Kuratowski set-convergence [8, 13]. Let \((S_n)\) be a sequence of sets in X and define

$$\begin{aligned} Li S_n&:= \{x \in X : x = \lim _{n\rightarrow \infty }x_n,~x_n \in S_n \text{ for all sufficiently large } n\},\\ Ls S_n&:= \{x \in X : x = \lim _{k\rightarrow \infty }x_{n_k},~x_{n_k}\in S_{n_k},~(n_k) \text{ a subsequence of } (n)\}. \end{aligned}$$

A sequence of sets \((S_n)\) is said to be convergent in the sense of Painlevé-Kuratowski if and only if there exists a set S so that

$$\begin{aligned} Ls S_n \subseteq S \subseteq Li S_n. \end{aligned}$$

In this case, we write \(S_n {\mathop {\longrightarrow }\limits ^{PK}}S\).
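In a metric space, these limits admit the well-known characterizations \(Li S_n=\{x: \lim _{n\rightarrow \infty }d(x,S_n)=0\}\) and \(Ls S_n=\{x: \liminf _{n\rightarrow \infty }d(x,S_n)=0\}\). The following Python sketch (a rough finite-horizon illustration with our own tolerances, not a rigorous computation) uses these characterizations to classify a few candidate points for the sets \(S_n=\{(-1)^n+1/n\}\subseteq {{\mathbb {R}}}\).

```python
# Rough, illustrative finite-horizon test (tolerances are our own choices) of the metric
# characterizations  x in Li S_n  <=>  dist(x, S_n) -> 0   and
#                    x in Ls S_n  <=>  liminf dist(x, S_n) = 0,
# applied to S_n = {(-1)^n + 1/n} in R.
def dist(x, S):
    return min(abs(x - s) for s in S)

N = 10_000
S = {n: [(-1) ** n + 1.0 / n] for n in range(1, N + 1)}

for x in (-1.0, 0.0, 1.0):
    tail = [dist(x, S[n]) for n in range(N // 2, N + 1)]     # crude "tail" of the distance sequence
    in_Ls = min(tail) < 1e-3                                 # liminf dist approximately 0
    in_Li = max(tail) < 1e-1                                 # lim dist approximately 0
    print(f"x = {x:+.1f}: in Ls ~ {in_Ls}, in Li ~ {in_Li}")
# Expected: -1 and +1 belong to Ls S_n but not to Li S_n, while 0 belongs to neither,
# so this particular (S_n) does not converge in the sense of Painleve-Kuratowski.
```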

We finally state the definition of a KKM map and then Fan's lemma, for easy reference. As already mentioned, Fan's lemma is needed here to establish the desired existence results.

Definition 3

[1, 14] Let X be a Hausdorff topological vector space and let A be a nonempty subset of X. A set-valued mapping \(F:A\rightrightarrows X\) is called a KKM map if

$$\begin{aligned} \text{ conv }\{x_1, x_2, \ldots , x_n\}\subseteq \bigcup _{k=1}^n F(x_k), \end{aligned}$$

for each finite subset \(\{x_1, x_2, \ldots , x_n\}\subseteq A\), where \(\text{ conv }\{x_1, x_2, \ldots , x_n\}\) denotes the convex hull of the points \(\{x_1, x_2, \ldots , x_n\}\).

Lemma 1

[1, 14] Let X be a Hausdorff topological vector space and let \(A\subseteq X\) be an arbitrary set. Let \(F:A\rightrightarrows X\) be a KKM mapping. If F has closed values and F(x) is compact for at least one \(x\in A\), then \(\bigcap _{x\in A}F(x)\ne \emptyset\).

The following lemma, as a consequence of Fan's lemma, yields a similar result. In some problems, verifying that a given set-valued mapping is a KKM map is somewhat cumbersome, and we may prefer to use the following lemma as a shortcut instead of Fan's lemma. A direct proof of this lemma can be found in [1].

Lemma 2

[1] Let X be a Hausdorff topological vector space and let \(A\subseteq X\) be a convex set. Let \(F:A\rightrightarrows X\) be a set-valued mapping with closed values such that F(x) is compact for at least one \(x\in A\). If, furthermore, F satisfies the following two conditions:

  1. (i)

    \(x\in F(x)\) for each \(x\in A\);

  2. (ii)

    \(F(\lambda x+(1-\lambda )u)\subseteq F(x)\cup F(u)\) for any \(\lambda \in [0,1]\) and \(x,u\in A\),

then, \(\bigcap _{x\in A}F(x)\ne \emptyset\).

Existence results using KKM theorem

This section is devoted to establishing some existence results for the E-minimality notions described above. Throughout this section, we assume that X and Y are two Hausdorff locally convex spaces and \(C\subseteq Y\) is a proper pointed closed convex cone with nonempty interior. Furthermore, we assume that E is a convex improvement set. We now have the following existence theorems.

Theorem 1

The notations are all as above. Suppose that \(A\subseteq Y\) is a convex set. If there exists some \(a\in A\) so that the set \(\{u\in A: u\notin a+ {\mathrm{int}} E\}\) is compact, then \(W{\mathrm{Min}}(A,E)\ne \emptyset\).

Proof

Consider the set-valued mapping \(G:A\rightrightarrows Y\) defined by

$$\begin{aligned} G(y)=\{u\in A: u\notin y+ {\mathrm{int}} E\}. \end{aligned}$$

We verify that G satisfies the conditions of Lemma 2. Since \(0\notin {\mathrm{int}} E\), we have \(y\in G(y)\) for any \(y\in A\). Let \(y, h\in A\), \(\lambda \in [0,1]\) and \(u\in G(\lambda y+(1-\lambda )h)\). If \(u\notin G(y)\cup G(h)\), then \(u\in y+ {\mathrm{int}} E\) and \(u\in h+ {\mathrm{int}} E\); by the convexity of the improvement set E (and hence of \({\mathrm{int}} E\)), it follows that \(u\in \lambda y +(1-\lambda )h + {\mathrm{int}} E\). This contradicts the assumption \(u\in G(\lambda y+(1-\lambda )h)\). Thus, G satisfies condition (ii) of the mentioned lemma too. Furthermore, G obviously has closed values and G(a) is compact, by hypothesis. Hence, by Lemma 2, \(\bigcap _{y\in A} G(y)\ne \emptyset\). This means that there exists \({\bar{y}}\in A\) so that \({\bar{y}}\in G(y)\) for any \(y\in A\); that is, \({\bar{y}}\notin y+{\mathrm{int}} E\) for all \(y\in A\), or equivalently \((A-{\bar{y}})\cap (-{\mathrm{int}} E)=\emptyset\), so \({\bar{y}}\in W{\mathrm{Min}}(A,E)\). This completes the proof. \(\square\)
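Although the argument above is carried out in an abstract space, it can be probed on a discretization. The following Python sketch (a finite-sample illustration with hypothetical data \(A=[-2,0]\times [0,1]\), \(E=[1,\infty )\times [0,\infty )\), and integer grid coordinates of step 0.1; not a proof) checks conditions (i)-(ii) of Lemma 2 for the mapping G above and exhibits a nonempty intersection \(\bigcap _{y\in A}G(y)\), i.e., weak E-minimal grid points.

```python
# Finite-sample illustration (not a proof) of the construction in the proof of Theorem 1.
# A = [-2, 0] x [0, 1] is discretized with step 0.1 and stored in integer tenths to avoid
# rounding issues; E = [1, inf) x [0, inf), so int E = (1, inf) x (0, inf) (scaled: > 10, > 0).
import random

A = [(x, y) for x in range(-20, 1) for y in range(0, 11)]      # coordinates in units of 0.1
in_int_E = lambda v: v[0] > 10 and v[1] > 0                    # scaled test for int E

def G(y):
    return {u for u in A if not in_int_E((u[0] - y[0], u[1] - y[1]))}

assert all(y in G(y) for y in A)                               # Lemma 2 (i), since 0 is not in int E

random.seed(1)
for _ in range(1000):                                          # Lemma 2 (ii), on exact grid midpoints
    y, h = random.choice(A), random.choice(A)
    if (y[0] + h[0]) % 2 == 0 and (y[1] + h[1]) % 2 == 0:
        mid = ((y[0] + h[0]) // 2, (y[1] + h[1]) // 2)
        assert G(mid) <= G(y) | G(h)

w_min = set.intersection(*(G(y) for y in A))                   # nonempty, as Theorem 1 predicts
print(len(w_min), "weak E-minimal grid points, e.g.", sorted(w_min)[:3])
```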

Theorem 2

The notations are all as above. Suppose that \(A\subseteq Y\) is a convex set. Let \(\epsilon \in {\mathrm{int}} E\). If there exists some \(a\in A\) so that the set \(\{u\in A: u\notin a+ \epsilon + {\mathrm{int}} E\}\) is compact, then \(\epsilon -W{\mathrm{Min}}(A,E)\ne \emptyset\).

Proof

The proof is completely similar to that of Theorem 1 and is therefore omitted. \(\square\)

Corollary 1

The notations are all as above. Suppose that \(A\subseteq Y\) is a convex compact set. Then \(W{\mathrm{Min}}(A,E)\ne \emptyset\).

Proof

The proof is a direct consequence of Theorem 1. \(\square\)

Corollary 2

The notations are all as above. Suppose that \(A\subseteq Y\) is convex. Let \(\epsilon \in {\mathrm{int}} C\). If there exists some \(a\in A\) so that the set \(\{u\in A: u\notin a+ {\mathrm{int}} E\}\) is compact, then \(\epsilon -W{\mathrm{Min}}(A,E)\ne \emptyset\).

Proof

The proof is a direct consequence of Theorem 1 and Proposition 1. \(\square\)

Corollary 3

The notations are all as above. Suppose that \(A\subseteq Y\) is convex and compact. Let \(\epsilon \in {\mathrm{int}} C\). Then \(\epsilon -W{\mathrm{Min}}(A,E)\ne \emptyset\).

Proof

This follows easily from Corollary 2. \(\square\)

Some \(\epsilon\)-stability conclusions for solution sets

In this section, we consider the following vector optimization problem and establish some stability results for its solutions. To this end, let X and Y be two Hausdorff locally convex spaces. Suppose that E is an improvement set with nonempty interior. Let \(f:X\rightarrow Y\) be a map. The following vector optimization problem, denoted (VP) from now on, is discussed here.

$$\begin{aligned} ({\mathbf{VP }})\quad \quad \min _{x\in S}f(x), \end{aligned}$$

where \(S\subseteq X\) is a nonempty closed set.

Definition 4

An element \({\bar{s}}\in S\) is called:

  1. (i)

    an E-optimal solution of (VP) if \(f({\bar{s}})\in {\mathrm{Min}}(f(S),E)\);

  2. (ii)

    a weak E-optimal solution of (VP) if \(f({\bar{s}})\in W{\mathrm{Min}}(f(S),E)\);

  3. (iii)

    an \(\epsilon -E\)-optimal solution of (VP) if \(f({\bar{s}})\in \epsilon -{\mathrm{Min}}(f(S),E)\);

  4. (iv)

    a weak \(\epsilon -E\)-optimal solution of (VP) if \(f({\bar{s}})\in \epsilon -W{\mathrm{Min}}(f(S),E)\).

In this paper, the sets of all E-optimal, weak E-optimal, \(\epsilon -E\)-optimal and weak \(\epsilon -E\)-optimal solutions of (VP) are denoted, respectively, by \(O(f,S,E)\), \(WO(f,S,E)\), \(\epsilon -O(f,S,E)\) and \(\epsilon -WO(f,S,E)\). Using the conclusions of the previous section, the following existence theorem is immediate.
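Before stating it, we illustrate Definition 4 on a toy instance of (VP) (a hypothetical example of our own: \(X=Y={{\mathbb {R}}}^2\), S a discretized square, f a continuous linear map, \(E=[1,\infty )\times [0,\infty )\) and \(\epsilon =(0.5,0.5)\in {\mathrm{int}} C\)); the sketch computes the discrete analogues of \(WO(f,S,E)\) and \(\epsilon -WO(f,S,E)\).

```python
# Toy, hypothetical instance of (VP) (illustrative only): S = [0, 2]^2 discretized,
# f(s) = (s1 + s2, s1 - s2) a continuous linear map, E = [1, inf) x [0, inf),
# eps = (0.5, 0.5) in int C.  We compute discrete versions of WO(f, S, E) and eps-WO(f, S, E).
S = [(x / 10.0, y / 10.0) for x in range(0, 21) for y in range(0, 21)]
f = lambda s: (s[0] + s[1], s[0] - s[1])
in_int_E = lambda v: v[0] > 1 and v[1] > 0
fS = [f(s) for s in S]

def weak_optimal(shift):
    # s is a (shifted) weak E-optimal solution iff no u in f(S) satisfies f(s) - u - shift in int E
    return [s for s in S
            if not any(in_int_E((f(s)[0] - u[0] - shift[0], f(s)[1] - u[1] - shift[1])) for u in fS)]

WO = weak_optimal((0.0, 0.0))              # weak E-optimal solutions
eps_WO = weak_optimal((0.5, 0.5))          # weak eps-E-optimal solutions
print(len(WO), len(eps_WO))                # both nonempty on this compact sample (cf. Corollary 3 applied to f(S))
assert set(WO) <= set(eps_WO)              # WO(f,S,E) contained in eps-WO(f,S,E), cf. Proposition 1 (ii)
```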

Theorem 3

The notations are all as above. Assume that E is a convex improvement set with nonempty interior and \(\epsilon \in {\mathrm{int}} E\). Assume further that f(S) is convex. Then, the following hold true:

  1. (i)

    if there exists some \(y\in f(S)\) so that the set \(\{u\in f(S): u\notin y+ {\mathrm{int}} E\}\) is compact, then \(WO(f,S,E)\ne \emptyset\);

  2. (ii)

    if there exists some \(y\in f(S)\) so that the set \(\{u\in f(S): u\notin y+\epsilon + {\mathrm{int}} E\}\) is compact, then \(\epsilon -WO(f,S,E)\ne \emptyset\);

  3. (iii)

    if f(S) is compact, then \(WO(f,S,E)\ne \emptyset\);

  4. (iv)

    if f(S) is compact, then \(\epsilon -WO(f,S,E)\ne \emptyset\).

Proof

The proof is a direct consequence of the existence theorems and corollaries stated in the previous section. \(\square\)

Corollary 4

The notations are all as above. Assume that E is a convex improvement set with nonempty interior and \(\epsilon \in {\mathrm{int}} E\). Assume further that S is a convex set and f is a continuous linear map. Then, the following hold true:

  1. (i)

    if there exists some \(y\in f(S)\) so that the set \(\{u\in f(S): u\notin y+ {\mathrm{int}} E\}\) is compact, then \(WO(f,S,E)\ne \emptyset\);

  2. (ii)

    if there exists some \(y\in f(S)\) so that the set \(\{u\in f(S): u\notin y+\epsilon + {\mathrm{int}} E\}\) is compact, then \(\epsilon -WO(f,S,E)\ne \emptyset\);

  3. (iii)

    if S is compact, then \(WO(f,S,E)\ne \emptyset\);

  4. (iv)

    if S is compact, then \(\epsilon -WO(f,S,E)\ne \emptyset\).

The following definition of continuous convergence for a sequence of functions is required in the sequel.

Definition 5

[8, 10] Let \(f_n:X\rightarrow Y\) be a sequence of functions. We say the sequence \((f_n)\) converges continuously to the function \(f:X\rightarrow Y\) if \(f_n(x_n)\rightarrow f(x)\) whenever \(x_n\rightarrow x\).
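For instance, on \(X=Y={{\mathbb {R}}}\) the functions \(f_n(x)=x/n\) converge continuously to \(f=0\): if \(x_n\rightarrow x\), then \(f_n(x_n)=x_n/n\rightarrow 0\). The following sketch (a trivial, hypothetical illustration) displays this behaviour along one particular sequence.

```python
# Trivial illustration of Definition 5: f_n(x) = x / n converges continuously to f = 0 on R.
f_n = lambda n, x: x / n
x_seq = lambda n: 2.0 + 1.0 / n                    # an arbitrary sequence x_n -> x = 2
print([round(f_n(n, x_seq(n)), 4) for n in (1, 10, 100, 1000)])   # values tend to f(2) = 0
```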

The following theorems state that if a sequence of functions converges continuously and a sequence of sets converges in the sense of Painlevé-Kuratowski, then the corresponding image sets converge as well. Let us remark that the sequential compactness condition in these theorems is essential and, as the example below shows, dropping it may lead to a false conclusion.

Theorem 4

The assumptions are as above. Suppose the following conditions are satisfied:

  1. (i)

    \(S_n {\mathop {\longrightarrow }\limits ^{PK}}S\);

  2. (ii)

    \((f_n)\) converges continuously to f;

  3. (iii)

    for any sequentially compact set \(M\subseteq Y\) the union \(\bigcup _{m=1}^\infty f_m^{-1}(M)\) is relatively sequentially compact.

Then, \(f_n(S_n)\) converges to f(S) in the sense of Painlevé-Kuratowski.

Proof

The proof is similar to that of Theorem 3.1 in [8]. We must show \(Ls(f_n(S_n))\subseteq f(S)\subseteq Li(f_n(S_n))\). Let \(y\in Ls(f_n(S_n))\). Thus, there exist a subsequence \((n_k)\) of (n) and points \(x_{n_k}\in S_{n_k}\) so that \(y=\lim _{k\rightarrow \infty }y_k\), where \(y_k=f_{n_k}(x_{n_k})\). Obviously, the set \(M=\{y_k:k\in {{\mathbb {N}}}\}\cup \{y\}\) is sequentially compact and \(x_{n_k}\in \bigcup _{j=1}^\infty f_{n_j}^{-1}(M)\) for any k. On the other hand, by hypothesis (iii), the set \(\bigcup _{m=1}^\infty f_m^{-1}(M)\) is relatively sequentially compact; in particular, \(\bigcup _{j=1}^\infty f_{n_j}^{-1}(M)\) is relatively sequentially compact. Thus, the sequence \((x_{n_k})\) lies in a relatively sequentially compact set and therefore contains a convergent subsequence, still denoted \((x_{n_k})\), converging to some x; note that \(x\in Ls S_n\subseteq S\). Since \((f_n)\) converges continuously to f, \(f_{n_k}(x_{n_k})\) converges to f(x). This implies \(y=f(x)\) and therefore \(y\in f(S)\), which proves the first inclusion. The second inclusion follows by mimicking the proof of Theorem 3.1 in [8] and is easy. This completes the proof. \(\square\)

Theorem 5

The assumptions are all as above. Suppose the following conditions are satisfied:

  1. (i)

    \(S_n {\mathop {\longrightarrow }\limits ^{PK}}S\);

  2. (ii)

    \((f_n)\) converges continuously to f;

  3. (iii)

    \(\bigcup _{k=1}^\infty S_k\) is relatively sequentially compact.

Then, \(f_n(S_n)\) converges to f(S) in the sense of Painlevé-Kuratowski.

Proof

Elementary. \(\square\)

The following example indicates that disregarding condition (iii) of Theorem 4 may lead to a false conclusion.

Example 2

Equip \(\ell ^p\) with its usual norm topology, and note that in \(\ell ^p\), as a metric space, compactness and sequential compactness coincide. Let \(X=Y=\ell ^p\) where \(1<p<\infty\). Let \((e_n)\) denote the canonical basis of \(\ell ^p\) (i.e., \(e_k\) is the sequence whose only nonzero entry is a “1” in the kth coordinate). Since \(\Vert e_n-e_m\Vert _{\ell ^p}=2^{\frac{1}{p}}\) for \(n\ne m\), the sequence \((e_n)\) does not contain any convergent subsequence. Let \(({\bar{x}}_n)\) be an arbitrary sequence in \(\ell ^p\) converging to some \(x_0\ne 0\). Let \(S_n=\{e_n, {\bar{x}}_n\}\) and

$$\begin{aligned} f_n(x)= \left\{ \begin{array}{ll} \displaystyle {\bar{x}}_n \quad x=e_n;\\ \displaystyle \frac{x}{n} \quad x\ne e_n. \end{array} \right. \end{aligned}$$

Let \(f=0\). One may easily verify that \(S_n {\mathop {\longrightarrow }\limits ^{PK}}\{x_0\}\) and \((f_n)\) converges continuously to f. On the other hand, \(f_n(S_n)=\{{\bar{x}}_n, \frac{{\bar{x}}_n}{n}\}\), which implies \(f_n(S_n){\mathop {\longrightarrow }\limits ^{PK}} \{0,x_0\}\). But \(f(\{x_0\})=\{0\}\), which differs from \(\{0,x_0\}\). The reason for this failure is that while the set \(M=\{{\bar{x}}_n: n\in {{\mathbb {N}}}\}\cup \{x_0\}\) is compact, the union \(\bigcup _{k=1}^\infty f_{k}^{-1}(M)\) fails to be relatively sequentially compact. Indeed, the third condition of Theorem 4 does not hold.
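A finite-dimensional picture of this phenomenon can be produced numerically. The following sketch (a hypothetical truncation of the example to \({{\mathbb {R}}}^N\) with \(p=2\), \(x_0=e_1\) and the particular choice \({\bar{x}}_n=x_0+e_1/n\); illustrative only) monitors the Hausdorff distance from \(f_n(S_n)\) to the two candidate limits \(\{0\}=f(\{x_0\})\) and \(\{0,x_0\}\).

```python
# Hypothetical finite-dimensional illustration of Example 2 (a sketch, not the example itself):
# we truncate l^2 to R^N, take x0 = e_1, x_bar_n = x0 + e_1/n, S_n = {e_n, x_bar_n},
# f_n as in the example and f = 0, and track Hausdorff distances of f_n(S_n) to {0} and {0, x0}.
import numpy as np

N = 200
e = np.eye(N)                                  # e[k] plays the role of the basis vector e_{k+1}
x0 = e[0]
dist = lambda u, v: np.linalg.norm(u - v)

def hausdorff(A, B):
    return max(max(min(dist(a, b) for b in B) for a in A),
               max(min(dist(a, b) for a in A) for b in B))

for n in (5, 50, 199):
    x_bar = x0 + e[0] / n                      # x_bar_n -> x0
    S_n = [e[n - 1], x_bar]                    # S_n = {e_n, x_bar_n}
    f_n = lambda x: x_bar if np.array_equal(x, e[n - 1]) else x / n
    image = [f_n(s) for s in S_n]              # f_n(S_n) = {x_bar_n, x_bar_n / n}
    print(n,
          round(hausdorff(image, [np.zeros(N)]), 3),         # distance to f(S) = {0}: stays near 1
          round(hausdorff(image, [np.zeros(N), x0]), 3))     # distance to {0, x0}: tends to 0
```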

Remark 1

In Example 2, if one replaces the norm topology with the weak topology, then the last condition of Theorem 4 is satisfied. Indeed, the sequence \((e_n)\) converges weakly to 0, and thus the set \(\{e_n: n\in {{\mathbb {N}}}\}\) is relatively weakly sequentially compact (relatively weakly compact, by the Eberlein-Smulian theorem). However, the desired conclusion fails again. This produces no discrepancy, because in this topology \((f_n)\) does not converge continuously to the zero function.

Remark 2

Example 2 indicates that the proof of Theorem 3.1 in [8] is not valid when X is infinite dimensional. Indeed, all the conditions of Theorem 3.1 in [8] are satisfied in Example 2, yet its conclusion fails.

We now investigate some stability properties of the solution sets of problem (VP). To this end, we perturb both the objective function f and the feasible set S by considering the following family of perturbed problems:

$$\begin{aligned} ({\mathbf VP })_n\quad \quad \min _{x\in S_n}f_n(x). \end{aligned}$$

The following theorems establish relations between the E-minimal solutions of the perturbed problems and those of the original problem (VP). Let us remark that the standing assumptions are those stated at the beginning of this section. Compared with Theorem 3.2 in [8], the following theorem furnishes the opposite inclusion as well. The details are as follows:

Theorem 6

Suppose the following conditions hold true:

  1. (i)

    \(S_n {\mathop {\longrightarrow }\limits ^{PK}}S\);

  2. (ii)

    \((f_n)\) converges continuously to f;

  3. (iii)

    \(\bigcup _{k=1}^\infty S_k\) is relatively sequentially compact.

Suppose that the improvement set E is closed. Suppose further that \(\epsilon \in {\mathrm{int}} C\). Then, the following inclusions hold:

  1. (i)

    \(\epsilon -{\mathrm{Min}}(f(S),E)\subseteq Li(\epsilon -{\mathrm{Min}}(f_n(S_n),E))\);

  2. (ii)

    \(Li(\epsilon _1-{\mathrm{Min}}(f_n(S_n),E))\subseteq \epsilon _2-{\mathrm{Min}}(f(S),E)\) for all \(\epsilon _1,\epsilon _2\in {\mathrm{int}} C\) with \(\epsilon _2-\epsilon _1\in {\mathrm{int}} C\);

  3. (iii)

    \(Li(\epsilon -{\mathrm{Min}}(f_n(S_n),E))\subseteq (\lambda +1)\epsilon -{\mathrm{Min}}(f(S),E)\) for all scalar \(\lambda >0\).

Proof

The proof of the first statement goes exactly as that of Theorem 3.2 in [8] with some minor modifications. We therefore give only a sketch of the proof. Let \(y\in \epsilon -{\mathrm{Min}}(f(S),E)\). This implies

$$\begin{aligned} (f(S)-y+\epsilon ) \cap (-E)=\emptyset . \end{aligned}$$
(1)

Obviously, \(y=f(x)\) for some \(x\in S\). So, by hypothesis (i), there exist \(x_n\in S_n\) satisfying \(x_n\rightarrow x\), and thus \(f_n(x_n)\rightarrow f(x)\). Now assume, towards a contradiction, that there exists a subsequence \((n_k)\) of (n) so that

$$\begin{aligned} (f_{n_k}(S_{n_k})-f_{n_k}(x_{n_k})+\epsilon ) \cap (-E)\ne \emptyset . \end{aligned}$$
(2)

This implies there exist \(s_{n_k}\in S_{n_k}\) so that \(f_{n_k}(s_{n_k})-f_{n_k}(x_{n_k})+\epsilon \in -E\). Hypothesis (iii) implies that there exists a subsequence of \((s_{n_k})\), still denoted \((s_{n_k})\), which converges to some \(s\in S\). Letting \(k\rightarrow \infty\) in (2) and using the continuous convergence of \((f_n)\) together with the closedness of E, we obtain \(f(s)-f(x)+\epsilon \in -E\); that is, \((f(S)-y+\epsilon )\cap (-E)\ne \emptyset\), contradicting (1). This shows that \(f_n(x_n)\in \epsilon -{\mathrm{Min}}(f_n(S_n),E)\) for all sufficiently large n, and we are done. We now prove the second item. Let \(y\in Li(\epsilon _1-{\mathrm{Min}}(f_n(S_n),E))\). Thus, \(y=\lim _{n\rightarrow \infty } y_n\) with \(y_n\in \epsilon _1-{\mathrm{Min}}(f_n(S_n),E)\). This implies

$$\begin{aligned} (f_n(S_n)-y_n+\epsilon _1)\cap (-E)=\emptyset . \end{aligned}$$
(3)

Now assume that \(y\notin \epsilon _2- {\mathrm{Min}}(f(S),E)\). Thus, there exists \(x\in S\) so that

$$\begin{aligned} f(x)-y+\epsilon _2 \in -E. \end{aligned}$$
(4)

On the other hand, since \(x\in S\), there exist \(x_n\in S_n\) so that \(x_n\rightarrow x\). Using (3), we deduce that \(y_n-f_n(x_n)-\epsilon _1\notin E\); in particular, \(y_n-f_n(x_n)-\epsilon _1\notin {\mathrm{int}} E\). Letting \(n\rightarrow \infty\), it follows that \(y-f(x)-\epsilon _1\notin {\mathrm{int}} E\). Since E is an improvement set, \(E+{\mathrm{int}} C={\mathrm{int}} E\), and hence \(y-f(x)-\epsilon _1\notin E+{\mathrm{int}} C\). Since \(\epsilon _2-\epsilon _1\in {\mathrm{int}} C\), this implies \(y-f(x)-\epsilon _1\notin E+\epsilon _2-\epsilon _1\), from which it follows that \(f(x)-y+\epsilon _2 \notin -E\). This violates (4). Thus, \(y\in \epsilon _2-{\mathrm{Min}}(f(S),E)\), completing the proof of the second item. The last item follows directly from (ii): indeed, \(\lambda \epsilon \in {\mathrm{int}} C\) for any \(\lambda >0\), so it suffices to take \(\epsilon _1=\epsilon\) and \(\epsilon _2=(1+\lambda )\epsilon\) in (ii). The proof is complete. \(\square\)

Compared with the conclusion of Theorem 3.3 in [8], the next theorem contains a stronger conclusion. The details are as follows:

Theorem 7

Suppose the following conditions hold true:

  1. (i)

    \(S_n {\mathop {\longrightarrow }\limits ^{PK}}S\);

  2. (ii)

    \((f_n)\) converges continuously to f;

  3. (iii)

    \(\bigcup _{k=1}^\infty S_k\) is relatively sequentially compact.

Suppose that the improvement set E is closed. Suppose furthermore \(\epsilon \in {\mathrm{int}} C\). Then, the following inclusions hold:

  1. (i)

    \(Ls(\epsilon -W{\mathrm{Min}}(f_n(S_n),E))\subseteq \epsilon -W{\mathrm{Min}}(f(S),E)\);

  2. (ii)

    \(\epsilon _1-W{\mathrm{Min}}(f(S),E)\subseteq Ls(\epsilon _2-W{\mathrm{Min}}(f_n(S_n),E))\) for all \(\epsilon _1, \epsilon _2\in {\mathrm{int}} C\) with \(\epsilon _2-\epsilon _1\in {\mathrm{int}} C\);

  3. (iii)

    \(\epsilon -W{\mathrm{Min}}(f(S),E)\subseteq Ls((\lambda +1)\epsilon -W{\mathrm{Min}}(f_n(S_n),E))\) for all scalar \(\lambda >0\).

Proof

The proof of (i) is exactly the same as that of the corresponding inclusion in Theorem 3.3 in [8]; we therefore omit it. Let us prove the opposite inclusion, namely (ii). So let \(y\in \epsilon _1-W{\mathrm{Min}}(f(S),E)\). This implies

$$\begin{aligned} (f(S)-y+\epsilon _1)\cap (-{\mathrm{int}}\,E)=\emptyset . \end{aligned}$$
(5)

Since \(y\in f(S)\), we have \(y=f(x)\) for some \(x\in S\). It follows that there exist a subsequence \((n_k)\) of (n) and points \(x_{n_k}\in S_{n_k}\) such that \(x_{n_k}\rightarrow x\). Therefore, \(f_{n_k}(x_{n_k})\rightarrow f(x)=y\) as \(k\rightarrow \infty\). We claim that \(f_{n_k}(x_{n_k})\in \epsilon _2-W{\mathrm{Min}}(f_{n_k}(S_{n_k}),E)\) for all sufficiently large k. Suppose, to the contrary, that along a further subsequence, still denoted \((n_k)\), there exist \(s_{n_k}\in S_{n_k}\) so that

$$\begin{aligned} f_{n_k}(x_{n_k})-f_{n_k}(s_{n_k})-\epsilon _2 \in {\mathrm{int}}\, E. \end{aligned}$$
(6)

By hypothesis (iii), \((s_{n_k})\) contains a subsequence, still denoted \((s_{n_k})\), converging to some \(s\in S\). Hence, letting \(k\rightarrow \infty\) in (6) and using the closedness of E, we deduce that

$$\begin{aligned} f(x)-f(s)-\epsilon _2 \in E. \end{aligned}$$
(7)

On the other hand, since E is an improvement set and \(\epsilon _2-\epsilon _1\in {\mathrm{int}} C\), we have \(E+\epsilon _2-\epsilon _1\subseteq {\mathrm{int}} E\), using the equality \(E+{\mathrm{int}} C={\mathrm{int}} E\). Hence \(E\subseteq {\mathrm{int}} E-\epsilon _2+\epsilon _1\). Applying this to (7), we find that \(f(s)-f(x)+\epsilon _1 \in -{\mathrm{int}} E\), contradicting (5). This completes the proof of (ii). Item (iii) follows in a way completely similar to Theorem 6 (iii). This completes the proof. \(\square\)

From these two theorems and the previous conclusions, the following strings of inclusions are easily extracted. Suppose that the conditions of Theorem 7 are satisfied. Then, for any \(\lambda >0\),

$$\begin{aligned} {\mathrm{Min}}(f(S),E)&\subseteq \epsilon -{\mathrm{Min}}(f(S),E)\\ &\subseteq Li(\epsilon -{\mathrm{Min}}(f_n(S_n),E))\\ &\subseteq Ls(\epsilon -{\mathrm{Min}}(f_n(S_n),E))\\ &\subseteq Ls(\epsilon -W{\mathrm{Min}}(f_n(S_n),E))\\ &\subseteq \epsilon -W{\mathrm{Min}}(f(S),E)\\ &\subseteq Ls((\lambda +1)\epsilon -W{\mathrm{Min}}(f_n(S_n),E))\\ &\subseteq (\lambda +1)\epsilon -W{\mathrm{Min}}(f(S),E), \end{aligned}$$

and also

$$\begin{aligned} {\mathrm{Min}}(f(S),E)&\subseteq \epsilon -{\mathrm{Min}}(f(S),E)\\ &\subseteq Li(\epsilon -{\mathrm{Min}}(f_n(S_n),E))\\ &\subseteq (\lambda +1)\epsilon -{\mathrm{Min}}(f(S),E)\\ &\subseteq Li((\lambda +1)\epsilon -{\mathrm{Min}}(f_n(S_n),E))\\ &\subseteq Li((\lambda +1)\epsilon -W{\mathrm{Min}}(f_n(S_n),E))\\ &\subseteq Ls((\lambda +1)\epsilon -W{\mathrm{Min}}(f_n(S_n),E))\\ &\subseteq (\lambda +1)\epsilon -W{\mathrm{Min}}(f(S),E)\\ &\subseteq Ls((\lambda +1)^2\epsilon -W{\mathrm{Min}}(f_n(S_n),E))\\ &\subseteq (\lambda +1)^2\epsilon -W{\mathrm{Min}}(f(S),E). \end{aligned}$$

Using Theorems 6 and 7, we can also deduce the following conclusions for the \(\epsilon -E\)-optimal and weak \(\epsilon -E\)-optimal solutions of (VP).

Theorem 8

Suppose that the conditions of Theorem 6 hold. Then

  1. (i)

    \(\epsilon -O(f,S,E)\subseteq Li(\epsilon -O(f_n,S_n,E))\);

  2. (ii)

    \(Li(\epsilon _1-O(f_n,S_n,E))\subseteq \epsilon _2-O(f,S,E)\) for all \(\epsilon _1,\epsilon _2\in {\mathrm{int}} C\) with \(\epsilon _2-\epsilon _1\in {\mathrm{int}} C\);

  3. (iii)

    \(Ls(\epsilon -WO(f_n,S_n,E))\subseteq \epsilon -WO(f,S,E)\);

  4. (iv)

    \(\epsilon _1-WO(f,S,E)\subseteq Ls(\epsilon _2-WO(f_n,S_n,E))\) for all \(\epsilon _1, \epsilon _2\in {\mathrm{int}} C\) with \(\epsilon _2-\epsilon _1\in {\mathrm{int}} C\).

Proof

Elementary. \(\square\)

Conclusions

In this paper, we first established some existence results for the optimal solutions (including E-minimal, \(\epsilon -E\)-minimal and weak \(\epsilon -E\)-minimal solutions) of a given set. Then, we established the lower and upper set-convergences, in the sense of Painlevé-Kuratowski, of the \(\epsilon -E\)-optimal and weak \(\epsilon -E\)-optimal solution sets of perturbed nonconvex vector problems. The conclusions of this paper can be viewed as an \(\epsilon\)-counterpart of, and to some extent an improvement on, the work of Lalitha and Chatterjee [8].