1 Introduction and problem statement

Vector optimization offers a sophisticated and effective theoretical apparatus for supporting decision processes in the presence of multiple conflicting criteria. A peculiar feature of vector optimization is that, in a context of partial orderings, there are different concepts of solutions, reflecting different viewpoints and priorities of the decision maker. Among the basic and most investigated solution concepts, ideal efficient solutions are the strongest ones, whose definition appears very close to the natural definition of solution for scalar optimization problems. In its global form, an ideal efficient solution in fact captures the possibility of comparison with any other admissible choice and, in doing so, it guarantees better performance with respect to each of the multiple criteria under consideration. The concept of ideal efficiency is indeed related to the cone domination property over all possible choices, whereas mere efficiency can only guarantee a non-domination property, with a consequently weaker impact of the related solution concept on concrete decision processes. The relations between these two solution concepts have long been well understood: it is well known that ideal efficient solutions are, a fortiori, efficient solutions, the converse failing to be true, in general (see, for instance, [16, Proposition 2.4.6]). Nonetheless, whenever the set of ideal solutions happens to be nonempty, it coincides with the set of all efficient solutions. This link offers a theoretical motivation for studying ideal efficiency, inasmuch as it provides information that is relevant, to a certain extent, also to mere efficiency. On the other hand, since the aforementioned coincidence holds only for global efficient solutions, the need for a specific study devoted to ideal efficiency also emerges.

Besides theoretical motivations, when dealing with concrete vector optimization problems, evidence shows that the set of efficient solutions is typically a large set. In a concrete decision process this does not help to identify immediately the “most-preferred” solutions, namely the ones that the decision maker would identify as the solution to the decision-making problem. Consequently, as long as the problem analysis is confined to merely efficient solutions, a further selection procedure is often required. In contrast, whenever they exist, ideal efficient solutions make it possible to avoid any additional involvement of the decision maker, representing a solution concept which is valid once and for all.

All of this becomes evident when dealing, in particular, with multi-objective optimization problems, i.e. with vector optimization problems whose partial order is induced by the nonnegative orthant of a finite-dimensional Euclidean space (component-wise partial order). Solving this kind of problem (a.k.a. Pareto optimization problems) amounts to studying the inherent trade-offs among conflicting objectives. In this specific context, efficient solutions are the ones that carry the relevant trade-off information. On the other hand, ideal efficiency singles out those special situations in which such trade-offs are no longer needed: since ideal efficient solutions minimize each scalar objective function (as a component of the vector cost mapping) at the same time, conditions ensuring their existence a priori should simplify solution procedures. Of course, as is readily seen in this particular context, the occurrence of such a favourable circumstance is expected to be rare. Consistently, a drawback of such a solution concept is that the geometry of ideal efficiency is very delicate, so that ideal efficient solutions fail to exist in many problems. For this reason, in the rare circumstances when they do exist, it becomes important to understand under which conditions their existence can be preserved in the presence of data perturbations and, if this happens, how and how much they change. Whereas for the stability analysis of weak efficient and efficient solutions to vector optimization problems a well-developed literature can be found (see, among others, [1, 2, 4,5,6,7, 14, 26,27,28]), a stability analysis specific to ideal efficient solutions seems to be still largely unexplored. The present paper describes an attempt to address this question.

Consider the following parametric optimization problem

$$\begin{aligned} C\hbox {-}\min \, f(p,x) \quad \hbox { subject to }\quad x\in {\mathscr {R}}(p), \end{aligned}$$
(VOPp)

where \(f:P\times {\mathbb {X}}\longrightarrow {\mathbb {Y}}\) is a mapping representing the vector objective function, \(C\subset {\mathbb {Y}}\) is a nontrivial (i.e. \(C\ne \{{\mathbf {0}}\}\)) closed, pointed, convex cone, inducing the partial order relation \(\le _{{}_C}\) on \({\mathbb {Y}}\) in the standard way (i.e., \(y_1\le _{{}_C}y_2\) iff \(y_2-y_1\in C\)), and \({\mathscr {R}}:P\rightrightarrows {\mathbb {X}}\) is the feasible region set-valued mapping. Henceforth (P,d) denotes a metric space, where perturbation parameters vary, while \(({\mathbb {X}},\Vert \cdot \Vert )\) and \(({\mathbb {Y}},\Vert \cdot \Vert )\) denote real Banach spaces.

Given a fixed \({\bar{p}}\in P\), an element \({\bar{x}}\in {\mathscr {R}}({\bar{p}})\) is said to be a (global) ideal efficient solution to the particular problem \((\mathrm{VOP}{\bar{p}}\,)\) if

$$\begin{aligned} f({\bar{p}},{\bar{x}})\le _{{}_C}f({\bar{p}},x),\quad \forall x\in {\mathscr {R}}({\bar{p}}), \end{aligned}$$
(1)

or, equivalently, if

$$\begin{aligned} f({\bar{p}},{\mathscr {R}}({\bar{p}}))\subseteq f({\bar{p}},{\bar{x}})+C. \end{aligned}$$
(2)

If the value of the parameter p is subject to perturbations, making it vary around the nominal value \({\bar{p}}\), the corresponding problems \((\mathrm{VOP}p\,)\) are expected to admit different ideal efficient solutions, if any, reflecting changes in the feasible region and in the vector objective function. The study of the stability behaviour of vector optimization problems leads therefore to consider the ideal efficient solution mapping \(\mathrm{IE}:P \rightrightarrows {\mathbb {X}}\), which is defined by

$$\begin{aligned} \mathrm{IE}(p)=\{x\in {\mathscr {R}}(p):\ x \hbox { ideal efficient solution to }(\mathrm{VOP}p\,)\}. \end{aligned}$$

The analysis of concrete examples shows that the behaviour of the mapping \(\mathrm{IE}\) may be rather bizarre, even in the presence of very amenable data. In the example below, for a problem with linear (and smoothly perturbed) objective function and linear (unperturbed) constraints, the solution mapping \(\mathrm{IE}\) exhibits a variety of situations: it alternates isolated solution existence (meaning no solution for small changes of p around a solvable problem) with the best form of stability (solution existence and invariance of the solution set for small changes of p).

Example 1

Let \(P=[0,2\pi ]\), \({\mathbb {X}}={\mathbb {Y}}={\mathbb {R}}^2\), \(C={\mathbb {R}}^2_+=\{y=(y_1,y_2)\in {\mathbb {R}}^2:\ y_1\ge 0,\ y_2\ge 0\}\), let the objective mapping \(f:[0,2\pi ]\times {\mathbb {R}}^2\longrightarrow {\mathbb {R}}^2\) be given by

$$\begin{aligned} f(p,x)=A(p)x, \quad \hbox { with }\quad A(p)=\left( \begin{array}{cc} \cos (p) &{} \sin (p) \\ -\sin (p) &{} \cos (p) \end{array}\right) , \end{aligned}$$

and let \({\mathscr {R}}:[0,2\pi ]\rightrightarrows {\mathbb {R}}^2\) be given by

$$\begin{aligned} {\mathscr {R}}(p)=T=\{x=(x_1,x_2)\in {\mathbb {R}}^2:\ x_1\ge 0,\ x_2\ge 0, \ x_1+x_2\le 1\},\quad \forall p\in [0,2\pi ]. \end{aligned}$$

Evidently, the matrix A(p) represents the clockwise rotation of \({\mathbb {R}}^2\) by an angle of p radians. By direct inspection of the problem \((\mathrm{VOP}p\,)\) so defined, one sees that the associated solution mapping \(\mathrm{IE}: [0,2\pi ]\rightrightarrows {\mathbb {R}}^2\) results in

$$\begin{aligned} \mathrm{IE}(p)=\left\{ \begin{array}{cl} \{(0,0)\} &{} \quad \hbox { if } p=0, \\ \\ \{(1,0)\} &{} \quad \hbox { if } p\in \left[ {\pi \over 2},{3\over 4}\pi \right] , \\ \\ \{(0,1)\} &{} \quad \hbox { if } p\in \left[ {5\over 4}\pi ,{3\over 2}\pi \right] , \\ \\ \varnothing &{} \quad \hbox { otherwise.} \end{array}\right. \end{aligned}$$

This says that for small changes in the value of \(p\in [0,2\pi ]\) near 0, the corresponding problems \((\mathrm{VOP}p\,)\) have no solution, whereas, for any fixed \({\bar{p}}\in ({\pi \over 2},{3\over 4}\pi )\cup ({5\over 4}\pi ,{3\over 2}\pi )\) and perturbations of the parameter sufficiently near to \({\bar{p}}\), the corresponding problems are still solvable and the solution set stays constant.

It is worth noticing that this parametric optimization problem admits efficient solutions for every \(p\in [0,2\pi ]\). Thus, the present example shows that the geometry of ideal efficiency can be broken by small perturbations of the parameter more easily than that of mere efficiency.
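
The case analysis above can also be checked numerically. Since \(f(p,\cdot )\) is linear and T is a polytope, the inclusion (2) holds as soon as the domination test is passed at the vertices of T, and the ideal solution, when it exists, may be sought among those vertices for this problem. The following Python sketch (a minimal check; all helper names are ours, not part of any library) carries out this vertex test:

```python
import numpy as np

def A(p):
    """Clockwise rotation of R^2 by an angle of p radians."""
    return np.array([[np.cos(p), np.sin(p)],
                     [-np.sin(p), np.cos(p)]])

# Vertices of the (unperturbed) feasible triangle T.
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def ideal_vertices(p, tol=1e-12):
    """Vertices v with A(p)v <=_C A(p)w for every vertex w, C = R^2_+."""
    img = V @ A(p).T                   # rows are f(p, v) for v in V
    return [tuple(V[i]) for i, y in enumerate(img)
            if all((w - y >= -tol).all() for w in img)]

for p in [0.0, 0.3, np.pi / 2, 0.7 * np.pi, np.pi, 1.3 * np.pi, 1.5 * np.pi]:
    print(f"p = {p:5.3f}   IE(p) = {ideal_vertices(p)}")
```

The printed output reproduces the case analysis: a singleton at p = 0, the vertex (1,0) on \([{\pi \over 2},{3\over 4}\pi ]\), the vertex (0,1) on \([{5\over 4}\pi ,{3\over 2}\pi ]\), and the empty set elsewhere.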

From inclusion (2) it should be apparent that the domination (ordering) cone C plays a crucial role in defining the peculiarity of ideal efficiency, causing its rare occurrence. Geometrical properties of C clearly affect the solution set of each problem \((\mathrm{VOP}p\,)\). Of course, this is true also for the mere efficiency concept, but in a different manner. Observe, for instance, that if \(\mathrm{int}\, f({\bar{p}},{\mathscr {R}}({\bar{p}})) \ne \varnothing \), a necessary condition for the existence of ideal efficient solutions to problem \((\mathrm{VOP}{\bar{p}}\,)\) is that C has nonempty interior. Such a topological implication has no analogue in the case of efficient solutions to the same problem, which are characterized by the following relation of a different nature

$$\begin{aligned} f({\bar{p}},{\mathscr {R}}({\bar{p}}))\cap [f({\bar{p}},{\bar{x}})-C]= \{f({\bar{p}},{\bar{x}})\}. \end{aligned}$$
(3)

Another related aspect is that, whereas condition (3) may take place for many elements \(f({\bar{p}},{\bar{x}})\) of \(f({\bar{p}},{\mathscr {R}}({\bar{p}}))\), thereby causing large solution sets, the inclusion in (2) can take place only for one vector \(f({\bar{p}},{\bar{x}})\) in \(f({\bar{p}},{\mathscr {R}}({\bar{p}}))\) (possibly attained by several elements of \({\mathscr {R}}({\bar{p}})\)). As a further remark concerning the role of C, marking the difference between the full domination and the non-domination property, notice that, while enlarging such a cone results in fewer efficient solutions, enlargements of C may yield the existence of ideal efficient solutions which were otherwise lacking. Throughout this paper, the cone C is kept fixed, with the intention to extend the present analysis to the case of ordering cones C varying with \(p\in P\) in subsequent developments.

It is plain to see that the search for ideal efficient solutions to problems \((\mathrm{VOP}p\,)\) can in fact be regarded as a specialization of a more general class of problems involving set-valued mappings and cones, a kind of parameterized generalized equations referred to as set-valued inclusions in [31]. More precisely, given set-valued mappings \({\mathscr {R}}:P \rightrightarrows {\mathbb {X}}\), \(F:P\times {\mathbb {X}}\rightrightarrows {\mathbb {Y}}\) and a nontrivial cone \(C\subseteq {\mathbb {Y}}\), these problems require to

$$\begin{aligned} \hbox { find }x\in {\mathscr {R}}(p)\hbox { such that } F(p,x)\subseteq C. \end{aligned}$$
(PSV)

Their solution mapping will be denoted henceforth by \({{\mathscr {S}}}:P\rightrightarrows {\mathbb {X}}\), namely

$$\begin{aligned} {{\mathscr {S}}}(p)=\{x\in {\mathscr {R}}(p):\ F(p,x)\subseteq C\}. \end{aligned}$$

By introducing the specific set-valued mapping \(F_{{\mathscr {R}},f}:P\times {\mathbb {X}}\rightrightarrows {\mathbb {Y}}\) defined as

$$\begin{aligned} F_{{\mathscr {R}},f}(p,x)= f(p,{\mathscr {R}}(p))-f(p,x), \end{aligned}$$
(4)

it is clear that

$$\begin{aligned} \mathrm{IE}(p)={{\mathscr {S}}}(p). \end{aligned}$$
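
As a minimal illustration of how (4) recasts ideal efficiency as a set-valued inclusion, the following Python sketch (all names are ours) approximates \(F_{{\mathscr {R}},f}(p,x)\) for the data of Example 1 by sampling the feasible triangle, and tests the inclusion \(F_{{\mathscr {R}},f}(p,x)\subseteq C\) directly:

```python
import numpy as np

def A(p):
    """Clockwise rotation matrix of Example 1."""
    return np.array([[np.cos(p), np.sin(p)],
                     [-np.sin(p), np.cos(p)]])

def sample_T(m=60):
    """Finite sample of T = conv{(0,0), (1,0), (0,1)}."""
    return np.array([(i / m, j / m)
                     for i in range(m + 1) for j in range(m + 1 - i)])

def F(p, x, R_sample):
    """Sampled F_{R,f}(p,x) = f(p, R(p)) - f(p,x), with f(p,x) = A(p)x."""
    return (R_sample - x) @ A(p).T

def solves_PSV(p, x, R_sample, tol=1e-9):
    """x solves (PSV) iff F_{R,f}(p,x) subseteq C = R^2_+."""
    return bool((F(p, x, R_sample) >= -tol).all())

R_sample = sample_T()
p = 0.6 * np.pi                                        # inside [pi/2, 3pi/4]
print(solves_PSV(p, np.array([1.0, 0.0]), R_sample))   # True: (1,0) in IE(p)
print(solves_PSV(p, np.array([0.0, 0.0]), R_sample))   # False
```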

Set-valued inclusions, in simple as well as in parameterized form, have been recently studied from several viewpoints in [3, 29,30,31,32]. The idea underlying the research exposed in the present paper is that useful insights into the stability behaviour of ideal efficient solutions can be obtained by properly refining the study of solution stability of parameterized set-valued inclusions. In doing so, it will also be possible to establish some properties of the ideal efficient value mapping \(\mathrm{val}:\mathrm{dom}\, \mathrm{IE}\longrightarrow {\mathbb {Y}}\) associated to \((\mathrm{VOP}p\,)\), namely the single-valued mapping defined by

$$\begin{aligned} \mathrm{val}(p)=f(p,{\bar{x}}_p), \end{aligned}$$

where \({\bar{x}}_p\) is any element of \(\mathrm{IE}(p)\). Notice that \(\mathrm{val}(p)\) is well defined even when \(\mathrm{IE}(p)\) contains more than one element, as it may happen. Indeed, according to the definition of ideal efficient solution to \((\mathrm{VOP}p\,)\), the relation

$$\begin{aligned} f(p,{\bar{x}}_p)\le _{{}_C}f(p,x),\quad \forall x\in {\mathscr {R}}(p) \end{aligned}$$

must be true for every \({\bar{x}}_p\in \mathrm{IE}(p)\). So, if \({\bar{x}}_p\) and \({\bar{x}}'_p\) both belong to \(\mathrm{IE}(p)\), then \(f(p,{\bar{x}}_p)\le _{{}_C}f(p,{\bar{x}}'_p)\) and \(f(p,{\bar{x}}'_p)\le _{{}_C}f(p,{\bar{x}}_p)\), and the fact that C is pointed entails that \(f(p,{\bar{x}}_p)\) must be the same value for every \({\bar{x}}_p\in \mathrm{IE}(p)\).

The contents of the paper are arranged as follows. In Sect. 2 a sufficient condition for the existence of ideal efficient solutions for a problem in the family \((\mathrm{VOP}p\,)\), in the case of a fixed value of p, is provided. In Sect. 3 a sufficient condition for the solution mapping associated to a problem family \((\mathrm{PSV})\) to be stable is established. Here the stability behaviour is expressed as Lipschitz lower semicontinuity for set-valued mappings. An estimate for the related modulus is also provided. In Sect. 4 the result established in the previous section finds a specific application in providing sufficient conditions for the stability of ideal efficient solutions to problems \((\mathrm{VOP}p\,)\). The focus is therefore on Lipschitz lower semicontinuity of \(\mathrm{IE}\), but, whenever \(\mathrm{IE}\) happens to be single-valued, such a property qualifies as calmness. Section 5 is reserved for concluding remarks and perspectives.

The main notations in use throughout the paper are basically standard: \({\mathbb {R}}\) denotes the real number field and \({\mathbb {R}}^n_+\) indicates the nonnegative orthant in the Euclidean space \({\mathbb {R}}^n\). In any metric space (X,d), \(\mathrm{B}\left( x, r\right) \) denotes the closed ball with center \(x\in X\) and radius \(r\ge 0\), \(\mathrm{dist}\left( x,S\right) \) the distance of x from \(S\subseteq X\), with the convention that \(\mathrm{dist}\left( x,\varnothing \right) =+\infty \), and \(\mathrm{B}\left( S, r\right) =\{x\in X:\ \mathrm{dist}\left( x,S\right) \le r\}\) the r-enlargement of S. The symbols \(\mathrm{int}\, S\) and \(\mathrm{cl}\, S\) indicate the topological interior and closure of S, respectively. Given \(A,\, B\subseteq X\), the value \(\mathrm{exc}(A,B)=\sup \{\mathrm{dist}\left( a,B\right) :\ a\in A\}\) is the excess of A over B. In any real Banach space \(({\mathbb {X}},\Vert \cdot \Vert )\), with null vector \({\mathbf {0}}\), \({{\mathbb {B}}}=\mathrm{B}\left( {\mathbf {0}}, 1\right) \) stands for the closed unit ball, whereas \({{\mathbb {S}}}\) for the unit sphere. Given two nonempty subsets \(A,\, B\subseteq {\mathbb {X}}\), their \(*\)-difference (a.k.a. Pontryagin difference) is defined as \(A\overset{*}{-}B=\{x\in {\mathbb {X}}:\ x+B\subseteq A\}\). The convex hull of a set \(A\subseteq {\mathbb {X}}\) is denoted by \(\mathrm{conv}\, A\). The space of all \(n\times n\) matrices with real entries is indicated by \(\mathrm{L}({\mathbb {R}}^n)\), the operator norm of \(\varLambda \in \mathrm{L}({\mathbb {R}}^n)\) by \(\Vert \varLambda \Vert _\mathrm{L}\), and the inverse of \(\varLambda \) by \(\varLambda ^{-1}\), provided that it exists. If \(\varPhi :{\mathbb {X}}\rightrightarrows {\mathbb {Y}}\) denotes a set-valued mapping, its domain is indicated by \(\mathrm{dom}\, \varPhi \). The acronyms p.h., l.s.c. and u.s.c. stand for positively homogeneous, lower semicontinuous and upper semicontinuous, respectively. The meaning of additional symbols will be explained contextually to their introduction.

2 An existence result without boundedness and continuity

This section is a digression from the main theme of the paper. A basic prerequisite for any stability behaviour of the solution mapping to a parameterized problem is the non-emptiness of its values. Therefore, before exploring conditions for this phenomenon to happen, it seems reasonable to spend a few words on solution existence for a fixed problem within the family \((\mathrm{VOP}p\,)\). Thus, the present section presents a sufficient condition for the existence of ideal efficient solutions to the following (geometrically) constrained vector optimization problem \((\mathrm{VOP})\):

$$\begin{aligned} C\hbox {-}\min \, f(x) \quad \hbox { subject to }\quad x\in {\mathscr {R}}, \end{aligned}$$
(VOP)

where \(f:X\longrightarrow {\mathbb {Y}}\), \(C\subseteq {\mathbb {Y}}\) and \({\mathscr {R}}\) are the problem data. Throughout the current section, (X,d) stands for a complete metric space, whereas \(({\mathbb {Y}},\Vert \cdot \Vert )\) denotes a real Banach space. Such an existence condition refines and extends an analogous result recently proposed (see [29, Theorem 5.1]), by weakening several of its hypotheses. Indeed, the continuity of f is replaced by lower C-semicontinuity, while the closedness of \(f({\mathscr {R}})\) is dropped. Besides, an assumption, taken for granted in [29, Theorem 5.1], is now explicitly made, which avoids a pathological, yet possible, behaviour of \((\mathrm{VOP})\).

Let us recall that, according to [18], a mapping \(f:X\longrightarrow {\mathbb {Y}}\) is said to be C-lower semicontinuous (for short, C-l.s.c.) at \({\bar{x}}\in X\) if for every \(\epsilon >0\) there exists \(\delta _\epsilon >0\) such that

$$\begin{aligned} f(x)\in \mathrm{B}\left( f({\bar{x}}), \epsilon \right) +C,\quad \forall x\in \mathrm{B}\left( {\bar{x}}, \delta _\epsilon \right) . \end{aligned}$$
(5)

Of course, whenever f is continuous at \({\bar{x}}\), a fortiori it is C-l.s.c. at the same point. Following a variational approach combined with an analysis via set-valued inclusions, the ideal efficient solutions to \((\mathrm{VOP})\) can be singled out by means of the function \(\nu :{\mathscr {R}}\longrightarrow [0,+\infty ]\), defined by

$$\begin{aligned} \nu (x)=\mathrm{exc}(f({\mathscr {R}})-f(x),C)=\mathrm{exc}(f({\mathscr {R}}),f(x)+C). \end{aligned}$$
(6)

More precisely, since the cone C has been assumed to be closed, it is clear that

$$\begin{aligned} \mathrm{IE}=[\nu \le 0]=[\nu =0], \end{aligned}$$
(7)

where \(\mathrm{IE}\) indicates the set of all ideal efficient solutions to \((\mathrm{VOP})\).
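
To make the functional characterization (7) concrete, here is a small numerical reading of (6), under the assumptions \({\mathbb {Y}}={\mathbb {R}}^2\) and \(C={\mathbb {R}}^2_+\), with \(f({\mathscr {R}})\) replaced by a finite sample (function names are ours): for this cone one has \(\mathrm{dist}\left( y,z+{\mathbb {R}}^2_+\right) =\Vert \max (z-y,{\mathbf {0}})\Vert \), the maximum taken componentwise, so \(\nu \) reduces to a finite max computation.

```python
import numpy as np

def dist_to_shifted_cone(y, z):
    """dist(y, z + R^2_+) = ||max(z - y, 0)|| (componentwise maximum)."""
    return np.linalg.norm(np.maximum(z - y, 0.0))

def nu(fx, fR_sample):
    """nu(x) = exc(f(R), f(x) + C), approximated over a finite sample of f(R)."""
    return max(dist_to_shifted_cone(y, fx) for y in fR_sample)

# Toy instance: f = identity on a grid sample of R = [0,1]^2, so IE = {(0,0)}.
g = np.linspace(0.0, 1.0, 21)
fR_sample = np.array([(a, b) for a in g for b in g])

print(nu(np.array([0.0, 0.0]), fR_sample))   # 0.0 -> (0,0) is ideal efficient
print(nu(np.array([0.5, 0.2]), fR_sample))   # > 0 -> not ideal efficient
```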

The next lemma connects assumptions on the problem data of \((\mathrm{VOP})\) with properties of \(\nu \), which will be useful in the sequel.

Lemma 1

Let \(f:X\longrightarrow {\mathbb {Y}}\) be a mapping, let \(C\subseteq {\mathbb {Y}}\) be a closed, convex cone and let \({\mathscr {R}}\subseteq X\) be a nonempty closed set.

  1. (i)

    If there exists \(x_0\in {\mathscr {R}}\) such that the set \([f({\mathscr {R}})-f(x_0)]\backslash C\) is bounded, then \(\nu \not \equiv +\infty \).

  2. (ii)

    If f is C-l.s.c. at \({\bar{x}}\in {\mathscr {R}}\), then \(\nu \) is l.s.c. at \({\bar{x}}\).

Proof

  1. (i)

    It suffices to observe that, if \(M>0\) is such that \([f({\mathscr {R}})-f(x_0)]\backslash C\subseteq M{{\mathbb {B}}}\), then it results in

    $$\begin{aligned} \nu (x_0) = \sup _{y\in [f({\mathscr {R}})-f(x_0)]\backslash C}\mathrm{dist}\left( y,C\right) \le \sup _{y\in M{{\mathbb {B}}}}\mathrm{dist}\left( y,C\right) \le \sup _{y\in M{{\mathbb {B}}}}\Vert y\Vert =M<+\infty , \end{aligned}$$

    and hence \(\nu \not \equiv +\infty \).

  2. (ii)

    It is useful to recall that, given two nonempty sets \(A,\, B\subseteq {\mathbb {Y}}\), and \(\epsilon >0\), then

    $$\begin{aligned} \mathrm{exc}(A,B+\epsilon {{\mathbb {B}}})\ge \mathrm{exc}(A,B)-\epsilon . \end{aligned}$$

    Indeed, one has

    $$\begin{aligned} \mathrm{exc}(A,B+\epsilon {{\mathbb {B}}})= & {} \sup _{a\in A}\inf _{\begin{array}{c} b\in B\\ u\in {{\mathbb {B}}} \end{array}}\Vert a-b-\epsilon u\Vert \ge \sup _{a\in A}\inf _{\begin{array}{c} b\in B \\ u\in {{\mathbb {B}}} \end{array}} [\Vert a-b\Vert -\epsilon \Vert u\Vert ] \\= & {} \sup _{a\in A} \inf _{b\in B} [\Vert a-b\Vert -\epsilon ]=\mathrm{exc}(A,B)-\epsilon . \end{aligned}$$

Now, let \((x_n)_n\) be a sequence in \({\mathscr {R}}\), with \(x_n\longrightarrow {\bar{x}}\), as \(n\rightarrow \infty \). If \(\nu ({\bar{x}})=0\), the inequality \(\liminf _{n\rightarrow \infty }\nu (x_n)\ge 0=\nu ({\bar{x}})\) trivially holds true, as \(\nu \) takes nonnegative values only. So assume that \(\nu ({\bar{x}})>0\) and fix an arbitrary \(\epsilon >0\). Since f is C-l.s.c. at \({\bar{x}}\), there exists \(\delta _\epsilon >0\) such that inclusion (5) holds. Since, for a suitable \({\bar{n}}\in {\mathbb {N}}\), one has \(x_n\in \mathrm{B}\left( {\bar{x}}, \delta _\epsilon \right) \) for every \(n\in {\mathbb {N}}\) with \(n\ge {\bar{n}}\), it follows that \(f(x_n)+C\subseteq f({\bar{x}})+\epsilon {{\mathbb {B}}}+C\), for every \(n\in {\mathbb {N}}\) with \(n\ge {\bar{n}}\). Consequently, one obtains

    $$\begin{aligned} \nu (x_n)= & {} \mathrm{exc}(f({\mathscr {R}}),f(x_n)+C)\ge \mathrm{exc}(f({\mathscr {R}}),f({\bar{x}})+ \epsilon {{\mathbb {B}}}+C) \\\ge & {} \mathrm{exc}(f({\mathscr {R}}),f({\bar{x}})+C)-\epsilon = \nu ({\bar{x}})-\epsilon , \quad \forall n\in {\mathbb {N}},\ n\ge {\bar{n}}. \end{aligned}$$

    The above inequalities imply

    $$\begin{aligned} \liminf _{n\rightarrow \infty }\nu (x_n)\ge \nu ({\bar{x}})-\epsilon . \end{aligned}$$

The thesis follows by arbitrariness of \(\epsilon \). The reader should notice that this reasoning works also in the case \(\nu ({\bar{x}})=+\infty \).

\(\square \)

In order to formulate the next result, it is to be recalled that, following [29,  Definition 3.1], given a set \(S\subseteq X\) and a mapping \(g:X\longrightarrow {\mathbb {Y}}\), g is said to be metrically C-increasing on S if there exists a constant \(a>1\) such that

$$\begin{aligned} \forall x\in S,\ \forall r>0,\ \exists u\in \mathrm{B}\left( x, r\right) \cap S:\ \mathrm{B}\left( g(u), ar\right) \subseteq \mathrm{B}\left( g(x)+C, r\right) . \end{aligned}$$
(8)

The quantity

$$\begin{aligned} \mathrm{inc}(g;S)=\sup \{a>1:\ \hbox { inclusion }(8) \hbox { holds}\} \end{aligned}$$

is called the exact bound of metric C-increase of g on S. For a discussion of this notion, including examples, related properties and its connection with the decrease principle of variational analysis, the reader is referred to [29].
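
A one-dimensional toy instance may help to visualize definition (8). Assume \(X={\mathbb {Y}}={\mathbb {R}}\), \(C={\mathbb {R}}_+\), \(g=\mathrm{id}\) and \(S={\mathbb {R}}\); taking \(u=x+r\), inclusion (8) reduces to \([x+r-ar,\, x+r+ar]\subseteq [x-r,+\infty )\), i.e. \(a\le 2\), so \(\mathrm{inc}(\mathrm{id};{\mathbb {R}})=2\). A sketch of this check (names are ours):

```python
def inclusion_8_holds(x, r, a):
    """For g = id, C = R_+, S = R: with u = x + r, inclusion (8) reads
    [x + r - a*r, x + r + a*r] subseteq [x - r, +inf), i.e. a <= 2."""
    u = x + r
    return u - a * r >= x - r

print(inclusion_8_holds(0.0, 1.0, 2.0))    # True : a = 2 still works
print(inclusion_8_holds(0.0, 1.0, 2.01))   # False: any a > 2 fails
# hence inc(id; R) = 2 for this instance
```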

Theorem 1

(Ideal efficient solution existence) With reference to a problem \((\mathrm{VOP})\), suppose that:

  1. (i)

    \({\mathscr {R}}\) is nonempty and closed;

  2. (ii)

    there exists \(x_0\in {\mathscr {R}}\) such that \([f({\mathscr {R}})-f(x_0)]\backslash C\) is a bounded set;

  3. (iii)

    f is C-l.s.c. with respect to the topology induced on \({\mathscr {R}}\), at each point of \({\mathscr {R}}\);

  4. (iv)

    \(-f\) is metrically C-increasing on \({\mathscr {R}}\).

Then, \(\mathrm{IE}\ne \varnothing \) is closed and the following estimate holds

$$\begin{aligned} \mathrm{dist}\left( x,\mathrm{IE}\right) \le {\nu (x)\over \mathrm{inc}(-f;{\mathscr {R}})},\quad \forall x\in {\mathscr {R}}. \end{aligned}$$
(9)

Proof

The idea is to apply [29, Theorem 4.2], after observing that, as one readily checks by a perusal of its proof, assuming the set-valued mapping \(F:X\rightrightarrows {\mathbb {Y}}\), \(F=f({\mathscr {R}})-f\), to be closed-valued is not required in order to get the validity of the aforementioned result.

That said, notice that, as a closed subset of a complete metric space, \({\mathscr {R}}\) is a complete metric space. In the light of Lemma 1, by hypotheses (ii) and (iii), the function \(\nu :{\mathscr {R}}\longrightarrow [0,+\infty ]\) defined as in (6) is l.s.c. on \({\mathscr {R}}\) and \(\nu \not \equiv +\infty \). Since according to (7) \(\mathrm{IE}=[\nu \le 0]\) is a sublevel set of a l.s.c. function, it is closed.

Now, in order to show that \(\mathrm{IE}\ne \varnothing \) and the error bound in (9) holds true, it suffices to apply [29,  Theorem 4.2], with \(X={\mathscr {R}}\), \(F=f({\mathscr {R}})-f\) and \(\phi =\nu \), following the same argument as proposed in [29,  Theorem 5.1]. In doing so, notice that the existence of \(x_0\in {\mathscr {R}}\) such that \(\nu (x_0)<+\infty \) is guaranteed by hypothesis (ii), whereas the lower semicontinuity of \(\nu \) can be derived directly from the lower C-semicontinuity of f, instead of from the lower semicontinuity of F. Besides, the hypothesis (iv) entails the property of metric C-increase on \({\mathscr {R}}\) for the mapping \(f({\mathscr {R}})-f\). \(\square \)

Example 2

Let \(X={\mathbb {Y}}={\mathbb {R}}^2\) be endowed with its standard Euclidean space structure, let \(C={\mathbb {R}}^2_+\) and let \(f:{\mathbb {R}}^2\longrightarrow {\mathbb {R}}^2\) be defined by

$$\begin{aligned} f(x)=-x+\mathrm{e}\sum _{n=0}^{\infty }(n+1)\chi _{(n,n+1]}(\Vert x\Vert _\infty ), \end{aligned}$$

where \(\mathrm{e}=(1,1)\in {\mathbb {R}}^2\), \(\chi _A\) denotes the characteristic function of a subset \(A\subseteq {\mathbb {R}}\), and \(\Vert x\Vert _\infty =\max \{|x_1|,\, |x_2|\}\). Let the feasible region be \({\mathscr {R}}=-{\mathbb {R}}_+\mathrm{e}=\{x=(x_1,x_2)\in {\mathbb {R}}^2:\ x_1=x_2\le 0\}\). One sees from the definition that

$$\begin{aligned} f({\mathscr {R}})=\{(0,0)\}\cup \bigcup _{n=0}^\infty (2n+1,2n+2]\mathrm{e}. \end{aligned}$$
(10)

This makes it clear that, for the problem \((\mathrm{VOP})\) defined by these data, it is \(\mathrm{IE}=\{(0,0)\}\). Notice that \(f({\mathscr {R}})\) fails to be closed, as \((2n+1)\mathrm{e}\not \in f({\mathscr {R}})\), for every \(n\in {\mathbb {N}}\), even though \({\mathscr {R}}\) is a closed subset of \({\mathbb {R}}^2\). It is readily seen that f is not continuous at each point of the form \(x=-n\mathrm{e}\in {\mathscr {R}}\), with \(n\in {\mathbb {N}}\). Nonetheless, f turns out to be \({\mathbb {R}}^2_+\)-l.s.c. at each point of \({\mathscr {R}}\). Indeed, for any fixed \(x_0\in {\mathscr {R}}\) and \(\epsilon \in (0,1)\), it suffices to take \(\delta =\epsilon \) in order to have

$$\begin{aligned} f(x)\in \mathrm{B}\left( f(x_0), \epsilon \right) +{\mathbb {R}}^2_+,\quad \forall x\in \mathrm{B}\left( x_0, \delta \right) \cap {\mathscr {R}}. \end{aligned}$$
(11)

If \(x_0=(0,0)\) this inclusion is evident because \(f({\mathscr {R}})\subseteq {\mathbb {R}}^2_+\subseteq B(f(0),\epsilon )+{\mathbb {R}}^2_+\). If \(x_0\in \bigcup _{n=0}^\infty (n,n+1)(-\mathrm{e})\), f coincides with the function \(x\mapsto -x+(n+1)\mathrm{e}\) in a neighbourhood in \({\mathscr {R}}\) of \(x_0\) and it is continuous with respect to the topology induced on \({\mathscr {R}}\) at \(x_0\). If \(x_0=-n\mathrm{e}\), with \(n\in {\mathbb {N}}\backslash \{0\}\), the inclusion in (11) is true, because for every \(x\in \mathrm{B}\left( x_0, \epsilon \right) \cap {\mathscr {R}}\), with \(x_0\le _{{}_C}x\), it is \(f(x)\in \mathrm{B}\left( f(x_0), \epsilon \right) \), whereas for every \(x\in \mathrm{B}\left( x_0, \epsilon \right) \cap {\mathscr {R}}\), with \(x\le _{{}_C}x_0\), \(x\ne x_0\), it results in

$$\begin{aligned} f(x)= & {} -x+(n+1)\mathrm{e}\ge _{{}_C}-x_0+(n+1)\mathrm{e}\ge _{{}_C}-x_0+n\mathrm{e} \\= & {} f(x_0), \end{aligned}$$

so

$$\begin{aligned} f(x)\in f(x_0)+{\mathbb {R}}^2_+\subseteq \mathrm{B}\left( f(x_0), \epsilon \right) +{\mathbb {R}}^2_+. \end{aligned}$$

Let us show that \(-f\) is metrically \({\mathbb {R}}^2_+\)-increasing on \({\mathscr {R}}\). Fix an arbitrary \(x\in {\mathscr {R}}\) and \(r>0\) and set

$$\begin{aligned} u={\mathrm{e}\over \Vert \mathrm{e}\Vert } \qquad \hbox { and }\qquad z=x+ru\in \mathrm{B}\left( x, r\right) \cap {\mathscr {R}}. \end{aligned}$$

Taken \(a=2>1\), it is possible to prove that

$$\begin{aligned} -f(z)+ar{{\mathbb {B}}}\subseteq -f(x)+{\mathbb {R}}^2_++r{{\mathbb {B}}}. \end{aligned}$$
(12)

Indeed, since it is \(\Vert x+ru\Vert _\infty \le \Vert x\Vert _\infty \) for every \(x\in {\mathscr {R}}\), one has

$$\begin{aligned} \mathrm{e}\sum _{n=0}^{\infty }(n+1)\chi _{(n,n+1]}(\Vert x+ru\Vert _\infty ) \le _{{}_C}\mathrm{e}\sum _{n=0}^{\infty }(n+1)\chi _{(n,n+1]}(\Vert x\Vert _\infty ) \end{aligned}$$

and hence

$$\begin{aligned} \mathrm{e}\sum _{n=0}^{\infty }(n+1)\chi _{(n,n+1]}(\Vert x\Vert _\infty )\in \mathrm{e}\sum _{n=0}^{\infty }(n+1)\chi _{(n,n+1]}(\Vert x+ru\Vert _\infty )+{\mathbb {R}}^2_+, \end{aligned}$$

wherefrom it follows

$$\begin{aligned} -\mathrm{e}\sum _{n=0}^{\infty }(n+1)\chi _{(n,n+1]}(\Vert x+ru\Vert _\infty )\in -\mathrm{e}\sum _{n=0}^{\infty }(n+1)\chi _{(n,n+1]}(\Vert x\Vert _\infty )+{\mathbb {R}}^2_+. \end{aligned}$$

On the other hand, it is clear that for every \(r>0\) it is

$$\begin{aligned} r{\mathrm{e}\over \Vert \mathrm{e}\Vert }+ar{{\mathbb {B}}}=r\mathrm{B}\left( {\mathrm{e}\over \Vert \mathrm{e}\Vert }, a\right) \subseteq r{{\mathbb {B}}}+{\mathbb {R}}^2_+. \end{aligned}$$
(13)

Thus, in the light of the above inclusions, one finds

$$\begin{aligned} -f(z)+ar{{\mathbb {B}}}= & {} (x+ru)-\mathrm{e}\sum _{n=0}^{\infty }(n+1)\chi _{(n,n+1]}(\Vert x+ru\Vert _\infty )+ar{{\mathbb {B}}}\\\subseteq & {} x+r{\mathrm{e}\over \Vert \mathrm{e}\Vert }-\mathrm{e}\sum _{n=0}^{\infty }(n+1)\chi _{(n,n+1]}(\Vert x\Vert _\infty ) +{\mathbb {R}}^2_++ar{{\mathbb {B}}}\\\subseteq & {} -f(x)+\left( r{\mathrm{e}\over \Vert \mathrm{e}\Vert }+ar{{\mathbb {B}}}\right) +{\mathbb {R}}^2_+ \\\subseteq & {} -f(x)+r{{\mathbb {B}}}+{\mathbb {R}}^2_+, \end{aligned}$$

so inclusion (12) is satisfied. Moreover, one can see that \(a=2\) is the greatest constant for which inclusion (13) and hence inclusion (12) is true. Thus, it is \(\mathrm{inc}(-f;{\mathscr {R}})=2\).

Thus, since for \(x_0=(0,0)\) the set \([f({\mathscr {R}})-f(x_0)]\backslash {\mathbb {R}}^2_+=\varnothing \) is bounded, for this instance of problem \((\mathrm{VOP})\) Theorem 1 can be applied. It must be remarked that the existence of an ideal efficient solution is achieved in spite of the fact that \({\mathscr {R}}\) is not bounded, \(f({\mathscr {R}})\) is not closed and f is not continuous on \({\mathscr {R}}\).

To complete the analysis of the present example, observe that, as \(f({\mathscr {R}})\) takes the form in (10) and \((0,0)\in f({\mathscr {R}})\), one readily sees that

$$\begin{aligned} \nu (x)= & {} \mathrm{exc}(f({\mathscr {R}})-f(x),{\mathbb {R}}^2_+)= \mathrm{exc}(f({\mathscr {R}}),f(x)+{\mathbb {R}}^2_+) =\Vert f(x)\Vert \\= & {} \Vert x\Vert +\Vert (n+1)\mathrm{e}\Vert =\Vert x\Vert +\sqrt{2}(n+1), \quad \forall x\in {\mathscr {R}}:\ n<\Vert x\Vert _\infty \le n+1. \end{aligned}$$

On the other hand, clearly it is \(\mathrm{dist}\left( x,\mathrm{IE}\right) =\Vert x\Vert \). By taking into account that, for every \(x\in {\mathbb {R}}^2\), it is \(\Vert x\Vert \le \sqrt{2}\Vert x\Vert _\infty \), the inequality

$$\begin{aligned} \Vert x\Vert _\infty \le n+1 \end{aligned}$$

implies

$$\begin{aligned} \Vert x\Vert \le \sqrt{2}(n+1). \end{aligned}$$

Thus, one finds

$$\begin{aligned} \mathrm{dist}\left( x,\mathrm{IE}\right)= & {} \Vert x\Vert \le {\nu (x)\over \mathrm{inc}(-f;{\mathscr {R}})} ={\Vert x\Vert +\sqrt{2}(n+1)\over 2}\le {\sqrt{2}(n+1)+\sqrt{2}(n+1)\over 2} \\= & {} \sqrt{2}(n+1), \quad \forall x\in {\mathscr {R}}:\ n<\Vert x\Vert _\infty \le n+1, \end{aligned}$$

which agrees with the estimate provided in (9).
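
The error bound (9) for this example can also be verified numerically. The following sketch (names are ours) samples \(f({\mathscr {R}})\) along the ray and computes \(\nu \) via the formula \(\mathrm{dist}\left( y,z+{\mathbb {R}}^2_+\right) =\Vert \max (z-y,{\mathbf {0}})\Vert \); note that, by the computation above, the bound holds with equality whenever \(\Vert x\Vert _\infty \) is a positive integer.

```python
import numpy as np

def f_on_ray(t):
    """f at the feasible point x = (-t, -t), t >= 0: here ||x||_inf = t and
    f(x) = (t + n + 1) e for n < t <= n + 1, with f(0) = (0, 0)."""
    return np.zeros(2) if t == 0 else (t + np.ceil(t)) * np.ones(2)

def dist_to_shifted_cone(y, z):
    """dist(y, z + R^2_+) = ||max(z - y, 0)||."""
    return np.linalg.norm(np.maximum(z - y, 0.0))

f_R = [f_on_ray(s) for s in np.linspace(0.0, 6.0, 601)]   # sample of f(R)
inc = 2.0                                                 # inc(-f; R)

for t in [0.3, 1.0, 2.7, 5.5]:
    x = np.array([-t, -t])
    nu = max(dist_to_shifted_cone(y, f_on_ray(t)) for y in f_R)
    print(f"t = {t}:  dist(x, IE) = {np.linalg.norm(x):.4f}"
          f"  <=  nu(x)/inc = {nu / inc:.4f}")
```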

Several existence results for ideal efficient solutions can be found in the literature dedicated to vector optimization. Some of them demand compactness of the feasible region (see, for instance, [20]). Other results drop the boundedness of the feasible region, while relying essentially on convexity properties of the objective mapping (see [11, 13]). Theorem 1 avoids any form of convexity (remember that X is a metric space), whereas the solution existence relies on metric completeness, through the property of metric C-increase. Such an approach makes it possible to complement the qualitative part of the statement (existence) with a quantitative part (an error bound for the distance from the solution set). To the author’s knowledge, results of this type are few in nonconvex vector optimization. It is worth mentioning that a promising characterization of ideal efficient solutions has been obtained in [12, Corollary 3.5(c)], which is expressed as a sort of Fermat rule involving lower radial epiderivatives of the vector objective function (see [12, Remark 3.6] for relevant discussions), leading as a matter of fact to a set-valued inclusion. Nevertheless, the author found no evidence of subsequent developments exploiting such a condition to achieve quantitative existence results.

3 Parameterized set-valued inclusions with moving feasible region

This section deals with stability properties of the solution mapping \({{\mathscr {S}}}:P\rightrightarrows {\mathbb {X}}\) associated to a parameterized problem \((\mathrm{PSV})\). More precisely, a sufficient condition for \({{\mathscr {S}}}\) to be Lipschitz l.s.c. at a point of its graph is established. Recall that, according to [17], a set-valued mapping \(\varPhi :P\rightrightarrows X\) between metric spaces is said to be Lipschitz l.s.c. at \(({\bar{p}},{\bar{x}})\in \mathrm{graph}\,\varPhi \) if there exist positive \(\delta \) and \(\ell \) such that

$$\begin{aligned} \varPhi (p)\cap \mathrm{B}\left( {\bar{x}}, \ell d({\bar{p}},p)\right) \ne \varnothing , \quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta \right) . \end{aligned}$$
(14)

The value

$$\begin{aligned} \mathrm{Liplsc}\,\varPhi ({\bar{p}},{\bar{x}})=\inf \{\ell>0:\ \exists \delta >0 \hbox { for which }(14)\hbox { holds}\} \end{aligned}$$
(15)

is called modulus of Lipschitz lower semicontinuity of \(\varPhi \) at \(({\bar{p}},{\bar{x}})\).

Discussions about this property and its relationships with other quantitative semicontinuity properties for set-valued mappings can be found, for instance, in [17, 31]. For the purpose of the present analysis, it is relevant to observe that the requirement in (14) entails local solvability for problems \((\mathrm{PSV})\) and nearness to the reference value \({\bar{x}}\) of at least some among the solutions to the perturbed problems. Moreover, the condition postulated in (14) contains a quantitative aspect, in prescribing a nearness which must be proportional to the parameter variation. The rate is measured by the modulus of Lipschitz lower semicontinuity. Historically, this quantitative aspect motivated the use of the prefix ‘Lipschitz’ for qualifying such kinds of stability behaviour in the variational analysis literature, to distinguish them from mere topological properties (see [9, 17, 19, 22, 25] and commentaries therein).

Another property of this kind, which will be employed in the sequel, is Lipschitz upper semicontinuity: a set-valued mapping \(\varPhi :P\rightrightarrows X\) between metric spaces is said to be Lipschitz u.s.c. at \({\bar{p}}\in \mathrm{dom}\, \varPhi \) if there exist positive \(\delta \) and \(\ell \) such that

$$\begin{aligned} \mathrm{exc}(\varPhi (p),\varPhi ({\bar{p}}))\le \ell d(p,{\bar{p}}), \quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta \right) . \end{aligned}$$
(16)

The value

$$\begin{aligned} \mathrm{Lipusc}\,\varPhi ({\bar{p}})=\inf \{\ell>0:\ \exists \delta >0 \hbox { for which }(16)\hbox { holds}\} \end{aligned}$$

is called modulus of Lipschitz upper semicontinuity of \(\varPhi \) at \({\bar{p}}\). It is possible to see at once that, whenever \(\varPhi \) happens to be single-valued in a neighbourhood of \({\bar{p}}\), Lipschitz lower semicontinuity at \(({\bar{p}},\varPhi ({\bar{p}}))\) and Lipschitz upper semicontinuity at \({\bar{p}}\) reduce to the same property, as conditions (14) and (16) in this case share the form

$$\begin{aligned} d(\varPhi (p),\varPhi ({\bar{p}}))\le \ell d(p,{\bar{p}}), \quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta \right) . \end{aligned}$$
(17)

If a single-valued mapping \(\varPhi :P\longrightarrow X\) satisfies inequality (17) for some positive \(\delta \) and \(\ell \) it is called calm at \({\bar{p}}\). In such an event, the value

$$\begin{aligned} \mathrm{clm}\, \varPhi ({\bar{p}})=\mathrm{Liplsc}\,\varPhi ({\bar{p}},\varPhi ({\bar{p}})) =\mathrm{Lipusc}\,\varPhi ({\bar{p}}) \end{aligned}$$

will be called the modulus of calmness of \(\varPhi \) at \({\bar{p}}\). When, in particular, \(\varPhi \) is a single-real-valued function, the above notion of calmness can be split into its versions from above and from below. So, \(\varPhi :P\longrightarrow {\mathbb {R}}\cup \{\pm \infty \}\) is said to be calm from above at \({\bar{p}}\in \mathrm{dom}\, \varPhi \) if there exist positive \(\delta \) and \(\ell \) such that

$$\begin{aligned} \varPhi (p)-\varPhi ({\bar{p}})\le \ell d(p,{\bar{p}}),\quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta \right) , \end{aligned}$$
(18)

with

$$\begin{aligned} \overline{\mathrm{clm}}\, \varPhi ({\bar{p}})=\inf \{\ell>0:\ \exists \delta >0 \hbox { for which }(18)\hbox { holds}\} \end{aligned}$$

being the modulus of calmness from above of \(\varPhi \) at \({\bar{p}}\).

The following standing assumption will be supposed to hold throughout the current section:

\(({\mathscr {A}})\):

both the set-valued mappings F and \({\mathscr {R}}\) take nonempty and closed values (in particular, \(\mathrm{dom}\, F=P\times {\mathbb {X}}\) and \(\mathrm{dom}\, {\mathscr {R}}=P\)).

In order to develop, through variational methods, a quantitative stability analysis of the solution mapping associated to \((\mathrm{PSV})\) it is convenient to introduce the function \(\nu _1:P\times {\mathbb {X}}\longrightarrow [0,+\infty ]\), defined as

$$\begin{aligned} \nu _1(p,x)=\mathrm{exc}(F(p,x),C)+\mathrm{dist}\left( x,{\mathscr {R}}(p)\right) , \end{aligned}$$
(19)

which is a kind of merit function providing a functional characterization of solutions to \((\mathrm{PSV})\). In fact, one sees that, for every \(p\in P\), it holds

$$\begin{aligned} {{\mathscr {S}}}(p)=[\nu _1(p,\cdot )=0]=\nu _1(p,\cdot )^{-1}(0). \end{aligned}$$

Together with the function \(\nu _1\), in what follows it will be convenient to deal also with the function \(\nu _F:P\times {\mathbb {X}}\longrightarrow [0,+\infty ]\) associated to a set-valued mapping \(F:P\times {\mathbb {X}}\rightrightarrows {\mathbb {Y}}\), defined by

$$\begin{aligned} \nu _F(p,x)=\mathrm{exc}(F(p,x),C). \end{aligned}$$

Notice that, unlike function \(\nu _1\), function \(\nu _F\) involves the set-valued mapping F only.
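
For a toy parameterized inclusion the merit function (19) is easy to evaluate. Assume \(P={\mathbb {X}}={\mathbb {R}}\), \({\mathbb {Y}}={\mathbb {R}}^2\), \(C={\mathbb {R}}^2_+\), \(F(p,x)=\{(x-p+t,x-p+t):\ t\in [0,1]\}\) and \({\mathscr {R}}(p)=[p-2,p+2]\), so that \({{\mathscr {S}}}(p)=[p,p+2]\); the following sketch (all names are ours) evaluates \(\nu _1\) on a finite sample of F(p,x):

```python
import numpy as np

def F_sample(p, x, m=50):
    """Finite sample of F(p,x) = {(x-p+t, x-p+t) : t in [0,1]}."""
    return np.array([[x - p + t, x - p + t] for t in np.linspace(0.0, 1.0, m)])

def exc_over_cone(A_sample):
    """exc(A, R^2_+) = sup_{a in A} ||max(-a, 0)||, over a finite sample."""
    return max(np.linalg.norm(np.maximum(-a, 0.0)) for a in A_sample)

def nu_1(p, x):
    """Merit function (19) with R(p) = [p-2, p+2]:
    nu_1(p,x) = exc(F(p,x), C) + dist(x, R(p))."""
    return exc_over_cone(F_sample(p, x)) + abs(x - np.clip(x, p - 2, p + 2))

# S(p) = [p, p+2]: nu_1(p, .) vanishes exactly on the solution set.
print(nu_1(0.0, 1.0))    # 0.0 -> 1.0 in S(0)
print(nu_1(0.0, -0.5))   # > 0 -> feasible, but F(0,-0.5) not a subset of C
print(nu_1(0.0, 3.0))    # > 0 -> x outside R(0)
```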

Remark 1

The author is aware of the fact that other functions could be considered for the same purpose in place of \(\nu _1\), e.g. the function \(\nu _\infty \) given by

$$\begin{aligned} \nu _\infty (p,x)=\max \{\mathrm{exc}(F(p,x),C), \mathrm{dist}\left( x,{\mathscr {R}}(p)\right) \}. \end{aligned}$$

A different choice of merit function does not affect the essence of the approach and the consequent achievements, resulting only in a change of the estimates for the involved moduli.

The variation rate of merit functions such as \(\nu _1\) and \(\nu _F\) can be measured in a metric space setting by means of the notion of slope. Recall that, after [8], the (strong) slope of a function \(\varphi :X\longrightarrow {\mathbb {R}}\cup \{\pm \infty \}\) at \(x_0\in \mathrm{dom}\, \varphi \) is the following value:

$$\begin{aligned} |\nabla \varphi |(x_0)=\left\{ \begin{array}{ll} 0, &{} \hbox { if }x_0\hbox { is a local minimizer of }\varphi , \\ \displaystyle \limsup _{x\rightarrow x_0}{\varphi (x_0)-\varphi (x)\over d(x,x_0)}, &{} \hbox { otherwise.} \end{array}\right. \end{aligned}$$
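
The strong slope can be approximated by brute force on a grid, replacing the limsup with a maximum over a small punctured ball; the sketch below (a rough proxy only, names are ours) recovers the expected values 0 and 1 for \(\varphi (x)=|x|\) at a minimizer and away from it:

```python
import numpy as np

def slope_proxy(phi, x0, r=1e-4, m=2001):
    """Crude grid proxy for the strong slope |nabla phi|(x0):
    max(0, sup over 0 < |x - x0| <= r of (phi(x0) - phi(x)) / |x - x0|)."""
    xs = x0 + np.linspace(-r, r, m)
    xs = xs[np.abs(xs - x0) > 1e-12]       # punctured neighbourhood of x0
    return max(0.0, np.max((phi(x0) - phi(xs)) / np.abs(xs - x0)))

print(slope_proxy(np.abs, 0.0))   # ~0.0: 0 is a (global) minimizer of |x|
print(slope_proxy(np.abs, 0.5))   # ~1.0: steepest descent rate of |x| at 0.5
```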

A behaviour of the above notion of slope in the presence of additive perturbations is pointed out in the next remark, as it will be employed in the sequel.

Remark 2

(Calm perturbation of the slope) Let \(\varphi :X\longrightarrow {\mathbb {R}}\cup \{\pm \infty \}\), let \(\psi :X\longrightarrow {\mathbb {R}}\cup \{\pm \infty \}\), and let \(x_0\in \mathrm{dom}\, \varphi \cap \mathrm{dom}\, \psi \). If \(x_0\) is not a local minimizer of \(\varphi \), \(\psi \) is calm at \(x_0\) and \(c_\psi >\mathrm{clm}\, \psi (x_0)\), then

$$\begin{aligned} |\nabla (\varphi +\psi )|(x_0)\ge \max \{ |\nabla \varphi |(x_0)-c_\psi ,\, 0\}. \end{aligned}$$

Indeed, according to the definition of strong slope, one has

$$\begin{aligned} |\nabla (\varphi +\psi )|(x_0)\ge \max \left\{ \limsup _{x\rightarrow x_0}{(\varphi +\psi )(x_0)-(\varphi +\psi )(x) \over d(x,x_0)},\, 0 \right\} \end{aligned}$$

and, according to inequality (17), one finds

$$\begin{aligned} \limsup _{x\rightarrow x_0}{(\varphi +\psi )(x_0)-(\varphi +\psi )(x) \over d(x,x_0)}\ge & {} \limsup _{x\rightarrow x_0}{\varphi (x_0)-\varphi (x) \over d(x,x_0)}+\liminf _{x\rightarrow x_0}{\psi (x_0)-\psi (x)\over d(x,x_0)} \\\ge & {} |\nabla \varphi |(x_0)-c_\psi . \end{aligned}$$

In the statement of the next result, the following partial version of the strict outer slope (see, for instance, [10]) will be employed for a function \(\varphi :P\times {\mathbb {X}}\longrightarrow {\mathbb {R}}\cup \{\pm \infty \}\) at a point \((p_0,x_0)\):

$$\begin{aligned} \overline{|\nabla _x \varphi |}{}^>(p_0,x_0)= & {} \lim _{\epsilon \rightarrow 0^+}\inf \{|\nabla \varphi (p,\cdot )|(x):\ (p,x)\in \mathrm{B}\left( p_0, \epsilon \right) \times \mathrm{B}\left( x_0, \epsilon \right) , \nonumber \\&\varphi (p_0,x_0)<\varphi (p,x)<\varphi (p_0,x_0)+\epsilon \} \nonumber \\= & {} \liminf _{\begin{array}{c} (p,x)\rightarrow (p_0,x_0)\\ \varphi (p,x)\downarrow \varphi (p_0,x_0) \end{array}} |\nabla \varphi (p,\cdot )|(x). \end{aligned}$$
(20)

Proposition 1

(Lipschitz lower semicontinuity of \({{\mathscr {S}}}\)) With reference to \((\mathrm{PSV}{p})\), let \({\bar{p}}\in P\) and \({\bar{x}} \in {{\mathscr {S}}}({\bar{p}})\) be given. Suppose that:

  1. (i)

    there exists \(\delta >0\) such that each mapping \(F(p,\cdot ): {\mathbb {X}}\rightrightarrows {\mathbb {Y}}\) is l.s.c. on \({\mathbb {X}}\), for every \(p\in \mathrm{B}\left( {\bar{p}}, \delta \right) \);

  2. (ii)

    \({\mathscr {R}}:P\rightrightarrows {\mathbb {X}}\) is Lipschitz l.s.c. at \(({\bar{p}},{\bar{x}})\) and \(F(\cdot ,{\bar{x}}):P\rightrightarrows {\mathbb {Y}}\) is Lipschitz u.s.c. at \({\bar{p}}\);

  3. (iii)

    it holds \(\overline{|\nabla _x \nu _F|}{}^>({\bar{p}},{\bar{x}})>1\).

Then \({{\mathscr {S}}}\) is Lipschitz l.s.c. at \(({\bar{p}},{\bar{x}})\) and the following estimate holds

$$\begin{aligned} \mathrm{Liplsc}\,{{\mathscr {S}}}({\bar{p}},{\bar{x}})\le {\mathrm{Lipusc}\,F(\cdot ,{\bar{x}})({\bar{p}})+ \mathrm{Liplsc}\,{\mathscr {R}}({\bar{p}},{\bar{x}})\over \overline{|\nabla _x \nu _F|}{}^>({\bar{p}},{\bar{x}})-1}. \end{aligned}$$

Proof

Following the same technique as in [31, Theorem 3.1], let us start by showing that, under the current assumptions, the function \(\nu _1:P\times {\mathbb {X}}\longrightarrow [0,+\infty ]\) defined by (19) fulfils the following properties:

\((\wp _1)\):

\(p\mapsto \nu _1(p,{\bar{x}})\) is calm from above at \({\bar{p}}\) and the following estimate holds

$$\begin{aligned} \overline{\mathrm{clm}}\,\nu _1(\cdot ,{\bar{x}})\le \mathrm{Lipusc}\,F(\cdot ,{\bar{x}})({\bar{p}})+ \mathrm{Liplsc}\,{\mathscr {R}}({\bar{p}},{\bar{x}}); \end{aligned}$$
\((\wp _2)\):

\(x\mapsto \nu _1(p,x)\) is l.s.c. on \({\mathbb {X}}\), for every \(p\in \mathrm{B}\left( {\bar{p}}, \delta \right) \), for some \(\delta >0\);

\((\wp _3)\):

it holds \(\overline{|\nabla _x \nu _1|}{}^>({\bar{p}},{\bar{x}})>0\).

As for \((\wp _1)\), by [31,  Lemma 2.4(ii)], the function \(p\mapsto \mathrm{exc}(F(p,{\bar{x}}),C)\) is calm from above at \({\bar{p}}\) because \(F(\cdot ,{\bar{x}})\) is Lipschitz u.s.c. at \({\bar{p}}\), with the aforementioned estimate. This means that, for any \(\ell _1>\mathrm{Lipusc}\,F(\cdot ,{\bar{x}})({\bar{p}})\), there exists \(\delta _1>0\) such that

$$\begin{aligned} \mathrm{exc}(F(p,{\bar{x}}),C)\le \ell _1d(p,{\bar{p}}),\quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta _1\right) . \end{aligned}$$

On the other hand, by the Lipschitz lower semicontinuity of \({\mathscr {R}}\) at \(({\bar{p}},{\bar{x}})\), one can say that for any \(\ell _2>\mathrm{Liplsc}\,{\mathscr {R}}({\bar{p}},{\bar{x}})\) there exists \(\delta _2>0\) such that

$$\begin{aligned} {\mathscr {R}}(p)\cap \mathrm{B}\left( {\bar{x}}, \ell _2d(p,{\bar{p}})\right) \ne \varnothing , \quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta _2\right) , \end{aligned}$$

so that \(\mathrm{dist}\left( {\bar{x}},{\mathscr {R}}(p)\right) \le \ell _2d(p,{\bar{p}})\). Thus, by setting \(\delta _0=\min \{\delta _1,\, \delta _2\}\), one obtains

$$\begin{aligned} \nu _1(p,{\bar{x}})-\nu _1({\bar{p}},{\bar{x}})\le & {} \mathrm{exc}(F(p,{\bar{x}}),C)+ \mathrm{dist}\left( {\bar{x}}, {\mathscr {R}}(p)\right) \\\le & {} (\ell _1+\ell _2)d(p,{\bar{p}}),\quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta _0\right) . \end{aligned}$$

The last inequality says that the function \(\nu _1(\cdot ,{\bar{x}})\) is calm from above at \({\bar{p}}\) and, by arbitrariness of \(\ell _1\) and \(\ell _2\), the estimate in \((\wp _1)\) holds true.

As for \((\wp _2)\), remember that by virtue of the assumption \(({\mathscr {A}})\) it must be \({\mathscr {R}}(p)\ne \varnothing \), so that, for every \(p\in P\), each function \(x\mapsto \mathrm{dist}\left( x,{\mathscr {R}}(p)\right) \) is Lipschitz continuous on \({\mathbb {X}}\). Besides, by taking \(\delta \) as in hypothesis (i), for every fixed \(p\in \mathrm{B}\left( {\bar{p}}, \delta \right) \), the function \(x\mapsto \mathrm{exc}(F(p,x),C)\) is l.s.c. on \({\mathbb {X}}\), according to [31,  Lemma 2.4(i)]. Thus the function \(x\mapsto \nu _1(p,x)\) turns out to be l.s.c. on \({\mathbb {X}}\) as a sum of two l.s.c. functions.

As for \((\wp _3)\), according to hypothesis (iii), having fixed \(\sigma \) in such a way that \(1<\sigma <\overline{|\nabla _x \nu _F|}{}^>({\bar{p}},{\bar{x}})\), there exists \(\delta _\sigma >0\) such that

$$\begin{aligned} |\nabla \nu _F(p,\cdot )|(x)>\sigma ,\quad \forall (p,x)\in \mathrm{B}\left( {\bar{p}}, \delta _\sigma \right) \times \mathrm{B}\left( {\bar{x}}, \delta _\sigma \right) :\ 0<\nu _F(p,x)<\delta _\sigma . \end{aligned}$$
(21)

Fix an arbitrary \((p_0,x_0)\in \mathrm{B}\left( {\bar{p}}, \delta _\sigma \right) \times \mathrm{B}\left( {\bar{x}}, \delta _\sigma \right) \), with \(0<\nu _F(p_0,x_0)<\delta _\sigma \). The inequality (21) entails that \(x_0\) cannot be a local minimizer of the function \(\nu _F(p_0,\cdot )\). Thus, since the function \(x\mapsto \mathrm{dist}\left( x,{\mathscr {R}}(p_0)\right) \) is Lipschitz continuous on \({\mathbb {X}}\) with constant 1, and hence calm around \(x_0\), it is possible to apply what has been observed in Remark 2, with \(\varphi = \nu _F(p_0,\cdot )\), \(\psi =\mathrm{dist}\left( \cdot ,{\mathscr {R}}(p_0)\right) \) and \(c_\psi =1\). Consequently, it holds

$$\begin{aligned} |\nabla [\nu _F(p_0,\cdot )+\mathrm{dist}\left( \cdot ,{\mathscr {R}}(p_0)\right) ]|(x_0) \ge |\nabla \nu _F(p_0,\cdot )|(x_0)-1\ge \sigma -1>0. \end{aligned}$$

From the last inequality the positivity of \(\overline{|\nabla _x \nu _1|}{}^>({\bar{p}},{\bar{x}})\) readily follows.

Now, let us exploit a variational argument to prove the thesis. By virtue of \((\wp _3)\), there exists \(\sigma _0\in (0,1)\) such that

$$\begin{aligned} \overline{|\nabla _x \nu _1|}{}^>({\bar{p}},{\bar{x}})>\sigma _0. \end{aligned}$$

By recalling the definition in (20), this means that there exists \(\eta >0\) such that for every \(\epsilon \in (0,\eta )\) it holds

$$\begin{aligned} |\nabla \nu _1(p,\cdot )|(x)>\sigma _0,\quad \forall (p,x)\in \mathrm{B}\left( {\bar{p}}, \epsilon \right) \times \mathrm{B}\left( {\bar{x}}, \epsilon \right) :\ 0<\nu _1(p,x)<\epsilon . \end{aligned}$$
(22)

Clearly, \(\eta \) can be assumed to be smaller than the value of \(\delta \) appearing in \((\wp _2)\). By virtue of property \((\wp _1)\), having taken any \(\ell >\mathrm{Lipusc}\,F(\cdot ,{\bar{x}})({\bar{p}})+ \mathrm{Liplsc}\,{\mathscr {R}}({\bar{p}},{\bar{x}})\), there exists \(\delta _\ell >0\) such that

$$\begin{aligned} \nu _1(p,{\bar{x}})\le \nu _1({\bar{p}},{\bar{x}})+\ell d(p,{\bar{p}})=\ell d(p,{\bar{p}}), \quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta _\ell \right) . \end{aligned}$$
(23)

Without any loss of generality, one can assume that the inequality in (23) holds with

$$\begin{aligned} 0<\delta _\ell <{\sigma _0\eta \over 2(\ell +1)}. \end{aligned}$$
(24)

Notice that, if this is true, one has in particular \(\delta _\ell <\eta /2\).

Let us consider the function \(\nu _1(p,\cdot ):{\mathbb {X}}\longrightarrow [0,+\infty ]\), where p is arbitrarily fixed in \(\mathrm{B}\left( {\bar{p}}, \delta _\ell \right) \backslash \{{\bar{p}}\}\). As \(\delta _\ell<\eta <\delta \), by virtue of property \((\wp _2)\) the function \(\nu _1(p,\cdot )\) is l.s.c. on \({\mathbb {X}}\). Moreover, \(\nu _1(p,\cdot )\) is obviously bounded from below and, on account of inequality (23), it is \(\nu _1(p,{\bar{x}})<+\infty \) and

$$\begin{aligned} \nu _1(p,{\bar{x}})\le \inf _{x\in {\mathbb {X}}}\nu _1(p,x)+\ell d(p,{\bar{p}}). \end{aligned}$$

These facts enable one to invoke the Ekeland variational principle. According to it, corresponding to \(\lambda =\ell d(p,{\bar{p}})/\sigma _0\), there exists \(x_\lambda \in {\mathbb {X}}\) such that

$$\begin{aligned} \nu _1(p,x_\lambda )\le & {} \nu _1(p,{\bar{x}}), \end{aligned}$$
(25)
$$\begin{aligned} d(x_\lambda ,{\bar{x}})\le & {} \lambda , \end{aligned}$$
(26)
$$\begin{aligned} \nu _1(p,x_\lambda )< & {} \nu _1(p,x)+\sigma _0d(x,x_\lambda ), \quad \forall x\in {\mathbb {X}}\backslash \{x_\lambda \}. \end{aligned}$$
(27)

In the present context, the validity of the relations (25), (26) and (27) implies that \(\nu _1(p,x_\lambda )=0\). Indeed, observe that, according to the inequality (27), it is

$$\begin{aligned} {\nu _1(p,x_\lambda )-\nu _1(p,x)\over d(x,x_\lambda )}<\sigma _0, \quad \forall x\in {\mathbb {X}}\backslash \{x_\lambda \}, \end{aligned}$$

and hence

$$\begin{aligned} |\nabla \nu _1(p,\cdot )|(x_\lambda )=\lim _{r\rightarrow 0^+} \sup _{x\in \mathrm{B}\left( x_\lambda , r\right) \backslash \{x_\lambda \}} {\nu _1(p,x_\lambda )-\nu _1(p,x)\over d(x,x_\lambda )}\le \sigma _0. \end{aligned}$$
(28)

On the other hand, by recalling that \(d(p,{\bar{p}})\le \delta _\ell <\eta /2\), on account of inequalities (24) and (26) one finds

$$\begin{aligned} d(x_\lambda ,{\bar{x}})\le {\ell d(p,{\bar{p}})\over \sigma _0}\le {\ell \over \sigma _0}\delta _\ell <{\eta \over 2}. \end{aligned}$$

Besides, by combining inequalities (23), (24) and (25), one obtains

$$\begin{aligned} \nu _1(p,x_\lambda )\le \ell \delta _\ell <{\eta \over 2}. \end{aligned}$$

Thus, if it were \(\nu _1(p,x_\lambda )>0\), in the light of inequality (28) one would find inequality (22) contradicted for \(\epsilon =\eta /2\).

The fact that \(\nu _1(p,x_\lambda )=0\) means that

$$\begin{aligned} \mathrm{exc}(F(p,x_\lambda ),C)=0 \qquad \hbox { and }\qquad \mathrm{dist}\left( x_\lambda ,{\mathscr {R}}(p)\right) =0, \end{aligned}$$

so, as \({\mathscr {R}}(p)\) and C are closed sets,

$$\begin{aligned} F(p,x_\lambda )\subseteq C \qquad \hbox { and }\qquad x_\lambda \in {\mathscr {R}}(p), \end{aligned}$$

namely \(x_\lambda \in {{\mathscr {S}}}(p)\). Since it is \(d(x_\lambda ,{\bar{x}})\le \ell d(p,{\bar{p}})/\sigma _0\), as a consequence one has

$$\begin{aligned} {{\mathscr {S}}}(p)\cap \mathrm{B}\left( {\bar{x}}, {\ell d(p,{\bar{p}})\over \sigma _0}\right) \ne \varnothing . \end{aligned}$$

By arbitrariness of \(p\in \mathrm{B}\left( {\bar{p}}, \delta _\ell \right) \backslash \{{\bar{p}}\}\), this allows one to say that \({{\mathscr {S}}}\) is Lipschitz l.s.c. at \(({\bar{p}},{\bar{x}})\) and

$$\begin{aligned} \mathrm{Liplsc}\,{{\mathscr {S}}}({\bar{p}},{\bar{x}})\le {\ell \over \sigma _0}. \end{aligned}$$

As the last inequality remains true for every \(\ell >\mathrm{Lipusc}\,F(\cdot ,{\bar{x}})({\bar{p}})+ \mathrm{Liplsc}\,{\mathscr {R}}({\bar{p}},{\bar{x}})\) and for every \(\sigma _0< \overline{|\nabla _x \nu _1|}{}^>({\bar{p}},{\bar{x}})\), then also the estimate in the thesis must hold true. This completes the proof. \(\square \)

From the proof of Proposition 1 it should be evident that such a result embeds Theorem 3.1 in [31], which provides a sufficient condition for Lipschitz lower semicontinuity in the special case with \({\mathscr {R}}\) given by \({\mathscr {R}}(p)={\mathbb {X}}\), for every \(p\in P\). Notice that, in such an event, \(\mathrm{Liplsc}\,{\mathscr {R}}({\bar{p}},{\bar{x}})=0\) while, for every \(p\in P\), the function \(x\mapsto \mathrm{dist}\left( x,{\mathscr {R}}(p)\right) \) vanishes. The condition in hypothesis (iii) can therefore be replaced with the mere positivity of \(\overline{|\nabla _x \nu _F|}{}^>({\bar{p}},{\bar{x}})\), as \(\nu _1\) reduces to \(\nu _F\).

4 Stability conditions for ideal efficiency

In the present section, with the aim of deriving a stability condition for ideal efficiency, the general condition for the Lipschitz lower semicontinuity of the solution mapping associated to a parameterized set-valued inclusion presented in Sect. 3 will be adapted to the specific context of vector optimization problems. In such a setting, the set-valued mapping F appearing in problems \((\mathrm{PSV})\) takes the special form introduced in (4). While in Proposition 1 several assumptions are directly made on F, inasmuch as in the context of \((\mathrm{PSV})\) such a mapping appears among the problem data as an independent one, the definition of \(F_{{\mathscr {R}},f}\) involves more elementary data, such as \({\mathscr {R}}\) and f. This fact requires further work aimed at singling out reasonable conditions which can guarantee that the aforementioned assumptions are satisfied.

Remark 3

Under conditions making each set-valued mapping \(F_{{\mathscr {R}},f}(p,\cdot ):{\mathbb {X}}\rightrightarrows {\mathbb {Y}}\) l.s.c. on \({\mathbb {X}}\), for \(p\in P\), the mapping \(\mathrm{IE}:P\rightrightarrows {\mathbb {X}}\) turns out to be closed-valued (possibly with empty values).

Throughout the current section, with reference to problems \((\mathrm{VOP}p\,)\) the following assumption will be supposed to hold:

\((\tilde{{\mathscr {A}}})\):

\(\mathrm{dom}\, {\mathscr {R}}=P\).

Lemma 2

(Lower semicontinuity of \(F_{{\mathscr {R}},f}\)) Let \(p\in \ P\) and let the mapping \(f(p,\cdot ):{\mathbb {X}}\longrightarrow {\mathbb {Y}}\) be continuous on \({\mathbb {X}}\). Under assumption \((\tilde{{\mathscr {A}}})\) the set-valued mapping \(F_{{\mathscr {R}},f}(p,\cdot ):{\mathbb {X}}\rightrightarrows {\mathbb {Y}}\), defined as in (4), is l.s.c. on \({\mathbb {X}}\).

Proof

Observe that, as a consequence of assumption \((\tilde{{\mathscr {A}}})\), it is \(\mathrm{dom}\, F_{{\mathscr {R}},f}=P\times {\mathbb {X}}\). Fix \(x_0\in {\mathbb {X}}\) and take an arbitrary open subset O of \({\mathbb {Y}}\), with \(F_{{\mathscr {R}},f}(p,x_0)\cap O\ne \varnothing \). According to the definition of \(F_{{\mathscr {R}},f}\), this means

$$\begin{aligned}{}[f(p,{\mathscr {R}}(p))-f(p,x_0)]\cap O\ne \varnothing , \end{aligned}$$

so there exists \(y_0\in f(p,{\mathscr {R}}(p))\) such that \(y_0-f(p,x_0)\in O\). By openness of O, there exists \(\epsilon >0\) such that \(\mathrm{B}\left( y_0-f(p,x_0), \epsilon \right) \subseteq O\). Thus, since the function \(f(p,\cdot )\) is continuous at \(x_0\), there exists \(\delta _\epsilon >0\) such that

$$\begin{aligned} f(p,x)\in \mathrm{B}\left( f(p,x_0), \epsilon \right) ,\quad \forall x\in \mathrm{B}\left( x_0, \delta _\epsilon \right) , \end{aligned}$$

wherefrom it follows

$$\begin{aligned} y_0-f(p,x)\in \mathrm{B}\left( y_0-f(p,x_0), \epsilon \right) ,\quad \forall x\in \mathrm{B}\left( x_0, \delta _\epsilon \right) . \end{aligned}$$

Consequently, one finds

$$\begin{aligned} y_0-f(p,x)\in F_{{\mathscr {R}},f}(p,x)\cap O\ne \varnothing , \quad \forall x\in \mathrm{B}\left( x_0, \delta _\epsilon \right) , \end{aligned}$$

which shows that \(F_{{\mathscr {R}},f}(p,\cdot )\) is l.s.c. at \(x_0\), thereby completing the proof. \(\square \)

Lemma 3

Let \(f:P\times {\mathbb {X}}\longrightarrow {\mathbb {Y}}\) be a mapping, let \({\mathscr {R}}: P\rightrightarrows {\mathbb {X}}\) be a set-valued mapping satisfying assumption \((\tilde{{\mathscr {A}}})\), and let \({\bar{p}}\in \ P\). Suppose that:

  1. (i)

    f is Lipschitz continuous with constant \(\ell _f\) on \(P\times {\mathbb {X}}\);

  2. (ii)

    \({\mathscr {R}}\) is Lipschitz u.s.c. at \({\bar{p}}\).

Then, the set-valued mapping \(G:P\rightrightarrows {\mathbb {Y}}\), defined by \(G(p)=f(p,{\mathscr {R}}(p))\), is Lipschitz u.s.c. at \({\bar{p}}\) and the following estimate holds

$$\begin{aligned} \mathrm{Lipusc}\,G({\bar{p}})\le \ell _f[1+\mathrm{Lipusc}\,{\mathscr {R}}({\bar{p}})]. \end{aligned}$$
(29)

Proof

By hypothesis (ii), for any fixed \(\ell _{\mathscr {R}}>\mathrm{Lipusc}\,{\mathscr {R}}({\bar{p}})\) there exists \(\delta >0\) such that

$$\begin{aligned} \mathrm{exc}({\mathscr {R}}(p),{\mathscr {R}}({\bar{p}}))\le \ell _{\mathscr {R}} d(p,{\bar{p}}), \quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta \right) . \end{aligned}$$
(30)

Take an arbitrary \(p\in \mathrm{B}\left( {\bar{p}}, \delta \right) \) and \(x\in {\mathscr {R}}(p)\). By virtue of the Lipschitz continuity of f, one obtains

$$\begin{aligned} \mathrm{dist}\left( f(p,x),G({\bar{p}})\right) &= \inf _{z\in {\mathscr {R}}({\bar{p}})} \Vert f(p,x)-f({\bar{p}},z)\Vert \le \inf _{z\in {\mathscr {R}}({\bar{p}})}\ell _f[d(p,{\bar{p}})+d(x,z)] \\ &= \ell _f\left[ d(p,{\bar{p}})+\inf _{z\in {\mathscr {R}}({\bar{p}})}d(x,z)\right] = \ell _f[d(p,{\bar{p}})+\mathrm{dist}\left( x,{\mathscr {R}}({\bar{p}})\right) ]. \end{aligned}$$

Since inequality (30) yields, for every \(x\in {\mathscr {R}}(p)\),

$$\begin{aligned} \mathrm{dist}\left( x,{\mathscr {R}}({\bar{p}})\right) \le \ell _{\mathscr {R}} d(p,{\bar{p}}), \quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta \right) , \end{aligned}$$

the last estimate gives

$$\begin{aligned} \mathrm{dist}\left( f(p,x),G({\bar{p}})\right) \le \ell _f[1+\ell _{\mathscr {R}}]d(p,{\bar{p}}), \quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta \right) . \end{aligned}$$

By the arbitrariness of \(x\in {\mathscr {R}}(p)\), what has been obtained implies

$$\begin{aligned} G(p)=f(p,{\mathscr {R}}(p))\subseteq \mathrm{B}\left( G({\bar{p}}), \ell _f[1+\ell _{\mathscr {R}}]d(p,{\bar{p}})\right) , \quad \forall p\in \mathrm{B}\left( {\bar{p}}, \delta \right) . \end{aligned}$$

This shows that G is Lipschitz u.s.c. at \({\bar{p}}\) with \(\mathrm{Lipusc}\,G({\bar{p}})\le \ell _f[1+\ell _{\mathscr {R}}]\). The arbitrariness of \(\ell _{\mathscr {R}}>\mathrm{Lipusc}\,{\mathscr {R}}({\bar{p}})\) enables one to achieve the estimate in (29). \(\square \)
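To make the mechanics of Lemma 3 concrete, the following minimal numerical sketch (our own illustration, on toy one-dimensional data that are not taken from the paper) approximates the excess \(\mathrm{exc}(G(p),G({\bar{p}}))\) on discretized sets and checks it against the bound (29); the specific choices of \({\mathscr {R}}\), f and the sampling grid are assumptions made purely for illustration.

```python
import numpy as np

# Toy data (our own illustration, not from the paper): P = X = Y = R,
# R(p) = [0, 1+|p|]  (Lipschitz u.s.c. at pbar = 0 with Lipusc R(0) <= 1),
# f(p, x) = 2x + p   (Lipschitz with ell_f = 2 for the sum metric on P x X).
ell_f, lip_R, pbar = 2.0, 1.0, 0.0

def R(p):                      # discretized feasible set R(p)
    return np.linspace(0.0, 1.0 + abs(p), 2001)

def G(p):                      # image set G(p) = f(p, R(p))
    return 2.0 * R(p) + p

def excess(A, B):              # excess exc(A, B) = sup_{a in A} dist(a, B)
    return max(np.min(np.abs(B - a)) for a in A)

for p in [0.3, 0.1, 0.02]:
    lhs = excess(G(p), G(pbar))
    rhs = ell_f * (1.0 + lip_R) * abs(p - pbar)   # bound (29)
    print(f"p={p:5.2f}: exc(G(p),G(0)) = {lhs:.4f} <= {rhs:.4f}")
```

For this data the sampled excess behaves like \(3|p|\), safely below the predicted bound \(4|p|\).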

The next lemma establishes a stability behaviour of the Lipschitz upper semicontinuity property under additive calm perturbations, which turns out to be useful in the present approach.

Lemma 4

Let \(G:P\rightrightarrows {\mathbb {Y}}\) be a set-valued mapping, let \(h:P\longrightarrow {\mathbb {Y}}\) be a given single-valued mapping and let \({\bar{p}}\in P\). If G is Lipschitz u.s.c. at \({\bar{p}}\) and h is calm at \({\bar{p}}\), then \(G+h\) is Lipschitz u.s.c. at \({\bar{p}}\) and the following estimate holds

$$\begin{aligned} \mathrm{Lipusc}\,(G+h)({\bar{p}})\le \mathrm{Lipusc}\,G({\bar{p}})+\mathrm{clm}\, h({\bar{p}}). \end{aligned}$$
(31)

Proof

It suffices to observe that, since any distance induced by a norm is invariant under translations, one has

$$\begin{aligned} \mathrm{exc}(G(p)+h(p),G({\bar{p}})+h({\bar{p}})) &= \sup _{y\in G(p)} \mathrm{dist}\left( y+h(p),G({\bar{p}})+h({\bar{p}})\right) \\ &= \sup _{y\in G(p)} \mathrm{dist}\left( y,G({\bar{p}})+h({\bar{p}})-h(p)\right) \\ &\le \mathrm{exc}(G(p),G({\bar{p}})) +\Vert h(p)-h({\bar{p}})\Vert . \end{aligned}$$

The estimate in (31) is a straightforward consequence of the above inequality and of the definitions of the moduli of Lipschitz upper semicontinuity and of calmness. \(\square \)
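The translation-invariance argument in the proof of Lemma 4 can also be checked numerically; the sketch below (reusing the same toy data as above, again an assumption made only for illustration) confirms the inequality between the two sides of the excess estimate, with \(h(p)=\sin p\) calm at 0.

```python
import numpy as np

# Same toy image sets as in the previous sketch; h(p) = sin(p) is calm at 0
# with clm h(0) <= 1.
def G(p):
    return 2.0 * np.linspace(0.0, 1.0 + abs(p), 2001) + p

def h(p):
    return np.sin(p)

def excess(A, B):              # excess exc(A, B) = sup_{a in A} dist(a, B)
    return max(np.min(np.abs(B - a)) for a in A)

for p in [0.3, 0.1, 0.02]:
    lhs = excess(G(p) + h(p), G(0.0) + h(0.0))
    rhs = excess(G(p), G(0.0)) + abs(h(p) - h(0.0))
    print(f"p={p:5.2f}: exc(G+h) = {lhs:.4f} <= exc(G) + |h(p)-h(0)| = {rhs:.4f}")
```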

Conditions ensuring that \(\overline{|\nabla _x \nu _F|}{}^>({\bar{p}},{\bar{x}})\) fits the requirement in hypothesis (iii) of Proposition 1 will be expressed in terms of generalized derivatives. Recall that, following [24], a mapping \(f:{\mathbb {X}}\longrightarrow {\mathbb {Y}}\) is said to be Bouligand differentiable at \(x_0\in {\mathbb {X}}\) if there exists a continuous p.h. mapping \(\mathrm{D}_Bf(x_0;\cdot ):{\mathbb {X}}\longrightarrow {\mathbb {Y}}\) such that

$$\begin{aligned} \lim _{x\rightarrow x_0}{f(x)-f(x_0)-\mathrm{D}_Bf(x_0;x-x_0) \over \Vert x-x_0\Vert }={\mathbf {0}}. \end{aligned}$$

In such an event, the mapping \(v\mapsto \mathrm{D}_Bf(x_0;v)\) is called the Bouligand derivative of f at \(x_0\). It is clear that this differentiability notion actually generalizes Fréchet differentiability: whenever f is Fréchet differentiable at \(x_0\), with Fréchet derivative \(\mathrm{D}f(x_0)\), f is also Bouligand differentiable at the same point, with \(\mathrm{D}_Bf(x_0;\cdot )=\mathrm{D}f(x_0)\).
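As a quick illustration (a standard textbook-style example of ours, not taken from the paper), the scalar function \(f(x)=|x|+x^2\) is Bouligand differentiable at \(x_0=0\) with \(\mathrm{D}_Bf(0;v)=|v|\), a continuous p.h. but nonlinear derivative; the sketch below checks that the remainder is \(o(|x|)\).

```python
# f(x) = |x| + x^2 is Bouligand differentiable at x0 = 0 with
# D_B f(0; v) = |v|: the remainder f(x) - f(0) - |x| = x^2 is o(|x|),
# while no *linear* map can play this role, so f is not Frechet
# differentiable at 0.
f = lambda x: abs(x) + x**2
DBf = lambda v: abs(v)                       # Bouligand derivative at 0

for x in [0.1, -0.1, 1e-3, -1e-3, 1e-6]:
    ratio = abs(f(x) - f(0) - DBf(x)) / abs(x)
    print(f"x={x:+.0e}: |remainder|/|x| = {ratio:.1e}")   # -> 0 as x -> 0
```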

Before stating the next remark, it is proper to recall that, after [15], a p.h. set-valued mapping \(H(x_0;\cdot ): {\mathbb {X}}\rightrightarrows {\mathbb {Y}}\) is said to be an outer prederivative of \(G:{\mathbb {X}}\rightrightarrows {\mathbb {Y}}\) at \(x_0\in {\mathbb {X}}\) if for every \(\epsilon >0\) there exists \(\delta >0\) such that

$$\begin{aligned} G(x)\subseteq G(x_0)+H(x_0;x-x_0)+\epsilon \Vert x-x_0\Vert {{\mathbb {B}}}, \quad \forall x\in \mathrm{B}\left( x_0, \delta \right) . \end{aligned}$$

For more details on this nonsmooth analysis tool the reader may refer to [15, 21].

Remark 4

Let \(f:P\times {\mathbb {X}}\longrightarrow {\mathbb {Y}}\) be a mapping, let p be fixed in P and \(x_0\in {\mathbb {X}}\). If the mapping \(f(p,\cdot )\) is Bouligand differentiable at \(x_0\) with Bouligand derivative \(\mathrm{D}_Bf(p,\cdot )(x_0;\cdot )\), then the set-valued mapping \(x\leadsto F_{{\mathscr {R}},f}(p,x)\) admits as an outer prederivative at \(x_0\) the mapping \(v\leadsto \{-\mathrm{D}_Bf(p,\cdot )(x_0;v)\}\). Indeed, for any fixed \(\epsilon >0\), the Bouligand differentiability of \(f(p,\cdot )\) at \(x_0\) ensures the existence of \(\delta _\epsilon >0\) such that

$$\begin{aligned} f(p,x)\in f(p,x_0)+\mathrm{D}_Bf(p,\cdot )(x_0;x-x_0)+ \epsilon \Vert x-x_0\Vert {{\mathbb {B}}},\quad \forall x\in \mathrm{B}\left( x_0, \delta _\epsilon \right) . \end{aligned}$$

This inclusion implies

$$\begin{aligned} F_{{\mathscr {R}},f}(p,x) &= f(p,{\mathscr {R}}(p))-f(p,x) \\ &\subseteq f(p,{\mathscr {R}}(p))-f(p,x_0)-\mathrm{D}_Bf(p,\cdot )(x_0;x-x_0)+ \epsilon \Vert x-x_0\Vert {{\mathbb {B}}}\\ &= F_{{\mathscr {R}},f}(p,x_0)-\mathrm{D}_Bf(p,\cdot )(x_0;x-x_0)+ \epsilon \Vert x-x_0\Vert {{\mathbb {B}}}, \quad \forall x\in \mathrm{B}\left( x_0, \delta _\epsilon \right) . \end{aligned}$$

The next technical lemma provides a lower estimate for the slope of the function \(\nu _{F_{{\mathscr {R}},f}}(p,\cdot ):{\mathbb {X}}\longrightarrow [0,+\infty ]\), defined by \(\nu _{F_{{\mathscr {R}},f}}(p,x)=\mathrm{exc}(F_{{\mathscr {R}},f}(p,x),C)\), in terms of ‘strict negativity’ (with respect to the partial ordering \(\le _{{}_C}\)) of the values taken by the first-order approximation of \(f(p,\cdot )\).

Lemma 5

With reference to a problem \((\mathrm{VOP}p\,)\), let p be fixed in P and let \(x_0\not \in \mathrm{IE}(p)\). Under assumption \((\tilde{{\mathscr {A}}})\), suppose that:

  1. (i)

    \(f(p,\cdot ):{\mathbb {X}}\longrightarrow {\mathbb {Y}}\) is continuous on \({\mathbb {X}}\);

  2. (ii)

    \(f(p,\cdot )\) is Bouligand differentiable at \(x_0\);

  3. (iii)

    there exist \(\sigma >1\) and \(u\in {{\mathbb {S}}}\) such that \(\mathrm{B}\left( \mathrm{D}_Bf(p,\cdot )(x_0;u), \sigma \right) \subseteq -C\).

Then, it holds

$$\begin{aligned} |\nabla \nu _{F_{{\mathscr {R}},f}}(p,\cdot )|(x_0)\ge \sigma . \end{aligned}$$
(32)

Proof

By virtue of hypothesis (i) and Lemma 2, the set-valued mapping \(F_{{\mathscr {R}},f}(p,\cdot )\) is l.s.c. on \({\mathbb {X}}\) and so, in particular, l.s.c. at \(x_0\). According to what has been noticed in Remark 4, owing to hypothesis (ii), \(F_{{\mathscr {R}},f}(p,\cdot )\) admits the set-valued mapping \(v\leadsto \{-\mathrm{D}_Bf(p,\cdot )(x_0;v)\}\) as an outer prederivative at \(x_0\).

Now, if \(\sigma \) and \(u\in {{\mathbb {S}}}\) are as in hypothesis (iii), one has

$$\begin{aligned} -\mathrm{D}_Bf(p,\cdot )(x_0;u)+\sigma {{\mathbb {B}}}\subseteq C \end{aligned}$$

and hence

$$\begin{aligned} \sup _{v\in {{\mathbb {S}}}}|C-^{\!\!\!\!\!*} \{-\mathrm{D}_Bf(p,\cdot )(x_0;v)\}| \ge \sigma , \end{aligned}$$

where \(|S|=\sup \{r>0:\ r{{\mathbb {B}}}\subseteq S\}\). In the light of [32,  Proposition 2.5], the last inequality implies the estimate in (32), thereby completing the proof. \(\square \)
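The following sketch illustrates Lemma 5 on toy data of our own choosing (an assumption for illustration only): with \(C={\mathbb {R}}^2_+\), \({\mathscr {R}}(p)=[0,1]^2\) for every p, and \(f(p,x)=-2x\), hypothesis (iii) holds with \(u=(1,1)/\sqrt{2}\) for any \(\sigma \le \sqrt{2}\); the numerically estimated slope of \(\nu _{F_{{\mathscr {R}},f}}(p,\cdot )\) at a non-ideal point is about 2, in agreement with the lower bound (32).

```python
import numpy as np

# Toy check of Lemma 5 (our own data): X = Y = R^2, C = R^2_+,
# R(p) = [0,1]^2 for all p, f(p,x) = -2x, so D_B f(p,.)(x; u) = -2u.
# With u = (1,1)/sqrt(2) one has B(-2u, sigma) contained in -C for any
# sigma <= sqrt(2), so hypothesis (iii) holds, e.g., with sigma = 1.2 > 1.
sigma = 1.2
zs = np.stack(np.meshgrid(np.linspace(0, 1, 201),
                          np.linspace(0, 1, 201)), -1).reshape(-1, 2)

def dist_to_C(y):                    # dist(y, R^2_+) = ||min(y, 0)||
    return np.linalg.norm(np.minimum(y, 0.0), axis=-1)

def nu(x):                           # nu(x) = exc(f(R) - f(x), C)
    return dist_to_C(-2.0 * zs + 2.0 * x).max()

x0 = np.array([0.5, 0.5])            # here nu(x0) > 0, i.e. x0 is not ideal
r = 1e-4                             # radius for the descent-slope estimate
slopes = [(nu(x0) - nu(x0 + r * np.array([np.cos(t), np.sin(t)]))) / r
          for t in np.linspace(0, 2 * np.pi, 360)]
print(f"nu(x0) = {nu(x0):.4f}, slope ~ {max(slopes):.3f} >= sigma = {sigma}")
```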

With the above elements, one is in a position to establish the following result about stability of ideal efficient solutions to \((\mathrm{VOP}p\,)\).

Theorem 2

(Lipschitz lower semicontinuity of \(\mathrm{IE}\)) With reference to a \((\mathrm{VOP}p\,)\), let \({\bar{p}}\in P\) and \({\bar{x}}\in \mathrm{IE}({\bar{p}})\) be given. Under assumption \((\tilde{{\mathscr {A}}})\), suppose that:

  1. (i)

    \(f:P\times {\mathbb {X}}\longrightarrow {\mathbb {Y}}\) is Lipschitz continuous on \(P\times {\mathbb {X}}\) with constant \(\ell _f\);

  2. (ii)

    \({\mathscr {R}}\) is Lipschitz u.s.c. at \({\bar{p}}\) and Lipschitz l.s.c. at \(({\bar{p}},{\bar{x}})\);

  3. (iii)

    there exists \(\delta _0>0\) such that \(f(p,\cdot )\) is Bouligand differentiable on \(\mathrm{B}\left( {\bar{x}}, \delta _0\right) \), for each \(p\in \mathrm{B}\left( {\bar{p}}, \delta _0\right) \);

  4. (iv)

    there exist \(\delta \in (0,\delta _0)\) and \(\sigma >1\) such that for every \((p,x)\in [\mathrm{B}\left( {\bar{p}}, \delta \right) \times \mathrm{B}\left( {\bar{x}}, \delta \right) ] \backslash \mathrm{graph}\,\mathrm{IE}\) there is \(u\in {{\mathbb {S}}}\) such that

    $$\begin{aligned} \mathrm{D}_Bf(p,\cdot )(x;u)+\sigma {{\mathbb {B}}}\subset -C. \end{aligned}$$
    (33)

Then, \(\mathrm{IE}\) is Lipschitz l.s.c. at \(({\bar{p}},{\bar{x}})\) and the following estimate holds

$$\begin{aligned} \mathrm{Liplsc}\,\mathrm{IE}({\bar{p}},{\bar{x}})\le {\ell _f[2+\mathrm{Lipusc}\,{\mathscr {R}}({\bar{p}})]+ \mathrm{Liplsc}\,{\mathscr {R}}({\bar{p}},{\bar{x}})\over \sigma -1}. \end{aligned}$$
(34)

Proof

The proof consists in showing that, under the current assumptions, it is possible to apply Proposition 1 with \(F=F_{{\mathscr {R}},f}\). To do so, let us start by observing that, as a consequence of hypothesis (i), each mapping \(x\mapsto f(p,x)\) is continuous on \({\mathbb {X}}\), for every \(p\in P\); hence, on account of Lemma 2, each set-valued mapping \(F_{{\mathscr {R}},f}(p,\cdot )\) is l.s.c. on \({\mathbb {X}}\). This shows that hypothesis (i) of Proposition 1 is fulfilled.

Moreover, by virtue of hypotheses (i) and (ii), Lemma 3 ensures that the set-valued mapping \(p\leadsto f(p,{\mathscr {R}}(p))\) is Lipschitz u.s.c. at \({\bar{p}}\), with \(\mathrm{Lipusc}\,f(\cdot ,{\mathscr {R}}(\cdot ))({\bar{p}}) \le \ell _f[1+\mathrm{Lipusc}\,{\mathscr {R}}({\bar{p}})]\). Since, again as a consequence of hypothesis (i), for any fixed \(x\in {\mathbb {X}}\) the mapping \(p\mapsto f(p,x)\) is calm at \({\bar{p}}\), with \(\mathrm{clm}\, f(\cdot ,x)({\bar{p}})\le \ell _f\), Lemma 4 enables one to say that \(F_{{\mathscr {R}},f}\) is Lipschitz u.s.c. at \({\bar{p}}\), with

$$\begin{aligned} \mathrm{Lipusc}\,F_{{\mathscr {R}},f}({\bar{p}})\le \ell _f[1+\mathrm{Lipusc}\,{\mathscr {R}}({\bar{p}})] +\ell _f. \end{aligned}$$

This shows that all the requirements in hypothesis (ii) of Proposition 1 are fulfilled under the assumptions made.

It remains to show that also hypothesis (iii) of Proposition 1 is fulfilled. This can be done by applying Lemma 5. Recalling the definition of the partial strict outer slope, one has to prove the existence of \(\epsilon >0\) such that

$$\begin{aligned} |\nabla \nu _{F_{{\mathscr {R}},f}}(p,\cdot )|(x)>1,\quad \forall (p,x)\in \mathrm{B}\left( {\bar{p}}, \epsilon \right) \times \mathrm{B}\left( {\bar{x}}, \epsilon \right) ,\quad 0<\nu _{F_{{\mathscr {R}},f}}(p,x)<\epsilon . \end{aligned}$$

To this end, take \(\epsilon \in (0,\delta )\), where \(\delta >0\) is as in hypothesis (iv), and an arbitrary \((p,x)\in \mathrm{B}\left( {\bar{p}}, \epsilon \right) \times \mathrm{B}\left( {\bar{x}}, \epsilon \right) \). If \(\nu _{F_{{\mathscr {R}},f}}(p,x)>0\), then \((p,x)\not \in \mathrm{graph}\,\mathrm{IE}\) and therefore, by hypothesis (iv), there exists \(u\in {{\mathbb {S}}}\) such that inclusion (33) holds. In turn, since hypothesis (iii) ensures the Bouligand differentiability of \(f(p,\cdot )\) at x, this inclusion implies, on account of Lemma 5, that \(|\nabla \nu _{F_{{\mathscr {R}},f}}(p,\cdot )|(x)\ge \sigma >1\).

Thus the thesis follows by taking into account that, in the current setting, \(\mathrm{IE}={{\mathscr {S}}}\). This completes the proof. \(\square \)

Hypothesis (ii) in Theorem 2 refers to a certain stability behaviour of \({\mathscr {R}}\). In concrete problems, this set-valued mapping may happen to be defined by a large variety of constraint systems. For many of them, adequate conditions ensuring the needed stability behaviour have been developed within variational analysis over the last decades (see [9,  Chapter 4.D], [17, 19,  Chapter 4.3] and references therein).

The stability behaviour of \(\mathrm{IE}\) established by Theorem 2 has a remarkable consequence on the stability of ideal efficient values, which can be formulated through the mapping \(\mathrm{val}:P\longrightarrow {\mathbb {Y}}\).

Corollary 1

(Calmness of \(\mathrm{val}\)) Under the same hypotheses as in Theorem 2 the mapping \(\mathrm{val}:P\longrightarrow {\mathbb {Y}}\) is calm at \({\bar{p}}\) and it holds

$$\begin{aligned} \mathrm{clm}\, \mathrm{val}({\bar{p}})\le \ell _f\left[ 1+{\ell _f[2+\mathrm{Lipusc}\,{\mathscr {R}}({\bar{p}})]+ \mathrm{Liplsc}\,{\mathscr {R}}({\bar{p}},{\bar{x}})\over \sigma -1}\right] . \end{aligned}$$

Proof

By Theorem 2, \(\mathrm{IE}\) is Lipschitz l.s.c. at \(({\bar{p}},{\bar{x}})\), with the related modulus estimate. So, taking an arbitrary \(\ell >\mathrm{Liplsc}\,\mathrm{IE}({\bar{p}},{\bar{x}})\), there exists \(\zeta _\ell >0\) such that, for any \(p\in \mathrm{B}\left( {\bar{p}}, \zeta _\ell \right) \), there is an element \(x_p\in \mathrm{IE}(p)\) with the property that \(d(x_p,{\bar{x}})\le \ell d(p,{\bar{p}})\). Consequently, it results in

$$\begin{aligned} \Vert \mathrm{val}(p)-\mathrm{val}({\bar{p}})\Vert &= \Vert f(p,x_p)-f({\bar{p}},{\bar{x}})\Vert \le \ell _f[d(p,{\bar{p}})+d(x_p,{\bar{x}})] \\ &\le \ell _f[1+\ell ]d(p,{\bar{p}}), \quad \forall p \in \mathrm{B}\left( {\bar{p}}, \zeta _\ell \right) . \end{aligned}$$

This says that \(\mathrm{val}\) is calm at \({\bar{p}}\). By arbitrariness of \(\ell >\mathrm{Liplsc}\,\mathrm{IE}({\bar{p}},{\bar{x}})\), to obtain the estimate complementing the thesis, it suffices to recall the inequality in (34). \(\square \)
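A minimal numerical sketch of this calmness behaviour, on toy data of our own (not the paper's): with \(f(p,x)=x\) and \({\mathscr {R}}(p)=[p,1]^2\), one has \(\mathrm{IE}(p)=\{(p,p)\}\) and \(\mathrm{val}(p)=(p,p)\), so the difference quotients of \(\mathrm{val}\) stabilize at \(\sqrt{2}\).

```python
import numpy as np

# Toy check of Corollary 1 (our own data): X = Y = R^2, C = R^2_+,
# f(p, x) = x (so ell_f = 1) and R(p) = [p, 1]^2, whose ideal efficient
# solution is the lower-left corner: IE(p) = {(p, p)}, val(p) = (p, p).
def val(p):
    return np.array([p, p])

pbar = 0.0
for p in [0.5, 0.1, 0.01]:
    ratio = np.linalg.norm(val(p) - val(pbar)) / abs(p - pbar)
    print(f"p={p}: ||val(p) - val(0)|| / |p| = {ratio:.4f}")  # -> sqrt(2)
```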

Example 3

Let \(P=[0,+\infty )\), \({\mathbb {X}}={\mathbb {Y}}={\mathbb {R}}^2\), \(C={\mathbb {R}}^2_+\), with \(f:[0,+\infty ) \times {\mathbb {R}}^2\longrightarrow {\mathbb {R}}^2\) given by

$$\begin{aligned} f(p,x)=(2\arctan x_2,-2\arctan x_1), \end{aligned}$$

and \({\mathscr {R}}:[0,+\infty )\rightrightarrows {\mathbb {R}}^2\) given by

$$\begin{aligned} {\mathscr {R}}(p)=\{x\in {\mathbb {R}}^2:\ x_1\ge 0,\ x_2\ge 0,\ x_1+x_2 \le \beta (p)\}, \end{aligned}$$

where \(\beta :[0,+\infty )\longrightarrow [0,+\infty )\) is a function with \(\beta (0)=0\) and calm from above at 0. Take \({\bar{p}}=0\) and \({\bar{x}}=(0,0)\).

In order to find the ideal efficient solutions to the corresponding \((\mathrm{VOP}p\,)\), it is convenient to observe first that \(\mathrm{IE}(0)=\{(0,0)\}\) and that, for every \(y=(y_1,y_2) \in f(p,{\mathscr {R}}(p))\), with \(p\in [0,+\infty )\), according to the definition of f and \({\mathscr {R}}(p)\), one has

$$\begin{aligned} y_1\ge 0 \qquad \hbox { and }\qquad y_2\ge -2\arctan \beta (p). \end{aligned}$$

In other terms, \((\beta (p),0)\in {\mathscr {R}}(p)\) and

$$\begin{aligned} f(p,{\mathscr {R}}(p))\subseteq f(p,(\beta (p),0))+{\mathbb {R}}^2_+= (0,-2\arctan \beta (p))+{\mathbb {R}}^2_+, \end{aligned}$$

which means that \((\beta (p),0)\in \mathrm{IE}(p)\), for every \(p\in [0,+\infty )\). Besides, since the vector \((0,-2\arctan \beta (p))\) is the only possible ideal efficient element of the set \(f(p,{\mathscr {R}}(p))\) and the function \(x\mapsto f(p,x)\) is injective, one can state that

$$\begin{aligned} \mathrm{IE}(p)=\{(\beta (p),0)\},\quad \forall p\in [0,+\infty ). \end{aligned}$$

Thus, since for any \(c_\beta >\overline{\mathrm{clm}}\, \beta (0)\) there exists \(\delta >0\) such that it holds

$$\begin{aligned} \mathrm{dist}\left( (0,0),\mathrm{IE}(p)\right) =\beta (p)\le c_\beta p, \quad \forall p\in [0,\delta ], \end{aligned}$$

it is possible to deduce that \(\mathrm{IE}\) is Lipschitz l.s.c. (actually, also Lipschitz u.s.c. and hence calm) at (0, (0, 0)), with

$$\begin{aligned} \mathrm{Liplsc}\,\mathrm{IE}(0,(0,0))\le \overline{\mathrm{clm}}\, \beta (0). \end{aligned}$$
(35)
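The identification of \(\mathrm{IE}(p)\) carried out above can be double-checked by brute force; the sketch below (assuming, for concreteness, \(\beta (p)=p\), which is our own choice) discretizes \({\mathscr {R}}(p)\) and recovers \(\mathrm{IE}(p)=\{(\beta (p),0)\}\) together with \(\mathrm{val}(p)=(0,-2\arctan \beta (p))\).

```python
import numpy as np

# Brute-force check of IE(p) = {(beta(p), 0)} for the data of Example 3,
# taking beta(p) = p for concreteness (beta(0) = 0, calm from above at 0).
def f(x):                                   # f does not depend on p here
    return np.stack([2*np.arctan(x[..., 1]), -2*np.arctan(x[..., 0])], -1)

def feasible_grid(p, n=201):                # grid on {x >= 0, x1+x2 <= beta(p)}
    g = np.linspace(0.0, p, n)
    X = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
    return X[X.sum(1) <= p + 1e-12]

for p in [0.5, 1.0, 2.0]:
    X = feasible_grid(p)
    Y = f(X)
    # x is ideal efficient iff f(x) <= f(z) componentwise for all feasible z
    ideal = X[np.all(Y <= Y.min(0) + 1e-9, axis=1)]
    print(f"p={p}: IE(p) ~ {ideal},  val(p) ~ {f(ideal)[0].round(4)}")
```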

In order to test the application of Theorem 2 in this concrete case, let us start by noticing that \(f(p,\cdot )\) is (Fréchet) differentiable on \({\mathbb {R}}^2\) and that the linear mapping \(\mathrm{D}f(p,\cdot )(x):{\mathbb {R}}^2\longrightarrow {\mathbb {R}}^2\) is represented by the Jacobian matrix

$$\begin{aligned} \mathrm{D}f(p,\cdot )(x)= \left( \begin{array}{cc} 0 & \displaystyle {2\over 1+x_2^2} \\ -\displaystyle {2\over 1+x_1^2} & 0 \end{array}\right) , \end{aligned}$$

with

$$\begin{aligned} \Vert \mathrm{D}f(p,\cdot )(x)\Vert _\mathrm{L} &= \sup _{u\in {{\mathbb {S}}}} \left\| \left( \begin{array}{cc} 0 & \displaystyle {2\over 1+x_2^2} \\ -\displaystyle {2\over 1+x_1^2} & 0 \end{array}\right) \left( \begin{array}{c} u_1 \\ u_2 \end{array}\right) \right\| = \sup _{u\in {{\mathbb {S}}}} \left\| \left( \begin{array}{c} \displaystyle {2u_2\over 1+x_2^2} \\ -\displaystyle {2u_1\over 1+x_1^2} \end{array}\right) \right\| \\ &\le \sqrt{{4\over (1+x_1^2)^2}+{4\over (1+x_2^2)^2}} \le 2\sqrt{2},\quad \forall x=(x_1,x_2)\in {\mathbb {R}}^2, \end{aligned}$$

so that f turns out to be Lipschitz continuous on \([0,+\infty ) \times {\mathbb {R}}^2\), with constant \(\ell _f=2\sqrt{2}\).
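The bound \(\ell _f=2\sqrt{2}\) can be tested by random sampling of the Jacobian's operator norm (a sanity check of our own; in fact, the sampled norms never exceed 2, so the constant \(2\sqrt{2}\) is a safe, though not tight, upper bound).

```python
import numpy as np

# Numerical sanity check (our own, with numpy) of the Lipschitz bound
# ||Df(p,.)(x)||_L <= 2*sqrt(2) for f(p,x) = (2*arctan x2, -2*arctan x1).
rng = np.random.default_rng(0)
worst = 0.0
for _ in range(10_000):
    x1, x2 = rng.normal(size=2) * 5.0
    J = np.array([[0.0, 2.0/(1.0 + x2**2)],
                  [-2.0/(1.0 + x1**2), 0.0]])
    worst = max(worst, np.linalg.norm(J, 2))     # spectral (operator) norm
print(f"max sampled ||Df||_L = {worst:.4f} <= 2*sqrt(2) = {2*np.sqrt(2):.4f}")
```

As for the feasible region mapping \({\mathscr {R}}\), since it is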

$$\begin{aligned} \mathrm{exc}({\mathscr {R}}(p),{\mathscr {R}}(0))=\Vert (\beta (p),0)\Vert =\beta (p) \le c_\beta p,\quad \forall p\in [0,\delta ], \end{aligned}$$

it is true that \({\mathscr {R}}\) is Lipschitz u.s.c. at 0, with \(\mathrm{Lipusc}\,{\mathscr {R}}(0)\le c_\beta \). Moreover, as it is

$$\begin{aligned} {\mathscr {R}}(0)=\{(0,0)\}\subseteq {\mathscr {R}}(p), \quad \forall p\in [0,+\infty ), \end{aligned}$$

one sees that for every \(\ell >0\) it holds

$$\begin{aligned} {\mathscr {R}}(p)\cap \mathrm{B}\left( (0,0), \ell |p|\right) \ne \varnothing , \quad \forall p\in [0,+\infty ), \end{aligned}$$

which says that \({\mathscr {R}}\) is also Lipschitz l.s.c. at (0, (0, 0)) and \(\mathrm{Liplsc}\,{\mathscr {R}}(0,(0,0))=0\).

Now, take an arbitrary \(x\in \mathrm{B}\left( (0,0), \delta \right) \backslash \{(0,0)\}\), with \(\delta \) fixed in such a way that \(0<\delta <\sqrt{\sqrt{2}-1}\), and set

$$\begin{aligned} \sigma ={\sqrt{2}\over 1+\delta ^2}. \end{aligned}$$

Notice that \(\sigma >1\), because \(\delta <\sqrt{\sqrt{2}-1}\). Taking \(u=(1/\sqrt{2},-1/\sqrt{2})\in {{\mathbb {S}}}\), one finds

$$\begin{aligned} \mathrm{D}f(p,\cdot )(x)u= \left( \begin{array}{c} -\displaystyle {\sqrt{2}\over 1+x_2^2} \\ -\displaystyle {\sqrt{2}\over 1+x_1^2}\end{array}\right) , \end{aligned}$$

whence it follows

$$\begin{aligned} \mathrm{dist}\left( \mathrm{D}f(p,\cdot )(x)u,{\mathbb {R}}^2\backslash (-\mathrm{int}\, {\mathbb {R}}^2_+)\right) = \min \left\{ {\sqrt{2}\over 1+x_1^2},\, {\sqrt{2}\over 1+x_2^2}\right\} \ge {\sqrt{2}\over 1+\delta ^2}. \end{aligned}$$

Consequently, it is true that

$$\begin{aligned} \mathrm{D}f(p,\cdot )(x)u+\sigma {{\mathbb {B}}}\subseteq -{\mathbb {R}}^2_+, \quad \forall x\in \mathrm{B}\left( (0,0), \delta \right) \backslash \{(0,0)\}. \end{aligned}$$

This shows that also hypothesis (iv) of Theorem 2 is fulfilled. In the case under consideration, the estimate in (34) becomes

$$\begin{aligned} \mathrm{Liplsc}\,\mathrm{IE}(0,(0,0))\le {2\sqrt{2}[2+c_\beta ]\over \displaystyle {\sqrt{2}\over 1+\delta ^2}-1}, \end{aligned}$$

which is consistent with (even though less accurate than) the estimate in (35), obtained by direct inspection of \(\mathrm{IE}\). Indeed, one sees that

$$\begin{aligned} \lim _{\delta \rightarrow 0^+} {2\sqrt{2}[2+c_\beta ]\over \displaystyle {\sqrt{2}\over 1+\delta ^2}-1} = {4\sqrt{2}+2\sqrt{2}c_\beta \over \sqrt{2}-1}>c_\beta > \overline{\mathrm{clm}}\, \beta (0) \end{aligned}$$

(whereas

$$\begin{aligned} \lim _{\delta \rightarrow {\sqrt{\sqrt{2}-1}\ }^-}{2\sqrt{2}[2+c_\beta ]\over \displaystyle {\sqrt{2}\over 1+\delta ^2}-1}=+\infty >\overline{\mathrm{clm}}\, \beta (0)\ ). \end{aligned}$$
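To make the comparison tangible, the sketch below (our own computation, taking \(c_\beta =1\), i.e. \(\beta (p)=p\)) evaluates the right-hand side of (34) for several admissible values of \(\delta \): the bound stays finite on \((0,\sqrt{\sqrt{2}-1})\) but remains well above the direct estimate (35), consistently with the two limits displayed above.

```python
import numpy as np

# Right-hand side of (34) for Example 3, as a function of delta in
# (0, sqrt(sqrt(2)-1)) ~ (0, 0.6436), with c_beta = 1 (beta(p) = p).
c_beta = 1.0
for delta in [0.05, 0.2, 0.4, 0.6]:
    sigma = np.sqrt(2.0) / (1.0 + delta**2)
    bound = 2*np.sqrt(2)*(2 + c_beta) / (sigma - 1.0)   # RHS of (34)
    print(f"delta={delta:0.2f}: sigma={sigma:.3f}, "
          f"bound={bound:8.2f}  (direct estimate (35): {c_beta})")
```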

The above example suggests that, whenever \(f(p,\cdot )\) is one-to-one, \(\mathrm{IE}\) is single-valued, and this fact automatically enhances the Lipschitz lower semicontinuity property to calmness. It is well known that a sufficient condition for a Lipschitz (possibly nonsmooth) mapping between finite-dimensional spaces to be a homeomorphism can be expressed in terms of the Clarke generalized Jacobian (see [23]). Let \(\partial ^\circ f(p,\cdot )(x_0)\) denote the Clarke generalized Jacobian of \(f(p,\cdot ):{\mathbb {R}}^n\longrightarrow {\mathbb {R}}^n\) at \(x_0\in {\mathbb {R}}^n\), i.e. the set

$$\begin{aligned} \partial ^\circ f(p,\cdot )(x_0) = \mathrm{conv}\, \left\{ \varLambda \in \mathrm{L}({\mathbb {R}}^n):\ \exists (x_k)_k,\ x_k\in {\mathscr {D}}(f(p,\cdot )),\ x_k\rightarrow x_0,\ \mathrm{D}f(p,\cdot )(x_k)\longrightarrow \varLambda \hbox { as } k\rightarrow \infty \right\} , \end{aligned}$$

where \({\mathscr {D}}(f(p,\cdot ))\) indicates the set of points at which the function \(x\mapsto f(p,x)\) is (Fréchet) differentiable (the Rademacher theorem ensures that such a set is a subset of \({\mathbb {R}}^n\) of full Lebesgue measure).
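As a one-dimensional illustration of these notions (our own example, chosen to anticipate hypothesis (v) of the next corollary), for \(f(x)=2x+|x|\) the gradients at nearby differentiability points take only the values 1 and 3, so \(\partial ^\circ f(0)=\mathrm{conv}\{1,3\}=[1,3]\) consists of invertible elements with \(\sup \Vert \varLambda ^{-1}\Vert \le 1\).

```python
import numpy as np

# Sampling illustration (our own, not the paper's method): for the scalar map
# f(x) = 2x + |x|, one has f'(x) = 2 + sign(x) at every x != 0, so gradients
# sampled near x0 = 0 accumulate at 1 and 3 and the Clarke generalized
# Jacobian is conv{1, 3} = [1, 3]; every Lambda in [1, 3] is invertible with
# |Lambda^{-1}| <= 1, so the Lipschitzian Hadamard theorem applies here.
rng = np.random.default_rng(1)
xs = rng.uniform(-1e-6, 1e-6, size=1000)   # differentiability points near 0
grads = 2.0 + np.sign(xs)
print(f"sampled gradients in [{grads.min():.0f}, {grads.max():.0f}],"
      f"  sup |Lambda^{{-1}}| = {1/grads.min():.2f}")
```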

Corollary 2

Under the same hypotheses as in Theorem 2, suppose that \({\mathbb {X}}={\mathbb {Y}}={\mathbb {R}}^n\) and

  1. (v)

    for every \(p\in P\) there exists \(\gamma _p>0\) such that, for every \(x\in {\mathbb {R}}^n\), every \(\varLambda \in \partial ^\circ f(p,\cdot )(x)\) is invertible and

    $$\begin{aligned} \sup _{x\in {\mathbb {R}}^n}\sup _{\varLambda \in \partial ^\circ f(p,\cdot )(x)} \Vert \varLambda ^{-1}\Vert _\mathrm{L}\le \gamma _p. \end{aligned}$$

Then, \(\mathrm{IE}\) is single-valued and calm at \({\bar{p}}\), with

$$\begin{aligned} \mathrm{clm}\, \mathrm{IE}({\bar{p}})\le {\ell _f[2+\mathrm{Lipusc}\,{\mathscr {R}}({\bar{p}})]+ \mathrm{Liplsc}\,{\mathscr {R}}({\bar{p}},{\bar{x}})\over \sigma -1}. \end{aligned}$$

Proof

Fix an arbitrary \(p\in P\). The additional hypothesis (v) enables one to apply the Lipschitzian Hadamard theorem in [23]. According to it, the mapping \(f(p,\cdot ):{\mathbb {R}}^n\longrightarrow {\mathbb {R}}^n\) is one-to-one on \({\mathbb {R}}^n\). Consequently, since it is

$$\begin{aligned} \mathrm{IE}(p)=f^{-1}(p,\cdot )(\mathrm{val}(p))\cap {\mathscr {R}}(p), \end{aligned}$$

the mapping \(\mathrm{IE}\) must be single-valued. As already remarked, in such a circumstance Lipschitz lower semicontinuity and calmness collapse to the same property. So the thesis becomes a consequence of Theorem 2. \(\square \)

It is well recognized that stability/sensitivity analysis in optimization, as well as robust and stochastic programming, affords a useful approach to dealing with problems affected by uncertain data. Effects due to uncertainty cannot be neglected in concrete problems, so evaluating how stable/sensitive an optimal solution is with respect to perturbations of the input data becomes a necessary issue for a complete problem analysis. Nonetheless, as clearly explained in [16,  Chapter 15.4], a specific feature of the stability/sensitivity approach is to provide only some a posteriori insights in describing ranges for the input data, within which solutions, if any, remain optimal. “It does not, however, provide a course of action for changing a solution should the perturbation be outside this range. In contrast, stochastic and robust optimization techniques take the uncertainty into account during the optimization process” ([16]). In particular, following the robustness approach, different scenarios are allowed for the input parameter and this leads to a solution concept that works well in every uncertain scenario, thereby hedging against the worst case that may happen. In the light of this basic difference, the findings of the present paper cannot be directly related to results on robustness in vector optimization. Nevertheless, as suggested by the journal editor handling the present paper, the technique here employed for the stability analysis, mainly relying on the solution behaviour of parameterized set-valued inclusions, may offer useful hints to be developed in a subsequent analysis explicitly focused on robustness in multi-objective optimization. More precisely, if the parameter space P is interpreted as the set of all possible scenarios, a way to define a robust counterpart of the feasible region affected by uncertainty is to set

$$\begin{aligned} \overline{{\mathscr {R}}}=\bigcap _{p\in P}{\mathscr {R}}(p) \qquad \qquad \hbox {(robust feasibility)}. \end{aligned}$$

Consequently, a robust counterpart of the notion of ideal efficient solution related to problems \((\mathrm{VOP}p\,)\) should lead to single out any vector \({\bar{x}} \in \overline{{\mathscr {R}}}\) such that

$$\begin{aligned} F_{{\mathscr {R}},f}(p,{\bar{x}})=f(p,{\mathscr {R}}(p))-f(p,{\bar{x}})\subseteq C, \quad \forall p\in P. \end{aligned}$$

In this setting, by introducing the set-valued mapping \({\overline{F}}: {\mathbb {X}}\rightrightarrows {\mathbb {Y}}\) embedding all uncertain scenarios

$$\begin{aligned} {\overline{F}}(x)=F_{{\mathscr {R}},f}(P,x)=\bigcup _{p\in P}[f(p,{\mathscr {R}}(p))-f(p,x)], \end{aligned}$$

the set-valued inclusion problem formalizing the robust counterpart related to problem \((\mathrm{PSV})\) turns out to be

$$\begin{aligned} \hbox { find }x\in \overline{{\mathscr {R}}}\hbox { such that } {\overline{F}}(x)\subseteq C. \end{aligned}$$

Investigations focusing on the solvability of the above set-valued inclusion problem will be the subject of a future research work.
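A rough computational counterpart of this robust formulation can already be sketched (on toy data chosen by us purely as an assumption for illustration): discretize the scenario set and the feasible regions, and retain the points of \(\overline{{\mathscr {R}}}\) at which the inclusion \({\overline{F}}(x)\subseteq C\) holds scenario by scenario.

```python
import numpy as np

# Discretized feasibility check for the robust set-valued inclusion
# (toy data of our own): X = Y = R^2, C = R^2_+, f(p, x) = x and
# R(p) = [0, 1+p]^2 over the finite scenario set P = {0, 0.5, 1}.
P = [0.0, 0.5, 1.0]

def R(p, n=41):                         # grid on the box [0, 1+p]^2
    g = np.linspace(0.0, 1.0 + p, n)
    return np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)

def f(p, x):
    return x

# robust feasible region: the boxes are nested here, so it is R(0) = [0,1]^2
robust_feasible = R(0.0)

def is_robust_ideal(x, tol=1e-9):       # check Fbar(x) <= C, scenario-wise
    return all(np.all(f(p, R(p)) - f(p, x) >= -tol) for p in P)

sols = np.array([x for x in robust_feasible if is_robust_ideal(x)])
print("robust ideal efficient solutions:", sols)       # -> [[0. 0.]]
```

For the nested boxes chosen here the procedure returns the single robust ideal efficient solution (0, 0); of course, such exhaustive checks do not scale and are no substitute for the solvability analysis announced above.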

5 Conclusions

Evidence shows that ideal efficiency has a delicate geometry. The findings of the present paper demonstrate that the analysis of solution stability for parameterized set-valued inclusions can afford useful insights into the behaviour of ideal efficient solutions to vector optimization problems subject to perturbations, from both the qualitative and the quantitative viewpoint. The study has focused on the Lipschitz lower semicontinuity property of the ideal efficient solution mapping, but it is reasonable to expect that other quantitative stability properties widely considered in variational analysis (such as Lipschitz upper semicontinuity, calmness and the Aubin property) can be fruitfully investigated by the same approach, via set-valued inclusions. While the analysis of parameterized set-valued inclusions has been conducted in a rather abstract setting, the related achievements have been subsequently applied in a more structured context, where the employment of well-known generalized derivatives ensures the applicability of the results to a large class of problems. The choice made in this part of the work leaves open the possibility of refining the stability conditions here obtained by means of other, more sophisticated, tools of nonsmooth analysis.