Differential stability properties in convex scalar and vector optimization

This paper focuses on formulas for the ε-subdifferential of the optimal value function of scalar and vector convex optimization problems. These formulas can be applied even when the set of solutions of the problem is empty. In the scalar case, both unconstrained problems and problems with an inclusion constraint are considered. For the latter, limiting results are derived in such a way that no qualification conditions are required. The main mathematical tool is a limiting calculus rule for the ε-subdifferential of the sum of convex lower semicontinuous functions defined on a (not necessarily reflexive) Banach space. In the vector case, unconstrained problems are studied and exact formulas are derived by linear scalarizations. These results are based on a concept of infimal set, the notion of cone proper set, and an ε-subdifferential for convex vector functions due to Taa.


Introduction
Studying differential stability of optimization problems usually means studying differentiability properties of the optimal value function in parametric mathematical programming. We refer to [1, 2, 4, 7, 8, 25-27, 29, 31, 37] and the references therein for some old and new results in this direction.
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. In the early 1960s, Moreau and Rockafellar [31] introduced the concept of subgradient for convex functions, initiating the development of theoretical and applied convex analysis. Ioffe and Tihomirov [21], Hiriart-Urruty and Lemaréchal [18,19], Phelps [30], Zȃlinescu [37] and Borwein and Vanderwerff [9] presented a beautiful theory of convex sets and convex functions in finite- and infinite-dimensional spaces, with many significant applications in mathematical programming, classical variational calculus, and optimal control theory.
In 1965, Brøndsted and Rockafellar [11] introduced the concept of approximate subdifferential (also called ε-subdifferential) of a convex function. It has become an important tool for the study of algorithms as well as for theoretical purposes in convex optimization. For more information, the reader is referred to [14,17,20,28,37] and the references therein.
This paper concerns the study of differential stability properties of scalar and vector convex programming problems.
In the literature on differentiability properties of the optimal value function (also called the efficient value mapping and the marginal or perturbation function) of a parametric family of optimization problems with inclusion constraints, different qualification conditions are assumed (see the recent papers [1,2,4] and the references therein). However, it is well known that these regularity conditions are usually difficult to check and may fail to hold. Thus, one objective of this paper is to derive limiting versions of these differentiability properties in which no qualification conditions are required. In addition, many differential stability results in the literature rely on the existence of solutions of the involved problem. Unfortunately, this assumption is not always satisfied (see [3,28]). Thus, a second aim of this work is to derive such differential stability results for problems whose solution sets may be empty (see [7,8] and the references therein for a different approach based on the regularization of the problem).
In vector optimization, additional technical difficulties arise. Namely, the optimal value mapping is set-valued and involves some concept of infimal point (see [6,12,22,23,36]), the vector counterpart of the notion of infimum of a set of real numbers. As a result, differential stability properties of vector optimization problems are formulated in terms of graphical and epigraphical derivatives, and most of the obtained results require the so-called domination property, which implies the existence of exact solutions of the problem (see [23,24,33,34,36]). Again, this paper concerns differential stability of vector optimization problems that may not satisfy the domination property (in particular, with empty solution set).
The contents of the paper are as follows. Section 2 collects some basic notations and concepts. In Section 3, a limiting calculus rule for the ε-subdifferential of the sum of m proper lower semicontinuous convex functions on a (not necessarily reflexive) Banach space is stated without requiring any qualification condition. It is derived by convex analysis tools and a generalization of the well-known Brøndsted-Rockafellar Theorem. Sections 4 and 5 are devoted to the differential stability of unconstrained and constrained convex optimization problems, respectively, whose solution sets can be empty. In Section 5, an inclusion constraint is considered and a limiting formula for the ε-subdifferential of the optimal value function is obtained. This result is a consequence of both the previous one in Section 4 dealing with unconstrained problems and the limiting sum rule stated in Section 3. It is also shown that it reduces to an exact formula provided that the so-called Robinson-Rockafellar condition is satisfied. Section 6 involves the differential stability of convex vector optimization problems. In deriving it, a linear scalarization approach is considered. The obtained differentiability properties are formulated by ε-subgradients of the scalarized function and also by ε-subgradients of the optimal value function of the scalarized problems. The main mathematical tools are an ε-subdifferential for vector functions due to Taa (see [35]) and a concept of infimal point (see [6,12]). As in the previous two sections, the set of solutions of the problem can be empty. Finally, in Section 7, the conclusions of this work are summarized.

Preliminaries and Mathematical Tools
Throughout, R̄ stands for the set R ∪ {±∞}, R^p_+ is the nonnegative orthant of R^p, and R_+ := R^1_+. Let X be a real locally convex Hausdorff topological linear space. The topological dual space of X is denoted by X*.
For a set C ⊂ X, we denote by int C and cl C the topological interior and the closure of C, respectively. The core or algebraic interior of C is defined by

core C := {x ∈ C | ∀u ∈ X, ∃δ > 0 such that x + tu ∈ C for all t ∈ [0, δ]}.

Given a function f : X → R̄, we denote the effective domain and the epigraph of f by dom f and epi f, respectively, i.e.,

dom f := {x ∈ X | f(x) < +∞},   epi f := {(x, r) ∈ X × R | f(x) ≤ r}.

One says that f is proper if f(x) > −∞ for all x ∈ X and dom f ≠ ∅. The function f is called lower semicontinuous if epi f is closed.
Definition 1 Let f : X → R̄ be a proper convex function and ε ≥ 0. The ε-subdifferential (or approximate subdifferential) of f at a point x_0 ∈ dom f is the set

∂_ε f(x_0) := {x* ∈ X* | f(x) − f(x_0) ≥ ⟨x*, x − x_0⟩ − ε, ∀x ∈ X}.

Remark 1 Although a nonconvex function may be ε-subdifferentiable at some point of its effective domain, the natural class of functions for which this notion makes sense is the class of convex functions. More precisely, it is well known that if f is a proper lower semicontinuous convex function, then f is ε-subdifferentiable at every point x_0 ∈ dom f, for all ε > 0 (see [17]).
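To make Definition 1 concrete, consider a hedged one-dimensional sketch (my own toy example, not taken from the paper): for f(x) = x²/2 on R, solving the defining inequality in closed form gives ∂_ε f(x_0) = [x_0 − √(2ε), x_0 + √(2ε)], which can be checked numerically on a grid.

```python
import numpy as np

# Numeric check of the eps-subgradient inequality
#   f(x) - f(x0) >= s*(x - x0) - eps  for all x,
# for the toy function f(x) = x**2 / 2 (an assumption of this sketch, not the paper's example).

def is_eps_subgradient(s, x0, eps, f, grid, tol=1e-9):
    """Test the defining inequality of Definition 1 on a sample grid."""
    return bool(np.all(f(grid) - f(x0) >= s * (grid - x0) - eps - tol))

f = lambda x: 0.5 * x**2
grid = np.linspace(-10.0, 10.0, 20001)
x0, eps = 1.0, 0.5

# Predicted endpoints of d_eps f(x0): x0 -/+ sqrt(2*eps), i.e. 0 and 2 here.
lo, hi = x0 - np.sqrt(2 * eps), x0 + np.sqrt(2 * eps)
assert is_eps_subgradient(lo, x0, eps, f, grid)
assert is_eps_subgradient(hi, x0, eps, f, grid)
# A slope slightly outside the interval violates the inequality.
assert not is_eps_subgradient(hi + 0.1, x0, eps, f, grid)
```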

Sum Rules for the Approximate Subdifferential
In this section, X is assumed to be a Banach space with norm ‖·‖_X. The dual space of X is denoted by X*, endowed with the dual norm ‖·‖_{X*}. Let f_1, f_2, ..., f_m be proper lower semicontinuous convex functions on X. The aim of this section is to derive a formula for the ε-subdifferential of the sum f_1 + f_2 + ··· + f_m, where cl_{w*} denotes the closure with respect to the weak* topology of X*.
Proof Consider the Cartesian product X̂ := X × X × ··· × X (m copies of X) and the continuous linear operator A : X → X̂, Ax := (x, x, ..., x). Define the function f̂ : X̂ → R̄, f̂(x_1, ..., x_m) := f_1(x_1) + ··· + f_m(x_m), and let A* : X̂* → X* stand for the adjoint operator of A. It follows that f_1 + ··· + f_m = f̂ ∘ A. In addition, by [37, Corollary 2.4.5] we see that the ε-subdifferential of f̂ ∘ A can be expressed through A* and the ε-subdifferential of f̂. The result follows as a consequence of statements (2), (3) and (4).
The next result extends the sequential sum rule in [14, Theorem 3] to more than two functions defined on a not necessarily reflexive Banach space. Although its proof is similar to the one in [14, Theorem 3], we include it for the sake of completeness.
Theorem 2 Let f_i : X → R̄ be a proper lower semicontinuous convex function, i = 1, 2, ..., m.

Proof In addition, by applying [14, Proposition 2] to each i and α, it follows that there exist nets as required, where a_{i,α} = |ε_{i,α} − ε_i|. Clearly, assertions (5), (6) and x_{i,α} → x_0 in the norm of X are satisfied, and the proof of the necessity part is complete.
Conversely, assume that there exist ε_i ≥ 0 and nets satisfying (5) and (6). From (5), we obtain the corresponding inequality for each i. By (6) and the stated condition, it follows that x* ∈ ∂_ε(f_1 + ··· + f_m)(x_0), and the proof finishes.
Remark 2 It is not hard to check that the next set is convex. Therefore, if X is a reflexive Banach space, formula (1) can be reformulated accordingly, and then the nets in Theorem 2 can be replaced by sequences, where cl_{X*} denotes the closure with respect to the strong topology of X*. In particular, Theorem 2 reduces to [14, Theorem 3] by considering m = 2 and a reflexive Banach space X.
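The elementary inclusion behind any ε-subdifferential sum rule, namely that x*_1 + x*_2 ∈ ∂_{ε_1+ε_2}(f_1 + f_2)(x_0) whenever x*_i ∈ ∂_{ε_i} f_i(x_0), follows directly from Definition 1 by adding the two defining inequalities. A hedged numeric sketch (the functions f_1(x) = |x| and f_2(x) = x²/2 and the point x_0 = 1 are my own choices, not the paper's):

```python
import numpy as np

# For x0 = 1 one can compute directly:
#   d_e1 |.|(1)        = [1 - e1, 1]            (for 0 <= e1 <= 1),
#   d_e2 (x**2/2)(1)   = [1 - sqrt(2*e2), 1 + sqrt(2*e2)].
# We sample admissible pairs (s1, s2) and verify that s1 + s2 is an
# (e1 + e2)-subgradient of f1 + f2 at x0, the "easy" half of the sum rule.

rng = np.random.default_rng(0)
grid = np.linspace(-20.0, 20.0, 40001)
f1, f2 = np.abs, lambda x: 0.5 * x**2
x0 = 1.0

def is_eps_subgrad(s, eps, f, tol=1e-9):
    return bool(np.all(f(grid) - f(x0) >= s * (grid - x0) - eps - tol))

for _ in range(100):
    e1, e2 = rng.uniform(0.0, 1.0, size=2)
    s1 = rng.uniform(1.0 - e1, 1.0)                    # a point of d_e1 f1(1)
    s2 = x0 + rng.uniform(-1.0, 1.0) * np.sqrt(2 * e2) # a point of d_e2 f2(1)
    assert is_eps_subgrad(s1, e1, f1) and is_eps_subgrad(s2, e2, f2)
    assert is_eps_subgrad(s1 + s2, e1 + e2, lambda x: f1(x) + f2(x))
```

The converse inclusion is exactly where closures (and the limiting nets of Theorem 2) become necessary in infinite dimensions.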

Differential Stability of Unconstrained Optimization Problems
Let X and Y be real locally convex Hausdorff topological linear spaces. Let ϕ : X × Y → R̄ be an extended real-valued function. Consider the parametric unconstrained optimization problem

(P_x)   min_{y ∈ Y} ϕ(x, y),

depending on the parameter x ∈ X. The function ϕ is called the objective function of problem (P_x). The optimal value function μ : X → R̄ of (P_x) is

μ(x) := inf_{y ∈ Y} ϕ(x, y).

For each x ∈ X, the set of approximate solutions of (P_x) with error η ≥ 0 is denoted by

M_η(x) := {y ∈ Y | ϕ(x, y) ≤ μ(x) + η},

and we write M(x) := M_0(x). A formula for ∂_ε μ is known for the case where ϕ is a convex function. Next we state it with a proof that makes clear that such a convexity assumption is only required to guarantee the nonemptiness of the involved ε-subdifferentials (see Remark 1). In particular, it is worth stressing that the proof does not involve the conjugate function of the optimal value function μ. In [37] the reader can find other properties of ε-subdifferentials that are stated without convexity assumptions.

Theorem 3 Suppose that ϕ is convex and μ is finite at x̄ ∈ X. Then, for every ε ≥ 0 and every choice of y_η ∈ M_η(x̄), η > 0, one has

∂_ε μ(x̄) = ∩_{η>0} {x* ∈ X* | (x*, 0) ∈ ∂_{ε+η} ϕ(x̄, y_η)}.   (8)
Proof Let x* ∈ ∂_ε μ(x̄), η > 0 and y_η ∈ M_η(x̄). From the definitions, for all (x, y) ∈ X × Y we have that

ϕ(x, y) − ϕ(x̄, y_η) ≥ μ(x) − μ(x̄) − η ≥ ⟨x*, x − x̄⟩ − (ε + η).

Therefore, (x*, 0) ∈ ∂_{ε+η} ϕ(x̄, y_η). Reciprocally, let x* belong to the right-hand side of (8), and consider x ∈ X, δ > 0 and y ∈ Y with ϕ(x, y) ≤ μ(x) + δ. Then, for every η > 0,

μ(x) + δ ≥ ϕ(x, y) ≥ ϕ(x̄, y_η) + ⟨x*, x − x̄⟩ − (ε + η) ≥ μ(x̄) + ⟨x*, x − x̄⟩ − (ε + η).

Letting δ, η ↓ 0 we obtain μ(x) − μ(x̄) ≥ ⟨x*, x − x̄⟩ − ε, that is, x* ∈ ∂_ε μ(x̄), and the proof finishes.
for all x ∈ X. Thus, the inclusion of Theorem 3 holds true without requiring the assumption μ(x) ∈ R.
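A hedged numeric illustration of Theorem 3 (the function ϕ below is my own toy choice, not an example from the paper): ϕ(x, y) = x²/2 + e^(−y) is convex, μ(x) = x²/2, and the infimum over y is attained at no point, so M(x) = ∅ for every x; nevertheless, every x* ∈ ∂_ε μ(x̄) satisfies (x*, 0) ∈ ∂_{ε+η} ϕ(x̄, y_η) for an η-approximate solution y_η.

```python
import numpy as np

# Toy convex problem with empty solution set:
#   phi(x, y) = x**2/2 + exp(-y),  mu(x) = inf_y phi(x, y) = x**2/2,  M(x) = {}.
# For xbar = 1, eps = 0.5 we have d_eps mu(xbar) = [xbar - 1, xbar + 1] = [0, 2].
# We check the inclusion (x*, 0) in d_{eps+eta} phi(xbar, y_eta) on a 2-D grid.

xbar, eps, eta = 1.0, 0.5, 0.1
y_eta = -np.log(eta)                       # exp(-y_eta) = eta, so y_eta in M_eta(xbar)
xs = np.linspace(-15.0, 15.0, 601)
ys = np.linspace(-5.0, 30.0, 601)
X, Y = np.meshgrid(xs, ys)
phi = lambda x, y: 0.5 * x**2 + np.exp(-y)

for xstar in np.linspace(xbar - np.sqrt(2 * eps), xbar + np.sqrt(2 * eps), 11):
    lhs = phi(X, Y) - phi(xbar, y_eta)
    rhs = xstar * (X - xbar) + 0.0 * (Y - y_eta) - (eps + eta)
    assert np.all(lhs >= rhs - 1e-9)       # (xstar, 0) is an (eps+eta)-subgradient
```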
Corollary 1 Suppose that ϕ is convex, μ is finite at x̄ ∈ X and M(x̄) ≠ ∅. Then, for every ε ≥ 0 and y ∈ M(x̄), one has

∂_ε μ(x̄) = {x* ∈ X* | (x*, 0) ∈ ∂_ε ϕ(x̄, y)}.

Proof Since y ∈ M(x̄) ⊂ M_η(x̄) for all η > 0, we can apply equality (8) to ȳ_η := y and we obtain

∂_ε μ(x̄) = ∩_{η>0} {x* ∈ X* | (x*, 0) ∈ ∂_{ε+η} ϕ(x̄, y)} = {x* ∈ X* | (x*, 0) ∈ ∂_ε ϕ(x̄, y)},

where the last equality follows from ∩_{η>0} ∂_{ε+η} ϕ(x̄, y) = ∂_ε ϕ(x̄, y), and the result is proved.
Let us illustrate with a simple example that Theorem 3 and Corollary 1 also work for a nonconvex function ϕ (see Remark 1 and the paragraph just before Theorem 3).
In addition, it is not hard to check the corresponding expressions. Next we derive ∂_ε μ(x) via formula (8) in the cases x < 0 and ε ≥ 0, or x ≥ 0 and ε ≥ 1. To compute ∂_ε μ(x) via Corollary 1, observe the form of M(x); then the formula follows for all x ≥ 0, y ∈ M(x) and ε ≥ 0.

Differential Stability of Constrained Convex Optimization Problems
Let (X, ‖·‖_X) and (Y, ‖·‖_Y) be two Banach spaces and let ϕ : X × Y → R̄ be an extended real-valued function. Let G : X ⇒ Y be a set-valued map. The graph and the domain of G are given, respectively, by

gph G := {(x, y) ∈ X × Y | y ∈ G(x)},   dom G := {x ∈ X | G(x) ≠ ∅}.

Consider the parametric optimization problem under an inclusion constraint

(P^c_x)   min_{y ∈ G(x)} ϕ(x, y),

depending on the parameter x ∈ X. The multifunction G is called the constraint multifunction of (P^c_x). The optimal value function μ^c : X → R̄ of (P^c_x) is

μ^c(x) := inf_{y ∈ G(x)} ϕ(x, y).

The usual convention inf ∅ = +∞ forces μ^c(x) = +∞ for every x ∉ dom G. The solution map M^c : X ⇒ Y of (P^c_x) is given by M^c(x) := {y ∈ G(x) | ϕ(x, y) = μ^c(x)}. For each η > 0, the approximate solution map M^c_η : X ⇒ Y of (P^c_x) is given by M^c_η(x) := {y ∈ G(x) | ϕ(x, y) ≤ μ^c(x) + η}.

Proof We apply Theorem 2 with m = 2, where f_1 and f_2 play the roles of the functions ϕ and δ_{gph G} (the indicator function of gph G), respectively. Hence (x*, y*) ∈ ∂_γ(ϕ + δ_{gph G})(x̄, ȳ) if and only if there exist γ_1, γ_2 ≥ 0, and nets converging to (x̄, ȳ), and the result is proved, since ∂_{γ_2} δ_{gph G}(x_{2,α}, y_{2,α}) = N_{γ_2}((x_{2,α}, y_{2,α}), gph G) and δ_{gph G}(x_{2,α}, y_{2,α}) = δ_{gph G}(x̄, ȳ) = 0.
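Since ∂_ε δ_C(x̄) coincides with the set of ε-normals N_ε(x̄, C) = {x* | ⟨x*, x − x̄⟩ ≤ ε for all x ∈ C}, this object is easy to visualize in one dimension. A hedged sketch (C = [0, 1] and x̄ = 1 are my own choices, not from the paper), where N_ε(1, [0, 1]) = [−ε, +∞):

```python
import numpy as np

# The eps-subdifferential of the indicator function delta_C at xbar equals the
# eps-normal set N_eps(xbar, C) = {s : s*(x - xbar) <= eps for all x in C}.
# For C = [0, 1] and xbar = 1 this set is [-eps, +infinity).

C = np.linspace(0.0, 1.0, 1001)
xbar, eps = 1.0, 0.25

def is_eps_normal(s, tol=1e-9):
    return bool(np.all(s * (C - xbar) <= eps + tol))

assert is_eps_normal(-eps)            # left endpoint of N_eps(1, [0, 1])
assert is_eps_normal(100.0)           # any s >= 0 works at the right endpoint of C
assert not is_eps_normal(-eps - 0.01) # just below the endpoint the inequality fails
```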
We are now in a position to formulate the main result of this section.
We now give the relationship between qualification condition (14) in Corollary 2 and the qualification conditions in [3, Theorem 4.3]. The first assertion is obvious and the second one follows from [29, Lemma 1.58]. If, in addition, ϕ and G are convex, then qualification condition (14) is equivalent to saying that

Vector Optimization Problems
In this section, differential stability properties of convex vector optimization problems are obtained. Namely, in unconstrained problems, the ε-subdifferential of the infimal value mapping is characterized in terms of ε-subdifferentials of linear scalarizations of the problem.
Consider the parametric unconstrained vector optimization problem

(V P_x)   Min_{y ∈ Y} f(x, y),

where X, Y and Z are real locally convex Hausdorff topological linear spaces, and f : X × Y → Z is a vector-valued function. The final space Z is ordered by a convex cone D, which is assumed to be proper (D ≠ Z) and solid (int D ≠ ∅). We denote, for z_1, z_2 ∈ Z,

z_1 ≤_D z_2 :⟺ z_2 − z_1 ∈ D,   z_1 <_D z_2 :⟺ z_2 − z_1 ∈ int D.

In addition, D^+ stands for the (positive) polar cone of D:

D^+ := {λ ∈ Z* | λ(d) ≥ 0, ∀d ∈ D}.

Consider a nonempty set A ⊂ Z and E ∈ D. A point ā ∈ A is said to be a weak (respectively, E-weak) minimal point of A if there is no point a ∈ A satisfying a <_D ā (respectively, a <_D ā − E). The set of all weak (respectively, E-weak) minimal points of A is denoted by WMin(A, D) (respectively, WMin(A, D, E)). We say that a nonempty set A ⊂ Z is D-proper if it satisfies the properness condition with respect to the cone D introduced in [16, Definition 2.15]. This concept defines a rather general kind of lower boundedness with respect to the ordering ≤_D (see [15, Section 3]). Notice that A is D-proper if and only if cl A is D-proper.
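For a finite set and D = R²_+, weak minimality reduces to the absence of a componentwise strictly smaller point, which is easy to test; the data below are my own illustration, not an example from the paper.

```python
import numpy as np

# Weak minimal points of a finite set A in R^2 ordered by D = R^2_+:
# abar is weak minimal iff no a in A satisfies a < abar in both coordinates
# (i.e. a - abar lies in -int R^2_+).

A = np.array([[0.0, 3.0], [1.0, 1.0], [2.0, 2.0], [3.0, 0.0]])

def wmin(A):
    keep = []
    for abar in A:
        dominated = bool(np.any(np.all(A < abar, axis=1)))
        if not dominated:
            keep.append(abar)
    return np.array(keep)

W = wmin(A)
# (2, 2) is strictly dominated by (1, 1); the other three points are weak minimal.
assert W.shape == (3, 2)
```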
For convenience, the final space Z is extended to Z̄ := Z ∪ {−∞_D, +∞_D}, where +∞_D (respectively, −∞_D) stands for the greatest (respectively, least) element of Z̄ with respect to the ordering ≤_D. In addition, we assume −∞_D ≤_D z ≤_D +∞_D and z ± ∞_D = ±∞_D + z = ±∞_D for all z ∈ Z. As a result, for each nonempty set A ⊂ Z̄ and E ∈ D, the sets WMin(A, D, E) and WMin(A, D) are defined in the natural way. In order to deal with an optimal value function corresponding to problem (V P_x), a notion of infimal point is required. Here we consider a concept linked with D-proper sets, in the sense that these sets always have infimal points under a cone closedness assumption. Namely, a nonempty set A ⊂ Z is said to be D-closed if A + D is a closed set. For all z* ∈ Z* we denote

argmin_A z* := {ā ∈ A | z*(ā) ≤ z*(a), ∀a ∈ A}.

Lemma 2 Consider a nonempty set A ⊂ Z. The next properties are true. (iii) Assume that A is D-proper and cl A is D-closed. Then, WMin(cl A, D) ≠ ∅. (iv) Assume that cl A is D-closed. Then, WMin(cl A, D) ≠ ∅ if and only if A is D-proper.

Remark 4
The cone closedness assumption in Lemma 2(iii) cannot be dropped. Consider, for instance, the set A = {(x, e^x) ∈ R² : x ∈ R}. We have that WMin(A, R²_+) = ∅ and A is closed and R²_+-proper, but it is not R²_+-closed; indeed, A + R²_+ = R × (0, +∞), which is not closed.
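The two claims of the counterexample can be checked numerically (a hedged sketch; the sampling choices are mine): every point of A is strictly dominated by another point of A, so WMin(A, R²_+) = ∅, and A + R²_+ fails to be closed because the boundary points (x, 0) are limits of the set without belonging to it.

```python
import numpy as np

# A = {(x, e^x)}: each point (x, e^x) is strictly dominated componentwise by
# (x - 1, e^(x - 1)), another point of A, so A has no weak minimal point.
for x in np.linspace(-50.0, 50.0, 101):
    a = np.array([x, np.exp(x)])
    b = np.array([x - 1.0, np.exp(x - 1.0)])
    assert np.all(b < a)                 # b - a lies in -int R^2_+

# A + R^2_+ is not closed: the points (t, e^t) + (-t, 0), t -> -infinity, all lie
# in A + R^2_+, have first coordinate 0, and converge to (0, 0), which is not in the set.
t = -np.arange(1.0, 40.0)
pts = np.stack([np.zeros_like(t), np.exp(t)], axis=1)
assert np.all(pts[:, 1] > 0)             # every point lies strictly above y = 0
assert pts[-1, 1] < 1e-16                # yet the second coordinate tends to 0
```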
The next notion of infimal set is a weak version of the one introduced in [6,12].

Definition 3
The weak infimal set of a nonempty set A ⊂ Z is defined to be

Remark 5 (i) According to Lemma 2(iv), it follows that D-properness is the lower boundedness condition that guarantees the existence of infimal points in the class of sets whose closure is D-closed. (ii) In the literature on differentiability properties of the optimal value function of a parametric family of vector optimization problems (see [23,24,33,34,36] and the references therein), the so-called domination property is always assumed: the range of each problem in the family is included in the conical extension of the set of optimal values. For instance, in the setting of problem (V P_x), where the concept of weak minimality is considered to define its solutions and optimal values, that assumption is formulated as follows: Clearly, that assumption implies the existence of the considered solutions, which is too strong a condition in the study of the optimal value function (see [3,28]). For instance, in Example 2 such an assumption is not satisfied, since no problem in the parametric family has weak efficient solutions.
The differentiability properties of this paper overcome this drawback, since infimal solutions are considered. Notice that requirement (21) is replaced with assertion (18), which involves cl A instead of the set A.
By means of the concept of weak infimal set, we define the weak optimal value mapping M : X ⇒ Z̄ of problem (V P_x). The notion of E-weak subdifferential of a set-valued function is required (see [35]) in order to state differential stability properties of the mapping M. Denote by L(X, Z) the set of all continuous linear functions from X to Z, and consider a set-valued mapping F : X ⇒ Z and points E ∈ D and (x̄, z̄) ∈ gph F with z̄ ∈ Z. We say that T ∈ L(X, Z) is an E-weak subgradient for z̄ of F at x̄ when the corresponding inequality holds; the set of all such E-weak subgradients is called the E-weak subdifferential for z̄ of F at x̄. When E = 0, it reduces to the subdifferential for z̄ of F at x̄ (see [32, Definition 6.2.8], [34] and [10, Definition 7.4.2(c)]). Consider a point x̄ ∈ X satisfying M(x̄) ≠ {−∞_D}. For each z̄ ∈ M(x̄), q ∈ int D and η > 0, we denote by M_η(x̄, z̄, q) the corresponding set of approximate solutions. By Lemma 2(v), we have that M_η(x̄, z̄, q) ≠ ∅. For each λ ∈ Z*, μ_{λ∘f} stands for the optimal value function corresponding to the function ϕ = λ ∘ f.
In problems where the set cl f(x, Y) is D-closed for all x ∈ X, Theorems 5 and 6 allow us to characterize the E-weak subdifferential of the optimal value function. Next we show one of these characterizations, which is a direct consequence of the cone closedness criterion introduced in Remark 5. Indeed, consider a point y ∈ M_η(x̄, z̄, q). Then, f(x̄, y) ∈ z̄ + (−ηq + D) ∩ (ηq − D), so there exists d ∈ D such that f(x̄, y) = z̄ + ηq − d. Thus, for each η > 0 take a point y_η ∈ M_η(x̄, z̄, q). By Theorem 3 we have that

∂_{λ(E)} μ_{λ∘f}(x̄) = ∩_{η>0} {x* ∈ X* | (x*, 0) ∈ ∂_{η+λ(E)}(λ ∘ f)(x̄, y_η)},

and the result follows by applying Theorem 6.

Conclusions
In this paper several formulas are stated for computing the approximate subdifferential of the optimal value function of scalar and vector convex optimization problems whose solution sets may be empty. Notice that the scalar constrained convex optimization problems studied here are the same as those analyzed in [3]. However, in this paper we use a limiting approach to the differential stability of those problems. As a result, no regularity conditions are required.
In vector optimization problems, the optimal value function is set-valued and involves infimal points. Its differentiability properties involve an ε-subdifferential for vector functions introduced by Taa. They are formulated by means of ε-subgradients of linear scalarizations of the problem and ε-subgradients of the optimal value functions corresponding to such linear scalarizations.