Optimality Conditions in DC-Constrained Mathematical Programming Problems

This paper provides necessary and sufficient optimality conditions for abstract-constrained mathematical programming problems in locally convex spaces under new qualification conditions. Our approach exploits the geometrical properties of certain mappings, in particular their structure as differences of convex functions, and uses techniques of generalized differentiation (subdifferential and coderivative). It turns out that these tools can be used fruitfully outside the scope of Asplund spaces. Applications to infinite, stochastic and semidefinite programming are developed in separate sections.


Introduction
Mathematical programming has been recognized as one of the fundamental chapters of applied mathematics, since a huge number of problems in engineering, economics, management science, etc., involve an optimal decision-making process which gives rise to an optimization model. This fact has intensely motivated the theoretical foundations of optimization and the study and development of algorithms.
Among the main issues in mathematical optimization, optimality conditions play a key role in the theoretical understanding of solutions and their numerical computation. At first such conditions were established for linear or smooth optimization problems. Later developments in variational analysis allowed researchers to extend this theory to general nonlinear nonsmooth convex programming problems defined in infinite-dimensional spaces (see, e.g., [39]). In the same spirit, generalized differentiation became an intensive field of research with numerous applications to nonsmooth and nonconvex mathematical programming problems. Nevertheless, in order to provide such general calculus rules and compute optimality conditions, a certain degree of smoothness is required, obtained by working either in Banach spaces with a smooth norm or in Asplund spaces (see, e.g., [24-27, 34]). In this paper, we provide necessary and sufficient optimality conditions for a general class of optimization problems under new qualification conditions which constitute genuine alternatives to the well-known Slater constraint qualification. Our approach is based on the notions of the (regular) subdifferential and coderivative, and we show that these tools work outside the scope of Asplund spaces, not for the whole family of lower semicontinuous functions, but for the so-called class of B-DC mappings. This class of mappings is introduced in Definition 2.3 and constitutes a slight extension of the concept of DC functions/mappings.
The paper is focused on the study of (necessary and sufficient) optimality conditions for a constrained programming problem. First, we study the case where the constraint is of the form Φ(x) ∈ C, where C is a nonempty, closed and convex set in a vector space Y, and Φ is a vector-valued function from the decision space X into Y. Second, we study an abstract conic constraint, that is, the case when C = −K for a nonempty closed convex cone K. These abstract representations allow us to cover infinite, stochastic, and semidefinite programming problems.
With the general aim of establishing necessary and sufficient optimality conditions, we first introduce an extension of the concept of vector-valued DC mappings, also called δ-convex mappings, given in [38] (see also [21] for classical notions and further references). Our Definition 2.3 addresses two fundamental aspects of mathematical optimization. First, the convenience of using functions with extended real values and mappings which are not defined on the whole space has been widely recognized. This allows us to handle different classes of constraint systems from an abstract point of view and, to this purpose, we enlarge the space Y with an extra element ∞ Y (in particular, ∞ Y = +∞ whenever Y = R). Second, we consider specific scalarization sets for the mapping Φ, which vary along dual directions with respect to the set involved in the constraint; more specifically, directions in the polar set of C, or in the positive polar cone of K, respectively (see the definitions below).
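For intuition, here is a classical scalar instance of a DC decomposition (our own illustration, not taken from the paper): any twice continuously differentiable function whose Hessian is bounded below is DC with a quadratic control function:

```latex
% If f \in C^2(\mathbb{R}^n) and \nabla^2 f(x) \succeq -\rho I for all x
% (with \rho \ge 0), then both summands below are convex, so f is DC with
% control h(x) = (\rho/2)\|x\|^2:
f(x) \;=\; \underbrace{\Bigl(f(x) + \tfrac{\rho}{2}\|x\|^{2}\Bigr)}_{\text{convex}}
\;-\; \underbrace{\tfrac{\rho}{2}\|x\|^{2}}_{\text{convex}}.
```

This is the kind of structure that Definition 2.3 below generalizes to extended-valued, vector-valued mappings scalarized along a set of dual directions.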
The aforementioned notions make it possible to exploit the geometrical properties of mappings (convexity) combined with tools taken from variational analysis and generalized differentiation. Using these tools, we obtain necessary and sufficient optimality conditions for abstract optimization problems defined on general locally convex spaces under new qualification conditions, which represent alternatives to classical notions.
The paper is organized as follows. In Section 2 we introduce the necessary notation and preliminary results; we give the definition of the set Γ h (X, Y ) and introduce the tools of generalized differentiation. Together with those notions, we provide the first calculus rules and machinery needed in the paper, which constitute the workhorse in the formulation of our optimality conditions. In Section 3 we deal with a constraint of the form Φ(x) ∈ C; we transform the problem into an unconstrained mathematical program, where the objective function is a difference of two convex functions, and this reformulation yields necessary and sufficient conditions of global and local optimality. The main result in this section, concerning global optimality, is Theorem 3.1; the result for necessary conditions of local optimality is Theorem 3.7, while sufficient conditions are given in Theorem 3.11. Later, in Section 4, we confine ourselves to studying problems with abstract conic constraints given by Φ(x) ∈ −K. In that section, the cone structure is exploited, and a set of scalarizations, generated by the positive polar cone K + (see Definition 4.1), is used. Appealing to that notion, and thanks to a suitable reformulation of the problem, we derive specific necessary and sufficient optimality conditions for conic programming problems. In particular, Theorem 4.3 presents global optimality conditions, and Theorems 4.5 and 4.7 are devoted to local optimality. In the final section, we apply our developments to establish ad hoc optimality conditions for fundamental problems in applied mathematics such as infinite, stochastic and semidefinite programming problems.

Notation and preliminary results
The paper uses the main notations and definitions which are standard in convex and variational analysis (see, e.g., [2, 24-27, 34, 39]).

Tools from convex analysis
In this paper X and Y are locally convex (Hausdorff) spaces (lcs, in brief) with respective topological duals X * and Y * . We denote by w(X * , X), w(Y * , Y ) the corresponding weak* topologies on X * and Y * . We enlarge Y by adding the element ∞ Y . The extended real line is denoted R := [−∞, +∞], and we adopt the convention +∞ − (+∞) = +∞. Given a set A ⊂ X, we denote by cl(A), int(A), co(A), cone(A) the closure, the interior, the convex hull and the convex cone generated by A, respectively. By 0 X we represent the zero vector in X, and similarly for the spaces Y , X * and Y * . For two sets A, B ⊂ X and λ ∈ R we define the following operations:

and
A ⊖ B := {x ∈ X : x + B ⊂ A}. In the previous operations we consider the following conventions: Given a set T , we represent the generalized simplex on T by ∆(T ), which is the set of all functions α : T → [0, 1] such that α t ≠ 0 only for finitely many t ∈ T and ∑ t∈T α t = 1; for α ∈ ∆(T ) we denote supp α := {t ∈ T : α t ≠ 0}. We also introduce the positive polar cone of A, given by A + := {x * ∈ X * : ⟨x * , x⟩ ≥ 0 for all x ∈ A}. We use similar notations for functions f : X → R ∪ {+∞}. We represent by Γ 0 (X) the set of all functions f : X → R ∪ {+∞} which are proper, convex and lower semicontinuous (lsc, in brief).
The continuity of functions and mappings will only be considered at points of their domains. Given a set A, the indicator function of A is δ A (x) := 0 if x ∈ A, and δ A (x) := +∞ otherwise. For ε ≥ 0, the ε-subdifferential (or approximate subdifferential) of a function f : X → R at a point x ∈ X where f (x) is finite is the set ∂ ε f (x) := {x * ∈ X * : f (y) ≥ f (x) + ⟨x * , y − x⟩ − ε for all y ∈ X}. The special case ε = 0 yields the classical (Moreau-Rockafellar) convex subdifferential, denoted by ∂f (x).
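As a worked one-dimensional illustration (our own example), the ε-subdifferential of the absolute value can be computed in closed form:

```latex
% For f(x) = |x| on X = \mathbb{R}:
\partial_\varepsilon f(0) = [-1,1] \quad \text{for every } \varepsilon \ge 0,
\qquad
\partial_\varepsilon f(\bar x)
  = \Bigl[\max\Bigl\{1-\tfrac{\varepsilon}{\bar x},\,-1\Bigr\},\;1\Bigr]
\quad (\bar x > 0).
% At \varepsilon = 0 the second set collapses to \{1\} = \{f'(\bar x)\},
% while for growing \varepsilon it enlarges continuously toward [-1,1].
```

The first identity follows since |y| ≥ x * y − ε for all y exactly when |x * | ≤ 1; the second by testing the defining inequality at y = 0 and y → ±∞.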
We finish this subsection by recalling the following alternative formulation of a general constrained optimization problem, which uses a maximum function. Since the proof follows standard arguments, we omit it. Lemma 2.1 Given the functions g, h : X → R and the nonempty set C ⊂ X, let us consider the optimization problem (3): min g(x). Assume that the optimal value, α, of problem (3) is finite. Then, x is an optimal solution of (3) if and only if x is an optimal solution of the optimization problem (4). Moreover, the optimal value of problem (4) is zero.

Remark 2.2
The function H : X × X → R is called the standard improvement function (see, e.g., [3]). In particular, the objective function used in problem (4) corresponds to the improvement function at y = x.
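For orientation, a common form of the improvement function in the literature (a hedged reconstruction following the cited reference [3], not necessarily the paper's exact display) is, for a problem min{g(x) : h(x) ≤ 0}:

```latex
% y plays the role of a reference point; at a feasible y, minimizing
% H(\cdot, y) penalizes simultaneously a worse objective value and
% constraint violation:
H(x,y) \;=\; \max\bigl\{\, g(x)-g(y),\; h(x) \,\bigr\}.
```

With this form, a feasible y is optimal precisely when H(·, y) attains its minimum value 0 at y, which matches the statement of Lemma 2.1.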

B-DC functions and basic properties
Next we introduce a new class of DC functions which constitutes the keystone of this paper. It extends the notion of DC vector-valued mappings introduced in [15] and is also related to the concept of delta-convex functions in [38].
Definition 2.3 Let X and Y be lcs and let h ∈ Γ 0 (X).
i) Consider a nonempty set B ⊂ Y * . We define the set of B-DC mappings with control h, denoted by Γ h (X, Y, B), as the set of all mappings F : X → Y ∪ {∞ Y } such that dom h ⊃ dom F and ⟨λ * , F ⟩ + h ∈ Γ 0 (X) for all λ * ∈ B. We also say that F is a DC mapping with control function h relative to B, or that F is controlled by h relative to B. ii) We represent by Γ h (X) the set of all functions f : X → R ∪ {+∞} such that dom f ⊂ dom h and f + h ∈ Γ 0 (X).
Remark 2.4 It is worth mentioning that Definition 2.3 corresponds to a natural extension of the notion used in [38] under the name of delta-convex functions, which concerns continuous mappings; moreover, by [38, Corollary 1.8] both definitions are equivalent when Y is finite dimensional. In [38], the focus is on analytic properties of vector-valued mappings defined on convex open sets. Here, according to the tradition in optimization theory, we deal with mappings which admit extended values. It is important to mention that Definition 2.3 i) reduces to the concept of DC mapping introduced in [38] when B is the unit ball of the dual space of Y , in the normed space setting. Moreover, if a real-valued function f is a DC mapping with control h, then necessarily −f is also DC with control h (see Proposition 2.5 c) below for more details).
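The scalar case of Definition 2.3 can be checked numerically. The sketch below (our own example; the function and control are illustrative choices, not from the paper) verifies on a grid that f = sin is DC with control h(x) = x²/2 on [−3, 3], by testing the midpoint-convexity inequality for both f + h and −f + h:

```python
import math

def is_midpoint_convex(f, grid):
    """Check f((a+b)/2) <= (f(a)+f(b))/2 on all pairs of grid points,
    a numerical proxy for convexity on a fine grid."""
    tol = 1e-9
    pts = list(grid)
    for a in pts:
        for b in pts:
            if f((a + b) / 2) > (f(a) + f(b)) / 2 + tol:
                return False
    return True

h = lambda x: 0.5 * x * x      # candidate control function
f = math.sin                   # f'' = -sin >= -1 = -h'', so f + h and -f + h are convex

grid = [-3 + 6 * k / 200 for k in range(201)]

# Both checks should pass, witnessing that sin is DC with control h:
print(is_midpoint_convex(lambda x: f(x) + h(x), grid))    # True
print(is_midpoint_convex(lambda x: -f(x) + h(x), grid))   # True
# sin alone is not convex (it is concave on [0, pi]):
print(is_midpoint_convex(f, grid))                        # False
```

The test is only a sufficient sanity check on a grid; the rigorous argument uses the second-derivative bound noted in the comments.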
The next proposition gathers some elementary properties of the class Γ h (X, Y, B).
Proof a) It follows from the fact that dom F = dom (⟨λ * , F ⟩ + h) for every λ * ∈ B. b) Let F := ∑ p i=1 F i and λ * ∈ B. Then, for all x ∈ X we have that ⟨λ * , F ⟩ + h = ∑ p i=1 (⟨λ * , F i ⟩ + h i ), which is a proper, convex and lsc function. c) It follows from the fact that ⟨λ * , −F ⟩ + h = ⟨−λ * , F ⟩ + h and −λ * ∈ B, due to the symmetry of B.

Generalized differentiation and calculus rules
In this subsection, we introduce the notation necessary to deal with nonsmooth and nonconvex functions and mappings, and we develop some calculus rules.
The following notions are based on classical bornological constructions in Banach spaces (see, e.g., [5, 6, 18, 24] for more details and similar constructions). Given a locally convex space X, we consider β(X), the family of all bounded sets of X (i.e., those sets on which every seminorm generating the topology of X is bounded). We simply write β when there is no ambiguity about the space. Definition 2.6 We say that g : X → R is β-differentiable at x if there exists x * ∈ X * such that, for every S ∈ β, (g(x + th) − g(x))/t − ⟨x * , h⟩ → 0 as t → 0 + , uniformly for h ∈ S. It is not difficult to see that when such x * exists, it is unique. In that case, and following the usual notation, we simply write ∇g(x) = x * . Here it is important to recall that in a general locally convex space X, the differentiability of a function does not imply its continuity. For instance, the square of the norm in any infinite-dimensional Hilbert space is Fréchet differentiable, but not weakly continuous (see, e.g., [7]).
Definition 2.7 The regular (Fréchet) subdifferential of f : X → R at x ∈ X, with |f (x)| < +∞, is the set, denoted by ∂f (x), of all x * such that, for every S ∈ β, lim inf t↓0 inf h∈S [f (x + th) − f (x) − t⟨x * , h⟩]/t ≥ 0. For a point x ∈ X where |f (x)| = +∞ we simply set ∂f (x) = ∅. Now, let us formally prove that the regular subdifferential coincides with the classical convex subdifferential for functions in Γ 0 (X).
Lemma 2.8 Let f ∈ Γ 0 (X). Then the regular subdifferential of f coincides with the classical convex subdifferential of f at every x ∈ X.
Proof Since the convex subdifferential is obviously contained in the regular one, we focus on the opposite inclusion. Let x * be in the regular subdifferential of f at x, which implies that |f (x)| < +∞. Now, consider y ∈ X and ε > 0 arbitrary, and let S ∈ β be such that h = y − x ∈ S. Hence, for small enough t ∈ (0, 1), the inequality f (x + th) ≥ f (x) + t⟨x * , h⟩ − εt holds, so using the convexity of f we obtain that f (y) − f (x) − ⟨x * , y − x⟩ ≥ −ε; then, letting ε → 0, we have that f (y) − f (x) ≥ ⟨x * , y − x⟩, which, by the arbitrariness of y ∈ X, implies the result.
The following sum rule is applied in the paper, and we provide its proof for completeness.
Lemma 2.9 Let x ∈ X, and let g : X → R be differentiable at x. Then, for any function f : X → R we have ∂(f + g)(x) = ∂f (x) + ∇g(x). Proof Suppose that x * ∈ ∂(f + g)(x) and S ∈ β. Subtracting the differentiability expansion of g from the defining inequality for x * , we see that x * − ∇g(x) ∈ ∂f (x). To prove the converse inclusion it is enough to notice that ∂f (x) + ∇g(x) = ∂((f + g) + (−g))(x) + ∇g(x) ⊂ ∂(f + g)(x) − ∇g(x) + ∇g(x) = ∂(f + g)(x), where the inclusion follows from the previous part applied to f + g and −g, and that ends the proof.
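A one-dimensional sanity check of the sum rule (our own example): take f = |·| and g = (·)² on ℝ, so that g is differentiable everywhere with ∇g(0) = 0:

```latex
\widehat{\partial}(f+g)(0) \;=\; \widehat{\partial} f(0) + \nabla g(0)
\;=\; [-1,1] + \{0\} \;=\; [-1,1],
% which agrees with the convex subdifferential of x \mapsto |x| + x^2 at 0
% (Lemma 2.8), since this function is convex.
```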
Next, we employ the regular subdifferential to provide machinery to differentiate nonsmooth vector-valued mappings.
Definition 2.10 Given a mapping F : X → Y ∪ {∞ Y } and x ∈ dom F , we define the regular coderivative of F at x as the set-valued map D * F (x) : Y * ⇒ X * given by D * F (x)(y * ) := ∂ (⟨y * , F ⟩) (x), where ⟨y * , F ⟩ is the function defined in (2).
This operator is positively homogeneous. In particular, definition (7) coincides with the general construction of the regular coderivative of set-valued mappings on Banach spaces when F is calm at x, that is, when ∥F (u) − F (x)∥ ≤ ℓ∥u − x∥ for some ℓ > 0 and all u close to x (see, e.g., [19, Proposition 1.32]).
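For orientation (our own remark, not a claim from [19]): when F is smooth at x, the scalarization ⟨y * , F ⟩ is differentiable as well, and the regular coderivative collapses to the adjoint of the derivative:

```latex
% F smooth at x, between normed spaces, with derivative DF(x):
D^{*}F(x)(y^{*}) \;=\; \widehat{\partial}\,\langle y^{*},F\rangle(x)
\;=\; \{\nabla \langle y^{*},F\rangle(x)\}
\;=\; \{DF(x)^{*}y^{*}\},
\qquad y^{*}\in Y^{*},
```

so in the smooth case the coderivative encodes exactly the transposed Jacobian familiar from classical Lagrangian conditions.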
The following lemma yields a sum rule for functions in Γ h (X).
Proof On the one hand, by Lemma 2.9 we obtain one inclusion. On the other hand, since the convex function f 1 + h is continuous at some point of the domain of f 2 + h, we get the opposite inclusion (see, e.g., [39, Theorem 2.8.7]), where in the last step Lemma 2.9 is used again; this concludes the proof.
Next, we present some calculus rules for the subdifferential of an extended real DC function.
Proposition 2.12 [23, Theorem 1] Let g, h ∈ Γ 0 (X) be such that both are finite at x. Then g − h attains a global minimum at x if and only if ∂ η h(x) ⊂ ∂ η g(x) for all η ≥ 0.
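To make this criterion concrete, consider the following scalar example (ours): g(x) = x²/2 and h(x) = |x|, so g − h attains its global minimum at x̄ = ±1. At x̄ = 1 the inclusion of Proposition 2.12 can be verified directly:

```latex
\partial_\eta g(1) = \bigl[\,1-\sqrt{2\eta},\; 1+\sqrt{2\eta}\,\bigr],
\qquad
\partial_\eta h(1) = \bigl[\max\{1-\eta,\,-1\},\; 1\bigr]
\;\subset\; \partial_\eta g(1)
\quad \text{for every } \eta \ge 0,
% since \sqrt{2\eta} \ge \eta for \eta \le 2 and \sqrt{2\eta} \ge 2 for \eta \ge 2.
```

By contrast, at x̄ = 0 the inclusion already fails for η = 0, since ∂h(0) = [−1, 1] ⊄ {0} = ∂g(0); this correctly detects that 0 is not a minimizer of g − h.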
The following result characterizes the ε-subdifferential of the supremum function of an arbitrary family of functions.
where cl * represents the closure with respect to the w * -topology.
The following calculus rules play a key role in our analysis. Given a mapping F : X → Y ∪ {∞ Y } and ε ≥ 0, we consider the set C ε (x). For ε = 0, we simply write C(x) := C 0 (x).
Theorem 2.14 Let F : X → Y ∪ {∞ Y } and let C be a convex and compact subset of Y * with respect to the w * -topology. Let g ∈ Γ 0 (X) be such that for all λ * ∈ C the function ⟨λ * , F ⟩ + g ∈ Γ 0 (X). Then, for every ε ≥ 0 and all x ∈ X, identity (9) holds. Proof First let us show the inclusion ⊃ in (9), so consider x * in the right-hand side of (9). Second, let us consider T = C and the family of functions f t = ⟨t, F ⟩ + g for t = λ * . Then, by Proposition 2.13 we obtain (10), where we have simplified (8) using the convexity of C and the linearity of the map λ * → ⟨λ * , F (w)⟩ for all w ∈ X. In fact, for (α t ) t∈T ∈ ∆(T ) we set λ * := ∑ t∈supp α α t t ∈ C and η := ∑ t∈T α t η t . Now, consider x * ∈ ∂ ε f (x), so that by (10) there exists a net γ ℓ → ε. Hence, by the compactness of C we can pass to convergent subnets, so taking limits in ℓ, and using the arbitrariness of y ∈ X, we conclude that x * belongs to ∂ η (⟨λ * , F ⟩ + g) (x).

DC mathematical programming
This section is devoted to establishing necessary and sufficient conditions for general DC mathematical programming problems. More precisely, we consider the optimization problem (11): minimize ϕ(x) subject to Φ(x) ∈ C, where ϕ : X → R ∪ {+∞}, Φ : X → Y ∪ {∞ Y } is a vector-valued mapping, and C ⊂ Y is a closed convex set. This section has two parts, devoted to global and local optimality conditions, respectively.

Global optimality conditions
Let us establish our main result in this section.
Theorem 3.1 Suppose that Φ ∈ Γ h (X, Y, C • ) and ϕ ∈ Γ h (X) for some h ∈ Γ 0 (X), and suppose that one of the following conditions holds: Then, if x is an optimal solution of the optimization problem (11), inclusion (12) holds, where the union is taken over all η 1 , η 2 ≥ 0, (α 1 , α 2 ) ∈ ∆ 2 and λ * ∈ C • satisfying the corresponding relations. Conversely, assume that x is a feasible point of (11) and that (12) always holds with α 1 > 0; then x is a solution of (11) relative to dom ∂h, that is, x is an optimum of min{ϕ(x) : x ∈ Φ −1 (C) ∩ dom ∂h}. Remark 3.2 (before the proof) It is important to note that the assumption that C • is w * -compact is not restrictive. Indeed, whenever there exists some z 0 ∈ C such that C • z0 := (C − z 0 ) • is w * -compact, Theorem 3.1 can easily be translated in terms of the mapping Φ̂(x) := Φ(x) − z 0 and the set Ĉ := C − z 0 . Moreover, according to the Banach-Alaoglu-Bourbaki theorem, in order to guarantee that C • z0 is w * -compact it is enough to suppose that z 0 ∈ int(C). More precisely, [8] establishes that z 0 belongs to the interior of C with respect to the Mackey topology if and only if C • z0 is w * -compact. Here, it is important to mention that there are several relations between the w * -compactness of C • z0 and the nonemptiness of the interior of C with respect to the Mackey topology (see, e.g., [8, 20] for more details), and they can be connected even with the classical James theorem (see [31]) and other variational and geometric properties of functions (see [9, 10, 14]).
Proof First let us suppose that x is a solution of (11) and let α be the optimal value of the optimization problem (11). In the first part we prove two claims. Claim 1: First we prove that (14) holds, where ψ and f are the functions defined in (15) and (16). Indeed, let us first notice that, by the bipolar theorem, Φ(x) ∈ C if and only if ⟨λ * , Φ(x)⟩ ≤ 1 for all λ * ∈ C • . Therefore, by Lemma 2.1, the optimization problem (11) has the same optimal solutions as the associated unconstrained DC problem. Hence, by Proposition 2.12, we have that x is a solution of (11) if and only if (14) holds.
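The multiplier structure behind Theorem 3.1 can be checked on a toy finite-dimensional instance of problem (11) (all data below are our own illustration): X = Y = ℝ, ϕ(x) = x, Φ(x) = x² − 1 and C = (−∞, 0], so the feasible set is [−1, 1]. A brute-force grid search confirms the minimizer, and classical Lagrange stationarity ϕ′(x̄) + λΦ′(x̄) = 0 at the active constraint produces a nonnegative multiplier:

```python
# Toy instance of problem (11): minimize phi(x) = x subject to
# Phi(x) = x**2 - 1 <= 0 (i.e. Phi(x) in C = (-inf, 0]).
# All names and data are illustrative, not from the paper.

def phi(x):
    return x

def Phi(x):
    return x * x - 1.0

# Brute-force search over a fine grid of candidates in [-2, 2].
candidates = [-2.0 + 4.0 * k / 4000 for k in range(4001)]
best = min((x for x in candidates if Phi(x) <= 1e-12), key=phi)
print(best)   # -1.0: the constraint is active at the minimizer

# Stationarity phi'(x) + lam * Phi'(x) = 0 with phi'(x) = 1, Phi'(x) = 2x
# gives lam = -1 / (2 * best), which is nonnegative as expected.
lam = -1.0 / (2.0 * best)
print(lam)    # 0.5
```

Of course, the theorem's content in the paper concerns nonsmooth, infinite-dimensional data; this sketch only illustrates the sign and activity pattern of the multipliers in the simplest smooth case.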
Remark 3.3 Let us briefly comment on some facts about the last result: i) with η 3 := 1 − ⟨λ * , Φ(x)⟩ ≥ 0, the existence of multipliers can be equivalently described in terms of the ε-normal set defined in (1). Nevertheless, we prefer to use the set C • in order to take advantage of the compactness of this set, which will be exploited later.
ii) The converse in Theorem 3.1 can be proved assuming only that (12) holds with multiplier α 1 > 0 for all η ∈ [0, η̄], for a suitable threshold η̄. iii) It is worth mentioning that, due to the assumption of the existence of a continuity point of ϕ + h, we can prove that ϕ and h are bounded above on a neighbourhood of a point of their domain. Indeed, suppose that ϕ + h is bounded above on a neighbourhood U of x 0 , that is, ϕ(x) + h(x) ≤ M for all x ∈ U and some scalar M . Since ϕ and h are lsc at x 0 , we can assume (shrinking the neighbourhood U if necessary) that inf x∈U ϕ(x) > −m and inf x∈U h(x) > −m for some constant m ∈ R, which implies that ϕ(x) ≤ M + m and h(x) ≤ M + m for all x ∈ U . Hence, due to the convexity of the involved functions, ϕ and h are continuous on int dom ϕ and int dom h, respectively (see, e.g., [39, Theorem 2.2.9]). In particular, the latter implies that dom ∂h ⊃ int dom h is dense in dom ϕ (recall dom ϕ ⊂ dom h).
Now, let us establish the following corollary for the case when problem (11) is convex; the proof follows directly from Theorem 3.1, so we omit the details. Then, if x is an optimal solution of the optimization problem (11), there exist (α 1 , α 2 ) ∈ ∆ 2 and λ * ∈ C • such that (22) holds. Conversely, if x is a feasible point of (11) and (22) holds with α 1 > 0, then x is a solution of (11).
The following result shows that the fulfilment of (12) with α 1 ≥ ε 0 , for some ε 0 > 0, can be used to establish that x is a solution of problem (11).
Theorem 3.5 In the setting of Theorem 3.1, suppose that x is a feasible point of (11) and that (12) always holds with α 1 ≥ ε 0 , for some ε 0 > 0. Then x is an optimal solution of (11).
Corollary 3.6 Let Q ⊂ X and C ⊂ Y be closed and convex sets with 0 Y ∈ C. Additionally, assume that there exists a point in Q at which ϕ, Φ and h are continuous. Then, if x is an optimal solution of the optimization problem (23), inclusion (24) holds, where the union is taken over all η 1 , η 2 , η 3 ≥ 0, (α 1 , α 2 ) ∈ ∆ 2 and λ * ∈ C • satisfying the corresponding relations. Conversely, assume that x is a feasible point of (23) and that (24) always holds with α 1 > 0; then x is a solution of (23) relative to dom ∂h.
Proof Let us observe that the optimization problem (23) is equivalent to (25), where Φ Q is given by (26). Furthermore, it is easy to prove that ⟨λ * , Φ Q ⟩ = ⟨λ * , Φ⟩ + δ Q for every λ * ∈ C • . Then we apply Theorem 3.1 to the optimization problem (25) (notice that ϕ + h is continuous at some point of dom Φ Q ) and we use the sum rule for the ε-subdifferential (see, e.g., [39, Theorem 2.8.3]) to compute the ε-subdifferential of ⟨λ * , Φ⟩ + h + δ Q in terms of the corresponding subdifferentials of ⟨λ * , Φ⟩ + h and δ Q (here recall that ⟨λ * , Φ⟩ and h are continuous at some point of Q).

Local optimality conditions
In this section we present necessary and sufficient conditions for local optimality in problem (11). The first result corresponds to a necessary optimality condition.
Theorem 3.7 In the setting of Theorem 3.1, let x be a local optimal solution of the optimization problem (11) and suppose that h is differentiable at x. Then, there are multipliers (α 1 , α 2 ) ∈ ∆ 2 and λ * ∈ C • such that (27) holds. In addition, if the qualification condition (28) holds, then we obtain (29). Proof Consider a closed convex neighbourhood U of x such that x is a global optimum of the localized problem (30): min ϕ(x) over the feasible points in U . Following the proof of Theorem 3.1, we can prove that x is a solution of the associated unconstrained minimization problem, where α is the optimal value of (30) and f is defined in (16). Now, applying the Fermat rule and using Proposition 2.12 (with ε = η = 0), we have that x satisfies the subdifferential inclusion (31), where ψ is the function introduced in (15). Since U is a neighbourhood of x, we have that ∂ (ψ + δ U ) (x) = ∂ψ(x). Moreover, using Claim 2 of Theorem 3.1, we conclude the existence of suitable multipliers, and by Lemmas 2.8 and 2.9 (recall that h is differentiable at x) we can compute the corresponding subdifferentials. Therefore, inclusion (31) reduces to (27). Now, (27) gives us the existence of (α 1 , α 2 ) ∈ ∆ 2 and λ * ∈ C • with α 2 (1 − ⟨λ * , Φ(x)⟩) = 0 such that (27) holds. Moreover, by the qualification condition (28), we have that α 1 ≠ 0. Therefore, dividing by α 1 we get (29).
Remark 3.8 (On normality of the multiplier λ * ) It is important to emphasize that the conditions λ * ∈ C • and ⟨λ * , Φ(x)⟩ = 1 imply that λ * ∈ N C (Φ(x)). Therefore, in Theorem 3.7 the multiplier λ * is necessarily a normal vector to C at Φ(x). Furthermore, the qualification condition (28) admits an equivalent reformulation. Here, the equality ⟨λ * , Φ(x)⟩ = 1 is relevant because, without it, 0 X * would always belong to the corresponding set. Remark 3.9 (On abstract differentiability) It is worth mentioning that in Theorem 3.7 above, differentiability and subdifferentiability can be exchanged for more general notions using an abstract concept of subdifferential. Indeed, based on the notion of presubdifferential (see, e.g., [35, 36]), we can adapt the definition there in the following way: for every x ∈ X consider a family of functions F x which are finite-valued at x. Now, consider an operator ∂ which associates to any lower semicontinuous function f : X → R and any x ∈ X a subset ∂f (x) of X * with the following properties: i) ∂f (x) = ∅ for all x where |f (x)| = +∞. ii) ∂f (x) is equal to the convex subdifferential whenever f is proper, convex and lower semicontinuous. iii) ∂φ(x) is single-valued for every x ∈ X and φ ∈ F x ; in that case φ is called ∂-differentiable at x, and we represent by ∇φ(x) the unique point in ∂φ(x). iv) For every x ∈ dom f and φ ∈ F x , a compatibility (sum) rule linking ∂(f + φ)(x), ∂f (x) and ∇φ(x) holds. The above notion covers several classes of subdifferentials, for instance: 1) bornological subdifferentials (Fréchet, Hadamard, Gateaux, etc.) with F x the family of differentiable functions at x with respect to that bornology; 2) viscosity bornological subdifferentials with F x the family of smooth functions (with respect to that bornology) at x; 3) the proximal subdifferential with F x the family of C 2 -functions at x; 4) the basic subdifferential with F x the family of C 1 -functions at x.
Using the above definition, we can define the notion of ∂-coderivative similarly to (7), given by D * Φ(x)(y * ) := ∂ (⟨y * , Φ⟩) (x). Using these tools, it is easy to adapt the proof of Theorem 3.7, requiring that the convex function h belong to F x . In this way, the corresponding inclusion in Theorem 3.7 is obtained with ∂ and D * replacing the regular subdifferential and coderivative, respectively.
Therefore, the assumption on the operator ∂ and the family F x at the optimal point x corresponds to a trade-off between the differentiability of the data (h ∈ F x ) and the robustness of the objects ∂ and D * : the weaker the notion of differentiability, the larger the objects for which the optimality conditions are stated. One might therefore think it best to directly assume a high level of smoothness of h at x. Nevertheless, for infinite-dimensional applications such smoothness does not always hold (see Example 5.3 below).
Similarly to Theorem 3.7, we provide a necessary optimality condition to problem (23).
Corollary 3.10 In the setting of Corollary 3.6, let x be a local optimal solution of problem (23) and suppose that h is differentiable at x. Then, provided that the qualification condition (33) holds, there exist η ≥ 0 and λ * ∈ C • such that ⟨λ * , Φ(x)⟩ = 1 and (32) holds. Proof Following the proof of Corollary 3.6, we have that the optimization problem (25) has a local optimal solution at x. Then, by Theorem 3.7 we get (34), where Φ Q is defined in (26), provided that the qualification condition (35) holds. Now, using Lemma 2.11, we have that (34) and (35) reduce to (32) and (33), which concludes the proof.
The final result of this section shows that the fulfilment of inclusion (12) for all small η ≥ 0 is sufficient for a point to be a local optimum of problem (11). Theorem 3.11 Let x be a feasible point of the optimization problem (11) which satisfies the subdifferential inclusion (12) for all η small enough. In addition, suppose that C • is weak * -compact, that h is continuous at x, and that the qualification condition (36) holds. Then, x is a local solution of (11).
Proof First, we claim that x satisfies the subdifferential inclusion (12) with multiplier α 1 ≠ 0 for all η ≥ 0 small enough. Indeed, suppose by contradiction that there are sequences η n , η ′ n → 0 + and corresponding elements x * n , λ * n as in (12) with α 1 = 0. Since h is continuous at x, we have that ∂ ηn h(x) is weak * -compact (see, e.g., [39, Theorem 2.4.9]). Hence, there exist subnets (with respect to the weak * -topology) x * nν and λ * nν converging to x * and λ * , respectively. Then, it is easy to see that x * ∈ ∂h(x) ∩ ∂ (⟨λ * , Φ⟩ + h) (x), which contradicts (36) and proves our claim. Now, let us denote by ε 0 > 0 a number such that x satisfies the subdifferential inclusion (12) for all η ∈ [0, ε 0 ]. Since h is locally Lipschitz at x, there exists a neighbourhood U of x such that the corresponding Lipschitz estimate holds for all x, y ∈ U and all x * ∈ ∂h(x). In particular, for each y ∈ U , (19) holds with η ≤ ε 0 . Now, for y ∈ U ∩ Φ −1 (C), repeating the arguments given in the proof of Theorem 3.1, we get that ϕ(x) ≤ ϕ(y), which ends the proof. Remark 3.12 Let us notice that when h is differentiable at x, condition (36) simplifies. Indeed, if h is differentiable at x, we can use the sum rule (6) to see that (36) turns out to be equivalent to (37).

DC cone-constrained optimization problems
This section is devoted to establishing necessary and sufficient conditions for cone-constrained optimization problems. More precisely, we consider the optimization problem (38): minimize ϕ(x) subject to Φ(x) ∈ −K, where ϕ : X → R ∪ {+∞} and K ⊂ Y is a closed convex cone. The approach in this section is slightly different from the one followed in the previous section, where there was an abstract convex constraint involving a general closed convex set C. More precisely, we will take advantage of the particular structure of the cone constraint Φ(x) ∈ −K in terms of a more suitable supremum function. In order to do that, we need the following notion for convex cones. Definition 4.1 Let Θ ⊂ Y * be a closed convex cone. We say that Θ is w * -compactly generated if there exists a w * -compact convex set B such that Θ = cone(B). In this case we say that Θ is w * -compactly generated by B.
The next lemma establishes sufficient conditions ensuring that the positive polar cone of a convex cone is w * -compactly generated. Lemma 4.2 Let K ⊂ Y be a closed convex cone, and suppose that one of the following conditions is satisfied: Then, K + is w * -compactly generated.
Proof a) Let us consider the convex compact set B := {x * ∈ K + : ∥x * ∥ ≤ 1}. Then K + is w * -compactly generated by B. b) Consider an interior point y 0 of K, and take a convex balanced neighbourhood of zero, V , such that y 0 + V ⊂ K. Therefore K + ⊂ {x * ∈ Y * : ⟨x * , y 0 ⟩ ≥ sup y∈V ⟨x * , y⟩}. Then, consider B := {x * ∈ K + : sup y∈V ⟨x * , y⟩ ≤ 1}, which is a w * -compact (convex) set due to the Banach-Alaoglu-Bourbaki theorem. Moreover, every x * ∈ K + belongs to cone(B), which completes the proof.
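A finite-dimensional illustration of Definition 4.1 (our own example): for the nonnegative orthant, the positive polar cone is generated by the unit simplex, and the supremum reformulation used below recovers the usual maximum of finitely many constraints:

```latex
K=\mathbb{R}^{n}_{+},\qquad
K^{+}=\mathbb{R}^{n}_{+}=\operatorname{cone}(B),\qquad
B=\Bigl\{\lambda\in\mathbb{R}^{n}_{+}:\ \sum_{i=1}^{n}\lambda_i=1\Bigr\},
\qquad
\sup_{\lambda\in B}\,\langle \lambda,\Phi(x)\rangle=\max_{1\le i\le n}\Phi_i(x),
```

so the cone constraint Φ(x) ∈ −K is exactly the system of inequalities Φ i (x) ≤ 0, i = 1, …, n.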

Global optimality conditions
The next theorem gives necessary and sufficient optimality conditions for problem (38).
Theorem 4.3 Let K be a closed convex cone such that K + is w * -compactly generated by B, and assume that Φ ∈ Γ h (X, Y, B) and ϕ ∈ Γ h (X) for some function h ∈ Γ 0 (X). Furthermore, suppose that one of the following conditions holds: Then, if x is a minimum of (38), inclusion (39) holds for every η ≥ 0, where the union is taken over all η 1 , η 2 ≥ 0, (α 1 , α 2 ) ∈ ∆ 2 and λ * ∈ B satisfying the corresponding relations. Conversely, assume that x is a feasible point of (38) and that (39) always holds with α 1 > 0; then x is a solution of (38) relative to dom ∂h.
Proof Suppose that x is a minimum of (38). First, let us notice that Φ(x) ∈ −K if and only if sup y * ∈B ⟨y * , Φ(x)⟩ ≤ 0. Then, by Lemma 2.1 we have that x is a solution of the DC program min max{ϕ(x) + h(x) − α, sup y * ∈B (⟨y * , Φ⟩ + h)(x)} − h(x), where α is the optimal value of (38). Now, using the notation of Theorem 3.1 and mimicking its proof, taking into account that, instead of (18), we have the analogous representation over B, we conclude that (39) holds. The converse follows as in the proof of Theorem 3.1, so we omit the details. Now, we present a result about optimality conditions of a DC cone-constrained optimization problem with an extra abstract convex constraint. The proof follows arguments similar to those of Corollary 3.6, but uses Theorem 4.3 instead of Theorem 3.1, so we omit it.
Corollary 4.4 Consider problem (40), where Q is closed and convex, K is a closed convex cone such that K + is w * -compactly generated by B, and Φ ∈ Γ h (X, Y, B) and ϕ ∈ Γ h (X) for some function h ∈ Γ 0 (X). Assume that there exists a point in Q at which ϕ + h and Φ + h are continuous. Then, if x is an optimal solution of problem (40), inclusion (41) holds, where the union is taken over all η 1 , η 2 , η 3 ≥ 0, (α 1 , α 2 ) ∈ ∆ 2 and λ * ∈ B satisfying the corresponding relations. Conversely, assume that x is a feasible point of (40) and that (41) always holds with α 1 > 0; then x is a solution of (40) relative to dom ∂h.

Local optimality conditions
Now, we focus on necessary and sufficient local optimality conditions for DC cone-constrained optimization problems. The following two results provide necessary conditions for optimality in problem (38) and in a variant with an additional abstract convex constraint. The proofs follow arguments similar to the ones used in Theorem 3.7 and Corollary 3.10, respectively; accordingly, we omit them. Theorem 4.5 In the setting of Theorem 4.3, let x be a local optimal solution of problem (38), and suppose that h is differentiable at x. Then, there are multipliers (α 1 , α 2 ) ∈ ∆ 2 and λ * ∈ B satisfying the corresponding stationarity inclusion; in addition, if the corresponding qualification condition holds, the inclusion can be normalized. Corollary 4.6 Under the assumptions of Corollary 4.4, let x be a local optimal solution of problem (40), and suppose that h is differentiable at x and that ϕ and Φ are continuous at x. Then, provided that the corresponding constraint qualification holds, there exists λ * ∈ K + satisfying the associated optimality inclusion. Similarly to Theorem 3.11, we provide sufficient conditions for local optimality in terms of (39).
Theorem 4.7 Let x be a feasible point of problem (38) satisfying the subdifferential inclusion (39) for all η small enough. Additionally, suppose that h is continuous at x and that the qualification condition (42) holds. Then, x is a local solution of (38).
Remark 4.8 Notice that when h is differentiable at x, condition (42) leads us to a simpler form.

Applications to mathematical programming problems
In this section we provide some applications of the theory developed in the previous sections.

Infinite programming
We consider the optimization problem (43), where T is a locally compact Hausdorff space, the functions φ_t : X → R, t ∈ T, are such that, for all x ∈ X, the function t → φ_t(x) ≡ φ(t, x) is continuous with compact support, and ϕ : X → R. Problem (43) corresponds to the class of infinite programming problems (called semi-infinite when X is finite-dimensional); we refer to [22] for more details about the theory. The space of continuous functions defined on T with compact support is denoted by C_c(T). A finite measure µ : B(T) → [0, +∞), where B(T) is the Borel σ-algebra, is called regular if for every A ∈ B(T),

µ(A) = inf{µ(U) : U open, A ⊂ U} = sup{µ(K) : K compact, K ⊂ A}.

We denote by M^+(T) the set of all (finite) regular Borel measures. Let us recall (see, e.g., [1, Theorem 14.14]) that the dual of C_c(T), endowed with the uniform norm, can be identified with the linear space generated by M^+(T).
The following result provides necessary optimality conditions for problem (43) using a cone representation in the space C_c(T).

Theorem 5.1 Let X be a Banach space and T a locally compact Hausdorff space. Suppose that ϕ, φ_t ∈ Γ_h(X), t ∈ T, for some function h ∈ Γ_0(X), and assume that the function x → inf_{t∈T} φ_t(x) is locally bounded from below. Let x be a local optimum of problem (43), assume that h and φ_t, t ∈ T, are differentiable at x, and that there are ℓ, ε > 0 such that (44) holds. Then, there exists a measure for which the corresponding optimality condition holds, where the integrals are understood in the sense of Gelfand (also called w*-integrals).
Remark 5.2 (before the proof) It is important to recall that a mapping x* : T → X* is Gelfand integrable if, for every x ∈ X, the function t → ⟨x*(t), x⟩ is integrable. In that case, the integral ∫_T x*(t)ν(dt) is well-defined as the unique element of X* such that

⟨∫_T x*(t)ν(dt), x⟩ = ∫_T ⟨x*(t), x⟩ ν(dt), for all x ∈ X.

We refer to [16, Chapter II.3, p. 53] for more details.
Proof Claim 1: The mapping Φ belongs to Γ_h(X, C_c(T), B). To this purpose, fix a measure ν ∈ B. By the assumptions, the function x → φ_t(x) + h(x) is convex for all t ∈ T, so integration over T with respect to ν preserves convexity on X (see, e.g., [11][12][13]28,29,32,33]); hence, the function ⟨ν, Φ⟩ + h is convex. Moreover, consider a sequence x_k → x. Since the function x → inf_{t∈T} φ_t(x) is locally bounded from below and h ∈ Γ_0(X), we can take α ∈ R and k_0 ∈ N such that φ_t(x_k) + h(x_k) ≥ α, for all t ∈ T and all k ≥ k_0. Then, Fatou's lemma and the lower semicontinuity of the functions φ_t + h show the lower semicontinuity of x → ⟨ν, Φ(x)⟩ + h(x); consequently, the function Φ belongs to Γ_h(X, C_c(T), B). Finally, since the function x → sup{⟨ν, Φ(x)⟩ : ν ∈ B} + h(x) is convex, lsc and finite-valued because B is w*-compact, it is also continuous (recall that X is a Banach space).

Claim 2: For every h ∈ X the function t → ⟨∇φ_t(x), h⟩ is measurable and, for every ν ∈ B, identity (45) holds. Fix h ∈ X. Since the functions φ_t, t ∈ T, are differentiable at x, we get that ⟨∇φ_t(x), h⟩ = lim_{k→∞} k(φ_t(x + k^{-1}h) − φ_t(x)). In particular, the function t → ⟨∇φ_t(x), h⟩ is measurable as the pointwise limit of a sequence of measurable functions. Moreover, by (44) we get that ⟨∇φ_t(x), h⟩ ≤ ℓ‖h‖, for all t ∈ T, which shows the integrability and, consequently, that the Gelfand integral is well-defined (see, e.g., [1,16]). Finally, let x* ∈ D*Φ(x)(ν); the definition of the regular subdifferential, with S = {h} ∈ β, implies that (45) holds as an inequality, where in the last equality we use Lebesgue's dominated convergence theorem, which can be applied thanks to (44). The proof of this claim ends by considering h and −h in (45).

Finally, observe that, by [1, Lemma 12.16], any measure ν ∈ B such that ∫_T φ_t(x)ν(dt) = 0 satisfies supp ν ⊂ T(x). Then, applying Theorem 4.5 we get the desired result.

Stochastic programming
Before introducing the optimization problem of this subsection, let us give some additional notation. In the sequel, X is a separable Banach space, Y a general locally convex space, and (Ω, A, µ) a complete σ-finite measure space. A set-valued mapping M : Ω ⇒ X is said to be measurable if for every open set U ⊂ X we have {ω ∈ Ω : M(ω) ∩ U ≠ ∅} ∈ A. Given a set-valued mapping S : Ω ⇒ X*, we define the (Gelfand) integral of S by

∫_Ω S(ω)µ(dω) := { ∫_Ω x*(ω)µ(dω) : x* is Gelfand integrable and x*(ω) ∈ S(ω) a.e. ω ∈ Ω }.
We refer to [8,16,17,34] for more details about the theory of measurable multifunctions and integration on Banach spaces. Given a normal integrand ϕ : Ω × X → R ∪ {+∞}, we define the integral functional (also called expected functional) associated with ϕ by I_ϕ : X → R ∪ {+∞, −∞}, defined as

I_ϕ(x) := ∫_Ω ϕ(ω, x)µ(dω),

with the inf-addition convention +∞ + (−∞) = +∞. Finally, a normal integrand ϕ is integrably differentiable at x provided that I_ϕ is differentiable at x and the following integral formula holds:

∇I_ϕ(x) = ∫_Ω ∇ϕ_ω(x)µ(dω). (47)

The next example shows that this last notion is meaningful for integral mappings, since their smoothness cannot be taken for granted even when all the data are smooth.
Example 5.3 It is important to mention here that the integral functional I_ϕ, for a normal integrand ϕ, can fail to be Fréchet differentiable even when the data functions ϕ_ω, ω ∈ Ω, are Fréchet differentiable. Let us consider the measure space (N, P(N), µ), where the σ-finite measure is the counting measure µ(A) := |A|, and the Banach space X = ℓ^1. Next, consider the convex normal integrand ϕ(n, x) := |⟨x, e_n⟩|^{1+1/n}, where {e_1, e_2, ..., e_n, ...} is the canonical basis of ℓ^1. It has been shown in [11, Example 2] that I_ϕ is Gateaux differentiable at every point and that the integral formula (47) holds for the Gateaux derivative. Nevertheless, as was also proved in that paper, the function I_ϕ fails to be Fréchet differentiable at zero.

Now, we extend a classical formula for the subdifferential of convex normal integrands to the case of nonconvex normal integrands. This result is interesting in itself, and for that reason we present it as an independent proposition.

Proposition 5.4 Let x ∈ X, and let ϕ : Ω × X → R ∪ {+∞} be a normal integrand. Suppose that ϕ_ω ∈ Γ_{h_ω}(X) for some convex normal integrand h such that dom I_ϕ ⊂ dom I_h. Then, I_ϕ ∈ Γ_{I_h}(X) provided that I_ϕ is proper. In addition, suppose that h is integrably differentiable at x and that the functions I_ϕ and ϕ_ω, ω ∈ Ω, are continuous at some common point. Then, formula (48) holds.

Proof Let us consider the convex normal integrand ψ := ϕ + h. By our assumptions, dom I_ψ = dom I_ϕ and I_ψ = I_ϕ + I_h. Consequently, I_ψ is proper, entailing that I_ϕ ∈ Γ_{I_h}(X). Then, by [11, Theorem 2] we have that

∂I_ψ(x) = ∫_Ω ∂ψ_ω(x)dµ + N_{dom I_ψ}(x).

Now, by Lemmas 2.8 and 2.9 and the integral formula (47), we deduce that (48) holds.
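The failure of Fréchet differentiability in Example 5.3 can be sketched as follows; this is a heuristic computation consistent with the data of the example, the complete argument being in [11, Example 2].

```latex
% Heuristic check for Example 5.3. Here
% I_\varphi(x) = \sum_{n\ge 1} |\langle x, e_n\rangle|^{1+1/n} on X = \ell^1.
% Gateaux differentiability at 0: for a direction d \in \ell^1 and t \to 0,
\[
  \frac{I_\varphi(td) - I_\varphi(0)}{t}
  = \sum_{n\ge 1} \operatorname{sgn}(t)\,|t|^{1/n}\,|d_n|^{1+1/n}
  \longrightarrow 0,
\]
% by dominated convergence: for |t| \le 1 each term is bounded by
% |d_n|^{1+1/n} \le |d_n| + |d_n|^2, which is summable for d \in \ell^1.
% Failure of Fréchet differentiability at 0: along x = \varepsilon e_n,
\[
  \frac{I_\varphi(\varepsilon e_n) - I_\varphi(0)}{\|\varepsilon e_n\|_1}
  = \frac{\varepsilon^{1+1/n}}{\varepsilon}
  = \varepsilon^{1/n} \xrightarrow[n \to \infty]{} 1 \neq 0,
\]
% so the remainder is not o(\|x\|) uniformly on small spheres.
```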
Remark 5.5 (On the use of Gelfand integrals) The above result does not require ϕ to be locally Lipschitzian at x, as in other classical results about the differentiation of nonconvex integral functionals (see, e.g., [12,29] and the references therein). Consequently, we cannot expect formula (48) to hold for the Bochner integral (see, e.g., [1, Definition 11.42]). Indeed, adapting [11, Example 1], let us consider the measure space (N, P(N), µ), where µ(A) := Σ_{j∈A} 2^{−j}, the Hilbert space X = ℓ^2, and the normal integrand ϕ(n, x) := 2^n⟨x, e_n⟩^2 − ‖x‖^2, where {e_1, e_2, ..., e_n, ...} is the canonical basis of ℓ^2. Clearly, the integrand ϕ satisfies all the assumptions of Proposition 5.4 at any point, and therefore (48) holds. Nevertheless, the function n → ∇ϕ_n(x) = 2^{n+1}⟨x, e_n⟩e_n − 2x is not always integrable in the Bochner sense because, otherwise, the function n → 2^{n+1}⟨x, e_n⟩e_n would have to be integrable (see, e.g., [1, Theorem 11.44]), and this fails in general.

...for some function g ∈ Γ_0(X) such that dom Φ = X. Let x ∈ int(dom I_ϕ) be a local optimal solution of problem (49), and assume that h is integrably differentiable at x and that g is differentiable at x. Then, the corresponding optimality condition holds, provided that the associated qualification condition is satisfied.

Semi-definite programming

Now, suppose that b) holds, and consider A ∈ S^p_+ with tr(A) = 1. Employing its spectral decomposition, we write A = PDP^⊤ = Σ_{i=1}^p λ_i(A)v_i v_i^⊤, where P is an orthogonal matrix whose columns are the vectors v_i ∈ R^p, i = 1, ..., p, and D is the diagonal matrix formed by λ_1(A), ..., λ_p(A). Then, expanding ⟨A, Φ(x)⟩ accordingly and using the facts that tr(A) = 1 and λ_i(A) ≥ 0, we obtain the desired convexity as well as the lower semicontinuity of the function x → ⟨A, Φ(x)⟩ + h(x). Finally, (52) remains to be proved. On the one hand, from the fact that A = uu^⊤ is positive semidefinite, we obtain the inequality ≥ in (52). On the other hand, for a given matrix A ∈ B, and taking into account (53), we obtain the converse inequality, and we are done.
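The Bochner-integrability failure invoked in Remark 5.5 can be sketched by the following norm computation; it is a heuristic consistent with the data of the remark, assuming the gradient formula ∇ϕ_n(x) = 2^{n+1}⟨x, e_n⟩e_n − 2x.

```latex
% Sketch for Remark 5.5. Since \mu is finite, the constant part -2x is
% Bochner integrable, so Bochner integrability of n \mapsto \nabla\varphi_n(x)
% would force
\[
  \int_{\mathbb{N}} \bigl\| 2^{n+1}\langle x, e_n\rangle e_n \bigr\|\,\mu(dn)
  = \sum_{n\ge 1} 2^{-n}\, 2^{n+1}\, |\langle x, e_n\rangle|
  = 2 \sum_{n\ge 1} |x_n| < \infty,
\]
% i.e. x \in \ell^1. For x \in \ell^2 \setminus \ell^1 this sum diverges,
% so the integral of the gradients exists only in the Gelfand (weak*) sense.
```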
The following proposition establishes some sufficient conditions ensuring that Φ is a DC matrix-mapping.
Proof Let us notice that for every v ∈ R^p with ‖v‖ = 1, the function x → v^⊤Φ(x)v + h(x) is convex and lower semicontinuous, which yields that Φ is a DC matrix-mapping with control h.
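The convexity transfer behind this proof, and behind the spectral-decomposition argument earlier in this subsection, can be made explicit through a standard trace identity; the following is a sketch for A ∈ S^p_+ with spectral decomposition as in the text.

```latex
% For A \in S^p_+ with spectral decomposition
% A = \sum_{i=1}^p \lambda_i(A)\, v_i v_i^\top,
\[
  \langle A, \Phi(x) \rangle = \operatorname{tr}\bigl(A\,\Phi(x)\bigr)
  = \sum_{i=1}^p \lambda_i(A)\, v_i^\top \Phi(x)\, v_i ,
\]
% so, when \operatorname{tr}(A) = \sum_{i} \lambda_i(A) = 1 and
% \lambda_i(A) \ge 0,
\[
  \langle A, \Phi(x) \rangle + h(x)
  = \sum_{i=1}^p \lambda_i(A)\,\bigl( v_i^\top \Phi(x)\, v_i + h(x) \bigr)
\]
% is a convex combination of convex lsc functions, hence convex and lsc.
```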
Although the notion of DC matrix-mapping has a simpler description in terms of the quadratic forms v^⊤Φ(x)v, the set Γ_h(X, Y, B) in Definition 2.3 provides enhanced properties depending on the choice of the scalarization set B and, consequently, better properties of operations involving the matrix mapping Φ.
Given a mapping Φ : X → S^p ∪ {∞_{S^p}}, we define its k-th eigenvalue function λ^Φ_k : X → R by λ^Φ_k(x) := λ_k(Φ(x)), where λ_1(A) ≥ ... ≥ λ_p(A) denote the eigenvalues of A ∈ S^p in nonincreasing order. Moreover, we define the sum of the first k eigenvalue functions Λ^Φ_k : X → R by Λ^Φ_k(x) := Σ_{j=1}^k λ^Φ_j(x). The following proposition gives sufficient conditions ensuring that these functions are DC.

Proposition 5.9 Let Φ : X → S^p ∪ {∞_{S^p}} and h ∈ Γ_0(X). a) If Φ is a DC matrix-mapping with control h, then its largest eigenvalue function, λ^Φ_1, belongs to Γ_h(X).
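Proposition 5.9 rests on the classical fact that λ_1, and more generally the Ky Fan sum of the k largest eigenvalues, is convex on S^p. The following is a quick numerical sanity check of that fact, illustrative only and not part of the paper's argument; the function name `lam_sum` and the random sampling scheme are our own choices.

```python
import numpy as np

def lam_sum(A, k):
    """Sum of the k largest eigenvalues of a symmetric matrix A."""
    # eigvalsh returns eigenvalues of a symmetric matrix in ascending order.
    return np.linalg.eigvalsh(A)[::-1][:k].sum()

rng = np.random.default_rng(0)
p, k = 6, 3
for _ in range(200):
    # Random symmetric matrices and a random convex combination.
    A = rng.standard_normal((p, p)); A = (A + A.T) / 2
    B = rng.standard_normal((p, p)); B = (B + B.T) / 2
    t = rng.uniform()
    # Ky Fan convexity: lam_sum(tA + (1-t)B) <= t*lam_sum(A) + (1-t)*lam_sum(B).
    lhs = lam_sum(t * A + (1 - t) * B, k)
    rhs = t * lam_sum(A, k) + (1 - t) * lam_sum(B, k)
    assert lhs <= rhs + 1e-9
print("Ky Fan convexity verified on random samples")
```

Combined with a DC matrix-mapping Φ, this convexity is what allows λ^Φ_1 to inherit the control function h.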
Theorem 5.10 Let ϕ ∈ Γ_h(X) and let Φ be a DC matrix-mapping with control h such that x → ϕ(x) + h(x) is continuous at some point of dom Φ. Let x be a local optimal solution of the optimization problem (51) and suppose that h is differentiable at x. Then, there exists A ∈ S^p_+ with tr(A) = 1 such that

0_{X*} ∈ ∂ϕ(x) + D*Φ(x)(A) + N_Q(x), (54)

and v^⊤Φ(x)v = 0 for each eigenvector v of A, provided that the following qualification holds:

0_{X*} ∉ D*Φ(x)(A) + N_Q(x), for all A ∈ S^p_+ with tr(A) = 1. (55)

Proof First, let us notice that by Proposition 5.7 the mapping Φ belongs to Γ_h(X, S^p, B), where B := {A ∈ S^p_+ : tr(A) = 1}. Then, applying Corollary 4.6 we get the result.

Consider normed spaces X, Y and recall that a function F : X → Y is called C^{1,+} at x if there exists a neighbourhood U of x such that F is Fréchet differentiable on U and its gradient is Lipschitz continuous on U.
Corollary 5.11 Let x be a local optimal solution of problem (51). Suppose that X is a Hilbert space and that ϕ and Φ are C^{1,+} at x. Then, there exist v_i ∈ R^p with ‖v_i‖ = 1, i = 1, ..., p, and (λ_i)_{i=1}^p ∈ ∆_p such that v_i^⊤Φ(x)v_i = 0 and

0_X ∈ ∇ϕ(x) + Σ_{i=1}^p λ_i v_i^⊤∇Φ(x)v_i + N_Q(x),

provided that the qualification (56) holds, where v_i^⊤∇Φ(x)v_i is the gradient of x → v_i^⊤Φ(x)v_i at x.

Proof Let us consider a closed and convex neighbourhood U of x and ρ > 0 such that the functions ϕ(x) + ρ‖x‖^2 and ⟨A, Φ(x)⟩ + ρ‖x‖^2 are convex over U for all A ∈ S^p_+ with tr(A) = 1 (see, e.g., [38, Proposition 1.11]). Hence, for h(x) := ρ‖x‖^2, ϕ_U := ϕ + δ_U ∈ Γ_h(X). Furthermore, by Proposition 5.7 the corresponding restriction Φ_U is a DC matrix-mapping with control h. Then, it is easy to see that x is also a local solution of min{ϕ_U(x) : Φ_U(x) ⪯ 0, x ∈ Q}. Let us notice that for every matrix A ∈ S^p_+, writing its spectral decomposition A = Σ_{i=1}^p λ_i u_i u_i^⊤ and expanding ⟨A, ∇Φ(x)⟩ accordingly, condition (56) implies (55). Therefore, Theorem 5.10 yields the existence of A ∈ S^p_+ with tr(A) = 1 such that (54) holds. Now, consider λ_i := λ_i(A) and associated eigenvectors v_i, i = 1, ..., p. Then, (λ_i) ∈ ∆_p and v_i^⊤Φ(x)v_i = 0, and that ends the proof.
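The regularization device in the proof above, adding ρ‖x‖^2 to convexify a C^{1,+} function locally, can be illustrated numerically; the test function sin(x) and the sampling below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# A C^{1,+} but nonconvex function: phi(x) = sin(x), whose derivative cos(x)
# is Lipschitz with constant L = 1.  For rho >= L/2, the shifted function
# g(x) = phi(x) + rho * x**2 is convex (here g''(x) = -sin(x) + 2*rho >= 0),
# giving the DC decomposition phi = g - rho*||.||^2 with control h = rho*||.||^2.
rho = 0.5  # = L/2 suffices for this phi

def g(x):
    return np.sin(x) + rho * x**2

rng = np.random.default_rng(1)
xs = rng.uniform(-10, 10, size=1000)
ys = rng.uniform(-10, 10, size=1000)
ts = rng.uniform(0, 1, size=1000)
# Check convexity of g along random segments.
assert np.all(g(ts * xs + (1 - ts) * ys) <= ts * g(xs) + (1 - ts) * g(ys) + 1e-9)
print("g = phi + rho*x^2 is convex: DC decomposition verified")
```

In the corollary, the same shift is applied simultaneously to ϕ and to every scalarization ⟨A, Φ(·)⟩ on a neighbourhood U, with a single ρ.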

Conclusions
The paper deals with optimization problems involving the so-called class of B-DC mappings (see Definition 2.3), which slightly extends the concept of delta-convex functions. The most general model studied in the paper is an optimization problem with an abstract constraint given by a closed convex set C. The proposed methodology consists in transforming the original problem into an unconstrained optimization problem (by means of the notion of improvement function), and in using this reformulation to derive necessary and sufficient conditions for global and local optimality. The case in which the abstract constraint is a closed convex cone −K is discussed in detail: global optimality conditions are stated in Theorem 4.3, while Theorems 4.5 and 4.7 deal with local optimality. Our developments are applied in the last section to establish ad hoc optimality conditions for fundamental problems in applied mathematics, namely infinite, stochastic and semidefinite programming problems. Next, we summarize the main conclusions of the paper:

1) Non-smooth tools such as the (regular) subdifferential and the notion of (regular) coderivative proved to be appropriate technical instruments in our approach, outside the scope of Asplund spaces.

2) New qualification conditions, which are an alternative to the Slater condition, are introduced in the paper. These conditions require a certain degree of continuity of the objective/constraint functions and (w*)-compactness of the set C°.

3) Some properties of the B-DC mappings are supplied by Proposition 2.5.

4) Theorem 2.14 is a key result in our analysis. It is based on Proposition 2.13, a useful characterization of the ε-subdifferential of the supremum of convex functions.

5) The particular structure of the cone-constrained problem allows us to build more suitable supremum functions. This is the case when the polar cone K^+ is w*-compactly generated, a representative example of this situation being the semi-infinite optimization model, where K^+ is the set of all (finite) regular Borel measures.

6) In Proposition 5.4, a classical formula for the subdifferential of convex normal integrands is extended to the case of nonconvex normal integrands.

7) In the last subsection, devoted to semidefinite programming, the notion of DC matrix-mapping is introduced. This concept leads to the main associated optimality result, Theorem 5.10.

Corollary 3.4
Let C ⊂ Y be a closed and convex set such that 0_Y ∈ C and C° is weak*-compact. Let ϕ ∈ Γ_0(X) and Φ ∈ Γ_0(X, Y, C°), and suppose that one of the following conditions holds: a) the function ϕ is continuous at some point of dom Φ; b) the function x → sup{⟨λ*, Φ(x)⟩ : λ* ∈ C°} is continuous at some point of dom ϕ.

Corollary 4.4
Consider the optimization problem (40),