Optimality conditions for nonconvex problems over nearly convex feasible sets

In this paper, we study the optimization problem (P) of minimizing a convex function over a constraint set defined by nonconvex constraint functions. We do this by giving new characterizations of Robinson's constraint qualification, which, for nonconvex programming problems whose feasible sets are nearly convex at a reference point, reduces to the combination of generalized Slater's condition and generalized sharpened nondegeneracy condition. Next, using a version of the strong CHIP, we present a constraint qualification which is necessary for optimality of the problem (P). Finally, using the new characterizations of Robinson's constraint qualification, we give necessary and sufficient conditions for optimality of the problem (P).


Introduction
Consider the following optimization problem:

(P)    minimize f(x) subject to x ∈ C ∩ K,

where K := {x ∈ R^n : −g(x) ∈ S}. (1.1)

Problems of this form have been employed to study convex optimization problems in the literature [3,7,13,15,17]. In the special case, Slater's condition [10] was used to obtain the so-called Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for optimality. Unfortunately, the characterization of optimality by the KKT conditions may fail under Slater's condition whenever g is not S-convex. Recently, Slater's condition together with the nondegeneracy condition [11] has been shown to guarantee that the KKT conditions are necessary and sufficient for optimality of the problem (P), where K is a convex set and C := R^n [6,8,11,12].
In this paper, we study the problem (P), where the constraint set K is nearly convex at a reference point [8,9,16], but is not necessarily convex. We first present new characterizations of Robinson's constraint qualification, which, for nonconvex programming problems whose feasible sets are nearly convex at a reference point, reduces to the combination of generalized Slater's condition and generalized sharpened nondegeneracy condition. Also, using a version of the strong CHIP, we give a constraint qualification which is necessary for optimality of the problem (P). Finally, we present necessary and sufficient conditions for optimality of the problem (P), extending known results in the finite-dimensional case (see [4,8,11] and the references therein).
The paper has the following structure. In Sect. 2, we provide some definitions and elementary results related to generalized convexity. We also give several constraint qualifications that are used to study optimality of the optimization problem (P). New characterizations of Robinson's constraint qualification are presented in Sect. 3 whenever the constraint set K is nearly convex at a reference point. In Sect. 4, using new characterizations of Robinson's constraint qualification, we give necessary and sufficient conditions for optimality of the problem (P). Several examples are given to illustrate our results.

Preliminaries
Throughout the paper, R^n is the Euclidean space with the inner product ⟨·, ·⟩ and the induced norm ‖·‖. We consider the following optimization problem:

(P)    minimize f(x) subject to x ∈ C ∩ K,

where f : R^n −→ R is a real-valued (continuous) convex function and the constraint set K is defined by

K := {x ∈ R^n : −g(x) ∈ S}. (2.2)

Here S is a non-empty closed convex cone in R^m, C is a non-empty closed convex subset of R^n such that C ∩ K ≠ ∅, and g : R^n −→ R^m is a Fréchet differentiable function, which is not assumed to be S-convex. Note that, by the continuity of g and the closedness of S, K is closed.

We recall from [14,19] the following definitions. A function h : R^n −→ R^m is called S-convex if

t h(x) + (1 − t) h(y) − h(t x + (1 − t) y) ∈ S for all x, y ∈ R^n and all t ∈ [0, 1],

where S is a non-empty closed convex cone in R^m. For a function h : R^n −→ R, the subdifferential of h at a point x̄ ∈ R^n is defined by

∂h(x̄) := {x* ∈ R^n : h(x) − h(x̄) ≥ ⟨x*, x − x̄⟩, ∀ x ∈ R^n}.

If h is Fréchet differentiable at x̄, that is, there exists x* ∈ R^n such that

lim_{x → x̄} [h(x) − h(x̄) − ⟨x*, x − x̄⟩] / ‖x − x̄‖ = 0,

then x* is called the Fréchet derivative of h at the point x̄ and is denoted by ∇h(x̄) := x*. We define the negative and positive polar cones of S by

S° := {λ ∈ R^m : ⟨λ, y⟩ ≤ 0, ∀ y ∈ S} (2.3)

and

S⁺ := {λ ∈ R^m : ⟨λ, y⟩ ≥ 0, ∀ y ∈ S}, (2.4)

respectively; we use the same notation W° for the negative polar of an arbitrary set W ⊆ R^n. The normal cone to a convex set E ⊆ R^n at a point x ∈ E is defined by

N_E(x) := {x* ∈ R^n : ⟨x*, y − x⟩ ≤ 0, ∀ y ∈ E}. (2.5)

For a subset W of R^n, we denote by cl W, bd(W) and int W the closure, boundary and interior of W, respectively, and by pos(W) the convex cone generated by W. The nonnegative orthant of R^m is denoted by R^m_+. Throughout, B denotes a non-empty subset of S° \ {0} such that

pos(B) = S°. (2.6)

We now give generalizations of Mangasarian-Fromovitz's constraint qualification [1,2,14], the nondegeneracy condition [9,11], the sharpened nondegeneracy condition [9] and Slater's condition [1,2].
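For the concrete cone S = R^m_+, the polar cones defined above are S° = −R^m_+ and S⁺ = R^m_+. The following is a minimal numerical sanity check of these inclusions (our own illustrative sketch, not part of the paper; the sampling-based membership tests are only finite probes of the defining inequalities).

```python
import numpy as np

# Illustrative sketch: for S = R^2_+, the negative polar is S° = -R^2_+
# and the positive polar is S⁺ = R^2_+.  We probe the defining
# inequalities <λ, y> ≤ 0 (resp. ≥ 0) on a finite grid of points of S.

def in_negative_polar(lam, S_samples, tol=1e-12):
    """True if <lam, y> <= 0 for every sampled y of the cone S."""
    return all(np.dot(lam, y) <= tol for y in S_samples)

def in_positive_polar(lam, S_samples, tol=1e-12):
    """True if <lam, y> >= 0 for every sampled y of the cone S."""
    return all(np.dot(lam, y) >= -tol for y in S_samples)

# grid of sample points of S = R^2_+
S_samples = np.array([[i / 10.0, j / 10.0]
                      for i in range(11) for j in range(11)])

assert in_negative_polar(np.array([-1.0, -2.0]), S_samples)   # -R^2_+ ⊆ S°
assert not in_negative_polar(np.array([1.0, -2.0]), S_samples)
assert in_positive_polar(np.array([1.0, 2.0]), S_samples)     # R^2_+ ⊆ S⁺
```

Such a probe can refute membership but never certify it; for polyhedral cones like R^2_+, checking the sign pattern of λ directly would of course suffice.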
Definition 2.1 Let S be a closed convex cone in R^m with non-empty interior, let K be given as in (2.2), let B be given as in (2.6), and let x̄ ∈ K.

(i) We say that K satisfies generalized Mangasarian-Fromovitz's constraint qualification at x̄ if there is no λ ∈ S° \ {0} such that ⟨λ, g(x̄)⟩ = 0 and ∇g(x̄)^T λ = 0.

(ii) We say that K satisfies generalized nondegeneracy condition at x̄ if ∇g(x̄)^T λ ≠ 0 for each λ ∈ B with ⟨λ, g(x̄)⟩ = 0. If generalized nondegeneracy condition holds at every point x ∈ K, we say that K satisfies generalized nondegeneracy condition.

(iii) We say that K satisfies generalized sharpened nondegeneracy condition at x̄ if for each λ ∈ B with ⟨λ, g(x̄)⟩ = 0 there exists u ∈ K such that ⟨∇g(x̄)^T λ, u − x̄⟩ ≠ 0. If generalized sharpened nondegeneracy condition holds at every point x ∈ K, we say that K satisfies generalized sharpened nondegeneracy condition.

(iv) Let C be a non-empty closed convex subset of R^n such that C ∩ K ≠ ∅. The set Ω := C ∩ K is said to satisfy generalized Slater's condition (GSC) if there exists x_0 ∈ C such that −g(x_0) ∈ int S.
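The conditions of Definition 2.1 can be checked numerically on concrete instances. Below is a minimal sketch on a hypothetical toy instance of our own (the map g, the set B and the test points are illustrations, not examples from the paper), with S = R^2_+, so that S° = −R^2_+ and B = {−e_1, −e_2} satisfies pos(B) = S°.

```python
import numpy as np

# Hypothetical toy instance (not from the paper): S = R^2_+, C = R^2,
#   g(x) = (x1^2 + x2^2 - 1, -x1),  K = {x : -g(x) ∈ S} = {x : g(x) <= 0}.
# We probe generalized Slater's condition and the generalized
# nondegeneracy condition of Definition 2.1.

def g(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0, -x[0]])

def jac_g(x):
    # rows are the gradients of the component functions g_1, g_2
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [-1.0, 0.0]])

def slater(x0, tol=1e-9):
    """Generalized Slater: -g(x0) ∈ int S, i.e. g(x0) < 0 componentwise."""
    return bool(np.all(g(x0) < -tol))

def nondegeneracy(xbar, B, tol=1e-9):
    """For each λ ∈ B with <λ, g(xbar)> = 0, require ∇g(xbar)^T λ ≠ 0."""
    for lam in B:
        if abs(np.dot(lam, g(xbar))) <= tol:          # active multiplier
            if np.linalg.norm(jac_g(xbar).T @ lam) <= tol:
                return False
    return True

# B ⊆ S° \ {0} with pos(B) = S° = -R^2_+
B = [np.array([-1.0, 0.0]), np.array([0.0, -1.0])]

assert slater(np.array([0.5, 0.0]))            # (0.5, 0) is a Slater point
assert nondegeneracy(np.array([1.0, 0.0]), B)  # holds at xbar = (1, 0)
```

Checking the sharpened condition (iii) would additionally require searching for a point u ∈ K with ⟨∇g(x̄)^T λ, u − x̄⟩ ≠ 0, e.g. over a sample of feasible points.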
Remark 2.2 It should be noted that generalized sharpened nondegeneracy condition implies generalized nondegeneracy condition, but the converse is not true (see [9, Example 3.1]). However, generalized nondegeneracy condition together with generalized Slater's condition implies generalized sharpened nondegeneracy condition (see Corollary 2.4).

Proposition 2.3
Let K := {x ∈ R^n : −g(x) ∈ S} be given as in (2.2), where S ⊆ R^m is a closed convex cone with non-empty interior. Let C be a non-empty closed convex subset of R^n such that C ∩ K ≠ ∅, and let x̄ ∈ C ∩ K. Assume that generalized Slater's condition holds. Then the following assertions are equivalent:

(i) There is no λ ∈ S° \ {0} such that ⟨λ, g(x̄)⟩ = 0 and ∇g(x̄)^T λ = 0.

(ii) For each λ ∈ S° \ {0} with ⟨λ, g(x̄)⟩ = 0, there exists u ∈ K such that ⟨∇g(x̄)^T λ, u − x̄⟩ ≠ 0.

Proof (i) ⇒ (ii). Suppose, on the contrary, that there exists λ ∈ S° \ {0} with ⟨λ, g(x̄)⟩ = 0 such that

⟨∇g(x̄)^T λ, u − x̄⟩ = 0 for all u ∈ K. (2.7)

Since generalized Slater's condition holds, there exists x_0 ∈ C such that −g(x_0) ∈ int S. Thus, by the continuity of g, there exists r > 0 such that −g(x_0 + ru) ∈ S for all u ∈ B̄, where B̄ is the closed unit ball, B̄ := {u ∈ R^n : ‖u‖ ≤ 1}. This implies that x_0 + ru ∈ K for all u ∈ B̄. In view of (2.7), one has

⟨∇g(x̄)^T λ, x_0 + ru − x̄⟩ = 0 for all u ∈ B̄. (2.8)

Putting u = 0 in (2.8), we conclude that ⟨∇g(x̄)^T λ, x_0 − x̄⟩ = 0. This together with (2.8) implies that ⟨∇g(x̄)^T λ, u⟩ = 0 for all u ∈ B̄. This guarantees that ∇g(x̄)^T λ = 0 with λ ∈ S° \ {0} such that ⟨λ, g(x̄)⟩ = 0, which contradicts (i). Thus, (ii) holds.

(ii) ⇒ (i). Clearly, (ii) implies (i), even without the validity of generalized Slater's condition. □

Corollary 2.4
Let K := {x ∈ R^n : −g(x) ∈ S} be given as in (2.2), where S ⊆ R^m is a closed convex cone with non-empty interior, and let B be given as in (2.6). Let C be a non-empty closed convex subset of R^n such that C ∩ K ≠ ∅, and let x̄ ∈ C ∩ K. Assume that generalized Slater's condition holds. Then the following assertions are equivalent:

(i) Generalized nondegeneracy condition holds at x̄.

(ii) Generalized sharpened nondegeneracy condition holds at x̄.
Proof This is an immediate consequence of Proposition 2.3.
Definition 2.5 (Robinson's constraint qualification [1]). Let S be a closed convex cone in R^m with non-empty interior and let K := {x ∈ R^n : −g(x) ∈ S} be given as in (2.2). Let C be a non-empty closed convex subset of R^n such that C ∩ K ≠ ∅, and let Ω := C ∩ K and x̄ ∈ Ω. We say that the set Ω satisfies Robinson's constraint qualification (RCQ) at the point x̄ if

0 ∈ int{g(x̄) + ∇g(x̄)(C − x̄) + S}.

It is worth noting that in the case C := R^n, Robinson's constraint qualification is equivalent to Mangasarian-Fromovitz's constraint qualification (see the proof of Theorem 3.5).
In the following, we recall the notion of the strong conical hull intersection property (the strong CHIP) (see [5]).

Definition 2.6
Let C_1, C_2, …, C_m be closed convex sets in R^n and let x ∈ ⋂_{j=1}^m C_j. Then the collection {C_1, C_2, …, C_m} is said to have the strong CHIP at x if

(⋂_{j=1}^m C_j − x)° = ∑_{j=1}^m (C_j − x)°.

The collection {C_1, C_2, …, C_m} is said to have the strong CHIP if it has the strong CHIP at each x ∈ ⋂_{j=1}^m C_j.

Definition 2.7 [8,9,16] Let E be a non-empty subset of R^n. The set E is called nearly convex at a point x ∈ E if, for each z ∈ E, there exists a sequence {β_m}_{m∈N} ⊂ (0, +∞) with β_m −→ 0⁺ such that

x + β_m (z − x) ∈ E for all sufficiently large m ∈ N, (2.9)

where N is the set of natural numbers.
The set E is said to be nearly convex if it is nearly convex at each point x ∈ E. It is easy to see that if E is convex, then E is nearly convex (for more details and illustrative examples related to the near convexity, see [8,9,16]).
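Near convexity at a point can be probed numerically, although a finite probe can only refute the property along the tested sequence, never certify it. Below is a crude sketch on a hypothetical set E = {0} ∪ [1, 2] (our own illustration, not an example from the paper), which is nearly convex at interior points of [1, 2] but not at the isolated point 0.

```python
# Crude numerical probe of near convexity (Definition 2.7) for the
# hypothetical closed set E = {0} ∪ [1, 2].  For each z in a sample Z ⊆ E
# we test whether x + β(z - x) lies in E for a tail of the tested
# sequence β = 1/m, m = 2, 3, ...

def in_E(t, tol=1e-9):
    return abs(t) <= tol or (1.0 - tol <= t <= 2.0 + tol)

def nearly_convex_at(x, Z, betas=None):
    """Probe: for each z in Z, does x + β(z - x) stay in E for all
    sufficiently small tested β?  (Refutation only, not a certificate.)"""
    if betas is None:
        betas = [1.0 / m for m in range(2, 200)]
    for z in Z:
        tail_ok = all(in_E(x + b * (z - x)) for b in betas[50:])
        if not tail_ok:
            return False
    return True

Z = [0.0, 1.0, 1.5, 2.0]              # sample points of E
assert nearly_convex_at(1.5, Z)       # E is nearly convex at x = 1.5
assert not nearly_convex_at(0.0, Z)   # but not at x = 0: the gap (0, 1)
```

Note that Definition 2.7 only requires membership along *some* sequence β_m → 0⁺; the probe above tests the particular sequence 1/m, so a negative answer is conclusive only for that choice.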

Characterizing Robinson's constraint qualification
In this section, we first characterize the near convexity of the constraint set K, which is given by (2.2). Next, whenever the constraint set K is nearly convex at a reference point, we present new characterizations of Robinson's constraint qualification that will be used to study optimality of the problem (P). We start with the following lemma, which plays a crucial role in proving our main results; it gives characterizations of near convexity.
Lemma 3.1 Let K := {x ∈ R^n : −g(x) ∈ S} be given as in (2.2), where S ⊆ R^m is a non-empty closed convex cone. Consider the following assertions:

(i) K is convex.

(ii) K is nearly convex.

(iii) K is nearly convex at each point x ∈ bd(K).

(iv) For each x ∈ bd(K), each u ∈ K and each λ ∈ S° with ⟨λ, g(x)⟩ = 0, one has

⟨∇g(x)^T λ, u − x⟩ ≥ 0. (3.10)

Then (i) ⇒ (ii) ⇒ (iii) ⇒ (iv). Moreover, if we assume that S has a non-empty interior and there exists a subset B of S° \ {0} such that pos(B) = S°, and moreover, generalized sharpened nondegeneracy condition (GSNC) holds at each point x ∈ bd(K), then (iv) ⇒ (i).
Proof (i) ⇒ (ii) is clear, and (ii) ⇒ (iii) holds trivially by Definition 2.7.

(iii) ⇒ (iv). Suppose, on the contrary, that there exist x ∈ bd(K), u ∈ K and λ ∈ S° with ⟨λ, g(x)⟩ = 0 such that ⟨∇g(x)^T λ, u − x⟩ < 0. Then, by the differentiability of g at x, for every sequence {t_n}_{n≥1} of positive real numbers with t_n ↓ 0 we have

⟨λ, g(x + t_n(u − x))⟩ = ⟨λ, g(x)⟩ + t_n ⟨∇g(x)^T λ, u − x⟩ + o(t_n) < 0

for all sufficiently large n. Since λ ∈ S°, this yields −g(x + t_n(u − x)) ∉ S, and hence x + t_n(u − x) ∉ K, for all sufficiently large n and for every sequence {t_n}_{n≥1} of positive real numbers with t_n ↓ 0. This is a contradiction, because u ∈ K and, by (iii), K is nearly convex at x (see Definition 2.7). Therefore, (iv) holds.

(iv) ⇒ (i). Suppose that S has a non-empty interior and there exists a subset B of S° \ {0} such that pos(B) = S°, and moreover, generalized sharpened nondegeneracy condition holds at each point x ∈ bd(K). Now, we assume that (iv) holds, and x̄ ∈ bd(K) is arbitrary. We first prove that there exists λ̄ ∈ S° \ {0} with ⟨λ̄, g(x̄)⟩ = 0. Indeed, since x̄ ∈ bd(K), by the continuity of g at x̄, one has −g(x̄) ∈ bd(S). So, {−g(x̄)} ∩ int S = ∅. Then, by the convex separation theorem, there exists λ̄ ∈ R^m \ {0} such that ⟨λ̄, −g(x̄)⟩ ≥ ⟨λ̄, y⟩ for all y ∈ S. Since S is a closed cone and −g(x̄) ∈ S, it follows that ⟨λ̄, y⟩ ≤ 0 for all y ∈ S and ⟨λ̄, g(x̄)⟩ = 0. Consequently, we conclude that λ̄ ∈ S° \ {0} and ⟨λ̄, g(x̄)⟩ = 0. Since pos(B) = S°, there exist λ̄_1, …, λ̄_p ∈ B and t_1, …, t_p > 0 such that λ̄ = ∑_{i=1}^p t_i λ̄_i; since −g(x̄) ∈ S and λ̄_i ∈ S°, we get ⟨λ̄_i, g(x̄)⟩ = 0 for each i = 1, 2, …, p. In view of (iv),

⟨∇g(x̄)^T λ̄_i, u − x̄⟩ ≥ 0 for all u ∈ K and all i = 1, 2, …, p. (3.11)

On the other hand, since λ̄_i ∈ B and ⟨λ̄_i, g(x̄)⟩ = 0 (i = 1, 2, …, p), in view of the assumption that generalized sharpened nondegeneracy condition holds at each point x ∈ bd(K) (and hence, at x̄), we have, for each i = 1, 2, …, p, that there exists u ∈ K such that ⟨∇g(x̄)^T λ̄_i, u − x̄⟩ ≠ 0. This implies that ∇g(x̄)^T λ̄_i ≠ 0 for all i = 1, 2, …, p. This together with (3.11) implies that there exists a supporting hyperplane for K at each boundary point x̄ of K. Combining this with the closedness of K, by [18, Theorem 1.3.3], we conclude that K is a convex set. Therefore, (i) holds. □

Corollary 3.2 Let K := {x ∈ R^n : −g(x) ∈ S} be given as in (2.2), where S ⊆ R^m is a non-empty closed convex cone, and let x̄ ∈ K be arbitrary. If K is nearly convex at x̄, then for each u ∈ K and each λ ∈ S° with ⟨λ, g(x̄)⟩ = 0, one has ⟨∇g(x̄)^T λ, u − x̄⟩ ≥ 0.

Proof The proof is exactly similar to that of the implication (iii) ⇒ (iv) in Lemma 3.1, with x replaced by x̄. □

Lemma 3.3 Let D be a closed subset of R n . Then D is convex if and only if D is nearly convex.
Proof Suppose that D is convex. Then it is clear that D is nearly convex. Conversely, let D be nearly convex, and assume, if possible, that D is not convex. Then there exist x, y ∈ D and t_0 ∈ (0, 1) such that z_0 := x + t_0(y − x) ∉ D. Since D is closed, the set T := {t ∈ [t_0, 1] : x + t(y − x) ∈ D} is closed and non-empty (indeed, 1 ∈ T); let t_1 := min T and z_1 := x + t_1(y − x) ∈ D. Since z_0 ∉ D, we have t_1 > t_0, and by the definition of t_1,

x + t(y − x) ∉ D for all t ∈ [t_0, t_1). (3.13)

Since D is nearly convex at z_1 and x ∈ D, there exists a sequence {β_m}_{m∈N} ⊂ (0, +∞) with β_m −→ 0⁺ such that z_1 + β_m(x − z_1) ∈ D for all sufficiently large m. Therefore, in view of z_1 + β_m(x − z_1) = x + t_1(1 − β_m)(y − x) and t_1(1 − β_m) ∈ [t_0, t_1) for all sufficiently large m, we conclude that x + t_1(1 − β_m)(y − x) ∈ D for all sufficiently large m, which contradicts (3.13). Thus, D is convex. □
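Lemma 3.3 says that, for closed sets, near convexity and convexity coincide. The following crude numerical illustration uses two hypothetical closed subsets of R (our own toy sets, not examples from the paper): a midpoint probe refutes convexity of the disconnected set, and the lemma then predicts that this set cannot be nearly convex either.

```python
# Toy illustration of Lemma 3.3 (hypothetical sets, not from the paper):
#   D1 = [0, 1] is closed and convex;
#   D2 = [0, 1] ∪ [2, 3] is closed but not convex,
# so by Lemma 3.3 D2 is not nearly convex either.

def in_D1(t, tol=1e-9):
    return -tol <= t <= 1.0 + tol

def in_D2(t, tol=1e-9):
    return (-tol <= t <= 1.0 + tol) or (2.0 - tol <= t <= 3.0 + tol)

def midpoints_ok(member, pts):
    """Necessary condition for convexity: midpoints of members stay in."""
    return all(member(0.5 * (a + b))
               for a in pts for b in pts
               if member(a) and member(b))

pts1 = [0.0, 0.25, 0.5, 1.0]
pts2 = [0.0, 0.5, 1.0, 2.0, 2.5, 3.0]
assert midpoints_ok(in_D1, pts1)
assert not midpoints_ok(in_D2, pts2)   # midpoint of 1 and 2 is 1.5 ∉ D2
```

As with the earlier probes, a passing midpoint test does not certify convexity; only a failure is conclusive.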
The following example shows that condition (3.10) alone does not guarantee the near convexity of K at each boundary point.
Hence, condition (3.10) is satisfied. However, K is not nearly convex at each boundary point: for x̄ = 0 ∈ bd(K) and u = −1 ∈ K, every sequence {t_n}_{n≥1} of positive real numbers with t_n ↓ 0 satisfies x̄ + t_n(u − x̄) ∉ K for all sufficiently large n, i.e., K is not nearly convex at x̄ = 0.

We now present new characterizations of Robinson's constraint qualification whenever the constraint set K is nearly convex at some reference point, but is not necessarily convex.
Theorem 3.5 Let K := {x ∈ R^n : −g(x) ∈ S} be given as in (2.2), where S ⊆ R^m is a closed convex cone with non-empty interior. Let C be a non-empty closed convex subset of R^n such that C ∩ K ≠ ∅, and let x̄ ∈ C ∩ K. Then the following assertions are equivalent:

(i) ∇g(x̄)(pos(C − x̄)) + pos(S + g(x̄)) = R^m.

(ii) Robinson's constraint qualification holds at x̄.

Hence, if one of the assertions (i) and (ii) holds, then generalized Slater's condition holds. Furthermore, if we assume that K is nearly convex at x̄ and there exists a subset B of S° \ {0} such that pos(B) = S°, and moreover, generalized Slater's condition holds, then (i) (and hence (ii)) is equivalent to each of the following assertions:

(iii) Generalized nondegeneracy condition holds at x̄.

(iv) Generalized sharpened nondegeneracy condition holds at x̄.

Example 3.6 Let
Clearly, C is a closed convex subset of R^2. We see that −g(1/2, 1/16) ∈ int S, i.e., generalized Slater's condition holds (note that (1/2, 1/16) ∈ C). Let x̄ := (1, 1) ∈ C ∩ K. It is easy to check that g(x̄) = 0, so ⟨λ, g(x̄)⟩ = 0 for all λ ∈ B, and moreover, ∇g(x̄)^T λ ≠ 0 whenever λ ∈ B is such that ⟨λ, g(x̄)⟩ = 0. So, generalized nondegeneracy condition holds at x̄. It is not difficult to show that K is not nearly convex at x̄, and Robinson's constraint qualification is invalid at x̄, because

g(x̄) + ∇g(x̄)(C − x̄) + S = R_− × R,

and therefore 0 ∉ int{g(x̄) + ∇g(x̄)(C − x̄) + S}. Thus, in the absence of the near convexity of K at x̄, the validity of both generalized Slater's condition and generalized nondegeneracy condition at x̄ does not guarantee the validity of Robinson's constraint qualification at x̄ ∈ C ∩ K.

Remark 3.7
In view of the proof of Theorem 3.5, we see that the implications (i) ⇒ (iii) and (iii) ⇔ (iv) do not require the near convexity of K at the point x̄. However, even in the case where S is a convex polyhedral cone, the near convexity of K at x̄ is essential for the validity of the implication (iii) ⇒ (ii) (see Example 3.6). Also, it may happen that ∇g(x)^T λ ≠ 0 whenever x ∈ K and λ ∈ B with ⟨λ, g(x)⟩ = 0, for some B ⊂ S° \ {0} such that pos(B) = S°, while there is no x_0 ∈ R^n such that −g(x_0) ∈ int S; for example, let S := R^2

Necessary and sufficient conditions for optimality
In this section, using the new characterizations of Robinson's constraint qualification (Theorem 3.5), we present necessary and sufficient conditions for optimality of the problem (P). We first give the notion of the generalized sharpened strong conical hull intersection property, which was introduced in [4] for the case where the constraint functions g_j, j = 1, 2, …, m, are continuously Fréchet differentiable and the constraint set K is convex. We now give this notion for the case where g is a Fréchet differentiable function and the constraint set K := {x ∈ R^n : −g(x) ∈ S} is nearly convex at some reference point, but is not necessarily convex.

Definition 4.1 Let K := {x ∈ R^n : −g(x) ∈ S} be given as in (2.2), and let C be a non-empty closed convex subset of R^n such that C ∩ K ≠ ∅. We say that the pair {C, K} has the "generalized sharpened strong conical hull intersection property" (G-S strong CHIP, in short) at a point x̄ ∈ C ∩ K if

(C ∩ K − x̄)° = {∇g(x̄)^T λ : λ ∈ S⁺, ⟨λ, g(x̄)⟩ = 0} + (C − x̄)°, (4.21)

where S⁺ is defined by (2.4). The pair {C, K} is said to have the generalized sharpened strong CHIP if it has the generalized sharpened strong CHIP at each point x ∈ C ∩ K.
In the following, we give an example of a pair {C, K} having the G-S strong CHIP at a point x̄ ∈ C ∩ K at which K is nearly convex but not convex.
Example 4.2 Let g_j : R^2 −→ R be defined by and let S := R^2_+. It is easy to see that g is a differentiable function which is not S-convex, and the pair {C, K} has the G-S strong CHIP at x̄. Also, if we assume that C := R_− × R_+, then C ⊂ K, and hence C ∩ K = C. Therefore, the pair {C, K} again has the G-S strong CHIP at x̄.
In the sequel, for each convex function f : R^n −→ R, we denote by (P_f) the following optimization problem:

(P_f)    minimize f(x) subject to x ∈ C ∩ K,

where K := {x ∈ R^n : −g(x) ∈ S} is defined by (2.2), S is a non-empty closed convex cone in R^m, and C is a non-empty closed convex subset of R^n such that C ∩ K ≠ ∅. We now show that the G-S strong CHIP is necessary for optimality of the problem (P_f) whenever the constraint set K is nearly convex at some point x̄ ∈ C ∩ K.
Theorem 4.2 Let K := {x ∈ R^n : −g(x) ∈ S} be given as in (2.2), and let x̄ ∈ C ∩ K. Consider the following assertions:

(i) For each convex function f : R^n −→ R that attains its global minimum over C ∩ K at x̄, we have

0 ∈ ∂f(x̄) + {∇g(x̄)^T λ : λ ∈ S⁺, ⟨λ, g(x̄)⟩ = 0} + (C − x̄)°. (4.22)

(ii) The pair {C, K} has the G-S strong CHIP at x̄.

If K is nearly convex at x̄, then (i) ⇒ (ii). Moreover, if K is convex, then (ii) ⇒ (i).

It should be noted that since K is closed, in view of Lemma 3.3, K is convex if and only if K is nearly convex.
Proof (i) ⇒ (ii). Assume that K is nearly convex at x̄ and that (i) holds. Let u ∈ (C ∩ K − x̄)° be arbitrary. Then ⟨u, y − x̄⟩ ≤ 0 for all y ∈ C ∩ K, and so ⟨−u, y⟩ ≥ ⟨−u, x̄⟩ for all y ∈ C ∩ K. Therefore, x̄ is a global minimizer over C ∩ K of the convex function f : R^n −→ R defined by f(y) := ⟨−u, y⟩ for all y ∈ R^n. Thus, by hypothesis (i),

0 ∈ {−u} + {∇g(x̄)^T λ : λ ∈ S⁺, ⟨λ, g(x̄)⟩ = 0} + (C − x̄)°

(note that ∂f(y) = {∇f(y)} = {−u} for all y ∈ R^n, and so ∂f(x̄) = {−u}). Hence, u ∈ {∇g(x̄)^T λ : λ ∈ S⁺, ⟨λ, g(x̄)⟩ = 0} + (C − x̄)°. This shows that

(C ∩ K − x̄)° ⊆ {∇g(x̄)^T λ : λ ∈ S⁺, ⟨λ, g(x̄)⟩ = 0} + (C − x̄)°. (4.23)

For the reverse inclusion, let u := ∇g(x̄)^T λ + v with λ ∈ S⁺, ⟨λ, g(x̄)⟩ = 0 and v ∈ (C − x̄)° be arbitrary. Since, by the hypothesis, K is nearly convex at x̄, Definition 2.7 implies that, for each y ∈ C ∩ K, there exists a sequence {t_n}_{n≥1} of positive real numbers with t_n −→ 0⁺ such that x̄ + t_n(y − x̄) ∈ K for all sufficiently large n, that is, −g(x̄ + t_n(y − x̄)) ∈ S for all sufficiently large n. This together with λ ∈ S⁺, ⟨λ, g(x̄)⟩ = 0 and the differentiability of g at x̄ implies that

⟨∇g(x̄)^T λ, y − x̄⟩ ≤ 0 for all y ∈ C ∩ K. (4.24)

Now, since v ∈ (C − x̄)°, we obtain from (4.24) that ⟨u, y − x̄⟩ ≤ 0 for all y ∈ C ∩ K. Therefore, u ∈ (C ∩ K − x̄)°, and consequently,

{∇g(x̄)^T λ : λ ∈ S⁺, ⟨λ, g(x̄)⟩ = 0} + (C − x̄)° ⊆ (C ∩ K − x̄)°. (4.25)

We conclude from (4.23) and (4.25) that the pair {C, K} has the G-S strong CHIP at x̄, i.e., (ii) holds.
(ii) ⇒ (i). Suppose that K is convex and that (ii) holds. Let x̄ be a global minimizer of a convex function f : R^n −→ R over C ∩ K. Then, due to the convexity of C ∩ K and using Moreau-Rockafellar's theorem, we get

0 ∈ ∂f(x̄) + (C ∩ K − x̄)°.

This together with (4.21) implies that (4.22) holds, i.e., (i) is justified. □

Definition 4.3 (KKT conditions and stationary points). Consider the problem (P_f), and let x̄ ∈ C ∩ K. We say that x̄ is a stationary point of the problem (P_f) if there exists λ ∈ (S + g(x̄))° such that

0 ∈ ∂f(x̄) − ∇g(x̄)^T λ + (C − x̄)°. (4.26)

If the function f : R^n −→ R is Fréchet differentiable, C := R^n and S := R^m_+, then (4.26) is called the Karush-Kuhn-Tucker conditions (KKT conditions), and λ in (4.26) is called a Lagrange multiplier at x̄.
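In the special case C = R^n and S = R^m_+ mentioned above, condition (4.26) reduces to the classical KKT system, which can be verified numerically at a candidate point. Below is a minimal sketch on a hypothetical smooth instance of our own (f, g and the point are illustrations, not from the paper), written in the classical sign convention ∇f(x̄) + λ∇g(x̄) = 0 with λ ≥ 0 for the inequality constraint g(x) ≤ 0.

```python
import numpy as np

# Hypothetical instance (not from the paper) with C = R^2, S = R_+:
#   minimize f(x) = (x1 - 2)^2 + x2^2  subject to g(x) = x1^2 + x2^2 - 1 <= 0.
# The minimizer is xbar = (1, 0); we check the KKT conditions there:
# stationarity, dual feasibility and complementary slackness.

def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * x[1]])

def g(x):
    return x[0]**2 + x[1]**2 - 1.0

def grad_g(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def kkt_residual(x, lam):
    """Max violation of stationarity, dual feasibility, complementarity."""
    stationarity = np.linalg.norm(grad_f(x) + lam * grad_g(x))
    dual_feas = max(0.0, -lam)              # requires lam >= 0
    complementarity = abs(lam * g(x))       # requires lam * g(x) = 0
    return max(stationarity, dual_feas, complementarity)

xbar = np.array([1.0, 0.0])
lam = 1.0                        # Lagrange multiplier at xbar
assert kkt_residual(xbar, lam) < 1e-12
assert g(xbar) <= 1e-12          # xbar is feasible
```

Since f is convex, g is convex and Slater's condition holds for this toy instance, a zero KKT residual here certifies global optimality of x̄, in line with the sufficiency direction discussed below.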
We now present necessary and sufficient conditions for optimality of the problem (P f ).

Theorem 4.4
Consider the problem (P_f), and let x̄ ∈ C ∩ K. Assume that S has a non-empty interior and that there exists a subset B of S° \ {0} such that pos(B) = S°, and moreover, that generalized sharpened nondegeneracy condition holds at x̄. Furthermore, suppose that generalized Slater's condition holds. Consider the following assertions:

(i) x̄ is a stationary point of the problem (P_f).

(ii) x̄ is a global minimizer of the problem (P_f) over C ∩ K.

If K is nearly convex at x̄, then (i) ⇒ (ii). Moreover, if K is convex, then (ii) ⇒ (i).
Proof (i) ⇒ (ii). Assume that K is nearly convex at x̄, and x̄ is a stationary point of the problem (P_f). Then, by Definition 4.3, there exists λ ∈ (S + g(x̄))° such that

0 ∈ ∂f(x̄) − ∇g(x̄)^T λ + (C − x̄)°.

Since λ ∈ (S + g(x̄))°, −g(x̄) ∈ S and S is a cone, we conclude that −λ ∈ S⁺ and ⟨λ, g(x̄)⟩ = 0, and hence,

0 ∈ ∂f(x̄) + {∇g(x̄)^T μ : μ ∈ S⁺, ⟨μ, g(x̄)⟩ = 0} + (C − x̄)°. (4.27)

Now, by the near convexity of K at x̄, we show that the inclusion

{∇g(x̄)^T μ : μ ∈ S⁺, ⟨μ, g(x̄)⟩ = 0} + (C − x̄)° ⊆ (C ∩ K − x̄)° (4.28)

holds. To this end, let w ∈ {∇g(x̄)^T μ : μ ∈ S⁺, ⟨μ, g(x̄)⟩ = 0} + (C − x̄)° be arbitrary. Then there exist μ ∈ S⁺ with ⟨μ, g(x̄)⟩ = 0 and v ∈ (C − x̄)° such that w = ∇g(x̄)^T μ + v. Now, let y ∈ C ∩ K be arbitrary. Since K is nearly convex at x̄, it follows from Definition 2.7 that there exists a sequence {β_m}_{m∈N} ⊂ (0, +∞) with β_m −→ 0⁺ such that x̄ + β_m(y − x̄) ∈ K for all sufficiently large m ∈ N. This implies that −g(x̄ + β_m(y − x̄)) ∈ S for all sufficiently large m ∈ N. This together with μ ∈ S⁺, ⟨μ, g(x̄)⟩ = 0 and the differentiability of g at x̄ implies that ⟨∇g(x̄)^T μ, y − x̄⟩ ≤ 0. Since v ∈ (C − x̄)°, it follows that ⟨w, y − x̄⟩ ≤ 0, and hence w ∈ (C ∩ K − x̄)°, which proves (4.28). Combining (4.27) and (4.28), we get 0 ∈ ∂f(x̄) + (C ∩ K − x̄)°, i.e., there exists ξ ∈ ∂f(x̄) such that ⟨ξ, y − x̄⟩ ≥ 0 for all y ∈ C ∩ K. Hence, by the definition of the subdifferential, f(y) − f(x̄) ≥ ⟨ξ, y − x̄⟩ ≥ 0 for all y ∈ C ∩ K. Therefore, x̄ is a global minimizer of the problem (P_f) over C ∩ K, i.e., (ii) holds.

(ii) ⇒ (i). Suppose that K is convex. We first show that the pair {C, K} has the G-S strong CHIP at x̄. To this end, note that, by the hypotheses, K is nearly convex at x̄ (because K is convex), S has a non-empty interior, there exists a subset B of S° \ {0} such that pos(B) = S°, and generalized Slater's condition holds; then by Theorem 3.5 (the implication (iv) ⇒ (ii)) (note that, by the hypothesis, generalized sharpened nondegeneracy condition holds at x̄), Robinson's constraint qualification holds at x̄ ∈ D := C ∩ K. Now, let the function G : R^n −→ R^m × R^n be defined by G(x) := (−g(x), x) for all x ∈ R^n. Clearly, one has D = {x ∈ R^n : G(x) ∈ S × C}. Since Robinson's constraint qualification holds at x̄ ∈ D, we conclude from [1, Lemma 2.100] together with [14, Corollaries 1.15 & 3.9] that

(D − x̄)° = {∇g(x̄)^T λ : λ ∈ S⁺, ⟨λ, g(x̄)⟩ = 0} + (C − x̄)°. (4.29)

Since C and K are convex sets and S is a convex cone, and D = C ∩ K, we deduce from (4.29) that the pair {C, K} has the G-S strong CHIP at x̄. Now, suppose that x̄ is a global minimizer of the problem (P_f) over C ∩ K. Then, by Theorem 4.2 (the implication (ii) ⇒ (i)),

0 ∈ ∂f(x̄) + {∇g(x̄)^T λ : λ ∈ S⁺, ⟨λ, g(x̄)⟩ = 0} + (C − x̄)°.

This implies that there exists λ ∈ S⁺ such that ⟨λ, g(x̄)⟩ = 0 and 0 ∈ ∂f(x̄) + ∇g(x̄)^T λ + (C − x̄)°. Since S is a closed cone, we conclude that −λ ∈ (S + g(x̄))°. So, there exists λ̃ := −λ ∈ (S + g(x̄))° such that 0 ∈ ∂f(x̄) − ∇g(x̄)^T λ̃ + (C − x̄)°. Hence, in view of Definition 4.3, x̄ is a stationary point of the problem (P_f), which completes the proof. □
Proof Since Robinson's constraint qualification holds at x̄, in view of the hypotheses, Theorem 3.5 (the implication (ii) ⇒ (iii)) shows that generalized nondegeneracy condition holds at x̄, and moreover, generalized Slater's condition holds. Now, the result follows from Corollary 4.5. □