Optimality conditions for robust weak sharp efficient solutions of nonsmooth uncertain multiobjective optimization problems

In this paper, we investigate an uncertain multiobjective optimization problem involving nonsmooth and nonconvex functions. The notion of a (local/global) robust weak sharp efficient solution is introduced. Then, we establish necessary and sufficient optimality conditions for local and global robust weak sharp efficient solutions of the considered problem. These optimality conditions are presented in terms of multipliers and Mordukhovich/limiting subdifferentials of the related functions.


Introduction
In reality, it is common that the input data associated with the objective function and the constraints of a program are uncertain or incomplete due to prediction errors, measurement errors, or lack of information; that is, they are not known precisely when the problem is solved (see [1]). Robust optimization has emerged as a notable deterministic framework for investigating mathematical programming problems with data uncertainty. Many researchers have intensively studied both theoretical and applied aspects of robust optimization; see, e.g., [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20] and the references therein.
In [12,13], Kuroiwa and Lee studied scalarizations and optimality theorems for uncertain multiobjective optimization problems whose involved functions are convex. Then, in [16], Lee and Kim proved nonsmooth optimality theorems for weakly robust efficient solutions and properly robust efficient solutions of multiobjective optimization problems with data uncertainty, and soon after, Lee and Lee [18] studied optimality conditions and duality theorems for uncertain semi-infinite multiobjective optimization problems. Besides, for nonconvex optimization problems, Chuong [11] established necessary/sufficient optimality conditions for robust (weakly) efficient solutions and robust duality theorems for uncertain multiobjective optimization in terms of multipliers and Mordukhovich/limiting subdifferentials of the related functions.
On the other hand, the notion of a weak sharp solution to general mathematical programming problems was first introduced in [21]. It extends the notion of a sharp minimizer (or, equivalently, a strongly unique minimizer) in [22] to include the possibility of a non-unique solution set. It has been acknowledged that weak sharp minimizers play important roles in stability/sensitivity analysis and in the convergence analysis of a wide range of numerical algorithms in mathematical programming (see [23][24][25][26] and references therein). In the context of optimization, much attention has been paid to necessary and/or sufficient conditions for weak sharp solutions in various types of problems (see [27][28][29][30][31][32][33][34] and references therein).
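To make the sharp/weak sharp growth condition concrete, the following small numerical sketch (our illustration, not part of the cited works) checks the inequality f(x) − min f ≥ η·dist(x, S) on a grid for two classic scalar examples: |x|, which has a sharp (hence weak sharp) minimizer at 0, and x², which does not.

```python
def is_weak_sharp(f, solution_set, grid, eta):
    """Check f(x) - min f >= eta * dist(x, solution_set) at every grid point."""
    fmin = min(f(s) for s in solution_set)
    for x in grid:
        dist = min(abs(x - s) for s in solution_set)
        if f(x) - fmin < eta * dist - 1e-12:  # small tolerance for rounding
            return False
    return True

# Uniform grid on [-1, 1] with step 0.001.
grid = [i / 1000.0 - 1.0 for i in range(2001)]

# |x| has a sharp minimizer at 0: |x| - 0 >= 1 * |x - 0| holds with eta = 1.
assert is_weak_sharp(abs, [0.0], grid, eta=1.0)

# x**2 is NOT weak sharp at 0: x**2 < eta * |x| for x close enough to 0.
assert not is_weak_sharp(lambda x: x ** 2, [0.0], grid, eta=0.1)
```

The failure for x² near the minimizer is exactly the phenomenon that the weak sharpness notion rules out: the objective must grow at least linearly with the distance to the solution set.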
Very recently, with the intention of answering the question "What do optimality conditions for weak sharp solutions look like, particularly in robust optimization?", Kerdkaew and Wangkeeree [35] introduced robust weak sharp and robust sharp solutions to a convex cone-constrained optimization problem with data uncertainty and established some optimality conditions for the robust weak sharp solutions of the problem. Moreover, as an application, the authors presented characterizations of the robust weak sharp weakly efficient solution sets of convex uncertain multiobjective optimization problems. Shortly afterwards, Kerdkaew et al. [36] investigated a robust optimization problem involving nonsmooth and nonconvex real-valued functions and obtained some optimality conditions for robust weak sharp solutions of the problem.
Motivated by the above-mentioned works, especially [34][35][36], we aim to establish necessary and sufficient optimality conditions for the robust weak sharp efficient solutions of an uncertain multiobjective optimization problem with data uncertainty in both the objective and constraint functions. The obtained optimality conditions are presented in terms of multipliers and limiting/Mordukhovich subdifferentials of the related functions. In addition, some examples are provided for analyzing and illustrating the obtained results.
The rest of the paper is organized as follows. Section 2 contains some basic definitions from variational analysis and several auxiliary results. Here, we introduce a new solution concept involving robustness and weak sharp efficiency, namely the robust weak sharp efficient solution. In Sect. 3, the first part of the main results, including a nonsmooth Fermat rule for the local robust weak sharp efficient solutions of the uncertain multiobjective optimization problem, is presented. Section 4 presents another part of the results, namely some sufficient optimality conditions for robust weak sharp efficient solutions of the considered problem. Section 5 is devoted to concluding remarks.

Preliminaries
We begin this section by fixing notation and definitions, including the notation generally used in variational analysis and the Mordukhovich generalized differentiation constructions (see [37,38] for more details), which are the main tools of our study. Throughout this paper, R^n denotes the n-dimensional Euclidean space. The inner product and the norm in R^n are denoted by ⟨·, ·⟩ and ‖·‖, respectively. The symbols R^n_+, B, and B(x_0, r) stand for the nonnegative orthant of R^n, the closed unit ball in R^n, and the open ball with center x_0 ∈ R^n and radius r > 0, respectively. For a nonempty subset S ⊆ R^n, the closure, boundary, and convex hull of S are denoted by cl S, bd S, and co S, respectively, while the notation x →_S x_0 means that x → x_0 with x ∈ S. Let a point x_0 ∈ S be given. The set S is said to be closed around x_0 if there is a neighborhood U of x_0 such that S ∩ U is closed; moreover, S is said to be locally closed if it is closed around every x_0 ∈ S. Given a set-valued mapping F : R^n → 2^{R^n}, the sequential Painlevé–Kuratowski upper/outer limit of F as x → x_0 is defined by

Lim sup_{x→x_0} F(x) := {x* ∈ R^n : ∃ sequences x_k → x_0 and x*_k → x* with x*_k ∈ F(x_k) for all k ∈ N}.

Let S be closed around x_0. Recall that the contingent cone of S at x_0, denoted by T(S, x_0), is defined by

T(S, x_0) := {w ∈ R^n : ∃ t_k ↓ 0 and w_k → w such that x_0 + t_k w_k ∈ S for all k ∈ N},

while the Fréchet (or regular) normal cone of S at x_0, denoted by N̂(S, x_0), which is the set of all Fréchet normals, is defined by

N̂(S, x_0) := {x* ∈ R^n : lim sup_{x →_S x_0} ⟨x*, x − x_0⟩ / ‖x − x_0‖ ≤ 0}.

Note that the Fréchet (or regular) normal cone N̂(S, x_0) is a closed convex subset of R^n, and we set N̂(S, x_0) := ∅ if x_0 ∉ S. The notation N(S, x_0) stands for the Mordukhovich (or basic, limiting) normal cone of S at x_0.
It is defined by

N(S, x_0) := Lim sup_{x →_S x_0} N̂(S, x);

that is, the Mordukhovich normal cone is obtained from the Fréchet normal cones by taking the sequential Painlevé–Kuratowski upper/outer limit (see [37] for more details). In the special case that S is a convex set, we obtain the following relations:

N(S, x_0) = N̂(S, x_0) = {x* ∈ R^n : ⟨x*, x − x_0⟩ ≤ 0, ∀x ∈ S}.

Let h : R^n → R̄ := R ∪ {±∞} be an extended real-valued function. The domain and the epigraph of h are defined, respectively, by

dom h := {x ∈ R^n : h(x) < +∞} and epi h := {(x, α) ∈ R^n × R : h(x) ≤ α}.

Let x_0 ∈ dom h and ε ≥ 0 be given. The analytic ε-subdifferential of h at x_0, denoted by ∂̂_ε h(x_0), is defined by

∂̂_ε h(x_0) := {x* ∈ R^n : lim inf_{x→x_0} [h(x) − h(x_0) − ⟨x*, x − x_0⟩] / ‖x − x_0‖ ≥ −ε}.

In the special case that ε = 0, the analytic ε-subdifferential ∂̂_ε h(x_0) of h at x_0 reduces to the Fréchet subdifferential of h at x_0, which is denoted by ∂̂h(x_0). Besides, ∂h(x_0) denotes the Mordukhovich subdifferential of h at x_0. It is defined by

∂h(x_0) := Lim sup_{x → x_0, h(x) → h(x_0), ε ↓ 0} ∂̂_ε h(x).

In addition, we have the following equality, which relates the Mordukhovich subdifferential of h at x_0 with |h(x_0)| < ∞ and the Mordukhovich normal cone of epi h:

∂h(x_0) = {x* ∈ R^n : (x*, −1) ∈ N(epi h, (x_0, h(x_0)))}.    (2.1)

In the case that x_0 ∉ dom h, we set ∂̂h(x_0) = ∂h(x_0) := ∅. It is obvious that ∂̂h(x_0) ⊆ ∂h(x_0) for any x_0 ∈ R^n and, in particular, the following relation is fulfilled if h is a convex function:

∂h(x_0) = ∂̂h(x_0) = {x* ∈ R^n : ⟨x*, x − x_0⟩ ≤ h(x) − h(x_0), ∀x ∈ R^n}.    (2.2)

The following necessary optimality condition, called the generalized Fermat rule, for a function to attain a local minimum plays a key role in our analysis.

Lemma 2.2 ([37,38]) Let h : R^n → R ∪ {+∞} be a proper lower semicontinuous function. If h attains a local minimum at x_0 ∈ R^n, then 0 ∈ ∂̂h(x_0), which implies 0 ∈ ∂h(x_0).
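As a quick illustration of the generalized Fermat rule (not taken from the paper), consider the convex function h(x) = |x|, whose minimum at 0 is detected by the subdifferential even though h is not differentiable there:

```latex
% Illustration (not from the paper): the generalized Fermat rule on h(x) = |x|.
% h attains its global minimum at x_0 = 0, and indeed
\[
\partial h(0) \;=\; \hat{\partial} h(0) \;=\; [-1,1] \;\ni\; 0,
\]
% since h is convex, so that by (2.2)
\[
\hat{\partial} h(0)
  \;=\; \{\, x^{*} \in \mathbb{R} : x^{*} x \le |x| \ \ \forall x \in \mathbb{R} \,\}
  \;=\; [-1,1].
\]
```

The containment 0 ∈ ∂h(0) is exactly the conclusion of the Fermat rule at the minimizer.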
We recall the following fuzzy sum rule for Fréchet subdifferentials and the sum rule for Mordukhovich subdifferentials, which are important in the sequel.

Lemma 2.3 ([37,38]) Let h_1, h_2 : R^n → R ∪ {+∞} be proper lower semicontinuous functions and let x_0 ∈ dom h_1 ∩ dom h_2 be such that h_1 is Lipschitz continuous around x_0.
(i) If x* ∈ ∂̂(h_1 + h_2)(x_0), then for every ε > 0 there exist x_1, x_2 ∈ B(x_0, ε) with |h_i(x_i) − h_i(x_0)| ≤ ε, i = 1, 2, such that x* ∈ ∂̂h_1(x_1) + ∂̂h_2(x_2) + εB.
(ii) ∂(h_1 + h_2)(x_0) ⊆ ∂h_1(x_0) + ∂h_2(x_0).
To conclude this section, we recall the concepts of classical, uncertain, and robust multiobjective optimization problems, respectively. Let Ω be a nonempty locally closed subset of R^n and let f_i, g_j : R^n → R, i ∈ I := {1, . . . , m}, j ∈ J := {1, . . . , p}, be given. Consider the following multiobjective optimization problem:

(MP)  min (f_1(x), . . . , f_m(x))  subject to  g_j(x) ≤ 0, j ∈ J, x ∈ Ω.

The multiobjective optimization problem (MP) in the face of data uncertainty in both the objective functions and the constraints can be written as the following uncertain multiobjective optimization problem:

(UMP)  min (f_1(x, u_1), . . . , f_m(x, u_m))  subject to  g_j(x, v_j) ≤ 0, j ∈ J, x ∈ Ω,

where f_i : R^n × U_i → R, i ∈ I, and g_j : R^n × V_j → R, j ∈ J, are given real-valued functions, x is the vector of decision variables, and u_i, i ∈ I, and v_j, j ∈ J, are uncertain parameters belonging to sequentially compact topological spaces U_i, i ∈ I, and V_j, j ∈ J, respectively. In fact, the uncertainty sets can be understood in the sense that the parameters u_i, i ∈ I, and v_j, j ∈ J, are not known exactly at the time the decision is made. For examining the uncertain optimization problem (UMP), we adopt the robust approach, i.e., the worst-case approach, for (UMP). The following robust multiobjective optimization problem (RMP) is associated with (UMP); it is the robust counterpart of (UMP):

(RMP)  min (max_{u_1∈U_1} f_1(x, u_1), . . . , max_{u_m∈U_m} f_m(x, u_m))  subject to  g_j(x, v_j) ≤ 0, ∀v_j ∈ V_j, j ∈ J, x ∈ Ω.

The robust feasible set K is defined by

K := {x ∈ Ω : g_j(x, v_j) ≤ 0, ∀v_j ∈ V_j, j ∈ J}.    (2.3)

Now, we recall the following concept of robust efficient solutions for (UMP), which can be found in the literature; see, e.g., [14].
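The worst-case reading of (UMP) can be sketched numerically. The data below — one objective with a three-point uncertainty set and one constraint with a two-point uncertainty set — are hypothetical and chosen only to illustrate how the robust counterpart evaluates objectives and feasibility.

```python
# Hypothetical discretized data: objective f(x, u) = u*x**2 - x with
# U = {0.5, 1.0, 1.5}, and constraint g(x, v) = v - x with V = {0.2, 0.4}.
U = [0.5, 1.0, 1.5]
V = [0.2, 0.4]

f = lambda x, u: u * x ** 2 - x
g = lambda x, v: v - x

def robust_objective(x):
    # Worst-case (robust counterpart) objective: maximize over the uncertainty set.
    return max(f(x, u) for u in U)

def robust_feasible(x):
    # x is robust feasible iff g(x, v) <= 0 for EVERY scenario v in V.
    return all(g(x, v) <= 0 for v in V)

assert robust_feasible(0.5)           # 0.2 - 0.5 <= 0 and 0.4 - 0.5 <= 0
assert not robust_feasible(0.3)       # fails for the scenario v = 0.4
assert robust_objective(1.0) == 0.5   # max(-0.5, 0.0, 0.5) = 0.5
```

Note that a point such as x = 0.3, feasible for the nominal scenario v = 0.2, is rejected by the robust counterpart because a single adverse scenario violates the constraint.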

Definition 2.4
The vector x_0 ∈ K is said to be a local robust efficient solution for (UMP) if it is a local efficient solution for (RMP), i.e., there exists a neighborhood U of x_0 such that there is no x ∈ K ∩ U satisfying

max_{u_i∈U_i} f_i(x, u_i) ≤ max_{u_i∈U_i} f_i(x_0, u_i) for all i ∈ I, with strict inequality for at least one i ∈ I.

In addition, if U = R^n, then x_0 ∈ K is said to be a global robust efficient solution for (UMP).
Next, we introduce a new concept of a robust solution, which is related to robustness and weak sharp efficiency, namely the (local/global) robust weak sharp efficient solution.
Definition 2.5 A point x_0 ∈ K is said to be a local robust weak sharp efficient solution for (UMP) if it is a local weak sharp efficient solution for (RMP), i.e., there exist a neighborhood U of x_0 and a real number η > 0 such that

max_{1≤i≤m} ( max_{u_i∈U_i} f_i(x, u_i) − max_{u_i∈U_i} f_i(x_0, u_i) ) ≥ η d(x, S), ∀x ∈ K ∩ U,

where S := {x ∈ K : max_{u_i∈U_i} f_i(x, u_i) = max_{u_i∈U_i} f_i(x_0, u_i), i ∈ I}. In addition, if U = R^n, then x_0 is said to be a (global) robust weak sharp efficient solution for (UMP).
It is simple to see that every (local) robust sharp efficient solution or robust weak sharp efficient solution of a problem must also be a (local) robust efficient solution of the problem. In contrast, the converse implication need not be true. In the case that the solution set is a singleton, a robust efficient solution of (UMP) is a robust weak sharp efficient solution of the problem. However, in many cases, a problem that has a robust weak sharp efficient solution has no robust sharp efficient solution. Observe that x_0 := (0, 0) ∈ K is a global robust efficient solution of (UMP). Assume that x_0 is a local robust sharp efficient solution of (UMP); then there exist η, ε > 0 such that x² ≥ η‖x − x_0‖ for all x ∈ K ∩ U with U := (−ε, ε). It can be seen that S = {(0, 0)}, and the inequality reduces to x² ≥ η|x| for all x ∈ K ∩ (−ε, ε), which is a contradiction.
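The contradiction at the end of the example above — no η > 0 can satisfy x² ≥ η|x| on a whole interval around 0 — can be checked numerically; the sketch below is our illustration, not part of the paper.

```python
# Numeric check of the contradiction used above: for any eta > 0, the point
# x = eta/2 (which lies arbitrarily close to 0 once eta is small) violates
# x**2 >= eta*|x|, since x**2 = eta**2/4 < eta**2/2 = eta*|x|.
def violates_sharp_growth(eta):
    x = eta / 2.0
    return x ** 2 < eta * abs(x)

assert all(violates_sharp_growth(eta) for eta in (1.0, 0.1, 1e-6))
```

Since a violating point exists inside every interval (−ε, ε), the sharp-growth inequality fails for every choice of η and ε, exactly as claimed.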

Necessary optimality conditions for robust weak sharp efficient solutions
In this section, we focus our attention on establishing some necessary optimality conditions for the local (global) robust weak sharp efficient solutions of uncertain multiobjective optimization problems in terms of the advanced tools of variational analysis and generalized differentiation. Concretely, using the generalized Fermat rule, the Mordukhovich subdifferential for maximum functions, the fuzzy sum rule for Fréchet subdifferentials, and the sum rule for Mordukhovich subdifferentials, we establish a necessary condition for the local robust weak sharp efficient solution of the problem (UMP).
First, for any given x_0 ∈ Ω, we set

U_i(x_0) := {u_i ∈ U_i : f_i(x_0, u_i) = max_{u∈U_i} f_i(x_0, u)}, i ∈ I,
V_j(x_0) := {v_j ∈ V_j : g_j(x_0, v_j) = 0}, j ∈ J,  and  J(x_0) := {j ∈ J : V_j(x_0) ≠ ∅}.

In what follows, throughout this section, we assume that each g_j : R^n × V_j → R is a function such that, for each fixed v_j ∈ V_j, j ∈ J, g_j(·, v_j) is locally Lipschitz continuous, while each function f_i : R^n × U_i → R, i ∈ I, satisfies the following conditions:
(C1) For a fixed x_0 ∈ Ω, there exists r_{x_0} > 0 such that the function f_i(x, ·) : U_i → R, i ∈ I, is upper semicontinuous for all x ∈ B(x_0, r_{x_0}), and f_i(·, u_i) is Lipschitz continuous in x, uniformly for u_i ∈ U_i, i.e., for some real number l_i > 0,

|f_i(x, u_i) − f_i(y, u_i)| ≤ l_i ‖x − y‖ for all x, y ∈ B(x_0, r_{x_0}) and u_i ∈ U_i.

(C2) For each i ∈ I, the set-valued mapping (x, u_i) ∈ R^n × U_i ↦ ∂_x f_i(x, u_i) is closed at (x_0, u_i) for each u_i ∈ U_i(x_0), where the symbol ∂_x stands for the Mordukhovich subdifferential operation with respect to x.
Remark 3.1 (i) The assumption (C1) guarantees that the function max_{u_i∈U_i} f_i(·, u_i), i ∈ I, is well defined and locally Lipschitz of rank l_i (see, e.g., [41]). When dealing with subgradients of a supremum/max function over a compact set, this assumption has been widely used in the literature (see, e.g., [15,[42][43][44]). (ii) The assumption (C2), related to the closedness of the partial subdifferential operation with respect to the first variable, is a relaxed property of subdifferentials for convex functions in the finite-dimensional setting (see [11,45] for more details).
To obtain the necessary optimality condition for local robust weak sharp efficient solutions of (UMP), we now state a constraint qualification for the uncertain multiobjective optimization problem with the feasible set K defined in (2.3).

Definition 3.2 ([11])
Given an arbitrary x_0 ∈ Ω, the constraint qualification (CQ) is said to be satisfied at x_0 if there do not exist μ_j ≥ 0, j ∈ J(x_0), not all zero, and v_j ∈ V_j(x_0), j ∈ J(x_0), such that

0 ∈ Σ_{j∈J(x_0)} μ_j ∂_x g_j(x_0, v_j) + N(Ω, x_0).

Remark 3.3 We can see that the (CQ) defined in Definition 3.2 reduces to the constraint qualification defined in [39, Definition 3.2] when Ω = R^n. Moreover, it is not hard to verify that this (CQ) reduces to the extended Mangasarian–Fromovitz constraint qualification (see [40]) in the smooth setting when Ω = R^n.
Next, we establish the following necessary optimality condition for local robust weak sharp efficient solutions of (UMP) under the (CQ).

Hence, we derive from (3.2) that

max_{1≤i≤m} ( max_{u_i∈U_i} f_i(y, u_i) − max_{u_i∈U_i} f_i(x, u_i) ) ≥ η d(y, S)

for all y ∈ K ∩ B(x, r_3). Note that x ∈ K and max_{u∈U} f(x, u) = max_{u∈U} f(x_0, u), since x ∈ S. Then, the corresponding penalized function φ : R^n → R ∪ {+∞} attains a local minimum at x. Indeed, for each y ∈ R^n, φ(y) ≥ 0, while φ(x) = 0. Therefore, we obtain 0 ∈ ∂̂φ(x) by applying the generalized Fermat rule (Lemma 2.2). Since, for each u_i ∈ U_i, i ∈ I, f_i(·, u_i) is locally Lipschitz continuous at x, the function f̃ : R^n → R, defined by

f̃(y) := max_{1≤i≤m} ( max_{u_i∈U_i} f_i(y, u_i) − max_{u_i∈U_i} f_i(x, u_i) ),

is also locally Lipschitz continuous at x. Let γ > 0 be the modulus of local Lipschitz continuity of f̃. Additionally, since the robust feasible set K is locally closed, δ(·, K) is lower semicontinuous around x. Clearly, the function ‖· − x‖ is Lipschitz continuous with modulus 1. Therefore, by applying Lemma 2.3(i), we have that, for the preceding ε > 0, there exist x_ε1, x_ε2, x_ε3 ∈ B(x, ε) realizing the corresponding fuzzy sum inclusion. It then follows from δ(x_ε3, K) < ε that x_ε3 ∈ K, and so ∂̂δ(·, K)(x_ε3) = N̂(K, x_ε3). Since f̃ is Lipschitz continuous around x with constant γ and x_ε1 ∈ B(x, ε), we have from [37, Proposition 1.85] with ε = 0, for all sufficiently small ε > 0, that ∂̂f̃(x_ε1) ⊆ γB. Simultaneously, we also obtain ∂̂‖· − x‖(x_ε2) ⊆ B. By these inclusions and the compactness of B, and since f satisfies (C1) and (C2), in the same fashion as used to prove inequality (3.4) in [11, Theorem 3.3], we obtain, for each fixed i ∈ I,

∂( max_{u_i∈U_i} f_i(·, u_i) )(x) ⊆ co{ ∂_x f_i(x, u_i) : u_i ∈ U_i(x) }.

Furthermore, we apply the formula for the Mordukhovich subdifferential of maximum functions (see [37, Theorem 3.46(ii)]) and Lemma 2.3(ii).
On the other hand, we put Λ := {x ∈ R^n : g_j(x, v_j) ≤ 0, ∀v_j ∈ V_j, j ∈ J}. Hence, K = Ω ∩ Λ. Observe that the (CQ) holds at x, since r < r_2 and x ∈ S ∩ B(x_0, r). In addition, as 0 ∈ N(Ω, x), the inclusion N(Λ, x) ⊆ N(Λ, x) + N(Ω, x) always holds. Since the (CQ) is satisfied at x, there do not exist μ_j ≥ 0 and v_j ∈ V_j(x), j ∈ J(x), such that Σ_{j∈J(x)} μ_j ≠ 0 and 0 ∈ Σ_{j∈J(x)} μ_j ∂_x g_j(x, v_j) + N(Ω, x).

Remark 3.5 (i)
In the case that f is a real-valued function and g_j, j ∈ J, are assumed to be continuous functions such that, for each u ∈ U ⊆ R^{q_0} and each fixed v_j ∈ V_j, f(·, u) and g_j(·, v_j) are convex functions, respectively, our considered problem reduces to a convex optimization problem with data uncertainty that was studied in [14]. Although, in [14, Proposition 2.1], the authors employed the assumptions of convexity of the objective and constraint functions and convexity of the parameter uncertainty sets to establish necessary optimality conditions for a robust solution, Theorem 3.4 establishes necessary optimality conditions for a local robust weak sharp solution, which is also a local robust solution, without these assumptions. (ii) In the case that f_i, i ∈ I, and g_j, j ∈ J, are without uncertainty, our considered problem reduces to a multiobjective optimization problem involving nonsmooth and nonconvex functions. Necessary and sufficient conditions for weak sharp efficient solutions of such multiobjective optimization problems were established in [34].
The following example shows that the (CQ) being satisfied around x 0 ∈ K is essential for Theorem 3.4.
which shows that (3.1) does not hold for any η, δ > 0. Hence, the condition (CQ) is vital. It is obvious that the functions f_i(·, u_i), i = 1, 2, and g(·, v) are not convex. Therefore, [35, Theorem 4.2] is not applicable to this example.
The following result is established easily by means of the basic concepts of variational analysis. Corollary 3.7 Let x_0 ∈ K be given. Suppose that there exists a neighborhood U of x_0 such that the constraint qualification (CQ) is satisfied at every x ∈ K ∩ U. If x_0 is a local robust weak sharp efficient solution for (UMP), then there exist real numbers η, r > 0 such that, for any x ∈ S ∩ B(x_0, r) and x* ∈ ηB ∩ N(S, x), the inclusion in Theorem 3.4 holds. Specially, if x_0 ∈ K is a local sharp efficient solution for (RMP), i.e., a local robust sharp efficient solution for (UMP), then the point x_0 is isolated in the solution set of (RMP). Therefore, we obtain that N(S, x_0) = R^n, and the (CQ) needs to be fulfilled only at x_0. The following result, which presents the necessary optimality conditions for a local robust sharp efficient solution of (UMP), is obtained when the (CQ) is satisfied at x_0. Corollary 3.8 Let x_0 ∈ K be given and let the constraint qualification (CQ) be satisfied at x_0. If x_0 is a local robust sharp efficient solution for (UMP), then there exists a real number η > 0 such that the inclusion in Theorem 3.4 holds at x_0 with N(S, x_0) = R^n.

Sufficient optimality conditions for robust weak sharp efficient solutions
In this section, we focus on sufficient optimality conditions for robust weak sharp efficient solutions of uncertain multiobjective optimization problems. To formulate sufficient conditions for robust weak sharp solutions of problem (UMP) in the next theorem, we need the concept of generalized convexity at a given point for a family of real-valued functions. For convenience, in the sequel we set f := (f_1, . . . , f_m) and g := (g_1, . . . , g_p).
Remark 4.2 If f_i(·, u_i), u_i ∈ U_i, i ∈ I, and g_j(·, v_j), v_j ∈ V_j, j ∈ J, are convex, then (f, g) is generalized convex at any x_0 ∈ R^n with w := x − x_0 for each x ∈ R^n.
Next, we focus on the sufficiency part for the considered problem. In the following theorem, we establish sufficient optimality conditions for a robust weak sharp efficient solution of the problem (UMP).

Theorem 4.3
For the problem (UMP), let Ω := R^n. Assume that x_0 ∈ K satisfies condition (3.11) with real numbers η and r. If (f, g) is generalized convex at x_0, then x_0 is a robust weak sharp efficient solution for the problem (UMP).
Proof Since x_0 ∈ K satisfies condition (3.11) with real numbers η and r, for any x ∈ S ∩ B(x_0, r) and x* ∈ ηB ∩ N(S, x), there exist λ_i ≥ 0, i ∈ I, μ_j ≥ 0, and v_{j_l} ∈ V_j, l = 1, . . . , l_j, l_j ∈ N, such that Σ_{i∈I} λ_i + Σ_{j∈J} μ_j = 1 and (4.1) holds. Since 0 ∈ ηB ∩ N(S, x), we have the corresponding representation. Clearly, if the solution set of (UMP) is the singleton {x_0}, then x_0 is also a robust weak sharp efficient solution of the problem. Assume that x_0 is a robust efficient solution but not a robust weak sharp efficient solution of problem (UMP). Then, there exists x̄ ∈ K such that, for all η > 0, (4.2) holds. It follows from the generalized convexity of (f, g) and (4.1) that there exists w ∈ R^n such that (4.3) holds. Therefore, one has (4.4) for j ∈ J and l = 1, . . . , l_j. From (4.1), we have μ_j g_j(x_0, v_{j_l}) = 0 for j ∈ J and l = 1, . . . , l_j. Furthermore, for each x̄ ∈ K, μ_j g_j(x̄, v_{j_l}) ≤ 0 for j ∈ J and l = 1, . . . , l_j. Hence, by (4.4), we obtain the corresponding estimate, which contradicts (4.2). Hence, we can conclude that x_0 is a robust weak sharp efficient solution of (UMP), and the proof is complete.

Remark 4.4
In Theorem 4.3, the sufficient optimality conditions for a robust weak sharp efficient solution are established without the assumptions of convexity of the objective and constraint functions and convexity of the parameter uncertainty sets; these assumptions are employed in [15].
Specially, under some appropriate convexity and affineness conditions, by employing the approximate projection theorem, we establish the following sufficient optimality conditions for the local and global robust weak sharp efficient solutions of the problem (UMP), respectively.

Theorem 4.5 Let x_0 ∈ K be given. Suppose that Ω is a closed and convex set and K is convex. Assume that, for each u_i ∈ U_i and v_j ∈ V_j, j ∈ J, the functions f_i(·, u_i) and g_j(·, v_j) are convex and ∪_{u_i∈U_i(x)} ∂f_i(·, u_i)(x) is convex. If there exist real numbers η, r > 0 such that condition (4.5) holds for every x ∈ S ∩ B(x_0, r), then x_0 is a local robust weak sharp efficient solution of (UMP).
Proof Since Ω is closed and convex and, for each v_j ∈ V_j, j ∈ J, the functions g_j(·, v_j) are convex, the robust feasible set K is closed and convex. Therefore, it follows from the convexity of K and the local Lipschitz continuity of each f_i(·, u_i), i ∈ I, u_i ∈ U_i, that the set S is closed and convex. Assume that there exist real numbers η, r ∈ (0, +∞) such that (4.5) holds. To verify that x_0 is a local robust weak sharp efficient solution of (UMP), let r_1 ∈ (0, r/2) be given. We claim that

max_{1≤i≤m} ( max_{u_i∈U_i} f_i(y, u_i) − max_{u_i∈U_i} f_i(x_0, u_i) ) ≥ η d(y, S), ∀y ∈ K ∩ B(x_0, r_1).    (4.6)

Let y ∈ K ∩ B(x_0, r_1) be arbitrary. It is not hard to see that (4.6) holds trivially if y ∈ K and max_{u∈U} f(y, u) = max_{u∈U} f(x_0, u), i.e., if y ∈ S. On the other hand, if y ∉ S, then we have from x_0 ∈ S that 0 < d(y, S) ≤ ‖y − x_0‖ < r_1. Clearly, (1/r_1) d(y, S) ∈ (0, 1). By the approximate projection theorem [34, Theorem 2.3], for any γ ∈ ((1/r_1) d(y, S), 1), there exist x ∈ S and x* ∈ B ∩ N(S, x) satisfying (4.7). Therefore, we obtain ‖y − x‖ < (1/γ) d(y, S), and so x ∈ S ∩ B(x_0, r). Since x* ∈ B ∩ N(S, x) and (4.5) holds, there exist λ_i ≥ 0 with Σ_{i∈I} λ_i = 1, u_i* ∈ ∂f_i(·, ū_i)(x) with ū_i ∈ U_i(x), i ∈ I, μ_j ≥ 0, v_j* ∈ ∂g_j(·, v_j)(x), j ∈ J, and b ∈ N(Ω, x) such that (4.8) holds. Observe that y ∈ Ω, since y ∈ K ⊆ Ω. By the convexity of Ω, we obtain ⟨b, y − x⟩ ≤ 0. Furthermore, since for each u_i ∈ U_i, i ∈ I, and v_j ∈ V_j, the functions f_i(·, u_i) and g_j(·, v_j) are convex, one has (4.9) and (4.10). Since y is a robust feasible solution of (UMP), we have g_j(y, v_j) ≤ 0 for all v_j ∈ V_j, j ∈ J. Hence, it follows from g_j(y, v_j) ≤ 0, ∀v_j ∈ V_j, j ∈ J, equality (4.8), ⟨b, y − x⟩ ≤ 0, inequalities (4.9)–(4.10), and x ∈ S that x ∈ K, max_{u∈U} f(x, u) = max_{u∈U} f(x_0, u), and (4.11) holds. Observe that x ∈ S, so we have d(y, S) ≤ ‖y − x‖. By inequalities (4.7) and (4.11) together with d(y, S) ≤ ‖y − x‖, we obtain

η γ d(y, S) ≤ η γ ‖y − x‖ ≤ η ⟨x*, y − x⟩ ≤ max_{1≤i≤m} ( max_{u_i∈U_i} f_i(y, u_i) − max_{u_i∈U_i} f_i(x, u_i) ).

Letting γ → 1, inequality (4.6) is fulfilled, since y ∈ K ∩ B(x_0, r_1) was arbitrary.
Therefore, the conclusion that x 0 is a local robust weak sharp efficient solution for (UMP) is verified.

Concluding remarks
In this paper, we investigated an uncertain multiobjective optimization problem involving nonsmooth and nonconvex functions. We established necessary and sufficient optimality conditions for robust weak sharp efficient solutions of the considered problem. These optimality conditions are presented in terms of multipliers and Mordukhovich subdifferentials of the related functions. To fulfill our goals, several tools were used; the paper has mainly the following three highlights: (1) In the discussion of the necessary optimality conditions for the local robust weak sharp efficient solutions of (UMP), we employed the generalized Fermat rule, the Mordukhovich subdifferential of maximum functions, the fuzzy sum rule for Fréchet subdifferentials, and the sum rule for Mordukhovich subdifferentials. (2) In the discussion of these necessary optimality conditions, convexity of the objective functions, the constraint functions, and the uncertainty sets is not assumed. (3) In the discussion of the sufficient optimality conditions for the robust weak sharp efficient solutions of (UMP), we employed generalized convexity, the approximate projection theorem, and some appropriate convexity and affineness conditions.