1 Introduction

In reality, it is common that the input data associated with the objective function and the constraints of a program are uncertain or incomplete due to prediction errors, measurement errors, or lack of information; that is, they are not known precisely when the problem is solved (see [1]). Robust optimization has emerged as a prominent deterministic framework for investigating mathematical programming problems with data uncertainty. Both theoretical and applied aspects of robust optimization have been studied intensively; see, e.g., [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20] and the references therein.

In [12, 13], Kuroiwa and Lee studied scalarizations and optimality theorems for uncertain multiobjective optimization problems in which the involved functions are convex. Then, in [16], Lee and Kim proved nonsmooth optimality theorems for weakly robust efficient solutions and properly robust efficient solutions of multiobjective optimization problems with data uncertainty, and soon after, Lee and Lee [18] studied optimality conditions and duality theorems for uncertain semi-infinite multiobjective optimization problems. Besides, for nonconvex optimization problems, Chuong [11] established necessary/sufficient optimality conditions for robust (weakly) efficient solutions and robust duality theorems for uncertain multiobjective optimization in terms of multipliers and Mordukhovich/limiting subdifferentials of the related functions.

On the other hand, the notion of a weak sharp solution in general mathematical programming problems was first introduced in [21]. It extends the notion of a sharp minimizer (or, equivalently, a strongly unique minimizer) in [22] to allow for non-unique solution sets. It is well acknowledged that weak sharp minimizers play important roles in stability/sensitivity analysis and in the convergence analysis of a wide range of numerical algorithms in mathematical programming (see [23,24,25,26] and references therein). In the context of optimization, much attention has been paid to necessary and/or sufficient conditions for weak sharp solutions of various types of problems (see [27,28,29,30,31,32,33,34] and references therein).

Very recently, with the intention of answering the question “What do optimality conditions for weak sharp solutions look like, particularly in robust optimization?”, Kerdkaew and Wangkeeree [35] introduced robust weak sharp and robust sharp solutions to a convex cone-constrained optimization problem with data uncertainty and established some optimality conditions for robust weak sharp solutions. Moreover, as an application, they presented characterizations of the robust weak sharp weakly efficient solution sets of convex uncertain multiobjective optimization problems. Shortly afterwards, Kerdkaew et al. [36] investigated a robust optimization problem involving nonsmooth and nonconvex real-valued functions and obtained some optimality conditions for robust weak sharp solutions of the problem.

Motivated by the above-mentioned works, especially [34,35,36], we aim to establish necessary and sufficient optimality conditions for robust weak sharp efficient solutions of an uncertain multiobjective optimization problem with data uncertainty in both the objective and constraint functions. The obtained optimality conditions are presented in terms of multipliers and limiting/Mordukhovich subdifferentials of the related functions. In addition, some examples are provided to analyze and illustrate the obtained results.

The rest of the paper is organized as follows. Section 2 contains some basic definitions from variational analysis and several auxiliary results. Here, we introduce a new solution concept involving robustness and weak sharp efficiency, namely the robust weak sharp efficient solution. In Sect. 3, the first part of the main results, including a nonsmooth Fermat rule for local robust weak sharp efficient solutions of the uncertain multiobjective optimization problem, is presented. In Sect. 4, we present the other part of the results: some sufficient optimality conditions for robust weak sharp efficient solutions of the considered problem. Section 5 is devoted to concluding remarks.

2 Preliminaries

We begin this section by fixing notation and definitions, including the notation generally used in variational analysis and the Mordukhovich generalized differentiation constructions (see [37, 38] for more details), which are the main tools of our study. Throughout this paper, \({{\mathbb {R}}}^n\) denotes the n-dimensional Euclidean space. The inner product and norm in \({{\mathbb {R}}}^n\) are denoted by \(\langle \cdot ,\cdot \rangle \) and \(\Vert \cdot \Vert ,\) respectively. The symbols \({{\mathbb {R}}}^n_+, {\mathbf {B}}\), and \(B(x_0,r)\) stand for the nonnegative orthant of \({{\mathbb {R}}}^n,\) the closed unit ball in \({{\mathbb {R}}}^n\), and the open ball with center \(x_0\in {{\mathbb {R}}}^n\) and radius \(r > 0\), respectively. For a nonempty subset \(S \subseteq {{\mathbb {R}}}^n\), the closure, boundary, and convex hull of S are denoted by clS, bdS, and coS, respectively, while the notation \(x \xrightarrow {S}x_0\) means that \(x \rightarrow x_0\) and \(x\in S.\)

Let a point \(x_0 \in S\) be given. The set S is said to be closed around \(x_0\) if there is a neighborhood U of \(x_0\), such that \(S \cap U\) is closed. Moreover, the set S is said to be locally closed if it is closed around every \(x_0 \in S.\) Given a set-valued mapping \(F : {{\mathbb {R}}}^n \rightarrow 2^{{{\mathbb {R}}}^n},\) the sequential Painlevé–Kuratowski upper/outer limit of F as \(x\xrightarrow {S}x_0\) is defined by

$$\begin{aligned} \underset{\begin{array}{c} x\xrightarrow {S}x_0 \end{array}}{\text {Lim sup} }\, F(x):= \left\{ x^* \in {{\mathbb {R}}}^n : \exists x_n \xrightarrow {S}x_0, \exists x^*_n \rightarrow x^* \text { with } x^*_n \in F\left( x_n\right) , \,\,\forall n \in {\mathbb {N}}\right\} . \end{aligned}$$

Let S be closed around \(x_0.\) Recall that the contingent cone of S at \(x_0\) is denoted by \(T(S, x_0)\) and defined by

$$\begin{aligned} T \left( S, x_0\right) := \left\{ v \in {{\mathbb {R}}}^n : \exists v_n \rightarrow v, \exists t_n \downarrow 0 \text { s.t. } x_0 + t_nv_n \in S, \forall n\in {\mathbb {N}}\right\} , \end{aligned}$$

while the Fréchet (or regular) normal cone of S at \(x_0\), which is a set of all the Fréchet normals, has the form \({\widehat{N}}(S, x_0)\) and is defined by

$$\begin{aligned} {\widehat{N}}\left( S,x_0\right) := \left\{ x^* \in {{\mathbb {R}}}^n : \limsup _{x \xrightarrow {S} x_0}\dfrac{\left\langle x^*,x-x_0 \right\rangle }{\Vert x-x_0\Vert } \le 0 \right\} . \end{aligned}$$

Note that the Fréchet (or regular) normal cone \({\widehat{N}}(S,x_0)\) is a closed convex subset of \({{\mathbb {R}}}^n\) and we set \({\widehat{N}}(S,x_0) = \emptyset \) if \(x_0 \notin S.\) The notation \(N(S,x_0)\) stands for the Mordukhovich (or basic, limiting) normal cone of S at \(x_0\). It is defined by

$$\begin{aligned} N(S,x_0):= \left\{ x^* \in {{\mathbb {R}}}^n : \exists x_n \xrightarrow {S}x_0, \exists x^*_n \rightarrow x^* \text { with } x^*_n \in {\widehat{N}}(S,x_n), \forall n \in {\mathbb {N}}\right\} . \end{aligned}$$

Observe that the Mordukhovich normal cone is obtained from the Fréchet normal cones by taking the sequential Painlevé–Kuratowski upper/outer limit (see [37] for more details) as

$$\begin{aligned} N(S,x_0)= \underset{\begin{array}{c} x\xrightarrow {S}x_0 \end{array}}{\text {Lim sup} }\,{\widehat{N}}(S,x). \end{aligned}$$
(2.1)
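As a standard illustration (well known in variational analysis, not taken from this paper) of how the limit (2.1) can generate a nonconvex cone, consider \(S:=\{(x_1,x_2)\in {{\mathbb {R}}}^2 : x_2 \ge -|x_1|\}\) and \(x_0=(0,0)\). Then

$$\begin{aligned} {\widehat{N}}\left( S,x_0\right) = \{(0,0)\}, \qquad N\left( S,x_0\right) = \left\{ \lambda (-1,-1) : \lambda \ge 0\right\} \cup \left\{ \lambda (1,-1) : \lambda \ge 0\right\} . \end{aligned}$$

Indeed, near a boundary point \((t,-t)\) with \(t>0\), the set S coincides with the half-plane \(\{x_1+x_2\ge 0\}\), whose Fréchet normals at the boundary are \(\lambda (-1,-1), \lambda \ge 0\); the branch \(t<0\) gives \(\lambda (1,-1), \lambda \ge 0\). Taking the outer limit of these normals as \(t\rightarrow 0\) yields the two rays, so \(N(S,x_0)\) is nonconvex, while \({\widehat{N}}(S,x_0)\) is trivial.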

In particular, in the case that S is a convex set, we obtain the following relations:

$$\begin{aligned} {\widehat{N}}(S,x_0) = N(S,x_0) = T(S,x_0)^\circ = \left\{ x^*\in {{\mathbb {R}}}^n :\langle x^*,x-x_0 \rangle \le 0, \forall x\in S\right\} . \end{aligned}$$

Let \(h : {{\mathbb {R}}}^n \rightarrow {\overline{{{\mathbb {R}}}}}:={{\mathbb {R}}}\cup \{\pm \infty \}\) be an extended real-valued function. We define

$$\begin{aligned} \text {dom} h := \left\{ x \in {{\mathbb {R}}}^n : h(x) < +\infty \right\} , \end{aligned}$$

and

$$\begin{aligned} \text {epi} h := \left\{ (x,\alpha )\in {{\mathbb {R}}}^n \times {{\mathbb {R}}}\,|\, \alpha \ge h(x)\right\} , \end{aligned}$$

which denote the domain and the epigraph of h,  respectively. Let \(x_0 \in \text {dom}h\) and \(\varepsilon \ge 0\) be given. Then, the analytic \(\varepsilon \)-subdifferential of h at \(x_0\), denoted by \({\widehat{\partial }}_{\varepsilon }h(x_0)\), is defined by

$$\begin{aligned} {\widehat{\partial }}_{\varepsilon }h(x_0):=\left\{ x^*\in {{\mathbb {R}}}^n : \liminf _{\begin{array}{c} x \rightarrow x_0,\\ x \ne x_0 \end{array}} \dfrac{h(x)-h(x_0)-\left\langle x^*,x-x_0\right\rangle }{\Vert x-x_0\Vert } \ge -\varepsilon \right\} . \end{aligned}$$

In the special case that \(\varepsilon = 0\), the analytic \(\varepsilon \)-subdifferential \({\widehat{\partial }}_\varepsilon h (x_0)\) of h at \(x_0\) reduces to the Fréchet subdifferential of h at \(x_0\), which is denoted by \({\widehat{\partial }} h(x_0)\). Besides, \(\partial h( x_0)\) denotes the Mordukhovich subdifferential of h at \(x_0.\) It is defined by

$$\begin{aligned} \partial h({x_0}):= \left\{ x^*\in {{\mathbb {R}}}^n : \exists x_n \xrightarrow {h}x_0, \exists x^*_n \rightarrow x^* \text { with } x^*_n \in {\widehat{\partial }}h(x_n), \forall n \in {\mathbb {N}}\right\} , \end{aligned}$$

where \(x_n \xrightarrow {h} x_0\) means \(x_n \rightarrow x_0\) and \(h(x_n) \rightarrow h(x_0).\) In addition, we have the following equation, which relates the Mordukhovich subdifferential of h at \(x_0 \in {{\mathbb {R}}}^n\) with \(|h(x_0)|< \infty \) to the Mordukhovich normal cone of \(\text {epi}h:\)

$$\begin{aligned} \partial h(x_0)= \left\{ x^* \in {{\mathbb {R}}}^n : \left( x^*,-1\right) \in N\left( \text {epi}h,\, x_0^h\right) \right\} , \end{aligned}$$

where \(x_0^h =(x_0,h(x_0)).\) In the case that \(x_0 \notin \text {dom} h,\) we set \({\widehat{\partial }}h(x_0) = \partial h(x_0) = \emptyset .\) It is obvious that \({\widehat{\partial }} h(x_0) \subseteq \partial h(x_0)\); in particular, the following relation is fulfilled if h is a convex function:

$$\begin{aligned} {\widehat{\partial }}h(x_0)=\partial h(x_0)=\left\{ x^* \in {{\mathbb {R}}}^n : \langle x^*,x-x_0 \rangle \le h(x)-h(x_0), \forall x \in {{\mathbb {R}}}^n\right\} . \end{aligned}$$
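As a quick illustration (standard examples, not from this paper): for \(h=|\cdot |\) on \({{\mathbb {R}}}\), which is convex, the formula above gives the subdifferential at \(x_0=0\), while \(h=-|\cdot |\) shows that the Fréchet and Mordukhovich subdifferentials can differ for nonconvex functions:

$$\begin{aligned} {\widehat{\partial }}|\cdot |(0)=\partial |\cdot |(0)=[-1,1], \qquad {\widehat{\partial }}\left( -|\cdot |\right) (0)=\emptyset , \quad \partial \left( -|\cdot |\right) (0)=\{-1,1\}. \end{aligned}$$

The latter pair is obtained by taking limits of Fréchet subgradients at nearby points \(x\ne 0\), where \({\widehat{\partial }}(-|\cdot |)(x)=\{-\text {sign}(x)\}.\)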

Besides, the distance function \(d(\cdot , S) : {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}}\) and the indicator function \(\delta (\cdot , S) : {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}}\cup \{+\infty \}\) of S are, respectively, defined by

$$\begin{aligned} d(x, S) := \inf _{y\in S}\Vert x-y\Vert , \forall x\in {{\mathbb {R}}}^n, \end{aligned}$$

and

$$\begin{aligned} \delta (x,S) = \left\{ \begin{array}{ll} 0; &{} x \in S,\\ +\infty ; &{}x \notin S. \end{array} \right. \end{aligned}$$

By the above notation and definitions, we have \({\widehat{\partial }} \delta (x_0, S) = {\widehat{N}}(S,x_0)\) and \(\partial \delta (x_0, S) = N(S,x_0).\) Moreover, \({\widehat{\partial }} d(x_0,S) = {\mathbf {B}} \cap {\widehat{N}}(S,x_0)\) and \(\partial d(x_0,S)\) \(\subseteq {\mathbf {B}} \cap N(S,x_0)\) whenever \(x_0 \in S.\)
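For intuition, the distance and indicator functions can be evaluated numerically; the following sketch is purely illustrative (the grid sampling of S is our own assumption, not part of the paper), with S the closed unit ball in \({{\mathbb {R}}}^2\):

```python
import numpy as np

# Illustrative sketch: approximate d(x, S) for S = closed unit ball in R^2
# by sampling S on a grid, and evaluate the indicator function delta(x, S).

def dist(x, samples):
    # d(x, S) ~ min over sampled points y in S of ||x - y||
    return float(np.min(np.linalg.norm(samples - x, axis=1)))

def indicator(x, membership):
    # delta(x, S) = 0 if x in S, +infinity otherwise
    return 0.0 if membership(x) else float("inf")

grid = np.linspace(-1.0, 1.0, 201)
samples = np.array([(a, b) for a in grid for b in grid if a * a + b * b <= 1.0])

x = np.array([2.0, 0.0])
print(round(dist(x, samples), 2))                        # distance from (2, 0) to the ball
print(indicator(x, lambda z: np.linalg.norm(z) <= 1.0))  # inf: (2, 0) lies outside S
```

The nearest point of the ball to \((2,0)\) is \((1,0)\), so the printed distance is 1.0 up to the grid resolution.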

Next, we recall some definitions and propositions that are useful for this paper. First of all, the following lemma follows directly from [37, Corollary 1.81].

Lemma 2.1

([37, Corollary 1.81]) If h is locally Lipschitz around \(x_0\) with modulus \(l>0,\) then we always have

$$\begin{aligned} \Vert x^*\Vert \le l, \,\, \forall x^* \in \partial h(x_0). \end{aligned}$$
(2.2)

The following necessary optimality condition, called generalized Fermat rule, for a function to attain its local minimum plays a key role for our analysis.

Lemma 2.2

([37, 38]) Let \( h : {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}}\cup \{+\infty \}\) be a proper lower semicontinuous function. If h attains a local minimum at \(x_0 \in {{\mathbb {R}}}^n,\) then \(0_{{{\mathbb {R}}}^n} \in {\widehat{\partial }} h(x_0),\) which implies \(0_{{{\mathbb {R}}}^n} \in \partial h(x_0).\)

We recall the following fuzzy sum rule for the Fréchet subdifferential and the sum rule for the Mordukhovich subdifferential, which are important in the sequel.

Lemma 2.3

([37, 38]) Let \(f, h : {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}}\cup \{+ \infty \}\) be proper lower semicontinuous around \(x_0 \in \text { dom} f \cap \text { dom} h.\) If f is Lipschitz continuous around \(x_0,\) then

  (1)

    for every \(x^* \in {\widehat{\partial }}( f + h)(x_0)\) and every \(\varepsilon > 0,\) there exist \(x_1, x_2 \in B(x_0,\varepsilon )\), such that

    $$\begin{aligned} \left| f (x_1)- f (x_0)\right|< \varepsilon , \left| h(x_2)-h( x_0)\right| < \varepsilon \text { and } x^* \in {\widehat{\partial }}f(x_1)+{\widehat{\partial }}h(x_2) + \varepsilon {\mathbf {B}}. \end{aligned}$$
  (2)

    \(\partial (f+h)(x_0) \subseteq \partial f (x_0) + \partial h (x_0).\)

To conclude this section, we recall the concepts of classical, uncertain, and robust multiobjective optimization problems, respectively. Let \(\Omega \) be a nonempty locally closed subset of \({{\mathbb {R}}}^n\) and \(f_i, g_j:{{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}}, i\in I:=\{1,\ldots ,m\}, j\in J:=\{1,\ldots ,p\}\) be given. Consider the following multiobjective optimization problem:

$$\begin{aligned} \min _{x\in \Omega }\left\{ \left( f_1(x),\ldots ,f_m(x)\right) :g_j(x)\le 0, j\in J\right\} . \end{aligned}$$
(MP)

The multiobjective optimization problem (MP) in the face of data uncertainty in both the objective function and the constraints can be written as the following uncertain multiobjective optimization problem. For \(q_0, q_j \in {\mathbb {N}}, j\in J,\) let \({\mathcal {U}}_i, i\in I,\) and \({\mathcal {V}}_j, j\in J,\) be nonempty compact subsets of \({{\mathbb {R}}}^{q_0}\) and \({{\mathbb {R}}}^{q_j},\) respectively. We consider the following uncertain optimization problem:

$$\begin{aligned} \min _{x\in \Omega }\left\{ \left( f_1(x,u_1),\ldots ,f_m(x,u_m)\right) : g_j (x,v_j) \le 0, \,\, v_j \in {\mathcal {V}}_j,\, j \in J\right\} , \end{aligned}$$
(UMP)

where \(f_i: {{\mathbb {R}}}^n \times {\mathcal {U}}_i \rightarrow {{\mathbb {R}}}, i\in I,\) and \(g_j : {{\mathbb {R}}}^n \times {\mathcal {V}}_j \rightarrow {{\mathbb {R}}}\), \( j \in J,\) are given real-valued functions, x is the vector of decision variables, and \(u_i, i\in I,\) and \(v_j, j\in J,\) are uncertain parameters belonging to the sequentially compact sets \({\mathcal {U}}_i, i\in I,\) and \({\mathcal {V}}_j, j\in J,\) respectively. In fact, the uncertainty sets can be understood in the sense that the parameters \(u_i, i\in I,\) and \(v_j, j\in J,\) are not known exactly at the time of the decision. For examining the uncertain optimization problem (UMP), we adopt the robust approach, that is, the worst-case approach for (UMP). The following robust multiobjective optimization problem (RMP), associated with (UMP), is its robust counterpart:

$$\begin{aligned} \min _{x\in \Omega }\left\{ \left( \max _{u_1 \in {\mathcal {U}}_1}f_1(x,u_1),\ldots ,\max _{u_m\in {\mathcal {U}}_m}f_m(x,u_m)\right) : g_j(x,v_j)\le 0, \,\forall v_j \in {\mathcal {V}}_j,\, j\in J\right\} . \end{aligned}$$
(RMP)

The robust feasible set K is defined by

$$\begin{aligned} K := \left\{ x \in \Omega : g_j(x,v_j) \le 0, \forall v_j \in {\mathcal {V}}_j, \, j\in J\right\} . \end{aligned}$$
(2.3)
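To make the worst-case reduction concrete, the following sketch evaluates a robust objective and the robust feasible set for a toy one-dimensional instance; the data \(f_1, g_1, {\mathcal {U}}_1, {\mathcal {V}}_1\) below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypothetical instance: one objective f1(x, u) = x^2 + u with U1 = [-1, 0],
# and one constraint g1(x, v) = v*x - 1 with V1 = [1/2, 1].
U1 = np.linspace(-1.0, 0.0, 101)
V1 = np.linspace(0.5, 1.0, 101)

f1 = lambda x, u: x**2 + u
g1 = lambda x, v: v * x - 1.0

def robust_obj(x):
    # worst-case objective value: max over the uncertainty set U1
    return max(f1(x, u) for u in U1)

def robust_feasible(x):
    # x is robust feasible iff g1(x, v) <= 0 for EVERY v in V1
    return all(g1(x, v) <= 1e-9 for v in V1)   # tiny tolerance for rounding

# For Omega = [-2, 2], K = Omega ∩ {x : v*x <= 1 for all v in [1/2, 1]} = [-2, 1]
Omega = np.linspace(-2.0, 2.0, 401)
K = [x for x in Omega if robust_feasible(x)]
print(round(min(K), 2), round(max(K), 2))  # endpoints of the robust feasible set
print(robust_obj(0.0))                     # worst case over u in [-1, 0] at x = 0
```

Note that only the worst realization of each constraint matters: since \(\max _{v\in [1/2,1]} vx = x\) for \(x\ge 0\), the robust constraint cuts \(\Omega \) down to \([-2,1]\).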

Now, we recall the following concept of robust efficient solutions for (UMP), which can be found in the literature; see, e.g., [14].

Definition 2.4

A point \(x_0 \in K\) is said to be a local robust efficient solution for (UMP) if it is a local efficient solution for (RMP), i.e., there exists a neighborhood U of \(x_0\), such that

$$\begin{aligned} \max _{u\in {\mathcal {U}}}f(x,u)-\max _{u\in {\mathcal {U}}}f(x_0,u) \notin -{{\mathbb {R}}}^m_+\setminus \{0\}, \,\,\ \forall x\in K\cap U, \end{aligned}$$

where \(f :=(f_1,f_2,\ldots , f_m)\), \(u=(u_1,\ldots ,u_m)\in {\mathcal {U}}:={\mathcal {U}}_1\times \cdots \times {\mathcal {U}}_m\) and

$$\begin{aligned} \max _{u\in {\mathcal {U}}}f(x,u) := \left( \max _{u_1 \in {\mathcal {U}}_1}f_1\left( x,u_1\right) ,\ldots ,\max _{u_m\in {\mathcal {U}}_m}f_m\left( x,u_m\right) \right) \end{aligned}$$

and

$$\begin{aligned} \max _{u\in {\mathcal {U}}}f(x_0,u) := \left( \max _{u_1 \in {\mathcal {U}}_1}f_1\left( x_0,u_1\right) ,\ldots ,\max _{u_m\in {\mathcal {U}}_m}f_m\left( x_0,u_m\right) \right) . \end{aligned}$$

In addition, if \(U = {{\mathbb {R}}}^n,\) then \(x_0 \in K\) is said to be a global robust efficient solution for (UMP).

Next, we introduce a new concept of a robust solution, which is related to robustness and weak sharp efficiency, namely the (local/global) robust weak sharp efficient solution.

Definition 2.5

A point \(x_0\in K\) is said to be a local robust weak sharp efficient solution for (UMP) if it is a local weak sharp efficient solution for (RMP), i.e., there exist a neighborhood U of \(x_0\) and a real number \(\eta > 0\), such that

$$\begin{aligned} \max _{1\le i \le m}\left\{ \max _{u_i\in {\mathcal {U}}_i}f_i\left( x,u_i\right) -\max _{u_i \in {\mathcal {U}}_i}f_i\left( x_0,u_i\right) \right\} \ge \eta d(x,S), \,\,\forall x \in K \cap U, \end{aligned}$$
(2.4)

where \(S:=\{x\in K:\max _{u\in {\mathcal {U}}}f(x,u)=\max _{u\in {\mathcal {U}}}f(x_0,u)\}.\) In particular, if \(U = {{\mathbb {R}}}^n,\) then \(x_0 \in K\) is said to be a global robust weak sharp efficient solution for (UMP).

Remark 2.6

If the term \(\eta d(x,S)\) on the right-hand side of inequality (2.4) is replaced by \(\eta \Vert x-x_0\Vert ,\) then \(x_0\) is said to be a local robust sharp efficient solution for (UMP). Similarly, if \(U={{\mathbb {R}}}^n\), then \(x_0\) is said to be a global robust sharp efficient solution for (UMP).

It is simple to see that every (local) robust sharp efficient solution or robust weak sharp efficient solution of a problem is also a (local) robust efficient solution of the problem. However, the converse need not hold. In the case that the solution set is a singleton, a robust efficient solution of (UMP) is a robust weak sharp efficient solution of the problem. Nevertheless, in many cases, a problem that has a robust weak sharp efficient solution has no robust sharp efficient solution.

Example 2.7

Let \(f:{\mathbb {R}} \times {\mathcal {U}} \rightarrow {\mathbb {R}}^2\) and \(g: {\mathbb {R}} \times {\mathcal {V}}\rightarrow {\mathbb {R}}^2\) be defined by \(f(x,u) = (x^2+u_1,x^2+u_2) \text { and } g(x,v)=(\min \{x,0\}+v_1,\min \{x,0\}+v_2),\) where \(x\in {\mathbb {R}}, u\in {\mathcal {U}}:=[-1,0]\times [-1,0] \) and \(v\in {\mathcal {V}}:=[-1,0]\times [-1,0],\) and let \(\Omega :=[-1,1].\) Clearly, the robust feasible set is \(K=[-1,1].\) Observe that \(x_0:=0 \in K\) is a global robust efficient solution of (UMP). Assume that \(x_0\) is a local robust sharp efficient solution of (UMP); then, there exist \(\eta ,\varepsilon >0\), such that \(x^2\ge \eta \Vert x-x_0\Vert , \,\forall x\in K\cap U,\) holds with \(U:=(-\varepsilon ,\varepsilon ).\) It can be seen that \(S=\{0\}\) and the inequality reduces to \(x^2\ge \eta \Vert x\Vert , \,\forall x\in K\cap (-\varepsilon ,\varepsilon ),\) which is a contradiction (take, e.g., \(x = \min \{\eta ,\varepsilon \}/2\)).
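A quick numerical sanity check of this example (illustrative code, not part of the paper): with \({\mathcal {U}}_1={\mathcal {U}}_2=[-1,0]\), the worst-case objective gap at x relative to \(x_0=0\) equals \(x^2\), and the sharp inequality \(x^2 \ge \eta \Vert x\Vert \) fails arbitrarily close to the origin:

```python
import numpy as np

# Worst-case objective gap of Example 2.7: max_u (x^2 + u) - max_u (0 + u) = x^2,
# since both maxima over u in [-1, 0] are attained at u = 0.
U = np.linspace(-1.0, 0.0, 101)

def worst_case_gap(x):
    return max(x**2 + u for u in U) - max(0.0 + u for u in U)

eta = 0.5
x = eta / 2  # a feasible point close to x0 = 0; it gets closer as eta shrinks
print(worst_case_gap(x))                  # equals x**2
print(worst_case_gap(x) >= eta * abs(x))  # False: the sharp inequality fails at x
```

Since \((\eta /2)^2 < \eta \cdot (\eta /2)\) for every \(\eta \in (0,2)\), no \(\eta \) can make the sharp inequality hold on a neighborhood of 0.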

3 Necessary optimality conditions for robust weak sharp efficient solutions

In this section, we focus our attention on establishing some necessary optimality conditions for the local (global) robust weak sharp efficient solutions of uncertain multiobjective optimization problems in terms of the advanced tools of variational analysis and generalized differentiation. Concretely, using the generalized Fermat rule, the Mordukhovich subdifferential for maximum functions, the fuzzy sum rule for Fréchet subdifferentials, and the sum rule for Mordukhovich subdifferentials, we establish a necessary condition for the local robust weak sharp efficient solution of the problem (UMP).

First, given an arbitrary \(x_0 \in \Omega \), we set

$$\begin{aligned} {\mathcal {U}}_i(x_0)&:= \left\{ u_i^* \in {\mathcal {U}}_i : f_i\left( x_0,u_i^*\right) = \max _{u_i \in {\mathcal {U}}_i}f_i\left( x_0,u_i\right) \right\} ,\\ {\mathcal {V}}_j(x_0)&:= \left\{ v_j^* \in {\mathcal {V}}_j : g_j\left( x_0,v_j^*\right) = \max _{v_j \in {\mathcal {V}}_j}g_j\left( x_0,v_j\right) \right\} ,\\ {\mathcal {J}}(x_0)&:= \left\{ j\in J : g_j\left( x_0,v_j\right) =0, \,\forall v_j \in {\mathcal {V}}_j\right\} . \end{aligned}$$
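For instance (an illustrative discretization, not from the paper), the active uncertainty set \({\mathcal {U}}_i(x_0)\) can be approximated numerically by collecting the (near-)maximizers over a grid; the data \(f_1\) and \({\mathcal {U}}_1\) below are hypothetical:

```python
import numpy as np

# Approximate U_1(x0) = argmax_{u in U_1} f_1(x0, u) for the hypothetical
# data f_1(x, u) = x^2 + u with U_1 = [-1, 0], by grid search.
U1 = np.linspace(-1.0, 0.0, 101)
f1 = lambda x, u: x**2 + u

def active_set(x0, tol=1e-9):
    # keep every grid point whose value is within tol of the maximum
    vals = np.array([f1(x0, u) for u in U1])
    return U1[vals >= vals.max() - tol]

print(active_set(0.5))  # the maximum over u in [-1, 0] is attained at u = 0
```

Here the maximizer is unique, so the returned set is a single point; for flat max functions, several grid points would survive the tolerance test.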

In what follows, throughout this section, we assume that each \(g_j: {{\mathbb {R}}}^n \times {\mathcal {V}}_j \rightarrow {{\mathbb {R}}}, j\in J,\) is a function, such that for each fixed \(v_j \in {\mathcal {V}}_j,\) \(g_j(\cdot ,v_j)\) is locally Lipschitz continuous, while each function \(f_i : {{\mathbb {R}}}^n \times {\mathcal {U}}_i\rightarrow {{\mathbb {R}}}, i\in I,\) satisfies the following conditions:

  (C1)

    For a fixed \(x_0 \in \Omega ,\) there exists \(r_{x_0} >0\), such that the function \(f_i(x,\cdot ): {\mathcal {U}}_i \rightarrow {{\mathbb {R}}}, i\in I,\) is upper semicontinuous for all \(x \in B(x_0,r_{x_0})\) and \(f_i(\cdot ,u_i)\) is Lipschitz continuous in x, uniformly in \(u_i \in {\mathcal {U}}_i\), i.e., for some real number \(l_i > 0,\) for all \(x,y \in \Omega \) and \(u_i \in {\mathcal {U}}_i,\) one has

    $$\begin{aligned} \left| f_i\left( x,u_i\right) -f_i\left( y,u_i\right) \right| \le l_i \left\| x-y\right\| . \end{aligned}$$
  (C2)

    The multifunction \(\partial _xf_i(\cdot ,\cdot ):{{\mathbb {R}}}^n \times {\mathcal {U}} \rightarrow 2^{{{\mathbb {R}}}^n}\) is closed at \((x_0,u_i)\) for each \(u_i \in {\mathcal {U}}_i(x_0),\) where the symbol \(\partial _x\) stands for the Mordukhovich subdifferential operation with respect to x.

Remark 3.1

  (i)

    The assumption (C1) guarantees that the function \(\max _{u_i\in {\mathcal {U}}_i}f_i(\cdot ,u_i), i\in I,\) is well defined and locally Lipschitz of rank \(l_i\) (see, e.g., [41]). When dealing with subgradients of a supremum/max function over a compact set, this assumption has been widely used in the literature (see, e.g., [15, 42,43,44]).

  (ii)

    The assumption (C2), related to the closedness of the partial subdifferential operation with respect to the first variable, is a relaxed property of subdifferentials of convex functions in the finite-dimensional setting (see [11, 45] for more details).

To obtain the necessary optimality condition for local robust weak sharp efficient solutions of (UMP), we now state a constraint qualification for the uncertain multiobjective optimization problem with the feasible set K defined in (2.3).

Definition 3.2

([11]) Given an arbitrary \(x_0\in \Omega \), the constraint qualification (CQ) is said to be satisfied at \(x_0\) if

$$\begin{aligned} 0 \notin \text {co}\left\{ \cup \partial g_j\left( \cdot ,v_j\right) (x_0):v_j\in {\mathcal {V}}_j(x_0), j=1,\dots ,p\right\} . \end{aligned}$$

Remark 3.3

We can see that the (CQ) defined in Definition 3.2 reduces to the constraint qualification defined in [39, Definition 3.2] when \(\Omega ={{\mathbb {R}}}^n.\) Moreover, it is not hard to verify that this (CQ) reduces to the extended Mangasarian–Fromovitz constraint qualification (see [40]) in the smooth setting when \(\Omega ={{\mathbb {R}}}^n.\)

Next, we establish the following necessary optimality condition for local robust weak sharp efficient solutions of (UMP) under the (CQ).

Theorem 3.4

Let \(x_0 \in K\) be given. Suppose that there exists a neighborhood U of \(x_0\), such that the constraint qualification (CQ) is satisfied at any \(x\in K\cap U.\) If \(x_0\) is a local robust weak sharp efficient solution for (UMP), then there exist real numbers \(\eta , r >0\), such that for any \(x\in S \cap B(x_0,r),\)

$$\begin{aligned} \eta {\mathbf {B}} \cap {\widehat{N}}(S,x) \subseteq&\sum _{i=1}^m\lambda _i\text {co}\left( \bigcup _{u_i \in {\mathcal {U}}_i(x)} \partial f_i\left( \cdot ,u_i\right) (x)\right) \nonumber \\&+ \bigcup _{\mu _j \in M_j(x)} \left( \sum _{j\in J}\mu _j\partial g_j\left( \cdot ,v_j\right) (x)\right) +N(\Omega ,x), \end{aligned}$$
(3.1)

where \(\lambda _i \ge 0, i \in I,\) with \(\sum _{i=1}^m\lambda _i=1\) and \(M_j(x)=\left\{ \mu _j \ge 0\,:\right. \) \(\left. \, \mu _jg_j(x,v_j)=0, v_j \in {\mathcal {V}}_j\right\} \) for all \(j\in J.\)

Proof

Suppose that \(x_0 \in K\) is a local robust weak sharp efficient solution for (UMP). Then, there exist real numbers \(\eta , r_1 >0\), such that

$$\begin{aligned} \max _{1\le i \le m}\left\{ \max _{u_i\in {\mathcal {U}}_i}f_i\left( x,u_i\right) -\max _{u_i\in {\mathcal {U}}_i} f_i\left( x_0,u_i\right) \right\} \ge \eta d(x,S), \forall x \in K \cap B\left( x_0,r_1\right) . \end{aligned}$$
(3.2)

By assumption, there exists \(r_2>0\), such that the (CQ) is satisfied at any \(x\in S\cap B(x_0,r_2).\) Choose \(r \in (0,\min \{\frac{1}{2}r_1,r_2\})\) and take an arbitrary \(x\in S \cap B(x_0,r).\) Observe that, since \({\widehat{\partial }} d(x,S)={\mathbf {B}} \cap {\widehat{N}}(S,x),\) whenever \(x^*\in {\mathbf {B}} \cap {\widehat{N}}(S,x)\), we have \(x^*\in {\widehat{\partial }} d(x,S).\) By the definition of \({\widehat{\partial }}d(\cdot ,S),\) for any \(\varepsilon > 0,\) there exists \(r_3 \in ( 0, \frac{1}{2}r_1)\), such that

$$\begin{aligned} \left\langle x^*,y-x\right\rangle \le d(y,S)+\varepsilon \Vert y-x\Vert , \end{aligned}$$
(3.3)

for all \(y \in B(x,r_3).\) It is clear, by the triangle inequality and the fact that \(r, r_3 <\frac{1}{2}r_1\), that \(B(x,r_3) \subseteq B(x_0,r_1).\) Indeed, if \(z\in B(x,r_3),\) then \(\Vert z-x_0\Vert \le \Vert z-x\Vert +\Vert x-x_0\Vert<r_3+r<r_1,\) which means \(z\in B(x_0,r_1).\) Hence, we derive from (3.2) that \(\max _{1\le i \le m}\left\{ \max _{u_i\in {\mathcal {U}}_i}f_i(y,u_i)-\max _{u_i\in {\mathcal {U}}_i} f_i(x_0,u_i)\right\} \ge \eta d(y,S)\) for all \(y \in K \cap B(x,r_3).\) This, together with (3.3), implies that

$$\begin{aligned} \max _{1\le i \le m}\left\{ \max _{u_i \in {\mathcal {U}}_i}f_i\left( y,u_i\right) -\max _{u_i \in {\mathcal {U}}_i}f_i\left( x_0,u_i\right) \right\} +\eta \varepsilon \Vert y-x\Vert \ge \eta \left\langle x^*, y- x\right\rangle , \end{aligned}$$
(3.4)

for all \(y\in K\cap B(x,r_3).\) Note that \(x \in K\) and \(\max _{u\in {\mathcal {U}}}f(x,u)=\) \(\max _{u\in {\mathcal {U}}}f(x_0,u)\), since \(x \in S.\) Then, the function \(\phi : {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}}\cup \{+\infty \}\) defined by

$$\begin{aligned} \phi (y):=&-\eta \left\langle x^*,y-x\right\rangle +\displaystyle \max _{1\le i \le m}\left\{ \max _{u_i \in {\mathcal {U}}_i}f_i\left( y,u_i\right) -\displaystyle \max _{u_i \in {\mathcal {U}}_i}f_i\left( x_0,u_i\right) \right\} \\&+\eta \varepsilon \Vert y-x\Vert + \delta (y,K), \forall y \in {{\mathbb {R}}}^n, \end{aligned}$$

attains a local minimum at x. Indeed, by (3.4), \(\phi (y) \ge 0\) for all \(y\in B(x,r_3),\) while \(\phi (x)= 0.\) Therefore, we arrive at \(0\in {\widehat{\partial }}\phi (x)\) by applying the generalized Fermat rule (Lemma 2.2). Since, for each \(u_i\in {\mathcal {U}}_i,i\in I,\) \(f_i(\cdot ,u_i)\) is locally Lipschitz continuous at x,  the function \({\tilde{f}}:{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}},\) defined by

$$\begin{aligned} {\tilde{f}}(y):=\displaystyle \max _{1\le i \le m}\left\{ \max _{u_i \in {\mathcal {U}}_i}f_i\left( y,u_i\right) -\displaystyle \max _{u_i \in {\mathcal {U}}_i}f_i\left( x_0,u_i\right) \right\} \end{aligned}$$

is also locally Lipschitz continuous at x. Let \(\gamma >0\) be a modulus of local Lipschitz continuity of \({\tilde{f}}\). Additionally, since the robust feasible set K is locally closed, \(\delta (\cdot ,K)\) is lower semicontinuous around x. Clearly, the function \(\Vert \cdot -x\Vert \) is Lipschitz continuous with modulus 1. Therefore, by applying Lemma 2.3(1), we have that, for the preceding \(\varepsilon >0,\) there exist \(x_1^\varepsilon , x_2^\varepsilon , x_3^\varepsilon \in B(x,\varepsilon ),\) such that \(|{\tilde{f}}(x_1^\varepsilon )| < \varepsilon ,\) \(\eta \varepsilon \Vert x_2^\varepsilon -x\Vert < \varepsilon ,\) \(\delta (x_3^\varepsilon ,K) < \varepsilon ,\) and

$$\begin{aligned} \eta x^* \in {\widehat{\partial }}{\tilde{f}}(x_1^\varepsilon )+\eta \varepsilon {\widehat{\partial }}\Vert \cdot - x\Vert \left( x_2^\varepsilon \right) + {\widehat{\partial }}\delta \left( \cdot ,K\right) \left( x_3^\varepsilon \right) +\varepsilon {\mathbf {B}}. \end{aligned}$$

It then follows from \(\delta (x_3^\varepsilon ,K) < \varepsilon \) that \(x_3^\varepsilon \in K\), and so \({\widehat{\partial }}\delta (\cdot , K)(x_3^\varepsilon )= {\widehat{N}}(K,x_3^\varepsilon ).\) Since \({\tilde{f}}\) is Lipschitz continuous around x with constant \(\gamma \) and \(x_1^\varepsilon \in B(x,\varepsilon ),\) we have from [37, Proposition 1.85] with \(\epsilon =0\), for all sufficiently small \(\varepsilon > 0\), that \({\widehat{\partial }}{\tilde{f}}(x_1^\varepsilon ) \subseteq \gamma {\mathbf {B}}.\) Similarly, \({\widehat{\partial }}\Vert \cdot - x\Vert (x_2^\varepsilon ) \subseteq {\mathbf {B}}.\) By these inclusions, the compactness of \({\mathbf {B}},\) and the fact that \(x_1^\varepsilon , x_2^\varepsilon , x_3^\varepsilon \in B(x,\varepsilon )\), we obtain \(x_1^\varepsilon \xrightarrow {{\tilde{f}}} x, \,\,\,\, x_2^\varepsilon \xrightarrow {\Vert \cdot - x\Vert } x, \,\,\,\, x_3^\varepsilon \xrightarrow {K} x, \text { as } \varepsilon \downarrow 0,\) which yields

$$\begin{aligned} \eta x^* \in \partial {\tilde{f}}(x)+N(K,x). \end{aligned}$$
(3.5)

Since f satisfies (C1) and (C2), in the same fashion as used to prove inequality (3.4) in Theorem 3.3 of [11], we obtain that, for each fixed \(i\in I,\)

$$\begin{aligned} \partial \max _{u_i\in {\mathcal {U}}_i}f_i\left( \cdot ,u_i\right) (x)\subseteq \text { co}\left\{ \cup \partial f_i\left( \cdot ,u_i\right) (x):{u_i \in {\mathcal {U}}_i(x)}\right\} . \end{aligned}$$
(3.6)

Furthermore, by applying the formula for the Mordukhovich subdifferential of maximum functions (see [37, Theorem 3.46(ii)]) and Lemma 2.3(2), we have from \(\max _{u\in {\mathcal {U}}}f(x,u)=\max _{u\in {\mathcal {U}}}f(x_0,u)\) that there exist \(\lambda _i \ge 0, i\in I,\) with \(\sum _{i=1}^m\lambda _i=1\), such that

$$\begin{aligned} \partial {\tilde{f}}(x) \subseteq \sum _{i=1}^m\lambda _i\text { co}\left\{ \cup \partial f_i\left( \cdot ,u_i\right) (x):{u_i \in {\mathcal {U}}_i(x)}\right\} . \end{aligned}$$
(3.7)

On the other hand, we put

$$\begin{aligned} \Pi := \left\{ x\in {{\mathbb {R}}}^n : g_j\left( x,v_j\right) \le 0, \forall v_j \in {\mathcal {V}}_j, j\in J\right\} . \end{aligned}$$

Hence, \(K=\Omega \cap \Pi .\) Observe that the (CQ) holds at x, since \(r<r_2\) and \(x\in S\cap B(x_0,r).\) In addition, as \(0\in N(\Omega ,x ),\) the following inclusion always holds:

$$\begin{aligned}&\bigcup _{\mu _j \in M_j(x)} \left( \sum _{j\in {\mathcal {J}}(x)}\mu _j\partial g_j\left( \cdot ,v_j\right) (x) \right) \\&\quad \subseteq \bigcup _{\mu _j \in M_j(x)} \left( \sum _{j\in {\mathcal {J}}(x)}\mu _j\partial g_j\left( \cdot ,v_j\right) (x) \right) + N(\Omega ,x). \end{aligned}$$

Since the (CQ) is satisfied at x,  there do not exist \(\mu _j \ge 0\) and \(v_j \in {\mathcal {V}}_j, j\in {\mathcal {J}}(x)\), such that \(\sum _{j\in {\mathcal {J}}(x)}\mu _j \ne 0\) and

$$\begin{aligned} 0\in \sum _{j\in {\mathcal {J}}(x)}\mu _j\partial g_j\left( \cdot ,v_j\right) (x) + N(\Omega ,x) . \end{aligned}$$

By applying [37, Corollary 4.36], we arrive at

$$\begin{aligned} N(\Pi ,x) \subseteq \bigcup _{\mu _j \in M_j(x)} \left( \sum _{j\in {\mathcal {J}}(x)}\mu _j\partial g_j\left( \cdot ,v_j\right) (x) \right) . \end{aligned}$$
(3.8)

It follows from [37, Corollary 3.37] that:

$$\begin{aligned} N(K,x)=N(\Omega \cap \Pi , x) \subseteq N(\Omega ,x)+N(\Pi ,x). \end{aligned}$$
(3.9)

By setting \(\mu _j =0\) for every \(j\in J\setminus J(x),\) (3.8) and (3.9) imply the following inclusion:

$$\begin{aligned} N(K,x) \subseteq \bigcup _{\mu _j \in M_j(x)} \left( \sum _{j\in J}\mu _j\partial g_j(\cdot ,v_j)(x)\right) + N(\Omega ,x). \end{aligned}$$
(3.10)

Observe that \(x\in S\cap B(x_0,r)\) and \(x^* \in {\mathbf {B}} \cap {\widehat{N}}(K,x)\) were chosen arbitrarily.

Hence, we can verify (3.1) by combining (3.5), (3.7) and (3.10). \(\square \)

Remark 3.5

  1. (i)

In the case that f is a real-valued function and \(g_j, j\in J,\) are continuous functions such that, for each \(u \in {\mathcal {U}} \subseteq {{\mathbb {R}}}^{q_0}\) and each fixed \(v_j \in {\mathcal {V}}_j,\) \(f(\cdot ,u)\) and \(g_j(\cdot ,v_j)\) are convex, the considered problem reduces to the convex optimization problem with data uncertainty studied in [14]. While the authors of [14, Proposition 2.1] employed the convexity of the objective and constraint functions and of the parameter uncertainty sets to establish necessary optimality conditions for a robust solution, Theorem 3.4 establishes necessary optimality conditions for a local robust weak sharp solution, which is in particular a local robust solution, without these assumptions.

  2. (ii)

In the case that \(f_i, i\in I,\) and \(g_j, j\in J,\) involve no uncertainty, the considered problem reduces to a multiobjective optimization problem involving nonsmooth and nonconvex functions. Necessary and sufficient conditions for weak sharp efficient solutions of such multiobjective optimization problems were established in [34].

The following example shows that the (CQ) being satisfied around \(x_0 \in K\) is essential for Theorem 3.4.

Example 3.6

Let \(f:{{\mathbb {R}}}\times {\mathcal {U}}\rightarrow {{\mathbb {R}}}^2\) be defined by \(f(x,u):=(f_1(x,u_1),f_2(x,u_2))\) with

$$\begin{aligned} f_i(x,u_i)=u_i\min \{0,x+1\}, i=1,2, \end{aligned}$$

where \(x\in {{\mathbb {R}}}\) and \(u_i\in {\mathcal {U}}_i:=[0,1], i=1,2.\) Furthermore, let \(g: {\mathbb {R}} \times {\mathcal {V}} \rightarrow {\mathbb {R}}\) be defined by \(g(x,v) :=v-x^3,\) where \(x\in {\mathbb {R}}\) and \(v\in {\mathcal {V}} := [-1,0].\) Take \(\Omega := [-1,1]\) and consider the problem (UMP). It is not hard to see that each \(f_i, i=1,2,\) satisfies (C1) and (C2), and the robust feasible set is \( K = [0,1].\) Consider \(x_0:=0 \in K\) with its neighborhood \(U= (-\frac{1}{2}, \frac{1}{2}).\) Choosing \(\eta = 1,\) we can verify that \(x_0\) is a local robust weak sharp efficient solution of the problem (UMP). At the same time, a direct calculation gives \(\partial f_i(\cdot ,u_i)(x_0) = \{0\}\) for all \(u_i \in {\mathcal {U}}_i, i=1,2,\) \(\partial g(\cdot ,v)(x_0) = \{0\}\) for all \(v\in {\mathcal {V}},\) \(N(\Omega ,x_0) = \{0\},\) and \(N(K,x_0)= -{\mathbb {R}}_+.\) It follows that the (CQ) is not satisfied at \(x_0.\)

Furthermore, we get \(\eta {\mathbf {B}}\cap {\widehat{N}}(K,x_0)=[-\eta ,0]\), while

$$\begin{aligned} \sum _{i=1}^2\lambda _i\text { co}\left( \bigcup _{u_i \in {\mathcal {U}}_i(x_0)} \partial f_i(\cdot ,u_i)(x_0)\right) + \bigcup _{\mu \in M(x_0)}\mu \partial g(\cdot ,v)(x_0) +N(\Omega ,x_0) = \{0\}, \end{aligned}$$

which shows that (3.1) does not hold for any \(\eta , \delta > 0.\) Hence, condition (CQ) is vital. It is obvious that the functions \(f_i(\cdot ,u_i), i=1,2, \) and \(g(\cdot ,v)\) are not convex. Therefore, [35, Theorem 4.2] is not applicable to this example.
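The claims in this example can also be checked numerically. The sketch below is ours, not part of the original argument: uniform grids of sample points stand in for the continuous sets \({\mathcal {U}}_i=[0,1],\) \({\mathcal {V}}=[-1,0],\) and \(\Omega =[-1,1],\) and the helper names are hypothetical. It verifies that the robust feasible points near \(x_0=0\) are exactly those with \(x\ge 0\) and that the worst-case objective vanishes identically there.

```python
# Numerical sanity check for Example 3.6 (a sketch under grid discretization).
xs = [k / 1000.0 for k in range(-500, 501)]   # the neighborhood U = (-1/2, 1/2)
us = [k / 100.0 for k in range(101)]          # samples of U_i = [0, 1]
vs = [-k / 100.0 for k in range(101)]         # samples of V = [-1, 0]

def worst_f(x):
    # worst-case objective: max over u in [0,1] of u * min(0, x + 1);
    # for x >= -1 the inner min is 0, so the maximum is 0
    return max(u * min(0.0, x + 1.0) for u in us)

def robust_feasible(x):
    # x in K iff g(x, v) = v - x**3 <= 0 for all v in V and x in Omega = [-1, 1]
    return -1.0 <= x <= 1.0 and all(v - x**3 <= 1e-12 for v in vs)

K = [x for x in xs if robust_feasible(x)]

# the robust feasible points in the neighborhood are exactly those with x >= 0
assert min(K) >= -1e-9 and not robust_feasible(-0.1)

# the worst-case objective is identically 0 on K, so every point of K attains
# the robust optimal value
assert all(abs(worst_f(x)) < 1e-12 for x in K)
```

Since the worst-case objective is constant on K, the solution set coincides with K near \(x_0,\) so \(d(x,S)\) vanishes there and the weak sharp inequality with \(\eta =1\) holds trivially, even though the (CQ) fails at \(x_0.\)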

The following result is established easily by means of the basic concepts of variational analysis.

Corollary 3.7

Let \(x_0 \in K\) be given. Suppose that there exists a neighborhood U of \(x_0\), such that the constraint qualification (CQ) is satisfied at any \(x\in K\cap U.\) If \(x_0\) is a local robust weak sharp efficient solution for (UMP), then there exist real numbers \(\eta , r >0\), such that for any \(x\in S \cap B(x_0,r)\) and \(x^*\in \eta {\mathbf {B}}\cap {\widehat{N}}(S,x)\)

$$\begin{aligned} x^*\in \,\sum _{i=1}^m\lambda _i\text {co}\left( \bigcup _{u_i \in {\mathcal {U}}_i(x)} \partial f_i(\cdot ,u_i)(x)\right) + \bigcup _{\mu _j \in M_j(x)} \left( \sum _{j\in J}\mu _j\partial g_j(\cdot ,v_j)(x)\right) +N(\Omega ,x),\qquad \end{aligned}$$
(3.11)

where \(\lambda _i \ge 0, i \in I,\) with \(\sum _{i=1}^m\lambda _i=1,\) and \(M_j(x)=\left\{ \mu _j \ge 0\,:\right. \) \(\left. \, \mu _jg_j(x,v_j)=0, v_j \in {\mathcal {V}}_j\right\} \) for all \(j\in J.\)

In particular, if \(x_0 \in K\) is a local sharp efficient solution for (RMP), i.e., a local robust sharp efficient solution for (UMP), then \(x_0\) is isolated in the solution set of (RMP). Therefore, \({\widehat{N}}(S,x_0)={{\mathbb {R}}}^n,\) and the (CQ) only needs to be fulfilled at \(x_0.\) The following result, which presents necessary optimality conditions for a local robust sharp efficient solution of (UMP), is obtained when the (CQ) is satisfied at \(x_0.\)

Corollary 3.8

Let \(x_0 \in K\) be given and the constraint qualification (CQ) be satisfied at \(x_0.\) If \(x_0\) is a local robust sharp efficient solution for (UMP), then there exists a real number \(\eta >0\), such that

$$\begin{aligned} \eta {\mathbf {B}} \subseteq&\sum _{i=1}^m\lambda _i\text { co}\left( \bigcup _{u_i \in {\mathcal {U}}_i(x_0)} \partial f_i(\cdot ,u_i)(x_0)\right) + \bigcup _{\mu _j \in M_j(x_0)}\left( \sum _{j\in J}\mu _j\partial g_j(\cdot ,v_j)(x_0)\right) +N(\Omega ,x_0), \end{aligned}$$

where \(\lambda _i \ge 0, i \in I,\) with \(\sum _{i=1}^m\lambda _i=1,\) and \(M_j(x_0)=\left\{ \mu _j \ge 0\,:\right. \) \(\left. \, \mu _jg_j(x_0,v_j)=0, v_j \in {\mathcal {V}}_j\right\} \) for all \(j\in J.\)

4 Sufficient optimality conditions for robust weak sharp efficient solutions

In this section, we focus on sufficient optimality conditions for robust weak sharp efficient solutions of uncertain multiobjective optimization problems. To formulate sufficient conditions for robust weak sharp solutions of problem (UMP) in the next theorem, we need the concept of generalized convexity at a given point for a family of real-valued functions. For convenience, we set \(f:=(f_1,\ldots ,f_m)\) and \(g := (g_1,\ldots , g_p)\) in the sequel.

Definition 4.1

([11]) (fg) is said to be generalized convex at \(x_0 \in {{\mathbb {R}}}^n\) if for any \(x \in {{\mathbb {R}}}^n, z_u^* \in \partial f_i(\cdot ,u)(x_0), u \in {\mathcal {U}}_i(x_0), i\in I,\) and \(x^*_v \in \partial g_j(\cdot ,v)(x_0), v \in {\mathcal {V}}_j(x_0), j \in J,\) there exists \(w \in {{\mathbb {R}}}^n\), such that

$$\begin{aligned} f_i(x,u)-f_i(x_0,u)&\ge \langle z^*_u,w \rangle , \\ g_j(x,v)-g_j(x_0,v)&\ge \langle x^*_v,w \rangle . \end{aligned}$$

Remark 4.2

If \(f_i(\cdot ,u), u \in {\mathcal {U}}_i,i\in I\) are convex and \(g_j(\cdot , v), v \in {\mathcal {V}}_j, j \in J\) are convex, then (fg) is generalized convex at any \(x_0 \in {{\mathbb {R}}}^n \) with \(w := x-x_0\) for each \(x \in {{\mathbb {R}}}^n.\)
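For the convex case, the assertion of Remark 4.2 is just the classical subgradient inequality written with the common vector \(w = x-x_0\); in the notation of Definition 4.1, a short check:

```latex
% Convexity of f_i(\cdot,u) and g_j(\cdot,v) gives the subgradient
% inequalities at x_0, which are exactly the two conditions of
% Definition 4.1 with the single choice w := x - x_0:
\begin{aligned}
f_i(x,u)-f_i(x_0,u) &\ge \langle z^*_u,\, x-x_0\rangle
  = \langle z^*_u,\, w\rangle,
  && z^*_u \in \partial f_i(\cdot,u)(x_0),\\
g_j(x,v)-g_j(x_0,v) &\ge \langle x^*_v,\, x-x_0\rangle
  = \langle x^*_v,\, w\rangle,
  && x^*_v \in \partial g_j(\cdot,v)(x_0).
\end{aligned}
```

Note that the same w must serve every subgradient simultaneously, which is what makes the choice \(w = x-x_0\) work in the convex case.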

Next, we focus on the sufficiency part for the considered problem. In the following theorem, we establish sufficient optimality conditions for a robust weak sharp efficient solution of the problem (UMP).

Theorem 4.3

For the problem (UMP), let \(\Omega :={{\mathbb {R}}}^n.\) Assume that \(x_0 \in K\) satisfies the condition (3.11) with real numbers \(\eta \) and r. If (fg) is generalized convex at \(x_0,\) then \(x_0\) is a robust weak sharp efficient solution for the problem (UMP).

Proof

Since \(x_0 \in K\) satisfies the condition (3.11) with real numbers \(\eta \) and r, for any \(x\in S \cap B(x_0,r)\) and \(x^*\in \eta {\mathbf {B}}\cap {\widehat{N}}(S,x),\) there exist \(\lambda _i \ge 0, i\in I,\) \(\lambda _{i_k} \ge 0, z^*_{i_k} \in \partial f_i(\cdot ,u_{i_k})(x_0), u_{i_k} \in {\mathcal {U}}_i(x_0),\) with \(\sum _{k=1}^{k_i}\lambda _{i_k} =1, k=1,\ldots ,k_i, k_i \in {\mathbb {N}},\) and \(\mu _j \ge 0, j\in J,\) \(\mu _{j_l} \ge 0, x_{j_l}^*\in \partial g_j(\cdot ,v_{j_l})(x_0), v_{j_l} \in {\mathcal {V}}_j(x_0),\) with \(\sum _{l=1}^{l_j}\mu _{j_l}=1, l=1,\ldots ,l_j, l_j \in {\mathbb {N}},\) such that \(\sum _{i\in I}\lambda _i+\sum _{j\in J}\mu _j =1\) and

$$\begin{aligned} x^* = \sum _{i\in I}\lambda _i\left( \sum _{k=1}^{k_i}\lambda _{i_k}z^*_{i_k}\right) + \sum _{j\in J} \mu _j\left( \sum _{l=1}^{l_j}\mu _{j_l}x^*_{j_l}\right) . \end{aligned}$$

In particular, since \(0\in \eta {\mathbf {B}} \cap {\widehat{N}}(S,x),\) taking \(x^*=0\) in the above representation gives

$$\begin{aligned} 0 = \sum _{i\in I}\lambda _i\left( \sum _{k=1}^{k_i}\lambda _{i_k}z^*_{i_k}\right) + \sum _{j\in J} \mu _j\left( \sum _{l=1}^{l_j}\mu _{j_l}x^*_{j_l}\right) . \end{aligned}$$
(4.1)

Clearly, if the solution set of (UMP) is the singleton \(\{x_0\},\) then \(x_0\) is also a robust weak sharp efficient solution of the problem. Assume that \(x_0\) is a robust efficient solution but not a robust weak sharp efficient solution for problem (UMP). Then, for every \(\eta >0,\) there exists \({\tilde{x}} \in K\), such that

$$\begin{aligned} 0< \max _{1\le i\le m}\left\{ \max _{u_i \in {\mathcal {U}}_i}f_i\left( {\tilde{x}},u_i\right) -\max _{u_i \in {\mathcal {U}}_i}f_i\left( x_0,u_i\right) \right\} < \eta d({\tilde{x}},S). \end{aligned}$$
(4.2)

It follows from the generalized convexity of (fg) and (4.1) that there exists \(w\in {{\mathbb {R}}}^n\), such that:

$$\begin{aligned} 0&= \left\langle \sum _{i\in I}\lambda _i\left( \sum _{k=1}^{k_i}\lambda _{i_k}z^*_{i_k}\right) + \sum _{j\in J} \mu _j\left( \sum _{l=1}^{l_j}\mu _{j_l}x^*_{j_l}\right) , w\right\rangle \nonumber \\&\le \sum _{i\in I}\lambda _i\left( \sum _{k=1}^{k_i}\lambda _{i_k}\left[ f_i\left( {\tilde{x}},u_{i_k}\right) -f_i\left( x_0,u_{i_k}\right) \right] \right) \nonumber \\&\quad + \sum _{j\in J} \mu _j\left( \sum _{l=1}^{l_j}\mu _{j_l}\left[ g_j\left( {\tilde{x}},v_{j_l}\right) -g_j\left( x_0,v_{j_l}\right) \right] \right) . \end{aligned}$$
(4.3)

Therefore, one has

$$\begin{aligned}&\sum _{i\in I}\lambda _i\left( \sum _{k=1}^{k_i}\lambda _{i_k}f_i\left( x_0,u_{i_k}\right) \right) + \sum _{j\in J} \mu _j\left( \sum _{l=1}^{l_j}\mu _{j_l}g_j\left( x_0,v_{j_l}\right) \right) \nonumber \\&\quad \le \sum _{i\in I}\lambda _i\left( \sum _{k=1}^{k_i}\lambda _{i_k}f_i\left( {\tilde{x}},u_{i_k}\right) \right) + \sum _{j\in J} \mu _j\left( \sum _{l=1}^{l_j}\mu _{j_l}g_j\left( {\tilde{x}},v_{j_l}\right) \right) . \end{aligned}$$
(4.4)

Since \(v_{j_l} \in {\mathcal {V}}_j(x_0),\) we have \(g_j(x_0,v_{j_l})=\sup _{v_j \in {\mathcal {V}}_j}g_j(x_0,v_j)\) for all \(j\in J\) and \(l =1,\ldots ,l_j.\) Since \(\mu _j \in M_j(x_0),\) we have \(\mu _j g_j(x_0,v_{j_l})=0\) for \( j\in J\) and \(l =1,\ldots ,l_j.\) Furthermore, as \({\tilde{x}} \in K,\) \(\mu _j g_j({\tilde{x}},v_{j_l}) \le 0\) for \( j\in J\) and \(l =1,\ldots ,l_j.\) Hence, by (4.4), we have

$$\begin{aligned}&\sum _{i\in I}\lambda _i\left( \sum _{k=1}^{k_i}\lambda _{i_k}f_i\left( x_0,u_{i_k}\right) \right) \\&\quad = \sum _{i\in I}\lambda _i\left( \sum _{k=1}^{k_i}\lambda _{i_k}f_i\left( x_0,u_{i_k}\right) \right) + \sum _{j\in J} \mu _j\left( \sum _{l=1}^{l_j}\mu _{j_l}g_j\left( x_0,v_{j_l}\right) \right) \\&\quad \le \sum _{i\in I}\lambda _i\left( \sum _{k=1}^{k_i}\lambda _{i_k}f_i\left( {\tilde{x}},u_{i_k}\right) \right) + \sum _{j\in J} \mu _j\left( \sum _{l=1}^{l_j}\mu _{j_l}g_j\left( {\tilde{x}},v_{j_l}\right) \right) \\&\quad \le \sum _{i\in I}\lambda _i\left( \sum _{k=1}^{k_i}\lambda _{i_k}f_i\left( {\tilde{x}},u_{i_k}\right) \right) . \end{aligned}$$

This together with \(u_{i_k} \in {\mathcal {U}}_i(x_0), i\in I\) implies that

$$\begin{aligned} \sum _{k=1}^{k_i}\lambda _{i_k}\max _{u_i\in {\mathcal {U}}_i}f_i\left( x_0,u_i\right) \le \sum _{k=1}^{k_i}\lambda _{i_k}f_i\left( {\tilde{x}},u_{i_k}\right) \le \sum _{k=1}^{k_i}\lambda _{i_k}\max _{u_i\in {\mathcal {U}}_i}f_i\left( {\tilde{x}},u_i\right) , \end{aligned}$$

which yields

$$\begin{aligned} \max _{u_i \in {\mathcal {U}}_i}f_i\left( x_0,u_i\right) -\max _{u_i \in {\mathcal {U}}_i}f_i\left( {\tilde{x}},u_i\right) \le 0 \le \eta d({\tilde{x}},S), \quad \forall \eta >0. \end{aligned}$$

This contradicts (4.2). Hence, we can conclude that \(x_0\) is a robust weak sharp efficient solution of (UMP), and so, the proof is complete. \(\square \)

Remark 4.4

In Theorem 4.3, sufficient optimality conditions for a robust weak sharp efficient solution are established while the convexity assumptions on the objective and constraint functions and on the parameter uncertainty sets are dropped; these assumptions are employed in [15].

In particular, under some appropriate convexity and affineness conditions, by employing the approximate projection theorem, we establish the following sufficient optimality conditions for local and global robust weak sharp efficient solutions of the problem (UMP), respectively.

Theorem 4.5

Let \(x_0\in K\) be given. Suppose that \(\Omega \) is a closed and convex set and that K is convex. Assume that, for each \(u_i\in {\mathcal {U}}_i, i\in I,\) and \(v_j \in {\mathcal {V}}_j, j\in J,\) the functions \(f_i(\cdot ,u_i)\) and \(g_j(\cdot ,v_j)\) are convex, and that \(\bigcup _{u_i \in {\mathcal {U}}_i(x)} \partial f_i(\cdot ,u_i)(x)\) is convex. If there exist real numbers \(\eta , r >0\), such that for every \(x\in S\cap B(x_0,r)\)

$$\begin{aligned} \eta {\mathbf {B}} \cap N(S,x) \subseteq&\sum _{i=1}^m\lambda _i\left( \bigcup _{u_i \in {\mathcal {U}}_i(x)} \partial f_i\left( \cdot ,u_i\right) (x)\right) \nonumber \\&+ \bigcup _{\mu _j \in M_j(x)} \left( \sum _{j\in J}\mu _j\partial g_j\left( \cdot ,v_j\right) (x)\right) \nonumber \\&+N(\Omega ,x), \end{aligned}$$
(4.5)

then \(x_0\) is a local robust weak sharp efficient solution of (UMP).

Proof

Since \(\Omega \) is closed and convex and, for each \(v_j\in {\mathcal {V}}_j, j\in J,\) the functions \(g_j(\cdot ,v_j)\) are convex, the robust feasible set K is closed and convex. Moreover, since each \(\max _{u_i\in {\mathcal {U}}_i} f_i(\cdot , u_i), i\in I,\) is convex and locally Lipschitz, the solution set S is closed and convex. Assume that there exist real numbers \(\eta , r \in (0,+\infty )\), such that (4.5) holds. To verify that \(x_0\) is a local robust weak sharp efficient solution of (UMP), we let \(r_1 \in (0,\frac{1}{2}r)\) be given. We claim that

$$\begin{aligned} \max _{1\le i \le m}\left\{ \max _{u_i\in {\mathcal {U}}_i}f_i(y,u_i)-\max _{u_i\in {\mathcal {U}}_i}f_i(x_0,u_i)\right\} \ge \eta d(y,S), \, \forall y\in K\cap B(x_0,r_1). \end{aligned}$$
(4.6)

Let \(y\in K\cap B(x_0,r_1)\) be arbitrary. It is not hard to see that (4.6) holds trivially if \(y\in K\) and \(\max _{u\in {\mathcal {U}}}f(y,u)=\max _{u\in {\mathcal {U}}}f(x_0,u),\) i.e., \(y \in S.\) On the other hand, if \(y\notin S,\) then we have from \(x_0\in S\) that \( 0<d(y,S)\le \Vert y-x_0\Vert <r_1. \) Clearly, \(\frac{1}{r_1}d(y,S) \in (0, 1).\) By [34, Theorem 2.3], for any \(\gamma \in (\frac{1}{r_1}d(y,S),1),\) there exist

$$\begin{aligned} x\in S \text { and } x^* \in {\mathbf {B}}\cap N(S,x), \end{aligned}$$

such that

$$\begin{aligned} \min \left\{ d(y,S),\left\langle x^*,y-x\right\rangle \right\} >\gamma \Vert y-x\Vert . \end{aligned}$$
(4.7)

Therefore, we arrive at \(\Vert y-x\Vert <\frac{1}{\gamma }d(y,S),\) and so \(x\in B(x_0,r),\) since \(\Vert x-x_0\Vert \le \Vert x-y\Vert +\Vert y-x_0\Vert<r_1+r_1<r.\) Since \(x\in S\cap B(x_0,r),\) \(x^* \in {\mathbf {B}}\cap N(S,x),\) and (4.5) holds, there exist \(\lambda _i \ge 0\) with \(\sum _{i\in I}\lambda _i=1,\) \(u^*_i \in \partial f_i(\cdot ,{\bar{u}}_i)(x)\) for some \({\bar{u}}_i \in {\mathcal {U}}_i(x), i\in I,\) \(\mu _j \in M_j(x), v^*_j\in \partial g_j(\cdot ,v_j)(x), j\in J,\) and \(b\in N(\Omega ,x)\), such that

$$\begin{aligned} \eta x^* =\sum _{i\in I}\lambda _iu^*_i+\sum _{j\in J}\mu _jv^*_j+b. \end{aligned}$$
(4.8)

Observe that \(y \in \Omega \), since \(y\in K\subseteq \Omega .\) By the convexity of \(\Omega ,\) we obtain \(\langle b,y-x\rangle \le 0.\) Furthermore, since for each \(u_i \in {\mathcal {U}}_i, i\in I,\) and \(v_j\in {\mathcal {V}}_j, j\in J,\) the functions \(\max _{u_i\in {\mathcal {U}}_i}f_i(\cdot ,u_i)\) and \(g_j(\cdot ,v_j)\) are convex, one has

$$\begin{aligned} \left\langle u_i^*,y-x\right\rangle\le & {} \max _{u_i\in {\mathcal {U}}_i}f_i\left( y,u_i\right) -\max _{u_i\in {\mathcal {U}}_i}f_i\left( x,u_i\right) , \, \forall i\in I, \end{aligned}$$
(4.9)
$$\begin{aligned} \left\langle v_j^*,y-x\right\rangle\le & {} g_j\left( y,v_j\right) -g_j\left( x,v_j\right) , \, \forall j\in J. \end{aligned}$$
(4.10)

Since y is a robust feasible solution of (UMP), we have \(g_j(y,v_j)\le 0\) for all \(v_j\in {\mathcal {V}}_j, j \in J.\) Moreover, \(x\in S\) gives \(x\in K\) and \(\max _{u\in {\mathcal {U}}}f(x,u)=\max _{u\in {\mathcal {U}}}f(x_0,u).\) Hence, combining equality (4.8), \(\langle b,y-x\rangle \le 0,\) and inequalities (4.9)-(4.10), we obtain:

$$\begin{aligned} \langle \eta x^*,y-x\rangle =&\left\langle \sum _{i\in I}\lambda _i u^*_i +\sum _{j\in J}\mu _jv^*_j, y-x\right\rangle \nonumber \\ \le&\sum _{i\in I}\lambda _i \left( \max _{u_i\in {\mathcal {U}}_i}f_i\left( y,u_i\right) -\max _{u_i\in {\mathcal {U}}_i}f_i(x,u_i)\right) \nonumber \\&+\sum _{j\in J}\mu _j\left( g_j\left( y,v_j\right) -g_j\left( x,v_j\right) \right) \nonumber \\ \le&\sum _{i\in I}\lambda _i\max _{1\le i\le m}\left\{ \max _{u_i\in {\mathcal {U}}_i}f_i(y,u_i)-\max _{u_i\in {\mathcal {U}}_i}f_i(x,u_i)\right\} \nonumber \\&+\sum _{j\in J}\mu _j\left( g_j\left( y,v_j\right) -g_j\left( x,v_j\right) \right) \nonumber \\ \le&\sum _{i\in I}\lambda _i\max _{1\le i\le m}\left\{ \max _{u_i\in {\mathcal {U}}_i}f_i(y,u_i)-\max _{u_i\in {\mathcal {U}}_i}f_i(x,u_i)\right\} \nonumber \\ =&\max _{1\le i\le m}\left\{ \max _{u_i\in {\mathcal {U}}_i}f_i(y,u_i)-\max _{u_i\in {\mathcal {U}}_i}f_i(x,u_i)\right\} . \end{aligned}$$
(4.11)

Observe that \(x\in S,\) so we have \(d(y,S)\le \Vert y-x\Vert .\) By inequalities (4.7) and (4.11), we obtain \( \eta \gamma d(y,S) \le \eta \gamma \Vert y-x\Vert \le \langle \eta x^*,y-x\rangle \le \max _{1\le i\le m}\{\max _{u_i\in {\mathcal {U}}_i}f_i(y,u_i)-\max _{u_i\in {\mathcal {U}}_i}f_i(x,u_i)\}.\) Letting \(\gamma \rightarrow 1,\) inequality (4.6) follows, as \(y \in K\cap B(x_0,r_1)\) is arbitrary. Therefore, \(x_0\) is a local robust weak sharp efficient solution for (UMP). \(\square \)

5 Concluding remarks

In this paper, we investigate an uncertain multiobjective optimization problem involving nonsmooth and nonconvex functions. We establish necessary and sufficient optimality conditions for robust weak sharp efficient solutions of the considered problem. These optimality conditions are presented in terms of multipliers and Mordukhovich subdifferentials of the related functions. The main highlights of the paper are the following three points:

  1. (1)

    In the discussion on the necessary optimality conditions for the local robust weak sharp efficient solution of (UMP), we employ the generalized Fermat rule, the Mordukhovich subdifferential for maximum functions, the fuzzy sum rule for Fréchet subdifferentials, and the sum rule for Mordukhovich subdifferentials.

  2. (2)

In the discussions on such necessary optimality conditions, no convexity assumptions on the objective functions, the constraint functions, or the uncertainty sets are imposed.

  3. (3)

In the discussion on the sufficient optimality conditions for robust weak sharp efficient solutions of (UMP), we employ generalized convexity, the approximate projection theorem, and some appropriate convexity and affineness conditions.