1 Introduction

Vector optimization problems, also known as multiobjective programming problems, have been applied in various fields of science, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Vector optimization problems arise when more than one objective function is to be optimized over a given feasible region. Pareto optimum is the optimality concept that appears to be the natural extension of the optimization of a single objective to the consideration of multiple objectives. This concept of optimality in vector optimization has long played an important role in economics, game theory, statistical decision theory, and in all optimal decision problems with noncomparable criteria.

Researchers study vector optimization problems from different viewpoints and, therefore, pursue different goals when formulating and solving them. The goal may be finding a representative set of Pareto optimal solutions, and/or quantifying the trade-offs between the different objectives, and/or finding a single solution that satisfies the preferences of a human decision maker.

However, not all practical problems, when formulated as multiobjective optimization problems, fulfill the requirements of differentiability. Many practical problems encountered in economics, engineering design, management science, and so forth, can be described only by nonsmooth functions and modeled as vector optimization problems. Consequently, the field of nonsmooth vector optimization problems, in which every component of the involved functions is locally Lipschitz, has grown remarkably in the setting of optimality results (see, for instance, Bhatia and Jain [1], Bolintinéanu [2], Brandao et al. [3], Chinchuluun and Pardalos [4], Clarke [5], Coladas et al. [6], Craven [7], Doleżal [8], El Abdouni and Thibault [9], Huang et al. [10], Jeyakumar and Yang [11], Kanniappan [12], Miettinen [13], Minami [14], Luc [15], Reiland [16], Wang [17], and others).

The field of quasidifferentiable programming has grown remarkably since Demyanov and Rubinov [18] laid its theoretical foundations in the 1980s. This is a consequence of the fact that quasidifferential calculus plays an important role in nonsmooth analysis and optimization. Further, the class of quasidifferentiable functions is fairly broad. It contains not only convex, concave, and differentiable functions, but also convex–concave, D.C. (i.e., difference of two convex), maximum, and other functions. In addition, it even includes some functions which are not locally Lipschitz continuous. Recently, many authors have studied optimality conditions for quasidifferentiable optimization problems. Necessary optimality conditions for quasidifferentiable scalar optimization problems were presented in geometric form (see, for example, Demyanov and Rubinov [19], Kuntz and Scholtes [20], Polyakova [21], Ward [22]). Later, necessary optimality conditions were presented with Lagrange multipliers (see, for example, Eppler and Luderer [23], Gao [24, 25], Luderer and Rösiger [26], Shapiro [27], Uderzo [28], Xia et al. [29]). In these necessary conditions, the Lagrange multipliers usually depend on supergradients; in other words, different supergradients would yield different Lagrange multipliers.

However, quasidifferentiable multiobjective optimization problems, as a type of nonsmooth vector optimization, still present some unexplored questions for research. To the best of our knowledge, most works on quasidifferential calculus, including those mentioned above, were devoted to studying necessary and sufficient optimality conditions for quasidifferentiable scalar optimization problems only. The aim of this paper is, therefore, to explore optimality conditions with (nonconstant) Lagrange multipliers for a quasidifferentiable vector optimization problem. We consider a quasidifferentiable multiobjective optimization problem with inequality constraints. The main object of this paper is to establish the Fritz John-type and the Karush–Kuhn–Tucker-type necessary optimality conditions for a weak Pareto solution of such a nonsmooth vector optimization problem. Further, we introduce the definition of an F-convex function with respect to a convex compact set. Then, we prove sufficient optimality conditions for (weak) Pareto optimality of a feasible solution of the considered nonsmooth multiobjective optimization problem under the assumption that the involved functions are quasidifferentiable F-convex with respect to convex compact sets which are equal to the Minkowski sums of their subdifferentials and superdifferentials at this point.

The paper is organized as follows. The next section recalls some basic definitions of quasidifferential calculus. We recall the definition of a scalar quasidifferentiable function and its fundamental properties, and then extend this concept to the vectorial case. In Sect. 3, we formulate the quasidifferentiable multiobjective optimization problem that we deal with throughout this paper. The Fritz John-type necessary optimality conditions for a weak Pareto solution are proved for a feasible point of the considered nonsmooth vector optimization problem with quasidifferentiable functions. Under a constraint qualification (see Kuntz and Scholtes [20], Luderer and Rösiger [26]), the Karush–Kuhn–Tucker-type necessary optimality conditions for a weak Pareto optimal solution are also established for such nondifferentiable vector optimization problems. In Sect. 4, we introduce the definition of an F-convex function at a point with respect to a nonempty, convex and compact set. Further, the sufficient optimality conditions for (weak) Pareto optimality of a feasible point are proved under the assumption that the functions constituting the considered nonsmooth multiobjective optimization problem are quasidifferentiable F-convex with respect to convex compact sets which are equal to the Minkowski sums of their subdifferentials and superdifferentials at this point. This result is illustrated by an example of a nonconvex quasidifferentiable vector optimization problem with F-convex functions with respect to such convex compact sets.

2 Preliminaries

The following convention for equalities and inequalities will be used throughout the paper.

For any \({x} = ({x}_{1}, {x}_{2},\ldots ,{x}_{{n}})^{T}, {y} = ({y}_{1}, {y}_{2},\ldots ,{y}_{{n}})^{{T}} \in \hbox {IR}^{{n}},\) we define:

  1. (i)

    \(x = y\) if and only if \({x}_{{i}} = {y}_{{i}}\) for all \(i = 1,2,\ldots ,n\);

  2. (ii)

    \({x} < {y}\) if and only if \({x}_{{i}} < {y}_{{i}}\) for all \(i = 1,2,\ldots ,n\);

  3. (iii)

    \({x} \leqq {y}\) if and only if \({x}_{{i}}\leqq {y}_{{i}}\) for all \(i = 1,2,\ldots ,n\);

  4. (iv)

    \({x} \le {y}\) if and only if \({x} \leqq {y}\) and \({x} \ne {y}\).

We say that a mapping \(f : \hbox {IR}^{{n}} \rightarrow \hbox {IR}\) is directionally differentiable at \({u} \in \hbox {IR}^{{n}}\) in the direction \({d} \in \hbox {IR}^{{n}}\) iff the limit

$$\begin{aligned} {{f}}'({u;d}):=\mathop {\lim }\limits _{\alpha \downarrow 0} \frac{{f}({u}+\alpha {d})-{f(u)}}{\alpha } \end{aligned}$$

exists and is finite. We say that f is directionally differentiable (semi-differentiable) at u iff its directional derivative \({f}^\prime ({u;d})\) exists and is finite for all \({d} \in \hbox {IR}^{{n}}\).

A vector-valued function \({f} : \hbox {IR}^{{n}} \rightarrow \hbox {IR}^{{k}}\) is said to be directionally differentiable at \({u} \in \hbox {IR}^{{n}}\) in the direction \({d} \in \hbox {IR}^{{n}}\) iff each of its components \({f}_{{i}}, {i} = 1,{\ldots },{k},\) is directionally differentiable at u in the direction d.
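The difference-quotient definition above can be illustrated numerically. The following Python sketch (the helper name and step size are our illustrative choices) checks that \(f(x) = |x|\), although not differentiable at 0, is directionally differentiable there with \(f'(0;d) = |d|\):

```python
def directional_derivative(f, u, d, alpha=1e-8):
    # Forward-difference approximation of f'(u; d) = lim_{alpha -> 0+} (f(u + alpha*d) - f(u)) / alpha
    return (f(u + alpha * d) - f(u)) / alpha

f = lambda x: abs(x)  # not differentiable at 0, but directionally differentiable everywhere
assert abs(directional_derivative(f, 0.0, 1.0) - 1.0) < 1e-6   # f'(0; 1)  = |1|  = 1
assert abs(directional_derivative(f, 0.0, -1.0) - 1.0) < 1e-6  # f'(0; -1) = |-1| = 1
assert abs(directional_derivative(f, 2.0, -1.0) + 1.0) < 1e-6  # f'(2; -1) = -1
```

Note that the two one-sided derivatives at 0 differ (\(f'(0;1) = 1\) while \(-f'(0;-1) = -1\)), which is exactly why a two-sided derivative fails to exist there.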

Definition 2.1

[18] A real-valued function \({f} : \hbox {IR}^{{n}} \rightarrow \hbox {IR}\) is said to be quasidifferentiable at \(u \in \hbox {IR}^{{n}}\) iff f is directionally differentiable at u and there exists an ordered pair of convex compact sets \({D}_{f} ({u})=[\underline{\partial }{f(u)},\overline{{\partial }}{f(u)}]\) such that, for all \(d \in \hbox {IR}^{{n}}\),

$$\begin{aligned} {{f}}'({u;d})=\mathop {\sup }\limits _{v\in \underline{\partial }{f(u)}} \left\langle {{v,d}} \right\rangle +\mathop {\inf }\limits _{w\in \overline{\partial }{f(u)}} \left\langle {{w,d}} \right\rangle , \end{aligned}$$
(1)

where \(\underline{\partial }{f(u)}\) and \(\overline{{\partial }}{f(u)}\) are called the subdifferential and the superdifferential of f at u, respectively. Further, the ordered pair of sets \({D}_{f} ({u})=[\underline{\partial }{f(u)},\overline{{\partial }}{f(u)}]\) is called a quasidifferential of the function f at u.

Let us note that the pair of sets constituting a quasidifferential of a function f at a certain point u is not unique: if \({D}_{f} ({u})=[\underline{\partial }{f(u)},\overline{{\partial }}{f(u)}]\) is a quasidifferential of f at u, then, for any convex compact set V, the ordered pair of sets \([\underline{\partial }{f(u)}+{V},\overline{{\partial }}{f(u)}-{V}]\) is also a quasidifferential of f at u.
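This non-uniqueness can be checked numerically in one dimension, where the sup and inf of a linear function over an interval are attained at its endpoints. The sketch below (our illustration) confirms that the pair \([[-1,1],\{0\}]\) for \(f(x)=|x|\) at \(u=0\) and its shift by \(V=[0,1]\) represent the same directional derivative \(f'(0;d)=|d|\):

```python
def quasi_dd(sub, sup, d):
    # f'(u; d) = max_{v in sub} v*d + min_{w in sup} w*d; for 1-D intervals sub = (a, b)
    # and sup = (c, e), a linear function attains its extrema at the endpoints.
    a, b = sub
    c, e = sup
    return max(a * d, b * d) + min(c * d, e * d)

# f(x) = |x| at u = 0: one quasidifferential is [[-1, 1], {0}]; shifting by the
# convex compact set V = [0, 1] gives another, [[-1, 2], [-1, 0]].
for d in (-2.0, -0.5, 0.7, 3.0):
    assert abs(quasi_dd((-1.0, 1.0), (0.0, 0.0), d) - abs(d)) < 1e-12
    assert abs(quasi_dd((-1.0, 2.0), (-1.0, 0.0), d) - abs(d)) < 1e-12
```

Both pairs recover \(f'(0;d)=|d|\), although the sets themselves are different.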

Definition 2.2

It is said that a vector-valued function \({f} := ({f}_{1},{\ldots },{f}_{{k}}) : \hbox {IR}^{{n}} \rightarrow \hbox {IR}^{{k}}\) is quasidifferentiable at \(u \in \hbox {IR}^{{n}}\) iff each of its components \({f}_{{i}}, {i} = 1,{\ldots },{k},\) is quasidifferentiable at u, with quasidifferential \({D}_{{f_i}} ({u})=[\underline{\partial }{f}_{i} ({u}),\overline{{\partial }}{f}_{i} ({u})]\) at u.

3 Necessary Optimality Conditions

In this paper, we consider the following nonsmooth vector optimization problem:

$$\begin{aligned} (\hbox {VOP})\quad \min {f(x)}=({f}_1 ({x}),\ldots ,{f}_{k} ({x}))\,\hbox {s.t.}\quad {g}_{j} ({x})\leqq 0,\quad {j}= 1,\ldots ,m, \quad x \in \hbox {IR}^{{n}}, \end{aligned}$$

where \({f}_{i} :\hbox {IR}^{{n}}\rightarrow \hbox {IR}, {i} \in {I} = \{1,{\ldots },{k}\}, {g}_{j} :\hbox {IR}^{{n}}\rightarrow \hbox {IR}, {j}\in {J}=\{1,\ldots ,{m}\},\) are quasidifferentiable functions on \(\hbox {IR}^{{n}}\). We call (VOP) a quasidifferentiable vector optimization problem. We will write \({f} : = ({f}_{1},\ldots ,{f}_{{k}}) : \hbox {IR}^{{n}} \rightarrow \hbox {IR}^{{k}}\) and \({g} : = ({g}_{1},\ldots ,{g}_{{m}}): \hbox {IR}^{{n}} \rightarrow \hbox {IR}^{{m}}\) for convenience.

For the purpose of simplifying our presentation, we will introduce some notations, which will be used frequently throughout this paper.

Let \(\varOmega :=\{{x}\in \hbox {IR}^{{n}}:{g}_{j} ({x})\leqq 0,{j}\in {J}\}\) be the set of all feasible solutions of problem (VOP). Further, we denote by \({J}(\bar{{x}})\) the set of indices of inequality constraints that are active at a point \(\bar{{x}}\in \varOmega \), that is, \(J(\bar{{x}}):=\{{j}\in {J:g}_{j} (\bar{{{x}}})=0\}\).
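For illustration, the feasibility test and the active index set \(J(\bar{x})\) can be computed directly for a small instance; the two constraints below are hypothetical examples of ours (indices are 0-based in the code):

```python
def active_indices(g_list, x, tol=1e-9):
    # Return J(x): indices j with g_j(x) = 0, after checking feasibility g_j(x) <= 0.
    vals = [g(x) for g in g_list]
    assert all(v <= tol for v in vals), "x is not feasible"
    return [j for j, v in enumerate(vals) if abs(v) <= tol]

g = [lambda x: x - 1.0, lambda x: -x]  # g_1(x) = x - 1 <= 0, g_2(x) = -x <= 0
assert active_indices(g, 1.0) == [0]   # only g_1 (index 0) is active at x = 1
assert active_indices(g, 0.5) == []    # no constraint is active in the interior
```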

The solution concept of a vector optimization problem is referred to in the literature as an efficient solution, or Pareto solution (respectively, a weakly efficient solution, or weak Pareto solution).

Definition 3.1

(i):

A feasible point \(\bar{{{x}}}\) is said to be a Pareto solution (efficient solution) in problem (VOP) if and only if there exists no \({x}\in \varOmega \) such that

$$\begin{aligned} {f(x)} \le {f}(\bar{{{x}}}). \end{aligned}$$
(ii):

A feasible point \(\bar{{x}}\) is said to be a weak Pareto solution (weakly efficient solution, weak minimum) in problem (VOP) if and only if there exists no \({x} \in \varOmega \) such that

$$\begin{aligned} {f(x)} < {f}(\bar{{{x}}}). \end{aligned}$$
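Definition 3.1(ii) can be checked by brute force on a discretized feasible set. The bi-objective instance and the grid in the sketch below are our hypothetical choices, not taken from the paper:

```python
import numpy as np

# x_bar is a weak Pareto solution iff no feasible x satisfies f_i(x) < f_i(x_bar)
# for ALL components i (Definition 3.1(ii)).
def is_weak_pareto(f, x_bar, feasible_points):
    fx_bar = np.asarray(f(x_bar))
    return not any(np.all(np.asarray(f(x)) < fx_bar) for x in feasible_points)

# Hypothetical instance: f(x) = (x^2, (x - 2)^2) on the discretized interval [0, 3].
f = lambda x: (x ** 2, (x - 2.0) ** 2)
grid = np.linspace(0.0, 3.0, 301)
assert is_weak_pareto(f, 1.0, grid)      # no x improves both objectives strictly
assert not is_weak_pareto(f, 3.0, grid)  # e.g. x = 2 strictly improves both objectives
```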

In order to prove the Fritz John-type necessary optimality conditions for the considered quasidifferentiable vector optimization problem (VOP), we use the \(\varepsilon \)-constraint method introduced by Haimes et al. [31] (see also Chankong and Haimes [32], Miettinen [13]).

In this method, one of the objective functions of the vector optimization problem is selected to be optimized, and all other objective functions are converted into constraints by setting an upper bound on each of them. Hence, for problem (VOP), the associated scalar optimization problem to be solved is of the form:

$$\begin{aligned} ({P}_{{r}})_{\varepsilon } \quad \min {f}_{{r}}({x})\quad \hbox {s.t.}\quad {f}_{i} ({x})\leqq \varepsilon _{{i}},\; {i}\in {I},\; {i}\ne {r}, \; {g}_{j} ({x})\leqq 0,\; {j}\in {J}, \; {x} \in \hbox {IR}^{{n}}. \end{aligned}$$

Theorem 3.1

[13] \(\bar{{{x}}} \in \varOmega \) is a Pareto solution in the considered (VOP) if and only if it is an optimal solution of the \(\varepsilon \)-constraint optimization problem \(({P}_{{r}}) _{\varepsilon }\) for every \({r} = 1,{\ldots },{k},\) where \(\varepsilon _{{i}} = {f}_{{i}}(\bar{{{x}}})\) for all \({i} \in {I}, {i} \ne {r}\).

Taking into account the above theorem, we denote the above problem by \(({P}_{{r}}(\bar{{{x}}}))\) and rewrite it in the following way:

$$\begin{aligned} ({P}_{{r}}(\bar{{{x}}}))\quad \! {\min }{f}_{{r}}({x})\quad \text{ s.t. } \quad f_{i} {(x)}\leqq {f}_{i} (\bar{{x}}),\; {i}\in {I},\; {i}\ne {r}, \; {g}_{j} ({x})\leqq 0,\; {j}\in {J}, \; {x} \in \text{ IR }^{{n}}. \end{aligned}$$
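As a concrete illustration of \(({P}_{{r}}(\bar{x}))\), the following brute-force sketch (the bi-objective instance and discretized feasible set are our hypothetical choices) verifies the conclusion of Theorem 3.1 for one Pareto solution:

```python
def eps_constraint_min(objectives, r, eps, candidates):
    # Minimize objectives[r] over candidates subject to objectives[i](x) <= eps[i], i != r.
    feasible = [x for x in candidates
                if all(obj(x) <= eps[i] for i, obj in enumerate(objectives) if i != r)]
    return min(feasible, key=objectives[r])

f1 = lambda x: x ** 2
f2 = lambda x: (x - 2.0) ** 2
grid = [i / 100 for i in range(201)]  # discretized feasible set [0, 2]
x_bar = 1.0                           # a Pareto solution of (f1, f2) on [0, 2]
# With eps_i = f_i(x_bar), x_bar solves (P_r(x_bar)) for every r, as Theorem 3.1 states.
assert eps_constraint_min([f1, f2], 0, {1: f2(x_bar)}, grid) == x_bar
assert eps_constraint_min([f1, f2], 1, {0: f1(x_bar)}, grid) == x_bar
```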

We now prove the Fritz John-type necessary optimality conditions for the considered quasidifferentiable (VOP).

Theorem 3.2

(Fritz John-type necessary optimality conditions) Let \(\bar{{{x}}} \in \varOmega \) be a weak Pareto solution in the considered quasidifferentiable (VOP). Further, assume that each \(f_{i}, i \in I,\) is quasidifferentiable at \(\bar{{{x}}}\), with the quasidifferential \({D}_{{f}_{i}} (\bar{{{x}}})=[\underline{\partial }{f}_{i} (\bar{{{x}}}),\overline{{\partial }}{f}_{i} (\bar{{{x}}})]\), and each \({g}_{{j}}, {j} \in {J},\) is quasidifferentiable at \(\bar{{{x}}}\), with the quasidifferential \({D}_{{g}_{j} } (\bar{{{x}}})=[\underline{\partial }{g}_{j} (\bar{{{x}}}),\overline{{\partial }}{g}_{j} (\bar{{{x}}})]\). Then, for any choice of \({w}_{i} \in \overline{{\partial }}{f}_{i} (\bar{{{x}}}), {i} \in {I},\) and \({v}_{j} \in \overline{{\partial }}{g}_{j} (\bar{{{x}}}), j \in {J},\) there exist vectors \(\bar{{\lambda }}({z})\in \hbox {IR}^{{k}}\) and \(\bar{{\mu }}({z})\in \hbox {IR}^{{m}}\) such that

$$\begin{aligned}&0 \in \displaystyle \sum \limits _{i=1}^{{k}} {\bar{{\lambda }}_{{i}}} ({z})(\underline{\partial }{f}_{{i}} (\bar{{{x}}}) +{w}_{{i}}) +\sum \limits _{{j}=1}^{{m}} {\bar{{\mu }}_{{j}}} ({z})(\underline{\partial }{g}_{{j}} (\bar{{x}})+{v}_{{j}} ) , \end{aligned}$$
(2)
$$\begin{aligned}&\bar{{\mu }}_j ({z}){g}_{j} (\bar{{{x}}})=0,\quad {j}\in {J}, \end{aligned}$$
(3)
$$\begin{aligned}&(\bar{{\lambda }}({z}),\bar{{\mu }}({z}))\ge 0, \end{aligned}$$
(4)

where \(\bar{{\lambda }}({z})=(\bar{{\lambda }}_1 (z),\ldots ,\bar{{\lambda }}_{k} ({z}))\) and \(\bar{{\mu }}({z})=(\bar{{\mu }}_1 ({z}),\ldots ,\bar{{\mu }}_{m} ({z}))\) are dependent on the specific choice of \({z} = {(w,v)} = ({w}_{1},{\ldots },{w}_{{k}},{v}_{1},{\ldots },{v}_{{m}})\).

Proof

Assume that \(\bar{{{x}}} \in \varOmega \) is a weak Pareto solution of (VOP). Then, by Theorem 3.1, it follows that \(\bar{{{x}}}\) is a minimizer in the scalar optimization problem \(({P}_{{r}}(\bar{{{x}}}))\) for each \(r = 1,{\ldots },k\). Since \(\bar{{{x}}}\) is an optimal solution in the quasidifferentiable scalar optimization problem \(({P}_{{r}}(\bar{{{x}}}))\), by Proposition 2.1 (Gao [25]), it follows that, for any \({w_i} \in {\overline{\partial }}f_i(\bar{x}), {i \in I},\) and \({v_j} \in {\overline{\partial }}g_j(\bar{x}), {j \in J},\) there exist scalars \(\bar{{\lambda }}_{i} ({z})\geqq 0, {i} \in {I},\) and \(\bar{{\mu }}_{j} ({z})\geqq 0, {j} \in {J},\) not all zero, such that

$$\begin{aligned}&\displaystyle 0\in \bar{{\lambda }}_{r} ({z})(\underline{\partial }{f}_{r} (\bar{{{x}}})+{w}_{r})+ \sum \limits _{{i}\in {I},{i}\ne {r}}{\bar{{\lambda }}_{i} ({z})(\underline{\partial } {f}_{i} (\bar{{{x}}})+{w}_{i} )} +\sum \limits _{{j}=1}^{m} {\bar{{\mu }}_{j} ({z})(\underline{\partial }{g}_{j} (\bar{{{x}}})+{v}_{j} )} , \nonumber \\\end{aligned}$$
(5)
$$\begin{aligned}&\bar{{\mu }}_{{j}} ({z}){g}_{{j}} (\bar{{{x}}})=0, \quad {j}\in {J}, \end{aligned}$$
(6)
$$\begin{aligned}&(\bar{{\lambda }}({z}),\bar{{\mu }}({z}))\ge 0, \end{aligned}$$
(7)

where \(\bar{{\lambda }}({z})=(\bar{{\lambda }}_1 ({z}),\ldots ,\bar{{\lambda }}_{{k}} ({z}))\) and \(\bar{{\mu }}({z})=(\bar{{\mu }}_1 ({z}),\ldots ,\bar{{\mu }}_{{m}} ({z}))\) are dependent on the specific choice of \({z} = {(w,v)} = ({w}_{1},{\ldots },{w}_{{k}}, {v}_{1},{\ldots },{v}_{{m}})\). Note that (2) follows directly from (5), whereas (6) and (7) are the conditions (3) and (4), respectively. This completes the proof of this theorem.

In order to prove the following Karush–Kuhn–Tucker-type necessary optimality conditions for the considered quasidifferentiable multiobjective optimization problem, we need some suitable constraint qualification. We use the constraint qualification given by Luderer and Rösiger [26] for quasidifferentiable optimization problems and analyzed by Kuntz and Scholtes [20] for such nonsmooth extremum problems.

The constraint qualification (CQ): It is said that the constraint qualification (CQ) is fulfilled for the considered quasidifferentiable (VOP) at \(\bar{{{x}}}\) if there exists \(d \in \hbox {IR}^{{n}}\) such that

$$\begin{aligned} \mathop {\max }\limits _{{u}_{j} \in \underline{\partial }{g}_{j} (\bar{{{x}}})} \left\langle {{u}_{j} ,{d}} \right\rangle +\mathop {\max }\limits _{{v}_{j} \in \overline{\partial }{g}_{j} (\bar{{{x}}})} \left\langle {{v}_{j} ,{d}} \right\rangle <0,\quad {j}\in {J}(\bar{{{x}}}). \end{aligned}$$
(8)

In Kuntz and Scholtes [20], this constraint qualification is stated in terms of quasidifferentials as follows: It is said that the constraint qualification (CQ) is fulfilled at \(\bar{{{x}}}\) for (VOP) if for every \({j} \in {J}(\bar{{{x}}})\), there exists a quasidifferential \({D}_{{g}_{j} } (\bar{{{x}}})=[\underline{\partial }{g}_{j} (\bar{{{x}}}),\overline{{\partial }}{g}_{j} (\bar{{{x}}})]\) such that

$$\begin{aligned} 0\notin \hbox {conv}\bigcup _{{j}\in {J}(\bar{{{x}}})}{(\underline{\partial }{g}_{j} (\bar{{{x}}})+\overline{{\partial }}{g}_{j} (\bar{{{x}}}))}. \end{aligned}$$
(9)
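In one dimension, condition (9) can be checked directly: the Minkowski sum of two intervals \([a,b]\) and \([c,e]\) is \([a+c,\,b+e]\), and the convex hull of a union of intervals is again an interval. A small sketch under these assumptions (the constraint data are hypothetical):

```python
def cq_holds(pairs):
    # pairs: list of (sub, sup) interval pairs (a, b), (c, e), one per active constraint.
    # conv of the union of the Minkowski sums is the interval [lo, hi]; (CQ) asks 0 not in it.
    lo = min(sub[0] + sup[0] for sub, sup in pairs)
    hi = max(sub[1] + sup[1] for sub, sup in pairs)
    return not (lo <= 0.0 <= hi)

# Hypothetical active constraint g(x) = x - 1 at x_bar = 1: sub = {1}, sup = {0}.
assert cq_holds([((1.0, 1.0), (0.0, 0.0))])       # 0 not in conv{1}: (CQ) holds
assert not cq_holds([((-1.0, 1.0), (0.0, 0.0))])  # 0 in conv[-1, 1]: (CQ) fails
```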

Theorem 3.3

(Karush–Kuhn–Tucker-type necessary optimality conditions) Let \(\bar{{{x}}}\in \varOmega \) be a weak Pareto solution for the considered (VOP). Further, assume that each \({f}_{{i}}, {i} \in {I}\), is quasidifferentiable at \(\bar{{{x}}}\), with the quasidifferential \({D}_{{f}_{i} } (\bar{{{x}}})=[\underline{\partial }{f}_{i} (\bar{{{x}}}),{\overline{\partial }}{f}_{i} (\bar{{{x}}})]\), and each \({g} _{{j}} , {j} \in {{J}}\), is quasidifferentiable at \(\bar{{{x}}}\), with the quasidifferential \({D}_{{g}_{j}} (\bar{{{x}}})=[\underline{\partial }{g}_{j} (\bar{{{x}}}),\overline{{\partial }}{g}_{j} (\bar{{{x}}})]\). If the constraint qualification (CQ) is satisfied at \(\bar{{{x}}}\) for (VOP), then, for any choice of \({w}_{i} \in \overline{{\partial }}{f}_{i} (\bar{{{x}}})\), \(i \in {I}\), and \(v_j \in \overline{{\partial }}g_j (\bar{{{x}}}), {j} \in {{J}}\), there exist vectors \(\bar{{\lambda }}({z})\in \hbox {IR}^{{k}}\) and \(\bar{{\mu }}({z})\in \hbox {IR}^{{m}}\) such that

$$\begin{aligned}&\displaystyle 0\in \sum _{{i}=1}^{k} {\bar{{\lambda }}_{i} ({z})(\underline{\partial }{f}_{i} (\bar{{{x}}})} +{w}_{i} )+\sum _{{j}=1}^{m} {\bar{{\mu }}_{j} ({z})(\underline{\partial }{g}_{j} (\bar{{{x}}})+{v}_{j} )} , \end{aligned}$$
(10)
$$\begin{aligned}&\bar{{\mu }}_{j} ({z}){g}_{j} (\bar{{{x}}})=0,\quad {j}\in {J}, \end{aligned}$$
(11)
$$\begin{aligned}&\bar{{\lambda }}({z})\ge 0,\quad \bar{{\mu }}({z})\geqq 0, \end{aligned}$$
(12)

where \(\bar{{\lambda }}({z})=(\bar{{\lambda }}_1 ({z}),\ldots ,\bar{{\lambda }}_{{k}} ({z}))\) and \(\bar{{\mu }}({z})=(\bar{{\mu }}_1 ({z}),\ldots ,\bar{{\mu }}_{{m}} ({z}))\) are dependent on the specific choice of \({z} = {(w,v)} = ({w}_{1},{\ldots },{w}_{{k}},{v}_{1},{\ldots },{v}_{{m}})\).

Proof

By assumption, \(\bar{{{x}}}\in \varOmega \) is a weak Pareto solution for the considered quasidifferentiable (VOP). Hence, the Fritz John-type necessary optimality conditions (2)–(4) for (VOP) are fulfilled at \(\bar{{{x}}}\). In order to prove this theorem, it therefore suffices to show that \(\bar{{\lambda }}({z})\ne 0\) for every choice of z. We proceed by contradiction. Suppose, contrary to the result, that there exists \({z}^{*}\) such that \(\bar{{\lambda }}({z}^{*})=0\). In other words, there exist \({v}_{j}^*\in \overline{{\partial }}{g}_{j} (\bar{{{x}}}), {j} \in {J},\) such that, by the Fritz John-type necessary optimality condition (2), we have

$$\begin{aligned} \displaystyle 0\in \sum _{{j}=1}^m {\bar{{\mu }}_{j} ({z}^{*})(\underline{\partial }{g}_{j} (\bar{{{x}}})+{v}_{j}^*)} . \end{aligned}$$
(13)

By the Fritz John-type necessary optimality condition (4) and \(\bar{{\lambda }}({z}^{*})=0\), it follows that \(\bar{{\mu }}({z}^{*})\ge 0\), that is, \(\bar{{\mu }}({z}^{*})\geqq 0\) and \(\bar{{\mu }}({z}^{*})\ne 0\). Hence, this implies that

$$\begin{aligned} \sum _{{j}=1}^{m} {\bar{{\mu }}_{j} ({z}^{*})} >0. \end{aligned}$$
(14)

Thus, dividing (13) by \(\sum _{{t}=1}^{m} {\bar{{\mu }}_{t} ({z}^{*})} \), we get

$$\begin{aligned} 0\in \sum _{{j}=1}^{{m}} {\frac{\bar{{\mu }}_{{j}} ({z}^{{*}})}{\sum _{{t}=1}^{m} {\bar{{\mu }}_{{t}} ({z}^{{*}})}}(\underline{\partial }{g}_{{j}} (\bar{{{x}}})+{v}_{{j}}^{*} )} . \end{aligned}$$
(15)

Let us denote

$$\begin{aligned} \alpha _{j} ({z}^{*})=\frac{\bar{{\mu }}_{j} ({z}^{*})}{\sum \nolimits _{{t}=1}^{m} {\bar{{\mu }}_{{t}} ({z}^{*})} },\quad {j} \in {J}(\bar{{{x}}}). \end{aligned}$$
(16)

Hence, by (16) and the fact that, by (6), \(\bar{{\mu }}_{j} ({z}^{*})=0\) for all \({j}\notin {J}(\bar{{{x}}})\), it follows that \(0\leqq \alpha _{j} ({z}^{*})\leqq 1\) and, moreover, \(\sum _{{j}\in {J}(\bar{{{x}}})} {\alpha _{j} ({z}^{*})} =1\). Thus, (15) and (16) yield

$$\begin{aligned} 0\in \sum _{{j}\in {J}(\bar{{{x}}})} {\alpha _{j} ({z}^{*})(\underline{\partial }{g}_{j} (\bar{{{x}}})+{v}_{j}^*)}. \end{aligned}$$
(17)

By the definition of a convex hull of a set, (16) and (17) imply that the following relation

$$\begin{aligned} 0\in \hbox {conv}\bigcup _{{j}\in {J}(\bar{{{x}}})} {(\underline{\partial }{g}_{j} (\bar{{{x}}})+{v}_{j}^*)} \end{aligned}$$
(18)

holds. Since the constraint qualification (CQ) is satisfied at \(\bar{{{x}}}\) for problem (VOP), by (9), it follows that

$$\begin{aligned} 0\notin \hbox {conv}\bigcup _{{j}\in {J}(\bar{{{x}}})} {(\underline{\partial }{g}_{j} (\bar{{{x}}})+{v}_{j} )} \end{aligned}$$
(19)

holds for all \({v}_{j} \in {\overline{\partial }}{g}_{j} (\bar{{{x}}}), {j} \in {J}.\) Thus, it is also satisfied for \(v_j =v_j^*\in {\overline{\partial }}{g}_{j} (\bar{{{x}}}), {j} \in {J}\). Hence, (19) implies that the following relation

$$\begin{aligned} 0\notin \hbox {conv}\bigcup _{{j}\in {J}(\bar{{{x}}})} {(\underline{\partial }{g}_{j} (\bar{{{x}}})+{v}_{j}^*)} \end{aligned}$$

holds, contradicting (18). This means that \(\bar{{\lambda }}({z})\ne 0\) for any choice of z and completes the proof of this theorem.

4 Sufficient Optimality Conditions

In this section, we prove the sufficiency of the Karush–Kuhn–Tucker-type necessary optimality conditions established in the previous section. In order to do this, we introduce the definition of an F-convex function with respect to a convex compact set. Then, we establish the sufficient optimality conditions for (weak) Pareto optimality of a feasible solution of the considered quasidifferentiable vector optimization problem under the assumption that the involved functions are F-convex with respect to convex compact sets which are equal to the Minkowski sums of their subdifferentials and superdifferentials at this point.

Definition 4.1

A functional \({F}: {X} \times {X} \times \hbox {IR}^{{n}} {\rightarrow } \hbox {IR}\) is sublinear (with respect to the third component) if, for all \({x}, {u} \in {X} \subseteq \hbox {IR}^{{n}}\),

  1. (i)

    \({F}({x, u; q}_{1} + {q}_{2}) \leqq {F(x, u; q}_{1}) + {F(x, u; q}_{2}), \forall {q}_{1}, {q}_{2} \in \hbox {IR}^{{n}}\),

  2. (ii)

\({F}({x, u}; \alpha {q}) = \alpha {F}({x, u; q}), \forall \alpha \in \hbox {IR}, \alpha \geqq 0, \forall {q} \in \hbox {IR}^{{n}}\).

Taking \(\alpha = 0\) in (ii), clearly,

$$\begin{aligned} {F}{(x,u;0)} = 0. \end{aligned}$$
(20)
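The two properties in Definition 4.1 can be tested numerically on a sample functional. The sketch below (our illustration) uses \(F(x,u;q) = q(|x|+|u|)\), the functional that appears later in Example 4.1; since it is linear in q, subadditivity holds with equality:

```python
import random

F = lambda x, u, q: q * (abs(x) + abs(u))  # candidate sublinear functional

random.seed(0)
for _ in range(1000):
    x, u = random.uniform(-5, 5), random.uniform(-5, 5)
    q1, q2 = random.uniform(-5, 5), random.uniform(-5, 5)
    a = random.uniform(0, 5)  # positive homogeneity is required only for alpha >= 0
    assert F(x, u, q1 + q2) <= F(x, u, q1) + F(x, u, q2) + 1e-9  # (i) subadditivity
    assert abs(F(x, u, a * q1) - a * F(x, u, q1)) < 1e-9          # (ii) homogeneity
assert F(1.0, 2.0, 0.0) == 0.0                                    # consequence (20)
```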

In recent years, the concept of convexity has been generalized in many directions, and these generalizations have important applications in various fields. One of the significant generalizations of convex functions is the definition of an F-convex function, which was introduced by Hanson and Mond [33] for differentiable functions.

Now, we introduce the definition of an F-convex function with respect to a convex and compact set.

Let u be a given arbitrary point of \(\hbox {IR}^{{n}}\) and \({S}_{f} ({u})\) be a nonempty, convex and compact subset of \(\hbox {IR}^{{n}}\). Further, let \(F : \hbox {IR}^{{n}} \times \hbox {IR}^{{n}} \times \hbox {IR}^{{n}} {\rightarrow } \hbox {IR}\) be a sublinear functional (with respect to the third component).

Definition 4.2

Let \({f} : \hbox {IR}^{{n}} \rightarrow \) IR be a function defined on \(\hbox {IR}^{{n}}\). If there exists a functional F of the type described above such that the inequality

$$\begin{aligned} {f(x)}-{f(u)}\geqq {F}({x,u};\omega ),\quad \forall \omega \in {S}_{f} ({u}) \end{aligned}$$
(21)

holds for all \({x} \in \hbox {IR}^{{n}}\), then f is said to be an F-convex function at u on \(\hbox {IR}^{{n}}\) with respect to the convex compact set \({S}_{{f}}({u})\).

If inequality (21) is strict for all \({x} \in \hbox {IR}^{{n}}, {x} \ne {u},\) then f is said to be a strictly F-convex function at u on \(\hbox {IR}^{{n}}\) with respect to the convex compact set \({S}_{{f}}({u})\).

If inequality (21) is satisfied at every \({u} \in \hbox {IR}^{{n}}\), then f is said to be an F-convex function on \(\text{ IR }^{{n}}\) with respect to the convex compact set \({S}_{{f}}({u})\).

We will say that f is an F-convex function at \({u} \in {X}\) on a nonempty subset X of \(\hbox {IR}^{{n}}\) with respect to the convex compact set \({S}_{{f}}({u})\) if inequality (21) is fulfilled for all \({x}\in {X}\).

Remark 4.1

In order to define an analogous class of F-concave functions with respect to convex compact sets, the direction of inequality (21) should be reversed.

Remark 4.2

Note that the definition of an F-convex function with respect to a convex compact set includes some known classes of functions as special cases. Namely, in the case when \({f} : \hbox {IR}^{{n}} \rightarrow \hbox {IR}\) is locally Lipschitz at each \({u} \in \hbox {IR}^{{n}}\) and the set \({S}_{f} ({u})\) is equal to the Clarke subdifferential [5] of f at u, we obtain the definition of a locally Lipschitz F-convex function on \(\hbox {IR}^{{n}}\) (see [1]). In the case when f is differentiable at each \(u \in \hbox {IR}^{{n}}\) and \({S}_{f} ({u})=\{\nabla {f(u)}\}\), Definition 4.2 reduces to the definition of a differentiable F-convex function (see [33]).

Remark 4.3

It is well known in the literature (see, for example, [30]) that if f is a convex quasidifferentiable function on a convex open set \({X} \subset \hbox {IR}^{{n}}\), then its quasidifferential is \({D}_{{f}}({u}) = [\partial {f(u)},\{0\}],\) where \(\partial {f(u)}\) is the subdifferential of f at u in the sense of convex analysis (see Example 1.4.2 [30]). Thus, f is even subdifferentiable. This means that every quasidifferentiable convex function (in the sense of convex analysis) is a quasidifferentiable F-convex function with respect to the convex compact set equal to \({S}_{f} {(u)}=\underline{\partial }{f(u)}+\overline{\partial }{f(u)}=\underline{\partial }{f(u)}+\{0\}\) (in the sense of Definition 4.2), where F is defined by \(F(x, u; q) = \langle q, x - {u}\rangle \). However, the converse is not true (see, for instance, Example 4.1). This means that there exist quasidifferentiable F-convex functions with respect to a convex compact set \({S}_{{f}}({u})\) (in the sense of Definition 4.2) which are not convex (in the sense of convex analysis).

Now, we present an example of a nonconvex quasidifferentiable function which is quasidifferentiable F-convex with respect to the convex compact set.

Example 4.1

Consider the function \({f} : \hbox {IR} \rightarrow \hbox {IR}\) defined by \({f(x)} := \min \{{x}, 0\}\). Let u be an arbitrary point of IR, let \({F : \hbox {IR}} \times \hbox {IR} \times \hbox {IR} {\rightarrow } \hbox {IR}\) be the sublinear functional (with respect to the third component) defined by \({F(x, u; q)} := {q}(\left| {x} \right| +\left| {u} \right| )\), and let \({S}_{f} {(u)}=[-2,-1]\). Then, by Definition 4.2, it follows that f is an F-convex function on IR with respect to the convex compact set \({S}_{{f}}({u})\) given above.
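The claim of Example 4.1 can be spot-checked numerically. Since \(F(x,u;\omega ) = \omega (|x|+|u|)\) is nondecreasing in \(\omega \) and \(|x|+|u| \geqq 0\), it suffices to test the right endpoint \(\omega = -1\), where F over \(S_f(u) = [-2,-1]\) is largest. A sketch over a sampling grid of our choosing:

```python
import itertools

f = lambda x: min(x, 0.0)                   # the function of Example 4.1
F = lambda x, u, q: q * (abs(x) + abs(u))   # the sublinear functional of Example 4.1

# Inequality (21): f(x) - f(u) >= F(x, u; omega) for all omega in S_f(u) = [-2, -1];
# checking omega = -1 covers the whole interval.
pts = [k / 10 for k in range(-50, 51)]
assert all(f(x) - f(u) >= F(x, u, -1.0) for x, u in itertools.product(pts, pts))
```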

Now, we present an example of a nonconvex quasidifferentiable function which is quasidifferentiable F-convex with respect to the convex compact set \({S}_{{f}}({u})\) which is equal to the Minkowski sum of its subdifferential and superdifferential at this point.

Example 4.2

Consider the function \({f : \hbox {IR}} \rightarrow \hbox {IR}\) defined by \({f(x)}:=\left| {\left| {x} \right| -1} \right| \). Let \(u = 1\) and let \({F : \hbox {IR}} \times \hbox {IR} \times \hbox {IR} {\rightarrow } \hbox {IR}\) be the sublinear functional defined by \({F(x,u;q)} := {q}(\left| {\left| {x} \right| -1} \right| -\left| {\left| {u} \right| -1} \right| )\). Note that f is quasidifferentiable at \(u = 1\). Indeed, by definition, we have \({{f}}'(1;{d})=\mathop {\max }\nolimits _{{v}\in \underline{\partial }{f}(1)} \left\langle {{v,d}} \right\rangle +\mathop {\min }\nolimits _{{w}\in \overline{\partial }{f}(1)} \left\langle {{w,d}} \right\rangle ,\) where \(\underline{\partial }{f}(1)=\hbox {conv}\{-1,1\}\) and \(\overline{\partial }{f}(1)=\{0\}\). Then, by Definition 2.1, f is a quasidifferentiable function at \(u = 1\). Further, by Definition 4.2, it follows that f is a quasidifferentiable F-convex function at \(u =1\) on IR with respect to the convex compact set \({S}_{f} (1)=\underline{\partial }{f}(1)+\overline{\partial }{f}(1)\), which is equal to the Minkowski sum of its subdifferential and superdifferential at this point.
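Example 4.2 can also be verified numerically. At \(u = 1\) we have \(f(1) = 0\), so \(F(x,1;q) = q\,f(x)\), and \(S_f(1) = \hbox {conv}\{-1,1\}+\{0\} = [-1,1]\); since \(f(x) \geqq 0\) and \(q \leqq 1\), inequality (21) holds. A sketch over sample points of our choosing:

```python
f = lambda x: abs(abs(x) - 1.0)             # the function of Example 4.2
F = lambda x, u, q: q * (f(x) - f(u))       # the sublinear functional of Example 4.2

# Inequality (21) at u = 1 with S_f(1) = [-1, 1], sampled at endpoints and interior points.
pts = [k / 10 for k in range(-30, 31)]
qs = [-1.0, -0.5, 0.0, 0.5, 1.0]
assert all(f(x) - f(1.0) >= F(x, 1.0, q) for x in pts for q in qs)
```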

Remark 4.4

As it is mentioned above, for every quasidifferentiable function, its quasidifferential is not unique, because if \({D}_{f} ({u})=[\underline{\partial }{f(u)},\overline{{\partial }}{f(u)}]\) is a quasidifferential of f at u, then, for any compact set V, the ordered pair of sets \(\left[ {\underline{\partial } ^{{V}}{f(u)},{\overline{\partial }}^{{V}}{f(u)}} \right] =[\underline{\partial }{f(u)}+{V},\overline{{\partial }}{f(u)-V}]\) is also its quasidifferential. In the case of a quasidifferentiable function f considered in Example 4.2, if we set \(V = [0,2]\), then \({D}_{f^V} ({u})=[\underline{\partial }^{{V}}{f(u)},{\overline{\partial }}^{{V}}{f(u)}]\) is also its quasidifferential at u with \(\underline{\partial }^{{V}}{f(u)}=[-1,3]\) and \(\overline{\partial } ^{{V}}{f(u)}=[-2,0]\). In Example 4.2, it is showed that f is a quasidifferentiable F-convex function at \(u =1\) with respect to the convex compact set \({S}_{f} (1)=\underline{\partial }{f}(1)+\overline{\partial }{f}(1)\) (which is equal to the Minkowski sum of its subdifferential and superdifferential at this point defined in this example). However, it is not difficult to see that f is not a quasidifferentiable F-convex function at \(u =1\) with respect to the convex compact set \({S}_{f^V} (1)=\underline{\partial }^{{V}}{f}(1)+\overline{\partial } ^{{V}}{f}(1)\) (which is also equal to the Minkowski sum of its subdifferential and superdifferential at this point, but other subdifferential and superdifferential at this point than its subdifferential and superdifferential considered in Example 4.2). Even from this example, the fact that the given function is quasidifferentiable F-convex at the given point with respect to the Minkowski sum of its subdifferential and superdifferential at this point does not mean that this function is also quasidifferentiable F-convex at this point with respect to the Minkowski sums of its other subdifferentials and superdifferentials at this point. 
Further, as also follows from Definition 4.2, the quasidifferential with respect to which a given function is F-convex at the given point (more precisely, the convex compact set equal to the Minkowski sum of its subdifferential and superdifferential at this point) must be given a priori; moreover, F-convexity with respect to one such Minkowski sum does not imply F-convexity with respect to the convex compact sets arising as Minkowski sums of other subdifferentials and superdifferentials at this point.
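The failure of F-convexity with respect to \({S}_{f^V}(1)=[-1,3]+[-2,0]=[-3,3]\) can be seen concretely (our illustration, not from the text): taking \(q = 3 \in {S}_{f^V}(1)\), which lies outside \({S}_{f}(1)=[-1,1]\), and \(x = 0\) violates the F-convexity inequality.

```python
# f and F as in Example 4.2; q = 3 lies in S_{f^V}(1) = [-3, 3]
# but not in S_f(1) = [-1, 1].
f = lambda x: abs(abs(x) - 1.0)
F = lambda x, u, q: q * (f(x) - f(u))

lhs = f(0.0) - f(1.0)    # = 1
rhs = F(0.0, 1.0, 3.0)   # = 3
print(lhs >= rhs)        # False: the F-convexity inequality fails for this q
```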

Now, we generalize Definition 4.2 to the vectorial case.

Definition 4.3

Let \({f} := ({f}_{1},{\ldots },{f}_{{k}}) : \hbox {IR}^{{n}} \rightarrow \hbox {IR}^{{k}}\) be a vector-valued function, u a given arbitrary point of \(\hbox {IR}^{{n}}\), and let every \({S}_{{f_i}} ({u}), {i}= 1,{\ldots },{k}\), be a nonempty, convex and compact subset of \(\hbox {IR}^{{n}}\). If each component \({f}_{{i}}\) of \(f, i = 1, {\ldots },k\), satisfies inequality (21) with respect to \({S}_{{f_i} } ({u})\), then \({f}_{{i}}\) is said to be an F-convex function at u on \(\hbox {IR}^{{n}}\) with respect to the convex compact set \({S}_{{f_i} } ({u})\); moreover, f is said to be an F-convex function at u on \(\hbox {IR}^{{n}}\) with respect to \({S}_{f} ({u})={S}_{{f}_1 } ({u})\times \ldots \times {S}_{{f}_{{k}} } ({u})\).

Now, for the considered quasidifferentiable (VOP), we prove the sufficient optimality conditions for (weak) Pareto optimality of a feasible solution \(\bar{{{x}}}\) under the assumption that the functions involved are quasidifferentiable F-convex with respect to convex compact sets equal to the Minkowski sums of their subdifferentials and superdifferentials at \(\bar{{{x}}}\).

Theorem 4.1

(Sufficient optimality conditions) Let \(\bar{{{x}}}\) be a feasible solution in the considered quasidifferentiable (VOP) and the Karush–Kuhn–Tucker-type necessary optimality conditions (10)–(12) be satisfied at \(\bar{{{x}}}\) with the quasidifferentials \({D}_{{f_i} } (\bar{{{x}}})=[\underline{\partial }{f}_{i} (\bar{{{x}}}),\overline{{\partial }}{f}_{i} (\bar{{{x}}})], {i} \in {I}, {D}_{{g_j} } (\bar{{{x}}})=[\underline{\partial }{g}_{j} (\bar{{{x}}}),\overline{{\partial }}{g}_{j} (\bar{{{x}}})], {j} \in {J}\). Further, assume that each \({f}_{{i}}, {i} \in {I},\) is a quasidifferentiable F-convex function at \(\bar{{{x}}}\) on \(\varOmega \) with respect to \({S}_{{f_i} } (\bar{{{x}}})=\underline{\partial }{f}_{i} (\bar{{{x}}})+{\overline{\partial }}{f}_{i} (\bar{{{x}}})\), and that each \({g}_{{j}}, {j} \in {J}(\bar{{{x}}})\), is a quasidifferentiable F-convex function at \(\bar{{{x}}}\) on \(\varOmega \) with respect to \({S}_{{g_j} } (\bar{{{x}}})=\underline{\partial }{g}_{j} (\bar{{{x}}})+{\overline{\partial }}{g}_{j} (\bar{{{x}}})\). Then, \(\bar{{{x}}}\) is a weak Pareto solution in (VOP).

Proof

Assume that \(\bar{{{x}}} \in \varOmega \) and the Karush–Kuhn–Tucker-type necessary optimality conditions (10)–(12) are satisfied at \(\bar{{{x}}}\) with the quasidifferentials \({D}_{{f_i }} (\bar{{{x}}})=[\underline{\partial }{f}_{i} (\bar{{{x}}}),{\overline{\partial }}{f}_{i} (\bar{{{x}}})], {i} \in {I}, {D}_{{g_j} } (\bar{{{x}}})=[\underline{\partial }{g}_{j} (\bar{{{x}}}),{\overline{\partial }}{g}_{j} (\bar{{{x}}})], {j} \in {J}\). This means that, for the given choice of \({w}_{i} \in {\overline{\partial }}{f}_{i} (\bar{{{x}}}), {i} \in {I}\), and \({v}_{j} \in \overline{{\partial }}{g}_{j} (\bar{{{x}}}), {j} \in {J}\), there exist \(\bar{{\lambda }}({z})\in \hbox {IR}^{{k}}\) and \(\bar{{\mu }}({z})\in \hbox {IR}^{{m}}\) such that the conditions (10)–(12) are satisfied. Suppose, contrary to the result, that \(\bar{{{x}}}\) is not a weak Pareto solution in (VOP). Then, by Definition 3.1(ii), there exists \(\tilde{{x}}\in \varOmega \) such that

$$\begin{aligned} {f}_{i} (\tilde{{x}}) < {f}_{i} (\bar{{{x}}}), \quad {i} \in {I}. \end{aligned}$$
(22)

By assumption, each \({f}_{{i}}, {i} \in {I}\), is a quasidifferentiable F-convex function at \(\bar{{{x}}}\) on \(\varOmega \) with respect to \({S}_{{f_i} } (\bar{{{x}}})=\underline{\partial }{f}_{i} (\bar{{{x}}})+\overline{{\partial }}{f}_{i} (\bar{{{x}}})\), and each \({g}_{{j}}, {j} \in {J}(\bar{{{x}}})\), is a quasidifferentiable F-convex function at \(\bar{{{x}}}\) on \(\varOmega \) with respect to \({S}_{{g_j }} (\bar{{{x}}})=\underline{\partial }{g}_{j} (\bar{{{x}}})+\overline{{\partial }}{g}_{j} (\bar{{{x}}})\). Hence, by Definition 4.2, the following inequalities

$$\begin{aligned}&{f}_{i} {(x)-f}_{i} (\bar{{{x}}})\geqq {F}({x},\bar{{{x}}};\omega _{i} ), \quad \forall \omega _{i} \in {S}_{{f_i }} (\bar{{{x}}}),\quad {i} \in {I}, \end{aligned}$$
(23)
$$\begin{aligned}&{g}_{j}({x})-{g}_{j} (\bar{{{x}}})\geqq {F(x},\bar{{{x}}};\vartheta _{j} ), \quad \forall \vartheta _{j} \in {S}_{{g_j} } (\bar{{{x}}}),\quad {j} \in {J}(\bar{{{x}}}) \end{aligned}$$
(24)

hold for all \({x} \in \varOmega \). Therefore, they are also fulfilled for \({x}=\tilde{{x}}\in \varOmega \). Hence, (23) and (24) yield, respectively,

$$\begin{aligned}&{f}_{i} (\tilde{{x}})-{f}_{i} (\bar{{{x}}})\geqq F(\tilde{{x}},\bar{{{x}}};\omega _{i} ),\quad \forall \omega _{i} \in {S}_{{f}_{i} } (\bar{{{x}}}), \quad {i} \in {I}, \end{aligned}$$
(25)
$$\begin{aligned}&{g}_{j} (\tilde{{x}})-{g}_{j} (\bar{{{x}}})\geqq {F}(\tilde{{x}},\bar{{{x}}};\vartheta _{j} ), \quad \forall \vartheta _{j} \in {S}_{{g_j} } (\bar{{{x}}}), \quad {j} \in {J}(\bar{{{x}}}). \end{aligned}$$
(26)

Combining (22) and (25), we get

$$\begin{aligned} {F}(\tilde{{x}},\bar{{{x}}};\omega _{i} )<0, \quad \forall \omega _{i} \in {S}_{{f_i} } (\bar{{{x}}}), \quad {i} \in {I}. \end{aligned}$$

Thus, by the Karush–Kuhn–Tucker-type necessary optimality condition (12), the above inequalities imply

$$\begin{aligned}&\bar{{\lambda }}_{i}({z}){F}(\tilde{{x}},\bar{{{x}}};\omega _{i} )\leqq 0, \quad \forall \omega _{i} \in {S}_{{f_i}} (\bar{{{x}}}), {i} \in {I}, \end{aligned}$$
(27)
$$\begin{aligned}&\bar{{\lambda }}_{i} ({z}){F}(\tilde{{x}},\bar{{{x}}};\omega _{i})<0, \quad \forall \omega _{i} \in {S}_{{f_i }} (\bar{{{x}}})\hbox { and for at least one }{i} \in {I}. \end{aligned}$$
(28)

By the definition of \({S}_{{f_i }} (\bar{{{x}}}), {i} \in {I},\) (27) and (28) yield

$$\begin{aligned} \sum _{{i}=1}^{k} {\bar{{\lambda }}_i ({z}){F}(\tilde{{x}},\bar{{{x}}};\omega _{i} )} <0, \quad \forall \omega _{i} \in \underline{\partial }{f}_{i} (\bar{{{x}}})+{w}_{i} . \end{aligned}$$
(29)

Since F is sublinear (with respect to the third component), (29) gives

$$\begin{aligned} {F}\left( {\tilde{{x}},\bar{{{x}}};\sum _{{i}=1}^{k} {\bar{{\lambda }}_{i} ({z})\omega _{i} } } \right) <0, \quad \forall \omega _{i} \in \underline{\partial }{f}_{i} (\bar{{{x}}})+{w}_{i} . \end{aligned}$$
(30)

Using \(\tilde{{x}}\in \varOmega \) and \(\bar{{{x}}}\in \varOmega \) together with the Karush–Kuhn–Tucker-type necessary optimality conditions (11) and (12), we obtain

$$\begin{aligned} \bar{{\mu }}_{j} ({z}){g}_{j} (\tilde{{x}})\leqq \bar{{\mu }}_{j} ({z}){g}_{j} (\bar{{{x}}})=0, \quad {j} \in {J}(\bar{{{x}}}). \end{aligned}$$
(31)

By the Karush–Kuhn–Tucker-type necessary optimality condition (12), (26) gives

$$\begin{aligned} \bar{{\mu }}_{j} ({z}){g}_{j} (\tilde{x})-\bar{{\mu }}_{j} ({z}){g}_{j} (\bar{{{x}}})\geqq \bar{{\mu }}_{j} ({z}){F}(\tilde{{x}},\bar{{{x}}};\vartheta _{j} ), \quad \forall \vartheta _{j} \in {S}_{{g_j }} (\bar{{{x}}}), \quad {j} \in {J}(\bar{{{x}}}).\nonumber \\ \end{aligned}$$
(32)

Thus, (31) and (32) yield

$$\begin{aligned} \bar{{\mu }}_{j} ({z}){F}(\tilde{{x}},\bar{{{x}}};\vartheta _{j} )\leqq 0, \quad \forall \vartheta _{j} \in {S}_{{g_j }} (\bar{{{x}}}), \quad {j} \in {J}(\bar{{{x}}}). \end{aligned}$$
(33)

By the definition of \({S}_{{g_j }} (\bar{{{x}}}), {j} \in {J}(\bar{{{x}}})\), inequalities (33) imply

$$\begin{aligned} \sum _{{j}\in {J}(\bar{{{x}}})} {\bar{{\mu }}_{j} ({z}){F}(\tilde{{x}},\bar{{{x}}};\vartheta _{j} )} \leqq 0, \quad \forall \vartheta _{j} \in \underline{\partial }{g}_{j} (\bar{{{x}}})+{v}_{j} . \end{aligned}$$
(34)

Using the sublinearity of F with respect to the third component, we get

$$\begin{aligned} {F}\left( {\tilde{{x}},\bar{{{x}}};\sum _{{j}\in {J}(\bar{{{x}}})} {\bar{{\mu }}_{j} ({z})\vartheta _{j} } } \right) \leqq 0, \quad \forall \vartheta _{j} \in \underline{\partial }{g}_{j} (\bar{{{x}}})+{v}_{j} . \end{aligned}$$
(35)

Hence, (30) and (35) yield

$$\begin{aligned} {F}\left( {\tilde{{x}},\bar{{{x}}};\sum _{{i}=1}^{k} {\bar{{\lambda }}_{i} ({z})\omega _{i} } +\sum _{{j}\in {J}(\bar{{{x}}})} {\bar{{\mu }}_{j} ({z})\vartheta _{j} } } \right) <0,&\quad \forall \omega _{i} \in \underline{\partial }{f}_{i} (\bar{{{x}}}) +{w}_{i} , {i} \in {I}, \nonumber \\&\quad \forall \vartheta _{j} \in \underline{\partial }{g}_{j} (\bar{{{x}}})+\,{v}_{j} , {j} \in {J}(\bar{{{x}}}). \end{aligned}$$
(36)

By (36), it follows that the relation

$$\begin{aligned} 0\notin \sum _{{i}=1}^k {\bar{{\lambda }}_{i} ({z})(\underline{\partial }{f}_{i} (\bar{{{x}}})} +{w}_{i} )+\sum _{{j}=1}^{m} {\bar{{\mu }}_{j} ({z})(\underline{\partial }{g}_{j} (\bar{{{x}}})+{v}_{j} )} \end{aligned}$$

holds, which contradicts the Karush–Kuhn–Tucker-type necessary optimality condition (10). This completes the proof of this theorem.

In order to prove Pareto optimality of a feasible solution satisfying the Karush–Kuhn–Tucker-type necessary optimality conditions, a stronger hypothesis is needed: quasidifferentiable strict F-convexity, with respect to convex compact sets, imposed on the objective functions.

Theorem 4.2

(Sufficient optimality conditions) Let \(\bar{{{x}}}\) be a feasible solution in the considered quasidifferentiable (VOP) at which the Karush–Kuhn–Tucker-type necessary optimality conditions (10)–(12) are satisfied with the quasidifferentials \({D}_{{f_i }} (\bar{{{x}}})=[\underline{\partial }{f}_{i} (\bar{{{x}}}),\overline{{\partial }}{f}_{i} (\bar{{{x}}})], {i} \in {I}, {D}_{{g_j} } (\bar{{{x}}})=[\underline{\partial }{g}_{j} (\bar{{{x}}}),\overline{{\partial }}{g}_{j} (\bar{{{x}}})], {j} \in {J}\). Further, assume that each \({f}_{{i}}, {i} \in {I},\) is a quasidifferentiable strictly F-convex function at \(\bar{{{x}}}\) on \(\varOmega \) with respect to \({S}_{{f_i }} (\bar{{{x}}})=\underline{\partial }{f}_{i} (\bar{{{x}}})+\overline{{\partial }}{f}_{i} (\bar{{{x}}})\), and that each \({g}_{{j}}, {j} \in {J}(\bar{{{x}}})\), is a quasidifferentiable F-convex function at \(\bar{{{x}}}\) on \(\varOmega \) with respect to \({S}_{{g_j} } (\bar{{{x}}})=\underline{\partial }{g}_{j} (\bar{{{x}}})+\overline{{\partial }}{g}_{j} (\bar{{{x}}})\). Then, \(\bar{{{x}}}\) is a Pareto solution in (VOP).

Proof

The proof of this theorem is similar to the proof of Theorem 4.1 and is therefore omitted.

Now, in order to illustrate the result established in Theorem 4.2, we present an example of a nonsmooth vector optimization problem with quasidifferentiable F-convex functions with respect to convex compact sets which are equal to the Minkowski sum of their subdifferentials and superdifferentials. Further, we also illustrate the fact that the Lagrange multipliers for such nonsmooth vector optimization problems may not be constant.

Example 4.3

Consider the following nonsmooth (VOP1):

$$\begin{aligned} (\hbox {VOP}1)\quad {\min f(x)}= & {} \left( \left| {{x}_1 -\left| {{x}_2 } \right| }\right| +{x}_1,{x}_1^2 +{x}_2^2 +\left| {{x}_1 } \right| -{x}_1 -{x}_2 \right) \\ \hbox {s.t.}\quad g_1({x})= & {} \left| {\left| {{x}_1 } \right| + {x}_2 } \right| \leqq 0, {x} \in \hbox {IR}^{2}. \end{aligned}$$

Note that \(\varOmega = \{ {x} \in \hbox {IR}^{2} : \left| {{x}_2 +\left| {{x}_1 } \right| } \right| \leqq 0 \}\) and \(\bar{{{x}}} = (0,0)\) is a feasible solution in problem (VOP1). Further, it can be proved that \({f} = ({f}_{1}, {f}_{2})\) and \(g_{1}\) are quasidifferentiable at \(\bar{{{x}}}\). Indeed, by definition, we have \({{f}}'_1 (\bar{{{x}}};{d})=\left| {{d}_1 -\left| {{d}_2 } \right| } \right| +{d}_1 , {{f}}'_2 (\bar{{{x}}};{d})=\left| {{d}_1 } \right| -{d}_1 -{d}_2 \), and therefore,

$$\begin{aligned} {{f}}'_1 ((0,0);d)=\mathop {\max }\limits _{{v}\in \underline{\partial }{f}_1 (0,0)} \left\langle {{v,d}} \right\rangle +\mathop {\min }\limits _{{w}\in \overline{\partial }{f}_1 (0,0)} \left\langle {{w,d}} \right\rangle , \end{aligned}$$

where \(\underline{\partial }{f}_1 (0,0)=\hbox {conv}\{(2,0),(0,2),(0,-2)\}, \overline{\partial }{f}_1 (0,0)= \hbox {conv}\{(0,-1),(0,1)\}\) and

$$\begin{aligned} {{f}}'_2 ((0,0);{d})=\mathop {\max }\limits _{{v}\in \underline{\partial }{f}_2 (0,0)} \left\langle {{v,d}} \right\rangle +\mathop {\min }\limits _{{w}\in \overline{\partial }{f}_2 (0,0)} \left\langle {{w,d}} \right\rangle , \end{aligned}$$

where \(\underline{\partial }{f}_2 (0,0)=\hbox {conv}\{(-2,-2),(0,-2)\}, \overline{\partial }{f}_2 (0,0)=\{(0,1)\}\).

Hence, by Definition 2.1, f is a quasidifferentiable function at \(\bar{{{x}}} = (0,0)\). Further, by definition, we have \({{g}}'_1 (\bar{{{x}}};{d})=\left| {\left| {{d}_1 } \right| +{d}_2 } \right| \), and therefore,

$$\begin{aligned} {{g}}'_1 (\bar{{{x}}};{d})=\mathop {\max }\limits _{{v}\in \underline{\partial }{g}_1 (0,0)} \left\langle {{v,d}} \right\rangle +\mathop {\min }\limits _{{w}\in \overline{\partial }{g}_1 (0,0)} \left\langle {{w,d}} \right\rangle , \end{aligned}$$

where \(\underline{\partial }{g}_1 (0,0)=\hbox {conv}\{(0,0),(-2,2),(2,2)\}\) and \(\overline{\partial }{g}_1 (0,0)=\hbox {conv}\{(-1,-1),(1,-1)\}\).
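These quasidifferential decompositions can be verified numerically (our sanity check, not part of the example): the maximum (respectively minimum) of a linear functional over a convex hull is attained at a vertex, so each directional derivative can be compared against the max-plus-min representation on sampled directions.

```python
import math
import random

def smax(verts, d):  # max of <v, d> over conv(verts), attained at a vertex
    return max(v[0] * d[0] + v[1] * d[1] for v in verts)

def smin(verts, d):  # min of <w, d> over conv(verts), attained at a vertex
    return min(w[0] * d[0] + w[1] * d[1] for w in verts)

# Sub- and superdifferentials at xbar = (0, 0) from Example 4.3.
sub_f1, sup_f1 = [(2, 0), (0, 2), (0, -2)], [(0, -1), (0, 1)]
sub_f2, sup_f2 = [(-2, -2), (0, -2)], [(0, 1)]
sub_g1, sup_g1 = [(0, 0), (-2, 2), (2, 2)], [(-1, -1), (1, -1)]

random.seed(0)
ok = True
for _ in range(1000):
    d = (random.uniform(-5, 5), random.uniform(-5, 5))
    df1 = abs(d[0] - abs(d[1])) + d[0]   # f1'(xbar; d)
    df2 = abs(d[0]) - d[0] - d[1]        # f2'(xbar; d)
    dg1 = abs(abs(d[0]) + d[1])          # g1'(xbar; d)
    ok &= math.isclose(df1, smax(sub_f1, d) + smin(sup_f1, d), abs_tol=1e-9)
    ok &= math.isclose(df2, smax(sub_f2, d) + smin(sup_f2, d), abs_tol=1e-9)
    ok &= math.isclose(dg1, smax(sub_g1, d) + smin(sup_g1, d), abs_tol=1e-9)
print(ok)  # True
```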

Now, we prove that the Karush–Kuhn–Tucker necessary optimality conditions are fulfilled at \(\bar{{{x}}}\) with Lagrange multipliers that are not constant.

Indeed, it can be shown that, for any choice of \({w}_1 \in {\overline{\partial }}{f}_1 (\bar{{{x}}}), {w}_2 \in {\overline{\partial }}{f}_2 (\bar{{{x}}})\), and \({v}_1 \in {\overline{\partial }}{g}_1 (\bar{{{x}}})\), there exist \(\bar{{\lambda }}({z})\ge 0\) and \(\bar{{\mu }}({z})\geqq 0\) such that the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) are satisfied. Namely, we consider the following examples of the choice of \({z} = ({w}_{1}, {w}_{2}, {v}_{1})\):

(a) if \({w}_{1} = (0,1), {w}_{2} = (0,1)\) and \({v}_{1} = (-1,-1)\), then we put \(\bar{{\lambda }}_1 ({z})=1, \bar{{\lambda }}_2 ({z})=3, \bar{{\mu }}_1 ({z})=1\);

(b) if \({w}_{1} = (0,-1), {w}_{2} = (0,1)\) and \({v}_{1} = (1,-1)\), then we put \(\bar{{\lambda }}_1 ({z})=1, \bar{{\lambda }}_2 ({z})=1, \bar{{\mu }}_1 ({z})=1\).

Note that the Karush–Kuhn–Tucker necessary optimality condition (10) is satisfied in both cases (a) and (b) above. The conditions (11) and (12) hold trivially.

Now, we illustrate cases (a) and (b) considered above. Moreover, we show that the Karush–Kuhn–Tucker necessary optimality condition (10) is not satisfied in case (b) with the same Lagrange multipliers \(\bar{{\lambda }}({z})\) and \(\bar{{\mu }}({z})\) as in case (a). We denote by \({Z}_{{z}}\) the set appearing on the right-hand side of the Karush–Kuhn–Tucker necessary optimality condition (10). Hence, in the considered (VOP1), we have \({Z}_{z} =\sum _{{i}=1}^2 {\bar{{\lambda }}_{i} ({z})(\underline{\partial }{f}_{i} (\bar{{{x}}})} +{w}_{i} )+\bar{{\mu }}_1 ({z})(\underline{\partial }{g}_1 (\bar{{{x}}})+{v}_1 )\) for the chosen \({z} = ({w}_{1}, {w}_{2}, {v}_{1})\), and therefore \({Z}_{z}\) depends on the Lagrange multipliers \(\bar{{\lambda }}({z})\) and \(\bar{{\mu }}({z})\). Since we put \(\bar{{\lambda }}_1 ({z}')=1, \bar{{\lambda }}_2 ({z}')=3, \bar{{\mu }}_1 ({z}')=1\) in case (a), where \({z}^{\prime } = ({w}_{1}, {w}_{2}, {v}_{1}) = ((0, 1), (0,1), (-1,-1))\), the set \({Z}_{{z}^{\prime }}\) is illustrated in Fig. 1.

Fig. 1 The set \({Z}_{{z}^{\prime }}\) in case (a), when \({z}^{\prime }\) and the Lagrange multipliers \(\bar{{\lambda }}_1({{z}}')=1, \bar{{\lambda }}_2 ({z}')=3, \bar{{\mu }}_1 ({z}')=1\) are chosen

Fig. 2 (i) The set \({Z}_{{z}''}\) when \(z''\) and the Lagrange multipliers \(\bar{{\lambda }}_1 ({z}'')=1, \bar{{\lambda }}_2 ({z}'')=3, \bar{{\mu }}_1 ({z}'')=1\) are chosen. (ii) The set \({Z}_{{z}''}\) when \(z''\) and the Lagrange multipliers \(\bar{{\lambda }}_1 ({{z}}'')=1, \bar{{\lambda }}_2 ({{z}}'')=1, \bar{{\mu }}_1 ({{z}}'')=1\) are chosen

Note that, in this case, \(0 \in {Z}_{{z}^{\prime }}\); in other words, the Karush–Kuhn–Tucker condition (10) is satisfied.

Now, Fig. 2(i) illustrates the set \({Z}_{{z}''}\) for \({z}'' = ({w}_{1}, {w}_{2}, {v}_{1}) = ((0,-1), (0,1), (-1,-1))\) with the same Lagrange multipliers \(\bar{{\lambda }}_1 ({z}'')=1, \bar{{\lambda }}_2 ({z}'')=3, \bar{{\mu }}_1 ({z}'')=1\) as in case (a). Note that \(0 \notin {Z}_{{z}''}\); in other words, the Karush–Kuhn–Tucker condition (10) is not satisfied. Further, the set \({Z}_{{z}''}\) with the Lagrange multipliers from case (b), \(\bar{{\lambda }}_1 ({z}'')=1, \bar{{\lambda }}_2 ({z}'')=1, \bar{{\mu }}_1 ({z}'')=1\), is illustrated in Fig. 2(ii). Note that \(0 \in {Z}_{{z}''}\); in other words, the Karush–Kuhn–Tucker condition (10) is satisfied.
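The membership tests \(0 \in {Z}_{z}\) can be reproduced numerically (our illustration, not from the text). Since \({Z}_{z}\) is a Minkowski sum of scaled and shifted polytopes, \(0 \in {Z}_{z}\) if and only if its support function is nonnegative in every direction; the sketch below samples directions on the unit circle, so it is a sanity check rather than a proof.

```python
import math

# Vertex sets of the subdifferentials at xbar = (0, 0) from Example 4.3.
P1 = [(2, 0), (0, 2), (0, -2)]   # subdifferential of f1
P2 = [(-2, -2), (0, -2)]         # subdifferential of f2
Q1 = [(0, 0), (-2, 2), (2, 2)]   # subdifferential of g1

def shift_scale(verts, w, lam):
    """Vertices of lam * (conv(verts) + w)."""
    return [(lam * (x + w[0]), lam * (y + w[1])) for (x, y) in verts]

def support(verts, u):
    """Support function of conv(verts) in direction u (attained at a vertex)."""
    return max(x * u[0] + y * u[1] for (x, y) in verts)

def contains_zero(polytopes, n_dirs=720):
    """0 lies in the Minkowski sum of the polytopes iff the summed support
    function is nonnegative in every direction (checked on sampled dirs)."""
    h_min = min(
        sum(support(P, (math.cos(t), math.sin(t))) for P in polytopes)
        for t in (2 * math.pi * k / n_dirs for k in range(n_dirs))
    )
    return h_min >= -1e-9

def Z(w1, w2, v1, lam1, lam2, mu1):
    """The set Z_z for z = (w1, w2, v1) and multipliers (lam1, lam2, mu1)."""
    return [shift_scale(P1, w1, lam1),
            shift_scale(P2, w2, lam2),
            shift_scale(Q1, v1, mu1)]

# Case (a): z' = ((0,1), (0,1), (-1,-1)) with multipliers (1, 3, 1).
print(contains_zero(Z((0, 1), (0, 1), (-1, -1), 1, 3, 1)))    # True
# z'' = ((0,-1), (0,1), (-1,-1)) with the case-(a) multipliers (1, 3, 1).
print(contains_zero(Z((0, -1), (0, 1), (-1, -1), 1, 3, 1)))   # False
# z'' with the case-(b) multipliers (1, 1, 1).
print(contains_zero(Z((0, -1), (0, 1), (-1, -1), 1, 1, 1)))   # True
# Case (b) itself: z = ((0,-1), (0,1), (1,-1)) with multipliers (1, 1, 1).
print(contains_zero(Z((0, -1), (0, 1), (1, -1), 1, 1, 1)))    # True
```

The infeasible case is detected in the direction \((0,1)\), where the summed support function equals \(1 - 3 + 1 = -1 < 0\), matching the claim that condition (10) fails for \(z''\) with the case-(a) multipliers.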

Further, in order to show that the sufficient optimality conditions formulated in Theorem 4.2 are applicable to problem (VOP1), we have to prove that the objective functions \({f}_{{i}}, {i} = 1, 2\), are quasidifferentiable strictly F-convex functions at \(\bar{{{x}}}\) on \(\varOmega \) with respect to \({S}_{{f_i} } (\bar{{{x}}})=\underline{\partial }{f}_{i} (\bar{{{x}}})+\overline{{\partial }}{f}_{i} (\bar{{{x}}})\) and that the constraint function \({g}_{1}\) is quasidifferentiable F-convex at \(\bar{{{x}}}\) on \(\varOmega \) with respect to \({S}_{{g}_1 } (\bar{{{x}}})=\underline{\partial }{g}_1 (\bar{{{x}}})+\overline{{\partial }}{g}_1 (\bar{{{x}}})\). To this end, we define F as follows: \({F}({x},\bar{{{x}}};{q})=\left( {{q}_1 +{q}_2 } \right) \left[ {\left( {\left| {{x}_1 } \right| +{x}_2 } \right) -\left( {\left| {\bar{{{x}}}_1 } \right| +\bar{{{x}}}_2 } \right) } \right] \). Then, it can be proved, by Definition 4.2, that \({f}_{{i}}, {i} = 1, 2\), are quasidifferentiable strictly F-convex functions at \(\bar{{{x}}}\) on \(\varOmega \) with respect to \({S}_{{f_i} } (\bar{{{x}}})\) defined above and that \(g_{1}\) is quasidifferentiable F-convex at \(\bar{{{x}}}\) on \(\varOmega \) with respect to \({S}_{{g}_1 } (\bar{{{x}}})\). Since all hypotheses of Theorem 4.2 are fulfilled at \(\bar{{{x}}}\), it follows that \(\bar{{{x}}}\) is a Pareto solution in the considered nonsmooth vector optimization problem with quasidifferentiable F-convex functions with respect to convex compact sets equal to the Minkowski sums of their subdifferentials and superdifferentials at \(\bar{{{x}}}\).
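Note that every feasible point of (VOP1) satisfies \(x_2 = -|x_1|\), so the factor \(|x_1|+x_2\) in F vanishes on \(\varOmega \) and hence \(F(x,\bar{{{x}}};q)=0\) for every q. The (strict) F-convexity claims above therefore reduce to sign conditions on \(f_i(x)-f_i(\bar{{{x}}})\) and \(g_1(x)-g_1(\bar{{{x}}})\) over \(\varOmega \); the following numerical sketch of this reduction is our illustration, not part of the original text.

```python
# Data of problem (VOP1) at xbar = (0, 0).
f1 = lambda x: abs(x[0] - abs(x[1])) + x[0]
f2 = lambda x: x[0] ** 2 + x[1] ** 2 + abs(x[0]) - x[0] - x[1]
g1 = lambda x: abs(abs(x[0]) + x[1])
F = lambda x, u, q: (q[0] + q[1]) * ((abs(x[0]) + x[1]) - (abs(u[0]) + u[1]))

xbar = (0.0, 0.0)
ok = True
for k in range(-50, 51):
    x1 = k / 10.0
    x = (x1, -abs(x1))           # every feasible point satisfies x2 = -|x1|
    # On Omega the factor |x1| + x2 vanishes, so F(x, xbar; q) = 0 for any q
    # (the argument (7, -3) below is an arbitrary sample).
    assert F(x, xbar, (7.0, -3.0)) == 0.0
    if x != xbar:
        # strict F-convexity of f1, f2 reduces to f_i(x) > f_i(xbar)
        ok &= f1(x) - f1(xbar) > 0 and f2(x) - f2(xbar) > 0
    ok &= g1(x) - g1(xbar) >= 0  # F-convexity of g1 reduces to this
print(ok)  # True
```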

5 Conclusions

In this paper, the class of nonsmooth multiobjective optimization problems with quasidifferentiable functions has been considered. Both the Fritz John-type and the Karush–Kuhn–Tucker-type necessary optimality conditions have been established for such nonsmooth vector optimization problems. Further, the definition of an F-convex function with respect to a convex compact set has been introduced. Then, the sufficient optimality conditions for (weak) Pareto optimality of a feasible solution of the considered nonsmooth multiobjective optimization problem have been established under the assumption that the functions involved are quasidifferentiable F-convex with respect to convex compact sets equal to the Minkowski sums of their subdifferentials and superdifferentials at this solution. The results established in the paper show that quasidifferential calculus can be extended to the vectorial case. Finally, we have illustrated these results by an example of a nonsmooth multiobjective optimization problem with quasidifferentiable F-convex functions with respect to convex compact sets equal to the Minkowski sums of their subdifferentials and superdifferentials. This example also shows that, for such nonsmooth vector optimization problems, the Lagrange multipliers may not be constant.

However, some interesting topics for further research remain. It would be of interest to investigate whether the sufficient optimality conditions can be proved for a larger class of quasidifferentiable nonconvex vector optimization problems than those with F-convex functions with respect to convex compact sets. It would also be interesting to establish similar optimality results for other classes of quasidifferentiable vector optimization problems. We shall investigate these questions in subsequent papers.