1 Introduction

Multiobjective optimization problems, also known as vector optimization problems or multicriteria optimization problems, are extremum problems involving more than one objective function to be optimized. Many real-life problems can be formulated as multiobjective programming problems, including problems in human decision making, economics, financial investment, portfolio selection, resource allocation, information transfer, engineering design, mechanics, control theory, etc. During the past five decades, the field of multiobjective programming has grown remarkably in different directions in the setting of optimality conditions and duality theory. One of the classes of nondifferentiable multicriteria optimization problems studied in the recent past is the class of directionally differentiable vector optimization problems, for which many authors have established the aforesaid fundamental results in optimization theory (see, for example, (Ahmad, 2011; Antczak, 2002, 2009; Arana-Jiménez et al., 2013; Dinh et al., 2005; Ishizuka, 1992; Kharbanda et al., 2015; Mishra & Noor, 2006; Mishra et al., 2008, 2015; Slimani & Radjef, 2010; Ye, 1991) and others).

Recently, a special class of optimization problems, known as mathematical programming problems with vanishing constraints, was introduced by Achtziger and Kanzow (2008); it serves as a unified framework for several applications in structural and topology optimization. Since optimization problems with vanishing constraints, in their general form, are quite a new class of mathematical programming problems, only very few works have been published on this subject so far (see, for example, (Achtziger et al. 2013; Antczak 2022; Dorsch et al. 2012; Dussault et al. 2019; Guu et al. 2017; Hoheisel and Kanzow 2008, 2009; Hoheisel et al. 2012; Hu et al. 2014, 2020; Izmailov and Solodov 2009; Khare and Nath 2019; Mishra et al. 2015, 2016; Thung 2022)). However, to the best of our knowledge, there are no works on optimality conditions for (convex) directionally differentiable multiobjective programming problems with vanishing constraints in the literature.

The main purpose of this paper is, therefore, to develop optimality conditions for a new class of nondifferentiable multiobjective programming problems with vanishing constraints. Namely, this paper presents a study of both necessary and sufficient optimality conditions for convex directionally differentiable vector optimization problems with inequality, equality and vanishing constraints. Considering the concept of a (weak) Pareto solution, we establish Karush–Kuhn–Tucker type necessary optimality conditions which are formulated in terms of directional derivatives. In proving the aforesaid necessary optimality conditions, we use a nonlinear version of the Gordan alternative theorem for convex functions and also the Abadie constraint qualification. Further, we illustrate by an example that the necessary optimality conditions may not hold under the aforesaid constraint qualification. Therefore, we introduce the VC-Abadie constraint qualification and, under this constraint qualification, which is weaker than the classical one, we present the Karush–Kuhn–Tucker type necessary optimality conditions for the considered directionally differentiable multiobjective programming problem. Further, we prove the sufficiency of the aforesaid necessary optimality conditions for such nondifferentiable vector optimization problems under appropriate convexity hypotheses. The optimality results established in the paper are illustrated by an example of a convex directionally differentiable multiobjective programming problem with vanishing constraints. Furthermore, for the considered directionally differentiable vector optimization problem with vanishing constraints, we define its vector Wolfe dual problem and we prove several duality theorems, also under convexity hypotheses.

2 Preliminaries

In this section, we provide some definitions and results that we shall use in the sequel. The following convention for equalities and inequalities will be used throughout the paper.

For any \(x=\left( x_{1},x_{2},...,x_{n}\right) ^{T}\), \(y=\left( y_{1},y_{2},...,y_{n}\right) ^{T}\) in \(R^{n}\), we define:

  1. (i)

    \(x=y\) if and only if \(x_{i}=y_{i}\) for all \(i=1,2,...,n\);

  2. (ii)

    \(x<y\) if and only if \(x_{i}<y_{i}\) for all \(i=1,2,...,n\);

  3. (iii)

    \(x\leqq y\) if and only if \(x_{i}\leqq y_{i}\) for all \(i=1,2,...,n\);

  4. (iv)

    \(x\le y\) if and only if \(x\leqq y\) and \(x\ne y\).

Throughout the paper, we will use the same notation for row and column vectors when the interpretation is obvious.

Definition 2.1

The affine hull of a set \(C\subseteq R^{n}\) is the set of all affine combinations of points of C, that is,

$$\begin{aligned} aff\,C=\left\{ \sum _{i=1}^{k}\alpha _{i}x_{i}:k\in N\text {, }x_{i}\in C\text {, }\alpha _{i}\in R\text {,} \sum _{i=1}^{k}\alpha _{i}=1\right\} \text {.} \end{aligned}$$

Definition 2.2

(Hiriart-Urruty & Lemaréchal, 1993) The relative interior of the set C (denoted by \(relint\,C\)) is defined as

$$\begin{aligned} relint\,C=\left\{ x\in C:B\left( x,r\right) \cap aff\,C\subseteq C\text { for some }r>0\right\} , \end{aligned}$$

where \(B\left( x,r\right) :=\left\{ y\in R^{n}:\left\| x-y\right\| \leqq r\right\} \) is the ball of radius r around x with respect to some norm on \(R^{n}\).

Remark 2.3

(Rockafellar, 1970) The definition of the relative interior of a nonempty convex set C can be reduced to the following:

$$\begin{aligned} relint\,C=\left\{ x\in C:\forall y\in C \exists \lambda >1\text { s.t. }\lambda x+(1-\lambda )y\in C\right\} \text {.} \end{aligned}$$

Definition 2.4

It is said that \(\varphi :C\rightarrow R\), where \(C\subset R^{n}\) is a nonempty convex set, is convex on C if the inequality

$$\begin{aligned} \varphi \left( u+\lambda (x-u)\right) \leqq \lambda \varphi (x)+\left( 1-\lambda \right) \varphi (u) \end{aligned}$$
(1)

holds for all \(x,u\in C\) and any \(\lambda \in \left[ 0,1\right] \).

It is said that \(\varphi \) is strictly convex on C if the inequality

$$\begin{aligned} \varphi \left( u+\lambda (x-u)\right) <\lambda \varphi (x)+\left( 1-\lambda \right) \varphi (u) \end{aligned}$$

holds for all \(x,u\in C\), \(x\ne u\), and any \(\lambda \in \left( 0,1\right) \).

Definition 2.5

We say that a mapping \(\varphi :X\rightarrow R\) defined on a nonempty set \(X\subseteq R^{n}\) is directionally differentiable at \(u\in X\) in the direction \(v\in R^{n}\) if the limit

$$\begin{aligned} \varphi ^{+}(u;v)=\underset{\alpha \rightarrow 0^{+}}{\lim }\frac{\varphi \left( u+\alpha v\right) -\varphi \left( u\right) }{\alpha } \end{aligned}$$
(2)

exists and is finite. We say that \(\varphi \) is directionally differentiable (or Dini differentiable) at u if its directional derivative \(\varphi ^{+}(u;v)\) exists and is finite for all \(v\in R^{n}\).
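For intuition, the limit in (2) can be approximated numerically by a one-sided difference quotient. The following sketch is only illustrative: the function \(\varphi (x)=|x_{1}|+x_{2}^{2}\) and the step size are our own choices, not taken from the paper.

```python
# A minimal numerical sketch of Definition 2.5 (illustrative function choice):
# approximate phi+(u; v) from (2) by a one-sided difference quotient.
def phi(x):
    # phi(x1, x2) = |x1| + x2**2 is convex but nondifferentiable where x1 = 0
    return abs(x[0]) + x[1] ** 2

def dini_derivative(f, u, v, alpha=1e-7):
    # One-sided difference quotient approximating the limit in (2)
    return (f([u[0] + alpha * v[0], u[1] + alpha * v[1]]) - f(u)) / alpha

u = [0.0, 1.0]
# At u, the closed form is phi+(u; v) = |v1| + 2*v2.
for v, expected in [([1.0, 0.0], 1.0), ([-1.0, 0.0], 1.0), ([0.0, 1.0], 2.0)]:
    assert abs(dini_derivative(phi, u, v) - expected) < 1e-4
```

Note that the two opposite directions \((\pm 1,0)\) both give the value 1, which shows that \(\varphi ^{+}(u;\cdot )\) need not be linear in the direction even when it exists everywhere.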

Proposition 2.6

(Jahn, 2004) Let a mapping \(\varphi :R^{n}\rightarrow R\) be convex. Then, at every \( u\in R^{n}\) and in every direction \(v\in R^{n}\), the directional derivative \( \varphi ^{+}(u;v)\) exists. Moreover, since the convex function \(\varphi \) has a directional derivative in the direction \(x-u\) for any \(x\in R^{n}\), the following inequality

$$\begin{aligned} \varphi \left( x\right) -\varphi \left( u\right) \geqq \varphi ^{+}(u;x-u) \end{aligned}$$
(3)

holds.

Lemma 2.7

(Jahn, 2004) Let \(X\subseteq R^{n}\) be open, \(u\in X\) be given, \(f,g:X\rightarrow R\) and \(v\in R^{n}\). Further, assume that the directional derivatives of f and g at u in the direction v exist, i.e. \(f^{+}(u;v)\) and \(g^{+}(u;v)\) both exist. Then the directional derivative of \(f\cdot g\) exists and \(\left( f\cdot g\right) ^{+}(u;v)=\) \(f(u)g^{+}(u;v)+\) \(f^{+}(u;v)g(u)\).
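The product rule of Lemma 2.7 can likewise be checked numerically. In the sketch below, \(f(x)=|x|\) and \(g(x)=x+2\) are hypothetical choices of ours; f is nondifferentiable at \(u=0\), yet the identity holds in both directions.

```python
# Numerical check of Lemma 2.7 (functions are illustrative choices):
# (f.g)+(u;v) = f(u)*g+(u;v) + f+(u;v)*g(u), even when f is nonsmooth at u.
def dini(f, u, v, alpha=1e-8):
    # One-sided difference quotient for the Dini directional derivative
    return (f(u + alpha * v) - f(u)) / alpha

f = lambda x: abs(x)      # nondifferentiable at u = 0, but f+(0; v) = |v|
g = lambda x: x + 2.0     # smooth, with g+(0; v) = v
fg = lambda x: f(x) * g(x)

u = 0.0
for v in (1.0, -1.0):
    lhs = dini(fg, u, v)
    rhs = f(u) * dini(g, u, v) + dini(f, u, v) * g(u)
    assert abs(lhs - rhs) < 1e-5
```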

Giorgi (2002) proved the following theorem of the alternative for convex functions, which may be considered as a nonlinear version of the Gordan theorem presented by Mangasarian (1969) in the linear case.

Theorem 2.8

(Giorgi, 2002) Let \(C\subset R^{n}\) be a nonempty convex set, \(F:C\rightarrow R^{k}\), \( \Psi :C\rightarrow R^{m}\) be convex functions and \(\Phi :R^{n}\rightarrow R^{q}\) be a linear function. Let us assume that there exists \(x_{0}\in relint\,C\) such that \(\Psi _{j}\left( x_{0}\right) <0\), \(j=1,...,m,\) and \( \Phi _{s}\left( x_{0}\right) \leqq 0\), \(s=1,...,q\). Then, the system

$$\begin{aligned} F\left( x\right) <0\text {,}\quad \Psi \left( x\right) \leqq 0\text {,}\quad \Phi \left( x\right) =0\text {,}\quad x\in C \end{aligned}$$
(4)

admits no solutions if and only if there exists a vector \(\left( \lambda ,\theta ,\beta \right) ^{T}\in R_{+}^{k}\times R_{+}^{m}\times R^{q}\), \( \lambda \ne 0\), such that

$$\begin{aligned} \lambda ^{T}F\left( x\right) +\theta ^{T}\Psi \left( x\right) +\beta ^{T}\Phi \left( x\right) \geqq 0, \ \forall x\in C\text {.} \end{aligned}$$
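In the linear special case \(F(x)=Ax\) (with \(\Psi \) and \(\Phi \) absent), Theorem 2.8 reduces to the classical Gordan theorem: either \(Ax<0\) has a solution, or there exists \(\lambda \geqq 0\), \(\lambda \ne 0\), with \(A^{T}\lambda =0\). The matrix below is a hand-picked toy instance used only to verify such a certificate.

```python
# Linear special case of Theorem 2.8 (classical Gordan alternative);
# the matrix A is a hand-picked illustrative instance.
A = [[1.0, 0.0],
     [-1.0, 1.0],
     [0.0, -1.0]]
# The rows of A sum to the zero vector, so the components of Ax always sum
# to 0 and the strict system Ax < 0 admits no solution.
lam = [1.0, 1.0, 1.0]  # candidate multiplier vector of the alternative

def matvec(M, x):
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Certificate check: lam >= 0, lam != 0 and A^T lam = 0.
assert all(l >= 0.0 for l in lam) and any(l != 0.0 for l in lam)
AT = list(map(list, zip(*A)))
assert all(abs(c) < 1e-12 for c in matvec(AT, lam))
# Hence lam^T (A x) = 0 >= 0 for every x, matching the theorem's conclusion
# with the Psi- and Phi-terms absent.
x = [0.3, -0.7]
assert abs(dot(lam, matvec(A, x))) < 1e-12
```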

Definition 2.9

The cone of sequential linear directions (also known as the sequential radial cone) to a set \(Q\subset R^{n}\) at \( {\overline{x}}\in Q\) is the set denoted by \(Z\left( Q;{\overline{x}}\right) \) and defined by

$$\begin{aligned} Z\left( Q;{\overline{x}}\right) :=\left\{ v\in R^{n}:\exists \left( \alpha _{k}\right) \subset R_{+}\text {, }\alpha _{k}\downarrow 0\text {, such that } {\overline{x}}+\alpha _{k}v\in Q\text {, }\forall k\in N\right\} . \end{aligned}$$

Definition 2.10

The tangent cone to a set \(Q\subset R^{n}\) at \( {\overline{x}}\in cl\,Q\) is the set denoted by \({\mathcal {T}}\left( Q;\overline{x }\right) \) and defined by

$$\begin{aligned}{} & {} {\mathcal {T}}\left( Q;{\overline{x}}\right) :=\left\{ v\in R^{n}:\exists \left( x_{k}\right) \subseteq Q\text {, }\left( \alpha _{k}\right) \subset R_{+}\text { such that }\alpha _{k}\downarrow 0\wedge x_{k}\rightarrow {\overline{x}}\wedge \frac{x_{k}-{\overline{x}}}{\alpha _{k}}\rightarrow v\right\} \\{} & {} \quad =\left\{ v\in R^{n}:\exists v_{k}\rightarrow v\text {, }\alpha _{k}\downarrow 0\text { such that }{\overline{x}}+\alpha _{k}v_{k}\in Q\text {, }\forall k\in N\right\} , \end{aligned}$$

where \(cl\,Q\) denotes the closure of Q.

Note that both of the aforesaid cones are nonempty; moreover, \({\mathcal {T}}\left( Q;{\overline{x}} \right) \) is closed (although it may not be convex) and \(Z\left( Q;{\overline{x}}\right) \subset {\mathcal {T}}\left( Q;{\overline{x}}\right) \).

3 Multiobjective programming with vanishing constraints

In the paper, we consider the following constrained multiobjective programming problem (MPVC) with vanishing constraints:

$$\begin{aligned} \begin{array}{ll} \text {minimize} &{} f(x)=\left( f_{1}(x),...,f_{p}(x)\right) \\ \text {subject to} &{} g_{j}(x)\leqq 0\text {, }j\in J\text {,} \\ &{} h_{s}(x)=0\text {, }s\in S\text {,} \\ &{} H_{t}(x)\geqq 0\text {, }t\in T\text {,} \\ &{} H_{t}(x)G_{t}(x)\leqq 0\text {, }t\in T\text {,} \\ &{} x\in C\text {,} \end{array} \end{aligned}$$

where \(f_{i}:R^{n}\rightarrow R\), \(i\in I=\left\{ 1,...,p\right\} \), \( g_{j}:R^{n}\rightarrow R\), \(j\in J=\left\{ 1,...,m\right\} \), \( h_{s}:R^{n}\rightarrow R\), \(s\in S=\left\{ 1,...,q\right\} \), \(H_{t}:R^{n}\rightarrow R\), \( G_{t}:R^{n}\rightarrow R\), \(t\in T=\left\{ 1,...,r\right\} \), are real-valued functions and \(C\subseteq R^{n}\) is a nonempty open convex set.

For the purpose of simplifying our presentation, we next introduce some notation which will be used frequently throughout this paper. Let \(\Omega =\{x\in C:g_{j}(x)\leqq 0\), \(j\in J,\) \(h_{s}\left( x\right) =0,\) \(s\in S,\) \(H_{t}\left( x\right) \geqq 0,\) \( H_{t}\left( x\right) G_{t}\left( x\right) \leqq 0,\) \(t\in T\}\) be the set of all feasible solutions for (MPVC). Further, we denote by \(J({\overline{x}} ):=\left\{ j\in J:g_{j}({\overline{x}})=0\right\} \) the set of inequality constraint indices that are active at \({\overline{x}}\in \Omega \) and by \( J^{<}({\overline{x}})=\{j\in J:g_{j}({\overline{x}})<0\}\) the set of inequality constraint indices that are inactive at \({\overline{x}}\in \Omega \). Then, \(J({\overline{x}})\cup J^{<}({\overline{x}})=J\).

Before studying optimality in multiobjective programming, one has to clearly define the well-known concepts of optimality and solutions for a multiobjective programming problem. The concept of (weak) Pareto optimality in multiobjective programming associates the notion of a solution with a property that seems intuitively natural.

Definition 3.1

A feasible point \({\overline{x}}\) is said to be a Pareto solution (an efficient solution) in (MPVC) if and only if there exists no other \(x\in \Omega \) such that

$$\begin{aligned} f(x)\le f({\overline{x}})\text {.} \end{aligned}$$

Definition 3.2

A feasible point \({\overline{x}}\) is said to be a weak Pareto solution (a weakly efficient solution, a weak minimum) in (MPVC) if and only if there exists no other \(x\in \Omega \) such that

$$\begin{aligned} f(x)<f({\overline{x}})\text {.} \end{aligned}$$
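On a finite set of objective vectors, Definitions 3.1 and 3.2 can be checked by direct enumeration. The sketch below is only illustrative (the sample values are hypothetical, and real problems require solving (MPVC) rather than enumerating points): it filters the Pareto and weak Pareto points and confirms that every Pareto point is also weakly Pareto.

```python
# Brute-force check of Definitions 3.1 and 3.2 on a finite set of
# objective vectors (hypothetical sample values).
def dominates(a, b):
    # a <= b in the sense of Sect. 2(iv): componentwise <= and a != b
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def strictly_dominates(a, b):
    # a < b in the sense of Sect. 2(ii): componentwise strict inequality
    return all(ai < bi for ai, bi in zip(a, b))

values = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.0, 3.0), (1.0, 1.0)]
pareto = [v for v in values if not any(dominates(u, v) for u in values)]
weak = [v for v in values if not any(strictly_dominates(u, v) for u in values)]

assert pareto == [(1.0, 1.0)]    # (1, 1) dominates every other sample point
assert set(pareto) <= set(weak)  # every Pareto point is weakly Pareto
```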

As follows from the definition of (weak) Pareto optimality, \({\overline{x}}\) is nonimprovable with respect to the vector cost function f. This nonimprovability property provides a complete solution if \({\overline{x}}\) is unique. However, usually this is not the case, and then one has to find the entire set of all Pareto optimal solutions of a multiobjective programming problem.

Now, for any feasible solution \({\overline{x}}\), let us denote the following index sets

$$\begin{aligned}{} & {} T_{+}\left( {\overline{x}}\right) =\left\{ t\in T:H_{t}\left( {\overline{x}} \right) >0\right\} \text {,}\\{} & {} T_{0}\left( {\overline{x}}\right) =\left\{ t\in T:H_{t}\left( {\overline{x}} \right) =0\right\} \text {.} \end{aligned}$$

Further, let us divide the index set \(T_{+}\left( {\overline{x}}\right) \) into the following index subsets:

$$\begin{aligned}{} & {} T_{+0}\left( {\overline{x}}\right) =\left\{ t\in T:H_{t}\left( {\overline{x}} \right)>0\text {, }G_{t}\left( {\overline{x}}\right) =0\right\} \text {,}\\{} & {} T_{+-}\left( {\overline{x}}\right) =\left\{ t\in T:H_{t}\left( {\overline{x}} \right) >0\text {, }G_{t}\left( {\overline{x}}\right) <0\right\} \text {.} \end{aligned}$$

Similarly, the index set \(T_{0}\left( {\overline{x}}\right) \) can be partitioned into the following three index subsets:

$$\begin{aligned}{} & {} T_{0+}\left( {\overline{x}}\right) =\left\{ t\in T:H_{t}\left( {\overline{x}} \right) =0\text {, }G_{t}\left( {\overline{x}}\right) >0\right\} \text {,}\\{} & {} T_{00}\left( {\overline{x}}\right) =\left\{ t\in T:H_{t}\left( {\overline{x}} \right) =0\text {, }G_{t}\left( {\overline{x}}\right) =0\right\} \text {,}\\{} & {} T_{0-}\left( {\overline{x}}\right) =\left\{ t\in T:H_{t}\left( {\overline{x}} \right) =0\text {, }G_{t}\left( {\overline{x}}\right) <0\right\} \text {.} \end{aligned}$$

Moreover, we denote by \(T_{HG}\left( {\overline{x}}\right) \) the set of indexes \(t\in T\) defined by \(T_{HG}\left( {\overline{x}}\right) =T_{00}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{+0}\left( {\overline{x}}\right) \).
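The index sets above are straightforward to compute once \(H_{t}\left( {\overline{x}}\right) \) and \(G_{t}\left( {\overline{x}}\right) \) are known. The following sketch (the values of H and G at the feasible point are hypothetical) partitions T as in the text; note that \(T_{+}\left( {\overline{x}}\right) \) splits only into \(T_{+0}\) and \(T_{+-}\) because feasibility \(H_{t}G_{t}\leqq 0\) with \(H_{t}>0\) forces \(G_{t}\leqq 0\).

```python
# Partition of the index set T used throughout Sect. 3, computed for a toy
# feasible point of (MPVC); the H and G values are illustrative choices.
def partition(H_vals, G_vals, tol=1e-12):
    sets = {"T+0": [], "T+-": [], "T0+": [], "T00": [], "T0-": []}
    for t, (h, g) in enumerate(zip(H_vals, G_vals), start=1):
        if h > tol:                      # t in T_+(xbar); feasibility gives g <= 0
            sets["T+0" if abs(g) <= tol else "T+-"].append(t)
        else:                            # h == 0 at a feasible point: t in T_0(xbar)
            if g > tol:
                sets["T0+"].append(t)
            elif abs(g) <= tol:
                sets["T00"].append(t)
            else:
                sets["T0-"].append(t)
    return sets

# H(xbar) and G(xbar) evaluated at a hypothetical feasible point:
H_vals = [2.0, 1.0, 0.0, 0.0, 0.0]
G_vals = [0.0, -3.0, 4.0, 0.0, -1.0]
sets = partition(H_vals, G_vals)
assert sets == {"T+0": [1], "T+-": [2], "T0+": [3], "T00": [4], "T0-": [5]}
```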

Before proving the necessary optimality conditions for the considered directionally differentiable multiobjective programming problem with vanishing constraints, we introduce the Abadie constraint qualification for this multicriteria optimization problem.

In order to introduce the aforesaid constraint qualification, for \({\overline{x}}\in \Omega \), we define the sets \(Q^{l}\left( {\overline{x}}\right) \), \( l=1,...,p\), and \(Q\left( {\overline{x}}\right) \) as follows:

$$\begin{aligned}{} & {} Q^{l}\left( {\overline{x}}\right) =\left\{ x\in C:f_{i}\left( x\right) \leqq f_{i}\left( {\overline{x}}\right) \text {, }\forall i\in I\text {, }i\ne l\text {, }g_{j}\left( x\right) \leqq 0\text {, }\forall j\in J\left( {\overline{x}}\right) \text {, }h_{s}\left( x\right) =0\text {, }\forall s\in S\text {,}\right. \\{} & {} \quad H_{t}\left( x\right) =0\text {, }\forall t\in T_{0+}\left( {\overline{x}}\right) \text {, }H_{t}\left( x\right) \geqq 0\text {, }\forall t\in T_{0}\left( {\overline{x}}\right) \text {,}\\{} & {} \quad \left. G_{t}\left( x\right) \leqq 0\text {, }\forall t\in T_{+0}\left( {\overline{x}}\right) \text {, }H_{t}\left( x\right) G_{t}\left( x\right) \leqq 0\text {, }\forall t\in T_{HG}\left( {\overline{x}}\right) \right\} \text {,}\\{} & {} Q\left( {\overline{x}}\right) =\bigcap \limits _{l=1}^{p}Q^{l}\left( {\overline{x}}\right) \text {.} \end{aligned}$$

Now, we give the definition of the almost linearizing cone for the considered multiobjective programming problem (MPVC) with vanishing constraints. It is a generalization of the almost linearizing cone introduced by Preda and Chitescu (1999) for a directionally differentiable multiobjective optimization problem with inequality constraints only.

Definition 3.3

The almost linearizing cone \( L\left( \Omega ,{\overline{x}}\right) \) to the set \(\Omega \) at \({\overline{x}} \in \Omega \) is defined by

$$\begin{aligned}{} & {} L\left( \Omega ,{\overline{x}}\right) =\left\{ v\in R^{n}:f_{i}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall i\in I\text {, }g_{j}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall j\in J\left( {\overline{x}}\right) \text {,}\right. \\{} & {} \left. h_{s}^{+}\left( {\overline{x}};v\right) =0\text {, }\forall s\in S\text {, }H_{t}^{+}\left( {\overline{x}};v\right) \geqq 0\text {, }\forall t\in T\text {, }\left( H_{t}G_{t}\right) ^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall t\in T\right\} \text {.} \end{aligned}$$

Now, we prove the result which gives the formulation of the almost linearizing cone to the sets \(Q^{l}\left( {\overline{x}}\right) \), \(l=1,...,p\).

Proposition 3.4

Let \({\overline{x}}\in \Omega \) be a Pareto solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. Then, the almost linearizing cone to each set \(Q^{l}\left( {\overline{x}}\right) \), \(l=1,...,p\), at \({\overline{x}}\), denoted by \(L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \), is given by

$$\begin{aligned}{} & {} L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) =\left\{ v\in R^{n}:f_{i}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall i\in I\text {, }i\ne l\text {, }g_{j}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall j\in J\left( {\overline{x}}\right) \text {,}\right. \\{} & {} \quad h_{s}^{+}\left( {\overline{x}};v\right) =0\text {, }\forall s\in S\text {, }H_{t}^{+}\left( {\overline{x}};v\right) =0\text {, }\forall t\in T_{0+}\left( {\overline{x}}\right) \text {,}\\{} & {} \quad \left. H_{t}^{+}\left( {\overline{x}};v\right) \geqq 0\text {, }\forall t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \text {, }G_{t}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall t\in T_{+0}\left( {\overline{x}}\right) \right\} \text {.} \end{aligned}$$
(5)

Proof

Let us assume that \({\overline{x}}\in \Omega \) is a Pareto solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. Then, by the definitions of the almost linearizing cone and index sets, we get

$$\begin{aligned}{} & {} L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) =\left\{ v\in R^{n}:f_{i}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall i\in I\text {, }i\ne l\text {, }g_{j}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall j\in J\left( {\overline{x}}\right) \text {,}\right. \\{} & {} \quad \left. h_{s}^{+}\left( {\overline{x}};v\right) =0\text {, }\forall s\in S\text {, }H_{t}^{+}\left( {\overline{x}};v\right) \geqq 0\text {, }\forall t\in T_{0}\left( {\overline{x}}\right) \text {, }\left( H_{t}G_{t}\right) ^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall t\in T_{HG}\left( {\overline{x}}\right) \right\} \text {.} \end{aligned}$$
(6)

Note that, by Lemma 2.7, one has

$$\begin{aligned} \left( H_{t}G_{t}\right) ^{+}\left( {\overline{x}};v\right) =G_{t}\left( {\overline{x}}\right) \left( H_{t}\right) ^{+}\left( {\overline{x}};v\right) +H_{t}\left( {\overline{x}}\right) \left( G_{t}\right) ^{+}\left( {\overline{x}};v\right) \text {.} \end{aligned}$$
(7)

Then, by the definition of index sets, ( 7 ) gives

$$\begin{aligned} \left( H_{t}G_{t}\right) ^{+}\left( {\overline{x}};v\right) ={\left\{ \begin{array}{ll} G_{t}\left( {\overline{x}}\right) H_{t}^{+}\left( {\overline{x}};v\right) \text {,} &{} t\in T_{0+}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \text {,}\\ 0\text {,} &{} t\in T_{00}\left( {\overline{x}}\right) \text {,}\\ H_{t}\left( {\overline{x}}\right) G_{t}^{+}\left( {\overline{x}};v\right) \text {,} &{} t\in T_{+0}\left( {\overline{x}}\right) \text {.} \end{array}\right. } \end{aligned}$$
(8)

Combining ( 6 )-( 8 ), we get ( 5 ). This completes the proof of this proposition. \(\square \)

Remark 3.5

Note that the almost linearizing cone to \(Q\left( {\overline{x}}\right) \) at \( {\overline{x}}\in Q\left( {\overline{x}}\right) \) is given by

$$\begin{aligned} L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) =\bigcap \limits _{l=1}^{p}L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) . \end{aligned}$$
(9)

Indeed, by ( 5 ), we get ( 9 ). In other words, the formulation of \(L\left( Q\left( {\overline{x}}\right) ; {\overline{x}}\right) \) is given by

$$\begin{aligned}{} & {} L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) =\left\{ v\in R^{n}:f_{i}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall i\in I\text {, }g_{j}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall j\in J\left( {\overline{x}}\right) \text {,}\right. \\{} & {} \quad h_{s}^{+}\left( {\overline{x}};v\right) =0\text {, }\forall s\in S\text {, }H_{t}^{+}\left( {\overline{x}};v\right) =0\text {, }\forall t\in T_{0+}\left( {\overline{x}}\right) \text {,}\\{} & {} \quad \left. H_{t}^{+}\left( {\overline{x}};v\right) \geqq 0\text {, }\forall t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \text {, }G_{t}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall t\in T_{+0}\left( {\overline{x}}\right) \right\} \text {.} \end{aligned}$$
(10)

Proposition 3.6

If \(f_{i}^{+}\left( {\overline{x}};\cdot \right) \), \(i\in I,\) \(g_{j}^{+}\left( {\overline{x}};\cdot \right) \), \(j\in J\left( {\overline{x}}\right) ,\) \( h_{s}^{+}\left( {\overline{x}};\cdot \right) \), \(s\in S,\) \(-H_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \), \( G_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{+0}\left( {\overline{x}} \right) \), are convex on \(R^{n}\), then \(L\left( Q\left( {\overline{x}}\right) ; {\overline{x}}\right) \) is a closed convex cone.

Proof

Since the directional derivative is a positively homogeneous function of the direction, if \(\alpha \geqq 0\) and \(v\in L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \), then \(\alpha v\in L\left( Q\left( {\overline{x}} \right) ;{\overline{x}}\right) \). This means that \(L\left( Q\left( {\overline{x}} \right) ;{\overline{x}}\right) \) is a cone.

Now, we prove that it is a convex cone. Let \(v_{1}\), \(v_{2}\in L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \) and \(\alpha \in \left[ 0,1 \right] \). By convexity assumption, it follows that

$$\begin{aligned}{} & {} f_{i}^{+}\left( {\overline{x}};\alpha v_{1}+\left( 1-\alpha \right) v_{2}\right) \leqq \alpha f_{i}^{+}\left( {\overline{x}};v_{1}\right) +\left( 1-\alpha \right) f_{i}^{+}\left( {\overline{x}};v_{2}\right) \leqq 0\text {, } i\in I\text {,}\\{} & {} g_{j}^{+}\left( {\overline{x}};\alpha v_{1}+\left( 1-\alpha \right) v_{2}\right) \leqq \alpha g_{j}^{+}\left( {\overline{x}};v_{1}\right) +\left( 1-\alpha \right) g_{j}^{+}\left( {\overline{x}};v_{2}\right) \leqq 0\text {, } j\in J\left( {\overline{x}}\right) \text {,}\\{} & {} h_{s}^{+}\left( {\overline{x}};\alpha v_{1}+\left( 1-\alpha \right) v_{2}\right) \leqq \alpha h_{s}^{+}\left( {\overline{x}};v_{1}\right) +\left( 1-\alpha \right) h_{s}^{+}\left( {\overline{x}};v_{2}\right) \leqq 0\text {, } s\in S\text {,}\\{} & {} -H_{t}^{+}\left( {\overline{x}};\alpha v_{1}+\left( 1-\alpha \right) v_{2}\right) \leqq -\alpha H_{t}^{+}\left( {\overline{x}};v_{1}\right) -\left( 1-\alpha \right) H_{t}^{+}\left( {\overline{x}};v_{2}\right) \leqq 0\text {, } t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \text {,}\\{} & {} G_{t}^{+}\left( {\overline{x}};\alpha v_{1}+\left( 1-\alpha \right) v_{2}\right) \leqq \alpha G_{t}^{+}\left( {\overline{x}};v_{1}\right) +\left( 1-\alpha \right) G_{t}^{+}\left( {\overline{x}};v_{2}\right) \leqq 0\text {, } t\in T_{+0}\left( {\overline{x}}\right) \text {.} \end{aligned}$$

The above inequalities imply that \(\alpha v_{1}+\left( 1-\alpha \right) v_{2}\in L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \), which means that \(L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \) is a convex cone.

Now, we prove the closedness of \(L\left( Q\left( {\overline{x}}\right) ; {\overline{x}}\right) \). In order to prove this property, we take a sequence \( \left\{ v_{r}\right\} \subset L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \) such that \(v_{r}\rightarrow v\) as \(r\rightarrow \infty \). Since \( v_{r}\in L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \) for any integer r, by the continuity of convex functions \(f_{i}^{+}\left( {\overline{x}};\cdot \right) \), \(i\in I\), we have

$$\begin{aligned} \underset{r\rightarrow \infty }{\lim }f_{i}^{+}\left( {\overline{x}};v_{r}\right) =f_{i}^{+}\left( {\overline{x}};v\right) \leqq 0,~\forall i\in I\text {.} \end{aligned}$$

Similarly, we obtain \(g_{j}^{+}\left( {\overline{x}};v\right) \leqq 0\), \(j\in J\left( {\overline{x}}\right) ,\) \(h_{s}^{+}\left( {\overline{x}};v\right) =0\), \( s\in S\), \(H_{t}^{+}\left( {\overline{x}};v\right) =0\), \(t\in T_{0+}\left( {\overline{x}}\right) \), \(H_{t}^{+}\left( {\overline{x}};v\right) \geqq 0\), \( t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \), \(G_{t}^{+}\left( {\overline{x}};v\right) \leqq 0\), \(t\in T_{+0}\left( {\overline{x}}\right) \). This means that the set \(L\left( Q\left( {\overline{x}} \right) ;{\overline{x}}\right) \) is closed. \(\square \)

Remark 3.7

Based on the result established in the above proposition, we conclude that \(L\left( Q^{l}\left( {\overline{x}} \right) ;{\overline{x}}\right) \), \(l=1,...,p,\) are also closed convex cones.

Proposition 3.8

If, for each \(v\in Z\left( Q\left( {\overline{x}} \right) ;{\overline{x}}\right) \), the Dini directional derivatives \( f_{i}^{+}\left( {\overline{x}};v\right) \), \(i\in I,\) \(g_{j}^{+}\left( {\overline{x}};v\right) \), \(j\in J\left( {\overline{x}}\right) ,\) \( h_{s}^{+}\left( {\overline{x}};v\right) \), \(s\in S,\) \(H_{t}^{+}\left( {\overline{x}};v\right) \), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \), \( G_{t}^{+}\left( {\overline{x}};v\right) \), \(t\in T_{+0}\left( {\overline{x}} \right) \), exist, then

$$\begin{aligned} \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \text {.} \end{aligned}$$
(11)

Proof

Firstly, we prove that, for each \(l=1,...,p\),

$$\begin{aligned} Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \text {.} \end{aligned}$$
(12)

To this end, fix \(l\in \left\{ 1,...,p\right\} \) and take \(v\in Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \). Then, by Definition 2.9, there exists \(\left( \alpha _{k}\right) \subset R_{+}\), \(\alpha _{k}\downarrow 0\), such that \({\overline{x}}+\alpha _{k}v\in Q^{l}\left( {\overline{x}}\right) \) for all \(k\in N\). Since \({\overline{x}}+\alpha _{k}v\in Q^{l}\left( {\overline{x}}\right) \), we have

$$\begin{aligned}{} & {} f_{i}\left( {\overline{x}}+\alpha _{k}v\right) \leqq f_{i}\left( {\overline{x}} \right) , \forall i=1,...,p, i\ne l,\\{} & {} g_{j}\left( {\overline{x}}+\alpha _{k}v\right) \leqq 0=g_{j}\left( {\overline{x}} \right) , \forall j\in J\left( {\overline{x}}\right) ,\\{} & {} h_{s}\left( {\overline{x}}+\alpha _{k}v\right) =0=h_{s}\left( {\overline{x}} \right) , \forall s\in S\\{} & {} H_{t}\left( {\overline{x}}+\alpha _{k}v\right) \geqq 0=H_{t}\left( {\overline{x}} \right) , \forall t\in T_{0}\left( {\overline{x}}\right) ,\\{} & {} H_{t}\left( {\overline{x}}+\alpha _{k}v\right) G_{t}\left( {\overline{x}}+\alpha _{k}v\right) \leqq 0=H_{t}\left( {\overline{x}}\right) G_{t}\left( {\overline{x}} \right) , \forall t\in T_{HG}\left( {\overline{x}}\right) . \end{aligned}$$

Then, by Definition 2.5, we have

$$\begin{aligned}{} & {} f_{i}^{+}\left( {\overline{x}};v\right) =\underset{\alpha _{k}\downarrow 0}{ \lim }\frac{f_{i}\left( {\overline{x}}+\alpha _{k}v\right) -f_{i}\left( {\overline{x}}\right) }{\alpha _{k}}\leqq 0, \forall i=1,...,p, i\ne l, \end{aligned}$$
(13)
$$\begin{aligned}{} & {} g_{j}^{+}\left( {\overline{x}};v\right) =\underset{\alpha _{k}\downarrow 0}{ \lim }\frac{g_{j}\left( {\overline{x}}+\alpha _{k}v\right) -g_{j}\left( {\overline{x}}\right) }{\alpha _{k}}\leqq 0, \forall j\in J\left( {\overline{x}} \right) , \end{aligned}$$
(14)
$$\begin{aligned}{} & {} h_{s}^{+}\left( {\overline{x}};v\right) =\underset{\alpha _{k}\downarrow 0}{ \lim }\frac{h_{s}\left( {\overline{x}}+\alpha _{k}v\right) -h_{s}\left( {\overline{x}}\right) }{\alpha _{k}}=0, \forall s\in S, \end{aligned}$$
(15)
$$\begin{aligned}{} & {} H_{t}^{+}\left( {\overline{x}};v\right) =\underset{\alpha _{k}\downarrow 0}{ \lim }\frac{H_{t}\left( {\overline{x}}+\alpha _{k}v\right) -H_{t}\left( {\overline{x}}\right) }{\alpha _{k}}\geqq 0, \forall t\in T_{0}\left( {\overline{x}}\right) , \end{aligned}$$
(16)
$$\begin{aligned}{} & {} \left( G_{t}H_{t}\right) ^{+}\left( {\overline{x}};v\right) =\underset{\alpha _{k}\downarrow 0}{\lim }\frac{\left( G_{t}H_{t}\right) \left( {\overline{x}} +\alpha _{k}v\right) -\left( G_{t}H_{t}\right) \left( {\overline{x}}\right) }{ \alpha _{k}}\leqq 0, \forall t\in T_{HG}\left( {\overline{x}}\right) . \end{aligned}$$
(17)

By Lemma 2.7, one has

$$\begin{aligned} \left( H_{t}G_{t}\right) ^{+}\left( {\overline{x}};v\right) ={\left\{ \begin{array}{ll} G_{t}\left( {\overline{x}}\right) H_{t}^{+}\left( {\overline{x}};v\right) \text {,} &{} t\in T_{0+}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \text {,}\\ 0\text {,} &{} t\in T_{00}\left( {\overline{x}}\right) \text {,}\\ H_{t}\left( {\overline{x}}\right) G_{t}^{+}\left( {\overline{x}};v\right) \text {,} &{} t\in T_{+0}\left( {\overline{x}}\right) \text {.} \end{array}\right. } \end{aligned}$$
(18)

Thus, ( 16 )-( 18 ) yield

$$\begin{aligned}{} & {} H_{t}^{+}\left( {\overline{x}};v\right) =0, \forall t\in T_{0+}\left( {\overline{x}}\right) , \end{aligned}$$
(19)
$$\begin{aligned}{} & {} H_{t}^{+}\left( {\overline{x}};v\right) \geqq 0, \forall t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) , \end{aligned}$$
(20)
$$\begin{aligned}{} & {} G_{t}^{+}\left( {\overline{x}};v\right) \leqq 0, \forall t\in T_{+0}\left( {\overline{x}}\right) . \end{aligned}$$
(21)

Hence, we conclude by ( 13 )-( 15 ) and ( 19 )-( 21 ) that \(v\in L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \) for each \(l=1,...,p\). Therefore, since we have shown that \(Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \) for each \(l=1,...,p\), we have by ( 9 ) that

$$\begin{aligned} \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset \bigcap \limits _{l=1}^{p}L\left( Q^{l}\left( {\overline{x}} \right) ;{\overline{x}}\right) =L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \text {,} \end{aligned}$$

as was to be shown. \(\square \)

Note that, in general, the converse inclusion of ( 11 ) does not hold. Therefore, in order to prove the necessary optimality condition for efficiency in (MPVC), we give the definition of the Abadie constraint qualification.

Definition 3.9

It is said that the Abadie constraint qualification holds at \({\overline{x}}\in \Omega \) for (MPVC) iff

$$\begin{aligned} L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \text {.} \end{aligned}$$
(22)

Remark 3.10

By ( 11 ), ( 22 ) means that the Abadie constraint qualification (ACQ) holds at \({\overline{x}} \) for (MPVC) iff \(L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) =\bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \).

Now, we state a necessary condition for efficiency in (MPVC).

Theorem 3.11

Let \({\overline{x}} \in \Omega \) be an efficient solution in (MPVC) and, for each \(v\in Z\left( C,{\overline{x}}\right) ,\) let the directional derivatives \(f_{i}^{+}\left( {\overline{x}};v\right) \), \(i=1,...,p\), \(g_{j}^{+}\left( {\overline{x}};v\right) \), \(j\in J({\overline{x}})\), \(h_{s}^{+}\left( {\overline{x}};v\right) \), \(s\in S\), \(H_{t}^{+}\left( {\overline{x}};v\right) \), \(t\in T_{0}({\overline{x}})\), \(G_{t}^{+}\left( {\overline{x}};v\right) \), \(t\in T_{+0}\left( {\overline{x}}\right) \), exist. Further, we assume that \(g_{j}\), \(j\in J^{<}( {\overline{x}})\), \(H_{t}\), \(t\in T_{+}({\overline{x}})\), and \(G_{t}\), \(t\in T_{+-}( {\overline{x}})\), are continuous at \({\overline{x}}\). If the Abadie constraint qualification (ACQ) holds at \({\overline{x}}\) for (MPVC), then, for each \(l=1,...,p\), the system

$$\begin{aligned}{} & {} f_{i}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }f_{l}^{+}\left( {\overline{x}};v\right) <0\text {, }i=1,...,p, i\ne l, \end{aligned}$$
(23)
$$\begin{aligned}{} & {} g_{j}^{+}\left( {\overline{x}};v\right) \leqq 0, j\in J({\overline{x}}) \text {,} \end{aligned}$$
(24)
$$\begin{aligned}{} & {} h_{s}^{+}\left( {\overline{x}};v\right) =0, s\in S\text {,} \end{aligned}$$
(25)
$$\begin{aligned}{} & {} -H_{t}^{+}\left( {\overline{x}};v\right) \leqq 0, t\in T_{0}( {\overline{x}})\text {,} \end{aligned}$$
(26)
$$\begin{aligned}{} & {} \left( H_{t}G_{t}\right) ^{+}\left( {\overline{x}};v\right) \leqq 0, t\in T_{HG}\left( {\overline{x}}\right) \end{aligned}$$
(27)

has no solution \(v\in R^{n}\).

Proof

We proceed by contradiction. Suppose, contrary to the result, that there exists \(l_{0}\in \left\{ 1,...,p\right\} \) such that the system

$$\begin{aligned}{} & {} f_{i}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }f_{l_{0}}^{+}\left( {\overline{x}};v\right) <0\text {, } \quad i=1,...,p, i\ne l_{0}, \end{aligned}$$
(28)
$$\begin{aligned}{} & {} g_{j}^{+}\left( {\overline{x}};v\right) \leqq 0, \quad j\in J({\overline{x}}) \text {,} \end{aligned}$$
(29)
$$\begin{aligned}{} & {} h_{s}^{+}\left( {\overline{x}};v\right) =0, \quad s\in S\text {,} \end{aligned}$$
(30)
$$\begin{aligned}{} & {} -H_{t}^{+}\left( {\overline{x}};v\right) \leqq 0, \quad t\in T_{0}( {\overline{x}})\text {,} \end{aligned}$$
(31)
$$\begin{aligned}{} & {} \left( H_{t}G_{t}\right) ^{+}\left( {\overline{x}};v\right) \leqq 0, \quad t\in T_{HG}\left( {\overline{x}}\right) \end{aligned}$$
(32)

has a solution \(v\in R^{n}\). Then, by ( 8 ), the system

$$\begin{aligned}{} & {} f_{i}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }f_{l_{0}}^{+}\left( {\overline{x}};v\right) <0\text {, } \quad i=1,...,p, i\ne l_{0}, \end{aligned}$$
(33)
$$\begin{aligned}{} & {} g_{j}^{+}\left( {\overline{x}};v\right) \leqq 0, \quad \forall j\in J({\overline{x}}) \text {,} \end{aligned}$$
(34)
$$\begin{aligned}{} & {} h_{s}^{+}\left( {\overline{x}};v\right) =0, \quad \forall s\in S\text {,} \end{aligned}$$
(35)
$$\begin{aligned}{} & {} H_{t}^{+}\left( {\overline{x}};v\right) =0\quad \forall t\in T_{0+}\left( {\overline{x}}\right) \text {,} \end{aligned}$$
(36)
$$\begin{aligned}{} & {} H_{t}^{+}\left( {\overline{x}};v\right) \geqq 0\quad \forall t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \text {, } \end{aligned}$$
(37)
$$\begin{aligned}{} & {} G_{t}^{+}\left( {\overline{x}};v\right) \leqq 0\quad \forall t\in T_{+0}\left( {\overline{x}}\right) \end{aligned}$$
(38)

has a solution \(v\in R^{n}\). Hence, it is obvious that \(v\in L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \). By assumption, (ACQ) is satisfied at \({\overline{x}}\) for (MPVC). Then, by Definition 3.9, \(v\in \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ; {\overline{x}}\right) \). Thus, \(v\in Z\left( Q^{l_{0}}\left( {\overline{x}} \right) ;{\overline{x}}\right) \). Therefore, by Definition 2.9, there exists \(\left( \alpha _{k}\right) \subset R_{+}\), \(\alpha _{k}\downarrow 0\), such that \({\overline{x}}+\alpha _{k}v\in Q^{l_{0}}\left( {\overline{x}}\right) \) for all \(k\in N\). Hence, \({\overline{x}}+\alpha _{k}v\in C\) and, moreover,

$$\begin{aligned}{} & {} f_{i}\left( {\overline{x}}+\alpha _{k}v\right) \leqq f_{i}\left( {\overline{x}} \right) , \quad \forall i=1,...,p, i\ne l_{0}, \end{aligned}$$
(39)
$$\begin{aligned}{} & {} g_{j}\left( {\overline{x}}+\alpha _{k}v\right) \leqq 0, \quad \forall j\in J\left( {\overline{x}}\right) , \end{aligned}$$
(40)
$$\begin{aligned}{} & {} h_{s}\left( {\overline{x}}+\alpha _{k}v\right) =0, \quad \forall s=1,...,q, \end{aligned}$$
(41)
$$\begin{aligned}{} & {} H_{t}\left( {\overline{x}}+\alpha _{k}v\right) =0, \quad \forall t\in T_{0+}\left( {\overline{x}}\right) , \end{aligned}$$
(42)
$$\begin{aligned}{} & {} H_{t}\left( {\overline{x}}+\alpha _{k}v\right) \geqq 0, \quad \forall t\in T_{0}\left( {\overline{x}}\right) , \end{aligned}$$
(43)
$$\begin{aligned}{} & {} G_{t}\left( {\overline{x}}+\alpha _{k}v\right) \leqq 0, \quad \forall t\in T_{+0}\left( {\overline{x}}\right) . \end{aligned}$$
(44)

By the definition of the index sets, one has \(g_{j}\left( {\overline{x}}\right) <0\), \(j\in J^{<}({\overline{x}})\), \(H_{t}\left( {\overline{x}}\right) >0\), \(t\in T_{+}({\overline{x}})\), and \(G_{t}({\overline{x}})<0\), \(t\in T_{+-}({\overline{x}})\). Therefore, by the continuity of \(g_{j}\), \(j\in J^{<}({\overline{x}})\), \(H_{t}\), \(t\in T_{+}({\overline{x}})\), and \(G_{t}\), \(t\in T_{+-}({\overline{x}})\), at \( {\overline{x}}\), there exists \(k_{0}\in N\) such that, for all \(k>k_{0}\),

$$\begin{aligned}{} & {} g_{j}\left( {\overline{x}}+\alpha _{k}v\right) \leqq 0,\quad \forall j\notin J\left( {\overline{x}}\right) , \end{aligned}$$
(45)
$$\begin{aligned}{} & {} H_{t}\left( {\overline{x}}+\alpha _{k}v\right) \geqq 0,\quad \forall t\in T_{+}({\overline{x}}), \end{aligned}$$
(46)
$$\begin{aligned}{} & {} G_{t}\left( {\overline{x}}+\alpha _{k}v\right) \leqq 0,\quad \forall t\in T_{+-}({\overline{x}}). \end{aligned}$$
(47)

Thus, we conclude by (40)-(47) that \({\overline{x}}+\alpha _{k}v\in \Omega \) for all \(k>k_{0}\) and, since \(\alpha _{k}\downarrow 0\), that for every \(\delta >0\) one has \({\overline{x}}+\alpha _{k}v\in \Omega \cap B\left( {\overline{x}};\delta \right) \) for all sufficiently large \(k\), where \(B\left( {\overline{x}};\delta \right) \) denotes the open ball of radius \(\delta \) centered at \( {\overline{x}}\).

On the other hand, it follows from the assumption that \({\overline{x}}\in \Omega \) is an efficient solution in (MPVC). Hence, by Definition 3.1, there exists a number \(\delta >0\) such that there is no \(x\in \Omega \cap B\left( {\overline{x}};\delta \right) \) satisfying

$$\begin{aligned}{} & {} f_{i}\left( x\right) \leqq f_{i}\left( {\overline{x}}\right) , \ i=1,...,p, \end{aligned}$$
(48)
$$\begin{aligned}{} & {} f_{i}\left( x\right) <f_{i}\left( {\overline{x}}\right) \hbox { for some } i\in \left\{ 1,...,p\right\} . \end{aligned}$$
(49)

Hence, since \({\overline{x}}+\alpha _{k}v\in \Omega \cap B\left( {\overline{x}};\delta \right) \) and (39) holds, by (48) and (49), we conclude that, for all sufficiently large \(k\), the inequality

$$\begin{aligned} f_{l_{0}}\left( {\overline{x}}+\alpha _{k}v\right) \geqq f_{l_{0}}\left( {\overline{x}}\right) \end{aligned}$$

holds. Then, by Definition 2.5, this implies that the inequality

$$\begin{aligned} f_{l_{0}}^{+}\left( {\overline{x}};v\right) \geqq 0 \end{aligned}$$

holds, which contradicts (28). Hence, the proof of this theorem is completed. \(\square \)

Remark 3.12

As follows from the proof of Theorem 3.11, if the system (23)-(27) has no solution \(v\in R^{n}\), then, for each \(l=1,...,p\), the system

$$\begin{aligned}{} & {} f_{i}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }f_{l}^{+}\left( {\overline{x}};v\right) <0\text {, }i=1,...,p, i\ne l, \end{aligned}$$
(50)
$$\begin{aligned}{} & {} g_{j}^{+}\left( {\overline{x}};v\right) \leqq 0, \forall j\in J({\overline{x}}) \text {,} \end{aligned}$$
(51)
$$\begin{aligned}{} & {} h_{s}^{+}\left( {\overline{x}};v\right) =0, \forall s\in S\text {,} \end{aligned}$$
(52)
$$\begin{aligned}{} & {} H_{t}^{+}\left( {\overline{x}};v\right) =0\quad \forall t\in T_{0+}\left( {\overline{x}}\right) \text {,} \end{aligned}$$
(53)
$$\begin{aligned}{} & {} H_{t}^{+}\left( {\overline{x}};v\right) \geqq 0\quad \forall t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \text {, } \end{aligned}$$
(54)
$$\begin{aligned}{} & {} G_{t}^{+}\left( {\overline{x}};v\right) \leqq 0\quad \forall t\in T_{+0}\left( {\overline{x}}\right) \end{aligned}$$
(55)

has no solution \(v\in R^{n}\).

Let us define the functions \(F=\left( F_{1},...,F_{p}\right) :R^{n}\rightarrow R^{p}\), \(\Psi =\big ( \Psi _{1},...,\Psi _{\left| J\left( {\overline{x}}\right) \right| +\left| T_{00}\left( {\overline{x}} \right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| +\left| T_{+0}\left( {\overline{x}}\right) \right| }\big ):R^{n}\rightarrow R^{\left| J\left( {\overline{x}}\right) \right| +\left| T_{00}\left( {\overline{x}}\right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| +\left| T_{+0}\left( {\overline{x}}\right) \right| }\) and \(\Phi =\left( \Phi _{1},...,\Phi _{q+\left| T_{0+}\left( {\overline{x}}\right) \right| }\right) :R^{n}\rightarrow R^{q+\left| T_{0+}\left( {\overline{x}}\right) \right| }\) as follows

$$\begin{aligned}{} & {} F_{i}\left( v\right) :=f_{i}^{+}\left( {\overline{x}};v\right) , i\in I, \end{aligned}$$
(56)
$$\begin{aligned}{} & {} \Psi _{\alpha }\left( v\right) :=\left\{ \begin{array}{lll} g_{l}^{+}\left( {\overline{x}};v\right) &{} \text {for} &{} l\in J\left( {\overline{x}}\right) , \alpha =1,...,\left| J\left( {\overline{x}}\right) \right| , \\ -H_{l}^{+}\left( {\overline{x}};v\right) &{} \text {for} &{} l\in T_{00}({\overline{x}})\text {, }\alpha =\left| J\left( {\overline{x}}\right) \right| +1,...,\left| J\left( {\overline{x}}\right) \right| +\left| T_{00}\left( {\overline{x}}\right) \right| , \\ -H_{l}^{+}\left( {\overline{x}};v\right) &{} \text {for} &{} l\in T_{0-}({\overline{x}})\text {, }\alpha =\left| J\left( {\overline{x}}\right) \right| +\left| T_{00}\left( {\overline{x}}\right) \right| +1,...,\left| J\left( {\overline{x}}\right) \right| +\left| T_{00}\left( {\overline{x}} \right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| , \\ G_{l}^{+}\left( {\overline{x}};v\right) &{} \text {for} &{} \begin{array}{c} l\in T_{+0}({\overline{x}})\text {, } \\ \alpha =\left| J\left( {\overline{x}}\right) \right| +\left| T_{00}\left( {\overline{x}}\right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| +1,...,\left| J\left( {\overline{x}} \right) \right| +\left| T_{00}\left( {\overline{x}}\right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| +\left| T_{+0}\left( {\overline{x}}\right) \right| , \end{array} \end{array} \right. \end{aligned}$$
(57)
$$\begin{aligned}{} & {} \Phi _{\beta }\left( v\right) :=\left\{ \begin{array}{ccc} h_{l}^{+}\left( {\overline{x}};v\right) &{} \text {for} &{} l=1,...,q, \beta =1,...,q, \\ H_{l}^{+}\left( {\overline{x}};v\right) &{} \text {for} &{} l\in T_{0+}({\overline{x}})\text {, }\beta =q+1,...,q+\left| T_{0+}\left( {\overline{x}}\right) \right| \text {.} \end{array} \right. \end{aligned}$$
(58)

We are now in a position to formulate the Karush–Kuhn–Tucker necessary optimality conditions for a feasible solution \({\overline{x}}\) to be an efficient solution in (MPVC) under the Abadie constraint qualification (ACQ).

Theorem 3.13

(Karush–Kuhn–Tucker Type Necessary Optimality Conditions). Let \({\overline{x}} \in \Omega \) be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. We also assume that \( f_{i}\), \(i\in I\), \(g_{j}\), \(j\in J\), \(h_{s}\), \(s\in S\), \(H_{t}\), \(t\in T\), \( G_{t}\), \(t\in T\), are directionally differentiable functions at \({\overline{x}} \), \(f_{i}^{+}\left( {\overline{x}};\cdot \right) \), \(i\in I\), \(g_{j}^{+}\left( {\overline{x}};\cdot \right) \), \(j\in J({\overline{x}})\), \(-H_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \), \(G_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{+0}\left( {\overline{x}}\right) \), are convex functions, \( h_{s}^{+}\left( {\overline{x}};\cdot \right) \), \(s\in S\), \(H_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{0+}\left( {\overline{x}}\right) \), are linear functions, \(g_{j}\), \(j\in J^{<}({\overline{x}})\), \(H_{t}\), \(t\in T_{+}( {\overline{x}})\), \(G_{t}\), \(t\in T_{0}\left( {\overline{x}}\right) \cup T_{+-}\left( {\overline{x}}\right) \), are continuous functions at \({\overline{x}} \) and, moreover, the Abadie constraint qualification (ACQ) is satisfied at \( {\overline{x}}\) for (MPVC). If there exists \(v_{0}\in relint\,Z\left( C; {\overline{x}}\right) \) such that \(\Psi \left( v_{0}\right) <0\) and \(\Phi \left( v_{0}\right) \leqq 0\), then there exist Lagrange multipliers \( {\overline{\lambda }}\in R^{p}\), \({\overline{\mu }}\in R^{m}\), \({\overline{\xi }} \in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\) and \({\overline{\vartheta }} ^{G}\in R^{r}\) such that the following conditions

$$\begin{aligned} \sum _{i=1}^{p}{\overline{\lambda }}_{i}f_{i}^{+}\left( {\overline{x}};v\right) +\sum _{j=1}^{m}{\overline{\mu }}_{j}g_{j}^{+}\left( {\overline{x}};v\right) +\sum _{s=1}^{q}{\overline{\xi }}_{s}h_{s}^{+}\left( {\overline{x}};v\right) -\sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{H}H_{t}^{+}\left( {\overline{x}};v\right) +\sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{G}G_{t}^{+}\left( {\overline{x}};v\right) \geqq 0, \quad \forall v\in Z\left( C;{\overline{x}}\right) , \end{aligned}$$
(59)
$$\begin{aligned}&\displaystyle {\overline{\mu }}_{j}g_{j}({\overline{x}})=0\text {, }j\in J, \end{aligned}$$
(60)
$$\begin{aligned}&\displaystyle {\overline{\vartheta }}_{t}^{H}H_{t}\left( {\overline{x}}\right) =0\text {, }t\in T\text {,} \end{aligned}$$
(61)
$$\begin{aligned}&\displaystyle {\overline{\vartheta }}_{t}^{G}G_{t}\left( {\overline{x}}\right) =0\text {, }t\in T\text {,} \end{aligned}$$
(62)
$$\begin{aligned}&\displaystyle {\overline{\lambda }}\ge 0, {\overline{\mu }}\geqq 0\text {,} \end{aligned}$$
(63)
$$\begin{aligned}&\displaystyle {\overline{\vartheta }}_{t}^{H}=0\text {, }t\in T_{+}\left( {\overline{x}}\right) \text {, }{\overline{\vartheta }}_{t}^{H}\geqq 0\text {, }t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \text {, }{\overline{\vartheta }}_{t}^{H}\text { free, }t\in T_{0+}\left( {\overline{x}}\right) \text {,} \end{aligned}$$
(64)
$$\begin{aligned}&\displaystyle {\overline{\vartheta }}_{t}^{G}=0\text {, }t\in T_{00}\left( {\overline{x}} \right) \cup T_{0+}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}} \right) \cup T_{+-}\left( {\overline{x}}\right) \text {, }{\overline{\vartheta }} _{t}^{G}\geqq 0\text {, }t\in T_{+0}\left( {\overline{x}}\right) \nonumber \\ \end{aligned}$$
(65)

hold.

Proof

Let \({\overline{x}}\in \Omega \) be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. Since (ACQ) is satisfied at \({\overline{x}}\) for (MPVC), by Remark 3.12, the system (50)-(55) has no solution \(v\in R^{n}\). By (56)-(58), it follows that the system

$$\begin{aligned} F\left( v\right) <0\text {,}\quad \Psi \left( v\right) \leqq 0\text {,}\quad \Phi \left( v\right) =0 \end{aligned}$$

has no solution \(v\in R^{n}\). Then, by Theorem 2.8, there exists a vector \(\left( \lambda ,\theta ,\beta \right) ^{T}\in R_{+}^{p}\times R_{+}^{\left| J\left( {\overline{x}}\right) \right| +\left| T_{00}\left( {\overline{x}}\right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| +\left| T_{+0}\left( {\overline{x}}\right) \right| }\times R^{q+\left| T_{0+}\left( {\overline{x}}\right) \right| }\), \(\lambda \ne 0\), such that

$$\begin{aligned} \lambda ^{T}F\left( v\right) +\theta ^{T}\Psi \left( v\right) +\beta ^{T}\Phi \left( v\right) \geqq 0, \forall v\in R^{n}\text {.} \end{aligned}$$

Hence, by (56)-(58), one has

$$\begin{aligned}{} & {} \sum _{i=1}^{p}\lambda _{i}f_{i}^{+}\left( {\overline{x}};v\right) +\sum _{j\in J\left( {\overline{x}}\right) }\theta _{j}g_{j}^{+}\left( {\overline{x}};v\right) +\sum _{s=1}^{q}\beta _{s}h_{s}^{+}\left( {\overline{x}};v\right) -\sum _{t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) }\theta _{t}H_{t}^{+}\left( {\overline{x}};v\right) - \nonumber \\{} & {} \sum _{t\in T_{0+}\left( {\overline{x}}\right) }\beta _{t}H_{t}^{+}\left( {\overline{x}};v\right) +\sum _{t\in T_{+0}\left( {\overline{x}}\right) }\theta _{t}G_{t}^{+}\left( {\overline{x}};v\right) \geqq 0, \quad \forall v\in R^{n}. \end{aligned}$$
(66)

Let us set \({\overline{\lambda }}_{i}=\lambda _{i}\), \(i=1,...,p\), and

$$\begin{aligned}{} & {} {\overline{\mu }}_{j}=\left\{ \begin{array}{lll} \theta _{j} &{} \text {if} &{} j\in J\left( {\overline{x}}\right) , \\ 0 &{} \text {if} &{} j\notin J\left( {\overline{x}}\right) , \end{array} \right. \end{aligned}$$
(67)
$$\begin{aligned}{} & {} {\overline{\xi }}_{s}=\beta _{s}, s=1,...,q\text {,} \end{aligned}$$
(68)
$$\begin{aligned}{} & {} {\overline{\vartheta }}_{t}^{H}=\left\{ \begin{array}{lll} \theta _{\alpha } &{} \text {if} &{} t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \text {, }\alpha =\left| J\left( {\overline{x}}\right) \right| +1,...,\left| J\left( {\overline{x}} \right) \right| +\left| T_{00}\left( {\overline{x}}\right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| , \\ \beta _{\alpha } &{} \text {if} &{} t\in T_{0+}\left( {\overline{x}}\right) \text {, }\alpha =q+1,...,q+\left| T_{0+}\left( {\overline{x}}\right) \right| , \\ 0 &{} \text {if} &{} t\in T_{+}\left( {\overline{x}}\right) , \end{array} \right. \end{aligned}$$
(69)
$$\begin{aligned}{} & {} {\overline{\vartheta }}_{t}^{G}=\left\{ \begin{array}{lll} \theta _{\alpha } &{} \text {if} &{} t\in T_{+0}\left( {\overline{x}}\right) \text {, }\alpha =\left| J\left( {\overline{x}}\right) \right| +\left| T_{00}\left( {\overline{x}}\right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| +1,...,\left| J\left( {\overline{x}} \right) \right| +\left| T_{00}\left( {\overline{x}}\right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| +\left| T_{+0}\left( {\overline{x}}\right) \right| , \\ 0 &{} \text {if} &{} t\in T_{00}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{+-}\left( {\overline{x}}\right) , \end{array} \right. \nonumber \\ \end{aligned}$$
(70)

If we use (67)-(70) in (66), then we get the Karush–Kuhn–Tucker optimality condition (59). Moreover, note that (67)-(70) imply the Karush–Kuhn–Tucker optimality conditions (60)-(65). Hence, the proof of this theorem is finished. \(\square \)

Note that, in general, the Abadie constraint qualification may not be fulfilled at an efficient solution in (MPVC) if \(T_{00}\left( {\overline{x}} \right) \ne \varnothing \).
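The following small example sketches why this can happen; it is our own illustration, not part of the original problem data (no objective, inequality, or equality constraints play a role, and only a single vanishing constraint pair \(H_{1}\), \(G_{1}\) is present):

```latex
% Illustrative sketch (our construction): failure of (ACQ) when T_{00}(\bar{x}) is nonempty.
% Take n=2, r=1, C=R^2, H_1(x)=x_1, G_1(x)=x_2, so that
\[
\Omega =\left\{ x\in R^{2}:x_{1}\geqq 0\text {, }x_{1}x_{2}\leqq 0\right\} ,
\qquad \overline{x}=\left( 0,0\right) ,\qquad T_{00}\left( \overline{x}\right) =\left\{ 1\right\} .
\]
% Since \Omega is a cone, the cone of feasible directions at \bar{x} is \Omega itself:
\[
Z\left( \Omega ;\overline{x}\right) =\left\{ v\in R^{2}:v_{1}\geqq 0\text {, }v_{1}v_{2}\leqq 0\right\} .
\]
% The linearized conditions (31)-(32) give -H_1^+(\bar{x};v)=-v_1\leqq 0 and
% (H_1 G_1)^+(\bar{x};v)=\lim_{t\downarrow 0}(t^2 v_1 v_2)/t=0\leqq 0 (vacuous), so the
% linearizing cone contains the half-plane
\[
\left\{ v\in R^{2}:v_{1}\geqq 0\right\} \not\subseteq Z\left( \Omega ;\overline{x}\right) ,
\]
% e.g. v=(1,1) solves the linearized system but is not a feasible direction: (ACQ) fails.
% Adding the inequality G_1^+(\bar{x};v)=v_2\leqq 0 for the index in T_{00}(\bar{x}) cuts
% the half-plane down to {v: v_1\geqq 0, v_2\leqq 0}, which lies in Z(\Omega;\bar{x}).
```

The direction \(v=(1,1)\) is the obstruction: it satisfies the linearized system (31)-(32) but leaves the feasible set, which is precisely the situation remedied by the modified cones considered below.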

Based on the definition of the index sets, we replace the constraint \( H_{t}\left( x\right) G_{t}\left( x\right) \leqq 0\), \(t\in T\), by the constraints

$$\begin{aligned}{} & {} H_{t}\left( x\right) =0,\ G_{t}\left( x\right) \geqq 0,t\in T_{0+}\left( {\overline{x}}\right) \\{} & {} H_{t}\left( x\right) \geqq 0,\ G_{t}\left( x\right) \leqq 0,t\in T_{00}\left( {\overline{x}}\right) \cup T_{+0}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{+-}\left( {\overline{x}}\right) , \end{aligned}$$

in which the index sets depend on \({\overline{x}}\).

Then, we define the following vector optimization problem derived from (MPVC), some of the constraints of which depend on the point \( {\overline{x}}\):

$$\begin{aligned} \begin{array}{c} V\text {-minimize }f(x):=\left( f_{1}(x),...,f_{p}(x)\right) \\ g_{j}(x)\leqq 0, j=1,...,m, \\ h_{s}\left( x\right) =0, \ \ s=1,...,q, \\ H_{t}\left( x\right) \geqq 0, t=1,...,r, \\ H_{t}\left( x\right) =0,\ G_{t}\left( x\right) \geqq 0,t\in T_{0+}\left( {\overline{x}}\right) \\ H_{t}\left( x\right) \geqq 0,\ G_{t}\left( x\right) \leqq 0,t\in T_{00}\left( {\overline{x}}\right) \cup T_{+0}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{+-}\left( {\overline{x}}\right) , \\ x\in C. \end{array} \; \text {(MP}\left( {\overline{x}}\right) \text {)} \end{aligned}$$

In order to introduce the modified Abadie constraint qualification, for \( {\overline{x}}\in \Omega \), we define the sets \({\overline{Q}}^{l}\left( {\overline{x}}\right) \), \(l=1,...,p\), and \({\overline{Q}}\left( {\overline{x}} \right) \) analogously to \(Q^{l}\left( {\overline{x}}\right) \) and \(Q\left( {\overline{x}}\right) \), with the vanishing constraints replaced by the constraints of (MP\(\left( {\overline{x}}\right) \)), and \({\overline{Q}}\left( {\overline{x}}\right) =\bigcap \nolimits _{l=1}^{p}{\overline{Q}}^{l}\left( {\overline{x}}\right) \).

Then, the almost linearizing cone for the sets \({\overline{Q}}^{l}\left( {\overline{x}}\right) \) is defined by

$$\begin{aligned} L\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) =L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \cap \left\{ v\in R^{n}:G_{t}^{+}\left( {\overline{x}};v\right) \leqq 0\text {, }\forall t\in T_{00}\left( {\overline{x}}\right) \right\} \text {, }l=1,...,p. \end{aligned}$$
(71)

Hence, the almost linearizing cone for the set \({\overline{Q}}\left( {\overline{x}}\right) \) is given as follows

$$\begin{aligned} L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}}\right) =\bigcap \limits _{l=1}^{p}L\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) . \end{aligned}$$
(72)

Remark 3.14

Note that the only difference between \(L\left( Q\left( {\overline{x}}\right) ; {\overline{x}}\right) \) and \(L\left( {\overline{Q}}\left( {\overline{x}}\right) ; {\overline{x}}\right) \) is that we add the inequality \(G_{t}^{+}\left( {\overline{x}};v\right) \leqq 0,\) \(\forall t\in T_{00}\left( {\overline{x}} \right) \) in \(L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}} \right) \) in comparison to \(L\left( Q\left( {\overline{x}}\right) ;{\overline{x}} \right) \). In particular, we always have the relation

$$\begin{aligned} L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \text {.} \end{aligned}$$
(73)

Proposition 3.15

Let \({\overline{x}}\) be a feasible solution in (MPVC). Then

$$\begin{aligned} \bigcap \limits _{l=1}^{p}Z\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ; {\overline{x}}\right) \subset L\left( Q\left( {\overline{x}}\right) ;{\overline{x}} \right) . \end{aligned}$$
(74)

Proof

By Proposition 3.8, it follows that

$$\begin{aligned} \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) . \end{aligned}$$
(75)

Moreover, as it follows from the proof of Proposition 3.8, applied to the sets \({\overline{Q}}^{l}\left( {\overline{x}}\right) \), one has

$$\begin{aligned} Z\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset L\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \text {, }\forall l=1,...,p. \end{aligned}$$
(76)

Thus, (76) and (72) yield

$$\begin{aligned} \bigcap \limits _{l=1}^{p}Z\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ; {\overline{x}}\right) \subset \bigcap \limits _{l=1}^{p}L\left( {\overline{Q}} ^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) =L\left( {\overline{Q}} \left( {\overline{x}}\right) ;{\overline{x}}\right) \text {.} \end{aligned}$$
(77)

Since \({\overline{Q}}^{l}\left( {\overline{x}}\right) \subseteq Q^{l}\left( {\overline{x}}\right) \), \(l=1,...,p\), one has

$$\begin{aligned}{} & {} Z\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \text {, }\forall l=1,...,p\text {,} \end{aligned}$$
(78)
$$\begin{aligned}{} & {} L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \text {.} \end{aligned}$$
(79)

Combining (77) and (79), we get (74).

\(\square \)

Now, we are ready to introduce the modified Abadie constraint qualification which we name the VC-Abadie constraint qualification.

Definition 3.16

Let \({\overline{x}}\in \Omega \) be an efficient solution in (MPVC). Then, the VC-Abadie constraint qualification (VC-ACQ) holds at \({\overline{x}}\) for (MPVC) iff

$$\begin{aligned} L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ; {\overline{x}}\right) \text {.} \end{aligned}$$
(80)

Now, we define a modified Abadie constraint qualification for (MP\(\left( {\overline{x}}\right) \)) and show that the VC-Abadie constraint qualification (VC-ACQ) then holds at \({\overline{x}}\) for (MPVC), even in cases in which the Abadie constraint qualification (ACQ) is not satisfied.

Definition 3.17

Let \({\overline{x}}\in \Omega \) be a (weakly) efficient solution in (MPVC). Then, the modified Abadie constraint qualification (MACQ) holds at \({\overline{x}}\) for (MP\(\left( {\overline{x}} \right) \)) iff

$$\begin{aligned} L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq \bigcap \limits _{l=1}^{p}Z\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ; {\overline{x}}\right) \text {.} \end{aligned}$$
(81)

We now give a sufficient condition for the VC-Abadie constraint qualification (VC-ACQ) to be satisfied at an efficient solution in (MPVC).

Lemma 3.18

Let \({\overline{x}}\in \Omega \) be an efficient solution in (MPVC). If the modified Abadie constraint qualification (MACQ) holds at \({\overline{x}}\) for (MP\(\left( {\overline{x}}\right) \)), then the VC-Abadie constraint qualification (VC-ACQ) holds at \({\overline{x}}\) for (MPVC).

Proof

Assume that \({\overline{x}}\in \Omega \) is an efficient solution in (MPVC) and, moreover, the modified Abadie constraint qualification (MACQ) holds at \( {\overline{x}}\) for (MP\(\left( {\overline{x}}\right) \)). Then, by Definition 3.17, it follows that

$$\begin{aligned} L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq \bigcap \limits _{l=1}^{p}Z\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ; {\overline{x}}\right) \text {.} \end{aligned}$$
(82)

Since \({\overline{Q}}^{l}\left( {\overline{x}}\right) \subseteq Q^{l}\left( {\overline{x}}\right) \), \(l=1,...,p\), we have that

$$\begin{aligned}{} & {} Z\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) , l=1,...,p, \end{aligned}$$
(83)
$$\begin{aligned}{} & {} L\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) , l=1,...,p. \end{aligned}$$
(84)

Hence, (84) implies

$$\begin{aligned} L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}}\right) =\bigcap \limits _{l=1}^{p}L\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq \bigcap \limits _{l=1}^{p}L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) =L\left( Q\left( {\overline{x}} \right) ;{\overline{x}}\right) \text {.} \end{aligned}$$
(85)

Then, (83) gives

$$\begin{aligned} \bigcap \limits _{l=1}^{p}Z\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ; {\overline{x}}\right) \subseteq \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \text {.} \end{aligned}$$
(86)

Thus, by (82), (85) and (86), we get

$$\begin{aligned} L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ; {\overline{x}}\right) \text {,} \end{aligned}$$

as was to be shown. \(\square \)

Since the VC-Abadie constraint qualification (VC-ACQ) is weaker than the Abadie constraint qualification (ACQ), the necessary optimality conditions (59)-(65) may fail to hold under (VC-ACQ) alone. Therefore, in the next theorem, we formulate the Karush–Kuhn–Tucker necessary optimality conditions for a feasible solution \( {\overline{x}}\) to be an efficient solution in (MPVC) under the VC-Abadie constraint qualification (VC-ACQ).

Theorem 3.19

(Karush–Kuhn–Tucker Type Necessary Optimality Conditions). Let \({\overline{x}} \in \Omega \) be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. We also assume that \( f_{i}\), \(i\in I\), \(g_{j}\), \(j\in J\), \(h_{s}\), \(s\in S\), \(H_{t}\), \(t\in T\), \( G_{t}\), \(t\in T\), are directionally differentiable functions at \({\overline{x}} \), \(f_{i}^{+}\left( {\overline{x}};\cdot \right) \), \(i\in I\), \(g_{j}^{+}\left( {\overline{x}};\cdot \right) \), \(j\in J({\overline{x}})\), \(-H_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \), \(G_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{+0}\left( {\overline{x}}\right) \), are convex functions, \(h_{s}^{+}\left( {\overline{x}};\cdot \right) \), \(s\in S\), \(H_{t}^{+}\left( {\overline{x}};\cdot \right) \), \( t\in T_{0+}\left( {\overline{x}}\right) \), are linear functions, \(g_{j}\), \( j\in J^{<}({\overline{x}})\), \(H_{t}\), \(t\in T_{+}({\overline{x}})\), \(G_{t}\), \( t\in T_{0}\left( {\overline{x}}\right) \cup T_{+-}\left( {\overline{x}}\right) \), are continuous functions at \({\overline{x}}\) and, moreover, the VC-Abadie constraint qualification (VC-ACQ) is satisfied at \({\overline{x}}\) for (MPVC). If there exists \(v_{0}\in relint\,Z\left( C;{\overline{x}}\right) \) such that \( \Psi \left( v_{0}\right) <0\) and \(\Phi \left( v_{0}\right) \leqq 0\), then there exist Lagrange multipliers \({\overline{\lambda }}\in R^{p}\), \({\overline{\mu }}\in R^{m}\), \({\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\) and \(\overline{\vartheta }^{G}\in R^{r}\) such that the following conditions

$$\begin{aligned} \sum _{i=1}^{p}{\overline{\lambda }}_{i}f_{i}^{+}\left( {\overline{x}};v\right) +\sum _{j=1}^{m}{\overline{\mu }}_{j}g_{j}^{+}\left( {\overline{x}};v\right) +\sum _{s=1}^{q}{\overline{\xi }}_{s}h_{s}^{+}\left( {\overline{x}};v\right) -\sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{H}H_{t}^{+}\left( {\overline{x}};v\right) +\sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{G}G_{t}^{+}\left( {\overline{x}};v\right) \geqq 0, \quad \forall v\in Z\left( C;{\overline{x}}\right) , \end{aligned}$$
(87)
$$\begin{aligned}{} & {} {\overline{\mu }}_{j}g_{j}({\overline{x}})=0\text {, }j\in J, \end{aligned}$$
(88)
$$\begin{aligned}{} & {} {\overline{\vartheta }}_{t}^{H}H_{t}\left( {\overline{x}}\right) =0\text {, }t\in T\text {,} \end{aligned}$$
(89)
$$\begin{aligned}{} & {} {\overline{\vartheta }}_{t}^{G}G_{t}\left( {\overline{x}}\right) =0\text {, }t\in T\text {,} \end{aligned}$$
(90)
$$\begin{aligned}{} & {} {\overline{\lambda }}\ge 0, {\overline{\mu }}\geqq 0\text {,} \end{aligned}$$
(91)
$$\begin{aligned}{} & {} {\overline{\vartheta }}_{t}^{H}=0\text {, }t\in T_{+}\left( {\overline{x}}\right) \text {, }{\overline{\vartheta }}_{t}^{H}\geqq 0\text {, }t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \text {, }{\overline{\vartheta }}_{t}^{H}\text { free, }t\in T_{0+}\left( {\overline{x}}\right) \text {,} \end{aligned}$$
(92)
$$\begin{aligned}{} & {} {\overline{\vartheta }}_{t}^{G}=0\text {, }t\in T_{0+}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{+-}\left( {\overline{x}}\right) \text {, }{\overline{\vartheta }}_{t}^{G}\geqq 0\text {, }t\in T_{00}\left( {\overline{x}}\right) \cup T_{+0}\left( {\overline{x}}\right) \end{aligned}$$
(93)

hold.

Now, we prove the sufficiency of the Karush–Kuhn–Tucker optimality conditions for the considered multiobjective programming problem (MPVC) with vanishing constraints under appropriate convexity hypotheses.

Theorem 3.20

Let \({\overline{x}}\) be a feasible solution in (MPVC) and the Karush–Kuhn–Tucker type necessary optimality conditions (59)–(65) be satisfied at \({\overline{x}}\) for (MPVC) with Lagrange multipliers \( {\overline{\lambda }}\in R_{+}^{p}\), \({\overline{\mu }}\in R_{+}^{m}\), \( {\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\) and \( \overline{\vartheta }^{G}\in R^{r}\). Further, we assume that \(f_{i}\), \(i\in I \), \(g_{j}\), \(j\in J({\overline{x}})\), \(h_{s}\), \(s\in S^{+}\left( {\overline{x}}\right) :=\left\{ s\in S:{\overline{\xi }}_{s}>0\right\} \), \(-h_{s}\), \(s\in S^{-}\left( {\overline{x}}\right) :=\left\{ s\in S:{\overline{\xi }} _{s}<0\right\} \), \(-H_{t}\), \(t\in T_{00}({\overline{x}})\cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}({\overline{x}})\), \(G_{t}\), \(t\in T_{+0}\left( {\overline{x}}\right) \), are convex on \(\Omega \). Then \({\overline{x}}\) is a weak Pareto solution in (MPVC).

Proof

We proceed by contradiction. Suppose, contrary to the result, that \( {\overline{x}}\) is not a weak Pareto solution in (MPVC). Thus, by Definition 3.1, there exists \({\widetilde{x}}\in \Omega \) such that

$$\begin{aligned} f({\widetilde{x}})<f({\overline{x}})\text {.} \end{aligned}$$
(94)

By assumption, each \(f_{i}\), \(i\in I\), is convex on \(\Omega \). Hence, by Proposition 2.6, (94) yields

$$\begin{aligned} f_{i}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) <0, i=1,...,p\text {.} \end{aligned}$$
(95)

Since \({\overline{\lambda }}\ge 0\), the inequalities (95) give

$$\begin{aligned} \sum _{i=1}^{p}{\overline{\lambda }}_{i}f_{i}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) <0\text {.} \end{aligned}$$
(96)

From \({\overline{x}},{\widetilde{x}}\in \Omega \) and the definition of \(J\left( {\overline{x}}\right) \), it follows that

$$\begin{aligned}{} & {} g_{j}({\widetilde{x}})\leqq g_{j}({\overline{x}})=0\text {, }j\in J\left( {\overline{x}}\right) \text {,} \end{aligned}$$
(97)
$$\begin{aligned}{} & {} h_{s}({\widetilde{x}})=h_{s}({\overline{x}})=0\text {, }s\in S\text {,} \end{aligned}$$
(98)
$$\begin{aligned}{} & {} -H_{t}\left( {\widetilde{x}}\right) \leqq -H_{t}\left( {\overline{x}}\right) =0 \text {, }t\in T_{00}({\overline{x}})\cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}({\overline{x}})\text {,} \end{aligned}$$
(99)
$$\begin{aligned}{} & {} G_{t}\left( {\widetilde{x}}\right) \leqq G_{t}\left( {\overline{x}}\right) =0 \text {, }t\in T_{+0}\left( {\overline{x}}\right) \text {.} \end{aligned}$$
(100)

By assumption, \(g_{j}\), \(j\in J({\overline{x}})\), \(h_{s}\), \(s\in S^{+}\left( {\overline{x}}\right) =\left\{ s\in S:\overline{\xi }_{s}>0\right\} \), \(-h_{s}\), \(s\in S^{-}\left( {\overline{x}}\right) =\left\{ s\in S:{\overline{\xi }} _{s}<0\right\} \), \(-H_{t}\), \(t\in T_{00}({\overline{x}})\cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}({\overline{x}})\), \(G_{t}\), \(t\in T_{+0}\left( {\overline{x}}\right) \), are convex on \(\Omega \). Then, by Proposition 2.6, (97)-(100) imply

$$\begin{aligned}{} & {} g_{j}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) \leqq 0\text {, }j\in J\left( {\overline{x}}\right) \text {,} \end{aligned}$$
(101)
$$\begin{aligned}{} & {} h_{s}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) \leqq 0\text {, }s\in S^{+}\left( {\overline{x}}\right) \text {,} \end{aligned}$$
(102)
$$\begin{aligned}{} & {} -h_{s}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) \leqq 0\text {, }s\in S^{-}\left( {\overline{x}}\right) \text {,} \end{aligned}$$
(103)
$$\begin{aligned}{} & {} -H_{t}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) \leqq 0\text {, }t\in T_{00}({\overline{x}})\cup T_{0-}({\overline{x}})\cup T_{0+}({\overline{x}} )\text {,} \end{aligned}$$
(104)
$$\begin{aligned}{} & {} G_{t}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) \leqq 0\text {, } t\in T_{+0}({\overline{x}}). \end{aligned}$$
(105)

Taking into account that \({\overline{\mu }}_{j}=0\), \(j\in J^{<}\left( {\overline{x}}\right) \), \({\overline{\xi }}_{s}=0\), \(s\notin S^{+}\left( {\overline{x}}\right) \cup S^{-}\left( {\overline{x}}\right) \), \({\overline{\vartheta }}_{t}^{H}=0\), \(t\in T_{+}\left( {\overline{x}}\right) \), \({\overline{\vartheta }}_{t}^{G}=0\), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{+-}\left( {\overline{x}}\right) \), the foregoing inequalities yield, respectively,

$$\begin{aligned}{} & {} \sum _{j=1}^{m}{\overline{\mu }}_{j}g_{j}^{+}\left( {\overline{x}};{\widetilde{x}}- {\overline{x}}\right) \leqq 0\text {,} \end{aligned}$$
(106)
$$\begin{aligned}{} & {} \sum _{s=1}^{q}{\overline{\xi }}_{s}h_{s}^{+}\left( {\overline{x}};{\widetilde{x}}- {\overline{x}}\right) \leqq 0\text {,} \end{aligned}$$
(107)
$$\begin{aligned}{} & {} \sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{H}H_{t}^{+}\left( {\overline{x}}; {\widetilde{x}}-{\overline{x}}\right) \geqq 0\text {,} \end{aligned}$$
(108)
$$\begin{aligned}{} & {} \sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{G}G_{t}^{+}\left( {\overline{x}}; {\widetilde{x}}-{\overline{x}}\right) \leqq 0\text {.} \end{aligned}$$
(109)

Combining (96) and (106)-(109), we get that the inequality

$$\begin{aligned}{} & {} \sum _{i=1}^{p}{\overline{\lambda }}_{i}f_{i}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) +\sum _{j=1}^{m}{\overline{\mu }}_{j}g_{j}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) +\sum _{s=1}^{q}{\overline{\xi }} _{s}h_{s}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) -\\{} & {} \quad \sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{H}H_{t}^{+}\left( {\overline{x}}; {\widetilde{x}}-{\overline{x}}\right) +\sum _{t=1}^{r}{\overline{\vartheta }} _{t}^{G}G_{t}^{+}\left( {\overline{x}};{\widetilde{x}}-{\overline{x}}\right) <0 \end{aligned}$$

holds, contradicting the Karush–Kuhn–Tucker type necessary optimality condition (59). This means that \({\overline{x}}\) is a weak Pareto solution in (MPVC). \(\square \)

In order to prove the sufficient optimality conditions for a feasible solution \({\overline{x}}\) to be a Pareto solution in (MPVC), stronger convexity assumptions need to be imposed on the objective functions.

Theorem 3.21

Let \({\overline{x}}\) be a feasible solution in (MPVC) and the Karush–Kuhn–Tucker type necessary optimality conditions (59)-(65) be satisfied at \({\overline{x}}\) for (MPVC) with Lagrange multipliers \( {\overline{\lambda }}\in R_{+}^{p}\), \({\overline{\mu }}\in R_{+}^{m}\), \( {\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\) and \( \overline{\vartheta }^{G}\in R^{r}\). Further, we assume that \(f_{i}\), \(i\in I \), are strictly convex on \(\Omega \), \(g_{j}\), \(j\in J({\overline{x}})\), \( h_{s} \), \(s\in S^{+}\left( {\overline{x}}\right) =\left\{ s\in S:{\overline{\xi }}_{s}>0\right\} \), \(-h_{s}\), \(s\in S^{-}\left( {\overline{x}}\right) :=\left\{ s\in S:{\overline{\xi }}_{s}<0\right\} \), \(-H_{t}\), \(t\in T_{00}({\overline{x}} )\cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}({\overline{x}})\), \(G_{t}\), \( t\in T_{+0}\left( {\overline{x}}\right) \), are convex on \(\Omega \). Then \( {\overline{x}}\) is a Pareto solution in (MPVC).

Remark 3.22

In Theorem 3.21, all objective functions \(f_{i}\), \(i\in I\), are assumed to be strictly convex on \(\Omega \) in order to prove that \({\overline{x}}\in \Omega \) is a Pareto solution in (MPVC). However, as follows from the proof of that theorem, it suffices to assume in Theorem 3.21 that at least one of the objective functions \(f_{i}\), \(i\in I\), is strictly convex on \(\Omega \), provided that the Lagrange multiplier \({\overline{\lambda }}_{i}\) associated with this objective function is positive.

Remark 3.23

If \({\overline{x}}\) is a feasible solution at which the Karush–Kuhn–Tucker type necessary optimality conditions (87)-(93) are satisfied in place of (59)-(65), then the functions \(G_{t}\), \(t\in T_{00}\left( {\overline{x}}\right) \), should also be assumed to be convex on \(\Omega \) in the above sufficient optimality conditions.

Now, we illustrate the results established in the paper by an example of a convex directionally differentiable multiobjective programming problem with vanishing constraints.

Example 3.24

Consider a directionally differentiable multiobjective programming problem with vanishing constraints defined by

(MPVC1)

Note that \(\Omega =\left\{ \left( x_{1},x_{2}\right) \in R^{2}:x_{2}\geqq 0 \text {, }x_{2}\left( -x_{1}-x_{2}\right) \leqq 0\right\} \), that \({\overline{x}} =\left( 0,0\right) \) is a feasible solution in (MPVC1) and that \(T_{00}\left( {\overline{x}}\right) =\left\{ 1\right\} \). Now, we determine the sets \( Q^{1}\left( {\overline{x}}\right) \), \(Q^{2}\left( {\overline{x}}\right) \), \( Q\left( {\overline{x}}\right) \) and \({\overline{Q}}\left( {\overline{x}}\right) \). By definition, we have

$$\begin{aligned}{} & {} Q^{1}\left( {\overline{x}}\right) =\left\{ \left( x_{1},x_{2}\right) \in R^{2}:x_{1}+\left| x_{2}\right| \leqq 0, x_{2}\geqq 0\text {, } x_{2}\left( -x_{1}-x_{2}\right) \leqq 0\right\} ,\\{} & {} Q^{2}\left( {\overline{x}}\right) =\left\{ \left( x_{1},x_{2}\right) \in R^{2}:\left| x_{1}\right| -x_{2}\leqq 0, x_{2}\geqq 0\text {, } x_{2}\left( -x_{1}-x_{2}\right) \leqq 0\right\} ,\\{} & {} Q\left( {\overline{x}}\right) =\left\{ \left( x_{1},x_{2}\right) \in R^{2}:x_{1}+\left| x_{2}\right| \leqq 0, \left| x_{1}\right| -x_{2}\leqq 0, x_{2}\geqq 0\text {, }x_{2}\left( -x_{1}-x_{2}\right) \leqq 0\right\} ,\\{} & {} {\overline{Q}}\left( {\overline{x}}\right) =\left\{ \left( x_{1},x_{2}\right) \in R^{2}:x_{1}+\left| x_{2}\right| \leqq 0, \left| x_{1}\right| -x_{2}\leqq 0, x_{2}\geqq 0\text {, } -x_{1}-x_{2}\leqq 0\right\} . \end{aligned}$$

Further, by Definition 2.9 and the definition of the almost linearizing cone (see (5), (10)), we have, respectively,

$$\begin{aligned}{} & {} Z\left( Q^{1}\left( {\overline{x}}\right) ;{\overline{x}}\right) =\left\{ \left( v_{1},v_{2}\right) \in R^{2}:v_{1}+\left| v_{2}\right| \leqq 0, v_{2}\geqq 0\text {, }v_{2}\left( -v_{1}-v_{2}\right) \leqq 0\right\} ,\\{} & {} Z\left( Q^{2}\left( {\overline{x}}\right) ;{\overline{x}}\right) =\left\{ \left( v_{1},v_{2}\right) \in R^{2}:\left| v_{1}\right| -v_{2}\leqq 0, v_{2}\geqq 0\text {, }v_{2}\left( -v_{1}-v_{2}\right) \leqq 0\right\} ,\\{} & {} L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) =\left\{ \left( v_{1},v_{2}\right) \in R^{2}:v_{1}+\left| v_{2}\right| \leqq 0, \left| v_{1}\right| -v_{2}\leqq 0, v_{2}\geqq 0\right\} ,\\{} & {} L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}}\right) =\left\{ \left( v_{1},v_{2}\right) \in R^{2}:v_{1}+\left| v_{2}\right| \leqq 0, \left| v_{1}\right| -v_{2}\leqq 0, v_{2}\geqq 0 \text {, }-v_{1}-v_{2}\leqq 0\right\} . \end{aligned}$$

Note that the Abadie constraint qualification (ACQ) is not satisfied at \( {\overline{x}}=\left( 0,0\right) \) for (MPVC1), since the relation \(L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset \bigcap \nolimits _{l=1}^{2}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \) fails. However, the VC-Abadie constraint qualification (VC-ACQ) holds at \({\overline{x}}=\left( 0,0\right) \) for (MPVC1), since the relation \(L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}} \right) \subset \bigcap \nolimits _{l=1}^{2}Z\left( Q^{l}\left( {\overline{x}} \right) ;{\overline{x}}\right) \) is satisfied. As this example already illustrates, the VC-Abadie constraint qualification (VC-ACQ) is weaker than the Abadie constraint qualification (ACQ). Moreover, the Karush–Kuhn–Tucker type necessary optimality conditions (87)-(93) are fulfilled at \({\overline{x}}\) with Lagrange multipliers \({\overline{\lambda }}_{1}=\frac{1}{2}\), \({\overline{\lambda }}_{2}=\frac{1}{4}\), \( \overline{\vartheta }_{1}^{H}=\frac{1}{4}\), \({\overline{\vartheta }}_{1}^{G}= \frac{1}{4}\). Further, note that the functions constituting (MPVC1) are convex on \(\Omega \) and the objective function \(f_{1}\) is strictly convex on \(\Omega \). Hence, by Theorem 3.21, \({\overline{x}}=\left( 0,0\right) \) is a Pareto solution in (MPVC1). Note also that the optimality conditions established in the literature (see, for example, (Achtziger et al., 2013; Dorsch et al., 2012; Dussault et al., 2019; Hoheisel & Kanzow, 2007, 2008, 2009; Hoheisel et al., 2012; Izmailov & Solodov, 2009)) are not applicable to the considered multiobjective programming problem (MPVC1) with vanishing constraints, since the results established in the aforementioned works were proved for scalar optimization problems with vanishing constraints. Moreover, the results presented in Guu et al. (2017) and Mishra et al. (2015) were established for differentiable multiobjective programming problems with vanishing constraints only and, therefore, they are not useful for finding (weak) Pareto solutions of nondifferentiable vector optimization problems such as the directionally differentiable multiobjective programming problem (MPVC1) with vanishing constraints.
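The basic feasibility claims in Example 3.24 can be checked mechanically. Below is a minimal numeric sketch, reading off from the description of \(\Omega \) the vanishing-constraint data \(H_{1}(x)=x_{2}\) and \(G_{1}(x)=-x_{1}-x_{2}\) (the Python function names are ours):

```python
# Numeric check of the feasibility claims in Example 3.24 (MPVC1).
# From the description of Omega, the vanishing constraint reads
#   H1(x) = x2 >= 0,   H1(x) * G1(x) <= 0   with   G1(x) = -x1 - x2.

def H1(x): return x[1]
def G1(x): return -x[0] - x[1]

def feasible(x):
    """Membership in Omega = {x in R^2 : x2 >= 0, x2*(-x1 - x2) <= 0}."""
    return H1(x) >= 0 and H1(x) * G1(x) <= 0

xbar = (0.0, 0.0)
assert feasible(xbar)              # xbar = (0,0) lies in Omega

# Index set T_00(xbar) = {t : H_t(xbar) = 0 and G_t(xbar) = 0}:
T00 = [t for t, (Ht, Gt) in enumerate([(H1, G1)], start=1)
       if Ht(xbar) == 0 and Gt(xbar) == 0]
assert T00 == [1]                  # matches T_00(xbar) = {1}

# A few sample points illustrating both constraint branches:
assert feasible((1.0, 2.0))        # H1 > 0, G1 < 0
assert not feasible((1.0, -1.0))   # violates x2 >= 0
assert not feasible((-2.0, 1.0))   # x2 * (-x1 - x2) = 1 > 0
```

The check confirms that the constraint \(t=1\) is degenerate at \({\overline{x}}\), which is exactly the situation in which the index sets \(T_{00}\), \(T_{0+}\), \(T_{0-}\) drive the analysis.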

4 Wolfe duality

In this section, for the considered vector optimization problem (MPVC) with vanishing constraints, we define its vector Wolfe dual problem. Then we prove several duality results between problems (MPVC) and (WDVC) under convexity assumptions imposed on the functions constituting them.

We now define the vector-valued Lagrange function L for (MPVC) as follows

$$\begin{aligned}{} & {} L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) :=\left( f_{1}\left( y\right) ,...,f_{p}\left( y\right) \right) \\{} & {} \quad +\left( \sum _{j=1}^{m}\mu _{j}g_{j}(y)+\sum _{s=1}^{q}\xi _{s}h_{s}(y)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}(y)+\sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}(y)\right) e\text {,} \end{aligned}$$

where \(e=\left[ 1,...,1\right] ^{T}\in R^{p}\). Then, we rewrite the above definition of the vector-valued Lagrange function L componentwise as follows:

$$\begin{aligned}{} & {} L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) :=\left( L_{1}\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) ,...,L_{p}\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \right) :=\\{} & {} \left( f_{1}\left( y\right) +\sum _{j=1}^{m}\mu _{j}g_{j}(y)+\sum _{s=1}^{q}\xi _{s}h_{s}(y)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}(y)+\sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}(y),...,\right. \\{} & {} \left. f_{p}\left( y\right) +\sum _{j=1}^{m}\mu _{j}g_{j}(y)+\sum _{s=1}^{q}\xi _{s}h_{s}(y)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}(y)+\sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}(y)\right) . \end{aligned}$$
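The componentwise definition of L above can be sketched in code. The following minimal illustration uses hypothetical data (all functions and multiplier values below are invented for the example and are not part of (MPVC)); it shows that every component \(L_{i}\) differs from \(f_{i}\) by one and the same scalar penalty term:

```python
# Sketch of the vector-valued Lagrange function
#   L(y, mu, xi, thH, thG) = (f_1(y), ..., f_p(y))
#       + ( sum mu_j g_j(y) + sum xi_s h_s(y)
#           - sum thH_t H_t(y) + sum thG_t G_t(y) ) * e,
# where e = (1, ..., 1) in R^p.  All problem data here are hypothetical.

def vector_lagrangian(y, mu, xi, thH, thG, f, g, h, H, G):
    penalty = (sum(m * gj(y) for m, gj in zip(mu, g))
               + sum(s * hs(y) for s, hs in zip(xi, h))
               - sum(t * Ht(y) for t, Ht in zip(thH, H))
               + sum(t * Gt(y) for t, Gt in zip(thG, G)))
    # Every component receives the same scalar penalty term:
    return [fi(y) + penalty for fi in f]

# Hypothetical data: p = 2, m = q = r = 1.
f = [lambda y: y[0] ** 2, lambda y: (y[0] - 1) ** 2]
g = [lambda y: y[0] - 2]      # inequality constraint g(y) <= 0
h = [lambda y: 0.0]           # equality constraint
H = [lambda y: y[0]]          # vanishing constraint functions
G = [lambda y: -y[0]]

L = vector_lagrangian((0.5,), mu=[1.0], xi=[2.0], thH=[0.5], thG=[0.25],
                      f=f, g=g, h=h, H=H, G=G)
# L_1 - f_1(y) equals L_2 - f_2(y): the penalty is shared across components.
assert abs((L[0] - 0.25) - (L[1] - 0.25)) < 1e-12
assert abs(L[0] + 1.625) < 1e-12
```

This shared-penalty structure is precisely why the scalarized inequality (111) below follows from the componentwise inequality (110) once the multipliers \(\lambda \) sum to one.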

For \(x\in \Omega \), we define the vector Wolfe dual problem (WDVC\(\left( x\right) \)) related to the considered multiobjective programming problem (MPVC) with vanishing constraints.

Let

$$\begin{aligned} \Gamma \left( x\right) =\left\{ \left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) :\text {verifying the constraints of (WDVC}\left( x\right) \text {)}\right\} \end{aligned}$$

be the set of all feasible solutions in (WDVC\(\left( x\right) \)). Further, we define the set \(Y\left( x\right) \) as follows: \(Y\left( x\right) =\left\{ y\in X:\left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \in \Gamma \left( x\right) \right\} \) and \(J^{+}\left( x\right) :=\left\{ j\in J:\mu _{j}>0\right\} \).

Remark 4.1

In the Wolfe dual problem (WDVC\(\left( x\right) \)) given above, the significance of \(w_{t}\) and \(\theta _{t}\) is the same as that of \(v_{t}\) and \(\beta _{t}\) in Theorem 1 of Achtziger and Kanzow (2008).

Now, along the lines of Hu et al. (2020), we define the following vector dual problem in the sense of Wolfe related to the considered multicriteria optimization problem (MPVC) with vanishing constraints:

where the set \(\Gamma \) of all feasible solutions in (WDVC) is defined by \( \Gamma =\bigcap \limits _{x\in \Omega }\Gamma \left( x\right) \). Further, let us define the set Y by \(Y=\left\{ y\in X:\left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \in \Gamma \right\} \).

Theorem 4.2

(Weak duality): Let x and \(\left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \) be any feasible solutions for (MPVC) and (WDVC), respectively. Further, we assume that one of the following hypotheses is fulfilled:

  1. A)

    \(f_{i}\), \(i=1,...,p\), \(g_{j}\), \(j\in J^{+}\left( x\right) \), \( h_{s} \), \(s\in S^{+}\left( x\right) \), \(-h_{s}\), \(s\in S^{-}\left( x\right) \), \(-H_{t}\), \(t\in T_{00}\left( x\right) \cup T_{0-}\left( x\right) \cup T_{0+}^{+}\left( x\right) \), where \(T_{0+}^{+}\left( x\right) :=\left\{ t\in T_{0+}:\vartheta _{t}^{H}>0\right\} \), \(H_{t}\), \(t\in T_{0+}^{-}\left( x\right) :=\left\{ t\in T_{0+}:\vartheta _{t}^{H}<0\right\} \), \(G_{t}\), \( t\in T_{+0}\left( x\right) \), are convex on \(\Omega \cup Y\).

  2. B)

    the vector-valued Lagrange function \(L\left( \cdot ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \) is convex on \(\Omega \cup Y\).

Then, \(f\left( x\right) \nless L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \).

Proof

We proceed by contradiction. Suppose, contrary to the result, that

$$\begin{aligned} f\left( x\right) <L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) . \end{aligned}$$

Hence, by definition of the Lagrange function L, the aforesaid inequality gives

$$\begin{aligned} f_{i}\left( x\right) <f_{i}\left( y\right) +\sum _{j=1}^{m}\mu _{j}g_{j}(y)+\sum _{s=1}^{q}\xi _{s}h_{s}(y)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}(y)+\sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}(y), i=1,...,p.\nonumber \\ \end{aligned}$$
(110)

Thus, multiplying both sides of each inequality in (110) by \(\lambda _{i}\) and summing over \(i\), by \(\left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \in \Gamma \), which implies \(\lambda _{i}\geqq 0\) and \(\sum _{i=1}^{p}\lambda _{i}=1\), it follows that

$$\begin{aligned} \sum _{i=1}^{p}\lambda _{i}f_{i}(x)<\sum _{i=1}^{p}\lambda _{i}f_{i}(y)+\sum _{j=1}^{m}\mu _{j}g_{j}(y)+\sum _{s=1}^{q}\xi _{s}h_{s}(y)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}(y)+\sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}(y)\text {.}\nonumber \\ \end{aligned}$$
(111)

A) Now, we prove this theorem under hypothesis A).

From convexity assumptions, by Proposition 2.6, the inequalities

$$\begin{aligned}{} & {} f_{i}(x)-f_{i}(y)\geqq f_{i}^{+}(y;x-y), { \ \ }i=1,...,p, \end{aligned}$$
(112)
$$\begin{aligned}{} & {} 0\geqq g_{_{j}}(x)\geqq g_{_{j}}(y)+g_{j}^{+}(y;x-y), { \ \ }j\in J^{+}\left( x\right) , \end{aligned}$$
(113)
$$\begin{aligned}{} & {} 0=h_{s}(x)\geqq h_{s}(y)+h_{s}^{+}(y;x-y),{ \ \ }s\in S^{+}\left( x\right) , \end{aligned}$$
(114)
$$\begin{aligned}{} & {} 0=-h_{s}(x)\geqq -h_{s}(y)-h_{s}^{+}(y;x-y),{ \ \ }s\in S^{-}\left( x\right) , \end{aligned}$$
(115)
$$\begin{aligned}{} & {} 0=-H_{t}(x)\geqq -H_{t}(y)-H_{t}^{+}(y;x-y)\text {, }t\in T_{00}\left( x\right) \cup T_{0-}\left( x\right) \cup T_{0+}^{+}\left( x\right) , \end{aligned}$$
(116)
$$\begin{aligned}{} & {} 0=H_{t}(x)\geqq H_{t}(y)+H_{t}^{+}(y;x-y),{ \ \ }t\in T_{0+}^{-}\left( x\right) , \end{aligned}$$
(117)
$$\begin{aligned}{} & {} 0=G_{t}(x)\geqq G_{t}(y)+G_{t}^{+}(y;x-y),{ \ \ }t\in T_{+0}\left( x\right) \end{aligned}$$
(118)

hold. Multiplying (112)–(118) by the corresponding Lagrange multipliers and then adding both sides of the resulting inequalities, we have, respectively,

$$\begin{aligned}{} & {} \sum _{i=1}^{p}\lambda _{i}f_{i}(x)\geqq \sum _{i=1}^{p}\lambda _{i}f_{i}(y)+\sum _{i=1}^{p}\lambda _{i}f_{i}^{+}(y;x-y)\text {,} \end{aligned}$$
(119)
$$\begin{aligned}{} & {} 0\geqq \sum _{j\in J^{+}\left( x\right) }\mu _{j}g_{j}(y)+\sum _{j\in J^{+}\left( x\right) }\mu _{j}g_{j}^{+}(y;x-y)\text {,} \end{aligned}$$
(120)
$$\begin{aligned}{} & {} 0\geqq \sum _{s\in S^{+}\left( x\right) }\xi _{s}h_{s}(y)+\sum _{s\in S^{+}\left( x\right) }\xi _{s}h_{s}^{+}(y;x-y)\text {,} \end{aligned}$$
(121)
$$\begin{aligned}{} & {} 0\geqq -\sum _{s\in S^{-}\left( x\right) }\left( -\xi _{s}\right) h_{s}(y)-\sum _{s\in S^{-}\left( x\right) }\left( -\xi _{s}\right) h_{s}^{+}(y;x-y)\text {,} \end{aligned}$$
(122)
$$\begin{aligned}{} & {} 0\geqq -\sum _{t\in T_{00}\left( x\right) \cup T_{0-}\left( x\right) \cup T_{0+}^{+}\left( x\right) }\vartheta _{t}^{H}H_{t}(y)-\sum _{t\in T_{00}\left( x\right) \cup T_{0-}\left( x\right) \cup T_{0+}^{+}\left( x\right) }\vartheta _{t}^{H}H_{t}^{+}(y;x-y)\text {,} \end{aligned}$$
(123)
$$\begin{aligned}{} & {} 0\geqq -\sum _{t\in T_{0+}^{-}\left( x\right) }\vartheta _{t}^{H}H_{t}(y)+\sum _{t\in T_{0+}^{-}\left( x\right) }\left( -\vartheta _{t}^{H}\right) H_{t}^{+}(y;x-y)\text {,} \end{aligned}$$
(124)
$$\begin{aligned}{} & {} 0\geqq \sum _{t\in T_{+0}\left( x\right) }\vartheta _{t}^{G}G_{t}(y)+\sum _{t\in T_{+0}\left( x\right) }\vartheta _{t}^{G}G_{t}^{+}(y;x-y)\text {.} \end{aligned}$$
(125)

Taking into account that the remaining Lagrange multipliers are equal to 0, by (119)-(125), we obtain that the inequality

$$\begin{aligned}{} & {} \sum _{i=1}^{p}\lambda _{i}f_{i}(x)\geqq \sum _{i=1}^{p}\lambda _{i}f_{i}(y)+\sum _{j=1}^{m}\mu _{j}g_{j}(y)+\sum _{s=1}^{q}\xi _{s}h_{s}(y)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}(y)+ \nonumber \\{} & {} \sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}(y)+\sum _{i=1}^{p}\lambda _{i}f_{i}^{+}(y;x-y)+\sum _{j=1}^{m}\mu _{j}g_{j}^{+}(y;x-y)+\nonumber \\{} & {} \sum _{s=1}^{q}\xi _{s}h_{s}^{+}(y;x-y)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}^{+}(y;x-y)+\sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}^{+}(y;x-y) \end{aligned}$$
(126)

holds. By (111) and (126), we get that the inequality

$$\begin{aligned}{} & {} \sum _{i=1}^{p}\lambda _{i}f_{i}^{+}(y;x-y)+\sum _{j=1}^{m}\mu _{j}g_{j}^{+}(y;x-y)+\\{} & {} \sum _{s=1}^{q}\xi _{s}h_{s}^{+}(y;x-y)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}^{+}(y;x-y)+\sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}^{+}(y;x-y)<0 \end{aligned}$$

holds, which contradicts the first constraint of (WDVC).
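The inequalities (112)-(118) used in part A) are all instances of the same fact from Proposition 2.6: a directionally differentiable convex function \(\varphi \) satisfies \(\varphi (x)-\varphi (y)\geqq \varphi ^{+}(y;x-y)\). A minimal numeric sketch with the hypothetical choice \(\varphi (x)=\left| x\right| \), which is convex and nondifferentiable at 0 with \(\varphi ^{+}(0;d)=\left| d\right| \):

```python
# Convexity inequality phi(x) - phi(y) >= phi^+(y; x - y) for phi = abs,
# with the directional derivative approximated by a small forward step.

def phi(x):
    return abs(x)

def dir_deriv(y, d, t=1e-8):
    """Forward directional derivative phi^+(y; d) ~ (phi(y + t*d) - phi(y)) / t."""
    return (phi(y + t * d) - phi(y)) / t

# At the kink y = 0, phi^+(0; d) = |d| in both directions:
assert abs(dir_deriv(0.0, 3.0) - 3.0) < 1e-6
assert abs(dir_deriv(0.0, -3.0) - 3.0) < 1e-6

# The inequality of Proposition 2.6 on a grid of sample pairs:
pts = [-2.0, -0.5, 0.0, 0.7, 1.5]
for y in pts:
    for x in pts:
        assert phi(x) - phi(y) >= dir_deriv(y, x - y) - 1e-6
```

The kink at 0 is the reason directional derivatives, rather than gradients, appear throughout the optimality conditions of this paper.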

B) Now, we prove this theorem under hypothesis B). From \(x\in \Omega \) and \( \big ( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \big ) \in \Gamma \), it follows that

$$\begin{aligned}{} & {} g_{j}\left( x\right) =0\text {, }\mu _{j}\geqq 0, j\in J\left( x\right) \text {,} \end{aligned}$$
(127)
$$\begin{aligned}{} & {} g_{j}\left( x\right) <0\text {, }\mu _{j}\geqq 0, j\notin J\left( x\right) \text {,} \end{aligned}$$
(128)
$$\begin{aligned}{} & {} h_{s}\left( x\right) =0\text {, }\xi _{s}\in R, s\in S\text {,} \end{aligned}$$
(129)
$$\begin{aligned}{} & {} -H_{t}(x)=0\text {, }\vartheta _{t}^{H}\in R, t\in T_{0}\left( x\right) , \end{aligned}$$
(130)
$$\begin{aligned}{} & {} -H_{t}(x)<0\text {, }\vartheta _{t}^{H}=0, t\in T_{+}\left( x\right) , \end{aligned}$$
(131)
$$\begin{aligned}{} & {} G_{t}(x)=0\text {, }\vartheta _{t}^{G}\geqq 0, t\in T_{00}\left( x\right) \cup T_{+0}\left( x\right) , \end{aligned}$$
(132)
$$\begin{aligned}{} & {} G_{t}(x)\ne 0\text {, }\vartheta _{t}^{G}=0, t\in T_{0+}\left( x\right) \cup T_{0-}\left( x\right) \cup T_{+-}\left( x\right) . \end{aligned}$$
(133)

By (127)-(133), we obtain

$$\begin{aligned} \sum _{j=1}^{m}\mu _{j}g_{j}(x)+\sum _{s=1}^{q}\xi _{s}h_{s}(x)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}(x)+\sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}(x)\leqq 0\text {.} \end{aligned}$$
(134)

Since (110) is fulfilled, by (134), we get

$$\begin{aligned} f_{i}(x)+ & {} \sum _{j=1}^{m}\mu _{j}g_{j}(x)+\sum _{s=1}^{q}\xi _{s}h_{s}(x)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}(x)\\+ & {} \sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}(x)<L_{i}\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \text {, }i=1,...,p. \end{aligned}$$

Then, by the definition of the vector-valued Lagrange function L, it follows that

$$\begin{aligned} L_{i}\left( x,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) <L_{i}\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) , i=1,...,p. \end{aligned}$$
(135)

By hypothesis B), the vector-valued Lagrange function \(L\left( \cdot ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \) is directionally differentiable and convex on \(\Omega \cup Y\). Then, by Proposition 2.6, the following inequalities

$$\begin{aligned} L_{i}\left( x,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) -L_{i}\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \geqq L_{i}^{+}\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G};x-y\right) , i=1,...,p\nonumber \\ \end{aligned}$$
(136)

are satisfied. Combining (135) and (136), we obtain

$$\begin{aligned} L_{i}^{+}\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G};x-y\right) <0, i=1,...,p. \end{aligned}$$
(137)

Multiplying inequalities (137) by the corresponding Lagrange multipliers \(\lambda _{i}\geqq 0\), \(i=1,...,p\), with \(\sum _{i=1}^{p}\lambda _{i}=1\), and adding the resulting inequalities, we have

$$\begin{aligned} \sum _{i=1}^{p}\lambda _{i}L_{i}^{+}\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G};x-y\right) <0\text {.} \end{aligned}$$

Then, by the definition of the vector-valued Lagrange function L, one has

$$\begin{aligned}{} & {} \sum _{i=1}^{p}\lambda _{i}f_{i}^{+}(y;x-y)+\sum _{i=1}^{p}\lambda _{i}\left[ \sum _{j=1}^{m}\mu _{j}g_{j}^{+}(y;x-y)+\right. \nonumber \\{} & {} \left. \sum _{s=1}^{q}\xi _{s}h_{s}^{+}(y;x-y)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}^{+}(y;x-y)+\sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}^{+}(y;x-y) \right] <0. \end{aligned}$$
(138)

By \(\left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \in \Gamma \), it follows that \(\sum _{i=1}^{p}\lambda _{i}=1\). Thus, (138) implies that the following inequality

$$\begin{aligned}{} & {} \sum _{i=1}^{p}\lambda _{i}f_{i}^{+}(y;x-y)+\sum _{j=1}^{m}\mu _{j}g_{j}^{+}(y;x-y)+\\{} & {} \sum _{s=1}^{q}\xi _{s}h_{s}^{+}(y;x-y)-\sum _{t=1}^{r}\vartheta _{t}^{H}H_{t}^{+}(y;x-y)+\sum _{t=1}^{r}\vartheta _{t}^{G}G_{t}^{+}(y;x-y)<0 \end{aligned}$$

holds, which contradicts the first constraint of (WDVC).

This completes the proof of this theorem under both hypothesis A) and hypothesis B). \(\square \)
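The conclusion of Theorem 4.2 can be illustrated numerically. Below is a minimal sketch on a hypothetical smooth convex instance with \(p=2\), one inequality constraint and no vanishing constraints (\(r=0\)), for which a Wolfe dual feasible point is built by hand; none of this data comes from (MPVC):

```python
# Numeric illustration of weak duality (Theorem 4.2) on a hypothetical
# smooth convex instance:  f1(x) = x^2, f2(x) = (x - 2)^2,  g(x) = x - 3 <= 0.
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2) ** 2
g = lambda x: x - 3

# Dual feasible point: y = 0, lam = (1/2, 1/2), mu = 2 satisfies the
# stationarity-type constraint lam1*f1'(y) + lam2*f2'(y) + mu*g'(y) = 0.
y, lam, mu = 0.0, (0.5, 0.5), 2.0
assert abs(lam[0] * 2 * y + lam[1] * 2 * (y - 2) + mu * 1.0) < 1e-12

# Wolfe dual objective L_i(y, mu) = f_i(y) + mu * g(y):
L1, L2 = f1(y) + mu * g(y), f2(y) + mu * g(y)

# Weak duality: no primal feasible x satisfies f(x) < L componentwise.
for k in range(601):
    x = -3.0 + 0.01 * k            # grid over feasible points x <= 3
    assert g(x) <= 1e-12
    assert not (f1(x) < L1 and f2(x) < L2)
```

The grid search finds no feasible point dominating the dual objective value, in line with \(f\left( x\right) \nless L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \).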

If stronger assumptions are imposed on the functions constituting (MPVC), then the following result holds:

Theorem 4.3

(Weak duality): Let x and \( \left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \) be any feasible solutions for (MPVC) and (WDVC), respectively. Further, we assume that one of the following hypotheses is fulfilled:

  1. A)

    \(f_{i}\), \(i=1,...,p\), are strictly convex on \(\Omega \cup Y\), \( g_{j}\), \(j\in J^{+}\left( x\right) \), \(h_{s}\), \(s\in S^{+}\left( x\right) \), \(-h_{s}\), \(s\in S^{-}\left( x\right) \), \(-H_{t}\), \(t\in T_{H}^{+}\left( x\right) \), \(H_{t}\), \(t\in T_{0+}^{-}\left( x\right) \), \(G_{t}\), \(t\in T_{+0}\left( x\right) \), are convex on \(\Omega \cup Y\).

  2. B)

    the vector-valued Lagrange function \(L\left( \cdot ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \) is strictly convex on \(\Omega \cup Y\).

Then, \(f\left( x\right) \nleq L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \).

Theorem 4.4

(Strong duality): Let \({\overline{x}}\in \Omega \) be a Pareto solution (a weak Pareto solution) in (MPVC) and the Abadie constraint qualification be satisfied at \({\overline{x}}\). Then, there exist Lagrange multipliers \({\overline{\lambda }}\in R^{p}\), \({\overline{\mu }}\in R^{m}\), \({\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\), \( {\overline{\vartheta }}^{G}\in R^{r}\) and \({\overline{w}}\in R^{r}\), \({\overline{\theta }}\in R^{r}\) such that \(\left( {\overline{x}},{\overline{\lambda }}, \overline{\mu },{\overline{\xi }},{\overline{\vartheta }}^{H},{\overline{\vartheta }} ^{G},{\overline{w}},{\overline{\theta }}\right) \) is feasible in (WDVC). If, in addition, all hypotheses of the weak duality theorem, Theorem 4.3 (Theorem 4.2, respectively), are satisfied, then \(\left( {\overline{x}},\overline{\lambda },{\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G},{\overline{w}},\overline{\theta }\right) \) is an efficient solution (a weakly efficient solution) of a maximum type in (WDVC).

Proof

By assumption, \({\overline{x}}\) is a Pareto solution of (MPVC) and the Abadie constraint qualification is satisfied at \({\overline{x}}\). Then, there exist Lagrange multipliers \({\overline{\lambda }}\in R^{p}\), \({\overline{\mu }}\in R^{m}\), \({\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\), \( {\overline{\vartheta }}^{G}\in R^{r}\) such that the Karush–Kuhn–Tucker necessary optimality conditions are fulfilled. Then, we conclude that \( \left( {\overline{x}},{\overline{\lambda }},{\overline{\mu }},{\overline{\xi }}, {\overline{\vartheta }}^{H},\overline{\vartheta }^{G},{\overline{w}},{\overline{\theta }}\right) \), where \({\overline{w}}_{t}\) and \({\overline{\theta }}_{t}\) satisfy the following conditions

$$\begin{aligned}{} & {} {\overline{\vartheta }}_{t}^{G}={\overline{w}}_{t}H_{t}({\overline{x}}),\ {\overline{w}}_{t}\geqq 0, t=1,...,r\text {,}\\{} & {} {\overline{\vartheta }}_{t}^{H}={\overline{\theta }}_{t}-{\overline{w}}_{t}G_{t}( {\overline{x}}),\ {\overline{\theta }}_{t}\geqq 0, t=1,...,r\text {,} \end{aligned}$$

is feasible in (WDVC).

Now, we prove that \(\left( {\overline{x}},\overline{\lambda },{\overline{\mu }}, {\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}, {\overline{w}},\overline{\theta }\right) \) is an efficient solution of a maximum type in (WDVC). We proceed by contradiction. Suppose, contrary to the result, that \(\left( {\overline{x}},{\overline{\lambda }},{\overline{\mu }}, {\overline{\xi }},{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}, {\overline{w}},{\overline{\theta }}\right) \) is not an efficient solution of a maximum type in (WDVC). Then, by definition, there exists \(\left( {\widetilde{y}},{\widetilde{\lambda }},\widetilde{\mu },{\widetilde{\xi }},{\widetilde{\vartheta }}^{H},\widetilde{\vartheta }^{G},{\widetilde{w}},{\widetilde{\theta }} \right) \in \Gamma \) such that the inequality

$$\begin{aligned} L\left( {\widetilde{y}},{\widetilde{\mu }},\widetilde{\xi },{\widetilde{\vartheta }}^{H},{\widetilde{\vartheta }}^{G}\right) \ge L\left( {\overline{x}},{\overline{\mu }},\overline{\xi },{\overline{\vartheta }}^{H},{\overline{\vartheta }} ^{G}\right) \end{aligned}$$

holds. Then, by the Karush–Kuhn–Tucker necessary optimality conditions, we conclude that

$$\begin{aligned} L\left( {\widetilde{y}},\widetilde{\mu },{\widetilde{\xi }},{\widetilde{\vartheta }} ^{H},\widetilde{\vartheta }^{G}\right) \ge f\left( {\overline{x}}\right) \end{aligned}$$

holds, which is a contradiction to the weak duality theorem (Theorem 4.3). Hence, we conclude that \(\left( {\overline{x}},{\overline{\lambda }}, {\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }} ^{G},{\overline{w}},\overline{\theta }\right) \) is an efficient solution of a maximum type in (WDVC). \(\square \)

The next two theorems give sufficient conditions for \({\overline{y}}\), where \( \left( {\overline{y}},{\overline{\lambda }},\overline{\mu },{\overline{\xi }}, {\overline{\vartheta }}^{H},\overline{\vartheta }^{G},{\overline{w}},{\overline{\theta }}\right) \) is a feasible solution of (WDVC), to be a Pareto solution of (MPVC).

Theorem 4.5

(Converse duality): Let x be any feasible solution in (MPVC) and \(\left( {\overline{y}},{\overline{\lambda }},{\overline{\mu }},\overline{\xi },{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}, {\overline{w}},{\overline{\theta }}\right) \) be an efficient solution of a maximum type (a weakly efficient solution of a maximum type) in the Wolfe dual problem (WDVC) such that \({\overline{y}}\in \Omega \). Further, we assume that \( f_{i}\), \(i=1,...,p\), are strictly convex (convex) on \(\Omega \cup Y\), \(g_{j}\), \(j\in J^{+}\left( x\right) \), \(h_{s}\), \(s\in S^{+}\left( x\right) \), \( -h_{s}\), \(s\in S^{-}\left( x\right) \), \(-H_{t}\), \(t\in T_{H}^{+}\left( x\right) \), \(H_{t}\), \(t\in T_{0+}^{-}\left( x\right) \), \(G_{t}\), \(t\in T_{+0}\left( x\right) \), are convex on \(\Omega \cup Y\). Then \({\overline{y}}\) is a Pareto solution (a weak Pareto solution) of (MPVC).

Proof

We proceed by contradiction. Suppose, contrary to the result, that \( {\overline{y}}\in \Omega \) is not a Pareto solution of (MPVC). Hence, by Definition 3.1, there exists \({\widetilde{x}}\in \Omega \) such that

$$\begin{aligned} f\left( {\widetilde{x}}\right) \le f\left( {\overline{y}}\right) . \end{aligned}$$
(139)

From convexity hypotheses, by Proposition 2.6, the inequalities

$$\begin{aligned}{} & {} f_{i}({\widetilde{x}})-f_{i}({\overline{y}})>f_{i}^{+}({\overline{y}};{\widetilde{x}}-{\overline{y}}),{ \ \ }i=1,...,p\text {,} \end{aligned}$$
(140)
$$\begin{aligned}{} & {} g_{_{j}}({\widetilde{x}})\geqq g_{_{j}}({\overline{y}})+g_{j}^{+}({\overline{y}}; {\widetilde{x}}-{\overline{y}}), { \ \ }j\in J^{+}\left( {\widetilde{x}} \right) , \end{aligned}$$
(141)
$$\begin{aligned}{} & {} h_{s}({\widetilde{x}})\geqq h_{s}({\overline{y}})+h_{s}^{+}({\overline{y}}; {\widetilde{x}}-{\overline{y}}),{ \ \ }s\in S^{+}\left( {\widetilde{x}} \right) , \end{aligned}$$
(142)
$$\begin{aligned}{} & {} -h_{s}({\widetilde{x}})\geqq -h_{s}({\overline{y}})-h_{s}^{+}({\overline{y}}; {\widetilde{x}}-{\overline{y}}),{ \ \ }s\in S^{-}\left( {\widetilde{x}} \right) , \end{aligned}$$
(143)
$$\begin{aligned}{} & {} -H_{t}({\widetilde{x}})\geqq -H_{t}({\overline{y}})-H_{t}^{+}({\overline{y}}; {\widetilde{x}}-{\overline{y}})\text {, }t\in T_{00}\left( {\widetilde{x}}\right) \cup T_{0-}\left( {\widetilde{x}}\right) \cup T_{0+}^{+}\left( {\widetilde{x}} \right) , \end{aligned}$$
(144)
$$\begin{aligned}{} & {} H_{t}({\widetilde{x}})\geqq H_{t}({\overline{y}})+H_{t}^{+}({\overline{y}}; {\widetilde{x}}-{\overline{y}}),{ \ \ }t\in T_{0+}^{-}\left( {\widetilde{x}} \right) , \end{aligned}$$
(145)
$$\begin{aligned}{} & {} G_{t}({\widetilde{x}})\geqq G_{t}({\overline{y}})+G_{t}^{+}({\overline{y}}; {\widetilde{x}}-{\overline{y}}),{ \ \ }t\in T_{+0}\left( {\widetilde{x}} \right) \end{aligned}$$
(146)

hold. Multiplying (140)-(146) by the corresponding Lagrange multipliers and then adding both sides of the resulting inequalities, we have, respectively,

$$\begin{aligned}{} & {} \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}({\widetilde{x}})>\sum _{i=1}^{p} \overline{\lambda }_{i}f_{i}({\overline{y}})+\sum _{i=1}^{p}{\overline{\lambda }} _{i}f_{i}^{+}({\overline{y}};{\widetilde{x}}-{\overline{y}})\text {,} \end{aligned}$$
(147)
$$\begin{aligned}{} & {} \sum _{j\in J^{+}\left( {\widetilde{x}}\right) }{\overline{\mu }}_{j}g_{j}( {\widetilde{x}})\geqq \sum _{j\in J^{+}\left( {\widetilde{x}}\right) }\overline{ \mu }_{j}g_{j}({\overline{y}})+\sum _{j\in J^{+}\left( {\widetilde{x}}\right) } {\overline{\mu }}_{j}g_{j}^{+}({\overline{y}};{\widetilde{x}}-{\overline{y}})\text {,} \end{aligned}$$
(148)
$$\begin{aligned}{} & {} \sum _{s\in S^{+}\left( {\widetilde{x}}\right) }{\overline{\xi }}_{s}h_{s}( {\widetilde{x}})\geqq \sum _{s\in S^{+}\left( {\widetilde{x}}\right) }\overline{ \xi }_{s}h_{s}({\overline{y}})+\sum _{s\in S^{+}\left( {\widetilde{x}}\right) } {\overline{\xi }}_{s}h_{s}^{+}({\overline{y}};{\widetilde{x}}-{\overline{y}})\text {,} \end{aligned}$$
(149)
$$\begin{aligned}{} & {} -\sum _{s\in S^{-}\left( {\widetilde{x}}\right) }\left( -{\overline{\xi }} _{s}\right) h_{s}({\widetilde{x}})\geqq -\sum _{s\in S^{-}\left( {\widetilde{x}} \right) }\left( -{\overline{\xi }}_{s}\right) h_{s}({\overline{y}})-\sum _{s\in S^{-}\left( {\widetilde{x}}\right) }\left( -{\overline{\xi }}_{s}\right) h_{s}^{+}({\overline{y}};{\widetilde{x}}-{\overline{y}})\text {,} \end{aligned}$$
(150)
$$\begin{aligned}{} & {} -\sum _{t\in T_{00}\left( {\widetilde{x}}\right) \cup T_{0-}\left( {\widetilde{x}} \right) \cup T_{0+}^{+}\left( {\widetilde{x}}\right) }{\overline{\vartheta }} _{t}^{H}H_{t}({\widetilde{x}})\geqq -\sum _{t\in T_{00}\left( {\widetilde{x}} \right) \cup T_{0-}\left( {\widetilde{x}}\right) \cup T_{0+}^{+}\left( {\widetilde{x}}\right) }{\overline{\vartheta }}_{t}^{H}H_{t}({\overline{y}} )-\sum _{t\in T_{00}\left( {\widetilde{x}}\right) \cup T_{0-}\left( \widetilde{x }\right) \cup T_{0+}^{+}\left( {\widetilde{x}}\right) }{\overline{\vartheta }} _{t}^{H}H_{t}^{+}({\overline{y}};{\widetilde{x}}-{\overline{y}})\text {,} \end{aligned}$$
(151)
$$\begin{aligned}{} & {} -\sum _{t\in T_{0+}^{-}\left( {\widetilde{x}}\right) }{\overline{\vartheta }} _{t}^{H}H_{t}({\widetilde{x}})\geqq -\sum _{t\in T_{0+}^{-}\left( {\widetilde{x}} \right) }{\overline{\vartheta }}_{t}^{H}H_{t}({\overline{y}})+\sum _{t\in T_{0+}^{-}\left( {\widetilde{x}}\right) }\left( -{\overline{\vartheta }}_{t}^{H}\right) H_{t}^{+}({\overline{y}};{\widetilde{x}}-{\overline{y}})\text {,} \end{aligned}$$
(152)
$$\begin{aligned}{} & {} \sum _{t\in T_{+0}\left( {\widetilde{x}}\right) }{\overline{\vartheta }} _{t}^{G}G_{t}({\widetilde{x}})\geqq \sum _{t\in T_{+0}\left( {\widetilde{x}} \right) }{\overline{\vartheta }}_{t}^{G}G_{t}({\overline{y}})+\sum _{t\in T_{+0}\left( {\widetilde{x}}\right) }{\overline{\vartheta }}_{t}^{G}G_{t}^{+}( {\overline{y}};{\widetilde{x}}-{\overline{y}})\text {.} \end{aligned}$$
(153)

By \({\widetilde{x}}\), \({\overline{y}}\in \Omega \), we have, respectively,

$$\begin{aligned}{} & {} g_{j}({\widetilde{x}})\leqq 0\text {, }g_{j}({\overline{y}})\leqq 0, { \ \ } j\in J\text {,} \end{aligned}$$
(154)
$$\begin{aligned}{} & {} h_{s}({\widetilde{x}})=h_{s}({\overline{y}}), s\in S^{+}\left( \widetilde{ x}\right) \cup S^{-}\left( {\widetilde{x}}\right) \text {,} \end{aligned}$$
(155)
$$\begin{aligned}{} & {} \left. \begin{array}{c} H_{t}({\widetilde{x}})>0\text {, }{\overline{\vartheta }}_{t}^{H}=\overline{ \theta }_{t}-{\overline{w}}_{t}G_{t}({\widetilde{x}})\geqq 0\text {, }t\in T_{+}\left( {\widetilde{x}}\right) \\ H_{t}({\widetilde{x}})=0\text {, }{\overline{\vartheta }}_{t}^{H}=\overline{ \theta }_{t}-{\overline{w}}_{t}G_{t}({\widetilde{x}})\in R\text {, }t\in T_{0}\left( {\widetilde{x}}\right) \end{array} \right\} \Longrightarrow \sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{H}H_{t}( {\widetilde{x}})\geqq 0\text {,} \end{aligned}$$
(156)
$$\begin{aligned}{} & {} \left. \begin{array}{c} G_{t}({\widetilde{x}})>0, {\overline{\vartheta }}_{t}^{G}={\overline{w}} _{t}H_{t}({\widetilde{x}})=0\text {, }t\in T_{0+}\left( {\widetilde{x}}\right) \\ G_{t}({\widetilde{x}})=0\text {, }{\overline{\vartheta }}_{t}^{G}={\overline{w}} _{t}H_{t}({\widetilde{x}})=0\text {, }t\in T_{00}\left( {\widetilde{x}}\right) \\ G_{t}\left( {\widetilde{x}}\right)<0\text {, }{\overline{\vartheta }}_{t}^{G}= {\overline{w}}_{t}H_{t}({\widetilde{x}})=0\text {, }t\in T_{0-}\left( \widetilde{x }\right) \\ G_{t}({\widetilde{x}})=0\text {, }{\overline{\vartheta }}_{t}^{G}={\overline{w}} _{t}H_{t}({\widetilde{x}})\geqq 0\text {, }t\in T_{+0}\left( {\widetilde{x}} \right) \\ G_{t}({\widetilde{x}})<0\text {, }{\overline{\vartheta }}_{t}^{G}={\overline{w}} _{t}H_{t}({\widetilde{x}})\geqq 0\text {, }t\in T_{+-}\left( {\widetilde{x}} \right) \end{array} \right\} \Longrightarrow \sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{G}G_{t}( {\widetilde{x}})\leqq 0\text {.} \end{aligned}$$
(157)

Hence, using (156), (157) together with \(\left( {\overline{y}},{\overline{\lambda }},{\overline{\mu }}, \overline{\xi },{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}, {\overline{w}},{\overline{\theta }}\right) \in \Gamma \), we obtain

$$\begin{aligned}{} & {} -\sum _{t=1}^{r}\overline{\vartheta }_{t}^{H}H_{t}({\widetilde{x}})\leqq -\sum _{t=1}^{r}\overline{\vartheta }_{t}^{H}H_{t}({\overline{y}})\text {,} \end{aligned}$$
(158)
$$\begin{aligned}{} & {} \sum _{t=1}^{r}\overline{\vartheta }_{t}^{G}G_{t}({\widetilde{x}})\leqq \sum _{t=1}^{r}\overline{\vartheta }_{t}^{G}G_{t}({\overline{y}}). \end{aligned}$$
(159)

Combining (147)–(159), multiplying by the corresponding Lagrange multipliers and then adding both sides of the resulting inequalities, we obtain, respectively,

$$\begin{aligned}{} & {} \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}^{+}({\overline{y}};{\widetilde{x}}- {\overline{y}})<0\text {,} \end{aligned}$$
(160)
$$\begin{aligned}{} & {} \sum _{j\in J^{+}\left( {\widetilde{x}}\right) }{\overline{\mu }}_{j}g_{j}^{+}( {\overline{y}};{\widetilde{x}}-{\overline{y}})\leqq 0\text {,} \end{aligned}$$
(161)
$$\begin{aligned}{} & {} \sum _{s\in S^{+}\left( {\widetilde{x}}\right) \cup S^{-}\left( {\widetilde{x}} \right) }{\overline{\xi }}_{s}h_{s}^{+}({\overline{y}};{\widetilde{x}}-{\overline{y}} )\leqq 0\text {,} \end{aligned}$$
(162)
$$\begin{aligned}{} & {} -\sum _{t\in T_{00}\left( {\widetilde{x}}\right) \cup T_{0-}\left( {\widetilde{x}} \right) \cup T_{0+}^{+}\left( {\widetilde{x}}\right) \cup T_{0+}^{-}\left( {\widetilde{x}}\right) }{\overline{\vartheta }}_{t}^{H}H_{t}^{+}({\overline{y}}; {\widetilde{x}}-{\overline{y}})\leqq 0\text {,} \end{aligned}$$
(163)
$$\begin{aligned}{} & {} \sum _{t\in T_{+0}\left( {\widetilde{x}}\right) }{\overline{\vartheta }} _{t}^{G}G_{t}^{+}({\overline{y}};{\widetilde{x}}-{\overline{y}})\leqq 0\text {.} \end{aligned}$$
(164)

Taking into account the Lagrange multipliers equal to 0 and then combining (160)–(164), we get that the inequality

$$\begin{aligned}{} & {} \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}^{+}({\overline{y}};{\widetilde{x}}- {\overline{y}})+\sum _{j=1}^{m}\overline{\mu }_{j}g_{j}^{+}({\overline{y}}; {\widetilde{x}}-{\overline{y}})+\sum _{s=1}^{q}\overline{\xi }_{s}h_{s}^{+}( {\overline{y}};{\widetilde{x}}-{\overline{y}})\\{} & {} -\sum _{t=1}^{r}\overline{\vartheta }_{t}^{H}H_{t}^{+}( {\overline{y}};{\widetilde{x}}-{\overline{y}})+\sum _{t=1}^{r}\overline{\vartheta } _{t}^{G}G_{t}^{+}({\overline{y}};{\widetilde{x}}-{\overline{y}})<0 \end{aligned}$$

holds, which contradicts the first constraint of (WDVC). This completes the proof of this theorem. \(\square \)

Theorem 4.6

(Strict converse duality): Let \({\overline{x}}\) be a feasible solution of (MPVC) and \(\left( {\overline{y}},{\overline{\lambda }},{\overline{\mu }}, {\overline{\xi }},{\overline{\vartheta }}^{H},\overline{\vartheta }^{G},{\overline{w}},{\overline{\theta }}\right) \) be a feasible solution of (WDVC) such that \( f\left( {\overline{x}}\right) =L\left( {\overline{y}},{\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}\right) \). Further, we assume that one of the following hypotheses is fulfilled:

  1. A)

    \(f_{i}\), \(i=1,...,p\), are strictly convex on \(\Omega \cup Y\), \( g_{j}\), \(j\in J^{+}\left( {\overline{x}}\right) \), \(h_{s}\), \(s\in S^{+}\left( {\overline{x}}\right) \), \(-h_{s}\), \(s\in S^{-}\left( {\overline{x}}\right) \), \( -H_{t}\), \(t\in T_{H}^{+}\left( {\overline{x}}\right) \), \(H_{t}\), \(t\in T_{0+}^{-}\left( {\overline{x}}\right) \), \(G_{t}\), \(t\in T_{+0}\left( {\overline{x}}\right) \), are convex on \(\Omega \cup Y\).

  2. B)

    the vector-valued Lagrange function \(L\left( \cdot ,{\overline{\mu }},{\overline{\xi }},{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}\right) \) is strictly convex on \(\Omega \cup Y\).

Then \({\overline{x}}\) is a Pareto solution in (MPVC) and \(\left( {\overline{y}}, {\overline{\lambda }},{\overline{\mu }},\overline{\xi },{\overline{\vartheta }} ^{H},\overline{\vartheta }^{G},{\overline{w}},{\overline{\theta }}\right) \) is an efficient solution of a maximum type in (WDVC).

Proof

We proceed by contradiction. Suppose, contrary to the result, that \( {\overline{x}}\in \Omega \) is not a Pareto solution in (MPVC). Hence, by Definition 3.1, there exists \({\widetilde{x}}\in \Omega \) such that

$$\begin{aligned} f\left( {\widetilde{x}}\right) \le f\left( {\overline{x}}\right) \text {.} \end{aligned}$$
(165)

Using (165) with the assumption \(f\left( {\overline{x}}\right) =L\left( {\overline{y}}, \overline{\mu },{\overline{\xi }},{\overline{\vartheta }}^{H},\overline{ \vartheta }^{G}\right) \), we obtain

$$\begin{aligned} f\left( {\widetilde{x}}\right) \le L\left( {\overline{y}},\overline{\mu }, {\overline{\xi }},{\overline{\vartheta }}^{H},\overline{\vartheta }^{G}\right) \text {.} \end{aligned}$$

Since all hypotheses of Theorem 4.3 are fulfilled, the above relation contradicts weak duality. This means that \( {\overline{x}}\) is a Pareto solution in (MPVC). Further, the efficiency of a maximum type of \(\left( {\overline{y}},{\overline{\lambda }},{\overline{\mu }}, {\overline{\xi }},{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}, {\overline{w}},{\overline{\theta }}\right) \) in (WDVC) follows from the weak duality theorem (Theorem 4.3). \(\square \)

A restricted version of converse duality for the problems (MPVC) and (WDVC) gives a sufficient condition under which an efficient solution in (MPVC) coincides with the first component of an efficient solution of a maximum type in (WDVC).

Theorem 4.7

(Restricted converse duality): Let \({\overline{x}}\) be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints, \(\left( {\overline{y}}, {\overline{\lambda }},\overline{\mu },{\overline{\xi }},{\overline{\vartheta }} ^{H},\overline{\vartheta }^{G},{\overline{w}},{\overline{\theta }}\right) \) be an efficient solution of a maximum type in its vector Wolfe dual problem (WDVC) and the VC-Abadie constraint qualification (VC-ACQ) be satisfied at \( {\overline{x}}\) for (MPVC). Further, we assume that one of the following hypotheses is fulfilled:

  1. A)

    \(f_{i}\), \(i=1,...,p\), are strictly convex on \(\Omega \cup Y\), \( g_{j}\), \(j\in J^{+}\left( {\overline{x}}\right) \), \(h_{s}\), \(s\in S^{+}\left( {\overline{x}}\right) \), \(-h_{s}\), \(s\in S^{-}\left( {\overline{x}}\right) \), \( -H_{t}\), \(t\in T_{H}^{+}\left( {\overline{x}}\right) \), \(H_{t}\), \(t\in T_{0+}^{-}\left( {\overline{x}}\right) \), \(G_{t}\), \(t\in T_{+0}\left( {\overline{x}}\right) \), are convex on \(\Omega \cup Y\).

  2. B)

    the vectorial Lagrange function \(L\left( \cdot ,{\overline{\mu }}, {\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}\right) \) is strictly convex on \(\Omega \cup Y\).

Then \({\overline{x}}={\overline{y}}\).

Proof

By means of contradiction, suppose that \({\overline{x}}\ne {\overline{y}}\). Since \({\overline{x}}\) is an efficient solution in (MPVC), by Theorem 3.11, there exist Lagrange multipliers \(\overline{\lambda }\in R^{p}\), \( {\overline{\mu }}\in R^{m}\), \({\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }} ^{H}\in R^{r}\) and \({\overline{\vartheta }}^{G}\in R^{r}\), not all equal to 0, such that (87)-(93) are fulfilled. Thus, by (87)-(93), it follows that

$$\begin{aligned} f\left( {\overline{x}}\right) =L\left( {\overline{x}},\overline{\mu },{\overline{\xi }},{\overline{\vartheta }}^{H},\overline{\vartheta }^{G}\right) \text {.} \end{aligned}$$

By assumption, \(\left( {\overline{y}},\overline{\lambda },{\overline{\mu }}, {\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}, {\overline{w}},\overline{\theta }\right) \) is an efficient solution of a maximum type in the vector Wolfe dual problem (WDVC). Thus, one has

$$\begin{aligned} L\left( {\overline{x}},{\overline{\mu }},\overline{\xi },{\overline{\vartheta }} ^{H},{\overline{\vartheta }}^{G}\right) =L\left( {\overline{y}},{\overline{\mu }}, \overline{\xi },{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}\right) \text {.} \end{aligned}$$

Combining the two above relations, we get

$$\begin{aligned} f\left( {\overline{x}}\right) =L\left( {\overline{y}},\overline{\mu },{\overline{\xi }},{\overline{\vartheta }}^{H},\overline{\vartheta }^{G}\right) \text {.} \end{aligned}$$
(166)

Thus, (166) gives

$$\begin{aligned} {\overline{\lambda }}_{i}f_{i}\left( {\overline{x}}\right) ={\overline{\lambda }} _{i}L_{i}\left( {\overline{y}},\overline{\mu },{\overline{\xi }},{\overline{\vartheta }}^{H},\overline{\vartheta }^{G}\right) , i=1,...,p, \end{aligned}$$
(167)

Adding both sides of (167), by the definition of the vectorial Lagrange function L, we get

$$\begin{aligned} \begin{array}{c} \sum _{i=1}^{p}{\overline{\lambda }}_{i}f_{i}\left( {\overline{x}}\right) =\sum _{i=1}^{p}{\overline{\lambda }}_{i}f_{i}\left( {\overline{y}}\right) + \\ \sum _{i=1}^{p}{\overline{\lambda }}_{i}\left[ \sum _{j=1}^{m}{\overline{\mu }} _{j}g_{j}({\overline{y}})+\sum _{s=1}^{q}\overline{\xi }_{s}h_{s}({\overline{y}} )-\sum _{t=1}^{r}\overline{\vartheta }_{t}^{H}H_{t}({\overline{y}} )+\sum _{t=1}^{r}\overline{\vartheta }_{t}^{G}G_{t}({\overline{y}})\right] . \end{array}\nonumber \\ \end{aligned}$$
(168)

Hence, by \(\left( {\overline{y}},{\overline{\lambda }},\overline{\mu },{\overline{\xi }},{\overline{\vartheta }}^{H},\overline{\vartheta }^{G},{\overline{w}}, {\overline{\theta }}\right) \in \Gamma \), one has \(\sum _{i=1}^{p}{\overline{\lambda }}_{i}=1\). Thus, (168) implies

$$\begin{aligned} \begin{array}{c} \sum _{i=1}^{p}{\overline{\lambda }}_{i}f_{i}\left( {\overline{x}}\right) =\sum _{i=1}^{p}{\overline{\lambda }}_{i}f_{i}\left( {\overline{y}}\right) + \\ \sum _{j=1}^{m}\overline{\mu }_{j}g_{j}({\overline{y}})+\sum _{s=1}^{q}\overline{ \xi }_{s}h_{s}({\overline{y}})-\sum _{t=1}^{r}{\overline{\vartheta }} _{t}^{H}H_{t}({\overline{y}})+\sum _{t=1}^{r}\overline{\vartheta }_{t}^{G}G_{t}( {\overline{y}}). \end{array} \end{aligned}$$
(169)

Proof under hypothesis A). By hypothesis A) and Proposition 2.6, the inequalities

$$\begin{aligned}{} & {} f_{i}({\overline{x}})-f_{i}({\overline{y}})>f_{i}^{+}({\overline{y}};{\overline{x}}-{\overline{y}} ), { \ \ }i=1,...,p\text {,} \end{aligned}$$
(170)
$$\begin{aligned}{} & {} 0\geqq g_{_{j}}({\overline{x}})\geqq g_{_{j}}({\overline{y}})+g_{j}^{+}( {\overline{y}};{\overline{x}}-{\overline{y}}),{ \ \ }j\in J^{+}\left( {\overline{x}}\right) , \end{aligned}$$
(171)
$$\begin{aligned}{} & {} 0=h_{s}({\overline{x}})\geqq h_{s}({\overline{y}})+h_{s}^{+}({\overline{y}}; {\overline{x}}-{\overline{y}}),{ \ \ }s\in S^{+}\left( {\overline{x}}\right) , \end{aligned}$$
(172)
$$\begin{aligned}{} & {} 0=-h_{s}({\overline{x}})\geqq -h_{s}({\overline{y}})-h_{s}^{+}({\overline{y}}; {\overline{x}}-{\overline{y}}),{ \ \ }s\in S^{-}\left( {\overline{x}}\right) , \end{aligned}$$
(173)
$$\begin{aligned}{} & {} 0=-H_{t}({\overline{x}})\geqq -H_{t}({\overline{y}})-H_{t}^{+}({\overline{y}}; {\overline{x}}-{\overline{y}})\text {, }t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}^{+}\left( {\overline{x}} \right) , \end{aligned}$$
(174)
$$\begin{aligned}{} & {} 0=H_{t}({\overline{x}})\geqq H_{t}({\overline{y}})+H_{t}^{+}({\overline{y}}; {\overline{x}}-{\overline{y}}),{ \ \ }t\in T_{0+}^{-}\left( {\overline{x}} \right) , \end{aligned}$$
(175)
$$\begin{aligned}{} & {} 0=G_{t}({\overline{x}})\geqq G_{t}({\overline{y}})+G_{t}^{+}({\overline{y}}; {\overline{x}}-{\overline{y}}),{ \ \ }t\in T_{+0}\left( {\overline{x}}\right) \end{aligned}$$
(176)

hold. By the feasibility of \(\left( {\overline{y}},\overline{\lambda }, {\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G},{\overline{w}},\overline{\theta }\right) \) in (WDVC), (170)-(176) yield, respectively,

$$\begin{aligned}{} & {} \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}({\overline{x}})>\sum _{i=1}^{p} \overline{\lambda }_{i}f_{i}({\overline{y}})+\sum _{i=1}^{p}{\overline{\lambda }} _{i}f_{i}^{+}({\overline{y}};{\overline{x}}-{\overline{y}})\text {,} \end{aligned}$$
(177)
$$\begin{aligned}{} & {} \sum _{j\in J^{+}\left( {\overline{x}}\right) }{\overline{\mu }}_{j}g_{_{j}}( {\overline{y}})+\sum _{j\in J^{+}\left( {\overline{x}}\right) }{\overline{\mu }} _{j}g_{j}^{+}({\overline{y}};{\overline{x}}-{\overline{y}})\leqq 0\text {,} \end{aligned}$$
(178)
$$\begin{aligned}{} & {} \sum _{s\in S^{+}\left( {\overline{x}}\right) }{\overline{\xi }}_{s}h_{s}( {\overline{y}})+\sum _{s\in S^{+}\left( {\overline{x}}\right) }{\overline{\xi }} _{s}h_{s}^{+}({\overline{y}};{\overline{x}}-{\overline{y}})\leqq 0\text {,} \end{aligned}$$
(179)
$$\begin{aligned}{} & {} -\sum _{s\in S^{-}\left( {\overline{x}}\right) }\left( -{\overline{\xi }} _{s}\right) h_{s}({\overline{y}})-\sum _{s\in S^{-}\left( {\overline{x}}\right) }\left( -{\overline{\xi }}_{s}\right) h_{s}^{+}({\overline{y}};{\overline{x}}- {\overline{y}})\leqq 0\text {,} \end{aligned}$$
(180)
$$\begin{aligned}{} & {} \sum _{t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}} \right) \cup T_{0+}^{+}\left( {\overline{x}}\right) }{\overline{\vartheta }} _{t}^{H}\left( -H_{t}({\overline{y}})\right) +\sum _{t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}^{+}\left( {\overline{x}}\right) }{\overline{\vartheta }}_{t}^{H}H_{t}^{+}( {\overline{y}};{\overline{x}}-{\overline{y}})\leqq 0\text {,} \end{aligned}$$
(181)
$$\begin{aligned}{} & {} \sum _{t\in T_{0+}^{-}\left( {\overline{x}}\right) }\left( -\overline{\vartheta }_{t}^{H}\right) H_{t}({\overline{y}})+\sum _{t\in T_{0+}^{-}\left( {\overline{x}} \right) }\left( -{\overline{\vartheta }}_{t}^{H}\right) H_{t}^{+}({\overline{y}}; {\overline{x}}-{\overline{y}})\leqq 0\text {,} \end{aligned}$$
(182)
$$\begin{aligned}{} & {} \sum _{t\in T_{+0}\left( {\overline{x}}\right) }{\overline{\vartheta }} _{t}^{G}G_{t}({\overline{y}})+\sum _{t\in T_{+0}\left( {\overline{x}}\right) } {\overline{\vartheta }}_{t}^{G}G_{t}^{+}({\overline{y}};{\overline{x}}-{\overline{y}} )\leqq 0\text {.} \end{aligned}$$
(183)

Thus, the above inequalities yield

$$\begin{aligned} \begin{array}{c} \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}({\overline{x}})>\sum _{i=1}^{p} \overline{\lambda }_{i}f_{i}({\overline{y}})+\sum _{j\in J^{+}\left( {\overline{x}}\right) }{\overline{\mu }}_{j}g_{_{j}}({\overline{y}})+\sum _{s\in S^{+}\left( {\overline{x}}\right) }{\overline{\xi }}_{s}h_{s}({\overline{y}})+ \\ \sum _{s\in S^{-}\left( {\overline{x}}\right) }\overline{\xi }_{s}h_{s}( {\overline{y}})+\sum _{t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}^{+}\left( {\overline{x}}\right) } \overline{\vartheta }_{t}^{H}\left( -H_{t}({\overline{y}})\right) +\sum _{t\in T_{0+}^{-}\left( {\overline{x}}\right) }\left( -{\overline{\vartheta }} _{t}^{H}\right) H_{t}({\overline{y}})+ \\ \sum _{t\in T_{+0}\left( {\overline{x}}\right) }{\overline{\vartheta }} _{t}^{G}G_{t}({\overline{y}})+\sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}^{+}( {\overline{y}};{\overline{x}}-{\overline{y}})+\sum _{j\in J^{+}\left( {\overline{x}} \right) }\overline{\mu }_{j}g_{j}^{+}({\overline{y}};{\overline{x}}-{\overline{y}} )+ \\ \sum _{s\in S^{+}\left( {\overline{x}}\right) }\overline{\xi }_{s}h_{s}^{+}( {\overline{y}};{\overline{x}}-{\overline{y}})-\sum _{s\in S^{-}\left( {\overline{x}} \right) }\left( -{\overline{\xi }}_{s}\right) h_{s}^{+}({\overline{y}};{\overline{x}}-{\overline{y}})+\sum _{t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}^{+}\left( {\overline{x}}\right) } \overline{\vartheta }_{t}^{H}H_{t}^{+}({\overline{y}};{\overline{x}}-{\overline{y}} )+ \\ \sum _{t\in T_{0+}^{-}\left( {\overline{x}}\right) }\left( -{\overline{\vartheta }}_{t}^{H}\right) H_{t}^{+}({\overline{y}};{\overline{x}}-{\overline{y}} )+\sum _{t\in T_{+0}\left( {\overline{x}}\right) }{\overline{\vartheta }} _{t}^{G}G_{t}^{+}({\overline{y}};{\overline{x}}-{\overline{y}}). \end{array} \end{aligned}$$

Taking into account the Lagrange multipliers equal to 0, we have

$$\begin{aligned} \begin{array}{c} \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}({\overline{x}})>\sum _{i=1}^{p} \overline{\lambda }_{i}f_{i}({\overline{y}})+\sum _{j=1}^{m}{\overline{\mu }} _{j}g_{_{j}}({\overline{y}})+ \\ \sum _{s=1}^{q}\overline{\xi }_{s}h_{s}({\overline{y}})-\sum _{t=1}^{r}\overline{ \vartheta }_{t}^{H}H_{t}({\overline{y}})+\sum _{t=1}^{r}{\overline{\vartheta }} _{t}^{G}G_{t}({\overline{y}})+ \\ \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}^{+}({\overline{y}};{\overline{x}}- {\overline{y}})+\sum _{j=1}^{m}\overline{\mu }_{j}g_{j}^{+}({\overline{y}}; {\overline{x}}-{\overline{y}})+\sum _{s=1}^{q}\overline{\xi }_{s}h_{s}^{+}( {\overline{y}};{\overline{x}}-{\overline{y}}) \\ -\sum _{t=1}^{r}\overline{\vartheta }_{t}^{H}H_{t}^{+}({\overline{y}};\overline{ x}-{\overline{y}})+\sum _{t=1}^{r}\overline{\vartheta }_{t}^{G}G_{t}^{+}( {\overline{y}};{\overline{x}}-{\overline{y}}). \end{array} \end{aligned}$$
(184)

Hence, by the first constraint of (WDVC), (184) yields that the inequality

$$\begin{aligned} \begin{array}{l} \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}({\overline{x}})>\sum _{i=1}^{p} \overline{\lambda }_{i}f_{i}({\overline{y}})+\sum _{j=1}^{m}{\overline{\mu }} _{j}g_{_{j}}({\overline{y}})+ \\ \quad \sum _{s=1}^{q}{\overline{\xi }}_{s}h_{s}({\overline{y}} )-\sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{H}H_{t}({\overline{y}} )+\sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{G}G_{t}({\overline{y}}) \end{array} \end{aligned}$$

holds, contradicting (169). This completes the proof of this theorem under hypothesis A).

Proof under hypothesis B)

Now, we assume that the vector-valued Lagrange function \(L\left( \cdot , {\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}\right) \) is strictly convex on \(\Omega \cup Y\). Hence, by the strict convexity of each component \(L_{i}\), we get

$$\begin{aligned} L_{i}\left( {\overline{x}},{\overline{\mu }},\overline{\xi },{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}\right) -L_{i}\left( {\overline{y}},{\overline{\mu }},\overline{\xi },{\overline{\vartheta }}^{H},{\overline{\vartheta }} ^{G}\right) >L_{i}^{+}({\overline{y}},{\overline{\mu }},{\overline{\xi }}, {\overline{\vartheta }}^{H},\overline{\vartheta }^{G};{\overline{x}}-{\overline{y}} )\text {, }i=1,...,p. \end{aligned}$$

Then, by the definition of L together with the feasibility of \({\overline{x}}\) in (MPVC), one has

$$\begin{aligned} \begin{array}{c} f_{i}({\overline{x}})>f_{i}({\overline{y}})+\sum _{j=1}^{m}\overline{\mu } _{j}g_{_{j}}({\overline{y}})+\sum _{s=1}^{q}\overline{\xi }_{s}h_{s}({\overline{y}})-\sum _{t=1}^{r}\overline{\vartheta }_{t}^{H}H_{t}({\overline{y}} )+\sum _{t=1}^{r}{\overline{\vartheta }}_{t}^{G}G_{t}({\overline{y}})+ \\ f_{i}^{+}({\overline{y}};{\overline{x}}-{\overline{y}})+\sum _{j=1}^{m}\overline{ \mu }_{j}g_{j}^{+}({\overline{y}};{\overline{x}}-{\overline{y}})+\sum _{s=1}^{q} \overline{\xi }_{s}h_{s}^{+}({\overline{y}};{\overline{x}}-{\overline{y}})\\ -\sum _{t=1}^{r}\overline{\vartheta }_{t}^{H}H_{t}^{+}({\overline{y}};\overline{ x}-{\overline{y}})+\sum _{t=1}^{r}\overline{\vartheta }_{t}^{G}G_{t}^{+}( {\overline{y}};{\overline{x}}-{\overline{y}}), i=1,...,p \end{array} \end{aligned}$$

Multiplying each of the above inequalities by the corresponding Lagrange multiplier \( {\overline{\lambda }}_{i}\), \(i=1,...,p\), respectively, and then summing the resulting inequalities, we obtain

$$\begin{aligned} \begin{array}{c} \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}({\overline{x}})>\sum _{i=1}^{p} {\overline{\lambda }}_{i}f_{i}({\overline{y}})+ \\ \sum _{i=1}^{p}{\overline{\lambda }}_{i}\left[ \sum _{j=1}^{m}{\overline{\mu }} _{j}g_{_{j}}({\overline{y}})+\sum _{s=1}^{q}\overline{\xi }_{s}h_{s}({\overline{y}})-\sum _{t=1}^{r}\overline{\vartheta }_{t}^{H}H_{t}({\overline{y}} )+\sum _{t=1}^{r}\overline{\vartheta }_{t}^{G}G_{t}({\overline{y}})\right] + \\ \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}^{+}({\overline{y}};{\overline{x}}- {\overline{y}})+\sum _{i=1}^{p}{\overline{\lambda }}_{i}\left[ \sum _{j=1}^{m} \overline{\mu }_{j}g_{j}^{+}({\overline{y}};{\overline{x}}-{\overline{y}} )+\sum _{s=1}^{q}\overline{\xi }_{s}h_{s}^{+}({\overline{y}};{\overline{x}}- {\overline{y}})\right. \\ \left. -\sum _{t=1}^{r}\overline{\vartheta }_{t}^{H}H_{t}^{+}({\overline{y}}; {\overline{x}}-{\overline{y}})+\sum _{t=1}^{r}{\overline{\vartheta }} _{t}^{G}G_{t}^{+}({\overline{y}};{\overline{x}}-{\overline{y}})\right] . \end{array} \end{aligned}$$

By the feasibility of \(\left( {\overline{y}},\overline{\lambda },{\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}, {\overline{w}},\overline{\theta }\right) \) in (WDVC), one has \(\sum _{i=1}^{p} \overline{\lambda }_{i}=1\). Then, the aforesaid inequality gives

$$\begin{aligned} \begin{array}{c} \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}({\overline{x}})>\sum _{i=1}^{p} {\overline{\lambda }}_{i}f_{i}({\overline{y}})+ \\ \sum _{j=1}^{m}\overline{\mu }_{j}g_{_{j}}({\overline{y}})+\sum _{s=1}^{q} \overline{\xi }_{s}h_{s}({\overline{y}})-\sum _{t=1}^{r}{\overline{\vartheta }} _{t}^{H}H_{t}({\overline{y}})+\sum _{t=1}^{r}\overline{\vartheta }_{t}^{G}G_{t}( {\overline{y}})+ \\ \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}^{+}({\overline{y}};{\overline{x}}- {\overline{y}})+\sum _{j=1}^{m}\overline{\mu }_{j}g_{j}^{+}({\overline{y}}; {\overline{x}}-{\overline{y}})+\sum _{s=1}^{q}\overline{\xi }_{s}h_{s}^{+}( {\overline{y}};{\overline{x}}-{\overline{y}}) \\ -\sum _{t=1}^{r}\overline{\vartheta }_{t}^{H}H_{t}^{+}({\overline{y}};\overline{ x}-{\overline{y}})+\sum _{t=1}^{r}\overline{\vartheta }_{t}^{G}G_{t}^{+}( {\overline{y}};{\overline{x}}-{\overline{y}}). \end{array} \end{aligned}$$

Using the first constraint of (WDVC), we get that the following inequality

$$\begin{aligned} \begin{array}{c} \sum _{i=1}^{p}\overline{\lambda }_{i}f_{i}({\overline{x}})>\sum _{i=1}^{p} {\overline{\lambda }}_{i}f_{i}({\overline{y}})+ \\ \sum _{j=1}^{m}\overline{\mu }_{j}g_{_{j}}({\overline{y}})+\sum _{s=1}^{q} {\overline{\xi }}_{s}h_{s}({\overline{y}})-\sum _{t=1}^{r}\overline{\vartheta } _{t}^{H}H_{t}({\overline{y}})+\sum _{t=1}^{r}\overline{\vartheta }_{t}^{G}G_{t}( {\overline{y}}) \end{array} \end{aligned}$$

holds, contradicting (169). This completes the proof of this theorem under hypothesis B). \(\square \)

5 Conclusions

This paper has studied a new class of nonsmooth vector optimization problems, namely, directionally differentiable multiobjective programming problems with vanishing constraints. Under the Abadie constraint qualification, Karush–Kuhn–Tucker type necessary optimality conditions have been established for such nondifferentiable vector optimization problems in terms of the right directional derivatives of the involved functions. The nonlinear Gordan alternative theorem has been used in proving these necessary optimality conditions. However, the Abadie constraint qualification may fail for such multicriteria optimization problems, in which case the aforesaid necessary optimality conditions may not hold. Therefore, we have introduced the modified Abadie constraint qualification for the considered multiobjective programming problem with vanishing constraints. Then, under the modified Abadie constraint qualification, which is weaker than the standard Abadie constraint qualification, we have proved weaker necessary optimality conditions of the Karush–Kuhn–Tucker type for such nondifferentiable vector optimization problems with vanishing constraints. The sufficiency of the Karush–Kuhn–Tucker necessary optimality conditions has also been proved for the considered directionally differentiable multiobjective programming problem with vanishing constraints under appropriate convexity hypotheses. Furthermore, for the considered directionally differentiable multiobjective programming problem with vanishing constraints, its vector Wolfe dual problem has been defined along the lines of Hu et al. (2020). Then several duality theorems have been established between the primal directionally differentiable multiobjective programming problem with vanishing constraints and its vector Wolfe dual problem under convexity hypotheses.
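To indicate the kind of problem covered by these results, consider the following minimal illustrative instance (a hypothetical example, not taken from the cited works) of a bicriteria problem with one vanishing constraint:

$$\begin{aligned} \min \ f(x)=\left( x_{1}\text {, }x_{1}^{2}+\left| x_{2}\right| \right) \ \ \text {s.t.}\ \ H_{1}(x)=x_{1}\geqq 0\text {, }G_{1}(x)H_{1}(x)=x_{2}x_{1}\leqq 0\text {.} \end{aligned}$$

Here the second objective is convex and directionally differentiable but not differentiable at \(x_{2}=0\), and the constraint \(G_{1}(x)H_{1}(x)\leqq 0\) vanishes whenever \(H_{1}(x)=0\); such problems fall within the scope of the optimality conditions and duality results above, but outside the differentiable theory.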

Thus, the above-mentioned optimality conditions and duality results have been derived for a completely new class of directionally differentiable vector optimization problems in comparison to the results existing in the literature, namely, for directionally differentiable multiobjective programming problems with vanishing constraints. Hence, the results established in the literature, mostly for scalar differentiable extremum problems with vanishing constraints, have been generalized and extended to directionally differentiable multiobjective programming problems with vanishing constraints.

It seems that the techniques employed in this paper can be used to prove similar results for other classes of nonsmooth mathematical programming problems with vanishing constraints. We shall investigate such problems in subsequent papers.