Abstract
In this paper, a class of directionally differentiable multiobjective programming problems with inequality, equality and vanishing constraints is considered. Under both the Abadie constraint qualification and the modified Abadie constraint qualification, the Karush–Kuhn–Tucker type necessary optimality conditions are established for such nondifferentiable vector optimization problems by using a nonlinear version of the Gordan theorem of the alternative for convex functions. Further, the sufficient optimality conditions for such directionally differentiable multiobjective programming problems with vanishing constraints are proved under convexity hypotheses. Furthermore, a vector Wolfe dual problem is defined for the considered directionally differentiable multiobjective programming problem with vanishing constraints and several duality theorems are established, also under appropriate convexity hypotheses.
1 Introduction
Multiobjective optimization problems, also known as vector optimization problems or multicriteria optimization problems, are extremum problems involving more than one objective function to be optimized. Many real-life problems can be formulated as multiobjective programming problems, including problems in human decision making, economics, financial investment, portfolio and resource allocation, information transfer, engineering design, mechanics, control theory, etc. During the past five decades, the field of multiobjective programming has grown remarkably in different directions in the setting of optimality conditions and duality theory. One of the classes of nondifferentiable multicriteria optimization problems studied in the recent past is the class of directionally differentiable vector optimization problems, for which many authors have established the aforesaid fundamental results in optimization theory (see, for example, (Ahmad, 2011; Antczak, 2002, 2009; Arana-Jiménez et al., 2013; Dinh et al., 2005; Ishizuka, 1992; Kharbanda et al., 2015; Mishra & Noor, 2006; Mishra et al., 2008, 2015; Slimani & Radjef, 2010; Ye, 1991) and others).
Recently, a special class of optimization problems, known as mathematical programming problems with vanishing constraints, was introduced by Achtziger and Kanzow (2008); it serves as a unified framework for several applications in structural and topology optimization. Since optimization problems with vanishing constraints, in their general form, are quite a new class of mathematical programming problems, only a few works have been published on this subject so far (see, for example, (Achtziger et al. 2013; Antczak 2022; Dorsch et al. 2012; Dussault et al. 2019; Guu et al. 2017; Hoheisel and Kanzow 2008, 2009; Hoheisel et al. 2012; Hu et al. 2014, 2020; Izmailov and Solodov 2009; Khare and Nath 2019; Mishra et al. 2015, 2016; Thung 2022)). However, to the best of our knowledge, there are no works on optimality conditions for (convex) directionally differentiable multiobjective programming problems with vanishing constraints in the literature.
The main purpose of this paper is, therefore, to develop optimality conditions for a new class of nondifferentiable multiobjective programming problems with vanishing constraints. Namely, this paper presents a study of both necessary and sufficient optimality conditions for convex directionally differentiable vector optimization problems with inequality, equality and vanishing constraints. Considering the concept of a (weak) Pareto solution, we establish Karush–Kuhn–Tucker type necessary optimality conditions which are formulated in terms of directional derivatives. In proving the aforesaid necessary optimality conditions, we use a nonlinear version of the Gordan alternative theorem for convex functions and also the Abadie constraint qualification. Further, we illustrate by an example that the necessary optimality conditions may not hold under the aforesaid constraint qualification. Therefore, we introduce the VC-Abadie constraint qualification and, under this constraint qualification, weaker than the classical one, we present the Karush–Kuhn–Tucker type necessary optimality conditions for the considered directionally differentiable multiobjective programming problem. Further, we prove the sufficiency of the aforesaid necessary optimality conditions for such nondifferentiable vector optimization problems under appropriate convexity hypotheses. The optimality results established in the paper are illustrated by an example of a convex directionally differentiable multiobjective programming problem with vanishing constraints. Furthermore, for the considered directionally differentiable vector optimization problem with vanishing constraints, we define its vector Wolfe dual problem and we prove several duality theorems, also under convexity hypotheses.
2 Preliminaries
In this section, we provide some definitions and results that we shall use in the sequel. The following convention for equalities and inequalities will be used throughout the paper.
For any \(x=\left( x_{1},x_{2},...,x_{n}\right) ^{T}\), \(y=\left( y_{1},y_{2},...,y_{n}\right) ^{T}\) in \(R^{n}\), we define:
-
(i)
\(x=y\) if and only if \(x_{i}=y_{i}\) for all \(i=1,2,...,n\);
-
(ii)
\(x<y\) if and only if \(x_{i}<y_{i}\) for all \(i=1,2,...,n\);
-
(iii)
\(x\leqq y\) if and only if \(x_{i}\leqq y_{i}\) for all \(i=1,2,...,n\);
-
(iv)
\(x\le y\) if and only if \(x\leqq y\) and \(x\ne y\).
Throughout the paper, we will use the same notation for row and column vectors when the interpretation is obvious.
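The componentwise order relations (i)–(iv) above can be sketched in code as follows; this is an illustrative sketch only, and the helper names are ours, not from the paper.

```python
# Illustrative sketch of the componentwise order relations (i)-(iv) on R^n.
# Helper names (eq, lt, leqq, le) are hypothetical, introduced for illustration.

def eq(x, y):    # (i)   x = y : equality in every component
    return all(xi == yi for xi, yi in zip(x, y))

def lt(x, y):    # (ii)  x < y : strict inequality in every component
    return all(xi < yi for xi, yi in zip(x, y))

def leqq(x, y):  # (iii) the relation "x ≦ y" : componentwise <=
    return all(xi <= yi for xi, yi in zip(x, y))

def le(x, y):    # (iv)  x ≤ y : x ≦ y and x ≠ y
    return leqq(x, y) and not eq(x, y)
```

For instance, `le((1, 2), (1, 3))` holds while `lt((1, 2), (1, 3))` does not, which is exactly the distinction between (ii) and (iv) used to define Pareto and weak Pareto solutions later on.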
Definition 2.1
The affine hull of a nonempty set \(C\subset R^{n}\) is the set of all affine combinations of its points, that is, \(aff\,C=\left\{ \sum \limits _{i=1}^{k}\lambda _{i}x_{i}:x_{1},...,x_{k}\in C,\ \lambda _{i}\in R,\ \sum \limits _{i=1}^{k}\lambda _{i}=1,\ k\in N\right\} \).
Definition 2.2
(Hiriart-Urruty & Lemaréchal, 1993) The relative interior of the set C (denoted by \(relint\,C\)) is defined as \(relint\,C:=\left\{ x\in aff\,C:B\left( x,r\right) \cap aff\,C\subseteq C\ \text{for some}\ r>0\right\} \), where \(B\left( x,r\right) :=\left\{ y\in R^{n}:\left\| x-y\right\| \leqq r\right\} \) is the ball of radius r around x with respect to some norm on \(R^{n}\).
Remark 2.3
(Rockafellar, 1970) The definition of the relative interior of a nonempty convex set C can be reduced to the following:
Definition 2.4
It is said that \(\varphi :C\rightarrow R\), where \(C\subset R^{n}\) is a nonempty convex set, is convex on C if the inequality \(\varphi \left( \lambda x+\left( 1-\lambda \right) u\right) \leqq \lambda \varphi \left( x\right) +\left( 1-\lambda \right) \varphi \left( u\right) \) holds for all \(x,u\in C\) and any \(\lambda \in \left[ 0,1\right] \).
It is said that \(\varphi \) is strictly convex on C if the inequality \(\varphi \left( \lambda x+\left( 1-\lambda \right) u\right) <\lambda \varphi \left( x\right) +\left( 1-\lambda \right) \varphi \left( u\right) \) holds for all \(x,u\in C\), \(x\ne u\), and any \(\lambda \in \left( 0,1\right) \).
Definition 2.5
We say that a mapping \(\varphi :X\rightarrow R\) defined on a nonempty set \(X\subseteq R^{n}\) is directionally differentiable at \(u\in X\) in a direction \(v\in R^{n}\) if the limit \(\varphi ^{+}(u;v):=\lim \limits _{\alpha \downarrow 0}\frac{\varphi \left( u+\alpha v\right) -\varphi \left( u\right) }{\alpha }\) exists and is finite. We say that \(\varphi \) is directionally differentiable (or Dini differentiable) at u if its directional derivative \(\varphi ^{+}(u;v)\) exists and is finite for all \(v\in R^{n}\).
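The one-sided limit defining the directional derivative can be approximated numerically. The following sketch (an assumed example, not from the paper) uses the difference quotient with a small step for the convex but nondifferentiable function \(\varphi (x)=\left| x_{1}\right| +x_{2}^{2}\), whose directional derivative at the origin is \(\varphi ^{+}(0;v)=\left| v_{1}\right| \).

```python
# Sketch: approximate the Dini directional derivative
#   phi^+(u; v) = lim_{a -> 0+} (phi(u + a*v) - phi(u)) / a
# by a one-sided difference quotient with a small positive step.

def dini_derivative(phi, u, v, alpha=1e-8):
    """One-sided difference quotient approximating phi^+(u; v)."""
    u_shift = [ui + alpha * vi for ui, vi in zip(u, v)]
    return (phi(u_shift) - phi(u)) / alpha

# phi(x) = |x_1| + x_2^2 is convex but not differentiable at x_1 = 0;
# at u = (0, 0) its directional derivative is phi^+(u; v) = |v_1|.
phi = lambda x: abs(x[0]) + x[1] ** 2

d1 = dini_derivative(phi, [0.0, 0.0], [1.0, 1.0])   # approximately 1.0
d2 = dini_derivative(phi, [0.0, 0.0], [-1.0, 1.0])  # approximately 1.0
```

Note that the two opposite first components give the same value 1, reflecting the kink of \(\left| \cdot \right| \) at zero: the directional derivative exists in every direction even though the gradient does not.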
Proposition 2.6
(Jahn, 2004) Let a mapping \(\varphi :R^{n}\rightarrow R\) be convex. Then, at every \( u\in R^{n}\) and in every direction \(v\in R^{n}\), the directional derivative \( \varphi ^{+}(u;v)\) exists. Moreover, since the convex function \(\varphi \) has a directional derivative in the direction \(x-u\) for any \(x\in R^{n}\), the inequality \(\varphi \left( x\right) -\varphi \left( u\right) \geqq \varphi ^{+}(u;x-u)\) holds.
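This inequality can be checked numerically on a simple assumed example (not from the paper), \(\varphi =\left| \cdot \right| \) on R with \(u=0\), where \(\varphi ^{+}(0;d)=\left| d\right| \).

```python
# Sketch: numerically check the convex-function inequality of Proposition 2.6,
#   phi(x) - phi(u) >= phi^+(u; x - u),
# for the scalar convex function phi(x) = |x|, nondifferentiable at u = 0.

def dini_derivative(phi, u, v, alpha=1e-8):
    """One-sided difference quotient approximating phi^+(u; v) for scalar u, v."""
    return (phi(u + alpha * v) - phi(u)) / alpha

phi = abs
u = 0.0
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    lhs = phi(x) - phi(u)                 # phi(x) - phi(u)
    rhs = dini_derivative(phi, u, x - u)  # phi^+(u; x - u) = |x| here
    assert lhs >= rhs - 1e-9              # the inequality holds (with tolerance)
```

In this example the inequality holds with equality, since \(\left| x\right| -\left| 0\right| =\left| x\right| =\varphi ^{+}(0;x)\); for smooth strictly convex functions it is strict for \(x\ne u\).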
Lemma 2.7
(Jahn, 2004) Let \(X\subseteq R^{n}\) be open, \(u\in X\) be given, \(f,g:X\rightarrow R\) and \(v\in R^{n}\). Further, assume that the directional derivatives of f and g at u in the direction v exist, i.e. \(f^{+}(u;v)\) and \(g^{+}(u;v)\) both exist. Then the directional derivative of \(f\cdot g\) at u in the direction v exists and \(\left( f\cdot g\right) ^{+}(u;v)=f(u)g^{+}(u;v)+f^{+}(u;v)g(u)\).
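The product rule of Lemma 2.7 can likewise be verified numerically; the following sketch uses the assumed pair \(f(x)=\left| x\right| \), \(g(x)=x+2\) at \(u=0\), \(v=1\), for which \(f(u)g^{+}(u;v)+f^{+}(u;v)g(u)=0\cdot 1+1\cdot 2=2\).

```python
# Sketch: numerical check of the product rule of Lemma 2.7,
#   (f*g)^+(u; v) = f(u) g^+(u; v) + f^+(u; v) g(u),
# with f(x) = |x| (nondifferentiable at 0) and the affine g(x) = x + 2.

def dini(phi, u, v, alpha=1e-7):
    """One-sided difference quotient approximating phi^+(u; v)."""
    return (phi(u + alpha * v) - phi(u)) / alpha

f = abs
g = lambda x: x + 2.0
fg = lambda x: f(x) * g(x)

u, v = 0.0, 1.0
lhs = dini(fg, u, v)                                 # (f*g)^+(u; v)
rhs = f(u) * dini(g, u, v) + dini(f, u, v) * g(u)    # right-hand side of the rule
# lhs and rhs both approximate 2.0
```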
Giorgi (2002) proved the following theorem of the alternative for convex functions, which may be considered as a nonlinear version of the Gordan theorem presented by Mangasarian (1969) in the linear case.
Theorem 2.8
(Giorgi, 2002) Let \(C\subset R^{n}\) be a nonempty convex set, \(F:C\rightarrow R^{k}\), \( \Psi :C\rightarrow R^{m}\) be convex functions and \(\Phi :R^{n}\rightarrow R^{q}\) be a linear function. Let us assume that there exists \(x_{0}\in relint\,C\) such that \(\Psi _{j}\left( x_{0}\right) <0\), \(j=1,...,m,\) and \( \Phi _{s}\left( x_{0}\right) \leqq 0\), \(s=1,...,q\). Then, the system
admits no solutions if and only if there exists a vector \(\left( \lambda ,\theta ,\beta \right) ^{T}\in R_{+}^{k}\times R_{+}^{m}\times R^{q}\), \( \lambda \ne 0\), such that
Definition 2.9
The cone of sequential linear directions (also known as the sequential radial cone) to a set \(Q\subset R^{n}\) at \( {\overline{x}}\in Q\) is the set denoted by \(Z\left( Q;{\overline{x}}\right) \) and defined by \(Z\left( Q;{\overline{x}}\right) :=\left\{ v\in R^{n}:\exists \left( \alpha _{k}\right) \subset R_{+},\ \alpha _{k}\downarrow 0,\ \text{such that}\ {\overline{x}}+\alpha _{k}v\in Q\ \text{for all}\ k\in N\right\} \).
Definition 2.10
The tangent cone to a set \(Q\subset R^{n}\) at \( {\overline{x}}\in cl\,Q\) is the set denoted by \({\mathcal {T}}\left( Q;{\overline{x}}\right) \) and defined by \({\mathcal {T}}\left( Q;{\overline{x}}\right) :=\left\{ v\in R^{n}:\exists \left( x_{k}\right) \subset Q,\ x_{k}\rightarrow {\overline{x}},\ \exists \left( \lambda _{k}\right) \subset R_{+}\ \text{such that}\ \lambda _{k}\left( x_{k}-{\overline{x}}\right) \rightarrow v\right\} \), where \(cl\,Q\) denotes the closure of Q.
Note that the aforesaid cones are nonempty, \({\mathcal {T}}\left( Q;{\overline{x}}\right) \) is closed but may not be convex, and \(Z\left( Q;{\overline{x}}\right) \subset {\mathcal {T}}\left( Q;{\overline{x}}\right) \).
3 Multiobjective programming with vanishing constraints
In the paper, we consider the following constrained multiobjective programming problem (MPVC) with vanishing constraints defined by
where \(f_{i}:R^{n}\rightarrow R\), \(i\in I=\left\{ 1,...,p\right\} \), \( g_{j}:R^{n}\rightarrow R\), \(j\in J=\left\{ 1,...,m\right\} \), \( h_{s}:R^{n}\rightarrow R\), \(s\in S=\left\{ 1,...,q\right\} \), \(H_{t}:R^{n}\rightarrow R\), \( G_{t}:R^{n}\rightarrow R\), \(t\in T=\left\{ 1,...,r\right\} \), are real-valued functions and \(C\subseteq R^{n}\) is a nonempty open convex set.
For the purpose of simplifying our presentation, we will next introduce some notations which will be used frequently throughout this paper. Let \(\Omega =\{x\in C:g_{j}(x)\leqq 0\), \(j\in J\), \(h_{s}\left( x\right) =0\), \(s\in S\), \(H_{t}\left( x\right) \geqq 0\), \( H_{t}\left( x\right) G_{t}\left( x\right) \leqq 0\), \(t\in T\}\) be the set of all feasible solutions for (MPVC). Further, we denote by \(J({\overline{x}} ):=\left\{ j\in J:g_{j}({\overline{x}})=0\right\} \) the set of inequality constraint indices that are active at \({\overline{x}}\in \Omega \) and by \( J^{<}({\overline{x}})=\{j\in \{1,...,m\}:g_{j}({\overline{x}})<0\}\) the set of inequality constraint indices that are inactive at \({\overline{x}}\in \Omega \). Then, \(J({\overline{x}})\cup J^{<}({\overline{x}})=J\).
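The structure of the feasible set \(\Omega \) can be sketched as a membership test; this is an illustrative sketch with hypothetical data, not part of the paper's formal development.

```python
# Sketch: membership test for the feasible set Omega of (MPVC), combining the
# inequality, equality and vanishing constraints. Constraint lists are
# hypothetical inputs; a small tolerance handles floating-point comparisons.

def in_omega(x, g_list, h_list, H_list, G_list, tol=1e-12):
    """Check g_j(x) <= 0, h_s(x) = 0, H_t(x) >= 0 and H_t(x)*G_t(x) <= 0."""
    return (
        all(g(x) <= tol for g in g_list)
        and all(abs(h(x)) <= tol for h in h_list)
        and all(H(x) >= -tol for H in H_list)
        and all(H(x) * G(x) <= tol for H, G in zip(H_list, G_list))
    )

# A single vanishing constraint with H(x) = x1, G(x) = x2 and no g or h:
H = lambda x: x[0]
G = lambda x: x[1]
# (0, 5) is feasible (H vanishes, so G is unrestricted),
# while (1, 1) is infeasible (H*G = 1 > 0).
```

The example illustrates the "vanishing" character of the constraint: whenever \(H_{t}\left( x\right) =0\), the sign restriction on \(G_{t}\left( x\right) \) disappears.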
Before studying optimality in multiobjective programming, one has to define clearly the well-known concepts of optimality and solutions in a multiobjective programming problem. The (weak) Pareto optimality in multiobjective programming associates the concept of a solution with some property that seems intuitively natural.
Definition 3.1
A feasible point \({\overline{x}}\) is said to be a Pareto solution (an efficient solution) in (MPVC) if and only if there exists no other \(x\in \Omega \) such that
Definition 3.2
A feasible point \({\overline{x}}\) is said to be a weak Pareto solution (a weakly efficient solution, a weak minimum) in (MPVC) if and only if there exists no other \(x\in \Omega \) such that
As it follows from the definition of (weak) Pareto optimality, \({\overline{x}}\) is nonimprovable with respect to the vector cost function f. This nonimprovability property provides a complete solution if \({\overline{x}}\) is unique. However, usually this is not the case, and then one has to find the entire set of all Pareto optimal solutions of a multiobjective programming problem.
Now, for any feasible solution \({\overline{x}}\), let us denote the following index sets \(T_{+}\left( {\overline{x}}\right) :=\left\{ t\in T:H_{t}\left( {\overline{x}}\right) >0\right\} \) and \(T_{0}\left( {\overline{x}}\right) :=\left\{ t\in T:H_{t}\left( {\overline{x}}\right) =0\right\} \).
Further, let us divide the index set \(T_{+}\left( {\overline{x}}\right) \) into the following index subsets: \(T_{+0}\left( {\overline{x}}\right) :=\left\{ t\in T:H_{t}\left( {\overline{x}}\right) >0,\ G_{t}\left( {\overline{x}}\right) =0\right\} \) and \(T_{+-}\left( {\overline{x}}\right) :=\left\{ t\in T:H_{t}\left( {\overline{x}}\right) >0,\ G_{t}\left( {\overline{x}}\right) <0\right\} \).
Similarly, the index set \(T_{0}\left( {\overline{x}}\right) \) can be partitioned into the following three index subsets: \(T_{0+}\left( {\overline{x}}\right) :=\left\{ t\in T:H_{t}\left( {\overline{x}}\right) =0,\ G_{t}\left( {\overline{x}}\right) >0\right\} \), \(T_{00}\left( {\overline{x}}\right) :=\left\{ t\in T:H_{t}\left( {\overline{x}}\right) =0,\ G_{t}\left( {\overline{x}}\right) =0\right\} \) and \(T_{0-}\left( {\overline{x}}\right) :=\left\{ t\in T:H_{t}\left( {\overline{x}}\right) =0,\ G_{t}\left( {\overline{x}}\right) <0\right\} \).
Moreover, we denote by \(T_{HG}\left( {\overline{x}}\right) \) the set of indices \(t\in T\) defined by \(T_{HG}\left( {\overline{x}}\right) =T_{00}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{+0}\left( {\overline{x}}\right) \).
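The partition of T by the signs of \(H_{t}\left( {\overline{x}}\right) \) and \(G_{t}\left( {\overline{x}}\right) \) can be sketched as follows; the numerical data are hypothetical, chosen only so that each index subset is nonempty.

```python
# Sketch: classify the vanishing-constraint indices t at a feasible point
# according to the signs of H_t(xbar) and G_t(xbar), following the index-set
# partition T = T_+ U T_0, T_+ = T_+0 U T_+-, T_0 = T_0+ U T_00 U T_0-.

def classify(H_vals, G_vals, tol=0.0):
    """Return the index sets T+, T0 and their subsets T+0, T+-, T0+, T00, T0-."""
    sets = {name: set() for name in ("T+", "T0", "T+0", "T+-", "T0+", "T00", "T0-")}
    for t, (H, G) in enumerate(zip(H_vals, G_vals)):
        if H > tol:
            sets["T+"].add(t)
            sets["T+0" if G == 0 else "T+-"].add(t)  # feasibility forces G <= 0 here
        else:                                        # H = 0 at a feasible point
            sets["T0"].add(t)
            if G > 0:
                sets["T0+"].add(t)
            elif G < 0:
                sets["T0-"].add(t)
            else:
                sets["T00"].add(t)
    return sets

# Hypothetical values H(xbar) = (1, 0, 0, 0, 2), G(xbar) = (-1, 3, 0, -2, 0):
s = classify([1.0, 0.0, 0.0, 0.0, 2.0], [-1.0, 3.0, 0.0, -2.0, 0.0])
# s["T+-"] == {0}, s["T0+"] == {1}, s["T00"] == {2}, s["T0-"] == {3}, s["T+0"] == {4}
```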
Before proving the necessary optimality conditions for the considered directionally differentiable multiobjective programming problem with vanishing constraints, we introduce the Abadie constraint qualification for this multicriteria optimization problem.
In order to introduce the aforesaid constraint qualification, for \({\overline{x}}\in \Omega \), we define the sets \(Q^{l}\left( {\overline{x}}\right) \), \( l=1,...,p\), and \(Q\left( {\overline{x}}\right) \) as follows
Now, we give the definition of the almost linearizing cone for the considered multiobjective programming problem (MPVC) with vanishing constraints. It is a generalization of the almost linearizing cone introduced by Preda and Chitescu (1999) for a directionally differentiable multiobjective optimization problem with inequality constraints only.
Definition 3.3
The almost linearizing cone \( L\left( \Omega ,{\overline{x}}\right) \) to the set \(\Omega \) at \({\overline{x}} \in \Omega \) is defined by
Now, we prove the result which gives the formulation of the almost linearizing cone to the sets \(Q^{l}\left( {\overline{x}}\right) \), \(l=1,...,p\).
Proposition 3.4
Let \({\overline{x}}\in \Omega \) be a Pareto solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. Then, the almost linearizing cone to each set \(Q^{l}\left( {\overline{x}}\right) \), \(l=1,...,p\), at \({\overline{x}}\), denoted by \(L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \), is given by
Proof
Let us assume that \({\overline{x}}\in \Omega \) is a Pareto solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. Then, by the definitions of the almost linearizing cone and index sets, we get
Note that, by Lemma 2.7, one has
Then, by the definition of index sets, (7) gives
Combining (6)-(8), we get (5). This completes the proof of this proposition. \(\square \)
Remark 3.5
Note that the almost linearizing cone to \(Q\left( {\overline{x}}\right) \) at \( {\overline{x}}\in Q\left( {\overline{x}}\right) \) is given by
Indeed, by (5), we get (9). In other words, the formulation of \(L\left( Q\left( {\overline{x}}\right) ; {\overline{x}}\right) \) is given by
Proposition 3.6
If \(f_{i}^{+}\left( {\overline{x}};\cdot \right) \), \(i\in I,\) \(g_{j}^{+}\left( {\overline{x}};\cdot \right) \), \(j\in J\left( {\overline{x}}\right) ,\) \( h_{s}^{+}\left( {\overline{x}};\cdot \right) \), \(s\in S,\) \(-H_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \), \( G_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{+0}\left( {\overline{x}} \right) \), are convex on \(R^{n}\), then \(L\left( Q\left( {\overline{x}}\right) ; {\overline{x}}\right) \) is a closed convex cone.
Proof
Since the directional derivative is a positively homogeneous function of the direction, if \(\alpha \geqq 0\) and \(v\in L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \), then \(\alpha v\in L\left( Q\left( {\overline{x}} \right) ;{\overline{x}}\right) \). This means that \(L\left( Q\left( {\overline{x}} \right) ;{\overline{x}}\right) \) is a cone.
Now, we prove that it is a convex cone. Let \(v_{1}\), \(v_{2}\in L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \) and \(\alpha \in \left[ 0,1 \right] \). By convexity assumption, it follows that
The above inequalities imply that \(\alpha v_{1}+\left( 1-\alpha \right) v_{2}\in L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \), which means that \(L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \) is a convex cone.
Now, we prove the closedness of \(L\left( Q\left( {\overline{x}}\right) ; {\overline{x}}\right) \). In order to prove this property, we take a sequence \( \left\{ v_{r}\right\} \subset L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \) such that \(v_{r}\rightarrow v\) as \(r\rightarrow \infty \). Since \( v_{r}\in L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \) for any integer r, by the continuity of convex functions \(f_{i}^{+}\left( {\overline{x}};\cdot \right) \), \(i\in I\), we have
Similarly, we obtain \(g_{j}^{+}\left( {\overline{x}};v\right) \leqq 0\), \(j\in J\left( {\overline{x}}\right) ,\) \(h_{s}^{+}\left( {\overline{x}};v\right) =0\), \( s\in S\), \(H_{t}^{+}\left( {\overline{x}};v\right) =0\), \(t\in T_{0+}\left( {\overline{x}}\right) \), \(H_{t}^{+}\left( {\overline{x}};v\right) \geqq 0\), \( t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \), \(G_{t}^{+}\left( {\overline{x}};v\right) \leqq 0\), \(t\in T_{+0}\left( {\overline{x}}\right) \). This means that the set \(L\left( Q\left( {\overline{x}} \right) ;{\overline{x}}\right) \) is closed. \(\square \)
Remark 3.7
Based on the result established in the above proposition, we conclude that \(L\left( Q^{l}\left( {\overline{x}} \right) ;{\overline{x}}\right) \), \(l=1,...,p,\) are also closed convex cones.
Proposition 3.8
If, for each \(v\in Z\left( Q\left( {\overline{x}} \right) ;{\overline{x}}\right) \), the Dini directional derivatives \( f_{i}^{+}\left( {\overline{x}};v\right) \), \(i\in I,\) \(g_{j}^{+}\left( {\overline{x}};v\right) \), \(j\in J\left( {\overline{x}}\right) ,\) \( h_{s}^{+}\left( {\overline{x}};v\right) \), \(s\in S,\) \(H_{t}^{+}\left( {\overline{x}};v\right) \), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \), \( G_{t}^{+}\left( {\overline{x}};v\right) \), \(t\in T_{+0}\left( {\overline{x}} \right) \), exist, then
Proof
Firstly, we prove that, for each \(l=1,...,p\),
To this end, for each \(l=1,...,p\), we take \(v\in Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \). Then, by Definition 2.9, there exists \(\left( \alpha _{k}\right) \subset R_{+}\), \(\alpha _{k}\downarrow 0\), such that \({\overline{x}}+\alpha _{k}v\in Q^{l}\left( {\overline{x}}\right) \) for all \(k\in N\). Then, for each \(l=1,...,p\), since \({\overline{x}}+\alpha _{k}v\in Q^{l}\left( {\overline{x}}\right) \), we have
Then, by Definition 2.5, we have
By Lemma 2.7, one has
Hence, we conclude by (13)-(15) and (19)-(21) that \(v\in L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \) for each \(l=1,...,p\). Therefore, since we have shown that \(Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset L\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \) for each \(l=1,...,p\), we have by (9) that
as was to be shown. \(\square \)
Note that, in general, the converse inclusion of (11) does not hold. Therefore, in order to prove the necessary optimality condition for efficiency in (MPVC), we give the definition of the Abadie constraint qualification.
Definition 3.9
It is said that the Abadie constraint qualification holds at \({\overline{x}}\in \Omega \) for (MPVC) iff \(L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \).
Remark 3.10
By (11), (22) means that the Abadie constraint qualification (ACQ) holds at \({\overline{x}} \) for (MPVC) iff \(L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) =\bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \).
Now, we state a necessary condition for efficiency in (MPVC).
Theorem 3.11
Let \({\overline{x}}\in \Omega \) be an efficient solution in (MPVC) and, for each \(v\in Z\left( C;{\overline{x}}\right) \), the directional derivatives \(f_{i}^{+}\left( {\overline{x}};v\right) \), \(i=1,...,p\), \(g_{j}^{+}\left( {\overline{x}};v\right) \), \(j\in J({\overline{x}})\), \(h_{s}^{+}\left( {\overline{x}};v\right) \), \(s\in S\), \(H_{t}^{+}\left( {\overline{x}};v\right) \), \(t\in T_{0}({\overline{x}})=T_{00}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \), \(G_{t}^{+}\left( {\overline{x}};v\right) \), \(t\in T_{+0}\left( {\overline{x}}\right) \), exist. Further, we assume that \(g_{j}\), \(j\in J^{<}( {\overline{x}})\), \(H_{t}\), \(t\in T_{+}({\overline{x}})\), \(G_{t}\), \(t\in T_{+-}( {\overline{x}})\), are continuous functions at \({\overline{x}}\). If the Abadie constraint qualification (ACQ) holds at \({\overline{x}}\) for (MPVC), then, for each \(l=1,...,p\), the system
has no solution \(v\in R^{n}\).
Proof
We proceed by contradiction. Suppose, contrary to the result, that there exists \(l_{0}\in \left\{ 1,...,p\right\} \) such that the system
has a solution \(v\in R^{n}\). Then, by (8), the system
has a solution \(v\in R^{n}\). Hence, it is obvious that \(v\in L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \). By assumption, (ACQ) is satisfied at \({\overline{x}}\) for (MPVC). Then, by Definition 3.9, \(v\in \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ; {\overline{x}}\right) \). Thus, \(v\in Z\left( Q^{l_{0}}\left( {\overline{x}} \right) ;{\overline{x}}\right) \). Therefore, by Definition 2.9, there exists \(\left( \alpha _{k}\right) \subset R_{+}\), \(\alpha _{k}\downarrow 0\), such that \({\overline{x}}+\alpha _{k}v\in Q^{l_{0}}\left( {\overline{x}}\right) \) for all \(k\in N\). Hence, \({\overline{x}}+\alpha _{k}v\in C\) and, moreover,
By the definition of index sets, one has \(g_{j}\left( {\overline{x}}\right) <0\), \(j\in J^{<}({\overline{x}})\), \(H_{t}\left( {\overline{x}}\right) >0\), \(t\in T_{+}({\overline{x}})\), \(G_{t}({\overline{x}})<0\), \(t\in T_{+-}({\overline{x}})\). Therefore, by the continuity of \(g_{j}\), \(j\in J^{<}({\overline{x}})\), \(H_{t}\), \(t\in T_{+}({\overline{x}})\), \(G_{t}\), \(t\in T_{+-}({\overline{x}}),\) at \( {\overline{x}}\), there exists \(k_{0}\in N\) such that, for all \(k>k_{0}\),
Thus, we conclude by (40)-(47) that there exists \(\delta >0\) such that \({\overline{x}}+\alpha _{k}v\in \Omega \cap B\left( {\overline{x}};\delta \right) \), where \(B\left( {\overline{x}};\delta \right) \) denotes the open ball of radius \(\delta \) around \( {\overline{x}}\).
On the other hand, it follows from the assumption that \({\overline{x}}\in \Omega \) is an efficient solution in (MPVC). Hence, by Definition 3.1, there exists a number \(\delta >0\) such that there is no \(x\in \Omega \cap B\left( {\overline{x}};\delta \right) \) satisfying
Hence, since \({\overline{x}}+\alpha _{k}v\in \Omega \cap B\left( {\overline{x}};\delta \right) \) and (39) holds, by (48) and (49), we conclude that, for all \(k\in N\), the inequality
holds. Then, by Definition 2.5, the inequality above implies that the inequality
holds, which is a contradiction to (28). Hence, the proof of this theorem is completed. \(\square \)
Remark 3.12
As follows from the proof of Theorem 3.11, if the system (23)-(27) has no solution \(v\in R^{n}\), then, for each \(l=1,...,p\), the system
has no solution \(v\in R^{n}\).
Let us define the functions \(F=\left( F_{1},...,F_{p}\right) :R^{n}\rightarrow R^{p}\), \(\Psi =\big ( \Psi _{1},...,\Psi _{\left| J\left( {\overline{x}}\right) \right| +\left| T_{00}\left( {\overline{x}} \right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| +\left| T_{+0}\left( {\overline{x}}\right) \right| }\big ):R^{n}\rightarrow R^{\left| J\left( {\overline{x}}\right) \right| +\left| T_{00}\left( {\overline{x}}\right) \right| +\left| T_{0-}\left( {\overline{x}}\right) \right| +\left| T_{+0}\left( {\overline{x}}\right) \right| }\) and \(\Phi =\left( \Phi _{1},...,\Phi _{q+\left| T_{0+}\left( {\overline{x}}\right) \right| }\right) :R^{n}\rightarrow R^{q+\left| T_{0+}\left( {\overline{x}}\right) \right| }\) as follows
We are now in a position to formulate the Karush–Kuhn–Tucker necessary optimality conditions for a feasible solution \({\overline{x}}\) to be an efficient solution in (MPVC) under the Abadie constraint qualification (ACQ).
Theorem 3.13
(Karush–Kuhn–Tucker Type Necessary Optimality Conditions). Let \({\overline{x}} \in \Omega \) be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. We also assume that \( f_{i}\), \(i\in I\), \(g_{j}\), \(j\in J\), \(h_{s}\), \(s\in S\), \(H_{t}\), \(t\in T\), \( G_{t}\), \(t\in T\), are directionally differentiable functions at \({\overline{x}} \), \(f_{i}^{+}\left( {\overline{x}};\cdot \right) \), \(i\in I\), \(g_{j}^{+}\left( {\overline{x}};\cdot \right) \), \(j\in J({\overline{x}})\), \(-H_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \), \(G_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{+0}\left( {\overline{x}}\right) \), are convex functions, \( h_{s}^{+}\left( {\overline{x}};\cdot \right) \), \(s\in S\), \(H_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{0+}\left( {\overline{x}}\right) \), are linear functions, \(g_{j}\), \(j\in J^{<}({\overline{x}})\), \(H_{t}\), \(t\in T_{+}( {\overline{x}})\), \(G_{t}\), \(t\in T_{0}\left( {\overline{x}}\right) \cup T_{+-}\left( {\overline{x}}\right) \), are continuous functions at \({\overline{x}} \) and, moreover, the Abadie constraint qualification (ACQ) is satisfied at \( {\overline{x}}\) for (MPVC). If there exists \(v_{0}\in relint\,Z\left( C; {\overline{x}}\right) \) such that \(\Psi \left( v_{0}\right) <0\) and \(\Phi \left( v_{0}\right) \leqq 0\), then there exist Lagrange multipliers \( {\overline{\lambda }}\in R^{p}\), \({\overline{\mu }}\in R^{m}\), \({\overline{\xi }} \in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\) and \({\overline{\vartheta }} ^{G}\in R^{r}\) such that the following conditions
hold.
Proof
Let \({\overline{x}}\in \Omega \) be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. Since (ACQ) is satisfied at \({\overline{x}}\) for (MPVC), by Remark 3.12, the system (50)-(55) has no solution \(v\in R^{n}\). By (56)-(58), it follows that the system
admits no solutions. Then, by Theorem 2.8, there exists a vector \(\left( \lambda ,\theta ,\beta \right) ^{T}\in R_{+}^{k}\times R_{+}^{m}\times R^{q}\), \(\lambda \ne 0\), such that
Let us set
If we use (67)-(70) in (66), then we get the Karush–Kuhn–Tucker optimality condition (59). Moreover, note that (67)-(70) imply the Karush–Kuhn–Tucker optimality conditions (60)-(65). Hence, the proof of this theorem is finished. \(\square \)
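For smooth data the directional derivatives reduce to gradients, and the stationarity condition (59) can then be checked numerically. The following sketch uses an assumed scalar instance (not from the paper) and assumes the common VC form of stationarity, \(\lambda \nabla f\left( {\overline{x}}\right) -\vartheta ^{H}\nabla H\left( {\overline{x}}\right) +\vartheta ^{G}\nabla G\left( {\overline{x}}\right) =0\); the instance and the candidate multipliers are hypothetical.

```python
# Sketch (assumed instance): minimize f(x) = x1 + (x2 - 1)^2 subject to
# H(x) = x1 >= 0 and H(x)G(x) <= 0 with G(x) = x2. The minimizer is
# xbar = (0, 1), where H(xbar) = 0 and G(xbar) = 1 > 0, so t lies in T_0+.
# We verify the gradient form of stationarity with candidate multipliers.

grad_f = lambda x: (1.0, 2.0 * (x[1] - 1.0))
grad_H = lambda x: (1.0, 0.0)
grad_G = lambda x: (0.0, 1.0)

xbar = (0.0, 1.0)
lam, thetaH, thetaG = 1.0, 1.0, 0.0   # hypothetical candidate multipliers

residual = tuple(
    lam * df - thetaH * dH + thetaG * dG
    for df, dH, dG in zip(grad_f(xbar), grad_H(xbar), grad_G(xbar))
)
# residual == (0.0, 0.0): the stationarity equation holds at xbar
```

Note that the multiplier \(\vartheta ^{H}=1\) attaches to the constraint \(H\left( x\right) \geqq 0\), which is active at \({\overline{x}}\) with \(t\in T_{0+}\left( {\overline{x}}\right) \), consistent with treating \(H_{t}^{+}\left( {\overline{x}};\cdot \right) \) as linear on this index set.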
Note that, in general, the Abadie constraint qualification may not be fulfilled at an efficient solution in (MPVC) if \(T_{00}\left( {\overline{x}} \right) \ne \varnothing \).
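This failure can be illustrated on the classical assumed instance \(H\left( x\right) =x_{1}\), \(G\left( x\right) =x_{2}\) at \({\overline{x}}=\left( 0,0\right) \), where \(T_{00}\left( {\overline{x}}\right) \ne \varnothing \); the sketch below (not from the paper) exhibits a direction that satisfies the linearized requirement appearing in \(L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \) but is not a sequential linear direction of the feasible set.

```python
# Sketch (assumed instance): with H(x) = x1, G(x) = x2 and xbar = (0, 0),
# one has T_00(xbar) = {1}. The direction v = (1, 1) satisfies the linearized
# requirement H^+(xbar; v) = v1 >= 0, but xbar + a*v = (a, a) violates
# H(x)G(x) = a^2 <= 0 for every a > 0, so v does not belong to
# Z(Omega; xbar): the Abadie constraint qualification fails at xbar.

def feasible(x):
    H, G = x[0], x[1]
    return H >= 0 and H * G <= 0

v = (1.0, 1.0)
linearized_ok = v[0] >= 0   # H^+(xbar; v) >= 0 holds for v
along_ray_feasible = any(
    feasible((a * v[0], a * v[1])) for a in (1.0, 0.1, 0.01, 1e-4, 1e-8)
)
# linearized_ok is True while along_ray_feasible is False
```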
Based on the definition of the index sets, we substitute the constraint \(H_{t}\left( x\right) G_{t}\left( x\right) \leqq 0\), \(t\in T\), by the constraints
in which the index sets depend on \({\overline{x}}\).
Then, we define the following vector optimization problem derived from (MPVC), some of the constraints of which depend on the optimal point \( {\overline{x}}\):
In order to introduce the modified Abadie constraint qualification, for \( {\overline{x}}\in \Omega \), we define the sets \({\overline{Q}}^{l}\left( {\overline{x}}\right) \), \(l=1,...,p\), and \({\overline{Q}}\left( {\overline{x}} \right) \) as follows
Then, the almost linearizing cone for the sets \({\overline{Q}}^{l}\left( {\overline{x}}\right) \) is defined by
Hence, the almost linearizing cone for the set \({\overline{Q}}\left( {\overline{x}}\right) \) is given as follows
Remark 3.14
Note that the only difference between \(L\left( Q\left( {\overline{x}}\right) ; {\overline{x}}\right) \) and \(L\left( {\overline{Q}}\left( {\overline{x}}\right) ; {\overline{x}}\right) \) is that we add the inequality \(G_{t}^{+}\left( {\overline{x}};v\right) \leqq 0,\) \(\forall t\in T_{00}\left( {\overline{x}} \right) \) in \(L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}} \right) \) in comparison to \(L\left( Q\left( {\overline{x}}\right) ;{\overline{x}} \right) \). In particular, we always have the relation
Proposition 3.15
Let \({\overline{x}}\) be a feasible solution in (MPVC). Then
Proof
By Proposition 3.8, it follows that
Moreover, as it follows from the proof of Proposition 3.8, one has
Since \({\overline{Q}}^{l}\left( {\overline{x}}\right) \subseteq Q^{l}\left( {\overline{x}}\right) \), \(l=1,...,p\), therefore, one has
Combining (75)–(79), we get (74).
\(\square \)
Now, we are ready to introduce the modified Abadie constraint qualification which we name the VC-Abadie constraint qualification.
Definition 3.16
Let \({\overline{x}}\in \Omega \) be an efficient solution in (MPVC). Then, the VC-Abadie constraint qualification (VC-ACQ) holds at \({\overline{x}}\) for (MPVC) iff \(L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq \bigcap \limits _{l=1}^{p}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \).
Now, we define the Abadie constraint qualification for (MP\(\left( {\overline{x}}\right) \)) and we show that then the VC-Abadie constraint qualification (VC-ACQ) holds at \({\overline{x}}\) for (MPVC), even in a case in which the Abadie constraint qualification (ACQ) is not satisfied.
Definition 3.17
Let \({\overline{x}}\in \Omega \) be a (weakly) efficient solution in (MPVC). Then, the modified Abadie constraint qualification (MACQ) holds at \({\overline{x}}\) for (MP\(\left( {\overline{x}}\right) \)) iff \(L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}}\right) \subseteq \bigcap \limits _{l=1}^{p}Z\left( {\overline{Q}}^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \).
We now give the sufficient condition for the VC-Abadie constraint qualification to be satisfied at an efficient solution in (MPVC).
Lemma 3.18
Let \({\overline{x}}\in \Omega \) be an efficient solution in (MPVC). If the modified Abadie constraint qualification (MACQ) holds at \({\overline{x}}\) for (MP\(\left( {\overline{x}}\right) \)), then the VC-Abadie constraint qualification (VC-ACQ) holds at \({\overline{x}}\) for (MPVC).
Proof
Assume that \({\overline{x}}\in \Omega \) is an efficient solution in (MPVC) and, moreover, the modified Abadie constraint qualification (MACQ) holds at \( {\overline{x}}\) for (MP\(\left( {\overline{x}}\right) \)). Then, by Definition 3.17, it follows that
Since \({\overline{Q}}^{l}\left( {\overline{x}}\right) \subseteq Q^{l}\left( {\overline{x}}\right) \), \(l=1,...,p\), we have that
Hence, (84) implies
Then, (83) gives
Thus, by (82), (85) and (86), we get
as was to be shown. \(\square \)
Since the VC-Abadie constraint qualification (VC-ACQ) is weaker than the Abadie constraint qualification (ACQ), the necessary optimality conditions (59)-(65) may not hold under it. Therefore, in the next theorem, we formulate the Karush–Kuhn–Tucker necessary optimality conditions for a feasible solution \( {\overline{x}}\) to be an efficient solution in (MPVC) under the VC-Abadie constraint qualification (VC-ACQ).
Theorem 3.19
(Karush–Kuhn–Tucker Type Necessary Optimality Conditions). Let \({\overline{x}} \in \Omega \) be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints. We also assume that \( f_{i}\), \(i\in I\), \(g_{j}\), \(j\in J\), \(h_{s}\), \(s\in S\), \(H_{t}\), \(t\in T\), \( G_{t}\), \(t\in T\), are directionally differentiable functions at \({\overline{x}} \), \(f_{i}^{+}\left( {\overline{x}};\cdot \right) \), \(i\in I\), \(g_{j}^{+}\left( {\overline{x}};\cdot \right) \), \(j\in J({\overline{x}})\), \(-H_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \), \(G_{t}^{+}\left( {\overline{x}};\cdot \right) \), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{+0}\left( {\overline{x}}\right) \), are convex functions, \(h_{s}^{+}\left( {\overline{x}};\cdot \right) \), \(s\in S\), \(H_{t}^{+}\left( {\overline{x}};\cdot \right) \), \( t\in T_{0+}\left( {\overline{x}}\right) \), are linear functions, \(g_{j}\), \( j\in J^{<}({\overline{x}})\), \(H_{t}\), \(t\in T_{+}({\overline{x}})\), \(G_{t}\), \( t\in T_{0}\left( {\overline{x}}\right) \cup T_{+-}\left( {\overline{x}}\right) \), are continuous functions at \({\overline{x}}\) and, moreover, the VC-Abadie constraint qualification (VC-ACQ) is satisfied at \({\overline{x}}\) for (MPVC). If there exists \(v_{0}\in relint\,Z\left( C;{\overline{x}}\right) \) such that \( \Psi \left( v_{0}\right) <0\) and \(\Phi \left( v_{0}\right) \leqq 0\), then there exist Lagrange multipliers \({\overline{\lambda }}\in R^{p}\), \({\overline{\mu }}\in R^{m}\), \({\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\) and \(\overline{\vartheta }^{G}\in R^{r}\) such that the following conditions
hold.
Now, we prove the sufficiency of the Karush–Kuhn–Tucker optimality conditions for the considered multiobjective programming problem (MPVC) with vanishing constraints under appropriate convexity hypotheses.
Theorem 3.20
Let \({\overline{x}}\) be a feasible solution in (MPVC) and the Karush–Kuhn–Tucker type necessary optimality conditions (59)–(65) be satisfied at \({\overline{x}}\) for (MPVC) with Lagrange multipliers \( {\overline{\lambda }}\in R_{+}^{p}\), \({\overline{\mu }}\in R_{+}^{m}\), \( {\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\) and \( \overline{\vartheta }^{G}\in R^{r}\). Further, we assume that \(f_{i}\), \(i\in I \), \(g_{j}\), \(j\in J({\overline{x}})\), \(h_{s}\), \(s\in S^{+}\left( {\overline{x}}\right) :=\left\{ s\in S:{\overline{\xi }}_{s}>0\right\} \), \(-h_{s}\), \(s\in S^{-}\left( {\overline{x}}\right) :=\left\{ s\in S:{\overline{\xi }} _{s}<0\right\} \), \(-H_{t}\), \(t\in T_{00}({\overline{x}})\cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}({\overline{x}})\), \(G_{t}\), \(t\in T_{+0}\left( {\overline{x}}\right) \), are convex on \(\Omega \). Then \({\overline{x}}\) is a weak Pareto solution in (MPVC).
Proof
We proceed by contradiction. Suppose, contrary to the result, that \( {\overline{x}}\) is not a weak Pareto solution in (MPVC). Thus, by Definition 3.1, there exists \({\widetilde{x}}\in \Omega \) such that
By assumption, f is convex at \({\overline{x}}\) on \(\Omega \). Hence, by Proposition 2.6, (94) yields
Since \({\overline{\lambda }}\ge 0\), the inequalities (95) give
From \({\overline{x}},{\widetilde{x}}\in \Omega \) and the definition of \(J\left( {\overline{x}}\right) \), it follows that
By assumption, \(g_{j}\),\(\ j\in J({\overline{x}})\), \(h_{s}\), \(s\in S^{+}\left( {\overline{x}}\right) =\left\{ s\in S:\overline{\xi }_{s}>0\right\} \), \(-h_{s}\), \(s\in S^{-}\left( {\overline{x}}\right) =\left\{ s\in S:{\overline{\xi }} _{s}<0\right\} \), \(-H_{t}\), \(t\in T_{00}({\overline{x}})\cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}({\overline{x}})\), \(G_{t}\), \(t\in T_{+0}\left( {\overline{x}}\right) \), are convex on \(\Omega \). Then, by Proposition 2.6, (97)-(100) imply, respectively,
Taking into account that \({\overline{\mu }}_{j}=0\), \(j\in J^{<}\left( {\overline{x}}\right) \), \({\overline{\xi }}_{s}=0\), \(s\notin S^{+}\left( {\overline{x}}\right) \cup S^{-}\left( {\overline{x}}\right) \), \({\overline{\vartheta }}_{t}^{H}=0\), \(t\in T_{+}\left( {\overline{x}}\right) \), \({\overline{\vartheta }}_{t}^{G}=0\), \(t\in T_{00}\left( {\overline{x}}\right) \cup T_{0+}\left( {\overline{x}}\right) \cup T_{0-}\left( {\overline{x}}\right) \cup T_{+-}\left( {\overline{x}}\right) \), the foregoing inequalities yield, respectively,
Combining (96) and (106)-(109), we get that the inequality
holds, contradicting the Karush–Kuhn–Tucker type necessary optimality condition (59). This means that \({\overline{x}}\) is a weak Pareto solution in (MPVC). \(\square \)
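The key analytic tool in the proof above is the convexity inequality for directionally differentiable functions invoked via Proposition 2.6: if \(\varphi \) is convex, then \(\varphi \left( x\right) -\varphi \left( {\overline{x}}\right) \geqq \varphi ^{+}\left( {\overline{x}};x-{\overline{x}}\right) \). The following small numerical sketch illustrates this inequality for the nonsmooth convex function \(\varphi \left( x\right) =\left| x_{1}\right| +x_{2}^{2}\) (a hypothetical example, not taken from the paper):

```python
# Sketch: the convexity inequality phi(x) - phi(x_bar) >= phi'(x_bar; x - x_bar)
# used throughout the sufficiency proofs, checked numerically for the
# nondifferentiable convex function phi(x) = |x1| + x2**2 (hypothetical example).

def phi(x):
    return abs(x[0]) + x[1] ** 2

def dir_deriv(f, x, d, t=1e-7):
    """Right-sided directional derivative f'(x; d), approximated by a forward quotient."""
    xt = [xi + t * di for xi, di in zip(x, d)]
    return (f(xt) - f(x)) / t

x_bar = [0.0, 1.0]  # point of nondifferentiability of |x1|
points = [[1.0, 0.0], [-2.0, 3.0], [0.5, -1.0], [-0.1, 1.1]]

for x in points:
    d = [xi - bi for xi, bi in zip(x, x_bar)]
    lhs = phi(x) - phi(x_bar)
    rhs = dir_deriv(phi, x_bar, d)
    # convexity inequality of Proposition 2.6 (up to discretization error)
    assert lhs >= rhs - 1e-6, (x, lhs, rhs)
print("convexity inequality holds at all sample points")
```

The check at \(x_{1}=0\) is the interesting one: there \(\varphi \) has no gradient, yet the one-sided directional derivative exists and the inequality still holds.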
In order to prove sufficient optimality conditions for a feasible solution \({\overline{x}}\) to be a Pareto solution in (MPVC), stronger convexity assumptions must be imposed on the objective functions.
Theorem 3.21
Let \({\overline{x}}\) be a feasible solution in (MPVC) and the Karush–Kuhn–Tucker type necessary optimality conditions (59)-(65) be satisfied at \({\overline{x}}\) for (MPVC) with Lagrange multipliers \( {\overline{\lambda }}\in R_{+}^{p}\), \({\overline{\mu }}\in R_{+}^{m}\), \( {\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\) and \( \overline{\vartheta }^{G}\in R^{r}\). Further, we assume that \(f_{i}\), \(i\in I \), are strictly convex on \(\Omega \), \(g_{j}\), \(j\in J({\overline{x}})\), \( h_{s} \), \(s\in S^{+}\left( {\overline{x}}\right) =\left\{ s\in S:{\overline{\xi }}_{s}>0\right\} \), \(-h_{s}\), \(s\in S^{-}\left( {\overline{x}}\right) :=\left\{ s\in S:{\overline{\xi }}_{s}<0\right\} \), \(-H_{t}\), \(t\in T_{00}({\overline{x}} )\cup T_{0-}\left( {\overline{x}}\right) \cup T_{0+}({\overline{x}})\), \(G_{t}\), \( t\in T_{+0}\left( {\overline{x}}\right) \), are convex on \(\Omega \). Then \( {\overline{x}}\) is a Pareto solution in (MPVC).
Remark 3.22
In Theorem 3.21, all objective functions \(f_{i}\), \(i\in I\), are assumed to be strictly convex on \(\Omega \) in order to prove that \({\overline{x}}\in \Omega \) is a Pareto solution in (MPVC). However, as follows from the proof of the aforesaid theorem, it is sufficient to assume in Theorem 3.21 that at least one of the objective functions \(f_{i}\), \(i\in I\), is strictly convex on \(\Omega \), provided that the Lagrange multiplier \({\overline{\lambda }}_{i}\) associated with this objective function is greater than 0.
Remark 3.23
If \({\overline{x}}\) is a feasible solution at which the Karush–Kuhn–Tucker type necessary optimality conditions (87)-(93) are satisfied in place of (59)-(65), then the functions \(G_{t}\), \(t\in T_{00}\left( {\overline{x}}\right) \), should also be assumed to be convex on \(\Omega \) in the sufficient optimality conditions.
Now, we illustrate the results established in the paper by an example of a convex directionally differentiable multiobjective programming problem with vanishing constraints.
Example 3.24
Consider a directionally differentiable multiobjective programming problem with vanishing constraints defined by
Note that \(\Omega =\left\{ \left( x_{1},x_{2}\right) \in R^{2}:x_{2}\geqq 0 \text {, }x_{2}\left( -x_{1}-x_{2}\right) \leqq 0\right\} \), \({\overline{x}} =\left( 0,0\right) \) is a feasible solution in (MPVC1) and \(T_{00}\left( {\overline{x}}\right) =\left\{ 1\right\} \). Now, we define the sets \( Q^{1}\left( {\overline{x}}\right) \), \(Q^{2}\left( {\overline{x}}\right) \), \( Q\left( {\overline{x}}\right) \), \({\overline{Q}}\left( {\overline{x}}\right) \). Then, by definition, we have
Further, by Definition 2.9 and the definition of the almost linearizing cone (see (5), (10)), we have, respectively,
Note that the Abadie constraint qualification (ACQ) is not satisfied at \( {\overline{x}}=\left( 0,0\right) \) for (MPVC1) since the relation \(L\left( Q\left( {\overline{x}}\right) ;{\overline{x}}\right) \subset \bigcap \nolimits _{l=1}^{2}Z\left( Q^{l}\left( {\overline{x}}\right) ;{\overline{x}}\right) \) does not hold. However, the VC-Abadie constraint qualification (VC-ACQ) holds at \({\overline{x}}=\left( 0,0\right) \) for (MPVC1) since the relation \(L\left( {\overline{Q}}\left( {\overline{x}}\right) ;{\overline{x}} \right) \subset \bigcap \nolimits _{l=1}^{2}Z\left( Q^{l}\left( {\overline{x}} \right) ;{\overline{x}}\right) \) is satisfied. This example thus confirms that the VC-Abadie constraint qualification (VC-ACQ) is weaker than the Abadie constraint qualification (ACQ). Moreover, the Karush–Kuhn–Tucker type necessary optimality conditions (87)-(93) are fulfilled at \({\overline{x}}\) with Lagrange multipliers \({\overline{\lambda }}_{1}=\frac{1}{2}\), \({\overline{\lambda }}_{2}=\frac{1}{4}\), \( \overline{\vartheta }_{1}^{H}=\frac{1}{4}\), \({\overline{\vartheta }}_{1}^{G}= \frac{1}{4}\). Further, note that the functions constituting (MPVC1) are convex on \(\Omega \) and the objective function \(f_{1}\) is strictly convex on \(\Omega \). Hence, by Theorem 3.21, \({\overline{x}}=\left( 0,0\right) \) is a Pareto solution in (MPVC1).
Note that the optimality conditions established in the literature (see, for example, (Achtziger et al., 2013; Dorsch et al., 2012; Dussault et al., 2019; Hoheisel & Kanzow, 2007, 2008, 2009; Hoheisel et al., 2012; Izmailov & Solodov, 2009)) are not applicable to the considered multiobjective programming problem (MPVC1) with vanishing constraints since the results established in the above-mentioned works have been proved for scalar optimization problems with vanishing constraints. Moreover, the results presented in Guu et al. (2017) and Mishra et al. (2015) have been established for differentiable multiobjective programming problems with vanishing constraints only and, therefore, they are not useful for finding (weak) Pareto solutions in such nondifferentiable vector optimization problems as the directionally differentiable multiobjective programming problem (MPVC1) with vanishing constraints.
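The feasibility of \({\overline{x}}=\left( 0,0\right) \) and the classification \(T_{00}\left( {\overline{x}}\right) =\left\{ 1\right\} \) in (MPVC1) can be checked numerically. The sketch below assumes, as the description of \(\Omega \) suggests, that the vanishing constraint is \(H_{1}\left( x\right) =x_{2}\geqq 0\) together with \(H_{1}\left( x\right) G_{1}\left( x\right) \leqq 0\), where \(G_{1}\left( x\right) =-x_{1}-x_{2}\):

```python
# Numerical check of Example (MPVC1): feasibility of x_bar = (0, 0) and the
# index set of the vanishing constraint t = 1.
# Assumed from the description of Omega: H_1(x) = x_2, G_1(x) = -x_1 - x_2,
# so that H_1(x) * G_1(x) = x_2 * (-x_1 - x_2).

def H1(x):
    return x[1]

def G1(x):
    return -x[0] - x[1]

def feasible(x, tol=1e-12):
    """Membership in Omega: H_1(x) >= 0 and H_1(x) * G_1(x) <= 0."""
    return H1(x) >= -tol and H1(x) * G1(x) <= tol

def classify(x, tol=1e-12):
    """Index-set label of t = 1 at x (T_00, T_0+, T_0-, T_+0, T_+-, T_++)."""
    h, g = H1(x), G1(x)
    if abs(h) <= tol:
        return "T_00" if abs(g) <= tol else ("T_0+" if g > 0 else "T_0-")
    return "T_+0" if abs(g) <= tol else ("T_+-" if g < 0 else "T_++")

x_bar = (0.0, 0.0)
print(feasible(x_bar))   # x_bar is feasible in (MPVC1)
print(classify(x_bar))   # t = 1 belongs to T_00(x_bar), as stated in the example
```

At \({\overline{x}}=\left( 0,0\right) \) both \(H_{1}\) and \(G_{1}\) vanish, which is exactly the case \(T_{00}\) in which the two constraint qualifications differ.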
4 Wolfe duality
In this section, for the considered vector optimization problem (MPVC) with vanishing constraints, we define its vector Wolfe dual problem. Then we prove several duality results between problems (MPVC) and (WDVC) under convexity assumptions imposed on the functions constituting them.
We now define the vector-valued Lagrange function L for (MPVC) as follows
where \(e=\left[ 1,...,1\right] ^{T}\in R^{p}\). Then, we rewrite the above definition of the vector-valued Lagrange function L as follows:
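For vanishing-constraint problems, such a vector-valued Wolfe-type Lagrange function typically takes the following form (cf. Hu et al., 2020); this is a sketch, the signs of the \(\vartheta ^{H}\) and \(\vartheta ^{G}\) terms mirroring the Karush–Kuhn–Tucker stationarity condition for (MPVC):
\[
L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) =f\left( y\right) +\Big[ \sum _{j\in J}\mu _{j}g_{j}\left( y\right) +\sum _{s\in S}\xi _{s}h_{s}\left( y\right) -\sum _{t\in T}\vartheta _{t}^{H}H_{t}\left( y\right) +\sum _{t\in T}\vartheta _{t}^{G}G_{t}\left( y\right) \Big] e.
\]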
For \(x\in \Omega \), we define the vector Wolfe dual problem related to the considered multiobjective programming problem (MPVC) with vanishing constraints as follows:
Let
be the set of all feasible solutions in (WDVC\(\left( x\right) \)). Further, we define the set \(Y\left( x\right) \) as follows: \(Y\left( x\right) =\left\{ y\in X:\left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \in \Gamma \left( x\right) \right\} \) and \(J^{+}\left( x\right) :=\left\{ j\in J:\mu _{j}>0\right\} \).
Remark 4.1
In the Wolfe dual problem (WDVC\(\left( x\right) \)) given above, the significance of \(w_{t}\) and \(\theta _{t}\) is the same as that of \(v_{t}\) and \(\beta _{t}\) in Theorem 1 of Achtziger and Kanzow (2008).
Now, following Hu et al. (2020), we define the vector dual problem in the sense of Wolfe related to the considered multicriteria optimization problem (MPVC) with vanishing constraints by
where the set \(\Gamma \) of all feasible solutions in (WDVC) is defined by \( \Gamma =\bigcap \limits _{x\in \Omega }\Gamma \left( x\right) \). Further, let us define the set Y by \(Y=\left\{ y\in X:\left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \in \Gamma \right\} \).
Theorem 4.2
(Weak duality): Let x and \(\left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \) be any feasible solutions for (MPVC) and (WDVC), respectively. Further, we assume that one of the following hypotheses is fulfilled:
-
A)
\(f_{i}\), \(i=1,...,p\), \(g_{j}\), \(j\in J^{+}\left( x\right) \), \( h_{s} \), \(s\in S^{+}\left( x\right) \), \(-h_{s}\), \(s\in S^{-}\left( x\right) \), \(-H_{t}\), \(t\in T_{00}\left( x\right) \cup T_{0-}\left( x\right) \cup T_{0+}^{+}\left( x\right) \), where \(T_{0+}^{+}\left( x\right) :=\left\{ t\in T_{0+}:\vartheta _{t}^{H}>0\right\} \), \(H_{t}\), \(t\in T_{0+}^{-}\left( x\right) :=\left\{ t\in T_{0+}:\vartheta _{t}^{H}<0\right\} \), \(G_{t}\), \( t\in T_{+0}\left( x\right) \), are convex on \(\Omega \cup Y\).
-
B)
the vectorial Lagrange function \(L\left( \cdot ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \) is convex on \(\Omega \cup Y\).
Then, \(f\left( x\right) \nless L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \).
Proof
We proceed by contradiction. Suppose, contrary to the result, that
Hence, by definition of the Lagrange function L, the aforesaid inequality gives
Thus, by \(\left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \in \Gamma \), it follows that
A) Now, we prove this theorem under hypothesis A).
From convexity assumptions, by Proposition 2.6, the inequalities
hold. Multiplying (112)–(118) by the corresponding Lagrange multipliers and then adding both sides of the resulting inequalities, we have, respectively,
Taking into account Lagrange multipliers equal to 0, by (119)-(123), we obtain that the inequality
holds. By (111) and (126), we get that the inequality
holds, which contradicts the first constraint of (WDVC).
B) Now, we prove this theorem under hypothesis B). From \(x\in \Omega \) and \( \big ( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \big ) \in \Gamma \), it follows that
Since (111) is fulfilled, by (134), we get
Then, by the definition of the vector-valued Lagrange function L, it follows that
By hypothesis B), the vector-valued Lagrange function \(L\left( \cdot ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \) is directionally differentiable and convex on \(\Omega \cup Y\). Then, by Proposition 2.6, the following inequalities
are satisfied. Combining (135) and (136), we obtain
Multiplying inequalities (137) by the corresponding Lagrange multipliers \(\lambda _{i}\), \(i=1,...,p\), we have
Then, by the definition of the vector-valued Lagrange function L, one has
By \(\left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \in \Gamma \), it follows that \(\sum _{i=1}^{p}\lambda _{i}=1\). Thus, (138) implies that the following inequality
holds, which contradicts the first constraint of (WDVC).
This completes the proof of this theorem under both hypothesis A) and hypothesis B). \(\square \)
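The argument under hypothesis B) can be summarized by a single chain of scalarized inequalities (a sketch; the last step writes the first constraint of (WDVC) as nonnegativity of the directional derivative of the scalarized Lagrange function):
\[
\lambda ^{T}f\left( x\right) \geqq \lambda ^{T}L\left( x,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \geqq \lambda ^{T}L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) +\left( \lambda ^{T}L\right) ^{+}\left( y;x-y\right) \geqq \lambda ^{T}L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) ,
\]
where the first inequality follows from the feasibility of x in (MPVC) together with the sign conditions on the dual multipliers, and the second from the convexity of \(L\left( \cdot ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \) via Proposition 2.6. Since \(\lambda \geqq 0\) and \(\sum _{i=1}^{p}\lambda _{i}=1\), this chain is incompatible with \(f\left( x\right) <L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \).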
If stronger assumptions are imposed on the functions constituting (MPVC), then the following result holds:
Theorem 4.3
(Weak duality): Let x and \( \left( y,\lambda ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G},w,\theta \right) \) be any feasible solutions for (MPVC) and (WDVC), respectively. Further, we assume that one of the following hypotheses is fulfilled:
-
A)
\(f_{i}\), \(i=1,...,p\), are strictly convex on \(\Omega \cup Y\), \( g_{j}\), \(j\in J^{+}\left( x\right) \), \(h_{s}\), \(s\in S^{+}\left( x\right) \), \(-h_{s}\), \(s\in S^{-}\left( x\right) \), \(-H_{t}\), \(t\in T_{H}^{+}\left( x\right) \), \(H_{t}\), \(t\in T_{0+}^{-}\left( x\right) \), \(G_{t}\), \(t\in T_{+0}\left( x\right) \), are convex on \(\Omega \cup Y\).
-
B)
the vector-valued Lagrange function \(L\left( \cdot ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \) is strictly convex on \(\Omega \cup Y\).
Then, \(f\left( x\right) \nleq L\left( y,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \).
Theorem 4.4
(Strong duality): Let \({\overline{x}}\in \Omega \) be a Pareto solution (a weak Pareto solution) in (MPVC) and the Abadie constraint qualification be satisfied at \({\overline{x}}\). Then, there exist Lagrange multipliers \({\overline{\lambda }}\in R^{p}\), \({\overline{\mu }} \in R^{m}\), \({\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\), \( {\overline{\vartheta }}^{G}\in R^{r}\) and \({\overline{w}}\in R^{r}\), \({\overline{\theta }}\in R^{r}\) such that \(\left( {\overline{x}},{\overline{\lambda }}, \overline{\mu },{\overline{\xi }},{\overline{\vartheta }}^{H},{\overline{\vartheta }} ^{G},{\overline{w}},{\overline{\theta }}\right) \) is feasible in (WDVC). If, in addition, all hypotheses of the weak duality theorem - Theorem 4.3 (Theorem 4.2) - are satisfied, then \(\left( {\overline{x}},\overline{\lambda },{\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G},{\overline{w}},\overline{\theta }\right) \) is an efficient solution (a weakly efficient solution) of a maximum type in (WDVC).
Proof
By assumption, \({\overline{x}}\) is a Pareto solution of (MPVC) and the Abadie constraint qualification is satisfied at \({\overline{x}}\). Then, there exist Lagrange multipliers \({\overline{\lambda }}\in R^{p}\), \({\overline{\mu }}\in R^{m}\), \({\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }}^{H}\in R^{r}\), \( {\overline{\vartheta }}^{G}\in R^{r}\) such that the Karush–Kuhn–Tucker necessary optimality conditions are fulfilled. Hence, we conclude that \( \left( {\overline{x}},{\overline{\lambda }},{\overline{\mu }},{\overline{\xi }}, {\overline{\vartheta }}^{H},\overline{\vartheta }^{G},{\overline{w}},{\overline{\theta }}\right) \), where \({\overline{w}}_{t}\) and \({\overline{\theta }}_{t}\) satisfy the following conditions
is feasible in (WDVC).
Now, we prove that \(\left( {\overline{x}},\overline{\lambda },{\overline{\mu }}, {\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}, {\overline{w}},\overline{\theta }\right) \) is an efficient solution of a maximum type in (WDVC). We proceed by contradiction. Suppose, contrary to the result, that \(\left( {\overline{x}},{\overline{\lambda }},{\overline{\mu }}, {\overline{\xi }},{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}, {\overline{w}},{\overline{\theta }}\right) \) is not an efficient solution of a maximum type in (WDVC). Then, by definition, there exists \(\left( {\widetilde{y}},{\widetilde{\lambda }},\widetilde{\mu },{\widetilde{\xi }},{\widetilde{\vartheta }}^{H},\widetilde{\vartheta }^{G},{\widetilde{w}},{\widetilde{\theta }} \right) \in \Gamma \) such that the inequality
holds. Then, by the Karush–Kuhn–Tucker necessary optimality conditions, we conclude that
holds, which is a contradiction to the weak duality theorem (Theorem 4.3). Hence, we conclude that \(\left( {\overline{x}},{\overline{\lambda }}, {\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }} ^{G},{\overline{w}},\overline{\theta }\right) \) is an efficient solution of a maximum type in (WDVC). \(\square \)
The next two theorems give sufficient conditions for \({\overline{y}}\), where \( \left( {\overline{y}},{\overline{\lambda }},\overline{\mu },{\overline{\xi }}, {\overline{\vartheta }}^{H},\overline{\vartheta }^{G},{\overline{w}},{\overline{\theta }}\right) \) is a feasible solution of (WDVC), to be a Pareto solution of (MPVC).
Theorem 4.5
(Converse duality): Let x be any feasible solution in (MPVC) and \(\left( {\overline{y}},{\overline{\lambda }},{\overline{\mu }},\overline{\xi },{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}, {\overline{w}},{\overline{\theta }}\right) \) be an efficient solution of a maximum type (a weakly efficient solution of a maximum type) in the Wolfe dual problem (WDVC) such that \({\overline{y}}\in \Omega \). Further, we assume that \( f_{i}\), \(i=1,...,p\), are strictly convex (convex) on \(\Omega \cup Y\), \(g_{j}\), \(j\in J^{+}\left( x\right) \), \(h_{s}\), \(s\in S^{+}\left( x\right) \), \( -h_{s}\), \(s\in S^{-}\left( x\right) \), \(-H_{t}\), \(t\in T_{H}^{+}\left( x\right) \), \(H_{t}\), \(t\in T_{0+}^{-}\left( x\right) \), \(G_{t}\), \(t\in T_{+0}\left( x\right) \), are convex on \(\Omega \cup Y\). Then \({\overline{y}}\) is a Pareto solution (a weak Pareto solution) of (MPVC).
Proof
We proceed by contradiction. Suppose, contrary to the result, that \( {\overline{y}}\in \Omega \) is not an efficient solution of (MPVC). Hence, by Definition 3.1, there exists \({\widetilde{x}}\in \Omega \) such that
From convexity hypotheses, by Proposition 2.6, the inequalities
hold. Multiplying (140)-(146) by the corresponding Lagrange multipliers and then adding both sides of the resulting inequalities, we have, respectively,
By \({\widetilde{x}}\), \({\overline{y}}\in \Omega \), we have, respectively,
Hence, using (156), (157) together with \(\left( {\overline{y}},{\overline{\lambda }},{\overline{\mu }}, \overline{\xi },{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}, {\overline{w}},{\overline{\theta }}\right) \in \Gamma \), we obtain
Combining (147)–(159), multiplying by the corresponding Lagrange multipliers and then adding both sides of the resulting inequalities, we obtain, respectively,
Taking into account Lagrange multipliers equal to 0 and then combining (160)–(164), we get that the inequality
holds, which is a contradiction to the first constraint of (WDVC). This completes the proof of this theorem. \(\square \)
Theorem 4.6
(Strict converse duality): Let \({\overline{x}}\) be a feasible solution of (MPVC) and \(\left( {\overline{y}},{\overline{\lambda }},{\overline{\mu }}, {\overline{v}},{\overline{\vartheta }}^{H},\overline{\vartheta }^{G},{\overline{w}},{\overline{\theta }}\right) \) be a feasible solution of (WDVC) such that \( f\left( {\overline{x}}\right) =L\left( {\overline{y}},{\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}\right) \). Further, we assume that one of the following hypotheses is fulfilled:
-
A)
\(f_{i}\), \(i=1,...,p\), are strictly convex on \(\Omega \cup Y\), \( g_{j}\), \(j\in J^{+}\left( {\overline{x}}\right) \), \(h_{s}\), \(s\in S^{+}\left( {\overline{x}}\right) \), \(-h_{s}\), \(s\in S^{-}\left( {\overline{x}}\right) \), \( -H_{t}\), \(t\in T_{H}^{+}\left( {\overline{x}}\right) \), \(H_{t}\), \(t\in T_{0+}^{-}\left( {\overline{x}}\right) \), \(G_{t}\), \(t\in T_{+0}\left( {\overline{x}}\right) \), are convex on \(\Omega \cup Y\).
-
B)
the vector-valued Lagrange function \(L\left( \cdot ,\mu ,\xi ,\vartheta ^{H},\vartheta ^{G}\right) \) is strictly convex on \(\Omega \cup Y\).
Then \({\overline{x}}\) is a Pareto solution in (MPVC) and \(\left( {\overline{y}}, {\overline{\lambda }},{\overline{\mu }},\overline{\xi },{\overline{\vartheta }} ^{H},\overline{\vartheta }^{G},{\overline{w}},{\overline{\theta }}\right) \) is an efficient solution of a maximum type in (WDVC).
Proof
We proceed by contradiction. Suppose, contrary to the result, that \( {\overline{x}}\in \Omega \) is not a Pareto solution in (MPVC). Hence, by Definition 3.1, there exists \({\widetilde{x}}\in \Omega \) such that
Using (165) with the assumption \(f\left( {\overline{x}}\right) =L\left( {\overline{y}}, \overline{\mu },{\overline{\xi }},{\overline{\vartheta }}^{H},\overline{ \vartheta }^{G}\right) \), we obtain
Since all hypotheses of Theorem 4.3 are fulfilled, the above relation contradicts the weak duality theorem. This means that \( {\overline{x}}\) is a Pareto solution in (MPVC). Further, the efficiency of a maximum type of \(\left( {\overline{y}},{\overline{\lambda }},{\overline{\mu }}, {\overline{\xi }},{\overline{\vartheta }}^{H},{\overline{\vartheta }}^{G}, {\overline{w}},{\overline{\theta }}\right) \) in (WDVC) also follows from the weak duality theorem (Theorem 4.3). \(\square \)
A restricted version of converse duality for the problems (MPVC) and (WDVC) gives a sufficient condition for the uniqueness of an efficient solution in (MPVC) and an efficient solution of a maximum type in (WDVC).
Theorem 4.7
(Restricted converse duality): Let \({\overline{x}}\) be an efficient solution in the considered multiobjective programming problem (MPVC) with vanishing constraints, \(\left( {\overline{y}}, {\overline{\lambda }},\overline{\mu },{\overline{\xi }},{\overline{\vartheta }} ^{H},\overline{\vartheta }^{G},{\overline{w}},{\overline{\theta }}\right) \) be an efficient solution of a maximum type in its vector Wolfe dual problem (WDVC) and the VC-Abadie constraint qualification (VC-ACQ) be satisfied at \( {\overline{x}}\) for (MPVC). Further, we assume that one of the following hypotheses is fulfilled:
-
A)
\(f_{i}\), \(i=1,...,p\), are strictly convex on \(\Omega \cup Y\), \( g_{j}\), \(j\in J^{+}\left( {\overline{x}}\right) \), \(h_{s}\), \(s\in S^{+}\left( {\overline{x}}\right) \), \(-h_{s}\), \(s\in S^{-}\left( {\overline{x}}\right) \), \( -H_{t}\), \(t\in T_{H}^{+}\left( {\overline{x}}\right) \), \(H_{t}\), \(t\in T_{0+}^{-}\left( {\overline{x}}\right) \), \(G_{t}\), \(t\in T_{+0}\left( {\overline{x}}\right) \), are convex on \(\Omega \cup Y\).
-
B)
the vectorial Lagrange function \(L\left( \cdot ,{\overline{\mu }}, {\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}\right) \) is strictly convex on \(\Omega \cup Y\).
Then \({\overline{x}}={\overline{y}}\).
Proof
By means of contradiction, suppose that \({\overline{x}}\ne {\overline{y}}\). Since \({\overline{x}}\) is an efficient solution in (MPVC), by Theorem 3.11, there exist Lagrange multipliers \(\overline{\lambda }\in R^{p}\), \( {\overline{\mu }}\in R^{m}\), \({\overline{\xi }}\in R^{q}\), \({\overline{\vartheta }} ^{H}\in R^{r}\) and \({\overline{\vartheta }}^{G}\in R^{r}\), not all equal to 0, such that (87)-(93) are fulfilled. Thus, by (87)-(93), it follows that
By assumption, \(\left( {\overline{y}},\overline{\lambda },{\overline{\mu }}, {\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}, {\overline{w}},\overline{\theta }\right) \) is an efficient solution of a maximum type in the vector Wolfe dual problem (WDVC). Thus, one has
Combining the two relations above, we get
Thus, (166) gives
Adding both sides of (167), by the definition of the vectorial Lagrange function L, we get
Hence, by \(\left( {\overline{y}},{\overline{\lambda }},\overline{\mu },{\overline{\xi }},{\overline{\vartheta }}^{H},\overline{\vartheta }^{G},{\overline{w}}, {\overline{\theta }}\right) \in \Gamma \), one has \(\sum _{i=1}^{p}{\overline{\lambda }}_{i}=1\). Thus, (168) implies
Proof under hypothesis A). Using hypothesis A), by Proposition 2.6, the inequalities
hold. By the feasibility of \(\left( {\overline{y}},\overline{\lambda }, {\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G},{\overline{w}},\overline{\theta }\right) \) in (WDVC), (170)-(176) yield, respectively,
Thus, the above inequalities yield
Taking into account the Lagrange multipliers equal to 0, we have
Hence, by the first constraint of (WDVC), (184) yields that the inequality
holds, contradicting (169). This completes the proof of this theorem under hypothesis A).
Proof under hypothesis B)
Now, we assume that the vector-valued Lagrange function \(L\left( \cdot , {\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}\right) \) is strictly convex on \(\Omega \cup Y\). Hence, by the definition of the vector-valued Lagrange function L, we get
Then, by the definition of L, one has
Multiplying each of the above inequalities by the corresponding Lagrange multiplier \( {\overline{\lambda }}_{i}\), \(i=1,...,p\), respectively, and then summing the resulting inequalities, we obtain
By the feasibility of \(\left( {\overline{y}},\overline{\lambda },{\overline{\mu }},{\overline{\xi }},\overline{\vartheta }^{H},{\overline{\vartheta }}^{G}, {\overline{w}},\overline{\theta }\right) \) in (WDVC), one has \(\sum _{i=1}^{p} \overline{\lambda }_{i}=1\). Then, the aforesaid inequality gives
Using the first constraint of (WDVC), we get that the following inequality
holds, contradicting (169). This completes the proof of this theorem under hypothesis B). \(\square \)
5 Conclusions
This paper has studied a new class of nonsmooth vector optimization problems, namely directionally differentiable multiobjective programming problems with vanishing constraints. Under the Abadie constraint qualification, Karush–Kuhn–Tucker type necessary optimality conditions have been established for such nondifferentiable vector optimization problems in terms of the right directional derivatives of the involved functions; the nonlinear Gordan alternative theorem has been used in proving them. However, the Abadie constraint qualification may fail for such multicriteria optimization problems, and then the aforesaid necessary optimality conditions need not hold. Therefore, we have introduced the modified Abadie constraint qualification for the considered multiobjective programming problem with vanishing constraints. Under this modified constraint qualification, which is weaker than the standard Abadie constraint qualification, we have proved weaker necessary optimality conditions of the Karush–Kuhn–Tucker type for such nondifferentiable vector optimization problems with vanishing constraints. The sufficiency of the Karush–Kuhn–Tucker necessary optimality conditions has also been proved for the considered directionally differentiable multiobjective programming problem with vanishing constraints under appropriate convexity hypotheses. Furthermore, for the considered directionally differentiable multiobjective programming problem with vanishing constraints, its vector Wolfe dual problem has been defined along the lines of Hu et al. (2020), and several duality theorems have been established between the primal problem and its vector Wolfe dual under convexity hypotheses.
Thus, the above-mentioned optimality conditions and duality results have been derived for a completely new class of directionally differentiable vector optimization problems in comparison to the results existing in the literature, namely for directionally differentiable multiobjective programming problems with vanishing constraints. Hence, the results established in the literature, generally for scalar differentiable extremum problems with vanishing constraints, have been generalized and extended to directionally differentiable multiobjective programming problems with vanishing constraints.
It seems that the techniques employed in this paper can be used to prove similar results for other classes of nonsmooth mathematical programming problems with vanishing constraints. We shall investigate such problems in subsequent papers.
References
Achtziger, W., Hoheisel, T., & Kanzow, C. (2013). A smoothing-regularization approach to mathematical programs with vanishing constraints. Computational Optimization and Applications, 55, 733–767.
Achtziger, W., & Kanzow, C. (2008). Mathematical programs with vanishing constraints: Optimality conditions and constraint qualifications. Mathematical Programming, 114, 69–99.
Ahmad, I. (2011). Efficiency and duality in nondifferentiable multiobjective programming involving directional derivative. Applied Mathematics, 2, 452–460.
Antczak, T. (2002). Multiobjective programming under \(d\) -invexity. European Journal of Operational Research, 137, 28–36.
Antczak, T. (2009). Optimality conditions and duality for nondifferentiable multiobjective programming problems involving \(d\)-\(r\)-type I functions. Journal of Computational and Applied Mathematics, 225, 236–250.
Antczak, T. (2022). Optimality conditions and Mond–Weir duality for a class of differentiable semi-infinite multiobjective programming problems with vanishing constraints. 4OR, 20(3), 417–442.
Arana-Jiménez, M., Ruiz-Garzón, G., Osuna-Gómez, R., & Hernández-Jiménez, B. (2013). Duality and a characterization of pseudoinvexity for Pareto and weak Pareto solutions in nondifferentiable multiobjective programming. Journal of Optimization Theory and Applications, 156, 266–277.
Dinh, N., Lee, G. M., & Tuan, L. A. (2005). Generalized Lagrange multipliers for nonconvex directionally differentiable programs. In V. Jeyakumar & A. Rubinov (Eds.), Continuous optimization. Springer.
Dorsch, D., Shikhman, V., & Stein, O. (2012). Mathematical programs with vanishing constraints: Critical point theory. Journal of Global Optimization, 52, 591–605.
Dussault, J. P., Haddou, M., & Migot, T. (2019). Mathematical programs with vanishing constraints: Constraint qualifications, their applications and a new regularization method. Optimization, 68, 509–538.
Florenzano, M., & Le Van, C. (2001). Finite dimensional convexity and optimization. Studies in Economic Theory. Springer.
Giorgi, G., et al. (2002). Osservazioni sui teoremi dell’alternativa non lineari implicanti relazioni di uguaglianza e vincolo insiemistico. In G. Crespi (Ed.), Optimality in economics, finance and industry (pp. 171–183). Milan: Datanova.
Guu, S.-M., Singh, Y., & Mishra, S. K. (2017). On strong KKT type sufficient optimality conditions for multiobjective semi-infinite programming problems with vanishing constraints. Journal of Inequalities and Applications, 2017, 282.
Hiriart-Urruty, J.-B., & Lemaréchal, C. (1993). Convex analysis and minimization algorithms I. Grundlehren der mathematischen Wissenschaften. Springer.
Hoheisel, T., & Kanzow, C. (2007). First- and second-order optimality conditions for mathematical programs with vanishing constraints. Applications of Mathematics, 52, 495–514.
Hoheisel, T., & Kanzow, C. (2008). Stationary conditions for mathematical programs with vanishing constraints using weak constraint qualifications. Journal of Mathematical Analysis and Applications, 337, 292–310.
Hoheisel, T., & Kanzow, C. (2009). On the Abadie and Guignard constraint qualifications for mathematical programmes with vanishing constraints. Optimization, 58, 431–448.
Hoheisel, T., Kanzow, C., & Schwartz, A. (2012). Mathematical programs with vanishing constraints: a new regularization approach with strong convergence properties. Optimization, 61, 619–636.
Hu, Q. J., Chen, Y., Zhu, Z. B., & Zhang, B. S. (2014). Notes on some convergence properties for a smoothing-regularization approach to mathematical programs with vanishing constraints. Abstract and Applied Analysis, 2014, 1–7.
Hu, Q., Wang, J., & Chen, Y. (2020). New dualities for mathematical programs with vanishing constraints. Annals of Operations Research, 287, 233–255.
Ishizuka, Y. (1992). Optimality conditions for directionally differentiable multi-objective programming problems. Journal of Optimization Theory and Applications, 72, 91–111.
Izmailov, A. F., & Solodov, M. V. (2009). Mathematical programs with vanishing constraints: Optimality conditions, sensitivity, and relaxation method. Journal of Optimization Theory and Applications, 142, 501–532.
Jahn, J. (2004). Vector optimization: Theory, applications, and extensions. Springer.
Jiménez, B., & Novo, V. (2002). Alternative theorems and necessary optimality conditions for directionally differentiable multiobjective programs. Journal of Convex Analysis, 9, 97–116.
Kharbanda, P., Agarwal, D., & Sinha, D. (2015). Multiobjective programming under \(\left( \varphi , d\right) \)-\(V\)-type I univexity. Opsearch, 52, 168–185.
Khare, A., & Nath, T. (2019). Enhanced Fritz John stationarity, new constraint qualifications and local error bound for mathematical programs with vanishing constraints. Journal of Mathematical Analysis and Applications, 472, 1042–1077.
Mangasarian, O. L. (1969). Nonlinear programming. McGraw-Hill.
Mishra, S. K., & Noor, M. A. (2006). Some nondifferentiable multiobjective programming problems. Journal of Mathematical Analysis and Applications, 316, 472–482.
Mishra, S. K., Rautela, J. S., & Pant, R. P. (2008). On nondifferentiable multiobjective programming involving type-I \(\alpha \)-invex functions. Applied Mathematics & Information Sciences, 2, 317–331.
Mishra, S. K., Singh, V., & Laha, V. (2016). On duality for mathematical programs with vanishing constraints. Annals of Operations Research, 243, 249–272.
Mishra, S. K., Singh, V., Laha, V., & Mohapatra, R. N. (2015). On constraint qualifications for multiobjective optimization problems with vanishing constraints. In H. Xu, S. Wang, & S.-Y. Wu (Eds.), Optimization methods (pp. 95–135). Springer.
Mishra, S. K., Wang, S. Y., & Lai, K. K. (2004). Optimality and duality in nondifferentiable and multiobjective programming under generalized \(d\)-invexity. Journal of Global Optimization, 29, 425–438.
Preda, V., & Chitescu, I. (1999). On constraint qualification in multiobjective optimization problems: Semidifferentiable case. Journal of Optimization Theory and Applications, 100, 417–433.
Rockafellar, R. T. (1970). Convex analysis. Princeton University Press.
Slimani, H., & Radjef, M. S. (2010). Nondifferentiable multiobjective programming under generalized \(d_{I}\)-invexity. European Journal of Operational Research, 202, 32–41.
Tung, L. T. (2022). Karush–Kuhn–Tucker optimality conditions and duality for multiobjective semi-infinite programming with vanishing constraints. Annals of Operations Research, 311, 1307–1334.
Ye, Y. L. (1991). \(d\)-invexity and optimality conditions. Journal of Mathematical Analysis and Applications, 162, 242–249.
Conflict of interest
No potential conflict of interest was reported by the author.
Cite this article
Antczak, T. On directionally differentiable multiobjective programming problems with vanishing constraints. Ann Oper Res 328, 1181–1212 (2023). https://doi.org/10.1007/s10479-023-05368-5
Keywords
- Directionally differentiable multiobjective programming problems with vanishing constraints
- Pareto solution
- Karush–Kuhn–Tucker necessary optimality conditions
- Wolfe vector dual
- Convex function