Abstract
Based on the definitions of lower and upper limits of vector functions introduced in Rahmo and Studniarski (J Math Anal Appl 393:212–221, 2012), we extend the lower and upper Ginchev directional derivatives to functions with values in finite-dimensional spaces where partial order is introduced by a polyhedral cone. This allows us to obtain some modifications of the optimality conditions from Luu (Higher-order optimality conditions in nonsmooth cone-constrained multiobjective programming. Institute of Mathematics, Hanoi, Vietnam 2008) with weakened assumptions on the minimized function.
1 Introduction
Higher-order optimality conditions in multiobjective optimization stated in terms of Ginchev directional derivatives have been studied by several authors (see [4, 9, 10]). The results obtained so far can be essentially divided into two groups:
1. Conditions using lower and upper Ginchev derivatives of scalar functions; see e.g. Theorems 5.1 and 5.2 in [10], Theorems 4.1 and 4.2 in [4]. These results either use some scalarization of a multiobjective problem or are formulated in terms of Ginchev derivatives of coordinate functions (when the partial order is defined by the positive orthant).
2. Conditions using Ginchev derivatives of Hadamard type (defined for vector functions by formulae (37)–(38) below); see e.g. Theorems 3.1 and 4.1 in [10], Theorem 5.1 in [4]. These theorems require stronger assumptions on the minimized function than the ones in the first group.
In this paper we propose another approach which is valid only for minimization problems in finite-dimensional vector spaces where partial order is introduced by a polyhedral cone. Using the definitions of lower and upper limits of vector functions presented recently by the authors in [12], we define lower and upper Ginchev derivatives of vector functions and use them directly to formulate optimality conditions, thus avoiding any scalarization.
Note that the optimality conditions presented here are only in the primal form (i.e., they are formulated in terms of directional derivatives). Conditions in the dual form (i.e., containing Lagrange multipliers) can be obtained by using the tools of nonsmooth analysis and generalized convexity; see, e.g., [14]. For a general overview of multiobjective optimization and the Pareto optimality, see [1, 8, 11].
The paper is organized as follows. In Sects. 2 and 3 we review some definitions and results from [12] which will be used in the sequel. In Sect. 4 we describe the partial order defined by a polyhedral cone. In Sect. 5 we formulate a multiobjective optimization problem and define lower and upper Ginchev derivatives of vector functions. Sections 6 and 7 are devoted to necessary and sufficient optimality conditions, respectively.
2 Infima and suprema of sets in extended Euclidean spaces
Let \({\bar{\mathbb{R }}}=\mathbb{R }\cup \{-\infty ,\infty \}\) be the set of extended real numbers. The arithmetic operations in \(\mathbb{R }\) are extended to \({\bar{\mathbb{R }}}\) in an obvious manner, except for the combinations \(0\cdot (-\infty )\), \(0\cdot \infty \), \(-\infty +\infty \) and \( \infty -\infty \) which we regard as undefined rather than define them in any special way (such as, for example, in [13, p. 15]). The weak inequality \(\leqq \) in \(\mathbb{R }\) is extended to \({\bar{\mathbb{R }}}\) by assuming that the following (and only the following) inequalities hold for infinite elements:
$$\begin{aligned} -\infty \leqq a\leqq \infty \quad \text{ for all } a\in {\bar{\mathbb{R }}}. \end{aligned}$$(1)
Definition 1
For any positive integer \(p\), the extended Euclidean space \( {\bar{\mathbb{R }}}^{p}\) is defined as the Cartesian product of \(p\) copies of \( {\bar{\mathbb{R }}}\). The operations of addition and scalar multiplication in \( {\bar{\mathbb{R }}}^{p}\) are performed componentwise whenever the respective operations in \({\bar{\mathbb{R }}}\) are defined.
Remark 1
In the sequel, the vectors in \({\bar{\mathbb{R }}}^{p}\) and in other Euclidean spaces will be assumed to be column vectors.
Let \(I:=\{1,{\ldots },p\}\). For any two vectors \(x=(x_{1},{\ldots },x_{p})^{T}\), \( y=(y_{1},{\ldots },y_{p})^{T}\) in \({\bar{\mathbb{R }}}^{p}\), we write
$$\begin{aligned} x\leqq y\;:\Longleftrightarrow \;x_{i}\leqq y_{i}\ \text{ for all } i\in I, \end{aligned}$$(2)
$$\begin{aligned} x<y\;:\Longleftrightarrow \;x_{i}<y_{i}\ \text{ for all } i\in I. \end{aligned}$$(3)
We shall also consider the negations of (2)–(3): \(x\nleqq y\) means that \(x_{i}>y_{i}\) for some \(i\in I\), and \(x\nless y\) means that \(x_{i}\geqq y_{i}\) for some \(i\in I\).
Using (1) and (2), it is easy to prove the following.
Proposition 1
\(\left( {\bar{\mathbb{R }}}^{p},\leqq \right) \) is a partially ordered set (that is, the relation \(\leqq \) is reflexive, transitive and antisymmetric on \({\bar{\mathbb{R }}}^{p}\)).
Definition 2
Let \(M\) be a nonempty subset of \({\bar{\mathbb{R }}}^{p}\).
(a) An element \(a\in {\bar{\mathbb{R }}}^{p}\) is called a lower (upper) bound of \(M\) if \(a\leqq x\) (\(x\leqq a\)) for all \(x\in M\).
(b) \(a\in {\bar{\mathbb{R }}}^{p}\) is called the infimum (supremum) of \(M\) if \(a\) is a lower (upper) bound of \(M\) and for any lower (upper) bound \(b\) of \(M\) we have that \(b\leqq a\) (\(a\leqq b\)).
Definition 2 is in accordance with the general definition of infimum (supremum) of a subset of a partially ordered set [6, Def. 2.1.7]. By the antisymmetry of \(\leqq \), there may exist only one infimum (supremum) of \(M\); we will denote it by \(\inf M\) (\(\sup M\)).
We now define the projections \(\pi _{i}:{\bar{\mathbb{R }}}^{p}\rightarrow {\bar{\mathbb{R }}}\) by \(\pi _{i}(x):=x_{i}\) for \(x=(x_{1},{\ldots },x_{p})^{T}\) and \(i\in I\).
Proposition 2
Let \(M\) be a nonempty subset of \({\bar{\mathbb{R }}}^{p}\). For \(i\in I \), define \(M_{i}:=\pi _{i}(M)\). Then
$$\begin{aligned} \inf M=(\inf M_{1},{\ldots },\inf M_{p})^{T},\qquad \sup M=(\sup M_{1},{\ldots },\sup M_{p})^{T}. \end{aligned}$$
Corollary 1
For every nonempty set \(M\subset {\bar{\mathbb{R }}}^{p}\) and for every \(i\in I\), we have \(\pi _{i}(\inf M)=\inf \pi _{i}(M)\) and \(\pi _{i}(\sup M)=\sup \pi _{i}(M)\).
It follows from Proposition 2 that \(\inf M\) and \(\sup M\) exist in \( {\bar{\mathbb{R }}}^{p}\) for every nonempty subset \(M\) of \({\bar{\mathbb{R }}}^{p}\). Since \(\inf M\) (\(\sup M\)) is a lower (upper) bound of \(M\), we always have
$$\begin{aligned} \inf M\leqq x\leqq \sup M\quad \text{ for all } x\in M. \end{aligned}$$
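The componentwise reduction of Proposition 2 is easy to see on a finite set. The following Python sketch (an illustration, not part of the paper; the function name `inf_sup` is ours) computes \(\inf M\) and \(\sup M\) coordinate by coordinate, with `math.inf` standing in for the infinite elements of \({\bar{\mathbb{R }}}\).

```python
import math

def inf_sup(M):
    """Componentwise infimum and supremum of a nonempty finite set M
    of p-vectors (Proposition 2).  Vectors are tuples of floats;
    math.inf plays the role of +/- infinity in the extended space."""
    p = len(next(iter(M)))
    inf_M = tuple(min(x[i] for x in M) for i in range(p))
    sup_M = tuple(max(x[i] for x in M) for i in range(p))
    return inf_M, sup_M

# A set whose infimum is not attained by any single element of M:
M = {(0.0, 1.0), (1.0, 0.0)}
inf_M, sup_M = inf_sup(M)
# inf M = (0, 0), sup M = (1, 1): both are bounds for every x in M.

# Infinite components are handled by the same componentwise rule:
inf_M2, sup_M2 = inf_sup({(-math.inf, 2.0), (3.0, 1.0)})
```

For infinite sets one would replace `min`/`max` by genuine infima/suprema; for finite sets the computation above is exact, and it also illustrates that \(\inf M\) need not belong to \(M\).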
It should also be noted that
3 Lower and upper limits of vector functions
Let \(X\) be a real normed space. Below we define lower and upper limits for a function \(\varphi :X\rightarrow {\bar{\mathbb{R }}}^{p}\) in such a way that they generalize the well-known definitions for an extended-real-valued function [13, pp. 8, 13].
Definition 3
Let \(E\) be a nonempty subset of \(X\), and let \(\bar{x}\) be a limit point of \(E\). The lower and upper limits of a function \( \varphi :E\rightarrow {\bar{\mathbb{R }}}^{p}\) at \(\bar{x}\) are the elements of \({\bar{\mathbb{R }}}^{p}\) defined by
$$\begin{aligned} \liminf _{x\rightarrow \bar{x}}\varphi (x):=\sup _{\delta >0}\,\inf _{x\in B(\bar{x},\delta )\cap E}\varphi (x)=\lim _{\delta \rightarrow 0^{+}}\,\inf _{x\in B(\bar{x},\delta )\cap E}\varphi (x), \end{aligned}$$(10)
$$\begin{aligned} \limsup _{x\rightarrow \bar{x}}\varphi (x):=\inf _{\delta >0}\,\sup _{x\in B(\bar{x},\delta )\cap E}\varphi (x)=\lim _{\delta \rightarrow 0^{+}}\,\sup _{x\in B(\bar{x},\delta )\cap E}\varphi (x), \end{aligned}$$(11)
where \(B(\bar{x},\delta ):=\{x\in X:\Vert x-\bar{x}\Vert <\delta \}\).
Remark 2
The second equality in (10) follows from (9) and the fact that each component of \(\inf _{x\in B(\bar{x},\delta )}\varphi (x)\) is a nonincreasing function of \(\delta >0\). A similar explanation is valid for (11). These properties also imply that
$$\begin{aligned} \liminf _{x\rightarrow \bar{x}}\varphi (x)\leqq \limsup _{x\rightarrow \bar{x}}\varphi (x). \end{aligned}$$
Proposition 3
For any function \(\varphi =(\varphi _{1},{\ldots },\varphi _{p}):X\rightarrow {\bar{\mathbb{R }}}^{p}\) and \(\bar{x}\in X\), we have
$$\begin{aligned} \liminf _{x\rightarrow \bar{x}}\varphi (x)=\Big (\liminf _{x\rightarrow \bar{x}}\varphi _{1}(x),{\ldots },\liminf _{x\rightarrow \bar{x}}\varphi _{p}(x)\Big )^{T}, \end{aligned}$$(13)
$$\begin{aligned} \limsup _{x\rightarrow \bar{x}}\varphi (x)=\Big (\limsup _{x\rightarrow \bar{x}}\varphi _{1}(x),{\ldots },\limsup _{x\rightarrow \bar{x}}\varphi _{p}(x)\Big )^{T}. \end{aligned}$$(14)
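Proposition 3 says the vector limits can be computed one coordinate at a time. The following sketch (ours, with hypothetical names; the monotone limits over shrinking balls are only approximated by sampling a fixed small punctured ball) illustrates this for \(\varphi (x)=(x^{2},\,\mathrm{sign}(x))\) at \(\bar{x}=0\), whose second component oscillates between \(-1\) and \(1\).

```python
import math

def approx_limits(phi, xbar, p, delta=1e-3, n=2000):
    """Numerically approximate the componentwise lower and upper limits
    of phi at xbar (Proposition 3) by sampling the punctured ball
    B(xbar, delta) \\ {xbar}.  A sketch only: the true limits are the
    monotone limits of these ball infima/suprema as delta -> 0+."""
    samples = [xbar + delta * (2 * k / n - 1) for k in range(n + 1)]
    samples = [x for x in samples if x != xbar]   # exclude the point itself
    values = [phi(x) for x in samples]
    lo = tuple(min(v[i] for v in values) for i in range(p))
    hi = tuple(max(v[i] for v in values) for i in range(p))
    return lo, hi

# phi has components x**2 (continuous) and sign(x) (discontinuous at 0):
phi = lambda x: (x * x, math.copysign(1.0, x))
lo, hi = approx_limits(phi, 0.0, 2)
# lo is close to (0, -1) and hi is close to (0, 1): each component is the
# liminf (resp. limsup) of the corresponding coordinate function.
```

Each component of `lo` and `hi` is exactly the scalar lower/upper limit of the corresponding \(\varphi _{i}\) over the sampled ball, which is the content of (13)–(14).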
4 The case of partial order defined by a polyhedral cone
This section describes a partially ordered space \((\mathbb R ^{m},\preceq )\) where the partial order is defined by a polyhedral cone.
Definition 4
Let \(Q\subset \mathbb R ^{m}\) be a cone.
(a) The dual cone of \(Q\) is defined by
$$\begin{aligned} Q^{*}:=\left\{ z\in \mathbb R ^{m}:z^{T}y\geqq 0,~\forall y\in Q\right\} . \end{aligned}$$(15)
(b) \(Q\) is called polyhedral if \(Q\) is an intersection of a finite number of half-spaces containing the origin:
$$\begin{aligned} Q=\left\{ y\in \mathbb R ^{m}:Ay\geqq 0\right\} , \end{aligned}$$(16)
where \(A\) is some matrix of finite dimension (cf. [13], p. 102, formula 3(14)).
We assume that the cone \(Q\) has nonempty interior, hence it cannot be contained in any nontrivial hyperplane. Then the dual cone \(Q^{*}\) can be represented as the conic hull of the transposed rows of \(A\) (see [2, p. 155]):
$$\begin{aligned} Q^{*}=\Big \{ \textstyle \sum _{i\in I}\lambda _{i}a_{i}:\lambda _{i}\geqq 0,~i\in I\Big \} , \end{aligned}$$(17)
where \(a_{i}^{T}\) denotes the \(i\)-th row of \(A\).
It follows from (16) that
$$\begin{aligned} Q=\left\{ y\in \mathbb R ^{m}:a_{i}^{T}y\geqq 0 \text{ for all } i\in I\right\} . \end{aligned}$$(18)
Proposition 4
Let \(A\in \mathbb R ^{p\times m}\). The cone \(Q\) defined by (16) is pointed, i.e., satisfies the equality
$$\begin{aligned} Q\cap (-Q)=\{0\} \end{aligned}$$(19)
if and only if \(\mathrm rank A=m\).
Proof
We have
$$\begin{aligned} Q\cap (-Q)=\left\{ y\in \mathbb R ^{m}:Ay\geqq 0 \text{ and } Ay\leqq 0\right\} =\left\{ y\in \mathbb R ^{m}:Ay=0\right\} . \end{aligned}$$
Let \(\mathrm rank A=r\). It is known from linear algebra that the solution space of the equation \(Ay=0\) has dimension \(m-r\). Hence, for (19) to hold, it is necessary and sufficient that \(Ay=0\) has only the zero solution, that is, \(m=r\). \(\square \)
Corollary 2
If the cone \(Q\) is pointed, then \(p\geqq m\).
Proposition 5
If the matrix \(A\) has no zero rows, then
$$\begin{aligned} \mathrm int Q=\left\{ y\in \mathbb R ^{m}:Ay>0\right\} . \end{aligned}$$(20)
Proof
“ \(\supset \)” : Let \(Ay>0\). By the continuity of matrix multiplication, there exists a neighborhood \(U\) of \(y\) such that \(Au>0\) for all \(u\in U\). This implies \(Au\geqq 0\) for all \(u\in U\) , therefore \(U\subset Q\) by (16). We have thus verified that \(y\in \mathrm int Q\).
“ \(\subset \)” : Let \(y\in \mathrm int Q\), then there exists an open set \(U\) such that \(y\in U\subset Q\). Suppose that \( Ay\ngtr 0\), hence there exists \(i\in I\) such that \(a_{i}^{T}y\leqq 0\). However, since \(U\subset Q\), it follows from (18) that \( a_{i}^{T}u\geqq 0\) for all \(u\in U\). Thus \(a_{i}^{T}y=0\) and \(a_{i}^{T}u\) is nonnegative on a neighborhood of \(y\), which can hold only if \(a_{i}^{T}=0\). We have shown that \(A\) has a zero row, contrary to the assumption. \(\square \)
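Propositions 4 and 5 are directly checkable in coordinates: pointedness reduces to \(\mathrm rank A=m\), membership in \(Q\) to \(Ay\geqq 0\), and membership in \(\mathrm int Q\) to the strict inequality \(Ay>0\). The sketch below (ours; the rank routine is an illustrative pure-Python stand-in for a library call) uses the matrix \(A\) that appears in Example 1 below, for which \(Q=\{y:y_{1}\geqq 0,\ y_{2}\leqq 0\}\).

```python
def rank(A, eps=1e-9):
    """Rank of a small matrix via Gaussian elimination (illustrative)."""
    M = [row[:] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def in_Q(A, y):       # membership in Q = {y : Ay >= 0}, formula (16)
    return all(sum(a * v for a, v in zip(row, y)) >= 0 for row in A)

def in_int_Q(A, y):   # interior of Q: Ay > 0 strictly (Proposition 5)
    return all(sum(a * v for a, v in zip(row, y)) > 0 for row in A)

# The matrix from Example 1: Q = {y : y1 >= 0, y2 <= 0}.
A = [[0.0, -1.0], [1.0, 0.0]]
assert rank(A) == 2                 # rank A = m = 2, so Q is pointed
assert in_Q(A, (1.0, 0.0))          # a boundary point of Q...
assert not in_int_Q(A, (1.0, 0.0))  # ...is not in the interior
assert in_int_Q(A, (1.0, -1.0))    # Ay = (1, 1) > 0
```

Note that a rank-deficient matrix, e.g. a single row repeated, would yield a cone containing a whole line, in line with the proof of Proposition 4.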
In the sequel, we will assume that the space \(\mathbb R ^{m}\) is partially ordered by a polyhedral cone \(Q\) which is pointed and has nonempty interior. The partial order relation \(\preceq \) is defined as follows:
$$\begin{aligned} y_{1}\preceq y_{2}\;:\Longleftrightarrow \;y_{2}-y_{1}\in Q. \end{aligned}$$(21)
We will identify the matrix \(A\) with the linear mapping \(A:\mathbb R ^{m}\rightarrow \mathbb R ^{p}\). Observe that conditions (21) and (16) imply that
$$\begin{aligned} y_{1}\preceq y_{2}\quad \Longleftrightarrow \quad Ay_{1}\leqq Ay_{2}, \end{aligned}$$(22)
which shows that the mapping \(A\) preserves the partial orders in the respective spaces \((\mathbb R ^{m},\preceq )\) and \((\mathbb R ^{p},\leqq )\).
For any nonempty subset \(M\) of \(\mathbb R ^{m}\), let us consider the image \( A(M)=\left\{ Ay:y\in M\right\} \subset \mathbb R ^{p}\). Then \(\inf A(M)\) and \(\sup A(M)\) are defined with respect to the natural partial order (2), and so, by Proposition 2, they always exist as elements of \( {\bar{\mathbb{R }}}^{p}\).
Remark 3
In particular, if \(Q=\mathbb R _{+}^{m}=\left\{ y\in \mathbb R ^{m}:y\geqq 0\right\} \), then we can assume that \(p=m\) and \(A\) is the identity matrix. In this case, we have \(\inf A(M)=\inf M\) and \(\sup A(M)=\sup M\).
5 Multiobjective optimization
Let \(X\), \(Y\) be real normed spaces. We shall deal with the following multiobjective optimization problem:
$$\begin{aligned} \mathrm{Min}\,\left\{ f(x):x\in S\right\} , \end{aligned}$$(23)
where \(S\) is a nonempty subset of \(X\) defined by
$$\begin{aligned} S:=\left\{ x\in C:g(x)\in -D\right\} . \end{aligned}$$(24)
We assume that \(f=(f_{1},{\ldots },f_{m}):X\rightarrow \mathbb R ^{m}\) is an arbitrary mapping, \(g:X\rightarrow Y\) is a continuous mapping, \(C\) is a nonempty closed subset of \(X\) and \(D\) is a closed convex cone in \(Y\) (hence \( S\) is closed). The minimization in (23) is understood with respect to the partial order defined by (21), where \(Q\) is a pointed polyhedral cone in \(\mathbb R ^{m}\) with nonempty interior.
We denote by \(\mathcal N (x)\) the collection of all neighborhoods of \(x\).
Definition 5
[7, 10]. Let \(\bar{x}\in S\).
(a) We say that \(\bar{x}\) is a weakly local Pareto minimizer (or weakly local efficient solution) for (23) if there exists \(U\in \mathcal N (\bar{x})\) such that
$$\begin{aligned} f(x)-f(\bar{x})\notin -\mathrm int Q \quad \text{ for all } x\in S\cap U. \end{aligned}$$(25)
(b) Let \(\nu \) be a positive integer. We say that \(\bar{x}\) is a strict local Pareto minimizer of order \(\nu \) (or strict local efficient solution of order \(\nu \)) for (23), if there exist \( \alpha >0\) and \(U\in \mathcal N (\bar{x})\) such that
$$\begin{aligned} (f(x)+Q)\cap B(f(\bar{x}),\alpha \Vert x-\bar{x}\Vert ^{\nu })=\emptyset \quad \text{ for all } x\in S\cap U\backslash \{\bar{x}\}. \end{aligned}$$(26)
Extending the definitions from [3] to vector-valued functions \( f:X\rightarrow \mathbb R ^{m}\), we now introduce the following lower and upper Ginchev derivatives for any point \(\bar{x}\in X\), any direction \(y\in X\backslash \{0\}\), and \(\nu =1,2,{\ldots }\):
$$\begin{aligned} f_{-}^{(0)}(\bar{x};y):=\liminf _{(t,u)\rightarrow (0^{+},y)}Af(\bar{x}+tu), \end{aligned}$$(27)
$$\begin{aligned} f_{+}^{(0)}(\bar{x};y):=\limsup _{(t,u)\rightarrow (0^{+},y)}Af(\bar{x}+tu), \end{aligned}$$(28)
$$\begin{aligned} f_{-}^{(\nu )}(\bar{x};y):=\liminf _{(t,u)\rightarrow (0^{+},y)}\frac{\nu !}{t^{\nu }}\Big (Af(\bar{x}+tu)-f_{-}^{(0)}(\bar{x};y)-\sum _{j=1}^{\nu -1}\frac{t^{j}}{j!}f_{-}^{(j)}(\bar{x};y)\Big ), \end{aligned}$$(29)
$$\begin{aligned} f_{+}^{(\nu )}(\bar{x};y):=\limsup _{(t,u)\rightarrow (0^{+},y)}\frac{\nu !}{t^{\nu }}\Big (Af(\bar{x}+tu)-f_{+}^{(0)}(\bar{x};y)-\sum _{j=1}^{\nu -1}\frac{t^{j}}{j!}f_{+}^{(j)}(\bar{x};y)\Big ), \end{aligned}$$(30)
where the lower and upper limits are considered as elements of \({\bar{\mathbb{R }}}^{p}\) in the sense of Definition 3. More precisely, we have
and analogous descriptions for (29)–(30). We accept that the derivative \(f_{-}^{(\nu )}(\bar{x};y)\) (resp. \(f_{+}^{(\nu )}(\bar{x};y)\) ) exists as an element of \({\bar{\mathbb{R }}}^{p}\) if and only if the derivatives \(f_{-}^{(j)}(\bar{x};y)\) (resp. \(f_{+}^{(j)}(\bar{x};y)\)) exist as elements of \(\mathbb R ^{p}\) for \(j=0,1,{\ldots },\nu -1\).
In particular, if \(Q=\mathbb R _{+}^{m}\), then by Remark 3, we have \( p=m\), and the matrix \(A\) can be deleted from formulae (27)–(30).
Note that the higher-order directional derivatives defined above do not require the existence of usual limits of any kind. Another possibility to avoid such requirement in vector optimization is to use the Kuratowski upper limit set in the definition of a second-order directional derivative; see, e.g., [5, p. 21].
Applying Proposition 3, we can represent the limits (27) and (28) componentwise as follows: for each \(i\in I\),
$$\begin{aligned} \big (f_{-}^{(0)}(\bar{x};y)\big )_{i}=\liminf _{(t,u)\rightarrow (0^{+},y)}a_{i}^{T}f(\bar{x}+tu), \end{aligned}$$(33)
$$\begin{aligned} \big (f_{+}^{(0)}(\bar{x};y)\big )_{i}=\limsup _{(t,u)\rightarrow (0^{+},y)}a_{i}^{T}f(\bar{x}+tu). \end{aligned}$$(34)
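To see what these quotients look like in the simplest setting, the sketch below (ours; a sampling approximation, not a definition from the paper) evaluates the upper Ginchev quotient for a scalar function with \(Q=\mathbb R _{+}\), so that \(A\) is the identity as in Remark 3. For \(f(x)=x^{2}\) at \(\bar{x}=0\) we have \(f_{+}^{(0)}(0;1)=0=f(0)\) and \(f_{+}^{(1)}(0;1)=0\), so the order-2 derivative reduces to the upper limit of \(\frac{2!}{t^{2}}(f(tu)-f(0))=2u^{2}\), which tends to 2 as \((t,u)\rightarrow (0^{+},1)\).

```python
import itertools

def upper_ginchev(f_scalar, xbar, y, nu, delta=1e-4, n=200):
    """Approximate the upper Ginchev derivative of order nu for a scalar
    function with Q = R_+ (A = identity, cf. Remark 3), in the special
    case where the lower-order derivatives vanish, so the formula
    reduces to limsup (nu!/t**nu) * (f(xbar + t*u) - f(xbar)).
    Purely illustrative: samples (t, u) near (0+, y)."""
    fact = 1
    for j in range(2, nu + 1):
        fact *= j
    best = -float("inf")
    for k, m in itertools.product(range(1, n + 1), repeat=2):
        t = delta * k / n                   # t in (0, delta]
        u = y + delta * (2 * m / n - 1)     # u in B(y, delta)
        q = fact / t**nu * (f_scalar(xbar + t * u) - f_scalar(xbar))
        best = max(best, q)
    return best

# f(x) = x**2 at xbar = 0, direction y = 1, nu = 2:
# the quotient equals 2*u**2, so the approximation is close to 2.
d2 = upper_ginchev(lambda x: x * x, 0.0, 1.0, 2)
```

For a twice differentiable function this value agrees with the classical second directional derivative, but the definitions (27)–(30) require no usual limits at all, which is the point of the construction.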
We denote by \(K(S,\bar{x})\) the contingent cone to \(S\) at \(\bar{x}\):
$$\begin{aligned} K(S,\bar{x}):=\left\{ y\in X:\exists \,t_{k}\rightarrow 0^{+},~\exists \,y_{k}\rightarrow y \text{ such that } \bar{x}+t_{k}y_{k}\in S \text{ for all } k\right\} . \end{aligned}$$
For the function \(g\) appearing in (24), we define
$$\begin{aligned} dg(\bar{x};y):=\lim _{(t,u)\rightarrow (0^{+},y)}\frac{g(\bar{x}+tu)-g(\bar{x})}{t}, \end{aligned}$$
whenever this limit exists.
6 Necessary optimality conditions
The following theorem presents necessary conditions for weakly local Pareto minimizers in problem (23)–(24). It is a modification of [9, Theorem 3.1]. While the author of [9] assumes the existence of the following Ginchev derivatives (Hadamard type):
$$\begin{aligned} f^{(0)}(\bar{x};y):=\lim _{(t,u)\rightarrow (0^{+},y)}f(\bar{x}+tu), \end{aligned}$$(37)
$$\begin{aligned} f^{(\nu )}(\bar{x};y):=\lim _{(t,u)\rightarrow (0^{+},y)}\frac{\nu !}{t^{\nu }}\Big (f(\bar{x}+tu)-\sum _{j=0}^{\nu -1}\frac{t^{j}}{j!}f^{(j)}(\bar{x};y)\Big ),\quad \nu =1,2,{\ldots }, \end{aligned}$$(38)
we use a considerably weaker assumption of the existence of upper derivatives (28) and (30). On the other hand, we assume that the ordering cone \(Q\) is polyhedral, which is not present in [9].
Theorem 1
Suppose that \(\mathrm int D\ne \emptyset \) and the space \(\mathbb R ^{m}\) is partially ordered by a pointed polyhedral cone \(Q\) with \(\mathrm int Q\ne \emptyset \), such that the corresponding matrix \(A\) (see (16)) has no zero rows. Let \(\bar{x}\) be a weakly local Pareto minimizer for problem (23)–(24). Assume that \(dg(\bar{ x};y)\) exists for all \(y\in K(C,\bar{x})\) and, for each \(y\in K(C,\bar{x} )\cap \left\{ u:-dg(\bar{x};u)\in \mathrm int D\right\} \), there exist the upper Ginchev derivatives \(f_{+}^{(j)}(\bar{x};y)\), \(j=0,1,{\ldots },\nu \). Then the following optimality conditions hold:
(i) \(f_{+}^{(0)}(\bar{x};y)-Af(\bar{x})\nless 0\), for all \(y\in K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm int D\right\} \).
(ii) Let \(\nu \geqq 1\). If for some \(y\in K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm int D\right\} \), we have
$$\begin{aligned} f_{+}^{(0)}(\bar{x};y)=Af(\bar{x}),~f_{+}^{(j)} (\bar{x};y)=0,~j=1,{\ldots },\nu -1, \end{aligned}$$(39)
then \(f_{+}^{(\nu )}(\bar{x};y)\nless 0\).
Proof
(i)
Since \(\bar{x}\) is a weakly local Pareto minimizer for (23)–(24), there exists \(U\in \mathcal N (\bar{x})\) such that (25) holds, which is equivalent to
$$\begin{aligned} f(x)-f(\bar{x})\in -(\mathbb R ^{m}\backslash \text{ int}Q) \quad \text{ for} \text{ all} x\in S\cap U. \end{aligned}$$(40)For \(y\in K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm int D\right\} \), there exist sequences \(t_{k}\rightarrow 0^{+}\) and \(y_{k}\rightarrow y\) such that
$$\begin{aligned} \bar{x}+t_{k}y_{k}\in C \quad \text{ for} \text{ all} k. \end{aligned}$$(41)Since \(dg(\bar{x};y)\) exists, we have
$$\begin{aligned} dg(\bar{x};y)=\lim _{k\rightarrow \infty }\frac{g(\bar{x}+t_{k}y_{k})-g(\bar{x })}{t_{k}}. \end{aligned}$$(42)For sufficiently large \(k\), it follows from (41), (42), and \( dg(\bar{x};y)\in -\mathrm int D\) that
$$\begin{aligned} \bar{x}+t_{k}y_{k}\in C\cap U \end{aligned}$$(43)and
$$\begin{aligned} \frac{g(\bar{x}+t_{k}y_{k})-g(\bar{x})}{t_{k}}\in -D. \end{aligned}$$(44)Condition (44) and the convexity of \(D\) yield
$$\begin{aligned} g(\bar{x}+t_{k}y_{k})\in g(\bar{x})-D\subset -D-D\subset -D. \end{aligned}$$(45)Therefore, from (43) and (45), we have that
$$\begin{aligned} \bar{x}+t_{k}y_{k}\in S\cap U. \end{aligned}$$(46)Making use of (40) and (46), we obtain
$$\begin{aligned} f(\bar{x}+t_{k}y_{k})-f(\bar{x})\in -(\mathbb R ^{m}\backslash \mathrm int Q). \end{aligned}$$(47)By Proposition 5, formula (20) is valid, therefore
$$\begin{aligned} \mathbb R ^{m}\backslash \mathrm int Q=\left\{ y:a_{i}^{T}y\leqq 0 \text{ for} \text{ some} i\in I\right\} . \end{aligned}$$(48)Conditions (47) and (48) imply that, for each \(k\), there exists an index \(i(k)\in I\) satisfying
$$\begin{aligned} a_{i(k)}^{T}(f(\bar{x}+t_{k}y_{k})-f(\bar{x}))\geqq 0. \end{aligned}$$By choosing an appropriate subsequence of \(\{k\}\), we may assume that the sequence \(\{i(k)\}\) is constant. In other words, there exists an index \(l\in I\) such that
$$\begin{aligned} a_{l}^{T}(f(\bar{x}+t_{k}y_{k})-f(\bar{x}))\geqq 0 \quad \text{ for} \text{ all} k. \end{aligned}$$(49)Using the convergence conditions \(t_{k}\rightarrow 0^{+}\) and \( y_{k}\rightarrow y\), we deduce from (49) that, for each \(\delta >0\),
$$\begin{aligned} \sup _{t\in (0,\delta ),\,u\in B(y,\delta )}a_{l}^{T}f(\bar{x}+tu)-a_{l}^{T}f(\bar{x})\geqq 0. \end{aligned}$$(50)Therefore, by (11), we have
$$\begin{aligned} \limsup _{(t,u)\rightarrow (0^{+},y)}a_{l}^{T}f(\bar{x}+tu)-a_{l}^{T}f(\bar{x} )\geqq 0. \end{aligned}$$(51)Now observe that, by (34), the left-hand side of (51) is equal to the \(l\)-th component of \(f_{+}^{(0)}(\bar{x};y)-Af(\bar{x})\), which shows that
$$\begin{aligned} f_{+}^{(0)}(\bar{x};y)-Af(\bar{x})\nless 0. \end{aligned}$$
(ii)
Using definition (30) and assumptions (39), it is easy to compute:
$$\begin{aligned} f_{+}^{(\nu )}(\bar{x};y)=\limsup _{(t,u)\rightarrow (0^{+},y)}\frac{\nu !}{ t^{\nu }}\left( Af(\bar{x}+tu)-Af(\bar{x})\right) . \end{aligned}$$(52)The rest of the proof is almost the same as in part (i). The only difference is that we multiply (49) by \(\nu !/t_{k}^{\nu }\) to get
$$\begin{aligned} \frac{\nu !}{t_{k}^{\nu }}a_{l}^{T}(f(\bar{x}+t_{k}y_{k})-f(\bar{x}))\geqq 0 \text{ for} \text{ all} k. \end{aligned}$$(53)Consequently, instead of (50) and (51), we have, respectively,
$$\begin{aligned} \sup _{t\in (0,\delta ),\,u\in B(y,\delta )}\frac{\nu !}{ t^{\nu }}a_{l}^{T}(f(\bar{x}+tu)-f(\bar{x}))\geqq 0 \end{aligned}$$and
$$\begin{aligned} \limsup _{(t,u)\rightarrow (0^{+},y)}\frac{\nu !}{t^{\nu }}a_{l}^{T}(f(\bar{x} +tu)-f(\bar{x}))\geqq 0. \end{aligned}$$(54)Now by (14), the left-hand side of (54) is equal to the \(l\)-th component of (52). Hence, inequality (54) means that \( f_{+}^{(\nu )}(\bar{x};y)\nless 0\).
\(\square \)
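Condition (i) can also be checked numerically. The sketch below uses a hypothetical smooth instance of our own (not the paper's Example 1): \(f(x)=(x,x^{2})\), \(C=\mathbb R \), \(g(x)=-x\), \(D=\mathbb R _{+}\), and \(Q=\mathbb R _{+}^{2}\), so that \(A\) is the identity (Remark 3). Here \(\bar{x}=0\) is a weakly local Pareto minimizer, every \(y>0\) satisfies \(-dg(0;y)=y\in \mathrm int D\), and the sampled approximation of \(f_{+}^{(0)}(0;y)-f(0)\) is not componentwise negative, as the theorem requires.

```python
def upper_limit_components(f, xbar, y, delta=1e-3, n=400):
    """Sampled approximation of f_+^{(0)}(xbar; y): the componentwise
    limsup of f(xbar + t*u) as (t, u) -> (0+, y).  Illustrative only."""
    vals = []
    for k in range(1, n + 1):
        t = delta * k / n                    # t in (0, delta]
        for u in (y - delta, y, y + delta):  # a few points of B(y, delta)
            vals.append(f(xbar + t * u))
    p = len(vals[0])
    return tuple(max(v[i] for v in vals) for i in range(p))

# Hypothetical instance: f(x) = (x, x**2), xbar = 0, Q = R_+^2 (A = identity).
f = lambda x: (x, x * x)
y = 1.0                       # a feasible direction: -dg(0; y) = y > 0
lim0 = upper_limit_components(f, 0.0, y)
diff = tuple(a - b for a, b in zip(lim0, f(0.0)))
# Condition (i): diff must NOT be componentwise < 0.
not_less = any(d >= 0 for d in diff)
```

Since \(f\) is continuous here, the sampled upper limit is only slightly above \(f(0)=(0,0)\); both components of `diff` are small and nonnegative, so the necessary condition holds, consistent with \(\bar{x}=0\) being a weak minimizer of this instance.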
We now give an example to illustrate Theorem 1.
Example 1
Let \(f:\mathbb R \rightarrow \mathbb R ^{2}\) and \(g:\mathbb R \rightarrow \mathbb R \) be given by
where \(\mathbb Q \) stands for the set of rational numbers,
Let \(C=\mathbb R \), \(D=\mathbb R _{+}\), \(\bar{x}=0\) and \(Q=\left\{ y\in \mathbb R ^{2}:Ay\geqq 0\right\} \), where \(A=\left[ \begin{array}{cc} 0&-1 \\ 1&0 \end{array} \right] .\) Hence \(Q=\left\{ y\in \mathbb R ^{2}:y_{1}\geqq 0,~y_{2}\leqq 0\right\} \),
and \(K(C,\bar{x})=\mathbb R \). Since \(f(\bar{x})=(0,0)^{T}\), we have \(f(x)-f( \bar{x})\notin -\mathrm int Q\) for all \(x\in S,\) hence \(\bar{x}=0\) is a weakly local Pareto minimizer for problem (23)–(24). We have that \(dg(0;u)\) exists for all \(u\in \mathbb R \ \)and \(dg(0;u)=-u<0\) for \(u>0\). Therefore, \(K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm int \mathbb R _{+}\right\} =\mathbb R _{+}{\backslash } \left\{ 0\right\} .\) Let \( \nu =1,\) then, for any \(y\in K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm int \mathbb R _{+}\right\} \),
which leads to \(f_{+}^{(0)}(\bar{x};y)-Af(\bar{x})\nless 0\), hence condition (i) of Theorem 1 is satisfied. Moreover,
hence \(f_{+}^{(1)}(\bar{x};y)\nless 0\), and condition (ii) of Theorem 1 is satisfied for \(\nu =1.\)
Let us note that we cannot apply Theorem 3.1 of [9] to Example 1 because the derivatives (37)–(38) of \(f\) do not exist.
7 Sufficient optimality conditions
In the next theorem we shall use the following notation for the closure of the cone generated by \(D+g(\bar{x})\):
$$\begin{aligned} D_{g(\bar{x})}:=\mathrm{cl}\,\mathrm{cone}\left( D+g(\bar{x})\right) . \end{aligned}$$
Theorem 2
Suppose that \(\dim X<\infty \). Let \(\bar{x}\) be a feasible point for problem (23)–(24) and let \(dg(\bar{x};y)\) exist for all \(y\in K(C,\bar{x})\backslash \{0\}\). Assume that there is a positive integer \(\nu \) such that for each \(y\in K(C,\bar{x})\cap \left\{ u:dg(\bar{x} ;u)\in -D_{g(\bar{x})}\right\} \backslash \{0\}\), there exist the lower Ginchev derivatives \(f_{-}^{(j)}(\bar{x};y)\), \(j=0,1,{\ldots },\nu \), and one of the following conditions \((A_{k})\) \((k=1,{\ldots },\nu )\) holds:
$$\begin{aligned} (A_{k}):\quad f_{-}^{(0)}(\bar{x};y)=Af(\bar{x}),\quad f_{-}^{(j)}(\bar{x};y)=0,~j=1,{\ldots },k-1,\quad f_{-}^{(k)}(\bar{x};y)\nleqq 0. \end{aligned}$$
Then \(\bar{x}\) is a strict local Pareto minimizer of order \(\nu \) for problem (23)–(24).
Proof
Contrary to the conclusion, suppose that condition \((A_{k})\) holds for some fixed \(k\in \{1,{\ldots },\nu \}\), but \(\bar{x}\) is not a strict local Pareto minimizer of order \(\nu \) for problem (23)–(24). By (26), we deduce that there exist sequences \(x_{n}\in S,\) \(x_{n}\ne \bar{x},\) \(x_{n}\rightarrow \bar{x}\) and \(b_{n}\in Q\) such that
$$\begin{aligned} \Vert f(x_{n})+b_{n}-f(\bar{x})\Vert <\frac{1}{n}\Vert x_{n}-\bar{x}\Vert ^{\nu } \end{aligned}$$(55)
(see Proposition 3.4 in [7]). Putting \(y_{n}=\frac{x_{n}-\bar{x}}{ \Vert x_{n}-\bar{x}\Vert }\) and \(t_{n}=\Vert x_{n}-\bar{x} \Vert ,\) we get that \(t_{n}\rightarrow 0^{+}\) and \(x_{n}=\bar{x} +t_{n}y_{n}\in S\subset C.\) We may assume, by choosing a subsequence if necessary, that \(y_{n}\) converges to some vector \(y\) with \(\Vert y\Vert =1\). Hence \(y\in K(C,\bar{x})\backslash \left\{ 0\right\} .\) Since \(dg(\bar{x};y)\) exists, it must satisfy
$$\begin{aligned} dg(\bar{x};y)=\lim _{n\rightarrow \infty }\frac{g(\bar{x}+t_{n}y_{n})-g(\bar{x})}{t_{n}}. \end{aligned}$$(56)
Moreover, \(g(\bar{x}+t_{n}y_{n})=g(x_{n})\in -D,\) and consequently,
$$\begin{aligned} \frac{g(\bar{x}+t_{n}y_{n})-g(\bar{x})}{t_{n}}\in -\mathrm{cone}\left( D+g(\bar{x})\right) \quad \text{ for all } n. \end{aligned}$$(57)
Conditions (56), (57), \(y\in K(C,\bar{x}){\backslash } \left\{ 0\right\} ,\) and the closedness of \(D_{g(\bar{x})}\) imply that
$$\begin{aligned} y\in K(C,\bar{x})\cap \left\{ u:dg(\bar{x};u)\in -D_{g(\bar{x})}\right\} \backslash \{0\}. \end{aligned}$$
Since \(k\leqq \nu \), it follows from (55) that
$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{k!}{t_{n}^{k}}\left( f(\bar{x}+t_{n}y_{n})+b_{n}-f(\bar{x})\right) =0. \end{aligned}$$(58)
By condition \((A_{k})\), we have \(f_{-}^{(0)}(\bar{x};y)=Af(\bar{x})\), \( f_{-}^{(j)}(\bar{x};y)=0\), \(j=1,{\ldots },k-1\), hence
$$\begin{aligned} f_{-}^{(k)}(\bar{x};y)=\liminf _{(t,u)\rightarrow (0^{+},y)}\frac{k!}{t^{k}}\left( Af(\bar{x}+tu)-Af(\bar{x})\right) . \end{aligned}$$(59)
Now, using (13) and (59), we obtain that, for each \(i\in I\), the \(i\)-th component of \(f_{-}^{(k)}(\bar{x};y)\) is equal to
$$\begin{aligned} \liminf _{(t,u)\rightarrow (0^{+},y)}\frac{k!}{t^{k}}\,a_{i}^{T}(f(\bar{x}+tu)-f(\bar{x}))\leqq \liminf _{n\rightarrow \infty }\frac{k!}{t_{n}^{k}}\,a_{i}^{T}(f(\bar{x}+t_{n}y_{n})-f(\bar{x})). \end{aligned}$$(60)
But (58) yields
$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{k!}{t_{n}^{k}}\,a_{i}^{T}\left( f(\bar{x}+t_{n}y_{n})+b_{n}-f(\bar{x})\right) =0. \end{aligned}$$(61)
Since \(b_{n}\in Q\), it follows from (18) that \(a_{i}^{T}b_{n}\geqq 0\), hence
$$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{k!}{t_{n}^{k}}\,a_{i}^{T}(f(\bar{x}+t_{n}y_{n})-f(\bar{x}))\leqq 0. \end{aligned}$$(62)
We obtain from (60)–(62) that \(\left( f_{-}^{(k)}(\bar{x} ;y)\right) _{i}\leqq 0\). Since \(i\) is arbitrary, we have thus verified that \( f_{-}^{(k)}(\bar{x};y)\leqq 0\), which is in contradiction to condition \( (A_{k})\). \(\square \)
The following example illustrates Theorem 2.
Example 2
Let \(f:\mathbb R \rightarrow \mathbb R ^{2}\) and \(g:\mathbb R \rightarrow \mathbb R \) be given by
and
Let \(D=\mathbb R _{+}\), \(C=\mathbb R _{+}\), hence the feasible set is given by
Let \(\bar{x}=0\). For all \(y\in \mathbb R \), we have that \(dg(0;y)=-\Vert y\Vert \) exists and \(dg(0;y)\in -D_{g(0)}=-\mathbb R _{+}\). It is clear that \(K(\mathbb R _{+},0)=\mathbb R _{+}\), hence
$$\begin{aligned} K(C,\bar{x})\cap \left\{ u:dg(\bar{x};u)\in -D_{g(\bar{x})}\right\} \backslash \{0\}=\mathbb R _{+}\backslash \{0\}. \end{aligned}$$
Now, let \(Q=\left\{ y\in \mathbb R ^{2}:Ay\geqq 0\right\} \) where \(A=\left[ \begin{array}{cc} -1&1 \\ 1&1 \end{array} \right] \). Hence
$$\begin{aligned} Q=\left\{ y\in \mathbb R ^{2}:y_{2}\geqq y_{1},~y_{2}\geqq -y_{1}\right\} =\left\{ y\in \mathbb R ^{2}:y_{2}\geqq |y_{1}|\right\} . \end{aligned}$$
Note that, for any \((y_{1},y_{2})\in Q\), we have \(y_{2}\geqq 0\). We have, for all \(y\in \mathbb R _{+}\backslash \{0\}\),
By Theorem 2, the point \(\bar{x}=0\) is a strict local Pareto minimizer of order one (with respect to the polyhedral cone \(Q\)) for problem (23)–(24).
The same conclusion can be verified by Definition 5(b). Indeed, let us take \(\alpha =3/2\). Then, for each \((y_{1},y_{2})\in Q\), we have
hence condition (26) holds with \(\nu =1\) and \(U=\mathbb R \), which means that \(\bar{x}=0\) is a strict (global) Pareto minimizer of order one for problem (23)–(24).
It is not difficult to see that we cannot apply Theorem 4.1 of [9] to Example 2 because the derivative \(f^{(1)}(0;y)\) defined by (38) does not exist.
References
Chinchuluun, A., Pardalos, P.M.: A survey of recent developments in multiobjective optimization. Ann. Oper. Res. 154, 29–50 (2007)
Dattorro, J.: Convex Optimization and Euclidean Distance Geometry. \({\cal M}\varepsilon \beta oo\) Publishing, Palo Alto, California, USA (2005) (available online: http://meboo.convexoptimization.com/access.html)
Ginchev, I.: Higher order optimality conditions in nonsmooth optimization. Optimization 51(1), 47–72 (2002)
Ginchev, I.: Higher-order conditions for strict efficiency. Optimization 60(3), 311–328 (2011)
Ginchev, I., Guerraggio, A., Rocca, M.: From scalar to vector optimization. Appl. Math. 51(1), 5–36 (2006)
Göpfert, A., Riahi, H., Tammer, Ch., Zălinescu, C.: Variational Methods in Partially Ordered Spaces. Springer, New York (2003)
Jiménez, B.: Strict efficiency in vector optimization. J. Math. Anal. Appl. 265, 264–284 (2002)
Luc, D.T.: Pareto optimality. In: Chinchuluun, A., Migdalas, A., Pardalos, P.M., Pitsoulis, L. (eds.) Pareto Optimality, Game Theory and Equilibria, pp. 481–515. Springer (2008)
Luu, D.V.: Higher-order optimality conditions in nonsmooth cone-constrained multiobjective programming. Preprint, Institute of Mathematics, Hanoi, Vietnam (2008)
Luu, D.V., Kien, P.T.: On higher-order conditions for strict efficiency. Soochow J. Math. 33(1), 17–31 (2007)
Pappalardo, M.: Multiobjective optimization: a brief overview. In: Chinchuluun, A., Migdalas, A., Pardalos, P.M., Pitsoulis, L. (eds.) Pareto Optimality, Game Theory and Equilibria, pp. 517–528. Springer (2008)
Rahmo, E.-D., Studniarski, M.: Higher-order conditions for strict local Pareto minima in terms of generalized lower and upper directional derivatives. J. Math. Anal. Appl. 393, 212–221 (2012)
Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
Yuan, D., Chinchuluun, A., Liu, X., Pardalos, P.M.: Optimality conditions and duality for multiobjective programming involving \((C; \alpha ; \rho ; d)\) type-I functions. In: Konnov, I.V., Luc, D.T., Rubinov, A.M. (eds.) Generalized Convexity and Related Topics, pp. 73–87. Springer (2006)
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Rahmo, ED., Stasiak, A. & Studniarski, M. Lower and upper Ginchev derivatives of vector functions and their applications to multiobjective optimization. Optim Lett 8, 653–667 (2014). https://doi.org/10.1007/s11590-012-0604-3