1 Introduction

Higher-order optimality conditions in multiobjective optimization stated in terms of Ginchev directional derivatives have been studied by several authors (see [4, 9, 10]). The results obtained so far can be essentially divided into two groups:

  1.

    Conditions using lower and upper Ginchev derivatives of scalar functions; see e.g. Theorems 5.1 and 5.2 in [10], Theorems 4.1 and 4.2 in [4]. These results either use some scalarization of a multiobjective problem or are formulated in terms of Ginchev derivatives of coordinate functions (when partial order is defined by the positive orthant).

  2.

    Conditions using Ginchev derivatives of Hadamard type (defined for vector functions by formulae (37)–(38) below); see e.g. Theorems 3.1 and 4.1 in [10], Theorem 5.1 in [4]. These theorems require stronger assumptions on the minimized function than the ones in the first group.

In this paper we propose another approach which is valid only for minimization problems in finite-dimensional vector spaces where partial order is introduced by a polyhedral cone. Using the definitions of lower and upper limits of vector functions presented recently by the authors in [12], we define lower and upper Ginchev derivatives of vector functions and use them directly to formulate optimality conditions, thus avoiding any scalarization.

Note that the optimality conditions presented here are only in the primal form (i.e., they are formulated in terms of directional derivatives). Conditions in the dual form (i.e., containing Lagrange multipliers) can be obtained by using the tools of nonsmooth analysis and generalized convexity; see, e.g., [14]. For a general overview of multiobjective optimization and the Pareto optimality, see [1, 8, 11].

The paper is organized as follows. In Sects. 2 and 3 we review some definitions and results from [12] which will be used in the sequel. In Sect. 4 we describe the partial order defined by a polyhedral cone. In Sect. 5 we formulate a multiobjective optimization problem and define lower and upper Ginchev derivatives of vector functions. Sections 6 and 7 are devoted to necessary and sufficient optimality conditions, respectively.

2 Infima and suprema of sets in extended Euclidean spaces

Let \({\bar{\mathbb{R }}}=\mathbb{R }\cup \{-\infty ,\infty \}\) be the set of extended real numbers. The arithmetic operations in \(\mathbb{R }\) are extended to \({\bar{\mathbb{R }}}\) in an obvious manner, except for the combinations \(0\cdot (-\infty )\), \(0\cdot \infty \), \(-\infty +\infty \) and \(\infty -\infty \), which we regard as undefined rather than defining them in any special way (as is done, for example, in [13, p. 15]). The weak inequality \(\leqq \) in \(\mathbb{R }\) is extended to \({\bar{\mathbb{R }}}\) by assuming that the following (and only the following) inequalities hold for infinite elements:

$$\begin{aligned} -\infty&\leqq \alpha \leqq \infty \quad \text{for all } \alpha \in \mathbb{R }, \nonumber \\ -\infty&\leqq -\infty , \quad -\infty \leqq \infty , \quad \infty \leqq \infty . \end{aligned}$$
(1)

Definition 1

For any positive integer \(p\), the extended Euclidean space \( {\bar{\mathbb{R }}}^{p}\) is defined as the Cartesian product of \(p\) copies of \( {\bar{\mathbb{R }}}\). The operations of addition and scalar multiplication in \( {\bar{\mathbb{R }}}^{p}\) are performed componentwise whenever the respective operations in \({\bar{\mathbb{R }}}\) are defined.

Remark 1

In the sequel, the vectors in \({\bar{\mathbb{R }}}^{p}\) and in other Euclidean spaces will be assumed to be column vectors.

Let \(I:=\{1,{\ldots },p\}\). For any two vectors \(x=(x_{1},{\ldots },x_{p})^{T}\), \( y=(y_{1},{\ldots },y_{p})^{T}\) in \({\bar{\mathbb{R }}}^{p}\), we write

$$\begin{aligned}&x\leqq y \ \text{if and only if}\ x_{i}\leqq y_{i} \quad \text{for all } i\in I; \end{aligned}$$
(2)
$$\begin{aligned}&x <y \ \text{if and only if}\ x_{i}<y_{i} \quad \text{for all } i\in I. \end{aligned}$$
(3)

We shall also consider the negations of (2)–(3):

$$\begin{aligned}&x \nleqq y \ \text{if and only if}\ x_{i}>y_{i} \quad \text{for some } i\in I; \end{aligned}$$
(4)
$$\begin{aligned}&x \nless y \ \text{if and only if}\ x_{i}\geqq y_{i} \quad \text{for some } i\in I. \end{aligned}$$
(5)

Using (1) and (2), it is easy to prove the following.

Proposition 1

\(\left( {\bar{\mathbb{R }}}^{p},\leqq \right) \) is a partially ordered set (that is, the relation \(\leqq \) is reflexive, transitive and antisymmetric on \({\bar{\mathbb{R }}}^{p}\)).

Definition 2

Let \(M\) be a nonempty subset of \({\bar{\mathbb{R }}}^{p}\).

  (a)

    An element \(a\in {\bar{\mathbb{R }}}^{p}\) is called a lower (upper) bound of \(M\) if \(a\leqq x\) (\(x\leqq a\)) for all \(x\in M\).

  (b)

    \(a\in {\bar{\mathbb{R }}}^{p}\) is called the infimum (supremum) of \(M\) if \(a\) is a lower (upper) bound of \(M\) and for any lower (upper) bound \(b\) of \(M\) we have that \(b\leqq a\) (\(a\leqq b\)).

Definition 2 is in accordance with the general definition of the infimum (supremum) of a subset of a partially ordered set [6, Def. 2.1.7]. By the antisymmetry of \(\leqq \), the set \(M\) has at most one infimum (supremum); we will denote it by \(\inf M\) (\(\sup M\)).

We now define the projections \(\pi _{i}:{\bar{\mathbb{R }}}^{p}\rightarrow {\bar{\mathbb{R }}}\) by

$$\begin{aligned} \pi _{i}(x_{1},{\ldots },x_{p}):=x_{i}, i\in I. \end{aligned}$$
(6)

Proposition 2

Let \(M\) be a nonempty subset of \({\bar{\mathbb{R }}}^{p}\). For \(i\in I \), define

$$\begin{aligned} l_{i}:=\inf \{\pi _{i}(x):x\in M\}, \quad u_{i}:=\sup \{\pi _{i}(x):x\in M\}. \end{aligned}$$
(7)

Then

$$\begin{aligned} l:=(l_{1},{\ldots },l_{p})^{T}=\inf M, \quad u:=(u_{1},{\ldots },u_{p})^{T}=\sup M. \end{aligned}$$
(8)

Corollary 1

For every nonempty set \(M\subset {\bar{\mathbb{R }}}^{p}\) and for every \(i\in I\), we have

$$\begin{aligned} \pi _{i}(\inf M)=\inf \pi _{i}(M),\quad \pi _{i}(\sup M)=\sup \pi _{i}(M). \end{aligned}$$
(9)

It follows from Proposition 2 that \(\inf M\) and \(\sup M\) exist in \( {\bar{\mathbb{R }}}^{p}\) for every nonempty subset \(M\) of \({\bar{\mathbb{R }}}^{p}\). Since \(\inf M\) (\(\sup M\)) is a lower (upper) bound of \(M\), we always have

$$\begin{aligned} \inf M\leqq x\leqq \sup M\quad \text{for all } x\in M. \end{aligned}$$

It should also be noted that

$$\begin{aligned} (-\infty ,{\ldots },-\infty )^{T}=\inf {\bar{\mathbb{R }}}^{p}, \quad (\infty ,{\ldots },\infty )^{T}=\sup {\bar{\mathbb{R }}}^{p}. \end{aligned}$$
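
The componentwise description in Proposition 2 is straightforward to implement. The following sketch (an illustration, not part of the development) computes \(\inf M\) and \(\sup M\) for a finite set \(M\subset {\bar{\mathbb{R }}}^{p}\), assuming Python with numpy and representing \(\pm \infty \) by np.inf; the name inf_sup is illustrative only.

```python
import numpy as np

def inf_sup(M):
    """Componentwise infimum and supremum of a finite set M in the extended
    Euclidean space (Proposition 2); M is an (n, p) array whose rows are the
    elements of M, with +/-np.inf allowed as coordinates."""
    M = np.asarray(M, dtype=float)
    return M.min(axis=0), M.max(axis=0)

# A set in the extended plane containing an infinite coordinate.
M = np.array([[1.0, -np.inf],
              [0.5,  2.0],
              [3.0,  0.0]])
l, u = inf_sup(M)
print(l)  # [ 0.5 -inf]  = inf M, computed coordinate by coordinate as in (7)-(8)
print(u)  # [ 3.   2. ]  = sup M
```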

3 Lower and upper limits of vector functions

Let \(X\) be a real normed space. Below we define lower and upper limits for a function \(\varphi :X\rightarrow {\bar{\mathbb{R }}}^{p}\) in such a way that they generalize the well-known definitions for an extended-real-valued function [13, pp. 8, 13].

Definition 3

Let \(E\) be a nonempty subset of \(X\), and let \(\bar{x}\) be a limit point of \(E\). The lower and upper limits of a function \( \varphi :E\rightarrow {\bar{\mathbb{R }}}^{p}\) at \(\bar{x}\) are the elements of \({\bar{\mathbb{R }}}^{p}\) defined by

$$\begin{aligned} \liminf _{E\ni x\rightarrow \bar{x}}\varphi (x)&:= \lim _{\delta \rightarrow 0^{+}}\left( \inf _{x\in B(\bar{x},\delta )\cap E}\varphi (x)\right) =\sup _{\delta >0}\left( \inf _{x\in B(\bar{x},\delta )\cap E}\varphi (x)\right) , \end{aligned}$$
(10)
$$\begin{aligned} \limsup _{E\ni x\rightarrow \bar{x}}\varphi (x)&:= \lim _{\delta \rightarrow 0^{+}}\left( \sup _{x\in B(\bar{x},\delta )\cap E}\varphi (x)\right) =\inf _{\delta >0}\left( \sup _{x\in B(\bar{x},\delta )\cap E}\varphi (x)\right) , \end{aligned}$$
(11)

where \(B(\bar{x},\delta ):=\{x\in X:\Vert x-\bar{x}\Vert <\delta \}\).

Remark 2

The second equality in (10) follows from (9) and the fact that each component of \(\inf _{x\in B(\bar{x},\delta )\cap E}\varphi (x)\) is a nonincreasing function of \(\delta >0\). A similar explanation is valid for (11). These properties also imply that

$$\begin{aligned} \liminf _{E\ni x\rightarrow \bar{x}}\varphi (x)\leqq \limsup _{E\ni x\rightarrow \bar{x}}\varphi (x). \end{aligned}$$
(12)

Proposition 3

Let \(E\) be a nonempty subset of \(X\) and let \(\bar{x}\) be a limit point of \(E\). For any function \(\varphi =(\varphi _{1},{\ldots },\varphi _{p}):E\rightarrow {\bar{\mathbb{R }}}^{p}\), we have

$$\begin{aligned} \liminf _{E\ni x\rightarrow \bar{x}}\varphi (x)&= \left( \liminf _{E\ni x\rightarrow \bar{x}}\varphi _{1}(x),{\ldots },\liminf _{E\ni x\rightarrow \bar{x} }\varphi _{p}(x)\right) ^{T}, \end{aligned}$$
(13)
$$\begin{aligned} \limsup _{E\ni x\rightarrow \bar{x}}\varphi (x)&= \left( \limsup _{E\ni x\rightarrow \bar{x}}\varphi _{1}(x),{\ldots },\limsup _{E\ni x\rightarrow \bar{x} }\varphi _{p}(x)\right) ^{T}. \end{aligned}$$
(14)
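
To make Definition 3 and Proposition 3 concrete, the following numerical sketch approximates the lower and upper limits of a vector function of one real variable, taking \(E=\mathbb R \backslash \{\bar{x}\}\) and assuming Python with numpy. The inner infimum and supremum in (10)–(11) are replaced by a minimum and maximum over a uniform grid, so the output is only an approximation; all names are illustrative.

```python
import numpy as np

def lim_inf_sup(phi, xbar, delta=1e-4, n=100001):
    """Grid approximation of the lower and upper limits (10)-(11) of a vector
    function phi defined on E = R \\ {xbar}.  The inner inf/sup over
    B(xbar, delta) ∩ E is replaced by a min/max over a grid; decreasing delta
    (and increasing n) refines the approximation of the outer limit."""
    xs = xbar + np.linspace(-delta, delta, n)
    xs = xs[xs != xbar]                          # E excludes xbar itself
    vals = np.array([phi(x) for x in xs])        # rows: phi(x) for x in the grid
    # componentwise min/max, in accordance with (13)-(14)
    return vals.min(axis=0), vals.max(axis=0)

# phi has no limit at 0, but its lower and upper limits exist in the extended plane.
phi = lambda x: np.array([np.sin(1.0 / x), x * np.cos(1.0 / x)])
lo, up = lim_inf_sup(phi, 0.0)
print(lo, up)   # approximately (-1, 0) and (1, 0), up to the grid resolution
```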

4 The case of partial order defined by a polyhedral cone

This section describes a partially ordered space \((\mathbb R ^{m},\preceq )\) where the partial order is defined by a polyhedral cone.

Definition 4

Let \(Q\subset \mathbb R ^{m}\) be a cone.

  (a)

    The dual cone of \(Q\) is defined by

    $$\begin{aligned} Q^{*}:=\left\{ z\in \mathbb R ^{m}:z^{T}y\geqq 0,~\forall y\in Q\right\} . \end{aligned}$$
    (15)
  (b)

    \(Q\) is called polyhedral if \(Q\) is an intersection of a finite number of half-spaces containing the origin:

    $$\begin{aligned} Q=\left\{ y\in \mathbb R ^{m}:Ay\geqq 0\right\} , \end{aligned}$$
    (16)

    where \(A\) is a real matrix with finitely many rows (cf. [13, p. 102, formula 3(14)]).

We assume that the cone \(Q\) has nonempty interior; hence it is not contained in any hyperplane. Then the dual cone \(Q^{*}\) can be represented as the conic hull of the transposed rows of \(A\) (see [2, p. 155]):

$$\begin{aligned} Q^{*}=\mathrm{cone}\{a_{1},{\ldots },a_{p}\}, \quad \text{where } A=\left[ \begin{array}{c} a_{1}^{T} \\ \vdots \\ a_{p}^{T} \end{array} \right] \in \mathbb R ^{p\times m}. \end{aligned}$$
(17)

It follows from (16) that

$$\begin{aligned} (y\in Q)\Leftrightarrow (a_{i}^{T}y\geqq 0,~\forall i\in I). \end{aligned}$$
(18)

Proposition 4

Let \(A\in \mathbb R ^{p\times m}\). The cone \(Q\) defined by (16) is pointed, i.e., satisfies the equality

$$\begin{aligned} Q\cap (-Q)=\{0\}, \end{aligned}$$
(19)

if and only if \(\mathrm{rank}\,A=m\).

Proof

We have

$$\begin{aligned} Q\cap (-Q)=\left\{ y\in \mathbb R ^{m}:Ay=0\right\} . \end{aligned}$$

Let \(\mathrm{rank}\,A=r\). It is known from linear algebra that the solution space of the equation \(Ay=0\) has dimension \(m-r\). Hence, for (19) to hold, it is necessary and sufficient that \(Ay=0\) have only the zero solution, that is, \(m=r\). \(\square \)

Corollary 2

If the cone \(Q\) is pointed, then \(p\geqq m\).

Proposition 5

If the matrix \(A\) has no zero rows, then

$$\begin{aligned} \mathrm{int}\,Q=\left\{ y\in \mathbb R ^{m}:Ay>0\right\} . \end{aligned}$$
(20)

Proof

“\(\supset \)”: Let \(Ay>0\). By the continuity of matrix multiplication, there exists a neighborhood \(U\) of \(y\) such that \(Au>0\) for all \(u\in U\). This implies \(Au\geqq 0\) for all \(u\in U\), and therefore \(U\subset Q\) by (16). We have thus verified that \(y\in \mathrm{int}\,Q\).

“\(\subset \)”: Let \(y\in \mathrm{int}\,Q\); then there exists an open set \(U\) such that \(y\in U\subset Q\). Suppose that \(Ay\ngtr 0\); then there exists \(i\in I\) such that \(a_{i}^{T}y\leqq 0\). However, since \(U\subset Q\), it follows from (18) that \(a_{i}^{T}u\geqq 0\) for all \(u\in U\). Thus \(a_{i}^{T}y=0\) and \(a_{i}^{T}u\) is nonnegative on a neighborhood of \(y\), which can hold only if \(a_{i}^{T}=0\). We have shown that \(A\) has a zero row, contrary to the assumption. \(\square \)
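
The tests discussed above are easy to carry out numerically. The sketch below (an illustration only, assuming Python with numpy) checks membership in \(Q\) via (18), membership in \(\mathrm{int}\,Q\) via (20), and pointedness via Proposition 4; the matrix \(A\) is the one used later in Example 1.

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # defines Q = {y : Ay >= 0}, the cone of Example 1 below

def in_Q(y, A):
    """Membership test (18): y in Q iff a_i^T y >= 0 for every row a_i of A."""
    return np.all(A @ y >= 0)

def in_int_Q(y, A):
    """Interior test (20), valid when A has no zero rows (Proposition 5)."""
    return np.all(A @ y > 0)

def is_pointed(A):
    """Pointedness test of Proposition 4: Q ∩ (-Q) = {0} iff rank A = m."""
    return np.linalg.matrix_rank(A) == A.shape[1]

print(is_pointed(A))                      # True: rank A = 2 = m
print(in_Q(np.array([1.0, -1.0]), A))     # True, since y1 >= 0 and y2 <= 0
print(in_int_Q(np.array([1.0, 0.0]), A))  # False: this point lies on the boundary of Q
```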

In the sequel, we will assume that the space \(\mathbb R ^{m}\) is partially ordered by a polyhedral cone \(Q\) which is pointed and has nonempty interior. The partial order relation \(\preceq \) is defined as follows:

$$\begin{aligned} (x\preceq y):\Leftrightarrow (y-x\in Q). \end{aligned}$$
(21)

We will identify the matrix \(A\) with the linear mapping \(A:\mathbb R ^{m}\rightarrow \mathbb R ^{p}\). Observe that conditions (21) and (16) imply that

$$\begin{aligned} (x\preceq y)\Leftrightarrow (y-x\in Q)\Leftrightarrow (A(y-x)\geqq 0)\Leftrightarrow (Ax\leqq Ay), \end{aligned}$$
(22)

which shows that the mapping \(A\) preserves the partial orders in the respective spaces \((\mathbb R ^{m},\preceq )\) and \((\mathbb R ^{p},\leqq )\).

For any nonempty subset \(M\) of \(\mathbb R ^{m}\), let us consider the image \( A(M)=\left\{ Ay:y\in M\right\} \subset \mathbb R ^{p}\). Then \(\inf A(M)\) and \(\sup A(M)\) are defined with respect to the natural partial order (2), and so, by Proposition 2, they always exist as elements of \( {\bar{\mathbb{R }}}^{p}\).

Remark 3

In particular, if \(Q=\mathbb R _{+}^{m}=\left\{ y\in \mathbb R ^{m}:y\geqq 0\right\} \), then we can assume that \(p=m\) and \(A\) is the identity matrix. In this case, we have \(\inf A(M)=\inf M\) and \(\sup A(M)=\sup M\).
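
As a small illustration of (21)–(22) and of the infima and suprema of images \(A(M)\), consider the following sketch (assuming Python with numpy and a finite set \(M\); all names are illustrative):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # pointed polyhedral cone Q = {y : Ay >= 0}

def preceq(x, y, A):
    """Partial order (21)-(22): x ⪯ y iff A x <= A y componentwise."""
    return np.all(A @ x <= A @ y)

# inf and sup of the image A(M) for a finite set M in R^m (cf. Proposition 2).
M = np.array([[1.0,  2.0],
              [0.0, -1.0],
              [2.0,  1.0]])
AM = M @ A.T                            # rows are the vectors A y, y in M
print(preceq(np.array([0.0, 1.0]), np.array([1.0, 0.0]), A))   # True: (1,0)-(0,1) lies in Q
print(AM.min(axis=0), AM.max(axis=0))                          # inf A(M), sup A(M)
```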

5 Multiobjective optimization

Let \(X\), \(Y\) be real normed spaces. We shall deal with the following multiobjective optimization problem:

$$\begin{aligned} \min \{f(x):x\in S\}, \end{aligned}$$
(23)

where \(S\) is a nonempty subset of \(X\) defined by

$$\begin{aligned} S:=\left\{ x\in C:-g(x)\in D\right\} . \end{aligned}$$
(24)

We assume that \(f=(f_{1},{\ldots },f_{m}):X\rightarrow \mathbb R ^{m}\) is an arbitrary mapping, \(g:X\rightarrow Y\) is a continuous mapping, \(C\) is a nonempty closed subset of \(X\) and \(D\) is a closed convex cone in \(Y\) (hence \( S\) is closed). The minimization in (23) is understood with respect to the partial order defined by (21), where \(Q\) is a pointed polyhedral cone in \(\mathbb R ^{m}\) with nonempty interior.

We denote by \(\mathcal N (x)\) the collection of all neighborhoods of \(x\).

Definition 5

[7, 10]. Let \(\bar{x}\in S\).

  (a)

    We say that \(\bar{x}\) is a weakly local Pareto minimizer (or weakly local efficient solution) for (23) if there exists \(U\in \mathcal N (\bar{x})\) such that

    $$\begin{aligned} f(x)-f(\bar{x})\notin -\mathrm{int}\,Q \quad \text{for all } x\in S\cap U. \end{aligned}$$
    (25)
  (b)

    Let \(\nu \) be a positive integer. We say that \(\bar{x}\) is a strict local Pareto minimizer of order \(\nu \) (or strict local efficient solution of order \(\nu \)) for (23), if there exist \( \alpha >0\) and \(U\in \mathcal N (\bar{x})\) such that

    $$\begin{aligned} (f(x)+Q)\cap B(f(\bar{x}),\alpha \Vert x-\bar{x}\Vert ^{\nu })=\emptyset \quad \text{for all } x\in S\cap U\backslash \{\bar{x}\}. \end{aligned}$$
    (26)
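
Condition (25) can be checked on a finite sample of feasible points by means of the description (20) of \(\mathrm{int}\,Q\): when \(A\) has no zero rows, \(f(x)-f(\bar{x})\in -\mathrm{int}\,Q\) if and only if \(A(f(x)-f(\bar{x}))<0\) componentwise. The following sketch (assuming Python with numpy, \(X=\mathbb R \), and a user-supplied sample of \(S\cap U\)) implements this test; it provides numerical evidence only, not a proof of weak local minimality.

```python
import numpy as np

def is_weak_local_min(f, xbar, A, samples):
    """Grid check of condition (25): xbar passes the test if no sampled feasible
    point x satisfies f(x) - f(xbar) in -int Q, i.e. A (f(x) - f(xbar)) < 0
    componentwise (this uses the description (20) of int Q)."""
    for x in samples:
        if np.all(A @ (f(x) - f(xbar)) < 0):
            return False        # found a sampled feasible point that is strictly better
    return True

# Toy data: Q = R_+^2 (A = identity), f(x) = (x, x^2), sample of S ∩ U near xbar = 0.
A = np.eye(2)
f = lambda x: np.array([x, x * x])
samples = np.linspace(0.0, 0.1, 101)           # feasible points to the right of 0
print(is_weak_local_min(f, 0.0, A, samples))   # True: (x, x^2) is never < 0 componentwise
```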

Extending the definitions from [3] to vector-valued functions \( f:X\rightarrow \mathbb R ^{m}\), we now introduce the following lower and upper Ginchev derivatives for any point \(\bar{x}\in X\), any direction \(y\in X\backslash \{0\}\), and \(\nu =1,2,{\ldots }\):

$$\begin{aligned} f_{-}^{(0)}(\bar{x};y)&:= \liminf _{(t,u)\rightarrow (0^{+},y)}Af(\bar{x}+tu), \end{aligned}$$
(27)
$$\begin{aligned} f_{+}^{(0)}(\bar{x};y)&:= \limsup _{(t,u)\rightarrow (0^{+},y)}Af(\bar{x}+tu), \end{aligned}$$
(28)
$$\begin{aligned} f_{-}^{(\nu )}(\bar{x};y)&:= \liminf _{(t,u)\rightarrow (0^{+},y)}\frac{\nu !}{t^{\nu }}\left( Af(\bar{x}+tu)-\sum _{j=0}^{\nu -1}\frac{t^{j}}{j!} f_{-}^{(j)}(\bar{x};y)\right), \end{aligned}$$
(29)
$$\begin{aligned} f_{+}^{(\nu )}(\bar{x};y)&:= \limsup _{(t,u)\rightarrow (0^{+},y)}\frac{\nu ! }{t^{\nu }}\left( Af(\bar{x}+tu)-\sum _{j=0}^{\nu -1}\frac{t^{j}}{j!} f_{+}^{(j)}(\bar{x};y)\right), \end{aligned}$$
(30)

where the lower and upper limits are considered as elements of \({\bar{\mathbb{R }}}^{p}\) in the sense of Definition 3. More precisely, we have

$$\begin{aligned} f_{-}^{(0)}(\bar{x};y)&= \sup _{\delta >0}\left(\,\, \inf \limits _{\begin{array}{c} t\in (0,\delta ) \\ u\in B(y,\delta ) \end{array}}Af(\bar{x}+tu)\right) , \end{aligned}$$
(31)
$$\begin{aligned} f_{+}^{(0)}(\bar{x};y)&= \inf _{\delta >0}\left(\,\, \sup \limits _{\begin{array}{c} t\in (0,\delta ) \\ u\in B(y,\delta ) \end{array}}Af(\bar{x}+tu)\right) , \end{aligned}$$
(32)

and analogous representations hold for (29)–(30). We adopt the convention that the derivative \(f_{-}^{(\nu )}(\bar{x};y)\) (resp. \(f_{+}^{(\nu )}(\bar{x};y)\)) exists as an element of \({\bar{\mathbb{R }}}^{p}\) if and only if the derivatives \(f_{-}^{(j)}(\bar{x};y)\) (resp. \(f_{+}^{(j)}(\bar{x};y)\)) exist as elements of \(\mathbb R ^{p}\) for \(j=0,1,{\ldots },\nu -1\).

In particular, if \(Q=\mathbb R _{+}^{m}\), then by Remark 3, we have \( p=m\), and the matrix \(A\) can be deleted from formulae (27)–(30).

Note that the higher-order directional derivatives defined above do not require the existence of usual limits of any kind. Another possibility to avoid such a requirement in vector optimization is to use the Kuratowski upper limit set in the definition of a second-order directional derivative; see, e.g., [5, p. 21].

Applying Proposition 3, we can represent the limits (27) and (28) componentwise as follows:

$$\begin{aligned} f_{-}^{(0)}(\bar{x};y)&= \left( \liminf _{(t,u)\rightarrow (0^{+},y)}a_{1}^{T}f(\bar{x}+tu),{\ldots },\liminf _{(t,u)\rightarrow (0^{+},y)}a_{p}^{T}f(\bar{x}+tu)\right) ^{T}\!\!, \qquad \end{aligned}$$
(33)
$$\begin{aligned} f_{+}^{(0)}(\bar{x};y)&= \left( \limsup _{(t,u)\rightarrow (0^{+},y)}a_{1}^{T}f(\bar{x}+tu),{\ldots },\limsup _{(t,u)\rightarrow (0^{+},y)}a_{p}^{T}f(\bar{x}+tu)\right) ^{T}\!. \qquad \end{aligned}$$
(34)
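
The representations (31)–(34) suggest a simple grid-based estimate of the upper Ginchev derivatives when \(X=\mathbb R \). The sketch below (an illustration under the stated assumptions, not part of the development) estimates \(f_{+}^{(\nu )}(\bar{x};y)\) from user-supplied exact values of the lower-order derivatives \(f_{+}^{(j)}(\bar{x};y)\), \(j<\nu \); supplying these exactly avoids amplifying their estimation error by the factor \(\nu !/t^{\nu }\) appearing in (30). All names are illustrative.

```python
import numpy as np
from math import factorial

def ginchev_upper(f, A, xbar, y, nu, lower, delta=1e-3, n=200):
    """Grid estimate of the nu-th upper Ginchev derivative f_+^{(nu)}(xbar; y)
    of f: R -> R^m (formulas (28), (30), approximated via (32)).  `lower` is
    the list of the exact values f_+^{(j)}(xbar; y), j = 0,...,nu-1 (empty for
    nu = 0); the supremum over t in (0, delta), u in B(y, delta) is replaced by
    a maximum over an n x n grid, and delta should be taken small."""
    best = None
    for t in np.linspace(delta / n, delta, n):
        vals = np.array([A @ f(xbar + t * u)
                         for u in np.linspace(y - delta, y + delta, n)])
        for j, d in enumerate(lower):                      # subtract the Taylor-type terms
            vals = vals - (t ** j / factorial(j)) * np.asarray(d)
        if nu > 0:
            vals = vals * (factorial(nu) / t ** nu)
        m = vals.max(axis=0)                               # componentwise sup over u
        best = m if best is None else np.maximum(best, m)  # sup over t
    return best  # replacing max/maximum by min/minimum estimates the lower derivatives

# Sanity check on a smooth function: f(x) = (x^2, x^3), Q = R_+^2 (A = identity).
A = np.eye(2)
f = lambda x: np.array([x ** 2, x ** 3])
print(ginchev_upper(f, A, 0.0, 1.0, 0, []))                          # ~ (0, 0)
print(ginchev_upper(f, A, 0.0, 1.0, 1, [np.zeros(2)]))               # ~ (0, 0)
print(ginchev_upper(f, A, 0.0, 1.0, 2, [np.zeros(2), np.zeros(2)]))  # ~ (2, 0)
```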

We denote by \(K(S,\bar{x})\) the contingent cone to \(S\) at \(\bar{x}\):

$$\begin{aligned} K(S,\bar{x}):=\{y\in X:\exists t_{k}\rightarrow 0^{+},~y_{k}\rightarrow y \ \text{such that}\ \bar{x}+t_{k}y_{k}\in S~(\forall k)\}. \end{aligned}$$
(35)

For the function \(g\) appearing in (24), we define

$$\begin{aligned} dg(\bar{x};y):=\lim _{(t,v)\rightarrow (0^{+},y)}\frac{g(\bar{x}+tv)-g(\bar{x})}{t}, \end{aligned}$$
(36)

whenever this limit exists.

6 Necessary optimality conditions

The following theorem presents necessary conditions for weakly local Pareto minimizers in problem (23)–(24). It is a modification of [9, Theorem 3.1]. While the author of [9] assumes the existence of the following Ginchev derivatives of Hadamard type:

$$\begin{aligned} f^{(0)}(\bar{x};y)&:= \lim _{(t,u)\rightarrow (0^{+},y)}f(\bar{x}+tu), \end{aligned}$$
(37)
$$\begin{aligned} f^{(\nu )}(\bar{x};y)&:= \lim _{(t,u)\rightarrow (0^{+},y)}\frac{\nu !}{ t^{\nu }}\left( f(\bar{x}+tu)-\sum _{j=0}^{\nu -1}\frac{t^{j}}{j!}f^{(j)}( \bar{x};y)\right), \end{aligned}$$
(38)

we use the considerably weaker assumption that the upper derivatives (28) and (30) exist. On the other hand, we assume that the ordering cone \(Q\) is polyhedral, an assumption not made in [9].

Theorem 1

Suppose that \(\mathrm{int}\,D\ne \emptyset \) and the space \(\mathbb R ^{m}\) is partially ordered by a pointed polyhedral cone \(Q\) with \(\mathrm{int}\,Q\ne \emptyset \), such that the corresponding matrix \(A\) (see (16)) has no zero rows. Let \(\bar{x}\) be a weakly local Pareto minimizer for problem (23)–(24). Assume that \(dg(\bar{x};y)\) exists for all \(y\in K(C,\bar{x})\) and that, for each \(y\in K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm{int}\,D\right\} \), the upper Ginchev derivatives \(f_{+}^{(j)}(\bar{x};y)\), \(j=0,1,{\ldots },\nu \), exist. Then the following optimality conditions hold:

  (i)

    \(f_{+}^{(0)}(\bar{x};y)-Af(\bar{x})\nless 0\),   for all   \(y\in K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm{int}\,D\right\} \).

  (ii)

    Let \(\nu \geqq 1\). If for some \(y\in K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm{int}\,D\right\} \), we have

    $$\begin{aligned} f_{+}^{(0)}(\bar{x};y)=Af(\bar{x}),~f_{+}^{(j)} (\bar{x};y)=0,~j=0,1,{\ldots },\nu -1, \end{aligned}$$
    (39)

    then \(f_{+}^{(\nu )}(\bar{x};y)\nless 0\).

Proof

  (i)

    Since \(\bar{x}\) is a weakly local Pareto minimizer for (23)–(24), there exists \(U\in \mathcal N (\bar{x})\) such that (25) holds, which is equivalent to

    $$\begin{aligned} f(x)-f(\bar{x})\in -(\mathbb R ^{m}\backslash \mathrm{int}\,Q) \quad \text{for all } x\in S\cap U. \end{aligned}$$
    (40)

    For \(y\in K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm int D\right\} \), there exist sequences \(t_{k}\rightarrow 0^{+}\) and \(y_{k}\rightarrow y\) such that

    $$\begin{aligned} \bar{x}+t_{k}y_{k}\in C \quad \text{for all } k. \end{aligned}$$
    (41)

    Since \(dg(\bar{x};y)\) exists, we have

    $$\begin{aligned} dg(\bar{x};y)=\lim _{k\rightarrow \infty }\frac{g(\bar{x}+t_{k}y_{k})-g(\bar{x })}{t_{k}}. \end{aligned}$$
    (42)

    For sufficiently large \(k\), it follows from (41), (42), and \(dg(\bar{x};y)\in -\mathrm{int}\,D\) that

    $$\begin{aligned} \bar{x}+t_{k}y_{k}\in C\cap U \end{aligned}$$
    (43)

    and

    $$\begin{aligned} \frac{g(\bar{x}+t_{k}y_{k})-g(\bar{x})}{t_{k}}\in -D. \end{aligned}$$
    (44)

    Condition (44) and the convexity of \(D\) yield

    $$\begin{aligned} g(\bar{x}+t_{k}y_{k})\in g(\bar{x})-D\subset -D-D\subset -D. \end{aligned}$$
    (45)

    Therefore, from (43) and (45), we have that

    $$\begin{aligned} \bar{x}+t_{k}y_{k}\in S\cap U. \end{aligned}$$
    (46)

    Making use of (40) and (46), we obtain

    $$\begin{aligned} f(\bar{x}+t_{k}y_{k})-f(\bar{x})\in -(\mathbb R ^{m}\backslash \mathrm{int}\,Q). \end{aligned}$$
    (47)

    By Proposition 5, formula (20) is valid, therefore

    $$\begin{aligned} \mathbb R ^{m}\backslash \mathrm{int}\,Q=\left\{ y:a_{i}^{T}y\leqq 0 \ \text{for some } i\in I\right\} . \end{aligned}$$
    (48)

    Conditions (47) and (48) imply that, for each \(k\), there exists an index \(i(k)\in I\) satisfying

    $$\begin{aligned} a_{i(k)}^{T}(f(\bar{x}+t_{k}y_{k})-f(\bar{x}))\geqq 0. \end{aligned}$$

    By choosing an appropriate subsequence of \(\{k\}\), we may assume that the sequence \(\{i(k)\}\) is constant. In other words, there exists an index \(l\in I\) such that

    $$\begin{aligned} a_{l}^{T}(f(\bar{x}+t_{k}y_{k})-f(\bar{x}))\geqq 0 \quad \text{for all } k. \end{aligned}$$
    (49)

    Using the convergence conditions \(t_{k}\rightarrow 0^{+}\) and \( y_{k}\rightarrow y\), we deduce from (49) that, for each \(\delta >0\),

    $$\begin{aligned} \sup \limits _{\begin{array}{c} t\in (0,\delta ) \\ u\in B(y,\delta ) \end{array}}a_{l}^{T}f(\bar{x}+tu)-a_{l}^{T}f(\bar{x})\geqq 0. \end{aligned}$$
    (50)

    Therefore, by (11), we have

    $$\begin{aligned} \limsup _{(t,u)\rightarrow (0^{+},y)}a_{l}^{T}f(\bar{x}+tu)-a_{l}^{T}f(\bar{x} )\geqq 0. \end{aligned}$$
    (51)

    Now observe that, by (34), the left-hand side of (51) is equal to the \(l\)-th component of \(f_{+}^{(0)}(\bar{x};y)-Af(\bar{x})\), which shows that

    $$\begin{aligned} f_{+}^{(0)}(\bar{x};y)-Af(\bar{x})\nless 0. \end{aligned}$$
  (ii)

    Using definition (30) and assumptions (39), it is easy to compute:

    $$\begin{aligned} f_{+}^{(\nu )}(\bar{x};y)=\limsup _{(t,u)\rightarrow (0^{+},y)}\frac{\nu !}{ t^{\nu }}\left( Af(\bar{x}+tu)-Af(\bar{x})\right) . \end{aligned}$$
    (52)

    The rest of the proof is almost the same as in part (i). The only difference is that we multiply (49) by \(\nu !/t_{k}^{\nu }\) to get

    $$\begin{aligned} \frac{\nu !}{t_{k}^{\nu }}a_{l}^{T}(f(\bar{x}+t_{k}y_{k})-f(\bar{x}))\geqq 0 \quad \text{for all } k. \end{aligned}$$
    (53)

    Consequently, instead of (50) and (51), we have, respectively,

    $$\begin{aligned} \sup \limits _{\begin{array}{c} t\in (0,\delta ) \\ u\in B(y,\delta ) \end{array}}\frac{\nu !}{t^{\nu }}a_{l}^{T}(f(\bar{x}+tu)-f(\bar{x}))\geqq 0 \end{aligned}$$

    and

    $$\begin{aligned} \limsup _{(t,u)\rightarrow (0^{+},y)}\frac{\nu !}{t^{\nu }}a_{l}^{T}(f(\bar{x} +tu)-f(\bar{x}))\geqq 0. \end{aligned}$$
    (54)

    Now by (14), the left-hand side of (54) is equal to the \(l\)-th component of (52). Hence, inequality (54) means that \( f_{+}^{(\nu )}(\bar{x};y)\nless 0\).

\(\square \)

We now give an example to illustrate Theorem 1.

Example 1

Let \(f:\mathbb R \rightarrow \mathbb R ^{2}\) and \(g:\mathbb R \rightarrow \mathbb R \) be given by

$$\begin{aligned} f(x):=\left\{ \begin{array}{lll} (-\left|x\right|,-\left|x\right|)^{T}&\text{ if}&x\in \mathbb Q , \\ (\left|x\right|,\left|x\right|)^{T}&\text{ if}&x\in \mathbb R \backslash \mathbb Q , \end{array} \right. \end{aligned}$$

where \(\mathbb Q \) stands for the set of rational numbers,

$$\begin{aligned} g(x):=-x. \end{aligned}$$

Let \(C=\mathbb R \), \(D=\mathbb R _{+}\), \(\bar{x}=0\) and \(Q=\left\{ y\in \mathbb R ^{2}:Ay\geqq 0\right\} \), where \(A=\left[ \begin{array}{cc} 0&-1 \\ 1&0 \end{array} \right] .\) Hence \(Q=\left\{ y\in \mathbb R ^{2}:y_{1}\geqq 0,~y_{2}\leqq 0\right\} \),

$$\begin{aligned} S=\left\{ x\in \mathbb R :-g(x)=x\in \mathbb R _{+}\right\} =\mathbb R _{+} \end{aligned}$$

and \(K(C,\bar{x})=\mathbb R \). Since \(f(\bar{x})=(0,0)^{T}\), we have \(f(x)-f(\bar{x})\notin -\mathrm{int}\,Q\) for all \(x\in S\); hence \(\bar{x}=0\) is a weakly local Pareto minimizer for problem (23)–(24). The derivative \(dg(0;u)\) exists for all \(u\in \mathbb R \), and \(dg(0;u)=-u<0\) for \(u>0\). Therefore, \(K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm{int}\,\mathbb R _{+}\right\} =\mathbb R _{+}\backslash \left\{ 0\right\} \). Let \(\nu =1\); then, for any \(y\in K(C,\bar{x})\cap \left\{ u:-dg(\bar{x};u)\in \mathrm{int}\,\mathbb R _{+}\right\} \),

$$\begin{aligned} f_{+}^{(0)}(\bar{x};y)&=\limsup _{(t,u)\rightarrow (0^{+},y)}Af(\bar{x}+tu) \\&=\left( \limsup _{(t,u)\rightarrow (0^{+},y)}-f_{2}(tu),\limsup _{(t,u)\rightarrow (0^{+},y)}f_{1}(tu)\right) ^{T}=(0,0)^{T}, \end{aligned}$$

which leads to \(f_{+}^{(0)}(\bar{x};y)-Af(\bar{x})\nless 0\), hence condition (i) of Theorem 1 is satisfied. Moreover,

$$\begin{aligned} f_{+}^{(1)}(\bar{x};y)&=\limsup _{(t,u)\rightarrow (0^{+},y)}\frac{1}{t}Af( \bar{x}+tu) \\&=\left( \limsup _{(t,u)\rightarrow (0^{+},y)}-\frac{1}{t} f_{2}(tu),\limsup _{(t,u)\rightarrow (0^{+},y)}\frac{1}{t}f_{1}(tu)\right) ^{T}=(\left|y\right|,\left|y\right|)^{T}, \end{aligned}$$

hence \(f_{+}^{(1)}(\bar{x};y)\nless 0\), and condition (ii) of Theorem 1 is satisfied for \(\nu =1.\)

Let us note that we cannot apply Theorem 3.1 of [9] to Example 1 because the derivatives (37)–(38) of \(f\) do not exist.
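
As a rough numerical cross-check of Example 1 (not part of the argument above), observe that every neighbourhood of a point contains both rational and irrational numbers, so the inner supremum in (32) can be computed from the componentwise maximum of \(Af\) over the two branches of \(f\). The following sketch, assuming Python with numpy, approximately reproduces \(f_{+}^{(0)}(0;y)=(0,0)^{T}\) and \(f_{+}^{(1)}(0;y)=(\left|y\right|,\left|y\right|)^{T}\) for \(y=1\); all names are illustrative.

```python
import numpy as np

# Both rational and irrational arguments occur in every neighbourhood, so the
# supremum of each a_i^T f over a small window equals the supremum of the
# componentwise maximum of A f over the two branches of f.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def branch_max(x):
    """Componentwise maximum of A f(x) over the two branches of f."""
    fq = np.array([-abs(x), -abs(x)])          # value taken on rationals
    fi = np.array([ abs(x),  abs(x)])          # value taken on irrationals
    return np.maximum(A @ fq, A @ fi)

def upper_estimate(nu, y, delta=1e-4, n=200):
    """Grid estimate of f_+^{(nu)}(0; y) for nu = 0, 1 via (32); for nu = 1
    nothing has to be subtracted, since f_+^{(0)}(0; y) = (0, 0)^T."""
    best = np.full(2, -np.inf)
    for t in np.linspace(delta / n, delta, n):
        for u in np.linspace(y - delta, y + delta, n):
            best = np.maximum(best, branch_max(t * u) / t ** nu)
    return best

print(upper_estimate(0, 1.0))   # ~ (0, 0) = f_+^{(0)}(0; 1)
print(upper_estimate(1, 1.0))   # ~ (1, 1) = f_+^{(1)}(0; 1) = (|y|, |y|)^T at y = 1
```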

7 Sufficient optimality conditions

In the next theorem we shall use the following notation for the closure of the cone generated by \(D+g(\bar{x})\):

$$\begin{aligned} D_{g(\bar{x})}:=\mathrm{cl}\,\mathrm{cone}(D+g(\bar{x})). \end{aligned}$$

Theorem 2

Suppose that \(\dim X<\infty \). Let \(\bar{x}\) be a feasible point for problem (23)–(24) and let \(dg(\bar{x};y)\) exist for all \(y\in K(C,\bar{x})\backslash \{0\}\). Assume that there is a positive integer \(\nu \) such that, for each \(y\in K(C,\bar{x})\cap \left\{ u:dg(\bar{x};u)\in -D_{g(\bar{x})}\right\} \backslash \{0\}\), the lower Ginchev derivatives \(f_{-}^{(j)}(\bar{x};y)\), \(j=0,1,{\ldots },\nu \), exist and one of the following conditions \((A_{k})\) \((k=1,{\ldots },\nu )\) holds:

$$\begin{aligned} (A_{k})\quad f_{-}^{(0)}(\bar{x};y)=Af(\bar{x}),\quad f_{-}^{(j)}(\bar{x};y)=0,\ \ j=0,1,{\ldots },k-1,\quad f_{-}^{(k)}(\bar{x};y)\nleqq 0. \end{aligned}$$

Then \(\bar{x}\) is a strict local Pareto minimizer of order \(\nu \) for problem (23)–(24).

Proof

Suppose, contrary to the conclusion, that condition \((A_{k})\) holds for some fixed \(k\in \{1,{\ldots },\nu \}\) but \(\bar{x}\) is not a strict local Pareto minimizer of order \(\nu \) for problem (23)–(24). By the negation of (26), there exist sequences \(x_{n}\in S\), \(x_{n}\ne \bar{x}\), \(x_{n}\rightarrow \bar{x}\), and \(b_{n}\in Q\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{f(x_{n})-f(\bar{x})+b_{n}}{\Vert x_{n}- \bar{x}\Vert ^{\nu }}=0 \end{aligned}$$
(55)

(see Proposition 3.4 in [7]). Putting \(y_{n}=\frac{x_{n}-\bar{x}}{ \Vert x_{n}-\bar{x}\Vert }\) and \(t_{n}=\Vert x_{n}-\bar{x} \Vert ,\) we get that \(t_{n}\rightarrow 0^{+}\) and \(x_{n}=\bar{x} +t_{n}y_{n}\in S\subset C.\) We may assume, by choosing a subsequence if necessary, that \(y_{n}\) converges to some vector \(y\) with \(\Vert y\Vert =1\). Hence \(y\in K(C,\bar{x})\backslash \left\{ 0\right\} .\) Since \(dg(\bar{x};y)\) exists, it must satisfy

$$\begin{aligned} dg(\bar{x};y)=\lim _{n\rightarrow \infty }\frac{g(\bar{x}+t_{n}y_{n})-g(\bar{x })}{t_{n}}. \end{aligned}$$
(56)

Moreover, \(g(\bar{x}+t_{n}y_{n})=g(x_{n})\in -D,\) and consequently,

$$\begin{aligned} \frac{g(\bar{x}+t_{n}y_{n})-g(\bar{x})}{t_{n}}\in \mathrm{cone}(-D-g(\bar{x}))\subset -D_{g(\bar{x})} \quad \text{for all } n. \end{aligned}$$
(57)

Conditions (56), (57), \(y\in K(C,\bar{x})\backslash \left\{ 0\right\} \), and the closedness of \(D_{g(\bar{x})}\) imply that

$$\begin{aligned} y\in K(C,\bar{x})\cap \left\{ u:dg(\bar{x};u)\in -D_{g(\bar{x})}\right\} . \end{aligned}$$

Since \(k\leqq \nu \), it follows from (55) that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{f(\bar{x}+t_{n}y_{n})-f(\bar{x})+b_{n}}{ \Vert x_{n}-\bar{x}\Vert ^{k}}=0. \end{aligned}$$
(58)

By condition \((A_{k})\), we have \(f_{-}^{(0)}(\bar{x};y)=Af(\bar{x})\), \( f_{-}^{(j)}(\bar{x};y)=0\), \(j=0,1,{\ldots },k-1\), hence

$$\begin{aligned} f_{-}^{(k)}(\bar{x};y)=\liminf _{(t,u)\rightarrow (0^{+},y)}\frac{k!}{t^{k}} \left( Af(\bar{x}+tu)-Af(\bar{x})\right). \end{aligned}$$
(59)

Now, using (13) and (59), we obtain that, for each \(i\in I\), the \(i\)-th component of \(f_{-}^{(k)}(\bar{x};y)\) is equal to

$$\begin{aligned} \left( f_{-}^{(k)}(\bar{x};y)\right) _{i}&= \liminf _{(t,u)\rightarrow (0^{+},y)}\frac{k!}{t^{k}}\left( a_{i}^{T}f(\bar{x}+tu)-a_{i}^{T}f(\bar{x} )\right) \nonumber \\&\leqq \liminf _{n\rightarrow \infty }\frac{k!}{t_{n}^{k}}\left( a_{i}^{T}f(\bar{x}+t_{n}y_{n})-a_{i}^{T}f(\bar{x})\right) \nonumber \\&\leqq \liminf _{n\rightarrow \infty } k!\left(\frac{a_{i}^{T}f(\bar{x} +t_{n}y_{n})-a_{i}^{T}f(\bar{x})+a_{i}^{T}b_{n}}{t_{n}^{k}}-\frac{ a_{i}^{T}b_{n}}{t_{n}^{k}}\right).\qquad \quad \end{aligned}$$
(60)

But (58) yields

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{a_{i}^{T}f(\bar{x}+t_{n}y_{n})-a_{i}^{T}f( \bar{x})+a_{i}^{T}b_{n}}{t_{n}^{k}}=0. \end{aligned}$$
(61)

Since \(b_{n}\in Q\), it follows from (18) that \(a_{i}^{T}b_{n}\geqq 0\), hence

$$\begin{aligned} \liminf _{n\rightarrow \infty }~k!\left( -\frac{a_{i}^{T}b_{n}}{t_{n}^{k}} \right) \leqq 0. \end{aligned}$$
(62)

We obtain from (60)–(62) that \(\left( f_{-}^{(k)}(\bar{x} ;y)\right) _{i}\leqq 0\). Since \(i\) is arbitrary, we have thus verified that \( f_{-}^{(k)}(\bar{x};y)\leqq 0\), which is in contradiction to condition \( (A_{k})\). \(\square \)

The following example illustrates Theorem 2.

Example 2

Let \(f:\mathbb R \rightarrow \mathbb R ^{2}\) and \(g:\mathbb R \rightarrow \mathbb R \) be given by

$$\begin{aligned} f(x):=\left\{ \begin{array}{lll} \left( x\sin \frac{1}{x},x\left( 3-\sin \frac{1}{x}\right) \right) ^{T}&\text{ if}&x\ne 0, \\ [2mm] (0,0)^{T}&\text{ if}&x=0, \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} g(x):=-\left|x\right|. \end{aligned}$$

Let \(D=\mathbb R _{+}\) and \(C=\mathbb R _{+}\); hence the feasible set is given by

$$\begin{aligned} S=\left\{ x\in \mathbb R :-g(x)=\left|x\right|\in \mathbb R _{+},\ x\in \mathbb R _{+}\right\} =\mathbb R _{+}. \end{aligned}$$

Let \(\bar{x}=0\). For all \(y\in \mathbb R \), we have that \(dg(0;y)=-\Vert y\Vert \) exists and \(dg(0;y)\in -D_{g(0)}=-\mathbb R _{+}\). It is clear that \(K(\mathbb R _{+},0)=\mathbb R _{+}\), hence

$$\begin{aligned} K(\mathbb R _{+},0)\cap \left\{ u\in \mathbb R :dg(0;u)\in -D_{g(0)}\right\} =\mathbb R _{+}. \end{aligned}$$

Now, let \(Q=\left\{ y\in \mathbb R ^{2}:Ay\geqq 0\right\} \) where \(A=\left[ \begin{array}{cc} -1&1 \\ 1&1 \end{array} \right] \). Hence

$$\begin{aligned} Q=\left\{ (y_{1},y_{2})\in \mathbb R ^{2}:-y_{1}+y_{2}\geqq 0,~y_{1}+y_{2}\geqq 0\right\} . \end{aligned}$$

Note that, for any \((y_{1},y_{2})\in Q\), we have \(y_{2}\geqq 0\). We have, for all \(y\in \mathbb R _{+}\backslash \{0\}\),

$$\begin{aligned} f_{-}^{(0)}(0;y)&= \liminf _{(t,u)\rightarrow (0^{+},y)}Af(0+tu)=\liminf _{(t,u)\rightarrow (0^{+},y)}\left( tu\left(3-2\sin \frac{1}{tu}\right) ,3tu\right) ^{T} \\&= (0,0)^{T},\\ f_{-}^{(1)}(0;y)&= \liminf _{(t,u)\rightarrow (0^{+},y)}\frac{1}{t} Af(0+tu)=\liminf _{(t,u)\rightarrow (0^{+},y)}\left( u\left( 3-2\sin \frac{1}{tu}\right) ,3u\right) ^{T} \\&= (y,3y)^{T}>(0,0)^{T}. \end{aligned}$$

By Theorem 2, the point \(\bar{x}=0\) is a strict local Pareto minimizer of order one (with respect to the polyhedral cone \(Q\)) for problem (23)–(24).
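
A rough numerical cross-check of the above computations (only a grid approximation, assuming Python with numpy and \(y=1\); not a proof) is sketched below; it approximately reproduces \(f_{-}^{(0)}(0;y)=(0,0)^{T}\) and \(f_{-}^{(1)}(0;y)=(y,3y)^{T}\).

```python
import numpy as np

A = np.array([[-1.0, 1.0],
              [ 1.0, 1.0]])

def f(x):
    # f from Example 2; the branch x = 0 is never sampled below.
    return np.array([x * np.sin(1.0 / x), x * (3.0 - np.sin(1.0 / x))])

def lower_estimate(nu, y, delta=1e-4, n=200):
    """Grid estimate of f_-^{(nu)}(0; y) for nu = 0, 1 via (31); for nu = 1
    nothing has to be subtracted, since f_-^{(0)}(0; y) = (0, 0)^T."""
    worst = np.full(2, np.inf)
    for t in np.linspace(delta / n, delta, n):
        for u in np.linspace(y - delta, y + delta, n):
            worst = np.minimum(worst, (A @ f(t * u)) / t ** nu)
    return worst

print(lower_estimate(0, 1.0))   # ~ (0, 0) = f_-^{(0)}(0; 1)
print(lower_estimate(1, 1.0))   # ~ (1, 3) = f_-^{(1)}(0; 1) = (y, 3y)^T at y = 1
```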

The same conclusion can be verified by Definition 5(b). Indeed, let us take \(\alpha =3/2\). Then, for each \((y_{1},y_{2})\in Q\), we have

$$\begin{aligned}&\Vert f(x)+(y_{1},y_{2})^{T}\Vert \geqq \left|f_{2}(x)+y_{2}\right|\geqq x\left( 3-\sin \frac{1}{x}\right) >\alpha x,\\&\quad \forall x\in S\backslash \{0\}=\mathbb R _{+}\backslash \{0\}, \end{aligned}$$

hence condition (26) holds with \(\nu =1\) and \(U=\mathbb R \), which means that \(\bar{x}=0\) is a strict (global) Pareto minimizer of order one for problem (23)–(24).

It is not difficult to see that we cannot apply Theorem 4.1 of [9] to Example 2 because the derivative \(f^{(1)}(0;y)\) defined by (38) does not exist.