1 Introduction

Quasidifferential calculus was developed by Demyanov and Rubinov (1980) and was studied in more detail in Demyanov and Rubinov (1986). Since then, it has been developed extensively (see, for example, Craven 1986, 2000; Eppler and Luderer 1987; Demyanov and Rubinov 1981; Gao 2000a, b; Glover 1992; Kuntz and Scholtes 1993; Luderer and Rosiger 1990; Polyakova 1986; Shapiro 1984; Uderzo 2002; Ward 1991; Xia et al. 2005; Yin and Zhang 1998, and others). This follows from the fact that quasidifferential calculus plays an important role in nonsmooth analysis and optimization. Namely, the concept of quasidifferentiability can be employed to study a wide range of theoretical and practical issues in many fields, for instance, in nonsmooth analysis, economics, optimal control theory, engineering, mechanics, etc. (see Demyanov et al. 1986, 1996; Stavroulakis et al. 1995, and others). Further, the class of quasidifferentiable functions is fairly broad. It contains not only convex, concave, and differentiable functions but also convex–concave, D.C. (i.e., difference of two convex), maximum, and other functions. In addition, it even includes some functions which are not locally Lipschitz continuous.

In most of the above-mentioned works, only necessary optimality conditions have been established for quasidifferentiable optimization problems (see, for example, Demyanov 1986; Kuntz and Scholtes 1993; Luderer and Rosiger 1990; Shapiro 1984; Uderzo 2002; Ward 1991; Xia et al. 2005, and others). It is possible to find sufficient optimality conditions and duality results in some of the above-mentioned papers, but they have been established under the assumption that the objective and constraint functions are directionally differentiable (see, for example, Craven 1986, 2000; Demyanov 1986, and others). In this paper, our approach to proving the sufficiency of the Karush–Kuhn–Tucker necessary optimality conditions and duality results for the considered quasidifferentiable optimization problem differs from those mentioned above, in which directionally differentiable generalized convex functions have been used. We define a new concept of generalized convexity, namely, we introduce the concept of r-invexity with respect to a convex compact set. Then, we prove several conditions for a quasidifferentiable function to be r-invex with respect to a convex compact set. However, the main purpose of this article is to prove sufficient optimality conditions of the Lagrange multiplier type and various duality results in the sense of Mond–Weir for a new class of nonconvex quasidifferentiable optimization problems with inequality constraints. In establishing the results mentioned above, we assume that the functions involved in the considered nonconvex nondifferentiable optimization problem are quasidifferentiable r-invex with respect to the same function \(\eta \) and with respect to convex compact sets which are equal to the Minkowski sum of their subdifferentials and superdifferentials. We illustrate the sufficient optimality conditions established in the paper by an example of a nonconvex nonsmooth optimization problem with quasidifferentiable r-invex functions with respect to such convex compact sets and with respect to the same function \(\eta \). Moreover, we also illustrate the fact that the Lagrange multipliers may not be constant for such nonconvex nonsmooth optimization problems.

The paper is organized as follows. In Sect. 2, we recall the definition of a scalar quasidifferentiable function and its fundamental property. We introduce a new concept of generalized convexity, namely, we give the definition of an r-invex function with respect to a convex compact set. Further, we prove several conditions for a quasidifferentiable function to be r-invex with respect to a convex compact set. In Sect. 3, we formulate the quasidifferentiable optimization problem that we deal with throughout this paper. Further, we prove the sufficiency of the Karush–Kuhn–Tucker type necessary optimality conditions under the assumption that the functions constituting the considered quasidifferentiable optimization problem are r-invex with respect to the same function \(\eta \) and with respect to convex compact sets which are equal to the Minkowski sum of their subdifferentials and superdifferentials. The results established in this section are illustrated by an example of a nonconvex quasidifferentiable optimization problem with quasidifferentiable r-invex functions with respect to such convex compact sets and with respect to the same function \(\eta \). Further, in Sect. 4, for the considered quasidifferentiable optimization problem, we define its dual problem in the sense of Mond–Weir and prove several duality theorems, also using the concept of quasidifferentiable r-invexity with respect to a convex compact set.

2 Preliminaries

In this section, we provide some definitions that we shall use in the sequel.

Definition 2.1

We say that a mapping f : \(\mathbb {R}^{n} \rightarrow \mathbb {R}\) is directionally differentiable at \(\bar{{x}} \in \mathbb {R}^{n}\) in a direction d \(\in \mathbb {R}^{n}\) if the limit

$$\begin{aligned} {{f^{\prime }}}(\bar{{x}};{d})=\mathop {\lim }\limits _{{t}\downarrow 0} \frac{{f}(\bar{{x}}+{td})-{f}(\bar{{x}})}{{t}}\; \end{aligned}$$

exists and is finite. We say that f is directionally differentiable or semi-differentiable at \(\bar{{x}}\) if its directional derivative \({f}^{\prime }(\bar{{x}};{d})\) exists and is finite for all \({d} \in \mathbb {R}^{n}\).

Definition 2.2

(Demyanov and Rubinov 1981) A real-valued function f : \(\mathbb {R}^{n}\) \(\rightarrow \mathbb {R}\) is said to be quasidifferentiable at \(\bar{{x}}\) \(\in \mathbb {R}^{n}\) if f is directionally differentiable at \(\bar{{x}}\) and there exists an ordered pair of convex compact sets \({D}_{f} (\bar{{x}})=[\underline{\partial }{f}(\bar{{x}}),\overline{\partial }{f}(\bar{{x}})]\) such that

$$\begin{aligned} {{f^{\prime }}}(\bar{{x}};{d})=\mathop {\max }\limits _{{{v}}\in \underline{\partial }{f}(\bar{{x}})} {v}^{T}{d}+\;\mathop {\min }\limits _{{w}\in \overline{\partial }{f}(\bar{{x}})} {w}^{T}{d}, \end{aligned}$$
(1)

where \(\underline{\partial }{f}(\bar{{x}})\) and \(\overline{\partial }{f}(\bar{{x}})\) are called the subdifferential and the superdifferential of f at \(\bar{{x}}\), respectively. Further, the ordered pair of sets \({D}_{f} (\bar{{x}})=[\underline{\partial }{f}(\bar{{x}}),\overline{\partial }{f}(\bar{{x}})]\) is called a quasidifferential of the function f at \(\bar{{x}}\).

Let us note that the pair of sets constituting a quasidifferential of a function f at a certain point \(\bar{x}\) is not unique, because if \({D}_{f} (\bar{{x}})=[\underline{\partial }{f}(\bar{{ x}}),\overline{\partial }{f}(\bar{{x}})]\) is a quasidifferential of f at \(\bar{{x}}\), then, for any nonempty compact convex set V, the ordered pair of sets \([\underline{\partial }{f}(\bar{{x}})+{V},\overline{\partial }{f}(\bar{{x}})-{V}]\) is also a quasidifferential of f at \(\bar{{x}}\).
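To make formula (1) concrete, the following small numerical sketch (not part of the original development; the D.C. function and the pair of sets used below are our own illustrative assumptions) compares a forward difference quotient approximating \(f^{\prime }(\bar{x};d)\) with the max-plus-min representation (1) evaluated over the vertices of the chosen polytopes.

```python
import numpy as np

# Illustrative sketch only: for the D.C. function f(x) = |x_1| - |x_2| one
# admissible quasidifferential at the origin is the pair
#   subdifferential  = co{(-1, 0), (1, 0)},  superdifferential = co{(0, -1), (0, 1)}.
def f(x):
    return abs(x[0]) - abs(x[1])

sub = np.array([[-1.0, 0.0], [1.0, 0.0]])   # vertices of the subdifferential
sup = np.array([[0.0, -1.0], [0.0, 1.0]])   # vertices of the superdifferential

def directional_derivative(x, d, t=1e-7):
    """Forward difference quotient approximating f'(x; d)."""
    return (f(x + t * d) - f(x)) / t

def quasidiff_value(d):
    """Right-hand side of (1); for polytopes both extrema are attained at vertices."""
    return np.max(sub @ d) + np.min(sup @ d)

rng = np.random.default_rng(0)
x_bar = np.zeros(2)
for _ in range(5):
    d = rng.standard_normal(2)
    print(round(directional_derivative(x_bar, d), 6), round(quasidiff_value(d), 6))
```

Replacing the pair by \([\underline{\partial }f(\bar{x})+V,\;\overline{\partial }f(\bar{x})-V]\) for a compact convex set V leaves the printed values unchanged, in accordance with the non-uniqueness noted above.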

Now, we introduce the concept of r-invexity with respect to a convex compact set.

Definition 2.3

Let f : \(\mathbb {R}^{n} \rightarrow \mathbb {R}\) be a real-valued function, \(\bar{{x}} \in \mathbb {R}^{n}\) and \({S}_{{f}}(\bar{{x}})\) be a nonempty convex compact subset of \(\mathbb {R}^{n}\). If there exist a vector-valued function \(\eta \) : \(\mathbb {R}^{{n}} \times \mathbb {R}^{{n}} \rightarrow \mathbb {R}^{n}\) and a scalar r such that the inequality

$$\begin{aligned} \frac{1}{{r}}{e}^{{rf}({x})}&\ge \frac{1}{{r}}{e}^{{rf}(\bar{{x}})}[1+{r}\omega ^{T}\eta ({x},\bar{{x}})], \quad {\mathrm{if}} \quad {{r}\ne 0,} \nonumber \\ {f(x)}&\ge {f}(\bar{{x}})+\omega ^{T}\eta ({x},\bar{{x}}), \quad {\mathrm{if}} \quad {{r=0,}} \end{aligned}$$
(2)

holds for all x \(\in \mathbb {R}^{n}\) and for all \(\omega \in \) \({S}_{{f}}(\bar{{x}})\), then f is said to be an r-invex function at \(\bar{{x}}\) on \(\mathbb {R}^{n}\) with respect to \({S}_{{f}}(\bar{{x}})\) and with respect to \(\eta \).

If the inequality (2) is strict for all x \(\in \mathbb {R}^{n}\), x \(\ne \bar{{x}}\), then f is said to be a strictly r-invex function at \(\bar{{x}}\) on \(\mathbb {R}^{n}\) with respect to \({S}_{{f}}(\bar{{x}})\) and with respect to \(\eta \).

If, for each \(\bar{{x}} \in \mathbb {R}^{n}\), there exists a convex compact subset \({S}_{{f}}(\bar{{x}})\) of \(\mathbb {R}^{n}\) such that the inequality (2) is satisfied at each \(\bar{{x}}\) with respect to the same function \(\eta \), then f is said to be an r-invex function on \(\mathbb {R}^{n}\) with respect to convex compact sets \({S}_{{f}}(\bar{{x}})\) and with respect to \(\eta \).

If the inequality (2) is satisfied for all x \(\in \) X, where X is a nonempty subset of \(\mathbb {R}^{n}\), then f is r-invex at \(\bar{{x}}\) on X with respect to the convex compact set \({S}_{{f}}(\bar{{x}})\) and with respect to \(\eta \).
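Inequality (2) can be checked pointwise once f, \({S}_{{f}}(\bar{x})\), \(\eta \) and r are fixed. The following sketch is only an illustration under our own assumptions, not an example taken from the paper: for the differentiable convex function \(f(x)=\Vert x\Vert ^{2}\) one may take \({S}_{f}(\bar{x})=\{\nabla f(\bar{x})\}=\{2\bar{x}\}\), \(\eta (x,\bar{x})=x-\bar{x}\) and any \(r>0\) (cf. Remark 2.3 below), and inequality (2) is then sampled at random points.

```python
import numpy as np

# Illustrative sketch under assumed data: f(x) = ||x||^2, S_f(x_bar) = {2 x_bar},
# eta(x, x_bar) = x - x_bar, r = 0.5.  We only sample inequality (2); this is a
# numerical check, not a proof of r-invexity.
def f(x):
    return float(x @ x)

def ineq2_gap(x, x_bar, omega, eta_val, r):
    """Left-hand side minus right-hand side of (2); a nonnegative value means (2) holds."""
    if r != 0:
        return (np.exp(r * f(x)) - np.exp(r * f(x_bar)) * (1.0 + r * omega @ eta_val)) / r
    return f(x) - f(x_bar) - omega @ eta_val

r = 0.5
rng = np.random.default_rng(1)
for _ in range(1000):
    x, x_bar = rng.standard_normal(3), rng.standard_normal(3)
    omega = 2.0 * x_bar                     # the only element of S_f(x_bar)
    assert ineq2_gap(x, x_bar, omega, x - x_bar, r) >= -1e-8
print("inequality (2) holds at all sampled points")
```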

Remark 2.1

To define an analogous class of (strictly) r-incave functions with respect to a convex compact set, the direction of each inequality (2) should be reversed.

Remark 2.2

Note that the definition of a 0-invex function f at \(\bar{{x}}\) with respect to \({S}_{{f}}(\bar{{x}})\) and with respect to \(\eta \) is, in fact, the definition of an invex function with respect to \({S}_{{f}}(\bar{{x}})\) and with respect to \(\eta \).

Remark 2.3

Note that, in the case when f is a locally Lipschitz function at \(\bar{{x}}\) and \({S}_{{f}}(\bar{{x}})\) is equal to the Clarke subdifferential of f at \(\bar{{x}}\) (see Clarke 1983), then we obtain the definition of a locally Lipschitz r-invex function introduced by Antczak (2002). Further, in the case r = 0, we obtain the definition of a locally Lipschitz invex function introduced by Reiland (1990) (see also Kaul et al. 1994). In the case when f is differentiable, then \({S}_{{f}}(\bar{{x}})\) = {\(\nabla \)f(\(\bar{{x}})\)} and the definition of an r-invex function with respect to the convex compact set \({S}_{{f}}(\bar{{x}})\) reduces to the definition of a differentiable r-invex function introduced by Antczak (2005) and in the case r = 0 to the definition of a differentiable invex function introduced by Hanson (1981).

Remark 2.4

All theorems in the further part of this work will be proved only in the case when r \(\ne \) 0 (the other cases can be dealt with likewise, since the only changes arise from the form of the inequality defining the class of r-invex functions with respect to a convex compact set and with respect to \(\eta \) for a scalar r). The proofs in the case r = 0 are easier; this follows from the form of the inequalities given in Definition 2.3. Moreover, without loss of generality, we shall assume that r \(>\) 0 (in the case when r \(<\) 0, the direction of some of the inequalities in the proofs of the theorems should be reversed).

Now, we present several necessary and sufficient conditions for a quasidifferentiable r-invex function with respect to a convex compact set.

In Antczak (2005), using the definition of a weighted r-mean, Antczak introduced the definition of an r-preinvex function. We recall it here for the reader's convenience.

Definition 2.4

Let a \(\in \mathbb {R}^{{m}}\), q \(\in \mathbb {R}^{{m}}\) be vectors whose coordinates are positive and nonnegative numbers, respectively, and let r be any finite real number. If \(\sum \nolimits _{{i=1}}^{m} {{q}_{i} } =1\), then a weighted r-mean is defined as follows

$$\begin{aligned} {M}_{r} {(a;q)}:={M}_{r} ({a}_1 ,\ldots ,{a}_{m} ;{q})=\left\{ {{\begin{array}{*{20}l} {\left( {\sum \limits _{{i=1}}^{m} {{q}_{i} {a}_{i}^{r} } }\right) ^{{1/r}}} &{}\quad {\mathrm{if}} &{} {{r}\ne {0},} \\ {\prod \limits _{{i=1}}^{m} {{a}_{i}^{{q}_{i} } } } &{}\quad {\mathrm{if}} &{} {{r=0}.} \\ \end{array} }} \right. \end{aligned}$$
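A direct transcription of this definition (a minimal sketch; the sample weights and data below are our own illustrative assumptions) may look as follows; the printed values increase with r, in accordance with the classical power-mean inequality.

```python
import numpy as np

# Weighted r-mean M_r(a; q) of Definition 2.4 (sketch; sample data are assumed).
def weighted_r_mean(a, q, r):
    """(sum_i q_i a_i^r)^(1/r) for r != 0 and prod_i a_i^{q_i} for r = 0."""
    a, q = np.asarray(a, dtype=float), np.asarray(q, dtype=float)
    assert np.all(a > 0) and np.all(q >= 0) and np.isclose(q.sum(), 1.0)
    if r != 0:
        return (q @ a ** r) ** (1.0 / r)
    return float(np.prod(a ** q))

a, q = [2.0, 8.0], [0.5, 0.5]
for r in (-1.0, 0.0, 1.0, 2.0):
    print(r, weighted_r_mean(a, q, r))
```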

Definition 2.5

(Antczak 2005) Let X be a nonempty invex (with respect to \(\eta :X\times X\rightarrow \mathbb {R}^{n}\)) subset of \(\mathbb {R}^{n}\), \(u \in X\), and f : X \(\rightarrow \mathbb {R}\) be a real-valued function defined on X. If there exists a real number r such that, for all \(x \in X\) and all \(q_{1}\ge 0\), \(q_{2}\ge 0\) with \(q_{1}+q_{2}=1\), the following inequality

$$\begin{aligned} {f}({q}_1 {u}+{q}_2 (\eta {(x,u)}+{u}))\le \ln \left( {M}_{r} ({e}^{{f}{(u)}},{e}^{{f(x)}};{q})\right) \end{aligned}$$

holds, then f is said to be an r-preinvex function at u on X with respect to \(\eta \).

If the above inequality is satisfied at any point u \(\in \) X with respect to the same function \(\eta \), then f is said to be r-preinvex with respect to \(\eta \) on X.

If we adopt \(q_{2}=\lambda \) for any \(\lambda \in \) [0,1] (so that \(q_{1}\) + \(q_{2}\) = 1 implies \(q_{1}=1 - \lambda )\), then the definition of an r-preinvex function with respect to \(\eta \) can be rewritten as follows:

$$\begin{aligned} {f}({u}+\lambda \eta {(x,u)})\le \left\{ {{\begin{array}{*{20}l} {\frac{1}{r}\ln \left( \lambda {e}^{{rf(x)}}+(1-\lambda ){e}^{{rf(u)}}\right) } &{}\quad {\mathrm{if}} &{} {{r}\ne 0,} \\ {\lambda {f(x)}+(1-\lambda ){f(u)}} &{}\quad {\mathrm{if}} &{} {{r=0}.} \\ \end{array} }} \right. \end{aligned}$$
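As a numerical illustration of this reformulated inequality (a sketch under our own assumptions, not an example taken from the paper), note that when \(\eta (x,u)=x-u\) and \(r>0\) the inequality is precisely convexity of \(e^{rf}\); for instance, \(f(x)=\ln (1+\Vert x\Vert )\) satisfies it with \(r=1\) and \(\eta (x,u)=x-u\), since \(e^{f(x)}=1+\Vert x\Vert \) is convex.

```python
import numpy as np

# Sketch under assumed data: f(x) = ln(1 + ||x||), r = 1, eta(x, u) = x - u.
# Since e^{f(x)} = 1 + ||x|| is convex, f is 1-preinvex with this eta; we only
# sample the reformulated inequality above at random points.
def f(x):
    return float(np.log(1.0 + np.linalg.norm(x)))

def eta(x, u):
    return x - u

def rhs(x, u, lam, r):
    """(1/r) * ln( lam * e^{r f(x)} + (1 - lam) * e^{r f(u)} )."""
    return np.log(lam * np.exp(r * f(x)) + (1.0 - lam) * np.exp(r * f(u))) / r

r = 1.0
rng = np.random.default_rng(2)
for _ in range(1000):
    x, u, lam = rng.standard_normal(3), rng.standard_normal(3), rng.uniform()
    assert f(u + lam * eta(x, u)) <= rhs(x, u, lam, r) + 1e-12
print("the r-preinvexity inequality holds at all sampled points")
```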

Now, we prove that if f : X \(\rightarrow \) \(\mathbb {R}\) is an r-preinvex function with respect to \(\eta \) at u \(\in \) X on X and f is quasidifferentiable at u \(\in \) X, then f is a quasidifferentiable r-invex function at u on X with respect to the same function \(\eta \) and with respect to the convex compact set \({S}_{f} {(u)}=\underline{\partial }{f(u)}+\bar{{w}}\subset \mathbb {R}^{n}\), where \(\bar{{w}}\in \mathop {\arg \min }_{{w}\in \overline{\partial }{f(u)}} \;{w}^{T}\eta {(x,u)}\) for an arbitrary x \(\in \) X.

Proposition 2.1

Let X be a nonempty invex (with respect to \(\eta \)) subset of \(\mathbb {R}^{n}\) and \(u\in \) X. Assume that \(f : X \rightarrow \) \(\mathbb {R}\) is an r-preinvex function at \(u \in X\) on X with respect to \(\eta \) and f is a quasidifferentiable function at \(u \in X\). Then, f is a quasidifferentiable r-invex function at u on X with respect to the same function \(\eta \) and with respect to the convex compact set \({S}_{f}{(u)}=\underline{\partial }{f(u)}+\bar{{w}}\), where \(\bar{{w}}\in \mathop {\arg \min }_{{w}\in \overline{\partial }{f(u)}} \;{w}^{T}\eta {(x,u)}\) for an arbitrary x \(\in \) X.

Proof

Assume that f : X \(\rightarrow \) \(\mathbb {R}\) is an r-preinvex function at u \(\in \) X on X with respect to \(\eta \) and, moreover, r \(\ne \) 0. Without loss of generality, assume that r \(>\) 0. Hence, by Definition 2.5, the inequality

$$\begin{aligned} {f}({u}+\lambda \eta {(x,u)})\le \frac{1}{r}\ln \left( \lambda {e}^{{rf(x)}}+(1-\lambda ){e}^{{rf(u)}}\right) \end{aligned}$$

holds for all x \(\in \) X and \(\lambda \in \) [0,1]. Thus, we have

$$\begin{aligned} {e}^{{rf(x)}}-{e}^{{rf(u)}}\ge {e}^{{rf(u)}}\frac{{e}^{{rf}({u}+\lambda \eta {(x,u)})-{rf(u)}}-1}{\lambda }. \end{aligned}$$

By assumption, f is a quasidifferentiable function at u \(\in \) X. Then, by Definition 2.2, it follows that it is directionally differentiable at u. By letting \(\lambda \downarrow \) 0, we get that the inequality

$$\begin{aligned} {e}^{{rf(x)}}-{e}^{{rf(u)}}\ge {re}^{{rf(u)}}{{f^{\prime }}}({u};\eta {(x,u)}) \end{aligned}$$

holds for all x \(\in \) X. Thus, the above inequality yields that the inequality

$$\begin{aligned} \frac{1}{{r}}{e}^{{rf(x)}}\ge \frac{1}{{r}}{e}^{{rf(u)}}[1+{r}{{f^{\prime }}}({u};\eta {(x,u)})] \end{aligned}$$
(3)

holds for all x \(\in \) X. Since f is quasidifferentiable at u, by Definition 2.2, we have that, for any x \(\in \) X,

$$\begin{aligned} {{f^{\prime }}}({u};\eta {(x,u)})=\mathop {\max }\limits _{{v}\in \underline{\partial }{f(u)}} {v}^{T}\eta {(x,u)}+\;\mathop {\min }\limits _{{w}\in \overline{\partial }{f(u)}} {w}^{T}\eta {(x,u)}. \end{aligned}$$

By definition, \(\overline{\partial }{f(u)}\) is nonempty and compact. Therefore, for any x \(\in \) X, we can find \(\bar{{w}}\) such that \(\bar{{w}}\in {\arg \min }_{{w}\in \overline{\partial }{f(u)}} \;{w}^{T}\eta {(x,u)}\). Hence, the relation above implies that the inequality

$$\begin{aligned} {{f^{\prime }}}({u};\eta {(x,u)})\ge {v}^{T}\eta {(x,u)}+\;\bar{{w}}^{T}\eta {(x,u)},\quad \forall {v}\in \underline{\partial }{f(u)} \end{aligned}$$
(4)

holds for all x \(\in \) X. Hence, (3) and (4) yield that the following inequality

$$\begin{aligned} \frac{1}{{r}}{e}^{{rf(x)}}\ge \frac{1}{{r}}{e}^{{rf(u)}}[1+{r}\omega ^{T}\eta {(x,u)}],\quad \forall \omega \in \underline{\partial }{f(u)}+\bar{{w}} \end{aligned}$$

holds for all x \(\in \) X. This means, by Definition 2.3, that f is a quasidifferentiable r-invex function at u on X with respect to \({S}_{f}{(u)}=\underline{\partial }{f(u)}+\bar{{w}}\) and with respect to \(\eta \). This completes the proof. \(\square \)

Corollary 2.1

Let X be a nonempty invex (with respect to \(\eta \)) subset of \(\mathbb {R}^{n}\) and u \(\in \) X. Assume that f : X \(\rightarrow \) \(\mathbb {R}\) is an r-preinvex function at \(u \in X\) on X with respect to \(\eta \) and f is a quasidifferentiable function at \(u \in X\). If \(\overline{\partial }{f(u)}\) is a singleton, then f is a quasidifferentiable r-invex function at u on X with respect to the same function \(\eta \) and with respect to \({S}_{f}{(u)}=\underline{\partial }{f(u)}+\overline{\partial }{f(u)}\).

Theorem 2.1

Let u be an arbitrary point of X and \({f : X \rightarrow }\) \(\mathbb {R}\) be a quasidifferentiable r-invex function at u on X with respect to the set \({S}_{f} {(u)}=\underline{\partial }{f(u)}+\overline{\partial }{f(u)}\) and with respect to \(\eta \). Then, the following inequality

$$\begin{aligned} \frac{1}{{r}}{e}^{{rf(x)}}&\ge \frac{1}{{r}}{e}^{{rf(u)}}[1+{r}{{f^{\prime }}}({u};\eta {(x,u)})], \quad {\mathrm{if}} \quad {{r}\ne 0} \nonumber \\ {f(x)}&\ge {f(u)}+{{f^{\prime }}}({u};\eta {(x,u)}), \quad {\mathrm{if}} \quad {{r=0}} \end{aligned}$$
(5)

holds for all \(x \in X\).

Proof

Let u be an arbitrary point of X. Assume that f : X \(\rightarrow \mathbb {R}\) is a quasidifferentiable r-invex function at u on X with respect to the set \({S}_{f}{(u)}=\underline{\partial }{f(u)}+\overline{\partial }{f(u)}\) and with respect to \(\eta \). Then, by Definition 2.3, the inequality

$$\begin{aligned} \frac{1}{{r}}{e}^{{rf(x)}}&\ge \frac{1}{{r}}{e}^{{rf(u)}}[1+{r}\omega ^{T}\eta {(x,u)}], \quad {\mathrm{if}} \quad {{r}\ne 0,} \nonumber \\ {f(x)}&\ge {f(u)}+\omega ^{T}\eta {(x,u)}, \quad {\mathrm{if}} \quad {{r=0,}} \end{aligned}$$
(6)

holds for all x \(\in \) X and for each \(\omega \in {S}_{f} {(u)}=\underline{\partial }{f(u)}+\overline{\partial }{f(u)}\). Hence, (6) gives that the inequality

$$\begin{aligned} \frac{1}{{r}}{e}^{{rf(x)}}&\ge \frac{1}{{r}}{e}^{{rf(u)}}\left[ 1+{r}\left( {v}^{T}\eta {(x,u)}+{w}^{T}\eta {(x,u)}\right) \right] , \quad {\mathrm{if}} \quad {{r}\ne 0,} \nonumber \\ {f(x)}&\ge {f(u)}+{v}^{T}\eta {(x,u)}+{w}^{T}\eta {(x,u)}, \quad {\mathrm{if}} \quad {{r=0,}} \end{aligned}$$
(7)

holds for all x \(\in \) X and for any \({v}\in \underline{\partial }{f(u)}\), \({w}\in \overline{\partial }{f(u)}\). In particular, taking the maximum over \({v}\in \underline{\partial }{f(u)}\) and choosing \({w(x,u)}\in \overline{\partial }{f(u)}\) to be a minimizer of \({w}^{T}\eta {(x,u)}\) over \(\overline{\partial }{f(u)}\), (7) yields

$$\begin{aligned} \frac{1}{{r}}{e}^{{rf(x)}}&\ge \frac{1}{{r}}{e}^{{rf(u)}}\left[ 1+{r}\left( \mathop {\max }\limits _{{v}\in \underline{\partial }{f(u)}} {v}^{T}\eta {(x,u)}+{w(x,u)}^{T}\eta {(x,u)}\right) \right] , \quad {\mathrm{if}} \quad {{r}\ne 0,} \nonumber \\ {f(x)}&\ge {f(u)}+\mathop {\max }\limits _{{v}\in \underline{\partial }{f(u)}} {v}^{T}\eta {(x,u)}+{w(x,u)}^{T}\eta {(x,u)}, \quad {\mathrm{if}} \quad {{r=0}.} \end{aligned}$$
(8)

Thus, (8) gives

$$\begin{aligned} \frac{1}{{r}}{e}^{{rf(x)}}&\ge \frac{1}{{r}}{e}^{{rf(u)}}\left[ 1+{r}\left( \mathop {\max }\limits _{{v}\in \underline{\partial }{f(u)}} {v}^{T}\eta {(x,u)}+\mathop {\min }\limits _{{w}\in \overline{\partial }{f(u)}} {w}^{T}\eta {(x,u)}\right) \right] , \quad {\mathrm{if}} \quad {{r}\ne 0,} \nonumber \\ {f(x)}&\ge {f(u)}+\mathop {\max }\limits _{{v}\in \underline{\partial }{f(u)}} {v}^{T}\eta {(x,u)}+\mathop {\min }\limits _{{w}\in \overline{\partial }{f(u)}} {w}^{T}\eta {(x,u)}, \quad {\mathrm{if}} \quad {{r=0}.} \end{aligned}$$

Hence, by Definition 2.2, it follows that the inequality (5) holds for all x \(\in \) X. This completes the proof of this theorem. \(\square \)

Now, we recall the definition of a stationary point of a quasidifferentiable function given by Demyanov and Vasilev (1985).

Definition 2.6

Let X be a nonempty subset of \(\mathbb {R}^{n}\) and f : X \(\rightarrow \mathbb {R}\) be a quasidifferentiable function on X. A point u \(\in \) X is said to be a stationary point of f if \(-\overline{\partial }{f(u)}\subseteq \underline{\partial }{f(u)}\).
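When the sub- and superdifferential are polytopes given by their vertices, the stationarity test \(-\overline{\partial }{f(u)}\subseteq \underline{\partial }{f(u)}\) reduces to finitely many convex-hull membership problems, each of which is a small feasibility LP. The sketch below is only an illustration (the SciPy-based membership test is our own choice of tool); the vertex data coincide with the sub- and superdifferential appearing in Example 2.1 below.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(p, V):
    """True iff p lies in conv(V) (rows of V are vertices): feasibility of
    lambda >= 0, sum(lambda) = 1, V^T lambda = p."""
    n = len(V)
    res = linprog(np.zeros(n),
                  A_eq=np.vstack([V.T, np.ones((1, n))]),
                  b_eq=np.concatenate([p, [1.0]]),
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0

def is_stationary(sub_vertices, sup_vertices):
    """Definition 2.6: -conv(sup) is contained in conv(sub); it suffices to test
    the negated vertices of the superdifferential, since conv(sub) is convex."""
    return all(in_convex_hull(-w, sub_vertices) for w in sup_vertices)

# Vertex data of the sub- and superdifferential used in Example 2.1 at u = (0,0).
sub = np.array([[0.0, 0.0], [-2.0, 2.0], [2.0, 2.0]])
sup = np.array([[-1.0, -1.0], [1.0, -1.0]])
print(is_stationary(sub, sup))   # True: (0,0) is a stationary point
```

In accordance with Theorem 2.2 below, this stationary point is indeed a global minimizer of the function considered in Example 2.1, since that function is nonnegative and vanishes at the origin.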

We now prove a necessary and sufficient condition for a quasidifferentiable function \({f} : {X}\rightarrow \mathbb {R}\) to be r-invex at each point u of X with respect to the convex compact set \({S}_{f}{(u)}=\underline{\partial }{f(u)}+\overline{\partial }{f(u)}\).

Theorem 2.2

Let X be a nonempty subset of \(\mathbb {R}^{n}\). A quasidifferentiable function \({f} : {X}\rightarrow \) \(\mathbb {R}\) is r-invex at each point u \(\in \) X on X with respect to \({S}_{f}{(u)}=\underline{\partial }{f(u)}+\overline{\partial }{f(u)}\) if and only if every stationary point of f is a global minimizer of f on X.

Proof

Necessity. Let f be a quasidifferentiable r-invex function at each point u \(\in \) X on X with respect to \({S}_{f}{(u)}=\underline{\partial }{f(u)}+\overline{\partial }{f(u)}\). Further, assume that u \(\in \) X is a stationary point of f. Hence, by Definition 2.6, it follows that \(-\overline{\partial }{f(u)}\subseteq \underline{\partial }{f(u)}\). Then, for any \({w}\in \overline{\partial }{f(u)}\), it is possible to choose \({v}=-{w}\in \underline{\partial }{f(u)}\), so that \(0={v}+{w}\in {S}_{f}{(u)}\). Applying the inequality (2) with \(\omega =0\), by Definition 2.3, it follows that the inequality \(f(x) - f(u)\) \(\ge \) 0 holds for all x \(\in \) X. Thus, u is a global minimizer of f on X.

Sufficiency. Assume that every stationary point of f is its global minimizer on X. Let x, u be two arbitrary points of X. If f(x) \(\ge f(u)\), then choose \(\eta (x,u) = 0 \). If \(f(x) < f(u)\), then u cannot be a stationary point. Then, for every \({w}\in \overline{\partial }{f(u)}\), we have, by Definition 2.6, that \(0\notin {w}+\underline{\partial }{f(u)}\). Note, moreover, that any set \({w}+\underline{\partial }{f(u)}\) is convex and compact. Let us denote

$$\begin{aligned} \hat{\omega }_{w} \in \mathop {\arg \min }\limits _{\omega _{w} \in {w}+\underline{\partial }{f(u)}} \left\| {\omega _{w} } \right\| ,\quad \left\| \hat{\omega }_{w} \right\| >0\,\,\,\,\,\text{ for } \text{ all }\,\,\,\,\,{w}\in \overline{\partial }{f(u)}. \end{aligned}$$

Hence, by Theorem 2.4.4 (see Bazaraa and Shetty 1976), we have that

$$\begin{aligned} \omega _{w}^{T} \hat{\omega }_{w} \ge \hat{\omega }_{w}^{T} \hat{\omega }_{w} \,\,\,\,\forall {w}\in \overline{\partial }{f(u)}\,\,\forall \omega _{w} \in {w}+\underline{\partial }{f(u)}. \end{aligned}$$
(9)

Then, we set

$$\begin{aligned} \bar{\omega }_{w} =\;\mathop {\min }\limits _{{w}\in \overline{\partial }{f(u)}} \left\| {\hat{\omega }_{w} } \right\| >0. \end{aligned}$$
(10)

Note that, by (9) and (10), we have that, for any \({w}\in \overline{\partial }{f(u)}\),

$$\begin{aligned} \omega _{w}^{T} \bar{\omega }_{w} \ge \bar{\omega }_{w}^{T} \bar{\omega }_{w} \,\,\,\forall \omega _{w} \in {w}+\underline{\partial }{f(u)}. \end{aligned}$$
(11)

Then, we take

$$\begin{aligned} \eta {(x,u)}=\left\{ {{\begin{array}{*{20}l} {\frac{{e}^{{r}({f(x)}-{f(u)})}-1}{{ r}\bar{\omega }_{ w}^{ T} \bar{\omega }_{ w} }\bar{\omega }_{ w} } &{}\quad {\mathrm{if}} &{} {{ r}\ne 0,} \\ {\frac{{ f(x)}-{ f(u)}}{\bar{\omega }_{ w}^{ T} \bar{\omega }_{ w} }\bar{\omega }_{ w} } &{}\quad {\mathrm{if}} &{} {{r=0}.} \\ \end{array} }} \right. \end{aligned}$$

Hence, by (11), it follows that, for any \({ w}\in \overline{\partial }{ f(u)}\), the following relations

$$\begin{aligned} \frac{1}{{ r}}({ e}^{{ r}({ f(x)}-{ f(u)})}-1)-\omega _{ w}^{ T} \eta { (x,u)}&= \frac{1}{{ r}}({ e}^{{ r}({ f(x)}-{ f(u)})}-1)-\frac{{ e}^{{ r}({ f(x)}-{ f(u)})}-1}{{ r}\bar{\omega }_{ w}^{ T} \bar{\omega }_{ w} }\omega _{ w}^{ T} \bar{\omega }_{ w} \\&\ge \frac{1}{{ r}}({ e}^{{ r}({ f(x)}-{ f(u)})}-1)-\frac{{ e}^{{ r}({ f(x)}-{ f(u)})}-1}{{ r}\bar{\omega }_{ w}^{ T} \bar{\omega }_{ w} }\bar{\omega }_{ w}^{ T} \bar{\omega }_{ w} =0,\,\,\,\,\,\text{ if } \text{ r }\ne 0,\\ { f(x)}-{ f(u)}-\omega _{ w}^{ T} \eta { (x,u)}&= { f(x)}-{ f(u)}-\frac{{ f(x)}-{ f(u)}}{\bar{\omega }_{ w}^{ T} \bar{\omega }_{ w} }\omega _{ w}^{ T} \bar{\omega }_{ w}\\&\ge { f(x)}-{ f(u)}-\frac{{ f(x)}-{ f(u)}}{\bar{\omega }_{ w}^{ T} \bar{\omega }_{ w} }\bar{\omega }_{ w}^{ T} \bar{\omega }_{ w} =0,\quad \text{ if } \text{ r } =0 \end{aligned}$$

hold for every \(\omega _{ w} \in { w}+\underline{\partial }{ f(u)}\). Then, by Definition 2.3, it follows that f is a quasidifferentiable r-invex function at each point u \(\in \) X on X with respect to \({ S}_{ f} { (u)}=\underline{\partial }{ f(u)}+\overline{\partial }{ f(u)}\) and with respect to the function \(\eta \) given above.

Since we chose \(\eta (x,u) = 0 \) in the case \(f(x) \ge f(u)\), it follows, also in this case, by Definition 2.3, that f is a quasidifferentiable r-invex function at each point u \(\in \) X on X with respect to \({ S}_{ f}{ (u)}=\underline{\partial }{ f(u)}+\overline{\partial }{ f(u)}\) and with respect to the function \(\eta \).

Thus, the proof of this theorem is completed. \(\square \)

To illustrate the introduced concept of r-invexity with respect to a convex compact set, we present an example of a quasidifferentiable r-invex function at a point \(\bar{{ x}}\) with respect to the convex compact set \({ S}_{ f} (\bar{{ x}})=\underline{\partial }{ f}(\bar{{ x}})+\overline{\partial }{ f}(\bar{{ x}})\), that is, with respect to a convex compact set which is equal to the Minkowski sum of its subdifferential and superdifferential at this point.

Example 2.1

Let f : \(\mathbb {R}^{2} \rightarrow \mathbb {R}\) be a function defined by \({ f(x)}=\ln (\left| {{ x}_2 +\left| {{ x}_1 } \right| } \right| +1)\). We show that f is a quasidifferentiable 1-invex function at \(\bar{{ x}}\) = (0,0) on \(\mathbb {R}^{2}\) with respect to the convex compact set \({ S}_{ f} (\bar{{ x}})=\underline{\partial }{ f}(\bar{{ x}})+\overline{\partial }{ f}(\bar{{ x}})\). First, we show that f is a quasidifferentiable function at \(\bar{{ x}}\). Indeed, we have \({{ f^{\prime }}}(\bar{{ x}};{ d})=\left| {{ d}_2 +\left| {{ d}_1 } \right| } \right| .\) Hence, it can be proved that \({{ f^{\prime }}}(\bar{{ x}};{ d})={\max }_{{{ v}\in \mathrm{co}\{(0,0),(-2,2),(2,2)\}}} { v}^{ T}{ d}+\;{\min }_{{{ w}\in \mathrm{co}\{(-1,-1),(1,-1)\}}} { w}^{ T}{ d}\), where \(\underline{\partial }{ f}(\bar{{ x}})=\mathrm{co}\{(0,0),(-2,2),(2,2)\}\) and \(\overline{\partial }{ f}(\bar{{ x}})=\mathrm{co}\{(-1,-1),(1,-1)\}\). Hence, by Definition 2.2, it follows that f is a quasidifferentiable function at \(\bar{{ x}}\) = (0,0). Further, we have \({ S}_{ f} (\bar{{ x}})=\underline{\partial }{ f}(\bar{{ x}})+\overline{\partial }{ f}(\bar{{ x}})=\) \(\mathrm{co}\{(-1,-1),(-3,1),(1,1),(1,-1),(-1,1),(3,1)\}\). Now, let r = 1 and \(\eta :\mathbb {R}^{2} \times \mathbb {R}^{2} \rightarrow \mathbb {R}^{2}\) be the vector-valued function \(\eta ({ x},\bar{{ x}})=\left[ {{\begin{array}{*{20}c} {\textstyle {{\left| {{ x}_2 +\left| {{ x}_1 } \right| } \right| } \over 2}}\\ {-\textstyle {{\left| {{ x}_2 +\left| {{ x}_1 } \right| } \right| } \over 2}}\\ \end{array} }} \right] \). Hence, by Definition 2.3, it can be verified that f is a quasidifferentiable 1-invex function at \(\bar{{ x}}\) = (0,0) on \(\mathbb {R}^{2}\) with respect to the convex compact set \({ S}_{{ f}}(\bar{{ x}})\) and with respect to \(\eta \) given above.

Remark 2.5

Note that the function \(\eta \) given in Example 2.1 is not the unique function with respect to which the function f considered in Example 2.1 is quasidifferentiable 1-invex at \(\bar{{ x}}\) = (0,0) on \(\mathbb {R}^{2}\) with respect to \(S_{f}(\bar{{ x}})\). Indeed, if we set \(\tilde{\eta }({ x},\bar{{ x}})=\left[ {{\begin{array}{*{20}c}{\textstyle {{\left| {{ x}_2 +\left| {{ x}_1 } \right| } \right| } \over 3}} \\ 0 \\ \end{array} }} \right] \), then, by Definition 2.3, it can be shown that f is also a quasidifferentiable 1-invex function at \(\bar{{ x}}\) = (0,0) on \(\mathbb {R}^{2}\) with respect to the convex compact set \({ S}_{{ f}}(\bar{{ x}})\) and with respect to \(\tilde{\eta }\) given above.
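Since the data of Example 2.1 and Remark 2.5 are fully explicit, both the quasidifferential representation of \(f^{\prime }(\bar{x};d)\) and inequality (2) with r = 1 can be sampled numerically. The sketch below (a random-sampling check only, not a proof) verifies the inequality at the vertices of \({S}_{f}(\bar{x})\), which suffices because the right-hand side of (2) is affine in \(\omega \).

```python
import numpy as np

# Random-sampling check of Example 2.1 and Remark 2.5 (not a proof).
def f(x):
    return float(np.log(abs(x[1] + abs(x[0])) + 1.0))

sub = np.array([[0.0, 0.0], [-2.0, 2.0], [2.0, 2.0]])    # subdifferential of f at (0,0)
sup = np.array([[-1.0, -1.0], [1.0, -1.0]])              # superdifferential of f at (0,0)
S_f = np.array([v + w for v in sub for w in sup])        # vertices of the Minkowski sum

def eta(x):        # eta(x, (0,0)) from Example 2.1
    a = abs(x[1] + abs(x[0]))
    return np.array([a / 2.0, -a / 2.0])

def eta_tilde(x):  # eta from Remark 2.5
    a = abs(x[1] + abs(x[0]))
    return np.array([a / 3.0, 0.0])

rng = np.random.default_rng(3)
for _ in range(2000):
    d = rng.standard_normal(2)
    # formula (1): f'((0,0); d) = max over the subdifferential + min over the superdifferential
    assert abs(abs(d[1] + abs(d[0])) - (np.max(sub @ d) + np.min(sup @ d))) <= 1e-12
    x = 3.0 * rng.standard_normal(2)
    for e in (eta(x), eta_tilde(x)):
        # inequality (2) with r = 1 and f(0,0) = 0, checked at all vertices of S_f
        assert np.all(np.exp(f(x)) >= 1.0 + S_f @ e - 1e-12)
print("Example 2.1 and Remark 2.5 verified at all sampled points")
```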

3 Optimality conditions for nonsmooth optimization problems with quasidifferentiable r-invex functions

In this paper, we consider the following nonsmooth optimization problem:

$$\begin{aligned}(\text{ P })\qquad \qquad \qquad&\begin{array}{l} { f(x)}\rightarrow \min \\ \hbox { s.t.} \quad { g}_{{ j}} ( {x})\le 0, \quad { j}\in {J }=\left\{ {{1},\ldots ,{m}} \right\} , \\ {x}\in \mathbb {R}^{{ n}}, \\ \end{array} \end{aligned}$$

where \({ f}:{\mathbb {R}}^{ n}\rightarrow {\mathbb {R}}\) and \({ g}_{ j} :{\mathbb {R}}^{ n}\rightarrow {\mathbb {R}},\;{ j}\in { J},\) are quasidifferentiable functions on \(\mathbb {R}^{ n}\). Thus, problem (P) may be referred to as a quasidifferentiable optimization problem.

For the purpose of simplifying our presentation, we introduce some notation which will be used frequently throughout this paper.

Let \({ X}:=\{\;{ x}\in {\mathbb {R}}^{ n}:{ g}_{ j} { (x)}\;\le \;0,\;{ j}\in { J}\;\}\) be the set of all feasible solutions in problem (P). Further, we denote by J(\(\bar{{ x}})\) the set of indices of the inequality constraints that are active at a point \(\bar{{ x}}\) \(\in \) X, that is, \({ J}(\bar{{ x}}):=\{\;{ j}\in { J}:{ g}_{ j} (\bar{{ x}})=0\;\}\).

In Gao (2000a), Gao presented the following necessary optimality conditions for nonsmooth optimization problems with the inequality constraints in which the functions involved are quasidifferentiable.

Theorem 3.1

(Karush–Kuhn–Tucker type necessary optimality conditions) Let \(\bar{{ x}} \in X\) be an optimal solution for the considered nonsmooth optimization problem (P). Further, assume that f is quasidifferentiable at \(\bar{{ x}}\), with the quasidifferential \({ D}_{ f} (\bar{{ x}})=[\underline{\partial }{ f}(\bar{{ x}}),\overline{\partial }{ f}(\bar{{ x}})]\), and each \({ g}_{{ j}}\), \(j \in J\), is quasidifferentiable at \(\bar{{ x}}\), with the quasidifferential \({ D}_{{ g}_{ j} } (\bar{{ {x}}})=[\underline{\partial }{ g}_{ j} (\bar{{ x}}),\overline{\partial }{ g}_{ j} (\bar{{ x}})]\). If the constraint qualification of Kuntz and Scholtes (1993) is satisfied at \(\bar{{ x}}\) for problem (P), then, for any choice of \({ w}_0 \in \overline{\partial }{ f(}\bar{{ x}})\) and \({ w}_{ j} \in \overline{\partial }{ g}_{ j} (\bar{{ x}})\), \(j \in J\), there exist scalars \(\bar{\lambda }_{ j} { (w)}\ge \;0\), \(j \in J\), not all zero, such that

$$\begin{aligned} 0\in \underline{\partial }{ f}(\bar{{ x}})+{ w}_0 +\sum \limits _{j=1}^{ m} {\bar{\lambda }_{ j}{ (w)}(\underline{\partial }{ g}_{ j} (\bar{{ x}})+{ w}_{ j} )} , \end{aligned}$$
(12)
$$\begin{aligned} \bar{\lambda }_{ j}{ (w)}{ g}_{ j} (\bar{{ x}})=0,\;\quad { j}\in { J}, \end{aligned}$$
(13)
$$\begin{aligned} \bar{\lambda }_{ j}{ (w)}\;\ge \;0,\quad { j}\in { J}, \end{aligned}$$
(14)

where \(\bar{\lambda }_1 { (w)},\ldots ,\bar{\lambda }_{ m}{ (w)}\) depend on the specific choice of \({ w} = ({ w}_{0}, { w}_{1},{\ldots },{ w}_{{ m}})\).

Now, we prove sufficient optimality conditions for a given feasible solution in the considered quasidifferentiable optimization problem (P) under the assumption that the functions involved are quasidifferentiable r-invex with respect to the same function \(\eta \) and with respect to convex compact sets which are equal to the Minkowski sum of their subdifferentials and superdifferentials.

Theorem 3.2

(Sufficient optimality conditions) Let \(\bar{{ x}}\) be a feasible solution in the considered optimization problem (P) and let the Karush–Kuhn–Tucker type necessary optimality conditions (12)–(14) be satisfied at \(\bar{{ x}}\). Further, assume that f is a quasidifferentiable r-invex function at \(\bar{{ x}}\) on X with respect to \({ S}_{ f} (\bar{{x}})=\underline{\partial }{ f}(\bar{{x}})+\overline{\partial }{ f}(\bar{{ x}})\) and with respect to \(\eta \) and, moreover, each \({ g}_{{ j}}\), \({ j}\in \) \({ J} ({\bar{x}})\), is a quasidifferentiable r-invex function at \(\bar{{ x}}\) on X with respect to \({ S}_{{ g}_{ j} } (\bar{{ x}})=\underline{\partial }{ g}_{ j} (\bar{{ x}})+\overline{\partial }{ g}_{ j} (\bar{{ x}})\) and with respect to the same function \(\eta \). Then, \(\bar{{ x}}\) is an optimal solution in problem (P).

Proof

Assume that \(\bar{{ x}}\) is a feasible point in problem (P) at which the Karush–Kuhn–Tucker type necessary optimality conditions (12)–(14) are satisfied. This means that, for given \({ w}_0 \in \overline{\partial }{ f}(\bar{{ x}})\) and \({ w}_{ j} \in \overline{\partial }{ g}_{ j} (\bar{{ x}})\), \({ j}\in { J}\), there exists \(\bar{\lambda }{ (w)}\in {\mathbb {R}}^{ m}\) such that the conditions (12)–(14) are satisfied. Hence, by the Karush–Kuhn–Tucker type necessary optimality condition (12), it follows that there exist \({ v}_0 \in \underline{\partial }{ f}(\bar{{ x}})\) and \({ v}_{ j} \in \underline{\partial }{ g}_{ j} (\bar{{ x}})\), \({ j}\in { J}\), such that

$$\begin{aligned} 0={ v}_0 +{ w}_0 +\sum \limits _{{ j=1}}^{ m} {\bar{\lambda }_{ j}{ (w)}({ v}_{ j} +{ w}_{ j} )} . \end{aligned}$$
(15)

By hypotheses, f is a quasidifferentiable r-invex function at \(\bar{{ x}}\) on X with respect to \({ S}_{ f} (\bar{{ x}})=\underline{\partial }{ f}(\bar{{ x}})+\overline{\partial }{ f}(\bar{{ x}})\) and with respect to \(\eta \), and each \({ g}_{{ j}}\), \({ j} \in { J}\) (\(\bar{{ x}})\), is a quasidifferentiable r-invex function at \(\bar{{ x}}\) on X with respect to \({ S}_{{ g}_{ j} } (\bar{{ x}})=\underline{\partial }{ g}_{ j} (\bar{{ x}})+\overline{\partial }{ g}_{ j} (\bar{{ x}})\) and with respect to \(\eta \). Hence, by Definition 2.3, the following inequalities

$$\begin{aligned} \frac{1}{{ r}}{ e}^{{ rf(x)}}\ge \frac{1}{{ r}}{ e}^{{ rf}(\bar{{ x}})}\left[ 1+{ r}\omega _0^{ T} \eta ({ x},\bar{{ x}})\right] \;,\,\,\,\forall \,\omega _0 \in { S}_{ f} (\bar{{ x}}), \end{aligned}$$
(16)
$$\begin{aligned} \frac{1}{{ r}}{ e}^{{ rg}_{ j}{ (x)}}\ge \frac{1}{{ r}}{ e}^{{ rg}_{ j} (\bar{{ x}})}\left[ 1+{ r}\omega _{ j}^{ T} \eta ({ x},\bar{{ x}})\right] ,\,\,\,\,\forall \,\omega _{ j} \in { S}_{{ g}_{ j} } (\bar{{ x}}),\,\,\,{ j}\in {J}(\bar{{ x}}) \end{aligned}$$
(17)

hold for all x \(\in \) X. Since (16) and (17) are fulfilled for all \(\omega _{0} \in \) \({ S}_{{ f}}(\bar{{ x}})\) and all \(\omega _{ j} \in { S}_{{ g}_{ j} } (\bar{{ x}})\), \({ j}\in { J}\) (\(\bar{{ x}})\), respectively, by the definitions of \({ S}_{ f} (\bar{{ x}})\) and \({ S}_{{ g}_{ j} } (\bar{{ x}})\), they are also fulfilled for \(\omega _0 ={ v}_0 +{ w}_0 \in { S}_{ f} (\bar{{ x}})\) and \(\omega _{ j} ={ v}_{ j} +{ w}_{ j} \in { S}_{{ g}_{ j} } (\bar{{ x}})\). Thus, (16) and (17) yield

$$\begin{aligned} \frac{1}{{ r}}\left[ { e}^{{ r(f(x)}-{ f}(\bar{{ x}}))}-1\right] \ge \left( { v}_0^{ T} +{ w}_0^{ T} \right) \eta ({ x},\bar{{ x}}), \end{aligned}$$
(18)
$$\begin{aligned} \frac{1}{{ r}}\left[ { e}^{{ r}({ g}_{ j} { (x)}-{ g}_{ j} (\bar{{ x}}))}-1\right] \ge \left( { v}_{ j}^{ T} +{ w}_{ j}^{ T} \right) \eta ({ x},\bar{{ x}}),\,\,\,{j}\in {J}(\bar{{ x}}). \end{aligned}$$
(19)

Using x \(\in \) X and \(\bar{{ x}} \in X\) together with the definition of J(\(\bar{{ x}})\), we get g\(_{{ j}}(x)\) \(\le \) g\(_{{ j}}(\bar{{ x}})\), \(j \in \) J(\(\bar{{ x}})\). Hence, we have that the following inequalities

$$\begin{aligned} \frac{1}{{ r}}\left[ { e}^{{ r}({ g}_{ j}{ (x)}-{ g}_{ j} (\bar{{ x}}))}-1\right] \le 0,\,\,\,{j}\in {J}(\bar{{ x}}) \end{aligned}$$
(20)

hold for all x \(\in \) X. Thus, (19) and (20) yield

$$\begin{aligned} \left( { v}_{ j}^{ T} +{ w}_{ j}^{ T}\right) \eta ({ x},\bar{{ x}})\le 0, \,\,\,{j}\in {J}(\bar{{ x}}). \end{aligned}$$
(21)

Since \(\bar{\lambda }_{ j}{ (w)}\ge 0\), \(j \in J(\bar{{ x}})\), and \(\bar{\lambda }_{ j}{ (w)}=0\), \(j \notin J(\bar{{ x}})\), (21) yields

$$\begin{aligned} \sum \limits _{j=1}^{ m} {\bar{\lambda }_{ j}{ (w)}\left( { v}_{ j}^{ T} +{ w}_{ j}^{ T} \right) } \eta ({ x},\bar{{ x}})\le 0. \end{aligned}$$
(22)

By (15) and (22), it follows that

$$\begin{aligned} \left( { v}_0^{ T} +{ w}_0^{ T} \right) \eta ({ x},\bar{{ x}})\ge 0. \end{aligned}$$
(23)

Combining (18) and (23), we get that the following inequality

$$\begin{aligned} \frac{1}{{ r}}\left[ { e}^{{ r(f(x)}-{ f}(\bar{{ x}}))}-1\right] \ge \;0 \end{aligned}$$
(24)

holds for all x \(\in \) X. Thus, by (24), we conclude that the inequality \(f(x) \ge \) f(\(\bar{{ x}})\) also holds for all x \(\in \) X. This means that \(\bar{{ x}}\) is an optimal solution in the considered optimization problem (P). Hence, the proof of the theorem is complete. \(\square \)

In order to illustrate the sufficient optimality conditions established in Theorem 3.2, we consider an example of a nonsmooth optimization problem in which the involved functions are quasidifferentiable r-invex with respect to the same function \(\eta \) and with respect to convex compact sets which are equal to the Minkowski sum of their subdifferentials and superdifferentials.

Example 3.1

Consider the following nondifferentiable optimization problem:

Note that X = { x \(\in \mathbb {R}^{2}\) : \(\ln (\left| {x_2 +\left| {{ x}_1 } \right| } \right| +1)\le 0\) } and \(\bar{{ x}}\) = (0,0) is a feasible solution in problem (P1). Further, it can be proved that f and \({ g}_1\) are quasidifferentiable at \(\bar{{ x}}\). Indeed, by Definition 2.1, we have \({ f}^{\prime } ((0,0);d) = -3d_{1} + 3\vert d_{2} - d_{1}\vert \) and, therefore,

$$\begin{aligned} {{ f}}^{\prime }((0,0);{ d})=\mathop {\max }\limits _{{ v}\in \mathrm{co}\{(-3,3),(3,-3)\}} { v}^{ T}{ d}+\;\mathop {\min }\limits _{{ w}\in \{(-3,0)\}} { w}^{ T}{ d}, \end{aligned}$$

where \(\underline{\partial }{ f}(0,0)=\mathrm{co}\{(-3,3),(3,-3)\}\), \(\overline{\partial }{ f}(0,0)=\{(-3,0)\}\). Hence, by Definition 2.2, f is a quasidifferentiable function at \(\bar{{ x}}\) = (0,0). Further, by Definition 2.1, we have \({{ g^{\prime }}}_1 (\bar{{ x}};{ d})=\left| {{ d}_2 +\left| {{ d}_1 } \right| } \right| \) and, therefore,

$$\begin{aligned} {{ g}}^{\prime }_1 (\bar{{ x}};{ d})=\mathop {\max }\limits _{{ v}\in \mathrm{co}\{(0,0),(-2,2),(2,2)\}} {v}^{T}{ d}+\;\mathop {\min }\limits _{{ w}\in \mathrm{co}\{(-1,-1),(1,-1)\}} { w}^{ T}{ d}, \end{aligned}$$

where \(\underline{\partial }{ g}_1 (\bar{{ x}})=\mathrm{co}\{(0,0),(-2,2),(2,2)\}\) and \(\overline{\partial }{ g}_1 (\bar{{ x}})=\mathrm{co}\{(-1,-1),(1,-1)\}\).

It can be proved that the Karush–Kuhn–Tucker necessary optimality conditions are fulfilled at \(\bar{{ x}}\). Indeed, it can be shown that, for any choice of \({ w}_0 \in \overline{\partial }{ f}(\bar{{ x}})\) and \({ w}_1 \in \overline{\partial }{ g}_1 (\bar{{ x}})\), there exists \(\bar{\lambda }_1 { (w)}>\;0\) such that the conditions (12)–(14) are satisfied. Namely, if \(w_{0} = (-3,0)\) and \(w_{1} = (1,-1)\), then, putting \(\bar{\lambda }_1 { (w)}=1\), the condition (12) is satisfied. However, if \(w_{0} = (-3,0)\) and \(w_{1} = (-1,-1)\), then the condition (12) is satisfied if we put \(\bar{\lambda }_1 { (w)}=2\). The conditions (13) and (14) are obviously fulfilled.

Since the Karush–Kuhn–Tucker necessary optimality conditions are fulfilled at \(\bar{{ x}}\), in order to prove by Theorem 3.2 that \(\bar{{ x}}\) is optimal in problem (P1), we have to show that f and \({ g}_1\) are quasidifferentiable r-invex functions at \(\bar{{ x}}\) on X with respect to the same function \(\eta \) and with respect to convex compact sets which are equal to the Minkowski sum of their subdifferentials and superdifferentials at this point.

Let \(S_{f}(\bar{x}) =\underline{\partial }{ f}(\bar{{ x}})+\overline{\partial }{ f}(\bar{{ x}})\), \(S_{g_{1}}(\bar{x}) =\underline{\partial }{ g}_1 (\bar{{ x}})+\overline{\partial }{ g}_1 (\bar{{ x}})\), \(\eta \) : X \(\times \) X \(\rightarrow \mathbb {R}^{2}\) be a vector-valued function defined by \(\eta ({ x},\bar{{ x}})=\left[ {{\begin{array}{*{20}c} {\textstyle {{\left| {{ x}_1 } \right| +{ x}_2 } \over 4}} \\ {\textstyle {{\left| {{ x}_1 } \right| +{ x}_2 } \over 4}} \\ \end{array} }} \right] \).

Fig. 1 The set \({ W}_{{{ w}}^{\prime }} \) when \(w^\prime \) is chosen and \(\bar{\lambda }_1 ({{ w}}^{\prime })=1\). Note that 0 \(\in { W}_{{{ w}}^{\prime }} \), in other words, the Karush–Kuhn–Tucker optimality condition (12) is satisfied

Fig. 2 a The set \({ W}_{{{ w^{\prime \prime }}}} \) when \({{ w^{\prime \prime }}}\) is chosen and \(\bar{\lambda }_1 ({{ w^{\prime \prime }}})=1\). Note that 0 \(\notin { W}_{{{ w^{\prime \prime }}}} \), in other words, the Karush–Kuhn–Tucker optimality condition (12) is not satisfied. b The set \({ W}_{{{ w^{\prime \prime }}}} \) when \({{ w^{\prime \prime }}}\) is chosen and \(\bar{\lambda }_1 ({{ w^{\prime \prime }}})=2\). Note that 0 \(\in { W}_{{{ w^{\prime \prime }}}} \), in other words, the Karush–Kuhn–Tucker condition (12) is satisfied

Then, by Definition 2.3, it can be verified that f is a quasidifferentiable 1-invex function at \(\bar{{ x}}\) on X with respect to \({ S}_{{ f}}(\bar{{ x}})\) and with respect to \(\eta \), and also that \({ g}_{1}\) is a quasidifferentiable 1-invex function at \(\bar{{ x}}\) on X with respect to \({ S}_{{ g}_1 } (\bar{{ x}})\) and with respect to the same function \(\eta \). Hence, since all hypotheses of Theorem 3.2 are fulfilled at \(\bar{{ x}}\), it follows that \(\bar{{ x}}\) is an optimal solution in the considered nonsmooth optimization problem.

Now, for the considered nonsmooth optimization problem, we illustrate the fact that the Lagrange multiplier \(\bar{\lambda }_1 (w)\) depends on the chosen w. In fact, we illustrate that, for a given choice of w, the Karush–Kuhn–Tucker necessary optimality conditions may not be fulfilled at \(\bar{{ x}}\) with the same Lagrange multiplier \(\bar{\lambda }_1 { (w)}\) as for another choice of w. We denote by \(W_{{ w}}\) the set appearing in the Karush–Kuhn–Tucker optimality condition (12), that is, \({ W}_{ w} =\underline{\partial }{ f}(\bar{{ x}})+{ w}_0 +\bar{\lambda }_1 { (w)}(\underline{\partial }{ g}_1 (\bar{{ x}})+{ w}_1 )\) for a given choice of \(w = (w_{0}, w_{1})\), so that \(W_w\) also depends on the Lagrange multiplier \(\bar{\lambda }_1 { (w)}\) (see Figs. 1, 2 and the numerical sketch following the list below).

  1. \({ w}^{\prime } = (w_{0},w_{1}) = ((-3,0); (1,-1))\); let \(\bar{\lambda }_1 ({{ w}}^{\prime })=1\); then \({ W}_{{{ w^{\prime }}}} =\mathrm{co}\{(-5,2),(-3,4),(-7,4), (1,-4),(3,-2),(-1,-2)\}\).

  2. Let us choose another w:

    $$\begin{aligned}&{ w}^{\prime \prime } = (w_{0},w_{1}) = ((-3,0); (-1,-1)); \ \mathrm{let}\ \bar{\lambda }_1 ({w^{\prime \prime }})=1; \ \mathrm{then}\\&{ W}_{{{ w^{\prime \prime }}}} =\mathrm{co}\{(-7,2),(-5,4),(-9,4),(-1,-4),(1,-2),(-3,-2)\};\\&{ w}^{\prime \prime } = (w_{0}, w_{1}) = ((-3,0); (-1,-1)); \ \mathrm{let}\ \bar{\lambda }_1 ({{ w^{\prime \prime }}})=2; \ \mathrm{then}\\&{ W}_{{{ w^{\prime \prime }}}} =\mathrm{co}\{(-8,1),(-4,5),(-12,5),(-2,-5),(2,-1),(-6,-1)\}. \end{aligned}$$
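The sets \({ W}_{{ w}}\) listed above are Minkowski sums of polytopes, so they can be generated from pairwise vertex sums, and the Karush–Kuhn–Tucker condition \(0\in { W}_{ w}\) can be tested by a small feasibility LP. The following sketch (our own verification aid; the SciPy call is an implementation choice, not part of the paper) reproduces the three cases above.

```python
import numpy as np
from scipy.optimize import linprog

# Verification sketch for the three choices of (w1, lambda1) listed above.
sub_f = np.array([[-3.0, 3.0], [3.0, -3.0]])              # subdifferential of f at (0,0)
sub_g1 = np.array([[0.0, 0.0], [-2.0, 2.0], [2.0, 2.0]])  # subdifferential of g_1 at (0,0)
w0 = np.array([-3.0, 0.0])                                # the only element of the superdifferential of f

def W_generators(w1, lam):
    """Pairwise vertex sums whose convex hull is W_w (condition (12))."""
    return np.array([v0 + w0 + lam * (v1 + np.asarray(w1))
                     for v0 in sub_f for v1 in sub_g1])

def contains_origin(V):
    """0 in conv(V) iff lambda >= 0, sum(lambda) = 1, V^T lambda = 0 is feasible."""
    n = len(V)
    res = linprog(np.zeros(n),
                  A_eq=np.vstack([V.T, np.ones((1, n))]),
                  b_eq=np.array([0.0, 0.0, 1.0]),
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0

for w1, lam in (((1.0, -1.0), 1.0), ((-1.0, -1.0), 1.0), ((-1.0, -1.0), 2.0)):
    V = W_generators(w1, lam)
    print(f"w1 = {w1}, lambda1 = {lam}:",
          sorted(tuple(map(float, row)) for row in V),
          "0 in W_w:", contains_origin(V))
```

The output confirms that the origin belongs to \(W_{w^{\prime }}\) with \(\bar{\lambda }_1(w^{\prime })=1\) and to \(W_{w^{\prime \prime }}\) only with \(\bar{\lambda }_1(w^{\prime \prime })=2\), which is precisely the dependence of the Lagrange multiplier on w discussed above and shown in Figs. 1 and 2.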

4 Mond–Weir duality

In this section, we define the Mond–Weir type dual problem for the considered nonsmooth optimization problem (P) as follows:

$$\begin{aligned}&{ (D)} \quad \quad \quad { f(y)} {\rightarrow } \mathrm{max}\\&\mathrm{subject \, to} \ ({ y},\lambda ) \quad \in \quad \Gamma , \end{aligned}$$

where \(\Gamma \) is the set of all pairs (y, \(\lambda )\) with y \(\in \mathbb {R}^{ n}\) and \(\lambda \) : \(\mathbb {R}^{{ m}+1}\) \(\rightarrow \) \(\mathbb {R}^{{ m}}\), \(\lambda { (w)}=(\lambda _1 { (w)},\ldots ,\lambda _{ m} { (w)})\), satisfying, for any sets of \({ w}_0 \in \overline{\partial }{ f(y)}\) and \({ w}_{ j} \in \overline{\partial }{ g}_{ j}{ (y)}\), j \(\in \) J, the following conditions:

$$\begin{aligned} 0\in \underline{\partial }{ f(y)}+{ w}_0 +\sum \limits _{{ j=1}}^{ m} {\lambda _{ j} { (w)}(\underline{\partial }{ g}_{ j}{ (y)}+{ w}_{ j} )} , \end{aligned}$$
(25)
$$\begin{aligned} \lambda _{{ j}} ({w}){g}_{{ j}} ({y})\ge 0,\quad { j}\in {J}, \end{aligned}$$
(26)
$$\begin{aligned} {y}\in \mathbb {R}^{{ n}},\lambda _{{ j}} ({w})\ge 0,\quad { j}\in {J}, \end{aligned}$$
(27)

where \(w = (w_{0},w_{1},{\ldots },w_{{ m}})\). Then, \(\Gamma \) is the set of all feasible solutions in Mond–Weir type dual problem (D) and, moreover, we denote by \(Y = { pr}_{{\mathbb {R}}^{ n}} \Gamma \) the projection of the set \(\Gamma \) on \(\mathbb {R}^{ n}\).

Theorem 4.1

(Weak duality) Let x and \((y,\lambda )\) be any feasible solutions in the considered optimization problem (P) and its Mond–Weir type dual problem (D), respectively. Further, assume that f is a quasidifferentiable r-invex function at y on \(X\cup Y\) with respect to \({ S}_{ f}{ (y)}=\underline{\partial }{ f(y)}+\overline{\partial }{ f(y)}\) and with respect to \(\eta \), and each \({ g}_{{ j}}\), \(j \in J(y)\), is a quasidifferentiable r-invex function at y on \(X\cup Y\) with respect to \({ S}_{{ g}_{ j} } { (y)}=\underline{\partial }{ g}_{ j} { (y)}+\overline{\partial }{ g}_{ j}{ (y)}\) and with respect to \(\eta \). Then, \(f(x) \ge f(y)\).

Proof

Let x and (y, \(\lambda )\) be any feasible solutions in the considered optimization problem (P) and its Mond–Weir type dual problem (D), respectively. This means that \(\lambda { (w)}=(\lambda _1 { (w)},\ldots ,\lambda _{ m}{ (w)}) \in \mathbb {R}^{{m}}\) and, moreover, for the given sets of \({ w}_0 \in \overline{\partial }{ f(y)}\) and \({ w}_{ j} \in \overline{\partial }{ g}_{ j}{ (y)}\), \({ j}\in { J}\) , the constraints (25)–(27) are fulfilled.

Suppose, contrary to the result, that

$$\begin{aligned} {f}({x})<{f}({y}). \end{aligned}$$
(28)

By hypotheses, f is a quasidifferentiable r-invex function at y on X \(\cup \) Y with respect to \({ S}_{ f}{ (y)}=\underline{\partial }{ f}{ (y)}+\overline{\partial }{ f(y)}\) and with respect to \(\eta \), and each \(g_{{ j}}\), j \(\in \) J(y), is a quasidifferentiable r-invex function at y on X \(\cup \) Y with respect to \({ S}_{{ g}_{ j} }{ (y)}=\underline{\partial }{ g}_{ j}{ (y)}+\overline{\partial }{ g}_{ j}{ (y)}\) and with respect to \(\eta \). Hence, by Definition 2.3, the following inequalities

$$\begin{aligned} \frac{1}{{ r}}{ e}^{{ rf(x)}}\ge \frac{1}{{ r}}{ e}^{{ rf(y)}}\left[ 1+{ r}\omega _0^{ T} \eta { (x,y)}\right] ,\,\,\,\,\,\forall \,\omega _0 \in { S}_{ f}{ (y)}, \end{aligned}$$
(29)
$$\begin{aligned} \frac{1}{{ r}}{ e}^{{ rg}_{ j}{ (x)}}\ge \frac{1}{{ r}}{ e}^{{ rg}_{ j}{ (y)}}\left[ 1+{ r}\omega _{ j}^{ T} \eta { (x,y)}\right] ,\,\,\,\,\,\forall \,\omega _{ j} \in { S}_{{ g}_{ j} } { (y)},\,\,\,{ j}\in {J}( {y}) \end{aligned}$$
(30)

hold. Combining (28) and (29), we get

$$\begin{aligned} \;\omega _0^{ T} \eta { (x,y)}<0,\,\,\,\,\forall \,\omega _0 \in { S}_{ f}{ (y)}. \end{aligned}$$
(31)

By \(x \in X\) , y \(\in \) Y and the constraint (26) of dual problem (D), it follows that

$$\begin{aligned} \lambda _{{ j}}{ (w)}{ g}_{{ j}}{ (x)} \le \lambda _{{ j}}{ (w)}{ g}_{{ j}}{ (y)}, \quad { j} \in { J}. \end{aligned}$$
(32)

Since \(\lambda _{{ j}}{ (w)} > 0\), j \(\in \) J(y), we can rewrite (30) in the following form

$$\begin{aligned} \frac{1}{{ r}}\left( { e}^{\frac{{ r}}{\lambda _{ j}{ (w)}}\left( \lambda _{ j}{ (w)}{ g}_{ j} { (x)}-\lambda _{ j}{ (w)}{ g}_{ j}{ (y)}\right) }-1\right) \ge \omega _{ j}^{ T} \eta { (x,y)},\,\,\,\,\,\forall \,\omega _{ j} \in { S}_{{ g}_{ j} } { (y)},\,{ j}\in {J}( {y}). \end{aligned}$$
(33)

Combining (32) and (33), we get

$$\begin{aligned} \omega _{ j}^{ T }\eta { (x,y)}\le 0,\,\,\,\forall \,\omega _{ j} \in { S}_{{ g}_{ j} } { (y)},{ j}\in {J}({y}). \end{aligned}$$
(34)

Taking into account the constraint (27) of dual problem (D), we obtain

$$\begin{aligned} \sum \limits _{{ j=1}}^{ m} {\lambda _{ j} (w)\omega _{ j}^{ T} \eta { (x,y)}} \;\le \;0,\,\,\,\forall \,\omega _{ j} \in { S}_{{ g}_{ j} } { (y)},{j}\in {J}. \end{aligned}$$
(35)

Hence, (31) and (35) yield

$$\begin{aligned} \left[ \omega _0^{ T} +\sum \limits _{{ j}=1}^{ m} {\lambda _{ j} { (w)}\omega _{ j}^{ T} } \right] \eta { (x,y)}<0,\quad \forall \,\omega _0 \in { S}_{ f} { (y)},\ \forall \,\omega _{ j} \in { S}_{{ g}_{ j} } { (y)},\ { j}\in { J}. \end{aligned}$$

This means, by the definitions of \(S_{{ f}}\)(y) and \({ S}_{{ g}_{ j} } { (y)}\), j \(\in \) J, that, for any sets of \({ w}_0 \in \overline{\partial }{ f(y)}\) and \({ w}_{ j} \in \overline{\partial }{ g}_{ j}{ (y)}\), j \(\in \) J, the following inequality

$$\begin{aligned} \left[ {{ v}_0^{ T} +{ w}_0^{ T} +\sum \limits _{{ j=1}}^{ m} {\lambda _{ j}{ (w)}\left( { v}_{ j}^{ T} +{ w}_{ j}^{ T}\right) } } \right] \eta { (x,y)}<0 \end{aligned}$$
(36)

holds for every \({ v}_0 \in \underline{\partial }{ f(y)}\) and \({ v}_{ j} \in \underline{\partial }{ g}_{ j}{ (y)}\), j \(\in \) J. However, by the constraint (25) of dual problem (D), for any sets of \({ w}_0 \in \overline{\partial }{ f(y)}\) and \({ w}_{ j} \in \overline{\partial }{ g}_{ j}{ (y)}\), j \(\in \) J, there exist \({ v}_0 \in \underline{\partial }{ f(y)}\) and \({ v}_{ j} \in \underline{\partial }{ g}_{ j}{ (y)}\), j \(\in \) J, such that the following equality

$$\begin{aligned} \left[ {{ v}_0^{ T} +{ w}_0^{ T} +\sum \limits _{{ j=1}}^{ m} {\lambda _{ j} { (w)}\left( { v}_{ j}^{ T} +{ w}_{ j}^{ T} \right) } } \right] \eta { (x,y)}=0 \end{aligned}$$

holds, which contradicts (36). This completes the proof of this theorem. \(\square \)

It turns out that, under a stronger r-invexity hypothesis imposed on the objective function, it is possible to prove a stronger result.

Theorem 4.2

(Weak duality) Let x and \((y,\lambda )\) be any feasible solutions in the considered optimization problem (P) and its Mond–Weir type dual problem (D), respectively. Further, assume that f is a strictly quasidifferentiable r-invex function at y on \(X\cup Y\) with respect to \({ S}_{ f} { (y)}=\underline{\partial }{ f(y)}+\overline{\partial }{ f(y)}\) and with respect to \(\eta \), each \({g}_{{j}}\) , \(j \in J(y)\), is a quasidifferentiable r-invex function at y on \(X\cup Y\) with respect to \({ S}_{{ g}_{ j} } { (y)}=\underline{\partial }{ g}_{ j} { (y)}+\overline{\partial }{ g}_{ j}{ (y)}\) and with respect to \(\eta \). Then \(f(x) > f(y)\).

Theorem 4.3

(Direct duality) Let \(\bar{{ x}}\) be an optimal solution in the considered optimization problem (P). Further, assume that there exists a function \(\bar{\lambda }\) : \(\mathbb {R}^{{ m}+1}\) \(\rightarrow \) \(\mathbb {R}^{{ m}}\) such that (\(\bar{{ x}},\bar{\lambda })\) is feasible in its Mond–Weir type dual problem (D). If, moreover, all hypotheses of the weak duality theorem (Theorem 4.1) are fulfilled, then (\(\bar{{ x}},\bar{\lambda })\) is optimal in the Mond–Weir type dual problem (D).

Proof

Assume that \(\bar{{ x}}\) is an optimal solution in the considered optimization problem (P) and, moreover, that there exists a function \(\bar{\lambda }\) : \(\mathbb {R}^{{ m}+1}\) \(\rightarrow \) \(\mathbb {R}^{{ m}}\) such that (\(\bar{{ x}},\bar{\lambda })\) is feasible in its Mond–Weir type dual problem (D). Since \(\bar{{ x}}\in { X}\), by weak duality (Theorem 4.1), it follows that

$$\begin{aligned} { f}(\bar{{ x}}) \ge \mathrm{sup}{\{} { f(y)} : ({ y},\lambda ) \quad \in \quad \Gamma {\}}. \end{aligned}$$

This means that (\(\bar{{ x}},\bar{\lambda })\) is optimal in the Mond–Weir type dual problem (D). \(\square \)

Remark 4.1

Note that the feasibility of (\(\bar{{ x}},\bar{\lambda })\) in the Mond–Weir type dual problem (D) does not follow from the Karush–Kuhn–Tucker necessary optimality conditions (12)–(14). To confirm the feasibility of (\(\bar{{ x}},\bar{\lambda })\) in the Mond–Weir type dual problem (D), the Lagrange multiplier \(\bar{\lambda } \in \mathbb {R}^{{ m}}\) should be the same for any choice of \({ w}_0 \in \overline{\partial }{ f(y)}\) and \({ w}_{ j} \in \overline{\partial }{ g}_{ j}{ (y)}\), j \(\in \) J. Indeed, if we assume that f and \({ g}_{{ j}}\), j \(\in \) J, are locally Lipschitz in a neighbourhood of \(\bar{{ x}}\) and quasidifferentiable at \(\bar{{ x}}\), then the above-mentioned property holds.

Theorem 4.4

(Converse duality) Let (\(\bar{{ y}},\bar{\lambda }\)) be an optimal solution in Mond–Weir type dual problem (D) and \(\bar{{ y}}\in { X}\). Further, assume that f is a quasidifferentiable r-invex function at \(\bar{{ y}}\) on \(X\cup Y\) with respect to \({ S}_{ f} (\bar{{ y}})=\underline{\partial }{ f}(\bar{{ y}})+\overline{\partial }f(\bar{{ y}})\) and with respect to \(\eta \), each g\(_{j}\), \(j \in J\)(\(\bar{{ y}}\)), is a quasidifferentiable r-invex function at \(\bar{{ y}}\) on \(X\cup Y\) with respect to \({ S}_{{ g}_{ j} } (\bar{{ y}})=\underline{\partial }{ g}_{ j} (\bar{{ y}})+\overline{\partial }{ g}_{ j} (\bar{{ y}})\) and with respect to \(\eta \). Then, \(\bar{y}\) is optimal in the considered nonsmooth optimization problem (P).

Proof

The proof of this theorem follows directly from the weak duality theorem (Theorem 4.1). \(\square \)

5 Conclusions

In this paper, a new class of nonconvex quasidifferentiable optimization problems with inequality constraints has been considered. Namely, all functions constituting the considered nonconvex nonsmooth optimization problem are quasidifferentiable r-invex with respect to convex compact sets. Under the hypotheses that the functions involved are r-invex with respect to the same function \(\eta \) and with respect to convex compact sets which are equal to the Minkowski sum of their subdifferentials and superdifferentials, sufficient optimality conditions and several duality results have been proved for such nonconvex nonsmooth optimization problems.

However, some interesting topics for further research remain. It would be of interest to investigate whether it is possible to prove similar results for a larger class of nonconvex nonsmooth extremum problems with quasidifferentiable functions and/or for other types of nonsmooth optimization problems. We shall investigate these questions in subsequent papers.