1 Introduction

The field of multiobjective programming, also known as vector programming, has attracted a lot of attention, since many real-world problems in decision theory, economics, engineering, game theory, management science, physics, and optimal control can be modeled as nonlinear vector optimization problems. Therefore, many approaches have been developed in the literature to address such problems. The properties of the objective function and the constraints determine the applicable technique. Considerable attention has been given recently to devising new methods which solve the given multiobjective programming problem by means of some associated optimization problem (see, for example, [1–3]).

Exact penalty function methods are important analytic and algorithmic techniques in nonlinear mathematical programming for solving nonlinear constrained scalar optimization problems. They transform the considered optimization problem into a single unconstrained optimization problem or into a finite sequence of unconstrained optimization problems, thus avoiding the infinite sequential process of the classical penalty function methods. Nondifferentiable exact penalty functions were introduced by Zangwill [4] and Pietrzykowski [5]. Much of the literature on nondifferentiable exact penalty functions is devoted to the study of scalar convex optimization problems (see, for example, [6–16], and others). However, some results on exact penalty functions used for solving various classes of nonconvex optimization problems have been established in the literature recently (see, for example, [17, 18]). Namely, in [17], Antczak introduced a new approach for solving nonconvex differentiable optimization problems involving r-invex functions. He defined a new exact absolute value penalty function method, called the exact exponential penalty function method, for solving nonconvex constrained scalar optimization problems. Further, under r-invexity hypotheses, Antczak established the equivalence between the sets of optimal solutions of the original scalar optimization problem with both inequality and equality constraints and of its associated penalized optimization problem with the exact exponential penalty function. Furthermore, in [17], a lower bound on the penalty parameter was provided such that this result holds whenever the penalty parameter is larger than this value.

In [19], Antczak defined a new vector exact \(l_{1}\) penalty function method and used it for solving nondifferentiable convex multiobjective programming problems. He gave conditions guaranteeing the equivalence of the sets of (weak) Pareto solutions of the considered convex nondifferentiable multiobjective programming problem and its associated vector penalized optimization problem with the vector exact \(l_{1}\) penalty function.

An exponential penalty function method was proposed by Murphy [20] for solving nonlinear differentiable scalar optimization problems. Exponential penalty function methods have been widely used in optimization theory by several authors for solving optimization problems of various types (see, for example, [21–29], and others).

The aim of this paper is to show that unconstrained global optimization methods can also be used for solving nondifferentiable constrained multiobjective programming problems, by resorting to an exact penalty approach. Namely, we extend the exact exponential penalty function method introduced by Antczak [17] to the vectorial case. Hence, we introduce a new vector exponential penalty function method, and we use it for solving a class of nondifferentiable multiobjective programming problems involving r-invex functions (with respect to the same function \(\eta \)). The method is based on the construction of an exact absolute value penalty function, which is minimized in the exponential penalized optimization problem associated with the method. This function is the sum of a certain “merit” function (which reflects the objective function of the original problem) and a penalty term which reflects the constraint set. The merit function is chosen as the composition of the exponential function and the original objective function, while the penalty term is obtained by multiplying a suitable function representing the constraints (in this case, again a sum of compositions of the exponential function with functions representing the individual constraints) by a positive parameter c, called the penalty parameter.

This work is organized as follows. In Sect. 2, some preliminary results are given that are useful in proving the main results of the paper. In Sect. 3, a new vector exponential penalty function method is introduced, and its algorithmic aspect is presented. The convergence of the sequence of weak Pareto solutions of the vector subproblems generated by the described method is established. In Sect. 4, the exactness of the penalization is extended to the case of an exact vector penalty function method. The results for the vector exterior exponential penalty function algorithm are reviewed, and the relationship between a weak Pareto solution in the original multiobjective programming problem and weak Pareto solutions in the associated penalized optimization subproblems is discussed. Thus, the exactness property is defined for the introduced vector exponential penalty function method. Namely, we prove that there exists a finite lower bound of the penalty parameter c such that, for every penalty parameter c exceeding this threshold, a (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem coincides with an unconstrained (weak) Pareto solution in its associated vector penalized optimization problem with the vector exact exponential penalty function. Also under nondifferentiable r-invexity, the converse result is established for sufficiently large penalty parameters exceeding the finite threshold. Hence, the equivalence between the considered nonconvex nondifferentiable multiobjective programming problem and its associated vector penalized optimization problem is established for sufficiently large penalty parameters under the assumption that all functions constituting the considered nonsmooth multiobjective programming problem are r-invex (with respect to the same function \(\eta \)). The results established in the paper are illustrated by suitable examples of nonconvex nondifferentiable vector optimization problems which we solve by using the vector exact exponential penalty function method defined in this paper. Finally, in Sect. 5, we discuss the consequences of extending the exact exponential penalty function method defined by Antczak [17] for scalar optimization problems to the vectorial case and its significance for vector optimization.

2 Preliminaries

The following convention for equalities and inequalities will be used throughout the paper.

For any \(x=\left( x_{1},x_{2},\ldots ,x_{n}\right) ^{T}, \ y=\left( y_{1},y_{2},\ldots ,y_{n}\right) ^{T}\), we define

  1. (i)

    \(x=y\) if and only if \(x_{i}=y_{i}\) for all \(i=1,2,\ldots ,n\);

  2. (ii)

    \(x<y\) if and only if \(x_{i}<y_{i}\) for all \(i=1,2,\ldots ,n\);

  3. (iii)

    \(x\leqq y\) if and only if \(x_{i}\leqq y_{i}\) for all \(i=1,2,\ldots ,n\);

  4. (iv)

    \(x\le y\) if and only if \(x\leqq y\) and \(x\ne y.\)
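For numerical experiments, these componentwise relations are straightforward to encode. The following minimal Python sketch (the helper names are ours, not part of the paper's notation) implements (i)–(iv) for NumPy vectors.

```python
import numpy as np

def eq(x, y):   # (i)   x = y
    return bool(np.all(x == y))

def lt(x, y):   # (ii)  x < y (strict in every component)
    return bool(np.all(x < y))

def leq(x, y):  # (iii) x <= y in every component
    return bool(np.all(x <= y))

def le(x, y):   # (iv)  x <= y and x != y
    return leq(x, y) and not eq(x, y)
```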

Definition 1

A function \(f:R^{n}\rightarrow R\) is locally Lipschitz at a point \(x\in R^{n}\) if there exist scalars \(K_{x}>0\) and \(\varepsilon >0\) such that the following inequality

$$\begin{aligned} \left| f(y)-f(z)\right| \leqq K_{x}\left\| y-z\right\| \end{aligned}$$

holds for all \(y, z\in x+\varepsilon B\), where B signifies the open unit ball in \(R^{n}\), so that \(x+\varepsilon B\) is the open ball of radius \(\varepsilon \) about x.

Definition 2

[30] The Clarke generalized directional derivative of a locally Lipschitz function \(f:X\rightarrow R\) at \(x\in X\) in the direction \(v\in R^{n}\), denoted \(f^{\,0}\left( x;v\right) \), is given by

$$\begin{aligned} f^{\,0}(x;v)=\underset{\underset{\lambda \downarrow 0}{y\rightarrow x}}{\lim \sup }\frac{f\left( y+\lambda v\right) -f(y)}{\lambda }. \end{aligned}$$

Definition 3

[30] The Clarke generalized subgradient of a locally Lipschitz function \(f:X\rightarrow R\) at \(x\in X\), denoted \(\partial f\left( x\right) \), is defined as follows:

$$\begin{aligned} \partial f\left( x\right) =\left\{ \xi \in R^{n}:f^{\,0}(x;d)\geqq \left\langle \xi ,d\right\rangle \text { for all }d\in R^{n}\right\} . \end{aligned}$$
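For intuition, \(f^{\,0}(x;v)\) can be probed numerically. The following rough sketch (a crude sampling approximation under our own tolerances, not a rigorous procedure) estimates the Clarke generalized directional derivative of \(f=\left| \cdot \right| \) at \(x=0\), where \(f^{\,0}(0;v)=\left| v\right| \) and \(\partial f(0)=[-1,1]\).

```python
import numpy as np

def clarke_dd(f, x, v, eps=1e-4, n_samples=5000, seed=0):
    # Estimate f^0(x; v) = limsup_{y -> x, t -> 0+} (f(y + t*v) - f(y)) / t
    # by maximizing the difference quotient over sampled points y near x
    # and sampled small step sizes t.
    rng = np.random.default_rng(seed)
    ys = x + eps * rng.uniform(-1.0, 1.0, size=n_samples)
    ts = 10.0 ** rng.uniform(-8.0, -5.0, size=n_samples)
    return float(np.max((f(ys + ts * v) - f(ys)) / ts))

print(clarke_dd(np.abs, 0.0, 1.0))   # ~ 1.0 = f^0(0; 1)
print(clarke_dd(np.abs, 0.0, -1.0))  # ~ 1.0 = f^0(0; -1): consistent with
                                     # the Clarke subgradient [-1, 1] at 0
```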

Lemma 4

[30] Let \(f:X\rightarrow R\) be a locally Lipschitz function on a nonempty open set \(X\subset R^{n}, u\) be an arbitrary point of X and \(\lambda \in R\). Then

$$\begin{aligned} \partial \left( \lambda {f}\right) \left( {u}\right) = \lambda \partial {f}\left( {u}\right) . \end{aligned}$$

Proposition 5

[30] Let \(f_{i}:X\rightarrow R , i=1,\ldots ,k\), be locally Lipschitz functions on a nonempty set \(X\subset R^{n}, u\) be an arbitrary point of \(X\subset R^{n}\). Then

$$\begin{aligned} \partial \left( \sum _{i=1}^{k}f_{i}\right) \left( u\right) \subseteq \sum _{i=1}^{k}\partial f_{i}\left( u\right) . \end{aligned}$$

Equality holds in the above relation if all but at most one of the functions \(f_{i}\) is strictly differentiable at u.

Corollary 6

[30] For any scalars \(\lambda _{i}\), one has

$$\begin{aligned} \partial \left( \sum _{i=1}^{k}\lambda _{i}f_{i}\right) \left( u\right) \subseteq \sum _{i=1}^{k}\lambda _{i}\partial f_{i}\left( u\right) , \end{aligned}$$

and equality holds if all but at most one of the functions \(f_{i}\) is strictly differentiable at u.

Theorem 7

[30] Let the function \(f:R^{n}\rightarrow R\) be locally Lipschitz at a point \(\overline{x}\in R^{n}\) and attain its (local) minimum at \(\overline{x}\). Then

$$\begin{aligned} 0\in \partial f\left( \overline{x}\right) . \end{aligned}$$

Proposition 8

[30] Let the functions \(f_{i}:R^{n}\rightarrow R, i\in I=\left\{ 1,\ldots ,k\right\} ,\) be locally Lipschitz at a point \(\overline{x}\in R^{n}\). Then the function \(f:R^{n}\rightarrow R\) defined by \(f(x):=\underset{i=1,\ldots ,k}{\max }f_{i}(x)\) is also locally Lipschitz at \(\overline{x}\). In addition,

$$\begin{aligned} \partial f\left( \overline{x}\right) \subset conv\left\{ \partial f_{i}\left( \overline{x}\right) :i\in I\left( \overline{x}\right) \right\} , \end{aligned}$$

where \(I(\overline{x}):=\left\{ i\in I:f(\overline{x})=f_{i}(\overline{x} )\right\} \).

Now, for the reader’s convenience, we give the definition of a nondifferentiable vector-valued (strictly) r-invex function (see [31] for a scalar case and [32] in the vectorial case).

Definition 9

Let X be a nonempty subset of \(R^{n}\) and \(f:R^{n}\rightarrow R^{k}\) be a vector-valued function such that each of its components is locally Lipschitz at a given point \(\overline{x}\in X\). If there exist a function \(\eta :X\times X\rightarrow R^{n}\) and a real number r such that, for \(i=1,\ldots ,k\), the following inequalities:

$$\begin{aligned} \begin{array}{lll} \frac{1}{r}e^{rf_{i}(x)}\geqq \frac{1}{r}e^{rf_{i}(\overline{x})}\left[ 1+r\left\langle \xi _{i},\eta \left( x,\overline{x}\right) \right\rangle \right] , &{}\quad \text {if} &{} r\ne 0 \\ \qquad f_{i}(x)-f_{i}(\overline{x})\geqq \left\langle \xi _{i},\eta \left( x, \overline{x}\right) \right\rangle , &{} \quad \text {if} &{} r=0 \end{array} \end{aligned}$$
(1)

hold for each \(\xi _{i}\in \partial f_{i}\left( \overline{x}\right) \) and all \(x\in X\), then f is said to be a nondifferentiable r-invex vector-valued function at \(\overline{x}\) on X (with respect to \(\eta \)). If inequalities (1) are satisfied at any point \(\overline{x}\in X\), then f is said to be a nondifferentiable r-invex function on X (with respect to \(\eta \)).

Each function \(f_{i}, i=1,\ldots ,k\), satisfying (1), is said to be locally Lipschitz r-invex at \(\overline{x}\) on X (with respect to \(\eta \) ).

Definition 10

Let X be a nonempty subset of \(R^{n}\) and \(f:R^{n}\rightarrow R^{k}\) be a vector-valued function such that each of its components is locally Lipschitz at a given point \(\overline{x}\in X\). If there exist a function \(\eta :X\times X\rightarrow R^{n}\) and a real number r such that, for \(i=1,\ldots ,k\), the following inequalities

$$\begin{aligned} \begin{array}{lll} \frac{1}{r}e^{rf_{i}(x)}>\frac{1}{r}e^{rf_{i}(\overline{x})}\left[ 1+r\left\langle \xi _{i},\eta \left( x,\overline{x}\right) \right\rangle \right] , &{} \quad \text {if} &{} r\ne 0 \\ f_{i}(x)-f_{i}(\overline{x})>\left\langle \xi _{i},\eta \left( x,\overline{x} \right) \right\rangle , &{} \quad \text {if} &{} r=0 \end{array} \end{aligned}$$
(2)

hold for each \(\xi _{i}\in \partial f_{i}\left( \overline{x}\right) \) and all \(x\in X, x\ne \overline{x}\), then f is said to be a nondifferentiable vector-valued strictly r-invex function at \(\overline{x}\) on X (with respect to \(\eta \)). If inequalities (2) are satisfied at any point \(\overline{x}\in X\), then f is said to be a nondifferentiable strictly r-invex function on X (with respect to \(\eta \)).

Remark 11

In order to define an analogous class of vector-valued r-incave functions with respect to \(\eta \), the direction of the inequalities (1) should be reversed.

Remark 12

Note that in the case when \(r=0\), the definition of a (strictly) r-invex vector-valued function reduces to the definition of nondifferentiable (strictly) invex vector-valued function (see, for example, [33, 34]).

Remark 13

For more details on the properties of nondifferentiable r-invex functions, we refer the reader, for example, to Antczak [31] for the scalar case and to Antczak [32] for the vectorial case.

Now, we prove a useful result which we will use in proving the main results of the paper.

Theorem 14

Let \(\overline{x}\in X\subset R^{n}\) and \(q:X\rightarrow R\) be a locally Lipschitz r-invex function at \(\overline{x}\in X\) on X with respect to \(\eta :X\times X\rightarrow R^{n}\). Further, let \(\frac{1}{r}\left( e^{rq^{+}(x)}-1\right) :=\max \left\{ 0,\frac{1}{r}\left( e^{rq(x)}-1\right) \right\} \). Then the function \(\frac{1}{r}\left( e^{rq^{+}(\cdot )}-1\right) \) is a locally Lipschitz invex function at \(\overline{x}\in X\) on X with respect to the same function \(\eta \).

Proof

We consider the following cases:

  1. (1)

    \(q\left( \overline{x}\right) >0\).

    Then \(\frac{1}{r}\left( e^{rq^{+}(x)}-1\right) =\frac{1}{r}\left( e^{rq(x)}-1\right) \) on some neighborhood of \(\overline{x}\), and so \(\partial \left( \frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \right) =\partial \left( \frac{1}{r}\left( e^{rq(\overline{x})}-1\right) \right) \). By assumption, q is a locally Lipschitz r-invex function at \(\overline{x}\in X\) on X with respect to \(\eta :X\times X\rightarrow R^{n}\). Therefore, for any \(\zeta ^{+}\in \partial \left( \frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \right) \) and all \(x\in X\), we have

    $$\begin{aligned} \left\langle \zeta ^{+},\eta \left( x,\overline{x}\right) \right\rangle\leqq & {} e^{rq(\overline{x})}q^{0}\left( \overline{x};\eta \left( x,\overline{x} \right) \right) \leqq \frac{1}{r}\left( e^{rq\left( x\right) }-e^{rq( \overline{x})}\right) \nonumber \\\leqq & {} \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) - \frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) . \end{aligned}$$
    (3)
  2. (2)

    \(q\left( \overline{x}\right) <0\).

    Then, by definition, \(\frac{1}{r}\left( e^{rq^{+}(x)}-1\right) =0\) on some neighborhood of \(\overline{x}\), and so \(\partial \left( \frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \right) =\left\{ 0\right\} \). Therefore, for any \(\zeta ^{+}\in \partial \left( \frac{1}{r}\left( e^{rq^{+}(\overline{ x})}-1\right) \right) \) and all \(x\in X\), we have

    $$\begin{aligned} 0=\left\langle \zeta ^{+},\eta \left( x,\overline{x}\right) \right\rangle = \frac{1}{r}\left( e^{rq^{+}\left( \overline{x}\right) }-1\right) \leqq \frac{ 1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}( \overline{x})}-1\right) . \end{aligned}$$
    (4)
  3. (3)

    \(q\left( \overline{x}\right) =0\).

    By Proposition 8, it follows that

    $$\begin{aligned} \partial \left( \frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \right) \subset conv\left\{ \partial \left( \frac{1}{r}\left( e^{rq(\overline{x} )}-1\right) \right) ,0\right\} . \end{aligned}$$
    (5)

    By assumption, q is a locally Lipschitz r-invex function at \(\overline{x}\in X\) on X with respect to the function \(\eta \). Therefore, by definition, for any \(\zeta \in \partial \left( \frac{1}{r}\left( e^{rq(\overline{x})}-1\right) \right) \) and all \(x\in X\), we have

    $$\begin{aligned}&\left\langle \zeta ,\eta \left( x,\overline{x}\right) \right\rangle \leqq e^{rq(\overline{x})}q^{0}\left( \overline{x};\eta \left( x,\overline{x} \right) \right) \leqq \frac{1}{r}\left( e^{rq\left( x\right) }-e^{rq( \overline{x})}\right) \nonumber \\&\quad =\frac{1}{r}\left( e^{rq\left( x\right) }-1\right) \leqq \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) =\frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \nonumber \\ \end{aligned}$$
    (6)

    or

    $$\begin{aligned} 0=\left\langle 0,\eta \left( x,\overline{x}\right) \right\rangle =-\frac{1}{r }\left( e^{rq^{+}(\overline{x})}-1\right) \leqq \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}(\overline{x} )}-1\right) . \end{aligned}$$
    (7)

    Hence, by (6) and (7), the following relations

    $$\begin{aligned}&\left\langle \lambda \zeta +\left( 1-\lambda \right) 0,\eta \left( x, \overline{x}\right) \right\rangle \nonumber \\&\quad \leqq \lambda \left( \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1 }{r}\left( e^{rq^{+}(\overline{x})}-1\right) \right) +\left( 1-\lambda \right) \Bigg (\frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) \nonumber \\&\qquad -\frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \Bigg )=\frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}(\overline{x} )}-1\right) \end{aligned}$$
    (8)

    hold for every \(\lambda \in \left[ 0,1\right] \). Thus, (8) implies that, for any \(\zeta ^{+}\in \partial \left( \frac{1}{r}\left( e^{rq^{+}( \overline{x})}-1\right) \right) \) and all \(x\in X\), the following inequality

    $$\begin{aligned} \frac{1}{r}\left( e^{rq^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rq^{+}(\overline{x})}-1\right) \geqq \left\langle \zeta ^{+},\eta \left( x,\overline{x}\right) \right\rangle \end{aligned}$$
    (9)

    holds. Hence, by (3), (4), (9) and Remark 12, we conclude that \(\frac{1}{r}\left( e^{rq^{+}(\cdot )}-1\right) \) is a nondifferentiable invex function at \(\overline{x}\in X\) on X with respect to \(\eta \).

\(\square \)

In general, the unconstrained nonsmooth vectorial optimization problem is represented as follows:

$$\begin{aligned} \begin{array}{l} f(x):=\left( f_{1}(x),\ldots ,f_{k}(x)\right) \rightarrow V-\mathrm{min} \\ \qquad \qquad \quad \!\text {subject to }x\in X, \end{array} \quad \text { (UVP)} \end{aligned}$$

where the objective functions \(f_{i}:X\rightarrow R, i\in I=\{1,\ldots ,k\}\), are locally Lipschitz on X, where X is a nonempty open subset of \(R^{n}\).

In general, the concept of an optimal solution used in scalar optimization does not carry over directly to multiobjective programming problems. For such multicriteria optimization problems, an optimal solution is defined in terms of a (weak) Pareto solution [a (weakly) efficient solution] in the following sense:

Definition 15

A feasible point \(\overline{x}\) is said to be a Pareto solution (efficient solution) for a vector optimization problem if and only if there exists no other feasible solution x such that

$$\begin{aligned} f(x)\le f(\overline{x}). \end{aligned}$$

Definition 16

A feasible point \(\overline{x}\) is said to be a weak Pareto solution (weakly efficient solution, weak minimum) for a vector optimization problem if and only if there exists no other feasible solution x such that

$$\begin{aligned} f(x)<f(\overline{x}). \end{aligned}$$

It is easy to verify that every Pareto solution is a weak Pareto solution.
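On a finite set of sampled feasible points, Definitions 15 and 16 reduce to direct dominance checks; the sketch below (with our own, hypothetical helper names) tests whether a candidate is nondominated within the sample.

```python
import numpy as np

def is_weak_pareto_in_sample(x_bar, sample, f):
    # Definition 16: no sampled x may satisfy f(x) < f(x_bar) componentwise.
    fb = np.asarray(f(x_bar))
    return not any(np.all(np.asarray(f(x)) < fb) for x in sample)

def is_pareto_in_sample(x_bar, sample, f):
    # Definition 15: no sampled x may satisfy f(x) <= f(x_bar), f(x) != f(x_bar).
    fb = np.asarray(f(x_bar))
    for x in sample:
        fx = np.asarray(f(x))
        if np.all(fx <= fb) and np.any(fx < fb):
            return False
    return True
```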

The following result gives the necessary optimality condition for the unconstrained vectorial optimization problem (UVP) (see [35]).

Theorem 17

A necessary condition for the point \(\overline{x}\) to be (weak) Pareto optimal in the nondifferentiable vector optimization problem (UVP) is that there exists a multiplier vector \(\overline{\lambda }\in R^{k}\) such that

$$\begin{aligned}&0\in {\sum }_{i=1}^{k}\overline{\lambda }_{i}\partial f_{i}\left( \overline{x} \right) , \\&\overline{\lambda }\ge 0, {\sum }_{i=1}^{k}\overline{\lambda }_{i}=1. \end{aligned}$$

Often, the feasible set of a multiobjective programming problem can be represented by functional inequalities and, therefore, we consider the nondifferentiable constrained vector optimization problem in the following form

$$\begin{aligned} \begin{array}{ll} &{}f(x):=\left( f_{1}(x),\ldots ,f_{k}(x)\right) \rightarrow V\text {-min} \\ \text {subject to }&{}g(x):=\left( g_{1}(x),\ldots ,g_{m}(x)\right) \leqq 0, \\ &{}\qquad \qquad \qquad x\in X, \end{array} \end{aligned}$$
(VP)

where \(f_{i}:X\rightarrow R, i\in I=\{1,\ldots ,k\}\) and \(g_{j}:X\rightarrow R , j\in J=\{1,\ldots ,m\}\), are locally Lipschitz functions on a nonempty open set \(X\subset R^{n}\).

Let

$$\begin{aligned} D:=\left\{ x\in X:g(x)\leqq 0\right\} \end{aligned}$$

denote the set of all feasible solutions of the constrained multiobjective programming problem (VP).

It is well known (see, for example, [33–38]) that the following conditions, known as the generalized form of the Karush–Kuhn–Tucker conditions, are necessary for a (weak) Pareto solution in the considered nondifferentiable vector optimization problem (VP).

Theorem 18

Let \(\overline{x}\in D\) be a (weak) Pareto solution in problem (VP) and a constraint qualification (see, for example, [30, 36, 39]) be satisfied at \(\overline{x}\). Then, there exist the Lagrange multipliers \(\overline{\lambda }\in R^{k}\) and \(\overline{\mu }\in R^{m}\) such that

$$\begin{aligned}&0\in \sum _{i=1}^{k}\overline{\lambda }_{i}\partial f_{i}\left( \overline{x} \right) +\sum _{j=1}^{m}\overline{\mu }_{j}\partial g_{j}\left( \overline{x} \right) , \end{aligned}$$
(10)
$$\begin{aligned}&\overline{\mu }_{j}g_{j}\left( \overline{x}\right) =0,\quad \ j\in J, \end{aligned}$$
(11)
$$\begin{aligned}&\overline{\lambda }\ge 0, \sum _{i=1}^{k}\overline{\lambda }_{i}=1 , \overline{\mu }\geqq 0. \end{aligned}$$
(12)

Definition 19

The point \(\overline{x}\in D\) is said to be a Karush–Kuhn–Tucker point in the considered multiobjective programming problem (VP) if there exist the Lagrange multipliers \(\overline{\lambda }\in R^{k}, \overline{\mu }\in R^{m}\) such that the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) are satisfied at \(\overline{x}\).

3 Convergence of a New Vector Exponential Penalty Function Method for a Multiobjective Programming Problem

For the considered nonlinear multiobjective programming problem (VP), we introduce a new vector exponential penalty function method as follows:

$$\begin{aligned} P_{r}(x,c)=\frac{1}{r}e^{rf(x)}+c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) e\rightarrow V\text {-min}, \quad \text {(VP}_{r}\text {(}c \text {))} \end{aligned}$$
(13)

where r is a finite real number not equal to 0 and \(e=\left( 1,\ldots ,1\right) \in R^{k}\). Note that, for a given constraint \(g_{j}(x)\leqq 0 \), the function \(\frac{1}{r}\left( e^{rg_{j}^{+}(\cdot )}-1\right) \) defined by

$$\begin{aligned} \frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) =\left\{ \begin{array}{lll} 0 &{} \quad \text {if } &{} g_{j}(x)\leqq 0 \\ \frac{1}{r}\left( e^{rg_{j}(x)}-1\right) &{} \quad \text {if } &{} g_{j}(x)>0 \end{array} \right. \end{aligned}$$
(14)

is equal to zero for all x that satisfy the constraint and has a positive value whenever this constraint is violated. Moreover, larger violations of the constraint \(g_{j}(x)\leqq 0\) result in larger values of \(\frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) \). Thus, the function \(\frac{1}{r}\left( e^{rg_{j}^{+}(\cdot )}-1\right) \) has the penalty features relative to the single inequality constraint \(g_{j}\). However, observe that at points where \(g_{j}(x)=0\), the foregoing function might not be differentiable, even though \(g_{j}\) is differentiable.
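In code, the single-constraint penalty term (14) is a one-liner; the sketch below (our own helper, valid for any \(r\ne 0\), including \(r<0\)) makes these penalty features explicit.

```python
import numpy as np

def exp_penalty_term(g_x, r):
    # (1/r)(e^{r g^+(x)} - 1) from (14): zero whenever g(x) <= 0,
    # positive and increasing in the violation whenever g(x) > 0.
    return (np.exp(r * max(g_x, 0.0)) - 1.0) / r

print([round(exp_penalty_term(g, r=1.0), 3) for g in (-2.0, 0.0, 0.5, 2.0)])
# -> [0.0, 0.0, 0.649, 6.389]   (i.e. e^0.5 - 1 and e^2 - 1)
```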

As follows from (13), the vector penalized problem \((\hbox {VP}_{r}(c))\) constructed in the vector exponential penalty function method is an unconstrained vector optimization problem in which the vector objective function is the sum of a certain vector “merit” function (which reflects the vector objective function of the given multiobjective programming problem) and a penalty term which reflects the constraint set and is added to each component of the vector “merit” function. The vector merit function is chosen as the composition of the exponential function and the original vector objective function, while the penalty term (the same for each component of the merit function) is obtained by multiplying a suitable function, which represents the constraints, by a positive parameter c, called the penalty parameter.

Remark 20

Note that \(P_{r}:R^{n}\times R_{+}\rightarrow R^{k}\) and \( P_{r}(x,c)=(P_{r1}(x,c),\ldots ,P_{rk}(x,c))\), where \(P_{ri}(x,c):=\frac{1}{r} e^{rf_{i}(x)}+c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) , i=1,\ldots ,k.\)

Remark 21

In the case when \(r=0\), the definition of the vector penalized problem \((\hbox {VP} _{r}(c))\) reduces to the following form:

$$\begin{aligned} P_{0}(x,c)=f(x)+c\sum _{j=1}^{m}g_{j}^{+}(x)e\rightarrow V\text {-min.}\ \ \ \ \text {(VP}_{0}\text {(}c\text {))} \end{aligned}$$
(15)

Thus, in the case when \(r=0\), we recover the classical vector \(l_{1}\) penalty function method (see Antczak [19]).
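The reduction in Remark 21 follows from the elementary limit \(\lim _{r\rightarrow 0}\frac{1}{r}\left( e^{rt}-1\right) =t\); a quick numerical check of the penalty term:

```python
import numpy as np

g_plus = 0.7  # an arbitrary positive constraint violation g_j^+(x)
for r in (1.0, 0.1, 0.01, 0.001):
    print(r, (np.exp(r * g_plus) - 1.0) / r)
# the exponential penalty term tends to g_plus = 0.7, i.e. to the
# l1 penalty term of (15), as r -> 0
```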

Now, we show that a weak Pareto solution in the considered multiobjective programming problem (VP) can be obtained by solving a sequence of problems (13) with the penalty parameter c selected from an increasing sequence of parameters \(\left( c_{n}\right) \).

Therefore, for the considered multiobjective programming problem (VP), we now construct a sequence of vector penalized optimization problems \((\hbox {VP}_{r}(c_{n}))\), \(n=1,2,\ldots \), with the vector exponential penalty function as follows:

$$\begin{aligned} P_{r}(x,c_{n})=\frac{1}{r}e^{rf(x)}+c_{n}\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}(x)}-1\right) e\rightarrow V\text {-min},\quad \hbox {(VP}_{r}\text {(} c_{n}\text {))} \end{aligned}$$
(16)

where \(c_{n}>0\) and \(\underset{n\rightarrow \infty }{\lim }c_{n}=\infty \). Moreover, we denote by \(\overline{x}_{n}\) an approximate weak Pareto solution in the vector penalized optimization problem \((\hbox {VP}_{r}(c_{n}))\) with the vector exact exponential penalty function.

An algorithmic framework that forms the basis for the introduced vector exponential penalty function method is as follows: choose an initial penalty parameter \(c_{1}>0\); at step n, find an approximate weak Pareto solution \(\overline{x}_{n}\) in the vector penalized optimization problem \((\hbox {VP}_{r}(c_{n}))\); if \(\overline{x}_{n}\) is feasible in the considered multiobjective programming problem (VP), then stop; otherwise, choose \(c_{n+1}>c_{n}\) and repeat.
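A possible concrete reading of this framework is sketched below. The weighted-sum scalarization used to extract one (approximate) weak Pareto point of each subproblem \((\hbox {VP}_{r}(c_{n}))\), the Nelder–Mead inner solver, and all tolerances are our own assumptions; the framework itself prescribes only the outer loop.

```python
import numpy as np
from scipy.optimize import minimize

def exp_penalty_method(f_list, g_list, x0, r=1.0, c0=1.0, beta=10.0,
                       tol=1e-4, max_iter=25):
    # Outer exterior-penalty loop: approximately solve (VP_r(c_n)) and
    # enlarge c_n until the iterate is (nearly) feasible.  Equal-weight
    # scalarization of the vector objective is our assumption for
    # extracting one weak Pareto point; it is not prescribed by (16).
    w = np.full(len(f_list), 1.0 / len(f_list))
    x, c = np.asarray(x0, dtype=float), c0
    for _ in range(max_iter):
        def scalarized(z, c=c):
            pen = sum((np.exp(r * max(g(z), 0.0)) - 1.0) / r for g in g_list)
            obj = sum(wi * np.exp(r * f(z)) / r for wi, f in zip(w, f_list))
            return obj + c * pen
        # Nelder-Mead copes with the nonsmoothness of the penalty term.
        x = minimize(scalarized, x, method="Nelder-Mead").x
        if max(g(x) for g in g_list) <= tol:
            return x, c          # (nearly) feasible iterate: stop
        c *= beta                # c_n -> infinity, as in the convergence theory
    return x, c

# A toy instance: f(x) = (x^2, (x - 2)^2), g(x) = 1 - x <= 0; the loop
# should return a point near the constrained weak Pareto solution x = 1.
x_star, c_fin = exp_penalty_method(
    [lambda z: z[0] ** 2, lambda z: (z[0] - 2.0) ** 2],
    [lambda z: 1.0 - z[0]], x0=[0.0])
print(x_star, c_fin)
```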

Now, we prove the convergence theorem for the introduced vector exponential penalty function method. Namely, we show that if \(\left( \overline{x}_{n_{s}}\right) \) is any convergent subsequence of \(\left( \overline{x}_{n}\right) \) and \(\underset{s\rightarrow \infty }{\lim }\overline{x}_{n_{s}}=\overline{x}\in D\), then \(\overline{x}\) is a weak Pareto solution in the considered multiobjective programming problem (VP).

First, we show that any limit point \(\overline{x}\) of the sequence \(\left( \overline{x}_{n}\right) \), that is, the sequence of approximate weak Pareto solutions in vector penalized optimization problems \((\hbox {VP}_{r}(c_{n}))\) with the vector exact exponential penalty function, is feasible in the considered multiobjective programming problem (VP).

Lemma 22

Let \(c_{n}>0\) and \(\underset{n\rightarrow \infty }{ \lim }c_{n}=\infty \). If \(\overline{x}=\underset{n\rightarrow \infty }{\lim } \overline{x}_{n}\), then \(\overline{x}\) is feasible in the considered multiobjective programming problem (VP).

Proof

Let \(\overline{x}=\underset{n\rightarrow \infty }{\lim }\overline{x}_{n}\). Thus, there exists a subsequence \(\{\overline{x}_{n_{s}}\}\) of \(\{\overline{x }_{n}\}\) such that \(\overline{x}_{n_{s}}\) is an approximate weak Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(c_{n_{s}})), s=1,2,\ldots , \) with the vector exponential penalty function and, moreover, \(\underset{ s\rightarrow \infty }{\lim }\overline{x}_{n_{s}}=\overline{x}\). We proceed by contradiction. Suppose, contrary to the result, that \(\overline{x}\notin D \). If we take \(\widetilde{x}\in D\), then, according to the definition of a vector penalized problem \((\hbox {VP}_{r}(c_{n_{s}}))\) and Definition 16, there exists \(i_{n_{s}}\in \left\{ 1,\ldots ,k\right\} \) such that

$$\begin{aligned} \frac{1}{r}e^{rf_{i_{n_{s}}}(\widetilde{x})}+c_{n_{s}}\sum _{j=1}^{m}\frac{1}{ r}\left( e^{rg_{j}^{+}\left( \widetilde{x}\right) }-1\right) \geqq \frac{1}{r }e^{rf_{i_{n_{s}}}(\overline{x}_{n_{s}})}+c_{n_{s}}\sum _{j=1}^{m}\frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}_{n_{s}}\right) }-1\right) \end{aligned}$$
(17)

Since \(\widetilde{x}\in D\), by (14), we have

$$\begin{aligned} \sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \widetilde{x}\right) }-1\right) =0. \end{aligned}$$
(18)

By assumption, \(\overline{x}\notin D\). This means that there exists \(j\in \{1,2,\ldots ,m\}\) such that \(g_{j}\left( \overline{x}\right) >0\). Then, by (14), it follows that

$$\begin{aligned} \sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) >0. \end{aligned}$$
(19)

Hence, (19) implies that

$$\begin{aligned} \sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) >\varepsilon \end{aligned}$$
(20)

for some \(\varepsilon >0\). By assumption, \(\underset{s\rightarrow \infty }{ \lim }\overline{x}_{n_{s}}=\overline{x}\). Since \(\sum _{j=1}^{m}\frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}_{n_{s}}\right) }-1\right) \rightarrow \sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x} \right) }-1\right) \), for all s sufficiently large, by (20), we have

$$\begin{aligned} \sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x} _{n_{s}}\right) }-1\right) >\varepsilon . \end{aligned}$$
(21)

Therefore, using (18) and (21), for s sufficiently large, we have

$$\begin{aligned}&\frac{1}{r}e^{rf_{i_{n_{s}}}(\overline{x}_{n_{s}})}+c_{n_{s}}\sum _{j=1}^{m} \frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}_{n_{s}}\right) }-1\right) -\left[ \frac{1}{r}e^{rf_{i_{n_{s}}}(\widetilde{x})}+c_{n_{s}}\sum _{j=1}^{m} \frac{1}{r}\left( e^{rg_{j}^{+}\left( \widetilde{x}\right) }-1\right) \right] \\&\qquad > \frac{1}{r}e^{rf_{i_{n_{s}}}(\overline{x}_{n_{s}})}-\frac{1}{r} e^{rf_{i_{n_{s}}}(\widetilde{x})}+c_{n_{s}}\varepsilon \rightarrow \infty , \text {as }s\rightarrow \infty . \end{aligned}$$

This contradicts (17). This means that \(\overline{x}\in D\), and the proof of this lemma is completed. \(\square \)

The following theorem shows that if a sequence of approximate weak Pareto solutions in the vector penalized problem \((\hbox {VP}_{r}(c_{n}))\) with the vector exponential penalty function converges to \(\overline{x}\), then \( \overline{x}\) is also a weak Pareto solution in the considered multiobjective programming problem (VP).

Theorem 23

Let \(\overline{x}_{n}\) be an approximate weak Pareto optimal solution in the vector penalized optimization problem \((\hbox {VP}_{r}(c_{n}))\) with the vector exponential penalty function, \(n=1,2,\ldots \). If \(\left( \overline{x} _{n_{s}}\right) \) is any convergent subsequence of \(\left( \overline{x} _{n}\right) \) and \(\underset{s\rightarrow \infty }{\lim }\overline{x} _{n_{s}}=\overline{x}\), then \(\overline{x}\) is a weak Pareto solution in the considered multiobjective programming problem (VP).

Proof

Let \(\left( \overline{x}_{n_{s}}\right) \) be any convergent subsequence of \( \left( \overline{x}_{n}\right) \) and \(\underset{s\rightarrow \infty }{\lim } \overline{x}_{n_{s}}=\overline{x}\). By Lemma 22, it follows that \(\overline{x}\) is feasible in the considered multiobjective programming problem (VP). We proceed by contradiction. Suppose, contrary to the result, that \(\overline{x}\) is not a weak Pareto solution in the considered multiobjective programming problem (VP). Hence, by Definition 16, it follows that there exists \(\widetilde{x}\in D\) such that

$$\begin{aligned} f_{i}(\widetilde{x})<f_{i}(\overline{x}),\quad i=1,\ldots ,k. \end{aligned}$$
(22)

Thus,

$$\begin{aligned} \frac{1}{r}e^{rf_{i}(\widetilde{x})}<\frac{1}{r}e^{rf_{i}(\overline{x})} , \quad i=1,\ldots ,k. \end{aligned}$$
(23)

Since \(\overline{x}_{n_{s}}\) is a weak Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(c_{n_{s}}))\), there exists \(i_{n_{s}}\in \left\{ 1,\ldots ,k\right\} \) such that

$$\begin{aligned} \frac{1}{r}e^{rf_{i_{n_{s}}}(\widetilde{x})}+c_{n_{s}}\sum _{j=1}^{m}\frac{1}{ r}\left( e^{rg_{j}^{+}\left( \widetilde{x}\right) }-1\right) \geqq \frac{1}{r }e^{rf_{i_{n_{s}}}(\overline{x}_{n_{s}})}+c_{n_{s}}\sum _{j=1}^{m}\frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}_{n_{s}}\right) }-1\right) . \end{aligned}$$
(24)

Since \(\widetilde{x}\in D\), by (14), we have

$$\begin{aligned} \sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \widetilde{x}\right) }-1\right) =0. \end{aligned}$$
(25)

Also by (14), it follows that

$$\begin{aligned} \sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x} _{n_{s}}\right) }-1\right) \geqq 0. \end{aligned}$$
(26)

Combining (24)–(26), we get

$$\begin{aligned} \frac{1}{r}e^{rf_{i_{n_{s}}}(\widetilde{x})}\geqq \frac{1}{r} e^{rf_{i_{n_{s}}}(\overline{x}_{n_{s}})}. \end{aligned}$$
(27)

By (23), it follows that

$$\begin{aligned} \frac{1}{r}e^{rf_{i_{n_{s}}}(\widetilde{x})}<\frac{1}{r}e^{rf_{i_{n_{s}}}( \overline{x})}. \end{aligned}$$
(28)

Since \(\underset{s\rightarrow \infty }{\lim }\overline{x}_{n_{s}}=\overline{x }\), for sufficiently large s, (28) implies that the following inequality

$$\begin{aligned} \frac{1}{r}e^{rf_{i_{n_{s}}}(\widetilde{x})}<\frac{1}{r}e^{rf_{i_{n_{s}}}( \overline{x}_{n_{s}})} \end{aligned}$$
(29)

holds, contradicting (27). This means that any limit point of any convergent subsequence of \(\left( \overline{x}_{n}\right) \) is a weak Pareto solution in the considered multiobjective programming problem (VP). The proof of this theorem is completed. \(\square \)

It turns out that the strategy for choosing the penalty parameter \(c_{n}\) is crucial to the practical success of the algorithm presented above. If the initial choice \(c_{0}\) is too small, many cycles of the algorithm presented above may be needed to determine an appropriate solution.

In order to illustrate the difficulties caused by an inappropriate value of c, we consider the following multiobjective programming problem.

Example 24

Consider the following vector optimization problem:

$$\begin{aligned} \begin{array}{l} f(x)=\left( x-1,x-1\right) \rightarrow V\mathrm{{-\min }} \\ g(x)=1-x\leqq 0, \\ x\in X=\left\{ x\in R:x\geqq 0\right\} . \end{array} \qquad \text {(VP0)} \end{aligned}$$

Note that \(D=\left\{ x\in R:0\leqq x\leqq 1\right\} \) and \(\overline{x}=1\) is a Pareto solution in the considered multiobjective programming problem (VP0). Note that all functions constituting problem (VP0) are 1-invex on R with respect to the same function \(\eta \), where \(\eta \left( x, \overline{x}\right) =x-\overline{x}\). We define the vector exponential penalty function \(P_{1}(\cdot ,c)\) as follows:

$$\begin{aligned} P_{1}(x,c)=\left\{ \begin{array}{lll} \left( e^{x-1}+ce^{1-x},e^{x-1}+ce^{1-x}\right) &{}\quad \text {if} &{} x<1, \\ \left( e^{x-1},e^{x-1}\right) &{} \quad \text {if} &{} x\geqq 1. \end{array} \right. \end{aligned}$$

Now, we consider various values of the penalty parameter c and plot each component of the vector exponential penalty function \(P_{1}(\cdot ,c)\) for the considered multiobjective programming problem (VP0) for these values of the penalty parameter c in Fig. 1.

Fig. 1: Each component of the vector exponential penalty function \(P_{1}(\cdot ,c)\) for the considered multiobjective programming problem (VP0) for the penalty parameter values (a) \(c=0.14\), (b) \(c=0.5\), (c) \(c=1.5\)

Note that each component of the vector exponential penalty function \(P_{1}(\cdot ,c)\) is a monotonically increasing function when c is smaller than 0.15. On the other hand, when c is chosen from the interval (0.15, 1), the vector exponential penalty function has a Pareto solution, but not at the feasible solution \(\overline{x}=1\). Finally, the vector exponential penalty function \(P_{1}(\cdot ,c)\) has a Pareto solution at \(\overline{x}=1\) when \(c>1\).

Therefore, if, for example, the current iterate in the above algorithm is \( \overline{x}_{n}=\frac{1}{4}\) and the penalty parameter \(c_{n}\) is chosen to be less than 1, then almost any implementation of the vector exponential penalty method will give a step that moves away from the Pareto solution \( \overline{x}=1\). This behavior of the algorithm will be repeated, producing increasingly poorer iterates, until the penalty parameter c is increased above the threshold equal to 1.

As our further considerations show, this threshold value is not accidental (see Remark 37 below).
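The behavior described above can be reproduced by a direct grid search over the penalty parameter values used in Fig. 1 (the discretization below is our own):

```python
import numpy as np

# Both components of P_1(., c) for (VP0) equal
# p(x, c) = e^{x-1} + c e^{1-x} for x < 1 and e^{x-1} for x >= 1.
def p(x, c):
    return np.where(x < 1.0,
                    np.exp(x - 1.0) + c * np.exp(1.0 - x),
                    np.exp(x - 1.0))

xs = np.linspace(0.0, 3.0, 300001)
for c in (0.14, 0.5, 1.5):
    print(c, xs[np.argmin(p(xs, c))])
# the computed minimizer lies at the very left end of the plotted range
# for c = 0.14, sits strictly inside (0, 1) for c = 0.5 (about 0.65),
# and reaches the Pareto solution x = 1 once c > 1.
```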

4 Exactness of the Introduced Vector Exponential Penalty Function Method

In order to avoid the need for an unbounded sequence of penalty parameters, in other words, an infinite sequence of penalized optimization problems, we now prove that the introduced vector exponential penalty function method is exact in the sense that a (weak) Pareto solution in the original multiobjective programming problem coincides with a (weak) Pareto solution in the associated vector penalized problem for a finite (sufficiently large) value of the penalty parameter. In order to prove this result, we assume that all functions constituting the considered multiobjective programming problem (VP) are locally Lipschitz r-invex with respect to the same function \(\eta \).

Now, in a natural way, we extend the well-known definition of exactness property for a scalar exact penalty function to the vectorial case.

Definition 25

If a threshold value \(\overline{c}\geqq 0\) exists such that, for every \(c>\overline{c}\),

$$\begin{aligned} \arg \,(\mathrm{{weak}})\,\mathrm{{Pareto}}\left\{ P_{r}(x,c):x\in R^{n}\right\} =\arg \,(\mathrm{{weak}})\,\mathrm{{Pareto}}\left\{ f(x):x\in D\right\} , \end{aligned}$$

then the function \(P_{r}(x,c)\) is termed a vector exact exponential penalty function.

According to the definition of the function \(P_{r}(x,c)\), we call \((\hbox {VP}_{r}( c))\), defined by (13), the vector penalized problem with the vector exact exponential penalty function.

It is clear that, conceptually, if \(P_{r}\left( x,c\right) \) is a vector exact exponential penalty function, we can find the constrained (weak) Pareto solutions in the considered multiobjective programming problem (VP) by looking for the unconstrained (weak) Pareto solutions of the function \(P_{r}\left( x,c\right) \), for sufficiently large (but finite) values of the penalty parameter c.

Now, for sufficiently large values of the penalty parameter c, we prove the equivalence between the sets of (weak) Pareto solutions of problem (VP) and the vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function.

First, we establish that a Karush–Kuhn–Tucker point in the considered multiobjective programming problem is a weak Pareto solution of the vector exact exponential penalty function in the associated vector penalized problem \((\hbox {VP}_{r}(c))\), for every penalty parameter c greater than a given threshold.

Theorem 26

Let \(\overline{x}\) be a feasible solution in the nonsmooth multiobjective programming problem (VP) and the generalized Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) be satisfied at \(\overline{x}\) with the Lagrange multipliers \( \overline{\lambda }_{i}, i\in I, \overline{\mu }_{j}, j\in J\). Furthermore, assume that the objective function f and the constraint function g are r-invex at \(\overline{x}\) on X with respect to the same function \(\eta \) and \(M=\max \left\{ e^{rf_{i}\left( \overline{x}\right) } , i\in I\right\} \). If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\geqq M\max \left\{ \overline{ \mu }_{j}, j\in J\right\} \)), then \(\overline{x}\) is also a weak Pareto solution in any associated vector penalized optimization problem \((\hbox {VP} _{r}(c))\) with the vector exact exponential penalty function.

Proof

We proceed by contradiction. Suppose, contrary to the result, that \( \overline{x}\) is not a weak Pareto solution in the associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function. Therefore, by Definition 16, there exists \(\widetilde{x}\in X\) such that

$$\begin{aligned} P_{r}\left( \widetilde{x},c\right) <P_{r}\left( \overline{x},c\right) . \end{aligned}$$

By definition of the vector penalized problem (VP\(_{{r}}({c})\)) [see (13)], we have

$$\begin{aligned} \frac{1}{r}e^{rf(\widetilde{x})}+c\sum _{j=1}^{m}\frac{1}{r}\Bigg ( e^{rg_{j}^{+}(\widetilde{x})}-1\Bigg ) e<\frac{1}{r}e^{rf(\overline{x} )}+c\sum _{j=1}^{m}\frac{1}{r}\Bigg ( e^{rg_{j}^{+}(\overline{x})}-1\Bigg ) e . \end{aligned}$$
(30)

Thus,

$$\begin{aligned} \frac{1}{r}e^{rf_{i}(\widetilde{x})}-\frac{1}{r}e^{rf_{i}(\overline{x} )}+c\sum _{j=1}^{m}\frac{1}{r}\Bigg ( e^{rg_{j}^{+}(\widetilde{x})}-1\Bigg ) -c\sum _{j=1}^{m}\frac{1}{r}\Bigg ( e^{rg_{j}^{+}(\overline{x})}-1\Bigg ) <0 , \quad i=1,\ldots ,k. \end{aligned}$$
(31)

Since \(\overline{x}\) is a feasible solution in the nonsmooth multiobjective programming problem (VP), (14) yields

$$\begin{aligned} \frac{1}{r}e^{rf_{i}(\widetilde{x})}-\frac{1}{r}e^{rf_{i}(\overline{x} )}+c\sum _{j=1}^{m}\frac{1}{r}\Bigg ( e^{rg_{j}^{+}(\widetilde{x})}-1\Bigg ) <0 ,\quad i=1,\ldots ,k. \end{aligned}$$
(32)

By assumption, \(M=\max \left\{ e^{rf_{i}\left( \overline{x}\right) }, i\in I\right\} \) and, moreover, \(c\geqq M\max \left\{ \overline{\mu }_{j}, j\in J\right\} \). Hence, since each term \(\frac{1}{r}\left( e^{rg_{j}^{+}(\widetilde{x})}-1\right) \), \(j\in J\), is nonnegative, (32) gives

$$\begin{aligned} \frac{1}{r}e^{rf_{i}(\widetilde{x})}-\frac{1}{r}e^{rf_{i}(\overline{x})}+ \frac{1}{r}e^{rf_{i}(\overline{x})}\sum _{j=1}^{m}\overline{\mu }_{j}\Bigg ( e^{rg_{j}^{+}(\widetilde{x})}-1\Bigg ) <0,\quad i=1,\ldots ,k. \end{aligned}$$
(33)

Thus, (33) yields

$$\begin{aligned} \frac{1}{r}\Bigg ( e^{r\left( f_{i}(\widetilde{x})-f_{i}(\overline{x})\right) }-1\Bigg ) +\frac{1}{r}\sum _{j=1}^{m}\overline{\mu }_{j}\Bigg ( e^{rg_{j}^{+}( \widetilde{x})}-1\Bigg ) <0,\quad i=1,\ldots ,k. \end{aligned}$$

By the Karush–Kuhn–Tucker necessary optimality condition (12), it follows that

$$\begin{aligned} \frac{1}{r}\overline{\lambda }_{i}\Bigg ( e^{r\left( f_{i}(\widetilde{x} )-f_{i}(\overline{x})\right) }-1\Bigg ) +\frac{1}{r}\overline{\lambda } _{i}\sum _{j=1}^{m}\overline{\mu }_{j}\Bigg ( e^{rg_{j}^{+}(\widetilde{x} )}-1\Bigg ) \leqq 0,\quad i=1,\ldots ,k, \end{aligned}$$
(34)

and, for at least one \(i^{*}\in I\), we have

$$\begin{aligned} \frac{1}{r}\overline{\lambda }_{i^{*}}\Bigg ( e^{r\left( f_{i^{*}}( \widetilde{x})-f_{i^{*}}(\overline{x})\right) }-1\Bigg ) +\frac{1}{r} \overline{\lambda }_{i^{*}}\sum _{j=1}^{m}\overline{\mu }_{j}\Bigg ( e^{rg_{j}^{+}(\widetilde{x})}-1\Bigg ) <0. \end{aligned}$$
(35)

Summing the inequalities (34) over \(i\in I\) and taking (35) into account, we get

$$\begin{aligned} \frac{1}{r}\sum _{i=1}^{k}\overline{\lambda }_{i}\Bigg ( e^{r\left( f_{i}( \widetilde{x})-f_{i}(\overline{x})\right) }-1\Bigg ) +\frac{1}{r} \sum _{j=1}^{m}\overline{\mu }_{j}\Bigg ( e^{rg_{j}^{+}(\widetilde{x} )}-1\Bigg ) \sum _{i=1}^{k}\overline{\lambda }_{i}<0. \end{aligned}$$
(36)

By the Karush–Kuhn–Tucker necessary optimality condition (12), we have that \(\sum _{i=1}^{k}\overline{\lambda }_{i}=1\). Hence, (36) yields

$$\begin{aligned} \frac{1}{r}\sum _{i=1}^{k}\overline{\lambda }_{i}\Bigg ( e^{r\left( f_{i}( \widetilde{x})-f_{i}(\overline{x})\right) }-1\Bigg ) +\frac{1}{r} \sum _{j=1}^{m}\overline{\mu }_{j}\Bigg ( e^{rg_{j}^{+}(\widetilde{x} )}-1\Bigg ) <0. \end{aligned}$$
(37)

By assumption, the objective function f and the constraint function g are locally Lipschitz r-invex at \(\overline{x}\) on X with respect to the same function \( \eta \). Then, by Definition 9, the following inequalities

$$\begin{aligned} \frac{1}{r}e^{rf_{i}(x)}\geqq & {} \frac{1}{r}e^{rf_{i}(\overline{x})}\Bigg [ 1+r\left\langle \xi _{i},\eta \left( x,\overline{x}\right) \right\rangle \Bigg ] , \ \forall \xi _{i}\in \partial f_{i}\left( \overline{x} \right) , \ \ \quad i=1,\ldots ,k,\\ \frac{1}{r}e^{rg_{j}(x)}\geqq & {} \frac{1}{r}e^{rg_{j}(\overline{x})}\Bigg [ 1+r\left\langle \zeta _{j},\eta \left( x,\overline{x}\right) \right\rangle \Bigg ] , \ \forall \zeta _{j}\in \partial g_{j}\left( \overline{x} \right) , \ \ \quad j=1,\ldots ,m \end{aligned}$$

hold for all \(x\in X\). Therefore, they are also satisfied for \(x=\widetilde{x }\in X\). Using the Karush–Kuhn–Tucker necessary optimality condition (12), we get, respectively,

$$\begin{aligned}&\frac{1}{r}\overline{\lambda }_{i}\Bigg ( e^{r\left( f_{i}(\widetilde{x} )-f_{i}(\overline{x})\right) }-1\Bigg ) \geqq \overline{\lambda } _{i}\left\langle \xi _{i},\eta \left( \widetilde{x},\overline{x}\right) \right\rangle , \,\forall \xi _{i}\in \partial f_{i}\left( \overline{x }\right) , \quad i=1,\ldots ,k,\\&\frac{1}{r}\overline{\mu }_{j}\Bigg ( e^{r\left( g_{j}(\widetilde{x})-g_{j}( \overline{x})\right) }-1\Bigg ) \geqq \overline{\mu }_{j}\left\langle \zeta _{j},\eta \left( \widetilde{x},\overline{x}\right) \right\rangle , \forall \zeta _{j}\in \partial g_{j}\left( \overline{x}\right) , \quad j=1,\ldots ,m. \end{aligned}$$

Adding both sides of the above inequalities, we get that the following inequality

$$\begin{aligned}&\frac{1}{r}\sum _{i=1}^{k}\overline{\lambda }_{i}\Bigg ( e^{r\left( f_{i}( \widetilde{x})-f_{i}(\overline{x})\right) }-1\Bigg ) +\frac{1}{r} \sum _{j=1}^{m}\overline{\mu }_{j}\Bigg ( e^{r\left( g_{j}(\widetilde{x} )-g_{j}(\overline{x})\right) }-1\Bigg ) \nonumber \\&\quad \qquad \geqq \left\langle \sum _{i=1}^{k}\overline{\lambda }_{i}\xi _{i}+\sum _{j=1}^{m}\overline{\mu }_{j}\zeta _{j},\eta \left( \widetilde{x}, \overline{x}\right) \right\rangle \end{aligned}$$

holds for every \(\xi _{i}\in \partial f_{i}\left( \overline{x}\right) , i=1,\ldots ,k,\) and \(\zeta _{j}\in \partial g_{j}\left( \overline{x}\right) , j=1,\ldots ,m\). By the Karush–Kuhn–Tucker necessary optimality condition (10), the following inequality

$$\begin{aligned} \frac{1}{r}\sum _{i=1}^{k}\overline{\lambda }_{i}\Bigg ( e^{r\left( f_{i}(\widetilde{x})-f_{i}(\overline{x})\right) }-1\Bigg ) +\frac{1}{r}\sum _{j=1}^{m}\overline{\mu }_{j}\Bigg ( e^{r\left( g_{j}(\widetilde{x})-g_{j}(\overline{x})\right) }-1\Bigg ) \geqq 0 \end{aligned}$$

holds. Thus,

$$\begin{aligned} \frac{1}{r}\sum _{i=1}^{k}\overline{\lambda }_{i}\Bigg ( e^{r\left( f_{i}( \widetilde{x})-f_{i}(\overline{x})\right) }-1\Bigg ) +\frac{1}{r}\sum _{j\in J\left( \overline{x}\right) }\overline{\mu }_{j}\Bigg ( e^{\frac{r}{\overline{ \mu }_{j}}\left( \overline{\mu }_{j}g_{j}(\widetilde{x})-\overline{\mu } _{j}g_{j}(\overline{x})\right) }-1\Bigg ) \geqq 0. \end{aligned}$$

Hence, using the Karush–Kuhn–Tucker necessary optimality condition (11), we obtain

$$\begin{aligned} \frac{1}{r}\sum _{i=1}^{k}\overline{\lambda }_{i}\Bigg ( e^{r\left( f_{i}( \widetilde{x})-f_{i}(\overline{x})\right) }-1\Bigg ) +\frac{1}{r} \sum _{j=1}^{m}\overline{\mu }_{j}\Bigg ( e^{rg_{j}(\widetilde{x})}-1\Bigg ) \geqq 0. \end{aligned}$$

By (14), it follows that the following inequality

$$\begin{aligned} \frac{1}{r}\sum _{i=1}^{k}\overline{\lambda }_{i}\left( e^{r\left( f_{i}( \widetilde{x})-f_{i}(\overline{x})\right) }-1\right) +\frac{1}{r} \sum _{j=1}^{m}\overline{\mu }_{j}\left( e^{rg_{j}^{+}(\widetilde{x} )}-1\right) \geqq 0 \end{aligned}$$

holds, contradicting (37). Hence, the proof of this theorem is completed. \(\square \)

The following corollary follows directly from Theorem 26.

Corollary 27

Let \(\overline{x}\) be a weak Pareto solution in the multiobjective programming problem (VP) and all hypotheses of Theorem 26 be fulfilled. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\geqq M\max \left\{ \overline{\mu }_{j}, j\in J\right\} \)), then \( \overline{x}\) is also a weak Pareto solution in the associated vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function.
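To see how this bound works in practice, consider again problem (VP0) from Example 24 with \(r=1\). At \(\overline{x}=1\), the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) reduce to

$$\begin{aligned} 0=\overline{\lambda }_{1}+\overline{\lambda }_{2}-\overline{\mu }, \quad \overline{\lambda }_{1}+\overline{\lambda }_{2}=1, \quad \overline{\mu }\geqq 0, \end{aligned}$$

so that \(\overline{\mu }=1\), while \(M=\max \left\{ e^{f_{i}\left( 1\right) }, i=1,2\right\} =e^{0}=1\). The bound \(c\geqq M\max \left\{ \overline{\mu }_{j}, j\in J\right\} \) therefore yields \(c\geqq 1\), in agreement with the threshold observed in Example 24.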

Now, under stronger assumptions, we establish the relationship between a Karush–Kuhn–Tucker point in the considered multiobjective programming problem (VP) and a Pareto solution in its associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function.

Theorem 28

Let \(\overline{x}\) be a feasible solution in the multiobjective programming problem (VP) and the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) be satisfied at \( \overline{x}\) with the Lagrange multipliers \(\overline{\lambda }_{i}, i\in I, \overline{\mu }_{j}, j\in J\). Furthermore, assume that one of the following hypotheses is satisfied:

  1. (i)

    the Lagrange multipliers \(\overline{\lambda }_{i}, i\in I\), associated to the objectives \(f_{i}\), are positive real numbers and, moreover, the objective function f and the constraint function g are r-invex at \(\overline{x}\) on X with respect to \(\eta \),

  2. (ii)

the objective function f is strictly r-invex at \(\overline{x}\) on X with respect to \(\eta \) and the constraint function g is r-invex at \(\overline{x}\) on X with respect to \(\eta \).

If \(M=\max \left\{ e^{rf_{i}\left( \overline{x}\right) }, i\in I\right\} \) and the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\geqq M\max \left\{ \overline{\mu }_{j} , j\in J\right\} \)), then \(\overline{x}\) is also a Pareto solution in the associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function.

Proof

The proof of this theorem is similar to the proof of Theorem 26. \(\square \)

Corollary 29

Let \(\overline{x}\) be a Pareto solution in the multiobjective programming problem (VP) and all hypotheses of Theorem 28 be fulfilled. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\geqq M\max \left\{ \overline{\mu }_{j}, j\in J\right\} \)), then \(\overline{x}\) is a Pareto solution also in the associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function.

Now, under stronger assumptions, we establish the converse results to those proved above. Namely, we prove that, for sufficiently large values of the penalty parameter c, if \(\overline{x}\) is a (weak) Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(\overline{c}))\) with the vector exact exponential penalty function, then it is also a (weak) Pareto solution in the original multiobjective programming problem (VP). To prove this result, we assume that both the objective functions and the constraint functions are r-invex at \(\overline{x}\) on X with respect to the same function \(\eta \). We also show that there exists a finite threshold \(\overline{c}\) of the penalty parameter c such that, for every penalty parameter c exceeding this threshold, a (weak) Pareto solution in the associated vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function is a (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP).

Theorem 30

Let D be a compact subset of \(R^{n}\) and \( \overline{x}\) be a weak Pareto solution in the vector penalized problem \((\hbox {VP} _{r}(c))\) with the vector exact exponential penalty function, where c is assumed to be sufficiently large. Further, assume that the objective function f and the constraint function g are r-invex at \(\overline{x}\) on X with respect to the same function \(\eta \). Then \(\overline{x}\) is also a weak Pareto solution in the given multiobjective programming problem (VP).

Proof

By assumption, \(\overline{x}\) is a weak Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function. We consider two cases. First, assume that \(\overline{x}\in D\). Then, by definition of the vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function, it follows that

$$\begin{aligned} \sim \exists _{x\in X}\, \frac{1}{r}e^{rf(x)}+c\sum _{j=1}^{m}\frac{1}{ r}\Bigg ( e^{rg_{j}^{+}(x)}-1\Bigg ) e<\frac{1}{r}e^{rf(\overline{x} )}+c\sum _{j=1}^{m}\frac{1}{r}\Bigg ( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\Bigg ) e. \end{aligned}$$
(38)

Hence, by (14), (38) implies that the relation

$$\begin{aligned} \sim \exists _{x\in D}\,f\left( x\right) <f\left( \overline{x} \right) \end{aligned}$$
(39)

holds, by which we conclude that \(\overline{x}\) is a weak Pareto solution in the considered multiobjective programming problem (VP). Moreover, for any \(c\geqq \overline{c}\), where \(\overline{c}\) denotes the penalty parameter for which the vector penalized problem \((\hbox {VP}_{r}(\overline{c}))\) above is defined, a weak Pareto solution \(\overline{x}\) in the vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function remains a weak Pareto solution in the considered multiobjective programming problem (VP). Thus, in the case when \(\overline{x}\in D\), the result follows from (39).

Now, suppose that \(\overline{x}\notin D\). Since \(\overline{x}\) is a weak Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(c))\), by Theorem 17, there exists \(\overline{\lambda }\in R^{k}, \overline{\lambda }\ge 0, \sum _{i=1}^{k}\overline{\lambda }_{i}=1\), such that

$$\begin{aligned} 0\in \sum _{i=1}^{k}\overline{\lambda }_{i}\partial P_{ri}(\overline{x},c). \end{aligned}$$
(40)

By definition of the vector exact exponential penalty function, it follows that

$$\begin{aligned} 0\in \sum _{i=1}^{k}\overline{\lambda }_{i}\partial \left( \frac{1}{r} e^{rf_{i}(\overline{x})}+c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) . \end{aligned}$$
(41)

By assumption, all functions \(g_{j}, j=1,\ldots ,m\), are locally Lipschitz on X. Then, by definition, the functions \(\frac{1}{r}\left( e^{rg_{j}^{+}\left( \cdot \right) }-1\right) , j=1,\ldots ,m\), are also locally Lipschitz on X. Since all \(\overline{\lambda }_{i}\) are nonnegative, equality holds in Corollary 6. Thus, (41) yields

$$\begin{aligned} 0\in \sum _{i=1}^{k}\overline{\lambda }_{i}\partial \left( \frac{1}{r} e^{rf_{i}(\overline{x})}\right) +\sum _{i=1}^{k}\overline{\lambda } _{i}\partial \left( c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) . \end{aligned}$$

Thus,

$$\begin{aligned} 0\in \sum _{i=1}^{k}\overline{\lambda }_{i}\partial \left( \frac{1}{r} e^{rf_{i}(\overline{x})}\right) +\partial \left( c\sum _{j=1}^{m}\frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) \sum _{i=1}^{k}\overline{\lambda }_{i}. \end{aligned}$$
(42)

Since \(\sum _{i=1}^{k}\overline{\lambda }_{i}=1\), (42) gives

$$\begin{aligned} 0\in \sum _{i=1}^{k}\overline{\lambda }_{i}\partial \left( \frac{1}{r} e^{rf_{i}(\overline{x})}\right) +\partial \left( c\sum _{j=1}^{m}\frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) . \end{aligned}$$

Then, by Lemma 4, it follows that

$$\begin{aligned} 0\in \sum _{i=1}^{k}\overline{\lambda }_{i}\partial \left( \frac{1}{r} e^{rf_{i}(\overline{x})}\right) +c\partial \left( \sum _{j=1}^{m}\frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) . \end{aligned}$$
(43)

Hence, by Proposition 5, we have

$$\begin{aligned} 0\in \sum _{i=1}^{k}\overline{\lambda }_{i}\partial \left( \frac{1}{r} e^{rf_{i}(\overline{x})}\right) +c\sum _{j=1}^{m}\partial \left( \frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) . \end{aligned}$$

Thus, by Theorem 2.3.9 [30], it follows that

$$\begin{aligned} 0\in \sum _{i=1}^{k}\overline{\lambda }_{i}e^{rf_{i}(\overline{x})}\partial \left( f_{i}(\overline{x})\right) +c\sum _{j=1}^{m}\partial \left( \frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) . \end{aligned}$$
(44)

By assumption, the objective function f and the constraint function g are r-invex at \(\overline{x}\) on X with respect to the same function \( \eta \). Since the constraint functions \(g_{j}, j\in J\), are locally Lipschitz on X and r-invex at \(\overline{x}\) on X with respect to the same function \(\eta \), by Theorem 14, the functions \(\frac{1 }{r}\left( e^{rg_{j}^{+}\left( \cdot \right) }-1\right) , j\in J\), are invex at \(\overline{x}\) on X with respect to the same function \(\eta \). Then, the following inequalities

$$\begin{aligned}&\frac{1}{r}e^{rf_{i}(x)}\geqq \frac{1}{r}e^{rf_{i}(\overline{x})}\left[ 1+r\left\langle \xi _{i},\eta \left( x,\overline{x}\right) \right\rangle \right] , \, \forall \xi _{i}\in \partial f_{i}\left( \overline{x} \right) , \quad i=1,\ldots ,k, \end{aligned}$$
(45)
$$\begin{aligned}&\frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) -\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \geqq \left\langle \zeta _{j}^{+},\eta \left( x,\overline{x}\right) \right\rangle \nonumber \\&\quad \qquad \qquad \qquad \quad \qquad \qquad \forall \zeta _{j}^{+}\in \partial \left( \frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) , \quad j=1,\ldots ,m \end{aligned}$$
(46)

hold for all \(x\in X\). Multiplying (46) by \(c>0\), we get

$$\begin{aligned}&c\frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) -c\frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \geqq c\left\langle \zeta _{j}^{+},\eta \left( x,\overline{x}\right) \right\rangle \nonumber \\&\qquad \qquad \quad \qquad \quad \qquad \qquad \forall \zeta _{j}^{+}\in \partial \left( \frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) , \quad j=1,\ldots ,m. \end{aligned}$$
(47)

Adding both sides of the inequalities (47), we obtain

$$\begin{aligned} c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) -c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \geqq c\sum _{j=1}^{m}\left\langle \zeta _{j}^{+},\eta \left( x, \overline{x}\right) \right\rangle . \end{aligned}$$
(48)

Thus, by (45) and (48), for any \(i=1,\ldots ,k\), the following inequalities

$$\begin{aligned}&\frac{1}{r}e^{rf_{i}(x)}+c{\sum }_{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) -\left( \frac{1}{r}e^{rf_{i}( \overline{x})}+c{\sum }_{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) \nonumber \\&\qquad \qquad \qquad \qquad \qquad \,\qquad \qquad \qquad \qquad \geqq \left\langle e^{rf_{i}(\overline{x})}\xi _{i}+c{\sum }_{j=1}^{m}\zeta _{j}^{+},\eta \left( x,\overline{x}\right) \right\rangle \end{aligned}$$
(49)

hold for all \(x\in X\), for every \(\xi _{i}\in \partial f_{i}\left( \overline{x}\right) , i=1,\ldots ,k\), and \(\zeta _{j}^{+}\in \partial \left( \frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) , j=1,\ldots ,m\). Multiplying (49) by \(\overline{\lambda }_{i}\geqq 0\), we get

$$\begin{aligned}&\frac{1}{r}\overline{\lambda }_{i}e^{rf_{i}(x)}+c\overline{\lambda } _{i}{\sum }_{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) - \overline{\lambda }_{i}\left( \frac{1}{r}e^{rf_{i}(\overline{x} )}\!+\!c{\sum }_{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }\!-\!1\right) \right) \! \nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \geqq \!\left\langle \overline{\lambda }_{i}e^{rf_{i}(\overline{x })}\xi _{i}\!+\!c\overline{\lambda }_{i}{\sum }_{j=1}^{m}\zeta _{j}^{+},\eta \left( x,\overline{x}\!\right) \right\rangle . \end{aligned}$$
(50)

Adding both sides of the above inequalities, we obtain

$$\begin{aligned}&\frac{1}{r}{\sum }_{i=1}^{k}\overline{\lambda }_{i}e^{rf_{i}(x)}+c{\sum }_{j=1}^{m} \frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) {\sum }_{i=1}^{k} \overline{\lambda }_{i} \nonumber \\&\quad -\left( \frac{1}{r}{\sum }_{i=1}^{k}\overline{\lambda }_{i}e^{rf_{i}(\overline{x} )}+c{\sum }_{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) {\sum }_{i=1}^{k}\overline{\lambda }_{i}\right) \nonumber \\&\quad \geqq \left\langle {\sum }_{i=1}^{k}\overline{\lambda }_{i}e^{rf_{i}( \overline{x})}\xi _{i}+c{\sum }_{j=1}^{m}\zeta _{j}^{+}{\sum }_{i=1}^{k}\overline{ \lambda }_{i},\eta \left( x,\overline{x}\right) \right\rangle . \end{aligned}$$
(51)

Since \(\sum _{i=1}^{k}\overline{\lambda }_{i}=1\), (51) implies that the inequality

$$\begin{aligned}&\frac{1}{r}{\sum }_{i=1}^{k}\overline{\lambda }_{i}e^{rf_{i}(x)}+c{\sum }_{j=1}^{m} \frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) -\frac{1}{r} {\sum }_{i=1}^{k}\overline{\lambda }_{i}e^{rf_{i}(\overline{x})}\nonumber \\&\quad -c{\sum }_{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \geqq \left\langle {\sum }_{i=1}^{k}\overline{\lambda }_{i}e^{rf_{i}( \overline{x})}\xi _{i}+c{\sum }_{j=1}^{m}\zeta _{j}^{+},\eta \left( x,\overline{ x}\right) \right\rangle \qquad \quad \end{aligned}$$
(52)

holds for every \(\xi _{i}\in \partial f_{i}(\overline{x}), i=1,\ldots ,k,\) and \(\zeta _{j}^{+}\in \partial \left( \frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) , j=1,\ldots ,m\). Hence, by (44), the following inequality

$$\begin{aligned}&\frac{1}{r}{\sum }_{i=1}^{k}\overline{\lambda }_{i}e^{rf_{i}(x)}+c{\sum }_{j=1}^{m} \frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) \nonumber \\&\qquad -\left( \frac{1}{r}\sum _{i=1}^{k}\overline{\lambda } _{i}e^{rf_{i}(\overline{x})}+c{\sum }_{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \right) \geqq 0 \end{aligned}$$
(53)

holds for all \(x\in X\). By (14), \(g_{j}^{+}\left( x\right) =0, j=1,\ldots ,m\), for every \(x\in D\). Hence, for each \(x\in D\), it follows that

$$\begin{aligned} \sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) =0 . \end{aligned}$$
(54)

Combining (53) and (54), we get that the following inequality

$$\begin{aligned} \frac{1}{r}\sum _{i=1}^{k}\overline{\lambda }_{i}e^{rf_{i}(x)}-\frac{1}{r} \sum _{i=1}^{k}\overline{\lambda }_{i}e^{rf_{i}(\overline{x})}\geqq c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) \end{aligned}$$
(55)

holds for all \(x\in D\). Since \(\overline{x}\) is not feasible in the given multiobjective programming problem (VP), by (14), it follows that

$$\begin{aligned} \sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) >0. \end{aligned}$$
(56)

By assumption, the penalty parameter c is sufficiently large. Namely, let c satisfy

$$\begin{aligned} c>\overline{c}=\max \left\{ \frac{\frac{1}{r}e^{rf_{i}(x)}-\frac{1}{r} e^{rf_{i}(\overline{x})}}{\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) }:i\in I, x\in D\right\} . \end{aligned}$$
(57)

Now, we prove that \(\overline{c}\geqq 0\). Indeed, by assumption, \(\overline{x}\) is a weak Pareto solution in the vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function. Thus, by (39), it follows that, for every \(x\in D\), there exists at least one \(i\in I\) such that \(\frac{1}{r}e^{rf_{i}(x)}-\frac{1}{r}e^{rf_{i}(\overline{x})}\geqq 0\). Since, moreover, the denominator in (57) is positive by (56), (57) indeed implies that \(\overline{c}\geqq 0\).

We now show that

$$\begin{aligned} \overline{c}\geqq M\max \left\{ \overline{\mu }_{j}, j\in J\right\} . \end{aligned}$$
(58)

Suppose, contrary to the result, that

$$\begin{aligned} \overline{c}<M\max \left\{ \overline{\mu }_{j}, j\in J\right\} . \end{aligned}$$
(59)

Since (59) holds, we may choose a penalty parameter \(c^{*}>\overline{c}\) such that

$$\begin{aligned} c^{*}=M\max \left\{ \overline{\mu }_{j}, j\in J\right\} . \end{aligned}$$
(60)

Combining (57) and (60), we have

$$\begin{aligned} \max \left\{ \frac{\frac{1}{r}e^{rf_{i}(x)}-\frac{1}{r}e^{rf_{i}(\overline{x} )}}{\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) }:i\in I,\text { }x\in D\right\} <c^{*}. \end{aligned}$$
(61)

Hence, (61) gives

$$\begin{aligned} \forall _{x\in D}\,\forall i\in I\,\frac{1}{r}e^{rf_{i}(x)}- \frac{1}{r}e^{rf_{i}(\overline{x})}<c^{*}\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) . \end{aligned}$$
(62)

Since \(\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) =0\) for all \(x\in D\), (62) implies that the following inequality

$$\begin{aligned} \forall _{x\in D}\,\forall i\in I\,\frac{1}{r} e^{rf_{i}(x)}+c^{*}\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) <\frac{1}{r}e^{rf_{i}(\overline{x})}+c^{*}\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) , \end{aligned}$$

holds, which contradicts the weak efficiency of \(\overline{x}\) in the vector penalized optimization problem \((\hbox {VP}_{r}(c^{*}))\). Thus, (58) is satisfied.

By assumption, \(\overline{x}\) is a weak Pareto solution in the vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function for sufficiently large c (that is, as follows from (57), for all \(c>\overline{c}\)). Hence, by Definition 16, it follows that

$$\begin{aligned} \sim \exists _{x\in X}\,P_{r}(x,c)<P_{r}(\overline{x},c). \end{aligned}$$
(63)

Thus, (13) gives

$$\begin{aligned} \sim \exists _{x\in X}\,\frac{1}{r}e^{rf(x)}+c\sum _{j=1}^{m}\frac{1 }{r}\left( e^{rg_{j}^{+}\left( x\right) }-1\right) e<\frac{1}{r}e^{rf( \overline{x})}+c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) e. \end{aligned}$$
(64)

Since \(D\subset X\), the inequality (64) yields

$$\begin{aligned} \sim \exists _{x\in D}\,\frac{1}{r}e^{rf(x)}<\frac{1}{r}e^{rf( \overline{x})}+c\sum _{j=1}^{m}\frac{1}{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) e. \end{aligned}$$
(65)

The above relation is equivalent to

$$\begin{aligned} \forall _{x\in D}\text { }\exists _{i\in I}\,\frac{1}{r} e^{rf_{i}(x)}\geqq \frac{1}{r}e^{rf_{i}(\overline{x})}+c\sum _{j=1}^{m}\frac{1 }{r}\left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) . \end{aligned}$$
(66)

Thus, (66) implies that the following relation

$$\begin{aligned} \forall _{x\in D}\text { }\exists _{i\in I}\quad c\leqq \frac{\frac{1}{r} e^{rf_{i}(x)}-\frac{1}{r}e^{rf_{i}(\overline{x})}}{\sum _{j=1}^{m}\frac{1}{r} \left( e^{rg_{j}^{+}\left( \overline{x}\right) }-1\right) } \end{aligned}$$
(67)

holds, contradicting (57). Therefore, the case \(\overline{x}\notin D\) is impossible. This means that \(\overline{x}\) is feasible in the multiobjective programming problem (VP). Hence, by the feasibility of \(\overline{x}\) in the constrained optimization problem (VP), (66) yields

$$\begin{aligned} \forall _{x\in D}\text { }\exists _{i\in I}\quad f_{i}(x)\geqq f_{i}( \overline{x}). \end{aligned}$$

By Definition 16, the above inequality implies that \(\overline{x}\) is a weak Pareto solution in the multiobjective programming problem (VP). This completes the proof of this theorem. \(\square \)

Theorem 31

Let D be a compact subset of \(R^{n}\) and \(\overline{x}\) be a Pareto solution in the vector penalized problem \((\hbox {VP}_{r}(\overline{c}))\) with the vector exact exponential penalty function. Further, assume that the objective function f is strictly r-invex at \(\overline{x}\) on X and the constraint function g is r-invex at \(\overline{x}\) on X with respect to the same function \(\eta \). If the penalty parameter \(\overline{c}\) is sufficiently large, then \(\overline{x}\) is also a Pareto solution in the considered multiobjective programming problem (VP).

Proof

The proof of this theorem is similar to that of Theorem 30. \(\square \)

Corollary 32

Let all hypotheses of Corollary 27 (or Corollary 29) and Theorem 30 (or Theorem 31) be fulfilled. Then the sets of weak Pareto solutions (Pareto solutions) in the given multiobjective programming problem (VP) and its associated vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function coincide.

The importance of this result lies in guaranteeing the existence of a finite penalty parameter c such that the sets of weak Pareto solutions (Pareto solutions) in the given multiobjective programming problem (VP) and its associated vector penalized optimization problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function are the same.

Remark 33

Note that, since the lower bound for the penalty parameter is finite, the sequence of vector penalized subproblems (16) generated by the algorithm presented above is also finite, in contrast to inexact penalty function methods.
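To make the finiteness observation of Remark 33 concrete, the following minimal sketch implements a generic exact-penalty loop of the kind described above; it is an illustration, not a restatement of the subproblems (16). The scalarization by fixed weights (as in the sum \(\sum _{i=1}^{k}\overline{\lambda }_{i}P_{ri}(\cdot ,c)\) appearing in (40)), the update factor beta, the feasibility tolerance, and the use of the derivative-free Nelder–Mead routine from scipy are all illustrative assumptions.

```python
# Minimal sketch of an exact exponential penalty loop (illustrative only;
# not the paper's algorithm).  Each outer iteration minimizes a weighted
# scalarization of the vector exponential penalized problem; the penalty
# parameter is enlarged until the unconstrained minimizer is feasible.
import numpy as np
from scipy.optimize import minimize

def penalty_loop(f_list, gplus_list, x0, lam, r=1.0, c0=1.0, beta=10.0,
                 tol=1e-8, max_outer=20):
    """f_list: objectives f_i; gplus_list: functions g_j^+ = max(0, g_j);
    lam: positive scalarization weights summing to one."""
    c = c0
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        def P(y):
            merit = sum(l * np.exp(r * f(y)) / r for l, f in zip(lam, f_list))
            penalty = sum((np.exp(r * gp(y)) - 1.0) / r for gp in gplus_list)
            return merit + c * penalty
        x = minimize(P, x, method="Nelder-Mead").x  # derivative-free solver
        if max(gp(x) for gp in gplus_list) <= tol:  # feasible: c is exact
            return x, c
        c *= beta                                   # enlarge penalty parameter
    return x, c
```

Because the threshold \(\overline{c}\) is finite, the loop above terminates after finitely many subproblems once c exceeds it, whereas an inexact (for instance, quadratic) penalty method would require driving c to infinity.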

Now, we illustrate the result established above by means of a nondifferentiable multiobjective programming problem with r-invex functions, which we solve by using the introduced vector exact exponential penalty function method.

Example 34

Consider the following nonsmooth multiobjective programming problem:

Note that \(D=\left\{ \left( x_{1},x_{2}\right) \in R^{2}:0\leqq x_{1}\leqq 1\wedge 0\leqq x_{2}\leqq 1\right\} \) and \(\overline{x}=\left( 0,0\right) \) is a Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP1). Further, it is not difficult to prove, by Definition 10, that the objective function f is strictly 1-invex on \(R^{2}\) with respect to the function \( \eta :R^{2}\times R^{2}\rightarrow R^{2}\) and, by Definition 9, the constraints \(g_{1}\) and \(g_{2}\) are 1-invex on \( R^{2}\) with respect to the same function \(\eta \), where

$$\begin{aligned} \eta \left( x,\overline{x}\right) =\left[ \begin{array}{c} \eta _{1}\left( x,\overline{x}\right) \\ \eta _{2}\left( x,\overline{x}\right) \end{array} \right] =\left[ \begin{array}{c} \left| x_{1}\right| -\left| \overline{x}_{1}\right| \\ \left| x_{2}\right| -\left| \overline{x}_{2}\right| \end{array} \right] . \end{aligned}$$

We use the vector exact exponential penalty function method for solving the considered nonconvex nondifferentiable vector optimization problem (VP1). Hence, we construct the following unconstrained vector penalized problem \((\hbox {VP1}_{1}(c))\) with the vector exact exponential penalty function:

$$\begin{aligned} P_{1}(x,c)=e^{f(x)}+c\sum _{j=1}^{2}\left( e^{g_{j}^{+}(x)}-1\right) e\rightarrow V\text {-min},\quad x\in R^{2}, \end{aligned}$$

where

$$\begin{aligned} \sum _{j=1}^{2}\left( e^{g_{j}^{+}(x)}-1\right) =\max \left\{ 0,x_{1}^{2}-x_{1}\right\} +\max \left\{ 0,x_{2}^{2}-x_{2}\right\} . \end{aligned}$$

Further, the Karush–Kuhn–Tucker necessary optimality conditions (10)–(12) are satisfied at \(\overline{x}=\left( 0,0\right) \) with the Lagrange multipliers \(\overline{\lambda }=\left( \overline{\lambda }_{1},\overline{\lambda }_{2},\overline{\lambda }_{3}\right) \ge 0\) and \(\overline{\mu }=\left( \overline{\mu }_{1},\overline{\mu }_{2}\right) \) satisfying \(\overline{\lambda }_{1}\xi _{1}-\overline{\mu }_{1}=0\), \(-\overline{\lambda }_{2}+\overline{\lambda }_{3}\xi _{3}-\overline{\mu }_{2}=0\), \(\overline{\lambda }_{1}+\overline{\lambda }_{2}+\overline{\lambda }_{3}=1\), where \(\xi _{1}\in \left[ -1,1\right] \), \(\xi _{3}\in \left[ -1,1\right] \). As follows from these relations, \(\max \left\{ \overline{\mu }_{j}:j=1,2\right\} =1\) and \(M=1\). Therefore, if we set \(c\geqq 1\), then, by Corollary 27, \(\overline{x}=\left( 0,0\right) \), being a Pareto solution in the considered multiobjective programming problem (VP1), is also a Pareto solution in each of its associated vector penalized optimization problems \((\hbox {VP1}_{1}(c))\) with the vector exact exponential penalty function. Since the hypotheses of Theorem 31 are also fulfilled, the converse result is true as well.

Note that, for the considered constrained multiobjective programming problem (VP1), it is not possible to apply the analogous result established under convexity assumptions by Antczak [17]. This follows from the fact that none of the functions constituting the considered nonsmooth vector optimization problem (VP1) is convex on \(R^{2}\). It is also difficult to show that the functions involved in problem (VP1) are invex with respect to the same function \(\widetilde{\eta }:R^{2}\times R^{2}\rightarrow R^{2}\); therefore, it would be difficult to prove the analogous result under the invexity assumption alone. However, the results proved in this paper are applicable to the considered nonconvex multiobjective programming problem (VP1), since the functions involved in it are 1-invex on \(R^{2}\) with respect to the same function \(\eta \). Thus, the introduced vector exact exponential penalty function method is applicable to a larger class of nonconvex vector optimization problems than the classical vector exact penalty function method considered in [19].

Now, we consider an example of a nondifferentiable vector optimization problem in which not all of the functions constituting it are r-invex. We show that, in such a case, there is no equivalence between the sets of Pareto solutions in the considered nondifferentiable vector optimization problem and its associated vector penalized problem constructed in the introduced vector exponential penalty function method.

Example 35

Consider the following nondifferentiable multiobjective programming problem:

$$\begin{aligned} f(x)&=\left( \ln \left( x^{3}+27\right) ,\ln \left( 3x^{3}+81\right) \right) \rightarrow V\text {-min}\\&\quad g_{1}(x)=\ln \left( x^{2}-x+1\right) \leqq 0,\\&\quad x\in X=\left\{ x\in R:x>-3\right\} . \end{aligned}$$
(VP2)

Note that \(D=\left\{ x\in X:0\leqq x\leqq 1\right\} \) and \(\overline{x}=0\) is a Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP2) with the optimal value \(f\left( \overline{x}\right) =\left( \ln 27,\ln 81\right) \). Further, none of the objective functions is r-invex with respect to any function \(\eta :X\times X\rightarrow R\) (see Theorem 12 in [31]). However, we use the vector exact exponential penalty function method for solving the considered nonconvex nondifferentiable vector optimization problem (VP2). Therefore, we construct the following vector penalized optimization problem \((\hbox {VP2}_{r}(c))\) with the vector exact exponential penalty function:

$$\begin{aligned} P_{r}(x,c)=\frac{1}{r}\left[ \begin{array}{c} \left( x^{3}+27\right) ^{r} \\ \left( 3x^{3}+81\right) ^{r} \end{array} \right] +\frac{c}{r}\left( \left( \max \left\{ 1,x^{2}-x+1\right\} \right) ^{r}-1\right) e\rightarrow V\text {-min},\quad x\in X, \end{aligned}$$

since \(e^{r\ln t}=t^{r}\) and \(e^{rg_{1}^{+}(x)}=\left( \max \left\{ 1,x^{2}-x+1\right\} \right) ^{r}\).

It is not difficult to see that the vector penalized problem \((\hbox {VP2}_{r}(c))\) does not have a Pareto solution at \(\overline{x}=0\) for any \(c>0\). This follows from the fact that the downward order of growth of f exceeds the upward order of growth of g as x decreases towards \(-3\). Indeed, note that, for any \(r>0\), \(P_{r}(x,c)\rightarrow \left( \frac{1}{r}c\left( 13^{r}-1\right) ,\frac{1}{r}c\left( 13^{r}-1\right) \right) <\left( \frac{1}{r}27^{r},\frac{1}{r}81^{r}\right) =P_{r}(0,c)\) when \(x\rightarrow -3\) for \(c\in \left( 0,\frac{27^{r}}{13^{r}-1}\right) \), whereas, for any \(r<0\), \(P_{r}(x,c)\rightarrow \left( -\infty ,-\infty \right) \) when \(x\rightarrow -3\) for any \(c>0\). As follows even from this example, the r-invexity notion is an essential assumption in proving the equivalence between the sets of (weak) Pareto solutions in the original multiobjective programming problem and its exact penalized vector optimization problem with the vector exact exponential penalty function for all penalty parameters exceeding the given threshold.
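The limit computation above can also be verified numerically. The following minimal sketch evaluates \(P_{1}(x,c)\) (the instance \(r=1\)) at \(\overline{x}=0\) and at a point of X close to \(-3\) for a penalty parameter below the bound \(27/\left( 13-1\right) =2.25\); the sampled point \(x=-2.9\) is an arbitrary illustrative choice.

```python
# Numerical check for Example 35 with r = 1: points of X near -3 dominate
# xbar = 0 in the penalized problem whenever 0 < c < 27/(13 - 1) = 2.25.
import numpy as np

def P1(x, c):
    # Exponential penalized objective of (VP2_1(c)): e^{f} recovers the
    # arguments of the logarithms, and e^{g1^+} = max(1, x^2 - x + 1).
    merit = np.array([x**3 + 27.0, 3.0 * x**3 + 81.0])
    penalty = c * (max(1.0, x**2 - x + 1.0) - 1.0)
    return merit + penalty

c = 1.0
print(P1(0.0, c))    # [27. 81.]  -- value at the Pareto solution of (VP2)
print(P1(-2.9, c))   # approx [13.9, 19.1] -- strictly smaller in both entries
```

Since both components at \(x=-2.9\) are strictly smaller than at \(\overline{x}=0\), the point \(\overline{x}=0\) is not even weakly Pareto optimal in \((\hbox {VP2}_{1}(c))\) for such penalty parameters.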

In the next example, we compare the presented vector exact exponential penalty function method with the classical vector exact \(l_{1}\) penalty function method.

Example 36

Consider the following nondifferentiable multiobjective programming problem:

$$\begin{aligned} f(x)&=\left( \ln \left( x^{2}+\left| x\right| +1\right) ,\ln \left( x^{2}-\frac{1}{2}x+1\right) \right) \rightarrow V\text {-min}\\&\quad g_{1}(x)=\ln \left( x^{2}-x+1\right) \leqq 0,\\&\quad x\in R. \end{aligned}$$
(VP3)

Note that \(D=\left\{ x\in R:0\leqq x\leqq 1\right\} \) and \(\overline{x}=0\) is a Pareto solution in the considered nonconvex nondifferentiable vector optimization problem (VP3). It can be shown by definition that the objective function is strictly 1-invex and the constraint function is 1-invex with respect to the same function \(\eta :R\times R\rightarrow R\), where \(\eta (x, \overline{x})=x-\overline{x}\).
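To verify these invexity claims, one may note that, in the notation of (45) with \(r=1\) and \(\eta \left( x,\overline{x}\right) =x-\overline{x}\), 1-invexity of a function h is exactly the subgradient inequality expressing convexity of \(e^{h}\). A short verification sketch for the first objective function reads

$$\begin{aligned} e^{f_{1}(x)}=x^{2}+\left| x\right| +1\geqq \overline{x}^{2}+\left| \overline{x}\right| +1+\left\langle \zeta ,x-\overline{x}\right\rangle =e^{f_{1}(\overline{x})}\left[ 1+\left\langle \xi ,x-\overline{x}\right\rangle \right] \end{aligned}$$

for every \(\zeta \in \partial \left( e^{f_{1}}\right) \left( \overline{x}\right) =e^{f_{1}(\overline{x})}\partial f_{1}\left( \overline{x}\right) \) and \(\xi =e^{-f_{1}(\overline{x})}\zeta \in \partial f_{1}\left( \overline{x}\right) \), with strict inequality for \(x\neq \overline{x}\) because \(x^{2}+\left| x\right| +1\) is strictly convex; the same argument applies to \(f_{2}\) and \(g_{1}\), since \(x^{2}-\frac{1}{2}x+1\) and \(x^{2}-x+1\) are convex as well.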

If we use the presented vector exact exponential penalty function method for solving (VP3), then we construct the following unconstrained vector optimization problem \((\hbox {VP3}_{1}(c))\):

$$\begin{aligned} P_{1}(x,c)=\left[ \begin{array}{c} x^{2}+\left| x\right| +1 \\ x^{2}-\frac{1}{2}x+1 \end{array} \right] +c\max \left\{ 0,x^{2}-x\right\} e\rightarrow V\text {-min},\quad x\in R, \end{aligned}$$

since \(e^{g_{1}^{+}(x)}-1=\max \left\{ 0,x^{2}-x\right\} \).

If we use the classical vector exact \(l_{1}\) penalty function method for solving (VP3), then we have to solve the following unconstrained vector optimization problem \((\hbox {VP3}_{0}(c))\):

$$\begin{aligned} f(x)+cg_{1}^{+}(x)e=\left[ \begin{array}{c} \ln \left( x^{2}+\left| x\right| +1\right) \\ \ln \left( x^{2}-\frac{1}{2}x+1\right) \end{array} \right] +c\max \left\{ 0,\ln \left( x^{2}-x+1\right) \right\} e\rightarrow V\text {-min},\quad x\in R. \end{aligned}$$

Note that, in the first case, we have to solve a convex vector optimization problem, whereas, in the second case, the unconstrained vector optimization problem constructed in the classical exact \(l_{1}\) penalty function method is not convex. Therefore, the methods developed for solving convex unconstrained vector optimization problems cannot be used to solve it.
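As a rough numerical companion to this comparison, the following sketch minimizes a fixed weighted-sum scalarization of both penalized problems for \(c=1\); the weights \(\left( \frac{1}{2},\frac{1}{2}\right) \) are chosen only for illustration, and the function names are hypothetical.

```python
# Weighted-sum scalarizations of (VP3_1(c)) and (VP3_0(c)) with c = 1.
# The exponential penalized components are convex piecewise quadratics,
# while the l1 penalized components are nonconvex (logarithms of quadratics).
import numpy as np
from scipy.optimize import minimize_scalar

c = 1.0

def P_exp(x):
    # scalarized vector exact exponential penalized objective
    pen = max(0.0, x**2 - x)                      # e^{g1^+(x)} - 1
    return 0.5 * (x**2 + abs(x) + 1) + 0.5 * (x**2 - 0.5 * x + 1) + c * pen

def P_l1(x):
    # scalarized classical vector exact l1 penalized objective
    gp = max(0.0, np.log(x**2 - x + 1))           # g1^+(x)
    return (0.5 * np.log(x**2 + abs(x) + 1)
            + 0.5 * np.log(x**2 - 0.5 * x + 1) + c * gp)

print(minimize_scalar(P_exp, bounds=(-5, 5), method="bounded").x)  # near 0
print(minimize_scalar(P_l1, bounds=(-5, 5), method="bounded").x)   # a local
# minimizer; here it also lies near 0, but without convexity-based guarantees
```

The scalarized exponential penalized objective is a convex piecewise-quadratic function, so any local minimizer returned by the solver is automatically global; no such certificate is available for the nonconvex \(l_{1}\) penalized counterpart, which is precisely the practical advantage indicated above.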

Remark 37

Let us now return to Example 24. One strategy to overcome the difficulties associated with a too small value of the penalty parameter in the presented algorithm is simply to set the penalty parameter \(c_{n}\) to be larger than the threshold \(M\max \left\{ \overline{\mu }_{j}:j\in J\right\} \). For the multiobjective programming problem (VP0) considered in Example 24, \(\overline{\mu }=1\) and \(M=e^{0}=1\), and, therefore, the threshold of the penalty parameter is equal to 1. Thus, it is now clear why the vector exponential penalty function \(P_{1}(\cdot ,c)\) in Example 24 has a Pareto solution at \(\bar{x}=1\) for all penalty parameters \(c>1\).

5 Conclusion

In this paper, the vector exact exponential penalty function method has been used for solving nonconvex nondifferentiable multiobjective programming problems with inequality constraints. The convergence of the introduced vector exponential penalty function method has been established. Further, it has been proved that there exists a finite threshold value \(\overline{c}\) of the penalty parameter c such that, for every penalty parameter c exceeding this threshold, any (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP) is a (weak) Pareto solution in its associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function. We have established this result for nondifferentiable multiobjective programming problems involving r-invex functions with respect to the same function \(\eta \). The converse result has also been established for such nonconvex nondifferentiable multiobjective programming problems under the assumption that the penalty parameter c is sufficiently large. Thus, the equivalence between the set of (weak) Pareto optimal solutions in the considered nonconvex multiobjective programming problem (VP) and its associated vector penalized problem \((\hbox {VP}_{r}(c))\) with the vector exact exponential penalty function has been proved for sufficiently large penalty parameters c.

The vector exact exponential penalty function method analyzed in the paper thus turns out to be useful for solving a class of nonconvex nonsmooth multiobjective programming problems with r-invex functions (with respect to the same function \(\eta \)), that is, a larger class of vector optimization problems than the convex and invex ones. Moreover, in some cases, the vector exact exponential penalty function method turns out to be more useful than the classical exact vector \(l_{1}\) penalty function method. This is a consequence of the fact that, in some cases, a vector penalized problem with the vector exact exponential penalty function is easier to solve than the vector penalized problem constructed in the classical exact vector \(l_{1}\) penalty function method. This property of the introduced vector exact exponential penalty function method is, of course, important from the practical point of view. In this way, due to the importance of exact \(l_{1}\) penalty function methods in nonlinear scalar programming, similar results have been established in the vectorial case, which show that nonconvex nonsmooth vector optimization problems can be solved by using such methods, in the considered case, the vector exact exponential penalty function method.