Vector Exponential Penalty Function Method for Nondifferentiable Multiobjective Programming Problems

In this paper, a new vector exponential penalty function method for nondifferentiable multiobjective programming problems with inequality constraints is introduced. First, a sequence of vector penalized optimization problems with the vector exponential penalty function is constructed for the original multiobjective programming problem, and the convergence of this method is established. Further, the exactness property of a vector exact penalty function method is defined and analyzed in the context of the introduced vector exponential penalty function method. Conditions are given guaranteeing the equivalence of the sets of (weak) Pareto solutions of the considered nondifferentiable multiobjective programming problem and the associated vector penalized optimization problem with the vector exact exponential penalty function. This equivalence is established for nondifferentiable vector optimization problems with inequality constraints in which the involved functions are r-invex.


Introduction
The field of multiobjective programming, also known as vector programming, has attracted a great deal of attention, since many real-world problems in decision theory, economics, engineering, game theory, management science, physics and optimal control can be modeled as nonlinear vector optimization problems. Therefore, many approaches have been developed in the literature to address these problems. The properties of the objective function and the constraints determine the applicable technique. Considerable attention has been given recently to devising new methods which solve the given multiobjective programming problem by means of some associated optimization problem (see, for example, [1][2][3]).
Exact penalty function methods are important analytic and algorithmic techniques in nonlinear mathematical programming for solving a nonlinear constrained scalar optimization problem. Exact penalty function methods transform the considered optimization problem into a single unconstrained optimization problem or into a finite sequence of unconstrained optimization problems, avoiding thus the infinite sequential process of the classical penalty function methods. Nondifferentiable exact penalty functions were introduced by Zangwill [4] and Pietrzykowski [5]. Much of the literature on nondifferentiable exact penalty functions is devoted to the study of scalar convex optimization problems (see, for example, [6][7][8][9][10][11][12][13][14][15][16], and others). However, some results on exact penalty functions used for solving various classes of nonconvex optimization problems have been proved in the literature recently (see, for example, [17,18]). Namely, in [17], Antczak introduced a new approach for solving nonconvex differentiable optimization problems involving r -invex functions. He defined a new exact absolute value penalty function method, called the exact exponential penalty function method, for solving nonconvex constrained scalar optimization problems. Further, under r -invexity hypotheses, Antczak established the equivalence between the sets of optimal solutions in the original scalar optimization problem with both inequality and equality constraints and its associated penalized optimization problem with the exact exponential penalty function. Furthermore, in [17], a lower bound on the penalty parameter was provided such that this result is satisfied if the penalty parameter is larger than this value.
In [19], Antczak defined a new vector exact l 1 penalty function method and used it for solving nondifferentiable convex multiobjective programming problems. He gave conditions guaranteeing the equivalence of the sets of (weak) Pareto solutions of the considered convex nondifferentiable multiobjective programming problem and its associated vector penalized optimization problem with the vector exact l 1 penalty function.
The aim of this paper is to show that unconstrained global optimization methods can also be used for solving nondifferentiable constrained multiobjective programming problems, by resorting to an exact penalty approach. Namely, we extend the exact exponential penalty function method introduced by Antczak [17] to the vectorial case.
Hence, we introduce a new vector exponential penalty function method, and we use it for solving a class of nondifferentiable multiobjective programming problems involving r-invex functions (with respect to the same function η). The method is based on the construction of an exact absolute value penalty function, which is minimized in the exponential penalized optimization problem constructed in this method. This function is the sum of a certain "merit" function (which reflects the objective function of the original problem) and a penalty term which reflects the constraint set. The merit function is chosen as the composition of the exponential function and the original objective function, while the penalty term is obtained by multiplying a suitable function representing the constraints (here, likewise a sum of compositions of the exponential function with the constraint functions) by a positive parameter c, called the penalty parameter.
This work is organized as follows. In Sect. 2, some preliminary results are given that are useful in proving the main results of the paper. In Sect. 3, a new vector exponential penalty function method is introduced, and its algorithmic aspect is presented. The convergence of the sequence of weak Pareto solutions of the vector subproblems generated by the described method is established. In Sect. 4, the exactness of the penalization is extended to the case of a vector exact penalty function method. The results for the vector exterior exponential penalty function algorithm are reviewed, and the relationship between weak Pareto solutions in the original multiobjective programming problem and weak Pareto solutions in the associated penalized optimization subproblems is discussed. Thus, the exactness property is defined for the introduced vector exponential penalty function method. Namely, we prove that there exists a finite lower bound of the penalty parameter c such that, for every penalty parameter c exceeding this threshold, a (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem is also an unconstrained (weak) Pareto solution in its associated vector penalized optimization problem with the vector exact exponential penalty function. Also, under nondifferentiable r-invexity, the converse result is established for sufficiently large penalty parameters exceeding the finite threshold. Hence, the equivalence between the considered nonconvex nondifferentiable multiobjective programming problem and its associated vector penalized optimization problem is established for sufficiently large penalty parameters, under the assumption that all functions constituting the considered nonsmooth multiobjective programming problem are r-invex (with respect to the same function η).
The results established in the paper are illustrated by suitable examples of nonconvex nondifferentiable vector optimization problems which we solve by using the vector exact exponential penalty function method defined in this paper. Finally, in Sect. 5, we discuss the consequences of the extension of the exact exponential penalty function method defined by Antczak [17] for scalar optimization problems to the vectorial case and its significance for vector optimization.

Preliminaries
The following convention for equalities and inequalities will be used throughout the paper.
For any x = (x_1, x_2, ..., x_n)^T, y = (y_1, y_2, ..., y_n)^T, we define: (i) x = y if and only if x_i = y_i for all i = 1, 2, ..., n; (ii) x < y if and only if x_i < y_i for all i = 1, 2, ..., n; (iii) x ≦ y if and only if x_i ≦ y_i for all i = 1, 2, ..., n; (iv) x ≤ y if and only if x ≦ y and x ≠ y.
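These componentwise relations translate directly into code; the following small Python sketch (an editorial illustration, not part of the original paper) encodes the four relations and checks them on sample vectors.

```python
# Componentwise order relations on R^n used throughout the paper.
def eq(x, y):      # x = y
    return all(xi == yi for xi, yi in zip(x, y))

def lt(x, y):      # x < y: strict inequality in every component
    return all(xi < yi for xi, yi in zip(x, y))

def leq(x, y):     # x ≦ y: weak inequality in every component
    return all(xi <= yi for xi, yi in zip(x, y))

def le_neq(x, y):  # x ≤ y: componentwise ≦ together with x ≠ y
    return leq(x, y) and not eq(x, y)

# Example: (1, 2) ≤ (1, 3), but (1, 2) < (1, 3) fails in the first component.
print(le_neq([1, 2], [1, 3]), lt([1, 2], [1, 3]))  # True False
```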
Definition 1 A function f : R^n → R is locally Lipschitz at a point x ∈ R^n if there exist scalars K_x > 0 and ε > 0 such that |f(y) − f(z)| ≦ K_x ‖y − z‖ for all y, z ∈ B(x; ε), where B(x; ε) is the open ball of radius ε about x.
Definition 2 [30] The Clarke generalized directional derivative of a locally Lipschitz function f : R^n → R at x ∈ R^n in the direction v ∈ R^n, denoted f°(x; v), is defined by f°(x; v) = limsup_{y→x, t↓0} [f(y + tv) − f(y)]/t.

Definition 3 [30] The Clarke generalized subgradient of a locally Lipschitz function f : R^n → R at x ∈ R^n, denoted ∂f(x), is defined as follows: ∂f(x) = {ξ ∈ R^n : f°(x; v) ≧ ξ^T v for all v ∈ R^n}.

Lemma 4 [30] Let f : X → R be a locally Lipschitz function on a nonempty open set X ⊂ R^n, u be an arbitrary point of X and λ ∈ R. Then ∂(λf)(u) = λ∂f(u).

Proposition 5 [30] Let f_i : X → R, i = 1, ..., k, be locally Lipschitz functions on a nonempty set X ⊂ R^n, u be an arbitrary point of X ⊂ R^n. Then ∂(Σ_{i=1}^k f_i)(u) ⊆ Σ_{i=1}^k ∂f_i(u). Equality holds in the above relation if all but at most one of the functions f_i are strictly differentiable at u.

Corollary 6 [30] For any scalars λ_i, one has ∂(Σ_{i=1}^k λ_i f_i)(u) ⊆ Σ_{i=1}^k λ_i ∂f_i(u), and equality holds if all but at most one of the functions f_i are strictly differentiable at u.

Theorem 7 [30] Let the function f : R^n → R be locally Lipschitz at a point x ∈ R^n and attain its (local) minimum at x. Then 0 ∈ ∂f(x).

Proposition 8 [30] Let the functions f_i : R^n → R, i ∈ I = {1, ..., k}, be locally Lipschitz at a point x ∈ R^n. Then the function f(·) := max{f_i(·) : i ∈ I} is locally Lipschitz at x and ∂f(x) ⊆ conv{∂f_i(x) : i ∈ I(x)}, where I(x) = {i ∈ I : f_i(x) = f(x)}.

Now, for the reader's convenience, we give the definition of a nondifferentiable vector-valued (strictly) r-invex function (see [31] for the scalar case and [32] for the vectorial case).
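As a numerical illustration of Definitions 2 and 3 (an added sketch, not from the paper), the Clarke generalized directional derivative of f(x) = |x| at x = 0 can be approximated by sampling the limsup; for this function f°(0; v) = |v|, so ∂f(0) = [−1, 1].

```python
# Numerically approximate the Clarke generalized directional derivative
#   f°(x; v) = limsup_{y -> x, t -> 0+} (f(y + t v) - f(y)) / t
# for f(x) = |x| at x = 0.  The exact value is f°(0; v) = |v|,
# so the Clarke subdifferential is ∂f(0) = [-1, 1].

def clarke_dd(f, x, v, radius=1e-4, n=200):
    best = float("-inf")
    for i in range(1, n + 1):
        y = x - radius + 2 * radius * i / n   # sample y near x
        for t in (1e-5, 1e-6, 1e-7):          # sample t near 0+
            best = max(best, (f(y + t * v) - f(y)) / t)
    return best

print(round(clarke_dd(abs, 0.0, 1.0), 6))   # ≈ 1.0 = |v|
print(round(clarke_dd(abs, 0.0, -2.0), 6))  # ≈ 2.0 = |v|
```

The grid of sample points (y, t) only approximates the limsup, but for this piecewise-linear function the supremum is already attained at the samples.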

Definition 9
Let X be a nonempty subset of R^n and f : R^n → R^k be a vector-valued function such that each of its components is locally Lipschitz at a given point x̄ ∈ X. If there exist a function η : X × X → R^n and a real number r such that, for i = 1, ..., k, the following inequalities

(1/r) e^{r f_i(x)} ≧ (1/r) e^{r f_i(x̄)} [1 + r ξ_i^T η(x, x̄)] for r ≠ 0,
f_i(x) − f_i(x̄) ≧ ξ_i^T η(x, x̄) for r = 0, (1)

hold for each ξ_i ∈ ∂f_i(x̄) and all x ∈ X, then f is said to be a nondifferentiable r-invex vector-valued function at x̄ on X (with respect to η). If inequalities (1) are satisfied at any point x̄ ∈ X, then f is said to be a nondifferentiable r-invex function on X (with respect to η). Each function f_i, i = 1, ..., k, satisfying (1) is said to be locally Lipschitz r-invex at x̄ on X (with respect to η).
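To make Definition 9 concrete, here is a hedged numerical sanity check (the instance is hypothetical, not from the paper): for the differentiable function f(x) = x², the transform e^f is convex, so f is 1-invex with respect to η(x, u) = x − u, and inequality (1) can be verified on a grid.

```python
import math

# Check the r-invexity inequality (r != 0) at a point u:
#   (1/r) e^{r f(x)} >= (1/r) e^{r f(u)} * (1 + r * f'(u) * eta(x, u))
# for f(x) = x**2 with r = 1 and eta(x, u) = x - u (a hypothetical
# choice: here e^{f} is convex, which makes f 1-invex w.r.t. this eta).

def r_invex_ok(f, df, u, r=1.0, eta=lambda x, u: x - u):
    pts = [k / 10.0 for k in range(-30, 31)]  # grid on [-3, 3]
    lhs = lambda x: math.exp(r * f(x)) / r
    rhs = lambda x: (math.exp(r * f(u)) / r) * (1 + r * df(u) * eta(x, u))
    return all(lhs(x) >= rhs(x) - 1e-12 for x in pts)

f, df = lambda x: x * x, lambda x: 2 * x
print(all(r_invex_ok(f, df, u) for u in (-2.0, 0.0, 1.5)))  # True
```

For comparison, f(x) = −x² fails the same check at u = 0, since e^{−x²} lies below its tangent there.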

Definition 10
Let X be a nonempty subset of R^n and f : R^n → R^k be a vector-valued function such that each of its components is locally Lipschitz at a given point x̄ ∈ X. If there exist a function η : X × X → R^n and a real number r such that, for i = 1, ..., k, the strict inequalities

(1/r) e^{r f_i(x)} > (1/r) e^{r f_i(x̄)} [1 + r ξ_i^T η(x, x̄)] for r ≠ 0,
f_i(x) − f_i(x̄) > ξ_i^T η(x, x̄) for r = 0, (2)

hold for each ξ_i ∈ ∂f_i(x̄) and all x ∈ X, x ≠ x̄, then f is said to be a nondifferentiable vector-valued strictly r-invex function at x̄ on X (with respect to η). If inequalities (2) are satisfied at any point x̄ ∈ X, then f is said to be a nondifferentiable strictly r-invex function on X (with respect to η).

Remark 11
In order to define an analogous class of vector-valued r -incave functions with respect to η, the direction of the inequalities (1) should be changed to the opposite one.

Remark 12
Note that in the case when r = 0, the definition of a (strictly) r -invex vector-valued function reduces to the definition of nondifferentiable (strictly) invex vector-valued function (see, for example, [33,34]).

Remark 13
For more details on the properties of nondifferentiable r-invex functions, we refer the reader to Antczak [31] for the scalar case and to Antczak [32] for the vectorial case. Now, we prove a useful result which we will use in proving the main results of the paper.
Theorem 14 Let x̄ ∈ X ⊂ R^n and q : X → R be a locally Lipschitz r-invex function at x̄ ∈ X on X with respect to η : X × X → R^n. Further, let (1/r)(e^{r q_+(x)} − 1) := max{0, (1/r)(e^{r q(x)} − 1)}. Then the function (1/r)(e^{r q_+(·)} − 1) is a locally Lipschitz invex function at x̄ ∈ X on X with respect to the same function η.
Proof We consider the following cases.

(1) q(x̄) > 0. Then (1/r)(e^{r q_+(x)} − 1) = (1/r)(e^{r q(x)} − 1) on some neighborhood of x̄, and so ∂[(1/r)(e^{r q_+(x̄)} − 1)] = ∂[(1/r)(e^{r q(x̄)} − 1)]. By assumption, q is a locally Lipschitz r-invex function at x̄ ∈ X on X with respect to η : X × X → R^n. Therefore, for any ζ_+ ∈ ∂[(1/r)(e^{r q_+(x̄)} − 1)] and all x ∈ X, we have

(2) q(x̄) < 0. Then, by definition, (1/r)(e^{r q_+(x)} − 1) = 0 on some neighborhood of x̄, and so ∂[(1/r)(e^{r q_+(x̄)} − 1)] = {0}. Therefore, for any ζ_+ ∈ ∂[(1/r)(e^{r q_+(x̄)} − 1)] and all x ∈ X, we have

(3) q(x̄) = 0. By Proposition 8, it follows that By assumption, q is a locally Lipschitz r-invex function at x̄ ∈ X on X with respect to the function η. Therefore, by definition, for any ζ ∈ ∂q(x̄) and all x ∈ X, we have Hence, by (6) and (7), the following relations hold for every λ ∈ [0, 1]. Thus, (8) implies that, for any ζ_+ ∈ ∂[(1/r)(e^{r q_+(x̄)} − 1)] and all x ∈ X, the following inequality holds.

Hence, by (3), (4), (9) and Remark 12, we conclude that (1/r)(e^{r q_+(·)} − 1) is a nondifferentiable invex function at x̄ ∈ X on X with respect to η.
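Theorem 14 can also be checked numerically on a hypothetical instance (an added illustration, not from the paper): take q(x) = x² − 1, which is 1-invex with respect to η(x, u) = x − u since e^q is convex, and verify the invexity inequality for the transformed function at several points, including the kinks where q(u) = 0.

```python
import math

# Theorem 14 sanity check on a hypothetical instance:
# q(x) = x**2 - 1 is 1-invex w.r.t. eta(x, u) = x - u (e^q is convex).
# Verify that h(x) = max(0, e^{q(x)} - 1), i.e. (1/r)(e^{r q_+(x)} - 1)
# with r = 1, satisfies the invexity inequality
#   h(x) >= h(u) + zeta * (x - u)
# for zeta in the Clarke subdifferential of h at u.

q  = lambda x: x * x - 1
dq = lambda x: 2 * x
h  = lambda x: max(0.0, math.exp(q(x)) - 1.0)

def subgradients(u):
    if q(u) > 0:
        return [math.exp(q(u)) * dq(u)]
    if q(u) < 0:
        return [0.0]
    # kink q(u) = 0: convex hull of {0, e^{q(u)} q'(u)}, sampled at endpoints
    return [0.0, math.exp(q(u)) * dq(u)]

grid = [k / 10.0 for k in range(-30, 31)]
ok = all(h(x) >= h(u) + z * (x - u) - 1e-12
         for u in (-2.0, -1.0, 0.0, 1.0, 2.5)
         for z in subgradients(u)
         for x in grid)
print(ok)  # True
```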
In general, the unconstrained nonsmooth vectorial optimization problem is represented as follows:

(UVP): minimize f(x) := (f_1(x), ..., f_k(x)), x ∈ R^n,

where the objective functions f_i : R^n → R, i ∈ I = {1, ..., k}, are locally Lipschitz on R^n. In general, the concept of an optimal solution defined for scalar optimization problems does not carry over to multiobjective programming problems. For such multicriterion optimization problems, an optimal solution is defined in terms of a (weak) Pareto solution [(weakly) efficient solution] in the following sense.

Definition 15 A feasible point x̄ is said to be a Pareto solution (efficient solution) if there exists no feasible point x such that f(x) ≤ f(x̄).

Definition 16 A feasible point x̄ is said to be a weak Pareto solution (weakly efficient solution) if there exists no feasible point x such that f(x) < f(x̄).

It is easy to verify that every Pareto solution is a weak Pareto solution.
The following result gives the necessary optimality condition for the unconstrained vectorial optimization problem (UVP) (see [35]).

Theorem 17 A necessary condition for the point x̄ to be (weak) Pareto optimal in the nondifferentiable vector optimization problem (UVP) is that there exist multipliers λ ∈ R^k, λ ≥ 0, Σ_{i=1}^k λ_i = 1, such that 0 ∈ Σ_{i=1}^k λ_i ∂f_i(x̄).
Often, the feasible set of a multiobjective programming problem can be represented by functional inequalities and, therefore, we consider the nondifferentiable constrained vector optimization problem in the following form:

(VP): minimize f(x) := (f_1(x), ..., f_k(x)) subject to g_j(x) ≦ 0, j ∈ J = {1, ..., m}, x ∈ X,

where f_i : X → R, i ∈ I = {1, ..., k}, and g_j : X → R, j ∈ J, are locally Lipschitz functions on a nonempty open set X ⊆ R^n. We denote by D := {x ∈ X : g_j(x) ≦ 0, j ∈ J} the set of all feasible solutions of (VP).

It is well known (see, for example, [33][34][35][36][37][38]) that the following conditions, known as the generalized form of the Karush-Kuhn-Tucker conditions, are necessary for a (weak) Pareto solution in the considered nondifferentiable vector optimization problem (VP).

Theorem 18 Let x̄ be a (weak) Pareto solution in the considered multiobjective programming problem (VP) and a suitable constraint qualification ([30,36,39]) be satisfied at x̄. Then, there exist Lagrange multipliers λ̄ ∈ R^k and μ̄ ∈ R^m such that

0 ∈ Σ_{i=1}^k λ̄_i ∂f_i(x̄) + Σ_{j=1}^m μ̄_j ∂g_j(x̄), (10)
μ̄_j g_j(x̄) = 0, j ∈ J, (11)
λ̄ ≥ 0, μ̄ ≧ 0. (12)

Definition 19
The point x̄ ∈ D is said to be a Karush-Kuhn-Tucker point in the considered multiobjective programming problem (VP) if there exist Lagrange multipliers λ̄ ∈ R^k, μ̄ ∈ R^m such that the Karush-Kuhn-Tucker necessary optimality conditions (10)-(12) are satisfied at x̄.

Convergence of a New Vector Exponential Penalty Function Method for a Multiobjective Programming Problem
For the considered nonlinear multiobjective programming problem (VP), we introduce a new vector exponential penalty function method, in which, for a penalty parameter c > 0, the following vector penalized problem is constructed:

(VP_r(c)): minimize P_r(x, c) := (1/r)(e^{r f(x)} − e) + c Σ_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1) e, (13)

where r is a finite real number not equal to 0, e = (1, ..., 1) ∈ R^k, and e^{r f(x)} := (e^{r f_1(x)}, ..., e^{r f_k(x)}) is understood componentwise. Note that, for a given constraint g_j(x) ≦ 0, the function (1/r)(e^{r g_j^+(·)} − 1) defined by

(1/r)(e^{r g_j^+(x)} − 1) := max{0, (1/r)(e^{r g_j(x)} − 1)} (14)

is equal to zero for all x that satisfy the constraint, and it has a positive value whenever this constraint is violated. Moreover, large violations of the constraint g_j(x) ≦ 0 result in large values of (1/r)(e^{r g_j^+(x)} − 1). Thus, the function (1/r)(e^{r g_j^+(·)} − 1) has the penalty features relative to the single inequality constraint g_j. However, observe that at points where g_j(x) = 0, the foregoing function might not be differentiable, even though g_j is differentiable.
As follows from (13), the vector penalized problem (VP_r(c)) constructed in the vector exponential penalty function method is an unconstrained vector optimization problem in which the vector objective function is the sum of a certain vector "merit" function (which reflects the vector objective function of the given multiobjective programming problem) and a penalty term, the same for each component of the vector merit function, which reflects the constraint set. The vector merit function is chosen as the composition of the exponential function and the original vector objective function, while the penalty term is obtained by multiplying a suitable function, which represents the constraints, by a positive parameter c, called the penalty parameter.
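The construction (13)-(14) can be written out directly. The sketch below evaluates the components P_ri(x, c) for a hypothetical two-objective, one-constraint instance (the problem data are illustrative, not taken from the paper):

```python
import math

# Components of the vector exponential penalty function, following (13)-(14):
#   P_ri(x, c) = (1/r)(e^{r f_i(x)} - 1) + c * sum_j (1/r)(e^{r g_j^+(x)} - 1)
# where (1/r)(e^{r g_j^+(x)} - 1) = max(0, (1/r)(e^{r g_j(x)} - 1)).

def penalty_term(gx, r):
    """(1/r)(e^{r g^+(x)} - 1): zero when the constraint g(x) <= 0 holds,
    positive (and growing with the violation) otherwise."""
    return max(0.0, (math.exp(r * gx) - 1.0) / r)

def P(x, c, f_list, g_list, r=1.0):
    pen = c * sum(penalty_term(g(x), r) for g in g_list)
    return [(math.exp(r * f(x)) - 1.0) / r + pen for f in f_list]

# Hypothetical data: f1(x) = x, f2(x) = x**2, constraint g(x) = x - 1 <= 0.
f_list = [lambda x: x, lambda x: x * x]
g_list = [lambda x: x - 1.0]

print(P(0.5, 10.0, f_list, g_list))  # feasible point: no penalty added
print(P(2.0, 10.0, f_list, g_list))  # infeasible point: both components penalized
```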
Remark 20 Note that P_r : R^n × R_+ → R^k and P_r(x, c) = (P_{r1}(x, c), ..., P_{rk}(x, c)).

Remark 21 In the case when r = 0, the definition of the vector penalized problem (VP_r(c)) reduces to the vector penalized problem with the classical vector l1 penalty function (see Antczak [19]).

Now, we show that a weak Pareto solution in the considered multiobjective programming problem (VP) can be obtained by solving a sequence of problems (13) with the penalty parameter c selected from an increasing sequence of parameters (c_n).
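The reduction noted in Remark 21 rests on the elementary limit (1/r)(e^{rt} − 1) → t as r → 0, so each exponential term collapses to the corresponding l1 term; a quick numerical check (illustration only):

```python
import math

# As r -> 0, (1/r)(e^{r t} - 1) -> t, so the exponential penalty
# components collapse to the classical l1 terms f_i(x) and g_j^+(x).
def exp_transform(t, r):
    return (math.exp(r * t) - 1.0) / r

t = 0.7
for r in (1.0, 0.1, 0.01, 0.001):
    print(r, exp_transform(t, r))  # approaches t = 0.7 as r shrinks
```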
Therefore, for the considered multiobjective programming problem (VP), we now construct a sequence of vector penalized optimization problems (VP_r(c_n)), n = 1, 2, ..., with the vector exponential penalty function, where c_n > 0 and lim_{n→∞} c_n = ∞. Moreover, we denote by x_n an approximate weak Pareto solution in the vector penalized optimization problem (VP_r(c_n)) with the vector exponential penalty function. An algorithmic framework that forms the basis for the introduced vector exponential penalty function method is as follows.

Exponential penalty function method for a multiobjective programming problem (VP):
1. Choose a starting point x_1^s, an initial penalty parameter c_1 > 0, and set n := 1.
2. Find an approximate weak Pareto solution x_n of P_r(x, c_n), starting at x_n^s.
3. If x_n is (approximately) feasible in (VP), stop. Otherwise, choose a new penalty parameter c_{n+1} > c_n, choose a new starting point x_{n+1}^s, set n := n + 1, and go to Step 2.

Now, we prove the convergence theorem for the introduced vector exponential penalty function method. Namely, we show that if (x_{n_s}) is any convergent subsequence of (x_n) and lim_{s→∞} x_{n_s} = x̄ ∈ D, then x̄ is a weak Pareto solution in the considered multiobjective programming problem (VP).
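The sequential scheme described above can be sketched as follows (an editorial illustration on hypothetical data: a fixed weighted-sum scalarization stands in for the inner solver that finds an approximate weak Pareto solution, and a grid search stands in for unconstrained minimization):

```python
import math

# Sequential exponential penalty loop (sketch, hypothetical problem data).
r = 1.0
f_list = [lambda x: x, lambda x: (x - 2.0) ** 2]  # two objectives
g_list = [lambda x: x - 1.0]                      # constraint g(x) = x - 1 <= 0
w = [0.5, 0.5]                                    # scalarization weights

def P_i(x, c, f):
    pen = sum(max(0.0, (math.exp(r * g(x)) - 1.0) / r) for g in g_list)
    return (math.exp(r * f(x)) - 1.0) / r + c * pen

def scalarized(x, c):
    return sum(wi * P_i(x, c, f) for wi, f in zip(w, f_list))

grid = [k / 1000.0 for k in range(-2000, 4001)]   # grid search on [-2, 4]
c, x_n = 0.05, None
for n in range(12):
    x_n = min(grid, key=lambda x: scalarized(x, c))  # inner "solver"
    if max(g(x_n) for g in g_list) <= 1e-6:          # feasible: stop
        break
    c *= 4.0                                         # enlarge c_{n+1} > c_n

print(x_n)  # the iterates move to a feasible point on the boundary, x_n = 1.0
```

With a too-small initial c the grid minimizer sits outside the feasible set; once c crosses a finite threshold, the minimizer lands exactly at the constraint boundary, previewing the exactness property of Sect. 4.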
First, we show that any limit point x̄ of the sequence (x_n), that is, the sequence of approximate weak Pareto solutions in the vector penalized optimization problems (VP_r(c_n)) with the vector exponential penalty function, is feasible in the considered multiobjective programming problem (VP).

Lemma 22 Let x_n be an approximate weak Pareto solution in the vector penalized optimization problem (VP_r(c_n)) with the vector exponential penalty function, n = 1, 2, .... Then any limit point x̄ of the sequence (x_n) is feasible in the considered multiobjective programming problem (VP).

Proof Let x̄ = lim_{s→∞} x_{n_s}, where (x_{n_s}) is a subsequence of (x_n) such that x_{n_s} is an approximate weak Pareto solution in the vector penalized problem (VP_r(c_{n_s})), s = 1, 2, ..., with the vector exponential penalty function. We proceed by contradiction. Suppose, contrary to the result, that x̄ ∉ D. If we take x ∈ D, then, according to the definition of the vector penalized problem (VP_r(c_{n_s})) and Definition 16, there exists i_{n_s} ∈ {1, ..., k} such that By assumption, x̄ ∉ D. This means that there exists j ∈ {1, 2, ..., m} such that g_j(x̄) > 0. Then, by (14), it follows that Hence, (19) implies that for some ε > 0. By assumption, lim_{n→∞} c_n = ∞. Therefore, using (18) and (21), for s sufficiently large, we have This is a contradiction to (17). This means that x̄ ∈ D, and the proof of this lemma is completed.
The following theorem shows that if a sequence of approximate weak Pareto solutions in the vector penalized problems (VP_r(c_n)) with the vector exponential penalty function converges to x̄, then x̄ is also a weak Pareto solution in the considered multiobjective programming problem (VP).
Theorem 23 Let x_n be an approximate weak Pareto solution in the vector penalized optimization problem (VP_r(c_n)) with the vector exponential penalty function, n = 1, 2, .... If (x_{n_s}) is any convergent subsequence of (x_n) and lim_{s→∞} x_{n_s} = x̄, then x̄ is a weak Pareto solution in the considered multiobjective programming problem (VP).
Proof Let (x_{n_s}) be any convergent subsequence of (x_n) with lim_{s→∞} x_{n_s} = x̄. By Lemma 22, it follows that x̄ is feasible in the considered multiobjective programming problem (VP). We proceed by contradiction. Suppose, contrary to the result, that x̄ is not a weak Pareto solution in the considered multiobjective programming problem (VP). Hence, by Definition 16, it follows that there exists x̃ ∈ D such that Thus, Also by (14), it follows that Combining (24) with (23), it follows that (1/r) e^{r f_{i_{n_s}}(x̃)} < (1/r) e^{r f_{i_{n_s}}(x̄)}.
Since lim_{s→∞} x_{n_s} = x̄, for sufficiently large s, (28) implies that the following inequality holds, contradicting (27). This means that any limit point of any convergent subsequence of (x_n) is a weak Pareto solution in the considered multiobjective programming problem (VP). The proof of this theorem is completed.
It turns out that the strategy for choosing the penalty parameter c n is crucial to the practical success of the algorithm presented above. If the initial choice c 0 is too small, many cycles of the algorithm presented above may be needed to determine an appropriate solution.
In order to illustrate the difficulties caused by an inappropriate value of c, we consider the following multiobjective programming problem.
Example 24 Consider the following vector optimization problem: Note that D = {x ∈ R : 0 ≦ x ≦ 1} and x̄ = 1 is a Pareto solution in the considered multiobjective programming problem (VP0). Note that all functions constituting problem (VP0) are 1-invex on R with respect to the same function η, where η(x, x̄) = x − x̄. We define the vector exponential penalty function P_1(·, c) as follows: Now, we consider various values of the penalty parameter c and draw graphs of each component of the vector exponential penalty function P_1(·, c) for the considered multiobjective programming problem (VP0) according to these values of the penalty parameter c (Fig. 1). Note that each component of the vector exponential penalty function P_1(·, c) is a monotonically increasing function when c is smaller than 0.15. On the other hand, when c is chosen from the interval (0.15, 1), the vector exponential penalty function has a Pareto solution, but not at the feasible solution x̄ = 1, while the vector exponential penalty function P_1(·, c) has a Pareto solution at x̄ = 1 when c > 1.
Therefore, if, for example, the current iterate in the above algorithm is x_n = 1/4 and the penalty parameter c_n is chosen to be less than 1, then almost any implementation of the vector exponential penalty method will take a step that moves away from the Pareto solution x̄ = 1. This behavior of the algorithm will be repeated, producing increasingly poorer iterates, until the penalty parameter c is increased above the threshold equal to 1. It turns out in our further considerations that this value of the threshold is not accidental (see Remark 37 below).
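The threshold phenomenon observed in Example 24 can be reproduced on an even simpler hypothetical scalar instance (not the paper's problem (VP0), whose data are omitted here): minimize f(x) = −x subject to g(x) = x − 1 ≦ 0 with r = 1. At x̄ = 1 the Karush-Kuhn-Tucker multiplier is μ̄ = 1 and M = e^{r f(x̄)} = e^{−1} ≈ 0.368, so the sufficient bound of Theorem 26 suggests exactness for c above roughly 0.368:

```python
import math

# Exponential penalty for the hypothetical scalar problem
#   minimize f(x) = -x   subject to g(x) = x - 1 <= 0,   r = 1.
# At xbar = 1 the KKT multiplier is mu = 1 and M = e^{f(1)} = e^{-1},
# so exactness is expected for c above roughly 1/e ≈ 0.368.
def P(x, c):
    merit = math.exp(-x) - 1.0                 # (1/r)(e^{r f(x)} - 1)
    pen = max(0.0, math.exp(x - 1.0) - 1.0)    # (1/r)(e^{r g^+(x)} - 1)
    return merit + c * pen

def argmin(c):
    grid = [k / 10000.0 for k in range(0, 30001)]  # grid search on [0, 3]
    return min(grid, key=lambda x: P(x, c))

print(argmin(0.1))  # ≈ 1.65 > 1: c below the threshold, minimizer infeasible
print(argmin(1.0))  # 1.0: c above the threshold, exact penalization
```

For c below the threshold the penalized minimizer drifts into the infeasible region (here to x ≈ (1 − ln c)/2), exactly the behavior described for the algorithm above.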

Exactness of the Introduced Vector Exponential Penalty Function Method
In order to avoid the need for an unbounded sequence of penalty parameters, in other words, an infinite sequence of penalized optimization problems, we now prove that the introduced vector exponential penalty function method is exact in the sense that a (weak) Pareto solution in the original multiobjective programming problem is equivalent to a (weak) Pareto solution in the associated vector penalized problem, for a finite (sufficiently large) value of the penalty parameter. In order to prove this result, we assume that each of the functions constituting the considered multiobjective programming problem (VP) is locally Lipschitz r-invex with respect to the same function η. Now, in a natural way, we extend the well-known definition of the exactness property for a scalar exact penalty function to the vectorial case.

Definition 25
If a threshold value c̄ ≧ 0 exists such that, for every c > c̄, the set of (weak) Pareto solutions of the vector penalized problem (VP_r(c)) coincides with the set of (weak) Pareto solutions of the considered multiobjective programming problem (VP), then the function P_r(x, c) is termed a vector exact exponential penalty function.
According to the definition of the function P r (x, c), we call (VP r (c)), defined by (13), the vector penalized problem with the vector exact exponential penalty function.
It is clear that, conceptually, if P_r(x, c) is a vector exact exponential penalty function, we can find the constrained (weak) Pareto solutions of the considered multiobjective programming problem (VP) by looking for the unconstrained (weak) Pareto solutions of the function P_r(x, c), for sufficiently large (but finite) values of the penalty parameter c. Now, for sufficiently large values of the penalty parameter c, we prove the equivalence between the sets of (weak) Pareto solutions of problem (VP) and of the vector penalized problem (VP_r(c)) with the vector exact exponential penalty function.
First, we establish that a Karush-Kuhn-Tucker point in the considered multiobjective programming problem is a weak Pareto solution in the associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function, for penalty parameters c greater than the given threshold.

Theorem 26 Let x̄ be a feasible solution in the nonsmooth multiobjective programming problem (VP) at which the generalized Karush-Kuhn-Tucker necessary optimality conditions (10)-(12) are satisfied with the Lagrange multipliers λ̄_i, i ∈ I, and μ̄_j, j ∈ J. Furthermore, assume that the objective function f and the constraint function g are r-invex at x̄ on X with respect to the same function η, and set M = max{e^{r f_i(x̄)}, i ∈ I}. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set c ≧ M max{μ̄_j, j ∈ J}), then x̄ is also a weak Pareto solution in the associated vector penalized optimization problem (VP_r(c)) with the vector exact exponential penalty function.
Proof We proceed by contradiction. Suppose, contrary to the result, that x̄ is not a weak Pareto solution in the associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function. Therefore, by Definition 16, there exists x̃ ∈ X such that P_r(x̃, c) < P_r(x̄, c).
By definition of the vector penalized problem (VP_r(c)) [see (13)], we have Thus, (31) Since x̄ is a feasible solution in the nonsmooth multiobjective programming problem (VP), (14) yields By assumption, M = max{e^{r f_i(x̄)}, i ∈ I} and, moreover, c ≧ M max{μ̄_j, j ∈ J}. Hence, (32) gives Thus, (33) yields By the Karush-Kuhn-Tucker necessary optimality condition (12), it follows that and, for at least one i* ∈ I, we have Adding both sides of (34) and (35), we get By the Karush-Kuhn-Tucker necessary optimality condition (12), we have that By assumption, the objective function f and the constraint function g are locally Lipschitz r-invex at x̄ on X with respect to the same function η. Then, by Definition 9, the following inequalities hold for all x ∈ X. Therefore, they are also satisfied for x = x̃ ∈ X. Using the Karush-Kuhn-Tucker necessary optimality condition (12), we get, respectively, Adding both sides of the above inequalities, we get that the following inequality holds for every ξ_i ∈ ∂f_i(x̄), i = 1, ..., k, and ζ_j ∈ ∂g_j(x̄), j = 1, ..., m. By the Karush-Kuhn-Tucker necessary optimality condition (10), the following inequality Hence, using the Karush-Kuhn-Tucker necessary optimality condition (11), we obtain By (14), it follows that the following inequality holds, contradicting (37). Hence, the proof of this theorem is completed.
The following corollary follows directly from Theorem 26.

Corollary 27 Let x̄ be a weak Pareto solution in the multiobjective programming problem (VP) and all hypotheses of Theorem 26 be fulfilled. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set c ≧ M max{μ̄_j, j ∈ J}), then x̄ is also a weak Pareto solution in the associated vector penalized optimization problem (VP_r(c)) with the vector exact exponential penalty function.
Now, under stronger assumptions, we establish the relationship between a Karush-Kuhn-Tucker point in the considered multiobjective programming problem (VP) and a Pareto solution in its associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function.

Theorem 28 Let x̄ be a feasible solution in the multiobjective programming problem (VP) at which the Karush-Kuhn-Tucker necessary optimality conditions (10)-(12) are satisfied with the Lagrange multipliers λ̄_i, i ∈ I, and μ̄_j, j ∈ J. Furthermore, assume that one of the following hypotheses is satisfied:

(i) the Lagrange multipliers λ̄_i, i ∈ I, associated with the objectives f_i are positive real numbers and, moreover, the objective function f and the constraint function g are r-invex at x̄ on X with respect to η;
(ii) the objective function f is strictly r-invex at x̄ on X with respect to η and the constraint function g is r-invex at x̄ on X with respect to η.

If M = max{e^{r f_i(x̄)}, i ∈ I} and the penalty parameter c is assumed to be sufficiently large (it is sufficient to set c ≧ M max{μ̄_j, j ∈ J}), then x̄ is also a Pareto solution in the associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function.
Proof The proof of this theorem is similar to that of Theorem 26.

Corollary 29 Let x̄ be a Pareto solution in the multiobjective programming problem (VP) and all hypotheses of Theorem 28 be fulfilled. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set c ≧ M max{μ̄_j, j ∈ J}), then x̄ is also a Pareto solution in the associated vector penalized problem (VP_r(c)) with the vector exact exponential penalty function.

Now, under stronger assumptions, we establish the converse of the results proved above. Namely, we prove that, for sufficiently large values of the penalty parameter c, if x̄ is a (weak) Pareto solution in the vector penalized problem (VP_r(c)) with the vector exact exponential penalty function, then it is also a (weak) Pareto solution in the original multiobjective programming problem (VP). To prove this result, we assume that both the objective function and the constraint functions are r-invex at x̄ on X with respect to the same function η. We also show that there exists a finite threshold c̄ of the penalty parameter c such that, for every penalty parameter c exceeding this threshold, a (weak) Pareto solution in the associated vector penalized optimization problem (VP_r(c)) with the vector exact exponential penalty function is a (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP).

Theorem 30 Let D be a compact subset of R^n and x̄ be a weak Pareto solution in the vector penalized problem (VP_r(c)) with the vector exact exponential penalty function, where c is assumed to be sufficiently large. Further, assume that the objective function f and the constraint function g are r-invex at x̄ on X with respect to the same function η. Then x̄ is also a weak Pareto solution in the given multiobjective programming problem (VP).
Proof By assumption, x̄ is a weak Pareto solution in the vector penalized problem (VP_r(c)) with the vector exact exponential penalty function. We consider two cases. First, assume that x̄ ∈ D. Then, by definition of the vector penalized problem (VP_r(c)) with the vector exact exponential penalty function, it follows that Hence, by (14), (38) implies that the relation holds, by which we conclude that x̄ is weakly Pareto optimal in the considered multiobjective programming problem (VP). Then, for any c ≧ c̄ (where c̄ is equal to the penalty parameter for which the vector penalized problem (VP_r(c̄)) is defined), a weak Pareto solution x̄ in each vector penalized problem (VP_r(c)) with the vector exact exponential penalty function is a weak Pareto solution also in the considered multiobjective programming problem (VP).
In the case when x̄ ∈ D, the result follows directly from (39). Now, suppose that x̄ ∉ D. Since x̄ is a weak Pareto solution in the vector penalized problem (VP_r(c)), by Theorem 17, there exists λ̄ ∈ R^k, λ̄ ≥ 0, Σ_{i=1}^k λ̄_i = 1, such that By definition of the vector exact exponential penalty function, it follows that By assumption, all functions g_j, j = 1, ..., m, are locally Lipschitz on X. Then, by definition, the functions (1/r)(e^{r g_j^+(·)} − 1), j = 1, ..., m, are also locally Lipschitz on X. Since all λ̄_i are nonnegative, equality holds in Corollary 6. Thus, (41) yields Thus, Then, by Lemma 4, it follows that Hence, by Proposition 5, we have Thus, by Theorem 2.3.9 [30], it follows that By assumption, the objective function f and the constraint function g are r-invex at x̄ on X with respect to the same function η. Since the constraint functions g_j, j ∈ J, are locally Lipschitz on X and r-invex at x̄ on X with respect to the same function η, by Theorem 14, the functions (1/r)(e^{r g_j^+(·)} − 1), j ∈ J, are invex at x̄ on X with respect to the same function η. Then, the following inequalities hold for all x ∈ X. Multiplying (46) by c > 0, we get Adding both sides of the inequalities (47), we obtain Thus, by (45) and (48), for any i = 1, ..., k, the following inequalities hold for all x ∈ X, for every ξ_i ∈ ∂f_i(x̄), i = 1, ..., k, and ζ_j^+ ∈ ∂[(1/r)(e^{r g_j^+(x̄)} − 1)], j = 1, ..., m. Multiplying (49) by λ̄_i, we get Adding both sides of the above inequalities, we obtain Since Σ_{i=1}^k λ̄_i = 1, (51) implies that the inequality holds for every ξ_i ∈ ∂f_i(x̄), i = 1, ..., k, and ζ_j^+ ∈ ∂[(1/r)(e^{r g_j^+(x̄)} − 1)], j = 1, ..., m. Hence, by (44), the following inequality holds for all x ∈ X. By (14), for each x ∈ D, it follows that Combining (53) and (54), we get that the following inequality holds for all x ∈ D.
Since x̄ is not feasible in the given multiobjective programming problem (VP), by (14), relation (56) follows. By assumption, c is sufficiently large; let c̄ satisfy (57). Now, we prove that c̄ > 0. Indeed, by assumption, x̄ is a weak Pareto solution in the vector penalized optimization problem (VP r (c)) with the vector exact exponential penalty function. Thus, by (39), it follows that, for every x ∈ D, there exists at least one i ∈ I such that (1/r)e^{r f_i(x)} − (1/r)e^{r f_i(x̄)} ≥ 0. Hence, in fact, (57) implies that c̄ > 0. We now show that c̄ ≥ M max{μ_j : j ∈ J}.
Suppose, contrary to the result, that (58) holds. Since (57) is fulfilled for all c > c̄, there exists a penalty parameter c* > c̄ such that (60) holds. Combining (57), (58) and (60), we have (61). Hence, (61) gives (62). Since ∑_{j=1}^m (1/r)(e^{r g_j^+(x)} − 1) = 0 for all x ∈ D, (62) implies that inequality (63) holds, which contradicts the weak efficiency of x̄ in the vector penalized optimization problem (VP r (c*)). Thus, the supposition (58) is false, and the asserted bound on c̄ is satisfied. By assumption, x̄ is a weak Pareto solution in the vector penalized optimization problem (VP r (c)) with the vector exact exponential penalty function for any sufficiently large c (as follows from (57), for all c > c̄). Hence, by Definition 16, the corresponding inequality follows, and then (13) gives (64). Since D ⊂ X, inequality (64) yields (65), which is equivalent to (66). Thus, (66) implies a relation contradicting (57). Therefore, the case x̄ ∉ D is impossible. This means that x̄ is feasible in the multiobjective programming problem (VP). Hence, by the feasibility of x̄ in the constrained optimization problem (VP), (66) yields the required inequality. By Definition 16, the above inequality implies that x̄ is a weak Pareto solution in the multiobjective programming problem (VP). This completes the proof of this theorem.
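For the reader's convenience, the componentwise form of the vector exact exponential penalty function used throughout the proof above can be summarized as follows. This display is a reconstruction assembled from the inline fragments of the proof (the expressions (1/r)(e^{r f_i(x)} − 1) and (1/r)(e^{r g_j^+(x)} − 1)); it is not copied verbatim from the original numbered equations:

```latex
% Componentwise form of the vector exact exponential penalty function,
% reconstructed from the inline expressions appearing in the proof.
P_r(x, c)_i \;=\; \frac{1}{r}\left(e^{r f_i(x)} - 1\right)
  \;+\; c \sum_{j=1}^{m} \frac{1}{r}\left(e^{r g_j^{+}(x)} - 1\right),
  \qquad i = 1, \dots, k,
```

where g_j^+(x) = max{0, g_j(x)}, so that the penalty term vanishes at every feasible point x ∈ D; this vanishing is the property invoked repeatedly in the proof when passing from relations on X to relations on D.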

Theorem 31 Let D be a compact subset of R^n and x̄ be a Pareto solution in the vector penalized problem (VP r (c)) with the vector exact exponential penalty function.
Further, assume that the objective function f is strictly r-invex at x̄ on X and the constraint function g is r-invex at x̄ on X with respect to the same function η. If the penalty parameter c is sufficiently large, then x̄ is also a Pareto solution in the considered multiobjective programming problem (VP).
Proof The proof of this theorem is similar to the proof of Theorem 30.

Corollary 32
Let all hypotheses of Corollary 27 (or Corollary 29) and Theorem 30 (or Theorem 31) be fulfilled. Then the sets of weak Pareto solutions (Pareto solutions) in the given multiobjective programming problem (VP) and in its associated vector penalized optimization problem (VP r (c)) with the vector exact exponential penalty function coincide.
The importance of this result is that it guarantees the existence of a finite penalty parameter c such that the sets of weak Pareto solutions (Pareto solutions) in the given multiobjective programming problem (VP) and in its associated vector penalized optimization problem (VP r (c)) with the vector exact exponential penalty function are the same.

Remark 33
Note that, since the lower bound for the penalty parameter is finite, the sequence of vector penalized subproblems (16) generated by the algorithm presented above is also finite, in contrast to inexact penalty function methods. Now, we illustrate the result established above by means of a nondifferentiable multiobjective programming problem with r-invex functions, which we solve using the introduced vector exact exponential penalty function method.
Example 34 Consider the following nonsmooth multiobjective programming problem (VP1). Note that D = {(x₁, x₂) ∈ R^2 : 0 ≤ x₁ ≤ 1 ∧ 0 ≤ x₂ ≤ 1} and x̄ = (0, 0) is a Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP1). Further, it is not difficult to prove, by Definition 10, that the objective function f is strictly 1-invex on R^2 with respect to the function η : R^2 × R^2 → R^2 and, by Definition 9, that the constraints g₁ and g₂ are 1-invex on R^2 with respect to the same function η. We use the vector exact exponential penalty function method for solving the considered nonconvex nondifferentiable vector optimization problem (VP1). Hence, we construct the associated unconstrained vector penalized problem (VP1 1 (c)) with the vector exact exponential penalty function. Further, the Karush-Kuhn-Tucker necessary optimality conditions (10)-(12) are satisfied at x̄ = (0, 0) with the Lagrange multipliers λ = (λ₁, λ₂, λ₃) ≥ 0 and μ_j ∈ [0, 1], j = 1, 2. As follows from these relations, max{μ_j : j = 1, 2} = 1 and M = 1. Therefore, if we set c ≥ 1, then, by Corollary 27, x̄ = (0, 0), being a Pareto solution in the considered multiobjective programming problem (VP1), is also a Pareto solution in each of its associated vector penalized optimization problems (VP1 1 (c)) with the vector exact exponential penalty function. Since the hypotheses of Theorem 31 are also fulfilled, the converse result is also true. Note that, for the considered constrained multiobjective programming problem (VP1), it is not possible to use the similar result established under convexity assumptions by Antczak [17]. This follows from the fact that none of the functions constituting the considered nonsmooth vector optimization problem (VP1) is convex on R^2.
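Since the explicit formulas of (VP1) are not reproduced in this excerpt, the construction of a vector penalized problem can be sketched only generically. The helper below evaluates the components (1/r)(e^{r f_i(x)} − 1) + c ∑_j (1/r)(e^{r g_j^+(x)} − 1) for user-supplied objectives and constraints; the sample problem data at the bottom are hypothetical and are not the functions of (VP1).

```python
import math

def exp_penalty_components(f_list, g_list, x, c, r=1.0):
    """Components of the vector exponential penalty function.

    Each component is (1/r)(e^{r f_i(x)} - 1) plus the common penalty
    term c * sum_j (1/r)(e^{r g_j^+(x)} - 1), where g_j^+ = max(0, g_j).
    """
    penalty = c * sum((math.exp(r * max(0.0, g(x))) - 1.0) / r for g in g_list)
    return [(math.exp(r * f(x)) - 1.0) / r + penalty for f in f_list]

# Hypothetical two-objective scalar-variable problem (NOT problem (VP1)):
f_list = [lambda x: abs(x), lambda x: x * x]
g_list = [lambda x: -x]          # feasible set: x >= 0

# At a feasible point the penalty term vanishes ...
vals_feasible = exp_penalty_components(f_list, g_list, 1.0, c=10.0)
# ... while at an infeasible point it is strictly positive.
vals_infeasible = exp_penalty_components(f_list, g_list, -1.0, c=10.0)
```

The vanishing of the penalty term on the feasible set is exactly the property used in the proofs above when restricting inequalities from X to D.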
It is also difficult to show that the functions involved in problem (VP1) are invex with respect to the same function η. In the next example, we compare the presented vector exact exponential penalty function method and the classical exact vector l1 penalty function method.
Example 36 Consider the following nondifferentiable multiobjective programming problem (VP3) with the scalar variable x ∈ R.
Note that D = {x ∈ R : 0 ≤ x ≤ 1} and x̄ = 0 is a Pareto solution in the considered nonconvex nondifferentiable vector optimization problem (VP3). It can be shown by definition that the objective function is strictly 1-invex and the constraint function is 1-invex with respect to the same function η. If we use the presented vector exact exponential penalty function method for solving (VP3), then we construct the unconstrained vector optimization problem (VP3 1 (c)). If we use the classical exact vector l1 penalty function method for solving (VP3), then we have to solve the following unconstrained vector optimization problem (VP3 0 (c)): P_0(x, c) = ( ln(x^2 + |x| + 1) + c max{0, ln(x^2 − x + 1)}, ln(x^2 − (1/2)x + 1) + c max{0, ln(x^2 − x + 1)} ) → V-min.
(VP3 0 (c)) Note that, in the first case, we have to solve a convex vector optimization problem, whereas, in the second case, the unconstrained vector optimization problem constructed in the classical exact vector l1 penalty function method is not convex. Therefore, the methods developed for solving convex unconstrained vector optimization problems cannot be applied to it.
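The difference between the two penalized problems in Example 36 can be checked numerically. Assuming r = 1, the components of the exponential penalized problem (VP3 1 (c)) reduce to x² + |x| + c·max{0, x² − x} and x² − x/2 + c·max{0, x² − x} (since e^{ln t} − 1 = t − 1), both convex, whereas the first component of P_0 fails a simple midpoint convexity test. The sketch below uses an arbitrary illustrative value c = 2:

```python
import math

c = 2.0  # illustrative penalty parameter; any admissible c works similarly

def p1(x):
    """Components of the exponential penalized problem (VP3_1(c)), r = 1."""
    pen = c * max(0.0, x * x - x)          # e^{g^+(x)} - 1 = max{0, x^2 - x}
    return (x * x + abs(x) + pen, x * x - 0.5 * x + pen)

def p0(x):
    """Components of the classical l1 penalized problem (VP3_0(c))."""
    pen = c * max(0.0, math.log(x * x - x + 1.0))
    return (math.log(x * x + abs(x) + 1.0) + pen,
            math.log(x * x - 0.5 * x + 1.0) + pen)

# Midpoint convexity test for the first components on the segment [0, 10]:
a, b = 0.0, 10.0
mid = 0.5 * (a + b)
p1_convex_ok = p1(mid)[0] <= 0.5 * (p1(a)[0] + p1(b)[0])   # holds: convex
p0_convex_ok = p0(mid)[0] <= 0.5 * (p0(a)[0] + p0(b)[0])   # fails: nonconvex
```

Note also that both components of P_1 vanish at the Pareto solution x̄ = 0 and the first component is nonnegative everywhere, which is consistent with x̄ = 0 being weakly efficient in (VP3 1 (c)).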

Remark 37
Let us now return to Example 24. One strategy to overcome the difficulties associated with a too small value of the penalty parameter in the presented algorithm is simply to set the penalty parameter c_n to be larger than the threshold equal to M max{μ_j : j ∈ J}. For the multiobjective programming problem (VP0) considered in Example 24, μ = 1 and M = e^0 = 1 and, therefore, the threshold of the penalty parameter is equal to 1. Thus, it is now clear why the vector exponential penalty function P_1(·, c) in Example 24 has a Pareto solution at x̄ = 1 for all penalty parameters c > 1.
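The threshold discussed in this remark is immediate to compute. The sketch below uses the values reported above for problem (VP0) of Example 24 (μ = 1 and M = e^0 = 1); for other problems the multiplier list and the constant M would of course differ:

```python
import math

# Data reported in Remark 37 for problem (VP0) of Example 24.
mu = [1.0]            # Lagrange multiplier(s) of the constraint(s): mu = 1
M = math.exp(0.0)     # M = e^0 = 1

# Exactness threshold: the penalty parameter c must exceed M * max{mu_j}.
threshold = M * max(mu)
```

Any penalty parameter c_n > threshold (here, c > 1) then yields a vector penalized problem whose Pareto solution agrees with that of (VP0), in line with the remark.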

Conclusion
In this paper, the vector exact exponential penalty function method has been used for solving nonconvex nondifferentiable multiobjective programming problems with inequality constraints. The convergence of the introduced vector exponential penalty function method has been established. Further, it has been proved that there exists a finite threshold value c̄ of the penalty parameter c such that, for every penalty parameter exceeding this threshold, any (weak) Pareto solution in the considered nonconvex nondifferentiable multiobjective programming problem (VP) is also a (weak) Pareto solution in its associated vector penalized problem (VP r (c)) with the vector exact exponential penalty function. We have established this result for nondifferentiable multiobjective programming problems involving r-invex functions with respect to the same function η. The converse result has also been established for such nonconvex nondifferentiable multiobjective programming problems under the assumption that the penalty parameter c is sufficiently large. Thus, the equivalence between the sets of (weak) Pareto optimal solutions in the considered nonconvex multiobjective programming problem (VP) and in its associated vector penalized problem (VP r (c)) with the vector exact exponential penalty function has been proved for sufficiently large penalty parameters c. Hence, the vector exact exponential penalty function method analyzed in this paper turns out to be useful for solving a class of nonconvex nonsmooth multiobjective programming problems with r-invex functions (with respect to the same function η), that is, a larger class of vector optimization problems than the convex and invex ones. Moreover, in some cases, the vector exact exponential penalty function method turns out to be more useful than the classical exact vector l1 penalty function method.
This is a consequence of the fact that, in some cases, a vector penalized problem with the vector exact exponential penalty function is easier to solve than the vector penalized problem constructed in the classical exact vector l1 penalty function method. This property of the introduced vector exact exponential penalty function method is, of course, important from the practical point of view. In this way, due to the importance of exact l1 penalty function methods in nonlinear scalar programming, similar results have been established in the vectorial case, showing that nonconvex nonsmooth vector optimization problems can be solved by using such methods, in the considered case, the vector exact exponential penalty function method.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.