On the exactness and the convergence of the l1 exact penalty E-function method for E-differentiable optimization problems

This paper is devoted to introducing and investigating a new exact penalty function method, called the l1 exact penalty E-function method. Namely, we use this exact penalty function method to solve a completely new class of nonconvex (not necessarily differentiable) mathematical programming problems, that is, E-differentiable minimization problems. Then, we analyze the property of all exact penalty function methods that is most important from a practical point of view, that is, exactness of the penalization. Thus, under appropriate E-convexity hypotheses, we prove the equivalence between the original E-differentiable extremum problem and the corresponding penalized optimization problem constructed in the introduced l1 exact penalty E-function method. Further, we also present and investigate an algorithm for this exact penalty function method which minimizes the l1 exact penalty E-function. The convergence theorem for this algorithm is also established.


Introduction
Exact penalty function methods are an important and powerful tool for solving various classes of constrained optimization problems. As is well known from the optimization literature, a nonlinear constrained extremum problem is, by using one of such methods, transformed into an unconstrained one. In this transformation, a penalty term is added to the objective function of the original extremum problem. Therefore, by making use of exact penalty functions, many ideas and techniques applicable to unconstrained global optimization can be transferred to the constrained case. Thus, exact penalty functions provide a direct and powerful way of tackling constrained global optimization problems by solving unconstrained optimization problems. Numerous studies in the literature have focused on exact penalty functions for both differentiable and nondifferentiable optimization problems (see, for example, [1, 2, 6-21, 30], and others).
In order to weaken the assumption of convexity in proving fundamental results in optimization theory, many generalized convexity notions have been defined in the literature. One such generalization of a convex function is the concept of E-convexity introduced by Youness [27]. Over the years, the concept of E-convexity has grown remarkably in different directions in the settings of optimality conditions and duality results (see, for example, [3-5, 22-26, 28, 29], and others).
The main purpose of this paper is the analysis of the main properties of a new exact penalty function method called the l1 exact penalty E-function method. This exact penalty method is used in the paper to solve E-differentiable optimization problems in which the functions involved are E-convex. Therefore, for the considered E-differentiable optimization problem with both inequality and equality constraints, we construct the penalized optimization problem with the l1 exact penalty E-function, also called the penalized E-optimization problem. Then, we investigate the main property of the l1 exact penalty E-function method, i.e. exactness of the penalization, in the case when this exact penalty method is used for solving E-differentiable optimization problems with E-convex functions. Thus, we associate a global E-minimizer of the analyzed E-differentiable optimization problem in which the involved functions are E-convex with a global optimal solution of the corresponding penalized E-optimization problem constructed in this approach. We prove that, under appropriate E-convexity hypotheses, this equivalence holds whenever the unconstrained optimization problem is constructed for penalty parameters exceeding the threshold value equal to the largest absolute value of a Lagrange multiplier associated with some constraint of the original mathematical programming problem. Further, if only for algorithmic reasons, we also establish the converse result, which ensures that a global optimal solution of the penalized E-optimization problem is also an E-optimal solution of the E-differentiable constrained minimization problem. Hence, under E-convexity hypotheses, we prove the equivalence between global solutions of the two mentioned optimization problems, i.e. the original E-differentiable constrained extremum problem and its associated penalized E-optimization problem.
We also illustrate the established results and, for this purpose, we solve exemplary optimization problems with E-convex functions. We also compare the introduced l1 exact penalty E-function method and the classical l1 exact penalty function method. Namely, we analyze a nondifferentiable extremum problem with E-differentiable E-convex functions which only the first of the aforesaid methods can be used to solve. We also analyze an example of a differentiable mathematical programming problem which can be solved by both the introduced l1 exact penalty E-function method and the classical l1 exact penalty function method, but for which the computational effort is smaller for the first of these methods. Furthermore, we investigate the algorithm which minimizes the l1 exact penalty E-function in the introduced method when it is used for solving the E-differentiable optimization problem considered in the paper. The convergence theorem is also proved for this algorithm.

Preliminaries
Let S be a nonempty set in R^n. The definition of an E-convex set and the definition of an E-convex function were introduced by Youness [27]. Now, for convenience, we recall these definitions.

Definition 2.1 [27] A set S ⊆ R^n is said to be an E-convex set (with respect to an operator E : R^n → R^n) if and only if the following relation holds for all x, u ∈ S and any λ ∈ [0, 1]:

λE(x) + (1 − λ)E(u) ∈ S.

Note that every convex set is E-convex (if E is the identity map), but the converse is not true (see, for example, Cristescu and Lupsa [10]). If S ⊆ R^n is an E-convex set, then E(S) ⊆ S. If E(S) is a convex set and E(S) ⊆ S, then S is E-convex (Youness [28]).

Definition 2.2 [27] Let S be a nonempty E-convex subset of R^n. A real-valued function f : S → R is said to be E-convex if and only if the following inequality holds for all x, u ∈ S and any λ ∈ [0, 1]:

f(λE(x) + (1 − λ)E(u)) ≤ λf(E(x)) + (1 − λ)f(E(u)).

If the above inequality is strict for all x, u ∈ S such that E(x) ≠ E(u) and any λ ∈ (0, 1), then f is said to be strictly E-convex on S. It is clear that every (strictly) convex function is (strictly) E-convex (if E is the identity map).
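To make Definition 2.2 concrete, the following small example (our own illustration, not taken from [27]) exhibits a nonconvex function that is E-convex for a suitable map E:

```latex
% Illustrative example: f(x) = x^3 is not convex on S = \mathbb{R},
% but it is E-convex with respect to E(x) = |x|.
% Indeed, E(S) = [0,\infty) and t \mapsto t^3 is convex there, so for all
% x, u \in \mathbb{R} and \lambda \in [0,1]:
f\bigl(\lambda E(x) + (1-\lambda)E(u)\bigr)
  = \bigl(\lambda |x| + (1-\lambda)|u|\bigr)^{3}
  \le \lambda |x|^{3} + (1-\lambda)|u|^{3}
  = \lambda f(E(x)) + (1-\lambda) f(E(u)).
```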

E-convex programming and the l 1 exact penalty E-function method
In the paper, we consider the following constrained optimization problem:

f(x) → min
subject to g_i(x) ≤ 0, i ∈ I = {1, …, k},
h_j(x) = 0, j ∈ J = {1, …, q},     (P)

where f : R^n → R, g_i : R^n → R, i ∈ I, and h_j : R^n → R, j ∈ J, are E-differentiable functions on R^n. Now, let us introduce some notations which we shall use frequently in this paper. Firstly, we define the set of all feasible solutions of (P) as follows:

D = {x ∈ R^n : g_i(x) ≤ 0, i ∈ I, h_j(x) = 0, j ∈ J}.

Moreover, we denote by I(x*) the set of inequality constraint indices that are active at x* ∈ D, i.e. I(x*) = {i ∈ I : g_i(x*) = 0}. Now, we define the associated differentiable E-optimization problem (P_E) for the considered E-differentiable optimization problem (P) as follows:

f(E(x)) → min
subject to g_i(E(x)) ≤ 0, i ∈ I = {1, …, k},
h_j(E(x)) = 0, j ∈ J = {1, …, q}.
(P_E) Throughout the paper, we shall assume that a suitable constraint qualification (for example, the E-Abadie constraint qualification (Antczak and Abdulaleem [3])) is satisfied for the considered E-differentiable optimization problem (P) at any of its E-KKT points. Further, let us denote by I_E(x*) the set of indices defined by I_E(x*) = {i ∈ I : g_i(E(x*)) = 0}.
It is known from optimization theory that the use of exact penalty function methods consists in replacing the given constrained extremum problem with a corresponding unconstrained optimization problem whose objective function is an exact penalty function. Each exact penalty function consists of the objective function of the original optimization problem and the constraints, which are placed into it via the penalty parameter. Moreover, the constraints enter each exact penalty function in such a way that any violation of a constraint is penalized, and one of the key roles in the penalization is played by the penalty parameter. It is also known that one of the most widely used nondifferentiable exact penalty function methods is the absolute value penalty function method, which is called the l1 exact penalty function method.
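For reference, the classical l1 exact penalty function for a problem with inequality constraints g_i and equality constraints h_j has the standard form (here ρ > 0 denotes the penalty parameter):

```latex
P(x,\rho) \;=\; f(x) \;+\; \rho\left[\sum_{i=1}^{k}\max\{0,\,g_i(x)\}
  \;+\;\sum_{j=1}^{q}\bigl|h_j(x)\bigr|\right].
```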
We now propose a new exact penalty function method for extremum problems in which the functions involved are E-differentiable. Then, we use this method for solving the considered optimization problems (P) with E-differentiable E-convex functions. In the introduced l1 exact penalty E-function method, for the auxiliary minimization problem (P_E), the following penalized extremum problem is constructed:

P(x, ρ) = f(E(x)) + ρ[Σ_{i=1}^{k} g_i^+(E(x)) + Σ_{j=1}^{q} |h_j(E(x))|] → min.     (P_E(ρ))

For a given inequality constraint g, note that the function (g∘E)_+ is defined by

g^+(E(x)) = 0 if g(E(x)) ≤ 0, and g^+(E(x)) = g(E(x)) if g(E(x)) > 0.

In other words, the definition of the function (g∘E)_+ can be re-formulated as

g^+(E(x)) = max{0, g(E(x))}.     (9)

By (9), it follows that g^+(E(x)) is equal to 0 for all x that belong to D_E, that is, for all x satisfying the condition g(E(x)) ≤ 0 and, moreover, it has a positive value whenever x ∉ D_E. If we use (9) in the definition of the penalized extremum problem constructed for the original mathematical programming problem (P), then its formulation can be re-written as follows:

f(E(x)) + ρ[Σ_{i=1}^{k} max{0, g_i(E(x))} + Σ_{j=1}^{q} |h_j(E(x))|] → min.

We call (P_E(ρ)) the penalized optimization problem with the l1 exact penalty E-function or, shortly, the l1 penalized E-optimization problem.
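As a quick computational sketch (our own illustration; the function names and the toy problem below are hypothetical, not part of the paper), the l1 exact penalty E-function can be evaluated as follows:

```python
def l1_penalty_E(f, gs, hs, E, rho):
    """Build P(x, rho) = f(E(x)) + rho*(sum_i max(0, g_i(E(x))) + sum_j |h_j(E(x))|)."""
    def P(x):
        y = E(x)
        # Constraint violation measured at the transformed point E(x).
        violation = sum(max(0.0, g(y)) for g in gs) + sum(abs(h(y)) for h in hs)
        return f(y) + rho * violation
    return P

# Toy usage: minimize y^2 subject to 1 - y <= 0, with E(x) = x**3.
P = l1_penalty_E(f=lambda y: y * y,
                 gs=[lambda y: 1.0 - y],
                 hs=[],
                 E=lambda x: x ** 3,
                 rho=10.0)
# P(1.0) -> 1.0 (feasible point, no penalty); P(0.0) -> 10.0 (violation penalized).
```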
The first result that we establish in this section is the equivalence between an E-KKT point of (P) and a global minimizer of its corresponding penalized E-optimization problem for all penalty parameters that exceed the given threshold value.
Theorem 3.2 Let x* ∈ D_E be a KKT point of (P_E) at which the Karush-Kuhn-Tucker necessary optimality conditions (4)-(6) are fulfilled with Lagrange multipliers λ* ∈ R^k and μ* ∈ R^q. Further, assume the following hypotheses:

e) the penalty parameter ρ is no less than the threshold value equal to the largest absolute value of a Lagrange multiplier corresponding to some constraint. Then x* is a global minimizer of its corresponding l1 penalized E-optimization problem (P_E(ρ)).
Proof By assumption, f, g_i, i ∈ I, and h_j, j ∈ J, are E-differentiable E-convex functions at x* on R^n. Hence, by Definition 2.4, the inequalities (15)-(17) are satisfied for all x ∈ R^n. Then, multiplying (15)-(17) by the associated Lagrange multipliers and adding both sides of the resulting inequalities, we obtain (18)-(20) for any x ∈ R^n. Combining (14) and (18)-(20), and taking into account also those Lagrange multipliers that are equal to 0, we get, for all x ∈ R^n, by (10), that the inequality
is satisfied for all x ∈ R^n. Thus, x* is a global optimal solution of the l1 penalized E-optimization problem (P_E(ρ)) and, therefore, the proof of this theorem is complete. ◻ The next result is a direct consequence of the results formulated in Theorems 3.1 and 3.2.
Corollary 3.1 Let E(x*) be a global E-optimal solution of the considered extremum problem (P). Further, assume that all hypotheses of Theorem 3.2 are fulfilled. If the penalty parameter ρ is no less than the threshold value equal to the largest absolute value of a Lagrange multiplier corresponding to some constraint (that is, ρ ≥ max{λ_i*, i ∈ I, |μ_j*|, j ∈ J}), then x* is a global minimizer of its corresponding l1 penalized E-optimization problem (P_E(ρ)). Now, under somewhat stronger assumptions, we prove the result converse to the one formulated in Corollary 3.1.

Theorem 3.3 Let E : R^n → R^n be a continuous map and let x* be an optimal solution of the penalized E-optimization problem (P_E(ρ)). Further, let E(x̄) be an E-KKT point of the given mathematical programming problem (P) at which the E-Karush-Kuhn-Tucker necessary optimality conditions are fulfilled.
Moreover, we assume the following hypotheses: if, in addition, D is compact and the penalty parameter ρ exceeds the threshold value, then E(x*) is an E-optimal solution of the given mathematical programming problem (P).
Proof By assumption, x* is a global optimal solution of (P_E(ρ)). Therefore, by (10), the inequality holds for all x ∈ R^n. Using (8) together with the definition of the absolute value, we get that the following inequality holds for all x ∈ R^n. Hence, again by (8), we now show that E(x*) is a global E-minimizer of the given mathematical programming problem (P). Firstly, we prove that E(x*) ∈ D. We proceed by contradiction: suppose, contrary to the result, that E(x*) ∉ D. As we have shown above, f(E(⋅)) has a global minimizer x̄ on D_E. In other words, there exists an E-optimal solution E(x̄) of the given mathematical programming problem (P). Therefore, there exist Lagrange multipliers λ̂ ∈ R^k, μ̂ ∈ R^q such that the E-KKT necessary optimality conditions (4)-(6) are fulfilled at this point. Using hypotheses (a)-(d) together with Definition 2.4, we obtain that the inequalities (27)-(29) hold, respectively. Multiplying the inequalities (27)-(29) by the corresponding Lagrange multipliers, respectively, and then adding both sides of the resulting inequalities, we obtain

Thus, adding both sides of (26) and (30), we arrive at a contradiction. Based on the above result, we conclude that, if we assume that the functions involved in the given E-differentiable mathematical programming problem are E-convex and a constraint qualification holds, then there exists a finite value of the penalty parameter ρ such that the set of E-optimal solutions of the problem (P) coincides with the set of minimizers of its associated penalized E-extremum problem (P_E(ρ)).
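Under the stated E-convexity hypotheses, the exactness property established above can be summarized schematically as follows (a restatement in the threshold notation used earlier, not an additional result):

```latex
% For every penalty parameter above the threshold value,
\rho \;\ge\; \max\bigl\{\lambda_i^{*},\, i \in I,\;\; |\mu_j^{*}|,\, j \in J\bigr\}
\quad\Longrightarrow\quad
\operatorname*{arg\,min}_{x \in \mathbb{R}^{n}} P(x,\rho)
\;=\;
\bigl\{\, x \;:\; E(x) \ \text{is a global E-optimal solution of } (\mathrm{P}) \,\bigr\}.
```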
In order to illustrate the results proved in the paper, we investigate the use of the introduced l 1 exact penalty E-function method in solving an exemplary nonconvex E-differentiable optimization problem in which the functions involved are E-differentiable E-convex.

Example 3.1
In this example, we analyze the use of the introduced l1 exact penalty E-function method to solve an exemplary E-differentiable optimization problem (P1), where E : R^2 → R^2 is a one-to-one and onto mapping defined by E(x_1, x_2) = (x_1, x_2^3). Now, for the considered nonconvex nonsmooth optimization problem (P1), we define its associated optimization problem (P1_E), and x* = (0, 0) is the unique global minimum of (P1_E). Moreover, we can show, by Definition 2.4, that all functions constituting (P1) are E-differentiable E-convex at x* on R^2. Using the l1 exact penalty E-function method introduced in this paper, we construct the associated unconstrained optimization problem (P1_E(ρ)). Note that the E-Karush-Kuhn-Tucker necessary optimality conditions (4)-(6) are satisfied at x* = (0, 0) with the Lagrange multipliers λ*_1 = 1, μ*_1 = 0. Then, for any penalty parameter ρ satisfying ρ ≥ max{λ*_1, |μ*_1|} = 1, x* = (0, 0) is a global E-optimal solution of the original optimization problem (P1) if and only if it is also a global minimizer of its associated penalized E-optimization problem (P1_E(ρ)) constructed in the l1 exact penalty E-function method. Thus, there is the equivalence between a global E-optimal solution of the given extremum problem (P1) and a global optimal solution of its corresponding penalized E-optimization problem (P1_E(ρ)) constructed in the introduced l1 exact penalty E-function method. Furthermore, it is not difficult to see that some of the functions constituting (P1) are nondifferentiable and nonconvex. Therefore, it is not possible to apply the conditions for convex smooth optimization problems (see Theorem 9.3.1 in Bazaraa [6], for instance) in order to show that E(x*) = (0, 0), which is an E-optimal solution of (P1), is also a minimizer of the unconstrained optimization problem (P1_E(ρ)). However, we are in a position to use the established results in the considered case, since the functions constituting the mathematical programming problem (P1) are E-differentiable.
Remark 3.1 We now compare both exact penalty function methods, that is, the classical l1 exact penalty function method and the introduced l1 exact penalty E-function method, which we also use for solving the extremum problem (P1) considered in Example 3.1. Using the classical l1 exact penalty function method for solving the optimization problem (P1) leads to the unconstrained (penalized) extremum problem (P1(ρ)). Note that, in the considered case of the mathematical programming problem (P1), the formulation of the unconstrained optimization problem defined in the introduced l1 exact penalty E-function method is less complex than the penalized optimization problem (P1(ρ)) constructed in the classical l1 exact penalty function method. Therefore, the latter penalized optimization problem constructed for (P1) is more difficult to solve. Moreover, all functions constituting the original optimization problem (P1) considered in Example 3.1 are only E-differentiable, and some of them are not differentiable in the usual sense. For this reason, the classical l1 exact penalty function method cannot be used to solve this mathematical programming problem, as it is usually used to solve differentiable extremum problems. The l1 exact penalty E-function method introduced in this paper is, however, applicable to such nondifferentiable extremum problems (since (P1) is E-differentiable).
As this example already shows, the class of extremum problems which the l1 exact penalty E-function method introduced in this paper can be used to solve also contains nonsmooth constrained extremum problems (that is, E-differentiable mathematical programming problems) for which the classical l1 exact penalty function method cannot be applied, especially if we want to use tools for solving smooth extremum problems.

Applications
In this section, we discuss potential applications of the l1 exact penalty E-function method by solving an exemplary real-world extremum problem. We present an example of an economic optimization problem in which the functions involved are E-differentiable. In order to solve it, we use the introduced and analyzed l1 exact penalty E-function method.
Example 4.1 A manufacturing firm producing cement has entered into a contract to supply 50 tons of cement at the end of the first month, 50 tons at the end of the second month, and 50 tons at the end of the third. The cost of producing ∛x tons of cement in any month is given by the problem's production cost function. The firm can produce more tons of cement in any month and carry them over to a subsequent month; however, it costs 20 dollars per unit for any ton of cement carried over from one month to the next. Assuming that there is no initial inventory, determine the tons of cement to be produced in each month so as to minimize the total cost. Let x_1, x_2 and x_3 represent the tons of cement produced in the first, second and third month, respectively. The total cost to be minimized is given by

Total cost = production cost + holding cost.

This is an E-differentiable optimization problem (P2). Let E : R^3 → R^3 be a one-to-one and onto mapping defined by E(x_1, x_2, x_3) = (x_1^3, x_2^3, x_3^3). Now, for the considered nonconvex nonsmooth optimization problem (P2), we define its associated optimization problem (P2_E), and x* = (50, 50, 50) is a global minimum of (P2_E). Further, it can be shown by Definition 2.4 that the objective function f and the inequality constraint functions g_1, g_2 and g_3 are E-convex at x* on R^3. Since we use the l1 exact penalty E-function method introduced in the paper, we construct the corresponding unconstrained optimization problem (P2_E(ρ)). Note that the E-Karush-Kuhn-Tucker necessary optimality conditions (4)-(6) are satisfied at x* = (50, 50, 50) with the Lagrange multipliers λ*_1 = λ*_2 = 20 and λ*_3 = 100. Then, for any penalty parameter ρ satisfying ρ ≥ max{λ*_1, λ*_2, λ*_3} = 100, by Theorem 3.2, x* = (50, 50, 50) is a global minimizer of its associated l1 penalized E-optimization problem (P2_E(ρ)). Note that all hypotheses of Theorem 3.3 are also fulfilled; therefore, the conclusion holds from (P2_E(ρ)) for ρ ≥ 100.
Also, E(x * ) = (125000, 125000, 125000) is an E-optimal solution of the problem (P2).

The algorithm of the l 1 exact penalty E-function method
Now, we present the algorithm for the introduced l1 exact penalty function method when this exact penalty function method is used for solving an E-differentiable mathematical programming problem involving both inequality and equality constraints. In this algorithm, for the given E-differentiable constrained problem (P), a sequence of penalized E-optimization problems (P_E(ρ_m)) is generated, where each (P_E(ρ_m)) is defined as above with penalty parameter ρ_m > 0 and lim_{m→∞} ρ_m = ∞. Now, we present the algorithm of the l1 exact penalty E-function method used in the paper to solve (P).

Algorithm 1 The algorithm of the l1 exact penalty E-function method. At each iteration m, minimize the l1 exact penalty E-function with penalty parameter ρ_m; if the resulting minimizer is feasible, stop with the E-optimal solution; otherwise, choose a new penalty parameter that satisfies the relation ρ_{m+1} > ρ_m and repeat.

We prove the convergence theorem for Algorithm 1 presented above. First, we give and prove some auxiliary results that we use in proving the convergence of Algorithm 1.
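A minimal computational sketch of this scheme (our own illustration: the inner minimizer is a crude one-dimensional grid search standing in for an unconstrained solver, and the toy problem and function names are hypothetical):

```python
def minimize_1d(P, lo=-3.0, hi=3.0, steps=60001):
    """Crude grid-search minimizer over [lo, hi], standing in for an unconstrained solver."""
    best_x, best_v = lo, P(lo)
    for k in range(1, steps):
        x = lo + (hi - lo) * k / (steps - 1)
        v = P(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

def l1_exact_penalty_E_method(f, gs, hs, E, rho0=1.0, growth=10.0, tol=1e-6, max_iter=20):
    """Sketch of Algorithm 1: minimize P(., rho_m), then increase rho_m until E(x) is feasible."""
    rho = rho0
    x_star = None
    for _ in range(max_iter):
        def P(x, rho=rho):
            y = E(x)
            viol = sum(max(0.0, g(y)) for g in gs) + sum(abs(h(y)) for h in hs)
            return f(y) + rho * viol
        x_star = minimize_1d(P)
        y = E(x_star)
        if sum(max(0.0, g(y)) for g in gs) + sum(abs(h(y)) for h in hs) <= tol:
            break  # feasible: stop with the (approximate) E-optimal solution
        rho *= growth  # choose rho_{m+1} > rho_m
    return x_star, rho

# Toy problem: minimize y^2 subject to 1 - y <= 0, with E(x) = x**3;
# the constrained minimizer is y = 1, attained at x = 1.
x_opt, rho_final = l1_exact_penalty_E_method(
    f=lambda y: y * y, gs=[lambda y: 1.0 - y], hs=[], E=lambda x: x ** 3)
```

With the initial penalty parameter the minimizer is infeasible, so the sketch increases ρ once before terminating at the feasible point, mirroring the stopping rule of Algorithm 1.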
We assume in this section that E ∶ R n → R n is a continuous map.
In order to prove the convergence of Algorithm 1, we now show that the limit x* of the sequence of minimizers generated by the algorithm yields an E-optimal solution of the considered optimization problem (P). First, we show that E(x*) is feasible in the considered optimization problem (P).

Proof We show that E(x*) is feasible in the considered optimization problem (P) and, moreover, lim_{t→∞} E(x*_{m_t}) = E(x*). We proceed by contradiction. Suppose, contrary to the result, that E(x*) ∉ D. If we take E(x) ∈ D, then, according to the definition of the penalized E-optimization problem (P_E(ρ_{m_t})) and Definition 3.1, we obtain (48). Therefore, using (45) and (48), we have, for t sufficiently large, a contradiction to (44). This means that E(x*) ∈ D, and the proof of this lemma is completed. ◻ The following theorem shows that if the sequence of optimal solutions of the penalized E-optimization problems (P_E(ρ_m)) with the l1 exact penalty E-function converges to x*, then E(x*) is an E-optimal solution of the considered optimization problem (P).
Example 5.1 Consider the nondifferentiable optimization problem (P4), where E : R^2 → R^2 is a one-to-one and onto mapping defined by E(x_1, x_2) = (x_1^3, x_2). Now, for the considered E-differentiable programming problem (P4), we define its associated E-optimization problem (P4_E), and x* is an optimal solution of the problem (P4_E). We now use the l1 exact penalty E-function method and, consequently, we formulate the corresponding unconstrained (penalized) optimization problem (P4_E(ρ)). In Table 1, we present the results generated by Algorithm 1 used for solving the nondifferentiable optimization problem (P4) with E-differentiable functions considered in Example 5.1.

Remark 5.1 As follows from Table 1, the l1 exact penalty E-function method solves the optimization problem (P4) considered in Example 5.1. Moreover, all functions constituting the original optimization problem (P4) are only E-differentiable, and some of them are not differentiable in the usual sense. For this reason, we cannot use the classical l1 exact penalty function method as it is usually used for solving differentiable extremum problems. However, the l1 exact penalty E-function method introduced in the paper is applicable to such nondifferentiable extremum problems (since (P4) is E-differentiable). As this example already shows, the class of extremum problems for which the l1 exact penalty E-function method introduced in the paper is applicable is larger than that for the classical l1 exact penalty function method, especially if we want to use tools for solving smooth extremum problems.
In Example 5.1, we presented an example of an E-differentiable optimization problem for which it is not possible to use the classical l1 exact penalty function method. Now, we give an example of a differentiable optimization problem which we solve both by the classical l1 exact penalty function method and by the l1 exact penalty E-function method introduced in the paper. Then we compare both aforesaid exact penalty function methods of the l1 type based on this example.

Example 5.2 Consider the following nonconvex differentiable optimization problem (P5), where E : R^2 → R^2 is a one-to-one and onto mapping, and E(x*) = (0, 0) is an E-optimal solution of the problem (P5). Now, for the considered programming problem (P5), we define its associated optimization problem (P5_E).

Table 2 presents the results generated by Algorithm 1 (of the l1 exact penalty E-function method) which has been used for solving (P5) considered in Example 5.2. In order to compare both aforesaid exact penalty function methods of the l1 type, the results generated by Algorithm 1 of the introduced l1 exact penalty E-function method and by the algorithm of the classical l1 exact penalty function method used for solving the optimization problem (P5) are presented in Tables 2 and 3, respectively. Note that, although both methods terminate at the optimal solution x* = (0, 0), the l1 exact penalty E-function method needs fewer iterations to find an optimal solution of the differentiable optimization problem (P5) than the classical l1 exact penalty function method. In addition, it should be noted that the cost of calculations for this method is lower than for the classical l1 exact penalty function method in the considered case. This is a consequence of the fact that the unconstrained optimization problem constructed in the classical l1 exact penalty function method for the optimization problem (P5) is more complex than the unconstrained optimization problem constructed in the l1 exact penalty E-function method and, therefore, it is more difficult to solve.
In other words, the penalized optimization problem constructed in the classical l1 exact penalty function method for (P5) requires more computational effort than the penalized optimization problem constructed in the l1 exact penalty E-function method.

Concluding remarks
We have proposed in this paper a new l1 exact penalty function method, named the l1 exact penalty E-function method, which we have used to solve nonconvex E-differentiable mathematical programming problems. Then, for the introduced l1 exact penalty function method, we have investigated the property of all exact penalty function methods that is most important from a practical point of view, namely exactness of the penalization. Therefore, under E-convexity hypotheses, we have shown the equivalence between the set of optimal solutions of the original constrained E-differentiable minimization problem and that of its corresponding optimization problem with the l1 exact penalty E-function constructed in the introduced method. We have proven that this result is true for all penalty parameters larger than the threshold value which was also given in the paper. Moreover, note that the unconstrained optimization problem whose objective function is the l1 exact penalty E-function is, in general, less complex than the original E-differentiable mathematical programming problem and, therefore, it is simpler to solve. These two results have practical importance. In fact, the computational effort is smaller in the introduced l1 exact penalty E-function method than in the analyzed case of the classical l1 exact penalty function method, as was illustrated even in Example 5.2. Moreover, there are also E-differentiable optimization problems which the classical l1 exact penalty function method cannot be used to solve, but to which the introduced method is applicable (such a case was illustrated in Example 5.1). Already this example shows that there are E-differentiable optimization problems that cannot be solved by using the classical l1 exact penalty function method, whereas the introduced l1 exact penalty E-function method can be successfully applied in such cases.
Furthermore, we have investigated the algorithm which minimizes the analyzed l1 exact penalty E-function and, moreover, its convergence has been established. The practical significance of this result is that the introduced l1 exact penalty E-function method can be used to successfully solve even such (not necessarily differentiable) mathematical programming problems for which other methods of this type may fail.