1 Introduction

The concepts of E-convex sets and E-convex functions were introduced by Youness in [1, 2], and they have important applications in various branches of the mathematical sciences. Youness [1] introduced the classes of E-convex sets and E-convex functions by relaxing the definitions of convex sets and convex functions. This kind of generalized convexity is based on the effect of an operator $E:\mathbb{R}^n\to\mathbb{R}^n$ on the sets and on the domains of definition of functions. In [2] Youness discussed the optimality criteria of E-convex programming. Xiusu Chen [3] introduced a new concept of semi-E-convex functions and discussed its properties. Yu-Ru Syan and Stanelty [4] established some properties of E-convex functions, while Emam and Youness [5] introduced a new class, strongly E-convex sets and strongly E-convex functions, defined by taking the images of two points x and y under an operator $E:\mathbb{R}^n\to\mathbb{R}^n$ together with the two points themselves. In [6] Megahed et al. introduced a combined interactive approach for solving E-convex multiobjective nonlinear programming. In [7, 8] Iqbal et al. introduced geodesic E-convex sets, geodesic E-convex functions, and some properties of geodesic semi-E-convex functions.

In this paper we present the concept of an E-differentiable convex function, which transforms a non-differentiable convex function into a differentiable function under an operator $E:\mathbb{R}^n\to\mathbb{R}^n$, so that the Fritz-John and Kuhn-Tucker conditions [9, 10] can be applied to solve a mathematical programming problem with a non-differentiable function.

In the following, we present the definitions of E-convex sets, E-convex functions, and semi E-convex functions.

Definition 1 [1]

A set $M\subseteq\mathbb{R}^n$ is said to be an E-convex set with respect to an operator $E:\mathbb{R}^n\to\mathbb{R}^n$ if and only if $\lambda E(x)+(1-\lambda)E(y)\in M$ for each $x,y\in M$ and $\lambda\in[0,1]$.

Definition 2 [1]

A function $f:\mathbb{R}^n\to\mathbb{R}$ is said to be an E-convex function with respect to an operator $E:\mathbb{R}^n\to\mathbb{R}^n$ on an E-convex set $M\subseteq\mathbb{R}^n$ if and only if

$$f\bigl(\lambda E(x)+(1-\lambda)E(y)\bigr)\le \lambda (f\circ E)(x)+(1-\lambda)(f\circ E)(y)$$

for each $x,y\in M$ and $\lambda\in[0,1]$.
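As a quick sanity check (our own illustration, not part of the source), the inequality in Definition 2 can be verified numerically for the pair $f(x)=|x|$, $E(x)=x^2$, for which $(f\circ E)(x)=x^2$ and the inequality holds with equality:

```python
import random

# Numerical sanity check of the E-convexity inequality in Definition 2.
# The choices f(x) = |x| and E(x) = x^2 are our own illustration; here
# (f o E)(x) = x^2, and the inequality holds (with equality, since the
# argument of f is already nonnegative).
f = abs
E = lambda x: x * x

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    lam = random.random()
    lhs = f(lam * E(x) + (1 - lam) * E(y))
    rhs = lam * f(E(x)) + (1 - lam) * f(E(y))
    assert lhs <= rhs + 1e-12
print("Definition 2 inequality verified on 1000 random samples")
```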

Definition 3 [3]

A real-valued function $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ is said to be a semi-E-convex function with respect to an operator $E:\mathbb{R}^n\to\mathbb{R}^n$ on M if M is an E-convex set and

$$f\bigl(\lambda E(x)+(1-\lambda)E(y)\bigr)\le \lambda f(x)+(1-\lambda)f(y)$$

for each $x,y\in M$ and $\lambda\in[0,1]$.

Proposition 4 [1]

1- Let a set $M\subseteq\mathbb{R}^n$ be an E-convex set with respect to an operator $E:\mathbb{R}^n\to\mathbb{R}^n$; then $E(M)\subseteq M$.

2- If $E(M)$ is a convex set and $E(M)\subseteq M$, then M is an E-convex set.

3- If $M_1$ and $M_2$ are E-convex sets with respect to E, then $M_1\cap M_2$ is an E-convex set with respect to E.

Lemma 5 [1]

Let $M\subseteq\mathbb{R}^n$ be an $E_1$-convex and $E_2$-convex set; then M is an $(E_1\circ E_2)$-convex and $(E_2\circ E_1)$-convex set.

Lemma 6 [1]

Let $E:\mathbb{R}^n\to\mathbb{R}^n$ be a linear map and let $M_1,M_2\subseteq\mathbb{R}^n$ be E-convex sets; then $M_1+M_2$ is an E-convex set.

Definition 7 [1]

Let $S\subseteq\mathbb{R}^n\times\mathbb{R}$ and $E:\mathbb{R}^n\to\mathbb{R}^n$. We say that the set S is E-convex if for each $(x,\alpha),(y,\beta)\in S$ and each $\lambda\in[0,1]$ we have

$$\bigl(\lambda E(x)+(1-\lambda)E(y),\ \lambda\alpha+(1-\lambda)\beta\bigr)\in S.$$

2 Generalized E-convex function

Definition 8 [1]

Let $M\subseteq\mathbb{R}^n$ be an E-convex set with respect to an operator $E:\mathbb{R}^n\to\mathbb{R}^n$. A function $f:M\to\mathbb{R}$ is said to be a pseudo-E-convex function if for each $x_1,x_2\in M$, $\nabla(f\circ E)(x_1)(x_2-x_1)\ge 0$ implies $f(E(x_2))\ge f(E(x_1))$; equivalently, for all $x_1,x_2\in M$, $f(E(x_2))<f(E(x_1))$ implies $\nabla(f\circ E)(x_1)(x_2-x_1)<0$.

Definition 9 [1]

Let $M\subseteq\mathbb{R}^n$ be an E-convex set with respect to an operator $E:\mathbb{R}^n\to\mathbb{R}^n$. A function $f:M\to\mathbb{R}$ is said to be a quasi-E-convex function if and only if

$$f\bigl(\lambda E(x)+(1-\lambda)E(y)\bigr)\le \max\bigl\{(f\circ E)(x),(f\circ E)(y)\bigr\}$$

for each $x,y\in M$ and $\lambda\in[0,1]$.

3 E-differentiable function

Definition 10 Let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be a non-differentiable function at $\bar x$ and let $E:\mathbb{R}^n\to\mathbb{R}^n$ be an operator. The function f is said to be E-differentiable at $\bar x$ if and only if $(f\circ E)$ is differentiable at $\bar x$, that is,

$$(f\circ E)(x)=(f\circ E)(\bar x)+\nabla(f\circ E)(\bar x)(x-\bar x)+\|x-\bar x\|\,\alpha(\bar x,x-\bar x),$$

where $\alpha(\bar x,x-\bar x)\to 0$ as $x\to\bar x$.

Example 11 Let $f(x)=|x|$, which is non-differentiable at the point $x=0$, and let $E:\mathbb{R}\to\mathbb{R}$ be the operator $E(x)=x^2$. Then $(f\circ E)(x)=f(E(x))=x^2$ is differentiable at the point $x=0$, and hence f is an E-differentiable function.
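Example 11 can be illustrated numerically (this check is ours, not part of the original): the one-sided difference quotients of f at 0 disagree, while the symmetric difference quotient of $f\circ E$ tends to 0.

```python
# Numerical illustration of Example 11: f(x) = |x| has no derivative at 0
# (one-sided slopes disagree), while (f o E)(x) = x^2 with E(x) = x^2 is
# differentiable at 0 with derivative 0.
f = abs
E = lambda x: x * x
fe = lambda x: f(E(x))

h = 1e-6
right = (f(h) - f(0)) / h          # one-sided slope of f from the right: 1
left = (f(0) - f(-h)) / h          # one-sided slope of f from the left: -1
d_fe = (fe(h) - fe(-h)) / (2 * h)  # symmetric difference quotient of f o E: 0
print(right, left, d_fe)
```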

3.1 Problem formulation

Now we formulate problems P and $P_E$, which involve a non-differentiable function and an E-differentiable function, respectively.

Let $E:\mathbb{R}^n\to\mathbb{R}^n$ be an operator, M an E-convex set, and f an E-differentiable function. The problem P is defined as

$$P:\quad \min f(x)\quad\text{subject to } M=\{x: g_i(x)\le 0,\ i=1,2,\dots,m\},$$

where f is a non-differentiable function, and the problem $P_E$ is defined as

$$P_E:\quad \min (f\circ E)(x)\quad\text{subject to } M'=\{x: (g_i\circ E)(x)\le 0,\ i=1,2,\dots,m\},$$

where f is an E-differentiable function.

Now, we will discuss the relationship between the solutions of problems P and P E .

Lemma 12 [11]

Let $E:\mathbb{R}^n\to\mathbb{R}^n$ be a one-to-one and onto operator and let $M'=\{x:(g_i\circ E)(x)\le 0,\ i=1,2,\dots,m\}$. Then $E(M')=M$, where M and $M'$ are the feasible regions of problems P and $P_E$, respectively.

Theorem 13 Let $E:\mathbb{R}^n\to\mathbb{R}^n$ be a one-to-one and onto operator and let f be an E-differentiable function. If f is non-differentiable at $\bar x$, and $\bar x$ is an optimal solution of the problem P, then there exists $\bar y\in M'$ such that $\bar x=E(\bar y)$ and $\bar y$ is an optimal solution of the problem $P_E$.

Proof Let $\bar x$ be an optimal solution of the problem P. By Lemma 12 there exists $\bar y\in M'$ such that $\bar x=E(\bar y)$. Suppose that $\bar y$ is not an optimal solution of the problem $P_E$; then there is $\hat y\in M'$ such that $(f\circ E)(\hat y)<(f\circ E)(\bar y)$. Also, there exists $\hat x\in M$ such that $\hat x=E(\hat y)$. Then $f(\hat x)<f(\bar x)$, which contradicts the optimality of $\bar x$ for the problem P. Hence the proof is complete. □

Theorem 14 Let $E:\mathbb{R}^n\to\mathbb{R}^n$ be a one-to-one and onto operator, and let f be E-differentiable and strictly quasi-E-convex. If $\bar x$ is an optimal solution of the problem P, then there exists $\bar y\in M'$ such that $\bar x=E(\bar y)$ and $\bar y$ is an optimal solution of the problem $P_E$.

Proof Let $\bar x$ be an optimal solution of the problem P. By Lemma 12 there is $\bar y\in M'$ such that $\bar x=E(\bar y)$. Suppose that $\bar y$ is not an optimal solution of the problem $P_E$; then there is $\hat y\in M'$, with $\hat x=E(\hat y)\in M$, such that $(f\circ E)(\hat y)\le(f\circ E)(\bar y)$. Since f is a strictly quasi-E-convex function, for $\lambda\in(0,1)$,

$$f\bigl(\lambda E(\bar y)+(1-\lambda)E(\hat y)\bigr)<\max\bigl\{(f\circ E)(\bar y),(f\circ E)(\hat y)\bigr\}=\max\bigl\{f(\bar x),f(\hat x)\bigr\}=f(\bar x).$$

Since M is an E-convex set and $E(M)\subseteq M$, the point $\lambda E(\bar y)+(1-\lambda)E(\hat y)$ lies in M and has objective value less than $f(\bar x)$, contradicting the optimality of $\bar x$ for the problem P. Hence $\bar y\in M'$ with $\bar x=E(\bar y)$ is an optimal solution of the problem $P_E$. □

Theorem 15 Let M be an E-convex set, let $E:\mathbb{R}^n\to\mathbb{R}^n$ be a one-to-one and onto operator, and let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be E-differentiable at $\bar x$. If there is a vector $d\in\mathbb{R}^n$ such that $\nabla(f\circ E)(\bar x)d<0$, then there exists $\delta>0$ such that

$$(f\circ E)(\bar x+\lambda d)<(f\circ E)(\bar x)\quad\text{for each }\lambda\in(0,\delta).$$

Proof Since f is E-differentiable at $\bar x$,

$$(f\circ E)(\bar x+\lambda d)=(f\circ E)(\bar x)+\lambda\nabla(f\circ E)(\bar x)d+\lambda\|d\|\,\alpha(\bar x,\lambda d),$$

where $\alpha(\bar x,\lambda d)\to 0$ as $\lambda\to 0$. Since $\nabla(f\circ E)(\bar x)d<0$ and $\alpha(\bar x,\lambda d)\to 0$ as $\lambda\to 0$, there exists $\delta>0$ such that

$$\nabla(f\circ E)(\bar x)d+\|d\|\,\alpha(\bar x,\lambda d)<0\quad\text{for each }\lambda\in(0,\delta),$$

and thus $(f\circ E)(\bar x+\lambda d)<(f\circ E)(\bar x)$. □
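The descent property of Theorem 15 can be observed numerically. The sketch below is our own illustration (the function, point, and direction are assumptions, not taken from the paper):

```python
# Illustration of Theorem 15 with a function of our own choosing (not from
# the paper): (f o E)(x, y) = x^2 + 3y^2 at xbar = (1, 1). The direction
# d = -grad(f o E)(xbar) satisfies grad . d < 0, so small positive steps
# along d decrease f o E.
fe = lambda x, y: x**2 + 3 * y**2
xbar = (1.0, 1.0)
grad = (2 * xbar[0], 6 * xbar[1])      # gradient of f o E at xbar
d = (-grad[0], -grad[1])               # a descent direction
assert grad[0] * d[0] + grad[1] * d[1] < 0
for lam in (0.01, 0.05, 0.1):
    x, y = xbar[0] + lam * d[0], xbar[1] + lam * d[1]
    assert fe(x, y) < fe(*xbar)
print("f o E decreases along d for each tested step size")
```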

Corollary 16 Let M be an E-convex set, let $E:\mathbb{R}^n\to\mathbb{R}^n$ be a one-to-one and onto operator, and let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be E-differentiable and strictly E-convex at $\bar x$. If $\bar x$ is a local minimum of the function $(f\circ E)$, then $\nabla(f\circ E)(\bar x)=0$.

Proof Suppose that $\nabla(f\circ E)(\bar x)\neq 0$ and let $d=-\nabla(f\circ E)(\bar x)$; then $\nabla(f\circ E)(\bar x)d=-\|\nabla(f\circ E)(\bar x)\|^2<0$. By Theorem 15 there exists $\delta>0$ such that

$$(f\circ E)(\bar x+\lambda d)<(f\circ E)(\bar x)\quad\text{for each }\lambda\in(0,\delta),$$

contradicting the assumption that $\bar x$ is a local minimum of $(f\circ E)$. Thus $\nabla(f\circ E)(\bar x)=0$. □

Theorem 17 Let M be an E-convex set, let $E:\mathbb{R}^n\to\mathbb{R}^n$ be a one-to-one and onto operator, and let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be twice E-differentiable and strictly E-convex at $\bar x$. If $\bar x$ is a local minimum of $(f\circ E)$, then $\nabla(f\circ E)(\bar x)=0$ and the Hessian matrix $H(\bar x)=\nabla^2(f\circ E)(\bar x)$ is positive semidefinite.

Proof Let d be an arbitrary direction. Since f is twice E-differentiable at $\bar x$,

$$(f\circ E)(\bar x+\lambda d)=(f\circ E)(\bar x)+\lambda\nabla(f\circ E)(\bar x)d+\tfrac12\lambda^2 d^t\nabla^2(f\circ E)(\bar x)d+\lambda^2\|d\|^2\,\alpha(\bar x,\lambda d),$$

where $\alpha(\bar x,\lambda d)\to 0$ as $\lambda\to 0$.

From Corollary 16 we have $\nabla(f\circ E)(\bar x)=0$, so

$$\frac{(f\circ E)(\bar x+\lambda d)-(f\circ E)(\bar x)}{\lambda^2}=\tfrac12 d^t\nabla^2(f\circ E)(\bar x)d+\|d\|^2\,\alpha(\bar x,\lambda d).$$

Since $\bar x$ is a local minimum of $(f\circ E)$, $(f\circ E)(\bar x)\le(f\circ E)(\bar x+\lambda d)$ for all sufficiently small $\lambda>0$; letting $\lambda\to 0$ gives

$$d^t\nabla^2(f\circ E)(\bar x)d\ge 0,\quad\text{i.e., }H(\bar x)=\nabla^2(f\circ E)(\bar x)\text{ is positive semidefinite.}$$

 □

Example 18 Let $f(x,y)=x+2y^2-2x^{1/3}$, which is non-differentiable at the points $(0,y)$, and let $E(x,y)=(x^3,y)$; then $(f\circ E)(x,y)=x^3+2y^2-2x$, and

$$\frac{\partial(f\circ E)}{\partial x}=3x^2-2=0\ \text{implies}\ x=\pm\sqrt{2/3},\qquad \frac{\partial(f\circ E)}{\partial y}=4y=0\ \text{implies}\ y=0,$$
$$\frac{\partial^2(f\circ E)}{\partial x^2}=6x,\qquad \frac{\partial^2(f\circ E)}{\partial y\,\partial x}=\frac{\partial^2(f\circ E)}{\partial x\,\partial y}=0,\qquad \frac{\partial^2(f\circ E)}{\partial y^2}=4.$$

Then $(x_1,y_1)=(\sqrt{2/3},0)$ and $(x_2,y_2)=(-\sqrt{2/3},0)$ are the stationary points of $(f\circ E)(x,y)$, and the Hessian matrix $H(\sqrt{2/3},0)=\begin{bmatrix}6\sqrt{2/3}&0\\0&4\end{bmatrix}$ is positive definite. Thus the point $(\sqrt{2/3},0)$ is a local minimum of the function $(f\circ E)(x,y)$, whereas the Hessian matrix $H(-\sqrt{2/3},0)=\begin{bmatrix}-6\sqrt{2/3}&0\\0&4\end{bmatrix}$ is indefinite.
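The computation in Example 18 is easy to confirm numerically (this check is our own addition); the Hessian is diagonal, so its eigenvalues are the diagonal entries:

```python
import math

# Check of Example 18: (f o E)(x, y) = x^3 + 2y^2 - 2x. The gradient
# (3x^2 - 2, 4y) vanishes at (±sqrt(2/3), 0), and the Hessian is
# diag(6x, 4): positive definite at +sqrt(2/3), indefinite at -sqrt(2/3).
x1 = math.sqrt(2 / 3)
assert abs(3 * x1**2 - 2) < 1e-12             # stationarity in x
hess_plus = (6 * x1, 4.0)                     # diagonal Hessian entries at +x1
hess_minus = (6 * -x1, 4.0)                   # and at -x1
assert min(hess_plus) > 0                     # positive definite -> local minimum
assert min(hess_minus) < 0 < max(hess_minus)  # indefinite -> saddle point
print("stationary points classified as in Example 18")
```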

Theorem 19 Let M be an E-convex set, let $E:\mathbb{R}^n\to\mathbb{R}^n$ be a one-to-one and onto operator, and let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be twice E-differentiable and strictly E-convex at $\bar x$. If $\nabla(f\circ E)(\bar x)=0$ and the Hessian matrix $H(\bar x)=\nabla^2(f\circ E)(\bar x)$ is positive definite, then $\bar x$ is a local minimum of $(f\circ E)$.

Proof Suppose that $\bar x$ is not a local minimum of $(f\circ E)$; then there exists a sequence $\{x_k\}$ converging to $\bar x$ such that $(f\circ E)(x_k)<(f\circ E)(\bar x)$ for each k. Since $\nabla(f\circ E)(\bar x)=0$ and f is twice E-differentiable at $\bar x$,

$$(f\circ E)(x_k)=(f\circ E)(\bar x)+\tfrac12(x_k-\bar x)^t\nabla^2(f\circ E)(\bar x)(x_k-\bar x)+\|x_k-\bar x\|^2\,\alpha(\bar x,x_k-\bar x),$$

where $\alpha(\bar x,x_k-\bar x)\to 0$ as $k\to\infty$, and hence

$$\tfrac12(x_k-\bar x)^t\nabla^2(f\circ E)(\bar x)(x_k-\bar x)+\|x_k-\bar x\|^2\,\alpha(\bar x,x_k-\bar x)<0\quad\text{for each }k.$$

Dividing by $\|x_k-\bar x\|^2$ and letting $d_k=\frac{x_k-\bar x}{\|x_k-\bar x\|}$, we get

$$\tfrac12 d_k^t\nabla^2(f\circ E)(\bar x)d_k+\alpha(\bar x,x_k-\bar x)<0\quad\text{for each }k.$$

But $\|d_k\|=1$ for each k, and hence there exists an index set K such that $\{d_k\}_K\to d$, where $\|d\|=1$. Passing to this subsequence and using the fact that $\alpha(\bar x,x_k-\bar x)\to 0$ as $k\to\infty$, we obtain $d^t\nabla^2(f\circ E)(\bar x)d\le 0$ with $d\neq 0$. This contradicts the assumption that $H(\bar x)$ is positive definite. Therefore $\bar x$ is indeed a local minimum. □

Example 20 Let $f(x,y)=x^{2/3}+y^2-1$, which is non-differentiable at the points $(0,y)$, and let $E(x,y)=(x^3,y)$; then $(f\circ E)(x,y)=x^2+y^2-1$, with

$$\frac{\partial(f\circ E)}{\partial x}=2x,\qquad \frac{\partial(f\circ E)}{\partial y}=2y,\qquad \frac{\partial^2(f\circ E)}{\partial x^2}=2,\qquad \frac{\partial^2(f\circ E)}{\partial y^2}=2,\qquad \frac{\partial^2(f\circ E)}{\partial x\,\partial y}=\frac{\partial^2(f\circ E)}{\partial y\,\partial x}=0.$$

The necessary condition for $\bar x$ to be a local minimum of $(f\circ E)$ is $\nabla(f\circ E)(\bar x)=0$, which gives $\bar x=(0,0)$, and the Hessian matrix

$$H=\begin{bmatrix}\dfrac{\partial^2(f\circ E)}{\partial x^2} & \dfrac{\partial^2(f\circ E)}{\partial y\,\partial x}\\[2mm] \dfrac{\partial^2(f\circ E)}{\partial x\,\partial y} & \dfrac{\partial^2(f\circ E)}{\partial y^2}\end{bmatrix}=\begin{bmatrix}2&0\\0&2\end{bmatrix}$$

is positive definite.
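A short numerical confirmation of Example 20 (our own addition, not from the source):

```python
# Check of Example 20: (f o E)(x, y) = x^2 + y^2 - 1 has gradient (2x, 2y),
# which vanishes only at (0, 0), and constant Hessian diag(2, 2), which is
# positive definite; so (0, 0) is a minimum with value -1.
fe = lambda x, y: x**2 + y**2 - 1
eps = 1e-3
neighbors = ((eps, 0), (-eps, 0), (0, eps), (0, -eps), (eps, eps))
assert all(fe(x, y) > fe(0, 0) for x, y in neighbors)
print("(0, 0) minimizes f o E; minimum value:", fe(0, 0))
```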

Example 21 Let $f(x,y)=x^{1/3}+y-1$, which is non-differentiable at the points $(0,y)$, and let $E(x,y)=(x^3,y)$; then $(f\circ E)(x,y)=x+y-1$.

Now let $M=\{\lambda_1(0,0)+\lambda_2(0,3)+\lambda_3(1,2)+\lambda_4(1,0)\}\cup\{\lambda_1(0,0)+\lambda_2(0,-3)+\lambda_3(-1,-2)+\lambda_4(-1,0)\}$, $\sum_{i=1}^4\lambda_i=1$, $\lambda_i\ge 0$, be an E-convex set with respect to the operator E (the feasible region is shown in Figure 1). At the vertices,

$$f(0,0)=-1,\quad (f\circ E)(0,0)=-1,\qquad f(0,3)=2,\quad (f\circ E)(0,3)=2,$$
$$f(1,2)=2,\quad (f\circ E)(1,2)=2,\qquad f(1,0)=0,\quad (f\circ E)(1,0)=0,$$
$$f(0,-3)=-4,\quad (f\circ E)(0,-3)=-4,\qquad f(-1,-2)=-4,\quad (f\circ E)(-1,-2)=-4.$$
Figure 1

The feasible region M.

Then $\bar y=(0,-3)$ is a solution of the problem $P_E$, and $E(\bar y)=E(0,-3)=(0,-3)$ is a solution of the problem P.
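Since $(f\circ E)$ is linear, its minimum over the union of the two polytopes is attained at a vertex. The check below is our own addition, with the vertex list as we read it from the example:

```python
# Check of Example 21: (f o E)(x, y) = x + y - 1 is linear, so its minimum
# over the union of the two polytopes is attained at a vertex. The vertex
# list reflects our reading of the example.
fe = lambda p: p[0] + p[1] - 1
vertices = [(0, 0), (0, 3), (1, 2), (1, 0), (0, -3), (-1, -2), (-1, 0)]
best = min(vertices, key=fe)
assert fe(best) == -4   # attained at (0, -3), and also at (-1, -2)
print("minimum of f o E over the vertices:", best, fe(best))
```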

Definition 22 Let M be a nonempty E-convex set in $\mathbb{R}^n$ and let $E(\bar x)\in\operatorname{cl}M$. The cone of feasible directions of M at $E(\bar x)$, denoted by D, is given by

$$D=\bigl\{d: d\neq 0,\ E(\bar x)+\lambda d\in M\ \text{for each }\lambda\in[0,\delta],\ \delta>0\bigr\}.$$

Lemma 23 Let M be an E-convex set with respect to an operator $E:\mathbb{R}^n\to\mathbb{R}^n$, and let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be E-differentiable at $\bar x$. If $\bar x$ is a local minimum of the problem $P_E$, then $F_0\cap D=\emptyset$, where $F_0=\{d:\nabla(f\circ E)(\bar x)d<0\}$ and D is the cone of feasible directions of M at $\bar x$.

Proof Suppose that there exists a vector $d\in F_0\cap D$. By Theorem 15 there exists $\delta_1>0$ such that

$$(f\circ E)(\bar x+\lambda d)<(f\circ E)(\bar x)\quad\text{for each }\lambda\in(0,\delta_1).$$
(3.1)

By the definition of the cone of feasible directions, there exists $\delta_2>0$ such that

$$E(\bar x)+\lambda d\in M\quad\text{for each }\lambda\in(0,\delta_2).$$
(3.2)

From (3.1) and (3.2) we have $(f\circ E)(\bar x+\lambda d)<(f\circ E)(\bar x)$ for each $\lambda\in(0,\delta)$, where $\delta=\min\{\delta_1,\delta_2\}$, which contradicts the assumption that $\bar x$ is a local optimal solution. Hence $F_0\cap D=\emptyset$. □

Lemma 24 Let M be an open E-convex set with respect to a one-to-one and onto operator $E:\mathbb{R}^n\to\mathbb{R}^n$, let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be E-differentiable at $\bar x$, and let $g_i:\mathbb{R}^n\to\mathbb{R}$ for $i=1,2,\dots,m$. Let $\bar x$ be a feasible solution of the problem $P_E$ and let $I=\{i:(g_i\circ E)(\bar x)=0\}$. Furthermore, suppose that $g_i$ for $i\in I$ is E-differentiable at $\bar x$ and that $g_i$ for $i\notin I$ is continuous at $\bar x$. If $\bar x$ is a local optimal solution, then $F_0\cap G_0=\emptyset$, where

$$F_0=\{d:\nabla(f\circ E)(\bar x)d<0\},\qquad G_0=\{d:\nabla(g_i\circ E)(\bar x)d<0\ \text{for each }i\in I\}.$$

Proof Let $d\in G_0$. Since $E(\bar x)\in M$ and M is an open E-convex set, there exists $\delta_1>0$ such that

$$E(\bar x)+\lambda d\in M\quad\text{for }\lambda\in(0,\delta_1).$$
(3.3)

Also, since $(g_i\circ E)(\bar x)<0$ and $g_i$ is continuous at $\bar x$ for $i\notin I$, there exists $\delta_2>0$ such that

$$(g_i\circ E)(\bar x+\lambda d)<0\quad\text{for }\lambda\in(0,\delta_2)\text{ and }i\notin I.$$
(3.4)

Finally, since $d\in G_0$, $\nabla(g_i\circ E)(\bar x)d<0$ for each $i\in I$, and by Theorem 15 there exists $\delta_3>0$ such that

$$(g_i\circ E)(\bar x+\lambda d)<(g_i\circ E)(\bar x)=0\quad\text{for }\lambda\in(0,\delta_3)\text{ and }i\in I.$$
(3.5)

From (3.3), (3.4) and (3.5), it is clear that points of the form $E(\bar x)+\lambda d$ are feasible for the problem $P_E$ for each $\lambda\in(0,\delta)$, where $\delta=\min(\delta_1,\delta_2,\delta_3)$. Thus $d\in D$, where D is the cone of feasible directions of the feasible region at $\bar x$. We have shown that $d\in G_0$ implies $d\in D$, and hence $G_0\subseteq D$. By Lemma 23, since $\bar x$ is a local solution of the problem $P_E$, $F_0\cap D=\emptyset$. It follows that $F_0\cap G_0=\emptyset$. □

Theorem 25 (Fritz-John optimality conditions)

Let M be an open E-convex set with respect to the one-to-one and onto operator $E:\mathbb{R}^n\to\mathbb{R}^n$, let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be E-differentiable at $\bar x$ and let $g_i:\mathbb{R}^n\to\mathbb{R}$ for $i=1,2,\dots,m$. Let $\bar x$ be a feasible solution of the problem $P_E$ and let $I=\{i:(g_i\circ E)(\bar x)=0\}$. Furthermore, suppose that $g_i$ for $i\in I$ is differentiable at $\bar x$ and that $g_i$ for $i\notin I$ is continuous at $\bar x$. If $\bar x$ is a local optimal solution, then there exist scalars $u_0$ and $u_i$ for $i\in I$, not all zero, with $u_0,u_i\ge 0$, such that

$$u_0\nabla(f\circ E)(\bar x)+\sum_{i\in I}u_i\nabla(g_i\circ E)(\bar x)=0,$$

and $E(\bar x)$ is a local solution of the problem P.

Proof Let $\bar x$ be a local solution of the problem $P_E$; then there is no vector d such that $\nabla(f\circ E)(\bar x)d<0$ and $\nabla(g_i\circ E)(\bar x)d<0$ for each $i\in I$. Let A be the matrix whose rows are $\nabla(f\circ E)(\bar x)$ and $\nabla(g_i\circ E)(\bar x)$, $i\in I$. Since the system $Ad<0$ is inconsistent, by Gordan's theorem [10] there exists a nonzero vector $b\ge 0$, $b=(u_0,u_i,\ i\in I)$, such that $A^t b=0$. Thus

$$u_0\nabla(f\circ E)(\bar x)+\sum_{i\in I}u_i\nabla(g_i\circ E)(\bar x)=0$$

holds, and $E(\bar x)$ is a local solution of the problem P. □

Theorem 26 Let $E:\mathbb{R}^n\to\mathbb{R}^n$ be a one-to-one and onto operator and let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be an E-differentiable function. If $\bar x$ is an optimal solution of the problem P, then there exists $\bar y\in M'$ with $\bar x=E(\bar y)$ such that $\bar y$ is an optimal solution of the problem $P_E$ and the Fritz-John optimality conditions of the problem $P_E$ are satisfied.

Proof Let $\bar x$ be an optimal solution of the problem P. Since E is one-to-one and onto, by Theorem 13 there exists $\bar y\in M'$ with $\bar x=E(\bar y)$ such that $\bar y$ is an optimal solution of the problem $P_E$. Hence there exist scalars $u_0$, $u_i$ satisfying the Fritz-John optimality conditions of the problem $P_E$:

$$u_0\nabla(f\circ E)(\bar y)+\sum_{i\in I}u_i\nabla(g_i\circ E)(\bar y)=0,\qquad (u_0,u_I)\neq(0,0),\qquad u_0,u_i\ge 0.$$

 □

Theorem 27 (Kuhn-Tucker necessary condition)

Let M be an open E-convex set with respect to the one-to-one and onto operator $E:\mathbb{R}^n\to\mathbb{R}^n$, let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be E-differentiable and strictly E-convex at $\bar x$, and let $g_i:\mathbb{R}^n\to\mathbb{R}$ for $i=1,2,\dots,m$. Let $\bar y$ be a feasible solution of the problem $P_E$ and let $I=\{i:(g_i\circ E)(\bar y)=0\}$. Furthermore, suppose that $(g_i\circ E)$ is continuous at $\bar y$ for $i\notin I$ and that the gradients $\nabla(g_i\circ E)(\bar y)$ for $i\in I$ are linearly independent. If $\bar x$ is a solution of the problem P, $\bar x=E(\bar y)$, and $\bar y$ is a local solution of the problem $P_E$, then there exist scalars $u_i$ for $i\in I$ such that

$$\nabla(f\circ E)(\bar y)+\sum_{i\in I}u_i\nabla(g_i\circ E)(\bar y)=0,\qquad u_i\ge 0\ \text{for each }i\in I.$$

Proof From the Fritz-John optimality conditions (Theorem 25), there exist scalars $u_0$ and $\hat u_i$ for $i\in I$, not all zero, such that

$$u_0\nabla(f\circ E)(\bar y)+\sum_{i\in I}\hat u_i\nabla(g_i\circ E)(\bar y)=0,\qquad u_0,\hat u_i\ge 0\ \text{for each }i\in I.$$

If $u_0=0$, the assumption of linear independence of the $\nabla(g_i\circ E)(\bar y)$ would not hold, so $u_0>0$. Taking $u_i=\hat u_i/u_0$, we obtain $\nabla(f\circ E)(\bar y)+\sum_{i\in I}u_i\nabla(g_i\circ E)(\bar y)=0$ with $u_i\ge 0$ for each $i\in I$. □

Theorem 28 Let M be an open E-convex set with respect to the one-to-one and onto operator $E:\mathbb{R}^n\to\mathbb{R}^n$, let $g_i:\mathbb{R}^n\to\mathbb{R}$ for $i=1,2,\dots,m$, and let $f:M\subseteq\mathbb{R}^n\to\mathbb{R}$ be E-differentiable and strictly E-convex at $\bar x$. Let $\bar x=E(\bar y)$ be a feasible solution of the problem $P_E$ and let $I=\{i:(g_i\circ E)(\bar y)=0\}$. Suppose that f is pseudo-E-convex at $\bar y$ and that $g_i$ is quasi-E-convex and differentiable at $\bar y$ for each $i\in I$. Furthermore, suppose that the Kuhn-Tucker conditions hold at $\bar y$. Then $\bar y$ is a global optimal solution of the problem $P_E$, and hence $\bar x=E(\bar y)$ is a solution of the problem P.

Proof Let $\hat y$ be a feasible solution of the problem $P_E$; then $(g_i\circ E)(\hat y)\le(g_i\circ E)(\bar y)$ for each $i\in I$, since $(g_i\circ E)(\hat y)\le 0$ and $(g_i\circ E)(\bar y)=0$. As $g_i$ is quasi-E-convex at $\bar y$, for $\lambda\in[0,1]$,

$$(g_i\circ E)\bigl(\bar y+\lambda(\hat y-\bar y)\bigr)=(g_i\circ E)\bigl(\lambda\hat y+(1-\lambda)\bar y\bigr)\le\max\bigl\{(g_i\circ E)(\hat y),(g_i\circ E)(\bar y)\bigr\}=(g_i\circ E)(\bar y).$$

This means that $(g_i\circ E)$ does not increase when moving from $\bar y$ along the direction $\hat y-\bar y$. Then we must have from Theorem 15 that $\nabla(g_i\circ E)(\bar y)(\hat y-\bar y)\le 0$. Multiplying by $u_i\ge 0$ and summing over I, we get

$$\Bigl[\sum_{i\in I}u_i\nabla(g_i\circ E)(\bar y)\Bigr](\hat y-\bar y)\le 0.$$

But since

$$\nabla(f\circ E)(\bar y)+\sum_{i\in I}u_i\nabla(g_i\circ E)(\bar y)=0,$$

it follows that $\nabla(f\circ E)(\bar y)(\hat y-\bar y)\ge 0$. Since f is pseudo-E-convex at $\bar y$, we get

$$(f\circ E)(\hat y)\ge(f\circ E)(\bar y).$$

Then $\bar y$ is a global solution of the problem $P_E$, and from Theorem 13, $\bar x=E(\bar y)$ is a global solution of the problem P. □

Example 29 Consider the following problem (problem P):

$$\min f(x,y)=x^{2/3}+y^2\quad\text{subject to } x^2+y^2\le 5,\quad x+2y\le 4,\quad x,y\ge 0.$$

The feasible region of this problem is shown in Figure 2.

Figure 2

The feasible region M.

Let $E(x,y)=\bigl(\tfrac18 x^3,\tfrac13 y\bigr)$; then the problem $P_E$ is as follows:

$$\min (f\circ E)(x,y)=\tfrac14 x^2+\tfrac19 y^2\quad\text{subject to } \tfrac{x^6}{64}+\tfrac{y^2}{9}\le 5,\quad \tfrac18 x^3+\tfrac23 y\le 4,\quad x,y\ge 0.$$

We note that $E(M)\subseteq M$; for instance,

$$(\sqrt5,0)\in M\ \Rightarrow\ E(\sqrt5,0)=\bigl(\tfrac{5\sqrt5}{8},0\bigr)\in M,\qquad (0,2)\in M\ \Rightarrow\ E(0,2)=\bigl(0,\tfrac23\bigr)\in M,$$
$$(0,0)\in M\ \Rightarrow\ E(0,0)=(0,0)\in M,\qquad (2,1)\in M\ \Rightarrow\ E(2,1)=\bigl(1,\tfrac13\bigr)\in M.$$

The Kuhn-Tucker conditions are as follows:

$$\nabla(f\circ E)(x,y)+u_1\nabla(g_1\circ E)(x,y)+u_2\nabla(g_2\circ E)(x,y)=0,$$
$$\begin{bmatrix}\tfrac12 x\\ \tfrac29 y\end{bmatrix}+u_1\begin{bmatrix}\tfrac{6}{64}x^5\\ \tfrac29 y\end{bmatrix}+u_2\begin{bmatrix}\tfrac38 x^2\\ \tfrac23\end{bmatrix}=0,$$
$$u_1\Bigl[\tfrac{x^6}{64}+\tfrac{y^2}{9}-5\Bigr]=0,\qquad u_2\Bigl[\tfrac18 x^3+\tfrac23 y-4\Bigr]=0.$$

The solution is $x=0$, $y=0$, $u_1=0$, $u_2=0$; thus $\bar z=(0,0)$ solves $P_E$, and $\bar x=E(\bar z)=(0,0)$ is a solution of the problem P.
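The Kuhn-Tucker system of Example 29 can be verified directly at the origin (this numerical check is our own addition):

```python
# Verification sketch for Example 29: at (x, y) = (0, 0) with u1 = u2 = 0,
# the stated Kuhn-Tucker system for P_E is satisfied and the point is feasible.
x = y = u1 = u2 = 0.0
grad_fe = (x / 2, 2 * y / 9)                  # gradient of (f o E)
grad_g1e = (6 * x**5 / 64, 2 * y / 9)         # gradient of (g1 o E)
grad_g2e = (3 * x**2 / 8, 2 / 3)              # gradient of (g2 o E)
stationarity = tuple(grad_fe[i] + u1 * grad_g1e[i] + u2 * grad_g2e[i]
                     for i in range(2))
assert stationarity == (0.0, 0.0)
assert u1 * (x**6 / 64 + y**2 / 9 - 5) == 0   # complementary slackness
assert u2 * (x**3 / 8 + 2 * y / 3 - 4) == 0
assert x**6 / 64 + y**2 / 9 - 5 <= 0          # feasibility for P_E
assert x**3 / 8 + 2 * y / 3 - 4 <= 0
print("Kuhn-Tucker conditions hold at the origin with u1 = u2 = 0")
```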

4 Conclusion

In this paper we introduced a new definition of an E-differentiable convex function, which transforms a non-differentiable function into a differentiable function under an operator $E:\mathbb{R}^n\to\mathbb{R}^n$, and we studied the Kuhn-Tucker and Fritz-John conditions for obtaining an optimal solution of a mathematical programming problem with a non-differentiable function. Finally, some examples have been presented to clarify the results.