1 Introduction

Recently there has been increasing interest in developing optimality conditions and duality relations for nonsmooth multiobjective programming problems involving locally Lipschitz functions. Many authors have studied such problems under various kinds of generalized convexity, and a number of results have been obtained. Schaible [1] and Bector et al. [2] derived Kuhn-Tucker necessary and sufficient optimality conditions for multiobjective fractional programming. By using ρ-invexity of a fractional function, Kim [3] obtained necessary and sufficient optimality conditions and duality theorems for nonsmooth multiobjective fractional programming problems. Lai and Ho [4] established sufficient optimality conditions for multiobjective fractional programming problems involving exponential V-r-invex Lipschitz functions. In [5], Kim and Schaible considered nonsmooth multiobjective programming problems with inequality and equality constraints involving locally Lipschitz functions and presented several sufficient optimality conditions under various invexity assumptions and regularity conditions. Nobakhtian [6] obtained optimality conditions and a mixed dual model for nonsmooth fractional multiobjective programming problems. Jeyakumar and Yang [7] considered nonsmooth constrained multiobjective optimization problems in which the objective function and the constraints are compositions of convex functions with locally Lipschitz, Gâteaux differentiable functions; they presented Lagrangian necessary conditions and new sufficient optimality conditions for efficient and properly efficient solutions. Mishra and Mukherjee [8] extended the work of Jeyakumar and Yang [7] to the case where the constraints are compositions of V-invex functions.

The present article begins with an extension of the results in [7, 8] from the nonfractional to the fractional case. We consider nonsmooth multiobjective programs where the objective functions are fractional compositions of invex functions and locally Lipschitz and Gâteaux differentiable functions. Kuhn-Tucker necessary conditions and sufficient optimality conditions for weakly efficient solutions are presented. We formulate dual problems and establish weak, strong and converse duality theorems for a weakly efficient solution.

2 Preliminaries

Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space and $\mathbb{R}^n_+$ its nonnegative orthant. Throughout the paper, the following conventions for inequalities will be used for $x, y \in \mathbb{R}^n$:

$$\begin{aligned}
x = y \quad&\text{if and only if}\quad x_i = y_i \text{ for all } i = 1, 2, \dots, n;\\
x < y \quad&\text{if and only if}\quad x_i < y_i \text{ for all } i = 1, 2, \dots, n;\\
x \le y \quad&\text{if and only if}\quad x_i \le y_i \text{ for all } i = 1, 2, \dots, n.
\end{aligned}$$

The real-valued function $f:\mathbb{R}^n\to\mathbb{R}$ is said to be locally Lipschitz if for any $z\in\mathbb{R}^n$ there exist a positive constant $K$ and a neighbourhood $N$ of $z$ such that, for each $x,y\in N$,

$$|f(x)-f(y)|\le K\|x-y\|.$$

The Clarke generalized directional derivative of a locally Lipschitz function $f$ at $x$ in the direction $d$, denoted by $f^{0}(x;d)$ (see, e.g., Clarke [9]), is given by

$$f^{0}(x;d)=\limsup_{\substack{y\to x\\ t\downarrow 0}} t^{-1}\big(f(y+td)-f(y)\big).$$

The Clarke generalized subgradient (subdifferential) of $f$ at $x$ is defined by

$$\partial f(x)=\big\{\xi\in\mathbb{R}^n \,\big|\, f^{0}(x;d)\ge \xi^{T}d \text{ for all } d\in\mathbb{R}^n\big\}.$$
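As a simple illustration (ours, not from the source), consider $f(x)=|x|$ on $\mathbb{R}$:

```latex
% Clarke derivative and subgradient of f(x) = |x| at the origin:
f^{0}(0;d) = \limsup_{y \to 0,\ t \downarrow 0} t^{-1}\big(|y+td| - |y|\big) = |d|,
\qquad
\partial f(0) = \{\xi \in \mathbb{R} \mid |d| \ge \xi d \text{ for all } d\} = [-1,1].
% At any x \neq 0, f is smooth near x and \partial f(x) = \{\operatorname{sign}(x)\}.
```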

Proposition 2.1 [9]

Let $f$, $h$ be Lipschitz near $x$, and suppose $h(x)\neq 0$. Then $\frac{f}{h}$ is Lipschitz near $x$, and one has

$$\partial\Big(\frac{f}{h}\Big)(x)\subseteq \frac{h(x)\,\partial f(x)-f(x)\,\partial h(x)}{h^{2}(x)}.$$

If, in addition, $f(x)\ge 0$, $h(x)>0$ and if $f$ and $-h$ are regular at $x$, then equality holds and $\frac{f}{h}$ is regular at $x$.
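As a quick check of the quotient rule (our example, not from the source), take $f(x)=|x|$ and $h(x)=1+x^{2}$ at $x=0$:

```latex
% f(0) = 0, h(0) = 1, \partial f(0) = [-1,1], \partial h(0) = \{0\}, hence
\partial\Big(\frac{f}{h}\Big)(0)
  \subseteq \frac{h(0)\,\partial f(0) - f(0)\,\partial h(0)}{h^{2}(0)} = [-1,1].
% Since f(0) \ge 0, h(0) > 0 and f, -h are regular at 0, equality holds;
% indeed, (f/h)(x) = |x|/(1+x^{2}) satisfies \partial(f/h)(0) = [-1,1].
```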

In this paper, we consider the following composite multiobjective fractional programming problem:

$$\text{(P)}\qquad
\begin{aligned}
&\text{Minimize } \left(\frac{f_1(F_1(x))}{h_1(F_1(x))},\dots,\frac{f_p(F_p(x))}{h_p(F_p(x))}\right)\\
&\text{subject to } g_j(G_j(x))\le 0,\quad j=1,2,\dots,m,\ x\in C,
\end{aligned}$$

where

1. $C$ is an open convex subset of a Banach space $X$;
2. $f_i$, $h_i$, $i=1,2,\dots,p$, and $g_j$, $j=1,2,\dots,m$, are real-valued locally Lipschitz functions on $\mathbb{R}^n$, and $F_i$ and $G_j$ are locally Lipschitz, Gâteaux differentiable functions from $X$ into $\mathbb{R}^n$ with Gâteaux derivatives $F_i'(\cdot)$ and $G_j'(\cdot)$, respectively, but are not necessarily continuously Fréchet differentiable or strictly differentiable [9];
3. $f_i(x)\ge 0$, $h_i(x)>0$, $i=1,2,\dots,p$;
4. $f_i(x)$ and $h_i(x)$ are regular.

Definition 2.1 A feasible point $x_0$ is said to be a weakly efficient solution for (P) if there exists no feasible point $x$ for which

$$\frac{f_i(F_i(x))}{h_i(F_i(x))} < \frac{f_i(F_i(x_0))}{h_i(F_i(x_0))},\quad i=1,2,\dots,p.$$

Definition 2.2 [10]

A function $f:X_0\to\mathbb{R}$, $X_0\subseteq\mathbb{R}^n$, is invex on $X_0$ if for all $x,u\in X_0$ there exists a function $\eta:X_0\times X_0\to\mathbb{R}^n$ such that

$$f(x)-f(u)\ge \xi^{T}\eta(x,u)\quad\text{for all }\xi\in\partial f(u).$$
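For instance (an illustration we add), every convex function is invex with $\eta(x,u)=x-u$, since the subgradient inequality gives exactly the required estimate:

```latex
% Convexity => invexity with \eta(x,u) = x - u:
f(x) - f(u) \ge \xi^{T}(x - u) = \xi^{T}\eta(x,u), \qquad \xi \in \partial f(u).
% Invexity is strictly weaker: a differentiable f is invex iff every stationary
% point is a global minimizer, so f(x) = 1 - e^{-x^{2}} is invex on \mathbb{R}
% although it is not convex.
```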

Definition 2.3 [10]

A function $f=(f_1,\dots,f_p):X_0\to\mathbb{R}^p$, $X_0\subseteq\mathbb{R}^n$, is V-invex on $X_0$ if for all $x,u\in X_0$ there exist functions $\eta:X_0\times X_0\to\mathbb{R}^n$ and $\alpha_i:X_0\times X_0\to\mathbb{R}_+\setminus\{0\}$ such that

$$f_i(x)-f_i(u)\ge \alpha_i(x,u)\,\xi_i^{T}\eta(x,u)\quad\text{for all }\xi_i\in\partial f_i(u),\ i=1,2,\dots,p.$$

The following lemma is needed in necessary optimality conditions, weak duality and converse duality.

Lemma 2.1 [3]

If $f_i\ge 0$, $h_i>0$, $f_i$ and $h_i$ are invex at $u$ with respect to $\eta(x,u)$, and $f_i$ and $h_i$ are regular at $u$, then $\frac{f_i}{h_i}$ is V-invex at $u$ with respect to $\bar\eta$, where $\bar\eta(x,u)=\frac{h_i(u)}{h_i(x)}\eta(x,u)$.

3 Optimality conditions

Note that if $F:X\to\mathbb{R}^n$ is locally Lipschitz near a point $x\in X$ and Gâteaux differentiable at $x$, and if $f:\mathbb{R}^n\to\mathbb{R}$ is locally Lipschitz near $F(x)$, then the continuous sublinear function defined by

$$\pi_x(h):=\max\Big\{\sum_{k=1}^{n} w_k F_k'(x)h \,\Big|\, w\in\partial f(F(x))\Big\},$$

satisfies the inequality

$$(f\circ F)^{+}(x,h)\le \pi_x(h),\quad \forall h\in X. \tag{3.1}$$

Recall that $q^{+}(x,h)=\limsup_{\lambda\downarrow 0}\lambda^{-1}\big(q(x+\lambda h)-q(x)\big)$ is the upper Dini directional derivative of $q:X\to\mathbb{R}$ at $x$ in the direction $h$, and $\partial f(F(x))$ is the Clarke subdifferential of $f$ at $F(x)$. The function $\pi_x(\cdot)$ in (3.1) is called an upper convex approximation of $f\circ F$ at $x$; see [11, 12].
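To see that (3.1) can hold with equality (our example, not from the source), take $X=\mathbb{R}$, $F(x)=x$ and $f(t)=|t|$ at $x=0$:

```latex
% Here F'(0)h = h and \partial f(F(0)) = \partial f(0) = [-1,1], so
\pi_0(h) = \max\{\, w h \mid w \in [-1,1] \,\} = |h|,
\qquad
(f \circ F)^{+}(0,h) = \limsup_{\lambda \downarrow 0} \lambda^{-1}|\lambda h| = |h|.
% Thus (f \circ F)^{+}(0,h) = \pi_0(h); in general only "\le" is guaranteed.
```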

Note that for a set $C$, $\operatorname{int}C$ denotes the interior of $C$, and $C^{+}=\{v\in X^{*}\mid v(x)\ge 0,\ \forall x\in C\}$ denotes the dual cone of $C$, where $X^{*}$ is the topological dual space of $X$. It is also worth noting that for a convex set $C$, the closure of the cone generated by $C$ at a point $a$, $\operatorname{cl}\operatorname{cone}(C-a)$, is the tangent cone of $C$ at $a$, and the dual cone $(C-a)^{+}$ is the normal cone of $C$ at $a$; see [9, 13].

Theorem 3.1 (Necessary optimality conditions)

Suppose that $f_i$, $h_i$ and $g_j$ are locally Lipschitz functions, and that $F_i$ and $G_j$ are locally Lipschitz and Gâteaux differentiable functions. If $a\in C$ is a weakly efficient solution for (P), then there exist Lagrange multipliers $\lambda_i\ge 0$, $i=1,2,\dots,p$, and $\mu_j\ge 0$, $j=1,2,\dots,m$, not all zero, satisfying

$$\begin{aligned}
&0\in \sum_{i=1}^{p}\lambda_i T_i(a)F_i'(a)+\sum_{j=1}^{m}\mu_j\,\partial g_j(G_j(a))G_j'(a)-(C-a)^{+},\\
&\mu_j g_j(G_j(a))=0,\quad j=1,2,\dots,m,\\
&T_i(a)=\frac{\partial f_i(F_i(a))-\phi_i(a)\,\partial h_i(F_i(a))}{h_i(F_i(a))},\qquad
\phi_i(a)=\frac{f_i(F_i(a))}{h_i(F_i(a))}.
\end{aligned}$$

Proof Let $I=\{1,2,\dots,p\}$, $J_p=\{p+j\mid j=1,2,\dots,m\}$, $J_p(a)=\{p+j\mid g_j(G_j(a))=0,\ j\in\{1,2,\dots,m\}\}$.

For convenience, we define

$$l_k(x)=\begin{cases}
\big(\frac{f_k}{h_k}\circ F_k\big)(x), & k=1,2,\dots,p,\\
(g_{k-p}\circ G_{k-p})(x), & k=p+1,\dots,p+m.
\end{cases}$$

Suppose that the following system has a solution:

$$d\in\operatorname{cone}(C-a),\qquad \pi_a^{k}(d)<0,\quad k\in I\cup J_p(a), \tag{3.2}$$

where $\pi_a^{k}(d)$ is given by

$$\pi_a^{k}(d)=\begin{cases}
\max\{\nu^{T}F_k'(a)d \mid \nu\in T_k(a)\}, & k\in I,\\
\max\{w^{T}G_{k-p}'(a)d \mid w\in\partial g_{k-p}(G_{k-p}(a))\}, & k\in J_p(a).
\end{cases}$$

Then the system

$$d\in\operatorname{cone}(C-a),\qquad (l_k)^{+}(a;d)<0,\quad k\in I\cup J_p(a)$$

has a solution. So, there exists $\alpha_1>0$ such that $a+\alpha d\in C$ and $l_k(a+\alpha d)<l_k(a)$, $k\in I\cup J_p(a)$, whenever $0<\alpha\le\alpha_1$. Since $l_k(a)<0$ for $k\in J_p\setminus J_p(a)$ and $l_k$ is continuous in a neighbourhood of $a$, there exists $\alpha_2>0$ such that $l_k(a+\alpha d)<0$ whenever $0<\alpha\le\alpha_2$, $k\in J_p\setminus J_p(a)$. Let $\alpha^{*}=\min\{\alpha_1,\alpha_2\}$. Then $a+\alpha d$ is a feasible solution for (P) and $l_k(a+\alpha d)<l_k(a)$, $k\in I$, for sufficiently small $\alpha$ with $0<\alpha\le\alpha^{*}$.

This contradicts the fact that a is a weakly efficient solution for (P). Hence (3.2) has no solution.

Since, for each $k$, $\pi_a^{k}(\cdot)$ is sublinear and $\operatorname{cone}(C-a)$ is convex, it follows from a separation theorem [12, 14] that there exist $\lambda_i\ge 0$, $i=1,\dots,p$, $\mu_j\ge 0$, $j\in J_p(a)$, not all zero, such that

$$\sum_{i=1}^{p}\lambda_i\pi_a^{i}(x)+\sum_{j\in J_p(a)}\mu_j\pi_a^{j}(x)\ge 0,\quad \forall x\in\operatorname{cone}(C-a).$$

Then, by applying standard arguments of convex analysis (see [15, 16]) and choosing $\mu_j=0$ whenever $j\in J_p\setminus J_p(a)$, we have

$$0\in \sum_{i=1}^{p}\lambda_i\,\partial\pi_a^{i}(0)+\sum_{j=1}^{m}\mu_j\,\partial\pi_a^{j+p}(0)-(C-a)^{+}.$$

So, there exist $\nu_i\in T_i(a)$, $w_j\in\partial g_j(G_j(a))$ satisfying

$$\sum_{i=1}^{p}\lambda_i\nu_i^{T}F_i'(a)+\sum_{j=1}^{m}\mu_j w_j^{T}G_j'(a)\in (C-a)^{+}.$$

Hence, the conclusion holds. □

Now we impose the following generalized Slater condition:

$$\exists x_0\in\operatorname{cone}(C-a):\quad \mu^{T}G_j'(a)x_0<0,\quad \forall \mu\in\partial g_j(G_j(a)),\ j\in J(a),$$

where $J(a)=\{j\mid g_j(G_j(a))=0,\ j=1,\dots,m\}$.

Under this constraint qualification, the multiplier vector $\lambda$ of Theorem 3.1 cannot vanish. Choosing $q\in\mathbb{R}^{p}$, $q>0$, with $\lambda^{T}q=1$ and defining $\Lambda=qq^{T}$, we can select the multipliers $\bar\lambda=\Lambda\lambda=q\,q^{T}\lambda=q>0$ and $\bar\mu=\Lambda\mu=q\,q^{T}\mu\ge 0$. Hence, the following Kuhn-Tucker type optimality conditions (KT) for (P) are obtained:

$$\text{(KT)}\qquad
\begin{aligned}
&\bar\lambda\in\mathbb{R}^{p},\ \bar\lambda_i>0,\qquad \bar\mu\in\mathbb{R}^{m},\ \bar\mu_j\ge 0,\qquad \bar\mu_j g_j(G_j(a))=0,\\
&0\in \sum_{i=1}^{p}\bar\lambda_i T_i(a)F_i'(a)+\sum_{j=1}^{m}\bar\mu_j\,\partial g_j(G_j(a))G_j'(a)-(C-a)^{+},\\
&T_i(a)=\frac{\partial f_i(F_i(a))-\phi_i(a)\,\partial h_i(F_i(a))}{h_i(F_i(a))},\qquad
\phi_i(a)=\frac{f_i(F_i(a))}{h_i(F_i(a))}.
\end{aligned}$$
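As a small sanity check of (KT) (an example of ours, not taken from the source), let $X=C=\mathbb{R}$, $p=m=n=1$, $F(x)=G(x)=x$, $f(t)=t^{2}$, $h(t)=t^{2}+1$, $g(t)=t-1$, so that (P) minimizes $x^{2}/(x^{2}+1)$ subject to $x\le 1$:

```latex
% At the minimizer a = 0: \phi(a) = f(0)/h(0) = 0, F'(a) = G'(a) = 1,
% \partial f(0) = \partial h(0) = \{0\}, so T(a) = \{(0 - 0\cdot 0)/1\} = \{0\}.
% Since g(G(a)) = -1 < 0, complementarity forces \bar{\mu} = 0; with C = \mathbb{R},
% (C - a)^{+} = \{0\}. Taking \bar{\lambda} = 1, \bar{\mu} = 0 gives
0 \in \bar{\lambda}\, T(a) F'(a) + \bar{\mu}\, \partial g(G(a)) G'(a) - (C-a)^{+} = \{0\},
% so the conditions (KT) hold at a = 0.
```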

We present new conditions under which the optimality conditions (KT) become sufficient for weakly efficient solutions.

The following null space condition is as in [7]:

Let $x,a\in X$. Define $K:X\to\mathbb{R}^{n(p+m)}:=\prod_{k=1}^{p+m}\mathbb{R}^{n}$ by $K(x)=(F_1(x),\dots,F_p(x),G_1(x),\dots,G_m(x))$. For each $x,a\in X$, the linear mapping $A_{x,a}:X\to\mathbb{R}^{n(p+m)}$ is given by

$$A_{x,a}(y)=\big(\alpha_1(x,a)F_1'(a)y,\dots,\alpha_p(x,a)F_p'(a)y,\ \beta_1(x,a)G_1'(a)y,\dots,\beta_m(x,a)G_m'(a)y\big),$$

where $\alpha_i(x,a)$, $i=1,2,\dots,p$, and $\beta_j(x,a)$, $j=1,2,\dots,m$, are real positive constants. Let us denote the null space of a function $H$ by $N[H]$.

Recall, from the generalized Farkas lemma [14], that $K(x)-K(a)\in A_{x,a}(X)$ if and only if $A_{x,a}^{T}(u)=0 \Rightarrow u^{T}(K(x)-K(a))=0$. This observation prompts us to define the following general null space condition:

For each $x,a\in X$, there exist real constants $\alpha_i(x,a)>0$, $i=1,2,\dots,p$, and $\beta_j(x,a)>0$, $j=1,2,\dots,m$, such that

$$N[A_{x,a}]\subseteq N\big[K(x)-K(a)\big], \tag{NC}$$

where

$$A_{x,a}(y)=\big(\alpha_1(x,a)F_1'(a)y,\dots,\alpha_p(x,a)F_p'(a)y,\ \beta_1(x,a)G_1'(a)y,\dots,\beta_m(x,a)G_m'(a)y\big).$$

Equivalently, the null space condition means that for each $x,a\in X$, there exist real constants $\alpha_i(x,a)>0$, $i=1,2,\dots,p$, $\beta_j(x,a)>0$, $j=1,2,\dots,m$, and $\zeta(x,a)\in X$ such that $F_i(x)-F_i(a)=\alpha_i(x,a)F_i'(a)\zeta(x,a)$ and $G_j(x)-G_j(a)=\beta_j(x,a)G_j'(a)\zeta(x,a)$. For our problem (P), we assume the following generalized null space condition for invex functions (GNCI):

For each $x,a\in C$, there exist real constants $\alpha_i(x,a)>0$, $i=1,2,\dots,p$, $\beta_j(x,a)>0$, $j=1,2,\dots,m$, and $\zeta(x,a)\in(C-a)$ such that $\eta(F_i(x),F_i(a))=\alpha_i(x,a)F_i'(a)\zeta(x,a)$ and $\eta(G_j(x),G_j(a))=\beta_j(x,a)G_j'(a)\zeta(x,a)$.

Note that when $C=X$, $\eta(F_i(x),F_i(a))=F_i(x)-F_i(a)$ and $\eta(G_j(x),G_j(a))=G_j(x)-G_j(a)$, the generalized null space condition for invex functions (GNCI) reduces to (NC).
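A simple setting in which (GNCI) holds (our illustration): let each $F_i$ and $G_j$ be the identity map on $X=C=\mathbb{R}^{n}$ and take $\eta(x,u)=x-u$:

```latex
% With F_i = G_j = \mathrm{id}, F_i'(a) = G_j'(a) = I, so the choices
\zeta(x,a) := x - a \in (C - a), \qquad \alpha_i(x,a) = \beta_j(x,a) = 1
% give \eta(F_i(x),F_i(a)) = x - a = \alpha_i(x,a) F_i'(a)\zeta(x,a), and
% likewise for the G_j, which is exactly (GNCI).
```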

Theorem 3.2 (Sufficient optimality conditions)

For the problem (P), assume that f i , h i and g j are invex functions and F i and G j are locally Lipschitz and Gâteaux differentiable functions. Let u be feasible for (P). Suppose that the optimality conditions (KT) hold at u. If (GNCI) holds at each feasible point x of (P), then u is a weakly efficient solution of (P).

Proof From the optimality conditions (KT), there exist $\nu_i\in T_i(u)$, $w_j\in\partial g_j(G_j(u))$ such that

$$\sum_{i=1}^{p}\lambda_i\nu_i^{T}F_i'(u)+\sum_{j=1}^{m}\mu_j w_j^{T}G_j'(u)\in(C-u)^{+},\qquad \mu_j g_j(G_j(u))=0.$$

Suppose that $u$ is not a weakly efficient solution of (P). Then there exists a feasible $x\in C$ for (P) with

$$\frac{f_i(F_i(x))}{h_i(F_i(x))}<\frac{f_i(F_i(u))}{h_i(F_i(u))},\quad i=1,2,\dots,p.$$

By (GNCI), there exists $\zeta(x,u)\in(C-u)$, the same for each $F_i$ and $G_j$, such that $\eta(F_i(x),F_i(u))=\alpha_i(x,u)F_i'(u)\zeta(x,u)$, $i=1,2,\dots,p$, and $\eta(G_j(x),G_j(u))=\beta_j(x,u)G_j'(u)\zeta(x,u)$, $j=1,2,\dots,m$. Hence

$$\begin{aligned}
0&\ge \sum_{j=1}^{m}\frac{\mu_j}{\beta_j(x,u)}\big(g_j(G_j(x))-g_j(G_j(u))\big) &&\text{(by feasibility)}\\
&\ge \sum_{j=1}^{m}\frac{\mu_j}{\beta_j(x,u)}\,w_j^{T}\eta(G_j(x),G_j(u)) &&\text{(by invexity of } g_j)\\
&=\sum_{j=1}^{m}\mu_j w_j^{T}G_j'(u)\zeta(x,u) &&\text{(by (GNCI))}\\
&\ge -\sum_{i=1}^{p}\lambda_i\nu_i^{T}F_i'(u)\zeta(x,u) &&\text{(by the hypothesis)}\\
&=-\sum_{i=1}^{p}\frac{\lambda_i}{\alpha_i(x,u)}\,\nu_i^{T}\eta(F_i(x),F_i(u)) &&\text{(by (GNCI))}\\
&\ge -\sum_{i=1}^{p}\frac{\lambda_i}{\alpha_i(x,u)}\,\frac{h_i(F_i(x))}{h_i(F_i(u))}\left(\frac{f_i(F_i(x))}{h_i(F_i(x))}-\frac{f_i(F_i(u))}{h_i(F_i(u))}\right) &&\text{(by Lemma 2.1)}\\
&>0.
\end{aligned}$$

This is a contradiction and hence u is a weakly efficient solution for (P). □

4 Duality theorems

In this section, we introduce a dual programming problem and establish weak, strong and converse duality theorems. Now we propose the following dual (D) to (P).

$$\text{(D)}\qquad
\begin{aligned}
&\text{Maximize } \left(\frac{f_1(F_1(u))}{h_1(F_1(u))},\dots,\frac{f_p(F_p(u))}{h_p(F_p(u))}\right)\\
&\text{subject to } \sum_{i=1}^{p}\lambda_i\nu_i^{T}F_i'(u)+\sum_{j=1}^{m}\mu_j w_j^{T}G_j'(u)\in(C-u)^{+},\\
&\qquad\qquad\quad \nu_i\in T_i(u),\qquad w_j\in\partial g_j(G_j(u)),\\
&\qquad\qquad\quad \mu_j g_j(G_j(u))\ge 0,\quad j=1,2,\dots,m,\\
&\qquad\qquad\quad u\in C,\ \lambda\in\mathbb{R}^{p},\ \lambda_i>0,\ \mu\in\mathbb{R}^{m},\ \mu_j\ge 0.
\end{aligned}$$

Theorem 4.1 (Weak duality)

Let $x$ be feasible for (P), and let $(u,\lambda,\mu)$ be feasible for (D). Assume that (GNCI) holds with $\alpha_i(x,u)=\beta_j(x,u)=1$, that $f_i$, $h_i$ and $g_j$ are invex functions, and that $F_i$ and $G_j$ are locally Lipschitz and Gâteaux differentiable functions. Then

$$\left(\frac{f_1(F_1(x))}{h_1(F_1(x))},\dots,\frac{f_p(F_p(x))}{h_p(F_p(x))}\right)^{T}-\left(\frac{f_1(F_1(u))}{h_1(F_1(u))},\dots,\frac{f_p(F_p(u))}{h_p(F_p(u))}\right)^{T}\notin -\mathbb{R}^{p}_{+}\setminus\{0\}.$$

Proof Since $(u,\lambda,\mu)$ is feasible for (D), there exist $\lambda_i>0$, $\mu_j\ge 0$, $\nu_i\in T_i(u)$, $i=1,2,\dots,p$, $w_j\in\partial g_j(G_j(u))$, $j=1,2,\dots,m$, satisfying $\mu_j g_j(G_j(u))\ge 0$ for $j=1,2,\dots,m$ and

$$\sum_{i=1}^{p}\lambda_i\nu_i^{T}F_i'(u)+\sum_{j=1}^{m}\mu_j w_j^{T}G_j'(u)\in(C-u)^{+}.$$

Suppose that $x\neq u$ and

$$\left(\frac{f_1(F_1(x))}{h_1(F_1(x))},\dots,\frac{f_p(F_p(x))}{h_p(F_p(x))}\right)^{T}-\left(\frac{f_1(F_1(u))}{h_1(F_1(u))},\dots,\frac{f_p(F_p(u))}{h_p(F_p(u))}\right)^{T}\in -\mathbb{R}^{p}_{+}\setminus\{0\}.$$

Then

$$0\ge \frac{f_i(F_i(x))}{h_i(F_i(x))}-\frac{f_i(F_i(u))}{h_i(F_i(u))},\quad i=1,2,\dots,p,$$

with strict inequality for at least one $i$.

By the invexity of $f_i$ and $h_i$ (via Lemma 2.1), we have, for each $i$,

$$0\ge \frac{h_i(F_i(u))}{h_i(F_i(x))}\,\nu_i^{T}\eta(F_i(x),F_i(u))
=\frac{h_i(F_i(u))}{h_i(F_i(x))}\,\nu_i^{T}F_i'(u)\zeta(x,u)\qquad\big(\text{by (GNCI) with }\alpha_i(x,u)=1\big),$$

with strict inequality for at least one $i$. Since $\frac{h_i(F_i(u))}{h_i(F_i(x))}>0$ and $\lambda_i>0$, it follows that

$$\sum_{i=1}^{p}\lambda_i\nu_i^{T}F_i'(u)\zeta(x,u)<0. \tag{4.1}$$

From the feasibility conditions, we get $\mu_j g_j(G_j(x))\le 0$ and $\mu_j g_j(G_j(u))\ge 0$, and so

$$\sum_{j=1}^{m}\frac{\mu_j}{\beta_j(x,u)}\big(g_j(G_j(x))-g_j(G_j(u))\big)\le 0.$$

Similarly, by the invexity of $g_j$, the positivity of $\beta_j(x,u)$ and (GNCI), we have

$$\sum_{j=1}^{m}\mu_j w_j^{T}G_j'(u)\zeta(x,u)\le 0. \tag{4.2}$$

By (4.1) and (4.2), we get

$$\left[\sum_{i=1}^{p}\lambda_i\nu_i^{T}F_i'(u)+\sum_{j=1}^{m}\mu_j w_j^{T}G_j'(u)\right]\zeta(x,u)<0.$$

Since $\zeta(x,u)\in(C-u)$, this contradicts the feasibility of $(u,\lambda,\mu)$ for (D). The proof is completed by noting that when $x=u$ the conclusion trivially holds. □

Theorem 4.2 (Strong duality)

For the problem (P), assume that the generalized Slater constraint qualification holds. If $u$ is a weakly efficient solution for (P), then there exist $\lambda\in\mathbb{R}^{p}$, $\lambda_i>0$, $\mu\in\mathbb{R}^{m}$, $\mu_j\ge 0$, such that $(u,\lambda,\mu)$ is a weakly efficient solution for (D).

Proof It follows from Theorem 3.1 that there exist $\lambda\in\mathbb{R}^{p}$, $\lambda_i>0$, $\mu\in\mathbb{R}^{m}$, $\mu_j\ge 0$, such that

$$\begin{aligned}
&0\in\sum_{i=1}^{p}\lambda_i T_i(u)F_i'(u)+\sum_{j=1}^{m}\mu_j\,\partial g_j(G_j(u))G_j'(u)-(C-u)^{+},\\
&\mu_j g_j(G_j(u))=0,\quad j=1,2,\dots,m.
\end{aligned}$$

Then $(u,\lambda,\mu)$ is a feasible solution for (D). Since $u$ is feasible for (P), weak duality gives, for any feasible solution $(v,\lambda',\mu')$ of (D),

$$\left(\frac{f_1(F_1(u))}{h_1(F_1(u))},\dots,\frac{f_p(F_p(u))}{h_p(F_p(u))}\right)^{T}-\left(\frac{f_1(F_1(v))}{h_1(F_1(v))},\dots,\frac{f_p(F_p(v))}{h_p(F_p(v))}\right)^{T}\notin -\mathbb{R}^{p}_{+}\setminus\{0\}.$$

As the objective value of (D) at $(u,\lambda,\mu)$ equals the objective value of (P) at $u$, no feasible solution of (D) dominates $(u,\lambda,\mu)$; hence $(u,\lambda,\mu)$ is a weakly efficient solution for (D). □

Theorem 4.3 (Converse duality)

Let $(u,\lambda,\mu)$ be a weakly efficient solution of (D) such that $u$ is a feasible solution of (P). Assume that $f_i$, $h_i$ and $g_j$ are invex functions and that $F_i$ and $G_j$ are locally Lipschitz and Gâteaux differentiable functions. Moreover, (GNCI) holds with $\alpha_i(x,u)=\beta_j(x,u)=1$. Then $u$ is a weakly efficient solution of (P).

Proof Suppose, contrary to the result, that $u$ is not a weakly efficient solution of (P). Then there exists a feasible solution $x$ of (P) such that

$$\frac{f_i(F_i(x))}{h_i(F_i(x))}<\frac{f_i(F_i(u))}{h_i(F_i(u))},\quad i=1,2,\dots,p.$$

Since $f_i$, $h_i$ are invex functions, for each $\nu_i\in T_i(u)$ we have, by Lemma 2.1,

$$0>\frac{h_i(F_i(u))}{h_i(F_i(x))}\,\nu_i^{T}\eta(F_i(x),F_i(u)).$$

Since $(u,\lambda,\mu)$ is feasible for (D), we have $\lambda_i>0$, and the positivity of $\frac{h_i(F_i(u))}{h_i(F_i(x))}$ gives

$$\begin{aligned}
0&>\sum_{i=1}^{p}\lambda_i\nu_i^{T}\eta(F_i(x),F_i(u))\\
&=\sum_{i=1}^{p}\lambda_i\nu_i^{T}\alpha_i(x,u)F_i'(u)\zeta(x,u) &&\big(\text{by (GNCI)}\big)\\
&=\sum_{i=1}^{p}\lambda_i\nu_i^{T}F_i'(u)\zeta(x,u) &&\big(\text{by }\alpha_i(x,u)=1\big).
\end{aligned}\tag{4.3}$$

Since $x$ is feasible for (P) and $(u,\lambda,\mu)$ is feasible for (D), we have $\mu_j g_j(G_j(x))\le 0\le \mu_j g_j(G_j(u))$. As $g_j$ is an invex function, for each $w_j\in\partial g_j(G_j(u))$ it follows that

$$\begin{aligned}
0&\ge \mu_j w_j^{T}\eta(G_j(x),G_j(u))\\
&=\mu_j w_j^{T}\beta_j(x,u)G_j'(u)\zeta(x,u) &&\big(\text{by (GNCI)}\big)\\
&=\mu_j w_j^{T}G_j'(u)\zeta(x,u) &&\big(\text{by }\beta_j(x,u)=1\big),
\end{aligned}$$

and, summing over $j=1,2,\dots,m$ (recall $\mu_j\ge 0$), we obtain

$$\sum_{j=1}^{m}\mu_j w_j^{T}G_j'(u)\zeta(x,u)\le 0. \tag{4.4}$$

From (4.3) and (4.4), we get

$$\left[\sum_{i=1}^{p}\lambda_i\nu_i^{T}F_i'(u)+\sum_{j=1}^{m}\mu_j w_j^{T}G_j'(u)\right]\zeta(x,u)<0.$$

Since $\zeta(x,u)\in(C-u)$, this contradicts the feasibility of $(u,\lambda,\mu)$ for (D). □