Introduction

In this paper, we consider the following multiobjective fractional programming problem:

$$(\mathrm{P})\qquad \text{minimize } \phi(x) = \left(\frac{f_1(x)}{g_1(x)}, \ldots, \frac{f_p(x)}{g_p(x)}\right) \quad \text{subject to } h_j(x) \le 0,\ j = 1, 2, \ldots, m,$$

where $f_i, g_i, h_j : \mathbb{R}^n \to \mathbb{R}$, $i = 1, 2, \ldots, p$, $j = 1, 2, \ldots, m$. The functions $f_i(\cdot)$, $i = 1, \ldots, p$, and $h_j(\cdot)$, $j = 1, \ldots, m$, are continuous and convex, and $g_i(\cdot)$, $i = 1, \ldots, p$, are continuous and concave, with $f_i(x) \ge 0$ and $g_i(x) > 0$, $i = 1, \ldots, p$, for all $x \in \mathbb{R}^n$.

Let $E = \{x \in \mathbb{R}^n : h_j(x) \le 0,\ j = 1, 2, \ldots, m\}$ denote the feasible set of problem (P).

The study of multiobjective optimization problems has been a subject of great interest, since multiobjective decision models apply to many practical problems arising in economics, management, medicine, etc. An important class of such problems is that of multiobjective fractional programming problems. Many authors have studied optimality conditions and solution concepts for multiobjective optimization problems, such as Choo and Atkins [3], Coladas et al. [4], Geoffrion [6], Gerth [7], Kaliszewski [13] and Li and Wang [14]. Generally one works with exact optimal solutions, but in many situations the notion of an exact optimal solution cannot be applied and an approximate solution is required, since from the computational point of view only approximate solutions can be obtained. So, in this article, we consider ε-approximate solutions, defined as follows:

$\bar{x} \in E$ is an $\varepsilon$-weak efficient solution of (P) if there does not exist any feasible solution $x \in E$ such that

$$\frac{f_i(x)}{g_i(x)} < \frac{f_i(\bar{x})}{g_i(\bar{x})} - \varepsilon_i, \quad i = 1, 2, \ldots, p,$$

where $\varepsilon = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_p)$ with $\varepsilon_i \ge 0$, $i = 1, 2, \ldots, p$. When $\varepsilon = 0$, an $\varepsilon$-weak efficient solution is a weak efficient solution of (P). For the notion of an ε-optimal solution of a scalar optimization problem one can refer to Bai et al. [1].
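This definition can be checked numerically by brute force over a discretised feasible set. The sketch below uses the illustrative data of Example 3.1 later in the paper ($\phi(x) = (x^2 + x,\ x)$ on $E = [-1, 0]$); the grid resolution is an arbitrary choice.

```python
# Brute-force check of eps-weak efficiency on a discretised feasible set.
# Illustrative data: phi(x) = (f1/g1, f2/g2) = (x**2 + x, x) on E = [-1, 0].
f = [lambda x: x**2 + x, lambda x: x]
g = [lambda x: 1.0, lambda x: 1.0]

def is_eps_weak_efficient(x_bar, eps, grid):
    """x_bar is eps-weak efficient iff no feasible x satisfies
    f_i(x)/g_i(x) < f_i(x_bar)/g_i(x_bar) - eps_i for EVERY i."""
    ref = [fi(x_bar) / gi(x_bar) - e for fi, gi, e in zip(f, g, eps)]
    return not any(
        all(fi(x) / gi(x) < r for fi, gi, r in zip(f, g, ref))
        for x in grid
    )

grid = [-1 + k / 1000 for k in range(1001)]          # discretised E = [-1, 0]
print(is_eps_weak_efficient(0.0, (2.0, 2.0), grid))  # True
print(is_eps_weak_efficient(0.0, (0.0, 0.0), grid))  # False: x = -1/2 dominates 0
```

With $\varepsilon = 0$ the point $\bar{x} = 0$ fails the test, while with $\varepsilon = (2, 2)$ it passes, matching the definition above.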

In the field of optimization, ε-optimality conditions have been discussed by many researchers, for example Loridan [15], Loridan and Morgan [16], Strodiot et al. [19], Yokoyama [22], Gajek and Zagrodny [5], Li and Wang [14], Liu [17], Tanaka [20], Yokoyama [24] and Li and Wang [25]. Yokoyama [22] in 1992 obtained ε-optimality conditions for a convex programming problem via exact penalty functions. In 1994, Yokoyama [23] extended these results to the vector minimization problem. Li and Wang [25] in 1998 introduced the concept of ε-proper efficiency and studied necessary and/or sufficient conditions for an ε-efficient solution (an ε-properly efficient solution, an ε-weak efficient solution) of a multiobjective optimization problem via scalarization and an alternative theorem.

Since the study of multiobjective optimization problems is a subject of great importance, in this paper we focus on developing optimality conditions for the multiobjective fractional programming problem.

To derive necessary optimality conditions one needs to impose some kind of constraint qualification. But these qualifications may be cumbersome to verify and give rise to optimality conditions that are very difficult to handle from the computational point of view. In the absence of constraint qualifications (CQs), Lagrange multiplier rules and Karush–Kuhn–Tucker (KKT) conditions may fail to hold. So we need to develop optimality conditions without CQs, which give a more practical formulation of optimality conditions for the multiobjective fractional programming problem (P). This motivates us to derive sequential optimality conditions for the multiobjective fractional programming problem. Recently, work has been done in this direction for convex programming problems with cone convex constraints by Jeyakumar et al. [10, 12] and Bai et al. [1]. Jeyakumar et al. [10, 12] introduced sequential Lagrange multiplier rules for convex programs with cone convex constraints using the epigraph of the conjugate function in terms of the ε-subdifferential computed at an optimal solution. These conditions coincide with the standard optimality conditions under appropriate CQs. One of the main advantages of the ε-subdifferential, which makes it a useful tool both in theory and in practice, is that $\partial_\varepsilon f(x) \ne \emptyset$ for every $x \in \operatorname{dom} f$. Thibault [21] derived sequential optimality conditions using the subdifferential calculus for convex functions with cone convex constraints.

In this paper, our aim is to develop sequential optimality conditions for the multiobjective fractional programming problem (P) via scalarization, using the epigraph of the conjugate function in terms of ε-subdifferentials computed at an ε-weak efficient solution.

The paper is organized as follows: "Preliminaries" deals with some preliminary results that will be used in the sequel. In "Sequential optimality conditions", we derive sequential optimality conditions. Finally, in "Sequential duality results", sequential duality results are obtained.

Preliminaries

In this section, we give some basic definitions and results which will be used in the sequel.

Let $f : \mathbb{R}^n \to \mathbb{R} \cup \{\pm\infty\}$.

The $\varepsilon$-subdifferential of $f$ at $\bar{x} \in \operatorname{dom} f$, denoted $\partial_\varepsilon f(\bar{x})$, is defined as

$$\partial_\varepsilon f(\bar{x}) = \{\xi \in \mathbb{R}^n : f(x) - f(\bar{x}) \ge \langle \xi, x - \bar{x} \rangle - \varepsilon,\ \forall x \in \operatorname{dom} f\},$$

where $\varepsilon \ge 0$.
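For a concrete feel for $\partial_\varepsilon f(\bar{x})$, the sketch below tests membership by checking the defining inequality on a grid. For $f(x) = x^2$ the ε-subdifferential is available in closed form, $\partial_\varepsilon f(\bar{x}) = [2\bar{x} - 2\sqrt{\varepsilon},\ 2\bar{x} + 2\sqrt{\varepsilon}]$ (a one-line discriminant computation), which the grid check reproduces; the grid bounds and step are arbitrary choices.

```python
import math

def in_eps_subdiff(xi, x_bar, eps, f, grid):
    # xi lies in the eps-subdifferential of f at x_bar iff
    # f(x) - f(x_bar) >= xi*(x - x_bar) - eps for all x (checked on a grid)
    return all(f(x) - f(x_bar) >= xi * (x - x_bar) - eps for x in grid)

f = lambda x: x * x
x_bar, eps = 1.0, 0.25
grid = [-10 + k / 100 for k in range(2001)]

# Closed form for f(x) = x**2 at x_bar:
# [2*x_bar - 2*sqrt(eps), 2*x_bar + 2*sqrt(eps)]  -> here [1, 3]
lo, hi = 2 * x_bar - 2 * math.sqrt(eps), 2 * x_bar + 2 * math.sqrt(eps)
print(in_eps_subdiff(lo, x_bar, eps, f, grid))        # True (left endpoint)
print(in_eps_subdiff(hi, x_bar, eps, f, grid))        # True (right endpoint)
print(in_eps_subdiff(hi + 0.1, x_bar, eps, f, grid))  # False (outside)
```

Note how, in contrast to the ordinary subdifferential $\{2\bar{x}\}$, the ε-subdifferential is a whole interval, which is what makes it nonempty (and useful) even at non-minimizing points.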

For detailed study on ε-subdifferentials one may refer to Hiriart-Urruty [8].

Remark 2.1

(Rockafellar and Wets [18]). If $\hat{f}(x) = f(x) - \alpha$, $x \in \mathbb{R}^n$, $\alpha \in \mathbb{R}$, then

$$\hat{f}^* = f^* + \alpha, \qquad \operatorname{epi}\hat{f}^* = \operatorname{epi} f^* + (0, \alpha),$$

where $f^*$ denotes the conjugate of the function $f$ and $\operatorname{epi} f^*$ denotes the epigraph of $f^*$. For the definitions of the conjugate and the epigraph of a function one can see Bector et al. [2].

Remark 2.2

(Rockafellar and Wets [18]). For any scalar $\lambda > 0$,

$$(\lambda f)^* = \lambda \star f^*, \qquad (\lambda \star f)^* = \lambda f^*,$$

where $\lambda \star f$ stands for the epi-multiple, $(\lambda \star f)(x) = \lambda f(x/\lambda)$. It satisfies

$$\operatorname{epi}(\lambda \star f) = \lambda \operatorname{epi} f.$$

For details on conjugacy theory one may refer to Rockafellar and Wets [18].

Theorem 2.1

(Theorem 1.2.1, Bector et al. [2]). A function $f$ is a lower semicontinuous (lsc) function if and only if its epigraph is a closed set; for convex $f$, the epigraph is then a closed convex set.

For a set $E$, the indicator function $\delta_E$ is defined as

$$\delta_E(x) = \begin{cases} 0, & x \in E, \\ +\infty, & x \notin E. \end{cases}$$

For a nonempty closed convex set $E$, $\delta_E$ is a proper, lsc, convex function.

Proposition 2.1

(Proposition 2.1, Jeyakumar et al. [10]). Let $f : \mathbb{R}^n \to \mathbb{R}\cup\{\pm\infty\}$ be a proper, lsc, convex function and $\bar{x} \in \operatorname{dom} f$. Then

$$\operatorname{epi} f^* = \bigcup_{\varepsilon \ge 0}\{(\xi,\ \langle\xi, \bar{x}\rangle + \varepsilon - f(\bar{x})) : \xi \in \partial_\varepsilon f(\bar{x})\}.$$

For $\bar{x} \in \operatorname{dom} f$, $\partial_\varepsilon f(\bar{x})$ is nonempty and hence $\operatorname{epi} f^*$ is nonempty.

Proposition 2.2

(Rockafellar and Wets [18], Jeyakumar et al. [12]). For proper, lsc, convex functions $f_1, f_2 : \mathbb{R}^n \to \mathbb{R}\cup\{\pm\infty\}$,

$$\operatorname{epi}(f_1 + f_2)^* = \operatorname{cl}(\operatorname{epi} f_1^* + \operatorname{epi} f_2^*).$$

If $\operatorname{dom} f_1 = \mathbb{R}^n$, then

$$\operatorname{epi}(f_1 + f_2)^* = \operatorname{epi} f_1^* + \operatorname{epi} f_2^*.$$

For any set $A \subset \mathbb{R}^n$, we denote by $\operatorname{co} A$ and $\operatorname{cl}\operatorname{co} A$ the convex hull and the closed convex hull of $A$, respectively.

Proposition 2.3

(Jeyakumar et al. [11]). Let $I$ be an arbitrary index set and let $f_i : \mathbb{R}^n \to \mathbb{R}\cup\{\pm\infty\}$, $i \in I$, be proper, lsc, convex functions. Define $f(x) = \sup_{i \in I} f_i(x)$.

Then $\operatorname{epi} f^* = \operatorname{cl}\operatorname{co}\bigcup_{i \in I}\operatorname{epi} f_i^*$.

Sequential optimality conditions

In this section, we prove sequential optimality conditions for multiobjective fractional programming problem (P).

We shall use the following lemma, on the lines of Lemma 5.1 of [1], to prove our optimality conditions.

Lemma 3.1

For (P), if $E \ne \emptyset$, then

$$\text{(i)}\quad \operatorname{epi}\delta_E^* = \operatorname{cl}\bigcup_{\mu_j \ge 0}\sum_{j=1}^m \operatorname{epi}(\mu_j h_j)^*,$$

$$\text{(ii)}\quad \operatorname{epi}\left(\sum_{i=1}^p \lambda_i(f_i - v_i g_i) + \delta_E\right)^* = \operatorname{cl}\left(\operatorname{epi}\left(\sum_{i=1}^p \lambda_i(f_i - v_i g_i)\right)^* + \bigcup_{\mu_j \ge 0}\sum_{j=1}^m \operatorname{epi}(\mu_j h_j)^*\right),$$

where $v \in \mathbb{R}^p$.

Since $f_i(\cdot)$, $i = 1, \ldots, p$, are convex functions and $g_i(\cdot)$, $i = 1, \ldots, p$, are concave functions, the functions $f_i(\cdot) - v_i g_i(\cdot)$, $i = 1, \ldots, p$, are convex.

Theorem 3.1

$\bar{x} \in E$ is an $\varepsilon$-weak efficient solution of (P) if and only if there exist $\varepsilon' \ge 0$, $\{\varepsilon_n^j\}, \{\mu_n^j\} \subset \mathbb{R}_+$, $j = 1, 2, \ldots, m$, $\lambda_i \ge 0$, $i = 1, 2, \ldots, p$, with $\sum_{i=1}^p \lambda_i = 1$, and $\{\xi^i\} \subset \mathbb{R}^n$, $i = 1, 2, \ldots, p$, $\{\xi_n^j\} \subset \mathbb{R}^n$, $j = 1, 2, \ldots, m$, with $\sum_{i=1}^p \xi^i \in \partial_{\varepsilon'}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x})$ and $\xi_n^j \in \partial_{\varepsilon_n^j} h_j(\bar{x})$, $j = 1, 2, \ldots, m$, such that

$$\sum_{i=1}^p \xi^i + \sum_{j=1}^m \mu_n^j\xi_n^j \to 0 \quad \text{as } n \to \infty \tag{1}$$

and

$$\varepsilon' + \lim_{n\to\infty}\left(\sum_{j=1}^m \mu_n^j\varepsilon_n^j - \sum_{j=1}^m \mu_n^j h_j(\bar{x})\right) = \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i. \tag{2}$$

Assume that $f(x) < +\infty$, $g(x) < +\infty$ for all $x \in \mathbb{R}^n$.

Here $\bar{v}_i = \dfrac{f_i(\bar{x})}{g_i(\bar{x})} - \varepsilon_i$, $i = 1, 2, \ldots, p$.

Proof

Since $\bar{x} \in E$ is an $\varepsilon$-weak efficient solution of (P), there does not exist any feasible solution $x \in E$ such that

$$\frac{f_i(x)}{g_i(x)} < \frac{f_i(\bar{x})}{g_i(\bar{x})} - \varepsilon_i, \quad i = 1, 2, \ldots, p. \tag{3}$$

Using the parametric approach, problem (P) can be written as

$$(\mathrm{P}_1)\qquad \text{minimize } (f_1(x) - v_1 g_1(x), \ldots, f_p(x) - v_p g_p(x)) \quad \text{subject to } h_j(x) \le 0,\ j = 1, 2, \ldots, m,$$

where $v \in \mathbb{R}^p$ is the parameter.

By (3) we have that there does not exist any feasible solution $x \in E$ such that

$$f_i(x) < \left(\frac{f_i(\bar{x})}{g_i(\bar{x})} - \varepsilon_i\right) g_i(x), \quad i = 1, 2, \ldots, p$$

$$\Leftrightarrow\quad f_i(x) - \left(\frac{f_i(\bar{x})}{g_i(\bar{x})} - \varepsilon_i\right) g_i(x) < 0 = f_i(\bar{x}) - \left(\frac{f_i(\bar{x})}{g_i(\bar{x})} - \varepsilon_i\right) g_i(\bar{x}) - \varepsilon_i g_i(\bar{x}), \quad i = 1, 2, \ldots, p$$

$$\Leftrightarrow\quad f_i(x) - \bar{v}_i g_i(x) < 0 = f_i(\bar{x}) - \bar{v}_i g_i(\bar{x}) - \varepsilon_i g_i(\bar{x}), \quad i = 1, 2, \ldots, p. \tag{4}$$

Hence $\bar{x}$ is an $\bar{\varepsilon}$-weak efficient solution of $(\mathrm{P}_1)$, where $\bar{\varepsilon} = (\varepsilon_1 g_1(\bar{x}), \ldots, \varepsilon_p g_p(\bar{x}))$.

By the weighted sum approach the problem $(\mathrm{P}_1)$ can be converted to the following scalar optimization problem:

$$(\mathrm{P}_2)\qquad \text{minimize } \sum_{i=1}^p \lambda_i(f_i(x) - v_i g_i(x)) \quad \text{subject to } h_j(x) \le 0,\ j = 1, 2, \ldots, m,$$

where $\lambda_i \ge 0$, $i = 1, 2, \ldots, p$, and $\sum_{i=1}^p \lambda_i = 1$.

By (4), we have

$$f_i(x) - \bar{v}_i g_i(x) < 0 = f_i(\bar{x}) - \bar{v}_i g_i(\bar{x}) - \varepsilon_i g_i(\bar{x}), \quad i = 1, 2, \ldots, p.$$

Since $\lambda_i \ge 0$, $i = 1, 2, \ldots, p$, multiplying the above by $\lambda_i$, $i = 1, 2, \ldots, p$, and adding, we get

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) < \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x}) - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i.$$

Hence $\bar{x}$ is a $\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$-optimal solution of $(\mathrm{P}_2)$.

Since $\bar{x}$ is a $\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$-optimal solution of $(\mathrm{P}_2)$, it is a $\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$-optimal solution of the function $\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot)) + \delta_E\right)(x)$, that is,

$$\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot)) + \delta_E\right)(x) \ge \left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot)) + \delta_E\right)(\bar{x}) - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i \quad \text{for all } x$$

$$\Rightarrow\quad 0 \in \partial_{\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot)) + \delta_E\right)(\bar{x}).$$

Hence

$$\left(0,\ -\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x}) + \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i\right) \in \operatorname{epi}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot)) + \delta_E\right)^*.$$

Using Proposition 2.2, we get

$$\left(0,\ -\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x}) + \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i\right) \in \operatorname{epi}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)^* + \operatorname{epi}\delta_E^*.$$

Using Lemma 3.1(i) and Remark 2.2, we obtain

$$\left(0,\ -\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x}) + \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i\right) \in \operatorname{epi}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)^* + \operatorname{cl}\bigcup_{\mu_j \ge 0}\sum_{j=1}^m \mu_j\operatorname{epi} h_j^*.$$

Proposition 2.1 implies that there exist $\varepsilon' \ge 0$, $\{\varepsilon_n^j\} \subset \mathbb{R}_+$, $j = 1, 2, \ldots, m$, and $\sum_{i=1}^p \xi^i \in \partial_{\varepsilon'}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x})$, $\xi_n^j \in \partial_{\varepsilon_n^j} h_j(\bar{x})$ such that

$$\left(0,\ -\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x}) + \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i\right) = \left(\sum_{i=1}^p \xi^i,\ \Big\langle\sum_{i=1}^p \xi^i, \bar{x}\Big\rangle + \varepsilon' - \left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x})\right) + \lim_{n\to\infty}\left(\sum_{j=1}^m \mu_n^j\xi_n^j,\ \Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, \bar{x}\Big\rangle + \sum_{j=1}^m \mu_n^j\varepsilon_n^j - \sum_{j=1}^m \mu_n^j h_j(\bar{x})\right).$$

Now, comparing both sides, we get

$$\sum_{i=1}^p \xi^i + \sum_{j=1}^m \mu_n^j\xi_n^j \to 0 \quad \text{as } n \to \infty$$

and

$$-\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x}) + \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i = \Big\langle\sum_{i=1}^p \xi^i, \bar{x}\Big\rangle + \varepsilon' - \left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x}) + \lim_{n\to\infty}\left(\Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, \bar{x}\Big\rangle + \sum_{j=1}^m \mu_n^j\varepsilon_n^j - \sum_{j=1}^m \mu_n^j h_j(\bar{x})\right).$$

Using (1), we get

$$\varepsilon' + \lim_{n\to\infty}\left(\sum_{j=1}^m \mu_n^j\varepsilon_n^j - \sum_{j=1}^m \mu_n^j h_j(\bar{x})\right) = \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i.$$

Conversely, suppose that (1) and (2) hold.

We have to show that x ¯ is an ε-weak efficient solution of (P).

Suppose, on the contrary, that $\bar{x}$ is not an $\varepsilon$-weak efficient solution of (P). Then, as argued in the necessary part, $\bar{x}$ is not a $\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$-optimal solution of $(\mathrm{P}_2)$. Then there exists a feasible solution $x \in E$ such that

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) < \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x}) - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i. \tag{5}$$

Since $\sum_{i=1}^p \xi^i \in \partial_{\varepsilon'}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x})$ and $\xi_n^j \in \partial_{\varepsilon_n^j} h_j(\bar{x})$, $j = 1, 2, \ldots, m$, we have

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) - \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x}) \ge \Big\langle\sum_{i=1}^p \xi^i, x - \bar{x}\Big\rangle - \varepsilon'$$

$$h_j(x) - h_j(\bar{x}) \ge \langle\xi_n^j, x - \bar{x}\rangle - \varepsilon_n^j, \quad j = 1, 2, \ldots, m. \tag{6}$$

Since $\{\mu_n^j\} \subset \mathbb{R}_+$, $j = 1, 2, \ldots, m$, multiplying the second inequality of (6) by $\mu_n^j$, $j = 1, 2, \ldots, m$, and adding, we get

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) - \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x}) + \sum_{j=1}^m \mu_n^j(h_j(x) - h_j(\bar{x})) \ge \Big\langle\sum_{i=1}^p \xi^i, x - \bar{x}\Big\rangle + \Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, x - \bar{x}\Big\rangle - \left(\varepsilon' + \sum_{j=1}^m \mu_n^j\varepsilon_n^j\right).$$

Taking the limit as $n \to \infty$ and using (1), we get

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) - \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x}) + \lim_{n\to\infty}\sum_{j=1}^m \mu_n^j h_j(x) \ge -\lim_{n\to\infty}\left(\varepsilon' + \sum_{j=1}^m \mu_n^j\varepsilon_n^j - \sum_{j=1}^m \mu_n^j h_j(\bar{x})\right).$$

Using (2), we get

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) - \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x}) + \lim_{n\to\infty}\sum_{j=1}^m \mu_n^j h_j(x) \ge -\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i.$$

As $x$ is feasible for (P), we have $h_j(x) \le 0$ and $\mu_n^j \ge 0$, so

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) - \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x}) \ge -\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i,$$

which contradicts (5).

Hence our assumption was wrong, and $\bar{x}$ is an $\varepsilon$-weak efficient solution of (P). $\square$

Corollary 3.1

If, in the above theorem, we impose a constraint qualification, namely that the set $\bigcup_{\mu_j \ge 0}\sum_{j=1}^m \mu_j\operatorname{epi} h_j^*$ is closed, then the sequential optimality conditions reduce to the standard KKT conditions.

We now illustrate the above theorem with the help of the following example.

Example 3.1

Consider the problem

$$\min\left(\frac{f_1(x)}{g_1(x)}, \frac{f_2(x)}{g_2(x)}\right) = (x^2 + x,\ x)$$

subject to $h(x) \le 0$, where

$$h(x) = \begin{cases} x, & x \ge 0, \\ -x - 1, & x < 0. \end{cases}$$

Here $f_i, g_i, h : \mathbb{R} \to \mathbb{R}$, $i = 1, 2$.

The set of feasible solutions is given by

$$E = \{x \in \mathbb{R} : -1 \le x \le 0\}.$$

Here $\bar{x} = 0$ is not an optimal solution: for $x = -\frac{1}{2}$, $\frac{f_1(x)}{g_1(x)} < \frac{f_1(\bar{x})}{g_1(\bar{x})}$ and $\frac{f_2(x)}{g_2(x)} < \frac{f_2(\bar{x})}{g_2(\bar{x})}$. However, it is an $\varepsilon_1$-optimal solution, since for $\varepsilon_1 = 2$, $\frac{f_1(x)}{g_1(x)} \ge \frac{f_1(\bar{x})}{g_1(\bar{x})} - \varepsilon_1$ for all $x \in E$, and an $\varepsilon_2$-optimal solution, since for $\varepsilon_2 = 2$, $\frac{f_2(x)}{g_2(x)} \ge \frac{f_2(\bar{x})}{g_2(\bar{x})} - \varepsilon_2$ for all $x \in E$.

Now

$$\bar{v}_1 = \frac{f_1(\bar{x})}{g_1(\bar{x})} - \varepsilon_1 = -2, \qquad \bar{v}_2 = \frac{f_2(\bar{x})}{g_2(\bar{x})} - \varepsilon_2 = -2.$$

Then there exist $\varepsilon' = 1$, $\varepsilon_n^1 = \frac{1}{n}$, $\varepsilon_n^2 = 1 + \frac{1}{n}$, $\lambda_1 = \lambda_2 = \frac{1}{2}$, $\xi^i, \xi_n^j \in \mathbb{R}$, $i = 1, 2$, $j = 1, 2$, with $\xi^1 + \xi^2 = 1 \in \partial_{\varepsilon'}\left(\lambda_1(f_1 - \bar{v}_1 g_1) + \lambda_2(f_2 - \bar{v}_2 g_2)\right)(\bar{x})$, since $\frac{x^2}{2} + x + 1 \ge x$ for all $x \in \mathbb{R}$, and $\xi_n^1 = 1 \in \partial_{\varepsilon_n^1} h(\bar{x})$, $\xi_n^2 = -1 \in \partial_{\varepsilon_n^2} h(\bar{x})$, since

$$x \le x + \varepsilon_n^1 \ \text{for } x \ge 0, \qquad -x \le -x - 1 + \varepsilon_n^2 \ \text{for } x < 0,$$

and $\mu_n^1 = \frac{1}{n}$, $\mu_n^2 = 1$ such that

$$\xi^1 + \xi^2 + \sum_{j=1}^2 \mu_n^j\xi_n^j \to 0 \quad \text{as } n \to \infty$$

and

$$\varepsilon' + \lim_{n\to\infty}\left(\sum_{j=1}^2 \mu_n^j\varepsilon_n^j - \sum_{j=1}^2 \mu_n^j h(\bar{x})\right) = \lambda_1 g_1(\bar{x})\varepsilon_1 + \lambda_2 g_2(\bar{x})\varepsilon_2.$$
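The two limits above can be verified numerically. A minimal sketch with the data of Example 3.1 (here $h(\bar{x}) = h(0) = 0$, $g_1 = g_2 = 1$, $\varepsilon_1 = \varepsilon_2 = 2$):

```python
# Check of conditions (1) and (2) for Example 3.1 with
# eps' = 1, eps_n^1 = 1/n, eps_n^2 = 1 + 1/n, mu_n^1 = 1/n, mu_n^2 = 1,
# lambda_1 = lambda_2 = 1/2, xi^1 + xi^2 = 1, xi_n^1 = 1, xi_n^2 = -1.
eps_prime = 1.0
lam, eps = (0.5, 0.5), (2.0, 2.0)
xi_sum, xi_n = 1.0, (1.0, -1.0)     # objective / constraint eps-subgradients

def lhs(n):
    mu = (1.0 / n, 1.0)
    eps_n = (1.0 / n, 1.0 + 1.0 / n)
    cond1 = xi_sum + sum(m * x for m, x in zip(mu, xi_n))
    cond2 = eps_prime + sum(m * e for m, e in zip(mu, eps_n))  # h(0) = 0
    return cond1, cond2

# condition (1) -> 0; condition (2) -> sum_i lambda_i * g_i(0) * eps_i = 2
for n in (10, 1000, 100000):
    print(lhs(n))   # cond1 = 1/n -> 0, cond2 = 2 + 1/n + 1/n**2 -> 2
```

The first component tends to $0$ and the second to $\lambda_1 g_1(\bar{x})\varepsilon_1 + \lambda_2 g_2(\bar{x})\varepsilon_2 = 2$, as conditions (1) and (2) require.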

Theorem 3.2

$\bar{x} \in E$ is an $\varepsilon$-weak efficient solution of (P) if and only if there exist $\{\varepsilon_n\}, \{\varepsilon_n^j\}, \{\lambda_n^i\}, \{\mu_n^j\} \subset \mathbb{R}_+$, $i = 1, 2, \ldots, p$, $j = 1, 2, \ldots, m$, $\lambda_i \ge 0$, $i = 1, 2, \ldots, p$, with $\sum_{i=1}^p \lambda_i = 1$, and $\{\xi_n^i\}, \{\xi_n^j\} \subset \mathbb{R}^n$, $i = 1, 2, \ldots, p$, $j = 1, 2, \ldots, m$, with $\sum_{i=1}^p \xi_n^i \in \partial_{\varepsilon_n}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x})$ and $\xi_n^j \in \partial_{\varepsilon_n^j} h_j(\bar{x})$, $j = 1, 2, \ldots, m$, such that

$$\sum_{i=1}^p \lambda_n^i\xi_n^i + \sum_{j=1}^m \mu_n^j\xi_n^j \to 0, \tag{7}$$

$$\sum_{j=1}^m \mu_n^j h_j(\bar{x}) \to 0,$$

and $\varepsilon_n \to 0$, $\varepsilon_n^j \to 0$, $j = 1, 2, \ldots, m$, as $n \to \infty$.

Assume that $f(x) < +\infty$, $g(x) < +\infty$ for all $x \in \mathbb{R}^n$.

Proof

Since $\bar{x}$ is an $\varepsilon$-weak efficient solution of (P), it is a $\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$-optimal solution of $(\mathrm{P}_2)$; hence it solves the following unconstrained problem:

$$(\mathrm{P}_0)\qquad \min f_0(x) \quad \text{subject to } x \in \mathbb{R}^n,$$

where $f_0(x) = \max\left\{\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) - \alpha,\ h_1(x),\ h_2(x),\ \ldots,\ h_m(x)\right\}$ and $\alpha = \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x})$.

Since $\bar{x}$ is a $\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$-optimal solution of $(\mathrm{P}_0)$, we have $f_0(x) \ge f_0(\bar{x}) - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$ for all $x \in \mathbb{R}^n$, that is, $f_0(x) \ge -\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$ for all $x \in \mathbb{R}^n$ (note that $f_0(\bar{x}) = 0$, since $h_j(\bar{x}) \le 0$ for all $j$)

$$\Rightarrow\quad 0 \in \partial_{\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i} f_0(\bar{x}).$$

Hence

$$\left(0,\ \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i\right) \in \operatorname{epi} f_0^*. \tag{8}$$
(8)

By Proposition 2.3, we have

$$\operatorname{epi} f_0^* = \operatorname{cl}\operatorname{co}\left(\operatorname{epi}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot)) - \alpha\right)^* \cup \bigcup_{j=1}^m \operatorname{epi} h_j^*\right). \tag{9}$$

Using (9) in (8), we get

$$\left(0,\ \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i\right) \in \operatorname{cl}\operatorname{co}\left(\operatorname{epi}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot)) - \alpha\right)^* \cup \bigcup_{j=1}^m \operatorname{epi} h_j^*\right).$$

By Remark 2.1, we have

$$\left(0,\ \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i\right) \in \operatorname{cl}\operatorname{co}\left(\left(\operatorname{epi}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)^* + (0, \alpha)\right) \cup \bigcup_{j=1}^m \operatorname{epi} h_j^*\right).$$

Thus, there exist

$$\sum_{i=1}^p \xi_n^i \in \partial_{\varepsilon_n}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x}), \qquad \xi_n^j \in \partial_{\varepsilon_n^j} h_j(\bar{x}), \quad j = 1, 2, \ldots, m,$$

and $\{\varepsilon_n\}, \{\varepsilon_n^j\}, \{\lambda_n^i\}, \{\mu_n^j\} \subset \mathbb{R}_+$, $i = 1, 2, \ldots, p$, $j = 1, 2, \ldots, m$, with $\sum_{i=1}^p \lambda_n^i + \sum_{j=1}^m \mu_n^j \to 1$ as $n \to \infty$, such that

$$\left(0,\ \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i\right) = \lim_{n\to\infty}\left[\left(\sum_{i=1}^p \lambda_n^i\xi_n^i,\ \Big\langle\sum_{i=1}^p \lambda_n^i\xi_n^i, \bar{x}\Big\rangle + \sum_{i=1}^p \lambda_n^i\varepsilon_n - \sum_{i=1}^p \lambda_n^i\alpha\right) + \left(0,\ \sum_{i=1}^p \lambda_n^i\alpha\right) + \left(\sum_{j=1}^m \mu_n^j\xi_n^j,\ \Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, \bar{x}\Big\rangle + \sum_{j=1}^m \mu_n^j\varepsilon_n^j - \sum_{j=1}^m \mu_n^j h_j(\bar{x})\right)\right].$$

Now, comparing both sides, we get $\sum_{i=1}^p \lambda_n^i\xi_n^i + \sum_{j=1}^m \mu_n^j\xi_n^j \to 0$ as $n \to \infty$ and

$$\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i = \lim_{n\to\infty}\left[\Big\langle\sum_{i=1}^p \lambda_n^i\xi_n^i, \bar{x}\Big\rangle + \sum_{i=1}^p \lambda_n^i\varepsilon_n + \Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, \bar{x}\Big\rangle + \sum_{j=1}^m \mu_n^j\varepsilon_n^j - \sum_{j=1}^m \mu_n^j h_j(\bar{x})\right].$$

Using (7), we get

$$\sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i = \lim_{n\to\infty}\left[\sum_{i=1}^p \lambda_n^i\varepsilon_n + \sum_{j=1}^m \mu_n^j\varepsilon_n^j - \sum_{j=1}^m \mu_n^j h_j(\bar{x})\right].$$

This equation, along with the conditions $\varepsilon_i \ge 0$, $\lambda_i \ge 0$, $g_i(\bar{x}) > 0$, $\varepsilon_n \ge 0$, $\varepsilon_n^j \ge 0$, $\lambda_n^i \ge 0$, $i = 1, 2, \ldots, p$, $\mu_n^j \ge 0$, $j = 1, 2, \ldots, m$, and the fact that $\bar{x}$ is feasible for (P), implies $\varepsilon_n \to 0$, $\varepsilon_n^j \to 0$, $j = 1, 2, \ldots, m$, and $\sum_{j=1}^m \mu_n^j h_j(\bar{x}) \to 0$ as $n \to \infty$.

Conversely, proceeding along the lines of Theorem 3.1, we arrive at the following condition:

$$\lim_{n\to\infty}\left[\sum_{i=1}^p \lambda_n^i\lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) - \sum_{i=1}^p \lambda_n^i\lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x})\right] \ge \lim_{n\to\infty}\left[-\sum_{j=1}^m \mu_n^j h_j(x)\right] + \lim_{n\to\infty}\left[\sum_{j=1}^m \mu_n^j h_j(\bar{x}) - \left(\sum_{i=1}^p \lambda_n^i\varepsilon_n + \sum_{j=1}^m \mu_n^j\varepsilon_n^j\right)\right].$$

Using the conditions $\sum_{j=1}^m \mu_n^j h_j(\bar{x}) \to 0$, $\varepsilon_n \to 0$, $\varepsilon_n^j \to 0$, $\mu_n^j \ge 0$, $j = 1, 2, \ldots, m$, as $n \to \infty$ and the fact that $x$ is feasible for (P), we get

$$\lim_{n\to\infty}\left[\sum_{i=1}^p \lambda_n^i\lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) - \sum_{i=1}^p \lambda_n^i\lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x})\right] \ge 0.$$

Since λ i 0 , λ n i 0 , i = 1 , 2 , , p for all n ∊ N, we have

f i ( x ) - v i ¯ g i ( x ) f i ( x ¯ ) - v i ¯ g i ( x ¯ ) , i = 1 , 2 , , p

That is, f i ( x ) - v i ¯ g i ( x ) f i ( x ¯ ) - v i ¯ g i ( x ¯ ) f i ( x ¯ ) - v i ¯ g i ( x ¯ ) - g i ( x ¯ ) ε i , i = 1 , 2 , , p as ε i 0 , g i ( x ¯ ) > 0 , i = 1 , 2 , , p .

Since λ i 0 , i = 1 , 2 , , p we get

i = 1 p λ i ( f i ( . ) - v i ¯ g i ( . ) ) ( x ) - i = 1 p λ i ( f i ( . ) - v i ¯ g i ( . ) ) ( x ¯ ) - i = 1 p λ i g i ( x ¯ ) ε i

which gives contradiction to (5).

Hence the result.

We now give an example to illustrate the above theorem.

Remark 3.1

Consider Example 3.1 with $h(x)$ replaced by

$$h(x) = \begin{cases} x, & x \ge -1, \\ -x, & x < -1. \end{cases}$$

The set of feasible solutions is given by

$$E = \{x \in \mathbb{R} : -1 \le x \le 0\}.$$

It can be seen that $\bar{x} = 0$ is not an optimal solution but it is an $\varepsilon$-optimal solution.

Then there exist $\varepsilon_n = \frac{1}{n}$, $\varepsilon_n^1 = \frac{1}{n}$, $\lambda_1 = \lambda_2 = \frac{1}{2}$, $\xi_n^i, \xi_n^1 \in \mathbb{R}$, $i = 1, 2$, with $\xi_n^1 + \xi_n^2 = 1 \in \partial_{\varepsilon_n}\left(\lambda_1(f_1 - \bar{v}_1 g_1) + \lambda_2(f_2 - \bar{v}_2 g_2)\right)(\bar{x})$, since $\frac{x^2}{2} + x + \varepsilon_n \ge x$ for all $x \in \mathbb{R}$, and

$$\xi_n^1 = 1 \in \partial_{\varepsilon_n^1} h(\bar{x}), \quad \text{since } x \le x + \varepsilon_n^1 \ \text{for } x \ge -1 \ \text{ and } \ x \le -x + \varepsilon_n^1 \ \text{for } x < -1,$$

and $\lambda_n^1 = 1 + \frac{1}{n}$, $\lambda_n^2 = \frac{1}{n}$, $\mu_n^1 = \frac{1}{n}$ such that

$$\lambda_n^1\xi_n^1 + \lambda_n^2\xi_n^2 + \mu_n^1\xi_n^1 \to 0, \quad \text{with } \lambda_n^1 + \lambda_n^2 + \mu_n^1 \to 1 \text{ as } n \to \infty,$$

and $\mu_n^1 h(\bar{x}) \to 0$, $\varepsilon_n \to 0$, $\varepsilon_n^1 \to 0$ as $n \to \infty$.
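The limits claimed in this remark can likewise be checked numerically. A minimal sketch follows; since the remark fixes only the sum $\xi_n^1 + \xi_n^2 = 1$, the split used below ($\xi_n^1 = 0$, $\xi_n^2 = 1$) is an assumption made for illustration, and only the sum enters condition (7).

```python
# Check of the limits in Remark 3.1 (Theorem 3.2), with
# lambda_n^1 = 1 + 1/n, lambda_n^2 = 1/n, mu_n^1 = 1/n,
# constraint subgradient xi_n^1 = 1 and h(x_bar) = h(0) = 0.
def sequences(n):
    lam_n = (1.0 + 1.0 / n, 1.0 / n)
    mu_n = 1.0 / n
    xi_obj = (0.0, 1.0)          # ASSUMED split with xi_obj[0] + xi_obj[1] = 1
    xi_con = 1.0
    cond7 = lam_n[0] * xi_obj[0] + lam_n[1] * xi_obj[1] + mu_n * xi_con
    coeff_sum = lam_n[0] + lam_n[1] + mu_n
    mu_h = mu_n * 0.0            # h(0) = 0
    return cond7, coeff_sum, mu_h

for n in (10, 1000, 100000):
    print(sequences(n))  # cond7 = 2/n -> 0, coeff_sum = 1 + 3/n -> 1, mu_h = 0
```

All three quantities behave as the remark asserts: the combination in (7) tends to $0$, the multipliers' sum tends to $1$, and $\mu_n^1 h(\bar{x})$ is identically $0$.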

Sequential duality results

In this section, we prove sequential duality results for (P).

For (P), the sequential Lagrange function $L : \mathbb{R}^n \times \mathbb{R}^m_+ \to \mathbb{R}\cup\{\pm\infty\}$ is defined as

$$L(x, \mu_n) = \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) + \sum_{j=1}^m \mu_n^j h_j(x).$$

The sequential dual of (P) is given by

$$\max_{\mu_n \in \mathbb{R}^m_+}\ \min_{x \in E}\ \liminf_{n\to\infty} L(x, \mu_n).$$

In the following theorem, we establish sequential duality result.

Theorem 4.1

Let $\bar{x}$ be an $\varepsilon$-weak efficient solution of (P) with optimal value $\alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$, where $\alpha = \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x})$. Then

$$\max_{\mu_n \in \mathbb{R}^m_+}\ \min_{x \in E}\ \liminf_{n\to\infty} L(x, \mu_n) = \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i.$$

Assume that $f(x) < +\infty$, $g(x) < +\infty$, $h(x) < +\infty$ for all $x \in \mathbb{R}^n$.

Proof

Since $\bar{x}$ is an $\varepsilon$-weak efficient solution of (P), proceeding along the lines of Theorem 3.1, there exist $\varepsilon' \ge 0$, $\sum_{i=1}^p \xi^i \in \partial_{\varepsilon'}\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x})$, $\xi_n^j \in \partial_{\varepsilon_n^j} h_j(\bar{x})$, $j = 1, 2, \ldots, m$, and sequences $\{\xi^i\}, \{\xi_n^j\} \subset \mathbb{R}^n$, $i = 1, 2, \ldots, p$, $j = 1, 2, \ldots, m$, and $\{\varepsilon_n^j\}, \{\mu_n^j\} \subset \mathbb{R}_+$, $j = 1, 2, \ldots, m$, such that

$$\left(0,\ -\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x}) + \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i\right) = \left(\sum_{i=1}^p \xi^i,\ \Big\langle\sum_{i=1}^p \xi^i, \bar{x}\Big\rangle + \varepsilon' - \left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)(\bar{x})\right) + \lim_{n\to\infty}\left(\sum_{j=1}^m \mu_n^j\xi_n^j,\ \Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, \bar{x}\Big\rangle + \sum_{j=1}^m \mu_n^j\varepsilon_n^j - \sum_{j=1}^m \mu_n^j h_j(\bar{x})\right).$$

Using the definitions of the conjugates of $f_i(\cdot) - \bar{v}_i g_i(\cdot)$, $i = 1, 2, \ldots, p$, and $h_j(\cdot)$, $j = 1, 2, \ldots, m$, and comparing both sides, we get

$$\sum_{i=1}^p \xi^i + \sum_{j=1}^m \mu_n^j\xi_n^j \to 0 \quad \text{as } n \to \infty$$

and

$$-\alpha + \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i = \left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)^*\!\left(\sum_{i=1}^p \xi^i\right) + \varepsilon' + \lim_{n\to\infty}\left[\left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)^*\!\left(\sum_{j=1}^m \mu_n^j\xi_n^j\right) + \sum_{j=1}^m \mu_n^j\varepsilon_n^j\right].$$

Using the definition of $\left(\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))\right)^*\left(\sum_{i=1}^p \xi^i\right)$, we get

$$\Big\langle\sum_{i=1}^p \xi^i, x\Big\rangle - \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) + \varepsilon' + \lim_{n\to\infty}\left[\left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)^*\!\left(\sum_{j=1}^m \mu_n^j\xi_n^j\right) + \sum_{j=1}^m \mu_n^j\varepsilon_n^j\right] \le -\alpha + \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i \quad \text{for all } x \in E,$$

which gives

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) \ge \lim_{n\to\infty}\left[\left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)^*\!\left(\sum_{j=1}^m \mu_n^j\xi_n^j\right) + \sum_{j=1}^m \mu_n^j\varepsilon_n^j\right] + \Big\langle\sum_{i=1}^p \xi^i, x\Big\rangle + \alpha + \varepsilon' - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i \ge \lim_{n\to\infty}\left[\left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)^*\!\left(\sum_{j=1}^m \mu_n^j\xi_n^j\right) + \sum_{j=1}^m \mu_n^j\varepsilon_n^j\right] + \Big\langle\sum_{i=1}^p \xi^i, x\Big\rangle + \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i, \tag{9}$$

as $\varepsilon' \ge 0$.

Now, since $\{\varepsilon_n^j\}, \{\mu_n^j\} \subset \mathbb{R}_+$, $j = 1, 2, \ldots, m$,

$$\left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)^*\!\left(\sum_{j=1}^m \mu_n^j\xi_n^j\right) + \sum_{j=1}^m \mu_n^j\varepsilon_n^j \ge \left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)^*\!\left(\sum_{j=1}^m \mu_n^j\xi_n^j\right) \quad \text{for all } n \in \mathbb{N},$$

and therefore

$$\lim_{n\to\infty}\left[\left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)^*\!\left(\sum_{j=1}^m \mu_n^j\xi_n^j\right) + \sum_{j=1}^m \mu_n^j\varepsilon_n^j\right] \ge \limsup_{n\to\infty}\left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)^*\!\left(\sum_{j=1}^m \mu_n^j\xi_n^j\right). \tag{10}$$

(9) and (10) give

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) \ge \limsup_{n\to\infty}\left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)^*\!\left(\sum_{j=1}^m \mu_n^j\xi_n^j\right) + \Big\langle\sum_{i=1}^p \xi^i, x\Big\rangle + \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i. \tag{11}$$

Now,

$$\limsup_{n\to\infty}\left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)^*\!\left(\sum_{j=1}^m \mu_n^j\xi_n^j\right) = \limsup_{n\to\infty}\ \sup_{x \in \operatorname{dom}\left(\sum_{j=1}^m \mu_n^j h_j(\cdot)\right)}\left[\Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, x\Big\rangle - \sum_{j=1}^m \mu_n^j h_j(x)\right] \ge \limsup_{n\to\infty}\left[\Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, x\Big\rangle - \sum_{j=1}^m \mu_n^j h_j(x)\right].$$

Using the above in (11), we get

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) \ge \limsup_{n\to\infty}\left[\Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, x\Big\rangle - \sum_{j=1}^m \mu_n^j h_j(x)\right] + \Big\langle\sum_{i=1}^p \xi^i, x\Big\rangle + \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i. \tag{12}$$

Now,

$$\limsup_{n\to\infty}\left[-\sum_{j=1}^m \mu_n^j h_j(x)\right] = \limsup_{n\to\infty}\left[\Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, x\Big\rangle - \sum_{j=1}^m \mu_n^j h_j(x) - \Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, x\Big\rangle\right] \le \limsup_{n\to\infty}\left[\Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, x\Big\rangle - \sum_{j=1}^m \mu_n^j h_j(x)\right] + \limsup_{n\to\infty}\left[-\Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, x\Big\rangle\right],$$

which implies

$$\limsup_{n\to\infty}\left[\Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, x\Big\rangle - \sum_{j=1}^m \mu_n^j h_j(x)\right] \ge \limsup_{n\to\infty}\left[-\sum_{j=1}^m \mu_n^j h_j(x)\right] + \liminf_{n\to\infty}\Big\langle\sum_{j=1}^m \mu_n^j\xi_n^j, x\Big\rangle.$$

Using the above in (12) and then using (1), we get

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) \ge \limsup_{n\to\infty}\left[-\sum_{j=1}^m \mu_n^j h_j(x)\right] + \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i,$$

which implies

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) + \liminf_{n\to\infty}\sum_{j=1}^m \mu_n^j h_j(x) \ge \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i.$$

Hence $\liminf_{n\to\infty} L(x, \mu_n) \ge \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$.

Thus,

$$\max_{\mu_n \in \mathbb{R}^m_+}\ \min_{x \in E}\ \liminf_{n\to\infty} L(x, \mu_n) \ge \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i. \tag{13}$$

It remains to show that

$$\max_{\mu_n \in \mathbb{R}^m_+}\ \min_{x \in E}\ \liminf_{n\to\infty} L(x, \mu_n) \le \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i.$$

Since every $x \in E$ is feasible for (P) and $\{\mu_n^j\} \subset \mathbb{R}_+$, $j = 1, 2, \ldots, m$, we have

$$\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) \ge \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x) + \sum_{j=1}^m \mu_n^j h_j(x) = L(x, \mu_n) \quad \text{for all } x \in E,$$

which implies

$$\min_{x \in E}\ \liminf_{n\to\infty} L(x, \mu_n) \le \inf_{x \in E}\sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(x).$$

Since the optimal value of the scalarized problem is $\alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$, we get

$$\max_{\mu_n \in \mathbb{R}^m_+}\ \min_{x \in E}\ \liminf_{n\to\infty} L(x, \mu_n) \le \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i. \tag{14}$$

(13) and (14) imply the required result. $\square$

Corollary 4.1

Let $\bar{x}$ be an $\varepsilon$-weak efficient solution of (P) with optimal value $\alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i$, where $\alpha = \sum_{i=1}^p \lambda_i(f_i(\cdot) - \bar{v}_i g_i(\cdot))(\bar{x})$. If, in the above theorem, we impose a constraint qualification, namely that the set $\bigcup_{\mu_j \ge 0}\sum_{j=1}^m \mu_j\operatorname{epi} h_j^*$ is closed, then

$$\max_{\mu \in \mathbb{R}^m_+}\ \min_{x \in E}\ L(x, \mu) = \alpha - \sum_{i=1}^p \lambda_i g_i(\bar{x})\varepsilon_i.$$

Application [26]

One of the most important and common applications of the multiobjective fractional programming problem is the transportation problem. The multiobjective linear fractional transportation problem involves several criteria, such as the maximization of transport profitability ratios like profit/cost or profit/time, and its two components are sources and destinations. The problem is as follows:

Let there be $m$ sources and $n$ destinations. At source $i$, let $a_i$, $i = 1, 2, \ldots, m$, be the amount of a homogeneous product which is transported to the $n$ destinations to satisfy the demand for $b_j$, $j = 1, 2, \ldots, n$, units of the product there. Let $x_{ij}$ be the number of units shipped from source $i$ to destination $j$. For the objective function $Z_q(x)$, $q = 1, 2, \ldots, Q$, let $p^q = [p_{ij}^q]_{m\times n}$ be the profit matrix, where $p_{ij}^q$ is the profit gained from shipment from $i$ to $j$, let $d^q = [d_{ij}^q]_{m\times n}$ be the cost matrix, where $d_{ij}^q$ is the cost per unit of shipment from $i$ to $j$, and let $p_0^q$, $d_0^q$ be scalars determining some constant profit and cost, respectively. The problem is

$$\text{maximize } Z_q(x) = \frac{p^q(x)}{d^q(x)} = \frac{\sum_{i=1}^m\sum_{j=1}^n p_{ij}^q x_{ij} + p_0^q}{\sum_{i=1}^m\sum_{j=1}^n d_{ij}^q x_{ij} + d_0^q}, \quad q = 1, 2, \ldots, Q,$$

subject to

$$\sum_{j=1}^n x_{ij} \le a_i, \quad i = 1, 2, \ldots, m, \tag{5.1}$$
$$\sum_{i=1}^m x_{ij} \ge b_j, \quad j = 1, 2, \ldots, n, \tag{5.2}$$
$$x_{ij} \ge 0, \quad i = 1, 2, \ldots, m,\ j = 1, 2, \ldots, n, \tag{5.3}$$

where $Z(x) = (Z_1(x), Z_2(x), \ldots, Z_Q(x))$ is the vector of objective functions.

We suppose that $d^q(x) > 0$, $q = 1, 2, \ldots, Q$, for all $x = (x_{ij}) \in S$, where $S \ne \emptyset$ denotes the convex and compact feasible set defined by (5.1), (5.2), (5.3), and that $p^q(x)$ and $d^q(x)$ are continuous on $S$.

Further, $a_i > 0$ for all $i$, $b_j > 0$ for all $j$, $p_{ij}^q > 0$, $d_{ij}^q > 0$, $p_0^q > 0$, $d_0^q > 0$ for all $i, j$, and

$$\sum_{i=1}^m a_i \ge \sum_{j=1}^n b_j.$$
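For a single ratio objective ($Q = 1$), a tiny instance of (5.1)–(5.3) can be solved by exhaustive search over integer shipments. The data below are invented for illustration and are not from the paper; real instances would instead use a fractional-programming technique such as the parametric reformulation $(\mathrm{P}_1)$ above.

```python
from itertools import product

# Hypothetical 2x2 instance: maximize the ratio objective
# Z(x) = (sum_ij p_ij x_ij + p0) / (sum_ij d_ij x_ij + d0)
# over integer shipments satisfying (5.1)-(5.3), by exhaustive search.
a = [3, 2]                # supplies a_i
b = [2, 2]                # demands b_j
p = [[4, 2], [3, 5]]      # unit profits p_ij
d = [[2, 1], [1, 2]]      # unit costs d_ij
p0, d0 = 1, 1             # constant profit and cost

best, best_x = None, None
for cells in product(range(a[0] + 1), range(a[0] + 1),
                     range(a[1] + 1), range(a[1] + 1)):
    x = [[cells[0], cells[1]], [cells[2], cells[3]]]
    if any(sum(x[i]) > a[i] for i in range(2)):          # (5.1) supply
        continue
    if any(x[0][j] + x[1][j] < b[j] for j in range(2)):  # (5.2) demand
        continue
    num = sum(p[i][j] * x[i][j] for i in range(2) for j in range(2)) + p0
    den = sum(d[i][j] * x[i][j] for i in range(2) for j in range(2)) + d0
    if best is None or num / den > best:
        best, best_x = num / den, x

print(best_x, best)   # best ratio 11/5 = 2.2 at x = [[0, 2], [2, 0]]
```

Note that the maximizer routes everything through the two cheapest-cost cells rather than the highest-profit ones; with a ratio objective, profit and cost trade off, which is exactly what distinguishes fractional from linear transportation models.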

Conclusion

Constraint qualifications are required to obtain necessary optimality conditions, but sometimes these constraint qualifications are very difficult to verify. In this paper, we have developed sequential optimality conditions, in the absence of any constraint qualification, for the multiobjective fractional programming problem (P) via scalarization, using the epigraph of the conjugate function in terms of the ε-subdifferential computed at an ε-weak efficient solution. We have also derived sequential duality results for problem (P).