Introduction

Duality plays a very important role in optimization problems, and many authors have contributed to the development of duality theory for multiple objective programming problems. There has been tremendous development in the area of multiobjective optimization during the past years. For the most recent developments in this area, one can refer to the book by Ansari and Yao [1], in which several aspects of multiobjective optimization, from the very beginning to the most recent, are discussed in the form of various research papers. An important class of such problems, namely, multiple objective fractional programming problems, is of great interest in many areas such as transportation, production, information theory, and numerical analysis. The papers by Schaible [2–4] review the early work done in fractional programming. For some recent work on duality in fractional programming, one can see the studies of Lyall et al. [5], Liang [6], etc. Duality in generalized fractional programming has been studied by Barros et al. [7], Liu [8], etc. Weir [9] studied a multiobjective fractional programming problem with the same denominators. Since then, a great deal of research in this area, under assumptions of convexity and generalized convexity, has been carried out by researchers such as Singh [10], Egudo [11], Singh and Hanson [12], Weir [9, 13], Suneja and Gupta [14], Suneja and Lalitha [15], etc. Duality for the multiobjective fractional programming problem under various assumptions has also been studied by authors such as Liu [16], Kim et al. [17], Kim [18], Nobakhtian [19], Mishra and Upadhyay [20], etc.

The concept of convexifactor was first introduced by Demyanov [21] as a generalization of the notion of upper convex and lower concave approximations. In [21], the convexifactor was defined as a convex and compact set and was termed a convexificator. However, Jeyakumar and Luc [22] in their further study suggested that one can use a closed, nonconvex set instead of a compact and convex one to define a convexificator. Dutta and Chandra [23] called these sets convexifactors. They have been further studied by Dutta and Chandra [24], Li and Zhang [25], Gadhi [26], etc. Dutta and Chandra [24] introduced ∂*-pseudoconvex functions by using this concept of convexifactor. The importance of convexifactors lies in the fact that they are useful even when they are unbounded or nonconvex, and the use of a nonconvex set to define convexifactors has the advantage that in many situations one may have a convexifactor consisting of a finite number of points, which is more amenable to various applications. Further, for a locally Lipschitz function, one can have convexifactors smaller than the Clarke subdifferential, the Michel-Penot subdifferential, etc., so optimality conditions and duality results obtained in terms of convexifactors are sharper. In multiobjective programming problems, generalized convexity plays an important role in deriving duality results. Gadhi [26] proved necessary and sufficient optimality conditions for a multiobjective fractional programming problem in terms of convexifactors. In this paper, we introduce the notion of ∂*-quasiconvex functions. We associate Mond-Weir-type and Schaible-type duals with the multiobjective fractional programming problem and derive duality results under the assumptions of ∂*-pseudoconvexity and ∂*-quasiconvexity. Our results in terms of convexifactors recast well-known results in multiobjective fractional programming and hence present more general results.

The paper is organized as follows: In the ‘Preliminaries’ section, we introduce the concept of ∂*-quasiconvex functions in terms of convexifactors. Kuhn-Tucker-type optimality conditions for multiobjective fractional programming problem have been given in the ‘Optimality conditions’ section. Finally, in the ‘Duality’ section, we prove duality results by associating Mond-Weir-type and Schaible-type duals to the problem.

Preliminaries

Throughout the paper, we are concerned with finite dimensional spaces.

Let f: Rn → R ∪ {+∞} be an extended real valued function.

(f)d−(x, v) = lim inf_{t↓0} [f(x + tv) − f(x)]/t,  (f)d+(x, v) = lim sup_{t↓0} [f(x + tv) − f(x)]/t

denote, respectively, the lower and upper Dini directional derivatives of f at x in direction v.
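For a nonsmooth function these two one-sided limits need not coincide. As a numerical illustration (our sketch, not part of the original text), the following code approximates both Dini derivatives of f(x) = x sin(1/x), f(0) = 0, at x = 0 in direction v = 1, where the difference quotient sin(1/t) oscillates between −1 and 1:

```python
import math

def dini_lower(f, x, v, ts):
    # Approximate (f)d-(x, v) = liminf_{t↓0} (f(x + t v) - f(x)) / t
    return min((f(x + t * v) - f(x)) / t for t in ts)

def dini_upper(f, x, v, ts):
    # Approximate (f)d+(x, v) = limsup_{t↓0} (f(x + t v) - f(x)) / t
    return max((f(x + t * v) - f(x)) / t for t in ts)

# f(x) = x sin(1/x), f(0) = 0: at x = 0 the difference quotient in
# direction v = 1 is sin(1/t), so the lower and upper Dini derivatives
# are -1 and +1, respectively.
f = lambda x: x * math.sin(1.0 / x) if x != 0 else 0.0

# Sample t values that hit the extreme phases of sin(1/t).
ts = [1.0 / (math.pi / 2 + 2 * math.pi * k) for k in range(1, 200)]
ts += [1.0 / (3 * math.pi / 2 + 2 * math.pi * k) for k in range(1, 200)]

lo = dini_lower(f, 0.0, 1.0, ts)   # approximately -1
hi = dini_upper(f, 0.0, 1.0, ts)   # approximately +1
```

The gap between lo and hi is exactly why a set of generalized gradients, rather than a single gradient, is needed at such points.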

We begin with the definitions of convexifactors given by Dutta and Chandra [23].

Definition 2.1. The function f: Rn → R ∪ {+∞} is said to have an upper convexifactor ∂uf(x) at x if ∂uf(x) ⊂ Rn is closed, and for each v ∈ Rn,

(f)d−(x, v) ≤ sup_{x* ∈ ∂uf(x)} ⟨x*, v⟩.

Definition 2.2. The function f: Rn → R ∪ {+∞} is said to have a lower convexifactor ∂lf(x) at x if ∂lf(x) ⊂ Rn is closed, and for each v ∈ Rn,

(f)d+(x, v) ≥ inf_{x* ∈ ∂lf(x)} ⟨x*, v⟩.

Definition 2.3. The function f: Rn → R ∪ {+∞} is said to have a convexifactor ∂*f(x) at x if ∂*f(x) is both an upper and a lower convexifactor of f at x.

Definition 2.4. The function f: Rn → R ∪ {+∞} is said to have an upper regular convexifactor ∂uf(x) at x if ∂uf(x) is an upper convexifactor of f at x, and for each v ∈ Rn,

(f)d+(x, v) = sup_{x* ∈ ∂uf(x)} ⟨x*, v⟩.

Definition 2.5. The function f: Rn → R ∪ {+∞} is said to have a lower regular convexifactor ∂lf(x) at x if ∂lf(x) is a lower convexifactor of f at x, and for each v ∈ Rn,

(f)d−(x, v) = inf_{x* ∈ ∂lf(x)} ⟨x*, v⟩.

Convexifactors are not necessarily convex or compact. These relaxations allow applications to a large class of nonsmooth functions.
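As a small numerical illustration of Definitions 2.1 and 2.2 (our sketch, not from the paper), take f(x) = |x| at x = 0, where both Dini derivatives in direction v equal |v|; the two-point set {−1, 1} then satisfies both convexifactor inequalities, which is an example of a finite, nonconvex convexifactor:

```python
# Candidate convexifactor of f(x) = |x| at x = 0 (chosen for
# illustration): the two-point, nonconvex set C = {-1, 1}.
C = [-1.0, 1.0]

# At x = 0 both Dini derivatives of |x| in direction v equal |v|.
for v in [k / 10.0 for k in range(-30, 31)]:
    dini = abs(v)
    # Definition 2.1 (upper convexifactor): (f)d-(0, v) <= sup <x*, v>
    assert dini <= max(c * v for c in C) + 1e-12
    # Definition 2.2 (lower convexifactor): (f)d+(0, v) >= inf <x*, v>
    assert dini >= min(c * v for c in C) - 1e-12
print("C = {-1, 1} satisfies both convexifactor inequalities for |x| at 0")
```

Note that neither singleton {−1} nor {1} would do: for instance {1} fails the upper inequality at v = −1, since |−1| = 1 > ⟨1, −1⟩ = −1 would need the supremum over a richer set.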

The important question arises regarding the existence of convexifactors or regular convexifactors at a given point for a general real valued function. In this regard, we present the following theorem from Dutta and Chandra [24].

Theorem 2.6. Let f: Rn → R ∪ {+∞} and let x ∈ Rn be a given point where f(x) is finite. Moreover, assume that the lower Dini directional derivative (f)d−(x, v) is bounded above in v. Then, there exists a compact upper convexifactor of f at x. If the upper Dini directional derivative (f)d+(x, v) is bounded below in v, then there exists a compact lower convexifactor of f at x.

Now, we consider the following multiobjective fractional programming problem:

(P) Minimize φ(x) = (f1(x)/g1(x), …, fp(x)/gp(x))

subject to

hj(x) ≤ 0, j = 1, 2,…, m.

Let E = {x ∈ Rn : hj(x) ≤ 0, j = 1, 2,…, m} denote the feasible set of problem (P).

Here, fi, gi, i = 1, 2,…, p and hj, j = 1, 2,…, m are continuous real valued functions defined on Rn such that fi(x) ≥ 0 and gi(x) > 0, i = 1, 2,…, p for all x ∈ E, and minimization means finding weak efficient solutions in the following sense:

Definition 2.7. x̅ ∈ E is a weak efficient solution of (P) if there does not exist any feasible solution xE such that

fi(x)/gi(x) < fi(x̄)/gi(x̄), i = 1, 2,…, p.

Definition 2.8. x̅ ∈ E is a local weak efficient solution of (P) if there exists a neighborhood U of x̅ such that for any feasible solution xUE, the following does not hold:

fi(x)/gi(x) < fi(x̄)/gi(x̄), i = 1, 2,…, p.

We give below the definition of ∂*-pseudoconvex function given by Dutta and Chandra [24].

Definition 2.9. A function f: Rn → R is said to be ∂*-pseudoconvex at x̄ ∈ Rn if for x ∈ Rn,

f(x) < f(x̄) ⇒ ⟨ξ, x − x̄⟩ < 0 for all ξ ∈ ∂*f(x̄),

where ∂*f(x̅) is a convexifactor of f at x̅.

We now introduce ∂*-quasiconvex function.

Definition 2.10. A function f: Rn → R is said to be ∂*-quasiconvex at x̄ ∈ Rn if for x ∈ Rn,

f(x) ≤ f(x̄) ⇒ ⟨ξ, x − x̄⟩ ≤ 0 for all ξ ∈ ∂*f(x̄).

Remark 2.11. (i) (Dutta and Chandra [24]) If f is a differentiable function and ∂*f(x̄) is an upper regular convexifactor, then ∂*f(x̄) = {∇f(x̄)}, and the above definition reduces to the definition of a quasiconvex function.

(ii) If f is a locally Lipschitz function and ∂*f(x̄) = ∂cf(x̄), where ∂cf(x̄) is the Clarke generalized gradient, then the above definition reduces to the definition of a ∂c-quasiconvex function defined by Bector et al. [27].

Remark 2.12. A ∂*-quasiconvex function is not necessarily ∂*-pseudoconvex, as can be seen from the following example:

Example 2.13. Let f: R² → R be a function defined by

f(x, y) = x²y if x ≥ 0, y > 0;  x if x ≥ 0, y = 0;  −xy if x ≥ 0, y < 0;  xy² if x < 0, y ≥ 0;  x²y if x < 0, y < 0,

with ∂*f(0, 0) = {(x*, 0) : x* ≥ 0}.

f is ∂*-quasiconvex at (x̄, ȳ) = (0, 0), but f is not ∂*-pseudoconvex at (x̄, ȳ) = (0, 0), because for (x, y) = (−1, 2) and ξ = (0, 0) ∈ ∂*f(0, 0),

f(x, y) < f(0, 0) but ⟨ξ, (x, y) − (0, 0)⟩ = 0.
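Both claims can be checked numerically. The sketch below uses the piecewise formula for f as we read it (the printed formula is garbled, so the signs and the ≥/< splits are our reconstruction, chosen to be consistent with the stated convexifactor and counterexample):

```python
# Example 2.13, as reconstructed (an assumption, not verbatim from the paper).
def f(x, y):
    if x >= 0 and y > 0:  return x * x * y
    if x >= 0 and y == 0: return x
    if x >= 0 and y < 0:  return -x * y
    if x < 0 and y >= 0:  return x * y * y
    return x * x * y                      # x < 0, y < 0

# Pseudoconvexity fails: f(-1, 2) < f(0, 0), yet for xi = (0, 0) in the
# convexifactor, <xi, (x, y) - (0, 0)> = 0, which is not < 0.
assert f(-1, 2) < f(0, 0)
xi = (0.0, 0.0)
assert xi[0] * (-1) + xi[1] * 2 == 0.0

# Quasiconvexity holds on a sample grid: whenever f(x, y) <= f(0, 0) = 0,
# we must have x <= 0, so <(x*, 0), (x, y)> = x* x <= 0 for every x* >= 0.
grid = [i / 4.0 for i in range(-8, 9)]
for x in grid:
    for y in grid:
        if f(x, y) <= f(0, 0):
            assert x <= 0
```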

Remark 2.14. It may be noted that every ∂c-pseudoconvex function is ∂c-quasiconvex when f is locally Lipschitz, as can be seen from Remark 3.1 in the study of Rezaie and Zafarani [28] by taking η(x, y) = x − y. However, the next example shows that a ∂*-pseudoconvex function need not be ∂*-quasiconvex.

Example 2.15. Let f: R² → R be a function defined by

f(x, y) = −x − y if x ≥ 0, y ≥ 0;  0 if x > 0, y < 0;  −x + y if x < 0, y ≥ 0;  −x − y if x ≤ 0, y < 0,

with ∂*f(0, 0) = {(x*, y*) : x* < 0, y* < 0}.

f is ∂*-pseudoconvex at (x̄, ȳ) = (0, 0), but f is not ∂*-quasiconvex at (x̄, ȳ) = (0, 0), because for (x, y) = (1, −1) and ξ = (−1, −2) ∈ ∂*f(0, 0),

f(x, y) = f(0, 0) but ⟨ξ, (x, y) − (0, 0)⟩ > 0.
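Again the claims can be checked numerically; the piecewise formula below is our reconstruction of the garbled printed one (an assumption, chosen so that f < 0 occurs only on the first quadrant, where the stated convexifactor makes the pseudoconvexity implication hold):

```python
# Example 2.15, as reconstructed (an assumption, not verbatim from the paper).
def f(x, y):
    if x >= 0 and y >= 0: return -x - y
    if x > 0 and y < 0:   return 0.0
    if x < 0 and y >= 0:  return -x + y
    return -x - y                        # x <= 0, y < 0

# Quasiconvexity fails: f(1, -1) = f(0, 0) = 0, yet for xi = (-1, -2)
# in the convexifactor, <xi, (1, -1)> = -1 + 2 = 1 > 0.
assert f(1, -1) == f(0, 0) == 0.0
assert (-1) * 1 + (-2) * (-1) > 0

# Pseudoconvexity holds on a sample grid: f(x, y) < 0 only for
# x >= 0, y >= 0 with (x, y) != (0, 0), where every xi with
# x* < 0, y* < 0 gives <xi, (x, y)> < 0.
grid = [i / 4.0 for i in range(-8, 9)]
for x in grid:
    for y in grid:
        if f(x, y) < f(0, 0):
            for xs, ys in [(-1.0, -1.0), (-0.5, -3.0), (-2.0, -0.25)]:
                assert xs * x + ys * y < 0
```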

The following result is given by Li and Zhang [25].

Lemma 2.16. Let ∂*f(x) be a convexifactor of f at x. Then, for every λ ∈ R, λ∂*f(x) is a convexifactor of λf at x.

We now give the following result given by Jeyakumar and Luc [22].

Remark 2.17. [22] Assume that the functions f, g: Rn → R admit upper convexifactors ∂uf(x) and ∂ug(x) at x, respectively, and that one of the convexifactors is upper regular at x. Then, ∂uf(x) + ∂ug(x) is an upper convexifactor of f + g at x.

Similarly, if one of the convexifactors is lower regular at x, then ∂lf(x) + ∂lg(x) is a lower convexifactor of f + g at x.

Optimality conditions

Gadhi [26] gave the following necessary optimality conditions for (P).

Theorem 3.1. Let x̄ ∈ E be a local weak efficient solution of (P). Assume that fi, gi, i = 1, 2,…, p and hj, j = 1, 2,…, m are continuous and admit bounded convexifactors ∂*fi(x̄), ∂*gi(x̄), i = 1, 2,…, p and ∂*hj(x̄), j = 1, 2,…, m at x̄, respectively, and that x ↦ ∂*fi(x), x ↦ ∂*gi(x), i = 1, 2,…, p and x ↦ ∂*hj(x), j = 1, 2,…, m are upper semicontinuous at x̄. Then, there exist vectors α* = (α1*, α2*,…, αp*) ∈ R+p and μ* = (μ1*, μ2*,…, μm*) ∈ R+m (not both zero) such that

0 ∈ ∑_{i=1}^p αi* [∂*fi(x̄) − φi(x̄) ∂*gi(x̄)] + ∑_{j=1}^m μj* ∂*hj(x̄),
(1)
μj* hj(x̄) = 0, j = 1, 2,…, m,
(2)
μj* ≥ 0, hj(x̄) ≤ 0, j = 1, 2,…, m,
(3)

where

φi(x) = fi(x)/gi(x), i = 1, 2,…, p.

We now deduce the Kuhn-Tucker-type necessary optimality conditions for (P) under the Slater-type weak constraint qualification, which is defined as follows on the lines of Mangasarian [29].

Definition 3.2. The function h is said to satisfy the Slater-type weak constraint qualification at x̅ ∈ E if h J is ∂*-pseudoconvex at x̅, and there exists an xo ∈ Rn such that h J (xo) < 0 where J = {j|h j (x̅) = 0}.

Remark 3.3. If h is a differentiable function at x̅ and admits an upper regular convexifactor ∂*h(x̅) at x̅, then the above Slater-type weak constraint qualification reduces to Slater's weak constraint qualification given by Mangasarian [29].

Theorem 3.4. Let x̄ ∈ E be a weak efficient solution of (P). Suppose that the hypotheses of Theorem 3.1 hold. Then, there exist vectors α* ∈ R+p and μ* ∈ R+m (not both zero) such that (1), (2), and (3) hold. If the Slater-type weak constraint qualification holds at x̄, then α* ≠ 0.

Proof. Suppose, on the contrary, that α* = 0; then μ* ≠ 0.

Now using (1), we get that there exist ζj ∈ ∂*hj(x̄), j = 1, 2,…, m such that

∑_{j=1}^m μj* ζj = 0.
(4)

Since h satisfies the Slater-type weak constraint qualification at x̅, therefore h J is ∂*-pseudoconvex at x̅, and there exists an xo ∈ Rn such that

hj(xo) < 0, j ∈ J,

where

J = {j | hj(x̄) = 0}.

Thus, hj(xo) < hj(x̄), j ∈ J.

Using the ∂*-pseudoconvexity of hj, j ∈ J, we get

⟨ζj, xo − x̄⟩ < 0 for all ζj ∈ ∂*hj(x̄), j ∈ J.

Now, (2) gives μj* = 0 for all j ∉ J. Since μ* ≠ 0, we have μj* > 0 for some j ∈ J, so that

∑_{j=1}^m μj* ⟨ζj, xo − x̄⟩ < 0,

which is a contradiction to (4).

Hence, α* ≠ 0.

Duality

Duality plays a crucial role in mathematical programming as sometimes solving a dual is easier than solving a primal. Wolfe [30] associated a dual problem with a primal nonlinear programming problem and proved various duality theorems under the assumptions of convexity. Since certain duality theorems may fail to hold for the Wolfe model if the objective and/or the constraint functions are generalized convex, Mond and Weir [31] presented a new model for studying duality which allowed the weakening of the convexity requirements on the objective and the constraint functions. In this section, we introduce two types of duals, Mond-Weir-type and Schaible-type, in terms of convexifactors; these are more general than the duals existing in the literature.

We associate the following Mond-Weir-type dual with problem (P).

(D1) Maximize φ(u) = (f1(u)/g1(u), f2(u)/g2(u),…, fp(u)/gp(u))

subject to

0 ∈ ∑_{i=1}^p λi (∂*fi(u) − vi ∂*gi(u)) + ∑_{j=1}^m γj ∂*hj(u),  γj hj(u) ≥ 0, j = 1, 2,…, m,
(5)

where 0 ≠ λ ∈ R+p, γ ∈ R+m, and vi = φi(u) = fi(u)/gi(u), i = 1, 2,…, p. Here, maximizing means finding weak efficient solutions in the following sense:

A feasible solution (u*, λ*, γ*, v*) of the dual (D1) is said to be a weak efficient solution of (D1) if there does not exist any feasible solution (u, λ, γ, v) of (D1) such that

fi(u)/gi(u) > fi(u*)/gi(u*), i = 1, 2,…, p.

We shall now prove the weak duality theorem.

Theorem 4.1. (Weak Duality). Let x be feasible for (P) and (u, λ, γ, v) be feasible for (D1). Suppose that ∂*fi(u), i = 1, 2,…, p is an upper regular convexifactor of fi(.), i = 1, 2,…, p at u and ∂*gi(u), i = 1, 2,…, p is a lower regular convexifactor of gi(.), i = 1, 2,…, p at u. If fi(.) − vi gi(.), i = 1, 2,…, p is ∂*-pseudoconvex at u and γj hj(.), j = 1, 2,…, m is ∂*-quasiconvex at u, then

φ(x) ≮ φ(u).

Proof. Since ∂*fi(u), i = 1, 2,…, p is an upper regular convexifactor of fi(.) at u and ∂*gi(u), i = 1, 2,…, p is a lower regular convexifactor of gi(.) at u, using Remark 2.17 and Lemma 2.16, we have that ∂*fi(u) − vi ∂*gi(u) is a convexifactor of fi(.) − vi gi(.), i = 1, 2,…, p at u.

On the contrary, suppose that ϕ(x) < ϕ(u).

Then,

fi(x) − vi gi(x) < 0, i = 1, 2,…, p,
(6)
where vi = φi(u) = fi(u)/gi(u), i = 1, 2,…, p.

Since (u, λ, γ, v) is feasible for (D1), therefore, there exist ξ i ∈ ∂*f i (u) − v i *g i (u), i = 1, 2,…, p, ζ j ∈ ∂*h j (u), j = 1, 2,…, m such that

∑_{i=1}^p λi ξi + ∑_{j=1}^m γj ζj = 0.
(7)

Using (6), the feasibility of x for (P), and the feasibility of (u, λ, γ, v) for (D1), we get

fi(x) − vi gi(x) < 0 = fi(u) − vi gi(u), i = 1, 2,…, p
(8)

and

γj hj(x) ≤ 0 ≤ γj hj(u), j = 1, 2,…, m.
(9)

Since f i (.) − v i g i (.), i = 1, 2,…, p is ∂*-pseudoconvex at u, we have from (8),

⟨ξi, x − u⟩ < 0 for all ξi ∈ ∂*fi(u) − vi ∂*gi(u), i = 1, 2,…, p.

As 0 ≠ λ ∈ R+p, we get

⟨∑_{i=1}^p λi ξi, x − u⟩ < 0 for all ξi ∈ ∂*fi(u) − vi ∂*gi(u).
(10)

Now, the ∂*-quasiconvexity of γ j h j , j = 1, 2,…, m and (9) gives us

⟨ζj, x − u⟩ ≤ 0 for all ζj ∈ ∂*(γj hj)(u), j = 1, 2,…, m,

which on using Lemma 2.16 implies that

⟨γj ζj, x − u⟩ ≤ 0 for all ζj ∈ ∂*hj(u), j = 1, 2,…, m.

As γ j ≥ 0, j = 1, 2,…, m, we have

∑_{j=1}^m ⟨γj ζj, x − u⟩ ≤ 0 for all ζj ∈ ∂*hj(u).
(11)

Adding (10) and (11), we get

⟨∑_{i=1}^p λi ξi + ∑_{j=1}^m γj ζj, x − u⟩ < 0,

which is a contradiction to (7).

In the next theorem, we shall prove the strong duality result.

Theorem 4.2. (Strong Duality). Let x* be a weak efficient solution of (P). Assume that the hypotheses of Theorem 3.4 hold. Then, there exists (λ*, γ*, v*) ∈ Rp × Rm × Rp such that (x*, λ*, γ*, v*) is feasible for the dual (D1). If for each feasible x for (P) and (u, λ, γ, v) for (D1) the hypotheses of Theorem 4.1 hold, then (x*, λ*, γ*, v*) is a weak efficient solution of (D1).

Proof. Since x* is a weak efficient solution of (P) and all the assumptions of Theorem 3.4 are satisfied, there exist vectors 0 ≠ λ* ∈ R+p and γ* ∈ R+m such that (1), (2), and (3) hold.

That is,

0 ∈ ∑_{i=1}^p λi* [∂*fi(x*) − φi(x*) ∂*gi(x*)] + ∑_{j=1}^m γj* ∂*hj(x*),
γj* hj(x*) = 0, j = 1, 2,…, m,
γj* ≥ 0, hj(x*) ≤ 0, j = 1, 2,…, m,
φi(x*) = fi(x*)/gi(x*), i = 1, 2,…, p,

which implies that (x*, λ*, γ*, v*) is feasible for the dual (D1), where vi* = φi(x*) = fi(x*)/gi(x*), i = 1, 2,…, p. Let, if possible, (x*, λ*, γ*, v*) not be a weak efficient solution of (D1). Then, there exists (u, λ, γ, v) feasible for the dual such that φ(u) > φ(x*).

However, this is a contradiction to Theorem 4.1 as x* is feasible for (P) and (u, λ, γ, v) is feasible for (D1).

Hence, (x*, λ*, γ*, v*) is a weak efficient solution of (D1).

We now provide an example illustrating Theorem 4.1.

Example 4.3. Consider the problem

(P) Minimize (f1(x)/g1(x), f2(x)/g2(x))

subject to h1(x) ≤ 0, h2(x) ≤ 0,

where fi, gi, hj: R → R, i = 1, 2, j = 1, 2 are defined by

f1(x) = 1 + x/10 if x ≥ 0, and 1 + x² if x < 0;
f2(x) = 2 + x² + 2x if x ≥ 0, and 2 + x if x < 0;
g1(x) = 1 + 3x if x ≥ 0, and 1 + 2x + x² if x < 0;
g2(x) = 2 + x² if x ≥ 0, and 2 + x/2 if x < 0;
h1(x) = −x if x ≥ 0, and x² if x < 0;
h2(x) = 2 − x if x ≥ 0, and 2 + x² if x < 0.

The set of feasible solutions of (P) is E = [2, ∞[, and its dual is given by

(D1) Maximize (f1(u)/g1(u), f2(u)/g2(u))

subject to

0 ∈ ∑_{i=1}^2 λi (∂*fi(u) − vi ∂*gi(u)) + ∑_{j=1}^2 γj ∂*hj(u),  γj hj(u) ≥ 0, j = 1, 2,

where 0 ≠ λ ∈ R+2, γ ∈ R+2, and vi = fi(u)/gi(u), i = 1, 2. The point (u, λ1, λ2, γ1, γ2) = (0, 1/4, 1, 2, 1/4) is feasible for the dual (D1). Here,

∂*f1(0) = {0, 1/10}, ∂*f2(0) = {1, 2}, ∂*g1(0) = {2, 3}, ∂*g2(0) = {0, 1/2}, ∂*h1(0) = {−1, 0}, ∂*h2(0) = {−2, 0},
∂*f1(0) − v1 ∂*g1(0) = {−2, −3, −19/10, −29/10}, ∂*f2(0) − v2 ∂*g2(0) = {1, 1/2, 2, 3/2},

where

v1 = f1(u)/g1(u) = 1, v2 = f2(u)/g2(u) = 1.

f1(.) − v1g1(.) and f2(.) − v2g2(.) are ∂*-pseudoconvex at u = 0.

γ1h1(.) and γ2h2(.) are ∂*-quasiconvex at u = 0.

We can see that for the feasible point x = 2 for (P) and (u, λ1, λ2, γ1, γ2) = (0, 1/4, 1, 2, 1/4) for (D1),

(f1(x)/g1(x), f2(x)/g2(x)) = (6/35, 5/3) ≮ (1, 1) = (f1(u)/g1(u), f2(u)/g2(u)).

Hence, Theorem 4.1 is illustrated.
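Under our reading of the garbled piecewise data (an assumption: f1(x) = 1 + x/10, g1(x) = 1 + 3x, f2(x) = 2 + x² + 2x, g2(x) = 2 + x² for x ≥ 0), the numbers quoted above can be checked exactly with rational arithmetic:

```python
from fractions import Fraction as F

# Example 4.3 on x >= 0, as we read the (garbled) printed pieces;
# this reconstruction is an assumption, not verbatim from the paper.
def f1(x): return F(1) + F(x, 10)
def g1(x): return F(1) + 3 * x
def f2(x): return F(2) + x * x + 2 * x
def g2(x): return F(2) + x * x

# Primal objective at the feasible point x = 2 of (P):
phi_x = (f1(2) / g1(2), f2(2) / g2(2))
assert phi_x == (F(6, 35), F(5, 3))

# Dual objective at u = 0:
phi_u = (f1(0) / g1(0), f2(0) / g2(0))    # = (1, 1)

# Dual feasibility of (u, lam1, lam2, gam1, gam2) = (0, 1/4, 1, 2, 1/4):
# picking xi1 = -2, xi2 = 1, zeta1 = 0, zeta2 = -2 from the convexifactor
# differences listed above gives a zero combination, as required by (5).
lam1, lam2, gam1, gam2 = F(1, 4), F(1), F(2), F(1, 4)
assert lam1 * (-2) + lam2 * 1 + gam1 * 0 + gam2 * (-2) == 0

# Weak duality (Theorem 4.1): phi(x) is not componentwise < phi(u).
assert not all(a < b for a, b in zip(phi_x, phi_u))
```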

Remark 4.4. There do exist functions which are both ∂*-pseudoconvex and ∂*-quasiconvex as can be seen from the following example.

Example 4.5. Let f: R² → R be a function defined by

f(x, y) = x + y if x > 0, y > 0;  x − y if x > 0, y ≤ 0;  −x + y if x ≤ 0, y > 0;  x + y if x ≤ 0, y ≤ 0,

with ∂*f(0, 0) = {(x*, y*) : x* ≥ 1, y* ≥ 1}.

f is ∂*-pseudoconvex and ∂*-quasiconvex at (x̅, y̅) = (0,0).
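Both properties can be sampled numerically; the piecewise formula below is our reconstruction of the garbled printed one (an assumption, chosen so that f ≤ 0 occurs only for x ≤ 0, y ≤ 0):

```python
# Example 4.5, as reconstructed (an assumption, not verbatim from the paper).
def f(x, y):
    if x > 0 and y > 0:  return x + y
    if x > 0 and y <= 0: return x - y
    if x <= 0 and y > 0: return -x + y
    return x + y                         # x <= 0, y <= 0

grid = [i / 4.0 for i in range(-8, 9)]
xis = [(1.0, 1.0), (1.5, 2.0), (3.0, 1.0)]   # sample xi with x*, y* >= 1

for x in grid:
    for y in grid:
        val = f(x, y)
        if val <= f(0, 0):               # quasiconvexity premise
            for xs, ys in xis:
                assert xs * x + ys * y <= 1e-12
        if val < f(0, 0):                # pseudoconvexity premise
            for xs, ys in xis:
                assert xs * x + ys * y < 0
```

The check exploits that f ≤ 0 forces x ≤ 0 and y ≤ 0, where xs·x ≤ x and ys·y ≤ y for xs, ys ≥ 1, so ⟨ξ, (x, y)⟩ ≤ x + y = f(x, y).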

We now associate the Schaible-type dual with (P) which is given as follows:

(D2) Maximize v = (v1, v2,…, v p )

subject to

0 ∈ ∑_{i=1}^p λi (∂*fi(u) − vi ∂*gi(u)) + ∑_{j=1}^m γj ∂*hj(u),
∑_{i=1}^p λi (fi(u) − vi gi(u)) ≥ 0,  ∑_{j=1}^m γj hj(u) ≥ 0,
0 ≠ λ ∈ R+p, γ ∈ R+m, vi ≥ 0, i = 1, 2,…, p.
(12)

Remark 4.6. If we assume that f i , g i , i = 1, 2,…, p, h j , j = 1, 2,…, m are differentiable and admit upper regular convexifactors ∂*f i (u), ∂*g i (u), i = 1, 2,…, p and ∂*h j (u), j = 1, 2,…, m at u, respectively, then the Schaible-type dual reduces to the following:

Maximize v = (v1, v2,…, v p )

subject to

∑_{i=1}^p λi (∇fi(u) − vi ∇gi(u)) + ∑_{j=1}^m γj ∇hj(u) = 0,
∑_{i=1}^p λi (fi(u) − vi gi(u)) ≥ 0,  ∑_{j=1}^m γj hj(u) ≥ 0,
0 ≠ λ ∈ R+p, γ ∈ R+m, vi ≥ 0, i = 1, 2,…, p,

which is similar to the dual given by Suneja and Gupta [14].

We shall now prove the weak duality and strong duality results.

Theorem 4.7. (Weak Duality). Let x be feasible for (P) and (u, λ, γ, v) be feasible for (D2). Suppose that ∂*fi(u), i = 1, 2,…, p is an upper regular convexifactor of fi(.), i = 1, 2,…, p at u and ∂*gi(u), i = 1, 2,…, p is a lower regular convexifactor of gi(.), i = 1, 2,…, p at u. Also, assume that for some i and some j, ∂*fi(u) − vi ∂*gi(u) and ∂*hj(u) are, respectively, upper regular convexifactors of fi(.) − vi gi(.) and hj(.) at u, and that for some i₀ ≠ i and j₀ ≠ j, ∂*fi₀(u) − vi₀ ∂*gi₀(u) and ∂*hj₀(u) are, respectively, lower regular convexifactors of fi₀(.) − vi₀ gi₀(.) and hj₀(.) at u. If ∑_{i=1}^p λi (fi(.) − vi gi(.)) is ∂*-pseudoconvex at u and ∑_{j=1}^m γj hj(.) is ∂*-quasiconvex at u, then

φ(x) ≮ v.

Proof. Since ∂*fi(u), i = 1, 2,…, p is an upper regular convexifactor of fi(.) at u and ∂*gi(u), i = 1, 2,…, p is a lower regular convexifactor of gi(.) at u, using Remark 2.17 and Lemma 2.16, we have that ∂*fi(u) − vi ∂*gi(u) is a convexifactor of fi(.) − vi gi(.), i = 1, 2,…, p at u. Also, since for some i and some j, ∂*fi(u) − vi ∂*gi(u) and ∂*hj(u) are, respectively, upper regular convexifactors of fi(.) − vi gi(.) and hj(.) at u, and for some i₀ ≠ i and j₀ ≠ j, ∂*fi₀(u) − vi₀ ∂*gi₀(u) and ∂*hj₀(u) are, respectively, lower regular convexifactors of fi₀(.) − vi₀ gi₀(.) and hj₀(.) at u, using Remark 2.17 and Lemma 2.16 again, we have that ∑_{i=1}^p λi (∂*fi(u) − vi ∂*gi(u)) and ∑_{j=1}^m γj ∂*hj(u) are convexifactors of ∑_{i=1}^p λi (fi(.) − vi gi(.)) and ∑_{j=1}^m γj hj(.) at u, respectively.

On the contrary, suppose ϕ(x) < v.

Then,

fi(x) − vi gi(x) < 0, i = 1, 2,…, p.
(13)

Since (u, λ, γ, v) is feasible for (D2), therefore, there exist ξ i ∈ (∂*f i (u) − v i *g i (u)), i = 1, 2,…, p, ζ j ∈ ∂*h j (u), j = 1, 2,…, m such that

∑_{i=1}^p λi ξi + ∑_{j=1}^m γj ζj = 0.
(14)

Using (13), 0 ≠ λ ∈ R+p, γ ∈ R+m, and the fact that x is feasible for (P), we get that

∑_{i=1}^p λi (fi(x) − vi gi(x)) < 0,  ∑_{j=1}^m γj hj(x) ≤ 0.
(15)

Since (u, λ, γ, v) is feasible for (D2), using (15) we have

∑_{i=1}^p λi (fi(x) − vi gi(x)) < 0 ≤ ∑_{i=1}^p λi (fi(u) − vi gi(u)),  ∑_{j=1}^m γj hj(x) ≤ 0 ≤ ∑_{j=1}^m γj hj(u).

Using the ∂*-pseudoconvexity of ∑_{i=1}^p λi (fi(.) − vi gi(.)) and the ∂*-quasiconvexity of ∑_{j=1}^m γj hj(.), we have

⟨∑_{i=1}^p λi ξi, x − u⟩ < 0 for ξi ∈ ∂*fi(u) − vi ∂*gi(u), i = 1, 2,…, p
(16)

and

⟨∑_{j=1}^m γj ζj, x − u⟩ ≤ 0 for ζj ∈ ∂*hj(u), j = 1, 2,…, m.
(17)

Adding (16) and (17), we get

⟨∑_{i=1}^p λi ξi + ∑_{j=1}^m γj ζj, x − u⟩ < 0,

which is a contradiction to (14).

Theorem 4.8. (Strong Duality). Let x* be a weak efficient solution of (P). Assume that the hypotheses of Theorem 3.4 hold. Then, there exists (λ*, γ*, v*) ∈ Rp × Rm × Rp such that (x*, λ*, γ*, v*) is feasible for the dual (D2). If for each feasible x for (P) and (u, λ, γ, v) for (D2) the hypotheses of Theorem 4.7 hold, then (x*, λ*, γ*, v*) is a weak efficient solution of (D2).

Proof. Since x* is a weak efficient solution of (P) and all the assumptions of Theorem 3.4 are satisfied, there exist vectors 0 ≠ λ* ∈ R+p, γ* ∈ R+m, and vi* ≥ 0, i = 1, 2,…, p such that (1), (2), and (3) hold.

That is,

0 ∈ ∑_{i=1}^p λi* [∂*fi(x*) − vi* ∂*gi(x*)] + ∑_{j=1}^m γj* ∂*hj(x*),
γj* hj(x*) = 0, j = 1, 2,…, m,
γj* ≥ 0, hj(x*) ≤ 0, j = 1, 2,…, m,

where

vi* = φi(x*), i = 1, 2,…, p,
(18)

which implies that (x*, λ*, γ*, v*) is feasible for the dual (D2). Let, if possible, (x*, λ*, γ*, v*) not be a weak efficient solution of (D2). Then, there exists (u, λ, γ, v) feasible for the dual such that v* < v.

On using (18), we have

φ(x*) < v.

However, this is a contradiction to Theorem 4.7 as x* is feasible for (P) and (u, λ, γ, v) is feasible for (D2).

Hence, (x*, λ*, γ*, v*) is a weak efficient solution of (D2).