Abstract
In this paper, the concept of ∂*-quasiconvexity is introduced by using convexifactors. Mond-Weir-type and Schaible-type duals are associated with a multiobjective fractional programming problem, and various duality results are established under the assumptions of ∂*-pseudoconvexity and ∂*-quasiconvexity.
Introduction
Duality plays a very important role in optimization problems. Many authors have contributed to the development of duality theory for multiple objective programming problems, and there has been tremendous development in the area of multiobjective optimization during the past years. For the most recent developments in this area, one can refer to the book by Ansari and Yao [1], in which several aspects of multiobjective optimization, from the very beginning to the most recent, are discussed in the form of various research papers. An important class of such problems, namely multiple objective fractional programming problems, is of great interest in many areas such as transportation, production, information theory, and numerical analysis. The papers by Schaible [2–4] review the early work done in fractional programming. For some recent work on duality in fractional programming, one can see the studies of Lyall et al. [5], Liang et al. [6], etc. Duality in generalized fractional programming has been studied by Barros et al. [7], Liu [8], etc. Weir [9] studied a multiobjective fractional programming problem with the same denominators. Since then, a great deal of research has been carried out in this area under the assumptions of convexity and generalized convexity by researchers such as Singh [10], Egudo [11], Singh and Hanson [12], Weir [9, 13], Suneja and Gupta [14], Suneja and Lalitha [15], etc. Duality for the multiobjective fractional programming problem under various assumptions has also been studied by authors such as Liu [16], Kim et al. [17], Kim [18], Nobakhtian [19], Mishra and Upadhyay [20], etc.
The concept of a convexifactor was first introduced by Demyanov [21] as a generalization of the notion of upper convex and lower concave approximations. In [21], a convexifactor was defined as a convex and compact set and was termed a convexificator. However, Jeyakumar and Luc [22], in their further study, suggested that one can use a closed, nonconvex set instead of a compact and convex one to define a convexificator. Dutta and Chandra [23] called them convexifactors. They have been further studied by Dutta and Chandra [24], Li and Zhang [25], Gadhi [26], etc. Dutta and Chandra [24] introduced ∂*-pseudoconvex functions by using this concept of a convexifactor. The importance of convexifactors lies in the fact that they are useful even when they are unbounded or nonconvex, and the use of a nonconvex set to define them has the advantage that, in many situations, one may have a convexifactor consisting of a finite number of points, which is more amenable to various applications. Further, for a locally Lipschitz function, one can have convexifactors smaller than the Clarke subdifferential, the Michel-Penot subdifferential, etc., so optimality conditions and duality results obtained in terms of convexifactors are sharper. In multiobjective programming problems, generalized convexity plays an important role in deriving duality results. Gadhi [26] has proved necessary and sufficient optimality conditions for a multiobjective fractional programming problem in terms of convexifactors. In this paper, we introduce the notion of ∂*-quasiconvex functions. We associate Mond-Weir-type and Schaible-type duals with the multiobjective fractional programming problem and derive duality results under the assumptions of ∂*-pseudoconvexity and ∂*-quasiconvexity. Our results in terms of convexifactors recover the well-known results for the multiobjective fractional programming problem and hence are more general.
The paper is organized as follows: In the ‘Preliminaries’ section, we introduce the concept of ∂*-quasiconvex functions in terms of convexifactors. Kuhn-Tucker-type optimality conditions for multiobjective fractional programming problem have been given in the ‘Optimality conditions’ section. Finally, in the ‘Duality’ section, we prove duality results by associating Mond-Weir-type and Schaible-type duals to the problem.
Preliminaries
Throughout the paper, we are concerned with finite-dimensional spaces.
Let f: Rn → R ∪ {+ ∞} be an extended real-valued function, and let (f)d−(x, v) and (f)d+(x, v) denote, respectively, the lower and upper Dini directional derivatives of f at x in the direction v.
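The displays defining these derivatives did not survive extraction; following Dutta and Chandra [23], they are the standard lower and upper Dini directional derivatives:

```latex
(f)_{d}^{-}(x,v) \;=\; \liminf_{t \downarrow 0} \frac{f(x+tv) - f(x)}{t}, \qquad
(f)_{d}^{+}(x,v) \;=\; \limsup_{t \downarrow 0} \frac{f(x+tv) - f(x)}{t}.
```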
We begin with the definitions of convexifactors given by Dutta and Chandra [23].
Definition 2.1. The function f: Rn → R ∪ {+ ∞}is said to have an upper convexifactor ∂ uf(x) at x if ∂ uf(x) ⊂ Rn is closed, and for each v ∈ Rn,
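The defining inequality, whose display is missing above, reads (as in Jeyakumar and Luc [22] and Dutta and Chandra [23]):

```latex
(f)_{d}^{-}(x,v) \;\le\; \sup_{\xi \in \partial^{u} f(x)} \langle \xi, v \rangle .
```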
Definition 2.2. The function f: Rn → R ∪ {+ ∞} is said to have a lower convexifactor ∂ l f(x) at x if ∂ l f(x) ⊂ Rn is closed, and for each v ∈ Rn,
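The corresponding inequality for a lower convexifactor, missing from the display, is:

```latex
(f)_{d}^{+}(x,v) \;\ge\; \inf_{\xi \in \partial^{l} f(x)} \langle \xi, v \rangle .
```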
Definition 2.3. The function f: Rn → R ∪ {+ ∞} is said to have a convexifactor ∂*f(x) at x if it is both an upper and lower convexifactor of f at x.
Definition 2.4. The function f: Rn → R ∪ {+ ∞} is said to have an upper regular convexifactor ∂ uf(x) at x if ∂ uf(x) is an upper convexifactor of f at x, and for each v ∈ Rn,
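The additional inequality characterizing upper regularity, missing from the display, requires the same bound to hold for the upper Dini derivative as well:

```latex
(f)_{d}^{+}(x,v) \;\le\; \sup_{\xi \in \partial^{u} f(x)} \langle \xi, v \rangle .
```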
Definition 2.5. The function f: Rn → R ∪ {+ ∞} is said to have a lower regular convexifactor ∂ l f(x) at x if ∂ l f(x) is a lower convexifactor of f at x, and for each v ∈ Rn,
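The additional inequality characterizing lower regularity, missing from the display, is the mirror image of the upper regular case:

```latex
(f)_{d}^{-}(x,v) \;\ge\; \inf_{\xi \in \partial^{l} f(x)} \langle \xi, v \rangle .
```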
Convexifactors are not necessarily convex or compact. These relaxations allow applications to a large class of nonsmooth functions.
The important question arises regarding the existence of convexifactors or regular convexifactors at a given point for a general real valued function. In this regard, we present the following theorem from Dutta and Chandra [24].
Theorem 2.6. Let f: Rn → R ∪ {+ ∞} and let x ∈ Rn be a given point where f(x) is finite. Moreover, assume that the lower Dini directional derivative (f)d−(x, v) is bounded above. Then, there exists a compact upper convexifactor of f at x. If the upper Dini directional derivative (f)d+(x, v) is bounded below, then there exists a compact lower convexifactor of f at x.
Now, we consider the following multiobjective fractional programming problem:
subject to
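The displayed problem is missing here; from the feasible set E and the sign conditions stated below, (P) is the standard multiobjective fractional program (we write ϕ for the objective vector, consistent with its use in the proofs of the 'Duality' section):

```latex
\text{(P)} \qquad \min_{x} \;\; \phi(x) \;=\; \left( \frac{f_1(x)}{g_1(x)}, \ldots, \frac{f_p(x)}{g_p(x)} \right)
\quad \text{subject to} \quad h_j(x) \le 0, \quad j = 1, 2, \ldots, m .
```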
Let E = {x ∈ Rn: h j (x) ≤ 0, j = 1, 2,…, m} denote the feasible set for problem (P).
Here, f i , g i , i = 1, 2,…, p and h j , j = 1, 2,…, m are continuous real valued functions defined on Rn such that f i (x) ≥ 0 and g i (x) > 0, i = 1, 2,…, p for all x ∈ E, and minimization means finding weak efficient solutions in the following sense:
Definition 2.7. x̅ ∈ E is a weak efficient solution of (P) if there does not exist any feasible solution x ∈ E such that
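The missing display spells out the usual weak efficiency condition:

```latex
\frac{f_i(x)}{g_i(x)} \;<\; \frac{f_i(\bar{x})}{g_i(\bar{x})} \quad \text{for all } i = 1, 2, \ldots, p .
```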
Definition 2.8. x̅ ∈ E is a local weak efficient solution of (P) if there exists a neighborhood U of x̅ such that for any feasible solution x ∈ U ∩ E, the following does not hold:
We give below the definition of ∂*-pseudoconvex function given by Dutta and Chandra [24].
Definition 2.9. A function f: Rn → R is said to be ∂*-pseudoconvex at x̅ ∈ Rn if for x ∈ Rn,
where ∂*f(x̅) is a convexifactor of f at x̅.
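The implication defining ∂*-pseudoconvexity, whose display is missing above, is (as in Dutta and Chandra [24]):

```latex
f(x) < f(\bar{x}) \;\Longrightarrow\; \langle \xi, x - \bar{x} \rangle < 0 \quad \text{for all } \xi \in \partial^{*} f(\bar{x}) .
```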
We now introduce ∂*-quasiconvex function.
Definition 2.10. A function f: Rn → R is said to be ∂*-quasiconvex at x̅ ∈ Rn if for x ∈ Rn,
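The defining implication, missing from the display, mirrors Definition 2.9 with non-strict inequalities, where ∂*f(x̅) is a convexifactor of f at x̅:

```latex
f(x) \le f(\bar{x}) \;\Longrightarrow\; \langle \xi, x - \bar{x} \rangle \le 0 \quad \text{for all } \xi \in \partial^{*} f(\bar{x}) .
```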
Remark 2.11. (i) (Dutta and Chandra [24]) If f is a differentiable function and ∂*f(x̅) is an upper regular convexifactor, then ∂*f(x̅) = {∇f(x̅)}, and the above definition reduces to the definition of quasiconvex function.
(ii) If f is a locally Lipschitz function and ∂*f(x̅) = ∂cf(x̅), where ∂cf(x̅) is the Clarke generalized gradient, then the above definition reduces to the definition of ∂c-quasiconvex function defined by Bector et al. [27].
Remark 2.12. ∂*-quasiconvex function is not necessarily ∂*-pseudoconvex as can be seen from the following example:
Example 2.13. Let f: R2→ R be a function defined by
f is ∂*-quasiconvex at (x̅, y̅) = (0, 0), but f is not ∂*-pseudoconvex at (x̅, y̅) = (0, 0) because for (x, y) = (−1, 2), ξ = (0, 0)
Remark 2.14. It may be noted that every ∂c-pseudoconvex function is ∂c-quasiconvex when f is locally Lipschitz, as can be seen from Remark 3.1 in the study of Rezaie and Zafarani [28] by taking η(x, y) = x − y. However, the next example shows that a ∂*-pseudoconvex function need not be ∂*-quasiconvex.
Example 2.15. Let f: R2 → R be a function defined by
f is ∂*-pseudoconvex at (x̅, y̅) = (0,0), but f is not ∂*-quasiconvex at (x̅, y̅) = (0,0) because for (x, y) = (1, −1), ξ = (− 1, −2)
The following result is given by Li and Zhang [25].
Lemma 2.16. Let ∂*f(x) be a convexifactor of f at x. Then, ∀λ ∈ R, λ ∂*f(x) is a convexifactor of λf at x.
We now give the following result given by Jeyakumar and Luc [22].
Remark 2.17.[22] Assume that the functions f, g:Rn → R admit upper convexifactors ∂ uf(x) and ∂ ug(x) at x, respectively, and that one of the convexifactors is upper regular at x. Then, ∂ uf(x) + ∂ ug(x) is an upper convexifactor of f + g at x.
Similarly, if one of the convexifactors is lower regular at x, then ∂ l f(x) + ∂ l g(x) is a lower convexifactor of f + g at x.
Optimality conditions
Gadhi [26] gave the following necessary optimality conditions for (P).
Theorem 3.1. Let x̅ ∈ E be a local weak efficient solution of (P). Assume that f i , g i , i = 1, 2,…, p and h j , j = 1, 2,…, m are continuous and admit bounded convexifactors ∂*f i (x̅), ∂*g i (x̅), i = 1, 2,…, p and ∂*h j (x̅), j = 1, 2,…, m at x̅, respectively, and that x ↦ ∂*f i (x), x ↦ ∂*g i (x), i = 1, 2,…, p and x ↦ ∂*h j (x), j = 1, 2,…, m are upper semicontinuous at x̅. Then, there exist vectors α* = (α1*, α2*, …, α p *) ∈ R+p and μ* = (μ1*, μ2*, …, μ m *) ∈ R+m (not both zero) such that
where
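Conditions (1)–(3) and the definition following 'where' are missing from the displays. Judging from how they are invoked later (condition (2) yields μ j * = 0 for j ∉ J in the proof of Theorem 3.4, and the ratios v i * feed into both duals), a plausible reconstruction, to be checked against Gadhi [26], is:

```latex
0 \in \operatorname{cl\,conv}\Big( \sum_{i=1}^{p} \alpha_i^{*}\big( \partial^{*} f_i(\bar{x}) - v_i^{*}\, \partial^{*} g_i(\bar{x}) \big) + \sum_{j=1}^{m} \mu_j^{*}\, \partial^{*} h_j(\bar{x}) \Big), \tag{1}
```
```latex
\mu_j^{*} h_j(\bar{x}) = 0, \quad j = 1, 2, \ldots, m, \tag{2}
\qquad
f_i(\bar{x}) - v_i^{*} g_i(\bar{x}) = 0, \quad i = 1, 2, \ldots, p, \tag{3}
```
```latex
\text{where } \; v_i^{*} = \frac{f_i(\bar{x})}{g_i(\bar{x})}, \quad i = 1, 2, \ldots, p .
```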
We now deduce Kuhn-Tucker-type necessary optimality conditions for (P) under a Slater-type weak constraint qualification, which is defined as follows along the lines of Mangasarian [29].
Definition 3.2. The function h is said to satisfy the Slater-type weak constraint qualification at x̅ ∈ E if h J is ∂*-pseudoconvex at x̅, and there exists an xo ∈ Rn such that h J (xo) < 0 where J = {j|h j (x̅) = 0}.
Remark 3.3. If h is a differentiable function at x̅ and admits an upper regular convexifactor ∂*h(x̅) at x̅, then the above Slater-type weak constraint qualification reduces to Slater's weak constraint qualification given by Mangasarian [29].
Theorem 3.4. Let x̅ ∈ E be a weak efficient solution of (P). Suppose that the hypotheses of Theorem 3.1 hold. Then, there exist vectors α* ∈ R+p and μ* ∈ R+m (not both zero) such that (1), (2), and (3) hold. If the Slater-type weak constraint qualification holds at x̅, then α* ≠ 0.
Proof. Suppose, on the contrary, that α* = 0; then μ* ≠ 0.
Now using (1), we get that there exists ζ j ∈ ∂*h j (x̅), j = 1, 2,…, m such that
Since h satisfies the Slater-type weak constraint qualification at x̅, therefore h J is ∂*-pseudoconvex at x̅, and there exists an xo ∈ Rn such that
where
Using ∂*- pseudoconvexity of h j , j ∈ J, we get
Now, (2) gives μ j * = 0, for all j∉J and thus we have
which contradicts (4).
Hence, α* ≠ 0.
Duality
Duality plays a crucial role in mathematical programming, as solving a dual is sometimes easier than solving the primal. Wolfe [30] associated a dual problem with a primal nonlinear programming problem and proved various duality theorems under the assumptions of convexity. Since certain duality theorems may fail to hold for the Wolfe model if the objective and/or the constraint functions are generalized convex, Mond and Weir [31] presented a new model for studying duality which allowed the weakening of the convexity requirements on the objective and the constraint functions. In this section, we introduce two duals in terms of convexifactors, a Mond-Weir-type dual and a Schaible-type dual, which are more general than the duals existing in the literature.
We associate the following Mond-Weir-type dual with problem (P).
(D1) Maximize
subject to
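The objective and constraint displays of (D1) are missing. From the feasibility conditions used in the proof of Theorem 4.1 (a vanishing weighted sum of elements ξ i ∈ ∂*f i (u) − v i ∂*g i (u) and ζ j ∈ ∂*h j (u), sign conditions on γ j h j (u), and parameters v i tied to the ratios at u), a plausible reconstruction is:

```latex
\text{(D1)} \quad \max \;\; \phi(u) = \left( \frac{f_1(u)}{g_1(u)}, \ldots, \frac{f_p(u)}{g_p(u)} \right)
```
```latex
\text{s.t.} \quad 0 = \sum_{i=1}^{p} \lambda_i \xi_i + \sum_{j=1}^{m} \gamma_j \zeta_j
\quad \text{for some } \xi_i \in \partial^{*} f_i(u) - v_i\, \partial^{*} g_i(u), \; \zeta_j \in \partial^{*} h_j(u),
```
```latex
v_i = \frac{f_i(u)}{g_i(u)}, \;\; i = 1, \ldots, p, \qquad
\gamma_j h_j(u) \ge 0, \;\; j = 1, \ldots, m, \qquad
0 \ne \lambda \in \mathbb{R}^{p}_{+}, \;\; \gamma \in \mathbb{R}^{m}_{+} .
```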
Here, maximizing means finding weak efficient solutions in the following sense:
A feasible solution (u*, λ*, γ*, v*) of the dual (D1) is said to be a weak efficient solution of (D1) if there does not exist any feasible solution (u, λ, γ, v) of (D1) such that
We shall now prove the weak duality theorem.
Theorem 4.1. (Weak Duality). Let x be feasible for (P) and (u, λ, γ, v) be feasible for (D1). Suppose that ∂*f i (u), i = 1, 2,…, p is an upper regular convexifactor of f i (.), i = 1, 2,…, p at u and ∂*g i (u), i = 1, 2,…, p is a lower regular convexifactor of g i (.), i = 1, 2,…, p at u. If f i (.) − v i g i (.), i = 1, 2,…, p is ∂*-pseudoconvex at u and γ j h j (.), j = 1, 2,…, m is ∂*-quasiconvex at u, then ϕ(x) ≮ ϕ(u).
Proof. Since ∂*f i (u), i = 1, 2,…, p is an upper regular convexifactor of f i (.), i = 1, 2,…, p at u and ∂*g i (u), i = 1, 2,…, p is a lower regular convexifactor of g i (.), i = 1, 2,…, p at u, using Remark 2.17 and Lemma 2.16, we have that ∂*f i (u) − v i ∂*g i (u), i = 1, 2,…, p is a convexifactor of f i (.) − v i g i (.), i = 1, 2,…, p at u.
On the contrary, suppose that ϕ(x) < ϕ(u).
Then,
Since (u, λ, γ, v) is feasible for (D1), therefore, there exist ξ i ∈ ∂*f i (u) − v i ∂*g i (u), i = 1, 2,…, p, ζ j ∈ ∂*h j (u), j = 1, 2,…, m such that
Using (6), the feasibility of x for (P), and the feasibility of (u, λ, γ, v) for (D1), we get
and
Since f i (.) − v i g i (.), i = 1, 2,…, p is ∂*-pseudoconvex at u, we have from (8),
〈ξ i , x − u〉 < 0 for all ξ i ∈ ∂*f i (u) − v i ∂*g i (u), i = 1, 2,…, p.
As 0 ≠ λ ∈ R+p, we get
Now, the ∂*-quasiconvexity of γ j h j , j = 1, 2,…, m and (9) gives us
〈ζ′ j , x − u〉 ≤ 0 for all ζ′ j ∈ ∂*(γ j h j )(u), j = 1, 2,…, m
which on using Lemma 2.16 implies that
〈γ j ζ j , x − u〉 ≤ 0 for all ζ j ∈ ∂*h j (u), j = 1, 2,…, m.
As γ j ≥ 0, j = 1, 2,…, m, we have
Adding (10) and (11), we get
which is a contradiction to (7).
In the next theorem, we shall prove the strong duality result.
Theorem 4.2. (Strong Duality). Let x* be a weak efficient solution of (P). Assume that the hypotheses of Theorem 3.4 hold. Then, there exists (λ*, γ*, v*) ∈ Rp × Rm × Rp such that (x*, λ*, γ*, v*) is feasible for the dual (D1). If for each feasible x for (P) and (u, λ, γ, v) for (D1) the hypotheses of Theorem 4.1 hold, then (x*, λ*, γ*, v*) is a weak efficient solution of (D1).
Proof. Since x* is a weak efficient solution of (P) and all the assumptions of Theorem 3.4 are satisfied, there exist vectors 0 ≠ λ* ∈ R+p and γ* ∈ R+m such that (1), (2), and (3) hold.
That is,
which implies that (x*, λ*, γ*, v*) is feasible for the dual (D1). Let, if possible, (x*, λ*, γ*, v*) not be a weak efficient solution of (D1). Then, there exists (u, λ, γ, v) feasible for the dual such that ϕ(u) > ϕ(x*).
However, this is a contradiction to Theorem 4.1 as x* is feasible for (P) and (u, λ, γ, v) is feasible for (D1).
Hence, (x*, λ*, γ*, v*) is a weak efficient solution of (D1).
We now provide an example illustrating Theorem 4.1.
Example 4.3 Consider the problem
(P) Minimize
subject to h1(x) ≤ 0, h2(x) ≤ 0,
where f i , g i , h j : R→ R, i = 1, 2, j = 1, 2 are defined by
The set of feasible solutions of (P) is E = [2, ∞[, and its dual is given by
(D1) Maximize
subject to
where 0 ≠ λ ∈ R+2, γ ∈ R+2, and is feasible for dual (D1).
where
f1(.) − v1g1(.) and f2(.) − v2g2(.) are ∂*-pseudoconvex at u = 0.
γ1h1(.) and γ2h2(.) are ∂*-quasiconvex at u = 0.
We can see that for feasible point x = 2 for (P) and
Hence, Theorem 4.1 is illustrated.
Remark 4.4. There do exist functions which are both ∂*-pseudoconvex and ∂*-quasiconvex as can be seen from the following example.
Example 4.5. Let f: R2 → R be a function defined by
f is ∂*-pseudoconvex and ∂*-quasiconvex at (x̅, y̅) = (0,0).
We now associate the Schaible-type dual with (P) which is given as follows:
(D2) Maximize v = (v1, v2,…, v p )
subject to
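The constraint displays of (D2) are missing. Following Schaible's classical parametric dual, adapted to convexifactors in line with the feasibility conditions used in the proof of Theorem 4.7, they would plausibly read:

```latex
\text{(D2)} \quad \max \;\; v = (v_1, v_2, \ldots, v_p)
```
```latex
\text{s.t.} \quad 0 = \sum_{i=1}^{p} \lambda_i \xi_i + \sum_{j=1}^{m} \gamma_j \zeta_j
\quad \text{for some } \xi_i \in \partial^{*} f_i(u) - v_i\, \partial^{*} g_i(u), \; \zeta_j \in \partial^{*} h_j(u),
```
```latex
f_i(u) - v_i g_i(u) \ge 0, \;\; i = 1, \ldots, p, \qquad
\gamma_j h_j(u) \ge 0, \;\; j = 1, \ldots, m, \qquad
0 \ne \lambda \in \mathbb{R}^{p}_{+}, \;\; \gamma \in \mathbb{R}^{m}_{+}, \;\; v \in \mathbb{R}^{p}_{+} .
```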
Remark 4.6. If we assume that f i , g i , i = 1, 2,…, p, h j , j = 1, 2,…, m are differentiable and admit upper regular convexifactors ∂*f i (u), ∂*g i (u), i = 1, 2,…, p and ∂*h j (u), j = 1, 2,…, m at u, respectively, then the Schaible-type dual reduces to the following:
Maximize v = (v1, v2,…, v p )
subject to
which is similar to the dual given by Suneja and Gupta [14].
We shall now prove the weak duality and strong duality results.
Theorem 4.7. (Weak Duality). Let x be feasible for (P) and (u, λ, γ, v) be feasible for (D2). Suppose that ∂*f i (u), i = 1, 2,…, p is an upper regular convexifactor of f i (.), i = 1, 2,…, p at u and ∂*g i (u), i = 1, 2,…, p is a lower regular convexifactor of g i (.), i = 1, 2,…, p at u. Also, assume that for some i and some j, ∂*f i (u) − v i ∂*g i (u) and ∂*h j (u) are, respectively, upper regular convexifactors of f i (.) − v i g i (.), i = 1, 2,…, p and h j (.), j = 1, 2,…, m at u, and that for some i 0 ≠ i, j 0 ≠ j, … are, respectively, lower regular convexifactors of … at u. If … is ∂*-pseudoconvex at u and … is ∂*-quasiconvex at u, then
ϕ(x) ≮ v.
Proof. Since ∂*f i (u), i = 1, 2,…, p is an upper regular convexifactor of f i (.), i = 1, 2,…, p at u and ∂*g i (u), i = 1, 2,…, p is a lower regular convexifactor of g i (.), i = 1, 2,…, p at u, using Remark 2.17 and Lemma 2.16, we have that ∂*f i (u) − v i ∂*g i (u), i = 1, 2,…, p is a convexifactor of f i (.) − v i g i (.), i = 1, 2,…, p at u. Also, since for some i and some j, ∂*f i (u) − v i ∂*g i (u) and ∂*h j (u) are, respectively, upper regular convexifactors of f i (.) − v i g i (.), i = 1, 2,…, p and h j (.), j = 1, 2,…, m at u, and for some i0 ≠ i, j0 ≠ j, … are, respectively, lower regular convexifactors of … at u, using Remark 2.17 and Lemma 2.16, we have that … are convexifactors of … at u, respectively.
On the contrary, suppose ϕ(x) < v.
Then,
Since (u, λ, γ, v) is feasible for (D2), therefore, there exist ξ i ∈ (∂*f i (u) − v i ∂*g i (u)), i = 1, 2,…, p, ζ j ∈ ∂*h j (u), j = 1, 2,…, m such that
Using (13), 0 ≠ λ ∈ R+p, γ ∈ R+m, and the fact that x is feasible for (P), we get that
Since (u, λ, γ, v) is feasible for (D2), using (15) we have
Using ∂*-pseudoconvexity and ∂*-quasiconvexity, we have
and
Adding (16) and (17), we get
which is a contradiction to (14).
Theorem 4.8. (Strong Duality). Let x* be a weak efficient solution of (P). Assume that the hypotheses of Theorem 3.4 hold. Then, there exists (λ*, γ*, v*) ∈ Rp × Rm × Rp such that (x*, λ*, γ*, v*) is feasible for the dual (D2). If for each feasible x for (P) and (u, λ, γ, v) for (D2) the hypotheses of Theorem 4.7 hold, then (x*, λ*, γ*, v*) is a weak efficient solution of (D2).
Proof. Since x* is a weak efficient solution of (P) and all the assumptions of Theorem 3.4 are satisfied, therefore, there exist vectors 0 ≠ λ* ∈ R+p, γ* ∈ R+m, and v i * ≥ 0,
i = 1, 2,…, p such that (1), (2), and (3) hold.
That is,
where
which implies that (x*, λ*, γ*, v*) is feasible for the dual (D2). Let, if possible, (x*, λ*, γ*, v*) not be a weak efficient solution of (D2). Then, there exists (u, λ, γ, v) feasible for the dual such that v* < v.
On using (18), we have
However, this is a contradiction to Theorem 4.7 as x* is feasible for (P) and (u, λ, γ, v) is feasible for (D2).
Hence, (x*, λ*, γ*, v*) is a weak efficient solution of (D2).
References
Ansari QH, Yao JC: Recent Developments in Vector Optimization. Springer, Berlin Heidelberg; 2012.
Schaible S: Fractional programming 1, duality. Manag. Sci 1976, 22: 858–867. 10.1287/mnsc.22.8.858
Schaible S: Duality in fractional programming: a unified approach. Oper. Res 1976, 24: 452–461. 10.1287/opre.24.3.452
Schaible S: Fractional programming, applications and algorithms. Eur. J. Oper. Res 1981, 7: 111–120. 10.1016/0377-2217(81)90272-1
Lyall V, Suneja S, Aggarwal S: Optimality and duality in fractional programming involving semilocally convex and related functions. Optimization 1997, 41: 237–255. 10.1080/02331939708844338
Liang ZA, Huang HX, Pardalos PM: Optimality conditions and duality for a class of nonlinear fractional programming problems. J. Optim. Theory Appl 2001, 110: 611–619. 10.1023/A:1017540412396
Barros AI, Frenk JBG, Schaible S, Zhang S: Using duality to solve generalized fractional programming problems. J. Glob. Optim 1996, 8: 139–170. 10.1007/BF00138690
Liu JC: Optimality and duality for generalized fractional programming involving nonsmooth pseudoinvex functions. J. Math. Anal. Appl 1996, 202: 667–685. 10.1006/jmaa.1996.0341
Weir T: A dual for a multiobjective fractional programming problem. J. Inf. Optim. Sci 1986, 7: 261–269.
Singh C: A class of multiple criteria fractional programming problems. J. Math. Anal. Appl 1986, 115: 202–213. 10.1016/0022-247X(86)90034-X
Egudo RR: Multiobjective fractional duality. Bull. Aust. Math. Soc 1988, 37: 367–378. 10.1017/S0004972700026988
Singh C, Hanson MA: Multiobjective fractional programming duality theory. Nav. Res. Logistics Q 1991, 38: 925–933.
Weir T: On duality in multiobjective fractional programming. Opsearch 1989, 26: 151–158.
Suneja SK, Gupta S: Duality in multiple objective fractional programming problems involving nonconvex functions. Opsearch 1990, 27: 239–252.
Suneja SK, Lalitha CS: Multiobjective fractional programming involving ρ–invex and related functions. Opsearch 1993, 30: 1–14.
Liu JC: Optimality and duality for multiobjective fractional programming involving nonsmooth pseudoinvex functions. Optimization 1996, 37: 27–39. 10.1080/02331939608844194
Kim DS, Jo CL, Lee GM: Optimality and duality for multiobjective fractional programming involving n-set functions. J. Math. Anal. Appl 1998, 224: 1–13. 10.1006/jmaa.1998.5974
Kim DS: Optimality and Duality for Multiobjective Fractional Programming with Generalized Invexity. MCDM, Whistler, B.C., Canada; 2004.
Nobakhtian S: Optimality and duality for nonsmooth multiobjective fractional programming with mixed constraints. J. Glob. Optim 2008, 41: 103–115. 10.1007/s10898-007-9168-7
Mishra SK, Upadhyay BB: Efficiency and duality in nonsmooth multiobjective fractional programming involving η–pseudolinear functions. Yugoslav J. Oper. Res 2012,22(1):13–18.
Demyanov VF: Convexification and Concavification of Positively Homogeneous Functions by the Same Family of Linear functions, Technical Report. University of Pisa, Italy; 1994:1–11.
Jeyakumar V, Luc DT: Nonsmooth calculus, minimality and monotonicity of convexificators. J. Optim. Theory Appl 1999, 101: 599–621. 10.1023/A:1021790120780
Dutta J, Chandra S: Convexifactors, generalized convexity and optimality conditions. J. Optim. Theory Appl 2002, 113: 41–65. 10.1023/A:1014853129484
Dutta J, Chandra S: Convexifactor, generalized convexity and vector optimization. Optimization 2004, 53: 77–94. 10.1080/02331930410001661505
Li XF, Zhang JZ: Necessary optimality conditions in terms of convexificators in lipschitz optimization. J. Optim. Theory Appl 2006, 131: 429–452. 10.1007/s10957-006-9155-z
Gadhi N: Necessary and sufficient optimality conditions for fractional multiobjective problems. Optimization 2008, 57: 527–537. 10.1080/02331930701455945
Bector CR, Chandra S, Dutta J: Principles of Optimization Theory. Narosa Publications, New Delhi; 2005.
Rezaie M, Zafarani J: Vector optimization and variational-like inequalities. J. Glob. Optim 2009, 43: 47–66. 10.1007/s10898-008-9290-1
Mangasarian OL: Nonlinear Programming. McGraw-Hill, NewYork; 1969.
Wolfe P: A duality theorem for nonlinear programming. Q. J. Appl. Math 1961, 19: 239–244.
Mond B, Weir T: Generalized concavity and duality. In Generalized Concavity in Optimization and Economics. Edited by: Schaible S, Ziemba WT. Academic Press, New York; 1981:263–279.
Acknowledgment
The authors wish to thank the unknown referees of this paper for their useful comments and constructive suggestions which have improved the presentation of the paper. The first author is grateful to U.G.C. for providing financial support.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
Both the authors have contributed significantly in the preparation of this article. Both authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Suneja, S.K., Kohli, B. Duality for multiobjective fractional programming problem using convexifactors. Math Sci 7, 6 (2013). https://doi.org/10.1186/2251-7456-7-6