Abstract
In this paper, our aim is to develop necessary and sufficient optimality conditions, in the absence of any constraint qualification, for the multiobjective fractional programming problem, using a combination of conjugate analysis and ε-subdifferential calculus. Furthermore, as an application of these conditions, we derive sequential duality results for this class of problems.
Introduction
In this paper, we consider the following multiobjective fractional programming problem:
(P)  Minimize (f_1(x)/g_1(x), …, f_p(x)/g_p(x)) subject to h_j(x) ≤ 0, j = 1, …, m,

where f_i, g_i, h_j: Rn → R; the f_i(·), i = 1, …, p, and h_j(·), j = 1, …, m, are continuous convex functions and the g_i(·), i = 1, …, p, are continuous concave functions such that f_i(x) ≥ 0 and g_i(x) > 0 for all x ∈ Rn.
Let E = {x ∈ Rn : h_j(x) ≤ 0, j = 1, …, m} denote the feasible set of problem (P).
The study of multiobjective optimization problems has been a subject of great interest, since multiobjective decision models can be applied to many practical problems arising in economics, management, medicine, etc. An important class of such problems is multiobjective fractional programming problems. Many authors have studied optimality conditions and solution concepts for multiobjective optimization problems, such as Choo and Atkins [3], Coladas et al. [4], Geoffrion [6], Gerth [7], Kaliszewski [13] and Li and Wang [14]. Although one generally deals with exact optimal solutions, in many situations the concept of an exact optimal solution cannot be applied and an approximate solution is required, because from the computational point of view only approximate solutions can be obtained. So, in this article, we consider ε-approximate solutions defined as follows:
A point x̄ ∈ E is an ε-weak efficient solution of (P) if there does not exist any feasible solution x ∈ E such that

f_i(x)/g_i(x) < f_i(x̄)/g_i(x̄) − ε_i  for all i = 1, …, p,

where ε = (ε_1, …, ε_p) with ε_i ≥ 0 for each i. When ε = 0, an ε-weak efficient solution is a weak efficient solution of (P). For the notion of ε-optimal solution of a scalar optimization problem one can refer to Bai et al. [1].
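The definition can be checked numerically on a finite sample of a feasible set. The following sketch uses illustrative functions and data that are not taken from (P); a point fails the test only if some sampled feasible point beats every ratio by more than the corresponding ε_i:

```python
import numpy as np

def is_eps_weak_efficient(x_bar, feasible, ratios, eps):
    """Return True if no sampled feasible x improves EVERY objective
    ratio f_i/g_i at x_bar by more than the corresponding eps_i."""
    v_bar = ratios(x_bar)
    for x in feasible:
        if np.all(ratios(x) < v_bar - eps):
            return False
    return True

# Illustrative data: f1 = x^2, g1 = 1 + x; f2 = (x-1)^2, g2 = 2 - x on E = [0, 1]
ratios = lambda x: np.array([x**2 / (1 + x), (x - 1)**2 / (2 - x)])
grid = np.linspace(0.0, 1.0, 101)
eps = np.array([0.5, 0.5])
print(is_eps_weak_efficient(0.5, grid, ratios, eps))   # True
```

Since both ratios here are nonnegative and the tolerance is generous, no sampled point can dominate x̄ = 0.5 in the ε-weak sense.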
In the field of optimization, ε-optimality conditions have been discussed by many researchers, such as Loridan [15], Loridan and Morgan [16], Strodiot et al. [19], Yokoyama [22], Gajek and Zagrodny [5], Li and Wang [14], Liu [17], Tanaka [20], Yokoyama [24] and Li and Wang [25]. In 1992, Yokoyama [22] obtained ε-optimality conditions for convex programming problems via exact penalty functions; in 1994, Yokoyama [23] extended these results to vector minimization problems. In 1998, Li and Wang [25] introduced the concept of ε-proper efficiency and studied necessary and/or sufficient conditions for an ε-efficient solution (an ε-properly efficient solution, an ε-weak efficient solution) of a multiobjective optimization problem via scalarization and an alternative theorem.
Since the study of multiobjective optimization problems is of great importance, this paper focuses on developing optimality conditions for the multiobjective fractional programming problem.
To derive necessary optimality conditions one usually needs to impose some kind of constraint qualification. However, such qualifications may be cumbersome to verify and give rise to optimality conditions that are difficult to handle computationally. In the absence of constraint qualifications (CQs), Lagrange multiplier rules and Karush–Kuhn–Tucker (KKT) conditions may fail to hold. So, we need to develop optimality conditions without CQs, which would give a more practical formulation of optimality conditions for the multiobjective fractional programming problem (P). This motivates us to derive sequential optimality conditions for multiobjective fractional programming problems. Recently, work has been done in this direction for convex programming problems with cone convex constraints by Jeyakumar et al. [10, 12] and Bai et al. [1]. Jeyakumar et al. [10, 12] introduced sequential Lagrange multiplier rules for convex programs with cone convex constraints, using the epigraph of the conjugate function expressed in terms of the ε-subdifferential computed at an optimal solution. These conditions coincide with standard optimality conditions under appropriate CQs. One of the main advantages of the ε-subdifferential, which makes it a useful tool both in theory and practice, is that for a proper lsc convex function it is nonempty at every point of the domain for every ε > 0. Thibault [21] derived sequential optimality conditions using the subdifferential calculus for convex functions with cone convex constraints.
In this paper, our aim is to develop sequential optimality conditions for multiobjective fractional programming problem (P) via scalarization and using the concept of epigraph of conjugate function in terms of ε-subdifferentials computed at ε-weak efficient solution.
The paper is organized as follows: “Preliminaries” deals with some preliminary results that will be used in the sequel. In “Sequential optimality conditions”, we derive sequential optimality conditions. Finally, in “Sequential duality results”, sequential duality results are obtained.
Preliminaries
In this section, we give some basic definitions and results which will be used in the sequel.
Let f: Rn → R.
The ε-subdifferential of f at x̄ ∈ Rn is defined as

∂_ε f(x̄) = {ξ ∈ Rn : f(x) − f(x̄) ≥ ⟨ξ, x − x̄⟩ − ε for all x ∈ Rn},

where ε ≥ 0.
For detailed study on ε-subdifferentials one may refer to Hiriart-Urruty [8].
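The defining inequality lends itself to a direct numerical check. As an illustrative sketch (the quadratic and the grid below are assumptions, not data from the paper): for f(x) = x² at x̄ = 0, minimizing x² − ξx shows that ξ ∈ ∂_ε f(0) exactly when |ξ| ≤ 2√ε:

```python
import numpy as np

def in_eps_subdifferential(xi, f, x_bar, eps, grid):
    """Grid check of f(x) - f(x_bar) >= xi*(x - x_bar) - eps for all sampled x."""
    return all(f(x) - f(x_bar) >= xi * (x - x_bar) - eps for x in grid)

f = lambda x: x**2
grid = np.linspace(-10.0, 10.0, 2001)
# For eps = 1 the eps-subdifferential at 0 is the interval [-2, 2]
print(in_eps_subdifferential(1.0, f, 0.0, 1.0, grid))   # True:  |1| <= 2
print(in_eps_subdifferential(3.0, f, 0.0, 1.0, grid))   # False: |3| >  2
```

Note how the ε-subdifferential at 0 is a whole interval even though the ordinary subdifferential there is the single point {0}; this enlargement is what makes the sequential conditions work without a CQ.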
Remark 2.1
(Rockafellar and Wets [18]). If ξ ∈ ∂_ε f(x), x ∈ Rn and α = ⟨ξ, x⟩ + ε − f(x) ∈ R, then (ξ, α) ∈ epi f*,
where f* denotes the conjugate of the function f and epi f* denotes the epigraph of f*. For the definitions of the conjugate and the epigraph of a function one can see Bector et al. [2].
Remark 2.2
(Rockafellar and Wets [18]). For any scalar λ > 0, (λ f)* = λ ⋆ f*,
where λ ⋆ f stands for the epi-multiple of f. It satisfies (λ ⋆ f)(x) = λ f(λ⁻¹x), so that epi(λ ⋆ f) = λ epi f.
For details on conjugacy theory one may refer to Rockafellar and Wets [18].
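The identity (λf)* = λ ⋆ f* can be sanity-checked numerically. The sketch below is illustrative (the quadratic f and the grid are assumptions); it approximates the conjugate by maximizing over a fine grid and compares (λf)*(ξ) with λ f*(ξ/λ):

```python
import numpy as np

def conjugate(f, grid):
    """Numerical Legendre-Fenchel conjugate f*(xi) = sup_x (xi*x - f(x)),
    with the sup taken over a finite grid."""
    return lambda xi: np.max(xi * grid - f(grid))

f = lambda x: 0.5 * x**2              # self-conjugate: f*(xi) = 0.5*xi^2
lam = 3.0
grid = np.linspace(-50.0, 50.0, 100001)

lhs = conjugate(lambda x: lam * f(x), grid)(2.0)   # (lam*f)*(2)
rhs = lam * conjugate(f, grid)(2.0 / lam)          # (lam ⋆ f*)(2) = lam * f*(2/lam)
print(abs(lhs - rhs) < 1e-4)   # True
```

Both sides evaluate to 2/3 up to grid resolution, matching the exact formula (λf)*(ξ) = ξ²/(2λ) for this quadratic.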
Theorem 2.1
(Theorem 1.2.1, Bector et al. [2]). A function f is a lower semicontinuous (lsc) function if and only if its epigraph is a closed set.
For a set E, the indicator function δ_E is defined as δ_E(x) = 0 if x ∈ E and δ_E(x) = +∞ otherwise.
For a nonempty closed convex set E, δ_E is a proper, lsc, convex function.
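The conjugate of δ_E is the support function of E, a fact that is used repeatedly when the constraint set is absorbed into the objective. A small numerical check, with an illustrative interval E = [−1, 2] chosen for this sketch:

```python
import numpy as np

# Indicator of E = [-1, 2]: 0 on E, +infinity outside.
delta_E = lambda x: 0.0 if -1.0 <= x <= 2.0 else float("inf")

# Its conjugate is the support function sigma_E(xi) = sup_{x in E} (xi*x),
# approximated here by maximizing over a grid covering E.
grid = np.linspace(-1.0, 2.0, 3001)
sigma_E = lambda xi: max(xi * x - delta_E(x) for x in grid)
print(sigma_E(1.0), sigma_E(-1.0))   # 2.0 1.0
```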
Proposition 2.1
(Proposition 2.1, Jeyakumar et al. [10]). Let f: Rn → R be a proper, lsc, convex function and let x̄ ∈ dom f. Then

epi f* = ∪_{ε ≥ 0} {(ξ, ⟨ξ, x̄⟩ + ε − f(x̄)) : ξ ∈ ∂_ε f(x̄)}.

For every ε > 0, ∂_ε f(x̄) is nonempty and hence epi f* is nonempty.
Proposition 2.2
(Rockafellar and Wets [18], Jeyakumar et al. [12]). For proper, lsc, convex functions f_1, f_2: Rn → R,

epi(f_1 + f_2)* = cl(epi f_1* + epi f_2*).
For any set A ⊂ Rn, we denote by co A and clco A the convex hull and the closed convex hull of the set A, respectively.
Proposition 2.3
(Jeyakumar et al. [11]). Let I be an arbitrary index set and let f_i: Rn → R, i ∈ I, be proper, lsc, convex functions. Define f(x) = sup_{i ∈ I} f_i(x). Then

epi f* = clco(∪_{i ∈ I} epi f_i*).
Sequential optimality conditions
In this section, we prove sequential optimality conditions for multiobjective fractional programming problem (P).
We shall use the following lemma, along the lines of Lemma 5.1 of Bai et al. [1], to prove our optimality conditions.
Lemma 3.1
For (P), ifThen
where v ∈ Rp.
Since f_i(·), i = 1, …, p, are convex functions and g_i(·), i = 1, …, p, are concave functions, the functions f_i(·) − v_i g_i(·), i = 1, …, p, are convex for v_i ≥ 0.
Theorem 3.1
x̄ ∈ E is an ε-weak efficient solution of (P) if and only if there exist ε′ ≥ 0, multipliers in R_+, and sequences in Rn such that
and
Assume that for all x ∈ Rn.
Here
Proof
Since x̄ ∈ E is an ε-weak efficient solution of (P), there does not exist any feasible solution x ∈ E such that
Using the parametric approach, problem (P) can be written as
where v ∊ Rp is the parameter.
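The parametric transformation replaces each ratio by the convex function F_i(x) = f_i(x) − v_i g_i(x), which vanishes at x̄ when v_i = f_i(x̄)/g_i(x̄). A minimal sketch with illustrative data (the f_i, g_i and x̄ below are assumptions, not the paper's):

```python
# Illustrative data: convex f_i, concave g_i on E = [0, 1]
f = [lambda x: x**2, lambda x: (x - 1)**2]
g = [lambda x: 1 + x, lambda x: 2 - x]

x_bar = 0.5
v = [fi(x_bar) / gi(x_bar) for fi, gi in zip(f, g)]   # parameter v_i

# F_i(x) = f_i(x) - v_i*g_i(x) is convex (f_i convex, g_i concave, v_i >= 0)
# and vanishes at x_bar by construction.
F = [lambda x, fi=fi, gi=gi, vi=vi: fi(x) - vi * gi(x)
     for fi, gi, vi in zip(f, g, v)]
print([abs(Fi(x_bar)) < 1e-12 for Fi in F])   # [True, True]
```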
By (3) we have that there does not exist any feasible solution x ∊ E such that
where
By weighted sum approach the problem (P1) can be converted to the following scalar optimization problem
By (4), we have
Multiplying the above by the corresponding nonnegative weights and adding, we get
Hence x̄ is an —optimal solution of (P2).
Since x̄ is an —optimal solution of (P2), it is an —optimal solution of the function on E, that is,
Hence
Using Proposition 2.2, we get
Using Lemma 3.1(i) and Remark 2.2, we obtain
Proposition 2.1 implies that there exist R+ and such that
Now, comparing both sides, we get
and
Using (1), we get
Conversely suppose that (1) and (2) hold.
We have to show that is an ε-weak efficient solution of (P).
Suppose, on the contrary, that x̄ is not an ε-weak efficient solution of (P). Then, as argued in the necessary part, x̄ is not a -optimal solution of (P2). Then, there exists a feasible solution x ∈ E such that
Since therefore, we have
Since the multipliers lie in R_+, multiplying (6) by them and adding, we get
Taking limit as n → ∞ and using (1) we get
Using (2) we get
As x is feasible for (P), we have
which is contradictory to (5).
Hence our assumption was wrong, and x̄ is an ε-weak efficient solution of (P).
Corollary 3.1
If, in the above theorem, we impose a constraint qualification, namely that the set in question is closed, then the sequential optimality conditions reduce to the standard KKT conditions.
We now illustrate the above theorem with the help of the following example.
Example 3.1
Consider the problem
subject to
Here f_i, g_i, h: R → R, i = 1, 2.
The set of feasible solutions is given by
Here is not an optimal solution as for but it is an ε1-optimal solution as for ε1 = 2 for all x ∊ E, and but it is an ε2-optimal solution as for , for all x ∊ E.
Now
Then there exist R, i = 1, 2, j = 1, 2, with as for all x ∊ R as
and such that
and
Theorem 3.2
x̄ ∈ E is an ε-weak efficient solution of (P) if and only if there exist sequences in R_+ and Rn with
such that
as n → ∞.
Assume that for all x ∈ Rn.
Proof
Since x̄ is an ε-weak efficient solution of (P), it is an -optimal solution of (P2); hence it solves the following unconstrained problem
where and
Since x̄ is an -optimal solution of (P0), we have for all x ∈ E, that is, for all x ∈ E,
Hence
By Proposition 2.3, we have
By Remark 2.1, we have
Thus, there exist
and with as n → ∞ such that
Now, comparing both sides, we get as n → ∞ and
Using (7) we get
This equation along with the conditions and the fact that is feasible for (P) implies as n → ∞.
Conversely, proceeding on lines similar to Theorem 3.1, we arrive at the following condition
Using the conditions as n → ∞ and the fact that x is feasible for (P), we get
Since for all n ∊ N, we have
That is, as
Since we get
which contradicts (5).
Hence the result.
We now give an example to illustrate the above theorem.
Remark 3.1
Consider Example 3.1 with h(x) replaced by Set of feasible solutions is given by
It can be seen that x̄ is not an optimal solution but it is an ε2-optimal solution.
Then, there exist R, with as for all x ∊ R and
and such that
with as n → ∞
and , as n → ∞.
Sequential duality results
In this section, we prove sequential duality results for (P).
For (P), the sequential Lagrange function L: Rn × R_+^m → R is defined as
The sequential dual for (P) is given by
In the following theorem, we establish a sequential duality result.
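As a hedged illustration, assume the scalarized Lagrangian takes the common form L(x, μ) = Σ_i λ_i (f_i(x) − v_i g_i(x)) + Σ_j μ_j h_j(x); this is an assumption for the sketch, not necessarily the paper's exact definition. Weak duality, inf_x L(x, μ) ≤ primal value for every μ ≥ 0, can then be checked numerically on a toy instance:

```python
import numpy as np

# Assumed scalarized Lagrangian (illustrative data, single constraint h):
# L(x, mu) = sum_i lam_i*(f_i(x) - v_i*g_i(x)) + mu*h(x)
f = [lambda x: x**2, lambda x: (x - 1)**2]
g = [lambda x: 1 + x, lambda x: 2 - x]
h = lambda x: x - 1.0          # feasible set E = {x : h(x) <= 0}
lam = [0.5, 0.5]
x_bar = 0.5
v = [fi(x_bar) / gi(x_bar) for fi, gi in zip(f, g)]

def phi(x):    # scalarized objective
    return sum(l * (fi(x) - vi * gi(x)) for l, fi, gi, vi in zip(lam, f, g, v))

grid = np.linspace(-5.0, 5.0, 10001)
primal = min(phi(x) for x in grid if h(x) <= 0)
for mu in [0.0, 0.5, 2.0]:
    dual = min(phi(x) + mu * h(x) for x in grid)
    assert dual <= primal + 1e-12        # weak duality: inf_x L(x, mu) <= primal
print("weak duality holds on the sample")
```

The inequality holds because μ ≥ 0 and h(x) ≤ 0 on the feasible set, so the Lagrangian never exceeds the scalarized objective there.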
Theorem 4.1
Let x̄ ∈ E be an ε-weak efficient solution of (P) with the stated optimal value. Then
Assume that for all x ∈ Rn.
Proof
Since x̄ is an ε-weak efficient solution of (P), proceeding on the lines of Theorem 3.1, there exist ε′ ≥ 0 and sequences in Rn and R_+ such that
Using the definitions of the conjugates and comparing both sides, we get
and
Using the definition, we get for all x ∈ E, which gives that
as ε′ ≥ 0.
Now since R+,
and therefore
Now
Using above in (11) we get
Now,
which implies
Using above in (12) and then using (1) we get
which implies
Hence
Thus,
To show
Since x is feasible for (P) and the multipliers lie in R_+, we have
which implies
That is,
Relations (13) and (14) imply the required result, which completes the proof.
Corollary 4.1
Let x̄ ∈ E be an ε-weak efficient solution of (P) with the stated optimal value. If, in the above theorem, we impose a constraint qualification, namely that the set in question is closed, then
Application [26]
The most important and common application of multiobjective fractional programming is the transportation problem. The multiobjective linear fractional transportation problem involves several criteria, such as maximization of transport profitability ratios like profit/cost or profit/time, and it has two types of nodes: sources and destinations. The problem is as follows:
Let there be m sources and n destinations. At each source i, let a_i be the amount of the homogeneous product available, which is transported to the n destinations to satisfy the demand of b_j units of the product at destination j. Let x_ij be the number of units of goods shipped from source i to destination j. Given, for each objective function, a profit matrix which determines the profit gained from shipping from i to j, a cost matrix which determines the cost per unit of shipment from i to j, and scalars which determine some constant profit and cost, respectively, the problem is
subject to
where is a vector of objective functions.
We suppose that the stated conditions hold for all x ∈ S, where S denotes a convex and compact feasible set defined by (5.1), (5.2) and (5.3), and d_q(x) are continuous on S.
Further, a_i > 0 for all i, b_j > 0 for all j, x_ij ≥ 0 for all i, j, and Σ_i a_i = Σ_j b_j.
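A minimal numeric illustration of one profit/cost ratio objective on a 2×2 transportation polytope; all data below (profits, costs, supplies, demands, constants) are hypothetical:

```python
import numpy as np

p = np.array([[4.0, 3.0], [2.0, 5.0]])   # profit per unit shipped i -> j
c = np.array([[2.0, 1.0], [1.0, 3.0]])   # cost per unit shipped i -> j
a = np.array([10.0, 15.0])               # supplies a_i
b = np.array([12.0, 13.0])               # demands b_j (balanced: sum a = sum b)
p0, c0 = 1.0, 2.0                        # constant profit and cost terms

def ratio(x):
    """Linear fractional objective: total profit over total cost."""
    return (np.sum(p * x) + p0) / (np.sum(c * x) + c0)

# A feasible shipment plan: rows sum to supplies, columns to demands.
x = np.array([[10.0, 0.0], [2.0, 13.0]])
assert np.allclose(x.sum(axis=1), a) and np.allclose(x.sum(axis=0), b)
print(round(ratio(x), 4))   # 1.746
```

With several such ratio objectives (e.g. profit/cost and profit/time) evaluated on the same polytope, the problem becomes an instance of (P).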
Conclusion
We know that constraint qualifications are required to obtain necessary optimality conditions, but these qualifications are sometimes very difficult to verify. In this paper, we develop sequential optimality conditions, in the absence of any constraint qualification, for the multiobjective fractional programming problem (P) via scalarization, using the epigraph of the conjugate function expressed in terms of ε-subdifferentials computed at an ε-weak efficient solution. We also derive sequential duality results for problem (P).
References
Bai, F., Wu, Z., Zhu, D.: Sequential Lagrange multiplier condition for ε-optimal solution in convex programming. Optimization 57(5), 669–680 (2008)
Bector, C.R., Chandra, S., Dutta, J.: Principles of Optimization Theory. Narosa, New Delhi (2005)
Choo, E.U., Atkins, D.R.: Proper efficiency in nonconvex programming. Math. Oper. Res. 8, 467–470 (1983)
Coladas, L., Li, Z., Wang, S.: Optimality conditions for multiobjective and nonsmooth minimization in abstract spaces. Bull. Aust. Math. Soc. 50, 205–218 (1994)
Gajek, L., Zagrodny, D.: Approximate necessary conditions for locally weak pareto optimality. J. Optim. Theory Appl. 82, 49–58 (1994)
Geoffrion, A.M.: Proper efficiency and the theory of vector maximization. J. Optim. Theory Appl. 22, 618–630 (1968)
Gerth, Chr: Nonconvex separation theorems and some applications in vector optimization. J. Optim. Theory Appl. 67, 297–320 (1990)
Hiriart-Urruty, J.B.: ε-subdifferential. In: Aubin, J.B., Vinter, R. (eds.) Convex Analysis and Optimization, pp. 43–92. Pitman, London (1982)
Jeyakumar, V.: Asymptotic dual conditions characterizing optimality for convex programs. J. Optim. Theory Appl. 93, 153–165 (1997)
Jeyakumar, V., Lee, G.M., Dinh, N.: New sequential Lagrange multiplier conditions characterizing optimality without constraint qualifications for convex programs. SIAM J. Optim. 14(2), 534–547 (2003)
Jeyakumar, V., Rubinov, A.M., Glover, B.M., Ishizuka, Y.: Inequality systems and global optimization. J. Math. Anal. Appl. 202, 900–919 (1996)
Jeyakumar, V., Wu, Z.Y., Lee, G.M., Dinh, N.: Liberating the subgradient optimality conditions from constraint qualifications. J. Global Optim. 36, 127–137 (2006)
Kaliszewski, I.: A theorem on nonconvex functions and its application to vector optimization. Eur. J. Oper. Res. 80, 439–449 (1995)
Li, Z., Wang, S.: Lagrangian multipliers and saddle points in multiobjective programming. J. Optim. Theory Appl. 83, 63–81 (1994)
Loridan, P.: Necessary conditions for ε-optimality. Math. Program. Study 19, 140–152 (1982)
Loridan, P., Morgan, J.: Penalty functions in ε-programming and ε-minimax problems. Math. Program. 26, 213–231 (1983)
Liu, J.C.: ε-Duality theorem of nondifferentiable nonconvex multiobjective programming. J. Optim. Theory Appl. 69, 152–167 (1991)
Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
Strodiot, J.J., Nguyen, V.H., Heukemes, N.: ε-Optimal solutions in nondifferentiable convex programming and some related questions. Math. Program. 25, 307–328 (1983)
Tanaka, T.: A new approach to approximation of solutions in vector optimization problems. In: Proceedings of APROS, 1994, World Scientific, Singapore, pp. 497–504 (1995)
Thibault, L.: Sequential convex subdifferential calculus and sequential Lagrange multipliers. SIAM J. Control Optim. 35, 1434–1444 (1997)
Yokoyama, K.: ε-Optimality criteria for convex programming problems via exact penalty functions. Math. Program. 56, 233–243 (1992)
Yokoyama, K.: ε-Optimality criteria for vector minimization problems via exact penalty functions. J. Math. Anal. Appl. 187, 296–305 (1994)
Yokoyama, K.: Epsilon approximate solutions for multiobjective programming problems. J. Math. Anal. Appl. 203, 142–149 (1996)
Li, Z., Wang, S.: ε-Approximate solutions in multiobjective optimization. Optimization 44, 161–174 (1998)
Additional Reference
Cetin, N., Tiryaki, F.: A fuzzy approach using generalized Dinkelbach's algorithm for multiobjective linear fractional transportation problem. Math. Probl. Eng., Article ID 702319, 1–10 (2014)
Acknowledgments
The author wishes to thank the unknown referees of this paper for valuable suggestions which have improved the final presentation of the paper.
Kohli, B. Sequential optimality conditions for multiobjective fractional programming problems. Math Sci 8, 128 (2014). https://doi.org/10.1007/s40096-014-0128-3