A new algorithm for linear multiobjective programming problems with bounded variables

Abstract
In this paper, we present a new adapted algorithm for determining the solution set of a multiobjective linear programming problem where the decision variables are upper and lower bounded. The method is an extension of the direct support method developed by Gabasov and Kirillova for single-objective programming. Its particularity is that it avoids any preliminary transformation of the decision variables. The method is effective, simple to use, and speeds up the resolution process. We use the suboptimality criterion of the single-objective method to find the ϵ-efficient extreme points and the ϵ-weakly efficient extreme points of the multiobjective problem.


The method handles the problems as they are initially formulated. It is simple to use, allows us to treat problems in a natural way, and speeds up the whole resolution process, yielding a significant gain in memory space and CPU time.
We propose an efficiency test for nonbasic variables and a new procedure for finding a first efficient extreme point. By exploiting the principle of the direct support method, we propose an algorithm for finding all the efficient extreme points.
Furthermore, the method in single-objective programming includes a suboptimality criterion which allows the algorithm to be stopped at any desired accuracy. We use this criterion in our multiobjective case to find the ϵ-efficient extreme points and the ϵ-weakly efficient extreme points of the problem.
The rest of this paper is organized as follows. Section 2 briefly reviews definitions and concepts of linear multiobjective programming. In Sect. 3, we propose a procedure for finding an initial efficient extreme point. A procedure to test the efficiency of a nonbasic variable and a method for computing all the efficient extreme points are proposed in Sect. 4. In Sect. 5, an algorithm for computing all efficient extreme points is given. A numerical example illustrates the applicability of the proposed method in Sect. 6. Finally, a conclusion is given in Sect. 7.

Statement of the problem and definitions
A multiobjective linear programming problem with bounded variables can be presented in the following canonical form:
$$\max\; Cx, \qquad x \in S, \qquad\qquad (1)$$
where $S$ is the set of feasible decisions, defined as follows:
$$S = \{x \in \mathbb{R}^n : Ax = b,\ d^- \le x \le d^+\},$$
with $A$ an $m \times n$ matrix, $\operatorname{rank}(A) = m \le n$, $b \in \mathbb{R}^m$, $d^- \in \mathbb{R}^n$, $d^+ \in \mathbb{R}^n$. We define the criterion function $C$ as follows:
$$Cx = (c_1^T x, \ldots, c_k^T x)^T,$$
where $C$ is a $k \times n$ matrix and the $c_i^T$, $i = \overline{1,k}$, are $n$-vectors. We suppose that $S$ is a bounded set and that the problem is nondegenerate; therefore, every feasible solution has at least $m$ noncritical components, where $m = \operatorname{rank}(A)$.
The problem of multiobjective linear programming with bounded variables can then be regarded as the problem of searching for all the feasible solutions which are efficient or weakly efficient.
The set of efficient solutions is denoted S E .
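To fix ideas, here is a minimal sketch of how an instance of problem (1) can be represented and how a weighted-sum scalarization over S is solved. The data are illustrative placeholders, not an instance from this paper, and scipy.optimize.linprog merely stands in for the direct support method of [5].

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data for problem (1): maximize Cx over S = {x : Ax = b, d- <= x <= d+}.
C = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0]])      # k x n criterion matrix (k = 2, n = 3)
A = np.array([[1.0, 1.0, 1.0]])      # m x n constraint matrix, rank(A) = m = 1
b = np.array([4.0])
d_minus = np.zeros(3)                # lower bounds d-
d_plus = np.full(3, 3.0)             # upper bounds d+

def solve_weighted_sum(lam):
    """Maximize lam^T C x over S (linprog minimizes, hence the sign flip)."""
    res = linprog(-(lam @ C), A_eq=A, b_eq=b,
                  bounds=list(zip(d_minus, d_plus)), method="highs")
    assert res.success, res.message
    return res.x

# For lam with all components positive, an optimal extreme point is efficient.
x = solve_weighted_sum(np.array([1.0, 1.0]))
print("efficient extreme point:", x, "criterion values:", C @ x)
```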
The following properties follow directly from the definitions of efficient solutions and ϵ-efficient solutions.
The following theorems state the conditions of existence of efficient solutions and weakly efficient solutions.
Consider the following sets:

Lemma 2.7
The point x ∈ S is ϵ-efficient in the problem (1) if and only if there exists an efficient point
The multiobjective linear programming problem consists of determining the whole set of the efficient decisions and all weakly efficient decisions of the problem (1) for given C, A, b, d^− and d^+.

Procedure for finding an initial efficient extreme point
We propose a procedure for finding an initial efficient extreme point, inspired by the one proposed by Benson [1], taking into account the specificity of the constraints of the problem (1). This procedure consists of solving a particular linear program by the direct support method [5].
Let λ ∈ Λ and consider the weighted linear program
$$\max\; \lambda^T Cx, \qquad x \in S. \qquad\qquad (3)$$
If we set y = x − d^−, then we obtain a linear program (4) in the variable y, with bounds 0 ≤ y ≤ d^+ − d^−. To establish the resolution procedure, a further linear program (5) is defined for a given x^0 ∈ S, where e is a vector of ones. The suggested procedure is then given by the following three steps:
Step (1): Find a feasible point x^0 ∈ S. If none exists, stop.
Step (2): Find an optimal solution (u^0, w^0, γ^0, α^0) of the linear program (5) using the direct support method.
Step (3): Obtain an optimal extreme point of the linear program (4) with λ = u^0 + e, using the direct support method for the resolution of a generalized linear program [5].
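The displays (3)-(5) above were lost in extraction. As a hedged illustration of this first phase, the sketch below implements the classical Benson-type construction that the procedure builds on: starting from a feasible x^0, maximize the total improvement e^T z subject to Cx − z = Cx^0, z ≥ 0, x ∈ S; any optimal extreme point is an efficient extreme point of (1). Again, scipy.optimize.linprog stands in for the direct support method of [5].

```python
import numpy as np
from scipy.optimize import linprog

def initial_efficient_point(C, A, b, d_minus, d_plus, x0):
    """Benson-type program: max e^T z  s.t.  Ax = b, Cx - z = C x0,
    d- <= x <= d+, z >= 0.  An optimal extreme point (x*, z*) yields an
    efficient extreme point x* of problem (1)."""
    k, n = C.shape
    m = A.shape[0]
    # Stack the variables as (x, z) with n + k components.
    A_eq = np.block([[A, np.zeros((m, k))],
                     [C, -np.eye(k)]])
    b_eq = np.concatenate([b, C @ x0])
    cost = np.concatenate([np.zeros(n), -np.ones(k)])  # minimize -e^T z
    bounds = list(zip(d_minus, d_plus)) + [(0, None)] * k
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    if not res.success:
        # With S bounded (as assumed in the paper), this should not occur.
        raise ValueError(res.message)
    return res.x[:n]
```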
Let x 0 be the feasible solution selected at step (1) of the procedure.
The following theorem reduces the search for an efficient solution of the multiobjective problem (1) to the resolution of a single linear program with bounded variables.

Theorem 3.2 [3]
The following linear program:
$$\max\; e^T z, \qquad Cx - z = Cx^0, \quad Ax = b, \quad x \ge 0, \quad z \ge 0,$$
admits an optimal solution if and only if the multiobjective problem (1) has an efficient solution.

Theorem 3.3 The linear program (6) admits an optimal solution if, and only if, the multiobjective problem (1) has an efficient solution.
Proof The dual program of (6) is given by (9). However, as e^T C d^− is a constant value, the linear program (9) is equivalent to the following one, (10). If we set y + d^− = x, then we obtain the program (8). By applying Theorem 3.2 and using duality theory, we establish the theorem.
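The displays of this proof were lost in extraction; the variable shift the argument relies on is elementary and can be recorded as follows (a reconstruction, not the original display). With x = y + d^−,
$$e^T C x \;=\; e^T C\,(y + d^-) \;=\; e^T C y \;+\; e^T C d^-,$$
so the two objectives differ only by the constant e^T C d^−, and the corresponding programs have the same optimal solutions.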

Computing all efficient extreme points
In this phase, we locate all the efficient extreme points by introducing the nonoptimal nonbasic variables into the basis one by one, using the direct support method adapted to the multiobjective aspect of the problem. The principle of the method is as follows: starting from an initial efficient extreme point, we determine a neighboring solution and test whether it is efficient. If it is not, we return to another efficient point, and the process is reiterated. A test of the efficiency of a nonbasic variable is therefore necessary.

Test of efficiency of a nonbasic variable
In order to test the efficiency of a point x^0 in the multiobjective linear program (1), we introduce a k-dimensional column vector s and define the following linear program:
$$\max\; e^T s, \qquad Ax = b, \quad Cx - Is = Cx^0, \quad d^- \le x \le d^+, \quad s \ge 0, \qquad\qquad (11)$$
where I is the identity matrix of order k and e is a k-vector of ones; the point x^0 is efficient if and only if the optimal value of (11) is zero. The problem (11) is a generalized linear program (i.e., some variables are nonnegative and some are bounded). We solve it by the adapted direct support method [5], after rewriting it in a compact form with suitable notations. However, two particular cases can arise.
Particular case 1: If all the elements of the matrix C are nonnegative, the obtained test program (13) can be rewritten, with suitable notations, in the form (14), a linear program with bounded variables only. We solve the linear program (14) by the direct support method adapted to the resolution of a linear program with bounded variables.

Particular case 2
If all the elements of the matrix C are nonpositive, the test program takes an analogous form; using suitable notations, we obtain a test program which can again be solved by the direct support method.
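Up to its role, the test program (11) is the same linear program as the Benson-type construction sketched in Sect. 3, now read as a test: x^0 is efficient if and only if the optimal value of max e^T s is zero. A compact sketch, under the same illustrative conventions as before:

```python
import numpy as np
from scipy.optimize import linprog

def is_efficient(C, A, b, d_minus, d_plus, x0, tol=1e-9):
    """Test program (11) sketch: max e^T s  s.t.  Ax = b, Cx - I s = C x0,
    d- <= x <= d+, s >= 0.  The point x0 is efficient iff the optimum is 0."""
    k, n = C.shape
    m = A.shape[0]
    A_eq = np.block([[A, np.zeros((m, k))],
                     [C, -np.eye(k)]])
    b_eq = np.concatenate([b, C @ x0])
    cost = np.concatenate([np.zeros(n), -np.ones(k)])  # linprog minimizes
    bounds = list(zip(d_minus, d_plus)) + [(0, None)] * k
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.success and -res.fun <= tol             # max e^T s == 0
```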

Method of computing all the efficient extreme points
Following the direct support method for the resolution of a linear program, we propose a method which generates, from one efficient extreme point, all the others, using the direct support method modified for the circumstance. We denote by Z(x) = Cx the criterion function of the problem (1).
We can then split the vectors and the matrices according to the set of basic indices J_B and the set of nonbasic indices J_N:
$$x = (x_B, x_N), \qquad A = (A_B, A_N), \qquad C = (C_B, C_N),$$
where the basic submatrix A_B is nonsingular. A vector x satisfying the constraints of (1) is called a feasible solution of the problem.
• A feasible solution x^0 is said to be optimal for the objective i if $c_i^T x^0 = \max_{x \in S} c_i^T x$. The solution x^0 is then weakly efficient for the problem (1).
• Let x^0 be the optimal solution for the objective function i and let ϵ = (ϵ_1, . . . , ϵ_k) ≥ 0 be a fixed vector. A feasible solution x is said to be ϵ_i-optimal or suboptimal for the objective i if $c_i^T x^0 - c_i^T x \le ϵ_i$. The solution x is then ϵ-weakly efficient for the problem (1).
• Let x 0 be an efficient solution in S in the problem (1) and

Increment formula of the objective function
Let {x, J_B} be a support feasible solution for the problem (1) and consider another arbitrary feasible solution x̄ = x + Δx. The increment of the objective function is
$$\Delta Z = C\bar{x} - Cx = C\Delta x = C_B \Delta x_B + C_N \Delta x_N .$$
Since $A\Delta x = 0$, we have $\Delta x_B = -A_B^{-1} A_N \Delta x_N$. We define the potential matrix U and the estimation matrix E:
$$U = C_B A_B^{-1}, \qquad E = UA - C .$$
Therefore, the increment formula takes the following final form:
$$\Delta Z = -E_N \Delta x_N .$$
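In matrix terms, these quantities can be computed directly. The following numpy sketch (with JB a list of the m basic column indices) is an illustration under the definitions above, not the paper's implementation:

```python
import numpy as np

def potential_and_estimation(C, A, JB):
    """Potential matrix U = C_B A_B^{-1} and estimation matrix E = U A - C.
    By construction E[:, JB] = 0, and the increment formula reads
    Delta Z = -E_N Delta x_N on the nonbasic components."""
    A_B = A[:, JB]                # m x m basic submatrix, assumed nonsingular
    C_B = C[:, JB]                # k x m basic part of the criterion matrix
    U = C_B @ np.linalg.inv(A_B)  # k x m potential matrix
    E = U @ A - C                 # k x n estimation matrix
    return U, E
```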

Theorem 4.3 Let {x, J_B} be a support feasible solution for the problem (1) and let i ∈ K. If
$$E_{ij} \ge 0 \ \text{for all } j \in J_N \text{ with } x_j = d_j^-, \qquad E_{ij} \le 0 \ \text{for all } j \in J_N \text{ with } x_j = d_j^+,$$
then x is a weakly efficient point for the problem (1). If the support feasible solution is nondegenerate, then these relations are also necessary for x to be weakly efficient.

The subefficiency criterion
The value
$$\beta_i(x, J_B) = \sum_{\substack{j \in J_N \\ E_{ij} > 0}} E_{ij}\,(x_j - d_j^-) \;+\; \sum_{\substack{j \in J_N \\ E_{ij} < 0}} E_{ij}\,(x_j - d_j^+)$$
is called the subefficiency formula of the objective i, i = 1, . . . , k. The method of searching for all the efficient extreme points consists of introducing into the basis, one by one, the nonbasic variables corresponding to the first efficient extreme point found in the first phase.
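A direct rendering of this formula follows; note that the displayed form above is itself a reconstruction of the standard Gabasov and Kirillova estimate, so this sketch inherits that assumption.

```python
import numpy as np

def beta(E, x, d_minus, d_plus, JN, i):
    """Subefficiency estimate beta_i(x, J_B) for criterion i: it bounds
    max_{x' in S} c_i^T x' - c_i^T x, so beta_i <= eps_i certifies
    eps_i-optimality for the objective i."""
    total = 0.0
    for j in JN:
        if E[i, j] > 0:
            total += E[i, j] * (x[j] - d_minus[j])
        elif E[i, j] < 0:
            total += E[i, j] * (x[j] - d_plus[j])
    return total
```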
The construction of the new feasible solution x̄ = x + θ^0 l consists of choosing a vector l ∈ R^n, called the improvement direction, and a nonnegative real number θ^0, which is the maximum step along this direction.
Let j_0 be the index of the candidate to enter the basis, the criterion i_0 being the one selected for improvement. In addition, the step θ^0 has to keep the new point x̄ = x + θ^0 l feasible. Consequently, the maximum step θ^0 along the direction l is equal to
$$\theta^0 = \min(\theta_{j_0}, \theta_{j_1}),$$
where θ_{j_0} is the step allowed by the bounds of the entering variable x_{j_0} and
$$\theta_{j_1} = \min_{j \in J_B} \theta_j, \qquad \theta_j = \begin{cases} (d_j^+ - x_j)/l_j, & \text{if } l_j > 0,\\ (d_j^- - x_j)/l_j, & \text{if } l_j < 0,\\ +\infty, & \text{if } l_j = 0. \end{cases}$$
The new feasible solution is thus x̄ = x + θ^0 l.
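A sketch of this ratio test for bounded variables (sign = +1 when x_{j_0} leaves its lower bound, -1 when it leaves its upper bound; names are illustrative):

```python
import numpy as np

def max_step(A, x, d_minus, d_plus, JB, j0, sign):
    """Build the improvement direction l (with A l = 0) for the entering
    index j0 and return the maximum feasible step theta0 = min(theta_j0, theta_j1)."""
    l = np.zeros(len(x))
    l[j0] = sign
    l[JB] = -sign * np.linalg.solve(A[:, JB], A[:, j0])   # keeps A l = 0
    # Step allowed by the entering variable's own bounds:
    theta_j0 = d_plus[j0] - x[j0] if sign > 0 else x[j0] - d_minus[j0]
    # Step allowed by the basic variables before one hits a bound:
    theta_j1 = np.inf
    for j in JB:
        if l[j] > 0:
            theta_j1 = min(theta_j1, (d_plus[j] - x[j]) / l[j])
        elif l[j] < 0:
            theta_j1 = min(theta_j1, (d_minus[j] - x[j]) / l[j])
    return min(theta_j0, theta_j1), l                     # new point: x + theta0 * l
```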

Calculation of β(x̄, J_B).
We compute β(x̄, J_B) by the subefficiency formula above, where the components x̄_j for j ∈ J_N are given by x̄_j = x_j + θ^0 l_j. If β_{i_0}(x̄, J_B) ≤ ϵ_{i_0}, then the feasible solution x̄ is ϵ-optimal for the objective i_0, and we then consider all the nonbasic variables.
If not, we change J_B in the following way: the leaving index j_1 is replaced by the entering index j_0, and the new support feasible solution is written {x̄, J̄_B} with J̄_B = (J_B \ {j_1}) ∪ {j_0}. Otherwise, we stop the procedure, having obtained an extreme point. The test program (given in the previous section) is then used to test the efficiency of this extreme point, and we start the process again by considering another nonbasic variable. However, the use of this test is not always necessary, since some solutions are clearly efficient or nonefficient, according to the following observations:

Observation 1 Let x be a basic feasible solution.
If there is j ∈ J_N for which the relations (18) are satisfied for all i = 1, . . . , k, then ΔZ ≤ 0, i.e., Z̄ − Z ≤ 0, so Z̄ ≤ Z with Z̄ ≠ Z; therefore, the introduction of j into the basis leads to a solution x̄ dominated by the current solution x. Thus, the introduction of j into the basis is useless.

Observation 2 Let x be a basic feasible solution.
If there is j_0 ∈ J_N such that, for all i ∈ {1, . . . , k}, the relations (18) are not satisfied, then the introduction of j_0 into the basis leads to a solution x̄ dominating the current solution x.

Observation 3 Let x be a basic feasible solution.
If there is i ∈ {1, . . . , k} such that β_i(x, J_B) ≤ ϵ_i, then the maximum of the i-th criterion is attained with the precision ϵ_i.
If β_i(x, J_B) = 0, then the maximum of the i-th criterion is attained and the solution x is weakly efficient for the problem.
In conclusion, only the remaining columns are candidates for entering the basis. In this case, we can say nothing about the efficiency of the corresponding solution x̄; for this reason, we apply the efficiency test stated before.
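The relations (18) did not survive extraction; given the increment formula ΔZ = −E_N Δx_N, a natural reading (stated here as an assumption, not the paper's display) is the sign condition E_{ij} ≥ 0 when x_j = d_j^− and E_{ij} ≤ 0 when x_j = d_j^+. Under that assumption, the three observations amount to the following classification of nonbasic columns:

```python
import numpy as np

def classify_nonbasic(E, x, d_minus, d_plus, JN, tol=1e-9):
    """Classify nonbasic columns per Observations 1-3, assuming relations (18)
    are the sign conditions E[i, j] >= 0 if x_j is at its lower bound and
    E[i, j] <= 0 if x_j is at its upper bound (a reconstruction, see text)."""
    dominated, improving, undecided = [], [], []
    for j in JN:
        at_lower = abs(x[j] - d_minus[j]) <= tol
        signs = E[:, j] if at_lower else -E[:, j]
        if np.all(signs >= -tol):      # (18) holds for every i: pivot is useless
            dominated.append(j)
        elif np.all(signs < -tol):     # (18) fails for every i: pivot improves x
            improving.append(j)
        else:                          # mixed signs: run the efficiency test
            undecided.append(j)
    return dominated, improving, undecided
```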

Algorithm for computing all efficient extreme points
The steps of the method of searching for all efficient extreme points are summarized in the following algorithm:
1. Find a feasible solution of the problem; let it be x^0.
• If one exists, go to 2.
• If not, stop: the problem is infeasible.
2. Find the first efficient extreme point using the following procedure:
• Find an optimal solution (u^0, w^0, γ^0, α^0) of the program (5) using the direct support method.
• Obtain an optimal extreme point of the linear program (4) with λ = u^0 + e, using the direct support method for the resolution of a linear program with bounded variables; let x^1 be the obtained solution.
• If not, go to 7.
6. Can another objective be improved?
• If so, go to 7.
• If not, go to 12.
7. Is there j ∈ J_N such that the relations (18) are not satisfied?
• If so, go to 8.
• If not, go to 11.
8. Consider all j ∈ J_N.
9. Does the introduction of the corresponding j-th column lead to an unprocessed basis?
• If so, set s = s + 1 and go to 4.
• If not, go to 10.
10. Is there an already stored unexplored basis?
• If so, set s = s + 1 and go to 4.
• If not, stop: all the vertices are determined.
11. Consider the test program of Sect. 4.1 for x^s.
• If max g = 0, go to 12.
• If not, go to 13.
12. The solution x^s is efficient; go to 13.
13. Is there j ∈ J_N^s such that the relations (18) are satisfied?
• If so, go to 14.
• If not, go to 10.
14. Is s ≤ n − m?
• If so, go to 15.
• If not, go to 10.
15. Store the corresponding basic indices that lead to an unprocessed basis and go to 10.
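As a control-flow summary, the enumeration above is essentially a search over adjacent bases, with the test program as filter and the (18)-based shortcuts of Observations 1-3 as optional pruning. The sketch below renders that flow generically; neighbours, is_efficient and the rounding key are placeholder names, not the paper's bookkeeping of stored bases.

```python
import numpy as np

def enumerate_efficient_vertices(x1, neighbours, is_efficient):
    """Generic driver: depth-first search over adjacent extreme points,
    keeping those that pass the efficiency test.  `neighbours(x)` yields
    the extreme points adjacent to x; `is_efficient(x)` is the test
    program of Sect. 4.1.  Relies on the connectedness of the set of
    efficient vertices."""
    key = lambda x: tuple(np.round(x, 9))    # identify already processed points
    stored = [x1]                            # stored, not yet explored (step 10)
    seen = {key(x1)}
    efficient = []
    while stored:
        x = stored.pop()
        if is_efficient(x):                  # steps 11-12
            efficient.append(x)
            for y in neighbours(x):          # candidate pivots (steps 7-9)
                if key(y) not in seen:       # unprocessed bases only
                    seen.add(key(y))
                    stored.append(y)
    return efficient
```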

Numerical example
Let us consider the bicriterion linear problem with bounded variables (19). Let x^0 = (0, 0, 1)^T be an initial feasible solution of this problem.
1. Search for the first efficient extreme point.
The problem to solve is the one given by the procedure of Sect. 3. The obtained optimal solution is the point x^1, which is thus the first efficient extreme point of the problem (19).

2. Search for all the efficient extreme points.
• Introduce the nonbasic variable x_2 into the basis. We set j_0 = 2, the index of the candidate to enter the basis.
– Determine the criterion i_0.
– Compute the appropriate direction l.
– Compute the step θ^0 = min(θ_{j_0}, θ_{j_1}); the maximal step is then θ^0.
– Compute x^2 = x^1 + θ^0 l.
• Introduce the nonbasic variable x_2 into the basis. We set j_0 = 2, the index of the candidate to enter the basis.
– Compute the estimation matrix.

Conclusion
In this paper, we have focused on solving the multiobjective linear programming problem where the decision variables are upper and lower bounded. The algorithm is simple to use; it treats problems in a natural way and speeds up the whole resolution process. We first introduced a new procedure for finding an initial efficient extreme point. Subsequently, we proposed a test of the efficiency of a nonbasic variable and a detailed method for computing all the efficient extreme points. We used the suboptimality criterion of the single-objective method to find the ϵ-efficient extreme points and the ϵ-weakly efficient extreme points of the problem. Finally, we gave an algorithm to search for all efficient extreme points.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.