Article Outline
Keywords
Generation of E(P)
Klein–Hannan Method
Kiziltan–Yucaoglu Method
Interactive Methods
Gonzalez–Reeves–Franz Algorithm
Steuer–Choo Method
The MOMIX Method
See also
References
Keywords
Multiobjective programming; integer linear programming.
From the 1970s onwards, multiobjective linear programming (MOLP) methods with continuous solutions have been developed [8]. However, it is well known that discrete variables are unavoidable in the linear programming modeling of many applications, for instance to represent an investment choice, a production level, etc.
The mathematical structure is then that of integer linear programming (ILP); combined with MOLP, it yields a MOILP problem. Unfortunately, MOILP cannot be solved by simply combining ILP and MOLP methods, because it has specific difficulties of its own.
In MOLP, every efficient solution is an optimal solution of a weighted-sum problem (LP_{λ}) with strictly positive weights. This fundamental principle, often called Geoffrion's theorem, is no longer valid in the presence of discrete variables because the set D is not convex. The set of optimal solutions of problem (P_{λ}), defined as problem (LP_{λ}) in which LD is replaced by D, is only a subset SE(P) of E(P); the solutions in SE(P) are called supported efficient solutions, while the solutions belonging to NSE(P) = E(P) \ SE(P) are called nonsupported efficient solutions.
Let us note that another characterization of E(P) is given in [2] for the particular case of binary variables.
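The distinction between supported and nonsupported efficient solutions can be illustrated on a toy biobjective example. In the sketch below (the three objective vectors are illustrative assumptions, not data from the article), all three points are efficient, but (2, 2) maximizes the weighted sum λz₁ + (1 − λ)z₂ for no weight λ ∈ (0, 1), so it is nonsupported:

```python
from fractions import Fraction

# Toy objective vectors (assumed): each is efficient (none dominates another),
# yet (2, 2) lies strictly inside the convex hull of the images and is
# therefore a nonsupported efficient solution.
points = [(5, 0), (2, 2), (0, 5)]

def is_supported(p, pts, steps=1000):
    # Finite sample of strictly positive weights; a sketch of the test,
    # not an exact procedure.
    for i in range(1, steps):
        lam = Fraction(i, steps)
        best = max(lam * a + (1 - lam) * b for a, b in pts)
        if lam * p[0] + (1 - lam) * p[1] == best:
            return True
    return False

supported = [p for p in points if is_supported(p, points)]      # SE
nonsupported = [p for p in points if p not in supported]        # NSE
```

For (2, 2) the weighted sum always equals 2, while max(5λ, 5(1 − λ)) ≥ 2.5 for every λ, so no weighting vector can ever select it.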
Two types of problems can be analysed:

Generate E(P) explicitly. Several methods have been proposed; they are reviewed in [10]. Below we will present two of them, which appear general, representative and efficient.

To determine interactively with the decision maker a ‘best compromise’ in E(P) according to the preferences of the decision maker. Some of the existing approaches are reviewed in [11]; below we will describe three of these interactive methods.
Generation of E(P)
Klein–Hannan Method
See [5]. This is an iterative procedure for sequentially generating the complete set of efficient solutions for problem (P) (we suppose that the coefficients c _{ j } ^{(k)} are integers); it consists in solving a sequence of progressively more constrained single objective ILP problems and can be implemented through use of any ILP algorithm.

(Initialization: step 0) An objective function l ∊ {1, …, K} is chosen arbitrarily and the following single objective ILP problem is considered:
$$ (\text{P}_{0})\quad \max_{X\in D}z_{l}(X). $$
Let E(P_{0}) be the set of all optimal solutions of (P_{0}) and let E_{0}(P) be the set of solutions defined as E_{0}(P) = E(P_{0}) ∩ E(P). Thus, E_{0}(P) is the subset of nondominated solutions in E(P_{0}).

(Step j, j ≥ 1) The efficient solutions generated at the previous steps are denoted by \( X_{r}^{\ast} \), r = 1, …, R, i.e. \( \bigcup_{i = 1}^{j - 1} E_{i}(P) = \{X_{r}^{\ast}\colon r = 1, \ldots, R\} \). In this jth step, the following problem is solved:
$$ (\text{P}_{j})\quad \begin{cases} \displaystyle \max_{X\in D} \ z_{l}(X) & \\ \displaystyle \bigcap_{r = 1}^{R}\left(\bigcup_{ \begin{subarray}{c} k = 1 \\ k\neq l \end{subarray} }^{K} z_{k}(X)\ge z_{k}(X_{r}^{\ast}) + 1\right). & \end{cases} $$
The new set of constraints represents the requirement that a solution of (P_{j}) be better, on some objective k ≠ l, than each efficient solution \( X_{r}^{\ast} \) generated during the previous steps; an example of implementation of these constraints is given in [5]. The set of solutions E_{j}(P) is then defined as E_{j}(P) = E(P_{j}) ∩ E(P), where E(P_{j}) is the set of all optimal solutions of (P_{j}).
The procedure continues until, at some iteration J, the problem (P_{J}) becomes infeasible; at this time \( E(P) = \bigcup_{j = 0}^{J - 1} E_{j}(P) \).
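The sequence of progressively more constrained problems can be sketched by brute force on a small toy instance (the objectives, knapsack constraint and choice l = 1 below are illustrative assumptions; a real implementation would solve each (P_j) with an ILP algorithm):

```python
from itertools import product

def z(x):
    # two integer-valued objectives on binary variables (toy instance)
    return (3*x[0] + x[1] + 2*x[2], x[0] + 3*x[1] + 2*x[2])

# small enumerable feasible set D defined by one knapsack constraint
D = [x for x in product((0, 1), repeat=3) if 2*x[0] + 3*x[1] + 2*x[2] <= 5]

def dominates(a, b):
    return all(u >= v for u, v in zip(a, b)) and a != b

def nondominated(sols):
    return [x for x in sols if not any(dominates(z(y), z(x)) for y in sols)]

def klein_hannan(l=0):
    efficient = []
    while True:
        # (P_j): maximize z_l over points that improve, by at least 1,
        # some objective k != l of every efficient solution found so far
        feas = [x for x in D
                if all(any(z(x)[k] >= z(r)[k] + 1
                           for k in range(2) if k != l)
                       for r in efficient)]
        if not feas:
            return efficient            # (P_J) infeasible: E(P) is complete
        best = max(z(x)[l] for x in feas)
        opt = [x for x in feas if z(x)[l] == best]
        # E_j(P) = E(P_j) ∩ E(P): keep the nondominated optimal solutions
        efficient += nondominated(opt)

E = klein_hannan()
```

Each pass tightens the improvement constraints, so the loop terminates as soon as no feasible point remains, at which point all efficient solutions, supported or not, have been collected.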
Kiziltan–Yucaoglu Method
See [4]. This is a direct adaptation to a multiobjective framework of the well-known Balas algorithm for the ILP problem with binary variables.

(bounding rule) A lower and an upper bound vector, \( \underline{Z}^{r} \) and \( \overline{Z}{}^{r} \), respectively, are defined as
$$ \begin{aligned} &\underline{Z}^{r} = \sum_{j\in B^{r}} c_{j}, \\ & \overline{Z}{}^{r} = \underline{Z}^{r} + Y^{r}, \end{aligned} $$
where \( Y_{k}^{r} = \sum_{j\in F^{r}} \max\{0, c_{j}^{(k)}\} \). The vector \( \underline{Z}^{r} \) is added to a list \( \widehat{E} \) of existing lower bounds if \( \underline{Z}^{r} \) is not dominated by any of the existing vectors of \( \widehat{E} \). At the same time, any vector of \( \widehat{E} \) dominated by \( \underline{Z}^{r} \) is discarded.

(fathoming rules) In the multiobjective case, the feasibility of a node is no longer a sufficient condition for fathoming it. The three general fathoming conditions are:

\( \overline{Z}{}^{r} \) is dominated by some vector of \( \widehat{E} \);

the node S ^{ r } is feasible and \( \underline{Z}^{r} = \overline{Z}{}^{r} \);

the node S ^{ r } is infeasible and \( \sum_{j\in F^{r}} \min(0, t_{ij}) > d_{i}^{r} \) for some i = 1, …, m.

The usual backtracking rules are applied.


(branching rule) A variable x _{ l } ∊ F ^{ r } is selected to be the branching variable.

If the node S ^{ r } is feasible, \( l \in \{ j \in F^r \colon c_j \not \leq 0 \} \).

Otherwise, index l is selected by the minimum infeasibility criterion
$$ \min_{j\in F^{r}}\sum_{i = 1}^{m}\max\left(0, - d_{i}^{r} + t_{ij}\right). $$
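The maintenance of the list \( \widehat{E} \) used by the bounding and fathoming rules can be sketched as follows (an assumed helper, not the authors' code): a new lower-bound vector is kept only if no stored vector dominates it, and any stored vector it dominates is discarded.

```python
def dominates(a, b):
    # a dominates b (maximization): componentwise >= and not equal
    return all(x >= y for x, y in zip(a, b)) and a != b

def update_bound_list(E_hat, z_low):
    # discard z_low if some stored vector dominates (or equals) it ...
    if any(v == z_low or dominates(v, z_low) for v in E_hat):
        return E_hat
    # ... otherwise keep it and drop every stored vector it dominates
    return [v for v in E_hat if not dominates(z_low, v)] + [z_low]

E_hat = []
for z_low in [(3, 1), (1, 3), (2, 2), (3, 3)]:
    E_hat = update_bound_list(E_hat, z_low)
```

After the four updates only (3, 3) survives, since it dominates every vector inserted before it.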

Interactive Methods
Such methods are particularly important to solve multiobjective applications. The general idea is to determine progressively a good compromise solution integrating the preferences of the decision maker.
The dialog with the decision maker consists of a succession of 'calculation phases' managed by the model and 'information phases' managed by the decision maker.
At each calculation phase, one or several new efficient solutions are determined, taking into account the information given by the decision maker at the preceding information phase. At each information phase, a few easy questions are asked of the decision maker to collect information about his or her preferences regarding the new solutions.
Gonzalez–Reeves–Franz Algorithm
See [3]. In this method a set \( \widetilde{E} \) of K efficient solutions is selected and updated at each step of the algorithm according to the decision maker's preferences. At the end of the procedure, \( \widetilde{E} \) will contain the most preferred solutions. The method is divided into two stages: the first one considers the supported efficient solutions, while the second one deals with nonsupported efficient solutions.

(Stage 1): Determination of the best supported efficient solutions. \( \widetilde{E} \) is initialized with K optimal solutions of the K single objective ILP problems. Let us denote by \( \widetilde{Z} \) the K points in the objective space corresponding to the solutions of \( \widetilde{E} \). At each iteration, a linear direction of search G(X) is built: G(X) is the inverse mapping into the decision space of the hyperplane defined by the points of \( \widetilde{Z} \) in the objective space. A new supported efficient solution X ^{∗} is determined by solving the single objective ILP problem max_{ X∊D } G(X), and Z ^{∗} is the corresponding point in the objective space. Then:

if \( Z^{\ast} \notin \widetilde{Z} \) and the decision maker prefers solution X ^{∗} to at least one solution of \( \widetilde{E} \): the least preferred solution is replaced in \( \widetilde{E} \) by X ^{∗} and a new iteration is performed;

if \( Z^{\ast} \notin \widetilde{Z} \) and X ^{∗} is not preferred to any solution in \( \widetilde{E} \): \( \widetilde{E} \) is not modified and the second stage is initiated;

if \( Z^{\ast} \in \widetilde{Z} \): \( \widetilde{Z} \) defines a face of the efficient surface and the second stage is initiated.


(Stage 2): Introduction of the best nonsupported solutions. We will not give details about this second stage (see [3] or [10]); let us just say that it is performed in the same spirit but considering the single objective problem
$$ \begin{cases} \displaystyle \max & G(X) \\ & X\in D \\ & G(X)\le \widetilde{G} - \varepsilon \quad \text{with}\ \varepsilon > 0, \end{cases} $$
where \( \widetilde{G} \) is the optimal value obtained for the last function G(X) considered.
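The article does not give an explicit formula for G(X). One common realization (an assumption here) takes G as the weighted sum whose weight vector is the normal of the hyperplane through the K current points of \( \widetilde{Z} \), which reduces to solving a small linear system:

```python
import numpy as np

# Toy values for the K = 2 points of Z~ in the objective space (assumed data).
Z = np.array([[5.0, 1.0],
              [2.0, 4.0]])

# The normal lam of the hyperplane through the rows of Z satisfies
# lam . z = const for every point z; normalizing the constant to 1:
lam = np.linalg.solve(Z, np.ones(len(Z)))

# G(X) = sum_k lam_k * z_k(X) then increases towards that hyperplane,
# so maximizing G over D searches beyond the current face.
```

For these two points the system gives equal positive weights, consistent with a direction pointing outward from the segment joining them.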
Steuer–Choo Method
See [9]. Several interactive approaches to MOLP problems can also be applied to MOILP; among them, we mention only the Steuer–Choo method, a very general procedure based on problem (P\( _{\lambda }^{T} \)) defined in the introduction.
The first iteration uses a widely dispersed group of weighting vectors λ to sample the set of efficient solutions. The sample is obtained by solving problem (P\( _{\lambda }^{T} \)) for each λ in the group. Then the decision maker is asked to identify the most preferred solution X ^{(1)} in the sample. At iteration j, a more refined grid of weighting vectors λ is used to sample the set of efficient solutions in the neighborhood of the point of components z _{ k }(X ^{(j)}) (k = 1, …, K) in the objective space. Again the sample is obtained by solving several problems (P\( _{\lambda }^{T} \)) and the most preferred solution X ^{(j+1)} is selected. The procedure continues with increasingly finer sampling until the solution is deemed acceptable.
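One sampling round can be sketched by brute force, assuming (P\( _{\lambda }^{T} \)) is the weighted Tchebychev scalarization min_{X} max_{k} λ_{k}(z_{k}^{∗} − z_{k}(X)) with respect to the ideal point (the toy instance below is an illustrative assumption):

```python
from itertools import product

def z(x):
    # two integer objectives on binary variables (toy instance)
    return (3*x[0] + x[1] + 2*x[2], x[0] + 3*x[1] + 2*x[2])

# small enumerable feasible set D
D = [x for x in product((0, 1), repeat=3) if 2*x[0] + 3*x[1] + 2*x[2] <= 5]
ideal = tuple(max(z(x)[k] for x in D) for k in range(2))

def solve_tchebychev(lam):
    # (P_lambda^T): minimize the weighted Tchebychev distance to the ideal
    return min(D, key=lambda x: max(l * (i - v)
                                    for l, i, v in zip(lam, ideal, z(x))))

# a widely dispersed group of weighting vectors yields the first sample
sample = {solve_tchebychev(lam) for lam in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]}
```

Each weighting vector pulls the solution towards a different part of the efficient frontier, so the dispersed group already exposes three distinct efficient solutions to the decision maker.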
The MOMIX Method
See [6]. The main characteristic of this method is the use of an interactive branch and bound concept, initially introduced in [7], to design the interactive phase.

(First compromise): The following minimax optimization, with m = 1, is performed to determine the compromise \( \widetilde{X}{}^{(1)} \):
$$ (\text{P}^{m})\quad \begin{cases} \displaystyle \min & \delta \\ \forall k & \Pi_{k}^{(m)} (M_{k}^{(m)} - z_{k}(X)) \le \delta, \\ & X \in D^{(m)}, \end{cases} $$
where the weights \( \Pi_{k}^{(m)} \) and the bounds \( M_{k}^{(m)} \) are defined in the depth-first progression below.
Remark 1
If the optimal solution is not unique, an augmented weighted Tchebychev distance is required in order to obtain an efficient first solution.
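Remark 1 can be illustrated numerically (the objective vectors, ideal point and weights below are assumed toy values): two points at the same plain minimax distance from the ideal point, one of which dominates the other, are separated by an augmented distance that subtracts a small multiple of the objective sum.

```python
# A = (3, 5) dominates B = (3, 4), yet both are at weighted Tchebychev
# distance 2 from the ideal point M = (5, 5) with unit weights.
M = (5, 5)
pts = {"A": (3, 5), "B": (3, 4)}

plain = {n: max(m - v for m, v in zip(M, p)) for n, p in pts.items()}

rho = 0.001  # small augmentation parameter (assumed value)
augmented = {n: plain[n] - rho * sum(p) for n, p in pts.items()}
best = min(augmented, key=augmented.get)
```

The plain distance cannot tell A from B, while the augmented distance strictly prefers the efficient point A, which is exactly why the augmentation is required when the optimal solution of (P^{m}) is not unique.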

(Interactive phases): These are integrated in an interactive branch and bound tree; a first step (a depth-first progression in the tree) leads to the determination of a first good compromise; the second step (a backtracking procedure) confirms the degree of satisfaction achieved by the decision maker or finds a better compromise if necessary.

(Depth-first progression): For m ≥ 1, let at the mth iteration:
1) \( \widetilde{X}{}^{(m)} \) be the mth compromise;
2) \( z_{k}^{(m)} \) be the corresponding values of the criteria;
3) [\( m_{k}^{(m)} \), \( M_{k}^{(m)} \)] be the variation intervals of the criteria; and
4) \( \Pi_{k}^{(m)} \) be the weights of the criteria.
The decision maker has to choose, at this mth iteration, the criterion \( l_{m}(1) \in \{1, \ldots, K\} \) he is willing to improve in priority. Then a new constraint is introduced so that the feasible set becomes \( D^{(m+1)} \equiv D^{(m)} \cap \{X\colon z_{l_{m}(1)}(X) > z_{l_{m}(1)}^{(m)}\} \). Further, the variation intervals [\( m_{k}^{(m+1)} \), \( M_{k}^{(m+1)} \)] and the weights \( \Pi_{k}^{(m+1)} \) are updated on the new feasible set \( D^{(m+1)} \). The new compromise \( \widetilde{X}{}^{(m + 1)} \) is obtained by solving the problem (P^{ m+1}).
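The feasible-set restriction and interval update above can be sketched by brute force on a small enumerable instance (the objectives, constraint, current compromise and chosen criterion are assumed toy data):

```python
from itertools import product

def z(x):
    # two integer objectives on binary variables (toy instance)
    return (3*x[0] + x[1] + 2*x[2], x[0] + 3*x[1] + 2*x[2])

D = [x for x in product((0, 1), repeat=3) if 2*x[0] + 3*x[1] + 2*x[2] <= 5]

def restrict(D_m, l, z_l_current):
    # D^(m+1) = D^(m) intersected with { X : z_l(X) > z_l^(m) }
    D_next = [x for x in D_m if z(x)[l] > z_l_current]
    if not D_next:
        return D_next, None          # empty set: the node is fathomed
    # recompute the variation intervals [m_k^(m+1), M_k^(m+1)] on D^(m+1)
    intervals = [(min(z(x)[k] for x in D_next), max(z(x)[k] for x in D_next))
                 for k in range(2)]
    return D_next, intervals

# current compromise (1, 1, 0) has z = (4, 4); improve criterion l = 1
D_next, intervals = restrict(D, 1, 4)
```

Here a single point survives the restriction and both intervals collapse, so the fathoming test on the interval widths would fire at the next node.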
Different tests make it possible to terminate this first step. The node (m+1) is fathomed if one of the following conditions is verified:
a) \( D^{(m+1)} = \emptyset \);
b) \( M_{k}^{(m+1)} - m_{k}^{(m+1)} \le \epsilon_{k} \) for all k;
c) the vector \( \widehat{Z} \) of the incumbent values (values of the criteria for the best compromise already determined) is preferred to the new ideal point (of components \( M_{k}^{(m+1)} \)).
The first step of the procedure is stopped if either more than q successive iterations do not bring an improvement of the incumbent point \( \widehat{Z} \) or more than Q iterations have been performed.
Note that the parameters ϵ_{ k }, q and Q are fixed in agreement with the decision maker.
(Backtracking procedure): It can be hoped that the appropriate choice of the criterion \( z_{l_{m}(1)} \), at each level m of the depth-first progression, has been made so that at the end of the first step a good compromise has been found.
Nevertheless, it is worth examining some other parts of the tree to confirm the satisfaction of the decision maker. The complete tree is generated in the following manner: at each level, K subnodes are introduced by successively adding the constraints
$$ \begin{aligned} &z_{l_{m}(1)}(X)> z_{l_{m}(1)}^{(m)}, \\ & z_{l_{m}(2)}(X)> z_{l_{m}(2)}^{(m)}, \quad z_{l_{m}(1)}(X)\le z_{l_{m}(1)}^{(m)}, \\ & \qquad\vdots \\ & z_{l_{m}(K)}(X)> z_{l_{m}(K)}^{(m)}, \quad z_{l_{m}(k)}(X)\le z_{l_{m}(k)}^{(m)}, \quad k = 1, \ldots, K - 1, \end{aligned} $$
where \( l_{m}(k) \in \{1, \ldots, K\} \) is the kth objective that the decision maker wants to improve at the mth level of the branch and bound tree. At each level m, the criteria are thus ordered according to the priorities of the decision maker with regard to the compromise \( \widetilde{X}{}^{(m)} \).
The usual backtracking procedure is applied; yet it seems unnecessary to explore the whole tree. Indeed, the subnode \( k > \overline{K} \) of each branching corresponds to a simultaneous relaxation of those criteria \( l_{m}(k) \), \( k \le \overline{K} \), that the decision maker wants to improve in priority.
Therefore, the subnodes \( k > \overline{K} \) with, for instance, \( \overline{K} = 2 \) or 3, almost certainly do not bring any improved solution.
The fathoming tests and the stopping tests are again applied in this second step.
See also
Biobjective Assignment Problem
Branch and Price: Integer Programming with Column Generation
Decision Support Systems with Multiple Criteria
Decomposition Techniques for MILP: Lagrangian Relaxation
Estimating Data for Multicriteria Decision Making Problems: Optimization Techniques
Financial Applications of Multicriteria Analysis
Fuzzy Multiobjective Linear Programming
Integer Linear Complementary Problem
Integer Programming: Algebraic Methods
Integer Programming: Branch and Bound Methods
Integer Programming: Branch and Cut Algorithms
Integer Programming: Cutting Plane Algorithms
Integer Programming: Lagrangian Relaxation
LCP: Pardalos–Rosen Mixed Integer Formulation
Mixed Integer Classification Problems
Multiobjective Combinatorial Optimization
Multiobjective Mixed Integer Programming
Multiobjective Optimization and Decision Support Systems
Multiobjective Optimization: Interaction of Design and Control
Multiobjective Optimization: Interactive Methods for Preference Value Functions
Multiobjective Optimization: Lagrange Duality
Multiobjective Optimization: Pareto Optimal Solutions, Properties
Multiparametric Mixed Integer Linear Programming
Multiple Objective Programming Support
Parametric Mixed Integer Nonlinear Optimization
Portfolio Selection and Multicriteria Analysis
Preference Disaggregation Approach: Basic Features, Examples From Financial Decision Making
Set Covering, Packing and Partitioning Problems
Simplicial Pivoting Algorithms for Integer Programming
Stochastic Integer Programming: Continuity, Stability, Rates of Convergence