Reference Work Entry

Encyclopedia of Optimization

Multi-objective Integer Linear Programming

MOILP
  • Jacques Teghem, Lab. Math. & Operational Research, Fac. Polytechn. Mons

Article Outline

Keywords

Generation of E(P)

  Klein–Hannan Method

  Kiziltan–Yucaoglu Method

Interactive Methods

  Gonzalez–Reeves–Franz Algorithm

  Steuer–Choo Method

  The MOMIX Method

See also

References

Keywords

Multi-objective programming; Integer linear programming

From the 1970s onwards, methods for multi-objective linear programming (MOLP) with continuous variables have been developed [8]. However, it is well known that discrete variables are unavoidable in the linear programming modeling of many applications, for instance to represent an investment choice, a production level, etc.

The mathematical structure is then integer linear programming (ILP), which, combined with MOLP, gives a MOILP problem. Unfortunately, MOILP cannot be solved by simply combining ILP and MOLP methods, because it has its own specific difficulties.

The problem (P) considered is defined as
$$ \mathrm{(P)} \quad \begin{cases} \displaystyle {}^{\prime}\max_{X \in D}{}^{\prime} & \displaystyle z_{k} (X) = \sum_{j = 1}^{n}c_{j}^{(k)} x_{j}, \\ &\quad k = 1 ,\ldots, K, \\ \text{where} & D = \left\{X \in \mathbb{R}^{n}\colon\ \begin{array}{c} TX \le d, \\ X\ge 0, \\ x_{j} \ \text{integer}, \\ j\in J \end{array} \right\} \\ \text{with} & T (m \times n), \\ & d (m \times 1), \\ & X (n \times 1), \\ & J \subset \{ 1 ,\ldots, n\}. \end{cases} $$
If we denote \( LD = \{X \in \mathbb{R}^{n}\colon TX \le d,\ X \ge 0\} \), problem (LP) is the linear relaxation of problem (P):
$$ \mathrm{(LP)} \quad \begin{cases} \displaystyle {}^{\prime} \max {}^{\prime} & z_{k} (X), \quad k = 1 ,\ldots, K, \\ & X\in LD \end{cases} $$
A solution \( X^{\ast} \) in D (or LD) is said to be efficient for problem (P) (or (LP)) if there does not exist any other solution X in D (or LD) such that \( z_{k}(X) \ge z_{k}(X^{\ast}) \), k = 1, …, K, with at least one strict inequality.
Let E(·) denote the set of all efficient solutions of problem (·). It is well known (see [8]) that E(LP) may be characterized by the optimal solutions of the single objective parametrized problem:
$$ \mathrm{(LP}_\lambda\mathrm{)} \quad \begin{cases} \displaystyle \max & \displaystyle \sum_{k = 1}^{K} \lambda_{k}z_{k}(X) \\ & X \in LD \\ \text{with} & \lambda_{k} > 0, \quad\forall k, \\ & \displaystyle \sum_{k = 1}^{K} \lambda_{k} = 1 \end{cases} $$

This fundamental principle – often called Geoffrion's theorem – is no longer valid in the presence of discrete variables because the set D is not convex. The set of optimal solutions of problem (P\(_{\lambda}\)), defined as problem (LP\(_{\lambda}\)) with LD replaced by D, is only a subset SE(P) of E(P); the solutions in SE(P) are called supported efficient solutions, while the solutions belonging to NSE(P) = E(P) \ SE(P) are called nonsupported efficient solutions.

The breakdown of Geoffrion's theorem for problem (P) can be illustrated by the following obvious example:
$$ \begin{aligned} &K = 2, \\ & z_{1}(X) = 6x_{1} + 3x_{2} + x_{3}, \\ & z_{2}(X) = x_{1} + 3x_{2} + 6x_{3}, \\ & D = \left\{X\colon\ x_{1} + x_{2} + x_{3} \le 1, \ x_{i}\in\{ 0, 1\}\right\}. \end{aligned} $$
For this problem,
$$ E\text{(P)} = \{(1, 0, 0); (0, 1, 0);(0, 0, 1)\} $$
while NSE(P) = {(0, 1, 0)}.
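To see why, map the feasible points into the objective space and test the weighted-sum problem (P\(_{\lambda}\)); this short verification is added here for illustration:
$$ \begin{aligned} &(1,0,0)\mapsto(6,1),\quad (0,1,0)\mapsto(3,3),\quad (0,0,1)\mapsto(1,6);\\ &3\lambda_{1}+3\lambda_{2} \ge 6\lambda_{1}+\lambda_{2} \iff 2\lambda_{2}\ge 3\lambda_{1},\\ &3\lambda_{1}+3\lambda_{2} \ge \lambda_{1}+6\lambda_{2} \iff 2\lambda_{1}\ge 3\lambda_{2}. \end{aligned} $$
Both conditions together would give \( 4\lambda_{1}\lambda_{2} \ge 9\lambda_{1}\lambda_{2} \), which is impossible for \( \lambda_{k} > 0 \); hence the efficient solution (0, 1, 0) is optimal for no problem (P\(_{\lambda}\)).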
Nevertheless, V.J. Bowman [1] has given a theoretical characterization of E(P): Setting
$$ \begin{aligned} &M_{k} = \max_{X\in D} z_{k}(X), \\ & \overline{z}_{k} = M_{k} + \varepsilon_{k}, \quad\text{with}\ \varepsilon_{k}> 0, \\ & \rho > 0, \end{aligned} $$
then E(P) is characterized by the optimal solutions of the problem (P\( _{\lambda }^{T} \)):
$$ \min_{X \in D} \max_{k} \left(\lambda_{k}\left(\overline{z}_{k} - z_{k}(X)\right) + \rho \left(\sum_{k = 1}^{K} \left(\overline{z}_{k} - z_{k}(X)\right)\right)\right), $$
consisting of minimizing the augmented weighted Tchebychev distance between \( z_{k}(X) \) and \( \overline{z}_{k} \).
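As an illustration, here is a minimal sketch of this scalarization on the small example above, enumerating D by brute force instead of calling an ILP solver; the helper names and the choices \( \varepsilon_{k} = 1 \), ρ = 0.01 are ours, not taken from [1]:

```python
from itertools import product

# Toy problem from the example above: K = 2 objectives, binary variables,
# feasible set D = {x in {0,1}^3 : x1 + x2 + x3 <= 1}.
C = [(6, 3, 1), (1, 3, 6)]                        # rows c^(k)
D = [x for x in product((0, 1), repeat=3) if sum(x) <= 1]

def z(x):                                          # objective vector z(X)
    return [sum(c * xi for c, xi in zip(ck, x)) for ck in C]

M = [max(z(x)[k] for x in D) for k in range(2)]    # M_k = max of z_k over D
zbar = [mk + 1 for mk in M]                        # reference point, eps_k = 1
rho = 0.01                                         # small augmentation weight

def tcheb(x, lam):
    """Objective of (P_lambda^T): augmented weighted Tchebychev distance to zbar."""
    dev = [zb - zk for zb, zk in zip(zbar, z(x))]
    return max(l * d for l, d in zip(lam, dev)) + rho * sum(dev)

# With balanced weights the nonsupported solution (0, 1, 0) is optimal,
# which no weighted-sum problem (P_lambda) can achieve:
print(min(D, key=lambda x: tcheb(x, (0.5, 0.5))))  # -> (0, 1, 0)
```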

Let us note that another characterization of E(P) is given in [2] for the particular case of binary variables.

Two types of problems can be analysed:

  • To generate E(P) explicitly. Several methods have been proposed; they are reviewed in [10]. Below we present two of them, which appear general, characteristic and efficient.

  • To determine interactively, with the decision maker, a ‘best compromise’ in E(P) according to his or her preferences. Some of the existing approaches are reviewed in [11]; below we describe three of these interactive methods.

Generation of E(P)

Klein–Hannan Method

See [5]. This is an iterative procedure for sequentially generating the complete set of efficient solutions of problem (P) (we suppose that the coefficients \( c_{j}^{(k)} \) are integers); it consists of solving a sequence of progressively more constrained single objective ILP problems and can be implemented using any ILP algorithm.

  • (Initialization: step 0) An objective function l ∊ {1, …, K} is chosen arbitrarily and the following single objective ILP problem is considered:
    $$ (\text{P}_{0})\quad \max_{X\in D}z_{l}(X). $$
    Let \( E(\text{P}_{0}) \) be the set of all optimal solutions of (P\(_{0}\)) and let \( E_{0}(\text{P}) \) be defined as \( E_{0}(\text{P}) = E(\text{P}_{0}) \cap E(\text{P}) \). Thus, \( E_{0}(\text{P}) \) is the subset of nondominated solutions in \( E(\text{P}_{0}) \).
  • (Step j, (j ≥ 1)) The efficient solutions generated at the previous steps are denoted by \( X_{r}^{\ast} \), r = 1, …, R, i.e. \( \bigcup_{i = 0}^{j - 1} E_{i}(\text{P}) = \{X_{r}^{\ast}\colon r = 1, \ldots, R\} \). In this jth step, the following problem is solved:
    $$ (\text{P}_{j})\quad \begin{cases} \displaystyle \max_{X\in D} \ z_{l}(X) & \\ \displaystyle \bigcap_{r = 1}^{R}\left(\bigcup_{ \begin{subarray}{c} k = 1 \\ k\neq l \end{subarray} }^{K} z_{k}(X)\ge z_{k}(X_{r}^{\ast}) + 1\right). & \end{cases} $$
    The new set of constraints represents the requirement that a solution of (P\(_{j}\)) be better on some objective k ≠ l than each efficient solution \( X_{r}^{\ast} \) generated during the previous steps; an example of implementation of these constraints is given in [5]. The set \( E_{j}(\text{P}) \) is then defined as \( E_{j}(\text{P}) = E(\text{P}_{j}) \cap E(\text{P}) \), where \( E(\text{P}_{j}) \) is the set of all optimal solutions of (P\(_{j}\)).

The procedure continues until, at some iteration J, the problem (P\(_{J}\)) becomes infeasible; at this time \( E(\text{P}) = \bigcup_{j = 0}^{J - 1} E_{j}(\text{P}) \).
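A minimal sketch of the procedure on the toy example of the introduction, with brute-force enumeration of D standing in for the ILP algorithm (the names are ours):

```python
from itertools import product

C = [(6, 3, 1), (1, 3, 6)]                         # objective coefficients c^(k)
D = [x for x in product((0, 1), repeat=3) if sum(x) <= 1]
K = len(C)

def z(x):
    return [sum(c * xi for c, xi in zip(ck, x)) for ck in C]

def dominated(zx, pool):
    return any(all(a >= b for a, b in zip(zo, zx)) and zo != zx for zo in pool)

l = 0                                              # objective z_l chosen at step 0
efficient = []                                     # the X_r^* generated so far
all_z = [z(x) for x in D]
while True:
    # Feasible set of (P_j): improve on some objective k != l
    # with respect to every efficient solution found so far.
    feas = [x for x in D
            if all(any(z(x)[k] >= z(r)[k] + 1 for k in range(K) if k != l)
                   for r in efficient)]
    if not feas:                                   # (P_J) infeasible: E(P) is complete
        break
    best = max(z(x)[l] for x in feas)
    optima = [x for x in feas if z(x)[l] == best]  # E(P_j)
    efficient += [x for x in optima if not dominated(z(x), all_z)]  # E_j(P)

print(efficient)   # -> [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
```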

Kiziltan–Yucaoglu Method

See [4]. This is a direct adaptation to a multi-objective framework of the well-known Balas algorithm for the ILP problem with binary variables.

At node \( S_{r} \) of the branch and bound scheme, the following problem is considered:
$$ \begin{cases} \displaystyle {}^{\prime} \max {}^{\prime} & \displaystyle \sum_{j\in F^{r}} c_{j}x_{j} + \sum_{j\in B^{r}}c_{j} \\ \text{s.t.} & \displaystyle \sum_{j\in F^{r}} t_{j}x_{j}\le d^{r} \\ & x_{j} \in \{0, 1\}, \\ \text{where} & B^{r} \ \text{is the index set of variables} \\ & \quad\text{assigned the value one,} \\ & F^{r}\ \text{is the index set of free variables,} \\ & \displaystyle d^{r} = d - \sum_{j\in B^{r}} t_{j}, \\ & t_{j}\ \text{is the}\ j\text{th column of}\ T, \\ & c_{j}\ \text{is the vector of components}\ c_{j}^{(k)}. \end{cases} $$
The node \( S_{r} \) is called feasible when \( d^{r} \ge 0 \) and infeasible otherwise. The three basic rules of the branch and bound algorithm are:
  • (bounding rule) A lower and upper bound vector, \( \underline{Z}^{r} \) and \( \overline{Z}{}^{r} \), respectively, are defined as
    $$ \begin{aligned} &\underline{Z}^{r} = \sum_{j\in B^{r}} c_{j}, \\ & \overline{Z}{}^{r} = \underline{Z}^{r} + Y^{r}, \end{aligned} $$
    where \( Y_{k}^{r} = \sum_{j\in F^{r}} \max\{0, c_{j}^{(k)}\} \). The vector \( \underline{Z}^{r} \) is added to a list \( \widehat{E} \) of existing lower bounds if \( \underline{Z}^{r} \) is not dominated by any of the existing vectors of \( \widehat{E} \). At the same time, any vector of \( \widehat{E} \) dominated by \( \underline{Z}^{r} \) is discarded.
  • (fathoming rules) In the multi-objective case, the feasibility of a node is no longer a sufficient condition for fathoming it. The three general fathoming conditions are:
    • \( \overline{Z}{}^{r} \) is dominated by some vector of \( \widehat{E} \);

    • the node \( S_{r} \) is feasible and \( \underline{Z}^{r} = \overline{Z}{}^{r} \);

    • the node \( S_{r} \) is infeasible and \( \sum_{j\in F^{r}} \min(0, t_{ij}) > d_{i}^{r} \) for some i = 1, …, m.

    The usual backtracking rules are applied.
  • (branching rule) A variable \( x_{l} \), \( l \in F^{r} \), is selected as the branching variable.
    • If the node \( S_{r} \) is feasible, \( l \in \{ j \in F^{r} \colon c_{j} \not \leq 0 \} \).

    • Otherwise, index l is selected by the minimum infeasibility criterion:
      $$ \min_{j\in F^{r}}\sum_{i = 1}^{m}\max\left(0, - d_{i}^{r} + t_{ij}\right). $$
When the implicit enumeration is complete, \( E\text{(P)} = \widehat{E} \).
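The bounding rule and the maintenance of the list \( \widehat{E} \) can be sketched as follows (a fragment, not the full branch and bound; the names are ours and the dominance test follows the efficiency definition of the introduction):

```python
def node_bounds(B, F, C):
    """Bound vectors at node S_r: B = indices of variables fixed to one,
    F = indices of free variables, C[k][j] = c_j^(k)."""
    K = len(C)
    z_lo = [sum(C[k][j] for j in B) for k in range(K)]    # Z_under^r
    z_hi = [z_lo[k] + sum(max(0, C[k][j]) for j in F)     # Z_over^r = Z_under^r + Y^r
            for k in range(K)]
    return z_lo, z_hi

def dominates(a, b):
    """a dominates b: a >= b componentwise with at least one strict inequality."""
    return all(x >= y for x, y in zip(a, b)) and a != b

def update_E_hat(E_hat, z_lo):
    """Add z_lo to the list of lower bound vectors unless it is dominated;
    discard any vector of E_hat that z_lo dominates."""
    if any(dominates(e, z_lo) for e in E_hat):
        return E_hat
    return [e for e in E_hat if not dominates(z_lo, e)] + [z_lo]

def fathomed_by_dominance(E_hat, z_hi):
    """First fathoming test: the upper bound vector is dominated by a vector of E_hat."""
    return any(dominates(e, z_hi) for e in E_hat)

# Example: root node of the toy problem (nothing fixed, all three variables free).
C = [(6, 3, 1), (1, 3, 6)]
lo, hi = node_bounds(B=[], F=[0, 1, 2], C=C)
print(lo, hi)        # [0, 0] and [10, 10]
```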

Interactive Methods

Such methods are particularly important for solving multi-objective applications. The general idea is to determine progressively a good compromise solution integrating the preferences of the decision maker.

The dialog with the decision maker consists of a succession of ‘calculation phases’ managed by the model and ‘information phases’ managed by the decision maker.

At each calculation phase, one or several new efficient solutions are determined taking into account the information given by the decision maker at the preceding information phase. At each information phase, a few easy questions are put to the decision maker to collect information about his or her preferences regarding the new solutions.

Gonzalez–Reeves–Franz Algorithm

See [3]. In this method a set \( \widetilde{E} \) of K efficient solutions is selected and updated at each step of the algorithm according to the decision maker's preferences. At the end of the procedure, \( \widetilde{E} \) will contain the most preferred solutions. The method is divided into two stages: the first considers the supported efficient solutions, while the second deals with nonsupported efficient solutions.

  • (Stage 1): Determination of the best supported efficient solutions. \( \widetilde{E} \) is initialized with K optimal solutions of the K single objective ILP problems. Let us denote by \( \widetilde{Z} \) the K corresponding points in the objective space of the solutions of \( \widetilde{E} \). At each iteration, a linear search direction G(X) is built: G(X) is the inverse mapping, into the decision space, of the hyperplane defined by the points of \( \widetilde{Z} \) in the objective space (see the sketch after this list). A new supported efficient solution \( X^{\ast} \) is determined by solving the single objective ILP problem \( \max_{X\in D} G(X) \), and \( Z^{\ast} \) is the corresponding point in the objective space. Then:
    • if \( Z^{\ast} \notin \widetilde{Z} \) and the decision maker prefers solution \( X^{\ast} \) to at least one solution of \( \widetilde{E} \): the least preferred solution is replaced in \( \widetilde{E} \) by \( X^{\ast} \) and a new iteration is performed;

    • if \( Z^{\ast} \notin \widetilde{Z} \) and \( X^{\ast} \) is not preferred to any solution in \( \widetilde{E} \): \( \widetilde{E} \) is not modified and the second stage is initiated;

    • if \( Z^{\ast} \in \widetilde{Z} \): \( \widetilde{Z} \) defines a face of the efficient surface and the second stage is initiated.

  • (Stage 2): Introduction of the best nonsupported solutions. We will not give details about this second stage (see [3] or [10]); let us just say that it is performed in the same spirit but considering the single objective problem
    $$ \begin{cases} \displaystyle \max & G(X) \\ & X\in D \\ & G(X)\le \widetilde{G} - \varepsilon \quad \text{with}\ \varepsilon > 0 \end{cases} $$
    where \( \widetilde{G} \) is the optimal value obtained for the last function G(X) considered.
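The construction of the direction G(X) in stage 1 can be sketched numerically: the weights are the normal of the hyperplane through the points of \( \widetilde{Z} \), mapped back onto the decision variables. This is a sketch assuming the points of \( \widetilde{Z} \) are affinely independent and their hyperplane misses the origin; the function name is ours:

```python
import numpy as np

def search_direction(Z_tilde, C):
    """Coefficients of G(X) = sum_k lam_k z_k(X), where lam is the normal of the
    hyperplane through the K points of Z_tilde in the objective space."""
    # lam . z = 1 for every point z of Z_tilde determines the hyperplane normal.
    lam = np.linalg.solve(np.array(Z_tilde, dtype=float), np.ones(len(Z_tilde)))
    return lam @ np.array(C, dtype=float)

# Toy example: Z_tilde holds the objective-space images of the two
# single objective optima, (6, 1) and (1, 6).
C = [(6, 3, 1), (1, 3, 6)]
g = search_direction([(6, 1), (1, 6)], C)
print(g)   # [1.0, 0.857..., 1.0]: maximizing G over D yields supported solutions;
           # the nonsupported (0, 1, 0) scores only 6/7 and is never found this way.
```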

Steuer–Choo Method

See [9]. Several interactive approaches to MOLP problems can also be applied to MOILP; among them we mention only the Steuer–Choo method, which is a very general procedure based on problem (P\( _{\lambda }^{T} \)) defined in the introduction.

The first iteration uses a widely dispersed group of weighting vectors λ to sample the set of efficient solutions. The sample is obtained by solving problem (P\( _{\lambda }^{T} \)) for each of the λ values in the set. Then the decision maker is asked to identify the most preferred solution \( X^{(1)} \) in the sample. At iteration j, a more refined grid of weighting vectors λ is used to sample the set of efficient solutions in the neighborhood of the point \( (z_{1}(X^{(j)}), \ldots, z_{K}(X^{(j)})) \) in the objective space. Again the sample is obtained by solving several problems (P\( _{\lambda }^{T} \)) and the most preferred solution \( X^{(j+1)} \) is selected. The procedure continues with increasingly finer sampling until the selected solution is deemed acceptable.
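A simplified sketch of the weight sampling follows; plain random sampling in a shrinking box stands in here for the structured interval reduction of [9], an assumption of ours:

```python
import random

def sample_weights(K, n, center=None, radius=0.1, seed=0):
    """n weight vectors on the simplex. Without a center the whole simplex is
    sampled (iteration 1); with a center, components are drawn from a box of
    the given radius around it and renormalized (later iterations)."""
    rng = random.Random(seed)
    vecs = []
    for _ in range(n):
        if center is None:
            w = [rng.random() + 1e-9 for _ in range(K)]
        else:
            w = [max(1e-9, c + rng.uniform(-radius, radius)) for c in center]
        s = sum(w)
        vecs.append(tuple(x / s for x in w))
    return vecs

coarse = sample_weights(K=2, n=8)                    # widely dispersed sample
fine = sample_weights(K=2, n=8, center=(0.5, 0.5))   # refined grid near X^(1)
# Each lambda is fed to problem (P_lambda^T), e.g. the tcheb() sketch above,
# and the resulting solutions are shown to the decision maker.
```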

The MOMIX Method

See [6]. The main characteristic of this method is the use of an interactive branch and bound concept – initially introduced in [7] – to design the interactive phase.

  • (First compromise): The following minimax optimization, with m = 1, is performed to determine the compromise \( \widetilde{X}{}^{(1)} \) (a small sketch is given after Remark 1):
    $$ (\text{P}^{m})\quad \begin{cases} \displaystyle \min & \delta \\ \forall k & \Pi_{k}^{(m)} (M_{k}^{(m)} - z_{k}(X)) \le \delta, \\ & X \in D^{(m)} \end{cases} $$
    where
    • \( D^{(1)} \equiv D \);

    • \( [m_{k}^{(1)}, M_{k}^{(1)}] \) are the variation intervals of the criteria, provided by the pay-off table (see [8]);

    • \( \Pi_{k}^{(1)} \) are certain normalizing weights taking into account these variation intervals (see [8]).

Remark 1

If the optimal solution is not unique, an augmented weighted Tchebychev distance is required in order to obtain an efficient first solution.
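As an illustration, here is a minimal sketch of this first compromise on the toy example of the introduction; the normalization \( \Pi_{k} = 1/(M_{k} - m_{k}) \) is used as a plausible stand-in for the weights of [8] (an assumption, as is the brute-force enumeration):

```python
from itertools import product

C = [(6, 3, 1), (1, 3, 6)]
D = [x for x in product((0, 1), repeat=3) if sum(x) <= 1]
K = len(C)

def z(x):
    return [sum(c * xi for c, xi in zip(ck, x)) for ck in C]

# Pay-off table: row r holds z(X) at an optimizer of objective r.
payoff = [z(max(D, key=lambda x: z(x)[r])) for r in range(K)]
M = [payoff[k][k] for k in range(K)]                          # ideal values M_k^(1)
m = [min(payoff[r][k] for r in range(K)) for k in range(K)]   # pessimistic values m_k^(1)

# Normalizing weights (an assumed choice; see [8] for the exact definition):
Pi = [1.0 / (M[k] - m[k]) if M[k] > m[k] else 1.0 for k in range(K)]

# First compromise: minimize delta = max_k Pi_k * (M_k - z_k(X)) over D^(1) = D.
x1 = min(D, key=lambda x: max(Pi[k] * (M[k] - z(x)[k]) for k in range(K)))
print(x1, z(x1))   # -> (0, 1, 0) with z = [3, 3], the balanced compromise
```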

  • (Interactive phases): These are integrated into an interactive branch and bound tree; a first step (a depth-first progression in the tree) leads to the determination of a first good compromise; the second step (a backtracking procedure) confirms the degree of satisfaction achieved by the decision maker or finds a better compromise if necessary.
    • (Depth-first progression): For m ≥ 1, let, at the mth iteration:
      1) \( \widetilde{X}{}^{(m)} \) be the mth compromise;
      2) \( z_{k}^{(m)} \) be the corresponding values of the criteria;
      3) \( [m_{k}^{(m)}, M_{k}^{(m)}] \) be the variation intervals of the criteria; and
      4) \( \Pi_{k}^{(m)} \) be the weights of the criteria.

      The decision maker has to choose, at this mth iteration, the criterion \( l_{m}(1) \in \{1, \ldots, K\} \) he is willing to improve in priority. Then a new constraint is introduced so that the feasible set becomes \( D^{(m+1)} \equiv D^{(m)} \cap \{X\colon z_{l_{m}(1)}(X) > z_{l_{m}(1)}^{(m)}\} \). Further, the variation intervals \( [m_{k}^{(m+1)}, M_{k}^{(m+1)}] \) and the weights \( \Pi_{k}^{(m+1)} \) are updated on the new feasible set \( D^{(m+1)} \). The new compromise \( \widetilde{X}{}^{(m + 1)} \) is obtained by solving the problem (P\(^{m+1}\)).

      Different tests allow this first step to be terminated. The node (m+1) is fathomed if one of the following conditions is satisfied:
      a) \( D^{(m+1)} = \emptyset \);
      b) \( M_{k}^{(m+1)} - m_{k}^{(m+1)} \le \epsilon_{k} \) for all k;
      c) the vector \( \widehat{Z} \) of the incumbent values (the values of the criteria for the best compromise already determined) is preferred to the new ideal point (of components \( M_{k}^{(m+1)} \)).

      The first step of the procedure is stopped if either more than q successive iterations do not bring an improvement of the incumbent point \( \widehat{Z} \) or more than Q iterations have been performed.

      Note that the parameters \( \epsilon_{k} \), q and Q are fixed in agreement with the decision maker.

  • (Backtracking procedure): It can be hoped that the appropriate choice of the criterion \( z_{l_{m}(1)} \), at each level m of the depth-first progression, has been made so that at the end of the first step a good compromise has been found.

    Nevertheless, it is worth examining some other parts of the tree to confirm the satisfaction of the decision maker. The complete tree is generated in the following manner: at each level, K subnodes are introduced by successively adding the constraints:
    $$ \begin{aligned} \text{subnode } 1\colon\quad & z_{l_{m}(1)}(X) > z_{l_{m}(1)}^{(m)}; \\ \text{subnode } 2\colon\quad & z_{l_{m}(2)}(X) > z_{l_{m}(2)}^{(m)}, \quad z_{l_{m}(1)}(X) \le z_{l_{m}(1)}^{(m)}; \\ & \vdots \\ \text{subnode } K\colon\quad & z_{l_{m}(K)}(X) > z_{l_{m}(K)}^{(m)}, \quad z_{l_{m}(k)}(X) \le z_{l_{m}(k)}^{(m)}, \\ & \qquad k = 1, \ldots, K - 1, \end{aligned} $$
    where \( l_{m}(k) \in \{1, \ldots, K\} \) is the kth objective that the decision maker wants to improve at the mth level of the branch and bound tree (a small sketch of this branching scheme is given at the end of the section).

    At each level m, the criteria are thus ordered according to the priorities of the decision maker with regard to the compromise \( \widetilde{X}{}^{(m)} \).

    The usual backtracking procedure is applied; yet it seems unnecessary to explore the whole tree. Indeed, the subnodes \( k > \overline{K} \) of each branching correspond to a simultaneous relaxation of those criteria \( l_{m}(k) \), \( k \le \overline{K} \), that the decision maker wants to improve in priority!

    Therefore the subnodes \( k > \overline{K} \), with \( \overline{K} = 2 \) or 3 for instance, almost certainly do not bring any improved solutions.

    The fathoming tests and the stopping tests are again applied in this second step.
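As announced above, here is a small sketch of the branching scheme of the backtracking procedure, with each constraint represented as a tuple (the names are ours):

```python
def subnode_constraints(order, zm):
    """Constraint sets of the K subnodes created at level m.
    order[k] = l_m(k+1), the objectives ranked by the decision maker;
    zm[i] = z_i^(m), the criterion values at the current compromise."""
    nodes = []
    for k, obj in enumerate(order):
        cons = [(obj, '>', zm[obj])]                   # improve the k-th ranked objective
        cons += [(o, '<=', zm[o]) for o in order[:k]]  # relax all higher-ranked ones
        nodes.append(cons)
    return nodes

# K = 3, decision maker ranking z1, z3, z2 at a compromise with z = (10, 4, 7):
for node in subnode_constraints(order=[0, 2, 1], zm=[10, 4, 7]):
    print(node)
# [(0, '>', 10)]
# [(2, '>', 7), (0, '<=', 10)]
# [(1, '>', 4), (0, '<=', 10), (2, '<=', 7)]
```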

See also

Bi-objective Assignment Problem

Branch and Price:​ Integer Programming with Column Generation

Decision Support Systems with Multiple Criteria

Decomposition Techniques for MILP:​ Lagrangian Relaxation

Estimating Data for Multicriteria Decision Making Problems:​ Optimization Techniques

Financial Applications of Multicriteria Analysis

Fuzzy Multi-objective Linear Programming

Integer Linear Complementary Problem

Integer Programming

Integer Programming:​ Algebraic Methods

Integer Programming:​ Branch and Bound Methods

Integer Programming:​ Branch and Cut Algorithms

Integer Programming:​ Cutting Plane Algorithms

Integer Programming Duality

Integer Programming:​ Lagrangian Relaxation

LCP:​ Pardalos–Rosen Mixed Integer Formulation

Mixed Integer Classification Problems

Multicriteria Sorting Methods

Multi-objective Combinatorial Optimization

Multi-objective Mixed Integer Programming

Multi-objective Optimization and Decision Support Systems

Multi-objective Optimization:​ Interaction of Design and Control

Multi-objective Optimization:​ Interactive Methods for Preference Value Functions

Multi-objective Optimization:​ Lagrange Duality

Multi-objective Optimization:​ Pareto Optimal Solutions, Properties

Multiparametric Mixed Integer Linear Programming

Multiple Objective Programming Support

Outranking Methods

Parametric Mixed Integer Nonlinear Optimization

Portfolio Selection and Multicriteria Analysis

Preference Disaggregation

Preference Disaggregation Approach:​ Basic Features, Examples From Financial Decision Making

Preference Modeling

Set Covering, Packing and Partitioning Problems

Simplicial Pivoting Algorithms for Integer Programming

Stochastic Integer Programming:​ Continuity, Stability, Rates of Convergence

Stochastic Integer Programs

Time-dependent Traveling Salesman Problem

Copyright information

© Springer-Verlag 2008