
Group Multi-Objective Optimization Under Imprecision and Uncertainty Using a Novel Interval Outranking Approach


Abstract

This paper deals with group multi-objective optimization problems with imperfect information (imprecision, uncertainty, ill-definition, arbitrariness). Since collective preference, belief, and attitude towards conservatism are ill-defined concepts, the handling of imperfect information is a crucial issue in group decision making. Imperfect knowledge is modeled here using a novel interval-based outranking approach that permits us to handle the levels of conservatism of the group members and their dependence on the set of criteria. We propose two different methods that are appropriate for decision structures in the form of teams and committees. In these structures, there is a special actor (the so-called Supra Decision Maker, SDM) with authority for creating an aggregation of the group members' preferences, beliefs, and levels of conservatism, and for making the final collective decision. In the first method, the SDM behaves as an "altruistic dictator". The SDM creates an "altruistic" aggregation of preferences, beliefs, and levels of conservatism in an outranking model based on interval numbers. This model is then used to identify a final solution that is a best compromise for the SDM, taking into account the interval-based aggregated information from group members. In the second method, each group member uses an interval outranking model to solve an individual optimization problem and to identify his or her best solution. Based on these individual best solutions, the SDM behaves democratically, searching for a solution that performs best on measures of group satisfaction and dissatisfaction; the solution is improved through several rounds of consensus reaching. The potential of both methods is illustrated with a realistically sized example of many-objective project portfolio optimization.


Notes

  1. A description of the projects can be found at https://www.dropbox.com/s/t80u9kbdcub6jua/Projects%20Description.pdf?dl=0.


Acknowledgments

We acknowledge the support from CONACYT projects nos. 1340, 3058, A1-S-11012, 236154 and 280081. We also thank three anonymous reviewers for their helpful comments.

Author information


Corresponding author

Correspondence to Nelson Rangel-Valdez.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

Brief description of I-NOSGA and I-MOEA/D

This appendix describes the variants of I-NOSGA and MOEA/D used to address the example presented in the paper. All experiments were performed under the following conditions:

  1. The algorithms were run on a computer with an Intel Core i7 3.5 GHz CPU, 16 GB of RAM, and Mac OS X Yosemite 10.10.4;

  2. The solutions reported for each analysed instance were derived from 30 independent runs, so as to keep the stochastic nature of the outputs under control; and

  3. The best values of the algorithms' parameters were a population size of card(P) = 100, a maximum number of evaluations of ne = 100,000, and a mutation probability of pm = 0.02. This configuration was identified through a fine-tuning process based on covering arrays; the related experiment explored combinations of the following values: population size (card(P)) {50, 100, 200}, number of evaluations (ne) {10,000; 100,000; 200,000}, and mutation probability (pm) {0.01, 0.02, 0.05}. A sketch of this configuration is given after the list.
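For reference, here is a small sketch of the tuned configuration and of the explored value grid. The covering-array construction used for fine-tuning is not reproduced; the exhaustive product below is only a stand-in for it, and the variable names are illustrative.

```python
from itertools import product

# Best configuration reported in the text
TUNED = {"card_P": 100, "ne": 100_000, "pm": 0.02}

# Values explored during fine-tuning; covering arrays sample this grid,
# whereas product() enumerates it exhaustively (stand-in only).
GRID = {
    "card_P": [50, 100, 200],
    "ne": [10_000, 100_000, 200_000],
    "pm": [0.01, 0.02, 0.05],
}
configurations = [dict(zip(GRID, combo)) for combo in product(*GRID.values())]
```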

Interval Non-Outranked Sorting Genetic Algorithm

The I-NOSGA method [cf. Balderas et al. (2019)] draws its inspiration from the well-known NSGA-II algorithm (Deb et al. 2002) and from NOSGA [cf. Fernandez et al. (2010, 2011, 2013)]. I-NOSGA is an interval extension of NOSGA that identifies non-strictly outranked solutions in optimization problems. The algorithm was proposed for solving project portfolio optimization problems with many interval-valued objective functions and with an interval-based outranking relation defined on the objective space. I-NOSGA separates each population into non-strictly outranked fronts, identifying non-strictly outranked portfolios (modeled using interval mathematics) during the evolutionary search while satisfying a set of constraints that can be described by intervals.

Algorithm 1 shows the pseudo-code of the I-NOSGA variant used in this work. The algorithm begins by combining the existing populations of parents and children (Line 1); then, using the preference model defined in this work, it builds the non-outranked fronts (Line 2). The fronts F formed in this way are used to create the new generation of parents PopT+1 (Lines 5–11). The algorithm includes complete fronts in PopT+1 in order (Line 6) and completes it with the best solutions from the last front Fi, which does not fit entirely; these solutions are taken in descending order of their outranking strength score (Lines 9–11). Finally, a new generation of individuals QT+1 is evolved from PopT+1 using the genetic operators chosen for this purpose (Line 12). The pseudo-code reflects only the main loop of I-NOSGA: the initialization strategies for the parent population PopT and the children population QT are left to the user, and the best solutions are found in the front F0 of the last iteration.
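As an informal illustration, the following Python sketch mirrors the main loop described above. The helpers sort_fronts, strength, and variation are hypothetical stand-ins for the fast non-outranked sort, the outranking strength score, and the selection/crossover/mutation step, none of which are reproduced from the paper.

```python
def inosga_generation(pop_t, q_t, pop_size, sort_fronts, strength, variation):
    # One I-NOSGA generation, following the description of Algorithm 1.
    r_t = pop_t + q_t                                   # Line 1: merge parents and children
    fronts = sort_fronts(r_t)                           # Line 2: build non-outranked fronts
    pop_next, i = [], 0
    while i < len(fronts) and len(pop_next) + len(fronts[i]) <= pop_size:
        pop_next.extend(fronts[i])                      # Lines 5-8: copy whole fronts in order
        i += 1
    if i < len(fronts) and len(pop_next) < pop_size:    # Lines 9-11: partially copy the last front,
        best_first = sorted(fronts[i], key=strength, reverse=True)  # best outranking strength first
        pop_next.extend(best_first[:pop_size - len(pop_next)])
    q_next = variation(pop_next)                        # Line 12: evolve the new children
    return pop_next, q_next
```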

Let us point out that in non-outranked sorting, unlike the case of the classical dominance relation, the preference relation from Definition 8 may be cyclic, so not all the members of a population necessarily appear in a front. The I-NOSGA algorithm detects such cyclic conditions. When they occur, new fronts are formed from the set of unassigned solutions based on the number of preferred solutions np (previously computed in the fast non-outranked sort). For this purpose, I-NOSGA locates the smallest value nsmallest among the np values of the solutions that are not yet in a front; then, all the solutions satisfying np = nsmallest form a new front. The np value of the solutions in the new front becomes zero, and I-NOSGA repeats the process until every solution has been assigned to a front.
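This cycle-handling step can be sketched as follows; np_count is a hypothetical mapping from each unassigned solution (here, an index) to its np value, and fronts is the list of fronts built so far.

```python
def assign_remaining_fronts(unassigned, np_count, fronts):
    # Group the solutions left without a front (because the preference relation
    # may be cyclic) by their np value: np_count[s] is the number of solutions
    # preferred to solution s.
    while unassigned:
        n_smallest = min(np_count[s] for s in unassigned)
        new_front = [s for s in unassigned if np_count[s] == n_smallest]
        for s in new_front:
            np_count[s] = 0            # np of the new front's members becomes zero
            unassigned.remove(s)
        fronts.append(new_front)
    return fronts
```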

[Algorithm 1: pseudo-code of I-NOSGA (figure not reproduced)]

[Fig. 2: Binary representation of a portfolio in the decision variable space]

Figure 2 represents the chromosome encoding of a solution built by the genetic algorithm. The solution is a portfolio encoded as a binary array whose size equals the number of projects n. In this array, the i-th element is a gene indicating whether or not the i-th project forms part of the portfolio (values 1 and 0, respectively).
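For illustration only, a hypothetical instance of this encoding with n = 8 candidate projects:

```python
# Hypothetical example: projects 0, 2, 3 and 6 are in the portfolio, the rest are excluded.
portfolio = [1, 0, 1, 1, 0, 0, 1, 0]
selected = [i for i, gene in enumerate(portfolio) if gene == 1]  # -> [0, 2, 3, 6]
```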

The genetic operators involved in the design of the algorithm are the binary tournament, the one-point crossover, and the flip mutation. The binary tournament chooses the best-ranked parent according to its front, randomly breaking ties.
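A minimal sketch of the binary tournament, assuming front_of[i] stores the front index assigned to individual i (lower is better); the function name and data layout are illustrative, not taken from the paper.

```python
import random

def binary_tournament(front_of):
    # Draw two individuals at random; the one in the better (lower) front wins.
    a, b = random.sample(range(len(front_of)), 2)
    if front_of[a] != front_of[b]:
        return a if front_of[a] < front_of[b] else b
    return random.choice([a, b])      # ties are broken randomly
```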

Given that the preference relation that determines the selective pressure in I-NOSGA depends weakly on the number of objective functions, it is expected to behave satisfactorily in problems with many objective functions.

Interval MOEA/D

MOEA/D is a multi-objective evolutionary algorithm based on a decomposition approach (Zhang and Li 2007). The method decomposes a multi-objective problem, through aggregation functions, into several optimization sub-problems that are optimized simultaneously. Here, to solve Problem 15, we use a variant of MOEA/D with an interval-based architecture to handle imprecision in the objectives and constraints of project portfolio optimization. Algorithm 2 shows the pseudo-code.

[Algorithm 2: pseudo-code of the interval MOEA/D variant (figure not reproduced)]

The implemented MOEA/D uses the encoding shown in Fig. 2, and it outputs the external population (EP) used during the optimization process to store non-dominated solutions. The input parameters of the algorithm are the number of scalar functions N, the number of objectives K, the initial set of weight vectors Vector = {V1, V2, …, VN}, and the size T of the neighborhood of weight vectors. The i-th weight vector is \({V}_{i}=\left(\frac{i}{N},\frac{N-i}{N}\right)\). The two main steps of MOEA/D are as follows:

The first step is the initialization phase (Line 0 of Algorithm 2). Here the algorithm initializes the data structures required for the construction of the final set of solutions. These structures are: (1) the set EP, which will become the Pareto frontier approximation and is initially empty; (2) the neighborhood sets B(i), one per vector Vi, containing the T weight vectors closest to Vi by Euclidean distance; (3) the initial set of solutions X = {x1, x2, …, xN}, where each solution xj, 1 ≤ j ≤ number of group members, corresponds to the j-th DM's best compromise in the decision variable space, and the remaining ones are random; (4) the set of fitness values FV = {FV1, FV2, …, FVN}, where each FVi holds the K objective values of solution xi; and (5) the set z = {z1, …, zK}, whose values zj correspond to the best value of the j-th objective among all the solutions built during the initialization process.
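The initialization can be sketched as follows. The helpers evaluate, n_projects, and dm_solutions are hypothetical (objective evaluation, number of projects, and the DMs' best compromises), and maximization of the objectives is assumed.

```python
import random

def initialize_moead(N, T, K, evaluate, n_projects, dm_solutions):
    # Sketch of the initialization phase described above (hypothetical helpers).
    V = [(i / N, (N - i) / N) for i in range(1, N + 1)]       # weight vectors V_i
    B = []                                                     # neighborhoods B(i)
    for i in range(N):
        by_dist = sorted(range(N),
                         key=lambda j: (V[i][0] - V[j][0]) ** 2 + (V[i][1] - V[j][1]) ** 2)
        B.append(by_dist[:T])                                  # T closest weight vectors
    X = [list(x) for x in dm_solutions]                        # DMs' best compromises first
    while len(X) < N:                                          # remaining solutions are random
        X.append([random.randint(0, 1) for _ in range(n_projects)])
    FV = [evaluate(x) for x in X]                              # K objective values per solution
    z = [max(fv[k] for fv in FV) for k in range(K)]            # best value per objective (maximization assumed)
    EP = []                                                    # external population, initially empty
    return V, B, X, FV, z, EP
```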

The second step is an update process based on evolution. In this process, for each solution in the population, two solutions are randomly selected from the population and used to generate a new solution by applying genetic operators. The operators used are the following (a sketch is given after the list):

  1. One-point crossover: the two randomly chosen parents combine their chromosomes by designating a point in the bit strings that represent them. The operator swaps between the two parents the bits of their encodings to the right of that point, producing two new children.

  2. Flip mutation: this operator generates a random value for each allele in the bit string encoding the solution; whenever the generated value falls below a given probability (0.02 in this work), the corresponding bit is inverted.
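A minimal sketch of these two operators, assuming binary-list chromosomes; the function names and signatures are illustrative.

```python
import random

def one_point_crossover(parent1, parent2):
    # Swap the bits to the right of a randomly chosen point, producing two children.
    point = random.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:], parent2[:point] + parent1[point:]

def flip_mutation(chromosome, pm=0.02):
    # Invert each bit independently with probability pm (0.02 in this work).
    return [1 - gene if random.random() < pm else gene for gene in chromosome]
```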

After the application of the genetic operators, the created solution is subjected to a repair and improvement phase performed by the Repair and Improvement Operator (RIO). The RIO combines a process that may repair, and that always improves, a portfolio; i.e., it turns an infeasible solution into a quasi-feasible one, a solution that is feasible with respect to the budget but not necessarily with respect to the satisfaction degree of the DMs. The process orders the projects in the portfolio by the fitness ratio FR and, from the worst to the best, eliminates them one by one until the budget becomes feasible. After that, using a threshold parameter (set to 0.50), it also eliminates a proportion of the remaining projects. Then, from the best to the worst project by FR, it adds projects to the portfolio while keeping the budget feasible. The fitness ratio FR is a generalized measure of the fitness of a project based on its objectives (in the original domain): first, for each individual project i, the ratio fcij = fitness/cost is computed for each objective j; then fci, the median of the fcij values, is computed and taken as the value of FR.
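The following sketch captures the RIO as described above, under assumptions the text does not spell out: fr[i] is each project's fitness ratio (median of its per-objective fitness/cost ratios), the threshold is assumed to remove the worst-ranked part of the surviving projects, and only the budget constraint is checked.

```python
from statistics import median

def fitness_ratio(fitness_per_objective, cost):
    # FR of a project: median of its fitness/cost ratios over the objectives.
    return median(f / cost for f in fitness_per_objective)

def repair_and_improve(portfolio, fr, cost, budget, threshold=0.50):
    # fr[i] and cost[i] are the fitness ratio and cost of project i.
    selected = sorted((i for i, g in enumerate(portfolio) if g == 1), key=lambda i: fr[i])
    while selected and sum(cost[i] for i in selected) > budget:
        selected.pop(0)                               # drop the worst-FR project
    keep = selected[int(threshold * len(selected)):]  # drop a proportion of the remainder
                                                      # (assumed: the worst-ranked part)
    used = sum(cost[i] for i in keep)
    candidates = sorted(set(range(len(portfolio))) - set(keep), key=lambda i: fr[i], reverse=True)
    for i in candidates:                              # greedily re-add best-FR projects
        if used + cost[i] <= budget:
            keep.append(i)
            used += cost[i]
    keep_set = set(keep)
    return [1 if i in keep_set else 0 for i in range(len(portfolio))]
```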

The final step is to check the stopping condition; if it is satisfied, the algorithm finishes and reports EP as its output; otherwise, it returns to the second step. The stopping criterion is reaching a maximum of 100,000 evaluations. During each iteration, the algorithm updates B, EP, and z as needed, and at the end it returns the non-dominated set found in EP.
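For completeness, here is a sketch of the per-offspring update following the standard MOEA/D scheme (Zhang and Li 2007); the aggregation function scalarize and the dominance test dominates are left abstract because the text does not fix them, and maximization is assumed.

```python
def moead_update(i, y, fy, X, FV, B, z, EP, scalarize, dominates):
    # Update after creating a new solution y (objective values fy) for sub-problem i.
    # scalarize(fv, j, z): aggregation value of fv for sub-problem j (larger = better here);
    # dominates(a, b): Pareto dominance between objective vectors (maximization assumed).
    for k in range(len(z)):
        z[k] = max(z[k], fy[k])                            # refresh best-known objective values
    for j in B[i]:                                         # neighbourhood replacement
        if scalarize(fy, j, z) >= scalarize(FV[j], j, z):
            X[j], FV[j] = y, fy
    EP[:] = [p for p in EP if not dominates(fy, p[1])]     # remove solutions y dominates
    if not any(dominates(p[1], fy) for p in EP):
        EP.append((y, fy))                                 # keep y if it is non-dominated
    return z
```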


About this article


Cite this article

Fernández, E., Gómez-Santillán, C., Rangel-Valdez, N. et al. Group Multi-Objective Optimization Under Imprecision and Uncertainty Using a Novel Interval Outranking Approach. Group Decis Negot 31, 945–994 (2022). https://doi.org/10.1007/s10726-022-09789-8
