A PSO-algorithm-based consensus model with the application to large-scale group decision-making

Group decision-making (GDM) implies a process of extracting wisdom from a group of experts. In this study, a novel GDM model is proposed by applying the particle swarm optimization (PSO) algorithm to simulate the consensus process within a group of experts. It is assumed that the initial positions of decision-makers (DMs) are characterized by pairwise comparison matrices (PCMs). The minimum and maximum of the entries in the same locations of the individual PCMs are taken as the constraints on the DMs' opinions. The novelty lies in the construction of an optimization problem that considers both the group consensus and the consistency degree of the collective PCM. The former objective minimizes the distance between the collective PCM and each individual one. The latter makes the collective PCM acceptably consistent by virtue of the geometric consistency index. The fitness function used in the PSO algorithm is a linear combination of the two objectives. The proposed model is applied to solve a large-scale GDM problem arising in emergency management. Comparisons with existing methods reveal that the developed model has the advantages of decreasing the order of the optimization problem and reaching a fast yet effective solution.


Introduction
A group of experts is always considered to be much wiser than an individual in reaching a reasonable solution for a complex decision-making problem [5,40]. It is worth noting that the theory and methods for group decision-making (GDM) have attracted a great deal of attention [17,26]. A GDM problem is usually simplified as the consensus process by which a group of experts reaches the optimal solution to the problem of choosing the best one from a finite set of alternatives. Generally, there are three phases in the process of GDM [22,24]. In the phase of preference information, various preference formats could be provided by decision-makers (DMs) when evaluating their opinions on the alternatives, such as pairwise comparison matrices (PCMs) [41,51], additive reciprocal preference relations [11,35,44], linguistic preference relations [18,54], interval-valued preference relations [42,55] and others. Once the judgements are provided by the DMs, the next important issue is how to aggregate individual opinions and reach a consensus [17,24,26]. The consensus phase implies that the maximum degree of agreement among the group of experts is obtained by a series of interactive discussions and by learning from each other [24,26]. In particular, it is worth noting that the particle swarm optimization (PSO) algorithm has been used to simulate the consensus process [6,7,31,32,38]. A flexibility degree is usually assigned to each DM, and then the initial positions of the DMs are adjusted to reach a consensus. The selection phase consists of two different steps: aggregation of the individual preference relations and exploitation of the collective one [24]. When the PSO algorithm is used to model the consensus process, the level of consensus is incorporated into the GDM model. That is, the opinion of any expert in the group is close to the collective judgement used to choose the best alternative.
On the other hand, the consistency of preference relations should be considered in the process of GDM. The consistency degree of decision information is related to the level of logic and rationality of the DMs. Let us trace the study of consistency of preference relations back to the 1980s. Saaty [41] gave the consistency definition of PCMs originating from the analytic hierarchy process (AHP). One then finds that it is difficult to give a consistent PCM in a practical case. The consistency index was further defined to quantify the inconsistency degree of a PCM [41]. Moreover, it is noted that the consistency index depends on the dimensionality of the matrix. The consistency ratio was defined to eliminate the influence of the order of a PCM. When the consistency ratio is less than or equal to 0.1, the PCM is considered acceptable. When the consistency ratio is bigger than 0.1, the PCM is unacceptable. An unacceptable PCM should be adjusted to one with acceptable consistency, and many methods have been proposed for this purpose [8,21,58]. In addition, other consistency indices have been proposed to capture the inconsistency degree of PCMs [4]. One of the popular consistency indices was proposed by Crawford and Williams [14], named the geometric consistency index (GCI); thresholds of acceptable consistency for it have been provided [1]. The study of consistency definitions and consistency indices has been further extended to additive reciprocal matrices [23,35,44,52] and fuzzy-valued preference relations [30,33,48,49,55]. Furthermore, the group consistency level and the group consensus measure are two other important issues in GDM. The former focuses on the consistency degree of the collective preference relation [53]. The latter refers to the consensus degree between the individual preference relations and the collective one [19].
It is found that when the individual preference relations are of acceptable consistency, the collective one obtained by an aggregation operator is acceptably consistent [34,53]. To improve the group consensus degree, a great number of models have been proposed within the framework of AHP. For example, Dong et al. [19] defined the geometric cardinal consensus index and the geometric ordinal consensus index to measure the consensus degree between individual PCMs and the collective one. Some algorithms were then offered to improve the consensus degree, and two consensus models were proposed. Wu and Xu [51] constructed a decision support model where the individual consistency and the group consensus were captured by defining two indices through the Hadamard product of two PCMs. Xu et al. [57] proposed a distance-based consensus model to solve group decision problems with additive reciprocal matrices and PCMs. Dong and Saaty [16] proposed a consensus reaching model where a moderator was set up and the most discordant DM could update her/his judgements. A novel consensus reaching model was further proposed by Dong and Cooper [15], where an automatic feedback mechanism was offered in a dynamic environment.
In the above-mentioned consensus models, the number of DMs is always supposed to be of small scale. With the development of societal and technological trends, large-scale GDM has attracted much attention [20,29,46,50] and deserves further investigation. In addition, in the above typical consensus models, consensus measures are usually defined and algorithms are proposed to adjust their values to reach the consensus. When GDM is considered to be a social behavior of people, the PSO algorithm can be used to model the consensus process [6,7,31,32,38]. Motivated by these new trends in the study of GDM, the objective of this paper is to propose a new consensus model such that large-scale GDM can be dealt with. The main novelty of the proposed model comes with the introduction of a new fitness function for performing the PSO algorithm. The group consensus is reached by minimizing the distance between the individual preference relations and the collective one. The group consistency is ensured by minimizing the GCI of the collective PCM. A new algorithm is elaborated to solve the GDM problem. It is then applied to solve a practical large-scale GDM problem. Comparisons with existing methods show the advantages of the proposed model. This paper is structured as follows. Section 2 briefly introduces the concepts of PCMs and the consistency indices. In Sect. 3, a new consensus model for GDM is constructed and the performance of the PSO algorithm is analyzed. As compared to the existing GDM models based on the PSO algorithm, the main differences are the search domain of the optimal solution, formed by the proposed minimum-maximum method, and the order of the optimization problem to be solved. In Sect. 4, a large-scale GDM problem is investigated and some comparisons with the existing method are offered.
The obtained results reveal that the proposed model can be used to achieve an intelligent and effective decision-making for addressing a large-scale emergency management problem. Some concluding remarks are presented in Sect. 5.

Preliminaries
To choose the best alternative from a finite set X = {x_1, x_2, ..., x_n}, a natural way is to compare the alternatives in pairs [41]. A PCM A = (a_ij)_{n×n} is then constructed and used to derive the priority weights of the alternatives, from which the ranking is obtained and the best alternative chosen. The 1-9 scale is considered sufficient to evaluate the relative importance of the alternative x_i over the alternative x_j, expressed as the comparison ratio a_ij. When x_i is extremely more important than x_j, the value of a_ij is given as 9 or 8. If x_i is very strongly more important than x_j, the corresponding value of a_ij is given as 7 or 6. When the DM considers x_i essentially more important than x_j, the value of a_ij is evaluated as 5 or 4. If the DM thinks that x_i is weakly more important than x_j, the value of a_ij is expressed as 3 or 2. A unit value of the comparison ratio a_ij implies that x_i is equally important to x_j. Moreover, when the alternative x_i is less important than the alternative x_j, the value of a_ij is computed using the following reciprocal property:

a_ij = 1/a_ji, i, j = 1, 2, ..., n. (1)

Hence, the definition of a PCM is given as follows:

Definition 1 [41] A matrix A = (a_ij)_{n×n} is a pairwise comparison matrix (PCM) if a_ij > 0 and a_ij · a_ji = 1 for all i, j = 1, 2, ..., n.

Furthermore, when evaluating the preference intensity of x_i over x_j (i, j = 1, 2, ..., n), some vagueness could be experienced by the DMs. The theory of fuzzy sets is an effective tool for quantifying this uncertainty [2]. For example, interval numbers have been used to capture the uncertainty experienced by DMs, and one has the following definition [42]:

Definition 2 [42] An interval multiplicative reciprocal matrix Ã = (ã_ij)_{n×n} is such that ã_ij = [a⁻_ij, a⁺_ij] with 0 < a⁻_ij ≤ a⁺_ij and a⁻_ij · a⁺_ji = a⁺_ij · a⁻_ji = 1, meaning that the alternative x_i is between a⁻_ij and a⁺_ij times as important as the alternative x_j.

On the other hand, the consistency degree of a PCM reflects the rationality level of the judgements. Cardinal transitivity of the judgements means the perfect consistency of a PCM. The consistent PCM is defined as follows:

Definition 3 [41] A PCM A = (a_ij)_{n×n} in relative measurements is consistent if a_ik = a_ij · a_jk for all i, j, k = 1, 2, ..., n.

Unfortunately, one always gives an inconsistent PCM in a practical case [41]. The inconsistency degree of a PCM should then be quantified using a consistency index. The consistency index (CI) and the consistency ratio (CR) have been defined by Saaty [41]. Here we recall the geometric consistency index proposed by Crawford and Williams [14]:

Definition 4 [14] Assume that A = (a_ij)_{n×n} is a PCM and w = (w_1, w_2, ..., w_n) is the priority vector derived by the row geometric mean method. The geometric consistency index (GCI) is defined as

GCI(A) = (2 / ((n − 1)(n − 2))) Σ_{i<j} (ln a_ij − ln(w_i/w_j))².

Thresholds of acceptable consistency for the GCI of a PCM have also been proposed in [1]: 0.31 for n = 3, 0.35 for n = 4, and 0.37 for n > 4. When the GCI of a PCM is less than the corresponding threshold, the matrix is considered to be acceptably consistent.
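As an illustration, the GCI of Definition 4 can be computed with a short script. This is a sketch; the priority weights are derived by the row geometric mean method, as in [14], and the function name `gci` is illustrative:

```python
import numpy as np

def gci(A):
    """Geometric consistency index of a pairwise comparison matrix A.

    Weights come from the row geometric mean method; the GCI averages
    the squared log-errors ln(a_ij) - ln(w_i/w_j) over the pairs i < j.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = np.exp(np.mean(np.log(A), axis=1))       # row geometric means
    e = np.log(A) - np.log(np.outer(w, 1.0 / w))  # log-error matrix
    iu = np.triu_indices(n, k=1)                  # pairs with i < j
    return 2.0 / ((n - 1) * (n - 2)) * np.sum(e[iu] ** 2)

# A perfectly consistent 3x3 PCM (a_ik = a_ij * a_jk) has GCI = 0.
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(gci(A))
```

A matrix violating transitivity yields a positive GCI, to be compared against the threshold for its order (0.31 for n = 3).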

Building consensus in GDM
In what follows, we consider the GDM problem with m experts in E = {e_1, e_2, ..., e_m} and n alternatives in X = {x_1, x_2, ..., x_n}. It is supposed that the initial positions of the m experts are expressed using the PCMs A_k = (a^(k)_ij)_{n×n}, k = 1, 2, ..., m. There may be some contradictions among the opinions of the experts. The initial positions expressed by the PCMs should therefore be allowed to be adjusted to some degree such that the consensus in GDM can be built. According to the known works [6,7,31,32,38], a flexibility degree is usually offered to each DM. The comparison ratio of x_i over x_j can then vary in an interval, and an interval multiplicative reciprocal matrix is constructed as in Definition 2. In the present study, different from the methods in [6,7,31,32,38], an interval-valued comparison matrix is constructed using the minimum and maximum of the entries in {A_1, A_2, ..., A_m}.

Construction of an interval-valued comparison matrix
It is assumed that the m experts express their opinions as the PCMs A_k = (a^(k)_ij)_{n×n} (k = 1, 2, ..., m). The lower and upper bounds of the comparison ratios are defined as

a⁻_ij = min{a^(1)_ij, a^(2)_ij, ..., a^(m)_ij}, (2)

a⁺_ij = max{a^(1)_ij, a^(2)_ij, ..., a^(m)_ij}, (3)

and the interval-valued matrix Ā = ([a⁻_ij, a⁺_ij])_{n×n} is constructed. One can see that all the matrices A_k = (a^(k)_ij)_{n×n} belong to Ā, meaning that the entries satisfy a^(k)_ij ∈ [a⁻_ij, a⁺_ij]. We have the following result:

Theorem 1 The matrix Ā = ([a⁻_ij, a⁺_ij])_{n×n} constructed by (2) and (3) is an interval multiplicative reciprocal preference relation.

Proof In virtue of (2) and the reciprocal property a^(k)_ij = 1/a^(k)_ji, one has a⁻_ij = min_k a^(k)_ij = min_k (1/a^(k)_ji) = 1/max_k a^(k)_ji = 1/a⁺_ji. Similarly, by (3) one has a⁺_ij · a⁻_ji = 1. Together with a⁻_ij ≤ a⁺_ij, the conditions in Definition 2 hold, so Ā is an interval multiplicative reciprocal preference relation. The proof is completed.
One can see from Theorem 1 that an interval multiplicative reciprocal preference relation is constructed using (2) and (3). For convenience, it is called the minimum-maximum method, since the minimum and maximum of the entries in {A_1, A_2, ..., A_m} are used. Moreover, the consensus process for GDM involves the dynamic adjustment of the judgements of the DMs after some discussions and learning from each other [37]. In the present model, it is considered that the constructed Ā includes all the possible changes of the positions of the experts. This idea is reasonable as compared to those in [6,7,31,32,38], for two main reasons. The first is that when various experts compare two alternatives such as x_i and x_j, the different comparison ratios they provide reveal the possible importance degrees of x_i over x_j. This means that the alternatives x_i and x_j have been thoroughly investigated by all the experts in the group, and the interval numbers constructed using (2) and (3) quantify all the possible values of the comparison ratios. The second is that the flexibility degree offered to each expert in [6,7,31,32,38] could be arbitrary: in fact, we do not know the exact values of the flexibility degrees of the experts when applying those methods. The proposed method of constructing the interval numbers can overcome this arbitrariness to some extent.
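The minimum-maximum construction in (2) and (3) can be sketched numerically as follows. The matrices and the helper name `min_max_interval` are illustrative; the check at the end verifies the reciprocal property established in Theorem 1:

```python
import numpy as np

def min_max_interval(pcms):
    """Entrywise minimum and maximum over a list of PCMs, giving the
    lower and upper bound matrices of the interval-valued matrix."""
    stack = np.stack([np.asarray(A, dtype=float) for A in pcms])
    return stack.min(axis=0), stack.max(axis=0)

# Two illustrative 3x3 PCMs from two experts.
A1 = np.array([[1.0, 2.0, 5.0], [0.5, 1.0, 3.0], [0.2, 1/3, 1.0]])
A2 = np.array([[1.0, 3.0, 4.0], [1/3, 1.0, 2.0], [0.25, 0.5, 1.0]])

lo, hi = min_max_interval([A1, A2])

# Theorem 1's reciprocal property: lo_ij * hi_ji = 1 for all i, j.
assert np.allclose(lo * hi.T, 1.0)
```

Every individual entry a^(k)_ij then lies inside [lo_ij, hi_ij] by construction.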

Fitness function
In what follows, two objectives should be achieved. One is the acceptable consistency of the collective PCM. The other is to reach the consensus by considering the distance between the individual PCMs and the collective one. First, let us consider the acceptable consistency of the collective PCM. For convenience, the collective matrix is written as R = (r_ij)_{n×n}. To quantify the inconsistency degree of R, any consistency index can in theory be used. Here the GCI of R is considered and one has the following function:

Q_1(R) = GCI(R).

The smaller the value of Q_1, the more consistent the collective matrix R is. Second, to reach the consensus among all the DMs, the distance between the individual PCMs and the collective one is considered. Hence, the following function is constructed:

Q_2(R) = (1/m) Σ_{k=1}^{m} d(A_k, R),

where d(A_k, R) stands for a distance measure between the two matrices. Obviously, the smaller the value of Q_2, the higher the group consensus level. From the ideal viewpoint, the case with Q_1(R) = 0, corresponding to a consistent collective PCM, together with the smallest value of Q_2(R) is the optimal solution. This means that the group of experts reaches the highest consensus level while the final decision is perfectly consistent. According to the goal of seeking the smallest values of Q_1(R) and Q_2(R), the two objectives do not conflict. However, the smallest values of the two objectives may not be reached at the same time, which means that a multi-objective optimization problem with conflicting criteria should be solved. In fact, to preserve as much decision information as possible, it is sufficient to ensure that Q_1 stays below the acceptable-consistency threshold, and a perfectly consistent matrix is not pursued. Hence, for the sake of simplicity, the linear combination of Q_1 and Q_2 is used to deal with the multi-objective optimization problem, written as follows [31,32]:

Q(R) = p·Q_1(R) + q·Q_2(R), (7)

where the parameters satisfy p ≥ 0 and q ≥ 0. When p = 0, Q = q·Q_2, meaning that only the group consensus is considered.
When q = 0, Q = p·Q_1, implying that only the group consistency is studied. Generally, for p ≠ 0 and q ≠ 0, the values of p and q influence the contributions of the functions Q_1 and Q_2, respectively. Their effects should be investigated through numerical computations.
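The fitness function Q = p·Q_1 + q·Q_2 can be sketched as below. This is a sketch under assumptions: the distance d(A_k, R) is taken as the Frobenius norm (one plausible choice; the paper's exact distance may differ), and `fitness` is an illustrative name:

```python
import numpy as np

def fitness(R, pcms, p=1.0, q=1.0):
    """Q = p*Q1 + q*Q2 for a candidate collective PCM R.

    Q1 is the GCI of R (row geometric mean weights); Q2 is the average
    Frobenius distance from R to the individual PCMs (assumed metric).
    """
    R = np.asarray(R, dtype=float)
    n = R.shape[0]
    w = np.exp(np.mean(np.log(R), axis=1))        # RGMM weights
    e = np.log(R) - np.log(np.outer(w, 1.0 / w))  # log-error matrix
    iu = np.triu_indices(n, k=1)
    q1 = 2.0 / ((n - 1) * (n - 2)) * np.sum(e[iu] ** 2)
    q2 = np.mean([np.linalg.norm(R - np.asarray(A)) for A in pcms])
    return p * q1 + q * q2
```

For a consistent R that coincides with every individual PCM, both terms vanish and Q = 0, the ideal case described above.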

Modeling the consensus process
In the fitness function Q, the unknown quantity is the collective matrix R. The consensus process in GDM requires that the opinions of the experts be close to the collective matrix R. An optimization problem is then constructed as follows:

min Q(R) = p·Q_1(R) + q·Q_2(R), s.t. r_ij ∈ [a⁻_ij, a⁺_ij], r_ij · r_ji = 1, (8)

with the constraint condition:

GCI(R) ≤ GCI*, (9)

where GCI* stands for the threshold of acceptable consistency for a PCM having the same dimensionality as R. It is seen from the methods in [6,7,31,32,38] that the individual PCMs are changed in the iteration process. Here we directly consider the method of obtaining the collective matrix from the constructed interval-valued comparison matrix Ā. Moreover, it is worth noting that there are other methods of determining the collective matrix, such as aggregation operators [56,60,61] and mathematical programming models [48,49]. Here the optimization problem (8) is constructed and solved to determine the collective matrix R.
One can see from the expressions of Q_1 and Q_2 together with (1) that the function Q is nonlinear and multi-variable. In other words, the solution of the optimization problem (8) is difficult to obtain through typical methods such as differentiation. Therefore, to solve the nonlinear optimization problem (8) with the constraint condition (9), the particle swarm optimization (PSO) algorithm is used to obtain a globally optimal solution [27]. It is worth noting that PSO was developed by Kennedy and Eberhart when studying the social behavior of bird flocking and fish schooling [27,43]. The theory and applications of PSO have been studied widely, as presented in the books [12,28] and the review papers [3,39]. Recently, large-scale optimization problems have attracted much attention and have been solved by developing the PSO algorithm [9,10,36]. For the constructed optimization problem (8), the decision variables are the entries of the collective matrix R. Even if the number of alternatives is 9, the order of (8) is only 9 × 8 / 2 = 36 by considering the reciprocal property of the PCM. Therefore, the PSO algorithm developed by Shi and Eberhart [43] is still feasible. As compared to the existing models in [6,7,31,32,38], the proposed method can decrease the order of the constructed optimization problem and reduce the difficulty of obtaining the optimal solution.
For convenience, the formulae of the applied PSO algorithm are given as follows [43]:

v_{t+1} = ω·v_t + c_1·r_1·(p_t − x_t) + c_2·r_2·(p_g − x_t),

x_{t+1} = x_t + v_{t+1}.

Here x_t is the current position of a particle; v_t is the velocity of the particle; p_t is its previous best position and p_g is the global best position. The constants c_1 and c_2 are chosen as 2; r_1 and r_2 are random vectors uniformly distributed in [0, 1]. The inertia weight ω decreases from 0.9 to 0.4 as the generation number grows. It is seen that the decision variables of the optimization problem (8) are the entries of R = (r_ij)_{n×n}. Using the reciprocal property r_ij = 1/r_ji, the encoding strategy is to set the following vector as a particle: r = (r_12, r_13, ..., r_1n, r_23, ..., r_(n−1)n).
The values of the entries in r stand for the position of the particle r. The initial positions and velocities of the swarm are randomly generated. The fitness function (7) is used to adjust the positions of the particles in the swarm until the optimal solution is determined. In what follows, a numerical example is carried out to verify the above algorithm, and some comparisons are offered.
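The encoding and the PSO update rules above can be sketched as a standard inertia-weight PSO over the upper-triangular entries of R. This is a minimal sketch under assumptions: positions are clipped to the interval bounds from (2) and (3), the GCI constraint (9) is omitted for brevity, and `decode` and `pso` are illustrative helper names:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(r, n):
    """Rebuild a PCM from its upper-triangular entries; the lower
    triangle is filled by the reciprocal property r_ji = 1/r_ij."""
    R = np.ones((n, n))
    iu = np.triu_indices(n, k=1)
    R[iu] = r
    il = np.tril_indices(n, k=-1)
    R[il] = (1.0 / R.T)[il]
    return R

def pso(objective, lo, hi, swarm=100, iters=100, c1=2.0, c2=2.0):
    """Basic PSO [43]: inertia weight decreasing linearly 0.9 -> 0.4,
    personal/global best attraction, positions clipped to [lo, hi]."""
    dim = lo.size
    x = rng.uniform(lo, hi, size=(swarm, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([objective(p) for p in x])
    g = pbest[pcost.argmin()]
    for t in range(iters):
        w = 0.9 - 0.5 * t / max(iters - 1, 1)     # inertia schedule
        r1, r2 = rng.random((2, swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # interval constraints
        cost = np.array([objective(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()]
    return g, pcost.min()
```

To solve (8), `objective` would evaluate the fitness Q on `decode(r, n)`, with `lo` and `hi` the vectors of lower and upper bounds a⁻_ij and a⁺_ij for i < j.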

Numerical results and comparisons
In the following, let us offer an example to illustrate the proposed consensus model. Suppose that one should choose the best from the four alternatives X = {x_1, x_2, x_3, x_4}. A group of five experts provide the PCMs as follows [19,51]: According to (2) and (3), the interval-valued matrix is constructed. It is found that the matrix satisfies Theorem 1 and is an interval multiplicative reciprocal preference relation (Definition 2). Then the optimization problem (8) with (9) is solved and the collective matrix is determined. When running the PSO algorithm, the size of the swarm and the maximal number of generations are both chosen as 100. For some selected values of p and q, the variations of the fitness function versus the generation number are depicted in Fig. 1. One can see from Fig. 1 that with the increase of the generation number, the values of Q decrease and tend to a stable value. This phenomenon is in accordance with the findings in [6,7,31,32,38]. It is also found that when the parameters p and q are different, the final stable values of Q are different: the bigger the values of p or q, the bigger the optimal value of Q. In particular, here we are interested in the collective matrices under various parameters. To make the results more reliable, the PSO algorithm is run multiple times and a mean matrix M = (m_ij)_{n×n} is determined. That is, when the PSO algorithm is run n times to give R_k = (r^(k)_ij)_{n×n} (k = 1, 2, ..., n), the mean matrix can be computed as

m_ij = (1/n) Σ_{k=1}^{n} r^(k)_ij.

In addition, to show the dispersion degree of the R_k, the standard deviation is defined as

d = sqrt((1/n) Σ_{k=1}^{n} ||R_k − M||²),

where ||·|| stands for a matrix norm. Here the Frobenius norm ||R_k − M|| = sqrt(Σ_{i,j} (r^(k)_ij − m_ij)²) is used in the numerical computations. For convenience, it is supposed that the mean matrices M_1, M_2 and M_3 correspond to p = q = 1, p = 3, q = 1 and p = 1, q = 1.5, respectively. We choose n = 100 and obtain the following results: The weights of the alternatives are computed using the row geometric mean method [14] and shown in Table 1.
It is seen from Table 1 that the values of Q_1 are all less than the threshold 0.35 for a matrix of order 4, and the ranking of the alternatives is obtained accordingly. Moreover, the values of d are also given in Table 1. Since a small value of d is always obtained, the matrix R determined by the PSO algorithm is close to M.
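The averaging over repeated PSO runs and the dispersion statistic d can be sketched as follows. This assumes the arithmetic mean of the run matrices and the Frobenius norm, as described above; `mean_and_dispersion` is an illustrative name:

```python
import numpy as np

def mean_and_dispersion(runs):
    """Entrywise mean matrix M over repeated runs and the standard
    deviation d = sqrt(mean of ||R_k - M||^2), Frobenius norm."""
    runs = np.stack([np.asarray(R, dtype=float) for R in runs])
    M = runs.mean(axis=0)
    d = np.sqrt(np.mean([np.linalg.norm(R - M) ** 2 for R in runs]))
    return M, d

# Identical runs give d = 0; spread-out runs give d > 0.
runs = [np.full((2, 2), 2.0), np.full((2, 2), 4.0)]
M, d = mean_and_dispersion(runs)
```

A small d indicates that the matrices returned by the stochastic PSO runs cluster tightly around M, which is the reliability argument made for Table 1.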
On the other hand, the PSO algorithm is based on the technique of random optimization. The determined matrices R_k = (r^(k)_ij)_{n×n} (k = 1, 2, ..., n) therefore exhibit randomness, and a statistical test should be considered [25]. Here we apply the traditional Wilcoxon rank-sum test for two independent samples. It is assumed that the two samples with n_1 and n_2 matrices satisfying n_1 ≤ n_2 are created independently and written as S_1 = {R_1, R_2, ..., R_{n_1}} and S_2 = {R̃_1, R̃_2, ..., R̃_{n_2}}. The mean matrices obtained from S_1 and S_2 are expressed as M = (m_ij)_{n×n} and M̃ = (m̃_ij)_{n×n}, respectively. The observations r^(1)_ij, r^(2)_ij, ..., r^(n_1)_ij and r̃^(1)_ij, r̃^(2)_ij, ..., r̃^(n_2)_ij are used to perform the Wilcoxon rank-sum test. Since the samples S_1 and S_2 come from the same sample space, the null hypothesis H_0 is that the two samples follow the same distribution. Here we choose n_1 = 3, n_2 = 4 and the significance level α = 0.05. By running the PSO algorithm with p = q = 1, the matrices in S_1 and S_2 are obtained and given in Appendix A. Without loss of generality, we only need to choose one pair of i and j to test. For example, when i = 2, j = 3, the ranks of the random data and their sum are shown in Table 2. Since we have 9 > 6, the null hypothesis H_0 cannot be rejected according to the Wilcoxon table [25], meaning that the computed results in Table 1 are convincing. At the end, some comparisons with the existing methods in [19,51] are offered. As an example, we consider the case of p = q = 1 in the proposed method. The computed results are shown in Table 3, where RGMM and EM denote the priority methods of the row geometric mean method [14,19] and the eigenvector method [41,51], respectively. It is found from Table 3 that the rankings of alternatives for all the methods are identical. The values of Q_1 are also less than 0.35, meaning that the collective matrices are of acceptable consistency. The main difference lies in the values of Q_2, which reflect the consensus degree within the group of experts.
Since 0.6288 < 0.6966 < 0.7396, the least value of Q 2 has been given for M 1 , implying that the consensus in GDM can be improved using the present method.
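The rank-sum statistic used in the Wilcoxon test above can be sketched with a few lines of NumPy. The sample values below are illustrative, not the paper's data; ties receive average ranks, and the resulting sum is compared against the critical values from the Wilcoxon table at the chosen significance level:

```python
import numpy as np

def rank_sum(sample1, sample2):
    """Wilcoxon rank-sum statistic: pool both samples, rank them
    (ties get average ranks), and return the rank sum of sample1."""
    pooled = np.concatenate([sample1, sample2])
    order = pooled.argsort()
    ranks = np.empty(pooled.size, dtype=float)
    ranks[order] = np.arange(1, pooled.size + 1)
    for v in np.unique(pooled):       # average ranks over tied values
        mask = pooled == v
        ranks[mask] = ranks[mask].mean()
    return ranks[: len(sample1)].sum()

# Illustrative entries r^(k)_23 from n1 = 3 and n2 = 4 independent runs.
s1 = np.array([1.8, 2.1, 1.9])
s2 = np.array([2.0, 1.7, 2.2, 1.95])
W = rank_sum(s1, s2)
```

If W falls between the lower and upper critical values for (n_1, n_2) = (3, 4) at α = 0.05, the null hypothesis of a common distribution cannot be rejected.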

The application to large-scale group decision-making
In typical GDM, the invited experts always have professional knowledge about the considered problem. Rational judgements can be provided when comparing the alternatives within a sufficient time frame. However, the provided judgements of DMs could exhibit a high degree of uncertainty and complexity in some practical situations. For example, when the number of DMs increases sufficiently, the dispersion of the DMs' judgements could increase. When many agents in a social network provide their opinions on a decision-making problem, the interaction of these judgements could lead to an extreme decision. When an emergency event happens, the lack of time makes the opinions of DMs irrational to a certain degree. Hence, large-scale GDM is becoming an attractive research direction [20,46,50,59]. One of the main challenges in large-scale GDM is to reach a good consensus among all DMs. Here we extend the proposed model to address the large-scale GDM problem. A concrete case is studied and some comparisons are offered.
In what follows, we investigate an emergency event in which a large-scale group of DMs is involved [59]. The background of the example is the earthquake that occurred in Ya'an City, Sichuan, China on April 20, 2013. The emergency response had to be adopted immediately under a complex environment with blocked traffic, interrupted communication, and insufficient rescue staff and medical facilities. To minimize the damage, the rescue team should choose the best alternative from the following four plans: Here it should be pointed out that the matrices A_k (k = 1, 2, ..., 20) are obtained from the known additive reciprocal matrices in [59] using the following transformation formula [58]:

a_ij = 9^(2·b_ij − 1),

where a_ij stands for an entry in a PCM A = (a_ij)_{n×n} and b_ij denotes an entry in an additive reciprocal matrix B = (b_ij)_{n×n} with b_ij + b_ji = 1. Moreover, it is noted that the scale of numerical examples for simulating large-scale GDM is usually no more than 50 DMs, for instance, 50 experts in [50], 35 agents in [20], 30 experts in [29], 25 agents in [46] and 20 experts in [59]. Additionally, one can see that the scale (order) of a system dynamics model for simulating a complex social and economic system is about 10-100 [47]. Hence, a GDM problem with twenty experts can be considered to be large-scale. In what follows, we give the solution process of the large-scale GDM with 20 experts.
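The conversion from additive reciprocal matrices to PCMs can be sketched as follows, using the transformation a_ij = 9^(2·b_ij − 1), a standard choice that maps b_ij ∈ [0, 1] with b_ij + b_ji = 1 onto the 1/9 to 9 scale (the matrix below is illustrative, not data from [59]):

```python
import numpy as np

def additive_to_multiplicative(B):
    """Map an additive reciprocal matrix (b_ij + b_ji = 1) to a
    multiplicative PCM via a_ij = 9**(2*b_ij - 1)."""
    return 9.0 ** (2.0 * np.asarray(B, dtype=float) - 1.0)

# Illustrative additive reciprocal matrix: b_ij + b_ji = 1, b_ii = 0.5.
B = np.array([[0.5, 0.7],
              [0.3, 0.5]])
A = additive_to_multiplicative(B)

# Multiplicative reciprocity a_ij * a_ji = 1 is preserved.
assert np.allclose(A * A.T, 1.0)
```

Note that b_ij = 0.5 maps to a_ij = 1 (indifference), while b_ij = 1 maps to a_ij = 9, the extreme of the 1-9 scale.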
Step 1 Using the minimum-maximum method shown in (2) and (3), an interval-valued preference relation is constructed by virtue of the above twenty matrices as follows: It is seen from the given matrix Ã that the entries behave peculiarly. That is, several entries are exactly [1/9, 9] and the bounds of the others are 1/9 or 9. This phenomenon reveals that the judgements of the DMs exhibit great divergence in the complex decision-making environment. Therefore, it is difficult to reach consensus for the GDM problem.
Step 2 Because time is pressing, the DMs cannot discuss endlessly, and a fast yet reasonable decision should be reached. Here the proposed model is used to maximize the consensus degree of the DMs. For example, we choose p = q = 1 in the fitness function (7). Figure 2 shows the variations of the fitness function versus the generation number. As the generation number increases, the values of the fitness function decrease and tend to a stable value. This means that the optimal solution can be obtained after 100 generations.
Step 3 By running the PSO algorithm 100 times, the mean matrix is given as follows: It is seen from [59] that an exit-delegation mechanism has been proposed for constructing a dynamic consensus model. However, in the practical case, time is most precious and a fast yet effective decision should be reached. The proposed model can provide an intelligent and fast decision procedure. In addition, it is found that the optimal solution is not given in [59]. For the sake of comparison, we complete the consensus process in [59] and show it in Appendix 2. The result obtained with [59] is in agreement with the present observation. This means that the proposed model is effective in saving time and reaching a good consensus among the DMs.
At the end, it is of interest to analyze the sensitivity of the final solution of the GDM problem to the parameters p and q. For convenience, the parameters p and q are written as the two-tuple (p, q). Based on the RGMM method, the priority weights of the alternatives are determined in Tables 4 and 5 using selected values of (p, q). It is found from Tables 4 and 5 that the ranking of alternatives does not change with the variations of (p, q). This implies that the final solution of the large-scale GDM problem is not sensitive to the parameters p and q.

Conclusions
It is an important task to reach the optimal solution of a group decision-making (GDM) problem under a high consensus level within a group of experts. In particular, for a large-scale GDM problem with a high degree of uncertainty and emergency, a fast yet effective decision should be reached. In the present study, a novel GDM model has been proposed and applied to the large-scale case. A new optimization problem has been constructed by considering the acceptable consistency of the collective matrix and minimizing the distance between the individual pairwise comparison matrices (PCMs) and the collective one. The particle swarm optimization (PSO) algorithm has been applied to solve the constructed optimization problem. Numerical examples have been carried out and some comparisons have been offered. The obtained results show that the proposed model is effective in solving typical and large-scale GDM problems with PCMs. Some novel observations are put forward as follows:

• The minimum-maximum method has been proposed to construct an interval multiplicative reciprocal preference relation, which is used as the constraint on decision-makers' opinions.

• As compared to the existing methods, the proposed model can decrease the scale (order) of the optimization problem.

• A fast yet effective decision can be reached using the proposed consensus model for the large-scale GDM arising in emergency management.
In addition, it should be pointed out that the two-objective optimization problem has been simplified to a single-objective one in the present study. In fact, it is of much significance to give multiple optimal solutions by solving the multi-objective optimization problem [13,45]. In future works, evolutionary computation methods such as the PSO algorithm could be used to solve the constructed multi-objective optimization problems such that various decision schemes can be offered.