Introduction

To deal with a complex decision-making problem, a single decision maker (DM) would need to address multiple objectives, uncertainty, time dynamics and competitiveness simultaneously; a single individual is rarely able to meet all these requirements. It is therefore necessary to extract the collective wisdom of the multiple people involved in the problem and to integrate a set of different individual opinions into a collective preference relation [1,2,3]. The process of extracting wisdom from a group of DMs is called group decision making (GDM), and it is a key activity in companies and organizations. Moreover, in a social group, owing to differences in cultural values and conflicts of personal interests, the members inevitably hold different preferences [4,5,6,7]. It is thus of interest to develop models of GDM, and extensive attention has been devoted to various decision-making environments [8,9,10,11]. When alternatives are compared in pairs, multiplicative and additive pairwise comparison matrices are the two typical forms of DMs' preference information [12,13,14,15]. It is worth noting that many consensus reaching models have been proposed in terms of fuzzy preference relations (FPRs) [16,17,18,19,20]. However, the complexity of actual decision-making problems, the limitations of DMs' expertise and the subjectivity of their judgments all make it difficult for DMs to give completely accurate and logical judgments. It is therefore natural to express DMs' preference information with fuzzy numbers, such as interval numbers, triangular fuzzy numbers and intuitionistic fuzzy numbers [21,22,23,24,25]. Here, to cope with the uncertainty experienced by DMs, a generic case is considered where the decision information is expressed as FPRs without the additively reciprocal property (NrFPRs).

In the process of reaching consensus in GDM, DMs may discuss and negotiate over many rounds. In each round, DMs can accept the suggestions of others and modify their initial judgments, so that a final decision acceptable to every DM is reached [26, 27]. However, reaching consensus does not mean perfect consistency of the decision information or full agreement among DMs [27]. The consistency level of individual preference relations and the consensus measurement of the group therefore play a key role. The consistency level quantifies the random and illogical behavior in DMs' pairwise comparisons of alternatives, while the consensus measurement quantifies the degree of agreement among DMs [15]. For example, the deviation degree of inconsistent FPRs from consistent ones is often used to define a consistency index [15, 18, 28,29,30]. For consensus levels, the distances between individual preference values and the collective ones are generally used [18, 29, 30]. This study develops a novel approach to the consistency index of NrFPRs, based on the relationships between two column/row vectors. In addition, it is worth noting that there is no well-established method for determining the threshold of the consensus level. Here we provide an acceptable consensus standard for obtaining the collective matrix.

In GDM, a group of experts work together to pursue a common goal [31]. However, the negotiation process of GDM is complex [32, 33], and FPRs whose entries are fixed numerical values cannot fully reflect this complexity. During negotiation, DMs need to make certain compromises and modify their initial judgments, meaning that DMs should have a certain degree of flexibility in expressing their opinions. This flexibility implies that the entries of preference relations are not simple real numbers, but information granules such as intervals, fuzzy sets, rough sets and others [34,35,36,37]. The particle swarm optimization (PSO) algorithm has been used to deal with GDM problems involving an allocation of information granularity [19, 38,39,40,41,42]. The PSO algorithm is a dynamic iterative process that initializes a group of random particles (random solutions) and finds the optimal solution through iteration [43, 44]. In each iteration, particles update themselves by tracking two “extremes”: the best solution found by the particle itself and the best solution found by the whole population. The PSO algorithm has the advantages of few control parameters, simple operation, fast convergence and suitability for multi-objective problems [45, 46]. In this paper, the consensus reaching process in GDM is addressed by proposing a novel fitness function to construct an optimization problem, which is solved by the PSO algorithm.

As shown in the above analysis, the novelty and contributions of the present study are summarized as follows:

  • The uncertainty in decision information is characterized by proposing the concept of NrFPRs. It is found that interval FPRs can be decomposed into two NrFPRs.

  • The consistency index of NrFPRs is proposed using the relations between two column/row vectors. The thresholds of the proposed consistency index are computed for NrFPRs with acceptable additive consistency.

  • A consensus model in GDM is established where the PSO algorithm is adopted to simulate the process of discussing and learning from each other. An acceptable consensus standard is defined such that the threshold of consensus level can be captured.

To achieve the above objectives, the rest of the paper is divided into four parts. In Sect. “Fuzzy preference relations and additive consistency indexes”, the concept of NrFPRs is introduced and an additive consistency index is proposed. It is found that the relations between two columns/rows of NrFPRs can be naturally used to quantify the inconsistency degree. Section “A novel consensus model in group decision making” offers a novel consensus model for GDM with NrFPRs; the novelty lies in the introduction of a novel fitness function and an acceptable standard of consensus level. Moreover, a new algorithm for solving the consensus model in GDM with NrFPRs is elaborated. In Sect. “Comparison and discussion”, numerical computations are carried out to illustrate the proposed model and compare it with existing models. Finally, conclusions and directions for future study are given in Sect. “Conclusions and the future study”.

Fuzzy preference relations and additive consistency indexes

Let us consider a GDM problem of choosing the best alternative from a set \(X=\{x_{1},x_{2},\ldots ,x_{n}\}\ (n\ge 2).\) A group of DMs \(E=\{e_{1},e_{2},\ldots ,e_{m}\}\ (m\ge 2)\) are invited to provide judgments on X based on their knowledge, motivation, ideas, attitudes and so on. After some discussion and comprehensive consideration of the individual opinions, the ranking of alternatives is obtained [1, 47]. To formalize the decision-making process, we need a tool that effectively captures the opinions of DMs. In the following, we introduce the concept of NrFPRs and propose a novel additive consistency index to quantify the inconsistency degree.

Non-reciprocal fuzzy preference relations

Following the idea of fuzzy set theory [48], the definition of fuzzy binary relations is given as follows:

Definition 1

[13] A fuzzy binary relation B on a set of alternatives X is a fuzzy set on the Cartesian product \(X\times X\) characterized by a membership function \(\mu _{B}: X\times X\mapsto [0,1].\)

A fuzzy binary relation B is expressed by the \(n\times n\) matrix \(B=(b_{ij})_{n\times n},\) where \(b_{ij}=\mu _{B}(x_{i},x_{j})\ (\forall i,j\in \{1,2,\ldots ,n\})\) is interpreted as the preference degree or the preference intensity of the alternative \(x_{i}\) over the alternative \(x_{j}.\) \(b_{ij}=0.5\) means indifference between \(x_{i}\) and \(x_{j},\) expressed as \(x_{i}\sim x_{j}.\) \(b_{ij}=1\) indicates that \(x_{i}\) is absolutely preferred to \(x_{j}.\) \(b_{ij}>0.5\) implies that \(x_{i}\) is preferred to \(x_{j}\) \((x_{i}\succ x_{j}).\) In particular, we have \(b_{ii}=0.5\) for \(\forall i\in \{1,\ldots ,n\}\) since \(x_{i}\sim x_{i}.\) Moreover, FPRs are usually assumed to have the following additively reciprocal property:

Definition 2

[13] If a FPR B satisfies \(b_{ij}+b_{ji}=1\) \((\forall i,j\in \{1,2,\ldots ,n\}),\) it is said to have the additively reciprocal property.

It is seen that the assumption of the additively reciprocal property in Definition 2 decreases the workload of DMs in comparing alternatives: one only needs \(n(n-1)/2\) comparisons of the n alternatives to produce the matrix B. However, in a practical decision-making problem, the complexity can introduce uncertainty into the DMs' opinions. For example, interval-valued comparison matrices have been proposed to capture the uncertainty experienced by DMs [21, 22]. Here it is found that this uncertainty can be characterized by relaxing the additively reciprocal property. For instance, if the DM gives \(b_{ij}=0.3\) and \(b_{ji}=0.6\) when separately offering the preference strength between the alternatives \(x_{i}\) and \(x_{j},\) the interval-valued preference values \({\bar{b}}_{ij}=[0.3, 0.4]\) and \({\bar{b}}_{ji}=[0.6, 0.7]\) are determined (that is, \([b_{ij}, 1-b_{ji}]\) and \([b_{ji}, 1-b_{ij}]\)). Moreover, it has been pointed out that FPRs may not always satisfy the additively reciprocal property [12, 19, 49]. However, to the best of our knowledge, the theory and methods related to NrFPRs have not been studied systematically. Therefore, the definition of NrFPRs is given as follows:

Definition 3

If a FPR B does not satisfy \(b_{ij}+b_{ji}=1\) for at least one pair \(i,j\in \{1,\ldots ,n\},\) i.e., \(0\le b_{ij}+b_{ji}<1\) or \(1<b_{ij}+b_{ji}\le 2\) holds, we call B a NrFPR.

For the alternatives \(x_{i}\) and \(x_{j},\) when \(0\le b_{ij}+b_{ji}<1,\) the hesitancy of the DM is quantified as \(1-(b_{ij}+b_{ji}),\) and when \(1<b_{ij}+b_{ji}\le 2,\) it is computed as \((b_{ij}+b_{ji})-1.\) The two situations differ only by a sign, so in general we can define the hesitancy of the DM in giving additively reciprocal comparisons as

$$\begin{aligned} h_{ij}=|1-(b_{ij}+b_{ji})|. \end{aligned}$$

Moreover, we define the hesitancy degree as

$$\begin{aligned} h=\sum _{i<j}h_{ij}. \end{aligned}$$
(1)

When \(h=0,\) NrFPRs degenerate to FPRs with the additively reciprocal property. When \(h\ne 0,\) a NrFPR has been given. In particular, an interval FPR \({\bar{B}}=([b_{ij}^{-}, b_{ij}^{+}])_{n\times n}\) can be decomposed into the two NrFPRs \(B^{l}=(b_{ij}^{-})_{n\times n}\) and \(B^{r}=(b_{ij}^{+})_{n\times n}.\) This means that decision-making models with interval FPRs can be restudied in terms of the two derived NrFPRs.
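As a small illustration, the hesitancy degree of Eq. (1) can be sketched in Python (the function name and the \(3\times 3\) matrix below are ours, chosen purely for illustration):

```python
def hesitancy_degree(B):
    """Hesitancy degree h = sum_{i<j} |1 - (b_ij + b_ji)|, per Eq. (1)."""
    n = len(B)
    return sum(abs(1 - (B[i][j] + B[j][i]))
               for i in range(n) for j in range(i + 1, n))

# A hypothetical 3x3 NrFPR: only the pair (1,2) is non-reciprocal,
# since b_12 + b_21 = 0.3 + 0.6 = 0.9 != 1.
B = [[0.5, 0.3, 0.7],
     [0.6, 0.5, 0.4],
     [0.3, 0.6, 0.5]]
print(hesitancy_degree(B))  # ≈ 0.1
```

A matrix with \(h=0\) is an ordinary additively reciprocal FPR, consistent with the degeneration noted above.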

A novel additive consistency index

When investigating the decision information provided by individuals, we need to pay special attention to the consistency degree. For instance, one has the additive consistency definition of FPRs as follows:

Definition 4

[13] A FPR \(B=(b_{ij})_{n\times n}\) is additively consistent if

$$\begin{aligned} b_{ij}=b_{ik}+b_{kj}-0.5, \quad \forall i, j, k\in \{1,2,\ldots ,n\}. \end{aligned}$$
(2)

It is easy to verify that additive consistency implies the additively reciprocal property: setting \(i=j=k\) in (2) gives \(b_{ii}=0.5,\) and then setting \(j=i\) yields \(b_{ik}+b_{ki}=1\) \((\forall i,k\in \{1,\ldots ,n\}).\) In other words, the additively reciprocal property is a necessary condition for consistent FPRs. Hence, a NrFPR must be inconsistent due to the existence of \(0\le b_{ij}+b_{ji}<1\) or \(1<b_{ij}+b_{ji}\le 2.\) It is therefore of much importance to introduce a consistency index quantifying the inconsistency degree of NrFPRs. The existing consistency indexes of inconsistent FPRs are typically based on the deviation degree from a consistent one [18, 29, 30]. Here we introduce a novel viewpoint to capture the consistency degree of FPRs.

First, let us report an equivalent finding of FPRs with additive consistency.

Theorem 1

For a FPR \(B=(b_{ij})_{n\times n},\) express the row and column vectors of B as \(\mathbf {b}_{i\cdot }=(b_{i1}, b_{i2}, \cdots , b_{in})\) and \(\mathbf {b}_{\cdot j}=(b_{1j}, b_{2j}, \cdots , b_{nj})^{T}\) for \(\forall i,j\in \{1,2,\cdots ,n\}.\) Then \(B=(b_{ij})_{n\times n}\) is additively consistent if and only if \(\mathbf {b}_{i\cdot }-\mathbf {b}_{k\cdot }=r_{ik}\cdot (1,1,\cdots ,1)\) and \(\mathbf {b}_{\cdot i}-\mathbf {b}_{\cdot k}=c_{ik}\cdot (1, 1, \cdots , 1)^{T},\) where \(r_{ik}\) and \(c_{ik}\) are constants for \(\forall i,k\in \{1,2,\cdots ,n\}.\)

Proof

It is calculated that

$$\begin{aligned} \mathbf {b}_{i\cdot }-\mathbf {b}_{k\cdot }= & {} (b_{i1}-b_{k1}, b_{i2}-b_{k2}, \cdots , b_{in}-b_{kn}), \end{aligned}$$
(3)
$$\begin{aligned} \mathbf {b}_{\cdot i}-\mathbf {b}_{\cdot k}= & {} (b_{1i}-b_{1k}, b_{2i}-b_{2k}, \cdots , b_{ni}-b_{nk})^{T}. \end{aligned}$$
(4)

When \(B=(b_{ij})_{n\times n}\) is additively consistent according to Definition 4, the application of (2) leads to the following results:

$$\begin{aligned}&b_{ij}=b_{il}+b_{lj}-0.5,\nonumber \\&b_{kj}=b_{kl}+b_{lj}-0.5,\quad \forall i, j, k, l\in \{1,2,\ldots ,n\}. \end{aligned}$$
(5)

Then we have

$$\begin{aligned} b_{ij}-b_{kj}=b_{il}-b_{kl},\quad \forall i,j,k,l\in \{1,2,\cdots ,n\}. \end{aligned}$$
(6)

Letting \(b_{ij}-b_{kj}=r_{ik}\) (a constant independent of j), (3) yields \(\mathbf {b}_{i\cdot }-\mathbf {b}_{k\cdot }=r_{ik}\cdot (1,1,\cdots ,1).\) Similarly, one can obtain \(\mathbf {b}_{\cdot i}-\mathbf {b}_{\cdot k}=c_{ik}\cdot (1, 1, \cdots , 1)^{T}\) with a constant coefficient \(c_{ik}.\)

Conversely, suppose that \(\mathbf {b}_{i\cdot }-\mathbf {b}_{k\cdot }=r_{ik}\cdot (1,1,\cdots ,1)\) and \(\mathbf {b}_{\cdot i}-\mathbf {b}_{\cdot k}=c_{ik}\cdot (1, 1, \cdots , 1)^{T}\) for \(\forall i,k\in \{1,2,\cdots ,n\}.\) Then, in terms of (3) and (4), it follows that:

$$\begin{aligned}&b_{ii}-b_{ki}=b_{ij}-b_{kj},\nonumber \\&b_{ii}-b_{ik}=b_{ji}-b_{jk},\quad \forall i,j,k\in \{1,2,\cdots ,n\}. \end{aligned}$$
(7)

This means

$$\begin{aligned}&b_{ij}+b_{jk}+b_{ki}=b_{ik}+b_{kj}+b_{ji},\nonumber \\&\quad \forall i,j,k\in \{1,2,\cdots ,n\}. \end{aligned}$$
(8)

Following the observation in [50], the matrix \(B=(b_{ij})_{n\times n}\) is additively consistent and the proof is completed. \(\square \)

It is seen from Theorem 1 that the additive consistency of FPRs can be captured using the special relationship between row and column vectors. This motivates us to construct a novel additive consistency index of FPRs. The mean values of the elements in \(\mathbf {b}_{i\cdot }-\mathbf {b}_{k\cdot }\) and \(\mathbf {b}_{\cdot i}-\mathbf {b}_{\cdot k}\) are given as follows:

$$\begin{aligned} {\bar{r}}_{ik}= & {} \frac{1}{n}\sum _{l=1}^{n}(b_{il}-b_{kl}),\nonumber \\ {\bar{c}}_{ik}= & {} \frac{1}{n}\sum _{l=1}^{n}(b_{li}-b_{lk}),\quad \forall i,k\in \{1,2,\cdots ,n\}. \end{aligned}$$
(9)

Furthermore, the corresponding variances are computed as the following forms:

$$\begin{aligned} v_{ik}^{r}= & {} \sum _{l=1}^{n}\left( b_{il}-b_{kl}-{\bar{r}}_{ik}\right) ^{2}, \end{aligned}$$
(10)
$$\begin{aligned} v_{ik}^{c}= & {} \sum _{l=1}^{n}\left( b_{li}-b_{lk}-{\bar{c}}_{ik}\right) ^{2}. \end{aligned}$$
(11)

For convenience, the values in (9)–(11) are used to construct four matrices as:

$$\begin{aligned} {\bar{R}}= & {} ({\bar{r}}_{ik})_{n\times n},\quad {\bar{C}}=({\bar{c}}_{ik})_{n\times n},\quad V^{r}=(v_{ik}^{r})_{n\times n},\nonumber \\ V^{c}= & {} (v_{ik}^{c})_{n\times n}. \end{aligned}$$
(12)

According to Theorem 1, we obtain the following corollary:

Corollary 1

A FPR \(B=(b_{ij})_{n\times n}\) is additively consistent if and only if the variances defined in (10) and (11) satisfy \(V^{r}=V^{c}=0.\)

Proof

As shown in Theorem 1, the additive consistency of \(B=(b_{ij})_{n\times n}\) implies \(\mathbf {b}_{i\cdot }-\mathbf {b}_{k\cdot }=r_{ik}\cdot (1,1,\cdots ,1)\) and \(\mathbf {b}_{\cdot i}-\mathbf {b}_{\cdot k}=c_{ik}\cdot (1, 1, \cdots , 1)^{T}.\) That is, we have \({\bar{r}}_{ik}=r_{ik}\) and \({\bar{c}}_{ik}=c_{ik}\) for \(\forall i,k\in \{1,2,\cdots ,n\},\) meaning that \(V^{r}=V^{c}=0.\)

On the other hand, when \(V^{r}=V^{c}=0,\) it follows \(b_{il}-b_{kl}={\bar{r}}_{ik}\) and \(b_{li}-b_{lk}={\bar{c}}_{ik}\) for \(\forall i,k,l\in \{1,2,\cdots ,n\}.\) Using Theorem 1, the matrix \(B=(b_{ij})_{n\times n}\) is additively consistent. \(\square \)

In addition, based on the construction of the four matrices in (12), an interesting result is obtained as follows:

Theorem 2

For a NrFPR B,  the constructed matrices \({\bar{R}}\) and \({\bar{C}}\) are antisymmetric. The two matrices \(V^{r}\) and \(V^{c}\) are symmetric.

Proof

For \(\forall i,k\in \{1,2,\cdots ,n\},\) we have:

$$\begin{aligned} {\bar{r}}_{ik}=\frac{1}{n}\sum _{l=1}^{n}(b_{il}-b_{kl}) =-\frac{1}{n}\sum _{l=1}^{n}(b_{kl}-b_{il})=-{\bar{r}}_{ki}, \end{aligned}$$
(13)

and

$$\begin{aligned} {\bar{c}}_{ik}=\frac{1}{n}\sum _{l=1}^{n}(b_{li}-b_{lk}) =-\frac{1}{n}\sum _{l=1}^{n}(b_{lk}-b_{li})=-{\bar{c}}_{ki}. \end{aligned}$$
(14)

So the matrices \({\bar{R}}\) and \({\bar{C}}\) are antisymmetric. Moreover, it follows

$$\begin{aligned} v_{ik}^{r}= & {} \sum _{l=1}^{n}\left( b_{il}-b_{kl}-{\bar{r}}_{ik}\right) ^{2} =\sum _{l=1}^{n}\left( b_{il}-b_{kl}+{\bar{r}}_{ki}\right) ^{2}\nonumber \\= & {} \sum _{l=1}^{n}\left( b_{kl}-b_{il}-{\bar{r}}_{ki}\right) ^{2}=v_{ki}^{r}, \end{aligned}$$
(15)

and

$$\begin{aligned} v_{ik}^{c}= & {} \sum _{l=1}^{n}\left( b_{li}-b_{lk}-{\bar{c}}_{ik} \right) ^{2}=\sum _{l=1}^{n}\left( b_{li}-b_{lk}+{\bar{c}}_{ki}\right) ^{2}\nonumber \\= & {} \sum _{l=1}^{n}\left( b_{lk}-b_{li}-{\bar{c}}_{ki}\right) ^{2}=v_{ki}^{c}. \end{aligned}$$
(16)

This implies that the matrices \(V^{r}\) and \(V^{c}\) are symmetric. \(\square \)

In what follows, the novel additive consistency index of FPRs is defined using the variances in (10) and (11).

Definition 5

For a FPR B,  its additive consistency index \(ACI_{V}\) is defined as follows:

$$\begin{aligned} ACI_{V}(B)=\frac{1}{n^{2}}\sum _{i=1}^{n}\sum _{k=1}^{n}\left( v_{ik}^{r} +v_{ik}^{c}\right) . \end{aligned}$$
(17)

According to Corollary 1, the matrix B is additively consistent if and only if \(ACI_{V}(B)=0.\) The larger the value of \(ACI_{V}(B),\) the more inconsistent the matrix B is. In particular, one can see that the additively reciprocal property is not assumed in Theorem 1. This means that the proposed additive consistency index is suitable for quantifying the inconsistency degree of NrFPRs. This observation is similar to the consistency measure of FPRs proposed in [18]. Compared with the additive consistency indexes in [18, 30], however, the basic ideas are different. In [18], the consistency level of B is measured using the deviation degree of each entry from a consistent relationship. In [30], the additive consistency index is defined using the distance from a constructed consistent matrix. Here the additive consistency index is based on the relationships between two column/row vectors of B, and the derived variances are used to quantify the inconsistency degree of a FPR. Furthermore, it is interesting to investigate the threshold of a NrFPR with acceptable additive consistency. The concept of acceptable consistency was proposed by Saaty for multiplicative pairwise comparison matrices [14]. Recently, the idea in [14] was developed in [51] by considering the percentage of the values of the consistency index; the percentage \(22.086\%\) corresponds to the threshold 0.1 of the consistency ratio in [14]. Hence, we choose the percentage \(22.086\%\) to determine the threshold of \(ACI_{V}(B).\) By randomly generating 100,000 NrFPRs, the thresholds of \(ACI_{V}\) are obtained for different orders of NrFPRs and given in Table 1.
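The threshold computation can be sketched as a Monte Carlo procedure (Python). The exact scheme used above for generating random NrFPRs is not detailed in this section, so the sketch assumes uniform off-diagonal entries with a 0.5 diagonal, and uses far fewer samples than the 100,000 above; it illustrates the procedure only and need not reproduce the exact values of Table 1:

```python
import random

def aci_v(B):
    """Additive consistency index ACI_V of Eq. (17)."""
    n = len(B)
    total = 0.0
    for i in range(n):
        for k in range(n):
            r = [B[i][l] - B[k][l] for l in range(n)]
            c = [B[l][i] - B[l][k] for l in range(n)]
            rb, cb = sum(r) / n, sum(c) / n
            total += sum((x - rb) ** 2 for x in r)
            total += sum((x - cb) ** 2 for x in c)
    return total / n ** 2

def aci_threshold(n, samples=2000, pct=22.086, seed=1):
    """Empirical pct-th percentile of ACI_V over randomly generated
    NrFPRs of order n (0.5 diagonal, uniform off-diagonal entries)."""
    rng = random.Random(seed)
    vals = sorted(
        aci_v([[0.5 if i == j else rng.random() for j in range(n)]
               for i in range(n)])
        for _ in range(samples))
    return vals[int(samples * pct / 100)]
```

The percentile index follows the \(22.086\%\) convention discussed above; with many more samples the estimate stabilizes.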

Table 1 The thresholds of \(ACI_{V}\) for a NrFPR with acceptable additive consistency under \(22.086\%\)

Example 1

For illustrating the above consistency index, we compute the additive consistency index of the NrFPR \(B_{1}\) where

$$\begin{aligned} B_{1}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.2 &{} 0.5 &{} 0.6 &{} 0.4 \\ 0.8 &{} 0.5 &{} 0.7 &{} 0.3 &{} 0.6 \\ 0.3 &{} 0.2 &{} 0.5 &{} 0.8 &{} 0.2 \\ 0.3 &{} 0.5 &{} 0.2 &{} 0.5 &{} 0.3 \\ 0.5 &{} 0.4 &{} 0.6 &{} 0.6 &{} 0.5 \\ \end{array} \right) . \end{aligned}$$

According to (9), we can get

$$\begin{aligned} {\bar{R}}_{1}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0.00 &{} -0.14 &{} 0.04 &{} 0.08 &{} -0.08 \\ 0.14 &{} 0.00 &{} 0.18 &{} 0.22 &{} 0.06 \\ -0.04 &{} -0.18 &{} 0.00 &{} 0.04 &{} -0.12 \\ -0.08 &{} -0.22 &{} -0.04 &{} 0.00 &{} -0.16 \\ 0.08 &{} -0.06 &{} 0.12 &{} 0.16 &{} 0.00 \\ \end{array} \right) ,\\ {\bar{C}}_{1}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0.00 &{} 0.12 &{} -0.02 &{} -0.08 &{} 0.08 \\ -0.12 &{} 0.00 &{} -0.14 &{} -0.20 &{} -0.04 \\ 0.02 &{} 0.14 &{} 0.00 &{} 0.06 &{} 0.10 \\ 0.08 &{} 0.20 &{} -0.06 &{} 0.00 &{} 0.16 \\ -0.08 &{} 0.04 &{} -0.10 &{} -0.16 &{} 0.00 \\ \end{array} \right) . \end{aligned}$$

Then, using (10) and (11), we have

$$\begin{aligned} V^{r}_{1}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0.0000 &{} 0.2520 &{} 0.1120 &{} 0.2080 &{}0.0280 \\ 0.2520 &{} 0.0000 &{} 0.6280 &{} 0.3880 &{}0.1920 \\ 0.1120 &{} 0.6280 &{} 0.0000 &{} 0.2720 &{} 0.1480 \\ 0.2080 &{} 0.3880 &{} 0.2720 &{} 0.0000 &{} 0.1320 \\ 0.0280&{} 0.1920 &{} 0.1480 &{}0.1320&{} 0.0000 \\ \end{array} \right) ,\\ V^{c}_{1}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0.0000 &{} 0.1680 &{} 0.0680 &{} 0.5280 &{}0.0280 \\ 0.1680 &{} 0.0000 &{} 0.2520 &{} 0.4000 &{} 0.0920 \\ 0.0680 &{} 0.2520 &{} 0.0000 &{} 0.3320 &{} 0.0800 \\ 0.5280 &{} 0.4000 &{} 0.3320 &{} 0.0000 &{} 0.4120 \\ 0.0280 &{} 0.0920 &{} 0.0800 &{} 0.4120 &{} 0.0000 \\ \end{array} \right) . \end{aligned}$$

As shown in Theorem 2, the matrices \({\bar{R}}_{1}\) and \({\bar{C}}_{1}\) are antisymmetric, and \(V^{r}_{1}\) and \(V^{c}_{1}\) are symmetric. In terms of (17), we obtain \(ACI_{V}(B_{1})=0.3776.\) As shown in Table 1, the threshold of acceptable additive consistency for a NrFPR with \(n=5\) is \(ACI_{V}=0.1278<0.3776.\) This means that the matrix \(B_{1}\) is not acceptably consistent and should be modified before a convincing priority vector is derived [14, 30]. In the following consensus model, the proposed additive consistency index is applied to measure the inconsistency degrees of matrices, and an optimization model is constructed such that the collective matrix can be adjusted to have acceptable additive consistency.
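For reproducibility, the computation of \(ACI_{V}(B_{1})\) in Example 1 can be sketched in Python (the function name is ours):

```python
def aci_v(B):
    """Additive consistency index ACI_V of Eq. (17), built from the row
    and column difference variances of Eqs. (9)-(11)."""
    n = len(B)
    total = 0.0
    for i in range(n):
        for k in range(n):
            r_bar = sum(B[i][l] - B[k][l] for l in range(n)) / n
            c_bar = sum(B[l][i] - B[l][k] for l in range(n)) / n
            total += sum((B[i][l] - B[k][l] - r_bar) ** 2 for l in range(n))
            total += sum((B[l][i] - B[l][k] - c_bar) ** 2 for l in range(n))
    return total / n ** 2

B1 = [[0.5, 0.2, 0.5, 0.6, 0.4],
      [0.8, 0.5, 0.7, 0.3, 0.6],
      [0.3, 0.2, 0.5, 0.8, 0.2],
      [0.3, 0.5, 0.2, 0.5, 0.3],
      [0.5, 0.4, 0.6, 0.6, 0.5]]
print(round(aci_v(B1), 4))  # 0.3776, matching Example 1
```

An additively consistent FPR gives \(ACI_{V}=0\) (up to floating-point error), in agreement with Corollary 1.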

A novel consensus model in group decision making

In GDM, the group of DMs usually needs to negotiate before reaching the optimal solution. In this negotiation process, each DM may make certain compromises to obtain a result accepted by all members of the group and thus reach a high consensus. Obviously, if the DMs' preference relations over the alternatives are very close to each other, a high degree of consensus exists among the DMs. Therefore, the distance between individual preference relations and the collective one is commonly used to measure the consensus level of DMs [18, 26, 38, 39]. However, the objective of GDM is to give the optimal solution to a complex decision-making problem, and the existing consensus processes do not depend directly on the rankings of alternatives provided by individuals. In this study, we propose a novel consensus reaching process in GDM where the optimal solutions provided by more than half of the DMs are controlled to be identical.

Acceptable consensus level under the control of optimal solution

It is assumed that a group of experts \(E=\{e_{1},e_{2},\ldots ,e_{m}\}\ (m\ge 2)\) express their opinions on \(X=\{x_{1},x_{2},\ldots ,x_{n}\}\ (n\ge 2)\) as the NrFPRs \(B^{k}=(b_{ij}^{k})_{n\times n}\) for \(k=1,2,\cdots ,m.\) Applying an aggregation method, the collective matrix is written as \(B^{c}=(b_{ij}^{c})_{n\times n}.\) To evaluate the degree of consensus reached by the DMs, we focus on the distance between the individual and collective matrices, since distance-based consensus levels are widely used in the existing works [15, 18, 26, 38, 39]. For the two NrFPRs \(B^{k}\) and \(B^{c},\) a similarity matrix \(SM^{kc}=(sm_{ij}^{kc})_{n\times n}\) is defined where

$$\begin{aligned} sm_{ij}^{kc}=|b_{ij}^{k}-b_{ij}^{c}|. \end{aligned}$$
(18)

Then the consensus degree between \(B^{k}\) and \(B^{c}\) is computed as

$$\begin{aligned} cl_{k}=\frac{1}{n(n-1)}\sum _{i=1}^{n}\sum _{j=1}^{n}sm_{ij}^{kc}, \end{aligned}$$
(19)

where \(sm_{ii}^{kc}=0\) for \(i\in \{1,2,\cdots ,n\}\) has been taken into account. The consensus level of all DMs is then quantified as:

$$\begin{aligned} cl=\sum _{k=1}^{m}\lambda _{k}cl_{k}. \end{aligned}$$
(20)

Hereafter, we always assume that \(\lambda _{k}\in [0,1]\) and \(\sum _{k=1}^{m}\lambda _{k}=1.\) One can see from (20) that the closer the value of cl is to 0,  the greater the agreement among all DMs’ opinions.
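The distance-based consensus measure of Eqs. (18)–(20) can be sketched as follows (Python; the two \(3\times 3\) matrices and the equal weights are hypothetical, chosen only to illustrate the arithmetic):

```python
def consensus_level(Bs, Bc, lam):
    """Group consensus level cl of Eq. (20): the weighted average of the
    per-DM levels cl_k of Eq. (19), built from |b_ij^k - b_ij^c| (Eq. (18))."""
    n = len(Bc)
    cl = 0.0
    for Bk, w in zip(Bs, lam):
        cl_k = sum(abs(Bk[i][j] - Bc[i][j])
                   for i in range(n) for j in range(n)) / (n * (n - 1))
        cl += w * cl_k
    return cl

B1 = [[0.5, 0.6, 0.7], [0.5, 0.5, 0.6], [0.4, 0.5, 0.5]]
B2 = [[0.5, 0.8, 0.7], [0.3, 0.5, 0.6], [0.4, 0.5, 0.5]]
Bc = [[(a + b) / 2 for a, b in zip(r1, r2)] for r1, r2 in zip(B1, B2)]
print(consensus_level([B1, B2], Bc, [0.5, 0.5]))  # ≈ 0.0333
```

The two matrices disagree only in the (1, 2)/(2, 1) entries, so each per-DM level is \(0.2/6\) and cl is small, reflecting high agreement.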

In the consensus reaching process, one important point is that it is unnecessary to require complete consensus of all opinions. Hence an important problem is how to set a threshold for the consensus level cl. In existing works, the threshold of cl is usually given in advance or left undefined [15, 18, 26, 38, 39]. In what follows, we address this problem and propose a method under the control of the individuals' optimal solutions. The following acceptable consensus standard is defined:

Definition 6

If the optimal solutions to a decision-making problem determined by more than half of the DMs are identical, the corresponding consensus level is said to be acceptable.

Definition 6 shows that, at an acceptable consensus level, the group of DMs agrees on the optimal solution. Now let us formalize the above consideration. For an individual matrix \(B^{k}=(b_{ij}^{k})_{n\times n},\) the priorities of the alternatives can be derived using a method such as [52]:

$$\begin{aligned} \omega _{i}^{k}=\frac{2}{n}\sum _{j=1}^{n}b_{ij}^{k}, \end{aligned}$$
(21)

where \(\omega _{i}^{k}\) stands for the weight of \(x_{i}\) elicited from \(B^{k}.\) Suppose that the maximum of \(\omega _{i}^{k}\) over \(i\in \{1,2,\cdots ,n\}\) corresponds to the alternative \(x_{s}^{k}.\) When the opinions of more than half of the DMs can be adjusted so that their optimal solutions are identical, the consensus level cl is acceptable. Moreover, it should be pointed out that the simple prioritization method (21) sums the preference intensities of an alternative over the others; its effectiveness has been demonstrated through discussion and comparison with existing methods [52]. When other prioritization methods are applied to NrFPRs, different rankings of alternatives may be obtained due to the inconsistency of NrFPRs. However, when NrFPRs have acceptable additive consistency, the rankings produced by most prioritization methods should coincide. For simplicity, only the formula in (21) is adopted here to derive the priority vector from a NrFPR with acceptable additive consistency. In the future, novel prioritization methods will be developed to elicit priorities from NrFPRs.
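The prioritization rule (21) and the selection of the optimal alternative can be sketched as follows (Python; the matrix \(B_{1}\) of Example 1 is reused purely to illustrate the arithmetic, although it is not acceptably consistent):

```python
def priorities(Bk):
    """Priority vector: omega_i = (2/n) * sum_j b_ij, per Eq. (21)."""
    n = len(Bk)
    return [2 * sum(row) / n for row in Bk]

B1 = [[0.5, 0.2, 0.5, 0.6, 0.4],
      [0.8, 0.5, 0.7, 0.3, 0.6],
      [0.3, 0.2, 0.5, 0.8, 0.2],
      [0.3, 0.5, 0.2, 0.5, 0.3],
      [0.5, 0.4, 0.6, 0.6, 0.5]]
w = priorities(B1)                      # ≈ [0.88, 1.16, 0.80, 0.72, 1.04]
best = max(range(len(w)), key=w.__getitem__)
print(best)                             # 1, i.e. x_2 has the largest weight
```

The index of the maximal weight plays the role of \(x_{s}^{k}\) in the acceptable consensus standard of Definition 6.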

A novel optimization problem

To reach consensus in GDM, it is necessary to offer each DM a degree of flexibility [26, 38, 39]. Following the idea in [38], the granularity level of the expert \(e_{k}\) is given as \(\alpha _{k}.\) Then the preference intensities of \(e_{k}\) can be changed under the following constraint conditions [20]:

$$\begin{aligned} \text{ Case } \text{ I: }{\bar{b}}_{ij}^{k}\in \left[ \max \left( 0.5, b_{ij}^{k}-\frac{\alpha _{k}}{2}\right) , \min \left( 1,b_{ij}^{k} +\frac{\alpha _{k}}{2}\right) \right] , \nonumber \\ \end{aligned}$$
(22)

for \(0.5 < b_{ij}^{k}\le 1,\) and

$$\begin{aligned} \text{ Case } \text{ II: }{\bar{b}}_{ij}^{k}\in \left[ \max \left( 0,b_{ij}^{k} -\frac{\alpha _{k}}{2}\right) , \min \left( 0.5,b_{ij}^{k}+\frac{\alpha _{k}}{2}\right) \right] , \nonumber \\ \end{aligned}$$
(23)

for \(0\le b_{ij}^{k}<0.5.\) When \(b_{ij}^{k}=0.5,\) the preference intensity remains unchanged throughout the optimization process. These constraints reflect the assumption that each DM can correctly judge the direction of preference between two alternatives, so the adjustment never reverses it. For convenience, the set of all matrices whose entries satisfy (22) or (23) is written as \(P(B^{k}).\)
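The admissible ranges in Cases I and II can be sketched as follows (Python; the function name is ours). For instance, \(b_{ij}^{k}=0.6\) with \(\alpha _{k}=0.1\) yields the interval \([0.55, 0.65]\):

```python
def admissible_interval(b, alpha):
    """Admissible range of an off-diagonal entry under granularity alpha,
    per Cases I and II (Eqs. (22)-(23)); b = 0.5 stays fixed."""
    if b > 0.5:   # Case I: the preference x_i over x_j is preserved
        return (max(0.5, b - alpha / 2), min(1.0, b + alpha / 2))
    if b < 0.5:   # Case II: the preference x_j over x_i is preserved
        return (max(0.0, b - alpha / 2), min(0.5, b + alpha / 2))
    return (0.5, 0.5)

print(admissible_interval(0.6, 0.1))  # ≈ (0.55, 0.65)
```

The clipping at 0.5 keeps each adjusted entry on the same side of indifference, matching the text's requirement that the direction of preference is not reversed.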

Moreover, we construct an optimization model to adjust the individual NrFPRs. Two objectives are typically considered [20, 26, 38, 39]: (1) the consistency degree of the preference relations, and (2) the consensus level of the group of experts. For the first objective, according to the novel consistency index (17), the function is given as:

$$\begin{aligned} Q_{1}=\sum _{k=1}^{m}\lambda _{k}ACI_{V}(B^{k}). \end{aligned}$$
(24)

For the second objective, using (20), one has:

$$\begin{aligned} Q_{2}=cl. \end{aligned}$$
(25)

The smaller the value of \(Q_{1}\) (resp. \(Q_{2}\)), the more consistent the individual NrFPRs are (resp. the higher the consensus level of the group of experts). Therefore, the optimization problem is established as follows:

$$\begin{aligned} \min (Q_{1}, Q_{2}). \end{aligned}$$
(26)

This is a multi-objective optimization problem, and the simplest solution method is to rewrite (26) as a weighted linear combination [20, 26, 38, 39]:

$$\begin{aligned} \min Q=pQ_{1}+qQ_{2}, \end{aligned}$$
(27)

where p and q are non-negative real numbers.

In addition, the decision variables and constraint conditions of the optimization problem (27) should be determined. Based on the above discussions, the individual matrices \(B^{k}\) should be adjusted under the flexibility degree \(\alpha _{k}\) \((k=1,2,\cdots ,m).\) The collective matrix \(B^{c}\) is determined using \(B^{k}\) through an aggregation operator. For the sake of simplicity, the weighted averaging operator is used such that

$$\begin{aligned} B^{c}=\sum _{k=1}^{m}\lambda _{k}B^{k}. \end{aligned}$$
(28)

Hence the entries of the individual matrices \(B^{k},\) except the diagonal entries, are the decision variables, meaning that the dimension of the optimization problem is \(mn(n-1).\) Furthermore, the entries of \(B^{k}\) are subject to the constraint conditions (22) and (23), the threshold of the consistency index \(ACI_{V},\) and the acceptable consensus level of Definition 6.

Compared with the existing fitness functions [20, 26, 38, 39], the novelty lies in the novel consistency index and the standard of the acceptable consensus level.
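A sketch of the scalarized fitness (27) together with the weighted-average aggregation (28) is given below (Python; the consistency index of Eq. (17) is re-implemented inline so the sketch is self-contained, and the example weights and p, q values are illustrative):

```python
def aci_v(B):
    """Additive consistency index ACI_V of Eq. (17)."""
    n = len(B)
    total = 0.0
    for i in range(n):
        for k in range(n):
            r = [B[i][l] - B[k][l] for l in range(n)]
            c = [B[l][i] - B[l][k] for l in range(n)]
            rb, cb = sum(r) / n, sum(c) / n
            total += sum((x - rb) ** 2 for x in r)
            total += sum((x - cb) ** 2 for x in c)
    return total / n ** 2

def fitness(Bs, lam, p, q):
    """Q = p*Q1 + q*Q2 of Eq. (27): Q1 is the weighted consistency index
    (24); Q2 is the consensus level cl of (25), with B^c from Eq. (28)."""
    m, n = len(Bs), len(Bs[0])
    Bc = [[sum(lam[k] * Bs[k][i][j] for k in range(m)) for j in range(n)]
          for i in range(n)]
    Q1 = sum(lam[k] * aci_v(Bs[k]) for k in range(m))
    Q2 = sum(lam[k] * sum(abs(Bs[k][i][j] - Bc[i][j])
                          for i in range(n) for j in range(n)) / (n * (n - 1))
             for k in range(m))
    return p * Q1 + q * Q2

# Two identical, additively consistent FPRs give Q = 0.
B = [[0.5, 0.6, 0.7], [0.4, 0.5, 0.6], [0.3, 0.4, 0.5]]
print(fitness([B, B], [0.5, 0.5], 1.0, 1.0))  # ≈ 0
```

The granularity constraints (22)–(23) and the acceptable consensus standard act as the feasibility check on candidate solutions and are omitted from this sketch.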

Solution process based on particle swarm optimization

One can see that the optimization problem (27) subject to these constraint conditions is nonlinear and complex, and a closed-form optimal solution is difficult to obtain owing to the nonlinearity and high dimension. For example, with 3 experts and 4 alternatives, the dimension of the optimization problem (27) is 36, where the preference intensities \(b_{ii}^{k}\) \((i=1,2,3,4;\ k=1,2,3)\) are always fixed at 0.5. The PSO algorithm is a population-based stochastic optimization technique proposed by Kennedy and Eberhart [43, 44], inspired by the social behavior of bird flocking and fish schooling. A particle swarm is a group of particles representing possible solutions of an optimization problem in a multi-dimensional search space [43, 53, 54]. The PSO algorithm has been successfully used to simulate the consensus reaching process in GDM [20, 26, 38,39,40]. Here a modified PSO algorithm is used to solve the constructed optimization problem.

The initial positions of the particles are given by the proposed NrFPRs \(B^{k}\) \((k=1,2,\cdots ,m).\) When the flexibility degree \(\alpha _{k}\) is given, the entries of \(B^{k}\) vary within the ranges shown in (22) and (23). For a randomly generated number \(x\in [0,1],\) the linear transformation \(z=a+(b-a)x\) is used to obtain \(z\in [a,b]\) [38]. For example, suppose that \(b_{ij}^{k}=0.6\) and the admissible level of granularity is \(\alpha _{k}=0.1.\) Applying (22) gives \([a,b]=[0.55,0.65].\) If \(x=0.4,\) then \(z=0.59,\) meaning that the initial entry \(b_{ij}^{k}=0.6\) is changed to the new value 0.59. Since all the entries \(b_{ij}^{k}\) for \(i,j\in \{1,2,\cdots ,n\}\) with \(i\ne j\) and \(k\in \{1,2,\cdots ,m\}\) are to be optimized, a particle in the PSO algorithm is expressed as the following vector:

$$\begin{aligned} \mathbf {z}=\left( b_{12}^{1},b_{13}^{1},\cdots ,b_{n(n-1)}^{1},b_{12}^{2}, \cdots ,b_{n(n-1)}^{m}\right) . \end{aligned}$$
(29)

It is seen that the dimension of the particle is \(mn(n-1),\) which is based on the non-reciprocal property of preference relations. When the additively reciprocal property is considered, the dimension of the particle should be \(mn(n-1)/2\) [20, 38, 39]. Each particle is updated using the following guidelines [54]:

  • The particle velocity is computed as

    $$\begin{aligned} \mathbf {v}(t+1)=w\cdot \mathbf {v}(t)+\mathbf {u}(0, \phi _{1})\cdot (\mathbf {z}_{p} -\mathbf {z}(t))+\mathbf {u}(0, \phi _{2})\cdot (\mathbf {z}_{g}-\mathbf {z}(t)), \end{aligned}$$
    (30)

    where t is the index of iteration, \(\mathbf {z}_{p}\) represents the individual best position and \(\mathbf {z}_{g}\) is the global best position found in the whole population so far. The inertia weight w controls the influence of the current velocity on the update. \(\mathbf {u}(0, \phi _{i})\) \((i=1,2)\) stand for vectors of randomly generated numbers uniformly distributed in \([0, \phi _{i}].\)

  • The next position of the particle is calculated directly as follows:

    $$\begin{aligned} \mathbf {z}(t+1)=\mathbf {z}(t)+\mathbf {v}(t+1). \end{aligned}$$
    (31)

Moreover, it is noted that the values of the parameters w, \(\phi _{1}\) and \(\phi _{2}\) are important and they have been discussed widely [54, 55]. A relatively large inertia weight w gives the PSO better global searching capability at the cost of local searching capability, whereas a relatively small w has the opposite effect. Here the weight w is linearly decreased from 0.9 to 0.4 over the iterations by following the observations in [55]. Then the global and local searching capabilities can be balanced when the PSO algorithm is performed. The learning factors \(\phi _{1}\) and \(\phi _{2}\) determine the effects of the particle's own best experience and the swarm's best experience on the particle's trajectory. Values of \(\phi _{1}\) and \(\phi _{2}\) that are too large or too small are not good for searching for the optimal solution. The finding in [56] shows that it is suitable to choose \(\phi _{1}\) and \(\phi _{2}\) between 1 and 2.5. Here we choose the standard values \(\phi _{1}=\phi _{2}=2\) in the formula (30), which have been verified by many experimental observations [54]. When the optimal solution is determined, the individual matrices can be reconstructed. The collective matrix is determined using (28) and the ranking of alternatives is given.
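The update rules (30)–(31) with the parameter choices above can be sketched as follows. This is a generic illustration rather than the authors' implementation: the objective is a toy quadratic, and positions are clamped to their admissible ranges as one simple way to respect bounds such as (22)–(23).

```python
import random

def pso_minimize(f, bounds, swarm_size=30, iterations=100):
    """Minimal PSO sketch following (30)-(31): the inertia weight w
    decreases linearly from 0.9 to 0.4 and phi1 = phi2 = 2.
    `bounds` is a list of (a, b) ranges, one per dimension."""
    dim = len(bounds)
    # initial positions via the linear transformation z = a + (b - a) x
    pos = [[a + (b - a) * random.random() for a, b in bounds]
           for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iterations):
        w = 0.9 - (0.9 - 0.4) * t / (iterations - 1)   # linear inertia decay
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                a, b = bounds[d]
                pos[i][d] = min(max(pos[i][d], a), b)   # stay in [a, b]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy usage: a quadratic with its minimum at the centre of each range
random.seed(2024)
f = lambda z: sum((x - 0.5) ** 2 for x in z)
best, val = pso_minimize(f, [(0.0, 1.0)] * 6)
print(round(val, 6))
```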

A new algorithm

It is convenient to provide the algorithm for solving the GDM problem with NrFPRs by controlling the optimized solutions of DMs. The resolution process of a GDM problem with NrFPRs is shown in Fig. 1 and elaborated on as follows:

Step 1: In a GDM problem, a group of experts \(E=\{e_{1}, e_{2},\ldots , e_{m}\}\) are invited to evaluate the preference intensities of alternatives in \(X=\{x_{1},\) \( x_{2},\ldots ,x_{n}\}.\)

Step 2: The NrFPR \(B^{k}\) is determined to represent the initial position of the expert \(e_{k}\) with the flexibility degree \(\alpha _{k}\) for \(k=1, 2, \cdots , m.\)

Step 3: The fitness function Q is constructed and the constraint conditions with (22) and (23) are considered.

Step 4: The PSO algorithm is used to solve the optimization problem (27). The matrices \(B^{k}\) \((k=1, 2, \cdots , m)\) are optimized and written as \({\bar{B}}^{k}\) \((k = 1,\ldots ,m)\).

Step 5: The acceptable consensus standard in Definition 6 and the consistency index value are checked. When they are not satisfied, one returns to Step 2 and the values of \(\alpha _{k}\) \((k=1,2,\cdots , m)\) are adjusted. When they are satisfied, one proceeds to the next step.

Step 6: Using the optimized matrices \({\bar{B}}^{k}\) \((k=1,2,\cdots ,m),\) the collective one \(B^{c}\) is obtained using (28).

Step 7: According to \(B^{c}=(b_{ij}^{c})_{n\times n},\) the priorities of alternatives are computed by (21) and the final solution is reached.

Fig. 1 Resolution process of a GDM problem with NrFPRs
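The loop of Steps 2–6 can be sketched as below. This is a toy illustration under labeled assumptions, not the paper's exact procedure: the PSO optimization of Step 4 is replaced by a simple projection of each matrix toward the collective one within \(\pm \alpha /2\); the priority rule (21) is assumed to be a row-sum rule; the aggregation (28) is assumed to be the \(\lambda \)-weighted mean; and Definition 6 is read as "all individual matrices select the same best alternative". All helper names are ours.

```python
def priorities(B):
    # assumed additive rule: row sums (consistent with the
    # numerical results reported later in the paper)
    return [sum(row) for row in B]

def best(B):
    w = priorities(B)
    return w.index(max(w))

def collective(mats, lam):
    # assumed form of (28): lambda-weighted arithmetic mean
    n = len(mats[0])
    return [[sum(l * M[i][j] for l, M in zip(lam, mats)) for j in range(n)]
            for i in range(n)]

def consensus_loop(mats, lam, alpha=0.0, step=0.1, max_rounds=20):
    n = len(mats[0])
    for _ in range(max_rounds):
        Bc = collective(mats, lam)
        # stand-in for Step 4: nudge each entry toward the collective
        # value by at most alpha/2, mimicking the ranges (22)-(23)
        adj = [[[B[i][j] + max(-alpha / 2, min(alpha / 2, Bc[i][j] - B[i][j]))
                 for j in range(n)] for i in range(n)] for B in mats]
        if len({best(B) for B in adj}) == 1:   # Definition 6 (assumed form)
            return collective(adj, lam), alpha
        alpha += step                          # Step 5: enlarge flexibility
    raise RuntimeError("no consensus within max_rounds")

# toy usage with two 3x3 NrFPRs that initially disagree on the best option
mats = [[[0.5, 0.8, 0.6], [0.3, 0.5, 0.7], [0.4, 0.2, 0.5]],
        [[0.5, 0.4, 0.3], [0.7, 0.5, 0.6], [0.6, 0.5, 0.5]]]
Bc, a = consensus_loop(mats, [0.5, 0.5])
print(best(Bc) + 1, round(a, 2))
```

In this toy run, the two DMs initially prefer different alternatives, and the loop enlarges the flexibility degree until both adjusted matrices point to the same best alternative.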

It should be noted that the computational complexity of the entire solution process is worth investigating [57]. One can see that the consensus model in GDM is proposed for m DMs and n alternatives. The constructed optimization model (27) is nonlinear according to the functions \(Q_{1}\) and \(Q_{2}.\) When the numbers of DMs and alternatives increase, the computational complexity increases rapidly. By considering the dimension of the particles \(mn(n-1)\) in the PSO, the overall algorithmic complexity is at least \(O(mn(n-1)).\)

On the other hand, it is interesting to investigate the convergence of the GDM algorithm. First, it is reasonable to consider that the PSO algorithm converges to the optimal solution of the optimization problem. This is supported by numerous numerical experiments and applications of the PSO algorithm [54,55,56]. Second, the threshold of the additive consistency index can be reached for the collective matrix. The underlying reason is attributed to the objective function \(Q_{1},\) which tends to its minimum value in the optimization process of individual matrices. As the flexibility degrees increase, the minimum value of \(ACI_{V}(B^{k})\) tends to zero. Third, the consensus standard in Definition 6 can be reached due to the objective function \(Q_{2}.\) When minimizing the objective function Q, the distances between the individual matrices and the collective one tend to their minimum values. This implies that the individual and collective matrices tend toward an identical matrix; then the rankings of alternatives derived from the individual matrices could become identical. The above analysis shows that the convergence of the proposed GDM algorithm is independent of the size of the decision problem. When a sufficiently large iteration number of the PSO algorithm is used under a sufficiently large flexibility degree, the threshold of the proposed consistency index and the consensus standard can be achieved simultaneously. This observation will be verified by carrying out numerical examples in the following section.

Comparison and discussion

In what follows, we report numerical examples to illustrate the proposed concepts and the effects of the parameters \(\alpha ,\) p and q on the algorithm. Then some comparisons with the existing models are offered to show the novelty of the proposed model.

The effects of the parameters

It is interesting to investigate the effects of the parameters \(\alpha ,\) p and q on the optimal values of \(Q,\) \(Q_{1}\) and \(Q_{2},\) respectively.

Example 2

Suppose that the four NrFPRs \(\{B^{1},B^{2},B^{3},\) \(B^{4}\}\) are provided by the four DMs \(E=\{e_{1},e_{2},e_{3},e_{4}\}\) according to pairwise comparisons over the four alternatives \(X=\{x_{1},x_{2},x_{3},x_{4}\}.\) The initial positions of DMs are expressed as follows:

$$\begin{aligned} B^{1}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.1 &{} 0.6 &{} 0.4 \\ 0.8 &{} 0.5 &{} 0.8 &{} 0.7 \\ 0.4 &{} 0.1 &{} 0.5 &{} 0.2 \\ 0.6 &{} 0.3 &{} 0.7 &{} 0.5 \\ \end{array} \right) , \\ B^{2}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.2 &{} 0.7 &{} 0.6 \\ 0.6 &{} 0.5 &{} 0.8 &{} 0.3 \\ 0.3 &{} 0.2 &{} 0.5 &{} 0.5 \\ 0.1 &{} 0.7 &{} 0.5 &{} 0.5 \\ \end{array} \right) ,\\ B^{3}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.7 &{} 0.5 &{} 0.3 \\ 0.3 &{} 0.5 &{} 0.6 &{} 0.8 \\ 0.5 &{} 0.4 &{} 0.5 &{} 0.8 \\ 0.6 &{} 0.1 &{} 0.2 &{} 0.5 \\ \end{array} \right) , \\ B^{4}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.7 &{} 0.8 &{} 0.6 \\ 0.3 &{} 0.5 &{} 0.6 &{} 0.2 \\ 0.2 &{} 0.4 &{} 0.5 &{} 0.5 \\ 0.4 &{} 0.8 &{} 0.5 &{} 0.5 \\ \end{array} \right) . \end{aligned}$$
Fig. 2 Plots of Q versus the generation number with \((p, q)=(0.25, 0.75)\) for \(\alpha =0.3\) and \(\alpha =0.4,\) respectively

In the following, for the sake of simplicity, we choose \(\lambda _{1}=\lambda _{2}=\lambda _{3}=\lambda _{4}=0.25\) and \(\alpha =\alpha _{1}=\alpha _{2}=\alpha _{3}=\alpha _{4}\) for numerical computations. When running the PSO algorithm, the dimension of the particle is 48, and the swarm size and the maximum number of generations are both set to 100. Figure 2 shows the variations of the fitness function Q versus the generation number with \((p, q)=(0.25, 0.75)\) for \(\alpha =0.3\) and \(\alpha =0.4,\) respectively. It is seen from Fig. 2 that with the increasing generation number, the values of Q decrease to a stable value. This means that the optimal solution of the fitness function Q can be obtained by running the PSO algorithm for 100 generations. The above phenomenon is in accordance with the known findings in [19, 38, 41]. One can also conclude from Fig. 2 that the iteration number 100 of the PSO algorithm is sufficiently large to obtain the optimal solution to the optimization problem. In addition, one can determine the values of \(ACI_{V},\) cl, \(Q_{1},\) \(Q_{2}\) and the priorities of alternatives. For instance, we choose \(\alpha =0.3\) to give the various values in Table 2 and the collective matrix as follows:

$$\begin{aligned} {\bar{B}}^{c}_{0.3}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5000 &{} 0.4250 &{} 0.5500 &{} 0.4250 \\ 0.5125 &{} 0.5000 &{} 0.6500 &{} 0.6500 \\ 0.2000 &{} 0.2625 &{} 0.5000 &{} 0.4250 \\ 0.4375 &{} 0.4000 &{} 0.4375 &{} 0.5000 \\ \end{array} \right) . \end{aligned}$$

It is found from Table 2 that the final ranking is \(x_{2}\succ x_{1}\succ x_{4}\succ x_{3}.\) The acceptable consensus standard in Definition 6 is satisfied since the best alternative is \(x_{2}\) according to the priorities derived from \({\bar{B}}^{k}_{0.3}\) \((k=1,2,3,4).\)

Table 2 The values of \(ACI_{V},\) cl, \(Q_{1},\) \(Q_{2}\) and the priorities of alternatives according to \({\bar{B}}^{1}_{0.3}-{\bar{B}}^{4}_{0.3}\) and \({\bar{B}}^{c}_{0.3},\) respectively

Moreover, the effects of the flexibility degree \(\alpha \) on the optimal values of \(Q,\) \(Q_{1}\) and \(Q_{2}\) are shown in Fig. 3 by choosing \((p,q)=(0.25,0.75).\) The values of \(\alpha \) are chosen from 0 to 0.4 with the step length 0.005. The underlying reason is that the flexibility degrees of DMs are considered to be not too large. Certainly, from the viewpoint of numerical computations, the value of \(\alpha \) could be any non-negative number. One can see from Fig. 3 that the optimal values of \(Q,\) \(Q_{1}\) and \(Q_{2}\) are not strictly monotonically decreasing and they exhibit some oscillations. The above observations are similar to the results in [19, 38, 41]. In addition, as compared to the findings in [19, 38, 41], there is a difference among the optimal values of Q: the greater the value of the flexibility degree \(\alpha ,\) the stronger the oscillation of the value of Q. The main reason is that the optimal matrices are not constrained by the additively reciprocal property. When the preference relations do not possess the additively reciprocal property, the dimension of the particles in the PSO is twice that with the additively reciprocal property. The higher dimension of the particles leads to the greater oscillation of the value of Q.

Fig. 3 Plots of the optimal values of \(Q,\) \(Q_{1}\) and \(Q_{2}\) versus \(\alpha \) for the selected values of \((p, q)=(0.25, 0.75)\)

At the end, the influences of p and q on the optimal values of Q are investigated and shown in Fig. 4. The step length 0.05 of \(\alpha \in [0, 0.4]\) is chosen, which is different from the 0.005 adopted in Fig. 3. For the values of p and q, we consider three cases: (a) \(p=0.25\) together with the selected values of q; (b) \(q=0.75\) together with the selected values of p; and (c) the selected values of p under \(p+q=1.\) It is found from Fig. 4a, b that increasing p and q for a fixed \(\alpha \) increases the values of Q. The observed results are in agreement with the finding in [20]. When considering the constraint \(p+q=1,\) Fig. 4c shows that there are some intersections among the lines with \(p=0, 0.25, 0.5, 0.75, 1,\) respectively. This observation is different from those in Fig. 4a, b and similar to the phenomenon observed in [19, 38]. Based on the above observations, some results are summarized as follows:

  1. There are some small differences among the curves of Q in Fig. 4a–c when the same values of p and q are used, such as \(p=0.25\) and \(q=0.75.\) The reason behind this phenomenon is that some random parameters are used in the PSO algorithm.

  2. When the value of \(\alpha \) is fixed, different combinations of p and q could yield different values of Q.

  3. The parameters p and q are mainly used as the weights of \(Q_{1}\) and \(Q_{2}\) to affect the optimal values.

Fig. 4 Plots of the optimal values of Q versus \(\alpha \) with the step length 0.05 under the conditions of a \(p=0.25\) together with the selected values of q; b \(q=0.75\) together with the selected values of p; and c the selected values of p under \(p+q=1,\) respectively

Table 3 Priorities of alternatives and the optimal solutions according to \(B^{5}-B^{8}\)
Table 4 The priorities of alternatives and the optimal solutions according to \({\bar{B}}^{5}_{0.1}-{\bar{B}}^{8}_{0.1}\) and \({\bar{B}}^{c}_{0.1},\) respectively
Table 5 The priorities of alternatives and the optimal solutions according to \({\bar{B}}^{5}_{0.2}-{\bar{B}}^{8}_{0.2}\) and \({\bar{B}}^{c}_{0.2},\) respectively
Table 6 The priorities of alternatives and the optimal solutions according to \({\bar{B}}^{5}_{0.3}-{\bar{B}}^{8}_{0.3}\) and \({\bar{B}}^{c}_{0.3},\) respectively
Table 7 The values of additive consistency index of \({\bar{B}}^{5}-{\bar{B}}^{8}\) and \(Q_{1}\) under various flexibility degrees

Comparative analysis

It is worth noting that consensus models in GDM with FPRs or additive reciprocal matrices have been investigated in [19, 20]. The initial positions of DMs are characterized using NrFPRs in [19] and FPRs with additive reciprocity in [20]. Here we still use NrFPRs to express the initial opinions of DMs. The main novelties are the new consistency index (17) and the standard of acceptable consensus level (Definition 6). It is interesting to compare with the consensus model in [19] using numerical results.

Example 3

For convenience, the existing matrices without the additively reciprocal property in [19] are still used for numerical computations:

$$\begin{aligned} B^{5}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.1 &{} 0.6 &{} 0.4 \\ 0.8 &{} 0.5 &{} 0.8 &{} 0.7 \\ 0.4 &{} 0.1 &{} 0.5 &{} 0.2 \\ 0.6 &{} 0.3 &{} 0.7 &{} 0.5 \\ \end{array} \right) , \\ B^{6}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.2 &{} 0.7 &{} 0.6 \\ 0.6 &{} 0.5 &{} 0.9 &{} 0.3 \\ 0.3 &{} 0.3 &{} 0.5 &{} 0.5 \\ 0.1 &{} 0.7 &{} 0.5 &{} 0.5 \\ \end{array} \right) ,\\ B^{7}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.7 &{} 0.5 &{} 0.3 \\ 0.3 &{} 0.5 &{} 0.6 &{} 0.8 \\ 0.5 &{} 0.4 &{} 0.5 &{} 0.9 \\ 0.6 &{} 0.1 &{} 0.3 &{} 0.5 \\ \end{array} \right) , \\ B^{8}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.8 &{} 0.2 &{} 0.6 \\ 0.4 &{} 0.5 &{} 0.6 &{} 0.2 \\ 0.8 &{} 0.4 &{} 0.5 &{} 0.5 \\ 0.4 &{} 0.8 &{} 0.5 &{} 0.5 \\ \end{array} \right) . \end{aligned}$$

First, let us compute the weights of alternatives according to the matrices \(B^{5}-B^{8}\) and show them in Table 3. It is seen from Table 3 that the optimal solutions using \(B^{5}\) and \(B^{6}\) are \(x_{2},\) and the others are \(x_{3}\) or \(x_{4}.\) This means that the standard of acceptable consensus level in Definition 6 is not satisfied.
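This failure of the consensus standard can be reproduced numerically. The sketch below assumes a row-sum rule for the weights (equation (21) is not reproduced in this excerpt) and reads Definition 6 as "all individual matrices select the same best alternative"; under this rule \(B^{8}\) ties between \(x_{3}\) and \(x_{4},\) matching the statement above.

```python
# Matrices B^5-B^8 as given above
B5 = [[0.5, 0.1, 0.6, 0.4], [0.8, 0.5, 0.8, 0.7],
      [0.4, 0.1, 0.5, 0.2], [0.6, 0.3, 0.7, 0.5]]
B6 = [[0.5, 0.2, 0.7, 0.6], [0.6, 0.5, 0.9, 0.3],
      [0.3, 0.3, 0.5, 0.5], [0.1, 0.7, 0.5, 0.5]]
B7 = [[0.5, 0.7, 0.5, 0.3], [0.3, 0.5, 0.6, 0.8],
      [0.5, 0.4, 0.5, 0.9], [0.6, 0.1, 0.3, 0.5]]
B8 = [[0.5, 0.8, 0.2, 0.6], [0.4, 0.5, 0.6, 0.2],
      [0.8, 0.4, 0.5, 0.5], [0.4, 0.8, 0.5, 0.5]]

def best_alternative(B):
    w = [sum(row) for row in B]        # assumed priority rule
    return max(range(len(w)), key=lambda i: w[i])

bests = [best_alternative(B) for B in (B5, B6, B7, B8)]
print([f"x{i + 1}" for i in bests])    # B^5, B^6 -> x2; B^7 -> x3; B^8 ties
print(len(set(bests)) == 1)            # Definition 6 satisfied? -> False
```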

Second, the flexibility degrees are offered to DMs and the optimization process of NrFPRs is performed. By selecting \(p=0.25\), \(q=0.75\) and the maximum number of iterations as 100, some cases for different values of \(\alpha \) are investigated. For example, when \(\alpha =0.1,\) the consensus model is applied to give the optimized matrices as follows:

$$\begin{aligned} {\bar{B}}^{5}_{0.1}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5000 &{} 0.1500 &{} 0.6500 &{} 0.4500 \\ 0.7500 &{} 0.5000 &{} 0.7500 &{} 0.6500 \\ 0.4500 &{} 0.1500 &{} 0.5000 &{} 0.2500 \\ 0.5500 &{} 0.3500 &{} 0.6500 &{} 0.5000 \\ \end{array} \right) ,\\ {\bar{B}}^{6}_{0.1}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5000 &{} 0.2500 &{} 0.6500 &{} 0.5500 \\ 0.5500 &{} 0.5000 &{} 0.8500 &{} 0.3500 \\ 0.2500 &{} 0.3500 &{} 0.5000 &{} 0.5000 \\ 0.1500 &{} 0.6500 &{} 0.5000 &{} 0.5000 \\ \end{array} \right) ,\\ {\bar{B}}^{7}_{0.1}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5000 &{} 0.6500 &{} 0.4500 &{} 0.3500 \\ 0.2500 &{} 0.5000 &{} 0.6500 &{} 0.7500 \\ 0.4500 &{} 0.3500 &{} 0.5000 &{} 0.8500 \\ 0.5500 &{} 0.0500 &{} 0.3500 &{} 0.5000 \\ \end{array} \right) ,\\ {\bar{B}}^{8}_{0.1}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5000 &{} 0.8500 &{} 0.2500 &{} 0.6500 \\ 0.4500 &{} 0.5000 &{} 0.5500 &{} 0.2500 \\ 0.7500 &{} 0.4500 &{} 0.5000 &{} 0.4500 \\ 0.4500 &{} 0.7500 &{} 0.5000 &{} 0.5000 \\ \end{array} \right) . \end{aligned}$$

The collective matrix is obtained as:

$$\begin{aligned} {\bar{B}}^{c}_{0.1}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5000 &{} 0.4750 &{} 0.5000 &{} 0.5000 \\ 0.5000 &{} 0.5000 &{} 0.7000 &{} 0.5000 \\ 0.4750 &{} 0.3250 &{} 0.5000 &{} 0.5125 \\ 0.4250 &{} 0.4500 &{} 0.5000 &{} 0.5000 \\ \end{array} \right) . \end{aligned}$$
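The collective matrix above can be reproduced by aggregating \({\bar{B}}^{5}_{0.1}-{\bar{B}}^{8}_{0.1}.\) This sketch assumes that (28) is the \(\lambda \)-weighted arithmetic mean with equal weights \(\lambda _{k}=0.25\) (the choice made in Example 2); the result matches \({\bar{B}}^{c}_{0.1}\) entry by entry.

```python
# Optimized matrices from above
mats = [
    [[0.50, 0.15, 0.65, 0.45], [0.75, 0.50, 0.75, 0.65],
     [0.45, 0.15, 0.50, 0.25], [0.55, 0.35, 0.65, 0.50]],
    [[0.50, 0.25, 0.65, 0.55], [0.55, 0.50, 0.85, 0.35],
     [0.25, 0.35, 0.50, 0.50], [0.15, 0.65, 0.50, 0.50]],
    [[0.50, 0.65, 0.45, 0.35], [0.25, 0.50, 0.65, 0.75],
     [0.45, 0.35, 0.50, 0.85], [0.55, 0.05, 0.35, 0.50]],
    [[0.50, 0.85, 0.25, 0.65], [0.45, 0.50, 0.55, 0.25],
     [0.75, 0.45, 0.50, 0.45], [0.45, 0.75, 0.50, 0.50]],
]
lam = [0.25] * 4                       # assumed equal weights

# assumed form of (28): lambda-weighted arithmetic mean
Bc = [[sum(l * M[i][j] for l, M in zip(lam, mats)) for j in range(4)]
      for i in range(4)]
for row in Bc:
    print([round(v, 4) for v in row])  # matches the collective matrix above
```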

The priorities of alternatives and the optimal solutions using \({\bar{B}}^{5}_{0.1}-{\bar{B}}^{8}_{0.1}\) and \({\bar{B}}^{c}_{0.1}\) are determined and shown in Table 4. It is found that the standard of acceptable consensus level is reached. The final solution is \(x_{2}\) and the result reflects a high consensus level among DMs. In addition, based on the consensus model in [19], the optimized matrices for \(\alpha =0.1\) are computed. The collective matrix is obtained as:

$$\begin{aligned} {\tilde{B}}^{c}_{0.1}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0.5000 &{} 0.4500 &{} 0.4750 &{} 0.4750 \\ 0.5250 &{} 0.5000 &{} 0.7250 &{} 0.5000 \\ 0.5000 &{} 0.3000 &{} 0.5000 &{} 0.5000 \\ 0.4000 &{} 0.4750 &{} 0.4750 &{} 0.5000 \\ \end{array} \right) . \end{aligned}$$

Then one can determine the priority vector as \((0.9500,1.1250,0.9000,0.9250)\) and the ranking \(x_{2}\succ x_{1} \succ x_{4}\succ x_{3}.\) The obtained result is in agreement with the finding in Table 4. The main differences and novelties of the present study are the new consistency index and the standard of acceptable consensus level. Moreover, letting \(\alpha =0.2,\) 0.3 and other values, the optimized and collective matrices can be obtained. For the sake of simplicity, the obtained priorities and the optimal solutions for \(\alpha =0.2\) and 0.3 are given in Tables 5 and 6, respectively. It is seen from Tables 4, 5 and 6 that with the increasing values of the flexibility degree \(\alpha ,\) the ranking of alternatives could change. Under the proposed model, the optimal solution is maintained with a high consensus level. Therefore, the proposed standard of acceptable consensus level can be considered as a good strategy to reach a final solution accepted by most DMs in GDM.
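The priority vector above can be checked directly. Equation (21) is not reproduced in this excerpt, but the stated values coincide with half the row sums of \({\tilde{B}}^{c}_{0.1};\) a sketch under that assumption:

```python
# Collective matrix from the model in [19], as given above
Bc = [[0.5000, 0.4500, 0.4750, 0.4750],
      [0.5250, 0.5000, 0.7250, 0.5000],
      [0.5000, 0.3000, 0.5000, 0.5000],
      [0.4000, 0.4750, 0.4750, 0.5000]]

w = [sum(row) / 2 for row in Bc]       # assumed form of (21)
print([round(v, 4) for v in w])        # [0.95, 1.125, 0.9, 0.925]
ranking = sorted(range(4), key=lambda i: -w[i])
print(" > ".join(f"x{i + 1}" for i in ranking))   # x2 > x1 > x4 > x3
```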

Third, it is interesting to present the variations of the additive consistency index and the consensus level. The computed results are shown in Tables 7 and 8, respectively. One can find from Tables 7 and 8 that as the flexibility degrees increase, the values of \(Q_{1}\) and \(Q_{2}\) decrease. When the acceptable consensus standard is specified as a value of \(Q_{2},\) it can also be achieved by adjusting the values of the flexibility degrees.

Table 8 The values of consensus level of \({\bar{B}}^{5}-{\bar{B}}^{8}\) and \(Q_{2}\) under various flexibility degrees
Table 9 The best alternatives determined by individual matrices \(B^{9}-B^{12}\) and the collective ones under various flexibility degrees

On the other hand, it is noted that the number of alternatives is only 4 in Examples 2 and 3. The proposed algorithm is suitable for decision-making problems of different sizes with various numbers of alternatives and DMs. When the numbers of alternatives and DMs increase, the computational complexity increases rapidly due to the dimension of the particle in the PSO algorithm. In spite of this, the computational results could be similar to those in Examples 2 and 3, respectively. As an illustration, here we choose the number of alternatives as 6 to give some further computations.

Example 4

Considering the alternatives \(x_{1}-x_{6},\) the four DMs \(e_{1}-e_{4}\) give the initial NrFPRs as follows:

$$\begin{aligned} B^{9}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.4 &{} 0.6 &{} 0.6 &{} 0.4 &{} 0.3 \\ 0.6 &{} 0.5 &{} 0.7 &{} 0.5 &{} 0.4 &{} 0.5 \\ 0.4 &{} 0.4 &{} 0.5 &{} 0.4 &{} 0.6 &{} 0.3 \\ 0.4 &{} 0.5 &{} 0.6 &{} 0.5 &{} 0.4 &{} 0.3 \\ 0.7 &{} 0.6 &{} 0.4 &{} 0.6 &{} 0.5 &{} 0.4 \\ 0.7 &{} 0.5 &{} 0.6 &{} 0.7 &{} 0.6 &{} 0.5 \\ \end{array} \right) ,\\ B^{10}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.5 &{} 0.4 &{} 0.4 &{} 0.4 &{} 0.4 \\ 0.5 &{} 0.5 &{} 0.3 &{} 0.4 &{} 0.8 &{} 0.5 \\ 0.6 &{} 0.7 &{} 0.5 &{} 0.4 &{} 0.4 &{} 0.5 \\ 0.7 &{} 0.5 &{} 0.6 &{} 0.5 &{} 0.3 &{} 0.4 \\ 0.3 &{} 0.2 &{} 0.7 &{} 0.7 &{} 0.5 &{} 0.3 \\ 0.5 &{} 0.5 &{} 0.5 &{} 0.4 &{} 0.6 &{} 0.5 \\ \end{array} \right) ,\\ B^{11}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.3 &{} 0.4 &{} 0.6 &{} 0.7 &{} 0.5 \\ 0.7 &{} 0.5 &{} 0.5 &{} 0.4 &{} 0.6 &{} 0.5 \\ 0.7 &{} 0.4 &{} 0.5 &{} 0.8 &{} 0.3 &{} 0.4 \\ 0.4 &{} 0.5 &{} 0.2 &{} 0.5 &{} 0.4 &{} 0.4 \\ 0.4 &{} 0.4 &{} 0.7 &{} 0.6 &{} 0.5 &{} 0.3 \\ 0.6 &{} 0.5 &{} 0.5 &{} 0.6 &{} 0.6 &{} 0.5 \\ \end{array} \right) ,\\ B^{12}= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0.5 &{} 0.6 &{} 0.4 &{} 0.6 &{} 0.7 &{} 0.6 \\ 0.3 &{} 0.5 &{} 0.6 &{} 0.5 &{} 0.6 &{} 0.5 \\ 0.6 &{} 0.5 &{} 0.5 &{} 0.5 &{} 0.3 &{} 0.4 \\ 0.4 &{} 0.6 &{} 0.5 &{} 0.5 &{} 0.4 &{} 0.4 \\ 0.4 &{} 0.3 &{} 0.7 &{} 0.6 &{} 0.5 &{} 0.3 \\ 0.5 &{} 0.5 &{} 0.5 &{} 0.6 &{} 0.7 &{} 0.5 \\ \end{array} \right) . \end{aligned}$$

Let us still choose \(p=0.25,\) \(q=0.75\) and the maximum iteration number 100. Under various flexibility degrees, Table 9 shows the best alternatives determined by the individual NrFPRs and the collective ones. It is seen from Table 9 that when \(\alpha =0,\) the acceptable consensus standard in Definition 6 is not satisfied. When the individual NrFPRs are optimized by offering certain flexibility degrees, the acceptable consensus standard is achieved. The final solution is determined as \(x_{6},\) which is in agreement with the result based on the model in [19]. Furthermore, the values of the additive consistency index, the consensus level, \(Q_{1}\) and \(Q_{2}\) are computed and given in Tables 10 and 11, respectively. One can see that as the flexibility degrees increase, the values of \(Q_{1}\) and \(Q_{2}\) decrease. This means that the acceptable additive consistency level and the acceptable consensus measure can be achieved in terms of the corresponding thresholds. The obtained results are similar to those in Example 3. By considering the thresholds in Table 1, the acceptable additive consistency has been reached for \(\alpha =0.1,0.2,0.3,\) respectively.

Table 10 The values of additive consistency index of \({\bar{B}}^{9}-{\bar{B}}^{12}\) and \(Q_{1}\) under various flexibility degrees
Table 11 The values of the consensus level of \({\bar{B}}^{9}-{\bar{B}}^{12}\) and \(Q_{2}\) under various flexibility degrees

Conclusions and future study

This paper has reported a consensus model in group decision making (GDM) where non-reciprocal fuzzy preference relations (NrFPRs) are used to express the opinions of decision makers (DMs). A novel consistency index has been proposed to quantify the inconsistency degree of NrFPRs. A novel optimization model has been constructed to account for the consistency degrees of NrFPRs and the consensus level. The particle swarm optimization (PSO) algorithm has been used to model the consensus process of reaching the final solution. The main findings are as follows:

  • The proposed consistency index can be effectively used to quantify the inconsistency degree of NrFPRs. Moreover, it is easier to compute and understand than the existing ones.

  • The standard of acceptable consensus level is adopted to keep the final solution to a GDM problem accepted by most DMs.

  • The observations show that as the flexibility degrees of DMs increase, the ranking of alternatives could change considerably.

In future work, the idea behind NrFPRs could be used to propose the concept of non-reciprocal pairwise comparison matrices (NrPCMs) in the analytic hierarchy process (AHP). The relations among various preference relations could be investigated, and the prioritization methods for NrFPRs and NrPCMs could be developed. The standard of acceptable consensus level could also be extended to propose consensus models with incomplete NrFPRs and others.