Introduction

Multi-criteria group decision-making (MCGDM) has long been a research hotspot due to its complexity and universality in daily life and business management, with applications such as supplier selection [1,2,3,4], hospital management [5] and site selection of new energy power stations [6]. An MCGDM problem requires multiple decision-makers (DMs) to evaluate a series of alternatives from distinct perspectives and eventually determine the best one. However, classic fuzzy sets are inadequate for describing the dependability of evaluation information. To describe uncertain, imprecise and incomplete information more accurately, the notion of the Z-number was introduced [7]. A Z-number is an ordered pair of fuzzy numbers configured as Z = (A, B). The constraint part A and the reliability measure B endow Z-numbers with the capacity to reflect the constraint and reliability of information simultaneously, making them superior to traditional fuzzy sets and widely applicable to natural human expression. For example, the sentence “The salt content of seawater reverse osmosis influent is usually 8 g/L ~ 50 g/L.” can be represented by a Z-number as X is Z = (8 g/L ~ 50 g/L, usually), where X indicates the salinity of the seawater reverse osmosis influent. The existing literature mainly studies Z-numbers in two categories. The first category can be classified as fundamental research, including arithmetic operations over Z-numbers [8,9,10], comparison and measurement methods [11,12,13,14], converting methods [15] and so on. The second is the employment of Z-numbers in the decision-making process, such as medical diagnosis [16] and failure mode identification and sequencing [17]. In addition, the two components of a Z-number are most often expressed as trapezoidal or triangular fuzzy numbers [16, 18], or a mix of both [17].

Linguistic Z-numbers (LZNs) are a subclass of Z-numbers whose two components are expressed in the form of linguistic terms. The fuzziness and randomness of LZNs correspond precisely to the restraint and reliability of Z-numbers, respectively [19]. This extension caters more to human habits and allows qualitative information to be described more accurately, thus helping to reduce information distortion and increase the flexibility and credibility of decision-making. The fuzzy restriction of an LZN may be represented by linguistic terms such as “a little bad”, “good” and “very good”, while the reliability can be measured by linguistic terms such as “uncertain”, “relatively certain”, “very certain” or “seldom”, “often” and “usually”. LZNs are practical tools for expressing most decision-making information. For example, an expert can use the LZN (very good, certain) to evaluate the effect of evaporative crystallization technology on the solidification of terminal wastewater. Owing to their universality and applicability, LZNs have a wide range of applications. For example, Song et al. [20] designed a novel quality function deployment framework employing LZNs. Jiang et al. [5] developed an extended decision-making trial and evaluation laboratory method under the LZN environment as a large group evaluation approach.

However, determining the membership functions linked with linguistic terms is not a simple matter, and aggregating linguistic terms is also a complex undertaking. Since constraint and reliability fall into distinct categories, directly converting LZNs into classical fuzzy numbers causes distortion and loss of the initial information, although this is a common approach for aggregating evaluations in Z-number form [17, 21,22,23]. As a generalization of probability theory and an extension of the traditional Bayesian reasoning approach, Dempster–Shafer evidence theory (DEST) has proven highly effective in processing uncertainty resulting from unknown or incomplete information [24, 25]. Its core, Dempster’s combination rule, can effectively fuse the evaluation information from multiple individuals. Li and Chen (2018) provided an approach to transform fuzzy information into basic probability assignments (BPAs) and adopted DEST to integrate the evaluation opinions of multiple experts [26]. Liu and Zhang (2019) developed a hesitant fuzzy linguistic fusion arithmetic based on DEST and suggested a novel multiple attribute decision making (MADM) method [4]. Ren et al. (2020) built a bridge between Z-numbers and DEST, which formed the foundation for aggregating Z-evaluations [21]. Nevertheless, there has been no research on how to apply DEST to the effective integration of LZNs. Thus, this paper employs LZNs to express the initial evaluation information. Then the evaluations in the form of LZNs are converted into BPAs by extending the method in [27] to the LZN environment. To be more specific, the membership function is replaced with the utility function, and the reliability is measured using a linguistic scale function.

There are many traditional multicriteria decision making (MCDM) methods, such as VIKOR and grey relational analysis. On the basis of these methods, many scholars have improved them or developed new methods to better solve MCDM problems. For example, Gou et al. (2020) corrected the traditional VIKOR method’s neglect of the relationship between the alternatives and the negative ideal solution, and applied the improved VIKOR method in a probabilistic double hierarchy linguistic context [28]. Jiang et al. (2020) extended the decision-making trial and evaluation laboratory (DEMATEL) method by incorporating LZNs, to make up for the shortcomings of existing DEMATEL methods in expressing the reliability of DMs’ cognition [5]. Geetha et al. (2021) provided a hesitant Pythagorean fuzzy (HPF) ELECTRE III method which extended the ELECTRE III method to the HPF environment [29]. Lin et al. (2021) proposed two methods based on a new score function of probabilistic linguistic term sets (PLTSs), named TOPSIS-ScoreC-PLTS and VIKOR-ScoreC-PLTS, respectively [30]. Among the multitudinous MCDM methods, the outranking methods perform best in decision support at the strategic level. The outranking methods have an indisputable advantage in leveraging incomplete information and coping with uncertain or fuzzy information. Among them, the ELECTRE method is the most extensively used owing to its prominent ability to avoid the compensation effect between attributes while fully considering incomparability and indifference. In other words, an alternative with a significantly weaker performance value cannot be directly compensated by other good attribute values. The ELECTRE method has so far developed into multiple versions suitable for different types of problems. Among them, ELECTRE II is more concerned with the degree to which one alternative outranks another, which serves the purpose of settling ranking problems [31]. The ELECTRE II method can be applied in copious fuzzy circumstances. Wan et al. (2016) suggested an MCDM method combining AHP and ELECTRE II for supplier selection problems with interval 2-tuple linguistic information [32]. Liao et al. (2017) proposed two ELECTRE-based MCDM approaches under the hesitant fuzzy linguistic environment [33]. Tian et al. (2020) put forward a novel MCGDM method suited to the single-valued neutrosophic environment and unknown-weight circumstances [34]. However, no scholars have yet applied the ELECTRE II method in the LZN context. Consequently, this article chooses ELECTRE II as the ranking method for alternatives.

Another point worth noting is that the existing MCDM or MCGDM methods, including the ELECTRE II method, typically treat DMs as risk-neutral, even though this is often not the case. DMs usually exhibit reference dependence and loss-aversion psychology during the decision-making process, which means DMs are more sensitive to losses than to profits. Inspired by prospect theory, the TODIM method was introduced with the strength of considering the psychological characteristics of the DMs [35] and has been extended to various information contexts. For instance, Krohling and de Souza (2012) proposed the fuzzy TODIM [36], which was then generalized to uncertain and random environments successively [37, 38]. In addition, how to reasonably combine the TODIM method with other methods to impart higher reliability and validity to decision-making is also a problem deserving research. Passos et al. (2014) proposed the TODIM-FSE method by adopting some stages of Fuzzy Synthetic Evaluation (FSE) and of TODIM respectively, which provided the possibility of considering the prevalent inaccuracies in human judgment when constructing the contribution function [39]. Zhang et al. (2017) developed the SMAA-TODIM method to process the intrinsic indeterminacy of the TODIM or TODIM-based models [40]. Wu et al. (2018) extended TODIM to the fuzzy context and combined it with the PROMETHEE-II method to settle the compensation problem during the ranking process [41]. Llamazares (2018) pointed out two types of paradoxes in the traditional TODIM method and thus introduced the generalized TODIM method [42]. This generalized version can effectively avoid these two paradoxes and has been gradually expanded [43]. Therefore, the generalized TODIM method is integrated into ELECTRE II in this article so that the influence of DMs’ bounded rationality on the ranking results can be subtly considered.

Based on the foregoing analysis, a decision-making framework is constructed for the ranking and selection problem. The main work of this article can be outlined as follows:

  1.

    A LZN is an appropriate and remarkable tool to express assessments. On the one hand, it embraces both fuzzy evaluation and reliability information. On the other hand, the introduction of linguistic term sets makes it retain more original evaluation information and reduce the information distortion compared to a Z-number. Therefore, LZNs are employed in this article to describe original evaluation values.

  2.

    DEST is extended to the LZNs context, which not only reduces the distortion of evaluation information but also better handles the fusion of multi-source information under uncertain conditions. Moreover, a novel discounting coefficients determination method is established to modify BPAs, which considers both the inner and outer credibility of the evidence.

  3.

    A novel decision framework for MCGDM problems with LZNs and unknown weight information is established based on DEST and Deng entropy. Within this framework, the alternatives are ranked according to the calculation results of the generalized TODIM-ELECTRE II method under the LZNs environment. The novel method can deal with non-compensatory issues of criteria while reflecting the loss-avoidance psychology of DMs, hence further enhances the persuasiveness and reliability of the results.

The remainder of this paper is organized as follows: Section “Preliminaries” reviews the relevant knowledge concerning LZNs, DEST and Deng entropy. In Section “An integrated decision-making framework based on the generalized TODIM-ELECTRE II method and DEST”, the decision framework developed in this paper is introduced. A case of terminal wastewater solidification technology selection is devised in Section “An illustrative example of terminal wastewater solidification technology selection” to verify the feasibility of the above method. Section “Analysis and discussion” carries out a series of analyses, including sensitivity and comparative analyses; the robustness and superiority of the proposed method are demonstrated in this section. Finally, the conclusion is given in Section “Conclusion”.

Preliminaries

Z-number

The notion of Z-numbers was proposed by Zadeh (2011) as an instrument to depict the reliability or confidence of information conveyed by natural language statements, and it appears as a 2-tuple: Z = (A, B).

Typically, A and B occur in the form of words or clauses, and both are fuzzy numbers. Component A plays the role of a fuzzy restriction, R(X), on the values taken by a real-valued uncertain variable X. More specifically, R(X): X is A → Poss (X = u) = μA (u), where μA (u) is the membership function of A and u represents a general value of X, μA (u) can be constructed as the degree to which u belongs to the constraint.

In light of the relation between Z-numbers and Z+-numbers, the meaning of a Z-number can be explained by \(Z = (A,\;B) = Z^{ + } (A,\;\mu_{A} \cdot p_{{X_{A} }} \;is\;B)\), where XA is a random variable and \(p_{{X_{A} }}\) is the probability of XA for fuzzy set A. In addition, \(\mu_{A} \cdot p_{{X_{A} }} \;is\;B\) indicates \(\mu_{B} \left( {\int {\mu_{A} \left( X \right) \cdot p_{{X_{A} }} } \left( X \right)dx} \right)\) [7]. The diagram of a Z-number is shown in Fig. 1 [27], where b represents an element in fuzzy set B.

Fig. 1
figure 1

The diagram of a simple Z-number [27]

Linguistic Z-numbers

The linguistic Z-numbers can effectively avoid information loss and distortion while considering uncertainty in the decision-making process.

Definition 2.1

Let X be the discourse domain, and let \(S_{1} = \left\{ {s_{0} ,s_{1} , \cdots ,s_{2t} } \right\}\) and \(S_{2} = \left\{ {s^{\prime}_{0} ,s^{\prime}_{1} , \cdots ,s^{\prime}_{2t} } \right\}\) be two linguistic term sets containing a finite number of elements, whose terms are ordered and discrete, where t is a non-negative integer. Suppose Aϕ(x) ∈ S1 and Bφ(x) ∈ S2. Then a linguistic Z-number set in X takes the form Z = {(x, Aϕ(x), Bφ(x))| x ∈ X}, where Aϕ(x) is the restriction on the possible value of the uncertain variable x, and Bφ(x) is the measure of the reliability of the first component. These two components usually contain different linguistic terms, representing different preference information.

When X contains only a single element, the linguistic Z-number set degenerates into a linguistic Z-number Zα = (x, Aϕ(α), Bφ(α)), where Aϕ(α) ∈ S1 and Bφ(α) ∈ S2 are two linguistic terms.

Dempster–Shafer evidence theory

As an extension of probability theory, Dempster–Shafer evidence theory (DEST) has outstanding performance in the fusion of multi-source uncertain information. The following is a brief review of DEST.

Definition 2.2

Let \(\Theta = \left\{ {H_{1} ,H_{2} , \cdots ,H_{N} } \right\}\), the elements in the collection are exhaustive and mutually exclusive, we define the set Θ as a frame of discernment (FOD). The power set of Θ is denoted as \(P\left( \Theta \right) = \left\{ {\emptyset ,\left\{ {H_{1} } \right\},\left\{ {H_{2} } \right\}, \cdots ,\left\{ {H_{N} } \right\}, \cdots ,\left\{ {H_{1} \cup H_{2} } \right\}, \cdots ,\Theta } \right\}\).

Every element in P(Θ) can be called a proposition, which represents the possible values of the evaluated object.

The mass function, also called basic probability assignment (BPA), is defined as m: P(Θ) → [0,1]. It satisfies the conditions \(\left( i \right)\;m\left( \emptyset \right) = 0;\;\left( {ii} \right)\sum\nolimits_{A \in P\left( \Theta \right)} {m\left( A \right)} = 1\), where m(A) reflects the extent to which the evidence supports A. If m(A) > 0, A is called a focal element. Any belief that is not assigned to a specific subset is considered “unexpressed” and assigned to the environment Θ, denoted by m(Θ).

To show the reliability of evidence, we introduce the concept of discounting coefficient. A discounting coefficient \(\alpha \in \left[ {0,1} \right]\) can be seen as the weight of a piece of evidence. By updating the BPA with a discounting coefficient, we can get the discounted BPA:

$$ m^{\alpha } \left( A \right) = \alpha \times m\left( A \right), $$
(1)
$$ m^{\alpha } \left( \Theta \right) = \alpha \times m\left( \Theta \right) + \left( {1 - \alpha } \right). $$
(2)
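To make the discounting operation concrete, the following is a minimal Python sketch of Eqs. (1)–(2); the dictionary-based BPA representation, the frame Θ and the numerical masses are illustrative assumptions rather than part of the original method.

```python
# A minimal sketch of BPA discounting, Eqs. (1)-(2): every focal element is
# scaled by the discounting coefficient alpha and the removed belief is
# reassigned to the whole frame Theta.
def discount_bpa(bpa, alpha, theta):
    """bpa: dict mapping frozenset -> mass; alpha: coefficient in [0, 1]."""
    discounted = {A: alpha * m for A, m in bpa.items() if A != theta}
    discounted[theta] = alpha * bpa.get(theta, 0.0) + (1.0 - alpha)
    return discounted

# Hypothetical frame and BPA for illustration only.
theta = frozenset({"H1", "H2", "H3"})
m = {frozenset({"H1"}): 0.6, frozenset({"H1", "H2"}): 0.3, theta: 0.1}
print(discount_bpa(m, alpha=0.8, theta=theta))
# {'H1'}: 0.48, {'H1','H2'}: 0.24, Theta: 0.28 -- the masses still sum to 1
```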

The core of DEST is its combination rule, which is commonly used to integrate multi-source information.

Definition 2.3

For two bodies of evidence m1 and m2, the combination indicated by m = m1 ⊕ m2 is defined by Dempster’s rule as follows:

$$ \left\{ \begin{gathered} m\left( \emptyset \right) = 0\; \hfill \\ m\left( A \right) = \frac{1}{1 - K}\sum\nolimits_{B \cap C = A} {m_{1} \left( B \right)m_{2} \left( C \right)} \; \hfill \\ \end{gathered} \right., $$
(3)

where B and C are both focal elements. In the formula, K is called the conflict coefficient and \(K = \sum\nolimits_{B \cap C = \emptyset } {m_{1} \left( B \right)m_{2} \left( C \right)}\).
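As an illustration of Eq. (3), the sketch below combines two BPAs with Dempster's rule; the dictionary representation and the two example mass functions are assumptions made only for this demonstration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule, Eq. (3): multiply masses of intersecting focal elements
    and renormalize by 1 - K, where K collects the conflicting products."""
    combined, K = {}, 0.0
    for (B, mb), (C, mc) in product(m1.items(), m2.items()):
        inter = B & C
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            K += mb * mc  # conflict coefficient
    if K >= 1.0:
        raise ValueError("totally conflicting evidence cannot be combined")
    return {A: v / (1.0 - K) for A, v in combined.items()}

# Hypothetical evidence from two sources over the same frame.
m1 = {frozenset({"H1"}): 0.7, frozenset({"H1", "H2"}): 0.3}
m2 = {frozenset({"H1"}): 0.5, frozenset({"H2"}): 0.2, frozenset({"H1", "H2"}): 0.3}
print(dempster_combine(m1, m2))   # here K = 0.14
```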

Pignistic probability represents a point estimate in a belief interval. A BPA can be transformed into a probability distribution by Pignistic probability function defined below:

$$ BetP_{m} \left( A \right) = \sum\limits_{B \subseteq \Theta } {\frac{{\left| {A \cap B} \right|}}{\left| B \right|}} \frac{m\left( B \right)}{{1 - m\left( \emptyset \right)}},\forall A \subseteq \Theta , $$
(4)

where A is a focal element and \(\left| \cdot \right|\) represents the cardinality of the corresponding set.
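A small sketch of the Pignistic transformation in Eq. (4), restricted to singleton propositions (which is how it is used later to build the decision matrix); the example BPA is hypothetical.

```python
def pignistic(bpa, theta):
    """BetP of each singleton, Eq. (4): every focal element B shares its mass
    equally among its members, after removing any mass on the empty set."""
    norm = 1.0 - bpa.get(frozenset(), 0.0)
    betp = {h: 0.0 for h in theta}
    for B, mass in bpa.items():
        for h in B:                      # the empty set contributes nothing
            betp[h] += mass / (len(B) * norm)
    return betp

theta = frozenset({"H1", "H2", "H3"})
m = {frozenset({"H1"}): 0.5, frozenset({"H1", "H2"}): 0.3, theta: 0.2}
print(pignistic(m, theta))   # the probabilities sum to 1
```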

Definition 2.4

Let m1 and m2 be two BPAs on the same frame of discernment Θ, which contains N mutually exclusive and exhaustive hypotheses. The distance between m1 and m2 can be calculated by Eq. (5)

$$ d_{BPA} (m_{1} ,m_{2} ) = \sqrt {\frac{1}{2}\,\,(\overrightarrow {{m_{1} }} - \overrightarrow {{m_{2} }} )^{T} \,\underline{\underline{D\,}} (\overrightarrow {{m_{1} }} - \overrightarrow {{m_{2} }} )} , $$
(5)

where \(\overrightarrow {{m_{1} }}\) and \(\overrightarrow {{m_{2} }}\) are the vector representations of the corresponding BPAs m1 and m2, respectively, and \(\underline{\underline{D}}\) is a \(2^{N} \times 2^{N}\) matrix whose elements are evaluated as in Eq. (6).

$$ \underline{\underline{D}} (A,B) = \frac{{\left| {A \cap B} \right|}}{{\left| {A \cup B} \right|}},\;A,B \in P\left( \Theta \right). $$
(6)

Note that the factor 1/2 in Eq. (5) is required for normalizing \(d_{BPA}\) and guaranteeing \(0 \le d_{BPA} \le 1\).
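The following sketch computes the distance of Eq. (5) without building the full \(2^{N} \times 2^{N}\) matrix, since only subsets that are focal in at least one of the two BPAs contribute; the two example BPAs are hypothetical.

```python
import math

def jaccard(A, B):
    """Element of the matrix D in Eq. (6): |A ∩ B| / |A ∪ B| (0 for two empty sets)."""
    union = A | B
    return len(A & B) / len(union) if union else 0.0

def bpa_distance(m1, m2):
    """Distance between two BPAs, Eq. (5)."""
    focal = set(m1) | set(m2)
    diff = {A: m1.get(A, 0.0) - m2.get(A, 0.0) for A in focal}
    quad = sum(diff[A] * diff[B] * jaccard(A, B) for A in focal for B in focal)
    return math.sqrt(0.5 * max(quad, 0.0))   # guard against tiny negative round-off

m1 = {frozenset({"H1"}): 0.8, frozenset({"H1", "H2"}): 0.2}
m2 = {frozenset({"H2"}): 0.6, frozenset({"H1", "H2"}): 0.4}
print(round(bpa_distance(m1, m2), 4))   # a value in [0, 1]
```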

Deng entropy and the maximum Deng entropy

Deng entropy, first proposed by Deng, measures the uncertainty of basic probability assignments (BPAs). Deng entropy is a generalization of Shannon entropy and expands the boundary of the latter. Its basic form is as follows:

$$ E_{d} \left( m \right) = - \sum\limits_{A \subseteq X} {m\left( A \right)\log_{2} \frac{m\left( A \right)}{{2^{\left| A \right|} - 1}}} , $$
(7)

Here m is the mass function defined on the frame of discernment, A is a focal element, and |A| is the cardinality of A. This formula shows that the belief assigned to A is spread over the \(2^{\left| A \right|} - 1\) possible non-empty states contained in A. Shannon entropy can be regarded as a special case of Deng entropy in which belief is only assigned to single elements, i.e., |A| = 1:

$$ E_{d} \left( m \right) = - \sum\limits_{A \subseteq X} {m\left( A \right)\log_{2} \frac{m\left( A \right)}{{2^{\left| A \right|} - 1}}} = - \sum\limits_{A \subseteq X} {m\left( A \right)\log_{2} m\left( A \right)} , $$
(8)

which can be interpreted as the measure of discord of the BPA among diverse focal elements.

The classical maximum entropy is calculated as log|N|. However, Kang and Deng (2019) pointed out that if the state of the system is uncertain and ambiguous, the actual maximum entropy may be greater than the traditional maximum entropy [44]. They then gave the condition for the maximum Deng entropy and obtained its analytical solution.

Theorem 1

The maximum Deng entropy:

$$ \begin{aligned} D_{\max } &= - \sum\limits_{i} {m\left( {F_{i} } \right)\log_{2} \frac{{m\left( {F_{i} } \right)}}{{2^{{\left| {F_{i} } \right|}} - 1}}} ,\quad if\;and\;only\;if\;m\left( {F_{i} } \right) \\ &=\frac{{2^{{\left| {F_{i} } \right|}} - 1}}{{\sum\nolimits_{i} {2^{{\left| {F_{i} } \right|}} - 1} }},\end{aligned} $$
(9)

where Fi is a focal element and m(Fi) is the mass function of Fi. It is not difficult to infer that the analytical solution of the maximum Deng entropy is \(D_{\max } = \log_{2} \sum\nolimits_{i} {\left( {2^{{\left| {F_{i} } \right|}} - 1} \right)}\).
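The sketch below evaluates Eq. (7) and checks the analytical maximum of Eq. (9) numerically; for a frame with seven elements (the frame size used in the case study later) both expressions give roughly 11.0077. The dictionary-based BPA representation is an assumption of this illustration.

```python
import math
from itertools import combinations

def deng_entropy(bpa):
    """Deng entropy of a BPA (dict: frozenset -> mass), Eq. (7)."""
    return -sum(m * math.log2(m / (2 ** len(A) - 1)) for A, m in bpa.items() if m > 0)

def max_deng_entropy(n):
    """Analytical maximum: log2 of the sum of (2^|F| - 1) over all non-empty subsets F."""
    return math.log2(sum((2 ** k - 1) * math.comb(n, k) for k in range(1, n + 1)))

# Build the maximizing BPA of Eq. (9): m(F) proportional to 2^|F| - 1.
elements = range(7)
subsets = [frozenset(c) for k in range(1, 8) for c in combinations(elements, k)]
weights = [2 ** len(F) - 1 for F in subsets]
m_star = {F: w / sum(weights) for F, w in zip(subsets, weights)}

print(round(deng_entropy(m_star), 4), round(max_deng_entropy(7), 4))  # both ≈ 11.0077
```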

An integrated decision-making framework based on the generalized TODIM-ELECTRE II method and DEST

This section introduces the novel decision-making framework constructed to solve ranking and selection problems. Figure 2 is the flowchart of the proposed method, which displays the decision procedure with evaluation information in the form of LZNs. First, we convert the LZNs into BPAs. Then the discounted BPAs are determined by identifying the credibility of the evidence, and the decision matrix is acquired by fusing the evaluations from all experts. Further, Deng entropy is employed to obtain the criteria weights. Finally, the generalized TODIM-ELECTRE II method is presented by combining the generalized TODIM method and ELECTRE II; the newly proposed method inherits the advantages of both methods and thus can provide a more credible ranking result.

Fig. 2
figure 2

The flow chart of the proposed MCGDM decision framework

Statement of a MCGDM problem in LZNs environment

Suppose there is a problem for prioritization purposes, which contains m alternatives \(\left\{ {Aa_{1} ,Aa_{2} , \cdots ,Aa_{m} } \right\}\) and n evaluation criteria \(\left\{ {C_{1} ,C_{2} , \cdots ,C_{n} } \right\}\). k experts \(\left\{ {E_{1} ,E_{2} , \cdots ,E_{k} } \right\}\) independently assess the performance of each alternative under each criterion. The criteria weights and expert weights are completely unknown. The evaluation information is given by the experts in the form of LZNs. Let \(h_{uv}^{t}\) represent the evaluation of alternative Aau under attribute Cv provided by expert Et, where \(h_{uv}^{t} = \left( {A_{{\phi_{uv}^{t} }} ,B_{{\varphi_{uv}^{t} }} } \right)\) is an LZN. The original evaluation matrix Ht of Et is shown below:

$$ H_{t} =\left[ {h_{uv}^{t} } \right]_{m \times n} =\left( {\begin{array}{*{20}c} {h_{11}^{t} } & \ldots & {h_{1n}^{t} } \\ \vdots & \ddots & \vdots \\ {h_{m1}^{t} } & \cdots & {h_{mn}^{t} } \\ \end{array} } \right),\;t = 1,2, \cdots ,k. $$

Generate the basic probability assignment based on linguistic Z-numbers

LZNs conform more closely to human expression habits and have a strong ability to retain the original information. To deal with the uncertainty in the assessments and facilitate subsequent information integration, the frame of discernment in DEST is used to represent the fuzzy restriction on the variable value and the reliability measure in LZNs.

Let \(S = \left\{ {s_{ - \tau } , \cdots ,s_{ - 1} ,s_{0} ,s_{1} , \cdots ,s_{\tau } } \right\}\) be a linguistic term set, and let \(B_{{\varphi_{uv}^{t} }} = \left\{ {S_{\alpha } |S_{\alpha } \in S;\;\alpha \in \left\{ { - \tau , \cdots , - 1,0,1, \cdots ,\tau } \right\}} \right\}\) be the reliability component of an LZN \(h_{uv}^{t} = \left( {A_{{\phi_{uv}^{t} }} ,B_{{\varphi_{uv}^{t} }} } \right)\). Then \(B_{{\varphi_{uv}^{t} }}\) can be quantified by the linguistic scale function L given in Eq. (10). Since the conversion result increases with the confidence level, it can be regarded as a score function of the reliability part embedded in the evaluation.

$$ L(B_{{\varphi_{uv}^{t} }} \left( \alpha \right)) = \frac{\alpha + \tau }{{2\tau }}. $$
(10)
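As a quick illustration of Eq. (10), assuming a seven-grade reliability set (τ = 3) like the one used in the case study:

```python
def linguistic_scale(alpha, tau):
    """Score of the reliability term s_alpha in S = {s_-tau, ..., s_tau}, Eq. (10)."""
    return (alpha + tau) / (2 * tau)

# With tau = 3, "certain" (alpha = 2) scores 5/6 and "very certain" (alpha = 3) scores 1.
print(linguistic_scale(2, 3), linguistic_scale(3, 3))
```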

In a recent study, Ren et al. (2020) replaced the membership function μ(x) with the utility function u(ϕj) and used the Shapley value method to revise the utility values of non-independent evaluation grades [27].

Definition 3.1

[26]. A fuzzy measure u is a set function \(u:2^{\phi } \to \left[ {0,1} \right]\) on the set \(\phi = \left\{ {\phi_{ - g} ,\; \cdots ,\;\phi_{ - 1} ,\;\phi_{0} ,\;\phi_{1} ,\; \cdots ,\;\phi_{g} } \right\}\) satisfying the following conditions: \(\left( 1 \right)\;u\left( \emptyset \right) = 0,\;u\left( \phi \right) = 1\); \(\left( 2 \right)\;u\left( P \right) \le u\left( Q \right)\;{\text{if}}\;P \subseteq Q\;{\text{for all}}\;P,Q \subseteq \phi\). u({ϕj}) indicates the utility or the weight of element ϕj. The Shapley index for each ϕj ∈ ϕ is defined as:

$$ \begin{aligned}\rho \left( {\phi_{j} } \right) &= \sum\limits_{{K \subset \phi \backslash \phi_{j} }} {\frac{{\left( {n - \left| K \right| - 1} \right)!\left| K \right|!}}{n!}} \; \left[ {u\left( {K \cup \left\{ {\phi_{j} } \right\}} \right) - u\left( K \right)} \right],\;\\ & j = \left\{ { - g, \cdots , - 1,0,1, \cdots ,g} \right\}. \end{aligned}$$
(11)

The Shapley value is the average marginal contribution; it represents a particular constituent’s share in the value of all possible coalitions and can be used as a tool to determine a reasonable marginal contribution of every distinct element \(\phi_{j} \left( {j = \left\{ { - g, \cdots , - 1,0,1, \cdots ,g} \right\}} \right)\) to the set ϕ. Through this operation, we have \(\rho \left( \Delta \right) = \sum\nolimits_{{\phi_{j} \in \Delta }} {\rho \left( {\left\{ {\phi_{j} } \right\}} \right)} ,\;for\;all\;\Delta \subseteq \phi\).
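A minimal sketch of the Shapley index in Eq. (11); the fuzzy measure is supplied as a dictionary over all coalitions, and the three-grade additive measure used here is purely hypothetical (the case study assumes measure values for the seven performance grades).

```python
from itertools import combinations
from math import factorial

def shapley_indices(u, elements):
    """Shapley index of each element, Eq. (11); u maps frozenset -> measure value."""
    n = len(elements)
    rho = {}
    for j in elements:
        others = [e for e in elements if e != j]
        value = 0.0
        for k in range(len(others) + 1):
            for K in combinations(others, k):
                K = frozenset(K)
                coef = factorial(n - k - 1) * factorial(k) / factorial(n)
                value += coef * (u[K | {j}] - u[K])   # marginal contribution of j
        rho[j] = value
    return rho

# Hypothetical measure on three grades; for an additive measure the Shapley
# index simply returns u({grade}).
u = {frozenset(): 0.0, frozenset("a"): 0.2, frozenset("b"): 0.3, frozenset("c"): 0.5,
     frozenset("ab"): 0.5, frozenset("ac"): 0.7, frozenset("bc"): 0.8, frozenset("abc"): 1.0}
print(shapley_indices(u, ["a", "b", "c"]))
```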

Combining the related concepts of Z-numbers, the probability distribution can be extended to BPAs, so that we can derive the evidence \(m\left( {h_{{_{uv} }}^{t} \left( {\phi_{j} } \right)} \right)\), which refers to the evaluation \(h_{{_{uv} }}^{t} \left( {\phi_{j} } \right)=\left( {A_{{\phi_{uv}^{t} }} \left( j \right),B_{{\varphi_{uv}^{t} }} \left( \alpha \right)} \right)\), that is:

$$ m\left( {h_{{_{uv} }}^{t} \left( {\phi_{j} } \right)} \right) = m\left( {\left( {A_{{\phi_{uv}^{t} }} \left( j \right),B_{{\varphi_{uv}^{t} }} \left( \alpha \right)} \right)} \right) = \frac{{L(B_{{\varphi_{uv}^{t} }} \left( \alpha \right))}}{{\rho \left( {\phi_{j} } \right)}}, $$
(12)

where \(A_{{\phi_{uv}^{t} }}\) indicates the evaluation grade of Aau under Cv provided by Et, \(L\left( {B_{{\varphi_{uv}^{t} }} \left( \alpha \right)} \right)\) represents the score of corresponding reliability part, and ρ(ϕj) is the revised utility value of the evaluation grade.

The remaining beliefs are assigned to the FOD Θ:

$$\begin{aligned} & m(h_{{_{uv} }}^{t} \left( \Theta \right))=1 -m\left( {h_{{_{uv} }}^{t} \left( {\phi_{j} } \right)} \right),\quad \\ & j =- {{ g,}} \dots {, - 1,0,1,} \dots {{,g}}{.} \end{aligned}$$
(13)

Since there are multiple evaluations for each alternative under the same criterion, the BPAs need to be normalized according to Eq. (14).

$$ \widetilde{m}(h_{{_{uv} }}^{t} ) = \frac{{m(h_{{_{uv} }}^{t} )}}{{\sum\nolimits_{t = 1}^{k} {m(h_{{_{uv} }}^{t} )} }}. $$
(14)

For simplicity, we still use m instead of \(\widetilde{m}\) to represent the normalized BPAs hereunder.

The rationale for this step is that the performance of each alternative in the same dimension is objective, while the evaluation is relatively subjective. When a decision maker is proficient in a certain area of knowledge, he will give a higher confidence level to his own evaluation, and the corresponding evaluation is more reliable. Conversely, the lower the decision maker’s confidence in the evaluation, the lower the credibility of the opinion. In an extreme case where a decision maker has no confidence at all in his own evaluation, we may not take that evaluation into account.

Determine the decision matrix and weights of criteria

The entropy weight method takes the degree of dispersion of an index as the basis for determining its weight. As an objective weighting method, it has high credibility and accuracy. Moreover, it can reflect the distinguishing ability of the indicators and is simple to compute.

Deng entropy is an effective way to measure the uncertainty degree of BPAs [45]. Therefore, this article calculates the evidence’s inner reliability and the criteria weights based on Deng entropy to avoid interference from subjective factors. The determination of the BPAs’ outer reliability is based on the conflict relationship. The following are the specific steps to obtain the final decision matrix and the weights of the criteria.

Step 1: for every piece of evidence provided by each expert, calculate the Deng entropy of BPA according to Eqs. (7)–(9). Then the inner reliability is determined by Eq. (15)

$$ I(m) = 1 - \frac{{E_{d} (m)}}{{D_{\max } }}. $$
(15)

Step 2: measure the outer reliability of the evidence. As proposed in a previous study [3], the outer reliability of a mass function depends mainly upon the conflict relationship between the aggregated BPAs. Referring to the multi-source data integration framework [46], the outer reliability of a BPA is related to three metrics: the weight or strength of the credibility source S, the support for the proposed solution A from the data derived from S, and the compatibility of A with As, where As denotes the value proposed by the source S. Below we therefore introduce the concepts of support degree and credibility degree for mass functions.

Definition 3.2

Let \(Q = \left\{ {m_{1} ,m_{2} , \cdots ,m_{q} } \right\}\) be q independent sources of evidence, the similarity degree of mρ and mε is defined as.

$$ Sim\left( {m_{\rho } ,m_{\varepsilon } } \right) = 1 - d\left( {m_{\rho } ,m_{\varepsilon } } \right), $$
(16)

where d (mρ, mε) is the distance measure between mρ and mε.

Since the distance measure can evaluate the difference between BPAs, a greater distance between \(m_{\rho }\) and evidence from other sources means a lower compatibility between them, thus the support of this piece of evidence for mρ will be lower. The support degree of a BPA is defined as follows:

Definition 3.3

Let \(Q = \left\{ {m_{1} ,m_{2} , \cdots ,m_{q} } \right\}\) be q independent sources of evidence, the support degree of mρ is defined as

$$ Sup\left( {m_{\rho } } \right) = \sum\limits_{\varepsilon = 1,\varepsilon \ne \rho }^{q} {Sim} \left( {m_{\rho } ,m_{\varepsilon } } \right). $$
(17)

The support degree of other evidence for mρ can just indicate its credibility. So, we can define the credibility degree of a mass function as follows.

Definition 3.4

Let \(Q = \left\{ {m_{1} ,m_{2} , \cdots ,m_{q} } \right\}\) be q independent sources of evidence, the credibility degree of mρ is defined as

$$ Crd\left( {m_{\rho } } \right) = \frac{{Sup\left( {m_{\rho } } \right)}}{{\sum\nolimits_{\rho = 1}^{q} {Sup} \left( {m_{\rho } } \right)}}. $$
(18)

The definitions of support degree and credibility degree meet the properties in [36]. Denoting the outer reliability of a BPA as \(O\left( \cdot \right)\), we have:

$$ O\left( {m_{\rho } } \right) = Crd\left( {m_{\rho } } \right). $$
(19)

Step 3: combine the outer reliability with the inner reliability, and the reliability of each piece of evidence can be obtained, which can be regarded as the discounting coefficient of a BPA.

Definition 3.5

Let \(Q = \left\{ {m_{1} ,m_{2} , \cdots ,m_{q} } \right\}\) be q independent sources of evidence, the reliability of mρ is defined as.

$$ {\text{Re}} b\left( {m_{\rho } } \right) = \eta I\left( {m_{\rho } } \right) + \left( {1 - \eta } \right)O\left( {m_{\rho } } \right), $$
(20)

where \(\eta \in \left[ {0,1} \right]\) is a parameter used to adjust the influence of inner and outer reliability on the overall reliability.
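Putting Steps 1–3 together, the sketch below derives discounting coefficients from Eqs. (15)–(20); it repeats compact versions of the Deng entropy and BPA distance helpers sketched earlier so the snippet runs on its own, and the three example BPAs, the frame size and η = 0.5 (the value also taken in the case study) are illustrative assumptions.

```python
import math

def deng_entropy(bpa):
    return -sum(m * math.log2(m / (2 ** len(A) - 1)) for A, m in bpa.items() if m > 0)

def bpa_distance(m1, m2):
    jac = lambda A, B: len(A & B) / len(A | B) if (A | B) else 0.0
    focal = set(m1) | set(m2)
    diff = {A: m1.get(A, 0.0) - m2.get(A, 0.0) for A in focal}
    return math.sqrt(0.5 * max(sum(diff[A] * diff[B] * jac(A, B)
                                   for A in focal for B in focal), 0.0))

def discounting_coefficients(bpas, d_max, eta=0.5):
    """Blend inner reliability (Eq. 15) with outer reliability (Eqs. 16-19) via Eq. (20)."""
    inner = [1.0 - deng_entropy(m) / d_max for m in bpas]                           # Eq. (15)
    support = [sum(1.0 - bpa_distance(mi, mj) for j, mj in enumerate(bpas) if j != i)
               for i, mi in enumerate(bpas)]                                        # Eqs. (16)-(17)
    outer = [s / sum(support) for s in support]                                     # Eqs. (18)-(19)
    return [eta * i + (1.0 - eta) * o for i, o in zip(inner, outer)]                # Eq. (20)

theta = frozenset({"H1", "H2", "H3"})
d_max = math.log2(sum((2 ** k - 1) * math.comb(3, k) for k in range(1, 4)))  # max Deng entropy
bpas = [{frozenset({"H1"}): 0.7, theta: 0.3},
        {frozenset({"H1"}): 0.6, frozenset({"H2"}): 0.2, theta: 0.2},
        {frozenset({"H2"}): 0.8, theta: 0.2}]
print(discounting_coefficients(bpas, d_max))
```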

Step 4: calculate the discounted BPAs using the discounting coefficients. Take \(m\left( {h_{uv}^{t} } \right)\) as an example. For the beliefs assigned to the evaluation level ϕj, the discounted BPA can be calculated as

$$ \overline{m} \left( {h_{{_{uv} }}^{t} \left( {\phi_{j} } \right)} \right) = {\text{Re}} b\left( {m\left( {h_{{_{uv} }}^{t} \left( {\phi_{j} } \right)} \right)} \right) \times m\left( {h_{{_{uv} }}^{t} \left( {\phi_{j} } \right)} \right), $$
(21)
$$ \overline{m} \left( {h_{{_{uv} }}^{t} \left( \Theta \right)} \right) = {\text{Re}} b\left( {m(h_{{_{uv} }}^{t} \left( \Theta \right))} \right) \times m(h_{{_{uv} }}^{t} \left( \Theta \right)) + \left( {1 - {\text{Re}} b\left( {m(h_{{_{uv} }}^{t} \left( \Theta \right))} \right)} \right). $$
(22)

Step 5: aggregate the discounted evidence from all experts using the combination rule introduced in section “Preliminaries”. All evaluation information of the same alternative under the same attribute \(\overline{m} \left( {h_{uv}^{t} } \right)\;\left( {u = 1,2 \cdots ,m;\;\,v = 1,2, \cdots ,n;\,\;t = 1,2, \cdots ,k} \right)\) is aggregated into the synthesized evaluation value \(m\left( {\overline{h}_{uv} } \right)\;\left( {u = 1,\;2, \cdots ,m;\,\;v = 1,\;2, \cdots ,n} \right)\). Let \(\overline{H} = \left( {m\left( {\overline{h}_{uv} } \right)} \right)_{m \times n} .\)

Step 6: determine the criteria weights. For an actual decision-making problem, different criteria usually have different importance. In this paper, Deng entropy is used for the determination of each criterion’s weight.

Fuse the BPAs corresponding to the alternatives under the same criterion (converted in the last step) using Dempster’s combination rule; the contribution of all alternatives for Cv is then obtained according to the following definition.

Definition 3.6

[2]. The degree of contribution of all the alternatives for criterion Cv is defined as BPAv:

$$ BPA_{v} = \left\{ {m_{v} \left( {\phi_{1} } \right),m_{v} \left( {\phi_{2} } \right), \cdots ,m_{v} \left( {\phi_{L} } \right)} \right\}. $$
(23)

Further, the uncertainty of Cv is estimated by Deng entropy and recorded as UCv. The calculation steps are shown in Eq. (8). Normalize the value of UCv to the interval [0, 1]; then the consistency of criterion Cv can be defined as

$$ CS_{v} = 1 - UC_{v} . $$
(24)

Based on the idea that an index with a lower value of Deng entropy has a greater impact on the comprehensive evaluation (i.e. weight), the criterion weight can be calculated as

$$ W_{v} = \frac{{CS_{v} }}{{\sum\nolimits_{v = 1}^{n} {CS_{v} } }}. $$
(25)

Finally, the weights of criteria are obtained through the normalization operation.
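The sketch below traces Step 6 for two criteria; it repeats the `dempster_combine` and `deng_entropy` helpers sketched earlier so it runs on its own, the BPAs are hypothetical, and dividing UCv by the maximum Deng entropy is one plausible reading of the normalization mentioned above.

```python
import math
from itertools import product

def dempster_combine(m1, m2):
    combined, K = {}, 0.0
    for (B, mb), (C, mc) in product(m1.items(), m2.items()):
        if B & C:
            combined[B & C] = combined.get(B & C, 0.0) + mb * mc
        else:
            K += mb * mc
    return {A: v / (1.0 - K) for A, v in combined.items()}

def deng_entropy(bpa):
    return -sum(m * math.log2(m / (2 ** len(A) - 1)) for A, m in bpa.items() if m > 0)

def criteria_weights(columns, d_max):
    """columns[v] holds the fused alternative BPAs under criterion C_v.
    Fuse them (Eq. 23), turn the Deng entropy into a consistency score
    CS_v = 1 - UC_v (Eq. 24) and normalize into weights (Eq. 25)."""
    cs = []
    for col in columns:
        fused = col[0]
        for m in col[1:]:
            fused = dempster_combine(fused, m)
        cs.append(1.0 - deng_entropy(fused) / d_max)   # UC_v standardized to [0, 1]
    return [c / sum(cs) for c in cs]

theta = frozenset({"L1", "L2", "L3"})
d_max = math.log2(sum((2 ** k - 1) * math.comb(3, k) for k in range(1, 4)))
col1 = [{frozenset({"L3"}): 0.6, theta: 0.4},
        {frozenset({"L3"}): 0.5, frozenset({"L2"}): 0.2, theta: 0.3}]
col2 = [{frozenset({"L1"}): 0.4, theta: 0.6},
        {frozenset({"L2"}): 0.4, theta: 0.6}]
print(criteria_weights([col1, col2], d_max))   # the less uncertain criterion gets more weight
```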

Step 7: convert BPAs into probability distributions BetPm according to Eq. (4) using the Pignistic probability function. With an aggregate function, the probability distributions can be integrated into a numerical value as follows.

Definition 3.7

Assume the importance of the linguistic terms is \( I = \left\{ {I_{1} ,I_{2} , \cdots I_{L} } \right\}\) and their corresponding values are \( V = \left\{ {v_{1} ,v_{2} , \cdots v_{L} } \right\}\); then the probability distribution \( P = \left\{ {p_{1} ,p_{2} , \cdots p_{L} } \right\}\) can be aggregated as

$$ F\left( {I_{1} ,I_{2} , \cdots ,I_{L} } \right) = PV^{T} = p_{1} v_{1} + p_{2} v_{2} + \cdots + p_{L} v_{L} . $$
(26)

Through the above operation, the decision matrix can be obtained as \(F_{M} = \left( {f_{{M_{uv} }} } \right)_{m \times n} = \left( {\begin{array}{*{20}c} {f_{{M_{11} }} } & \ldots & {f_{{M_{1n} }} } \\ \vdots & \ddots & \vdots \\ {f_{{M_{m1} }} } & \cdots & {f_{{M_{mn} }} } \\ \end{array} } \right).\)
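A small sketch of Eq. (26); the Pignistic probabilities and the grade scores v_j below are hypothetical, since the paper does not fix concrete v_j at this point.

```python
def aggregate_score(betp, values):
    """Collapse a Pignistic distribution over the grades into one number, Eq. (26)."""
    return sum(p * v for p, v in zip(betp, values))

# Hypothetical distribution over seven grades and illustrative grade scores.
betp = [0.02, 0.03, 0.05, 0.10, 0.25, 0.35, 0.20]
values = [0.0, 1 / 6, 2 / 6, 3 / 6, 4 / 6, 5 / 6, 1.0]
print(round(aggregate_score(betp, values), 4))
```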

Obtain the ranking results using the generalized TODIM-ELECTRE II method

To rank all alternatives in the decision matrix obtained above, a novel method named the generalized TODIM-ELECTRE II method is presented in this subsection. This newly raised method is a hybrid of the generalized TODIM method and the ELECTRE II method; it can reflect not only the outranking relationships between the alternatives but also the degree of outranking. Thus, it can handle the compensation problem between criteria while considering the psychological characteristics of the DMs.

Step 1: for the criterion Cv, its positive ideal value and negative ideal value can be determined according to the decision matrix FM:

$$ f_{M}^{v + } = \mathop {\max }\limits_{u} \left\{ {f_{{M_{uv} }} } \right\};\;f_{M}^{v - } = \mathop {\min }\limits_{u} \left\{ {f_{{M_{uv} }} } \right\},u = 1,2, \cdots ,m. $$
(27)

Moreover, the distance measure between decision values can be calculated as

$$ d_{lz} (f_{{M_{i} }} ,f_{{M_{j} }} ) = \left| {f_{{M_{i} }} - f_{{M_{j} }} } \right|, $$
(28)

where the absolute value measures the size of the gap between the two decision values rather than comparing their numerical order.

Next, borrowing the concept of the relative distance function from the TOPSIS method, the relative distance for decision value \(f_{{M_{uv} }}\) is computed by Eq. (29) and used in the subsequent steps.

$$ \overline{{d_{lz} }} \left( {f_{{M_{uv} }} } \right) = \frac{{d_{lz} \left( {f_{{M_{uv} }} ,f_{M}^{v - } } \right)}}{{d_{lz} \left( {f_{{M_{uv} }} ,f_{M}^{v - } } \right) + d_{lz} \left( {f_{{M_{uv} }} ,f_{M}^{v + } } \right)}}. $$
(29)
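Since the distance of Eq. (28) is a simple absolute gap, the relative distance of Eq. (29) reduces to a min-max style normalization within each criterion column, as the sketch below shows on hypothetical decision values.

```python
def relative_distances(column):
    """Relative distance of Eq. (29) for one criterion column: distance to the
    negative ideal value divided by the total spread (Eqs. 27-28 give the
    ideal values and the absolute-gap distance)."""
    f_plus, f_minus = max(column), min(column)
    spread = (f_plus - f_minus) or 1.0   # guard when all values coincide
    return [(f - f_minus) / spread for f in column]

print(relative_distances([0.42, 0.55, 0.71, 0.38, 0.60]))
```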

Step 2: determine three types of linguistic Z-number concordance sets (LZCSs) for \(f_{{M_{\chi v} }}\) and \(f_{{M_{\gamma v} }}\), which represent the decision values corresponding to any two alternatives’ performance ratings on criterion Cv, by the following classification standards:

(1) the strong concordance set

$$\begin{aligned} CC_{\chi \gamma }^{s} &= \left\{ {v|d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v + } } \right) < d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v + } } \right)\;and\;}\right. \\ &\left. \quad {d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v - } } \right)> d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v - } } \right)} \right\}; \end{aligned}$$
(30)

(2) the medium concordance set

$$ CC_{\chi \gamma }^{m} = \left\{ \begin{gathered} v|\left( {d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{{^{v + } }} } \right) < d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{{^{v + } }} } \right)\;and\;d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{{^{v - } }} } \right) = d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{{^{v - } }} } \right)} \right)\;or \hfill \\ \left( {d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{{^{v + } }} } \right) = d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{{^{v + } }} } \right)\;and\;d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{{^{v - } }} } \right) > d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{{^{v - } }} } \right)} \right) \hfill \\ \end{gathered} \right\}; $$
(31)

(3) the weak concordance set

$$ CC_{\chi \gamma }^{w} = \left\{ {v|\overline{{d_{lz} }} \left( {f_{{M_{\chi v} }} } \right) > \overline{{d_{lz} }} \left( {f_{{M_{\gamma v} }} } \right)} \right\}. $$
(32)

Since \(d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v + } } \right)\) is the distance between \(f_{{M_{\chi v} }}\) and \(f_{M}^{v + }\), it can identify the gap between the performance rating \(f_{{M_{\chi v} }}\) and the positive ideal value; the same principle holds for \(d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{{v{ - }}} } \right)\). Therefore the determination of \(CC_{\chi \gamma }^{s}\) and \(CC_{\chi \gamma }^{m}\) is unambiguous. However, for the determination of the weak concordance set, this paper considers two peculiar conditions in which \(f_{{M_{\chi v} }}\) may transcend \(f_{{M_{\gamma v} }}\):

$$\begin{aligned} & ({\text{a}})\quad d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v + } } \right) < d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v + } } \right)\;and\; \\ & d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v - } } \right) < d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v - } } \right); \end{aligned}$$
$$\begin{aligned} & ({\text{b}})\quad d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v + } } \right) > d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v + } } \right)\;{\text{and}}\; \\ & d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v - } } \right) > d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v - } } \right). \end{aligned} $$

If \(\overline{{d_{lz} }} \left( {f_{{M_{\chi v} }} } \right) > \overline{{d_{lz} }} \left( {f_{{M_{\gamma v} }} } \right)\), it can be considered Aaχ behaves better than Aaγ in adjusting its relationship with the positive and negative benchmarks.

Step 3: determine the linguistic Z-number strong, medium, weak discordance sets (Collectively called LZDSs) for \(f_{{M_{\chi v} }}\) and \(f_{{M_{\gamma v} }}\).

(1) the strong discordance set

$$ \begin{aligned} DC_{\chi \gamma }^{s} &= \left\{ {v|d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v + } } \right) > d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v + } } \right)\;and}\right. \\ &\quad\left. {\;d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{{v{ - }}} } \right) < d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{{v{ - }}} } \right)} \right\}; \end{aligned}$$
(33)

(2) the medium discordance set

$$ DC_{\chi \gamma }^{m} = \left\{ \begin{gathered} v|\left( {d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v + } } \right) > d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v + } } \right)\;and\;d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{{v{ - }}} } \right)=d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{{v{ - }}} } \right)} \right)or \hfill \\ \left( {d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v + } } \right)=d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v + } } \right)\;and\;d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{{v{ - }}} } \right) < d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{{v{ - }}} } \right)} \right) \hfill \\ \end{gathered} \right\}; $$
(34)

(3) the weak discordance set

$$ DC_{\chi \gamma }^{w} = \left\{ {v|\overline{{d_{lz} }} \left( {f_{{M_{\chi v} }} } \right) < \overline{{d_{lz} }} \left( {f_{{M_{\gamma v} }} } \right)} \right\}. $$
(35)

Step 4: in addition to the above classifications, there is another situation in which \(d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v + } } \right)=d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v + } } \right)\) and \(d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{{v{ - }}} } \right)=d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{{v{ - }}} } \right)\); then we regard these two alternatives as nondiscriminatory concerning criterion Cv. The linguistic Z-number indifference set (LZIS) is defined as

$$\begin{aligned} ID_{\chi \gamma } &= \left\{ {v|d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{v + } } \right)=d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{v + } } \right)\;and\;}\right. \\ &\quad\left.{d_{lz} \left( {f_{{M_{\chi v} }} ,f_{M}^{{v{ - }}} } \right)=d_{lz} \left( {f_{{M_{\gamma v} }} ,f_{M}^{{v{ - }}} } \right)} \right\}. \end{aligned}$$
(36)

Step 5: compute the dominance degree between alternatives Aaχ and Aaγ concerning criterion Cv, denoted as ϖv (Aaχ, Aaγ). Drawing from [42], let f1(x) in the generalized TODIM method be xα and f2(x) be λxβ, and the weight vector w is not modified. Therefore the dominance degree can be expressed as:

$$ \varpi_{v} (Aa_{\chi } ,Aa_{\gamma } )=\left\{ \begin{gathered} w_{v} (f_{{M_{\chi v} }} - f_{{M_{\gamma v} }} )^{\alpha } ,\;if\;f_{{M_{\chi v} }} \ge f_{{M_{\gamma v} }} , \hfill \\ - \lambda w_{v} (f_{{M_{\gamma v} }} - f_{{M_{\chi v} }} )^{\beta } ,\;if\;f_{{M_{\chi v} }} < f_{{M_{\gamma v} }} . \hfill \\ \end{gathered} \right. $$
(37)
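A sketch of the dominance degree in Eq. (37), with the loss branch evaluated on the positive gap so that the fractional exponent is well defined; α = β = 0.88 and λ = 1 mirror the parameter values used later in the case study, while the decision values and the weight are illustrative.

```python
def dominance(f_chi, f_gamma, w_v, alpha=0.88, beta=0.88, lam=1.0):
    """Generalized TODIM dominance degree, Eq. (37): gains enter through
    w_v * d^alpha, losses through -lam * w_v * d^beta, with d the absolute gap."""
    d = f_chi - f_gamma
    if d >= 0:
        return w_v * d ** alpha          # gain (or indifference)
    return -lam * w_v * (-d) ** beta     # loss, felt more strongly when lam > 1

print(dominance(0.71, 0.55, w_v=0.3))    # positive: Aa_chi gains over Aa_gamma
print(dominance(0.38, 0.55, w_v=0.3))    # negative: Aa_chi loses to Aa_gamma
```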

Step 6: calculate the linguistic Z-number concordance index LZCI on any pair of alternatives Aaχ and Aaγ of LZCSs by

$$ \begin{aligned} CI_{\chi \gamma } =\, & \frac{{\omega^{s} \sum\nolimits_{{v \in CC_{\chi \gamma }^{s} }} {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } ) + } \omega^{m} \sum\nolimits_{{v \in CC_{\chi \gamma }^{m} }} {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } ) + } \omega^{w} \sum\nolimits_{{v \in CC_{\chi \gamma }^{w} }} {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } )} + \omega^= \sum\nolimits_{{v \in ID_{\chi \gamma } }} {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } )} }}{{\sum\nolimits_{v = 1}^{n} {w_{v} } }} \\ =\, & \omega^{s} \sum\nolimits_{{v \in CC_{\chi \gamma }^{s} }} {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } ) + } \omega^{m} \sum\nolimits_{{v \in CC_{\chi \gamma }^{m} }} {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } ) + } \omega^{w} \sum\nolimits_{{v \in CC_{\chi \gamma }^{w} }} {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } )} + \omega^= \sum\nolimits_{{v \in ID_{\chi \gamma } }} {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } )} \\ \end{aligned} $$
(38)

where wv is the weight of Cv, and \(\omega^{s}\), \(\omega^{m}\), \(\omega^{w}\), and \(\omega^{=}\) are attitude weights. CIχγ mirrors the relative importance of Aaχ over Aaγ, and CIχγ ∈ [0,1]. Then all LZCIs of pairs of alternatives comprise the linguistic Z-number concordance matrix CLZ = [ CIχγ]m×m (χ, γ = 1, 2, …, m):

$$ C_{LZ} = \left[ {\begin{array}{*{20}c} - & \cdots & {CI_{1j} } & \cdots & {CI_{1m} } \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ {CI_{i1} } & \cdots & {CI_{ij} } & \cdots & {CI_{im} } \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ {CI_{m1} } & \cdots & {CI_{mj} } & \cdots & - \\ \end{array} } \right] $$

Step 7: calculate the linguistic Z-number discordance index LZDI on any pair of alternatives Aaχ and Aaγ of LZCSs by

$$ DI_{\chi \gamma } = \frac{{\max \left\{ {\omega^{s} \times \mathop {\max }\nolimits_{{v \in DC_{\chi \gamma }^{s} }} \left\{ {\left| {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } )} \right|} \right\},\omega^{m} \times \mathop {\max }\nolimits_{{v \in DC_{\chi \gamma }^{m} }} \left\{ {\left| {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } )} \right|} \right\},\omega^{w} \times \mathop {\max }\nolimits_{{v \in DC_{\chi \gamma }^{w} }} \left\{ {\left| {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } )} \right|} \right\}} \right\}}}{{\mathop {\max }\nolimits_{v} \left| {\varpi_{v} (Aa_{\chi } ,Aa_{\gamma } )} \right|}}. $$
(39)

DIχγ can mirror the relative inferiority of alternative Aaχ to alternative Aaγ, and DIχγ ∈ [0,1]. Then all LZDIs of pairs of alternatives constitute the linguistic Z-number discordance matrix DLZ = [ DIχγ]m×m (χ, γ = 1, 2, …, m):

$$ D_{LZ} = \left[ {\begin{array}{*{20}c} - & \cdots & {DI_{1j} } & \cdots & {DI_{1m} } \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ {DI_{i1} } & \cdots & {DI_{ij} } & \cdots & {DI_{im} } \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ {DI_{m1} } & \cdots & {DI_{mj} } & \cdots & - \\ \end{array} } \right] $$

Step 8: calculate the concordance threshold by \(\overline{C}_{lz} = \frac{{\sum\nolimits_{\chi = 1}^{m} {\sum\nolimits_{\gamma = 1}^{m} {CI_{\chi \gamma } } } }}{{m\left( {m - 1} \right)}}\) and construct the linguistic Z-number concordance Boolean matrix \(B_{LZ}^{C} =\left( {e_{\chi \gamma } } \right)_{m \times m}\), where eχγ is a Boolean variable. Specifically, \(e_{\chi \gamma } = 1 \Leftrightarrow CI_{\chi \gamma } \ge \overline{C}_{lz}\) and \(e_{\chi \gamma } = 0 \Leftrightarrow CI_{\chi \gamma } < \overline{C}_{lz}\). Furthermore, eχγ = 1 denotes that Aaχ dominates Aaγ from the perspective of concordance.

Compute the discordance threshold as \(\overline{D}_{lz} = \frac{{\sum\nolimits_{\chi = 1}^{m} {\sum\nolimits_{\gamma = 1}^{m} {DI_{\chi \gamma } } } }}{{m\left( {m - 1} \right)}}\) and establish the linguistic Z-number discordance Boolean matrix \(B_{LZ}^{D} =\left( {q_{\chi \gamma } } \right)_{m \times m}\), where qχγ is also a Boolean variable. Specifically, (1) if \(DI_{\chi \gamma } < \overline{D}_{lz}\), then \(q_{\chi \gamma } = 1\), and (2) if \(DI_{\chi \gamma } \ge \overline{D}_{lz}\), then \(q_{\chi \gamma } = 0\), where \(q_{\chi \gamma } = 1\) denotes that the discordance is not strong enough to veto Aaχ outranking Aaγ.

Step 9: multiply \(B_{LZ}^{C}\) and \(B_{LZ}^{D}\) component-wise and obtain the synthetic Boolean matrix BLZ with element bχγ = eχγ × qχγ (χ, γ = 1, 2, …, m).
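Steps 8–9 can be condensed into a few array operations, as sketched below; the two 3 × 3 index matrices are hypothetical stand-ins for CLZ and DLZ, and NumPy is used only for convenience.

```python
import numpy as np

def outranking_matrix(C, D):
    """Steps 8-9: threshold the concordance and discordance indices by their
    off-diagonal averages and multiply the resulting Boolean matrices element-wise."""
    C, D = np.asarray(C, float), np.asarray(D, float)
    m = C.shape[0]
    off = ~np.eye(m, dtype=bool)                 # exclude the diagonal
    c_bar = C[off].sum() / (m * (m - 1))         # concordance threshold
    d_bar = D[off].sum() / (m * (m - 1))         # discordance threshold
    E = (C >= c_bar) & off                       # e_{chi,gamma} = 1 when CI passes
    Q = (D < d_bar) & off                        # q_{chi,gamma} = 1 when DI does not veto
    return (E & Q).astype(int)                   # synthesized Boolean matrix B_LZ

C = [[0, 0.20, 0.05], [0.02, 0, 0.01], [0.15, 0.30, 0]]
D = [[0, 0.10, 0.60], [0.70, 0, 0.80], [0.05, 0.00, 0]]
print(outranking_matrix(C, D))   # a 1 in row chi, column gamma means Aa_chi outranks Aa_gamma
```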

Step 10: depict the outranking graph according to the preference relationships embodied in BLZ, which can be displayed by a digraph G = (V, A). A digraph is composed of a vertex set V and a set A of arcs linking the vertices. The vertices represent the alternatives involved in the MCGDM, and a directed arc represents an outranking relation. For example, an arc pointing from vχ to vγ signifies that Aaχ precedes Aaγ. A two-way arrow appears between two alternatives when Aaχ and Aaγ are indifferent. In addition, the absence of arrows between two vertices occurs when the two alternatives are incomparable.

An illustrative example of terminal wastewater solidification technology selection

Case description

Although the total amount of freshwater resources in China is not small, the per capita amount of freshwater is only about one-fourth of the world average, representing a serious shortage overall. Since entering the twenty-first century, the contradiction between the supply and demand of water resources in China has further intensified. Resource-based and water-quality-based water shortages have become one of the primary bottlenecks limiting China’s sustainable economic and social development. Water is a unity of quantity and quality, and pollution reduces the quality of water resources. Owing to the increase in sewage discharge and toxicity, not all sewage can be properly treated before it is discharged, which exacerbates the shortage of water resources. The state has successively promulgated relevant laws and regulations. For instance, the “Action Plan for Water Pollution Prevention and Control” issued in 2015 has put forward higher requirements for water resources utilization, water pollution prevention and control, and pollution discharge.

Actively responding to the national call and meeting the requirements of relevant water-saving and environmental protection laws and regulations, a seawater DC power plant intends to treat the power plant’s high-salt wastewater (mainly including desulfurization wastewater and boiler make-up water treatment system high-salt wastewater) through terminal wastewater solidification technology so as to realize zero discharge of wastewater from the whole plant. Figure 3 shows the factors affecting the water quality and quantity of desulfurization wastewater and the sources of pollutants.

Fig. 3
figure 3

Influencing factors of water quality and quantity and pollutant tracer

Now it is necessary to comprehensively compare a variety of terminal wastewater treatment technical routes and select the most appropriate terminal wastewater treatment scheme. Terminal wastewater treatment generally includes two stages: wastewater concentration reduction and terminal wastewater solidification. At present, there are several technical routes for zero discharge of desulfurization wastewater, as shown in Fig. 4.

Fig. 4
figure 4

Usual technical routes for zero discharge of desulfurization wastewater

According to the actual situation, there are five technologies for the power plant to choose from: Pretreatment + Multi-effect evaporation (Tc1), Pretreatment + double alkali method + double membrane method + main flue evaporation (Tc2), Triple tank (pH adjustment tank, reaction tank, flocculation tank) pretreatment + High-temperature flue bypass evaporation (Tc3), Flue gas concentration + End pressure filtration (Tc4) and Flue gas concentration + Fluidization evaporation of end secondary air (Tc5). Four experts {Ex1, Ex2, Ex3, Ex4} were asked to independently evaluate the performance of every technology under four criteria, including Operating cost (Cr1), Technology maturity (Cr2), Post-processing effect (Cr3) and Environmental impact (Cr4). Given the complexity of human cognition and the asymmetry of information, neither the weights of the experts nor the weights of the criteria are known. To improve the accuracy of the information, the experts were required to offer evaluation information in LZN form. The first linguistic term set S = {δ-3, δ-2, δ-1, δ0, δ1, δ2, δ3} = {very poor (VP), poor (P), slightly poor (SP), medium (M), slightly good (SG), good (G), very good (VG)} was adopted by the experts to judge the performance grade of every alternative concerning each criterion, and the confidence in an evaluation was expressed with the other linguistic term set S′ = {ξ-3, ξ-2, ξ-1, ξ0, ξ1, ξ2, ξ3} = {very uncertain (VU), uncertain (U), a little uncertain (LU), normal (N), a little certain (LC), certain (C), very certain (VC)}. A higher evaluation level indicates better performance.

Case processing

Step 1: obtain the evaluation results of experts in the form of LZNs. Every expert provides his own judgments and corresponding confidence levels concerning each technology’s performance under each criterion, which are displayed in Tables 1, 2, 3, 4.

Table 1 Evaluation provided by Ex1
Table 2 Evaluation provided by Ex2
Table 3 Evaluation provided by Ex3
Table 4 Evaluation provided by Ex4

Step 2: update the evaluation grades’ utility values using the Shapley value method described in Section “An integrated decision-making framework based on the generalized TODIM-ELECTRE II method and DEST”. Firstly, assume the utility values of the performance grades δj (j = − 3, − 2, − 1, 0, 1, 2, 3) and of all possible coalitions, part of which are shown as follows:

$$ \begin{aligned}\mu \left( \emptyset \right) &= 0,\;\mu \left( {\delta_{ - 3} } \right) = 0.03,\;\mu \left( {\delta_{ - 2} } \right) = 0.06,\;\\\mu \left( {\delta_{ - 1} } \right) &= 0.1,\;\mu \left( {\delta_{0} } \right) = 0.14,\;\mu \left( {\delta_{1} } \right) = 0.18,\;\mu \left( {\delta_{2} } \right) = 0.22, \end{aligned} $$
$$ \begin{aligned} \mu \left( {\delta_{3} } \right) &= 0.26,\;\mu \left( {\delta_{ - 3} ,\delta_{ - 2} } \right) = 0.07,\; \cdots ,\;\\ \mu \left( {\delta_{ - 3} ,\delta_{ - 2} ,\delta_{ - 1} } \right) &= 0.17,\; \cdots ,\;\\ \mu \left( {\delta_{ - 3} ,\delta_{ - 2} ,\delta_{ - 1} ,\delta_{0} } \right)& = 0.31,\; \cdots ,\end{aligned} $$
$$\begin{aligned} &\mu \left( {\delta_{ - 3} ,\delta_{ - 2} ,\delta_{ - 1} ,\delta_{0} ,\delta_{1} } \right) = 0.5,\;\\ &\qquad \cdots ,\;\mu \left( {\delta_{ - 3} ,\delta_{ - 2} ,\delta_{ - 1} ,\delta_{0} ,\delta_{1} ,\delta_{2} } \right) = 0.75,\\ &\qquad \mu \left( {\delta_{ - 3} ,\delta_{ - 2} ,\delta_{ - 1} ,\delta_{0} ,\delta_{1} ,\delta_{2} ,\delta_{3} } \right) = 1.\end{aligned} $$

Then, the utility of each evaluation grade can be determined by modifying it with the Shapley value method according to Eq. (11), as shown below:

\(\rho \left( {\delta_{ - 3} } \right) = 0.025,\;\rho \left( {\delta_{ - 2} } \right) = 0.059,\;\rho \left( {\delta_{ - 1} } \right) = 0.098,\;\rho \left( {\delta_{0} } \right) = 0.144,\;\rho \left( {\delta_{1} } \right) = 0.194,\;\rho \left( {\delta_{2} } \right) = 0.229,\;\rho \left( {\delta_{3} } \right) = 0.251\) .

Step 3: calculate the scores of reliability grades according to Eq. (10) and obtain the BPAs by Eqs. (12)–(14). The conversion results obtained are shown in Tables 5, 6, 7, 8.

Table 5 The transformed decision matrix of Ex1
Table 6 The transformed decision matrix of Ex2
Table 7 The transformed decision matrix of Ex3
Table 8 The transformed decision matrix of Ex4

Step 4: determine the inner reliability of the BPAs. First, compute the Deng entropy of the BPAs by Eq. (8), and then calculate the maximum Deng entropy. In the present example, the scale of the FOD is 7. Hence, its corresponding power set involves \(2^{7}\) elements, and the distribution of propositions (the empty set is omitted) should satisfy Eq. (9). The maximum Deng entropy is calculated as 11.0077. The inner reliability can thus be determined (Table 9).

Table 9 Deng entropy and the inner reliability

Step 5: determine the outer reliability of BPAs and calculate the discounted BPAs.

First, determine the distance between BPAs according to Eqs. (5) and (6), then the value of support degree of BPAs can be calculated by Eqs. (16) and (17) such as:

$$ Sup\left( {m_{11}^{1} } \right) = Sim\left( {m_{11}^{1} ,m_{11}^{2} } \right) + Sim\left( {m_{11}^{1} ,m_{11}^{3} } \right) + Sim\left( {m_{11}^{1} ,m_{11}^{4} } \right) = 2.8847 $$

Eventually, the credibility degree can be calculated by Eq. (18), and the outer reliability can be determined. Taking the value of η in Eq. (20) as 0.5, the overall reliability and discounting coefficients of the BPAs can be determined. Table 10 shows the discounted BPAs calculated by Eqs. (1) and (2).

Table 10 Values of outer reliability, discounting coefficients and discounted BPAs

Step 6: aggregate the evaluation results of the same alternative under the same criterion from multiple experts using Dempster’s rule. The results obtained by Eq. (3) are shown in Table 11.

Step 7: determine the weights of the criteria. Aggregate all BPAs concerning different alternatives under the same criterion using the combination algorithm and obtain the combined BPAs. The weights of the criteria can then be obtained on the basis of Deng entropy, as shown in Table 12.

Step 8: obtain the final decision matrix. First, convert the BPAs in Table 11 into probability distributions via the Pignistic probability function; the results can be seen in Table 13. Then, the probability distributions are integrated into numerical values according to Eq. (26). Table 14 is the final decision matrix.

Table 11 Values of combined BPAs
Table 12 Weights of four criteria
Table 13 BetP presentation of the alternatives’ performance under the criteria
Table 14 Decision matrix FM
Table 15 The positive and negative ideal value of each criterion

Step 9: identify the positive ideal value \(f_{M}^{v + }\) and negative ideal value \(f_{M}^{v - }\) of every criterion (Table 15); the relative distances of the decision values \(f_{{M_{uv} }} \left( {u = 1,2,3,4,5;v = 1,2,3,4} \right)\) calculated by Eq. (29) are displayed in Table 16.

Table 16 The relative distance of \(f_{{M_{\chi v} }}\)

Step 10: classify three types of concordance/discordance sets and the indifference set.

$$ C^{s} = \left[ {CC_{\rho \varepsilon }^{S} } \right]_{5 \times 5} = \left[ {\begin{array}{*{20}c} - & {\left\{ {2,3,4} \right\}} & {\left\{ 4 \right\}} & {\left\{ {2,3,4} \right\}} & {\left\{ {2,4} \right\}} \\ {\left\{ 1 \right\}} & - & - & {\left\{ {3,4} \right\}} & {\left\{ 4 \right\}} \\ {\left\{ {1,2,3} \right\}} & {\left\{ {1,2,3,4} \right\}} & - & {\left\{ {1,2,3,4} \right\}} & {\left\{ {1,2,3,4} \right\}} \\ {\left\{ 1 \right\}} & {\left\{ {1,2} \right\}} & - & - & - \\ {\left\{ {1,3} \right\}} & {\left\{ {1,2,3} \right\}} & - & {\left\{ {1,2,3,4} \right\}} & - \\ \end{array} } \right]; $$
$$ D^{s} = \left[ {DC_{\rho \varepsilon }^{S} } \right]_{5 \times 5} = \left[ {\begin{array}{*{20}c} - & {\left\{ 1 \right\}} & {\left\{ {1,2,3} \right\}} & {\left\{ 1 \right\}} & {\left\{ {1,3} \right\}} \\ {\left\{ {2,3,4} \right\}} & - & {\left\{ {1,2,3,4} \right\}} & {\left\{ {1,2} \right\}} & {\left\{ {1,2,3} \right\}} \\ {\left\{ 4 \right\}} & - & - & - & - \\ {\left\{ {2,3,4} \right\}} & {\left\{ {3,4} \right\}} & {\left\{ {1,2,3,4} \right\}} & - & {\left\{ {1,2,3,4} \right\}} \\ {\left\{ {2,4} \right\}} & {\left\{ 4 \right\}} & {\left\{ {1,2,3,4} \right\}} & - & - \\ \end{array} } \right]. $$

Step 11: calculate the dominance degree between any two technologies. With α = β = 0.88 and λ = 1 in Eq. (37), the dominance degree between Tcρ and Tcε can be determined.
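
Eq. (37) is not reproduced here; assuming it rests on the standard prospect-theory value function (consistent with α = β = 0.88 and with λ governing the loss attitude), a minimal sketch of the gain/loss evaluation underlying the dominance degree is:

```python
def value(x, alpha=0.88, beta=0.88, lam=1.0):
    """Prospect-theory value function assumed to underlie Eq. (37): gains are
    damped by alpha, losses are damped by beta and amplified by lam."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# Hypothetical weighted gap between two technologies under one criterion
gap = 0.15
print(round(value(gap), 4), round(value(-gap), 4))    # lam = 1: the loss mirrors the gain
print(round(value(-gap, lam=2.25), 4))                # a loss-sensitive DM feels it 2.25 times
```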

Step 12: set the position weights of the strong, medium and weak concordance/discordance sets and the indifference set as \(\omega = \left( {\omega_{c}^{s} ,\omega_{c}^{m} ,\omega_{c}^{w} ,\omega^{ = } ,\omega_{d}^{s} ,\omega_{d}^{m} ,\omega_{d}^{w} } \right) = \left( {1,0.9,0.8,1,0.9,0.8,0.7} \right)\). Then calculate the linguistic Z-number concordance index (LZCI) by Eq. (38) and build the concordance matrix \(C_{LZ} = \left[ {CI_{\chi \gamma } } \right]_{5 \times 5}\). Correspondingly, set up the discordance matrix \(D_{LZ} = \left[ {DI_{\chi \gamma } } \right]_{5 \times 5}\).

$$ C_{LZ} = \left[ {CI_{\chi \gamma } } \right]_{5 \times 5} = \left[ {\begin{array}{*{20}c} - & {0.0724} & {0.0100} & {0.1170} & {0.3483} \\ {0.0070} & - & 0 & {0.0640} & {0.0128} \\ {0.1162} & {0.1709} & - & {0.2096} & {0.1362} \\ {0.0113} & {0.0210} & 0 & - & 0 \\ {0.0357} & {0.0552} & 0 & {0.0879} & - \\ \end{array} } \right]; $$
$$ D_{LZ} = \left[ {DI_{\chi \gamma } } \right]_{5 \times 5} = \left[ {\begin{array}{*{20}c} - & {0.0418} & {0.4267} & {0.0359} & {0.1685} \\ {0.4310} & - & {0.6771} & {0.1206} & {0.5167} \\ {0.0907} & 0 & - & 0 & 0 \\ {0.3712} & {0.3671} & {0.7469} & - & {0.4733} \\ {0.2786} & {0.2201} & {0.6314} & 0 & - \\ \end{array} } \right]. $$

Step 13: compute the concordance threshold \(\overline{C}_{lz} = 0.0738\) and construct the concordant Boolean matrix:

$$ B_{LZ}^{C} =\left[ {\begin{array}{*{20}c} - &\quad 0 &\quad 0 &\quad 1 &\quad 1 \\ 0 &\quad - &\quad 0 &\quad 0 &\quad 0 \\ 1 &\quad 1 &\quad - &\quad 1 &\quad 1 \\ 0 &\quad 0 &\quad 0 &\quad - &\quad 0 \\ 0 &\quad 0 &\quad 0 &\quad 1 &\quad - \\ \end{array} } \right]. $$

Similarly, compute the discordance threshold \(\overline{D}_{lz} = 0.2799\) and construct the discordant Boolean matrix:

$$ B_{LZ}^{D} =\left[ {\begin{array}{*{20}c} - &\quad 1 &\quad 0 &\quad 1 &\quad 1 \\ 0 &\quad - &\quad 0 &\quad 1 &\quad 0 \\ 1 &\quad 1 &\quad - &\quad 1 &\quad 1 \\ 0 &\quad 0 &\quad 0 &\quad - &\quad 0 \\ 1 &\quad 1 &\quad 0 &\quad 1 &\quad - \\ \end{array} } \right]. $$

Step 14: obtain the synthesized matrix by element-wise multiplication of \(B_{LZ}^{C}\) and \(B_{LZ}^{D}\).

$$ B_{LZ} =\left[ {\begin{array}{*{20}c} - &\quad 0 &\quad 0 &\quad 1 &\quad 1 \\ 0 &\quad - &\quad 0 &\quad 0 &\quad 0 \\ 1 &\quad 1 &\quad - &\quad 1 &\quad 1 \\ 0 &\quad 0 &\quad 0 &\quad - &\quad 0 \\ 0 &\quad 0 &\quad 0 &\quad 1 &\quad - \\ \end{array} } \right]. $$
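
Steps 13 and 14 can be checked directly from the matrices reported above. The sketch below infers the thresholds as the means of the off-diagonal concordance and discordance indices (which reproduces 0.0738 and 0.2799), builds the Boolean matrices, and forms their element-wise product.

```python
import numpy as np

# Concordance and discordance indices from Step 12 (diagonal marked as NaN)
C = np.array([[np.nan, 0.0724, 0.0100, 0.1170, 0.3483],
              [0.0070, np.nan, 0.0,    0.0640, 0.0128],
              [0.1162, 0.1709, np.nan, 0.2096, 0.1362],
              [0.0113, 0.0210, 0.0,    np.nan, 0.0],
              [0.0357, 0.0552, 0.0,    0.0879, np.nan]])
D = np.array([[np.nan, 0.0418, 0.4267, 0.0359, 0.1685],
              [0.4310, np.nan, 0.6771, 0.1206, 0.5167],
              [0.0907, 0.0,    np.nan, 0.0,    0.0],
              [0.3712, 0.3671, 0.7469, np.nan, 0.4733],
              [0.2786, 0.2201, 0.6314, 0.0,    np.nan]])

c_bar, d_bar = np.nanmean(C), np.nanmean(D)   # 0.0738 and 0.2799
C0 = np.nan_to_num(C, nan=0.0)                # diagonal placeholders that never pass
D0 = np.nan_to_num(D, nan=d_bar + 1.0)
Bc = (C0 >= c_bar).astype(int)                # concordance at least the threshold
Bd = (D0 <= d_bar).astype(int)                # discordance at most the threshold
B = Bc * Bd                                   # Step 14: element-wise product
print(round(float(c_bar), 4), round(float(d_bar), 4))
print(B)
```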

Step 15: depict the strong outranking graph as Fig. 5.

Fig. 5 The strong outranking graph

It can be noticed from the synthesized matrix \(B_{LZ}\) that the elements at symmetric positions are not completely complementary. It is therefore necessary to mark the asymmetric elements and make amendments.

$$ B_{LZ} =\left[ {\begin{array}{*{20}c} - & {\widehat{0}} & \quad 0 &\quad 1 &\quad 1 \\ {\widehat{0}} & \quad - & \quad 0 &\quad {\widehat{0}} &\quad {\widehat{0}} \\ 1 &\quad 1 & \quad - &\quad 1 &\quad 1 \\ 0 &\quad {\widehat{0}} &\quad 0 &\quad - &\quad 0 \\ 0 &\quad {\widehat{0}} &\quad 0 &\quad 1 &\quad - \\ \end{array} } \right]. $$

Then \(B_{LZ}^{C}\) and \(B_{LZ}^{D}\) can be revised as follows:

$$ B_{LZ}^{C} =\left[ {\begin{array}{*{20}c} - &\quad {\widehat{1}} &\quad 0 &\quad 1 &\quad 1 \\ 0 &\quad - &\quad 0 &\quad {\widehat{1}} &\quad 0 \\ 1 &\quad 1 &\quad - &\quad 1 &\quad 1 \\ 0 &\quad 0 &\quad 0 &\quad - &\quad 0 \\ 0 &\quad {\widehat{1}} &\quad 0 &\quad 1 &\quad - \\ \end{array} } \right],\quad B_{LZ}^{D} =\left[ {\begin{array}{*{20}c} - &\quad 1 &\quad 0 &\quad 1 &\quad 1 \\ 0 &\quad - &\quad 0 &\quad 1 &\quad 0 \\ 1 &\quad 1 & \quad - &\quad 1 &\quad 1 \\ 0 &\quad 0 & \quad 0 & \quad - & \quad 0 \\ 1 &\quad 1 &\quad 0 &\quad 1 &\quad - \\ \end{array} } \right]. $$

Reconstruct the synthesized matrix based on the revised \(B_{LZ}^{C}\) and \(B_{LZ}^{D}\) as:

$$ \widetilde{B}_{LZ} =\left[ {\begin{array}{*{20}c} - &\quad {\widehat{1}} &\quad 0 &\quad 1 &\quad 1 \\ 0 & \quad - & \quad 0 &\quad {\widehat{1}} &\quad 0 \\ 1 &\quad 1 &\quad - &\quad 1 &\quad 1 \\ 0 &\quad 0 &\quad 0 &\quad - &\quad 0 \\ 0 &\quad {\widehat{1}} &\quad 0 &\quad 1 &\quad - \\ \end{array} } \right]. $$
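
As a small aid to this revision, the sketch below locates the pairs of alternatives left unordered by the strong outranking relation, i.e. the positions marked with hats above; the subsequent relaxation to weak outranking follows the procedure described in the text.

```python
import numpy as np

# Synthesized matrix B_LZ from Step 14 (diagonal entries are irrelevant here)
B = np.array([[0, 0, 0, 1, 1],
              [0, 0, 0, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 1, 0]])

# A pair (i, j) is non-complementary when neither B[i, j] nor B[j, i] equals 1,
# i.e. the strong outranking relation leaves the two technologies unordered.
unordered = [(i + 1, j + 1) for i in range(5) for j in range(i + 1, 5)
             if B[i, j] == 0 and B[j, i] == 0]
print(unordered)   # [(1, 2), (2, 4), (2, 5)] -- exactly the hatted positions above
```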

The outranking graph is redrawn in Fig. 6, where the colors of the arrows are used to distinguish the intensity of the outranking relationships. The added arrows represent weak preference relationships between alternatives.

Fig. 6 The ultimate outranking graph

Step 16: rank the five technologies based on the revised outranking graph and the result is:

$$ Tc_{3} \succ Tc_{1} \succ Tc_{5} \succ Tc_{2} \succ Tc_{4} . $$

Thus the third technology (Triple tank pretreatment + High temperature flue bypass evaporation) is the most desirable option.

Analysis and discussion

Sensitivity analysis and discussion

As mentioned in section “Obtain the ranking results using the generalized TODIM-ELECTRE II method”, the parameter λ in Eq. (37) measures the attitude of DMs in the face of losses, which may vary as the decision-making body changes. When λ > 1, the DM’s perception of damage and defect is intensified, which suggests a loss-sensitive individual who is more reluctant to take a chance. Conversely, a smaller λ indicates the DM is less susceptible to losses, which may mean that he is more concerned with other attributes such as quality and cost. Consequently, for two alternatives with different performance under the same criterion, DMs with different λ values judge the dominance degree between them differently, and the discordance index is affected accordingly. Taking different values for λ, namely λ = 0.8, λ = 2.25, λ = 3 and λ = 3.75, the changes in the discordance index between the five technologies can be observed.
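
Assuming the prospect-theory value function sketched after Step 11, a short loop illustrates why a larger λ inflates every loss-side dominance degree and, in turn, the discordance indices (the gap value is hypothetical):

```python
def value(x, alpha=0.88, beta=0.88, lam=1.0):
    """Prospect-theory value function (cf. the sketch after Step 11)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# Perceived value of a fixed hypothetical loss as lambda grows
for lam in (0.8, 1.0, 2.25, 3.0, 3.75):
    print(lam, round(value(-0.15, lam=lam), 4))
# The perceived loss scales linearly with lambda, so the discordance indices built
# from these dominance degrees grow with it, consistent with Fig. 7.
```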

In Fig. 7, the x-axis denotes the five alternatives under different values of λ, and the y-axis indicates the sum of the discordance indices. Rectangles of different colors represent the discordance indices between the fixed alternative and the other alternatives. It can be seen that as the value of λ increases, the areas of the colored rectangles grow accordingly, which means that the discordance indices increase. This trend is easier to understand in light of the actual situation: a greater λ indicates that losses have a deeper impact on the decision body; that is, for two given alternatives, although the gap between them is objective, the “inferior” one gives a loss-sensitive DM a worse impression. In this case, the dominance degree has a greater absolute value and the discordance index is higher.

Fig. 7 The discordance index under different λ

It should also be noted that when the value of λ varies from 0.8 to 3, the discordance relationship between the alternatives does not change with the increase of the discordance indices and remains \(dTc_{3} \prec dTc_{1} \prec dTc_{5} \prec dTc_{2} \prec dTc_{4}\), where \(dTc_{i}\) denotes the discordance index of \(Tc_{i}\). This verifies, to a certain extent, the robustness of the decision-making framework established in this paper. In addition, when the value of λ increases to 3.75, there is a subtle alteration in this relationship: the order of Tc1 and Tc5 is swapped. It can be seen that the decision subject exerts influence on the priority relationship of the alternatives, and the DM’s attitude towards losses may change the final ranking result. However, regardless of the parameter’s value, the best choice in the present example is still Tc3, which further verifies the robustness of the method.

Feasibility analysis and discussion

Considering the LZN environment and with reference to the TOPSIS method [47], two existing methods are used to recalculate this example. One is the MCGDM method proposed by Peng and Wang [48]; the other is the traditional TOPSIS method.

As can be seen from Table 17, the orders acquired by the above two approaches are roughly the same as that of the method proposed in this article. In particular, for the method of Peng and Wang, the ranking results by the Q and U values are consistent with that of our method. The above results verify that our method is effective and feasible.

Table 17 Result of validity test

It is worth mentioning that TOPSIS’s neglect of the relative importance of the distances to the ideal solutions can frequently lead to inaccurate results. Moreover, the method in [45] is an incorporation of the power aggregation (PA) operators and VIKOR. Compared to the PA operator, DEST is more effective in processing uncertainty originating from unknown or incomplete information. In addition, the method in [48] is essentially the traditional VIKOR method, which overlooks the relationship between every alternative and the negative ideal solution [28]. Finally, the method framework in [48] is only suitable for the case where the expert weights and attribute weights are partially known, while the decision framework developed in this article is applicable to MCGDM problems with completely unknown weight information.

Superiority recognition and discussion

In this part, five other comparable methods are employed for the selection problem of terminal wastewater solidification technology: the traditional ELECTRE II method, the generalized TODIM method, the DS-VIKOR method proposed in [2], the ELECTRE-Based method in [3] and the PDHL-VIKOR method developed by Gou et al. [28]. Among them, owing to the difference in linguistic environment, only the improved VIKOR part of the method proposed in [28] is used for comparison; hence we obtain the linguistic Z-number compromise measure (LZCi) rather than the probabilistic double hierarchy linguistic compromise measure (PDHLCi). The superiority of the newly proposed method is further demonstrated in this subsection.

The ranking results calculated by the four methods in Table 18 differ from that of the introduced framework. However, the best solution is always Tc3, and except for the ELECTRE II method and the improved method in [28], the worst option always points to Tc4. The above results further support the effectiveness of the method in this article.

Table 18 Ranking results of the four methods

The reason why the ranking result obtained by the ELECTRE II method differs from that of the method proposed in this article is that the concordance and discordance indices obtained by the two methods have different values. As a result, the corresponding concordance and discordance matrices are different, so the comprehensive performance of Tc2 and Tc4 is distinct in the respective synthesized matrices. As can be seen from the concordance and discordance sets in Step 10 of section “Case description”, Tc2 outperforms Tc4 under Cr3 and Cr4, whereas Tc4 outperforms Tc2 under Cr1 and Cr2. The calculation of the concordance and discordance indices of the ELECTRE II method only considers the criteria weights; under these circumstances, since the weights of Cr1 and Cr2 are greater than the weights of Cr3 and Cr4, it seems reasonable to think that Tc4 precedes Tc2. But is this really the case? It can be found that, when the generalized TODIM method is employed, the performance of Tc2 is significantly better than that of Tc4. Obviously, the bounded rationality of the DM is an important factor that affects the results of MCDM problems. This is the reason for the different results between the method in [28] and the method in this article, and it is also one of the reasons why the ELECTRE-Based method proposed by Fei et al. [3] cannot distinguish the priority relationship between Tc2 and Tc4. In the process of determining the linguistic Z-number concordance and discordance matrices, the method proposed in this article not only covers the criteria weight information but also takes into account the impact of the gaps between the alternatives on the decision-making of loss-averse DMs. Compared with the improved VIKOR in [28], the newly proposed method also considers the relationship between the alternatives and the negative ideal solution. Therefore, it can overcome the flaw of the traditional VIKOR and achieve the goal of [28] to improve the VIKOR method. In addition, the construction of the concordance and discordance indices effectively avoids the compensation effect between the criteria, and the consideration of the DMs’ psychology makes the results more convincing.

Furthermore, the ELECTRE-Based method in [3] can merely infer that Tc3 is preferred to Tc2 and Tc4, and that Tc5 is preferred to Tc4. However, the relationships among Tc1, Tc2 and Tc4 cannot be determined, which reveals that the method in [3] can only produce a partial ranking and cannot obtain the priority relationship among all the alternatives. In contrast, a total order can be obtained using our method, which clearly identifies the gaps between all solutions.

Compared with the result of the newly proposed framework, the preference relation between Tc1 and Tc5 obtained by the ELECTRE II, DS-VIKOR and ELECTRE-Based methods is different. In section “Sensitivity analysis and discussion”, it has already been shown that when λ increases to a certain value, the positions of Tc1 and Tc5 are exchanged. Therefore, this sequence can be regarded as a special case in which the DM is more sensitive to losses. One of the advantages of the generalized TODIM-ELECTRE II method thus emerges: it fully considers the loss-aversion behavior of DMs, and the degree to which one alternative is dominated by another is also reflected. Furthermore, the incorporation of the ELECTRE II method allows the compensatory effects among criteria to be handled with precision. The generalized TODIM method alone neglects this point, which inevitably leads to unreliable results and may explain why the positions of Tc2 and Tc5 are swapped.

Conclusion

In this paper, a new integrated framework suitable for MCGDM problems with unknown weight information in the LZN environment is established. In this framework, the original information in the form of LZNs is first transformed into basic probability assignments (BPAs) in DEST, which not only fully considers the inner and outer reliability of the evidence through the discounting procedure but also copes with uncertain information during the fusion of multi-source information. Next, to reduce the uncertainty arising from subjective judgment, Deng entropy is employed to acquire the criteria weights. Afterwards, the generalized TODIM-ELECTRE II method is put forward to obtain the full priority order of the alternatives, which can handle the compensation problems between criteria while taking the loss-aversion behavior of DMs into account. The feasibility of this method is demonstrated by the solution to a terminal wastewater solidification technology selection problem. Finally, a sensitivity analysis is conducted to further verify the effectiveness and robustness of the method, and its superiority is also illustrated by a series of comparative analyses.