A generalized TODIM-ELECTRE II method based on linguistic Z-numbers and Dempster–Shafer evidence theory with unknown weight information

How to effectively reflect the randomness and reliability of decision information under uncertain circumstances, and thereby improve the accuracy of decision-making in complex decision scenarios, has become a crucial topic in the field of uncertain decision-making. In this article, the loss-aversion behavior of decision-makers and the non-compensation between attributes are considered. Furthermore, a novel generalized TODIM-ELECTRE II method under the linguistic Z-numbers environment is proposed based on Dempster–Shafer evidence theory for multi-criteria group decision-making problems with unknown weight information. Firstly, the evaluation information and its reliability are provided simultaneously by employing linguistic Z-numbers, which have the ability to capture the arbitrariness and vagueness of natural verbal information. Then, the evaluation information is used to derive basic probability assignments in Dempster–Shafer evidence theory, and, with consideration of both inner and outer reliability, this article employs Dempster's rule to fuse the evaluations. Subsequently, a generalized TODIM-ELECTRE II method is conceived under the linguistic Z-numbers environment, which considers both non-compensatory effects between attributes and the bounded rationality of decision-makers. In addition, criteria weights are obtained by applying Deng entropy, which has the ability to deal with uncertainty. Finally, an example of terminal wastewater solidification technology selection is offered to demonstrate this framework's applicability and robustness. Its superiority is also verified by a comparative analysis with several existing methods.


Introduction
Multi-criteria group decision-making (MCGDM) has always been a hotspot due to its complexity and universality in daily life and business management, with applications such as supplier selection [1][2][3][4], hospital management [5] and site selection of new energy power stations [6]. An MCGDM problem requires multiple decision-makers (DMs) to evaluate a series of alternatives from distinct perspectives and eventually determine the best one. However, classic fuzzy sets are inadequate to describe the dependability of evaluation information. To describe uncertain, imprecise and incomplete information more accurately, the notion of the Z-number was introduced [7]. A Z-number is an ordered pair of fuzzy numbers written as Z = (A, B). The constraint part A and the reliability measure B endow Z-numbers with the capacity to reflect the constraint and reliability of information simultaneously, making them superior to traditional fuzzy sets and widely applicable to natural human expression. For example, the sentence "The salt content of seawater reverse osmosis influent is usually 8 g/L~50 g/L." can be represented by a Z-number as X is Z = (8 g/L~50 g/L, usually), where X indicates the water salinity of seawater reverse osmosis influent. The existing literature mainly studies Z-numbers in two categories. The first category can be classified as fundamental research, including arithmetic operations over Z-numbers [8][9][10], comparison and measurement methods [11][12][13][14], converting methods [15] and so on. The second is the employment of Z-numbers in the decision-making process, such as medical diagnosis [16] and failure mode identification and sequencing [17]. Also, the two elements of a Z-number are most often expressed as trapezoidal or triangular fuzzy numbers [16,18], or a mix of both [17].
Linguistic Z-numbers (LZNs) are a subclass of Z-numbers whose two components are expressed as linguistic terms. The fuzziness and randomness of LZNs correspond precisely to the restraint and reliability of Z-numbers, respectively [19]. This extension caters more to human habits and allows qualitative information to be described more accurately, which helps reduce information distortion and increase the flexibility and credibility of decision-making. The fuzzy restriction of an LZN may be represented by linguistic terms like "a little bad", "good", "very good", while the reliability can be measured by linguistic terms like "uncertain", "relatively certain", "very certain" or "seldom", "often" and "usually". LZNs are practical tools for expressing most decision-making information. For example, an expert can use the LZN (very good, certain) to evaluate the effect of evaporative crystallization technology on the solidification of terminal wastewater. Owing to their universality and applicability, LZNs have a wide range of applications. For example, Song et al. [20] designed a novel quality function deployment framework employing LZNs. Jiang et al. [5] developed an extended decision-making trial and evaluation laboratory method under the LZNs environment as a large-group evaluation approach.
However, determining the membership functions linked with linguistic terms is not a simple matter, and aggregating linguistic terms is also a complex undertaking. Since constraint and reliability fall into distinct categories, directly converting LZNs into classical fuzzy numbers causes distortion and loss of the initial information, although this is a common approach for aggregating evaluations in Z-number form [17,[21][22][23]. As a generalization of probability theory and an extension of the traditional Bayesian reasoning approach, Dempster-Shafer evidence theory (DEST) has proved highly effective in processing uncertainty resulting from unknown or incomplete information [24,25]. Its core, Dempster's combination rule, can effectively fuse evaluation information from multiple individuals. Li and Chen (2018) provided an approach to transform fuzzy information into BPAs and adopted DEST to integrate the evaluation opinions of multiple experts [26]. Liu and Zhang (2019) developed a hesitant fuzzy linguistic fusion arithmetic based on DEST and suggested a novel multiple attribute decision making (MADM) method [4]. Ren et al. (2020) built a bridge between Z-numbers and DEST, which formed the foundation for aggregating Z-evaluations [21]. Nevertheless, there has been no research on how to apply DEST to the effective integration of LZNs. Thus, this paper employs LZNs to express the initial evaluation information. The evaluations in the form of LZNs are then converted into BPAs by extending the method in [27] to the LZNs environment. To be more specific, the membership function is substituted with the utility function, and the reliability is measured using a linguistic scale function.
There are many traditional multicriteria decision making (MCDM) methods, such as VIKOR, grey relational analysis, etc. On this basis, many scholars have improved these methods or developed new ones to better solve MCDM problems. For example, Gou et al. (2020) corrected the traditional VIKOR method's neglect of the relationship between the alternatives and the negative ideal solution, and used the improved VIKOR method in a probabilistic double hierarchy linguistic context [28]. Jiang et al. (2020) extended the decision-making trial and evaluation laboratory (DEMATEL) method by incorporating LZNs, to make up for the shortcomings of existing DEMATEL methods in expressing the reliability of DMs' cognition [5]. Geetha et al. (2021) provided a hesitant Pythagorean fuzzy (HPF) ELECTRE III method which extended the ELECTRE III method to the HPF environment [29]. Lin et al. (2021) proposed two methods based on a new score function of probabilistic linguistic term sets (PLTSs), named TOPSIS-ScoreC-PLTS and VIKOR-ScoreC-PLTS, respectively [30]. As one class among the multitude of MCDM methods, the outranking methods perform best in decision support at the strategic level. The outranking methods have an indisputable advantage in leveraging incomplete information and coping with uncertain or fuzzy information. Among them, the ELECTRE method is the most extensively used owing to its prominent ability to avoid the compensation effect between attributes while fully considering incomparability and indifference. In other words, a significantly weaker performance value of an alternative cannot be directly compensated by its other good attribute values. The ELECTRE method has so far developed into multiple versions suitable for different types of problems. Among them, ELECTRE II is more concerned with the degree to which one alternative outranks another, which serves the purpose of settling sorting problems [31].
The ELECTRE II method can be applied in numerous fuzzy circumstances. Another point worth noting is that the existing MCDM or MCGDM methods, including the ELECTRE II method, typically take DMs as risk-neutral, even though this is often not the case. DMs usually exhibit reference dependence and loss-aversion psychology during the decision-making process, which means DMs are more sensitive to losses than to profits. Inspired by prospect theory, the TODIM method was introduced with the strength of considering the psychological characteristics of DMs [35] and has been extended to various information contexts. For instance, Krohling and de Souza (2012) proposed the fuzzy TODIM [36], which was then generalized to uncertain and random environments successively [37,38]. In addition, how to reasonably combine the TODIM method with other methods to give decision-making higher reliability and validity is also a problem that deserves research. Passos et al. (2014) proposed the TODIM-FSE method by adopting some stages of Fuzzy Synthetic Evaluation (FSE) and of TODIM respectively, which made it possible to consider the prevalent inaccuracies in human judgment when constructing the contribution function [39]. Zhang et al. (2017) developed the SMAA-TODIM method to process the intrinsic indeterminacy of TODIM or TODIM-based models [40]. The authors of [41] extended TODIM to a fuzzy context and combined it with the PROMETHEE-II method to settle the compensation problem during the ranking process. Llamazares (2018) pointed out two types of paradoxes in the traditional TODIM method, and thus the generalized TODIM method was introduced [42]. This generalized version can effectively avoid these two paradoxes and has been gradually expanded [43]. Therefore, in this article the generalized TODIM is integrated into ELECTRE II so that the influence of DMs' bounded rationality on the ranking results can be subtly considered.
After the foregoing analysis, a decision-making framework is constructed for the ranking and selection problem. The main work of this article can be outlined as follows:

1. An LZN is an appropriate and remarkable tool to express assessments. On the one hand, it embraces both fuzzy evaluation and reliability information. On the other hand, the introduction of linguistic term sets lets it retain more of the original evaluation information and reduce information distortion compared to a plain Z-number. Therefore, LZNs are employed in this article to describe the original evaluation values.

2. DEST is extended to the LZNs context, which not only reduces the distortion of evaluation information but also better handles the fusion of multi-source information under uncertain conditions. Moreover, a novel method for determining discounting coefficients is established to modify the BPAs, which considers both the inner and outer credibility of the evidence.

3. A novel decision framework for MCGDM problems with LZNs and unknown weight information is established based on DEST and Deng entropy. Within this framework, the alternatives are ranked according to the calculation results of the generalized TODIM-ELECTRE II method under the LZNs environment. The novel method can deal with non-compensatory issues of criteria while reflecting the loss-aversion psychology of DMs, hence further enhancing the persuasiveness and reliability of the results.
The remainder of this paper is organized as follows: Section "Preliminaries" reviews the relevant knowledge concerning LZNs, DEST and Deng entropy. In Section "An integrated decision-making framework based on the generalized TODIM-ELECTRE II method and DEST", the decision framework developed in this paper is introduced. A case of terminal wastewater solidification technology selection is devised in Section "An illustrative example of terminal wastewater solidification technology selection" to verify the feasibility of the above method. Section "Analysis and discussion" carries out a series of analyses, including sensitivity and comparative analyses; the robustness and superiority of the proposed method are demonstrated in this section. Finally, the conclusion comes in Section "Conclusion".

Z-number
The notion of Z-numbers was proposed by Zadeh (2011) to depict the reliability of, or the confidence in, information released by natural language statements. A Z-number appears as a 2-tuple Z = (A, B).
Typically, A and B occur in the form of words or clauses, and both are fuzzy numbers. Component A plays the role of a fuzzy restriction, R(X), on the values taken by a real-valued uncertain variable X. More specifically, R(X): X is A → Poss(X = u) = μ_A(u), where μ_A(u) is the membership function of A and u represents a generic value of X; μ_A(u) can be read as the degree to which u satisfies the constraint.
In light of the relation between Z-numbers and Z⁺-numbers, the meaning of a Z-number can be explained as in [7]. The diagram of a Z-number is shown in Fig. 1 [27], where b represents an element of the fuzzy set B.

Linguistic Z-numbers
Linguistic Z-numbers can effectively avoid information loss and distortion while accounting for uncertainty in the decision-making process. (Fig. 1 The diagram of a simple Z-number [27].)

Definition 2.1 Let X be the discourse domain, and let S_1 = {s_0, s_1, ..., s_2t} and S_2 = {s'_0, s'_1, ..., s'_2t} be two linguistic term sets, each containing a finite number of ordered, discrete terms, where t is a non-negative integer. Suppose A_φ(x) ∈ S_1 and B_ϕ(x) ∈ S_2. Then a linguistic Z-number set in X takes the form Z = {⟨x, A_φ(x), B_ϕ(x)⟩ | x ∈ X}, where A_φ(x) is the restriction on the possible values of the uncertain variable x, and B_ϕ(x) is the measure of reliability of the first component. These two components usually draw on different linguistic terms, representing different preference information.
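As a minimal illustrative sketch (not the paper's formal machinery), an LZN over two finite term sets can be represented by a pair of term indices; the term names below mirror the seven-level sets used in the later case study, but the 0-based indexing is an assumption of this sketch.

```python
from dataclasses import dataclass

# Two illustrative seven-term linguistic sets (restriction and reliability).
S1 = ["very poor", "poor", "slightly poor", "medium",
      "slightly good", "good", "very good"]           # restriction terms
S2 = ["very uncertain", "uncertain", "a little uncertain", "normal",
      "a little certain", "certain", "very certain"]  # reliability terms

@dataclass(frozen=True)
class LZN:
    a: int  # index into S1: fuzzy restriction on the evaluated variable
    b: int  # index into S2: reliability of the restriction

    def describe(self) -> str:
        return f"({S1[self.a]}, {S2[self.b]})"

# "(very good, certain)", as in the solidification-technology example.
z = LZN(a=6, b=5)
print(z.describe())
```

The pair structure makes explicit that the two components live in different term sets and are not interchangeable.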

Dempster-Shafer evidence theory
As an extension of probability theory, Dempster-Shafer evidence theory (DEST) has outstanding performance in the fusion of multi-source uncertain information. The following is a brief review of DEST.

Definition 2.2 Let Θ = {H_1, H_2, ..., H_N} be a collection of exhaustive and mutually exclusive elements; we define this set as a frame of discernment (FOD). The power set of Θ is denoted as P(Θ). Every element in P(Θ) can be called a proposition, which represents the possible values of the evaluated object.

The mass function, also called a basic probability assignment (BPA), is defined as m: P(Θ) → [0, 1] satisfying m(∅) = 0 and Σ_{A ∈ P(Θ)} m(A) = 1. If m(A) > 0, A is called a focal element. Any belief that is not assigned to a specific subset is considered "unexpressed" and assigned to the environment Θ, denoted by m(Θ).

To reflect the reliability of evidence, we introduce the concept of the discounting coefficient. A discounting coefficient α ∈ [0, 1] can be seen as the weight of a piece of evidence. Updating a BPA with a discounting coefficient yields the discounted BPA:

m^α(A) = α · m(A), for A ≠ Θ,
m^α(Θ) = α · m(Θ) + (1 − α).

The core of DEST is its combination rule, which is commonly used to integrate multi-source information.
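The discounting operation above can be sketched directly: each mass on a proper subset is scaled by α and the remainder 1 − α is shifted to the whole frame (frame and masses below are illustrative).

```python
# Shafer-style discounting of a BPA keyed by frozensets.
THETA = frozenset({"H1", "H2", "H3"})

def discount(bpa: dict, alpha: float, theta=THETA) -> dict:
    # Scale every proper-subset mass by alpha; move the rest to theta.
    out = {A: alpha * v for A, v in bpa.items() if A != theta}
    out[theta] = alpha * bpa.get(theta, 0.0) + (1.0 - alpha)
    return out

m = {frozenset({"H1"}): 0.6, frozenset({"H2"}): 0.3, THETA: 0.1}
md = discount(m, 0.8)
print(md[frozenset({"H1"})])  # 0.6 * 0.8 = 0.48
```

Note the discounted masses still sum to 1, so the result remains a valid BPA.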

Definition 2.3
For two bodies of evidence m_1 and m_2, the combination m = m_1 ⊕ m_2 is defined by Dempster's rule as follows:

m(A) = (1 / (1 − K)) Σ_{B ∩ C = A} m_1(B) m_2(C), A ≠ ∅,

where B and C are both focal elements. In the formula, K is called the conflict coefficient and K = Σ_{B ∩ C = ∅} m_1(B) m_2(C).

Pignistic probability represents a point estimate in a belief interval. A BPA can be transformed into a probability distribution by the Pignistic probability function defined below:

BetP(x) = Σ_{x ∈ A} m(A) / |A|,

where A is a focal element and |·| represents the cardinality of the corresponding set.
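Dempster's rule and the Pignistic transform can be sketched as follows, assuming the two BPAs are not in total conflict (K < 1); the frame and masses are illustrative.

```python
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    # Dempster's rule over frozenset-keyed BPAs: multiply masses of every
    # pair, keep non-empty intersections, renormalize by 1 - K.
    fused, K = {}, 0.0
    for (B, vB), (C, vC) in product(m1.items(), m2.items()):
        inter = B & C
        if inter:
            fused[inter] = fused.get(inter, 0.0) + vB * vC
        else:
            K += vB * vC  # conflict coefficient
    return {A: v / (1.0 - K) for A, v in fused.items()}

def betp(m: dict) -> dict:
    # Pignistic transform: each focal element shares its mass equally
    # among its singletons.
    p = {}
    for A, v in m.items():
        for x in A:
            p[x] = p.get(x, 0.0) + v / len(A)
    return p

m1 = {frozenset({"H1"}): 0.7, frozenset({"H1", "H2"}): 0.3}
m2 = {frozenset({"H1"}): 0.5, frozenset({"H2"}): 0.5}
m12 = combine(m1, m2)
print(betp(m12))
```

Here K = 0.35, and after renormalization the fused belief concentrates on {H1}, as expected when both sources lean toward H1.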

Definition 2.4
Let m_1 and m_2 be two BPAs on the same frame of discernment Θ, which contains N mutually exclusive and exhaustive hypotheses. The distance between m_1 and m_2 can be calculated by Eq. (5):

d_BPA(m_1, m_2) = sqrt( (1/2) (m⃗_1 − m⃗_2)^T D (m⃗_1 − m⃗_2) ), (5)

where m⃗_1 and m⃗_2 are the vector representations of the corresponding BPAs m_1 and m_2, respectively, and D is a 2^N × 2^N matrix whose elements are evaluated as in Eq. (6):

D(A, B) = |A ∩ B| / |A ∪ B|, A, B ∈ P(Θ). (6)

Note that the factor 1/2 in Eq. (5) normalizes d_BPA and guarantees 0 ≤ d_BPA ≤ 1.
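Eqs. (5) and (6) describe the Jousselme distance between BPAs; a small sketch over a two-element frame (values illustrative):

```python
from itertools import combinations
from math import sqrt

def subsets(theta):
    # All non-empty subsets of the frame, in a fixed order.
    items = sorted(theta)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def jousselme(m1: dict, m2: dict, theta) -> float:
    # d = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)) with D(A,B) = |A&B| / |A|B|.
    subs = subsets(theta)
    diff = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in subs]
    acc = 0.0
    for i, A in enumerate(subs):
        for j, B in enumerate(subs):
            acc += diff[i] * diff[j] * len(A & B) / len(A | B)
    return sqrt(0.5 * acc)

theta = {"H1", "H2"}
m1 = {frozenset({"H1"}): 1.0}
m2 = {frozenset({"H2"}): 1.0}
print(jousselme(m1, m2, theta))  # 1.0: maximally distant BPAs
```

The 1/2 factor is exactly what keeps the result in [0, 1], as noted above.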

Deng entropy and the maximum Deng entropy
Deng entropy was first proposed by Deng; it measures the uncertainty of basic probability assignments (BPAs). Deng entropy is a generalization of Shannon entropy and expands the boundary of the latter. Its basic form is

E_d(m) = − Σ_A m(A) log_2 ( m(A) / (2^|A| − 1) ),

where m is the mass function defined on the frame of discernment, A is a focal element, and |A| is the cardinality of A. This formula shows that the belief m(A) is distributed over the 2^|A| − 1 possible non-empty states contained in A. Shannon entropy can be regarded as the special case of Deng entropy in which belief is only assigned to single elements, i.e., |A| = 1.
Deng entropy can be rewritten as E_d(m) = Σ_A m(A) log_2 (2^|A| − 1) − Σ_A m(A) log_2 m(A), where the second term can be interpreted as the measure of discord of the BPA among diverse focal elements. The classical maximum entropy is log N. However, Kang and Deng (2019) pointed out that if the state of the system is uncertain and ambiguous, the actual maximum entropy may be greater than the traditional maximum entropy [44]. They then gave the condition for the maximum Deng entropy and obtained its analytical solution.

Theorem 1 The maximum Deng entropy is attained when the mass function satisfies

m(F_i) = (2^|F_i| − 1) / Σ_i (2^|F_i| − 1),

where F_i is a focal element and m(F_i) is the mass function of F_i. It is not difficult to infer that the analytical solution of the maximum Deng entropy is E_max = log_2 Σ_i (2^|F_i| − 1).
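Deng entropy and its analytical maximum can be sketched as below; the singleton-only case reduces to Shannon entropy, and the maximum follows the closed form above.

```python
from math import log2

def deng_entropy(m: dict) -> float:
    # E_d(m) = -sum m(A) * log2( m(A) / (2^|A| - 1) ).
    return -sum(v * log2(v / (2 ** len(A) - 1))
                for A, v in m.items() if v > 0)

def max_deng_entropy(focal_sizes) -> float:
    # E_max = log2( sum_i (2^|F_i| - 1) ).
    return log2(sum(2 ** s - 1 for s in focal_sizes))

# Singleton-only BPA: Deng entropy equals Shannon entropy (1 bit here).
m = {frozenset({"H1"}): 0.5, frozenset({"H2"}): 0.5}
print(deng_entropy(m))  # 1.0

# Focal elements of sizes 1 and 2: E_max = log2((2^1 - 1) + (2^2 - 1)) = 2.0
print(max_deng_entropy([1, 2]))  # 2.0
```

One can check that putting masses 1/4 and 3/4 on the size-1 and size-2 focal elements (the theorem's condition) indeed yields Deng entropy 2.0.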

An integrated decision-making framework based on the generalized TODIM-ELECTRE II method and DEST
This section introduces the novel decision-making framework constructed to solve ranking and selection problems. Figure 2 is the flowchart of the proposed method, which displays the procedure of the decision with evaluation information in the form of LZNs. First, we convert the LZNs into BPAs. Then the discounted BPAs are determined by identifying the credibility of the evidence, and the decision matrix is acquired by fusing the evaluations from all experts. Further, Deng entropy is employed to obtain the criteria weights. Finally, the generalized TODIM-ELECTRE II method is presented by combining the generalized TODIM method and ELECTRE II; the newly proposed method inherits the advantages of both methods and thus can provide a more credible ranking result.

Statement of a MCGDM problem in LZNs environment
Suppose there is a problem for prioritization purposes, which contains m alternatives {Aa_1, Aa_2, ..., Aa_m}, n evaluation criteria {C_1, C_2, ..., C_n}, and k experts who independently assess the performance of each alternative under each criterion. The criteria weights and expert weights are completely unknown. The evaluation information is given by the experts in the form of LZNs. Let h^t_uv represent the evaluation for alternative Aa_u under attribute C_v provided by the t-th expert.

Generate the basic probability assignment based on linguistic Z-numbers
LZNs conform more closely to human expression habits and have a strong ability to retain the original information. To deal with the uncertainty in the assessments and to facilitate subsequent information integration, the frame of discernment in DEST is used to represent the fuzzy restriction on the variable value and the reliability measure in LZNs.
The reliability part B^t_ϕuv can be quantified by the linguistic scale function L, which is manifested in Eq. (10). In a recent study, Ren et al. (2020) replaced the membership function μ(x) with the utility function u(φ_j) and used the Shapley value method to revise the utility values of non-independent evaluation grades [27].

Definition 3.1 [26]. A fuzzy measure u is a set function on the set φ = {φ_−g, ..., φ_−1, φ_0, φ_1, ..., φ_g}, u: 2^φ → [0, 1], satisfying the usual boundary and monotonicity conditions; u({φ_j}) indicates the utility or the weight of element φ_j. The Shapley index for each φ_j ∈ φ is defined as

Sh_j(u) = Σ_{T ⊆ φ\{φ_j}} [ (|φ| − |T| − 1)! |T|! / |φ|! ] ( u(T ∪ {φ_j}) − u(T) ).

The Shapley value is the average marginal contribution of an element over all possible coalitions, and can be used to determine a reasonable marginal contribution ρ(φ_j) for every distinct element φ_j. Combining the related concepts of Z-numbers, the probability distribution can be extended to BPAs, so that we can derive the evidence m_{h^t_uv}(φ_j), the belief committed to grade φ_j by the evaluation h^t_uv:

m_{h^t_uv}(φ_j) = L(B^t_ϕuv) · ρ(φ_j),

where L(B^t_ϕuv) represents the score of the corresponding reliability part and ρ(φ_j) is the revised utility value of the evaluation grade.

The remaining belief is assigned to the FOD Θ: m_{h^t_uv}(Θ) = 1 − Σ_j m_{h^t_uv}(φ_j). Since there are multiple evaluations for each alternative under the same criterion, the BPAs must be normalized according to Eq. (14).
For simplicity, we still use m to denote the normalized BPAs hereafter.
The rationale of this step is that the performance of each alternative on a given dimension is objective, while the evaluation is relatively subjective. When a decision-maker is proficient in a certain field, he will attach a higher confidence level to his own evaluation, and the corresponding evaluation is more reliable. Conversely, the lower a decision-maker's confidence in an evaluation, the lower the credibility of that opinion. In the extreme case where a decision-maker has no confidence at all in his own evaluation, the comment may be left out of account altogether.
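A heavily hedged sketch of the LZN-to-BPA step above: the belief committed to the stated grade is the reliability score times the revised utility, with the remainder assigned to the whole frame. Both the reliability score and the utility value below are illustrative placeholders, not the paper's calibrated linguistic scale function L or revised utility ρ.

```python
def lzn_to_bpa(grade: int, reliability_score: float, utility: float,
               theta: frozenset) -> dict:
    # Committed belief = reliability * (revised) utility; the rest goes to
    # the frame of discernment, expressing "unexpressed" belief.
    committed = reliability_score * utility
    return {frozenset({grade}): committed, theta: 1.0 - committed}

theta = frozenset(range(-3, 4))  # grades -3 .. 3, as in the case study
m = lzn_to_bpa(grade=2, reliability_score=0.9, utility=0.8, theta=theta)
print(m[frozenset({2})])  # 0.9 * 0.8 committed to grade 2
```

Lower reliability shifts more mass to Θ, which is exactly the behavior the rationale paragraph describes: an unconfident evaluation commits little belief to its stated grade.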

Determine the decision matrix and weights of criteria
The entropy weight method takes the degree of dispersion of an index as the basis for determining its weight. As an objective weighting method, it has high credibility and accuracy. Moreover, it reflects the distinguishing ability of indicators and is simple to compute.
Deng entropy is an effective way to measure the uncertainty degree of BPAs [45]. Therefore, this article calculates the evidence's inner reliability and the criteria weights based on Deng entropy, to avoid interference arising from subjective factors. The determination of BPAs' outer reliability is based on the conflict relationship. The following are the specific steps for obtaining the final decision matrix and the weights of the criteria.
Step 1: for every piece of evidence provided by each expert, calculate the Deng entropy of its BPA according to Eqs. (7)-(9). Then the inner reliability is determined by Eq. (15).

Step 2: measure the outer reliability of the evidence. As proposed in a previous study [3], the outer reliability of a mass function depends mainly upon the conflict relationship between the aggregated BPAs. Referring to the multi-source data integration framework [46], the outer reliability of a BPA is related to three metrics: the weight or strength of the credibility source S, the support for the proposed solution A from the data derived from S, and the compatibility of A with A_S, where A_S denotes the value proposed by source S. We therefore introduce the concepts of support degree and credibility degree for mass functions.

Definition 3.2 Let Q = {m_1, m_2, ..., m_q} be q independent sources of evidence. The similarity degree of m_ρ and m_ε is defined as

Sim(m_ρ, m_ε) = 1 − d(m_ρ, m_ε),

where d(m_ρ, m_ε) is the distance measure between m_ρ and m_ε.
Since the distance measure evaluates the difference between BPAs, a greater distance between m_ρ and the evidence from other sources means a lower compatibility between them, and thus the support of that evidence for m_ρ will be lower. The support degree of a BPA is defined as follows.

Definition 3.3 Let Q = {m_1, m_2, ..., m_q} be q independent sources of evidence. The support degree of m_ρ is defined as

Sup(m_ρ) = Σ_{ε = 1, ε ≠ ρ}^{q} Sim(m_ρ, m_ε).

The support degree of the other evidence for m_ρ indicates its credibility, so we can define the credibility degree of a mass function as follows.

Definition 3.4
Let Q = {m_1, m_2, ..., m_q} be q independent sources of evidence. The credibility degree of m_ρ is defined as

Crd(m_ρ) = Sup(m_ρ) / Σ_{ε = 1}^{q} Sup(m_ε).

The definitions of support degree and credibility degree satisfy the properties in [36], and the outer reliability of a BPA is denoted as O(m_ρ), so we take O(m_ρ) = Crd(m_ρ).

Step 3: combine the outer reliability with the inner reliability to obtain the reliability of each piece of evidence, which can be regarded as the discounting coefficient of its BPA, where η ∈ [0, 1] is a parameter used to adjust the influence of the inner and outer reliability on the overall reliability.
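Steps 2-3 can be sketched as follows. The similarity and credibility formulas follow Definitions 3.2-3.4; the convex η-mix of inner and outer reliability is one plausible reading of the combination in Step 3 (the paper's exact formula may differ), and the distance values are illustrative.

```python
def similarity(d: float) -> float:
    # Definition 3.2: Sim = 1 - d, with d a normalized BPA distance.
    return 1.0 - d

def credibility(distances: list) -> list:
    # distances[i][j]: distance between evidence i and evidence j.
    sup = [sum(similarity(row[j]) for j in range(len(row)) if j != i)
           for i, row in enumerate(distances)]       # Definition 3.3
    total = sum(sup)
    return [s / total for s in sup]                  # Definition 3.4

def overall_reliability(inner: float, outer: float, eta: float) -> float:
    # Assumed convex combination controlled by eta in [0, 1].
    return eta * inner + (1.0 - eta) * outer

d = [[0.0, 0.2, 0.4],
     [0.2, 0.0, 0.2],
     [0.4, 0.2, 0.0]]
crd = credibility(d)
print(crd)  # the middle source, closest to the others, is most credible
```

The credibility degrees sum to 1, so they can be used directly as relative outer-reliability scores before discounting.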
Step 4: calculate the discounted BPAs using the discounting coefficients. Take m_{h^t_uv} as an example: for the belief assigned to the evaluation grade φ_j, the discounted BPA is calculated via the discounting operation defined in Section "Preliminaries".

Step 5: aggregate the discounted evidence from all experts using the combination rule introduced in Section "Preliminaries". All evaluation information for the same alternative under the same attribute, m_{h^t_uv} (u = 1, 2, ..., m; v = 1, 2, ..., n; t = 1, 2, ..., k), is aggregated into the synthesized evaluation value m_{h_uv} (u = 1, 2, ..., m; v = 1, 2, ..., n). Let H = [m_{h_uv}]_{m×n}.
Step 6: determine the criteria weights. For an actual decision-making problem, different criteria usually have different importance. In this paper, Deng entropy is used for the determination of each criterion's weight.
Fuse the different BPAs corresponding to the alternatives under the same criterion (converted in the last step) using Dempster's combination rule; the contribution of all alternatives for C_v is obtained according to the following definition.

Definition 3.6 [2]. The degree of contribution of all the alternatives for criterion C_v is defined as BPA_v, the Dempster fusion of the m alternatives' BPAs under C_v. Further, the indeterminacy of C_v is estimated by Deng entropy and recorded as UC_v; the calculation is shown in Eq. (8). The value of UC_v is scaled to between 0 and 1 through standardization, and then the consistency of criterion C_v can be defined as 1 − UC_v.
Based on the idea that a criterion with a lower Deng entropy has a greater impact on the comprehensive evaluation (i.e., a larger weight), the criterion weight can be calculated from the consistency values. Finally, the weights of the criteria are obtained through normalization.
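Step 6 can be sketched as below: criteria with lower (normalized) Deng entropy are more discriminating and receive larger weights. The min-max normalization here is one plausible reading of the "standardized operations" in the text, and the entropy values are illustrative.

```python
def criteria_weights(entropies: list) -> list:
    # Normalize Deng entropies to [0, 1], take consistency = 1 - UC_v,
    # then renormalize so the weights sum to 1.
    lo, hi = min(entropies), max(entropies)
    uc = [(e - lo) / (hi - lo) if hi > lo else 0.0 for e in entropies]
    cons = [1.0 - u for u in uc]
    total = sum(cons)
    return [c / total for c in cons]

w = criteria_weights([1.2, 0.6, 0.9, 1.5])
print(w)  # the second criterion (lowest entropy) gets the largest weight
```

Note this scheme gives zero weight to the single most entropic criterion; in practice one might prefer a smoothed variant, but that is a design choice beyond this sketch.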
Step 7: convert the BPAs into probability distributions BetP_m according to Eq. (4) using the Pignistic probability function. With an aggregation function, each probability distribution can be integrated into a numerical value as follows.

Definition 3.7 Assume the importance of the linguistic terms is I = {I_1, I_2, ..., I_L} with corresponding values V = {v_1, v_2, ..., v_L}; then a probability distribution P = {p_1, p_2, ..., p_L} can be aggregated as

F = Σ_{l = 1}^{L} p_l · v_l.

Through the above operation, the decision matrix F_M can be obtained.
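Definition 3.7 is an expectation of the grade values under the Pignistic distribution; a sketch (the grade-to-value mapping below is an illustrative assumption):

```python
def aggregate(probs: dict, values: dict) -> float:
    # Expected value of the linguistic grades under distribution `probs`.
    return sum(p * values[grade] for grade, p in probs.items())

# Illustrative mapping of grades -3..3 onto [-1, 1].
values = {g: g / 3.0 for g in range(-3, 4)}
betp = {2: 0.6, 1: 0.3, 0: 0.1}
print(aggregate(betp, values))  # 0.6*(2/3) + 0.3*(1/3) + 0.1*0 = 0.5
```

Each entry of the decision matrix F_M is one such scalar, computed per alternative and criterion.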

Obtain the ranking results using the generalized TODIM-ELECTRE II method
To rank all alternatives in the decision matrix obtained above, a novel method named the generalized TODIM-ELECTRE II method is presented in this subsection. This newly raised method is a hybrid of the generalized TODIM method and the ELECTRE II method; it can reflect not only the outranking relationship between the alternatives but also the degree of transcendence. Thus, it can handle the compensatory problem between criteria while considering the psychological characteristics of DMs.
Step 1: for criterion C_v, its positive ideal value f_M^{v+} and negative ideal value f_M^{v−} can be determined from the decision matrix F_M. Furthermore, the distance between decision values can be calculated as an absolute difference; the absolute value captures the size of the gap between two decision values rather than which of them is larger. Next, borrowing the concept of the relative distance function from the TOPSIS method, the relative distance for decision value f_M^{uv} is computed by Eq. (29) and used in the subsequent steps.
Step 2: determine three types of linguistic Z-number concordance sets (LZCSs) for f_χv and f_γv, which represent the decision values of any two alternatives' performance ratings on criterion C_v, by the following classification standards: (1) the strong concordance set CC^s_χγ; (2) the medium concordance set CC^m_χγ; (3) the weak concordance set CC^w_χγ. Since d_lz(f_M^{χv}, f_M^{v+}) is the distance between f_M^{χv} and f_M^{v+}, it identifies the gap between the performance rating f_M^{χv} and the positive ideal value; the same principle applies to d_lz(f_M^{χv}, f_M^{v−}). Therefore the determination of the strong and medium concordance sets is unambiguous to explain. However, for the determination of the weak concordance set, this paper considers two particular conditions in which f_M^{χv} may transcend f_M^{γv}.

Step 5: compute the dominance degree between alternatives Aa_χ and Aa_γ concerning criterion C_v, denoted Φ_v(Aa_χ, Aa_γ). Drawing from [42], let f_1(x) in the generalized TODIM method be x^α and f_2(x) be λx^β, and leave the weight vector w unmodified; the dominance degree can then be expressed accordingly.

Step 6: calculate the linguistic Z-number concordance index (LZCI) for any pair of alternatives Aa_χ and Aa_γ from the LZCSs, where w_v is the weight of C_v and ω_s, ω_m, ω_w and ω are attitude weights. CI_χγ mirrors the relative importance of Aa_χ over Aa_γ, and CI_χγ ∈ [0, 1]. All LZCIs of pairs of alternatives form the linguistic Z-number concordance matrix C_LZ = [CI_χγ]_{m×m} (χ, γ = 1, 2, ..., m).

Step 7: calculate the linguistic Z-number discordance index (LZDI) for any pair of alternatives Aa_χ and Aa_γ. DI_χγ mirrors the relative inferiority of alternative Aa_χ to alternative Aa_γ, and DI_χγ ∈ [0, 1].
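The Step-5 dominance degree can be sketched in Llamazares' generalized-TODIM form with f_1(x) = x^α and f_2(x) = λx^β. The parameter values α = β = 0.88 and λ = 2.25 are the usual prospect-theory defaults and are an assumption of this sketch, not values stated in the text.

```python
def dominance(diff: float, weight: float, a: float = 0.88,
              b: float = 0.88, lam: float = 2.25) -> float:
    # diff: weighted-performance advantage of alternative chi over gamma
    # on one criterion. Gains pass through f1(x) = x**a; losses are
    # amplified by f2(x) = lam * x**b (loss aversion: lam > 1).
    if diff >= 0:
        return (weight * diff) ** a
    return -lam * (weight * (-diff)) ** b

g = dominance(0.25, 0.4)    # a gain contributes positively
l = dominance(-0.25, 0.4)   # the same-sized loss is penalized more heavily
print(g, l)
```

Because λ > 1, a loss of a given size always outweighs an equal gain, which is precisely the loss-aversion behavior the method is designed to capture.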
All LZDIs of pairs of alternatives constitute the linguistic Z-number discordance matrix D_LZ = [DI_χγ]_{m×m} (χ, γ = 1, 2, ..., m).

Step 8: calculate the concordance threshold C_lz as the average of the off-diagonal concordance indices and construct the linguistic Z-number concordance Boolean matrix B_CLZ = [e_χγ]_{m×m}, where e_χγ is a Boolean variable. Specifically, e_χγ = 1 ⇔ CI_χγ ≥ C_lz and e_χγ = 0 ⇔ CI_χγ < C_lz. Furthermore, e_χγ = 1 denotes that Aa_χ dominates Aa_γ from the perspective of concordance.
Compute the discordance threshold D_lz analogously and establish the linguistic Z-number discordance Boolean matrix B_DLZ = [q_χγ]_{m×m}, where q_χγ is also a Boolean variable: when (1) DI_χγ < D_lz, q_χγ = 1, and when (2) DI_χγ ≥ D_lz, q_χγ = 0. Thus q_χγ = 1 denotes that the inferiority of Aa_χ to Aa_γ remains within the acceptable threshold.
Step 9: multiply B_CLZ and B_DLZ component-wise to obtain the synthetic Boolean matrix B_LZ with elements b_χγ = e_χγ × q_χγ (χ, γ = 1, 2, ..., m).
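Steps 8-9 can be sketched as follows; taking the thresholds as simple averages over off-diagonal entries is one common ELECTRE II choice and is assumed here, along with the illustrative index matrices.

```python
def boolean_matrices(CI, DI):
    # Threshold the concordance/discordance indices and take the
    # elementwise product to get the synthetic outranking matrix B_LZ.
    m = len(CI)
    off = [(i, j) for i in range(m) for j in range(m) if i != j]
    c_thr = sum(CI[i][j] for i, j in off) / len(off)
    d_thr = sum(DI[i][j] for i, j in off) / len(off)
    B = [[0] * m for _ in range(m)]
    for i, j in off:
        e = 1 if CI[i][j] >= c_thr else 0  # concordance strong enough
        q = 1 if DI[i][j] < d_thr else 0   # discordance acceptable
        B[i][j] = e * q                    # both must hold to outrank
    return B

CI = [[0.0, 0.8], [0.3, 0.0]]
DI = [[0.0, 0.2], [0.7, 0.0]]
print(boolean_matrices(CI, DI))  # [[0, 1], [0, 0]]: alternative 1 outranks 2
```

The nonzero entries of B then yield the arcs of the outranking digraph in Step 10.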
Step 10: depict the outranking graph according to the preference relationships embodied in B_LZ, which can be displayed as a digraph G = (V, A). A digraph is composed of a vertex set V and a set A of arcs linking the vertices. The vertices represent the alternatives involved in the MCGDM, and a directed arc represents an outranking relation. For example, an arc pointing from v_χ to v_γ signifies that Aa_χ precedes Aa_γ. A two-way arrow appears between two alternatives when Aa_χ and Aa_γ are indifferent. In addition, the absence of arrows between two vertices occurs when the two alternatives are incomparable.

Case description
Although the total amount of freshwater resources in China is not small, the per capita amount is only about one-fourth of the world average, which amounts to a serious overall shortage. Since entering the twenty-first century, the contradiction between the supply and demand of water resources in China has further intensified. Resource-based and water-quality-based water shortages have become one of the primary bottlenecks limiting China's sustainable economic and social development. Water quantity and quality are inseparable, and pollution reduces the quality of water resources. Owing to the increase in sewage discharge and toxicity, not all sewage can be properly treated before it is discharged, which exacerbates the shortage of water resources. The state has successively promulgated relevant laws and regulations; for instance, the "Action Plan for Water Pollution Prevention and Control" issued in 2015 put forward higher requirements for water resources utilization, water pollution prevention and control, and pollutant discharge.
Actively responding to the national call and meeting the requirements of relevant water-saving and environmental protection laws and regulations, a seawater DC power plant intends to treat the plant's high-salt wastewater (mainly including desulfurization wastewater and high-salt wastewater from the boiler make-up water treatment system) through terminal wastewater solidification technology so as to realize zero discharge of wastewater from the whole plant. Figure 3 shows the factors affecting the water quality and quantity of desulfurization wastewater and the sources of pollutants. It is now necessary to compare a variety of terminal wastewater treatment technical routes and select the most appropriate terminal wastewater treatment scheme. Terminal wastewater treatment generally includes two stages: wastewater concentration reduction and terminal wastewater solidification. At present, there are several technical routes for zero discharge of desulfurization wastewater, as shown in Fig. 4. According to the actual situation, there are five technologies for the power plant to choose from: Pretreatment + Multi-effect evaporation (Tc_1), Pretreatment + double alkali method + double membrane method + main flue evaporation (Tc_2), Triple tank (pH adjustment tank, reaction tank, flocculation tank) pretreatment + High-temperature flue bypass evaporation (Tc_3), Flue gas concentration + End pressure filtration (Tc_4) and Flue gas concentration + Fluidization evaporation of end secondary air (Tc_5). Four experts {Ex_1, Ex_2, Ex_3, Ex_4} were asked to evaluate independently the performance of every technology under four criteria: Operating cost (Cr_1), Technology maturity (Cr_2), Post-processing effect (Cr_3) and Environmental impact (Cr_4). Given the complexity of human cognition and the asymmetry of information, neither the weights of the experts nor the weights of the criteria are known in advance.
With the aim of improving the accuracy of the information, experts are required to offer evaluation information in LZN form. The first linguistic term set S = {δ_-3, δ_-2, δ_-1, δ_0, δ_1, δ_2, δ_3} = {very poor (VP), poor (P), slightly poor (SP), medium (M), slightly good (SG), good (G), very good (VG)} was adopted by the experts to judge the performance grade of every alternative concerning each criterion, and the confidence in an evaluation was expressed using the second linguistic term set S' = {ξ_-3, ξ_-2, ξ_-1, ξ_0, ξ_1, ξ_2, ξ_3} = {very uncertain (VU), uncertain (U), a little uncertain (LU), normal (N), a little certain (LC), certain (C), very certain (VC)}. The higher the evaluation level, the better the performance.
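For illustration, the two term sets can be encoded by their subscripts (a small sketch; the list names and index convention are ours, not the paper's):

```python
# Linguistic term sets S (performance) and S' (confidence), subscripts -3..3
S  = ["VP", "P", "SP", "M", "SG", "G", "VG"]
S2 = ["VU", "U", "LU", "N", "LC", "C", "VC"]

def term_index(term, scale):
    """Map a linguistic term to its subscript in {-3, ..., 3}."""
    return scale.index(term) - 3

# An LZN such as (G, LC) = (delta_2, xi_1) is stored as a subscript pair
lzn = (term_index("G", S), term_index("LC", S2))
print(lzn)  # (2, 1)
```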

Case processing
Step 1: obtain the evaluation results of experts in the form of LZNs. Every expert provides his own judgments and corresponding confidence levels concerning each technology's performance under each criterion, which are displayed in Tables 1, 2, 3, 4.
Step 4: determine the inner reliability of the BPAs. First, compute the Deng entropy of the BPAs by Eq. (8), and then calculate the maximum Deng entropy. In the present example, the scale of the FOD is 7. Hence, its corresponding power set involves 2^7 elements, and the distribution of propositions (the empty set is omitted) should satisfy Eq. (9). The maximum Deng entropy is calculated as 11.0077. The inner reliability can thus be determined (Table 9).
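The quantities in Step 4 can be reproduced with a short sketch (representing focal elements as frozensets is an implementation choice of ours); for a FOD of 7 linguistic terms the maximum Deng entropy log2(3^7 − 2^7) matches the 11.0077 quoted above:

```python
import math

def deng_entropy(bpa):
    """Deng entropy: E_d(m) = -sum_A m(A) * log2(m(A) / (2^|A| - 1)),
    summed over the focal elements A of the BPA."""
    return -sum(mass * math.log2(mass / (2 ** len(focal) - 1))
                for focal, mass in bpa.items() if mass > 0)

def max_deng_entropy(n):
    """Maximum Deng entropy over a frame with n elements, attained when
    m(A) is proportional to 2^|A| - 1; the sum of 2^|A| - 1 over all
    nonempty subsets equals 3^n - 2^n."""
    return math.log2(3 ** n - 2 ** n)

print(round(max_deng_entropy(7), 4))  # 11.0077
```

For singleton-only BPAs the correction term 2^|A| − 1 equals 1, so Deng entropy reduces to the Shannon entropy, as expected.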
Step 5: determine the outer reliability of BPAs and calculate the discounted BPAs.
First, determine the distances between the BPAs according to Eqs. (5) and (6); then the support degrees of the BPAs can be calculated by Eqs. (16) and (17). Eventually, the credibility degree can be calculated by Eq. (18), and the outer reliability can be determined. Taking the value of η in Eq. (20) as 0.5, the overall reliability and discounting coefficients of the BPAs can be determined. Table 10 shows the discounted BPAs calculated by Eqs. (1) and (2).
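The discounting cited in Step 5 can be sketched in the classical Shafer form (assumed here for Eqs. (1) and (2)): each mass is scaled by the reliability coefficient and the remainder is transferred to the whole frame of discernment.

```python
def discount(bpa, alpha, frame):
    """Discount a BPA by reliability coefficient alpha in [0, 1]:
    every focal mass is scaled by alpha, and the remaining 1 - alpha
    is assigned to the frame of discernment (total ignorance)."""
    theta = frozenset(frame)
    out = {a: alpha * m for a, m in bpa.items() if a != theta}
    out[theta] = alpha * bpa.get(theta, 0.0) + (1.0 - alpha)
    return out

# A fully confident BPA discounted at overall reliability 0.8
d = discount({frozenset({'a'}): 1.0}, 0.8, {'a', 'b'})
```

A less reliable source thus contributes more of its mass as ignorance, weakening its influence in the subsequent fusion.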
Step 6: aggregate the evaluation results of the same alternative under the same criterion from multiple experts using Dempster's rule. The results obtained by Eq. (3) are shown as follows.
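Dempster's rule used in Step 6 (Eq. (3)) can be sketched as:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination: multiply the masses of every pair
    of focal elements, accumulate mass on non-empty intersections, and
    renormalize by 1 - K, where K is the total conflict."""
    fused, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: m / (1.0 - conflict) for a, m in fused.items()}

# Two hypothetical expert BPAs over the linguistic terms {G, VG}
m1 = {frozenset({'G'}): 0.6, frozenset({'G', 'VG'}): 0.4}
m2 = {frozenset({'G'}): 0.5, frozenset({'VG'}): 0.5}
fused = dempster_combine(m1, m2)
```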
Step 7: determine the weights of the criteria. Aggregate all BPAs concerning different alternatives under the same criterion using the combination algorithm to obtain the combined BPAs. The weights of the criteria can then be obtained on the basis of Deng entropy.
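One common entropy-based weighting scheme is sketched below; this normalization is an assumption of ours, since the paper's exact formula is not reproduced in this excerpt. The idea is that a criterion whose combined BPA carries less Deng entropy (less uncertainty) discriminates better between alternatives and should weigh more:

```python
def entropy_weights(criterion_entropies, e_max):
    """Hypothetical entropy-weight scheme: weight each criterion by its
    normalized divergence from the maximum Deng entropy e_max."""
    div = [1.0 - e / e_max for e in criterion_entropies]
    total = sum(div)
    return [d / total for d in div]

w = entropy_weights([1.0, 2.0, 3.0], 4.0)
```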
Step 8: obtain the final decision matrix. First, convert the BPAs in Table 11 into probability distributions via the Pignistic probability transformation (Table 12); the results can be seen in Table 13. Then, the probability distributions are integrated into numerical values according to Eq. (26), yielding the final decision matrix (Tables 14 and 15).
Step 9: identify the positive ideal value and negative ideal value of every criterion; the relative distances of the decision values f_uv (u = 1, 2, 3, 4, 5; v = 1, 2, 3, 4) calculated by Eq. (29) are displayed in Table 16.
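The Pignistic probability transformation used in Step 8 spreads each focal element's mass uniformly over its singletons; a sketch (frozenset focal elements are again our representation):

```python
def pignistic(bpa):
    """Pignistic transformation (BetP): each focal element's mass is
    divided equally among the singletons it contains."""
    prob = {}
    for focal, mass in bpa.items():
        share = mass / len(focal)
        for x in focal:
            prob[x] = prob.get(x, 0.0) + share
    return prob

p = pignistic({frozenset({'G'}): 0.4, frozenset({'G', 'VG'}): 0.6})
```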
Step 10: classify three types of concordance/discordance sets and the indifference set.
Step 11: calculate the dominance degree between any two technologies. With α = β = 0.88 and λ = 1 in Eq. (37), the dominance degree between Tc_ρ and Tc_ε can be determined.
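The role of α, β and λ in Step 11 can be illustrated with the prospect-theory value function that underlies the generalized TODIM dominance degree (a sketch of the value function only; the full dominance degree of Eq. (37) additionally involves the criteria weights):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=1.0):
    """Prospect-theory value function: concave over gains (x >= 0),
    convex and amplified by the loss-aversion factor lam over losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# With lam > 1, a loss of the same size hurts more than the equal gain helps
gain, loss = prospect_value(0.5), prospect_value(-0.5, lam=2.25)
```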
Step 12: set the position weights of the strong, medium and weak concordance/discordance sets and the indifference set.
Step 13: compute the concordance threshold C_lz = 0.0738 and construct the concordant Boolean matrix; similarly, compute the discordance threshold D_lz = 0.2799 and construct the discordant Boolean matrix.
Step 14: obtain the synthesized matrix by multiplying the corresponding elements of B_LZ^C and B_LZ^D in pairs.
Step 15: depict the strong outranking graph as Fig. 5. It can be noticed from the synthesized matrix B_LZ that the elements at symmetrical positions are not completely complementary. It is therefore essential to mark the asymmetric elements and make amendments, after which B_LZ^C and B_LZ^D are revised and the synthesized matrix is reconstructed from them. The ultimate outranking graph is redrawn in Fig. 6, where the colors of the arrows distinguish the intensity of the outranking relationships; the added arrows represent weak preference relationships between alternatives.
Step 16: rank the five technologies based on the revised outranking graph. The third technology (Triple tank pretreatment + High-temperature flue bypass evaporation) is the most desirable option.

Sensitivity analysis and discussion
As mentioned in section "Obtain the ranking results using the generalized TODIM-ELECTRE II method", the parameter λ in Eq. (37) measures the attitude of DMs in the face of losses, which may vary as the decision-making body changes. When λ > 1, the DM's perception of damage and defect is intensified, suggesting a loss-sensitive individual who is more reluctant to take chances. Conversely, when λ < 1, the DM is not susceptible to losses, which may mean that he is more concerned with other attributes such as quality and cost. Consequently, for two alternatives with different performance under the same criterion, DMs with different λ values judge the dominance degree between them differently, and the discordance index is affected accordingly. Taking different values for λ, namely λ = 0.8, λ = 2.25, λ = 3 and λ = 3.75, the changes in the discordance index between the five technologies can be observed.
In Fig. 7, the x-axis denotes the five alternatives under different values of λ, and the y-axis indicates the sum of discordance indices. Rectangles of different colors indicate the discordance indices between the fixed alternative and each of the others. It can be seen that as the value of λ increases, the areas of the differently colored rectangles increase with it, which means that the discordance indices increase. This trend is easier to understand in combination with the actual situation: a greater λ indicates that losses have a deeper impact on the decision body; that is, for two given alternatives, although the gap between them is objective, the "inferior" one gives a loss-sensitive DM a worse impression. In this case, the dominance degree has a greater absolute value and the discordance index is higher.
It should also be noted that when the value of λ varies from 0.8 to 3, the discordance relationship between the alternatives does not change with the increment of the discordance indices and constantly remains dTc_3 ≺ dTc_1 ≺ dTc_5 ≺ dTc_2 ≺ dTc_4, where dTc_i denotes the discordance index of Tc_i. This verifies to a certain extent the robustness of the decision-making framework established in this paper. In addition, when the value of λ increases to 3.75, there is a subtle alteration in this relationship: the order of Tc_1 and Tc_5 is swapped. It can be seen that the decision subject exerts influence on the priority relationship of the alternatives, and the DM's attitude towards losses may change the final ranking result. However, regardless of the tweak of the parameter's value, in the present example, the best choice is still Tc_3, which further verifies the robustness of the method.

Feasibility analysis and discussion
Considering the LZNs environment and with reference to the TOPSIS method [47], two existing methods are used to recalculate this example: one is the MCGDM method proposed by Peng and Wang [48], and the other is the traditional TOPSIS method.
As can be seen from Table 17, the order acquired by the above two approaches is roughly the same as that of the method proposed in this article. In particular, for the method of Peng and Wang, the ranking results by the Q and U values are consistent with those of our method. These results verify that our method is effective and feasible.
It is worth mentioning that TOPSIS's neglect of the relative importance of the distances to the ideal solutions can frequently lead to inaccurate results. Moreover, the method in [45] is an incorporation of the power aggregation (PA) operators and VIKOR; compared to the PA operator, DEST is more effective in processing uncertainty originating from unknown or incomplete information. In addition, the method in [48] is essentially the traditional VIKOR method, which overlooks the relationship between every alternative and the negative ideal solution [28]. Finally, the method framework in [48] is only suitable for cases where the expert weights and attribute weights are partially known, while the decision framework developed in this article is available for MCGDM problems with completely unknown weight information.

Superiority recognition and discussion
In this part, five other comparable methods are employed for the selection problem of terminal wastewater solidification technology: the traditional ELECTRE II method, the generalized TODIM method [42], the DS-VIKOR method proposed in [2], the ELECTRE-based method in [3] and the PDHL-VIKOR method developed by Gou et al. [28]. Among them, owing to the difference in linguistic environment, only the improved VIKOR part of the method proposed in [28] is used for comparison; hence we do not obtain the probabilistic double hierarchy linguistic compromise measure (PDHLC_i) but the linguistic Z-number compromise measure (LZC_i). The superiority of the newly proposed method is further demonstrated in this subsection. The ranking results calculated by the four methods in Table 18 are at variance with that of the introduced framework. However, the best solution is always Tc_3, and except for the ELECTRE II method and the improved method in [28], the worst option is always Tc_4. These results further prove the effectiveness of the method in this article.
The reason why the ranking result obtained by the ELECTRE II method differs from that of the method proposed in this article is that the concordance and discordance indices obtained by the two methods take different values. The resulting concordance and discordance matrices therefore differ, so the comprehensive performances of Tc_2 and Tc_4 are distinct in their respective synthesized matrices. As can be seen from the concordance and discordance sets in Step 10 of section "Case description", Tc_2 outperforms Tc_4 under Cr_3 and Cr_4, whereas Tc_4 outperforms Tc_2 under Cr_1 and Cr_2. The calculation of the concordance and discordance indices in the ELECTRE II method considers only the criteria weights; under these circumstances, since the weights of Cr_1 and Cr_2 are greater than those of Cr_3 and Cr_4, it seems reasonable to conclude that Tc_4 precedes Tc_2. But is this really the case? When employing the generalized TODIM method, it can be found that the performance under Cr_2 is significantly better than that under Cr_4. Obviously, the bounded rationality of the DM is an important factor affecting the result of MCDM problems. This explains the different results between the method in [28] and the method in this article, and it is also one of the reasons why the ELECTRE-based method proposed by Fei et al. [3] cannot distinguish the priority relationship between Tc_2 and Tc_4. In the process of determining the linguistic Z-number concordance and discordance matrices, the method proposed in this article not only covers the criteria weight information but also takes into account the impact of the gap between the alternatives on the decision-making of loss-averse DMs. Compared with the improved VIKOR in [28], the newly proposed method also concerns the relationship between the alternatives and the negative ideal solution. Therefore, it can overcome the flaw of traditional VIKOR and achieve the goal of [28] to improve the VIKOR method.
In addition, the construction of the concordance and discordance indices effectively avoids the compensation effect between the criteria, and the consideration of the DMs' psychology makes the results more convincing.
Furthermore, the ELECTRE-based method in [3] can merely infer that Tc_3 is preferred to Tc_2 and Tc_4, and that Tc_5 is preferred to Tc_4. However, the relationships among Tc_1, Tc_2 and Tc_4 cannot be determined, which reveals that the method in [3] can only partially prioritize the alternatives but cannot produce the priority relationship of all of them. In contrast, a total order can be obtained using our method, which clearly identifies the gaps between all solutions.
Compared with the results of the newly proposed framework, the preference relation between Tc_1 and Tc_5 obtained by the ELECTRE II, DS-VIKOR and ELECTRE-based methods is different. In section "Sensitivity analysis and discussion", it has already been shown that when λ increases to a certain value, the positions of Tc_1 and Tc_5 are exchanged. Therefore, this sequence can be regarded as a special case in which the DM is more sensitive to losses. One of the advantages of the generalized TODIM-ELECTRE II method thus emerges: it fully considers the loss-aversion behavior of the DMs, and the degree to which one alternative is dominated by another is also reflected. Furthermore, the incorporation of the ELECTRE II method allows the compensatory effects among criteria to be handled with precision. The generalized TODIM method obviously neglects this point, which inevitably leads to unreliable results and may explain why the positions of Tc_2 and Tc_5 are swapped.

Conclusion
In this paper, a new integrated framework suitable for MCGDM problems with unknown weight information in the LZNs environment is established. In this framework, the original information in the form of LZNs is first transformed into basic probability assignments (BPAs) in DEST, which not only fully considers the inner and outer reliability of the evidence through the discounting procedure but can also cope with uncertain information during the fusion of multi-source information. Next, to eliminate uncertainty arising from subjective judgment, Deng entropy is employed to acquire the criteria weights. Afterwards, the generalized TODIM-ELECTRE II method is put forward to obtain the full priority order of the alternatives, which is competent to handle the compensation problems between criteria while taking the loss-aversion behavior of DMs into account. The feasibility of this method is demonstrated by the solution to a terminal wastewater solidification technology selection problem. Finally, a sensitivity analysis is conducted to further verify the effectiveness and robustness of the method, and its superiority is illustrated by a series of comparative analyses.