Introduction

Multiattribute group decision-making (MAGDM) is a process in which several decision-makers (DMs) rank alternatives based on multiple attributes. It has been widely used in engineering, economics, management, and military affairs, for example in green supplier selection [43], stock investment evaluation [35], selection of financial technologies [23], evaluation of design projects [12], and so forth [2, 36, 44]. Initially, the decision-making information was usually expressed in exact values. Sometimes, however, it is difficult to give a quantitative evaluation value for an attribute, and it may instead be evaluated qualitatively using linguistic terms (LTs). For example, a student’s examination results can be precisely expressed by exact values, but his/her morality is usually evaluated by LTs such as {poor, fair, good}. In more complex situations, owing to limitations of time or knowledge, uncertain LTs [42] or interval-valued intuitionistic LTs [38] are also used to describe evaluation information, such as [poor, moderate] or [good, extremely good].

However, there is another kind of MAGDM problem in real life, such as the evaluation of science and technology projects and blind reviews of doctoral dissertations, in which DMs not only need to give evaluation results for the indicators but also need to report the reliability of those results in the form of a ‘familiarity degree’. Considering these points, Zhu et al. [50] put forward the 2-dimensional expression model, in which the I and II class LTs are used simultaneously to express the evaluation values. The I class LT expresses the evaluation value of the evaluated object, while the II class LT depicts the DM’s subjective assessment of the reliability of the given evaluation. By using 2-dimensional LTs, DMs can express their opinions more accurately, thereby making decision-making results more reliable. Further, Liu [14] extended this model to 2-dimensional uncertain linguistic variables (2DULVs), in which both the I and II class LTs are replaced by uncertain LTs (ULTs). The 2DULV is very helpful for expressing uncertain and ambiguous decision-making information, and many aggregation operators have been developed based on it, including the power generalized weighted aggregation operator [21], generalized hybrid aggregation operators [18], the density generalized hybrid weighted averaging operator [17], Bonferroni harmonic mean operators [19], and dual generalized Bonferroni mean operators [20]. At the same time, some traditional multiattribute decision-making (MADM) methods have been extended to 2DULVs. Liu and Teng [16] extended the TODIM method to 2DULVs to address MAGDM problems. Liu [22] developed the maximizing deviation method for attribute weights with 2DULVs and proposed an improved TOPSIS method. Ding and Liu [5] proposed the DEMATEL method with 2DULVs for the identification of critical success factors. Liu et al. [13] improved the failure mode and effects analysis method using 2DULVs and applied it to the maintenance of water treatment. Ding and Liu [4] explored an emergency decision-making model with 2DULVs based on prospect theory and the VIKOR method. Zhang et al. [48] proposed a MAGDM model for the selection of construction equipment based on the EDAS method, in which the evaluation values take 2DULVs or real numbers.

The choice of method is crucial in the MAGDM process, and a series of traditional MAGDM methods have been extended to different environments, including the TODIM method [3, 49], the VIKOR method [31, 47], and the TOPSIS method [7, 8]. Recently, Pamučar and Ćirović [24] developed a novel MABAC method for the evaluation and selection of forklifts; they also demonstrated its stability and consistency through sensitivity analysis and comparisons with other methods, including VIKOR, SAW, COPRAS, MOORA, and TOPSIS. As an effective and reliable tool for solving MAGDM problems, the fuzzy MABAC method has attracted considerable research. For example, Peng and Dai [26] presented a revised MABAC method based on single-valued neutrosophic numbers. Pamučar et al. [25] modified the traditional MABAC method based on interval-valued fuzzy-rough numbers. Sun et al. [33] gave a projection-based MABAC method with hesitant LTs for patients’ prioritization. Xue et al. [45] presented an improved MABAC method with interval-valued intuitionistic fuzzy information for the selection of materials. Yu et al. [46] extended the MABAC method to interval type-2 fuzzy information and explored its application in hotel selection on travel websites.

Attribute weights are also important factors that often affect the results of MADM problems. Sometimes they are unknown owing to time pressure and limited professional knowledge, so it is important to establish a scientific and practical method to obtain reasonable attribute weights. In general, commonly used weighting models can be separated into two types. One type yields subjective weights, such as the AHP [30] and the best–worst method (BWM) [28, 29]; the other yields objective weights, such as entropy methods [6, 10] and the maximizing deviation method [32]. Subjective weights reflect DMs’ subjective judgments of the importance of attributes based on their own experience and knowledge. However, in complex decision-making situations, such judgments are hard to make, and psychological factors and the subjective arbitrariness of experts may bias the decision results. In objective weighting models, the weights are obtained by solving mathematical models based on the actual evaluation values, but these models ignore DMs’ subjective preferences and may be affected by the amount of data and the choice of optimization model. It is therefore more scientific and reasonable to use comprehensive weights that combine the subjective and objective weights. In this article, the BWM is utilized to calculate the subjective weights of attributes, and the maximum deviation method is extended to 2DULVs to obtain the objective weights. Then, the combination weights of attributes are determined based on the maximum comprehensive evaluation value method [15].

To be clear, the main motivation of this study can be explained by the following three points.

  1. (1)

    The 2DULVs combine the advantages of ULTs and 2-dimensional LTs, so the linguistic information on the reliability of DMs’ judgments can be used effectively. The application of 2DULVs can better express subjective uncertain information and thus yield more convincing results.

  2. (2)

    The MABAC method has the advantages of simple computation and stable results, making it a helpful and reliable tool for MADM. However, the MABAC method has not yet been studied with 2DULVs.

  3. (3)

    When determining subjective weights, the BWM requires fewer pairwise comparisons of attributes than the AHP. The maximum deviation method can determine objective weights by measuring the differences among the evaluation values under each attribute. To balance the subjective preferences of DMs and the objectivity of the evaluation data, deriving combination weights is a more reasonable and flexible choice.

Based on the above discussion, this study aims to establish an extended MABAC method with 2DULVs to address complex MAGDM problems with unknown attribute weights. The main innovations of this study lie in the following three points.

  1. (1)

    The MABAC method is extended to 2DULVs to propose a new MAGDM method, called the 2-dimension uncertain linguistic MABAC (2DUL-MABAC) method.

  2. (2)

    The maximum deviation method and comprehensive evaluation value method are extended to 2DULVs and combined with BWM to determine combination weights for MADM problems with 2DULVs.

  3. (3)

    Comparisons with the extended TODIM and EDAS methods are conducted to show the validity and superiority of the proposed method.

The remainder of this study is arranged as follows. Section “Preliminaries” briefly reviews some related notions of 2DULVs and the basic steps of the MABAC method. Section “Models for the attribute weights” establishes a model to determine the attribute weights with 2DULVs. In Sect. “The 2-dimension uncertain linguistic MABAC method”, the 2DUL-MABAC method is developed for the 2DULVs. In Sect. “A numerical example”, a case study is conducted to indicate the effectiveness and practicality of the developed method. In Sect. “Conclusion”, some conclusions of this study are presented.

Preliminaries

In this section, we review some basic concepts needed to understand this article, including uncertain linguistic variables, 2DULVs, and the MABAC method.

Uncertain linguistic variables

Let \(S = \left\{ {s_{0} ,s_{1} , \ldots ,s_{l - 1} } \right\}\) be a predefined and completely ordered set with an odd number of LTs, where \(s_{i} \in S \, (i = 0,1, \ldots ,l - 1)\) is known as a linguistic variable (LV) [9]. In general, \(l\) can be \(3,5,7,9,\) etc. When \(l = 5\), the LT set (LTS) can be defined as follows:

$$ S = (s_{0} ,s_{1} ,s_{2} ,s_{3} ,s_{4} ) = ({\text{very poor, poor, moderate, good, very good}}); $$

To avoid information loss, a continuous set \(\overline{S} = \{ s_{\alpha } |\alpha \in [0,q]\}\) is extended from the LTS, where \(q\) is a sufficiently large positive integer. Generally, the original LTs are used to express the DMs’ evaluations, and the extended LTs are only employed in the operational process [40].

In some complex situations, owing to time pressure and the limited expertise of DMs, the evaluation values may be expressed as uncertain LVs (ULVs):

Definition 1

[41] Let \(\tilde{s} = [s_{a} ,s_{b} ]\), where \(s_{a} ,s_{b} \in \overline{S}\), and \(s_{a}\) and \(s_{b}\) are the lower and upper limits, respectively; then \(\tilde{s}\) is called a ULV. The set of all ULVs is denoted as \(\tilde{S}\).

The 2DULV

Definition 2

[14] Suppose \(\dot{s}_{a} ,\dot{s}_{b} \in S_{1} = \left\{ {s_{0} ,s_{1} , \ldots ,s_{l - 1} } \right\}\), \(\ddot{s}_{c} ,\ddot{s}_{d} \in S_{2} = \left\{ {s_{0} ,s_{1} , \ldots ,s_{t - 1} } \right\}\), and \(\hat{s} = ([\dot{s}_{a} ,\dot{s}_{b} ][\ddot{s}_{c} ,\ddot{s}_{d} ])\), in which \([\dot{s}_{a} ,\dot{s}_{b} ]\) is the I class ULV, expressing the evaluation value of the object, and \([\ddot{s}_{c} ,\ddot{s}_{d} ]\) is the II class ULV, denoting the DM’s subjective assessment of the reliability of the given evaluation; then \(\hat{s}\) is called a 2DULV.

To minimize the loss of given information, the 2DULVs are extended to continuous ones by Liu and Yu [21], such that \(\dot{s}_{a} ,\dot{s}_{b} \in \overline{S}_{\rm I} = \left\{ {\dot{s}_{\alpha } \left| {\alpha \in [0,q]} \right.} \right\}\) and \(\ddot{s}_{c} ,\ddot{s}_{d} \in \overline{S}_{{{\rm I}{\rm I}}} = \left\{ {\ddot{s}_{\alpha } \left| {\alpha \in [0,q^{\prime}]} \right.} \right\}\), where \(q\) and \(q^{\prime}\) are two large real numbers. Meanwhile,\(\hat{S}\) can be defined as the set of all 2DULVs.

Suppose \(\hat{s}_{1} = ([\dot{s}_{{a_{1} }} ,\dot{s}_{{b_{1} }} ][\ddot{s}_{{c_{1} }} ,\ddot{s}_{{d_{1} }} ])\) and \(\hat{s}_{2} = ([\dot{s}_{{a_{2} }} ,\dot{s}_{{b_{2} }} ][\ddot{s}_{{c_{2} }} ,\ddot{s}_{{d_{2} }} ])\) are any two 2DULVs, then there are the following operational laws [21]:

  1. (1)

    \(\hat{s}_{1} \oplus \hat{s}_{2} = ([\dot{s}_{{a_{1} }} ,\dot{s}_{{b_{1} }} ][\ddot{s}_{{c_{1} }} ,\ddot{s}_{{d_{1} }} ]) \oplus ([\dot{s}_{{a_{2} }} ,\dot{s}_{{b_{2} }} ][\ddot{s}_{{c_{2} }} ,\ddot{s}_{{d_{2} }} ]) = ([\dot{s}_{{a_{1} + a_{2} }} ,\dot{s}_{{b_{1} + b_{2} }} ][\ddot{s}_{{\min (c_{1} ,c_{2} )}} ,\ddot{s}_{{\min (d_{1} ,d_{2} )}} ])\)

  2. (2)

    \(\hat{s}_{1} \otimes \hat{s}_{2} = ([\dot{s}_{{a_{1} }} ,\dot{s}_{{b_{1} }} ][\ddot{s}_{{c_{1} }} ,\ddot{s}_{{d_{1} }} ]) \otimes ([\dot{s}_{{a_{2} }} ,\dot{s}_{{b_{2} }} ][\ddot{s}_{{c_{2} }} ,\ddot{s}_{{d_{2} }} ]) = ([\dot{s}_{{a_{1} \times a_{2} }} ,\dot{s}_{{b_{1} \times b_{2} }} ][\ddot{s}_{{\min (c_{1} ,c_{2} )}} ,\ddot{s}_{{\min (d_{1} ,d_{2} )}} ])\)

  3. (3)

    \(\lambda \hat{s}_{1} = \lambda ([\dot{s}_{{a_{1} }} ,\dot{s}_{{b_{1} }} ][\ddot{s}_{{c_{1} }} ,\ddot{s}_{{d_{1} }} ]) = ([\dot{s}_{{\lambda \times a_{1} }} ,\dot{s}_{{\lambda \times b_{1} }} ][\ddot{s}_{{c_{1} }} ,\ddot{s}_{{d_{1} }} ]), \, \lambda \ge 0\)

  4. (4)

    \(\left( {\hat{s}_{1} } \right)^{\lambda } = ([\dot{s}_{{a_{1} }} ,\dot{s}_{{b_{1} }} ][\ddot{s}_{{c_{1} }} ,\ddot{s}_{{d_{1} }} ])^{\lambda } = ([\dot{s}_{{\left( {a_{1} } \right)^{\lambda } }} ,\dot{s}_{{\left( {b_{1} } \right)^{\lambda } }} ][\ddot{s}_{{c_{1} }} ,\ddot{s}_{{d_{1} }} ]), \, \lambda \ge 0\)

Example 1

Let \(\hat{s}_{1} = ([\dot{s}_{5} ,\dot{s}_{5} ][\ddot{s}_{2} ,\ddot{s}_{3} ])\),\(\hat{s}_{2} = ([\dot{s}_{2} ,\dot{s}_{3} ][\ddot{s}_{3} ,\ddot{s}_{3} ])\), and \(\lambda = 2\), then:

  1. (1)

    \(\hat{s}_{1} \oplus \hat{s}_{2} = ([\dot{s}_{5 + 2} ,\dot{s}_{5 + 3} ][\ddot{s}_{\min (2,3)} ,\ddot{s}_{\min (3,3)} ]) = ([\dot{s}_{7} ,\dot{s}_{8} ],[\ddot{s}_{2} ,\ddot{s}_{3} ])\)

  2. (2)

    \(\hat{s}_{1} \otimes \hat{s}_{2} = ([\dot{s}_{5 \times 2} ,\dot{s}_{5 \times 3} ][\ddot{s}_{\min (2,3)} ,\ddot{s}_{\min (3,3)} ]) = ([\dot{s}_{10} ,\dot{s}_{15} ],[\ddot{s}_{2} ,\ddot{s}_{3} ])\)

  3. (3)

    \(2\hat{s}_{2} = ([\dot{s}_{2 \times 2} ,\dot{s}_{2 \times 3} ][\ddot{s}_{3} ,\ddot{s}_{3} ]) = ([\dot{s}_{4} ,\dot{s}_{6} ][\ddot{s}_{3} ,\ddot{s}_{3} ])\)

  4. (4)

    \((\hat{s}_{2} )^{2} = ([\dot{s}_{{2^{2} }} ,\dot{s}_{{3^{2} }} ][\ddot{s}_{3} ,\ddot{s}_{3} ]) = ([\dot{s}_{4} ,\dot{s}_{9} ][\ddot{s}_{3} ,\ddot{s}_{3} ])\)
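To make the operational laws concrete, the following minimal Python sketch (an illustration added here, not part of the original formulation) stores a 2DULV \(([\dot{s}_{a} ,\dot{s}_{b} ][\ddot{s}_{c} ,\ddot{s}_{d} ])\) as a plain tuple of subscripts \((a,b,c,d)\) and reproduces Example 1.

```python
# Minimal sketch of the 2DULV operational laws (1)-(4), assuming a 2DULV
# ([s_a, s_b][s_c, s_d]) is represented as a tuple (a, b, c, d) of subscripts.
from typing import Tuple

DULV = Tuple[float, float, float, float]  # (a, b, c, d)

def add(x: DULV, y: DULV) -> DULV:
    # I class subscripts are added; II class subscripts take the minimum
    return (x[0] + y[0], x[1] + y[1], min(x[2], y[2]), min(x[3], y[3]))

def mul(x: DULV, y: DULV) -> DULV:
    # I class subscripts are multiplied; II class subscripts take the minimum
    return (x[0] * y[0], x[1] * y[1], min(x[2], y[2]), min(x[3], y[3]))

def scale(lam: float, x: DULV) -> DULV:
    # Scalar multiplication rescales only the I class subscripts
    return (lam * x[0], lam * x[1], x[2], x[3])

def power(x: DULV, lam: float) -> DULV:
    # Exponentiation affects only the I class subscripts
    return (x[0] ** lam, x[1] ** lam, x[2], x[3])

s1 = (5, 5, 2, 3)   # ([s5, s5][s2, s3])
s2 = (2, 3, 3, 3)   # ([s2, s3][s3, s3])
print(add(s1, s2))    # (7, 8, 2, 3)
print(mul(s1, s2))    # (10, 15, 2, 3)
print(scale(2, s2))   # (4, 6, 3, 3)
print(power(s2, 2))   # (4, 9, 3, 3)
```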

Let \(\hat{s}_{1} = ([\dot{s}_{{a_{1} }} ,\dot{s}_{{b_{1} }} ][\ddot{s}_{{c_{1} }} ,\ddot{s}_{{d_{1} }} ])\), \(\hat{s}_{2} = ([\dot{s}_{{a_{2} }} ,\dot{s}_{{b_{2} }} ][\ddot{s}_{{c_{2} }} ,\ddot{s}_{{d_{2} }} ])\), and \(\hat{s}_{3} = ([\dot{s}_{{a_{3} }} ,\dot{s}_{{b_{3} }} ][\ddot{s}_{{c_{3} }} ,\ddot{s}_{{d_{3} }} ])\) be any three 2DULVs, then they satisfy the following properties [21]:

  1. (1)

    \(\hat{s}_{1} \oplus \hat{s}_{2} = \hat{s}_{2} \oplus \hat{s}_{1}\)

  2. (2)

    \(\hat{s}_{1} \otimes \hat{s}_{2} = \hat{s}_{2} \otimes \hat{s}_{1}\)

  3. (3)

    \(\hat{s}_{1} \oplus \hat{s}_{2} \oplus \hat{s}_{3} = \hat{s}_{1} \oplus (\hat{s}_{2} \oplus \hat{s}_{3} )\)

  4. (4)

    \(\hat{s}_{1} \otimes \hat{s}_{2} \otimes \hat{s}_{3} = \hat{s}_{1} \otimes (\hat{s}_{2} \otimes \hat{s}_{3} )\)

  5. (5)

    \(\hat{s}_{1} \otimes (\hat{s}_{2} \oplus \hat{s}_{3} ) = (\hat{s}_{1} \otimes \hat{s}_{2} ) \oplus (\hat{s}_{1} \otimes \hat{s}_{3} )\)

  6. (6)

    \(\lambda (\hat{s}_{1} \oplus \hat{s}_{2} ) = (\lambda \hat{s}_{1} ) \oplus (\lambda \hat{s}_{2} ){, }\lambda \ge 0\)

  7. (7)

    \((\lambda_{1} + \lambda_{2} )\hat{s}_{1} = (\lambda_{1} \hat{s}_{1} ) \oplus (\lambda_{2} \hat{s}_{1} ), \, \lambda_{1} ,\lambda_{2} \ge 0\)

Definition 3

[21] Suppose \(\hat{s} = ([\dot{s}_{\vartheta } ,\dot{s}_{\upsilon } ][\ddot{s}_{c} ,\ddot{s}_{d} ])\) is a 2DULV, then the expectation of \(\hat{s}\) is defined as:

$$ E(\hat{s}) = \frac{\vartheta + \upsilon }{{2 \times (l - 1)}} \times \frac{c + d}{{2 \times (t - 1)}} $$
(1)

where \(l\) and \(t\) represent the number of I and II class LTs, respectively.

For two 2DULVs \(\hat{s}_{1} = ([\dot{s}_{{a_{1} }} ,\dot{s}_{{b_{1} }} ][\ddot{s}_{{c_{1} }} ,\ddot{s}_{{d_{1} }} ])\) and \(\hat{s}_{2} = ([\dot{s}_{{a_{2} }} ,\dot{s}_{{b_{2} }} ][\ddot{s}_{{c_{2} }} ,\ddot{s}_{{d_{2} }} ])\), if \(E(\hat{s}_{1} ) \ge E(\hat{s}_{2} )\), then \(\hat{s}_{1} \ge \hat{s}_{2}\), and vice versa.

Example 2

Suppose the I class LTS is \(S_{\rm I} = (\dot{s}_{0} ,\dot{s}_{1} ,\dot{s}_{2} ,\dot{s}_{3} ,\dot{s}_{4} ,\dot{s}_{5} ,\dot{s}_{6} )\) and the II class LTS is \(S_{{{\rm I}{\rm I}}} = (\ddot{s}_{0} ,\ddot{s}_{1} ,\ddot{s}_{2} ,\ddot{s}_{3} ,\ddot{s}_{4} )\), \(\hat{s}_{1} = ([\dot{s}_{4} ,\dot{s}_{5} ][\ddot{s}_{2} ,\ddot{s}_{3} ])\), and \(\hat{s}_{2} = ([\dot{s}_{2} ,\dot{s}_{3} ][\ddot{s}_{3} ,\ddot{s}_{3} ])\), then \(E(\hat{s}_{1} ) = \frac{4 + 5}{{2 \times (7 - 1)}} \times \frac{2 + 3}{{2 \times (5 - 1)}} = 0.4688\), \(E(\hat{s}_{2} ) = \frac{2 + 3}{{2 \times (7 - 1)}} \times \frac{3 + 3}{{2 \times (5 - 1)}} = 0.3125\), and \(E(\hat{s}_{1} ) > E(\hat{s}_{2} )\).
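The expectation in Eq. (1) can be computed directly from the subscripts; the short sketch below (using the same tuple representation as in the earlier sketch) reproduces the values of Example 2.

```python
# A small sketch of the expectation in Eq. (1); l and t are the sizes of the
# I and II class linguistic term sets, and s is a tuple (a, b, c, d).
def expectation(s, l: int, t: int) -> float:
    a, b, c, d = s
    return (a + b) / (2 * (l - 1)) * (c + d) / (2 * (t - 1))

print(expectation((4, 5, 2, 3), l=7, t=5))  # 0.46875, as in Example 2
print(expectation((2, 3, 3, 3), l=7, t=5))  # 0.3125
```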

Definition 4

[21] Let \(\hat{s}_{1} = ([\dot{s}_{{a_{1} }} ,\dot{s}_{{b_{1} }} ][\ddot{s}_{{c_{1} }} ,\ddot{s}_{{d_{1} }} ])\) and \(\hat{s}_{2} = ([\dot{s}_{{a_{2} }} ,\dot{s}_{{b_{2} }} ][\ddot{s}_{{c_{2} }} ,\ddot{s}_{{d_{2} }} ])\) be two 2DULVs, then their distance is defined as:

$$ \begin{gathered} d(\hat{s}_{1} ,\hat{s}_{2} ) = \frac{1}{4 \times (l - 1)}\left( {\left| {a_{1} \times \frac{{c_{1} }}{t - 1} - a_{2} \times \frac{{c_{2} }}{t - 1}} \right|} \right. + \left| {a_{1} \times \frac{{d_{1} }}{t - 1} - a_{2} \times \frac{{d_{2} }}{t - 1}} \right| \hfill \\ { + }\left| {b_{1} \times \frac{{c_{1} }}{t - 1} - b_{2} \times \frac{{c_{2} }}{t - 1}} \right| + \left. {\left| {b_{1} \times \frac{{d_{1} }}{t - 1} - b_{2} \times \frac{{d_{2} }}{t - 1}} \right|} \right) \hfill \\ \end{gathered} $$
(2)

where \(l\) and \(t\) represent the number of I and II class LTs, respectively.

Example 3

Suppose the I class LTS is \(S_{\rm I} = (\dot{s}_{0} ,\dot{s}_{1} ,\dot{s}_{2} ,\dot{s}_{3} ,\dot{s}_{4} ,\dot{s}_{5} ,\dot{s}_{6} )\) and the II class LTS is \(S_{{{\rm I}{\rm I}}} = (\ddot{s}_{0} ,\ddot{s}_{1} ,\ddot{s}_{2} ,\ddot{s}_{3} ,\ddot{s}_{4} )\), \(\hat{s}_{1} = ([\dot{s}_{4} ,\dot{s}_{5} ][\ddot{s}_{2} ,\ddot{s}_{3} ])\), and \(\hat{s}_{2} = ([\dot{s}_{2} ,\dot{s}_{3} ][\ddot{s}_{3} ,\ddot{s}_{3} ])\), then the distance between them is

$$ \begin{gathered} d(\hat{s}_{1} ,\hat{s}_{2} ) = \frac{1}{4 \times (7 - 1)} \times \left( {\left| {4 \times \frac{2}{5 - 1} - 2 \times \frac{3}{5 - 1}} \right| + \left| {4 \times \frac{3}{5 - 1} - 2 \times \frac{3}{5 - 1}} \right| + \left| {5 \times \frac{2}{5 - 1} - 3 \times \frac{3}{5 - 1}} \right| + \left| {5 \times \frac{3}{5 - 1} - 3 \times \frac{3}{5 - 1}} \right|} \right) \hfill \\ \, = 0.1563 \hfill \\ \end{gathered} $$
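A corresponding sketch of the distance in Eq. (2), again assuming the tuple representation \((a,b,c,d)\) used in the earlier sketches; it reproduces the value of Example 3.

```python
# A minimal sketch of the distance in Eq. (2) between two 2DULVs.
def distance(x, y, l: int, t: int) -> float:
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    terms = (abs(a1 * c1 / (t - 1) - a2 * c2 / (t - 1)),
             abs(a1 * d1 / (t - 1) - a2 * d2 / (t - 1)),
             abs(b1 * c1 / (t - 1) - b2 * c2 / (t - 1)),
             abs(b1 * d1 / (t - 1) - b2 * d2 / (t - 1)))
    return sum(terms) / (4 * (l - 1))

print(distance((4, 5, 2, 3), (2, 3, 3, 3), l=7, t=5))  # 0.15625 ≈ 0.1563, as in Example 3
```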

Definition 5

[18] Let \(\hat{s}_{j} = ([\dot{s}_{{a_{j} }} ,\dot{s}_{{b_{j} }} ][\ddot{s}_{{c_{j} }} ,\ddot{s}_{{d_{j} }} ]) \, (j = 1,2, \ldots ,n)\) be a collection of 2DULVs, and let 2DULGWA: \(\Omega^{n} \to \Omega\). If

$$ 2DULGWA(\hat{s}_{1} ,\hat{s}_{2} , \ldots ,\hat{s}_{n} ) = \left( {\sum\nolimits_{j = 1}^{n} {\omega_{j} \hat{s}_{j}^{\lambda } } } \right)^{1/\lambda } $$
(3)

where \(\Omega\) is the set of all 2DULVs, and \(\omega_{j}\) is the weight of \(\hat{s}_{j} \, (j = 1,2, \ldots ,n)\) with \(\omega_{j} > 0,\sum\nolimits_{j = 1}^{n} {\omega_{j} = 1}\),\(\lambda\) is a parameter satisfying \(\lambda \in ( - \infty ,0) \cup (0, + \infty )\), then 2DULGWA is called a 2DULGWA operator.

Example 4

Suppose the I class LTS is \(S_{\rm I} = (\dot{s}_{0} ,\dot{s}_{1} ,\dot{s}_{2} ,\dot{s}_{3} ,\dot{s}_{4} ,\dot{s}_{5} ,\dot{s}_{6} )\) and the II class LTS is \(S_{{{\rm I}{\rm I}}} = (\ddot{s}_{0} ,\ddot{s}_{1} ,\ddot{s}_{2} ,\ddot{s}_{3} ,\ddot{s}_{4} )\), \(\hat{s}_{1} = ([\dot{s}_{4} ,\dot{s}_{5} ][\ddot{s}_{2} ,\ddot{s}_{3} ])\), \(\hat{s}_{2} = ([\dot{s}_{2} ,\dot{s}_{3} ][\ddot{s}_{3} ,\ddot{s}_{3} ])\), \(\hat{s}_{3} = ([\dot{s}_{4} ,\dot{s}_{5} ][\ddot{s}_{4} ,\ddot{s}_{4} ])\), and \(\lambda = 2\), and the weight vector of \(\hat{s}_{j} \, (j = 1,2,3)\) is \(\omega = (0.3,0.4,0.3)\), then:

$$ \begin{gathered} 2DULGWA(\hat{s}_{1} ,\hat{s}_{2} ,\hat{s}_{3} ) = \left( {0.3\left( {\hat{s}_{1} } \right)^{2} \oplus 0.4\left( {\hat{s}_{2} } \right)^{2} \oplus 0.3\left( {\hat{s}_{3} } \right)^{2} } \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} \hfill \\ = \left( {0.3([\dot{s}_{{4^{2} }} ,\dot{s}_{{5^{2} }} ][\ddot{s}_{2} ,\ddot{s}_{3} ]) \oplus 0.4([\dot{s}_{{2^{2} }} ,\dot{s}_{{3^{2} }} ][\ddot{s}_{3} ,\ddot{s}_{3} ]) \oplus 0.3([\dot{s}_{{4^{2} }} ,\dot{s}_{{5^{2} }} ][\ddot{s}_{4} ,\ddot{s}_{4} ])} \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} \hfill \\ = \left( {([\dot{s}_{0.3 \times 16} ,\dot{s}_{0.3 \times 25} ][\ddot{s}_{2} ,\ddot{s}_{3} ]) \oplus ([\dot{s}_{0.4 \times 4} ,\dot{s}_{0.4 \times 9} ][\ddot{s}_{3} ,\ddot{s}_{3} ]) \oplus ([\dot{s}_{0.3 \times 16} ,\dot{s}_{0.3 \times 25} ][\ddot{s}_{4} ,\ddot{s}_{4} ])} \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} \hfill \\ = ([\dot{s}_{4.8 + 1.6 + 4.8} ,\dot{s}_{7.5 + 3.6 + 7.5} ][\ddot{s}_{\min (2,3,4)} ,\ddot{s}_{\min (3,3,4)} ])^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} \hfill \\ = ([\dot{s}_{{11.2^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} }} ,\dot{s}_{{18.6^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}}} }} ][\ddot{s}_{2} ,\ddot{s}_{3} ]) \hfill \\ = ([\dot{s}_{3.35} ,\dot{s}_{4.31} ][\ddot{s}_{2} ,\ddot{s}_{3} ]) \hfill \\ \end{gathered} $$
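Because scalar multiplication and exponentiation leave the II class part unchanged and addition takes the minimum of the II class subscripts, the 2DULGWA operator reduces to a generalized weighted mean of the I class subscripts together with the minimum of the II class subscripts. The following self-contained sketch uses this closed form and reproduces Example 4.

```python
# A compact sketch of the 2DULGWA operator in Eq. (3); a 2DULV is again
# assumed to be a tuple (a, b, c, d) of subscripts, and the weights are
# positive and sum to one.
def dulgwa(values, weights, lam: float):
    a = sum(w * s[0] ** lam for w, s in zip(weights, values)) ** (1 / lam)
    b = sum(w * s[1] ** lam for w, s in zip(weights, values)) ** (1 / lam)
    c = min(s[2] for s in values)   # II class part: minimum over all arguments
    d = min(s[3] for s in values)
    return (a, b, c, d)

s1, s2, s3 = (4, 5, 2, 3), (2, 3, 3, 3), (4, 5, 4, 4)
print(dulgwa([s1, s2, s3], [0.3, 0.4, 0.3], lam=2))
# ≈ (3.35, 4.31, 2, 3), matching Example 4
```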

The MABAC method

In this subsection, we introduce the basic steps of the classical MABAC method for obtaining the optimal alternative.

Step 1. Build the original decision matrix (DMA) \(X = \left[ {x_{ij} } \right]_{m \times n}\), where \(x_{ij}\) is the evaluation value of alternative \(A_{i}\) with respect to attribute \(C_{j}\) \(\left( {i = 1,2, \ldots ,m;j = 1,2, \ldots ,n} \right)\).

Step 2. Normalize the DMA \(X = \left[ {x_{ij} } \right]_{m \times n}\) into \(\tilde{X} = \left[ {\tilde{x}_{ij} } \right]_{m \times n}\).

Step 3. Calculate the weighted DMA \(R = \left[ {r_{ij} } \right]_{m \times n}\) using the following equation:

$$ r_{ij} = \omega_{j} \left( {\tilde{x}_{ij} + 1} \right) $$
(4)

where \(\omega_{j}\) is the weight of the attribute \(C_{j}\).

Step 4. Determine the border approximation area (AA) matrix \(G = \left[ {g_{j} } \right]_{1 \times n}\):

$$ g_{j} = \left( {\prod\nolimits_{i = 1}^{m} {r_{ij} } } \right)^{\frac{1}{m}} $$
(5)
$$ G = \left[ {g_{1} \quad g_{2} \quad \cdots \quad g_{n} } \right] $$
(6)

Step 5. Calculate the distance matrix \(T = \left[ {t_{ij} } \right]_{m \times n}\). The distance from the border AA to alternatives is obtained by:

$$ t_{ij} = r_{ij} - g_{j} $$
(7)

The alternative \(A_{i}\) may belong to the border AA (BAA) \(G\), the upper AA (UAA) \(G^{ + }\), or the lower AA (LAA) \(G^{ - }\). Obviously, the UAA \(G^{ + }\) contains the best alternative \(A^{ + }\), and the LAA \(G^{ - }\) contains the worst alternative \(A^{ - }\) (Fig. 1).

Fig. 1 Presentation of the UAA (\(G^{ + }\)), BAA (\(G\)) and LAA (\(G^{ - }\))

Step 6. Select the optimal alternative according to the total distance \(S_{i}\). The total distances of each alternative from the BAA are determined as:

$$ S_{i} = \sum\nolimits_{j = 1}^{n} {t_{ij} } $$
(8)

Then, the alternative with the highest \(S_{i}\) is selected as the optimal choice.
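To illustrate the classical procedure, the sketch below runs Steps 3–6 on a small crisp example; the matrix X and the weight vector w are hypothetical values chosen only for illustration, and NumPy is assumed to be available.

```python
# A compact sketch of MABAC Steps 3-6 for a crisp, already-normalized decision
# matrix (hypothetical example data).
import numpy as np

X = np.array([[0.8, 0.6, 0.4],          # normalized decision matrix, m x n
              [0.5, 0.9, 0.7],
              [0.3, 0.4, 1.0]])
w = np.array([0.5, 0.3, 0.2])           # attribute weights

R = w * (X + 1)                          # Step 3: weighted matrix, Eq. (4)
G = R.prod(axis=0) ** (1 / R.shape[0])   # Step 4: border approximation area, Eq. (5)
T = R - G                                # Step 5: distances to the BAA, Eq. (7)
S = T.sum(axis=1)                        # Step 6: total distance per alternative, Eq. (8)
best = int(S.argmax())                   # index of the alternative with the largest S_i
print(G, S, best)
```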

Models for the attribute weights

In this section, we establish models to determine the subjective weights based on the BWM, the objective weights based on the maximum deviation principle, and the combination weights based on the maximum comprehensive evaluation value.

Model for the subjective weights of attributes based on BWM

Suppose \(C = \{ C_{1} ,C_{2} , \ldots ,C_{n} \}\) is the collection of attributes, then the best and the worst attributes can be identified by DMs. Thus, we can obtain the vectors of Best-to-Others (BTO)\(\zeta_{B} = \left( {\zeta_{B1} ,\zeta_{B2} , \ldots ,\zeta_{Bn} } \right)\) and Others-to-Worst (OTW)\(\zeta_{W} = \left( {\zeta_{1W} ,\zeta_{2W} , \ldots ,\zeta_{nW} } \right)\) by giving the preferences of the best attribute over all the others and all the attributes over the worst on a scale of 1 to 9. It is clear that \(\zeta_{BB} = \zeta_{WW} = 1\).

Ideally, the weights \(\left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{n} } \right)\) would satisfy \({{\omega_{B} } / {\omega_{j} }} = \zeta_{Bj}\) and \({{\omega_{j} } / {\omega_{W} }} = \zeta_{jW}\) for every \(j\). Therefore, we should minimize the maximum absolute difference \(\max_{j} \left\{ {\left| {\frac{{\omega_{B} }}{{\omega_{j} }} - \zeta_{Bj} } \right|,\left| {\frac{{\omega_{j} }}{{\omega_{W} }} - \zeta_{jW} } \right|} \right\}\), and the model can be formulated as follows [28, 29]:

$$ \begin{gathered} \min \, \mathop {\max }\nolimits_{j} \left\{ {\left| {\frac{{\omega_{B} }}{{\omega_{j} }} - \zeta_{Bj} } \right|,\left| {\frac{{\omega_{j} }}{{\omega_{W} }} - \zeta_{jW} } \right|} \right\} \hfill \\ {\text{s.t.}} \hfill \\ \sum\nolimits_{j = 1}^{n} {\omega_{j} } = 1 \hfill \\ \omega_{j} \ge 0,{\text{ for all }}j \hfill \\ \end{gathered} $$
(9)

Then, we can convert this model to the following form:

$$ \begin{gathered} \min \, \xi \hfill \\ {\text{s.t.}} \hfill \\ \left| {\omega_{B} - \zeta_{Bj} \omega_{j} } \right| \le \xi , \, {\text{for all }}j \hfill \\ \left| {\omega_{j} - \zeta_{jW} \omega_{W} } \right| \le \xi , \, {\text{for all }}j \hfill \\ \sum\nolimits_{j = 1}^{n} {\omega_{j} } = 1 \hfill \\ \omega_{j} \ge 0,{\text{for all }}j \hfill \\ \end{gathered} $$
(10)

Solving this programming model, we obtain the subjective weights \(\left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{n} } \right)\) and the value of \(\xi\).

Notably, the consistency of pairwise comparisons should be tested according to the consistency ratio (CR).

$$ CR = \frac{\xi }{CI} $$
(11)

where \(CI \) is a consistency index shown in Table 1. A smaller value of \(CR \)(close to zero) indicates higher consistency of the pairwise comparisons, whereas a larger value of \(CR\)(close to one) indicates inferior consistency [51]. If \(CR \ge 0.1\), the comparisons have to be adjusted [39].

Table 1 CI table
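Model (10) is linear in \((\omega_{1} , \ldots ,\omega_{n} ,\xi )\), so it can be solved with any LP solver. The sketch below (assuming SciPy is available) encodes the model for the BTO and OTW vectors used later in the numerical example; the printed CR uses \(CI = 2.3\), the consistency index corresponding to the maximum preference value 5.

```python
# A sketch of the linearized BWM model (10) as a linear program (SciPy assumed).
import numpy as np
from scipy.optimize import linprog

bto = np.array([1, 3, 5, 2])   # zeta_B: best attribute (C1) over the others
otw = np.array([5, 2, 1, 3])   # zeta_W: each attribute over the worst (C3)
n, best, worst = len(bto), 0, 2

c = np.zeros(n + 1); c[-1] = 1.0          # variables [w_1..w_n, xi]; minimize xi
A_ub, b_ub = [], []
for j in range(n):
    for sign in (+1, -1):
        # sign*(w_best - bto[j]*w_j) <= xi
        row = np.zeros(n + 1); row[best] += sign; row[j] -= sign * bto[j]; row[-1] = -1
        A_ub.append(row); b_ub.append(0.0)
        # sign*(w_j - otw[j]*w_worst) <= xi
        row = np.zeros(n + 1); row[j] += sign; row[worst] -= sign * otw[j]; row[-1] = -1
        A_ub.append(row); b_ub.append(0.0)
A_eq, b_eq = [np.r_[np.ones(n), 0.0]], [1.0]   # weights sum to one

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
w, xi = res.x[:n], res.x[-1]
# w should be close to the subjective weights reported in the numerical example
print(w, xi, xi / 2.3)    # weights, xi, and CR with CI = 2.3 (Table 1, max value 5)
```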

Model for the objective weights by the maximum deviation principle

For the attribute \(C_{j}\), the deviation of the evaluation values of alternative \(A_{i}\) from that of all other alternatives can be denoted as \(D_{ij} (\nu_{j} ) = \sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )\nu_{j} }\), where \(\hat{r}_{ij}\) and \(\hat{r}_{lj}\) are the normalized evaluation values expressed by 2DULVs, and \(d(\hat{r}_{ij} ,\hat{r}_{lj} )\) is the distance between them which can be determined according to Eq. (2). We use \(D_{j} (\nu_{j} ) = \sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )v_{j} } }\) to denote the total deviation among all the alternatives under the attribute \(C_{j}\) and \(D(\nu_{j} ) = \sum\nolimits_{j = 1}^{n} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )\nu_{j} } } }\) to represent the total deviation among all the alternatives for all the attributes. Then, the following optimal model can be formulated [15]:

$$ \begin{gathered} \max D(\nu_{j} ) = \sum\nolimits_{j = 1}^{n} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )\nu_{j} } } } \hfill \\ s.t.\sum\nolimits_{j = 1}^{n} {\nu_{j}^{2} } = 1,\nu_{j} \ge 0,j = 1,2, \ldots ,n \hfill \\ \end{gathered} $$
(12)

Then, we construct the Lagrange multiplier function:

$$ L(\nu_{j} ,\eta ) = \sum\nolimits_{j = 1}^{n} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )} \nu_{j} + \eta (\sum\nolimits_{j = 1}^{n} {\nu_{j}^{2} } - 1)} } $$
(13)

and have

$$ \left\{ \begin{gathered} \frac{{\partial L(\nu_{j} ,\eta )}}{{\partial \nu_{j} }} = \sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )} } + 2\eta \nu_{j} = 0 \hfill \\ \frac{{\partial L(\nu_{j} ,\eta )}}{\partial \eta } = \sum\nolimits_{j = 1}^{n} {\nu_{j}^{2} } - 1 = 0 \hfill \\ \end{gathered} \right. $$
(14)

Solving the model, we can obtain:

$$ \left\{ \begin{gathered} 2\eta = \sqrt {\sum\nolimits_{j = 1}^{n} {(\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )} )^{2} } } } \hfill \\ \nu_{j} = \frac{{\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )} } }}{{\sqrt {\sum\nolimits_{j = 1}^{n} {(\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )} )^{2} } } } }} \hfill \\ \end{gathered} \right. $$
(15)

After normalizing the vector \(\nu = (\nu_{1} ,\nu_{2} , \ldots ,\nu_{n} )\), the objective weights can be calculated as:

$$ \nu_{j} = \frac{{\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )} } }}{{\sum\nolimits_{j = 1}^{n} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )} } } }} $$
(16)
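A minimal sketch of Eq. (16): for each attribute, the pairwise distances between the evaluation values of all alternatives are summed and then normalized. It assumes the `distance` function and the tuple representation of 2DULVs from the earlier sketches, with `R` an \(m \times n\) nested list of such tuples.

```python
# Objective weights by the maximum deviation principle, Eq. (16); reuses the
# distance() helper from the earlier sketch.
def objective_weights(R, l: int, t: int):
    m, n = len(R), len(R[0])
    dev = [sum(distance(R[i][j], R[k][j], l, t)
               for i in range(m) for k in range(m))
           for j in range(n)]                 # total deviation per attribute
    total = sum(dev)
    return [d_j / total for d_j in dev]       # normalized objective weights

# usage (hypothetical group matrix R_group of 2DULV tuples):
# nu = objective_weights(R_group, l=7, t=5)
```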

Model for the combination weights based on the maximum comprehensive evaluation value

Suppose \(W = (w_{1} ,w_{2} , \ldots ,w_{n} )\) is the combination weight vector, then the weighted average value for each alternative can be denoted as:

$$ ER_{i} = \sum\nolimits_{j = 1}^{n} {(w_{j} E(\hat{r}_{ij} )) = \sum\nolimits_{j = 1}^{n} {[(\alpha \omega_{j} + \beta \nu_{j} )E(\hat{r}_{ij} )]} } $$
(17)

where \(E(\hat{r}_{ij} )\) is the expectation of the normalized evaluation value \(\hat{r}_{ij}\) and can be calculated by Eq. (1).

The values of \(\alpha\) and \(\beta\) should be chosen to maximize the weighted average value \(ER_{i}\) of each alternative, i.e., \(\max \, E = (ER_{1} ,ER_{2} , \ldots ,ER_{m} )\).

Since there is no preference among the alternatives, \(\max \, E = (ER_{1} ,ER_{2} , \ldots ,ER_{m} )\) is integrated into the following single-objective model [15]:

$$ \begin{gathered} \max \mathrm{E} = \sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\alpha \omega_{j} + \beta \nu_{j} )E(\hat{r}_{ij} )} ]^{2} } \hfill \\ \mathrm{s.t. } \hfill \\ \alpha + \beta = 1, \hfill \\ \alpha \ge 0,\beta \ge 0 \hfill \\ \end{gathered} $$
(18)

Then, we can get:

$$ L(\alpha ,\beta ,\eta ) = \sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\alpha \omega_{j} + \beta \nu_{j} )E(\hat{r}_{ij} )} ]^{2} } + \eta (\alpha + \beta - 1) $$
(19)

Let

$$ \left\{ \begin{gathered} \frac{\partial L(\alpha ,\beta ,\eta )}{{\partial \alpha }} = 2\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {([(\alpha \omega_{j} + \beta \nu_{j} )E(\hat{r}_{ij} )} ]} \times (\omega_{j} E(\hat{r}_{ij} ))) + \eta = 0 \hfill \\ \frac{\partial L(\alpha ,\beta ,\eta )}{{\partial \beta }} = 2\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {([(\alpha \omega_{j} + \beta \nu_{j} )E(\hat{r}_{ij} )} ]} \times (\nu_{j} E(\hat{r}_{ij} ))) + \eta = 0 \hfill \\ \frac{\partial L(\alpha ,\beta ,\eta )}{{\partial \eta }} = \alpha + \beta - 1 = 0 \hfill \\ \end{gathered} \right. $$
(20)

Solving Eq. (20), we can obtain the optimal values of \(\alpha\) and \(\beta\):

$$ \left\{ \begin{gathered} \alpha = \frac{{\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\nu_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ]} }}{{\left( {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\nu_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ] + \sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\omega_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ]} } } \right)}} \hfill \\ \beta = \frac{{\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\omega_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ]} }}{{\left( {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\nu_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ] + \sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\omega_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ]} } } \right)}} \hfill \\ \end{gathered} \right. $$
(21)

Finally, the combination weights of attributes can be determined by \(W = \alpha \times \omega + \beta \times \nu\).
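Equation (21) only requires the two weight vectors and the matrix of expectations, so the combination weights can be computed in a few lines, as in the following sketch (NumPy assumed; `omega`, `nu`, and `E` denote the subjective weights, the objective weights, and the \(m \times n\) matrix of expectations \(E(\hat{r}_{ij})\)).

```python
# Combination coefficients alpha, beta from Eq. (21) and the combined weights
# W = alpha*omega + beta*nu.
import numpy as np

def combination_weights(omega, nu, E):
    omega, nu, E = map(np.asarray, (omega, nu, E))
    E2 = E ** 2                                       # (E(r_ij))^2, shape m x n
    num_alpha = np.sum((nu ** 2 - omega * nu) * E2)   # sum over i and j
    num_beta = np.sum((omega ** 2 - omega * nu) * E2)
    alpha = num_alpha / (num_alpha + num_beta)
    beta = num_beta / (num_alpha + num_beta)
    return alpha * omega + beta * nu, alpha, beta
```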

The 2-dimension uncertain linguistic MABAC method

In this section, we propose an extended MABAC method for MAGDM problems with 2DULV information under the situation of unknown attribute weights.

Consider a MAGDM problem as follows: \(\Upsilon = \left\{ {\Upsilon_{1} ,\Upsilon_{2} , \ldots ,\Upsilon_{m} } \right\}\) is the set of alternatives, and \(C = \{ C_{1} ,C_{2} , \ldots ,C_{n} \}\) is the set of attributes, whose weights are unknown. The group of DMs is represented by \(E = \{ E_{1} ,E_{2} , \ldots ,E_{p} \}\), and their weights are \(\gamma = (\gamma_{1} ,\gamma_{2} , \ldots ,\gamma_{p} )^{T}\) (\(0 \le \gamma_{k} \le 1\), \(\sum\nolimits_{k = 1}^{p} {\gamma_{k} } = 1\)), expressing the authority of the experts. The evaluation of the alternative \(\Upsilon_{i}\) under the attribute \(C_{j}\) given by DM \(E_{k}\) is described by a 2DULV \(\hat{s}^{k}_{ij} = ([\dot{s}_{{a_{ij} }}^{k} ,\dot{s}_{{b_{ij} }}^{k} ][\ddot{s}_{{c_{ij} }}^{k} ,\ddot{s}_{{d_{ij} }}^{k} ])\), where \([\dot{s}_{{a_{ij} }}^{k} ,\dot{s}_{{b_{ij} }}^{k} ]\) is the I class ULT, \(\dot{s}_{{a_{ij} }}^{k} ,\dot{s}_{{b_{ij} }}^{k} \in S_{\rm I}\), \(S_{\rm I} = (\dot{s}_{0} ,\dot{s}_{1} , \ldots ,\dot{s}_{l - 1} )\), and \([\ddot{s}_{{c_{ij} }}^{k} ,\ddot{s}_{{d_{ij} }}^{k} ]\) is the II class ULT, \(\ddot{s}_{{c_{ij} }}^{k} ,\ddot{s}_{{d_{ij} }}^{k} \in S_{{{\rm I}{\rm I}}}\), \(S_{{{\rm I}{\rm I}}} = (\ddot{s}_{0} ,\ddot{s}_{1} , \ldots ,\ddot{s}_{t - 1} )\). The detailed steps of the proposed 2DUL-MABAC method are shown in Fig. 2 and described as follows:

Fig. 2 The steps of the proposed approach

Step 1. Construct the individual DMA of the DM \(E_{k}\):

$$ \hat{S}^{k} = \left[ {\begin{array}{*{20}c} {([\dot{s}^{k}_{{a_{11} }} ,\dot{s}^{k}_{{b_{11} }} ][\ddot{s}^{k}_{{c_{11} }} ,\ddot{s}^{k}_{{d_{11} }} ])} & {([\dot{s}^{k}_{{a_{12} }} ,\dot{s}^{k}_{{b_{12} }} ][\ddot{s}^{k}_{{c_{12} }} ,\ddot{s}^{k}_{{d_{12} }} ])} & \cdots & {([\dot{s}^{k}_{{a_{1n} }} ,\dot{s}^{k}_{{b_{1n} }} ][\ddot{s}^{k}_{{c_{1n} }} ,\ddot{s}^{k}_{{d_{1n} }} ])} \\ {([\dot{s}^{k}_{{a_{21} }} ,\dot{s}^{k}_{{b_{21} }} ][\ddot{s}^{k}_{{c_{21} }} ,\ddot{s}^{k}_{{d_{21} }} ])} & {([\dot{s}^{k}_{{a_{22} }} ,\dot{s}^{k}_{{b_{22} }} ][\ddot{s}^{k}_{{c_{22} }} ,\ddot{s}^{k}_{{d_{22} }} ])} & \cdots & {([\dot{s}^{k}_{{a_{2n} }} ,\dot{s}^{k}_{{b_{2n} }} ][\ddot{s}^{k}_{{c_{2n} }} ,\ddot{s}^{k}_{{d_{2n} }} ])} \\ \vdots & \vdots & \vdots & \vdots \\ {([\dot{s}^{k}_{{a_{m1} }} ,\dot{s}^{k}_{{b_{m1} }} ][\ddot{s}^{k}_{{c_{m1} }} ,\ddot{s}^{k}_{{d_{m1} }} ])} & {([\dot{s}^{k}_{{a_{m2} }} ,\dot{s}^{k}_{{b_{m2} }} ][\ddot{s}^{k}_{{c_{m2} }} ,\ddot{s}^{k}_{{d_{m2} }} ])} & \cdots & {([\dot{s}^{k}_{{a_{mn} }} ,\dot{s}^{k}_{{b_{mn} }} ][\ddot{s}^{k}_{{c_{mn} }} ,\ddot{s}^{k}_{{d_{mn} }} ])} \\ \end{array} } \right] $$
(22)

Step 2. Calculate the normalized individual DMA \(\hat{R}^{k} = [\hat{r}^{k}_{ij} ]_{m \times n}\), where \(\hat{r}_{ij}^{k} = ([\dot{r}_{{a_{ij} }}^{k} ,\dot{r}_{{b_{ij} }}^{k} ][\ddot{r}_{{c_{ij} }}^{k} ,\ddot{r}_{{d_{ij} }}^{k} ])\).

For benefit attributes:

$$ \hat{r}^{k}_{ij} = ([\dot{s}^{k}_{{\dot{a}_{ij} }} ,\dot{s}^{k}_{{\dot{b}_{ij} }} ][\ddot{s}^{k}_{{\dot{c}_{ij} }} ,\ddot{s}^{k}_{{\dot{d}_{ij} }} ]) = ([\dot{s}^{k}_{{a_{ij} }} ,\dot{s}^{k}_{{b_{ij} }} ][\ddot{s}^{k}_{{c_{ij} }} ,\ddot{s}^{k}_{{d_{ij} }} ]) $$
(23)

For cost attributes:

$$ \hat{r}^{k}_{ij} = ([\dot{s}^{k}_{{\dot{a}_{ij} }} ,\dot{s}^{k}_{{\dot{b}_{ij} }} ][\ddot{s}^{k}_{{\dot{c}_{ij} }} ,\ddot{s}^{k}_{{\dot{d}_{ij} }} ]) = ([{\text{neg}}(\dot{s}^{k}_{{a_{ij} }} ),{\text{neg}}(\dot{s}^{k}_{{b_{ij} }} )][\ddot{s}^{k}_{{c_{ij} }} ,\ddot{s}^{k}_{{d_{ij} }} ]) $$
(24)

Step 3. Aggregate the evaluation values given by all DMs using the 2DULGWA operator. Then, the group aggregated DMA \(\hat{R} = [\hat{r}_{ij} ]_{m \times n}\) can be obtained, where \(\hat{r}_{ij} = ([\dot{r}_{{a_{ij} }} ,\dot{r}_{{b_{ij} }} ][\ddot{r}_{{c_{ij} }} ,\ddot{r}_{{d_{ij} }} ])\) is calculated by Eq. (25).

$$ \hat{r}_{ij} = 2DULGWA(\hat{r}_{ij}^{1} ,\hat{r}_{ij}^{2} ,...,\hat{r}_{ij}^{p} ) $$
(25)

Step 4. Determine the subjective weights of attributes.

Step 4.1. Identify the best and worst attributes, then construct the vectors of BTOs \(\zeta_{B} = \left( {\zeta_{B1} ,\zeta_{B2} , \ldots ,\zeta_{Bn} } \right)\) and OTW \(\zeta_{W} = \left( {\zeta_{1W} ,\zeta_{2W} , \ldots ,\zeta_{nW} } \right)\).

Step 4.2. Solving the optimization model based on Eq. (10), we can get the subjective weights \(\omega = (\omega_{1} ,\omega_{2} , \ldots ,\omega_{n} )^{T}\).

Step 4.3. Test the consistency of the comparisons. If \(CR = \frac{\xi }{CI} \ge 0.1\), then adjust the pairwise comparisons and calculate the optimal weights again.

Step 5. Determine the objective weights of attributes \(\nu = \left( {\nu_{1} ,\nu_{2} , \ldots ,\nu_{n} } \right)^{T}\) by

$$ \nu_{j} = \frac{{\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )} } }}{{\sum\nolimits_{j = 1}^{n} {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{l = 1}^{m} {d(\hat{r}_{ij} ,\hat{r}_{lj} )} } } }} $$
(26)

where \(d(\hat{r}_{ij} ,\hat{r}_{lj} )\) is defined by Eq. (2).

Step 6. Obtain the combination weights of attributes.

Step 6.1. Obtain the expectation of the attribute \(C_{j}\) under the alternative \(\Upsilon_{i}\) based on Eq. (1).

Step 6.2 Calculate the parameters of \(\alpha\) and \(\beta\) according to Eq. (21), and get the combination weights by \(W = \alpha \times \omega + \beta \times \nu\).

$$ \left\{ \begin{gathered} \alpha = \frac{{\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\nu_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ]} }}{{\left( {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\nu_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ] + \sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\omega_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ]} } } \right)}} \hfill \\ \beta = \frac{{\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\omega_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ]} }}{{\left( {\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\nu_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ] + \sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {[(\omega_{j}^{2} - \omega_{j} \nu_{j} } )(E(\hat{r}_{ij} ))^{2} ]} } } \right)}} \hfill \\ \end{gathered} \right. $$
(27)

Step 7. Calculate the weighted DMA \(\hat{U} = [\hat{u}_{ij} ]_{m \times n}\):

$$ \hat{u}_{ij} = w_{j} \otimes \hat{r}_{ij} $$
(28)

Step 8. Calculate the BAA matrix \(\hat{G} = \left[ {\hat{g}_{j} } \right]_{1 \times n}\):

$$ \hat{g}_{j} = \left( {\prod\nolimits_{i = 1}^{m} {\hat{u}_{ij} } } \right)^{\frac{1}{m}} $$
(29)

Step 9. Calculate the distance matrix \(T = \left[ {t_{ij} } \right]_{m \times n}\).

$$ t_{ij} = \left\{ {\begin{array}{*{20}l} {d(\hat{u}_{ij} ,\hat{g}_{j} ), \, if \, \hat{u}_{ij} > \hat{g}_{j} } \\ { \, 0, \, if \, \hat{u}_{ij} = \hat{g}_{j} \, } \\ { - d(\hat{u}_{ij} ,\hat{g}_{j} ), \, if \, \hat{u}_{ij} < \hat{g}_{j} } \\ \end{array} } \right. $$
(30)

Here \(d(\hat{u}_{ij} ,\hat{g}_{j} )\) is the distance between \(\hat{u}_{ij}\) and \(\hat{g}_{j}\) which can be determined by Eq. (2), and two 2DULVs \(\hat{u}_{ij}\) and \(\hat{g}_{j}\) are compared based on their expectation obtained by Eq. (1).

Step 10. Obtain the optimal alternative.

The total distances of each alternative from the BAA are determined as:

$$ S_{i} = \sum\nolimits_{j = 1}^{n} {t_{ij} } $$
(31)

The optimal alternative is the one with the highest value of \(S_{i}\).
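The ranking part of the procedure (Steps 7–10) can be sketched as follows, reusing the 2DULV helpers (`scale`, `mul`, `power`, `expectation`, `distance`) from the earlier sketches; here the weighting \(w_{j} \otimes \hat{r}_{ij}\) in Eq. (28) is interpreted as scalar multiplication of the 2DULV by the crisp weight \(w_{j}\), per operational law (3).

```python
# A sketch of Steps 7-10 of the 2DUL-MABAC method; R is the m x n group matrix
# of 2DULV tuples and W the combination weight vector.
def dul_mabac(R, W, l, t):
    m, n = len(R), len(R[0])
    U = [[scale(W[j], R[i][j]) for j in range(n)] for i in range(m)]   # Eq. (28)
    G = []
    for j in range(n):                                                 # Eq. (29)
        prod = U[0][j]
        for i in range(1, m):
            prod = mul(prod, U[i][j])
        G.append(power(prod, 1.0 / m))
    S = []
    for i in range(m):                                                 # Eqs. (30)-(31)
        total = 0.0
        for j in range(n):
            eu, eg = expectation(U[i][j], l, t), expectation(G[j], l, t)
            sign = 1 if eu > eg else (-1 if eu < eg else 0)
            total += sign * distance(U[i][j], G[j], l, t)
        S.append(total)
    return S   # rank alternatives by descending S_i
```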

A numerical example

Technological innovation is an important source of competitive advantage for enterprises, and the scientific evaluation of the level of technological innovation is undoubtedly a critical part of improving innovation capability. Here, we need to evaluate the alternative enterprises \(\left\{ {\Upsilon_{1} ,\Upsilon_{2} ,\Upsilon_{3} ,\Upsilon_{4} } \right\}\) from the perspective of technological innovation, and four attributes are considered: (1) \(C_{1}\) is innovative resource input capability; (2) \(C_{2}\) is innovative management capability; (3) \(C_{3}\) is innovation tendency; (4) \(C_{4}\) is research and development capability (adapted from Liu [14]). The weights of the attributes are unknown. Obviously, the above four attributes are all benefit attributes and independent of each other. Three experts \(\{ E_{1} ,E_{2} ,E_{3} \}\) with weight vector \(\gamma = \left( {\frac{1}{3},\frac{1}{3},\frac{1}{3}} \right)\) evaluate the technological innovation capability of the four enterprises, and the evaluation values are described by 2DULVs. Here, we define the I class LTS as \(S_{\rm I} = (\dot{s}_{0} ,\dot{s}_{1} ,\dot{s}_{2} ,\dot{s}_{3} ,\dot{s}_{4} ,\dot{s}_{5} ,\dot{s}_{6} )\) and the II class LTS as \(S_{{{\rm I}{\rm I}}} = (\ddot{s}_{0} ,\ddot{s}_{1} ,\ddot{s}_{2} ,\ddot{s}_{3} ,\ddot{s}_{4} )\).

Steps of proposed method

  1. (1)

    The initial individual DMAs are presented in Tables 2, 3 and 4.

  2. (2)

    The normalization step can be omitted because all the attributes are benefit attributes.

  3. (3)

    Aggregating the evaluation information given by three experts according to Eq. (25), we get the group DMA shown in Table 5 (let \(\lambda = 1\)).

  4. (4)

    Calculate the subjective weights of attributes:

    (i) Determine the BTO vector \(\left( {1,3,5,2} \right)\) and the OTW vector \(\left( {5,2,1,3} \right)\).

    (ii) Solving the programming model of Eq. (10), the subjective weights are obtained as \(\omega = \left( {0.4846,0.1692,0.0923,0.2539} \right)\).

    (iii) The consistency ratio is \(CR = \frac{0.0231}{{2.3}} \approx 0.01 < 0.1\), implying the pairwise comparisons are acceptable.

  5. (5)

    According to Eq. (16), the objective weights of attributes are \(\nu = \left( {0.2906,0.2405,0.3727,0.0962} \right)\).

  6. (6)

    Determine the combination weights of attributes:

    (i) According to Eq. (1), we get the expectation of each evaluation value, listed in Table 6.

    (ii) Based on Eq. (21), we obtain \(\alpha = 0.6648\) and \(\beta = 0.3352\). Then, the combination weight vector is \(w = \alpha \times \omega + \beta \times \nu = \left( {0.4196,0.1931,0.1863,0.2010} \right)\).

  7. (7)

    Calculate the weighted DMA:

    $$ \hat{U} = \left[ {\begin{array}{*{20}c} {([\dot{s}_{1.958} ,\dot{s}_{1.958} ][\ddot{s}_{2} ,\ddot{s}_{3} ])} & {([\dot{s}_{0.515} ,\dot{s}_{0.644} ][\ddot{s}_{2} ,\ddot{s}_{2} ])} & {([\dot{s}_{0.683} ,\dot{s}_{0.807} ][\ddot{s}_{3} ,\ddot{s}_{3} ])} & {([\dot{s}_{0.804} ,\dot{s}_{1.005} ][\ddot{s}_{1} ,\ddot{s}_{1} ])} \\ {([\dot{s}_{1.539} ,\dot{s}_{1.818} ][\ddot{s}_{2} ,\ddot{s}_{3} ])} & {([\dot{s}_{0.708} ,\dot{s}_{0.837} ][\ddot{s}_{2} ,\ddot{s}_{2} ])} & {([\dot{s}_{0.497} ,\dot{s}_{0.621} ][\ddot{s}_{3} ,\ddot{s}_{3} ])} & {([\dot{s}_{0.603} ,\dot{s}_{0.670} ][\ddot{s}_{1} ,\ddot{s}_{1} ])} \\ {([\dot{s}_{1.119} ,\dot{s}_{1.539} ][\ddot{s}_{2} ,\ddot{s}_{3} ])} & {([\dot{s}_{0.772} ,\dot{s}_{0.837} ][\ddot{s}_{2} ,\ddot{s}_{2} ])} & {([\dot{s}_{0.373} ,\dot{s}_{0.497} ][\ddot{s}_{3} ,\ddot{s}_{3} ])} & {([\dot{s}_{0.737} ,\dot{s}_{0.871} ][\ddot{s}_{1} ,\ddot{s}_{1} ])} \\ {([\dot{s}_{1.678} ,\dot{s}_{1.958} ][\ddot{s}_{2} ,\ddot{s}_{3} ])} & {([\dot{s}_{0.451} ,\dot{s}_{0.644} ][\ddot{s}_{2} ,\ddot{s}_{2} ])} & {([\dot{s}_{0.435} ,\dot{s}_{0.621} ][\ddot{s}_{3} ,\ddot{s}_{3} ])} & {([\dot{s}_{0.737} ,\dot{s}_{0.871} ][\ddot{s}_{1} ,\ddot{s}_{1} ])} \\ \end{array} } \right] $$
  8. (8)

    Calculate the BAA matrix:

    $$ G = \left[ {([\dot{s}_{1.542} ,\dot{s}_{1.810} ][\ddot{s}_{2} ,\ddot{s}_{3} ]),([\dot{s}_{0.597} ,\dot{s}_{0.734} ][\ddot{s}_{2} ,\ddot{s}_{2} ]),([\dot{s}_{0.484} ,\dot{s}_{0.627} ][\ddot{s}_{3} ,\ddot{s}_{3} ]),([\dot{s}_{0.716} ,\dot{s}_{0.845} ][\ddot{s}_{1} ,\ddot{s}_{1} ])} \right] $$
  9. (9)

    Calculate the distance matrix:

    $$ T = \left[ {\begin{array}{*{20}c} {0.0294} & { - 0.0072} & {0.0237} & {0.0052} \\ {0.0006} & {0.0089} & {0.0012} & { - 0.0060} \\ { - 0.0362} & {0.0116} & { - 0.0151} & {0.0009} \\ {0.0148} & { - 0.0099} & { - 0.0035} & {0.0009} \\ \end{array} } \right] $$
  10. (10)

    Determine the total distances of each alternative from the BAA:

    $$ S_{1} = 0.0511,S_{2} = 0.0047,S_{3} = - 0.0387,S_{4} = 0.0025 $$
Table 2 DMA of expert \(E_{1}\)
Table 3 DMA of expert \(E_{2}\)
Table 4 DMA of expert \(E_{3}\)
Table 5 The group DMA
Table 6 The expectations of evaluation values

Thus, the alternatives can be ranked as \(\Upsilon_{1} \succ \Upsilon_{2} \succ \Upsilon_{4} \succ \Upsilon_{3}\), and the enterprise with the highest technological innovation capability is \(\Upsilon_{1}\).
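As a quick arithmetic check of Steps 9 and 10 (NumPy assumed), summing the rows of the distance matrix \(T\) reproduces the total distances \(S_{i}\) (up to the rounding of the entries of \(T\)) and the ranking above.

```python
# Verifying S_i = sum_j t_ij and the resulting ranking from the distance matrix T.
import numpy as np

T = np.array([[ 0.0294, -0.0072,  0.0237,  0.0052],
              [ 0.0006,  0.0089,  0.0012, -0.0060],
              [-0.0362,  0.0116, -0.0151,  0.0009],
              [ 0.0148, -0.0099, -0.0035,  0.0009]])
S = T.sum(axis=1)              # ≈ [0.0511, 0.0047, -0.0388, 0.0023], matching S_1..S_4
ranking = np.argsort(-S) + 1   # [1, 2, 4, 3], i.e. Y1 > Y2 > Y4 > Y3
print(S, ranking)
```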

Comparison analysis

To demonstrate the validity and superiority of the proposed 2DUL-MABAC approach, we compare it with the 2DUL-TODIM method proposed by Liu and Teng [16], the extended TOPSIS method proposed by Liu [22], and the EDAS method adopted by Zhang et al. [48]. These three methods are applied to the above numerical example, and the results are shown in Table 7. For consistency, all the methods use the combination weights determined above.

Table 7 Final ranking from different approaches

From Table 7, we can observe that the ranking results from Liu and Teng’s extended TODIM method [16] and Liu’s extended TOPSIS method [22] are the same as that of the proposed 2DUL-MABAC method, while the result of Zhang et al.’s extended EDAS method [48] is slightly different from the others. However, the best alternative selected by all four methods is \(\Upsilon_{1}\). Therefore, the effectiveness of the proposed method is verified. The differences between the proposed method and the other three methods are analyzed from the following aspects:

(1) Comparison with Liu and Teng’s extended TODIM method [16]

TODIM is a classical MADM method based on prospect theory, which takes the DMs’ bounded rationality into consideration. However, when Liu and Teng’s improved TODIM approach [16] is used to solve the above MAGDM problem, twelve dominance matrices and three global dominance matrices have to be calculated. By contrast, the MABAC method has a simpler calculation process and contains no parameters. From this perspective, our proposed method is superior to the extended TODIM method [16].

Besides, in the extended TODIM method proposed by Liu and Teng [16], the weights of attributes are given in advance, while in the proposed 2DUL-MABAC approach, the weights are obtained from mathematical models that integrate the subjective and objective weights scientifically. In this way, DMs’ experience and knowledge are fully utilized, and at the same time, the objectivity of the decision information is considered. Therefore, the rankings from the proposed framework are more scientific and reliable.

(2) Comparison with Liu’s extended TOPSIS method [22]

TOPSIS is a classical MADM method with a simple and efficient calculation procedure, but it has an obvious weakness: it is prone to rank reversal, that is, adding or removing alternatives may change the ranking result. In this respect, the proposed 2DUL-MABAC method has higher stability.

In the extended TOPSIS method proposed by Liu [22], the attribute weights are obtained by an objective weighting method, namely the maximum deviation method. When only the objective weight vector of attributes, \(\nu = \left( {0.2906,0.2405,0.3727,0.0962} \right)\), is used to solve the above numerical example, the final ranking is \(\Upsilon_{1} \succ \Upsilon_{2} \succ \Upsilon_{3} \succ \Upsilon_{4}\). It can be seen that the ranking result is affected slightly. Moreover, compared with the proposed 2DUL-MABAC method, this treatment of attribute importance ignores the preferences of DMs.

(3) Comparison with Zhang et al.’s extended EDAS method [48]

Zhang et al.’s framework has the advantages that the weights of experts are derived by social network analysis (SNA) and that the EDAS method, with its high calculation efficiency, is used to rank the alternatives. As seen in Table 7, the result from Zhang et al.’s method [48] is \(\Upsilon_{1} \succ \Upsilon_{4} \succ \Upsilon_{2} \succ \Upsilon_{3}\), which is slightly different from that of the other methods. The main reason is that in Zhang et al.’s method [48], the attribute values take 2DULVs and real numbers, so the 2DULVs are converted into real numbers before the implementation of the EDAS method. In our proposed method, by contrast, the operations of 2DULVs are used throughout the MABAC procedure, thus avoiding information loss before ranking. In this way, the integrity of the decision information is ensured, and the final ranking is more reliable and convincing.

Conclusion

This paper proposes the 2DUL-MABAC method for MAGDM problems with unknown attribute weights. To this end, the BWM and the maximum deviation method are utilized to obtain the subjective and objective weights of attributes, respectively, and the combination weights with 2DULVs are obtained based on the maximum comprehensive evaluation value method. Then, the 2DULGWA operator is adopted to obtain the aggregated group evaluation values, and the MABAC method is extended to 2DULVs to select the best alternative. The contributions of this study are summarized in the following three points: (1) This study establishes a new MADM model based on the MABAC method and 2DULVs. The application of 2DULVs enables DMs to express subjective uncertain information better, thus giving more reliable decision-making results. (2) This study develops a combination weight determination model for attribute values in 2DULVs. The comprehensive evaluation value method is extended to 2DULVs to combine the subjective weights obtained by the BWM and the objective weights determined by the maximum deviation method. (3) The proposed method is compared with methods based on TODIM and EDAS to show its feasibility.

In the future, the present work should be improved in the following aspects: (1) Explore the application of the proposed method in practical decision-making scenarios, such as optimal site selection [37] and evaluation of service quality [34]. (2) Develop scientific ways to derive the weights of experts, such as methods based on similarity [1, 11]. (3) Extend the present work to deal with decision-making problems with prioritized criteria [27].