1 Introduction

Group decision making (GDM) can be defined as an environment in which there exist a set of possible alternatives and a set of individuals (experts, judges, etc.). These individuals express their opinions or preferences over the set of possible alternatives. A moderator, who is a distinguished person, is responsible for directing the session until all individuals reach an agreement on the solution (Herrera et al. 1996). A GDM process mainly consists of three parts: (1) the evaluation process; (2) the consensus reaching process; and (3) the selection process. Pairwise comparison methods are generally more accurate than non-pairwise methods, and their main advantage is that the experts only need to focus on two alternatives at a time when expressing their preferences (Chiclana et al. 2009). Therefore, the preference relation is one of the most widely used preference representation structures in GDM, and a large number of preference relations have been developed in recent years, such as fuzzy preference relations (Herrera-Viedma et al. 2004), multiplicative preference relations (Saaty 1980), linguistic preference relations (Xu 2005), hesitant fuzzy linguistic preference relations (Zhu and Xu 2014; Gou et al. 2019b), probabilistic linguistic preference relations (Zhang et al. 2016; Liao et al. 2020), and double hierarchy hesitant fuzzy linguistic preference relations (Gou et al. 2018, 2019a, 2020a).

Linguistic expression is the form closest to how humans state their opinions. Therefore, it is particularly important to describe linguistic information reasonably in GDM problems. However, as society progresses and human evaluation information grows more complex, it becomes increasingly urgent to put forward a reasonable linguistic model for expressing complex linguistic information. For example, when evaluating a car, an expert may say that "the performance is nearly perfect" or "the acceleration to one hundred kilometers per hour is incredibly fast". Clearly, both "nearly perfect" and "incredibly fast" reflect the real cognition of the expert. Therefore, how to correctly express such cognitively complex linguistic information is very important in the actual decision-making process. As a representation tool for cognitively complex linguistic information, the double hierarchy linguistic term set (DHLTS) (Gou et al. 2017) can express complex linguistic information by two hierarchies of linguistic term sets (LTSs), where the second hierarchy LTS is a linguistic feature or detailed supplement of each linguistic term in the first hierarchy LTS. Based on the advantages of the preference relation and the DHLTS, Gou et al. (2020b) defined the concept of the double hierarchy linguistic preference relation (DHLPR), which not only expresses cognitively complex linguistic information more correctly, but also clearly reflects the relationship between any two alternatives. More explanations are discussed in Sect. 2.

In recent years, some scholars (Liu et al. 2017) have proposed a novel preference relation that considers the self-confident degrees of the basic elements of the preference relation. The self-confident degrees depict how confident the experts are in their own evaluation information, and thereby enrich the completeness of the evaluation information. As we know, the basic element of a DHLPR, the double hierarchy linguistic term (DHLT), is only a linguistic expression and cannot reflect the expert's self-confident degree. Moreover, there is little research in the literature on DHLPRs with self-confident degrees, which amounts to implicitly assuming that the experts' self-confident degrees in a DHLPR are perfect. Motivated by the research of Liu et al. (2017), this paper defines the concept of the self-confident DHLPR, in which each basic element consists of a DHLT and a self-confident degree simultaneously. Then, we develop a double hierarchy linguistic preference values and self-confident degrees modifying (DHSM)-based consensus model to manage GDM problems with self-confident DHLPRs based on priority ordering theory. The main structure of this paper includes the experts' weight-determining method, the consensus reaching process, and the simulation experiment; these three parts are discussed as follows:

  1. (1)

    In different decision-making areas, the experts may vary and each of them usually has different specialized knowledge or influence. Therefore, it is very important to determine the experts' weights in GDM. Up to now, a large number of weight-determining methods have been proposed, including the dynamic weight-determining approach based on the goal programming model (Liu et al. 2019), the optimization model (Park et al. 2011), and the AHP method (Ramanathan and Ganesh 1994). However, these methods only utilize evaluation values to calculate the experts' weights and obtain only one kind of objective weights. Therefore, they share a common weakness: the subjective weights and the other kind of objective weights are lost. To overcome this shortcoming, this paper fully considers all kinds of information and obtains a weight vector of experts that combines the subjective weights and two kinds of objective weights. First, the experts evaluate themselves, and these evaluation values are regarded as their subjective weights. Second, each expert is evaluated by the remaining experts, yielding one kind of objective weights. Third, the evaluation matrix provided by each expert is utilized to calculate the other kind of objective weights. Finally, the synthetic weight of each expert is obtained by combining these three kinds of weights.

  2. (2)

    In the process of GDM with preference relations, each element of the priority vector reflects the importance degree of the corresponding alternative, and the difference between the individual priority vector and the collective priority vector represents how close an expert's preference is to the group's preference (Saaty 1980). Therefore, obtaining the individual priority vector and the collective priority vector is very important for reaching consensus and making a decision. Based on this, in the consensus reaching process, this paper develops two models to calculate the individual priority vector of each expert and the collective priority vector of all experts. These two priority vectors can not only be used to judge whether all experts reach consensus, but also to obtain the ranking of all alternatives.

  3. (3)

    We hope that consensus can be reached as soon as possible, with as few iterations as possible. In this regard, three comparison criteria are proposed to reflect the consensus efficiency of the proposed DHSM-based consensus model: the number of iterations, the consensus success ratio, and the distance between the original and adjusted preferences. Motivated by the analyses above, a simulation experiment is devised to test the proposed DHSM-based consensus model by comparing it with two other consensus reaching models: one uses DHLPRs (Gou et al. 2020b) without self-confident degrees; in the other, the self-confident degrees are not changed during the consensus reaching process.

We highlight the paper by the following innovative work:

  1. (1)

    We define a novel concept of the self-confident DHLPR, which attaches a self-confident degree to each DHLT. The self-confident DHLPR makes the complex linguistic information more complete.

  2. (2)

    A weight-determining method is developed, which fully considers all kinds of information including the subjective weights and two kinds of objective weights by fusing each expert’s subjective information, the evaluation information, and the remaining experts’ objective information.

  3. (3)

    A DHSM-based consensus model is set up to manage the GDM problem with self-confident DHLPRs based on the priority ordering theory.

  4. (4)

    We describe the operation process and illustrate the effectiveness of the proposed DHSM-based consensus model through a case study concerning the selection of the optimal hospital in the field of telemedicine.

  5. (5)

    We devise a simulation experiment to test the proposed DHSM-based consensus model by comparing it with other consensus reaching models from three angles: the number of iterations, the consensus success ratio, and the distance between the original and adjusted preferences.

The paper is organized as follows: Sect. 2 reviews some concepts of DHLTS and DHLPR. Section 3 presents the concept of self-confident DHLPR. Section 4 develops a weight-determining method, and proposes the consensus measure and adjustment mechanism with self-confident DHLPRs. Section 5 sets up a case study. Section 6 devises a simulation experiment. Concluding remarks are given in Sect. 7.

2 Preliminaries

Linguistic information is in line with the real thoughts of experts, and Zadeh (1975) proposed the fuzzy linguistic approach to deal with it. Both uncertain and complex linguistic information can be found in real life, and a large number of complex linguistic models have been developed. As discussed in Sect. 1, DHLTSs (Gou et al. 2017) can represent complex linguistic information directly based on two hierarchies of LTSs. Let \(S = \left\{ {s_{t} \left| {t = - \tau , \ldots , - 1,0,1, \ldots ,\tau } \right.} \right\}\) be the first hierarchy LTS, and \(O^{t} = \left\{ {o_{k}^{t} \left| {k = - \varsigma , \ldots , - 1,0,1, \ldots ,\varsigma } \right.} \right\}\) be the second hierarchy LTS of \(s_{t}\). A DHLTS, \(S_{O}\), takes the mathematical form

$$S_{O} = \left\{ {s_{{t \langle o_{k}^{t} \rangle }} \left| {t = - \tau , \ldots , - 1,0,1, \ldots ,\tau ;\;\;k = - \varsigma , \ldots , - 1,0,1, \ldots ,\varsigma } \right.} \right\},$$
(1)

we call \(s_{{t \langle o_{k}^{t} \rangle }}\) the DHLT, where \(o_{k}^{t}\) expresses the second hierarchy linguistic term when the first hierarchy linguistic term is \(s_{t}\). In particular, the second hierarchy LTSs may differ from each other in actual situations. For convenience, we use the unified form \(S_{O} = \{ s_{{t \langle o_{k} \rangle }} |t = - \tau , \ldots , - 1,0,1, \ldots ,\tau ;\;k = - \varsigma , \ldots , - 1,0,1, \ldots ,\varsigma \}\) to express them. For more details and explanations regarding the DHLTS, please refer to Gou et al. (2017).

To understand the DHLTS clearly, Fig. 1 is drawn as follows:

Fig. 1

The second hierarchy LTS of a linguistic term \(s_{1}\) in first hierarchy LTS

Remark 1

Figure 1 shows that the linguistic term \(s_{1}\) has its own second hierarchy LTS \(O = \{ o_{ - 2} = far\;from,\;o_{ - 1} = a\;little,o_{0} = just\;right,o_{1} = much,o_{2} = very\;much\}\). Similarly, the others can also have their own second hierarchy LTSs. Additionally, some main characteristics of the DHLTS are summarized: Firstly, all elements in a DHLTS are expressed in linguistic labels, which reflects the semantics of original natural language to a greater extent. Secondly, each second hierarchy LTS can be regarded as a set of adverbs and can therefore extend the linguistic representations. Thirdly, each linguistic term of the first hierarchy LTS has its own second hierarchy LTS (Gou et al. 2017).

Then, Gou et al. (2017) defined a monotonic function for making the mutual transformations between the double hierarchy linguistic term and the numerical scale when extending the double hierarchy linguistic term to a continuous form. The monotonic function provides convenience for using the mathematical expressions to make the operations among DHLTs, as well as reducing the difficulty of computation.

In general, the independent variable of a function should be continuous. Therefore, motivated by the idea of virtual linguistic terms (Xu 2005), a continuous DHLTS, denoted as \(\bar{S}_{O} = \{ s_{{t \langle o_{k} \rangle }} |t \in [ - \tau ,\tau ];\;\;k \in [ - \varsigma ,\varsigma ]\}\), can be defined, where the ranges of the independent variables \(t\) and \(k\) are \([ - \tau ,\tau ]\) and \([ - \varsigma ,\varsigma ]\), respectively. In particular, the continuous DHLTs appear only in the process of operations, and they have no clear semantics. For example, based on the continuous DHLTS, DHLTs such as \(s_{{2.5 \langle o_{1.2} \rangle }}\) and \(s_{{ - 1.5 \langle o_{ - 0.6} \rangle }}\) can be obtained in the process of operations.

Based on the continuous DHLTS, two equivalent transformation functions were proposed to make the mutual transformations between the DHLT and the numerical scale (Gou et al. 2017):

Definition 1

(Gou et al. 2017) Let \(\bar{S}_{O} = \left\{ {s_{{t \langle o_{k} \rangle }} \left| {t \in [ { - \tau ,\tau } ];\;\;k \in [ { - \varsigma ,\varsigma } ]} \right.} \right\}\) be a continuous DHLTS, and \(\gamma (0 \le \gamma \le 1)\) be a real number. Then the real number \(\gamma\) and the subscript \(( {t,k} )\) of the DHLT \(s_{{t \langle o_{k} \rangle }}\) which expresses information equivalent to the real number \(\gamma\) can be transformed into each other by the following functions \(f\) and \(f^{ - 1}\):

$$f:[ { - \tau ,\tau } ] \times [ { - \varsigma ,\varsigma } ] \to [ 0,1 ],\;\;f( {t,k} ) = \frac{{k + ( {\tau + t} )\varsigma }}{2\varsigma \tau } = \gamma ,$$
(2)
$$\begin{aligned} & f^{ - 1} :[ {0,1} ] \to [ { - \tau ,\tau } ] \times [ { - \varsigma ,\varsigma } ],\;\;f^{ - 1} ( \gamma ) = s_{{[ {2\tau \gamma - \tau } ] \langle o_{{\varsigma ( {2\tau \gamma - \tau - [ {2\tau \gamma - \tau } ]} )}} \rangle }} {\kern 1pt} \\ & \quad = s_{{[ {2\tau \gamma - \tau } ] + 1 \langle o_{{\varsigma ( {( {2\tau \gamma - \tau - [ {2\tau \gamma - \tau } ]} ) - 1} )}} \rangle }} , \\ \end{aligned}$$
(3)

where \([ \bullet ]\) is a rounding operation.

Remark 2

In Definition 1, Eq. (2) can be used to transform a DHLT into the corresponding real number: we first transform the second hierarchy linguistic term into the first hierarchy, and then transform the result into the real number. For example, let \(\bar{S}_{O} = \left\{ {s_{{t \langle o_{k} \rangle }} \left| {t \in [ { - 4,4} ];\;\;k \in [ { - 4,4} ]} \right.} \right\}\) be a continuous DHLTS, and \(s_{{2 \langle o_{3} \rangle }}\) be a DHLT. Then, we have \(f\left( {s_{{2 \langle o_{3} \rangle }} } \right) = \frac{{3 + ( {4 + 2} ) \times 4}}{2 \times 4 \times 4} = \frac{27}{32}\).

Similarly, the function \(f^{ - 1}\) is the inverse function of \(f\). From Fig. 1, it is obvious that any two adjacent first hierarchy linguistic terms have a common part. Therefore, when using the function \(f^{ - 1}\), one real number can be transformed into two equivalent DHLTs. For example, let 0.65 be a real number; then \(f^{ - 1} ( {0.65} ) = s_{{1 \langle o_{0.8} \rangle }} = s_{{2 \langle o_{ - 3.2} \rangle }}\).
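
As a minimal sketch (in Python, assuming the scales \(\tau = \varsigma = 4\) of Remark 2, and taking the rounding operation \([\bullet]\) as the floor, which matches the example above), the two transformation functions can be written as:

```python
import math

def f(t, k, tau=4, vsig=4):
    """Eq. (2): map the subscript (t, k) of a DHLT to a real number in [0, 1]."""
    return (k + (tau + t) * vsig) / (2 * vsig * tau)

def f_inv(gamma, tau=4, vsig=4):
    """Eq. (3): map gamma back to one subscript pair (t, k)."""
    x = 2 * tau * gamma - tau
    t = math.floor(x)          # the rounding operation [.] taken as floor
    k = vsig * (x - t)
    return t, k

print(f(2, 3))       # 0.84375, i.e. 27/32, as in Remark 2
print(f_inv(0.65))   # (1, 0.8...), i.e. s_{1<o_0.8>}
```

The pair returned by `f_inv` is one of the two equivalent DHLTs of Remark 2; the other is obtained as \((t + 1,\, k - \varsigma)\).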

Before giving the concept of DHLPR, some operations of DHLTs should be proposed. Suppose that \(s_{{t \langle o_{k} \rangle }}\), \(s_{{t^{1} \langle o_{{k^{1} }} \rangle }}\) and \(s_{{t^{2} \langle o_{{k^{2} }} \rangle }}\) are three DHLTs, and \(\lambda ( {0 \le \lambda \le 1} )\) is a real number. Then,

  1. (1)

    (Addition) \(s_{{t^{1} \langle o_{{k^{1} }} \rangle }} \oplus s_{{t^{2} \langle o_{{k^{2} }} \rangle }} = s_{{t^{1} + t^{2} \langle o_{{k^{1} + k^{2} }} \rangle }}\), provided that \(\left| {t^{1} + t^{2} } \right| \le \tau\) and \(\left| {k^{1} + k^{2} } \right| \le \varsigma\);

  2. (2)

    (Multiplication) \(\lambda s_{{t \langle o_{k} \rangle }} = s_{{\lambda t \langle o_{\lambda k} \rangle }}\);

  3. (3)

    (Complementary) \(neg\left( {s_{{t \langle o_{k} \rangle }} } \right) = s_{{ - t \langle o_{ - k} \rangle }}\).
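
These three operations act directly on the subscripts, so a sketch in Python (representing a DHLT by its subscript pair \((t, k)\), with assumed scales \(\tau = \varsigma = 4\)) is straightforward:

```python
def dhlt_add(a, b, tau=4, vsig=4):
    """Addition: s_{t1}<o_{k1}> (+) s_{t2}<o_{k2}> = s_{t1+t2}<o_{k1+k2}>,
    provided the sums stay inside the linguistic scales."""
    t, k = a[0] + b[0], a[1] + b[1]
    if abs(t) > tau or abs(k) > vsig:
        raise ValueError("result leaves the linguistic scale")
    return t, k

def dhlt_scale(lam, a):
    """Multiplication by a scalar 0 <= lam <= 1."""
    return lam * a[0], lam * a[1]

def dhlt_neg(a):
    """Complementary: neg(s_t<o_k>) = s_{-t}<o_{-k}>."""
    return -a[0], -a[1]

# e.g. the reciprocity condition of a DHLPR: r_ij (+) r_ji = s_0<o_0>
print(dhlt_add((1, 2), dhlt_neg((1, 2))))   # (0, 0)
```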

In a GDM problem with double hierarchy linguistic preference information, let \(\left\{ {A_{1} ,A_{2} , \ldots ,A_{m} } \right\}( {m \ge 2} )\) be a fixed set of alternatives. A set of experts \(\left\{ {e^{1} ,e^{2} , \ldots ,e^{n} } \right\}( {n \ge 2} )\) are invited to evaluate alternatives and provide their preferences expressed by additive DHLPRs (Gou et al. 2020b). The additive DHLPR can be defined as follows:

Definition 2

(Gou et al. 2017) Let \(S_{O} = \left\{ {s_{{t \langle o_{k} \rangle }} \left| {t = - \tau , \ldots , - 1,0,1, \ldots ,\tau ;\;k = - \varsigma } \right., \ldots , - 1,0,1, \ldots ,\varsigma } \right\}\) be a DHLTS. An additive DHLPR \({\mathbb{R}}\) is presented by a matrix \({\mathbb{R}} = ( {r_{ij} } )_{m \times m} \subset A \times A\), where \(r_{ij} \in S_{O}\) \(( {i,j = 1,2, \ldots ,m} )\) is a DHLT indicating the preference degree of the alternative \(A_{i}\) over \(A_{j}\). For all \(i,j = 1,2, \ldots ,m\), \(r_{ij}\) satisfies the conditions \(r_{ij} \oplus r_{ji} = s_{{0 \langle o_{0} \rangle }}\) and \(r_{ii} = s_{{0 \langle o_{0} \rangle }}\).

Example 1

In an actual decision-making process, suppose that there are three alternatives \(\left\{ {A_{1} ,A_{2} ,A_{3} } \right\}\), and that \(S_{O} = \left\{ {s_{{t \langle o_{k} \rangle }} \left| {t = - 2, \ldots ,2;\;k = - 2} \right., \ldots ,2} \right\}\) is a DHLTS with \(S = \left\{ {s_{ - 2} = very\;bad,s_{ - 1} = bad,s_{0} = equal,s_{1} = good,} \right.\;\;\left. {s_{2} = very\;good} \right\}\) and \(O = \left\{ {o_{ - 2} = only\;a\;little,o_{ - 1} = a\;little,o_{0} = just\;right,o_{1} = much,o_{2} = very\;much} \right\}\). Considering that the preference relation can clearly reflect the relationship between any two alternatives, an expert makes pairwise comparisons between the alternatives and provides the assessments "the degree of \(A_{1}\) over \(A_{2}\) is just right good", "the degree of \(A_{1}\) over \(A_{3}\) is much good", and "the degree of \(A_{2}\) over \(A_{3}\) is only a little good". Then, the DHLPR \({\mathbb{R}}\) is established as:

$${\mathbb{R}} = \left( {\begin{array}{*{20}c} {s_{{0 \langle o_{0} \rangle }} } & {s_{{1 \langle o_{0} \rangle }} } & {s_{{1 \langle o_{1} \rangle }} } \\ {s_{{ - 1 \langle o_{0} \rangle }} } & {s_{{0 \langle o_{0} \rangle }} } & {s_{{1 \langle o_{ - 2} \rangle }} } \\ {s_{{ - 1 \langle o_{ - 1} \rangle }} } & {s_{{ - 1 \langle o_{2} \rangle }} } & {s_{{0 \langle o_{0} \rangle }} } \\ \end{array} } \right).$$

Based on Example 1, the advantages of using DHLPR in actual decision-making processes are listed as follows:

  1. (1)

    The elements included in a DHLPR can reflect the relationships of any two alternatives clearly;

  2. (2)

    The DHLT makes the linguistic information more accurate. By contrast, if we delete the second hierarchy linguistic information, then all the assessments of \(A_{1}\) over \(A_{2}\), \(A_{1}\) over \(A_{3}\), and \(A_{2}\) over \(A_{3}\) become "good". Clearly, single hierarchy linguistic information cannot fully express the original thoughts of the expert.

It can be seen from Definition 2 that the additive DHLPR \({\mathbb{R}}\) contains only double hierarchy linguistic information, which can express complex linguistic information completely. However, it cannot reflect the self-confident degrees that the experts assign to the DHLTs. In recent years, Liu et al. (2017) proposed several preference relations with self-confidence, including the multiplicative preference relation with self-confidence, the additive preference relation with self-confidence, and the ordinal 2-tuple linguistic preference relation with self-confidence. This research direction is important because adding a self-confident degree to each element of a preference relation reflects the comprehensive decision information of the experts. Based on this, this paper defines the self-confident DHLPR, which is discussed in detail in Sect. 3.

3 Self-confident DHLPR

In the actual decision-making process with preference relations, the priority vector is a useful tool for obtaining the ranking of alternatives. In this section, a least squares approach is first developed as a tool to calculate the priority vector from a DHLPR. Then, considering that adding a self-confident degree to each element of a preference relation reflects the comprehensive decision information of the experts, and motivated by the preference relations with self-confidence proposed by Liu et al. (2017), a novel concept of the self-confident DHLPR is proposed by adding self-confident degrees to the basic elements of DHLPRs.

3.1 A least squares approach

As we know, the least squares approach is one of the important methods to derive a priority vector from preference relations such as multiplicative preference relations (Saaty 1980) and subjective preference relations (Crawford and Williams 1985). In this subsection, we develop a least squares approach to calculate the priority vector of a DHLPR, which can be used as a basis for ranking alternatives and obtaining the consensus degree in consensus reaching process.

Let \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{m} } \right)^{T}\) be the priority vector of the DHLPR \({\mathbb{R}} = ( {r_{ij} } )_{m \times m}\), where \(w_{i} > 0\) \(( {i = 1,2, \ldots ,m} )\) and \(\sum\nolimits_{i = 1}^{m} {w_{i} = 1}\). The priority vector can be used to characterize a consistent DHLPR, i.e., the DHLPR \({\mathbb{R}} = \left( {r_{ij} } \right)_{m \times m}\) satisfies \(f\left( {r_{ij} } \right) = f\left( {r_{ik} } \right) - f\left( {r_{jk} } \right) + 0.5,\quad \forall i,j,k\). Therefore, for a consistent DHLPR, there exists

$$f\left( {r_{ij} } \right) = w_{i} - w_{j} + 0.5,\quad i,j = 1,2, \ldots ,m.$$
(4)

In general, the DHLPR is not always consistent. The error between the preference element \(r_{ij}\) and the corresponding consistent preference element can be obtained by the following formula:

$$\varepsilon_{ij} = 0.5 \times \left( {w_{i} - w_{j} } \right) + 0.5 - f\left( {r_{ij} } \right),\quad i,j = 1,2, \ldots ,m,$$
(5)

where the adjustment coefficient 0.5 is used to ensure that the range of \(\varepsilon_{ij}\) belongs to [− 1, 1].

In general, the priority vector will be more reasonable if the DHLPR is more consistent. In other words, the smaller the error \(\varepsilon_{ij}\) between the preference element and the corresponding consistent preference element is, the more reasonable the priority vector should be. Therefore, based on Eq. (5), the priority vector \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{m} } \right)^{T}\) can be obtained by establishing and solving the following least squares model:

$${\text{Model}}\;1.\quad \begin{array}{*{20}l} {\hbox{min} \sum\limits_{i = 1}^{m} {\sum\limits_{i < j}^{m} {\left( {0.5 \times \left( {w_{i} - w_{j} } \right) + 0.5 - f\left( {r_{ij} } \right)} \right)^{2} } } } \hfill \\ {{s.t.}\left\{ {\begin{array}{*{20}l} {\sum\limits_{i = 1}^{m} {w_{i} = 1} ,} \hfill \\ {w_{i} > 0,\;i = 1,2, \ldots ,m.} \hfill \\ \end{array} } \right.} \hfill \\ \end{array}$$
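
Model 1 is an equality-constrained least-squares problem, so one way to solve it (a sketch, not the paper's own procedure) is via the KKT system of the constrained problem, with the positivity constraint only verified afterwards. The example below uses the DHLPR of Example 1, whose transformed matrix \(f(r_{ij})\) follows from Eq. (2) with \(\tau = \varsigma = 2\):

```python
import numpy as np

def priority_vector(F):
    """Model 1: least-squares priority vector of a DHLPR.

    F is the matrix of transformed preferences f(r_ij) in [0, 1].
    Solves min sum_{i<j} (0.5*(w_i - w_j) + 0.5 - F[i,j])^2 s.t. sum(w) = 1.
    """
    m = F.shape[0]
    rows, rhs = [], []
    for i in range(m):
        for j in range(i + 1, m):
            row = np.zeros(m)
            row[i], row[j] = 0.5, -0.5
            rows.append(row)
            rhs.append(F[i, j] - 0.5)
    A, b = np.array(rows), np.array(rhs)
    # bordered KKT system for the equality-constrained least-squares problem
    K = np.zeros((m + 1, m + 1))
    K[:m, :m] = 2 * A.T @ A
    K[:m, m] = 1.0
    K[m, :m] = 1.0
    w = np.linalg.solve(K, np.append(2 * A.T @ b, 1.0))[:m]
    assert (w > 0).all(), "Model 1 requires w_i > 0"
    return w

# DHLPR of Example 1 transformed by Eq. (2) with tau = vsig = 2:
F = np.array([[0.5,   0.75, 0.875],
              [0.25,  0.5,  0.5],
              [0.125, 0.5,  0.5]])
print(priority_vector(F))   # approx [0.75, 0.1667, 0.0833]
```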

3.2 Self-confident DHLPR

When providing preference values, the experts may attach self-confident degrees to those values to reflect their attitudes. Therefore, adding self-confident degrees to a DHLPR makes the evaluations complete. Let \(S^{SL} = \{ {1,2, \ldots ,N} \}\) be a numerical set used by the experts to express their self-confident degrees over the provided preference values in the DHLPR, where the experts have \(N\) ratings (Friedman and Amoo 1999). Without loss of generality, this paper utilizes a 7-point numerical set to express the experts' self-confident degrees; the meaning of each element of the set is listed in Table 1.

Table 1 The detailed information about the 7-point numerical set

Motivated by the DHLPR and the concept of self-confident degree, given a fixed set of alternatives \(A = \left\{ {A_{1} ,A_{2} , \ldots ,A_{m} } \right\}\), the self-confident DHLPR can be defined as follows:

Definition 3

A matrix \(\Re = \left( {\left( {r_{ij} ,sl_{ij} } \right)} \right)_{m \times m}\) is called a self-confident DHLPR when each of its basic elements consists of the following two parts: (1) the first part, \(r_{ij} \in S_{O}\), expresses the preference value of the alternative \(A_{i}\) over \(A_{j}\); (2) the second part, \(sl_{ij} \in S^{SL}\), expresses the self-confident degree with respect to the preference value \(r_{ij}\). The following conditions are assumed: \(r_{ij} \oplus r_{ji} = s_{{0 \langle o_{0} \rangle }}\), \(r_{ii} = s_{{0 \langle o_{0} \rangle }}\), \(sl_{ij} = sl_{ji}\), and \(sl_{ii} = N\) for \(i,j = 1,2, \ldots ,m\).

Example 2

Let \(S_{O} = \left\{ {s_{{t \langle o_{k} \rangle }} \left| {t = - 4, \ldots ,4;\;k = - 4} \right., \ldots ,4} \right\}\) be a DHLTS, \(S^{SL} = \{ {1,2, \ldots ,7} \}\) be a 7-point numerical set to express the experts’ self-confident degrees, and \(A = \left\{ {A_{1} ,A_{2} ,A_{3} } \right\}\) be a set of three alternatives. Then, an expert can provide his/her self-confident DHLPR, denoted by \(\Re\), as follows:

$$\Re = \left( {\begin{array}{*{20}c} {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{2 \langle o_{ - 1} \rangle }} ,2} \right)} & {\left( {s_{{1 \langle o_{2} \rangle }} ,5} \right)} \\ {\left( {s_{{ - 2 \langle o_{1} \rangle }} ,2} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{1} \rangle }} ,4} \right)} \\ {\left( {s_{{ - 1 \langle o_{ - 2} \rangle }} ,5} \right)} & {\left( {s_{{1 \langle o_{ - 1} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} \\ \end{array} } \right).$$

In the self-confident DHLPR \(\Re\), \(r_{13} = s_{{1 \langle o_{2} \rangle }}\) expresses that the preference value of the alternative \(A_{1}\) over \(A_{3}\) is \(s_{{1 \langle o_{2} \rangle }}\), whose concrete meaning can be obtained from the DHLTS \(S_{O}\), and \(sl_{13} = 5\) expresses that the self-confident degree with respect to \(r_{13}\) is 5. The remaining elements can be explained similarly.
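
The conditions of Definition 3 can be verified mechanically. Below is a small sketch (a hypothetical helper, with each \(r_{ij}\) stored as its subscript pair \((t, k)\)) that checks them for the matrices of Example 2:

```python
def is_self_confident_dhlpr(R, SL, N=7):
    """Check the Definition 3 conditions of a self-confident DHLPR.

    R[i][j] is the subscript pair (t, k) of r_ij; SL[i][j] is the
    self-confident degree sl_ij on an N-point scale.
    """
    m = len(R)
    for i in range(m):
        if R[i][i] != (0, 0) or SL[i][i] != N:   # r_ii = s_0<o_0>, sl_ii = N
            return False
        for j in range(m):
            t1, k1 = R[i][j]
            t2, k2 = R[j][i]
            if (t1 + t2, k1 + k2) != (0, 0):     # r_ij (+) r_ji = s_0<o_0>
                return False
            if SL[i][j] != SL[j][i]:             # sl_ij = sl_ji
                return False
    return True

# The matrices of Example 2:
R = [[(0, 0),   (2, -1), (1, 2)],
     [(-2, 1),  (0, 0),  (-1, 1)],
     [(-1, -2), (1, -1), (0, 0)]]
SL = [[7, 2, 5],
      [2, 7, 4],
      [5, 4, 7]]
print(is_self_confident_dhlpr(R, SL))   # True
```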

4 The consensus measure and adjustment mechanism for self-confident DHLPRs

In GDM, once the self-confident DHLPRs provided by the experts are obtained, the next step is to check whether all experts reach consensus. Therefore, a suitable consensus reaching model for self-confident DHLPRs needs to be developed. If the experts do not reach consensus, the model needs to make two kinds of adjustments. One is to check whether an expert's consensus degree is lower than the overall consensus degree and to adjust the double hierarchy linguistic preference information accordingly; the other is to adjust the self-confident degrees, increasing those of an expert whose consensus degree is higher than the overall consensus degree and decreasing those of an expert whose consensus degree is lower. Based on these considerations, this section proposes an iteration-based consensus reaching model for self-confident DHLPRs: a double hierarchy linguistic preference values and self-confident degrees modifying-based consensus model, abbreviated as the DHSM-based consensus model.

4.1 The description of the consensus reaching problem with self-confident DHLPRs

Let \(S_{O} = \left\{ {s_{{t \langle o_{k} \rangle }} \left| {t = - \tau , \ldots , - 1,0,1, \ldots ,\tau ;\;k = - \varsigma } \right., \ldots , - 1,0,1, \ldots ,\varsigma } \right\}\) be a DHLTS, and \(S^{SL} = \{ {1,2, \ldots ,N} \}\) be an N-point numerical set to express the experts' self-confident degrees. For a GDM problem under double hierarchy linguistic environment, a set of experts \(\left\{ {e^{1} ,e^{2} , \ldots ,e^{n} } \right\}\,( {n \ge 2} )\) are invited to evaluate a set of alternatives \(\left\{ {A_{1} ,A_{2} , \ldots ,A_{m} } \right\}\,( {m \ge 2} )\). Let \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{n} } \right)^{T}\) be the weight vector of the experts, where \(\omega_{k} \ge 0\) \(( {k = 1,2, \ldots ,n} )\) expresses the weight of the expert \(e^{k}\), and \(\sum\nolimits_{k = 1}^{n} {\omega_{k} = 1}\). Let \(\Re^{k} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{m \times m}\) be a self-confident DHLPR provided by the expert \(e^{k}\), where \(r_{ij}^{k}\) expresses the preference value of the alternative \(A_{i}\) over \(A_{j}\), and \(sl_{ij}^{k} \in S^{SL}\) expresses the self-confident degree with respect to the preference value \(r_{ij}^{k}\).

4.2 Weights-determining method

As discussed in Sect. 1, the existing weight-determining methods can only determine the objective weights of experts by performing operations on the evaluation information. However, the subjective information cannot be neglected if we want to consider all useful information. Additionally, the objective weight information of one expert may contain another part, i.e., the evaluation information provided by the remaining experts. Therefore, the synthetic weight of each expert can be obtained by taking the subjective weights and the objective weights into account simultaneously:

  1. (1)

    Subjective weights: The expert’s own evaluation information, which can be obtained by each expert directly and denoted by \(\kappa_{k}^{S}\, ( {k = 1,2, \ldots ,n} )\). Then the subjective weight \(\omega_{k}^{S}\) of each expert can be obtained by normalizing the evaluation information as: \(\omega_{k}^{S} = \kappa_{k}^{S} /\sum\nolimits_{k = 1}^{n} {\kappa_{k}^{S} }\).

  2. (2)

    Objective weights: One is obtained from all of the provided self-confident DHLPRs \(\Re^{k} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{m \times m}\, ( {k = 1,2, \ldots ,n} )\), denoted by \(\omega_{k}^{{O_{1} }}\); the other is provided by the remaining experts, denoted by \(\omega_{k}^{{O_{2} }}\).

To calculate the first objective weight \(\omega_{k}^{{O_{1} }}\), a distance measure between any two self-confident DHLPRs needs to be defined.

Definition 4

Let \(\Re^{k} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{m \times m} \,( {k = 1,2} )\) be two self-confident DHLPRs. The distance between them can be obtained by

$$d_{12} = d\left( {\Re^{1} ,\Re^{2} } \right) = \sqrt {\frac{2}{{m( {m - 1} )}}\sum\limits_{i = 1}^{m} {\sum\limits_{i < j}^{m} {\left( {f\left( {r_{ij}^{1} } \right) \times sl_{ij}^{1} - f\left( {r_{ij}^{2} } \right) \times sl_{ij}^{2} } \right)^{2} } } } .$$
(6)

Based on Eq. (6), the distances between all pairs of experts can be arranged in a matrix:

$$D = \left( {d_{zk} } \right)_{n \times n} = \begin{array}{*{20}c} {} & {\begin{array}{*{20}c} {e^{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} } & {e^{2} } & {\;\, \cdots } & {{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} e^{n} } \\ \end{array} } \\ {\begin{array}{*{20}c} {e^{1} } \\ {e^{2} } \\ \vdots \\ {e^{n} } \\ \end{array} } & {\left( {\begin{array}{*{20}c} 0 & {d_{12} } & \cdots & {d_{1n} } \\ {d_{21} } & 0 & \cdots & {d_{2n} } \\ \vdots & \vdots & \ddots & \vdots \\ {d_{n1} } & {d_{n2} } & \cdots & 0 \\ \end{array} } \right)} \\ \end{array} .$$

Then, one kind of the objective weight of each expert can be obtained by

$$\omega_{k}^{{O_{1} }} = \sum\limits_{z = 1}^{n} {d_{zk} } /\sum\limits_{k = 1}^{n} {\sum\limits_{z = 1}^{n} {d_{zk} } } .$$
(7)
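
Assuming the preference matrices have already been transformed by Eq. (2) into numerical matrices \(F^{k} = (f(r_{ij}^{k}))_{m \times m}\), Eqs. (6) and (7) can be sketched as:

```python
import numpy as np

def sc_distance(F1, SL1, F2, SL2):
    """Eq. (6): distance between two self-confident DHLPRs, given their
    transformed preference matrices F and self-confident degree matrices SL."""
    m = F1.shape[0]
    iu = np.triu_indices(m, k=1)               # upper-triangular pairs i < j
    diff = F1[iu] * SL1[iu] - F2[iu] * SL2[iu]
    return np.sqrt(2.0 / (m * (m - 1)) * np.sum(diff ** 2))

def objective_weights_1(Fs, SLs):
    """Eq. (7): the first kind of objective weights, from pairwise distances."""
    n = len(Fs)
    D = np.array([[sc_distance(Fs[z], SLs[z], Fs[k], SLs[k])
                   for k in range(n)] for z in range(n)])
    col_sums = D.sum(axis=0)                   # column sum for each expert
    return col_sums / col_sums.sum()
```

Note that Eq. (7), applied here exactly as stated in the text, assigns larger weights to experts whose preferences are farther from those of the others.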

In addition, in GDM, each expert can evaluate the other experts according to how well he/she knows them. These evaluations compose a mutual evaluation matrix \(\Upsilon = \left( {\lambda_{zk} } \right)_{n \times n}\) as follows:

$$\Upsilon = \left( {\lambda_{zk} } \right)_{n \times n} = \begin{array}{*{20}c} {} & {\begin{array}{*{20}c} {e^{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} } & {e^{2} } & {\;\, \cdots } & {{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} e^{n} } \\ \end{array} } \\ {\begin{array}{*{20}c} {e^{1} } \\ {e^{2} } \\ \vdots \\ {e^{n} } \\ \end{array} } & {\left( {\begin{array}{*{20}c} {-\!\!-} & {\lambda_{12} } & \cdots & {\lambda_{1n} } \\ {\lambda_{21} } & {-\!\!-} & \cdots & {\lambda_{2n} } \\ \vdots & \vdots & \ddots & \vdots \\ {\lambda_{n1} } & {\lambda_{n2} } & \cdots & {-\!\!-} \\ \end{array} } \right)} \\ \end{array} ,$$

where the element \(\lambda_{zk} \in [ {0,1} ]\) denotes the evaluation value that the expert \(e^{z}\) gives to the expert \(e^{k}\); the larger the value of \(\lambda_{zk}\), the higher the evaluation. By averaging the evaluation values in each column, we obtain the importance degree of each expert, i.e.,

$$\lambda_{k} = \frac{1}{n - 1}\sum\limits_{\begin{subarray}{l} z = 1 \\ z \ne k \end{subarray} }^{n} {\lambda_{zk} } .$$
(8)

Then, by normalizing these importance degrees, the other kind of objective weight of each expert is obtained:

$$\omega_{k}^{{O_{2} }} = \lambda_{k} /\sum\limits_{k = 1}^{n} {\lambda_{k} } .$$
(9)

Combining these three kinds of weights of the experts, the synthetic weight of each expert can be obtained by

$$\omega_{k} = \alpha \omega_{k}^{S} + \beta \omega_{k}^{{O_{1} }} + \gamma \omega_{k}^{{O_{2} }} ,\quad k = 1,2, \ldots ,n,$$
(10)

where \(\alpha\), \(\beta\), \(\gamma\) are three adjustment parameters and satisfy \(\alpha ,\beta ,\gamma \ge 0\) and \(\alpha + \beta + \gamma = 1\). In general, these parameters can be given by the moderator in the GDM process directly.
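The mutual-evaluation weight of Eqs. (8)–(9) and the synthetic weight of Eq. (10) can be sketched as follows. This is a minimal illustration: the diagonal entries of \(\Upsilon\) are ignored, and the function names are ours.

```python
# Sketch of Eqs. (8)-(10): the mutual-evaluation objective weight and the
# synthetic weight. Diagonal entries of the mutual evaluation matrix are
# skipped, so any placeholder value may be stored there.

def objective_weights_2(Y):
    """Eqs. (8)-(9): average each column of the mutual evaluation matrix
    (excluding the diagonal), then normalize the averages."""
    n = len(Y)
    lam = [sum(Y[z][k] for z in range(n) if z != k) / (n - 1)
           for k in range(n)]
    s = sum(lam)
    return [x / s for x in lam]

def synthetic_weights(wS, wO1, wO2, alpha, beta, gamma):
    """Eq. (10): convex combination of the three weight vectors."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return [alpha * a + beta * b + gamma * c
            for a, b, c in zip(wS, wO1, wO2)]
```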

The following example illustrates the weight-determining process:

Example 3

Let \(S_{O} = \left\{ {s_{{t \langle o_{k} \rangle }} \left| {t = - 4, \ldots ,4;\;k = - 4} \right., \ldots ,4} \right\}\) be a DHLTS, three experts \(\left\{ {e^{1} ,e^{2} ,e^{3} } \right\}\) are invited to evaluate three alternatives \(\left\{ {A_{1} ,A_{2} ,A_{3} } \right\}\). Suppose that

  (1) The experts’ subjective evaluation vector is \(\kappa^{S} = (0.8,0.9,0.8)^{T}\),

  (2) Their evaluations \(\Re^{k}\, ( {k = 1,2,3} )\) are

    $$\begin{aligned} \Re^{1} & = \left( {\begin{array}{*{20}c} {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{2 \langle o_{ - 1} \rangle }} ,2} \right)} & {\left( {s_{{1 \langle o_{2} \rangle }} ,4} \right)} \\ {\left( {s_{{ - 2 \langle o_{1} \rangle }} ,2} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{1} \rangle }} ,5} \right)} \\ {\left( {s_{{ - 1 \langle o_{ - 2} \rangle }} ,4} \right)} & {\left( {s_{{1 \langle o_{ - 1} \rangle }} ,5} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} \\ \end{array} } \right), \\ \Re^{2} & = \left( {\begin{array}{*{20}c} {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{1 \langle o_{1} \rangle }} ,2} \right)} & {\left( {s_{{1 \langle o_{ - 2} \rangle }} ,5} \right)} \\ {\left( {s_{{ - 1 \langle o_{ - 1} \rangle }} ,2} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{0 \langle o_{1} \rangle }} ,4} \right)} \\ {\left( {s_{{ - 1 \langle o_{2} \rangle }} ,5} \right)} & {\left( {s_{{0 \langle o_{ - 1} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} \\ \end{array} } \right), \\ \Re^{3} & = \left( {\begin{array}{*{20}c} {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{2 \langle o_{1} \rangle }} ,4} \right)} & {\left( {s_{{2 \langle o_{1} \rangle }} ,3} \right)} \\ {\left( {s_{{ - 2 \langle o_{ - 1} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{0} \rangle }} ,4} \right)} \\ {\left( {s_{{ - 2 \langle o_{ - 1} \rangle }} ,3} \right)} & {\left( {s_{{1 \langle o_{0} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} \\ \end{array} } \right). \\ \end{aligned}$$
  (3) Each expert provides evaluations for the others, which establishes a mutual evaluation matrix \(\Upsilon = \left( {\lambda_{zk} } \right)_{3 \times 3}\) as follows:

    $$\Upsilon = \left( {\lambda_{zk} } \right)_{3 \times 3} = \begin{array}{*{20}c} {} & {\begin{array}{*{20}c} {e^{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} } & {{\kern 1pt} {\kern 1pt} {\kern 1pt} e^{2} {\kern 1pt} {\kern 1pt} {\kern 1pt} } & {{\kern 1pt} {\kern 1pt} {\kern 1pt} e^{3} {\kern 1pt} {\kern 1pt} {\kern 1pt} } \\ \end{array} } \\ \begin{aligned} e^{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} \hfill \\ e^{2} {\kern 1pt} {\kern 1pt} {\kern 1pt} \hfill \\ e^{3} {\kern 1pt} {\kern 1pt} {\kern 1pt} \hfill \\ \end{aligned} & {\left( {\begin{array}{*{20}c} - & {0.8} & {0.7} \\ {0.8} & - & {0.7} \\ {0.6} & {0.5} & - \\ \end{array} } \right)} \\ \end{array} .$$

Then, the weights of the experts are calculated as follows:

  (a) By normalizing the subjective evaluation vector \(\kappa^{S}\), the subjective weight vector is obtained as \(\omega^{S} = ( {0.32,0.36,0.32} )^{T}\);

  (b) Based on Eqs. (6) and (7), the first objective weight vector is obtained as \(\omega^{{O_{1} }} = ( {0.25,0.27,0.48} )^{T}\);

  (c) Based on Eqs. (8) and (9), the second objective weight vector is obtained as \(\omega^{{O_{2} }} = ( {0.34,0.32,0.34} )^{T}\).

Based on Eq. (10) with \(\alpha = 0.4\), \(\beta = 0.3\), and \(\gamma = 0.3\), the synthetic weight vector is obtained as \(\omega = ( {0.31,0.32,0.37} )^{T}\).
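The arithmetic of Example 3 can be checked directly. In the sketch below, the first objective weight vector \(\omega^{O_1}\) is taken from step (b) as given, since reproducing it requires the transformation function \(f\) of Sect. 2; the remaining steps follow from Eqs. (8)–(10).

```python
# A quick numerical check of Example 3 (steps (a), (c) and Eq. (10)).
# The first objective weight vector is taken from step (b) as reported.

kappa = [0.8, 0.9, 0.8]                       # subjective evaluations
wS = [k / sum(kappa) for k in kappa]          # step (a)

Y = [[None, 0.8, 0.7],                        # mutual evaluation matrix
     [0.8, None, 0.7],
     [0.6, 0.5, None]]
lam = [sum(Y[z][k] for z in range(3) if z != k) / 2 for k in range(3)]
wO2 = [x / sum(lam) for x in lam]             # Eqs. (8)-(9), step (c)

wO1 = [0.25, 0.27, 0.48]                      # step (b), as reported
alpha, beta, gamma = 0.4, 0.3, 0.3
w = [alpha * a + beta * b + gamma * c
     for a, b, c in zip(wS, wO1, wO2)]        # Eq. (10)

print([round(x, 2) for x in wS])   # [0.32, 0.36, 0.32]
print([round(x, 2) for x in wO2])  # [0.34, 0.32, 0.34]
print([round(x, 2) for x in w])    # [0.31, 0.32, 0.37]
```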

4.3 Consensus measure

In the consensus reaching process, a large number of consensus models have been developed, including consensus models with different preference expression structures (Chen et al. 2015), consensus models based on consistency and consensus measures (Gou et al. 2018), multi-stage optimization consensus models (Gou et al. 2020c), consensus models under dynamic contexts (Dong et al. 2017), and consensus models considering the behaviors/attitudes of the experts (Wu et al. 2017). However, the self-confident DHLPR has not yet been considered in existing consensus models due to the complexity of this kind of decision-making environment. To fill this research gap, this paper proposes a consensus measure and a consensus adjustment mechanism. In this subsection, based on Model 1, two models are established to obtain the individual and collective priority vectors of self-confident DHLPRs, respectively. The consensus measure is then developed from the obtained individual and collective priority vectors.

  • Model for obtaining the individual priority vector


    Motivated by Model 1, an extended least squares model is set up to calculate the individual priority vector of the expert \(e^{k}\):

    $${\text{Model}}\;2.\quad \begin{array}{*{20}l} {\hbox{min} \sum\limits_{i = 1}^{m} {\sum\limits_{i < j}^{m} {sl_{ij}^{k} \times \left( {0.5 \times \left( {w_{i}^{k} - w_{j}^{k} } \right) + 0.5 - f\left( {r_{ij}^{k} } \right)} \right)^{2} } } } \hfill \\ {{s.t.}\left\{ {\begin{array}{*{20}l} {\sum\limits_{i = 1}^{m} {w_{i}^{k} = 1} ,} \hfill \\ {w_{i}^{k} > 0,\;i = 1,2, \ldots ,m.} \hfill \\ \end{array} } \right.} \hfill \\ \end{array}$$

    The self-confident degree \(sl_{ij}^{k}\) in Model 2 acts as a magnification factor on the error term: the larger the value of \(sl_{ij}^{k}\) is, the more heavily the corresponding deviation is penalized. Solving Model 2 yields the optimal individual priority vector \(w^{k} = \left( {w_{1}^{k} ,w_{2}^{k} , \ldots ,w_{m}^{k} } \right)^{T}\).

  • Model for obtaining the collective priority vector


    Let \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{n} } \right)^{T}\) be the weight vector of all experts. The collective priority vector \(w^{c} = \left( {w_{1}^{c} ,w_{2}^{c} , \ldots ,w_{m}^{c} } \right)^{T}\) can be obtained by solving Model 3:

    $${\text{Model}}\;3.\quad \begin{array}{*{20}l} {\hbox{min} \sum\limits_{k = 1}^{n} {\omega_{k} \sum\limits_{i = 1}^{m} {\sum\limits_{i < j}^{m} {sl_{ij}^{k} \times \left( {0.5 \times \left( {w_{i}^{c} - w_{j}^{c} } \right) + 0.5 - f\left( {r_{ij}^{k} } \right)} \right)^{2} } } } } \hfill \\ {{s.t.}\left\{ {\begin{array}{*{20}l} {\sum\limits_{i = 1}^{m} {w_{i}^{c} = 1} ,} \hfill \\ {w_{i}^{c} > 0,\;i = 1,2, \ldots ,m.} \hfill \\ \end{array} } \right.} \hfill \\ \end{array}$$
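Models 2 and 3 are small quadratic programs. As a minimal numerical sketch (not the authors' solution procedure), they can be solved by projected gradient descent on the hyperplane \(\sum_i w_i = 1\), assuming the positivity constraints are inactive at the optimum:

```python
# A minimal sketch of Models 2 and 3. Instead of a QP solver, plain
# projected gradient descent is used on the hyperplane sum(w) = 1; the
# positivity constraints are assumed inactive at the optimum (true for
# the interior solution used in the test).

def solve_priority(f_list, sl_list, omega, m, steps=20000, lr=0.05):
    """Minimize the objective of Model 3,
       sum_k omega_k * sum_{i<j} sl_ij^k * (0.5*(w_i - w_j) + 0.5 - f_ij^k)^2,
    subject to sum_i w_i = 1. Model 2 is the special case of a single
    expert with omega = [1.0]."""
    w = [1.0 / m] * m  # start from the uniform vector
    for _ in range(steps):
        g = [0.0] * m
        for om, f, sl in zip(omega, f_list, sl_list):
            for i in range(m):
                for j in range(i + 1, m):
                    e = om * sl[i][j] * (0.5 * (w[i] - w[j]) + 0.5 - f[i][j])
                    g[i] += e  # d/dw_i of the squared-error term
                    g[j] -= e  # d/dw_j has the opposite sign
        mean_g = sum(g) / m  # project the gradient onto sum(w) = const
        w = [wi - lr * (gi - mean_g) for wi, gi in zip(w, g)]
    return w
```

For large self-confident degrees the step size `lr` may need to be reduced; in practice any off-the-shelf QP solver can be used instead.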

Generally, the consensus measure for the GDM problems is obtained by calculating the distance or similarity degree between individual and collective priority vectors. This paper develops a consensus measure by measuring the similarity degree between individual and collective priority vectors:

  (1) Individual consensus degree


    Let \(w^{k} = \left( {w_{1}^{k} ,w_{2}^{k} , \ldots ,w_{m}^{k} } \right)^{T}\) and \(w^{c} = \left( {w_{1}^{c} ,w_{2}^{c} , \ldots ,w_{m}^{c} } \right)^{T}\) be the individual and collective priority vectors obtained from Model 2 and Model 3, respectively. Then, the individual consensus degree \(CD\left( {e^{k} } \right)\) of the expert \(e^{k}\) is defined as:

    $$CD\left( {e^{k} } \right) = 1 - \sqrt {\frac{1}{m}\sum\limits_{i = 1}^{m} {\left( {w_{i}^{k} - w_{i}^{c} } \right)^{2} } } .$$
    (11)
  (2) Collective consensus degree

    Based on the individual consensus degrees \(CD\left( {e^{k} } \right)\) \(( {k = 1,2, \ldots ,n} )\), the collective consensus degree among all experts is measured by

    $$CD = \frac{1}{n}\sum\limits_{k = 1}^{n} {CD\left( {e^{k} } \right)} .$$
    (12)

Obviously, \(CD\left( {e^{k} } \right) \in [ {0,1} ]\) and \(CD \in [ {0,1} ]\). The larger the value of \(CD\), the higher the collective consensus degree among the experts. In particular, \(CD = 1\) means that all experts are in full consensus with the collective opinion. In the consensus reaching process for a GDM problem, a consensus threshold \(\xi \in [ {0,1} ]\) is usually provided in advance. If \(CD \ge \xi\), then the collective consensus is reached. Otherwise, the corresponding individual opinions should be adjusted.
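The consensus measures of Eqs. (11) and (12) translate directly into code; the function names below are illustrative.

```python
# Sketch of the consensus measure, Eqs. (11)-(12).
from math import sqrt

def individual_cd(wk, wc):
    """Eq. (11): 1 minus the root-mean-square deviation between the
    individual and collective priority vectors."""
    m = len(wk)
    return 1 - sqrt(sum((a - b) ** 2 for a, b in zip(wk, wc)) / m)

def collective_cd(w_list, wc):
    """Eq. (12): average of the individual consensus degrees."""
    cds = [individual_cd(wk, wc) for wk in w_list]
    return sum(cds) / len(cds)
```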

4.4 Feedback adjustment mechanism

In the consensus reaching process, the experts need to adjust their preference information if the collective consensus is not reached. To communicate fully with the experts and obtain their feedback, this section develops a feedback adjustment mechanism to help the experts adjust their preferences and improve the collective consensus degree. In general, the feedback adjustment mechanism consists of two consensus rules:

  (1) Identification rule (IR). The IR can be used to identify the experts who should adjust their preferences to improve the collective consensus degree.

  (2) Direction rule (DR). The DR can be used to find the adjustment direction and guide the experts in improving their preferences.

Based on these two rules, we establish a feedback adjustment mechanism, which can be used to identify and adjust both the preferences and the self-confident degrees. The feedback adjustment mechanism is described as follows:

Step 1. Identify preferences and self-confident degrees that need to be adjusted.

Based on Eqs. (11) and (12), we can identify the experts whose individual consensus degrees are lower than the given consensus threshold \(\xi \in [ {0,1} ]\), i.e., \(ECD = \left\{ {e^{k} \left| {CD\left( {e^{k} } \right) < \xi } \right.} \right\}\). Then we transform the collective priority vector \(w^{c} = \left( {w_{1}^{c} ,w_{2}^{c} , \ldots ,w_{m}^{c} } \right)^{T}\) into the collective DHLPR \(W^{c} = \left( {w_{ij}^{c} } \right)_{m \times m} = \left( {f^{ - 1} \left( {0.5 \times \left( {w_{i}^{c} - w_{j}^{c} } \right) + 0.5} \right)} \right)_{m \times m}\). Let \(\Phi^{k} = \left( {\phi_{ij}^{k} } \right)_{m \times m}\) be the error matrix with respect to the self-confident DHLPR \(\Re^{k} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{m \times m}\) of the expert \(e^{k} \in ECD\), and

$$\phi_{ij}^{k} = \left( {f\left( {r_{ij}^{k} } \right) - f\left( {w_{ij}^{c} } \right)} \right)^{2} ,$$
(13)

where \(\phi_{ij}^{k}\) reflects the error degree of the preference \(r_{ij}^{k}\): the larger the value of \(\phi_{ij}^{k}\), the higher the error degree. Then, we can identify the positions whose error degrees are larger than the upper error threshold \(\bar{\phi }\,( {\bar{\phi } \ge 0} )\):

$$EL^{k} = \left\{ {( {i,j} )\left| {\phi_{ij}^{k} > \bar{\phi },\;i < j} \right.} \right\}.$$
(14)

Similarly, we can identify the positions whose error degrees are smaller than the lower error threshold \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\phi } \left( {\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\phi } \ge 0} \right)\):

$$EM^{k} = \left\{ {( {i,j} )\left| {\phi_{ij}^{k} < \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\phi } ,\;i < j} \right.} \right\}.$$
(15)
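The identification rule of Eqs. (13)–(15) can be illustrated numerically. The sketch below checks it against the error matrix \(\Phi^{1(0)}\) reported for expert \(e^1\) in the case study of Sect. 5 (with \(\bar{\phi } = 0.15\) and lower threshold 0.05); indices are 1-based, as in the paper.

```python
# Sketch of the identification rule, Eqs. (14)-(15), checked against the
# error matrix Phi^1(0) reported in the case study of Sect. 5.

def identify(phi, upper, lower):
    """Return the index sets EL (Eq. 14) and EM (Eq. 15) over the
    upper-triangular positions i < j (1-indexed, as in the paper)."""
    m = len(phi)
    EL = {(i + 1, j + 1) for i in range(m) for j in range(i + 1, m)
          if phi[i][j] > upper}
    EM = {(i + 1, j + 1) for i in range(m) for j in range(i + 1, m)
          if phi[i][j] < lower}
    return EL, EM

phi1 = [[0, 0.0355, 0.0038, 0.0103],
        [0.0355, 0, 0.1643, 0.0770],
        [0.0038, 0.1643, 0, 0.1498],
        [0.0103, 0.0770, 0.1498, 0]]
EL, EM = identify(phi1, 0.15, 0.05)
print(EL)  # {(2, 3)}
print(EM)  # {(1, 2), (1, 3), (1, 4)}
```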

Step 2. Based on the DR, we can determine the adjustment suggestions to improve the preferences and self-confident degrees.

Let \(\widetilde{{\Re^{k} }} = \left( {\left( {\widetilde{{r_{ij}^{k} }},\widetilde{{sl_{ij}^{k} }}} \right)} \right)_{m \times m}\) be the adjusted self-confident DHLPR with respect to the preference relation \(\Re^{k} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{m \times m}\) of the expert \(e^{k}\). Based on the collective DHLPR \(W^{c} = \left( {w_{ij}^{c} } \right)_{m \times m}\), the adjustment method can be developed by the following DRs:

  (1) For each \(( {i,j} ) \in EL^{k}\), we feed it back to the expert \(e^{k}\) and suggest that he/she adjust the preference \(r_{ij}^{k}\) according to the following rule:

    $$\widetilde{{r_{ij}^{k} }} \in \left[ {\hbox{min} \left( {r_{ij}^{k} ,w_{ij}^{c} } \right),\hbox{max} \left( {r_{ij}^{k} ,w_{ij}^{c} } \right)} \right].$$
    (16)

    Based on Eq. (16), it is clear that the preference \(r_{ij}^{k}\) hinders the consensus reaching process because it produces a large error degree with respect to the collective preference \(w_{ij}^{c}\). Therefore, it is necessary to decrease the self-confident degree \(sl_{ij}^{k}\) of the expert \(e^{k}\), and we provide the following suggestion:

    $$\widetilde{{sl_{ij}^{k} }} \in \left[ {1,sl_{ij}^{k} } \right].$$
    (17)
  (2) For each \(( {i,j} ) \in EM^{k}\), the preference \(r_{ij}^{k}\) promotes the consensus reaching process because it is close to the collective preference \(w_{ij}^{c}\). Therefore, it is necessary to increase the self-confident degree \(sl_{ij}^{k}\) of the expert \(e^{k}\), and we provide the following suggestions:

    $$\widetilde{{r_{ij}^{k} }} = r_{ij}^{k} \quad {\text{and}}\quad \widetilde{{sl_{ij}^{k} }} \in \left[ {sl_{ij}^{k} ,N} \right].$$
    (18)
  (3) For the remaining positions \(( {i,j} ) \notin EL^{k}\) and \(( {i,j} ) \notin EM^{k}\), no adjustment is necessary, i.e., \(\widetilde{{r_{ij}^{k} }} = r_{ij}^{k}\) and \(\widetilde{{sl_{ij}^{k} }} = sl_{ij}^{k}\).

For the self-confident DHLPRs of the experts whose consensus degrees are not lower than the given consensus threshold \(\xi\), i.e., \(e^{k} \notin ECD\), we set \(\widetilde{{\Re^{k} }} = \Re^{k}\).
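The direction rules of Eqs. (16)–(18) can be sketched on the numeric scale, i.e., on the \(f\)-transformed preference values in \([0,1]\); this is an assumption of the sketch, since the paper states the intervals on the linguistic terms themselves. The function name and signature are illustrative.

```python
# Sketch of the direction rules, Eqs. (16)-(18), stated on the numeric
# scale: fr = f(r_ij^k), fwc = f(w_ij^c), sl in {1, ..., N}.

def suggest(fr, fwc, sl, pos, EL, EM, N=7):
    """Return the suggested interval for the adjusted preference value
    and for the adjusted self-confident degree at position pos."""
    if pos in EL:                        # Eqs. (16)-(17): move toward the
        return (min(fr, fwc), max(fr, fwc)), (1, sl)  # collective value
    if pos in EM:                        # Eq. (18): keep the preference,
        return (fr, fr), (sl, N)         # raise the self-confidence
    return (fr, fr), (sl, sl)            # otherwise: no adjustment
```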

4.5 The DHSM-based consensus model

Based on the discussion above, the DHSM-based consensus model can be established. First, based on Models 2 and 3, the individual and collective priority vectors are obtained from the self-confident DHLPRs provided by all experts. Then, the individual consensus degrees and the collective consensus degree are calculated by Eqs. (11) and (12), respectively. If the collective consensus degree is lower than the given threshold and the maximum number of iterations (\({\mathbb{Z}}_{\hbox{max} } \ge 1\)) has not been reached, then the suggestions obtained from the feedback adjustment mechanism are provided to the experts so that they can revise their self-confident DHLPRs.

Next we develop an algorithm to show the DHSM-based consensus model:


Algorithm 1. The DHSM-based consensus model

Input: The self-confident DHLPRs \(\Re^{k} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{m \times m}\) \(( {k = 1,2, \ldots ,n} )\), the subjective evaluation values \(\kappa_{k}^{S} \,( {k = 1,2, \ldots ,n} )\), the mutual evaluation matrix \(\Upsilon = \left( {\lambda_{zk} } \right)_{n \times n}\), \(\xi\), \(\bar{\phi }\), \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\phi }\), and \({\mathbb{Z}}_{\hbox{max} } \ge 1\).

Output: The adjusted self-confident DHLPR \(\widetilde{{\Re^{k} }} = \left( {\left( {\widetilde{{r_{ij}^{k} }},\widetilde{{sl_{ij}^{k} }}} \right)} \right)_{m \times m}\) \(( {k = 1,2, \ldots ,n} )\), the collective priority vector \(w^{c} = \left( {w_{1}^{c} ,w_{2}^{c} , \ldots ,w_{m}^{c} } \right)^{T}\), and the number of iterations \({\mathbb{Z}}\).

Step 1. Let \({\mathbb{Z}} = 0\), \(\Re^{{k( {\mathbb{Z}} )}} = \left( {\left( {r_{ij}^{{k( {\mathbb{Z}} )}} ,sl_{ij}^{{k( {\mathbb{Z}} )}} } \right)} \right)_{m \times m} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{m \times m}\).

Step 2. Based on Eqs. (6)–(10), the experts’ weight vector \(\omega^{{( {\mathbb{Z}} )}} = \left( {\omega_{1}^{{( {\mathbb{Z}} )}} ,\omega_{2}^{{( {\mathbb{Z}} )}} , \ldots ,\omega_{n}^{{( {\mathbb{Z}} )}} } \right)^{T}\) is obtained.

Step 3. By Models 2 and 3, the individual priority vectors \(w^{{k( {\mathbb{Z}})}} = \left( {w_{1}^{{k( {\mathbb{Z}} )}} ,w_{2}^{{k( {\mathbb{Z}} )}} , \ldots ,w_{m}^{{k( {\mathbb{Z}} )}} } \right)^{T}\)\(( {k = 1,2, \ldots ,n} )\) of the experts and the collective priority vector \(w^{{c( {\mathbb{Z}} )}} = \left( {w_{1}^{{c( {\mathbb{Z}} )}} ,w_{2}^{{c( {\mathbb{Z}} )}} , \ldots ,w_{m}^{{c( {\mathbb{Z}} )}} } \right)^{T}\) are obtained.

Step 4. Based on Eqs. (11) and (12), the individual consensus degrees \(CD\left( {e^{k} } \right)^{{( {\mathbb{Z}} )}}\) \(( {k = 1,2, \ldots ,n} )\) and the collective consensus degree \(CD^{{( {\mathbb{Z}} )}}\) are obtained. If \(CD^{{( {\mathbb{Z}} )}} \ge \xi\) or \({\mathbb{Z}} \ge {\mathbb{Z}}_{\hbox{max} }\), then go to Step 6; otherwise, go to the next step.

Step 5. Identify the experts with \(CD\left( {e^{k} } \right)^{{( {\mathbb{Z}} )}} < \xi\), and calculate the collective DHLPR \(W^{{c( {\mathbb{Z}} )}} = \left( {w_{ij}^{{c( {\mathbb{Z}} )}} } \right)_{m \times m} = \left( {f^{ - 1} \left( {0.5 \times \left( {w_{i}^{{c( {\mathbb{Z}} )}} - w_{j}^{{c( {\mathbb{Z}} )}} } \right) + 0.5} \right)} \right)_{m \times m}\) and the error matrix \(\Phi^{{k( {\mathbb{Z}} )}} = \left( {\phi_{ij}^{{k( {\mathbb{Z}} )}} } \right)_{m \times m}\) based on Eq. (13). Then, the adjusted self-confident DHLPR \(\Re^{{k( {{\mathbb{Z}} + 1} )}} = \left( {\left( {r_{ij}^{{k( {{\mathbb{Z}} + 1} )}} ,sl_{ij}^{{k( {{\mathbb{Z}} + 1} )}} } \right)} \right)_{m \times m}\) of the expert \(e^{k}\) is established. The expert is advised to adjust the preference information and the self-confident degrees as follows:

(1) For \(( {i,j} ) \in EL^{k}\), \(r_{ij}^{{k( {{\mathbb{Z}} + 1} )}} \in \left[ {\hbox{min} \left( {r_{ij}^{{k( {\mathbb{Z}} )}} ,w_{ij}^{{c( {\mathbb{Z}} )}} } \right),\hbox{max} \left( {r_{ij}^{{k( {\mathbb{Z}} )}} ,w_{ij}^{{c( {\mathbb{Z}} )}} } \right)} \right]\) and \(sl_{ij}^{{k( {{\mathbb{Z}} + 1} )}} \in \left[ {1,sl_{ij}^{{k( {\mathbb{Z}} )}} } \right]\).

(2) For \(( {i,j} ) \in EM^{k}\), \(r_{ij}^{{k( {{\mathbb{Z}} + 1} )}} = r_{ij}^{{k( {\mathbb{Z}} )}}\) and \(sl_{ij}^{{k( {{\mathbb{Z}} + 1} )}} \in \left[ {sl_{ij}^{{k( {\mathbb{Z}} )}} ,N} \right]\).

(3) For \(( {i,j} ) \notin EL^{k}\) and \(( {i,j} ) \notin EM^{k}\), \(r_{ij}^{{k( {{\mathbb{Z}} + 1} )}} = r_{ij}^{{k( {\mathbb{Z}} )}}\) and \(sl_{ij}^{{k( {{\mathbb{Z}} + 1} )}} = sl_{ij}^{{k( {\mathbb{Z}} )}}\).

Besides, if \(CD\left( {e^{k} } \right)^{{( {\mathbb{Z}} )}} \ge \xi\), then we have \(\Re^{{k( {{\mathbb{Z}} + 1} )}} = \Re^{{k( {\mathbb{Z}} )}}\).

Let \({\mathbb{Z}} = {\mathbb{Z}} + 1\). Go back to Step 2.

Step 6. Let \(\widetilde{{\Re^{k} }} = \Re^{{k( {\mathbb{Z}} )}}\)\(( {k = 1,2, \ldots ,n} )\) and \(w^{c} = w^{{c( {\mathbb{Z}} )}}\). Output \(\widetilde{{\Re^{k} }}\), \(w^{c}\) and \({\mathbb{Z}}\). The final ranking of alternatives can be obtained on the basis of \(w^{c}\).

To understand Algorithm 1 clearly, the DHSM-based consensus model for GDM problems with self-confident DHLPRs is depicted in Fig. 2.

Fig. 2
figure 2

The DHSM-based consensus model for GDM problems with self-confident DHLPRs

Remark 3

Figure 2 mainly consists of four parts:

  (1) Weight-determining process. This part obtains all experts’ weights, which are then used to calculate the collective priority vector.

  (2) Priority vector calculation process. The main work in this part is to calculate the individual priority vectors of the experts and the collective priority vector.

  (3) Consensus process. If the consensus level is acceptable or the number of iterations reaches the given maximum, go to the selection process; otherwise, the experts should adjust their preferences until the group consensus is reached or the number of iterations reaches the given maximum.

  (4) Selection process. Based on the current collective priority vector, the ranking of the alternatives is obtained.

5 Case study

In this section, we apply the proposed DHSM-based consensus model to deal with a practical GDM problem concerning the selection of optimal Telemedicine technology.

Broadly speaking, Telemedicine refers to the long-distance diagnosis, treatment and consultation of patients in remote areas, on islands or on ships with poor medical conditions, by taking advantage of the medical expertise and equipment of large hospitals or specialized medical centers through computer, remote sensing, telemetry and remote control technologies. In recent years, Telemedicine has received much attention and developed rapidly in China. First, Telemedicine mitigates to some extent the imbalance between China’s expert resources and its population distribution; second, it alleviates the problems of high patient referral rates and high costs in remote areas. To promote the development of Telemedicine, many hospitals have carried out pilot Telemedicine programs.

Now a city in China decides to choose an optimal hospital in the field of Telemedicine from four alternatives \(A = \left\{ {A_{1} ,A_{2} ,A_{3} ,A_{4} } \right\}\). Four experts \(E = \left\{ {e^{1} ,e^{2} ,e^{3} ,e^{4} } \right\}\) are invited to form a group to provide their self-confident DHLPRs over \(A\), where \(S^{SL} = \{ {1,2, \ldots ,7} \}\) is a 7-point numerical set to express the experts’ self-confident degrees, and \(S_{O} = \left\{ {s_{{t \langle o_{k} \rangle }} \left| {t = - 4, \ldots ,4;\;k = - 4} \right., \ldots ,4} \right\}\) is a DHLTS with

$$\begin{aligned} S & = \left\{ {s_{ - 4} = extremely\;bad,s_{ - 3} = very\;bad,s_{ - 2} = bad,s_{ - 1} = slightly\;bad,} \right. \\ & \left. {\quad s_{0} = equal,s_{1} = slightly\;good,s_{2} = good,s_{3} = very\;good,s_{4} = extremely\;good} \right\}, \\ \end{aligned}$$
$$\begin{aligned} O & = \left\{ {o_{ - 4} = far\;from,o_{ - 3} = scarcely,o_{ - 2} = only\;a\;little,o_{ - 1} = a\;little,} \right. \\ & \left. {\quad o_{0} = just\;right,o_{1} = much,o_{2} = very\;much,o_{3} = extremely\;much,o_{4} = entirely} \right\}. \\ \end{aligned}$$

The self-confident DHLPRs \(\Re^{k} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{4 \times 4} \,( {k = 1,2,3,4} )\) provided by the experts are shown as follows:

$$\begin{aligned} \Re^{1} & = \left( {\begin{array}{*{20}c} {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{ - 1} \rangle }} ,4} \right)} & {\left( {s_{{1 \langle o_{ - 2} \rangle }} ,3} \right)} & {\left( {s_{{2 \langle o_{ - 3} \rangle }} ,5} \right)} \\ {\left( {s_{{1 \langle o_{1} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{3 \langle o_{ - 1} \rangle }} ,2} \right)} & {\left( {s_{{2 \langle o_{3} \rangle }} ,6} \right)} \\ {\left( {s_{{ - 1 \langle o_{2} \rangle }} ,3} \right)} & {\left( {s_{{ - 3 \langle o_{1} \rangle }} ,2} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{2} \rangle }} ,2} \right)} \\ {\left( {s_{{ - 2 \langle o_{3} \rangle }} ,5} \right)} & {\left( {s_{{ - 3 \langle o_{ - 3} \rangle }} ,6} \right)} & {\left( {s_{{1 \langle o_{ - 3} \rangle }} ,2} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} \\ \end{array} } \right), \\ \Re^{2} & = \left( {\begin{array}{*{20}c} {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{1} \rangle }} ,3} \right)} & {\left( {s_{{0 \langle o_{ - 2} \rangle }} ,2} \right)} & {\left( {s_{{ - 2 \langle o_{2} \rangle }} ,5} \right)} \\ {\left( {s_{{1 \langle o_{ - 1} \rangle }} ,3} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{2 \langle o_{2} \rangle }} ,6} \right)} & {\left( {s_{{2 \langle o_{3} \rangle }} ,3} \right)} \\ {\left( {s_{{ - 3 \langle o_{2} \rangle }} ,2} \right)} & {\left( {s_{{ - 2 \langle o_{ - 2} \rangle }} ,6} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{3} \rangle }} ,4} \right)} \\ {\left( {s_{{ - 3 \langle o_{ - 2} \rangle }} ,5} \right)} & {\left( {s_{{ - 1 \langle o_{ - 3} \rangle }} ,3} \right)} & {\left( {s_{{1 \langle o_{ - 3} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} \\ \end{array} } \right), \\ 
\Re^{3} & = \left( {\begin{array}{*{20}c} {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{1} \rangle }} ,4} \right)} & {\left( {s_{{1 \langle o_{2} \rangle }} ,1} \right)} & {\left( {s_{{1 \langle o_{ - 2} \rangle }} ,5} \right)} \\ {\left( {s_{{0 \langle o_{ - 1} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{2 \langle o_{0} \rangle }} ,3} \right)} & {\left( {s_{{2 \langle o_{2} \rangle }} ,5} \right)} \\ {\left( {s_{{ - 1 \langle o_{ - 2} \rangle }} ,1} \right)} & {\left( {s_{{ - 2 \langle o_{0} \rangle }} ,3} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{ - 3} \rangle }} ,2} \right)} \\ {\left( {s_{{2 \langle o_{2} \rangle }} ,5} \right)} & {\left( {s_{{2 \langle o_{ - 2} \rangle }} ,5} \right)} & {\left( {s_{{ - 3 \langle o_{3} \rangle }} ,2} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} \\ \end{array} } \right), \\ \Re^{4} & = \left( {\begin{array}{*{20}c} {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{1} \rangle }} ,4} \right)} & {\left( {s_{{1 \langle o_{1} \rangle }} ,3} \right)} & {\left( {s_{{2 \langle o_{ - 1} \rangle }} ,5} \right)} \\ {\left( {s_{{1 \langle o_{ - 1} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{1 \langle o_{ - 1} \rangle }} ,2} \right)} & {\left( {s_{{2 \langle o_{1} \rangle }} ,4} \right)} \\ {\left( {s_{{ - 3 \langle o_{ - 1} \rangle }} ,3} \right)} & {\left( {s_{{ - 1 \langle o_{1} \rangle }} ,2} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 2 \langle o_{1} \rangle }} ,5} \right)} \\ {\left( {s_{{ - 2 \langle o_{ - 3} \rangle }} ,5} \right)} & {\left( {s_{{ - 2 \langle o_{ - 1} \rangle }} ,4} \right)} & {\left( {s_{{2 \langle o_{ - 1} \rangle }} ,5} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} \\ \end{array} } \right). \\ \end{aligned}$$

Additionally, the experts’ subjective evaluation vector is \(\kappa^{S} = ( {0.8,0.9,0.7,0.8} )^{T}\). Furthermore, each expert provides evaluations for the others, which establishes a mutual evaluation matrix \(\Upsilon = \left( {\lambda_{zk} } \right)_{4 \times 4}\) (\(\lambda_{zk} \in [ {0,1} ]\)) as follows:

$$\Upsilon = \left( {\lambda_{zk} } \right)_{4 \times 4} = \begin{array}{*{20}c} {} & {\begin{array}{*{20}c} {e^{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} } & {{\kern 1pt} {\kern 1pt} e^{2} } & {{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} e^{3} } & {{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} e^{4} } \\ \end{array} } \\ {\begin{array}{*{20}c} {e^{1} } \\ {e^{2} } \\ {e^{3} } \\ {e^{4} } \\ \end{array} } & {\left( {\begin{array}{*{20}c} {-\!\!-} & {0.5} & {0.7} & {0.5} \\ {0.6} & {-\!\!-} & {0.6} & {0.7} \\ {0.7} & {0.6} & {-\!\!-} & {0.5} \\ {0.5} & {0.6} & {0.9} & {-\!\!-} \\ \end{array} } \right)} \\ \end{array} .$$

For this GDM problem, the given consensus threshold is \(\xi = 0.95\), the upper and lower error thresholds are \(\bar{\phi } = 0.15\) and \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\phi } = 0.05\), and the maximum number of iterations \({\mathbb{Z}}_{\hbox{max} } = 5\).

Step 1. Let \({\mathbb{Z}} = 0\), \(\Re^{k( 0 )} = \left( {\left( {r_{ij}^{k( 0 )} ,sl_{ij}^{k( 0 )} } \right)} \right)_{4 \times 4} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{4 \times 4} \,( {k = 1,2,3,4} )\).

Step 2. Let \(\alpha = 0.2\), \(\beta = 0.5\), and \(\gamma = 0.3\). Based on Eqs. (6)–(10), the experts’ weight vector is obtained as \(\omega^{( 0 )} = ( {0.24,0.30,0.23,0.24} )^{T}\).

Step 3. The individual priority vectors of all experts can be calculated from the self-confident DHLPRs by Model 2:

$$\begin{aligned} w^{1( 0 )} & = ( {0.1311,0.5357,0.3332,0} )^{T} , \\ w^{2( 0 )} & = ( {0.3268,0.5143,0.0768,0.0820} )^{T} , \\ w^{3( 0 )} & = ( {0.2865,0.5326,0.1810,0} )^{T} , \\ w^{4( 0 )} & = ( {0.2808,0.4683,0.2058,0.0451} )^{T} . \\ \end{aligned}$$

Based on Model 3, the collective priority vector can be obtained as:

$$w^{c( 0 )} = ( {0.2920,0.5334,0.1746,0} )^{T} .$$

Step 4. Based on Eqs. (11) and (12), we can calculate the individual consensus degrees \(CD\left( {e^{k} } \right)^{( 0 )}\) \(( {k = 1,2, \ldots ,n} )\) and the collective consensus degree \(CD^{( 0)}\), which are shown in Table 2.

Table 2 The individual and collective consensus degrees

Clearly, \(CD^{( 0 )} = 0.9433 < \xi = 0.95\). This indicates that the consensus degree among all experts is not acceptable. Therefore, it is necessary to improve the consensus degree among all experts by the feedback adjustment mechanism.

Step 5. Based on Table 2, the individual consensus degrees of the experts \(\left\{ {e^{1} ,e^{2} } \right\}\) are lower than the given consensus threshold, i.e., \(ECD^{( 0 )} = \left\{ {e^{1} ,e^{2} } \right\}\). Then we transform the collective priority vector \(w^{c( 0 )}\) into the collective DHLPR:

$$W^{c( 0 )} = \left( {w_{ij}^{c( 0 )} } \right)_{4 \times 4} = \left( {\begin{array}{*{20}c} {s_{{0 \langle o_{0} \rangle }} } & {s_{{ - 1 \langle o_{0.14} \rangle }} } & {s_{{0 \langle o_{1.88} \rangle }} } & {s_{{1 \langle o_{0.67} \rangle }} } \\ {s_{{1 \langle o_{ - 0.14} \rangle }} } & {s_{{0 \langle o_{0} \rangle }} } & {s_{{1 \langle o_{1.74} \rangle }} } & {s_{{2 \langle o_{0.53} \rangle }} } \\ {s_{{0 \langle o_{ - 1.88} \rangle }} } & {s_{{ - 1 \langle o_{ - 1.74} \rangle }} } & {s_{{0 \langle o_{0} \rangle }} } & {s_{{0 \langle o_{2.79} \rangle }} } \\ {s_{{ - 1 \langle o_{ - 0.67} \rangle }} } & {s_{{ - 2 \langle o_{ - 0.53} \rangle }} } & {s_{{0 \langle o_{ - 2.79} \rangle }} } & {s_{{0 \langle o_{0} \rangle }} } \\ \end{array} } \right).$$

The error matrix \(\Phi^{k( 0 )} = \left( {\phi_{ij}^{k( 0 )} } \right)_{4 \times 4}\) of each expert \(e^{k} \in ECD\) can be generated by Eq. (13):

$$\begin{aligned} \Phi^{1( 0 )} & = \left( {\begin{array}{*{20}c} 0 &\quad {0.0355} & \quad{0.0038} &\quad {0.0103} \\ {0.0355} &\quad 0 &\quad {0.1643} &\quad {0.0770} \\ {0.0038} &\quad {0.1643} &\quad 0 &\quad {0.1498} \\ {0.0103} &\quad {0.0770} &\quad {0.1498} &\quad 0 \\ \end{array} } \right)\;\;\;{\text{and}} \\ \Phi^{2( 0)} & = \left( {\begin{array}{*{20}c} 0 &\quad {0.0270} &\quad {0.1212} &\quad {0.3335} \\ {0.0270} &\quad 0 &\quad {0.1331} &\quad {0.0770} \\ {0.1212} &\quad {0.1331} &\quad 0 &\quad {0.1186} \\ {0.3335} &\quad {0.0770} &\quad {0.1186} &\quad 0 \\ \end{array} } \right). \\ \end{aligned}$$

The upper error threshold is \(\bar{\phi } = 0.15\). Based on Eq. (14), we can identify the positions whose error degrees are larger than \(\bar{\phi }\):

$$EL^{1( 0 )} = \{ {( {2,3} )} \}\;\;\;{\text{and}}\;\;\;EL^{2( 0 )} = \{ {( {1,4} )} \}.$$

Similarly, the lower error threshold is \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\phi } = 0.05\). Based on Eq. (15), we can identify the preference values whose error degrees are lower than \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\phi }\):

$$EM^{1( 0 )} = \{ {( {1,2} ),( {1,3} ),( {1,4} )} \}\;\;\;{\text{and}}\;\;\;EM^{2( 0 )} = \{ {( {1,2} )} \}.$$
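The identification rules of Eqs. (14) and (15) amount to scanning the upper triangle of each error matrix against the two thresholds. A minimal sketch (assuming the error matrix is given as a nested list and positions are reported as 1-indexed \(( {i,j} )\) pairs with \(i < j\)):

```python
# Identify positions whose error degrees exceed the upper threshold (Eq. (14))
# or fall below the lower threshold (Eq. (15)). 1-indexed (i, j) pairs, i < j.
def identify_positions(phi, upper=0.15, lower=0.05):
    m = len(phi)
    el = [(i + 1, j + 1) for i in range(m) for j in range(i + 1, m)
          if phi[i][j] > upper]   # elements that need adjustment
    em = [(i + 1, j + 1) for i in range(m) for j in range(i + 1, m)
          if phi[i][j] < lower]   # well-matched elements
    return el, em

# Error matrix of expert e^1 from the example above
phi1 = [[0,      0.0355, 0.0038, 0.0103],
        [0.0355, 0,      0.1643, 0.0770],
        [0.0038, 0.1643, 0,      0.1498],
        [0.0103, 0.0770, 0.1498, 0]]
el1, em1 = identify_positions(phi1)
# el1 -> [(2, 3)]; em1 -> [(1, 2), (1, 3), (1, 4)]
```

Running this on \(\Phi^{1( 0 )}\) reproduces \(EL^{1( 0 )}\) and \(EM^{1( 0 )}\) above.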

Let \(\Re^{k( 1 )} = \left( {\left( {r_{ij}^{k( 1 )} ,sl_{ij}^{k( 1 )} } \right)} \right)_{4 \times 4}\) be the adjusted self-confident DHLPRs of the experts \(e^{k} \;( {k = 1,2} )\), respectively. Then,

  • When constructing \(\Re^{1( 1 )} = \left( {\left( {r_{ij}^{1( 1 )} ,sl_{ij}^{1( 1 )} } \right)} \right)_{4 \times 4}\), we advise that

    (1) \(r_{23}^{1( 1 )} \in \left[ {s_{{1 \langle o_{1.74} \rangle }} ,s_{{3 \langle o_{ - 1} \rangle }} } \right]\) and \(sl_{23}^{1( 1 )} \in [ {1,2} ]\).

    (2) \(r_{12}^{1( 1 )} = s_{{ - 1 \langle o_{ - 1} \rangle }}\) and \(sl_{12}^{1( 1 )} \in [ {4,7} ]\); \(r_{13}^{1( 1 )} = s_{{1 \langle o_{ - 2} \rangle }}\) and \(sl_{13}^{1( 1 )} \in [ {3,7} ]\); \(r_{14}^{1( 1 )} = s_{{2 \langle o_{ - 3} \rangle }}\) and \(sl_{14}^{1( 1 )} \in [ {5,7} ]\).

The adjusted results are obtained as follows: for \(i < j\), the remaining unidentified elements of \(\Re^{1}\) are not changed; for \(i > j\), we set \(r_{ji}^{1( 1 )} = neg\left( {r_{ij}^{1( 1 )} } \right)\) and \(sl_{ji}^{1( 1 )} = sl_{ij}^{1( 1 )}\).

  • When constructing \(\Re^{2( 1 )} = \left( {\left( {r_{ij}^{2( 1 )} ,sl_{ij}^{2( 1 )} } \right)} \right)_{4 \times 4}\), there are:

    (1) \(r_{14}^{2( 1 )} \in \left[ {s_{{ - 2 \langle o_{2} \rangle }} ,s_{{1 \langle o_{0.67} \rangle }} } \right]\) and \(sl_{14}^{2( 1 )} \in [ {1,5} ]\);

    (2) \(r_{12}^{2( 1 )} = s_{{ - 1 \langle o_{1} \rangle }}\) and \(sl_{12}^{2( 1 )} \in [ {3,7} ]\).

The adjusted results are obtained as follows: for \(i < j\), the remaining unidentified elements of \(\Re^{2}\) are not changed; for \(i > j\), we set \(r_{ji}^{2( 1 )} = neg\left( {r_{ij}^{2( 1 )} } \right)\) and \(sl_{ji}^{2( 1 )} = sl_{ij}^{2( 1 )}\).
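The reciprocity rule above can be sketched in code. Here each double hierarchy linguistic term \(s_{{t \langle o_{k} \rangle }}\) is represented as a \((t, k)\) tuple, and the negation operator is assumed to map \(s_{{t \langle o_{k} \rangle }}\) to \(s_{{ - t \langle o_{ - k} \rangle }}\), consistent with the symmetric term pairs in \(W^{c( 0 )}\) above (e.g., \(s_{{ - 1 \langle o_{0.14} \rangle }}\) versus \(s_{{1 \langle o_{ - 0.14} \rangle }}\)):

```python
# Complete a self-confident DHLPR from its upper triangle using reciprocity:
# r_ji = neg(r_ij) and sl_ji = sl_ij. A term s_{t<o_k>} is a (t, k) tuple,
# so the assumed negation is neg((t, k)) = (-t, -k).
def neg(term):
    t, k = term
    return (-t, -k)

def complete_by_reciprocity(upper):
    """upper: dict {(i, j): ((t, k), sl)} with i < j; returns the full relation."""
    full = dict(upper)
    for (i, j), (term, sl) in upper.items():
        full[(j, i)] = (neg(term), sl)
    return full

# Illustrative upper-triangle entries (not taken from the example matrices)
upper = {(1, 2): ((-1, -1), 5), (1, 3): ((1, -2), 4)}
full = complete_by_reciprocity(upper)
# full[(2, 1)] -> ((1, 1), 5)
```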

Then, we can provide these adjustment suggestions to the experts \(e^{1}\) and \(e^{2}\), and they are advised to adjust the preference information and the self-confident degrees. Without loss of generality, they provide the adjusted self-confident DHLPRs according to the adjustment suggestions as:

$$\begin{aligned} \Re^{1( 1 )} & = \left( {\begin{array}{*{20}c} {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{ - 1} \rangle }} ,5} \right)} & {\left( {s_{{1 \langle o_{ - 2} \rangle }} ,4} \right)} & {\left( {s_{{2 \langle o_{ - 3} \rangle }} ,6} \right)} \\ {\left( {s_{{1 \langle o_{1} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{2 \langle o_{ - 2} \rangle }} ,1} \right)} & {\left( {s_{{2 \langle o_{3} \rangle }} ,6} \right)} \\ {\left( {s_{{ - 1 \langle o_{2} \rangle }} ,3} \right)} & {\left( {s_{{ - 3 \langle o_{1} \rangle }} ,2} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{2} \rangle }} ,2} \right)} \\ {\left( {s_{{ - 2 \langle o_{3} \rangle }} ,5} \right)} & {\left( {s_{{ - 3 \langle o_{ - 3} \rangle }} ,6} \right)} & {\left( {s_{{1 \langle o_{ - 3} \rangle }} ,2} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} \\ \end{array} } \right), \\ \Re^{2( 1 )} & = \left( {\begin{array}{*{20}c} {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{1} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{ - 2} \rangle }} ,2} \right)} & {\left( {s_{{1 \langle o_{0} \rangle }} ,2} \right)} \\ {\left( {s_{{1 \langle o_{ - 1} \rangle }} ,3} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{2 \langle o_{2} \rangle }} ,6} \right)} & {\left( {s_{{2 \langle o_{3} \rangle }} ,3} \right)} \\ {\left( {s_{{ - 3 \langle o_{2} \rangle }} ,2} \right)} & {\left( {s_{{ - 2 \langle o_{ - 2} \rangle }} ,6} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} & {\left( {s_{{ - 1 \langle o_{3} \rangle }} ,4} \right)} \\ {\left( {s_{{ - 3 \langle o_{ - 2} \rangle }} ,5} \right)} & {\left( {s_{{ - 1 \langle o_{ - 3} \rangle }} ,3} \right)} & {\left( {s_{{1 \langle o_{ - 3} \rangle }} ,4} \right)} & {\left( {s_{{0 \langle o_{0} \rangle }} ,7} \right)} \\ \end{array} } \right). \\ \end{aligned}$$

Then, we return to Step 2. First, the weight vector of the experts is obtained as \(\omega^{( 1 )} = ( {0.25,0.29,0.23,0.24} )^{T}\). Then, by Model 2, the individual priority vectors of all experts can be calculated from the self-confident DHLPRs:

$$\begin{aligned} w^{1( 1 )} & = ( {0.1396,0.4849,0.3755,0} )^{T} , \\ w^{2( 1 )} & = ( {0.2887,0.5417,0.1696,0} )^{T} , \\ w^{3( 1 )} & = ( {0.2865,0.5326,0.1810,0} )^{T} , \\ w^{4( 1 )} & = ( {0.2808,0.4683,0.2058,0.0451} )^{T} . \\ \end{aligned}$$

Based on Model 3, the collective priority vector can be obtained as:

$$w^{c( 1 )} = ( {0.2538,0.5173,0.2290,0} )^{T} .$$

The individual consensus degrees \(CD\left( {e^{k} } \right)^{( 1 )}\) \(( {k = 1,2,3,4} )\) and the collective consensus degree \(CD^{( 1 )}\) are shown in Table 3.

Table 3 The individual and collective consensus degrees

Clearly, \(CD^{( 1 )} = 0.9504 > \xi = 0.95\), which indicates that all experts have reached consensus. Based on the collective priority vector \(w^{c( 1 )}\), the final ranking of alternatives is \(A_{2} \succ A_{1} \succ A_{3} \succ A_{4}\).
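The final ranking follows directly from sorting the components of the collective priority vector in descending order; a small sketch:

```python
# Rank alternatives by the collective priority vector (larger weight = better).
def rank_alternatives(w):
    order = sorted(range(len(w)), key=lambda i: w[i], reverse=True)
    return ["A{}".format(i + 1) for i in order]

# Collective priority vector w^{c(1)} from the example
ranking = rank_alternatives([0.2538, 0.5173, 0.2290, 0.0])
# ranking -> ['A2', 'A1', 'A3', 'A4']
```

This reproduces the ranking \(A_{2} \succ A_{1} \succ A_{3} \succ A_{4}\).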

6 Comparative analyses and simulation

This section verifies the validity of the proposed consensus reaching method through comparative analyses based on a simulation experiment.

6.1 Comparison objects

The comparative analyses mainly involve three objects:

  (1) The proposed DHSM-based consensus model.

  (2) Without adjusting the self-confident degrees, we obtain a double hierarchy linguistic preference values and self-confidence degrees-unchanged modifying-based (DHSUM-based) consensus model. That is, the DHSUM-based consensus model is derived from Algorithm 1 by keeping all self-confident degrees unchanged in the consensus reaching process.

  (3) Without the self-confident degrees, a self-confident DHLPR reduces to a DHLPR. A DHLPR implies that the expert is fully self-confident in his/her evaluation information; in other words, the self-confident degrees of all evaluations are identical and can be omitted for notational simplicity. Therefore, the DHLPR can be regarded as a special case of the self-confident DHLPR. A consensus reaching method for DHLPRs, named the double hierarchy linguistic preference values modifying-based (DHM-based) consensus model, can thus be established by deleting all self-confident degrees and their adjustment rules from Algorithm 1.

Then, we can compare the DHSM-based consensus model, the DHSUM-based consensus model and the DHM-based consensus model in the next simulation experiment.

6.2 Comparison criteria

In the consensus reaching process, it is expected that consensus is reached as quickly as possible and that the required adjustments are as small as possible. Therefore, three comparison criteria are proposed to reflect the consensus efficiency of the consensus reaching models.

  (1) A number of consensus rounds are unavoidably needed to reach the given consensus threshold. Therefore, the number of iterations (denoted as \({\mathbb{Z}}\)) in the consensus reaching process reflects the consensus efficiency and is taken as the first comparison criterion.

  (2) The consensus success ratio (denoted as \(P\)) expresses the ratio of reaching consensus within the allowed number of iterations, i.e., \({\mathbb{Z}} < {\mathbb{Z}}_{\hbox{max} }\).

  (3) The distance between the original and the adjusted preference information is denoted as \(AD\). Suppose that \(\Re^{k} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{m \times m}\) and \(\widetilde{{\Re^{k} }} = \left( {\left( {\widetilde{{r_{ij}^{k} }},\widetilde{{sl_{ij}^{k} }}} \right)} \right)_{m \times m}\) are the original and adjusted preferences, respectively. Then the overall distance over the experts \(E = \left\{ {e^{1} ,e^{2} , \ldots ,e^{n} } \right\}\) can be obtained by

    $$AD = \sum\limits_{k = 1}^{n} {\sqrt {\frac{2}{{m( {m - 1} )}}\sum\limits_{\begin{subarray}{l} i,j = 1 \\ i < j \end{subarray} }^{m} {\left( {r_{ij}^{k} - \widetilde{{r_{ij}^{k} }}} \right)^{2} } } } .$$
    (19)

Obviously, \(AD \ge 0\), and the smaller the value of \(AD\), the less preference information is lost.
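Equation (19) can be computed directly once the linguistic preference values are mapped to numbers. A sketch under the assumption that each \(r_{ij}^{k}\) has already been transformed to a numerical value (the transformation itself is not reproduced here):

```python
import math

# Overall adjustment distance AD of Eq. (19): for each expert, the root mean
# square deviation over the upper-triangle entries, summed over all experts.
# Assumes preference values are already mapped to numbers.
def adjustment_distance(originals, adjusted):
    """originals, adjusted: lists of m x m numeric matrices, one per expert."""
    ad = 0.0
    for r, r_adj in zip(originals, adjusted):
        m = len(r)
        sq = sum((r[i][j] - r_adj[i][j]) ** 2
                 for i in range(m) for j in range(i + 1, m))
        ad += math.sqrt(2.0 / (m * (m - 1)) * sq)
    return ad

# One expert, m = 2: Eq. (19) reduces to |r_12 - r~_12|.
ad = adjustment_distance([[[0.5, 0.7], [0.3, 0.5]]],
                         [[[0.5, 0.4], [0.6, 0.5]]])
# ad -> 0.3 (approximately)
```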

6.3 Simulation and comparison results

Let \({\mathbb{Z}}_{1}\), \({\mathbb{Z}}_{2}\) and \({\mathbb{Z}}_{3}\) denote the numbers of iterations, \(P_{1}\), \(P_{2}\) and \(P_{3}\) the consensus success ratios, and \(AD_{1}\), \(AD_{2}\) and \(AD_{3}\) the adjustment distances of the DHSM-based, DHSUM-based and DHM-based consensus models, respectively. Then the simulation model can be described as follows:


Algorithm 2. Simulation model

Input: \(m\), \(n\), \(S^{SL}\), \(\omega\), \(\xi\), \(\overline{\phi }\), \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\phi }\), and \({\mathbb{Z}}_{\hbox{max} }\).

Output: \({\mathbb{Z}}_{1}\), \({\mathbb{Z}}_{2}\), \({\mathbb{Z}}_{3}\), \(P_{1}\), \(P_{2}\), \(P_{3}\), \(AD_{1}\), \(AD_{2}\) and \(AD_{3}\).

Step 1. Generate \(n\) self-confident DHLPRs \(\Re^{k} = \left( {\left( {r_{ij}^{k} ,sl_{ij}^{k} } \right)} \right)_{m \times m} \,( {k = 1,2, \ldots ,n} )\).

Step 2. Utilize the DHSM-based consensus model (Algorithm 1) to deal with all self-confident DHLPRs \(\Re^{k}\) to obtain the adjusted self-confident DHLPRs \(\widetilde{{\Re^{k,1} }} = \left( {\left( {\widetilde{{r_{ij}^{k,1} }},\widetilde{{sl_{ij}^{k,1} }}} \right)} \right)_{m \times m} \,\,( {k = 1,2, \ldots ,n} )\) and the number of consensus rounds \({\mathbb{Z}}_{1}\).

Step 3. Utilize the DHSUM-based consensus model (Algorithm 1 with the self-confident degrees kept unchanged) to deal with all self-confident DHLPRs \(\Re^{k}\) to obtain the adjusted self-confident DHLPRs \(\widetilde{{\Re^{k,2} }} = \left( {\left( {\widetilde{{r_{ij}^{k,2} }},\widetilde{{sl_{ij}^{k,2} }}} \right)} \right)_{m \times m} \,( {k = 1,2, \ldots ,n} )\) and the number of consensus rounds \({\mathbb{Z}}_{2}\).

Step 4. Transform all self-confident DHLPRs \(\Re^{k}\)\(( {k = 1,2, \ldots ,n} )\) to the DHLPRs \({\mathbb{R}}^{k} = \left( {r_{ij}^{k} } \right)_{m \times m}\)\(( {k = 1,2, \ldots ,n} )\) by deleting the self-confident degrees, and utilize the DHM-based consensus model to deal with all DHLPRs \({\mathbb{R}}^{k}\) to obtain the adjusted DHLPRs \(\widetilde{{{\mathbb{R}}^{k} }}\) \(( {k = 1,2, \ldots ,n} )\) and the number of consensus rounds \({\mathbb{Z}}_{3}\).

Step 5. Calculate the overall distance between \(\Re^{k}\) and \(\widetilde{{\Re^{k,1} }}\) \(( {k = 1,2, \ldots ,n} )\) to obtain \(AD_{1}\); calculate the overall distance between \(\Re^{k}\) and \(\widetilde{{\Re^{k,2} }}\) \(( {k = 1,2, \ldots ,n} )\) to obtain \(AD_{2}\); calculate the overall distance between \({\mathbb{R}}^{k}\) and \(\widetilde{{{\mathbb{R}}^{k} }}\) \(( {k = 1,2, \ldots ,n} )\) to obtain \(AD_{3}\).

Step 6. If \(CD^{{\left( {{\mathbb{Z}}_{1} } \right)}} > \xi\), then \(P_{1} = 1\); otherwise, \(P_{1} = 0\). If \(CD^{{\left( {{\mathbb{Z}}_{2} } \right)}} > \xi\), then \(P_{2} = 1\); otherwise, \(P_{2} = 0\). If \(CD^{{\left( {{\mathbb{Z}}_{3} } \right)}} > \xi\), then \(P_{3} = 1\); otherwise, \(P_{3} = 0\).

Output \({\mathbb{Z}}_{1}\), \({\mathbb{Z}}_{2}\), \({\mathbb{Z}}_{3}\), \(P_{1}\), \(P_{2}\), \(P_{3}\), \(AD_{1}\), \(AD_{2}\) and \(AD_{3}\).

In the simulation method, we set \(S_{O} = \left\{ {s_{{t \langle o_{k} \rangle }} \left| {t = - 4, \ldots , - 1,0,1, \ldots ,4;\;k = - 4} \right., \ldots , - 1,0,1, \ldots ,4} \right\}\), \(S^{SL} = \{ {1,2, \ldots ,7} \}\), \(\xi = 0.82\), \(\overline{\phi } = 0.2\), \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\phi } = 0.05\), and \({\mathbb{Z}}_{\hbox{max} } = 5\). The weights of all experts are equal. Then, for different parameters \(m\) and \(n\), we run the simulation method 1000 times. Specifically, the reported \({\mathbb{Z}}_{1}\), \({\mathbb{Z}}_{2}\), \({\mathbb{Z}}_{3}\), \(P_{1}\), \(P_{2}\), \(P_{3}\), \(AD_{1}\), \(AD_{2}\) and \(AD_{3}\) are the average values, over these runs, of the numbers of iterations, the consensus success ratios and the adjustment distances of the DHSM-based, DHSUM-based and DHM-based consensus models, respectively.
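The structure of Algorithm 2 and the Monte Carlo averaging can be sketched as follows. The consensus models themselves are stubbed out: the placeholder `run_consensus_model` stands in for Algorithm 1 and its DHSUM/DHM variants (returning the number of rounds, the final consensus degree and the adjustment distance), and the random preference generator is likewise only illustrative:

```python
import random

# Skeleton of the simulation (Algorithm 2): run a consensus model on randomly
# generated preference relations and average Z, P and AD over many runs.
def run_consensus_model(prefs, xi, z_max):
    # Placeholder for Algorithm 1 (or a variant): a real implementation would
    # execute the consensus reaching process; here we return dummy values.
    rounds = random.randint(1, z_max)
    return rounds, random.uniform(0.7, 1.0), random.uniform(0.0, 1.0)

def simulate(m, n, xi=0.82, z_max=5, runs=1000, seed=0):
    random.seed(seed)
    z_sum = p_sum = ad_sum = 0.0
    for _ in range(runs):
        # Step 1 (stub): n random m x m preference matrices
        prefs = [[[random.uniform(0, 1) for _ in range(m)]
                  for _ in range(m)] for _ in range(n)]
        # Steps 2-5: run the model and collect Z, final CD and AD
        z, cd, ad = run_consensus_model(prefs, xi, z_max)
        z_sum += z
        # Step 6: success indicator P = 1 iff the final CD exceeds xi
        p_sum += 1 if cd > xi else 0
        ad_sum += ad
    return z_sum / runs, p_sum / runs, ad_sum / runs

avg_z, avg_p, avg_ad = simulate(m=4, n=4)
```

The same `simulate` loop is run once per consensus model to obtain \(({\mathbb{Z}}_{i}, P_{i}, AD_{i})\) for \(i = 1,2,3\).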

The simulation results are shown in Figs. 3, 4 and 5.

Fig. 3 The average values of \({\mathbb{Z}}_{1}\), \({\mathbb{Z}}_{2}\) and \({\mathbb{Z}}_{3}\) with different values of \(m\) and \(n\)

Fig. 4 The average values of \(AD_{1}\), \(AD_{2}\) and \(AD_{3}\) with different values of \(m\) and \(n\)

Fig. 5 The average values of \(P_{1}\), \(P_{2}\) and \(P_{3}\) under different values of \(m\) and \(n\)

Based on Figs. 3, 4 and 5, some results and discussions can be summarized as follows:

  (1) Figure 3 shows that the proposed DHSM-based consensus model requires fewer consensus rounds than the DHSUM-based consensus model for different values of \(m\) and \(n\), which means that adjusting the self-confident degrees in the feedback consensus reaching process accelerates the speed of reaching consensus. Additionally, the DHSM-based consensus model requires more consensus rounds than the DHM-based consensus model. Considering that a DHLPR has no self-confident degrees and is simpler than a self-confident DHLPR, it is natural that consensus is reached faster with DHLPRs than with self-confident DHLPRs. Furthermore, the number of iterations gradually decreases as the number of experts and the dimension of the matrices increase.

  (2) Figure 4 shows that, compared with the DHSUM-based consensus model, the DHSM-based consensus model requires smaller adjustments for different values of \(m\) and \(n\), which means that adjusting the self-confident degrees decreases the loss of preference information. Similar to Fig. 3, without the self-confident degrees, the DHM-based consensus model loses the least preference information among the three models.

  (3) Figure 5 shows that the proposed DHSM-based consensus model has a higher success ratio than the DHSUM-based consensus model for different values of \(m\) and \(n\), which means that adjusting the preference information and the self-confident degrees simultaneously increases the success ratio. Meanwhile, the DHM-based consensus model has the highest success ratio, since it only needs to adjust the preference information contained in the DHLPRs.

7 Conclusions

This paper developed a DHSM-based consensus model to manage GDM problems with self-confident DHLPRs. The main contributions can be summarized as follows: First, the novel self-confident DHLPR contains both the preference information and the self-confident degrees of experts, which makes the evaluation information complete. Second, the weight-determining method is reasonable in that it considers the subjective weights and two kinds of objective weights. Third, an iteration-based consensus reaching model (the DHSM-based consensus model) is set up to manage GDM problems with self-confident DHLPRs based on the priority ordering theory. Fourth, a case study concerning the selection of the optimal hospital in the field of telemedicine is provided to illustrate the effectiveness of the proposed model. Finally, to highlight its advantages, a simulation experiment is devised to test the DHSM-based consensus model against other consensus reaching models in different states from three different angles.

Meanwhile, some interesting research directions can be pointed out:

  (1) As the number of experts increases, decision making becomes more and more complex, and large-scale GDM has become a hot topic in recent years. In the future, we will develop useful consensus reaching models to deal with large-scale GDM problems with self-confident DHLPRs.

  (2) In the process of GDM, non-cooperative behaviors and minority opinions are common and may greatly affect the decision efficiency and accuracy. Therefore, it is interesting to propose effective methods to manage non-cooperative behaviors and minority opinions in GDM or large-scale GDM with self-confident DHLPRs.