Introduction

In recent years, many enterprises around the world have achieved sustainable development, improved their core competitiveness, and enhanced their business operations via digital transformation (DT). Generally, the most recent technologies, including big data [1, 2], blockchain [3], the Internet of Things [4], cloud computing [5], and artificial intelligence (AI) [6,7,8], have been adopted when implementing DT for enterprises [9, 10]. However, due to the barriers of expense and a lack of staff skilled in advanced technology applications [11], many enterprises have faced uncertainty risks and failed to implement DT [11,12,13,14,15], especially small and medium-sized enterprises (SMEs). Thus, many SMEs have hesitated to decide whether to implement DT because of a dilemma: not transforming means waiting for death, while transforming means seeking death [9].

To help SMEs resolve this dilemma and successfully implement DT, various research has been conducted, which can be broadly divided into three categories:

  1. (1)

    Mechanism and dynamic capability [16,17,18,19]: Li et al. [16] revealed the key steps to successful DT, which include managerial social capital development, managerial cognition renewal, organizational capability, and business team-building. Khurana et al. [17] examined how SMEs construct resilience capability during a crisis by utilizing digital technology. Zhang et al. [18] revealed the growth mechanism and evolution path of the dynamic capabilities affecting enterprise niche changes during the cross-boundary innovation of SMEs. Matarazzo et al. [19] studied the impact of DT on the customer value creation of SMEs in the “Made in Italy” sector and revealed that DT can create new distribution channels and new ways to create and deliver value to customers.

  2. (2)

    Policy [20,21,22]: Peng and Tao [20] investigated the relationship between DT and enterprise performance, revealing that DT can greatly improve enterprise performance, with the policy effect of enterprise innovation being the most important. Kunkel and Matthess [21] analyzed digital and industrial policy to explore the impact of information and communication technology (ICT) in industry on environmental sustainability. Their results showed that policies express a broad range of vague expectations focused on the positive indirect impacts of DT. Parra-López et al. [22] analyzed the current policy of the Andalusian olive region regarding the implementation of DT and pointed out five important policies necessary to foster DT.

  3. (3)

    Risk [12, 14, 23,24,25]: Gölcük [12] proposed an interval type-2 fuzzy reasoning method for risk evaluation of DT project implementation. Chouaibi et al. [14] analyzed the impact of DT on organizational performance and provided a global view of the potential risks, using a linear regression method to analyze data collected from over 300 companies in Tunisia. Casey and Souvignet [23] described DT strategies in forensic science laboratories and suggested that, to mitigate the risks, forensic laboratories should ensure that the technology abides by core principles and processes, such as authenticity, integrity, quality, and efficiency. Tian et al. [24] investigated how DT affects corporate risk-taking and found that DT promotes risk-taking by improving firms’ flexibility and finance availability. Liu [25] constructed risk prediction approaches for the DT of manufacturing supply chains by using an artificial neural network.

These studies can be effective in helping SMEs to implement DT and provide countermeasures for risk management during DT implementation. However, for SMEs that lack the capability and have limited resources to drive DT, some researchers suggest that one of the best options is to choose DT solutions that are already on the market, provided by third-party professional DT suppliers or platforms [10, 16].

Different DT solutions, however, can have different performances, costs, benefits, and so forth. If an unsuitable solution is chosen, it can lead SMEs to incur significant losses or go bankrupt. Therefore, how to effectively evaluate and select the most suitable DT solution is a major challenge for SMEs before they can implement DT. The assessment of solutions involves many aspects and factors, each of which may be uncertain or carry vague information. These are difficult for decision-makers (DMs) and/or experts [10] to score using crisp numbers or linguistic terms; normally, such issues must be solved by assessment methods that use fuzzy extension set-/linguistic set-based multi-criteria group decision-making (MCGDM) or multi-criteria decision-making (MCDM).

Thus far, many fuzzy extension set/linguistic set [26, 27]-based assessment methods of MCGDM/MCDM have been proposed [28,29,30,31], such as intuitionistic fuzzy set/linguistic set-based methods [32,33,34], hesitant fuzzy set/linguistic set-based methods [35,36,37], probability fuzzy set/linguistic set-based methods [38,39,40], picture fuzzy set/linguistic set-based methods [41,42,43,44,45,46], and neutrosophic set/linguistic set-based methods [47, 48]. These methods have enriched the assessment theory and methods of MCGDM/MCDM and have been widely applied to evaluate smart systems [49,50,51], healthcare problems [52, 53], DT solutions [10, 32, 54, 55], and so forth. With regard to DT solution evaluation, Yang et al. [10] proposed an information error-driven T-spherical fuzzy cloud algorithm to evaluate the DT solutions of SMEs; Zeng et al. [32] proposed an intuitionistic fuzzy social network hybrid MCDM model to evaluate the DT of the manufacturing industry in China; Yüksel and Dinçer [54] proposed an evaluation method with quantum spherical fuzzy modeling for DT sustainability analysis; and Netati et al. [55] proposed a maturity model combining SF-AHP and SF-TODIM approaches to evaluate digital transformation in the defense industry. However, in practical decision-making on SME DT solution assessment, different DMs/experts can hold different attitudes (e.g., support, neutrality, opposition, or refusal) when voting on the criteria of a DT solution, can hesitate among several options when scoring each criterion of the solution alternatives, and can have different expectation values for the implementation of the DT solution. Thus, in contrast to other extensions of fuzzy linguistic sets, hesitant picture fuzzy linguistic sets (HPFLSs) [56, 57], which extend picture fuzzy sets with hesitant linguistic sets, are better suited to describing the information requirements of these practical SME DT solution evaluation issues, and prospect theory is better suited to expressing the DMs’ different expectation values for DT solution effectiveness. In addition, as far as we know, there has been no theoretical description of a novel distance measure, prospect theory, or evidential reasoning (ER) of HPFLSs in the literature. Therefore, with these motivations in mind, and to enrich fuzzy MCGDM assessment theory and methods, we propose a novel MCGDM method with prospect theory-based ER of HPFLSs and adopt it to solve the decision-making issues of actual SME selection of DT solutions.

The main contributions of this study, which are distinct from previous, similar fuzzy MCGDM assessment methods, are as follows:

  1. (1)

    Novel distance measures of picture fuzzy sets (PFSs) and HPFLSs are proposed, and their detailed proofs are provided.

  2. (2)

    A novel prospect theory formula of HPFLSs, which extends the prospect theory with the proposed distance measure of HPFLSs, is constructed.

  3. (3)

    HPFLSs are extended based on ER, and a novel approach of HPFLSs-based ER is proposed.

  4. (4)

    Combining HPFLSs-based ER with prospect theory, a novel prospect theory-based ER approach under the HPFLS environment is constructed.

The remainder of this paper is organized as follows. The definitions related to prospect theory, ER, and HPFLSs are introduced in “Preliminaries”. In “Novel distance measure definition of HPFLSs”, the novel distance measures of picture fuzzy numbers (PFNs) and HPFLSs are proposed, and the related mathematical proofs are presented. By utilizing the proposed distance measures, in “MCGDM approach of prospect theory-based ER of HPFLS”, an approach with detailed decision-making steps for solving MCGDM problems under an HPFLS environment with completely unknown criteria weights is presented. Furthermore, a practical application to DT solution evaluation for SMEs is conducted in “Illustrative example” to demonstrate the effectiveness of the proposed method. Finally, conclusions are drawn in “Conclusion”.

Preliminaries

In this section, the definitions of PFSs, HPFLSs, prospect theory, and evidential theory are briefly reviewed to lay the groundwork for the later analyses.

Definitions of PFSs and HPFLSs

Definition 1

[46] Let \(X\) be a universe space, a PFS is defined as

$$ A = \left\{ {\left( {x,\mu_{A} (x),\eta_{A} (x),\nu_{A} (x)} \right)\left| {x \in X} \right.} \right\}, $$

where \(\mu_{A} (x) \in [0,1]\) is called the degree of positive membership of \(x\) in \(A\), \(\eta_{A} (x) \in [0,1]\) is called the degree of neutral membership of \(x\) in \(A\), \(\nu_{A} (x) \in [0,1]\) is called the degree of negative membership of \(x\) in \(A\), and \(\mu_{A} (x)\), \(\eta_{A} (x)\), \(\nu_{A} (x)\) satisfy the condition \(0 \le \mu_{A} (x) + \eta_{A} (x) + \nu_{A} (x) \le 1\). Moreover, \(\pi_{A} (x) = 1 - (\mu_{A} (x) + \eta_{A} (x) + \nu_{A} (x))\) is called the degree of refusal membership.

Definition 2

[57] Let \(S = \{ s_{J} |J = 0,1,2, \ldots ,m\} \) be a linguistic term set (LTS); an HPFLS is defined as

$$ H_{s} = \left\{ {\left\langle {\left( {s_{i}^{k} } \right),\left( {\mu_{i}^{k} ,\eta_{i}^{k} ,\nu_{i}^{k} } \right)} \right\rangle \left| {i \in (0,1,2, \ldots ,m);k = 1,2, \ldots ,\alpha } \right.} \right\},$$
(1)

where \(\mu_{i}^{k} \in \left[ {0,1} \right]\), \(\eta_{i}^{k} \in \left[ {0,1} \right]\), \(\nu_{i}^{k} \in \left[ {0,1} \right]\), and \(0 \le \mu_{i}^{k} + \eta_{i}^{k} + \nu_{i}^{k} \le 1\). \(\mu_{i}^{k}\), \(\eta_{i}^{k}\), and \(\nu_{i}^{k}\) represent the positive membership, indeterminacy membership, and negative membership of the linguistic term \(s_{i}^{k}\), respectively; \(\pi_{i}^{k} = 1 - \left( {\mu_{i}^{k} + \eta_{i}^{k} + \nu_{i}^{k} } \right)\) is the refusal membership of the linguistic term \(s_{i}^{k}\), and its complementary set is \(H_{s}^{c} = \left\{ {\left\langle {\left( {s_{i}^{k} } \right),\left( {\nu_{i}^{k} ,\eta_{i}^{k} ,\mu_{i}^{k} } \right)} \right\rangle \left| {i \in (0,1,2, \ldots ,m);k = 1,2, \ldots ,\alpha } \right.} \right\}\).

Definition 3

[57] For any HPFLS \(H_{s} = \left\{ {\left\langle {\left( {s_{i}^{k} } \right),\left( {\mu_{i}^{k} ,\eta_{i}^{k} ,\nu_{i}^{k} } \right)} \right\rangle \left| {i \in (0,1,2, \ldots ,m);k = 1,2, \ldots ,\alpha } \right.} \right\}\), its score function is defined as \(S(H_{s} ) = s_{{\overline{x}}}\), where \(\overline{x} = \sum\nolimits_{k = 1}^{\alpha } {i^{k} \otimes } \tfrac{{1 + \mu_{i}^{k} - \nu_{i}^{k} }}{2}/\alpha\) and \(i^{k}\) is the subscript of the linguistic term \(s_{i}^{k}\) in the LTS \(S\).

Definition 4

[57] For any two HPFLSs, \(H_{{s_{1} }} \), \(H_{{s_{2} }} \), if \(S(H_{{s_{1} }} ) > S(H_{{s_{2} }} )\), then \(H_{{s_{1} }} \succ H_{{s_{2} }} \), that is \(H_{{s_{1} }} \) is superior to \(H_{{s_{2} }} \) and vice versa. If \(S(H_{{s_{1} }} ) = S(H_{{s_{2} }} )\), then \(H_{{s_{1} }} \) and \(H_{{s_{2} }} \) cannot be distinguished. For such cases, the accuracy function is determined as below:

Let \(H_{s}\) be an HPFLS \(H_{s} = \left\{ {\left\langle {\left( {s_{i}^{k} } \right),\left( {\mu_{i}^{k} ,\eta_{i}^{k} ,\nu_{i}^{k} } \right)} \right\rangle \left| {i \in (0,1,2, \ldots ,m);k = 1,2, \ldots ,\alpha } \right.} \right\}\); then, the accuracy function of \(H_{s}\) is defined as \(A(H_{s} ) = s_{y}\), where \(y = \sum\nolimits_{k = 1}^{\alpha } {i^{k} \otimes \tfrac{{1 + \mu_{i}^{k} + \eta_{i}^{k} - \nu_{i}^{k} }}{2}} /\alpha\). For any two HPFLSs \(H_{{s_{1} }}\) and \(H_{{s_{2} }}\): if \(A(H_{{s_{1} }} ) > A(H_{{s_{2} }} )\), then \(H_{{s_{1} }}\) is superior to \(H_{{s_{2} }}\); if \(A(H_{{s_{1} }} ) < A(H_{{s_{2} }} )\), then \(H_{{s_{2} }}\) is superior to \(H_{{s_{1} }}\); if \(A(H_{{s_{1} }} ) = A(H_{{s_{2} }} )\), then \(H_{{s_{1} }} = H_{{s_{2} }}\).

Evidential theory

Definition 5

[58] Let \(X\) be a finite set space and \(2^{X}\) be the power set of \(X\). If a mapping \(f:2^{X} \to [0,1]\) satisfies the conditions \(f\left( \emptyset \right) = 0\) and \(\sum\nolimits_{A \subseteq X} f\left( A \right) = 1\), then \(f\) is called a basic probability mass function of \(X\); if \(f\left( A \right) > 0\), \(A\) is called a focal element.

Definition 6

[58] For two basic probability mass functions \(f_{1}\) and \(f_{2}\), the combination rule is defined as follows:

$$ f\left( A \right) = \frac{{\sum\nolimits_{B,C \subseteq X,B \cap C = A} {f_{1} \left( B \right)f_{2} \left( C \right)} }}{1 - k} $$
(2)

where \(k = \sum\nolimits_{B,C \subseteq X,B \cap C = \emptyset } {f_{1} \left( B \right)f_{2} \left( C \right)} \).

Prospect theory

Prospect theory, one of the important decision-making theories, was first proposed by Kahneman and Tversky [59]; its main part is the value function, which is a power function, shown as follows:

$$ v\left( x \right) = \left\{ \begin{gathered} x^{\alpha } ,\quad x \ge 0 \hfill \\ - \lambda \left( { - x} \right)^{\beta } ,\quad x < 0 \hfill \\ \end{gathered} \right. $$
(3)

where \(x \ge 0\) denotes a gain and \(x < 0\) a loss; \(\alpha\) and \(\beta\) are coefficients of risk attitude, with \(0 \le \alpha ,\beta \le 1\), and higher values of \(\alpha\) and \(\beta\) indicate that a decision-maker is more willing to take a risk; \(\lambda\) is the coefficient of loss aversion, and \(\lambda > 1\) means that the DMs are more sensitive to the risk of loss.
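As a quick, self-contained illustration, the following Python sketch (the function and parameter names are our own, using the commonly cited coefficients \(\alpha = \beta = 0.88\) and \(\lambda = 2.25\)) evaluates the value function of Eq. (3):

```python
def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88,
                   lam: float = 2.25) -> float:
    """Subjective value of a gain (x >= 0) or a loss (x < 0), per Eq. (3)."""
    if x >= 0:
        return x ** alpha            # concave for gains
    return -lam * (-x) ** beta       # convex and steeper for losses

# Losses loom larger than gains: v(1.0) = 1.0 but v(-1.0) = -2.25.
print(prospect_value(1.0), prospect_value(-1.0))
```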

Novel distance measure definition of HPFLSs

In this section, first, the comparison rules of PFNs are defined. Then, a novel distance measure for PFNs is proposed, and a detailed proof is presented. Finally, a novel distance measure of HPFLSs is given and proven in detail.

Picture fuzzy number comparison rules

Definition 7

A PFN \(a_{1}\) is greater than or equal to another PFN \(a_{2}\), denoted by \(a_{1} \ge a_{2}\), if and only if \(\mu_{{a_{1} }} \ge \mu_{{a_{2} }}\), \(\eta_{{a_{1} }} \le \eta_{{a_{2} }}\), and \(\nu_{{a_{1} }} \le \nu_{{a_{2} }}\).

Definition 8

Let \(a = \left\langle {\mu_{a} ,\eta_{a} ,\nu_{a} } \right\rangle\) be a PFN. Its score function is defined as \(s_{a} = 1 + \mu_{a} - \eta_{a} - \nu_{a}\), and its accuracy function is defined as \(h_{a} = \mu_{a} + \eta_{a} + \nu_{a}\). Assume that \(a_{1}\) and \(a_{2}\) are two PFNs; then we obtain the following comparison rules (a code sketch follows the list):

  1. (1)

    if \(s_{{a_{1} }} > s_{{a_{2} }}\), then \(a_{{1}} \succ a_{2}\).

  2. (2)

    if \(s_{{a_{1} }} < s_{{a_{2} }}\), then \(a_{{1}} \prec a_{2}\).

  3. (3)

    if \(s_{{a_{1} }} = s_{{a_{2} }}\), then

    1. (a)

      if \(h_{{a_{1} }} > h_{{a_{2} }}\),then \(a_{{1}} \succ a_{2} \)

    2. (b)

      if \(h_{{a_{1} }} < h_{{a_{2} }}\), then \(a_{{1}} \prec a_{2}\)

    3. (c)

      if \(h_{{a_{1} }} = h_{{a_{2} }}\), then \(a_{{1}} = a_{2}\)
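A minimal Python sketch of these comparison rules, assuming a PFN is represented as a \((\mu ,\eta ,\nu )\) triple (the helper names are our own):

```python
def pfn_score(a):
    """Score function s_a = 1 + mu - eta - nu of Definition 8."""
    mu, eta, nu = a
    return 1 + mu - eta - nu

def pfn_accuracy(a):
    """Accuracy function h_a = mu + eta + nu of Definition 8."""
    mu, eta, nu = a
    return mu + eta + nu

def pfn_compare(a1, a2):
    """Return 1 if a1 > a2, -1 if a1 < a2, 0 if indistinguishable."""
    s1, s2 = pfn_score(a1), pfn_score(a2)
    if s1 != s2:
        return 1 if s1 > s2 else -1
    h1, h2 = pfn_accuracy(a1), pfn_accuracy(a2)
    return 0 if h1 == h2 else (1 if h1 > h2 else -1)

print(pfn_compare((0.6, 0.2, 0.1), (0.5, 0.2, 0.2)))  # 1: the first PFN ranks higher
```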

Novel distance definition of PFNs

Definition 9

A function \(d:a^{2} \to a\) is called a picture fuzzy distance measure, where \(a\) is any picture fuzzy collection of a universal set \(X\). Assume that \(\beta_{i} \in a\left( {i = 1,2,3} \right)\); then, the picture fuzzy distance measure between \(\beta_{1}\) and \(\beta_{2}\), denoted as \(d(\beta_{1} ,\beta_{2} )\), should satisfy the conditions below:

  1. (1)

    \(d(\beta_{1} ,\beta_{2} )\) is a PFN

  2. (2)

    \(d(\beta_{1} ,\beta_{2} ) = \left\langle {0,0,1} \right\rangle\), if and only if \(\beta_{1} = \beta_{2}\)

  3. (3)

    \(d(\beta_{1} ,\beta_{2} ) = d(\beta_{2} ,\beta_{1} )\)

  4. (4)

    if \(\beta_{1} \le \beta_{2} \le \beta_{3}\), then \(d(\beta_{1} ,\beta_{2} ) \preceq d(\beta_{1} ,\beta_{3} )\) and \(d(\beta_{2} ,\beta_{3} ) \preceq d(\beta_{1} ,\beta_{3} )\).

Based on the above definition, let \(a_{1}\), \(a_{2}\) be two PFNs; then the following ratios are constructed:

$$\begin{aligned} P(a_{1} ,a_{2} ) &= \frac{{\min (\mu_{{a_{1} }} ,\mu_{{a_{2} }} )}}{{\max (\mu_{{a_{1} }} ,\mu_{{a_{2} }} )}},\\ I(a_{1} ,a_{2} ) &= \frac{{\min (1 - \eta_{{a_{1} }} ,1 - \eta_{{a_{2} }} )}}{{\max (1 - \eta_{{a_{1} }} ,1 - \eta_{{a_{2} }} )}},\\ N(a_{1} ,a_{2} ) & = \frac{{\min (1 - \nu_{{a_{1} }} ,1 - \nu_{{a_{2} }} )}}{{\max (1 - \nu_{{a_{1} }} ,1 - \nu_{{a_{2} }} )}} \end{aligned}$$

then, a novel PFN distance measure is obtained:

$$ \begin{aligned} d(a_{1} ,a_{2} ) = \Big\langle \, & 1 - \max \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right), \\ & \left( {\max \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right) - \min \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right)} \right) \\ & \times \min \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right), \\ & \min \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right) \, \Big\rangle \end{aligned} $$
(4)
  1. (1)

    \(d(a_{1} ,a_{2} )\) is a PFN.

Proof

Assume that \(\mu_{p} = \max \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right)\), \(\nu_{n} = \min \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right)\), and

$$ \eta_{i} = \left( {\mu_{p} - \nu_{n} } \right) \times \nu_{n} . $$

Because \(0 \le P(a_{1} ,a_{2} ) \le 1\), \(0 \le I(a_{1} ,a_{2} ) \le 1\), \(0 \le N(a_{1} ,a_{2} ) \le 1\), we obtain \(0 \le 1 - \mu_{p} \le 1\), \(0 \le \eta_{i} \le 1\), \(0 \le \nu_{n} \le 1\),

Then, let \(res = \left( {1 - \mu_{p} } \right) + \eta_{i} + \nu_{n}\), \(x = P(a_{1} ,a_{2} )\), \(y = I(a_{1} ,a_{2} )\), \(z = N(a_{1} ,a_{2} )\);

thus, \(res = 1 - \max (x,y,z) + \left( {\max (x,y,z) - \min (x,y,z)} \right) \times \min (x,y,z) + \min (x,y,z)\).

Because \(0 \le \min (x,y,z) \le 1\) and \(0 \le \max (x,y,z) \le 1\), \(res\) can be rewritten as \(res = 1 - \left( {1 - \min (x,y,z)} \right)\left( {\max (x,y,z) - \min (x,y,z)} \right)\), where both factors lie in \([0,1]\); hence, \(0 \le res \le 1\).

Obviously, it holds that \(d(a_{1} ,a_{2} )\) is a PFN.

  1. (2)

    \(d(a_{1} ,a_{2} ) = \left\langle {0,0,1} \right\rangle\), if and only if \(a_{1} = a_{2}\).

Proof

Necessary condition:

In accordance with \(d(a_{1} ,a_{2} ) = \left\langle {0,0,1} \right\rangle\), we conclude that \(\max (x,y,z) = \min (x,y,z) = 1\); thus, \(x = y = z = 1\) is obtained, that is,

$$\begin{aligned} P(a_{1} ,a_{2} ) & = \frac{{\min (\mu_{{a_{1} }} ,\mu_{{a_{2} }} )}}{{\max (\mu_{{a_{1} }} ,\mu_{{a_{2} }} )}} = 1,\\ I(a_{1} ,a_{2} ) & = \frac{{\min (1 - \eta_{{a_{1} }} ,1 - \eta_{{a_{2} }} )}}{{\max (1 - \eta_{{a_{1} }} ,1 - \eta_{{a_{2} }} )}} = 1,\\ N(a_{1} ,a_{2} ) & = \frac{{\min (1 - \nu_{{a_{1} }} ,1 - \nu_{{a_{2} }} )}}{{\max (1 - \nu_{{a_{1} }} ,1 - \nu_{{a_{2} }} )}} = 1 \end{aligned}$$

hence, we obtain \(\mu_{{a_{1} }} = \mu_{{a_{2} }}\), \(\eta_{{a_{1} }} = \eta_{{a_{2} }}\), \(\nu_{{a_{1} }} = \nu_{{a_{2} }}\), that is \(a_{1} = a_{2}\).

Sufficient condition:

According to \(a_{1} = a_{2}\), we can conclude that \(\mu_{{a_{1} }} = \mu_{{a_{2} }}\), \(\eta_{{a_{1} }} = \eta_{{a_{2} }}\), and \(\nu_{{a_{1} }} = \nu_{{a_{2} }}\), that is,

$$\begin{aligned} P(a_{1} ,a_{2} ) & = \frac{{\min (\mu_{{a_{1} }} ,\mu_{{a_{2} }} )}}{{\max (\mu_{{a_{1} }} ,\mu_{{a_{2} }} )}} = 1,\\ I(a_{1} ,a_{2} ) & = \frac{{\min (1 - \eta_{{a_{1} }} ,1 - \eta_{{a_{2} }} )}}{{\max (1 - \eta_{{a_{1} }} ,1 - \eta_{{a_{2} }} )}} = 1,\\ N(a_{1} ,a_{2} ) & = \frac{{\min (1 - \nu_{{a_{1} }} ,1 - \nu_{{a_{2} }} )}}{{\max (1 - \nu_{{a_{1} }} ,1 - \nu_{{a_{2} }} )}} = 1\end{aligned} $$

Then, we obtain \(\max (x,y,z) = \min (x,y,z) = 1\), \(\max (x,y,z) - \min (x,y,z) = 0\), and \(1 - \max (x,y,z) = 0\).

Accordingly, we can conclude that \(d(a_{1} ,a_{2} ) = \left\langle {0,0,1} \right\rangle\) holds.

  1. (3)

    Obviously, \(d(a_{1} ,a_{2} ) = d(a_{2} ,a_{1} )\) holds; the detailed proof is omitted.

  2. (4)

    if \(a_{1} \le a_{2} \le a_{3}\), then \(d(a_{1} ,a_{2} ) \le d(a_{1} ,a_{3} )\) and \(d(a_{2} ,a_{3} ) \le d(a_{1} ,a_{3} )\) hold.

Proof

According to \(a_{1} \le a_{2} \le a_{3}\), we obtain \(\mu_{{a_{1} }} \le \mu_{{a_{2} }} \le \mu_{{a_{3} }}\),\(\eta_{{a_{1} }} \ge \eta_{{a_{2} }} \ge \eta_{{a_{3} }}\),\(\nu_{{a_{1} }} \ge \nu_{{a_{2} }} \ge \nu_{{a_{3} }}\),

Hence, we can conclude \(P(a_{1} ,a_{2} ) \ge P(a_{1} ,a_{3} )\), \(I(a_{1} ,a_{2} ) \ge I(a_{1} ,a_{3} )\), \(N(a_{1} ,a_{2} ) \ge N(a_{1} ,a_{3} )\),

Let \(x_{12} = P(a_{1} ,a_{2} )\), \(y_{12} = I(a_{1} ,a_{2} )\), \(z_{12} = N(a_{1} ,a_{2} )\), and

$$ x_{13} = P(a_{1} ,a_{3} ),\quad y_{13} = I(a_{1} ,a_{3} ),\quad z_{13} = N(a_{1} ,a_{3} ). $$

Then, we obtain \(\max \left( {x_{12} ,y_{12} ,z_{12} } \right) \ge \max \left( {x_{13} ,y_{13} ,z_{13} } \right)\), \(\min \left( {x_{12} ,y_{12} ,z_{12} } \right) \ge \min \left( {x_{13} ,y_{13} ,z_{13} } \right)\).

Let \(\max_{12} = \max \left( {x_{12} ,y_{12} ,z_{12} } \right)\), \(\max_{13} = \max \left( {x_{13} ,y_{13} ,z_{13} } \right)\), \(\min_{12} = \min \left( {x_{12} ,y_{12} ,z_{12} } \right)\), and \(\min_{13} = \min \left( {x_{13} ,y_{13} ,z_{13} } \right)\).

In accordance with the proof of condition (1), knowing that \(d(a_{1} ,a_{2} )\) and \(d(a_{1} ,a_{3} )\) are PFNs, and according to Definitions 7 and 8, we obtain:

$$ \begin{aligned} s\left( {d(a_{1} ,a_{2} )} \right) & = 2 - \max_{12} - \left( {\max_{12} - \min_{12} } \right) \times \min_{12} - \min_{12} \\ & = 2 - \max_{12} \left( {1 + \min_{12} } \right) - \min_{12} \times \left( {1 - \min_{12} } \right) \\ & = 2 - \left( {\max_{12} + \max_{12} \times \min_{12} + \min_{12} - (\min_{12} )^{2} } \right) \\ s\left( {d(a_{1} ,a_{3} )} \right) & = 2 - \left( {\max_{13} + \max_{13} \times \min_{13} + \min_{13} - (\min_{13} )^{2} } \right) \\ \end{aligned} $$

The proof of \(s\left( {d(a_{1} ,a_{2} )} \right) \le s\left( {d(a_{1} ,a_{3} )} \right)\) is equal to the proof of

$$\begin{aligned} &\left( {\max_{12} + \max_{12} \times \min_{12} + \min_{12} - (\min_{12} )^{2} } \right) \\ & \quad \ge \left( {\max_{13} + \max_{13} \times \min_{13} + \min_{13} - (\min_{13} )^{2} } \right),\end{aligned} $$

that is,

$$ \begin{aligned} & \left( {\max_{12} + \max_{12} \times \min_{12} + \min_{12} - (\min_{12} )^{2} } \right) - \left( {\max_{13} + \max_{13} \times \min_{13} + \min_{13} - (\min_{13} )^{2} } \right) \hfill \\ & \quad \ge \left( {\max_{13} + \max_{13} \times \min_{12} + \min_{12} - (\min_{12} )^{2} } \right) - \left( {\max_{13} + \max_{13} \times \min_{13} + \min_{13} - (\min_{13} )^{2} } \right) \hfill \\ & \quad = \max_{13} \times \left( {\min_{12} - \min_{13} } \right) + \left( {\min_{12} - \min_{13} } \right) \times \left( {1 - \left( {\min_{12} + \min_{13} } \right)} \right) \hfill \\ & \quad = \left( {\min_{12} - \min_{13} } \right) \times \left( {1 + \max_{13} - \left( {\min_{12} + \min_{13} } \right)} \right) \ge 0 \hfill \\ \end{aligned} $$

Hence, \(s\left( {d(a_{1} ,a_{2} )} \right) \le s\left( {d(a_{1} ,a_{3} )} \right)\) holds.

Thus, based on Definition 8, the inequality \(d(a_{1} ,a_{2} ) \preceq d(a_{1} ,a_{3} )\) holds; similarly, it can be proven that \(d(a_{2} ,a_{3} ) \preceq d(a_{1} ,a_{3} )\) holds.
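To make the construction concrete, the following Python sketch implements the distance of Eq. (4). The \((\mu ,\eta ,\nu )\)-triple representation and the helper name are our own assumptions, and the ratios are assumed to be well defined (nonzero denominators):

```python
def pfn_distance(a1, a2):
    """Novel PFN distance of Eq. (4); a1 and a2 are (mu, eta, nu) triples."""
    (m1, e1, n1), (m2, e2, n2) = a1, a2
    P = min(m1, m2) / max(m1, m2)                    # positive-membership ratio
    I = min(1 - e1, 1 - e2) / max(1 - e1, 1 - e2)    # neutral-membership ratio
    N = min(1 - n1, 1 - n2) / max(1 - n1, 1 - n2)    # negative-membership ratio
    mx, mn = max(P, I, N), min(P, I, N)
    return (1 - mx, (mx - mn) * mn, mn)              # itself a PFN

# Identical PFNs yield the "zero-distance" element <0, 0, 1> of condition (2).
print(pfn_distance((0.5, 0.3, 0.1), (0.5, 0.3, 0.1)))  # (0.0, 0.0, 1.0)
```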

Definition of the HPFLS distance measure

Definition 10

Let \(d:b^{2} \to b\) be a function, where \(b\) is any HPFLS collection of a universal set \(X\). Assume that \(\alpha_{i} \in b\;(i = 1,2,3)\); then, the HPFLS distance measure between \(\alpha_{1}\) and \(\alpha_{2}\), denoted as \(d(\alpha_{1} ,\alpha_{2} )\), should satisfy the following conditions:

  1. (1)

    \(d(\alpha_{1} ,\alpha_{2} )\) is an HPFLS.

  2. (2)

    \(d(\alpha_{1} ,\alpha_{2} ) = \left\{ {\overbrace {{\left\langle {\left( {s_{1} } \right),\left( {0,0,1} \right)} \right\rangle , \cdots ,\left\langle {\left( {s_{1} } \right),\left( {0,0,1} \right)} \right\rangle }}^{n}} \right\}\) if and only if \(\alpha_{1} = \alpha_{2}\)

  3. (3)

    \(d(\alpha_{1} ,\alpha_{2} ) = d(\alpha_{2} ,\alpha_{1} )\)

  4. (4)

    if \(\alpha_{1} \le \alpha_{2} \le \alpha_{3}\), then \(d(\alpha_{1} ,\alpha_{2} ) \preceq d(\alpha_{1} ,\alpha_{3} )\) and \(d(\alpha_{2} ,\alpha_{3} ) \preceq d(\alpha_{1} ,\alpha_{3} )\).

Based on Definition 9, let \(H_{{s_{1} }} = \left\{ {\left\langle {\left( {s_{{i_{1} }}^{{k_{1} }} } \right),\left( {\mu_{{i_{1} }}^{{k_{1} }} ,\eta_{{i_{1} }}^{{k_{1} }} ,\nu_{{i_{1} }}^{{k_{1} }} } \right)} \right\rangle \left| {i_{1} \in (0,1,2, \ldots ,m);k_{1} = 1,2, \ldots ,\alpha_{1} } \right.} \right\}\) and \(H_{{s_{2} }} = \left\{ {\left\langle {\left( {s_{{i_{2} }}^{{k_{2} }} } \right),\left( {\mu_{{i_{2} }}^{{k_{2} }} ,\eta_{{i_{2} }}^{{k_{2} }} ,\nu_{{i_{2} }}^{{k_{2} }} } \right)} \right\rangle \left| {i_{2} \in (0,1,2, \ldots ,m);k_{2} = 1,2, \ldots ,\alpha_{2} } \right.} \right\}\) be two HPFLSs, and let \(a_{1}^{{k_{1} }} = \left\{ {\left. {\left\langle {\mu_{{i_{1} }}^{{k_{1} }} ,\eta_{{i_{1} }}^{{k_{1} }} ,\nu_{{i_{1} }}^{{k_{1} }} } \right\rangle } \right|k_{1} = 1,2, \ldots ,\alpha_{1} } \right\}\) and \(a_{2}^{{k_{2} }} = \left\{ {\left. {\left\langle {\mu_{{i_{2} }}^{{k_{2} }} ,\eta_{{i_{2} }}^{{k_{2} }} ,\nu_{{i_{2} }}^{{k_{2} }} } \right\rangle } \right|k_{2} = 1,2, \ldots ,\alpha_{2} } \right\}\) be the two corresponding PFN sets. In accordance with formula (4), the distance measure between the PFNs \(a_{1}\) and \(a_{2}\) is:

$$ \begin{aligned} d(a_{1} ,a_{2} ) = \Big\langle \, & 1 - \max \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right), \\ & \left( {\max \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right) - \min \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right)} \right) \\ & \times \min \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right), \\ & \min \left( {P(a_{1} ,a_{2} ),I(a_{1} ,a_{2} ),N(a_{1} ,a_{2} )} \right) \, \Big\rangle \end{aligned} $$

Hence, we can construct that

$$ d\left( {H_{{s_{1} }} ,H_{{s_{2} }} } \right) = \left\{ {\left. {\left\langle {\left( {s_{{i_{{k_{1} }} }} /s_{{i_{{k_{2} }} }} } \right),d\left( {a_{1}^{{k_{1} }} ,a_{2}^{{k_{2} }} } \right)} \right\rangle } \right|k_{1} = 1,2, \ldots ,\alpha_{1} ;\;k_{2} = 1,2, \ldots ,\alpha_{2} } \right\} $$
(5)

where, \(i_{{k_{1} }}\) is the subscript value of the linguistic evaluation term \(s_{{i_{1} }}^{{k_{1} }}\), and \(i_{{k_{2} }}\) is the subscript value of the linguistic evaluation term \(s_{{i_{2} }}^{{k_{2} }}\).

The proof for formula (5) is similar to that of formula (4) and is therefore omitted here.
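Reusing pfn_distance from the earlier sketch, formula (5) can be illustrated in a few lines; representing an HPFLS as a list of (subscript, PFN-triple) pairs is our own assumption:

```python
def hpfls_distance(h1, h2):
    """Eq. (5): pair every PFN of h1 with every PFN of h2.

    h1, h2: lists of (subscript, (mu, eta, nu)) pairs. Each result element
    keeps the linguistic-subscript ratio s_{i_k1}/s_{i_k2} alongside the
    PFN distance of Eq. (4).
    """
    return [(i1 / i2, pfn_distance(a1, a2))
            for (i1, a1) in h1
            for (i2, a2) in h2]

h1 = [(3, (0.5, 0.2, 0.1)), (4, (0.6, 0.1, 0.1))]
h2 = [(2, (0.4, 0.3, 0.2))]
print(hpfls_distance(h1, h2))   # two pairs, one per element of h1
```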

MCGDM approach of prospect theory-based ER of HPFLS

In this section, the MCGDM approach based on prospect theory and ER of HPFLSs is described in detail. The decision-making steps are as follows:

Let \(A = \left\{ {a_{1} ,a_{2} , \ldots ,a_{l} } \right\}\) be a collection of \(l\) alternatives, \(C = \left\{ {c_{1} ,c_{2} , \ldots ,c_{n} } \right\}\) the collection of \(n\) criteria, and \(w = \{ w_{1} ,w_{2} , \ldots ,w_{n} \}\) the collection of criteria weights. For each criterion \(C_{j}\), the assessment score of alternative \(A_{i}\) is provided by the experts; meanwhile, in accordance with the expected values of the DMs, their expected reference value for each criterion is given. Then, the decision-making matrix of HPFLSs \(H = \left( {h_{ij} } \right)_{l \times n}\) can be constructed, with the detailed decision-making steps as follows:

  1. Step 1.

    Normalize the evaluation value of the decision matrix.

All criteria in the decision-making matrix must be distinguished as either benefit-type or cost-type criteria. To normalize the criteria values, the evaluation values of the benefit criteria are left unchanged, while the evaluation values of the cost criteria are replaced with their complementary sets.

The following formula is utilized to normalize the decision-making matrix:

$$ \hat{\tilde{\beta }}_{ij} = \left\{ \begin{gathered} \tilde{\beta }_{ij} , \quad C_{j} \in B_{S} \hfill \\ \tilde{\beta }_{ij}^{c} ,\quad C_{j} \in C_{S} \hfill \\ \end{gathered} \right. $$

where \(B_{S}\) is the set of benefit criteria, \(C_{S}\) is the set of cost criteria, and \(\tilde{\beta }_{ij}^{c}\) is the complementary set of \(\tilde{\beta }_{ij}\). The normalized decision-making matrix is denoted by \(\hat{M} = (\hat{\tilde{\beta }}_{ij} )_{m \times n}\).
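A minimal sketch of this normalization step, under the same list-of-pairs HPFLS representation assumed in the earlier sketches:

```python
def normalize(matrix, cost_columns):
    """Step 1: replace cost-type cells with their complementary sets.

    matrix[i][j]: HPFLS cell as a list of (subscript, (mu, eta, nu)) pairs;
    cost_columns: set of column indices j that are cost criteria. The
    complement swaps mu and nu in every PFN (Definition 2).
    """
    def complement(cell):
        return [(s, (nu, eta, mu)) for (s, (mu, eta, nu)) in cell]
    return [[complement(cell) if j in cost_columns else cell
             for j, cell in enumerate(row)]
            for row in matrix]
```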

  1. Step 2.

    Construction of the prospect decision-making matrix.

The decision reference value is the key factor in the construction of a prospect decision-making matrix. The decision reference points of each criterion \(C_{j} \left( {j = 1,2, \ldots ,n} \right)\) constitute a one-dimensional vector of decision reference values, denoted by \(r = \left[ {r_{j} } \right]_{1 \times n} = \left[ {\left\langle {\left( {s_{{i_{rj} }}^{{k_{rj} }} } \right),\left( {\mu_{{i_{rj} }}^{{k_{rj} }} ,\eta_{{i_{rj} }}^{{k_{rj} }} ,\nu_{{i_{rj} }}^{{k_{rj} }} } \right)} \right\rangle } \right]_{1 \times n}\) \(\left( {i_{rj} \in \left\{ {0,1,2, \ldots ,m} \right\};k_{rj} = 1,2, \ldots ,\alpha_{rj} } \right)\). In prospect theory, DMs pay great attention to the deviation between the actual results and the expected results rather than to the actual results themselves; hence, the selection of the reference value is determined by the expected results of a decision-maker. Based on the prospect theory formula, the prospect decision-making matrix of HPFLSs can be obtained as follows:

$$ \nu \left( {\hat{m}_{ij} } \right) = \left\{ \begin{gathered} \left( {d\left( {\hat{m}_{ij} ,r_{j} } \right)} \right)^{\alpha } ,\quad \hat{m}_{ij} \ge r_{j} \hfill \\ - \lambda \left( {d\left( {\hat{m}_{ij} ,r_{j} } \right)} \right)^{\beta } ,\quad \hat{m}_{ij} < r_{j} \hfill \\ \end{gathered} \right. $$
(6)

where \(\alpha ,\beta\) are risk parameters, \(0 \le \alpha ,\beta \le 1\); the greater their values, the greater the decision-maker's preference for risk.

\(\nu \left( {\hat{m}_{ij} } \right)\) is the expected value of the HPFLSs, and the score function of HPFLSs in Definition 3 is used to determine whether \(\hat{m}_{ij} \ge r_{j}\) or \(\hat{m}_{ij} < r_{j}\); \(d\left( {\hat{m}_{ij} ,r_{j} } \right)\) is the deviation between the HPFLSs \(\hat{m}_{ij}\) and \(r_{j}\). Based on formula (5), it is easy to see that \(d\left( {\hat{m}_{ij} ,r_{j} } \right)\) is an HPFLS. \(\nu \left( {\hat{m}_{ij} } \right)\) can be calculated using formulas (7) and (8), shown below (a simplified sketch follows the equations):

$$ - \lambda \otimes d\left( {\hat{m}_{ij} ,r_{j} } \right) = \left\langle {\bigcup\limits_{{k_{1} = 1,2, \ldots ,\alpha_{1} }} {\left\{ {\left( {s_{\max } \times \left( {\frac{{s_{{i_{1} }}^{{k_{1} }} }}{{s_{\max } }}} \right)^{ - \lambda } } \right),\left( {\left( {\eta_{{i_{1} }}^{{k_{1} }} + \nu_{{i_{1} }}^{{k_{1} }} } \right)^{ - \lambda } - \left( {\eta_{{i_{1} }}^{{k_{1} }} } \right)^{ - \lambda } ,\left( {\eta_{{i_{1} }}^{{k_{1} }} } \right)^{ - \lambda } ,1 - \left( {1 - \mu_{{i_{1} }}^{{k_{1} }} } \right)^{ - \lambda } } \right)} \right\}} } \right\rangle ,\quad \lambda < - 1 $$
(7)
$$ \left( {d\left( {\hat{m}_{ij} ,r_{j} } \right)} \right)^{\lambda } = \left\langle {\bigcup\limits_{{k_{1} = 1,2, \ldots ,\alpha_{1} }} {\left\{ {\left( {s_{\max } \times \left( {\frac{{s_{{i_{1} + 1}}^{{k_{1} }} }}{{s_{\max + 1} }}} \right)^{\lambda } } \right),\left( {\left( {\eta_{{i_{1} }}^{{k_{1} }} + \mu_{{i_{1} }}^{{k_{1} }} } \right)^{\lambda } - \left( {\eta_{{i_{1} }}^{{k_{1} }} } \right)^{\lambda } ,\left( {\eta_{{i_{1} }}^{{k_{1} }} } \right)^{\lambda } ,1 - \left( {1 - \nu_{{i_{1} }}^{{k_{1} }} } \right)^{\lambda } } \right)} \right\}} } \right\rangle ,\quad \lambda > 0 $$
(8)
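Eqs. (7) and (8) operate on full HPFLS distances; as a heavily simplified, scalar-level sketch of the gain/loss logic of Eq. (6) only, the value function of Eq. (3) can be applied to score-function deviations (an illustration under our own simplification, not the paper's operator):

```python
def prospect_cell(score_mij, score_rj, alpha=0.88, beta=0.88, lam=2.25):
    """Scalar proxy for Eq. (6): score values stand in for HPFLS cells.

    score_mij, score_rj: score-function values (Definition 3) of a cell and
    its reference point; their deviation plays the role of d(m_ij, r_j).
    """
    dev = abs(score_mij - score_rj)
    if score_mij >= score_rj:            # gain relative to the reference
        return dev ** alpha
    return -lam * dev ** beta            # loss, amplified by lambda
```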
  1. Step 3.

    Obtain the criteria weights based on the linear programming function.

Assume that the prospect decision-making matrix of HPFLSs is \(\tilde{H} = \left( {\tilde{h}_{ij} } \right)_{l \times n} = \left( {\left\langle {\left( {s_{{i_{ij} }}^{{k_{ij} }} } \right),\left( {\mu_{{i_{ij} }}^{{k_{ij} }} ,\eta_{{i_{ij} }}^{{k_{ij} }} ,\nu_{{i_{ij} }}^{{k_{ij} }} } \right)} \right\rangle } \right)_{l \times n}\), where the value \(h_{ij}\) is the score function value of the HPFLS \(\tilde{h}_{ij}\). Based on the transformed decision matrix \(H = \left( {h_{ij} } \right)_{l \times n}\), the linear programming model to solve for the maximum value of the function \(g\) is constructed as shown below:

$$ Max(g) = \sum\limits_{i = 1}^{l} {\sum\limits_{j = 1}^{n} {\left( {w_{j} \times h_{ij} } \right)} } $$
(M-1)

where \(w_{j}\) is the optimal weight of criterion \(C_{j}\), \(w_{j} \in \left[ {0,1} \right]\), and \(\sum\nolimits_{j = 1}^{n} {w_{j} } = 1\).

By solving the above model (M-1), the maximum value of the function \(g\) is obtained; hence, the optimal weights \(w_{j} \left( {j = 1,2, \ldots ,n} \right)\) of the criteria can be calculated.
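This step can be sketched with SciPy's linear-programming solver (our choice of tool; the paper does not name one). Since linprog minimizes, the objective \(g\) is negated; the bounds and ordering constraints, of the kind used later in the illustrative example, are supplied by the caller:

```python
import numpy as np
from scipy.optimize import linprog

def optimal_weights(h, bounds, order_pairs):
    """Step 3: maximize g = sum_i sum_j (w_j * h_ij) s.t. sum_j w_j = 1.

    h: l-by-n matrix of HPFLS score values; bounds: per-weight (lo, hi)
    pairs; order_pairs: list of (j, k) meaning w_j >= w_k.
    """
    c = -np.asarray(h).sum(axis=0)          # negate to maximize
    n = len(c)
    A_ub = []
    for j, k in order_pairs:                # encode w_k - w_j <= 0
        row = [0.0] * n
        row[k], row[j] = 1.0, -1.0
        A_ub.append(row)
    res = linprog(c, A_ub=A_ub or None, b_ub=[0.0] * len(A_ub) or None,
                  A_eq=[[1.0] * n], b_eq=[1.0], bounds=bounds)
    return res.x
```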

  1. Step 4.

    Transform the HPFLSs prospect decision matrix to a PFSs decision matrix.

The details for transforming an HPFLSs decision matrix \(M\) to a PFSs decision matrix \(M_{1}\) are shown below:

$$ M = \begin{array}{c@{\quad}c} & \begin{array}{cccc} c_{1} & c_{2} & \cdots & c_{n} \end{array} \\ \begin{array}{c} a_{1} \\ a_{2} \\ \vdots \\ a_{l} \end{array} & \left\{ \begin{array}{cccc} H_{s_{11}} & H_{s_{12}} & \cdots & H_{s_{1n}} \\ H_{s_{21}} & H_{s_{22}} & \cdots & H_{s_{2n}} \\ \vdots & \vdots & \ddots & \vdots \\ H_{s_{l1}} & H_{s_{l2}} & \cdots & H_{s_{ln}} \end{array} \right\} \end{array}, \qquad H_{s_{ij}} = \left\langle {\left( {s_{{i_{ij} }}^{{k_{ij} }} } \right),\left( {\mu_{{i_{ij} }}^{{k_{ij} }} ,\eta_{{i_{ij} }}^{{k_{ij} }} ,\nu_{{i_{ij} }}^{{k_{ij} }} } \right)} \right\rangle $$

where \(i_{ij} \in \left\{ {0,1,2, \ldots ,m} \right\};k_{ij} = 1,2, \ldots ,\alpha_{ij}\).

Let \(u_{ij} = \sqrt[{\alpha_{ij} }]{{\sum\nolimits_{{k_{ij} = 1}}^{{\alpha_{ij} }} {\frac{{\left( {\mu_{{i_{ij} }}^{{k_{ij} }} \times i_{{k_{ij} }} } \right)^{{\alpha_{ij} }} }}{{\alpha_{ij} }}} }}\), \(\eta_{ij} = \sqrt[{\alpha_{ij} }]{{\sum\nolimits_{{k_{ij} = 1}}^{{\alpha_{ij} }} {\frac{{\left( {\eta_{{i_{ij} }}^{{k_{ij} }} \times i_{{k_{ij} }} } \right)^{{\alpha_{ij} }} }}{{\alpha_{ij} }}} }}\), and \(\nu_{ij} = \sqrt[{\alpha_{ij} }]{{\sum\nolimits_{{k_{ij} = 1}}^{{\alpha_{ij} }} {\frac{{\left( {\nu_{{i_{ij} }}^{{k_{ij} }} \times i_{{k_{ij} }} } \right)^{{\alpha_{ij} }} }}{{\alpha_{ij} }}} }}\), where \(i_{{k_{ij} }} = \left( {1 + \exp \left( { - i_{ij} } \right)} \right)^{ - p} \left( {i = 1,2, \ldots ,l;\;j = 1,2, \ldots ,n} \right)\), \(i_{ij}\) is the subscript value of the linguistic evaluation term \(s_{{i_{ij} }}^{{k_{ij} }}\), and \(p > 0\). By utilizing the above formulas, the decision matrix \(M\) can be equivalently transformed to the prospect decision matrix \(M_{1}\), as follows (a code sketch follows the matrix):

$$ M_{1} = \begin{array}{c@{\quad}c} & \begin{array}{cccc} c_{1} & c_{2} & \cdots & c_{n} \end{array} \\ \begin{array}{c} a_{1} \\ a_{2} \\ \vdots \\ a_{l} \end{array} & \left\{ \begin{array}{cccc} \left\langle {\mu_{11} ,\eta_{11} ,\nu_{11} } \right\rangle & \left\langle {\mu_{12} ,\eta_{12} ,\nu_{12} } \right\rangle & \cdots & \left\langle {\mu_{1n} ,\eta_{1n} ,\nu_{1n} } \right\rangle \\ \left\langle {\mu_{21} ,\eta_{21} ,\nu_{21} } \right\rangle & \left\langle {\mu_{22} ,\eta_{22} ,\nu_{22} } \right\rangle & \cdots & \left\langle {\mu_{2n} ,\eta_{2n} ,\nu_{2n} } \right\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \left\langle {\mu_{l1} ,\eta_{l1} ,\nu_{l1} } \right\rangle & \left\langle {\mu_{l2} ,\eta_{l2} ,\nu_{l2} } \right\rangle & \cdots & \left\langle {\mu_{ln} ,\eta_{ln} ,\nu_{ln} } \right\rangle \end{array} \right\} \end{array} $$
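The cell-level transform can be sketched as follows, again assuming the list-of-pairs HPFLS representation; hpfls_to_pfn and its argument names are our own:

```python
import math

def hpfls_to_pfn(cell, p=1.0):
    """Step 4: collapse one HPFLS cell to a single PFN.

    cell: list of (subscript, (mu, eta, nu)) pairs; alpha_ij is the number
    of hesitant terms. Each component follows the alpha-th-root formula
    above, with i_k = (1 + exp(-i))**(-p).
    """
    a = len(cell)                                  # alpha_ij
    def agg(idx):
        total = sum((triple[idx] * (1 + math.exp(-i)) ** (-p)) ** a / a
                    for (i, triple) in cell)
        return total ** (1.0 / a)
    return (agg(0), agg(1), agg(2))                # (u_ij, eta_ij, nu_ij)

print(hpfls_to_pfn([(3, (0.5, 0.2, 0.1)), (4, (0.6, 0.1, 0.2))]))
```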
  1. Step 5.

    Evidential aggregation of the prospect matrix.

Assume that \(H_{1}\), \(H_{2}\), \(H_{3}\), and \(H\) are the evaluation grades used to assess the criteria of the alternatives, where \(H_{1}\) represents complete support, \(H_{2}\) represents neutrality, \(H_{3}\) represents complete opposition, and \(H\) represents refusal. The method based on HPFLSs and ER can then be presented as follows:

$$ v\left( {\hat{m}_{ij} } \right) = \begin{array}{c@{\quad}c} & \begin{array}{cccc} c_{1} & c_{2} & \cdots & c_{n} \end{array} \\ \begin{array}{c} a_{1} \\ a_{2} \\ \vdots \\ a_{l} \end{array} & \left( \begin{array}{cccc} \left\langle {\beta_{1,1} (a_{1} ),\beta_{2,1} (a_{1} ),\beta_{3,1} (a_{1} )} \right\rangle & \left\langle {\beta_{1,2} (a_{1} ),\beta_{2,2} (a_{1} ),\beta_{3,2} (a_{1} )} \right\rangle & \cdots & \left\langle {\beta_{1,n} (a_{1} ),\beta_{2,n} (a_{1} ),\beta_{3,n} (a_{1} )} \right\rangle \\ \left\langle {\beta_{1,1} (a_{2} ),\beta_{2,1} (a_{2} ),\beta_{3,1} (a_{2} )} \right\rangle & \left\langle {\beta_{1,2} (a_{2} ),\beta_{2,2} (a_{2} ),\beta_{3,2} (a_{2} )} \right\rangle & \cdots & \left\langle {\beta_{1,n} (a_{2} ),\beta_{2,n} (a_{2} ),\beta_{3,n} (a_{2} )} \right\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \left\langle {\beta_{1,1} (a_{l} ),\beta_{2,1} (a_{l} ),\beta_{3,1} (a_{l} )} \right\rangle & \left\langle {\beta_{1,2} (a_{l} ),\beta_{2,2} (a_{l} ),\beta_{3,2} (a_{l} )} \right\rangle & \cdots & \left\langle {\beta_{1,n} (a_{l} ),\beta_{2,n} (a_{l} ),\beta_{3,n} (a_{l} )} \right\rangle \end{array} \right) \end{array} $$

where \(\left\langle {\beta_{1,j} (a_{i} ),\beta_{2,j} (a_{i} ),\beta_{3,j} (a_{i} )} \right\rangle = \left\langle {\mu_{ij} ,\eta_{ij} ,\nu_{ij} } \right\rangle\); \(\beta_{1,j} (a_{i} )\), \(\beta_{2,j} (a_{i} )\), and \(\beta_{3,j} (a_{i} )\) are the degrees of belief of all DMs with respect to criterion \(c_{j}\) of alternative \(a_{i}\) regarding the evaluation grades \(H_{1}\), \(H_{2}\), and \(H_{3}\), respectively, with \(0 \le \beta_{1,j} (a_{i} ) \le 1\), \(0 \le \beta_{2,j} (a_{i} ) \le 1\), \(0 \le \beta_{3,j} (a_{i} ) \le 1\), \(1 \le i \le l\), and \(1 \le j \le n\). Based on the decision matrix \(v\left( {\hat{m}_{ij} } \right)\) and the criteria weight collection \(w\), perform the following sub-steps:

  1. Step 5.1.

    Transform degrees of belief to basic probability mass.

Transform the degrees of belief \(\beta_{p,j} (a_{i} )\) with respect to criterion \(c_{j}\) of alternative \(a_{i}\) regarding the evaluation grade \(H_{p} \left( {p = 1,2,3} \right)\) into the basic probability mass \(m_{p,j} (a_{i} )\), and compute the remaining probability mass \(m_{H,j} (a_{i} )\) with respect to criterion \(c_{j}\) of alternative \(a_{i}\) regarding the evaluation grades \(H_{p} ,H\). The transformation formulas are as follows:

$$ m_{p,j} (a_{i} ) = w_{j} \otimes \beta_{p,j} (a_{i} ) $$
(9)
$$ m_{H,j} (a_{i} ){ = 1 - }\sum\limits_{p = 1}^{3} {m_{p,j} (a_{i} )} $$
(10)

where \(w_{j}\) is the weight of criterion \(c_{j}\), satisfying the conditions \(0 \le w_{j} \le 1\) and \(\sum\nolimits_{j = 1}^{n} {w_{j} = 1}\), with \(1 \le p \le 3\), \(1 \le i \le l\), and \(1 \le j \le n\). Based on formulas (9) and (10), we can obtain the basic probability distribution matrix as follows:

$$\begin{aligned} P_{b} = \left[ {\left\langle {m_{1,j} (a_{i} ),m_{2,j} (a_{i} ),m_{3,j} (a_{i} )} \right\rangle } \right]_{l \times n}\end{aligned} $$
(11)

Let the combined probability mass with respect to criterion \(c_{1}\) of alternative \(a_{i}\) be \(n_{p,1} (a_{i} )\); its initial value is equal to \(m_{p,1} (a_{i} )\), that is, \(n_{p,1} (a_{i} ) = m_{p,1} (a_{i} )\). The remaining probability mass is \(n_{H,1} (a_{i} )\); its initial value is equal to \(m_{H,1} (a_{i} )\), that is, \(n_{H,1} (a_{i} ) = m_{H,1} (a_{i} )\). The combined probability mass \(n_{p,y} (a_{i} )\) and the remaining probability mass \(n_{H,y} (a_{i} )\) with respect to criterion \(c_{y}\) of alternative \(a_{i}\), where \(2 \le y \le n\), are computed as follows:

$$\begin{aligned} &n_{p,y} (a_{i} ) \\ &= \frac{{n_{p,y - 1} (a_{i} )m_{p,y} (a_{i} ) + n_{p,y - 1} (a_{i} )m_{H,y} (a_{i} ) + n_{H,y - 1} (a_{i} )m_{p,y} (a_{i} )}}{{1 - \sum\nolimits_{\delta = 1}^{3} {\sum\nolimits_{\tau \ne \delta }^{3} {n_{\delta ,y - 1} (a_{i} )m_{\tau ,y} (a_{i} )} } }} \end{aligned}$$
(12)
$$ n_{H,y} (a_{i} ) = \frac{{n_{H,y - 1} (a_{i} )m_{H,y} (a_{i} )}}{{1 - \sum\nolimits_{\delta = 1}^{3} {\sum\nolimits_{\tau \ne \delta }^{3} {n_{\delta ,y - 1} (a_{i} )m_{\tau ,y} (a_{i} )} } }} $$
(13)
  1. Step 5.2.

    Evidential aggregation of degrees of belief of alternative criteria.

Aggregate the degrees of belief for evaluation grade \(H_{p}\) with respect to criterion \(c_{y}\) of alternative \(a_{i}\) to obtain the degree of belief for evaluation grade \(H_{p}\) of alternative \(a_{i}\), denoted as \(\beta_{p} \left( {a_{i} } \right)\); the degree of belief produced by unassigned (waiver) information is denoted as \(\beta_{H} \left( {a_{i} } \right)\). The formulas are as follows (a sketch combining Sub-steps 5.1 and 5.2 follows Eq. (15)):

$$ \beta_{H} \left( {a_{i} } \right) = \sum\limits_{y = 1}^{n} {w_{y} \left( {1 - \sum\limits_{p = 1}^{3} {\beta_{p,y} \left( {a_{i} } \right)} } \right)} $$
(14)
$$\begin{aligned} \beta_{p} \left( {a_{i} } \right) &= \frac{{1 - \beta_{H} \left( {a_{i} } \right)}}{{1 - n_{H,n} \left( {a_{i} } \right)}}n_{p,n} \left( {a_{i} } \right)\left( {p = 1,2,3} \right) \end{aligned}$$
(15)
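Under the stated assumptions (per-criterion beliefs given as \(\left\langle \beta_{1} ,\beta_{2} ,\beta_{3} \right\rangle\) triples; helper names our own), Eqs. (9)-(15) can be sketched for one alternative as follows:

```python
def er_aggregate(beliefs_row, w):
    """ER aggregation of Eqs. (9)-(15) for one alternative.

    beliefs_row[j]: (beta_1, beta_2, beta_3) under criterion j; w: weights.
    """
    n = len(beliefs_row)
    # Eqs. (9)-(10): basic and remaining probability masses per criterion
    m = [[w[j] * b for b in beliefs_row[j]] for j in range(n)]
    mH = [1 - sum(m[j]) for j in range(n)]
    # Eqs. (12)-(13): recursive combination across criteria 2..n
    npm, nH = list(m[0]), mH[0]
    for y in range(1, n):
        k = 1 - sum(npm[d] * m[y][t]
                    for d in range(3) for t in range(3) if t != d)
        npm = [(npm[p] * m[y][p] + npm[p] * mH[y] + nH * m[y][p]) / k
               for p in range(3)]
        nH = nH * mH[y] / k
    # Eqs. (14)-(15): back-transform to aggregated degrees of belief
    bH = sum(w[y] * (1 - sum(beliefs_row[y])) for y in range(n))
    return [(1 - bH) / (1 - nH) * npm[p] for p in range(3)]

# Two criteria with weights 0.6/0.4; the result is a belief triple.
print(er_aggregate([(0.5, 0.3, 0.1), (0.4, 0.2, 0.2)], [0.6, 0.4]))
```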
  1. Step 6.

    Rank the alternatives.

In accordance with the picture fuzzy score function in Definition 8, compute the scores of the picture fuzzy evidential aggregation values obtained in Step 5; the higher the score, the better the alternative.
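Reusing pfn_score from the comparison-rule sketch above, this ranking step reduces to a sort:

```python
def rank_alternatives(aggregated):
    """Step 6: aggregated maps an alternative name to its PFN from Step 5."""
    return sorted(aggregated, key=lambda k: pfn_score(aggregated[k]),
                  reverse=True)
```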

Illustrative example

To verify the effectiveness of the proposed method, an illustrative example of the selection of practical DT solutions for SMEs [10] is adopted. The DMs evaluated four DT solutions on the market, provided by third-party platforms: the China Cloud Manufacturing Platform (CCMP) (\(a_{1}\)), the Inspur Cloud Industrial Internet Platform (ICIIP) (\(a_{2}\)), the Aliyun ET Industrial Brain (AETIB) (\(a_{3}\)), and the Root Cloud Platform (RCP) (\(a_{4}\)). According to the factors influencing SMEs, four key criteria were selected for evaluation: the matching degree between the required investment in digital resources and the original resource level of the enterprise (\(c_{1}\)); the extent to which the enterprise accepts the required digital strategic planning and organizational change (\(c_{2}\)); the expected application level of the digital transformation solutions (\(c_{3}\)); and the expected benefits created by the DT solution (\(c_{4}\)). In addition, a team of 20 experts was formed, including chief executive officers (CEOs), directors of financial departments, and experts and researchers from universities and research institutes; the experts were asked to provide the evaluation values as HPFLSs, which are summarized in Table 1.

Table 1 The decision matrix of evaluation value \(M_{1}\)
  1. Step 1.

    Normalize the evaluation values of the decision matrix.

    Note that all of the criteria are benefit criteria and none are cost criteria; thus, normalization is not necessary.

  2. Step 2.

    Construction of the prospect decision matrix.

According to the decision matrix of evaluation values and the expected reference values shown in Table 1, and utilizing Eqs. (5)–(8), where, based on prospect theory [59], the parameters in Eq. (6) are assigned the values \(\alpha = \beta = 0.88\) and \(\lambda = - 2.25\), the prospect theory-based matrix can be computed; it is shown in Table 2.

Table 2 The prospect theory-based decision matrix information
  1. Step 3.

    Obtain the criteria weights based on the linear programming function.

In accordance with the values of the prospect decision matrix obtained in Step 2 and utilizing the score function in Definition 3, we can obtain the score of each criterion for all alternatives:

$$ h_{ij} = \left[ {\begin{array}{*{20}l} {0.6725} & {1.0926} & {0.7924} & {0.8005} \\ {0.7950} & {0.6317} & {0.5800} & {0.5573} \\ {1.0108} & {0.6948} & {0.8005} & {0.8049} \\ {1.0108} & {0.8186} & {1.0384} & {1.0216} \\ \end{array} } \right]_{4 \times 4} $$

These scores are substituted into model (M-1), yielding the following linear program:

$$\begin{aligned} Max(g) &= 3.4891 \times w_{1} + 3.2377 \times w_{2} + 3.2113 \times w_{3} \\ & \quad + 3.1843 \times w_{4} \end{aligned}$$
(16)
$$ s.t.\left\{ \begin{gathered} \sum\limits_{j = 1}^{4} {w_{j} } = 1,\;w_{j} \ge 0 \hfill \\ w_{1} \ge w_{2} \ge w_{4} \ge w_{3} \hfill \\ 0.2 \le w_{1} \le 0.4 \hfill \\ 0.15 \le w_{2} \le 0.3 \hfill \\ 0.1 \le w_{3} \le 0.2 \hfill \\ 0.1 \le w_{4} \le 0.25 \hfill \\ \end{gathered} \right. $$

Utilizing a linear programming solver, the above linear program was solved programmatically, and the weight vector was calculated as \(w = \left( {0.275,0.275,0.2,0.25} \right)\).

  1. Step 4.

    Transform the HPFLSs prospect decision matrix to a PFSs decision matrix.

Utilizing the equivalent transformation formulas of Step 4 in “MCGDM approach of prospect theory-based ER of HPFLS”, the HPFLSs prospect decision matrix \(M\) was equivalently transformed into the PFSs prospect decision matrix \(M_{1}\). The transformation results are shown in Table 3. (Note: for convenience, the parameter was set to \(p = 1\).)

Table 3 The prospect decision matrix of PFSs \(M_{1}\)
  1. Step 5.

    Evidential aggregation of the prospect matrix.

Based on the criteria weight vector obtained in Step 3 and the prospect decision matrix of PFSs obtained in Step 4, and utilizing the transformation method in Step 5 of “MCGDM approach of prospect theory-based ER of HPFLS”, the PFS-based degrees of belief were obtained. Then, using the method described in Sub-step 5.1, the PFS-based degrees of belief were transformed into basic probability masses. The transformation results are shown in Table 4.

Table 4 The basic and remaining probability mass collection \(\big\langle m_{1,j} \left( {a_{i} } \right),m_{2,j} \left( {a_{i} } \right), m_{3,j} \left( {a_{i} } \right),m_{H,j} \big( {a_{i} } \big) \big\rangle\)

Next, in accordance with Sub-step 5.2 in “MCGDM approach of prospect theory-based ER of HPFLS”, the evidential aggregation value can be obtained.

$$\begin{aligned}E_{{a_{1} }} & = \left\langle {0.0801,0.0375,0.0181} \right\rangle\quad E_{{a_{2} }} = \left\langle {0.0180,0.0310,0.0453} \right\rangle \\ E_{{a_{3} }} & = \left\langle {0.0801,0.0287,0.0343} \right\rangle \quad E_{{a_{4} }} = \left\langle {0.1551,0.0220,0.0413} \right\rangle\end{aligned}$$
  1. Step 6.

    Rank the alternatives.

According to the PFN-based score function in Definition 8, the scores of the PFS-based evidential aggregation values were computed, with the following results:

$$ s_{{a_{1} }} = 1.0246\quad s_{{a_{2} }} = 0.9417\quad s_{{a_{3} }} = 1.0172\quad s_{{a_{4} }} = 1.0917 $$

In accordance with the ordered scores and the rule that the greater the score, the better the alternative, we determined that the best alternative was \(a_{4}\), the worst alternative was \(a_{2}\), and the rank of the alternatives was \(a_{4} \succ a_{1} \succ a_{3} \succ a_{2}\).
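
Given the scores, the ranking step reduces to a descending sort:

```python
# Rank alternatives by their PFN-based scores (from Step 6).
scores = {"a1": 1.0246, "a2": 0.9417, "a3": 1.0172, "a4": 1.0917}
ranking = sorted(scores, key=scores.get, reverse=True)
print(" > ".join(ranking))   # a4 > a1 > a3 > a2
```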

Further analysis and inference

To examine how the rank of the alternatives is influenced by the parameters \(p\) and \(\alpha ,\beta ,\lambda\), the parameter \(p\) was varied over \(0.01 \le p \le 20\), the parameters \(\alpha ,\beta\) were assigned the representative values 0.1, 0.5, 0.88 and 0.98, and the parameter \(\lambda\) was assigned the representative values − 1.25, − 2.25, − 4.25 and − 8.25. The detailed rank values are shown in Figs. 1, 2, 3, 4, 5, 6 and 7.

Fig. 1 Alternative ranks of different values of \(p\,(0.01 \le p \le 20)\) with \(\alpha = \beta = 0.1\) and \(\lambda = -2.25\)

Fig. 2 Alternative ranks of different values of \(p\,(0.01 \le p \le 20)\) with \(\alpha = \beta = 0.5\) and \(\lambda = -2.25\)

Fig. 3 Alternative ranks of different values of \(p\,(0.01 \le p \le 20)\) with \(\alpha = \beta = 0.88\) and \(\lambda = -2.25\)

Fig. 4 Alternative ranks of different values of \(p\,(0.01 \le p \le 20)\) with \(\alpha = \beta = 0.98\) and \(\lambda = -2.25\)

Fig. 5 Alternative ranks of different values of \(p\,(0.01 \le p \le 20)\) with \(\alpha = \beta = 0.88\) and \(\lambda = -1.25\)

Fig. 6 Alternative ranks of different values of \(p\,(0.01 \le p \le 20)\) with \(\alpha = \beta = 0.88\) and \(\lambda = -4.25\)

Fig. 7 Alternative ranks of different values of \(p\,(0.01 \le p \le 20)\) with \(\alpha = \beta = 0.88\) and \(\lambda = -8.25\)

As seen from the results shown in Figs. 1, 2, 3, 4, 5, 6 and 7, for any fixed values assigned to the parameters \(\alpha ,\beta\) and \(\lambda\), as \(p\) increases from values near zero (\(p \to 0\)), the scores of alternatives that are greater than one decrease toward one, while the scores of alternatives that are less than one increase toward one. In summary, whether the value of parameter \(p\) increases or decreases, the ranking order of the alternatives does not vary; however, the rates of increase and decrease of the alternative scores differ. When the values of parameters \(\alpha ,\beta\) decrease, the scores of all alternatives also decrease; likewise, when the value of parameter \(\lambda\) decreases, the scores of all alternatives decrease. The main reason is the interaction between the parameter \(p\) and the parameters \(\alpha ,\beta ,\lambda\) in the prospect-based algorithm of hesitant picture fuzzy sets.
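
This limiting behaviour is what one would observe if an exponent of the form \(1/p\) enters the score; the following purely illustrative snippet (the form \(c^{1/p}\) is an assumption, not the paper's formula) reproduces the trend: values above one fall toward one and values below one rise toward one as \(p\) grows.

```python
# Illustrative only: a quantity of the form c**(1/p) moves monotonically
# toward 1 as p grows, from above when c > 1 and from below when c < 1,
# mirroring the trend of the alternative scores in Figs. 1-7.
for p in [0.1, 0.5, 1, 5, 20]:
    print(p, round(1.3 ** (1 / p), 4), round(0.8 ** (1 / p), 4))
```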

According to the further analysis results shown in Figs. 1, 2, 3, 4, 5, 6 and 7, the following inferences can be made:

  1. (1)

    The scores of the alternatives can be adjusted flexibly by changing the value of parameter \(p\).

  2. (2)

    The scores of the alternatives differ when different values of parameter \(p > 1\) are adopted; however, the rank order of the alternatives does not change.

  3. (3)

    The scores of the alternatives vary with the values of parameters \(\alpha ,\beta ,\lambda\), and the rank order of the alternatives can change slightly for certain typical values. For example, in Fig. 5, with \(\lambda = - 1.25\), the scores of alternatives \(a_{1}\) and \(a_{3}\) approach the same value no matter what value is assigned to parameter \(p\).

In actual applications, to effectively distinguish the scores of all alternatives, the parameter \(p\) should be assigned a small value, for example \(p = 0.1\); the parameters \(\alpha ,\beta\) can be set to around 0.88, and the parameter \(\lambda\) to around − 2.25 or less.

Comparison and discussion

For convenience, to verify the effectiveness of the proposed approach, the previous approach of HPFLSs-based weighted cross-entropy TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) proposed by Wu et al. [56] was adopted for the comparison analysis, in which the parameters \(\theta\) and \(n\) were assigned four pairs of values. Then, in accordance with the further analysis and inferences presented in “Further analysis and inference”, three groups of representative values of the parameters \(p,\alpha ,\beta ,\lambda\) were adopted to carry out the comparison. The comparison results are shown in Table 5.

Table 5 Results of the comparison of the methods

Table 5 shows that the results obtained by our previously proposed method [56] and the currently proposed method were slightly different. For the previous method, different representative values of the parameters \(\theta\) and \(n\) lead to slightly different final orders of the alternatives: in some cases, alternative \(a_{{3}}\) is superior to alternative \(a_{4}\); in others, \(a_{4}\) is superior to \(a_{{3}}\). For the current method, the final order of the alternatives does not change across the chosen groups of representative values of the parameters \(p,\alpha ,\beta\) and \(\lambda\). However, the ranking orders produced by the two methods differ. The main reason for these differences might be that the previous method paid greater attention to membership, such as the membership degree of “vote for” or “vote against”, while the current method pays greater attention to the DMs’ expectations of the alternatives.

According to our further analysis and inference in “Further analysis and inference” and the comparison and discussion in “Comparison and discussion”, the advantages of our currently proposed method can be summarized as follows:

  1. (1)

    The proposed method can flexibly adjust the scores of the alternatives by changing the values of the parameters \(p,\alpha ,\beta\) and \(\lambda\).

  2. (2)

    Prospect theory is extended with HPFLSs and is better suited to expressing the expectations that DMs with different attitudes hold for the alternatives.

In real DT solution assessment environments, DMs might pay greater attention to their expectations of the solution alternatives. Thus, compared with the previous method, the currently proposed approach can more flexibly provide reasonable ranking results by changing the values of the parameters \(p,\alpha ,\beta\) and \(\lambda\), and its results are more acceptable to DMs.

Conclusion

In the decision process of DT solution selection for SMEs, DMs often have expectations for the selected solution, experts can hold different attitudes when voting for solutions, and they often hesitate among several options when giving an evaluation value for each criterion of the solution alternatives. To solve such practical decision-making problems of DT solution evaluation, this paper proposes an assessment method based on prospect theory-based ER of HPFLSs. A novel distance measure of HPFLSs is defined and proven and is used to construct the value function of prospect theory for HPFLSs; a novel ER method is then developed to aggregate the prospect value of each solution. Finally, a comprehensive decision framework based on the proposed prospect theory-based ER of HPFLSs approach was established to support the assessment and selection of DT solutions for SMEs.

This study has some limitations. The criteria selected to assess DT solutions were fewer than those used in actual DT decisions, and some evaluation criteria, such as cost and financial criteria, were not considered. Moreover, the evaluation values given by DMs might take other forms, such as crisp numbers or other extensions of fuzzy sets and fuzzy linguistic sets; how to reasonably transform such forms into HPFLSs requires further study.

For future studies, in accordance with the practical requirements of DT solution assessment, this research could be extended to other MCGDM methods (TODIM, ELECTRE, etc.), which could be used to verify the effectiveness of the proposed approach. In addition, the proposed method could be applied to other problems, such as the selection of medical devices or the assessment of sustainable energy power plants. Furthermore, other MCGDM evaluation methods with interactive operators [60, 61], projections [62, 63], and artificial intelligence methods [64,65,66] could be developed for the selection of DT solutions.