Decision-making theory is concerned with selecting the finest objects among a set of feasible ones. In our day-to-day life, we constantly make decisions and assess them, based on our past records, so as to derive the greatest benefit from them. However, owing to today's complex environment and insufficient knowledge about the systems, caused by lack of information or human error, it is sometimes very difficult to make an optimal decision in a reasonable time. To address the uncertainties in the data, Zadeh [1] introduced the concept of fuzzy sets (FSs). In FS theory, each element is assigned a membership degree (MD) lying between 0 and 1 to represent the partial information of the set. However, FSs do not capture the hesitancy associated with an element of the set. To address this, an extension of FSs named intuitionistic fuzzy sets (IFSs) [2] was proposed by considering a non-membership degree (NMD) \(\nu \) of an element along with its MD \(\mu \), such that they satisfy the linear inequality \(\mu +\nu \le 1\). Since then, several authors have addressed decision-making problems (DMPs) under the IFS environment. For example, Ye [3] presented cosine similarity measures (SMs) for IFSs. Garg and Kumar [4] presented SMs for IFSs using the concept of set pair analysis. Hwang et al. [5] defined SMs based on the Jaccard index and applied them to clustering problems. Garg and Kumar [6] defined an exponential-based distance measure for solving DMPs. Apart from these, several other kinds of SMs utilizing fuzzy information are summarized in [7,8,9,10,11,12,13,14,15,16]. In addition, a complete bibliometric analysis of DMPs is given in [17, 18].

All the above work has been conducted under the IFS environment, which is restricted to the feasible region \(\mu +\nu \le 1\). Hence, the theory of IFSs is rather narrow and, in some cases, unable to quantify the analysis. For example, if the preference towards an object is given with MD 0.6 and NMD 0.7, then clearly \(0.6+0.7\nleq 1\). To handle such cases, the concept of Pythagorean fuzzy sets (PFSs) [19] was introduced by expanding the feasible region from \(\mu +\nu \le 1\) to \(\mu ^2+\nu ^2\le 1\). Since \(\mu +\nu \le 1\) implies \(\mu ^2+\nu ^2\le 1\), every IFS is also a PFS; PFSs are therefore more flexible than IFSs and can easily handle DMPs where IFSs fail. After their introduction, several researchers studied and enhanced the theory of PFSs using aggregation operators (AOs) or SMs in different fields. For example, Peng and Yang [20] presented some results on PFSs. Beliakov and James [21] defined averaging AOs for PFSs. Garg [22, 23] proposed weighted averaging and geometric AOs using Einstein t-norm operations for solving DMPs under the PFS environment. Wei [24] defined interactive averaging AOs for solving DMPs. Wei and Lu [25] extended the Maclaurin symmetric mean operators to the Pythagorean fuzzy (PF) environment. Ma and Xu [26] proposed symmetric averaging AOs for PF information. Garg [27, 28] developed exponential and logarithmic operations and the corresponding AOs for solving DMPs under the PFS environment. Apart from these, several other authors [29,30,31] have handled DMPs under the PFS environment.

The above-stated work is based on AOs; however, information measures such as SMs, score and accuracy functions, divergence measures, etc., are also useful for solving DMPs. Researchers have been active in this direction as well, as the literature shows. For example, Zhang and Xu [32] presented the concept of PF numbers (PFNs) and a TOPSIS (“Technique for Order Preference by Similarity to Ideal Solution”) method to solve DMPs with PFS information. Zeng et al. [33] developed an approach utilizing AOs and distance measures for solving DMPs. Garg [34] defined correlation coefficient measures for PFSs. Zhang [35] defined an SM-based algorithm to solve DMPs with PFNs. Wei and Wei [36] defined SMs for PFSs based on cosine measures. Apart from these, several authors have addressed extensions of PFSs, such as interval-valued PFSs [42], hesitant PFSs [43, 44], and linguistic PFSs [45], and applied them to various DMPs in different settings such as health care [46] and site selection [47]. Furthermore, some other measures, such as accuracy functions [37, 38], operations [39], and improved score functions [40, 41], have been defined for PFSs and interval-valued PFSs. In the context of DMPs, comparing two or more objects is an essential task, and the concept of SMs is a useful tool for it.

The existing SMs are based on the Hamming distance, which ignores the independent influences of the MD and the NMD. To extend the existing measures, in this paper, we introduce some new SMs for PFSs based on exponential functions of both the MDs and the NMDs. The salient features of these measures are studied in detail. Furthermore, an algorithm for solving DMPs based on the proposed SMs is presented. Finally, numerical examples are given to illustrate the approach.

The remaining work is summarized as follows. The basic concepts of PFSs and SMs are reviewed briefly in “Preliminaries”. In “New similarity measures on Pythagorean fuzzy sets”, we define some new SMs based on the exponential function for PFSs and study their properties. Section “Applications of the proposed SMs” deals with the applications of the proposed measures. Finally, a conclusion is given.


Preliminaries

In this section, we briefly review the basic concepts related to PFSs and SMs over the set X.

Definition 1

[2] An IFS A in X is given by

$$\begin{aligned} A = \{ \langle x, \mu _A(x),\nu _{A}(x) \rangle \mid x \in X \}, \end{aligned}$$

where \(\mu _{A}, \nu _{A} : X \rightarrow [0, 1]\) are the MD and NMD functions, such that \(\mu _{A}(x) + \nu _{A}(x) \le 1\), \(\forall x \in X.\) For convenience, Xu [48] denoted this pair as \(A=(\mu _{A}, \nu _{A})\).

Definition 2

[19] A PFS \(\mathcal {P}\) is given by

$$\begin{aligned} \mathcal {P} = \{ (x, \mu _{\mathcal {P}}(x),\nu _{\mathcal {P}}(x)) \mid x \in X \} \end{aligned}$$

where \(\mu _{\mathcal {P}}, \nu _{\mathcal {P}} : X \rightarrow [0,1]\) with \(\mu _{\mathcal {P}}^2(x)+\nu _{\mathcal {P}}^2(x)\le 1\) for all \(x\in X\). Such a pair is written as \(\mathcal {P}=(\mu _{\mathcal {P}}, \nu _{\mathcal {P}})\) and called a PFN [32]. Also, the degree of indeterminacy is given as \(\pi _{\mathcal {P}}=\sqrt{1-\mu _{\mathcal {P}}^2 - \nu _{\mathcal {P}}^2}\).
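As a quick numerical check of Definition 2 (a minimal Python sketch; the function names `is_pfn` and `indeterminacy` are our own, not from the paper):

```python
import math

def is_pfn(mu, nu):
    """Check the Pythagorean fuzzy constraint mu^2 + nu^2 <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu**2 + nu**2 <= 1

def indeterminacy(mu, nu):
    """Degree of indeterminacy pi = sqrt(1 - mu^2 - nu^2)."""
    return math.sqrt(1 - mu**2 - nu**2)

# The pair (0.6, 0.7) violates the IFS condition 0.6 + 0.7 <= 1,
# yet it is a valid PFN since 0.36 + 0.49 = 0.85 <= 1.
print(is_pfn(0.6, 0.7))               # True
print(round(indeterminacy(0.6, 0.7), 4))
```

This is exactly the motivating example of the introduction: the pair (0.6, 0.7) lies outside the IFS region but inside the Pythagorean one.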

Note 1

The collection of all PFSs over X is written as \(\varPhi (X)\).

Definition 3

[19, 20, 27] Let \(\mathcal {P} = (\mu , \nu )\), \(\mathcal {P}_1 = ( \mu _{1}, \nu _{1} )\) and \(\mathcal {P}_2 = ( \mu _{2}, \nu _{2} )\) be three PFNs, then we have

  1. (i)

    \(\mathcal {P}^c = ( \nu , \mu )\).

  2. (ii)

    \(\mathcal {P}_1\subseteq \mathcal {P}_2\) if \(\mu _{1}\le \mu _{2}\) and \(\nu _{1}\ge \nu _{2}\).

  3. (iii)

    \(\mathcal {P}_1 = \mathcal {P}_2\) if \(\mathcal {P}_1\subseteq \mathcal {P}_2\) and \(\mathcal {P}_2\subseteq \mathcal {P}_1\).

  4. (iv)

    \(\mathcal {P}_1\cap \mathcal {P}_2=( \min (\mu _{1}, \mu _{2}),\max (\nu _{1}, \nu _{2}))\).

  5. (v)

    \(\mathcal {P}_1\cup \mathcal {P}_2 = ( \max (\mu _{1}, \mu _{2}), \min (\nu _{1}, \nu _{2})) \).

  6. (vi)

    \(\mathcal {P}_1 \oplus \mathcal {P}_2 = \left( \sqrt{\mu _1^{2}+\mu _2^{2}-\mu _1^{2}\mu _2^{2}}, \nu _1\nu _2\right) \).

  7. (vii)

    \(\mathcal {P}_1 \otimes \mathcal {P}_2= \left( \mu _1\mu _2, \sqrt{\nu _1^{2}+\nu _2^{2}-\nu _1^{2}\nu _2^{2}}\right) \).

  8. (viii)

    \(\lambda \mathcal {P}_1= \left( \sqrt{1-(1-\mu _1^{2})^\lambda }, \nu _1^\lambda \right) \), \(\lambda > 0\).

  9. (ix)

    \(\mathcal {P}_1^{\lambda }=\left( {\mu _1}^\lambda , \sqrt{1-(1-\nu _1^{2})^\lambda }\right) \), \(\lambda > 0\).

  10. (x)

    \(\lambda ^{\mathcal {P}} = {\left\{ \begin{array}{ll} \left( \lambda ^{\sqrt{1-\mu ^{2}}}, \sqrt{1-\lambda ^{2\nu }} \right) &{} \text {if } \lambda \in (0, 1) \\ \left( (1/\lambda )^{\sqrt{1-\mu ^{2}}}, \sqrt{1-(1/\lambda )^{2\nu }} \right) &{} \text {if } \lambda \ge 1\\ \end{array}\right. }\)
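The algebraic operations (vi)–(viii) above can be sketched in Python (a hedged illustration; the function names are our own, and PFNs are plain (μ, ν) tuples):

```python
import math

def pfn_sum(p1, p2):
    """(vi): P1 (+) P2 = (sqrt(mu1^2 + mu2^2 - mu1^2*mu2^2), nu1*nu2)."""
    (m1, n1), (m2, n2) = p1, p2
    return (math.sqrt(m1**2 + m2**2 - m1**2 * m2**2), n1 * n2)

def pfn_product(p1, p2):
    """(vii): P1 (x) P2 = (mu1*mu2, sqrt(nu1^2 + nu2^2 - nu1^2*nu2^2))."""
    (m1, n1), (m2, n2) = p1, p2
    return (m1 * m2, math.sqrt(n1**2 + n2**2 - n1**2 * n2**2))

def pfn_scale(lam, p):
    """(viii): lam * P = (sqrt(1 - (1 - mu^2)^lam), nu^lam), lam > 0."""
    mu, nu = p
    return (math.sqrt(1 - (1 - mu**2)**lam), nu**lam)

p1, p2 = (0.6, 0.7), (0.5, 0.4)
s = pfn_sum(p1, p2)
print(s)
# closure: the result again satisfies the Pythagorean constraint
assert s[0]**2 + s[1]**2 <= 1
```

A short check like the final assertion confirms that each operation is closed over PFNs.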

Definition 4

A real-valued function \(S:\varPhi (X) \times \varPhi (X) \rightarrow [0,1]\) is called similarity measure if the following properties are satisfied:

  1. (P1)

    \(0\le S(\mathcal {P}, \mathcal {Q})\le 1\).

  2. (P2)

    \(S(\mathcal {P}, \mathcal {Q})=1\) \(\Leftrightarrow \) \(\mathcal {P}=\mathcal {Q}\)

  3. (P3)

    \(S(\mathcal {P},\mathcal {Q})=S(\mathcal {Q}, \mathcal {P})\)

  4. (P4)

    If \(\mathcal {P}\subseteq \mathcal {Q}\subseteq \mathcal {R}\) then, \(S(\mathcal {P},\mathcal {R})\le S(\mathcal {P},\mathcal {Q})\) and \(S(\mathcal {P},\mathcal {R})\le S(\mathcal {Q},\mathcal {R})\), where \(\mathcal {P}, \mathcal {Q}, \mathcal {R} \in \varPhi (X)\).

Definition 5

[35] Let \(\mathcal {P}\) and \(\mathcal {Q}\) be two PFSs over the finite set \(X=\{x_1,x_2,\ldots ,x_n\}\). Then, the distance-based SM is defined as

$$\begin{aligned} \text {Sm}(\mathcal {P}, \mathcal {Q}) =\sum _{i = 1}^{n}\omega _i \frac{d(\mathcal {P}_i,\mathcal {Q}_i^C)}{d(\mathcal {P}_i,\mathcal {Q}_i)+d(\mathcal {P}_i,\mathcal {Q}_i^C)}, \end{aligned}$$

where \(\omega _i>0\) is the normalized weight of \(x_i\in X\) and \(\mathcal {Q}^C\) is the complement of the PFS \(\mathcal {Q}\). In addition, \(d(\mathcal {P}_i, \mathcal {Q}_i) = \frac{1}{2} \{|\mu _\mathcal {P}^2(x_i) - \mu _\mathcal {Q}^2(x_i)| + |\nu _\mathcal {P}^2(x_i) - \nu _\mathcal {Q}^2(x_i)| + |\pi _\mathcal {P}^2(x_i) - \pi _\mathcal {Q}^2(x_i)|\}\) is the distance measure between the PF elements \(\mathcal {P}_i\) and \(\mathcal {Q}_i\) for all \(i = 1, 2, \ldots , n\).
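Definition 5 can be sketched numerically as follows (Python, our own naming; each PFS is a list of (μ, ν) pairs and π² is computed implicitly):

```python
import math

def pf_distance(p, q):
    """Hamming-type distance between PF elements p = (mu_p, nu_p), q = (mu_q, nu_q)."""
    (mp, np_), (mq, nq) = p, q
    pip2 = 1 - mp**2 - np_**2          # pi_P^2
    piq2 = 1 - mq**2 - nq**2           # pi_Q^2
    return 0.5 * (abs(mp**2 - mq**2) + abs(np_**2 - nq**2) + abs(pip2 - piq2))

def sm_distance_based(P, Q, w):
    """Sm(P, Q) = sum_i w_i * d(P_i, Q_i^C) / (d(P_i, Q_i) + d(P_i, Q_i^C)).

    Caveat: the ratio is undefined when an element equals both Q_i and its
    complement (mu = nu); the sample data below avoids that edge case.
    """
    total = 0.0
    for p, q, wi in zip(P, Q, w):
        qc = (q[1], q[0])              # complement swaps mu and nu
        d_pq, d_pqc = pf_distance(p, q), pf_distance(p, qc)
        total += wi * d_pqc / (d_pq + d_pqc)
    return total

P = [(0.7, 0.3), (0.6, 0.5)]
Q = [(0.5, 0.4), (0.6, 0.5)]
print(round(sm_distance_based(P, Q, [0.5, 0.5]), 4))
```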

Definition 6

[36] For two PFSs \(\mathcal {P}\) and \(\mathcal {Q}\), two cosine SMs between them are defined as

$$\begin{aligned}&\text {PFC}^1(\mathcal {P}, \mathcal {Q}) \nonumber \\&\quad = \sum _{i = 1}^{n} \omega _i \left( \frac{\mu _\mathcal {P}^2(x_i)\mu _\mathcal {Q}^2(x_i)+\nu _\mathcal {P}^2(x_i) \nu _\mathcal {Q}^2(x_i)}{\sqrt{\mu _\mathcal {P}^4(x_i)+\nu _\mathcal {P}^4(x_i)}\sqrt{\mu _\mathcal {Q}^4(x_i)+\nu _\mathcal {Q}^4(x_i)}} \right) ,\nonumber \\ \end{aligned}$$


$$\begin{aligned}&\text {PFC}^2(\mathcal {P}, \mathcal {Q}) = \sum _{i = 1}^{n} \omega _i\nonumber \\&\quad \times \,\left( \frac{\mu _\mathcal {P}^2(x_i)\mu _\mathcal {Q}^2(x_i)+\nu _\mathcal {P}^2(x_i)\nu _\mathcal {Q}^2(x_i)+ \pi _\mathcal {P}^2(x_i)\pi _\mathcal {Q}^2(x_i)}{\sqrt{\mu _\mathcal {P}^4(x_i)+ \nu _\mathcal {P}^4(x_i)+\pi _\mathcal {P}^4(x_i)}\sqrt{\mu _\mathcal {Q}^4(x_i)+\nu _\mathcal {Q}^4(x_i)+\pi _\mathcal {Q}^4(x_i)}} \right) ,\nonumber \\ \end{aligned}$$

where \(\omega _i>0\) is the normalized weight vector of \(x_i\in X\).
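The second cosine measure treats each element as the vector \((\mu ^2, \nu ^2, \pi ^2)\) and applies the standard cosine similarity, with the norms grouped per set. A minimal Python sketch (the function name `pfc2` is our own):

```python
import math

def pfc2(P, Q, w):
    """Weighted cosine SM over the (mu^2, nu^2, pi^2) vectors of each element."""
    total = 0.0
    for (mp, np_), (mq, nq), wi in zip(P, Q, w):
        vp = (mp**2, np_**2, 1 - mp**2 - np_**2)   # (mu_P^2, nu_P^2, pi_P^2)
        vq = (mq**2, nq**2, 1 - mq**2 - nq**2)
        dot = sum(a * b for a, b in zip(vp, vq))
        norm = math.sqrt(sum(a * a for a in vp)) * math.sqrt(sum(b * b for b in vq))
        total += wi * dot / norm
    return total

P = [(0.7, 0.3)]
Q = [(0.5, 0.4)]
print(round(pfc2(P, Q, [1.0]), 4))
# identical sets give cosine similarity 1 for each element
assert abs(pfc2(P, P, [1.0]) - 1.0) < 1e-12
```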

New similarity measures on Pythagorean fuzzy sets

This section presents some new SMs based on exponential functions of the MDs and NMDs under the PFS environment over the finite set X.

Definition 7

For two PFSs \(\mathcal {P} = \{ \langle x_i, \mu _\mathcal {P}(x_i),\nu _\mathcal {P}(x_i) \rangle | x_i \in X \}\) and \(\mathcal {Q} = \{ \langle x_i, \mu _\mathcal {Q}(x_i),\nu _\mathcal {Q}(x_i) \rangle | x_i \in X \}\), the two exponential functions are defined as

$$\begin{aligned} \mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}) = e^{-|\mu _\mathcal {P}^2(x_i) - \mu _\mathcal {Q}^2(x_i)|} \end{aligned}$$


$$\begin{aligned} \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}) = e^{-|\nu _\mathcal {P}^2(x_i) - \nu _\mathcal {Q}^2(x_i)|}. \end{aligned}$$
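For illustration, the two functions can be evaluated directly (a minimal Python sketch; the names `s_mu` and `s_nu` are our own):

```python
import math

def s_mu(p, q):
    """S_i^mu = exp(-|mu_P^2 - mu_Q^2|) for one element p = (mu, nu)."""
    return math.exp(-abs(p[0]**2 - q[0]**2))

def s_nu(p, q):
    """S_i^nu = exp(-|nu_P^2 - nu_Q^2|) for one element."""
    return math.exp(-abs(p[1]**2 - q[1]**2))

p, q = (0.6, 0.7), (0.8, 0.1)
# both values lie in (0, 1] and equal 1 exactly when the degrees coincide
print(round(s_mu(p, q), 4), round(s_nu(p, q), 4))
```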

Theorem 1

For any two PFSs \(\mathcal {P}\) and \(\mathcal {Q}\), we have

  1. (P1)

    \(0 \le \mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}), \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}) \le 1\);

  2. (P2)

    \(\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}) = \mathcal {S}_i^{\mu }(\mathcal {Q},\mathcal {P})\) and \(\mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}) = \mathcal {S}_i^{\nu }(\mathcal {Q},\mathcal {P})\);

  3. (P3)

    \(\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}) = \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}) = 1\) if and only if \(\mathcal {P} = \mathcal {Q}\);

  4. (P4)

    if \(\mathcal {P} \subseteq \mathcal {Q} \subseteq \mathcal {R}\), then \(\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {R}) \le \min \{\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}), \mathcal {S}_i^{\mu }(\mathcal {Q}, \mathcal {R}) \}\) and \(\mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {R}) \le \min \{\mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}), \mathcal {S}_i^{\nu }(\mathcal {Q}, \mathcal {R}) \}\).


Proof

Let \(\mathcal {P}=(\mu _\mathcal {P}(x_i), \nu _\mathcal {P}(x_i) )\) and \(\mathcal {Q}=(\mu _\mathcal {Q}(x_i), \nu _\mathcal {Q}(x_i))\) be two PFSs over X.

  1. (P1)

    By the definition of PFSs, we have \(0\le \mu _\mathcal {P}^2(x_i), \mu _\mathcal {Q}^2(x_i)\le 1\) and \(\mu _\mathcal {P}^2(x_i)+\nu _\mathcal {P}^2(x_i)\le 1\) for all \(x_i\in X\). Thus, we have

    $$\begin{aligned}&-1 \le - |\mu _\mathcal {P}^2(x_i) - \mu _\mathcal {Q}^2(x_i)| \le 0 \ \text {and } \\&\quad -1 \le - |\nu _\mathcal {P}^2(x_i) - \nu _\mathcal {Q}^2(x_i)| \le 0. \end{aligned}$$


    Hence,

    $$\begin{aligned} 0 \le e^{ - |\mu _\mathcal {P}^2(x_i) - \mu _\mathcal {Q}^2(x_i)|} \le 1 \ \text {and } 0 \le e^{- |\nu _\mathcal {P}^2(x_i) - \nu _\mathcal {Q}^2(x_i)|} \le 1. \end{aligned}$$

    Thus, (P1) holds.

  2. (P2)

    It is obtained from the definition.

  3. (P3)

    If \(\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}) = \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}) = 1\), then \(|\mu _\mathcal {P}^2(x_i) - \mu _\mathcal {Q}^2(x_i)| = |\nu _\mathcal {P}^2(x_i) - \nu _\mathcal {Q}^2(x_i)| = 0\), i.e., \(\mu _\mathcal {P}(x_i) = \mu _\mathcal {Q}(x_i)\) and \(\nu _\mathcal {P}(x_i) = \nu _\mathcal {Q}(x_i)\) for all \(x_i \in X\), which means that \(\mathcal {P} = \mathcal {Q}\). Conversely, if \(\mathcal {P} = \mathcal {Q}\), then clearly \(\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}) = \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}) = 1\).

  4. (P4)

    If \(\mathcal {P} \subseteq \mathcal {Q} \subseteq \mathcal {R}\), then for \(x_i \in X\), we have

    $$\begin{aligned}0 \le \mu _\mathcal {P}(x_i) \le \mu _\mathcal {Q}(x_i) \le \mu _{\mathcal {R}}(x_i) \le 1\end{aligned}$$


    $$\begin{aligned}1 \ge \nu _\mathcal {P}(x_i) \ge \nu _\mathcal {Q}(x_i) \ge \nu _{\mathcal {R}}(x_i) \ge 0.\end{aligned}$$

    This implies that

    $$\begin{aligned} 0 \le \mu _\mathcal {P}^2(x_i) \le \mu _\mathcal {Q}^2(x_i) \le \mu _{\mathcal {R}}^2(x_i) \le 1 \end{aligned}$$


    $$\begin{aligned} 1 \ge \nu _\mathcal {P}^2(x_i) \ge \nu _\mathcal {Q}^2(x_i) \ge \nu _{\mathcal {R}}^2(x_i) \ge 0. \end{aligned}$$


    Thus,

    $$\begin{aligned}&-|\mu _\mathcal {P}^2(x_i) - \mu _{\mathcal {R}}^2(x_i)| \\&\quad \le \min \{-|\mu _\mathcal {P}^2(x_i) - \mu _\mathcal {Q}^2(x_i)|, -|\mu _\mathcal {Q}^2(x_i) - \mu _{\mathcal {R}}^2(x_i)|\} \end{aligned}$$


    $$\begin{aligned}&-|\nu _\mathcal {P}^2(x_i) - \nu _{\mathcal {R}}^2(x_i)| \\&\quad \le \min \{-|\nu _\mathcal {P}^2(x_i) - \nu _\mathcal {Q}^2(x_i)|, -|\nu _\mathcal {Q}^2(x_i) - \nu _{\mathcal {R}}^2(x_i)|\}. \end{aligned}$$

    It means that

    $$\begin{aligned} \mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {R}) \le \min \{\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}),\mathcal {S}_i^{\mu }(\mathcal {Q}, \mathcal {R}) \} \end{aligned}$$


    $$\begin{aligned} \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {R}) \le \min \{\mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}),\mathcal {S}_i^{\nu }(\mathcal {Q}, \mathcal {R}) \}. \end{aligned}$$

\(\square \)

Next, based on the two functions defined in Eqs. (6) and (7), we define the weighted SMs for PFSs as below.

Definition 8

Let \(\mathcal {P}\) and \(\mathcal {Q}\) be two PFSs defined over X and let \(\omega _i>0\) be the weight of the element \(x_i\in X\) satisfying \(\sum _{i=1}^{n} \omega _i = 1\). Then, a weighted SM between them is defined as

$$\begin{aligned} \mathcal {S}_0(\mathcal {P}, \mathcal {Q}) = \sum _{i=1}^{n} \omega _i \times \mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}) \times \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}). \end{aligned}$$
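A minimal sketch of \(\mathcal {S}_0\) in Python (our own naming; each PFS is a list of (μ, ν) pairs and the weights are normalized):

```python
import math

def s0(P, Q, w):
    """S_0(P, Q) = sum_i w_i * exp(-|mu_P^2 - mu_Q^2|) * exp(-|nu_P^2 - nu_Q^2|)."""
    return sum(
        wi * math.exp(-abs(mp**2 - mq**2)) * math.exp(-abs(np_**2 - nq**2))
        for (mp, np_), (mq, nq), wi in zip(P, Q, w)
    )

P = [(0.6, 0.7), (0.8, 0.1)]
Q = [(0.5, 0.6), (0.8, 0.2)]
w = [0.4, 0.6]
print(round(s0(P, Q, w), 4))
assert abs(s0(P, P, w) - 1.0) < 1e-12   # identical sets give similarity 1
```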

Theorem 2

The measure defined in Definition 8 is a valid SM for PFSs.


Proof

For two PFSs \(\mathcal {P}\) and \(\mathcal {Q}\), from Theorem 1 we have

  1. (P1)

    Since \(0 \le \mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}), \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}) \le 1\) which implies that \(0 \le \mathcal {S}_0(\mathcal {P}, \mathcal {Q}) \le \sum _{i=1}^{n} \omega _i = 1.\)

  2. (P2)

    As \(\mathcal {S}_i^{\mu }\) and \(\mathcal {S}_i^{\nu }\) are symmetric for PFSs, \(\mathcal {S}_0\) also has this property.

  3. (P3)

    As \(\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}) = \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}) = 1\) if and only if \(\mathcal {P} = \mathcal {Q}\), and since \(\sum _{i=1}^n \omega _i=1\), we get \(\mathcal {S}_0(\mathcal {P}, \mathcal {Q}) = 1\) if and only if \(\mathcal {P} = \mathcal {Q}\).

  4. (P4)

    For three PFSs \(\mathcal {P}, \mathcal {Q}\) and \(\mathcal {R}\) satisfying \(\mathcal {P} \subseteq \mathcal {Q} \subseteq \mathcal {R}\), we observe from Theorem 1 that \(\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {R}) \le \min \{\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}), \mathcal {S}_i^{\mu }(\mathcal {Q}, \mathcal {R}) \}\) and \(\mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {R}) \le \min \{\mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}),\mathcal {S}_i^{\nu }(\mathcal {Q}, \mathcal {R}) \}\). Thus, from Eq. (8), \(\mathcal {S}_0(\mathcal {P}, \mathcal {R}) \le \mathcal {S}_0(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {S}_0(\mathcal {P}, \mathcal {R}) \le \mathcal {S}_0(\mathcal {Q}, \mathcal {R})\).

\(\square \)

Besides this, we can also define some other types of the SMs based on Eqs. (6) and (7), which are summarized in Definitions 9 and 10.

Definition 9

For two PFSs \(\mathcal {P}\) and \(\mathcal {Q}\), the weighted average SM of the functions \(\mathcal {S}_i^{\mu }\) and \(\mathcal {S}_i^{\nu }\) is defined as

$$\begin{aligned} \mathcal {S}_1(\mathcal {P}, \mathcal {Q}) = \sum _{i = 1}^n \omega _i \left( \frac{\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q})+ \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q})}{2}\right) , \end{aligned}$$

where \(\omega _i>0\) is the normalized weight of the element \(x_i\in X\).

Theorem 3

The measure given in Definition 9 is a valid SM for PFSs.


Proof

For two PFSs \(\mathcal {P}\) and \(\mathcal {Q}\), from Theorem 1 we have

  1. (P1)

    Since \(0 \le \mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}), \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}) \le 1\), we have

    $$\begin{aligned} 0 \le \mathcal {S}_1(\mathcal {P}, \mathcal {Q}) \le \sum _{i=1}^{n} \omega _i = 1. \end{aligned}$$
  2. (P2)

    It can be easily proven, so we omit it here.

  3. (P3)

    As \(\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}) = \mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}) = 1\) if and only if \(\mathcal {P} = \mathcal {Q}\), so by the definition of \(\mathcal {S}_1\), we get \(\mathcal {S}_1(\mathcal {P}, \mathcal {Q}) = 1\) if and only if \(\mathcal {P} = \mathcal {Q}.\)

  4. (P4)

    For PFSs \(\mathcal {P}\), \(\mathcal {Q}\) and \(\mathcal {R}\) such that \(\mathcal {P} \subseteq \mathcal {Q} \subseteq \mathcal {R}\), we have \(\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {R}) \le \min \{\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}),\mathcal {S}_i^{\mu }(\mathcal {Q}, \mathcal {R}) \}\) and \(\mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {R}) \le \min \{\mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}),\mathcal {S}_i^{\nu }(\mathcal {Q}, \mathcal {R}) \}\), so that \(\mathcal {S}_1(\mathcal {P}, \mathcal {R}) \le \mathcal {S}_1(\mathcal {P}, \mathcal {Q})\) and \(\mathcal {S}_1(\mathcal {P}, \mathcal {R}) \le \mathcal {S}_1(\mathcal {Q}, \mathcal {R})\).

\(\square \)

Definition 10

For two PFSs \(\mathcal {P}\) and \(\mathcal {Q}\) and using functions \(\mathcal {S}_i^{\mu }\) and \(\mathcal {S}_i^{\nu }\), a generalized weighted SM \(\mathcal {S}_p\) is defined as

$$\begin{aligned} \mathcal {S}_p(\mathcal {P}, \mathcal {Q})= & {} \sum _{i = 1}^n \omega _i \left( \root p \of {\frac{(\mathcal {S}_i^{\mu }(\mathcal {P}, \mathcal {Q}))^p+ (\mathcal {S}_i^{\nu }(\mathcal {P}, \mathcal {Q}))^p}{2}} \right) \nonumber \\&\quad \text {for all } p \in \mathbb {N}^* = \{1, 2, 3, \ldots \}. \end{aligned}$$
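A sketch of \(\mathcal {S}_p\) in Python, keeping the factor 1/2 inside the p-th root so that \(\mathcal {S}_p(\mathcal {P}, \mathcal {P})=1\); for \(p=1\) the expression reduces to the weighted average measure \(\mathcal {S}_1\) of Definition 9 (function names are our own):

```python
import math

def s_p(P, Q, w, p=1):
    """Generalized weighted SM: S_p = sum_i w_i * (((S_i^mu)^p + (S_i^nu)^p) / 2)^(1/p)."""
    total = 0.0
    for (mp, np_), (mq, nq), wi in zip(P, Q, w):
        smu = math.exp(-abs(mp**2 - mq**2))
        snu = math.exp(-abs(np_**2 - nq**2))
        total += wi * ((smu**p + snu**p) / 2) ** (1.0 / p)
    return total

P = [(0.6, 0.7), (0.8, 0.1)]
Q = [(0.5, 0.6), (0.8, 0.2)]
w = [0.4, 0.6]
# p = 1 is the arithmetic-mean form; larger p weights the larger of the two terms more
print(round(s_p(P, Q, w, p=1), 4), round(s_p(P, Q, w, p=2), 4))
```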

Theorem 4

The function \(\mathcal {S}_p\) given in Definition 10 is an SM.

The proof is similar to that of Theorem 3.

Applications of the proposed SMs

This section explores the advantages of the proposed SMs in solving pattern recognition problems and DMPs.

Verification and the comparative analysis

To show the superiority and advantages of the proposed measures, we first compare their performance with the existing measures [3, 7,8,9,10,11,12,13,14,15,16, 35, 36, 49] listed in Table 1 on some common data sets.

Table 1 Existing similarity measures
Table 2 Comparison of SMs adopted from [3]

The results computed by the proposed SMs (\(\mathcal {S}_0\) and \(\mathcal {S}_1\)) and the existing SMs [3, 7,8,9,10,11,12,13,14,15,16, 35, 36, 49] are listed in Table 2, which suggests that the proposed measures, as well as \(\mathcal {S}_{BA}\) [16] and \(\mathcal {S}_{CC}\) [9], can overcome the drawbacks of the several other existing SMs (\(\mathcal {S}_{\mathcal {R}}\) [8], \(\mathcal {S}_{HY1}\) [10], \(\mathcal {S}_{HY2}\) [10], \(\mathcal {S}_{HY3}\) [10], \(\mathcal {S}_{HK}\) [11], \(\mathcal {S}_{LC}\) [12], \(\mathcal {S}_{LX}\) [13], \(\mathcal {S}_{L}\) [7], \(\mathcal {S}_{LS1}\) [14], \(\mathcal {S}_{LS2}\) [14], \(\mathcal {S}_{LS3}\) [14], \(\mathcal {S}_{M}\) [15], \(\mathcal {S}_{Y}\) [3], \(\mathcal {S}_{P1}\) [49], \(\mathcal {S}_{P2}\) [49], \(\mathcal {S}_{P3}\) [49], \(\mathcal {S}_{Z}\) [35], and \(\mathcal {S}_W\) [36]).

Furthermore, to highlight the advantages of the proposed SMs over the existing measures, we consider another data set; the results computed by the existing measures [3, 7,8,9,10,11,12,13,14,15,16, 35, 36, 49] as well as the proposed measures \((\mathcal {S}_0, \mathcal {S}_1)\) are given in Table 3. It is clearly seen from this table that the proposed SMs overcome certain drawbacks of the existing measures \(\mathcal {S}_{BA}\) [16], \(\mathcal {S}_{\mathcal {R}}\) [8], \(\mathcal {S}_{HY1}\) [10], \(\mathcal {S}_{HY2}\) [10], \(\mathcal {S}_{HY3}\) [10], \(\mathcal {S}_{HK}\) [11], \(\mathcal {S}_{LC}\) [12], \(\mathcal {S}_{LX}\) [13], \(\mathcal {S}_{L}\) [7], \(\mathcal {S}_{LS1}\) [14], \(\mathcal {S}_{LS2}\) [14], \(\mathcal {S}_{LS3}\) [14], \(\mathcal {S}_{M}\) [15], \(\mathcal {S}_{Y}\) [3], \(\mathcal {S}_{P1}\) [49], \(\mathcal {S}_{P2}\) [49], \(\mathcal {S}_Z\) [35], and \(\mathcal {S}_W\) [36].

Table 3 Comparison of SMs adopted from [9]

Finally, we further show that the existing measures [3, 7,8,9,10,11,12,13,14,15,16, 35, 36, 49] also suffer from shortcomings in some special cases, which are listed in Table 4. In these cases, the proposed SMs \((\mathcal {S}_0, \mathcal {S}_1)\) give the best results as compared to the existing measures \(\mathcal {S}_{BA}\) [16], \(\mathcal {S}_{\mathcal {R}}\) [8], \(\mathcal {S}_{HY1}\) [10], \(\mathcal {S}_{HY2}\) [10], \(\mathcal {S}_{HY3}\) [10], \(\mathcal {S}_{HK}\) [11], \(\mathcal {S}_{LC}\) [12], \(\mathcal {S}_{LX}\) [13], \(\mathcal {S}_{L}\) [7], \(\mathcal {S}_{LS1}\) [14], \(\mathcal {S}_{LS2}\) [14], \(\mathcal {S}_{LS3}\) [14], \(\mathcal {S}_{M}\) [15], \(\mathcal {S}_{Y}\) [3], \(\mathcal {S}_{P1}\) [49], \(\mathcal {S}_{P2}\) [49], \(\mathcal {S}_Z\) [35], and \(\mathcal {S}_W\) [36].

Table 4 Comparison of SMs

Applications related to pattern recognition

Example 1

Consider three known patterns \(\mathcal {P}_i (i = 1, 2, 3)\), whose characteristics are represented in terms of PFSs over the feature space \(X = \{x_1, x_2, x_3\}\) as follows:

$$\begin{aligned} \mathcal {P}_1&= \{(x_1, 1, 0), (x_2, 0.8, 0), (x_3, 0.7, 0.1)\} ; \\ \mathcal {P}_2&= \{(x_1, 0.8, 0.1), (x_2, 1, 0), (x_3, 0.9, 0.1)\} ; \\ \mathcal {P}_3&= \{(x_1, 0.6, 0.2), (x_2, 0.8, 0), (x_3, 1, 0)\}. \end{aligned}$$

Consider an unknown sample \(\mathcal {Q}\) under the PFS environment, defined as

$$\begin{aligned} \mathcal {Q}= \{(x_1, 0.5, 0.3), (x_2, 0.6, 0.2), (x_3, 0.8, 0.1)\}. \end{aligned}$$

Our goal is to classify the unknown pattern \(\mathcal {Q}\) to one of the patterns \(\mathcal {P}_i\). To achieve this, we choose an arbitrary weight vector \(\omega = (0.5, 0.3, 0.2)\) for the elements of X; the values of the proposed SMs along with the existing SMs [35, 36] are computed and listed in Table 5. From it, we find that \(\mathcal {Q}\) is classified to \(\mathcal {P}_3\), which coincides with the result of the existing measures.
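The computation behind Example 1 can be reproduced with a short script (a sketch using the measure \(\mathcal {S}_1\) with the data and weights given above; the function name `s1` is our own):

```python
import math

def s1(P, Q, w):
    """Weighted average exponential SM (Definition 9)."""
    return sum(
        wi * (math.exp(-abs(mp**2 - mq**2)) + math.exp(-abs(np_**2 - nq**2))) / 2
        for (mp, np_), (mq, nq), wi in zip(P, Q, w)
    )

patterns = {
    "P1": [(1.0, 0.0), (0.8, 0.0), (0.7, 0.1)],
    "P2": [(0.8, 0.1), (1.0, 0.0), (0.9, 0.1)],
    "P3": [(0.6, 0.2), (0.8, 0.0), (1.0, 0.0)],
}
Q = [(0.5, 0.3), (0.6, 0.2), (0.8, 0.1)]
w = [0.5, 0.3, 0.2]

scores = {name: s1(P, Q, w) for name, P in patterns.items()}
best = max(scores, key=scores.get)
print(best)  # P3, in agreement with Table 5
```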

Table 5 Comparison analysis and the ranking order
Table 6 Rating values in terms of PFNs
Table 7 Comparative study for Example 2

Application to the DMPs

This section presents a DMP method based on the proposed SMs under the PFS environment to determine the finest alternative(s). For it, let \(\mathcal {P} = \{ \mathcal {P}_1, \mathcal {P}_2, \ldots , \mathcal {P}_m\}\) be the set of “m” alternatives and \(\mathcal {G} = \{\mathcal {G}_1, \mathcal {G}_2, \ldots , \mathcal {G}_n\}\) be the set of “n” criteria, whose weight vector is \(\omega _j>0\) with \(\sum _{j=1}^n \omega _j = 1\). An expert evaluates these alternatives and rates them in terms of PFNs \(\gamma _{ij}=(\mu _{ij}, \nu _{ij})\), such that \(\mu _{ij}^2+\nu _{ij}^2\le 1\) is satisfied. The complete PF decision matrix D is defined as


Then, the following steps are proposed based on the proposed SMs to evaluate them.

  1. Step 1:

    Determine the weight of each criteria

    We determine the weight vector \(\omega _{j,k}, (k = 0, 1, 2, \ldots )\) of each criterion \(\mathcal {G}_j\) using the following equation:

    $$\begin{aligned} \omega _{j,k} = \frac{(d_j)^k}{\sum _{j = 1}^n(d_j)^k}, k = 0, 1, 2, \ldots \end{aligned}$$

    where \(d_j = d_{1j} + d_{2j}\) in which \(d_{1j} = \max \nolimits _{i} \mu _{ij}\), \(d_{2j} = \min \nolimits _{i}\nu _{ij}\) for all \(j = 1, 2, \ldots , n\), such that \(\sum _{j = 1}^{n} \omega _{j,k} = 1\) for \(k = 0, 1, 2, \ldots \).

  2. Step 2:

    Determine the ideal values

    The given criteria are divided into two disjoint sets, namely, the cost criteria \(\mathcal {F}_1\) and the benefit criteria \(\mathcal {F}_2\). For the \(\mathcal {F}_1\) criteria, the ideal values are taken as (0, 1), while for the \(\mathcal {F}_2\) criteria, we take (1, 0). Note that (1, 0) is the largest PFN and (0, 1) is the smallest PFN. Therefore, we represent the ideal values for all criteria as \(\mathcal {P}_b = (\mathcal {P}_b(1), \mathcal {P}_b(2), \ldots , \mathcal {P}_b(n))\), where \(\mathcal {P}_b(j) = (1, 0)\) if \(\mathcal {G}_j \in \mathcal {F}_2\) and \(\mathcal {P}_b(j) = (0, 1)\) if \(\mathcal {G}_j\in \mathcal {F}_1\) for all \(j = 1, 2, \ldots , n.\)

  3. Step 3:

    Calculate the SMs of each alternative from its ideal values

    Using one of the proposed SMs, i.e., \(\mathcal {S}_0\), \(\mathcal {S}_1\), or \(\mathcal {S}_p\), compute the measurement value of each alternative with respect to the ideal values \(\mathcal {P}_b\) obtained in Step 2.

  4. Step 4:

    Rank the alternatives

    Based on the assessment values of the SMs, rank the given alternatives with the following rules:

    $$\begin{aligned}&\mathcal {P}_i \prec \mathcal {P}_p \ \text {if and only if } S(\mathcal {P}_i, \mathcal {P}_b) \le S(\mathcal {P}_p, \mathcal {P}_b) \\&\quad \text {for all } i, p = 1, 2, \ldots , m. \end{aligned}$$

    Here, S represents the SM.
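Under the stated assumptions, the four steps can be combined into one routine. The following Python sketch uses the measure \(\mathcal {S}_1\) and Eq. (12) for the criteria weights; the function names and the small rating matrix are hypothetical, not data from the paper:

```python
import math

def s1(P, Q, w):
    """Weighted average exponential SM (Definition 9)."""
    return sum(
        wi * (math.exp(-abs(mp**2 - mq**2)) + math.exp(-abs(np_**2 - nq**2))) / 2
        for (mp, np_), (mq, nq), wi in zip(P, Q, w)
    )

def rank_alternatives(D, cost_criteria, k=1):
    """Steps 1-4: criteria weights via Eq. (12), ideal values, SM to the ideal, rank."""
    m, n = len(D), len(D[0])
    # Step 1: d_j = max_i mu_ij + min_i nu_ij, then w_j = d_j^k / sum_j d_j^k
    d = [max(D[i][j][0] for i in range(m)) + min(D[i][j][1] for i in range(m))
         for j in range(n)]
    denom = sum(dj ** k for dj in d)
    w = [dj ** k / denom for dj in d]
    # Step 2: ideal values -- (0, 1) for cost criteria, (1, 0) for benefit criteria
    ideal = [(0.0, 1.0) if j in cost_criteria else (1.0, 0.0) for j in range(n)]
    # Steps 3-4: similarity of each alternative to the ideal; higher is better
    scores = [s1(D[i], ideal, w) for i in range(m)]
    order = sorted(range(m), key=lambda i: scores[i], reverse=True)
    return order, scores

# Hypothetical 3-alternative x 3-criteria PFN matrix; criterion index 2 is a cost criterion
D = [[(0.7, 0.2), (0.5, 0.4), (0.3, 0.6)],
     [(0.6, 0.3), (0.8, 0.1), (0.2, 0.8)],
     [(0.9, 0.1), (0.6, 0.3), (0.5, 0.4)]]
order, scores = rank_alternatives(D, cost_criteria={2})
print("ranking (best first):", order)
```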

Example 2

To demonstrate the above method, we consider an example related to investing money in a company. A person shortlists five possible companies \(\mathcal {P}_i, i = 1, 2, \ldots , 5\), which are considered as the alternatives. To evaluate them, the person hires an investment expert who evaluates these companies under a set of six criteria, namely, \(\mathcal {G}_1\): “technical ability”, \(\mathcal {G}_2\): “expected benefit”, \(\mathcal {G}_3\): “competitive power on the market”, \(\mathcal {G}_4\): “ability to bear risk”, \(\mathcal {G}_5\): “management capability”, and \(\mathcal {G}_6\): “organizational culture”. The rating values of each alternative are listed in Table 6 using PFNs.

Then, the steps of the method are executed as follows:

  1. Step 1:

    By Eq. (12) with \(k=1\), we get

    $$\begin{aligned} \omega = (0.13, 0.19, 0.16, 0.16, 0.18,0.18). \end{aligned}$$
  2. Step 2:

    As \(\mathcal {G}_4 \in \mathcal {F}_1\), while the others belong to \(\mathcal {F}_2\), the ideal values are \(\mathcal {P}_b(1) = \mathcal {P}_b(2) = \mathcal {P}_b(3) = \mathcal {P}_b(5) = \mathcal {P}_b(6) = (1, 0)\) and \(\mathcal {P}_b(4) = (0,1)\).

  3. Step 3:

    Utilizing the similarity measure \(\mathcal {S}_1\), we compute the measurement values as \(\mathcal {S}_1(\mathcal {P}_1,\mathcal {P}_b)=0.60576\), \(\mathcal {S}_1(\mathcal {P}_2,\mathcal {P}_b)=0.62673\), \(\mathcal {S}_1(\mathcal {P}_3,\mathcal {P}_b)=0.6468\), \(\mathcal {S}_1(\mathcal {P}_4,\mathcal {P}_b)=0.53046\), and \(\mathcal {S}_1(\mathcal {P}_5,\mathcal {P}_b)=0.59481\).

  4. Step 4:

    Since the measurement value of alternative \(\mathcal {P}_3\) is the highest, the best company is \(\mathcal {P}_3\). The overall ordering is \(\mathcal {P}_3\succ \mathcal {P}_2\succ \mathcal {P}_1\succ \mathcal {P}_5\succ \mathcal {P}_4\).

Furthermore, using the existing measures [35, 36] and the other proposed SMs \((\mathcal {S}_0, \mathcal {S}_p)\), we rank the given alternatives in Table 7. This table shows the consistency of the proposed measures, as the finest alternative remains the same across all the methods.


Conclusion

In this paper, we introduced some new SMs between PFSs based on exponential functions of the MDs and NMDs. Their desirable properties and features are studied in detail. To show the efficiency of the proposed SMs, we give some counter-intuitive examples in which the existing measures fail under certain cases, while the proposed ones classify the objects correctly. Later, we solve pattern recognition problems as well as DMPs using the proposed SMs. The numerical results are compared with those of the existing measures to show their consistency. The comparison reveals that the solution obtained by the proposed method is a better compromise than those of the existing ones and is conservative in nature. In the future, we shall extend the proposed measures to different uncertain and fuzzy environments [50,51,52,53,54].