Introduction

The connection between precision and uncertainty has perplexed humanity for centuries. Lukasiewicz [1], a Polish logician and philosopher, gave the first formulation of multi-valued logic, which later led to the study of possibility theory. The first notion of a simple fuzzy set and fundamental ideas on fuzzy set operations were proposed by Black [2]. To overcome the problem of handling uncertain and imprecise information in decision making, Zadeh [3] presented the concept of the fuzzy set, in which the membership degree of each element is a single value in the interval [0, 1]. Fuzzy set theory has since been applied in a wide range of fields, including medical diagnosis, engineering, economics, image processing and object recognition (Phuong et al. [4]; Shahzadi et al. [5]; Tobias and Seara [6]).

The general fuzzy set was extended to the intuitionistic fuzzy set (IFS) by Atanassov [7]. The IFS model has a degree of membership \(\mu_{A} \left( {x_{i} } \right) \in \left[ {0,1} \right]\) and a degree of non-membership \(\nu_{A} \left( {x_{i} } \right) \in \left[ {0,1} \right]\), such that \(\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right) \le 1\) for each \(x \in X.\) The IFS model clearly extends the classical fuzzy set model; however, it is often difficult to apply in real-life decision making situations, as it can handle only incomplete and vague information, but not indeterminate or inconsistent information. Hence, Smarandache [8] proposed the idea of the neutrosophic set (NS) which, from a philosophical point of view, deals with the imprecise, indeterminate and inconsistent information that often exists in real-life decision making problems more effectively than the classical fuzzy set model [3] and the IFS model [7]. The neutrosophic set [9] is characterized by a truth function \(T_{A} \left( x \right)\), an indeterminacy function \(I_{A} \left( x \right)\) and a falsity function \(F_{A} \left( x \right)\), where these three functions are completely independent. The functions \(T_{A} \left( x \right), I_{A} \left( x \right)\) and \(F_{A} \left( x \right)\) in \(X\) assume real values in the standard or non-standard subsets of \(\left] {{}^{ - } 0,1^{ + } } \right[\), such that \(T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right):X \to \left] {{}^{ - } 0,1^{ + } } \right[\). Since its introduction, many extensions of the neutrosophic set have been proposed by scholars, including the single-valued neutrosophic set (SVNS) by Wang et al. [10], the interval neutrosophic set by Wang et al. [11], the simplified neutrosophic set by Peng et al. 
[12], the neutrosophic soft set by Maji [13], the single-valued neutrosophic linguistic set by Ye [14], the simplified neutrosophic linguistic set by Tian et al. [15], the multi-valued neutrosophic set by Wang and Li [16], the rough neutrosophic set (RNS) by Broumi et al. [17], the neutrosophic cubic set by Jun et al. [18], the complex neutrosophic set by Ali and Smarandache [19], and the complex neutrosophic cubic set by Gulistan and Khan [20]. Additionally, a large number of aggregation operators have been presented, based on various techniques, including algebraic methods, the Bonferroni mean (Bonferroni [21]), the power average (Yager [22]), the exponential operational law, the prioritized average (Yager [23]) and the operations of the Dombi T-conorm and T-norm (Dombi [24]). All these aggregation operators have been proposed for analyzing multi-criteria decision making (MCDM) problems.

In this paper, we focus on the single-valued neutrosophic set (SVNS) which was presented by Wang et al. [10]. Since its inception, a lot of scholars have actively contributed to the development of this variation of the NS. In addition, a lot of scholars have applied the SVNS in various application fields of decision making. For example, Zavadskas et al. [25] presented a new extension of the weighted aggregated sum product assessment (WASPAS) decision making method (namely WASPAS-SVNS) to solve the problem of site selection for waste incineration plants. Vafadarnikjoo et al. [26] applied the fuzzy Delphi method in combination with SVNSs for assessing consumers’ motivations to purchase a remanufactured product. Selvachandran et al. [27] presented a modified Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) with the maximizing deviation method based on the SVNS model and applied this technique to determine objective attribute weights in a supplier selection problem. Broumi et al. [28] analyzed the strength of Wi-Fi connections using SVNSs. Biswas et al. [29] proposed a non-linear programming approach based on the TOPSIS method for solving multi-criteria group decision making (MCGDM) problems under the SVNS environment. Abdel-Basset et al. [30] used a neutrosophic approach to minimize the cost of project scheduling under uncertain environmental conditions by assuming linear time–cost trade-offs. Abdel-Basset and Mohamed [31] proposed a combination of the plithogenic multi-criteria decision making approach based on TOPSIS and the criteria importance through inter-criteria correlation (CRITIC) method to evaluate the sustainability of a supply chain risk management system. Abdel-Basset et al. [32] considered the resource leveling problem in construction projects using neutrosophic sets with the aim of overcoming the ambiguity surrounding the project scheduling decision making process. 
Besides these, many other scientific studies related to various extensions of the neutrosophic set model have also been published over the years. Akram et al. [33] developed an approach based on the maximizing deviation method and TOPSIS for solving MCDM problems under the assumptions of a simplified neutrosophic hesitant fuzzy environment. Zhan et al. [34] proposed an efficient algorithm to solve MCDM problems based on bipolar neutrosophic information. Aslam [35] introduced a novel neutrosophic analysis of variance, whereas Sumathi and Sweety [36] suggested a new form of fuzzy differential equation using trapezoid neutrosophic numbers.

Moreover, a lot of information measures for the SVNS model have been proposed over the years, such as similarity measures, distance measures, entropy measures, inclusion measures and also correlation coefficients. Some of the most important research works pertaining to similarity and distance measures for SVNSs are due to Broumi and Smarandache [37], Ye [38,39,40,41,42,43,44], Ye and Zhang [45], Majumdar and Samanta [46], Mondal and Pramanik [47], Ye and Fu [48], Liu and Luo [49], Huang [50], Mandal and Basu [51], Sahin et al. [52], Pramanik et al. [53], Garg and Nancy [54], Fu and Ye [55], Wu et al. [56], Cui and Ye [57], Mondal et al. [58, 59], Liu [60], Liu et al. [61], Ren et al. [62], Sun et al. [63] and Peng and Smarandache [64]. Research related to entropy and inclusion measures for the SVNS model can be found in Majumdar and Samanta [46], Aydoğdu [65], Garg and Nancy [66], Wu et al. [56], Cui and Ye [67], Aydoğdu and Şahin [68] and Sinha and Majumdar [69]. Lastly, correlation coefficients for SVNSs were proposed by Ye [38, 70, 71] and Hanafy et al. [72].

Since the first formulas expressing the similarity measure between two fuzzy sets were introduced by Bonissone [73], Eshragh and Mamdani [74] and Lee-Kwang et al. [75], scholars and researchers have continuously proposed new similarity measures for fuzzy based models, including the SVNS model, and applied these measures to various practical problems related to MCDM (Ye [41]; Ye and Zhang [45]; Pramanik et al. [53]; Mondal and Pramanik [47]; Aydoğdu [65]; Mandal and Basu [76]), pattern recognition (Sahin et al. [52]), medical diagnosis (Shahzadi, Akram and Saeid [5]; Ye and Fu [48]; Abdel-Basset et al. [77]), clustering analysis (Ye [41, 43]), image processing (Guo et al. [78, 79]; Guo and Şengür [80]; Qi et al. [81]) and minimum spanning tree problems (Mandal and Basu [51]). The existing similarity measures for SVNSs have been found to suffer from several shortcomings, such as: (1) failing to differentiate between positive and negative differences over the sets being considered, (2) facing the division by zero problem, and (3) producing results that are counter-intuitive with respect to the concept of similarity measures and/or incompatible with the axiomatic definition of similarity measures for SVNSs. These shortcomings were pointed out by Peng and Smarandache [64], who analyzed the problems inherent in many of the existing similarity measures.

In view of the above, the objective of this paper is to propose new distance and similarity measures for the SVNS model that overcome the shortcomings of existing measures. The paper presents a detailed comparative analysis between the proposed similarity measures and other existing similarity measures for SVNSs. The comparative analysis applies all these measures in different cases with the aim of demonstrating the effectiveness, feasibility and superiority of the proposed formulas compared to existing ones. The newly proposed measures are then applied to MCDM problems related to pattern recognition and medical diagnosis.

The rest of this article is organized as follows. Section “Preliminaries” provides a brief overview of some of the most important concepts related to SVNSs. In Sect. “New distance and similarity measures for SVNSs”, several new distance measures and similarity measures for the SVNS model are introduced and some important algebraic properties of these measures are presented and verified. In Sect. “Comparative studies”, a comparative analysis is given between the proposed similarity measures and other existing similarity measures presented in the literature. In Sect. “Applications of the proposed similarity measures”, the proposed similarity measures are applied to two MCDM problems, related respectively to pattern recognition and medical diagnosis, using numerical examples aiming to prove the feasibility and effectiveness of the proposed similarity measures. The results obtained are then compared to the results obtained using the existing similarity measures, as well as analyzed and discussed. Concluding remarks and directions of future research are presented in Sect. “Conclusions” followed by the acknowledgements and the list of references.

Preliminaries

Definition 2.1

[8]. A neutrosophic set \(A\) in a universal set \(X\) is characterized by a truth-membership function \(T_{A} \left( x \right),\) an indeterminacy-membership function \(I_{A} \left( x \right)\) and a falsity-membership function \(F_{A} \left( x \right).\) These three functions assume values in real standard or non-standard subsets of \(\left] {{}^{ - } 0,1^{ + } } \right[,\) that is, \(T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right):X \to \left] {{}^{ - } 0,1^{ + } } \right[.\) Thus, there is no restriction on the sum of \(T_{A} \left( x \right), I_{A} \left( x \right)\) and \(F_{A} \left( x \right)\), so that \({}^{ - } 0 \le \sup T_{A} \left( x \right) + \sup I_{A} \left( x \right) + \sup F_{A} \left( x \right) \le 3^{ + } .\)

Smarandache [8] introduced the neutrosophic set from a philosophical point of view as an extension of the fuzzy set, the IFS, and the interval-valued IFS. Although the concept was a novel one, neutrosophic sets proved difficult to apply in practical problems, mainly because the values of the membership functions lie in the non-standard interval \(\left] {{}^{ - } 0,1^{ + } } \right[\). Datasets in many real-life situations are often imprecise, uncertain and/or incomplete. Any discrepancies or deficiencies in the datasets used will have an adverse effect on the decision making process and, by extension, on the results that are generated. Hence, it is often pertinent to have a robust framework to effectively represent all types of imprecise, uncertain and incomplete information. Fuzzy set theory was introduced as a good alternative for dealing with imprecise, inconsistent and incomplete information, as classical methods, such as set theory and probability theory, were unable to deal with such deficiencies in information. However, fuzzy set theory was found to be less than ideal for this purpose, as it only takes into consideration the truth component of any information and cannot handle the falsity and indeterminacy components. As fuzzy set theory evolved into other fuzzy based models, neutrosophic sets were introduced by Smarandache [8] as an efficient mathematical model for imprecise, inconsistent and incomplete information. The SVNS model, which was conceptualized by Wang et al. [10] as an extension of the neutrosophic set model, has proven to be effective in handling such information in a systematic manner due to its ability to consider the degrees of truth, falsity and indeterminacy of each piece of information. 
In addition, the structure of the SVNS model, whose membership functions assume values in the standard interval [0, 1], makes it compatible with other fuzzy based models, and thus more convenient to apply to real-life decision making problems with actual datasets. These considerations motivated the choice of the SVNS model as the object of study in this paper. The formal definition of the SVNS is presented below.

Definition 2.2

[10]. Let \(X\) be a universal set. An SVNS \(A\) in \(X\) is characterized by a truth-membership function \(T_{A} \left( x \right),\) an indeterminacy-membership function \(I_{A} \left( x \right)\) and a falsity-membership function \(F_{A} \left( x \right).\) An SVNS \(A\) can be denoted by \(A = \left\{ {x,T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right)\left| {x \in X} \right.} \right\}\) where \(T_{A} \left( x \right), I_{A} \left( x \right), F_{A} \left( x \right) \in \left[ {0,1} \right]\) for each \(x\) in \(X\). The sum of \(T_{A} \left( x \right),I_{A} \left( x \right)\) and \(F_{A} \left( x \right)\) satisfies the condition \(0 \le T_{A} \left( x \right) + I_{A} \left( x \right) + F_{A} \left( x \right) \le 3\). For an SVNS \(A\) in \(X\), the triplet \(\left( {T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right)} \right)\) is called a single-valued neutrosophic number (SVNN), which is the fundamental element of an SVNS.
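To make Definition 2.2 concrete, an SVNS over a finite universe can be modelled directly in code. The sketch below is a minimal illustration (the dict-based representation and the helper name `is_valid_svnn` are our own, not from the paper):

```python
# Minimal sketch of an SVNS per Definition 2.2 (illustrative representation):
# each element of the universe X maps to an SVNN triplet (T, I, F) with
# T, I, F in [0, 1] and T + I + F <= 3.

def is_valid_svnn(t, i, f, eps=1e-9):
    """Return True if (t, i, f) is a valid single-valued neutrosophic number."""
    in_unit_interval = all(-eps <= v <= 1 + eps for v in (t, i, f))
    return in_unit_interval and t + i + f <= 3 + eps

# An SVNS A over the universe X = {x1, x2}.
A = {"x1": (0.7, 0.2, 0.1), "x2": (0.4, 0.5, 0.6)}
assert all(is_valid_svnn(*svnn) for svnn in A.values())
```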

Definition 2.3

[10]. For any two given SVNSs \(A\) and \(B\), the union, intersection, equality, complement and inclusion of \(A\) and \(B\) are defined as shown below:

  1.

    Complement: \(A^{c} = \left\{ {\left\langle {x,F_{A} \left( x \right),1 - I_{A} \left( x \right),T_{A} \left( x \right)} \right\rangle \left| {x \in X} \right.} \right\}\).

  2.

    Inclusion: \(A \subseteq B\) if and only if \(T_{A} \left( x \right) \le T_{B} \left( x \right),I_{A} \left( x \right) \ge I_{B} \left( x \right),F_{A} \left( x \right) \ge F_{B} \left( x \right)\) for any \(x\) in \(X\).

  3.

    Equality: \(A = B\) if and only if \(A \subseteq B{ }\) and \(B \subseteq A\).

  4.

    Union: \(A \cup B = \{ \langle x,T_{A} \left( x \right) \vee T_{B} \left( x \right),I_{A} \left( x \right) \wedge I_{B} \left( x \right),F_{A} \left( x \right) \wedge F_{B} \left( x \right) \rangle \left| {x \in X} \right. \}\).

  5.

    Intersection: \(A \cap B = \{ \langle x,T_{A} \left( x \right) \wedge T_{B} \left( x \right),I_{A} \left( x \right) \vee I_{B} \left( x \right),F_{A} \left( x \right) \vee F_{B} \left( x \right) \rangle \left| {x \in X} \right. \}\).
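These operations translate directly into code. The sketch below (an illustration using a hypothetical dict-based representation, with our own function names) implements the complement, inclusion, union and intersection of Definition 2.3 for SVNSs over a common finite universe:

```python
# Illustrative implementations of the operations in Definition 2.3 for
# SVNSs stored as {element: (T, I, F)} dicts over the same universe.

def complement(A):
    # A^c = <F, 1 - I, T>
    return {x: (f, 1 - i, t) for x, (t, i, f) in A.items()}

def included(A, B):
    # A ⊆ B  iff  T_A <= T_B, I_A >= I_B and F_A >= F_B for every x
    return all(t_a <= B[x][0] and i_a >= B[x][1] and f_a >= B[x][2]
               for x, (t_a, i_a, f_a) in A.items())

def union(A, B):
    # A ∪ B = <max T, min I, min F>
    return {x: (max(A[x][0], B[x][0]), min(A[x][1], B[x][1]),
                min(A[x][2], B[x][2])) for x in A}

def intersection(A, B):
    # A ∩ B = <min T, max I, max F>
    return {x: (min(A[x][0], B[x][0]), max(A[x][1], B[x][1]),
                max(A[x][2], B[x][2])) for x in A}

A = {"x": (0.3, 0.6, 0.7)}
B = {"x": (0.5, 0.4, 0.2)}
assert included(A, B)                            # A ⊆ B, so
assert union(A, B) == B and intersection(A, B) == A
```

Note that when \(A \subseteq B\), the definitions give \(A \cup B = B\) and \(A \cap B = A\), as the final assertions check.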

Definition 2.4

[82]. For any two given SVNSs \(A\) and \(B\), the subtraction and division operation of \(A\) and \(B\) are defined as shown below:

  1.

    \(A \ominus B= \left\{ {\left. {\left\langle {x,\frac{{T_{A} \left( x \right) - T_{B} \left( x \right)}}{{1 - T_{B} \left( x \right)}},\frac{{I_{A} \left( x \right)}}{{I_{B} \left( x \right)}},\frac{{F_{A} \left( x \right)}}{{F_{B} \left( x \right)}}} \right\rangle } \right|x \in X} \right\}\), which is valid under the conditions \(A \ge B,T_{B} \left( x \right) \ne 1,I_{B} \left( x \right) \ne 0,F_{B} \left( x \right) \ne 0.\)

  2.

    A ⊘ B\(= \left\{ {\left. {\left\langle {x,\frac{{T_{A} \left( x \right)}}{{T_{B} \left( x \right)}},\frac{{I_{A} \left( x \right) - I_{B} \left( x \right)}}{{1 - I_{B} \left( x \right)}},\frac{{F_{A} \left( x \right) - F_{B} \left( x \right)}}{{1 - F_{B} \left( x \right)}}} \right\rangle } \right|x \in X} \right\}\), which is valid under the conditions \(B \ge A,T_{B} \left( x \right) \ne 0,I_{B} \left( x \right) \ne 1,F_{B} \left( x \right) \ne 1. \)
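Under the stated validity conditions, both operations can be computed element-wise. A brief illustrative sketch (function names are our own; the caller is assumed to have checked the validity conditions of Definition 2.4):

```python
# Illustrative element-wise ⊖ and ⊘ from Definition 2.4 for SVNSs stored
# as {element: (T, I, F)} dicts; validity conditions are assumed to hold.

def svns_subtract(A, B):
    # A ⊖ B, valid when A >= B, T_B != 1, I_B != 0, F_B != 0
    return {x: ((A[x][0] - B[x][0]) / (1 - B[x][0]),
                A[x][1] / B[x][1],
                A[x][2] / B[x][2]) for x in A}

def svns_divide(A, B):
    # A ⊘ B, valid when B >= A, T_B != 0, I_B != 1, F_B != 1
    return {x: (A[x][0] / B[x][0],
                (A[x][1] - B[x][1]) / (1 - B[x][1]),
                (A[x][2] - B[x][2]) / (1 - B[x][2])) for x in A}

A = {"x": (0.8, 0.2, 0.1)}
B = {"x": (0.5, 0.4, 0.5)}   # A >= B component-wise, so A ⊖ B is valid
t, i, f = svns_subtract(A, B)["x"]   # ((0.8-0.5)/0.5, 0.2/0.4, 0.1/0.5)
assert abs(t - 0.6) < 1e-9 and abs(i - 0.5) < 1e-9 and abs(f - 0.2) < 1e-9
```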

Definition 2.5

[83]. For any two given SVNSs \(A \) and \(B,\) the addition and multiplication operation of \(A\) and \(B\) are defined as shown below:

  1.
    $$ \begin{aligned} A \oplus B & = \{ \langle x,T_{A} \left( x \right) + T_{B} \left( x \right) - T_{A} \left( x \right)T_{B} \left( x \right), \\ &\quad I_{A} \left( x \right)I_{B} \left( x \right),F_{A} \left( x \right)F_{B} \left( x \right) \rangle \left| {x \in X} \right. \}.\end{aligned} $$
  2.
    $$ \begin{aligned} A \otimes B & = \{ \langle x,T_{A} \left( x \right)T_{B} \left( x \right),I_{A} \left( x \right) + I_{B} \left( x \right) \\ &\quad - I_{A} \left( x \right)I_{B} \left( x \right),F_{A} \left( x \right) + F_{B} \left( x \right) \\ &\quad - F_{A} \left( x \right)F_{B} \left( x \right)\rangle \left| {x \in X} \right. \}. \end{aligned} $$
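The operations \(A \oplus B\) and \(A \otimes B\) above are the component-wise probabilistic sum and product; an illustrative sketch (dict representation and names are our own):

```python
# Illustrative element-wise ⊕ and ⊗ from Definition 2.5 for SVNSs
# stored as {element: (T, I, F)} dicts.

def svns_add(A, B):
    # A ⊕ B: probabilistic sum on T, product on I and F
    return {x: (A[x][0] + B[x][0] - A[x][0] * B[x][0],
                A[x][1] * B[x][1],
                A[x][2] * B[x][2]) for x in A}

def svns_multiply(A, B):
    # A ⊗ B: product on T, probabilistic sum on I and F
    return {x: (A[x][0] * B[x][0],
                A[x][1] + B[x][1] - A[x][1] * B[x][1],
                A[x][2] + B[x][2] - A[x][2] * B[x][2]) for x in A}

A = {"x": (0.5, 0.4, 0.2)}
B = {"x": (0.4, 0.5, 0.5)}
t, i, f = svns_add(A, B)["x"]   # (0.5+0.4-0.5*0.4, 0.4*0.5, 0.2*0.5)
assert abs(t - 0.7) < 1e-9 and abs(i - 0.2) < 1e-9 and abs(f - 0.1) < 1e-9
```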

Definition 2.6

Let \(A\) be an SVNS over a universe \(U.\)

  1.

    \(A\) is said to be an absolute SVNS, denoted by \(\tilde{A},\) if \(T_{{\tilde{A}}} \left( x \right) = 1, I_{{\tilde{A}}} \left( x \right) = 0\) and \(F_{{\tilde{A}}} \left( x \right) = 0,\) for all \(x \in U.\)

  2.

    \(A\) is said to be an empty or null SVNS, denoted by \(\phi_{A} ,\) if \(T_{{\phi_{A} }} \left( x \right) = 0, I_{{\phi_{A} }} \left( x \right) = 0\) and \(F_{{\phi_{A} }} \left( x \right) = 1,\) for all \(x \in U.\)
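The two special SVNSs of Definition 2.6 can be written as simple constructors (illustrative code, using the same hypothetical dict representation):

```python
# Illustrative constructors for the absolute and null SVNSs of
# Definition 2.6 over a finite universe U.

def absolute_svns(U):
    # T = 1, I = 0, F = 0 for every element
    return {x: (1.0, 0.0, 0.0) for x in U}

def null_svns(U):
    # T = 0, I = 0, F = 1 for every element
    return {x: (0.0, 0.0, 1.0) for x in U}
```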

New distance and similarity measures for SVNSs

In this section, we introduce several new formulas for the distance and similarity measures of SVNSs based on the axiomatic definition of the distance and similarity between SVNSs.

Distance measures for single-valued neutrosophic sets

Definition 1

[37] A real function \(D:\Phi \left( X \right) \times \Phi \left( X \right) \to \left[ {0,1} \right]\) is called a distance measure if \(D\) satisfies the following axioms for all \(A,B,C \in \Phi \left( X \right)\), where \(\Phi \left( X \right)\) denotes the set of all SVNSs over \(X\):

  (D1)
    $$0 \le D\left( {A,B} \right) \le 1.$$
  (D2)
    $$D\left( {A,B} \right) = 0\, {\text{iff }}A = B.$$
  (D3)
    $$D\left( {A,B} \right) = D\left( {B,A} \right).$$
  (D4)

    If \(A \subseteq B \subseteq C\), then \(D\left( {A,C} \right) \ge D\left( {A,B} \right)\) and \(D\left( {A,C} \right) \ge D\left( {B,C} \right).\)

Let \(A = \left\langle {x_{i} ,T_{A} \left( {x_{i} } \right),I_{A} \left( {x_{i} } \right),F_{A} \left( {x_{i} } \right)|x_{i} \in X} \right\rangle\) and \(B = \left\langle {x_{i} ,T_{B} \left( {x_{i} } \right),I_{B} \left( {x_{i} } \right),F_{B} \left( {x_{i} } \right)|x_{i} \in X} \right\rangle , i = 1,2, \ldots ,n,\) be two SVNSs over the universe \(X\).

Theorem 1

Let \(A\) and \(B\) be two SVNSs. Then each of the functions \(D_{i} \left( {A,B} \right)\), \(i = 1,2, \ldots ,11\), defined below is a distance measure between SVNSs.

  1.
    $$\begin{array}{ll} D_{1} \left( {A,B} \right) = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \Big( \left| T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right) \right| \\ \quad + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \Big)\end{array}$$
  2.
    $$\begin{aligned} & D_{2} \left( {A,B} \right) = \frac{1}{3\left| X \right|} \mathop \sum \limits_{x \in X} \Big| \left( {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right) \\ &\quad - \left( {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right) - \left( {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right) \Big|\end{aligned} $$
  3.
    $$\begin{aligned}& D_{3} \left( {A,B} \right) = \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \Big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right|\\ &\quad \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \Big) \end{aligned}$$
  4.
    $$\begin{aligned} & D_{4} \left( {A,B} \right) = \frac{2}{\left| X \right|}\\ &\quad \mathop \sum \limits_{x \in X} \left\{ {\frac{{\left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}{{1 + \left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}} \right\}\end{aligned}$$
  5.
    $$\begin{aligned} & D_{5} \left( {A,B} \right) \\ &\quad = \frac{{2\mathop \sum \nolimits_{x \in X} \left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {1 + \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}} \end{aligned} $$
  6.
    $$\begin{aligned} D_{6} \left( {A,B} \right) & \;\; = 1 - \alpha \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right)}} \\ &\quad - \beta \frac{{\mathop \sum \nolimits_{x \in X} \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right)}} \\ &\quad - \gamma \frac{{\mathop \sum \nolimits_{x \in X} \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}, \\ & \quad \;\;\alpha + \beta + \gamma = 1,\alpha ,\beta ,\gamma \in \left[ {0,1} \right] \\ \end{aligned}$$
  7.
    $$\begin{aligned} D_{7} \left( {A,B} \right) & = 1 - \frac{\alpha }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right)}} \\ &\quad - \frac{\beta }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right)}}{{\left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right)}} \\ &\quad - \frac{\gamma }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}, \\ & \;\;\;\alpha + \beta + \gamma = 1,\alpha ,\beta ,\gamma \in \left[ {0,1} \right] \\ \end{aligned}$$
  8.
    $$ \begin{aligned} & D_{8} \left( {A,B} \right) = 1 - \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \\ &\quad \left\{ {\frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}} \right\} \end{aligned} $$
  9.
    $$\begin{aligned} & D_{9} \left( {A,B} \right) = 1 \\ &\quad - \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}} \end{aligned} $$
  10.

    \(\begin{aligned}& D_{10} \left( {A,B} \right) = 1 - \frac{1}{\left| X \right|}\\ & \mathop \sum \limits_{x \in X} \left\{ {\frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - F_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \vee \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \vee \left( {1 - F_{B}^{2} \left( x \right)} \right)}} } \right\}\end{aligned}\)

  11.

    \(\begin{aligned}& D_{11} \left( {A,B} \right) = 1\\ & - \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \vee \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \vee \left( {1 - F_{B}^{2} \left( x \right)} \right)}}\end{aligned}\)

Proof

For \(D_{i} \left( {A,B} \right)\left( {i = 1,2, \ldots ,11} \right)\) to qualify as a valid distance measure for SVNSs, it must satisfy conditions \(\left( {D1} \right)\) to \(\left( {D4} \right)\) of Definition 1. Condition (D1) is straightforward to prove, so we prove only conditions (D2) to (D4) for the distance measure \(D_{1} \left( {A,B} \right)\); the proofs for the remaining formulas \(D_{2} \left( {A,B} \right)\) to \(D_{11} \left( {A,B} \right)\) proceed in a similar manner.

(D2)\( \left( \Rightarrow \right)\) If \(D_{1} \left( {A,B} \right) = 0,\)

then \(\frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big) = 0\)

\(\therefore \mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big) = 0\)

which, since every summand is non-negative, holds if and only if \(T_{A}^{2} \left( x \right) = T_{B}^{2} \left( x \right), I_{A}^{2} \left( x \right) = I_{B}^{2} \left( x \right),F_{A}^{2} \left( x \right) = F_{B}^{2} \left( x \right)\) for all \(x \in X\).

i.e., \(T_{A} \left( x \right) = T_{B} \left( x \right), I_{A} \left( x \right) = I_{B} \left( x \right),F_{A} \left( x \right) = F_{B} \left( x \right)\).

i.e., \(A = B\).

\(\left( \Leftarrow \right)\) If \(A = B\), then \(T_{A} \left( x \right) = T_{B} \left( x \right), I_{A} \left( x \right) = I_{B} \left( x \right),F_{A} \left( x \right) = F_{B} \left( x \right), \forall x \in X. \)

\(\begin{aligned} \therefore D_{1} \left( {A,B} \right) & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{A}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {F_{A}^{2} \left( x \right) - F_{A}^{2} \left( x \right)} \right| \right) \\ & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{B}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{B}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {F_{B}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = 0. \\ \end{aligned}\)

(D3)

$$\begin{aligned} D_{1} \left( {A,B} \right) & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right); \\ & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{B}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{B}^{2} \left( x \right) - I_{A}^{2} \left( x \right)} \right| + \left| {F_{B}^{2} \left( x \right) - F_{A}^{2} \left( x \right)} \right| \right) \\ & = D_{1} \left( {B,A} \right). \\ \end{aligned}$$

(D4) If \(A \subseteq B \subseteq C\), then for every \(x \in X\) we have:

\(T_{A} \left( x \right) \le T_{B} \left( x \right) \le T_{C} \left( x \right),\;\;I_{A} \left( x \right) \ge I_{B} \left( x \right) \ge I_{C} \left( x \right),\;\;F_{A} \left( x \right) \ge F_{B} \left( x \right) \ge F_{C} \left( x \right).\)

Since all membership values lie in \(\left[ {0,1} \right]\), squaring preserves these orderings:

\(T_{A}^{2} \left( x \right) \le T_{B}^{2} \left( x \right) \le T_{C}^{2} \left( x \right),\;\;I_{A}^{2} \left( x \right) \ge I_{B}^{2} \left( x \right) \ge I_{C}^{2} \left( x \right),\;\;F_{A}^{2} \left( x \right) \ge F_{B}^{2} \left( x \right) \ge F_{C}^{2} \left( x \right).\)

Therefore:

\(\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| = T_{B}^{2} \left( x \right) - T_{A}^{2} \left( x \right) \le T_{C}^{2} \left( x \right) - T_{A}^{2} \left( x \right) = \left| {T_{A}^{2} \left( x \right) - T_{C}^{2} \left( x \right)} \right|,\)

and similarly \(\left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \le \left| {I_{A}^{2} \left( x \right) - I_{C}^{2} \left( x \right)} \right|\) and \(\left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \le \left| {F_{A}^{2} \left( x \right) - F_{C}^{2} \left( x \right)} \right|.\) Summing over all \(x \in X\) yields \(D_{1} \left( {A,B} \right) \le D_{1} \left( {A,C} \right)\); the inequality \(D_{1} \left( {B,C} \right) \le D_{1} \left( {A,C} \right)\) follows analogously. This proves (D4).
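As a concrete illustration (our own sketch, not part of the paper's formal development), \(D_{1}\) and \(D_{3}\) can be evaluated directly from their definitions, and axiom (D4) can be spot-checked numerically on an inclusion chain \(A \subseteq B \subseteq C\); the remaining measures follow the same pattern:

```python
# Illustrative evaluation of the distance measures D1 and D3 of Theorem 1
# for SVNSs stored as {element: (T, I, F)} dicts, plus a numerical spot
# check of axiom (D4).

def d1(A, B):
    total = sum(abs(A[x][0]**2 - B[x][0]**2)
                + abs(A[x][1]**2 - B[x][1]**2)
                + abs(A[x][2]**2 - B[x][2]**2) for x in A)
    return total / (3 * len(A))

def d3(A, B):
    total = sum(max(abs(A[x][0]**2 - B[x][0]**2),
                    abs(A[x][1]**2 - B[x][1]**2),
                    abs(A[x][2]**2 - B[x][2]**2)) for x in A)
    return total / len(A)

A = {"x": (0.7, 0.2, 0.1)}
B = {"x": (0.5, 0.4, 0.3)}
# d1 = (|0.49 - 0.25| + |0.04 - 0.16| + |0.01 - 0.09|)/3 = 0.44/3
assert abs(d1(A, B) - 0.44 / 3) < 1e-9
assert abs(d3(A, B) - 0.24) < 1e-9

# Spot check of (D4), with P ⊆ Q ⊆ R per the inclusion of Definition 2.3.
P = {"x": (0.2, 0.7, 0.8)}
Q = {"x": (0.5, 0.5, 0.5)}
R = {"x": (0.9, 0.1, 0.2)}
assert d1(P, R) >= d1(P, Q) and d1(P, R) >= d1(Q, R)
```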

Theorem 2

For \(i = 1,2, \ldots ,11,\) if \(\alpha = \beta = \gamma = \frac{1}{3},\) the following hold:

  (i)

    \(D_{i} \left( {A,B^{c} } \right) = D_{i} \left( {A^{c} ,B} \right), i \ne 11,12\)

  (ii)
    $$D_{i} \left( {A,B} \right) = D_{i} \left( {A \cap B,A \cup B} \right)$$
  (iii)
    $$D_{i} \left( {A,A \cap B} \right) = D_{i} \left( {B,A \cup B} \right)$$
  (iv)
    $$D_{i} \left( {A,A \cup B} \right) = D_{i} \left( {B,A \cap B} \right)$$

Proof

(i) Let \(A = \left( {T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right)} \right), A^{c} = \left( {F_{A} \left( x \right),1 - I_{A} \left( x \right),T_{A} \left( x \right)} \right).\)

For \( \begin{aligned} D_{1} \left( {A,B} \right) & = \frac{1}{{3\left| X \right|}}\sum\limits_{{{\text{x}} \in {\text{X}}}} {\left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right|} \right.} \\ & \;\;\;\left. { + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right), \\ \end{aligned} \) the following hold:

$$\begin{aligned} D_{1} \left( {A,B^{c} } \right) & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - \left( {1 - I_{B}^{2} \left( x \right)} \right)} \right| + \left| {F_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right) \\ & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left(\left| {T_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) + I_{B}^{2} \left( x \right) - 1} \right| + \left| {F_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right|\right) \\ & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {F_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {1 - I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {T_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = D_{1} \left( {A^{c} ,B} \right). \\ \end{aligned}$$

(ii)

$$\begin{aligned} & D_{1} \left( {A \cap B,A \cup B} \right) \\ &\quad = \frac{1}{3\left| X \right|}\sum\nolimits_{x \in X} {\left( {\left| {\left( {\min \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} - \left( {\max \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} } \right|} \right.} \\ &\quad \;\; + \left| {\left( {\max \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} - \left( {\min \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} } \right| \\ & \quad \;\; + \;\left. {\left| {\left( {\max \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} - \left( {\min \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} } \right|} \right) \\ &\quad = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right) \\ &\quad = D_{1} \left( {A,B} \right). \\ \end{aligned}$$

(iii)

$$\begin{aligned} & D_{1} \left( {A,A \cap B} \right) \\ &\quad = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - \left( {\min \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} } \right| \right. \\ &\qquad \left. + \left| {I_{A}^{2} \left( x \right) - \left( {\max \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} } \right| + \left| {F_{A}^{2} \left( x \right) - \left( {\max \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} } \right| \right) \\ &\quad = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{B}^{2} \left( x \right) - \left( {\max \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} } \right| \right. \\ &\qquad \left. + \left| {I_{B}^{2} \left( x \right) - \left( {\min \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} } \right| + \left| {F_{B}^{2} \left( x \right) - \left( {\min \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} } \right| \right) \\ &\qquad \left( \because \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| = \left| {T_{B}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right| \right) \\ &\quad = D_{1} \left( {B,A \cup B} \right). \\ \end{aligned}$$

(iv) The proof is similar to that of (iii) and is therefore omitted.
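The identities of Theorem 2 can also be spot-checked numerically. The sketch below is ours (the list-of-triples representation and the function names are assumptions, not the paper's notation); the complement follows the squared-grade convention used in the proof of (i), i.e. \(T^{c} = F\), \((I^{c})^{2} = 1 - I^{2}\), \(F^{c} = T\).

```python
import math
import random

def d1(A, B):
    # D1: normalized sum of absolute differences of the squared grades,
    # where an SVNS is a list of (T, I, F) triples over X.
    total = sum(abs(ta**2 - tb**2) + abs(ia**2 - ib**2) + abs(fa**2 - fb**2)
                for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    return total / (3 * len(A))

def complement(A):
    # Squared-grade complement: T^c = F, (I^c)^2 = 1 - I^2, F^c = T.
    return [(f, math.sqrt(1 - i**2), t) for (t, i, f) in A]

def intersect(A, B):
    # A ∩ B: min of truth grades, max of indeterminacy and falsity grades.
    return [(min(ta, tb), max(ia, ib), max(fa, fb))
            for (ta, ia, fa), (tb, ib, fb) in zip(A, B)]

def union(A, B):
    # A ∪ B: max of truth grades, min of indeterminacy and falsity grades.
    return [(max(ta, tb), min(ia, ib), min(fa, fb))
            for (ta, ia, fa), (tb, ib, fb) in zip(A, B)]

rng = random.Random(0)
A = [tuple(rng.random() for _ in range(3)) for _ in range(4)]
B = [tuple(rng.random() for _ in range(3)) for _ in range(4)]

assert abs(d1(A, complement(B)) - d1(complement(A), B)) < 1e-9    # (i)
assert abs(d1(A, B) - d1(intersect(A, B), union(A, B))) < 1e-9    # (ii)
assert abs(d1(A, intersect(A, B)) - d1(B, union(A, B))) < 1e-9    # (iii)
assert abs(d1(A, union(A, B)) - d1(B, intersect(A, B))) < 1e-9    # (iv)
print("Theorem 2 identities hold for D1 on random data")
```

Such a randomized check does not replace the proofs above, but it is a quick sanity test when transcribing the formulas.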

New similarity measures for SVNSs

Definition 2

[37]. Let \(A\) and \(B\) be two SVNSs, and \(S\) is a mapping \(S:SVNSs\left( X \right) \times SVNSs\left( X \right) \to \left[ {0,1} \right]\). We call \(S\left( {A,B} \right)\) a similarity measure between \(A\) and \(B \) if it satisfies the following properties:

  1. (S1)
    $$0 \le S\left( {A,B} \right) \le 1.$$
  2. (S2)
    $$S\left( {A,B} \right) = 1 {\text{iff}}\, A = B.$$
  3. (S3)
    $$S\left( {A,B} \right) = S\left( {B,A} \right).$$
  4. (S4)
    $$S\left( {A,C} \right) \le S\left( {A,B} \right) {\text{and}}$$
    $$S\left( {A,C} \right) \le S\left( {B,C} \right) {\text{if}}$$
    $$A \subseteq B \subseteq C {\text{, when}} C \in SVNS\left( X \right).$$

Theorem 3

Let \(A\) and \(B\) be two SVNSs, then \(S_{i} \left( {A,B} \right)\), for \(i = 1,2, \ldots ,11\), is a similarity measure between SVNSs.

  1. (i)
    $$ \begin{aligned} & S_{1} \left( {A,B} \right) = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{{{\text{x}} \in {\text{X}}}} \left(\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|\right) \end{aligned}$$
  2. (ii)
    $$ \begin{aligned} & S_{2} \left( {A,B} \right) = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left| \left( {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right) \right. \\ &\quad \left. - \left( {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right) - \left( {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right) \right| \end{aligned} $$
  3. (iii)
    $$ \begin{aligned} & S_{3} \left( {A,B} \right) = 1 - \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \end{aligned} $$
  4. (iv)
    $$S_{4} \left( {A,B} \right) = \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \left\{ {\frac{{1 - \left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}{{1 + \left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}} \right\}$$
  5. (v)
    $$S_{5} \left( {A,B} \right) = \frac{{\mathop \sum \nolimits_{x \in X} \left( {1 - \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {1 + \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}$$
  6. (vi)
    $$\begin{aligned} S_{6} \left( {A,B} \right) & = \alpha \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right)}} + \beta \frac{{\mathop \sum \nolimits_{x \in X} \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right)}} + \gamma \frac{{\mathop \sum \nolimits_{x \in X} \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}, \\ { } & \;\;\;\alpha + \beta + \gamma = 1,\alpha ,\beta ,\gamma \in \left[ {0,1} \right] \\ \end{aligned}$$
  7. (vii)
    $$\begin{aligned} S_{7} \left( {A,B} \right) & = \frac{\alpha }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right)}} + \frac{\beta }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right)}}{{\left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right)}} + \frac{\gamma }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}, \\ { } & \;\;\alpha + \beta + \gamma = 1,\alpha ,\beta ,\gamma \in \left[ {0,1} \right]{ } \\ \end{aligned}$$
  8. (viii)
    $$S_{8} \left( {A,B} \right) = \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \left\{ {\frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}} \right\}$$
  9. (ix)
    $$S_{9} \left( {A,B} \right) = \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}$$
  10. (x)
    $$S_{10} \left( {A,B} \right) = \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \left\{ {\frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - F_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \vee \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \vee \left( {1 - F_{B}^{2} \left( x \right)} \right)}} } \right\}$$
  11. (xi)
    $$S_{11} \left( {A,B} \right) = \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \vee \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \vee \left( {1 - F_{B}^{2} \left( x \right)} \right)}}$$

Proof

In order for \(S_{i} \left( {A,B} \right)\left( {i = 1,2, \ldots ,11} \right)\) to be qualified as a practical similarity measure for SVNSs, it must satisfy the conditions \(\left( {S1} \right)\) to \(\left( {S4} \right)\), listed in Definition 2. It is straightforward to prove condition \(\left( {S1} \right)\) and therefore we only prove conditions \(\left( {S2} \right) \) to \( \left( {S4} \right)\). For the sake of brevity, we only present the proof for \(S_{1} \left( {A,B} \right)\). The proof for the other formulas can be generated in a similar manner.

(S2) For \(S_{1} \left( {A,B} \right) = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big),\) we have the following:

\(\left( \Rightarrow \right)\) If \(S_{1} \left( {A,B} \right) = 1,\)

then \(1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big) = 1\)

\(\therefore \mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big) = 0\)

which holds if and only if \(T_{A}^{2} \left( x \right) = T_{B}^{2} \left( x \right), I_{A}^{2} \left( x \right) = I_{B}^{2} \left( x \right),F_{A}^{2} \left( x \right) = F_{B}^{2} \left( x \right)\) for all \(x \in X\).

i.e., \(T_{A} \left( x \right) = T_{B} \left( x \right), I_{A} \left( x \right) = I_{B} \left( x \right),F_{A} \left( x \right) = F_{B} \left( x \right)\).

i.e., \(A = B\).

\(\left( \Leftarrow \right)\) If \(A = B\), then \(T_{A} \left( x \right) = T_{B} \left( x \right), I_{A} \left( x \right) = I_{B} \left( x \right),F_{A} \left( x \right) = F_{B} \left( x \right), \forall x \in X. \)

\(\begin{aligned} \therefore S_{1} \left( {A,B} \right) & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - I_{A}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{A}^{2} \left( x \right)} \right| \right) \\ & = 1. \\ \end{aligned} \)

(S3) \(\begin{aligned} S_{1} \left( {A,B} \right) & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{B}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{B}^{2} \left( x \right) - I_{A}^{2} \left( x \right)} \right| + \left| {F_{B}^{2} \left( x \right) - F_{A}^{2} \left( x \right)} \right| \right) \\ & = S_{1} \left( {B,A} \right). \\ \end{aligned} \)

(S4) If \(A \subseteq B \subseteq C\), then we have:

\(T_{A} \left( x \right) \le T_{B} \left( x \right) \le T_{C} \left( x \right),\) \(I_{A} \left( x \right) \ge I_{B} \left( x \right) \ge I_{C} \left( x \right)\), \(F_{A} \left( x \right) \ge F_{B} \left( x \right) \ge F_{C} \left( x \right)\).

Therefore, we have:

\(\left| {T_{A} \left( x \right) - T_{B} \left( x \right)} \right| \le \left| {T_{A} \left( x \right) - T_{C} \left( x \right)} \right|\), \(\left| {I_{A} \left( x \right) - I_{B} \left( x \right)} \right| \le \left| {I_{A} \left( x \right) - I_{C} \left( x \right)} \right|,\) \(\left| {F_{A} \left( x \right) - F_{B} \left( x \right)} \right| \le \left| {F_{A} \left( x \right) - F_{C} \left( x \right)} \right|\).

\(\therefore \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \le \left| {T_{A}^{2} \left( x \right) - T_{C}^{2} \left( x \right)} \right|\), \(\left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \le \left| {I_{A}^{2} \left( x \right) - I_{C}^{2} \left( x \right)} \right|\), \(\left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \le \left| {F_{A}^{2} \left( x \right) - F_{C}^{2} \left( x \right)} \right|\).

Summing over all \(x \in X\) yields \(S_{1} \left( {A,C} \right) \le S_{1} \left( {A,B} \right)\) and, similarly, \(S_{1} \left( {A,C} \right) \le S_{1} \left( {B,C} \right)\).

Hence, \(S_{1} \left( {A, B} \right)\) is a similarity measure between SVNSs.
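The axioms (S1)–(S4) just verified for \(S_{1}\) can also be spot-checked numerically. The sketch below is ours (list-of-triples representation; helper names are assumptions): it draws random nested triples \(A \subseteq B \subseteq C\) and asserts the four conditions of Definition 2 for \(S_{1}\).

```python
import random

def s1(A, B):
    # S1 from Theorem 3(i): one minus the normalized sum of absolute
    # differences of the squared grades.
    total = sum(abs(ta**2 - tb**2) + abs(ia**2 - ib**2) + abs(fa**2 - fb**2)
                for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    return 1 - total / (3 * len(A))

def nested_triple(rng):
    # Build elementwise A ⊆ B ⊆ C: T ascending, I and F descending.
    ta, tb, tc = sorted(rng.random() for _ in range(3))
    ic, ib, ia = sorted(rng.random() for _ in range(3))
    fc, fb, fa = sorted(rng.random() for _ in range(3))
    return [(ta, ia, fa)], [(tb, ib, fb)], [(tc, ic, fc)]

rng = random.Random(0)
for _ in range(1000):
    A, B, C = nested_triple(rng)
    assert 0.0 <= s1(A, B) <= 1.0            # (S1) boundedness
    assert s1(A, A) == 1.0                   # (S2), the "if" direction
    assert s1(A, B) == s1(B, A)              # (S3) symmetry
    assert s1(A, C) <= s1(A, B) + 1e-12      # (S4) for A ⊆ B ⊆ C
    assert s1(A, C) <= s1(B, C) + 1e-12
print("S1 satisfies (S1)-(S4) on random nested data")
```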

Theorem 4

For \(i = 1,2, \ldots ,11,\) if \(\alpha = \beta = \gamma = \frac{1}{3},\) we have:

  1. (i)
    $$S_{i} \left( {A,B^{c} } \right) = S_{i} \left( {A^{c} ,B} \right), i \ne 10,11$$
  2. (ii)
    $$S_{i} \left( {A,B} \right) = S_{i} \left( {A \cap B,A \cup B} \right)$$
  3. (iii)
    $$S_{i} \left( {A,A \cap B} \right) = S_{i} \left( {B,A \cup B} \right)$$
  4. (iv)
    $$S_{i} \left( {A,A \cup B} \right) = S_{i} \left( {B,A \cap B} \right)$$

Proof

For the sake of brevity, we only prove properties (i) to (iii) for \(S_{1} \left( {A,B} \right)\); it can be shown in a similar manner that \(S_{i} \left( {A,B} \right), i = 2,3, \ldots ,11\), also satisfies properties (i) to (iv) above. The proof of property (iv) is similar to that of property (iii) and is therefore omitted.

  1. (i)

    For \(S_{1} \left( {A,B} \right) = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big)\), we have the following:

    $$\begin{aligned} S_{1} \left( {A,B^{c} } \right) & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - \left( {1 - I_{B}^{2} \left( x \right)} \right)} \right| + \left| {F_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right) \\ & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) + I_{B}^{2} \left( x \right) - 1} \right| + \left| {F_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right) \\ & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {F_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {1 - I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {T_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = S_{1} \left( {A^{c} ,B} \right). \\ \end{aligned}$$
  2. (ii)
    $$\begin{aligned} & S_{1} \left( {A \cap B,A \cup B} \right) \\ &\quad = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {\left( {\min \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} - \left( {\max \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} } \right| \right. \\ &\qquad + \left| {\left( {\max \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} - \left( {\min \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} } \right| \\ &\qquad \left. + \left| {\left( {\max \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} - \left( {\min \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} } \right| \right) \\ &\quad = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ &\quad = S_{1} \left( {A,B} \right). \\ \end{aligned}$$
  3. (iii)
    $$\begin{aligned} & S_{1} \left( {A,A \cap B} \right) \\ &\quad = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - \left( {\min \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} } \right| \right. \\ &\qquad \left. + \left| {I_{A}^{2} \left( x \right) - \left( {\max \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} } \right| + \left| {F_{A}^{2} \left( x \right) - \left( {\max \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} } \right| \right) \\ &\quad = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{B}^{2} \left( x \right) - \left( {\max \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} } \right| \right. \\ &\qquad \left. + \left| {I_{B}^{2} \left( x \right) - \left( {\min \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} } \right| + \left| {F_{B}^{2} \left( x \right) - \left( {\min \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} } \right| \right) \\ &\qquad \left( \because \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| = \left| {T_{B}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right| \right) \\ &\quad = S_{1} \left( {B,A \cup B} \right). \\ \end{aligned}$$
  4. (iv)

    The proof is similar to that of (iii) and is therefore omitted.

Comparative studies

In this section, we conduct a comparative analysis between the proposed similarity measures and other existing similarity measures presented in the literature to show the drawbacks of the existing similarity measures and the advantages of the suggested similarity measures.

Existing similarity measures for SVNSs

In this subsection, we present a detailed and comprehensive comparative study of the previously defined similarity measures and some existing similarity measures in the literature. The existing similarity measures that will be considered in this comparative study are listed in Table 1.

Table 1 Existing similarity measures

Comparison between the proposed and existing similarity measures for SVNSs using artificial sets

In this subsection, we use 10 artificial sets of SVNSs, consisting of combinations of special SVNNs, to conduct a thorough comparison between the proposed similarity measures and the existing similarity measures listed in Table 1. The results of this comparative study are presented in Table 2, where values in bold indicate unreasonable results. From Table 2, it can be clearly seen that the proposed similarity measures \(S_{10}\) and \(S_{11}\) overcome the shortcomings inherent in the existing similarity measures by producing reasonable results in all 10 cases studied. The drawbacks and problems inherent in the existing similarity measures are discussed in detail in “Discussion and analysis of results”.
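As an illustration, the two best-performing measures can be sketched as follows. The code is ours, not from any reference implementation: SVNSs are represented as lists of \((T, I, F)\) triples, and the degenerate pair \(A = (1,0,0)\), \(B = (0,0,1)\) is one of the special SVNNs used in the comparison.

```python
def s10(A, B):
    # S10: elementwise ratio of min/max terms, with the indeterminacy and
    # falsity grades reversed via 1 - I^2 and 1 - F^2.
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        num = (min(ta**2, tb**2) + min(1 - ia**2, 1 - ib**2)
               + min(1 - fa**2, 1 - fb**2))
        den = (max(ta**2, tb**2) + max(1 - ia**2, 1 - ib**2)
               + max(1 - fa**2, 1 - fb**2))
        total += num / den
    return total / len(A)

def s11(A, B):
    # S11: as S10, but a single global ratio of the summed min and max terms.
    num = sum(min(ta**2, tb**2) + min(1 - ia**2, 1 - ib**2)
              + min(1 - fa**2, 1 - fb**2)
              for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    den = sum(max(ta**2, tb**2) + max(1 - ia**2, 1 - ib**2)
              + max(1 - fa**2, 1 - fb**2)
              for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    return num / den

# A degenerate pair on which several measures break down: both values
# stay well defined and strictly between 0 and 1.
A, B = [(1.0, 0.0, 0.0)], [(0.0, 0.0, 1.0)]
print(round(s10(A, B), 4), round(s11(A, B), 4))
```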

Table 2 Comparison of the results obtained for the different similarity measures

Discussion and analysis of results

The results obtained when the 10 sets of SVNSs were applied to the formulas in Table 1 are discussed and analyzed in the current subsection. The results which are shown in bold in Table 2 indicate unreasonable results, and the reasons for classifying these specific results as unreasonable are discussed below.

  1. (i)

    It can be clearly seen that condition (S2) is not satisfied in similarity measures \(S_{Y3} , S_{Y11} \) and \(S_{CY}\), when \(A\) and \(B\) are clearly not equal:

    • \(S_{Y3} \left( {A,B} \right) = 1\), when \(A = \left( {0.4,0.2,0.6} \right)\) and \(B = \left( {0.2,0.1,0.3} \right)\)

    • \(S_{Y3} \left( {A,B} \right) = 1\), when \(A = \left( {0.4,0.2,0.3} \right)\) and \(B = \left( {0.8,0.4,0.6} \right)\)

    • \(S_{Y11} \left( {A,B} \right) = 1\), when \(A = \left( {0.4,0.2,0.3} \right)\) and \(B = \left( {0.8,0.4,0.6} \right)\)

    • \(S_{CY} \left( {A,B} \right) = 1\), when \(A = \left( {0.3,0.3,0.4} \right)\) and \(B = \left( {0.4,0.3,0.3} \right)\)

    • \(S_{Y11} \left( {A,B} \right) = 1\), when \(A = \left( {0.4,0.2,0.6} \right)\) and \(B = \left( {0.2,0.1,0.3} \right)\)

    • \(S_{CY} \left( {A,B} \right) = 1\), when \(A = \left( {1,0,0} \right)\) and \(B = \left( {0,0,1} \right)\).

  2. (ii)

    Some similarity measures are undefined due to division by zero. These include case 8 for \(S_{YZ} , S_{6}\) and \(S_{7}\), when \(A = \left( {1,0,0} \right), B = \left( {0,0,1} \right)\), and case 9 for \(S_{Y3} , S_{Y11} , S_{DGZ2} ,\) \(S_{YZ} ,S_{P} ,S_{6} \) and \(S_{7}\), when \(A = \left( {1,0,0} \right), B = \left( {0,0,0} \right)\).

  3. (iii)

    It can be clearly seen that condition (S1) is not met by similarity measure \(S_{S}\), since \(S_{S} \left( {A,B} \right) = - 0.0833\), when \(A = \left( {1,0,0} \right)\) and \(B = \left( {1,0,0} \right)\).

  4. (iv)

    We can also see that \(S_{Y1} \left( {A,B} \right) = S_{Y2} \left( {A,B} \right) = S_{Y3} \left( {A,B} \right) = S_{Y4} \left( {A,B} \right) = S_{Y8} \left( {A,B} \right) = S_{Y9} \left( {A,B} \right) = S_{Y11} \left( {A,B} \right) = S_{YF1} \left( {A,B} \right) = S_{M} \left( {A,B} \right) = S_{L} \left( {A,B} \right) = S_{P} \left( {A,B} \right) = S_{BS} \left( {A,B} \right) = S_{3} \left( {A,B} \right) = S_{4} \left( {A,B} \right) = S_{5} \left( {A,B} \right) = S_{8} \left( {A,B} \right) = S_{9} \left( {A,B} \right) = 0\), when \(A = \left( {1,0,0} \right)\) and \(B = \left( {0,0,1} \right)\), even though \(A\) and \(B\) are clearly not completely (i.e., 100%) different. A similar situation occurs for \(S_{Y1} \left( {A,B} \right) = S_{Y2} \left( {A,B} \right) = S_{Y4} \left( {A,B} \right) = S_{Y8} \left( {A,B} \right) = S_{Y9} \left( {A,B} \right) = S_{YF1} \left( {A,B} \right) = S_{M} \left( {A,B} \right) = S_{L} \left( {A,B} \right) = S_{CY} \left( {A,B} \right) = S_{BS} \left( {A,B} \right) = S_{3} \left( {A,B} \right) = S_{4} \left( {A,B} \right) = S_{5} \left( {A,B} \right) = S_{8} \left( {A,B} \right) = S_{9} \left( {A,B} \right) = 0\), when \(A = \left( {1,0,0} \right)\) and \(B = \left( {0,0,0} \right)\), where the two sets are again clearly not completely different.

  5. (v)

    Moreover, \(S_{SOUKS} , S_{MP1} , S_{MP3}\) and \(S_{CY}\) produce unreasonable results in case 7, when \(A = \left( {1,0,0} \right), B = \left( {0,1,1} \right),\) that is, when \(A\) and \(B\) are clearly opposites:

    \(S_{SOUKS} \left( {A,B} \right) = 0.2222\)

    \(S_{MP1} \left( {A,B} \right) = 0.5432\)

    \(S_{MP3} \left( {A,B} \right) = 0.0893\)

    \(S_{CY} \left( {A,B} \right) = 0.6667\)

  6. (vi)

    Some of the existing similarity measures (namely the measures \(S_{Y1} , S_{Y2} , S_{Y3} , S_{Y4} , S_{Y5} , S_{Y6} , S_{Y8} , S_{Y9} , S_{Y10} , S_{Y11} , S_{Y12} ,S_{YF1} , S_{YF2, } S_{YZ} ,S_{M} ,S_{DGZ1} ,S_{DGZ2} ,S_{SOUKS} ,S_{H} ,S_{L} ,S_{P} ,\)

    \(S_{GN} , S_{MP2} , S_{MP3} , S_{FY} , S_{CY} , S_{W}\) and \(S_{BS}\)) and the proposed similarity measures (namely the measures \(S_{1} , S_{2} , S_{3} , S_{4} , S_{5} ,\) \(S_{6} ,S_{7} ,S_{8}\) and \(S_{9}\)) fail to distinguish between positive and negative differences. For instance, \(S_{Y1} \left( {A,B} \right) = S_{Y1} \left( {C,D} \right) = 0.9737\), when \(A = \left( {0.3,0.3,0.4} \right), B = \left( {0.4,0.3,0.4} \right)\) and \(C = \left( {0.3,0.3,0.4} \right), D = \left( {0.3,0.4,0.4} \right).\)

  7. (vii)

    Many of the similarity measures have been found to produce unconscionable results in some of the cases which are shown in Table 2. These findings are the following:

    • Case 3 and case 6 for \(S_{Y1}\)

    • Case 3 and case 6 for \(S_{Y2}\)

    • Case 4 for \(S_{Y4}\)

    • Case 3 and case 6 for \(S_{Y8}\)

    • Case 4 for \(S_{Y9}\)

    • Case 3, case 4, case 5, case 9 for \(S_{Y12}\)

    • Case 3 and case 6 for \(S_{YZ}\)

    • Case 3 and case 6 for \(S_{M}\)

    • Case 4 and case 5 for \(S_{SOUKS}\)

    • Case 2 and case 4 for \(S_{MB1}\)

    • Case 2 and case 4 for \(S_{MB2}\)

    • Case 3 and case 6 for \(S_{P}\)

    • Case 3, case 4 and case 5 for \(S_{GN}\)

    • Case 3 and case 6 for \(S_{CY}\)

    • Case 4 for \(S_{BS}\)

    • Case 4 and 5 for \(S_{PS}\)

    • Case 3 and case 6 for \(S_{6}\)

    • Case 3 and case 6 for \(S_{7}\)

    • Case 3 and case 6 for \(S_{8}\)

    • Case 3 and case 6 for \(S_{9}\)

      This observation indicates that the aforementioned similarity measures may be impractical and difficult to use in real-world applications.

  8. (viii)

    From Table 2, it can be seen that the existing similarity measures \(S_{Y7}\) and \(S_{RXZ}\) and the proposed similarity measures \(S_{10}\) and \(S_{11}\) are the only similarity measures that produce reasonable results for every one of the 10 cases examined in this subsection. Hence, it can be concluded that the proposed similarity measures \(S_{10}\) and \(S_{11}\) are superior to all of the other existing similarity measures and as effective as the existing similarity measures \(S_{Y7}\) and \(S_{RXZ}\).
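The division-by-zero failure noted in point (ii) above is easy to reproduce. The sketch below is a naive transcription (ours) of \(S_{7}\) with equal weights \(\alpha = \beta = \gamma = \frac{1}{3}\); on case 9 the indeterminacy ratio becomes \(0/0\) and the computation fails.

```python
def s7_naive(A, B, a=1/3, b=1/3, c=1/3):
    # Direct transcription of S7: each component is a ratio of min to max
    # squared grades, which is undefined when a grade is 0 in both sets.
    n = len(A)
    t = sum(min(ta**2, tb**2) / max(ta**2, tb**2)
            for (ta, _, _), (tb, _, _) in zip(A, B))
    i = sum(min(ia**2, ib**2) / max(ia**2, ib**2)
            for (_, ia, _), (_, ib, _) in zip(A, B))
    f = sum(min(fa**2, fb**2) / max(fa**2, fb**2)
            for (_, _, fa), (_, _, fb) in zip(A, B))
    return (a * t + b * i + c * f) / n

# Case 9: A = (1,0,0), B = (0,0,0) -> the indeterminacy ratio is 0/0.
A, B = [(1.0, 0.0, 0.0)], [(0.0, 0.0, 0.0)]
try:
    s7_naive(A, B)
except ZeroDivisionError:
    print("S7 is undefined for case 9")
```

S10 and S11 avoid this failure by construction: their denominators combine the truth term with \(1 - I^{2}\) and \(1 - F^{2}\) terms, so they do not form a ratio of two quantities that are both zero in these degenerate cases.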

Applications of the proposed similarity measures

In this section, we study the performance of the existing and proposed similarity measures by applying all of these measures to two MCDM problems related to pattern recognition and medical diagnosis. The rankings obtained are further tested using Spearman’s rank correlation coefficient test, and the results clearly show that the proposed similarity measures \(S_{10}\) and \(S_{11}\) are superior to the existing similarity measures \(S_{RXZ}\) and \(S_{Y7}\).
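Spearman’s test compares two ranking vectors directly. A minimal sketch, using the tie-free formula \(\rho = 1 - \frac{6\sum d_{i}^{2}}{n\left( {n^{2} - 1} \right)}\) (the example ranks are hypothetical, not taken from our tables):

```python
def spearman_rho(rank1, rank2):
    # Spearman's rank correlation for tie-free rankings:
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
    n = len(rank1)
    d2 = sum((a - b) ** 2 for a, b in zip(rank1, rank2))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman_rho([2, 1, 3], [2, 1, 3]))   # identical rankings: 1.0
print(spearman_rho([2, 1, 3], [3, 1, 2]))   # first and third ranks swapped: 0.5
```

A value of 1 indicates that two measures rank the alternatives identically; values below 1 quantify how much their rankings disagree.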

Application of the similarity measures in a pattern recognition problem

Suppose that there are \(r\) patterns expressed by SVNSs. Let \(A_{i} = \left\{ {x_{j} ;T_{{A_{i} }} \left( {x_{j} } \right),I_{{A_{i} }} \left( {x_{j} } \right),F_{{A_{i} }} \left( {x_{j} } \right)} \right\}\left( {i = 1,2, \ldots ,r} \right)\) be \(r\) patterns in a given universe of discourse \(X = \left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\},\) and let \(B = \left\{ {x_{j} ;T_{B} \left( {x_{j} } \right),I_{B} \left( {x_{j} } \right),F_{B} \left( {x_{j} } \right)} \right\}\) be a sample that needs to be recognized. The objective is to assign pattern \(B\) to one of the patterns \(A_{1} ,A_{2} , \ldots ,A_{r}\) based on the principle of maximum similarity, i.e., the larger the value of the similarity measure between \(A_{i}\) and \(B\), the more similar \(A_{i}\) and \(B\) are.
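The maximum-similarity rule is straightforward to implement. The sketch below uses \(S_{1}\) and small hypothetical data over \(X = \left\{ {x_{1} ,x_{2} } \right\}\) (illustrative values only, not those of Example 1):

```python
def s1(A, B):
    # S1 from Theorem 3(i), for SVNSs given as lists of (T, I, F) triples.
    total = sum(abs(ta**2 - tb**2) + abs(ia**2 - ib**2) + abs(fa**2 - fb**2)
                for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    return 1 - total / (3 * len(A))

def classify(B, patterns, S):
    # Assign the sample B to the known pattern with maximum similarity.
    return max(patterns, key=lambda name: S(patterns[name], B))

# Hypothetical known patterns and an unknown sample.
patterns = {
    "A1": [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1)],
    "A2": [(0.3, 0.4, 0.6), (0.2, 0.5, 0.7)],
}
B = [(0.85, 0.15, 0.1), (0.75, 0.2, 0.15)]
print(classify(B, patterns, s1))   # B is closest to A1
```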

Example 1

A numerical example adapted from Garg and Nancy [54] is used here to illustrate the effectiveness of the proposed similarity measures. Suppose that there are 3 known patterns \(A_{1} , A_{2} , \) and \(A_{3}\) which are represented by specific SVNSs, in a given universe of discourse \(X = \left\{ {x_{1} , x_{2} ,x_{3} ,x_{4} } \right\}\), and an unknown pattern \(B \in SVNS\left( X \right),\) all of which are presented in Table 3.

Table 3 Patterns \(A_{1} , A_{2} , A_{3}\) and \(B\) represented in the form of SVNSs

The values of the similarity measures between \(B\) and \(A_{k}\), \(k = 1, 2, 3\) have been computed for all of the proposed similarity measures, \(S_{i} ,i = 1,2, \ldots ,11\), and the results are presented in Table 4. Note that values in bold indicate the largest value of the corresponding similarity measure.

Table 4 The values of the similarity measures for our proposed formulae

From Table 4, it can be seen that all of the proposed similarity measures produced the same ranking (i.e., \(A_{2} > A_{3} > A_{1}\)), except for measure \(S_{6}\) which produced a slightly different ranking (i.e., \(A_{2} > A_{1} > A_{3}\)). However, since every one of these rankings places \(A_{2}\) first, it can be clearly concluded that sample \(B\) belongs to pattern \(A_{2}\).

Performance of existing similarity measures in the pattern recognition problem

In the following, we present a comparative analysis of the performance of the existing similarity measures and the proposed similarity measures to further illustrate the effectiveness of the proposed similarity measures. The existing similarity measures of SVNSs, which were given in Table 1, are applied to the pattern recognition problem presented in Example 1. The results obtained are summarized in Table 5. Note that the row in bold indicates a different ranking order.

Table 5 Ranking order of the existing similarity measures

From Table 5, it can be seen that all of the existing similarity measures produced the same ranking order as the proposed similarity measures, except for measure \(S_{CY}\), which produced the same ranking as measure \(S_{6}\). This demonstrates the consistency and effectiveness of the proposed similarity measures.

Application of the similarity measures in a medical diagnosis problem

Ye [42] proposed a medical diagnosis method which considers a set of diagnoses \(Q = \left\{ {Q_{1} ,Q_{2} , \ldots ,Q_{n} } \right\}\) and a set of symptoms \(R = \left\{ {r_{1} ,r_{2} , \ldots ,r_{m} } \right\}\). Assume that a patient \(P\) with varying degrees of all the symptoms is taken as a sample. The characteristic information of \(Q\), \(R\) and \(P\) is represented in the form of SVNSs. The diagnosis \(Q_{i}\) for patient \(P\) is determined by \(i = \arg \max_{k} \left\{ {S\left( {P,Q_{k} } \right)} \right\}\), where \(S\) denotes a similarity measure. In the following, we will consider a numerical example adapted from [42] to illustrate the feasibility and effectiveness of the proposed new similarity measures.
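Ye's diagnosis rule amounts to a one-line argmax selection once the similarity values between the patient and each diagnosis have been computed. A minimal sketch (the similarity values in the usage example below are hypothetical, not taken from the paper's tables):

```python
def diagnose(similarities):
    """Return the diagnosis with the largest similarity to the patient,
    following the rule i = argmax S(P, Q_i)."""
    return max(similarities, key=similarities.get)
```

For instance, `diagnose({"viral fever": 0.4, "malaria": 0.8, "typhoid": 0.6})` returns `"malaria"`.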

Example 2

A medical diagnosis problem adapted from [42] is described below. Assume a set of diagnoses \(Q\) and a set of symptoms \(R\) which are defined as follows:

\(Q = \{ Q_{1}\) (viral fever), \(Q_{2}\) (malaria), \(Q_{3}\) (typhoid), \(Q_{4}\) (gastritis), \(Q_{5}\) (stenocardia)\(\}\)

and \(R = \{ r_{1}\) (fever), \(r_{2}\) (headache), \(r_{3} \)(stomach pain), \(r_{4}\) (cough), \(r_{5}\) (chest pain)\(\} .\)

The characteristic values of the considered diseases are represented in the form of SVNSs and they are shown in Table 6.

Table 6 Characteristic values of the considered diseases represented in the form of SVNSs

In the medical diagnosis, assume that we take a sample from a patient \(P_{1}\) with all the symptoms, which is represented by the following SVNS information:

\(P_{1} = \left\{ {\left\langle {r_{1} ,0.8,0.2,0.1} \right\rangle ,\left\langle {r_{2} ,0.6,0.3,0.1} \right\rangle ,\left\langle {r_{3} ,0.2,0.1,0.8} \right\rangle ,\left\langle {r_{4} ,0.6,0.5,0.1} \right\rangle ,\left\langle {r_{5} ,0.1,0.4,0.6} \right\rangle } \right\}\).

By applying the proposed formulas (\(S_{1} , S_{2} , S_{3} , S_{4} , S_{5} , S_{6} , S_{7} ,S_{8} , S_{9} , S_{10}\) and \(S_{11}\)), we obtain the corresponding similarity measure values \(S_{i} \left( {P_{1} ,Q_{k} } \right)\), \(i = 1,2, \ldots ,11\), \(k = 1,2, \ldots ,5\), which are shown in Table 7. Note that values in bold indicate the largest value of the corresponding similarity measure.

Table 7 The similarity measures between \(P_{1}\) and \(Q_{i}\) for the proposed formulas

From Table 7, it can be seen that only formulas \(S_{2}\) and \(S_{7}\) produced results that are inconsistent with those produced by the other proposed formulas. Since the largest similarity value indicates the proper diagnosis, we can conclude that the diagnosis of patient \(P_{1}\) is \(Q_{2}\) (malaria) in all cases except those of \(S_{2}\) and \(S_{7}\), in which the patient was diagnosed as having viral fever and typhoid, respectively. These results are consistent with the results presented by Ye in [42], from which this dataset and the corresponding example were adapted. The medical diagnosis process presented in [42] also concludes that the diagnosis of patient \(P_{1}\) is malaria, which shows that the proposed similarity measures are feasible, practical and effective.

Performance of the existing similarity measures in the medical diagnosis problem

To demonstrate the feasibility and effectiveness of the proposed similarity measures in the medical diagnosis problem studied here, the performance of the existing similarity measures of SVNSs listed in Table 1 is studied by applying these measures to Example 2. The results obtained are given in Table 8. Note that values in bold indicate the largest value of the corresponding similarity measure.

Table 8 Results of the similarity values between \(P_{1}\) and \(Q_{i}\) for all the existing similarity measures

As we can see from Table 8, \(S_{GN}\) produces inconclusive results as it gives the same values for both \(Q_{2}\) and \(Q_{3}\). Therefore, additional steps or further analysis would be needed in this case to distinguish these values and determine the correct diagnosis for the patient, a fact which indicates that the corresponding similarity formula is not able to handle all types of data.

Furthermore, patient \(P_{1}\) is assigned to malaria \(\left( {Q_{2} } \right)\) by all of the existing similarity measures except \(S_{YZ}\), \(S_{RXZ}\) and \(S_{SOUKS}\). This indicates that the results produced by the proposed similarity measures are consistent with those of the majority of the existing similarity measures, thereby supporting the claim that the proposed formulas are feasible, effective and practical measures for computing the similarity between SVNSs.

Ranking analysis with Spearman’s rank correlation coefficient

From “Discussion and analysis of results”, it can be clearly seen that only the existing similarity measures \(S_{YZ}\) and \(S_{RXZ}\) and our proposed similarity measures \(S_{10}\) and \(S_{11}\) avoid producing counter-intuitive or unreasonable results in all of the 10 cases that were studied. In the pattern recognition problem, all 4 of these similarity measures (\(S_{YZ}\), \(S_{RXZ}\), \(S_{10}\) and \(S_{11}\)) also produced the exact same rankings. However, in the medical diagnosis problem in Example 2, the rankings of the diagnoses obtained by \(S_{YZ}\), \(S_{RXZ}\) and our proposed similarity measures \(S_{10}\) and \(S_{11}\) are different. The proposed similarity measures \(S_{10}\) and \(S_{11}\) obtained \(Q_{2}\) as the optimal decision value and, therefore, diagnosed the patient as having malaria, whereas \(S_{YZ}\) and \(S_{RXZ}\) obtained \(Q_{3}\) and \(Q_{1}\) as the optimal decision values, respectively, and thus diagnosed the patient as having typhoid and viral fever, respectively.

To analyze the differences in the rankings in more detail, a further verification of the results is done using the Spearman’s rank correlation coefficient test. The Spearman’s rank correlation coefficient, denoted by \(\rho\), is defined as
$$\rho = 1 - \frac{{6\mathop \sum \nolimits_{i = 1}^{n} d_{i}^{2} }}{{n\left( {n^{2} - 1} \right)}},$$
where \(d_{i}\) is the difference between the two ranks assigned to the \(i\)-th alternative and \(n\) is the number of alternatives. The results of the test are presented in Table 9.

Table 9 Correlation between the actual ranking calculated by Ye in [42] and the rankings produced by the similarity measures

From the results in Table 9, it can be clearly seen that our proposed similarity measures \(S_{10}\) and \(S_{11}\) produced rankings that are perfectly correlated with the actual ranking calculated by Ye in [42], from which the dataset and the corresponding example were adapted, while the rankings obtained by the existing measures \(S_{RXZ}\) and \(S_{YZ}\) are clearly less correlated with the actual ranking presented in [42]. This proves that our proposed similarity measures are not only as feasible and effective as the existing similarity measures but also superior to the best of the existing similarity measures listed in Table 1.
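The Spearman coefficient used in this verification step follows directly from the formula above. A minimal sketch, assuming two tie-free rankings of the same \(n\) alternatives:

```python
def spearman_rho(rank_x, rank_y):
    """Spearman's rank correlation coefficient for two tie-free rankings.

    rank_x[i] and rank_y[i] are the ranks assigned to the i-th alternative
    by the two rankings being compared.
    """
    n = len(rank_x)
    # d_i is the difference between the two ranks of the i-th alternative
    d_squared = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))
```

Two identical rankings yield \(\rho = 1\) (perfect correlation, as reported for \(S_{10}\) and \(S_{11}\)), while two fully reversed rankings yield \(\rho = -1\).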

Summary of the discussion and overall evaluation of the results

Through the comparative analyses that have been done, a few major weaknesses and inherent problems were identified in many of the existing similarity measures. Some of the existing measures did not fulfill the axiomatic requirements, failed to distinguish between positive and negative differences, failed to produce any results due to the division by zero problem, or produced counter-intuitive or unreasonable results in some cases. From the results of the comparative study presented in “Comparison between the proposed and existing similarity measures for SVNSs using artificial sets” and shown in Table 2, it was found that only 2 of the existing similarity measures (\(S_{RXZ}\) and \(S_{YZ}\)) and 2 of the proposed similarity measures (\(S_{10}\) and \(S_{11}\)) did not produce any unreasonable or counter-intuitive results. However, through the Spearman’s rank correlation coefficient test done in “Ranking analysis with Spearman's rank correlation coefficient”, it was evident that the proposed similarity measures \(S_{10}\) and \(S_{11}\) also had the highest correlation with the actual ranking, thereby proving that these similarity measures are superior to the existing measures \(S_{RXZ}\) and \(S_{YZ}\).

We also compared the performance of the two proposed similarity measures \(S_{10}\) and \(S_{11}\) in terms of the discriminative power of the results obtained via these two formulas. From the illustrative examples given in “Application of the similarity measures in a pattern recognition problem” and “Application of the similarity measures in a medical diagnosis problem”, it can be observed that both of these proposed similarity measures produced the exact same rankings as the actual rankings, which indicates that both measures are effective and feasible. However, \(S_{11}\) has a higher level of discriminative power than \(S_{10}\), as can be observed from the results of applying these measures to the pattern recognition and medical diagnosis problems in Tables 4 and 7, respectively, in which the decision values are extremely close to one another. It can be seen that \(S_{11}\) better discriminates between the decision values and produces results that show a clear distinction between them. Using this measure, we were able to rank the alternatives clearly and, consequently, to make clear and firm decisions. Furthermore, \(S_{11}\) has a lower level of computational complexity. Hence, it can be concluded that \(S_{11}\) is superior to \(S_{10}\).
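One simple way to make the notion of discriminative power concrete is the gap between the two largest similarity values, which measures how decisively a measure separates the best alternative from the runner-up. This indicator is our own illustrative proxy, not a quantity defined in the paper:

```python
def discrimination_margin(scores):
    """Gap between the two largest similarity values in `scores`.

    A larger margin means the similarity measure separates the top
    alternative more decisively from the runner-up, enabling a firmer
    decision; a margin near zero signals decision values that are
    nearly indistinguishable.
    """
    top_two = sorted(scores.values(), reverse=True)[:2]
    return top_two[0] - top_two[1]
```

Under this proxy, a measure whose decision values cluster tightly (small margin) is harder to act on than one that spreads them apart, which is the sense in which \(S_{11}\) outperforms \(S_{10}\) here.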

Conclusions

The concluding remarks and the significant contributions of the presented approach are summarized below:

  1.

    New formulas for the distance and similarity measures for SVNSs have been developed in an effort to improve and/or overcome the drawbacks that are inherent in existing distance and similarity measures for SVNSs.

  2.

    The fundamental algebraic properties for the proposed distance and similarity measures were presented and verified.

  3.

    To demonstrate the effectiveness and superiority of our proposed formulas, a comprehensive comparative analysis was conducted by considering all the existing similarity measures in the relevant literature. The analysis was done using 10 cases corresponding to different combinations of SVNNs, some of which were counter-intuitive. Many of the existing similarity measures produced unreasonable or counter-intuitive results, while others could not produce any results at all due to the division by zero problem. Our proposed similarity measures, on the other hand, were able to produce reasonable results for most cases, and two of the proposed similarity measures (\(S_{10}\) and \(S_{11}\)) were found to be the best among all of the proposed formulas and superior to almost all of the existing formulas, as they were able to produce reasonable and accurate results in every single one of the cases that were studied.

  4.

    The proposed similarity measures and existing similarity measures were applied to two MCDM problems related to pattern recognition and medical diagnosis, which were adapted from Garg and Nancy [54] and Ye [42], respectively. It was shown that the proposed similarity measures produced results that are consistent with the results obtained via the existing similarity measures, thereby confirming that the suggested similarity measures are feasible and effective measures that are also practical to use in solving MCDM problems.

  5.

    We went a step further in this study by conducting a two-pronged comparative study to determine the performance of the existing and proposed similarity measures. From the first comparative study described in (3) above, it was concluded that only 2 of the existing similarity measures and 2 of our proposed similarity measures were able to produce reasonable results in every one of the 10 cases that were studied. After eliminating all but these 4 similarity measures, we proceeded to study their performance in two MCDM problems related to pattern recognition and medical diagnosis, as expounded in (4) above. The rankings obtained were further scrutinized by applying the Spearman’s rank correlation coefficient test to the rankings produced by the 4 similarity measures: the 2 existing measures \(S_{YZ}\) and \(S_{RXZ}\) and the 2 proposed measures \(S_{10}\) and \(S_{11}\). The results of the Spearman’s rank test verified the superiority of our proposed similarity measures \(S_{10}\) and \(S_{11}\), as both produced rankings that are perfectly correlated with the actual rankings.

  6.

    To further determine which of the two proposed similarity measures (\(S_{10}\) and \(S_{11}\)) is superior, we analyzed the discriminative power of these measures. It was concluded that \(S_{11}\) is superior to \(S_{10}\), as it has a higher discriminative power and a lower computational complexity.

Suggestions for future research

The future direction of this work involves the development of other improved information measures, such as entropy measures, cross-entropy measures and inclusion measures for SVNSs, that are free from the problems inherent in the corresponding existing measures. We also intend to apply the proposed measures to actual datasets of real-world problems instead of hypothetical datasets [85,86,87,88,89,90,91]. To accomplish these goals, however, an effective method of converting the crisp data found in real-life datasets into SVNS form has to be developed, so that the conversion does not incur any significant loss of information that could affect the accuracy of the obtained results.