Abstract
The single-valued neutrosophic set (SVNS) is a well-known model for handling uncertain and indeterminate information. Information measures such as distance, similarity and entropy measures are useful tools in many applications, such as multi-criteria decision making (MCDM), medical diagnosis, pattern recognition and clustering. Many such information measures have been proposed for the SVNS model; however, a number of them have inherent problems that prevent them from producing reasonable or consistent results for decision makers. In this paper, we propose several new distance and similarity measures for the SVNS model and prove that they comply with the axiomatic definitions of distance and similarity measures for SVNSs. A detailed and comprehensive comparative analysis between the proposed similarity measures and other well-known existing similarity measures is carried out. The comparison results show that the proposed similarity measures overcome the shortcomings inherent in existing similarity measures. Finally, an extensive set of numerical examples, related to pattern recognition and medical diagnosis, is given to demonstrate the practical applicability of the proposed similarity measures; in all of these examples, the proposed measures produce accurate and reasonable results. To further verify the superiority of the proposed similarity measures, Spearman’s rank correlation coefficient test is performed on the ranking results obtained from the numerical examples, confirming that the proposed similarity measures produce the most consistent rankings compared with existing similarity measures.
Introduction
The connection between precision and uncertainty has perplexed humanity for centuries. Lukasiewicz [1], a Polish logician and philosopher, gave the first formulation of multi-valued logic, which led to the study of possibility theory. The first simple fuzzy sets and the fundamental ideas of fuzzy set operations were proposed by Black [2]. To overcome the problem of handling uncertain and imprecise information in decision making, Zadeh [3] introduced the concept of the fuzzy set, where the membership degree of each element is a single value in the interval [0, 1]. Fuzzy set theory has been widely applied in a plethora of fields, including medical diagnosis, engineering, economics, image processing and object recognition (Phuong et al. [4]; Shahzadi et al. [5]; Tobias and Seara [6]).
The general fuzzy set was extended to the intuitionistic fuzzy set (IFS) by Atanassov [7]. The IFS model has a degree of membership \(\mu_{A} \left( {x_{i} } \right) \in \left[ {0,1} \right]\) and a degree of non-membership \(\nu_{A} \left( {x_{i} } \right) \in \left[ {0,1} \right]\), such that \(\mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right) \le 1\) for each \(x \in X.\) The IFS model genuinely extends the classical fuzzy set model; however, it is often difficult to apply in real-life decision making situations, as it can deal with incomplete and vague information but not with indeterminate or inconsistent information. Hence, Smarandache [8] proposed the idea of the neutrosophic set (NS) which, from a philosophical point of view, deals with the imprecise, indeterminate and inconsistent information that often exists in real-life decision making problems more effectively than the classical fuzzy set model [3] and the IFS model [7]. The neutrosophic set [9] is characterized by a truth function \(T_{A} \left( x \right)\), an indeterminacy function \(I_{A} \left( x \right)\) and a falsity function \(F_{A} \left( x \right)\), where all three functions are completely independent. The functions \(T_{A} \left( x \right), I_{A} \left( x \right)\) and \(F_{A} \left( x \right)\) in \(X\) assume real values in the standard or non-standard subsets of \(\left] {{}_{ }^{ - } 0,1_{ }^{ + } } \right[,\) such that \(T_{A} \left( x \right):X \to \left] {{}_{ }^{ - } 0,1_{ }^{ + } } \right[, I_{A} \left( x \right):X \to \left] {{}_{ }^{ - } 0,1_{ }^{ + } } \right[\) and \(F_{A} \left( x \right):X \to \left] {{}_{ }^{ - } 0,1_{ }^{ + } } \right[\). Since its introduction, many extensions of the neutrosophic set have been proposed, including the single-valued neutrosophic set (SVNS) by Wang et al. [10], the interval neutrosophic set by Wang et al. [11], the simplified neutrosophic set by Peng et al.
[12], the neutrosophic soft set by Maji [13], the single-valued neutrosophic linguistic set by Ye [14], the simplified neutrosophic linguistic set by Tian et al. [15], the multi-valued neutrosophic set by Wang and Li [16], the rough neutrosophic set (RNS) by Broumi et al. [17], the neutrosophic cubic set by Jun et al. [18], the complex neutrosophic set by Ali and Smarandache [19], and the complex neutrosophic cubic set by Gulistan and Khan [20]. Additionally, a large number of aggregation operators have been presented, based on various techniques, including algebraic methods, the Bonferroni mean (Bonferroni [21]), the power average (Yager [22]), the exponential operational law, the prioritized average (Yager [23]) and the operations of the Dombi T-conorm and T-norm (Dombi [24]). All these aggregation operators have been proposed for analyzing multi-criteria decision making (MCDM) problems.
In this paper, we focus on the single-valued neutrosophic set (SVNS), which was presented by Wang et al. [10]. Since its inception, many scholars have actively contributed to the development of this variation of the NS and have applied the SVNS in various decision making fields. For example, Zavadskas et al. [25] presented a new extension of the weighted aggregated sum product assessment (WASPAS) decision making method (namely WASPAS-SVNS) to solve the problem of site selection for waste incineration plants. Vafadarnikjoo et al. [26] applied the fuzzy Delphi method in combination with SVNSs for assessing consumers’ motivations to purchase a remanufactured product. Selvachandran et al. [27] presented a modified Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) with the maximizing deviation method based on the SVNS model and applied this technique to determine objective attribute weights in a supplier selection problem. Broumi et al. [28] analyzed the strength of a Wi-Fi connection using SVNSs. Biswas et al. [29] proposed a non-linear programming approach based on the TOPSIS method for solving multi-criteria group decision making (MCGDM) problems under the SVNS environment. Abdel-Basset et al. [30] used a neutrosophic approach to minimize the cost of project scheduling under uncertain environmental conditions by assuming linear time–cost trade-offs. Abdel-Basset and Mohamed [31] proposed a combination of the plithogenic multi-criteria decision making approach based on TOPSIS and the criteria importance through inter-criteria correlation (CRITIC) method to evaluate the sustainability of a supply chain risk management system. Abdel-Basset et al. [32] considered the resource leveling problem in construction projects using neutrosophic sets with the aim of overcoming the ambiguity surrounding the project scheduling decision making process.
Besides these, many other scientific studies related to various extensions of the neutrosophic set model have also been published over the years. Akram et al. [33] developed an approach based on the maximizing deviation method and TOPSIS for solving MCDM problems under the assumptions of a simplified neutrosophic hesitant fuzzy environment. Zhan et al. [34] proposed an efficient algorithm to solve MCDM problems based on bipolar neutrosophic information. Aslam [35] introduced a novel neutrosophic analysis of variance, whereas Sumathi and Sweety [36] suggested a new form of fuzzy differential equation using trapezoid neutrosophic numbers.
Moreover, a lot of information measures for the SVNS model have been proposed over the years, such as similarity measures, distance measures, entropy measures, inclusion measures and also correlation coefficients. Some of the most important research works pertaining to similarity and distance measures for SVNSs are due to Broumi and Smarandache [37], Ye [38,39,40,41,42,43,44], Ye and Zhang [45], Majumdar and Samanta [46], Mondal and Pramanik [47], Ye and Fu [48], Liu and Luo [49], Huang [50], Mandal and Basu [51], Sahin et al. [52], Pramanik et al. [53], Garg and Nancy [54], Fu and Ye [55], Wu et al. [56], Cui and Ye [57], Mondal et al. [58, 59], Liu [60], Liu et al. [61], Ren et al. [62], Sun et al. [63] and Peng and Smarandache [64]. Research related to entropy and inclusion measures for the SVNS model can be found in Majumdar and Samanta [46], Aydoğdu [65], Garg and Nancy [66], Wu et al. [56], Cui and Ye [67], Aydoğdu and Şahin [68] and Sinha and Majumdar [69]. Lastly, correlation coefficients for SVNSs were proposed by Ye [38, 70, 71] and Hanafy et al. [72].
Since the first formulas expressing the similarity measure between two fuzzy sets were introduced by Bonissone [73], Eshragh and Mamdani [74] and Lee-Kwang et al. [75], scholars have continuously proposed new similarity measures for fuzzy based models, including the SVNS model, and applied these measures to various practical problems related to MCDM (Ye [41]; Ye and Zhang [45]; Pramanik et al. [53]; Mondal and Pramanik [47]; Aydoğdu [65]; Mandal and Basu [76]), pattern recognition (Sahin et al. [52]), medical diagnosis (Shahzadi, Akram and Saeid [5]; Ye and Fu [48]; Abdel-Basset et al. [77]), clustering analysis (Ye [41, 43]), image processing (Guo et al. [78, 79]; Guo and Şengür [80]; Qi et al. [81]) and minimum spanning trees (Mandal and Basu [51]). The existing similarity measures for SVNSs have been found to have several shortcomings: (1) they fail to differentiate between positive and negative differences over the sets being compared, (2) they face the division by zero problem, and (3) they produce results that are counter-intuitive to the concept of similarity measures and/or incompatible with the axiomatic definition of similarity measures for SVNSs. These issues were pointed out by Peng and Smarandache [64], who analyzed the problems inherent in many of the existing similarity measures.
In view of the above, the objective of this paper is to propose new distance and similarity measures for the SVNS model which are able to overcome the shortcomings of existing measures. The paper presents a detailed comparative analysis between the proposed similarity measures and other existing similarity measures for SVNSs. The comparative analysis applies all these measures in different cases with the aim to demonstrate the effectiveness, the feasibility and the superiority of the proposed formulas compared to existing formulas. The newly proposed measures are applied to MCDM problems related to pattern recognition and medical diagnosis.
The rest of this article is organized as follows. Section “Preliminaries” provides a brief overview of some of the most important concepts related to SVNSs. In Sect. “New distance and similarity measures for SVNSs”, several new distance measures and similarity measures for the SVNS model are introduced and some important algebraic properties of these measures are presented and verified. In Sect. “Comparative studies”, a comparative analysis is given between the proposed similarity measures and other existing similarity measures presented in the literature. In Sect. “Applications of the proposed similarity measures”, the proposed similarity measures are applied to two MCDM problems, related respectively to pattern recognition and medical diagnosis, using numerical examples aiming to prove the feasibility and effectiveness of the proposed similarity measures. The results obtained are then compared to the results obtained using the existing similarity measures, as well as analyzed and discussed. Concluding remarks and directions of future research are presented in Sect. “Conclusions” followed by the acknowledgements and the list of references.
Preliminaries
Definition 2.1
[8]. A neutrosophic set \(A\) in a universal set \(X\) is characterized by a truth-membership function \(T_{A} \left( x \right),\) an indeterminacy-membership function \(I_{A} \left( x \right)\) and a falsity-membership function \(F_{A} \left( x \right).\) These three functions \(T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right)\) in \(X\) are real standard or non-standard subsets of \(\left] {{}_{ }^{ - } 0,1_{ }^{ + } } \right[,\) such that \(T_{A} \left( x \right):X \to \left] {{}_{ }^{ - } 0,1_{ }^{ + } \left[ {,I_{A} \left( x \right):X \to } \right]{}_{ }^{ - } 0,1_{ }^{ + } } \right[,\) and \(F_{A} \left( x \right):X \to \left] {{}_{ }^{ - } 0,1_{ }^{ + } } \right[.\) Thus, there is no restriction on the sum of \(T_{A} \left( x \right), I_{A} \left( x \right)\) and \(F_{A} \left( x \right)\), so that \({}_{ }^{ - } 0 \le \sup T_{A} \left( x \right) + \sup I_{A} \left( x \right) + \sup F_{A} \left( x \right) \le 3_{ }^{ + } .\)
Smarandache [8] introduced the neutrosophic set from a philosophical point of view as an extension of the fuzzy set, the IFS, and the interval-valued IFS. Although the concept was a novel one, it was found to be difficult to apply neutrosophic sets in practical problems, mainly because the membership functions take values in the non-standard interval of \(\left] {{}_{ }^{ - } 0,1_{ }^{ + } } \right[\). Datasets in many real-life situations are often imprecise, uncertain and/or incomplete, and any discrepancies or deficiencies in the datasets used will have an adverse effect on the decision making process and, by extension, on the results that are generated. Hence, it is often pertinent to have a robust framework to effectively represent all types of imprecise, uncertain and incomplete information. Fuzzy set theory was introduced as an alternative to classical methods, such as set theory and probability theory, which were unable to deal with such deficiencies in information. However, fuzzy set theory was found to be less than ideal for this purpose, as it only takes into consideration the truth component of a piece of information and cannot handle its falsity and indeterminacy components. As fuzzy set theory evolved into other fuzzy based models, neutrosophic sets were introduced by Smarandache [8] as an efficient mathematical model for imprecise, inconsistent and incomplete information. The SVNS model, which was conceptualized by Wang et al. [10] as an extension of the neutrosophic set model, has proven to be effective for handling such information in a systematic manner due to its ability to consider the degree of truth, falsity and indeterminacy of each piece of information.
In addition, the structure of the SVNS model in which its membership functions assume values in the standard interval of [0, 1] makes it compatible with the other fuzzy based models, thereby making it more convenient to be applied to solving real-life decision making problems with actual datasets. All these served as reasons to choose the SVNS model as the object of study in this paper. The formal definition of the SVNS is presented below.
Definition 2.2
[10]. Let \(X\) be a universal set. An SVNS \(A\) in \(X\) is characterized by a truth-membership function \(T_{A} \left( x \right),\) an indeterminacy-membership function \(I_{A} \left( x \right)\) and a falsity-membership function \(F_{A} \left( x \right).\) An SVNS \(A\) can be represented as \(A = \left\{ {x,T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right)\left| {x \in X} \right.} \right\}\) where \(T_{A} \left( x \right), I_{A} \left( x \right), F_{A} \left( x \right) \in \left[ {0,1} \right]\) for each \(x\) in \(X\). The sum of \(T_{A} \left( x \right),I_{A} \left( x \right)\) and \(F_{A} \left( x \right)\) satisfies the condition \(0 \le T_{A} \left( x \right) + I_{A} \left( x \right) + F_{A} \left( x \right) \le 3\). For an SVNS \(A\) in \(X\), the triplet \(\left( {T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right)} \right)\) is called a single-valued neutrosophic number (SVNN), which is the fundamental element of an SVNS.
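For readers who wish to experiment with the definitions, an SVNS over a finite universe can be represented as a mapping from elements to \((T, I, F)\) triplets. The following minimal Python sketch (an illustration added here, not part of the original formulation; the name `make_svnn` is ours) validates the membership constraints of Definition 2.2.

```python
# Illustrative sketch: an SVNS over a finite universe is modelled as a dict
# mapping each element x to a single-valued neutrosophic number (T, I, F).

def make_svnn(t, i, f):
    """Validate and build an SVNN per Definition 2.2:
    T, I, F in [0, 1] and 0 <= T + I + F <= 3."""
    for v in (t, i, f):
        if not 0.0 <= v <= 1.0:
            raise ValueError("membership degrees must lie in [0, 1]")
    # The sum condition is implied by the bounds above, but we check it
    # explicitly to mirror the definition.
    assert 0.0 <= t + i + f <= 3.0
    return (t, i, f)

# An SVNS A = {<x, T_A(x), I_A(x), F_A(x)> | x in X} over X = {x1, x2}:
A = {"x1": make_svnn(0.7, 0.2, 0.1), "x2": make_svnn(0.4, 0.5, 0.3)}
```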
Definition 2.3
[10]. For any two given SVNSs \(A\) and \(B\), the union, intersection, equality, complement and inclusion of \(A\) and \(B\) are defined as shown below:
1.
Complement: \(A^{c} = \left\{ {\left\langle {x,F_{A} \left( x \right),1 - I_{A} \left( x \right),T_{A} \left( x \right)} \right\rangle \left| {x \in X} \right.} \right\}\).
2.
Inclusion: \(A \subseteq B\) if and only if \(T_{A} \left( x \right) \le T_{B} \left( x \right),I_{A} \left( x \right) \ge I_{B} \left( x \right),F_{A} \left( x \right) \ge F_{B} \left( x \right)\) for any \(x\) in \(X\).
3.
Equality: \(A = B\) if and only if \(A \subseteq B{ }\) and \(B \subseteq A\).
4.
Union: \(A \cup B = \{ \langle x,T_{A} \left( x \right) \vee T_{B} \left( x \right),I_{A} \left( x \right) \wedge I_{B} \left( x \right),F_{A} \left( x \right) \wedge F_{B} \left( x \right) \rangle \left| {x \in X} \right. \}\).
5.
Intersection: \(A \cap B = \{ \langle x,T_{A} \left( x \right) \wedge T_{B} \left( x \right),I_{A} \left( x \right) \vee I_{B} \left( x \right),F_{A} \left( x \right) \vee F_{B} \left( x \right) \rangle \left| {x \in X} \right. \}\).
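The five operations of Definition 2.3 translate directly into code. The sketch below is an illustrative Python rendering (function names are ours), with each SVNS stored as a dict `{x: (T, I, F)}` over a common universe.

```python
# Illustrative sketch of Definition 2.3 for SVNSs stored as {x: (T, I, F)}.

def complement(A):
    # A^c = {<x, F_A(x), 1 - I_A(x), T_A(x)>}
    return {x: (f, 1.0 - i, t) for x, (t, i, f) in A.items()}

def union(A, B):
    # T: max, I: min, F: min
    return {x: (max(A[x][0], B[x][0]), min(A[x][1], B[x][1]),
                min(A[x][2], B[x][2])) for x in A}

def intersection(A, B):
    # T: min, I: max, F: max
    return {x: (min(A[x][0], B[x][0]), max(A[x][1], B[x][1]),
                max(A[x][2], B[x][2])) for x in A}

def includes(A, B):
    # A is a subset of B iff T_A <= T_B, I_A >= I_B, F_A >= F_B everywhere
    return all(A[x][0] <= B[x][0] and A[x][1] >= B[x][1]
               and A[x][2] >= B[x][2] for x in A)
```

Equality of two SVNSs then reduces to `includes(A, B) and includes(B, A)`, mirroring item 3 of the definition.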
Definition 2.4
[82]. For any two given SVNSs \(A\) and \(B\), the subtraction and division operations of \(A\) and \(B\) are defined as shown below:
1.
\(A \ominus B= \left\{ {\left. {\left\langle {x,\frac{{T_{A} \left( x \right) - T_{B} \left( x \right)}}{{1 - T_{B} \left( x \right)}},\frac{{I_{A} \left( x \right)}}{{I_{B} \left( x \right)}},\frac{{F_{A} \left( x \right)}}{{F_{B} \left( x \right)}}} \right\rangle } \right|x \in X} \right\}\), which is valid under the conditions \(A \ge B,T_{B} \left( x \right) \ne 1,I_{B} \left( x \right) \ne 0,F_{B} \left( x \right) \ne 0.\)
2.
\(A \oslash B = \left\{ {\left. {\left\langle {x,\frac{{T_{A} \left( x \right)}}{{T_{B} \left( x \right)}},\frac{{I_{A} \left( x \right) - I_{B} \left( x \right)}}{{1 - I_{B} \left( x \right)}},\frac{{F_{A} \left( x \right) - F_{B} \left( x \right)}}{{1 - F_{B} \left( x \right)}}} \right\rangle } \right|x \in X} \right\}\), which is valid under the conditions \(B \ge A,T_{B} \left( x \right) \ne 0,I_{B} \left( x \right) \ne 1,F_{B} \left( x \right) \ne 1.\)
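The validity conditions of Definition 2.4 matter in practice, since violating them leads to division by zero. The following illustrative Python sketch (function names are ours) computes both operations and raises an error when a condition fails.

```python
# Illustrative sketch of Definition 2.4 with explicit validity guards.
# SVNSs are dicts {x: (T, I, F)} over a common universe.

def svns_subtract(A, B):
    # A - B (circled minus), valid when T_B != 1, I_B != 0, F_B != 0
    out = {}
    for x, (ta, ia, fa) in A.items():
        tb, ib, fb = B[x]
        if tb == 1.0 or ib == 0.0 or fb == 0.0:
            raise ZeroDivisionError("subtraction undefined for this pair")
        out[x] = ((ta - tb) / (1.0 - tb), ia / ib, fa / fb)
    return out

def svns_divide(A, B):
    # A / B (circled slash), valid when T_B != 0, I_B != 1, F_B != 1
    out = {}
    for x, (ta, ia, fa) in A.items():
        tb, ib, fb = B[x]
        if tb == 0.0 or ib == 1.0 or fb == 1.0:
            raise ZeroDivisionError("division undefined for this pair")
        out[x] = (ta / tb, (ia - ib) / (1.0 - ib), (fa - fb) / (1.0 - fb))
    return out
```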
Definition 2.5
[83]. For any two given SVNSs \(A\) and \(B,\) the addition and multiplication operations of \(A\) and \(B\) are defined as shown below:
1.
$$ \begin{aligned} A \oplus B & = \{ \langle x,T_{A} \left( x \right) + T_{B} \left( x \right) - T_{A} \left( x \right)T_{B} \left( x \right), \\ &\quad I_{A} \left( x \right)I_{B} \left( x \right),F_{A} \left( x \right)F_{B} \left( x \right) \rangle \left| {x \in X} \right. \}.\end{aligned} $$
2.
$$ \begin{aligned} A \otimes B & = \{ \langle x,T_{A} \left( x \right)T_{B} \left( x \right),I_{A} \left( x \right) + I_{B} \left( x \right) \\ &\quad - I_{A} \left( x \right)I_{B} \left( x \right),F_{A} \left( x \right) + F_{B} \left( x \right) \\ &\quad - F_{A} \left( x \right)F_{B} \left( x \right)\rangle \left| {x \in X} \right. \}. \end{aligned} $$
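Definition 2.5 applies a probabilistic sum to one component and an ordinary product to the others. A minimal Python sketch (illustrative only; function names are ours):

```python
# Illustrative sketch of the addition and multiplication of Definition 2.5.
# SVNSs are dicts {x: (T, I, F)} over a common universe.

def svns_add(A, B):
    # A (+) B: probabilistic sum on T, product on I and F
    return {x: (A[x][0] + B[x][0] - A[x][0] * B[x][0],
                A[x][1] * B[x][1],
                A[x][2] * B[x][2]) for x in A}

def svns_multiply(A, B):
    # A (x) B: product on T, probabilistic sum on I and F
    return {x: (A[x][0] * B[x][0],
                A[x][1] + B[x][1] - A[x][1] * B[x][1],
                A[x][2] + B[x][2] - A[x][2] * B[x][2]) for x in A}
```

Both operations keep every component inside [0, 1], so the SVNN constraints of Definition 2.2 are preserved.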
Definition 2.6
Let \(A\) be an SVNS over a universe \(U.\)
1.
\(A\) is said to be an absolute SVNS, denoted by \(\tilde{A},\) if \(T_{{\tilde{A}}} \left( x \right) = 1, I_{{\tilde{A}}} \left( x \right) = 0\) and \(F_{{\tilde{A}}} \left( x \right) = 0,\) for all \(x \in U.\)
2.
\(A\) is said to be an empty or null SVNS, denoted by \(\phi_{A} ,\) if \(T_{{\phi_{A} }} \left( x \right) = 0, I_{{\phi_{A} }} \left( x \right) = 0\) and \(F_{{\phi_{A} }} \left( x \right) = 1,\) for all \(x \in U.\)
New distance and similarity measures for SVNSs
In this section, we introduce several new formulas for the distance and similarity measures of SVNSs based on the axiomatic definition of the distance and similarity between SVNSs.
Distance measures for single-valued neutrosophic sets
Definition 1
[37] A real function \(D:\Phi \left( X \right) \times \Phi \left( X \right) \to \left[ {0,1} \right]\) is called a distance measure if \(D\) satisfies the following axioms for all \(A,B,C \in \Phi \left( X \right)\):
(D1)
$$0 \le D\left( {A,B} \right) \le 1.$$
(D2)
$$D\left( {A,B} \right) = 0\, {\text{iff }}A = B.$$
(D3)
$$D\left( {A,B} \right) = D\left( {B,A} \right).$$
(D4)
If \( \begin{aligned} & A \subseteq B \subseteq C, {\text{then }}D\left( {A,C} \right) \\ &\quad \ge D\left( {A,B} \right){\text{ and }}D\left( {A,C} \right) \ge D\left( {B,C} \right). \end{aligned} \)
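Axioms (D1)–(D3) can also be spot-checked numerically for any candidate distance function. The helper below is an illustrative Python sketch (not from the paper; the name `check_distance_axioms` is ours) that tests boundedness, the "if" direction of (D2), and symmetry on a list of sample SVNS pairs.

```python
# Illustrative sketch: numerically spot-check axioms (D1)-(D3) of
# Definition 1 for a candidate distance function on sample SVNS pairs.
# Each SVNS is a dict {x: (T, I, F)}.

def check_distance_axioms(dist, pairs, tol=1e-12):
    """Return True if dist satisfies (D1)-(D3) on every pair given."""
    for A, B in pairs:
        d_ab, d_ba = dist(A, B), dist(B, A)
        if not (-tol <= d_ab <= 1.0 + tol):   # (D1) boundedness
            return False
        if abs(d_ab - d_ba) > tol:            # (D3) symmetry
            return False
        if abs(dist(A, A)) > tol:             # (D2), "A = B implies 0" part
            return False
    return True
```

Such a check cannot prove an axiom (that requires the analytic proofs below), but it quickly exposes a measure that violates one.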
Let \(A = \left\langle {x_{i} ,T_{A} \left( {x_{i} } \right),I_{A} \left( {x_{i} } \right),F_{A} \left( {x_{i} } \right)|x_{i} \in X} \right\rangle\) and \(B = \left\langle {x_{i} ,T_{B} \left( {x_{i} } \right),I_{B} \left( {x_{i} } \right),F_{B} \left( {x_{i} } \right)|x_{i} \in X} \right\rangle , i = 1,2, \ldots ,n,\) be two SVNSs over the universe \(X\).
Theorem 1
Let \(A\) and \(B\) be two SVNSs, then \(D_{i} \left( {A,B} \right)\), for \(i = 1,2, \ldots ,11\), is a distance measure between SVNSs.
1.
$$\begin{array}{ll} D_{1} \left( {A,B} \right) = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \Big( \left| T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right) \right| \\ \quad + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \Big)\end{array}$$
2.
$$\begin{aligned} & D_{2} \left( {A,B} \right) = \frac{1}{3\left| X \right|} \mathop \sum \limits_{x \in X} \Big| \left( {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right) \\ &\quad - \left( {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right) - \left( {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right) \Big|\end{aligned} $$
3.
$$\begin{aligned}& D_{3} \left( {A,B} \right) = \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \Big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right|\\ &\quad \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \Big) \end{aligned}$$
4.
$$\begin{aligned} & D_{4} \left( {A,B} \right) = \frac{2}{\left| X \right|}\\ &\quad \mathop \sum \limits_{x \in X} \left\{ {\frac{{\left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}{{1 + \left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}} \right\}\end{aligned}$$
5.
$$\begin{aligned} & D_{5} \left( {A,B} \right) \\ &\quad = \frac{{2\mathop \sum \nolimits_{x \in X} \left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {1 + \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}} \end{aligned} $$
6.
$$\begin{aligned} D_{6} \left( {A,B} \right) & \;\; = 1 - \alpha \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right)}} \\ &\quad - \beta \frac{{\mathop \sum \nolimits_{x \in X} \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right)}} \\ &\quad - \gamma \frac{{\mathop \sum \nolimits_{x \in X} \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}, \\ & \quad \;\;\alpha + \beta + \gamma = 1,\alpha ,\beta ,\gamma \in \left[ {0,1} \right] \\ \end{aligned}$$
7.
$$\begin{aligned} D_{7} \left( {A,B} \right) & = 1 - \frac{\alpha }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right)}} \\ &\quad - \frac{\beta }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right)}}{{\left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right)}} \\ &\quad - \frac{\gamma }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}, \\ & \;\;\;\alpha + \beta + \gamma = 1,\alpha ,\beta ,\gamma \in \left[ {0,1} \right] \\ \end{aligned}$$
8.
$$ \begin{aligned} & D_{8} \left( {A,B} \right) = 1 - \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \\ &\quad \left\{ {\frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}} \right\} \end{aligned} $$
9.
$$\begin{aligned} & D_{9} \left( {A,B} \right) = 1 \\ &\quad - \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}} \end{aligned} $$
10.
\(\begin{aligned}& D_{10} \left( {A,B} \right) = 1 - \frac{1}{\left| X \right|}\\ & \mathop \sum \limits_{x \in X} \left\{ {\frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - F_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \vee \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \vee \left( {1 - F_{B}^{2} \left( x \right)} \right)}} } \right\}\end{aligned}\)
11.
\(\begin{aligned}& D_{11} \left( {A,B} \right) = 1\\ & - \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \vee \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \vee \left( {1 - F_{B}^{2} \left( x \right)} \right)}}\end{aligned}\)
Proof
In order for \(D_{i} \left( {A,B} \right)\left( {i = 1,2, \ldots ,11} \right)\) to be qualified as a valid distance measure for SVNSs, it must satisfy conditions \(\left( {D1} \right)\) to \(\left( {D4} \right)\) in Definition 1. It is straightforward to prove condition (D1), so we prove only conditions (D2) to (D4) for the distance measure \(D_{1} \left( {A,B} \right)\). These conditions can be proven for the rest of the formulas \(D_{2} \left( {A,B} \right)\) to \(D_{11} \left( {A,B} \right)\) in a similar manner.
(D2)\( \left( \Rightarrow \right)\) If \(D_{1} \left( {A,B} \right) = 0,\)
then \(\frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big) = 0\)
\(\therefore \mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big) = 0\)
which holds if and only if \(T_{A}^{2} \left( x \right) = T_{B}^{2} \left( x \right), I_{A}^{2} \left( x \right) = I_{B}^{2} \left( x \right), F_{A}^{2} \left( x \right) = F_{B}^{2} \left( x \right)\) for all \(x \in X\).
i.e., \(T_{A} \left( x \right) = T_{B} \left( x \right), I_{A} \left( x \right) = I_{B} \left( x \right),F_{A} \left( x \right) = F_{B} \left( x \right)\).
i.e., \(A = B\).
\(\left( \Leftarrow \right)\) If \(A = B\), then \(T_{A} \left( x \right) = T_{B} \left( x \right), I_{A} \left( x \right) = I_{B} \left( x \right),F_{A} \left( x \right) = F_{B} \left( x \right), \forall x \in X. \)
\(\begin{aligned} \therefore D_{1} \left( {A,B} \right) & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{A}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {F_{A}^{2} \left( x \right) - F_{A}^{2} \left( x \right)} \right| \right) \\ & = \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{B}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{B}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {F_{B}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = 0. \\ \end{aligned}\)
(D3) Since \(\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| = \left| {T_{B}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right|\) for every \(x \in X\), and likewise for the \(I\) and \(F\) terms, it follows immediately that \(D_{1} \left( {A,B} \right) = D_{1} \left( {B,A} \right).\)
(D4) If \(A \subseteq B \subseteq C\), then we have:
\(T_{A} \left( x \right) \le T_{B} \left( x \right) \le T_{C} \left( x \right),\;\;I_{A} \left( x \right) \ge I_{B} \left( x \right) \ge I_{C} \left( x \right),\;\;F_{A} \left( x \right) \ge F_{B} \left( x \right) \ge F_{C} \left( x \right).\)
Therefore, we have:
\(\left| {T_{A} \left( x \right) - T_{B} \left( x \right)} \right| \le \left| {T_{A} \left( x \right) - T_{C} \left( x \right)} \right|,\;\;\left| {I_{A} \left( x \right) - I_{B} \left( x \right)} \right| \le \left| {I_{A} \left( x \right) - I_{C} \left( x \right)} \right|,\)
\(\left| {F_{A} \left( x \right) - F_{B} \left( x \right)} \right| \le \left| {F_{A} \left( x \right) - F_{C} \left( x \right)} \right|.\)
Since all of the membership values lie in \(\left[ {0,1} \right]\), squaring preserves these inequalities:
\(\therefore \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \le \left| {T_{A}^{2} \left( x \right) - T_{C}^{2} \left( x \right)} \right|,\;\;\left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \le \left| {I_{A}^{2} \left( x \right) - I_{C}^{2} \left( x \right)} \right|,\)
\(\left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \le \left| {F_{A}^{2} \left( x \right) - F_{C}^{2} \left( x \right)} \right|.\)
Summing over \(x \in X\) gives \(D_{1} \left( {A,B} \right) \le D_{1} \left( {A,C} \right)\); the inequality \(D_{1} \left( {B,C} \right) \le D_{1} \left( {A,C} \right)\) follows analogously.
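The distance axioms verified above can be spot-checked numerically. Below is a minimal Python sketch of \(D_{1}\), with each SVNS stored as a list of \((T, I, F)\) triples over \(X\); the three sets and the containment chain \(A \subseteq B \subseteq C\) are illustrative values of our own, not data taken from the paper.

```python
# Sketch of the proposed distance measure D_1; an SVNS is a list of
# (T, I, F) triples, one triple per element of the universe X.
def d1(A, B):
    total = sum(abs(ta**2 - tb**2) + abs(ia**2 - ib**2) + abs(fa**2 - fb**2)
                for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    return total / (3 * len(A))

# Illustrative SVNSs with A ⊆ B ⊆ C (T increasing, I and F decreasing).
A = [(0.3, 0.2, 0.5), (0.6, 0.1, 0.2)]
B = [(0.5, 0.1, 0.4), (0.7, 0.1, 0.1)]
C = [(0.8, 0.05, 0.1), (0.9, 0.0, 0.05)]

assert d1(A, A) == 0.0                                  # (D2)
assert d1(A, B) == d1(B, A)                             # (D3)
assert d1(A, B) <= d1(A, C) and d1(B, C) <= d1(A, C)    # (D4)
```

Squaring the membership grades before taking absolute differences is exactly the operation prescribed by the formula for \(D_{1}\).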
Theorem 2
For \(i = 1,2, \ldots ,11,\) if \(\alpha = \beta = \gamma = \frac{1}{3},\) the following hold:
-
(i)
\(D_{i} \left( {A,B^{c} } \right) = D_{i} \left( {A^{c} ,B} \right), i \ne 10,11\)
-
(ii)
$$D_{i} \left( {A,B} \right) = D_{i} \left( {A \cap B,A \cup B} \right)$$
-
(iii)
$$D_{i} \left( {A,A \cap B} \right) = D_{i} \left( {B,A \cup B} \right)$$
-
(iv)
$$D_{i} \left( {A,A \cup B} \right) = D_{i} \left( {B,A \cap B} \right)$$
Proof
(i) Let \(A = \left( {T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right)} \right), A^{c} = \left( {F_{A} \left( x \right),1 - I_{A} \left( x \right),T_{A} \left( x \right)} \right).\)
(i) For \( \begin{aligned} D_{1} \left( {A,B} \right) & = \frac{1}{{3\left| X \right|}}\sum\limits_{{{\text{x}} \in {\text{X}}}} {\left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right|} \right.} \\ & \;\;\;\left. { + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right), \\ \end{aligned} \) the identity \(D_{1} \left( {A,B^{c} } \right) = D_{1} \left( {A^{c} ,B} \right)\) follows by a direct term-by-term computation, exactly as in the proof of Theorem 4, since \(D_{1} \left( {A,B} \right) = 1 - S_{1} \left( {A,B} \right)\).
(ii) and (iii) follow in the same manner from the corresponding computations for \(S_{1}\).
(iv) The proof is similar to that of (iii) and is therefore omitted.
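Identities (ii)–(iv) can likewise be confirmed numerically. The Python sketch below uses the usual SVNS intersection and union conventions (min on \(T\), max on \(I\) and \(F\) for \(A \cap B\), and dually for \(A \cup B\)); the two sets are illustrative values of our own.

```python
# Numeric spot-check of Theorem 2 (ii)-(iv) for the distance measure D_1.
def d1(A, B):
    return sum(abs(ta**2 - tb**2) + abs(ia**2 - ib**2) + abs(fa**2 - fb**2)
               for (ta, ia, fa), (tb, ib, fb) in zip(A, B)) / (3 * len(A))

def svns_intersection(A, B):
    # A ∩ B: min on T, max on I and F.
    return [(min(ta, tb), max(ia, ib), max(fa, fb))
            for (ta, ia, fa), (tb, ib, fb) in zip(A, B)]

def svns_union(A, B):
    # A ∪ B: max on T, min on I and F.
    return [(max(ta, tb), min(ia, ib), min(fa, fb))
            for (ta, ia, fa), (tb, ib, fb) in zip(A, B)]

A = [(0.7, 0.3, 0.2), (0.4, 0.5, 0.6)]
B = [(0.5, 0.1, 0.4), (0.6, 0.2, 0.3)]
I, U = svns_intersection(A, B), svns_union(A, B)

assert abs(d1(A, B) - d1(I, U)) < 1e-12   # (ii)
assert abs(d1(A, I) - d1(B, U)) < 1e-12   # (iii)
assert abs(d1(A, U) - d1(B, I)) < 1e-12   # (iv)
```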
New similarity measures for SVNSs
Definition 2
[37]. Let \(A\) and \(B\) be two SVNSs, and let \(S:SVNSs\left( X \right) \times SVNSs\left( X \right) \to \left[ {0,1} \right]\) be a mapping. We call \(S\left( {A,B} \right)\) a similarity measure between \(A\) and \(B\) if it satisfies the following properties:
-
(S1)
$$0 \le S\left( {A,B} \right) \le 1.$$
-
(S2)
$$S\left( {A,B} \right) = 1 {\text{iff}}\, A = B.$$
-
(S3)
$$S\left( {A,B} \right) = S\left( {B,A} \right).$$
-
(S4)
$$S\left( {A,C} \right) \le S\left( {A,B} \right)\;{\text{and}}\;S\left( {A,C} \right) \le S\left( {B,C} \right)\;{\text{if}}\;A \subseteq B \subseteq C,\;{\text{for any}}\;C \in SVNS\left( X \right).$$
Theorem 3
Let \(A\) and \(B\) be two SVNSs, then \(S_{i} \left( {A,B} \right)\), for \(i = 1,2, \ldots ,11\), is a similarity measure between SVNSs.
-
(i)
$$ \begin{aligned} & S_{1} \left( {A,B} \right) = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{{{\text{x}} \in {\text{X}}}} \left(\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|\right) \end{aligned}$$
-
(ii)
$$ \begin{aligned} & S_{2} \left( {A,B} \right) = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left| \left( {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right) \right. \\ &\quad \left. - \left( {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right) - \left( {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right) \right| \end{aligned} $$
-
(iii)
$$ \begin{aligned} & S_{3} \left( {A,B} \right) = 1 - \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \end{aligned} $$
-
(iv)
$$S_{4} \left( {A,B} \right) = \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \left\{ {\frac{{1 - \left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}{{1 + \left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}} \right\}$$
-
(v)
$$S_{5} \left( {A,B} \right) = \frac{{\mathop \sum \nolimits_{x \in X} \left( {1 - \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {1 + \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \vee \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \vee \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right|} \right)}}$$
-
(vi)
$$\begin{aligned} S_{6} \left( {A,B} \right) & = \alpha \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right)}} + \beta \frac{{\mathop \sum \nolimits_{x \in X} \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right)}} + \gamma \frac{{\mathop \sum \nolimits_{x \in X} \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}, \\ { } & \;\;\;\alpha + \beta + \gamma = 1,\alpha ,\beta ,\gamma \in \left[ {0,1} \right] \\ \end{aligned}$$
-
(vii)
$$\begin{aligned} S_{7} \left( {A,B} \right) & = \frac{\alpha }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right)}} + \frac{\beta }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right)}}{{\left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right)}} + \frac{\gamma }{\left| X \right|}\mathop \sum \limits_{x \in X} \frac{{\left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}, \\ { } & \;\;\alpha + \beta + \gamma = 1,\alpha ,\beta ,\gamma \in \left[ {0,1} \right]{ } \\ \end{aligned}$$
-
(viii)
$$S_{8} \left( {A,B} \right) = \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \left\{ {\frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}} \right\}$$
-
(ix)
$$S_{9} \left( {A,B} \right) = \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \wedge I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \wedge F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {I_{A}^{2} \left( x \right) \vee I_{B}^{2} \left( x \right)} \right) + \left( {F_{A}^{2} \left( x \right) \vee F_{B}^{2} \left( x \right)} \right)}}$$
-
(x)
$$S_{10} \left( {A,B} \right) = \frac{1}{\left| X \right|}\mathop \sum \limits_{x \in X} \left\{ {\frac{{\left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - F_{B}^{2} \left( x \right)} \right)}}{{\left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \vee \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \vee \left( {1 - F_{B}^{2} \left( x \right)} \right)}} } \right\}$$
-
(xi)
$$S_{11} \left( {A,B} \right) = \frac{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \wedge T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \wedge \left( {1 - F_{B}^{2} \left( x \right)} \right)}}{{\mathop \sum \nolimits_{x \in X} \left( {T_{A}^{2} \left( x \right) \vee T_{B}^{2} \left( x \right)} \right) + \left( {1 - I_{A}^{2} \left( x \right)} \right) \vee \left( {1 - I_{B}^{2} \left( x \right)} \right) + \left( {1 - F_{A}^{2} \left( x \right)} \right) \vee \left( {1 - F_{B}^{2} \left( x \right)} \right)}}$$
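To make the formulas concrete, the sketch below implements three of the proposed measures (\(S_{1}\), \(S_{9}\) and \(S_{11}\)) in Python; an SVNS is a list of \((T, I, F)\) triples, and the single-element pair used at the end corresponds to the SVNNs \(A = (0.4,0.2,0.3)\) and \(B = (0.8,0.4,0.6)\) that appear later in the comparative study.

```python
# Illustrative implementations of S_1, S_9 and S_11; the ∧ and ∨ operators
# of the formulas are min and max on the squared membership grades.
def s1(A, B):
    return 1 - sum(abs(ta**2 - tb**2) + abs(ia**2 - ib**2) + abs(fa**2 - fb**2)
                   for (ta, ia, fa), (tb, ib, fb) in zip(A, B)) / (3 * len(A))

def s9(A, B):
    num = sum(min(ta**2, tb**2) + min(ia**2, ib**2) + min(fa**2, fb**2)
              for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    den = sum(max(ta**2, tb**2) + max(ia**2, ib**2) + max(fa**2, fb**2)
              for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    return num / den

def s11(A, B):
    num = sum(min(ta**2, tb**2) + min(1 - ia**2, 1 - ib**2) + min(1 - fa**2, 1 - fb**2)
              for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    den = sum(max(ta**2, tb**2) + max(1 - ia**2, 1 - ib**2) + max(1 - fa**2, 1 - fb**2)
              for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    return num / den

A = [(0.4, 0.2, 0.3)]
B = [(0.8, 0.4, 0.6)]
values = (s1(A, B), s9(A, B), s11(A, B))
```

Note that the complemented terms \(1 - I^{2}\) and \(1 - F^{2}\) in \(S_{11}\) keep its numerator and denominator from collapsing on boundary-valued SVNNs, which is the behaviour exploited in the comparative study below.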
Proof
In order for \(S_{i} \left( {A,B} \right)\left( {i = 1,2, \ldots ,11} \right)\) to be qualified as a practical similarity measure for SVNSs, it must satisfy the conditions \(\left( {S1} \right)\) to \(\left( {S4} \right)\), listed in Definition 2. It is straightforward to prove condition \(\left( {S1} \right)\) and therefore we only prove conditions \(\left( {S2} \right) \) to \( \left( {S4} \right)\). For the sake of brevity, we only present the proof for \(S_{1} \left( {A,B} \right)\). The proof for the other formulas can be generated in a similar manner.
(S2) For \(S_{1} \left( {A,B} \right) = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{{{\text{x}} \in {\text{X}}}} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big),\) we have the following:
\(\left( \Rightarrow \right)\) If \(S_{1} \left( {A,B} \right) = 1,\)
then \(1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big) = 1\)
\(\therefore \mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big) = 0\)
which would occur if \(T_{A}^{2} \left( x \right) = T_{B}^{2} \left( x \right), I_{A}^{2} \left( x \right) = I_{B}^{2} \left( x \right),F_{A}^{2} \left( x \right) = F_{B}^{2} \left( x \right)\).
i.e., \(T_{A} \left( x \right) = T_{B} \left( x \right), I_{A} \left( x \right) = I_{B} \left( x \right),F_{A} \left( x \right) = F_{B} \left( x \right)\).
i.e., \(A = B\).
\(\left( \Leftarrow \right)\) If \(A = B\), then \(T_{A} \left( x \right) = T_{B} \left( x \right), I_{A} \left( x \right) = I_{B} \left( x \right),F_{A} \left( x \right) = F_{B} \left( x \right), \forall x \in X. \)
\(\begin{aligned} \therefore S_{1} \left( {A,B} \right) & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - I_{A}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{A}^{2} \left( x \right)} \right| \right) \\ & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{B}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{B}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{B}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = 1. \\ \end{aligned} \)
(S3) \(\begin{aligned} S_{1} \left( {A,B} \right) & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{B}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{B}^{2} \left( x \right) - I_{A}^{2} \left( x \right)} \right| + \left| {F_{B}^{2} \left( x \right) - F_{A}^{2} \left( x \right)} \right| \right) \\ & = S_{1} \left( {B,A} \right). \\ \end{aligned} \)
(S4) If \(A \subseteq B \subseteq C\), then we have:
\(T_{A} \left( x \right) \le T_{B} \left( x \right) \le T_{C} \left( x \right),\) \(I_{A} \left( x \right) \ge I_{B} \left( x \right) \ge I_{C} \left( x \right)\), \(F_{A} \left( x \right) \ge F_{B} \left( x \right) \ge F_{C} \left( x \right)\).
Therefore, we have:
\(\left| {T_{A} \left( x \right) - T_{B} \left( x \right)} \right| \le \left| {T_{A} \left( x \right) - T_{C} \left( x \right)} \right|\), \(\left| {I_{A} \left( x \right) - I_{B} \left( x \right)} \right| \le \left| {I_{A} \left( x \right) - I_{C} \left( x \right)} \right|,\) \(\left| {F_{A} \left( x \right) - F_{B} \left( x \right)} \right| \le \left| {F_{A} \left( x \right) - F_{C} \left( x \right)} \right|\).
Since all of the membership values lie in \(\left[ {0,1} \right]\), squaring preserves these inequalities:
\(\therefore \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \le \left| {T_{A}^{2} \left( x \right) - T_{C}^{2} \left( x \right)} \right|\), \(\left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| \le \left| {I_{A}^{2} \left( x \right) - I_{C}^{2} \left( x \right)} \right|\), \(\left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \le \left| {F_{A}^{2} \left( x \right) - F_{C}^{2} \left( x \right)} \right|\).
Summing over \(x \in X\) yields \(S_{1} \left( {A,C} \right) \le S_{1} \left( {A,B} \right)\); the inequality \(S_{1} \left( {A,C} \right) \le S_{1} \left( {B,C} \right)\) follows analogously.
Hence, \(S_{1} \left( {A, B} \right)\) is a similarity measure between SVNSs.
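As a sanity check, the four similarity axioms can also be exercised numerically for \(S_{1}\); the containment chain \(A \subseteq B \subseteq C\) below is an illustrative choice of ours.

```python
# Numeric spot-check of (S1)-(S4) for the proposed measure S_1.
def s1(A, B):
    return 1 - sum(abs(ta**2 - tb**2) + abs(ia**2 - ib**2) + abs(fa**2 - fb**2)
                   for (ta, ia, fa), (tb, ib, fb) in zip(A, B)) / (3 * len(A))

# Illustrative single-element SVNSs with A ⊆ B ⊆ C.
A = [(0.2, 0.6, 0.7)]
B = [(0.5, 0.4, 0.5)]
C = [(0.9, 0.1, 0.2)]

assert 0.0 <= s1(A, B) <= 1.0                           # (S1)
assert s1(A, A) == 1.0                                  # (S2), "if" direction
assert s1(A, B) == s1(B, A)                             # (S3)
assert s1(A, C) <= s1(A, B) and s1(A, C) <= s1(B, C)    # (S4)
```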
Theorem 4
For \(i = 1,2, \ldots ,11,\) if \(\alpha = \beta = \gamma = \frac{1}{3},\) we have:
-
(i)
$$S_{i} \left( {A,B^{c} } \right) = S_{i} \left( {A^{c} ,B} \right), i \ne 10,11$$
-
(ii)
$$S_{i} \left( {A,B} \right) = S_{i} \left( {A \cap B,A \cup B} \right)$$
-
(iii)
$$S_{i} \left( {A,A \cap B} \right) = S_{i} \left( {B,A \cup B} \right)$$
-
(iv)
$$S_{i} \left( {A,A \cup B} \right) = S_{i} \left( {B,A \cap B} \right)$$
Proof
For the sake of brevity, we only prove properties (i) to (iii) for \(S_{1} \left( {A,B} \right)\); it can easily be shown in a similar manner that \(S_{i} \left( {A, B} \right), i = 2, 3, \ldots , 11\), also satisfies properties (i) to (iv) above. The proof of property (iv) is similar to that of property (iii) and is therefore omitted.
-
(i)
For \(S_{1} \left( {A,B} \right) = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \big( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \big)\), we have the following:
$$\begin{aligned} S_{1} \left( {A,B^{c} } \right) & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - \left( {1 - I_{B}^{2} \left( x \right)} \right)} \right| + \left| {F_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right) \\ & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) + I_{B}^{2} \left( x \right) - 1} \right| + \left| {F_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right) \\ & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {F_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {1 - I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {T_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ & = S_{1} \left( {A^{c} ,B} \right). \\ \end{aligned}$$
-
(ii)
$$\begin{aligned} & S_{1} \left( {A \cap B,A \cup B} \right) \\ &\quad = 1 - \frac{1}{3\left| X \right|}\sum\nolimits_{x \in X} \\ &\quad \begin{gathered} \left( {\left| {\left( {\min \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} - \left( {\max \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} } \right|} \right. \hfill \\ + \qquad \quad \left| {\left( {\max \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} - \left( {\min \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} } \right| \hfill \\ \left. { + \left| {\left( {\max \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} - \left( {\min \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} } \right|} \right) \hfill \\ \end{gathered} \\ &\quad = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| \right. \\ &\quad \left. + \left| {I_{A}^{2} \left( x \right) - I_{B}^{2} \left( x \right)} \right| + \left| {F_{A}^{2} \left( x \right) - F_{B}^{2} \left( x \right)} \right| \right) \\ &\quad = S_{1} \left( {A,B} \right). \\ \end{aligned}$$
-
(iii)
$$\begin{aligned} & S_{1} \left( {A,A \cap B} \right) \\ &\quad = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( {\left| {T_{A}^{2} \left( x \right) - \left( {\min \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} } \right| + \left| {I_{A}^{2} \left( x \right)} \right.} \right. \\ & \;\;\quad - \left. {\left. { \left( {\max \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} } \right| + \left| {F_{A}^{2} \left( x \right) - \left( {\max \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} } \right|} \right) \\ \end{aligned}$$$$\begin{aligned} & = 1 - \frac{1}{3\left| X \right|}\mathop \sum \limits_{x \in X} \left( \left| {T_{B}^{2} \left( x \right) - \left( {\max \left( {T_{A} \left( x \right),T_{B} \left( x \right)} \right)} \right)^{2} } \right| \right. \\ &\quad \left. + \left| {I_{B}^{2} \left( x \right) - \left( {\min \left( {I_{A} \left( x \right),I_{B} \left( x \right)} \right)} \right)^{2} } \right| \right. \\ &\quad \left. + \left| {F_{B}^{2} \left( x \right) - \left( {\min \left( {F_{A} \left( x \right),F_{B} \left( x \right)} \right)} \right)^{2} } \right| \right) \end{aligned} $$$$\begin{gathered} \because \left( {\left| {T_{A}^{2} \left( x \right) - T_{B}^{2} \left( x \right)} \right| = \left| {T_{B}^{2} \left( x \right) - T_{A}^{2} \left( x \right)} \right|} \right) \hfill \\ = S_{1} \left( {B,A \cup B} \right) \hfill \\ \end{gathered}$$
-
(iv)
The proof is similar to that of (iii) and is therefore omitted.
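Identities (ii)–(iv) of Theorem 4 can also be confirmed numerically for \(S_{1}\), using the usual min/max conventions for SVNS intersection and union; the data are illustrative. Identity (i) is not included in this check, since the computation above reads the complemented indeterminacy term as \(1 - I^{2}\), a convention a plain \((F, 1 - I, T)\) complement would not reproduce after squaring.

```python
# Numeric spot-check of Theorem 4 (ii)-(iv) for S_1.
def s1(A, B):
    return 1 - sum(abs(ta**2 - tb**2) + abs(ia**2 - ib**2) + abs(fa**2 - fb**2)
                   for (ta, ia, fa), (tb, ib, fb) in zip(A, B)) / (3 * len(A))

def svns_intersection(A, B):
    # A ∩ B: min on T, max on I and F.
    return [(min(ta, tb), max(ia, ib), max(fa, fb))
            for (ta, ia, fa), (tb, ib, fb) in zip(A, B)]

def svns_union(A, B):
    # A ∪ B: max on T, min on I and F.
    return [(max(ta, tb), min(ia, ib), min(fa, fb))
            for (ta, ia, fa), (tb, ib, fb) in zip(A, B)]

A = [(0.6, 0.4, 0.1), (0.2, 0.7, 0.5)]
B = [(0.3, 0.2, 0.6), (0.8, 0.3, 0.2)]
I, U = svns_intersection(A, B), svns_union(A, B)

assert abs(s1(A, B) - s1(I, U)) < 1e-12   # (ii)
assert abs(s1(A, I) - s1(B, U)) < 1e-12   # (iii)
assert abs(s1(A, U) - s1(B, I)) < 1e-12   # (iv)
```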
Comparative studies
In this section, we conduct a comparative analysis between the proposed similarity measures and other existing similarity measures presented in the literature to show the drawbacks of the existing similarity measures and the advantages of the suggested similarity measures.
Existing similarity measures for SVNSs
In this subsection, we present a detailed and comprehensive comparative study of the previously defined similarity measures and some existing similarity measures in the literature. The existing similarity measures that will be considered in this comparative study are listed in Table 1.
Comparison between the proposed and existing similarity measures for SVNSs using artificial sets
In this subsection, we use 10 artificial sets of SVNSs that consist of a combination of special SVNNs to do a thorough comparison between the proposed similarity measures and existing similarity measures which are listed in Table 1. The results from this comparative study are presented in Table 2, where all values in bold indicate unreasonable results. From Table 2, it can be clearly seen that the proposed similarity measures \(S_{10}\) and \(S_{11} \) are able to overcome the shortcomings that are inherent in the existing similarity measures by producing reasonable results in all 10 cases that are studied. The drawbacks and problems that are inherent in existing similarity measures are discussed in detail in “Discussion and analysis of results”.
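The boundary behaviour that separates \(S_{11}\) from ratio-type measures can be reproduced directly. The sketch below uses minimal Python implementations of \(S_{9}\) and \(S_{11}\) (SVNSs as lists of \((T, I, F)\) triples) on case 8 of Table 2, \(A = (1,0,0)\) and \(B = (0,0,1)\).

```python
# Case 8 of Table 2: a pure min/max ratio measure (s9) collapses to 0,
# while s11's complemented I and F terms keep the value graded.
def s9(A, B):
    num = sum(min(ta**2, tb**2) + min(ia**2, ib**2) + min(fa**2, fb**2)
              for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    den = sum(max(ta**2, tb**2) + max(ia**2, ib**2) + max(fa**2, fb**2)
              for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    return num / den

def s11(A, B):
    num = sum(min(ta**2, tb**2) + min(1 - ia**2, 1 - ib**2) + min(1 - fa**2, 1 - fb**2)
              for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    den = sum(max(ta**2, tb**2) + max(1 - ia**2, 1 - ib**2) + max(1 - fa**2, 1 - fb**2)
              for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    return num / den

A = [(1.0, 0.0, 0.0)]
B = [(0.0, 0.0, 1.0)]

assert s9(A, B) == 0.0      # treats A and B as 100% different
assert s11(A, B) == 1 / 3   # a graded, more reasonable value
```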
Discussion and analysis of results
The results obtained when the 10 sets of SVNSs were applied to the formulas in Table 1 are discussed and analyzed in the current subsection. The results which are shown in bold in Table 2 indicate unreasonable results, and the reasons for classifying these specific results as unreasonable are discussed below.
-
(i)
It can be clearly seen that condition (S2) is not satisfied in similarity measures \(S_{Y3} , S_{Y11} \) and \(S_{CY}\), when \(A\) and \(B\) are clearly not equal:
-
\(S_{Y3} \left( {A,B} \right) = 1\), when \(A = \left( {0.4,0.2,0.6} \right)\) and \(B = \left( {0.2,0.1,0.3} \right)\)
-
\(S_{Y3} \left( {A,B} \right) = 1\), when \(A = \left( {0.4,0.2,0.3} \right)\) and \(B = \left( {0.8,0.4,0.6} \right)\)
-
\(S_{Y11} \left( {A,B} \right) = 1\), when \(A = \left( {0.4,0.2,0.3} \right)\) and \(B = \left( {0.8,0.4,0.6} \right)\)
-
\(S_{CY} \left( {A,B} \right) = 1\), when \(A = \left( {0.3,0.3,0.4} \right)\) and \(B = \left( {0.4,0.3,0.3} \right)\)
-
\(S_{Y11} \left( {A,B} \right) = 1\), when \(A = \left( {0.4,0.2,0.6} \right)\) and \(B = \left( {0.2,0.1,0.3} \right)\)
-
\(S_{CY} \left( {A,B} \right) = 1\), when \(A = \left( {1,0,0} \right)\) and \(B = \left( {0,0,1} \right)\).
-
(ii)
Some similarity measures fail to handle the division by zero problem. These include case 8, for \(S_{YZ} , S_{6}\) and \(S_{7}\), when \(A = \left( {1,0,0} \right), B = \left( {0,0,1} \right)\), and case 9, for \(S_{Y3} , S_{Y11} , S_{DGZ2} ,\) \(S_{YZ} ,S_{P} ,S_{6} \) and \(S_{7}\), when \(A = \left( {1,0,0} \right), B = \left( {0,0,0} \right)\).
-
(iii)
It can be clearly seen that condition (S1) is not met in similarity measure \(S_{S}\), since \(S_{S} \left( {A,B} \right) = - 0.0833\), when \(A = \left( {1,0,0} \right)\) and \(B = \left( {1,0,0} \right)\).
-
(iv)
We also can see that \(S_{Y1} \left( {A,B} \right) = S_{Y2} \left( {A,B} \right) = S_{Y3} \left( {A,B} \right) = S_{Y4} \left( {A,B} \right) = S_{Y8} \left( {A,B} \right) = S_{Y9} \left( {A,B} \right) = S_{Y11} \left( {A,B} \right) = S_{YF1} \left( {A,B} \right) = S_{M} \left( {A,B} \right) = S_{L} \left( {A,B} \right) = S_{P} \left( {A,B} \right) = S_{BS} \left( {A,B} \right) = S_{3} \left( {A,B} \right) = S_{4} \left( {A,B} \right) = S_{5} \left( {A,B} \right) = S_{8} \left( {A,B} \right) = S_{9} \left( {A,B} \right) = 0\), when \(A = \left( {1,0,0} \right)\), \(B = \left( {0,0,1} \right)\), even though \(A\) and \(B\) are clearly not completely different (i.e., not 100% different). A similar case exists for \(S_{Y1} \left( {A,B} \right) = S_{Y2} \left( {A,B} \right) = S_{Y4} \left( {A,B} \right) = S_{Y8} \left( {A,B} \right) = S_{Y9} \left( {A,B} \right) = S_{YF1} \left( {A,B} \right) = S_{M} \left( {A,B} \right) = S_{L} \left( {A,B} \right) = S_{CY} \left( {A,B} \right) = S_{BS} \left( {A,B} \right) = S_{3} \left( {A,B} \right) = S_{4} \left( {A,B} \right) = S_{5} \left( {A,B} \right) = S_{8} \left( {A,B} \right) = S_{9} \left( {A,B} \right) = 0\), when \(A = \left( {1,0,0} \right), B = \left( {0,0,0} \right)\), even though these two SVNSs are also clearly not completely different (i.e., not 100% different).
-
(v)
Moreover, \(S_{SOUKS} , S_{MP1} , S_{MP3}\) and \(S_{CY}\) produce unreasonable results in case 7, when \(A = \left( {1,0,0} \right), B = \left( {0,1,1} \right),\) that is when \(A\) and \(B\) are clearly opposites:
\(S_{SOUKS} \left( {A,B} \right) = 0.2222\)
\(S_{MP1} \left( {A,B} \right) = 0.5432\)
\(S_{MP3} \left( {A,B} \right) = 0.0893\)
\(S_{CY} \left( {A,B} \right) = 0.6667\)
-
(vi)
Some of the existing similarity measures (namely the measures \(S_{Y1} , S_{Y2} , S_{Y3} , S_{Y4} , S_{Y5} , S_{Y6} , S_{Y8} , S_{Y9} , S_{Y10} , S_{Y11} , S_{Y12} ,S_{YF1} , S_{YF2, } S_{YZ} ,S_{M} ,S_{DGZ1} ,S_{DGZ2} ,S_{SOUKS} ,S_{H} ,S_{L} ,S_{P} ,\)
\(S_{GN} , S_{MP2} , S_{MP3} , S_{FY} , S_{CY} , S_{W}\) and \(S_{BS}\)) and the proposed similarity measures (namely the measures \(S_{1} , S_{2} , S_{3} , S_{4} , S_{5} ,\) \(S_{6} ,S_{7} ,S_{8}\) and \(S_{9}\)) fail to distinguish the positive difference and negative difference. For instance, \(S_{Y1} \left( {A,B} \right) = S_{Y1} \left( {C,D} \right) = 0.9737\), when \(A = \left( {0.3,0.3,0.4} \right), B = \left( {0.4,0.3,0.4} \right)\) and \(C = \left( {0.3,0.3,0.4} \right), D = \left( {0.3,0.4,0.4} \right).\)
-
(vii)
Many of the similarity measures have been found to produce unreasonable results in some of the cases which are shown in Table 2. The affected cases are the following:
-
Case 3 and case 6 for \(S_{Y1}\)
-
Case 3 and case 6 for \(S_{Y2}\)
-
Case 4 for \(S_{Y4}\)
-
Case 3 and case 6 for \(S_{Y8}\)
-
Case 4 for \(S_{Y9}\)
-
Case 3, case 4, case 5, case 9 for \(S_{Y12}\)
-
Case 3 and case 6 for \(S_{YZ}\)
-
Case 3 and case 6 for \(S_{M}\)
-
Case 4 and case 5 for \(S_{SOUKS}\)
-
Case 2 and case 4 for \(S_{MB1}\)
-
Case 2 and case 4 for \(S_{MB2}\)
-
Case 3 and case 6 for \(S_{P}\)
-
Case 3, case 4 and case 5 for \(S_{GN}\)
-
Case 3 and case 6 for \(S_{CY}\)
-
Case 4 for \(S_{BS}\)
-
Case 4 and 5 for \(S_{PS}\)
-
Case 3 and case 6 for \(S_{6}\)
-
Case 3 and case 6 for \(S_{7}\)
-
Case 3 and case 6 for \(S_{8}\)
-
Case 3 and case 6 for \(S_{9}\)
This observation indicates that the aforementioned similarity measures may be impractical and difficult to be used in practical applications.
-
(viii)
From Table 2, it can be seen that the existing similarity measures \(S_{Y7} , S_{RXZ} \) and the proposed similarity measures \(S_{10} ,S_{11}\) are the only similarity measures that are able to produce reasonable results for every one of the 10 cases that were examined in this subsection. Hence, it can be concluded that the proposed similarity measures \(S_{10}\) and \(S_{11}\) are superior to the remaining existing similarity measures and as effective as the existing similarity measures \(S_{Y7}\) and \(S_{RXZ}\).
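The division-by-zero failures noted in point (ii) are easy to reproduce. The sketch below evaluates a per-component ratio measure in the style of \(S_{7}\) (with equal weights \(\alpha = \beta = \gamma = \frac{1}{3}\)) on case 9, \(A = (1,0,0)\), \(B = (0,0,0)\), where the indeterminacy and falsity ratios degenerate to \(0/0\).

```python
# A ratio measure in the style of S_7 with equal weights; on case 9 the
# I and F components evaluate 0/0, which raises ZeroDivisionError in Python.
def s7_style(A, B):
    n = len(A)
    t = sum(min(ta**2, tb**2) / max(ta**2, tb**2)
            for (ta, _, _), (tb, _, _) in zip(A, B))
    i = sum(min(ia**2, ib**2) / max(ia**2, ib**2)
            for (_, ia, _), (_, ib, _) in zip(A, B))
    f = sum(min(fa**2, fb**2) / max(fa**2, fb**2)
            for (_, _, fa), (_, _, fb) in zip(A, B))
    return (t + i + f) / (3 * n)

A = [(1.0, 0.0, 0.0)]
B = [(0.0, 0.0, 0.0)]

try:
    s7_style(A, B)
    division_failed = False
except ZeroDivisionError:
    division_failed = True

assert division_failed
```

Measures such as \(S_{10}\) and \(S_{11}\) avoid this failure mode because their denominators include the complemented terms \(1 - I^{2}\) and \(1 - F^{2}\), which vanish only in the extreme situation where both SVNSs have \(T = 0\) and \(I = F = 1\) everywhere.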
Applications of the proposed similarity measures
In this section, we study the performance of the existing similarity measures and the proposed similarity measures by applying all these measures to two MCDM problems related to pattern recognition and medical diagnosis. The rankings obtained are further tested using Spearman’s rank correlation coefficient test, and the results obtained clearly prove that the proposed similarity measures \(S_{10}\) and \(S_{11}\) are superior even to the existing similarity measures \(S_{RXZ}\) and \(S_{Y7}\).
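Spearman’s rank correlation coefficient, used here to compare ranking orders, is simple to compute for rankings without ties; the helper below is a generic sketch of ours, not code from the paper.

```python
# Spearman's rank correlation for two rankings of the same n alternatives
# (no ties): rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
def spearman_rho(rank1, rank2):
    n = len(rank1)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank1, rank2))
    return 1 - 6 * d_squared / (n * (n**2 - 1))

assert spearman_rho([1, 2, 3, 4], [1, 2, 3, 4]) == 1.0    # identical rankings
assert spearman_rho([1, 2, 3, 4], [4, 3, 2, 1]) == -1.0   # reversed rankings
```

A coefficient close to 1 between the ranking produced by a proposed measure and those of the other measures indicates a consistent ranking result.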
Application of the similarity measures in a pattern recognition problem
Suppose that there are \(r\) patterns and they are expressed by SVNSs. Suppose \(A_{i} = \left\{ {x_{j} ;T_{A_{i}} \left( {x_{j} } \right), I_{A_{i}} \left( {x_{j} } \right),F_{A_{i}} \left( {x_{j} } \right)} \right\},\left( {i = 1,2, \ldots ,r} \right)\) are \(r\) patterns in a given universe of discourse \(X = \left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\}.\) Let \(B = \left\{ {x_{j} ;T_{B} \left( {x_{j} } \right), I_{B} \left( {x_{j} } \right),F_{B} \left( {x_{j} } \right)} \right\}\) be a sample that needs to be recognized. The objective is to assign pattern \(B\) to one of the patterns \(A_{1} ,A_{2} , \ldots ,A_{r}\) based on the principle of maximum similarity, i.e., the larger the value of the similarity measure between \(A_{i}\) and \(B\), the more similar \(A_{i}\) and \(B\) are.
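The maximum-similarity rule described above amounts to an arg-max over the known patterns. A minimal Python sketch, with \(S_{1}\) as the plugged-in measure and invented pattern data (not the values of Table 3):

```python
# Classify a sample SVNS by the principle of maximum similarity.
def s1(A, B):
    return 1 - sum(abs(ta**2 - tb**2) + abs(ia**2 - ib**2) + abs(fa**2 - fb**2)
                   for (ta, ia, fa), (tb, ib, fb) in zip(A, B)) / (3 * len(A))

def classify(sample, patterns, sim):
    # patterns: dict mapping a pattern name to its SVNS representation.
    return max(patterns, key=lambda name: sim(sample, patterns[name]))

patterns = {
    "A1": [(0.9, 0.1, 0.1)],   # illustrative known patterns
    "A2": [(0.3, 0.5, 0.6)],
}
sample = [(0.8, 0.2, 0.1)]

assert classify(sample, patterns, s1) == "A1"
```

Any of the eleven proposed measures can be substituted for the `sim` argument without changing the classification logic.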
Example 1
A numerical example adapted from Garg and Nancy [54] is used here to illustrate the effectiveness of the proposed similarity measures. Suppose that there are 3 known patterns \(A_{1} , A_{2} , \) and \(A_{3}\) which are represented by specific SVNSs, in a given universe of discourse \(X = \left\{ {x_{1} , x_{2} ,x_{3} ,x_{4} } \right\}\), and an unknown pattern \(B \in SVNS\left( X \right),\) all of which are presented in Table 3.
The values of the similarity measures between \(B\) and \(A_{k}\), \(k = 1, 2, 3\) have been computed for all of the proposed similarity measures, \(S_{i} ,i = 1,2 \ldots ,11\), and the results are presented in Table 4. Note that values in bold indicate the largest value of the corresponding similarity measure.
From Table 4, it can be seen that all of the proposed similarity measures produced the same ranking (i.e., \(A_{2} > A_{3} > A_{1}\)), except for measure \(S_{6}\), which produced a slightly different ranking (i.e., \(A_{2} > A_{1} > A_{3}\)). Nevertheless, based on the ranking orders produced by all of the proposed similarity measures, it can be clearly concluded that sample \(B\) belongs to pattern \(A_{2}\).
Performance of existing similarity measures in the pattern recognition problem
In the following, we present a comparative analysis of the performance of the existing similarity measures and the proposed similarity measures to further illustrate the effectiveness of the proposed similarity measures. The existing similarity measures of SVNSs, which were given in Table 1, are applied to the pattern recognition problem presented in Example 1. The results obtained are summarized in Table 5. Note that the row in bold indicates a different ranking order.
From Table 5, it can be seen that all of the existing similarity measures produced the same ranking order as the proposed similarity measures, except for measure \(S_{CY}\), which produced the same ranking as measure \(S_{6}\). This demonstrates the consistency and effectiveness of the proposed similarity measures.
Application of the similarity measures in a medical diagnosis problem
Ye [42] proposed a medical diagnosis method which considers a set of diagnoses \(Q = \left\{ {Q_{1} ,Q_{2} , \ldots ,Q_{n} } \right\}\) and a set of symptoms \(S = \left\{ {s_{1} ,s_{2} ,s_{3} , \ldots ,s_{m} } \right\}\). Assume that a patient \(P\) with varying degrees of all the symptoms is taken as a sample. The characteristic information of \(Q, S\) and \(P\) is represented in the form of SVNSs. The diagnosis \(Q_{i}\) for patient \(P\) is determined by \(i = \arg \max { }\left\{ {S\left( {P,Q_{i} } \right)} \right\}.\) In the following, we will consider a numerical example adapted from [42] to illustrate the feasibility and effectiveness of the proposed new similarity measures.
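Ye’s arg-max diagnosis rule can be sketched in the same style as the pattern recognition rule; the diagnoses and patient data below are invented placeholders, not the values of Tables 6 and 7.

```python
# Diagnose a patient P by the diagnosis Q_i maximizing S(P, Q_i).
def s1(A, B):
    return 1 - sum(abs(ta**2 - tb**2) + abs(ia**2 - ib**2) + abs(fa**2 - fb**2)
                   for (ta, ia, fa), (tb, ib, fb) in zip(A, B)) / (3 * len(A))

def diagnose(patient, diagnoses, sim):
    # Returns the best-matching diagnosis name plus all similarity scores.
    scores = {name: sim(patient, q) for name, q in diagnoses.items()}
    best = max(scores, key=scores.get)
    return best, scores

diagnoses = {
    "Q1": [(0.4, 0.6, 0.0)],   # placeholder characteristic SVNSs
    "Q2": [(0.7, 0.3, 0.0)],
}
patient = [(0.8, 0.2, 0.1)]

best, scores = diagnose(patient, diagnoses, s1)
assert best == "Q2"
```

Returning the full score dictionary alongside the arg-max makes it easy to spot the inconclusive ties discussed later for \(S_{GN}\).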
Example 2
A medical diagnosis problem adapted from [42] is described below. Assume a set of diagnoses \(Q\) and a set of symptoms \(R\) which are defined as follows:
\(Q = \{ Q_{1}\) (viral fever), \(Q_{2}\) (malaria), \(Q_{3}\) (typhoid), \(Q_{4}\) (gastritis), \(Q_{5}\) (stenocardia)\(\}\)
and \(R = \{ r_{1}\) (fever), \(r_{2}\) (headache), \(r_{3} \)(stomach pain), \(r_{4}\) (cough), \(r_{5}\) (chest pain)\(\} .\)
The characteristic values of the considered diseases are represented in the form of SVNSs and they are shown in Table 6.
In the medical diagnosis, assume that we take a sample from a patient \(P_{1}\) with all the symptoms, which is represented by the following SVNS information:
\(P_{1} = \left\{ {\begin{array}{*{20}l} <{r_{1} ,0.8, 0.2,0.1>, <r_{2} ,0.6,0.3,0.1>, <r_{3} ,0.2,0.1,0.8>,} \\ {<r_{4} ,0.6,0.5,0.1>, <r_{5} ,0.1,0.4,0.6>} \\ \end{array} } \right\}\).
By applying the proposed formulas (\(S_{1} , S_{2} , S_{3} , S_{4} , S_{5} , S_{6} , S_{7} ,S_{8} , S_{9} , S_{10}\) and \(S_{11}\)), we obtain the corresponding similarity measure values \(S_{i} \left( {P_{1} ,Q_{i} } \right)\left( {i = 1,2, \ldots ,11} \right)\) which are shown in Table 7. Note that values in bold indicate the largest value of the corresponding similarity measure.
From Table 7, it can be seen that only formulas \(S_{2}\) and \(S_{7}\) produced results that are inconsistent with those of the other proposed formulas. Since the largest similarity value indicates the proper diagnosis, we conclude that the diagnosis of patient \(P_{1}\) is \(Q_{2}\) (malaria) in all cases except those of \(S_{2}\) and \(S_{7}\), under which the patient was diagnosed with viral fever and typhoid, respectively. These results are consistent with those presented by Ye in [42], from where this dataset and the corresponding example were adapted. The medical diagnosis process in [42] also concludes that the diagnosis of patient \(P_{1}\) is malaria, which shows that the proposed similarity measures are feasible, practical and effective.
Performance of the existing similarity measures in the medical diagnosis problem
To further demonstrate the feasibility and effectiveness of the proposed similarity measures in the medical diagnosis problem under study, the performance of the existing similarity measures for SVNSs listed in Table 1 is examined by applying these measures to Example 2. The results obtained are given in Table 8. Note that values in bold indicate the largest value of the corresponding similarity measure.
As we can see from Table 8, \(S_{GN}\) produces inconclusive results, as it gives the same value for both \(Q_{2}\) and \(Q_{3}\). Therefore, additional steps or further analysis would be needed in this case to distinguish these values and determine the correct diagnosis for the patient, which indicates that the corresponding similarity formula is not able to handle all types of data.
Furthermore, patient \(P_{1}\) is still diagnosed with malaria \(\left( {Q_{2} } \right)\) by all of the existing similarity measures except \(S_{YZ}\), \(S_{RXZ}\) and \(S_{SOUKS}\). This is a clear indication that the results produced by the proposed similarity measures are consistent with those of the existing similarity measures, thereby proving that the proposed formulas are feasible, effective and practical measures for computing the similarity between SVNSs.
Ranking analysis with Spearman’s rank correlation coefficient
From “Discussion and analysis of results”, it can be clearly seen that only the existing similarity measures \(S_{YZ}\) and \(S_{RXZ}\), as well as our proposed similarity measures \(S_{10}\) and \(S_{11}\), were able to avoid unreasonable or counter-intuitive results in all of the 10 cases that were studied. In the pattern recognition problem, all four of these similarity measures (\(S_{YZ}\), \(S_{RXZ}\), \(S_{10}\) and \(S_{11}\)) also produced exactly the same rankings. However, in the medical diagnosis problem in Example 2, the rankings of the diagnoses obtained by \(S_{YZ}\) and \(S_{RXZ}\) differ from those obtained by our proposed similarity measures \(S_{10}\) and \(S_{11}\). The proposed measures \(S_{10}\) and \(S_{11}\) obtained \(Q_{2}\) as the optimal decision value and, therefore, diagnosed the patient with malaria, whereas \(S_{YZ}\) and \(S_{RXZ}\) obtained \(Q_{3}\) and \(Q_{1}\) as the optimal decision values, respectively, thus diagnosing the patient with typhoid and viral fever, respectively.
To analyze the differences in the rankings in more detail, a further verification of the results was carried out using the Spearman’s rank correlation coefficient test. The Spearman’s rank correlation coefficient, denoted by \(\rho\), is shown below, and the results of the test are presented in Table 9.
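The original display of the coefficient appears to have been lost in extraction; for rankings of \(n\) alternatives without ties, the standard definition is:

```latex
\rho = 1 - \frac{6\sum_{i=1}^{n} d_{i}^{2}}{n\left(n^{2} - 1\right)},
```

where \(d_{i}\) is the difference between the two ranks assigned to the \(i\)-th alternative. A value of \(\rho = 1\) indicates identical (perfectly correlated) rankings, while \(\rho = -1\) indicates completely reversed rankings.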
From the results in Table 9, it can be clearly seen that our proposed similarity measures \(S_{10}\) and \(S_{11}\) produced rankings that are perfectly correlated with the actual ranking calculated by Ye in [42], from where the dataset and the corresponding example were adapted, while the rankings obtained by the existing measures \(S_{RXZ}\) and \(S_{YZ}\) are clearly less correlated with the actual ranking presented in [42]. This proves that our proposed similarity measures are not only as feasible and effective as the existing similarity measures, but also superior to the best of the existing similarity measures in the relevant literature listed in Table 1.
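As an illustration of how the entries of a table such as Table 9 can be obtained, the sketch below implements the standard (tie-free) form of Spearman's \(\rho\) and applies it to hypothetical rankings of five diagnoses; these are placeholders, not the actual rankings reported in Table 9.

```python
def spearman_rho(rank_a, rank_b):
    """Spearman's rank correlation coefficient for two rankings of the
    same n alternatives, assuming no ties: rho = 1 - 6*sum(d^2)/(n(n^2-1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical rankings of five diagnoses (1 = best match) -- NOT Table 9.
actual   = [2, 1, 3, 4, 5]   # e.g. the reference ranking
proposed = [2, 1, 3, 4, 5]   # a measure that reproduces it exactly
other    = [1, 3, 2, 4, 5]   # a measure that permutes the top three

print(spearman_rho(actual, proposed))  # 1.0 -> perfectly correlated
print(spearman_rho(actual, other))
```

A measure whose ranking reproduces the reference ranking exactly scores \(\rho = 1\), which is the behavior observed for \(S_{10}\) and \(S_{11}\).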
Summary of the discussion and overall evaluation of the results
Through the comparative analyses that were carried out, a few major weaknesses and inherent problems were identified in many of the existing similarity measures. Some of the existing measures did not fulfill the axiomatic requirements, failed to distinguish between positive and negative differences, failed to produce any results due to the division-by-zero problem, or produced counter-intuitive or unreasonable results in some cases. From the results of the comparative study presented in “Comparison between the proposed and existing similarity measures for SVNSs using artificial sets” and shown in Table 2, it was found that only 2 of the existing similarity measures (\(S_{RXZ}\) and \(S_{YZ}\)) and 2 of the proposed similarity measures (\(S_{10}\) and \(S_{11}\)) did not produce any unreasonable or counter-intuitive results. Furthermore, the Spearman’s rank correlation coefficient test done in “Ranking analysis with Spearman’s rank correlation coefficient” made it evident that the proposed similarity measures \(S_{10}\) and \(S_{11}\) also had the highest correlation with the actual ranking, thereby proving that these similarity measures are superior to the existing measures \(S_{RXZ}\) and \(S_{YZ}\).
We also compared the performance of the two proposed similarity measures \(S_{10}\) and \(S_{11}\) in terms of the discriminative power of the results obtained via these two formulas. From the illustrative examples given in “Application of the similarity measures in a pattern recognition problem” and “Application of the similarity measures in a medical diagnosis problem”, it can be observed that both proposed similarity measures produced exactly the same rankings as the actual rankings, which indicates that both measures are effective and feasible. However, \(S_{11}\) has a higher level of discriminative power than \(S_{10}\). This can be observed from the results obtained when applying these measures to the pattern recognition and medical diagnosis problems in Tables 4 and 7, respectively, in which the decision values are extremely close to one another. \(S_{11}\) better discriminates the decision values and produces results that show a clear distinction between them. Using this measure, we were able to distinguish between the decision values, rank the alternatives clearly and, consequently, make clear and firm decisions. Furthermore, \(S_{11}\) has a lower computational complexity. Hence, it can be concluded that \(S_{11}\) is superior to \(S_{10}\).
Conclusions
The concluding remarks and the significant contributions of the presented approach are summarized below:
1. New formulas for the distance and similarity measures for SVNSs have been developed in an effort to improve on and/or overcome the drawbacks that are inherent in existing distance and similarity measures for SVNSs.

2. The fundamental algebraic properties of the proposed distance and similarity measures were presented and verified.

3. To demonstrate the effectiveness and superiority of our proposed formulas, a comprehensive comparative analysis was conducted considering all the existing similarity measures in the relevant literature. The analysis used 10 cases corresponding to different combinations of SVNNs, some of which were counter-intuitive. Many of the existing similarity measures produced unreasonable or counter-intuitive results, while others could not produce any results at all due to the division-by-zero problem. Our proposed similarity measures, on the other hand, were able to produce reasonable results in most cases, and two of them (\(S_{10}\) and \(S_{11}\)) were found to be the best among all of the proposed formulas and superior to almost all of the existing formulas, as they produced reasonable and accurate results in every single one of the cases studied.

4. The proposed and existing similarity measures were applied to two MCDM problems related to pattern recognition and medical diagnosis, adapted from Garg and Nancy [54] and Ye [42], respectively. It was shown that the proposed similarity measures produce results that are consistent with those obtained via the existing similarity measures, thereby confirming that the suggested similarity measures are feasible, effective and practical for use in solving MCDM problems.

5. We went a step further in this study by conducting a two-pronged comparative study to determine the performance of the existing and proposed similarity measures. From the first comparative study stated in (3) above, it was concluded that only 2 of the existing similarity measures and 2 of our proposed similarity measures were able to produce reasonable results in every single one of the 10 cases studied. After eliminating all but these 4 measures, we studied their performance in the two MCDM problems related to pattern recognition and medical diagnosis expounded in (4) above. The rankings obtained were further scrutinized by applying the Spearman’s rank correlation coefficient test to the rankings produced by the 4 similarity measures: the 2 existing measures \(S_{YZ}\) and \(S_{RXZ}\) and the 2 proposed measures \(S_{10}\) and \(S_{11}\). The results of the test verified the superiority of our proposed measures \(S_{10}\) and \(S_{11}\), as both produced rankings that are perfectly correlated with the actual rankings.

6. To determine the more superior of the two proposed similarity measures \(S_{10}\) and \(S_{11}\), we analyzed their discriminative power. It was concluded that \(S_{11}\) is superior to \(S_{10}\), as it has a higher discriminative power and a lower computational complexity.
Suggestions for future research
The future direction of this work involves the development of other improved information measures, such as entropy, cross-entropy and inclusion measures for SVNSs, that are free from the problems inherent in the corresponding existing measures. We also intend to apply the proposed measures to actual datasets of real-world problems instead of hypothetical datasets [85,86,87,88,89,90,91]. To accomplish these goals, however, an effective method of converting the crisp data found in real-life datasets into SVNS form has to be developed, so that the conversion does not incur any significant loss of information that could affect the accuracy of the obtained results.
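As a toy illustration of the conversion problem mentioned above (and emphatically not a method proposed in this paper), a crisp attribute value could be mapped to a single-valued neutrosophic number by min-max normalization, with a fixed indeterminacy degree expressing uncertainty about the conversion itself:

```python
# Naive illustration only: map a crisp value x in [lo, hi] to an SVNN
# (T, I, F) by min-max normalizing x to a truth degree T, taking
# F = 1 - T, and using a fixed indeterminacy I. A practical conversion
# scheme would need to be designed and validated per application.

def crisp_to_svnn(x, lo, hi, indeterminacy=0.1):
    """Map a crisp value x in [lo, hi] to a (T, I, F) triple."""
    t = (x - lo) / (hi - lo)
    return (round(t, 3), indeterminacy, round(1 - t, 3))

# Hypothetical example: a body temperature of 38.5 C on a [36, 41] scale.
print(crisp_to_svnn(38.5, 36.0, 41.0))  # (0.5, 0.1, 0.5)
```

Note that this scheme forces \(T + F = 1\) and a constant \(I\), so it cannot express genuinely independent truth, indeterminacy and falsity degrees; designing a conversion that preserves that independence is precisely the open problem stated above.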
References
Lukasiewicz J (1930) Philosophical remarks on many-valued systems of propositional logic. North-Holland, Amsterdam
Black M (1937) Vagueness: an exercise in logical analysis. Philos Sci 4(4):427–455
Zadeh LA (1965) Fuzzy sets. Inf Control 8(3):338–353
Phuong NH, Thang VV, Hirota K (2000) Case based reasoning for medical diagnosis using fuzzy set theory. Int J Biomed Soft Comput Hum Sci 5(2):1–7
Shahzadi G, Akram M, Saeid AB (2017) An application of single-valued neutrosophic sets in medical diagnosis. Neutrosophic Sets Syst 18:80–88
Tobias OJ, Seara R (2002) Image segmentation by histogram thresholding using fuzzy sets. IEEE Trans Image Process 11(12):1457–1465
Atanassov KT (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20:87–96
Smarandache F (1998) Neutrosophy: neutrosophic probability, set and logic. American Research Press, Mexico
Smarandache F (1999) A unifying field in logics, neutrosophy: neutrosophic probability, set and logic. American Research Press, Mexico
Wang H, Smarandache F, Zhang YQ, Sunderraman R (2010) Single valued neutrosophic sets. Multispace Multistruct 4:410–413
Wang H, Smarandache F, Zhang YQ, Sunderraman R (2005) Interval neutrosophic sets and logic: theory and applications in computing. Hexis, Phoenix
Peng JJ, Wang J, Wang J, Zhang H, Chen X (2016) Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems. Int J Syst Sci 47(10):2342–2358
Maji PK (2013) Neutrosophic soft set. Ann Fuzzy Math Inf 5(1):157–168
Ye J (2015a) An extended TOPSIS method for multiple attribute group decision making based on single valued neutrosophic linguistic numbers. J Intell Fuzzy Syst 28(1):247–255
Tian ZP, Wang JQ, Zhang HY (2016) Simplified neutrosophic linguistic normalized weighted Bonferroni mean operator and its application to multi-criteria decision-making problems. Filomat 30(12):3339–3360
Wang JQ, Li XE (2015) An application of the TODIM method with multi-valued neutrosophic set. Control Decis 30(6):1139–1142
Broumi S, Smarandache F, Dhar M (2014) Rough neutrosophic set. Ital J Pure Appl Math 32(32):493–502
Jun YB, Smarandache F, Kim CS (2017) Neutrosophic cubic sets. New Math Natural Comput 13(1):41–54
Ali M, Smarandache F (2017) Complex neutrosophic set. Neural Comput Appl 28(7):1817–1834
Gulistan M, Khan S (2020) Extentions of neutrosophic cubic sets via complex fuzzy sets with application. Compl Intell Syst 6:309–320
Bonferroni C (1950) Sulle medie multiple di potenze. Bolletino dell’Unione Matematica Ital 5(3–4):267–270
Yager RR (2001) The power average operator. IEEE Trans Syst Man Cybern Part A Syst Hum 31(6):724–731
Yager RR (2008) Prioritized aggregation operators. Int J Approx Reason 48(1):263–274
Dombi J (1982) A general class of fuzzy operators, the demorgan class of fuzzy operators and fuzziness measures induced by fuzzy operators. Fuzzy Sets Syst 8(2):149–163
Zavadskas EK, Baušys R, Lazauskas M (2015) Sustainable assessment of alternative sites for the construction of a waste incineration plant by applying WASPAS method with single-valued neutrosophic set. Sustainability 7(12):15923–15936
Vafadarnikjoo A, Mishra N, Govindan K, Chalvatzis K (2018) Assessment of consumers’ motivations to purchase a remanufactured product by applying fuzzy Delphi method and single valued neutrosophic sets. J Clean Prod 196:230–244
Selvachandran G, Quek SG, Smarandache F, Broumi S (2018) An extended technique for order preference by similarity to an ideal solution (TOPSIS) with maximizing deviation method based on integrated weight measure for single-valued neutrosophic sets. Symmetry 10(7):236–252
Broumi S, Singh PK, Talea M, Bakali A, Smarandache F, Rao VV (2018) Single-valued neutrosophic techniques for analysis of WIFI connection. Adv Intell Syst Sustain Dev 915:405–412
Biswas P, Pramanik S, Giri BC (2019) Non-linear programming approach for single-valued neutrosophic TOPSIS method. New Math Natural Comput 15(2):307–326
Abdel-Basset M, Ali M, Atef A (2020a) Uncertainty assessments of linear time-cost tradeoffs using neutrosophic set. Comput Ind Eng 141:106286–106301
Abdel-Basset M, Mohamed R (2020) A novel plithogenic TOPSIS-CRITIC model for sustainable supply chain risk management. J Clean Prod 247:119586–119620
Abdel-Basset M, Ali M, Atef A (2020b) Resource levelling problem in construction projects under neutrosophic environment. J Supercomput 76:964–988
Akram M, Naz S, Smarandache F (2019) Generalization of maximizing deviation and TOPSIS method for MADM in simplified neutrosophic hesitant fuzzy environment. Symmetry 11(8):1058–1084
Zhan J, Akram M, Sitara M (2019) Novel decision-making method based on bipolar neutrosophic information. Soft Comput 23(20):9955–9977
Aslam M (2019) Neutrosophic analysis of variance: application to university students. Compl Intell Syst 5(4):403–407
Sumathi IR, Sweety CAC (2019) New approach on differential equation via trapezoid neutrosophic number. Compl Intell Syst 5(4):417–424
Broumi S, Smarandache F (2013) Several similarity measures of neutrosophic sets. Neutrosophic Sets Syst 1(10):54–62
Ye J (2013) Multi criteria decision-making method using the correlation coefficient under single-valued neutrosophic environment. Int J Gen Syst 42(4):386–394
Ye J (2014a) A multicriteria decision-making method using aggregation operators for simplified neutrosophic sets. J Intell Fuzzy Syst 26(5):2459–2466
Ye J (2014b) Vector similarity measures of simplified neutrosophic sets and their application in multicriteria decision making. Int J Fuzzy Syst 16(2):204–211
Ye J (2014c) Clustering methods using distance-based similarity measures of single-valued neutrosophic sets. J Intell Syst 23(4):379–389
Ye J (2015b) Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses. Artif Intell Med 63(3):171–179
Ye J (2017a) Single-valued neutrosophic clustering algorithms based on similarity measures. J Classif 34(1):148–162
Ye J (2017b) Single-valued neutrosophic similarity measures based on cotangent function and their application in the fault diagnosis of steam turbine. Soft Comput 21(3):817–825
Ye J, Zhang QS (2014) Single valued neutrosophic similarity measures for multiple attribute decision making. Neutrosophic Sets Syst 2:48–54
Majumdar P, Samanta SK (2014) On similarity and entropy of neutrosophic sets. J Intell Fuzzy Syst 26(3):1245–1252
Mondal K, Pramanik S (2015) Neutrosophic tangent similarity measure and its application to multiple attribute decision making. Neutrosophic Sets Syst 9:80–87
Ye J, Fu J (2016) Multi-period medical diagnosis method using a single valued neutrosophic similarity measure based on tangent function. Comput Methods Programs Biomed 123:142–149
Liu CF, Luo YS (2016) The weighted distance measure based method to neutrosophic multiattribute group decision making. Math Probl Eng 2016:1–8
Huang HL (2016) New distance measure of single-valued neutrosophic sets and its application. Int J Intell Syst 31(10):1021–1032
Mandal K, Basu K (2016) Improved similarity measure in neutrosophic environment and its application in finding minimum spanning tree. J Intell Fuzzy Syst 31(3):1721–1730
Sahin M, Olgun N, Uluçay V, Kargin A, Smarandache F (2017) A new similarity measure based on falsify value between single valued neutrosophic sets based on the centroid points of transformed single valued neutrosophic numbers with applications to pattern recognition. Neutrosophic Sets Syst 15:31–48
Pramanik S, Biswas P, Giri BC (2017) Hybrid vector similarity measures and their applications to multi-attribute decision making under neutrosophic environment. Neural Comput Appl 28(5):1163–1176
Garg H, Nancy (2017) Some new biparametric distance measures on single-valued neutrosophic sets with applications to pattern recognition and medical diagnosis. Information 8(4):162–181
Fu J, Ye J (2017) Simplified neutrosophic exponential similarity measures for the initial evaluation/diagnosis of benign prostatic hyperplasia symptoms. Symmetry 9(8):154–163
Wu H, Yuan Y, Wei L, Pei L (2018) On entropy, similarity measure and cross-entropy of single-valued neutrosophic sets and their application in multi-attribute decision making. Soft Comput 22(22):7367–7376
Cui W, Ye J (2018a) Improved symmetry measures of simplified neutrosophic sets and their decision making method based on a sine entropy weight model. Symmetry 10(6):225–236
Mondal K, Pramanik S, Giri BC (2018a) Hybrid binary logarithm similarity measure for MAGDM problems under SVNS assessments. Neutrosophic Sets Syst 20:12–15
Mondal K, Pramanik S, Giri BC (2018b) Single valued neutrosophic hyperbolic sine similarity measure based strategy for MADM problems. Neutrosophic Sets Syst 20:3–11
Liu C (2018) New similarity measures of simplified neutrosophic sets and their applications. J Inf Process Syst 14(3):790–800
Liu D, Liu G, Liu Z (2018) Some similarity measures of neutrosophic sets based on the Euclidean distance and their application in medical diagnosis. Comput Math Methods Med 2018:1–9
Ren HP, Xiao SX, Zhou H (2019) A chi-square distance-based similarity measure of single-valued neutrosophic set and applications. Int J Comput Commun Control 14(1):78–89
Sun R, Hu J, Chen X (2019) Novel single-valued neutrosophic decision-making approaches based on prospect theory and their applications in physician selection. Soft Comput 23(1):211–225
Peng X, Smarandache F (2020) New multiparametric similarity measure for neutrosophic set with big data industry evaluation. Artif Intell Rev 53:3089–3125
Aydoğdu A (2015) On similarity and entropy of single valued neutrosophic sets. Gener Math Notes 29(1):67–74
Garg H, Nancy A (2016) On single-valued neutrosophic entropy of order α. Neutrosophic Sets Syst 14(1):21–28
Cui W-H, Ye J (2018b) Generalised distance-based entropy and dimension root entropy for simplified neutrosophic sets. Entropy 20(11):844–855
Aydoğdu A, Şahin R (2019) New entropy measures based on neutrosophic set and their applications to multi-criteria decision making. J Natural Appl Sci 23(1):40–45
Sinha K, Majumdar P (2018) On single valued neutrosophic signed digraph and its applications. Neutrosophic Sets Syst 22(1):171–179
Ye J (2017c) Correlation coefficient between dynamic single valued neutrosophic multisets and its multiple attribute decision-making method. Information 8(2):41–49
Ye J (2014d) Improved correlation coefficients of single valued neutrosophic sets and interval neutrosophic sets for multiple attribute decision making. J Intel Fuzzy Syst 27(5):2453–2462
Hanafy IM, Salama AA, Mahfouz KM (2013) Correlation coefficients of neutrosophic sets by centroid method. Int J Prob Stat 2(1):9–12
Bonissone PP (1979). A pattern recognition approach to the problem of linguistic approximation in system analysis. In: Proceedings of the IEEE International Conference On Cybernetics And Society, Denver, Colorado, 793–798.
Eshragh F, Mamdani EH (1979) A general approach to linguistic approximation. Int J Man Mach Stud 11(4):501–519
Lee-Kwang H, Song Y-S, Lee K-M (1994) Similarity measure between fuzzy sets and between elements. Fuzzy Sets Syst 62(3):291–293
Mandal K, Basu K (2015) Hypercomplex neutrosophic similarity measure and its application in multicriteria decision making problem. Neutrosophic Sets Syst 9:6–12
Abdel-Basset M, Mohamed M, Elhoseny M, Son LH, Chiclana F, Zaied AE, Nasser H (2019) Cosine similarity measures of bipolar neutrosophic set for diagnosis of bipolar disorder diseases. Artif Intell Med 101:101735–101764
Guo Y, Şengür A, Ye J (2014) A novel image thresholding algorithm based on neutrosophic similarity score. Measurement 58:175–186
Guo Y, Şengür A, Tian J-W (2016) A novel breast ultrasound image segmentation algorithm based on neutrosophic similarity score and level set. Comput Methods Programs Biomed 123:43–53
Guo Y, Şengür A (2014) A novel image segmentation algorithm based on neutrosophic similarity clustering. Appl Soft Comput 25:391–398
Qi X, Liu B, Xu J (2016) A neutrosophic filter for high-density salt and pepper noise based on pixel-wise adaptive smoothing parameter. J Vis Commun Image Represent 36:1–10
Ye J (2017d) Subtraction and division operations of simplified neutrosophic sets. Information 8(2):51–58
Zhang HY, Wang JQ, Chen XH (2014) Interval neutrosophic sets and their application in multicriteria decision making problems. Sci World J 2014:1–15
Ye J (2014e) Multiple attribute group decision-making method with completely unknown weights based on similarity measures under single valued neutrosophic environment. J Intell Fuzzy Syst 27(6):2927–2935
Bui QT, Vo B, Do HAN, Hung NQV, Snasel V (2020) F-Mapper: a fuzzy mapper clustering algorithm. Knowl-Based Syst 189:105107
Witarsyah D, Fudzee MFM, Salamat MA, Yanto ITR, Abawajy J (2020) Soft set theory based decision support system for mining electronic government dataset. Int J Data Warehous Min 16(1):39–62
Le T, Vo MT, Kieu T, Hwang E, Rho S, Baik SW (2020) Multiple electric energy consumption forecasting using a cluster-based strategy for transfer learning in smart building. Sensors 20(9):2668
Fan T, Xu J (2020) Image classification of crop diseases and pests based on deep learning and fuzzy system. Int J Data Warehous Min 16(2):34–47
Selvachandran G, Quek SG, Lan LTH, Giang NL, Ding W, Abdel-Basset M, Albuquerque VHC (2019) A new design of Mamdani complex fuzzy inference system for multi-attribute decision making problems. IEEE Trans Fuzzy Syst. https://doi.org/10.1109/TFUZZ.2019.2961350
Ngan RT, Ali M, Fujita H, Abdel-Basset M, Giang NL, Manogaran G, Priyan MK (2019) A new representation of intuitionistic fuzzy systems and their applications in critical decision making. IEEE Intell Syst 35(1):6–17
Thong NT, Lan LTH, Chou SY, Son LH, Dong DD, Ngan TT (2020) An extended TOPSIS method with unknown weight information in dynamic neutrosophic environment. Mathematics 8(3):401
Acknowledgements
We would like to thank the editors and anonymous reviewers for their valuable comments and suggestions to enhance the quality of this manuscript.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Chai, J.S., Selvachandran, G., Smarandache, F. et al. New similarity measures for single-valued neutrosophic sets with applications in pattern recognition and medical diagnosis problems. Complex Intell. Syst. 7, 703–723 (2021). https://doi.org/10.1007/s40747-020-00220-w