1 Introduction

Decision making is the art of making choices based on information gathering and the appraisal of alternative resolutions. An elaborate decision-making procedure promotes more thoughtful and deliberate decisions by establishing pertinent information and defining alternatives. Decision making can be tactical, strategic, or operational depending on the aims and alternatives. The decision-making process can be carried out based on the recognition principle and multiple criteria decision making (MCDM) by using information measures such as distance operators, correlation coefficients, and similarity operators. Most often, decision making is hampered by indeterminacies, which necessitates the deployment of fuzzy sets and/or fuzzy logic [53] to resolve the indeterminacies interwoven with the decision-making process and so attain consistent conclusions. Fuzzy systems, though resourceful, are handicapped because they do not consider the hesitant degree of the alternatives under consideration. Because of this setback, the construct of intuitionistic fuzzy sets (IFSs) was introduced by Atanassov [1]. In fact, the idea of IFSs expands the scope and modelling capability of fuzzy sets. Mathematically speaking, an IFS is described by grades of membership and non-membership, together with a hesitation margin, all taken from the closed interval \(I=[0,1]\). While every fuzzy set can be described as an IFS, the idea of IFSs differs from fuzzy sets in the sense that the grades of membership and non-membership are not required to be complementary.

Pertinent applications of IFSs have been discussed in the solution of real-world problems such as medical diagnosis [12, 41]. Shi and Ye [10] deliberated on the fuzzy query process via intuitionistic fuzzy social networks, and Liu and Chen [33] discussed a decision-making process using Heronian aggregation operators under IFSs. In the same vein, some information measures have been discussed with applications to decision-making problems and diagnostic analysis [20]. A work on a clustering algorithm using IFSs was conducted in [51]. Boran [3] deliberated on the selection process of a facility location by means of an IFS approach, and an attribute selection using Hellwig’s algorithm was unveiled using IFSs [42]. Xu and Yager [52] discussed some intuitionistic fuzzy preference relations for the assessment of group agreement. Belief and plausibility measures for IFSs were developed with application to belief-plausibility TOPSIS [45], and a hybridized correlation coefficient technique was developed under IFSs with application to classification processes [16]. Numerous practical problems have been solved based on sundry correlation approaches [13, 17,18,19, 21, 43].

To enlarge the application spectrum of IFSs, the concept of intuitionistic fuzzy distance operators (IFDOs) has been extensively discussed. Burillo and Bustince [5] pioneered the concept of IFDOs and extended it to interval-valued fuzzy sets. Szmidt and Kacprzyk [40] improved on the IFDOs in [5] by incorporating all the describing parameters of IFSs to enhance accuracy. Various IFDOs via the Hausdorff metric were deliberated on in [7, 23], and a wide-ranging overview of IFDOs was given in [50]. Hatzimichailidis et al. [24] presented a new IFDO and its application in cases of pattern recognition. Wang and Xin [44] discussed a novel IFDO and its weighted variant with application to pattern recognition problems, and Davvaz and Sadrabadi [11] revised certain existing IFDOs and discussed their application in medical diagnosis.

The intuitionistic fuzzy similarity operator (IFSO) is one of the most widely used techniques in data analysis, machine learning, pattern recognition, and other related decision-making problems. An IFSO quantifies the degree of resemblance between two IFSs. Many authors have worked on IFSOs because the concept is very applicable to real-world problems. Boran and Akay [4] developed an IFSO from a biparametric approach and applied it to pattern recognition. A technique of IFSO was developed from a transformation approach and used to discuss pattern recognition [8], and Xu [49] applied certain developed techniques of IFSO in MCDM. Certain IFSOs were developed and used to discuss pattern recognition [39, 48]. IFSOs using the Hausdorff distance [26] and the Lp metric [27] have been studied. In [29], some new IFSOs based on upper, lower and middle fuzzy sets were developed, and Ye [47] developed an IFSO using the cosine function and applied it to mechanical design schemes. Some IFSOs using set pair analysis were developed [22]. Numerous practical problems have been solved based on sundry approaches of IFSO [9, 14, 30, 32].

More so, Chen [6] developed an IFSO technique based on a biparametric approach for estimating the similarity of IFSs. This similarity technique is not reliable because the hesitation margins of the concerned IFSs are omitted. Similarly, [25] developed an IFSO technique akin to the technique in [6], with the same limitation. Mitchell [36] developed an IFSO technique to augment the Dengfeng–Chuntian similarity approach. In addition, Li et al. [31] developed an IFSO based on an IFDO in [5] by considering the incomplete parameters of IFSs. Ye [46] developed a cosine-based IFSO, and Hung and Yang [28] presented some techniques of IFSO. However, the techniques in [28, 46] do not consider the hesitation margins of IFSs. In [38], the IFSO of [46] was modified by including all the descriptive parameters of IFSs and applied to the fault diagnosis of a turbine. Luo and Ren [34] developed an IFSO using a biparametric approach and used it to discuss multiple attribute decision making (MADM). In [37], a new IFSO was introduced and used to discuss sundry applications. Most recently, a new IFSO was developed and applied to solve an emergency control problem [15]. Though this technique [15] includes all the describing parameters of IFSs, it lacks accuracy, which can lead to misinterpretation in applications.

The interpretations from the existing IFSO techniques cannot be relied upon because

  1. (i)

    some omit hesitation margins of IFSs in their computations,

  2. (ii)

some cannot measure the similarity between two comparable IFSs,

  3. (iii)

    the IFSO technique in [37] violates the axiomatic description of similarity operator,

  4. (iv)

the existing IFSO techniques yield outcomes that lack precision, and so give misleading interpretations.

These drawbacks constitute the justification for the present work. The present work is equipped with the necessary facilities to resolve the drawbacks of the existing IFSOs. The aims of the work, which form the contributions of the study, will be achieved through the following points:

  1. (i)

    evaluating the existing IFSOs to isolate their fault lines,

  2. (ii)

    developing a new IFSO technique with the capacity to resolve the drawbacks in the existing techniques,

  3. (iii)

establishing some properties of the new IFSO to describe its alignment with the notion of a similarity operator,

  4. (iv)

discussing the application of the new IFSO in the solution of a disaster control problem, pattern recognition, and the process of car selection for purchase, based on the recognition principle and MCDM.

For organizational purposes, we outline the rest of the work as follows: Sect. 2 dwells on some preliminaries of IFSs and recaps some existing IFSO techniques; Sect. 3 presents the development of the new IFSO and discusses its properties; Sect. 4 discusses application examples based on the recognition principle and the MCDM approach; and Sect. 5 gives a summary of the findings and suggests areas of future direction.

2 Intuitionistic Fuzzy Sets and Their Similarity Operators

This section elucidates the preliminaries of IFSs and reiterates some hitherto similarity operators for IFSs.

2.1 Intuitionistic Fuzzy Sets

Throughout this article, X denotes a classical (crisp) set. Firstly, we reiterate the definition of a fuzzy set:

Definition 2.1

A fuzzy set symbolized by C in X is described by

$$\begin{aligned} C=\lbrace \langle x, \beta _C(x) \rangle :x\in X\rbrace , \end{aligned}$$
(1)

where \(\beta _C(x)\) is a degree of membership of x in X defined by the function given as \(\beta _C: X\rightarrow [0,1]\) [53].

Definition 2.2

An intuitionistic fuzzy set symbolized by \(\Re\) in X is described by

$$\begin{aligned} \Re =\lbrace \langle x, \beta _{\Re }(x), \eta _{\Re }(x) \rangle :x\in X\rbrace , \end{aligned}$$
(2)

where \(\beta _{\Re }(x)\) and \(\eta _{\Re }(x)\) are grades of membership and non-membership of x in X defined by the functions \(\beta _{\Re }: X\rightarrow [0,1]\) and \(\eta _{\Re }: X\rightarrow [0,1]\), with the property that \(\beta _{\Re }(x)+ \eta _{\Re }(x)\le 1\) [1].

The hesitation margin of an IFS \({\Re }\) in X, denoted by \(\psi _{\Re }(x)\in [0,1]\), expresses the degree of indeterminacy of whether x belongs to \(\Re\) or not, and it is defined by \(\psi _{\Re }(x)= 1-\beta _{\Re }(x)-\eta _{\Re }(x)\). For simplicity, we symbolize an IFS \({\Re }=\lbrace \langle x, \beta _{\Re }(x), \eta _{\Re }(x) \rangle :x\in X\rbrace\) as \({\Re }=\Big (\beta _{\Re }(x), \eta _{\Re }(x)\Big )\).
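For the computational illustrations later in the paper, an IFS over a finite universe can be stored as a list of (membership, non-membership) pairs. The following minimal Python sketch (the class name `IFV` and its field names are ours, introduced only for illustration) encodes Definition 2.2 and derives the hesitation margin:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFV:
    """One intuitionistic fuzzy value: a (membership, non-membership) pair."""
    mu: float  # beta(x), grade of membership
    nu: float  # eta(x), grade of non-membership

    def __post_init__(self):
        if not (0.0 <= self.mu <= 1.0 and 0.0 <= self.nu <= 1.0
                and self.mu + self.nu <= 1.0):
            raise ValueError("require mu, nu in [0, 1] with mu + nu <= 1")

    @property
    def pi(self) -> float:
        """Hesitation margin psi(x) = 1 - beta(x) - eta(x)."""
        return 1.0 - self.mu - self.nu

# An IFS over X = {x_1, ..., x_N} is then simply a list of IFVs:
R = [IFV(0.6, 0.3), IFV(0.1, 0.8)]
print([round(v.pi, 4) for v in R])  # hesitation margins: [0.1, 0.1]
```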

Next, we recap certain operations on IFSs like equality, inclusion, complement, union, and intersection.

Definition 2.3

Assume \(\Re _1\) and \(\Re _2\) are IFSs in X; then the following properties hold [2]:

  1. (i)

    \(\Re _1=\Re _2\) \(\Leftrightarrow\) \(\beta _{\Re _1}(x)=\beta _{\Re _2}(x)\) and \(\eta _{\Re _1}(x)=\eta _{\Re _2}(x)\), \(\forall x\in X\),

  2. (ii)

    \(\Re _1\subseteq \Re _2\) \(\Leftrightarrow\) \(\beta _{\Re _1}(x)\le \beta _{\Re _2}(x)\) and \(\eta _{\Re _1}(x)\ge \eta _{\Re _2}(x)\), \(\forall x\in X\),

  3. (iii)

    \(\Re _1\preceq \Re _2\) \(\Leftrightarrow\) \(\beta _{\Re _1}(x)\le \beta _{\Re _2}(x)\) and \(\eta _{\Re _1}(x)\le \eta _{\Re _2}(x)\), \(\forall x\in X\),

  4. (iv)

\(\overline{\Re }_1=\lbrace \langle x, \eta _{\Re _1}(x), \beta _{\Re _1}(x)\rangle :x\in X\rbrace\), \(\overline{\Re }_2=\lbrace \langle x, \eta _{\Re _2}(x), \beta _{\Re _2}(x)\rangle :x\in X\rbrace\),

  5. (v)

    \(\Re _1\cup \Re _2=\lbrace \langle x, \max \big \lbrace \beta _{\Re _1}(x),\beta _{\Re _2}(x)\big \rbrace , \min \big \lbrace \eta _{\Re _1}(x),\eta _{\Re _2}(x)\big \rbrace \rangle :x\in X\rbrace\),

  6. (vi)

\(\Re _1\cap \Re _2=\lbrace \langle x, \min \big \lbrace \beta _{\Re _1}(x),\beta _{\Re _2}(x)\big \rbrace , \max \big \lbrace \eta _{\Re _1}(x),\eta _{\Re _2}(x)\big \rbrace \rangle :x\in X\rbrace\).
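Reusing the `IFV` container sketched after Definition 2.2, these set operations translate into element-wise one-liners; the helper names below are illustrative, not from the literature.

```python
def complement(R):
    """Complement per Definition 2.3(iv): swap membership and non-membership."""
    return [IFV(v.nu, v.mu) for v in R]

def union(R1, R2):
    """Union per Definition 2.3(v): max of memberships, min of non-memberships."""
    return [IFV(max(a.mu, b.mu), min(a.nu, b.nu)) for a, b in zip(R1, R2)]

def intersection(R1, R2):
    """Intersection per Definition 2.3(vi): min of memberships, max of non-memberships."""
    return [IFV(min(a.mu, b.mu), max(a.nu, b.nu)) for a, b in zip(R1, R2)]
```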

2.2 Some Intuitionistic Fuzzy Similarity Operators

Some existing techniques for calculating the similarity between IFSs are recapped before the development of the new IFSO technique. First, the definition of an IFSO is presented as in [40].

Definition 2.4

Suppose \(\Re _1\) and \(\Re _2\) symbolize IFSs in \(X=\lbrace x_1,\ldots , x_N\rbrace\); then a similarity measure between the IFSs is a function \(\mathbb {S}: \mathrm {IFS}(X) \times \mathrm {IFS}(X)\rightarrow [0,1]\) which satisfies the following conditions:

  1. (i)

    \(0\le \mathbb {S}(\Re _1,\Re _2)\le 1\),

  2. (ii)

    \(\mathbb {S}(\Re _1,\Re _1)=1\),

  3. (iii)

    \(\mathbb {S}(\Re _1,\Re _2)=1\) \(\Leftrightarrow\) \(\Re _1=\Re _2\),

  4. (iv)

    \(\mathbb {S}(\Re _1,\Re _2)=\mathbb {S}(\Re _2,\Re _1)\),

  5. (v)

    \(\mathbb {S}(\Re _1,\Re _3)\le \mathbb {S}(\Re _1,\Re _2)+\mathbb {S}(\Re _2,\Re _3)\), where \(\Re _3\) is an IFS in X.

When \(\mathbb {S}(\Re _1,\Re _2)\) tends to 1, \(\Re _1\) and \(\Re _2\) are strongly similar; when \(\mathbb {S}(\Re _1,\Re _2)\) tends to 0, \(\Re _1\) and \(\Re _2\) are weakly similar. In particular, \(\mathbb {S}(\Re _1,\Re _2)=0\) implies that \(\Re _1\) and \(\Re _2\) are not similar at all, and \(\mathbb {S}(\Re _1,\Re _2)=1\) implies that \(\Re _1\) and \(\Re _2\) are perfectly similar.

The following are some of the existing techniques for measuring the similarity of IFSs:

2.2.1 Chen [6]

$$\begin{aligned}{} & {} \mathbb {S}_1(\Re _1, \Re _2)\nonumber \\{} & {} \quad =1-\dfrac{1}{2N}\sum ^N_{j=1}\bigg (|\big (\beta _{\Re _1}(x_j)-\eta _{\Re _1}(x_j)\big )-\big (\beta _{\Re _2}(x_j)-\eta _{\Re _2}(x_j)\big )|\bigg ). \end{aligned}$$
(3)

This similarity operator does not include the hesitation information, and so the operator lacks credibility due to the exclusion. In addition, the operator yields results with low precision as seen in Tables 2 and 3.
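A direct transcription of (3) into Python (a minimal sketch reusing the `IFV` container from Sect. 2.1) makes the omission concrete: only the differences \(\beta -\eta\) enter the sum, and the hesitation margin never appears.

```python
def s1_chen(R1, R2):
    """Chen's similarity operator (3); note that v.pi is never used."""
    N = len(R1)
    total = sum(abs((a.mu - a.nu) - (b.mu - b.nu)) for a, b in zip(R1, R2))
    return 1.0 - total / (2.0 * N)
```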

2.2.2 Hong and Kim [25]

$$\begin{aligned}{} & {} \mathbb {S}_2(\Re _1, \Re _2)\nonumber \\{} & {} \quad =1-\dfrac{1}{2N}\sum ^N_{j=1}\bigg (|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|+|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|\bigg ). \end{aligned}$$
(4)

This similarity operator modified the operator in [6] by including absolute values of the parametric differences. Likewise, the similarity operator excludes the hesitation information, and so lacks credibility. Also, the operator yields results with low precision, as seen in Tables 2 and 3.

2.2.3 Mitchell [36]

$$\begin{aligned}{} & {} \mathbb {S}_3(\Re _1, \Re _2)= \dfrac{1}{2}\Bigg ( \bigg [1- \dfrac{1}{N}\bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|\bigg )\bigg ] \nonumber \\{} & {} \quad +\bigg [1- \dfrac{1}{N}\bigg (\sum ^N_{j=1}|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|\bigg )\bigg ]\Bigg ), \end{aligned}$$
(5)
$$\begin{aligned}{} & {} \mathbb {S}_4(\Re _1, \Re _2)= \dfrac{1}{2}\Bigg ( \bigg [1- \dfrac{1}{\sqrt{N}}\bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^2\bigg )^{\frac{1}{2}}\bigg ] \nonumber \\{} & {} \quad +\bigg [1- \dfrac{1}{\sqrt{N}}\bigg (\sum ^N_{j=1}|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|^2\bigg )^{\frac{1}{2}}\bigg ]\Bigg ). \end{aligned}$$
(6)

The similarity operators in [36] improved on the operators developed in [6, 25], but the limitation of excluding the hesitation information remains. Similarly, the similarity values from these operators have low precision, as seen in Tables 2 and 3.

2.2.4 Li et al. [31]

$$\begin{aligned} \mathbb {S}_5(\Re _1, \Re _2)=1-\sqrt{\dfrac{1}{2N}\sum ^N_{j=1}\bigg (\Big (\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)\Big )^2+\Big (\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)\Big )^2\bigg )}. \end{aligned}$$
(7)

This similarity operator is similar to (6), and hence has the same limitation as the operator proposed in [36].

2.2.5 Hung and Yang [28]

$$\begin{aligned}{} & {} \mathbb {S}_6(\Re _1, \Re _2)=\dfrac{1}{N}\sum _{j=1}^N\Bigg (\dfrac{\min \lbrace \beta _{\Re _1}(x_j), \beta _{\Re _2}(x_j)\rbrace +\min \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\rbrace }{\max \lbrace \beta _{\Re _1}(x_j),\beta _{\Re _2}(x_j)\rbrace + \max \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\rbrace }\Bigg ), \end{aligned}$$
(8)
$$\begin{aligned}{} & {} \mathbb {S}_7(\Re _1, \Re _2)=\dfrac{\sum ^N_{j=1}\Big (\min \lbrace \beta _{\Re _1}(x_j), \beta _{\Re _2}(x_j)\rbrace +\min \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\rbrace \Big )}{\sum ^N_{j=1}\Big (\max \lbrace \beta _{\Re _1}(x_j), \beta _{\Re _2}(x_j)\rbrace + \max \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\rbrace \Big )}, \end{aligned}$$
(9)
$$\begin{aligned}{} & {} \mathbb {S}_8(\Re _1, \Re _2)=1-\dfrac{\sum ^N_{j=1}\Big (|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)| +|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|\Big )}{\sum ^N_{j=1}\Big (\beta _{\Re _1}(x_j)+\beta _{\Re _2}(x_j)+ \eta _{\Re _1}(x_j)+ \eta _{\Re _2}(x_j)\Big )}. \end{aligned}$$
(10)

These similarity operators are based on the extreme values of the parameters and the absolute values of the parametric differences, unlike the aforementioned operators. However, the outcomes of these operators cannot be relied on because of the omission of the hesitation information. Also, the results from these operators have low precision and accuracy, as seen in Tables 2 and 3.

2.2.6 Ye [46]

$$\begin{aligned}{} & {} \mathbb {S}_9(\Re _1, \Re _2)\nonumber \\{} & {} \quad =\dfrac{1}{N}\sum _{j=1}^N\Bigg (\dfrac{\beta _{\Re _1}(x_j)\beta _{\Re _2}(x_j)+\eta _{\Re _1}(x_j)\eta _{\Re _2}(x_j)}{\sqrt{\big (\beta ^2_{\Re _1}(x_j)+\eta ^2_{\Re _1}(x_j)\big )\big (\beta ^2_{\Re _2}(x_j)+\eta ^2_{\Re _2}(x_j)\big )}}\Bigg ). \end{aligned}$$
(11)

This similarity operator has three deficiencies, namely: (i) it does not include the hesitation information, and so lacks credibility due to the exclusion; (ii) it yields results with low precision, as seen in Tables 2 and 3; and (iii) it violates the metric condition of the similarity operator. To see (iii), we consider the following example.

Example 2.5

Assume IFSs \(\Re _1\) and \(\Re _2\) in \(X=\lbrace x_1,x_2\rbrace\) are given by

$$\begin{aligned}{} & {} \Re _1=\lbrace \langle 1,0\rangle , \langle 0,0.6\rangle \rbrace ,\\{} & {} \Re _2=\lbrace \langle 0,0.5\rangle , \langle 1,0\rangle \rbrace . \end{aligned}$$

By applying (11), we have

$$\begin{aligned} \mathbb {S}_9(\Re _1, \Re _2)=\dfrac{1}{2}\Bigg (\dfrac{(1\times 0)+(0\times 0.5)}{\sqrt{(1^2+0^2)(0^2+0.5^2)}}+\dfrac{(0\times 1)+(0.6\times 0)}{\sqrt{(0^2+0.6^2)(1^2+0^2)}}\Bigg )=0, \end{aligned}$$

which infringes on the metric condition of the similarity operator.
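The failure in Example 2.5 is easy to reproduce numerically; the sketch below implements (11) per element and averages (the denominators in this example are nonzero, so no zero-division guard is needed here):

```python
import math

def s9_ye(R1, R2):
    """Ye's cosine similarity operator (11), averaged over the universe."""
    def term(a, b):
        num = a.mu * b.mu + a.nu * b.nu
        den = math.sqrt((a.mu**2 + a.nu**2) * (b.mu**2 + b.nu**2))
        return num / den
    return sum(term(a, b) for a, b in zip(R1, R2)) / len(R1)

R1 = [IFV(1.0, 0.0), IFV(0.0, 0.6)]
R2 = [IFV(0.0, 0.5), IFV(1.0, 0.0)]
print(s9_ye(R1, R2))  # 0.0, as in Example 2.5
```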

2.2.7 Shi and Ye [38]

$$\begin{aligned} \mathbb {S}_{10}(\Re _1, \Re _2)=\dfrac{1}{N}\sum _{j=1}^N\Bigg (\dfrac{\beta _{\Re _1}(x_j)\beta _{\Re _2}(x_j)+\eta _{\Re _1}(x_j)\eta _{\Re _2}(x_j)+\psi _{\Re _1}(x_j)\psi _{\Re _2}(x_j)}{\sqrt{\big (\beta ^2_{\Re _1}(x_j)+\eta ^2_{\Re _1}(x_j)+\psi ^2_{\Re _1}(x_j)\big )\big (\beta ^2_{\Re _2}(x_j)+\eta ^2_{\Re _2}(x_j)+\psi ^2_{\Re _2}(x_j)\big )}}\Bigg ). \end{aligned}$$
(12)

This similarity operator improved on the operator in [46] by including the hesitation information. Its deficiencies are that (i) it yields results with low precision, as seen in Tables 2 and 3, and (ii) it violates the metric condition of the similarity operator. To see (ii), we apply (12) to Example 2.5 and, similarly, obtain

$$\begin{aligned} \mathbb {S}_{10}(\Re _1, \Re _2)=0, \end{aligned}$$

which violates the metric condition of the similarity operator.

2.2.8 Luo and Ren [34]

$$\begin{aligned} \begin{aligned} \mathbb {S}_{11}(\Re _1, \Re _2)=1-\dfrac{1}{3N}\sum ^N_{j=1}\bigg (|\beta ^2_{\Re _1}(x_j)-\beta ^2_{\Re _2}(x_j)|\\ +|\eta ^2_{\Re _1}(x_j)-\eta ^2_{\Re _2}(x_j)|+|m^2_{\Re _1}(x_j)-m^2_{\Re _2}(x_j)|\bigg ), \end{aligned} \end{aligned}$$
(13)

where

$$\begin{aligned}{} & {} m_{\Re _1}(x_j)=\dfrac{\beta _{\Re _1}(x_j)+1-\eta _{\Re _1}(x_j)}{2}, \;\\{} & {} m_{\Re _2}(x_j)=\dfrac{\beta _{\Re _2}(x_j)+1-\eta _{\Re _2}(x_j)}{2}. \end{aligned}$$

Similarly, this similarity operator does not include the hesitation information, and so lacks credibility due to the exclusion; it also yields results with low precision, as seen in Tables 2 and 3.

2.2.9 Quynh et al. [37]

$$\begin{aligned} \begin{aligned} \mathbb {S}_{12}(\Re _1, \Re _2)&=\dfrac{1}{N}\sum ^N_{j=1}\Bigg (\dfrac{3+\min \big \lbrace \beta _{\Re _1}(x_j),\beta _{\Re _2}(x_j)\big \rbrace }{4}\\&\quad - \dfrac{\max \big \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\big \rbrace }{4}\\&\quad -\dfrac{|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|+|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|}{4}\Bigg ). \end{aligned} \end{aligned}$$
(14)

This similarity operator has three deficiencies, namely: (i) it does not include the hesitation information; (ii) it violates the metric condition of the similarity operator by yielding results that are not within the closed interval [0, 1] (see Table 3); and (iii) it yields results with low precision, as seen in Tables 2 and 3.

2.2.10 Ejegwa and Ahemen [15]

$$\begin{aligned} \begin{aligned} \mathbb {S}_{13}(\Re _1, \Re _2)&=1-\dfrac{1}{\sqrt{3N}}\bigg (\sum ^N_{j=1}\Big (|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^2\\&\quad +|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|^2\\&\quad +|\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)|^2\Big )\bigg )^{\frac{1}{2}}. \end{aligned} \end{aligned}$$
(15)

This similarity operator includes the hesitation information and satisfies the metric condition of the similarity operator, but it yields results with low precision compared with the new similarity operator, as seen in Tables 2 and 3.

3 New Similarity Operator for IFSs

Each of the similarity methods of IFSs listed above possesses at least one of the following setbacks: (1) omission of the indeterminate parameter, (2) inability to produce valid similarity values, and (3) violation of the axioms of the similarity operator. Sequel to these setbacks, we develop a new similarity operator under IFSs to resolve them. The new similarity operator for IFSs \(\Re _1\) and \(\Re _2\) in \(X=\lbrace x_1, x_2, \ldots , x_N\rbrace\) is given by:

$$\begin{aligned} \tilde{\mathbb {S}}(\Re _1, \Re _2)=\dfrac{\mathbb {S}_{\beta }(\Re _1,\Re _2) + \mathbb {S}_{\eta }(\Re _1,\Re _2) + \mathbb {S}_{\psi }(\Re _1,\Re _2)}{3}, \end{aligned}$$
(16)

where

$$\begin{aligned}{} & {} \mathbb {S}_{\beta }(\Re _1,\Re _2)=1-\dfrac{1}{\root \alpha \of {N}}\Bigg (\sum ^N_{j=1}\bigg [\max \Big \lbrace \beta _{\Re _1}(x_j),\beta _{\Re _2}(x_j)\Big \rbrace \\{} & {} \quad - \min \Big \lbrace \beta _{\Re _1}(x_j),\beta _{\Re _2}(x_j)\Big \rbrace \bigg ]^{\alpha }\Bigg )^{\frac{1}{\alpha }}, \\{} & {} \mathbb {S}_{\eta }(\Re _1,\Re _2)=1-\dfrac{1}{\root \alpha \of {N}}\Bigg (\sum ^N_{j=1}\bigg [\max \Big \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\Big \rbrace \\{} & {} \quad - \min \Big \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\Big \rbrace \bigg ]^{\alpha }\Bigg )^{\frac{1}{\alpha }}, \\{} & {} \mathbb {S}_{\psi }(\Re _1,\Re _2)=1-\dfrac{1}{\root \alpha \of {N}}\Bigg (\sum ^N_{j=1}\bigg [\max \Big \lbrace \psi _{\Re _1}(x_j),\psi _{\Re _2}(x_j)\Big \rbrace \\{} & {} \quad - \min \Big \lbrace \psi _{\Re _1}(x_j),\psi _{\Re _2}(x_j)\Big \rbrace \bigg ]^{\alpha }\Bigg )^{\frac{1}{\alpha }}\end{aligned}$$

for \(\alpha =1, 2\).

We see that (16) includes the indeterminate parameter (the hesitation information), produces valid similarity values (as seen in Tables 2, 3), and satisfies the axioms of the similarity operator, unlike the discussed similarity operators.

3.1 Some Properties of the New IFSO

Some of the properties of the new similarity operator under IFSs are presented in this subsection to validate the new method. We observe that

$$\begin{aligned} \left. \begin{aligned}&\max \Big \lbrace \beta _{\Re _1}(x_j),\beta _{\Re _2}(x_j)\Big \rbrace -\min \Big \lbrace \beta _{\Re _1}(x_j),\beta _{\Re _2}(x_j)\Big \rbrace \\&\quad = |\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|,\\ \max \Big \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\Big \rbrace -\min \Big \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\Big \rbrace \\&\quad = |\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|, \; \text {and}\\ \max \Big \lbrace \psi _{\Re _1}(x_j),\psi _{\Re _2}(x_j)\Big \rbrace -\min \Big \lbrace \psi _{\Re _1}(x_j),\psi _{\Re _2}(x_j)\Big \rbrace \\&\quad = |\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)| \end{aligned} \right\} . \end{aligned}$$
(17)

Substituting (17) into (16), we have

$$\begin{aligned} \begin{aligned} \tilde{\mathbb {S}}(\Re _1,\Re _2) &= \dfrac{1}{3}\Bigg ( \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\ & \quad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\ & \quad +\bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\Bigg ). \end{aligned} \end{aligned}$$
(18)
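In the absolute-difference form (18), the new operator is only a few lines of Python; \(\alpha =1\) and \(\alpha =2\) recover the two variants used in Sect. 4 (the function name is ours, and the sketch reuses the `IFV` container from Sect. 2.1).

```python
def s_new(R1, R2, alpha=1):
    """The proposed similarity operator in the form (18), for alpha = 1 or 2."""
    N = len(R1)

    def component(diffs):
        # 1 - (sum of |diff|^alpha)^(1/alpha) / N^(1/alpha)
        return 1.0 - sum(d**alpha for d in diffs) ** (1.0 / alpha) / N ** (1.0 / alpha)

    s_beta = component([abs(a.mu - b.mu) for a, b in zip(R1, R2)])
    s_eta = component([abs(a.nu - b.nu) for a, b in zip(R1, R2)])
    s_psi = component([abs(a.pi - b.pi) for a, b in zip(R1, R2)])
    return (s_beta + s_eta + s_psi) / 3.0
```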

Theorem 3.1

Suppose \(\Re _1\subseteq \Re _2\subseteq \Re _3\) are IFSs in \(X=\lbrace x_1, x_2,\ldots , x_N\rbrace\), then

  1. (i)

    \(\tilde{\mathbb {S}}(\Re _1,\Re _3)\le \tilde{\mathbb {S}}(\Re _1,\Re _2)\),

  2. (ii)

    \(\tilde{\mathbb {S}}(\Re _1,\Re _3)\le \tilde{\mathbb {S}}(\Re _2,\Re _3)\),

  3. (iii)

    \(\tilde{\mathbb {S}}(\Re _1,\Re _3)\le \min \big \lbrace \tilde{\mathbb {S}}(\Re _1,\Re _2), \tilde{\mathbb {S}}(\Re _2,\Re _3)\big \rbrace\).

Proof

Since \(\Re _1\subseteq \Re _2\subseteq \Re _3\), we have \(\beta _{\Re _1}(x_j)\le \beta _{\Re _2}(x_j)\le \beta _{\Re _3}(x_j)\) and \(\eta _{\Re _1}(x_j)\ge \eta _{\Re _2}(x_j)\ge \eta _{\Re _3}(x_j)\) \(\forall x_j\in X\). Consequently,

$$\begin{aligned} \begin{aligned} |\beta _{\Re _1}(x_j)-\beta _{\Re _3}(x_j)|^{\alpha } \ge |\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^{\alpha },\\ |\eta _{\Re _1}(x_j)-\eta _{\Re _3}(x_j)|^{\alpha } \ge |\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|^{\alpha },\\ |\psi _{\Re _1}(x_j)-\psi _{\Re _3}(x_j)|^{\alpha } \ge |\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)|^{\alpha }, \end{aligned} \end{aligned}$$

which implies that

$$\begin{aligned} \begin{aligned}&1-\dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _3}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }} \\&\quad \le 1-\dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }},\\ 1-\dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\eta _{\Re _1}(x_j)-\eta _{\Re _3}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\\&\quad \le 1-\dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }},\\ 1-\dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\psi _{\Re _1}(x_j)-\psi _{\Re _3}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\\&\quad \le 1-\dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}. \end{aligned} \end{aligned}$$

Hence, \(\tilde{\mathbb {S}}(\Re _1,\Re _3)\le \tilde{\mathbb {S}}(\Re _1,\Re _2)\) by (18), which proves (i). Similarly, (ii) follows. From (i) and (ii), (iii) follows. \(\square\)

Theorem 3.2

Suppose \(\Re _1\subseteq \Re _2\subseteq \Re _3\) are IFSs in \(X=\lbrace x_1, x_2,\ldots , x_N\rbrace\), then

  1. (i)

    \(0\le \tilde{\mathbb {S}}(\Re _1,\Re _2)\le 1\),

  2. (ii)

    \(\tilde{\mathbb {S}}(\Re _1,\Re _1)=1\), \(\tilde{\mathbb {S}}(\Re _2,\Re _2)=1\),

  3. (iii)

    \(\tilde{\mathbb {S}}(\Re _1,\Re _2)=1\) \(\Leftrightarrow\) \(\Re _1=\Re _2\),

  4. (iv)

    \(\tilde{\mathbb {S}}(\Re _1,\Re _2)=\tilde{\mathbb {S}}(\overline{\Re }_1,\overline{\Re }_2)\),

  5. (v)

    \(\tilde{\mathbb {S}}(\Re _1,\Re _2)=\tilde{\mathbb {S}}(\Re _2,\Re _1)\),

  6. (vi)

\(\tilde{\mathbb {S}}(\Re _1,\Re _3)\le \tilde{\mathbb {S}}(\Re _1,\Re _2)+\tilde{\mathbb {S}}(\Re _2,\Re _3)\),

  7. (vii)

    \(\tilde{\mathbb {S}}(\Re _1\cap \Re _2, \Re _1\cup \Re _2)= \tilde{\mathbb {S}}(\Re _1,\Re _2)\).

Proof

To show (i), we establish that (a) \(\tilde{\mathbb {S}}(\Re _1,\Re _2)\ge 0\), and (b) \(\tilde{\mathbb {S}}(\Re _1,\Re _2)\le 1\). Since \(|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|\le 1\) for each j (and likewise for \(\eta\) and \(\psi\)), each sum is at most N, and so (a) holds because

$$\begin{aligned} \begin{aligned} 1-\dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }} \ge 0, \\ 1-\dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }} \ge 0,\\ 1-\dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }} \ge 0. \end{aligned} \end{aligned}$$

To prove (b), we assume that

$$\begin{aligned} \left. \begin{aligned} \bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}=A\\ \bigg (\sum ^N_{j=1}|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}= B\\ \bigg (\sum ^N_{j=1}|\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}=C \end{aligned} \right\} . \end{aligned}$$
(19)

Substituting (19) into (18), we have

$$\begin{aligned} \tilde{\mathbb {S}}(\Re _1,\Re _2) &= \dfrac{1}{3}\Bigg ( \bigg [1- \dfrac{A}{\root \alpha \of {N}}\bigg ] + \bigg [1- \dfrac{B}{\root \alpha \of {N}}\bigg ] +\bigg [1- \dfrac{C}{\root \alpha \of {N}}\bigg ]\Bigg )\\ & = \dfrac{3\root \alpha \of {N}-A-B-C}{3\root \alpha \of {N}}. \end{aligned}$$

Then

$$\begin{aligned} \tilde{\mathbb {S}}(\Re _1,\Re _2)-1 &= \dfrac{3\root \alpha \of {N}-A-B-C}{3\root \alpha \of {N}}-1\\ & = -\dfrac{A+B+C}{3\root \alpha \of {N}} \le 0. \end{aligned}$$

Hence, \(\tilde{\mathbb {S}}(\Re _1,\Re _2)-1\le 0 \Rightarrow \tilde{\mathbb {S}}(\Re _1,\Re _2)\le 1\), which proves (i). The proofs of (ii)–(iv) are straightforward.

We prove (v) as follows:

$$\begin{aligned} \tilde{\mathbb {S}}(\Re _1,\Re _2) &= \dfrac{1}{3}\Bigg ( \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\ & \quad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\eta _{\Re _1}(x_j)-\eta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\ & \quad +\bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\Bigg )\\ &= \dfrac{1}{3}\Bigg ( \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|-(-\beta _{\Re _1}(x_j)+\beta _{\Re _2}(x_j))|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\& \quad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|-(-\eta _{\Re _1}(x_j)+\eta _{\Re _2}(x_j))|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\ & \quad +\bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|-(-\psi _{\Re _1}(x_j)+\psi _{\Re _2}(x_j))|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\Bigg ) \\ &= \dfrac{1}{3}\Bigg ( \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\beta _{\Re _2}(x_j)-\beta _{\Re _1}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\ & \quad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\eta _{\Re _2}(x_j)-\eta _{\Re _1}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\ & \quad +\bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\psi _{\Re _2}(x_j)-\psi _{\Re _1}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\Bigg ), \end{aligned}$$

which shows that \(\tilde{\mathbb {S}}(\Re _1,\Re _2)=\tilde{\mathbb {S}}(\Re _2,\Re _1)\). The proof of (vi) follows deductively from Theorem 3.1.

By incorporating the intersection and union of IFSs into the new similarity operator, we have

$$\begin{aligned}{} & {} \tilde{\mathbb {S}}(\Re _1\cap \Re _2, \Re _1\cup \Re _2)\\{} & {} \quad = \dfrac{1}{3}\Bigg ( \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\min \lbrace \beta _{\Re _1}(x_j),\beta _{\Re _2}(x_j)\rbrace -\max \lbrace \beta _{\Re _1}(x_j),\beta _{\Re _2}(x_j)\rbrace |^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\{} & {} \qquad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\max \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\rbrace -\min \lbrace \eta _{\Re _1}(x_j),\eta _{\Re _2}(x_j)\rbrace |^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\{} & {} \qquad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\psi _{\Re _1\cap \Re _2}(x_j)-\psi _{\Re _1\cup \Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\Bigg )\\{} & {} \quad = \dfrac{1}{3}\Bigg ( \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\{} & {} \qquad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\eta _{\Re _2}(x_j)- \eta _{\Re _1}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\{} & {} \qquad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\Bigg )\\{} & {} \quad = \dfrac{1}{3}\Bigg ( \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\{} & {} \qquad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|-(-\eta _{\Re _2}(x_j)+ \eta _{\Re _1}(x_j))|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\{} & {} \qquad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\Bigg )\\{} & {} \quad = \dfrac{1}{3}\Bigg ( \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\beta _{\Re _1}(x_j)-\beta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\{} & {} \qquad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\eta _{\Re _1}(x_j)- \eta _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\\{} & {} \qquad + \bigg [1- \dfrac{1}{\root \alpha \of {N}}\bigg (\sum ^N_{j=1}|\psi _{\Re _1}(x_j)-\psi _{\Re _2}(x_j)|^{\alpha }\bigg )^{\frac{1}{\alpha }}\bigg ]\Bigg )\\{} & {} \quad = \tilde{\mathbb {S}}(\Re _1, \Re _2), \end{aligned}$$

which proves (vii). \(\square\)
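Properties (i), (ii), (iv), and (v) of Theorem 3.2 are easy to spot-check numerically with randomly generated IFSs. The sketch below is a sanity check, not a proof; it reuses `s_new` and `complement` from the earlier sketches, and the generator name is ours.

```python
import random

def random_ifs(N, rng):
    """Generate a random IFS of length N with mu + nu <= 1."""
    out = []
    for _ in range(N):
        mu = rng.random()
        nu = rng.random() * (1.0 - mu)  # keeps mu + nu <= 1
        out.append(IFV(mu, nu))
    return out

rng = random.Random(0)
for _ in range(1000):
    A, B = random_ifs(5, rng), random_ifs(5, rng)
    for alpha in (1, 2):
        s = s_new(A, B, alpha)
        assert -1e-12 <= s <= 1.0 + 1e-12                                    # (i)
        assert abs(s_new(A, A, alpha) - 1.0) < 1e-12                         # (ii)
        assert abs(s - s_new(complement(A), complement(B), alpha)) < 1e-12   # (iv)
        assert abs(s - s_new(B, A, alpha)) < 1e-12                           # (v)
```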

4 Application Examples

The applications of the new similarity operator in real-world problems based on recognition principle and MCDM technique are discussed in this section.

4.1 Recognition Principle

The recognition principle involves matching an unidentified pattern \(\Re\) against known patterns \(\Re _j\) (\(j=1,2,\ldots , N\)) to decide which class \(\Re\) belongs to, with the aid of an information measure. Using the new similarity operator, \(\Re\) is assigned to the class determined by

$$\begin{aligned} \Omega =\text {argmax}_{1\le j\le N}\lbrace \tilde{\mathbb {S}}(\Re , \Re _j)\rbrace . \end{aligned}$$
(20)
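In code, the classification rule (20) is a one-line argmax over the known patterns (a sketch with illustrative names, reusing `s_new` from Sect. 3):

```python
def classify(unknown, known, alpha=1):
    """Recognition principle (20): pick the most similar known pattern."""
    scores = [s_new(unknown, P, alpha) for P in known]
    best = max(range(len(known)), key=lambda j: scores[j])
    return best, scores  # winning index and all similarity indexes
```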

4.1.1 Application 1

Emergency control is the management of responsibilities or resources to deal with a disaster so as to avert or lessen its harmful effects. The incidence of disasters is fraught with indeterminacy, and so it is appropriate to adopt indeterminacy-aware approaches to manage them. We deploy the IFSO for disaster control because of the ability of IFSs to tackle hesitation.

Suppose emergency rescue workers have documented three typical situations of emergency rescue planning data, represented as IFSs denoted by \(\tilde{D}_1\), \(\tilde{D}_2\) and \(\tilde{D}_3\), determined by rescue difficulty (\(s_1\)), scale of people affected (\(s_2\)), traffic conditions (\(s_3\)), and emergency supplies (\(s_4\)). Assume a new emergency situation occurs, denoted by an IFS \(\tilde{E}\), also described by rescue difficulty, scale of people affected, traffic conditions, and emergency supplies. To control the present emergency, the rescue workers match it with the past emergencies to classify it, and then adopt the corrective measures of the matched past emergency. The disaster situation data used in this application are taken from [35].

First, the similarity indexes of \(\tilde{E}\) and \(\tilde{D}_j\) (\(j=1,2,3\)) are computed using (18) as follows:

$$\begin{aligned}\tilde{\mathbb {S}}(\tilde{D}_1, \tilde{E})=0.9,\; \tilde{\mathbb {S}}(\tilde{D}_2, \tilde{E})=0.9167,\; \tilde{\mathbb {S}}(\tilde{D}_3, \tilde{E})=0.9333, \; \text {for}\; \alpha =1,\end{aligned}$$

and

$$\begin{aligned}\tilde{\mathbb {S}}(\tilde{D}_1, \tilde{E})=0.8635,\; \tilde{\mathbb {S}}(\tilde{D}_2, \tilde{E})=0.9118,\; \tilde{\mathbb {S}}(\tilde{D}_3, \tilde{E})=0.9152, \; \text {for}\; \alpha =2.\end{aligned}$$

By applying (20), the current emergency \(\tilde{E}\) is most associated with emergency \(\tilde{D}_3\). Hence, the disaster rescue team should apply the control measures used for emergency \(\tilde{D}_3\) to the current disaster \(\tilde{E}\). This approach to disaster control is very significant in real life because, during an emergency, there are apprehension and confusion that need to be resolved as quickly as possible. With this approach, disasters can be adequately controlled provided there are intuitionistic fuzzy datasets for past emergencies.

4.1.2 Application 2

The process of pattern recognition or classification is most often enmeshed with uncertainties, and as such, pattern recognition based on an IFSO is significant for a dependable outcome. Suppose there are four classes of building material symbolized by IFSs \(\tilde{P}_1\), \(\tilde{P}_2\), \(\tilde{P}_3\), and \(\tilde{P}_4\) described by the set of features \(\ell =\lbrace \ell _1, \ell _2,\ldots , \ell _{12}\rbrace\). Assume there is an unknown building material symbolized by an IFS \(\tilde{C}\) defined on the same features. The recognition principle associates \(\tilde{C}\) with one of the known patterns using the new similarity operator. The patterns are given in Table 1 [44].

Table 1 Building material patterns

The similarity indexes of \(\tilde{C}\) and \(\tilde{P}_j\) (\(j=1,2,3,4\)) are computed using (18) as follows:

$$\begin{aligned}{} & {} \tilde{\mathbb {S}}(\tilde{P}_1, \tilde{C})=0.7564,\; \tilde{\mathbb {S}}(\tilde{P}_2, \tilde{C})=0.7411,\; \tilde{\mathbb {S}}(\tilde{P}_3, \tilde{C})=0.8802,\\{} & {} \tilde{\mathbb {S}}(\tilde{P}_4, \tilde{C})=0.9772, \; \text {for}\; \alpha =1,\end{aligned}$$

and

$$\begin{aligned}{} & {} \tilde{\mathbb {S}}(\tilde{P}_1, \tilde{C})=0.6807,\; \tilde{\mathbb {S}}(\tilde{P}_2, \tilde{C})=0.6756,\; \tilde{\mathbb {S}}(\tilde{P}_3, \tilde{C})=0.8077,\\{} & {} \tilde{\mathbb {S}}(\tilde{P}_4, \tilde{C})=0.9656, \; \text {for}\; \alpha =2. \end{aligned}$$

Using (20), we see that the unknown pattern \(\tilde{C}\) is classified with pattern \(\tilde{P}_4\), since their similarity index is the greatest. The unknown pattern is categorized using the greatest similarity value without any indecision. Because of the inevitability of vagueness in the process of pattern recognition, this idea of IFSO can be of significant help in eye-tracking algorithms using multi-model Kalman filters, human action recognition in movies, among others.

4.1.3 Comparison of the IFSOs Based on Recognition Principle

Next, we justify the validity of the newly developed similarity operator by comparing it with the other similarity operators [6, 15, 25, 28, 31, 34, 36,37,38, 46], and present the results in Tables 2 and 3.

Table 2 Similarity values for Application 1
Fig. 1 Plot of Table 2

From Table 2 and Fig. 1, we see that all the IFSOs give the same classification. However, the newly developed similarity operator yields the greatest similarity values when compared with the methods in [6, 15, 25, 28, 31, 34, 36,37,38, 46]. In fact, the developed IFSO gives the most accurate results, and so it is very reliable and appropriate for MCDM problems.

Table 3 Similarity values for Application 2
Fig. 2 Plot of Table 3

From Table 3 and Fig. 2, we see that the newly developed IFSO gives the best similarity indexes with a high reliability rate. It is observed that the IFSO in [37] violates a condition of the similarity measure by yielding values that are not within [0, 1]. Nonetheless, all the similarity operators in [6, 15, 25, 28, 31, 34, 36, 38, 46] and the new similarity operator give the same classification.

4.2 Multiple Criteria Decision Making

The multiplicity of objectives in most human decision-making cases necessitates the use of multiple criteria decision making (MCDM). MCDM is the aspect of operations research that explicitly assesses numerous conflicting criteria in decision making. Fuzzy logic provides an effective framework for carrying out MCDM under uncertainty.

4.2.1 Application 3

Suppose a car buyer wants to buy a suitable car for private use, and there are five cars, represented by IFSs \(\tilde{C}_j\) \((j=1,2,3,4,5)\), to be selected from. The five cars are evaluated against the following factors (E): fuel consumption, degree of friction, price, comfort, design, and security.

4.2.2 Algorithm for the MCDM

Step 1. Formulate the intuitionistic fuzzy decision matrix (IFDM) \(\tilde{C}_k=\lbrace E_i (\tilde{C}_j)\rbrace _{(m\times n)}\) for \(i=1,\ldots , m\) and \(j=1,\ldots , n\). The information on the cars is represented by the IFDM based on the selection factors in Table 4.

Table 4 Car selection information

Among the criteria in Table 4, price is the cost criterion, and the other criteria are benefit criteria.

Step 2. Find the normalized IFDM \(\tilde{D}=\langle \beta _{\tilde{C}^*_{j}}(E_i), \eta _{\tilde{C}^*_{j}}(E_i)\rangle _{m\times n}\) for \(\tilde{C}_k\), where \(\langle \beta _{\tilde{C}^*_{j}}(E_i), \eta _{\tilde{C}^*_{j}}(E_i)\rangle\) are intuitionistic fuzzy values, and \(\tilde{D}\) is defined by

$$\begin{aligned} \langle \beta _{\tilde{C}^*_{j}}(E_i), \eta _{\tilde{C}^*_{j}}(E_i)\rangle = \left\{ \begin{array}{ll} \langle \beta _{\tilde{C}_{j}}(E_i), \eta _{\tilde{C}_{j}}(E_i)\rangle , &{} \, \text {for benefit criterion of}\, \tilde{D}\\ \langle \eta _{\tilde{C}_{j}}(E_i), \beta _{\tilde{C}_{j}}(E_i)\rangle , &{} \, \text {for cost criterion of}\, \tilde{D} \end{array} \right. \end{aligned}$$
(21)

Table 5 contains the normalized IFDM; a computational sketch of this step follows the table.

Table 5 Normalized car selection information
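In code, Step 2 only flips the \((\beta , \eta )\) pairs in cost-criterion rows; a minimal sketch, assuming the IFDM is stored as one row per criterion with a parallel list of criterion types:

```python
def normalize(matrix, is_cost):
    """Normalized IFDM per (21): swap mu and nu wherever the criterion is a cost."""
    return [[IFV(v.nu, v.mu) if cost else v for v in row]
            for row, cost in zip(matrix, is_cost)]

# Example shape for Table 4: 6 criteria (rows) x 5 cars (columns);
# only the price row would have is_cost[i] == True.
```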

Step 3. Compute the positive ideal solution (PIS) and negative ideal solution (NIS) given by

$$\begin{aligned} \begin{aligned} \tilde{C}^+=\lbrace \tilde{C}^+_1,\ldots , \tilde{C}^+_n\rbrace \; \\ \tilde{C}^-=\lbrace \tilde{C}^-_1,\ldots , \tilde{C}^-_n\rbrace \end{aligned} \end{aligned}$$
(22)

where

$$\begin{aligned} \tilde{C}^+= & {} \left\{ \begin{array}{ll} \langle \max \lbrace \beta _{\tilde{C}_{j}}(E_i)\rbrace , \min \lbrace \eta _{\tilde{C}_{j}}(E_i)\rbrace \rangle , &{} \, \text {if}\; E_i \; \text {is a benefit criterion}\\ \langle \min \lbrace \beta _{\tilde{C}_{j}}(E_i)\rbrace , \max \lbrace \eta _{\tilde{C}_{j}}(E_i)\rbrace \rangle , &{} \, \text {if}\; E_i \; \text {is a cost criterion}, \end{array} \right. \end{aligned}$$
(23)
$$\begin{aligned} \tilde{C}^-= & {} \left\{ \begin{array}{ll} \langle \min \lbrace \beta _{\tilde{C}_{j}}(E_i)\rbrace , \max \lbrace \eta _{\tilde{C}_{j}}(E_i)\rbrace \rangle , &{} \, \text {if}\; E_i \; \text {is a benefit criterion}\\ \langle \max \lbrace \beta _{\tilde{C}_{j}}(E_i)\rbrace , \min \lbrace \eta _{\tilde{C}_{j}}(E_i)\rbrace \rangle , &{} \, \text {if}\; E_i \; \text {is a cost criterion}. \end{array} \right. \end{aligned}$$
(24)

The information for the NIS and PIS is contained in Table 6.

Table 6 NIS and PIS for car selection information
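Since the matrix has already been normalized (every criterion now behaves as a benefit criterion), (23)–(24) reduce to row-wise extremes; the sketch below assumes that convention.

```python
def ideal_solutions(norm_matrix):
    """PIS and NIS per (23)-(24) on a normalized (all-benefit) IFDM."""
    pis = [IFV(max(v.mu for v in row), min(v.nu for v in row)) for row in norm_matrix]
    nis = [IFV(min(v.mu for v in row), max(v.nu for v in row)) for row in norm_matrix]
    return pis, nis
```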

Step 4. Calculate the similarity indexes \(\tilde{\mathbb {S}}(\tilde{C}_{j}, \tilde{C}^-)\) and \(\tilde{\mathbb {S}}(\tilde{C}_{j}, \tilde{C}^+)\) using the new method for \(\alpha =1,2\). Applying (18) to Tables 4 and 6, we get Tables 7 and 8.

Table 7 Similarity indexes for \(\alpha =1\)
Table 8 Similarity indexes for \(\alpha =2\)

Step 5. Calculate the closeness coefficient for each car \(\tilde{C}_j\) using

$$\begin{aligned} f(\tilde{C}_j )=\dfrac{\tilde{\mathbb {S}}(\tilde{C}_{j}, \tilde{C}^+)}{\tilde{\mathbb {S}}(\tilde{C}_{j}, \tilde{C}^+) +\tilde{\mathbb {S}}(\tilde{C}_{j}, \tilde{C}^-)}. \end{aligned}$$
(25)
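Steps 4 and 5 combine into a short ranking routine; a sketch (names ours) that computes the closeness coefficient (25) for each alternative, treating the columns of the normalized IFDM as the alternatives:

```python
def rank_alternatives(norm_matrix, alpha=1):
    """Closeness coefficients per (25); returns alternatives ordered best-first."""
    pis, nis = ideal_solutions(norm_matrix)
    n_alt = len(norm_matrix[0])
    coeffs = []
    for j in range(n_alt):
        col = [row[j] for row in norm_matrix]  # alternative j across all criteria
        s_plus = s_new(col, pis, alpha)
        s_minus = s_new(col, nis, alpha)
        coeffs.append(s_plus / (s_plus + s_minus))
    order = sorted(range(n_alt), key=lambda j: coeffs[j], reverse=True)
    return order, coeffs
```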

By applying (25), we get Table 9.

Table 9 Closeness coefficients

Step 6. The car with the greatest closeness coefficient is the most suitable for the buyer to purchase. From Table 9, we see that car \(\tilde{C}_2\) is the appropriate car to buy because of its economical fuel consumption, commensurate price, comfort, good design, and security.

For the sake of comparison, we deploy the existing similarity operators [6, 15, 25, 28, 31, 34, 36,37,38, 46] and our new similarity operator on the data in Tables 4 and 6. Following Steps 4–6, we get the orderings in Table 10.

Table 10 Comparative Information

Table 10 shows that the MCDM technique embedded with the discussed similarity operators yields the same choice, with the exception of the similarity operators in [6, 25]. In fact, the MCDM approach with the similarity operators in [6, 25] gives the same ordering and ranking, \(\tilde{C}_4\succ \tilde{C}_3\succ \tilde{C}_2\succ \tilde{C}_5\succ \tilde{C}_1\). The MCDM approach with the similarity operator in [46] and \(\mathbb {S}_6\) in [28] gives the ordering and ranking \(\tilde{C}_2\succ \tilde{C}_3\succ \tilde{C}_5\succ \tilde{C}_1\succ \tilde{C}_4\). The MCDM approach with the similarity operators in [15, 38] gives the ordering and ranking \(\tilde{C}_2\succ \tilde{C}_4\succ \tilde{C}_5\succ \tilde{C}_3\succ \tilde{C}_1\). Finally, the MCDM approach with the similarity operators in [28, 31, 34, 36, 37] and the new similarity operator gives the same ordering and ranking, \(\tilde{C}_2\succ \tilde{C}_4\succ \tilde{C}_3\succ \tilde{C}_5\succ \tilde{C}_1\).

The new similarity method possesses some overriding advantages, which are:

  1. (i)

the new similarity method produces more precise and reliable similarity values compared with the other similarity methods,

  2. (ii)

the new similarity method incorporates all the parameters of IFSs to enhance performance rating and avoid the exclusion error witnessed in [6, 25, 28, 31, 34, 36, 37, 46],

  3. (iii)

the new similarity method completely satisfies the axioms of the similarity operator, unlike the method in [37].

5 Conclusion

The place of the similarity operator in real-world decision-making problems under indeterminate environments cannot be overemphasized. To this end, the concept of similarity operators has been researched by many authors using intuitionistic fuzzy information. In this paper, a generalized similarity operator is developed which produces more precise and reliable similarity values compared with the other similarity methods. Besides the fact that the new similarity operator completely satisfies the axioms of the similarity operator, unlike the method in [37], it equally incorporates all the descriptive features of IFSs to enhance performance rating, unlike the methods in [6, 25, 28, 31, 34, 36, 37, 46]. Some theoretical results of the generalized similarity operator for IFSs were discussed. In addition, the generalized similarity operator was used to discuss real-world decision-making problems based on the recognition principle and the MCDM approach. Some comparative studies were carried out to justify the rationale behind the development of the generalized similarity operator for IFSs. In future work, the generalized similarity operator could be extended to other uncertain environments beyond IFSs with some marginal modifications.