Abstract
This paper presents a new methodology for solving multiple-attribute decision-making (MADM) problems using a complex Pythagorean normal interval-valued fuzzy set (CPNIVFS), an extension of the complex Pythagorean fuzzy set. Four types of aggregation operators (AOs) are discussed: the CPNIVF weighted averaging (CPNIVFWA), CPNIVF weighted geometric (CPNIVFWG), generalized CPNIVFWA (CGPNIVFWA), and generalized CPNIVFWG (CGPNIVFWG) operators. The score function, accuracy function, and operational laws of the CPNIVFS are defined. The associativity, distributivity, idempotency, boundedness, commutativity, and monotonicity properties are also shown to hold for complex Pythagorean normal interval-valued fuzzy numbers. Furthermore, an algorithm based on the defined AOs is proposed to solve MADM problems. The proposed approach is then applied to a medical diagnosis problem about brain tumors, because computer science and machine technology are among the most important components of brain tumor research. The five types of brain tumors diagnosed in these patients are gliomas, meningiomas, metastases, embryonal tumors, and ependymomas. Several types of treatment are available and are often combined as part of an overall treatment plan, including surgery, radiation therapy, chemotherapy, immunotherapy, and clinical trials. Based on the comparisons and options gathered, the most suitable treatment can be chosen. In this regard, the value of the integer \(\Game \) plays a significant role in determining the model. The candidate models under consideration can be validated by comparing them with previously proposed ones. The proposed technique is compared with existing methods to demonstrate its validity, and the results show that it is more reliable and effective.
Finally, the criteria are evaluated by expert assessments to determine the most appropriate options.
1 Introduction
Complex real-world problems make it increasingly challenging for decision-makers to identify optimal solutions. Although choosing among alternatives may be difficult, selecting the best option remains possible. Firms face many challenges arising from opportunities, objectives, and viewpoint constraints. When engaging in decision-making (DM), individuals and groups must consider multiple objectives simultaneously. Uncertainty is therefore a common feature of the difficult situations encountered in real life.
Several theories have thus emerged to support DM under uncertainty. They include fuzzy sets (FSs) [1], intuitionistic fuzzy sets (IFSs) [2], and Pythagorean fuzzy sets (PFSs) [3,4,5], among others. In an FS, each element is assigned a membership degree (MD) between 0 and 1. Xu and Li [6] predicted the regression of a fuzzy time series by clustering fuzzy c-numbers [7]. A few years later, Atanassov introduced the IFS as a generalization of the FS [2], with the condition that the sum of the membership grade and the corresponding non-membership grade does not exceed one. In some DM problems, however, the sum of the MD and the non-membership degree (NMD) exceeds one. For this reason, Yager [4] introduced the PFS, a generalization of the IFS characterized by the condition that the sum of the squares of the MD and NMD does not exceed one. Akram et al. discussed several applications that can be modeled using PFSs [8,9,10]. Rahman et al. [11] discussed a geometric aggregation operator (GAO) using the interval-valued PFS (IVPFS). A discussion of the IVPFS with an aggregation operator (AO) was presented by Peng et al. [12]. Rahman et al. [13] proposed several approaches to solve multiple-attribute decision-making (MADM) problems. Khan [14] developed an induced IVPFS approach for group MADM based on Einstein Choquet integral operators. Yang et al. [15] discussed an IVPNF with AOs for an MADM approach. Palanikumar et al. discussed an MADM approach for Pythagorean neutrosophic IVNF with AOs [16].
Ramot et al. [17, 18] introduced the complex FS (CFS), which extends the MD to complex numbers. The CFS was systematically reviewed by Yazdanbakhsh et al. [19]. Alkouri et al. [20] extended the CFS to the complex IFS (CIFS), also extending the NMD to the complex setting. They extended the MD and NMD to the unit circle of the complex plane and defined the fundamental operations of the CIFS, such as union, intersection, and complement. MADM was applied to solve real-life problems associated with the CIFS by Garg et al. [21] using AOs. Ullah et al. [22] generalized the CFS to the CPFS using pattern recognition methods. In addition, they determined distance measures for the CPFS based on pattern recognition principles. Liu et al. [23] developed the concept of a complex root FS (CROFS), derived from generalizations of CPFSs with the sum of the powers of the MD and NMD. Rong et al. [24] developed MacLaurin symmetric means under the CROFS. Akram et al. [25] proposed a new concept of complex picture FSs (CPicFSs) that considers the basic operations of Hamacher AOs. In a CPicFS, the amplitude and phase terms represent the truth degree (TD), abstinence degree (AD), and falsity degree (FD). Recently, Palanikumar et al. [26] have discussed robotic sensors based on the score and accuracy values in a q-rung complex diophantine neutrosophic normal set. Liu et al. [27] discussed the q-rung orthopair fuzzy Archimedean Bonferroni mean (q-ROFABM) and q-rung orthopair fuzzy-weighted Archimedean Bonferroni mean (q-ROFWABM) operators and examined some of their aspects. Liu et al. [28] proposed the q-rung orthopair fuzzy power averaging (q-ROFPA), q-rung orthopair fuzzy power weighted average (q-ROFPWA), q-ROFPMSM, and q-rung orthopair fuzzy power weighted MSM (q-ROFPWMSM) operators for q-ROFNs and described their properties. Liu et al.
[29] discussed the q-rung orthopair fuzzy-weighted average (q-ROFWA) and q-rung orthopair fuzzy-weighted geometric (q-ROFWG) operators. Wang et al. [30] introduced the concept of a two-stage optimization process for estimating incomplete probabilistic linguistic preference relations (InPLPRs). Liu et al. [31] introduced the concept of a group consensus decision based on the InPLPR, considering consistency, social trust networks, and unit cost consensus adjustments.
Zhang et al. [32] discussed three types of multi-granularity q-rung orthopair fuzzy preference relations (PRs) and their interesting properties. The MG-3WD approach was also applied to q-ROF complex information systems using the MAGDM algorithm based on q-ROF multi-attribute rules. Zhang et al. [33] analyzed a UCI dataset using MGq-ROF PRs, the MULTIMOORA method, and the TPOP method using the MAGDM approach. Zhang et al. [34] discussed the neutrosophic fusion of RST in terms of basic models and soft-set models. Zian et al. [35] discussed fuzzy relative knowledge distances using fuzzy granularity spaces, with properties corresponding to fuzzy knowledge distances. Furthermore, several experimental analyses were conducted to demonstrate that precise knowledge distances contain different structural information. Anusha et al. [36] discussed hybridizations of the Archimedean copula, generalized the MSM operators, and presented their applications based on q-rung probabilistic dual hesitant fuzzy sets. Multiple alternatives can generally be evaluated on the basis of their evaluation values, for example using MADM [37, 38], to arrive at a comprehensive evaluation result. Such problems can be solved in two ways. TOPSIS, VIKOR, and ELECTRE are examples of traditional approaches. Compared with traditional approaches, AOs are more effective in solving information integration problems: traditional approaches can only give ranking results, whereas AOs provide comprehensive values for all alternatives. Bairagi [39] introduced a new logic for homogeneous group DM to select robotic systems using the extended TOPSIS.
Hesitant FSs with real-life applications were discussed in [40, 41]. Lu et al. [42, 43] discussed the consensus process for DM in social networks with incomplete probabilistic hesitant fuzzy and hesitant multiplicative preference relations. Yazdi et al. [44] introduced the application of artificial intelligence in DM to select a maintenance strategy. Rojek et al. [45] discussed artificial intelligence for supervising machine failures and supporting their repair. Huang et al. [46] discussed the concept of designing an alternative assessment and selection method using a Z-cloud rough number-based BWM-MABAC model. Xiao [47] introduced q-rung orthopair fuzzy DM with a new score function and the best–worst method for manufacturer selection. Huang et al. [48] discussed failure mode and effect analysis using T-spherical fuzzy maximizing deviation and combined comparison solution methods. Mahmood et al. [49] addressed the concept of prioritized Muirhead mean AOs in complex single-valued neutrosophic settings and their application to MADM. Hague et al. [50] discussed the concept of logarithmic operational laws in an interval neutrosophic setting. Hague et al. [51, 52] communicated an exponential operational law in a generalized spherical fuzzy set. Banik et al. [53] introduced the concept of the GRA and MEREC techniques for an agricultural MCGDM problem in a pentagonal neutrosophic set. Recently, Banik et al. [54] discussed the notion of a neutrosophic cosine operator-based linear programming ANP-EDAS MCGDM strategy for selecting anti-Pegasus software.
We discuss several AOs, namely the complex Pythagorean normal interval-valued fuzzy weighted averaging (CPNIVFWA), complex Pythagorean normal interval-valued fuzzy weighted geometric (CPNIVFWG), complex generalized Pythagorean normal interval-valued fuzzy weighted averaging (CGPNIVFWA), and complex generalized Pythagorean normal interval-valued fuzzy weighted geometric (CGPNIVFWG) operators, with respect to their basic properties, such as associativity, idempotency, monotonicity, and boundedness.
The main contributions of this study are as follows:
1. We developed some novel CPNIVFSs using AOs.
2. We introduced a new type of score value for CPNIVFSs.
3. The associativity, idempotency, boundedness, commutativity, and monotonicity properties are satisfied by CPNIVFSs.
4. The CPNIVFWA, CPNIVFWG, CGPNIVFWA, and CGPNIVFWG methods were used to determine the score values.
5. To demonstrate the effectiveness and reliability of the proposed CPNIVFSs using AOs, we applied the proposed operators to a real-world problem.
6. We compared the existing AOs with the proposed ones.
7. The results indicate that the proposed method is more accurate and effective than existing techniques.
8. The results were obtained based on an integer \(\Game \).
By using AOs, we discuss the CPNIVFS and the advantages of this information, and establish a ranking list based on these operators. The remainder of this paper is organized as follows. Section 2 presents the background and basic preliminaries. Section 3 describes the CPNIVFN and its operations. Section 4 contains some AOs for the CPNIVFN. Section 5 contains a numerical example, analysis, and discussion of the AOs for the CPNIVFN. The results and conclusions are summarized in Sect. 6.
2 Background
In this section, we recall some basic definitions required in this paper.
Definition 2.1
[4] Let \(\mathscr {U}\) be the universe of discourse. The PFS \(\aleph \) in \(\mathscr {U}\) is \(\aleph = \Big \{\wp , \big \langle \Im ^{{T}}_{\aleph }(\wp ), \Im ^{{F}}_{\aleph }(\wp )\big \rangle \big | \wp \in \mathscr {U}\Big \}\), where \(\Im ^{{T}}_{\aleph }: \mathscr {U} \rightarrow [0,1]\) and \(\Im ^{{F}}_{\aleph }: \mathscr {U} \rightarrow [0,1]\) denote the MD and NMD of \(\wp \in \mathscr {U}\) to \(\aleph \), respectively, and \(0 \preceq (\Im ^{{T}}_{\aleph }(\wp ))^{2}+(\Im ^{{F}}_{\aleph }(\wp ))^{2} \preceq 1\). \(\aleph = \big \langle \Im ^{{T}}_{\aleph },\Im _{\aleph }^{{F}} \big \rangle \) is called a Pythagorean fuzzy number (PFN).
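To make the Pythagorean constraint concrete, here is a minimal Python sketch; the function name `is_valid_pfn` is our own illustration, not from the paper.

```python
def is_valid_pfn(mu: float, nu: float) -> bool:
    """Check the Pythagorean condition of Definition 2.1:
    0 <= mu, nu <= 1 and mu^2 + nu^2 <= 1."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu**2 + nu**2 <= 1.0
```

For example, the pair \((0.8, 0.5)\) fails the intuitionistic condition (since \(0.8 + 0.5 > 1\)) but is a valid PFN, since \(0.64 + 0.25 \le 1\); this is the extra modeling room the PFS provides over the IFS.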
Definition 2.2
[12] The PIVFS \(\aleph \) in \(\mathscr {U}\) is \(\widehat{\aleph }= \Big \{\wp , \Big \langle \widehat{\Im ^{{T}}_{\aleph }}(\wp ), \widehat{\Im ^{{F}}_{\aleph }}(\wp )\Big \rangle \Big | \wp \in \mathscr {U}\Big \}\), where \(\widehat{\Im ^{T}_{\aleph }}: \mathscr {U} \rightarrow Int([0,1])\) and \( \widehat{\Im ^{F}_{\aleph }}: \mathscr {U} \rightarrow Int([0,1])\) denote the MD and NMD of \(\wp \in \mathscr {U}\) to \(\aleph \), respectively, and \(0 \preceq (\Im ^{{\mathscr {T}}u}_{\aleph }(\wp ))^{2} +(\Im _{\aleph }^{{\mathscr {F}}u}(\wp ))^{2} \preceq 1\). \(\aleph =\Big \langle \Big [\Im ^{{\mathscr {T}}l}_{\aleph },\Im ^{{\mathscr {T}}u}_{\aleph }\Big ],\Big [\Im _{\aleph }^{{\mathscr {F}}l},\Im _{\aleph }^{{\mathscr {F}}u}\Big ]\Big \rangle \) is called a Pythagorean IVFN (PIVFN).
Definition 2.3
[12] Let \(\aleph = \Big \langle [\Im ^{{\mathscr {T}}l},\Im ^{{\mathscr {T}}u}],[\Im ^{{\mathscr {F}}l},\Im ^{{\mathscr {F}}u}] \Big \rangle \), \(\aleph _{1}= \Big \langle [\Im ^{{\mathscr {T}}l}_{1},\Im ^{{\mathscr {T}}u}_{1}], [\Im ^{{\mathscr {F}}l}_{1},\Im ^{{\mathscr {F}}u}_{1}] \Big \rangle \) and \(\aleph _{2}= \Big \langle [\Im ^{{\mathscr {T}}l}_{2}, \Im ^{{\mathscr {T}}u}_{2}], [\Im ^{{\mathscr {F}}l}_{2},\Im ^{{\mathscr {F}}u}_{2}] \Big \rangle \) be any three PIVFNs, and \(\Game > 0\). Then,
1. \({\small \aleph _{1}\bigoplus \aleph _{2}= \begin{bmatrix} \Big [\sqrt{(\Im ^{{\mathscr {T}}l}_{1})^{2} + (\Im ^{{\mathscr {T}}l}_{2})^{2} -(\Im ^{{\mathscr {T}}l}_{1})^{2} \cdot (\Im ^{{\mathscr {T}}l}_{2})^{2}},\\ \sqrt{(\Im ^{{\mathscr {T}}u}_{1})^{2} + (\Im ^{{\mathscr {T}}u}_{2})^{2} -(\Im ^{{\mathscr {T}}u}_{1})^{2} \cdot (\Im ^{{\mathscr {T}}u}_{2})^{2}}\,\Big ],\\ \Big [\Im ^{{\mathscr {F}}l}_{1}\cdot \Im ^{{\mathscr {F}}l}_{2}, \,\Im ^{{\mathscr {F}}u}_{1}\cdot \Im ^{{\mathscr {F}}u}_{2}\Big ] \end{bmatrix},}\)
2. \({\small \aleph _{1} \bigcirc \aleph _{2} = \begin{bmatrix} \Big [\Im ^{{\mathscr {T}}l}_{1}\cdot \Im ^{{\mathscr {T}}l}_{2},\, \Im ^{{\mathscr {T}}u}_{1}\cdot \Im ^{{\mathscr {T}}u}_{2}\Big ],\\ \Big [\sqrt{(\Im ^{{\mathscr {F}}l}_{1})^{2} + (\Im ^{{\mathscr {F}}l}_{2})^{2} -(\Im ^{{\mathscr {F}}l}_{1})^{2} \cdot (\Im ^{{\mathscr {F}}l}_{2})^{2}},\\ \sqrt{(\Im ^{{\mathscr {F}}u}_{1})^{2} + (\Im ^{{\mathscr {F}}u}_{2})^{2} -(\Im ^{{\mathscr {F}}u}_{1})^{2} \cdot (\Im ^{{\mathscr {F}}u}_{2})^{2}}\,\Big ] \end{bmatrix},}\)
3. \( \Game \cdot \aleph = \begin{bmatrix} \Big [\sqrt{1-\big ( 1- (\Im ^{{\mathscr {T}}l})^{2}\big )^{\Game }},\\ \sqrt{1-\big ( 1- (\Im ^{{\mathscr {T}}u})^{2}\big )^{\Game }}\,\,\Big ],\\ \Big [(\Im ^{{\mathscr {F}}l})^{\Game }, (\Im ^{{\mathscr {F}}u})^{\Game }\Big ] \end{bmatrix},\)
4. \(\aleph ^{\Game }= \begin{bmatrix} \Big [(\Im ^{{\mathscr {T}}l})^{\Game }, (\Im ^{{\mathscr {T}}u})^{\Game }\Big ], \Big [\sqrt{1-\big ( 1- (\Im ^{{\mathscr {F}}l})^{2}\big )^{\Game }},\\ \sqrt{1-\big ( 1- (\Im ^{{\mathscr {F}}u})^{2}\big )^{\Game }}\,\,\Big ] \end{bmatrix}.\)
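As a hedged illustration, laws 1 and 3 of Definition 2.3 can be sketched in Python, representing a PIVFN as a pair of intervals `((Tl, Tu), (Fl, Fu))`; all names here are ours, not the paper's.

```python
import math

def pivfn_add(a, b):
    """PIVFN addition, law 1 of Definition 2.3."""
    (t1l, t1u), (f1l, f1u) = a
    (t2l, t2u), (f2l, f2u) = b
    # sqrt(x^2 + y^2 - x^2 * y^2) applied bound-wise to the MD interval
    s = lambda x, y: math.sqrt(x**2 + y**2 - (x**2) * (y**2))
    return ((s(t1l, t2l), s(t1u, t2u)), (f1l * f2l, f1u * f2u))

def pivfn_scale(g, a):
    """Scalar multiplication, law 3 of Definition 2.3, for g > 0."""
    (tl, tu), (fl, fu) = a
    f = lambda t: math.sqrt(1.0 - (1.0 - t**2) ** g)
    return ((f(tl), f(tu)), (fl ** g, fu ** g))
```

A quick consistency check between the two laws: \(2 \cdot \aleph \) coincides with \(\aleph \bigoplus \aleph \), since \(\sqrt{2t^{2}-t^{4}} = \sqrt{1-(1-t^{2})^{2}}\).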
Definition 2.4
[12] For any PIVFN \(\aleph = \Big \langle [\Im ^{{\mathscr {T}}l},\Im ^{{\mathscr {T}}u}], [\Im ^{{\mathscr {F}}l},\Im ^{{\mathscr {F}}u}] \Big \rangle \), the score function of \(\aleph \) is defined as
The accuracy function of \(\aleph \) is defined as
Definition 2.5
[15] For any Pythagorean interval-valued normal number \(\widehat{\aleph }=\Big \langle (\varrho , \kappa ); [\Im ^{{\mathscr {T}}l} _{\aleph },\Im ^{{\mathscr {T}}u} _{\aleph }], [\zeta _{\aleph }^{\mathcal {F}-},\zeta _{\aleph }^{\mathcal {F}+}]\Big \rangle \), the score and accuracy functions of \(\widehat{\aleph }\) are defined as follows:
and the accuracy function is determined by
Definition 2.6
[7] Let \(M(x)= e^{-\big (\frac{x-\varrho }{\kappa }\big )^{2}}\) \((\kappa >0)\) be a fuzzy number; then \(M= (\varrho , \kappa )\) is called a normal fuzzy number (NFN). The set of all NFNs is denoted by \(\widehat{N}\) and is referred to as the normal fuzzy set (NFS).
Definition 2.7
[6] Let \(L=(\varrho _{1},\kappa _{1})\in \widehat{N}\) and \(M=(\varrho _{2},\kappa _{2})\in \widehat{N}\) be two NFNs. Then the distance between L and M is defined as \(\mathbb {D}(L,M)= \sqrt{(\varrho _{1}-\varrho _{2})^{2} + \frac{1}{2}(\kappa _{1}-\kappa _{2})^{2}}\), where \(\kappa _{1},\kappa _{2}>0\).
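Definition 2.7 translates directly into code; a minimal sketch (function and variable names are ours):

```python
import math

def nfn_distance(l, m):
    """Distance D(L, M) between NFNs L = (rho1, kappa1) and M = (rho2, kappa2),
    with kappa1, kappa2 > 0, as in Definition 2.7."""
    (r1, k1), (r2, k2) = l, m
    # The deviation difference is weighted by 1/2, so shifts in the
    # mean rho dominate shifts in the spread kappa.
    return math.sqrt((r1 - r2) ** 2 + 0.5 * (k1 - k2) ** 2)
```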
3 Operations for CPNIVFN
Based on the concepts of complex Pythagorean interval-valued fuzzy numbers (CPIVFNs) and normal fuzzy numbers (NFNs), we define complex Pythagorean normal interval-valued fuzzy numbers (CPNIVFNs) and their operations.
Definition 3.1
The CPIVFS \(\widehat{\aleph }\) in \(\mathscr {U}\) is given by \(\widehat{\aleph }= \Big \{\wp , \Big \langle \widehat{\Im ^{{T}}_{\aleph }}(\wp )e^{2i\pi \widehat{\Lambda ^{{T}}_{\aleph }}(\wp )}, \widehat{\Im ^{{F}}_{\aleph }}(\wp )e^{2i\pi \widehat{\Lambda ^{{F}}_{\aleph }}(\wp )}\Big \rangle \Big | \wp \in \mathscr {U}\Big \}\), where \(\widehat{\Im ^{T}_{\aleph }}, \widehat{\Im ^{F}_{\aleph }}: \mathscr {U} \rightarrow Int([0,1])\) denote the amplitude terms of the TD and FD of the element \(\wp \in \mathscr {U}\) in the set \(\widehat{\aleph }\), respectively. Also, \(\widehat{\Lambda ^{T}_{\aleph }}, \widehat{\Lambda ^{F}_{\aleph }}: \mathscr {U} \rightarrow Int([0,1])\) denote the phase terms of the TD and FD of the element \(\wp \in \mathscr {U}\) in the set \(\widehat{\aleph }\), respectively. A CPIVFS must satisfy \(0 \preceq (\Im ^{{\mathscr {T}}u}_{\aleph }(\wp ))^{2}+(\Im _{\aleph }^{{\mathscr {F}}u}(\wp ))^{2} \preceq 1\) and \(0 \preceq (\Lambda ^{{\mathscr {T}}u}_{\aleph }(\wp ))^{2}+(\Lambda _{\aleph }^{{\mathscr {F}}u}(\wp ))^{2} \preceq 1\). For convenience, \( \widehat{\aleph }= \Big \langle [\Im ^{{\mathscr {T}}l}_{\aleph }e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{\aleph }},\Im ^{{\mathscr {T}}u}_{\aleph }e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{\aleph }}],[\Im ^{{\mathscr {F}}l}_{\aleph }e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{\aleph }},\Im ^{{\mathscr {F}}u}_{\aleph }e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{\aleph }}\) \(] \Big \rangle \) is called a complex Pythagorean interval-valued fuzzy number (CPIVFN).
Definition 3.2
For any CPIVFN \( \widehat{\aleph }= \Big \langle [\Im ^{{\mathscr {T}}l}_{\aleph }e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{\aleph }},\Im ^{{\mathscr {T}}u}_{\aleph }\) \(e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{\aleph }}],[\Im ^{{\mathscr {F}}l}_{\aleph }e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{\aleph }},\Im ^{{\mathscr {F}}u}_{\aleph }e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{\aleph }}] \Big \rangle \), the score function of \(\widehat{\aleph }\) is defined as \(S_{1}(\widehat{\aleph })= \frac{\varrho }{2}\left( X + Y\right) \), \(S_{2}(\widehat{\aleph })= \frac{\kappa }{2}\left( X + Y\right) \) and \(S(\widehat{\aleph })= \frac{S_{1}(\widehat{\aleph })+S_{2}(\widehat{\aleph })}{2}\), where
and \(\,\, S(\widehat{\aleph })\in [-1,1]\).
The accuracy function is defined as \(H_{1}(\widehat{\aleph })= \frac{\varrho }{2}\left( X_{1} + Y_{1}\right) \), \(H_{2}(\widehat{\aleph })= \frac{\kappa }{2}\left( X_{1} + Y_{1}\right) \) and \(H(\widehat{\aleph })= \frac{H_{1}(\widehat{\aleph })+H_{2}(\widehat{\aleph })}{2}\), where
and \(\,\, H(\widehat{\aleph })\in [0,1]\).
Definition 3.3
Let \((\varrho , \kappa ) \in \widehat{N}\) and let \(\widehat{\aleph }= \Big \langle (\varrho , \kappa ); [\Im ^{{\mathscr {T}}l}e^{i2\pi \Lambda ^{{\mathscr {T}}l}},\) \(\Im ^{{\mathscr {T}}u}e^{i2\pi \Lambda ^{{\mathscr {T}}u}}],[\Im ^{{\mathscr {F}}l}e^{i2\pi \Lambda ^{{\mathscr {F}}l}},\Im ^{{\mathscr {F}}u}e^{i2\pi \Lambda ^{{\mathscr {F}}u}}] \Big \rangle \) be a CPNIVFN. The TD and FD are defined as \(\big [\Im ^{{\mathscr {T}}l},\Im ^{{\mathscr {T}}u}\big ]= \Big [\Im ^{{\mathscr {T}}l} e^{-\big (\frac{x-\varrho }{\kappa }\big )^{2}},\) \( \Im ^{{\mathscr {T}}u} e^{-\big (\frac{x-\varrho }{\kappa }\big )^{2}}\Big ]\), \(\big [\Im ^{{\mathscr {F}}l},\Im ^{{\mathscr {F}}u}\big ]= \Big [1-\big (1-\Im ^{{\mathscr {F}}l}\big ) e^{-\big (\frac{x-\varrho }{\kappa }\big )^{2}}, \) \(1-\big (1-\Im ^{{\mathscr {F}}u}\big ) e^{-\big (\frac{x-\varrho }{\kappa }\big )^{2}}\Big ]\), respectively. Also, \(\big [\Lambda ^{{\mathscr {T}}l},\Lambda ^{{\mathscr {T}}u}\big ]\)
\(= \Big [\Lambda ^{{\mathscr {T}}l} e^{-\big (\frac{x-\varrho }{\kappa }\big )^{2}}, \Lambda ^{{\mathscr {T}}u} e^{-\big (\frac{x-\varrho }{\kappa }\big )^{2}}\Big ]\), \(\big [\Lambda ^{{\mathscr {F}}l},\Lambda ^{{\mathscr {F}}u}\big ]= \Big [1-\big (1-\Lambda ^{{\mathscr {F}}l}\big ) e^{-\big (\frac{x-\varrho }{\kappa }\big )^{2}}, 1-\big (1-\Lambda ^{{\mathscr {F}}u}\big ) e^{-\big (\frac{x-\varrho }{\kappa }\big )^{2}}\Big ]\), \(x \in X,\) respectively, where X is a non-empty set and \(\big [\Im ^{{\mathscr {T}}l},\Im ^{{\mathscr {T}}u}\big ],\) \(\big [\Im ^{{\mathscr {F}}l},\Im ^{{\mathscr {F}}u}\big ], \big [\Lambda ^{{\mathscr {T}}l},\Lambda ^{{\mathscr {T}}u}\big ], \big [\Lambda ^{{\mathscr {F}}l},\Lambda ^{{\mathscr {F}}u}\big ]\in Int([0,1])\) and \(0 \preceq \big (\Im ^{{\mathscr {T}}u}(\wp )\big )^{2} + \big (\Im ^{{\mathscr {F}}u}(\wp )\big )^{2} \preceq 2\), \(0 \preceq \big (\Lambda ^{{\mathscr {T}}u}(\wp )\big )^{2} + \big (\Lambda ^{{\mathscr {F}}u}(\wp )\big )^{2} \preceq 2\).
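The Gaussian-modulated degrees of Definition 3.3 can be sketched in Python as follows (amplitude terms only; the phase terms \(\Lambda \) follow exactly the same pattern, and all function names are our own illustration):

```python
import math

def kernel(x, rho, kappa):
    """Normal kernel e^{-((x - rho)/kappa)^2}, with kappa > 0."""
    return math.exp(-(((x - rho) / kappa) ** 2))

def td_interval(x, rho, kappa, t_l, t_u):
    """TD interval [Tl, Tu] scaled by the kernel: peaks at x = rho."""
    g = kernel(x, rho, kappa)
    return (t_l * g, t_u * g)

def fd_interval(x, rho, kappa, f_l, f_u):
    """FD interval via 1 - (1 - F) * kernel: tends to 1 away from rho."""
    g = kernel(x, rho, kappa)
    return (1.0 - (1.0 - f_l) * g, 1.0 - (1.0 - f_u) * g)
```

At \(x = \varrho \) the kernel equals 1, so the intervals reduce to the underlying \([\Im ^{{\mathscr {T}}l},\Im ^{{\mathscr {T}}u}]\) and \([\Im ^{{\mathscr {F}}l},\Im ^{{\mathscr {F}}u}]\); away from \(\varrho \), the TD decays and the FD grows, as the formulas require.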
Definition 3.4
Let \(\widehat{\aleph }= \Big \langle (\varrho , \kappa ); [\Im ^{{\mathscr {T}}l}e^{i2\pi \Lambda ^{{\mathscr {T}}l}},\Im ^{{\mathscr {T}}u}e^{i2\pi \Lambda ^{{\mathscr {T}}u}}\) \(], [\Im ^{{\mathscr {F}}l}e^{i2\pi \Lambda ^{{\mathscr {F}}l}},\Im ^{{\mathscr {F}}u}e^{i2\pi \Lambda ^{{\mathscr {F}}u}}] \Big \rangle \),
\(\widehat{\aleph }_{1}= \Big \langle (\varrho _{1}, \kappa _{1}); [\Im ^{{\mathscr {T}}l}_{1}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{1}},\Im ^{{\mathscr {T}}u}_{1}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{1}}],\) \([\Im ^{{\mathscr {F}}l}_{1}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{1}},\Im ^{{\mathscr {F}}u}_{1}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{1}}] \Big \rangle \) and
\(\widehat{\aleph }_{2}= \Big \langle (\varrho _{2}, \kappa _{2}); [\Im ^{{\mathscr {T}}l}_{2}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{2}},\Im ^{{\mathscr {T}}u}_{2}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{2}}],\) \([\Im ^{{\mathscr {F}}l}_{2}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{2}},\Im ^{{\mathscr {F}}u}_{2}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{2}}] \Big \rangle \) be three CPNIVFNs, and \(\Game > 0\). Then
-
1.
\(\widehat{\aleph }_{1}\bigoplus \widehat{\aleph }_{2} = \begin{bmatrix} (\varrho _{1} + \varrho _{2}, \kappa _{1} + \kappa _{2});\\ \begin{bmatrix} {}\root 2\Game \of {(\Im ^{{\mathscr {T}}l}_{1})^{2\Game } + (\Im ^{{\mathscr {T}}l}_{2})^{2\Game } -(\Im ^{{\mathscr {T}}l}_{1})^{2\Game } \cdot (\Im ^{{\mathscr {T}}l}_{2})^{2\Game }} e^{i2\pi {}\root 2\Game \of {(\Lambda ^{{\mathscr {T}}l}_{1})^{2\Game } + (\Lambda ^{{\mathscr {T}}l}_{2})^{2\Game } -(\Lambda ^{{\mathscr {T}}l}_{1})^{2\Game } \cdot (\Lambda ^{{\mathscr {T}}l}_{2})^{2\Game }}},\\ \root 2\Game \of {(\Im ^{{\mathscr {T}}u}_{1})^{2\Game } + (\Im ^{{\mathscr {T}}u}_{2})^{2\Game } -(\Im ^{{\mathscr {T}}u}_{1})^{2\Game } \cdot (\Im ^{{\mathscr {T}}u}_{2})^{2\Game }} e^{i2\pi \root 2\Game \of {(\Lambda ^{{\mathscr {T}}u}_{1})^{2\Game } + (\Lambda ^{{\mathscr {T}}u}_{2})^{2\Game } -(\Lambda ^{{\mathscr {T}}u}_{1})^{2\Game } \cdot (\Lambda ^{{\mathscr {T}}u}_{2})^{2\Game }}} \end{bmatrix},\\ \Big [\left( \Im ^{{\mathscr {F}}l}_{1}\cdot \Im ^{{\mathscr {F}}l}_{2}\right) e^{i2\pi (\Lambda ^{{\mathscr {F}}l}_{1}\cdot \Lambda ^{{\mathscr {F}}l}_{2})}, \left( \Im ^{{\mathscr {F}}u}_{1}\cdot \Im ^{{\mathscr {F}}u}_{2} \right) e^{i2\pi (\Lambda ^{{\mathscr {F}}u}_{1}\cdot \Lambda ^{{\mathscr {F}}u}_{2})}\Big ] \end{bmatrix},\)
-
2.
\(\widehat{\aleph }_{1} \bigcirc \widehat{\aleph }_{2} = \begin{bmatrix} (\varrho _{1} \cdot \varrho _{2}, \kappa _{1} \cdot \kappa _{2});\\ \Big [\left( \Im ^{{\mathscr {T}}l}_{1}\cdot \Im ^{{\mathscr {T}}l}_{2}\right) e^{i2\pi (\Lambda ^{{\mathscr {T}}l}_{1}\cdot \Lambda ^{{\mathscr {T}}l}_{2})}, \left( \Im ^{{\mathscr {T}}u}_{1}\cdot \Im ^{{\mathscr {T}}u}_{2} \right) e^{i2\pi (\Lambda ^{{\mathscr {T}}u}_{1}\cdot \Lambda ^{{\mathscr {T}}u}_{2})}\Big ],\\ \begin{bmatrix} \root 2\Game \of {(\Im ^{{\mathscr {F}}l}_{1})^{2\Game } + (\Im ^{{\mathscr {F}}l}_{2})^{2\Game } -(\Im ^{{\mathscr {F}}l}_{1})^{2\Game } \cdot (\Im ^{{\mathscr {F}}l}_{2})^{2\Game }} e^{i2\pi \root 2\Game \of {(\Lambda ^{{\mathscr {F}}l}_{1})^{2\Game } + (\Lambda ^{{\mathscr {F}}l}_{2})^{2\Game } -(\Lambda ^{{\mathscr {F}}l}_{1})^{2\Game } \cdot (\Lambda ^{{\mathscr {F}}l}_{2})^{2\Game }}},\\ \root 2\Game \of {(\Im ^{{\mathscr {F}}u}_{1})^{2\Game } + (\Im ^{{\mathscr {F}}u}_{2})^{2\Game } -(\Im ^{{\mathscr {F}}u}_{1})^{2\Game } \cdot (\Im ^{{\mathscr {F}}u}_{2})^{2\Game }} e^{i2\pi \root 2\Game \of {(\Lambda ^{{\mathscr {F}}u}_{1})^{2\Game } + (\Lambda ^{{\mathscr {F}}u}_{2})^{2\Game } -(\Lambda ^{{\mathscr {F}}u}_{1})^{2\Game } \cdot (\Lambda ^{{\mathscr {F}}u}_{2})^{2\Game }}} \end{bmatrix} \end{bmatrix},\)
-
3.
\(\Game \cdot \widehat{\aleph } = \begin{bmatrix} (\Game \cdot \varrho , \Game \cdot \kappa );\\ \begin{bmatrix} \root 2\Game \of {1-\big ( 1- (\Im ^{{\mathscr {T}}l})^{2\Game }\big )^{\Game }}e^{i2\pi \root 2\Game \of {1-\big ( 1- (\Lambda ^{{\mathscr {T}}l})^{2\Game }\big )^{\Game }}},\\ \root 2\Game \of {1-\big ( 1- (\Im ^{{\mathscr {T}}u})^{2\Game }\big )^{\Game }} e^{i2\pi \root 2\Game \of {1-\big ( 1- (\Lambda ^{{\mathscr {T}}u})^{2\Game }\big )^{\Game }}} \end{bmatrix}\\ \Big [(\Im ^{{\mathscr {F}}l})^{\Game } e^{i2\pi (\Lambda ^{{\mathscr {F}}l})^{\Game }}, (\Im ^{{\mathscr {F}}u})^{\Game } e^{i2\pi (\Lambda ^{{\mathscr {F}}u})^{\Game }}\Big ] \end{bmatrix},\)
-
4.
\(\widehat{\aleph }^{\Game } = \begin{bmatrix} (\varrho ^{\Game }, \kappa ^{\Game });\\ \Big [(\Im ^{{\mathscr {T}}l})^{\Game } e^{i2\pi (\Lambda ^{{\mathscr {T}}l})^{\Game }}, (\Im ^{{\mathscr {T}}u})^{\Game } e^{i2\pi (\Lambda ^{{\mathscr {T}}u})^{\Game }}\Big ],\\ \begin{bmatrix} \root 2\Game \of {1-\big ( 1- (\Im ^{{\mathscr {F}}l})^{2\Game }\big )^{\Game }}e^{i2\pi \root 2\Game \of {1-\big ( 1- (\Lambda ^{{\mathscr {F}}l})^{2\Game }\big )^{\Game }}},\\ \root 2\Game \of {1-\big ( 1- (\Im ^{{\mathscr {F}}u})^{2\Game }\big )^{\Game }} e^{i2\pi \root 2\Game \of {1-\big ( 1- (\Lambda ^{{\mathscr {F}}u})^{2\Game }\big )^{\Game }}} \end{bmatrix} \end{bmatrix}.\)
4 AOs for CPNIVFNs
Based on the operational rules of Pythagorean normal interval-valued fuzzy numbers, weighted averaging and geometric aggregation operators for complex Pythagorean normal interval-valued fuzzy numbers are presented. In this section, we introduce the new CPNIVFWA, CPNIVFWG, CGPNIVFWA, and CGPNIVFWG operators.
4.1 CPNIVF Weighted Averaging (CPNIVFWA)
Definition 4.1
Let \(\widehat{\aleph }_{i} {=} \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}], [\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}] \Big \rangle \) be a family of CPNIVFNs, and let \(W= (\zeta _{1},\zeta _{2},\ldots ,\zeta _{n})\) be the weight vector of \(\widehat{\aleph }_{i} \), where \(\zeta _{i}\succeq 0\) and \(\biguplus ^{n}_{i=1} \zeta _{i}=1\). Then the CPNIVFWA operator is defined as CPNIVFWA \((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})= \biguplus ^{n}_{i=1} \zeta _{i}\widehat{\aleph }_{i}\,\, (i=1,2,\ldots ,n)\).
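Structurally, Definition 4.1 is a weighted fold: each CPNIVFN is scaled by its weight and the results are combined with \(\bigoplus \). The generic Python sketch below (names are ours) would take the \(\bigoplus \) and scalar-multiplication laws of Definition 3.4 as parameters:

```python
from functools import reduce

def cpnivfwa(numbers, weights, add, scale):
    """CPNIVFWA(a_1, ..., a_n) = (+)_i zeta_i * a_i  (Definition 4.1).
    `add` and `scale` implement the (+) and scalar laws of Definition 3.4;
    weights are nonnegative and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    # Scale each number by its weight, then fold with the addition law.
    return reduce(add, (scale(z, a) for z, a in zip(weights, numbers)))
```

With ordinary real addition and multiplication substituted for \(\bigoplus \) and \(\cdot \), the fold reduces to a weighted arithmetic mean, which also exhibits the idempotency that Theorem 4.2 establishes for the actual operator.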
Theorem 4.1
Let \(\widehat{\aleph }_{i} {=} \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}], [\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}] \Big \rangle \) be a family of CPNIVFNs. Then \(CPNIVFWA(\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})=\)
This is known as the associativity property.
Proof
The proof of the theorem follows from mathematical induction.
If \(n=2\), then CPNIVFWA\((\widehat{\aleph }_{1}, \widehat{\aleph }_{2})= \zeta _{1} \widehat{\aleph }_{1} \bigoplus \zeta _{2}\widehat{\aleph }_{2}\),
where
Now,
Assume that the result is valid for \(n = l\) \((l \succeq 2)\); hence
For \(n= l+1\), CPNIVFWA \((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{l}, \widehat{\aleph }_{l+1})\)
Theorem 4.2
If all \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}],[\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}] \Big \rangle (i= 1,2,\ldots ,n)\) are equal, i.e., \(\widehat{\aleph }_{i} = \widehat{\aleph }\), then CPNIVFWA\((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})= \widehat{\aleph }\) (idempotency property).
Proof
Given that \((\varrho _{i}, \kappa _{i}) = (\varrho , \kappa )\), \([\Im ^{{\mathscr {T}}l}_{i}, \Im ^{{\mathscr {T}}u}_{i}] = [\Im ^{{\mathscr {T}}l}, \Im ^{{\mathscr {T}}u}]\) and \([\Im ^{{\mathscr {F}}l}_{i}, \Im ^{{\mathscr {F}}u}_{i}] = [\Im ^{{\mathscr {F}}l}, \Im ^{{\mathscr {F}}u}]\), \([\Lambda ^{{\mathscr {T}}l}_{i}, \Lambda ^{{\mathscr {T}}u}_{i}] = [\Lambda ^{{\mathscr {T}}l}, \Lambda ^{{\mathscr {T}}u}]\)
and \([\Lambda ^{{\mathscr {F}}l}_{i}, \Lambda ^{{\mathscr {F}}u}_{i}] = [\Lambda ^{{\mathscr {F}}l}, \Lambda ^{{\mathscr {F}}u}]\), for \(i= 1,2,\ldots ,n\) and \(\biguplus ^{n}_{i=1}{\zeta _{i}}=1\). Now, \(CPNIVFWA (\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})\)
Theorem 4.3
Let \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{ij}, \kappa _{ij}); [\Im ^{{\mathscr {T}}l}_{ij}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{ij}},\Im ^{{\mathscr {T}}u}_{ij}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{ij}}], [\Im ^{{\mathscr {F}}l}_{ij}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{ij}},\Im ^{{\mathscr {F}}u}_{ij}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{ij}}] \Big \rangle \) (for \(i=1,2,\ldots ,n\); \(j=1,2,\ldots ,i_{j}\)) be a collection of CPNIVFNs. Then the CPNIVFWA operator satisfies the boundedness property, where
\(\underline{\varrho }= \min \varrho _{ij}\), \(\overline{\varrho }= \max \varrho _{ij},\)
\(\underline{\kappa }= \max \kappa _{ij}\), \(\overline{\kappa }= \min \kappa _{ij},\)
\(\underline{\Im ^{{\mathscr {T}}l}}= \min \Im ^{{T}^{l}}_{ij}\), \(\overline{\Im ^{{\mathscr {T}}l}}=\max \Im ^{{T}^{l}}_{ij},\)
\(\underline{\Im ^{{\mathscr {T}}u}}=\min \Im ^{{T}^{u}}_{ij}\), \(\overline{\Im ^{{\mathscr {T}}u}}=\max \Im ^{{T}^{u}}_{ij},\)
\(\underline{\Im ^{{\mathscr {F}}l}}=\min \Im ^{{F}^{l}}_{ij}\), \(\overline{\Im ^{{\mathscr {F}}l}}=\max \Im ^{{F}^{l}}_{ij},\)
\(\underline{\Im ^{{\mathscr {F}}u}}=\min \Im ^{{F}^{u}}_{ij}\), \(\overline{\Im ^{{\mathscr {F}}u}}=\max \Im ^{{F}^{u}}_{ij}\),
\(\underline{\Lambda ^{{\mathscr {T}}l}}= \min \Lambda ^{{T}^{l}}_{ij}\), \(\overline{\Lambda ^{{\mathscr {T}}l}}=\max \Lambda ^{{T}^{l}}_{ij},\)
\(\underline{\Lambda ^{{\mathscr {T}}u}}=\min \Lambda ^{{T}^{u}}_{ij}\), \(\overline{\Lambda ^{{\mathscr {T}}u}}=\max \Lambda ^{{T}^{u}}_{ij},\)
\(\underline{\Lambda ^{{\mathscr {F}}l}}=\min \Lambda ^{{F}^{l}}_{ij}\), \(\overline{\Lambda ^{{\mathscr {F}}l}}=\max \Lambda ^{{F}^{l}}_{ij},\)
\(\underline{\Lambda ^{{\mathscr {F}}u}}=\min \Lambda ^{{F}^{u}}_{ij}\), \(\overline{\Lambda ^{{\mathscr {F}}u}}=\max \Lambda ^{{F}^{u}}_{ij}\).
Then,
Proof
Since \(\underline{\Im ^{{\mathscr {T}}l}}= \min \Im ^{{T}^{l}}_{ij}\), \(\overline{\Im ^{{\mathscr {T}}l}}=\max \Im ^{{T}^{l}}_{ij}\), \(\underline{\Im ^{{\mathscr {T}}u}}=\min \Im ^{{T}^{u}}_{ij}\), and \(\overline{\Im ^{{\mathscr {T}}u}}=\max \Im ^{{T}^{u}}_{ij}\), we have \(\underline{\Im ^{{\mathscr {T}}l}} \preceq \Im ^{{T}^{l}}_{ij} \preceq \overline{\Im ^{{\mathscr {T}}l}}\) and \(\underline{\Im ^{{\mathscr {T}}u}} \preceq \Im ^{{T}^{u}}_{ij} \preceq \overline{\Im ^{{\mathscr {T}}u}}\). Hence,
Since \(\underline{\Im ^{{\mathscr {F}}l}}= \min \Im ^{{F}^{l}}_{ij}\), \(\overline{\Im ^{{\mathscr {F}}l}}=\max \Im ^{{F}^{l}}_{ij}\)
\(\underline{\Im ^{{\mathscr {F}}u}}=\min \Im ^{{F}^{u}}_{ij}\), \(\overline{\Im ^{{\mathscr {F}}u}}=\max \Im ^{{F}^{u}}_{ij}\) and \(\underline{\Im ^{{\mathscr {F}}l}} \preceq \Im ^{{F}^{l}}_{ij} \preceq \overline{\Im ^{{\mathscr {F}}l}}\) and \(\underline{\Im ^{{\mathscr {F}}u}} \preceq \Im ^{{F}^{u}}_{ij} \preceq \overline{\Im ^{{\mathscr {F}}u}}\). We have
Since \(\underline{\varrho }= \min \varrho _{ij}\), \(\overline{\varrho }= \max \varrho _{ij}\), \(\underline{\kappa }= \max \kappa _{ij}\), and \(\overline{\kappa }= \min \kappa _{ij}\), it follows that \(\underline{\varrho } \preceq \varrho _{ij} \preceq \overline{\varrho }\) and \(\overline{\kappa } \preceq \kappa _{ij} \preceq \underline{\kappa }\).
Hence, \(\biguplus ^{n}_{i=1} \zeta _{i}\underline{\varrho } \preceq \biguplus ^{n}_{i=1} \zeta _{i} \varrho _{ij} \preceq \biguplus ^{n}_{i=1} \zeta _{i} \overline{\varrho }\) and \(\biguplus ^{n}_{i=1} \zeta _{i} \overline{\kappa } \preceq \biguplus ^{n}_{i=1} \zeta _{i} \kappa _{ij} \preceq \biguplus ^{n}_{i=1} \zeta _{i} \underline{\kappa }\).
Therefore,
Hence,
Theorem 4.4
Let \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{t_{ij}}, \kappa _{t_{ij}}); [\Im ^{{\mathscr {T}}l}_{t_{ij}}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{t_{ij}}}, \Im ^{{\mathscr {T}}u}_{t_{ij}}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{t_{ij}}}], [\Im ^{{\mathscr {F}}l}_{t_{ij}}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{t_{ij}}}, \Im ^{{\mathscr {F}}u}_{t_{ij}}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{t_{ij}}}] \Big \rangle \) and \(\widehat{W}_{i} = \Big \langle (\varrho _{h_{ij}}, \kappa _{h_{ij}}); [\Im ^{{\mathscr {T}}l}_{h_{ij}}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{h_{ij}}}, \Im ^{{\mathscr {T}}u}_{h_{ij}}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{h_{ij}}}], [\Im ^{{\mathscr {F}}l}_{h_{ij}}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{h_{ij}}}, \Im ^{{\mathscr {F}}u}_{h_{ij}}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{h_{ij}}}] \Big \rangle \) for \((i= 1,2,\ldots ,n)\) and \((j= 1,2,\ldots ,i_{j})\) be two families of CPNIVFNs. If, for any i, \(\varrho _{t_{ij}} \preceq \varrho _{h_{ij}}\), the following conditions hold:
or \(\widehat{\aleph }_{i} \preceq \widehat{W}_{i}\), then \(CPNIVFWA \left( \widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n} \right) \preceq CPNIVFWA \left( \widehat{W}_{1}, \widehat{W}_{2},\ldots , \widehat{W}_{n}\right) \) (monotonicity property).
Proof
For any i, \(\varrho _{t_{ij}} \preceq \varrho _{h_{ij}}\). Therefore, \(\biguplus ^{n}_{i=1} \zeta _{i}\varrho _{t_{ij}} \preceq \biguplus ^{n}_{i=1} \zeta _{i}\varrho _{h_{ij}}\).
For any i, \(\left( \Im ^{{\mathscr {T}}l}_{t_{ij}}\right) ^{2} + \left( \Im ^{{\mathscr {T}}u}_{t_{ij}}\right) ^{2} \preceq \left( \Im ^{{\mathscr {T}}l}_{h_{ij}}\right) ^{2} + \left( \Im ^{{\mathscr {T}}u}_{h_{ij}}\right) ^{2}\).
Therefore, \(1-\left( \Im ^{{\mathscr {T}}l}_{t_{ij}}\right) ^{2} +1- \left( \Im ^{{\mathscr {T}}u}_{t_{ij}}\right) ^{2} \succeq 1-\left( \Im ^{{\mathscr {T}}l}_{h_{ij}}\right) ^{2} + 1- \left( \Im ^{{\mathscr {T}}u}_{h_{ij}}\right) ^{2}\).
For any i, \(\left( \Lambda ^{{\mathscr {T}}l}_{t_{ij}}\right) ^{2} + \left( \Lambda ^{{\mathscr {T}}u}_{t_{ij}}\right) ^{2} \preceq \left( \Lambda ^{{\mathscr {T}}l}_{h_{ij}}\right) ^{2} + \left( \Lambda ^{{\mathscr {T}}u}_{h_{ij}}\right) ^{2}\).
Therefore, \(1-\left( \Lambda ^{{\mathscr {T}}l}_{t_{ij}}\right) ^{2} +1- \left( \Lambda ^{{\mathscr {T}}u}_{t_{ij}}\right) ^{2} \succeq 1-\left( \Lambda ^{{\mathscr {T}}l}_{h_{ij}}\right) ^{2} + 1- \left( \Lambda ^{{\mathscr {T}}u}_{h_{ij}}\right) ^{2}\).
Hence,
and
and
\(\left( \Im ^{{\mathscr {F}}l}_{t_{ij}}\right) ^{2} + \left( \Im ^{{\mathscr {F}}u}_{t_{ij}}\right) ^{2} \succeq \left( \Im ^{{\mathscr {F}}l}_{h_{ij}}\right) ^{2} + \left( \Im ^{{\mathscr {F}}u}_{h_{ij}}\right) ^{2}\).
Therefore,
and
Hence,
Hence, \(CPNIVFWA \left( \widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n} \right) \preceq CPNIVFWA \left( \widehat{W}_{1}, \widehat{W}_{2},\ldots , \widehat{W}_{n}\right) \).
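The idempotency, boundedness, and monotonicity arguments above all act componentwise. As a minimal numerical sketch (not the paper's full operator), the CPNIVFWA aggregation can be coded on the amplitude parts alone: the weighted sum \(\biguplus ^{n}_{i=1} \zeta _{i}\varrho _{i}\) for the normal pair, the Pythagorean t-conorm for the membership interval, and the product t-norm for the non-membership interval. The phase terms \(\Lambda \) aggregate the same way after division by \(2\pi \) and are omitted; the function name and tuple layout are our own.

```python
import math

def cpnivfwa_amplitude(values, weights):
    """CPNIVFWA sketch on amplitude parts only.

    Each value is ((rho, kappa), (Tl, Tu), (Fl, Fu)): a normal fuzzy pair,
    an interval-valued membership amplitude, and an interval-valued
    non-membership amplitude.  Membership aggregates with the Pythagorean
    t-conorm sqrt(1 - prod(1 - T^2)^w); non-membership with the product
    t-norm prod(F^w).
    """
    rho = sum(w * v[0][0] for v, w in zip(values, weights))
    kappa = sum(w * v[0][1] for v, w in zip(values, weights))
    Tl = math.sqrt(1 - math.prod((1 - v[1][0] ** 2) ** w for v, w in zip(values, weights)))
    Tu = math.sqrt(1 - math.prod((1 - v[1][1] ** 2) ** w for v, w in zip(values, weights)))
    Fl = math.prod(v[2][0] ** w for v, w in zip(values, weights))
    Fu = math.prod(v[2][1] ** w for v, w in zip(values, weights))
    return (rho, kappa), (Tl, Tu), (Fl, Fu)
```

With all arguments equal, the output equals the common input, matching the idempotency property of Theorem 4.2, and the aggregated components stay within the min/max envelope of the boundedness property.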
4.2 CPNIVF Weighted Geometric (CPNIVFWG)
Definition 4.2
Let \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}], [\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}] \Big \rangle \) be a family of CPNIVFNs. Then the CPNIVFWG operator is defined as CPNIVFWG \((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})= \bigcirc ^{n}_{i=1} \widehat{\aleph }_{i}^{\zeta _{i}}\,\, (i=1,2,\ldots ,n)\).
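A numerical sketch of this operator, restricted to the amplitude parts and assuming the standard dual (geometric) forms, would swap the roles of the t-norm and t-conorm relative to CPNIVFWA and use weighted geometric means for the normal pair \((\varrho , \kappa )\); the name and tuple layout are illustrative, and the phase terms are again omitted.

```python
import math

def cpnivfwg_amplitude(values, weights):
    """CPNIVFWG sketch on amplitude parts ((rho, kappa), (Tl, Tu), (Fl, Fu)).

    Membership amplitudes aggregate with the weighted geometric t-norm
    prod(T^w); non-membership amplitudes with the dual t-conorm
    sqrt(1 - prod(1 - F^2)^w).
    """
    rho = math.prod(v[0][0] ** w for v, w in zip(values, weights))
    kappa = math.prod(v[0][1] ** w for v, w in zip(values, weights))
    Tl = math.prod(v[1][0] ** w for v, w in zip(values, weights))
    Tu = math.prod(v[1][1] ** w for v, w in zip(values, weights))
    Fl = math.sqrt(1 - math.prod((1 - v[2][0] ** 2) ** w for v, w in zip(values, weights)))
    Fu = math.sqrt(1 - math.prod((1 - v[2][1] ** 2) ** w for v, w in zip(values, weights)))
    return (rho, kappa), (Tl, Tu), (Fl, Fu)
```

As with the averaging operator, aggregating identical arguments returns the argument itself, in line with the idempotency property of Theorem 4.6.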
Theorem 4.5
Let \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}], [\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}] \Big \rangle \) be a family of CPNIVFNs. Then,
Proof
The proof follows from Theorem 4.1.
Theorem 4.6
If all \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}],[\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}] \Big \rangle (i= 1,2,\ldots ,n)\) are equal, i.e., \(\widehat{\aleph }_{i} = \widehat{\aleph }\), then CPNIVFWG\((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})=\widehat{\aleph }\).
Proof
The proof follows from Theorem 4.2.
Remark 4.1
It can be proven that the CPNIVFWG operator satisfies the boundedness and monotonicity properties.
Proof
The proof follows from Theorems 4.3 and 4.4.
4.3 Generalized CPNIVFWA (CGPNIVFWA)
In this section, we discuss the generalizations of the CPNIVFWA and CPNIVFWG operators, namely the CGPNIVFWA and CGPNIVFWG operators.
Definition 4.3
Let \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}], [\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}] \Big \rangle \) be a family of CPNIVFNs. Then CGPNIVFWA \((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})= \Big (\biguplus ^{n}_{i=1} \zeta _{i}\widehat{\aleph }_{i}^{\Game } \Big )^{1/\Game }\) is called the CGPNIVFWA operator.
Theorem 4.7
Let \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}], [\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}] \Big \rangle \) be a family of CPNIVFNs. Then the CGPNIVFWA \((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})=\)
Proof
We first prove that
\(\biguplus ^{n}_{i=1} \zeta _{i} \widehat{\aleph }_{i}^{\Game }= \begin{bmatrix} \Bigg ( \Big (\biguplus ^{n}_{i=1} \zeta _{i}\varrho _{i}^{\Game }\Big ), \Big (\biguplus ^{n}_{i=1} \zeta _{i}\kappa _{i}^{\Game }\Big )\Bigg );\\ \begin{bmatrix} \root 2\Game \of { \begin{aligned} 1-\bigcirc ^{n}_{i=1} \Bigg ( 1-\Big ((\Im ^{{\mathscr {T}}l}_{i})^{\Game }\Big )^{2\Game }\Bigg )^{\zeta _{i}}\\ \end{aligned}}\,\, e^{i2\pi \root 2\Game \of { \begin{aligned} 1-\bigcirc ^{n}_{i=1} \Bigg ( 1-\Big (\left( \frac{\Lambda ^{{\mathscr {T}}l}_{i}}{2\pi }\right) ^{\Game }\Big )^{2\Game }\Bigg )^{\xi _{i}}\\ \end{aligned}}\,\, }, \\ \root 2\Game \of { \begin{aligned} 1-\bigcirc ^{n}_{i=1} \Bigg ( 1-\Big ((\Im ^{{\mathscr {T}}u}_{i})^{\Game }\Big )^{2\Game }\Bigg )^{\zeta _{i}}\\ \end{aligned}} e^{i2\pi \root 2\Game \of { \begin{aligned} 1-\bigcirc ^{n}_{i=1} \Bigg ( 1-\Big (\left( \frac{\Lambda ^{{\mathscr {T}}u}_{i}}{2\pi }\right) ^{\Game }\Big )^{2\Game }\Bigg )^{\xi _{i}}\\ \end{aligned}} } \\ \,\, \end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \bigcirc ^{n}_{i=1} \Bigg ( \root 2\Game \of {1-\Big (1-(\Im ^{{\mathscr {F}}l}_{i})^{2\Game }\Big )^{\Game }}\Bigg )^{\zeta _{i}}\\ \end{aligned} e^{i2\pi \begin{aligned} \bigcirc ^{n}_{i=1} \Bigg ( \root 2\Game \of {1-\Big (1-\left( \frac{\Lambda ^{{\mathscr {F}}l}_{i}}{2\pi }\right) ^{2\Game }\Big )^{\Game }}\Bigg )^{\xi _{i}}\\ \end{aligned}, } \\ \begin{aligned} \bigcirc ^{n}_{i=1} \Bigg ( \root 2\Game \of {1-\Big (1-(\Im ^{{\mathscr {F}}u}_{i})^{2\Game }\Big )^{\Game }}\Bigg )^{\zeta _{i}}\\ \end{aligned} e^{i2\pi \begin{aligned} \bigcirc ^{n}_{i=1} \Bigg ( \root 2\Game \of {1-\Big (1-\left( \frac{\Lambda ^{{\mathscr {F}}u}_{i}}{2\pi }\right) ^{2\Game }\Big )^{\Game }}\Bigg )^{\xi _{i}}\\ \end{aligned} } \end{bmatrix} \end{bmatrix}.\)
For \(n=2\), we obtain \(\zeta _{1}\widehat{\aleph }_{1}^{\Game }\bigoplus \zeta _{2}\widehat{\aleph }_{2}^{\Game }\)
In general, \(\biguplus ^{l}_{i=1} \zeta _{i} \widehat{\aleph }_{i}^{\Game }=\begin{bmatrix} \Big (\biguplus ^{l}_{i=1} \zeta _{i}\varrho _{i}^{\Game }, \biguplus ^{l}_{i=1} \zeta _{i}\kappa _{i}^{\Game }\Big );\\ \begin{bmatrix} \root 2\Game \of { 1- \bigcirc ^{l}_{i=1} \bigg (1-\Big ((\Im ^{{\mathscr {T}}l}_{i})^{\Game }\Big )^{2\Game }\bigg )^{\zeta _{i}}} e^{i2\pi \root 2\Game \of { 1- \bigcirc ^{l}_{i=1} \bigg (1-\Big (\left( \frac{\Lambda ^{{\mathscr {T}}l}_{i}}{2\pi }\right) ^{\Game }\Big )^{2\Game }\bigg )^{\xi _{i}}} }, \\ \root 2\Game \of { 1- \bigcirc ^{l}_{i=1} \bigg (1-\Big ((\Im ^{{\mathscr {T}}u}_{i})^{\Game }\Big )^{2\Game }\bigg )^{\zeta _{i}}} e^{i2\pi \root 2\Game \of { 1- \bigcirc ^{l}_{i=1} \bigg (1-\Big (\left( \frac{\Lambda ^{{\mathscr {T}}u}_{i}}{2\pi }\right) ^{\Game }\Big )^{2\Game }\bigg )^{\xi _{i}}} } \\ \,\,\end{bmatrix},\\ \begin{bmatrix} \bigcirc ^{l}_{i=1}\Bigg ( \root 2\Game \of {1-\Big (1-(\Im ^{{\mathscr {F}}l}_{i})^{2\Game }\Big )^{\Game }}\Bigg )^{\zeta _{i}} e^{i2\pi \bigcirc ^{l}_{i=1}\Bigg ( \root 2\Game \of {1-\Big (1-\left( \frac{\Lambda ^{{\mathscr {F}}l}_{i}}{2\pi }\right) ^{2\Game }\Big )^{\Game }}\Bigg )^{\xi _{i}} }, \\ \bigcirc ^{l}_{i=1} \Bigg ( \root 2\Game \of {1-\Big (1-(\Im ^{{\mathscr {F}}u}_{i})^{2\Game }\Big )^{\Game }}\Bigg )^{\zeta _{i}} e^{i2\pi \bigcirc ^{l}_{i=1} \Bigg ( \root 2\Game \of {1-\Big (1-\left( \frac{\Lambda ^{{\mathscr {F}}u}_{i}}{2\pi }\right) ^{2\Game }\Big )^{\Game }}\Bigg )^{\xi _{i}} } \end{bmatrix} \end{bmatrix}\).
Assume the result holds for \(n=l\). For \(n= l+1\), \(\biguplus ^{l}_{i=1} \zeta _{i}\widehat{\aleph }_{i}^{\Game } \bigoplus \zeta _{l+1} \widehat{\aleph }_{l+1}^{\Game }= \biguplus ^{l+1}_{i=1} \zeta _{i}\widehat{\aleph }_{i}^{\Game }\).
Now, \(\biguplus ^{l}_{i=1} \zeta _{i}\widehat{\aleph }_{i}^{\Game } \bigoplus \zeta _{l+1} \widehat{\aleph }_{l+1}^{\Game } = \zeta _{1}\widehat{\aleph }_{1}^{\Game } \bigoplus \zeta _{2}\widehat{\aleph }_{2}^{\Game } \bigoplus \ldots \bigoplus \zeta _{l}\widehat{\aleph }_{l}^{\Game } \bigoplus \zeta _{l+1}\widehat{\aleph }_{l+1}^{\Game }\)
\(\left( \biguplus ^{l+1}_{i=1} \zeta _{i}\widehat{\aleph }_{i}^{\Game }\right) ^{1/\Game }\) can be written as follows: \(\begin{bmatrix} \Bigg ( \Big (\biguplus ^{l+1}_{i=1} \zeta _{i}\varrho _{i}^{\Game }\Big )^{1/\Game }, \Big (\biguplus ^{l+1}_{i=1} \zeta _{i}\kappa _{i}^{\Game }\Big )^{1/\Game }\Bigg );\\ \begin{bmatrix} \Bigg (\root 2\Game \of { \begin{aligned} 1-\bigcirc ^{l+1}_{i=1} \Bigg ( 1-\Big ((\Im ^{{\mathscr {T}}l}_{i})^{\Game }\Big )^{2\Game }\Bigg )^{\zeta _{i}}\\ \end{aligned}}\,\,\Bigg )^{1/\Game } e^{i2\pi \Bigg (\root 2\Game \of { \begin{aligned} 1-\bigcirc ^{l+1}_{i=1} \Bigg ( 1-\Big (\left( \frac{\Lambda ^{{\mathscr {T}}l}_{i}}{2\pi }\right) ^{\Game }\Big )^{2\Game }\Bigg )^{\xi _{i}}\\ \end{aligned}}\,\,\Bigg )^{1/\Game } }, \\ \Bigg (\root 2\Game \of { \begin{aligned} 1-\bigcirc ^{l+1}_{i=1} \Bigg ( 1-\Big ((\Im ^{{\mathscr {T}}u}_{i})^{\Game }\Big )^{2\Game }\Bigg )^{\zeta _{i}}\\ \end{aligned}}\,\,\Bigg )^{1/\Game } e^{i2\pi \Bigg (\root 2\Game \of { \begin{aligned} 1-\bigcirc ^{l+1}_{i=1} \Bigg ( 1-\Big (\left( \frac{\Lambda ^{{\mathscr {T}}u}_{i}}{2\pi }\right) ^{\Game }\Big )^{2\Game }\Bigg )^{\xi _{i}}\\ \end{aligned}}\,\,\Bigg )^{1/\Game } } \,\,\,\, \end{bmatrix},\\ \begin{bmatrix} \root 2\Game \of {1-\Bigg ( \begin{aligned} 1-\Bigg (\bigcirc ^{l+1}_{i=1} \Bigg ( \root 2\Game \of {1-\Big (1-(\Im ^{{\mathscr {F}}l}_{i})^{2\Game }\Big )^{\Game }}\Bigg )^{\zeta _{i}}\\ \end{aligned}\,\,\Bigg )^{2}\Bigg )^{1/\Game }} \\ e^{i2\pi \root 2\Game \of {1-\Bigg ( \begin{aligned} 1-\Bigg (\bigcirc ^{l+1}_{i=1} \Bigg ( \root 2\Game \of {1-\Big (1-\left( \frac{\Lambda ^{{\mathscr {F}}l}_{i}}{2\pi }\right) ^{2\Game }\Big )^{\Game }}\Bigg )^{\xi _{i}}\\ \end{aligned}\,\,\Bigg )^{2}\Bigg )^{1/\Game }},} \\ \\ \root 2\Game \of {1-\Bigg ( \begin{aligned} 1-\Bigg (\bigcirc ^{l+1}_{i=1} \Bigg ( \root 2\Game \of {1-\Big (1-(\Im ^{{\mathscr {F}}u}_{i})^{2\Game }\Big )^{\Game }}\Bigg )^{\zeta _{i}}\\ \end{aligned}\,\,\Bigg )^{2}\Bigg )^{1/\Game }} \\ e^{i2\pi \root 2\Game \of {1-\Bigg ( \begin{aligned} 1-\Bigg (\bigcirc ^{l+1}_{i=1} \Bigg ( \root 2\Game \of {1-\Big (1-\left( \frac{\Lambda ^{{\mathscr {F}}u}_{i}}{2\pi }\right) ^{2\Game }\Big )^{\Game }}\Bigg )^{\xi _{i}}\\ \end{aligned}\,\,\Bigg )^{2}\Bigg )^{1/\Game }} }, \\ \end{bmatrix}\\ \end{bmatrix}\).
This holds for any \(l\); hence, by induction, the result is valid for all \(n\).
Remark 4.2
The CGPNIVFWA operator is reduced to the CPNIVFWA operator when \(\Game =1\).
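Remark 4.2 can be checked numerically on a single membership amplitude. The sketch below transcribes the \(\Im ^{\mathscr {T}}\) component of Theorem 4.7 for scalar arguments (a simplification of the full interval-valued operator; the function names are ours) and compares the \(\Game =1\) case with the plain CPNIVFWA membership term.

```python
import math

def cgpnivfwa_membership(ts, weights, gamma):
    # Membership-amplitude term of CGPNIVFWA for scalar arguments: the
    # arguments enter under the power Game, the weighted product is taken,
    # and then the 2*Game-th root and the final 1/Game power are applied.
    inner = math.prod((1 - (t ** gamma) ** (2 * gamma)) ** w
                      for t, w in zip(ts, weights))
    return ((1 - inner) ** (1 / (2 * gamma))) ** (1 / gamma)

def cpnivfwa_membership(ts, weights):
    # Plain CPNIVFWA membership term: sqrt(1 - prod(1 - t^2)^w).
    return math.sqrt(1 - math.prod((1 - t ** 2) ** w for t, w in zip(ts, weights)))
```

With \(\Game = 1\) the two functions agree, which is exactly the reduction stated in Remark 4.2; other values of \(\Game \) reshape how strongly large memberships dominate the aggregate.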
Theorem 4.8
If all \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}], [\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}]\Big \rangle (i= 1,2,\ldots ,n)\) are equal, i.e., \(\widehat{\aleph }_{i} = \widehat{\aleph }\), then CGPNIVFWA\((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})=\widehat{\aleph }\).
Remark 4.3
The CGPNIVFWA operator also satisfies the boundedness and monotonicity properties.
4.4 Generalized CPNIVFWG (CGPNIVFWG)
Definition 4.4
Let \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}],[\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}]\Big \rangle \) be a family of CPNIVFNs. Then CGPNIVFWG \((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})= \frac{1}{\Game }\Big (\bigcirc ^{n}_{i=1} (\Game \widehat{\aleph }_{i})^{\zeta _{i}} \Big ) \,\,\, (i=1,2,\ldots ,n)\) is called the CGPNIVFWG operator.
Theorem 4.9
Let \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}], [\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}]\Big \rangle \) be a family of CPNIVFNs.
Then CGPNIVFWG\((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})=\)
Remark 4.4
If \(\Game =1\), the CGPNIVFWG operator reduces to the CPNIVFWG operator.
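As with the averaging case, this reduction can be verified numerically. The sketch below uses the dual (geometric) form suggested by the non-membership components of Theorem 4.7, applied to a scalar membership amplitude; it is an illustrative transcription under that assumption, not the paper's full interval-valued operator, and the names are ours.

```python
import math

def cgpnivfwg_membership(ts, weights, gamma):
    # Each argument enters through (1 - (1 - t^{2*Game})^{Game})^{1/(2*Game)}
    # and the results combine by a weighted geometric product.
    return math.prod(
        ((1 - (1 - t ** (2 * gamma)) ** gamma) ** (1 / (2 * gamma))) ** w
        for t, w in zip(ts, weights)
    )

def cpnivfwg_membership(ts, weights):
    # Plain CPNIVFWG membership term: a weighted geometric mean.
    return math.prod(t ** w for t, w in zip(ts, weights))
```

Setting \(\Game =1\) collapses the inner expression to \(t\) itself, so the generalized form coincides with the plain weighted geometric mean.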
Remark 4.5
The CGPNIVFWG operator satisfies both the boundedness and monotonicity properties.
Theorem 4.10
If all \(\widehat{\aleph }_{i} = \Big \langle (\varrho _{i}, \kappa _{i}); [\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}},\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}], [\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}},\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}] \Big \rangle (i= 1,2,\ldots ,n)\) are equal, i.e., \(\widehat{\aleph }_{i} = \widehat{\aleph }\), then CGPNIVFWG\((\widehat{\aleph }_{1}, \widehat{\aleph }_{2},\ldots , \widehat{\aleph }_{n})=\widehat{\aleph }\).
5 MADM Approach Based on CPNIV
In complex systems, choosing the best option among many alternatives is increasingly challenging. Reducing a decision to a single objective is difficult, though not impossible. Organizations often struggle to motivate employees, set goals, or reconcile opinions, so both individuals and committees must weigh multiple objectives simultaneously. In practical problems, decision-makers rarely reach a perfect solution when the criteria themselves are flexible, and they therefore seek methods that identify the best option reliably. Decision-making (DM) problems often involve ambiguity and uncertainty, rendering classical (crisp) methods ineffective. Let \(\aleph = \{\aleph _{1},\aleph _{2},\ldots ,\aleph _{n}\}\) be the set of \(n\) alternatives, \(C= \{\Game _{1},\Game _{2},\ldots ,\Game _{m}\}\) the set of \(m\) attributes, and \(w = \{\zeta _{1},\zeta _{2},\ldots ,\zeta _{m}\}\) the attribute weights. For \(i= 1,2,\ldots ,n\) and \(j= 1,2,\ldots ,m\), let \(\widehat{\aleph }_{ij} = \Big \langle (\varrho _{ij}, \kappa _{ij}); [\Im ^{{\mathscr {T}}l}_{ij}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{ij}},\Im ^{{\mathscr {T}}u}_{ij}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{ij}}],[\Im ^{{\mathscr {F}}l}_{ij}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{ij}},\Im ^{{\mathscr {F}}u}_{ij}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{ij}}]\Big \rangle \) denote the CPNIVFN of alternative \(\aleph _{i}\) under attribute \(\Game _{j}\).
Here, \([\Im ^{{\mathscr {T}}l}_{ij}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{ij}},\Im ^{{\mathscr {T}}u}_{ij}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{ij}}], [\Im ^{{\mathscr {F}}l}_{ij}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{ij}},\Im ^{{\mathscr {F}}u}_{ij}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{ij}}]\subseteq [0,1]\), \(0 \preceq (\Im ^{{\mathscr {T}}u}_{ij}(\wp ))^{2}+(\Im _{ij}^{{\mathscr {F}}u}(\wp ))^{2} \preceq 1\), and \(0 \preceq (\Lambda ^{{\mathscr {T}}u}_{ij}(\wp ))^{2}+(\Lambda _{ij}^{{\mathscr {F}}u}(\wp ))^{2} \preceq 1\). The \(n\) alternatives and \(m\) attributes yield an \(n \times m\) decision matrix, denoted by \(\mathscr {D}=(\widehat{\aleph }_{ij})_{n \times m}\).
5.1 Algorithm for CPNIV
Step-1: The decision values are input as CPNIVFNs.
Step-2: The normalized decision values are determined. The decision matrix \(\mathscr {D}=(\widehat{\aleph }_{ij})_{n \times m}\) is normalized into \(\overrightarrow{\mathscr {D}}=(\overbrace{\aleph _{ij}})_{n \times m}\), where \(\overbrace{\aleph _{ij}}= \Big \langle (\overrightarrow{\varrho _{ij}}, \overrightarrow{\kappa _{ij}}); [\overrightarrow{\Im ^{{\mathscr {T}}l}_{ij}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{ij}}},\overrightarrow{\Im ^{{\mathscr {T}}u}_{ij}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{ij}}}], [\overrightarrow{\Im ^{{\mathscr {F}}l}_{ij}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{ij}}},\overrightarrow{\Im ^{{\mathscr {F}}u}_{ij}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{ij}}}]\Big \rangle \) and
Step-3: The aggregated value of each alternative is calculated. Using the CPNIVF AOs over the attributes \(\Game _{j}\), the normalized values \(\overbrace{\aleph _{ij}}=\Big \langle (\overrightarrow{\varrho _{ij}}, \overrightarrow{\kappa _{ij}}); [\overrightarrow{\Im ^{{\mathscr {T}}l}_{ij}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{ij}}},\overrightarrow{\Im ^{{\mathscr {T}}u}_{ij}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{ij}}}], [\overrightarrow{\Im ^{{\mathscr {F}}l}_{ij}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{ij}}},\overrightarrow{\Im ^{{\mathscr {F}}u}_{ij}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{ij}}}]\Big \rangle \) are aggregated into \(\overbrace{\aleph _{i}}=\Big \langle (\overrightarrow{\varrho _{i}}, \overrightarrow{\kappa _{i}}); [\overrightarrow{\Im ^{{\mathscr {T}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}l}_{i}}},\overrightarrow{\Im ^{{\mathscr {T}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {T}}u}_{i}}}],[\overrightarrow{\Im ^{{\mathscr {F}}l}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}l}_{i}}},\overrightarrow{\Im ^{{\mathscr {F}}u}_{i}e^{i2\pi \Lambda ^{{\mathscr {F}}u}_{i}}}] \Big \rangle \).
Step-4: The score of each alternative is calculated as \(S_{1}(\widehat{\aleph })= \frac{\varrho }{2}\left( X + Y\right) \), \(S_{2}(\widehat{\aleph })= \frac{\kappa }{2}\left( X + Y\right) \), and \({\mathbb S}(\widehat{\aleph })= \frac{S_{1}(\widehat{\aleph })+S_{2}(\widehat{\aleph })}{2}\), where
where \(\,\, {\mathbb S}(\widehat{\aleph })\in [-1,1]\).
To calculate the accuracy value of each alternative, the following formula is used: \(H_{1}(\widehat{\aleph })= \frac{\varrho }{2}\left( X_{1} + Y_{1}\right) \), \(H_{2}(\widehat{\aleph })= \frac{\kappa }{2}\left( X_{1} + Y_{1}\right) \) and \({\mathbb H}(\widehat{\aleph })= \frac{H_{1}(\widehat{\aleph })+H_{2}(\widehat{\aleph })}{2},\) where
in which \(\,\, {\mathbb H}(\widehat{\aleph })\in [0,1]\).
Step-5: The alternative with the maximum score, \(\max \mathscr {D}^{*}_{i}\), is chosen as the optimal solution to the given problem. Figure 1 shows a flowchart of the algorithm for the MADM process using the CPNIVFS.
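To make Steps 1–5 concrete, the following end-to-end sketch runs the algorithm on a small decision matrix. It simplifies each CPNIVFN to its amplitude parts \(((\varrho , \kappa ); [\Im ^{{\mathscr {T}}l}, \Im ^{{\mathscr {T}}u}], [\Im ^{{\mathscr {F}}l}, \Im ^{{\mathscr {F}}u}])\), aggregates with a CPNIVFWA-style rule, and scores with an illustrative stand-in for \(X\) (the net squared membership of the amplitude interval). The paper's exact \(X\) and \(Y\) terms and the phase parts are omitted, so the function names and the score form here are assumptions, not the paper's definitions.

```python
import math

def aggregate_row(row, weights):
    # Step-3: CPNIVFWA-style aggregation of one alternative's ratings
    # ((rho, kappa), (Tl, Tu), (Fl, Fu)) across the attributes.
    rho = sum(w * v[0][0] for v, w in zip(row, weights))
    kappa = sum(w * v[0][1] for v, w in zip(row, weights))
    Tl = math.sqrt(1 - math.prod((1 - v[1][0] ** 2) ** w for v, w in zip(row, weights)))
    Tu = math.sqrt(1 - math.prod((1 - v[1][1] ** 2) ** w for v, w in zip(row, weights)))
    Fl = math.prod(v[2][0] ** w for v, w in zip(row, weights))
    Fu = math.prod(v[2][1] ** w for v, w in zip(row, weights))
    return (rho, kappa), (Tl, Tu), (Fl, Fu)

def score(agg):
    # Step-4 (illustrative): X is taken as the net squared membership of
    # the amplitude interval; the phase term Y is omitted in this sketch.
    (rho, kappa), (Tl, Tu), (Fl, Fu) = agg
    X = (Tl ** 2 + Tu ** 2 - Fl ** 2 - Fu ** 2) / 2
    return ((rho + kappa) / 4) * X

def rank(matrix, weights, names):
    # Step-5: score each aggregated alternative and pick the maximum.
    scores = {n: score(aggregate_row(r, weights)) for n, r in zip(names, matrix)}
    return max(scores, key=scores.get), scores
```

`rank(matrix, weights, names)` returns the best alternative together with all scores, mirroring the \(\max \mathscr {D}^{*}_{i}\) rule of Step-5; an alternative whose memberships dominate and whose non-memberships are small wins, as expected.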
5.2 Decision-Making Process for Brain Tumor Patients and Their Treatment
A tumor is called a primary tumor if it originates in the brain. It is said to be secondary if it spreads to the brain from another part of the body.
-
1.
Gliomas \((\aleph ^{A})\): A glioma is a type of brain tumor that develops from the supporting tissue of the brain, and gliomas constitute the largest group of primary brain tumors. Glial cells are support cells found in the brain parenchyma and its outer layer. Gliomas have two main subtypes: astrocytomas and oligodendrogliomas. Depending on their potential aggressiveness, these tumors are classified as low or high grade. A notable example is glioblastoma, a highly malignant glioma that proliferates rapidly under the microscope and shows other aggressive features, such as microvascular proliferation and necrosis. These tumors can develop in many locations of the brain and nervous system. Gliomas are the most common form of malignant brain tumor, accounting for over \(78\%\) of malignant brain tumors in adults. Glial tumors originate in the brain but can occasionally be found in the spinal cord, and gliomas are estimated to account for approximately \(33\%\) of all brain tumors. They form from the glial cells that surround and support neurons and can develop at any age. Over the years, our understanding of gliomas has evolved considerably. The level of aggressiveness may vary with the type of cells forming the glioma and with the genetic mutations in those cells; genetic studies of a tumor's makeup therefore help in understanding its behavior. Diffuse midline glioma is a recently described type of glioma, whereas hemispheric glioma is a type associated with specific mutations linked to aggressive behavior. Several treatment options are available, depending on the glioma type.
Some of these tumors grow slowly and are not considered cancerous; the rest are cancerous. Cancerous tumors are also known as malignant tumors, and malignant gliomas risk invading healthy brain tissue. Gliomas primarily affect adults, although cases also occur in children.
-
2.
Meningiomas \((\aleph ^{B})\): Meningiomas usually occur on the surface of the brain and develop from the cells lining the brain coverings (meninges). Most meningiomas are surgically removed; however, some are more aggressive and require additional therapy, typically irradiation. As they grow inward from the protective layer, they apply pressure to the brain and spinal cord. Meningiomas are the most common primary brain tumors, accounting for approximately \(30\%\) (about one-third) of all brain tumors. There are several types of meningioma, but all originate in the meninges, the three layers of tissue that surround and protect the brain. Women are diagnosed with meningiomas more often than men. An estimated \(85\%\) of meningiomas are not cancerous and grow slowly. Although most meningiomas are regarded as benign, some can persist even after treatment has been administered. A small percentage of meningiomas are malignant; overall, these tumors account for \(10\%\)–\(15\%\) of all brain neoplasms and are the most common benign intracranial tumors. In general, meningiomas are tumors of the membranes that surround the brain and spinal cord. Most meningiomas grow slowly and may not cause symptoms for several years; however, they may cause serious disabilities when they affect nearby brain tissue, nerves, or vessels.
-
3.
Metastases \((\aleph ^{C})\): Metastasis is the spread of cancer. Unlike normal cells, cancer cells can reproduce outside their site of origin in the body; cancer that has spread to other organs is classified as metastatic or advanced. A brain metastasis is a tumor that originates outside the brain, most commonly from tumors in other sites such as the lung, breast, skin, and digestive tract. In metastatic cancer, cells spread from their point of origin to distant parts of the body. They can invade the tissue surrounding the original tumor, travel to distant parts of the body via the bloodstream, or travel to nearby or distant lymph nodes through the lymphatic system. Cancer cells that spread from their original location to the brain cause brain metastases. Brain metastases can occur with any type of cancer; however, lung, breast, colon, and kidney cancers and melanoma are the most likely sources. Brain metastases can form one or more tumors in the brain. In addition to putting pressure on the surrounding brain tissue, metastatic brain tumors alter its function, which can produce signs and symptoms such as headache, personality changes, memory loss, and seizures. Tumors are abnormal tissue masses that form owing to abnormal growth.
-
4.
Embryonal tumors \((\aleph ^{D})\): Embryonal tumors arise from cells that resemble those forming the central nervous system during development. They predominantly occur in patients younger than 25 years of age, and children with this type of tumor are the most likely to develop malignant primary brain tumors. These tumors develop from cells left over from embryonic development that remain in the brain after birth, known as embryonal cells. Aggressive treatment with chemotherapy and irradiation is required, and because these tumors are known to disperse along the cerebrospinal fluid, extensive irradiation may be necessary. Medulloblastoma is the most common embryonal tumor. It is usually located in the cerebellum, at the bottom of the brain, which coordinates the muscles, maintains balance, and moves the body. Medulloblastomas tend to spread to other parts of the brain and spinal cord via the cerebrospinal fluid (CSF). Scientists have noticed important differences between primitive neuroectodermal tumors (PNETs) and other subtypes of these tumors, and have renamed them accordingly. This is a significant step forward, allowing clinicians to adjust treatment for patients based on their risk, and researchers to explore new treatments for these challenging tumors. Children under 9 years old are ten times more likely than adults to be diagnosed with embryonal tumors.
-
5.
Ependymomas \((\aleph ^{E})\): An ependymoma is a tumor that develops in the brain or anywhere along the spine, including the neck and the upper or lower back. Ependymomas form in the ependymal cells that line the central canal of the spinal cord and the fluid-filled cavities (ventricles) of the brain. Because these tumors are solid, surgery may be able to remove them completely. However, depending on their location, this may not always be possible, and patients may have to undergo more aggressive therapies. Two to three percent of all brain tumors are ependymomas, which arise from neoplastic transformation of the ependymal cells lining the ventricular system. Some of these tumors are well defined, whereas others are not. Ependymomas can carry various genetic mutations, similar to those observed in gliomas, and their behavior may differ across different parts of the nervous system. The spread of ependymomas also differs from that of other types of cancer: rather than spreading to a single area of the brain or spine, they can spread to several areas. In most cases, ependymomas begin by growing slowly, so no problems may be noticed at first. Ependymomas can affect both men and women; among adults, those between 40 and 60 years of age are the most likely to develop these tumors. In children, most ependymomas occur before the age of five, often under the age of 3 years, and usually develop near the base of the brain; spinal ependymomas most commonly affect children around 12 years of age. Ependymomas are primary tumors of the central nervous system (CNS). Each grade comprises different ependymoma types, and molecular testing is used to identify disease subtypes based on their location and characteristics. The grade of an ependymoma reflects its severity; in low-grade tumors, the tumor cells grow slowly over time.
Subependymomas and myxopapillary ependymomas are two subtypes of this tumor. Both affect adults more often than children, and myxopapillary tumors most commonly involve the spine. Grade II ependymomas are low-grade tumors that can occur in either the brain or the spine. The low-grade forms are benign, whereas the higher-grade forms are malignant (cancerous); the fast-growing nature of these malignant tumors indicates that they are likely to spread rapidly. Anaplastic ependymomas are a subtype of these malignant tumors; their lesions most commonly occur in the brain but can also occur in the spine.
A patient’s treatment plan often involves combining different types of treatment. The main treatment options for brain tumors are as follows.
-
1.
Surgery \((\Game ^{1})\): Surgery is usually part of glioma treatment, and complete removal of the glioma is the goal; however, risks are involved, including infection and bleeding. Other risks may also arise depending on the location of the tumor in the brain: surgical removal of a tumor near the nerves connected to the eyes, for example, can result in vision loss. It is sometimes impossible to remove a glioma completely, in which case surgeons remove as much of it as possible; such a procedure is described as a subtotal resection. The diagnosis of a meningioma can be followed by active monitoring, or surgery can be performed immediately. Surgery is generally offered if the tumor is causing, or likely to cause, symptoms or problems that interfere with the patient's day-to-day activities; the size and location of the tumor within the brain determine its effects on the patient. Some patients with metastatic brain tumors have only one secondary brain tumor that can be resected easily. In such cases, surgery may be the best course of action and may be the only treatment needed, especially if all tumors are removed. Tumors are surgically removed or reduced in size, which can also ease pressure and swelling in the brain. Embryonal tumors are usually treated surgically, even if the tumor cannot be completely removed; the doctor (neurosurgeon) uses specialized equipment to remove as much of the tumor as possible. Depending on its location, there is no guarantee that a tumor can be completely removed surgically, and tumors growing near nerves or blood vessels can be especially dangerous. Most ependymomas are also treated surgically: the neurosurgeon removes as much of the tumor as possible while preserving healthy tissue.
2. Radiation therapy \((\Game ^{2})\): Radiation therapy treats gliomas after surgery by killing any remaining cancer cells: powerful radiation beams destroy tumor cells. Various energy sources exist, including X-rays and protons, and because there are different types and doses of radiation therapy, side effects can vary. If surgery cannot remove the tumor, stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) can be used as alternatives to conventional radiotherapy; SRS and SRT deliver precisely targeted radiation doses. These techniques are only suitable for small tumors (less than 3 cm at their widest point), but they can be used instead of surgery to prevent brain damage. For metastases, radiation therapy uses high-energy X-ray beams, delivered by an external machine, to damage or destroy tumor cells; different types of radiation therapy can be administered individually or simultaneously. For embryonal tumors, the brain and sometimes the entire spinal cord are treated with radiotherapy, typically administered postoperatively; radiotherapy destroys tumor cells using high-energy X-rays. For ependymomas, radiation oncologists use strong energy beams to shrink or destroy the tumor and reduce the risk of recurrence.
3. Chemotherapy \((\Game ^{3})\): Chemotherapy, a treatment in which drugs are administered to eliminate cancer cells, is usually used in conjunction with radiation therapy for gliomas. The drugs are most commonly taken as pills or injected directly into a vein; in certain situations, chemotherapy can be delivered directly to glioma cells. Side effects depend on the type and dose of the medicine received; the most common are nausea, vomiting, hair loss, fever, and fatigue. Meningiomas are highly resistant to current chemotherapies, so chemotherapy is rarely used for them; it may still be appropriate in certain situations, such as after surgery or radiotherapy for high-grade (grade 3) meningiomas. Chemotherapy is also used to inhibit the growth of cancer cells in metastases, although it is less effective than surgery or radiation for most types of metastatic brain cancer. In patients with embryonal tumors, radiation and chemotherapy can cause significant side effects; in such cases, chemotherapy is administered either after surgery or in conjunction with radiotherapy, killing cancer cells with cytotoxic drugs. In ependymomas, medicines that destroy fast-growing cells, including tumor cells, are administered to reduce the risk of the tumor returning or spreading to other parts of the brain. If the tumor spreads to other parts of the body, which is rare in ependymomas, chemotherapy is necessary.
4. Immunotherapy/clinical trials \((\Game ^{4})\): Treatment depends on the patient's age, the tumor remaining after surgery, and the type and location of the tumor. Treatments such as tumor-treating field therapy use electrical energy to disrupt glioma cells: sticky pads are attached to the shaved scalp and connected by wires to a portable device, and the electrical field generated by the device prevents glioma cells from growing. The main side effect associated with this treatment is skin irritation. In the case of metastatic tumors, certain cancers that have spread to the brain respond to immunotherapy; treating brain metastases from melanoma, lung cancer, and embryonal tumors is clinically important for some patients. A clinical trial may be recommended as part of treatment, in which a new treatment is tested or existing treatments are combined in new ways. In some cases, magnetic resonance imaging (MRI) is performed, usually every 6 months. Immunotherapy fights ependymomas by stimulating the immune system. Although rare, targeted therapy is sometimes necessary for patients with ependymomas whose tumors have spread to other parts of the body; targeted therapies include drugs that attack cancer cells or prevent their growth, and such therapy is rarely required in ependymoma treatment.
We selected five patients with different types of brain tumors through a random selection process. When treating brain tumor patients, four types of treatment must be considered, with weights \(w = \{0.4,0.3,0.2,0.1\}\). Our goal was to select the best alternative among the brain tumor patients. Tables 1, 2, 3 and 4 show the decision values.
Tables 5, 6, 7 and 8 show the normalized decision matrices, as given below.
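As a rough illustration of the normalization step, the sketch below normalizes a crisp decision matrix column-wise by dividing each entry by its column maximum (the usual treatment of benefit criteria). The actual method normalizes CPNIVF decision values, and the numbers here are hypothetical, not taken from Tables 1, 2, 3 and 4.

```python
def normalize(matrix):
    """Divide each entry by its column maximum (benefit-criteria normalization)."""
    cols = list(zip(*matrix))
    maxima = [max(col) for col in cols]
    return [[x / m for x, m in zip(row, maxima)] for row in matrix]

# Hypothetical raw ratings: rows are patients, columns are the four treatments.
raw = [
    [3.0, 4.0, 2.0, 5.0],
    [4.0, 2.0, 4.0, 4.0],
]
norm = normalize(raw)
print(norm)
```

After normalization, every entry lies in \([0,1]\), which keeps attributes on a comparable scale before aggregation.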
For each alternative, the aggregated information is computed using the CPNIVFWA operator; Table 9 lists the results.
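The scalar core of this aggregation can be sketched with the standard Pythagorean fuzzy weighted averaging (PFWA) rule, which the CPNIVFWA operator extends with interval bounds, normal fuzzy numbers, and a complex phase term. The membership/non-membership pairs below are hypothetical.

```python
import math

def pfwa(pairs, weights):
    """Pythagorean fuzzy weighted averaging of (membership, non-membership) pairs:
    mu = sqrt(1 - prod_i (1 - mu_i^2)^w_i),  nu = prod_i nu_i^w_i.
    """
    mu = math.sqrt(1.0 - math.prod((1.0 - m * m) ** w for (m, _), w in zip(pairs, weights)))
    nu = math.prod(n ** w for (_, n), w in zip(pairs, weights))
    return mu, nu

# Hypothetical assessments of one patient under the four treatments.
pairs = [(0.8, 0.3), (0.6, 0.5), (0.7, 0.4), (0.5, 0.6)]
w = [0.4, 0.3, 0.2, 0.1]
mu, nu = pfwa(pairs, w)
assert mu * mu + nu * nu <= 1.0  # the Pythagorean constraint is preserved
```

The aggregated pair stays inside the Pythagorean region, and the aggregated membership lies between the smallest and largest input memberships, reflecting the boundedness property discussed for the operators.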
The score values are calculated as follows: \(\aleph ^{A}=0.3535\), \(\aleph ^{B}=0.4020\), \(\aleph ^{C}=0.4031\), \(\aleph ^{D}=0.3671\), \(\aleph ^{E}=0.3416\). The ranking of the alternatives is \(\aleph ^{C} \succeq \aleph ^{B} \succeq \aleph ^{D} \succeq \aleph ^{A} \succeq \aleph ^{E}.\) Therefore, the patient with metastases \((\aleph ^{C})\) requires the most urgent treatment.
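The final ranking step is simply a descending sort of the score values reported above, as the short sketch below shows using the CPNIVFWA scores from the text.

```python
# Score values of the five alternatives from the CPNIVFWA operator.
scores = {"A": 0.3535, "B": 0.4020, "C": 0.4031, "D": 0.3671, "E": 0.3416}

# Rank alternatives by decreasing score; the top entry is the most urgent case.
ranking = sorted(scores, key=scores.get, reverse=True)
print(" >= ".join(ranking))  # C >= B >= D >= A >= E
```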
5.3 Comparison of the Proposed and Existing Methods
We compare different methods of calculating score values to determine whether the proposed DM method is feasible in practice. Yang et al. [15] constructed a PNIVFS based on AOs. Palanikumar et al. [16] proposed a PNNIVFS-based MADM approach. Palanikumar et al. [26] discussed robotic sensors based on score and accuracy values in a q-rung complex Diophantine neutrosophic normal set. The comparison illustrates the usefulness and benefits of the proposed approach. This study computed score values using the CPNIVFWG, CGPNIVFWA, and CGPNIVFWG operators. Table 10 lists the different types of distances.
Table 11 lists different aggregating operators such as weighted averaging, weighted geometric, generalized weighted averaging, and generalized weighted geometric operators [15].
In this analysis, the advantages of the proposed algorithm are illustrated by comparing its characteristic points with those of several existing algorithms. The symbols \(\checkmark \) and \(\times \) indicate whether the property associated with an operator is satisfied. Furthermore, existing approaches only address MADM problems and do not discuss GDM problems. Based on Tables 12 and 13, the proposed and existing models can be compared using Hamming distances to assess their usefulness and validity.
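As a minimal sketch of the distance used in these comparisons, the function below computes the normalized Hamming distance between two plain Pythagorean fuzzy numbers via their squared membership, non-membership, and hesitancy degrees; the full CPNIVF version additionally involves interval bounds and phase terms. The example values are hypothetical.

```python
def hamming(p, q):
    """Normalized Hamming distance between Pythagorean fuzzy numbers
    (mu, nu), using squared degrees and hesitancy pi^2 = 1 - mu^2 - nu^2."""
    (m1, n1), (m2, n2) = p, q
    pi1 = 1.0 - m1 ** 2 - n1 ** 2
    pi2 = 1.0 - m2 ** 2 - n2 ** 2
    return 0.5 * (abs(m1 ** 2 - m2 ** 2) + abs(n1 ** 2 - n2 ** 2) + abs(pi1 - pi2))

d = hamming((0.8, 0.3), (0.6, 0.5))
print(d)  # 0.28 (up to floating-point rounding)
```

The distance is symmetric and vanishes for identical arguments, which is what makes it usable as a validity check between ranking models.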
As shown in Fig. 2, the proposed and previous models are compared using the score values obtained with weighted averaging operators.
In Fig. 3, the proposed and previous models are compared using the score values obtained with weighted geometric operators.
5.4 Data Analysis
The reliability of an MADM result can change with the alternatives and with the parameter \(\Game \); this sensitivity test has the following setup. Using the CPNIVFWG approach, the score values and rankings were computed while the value of \(\Game \) was varied. For each alternative, the aggregated data were obtained using the CPNIVFWG operator; Table 14 lists the resulting score values for the different values of \(\Game \).
If \(1 \preceq \Game \preceq 4\), then \(\aleph ^{C} \succeq \aleph ^{B} \succeq \aleph ^{D} \succeq \aleph ^{A} \succeq \aleph ^{E}\). If \(\Game =5\), the ranking becomes \(\aleph ^{C} \succeq \aleph ^{B} \succeq \aleph ^{D} \succeq \aleph ^{E} \succeq \aleph ^{A}\); that is, the positions of \(\aleph ^{E}\) and \(\aleph ^{A}\) are interchanged. Similarly, the rankings of the alternatives depend on the value of \(\Game \) under the CPNIVFWA, CGPNIVFWA, and CGPNIVFWG operators.
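The effect of \(\Game \) can be illustrated with a crisp power-mean analogue of the generalized weighted averaging operator: as the exponent grows, the operator increasingly favors an alternative's largest attribute value, which can flip adjacent positions in the ranking. The alternative values below are hypothetical and only demonstrate the mechanism, not the paper's data.

```python
def gwa(xs, weights, lam):
    """Crisp generalized weighted averaging: (sum_i w_i * x_i**lam)**(1/lam).
    `lam` plays the role of the parameter written as Game in the text."""
    return sum(w * x ** lam for x, w in zip(xs, weights)) ** (1.0 / lam)

w = [0.4, 0.3, 0.2, 0.1]
alts = {
    "A": [0.9, 0.2, 0.2, 0.2],        # one strong attribute, rest weak
    "E": [0.55, 0.55, 0.55, 0.55],    # uniformly moderate
}

rank_1 = sorted(alts, key=lambda k: gwa(alts[k], w, 1), reverse=True)
rank_5 = sorted(alts, key=lambda k: gwa(alts[k], w, 5), reverse=True)
print(rank_1, rank_5)  # the larger exponent promotes the peaked alternative
```

With the exponent at 1 the uniform alternative wins on average, while at 5 the peaked alternative overtakes it, mirroring how \(\aleph ^{E}\) and \(\aleph ^{A}\) swap places when \(\Game \) increases.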
5.5 Advantages
Several advantages of these applications were identified in the previous sections. The CPNIVFS combines the advantages of IVFSs and complex PIVFSs. According to the CPNIVFS, the normal distributions observed in real life, from human behavior to natural events, can be modeled directly. Based on the options provided by the decision-maker, the CPNIVFS finds the most appropriate alternative; therefore, the CPNIVFS with AOs can be used as a second approach to determine the optimal DM solution. The outcome of a decision can be chosen according to \(\Game \) and the personal preferences of the decision-maker, and different ranking outcomes can be generated dynamically for each alternative using the CPNIVFWA, CPNIVFWG, CGPNIVFWA, and CGPNIVFWG operators. With this method, DM procedures can be assessed both objectively and subjectively by several experts. The illustrative example is interesting because it involves uncertain steps and procedures; several options and alternatives are described based on a set of ideal characteristics that decision-makers can compare. Weighted operators derived from weighted aggregation models are crucial components of this method; they retain the properties of their classical counterparts while extending them to more complex problems. Their main advantage is the introduction of score and accuracy values, which can be used to compare an optimal preference set with the alternatives or options.
5.6 Discussion
This method is effective because it considers the relationships between the different attributes, and it therefore produces better ranking results; it solves practical decision-making problems more effectively and efficiently than the methods in [15, 16, 26]. In this study, score and accuracy values were established for the CPNIVFS and compared to demonstrate their superiority. We introduce a new concept of accuracy and score values for CPNIVFSs in a simple mathematical form, which is an advantage in practical calculations. A numerical example illustrated the superiority of the score and accuracy values when these two factors were considered, and a real-life application demonstrated their applicability.
6 Conclusion
This article discusses properties of the proposed AO rules for CPNIVFWA, CPNIVFWG, CGPNIVFWA, and CGPNIVFWG. In addition to commutativity, idempotency, boundedness, associativity, and monotonicity, these operators have several other properties. Various classical measures were studied to characterize the weighting vectors, several prerequisites were examined for the development of the AOs, and examples of the methods used to develop these operators were provided. In an indeterminate and inconsistent information environment, the MADM approach in the CPNIV context allows people to make informed decisions by selecting the best alternative from a wide range of options. An MADM problem was solved in this study by applying the CPNIVFWA, CPNIVFWG, CGPNIVFWA, and CGPNIVFWG operators under the parameter \(\Game \). It was shown that the parameter \(\Game \) makes it possible to distinguish between alternatives under these operators. Based on the above analysis, \(\Game \) exerts the most significant influence on the ranking of the alternatives; for a reasonable ranking of the candidates, decision-makers should consider the current situation and can use the values of \(\Game \) to estimate the DM result. Real-world applications can be derived from the score and accuracy values. This study will be useful for future academics in this field. Further work will address the following topics.
(1) Soft and expert sets will be explored in terms of the CPNIVFS.
(2) Using the CPNIVFS, we will investigate complex generalized q-rung interval-valued normal fuzzy sets and complex cubic q-rung interval-valued normal fuzzy sets.
(3) Complex generalized cubic interval-valued normal fuzzy sets and complex interval-valued normal fuzzy sets can be used to solve MADM problems.
Data Availability Statement
The data presented in this study are available upon request from the corresponding author.
References
Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965)
Atanassov, K.: Intuitionistic fuzzy sets. Fuzzy Sets Syst. 20(1), 87–96 (1986)
Ejegwa, P.A.: Distance and similarity measures for Pythagorean fuzzy sets. Granul. Comput. 5, 225–238 (2018)
Yager, R.R.: Pythagorean membership grades in multi criteria decision-making. IEEE Trans. Fuzzy Syst. 22, 958–965 (2014)
Zhang, X., Xu, Z.: Extension of TOPSIS to multiple criteria decision-making with Pythagorean fuzzy sets. Int. J. Intell. Syst. 29, 1061–1078 (2014)
Xu, R.N., Li, C.L.: Regression prediction for fuzzy time series. Appl. Math. J. Chin. Univ. 16, 451–461 (2001)
Yang, M.S., Ko, C.H.: On a class of fuzzy c-numbers clustering procedures for fuzzy data. Fuzzy Sets Syst. 84, 49–60 (1996)
Akram, M., Dudek, W.A., Ilyas, F.: Group decision making based on Pythagorean fuzzy TOPSIS method. Int. J. Intell. Syst. 34, 1455–1475 (2019)
Akram, M., Dudek, W.A., Dar, J.M.: Pythagorean Dombi fuzzy aggregation operators with application in multi-criteria decision-making. Int. J. Intell. Syst. 34, 3000–3019 (2019)
Akram, M., Peng, X., Al-Kenani, A.N., Sattar, A.: Prioritized weighted aggregation operators under complex Pythagorean fuzzy information. J. Intell. Fuzzy Syst. 39(3), 4763–4783 (2020)
Rahman, K., Abdullah, S., Shakeel, M., Khan, M.S.A., Ullah, M.: Interval valued Pythagorean fuzzy geometric aggregation operators and their application to group decision-making problem. Cogent Math. 4, 1–19 (2017)
Peng, X., Yang, Y.: Fundamental properties of interval valued Pythagorean fuzzy aggregation operators. Int. J. Intell. Syst. 31, 444–487 (2015)
Rahman, K., Ali, A., Abdullah, S., Amin, F.: Approaches to multi attribute group decision-making based on induced interval valued Pythagorean fuzzy Einstein aggregation operator. New Math. Nat. Comput. 14(3), 343–361 (2018)
Khan, M.S.A.: The Pythagorean fuzzy Einstein Choquet integral operators and their application in group decision making. Comput. Appl. Math. 38(128), 1–35 (2019)
Yang, Z., Chang, J.: Interval-valued Pythagorean normal fuzzy information aggregation operators for multiple attribute decision making approach. IEEE Access 8, 51295–51314 (2020)
Palanikumar, M., Arulmozhi, K., Jana, C.: Multiple attribute decision-making approach for Pythagorean neutrosophic normal interval-valued fuzzy aggregation operators. Comput. Appl. Math. 41(90), 1–22 (2022)
Ramot, D., Milo, R., Friedman, M., Kandel, A.: Complex fuzzy sets. IEEE Trans. Fuzzy Syst. 10, 171–186 (2002)
Ramot, D., Friedman, M., Langholz, G., Kandel, A.: Complex fuzzy logic. IEEE Trans. Fuzzy Syst. 11, 450–461 (2003)
Yazdanbakhsh, O., Dick, S.: Multi-variate time series forecasting using complex fuzzy logic. In: Proceedings of the 2015 Annual Conference of the North American Fuzzy Information Processing Society Held Jointly with 2015 5th World Conference on Soft Computing, Redmond, WA, USA, 17–19, 2015, pp. 1–6 (2015)
Alkouri, A.M.D.J.S., Salleh, A.R.: Complex intuitionistic fuzzy sets. In: AIP Conference Proceedings, pp. 464–470. American Institute of Physics, College Park (2012)
Garg, H., Rani, D.: Some generalized complex intuitionistic fuzzy aggregation operators and their application to multi-criteria decision-making process. Arab. J. Sci. Eng. 44, 2679–2698 (2019)
Ullah, K., Mahmood, T., Ali, Z., Jan, N.: On some distance measures of complex Pythagorean fuzzy sets and their applications in pattern recognition. Complex Intell. Syst. 6, 15–27 (2020). https://doi.org/10.1007/s40747-019-0103-6
Liu, P., Mahmood, T., Ali, Z.: Complex \(q\)-rung orthopair fuzzy aggregation operators and their applications in multi-attribute group decision making. Information 11, 1–5 (2020)
Rong, Y., Liu, Y., Pei, Z.: Complex \(q\)-rung orthopair fuzzy \(2\)-tuple linguistic Maclaurin symmetric mean operators and its application to emergency program selection. Int. J. Intell. Syst. 35, 1749–1790 (2020)
Akram, M., Bashir, A., Garg, H.: Decision-making model under complex picture fuzzy Hamacher aggregation operators. Comput. Appl. Math 39, 226 (2020)
Palanikumar, M., Kausar, N., Garg, H., Kadry, S., Kim, J.: Robotic sensor based on score and accuracy values in q-rung complex Diophantine neutrosophic normal set with an aggregation operation. Alex. Eng. J. 77, 149–164 (2023)
Liu, P., Wang, P.: Multiple-attribute decision-making based on Archimedean Bonferroni operators of \(q\)-rung orthopair fuzzy numbers. IEEE Trans. Fuzzy Syst. 27(5), 834–848 (2018)
Liu, P., Chen, S.M., Wang, P.: Multiple attribute group decision-making based on \(q\)-rung orthopair fuzzy power Maclaurin symmetric mean operators. IEEE Trans. Syst. Man Cybern. Syst. 50(10), 3741–3756 (2020)
Liu, P., Wang, P.: Some \(q\)-rung orthopair fuzzy aggregation operators and their applications to multiple-attribute decision making. Int. J. Intell. Syst. 1, 1–22 (2017)
Wang, P., Liu, P., Chiclana, F.: Multi-stage consistency optimization algorithm for decision making with incomplete probabilistic linguistic preference relation. Inf. Sci. 556, 361–388 (2021)
Liu, P., Dang, R., Wang, P., Xiaoming, W.: Unit consensus cost-based approach for group decision-making with incomplete probabilistic linguistic preference relations. Inf. Sci. 624, 849–880 (2023)
Zhang, C., Ding, J., Li, D., Zhan, J.: A novel multi-granularity three-way decision making approach in \(q\)-rung orthopair fuzzy information systems. Int. J. Approx. Reason. 138, 161–187 (2021)
Zhang, C., Bai, W., Li, D., Zhan, J.: Multiple attribute group decision making based on multi-granulation probabilistic models, MULTIMOORA and TPOP in incomplete \(q\)-rung orthopair fuzzy information systems. Int. J. Approx. Reason. 143, 102–120 (2022)
Zhang, C., Li, D., Xiangping, K., Song, D., Sangaiah, A.K., Broumi, S.: Neutrosophic fusion of rough set theory: an overview. Comput. Ind. 115, 103–117 (2020)
Lian, K., Wang, T., Wang, B., Wang, M., Huang, W., Yang, J.: The research on relative knowledge distances and their cognitive features. Int. J. Cogn. Comput. Eng. 4, 135–148 (2023)
Anusha, G., Ramana, P.V., Sarkar, R.: Hybridizations of Archimedean copula and generalized MSM operators and their applications in interactive decision-making with \(q\)-rung probabilistic dual hesitant fuzzy environment. Decis. Mak. Appl. Manag. Eng. 6(1), 646–678 (2023)
Liu, P.D., Teng, F.: Probabilistic linguistic TODIM method for selecting products through online product reviews. Inf. Sci. 485, 441–455 (2019)
Pang, Q., Wang, H., Xu, Z.S.: Probabilistic linguistic term sets in multi-attribute group decision making. Inf. Sci. 369, 128–143 (2016)
Bairagi, B.: A homogeneous group decision making for selection of robotic systems using extended TOPSIS under subjective and objective factors. Decis. Mak. Appl. Manag. Eng. 5(2), 300–315 (2022)
Torra, V.: Hesitant fuzzy sets. Int. J. Intell. Syst. 25, 529–539 (2010)
Rodríguez, R.M., Martínez, L., Herrera, F.: Hesitant fuzzy linguistic term sets for decision making. IEEE Trans. Fuzzy Syst. 20, 109–119 (2012)
Lu, Y., Xu, Y., Viedma, E.H.: Consensus progress for large-scale group decision making in social networks with incomplete probabilistic hesitant fuzzy information. Appl. Soft Comput. 126, 109249 (2022)
Xu, Y., Li, M., Chiclana, F., Viedma, E.H.: Multiplicative consistency ascertaining, inconsistency repairing, and weights derivation of hesitant multiplicative preference relations. IEEE Trans. Syst. Man Cybern. Syst. 52, 6806–6821 (2022)
Yazdi, M., Saner, T., Darvishmotevali, M.: Application of an artificial intelligence decision-making method for the selection of maintenance strategy. In: 10th International Conference on Theory and Application of Soft Computing, pp. 246–253 (2019)
Rojek, I., Kaczmarek, M.J., Piechowski, M., Mikolajewski, D.: An artificial intelligence approach for improving maintenance to supervise machine failures and support their repair. Appl. Sci. 13, 1–16 (2023)
Huang, G., Xiao, L., Pedrycz, W., Pamucar, D., Zhang, G., Martinez, L.: Design alternative assessment and selection: a novel Z-cloud rough number-based BWM-MABAC model. Inf. Sci. 603, 149–189 (2022)
Xiao, L., Huang, G., Pedrycz, W., Pamucar, D., Martinez, L., Zhang, G.: A q-rung orthopair fuzzy decision-making model with new score function and best-worst method for manufacturer selection. Inf. Sci. 608, 153–177 (2022)
Huang, G., Xiao, L., Pedrycz, W., Zhang, G., Martinez, L.: Failure mode and effect analysis using T-spherical fuzzy maximizing deviation and combined comparison solution methods. IEEE Trans. Reliab. 2022, 1–22 (2022)
Mahmood, T., Ali, Z.: Prioritized Muirhead mean aggregation operators under the complex single-valued neutrosophic settings and their application in multi-attribute decision-making. J. Comput. Cogn. Eng. 2, 56–73 (2022)
Haque, T.S., Chakraborty, A., Alrabaiah, H., Alam, S.: Multi-attribute decision-making by logarithmic operational laws in interval neutrosophic environments. Granul. Comput. 7(4), 837–860 (2022)
Haque, T.S., Chakraborty, A., Mondal, S.P., Alam, S.: Approach to solve multi-criteria group decision-making problems by exponential operational law in generalized spherical fuzzy environment. CAAI Trans. Intell. Technol. 5(2), 106–114 (2020)
Haque, T.S., Alam, S., Chakraborty, A.: Selection of most effective COVID-19 virus protector using a novel MCGDM technique under linguistic generalized spherical fuzzy environment. Comput. Appl. Math. 41(2), 84 (2022)
Banik, B., Alam, S., Chakraborty, A.: Comparative study between GRA and MEREC technique on an agricultural-based MCGDM problem in pentagonal neutrosophic environment. Int. J. Environ. Sci. Technol. 2023, 1–16 (2023)
Banik, B., Alam, S., Chakraborty, A.: A novel integrated neutrosophic cosine operator based linear programming ANP-EDAS MCGDM strategy to select anti-pegasus software. Int. J. Inf. Technol. Decis. Mak. 2023, 1–37 (2023)
Acknowledgements
The author declares that the present work was not supported by any financial or material agency.
Funding
There is no funding for this research.
Author information
Contributions
Conceptualization, M. Palanikumar; methodology, M. Palanikumar; writing the original draft, M. Palanikumar; conceptualization, Nasreen Kausar; validation, Nasreen Kausar; conceptualization, Dragan Pamucar; review and editing, Dragan Pamucar; writing, reviewing, and editing, Mohd Asif Shah.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Consent for Publication
All the authors approve the submission to this journal.
Ethics Approval and Consent to Participate
This article does not contain any studies involving human participants or animals performed by any of the authors.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Palanikumar, M., Kausar, N., Pamucar, D. et al. Complex Pythagorean Normal Interval-Valued Fuzzy Aggregation Operators for Solving Medical Diagnosis Problem. Int J Comput Intell Syst 17, 118 (2024). https://doi.org/10.1007/s44196-024-00504-w
Keywords
- Weighted averaging
- Weighted geometric
- Generalized weighted averaging
- Generalized weighted geometric
- Decision making