Introduction

The notion of fuzzy sets was developed as a way to mathematically represent the ambiguity we often observe in relation to concepts or events that do not have clearly defined boundaries. Many more items become amenable to scientific study when we employ the idea of partial degrees of membership. This position creates both a mathematical description of fuzzy sets [1] and a semantic interpretation for them. However, there is no denying that actions in many real-life scenarios produce both positive and negative independent effects. The undesirability of an action cannot always be retrieved from its desirability. For instance, the use of antibiotics is beneficial in the treatment of various diseases; however, it has some adverse side effects on the body, and they occur independently of its positive results. In formal terms, positive and harmful aspects can be captured by membership (MD) and non-membership (NMD) degrees, respectively. Each such pair of evaluations is nowadays known as an orthopair [2]. Atanassov pioneered the idea of incorporating both figures into a model, and thus conceived the notion of intuitionistic fuzzy set (IFS) [3]. Although NMDs are no longer derived from the corresponding MDs, both evaluations are still linked by the idea that they apportion a sort of “total membership” between desirable and undesirable belongingness, possibly with some slackness. This prompted Atanassov to require that the sum of MDs and NMDs should be universally bounded by 1. At any rate, IFSs have been employed to solve several daily-life issues, which include decision making [4], concept selection [5], figure skating [6], and image fusion [7]. Their theoretical basis has been expanded considerably too [8].

Yager [9] launched Pythagorean fuzzy sets (PFSs) to alleviate the constraint on the total of MDs and NMDs in IFSs (see [10] for its early development). Zhang and Xu [12] defined the term Pythagorean fuzzy value (PFV). The superiority and inferiority ranking approach for PFSs was explored in [13]. Complex PFSs and their applications to pattern recognition were discussed by Ullah et al. [14]. PF graphs, the PF indiscernibility relation, and their applications to protein–protein interaction networks were discussed by Nawaz et al. [15]. Naeem et al. [16] designed aggregation operators based on the TOPSIS and VIKOR methods for PFSs. Garg [17], Ashraf et al. [18], and Khan et al. [19], respectively, proposed Einstein operations-based, sine trigonometric, and Dombi aggregation operators for PFSs. The MABAC method for PFSs based on the Choquet integral was given in [20]. Akram et al. [21] extended the ELECTRE-II approach to PFSs. Khan et al. [22] provided a refined VIKOR method with the help of dissimilarity measures.

Other important research directions for handling uncertainty in its different forms include soft sets by Molodtsov [23], bipolar-valued fuzzy sets by Zhang [24], bipolar-valued soft sets by Mahmood [25], complex fuzzy sets [26], and bipolar complex fuzzy sets [27]. The list is not exhaustive.

Independently of the discussion above, in many genuine decision-making situations it may be challenging for experts to precisely calibrate their judgments by a number (e.g., due to measurement errors or a variety of sources). They can, however, declare that the evaluations lie in an interval (in our setting, a subinterval of the unit interval). The integration of this feature into the IFS approach produced the model called interval-valued IFS (IV-IFS) by Atanassov [28]. The interval-valued PFS (IV-PFS) model given by Peng and Yang [29] expands PFSs with interval evaluations. These authors discussed basic operations and aggregation operators for IV-PFSs, and extended the ELECTRE method to this framework. Figure 1 represents an IV-PFS. We can observe that its constituent elements are represented by suitably placed rectangles (instead of the isolated pairs of numbers that characterize the evaluations pertaining to PFSs).

Fig. 1 A geometrical presentation of the basic ingredient (the evaluation of an alternative) of an IV-PFS

A similar motivation produced another extension of IFSs, whereby a circular region (rather than a rectangle) with a given radius is allocated to estimate the value of every option. This modification produces a representation that is less challenging than the space of orthopairs allowed in the aforementioned IV-IFSs. The spirit, however, is similar: whereas IV-IFSs allow for separate looseness in the production of MDs and NMDs, the new model permits a certain slackness around the orthopair formed by the MD and NMD. Slackness is given by the radius associated with the model, which is named Circular IFS (C-IFS) after Atanassov [30]. Recently, Khan et al. [31] have discussed divergence measures for C-IFSs and their applications to decision making, pattern recognition, and multi-period medical diagnosis. However, this extension of IFSs suffers from the setback that it cannot operate in cases where the total of MD and NMD exceeds one, for the reasons presented above. A more important criticism is that situations exist in which some alternatives can be evaluated with almost certain values (i.e., near-zero slackness) whereas other alternatives are hard to evaluate unless a large looseness is allowed.

To overcome both the difficulty of the representation in IV-PFSs and the restrictive domain of C-IFSs, in this paper we propose the ideas of Circular PFSs (C-PFSs) and Disc PFSs (D-PFSs). These are legitimate extensions of IFSs, C-IFSs, and PFSs, whose geometrical visualizations are given in Figs. 2 and 3, respectively. The new frameworks enlarge the domain of qualified MDs and NMDs of the aforesaid models. Observe that in Fig. 2, circle 1 is allowed by the C-IFS modelization, but circle 2 is not. Nevertheless, it belongs to the novel C-PFS model. Figure 3 illustrates the difference with the D-PFS model, where the evaluations are allowed to have their own distinctive radii.

Fig. 2 A geometrical comparison: the basic ingredients (the evaluation of each alternative) of C-IFSs versus C-PFSs

Fig. 3 A geometrical presentation of the basic ingredients (the evaluation of each alternative) of a D-PFS

Therefore, the aim of this study is to formalize the ideas of C-PFS and D-PFS and establish algebraic (union, intersection) and arithmetic (addition, multiplication, and scalar multiplication) operations for D-PFSs, the larger of these two models. Their semantic interpretations will be presented too. We shall also be concerned with aggregation operators. The circular Pythagorean fuzzy weighted average (CPFWA) and circular Pythagorean fuzzy weighted geometric (CPFWG) aggregation operators will be defined, and their properties will be investigated. With these tools, the CODAS method based on the Hamming and Euclidean distances will be extended to the new model. It will be used to address practical decision-making problems.

The rest of the manuscript is structured as follows. Fundamentals concerning C-IFSs are covered in “Preliminaries”. “Circular versus disc Pythagorean fuzzy sets” introduces the concepts of C-PFS and D-PFS and their essential principles. “Operating with D-PFSs: aggregation and distances” is devoted to the study of technical tools that are necessary for subsequent applications. Specifically, the analysis of aggregation operators for D-PFSs and their main characteristics are included in “D-PF aggregation operators”, and then “Distance measures for D-PFSs” puts forward and examines the Hamming and Euclidean distances between D-PFSs. The CODAS approach is covered in “Disc Pythagorean fuzzy CODAS method”. “Illustrative example: selection of supermarket for fresh fruits” is concerned with how D-PFSs can be applied to CODAS-based decision making. The comparison and concluding notes of the article are presented in “Comparative analysis” and “Conclusion”, respectively.

Preliminaries

Throughout the entire article, L is a finite and non-empty set, and we refer to L as the universal set (of alternatives). Besides, the unit interval [0, 1] is represented by \({\textbf {I}}\).

We proceed to recall the fundamental definitions of fuzzy set, IFS, IV-IFS, PFS, and C-IFS over L. In this section, we also mention some basic facts about C-IFSs that have been established in existing literature.

Definition 1

[1] A fuzzy set A over a universal set L is defined as

$$\begin{aligned} A = \{ (\hbar , \mu _{A}(\hbar ) ) \; \mid \; \hbar \in L \} , \end{aligned}$$

where \(\mu _{A}: L\rightarrow {\textbf {I}}\) is the MD.

Definition 2

[3] An IFS A over a universal set L is defined as

$$\begin{aligned} A = \{ (\hbar , \mu _{A}(\hbar ),\nu _{A}(\hbar ) ) \; \mid \; \hbar \in L \} , \end{aligned}$$

where \(\mu _{A}: L\rightarrow {\textbf {I}}\) and \(\nu _{A}: L\rightarrow {\textbf {I}}\), with the constraint \(\mu _{A}(\hbar ) + \nu _{A}(\hbar ) \le 1\), are the MDs and NMDs, respectively. The expression \(\pi _{A}(\hbar )=1-(\mu _{A}(\hbar )+\nu _{A}(\hbar ))\) gives the hesitancy degree of an element \(\hbar \in L\).
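For readers who prefer a computational check, the following small Python sketch (ours; the function names are illustrative and not part of the formal development) tests the intuitionistic constraint of Definition 2 and computes the hesitancy degree of a single orthopair.

```python
def is_intuitionistic(mu: float, nu: float) -> bool:
    """An orthopair (mu, nu) is an admissible IFS evaluation when mu + nu <= 1."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu + nu <= 1.0

def ifs_hesitancy(mu: float, nu: float) -> float:
    """Hesitancy degree pi = 1 - (mu + nu) of an admissible orthopair."""
    assert is_intuitionistic(mu, nu)
    return 1.0 - (mu + nu)

print(is_intuitionistic(0.5, 0.25), ifs_hesitancy(0.5, 0.25))  # True 0.25
print(is_intuitionistic(0.8, 0.6))  # False: the MD and NMD sum to more than 1
```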

Definition 3

[28] An IV-IFS A over a universal set L is defined as

$$\begin{aligned} A = \{ (\hbar , I_{\mu _{A}}(\hbar ), I_{\nu _{A}}(\hbar ) ) \; \mid \; \hbar \in L \} , \end{aligned}$$

where \(I_{\mu _{A}}(\hbar )\) and \(I_{\nu _{A}}(\hbar )\), with the constraint \({\text {Sup}}\{I_{\mu _{A}}(\hbar )\} + {\text {Sup}}\{I_{\nu _{A}}(\hbar )\} \le 1\), are the MD and NMD intervals, respectively.

Although IFSs are very effective (and expand fuzzy sets in a sensible manner), a difficulty arose when practical explorations produced separate MDs and NMDs whose sum exceeded one. This issue should in fact come as no surprise, in view of the independent occurrence of positive and negative effects that justifies the semantic interpretation of IFSs. Yager [9] launched a strategy to escape this problem (see [10] for its early development), which first produced Pythagorean fuzzy sets (PFSs) and ultimately led to q-rung orthopair fuzzy sets [11]. These generalizations allowed ampler autonomy for orthopair selection. In a q-rung orthopair fuzzy set, the MD and NMD of each orthopair are only required to satisfy that the sum of their q-th powers is less than or equal to one. The case \(q=2\) reduces to PFSs, whose success was immediate.

Definition 4

[9, 10] A PFS A over a universal set L is defined as

$$\begin{aligned} A = \{ (\hbar , \mu _{A}(\hbar ) ,\nu _{A}(\hbar ) ) \; \mid \; \hbar \in L \} , \end{aligned}$$

where \(\mu _{A}: L\rightarrow {\textbf {I}}\) and \(\nu _{A}: L\rightarrow {\textbf {I}}\), with the constraint \(\mu _{A}^{2}(\hbar ) + \nu _{A}^{2}(\hbar ) \le 1\), are the MDs and NMDs, respectively. The expression \(\pi _{A}(\hbar )=\sqrt{1-(\mu _{A}^{2}(\hbar )+\nu _{A}^{2}(\hbar ))}\) gives the hesitancy degree of an element \(\hbar \in L\).
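The same kind of check for Definition 4 shows why PFSs enlarge the space of admissible orthopairs; the sketch below (ours, with illustrative names) accepts the pair (0.7, 0.5), which the intuitionistic constraint rejects.

```python
import math

def is_pythagorean(mu: float, nu: float) -> bool:
    """An orthopair (mu, nu) is an admissible PFS evaluation when mu^2 + nu^2 <= 1."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu**2 + nu**2 <= 1.0

def pfs_hesitancy(mu: float, nu: float) -> float:
    """Hesitancy degree pi = sqrt(1 - (mu^2 + nu^2))."""
    assert is_pythagorean(mu, nu)
    return math.sqrt(1.0 - (mu**2 + nu**2))

# (0.7, 0.5) violates mu + nu <= 1 but satisfies mu^2 + nu^2 = 0.74 <= 1.
print(is_pythagorean(0.7, 0.5), round(pfs_hesitancy(0.7, 0.5), 4))  # True 0.5099
```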

Definition 5

[30] A C-IFS A over a universal set L is defined as

$$\begin{aligned} A = \{ (\hbar , \mu _{A}(\hbar ) ,\nu _{A} (\hbar ) \, ; r ) \; \mid \; \hbar \in L \} , \end{aligned}$$

where \(\mu _{A}: L\rightarrow {\textbf {I}}\) and \(\nu _{A}: L\rightarrow {\textbf {I}}\), with the constraint \(\mu _{A}(\hbar ) + \nu _{A}(\hbar ) \le 1\), are the MDs and NMDs, respectively. Here, r is the radius of the circle having center \(( \mu _{A}(\hbar ),\nu _{A}(\hbar ))\). The expression \(\pi _{A}(\hbar )=1-(\mu _{A}(\hbar )+\nu _{A}(\hbar ))\) gives the hesitancy degree of an element \(\hbar \in L\).

C-IFSs offer a nested family of models, which means that if \(r<r'\) are non-negative radii, then a C-IFS of radius r can be regarded as a C-IFS of radius \(r'\).

Circular versus disc Pythagorean fuzzy sets

In this section, we extend the idea of C-IFSs to C-PFSs. The extension provides more space to choose orthopairs with similar characteristics (they are allowed as long as they are not “too far away” from a given orthopair, as prescribed by a fixed radius). Importantly, to provide yet more flexibility, C-PFSs will then be extended to D-PFSs or disc PFSs by allowing the radius to vary with the characteristics of each alternative. We shall explain the semantic interpretation of the new model and then we shall design suitable set-theoretic and arithmetic operations in this framework. Although this part is essentially technical, it is necessary for a founding work on the topic.

Definition 6

A C-PFS A of radius r over a universal set L is defined as

$$\begin{aligned} A = \{ ( \hbar , \mu _{A}(\hbar ),\nu _{A}(\hbar )\, ; r ) \; \mid \; \hbar \in L \} , \end{aligned}$$

where \(\mu _{A}: L\rightarrow {\textbf {I}}\) and \(\nu _{A}: L\rightarrow {\textbf {I}}\), with the constraint \(\mu _{A}^{2}(\hbar ) + \nu _{A}^{2}(\hbar ) \le 1\), are the MDs and NMDs associated with \(\hbar \), respectively, and r is the radius of a circle having center \(( \mu _{A}(\hbar ),\nu _{A}(\hbar ))\). The expression \(\pi _{A}(\hbar )=\sqrt{1-(\mu _{A}^{2}(\hbar )+\nu _{A}^{2}(\hbar ))}\) gives the hesitancy degree of an element \(\hbar \in L\).

The evaluation for \(\hbar \), \(a=(\hbar , \mu _{a}(\hbar ),\nu _{a}(\hbar );r )\), symbolizes a circle with radius r at center \((\mu _{a}(\hbar ),\nu _{a}(\hbar ))\) called circular Pythagorean fuzzy value (C-PFV). In an abstract setting, a C-PFV is written as \((\mu _{a},\nu _{a};r)\) rather than the \((\hbar , \mu _{a}(\hbar ),\nu _{a}(\hbar ); r)\) expression associated with an alternative \(\hbar \).

As in the case of C-IFSs, C-PFSs offer a nested family of models, in the following sense: when \(r<r'\) are non-negative radii, a C-PFS of radius r is also a C-PFS of radius \(r'\).

Let \( T = \{ (s,t) \;\mid \; s,t \in \textbf{I} \; \& \; s^{2}+ t^{2} \le 1 \}\). In order to fully grasp the idea of C-PFSs, a C-PFS A can be written as

$$\begin{aligned} A = \left\{ C_{r}(\mu _{A}(\hbar ),\nu _{A}(\hbar )) \; \mid \; \hbar \in L \right\} , \end{aligned}$$

where

$$ \begin{aligned}&C_{r}(\mu _{A}(\hbar ),\nu _{A}(\hbar )) = \bigg \{ (s,t) \;\mid \; s,t \in \textbf{I} \;\nonumber \\&\qquad \& \; \sqrt{ (\mu _{A}(\hbar )-s)^{2} + (\nu _{A}(\hbar )-t)^{2} } \le r \bigg \} \cap T\nonumber \\&\quad = \bigg \{ (s,t) \;\mid \; s,t \in \textbf{I} ,\; \sqrt{ (\mu _{A}(\hbar )-s)^{2} + (\nu _{A}(\hbar )-t)^{2} } \le r \nonumber \\&\qquad \& \; s^{2}+t^{2} \le 1 \bigg \}. \end{aligned}$$
(1)
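Equation (1) describes each admitted evaluation as the intersection of a disc of radius r centered at the orthopair with the admissible region T. A minimal membership test, written as an illustrative Python sketch (the names in_T and in_C_r are ours), reads as follows.

```python
import math

def in_T(s: float, t: float) -> bool:
    """Admissible region T = {(s, t) : s, t in I and s^2 + t^2 <= 1}."""
    return 0.0 <= s <= 1.0 and 0.0 <= t <= 1.0 and s**2 + t**2 <= 1.0

def in_C_r(s: float, t: float, mu: float, nu: float, r: float) -> bool:
    """(s, t) belongs to C_r(mu, nu) iff it lies in T within distance r of the center, cf. Eq. (1)."""
    return in_T(s, t) and math.hypot(mu - s, nu - t) <= r

print(in_C_r(0.75, 0.45, 0.70, 0.50, 0.2))  # True: inside T and within the radius
print(in_C_r(0.95, 0.45, 0.85, 0.40, 0.2))  # False: close to the center but outside T
```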

Definition 7

A D-PFS (for disc PFS) A over a universal set L is defined as

$$\begin{aligned} A = \{ ( \hbar , \mu _{A}(\hbar ),\nu _{A}(\hbar )\, ; r(\hbar ) ) \; \mid \; \hbar \in L \} , \end{aligned}$$

where \(\mu _{A}: L\rightarrow {\textbf {I}}\) and \(\nu _{A}: L\rightarrow {\textbf {I}}\) satisfy the constraint \(\mu _{A}^{2}(\hbar ) + \nu _{A}^{2}(\hbar ) \le 1\) for all \(\hbar \in L\). They are the MDs and NMDs associated with \(\hbar \), respectively, whereas \(r(\hbar )\) is the radius of a circle having center \(( \mu _{A}(\hbar ),\nu _{A}(\hbar ))\). The expression \(\pi _{A}(\hbar )=\sqrt{1-(\mu _{A}^{2}(\hbar )+\nu _{A}^{2}(\hbar ))}\) gives the hesitancy degree of \(\hbar \in L\).

C-PFSs are D-PFSs with \(r(\hbar ) = r(\hbar ')\) for all \(\hbar , \hbar '\in L\). In a D-PFS, the evaluation for \(\hbar \) is \(a=(\hbar , \mu _{a}(\hbar ),\nu _{a}(\hbar ); r(\hbar ) )\). Now it is a circle with a distinctive radius \(r(\hbar )\), which may be different for each alternative, and whose center is \((\mu _{a}(\hbar ),\nu _{a}(\hbar ))\). So this evaluation is a circular Pythagorean fuzzy value too. We emphasize that, in contrast with C-PFSs, these C-PFVs may have different radii as the alternative varies. At any rate, operations that depend upon C-PFVs will be valid for both C-PFSs and D-PFSs. We shall use this feature in subsequent sections, where we only need to take care of the D-PFS case.
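As a data-structure sketch (ours, assuming nothing beyond the definition), a D-PFS over a finite universe can be stored as a map from alternatives to validated triples; the radius bound \(\sqrt{2}\) anticipated below is also enforced.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DPFV:
    """A disc Pythagorean fuzzy value (mu, nu; r) attached to one alternative."""
    mu: float
    nu: float
    r: float

    def __post_init__(self):
        if not (0.0 <= self.mu <= 1.0 and 0.0 <= self.nu <= 1.0):
            raise ValueError("mu and nu must lie in the unit interval")
        if self.mu**2 + self.nu**2 > 1.0:
            raise ValueError("Pythagorean constraint mu^2 + nu^2 <= 1 violated")
        if not (0.0 <= self.r <= 2**0.5):
            raise ValueError("radius must lie in [0, sqrt(2)]")

# A D-PFS over L = {h1, h2, h3}: each alternative carries its own radius,
# i.e., its own margin of error.
D = {"h1": DPFV(0.7, 0.5, 0.10), "h2": DPFV(0.4, 0.3, 0.25), "h3": DPFV(0.9, 0.2, 0.05)}
# A C-PFS is recovered as the special case in which all radii coincide.
```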

Proposition 1 below will prove another relationship between C-PFSs and D-PFSs.

Semantic interpretation

It is well known that at each alternative, an IV-IFS allows for some looseness both in the definition of MDs and NMDs. Instead of an orthopair, a pair of intervals characterizes the alternative. These intervals may have very different dimensions. And their lengths may vary with the alternatives in order to account for measurement errors, indeterminacy, et cetera.

In a C-PFS, for the description of \(\hbar \in L \), there is a fixed slackness r around the orthopair formed by \(( \mu _{a}(\hbar ),\nu _{a}(\hbar ) )\). All permitted orthopairs whose separation from \(( \mu _{a}(\hbar ),\nu _{a}(\hbar ) )\) is lower than this radius are admissible evaluations of \(\hbar \). But in a D-PFS, for the description of \(\hbar \in L \) there is a distinctive slackness \(r(\hbar )\), so that permitted orthopairs whose separation from \(( \mu _{a}(\hbar ),\nu _{a}(\hbar ) )\) is under \(r(\hbar )\) are admissible evaluations of \(\hbar \). Both in C-PFSs and D-PFSs, it is apparent that the radii represent a margin of error in terms of the description of the orthopairs. This margin of error is common to all the alternatives in a C-PFS. Wherever we feel that some alternatives must be associated with smaller margins of error (e.g., because they have been evaluated with more precise instruments, or better statistical tools, or more reliable samples), we should resort to a D-PFS.

Notation. We shall abuse notation to avoid cumbersome expressions. Wherever the notation implicitly gives the element \(\hbar \), we shall avoid explicitly mentioning it. For example, we henceforth replace \(( \hbar , \mu _{A}(\hbar ),\nu _{A}(\hbar )\,; r(\hbar ) )\) with the shorter \(( \mu _{A}(\hbar ),\nu _{A}(\hbar )\,; r (\hbar ))\). Consequently, a D-PFV associated with \(\hbar \) will be simply written as \((\mu _{a}(\hbar ),\nu _{a}(\hbar ); r(\hbar ) )\), which is indicative of the alternative it refers to.

Theoretically, Eq. (1) generates five different types of circles, which can be seen in Fig. 4. In addition, Eq. (1) and Fig. 4 assist us in defining the domain of the radius r, which can be any real value between 0 and \(\sqrt{2}\). A C-PFS becomes a standard PFS when r is equal to zero.
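As a brief justification of this bound (ours, elementary): the farthest apart admissible orthopairs in T are (1, 0) and (0, 1), so

$$\begin{aligned} \max _{(s,t),\,(s',t') \in T} \sqrt{(s-s')^{2}+(t-t')^{2}} = \sqrt{(1-0)^{2}+(0-1)^{2}} = \sqrt{2}, \end{aligned}$$

and no radius beyond \(\sqrt{2}\) can enlarge the region \(C_{r}\) any further.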

Fig. 4 Geometrical visualization of the types of evaluations allowed by a C-PFS (for each \(\hbar \in L\)). The case of a D-PFS is similar, but with variable radii for the circles

Set-theoretic and arithmetic operations on D-PFSs

The following definitions pertain to the analog of set-theoretic complement, union, and intersection. Their spirit is inspired by similar definitions in the fundamental IFS framework.

Definition 8

Let three D-PFSs A, \(A_1\), and \(A_2\) be defined as follows: \(A = \{ \left( \mu _{A}(\hbar ),\nu _{A}(\hbar ); r(\hbar ) \right) \;\mid \; \hbar \in L \} \), \(A_{1} = \{\left( \mu _{A_1}(\hbar ),\nu _{A_1}(\hbar ); r_{1}(\hbar ) \right) \;\mid \; \hbar \in L \}\) and \(A_{2} = \{\left( \mu _{A_2}(\hbar ),\nu _{A_2}(\hbar ); r_{2}(\hbar ) \right) \;\mid \; \hbar \in L \}\). Then, the following operations are defined:

  • The complement operation is defined as: \(A^{c}= \{(\nu _{A}(\hbar ),\mu _{A}(\hbar ); r(\hbar )) \;\mid \; \hbar \in L \} \).

  • The set containment is defined as: \( A_{1} \subseteq A_{2} \Longleftrightarrow r_{1}(\hbar ) \le r_{2}(\hbar ),\; \mu _{A_{1}}(\hbar ) \le \mu _{A_{2}}(\hbar ) \; \& \; \nu _{A_{1}}(\hbar ) \ge \nu _{A_{2}}(\hbar )\) for all \(\hbar \in L\).

  • The union operations are defined as

    $$\begin{aligned} A_{1} \cup _{\max } A_{2}&=\bigg \{ \bigg ( \max \{\mu _{A_{1}}(\hbar ),\mu _{A_{2}}(\hbar ) \}, \min \{\nu _{A_{1}}(\hbar ), \\&\qquad \nu _{A_{2}}(\hbar )\}; \max \{r_{1}(\hbar ),r_{2}(\hbar )\} \bigg ) \; \mid \; \hbar \in L \bigg \} \\ A_{1} \cup _{\min } A_{2}&=\bigg \{ \bigg ( \max \{\mu _{A_{1}}(\hbar ),\mu _{A_{2}}(\hbar ) \},\min \{\nu _{A_{1}}(\hbar ),\\ {}&\qquad \nu _{A_{2}}(\hbar )\}; \min \{r_{1}(\hbar ),r_{2}(\hbar )\} \bigg ) \;\mid \; \hbar \in L \bigg \}. \end{aligned}$$
  • The intersection operations are defined as

    $$\begin{aligned} A_{1} \cap _{\max } A_{2}&= \bigg \{ \bigg ( \min \{\mu _{A_{1}}(\hbar ),\mu _{A_{2}}(\hbar ) \}, \max \{\nu _{A_{1}}(\hbar ),\\&\qquad \nu _{A_{2}}(\hbar )\}; \max \{r_{1}(\hbar ),r_{2}(\hbar )\} \bigg ) \mid \hbar \in L \bigg \} \\ A_{1} \cap _{\min } A_{2}&= \bigg \{ \bigg ( \min \{\mu _{A_{1}}(\hbar ),\mu _{A_{2}}(\hbar ) \}, \max \{\nu _{A_{1}}\\&\qquad (\hbar ),\nu _{A_{2}}(\hbar )\}; \min \{r_{1}(\hbar ),r_{2}(\hbar )\} \bigg ) \mid \hbar \in L \bigg \}. \end{aligned}$$
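A minimal computational sketch of these set-theoretic operations (ours; D-PFSs are stored as dictionaries mapping alternatives to tuples (mu, nu, r), and only the \(\cup _{\max }\) and \(\cap _{\min }\) variants are shown) may clarify how the radii are combined.

```python
def complement(A):
    """A^c swaps the MD and NMD and keeps the radius."""
    return {h: (nu, mu, r) for h, (mu, nu, r) in A.items()}

def union_max(A1, A2):
    """Union taking the larger radius, as in Definition 8."""
    return {h: (max(A1[h][0], A2[h][0]), min(A1[h][1], A2[h][1]),
                max(A1[h][2], A2[h][2])) for h in A1}

def intersection_min(A1, A2):
    """Intersection taking the smaller radius, as in Definition 8."""
    return {h: (min(A1[h][0], A2[h][0]), max(A1[h][1], A2[h][1]),
                min(A1[h][2], A2[h][2])) for h in A1}

def contains(A1, A2):
    """A1 is a subset of A2 in the sense of Definition 8."""
    return all(A1[h][0] <= A2[h][0] and A1[h][1] >= A2[h][1] and A1[h][2] <= A2[h][2]
               for h in A1)

A1 = {"h1": (0.6, 0.5, 0.1), "h2": (0.3, 0.7, 0.2)}
A2 = {"h1": (0.7, 0.4, 0.2), "h2": (0.5, 0.6, 0.3)}
print(union_max(A1, A2))  # {'h1': (0.7, 0.4, 0.2), 'h2': (0.5, 0.6, 0.3)}
print(contains(A1, A2))   # True in this example
```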

With respect to arithmetic, the addition, multiplication, and power operations for D-PFSs can be defined as follows:

Definition 9

In the situation of Definition 8, the following operations are defined:

  • The addition operations are defined as

    $$\begin{aligned}&A_{1} \oplus _{\max } A_{2} = \bigg \{ \bigg (\sqrt{ \mu _{A_{1}}^{2}(\hbar ) + \mu _{A_{2}}^{2}(\hbar ) - \mu _{A_{1}}^{2}(\hbar ) . \mu _{A_{2}}^{2}(\hbar ) }\; ,\\ {}&\quad \nu _{A_{1}}(\hbar ) . \nu _{A_{2}}(\hbar ) ; \max \{r_{1}(\hbar ),r_{2}(\hbar )\} \bigg ) \mid \hbar \in L \bigg \},\\&A_{1} \oplus _{\min } A_{2} = \bigg \{ \bigg (\sqrt{\mu _{A_{1}}^{2}(\hbar ) + \mu _{A_{2}}^{2}(\hbar ) - \mu _{A_{1}}^{2}(\hbar ) . \mu _{A_{2}}^{2}(\hbar ) }\;,\\ {}&\quad \nu _{A_{1}}(\hbar ) . \nu _{A_{2}}(\hbar ) ; \min \{r_{1}(\hbar ),r_{2}(\hbar )\} \bigg ) \mid \hbar \in L \bigg \}. \end{aligned}$$
  • The multiplication operations are defined as

    $$\begin{aligned} A_{1} \otimes _{\max } A_{2}&= \bigg \{ \bigg ( \mu _{A_{1}}(\hbar ) . \mu _{A_{2}}(\hbar ) \;,\\&\qquad \sqrt{ \nu _{A_{1}}^{2}(\hbar ) + \nu _{A_{2}}^{2}(\hbar ) - \nu _{A_{1}}^{2}(\hbar ) \cdot \nu _{A_{2}}^{2}(\hbar )}\; ;\\&\qquad \max \{r_{1}(\hbar ),r_{2}(\hbar )\} \bigg ) \mid \hbar \in L \bigg \}\\ A_{1} \otimes _{\min } A_{2}&= \bigg \{ \bigg ( \mu _{A_{1}}(\hbar ) . \mu _{A_{2}}(\hbar ) \;,\\&\quad \sqrt{ \nu _{A_{1}}^{2}(\hbar ) + \nu _{A_{2}}^{2}(\hbar ) - \nu _{A_{1}}^{2}(\hbar ) \cdot \nu _{A_{2}}^{2}(\hbar )} \;;\\&\qquad \min \{r_{1}(\hbar ),r_{2}(\hbar )\} \bigg ) \mid \hbar \in L \bigg \}. \end{aligned}$$
  • The scalar multiplication operations are defined as: for \(\lambda > 0\)

    $$\begin{aligned} \lambda A&= \left\{ \left( \sqrt{ 1 - (1 - \mu _{A}^{2}(\hbar ))^{\lambda } }\; ,\; \nu _{A}^{\lambda }(\hbar ) \;; r(\hbar ) \right) \;\mid \; \hbar \in L \right\} \\ A^{\lambda }&= \left\{ \left( \mu _{A}^{\lambda }(\hbar ) \;,\;\sqrt{ 1 - (1- \nu _{A}^{2}(\hbar ))^{\lambda } } \;; r(\hbar ) \right) \;\mid \; \hbar \in L \right\} . \end{aligned}$$
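The arithmetic operations of Definition 9 admit an equally direct implementation; the sketch below (ours, again on tuples (mu, nu, r)) covers the \(\oplus _{\max }\) and \(\otimes _{\max }\) variants together with the scalar multiplication and the power.

```python
import math

def add_max(b1, b2):
    """b1 (+)_max b2 from Definition 9."""
    (m1, n1, r1), (m2, n2, r2) = b1, b2
    return (math.sqrt(m1**2 + m2**2 - m1**2 * m2**2), n1 * n2, max(r1, r2))

def mul_max(b1, b2):
    """b1 (x)_max b2 from Definition 9."""
    (m1, n1, r1), (m2, n2, r2) = b1, b2
    return (m1 * m2, math.sqrt(n1**2 + n2**2 - n1**2 * n2**2), max(r1, r2))

def scalar(lam, b):
    """lambda * b for lambda > 0."""
    m, n, r = b
    return (math.sqrt(1.0 - (1.0 - m**2)**lam), n**lam, r)

def power(b, lam):
    """b ** lambda for lambda > 0."""
    m, n, r = b
    return (m**lam, math.sqrt(1.0 - (1.0 - n**2)**lam), r)

b1, b2 = (0.6, 0.5, 0.1), (0.7, 0.4, 0.2)
print(add_max(b1, b2))   # MD grows, NMD shrinks, radius becomes max(r1, r2) = 0.2
print(scalar(2.0, b1))   # coincides (up to rounding) with add_max(b1, b1), cf. Theorem 1, item 7
```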

The sets resulting from the complement, union, intersection, addition, multiplication, and scalar multiplication operations of D-PFSs are D-PFSs. The proofs are similar to the proofs given in [9, 10, 13] which also inspired the technical definitions and at the same time, validate them.

The difference between C-PFSs and D-PFSs fades in a finite environment:

Proposition 1

Any D-PFS A over a finite universal set L is contained in a C-PFS over L.

Proof

We just need to use the radius \(r = \max \{r (\hbar ) \mid \hbar \in L\}\) to justify this relationship between the two models. \(\square \)

We can easily prove the following properties related to the operations mentioned above:

Theorem 1

Let \(\beta =(\mu ,\nu ;r)\), \(\beta _1 =(\mu _{1},\nu _{1};r_1)\), and \(\beta _2 =(\mu _{2},\nu _{2};r_2)\) be three D-PFVs, and \(\lambda , \lambda _1, \lambda _2 > 0\). Then,

  1. 1.

    \(\beta _1 \oplus _{\max } \beta _2 = \beta _2 \oplus _{\max } \beta _1\);

  2. 2.

    \(\beta _1 \oplus _{\min } \beta _2 = \beta _2 \oplus _{\min } \beta _1\);

  3. 3.

    \(\beta _1 \otimes _{\max } \beta _2 = \beta _2 \otimes _{\max } \beta _1\);

  4. 4.

    \(\beta _1 \otimes _{\min } \beta _2 = \beta _2 \otimes _{\min } \beta _1\);

  5. 5.

    \(\lambda ( \beta _1 \oplus _{\max } \beta _2 ) = \lambda \beta _1 \oplus _{\max } \lambda \beta _2\);

  6. 6.

    \(\lambda ( \beta _1 \oplus _{\min } \beta _2 ) = \lambda \beta _1 \oplus _{\min } \lambda \beta _2\);

  7. 7.

    \(\lambda _1 \beta \oplus _{\max } \lambda _2 \beta = (\lambda _1 +\lambda _2 ) \beta \);

  8. 8.

    \(\lambda _1 \beta \oplus _{\min } \lambda _2 \beta = (\lambda _1 +\lambda _2 ) \beta \);

  9. 9.

    \(( \beta _1 \otimes _{\max } \beta _2 )^{\lambda } = \beta _1^{\lambda } \otimes _{\max } \beta _2^{\lambda }\);

  10. 10.

    \(( \beta _1 \otimes _{\min } \beta _2 )^{\lambda } = \beta _1^{\lambda } \otimes _{\min } \beta _2^{\lambda }\);

  11. 11.

    \(\beta ^{\lambda _1} \otimes _{\max } \beta ^{\lambda _2} = \beta ^{\lambda _1 + \lambda _2}\);

  12. 12.

    \(\beta ^{\lambda _1} \otimes _{\min } \beta ^{\lambda _2} = \beta ^{\lambda _1 + \lambda _2}\);

Theorem 2

Let \(\beta =(\mu ,\nu ;r)\), \(\beta _1 =(\mu _{1},\nu _{1};r_1)\), and \(\beta _2 =(\mu _{2},\nu _{2};r_2)\) be three D-PFVs, and \(\lambda , \lambda _1, \lambda _2 > 0\). Then,

  1. 1.

    \((\beta ^c)^\lambda = (\lambda \beta )^c\);

  2. 2.

    \(\lambda (\beta ^c) = (\beta ^\lambda )^c\);

  3. 3.

    \(\beta _1 \cup _{\max } \beta _2 = \beta _2 \cup _{\max } \beta _1\);

  4. 4.

    \(\beta _1 \cup _{\min } \beta _2 = \beta _2 \cup _{\min } \beta _1\);

  5. 5.

    \(\beta _1 \cap _{\max } \beta _2 = \beta _2 \cap _{\max } \beta _1\);

  6. 6.

    \(\beta _1 \cap _{\min } \beta _2 = \beta _2 \cap _{\min } \beta _1\);

  7. 7.

    \(\lambda ( \beta _1 \cup _{\max } \beta _2 ) = \lambda \beta _1 \cup _{\max } \lambda \beta _2\);

  8. 8.

    \(\lambda ( \beta _1 \cup _{\min } \beta _2 ) = \lambda \beta _1 \cup _{\min } \lambda \beta _2\);

  9. 9.

    \(( \beta _1 \cup _{\max } \beta _2 )^{\lambda } = \beta _1^{\lambda } \cup _{\max } \beta _2^{\lambda }\);

  10. 10.

    \(( \beta _1 \cup _{\min } \beta _2 )^{\lambda } = \beta _1^{\lambda } \cup _{\min } \beta _2^{\lambda }\);

Proof

Here, we will demonstrate parts 1, 3, and 7. The remaining components can be demonstrated similarly.

$$\begin{aligned} 1.\quad (\beta ^c)^\lambda&= (\nu ,\mu ;r)^\lambda = \left( \nu ^\lambda ,\sqrt{1- (1- \mu ^2)^\lambda }\,;r \right) \\ (\lambda \beta )^c&= \left( \sqrt{1- (1- \mu ^2)^\lambda },\nu ^\lambda \, ;r\right) ^c \\&= (\nu ^\lambda ,\sqrt{1- (1- \mu ^2)^\lambda }\,;r) =(\beta ^c)^\lambda . \end{aligned}$$
$$\begin{aligned} 3. \quad \beta _1 \cup _{\max } \beta _2&= \left( \max \{\mu _{{1}},\mu _{{2}} \}, \min \{\nu _{{1}},\nu _{{2}}\}; \max \{r_{1},r_{2}\} \right) \\&= \left( \max \{\mu _{{2}},\mu _{{1}} \}, \min \{\nu _{{2}},\nu _{{1}}\}; \max \{r_{2},r_{1}\} \right) \\&= \beta _2 \cup _{\max } \beta _1\\ \end{aligned}$$
$$\begin{aligned} 7. \quad&\lambda ( \beta _1 \cup _{\max } \beta _2 ) \\&\quad = \lambda \left( \max \{\mu _{{1}},\mu _{{2}} \}, \min \{\nu _{{1}},\nu _{{2}}\}; \max \{r_{1},r_{2}\} \right) \\&\quad = \left( \sqrt{1- (1- \max \{\mu _1^2,\mu _2^2\})^\lambda },\min \{\nu _1^\lambda ,\nu _2^\lambda \} \, ; \quad max\{r_{1},r_{2}\}\right) \\&\lambda \beta _1 \cup _{\max } \lambda \beta _2 \\&\quad = \left( \sqrt{1- (1- \mu _1^2)^\lambda },\nu _1^\lambda \, ;r_1\right) \cup _{\max } \left( \sqrt{1- (1- \mu _2^2)^\lambda },\nu _2^\lambda \, ; \quad r_2\right) \\&\quad = \bigg ( \max \left\{ \sqrt{1- (1- \mu _1^2)^\lambda },\sqrt{1- (1- \mu _2^2)^\lambda }\right\} ,\min \left\{ \nu _1^\lambda , \nu _2^\lambda \right\} ; \,\\ {}&\quad max\{r_{1},r_{2}\} \bigg )\\&\quad = \left( \sqrt{1- (1- \max \{\mu _1^2,\mu _2^2\})^\lambda },\min \{\nu _1^\lambda ,\nu _2^\lambda \} \, ;max\{r_{1},r_{2}\}\right) \\&\quad = \lambda ( \beta _1 \cup _{\max } \beta _2 ). \\ \end{aligned}$$

\(\square \)

Theorem 3

Let \(\beta _1 =(\mu _{1},\nu _{1};r_1)\), and \(\beta _2 =(\mu _{2},\nu _{2};r_2)\) be two D-PFVs. Then,

$$\begin{aligned}&\mathrm{1.}\quad \beta _1^c \cup _{\max } \beta _2^c = (\beta _1 \cap _{\max } \beta _2)^c; \\&\mathrm{2.}\quad \beta _1^c \cup _{\min } \beta _2^c = (\beta _1 \cap _{\min } \beta _2)^c;\\&\mathrm{3.}\quad \beta _1^c \cap _{\max } \beta _2^c = (\beta _1 \cup _{\max } \beta _2)^c;\\&\mathrm{4.}\quad \beta _1^c \cap _{\min } \beta _2^c = (\beta _1 \cup _{\min } \beta _2)^c;\\&\mathrm{5.}\quad \beta _1^c \oplus _{\max } \beta _2^c = (\beta _1 \otimes _{\max } \beta _2 )^c; \\&\mathrm{6.}\quad \beta _1^c \oplus _{\min } \beta _2^c = (\beta _1 \otimes _{\min } \beta _2 )^c;\\&\mathrm{7.}\quad \beta _1^c \otimes _{\max } \beta _2^c = (\beta _1 \oplus _{\max } \beta _2 )^c; \\&\mathrm{8.}\quad \beta _1^c \otimes _{\min } \beta _2^c = (\beta _1 \oplus _{\min } \beta _2 )^c. \end{aligned}$$

Proof

The first and fifth portions will be proved and the remaining components can be demonstrated similarly.

$$\begin{aligned} 1.\quad \beta _1^c \cup _{\max } \beta _2^c&= (\nu _1,\mu _1;r_1) \cup _{\max } (\nu _2,\mu _2;r_2) \\ {}&= ( \max \{\nu _1,\nu _2\} , \min \{\mu _1,\mu _2\} \, ; \max \{r_{1},r_{2}\} ) \\ (\beta _1 \cap _{\max } \beta _2)^c&= ( \min \{\mu _1,\mu _2\} , \max \{\nu _1,\nu _2\} \,; \max \{r_{1},r_{2}\} )^c \\ {}&= ( \max \{\nu _1,\nu _2\} , \min \{\mu _1,\mu _2\} \,; \max \{r_{1},r_{2}\}) \\&= \beta _1^c \cup _{\max } \beta _2^c .\\ 5.\quad \beta _1^c \oplus _{\max } \beta _2^c&= (\nu _1,\mu _1;r_1) \oplus _{\max } (\nu _2,\mu _2;r_2) \\ {}&= \left( \sqrt{\nu _1^2 + \nu _2^2 - \nu _1^2 \nu _2^2}, \mu _1 \mu _2 \,; \max \{r_{1},r_{2}\}\right) \\ (\beta _1 \otimes _{\max } \beta _2 )^c&= \left( \mu _1 \mu _2, \sqrt{\nu _1^2 + \nu _2^2 - \nu _1^2 \nu _2^2} \,; \max \{r_{1},r_{2}\} \right) ^c \\ {}&=\left( \sqrt{\nu _1^2 + \nu _2^2 - \nu _1^2 \nu _2^2}, \mu _1 \mu _2 \,; \max \{r_{1},r_{2}\} \right) \\&= \beta _1^c \oplus _{\max } \beta _2^c. \\ \end{aligned}$$

\(\square \)
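The duality stated in Theorem 3 is easy to confirm numerically; the following self-contained sketch (ours) checks item 5 on a concrete pair of D-PFVs.

```python
import math

def comp(b):
    """Complement of a D-PFV (mu, nu, r)."""
    m, n, r = b
    return (n, m, r)

def add_max(b1, b2):
    (m1, n1, r1), (m2, n2, r2) = b1, b2
    return (math.sqrt(m1**2 + m2**2 - m1**2 * m2**2), n1 * n2, max(r1, r2))

def mul_max(b1, b2):
    (m1, n1, r1), (m2, n2, r2) = b1, b2
    return (m1 * m2, math.sqrt(n1**2 + n2**2 - n1**2 * n2**2), max(r1, r2))

b1, b2 = (0.6, 0.5, 0.1), (0.7, 0.4, 0.3)
lhs = add_max(comp(b1), comp(b2))   # beta1^c (+)_max beta2^c
rhs = comp(mul_max(b1, b2))         # (beta1 (x)_max beta2)^c
print(all(math.isclose(x, y) for x, y in zip(lhs, rhs)))  # True
```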

Theorem 4

Let \(\beta _1 =(\mu _{1},\nu _{1};r_1)\), and \(\beta _2 =(\mu _{2},\nu _{2};r_2)\) be two D-PFVs. Then,

  1. 1.

    \((\beta _1 \cup _{\max } \beta _2) \oplus _{\max } (\beta _1 \cap _{\max } \beta _2) = \beta _1 \oplus _{\max } \beta _2\);

  2. 2.

    \((\beta _1 \cup _{\min } \beta _2) \oplus _{\min } (\beta _1 \cap _{\min } \beta _2) = \beta _1 \oplus _{\min } \beta _2\);

  3. 3.

    \((\beta _1 \cup _{\max } \beta _2) \oplus _{\min } (\beta _1 \cap _{\max } \beta _2) = \beta _1 \oplus _{\max } \beta _2\);

  4. 4.

    \((\beta _1 \cup _{\min } \beta _2) \oplus _{\max } (\beta _1 \cap _{\min } \beta _2) = \beta _1 \oplus _{\min } \beta _2\);

  5. 5.

    \((\beta _1 \cup _{\max } \beta _2) \oplus _{\min } (\beta _1 \cap _{\min } \beta _2) = \beta _1 \oplus _{\min } \beta _2\);

  6. 6.

    \((\beta _1 \cup _{\min } \beta _2) \oplus _{\max } (\beta _1 \cap _{\max } \beta _2) = \beta _1 \oplus _{\max } \beta _2\);

  7. 7.

    \((\beta _1 \cup _{\max } \beta _2) \otimes _{\max } (\beta _1 \cap _{\max } \beta _2) = \beta _1 \otimes _{\max } \beta _2\);

  8. 8.

    \((\beta _1 \cup _{\min } \beta _2) \otimes _{\min } (\beta _1 \cap _{\min } \beta _2) = \beta _1 \otimes _{\min } \beta _2\);

  9. 9.

    \((\beta _1 \cup _{\max } \beta _2) \otimes _{\min } (\beta _1 \cap _{\max } \beta _2) = \beta _1 \otimes _{\max } \beta _2\);

  10. 10.

    \((\beta _1 \cup _{\min } \beta _2) \otimes _{\max } (\beta _1 \cap _{\min } \beta _2) = \beta _1 \otimes _{\min } \beta _2\);

  11. 11.

    \((\beta _1 \cup _{\max } \beta _2) \otimes _{\min } (\beta _1 \cap _{\min } \beta _2) = \beta _1 \otimes _{\min } \beta _2\);

  12. 12.

    \((\beta _1 \cup _{\min } \beta _2) \otimes _{\max } (\beta _1 \cap _{\max } \beta _2) = \beta _1 \otimes _{\max } \beta _2\).

Proof

In the following, we shall prove the first, fifth, and seventh parts. The remaining statements can be proven in a similar manner.

$$\begin{aligned} 1.\quad&(\beta _1 \cup _{\max } \beta _2) \oplus _{\max } (\beta _1 \cap _{\max } \beta _2) \\&\quad = ( \max \{\mu _1,\mu _2\} , \min \{\nu _1,\nu _2\} \, ; \max \{r_{1},r_{2}\} ) \oplus _{\max } \\&\qquad ( \min \{\mu _1,\mu _2\} , \max \{\nu _1,\nu _2\} \, ; \max \{r_{1},r_{2}\} ) \\&\quad = \left( \sqrt{\mu _1^2 + \mu _2^2 - \mu _1^2 \mu _2^2}, \nu _1 \nu _2 \, ;\right. \\ {}&\qquad \left. \max \{r_{1},r_{2}\} \right) \\&\quad = \beta _1 \oplus _{\max } \beta _2.\\ 5.\quad&(\beta _1 \cup _{\max } \beta _2) \oplus _{\min } (\beta _1 \cap _{\min } \beta _2) \\&\quad = ( \max \{\mu _1,\mu _2\} , \min \{\nu _1,\nu _2\} \, ; \max \{r_{1},r_{2}\} ) \oplus _{\min } \\&\qquad ( \min \{\mu _1,\mu _2\} , \max \{\nu _1,\nu _2\} \, ; \min \{r_{1},r_{2}\} ) \\&\quad = \left( \sqrt{\mu _1^2 + \mu _2^2 - \mu _1^2 \mu _2^2}, \nu _1 \nu _2 \, ; \min \{r_{1},r_{2}\} \right) \\&\quad = \beta _1 \oplus _{\min } \beta _2.\\ 7.\quad&(\beta _1 \cup _{\max } \beta _2) \otimes _{\max } (\beta _1 \cap _{\max } \beta _2) \\&\quad = ( \max \{\mu _1,\mu _2\} , \min \{\nu _1,\nu _2\} \, ; \max \{r_{1},r_{2}\} ) \otimes _{\max } \\&\qquad ( \min \{\mu _1,\mu _2\} , \max \{\nu _1,\nu _2\} \, ; \max \{r_{1},r_{2}\} ) \\&\quad = \left( \mu _1 \mu _2, \sqrt{\nu _1^2 + \nu _2^2 - \nu _1^2 \nu _2^2} \, ; \max \{r_{1},r_{2}\} \right) \\&\quad = \beta _1 \otimes _{\max } \beta _2.\\ \end{aligned}$$

\(\square \)

Theorem 5

Let \(\beta _1 =(\mu _{1},\nu _{1};r_1)\), and \(\beta _2 =(\mu _{2},\nu _{2};r_2)\) be two D-PFVs. Then,

  1. 1.

    \((\beta _1 \cup _{\max } \beta _2)\cap _{\min } \beta _2 = \beta _2\);

  2. 2.

    \((\beta _1 \cup _{\min } \beta _2)\cap _{\max } \beta _2 = \beta _2\);

  3. 3.

    \((\beta _1 \cap _{\max } \beta _2)\cup _{\min } \beta _2 = \beta _2\);

  4. 4.

    \((\beta _1 \cap _{\min } \beta _2)\cup _{\max } \beta _2 = \beta _2\).

Proof

It suffices to prove the first part, the others being similar.

$$\begin{aligned} 1.\quad&(\beta _1 \cup _{\max } \beta _2)\cap _{\min } \beta _2 \\&\quad = ( \max \{\mu _1,\mu _2\} , \min \{\nu _1,\nu _2\} \, ; {\max }\{r_{1},r_{2}\} )\\&\quad \quad \cap _{\min } (\mu _{2},\nu _{2};r_2) \\&\quad = ( \min \{\max \{\mu _1 ,\mu _2 \},\mu _2 \} , \max \{\min \{\nu _1 ,\nu _2 \},\nu _2 \} \; ;\\&\quad \qquad {\min }\{ {\max }\{r_{1},r_{2}\} , r_2 \} ) \\&\quad = (\mu _{2},\nu _{2};r_2) = \beta _2. \\ \end{aligned}$$

\(\square \)

Theorem 6

Let \(\beta _1 =(\mu _{1},\nu _{1};r_1)\), \(\beta _2 =(\mu _{2},\nu _{2};r_2)\) and \(\beta _3 =(\mu _{3},\nu _{3};r_3)\) be three D-PFVs. Then,

  1. 1.

    \((\beta _1 \cup _{\max } \beta _2)\cap _{\min } \beta _3 = (\beta _1 \cap _{\min } \beta _3) \cup _{\max } (\beta _2 \cap _{\min } \beta _3)\);

  2. 2.

    \((\beta _1 \cup _{\min } \beta _2)\cap _{\max } \beta _3 = (\beta _1 \cap _{\max } \beta _3) \cup _{\min } (\beta _2 \cap _{\max } \beta _3)\);

  3. 3.

    \((\beta _1 \cap _{\max } \beta _2)\cup _{\min } \beta _3 = (\beta _1 \cup _{\min } \beta _3) \cap _{\max } (\beta _2 \cup _{\min } \beta _3)\);

  4. 4.

    \((\beta _1 \cap _{\min } \beta _2)\cup _{\max } \beta _3 = (\beta _1 \cup _{\max } \beta _3) \cap _{\min } (\beta _2 \cup _{\max } \beta _3)\);

  5. 5.

    \((\beta _1 \cup _{\max } \beta _2)\oplus _{\max } \beta _3 = (\beta _1 \oplus _{\max } \beta _3) \cup _{\max } (\beta _2 \oplus _{\max } \beta _3)\);

  6. 6.

    \((\beta _1 \cup _{\min } \beta _2)\oplus _{\max } \beta _3 = (\beta _1 \oplus _{\max } \beta _3) \cup _{\min } (\beta _2 \oplus _{\max } \beta _3)\);

  7. 7.

    \((\beta _1 \cup _{\max } \beta _2)\oplus _{\min } \beta _3 = (\beta _1 \oplus _{\min } \beta _3) \cup _{\max } (\beta _2 \oplus _{\min } \beta _3)\);

  8. 8.

    \((\beta _1 \cup _{\min } \beta _2)\oplus _{\min } \beta _3 = (\beta _1 \oplus _{\min } \beta _3) \cup _{\min } (\beta _2 \oplus _{\min } \beta _3)\);

  9. 9.

    \((\beta _1 \cap _{\max } \beta _2)\oplus _{\max } \beta _3 = (\beta _1 \oplus _{\max } \beta _3) \cap _{\max } (\beta _2 \oplus _{\max } \beta _3)\);

  10. 10.

    \((\beta _1 \cap _{\min } \beta _2)\oplus _{\max } \beta _3 = (\beta _1 \oplus _{\max } \beta _3) \cap _{\min } (\beta _2 \oplus _{\max } \beta _3)\);

  11. 11.

    \((\beta _1 \cap _{\max } \beta _2)\oplus _{\min } \beta _3 = (\beta _1 \oplus _{\min } \beta _3) \cap _{\max } (\beta _2 \oplus _{\min } \beta _3)\);

  12. 12.

    \((\beta _1 \cap _{\min } \beta _2)\oplus _{\min } \beta _3 = (\beta _1 \oplus _{\min } \beta _3) \cap _{\min } (\beta _2 \oplus _{\min } \beta _3)\);

  13. 13.

    \((\beta _1 \cup _{\max } \beta _2)\otimes _{\max } \beta _3 = (\beta _1 \otimes _{\max } \beta _3) \cup _{\max } (\beta _2 \otimes _{\max } \beta _3)\);

  14. 14.

    \((\beta _1 \cup _{\min } \beta _2)\otimes _{\max } \beta _3 = (\beta _1 \otimes _{\max } \beta _3) \cup _{\min } (\beta _2 \otimes _{\max } \beta _3)\);

  15. 15.

    \((\beta _1 \cup _{\max } \beta _2)\otimes _{\min } \beta _3 = (\beta _1 \otimes _{\min } \beta _3) \cup _{\max } (\beta _2 \otimes _{\min } \beta _3)\);

  16. 16.

    \((\beta _1 \cup _{\min } \beta _2)\otimes _{\min } \beta _3 = (\beta _1 \otimes _{\min } \beta _3) \cup _{\min } (\beta _2 \otimes _{\min } \beta _3)\);

  17. 17.

    \((\beta _1 \cap _{\max } \beta _2)\otimes _{\max } \beta _3 = (\beta _1 \otimes _{\max } \beta _3) \cap _{\max } (\beta _2 \otimes _{\max } \beta _3)\);

  18. 18.

    \((\beta _1 \cap _{\min } \beta _2)\otimes _{\max } \beta _3 = (\beta _1 \otimes _{\max } \beta _3) \cap _{\min } (\beta _2 \otimes _{\max } \beta _3)\);

  19. 19.

    \((\beta _1 \cap _{\max } \beta _2)\otimes _{\min } \beta _3 = (\beta _1 \otimes _{\min } \beta _3) \cap _{\max } (\beta _2 \otimes _{\min } \beta _3)\);

  20. 20.

    \((\beta _1 \cap _{\min } \beta _2)\otimes _{\min } \beta _3 = (\beta _1 \otimes _{\min } \beta _3) \cap _{\min } (\beta _2 \otimes _{\min } \beta _3)\);

Proof

In the following, we shall prove the 1st, 6th, 10th, and 14th parts:

$$\begin{aligned} 1.\;&(\beta _1 \cup _{\max } \beta _2)\cap _{\min } \beta _3 \\&\quad = (\max \{\mu _1,\mu _2\}, min\{\nu _1, \nu _2\}\, ; \max \{r_{1},r_{2}\})\\&\qquad \cap _{\min } (\mu _3, \nu _3\,; r_3) \\&\quad = (\min \{\max \{\mu _1,\mu _2\},\mu _3\}, \max \{\min \{\nu _1, \nu _2\}, \nu _3\}\,; \\&\qquad \min \{\max \{r_{1},r_{2}\} , r_3 \} ) \\&\quad = (\max \{\min \{\mu _1,\mu _3\}, \min \{\mu _2,\mu _3\}\}, \min \{\max \{\nu _1, \nu _3\}, \\&\qquad \max \{\nu _2, \nu _3\}\}\,; \max \{ \min \{r_{1},r_{3}\}, \min \{r_{2},r_{3}\} \} ) \\&\quad = (\min \{\mu _1,\mu _3\}, \max \{\nu _1, \nu _3\} \,; \min \{r_{1},r_{3}\} ) \cup _{\max } \\&\qquad (\min \{\mu _2,\mu _3\}, \max \{\nu _2, \nu _3\} \,; \min \{r_{2},r_{3}\}) \\&\quad = (\beta _1 \cap _{\min } \beta _3) \cup _{\max } (\beta _2 \cap _{\min } \beta _3).\\ 6.\;&(\beta _1 \cup _{\min } \beta _2) \oplus _{\max } \beta _3 \\&\quad = (\max \{\mu _1,\mu _2\}, min\{\nu _1, \nu _2\}\, ; \min \{r_{1},r_{2}\})\\&\qquad \oplus _{\max } (\mu _{3},\nu _{3};r_3) \\&\quad = \bigg ( \sqrt{ \max \bigg \{\mu _1^2, \mu _2^2\bigg \} + \mu _3^2 - \max \bigg \{\mu _1^2, \mu _2^2\bigg \} \mu _3^2 },\\&\qquad \min \{\nu _1,\nu _2\} \nu _3; \\&\qquad \max \{ \min \{r_{1},r_{2}\} , r_3 \} \bigg ) \\&\quad = \bigg ( \sqrt{ \bigg (1-\mu _3^2\bigg ) \max \bigg \{\mu _1^2, \mu _2^2\bigg \} + \mu _3^2 } , \min \{\nu _1 \nu _3,\nu _2 \nu _3\}; \\&\qquad \min \{ \max \{r_{1},r_{2}\} , \max \{r_{1},r_{3}\} \} \bigg ) \\&\bigg (\beta _1 \oplus _{\max } \beta _3\bigg ) \cup _{\min } \bigg (\beta _2 \oplus _{\max } \beta _3\bigg ) \\&\quad = \bigg ( \sqrt{\mu _1^2 + \mu _3^2 - \mu _1^2 \mu _3^2}, \nu _1 \nu _3 \, ; \max \{r_{1},r_{3}\} \bigg ) \cup _{\min } \\&\qquad \bigg ( \sqrt{\mu _2^2 + \mu _3^2 - \mu _2^2 \mu _3^2}, \nu _2 \nu _3 \, ; \max \{r_{2},r_{3}\} \bigg ) \\&\quad = \bigg ( \max \bigg \{ \sqrt{\mu _1^2 + \mu _3^2 - \mu _1^2 \mu _3^2} , \sqrt{\mu _2^2 + \mu _3^2 - \mu _2^2 \mu _3^2} \bigg \} ,\\&\qquad \min \{ \nu _1 \nu _3 , \nu _2 \nu _3 \}; \\&\qquad \quad \, \min \{ \max \{r_{1},r_{3}\} , \max \{r_{2},r_{3}\} \} \bigg ) \\&\quad = \bigg ( \max \bigg \{ \sqrt{\bigg (1-\mu _3^2\bigg ) \mu _1^2 + \mu _3^2 } , \sqrt{\bigg (1-\mu _3^2\bigg ) \mu _2^2 + \mu _3^2} \bigg \} ,\\&\qquad \min \{ \nu _1 \nu _3 , \nu _2 \nu _3 \}; \\&\qquad \quad \, \min \{ \max \{r_{1},r_{3}\} , \max \{r_{2},r_{3}\} \} \bigg ) \\&\quad = \bigg ( \sqrt{ \bigg (1-\mu _3^2\bigg ) \max \bigg \{\mu _1^2, \mu _2^2\bigg \} + \mu _3^2 } , \min \{\nu _1 \nu _3,\nu _2 \nu _3\} ; \\&\qquad \min \bigg \{ \max \{r_{1},r_{2}\} , \max \bigg \{r_{1},r_{3}\bigg \} \bigg \} \bigg ) \\&\quad = (\beta _1 \cup _{\min } \beta _2) \oplus _{\max } \beta _3\\ 10.\;&(\beta _1 \cap _{\max } \beta _2) \oplus _{\min } \beta _3 \\&\quad = (\min \bigg \{\mu _1,\mu _2\}, \max \{\nu _1, \nu _2\}\, ; \max \{r_{1},r_{2}\})\\&\qquad \oplus _{\min } (\mu _{3},\nu _{3};r_3) \\&\quad = \bigg ( \sqrt{ \min \bigg \{\mu _1^2, \mu _2^2\bigg \} + \mu _3^2 - \min \bigg \{\mu _1^2, \mu _2^2\bigg \} \mu _3^2 } ,\\&\qquad \max \{\nu _1,\nu _2\} \nu _3; \min \{ \max \{r_{1},r_{2}\} , r_3 \} \bigg ) \\&\quad = \bigg ( \sqrt{ \bigg (1-\mu _3^2\bigg ) \min \bigg \{\mu _1^2, \mu _2^2\bigg \} + \mu _3^2 } , \max \{\nu _1 \nu _3,\nu _2 \nu _3\}; \\&\qquad \max \{ \min \{r_{1},r_{2}\} , \min \{r_{1},r_{3}\} \} \bigg ) \\&(\beta _1 \oplus _{\min } \beta _3) \cap _{\max } (\beta _2 \oplus _{\min } \beta _3) \\&\quad = \bigg ( \sqrt{\mu _1^2 + \mu _3^2 - \mu _1^2 \mu _3^2}, \nu _1 \nu _3 \, ;\\&\qquad \min \{r_{1},r_{3}\} \bigg ) 
\cap _{\max } \\&\qquad \bigg ( \sqrt{\mu _2^2 + \mu _3^2 - \mu _2^2 \mu _3^2}, \nu _2 \nu _3 \, ; \min \{r_{2},r_{3}\} \bigg ) \\&\quad = \bigg ( \min \bigg \{ \sqrt{\mu _1^2 + \mu _3^2 - \mu _1^2 \mu _3^2} , \sqrt{\mu _2^2 + \mu _3^2 - \mu _2^2 \mu _3^2} \bigg \} ,\\&\qquad \max \{ \nu _1 \nu _3 , \nu _2 \nu _3 \} \\&\qquad \quad \, ; \max \{ \min \{r_{1},r_{3}\} , \min \{r_{2},r_{3}\} \} \bigg ) \\&\quad = \bigg ( \min \bigg \{ \sqrt{\bigg (1-\mu _3^2\bigg ) \mu _1^2 + \mu _3^2 } , \sqrt{\bigg (1-\mu _3^2\bigg ) \mu _2^2 + \mu _3^2} \bigg \} ,\\&\qquad \quad \max \{ \nu _1 \nu _3 , \nu _2 \nu _3 \}; \\&\qquad \quad \, \max \{ \min \{r_{1},r_{3}\} , \min \{r_{2},r_{3}\} \} \bigg ) \\&\quad = \bigg ( \sqrt{ \bigg (1-\mu _3^2\bigg ) \min \bigg \{\mu _1^2, \mu _2^2\bigg \} + \mu _3^2 } , \max \{\nu _1 \nu _3,\nu _2 \nu _3\} ; \\&\qquad \max \{ \min \{r_{1},r_{2}\} , \min \{r_{1},r_{3}\} \} \bigg ) \\&\quad = (\beta _1 \cap _{\max } \beta _2) \oplus _{\min } \beta _3\\ 14.\;&(\beta _1 \cup _{\min } \beta _2) \otimes _{\max } \beta _3 \\&\quad = (\max \{\mu _1,\mu _2\}, min\{\nu _1, \nu _2\}\, ; \min \{r_{1},r_{2}\})\\&\qquad \otimes _{\max } (\mu _{3},\nu _{3};r_3) \\&\quad = \bigg ( \max \{\mu _1,\mu _2\} \mu _3 ,\\&\qquad \sqrt{ \min \bigg \{\nu _1^2, \nu _2^2\bigg \} + \nu _3^2 - \min \bigg \{\nu _1^2, \nu _2^2\bigg \} \nu _3^2 } ; \\&\qquad \max \{ \min \{r_{1},r_{2}\} , r_3 \} \bigg ) \\&\quad = \bigg ( \max \{\mu _1 \mu _3,\mu _2 \mu _3\} , \sqrt{ \bigg (1-\nu _3^2\bigg ) \min \bigg \{\nu _1^2, \nu _2^2\bigg \} + \nu _3^2 } ; \\&\qquad \min \{ \max \{r_{1},r_{2}\} , \max \{r_{1},r_{3}\} \} \bigg ) \\&(\beta _1 \otimes _{\max } \beta _3) \cup _{\min } (\beta _2 \otimes _{\max } \beta _3) \\&\quad = \bigg ( \mu _1 \mu _3 , \sqrt{\nu _1^2 + \nu _3^2 - \nu _1^2 \nu _3^2} \, ; \max \{r_{1},r_{3}\} \bigg ) \cup _{\min } \\&\qquad \bigg ( \mu _2 \mu _3 , \sqrt{\nu _2^2 + \nu _3^2 - \nu _2^2 \nu _3^2} \, ; \max \{r_{2},r_{3}\} \bigg ) \\&\quad = \bigg ( \max \{ \mu _1 \mu _3 , \mu _2 \mu _3 \} , \min \\&\qquad \bigg \{ \sqrt{\nu _1^2 + \nu _3^2 - \nu _1^2 \nu _3^2} , \sqrt{\nu _2^2 + \nu _3^2 - \nu _2^2 \nu _3^2} \bigg \}; \\ {}&\qquad \quad \, \min \{ \max \{r_{1},r_{3}\} , \max \{r_{2},r_{3}\} \} \bigg ) \\&\quad = \bigg ( \max \{ \mu _1 \mu _3 , \mu _2 \mu _3 \} , \min \bigg \{ \sqrt{\bigg (1-\nu _3^2\bigg ) \nu _1^2 + \nu _3^2 } ,\\&\qquad \sqrt{(1-\nu _3^2) \nu _2^2 + \nu _3^2} \bigg \} ; \, \\&\qquad \quad \min \{ \max \{r_{1},r_{3}\} , \max \{r_{2},r_{3}\} \} \bigg ) \\&\quad = \bigg ( \max \{\mu _1 \mu _3,\mu _2 \mu _3\} , \sqrt{ \bigg (1-\nu _3^2\bigg ) \min \bigg \{\nu _1^2, \nu _2^2\bigg \} + \nu _3^2 } \,;\\&\qquad \min \{ \max \{r_{1},r_{2}\} , \\ {}&\qquad \max \{r_{1},r_{3}\} \} \bigg ) \\&\quad = (\beta _1 \cup _{\min } \beta _2) \otimes _{\max } \beta _3 \\ \end{aligned}$$

\(\square \)

Operating with D-PFSs: aggregation and distances

An operational theory of D-PFSs requires some numerical tools that supplement the basic set-theoretic approach to the topic. This section is devoted to supplying two such fundamental tools. First, we shall be concerned with aggregation operators. Then, we shall focus on measuring how far apart two D-PFSs are with the help of distance measures.

D-PF aggregation operators

The mappings that are used to combine data are called aggregation operators. They are used in a wide range of scientific and engineering fields. In this section, we proceed to define aggregation operations on C-PFVs, which will be called \(\textrm{CPFWA}_{\max }\), \(\textrm{CPFWA}_{\min }\), \(\textrm{CPFWG}_{\max }\), and \(\textrm{CPFWG}_{\min }\). All of them aggregate C-PFVs with varying radii. Hence, with their assistance, we will be able to define D-PF aggregation operators trivially.

Under the inspiration of weighted arithmetic and geometric averages, the next two Definitions produce natural extensions of the versions that have been adapted to IFSs [28]:

Definition 10

Fix \(\omega = (\omega _1,\omega _2, \ldots , \omega _n)\), a vector of weights (i.e., \(\sum _{i=1}^{n} \omega _i = 1\) with \(\omega _i>0\) for all i). Then, the \(\textrm{CPFWA}_{\max }\) and \(\textrm{CPFWA}_{\min }\) operators are the mappings \(\textrm{CPFWA}_{\max }:\beta ^{n}\rightarrow \beta \) and \(\textrm{CPFWA}_{\min }:\beta ^{n}\rightarrow \beta \), such that when \(\beta _i = (\mu _i, \nu _i; r_i)\) is a C-PFV, for each \(i=1, \ldots , n\):

$$\begin{aligned}&\textrm{CPFWA}_{\max } (\beta _1,\beta _2,\ldots , \beta _n ) \nonumber \\&\quad = \omega _1 \beta _1 \oplus _{\max } \omega _2 \beta _2 \oplus _{\max } \cdots \oplus _{\max } \omega _n \beta _n \end{aligned}$$
(2)
$$\begin{aligned}&\textrm{CPFWA}_{\min } (\beta _1,\beta _2,\ldots , \beta _n ) \nonumber \\&\quad = \omega _1 \beta _1 \oplus _{\min } \omega _2 \beta _2 \oplus _{\min } \cdots \oplus _{\min } \omega _n \beta _n. \end{aligned}$$
(3)

Definition 11

In the conditions of Definition 10, we define the mappings \(\textrm{CPFWG}_{\max }:\beta ^{n}\rightarrow \beta \) and \(\textrm{CPFWG}_{\min }:\beta ^{n}\rightarrow \beta \), such that when \(\beta _i = (\mu _i, \nu _i; r_i)\) is a C-PFV, for each \(i=1, \ldots , n\):

$$\begin{aligned}&\textrm{CPFWG}_{\max } (\beta _1,\beta _2,\ldots , \beta _n ) \nonumber \\&\quad = \omega _1 \beta _1 \otimes _{\max } \omega _2 \beta _2 \otimes _{\max } \cdots \otimes _{\max } \omega _n \beta _n \end{aligned}$$
(4)
$$\begin{aligned}&\textrm{CPFWG}_{\min } (\beta _1,\beta _2,\ldots , \beta _n ) \nonumber \\&\quad = \omega _1 \beta _1 \otimes _{\min } \omega _2 \beta _2 \otimes _{\min } \cdots \otimes _{\min } \omega _n \beta _n. \end{aligned}$$
(5)

We can take advantage of the expressions of the arithmetic operations on C-PFVs in order to produce operational formulas for the aggregation operators defined above. We prove these facts in Theorems 7 and 8 below:

Theorem 7

In the conditions of Definition 10, the \(\textrm{CPFWA}_{\max }\) and \(\textrm{CPFWA}_{\min }\) operators can also be written as

$$\begin{aligned}&\textrm{CPFWA}_{\max } (\beta _1,\ldots , \beta _n ) \nonumber \\&\quad = \left( \sqrt{1- \prod _{i}^{n}\bigg (1- \mu _i^2\bigg )^{\omega _i} } , \prod _{i}^{n} \nu _i^{\omega _i} ;max\{r_{1},\ldots , r_{n} \} \right) \end{aligned}$$
(6)
$$\begin{aligned}&\textrm{CPFWA}_{\min } (\beta _1,\ldots , \beta _n )\nonumber \\&\quad = \left( \sqrt{1- \prod _{i}^{n}\bigg (1- \mu _i^2\bigg )^{\omega _i} } , \prod _{i}^{n} \nu _i^{\omega _i} ;min\{r_{1},\ldots , r_{n} \} \right) . \end{aligned}$$
(7)

Proof

This result can be demonstrated using mathematical induction. For \(n=2\), we have

$$\begin{aligned}&\textrm{CPFWA}_{\max } (\beta _1,\beta _2) = \omega _1 \beta _1 \oplus _{\max } \omega _2 \beta _2&\\&=\left( \sqrt{1- (1- \mu _1^2)^{\omega _1} } , \nu _1^{\omega _1} \, ; r_{1} \right) \oplus _{\max } \left( \sqrt{1- (1- \mu _2^2)^{\omega _2} } , \nu _2^{\omega _2} \, ; r_{2} \right) \\&=\bigg (\sqrt{1{-}(1{-}\mu _1^2)^{\omega _1}{+}1{-} (1{-}\mu _2^2)^{\omega _2}{-}(1{-}(1{-}\mu _1^2)^{\omega _1}) (1{-} (1{-} \mu _2^2)^{\omega _2}) }, \\&\quad \quad \nu _1^{\omega _1} \nu _2^{\omega _2} \; ; max\{r_{1},r_{2} \} \bigg ) \\&= \left( \sqrt{1- \prod _{i}^{2}(1- \mu _i^2)^{\omega _i} } , \prod _{i}^{2} \nu _i^{\omega _i} \, ;max\{r_{1},r_{2} \} \right) \end{aligned}$$

Suppose (6) is true for \(n=k\), that is,

$$\begin{aligned}&\textrm{CPFWA}_{\max } (\beta _1,\ldots , \beta _k )\\&\quad = \left( \sqrt{1- \prod _{i}^{k}\bigg (1- \mu _i^2\bigg )^{\omega _i} } , \prod _{i}^{k} \nu _i^{\omega _i} \, ;max\{r_{1},\ldots , r_{k} \} \right) \end{aligned}$$

We need to prove true for \(n=k+1\),

$$\begin{aligned}&\left( \sqrt{1- \prod _{i}^{k}\bigg (1- \mu _i^2\bigg )^{\omega _i} } , \prod _{i}^{k} \nu _i^{\omega _i} \, ;max\{r_{1},\ldots , r_{k} \} \right) \\&\quad \oplus _{\max } \left( \sqrt{1- \bigg (1- \mu _{k+1}^2\bigg )^{\omega _{k+1}} } , \nu _{k+1}^{\omega _{k+1}} \, ; r_{k+1} \right) \\&\quad = \left( \sqrt{1- \prod _{i}^{k+1}\bigg (1- \mu _i^2\bigg )^{\omega _i} } , \prod _{i}^{k+1} \nu _i^{\omega _i} \, ;max\{r_{1},r_{2},\ldots , r_{k+1} \} \right) \end{aligned}$$

This implies (6) is true. Similarly, we can prove (7). \(\square \)

Theorem 8

In the conditions of Definition 11, the \(\textrm{CPFWG}_{\max }\) and \(\textrm{CPFWG}_{\min }\) operators can also be written as

$$\begin{aligned}&\textrm{CPFWG}_{\max } (\beta _1,\ldots , \beta _n ) \nonumber \\&\quad = \left( \prod _{i}^{n} \mu _i^{\omega _i} , \sqrt{1- \prod _{i}^{n}\bigg (1- \nu _i^2\bigg )^{\omega _i} } ;max\{r_{1},\ldots , r_{n} \} \right) \end{aligned}$$
(8)
$$\begin{aligned}&\textrm{CPFWG}_{\min } (\beta _1,\ldots , \beta _n )\nonumber \\&\quad = \left( \prod _{i}^{n} \mu _i^{\omega _i} , \sqrt{1- \prod _{i}^{n}\bigg (1- \nu _i^2\bigg )^{\omega _i} } ;min\{r_{1},\ldots , r_{n} \} \right) . \end{aligned}$$
(9)

Proof

The demonstration resembles the demonstration of Theorem 7. \(\square \)
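Theorems 7 and 8 reduce the aggregation of C-PFVs (and hence of D-PFVs) to closed-form expressions; the sketch below (ours, with the weights assumed to sum to 1 and the values given as tuples (mu, nu, r)) implements the \(\textrm{CPFWA}_{\max }\) and \(\textrm{CPFWG}_{\min }\) variants.

```python
import math

def cpfwa_max(values, weights):
    """Closed form of Theorem 7, Eq. (6)."""
    mu = math.sqrt(1.0 - math.prod((1.0 - m**2)**w for (m, _, _), w in zip(values, weights)))
    nu = math.prod(n**w for (_, n, _), w in zip(values, weights))
    return (mu, nu, max(r for _, _, r in values))

def cpfwg_min(values, weights):
    """Closed form of Theorem 8, Eq. (9)."""
    mu = math.prod(m**w for (m, _, _), w in zip(values, weights))
    nu = math.sqrt(1.0 - math.prod((1.0 - n**2)**w for (_, n, _), w in zip(values, weights)))
    return (mu, nu, min(r for _, _, r in values))

vals = [(0.6, 0.5, 0.10), (0.7, 0.4, 0.25), (0.4, 0.3, 0.15)]
w = [0.5, 0.3, 0.2]
print(cpfwa_max(vals, w))
print(cpfwg_min(vals, w))
```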

The next three Theorems are devoted to prove three standard requirements of aggregation operators in this context, namely, monotonicity, boundedness, and idempotency. Hence these technical results guarantee an adequate performance of the new operators. In fact, both monotonicity and idempotency are straightforward properties whose proofs are therefore omitted.

Theorem 9

(Monotonicity) Let \(\{\beta _i = (\mu _{i}, \nu _{i}; r_{i})\}_{i=1, \ldots , n}\,\) and \(\{q_i = (\mu _{q_i}, \nu _{q_i}; r_{q_i})\}_{i=1, \ldots , n}\, \) be two lists of n C-PFVs. If \(\mu _{i}\le \mu _{q_i}\), \(\nu _{i}\ge \nu _{q_i}\), and \(r_{i}\le r_{q_i}\) for each i, then

$$\begin{aligned}&\mathrm{1.}\quad \textrm{CPFWA}_{\max } (\beta _1,\ldots , \beta _n ) \le \textrm{CPFWA}_{\max } (q_1,\ldots , q_n ) \\&\mathrm{2.}\quad \textrm{CPFWA}_{\min } (\beta _1,\ldots , \beta _n ) \le \textrm{CPFWA}_{\min } (q_1,\ldots , q_n ) \\&\mathrm{3.}\quad \textrm{CPFWG}_{\max } (\beta _1,\ldots , \beta _n ) \le \textrm{CPFWG}_{\max } (q_1,\ldots , q_n ) \\&\mathrm{4.}\quad \textrm{CPFWG}_{\min } (\beta _1,\ldots , \beta _n ) \le \textrm{CPFWG}_{\min } (q_1,\ldots , q_n ). \end{aligned}$$

Theorem 10

(Boundedness) Let \(\{\beta _i = (\mu _{i}, \nu _{i}; r_{i})\}_{i=1, \ldots , n}\,\) be a list of n C-PFVs. If \(\underline{p}\) and \(\bar{p}\) are the two C-PFVs such that \(\underline{p} = (\underline{\mu },\bar{\nu }; \underline{r} )= (\min (\mu _{i}), \max (\nu _{i}) ; \min (r_{i}) )\) and \(\bar{p} = (\bar{\mu },\underline{\nu };\bar{r})=(\max (\mu _{i}), \min (\nu _{i}) ; \max (r_{i}) )\), then

$$\begin{aligned}&\mathrm{1.}\quad (\underline{\mu },\bar{\nu } ; \underline{r} ) \le \textrm{CPFWA}_{\max } (\beta _1,\ldots , \beta _n ) \le (\bar{\mu },\underline{\nu };\bar{r});\\&\mathrm{2.}\quad (\underline{\mu },\bar{\nu } ; \underline{r} ) \le \textrm{CPFWA}_{\min } (\beta _1,\ldots , \beta _n ) \le (\bar{\mu },\underline{\nu };\bar{r});\\&\mathrm{3.}\quad (\underline{\mu },\bar{\nu } ; \underline{r} ) \le \textrm{CPFWG}_{\max } (\beta _1,\ldots , \beta _n ) \le (\bar{\mu },\underline{\nu };\bar{r});\\&\mathrm{4.}\quad (\underline{\mu },\bar{\nu } ; \underline{r} ) \le \textrm{CPFWG}_{\min } (\beta _1,\ldots , \beta _n ) \le (\bar{\mu },\underline{\nu };\bar{r}). \end{aligned}$$

Proof

We prove the first part of the theorem. The remaining parts can be proven similarly. To complete the first part, we need to show that \(\underline{\mu } \le \sqrt{1- \prod _{i}^{n}(1- \mu _i^2)^{\omega _i} } \le \bar{\mu }\), \(\bar{\nu } \ge \prod _{i}^{n} \nu _i^{\omega _i} \ge \underline{\nu }\), and \(\underline{r} \le \max \{r_1,\ldots , r_n \} \le \bar{r}\).

Since \(\underline{\mu }^2 \le \mu _i^2 \le \bar{\mu }^2\), we have

$$\begin{aligned} \prod _{i}^{n}\bigg (1- \underline{\mu }^2\bigg )^{\omega _i}&\ge \prod _{i}^{n}\bigg (1- \mu _i^2\bigg )^{\omega _i} \ge \prod _{i}^{n}\bigg (1- \bar{\mu }^2\bigg )^{\omega _i} \\ \bigg (1- \underline{\mu }^2\bigg )^{\sum _{i}^{n} \omega _i}&\ge \prod _{i}^{n}\bigg (1- \mu _i^2\bigg )^{\omega _i} \ge \bigg (1- \bar{\mu }^2\bigg )^{\sum _{i}^{n} \omega _i} \\ \underline{\mu }&\le \sqrt{1- \prod _{i}^{n}\bigg (1- \mu _i^2\bigg )^{\omega _i} } \le \bar{\mu }. \end{aligned}$$

Similarly, we can prove \(\bar{\nu } \ge \prod _{i}^{n} \nu _i^{\omega _i} \ge \underline{\nu }\), and \(\underline{r} \le \max \{r_1,\ldots , r_n \} \le \bar{r}\). \(\square \)

Theorem 11

(Idempotency) Let \(\{\beta _i = (\mu _{i}, \nu _{i}; r_{i})\}_{i=1, \ldots , n}\,\) be a list of n C-PFVs such that \(\beta _i = \beta = (\mu , \nu ; r)\) for all i. If \(\omega = (\omega _1,\omega _2, \ldots , \omega _n)\) is a weight vector with \(\sum _{i=1}^{n} \omega _i = 1\), then

$$\begin{aligned}&\mathrm{1.} \quad \textrm{CPFWA}_{\max } (\beta _1,\ldots , \beta _n ) = \beta ;\\&\mathrm{2.} \quad \textrm{CPFWA}_{\min } (\beta _1,\ldots , \beta _n ) = \beta ;\\&\mathrm{3.} \quad \textrm{CPFWG}_{\max } (\beta _1,\ldots , \beta _n ) = \beta ;\\&\mathrm{4.} \quad \textrm{CPFWG}_{\min } (\beta _1,\ldots , \beta _n ) = \beta . \end{aligned}$$

Theorem 12

Let \(\{\beta _i = (\mu _{i}, \nu _{i}; r_{i})\}_{i=1, \ldots , n}\,\) be a list of n C-PFVs and \(\beta = (\mu ,\nu ;r)\) be any C-PFV. If \(\omega = (\omega _1, \ldots , \omega _n)\) is a weight vector with \(\sum _{i=1}^{n} \omega _i = 1\), then

$$\begin{aligned}&\mathrm{1.}\; \textrm{CPFWA}_{M} (\beta _1\oplus _{M} \beta ,\ldots ,\beta _n\oplus _{M} \beta ) \\&\quad \ge \textrm{CPFWA}_{M} (\beta _1\otimes _{M} \beta ,\ldots ,\beta _n\otimes _{M} \beta ); \\&\mathrm{2.}\; \textrm{CPFWA}_{N} (\beta _1\oplus _{N} \beta ,\ldots ,\beta _n\oplus _{N} \beta ) \\&\quad \ge \textrm{CPFWA}_{N} (\beta _1\otimes _{N} \beta ,\ldots ,\beta _n\otimes _{N} \beta ); \\&\mathrm{3.}\; \textrm{CPFWA}_{N} (\beta _1\oplus _{M} \beta ,\ldots ,\beta _n\oplus _{M} \beta ) \\&\quad \ge \textrm{CPFWA}_{N} (\beta _1\otimes _{M} \beta ,\ldots ,\beta _n\otimes _{M} \beta ); \\&\mathrm{4.}\; \textrm{CPFWA}_{N} (\beta _1\oplus _{N} \beta ,\ldots ,\beta _n\oplus _{N} \beta ) \\&\quad \ge \textrm{CPFWA}_{N} (\beta _1\otimes _{N} \beta ,\ldots ,\beta _n\otimes _{N} \beta ); \\&\mathrm{5.}\; \textrm{CPFWG}_{M} (\beta _1\oplus _{M} \beta ,\ldots ,\beta _n\oplus _{M} \beta ) \\&\quad \ge \textrm{CPFWG}_{M} (\beta _1\otimes _{M} \beta ,\ldots ,\beta _n\otimes _{M} \beta ); \\&\mathrm{6.}\; \textrm{CPFWG}_{N} (\beta _1\oplus _{N} \beta ,\ldots ,\beta _n\oplus _{N} \beta ) \\&\quad \ge \textrm{CPFWG}_{N} (\beta _1\otimes _{N} \beta ,\ldots ,\beta _n\otimes _{N} \beta ); \\&\mathrm{7.}\; \textrm{CPFWG}_{N} (\beta _1\oplus _{M} \beta ,\ldots ,\beta _n\oplus _{M} \beta ) \\&\quad \ge \textrm{CPFWG}_{N} (\beta _1\otimes _{M} \beta ,\ldots ,\beta _n\otimes _{M} \beta );\\&\mathrm{8.}\; \textrm{CPFWG}_{N} (\beta _1\oplus _{N} \beta ,\ldots ,\beta _n\oplus _{N} \beta ) \\&\quad \ge \textrm{CPFWG}_{N} (\beta _1\otimes _{N} \beta ,\ldots ,\beta _n\otimes _{N} \beta ), \end{aligned}$$

where the subscripts M and N have been substituted for \(\max \) and \(\min \), respectively.

Proof

For any \(\beta _i = (\mu _{i}, \nu _{i}; r_{i})\) and \(\beta = (\mu ,\nu ;r)\), we have \(\beta _i\oplus _{M} \beta \ge \beta _i\otimes _{M} \beta \) and \(\beta _i\oplus _{N} \beta \ge \beta _i\otimes _{N} \beta \). The proofs of all parts can be easily derived from the monotonicity of \(\textrm{CPFWA}_{\max }\), \(\textrm{CPFWA}_{\min }\), \(\textrm{CPFWG}_{\max }\), and \(\textrm{CPFWG}_{\min }\). \(\square \)

Theorem 13

Let \(\{\beta _i = (\mu _{i}, \nu _{i}; r_{i})\}_{i=1, \ldots , n}\,\) be a list of n C-PFVs and \(\beta = (\mu ,\nu ;r)\) be any C-PFV. If \(\omega = (\omega _1,\omega _2, \ldots , \omega _n)\) is a weight vector with \(\sum _{i=1}^{n} \omega _i = 1\), then

$$\begin{aligned}&\mathrm{1.}\;\textrm{CPFWA}_{\max }(\beta _1\oplus _{\max } \beta ,\ldots ,\beta _n\oplus _{\max } \beta )\\ {}&\quad \ge \textrm{CPFWA}_{\max }(\beta _1,\ldots ,\beta _n)\otimes _{\max } \beta ;\\&\mathrm{2.}\;\textrm{CPFWA}_{\min }(\beta _1\oplus _{\min } \beta ,\ldots ,\beta _n\oplus _{\min } \beta )\\ {}&\quad \ge \textrm{CPFWA}_{\min }(\beta _1,\ldots ,\beta _n )\otimes _{\min } \beta ;\\&\mathrm{3.}\;\textrm{CPFWA}_{\max } (\beta _1,\ldots ,\beta _n )\oplus _{\max } \beta \\ {}&\quad \ge \textrm{CPFWA}_{\max } (\beta _1,\ldots ,\beta _n )\otimes _{\max } \beta ; \\&\mathrm{4.}\;\textrm{CPFWA}_{\min } (\beta _1,\ldots ,\beta _n )\oplus _{\min } \beta \\ {}&\quad \ge \textrm{CPFWA}_{\min } (\beta _1,\ldots ,\beta _n )\otimes _{\min } \beta ; \\&\mathrm{5.}\;\textrm{CPFWG}_{\max }(\beta _1\oplus _{\max } \beta ,\ldots ,\beta _n\oplus _{\max } \beta )\\ {}&\quad \ge \textrm{CPFWG}_{\max } (\beta _1,\ldots ,\beta _n )\otimes _{\max } \beta ;\\&\mathrm{6.}\;\textrm{CPFWG}_{\min }(\beta _1\oplus _{\min } \beta ,\ldots ,\beta _n\oplus _{\min } \beta )\\ {}&\quad \ge \textrm{CPFWG}_{\min }(\beta _1,\ldots ,\beta _n )\otimes _{\min } \beta ;\\&\mathrm{7.}\;\textrm{CPFWG}_{\max } (\beta _1,\ldots ,\beta _n )\oplus _{\max } \beta \\ {}&\quad \ge \textrm{CPFWG}_{\max } (\beta _1,\ldots ,\beta _n )\otimes _{\max } \beta ; \\&\mathrm{8.}\; \textrm{CPFWG}_{\min } (\beta _1,\ldots ,\beta _n )\oplus _{\min } \beta \\ {}&\quad \ge \textrm{CPFWG}_{\min } (\beta _1,\ldots ,\beta _n )\otimes _{\min } \beta . \end{aligned}$$

Proof

We prove only the first part; the remaining parts can be proved analogously.

Let \(\beta _i\oplus _{\max } \beta = (T_{i}, S_{i}; R_{i}) = \bigg ( \sqrt{\mu _i^2 + \mu ^2 - \mu _i^2 \mu ^2}, \nu _i \nu \;; \max \{r_{i},r \} \bigg )\). Then,

$$\begin{aligned}&\textrm{CPFWA}_{\max }(\beta _1\oplus _{\max } \beta ,\ldots ,\beta _n\oplus _{\max } \beta ) \\&\quad =\left( \sqrt{1- \prod _{i}^{n}(1- T_i^2)^{\omega _i} } , \prod _{i}^{n} S_i^{\omega _i} \, ;\max \{R_{1},\ldots , R_{n} \} \right) . \end{aligned}$$

This is the left-hand side of the stated inequality. For the right-hand side, we have

$$\begin{aligned}&\textrm{CPFWA}_{\max }(\beta _1,\beta _2,\ldots ,\beta _n )\otimes _{\max } \beta \\&\quad =\left( \sqrt{1- \prod _{i}^{n}(1- \mu _i^2)^{\omega _i} } , \right. \\&\quad \left. \prod _{i}^{n} \nu _i^{\omega _i} ;\max \{r_{1},\ldots , r_{n} \} \right) \otimes _{\max } (\mu ,\nu ;r) \\&\quad = \left( \mu \sqrt{1- \prod _{i}^{n}(1- \mu _i^2)^{\omega _i} } ,\right. \\&\qquad \left. \sqrt{ \bigg ( \prod _{i}^{n} \nu _i^{\omega _i} \bigg )^{2} + \nu ^{2} -\bigg ( \prod _{i}^{n} \nu _i^{\omega _i} \bigg )^{2} \nu ^{2} } ; \max \{R_{1},\ldots , R_{n} \} \right) . \end{aligned}$$

We need to show that

$$ \begin{aligned}&\sqrt{1- \prod _{i}^{n}(1- T_i^2)^{\omega _i} } \ge \mu \, \sqrt{1- \prod _{i}^{n}(1- \mu _i^2)^{\omega _i} } \quad \text {and} \\&\sqrt{ \left( \prod _{i}^{n} \nu _i^{\omega _i} \right) ^{2} + \nu ^{2} -\left( \prod _{i}^{n} \nu _i^{\omega _i} \right) ^{2} \nu ^{2} } \ge \prod _{i}^{n} S_i^{\omega _i}. \end{aligned}$$

For the first inequality, since \(\mu ^{2} \le 1\), it suffices to verify the following chain of sufficient conditions:

$$\begin{aligned}&1- \prod _{i}^{n}\bigg (1- \bigg (\mu _i^2 + \mu ^2 - \mu _i^2 \mu ^2\bigg )\bigg )^{\omega _i} \\&\quad \ge \mu ^2 \left( 1 - \prod _{i}^{n}\bigg (1- \mu _i^2\bigg )^{\omega _i} \right) \\&\prod _{i}^{n}\bigg (1- \bigg (\mu _i^2 + \mu ^2- \mu _i^2 \mu ^2\bigg )\bigg )^{\omega _i} \le \prod _{i}^{n}\bigg (1- \mu _i^2\bigg )^{\omega _i} \\&1- \bigg (\mu _i^2 + \mu ^2 - \mu _i^2 \mu ^2\bigg ) \le 1- \mu _i^2 \\&\mu _i^2 \mu ^2 \le \mu ^2 , \end{aligned}$$

which holds in general. For the second inequality, write \(\delta = \prod _{i}^{n} \nu _i^{\omega _i}\). Since \(S_i = \nu _i \nu \) and \(\sum _{i=1}^{n}\omega _i = 1\), we have \(\prod _{i}^{n} S_i^{\omega _i} = \prod _{i}^{n} (\nu _i \nu )^{\omega _i} = \nu \prod _{i}^{n} \nu _i^{\omega _i} = \delta \nu \). Hence the second inequality reduces to

$$\begin{aligned} \sqrt{ \delta ^{2} + \nu ^{2} - \delta ^{2} \nu ^{2} } \ge \delta \nu , \end{aligned}$$

which is true because \(\delta ^{2} + \nu ^{2} - 2\delta ^{2}\nu ^{2} = \delta ^{2}(1-\nu ^{2}) + \nu ^{2}(1-\delta ^{2}) \ge 0\). This proves the first part. \(\square \)
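To make the computations used in this proof concrete, the following Python sketch (not part of the original paper) implements the \(\oplus _{\max }\), \(\otimes _{\max }\), and \(\textrm{CPFWA}_{\max }\) formulas displayed above and checks part 1 of Theorem 13 on sample values. The score-based comparison and the sample C-PFVs are assumptions made only for this illustration.

```python
import math

# A minimal numerical sketch (not the authors' code) of the operations used in
# the proof of Theorem 13, part 1.  A C-PFV is modelled as a tuple (mu, nu, r).
# The comparison below uses the Pythagorean score mu^2 - nu^2, an assumption
# made only for this rough check; the radius is carried along via max, as in
# the _max variants of the operations.

def oplus_max(a, b):
    """Sum of two C-PFVs with the max radius rule."""
    (m1, n1, r1), (m2, n2, r2) = a, b
    return (math.sqrt(m1**2 + m2**2 - m1**2 * m2**2), n1 * n2, max(r1, r2))

def otimes_max(a, b):
    """Product of two C-PFVs with the max radius rule."""
    (m1, n1, r1), (m2, n2, r2) = a, b
    return (m1 * m2, math.sqrt(n1**2 + n2**2 - n1**2 * n2**2), max(r1, r2))

def cpfwa_max(values, weights):
    """CPFWA_max as in the displayed formula."""
    mu = math.sqrt(1 - math.prod((1 - m**2) ** w for (m, _, _), w in zip(values, weights)))
    nu = math.prod(n**w for (_, n, _), w in zip(values, weights))
    return (mu, nu, max(r for _, _, r in values))

def score(v):
    return v[0] ** 2 - v[1] ** 2   # assumed ranking rule for this sketch

betas = [(0.6, 0.5, 0.2), (0.7, 0.3, 0.1), (0.4, 0.6, 0.3)]
beta = (0.5, 0.4, 0.25)
w = [0.5, 0.3, 0.2]

lhs = cpfwa_max([oplus_max(b, beta) for b in betas], w)
rhs = otimes_max(cpfwa_max(betas, w), beta)
print(score(lhs) >= score(rhs))   # expected: True
```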

Theorem 14

Let \(\{\beta _i = (\mu _{i}, \nu _{i}; r_{i})\}_{i=1, \ldots , n}\,\) be a list of n C-PFVs. If \(\omega = (\omega _1,\omega _2, \ldots , \omega _n)\) is the weight vector of the \(\beta _i\) with \(\sum _{i=1}^{n} \omega _i = 1\) and \(\lambda > 0 \), then

$$\begin{aligned}&\mathrm{1.}\;\textrm{CPFWA}_{\max }(\lambda \beta _1,\ldots ,\lambda \beta _n )\ge \textrm{CPFWA}_{\max }(\beta _1^{\lambda },\ldots ,\beta _n^{\lambda });\\&\mathrm{2.}\;\textrm{CPFWA}_{\min }(\lambda \beta _1,\ldots ,\lambda \beta _n )\ge \textrm{CPFWA}_{\min }(\beta _1^{\lambda },\ldots ,\beta _n^{\lambda });\\&\mathrm{3.}\;\lambda \, \textrm{CPFWA}_{\max }(\beta _1,\ldots ,\beta _n )\ge \left( \textrm{CPFWA}_{\max }(\beta _1,\ldots ,\beta _n )\right) ^{\lambda };\\&\mathrm{4.}\;\lambda \, \textrm{CPFWA}_{\min }(\beta _1,\ldots ,\beta _n )\ge \left( \textrm{CPFWA}_{\min }(\beta _1,\ldots ,\beta _n )\right) ^{\lambda };\\&\mathrm{5.}\;\textrm{CPFWG}_{\max }(\lambda \beta _1,\ldots ,\lambda \beta _n )\ge \textrm{CPFWG}_{\max }(\beta _1^{\lambda },\ldots ,\beta _n^{\lambda });\\&\mathrm{6.}\;\textrm{CPFWG}_{\min }(\lambda \beta _1,\ldots ,\lambda \beta _n )\ge \textrm{CPFWG}_{\min }(\beta _1^{\lambda },\ldots ,\beta _n^{\lambda });\\&\mathrm{7.}\;\lambda \, \textrm{CPFWG}_{\max }(\beta _1,\ldots ,\beta _n )\ge \left( \textrm{CPFWG}_{\max }(\beta _1,\ldots ,\beta _n )\right) ^{\lambda };\\&\mathrm{8.}\;\lambda \, \textrm{CPFWG}_{\min }(\beta _1,\ldots ,\beta _n )\ge \left( \textrm{CPFWG}_{\min }(\beta _1,\ldots ,\beta _n )\right) ^{\lambda }. \end{aligned}$$

Proof

The proof of this theorem is straightforward from the fact that \(\lambda \beta _i \ge \beta _i^{\lambda }\) for every i, together with the monotonicity of the aggregation operators. \(\square \)

Theorem 15

Let \(\{\beta _i = (\mu _{i}, \nu _{i}; r_{i})\}_{i=1, \ldots , n}\,\) and \(\{q_i = (\mu _{q_i}, \nu _{q_i}; r_{q_i})\}_{i=1, \ldots , n}\,\) be two lists of n C-PFVs. If \(\omega = (\omega _1, \ldots , \omega _n)\) is a weight vector with \(\sum _{i=1}^{n} \omega _i = 1\), then

$$\begin{aligned}&\mathrm{1.}\;\textrm{CPFWA}_{\max }(\beta _1\oplus _{\max }q_1,\ldots ,\beta _n\oplus _{\max } q_n)\\ {}&\qquad \quad \ge \textrm{CPFWA}_{\max }(\beta _1 \otimes _{\max } q_1,\ldots ,\beta _n\otimes _{\max } q_n);\\&\mathrm{2.}\;\textrm{CPFWA}_{\max }(\beta _1\oplus _{\min }q_1,\ldots ,\beta _n\oplus _{\min } q_n)\\ {}&\qquad \quad \ge \textrm{CPFWA}_{\max }(\beta _1 \otimes _{\min } q_1,\ldots ,\beta _n\otimes _{\min } q_n);\\&\mathrm{3.}\;\textrm{CPFWA}_{\min }(\beta _1\oplus _{\max }q_1,\ldots ,\beta _n\oplus _{\max } q_n)\\ {}&\qquad \quad \ge \textrm{CPFWA}_{\min }(\beta _1 \otimes _{\max } q_1,\ldots ,\beta _n\otimes _{\max } q_n);\\&\mathrm{4.}\;\textrm{CPFWA}_{\min }(\beta _1\oplus _{\min }q_1,\ldots ,\beta _n\oplus _{\min } q_n)\\ {}&\qquad \quad \ge \textrm{CPFWA}_{\min }(\beta _1 \otimes _{\min } q_1,\ldots ,\beta _n\otimes _{\min } q_n);\\&\mathrm{5.}\;\textrm{CPFWA}_{\max } (\beta _1,\ldots ,\beta _n )\oplus _{\max } \textrm{CPFWA}_{\max } (q_1,\ldots ,q_n ) \\ {}&\qquad \quad \ge \textrm{CPFWA}_{\max } (\beta _1,\ldots ,\beta _n )\otimes _{\max } \textrm{CPFWA}_{\max } (q_1,\ldots ,q_n ); \\&\mathrm{6.}\;\textrm{CPFWA}_{\min } (\beta _1,\ldots ,\beta _n )\oplus _{\min } \textrm{CPFWA}_{\min } (q_1,\ldots ,q_n ) \\ {}&\qquad \quad \ge \textrm{CPFWA}_{\min } (\beta _1,\ldots ,\beta _n )\otimes _{\min } \textrm{CPFWA}_{\min } (q_1,\ldots ,q_n ); \\&\mathrm{7.}\;\textrm{CPFWG}_{\max }(\beta _1\oplus _{\max }q_1,\ldots ,\beta _n\oplus _{\max } q_n)\\ {}&\qquad \quad \ge \textrm{CPFWG}_{\max }(\beta _1 \otimes _{\max } q_1,\ldots ,\beta _n\otimes _{\max } q_n);\\&\mathrm{8.}\;\textrm{CPFWG}_{\max }(\beta _1\oplus _{\min }q_1,\ldots ,\beta _n\oplus _{\min } q_n)\\ {}&\qquad \quad \ge \textrm{CPFWG}_{\max }(\beta _1 \otimes _{\min } q_1,\ldots ,\beta _n\otimes _{\min } q_n);\\&\mathrm{9.}\;\textrm{CPFWG}_{\min }(\beta _1\oplus _{\max }q_1,\ldots ,\beta _n\oplus _{\max } q_n)\\ {}&\qquad \quad \ge \textrm{CPFWG}_{\min }(\beta _1 \otimes _{\max } q_1,\ldots ,\beta _n\otimes _{\max } q_n);\\&\mathrm{10.}\;\textrm{CPFWG}_{\min }(\beta _1\oplus _{\min }q_1,\ldots ,\beta _n\oplus _{\min }q_n)\\ {}&\qquad \quad \ge \textrm{CPFWG}_{\min }(\beta _1 \otimes _{\min } q_1,\ldots ,\beta _n\otimes _{\min } q_n);\\&\mathrm{11.}\;\textrm{CPFWG}_{\max } (\beta _1,\ldots ,\beta _n )\oplus _{\max } \textrm{CPFWG}_{\max } (q_1,\ldots ,q_n ) \\ {}&\qquad \quad \ge \textrm{CPFWG}_{\max } (\beta _1,\ldots ,\beta _n )\otimes _{\max } \textrm{CPFWG}_{\max } (q_1,\ldots ,q_n ); \\&\mathrm{12.}\;\textrm{CPFWG}_{\min } (\beta _1,\ldots ,\beta _n )\oplus _{\min } \textrm{CPFWG}_{\min } (q_1,\ldots ,q_n ) \\ {}&\qquad \quad \ge \textrm{CPFWG}_{\min } (\beta _1,\ldots ,\beta _n )\otimes _{\min } \textrm{CPFWG}_{\min } (q_1,\ldots ,q_n ). \end{aligned}$$

Proof

The proof of this theorem is straightforward from the facts that, for any C-PFVs \(\beta _i\) and \(q_i\), we have \(\beta _i\oplus _{\max }q_i \ge \beta _i \otimes _{\max } q_i\) and \(\beta _i\oplus _{\min }q_i \ge \beta _i \otimes _{\min } q_i\), together with the monotonicity of \(\textrm{CPFWA}_{\max }\), \(\textrm{CPFWA}_{\min }\), \(\textrm{CPFWG}_{\max }\), and \(\textrm{CPFWG}_{\min }\). \(\square \)

Theorem 16

Let \(\{\beta _i = (\mu _{i}, \nu _{i}; r_{i})\}_{i=1, \ldots , n}\,\) be a list of n C-PFVs. If \(\omega = (\omega _1,\omega _2, \ldots , \omega _n)\) is a weight vector with \(\sum _{i=1}^{n} \omega _i = 1\), then

$$\begin{aligned}&\mathrm{1.}\quad \textrm{CPFWA}_{\max } (\beta _1^{c},\ldots , \beta _n^{c}) = \left( \textrm{CPFWG}_{\max } (\beta _1,\ldots , \beta _n ) \right) ^{c};\\&\mathrm{2.}\quad \textrm{CPFWA}_{\min } (\beta _1^{c},\ldots , \beta _n^{c}) = \left( \textrm{CPFWG}_{\min } (\beta _1,\ldots , \beta _n ) \right) ^{c};\\&\mathrm{3.}\quad \textrm{CPFWG}_{\max } (\beta _1^{c},\ldots , \beta _n^{c}) = \left( \textrm{CPFWA}_{\max } (\beta _1,\ldots , \beta _n ) \right) ^{c};\\&\mathrm{4.}\quad \textrm{CPFWG}_{\min } (\beta _1^{c},\ldots , \beta _n^{c}) = \left( \textrm{CPFWA}_{\min } (\beta _1,\ldots , \beta _n ) \right) ^{c}. \end{aligned}$$

Proof

Straightforward. \(\square \)
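The duality stated in Theorem 16 can be checked numerically. The sketch below (illustrative only, not the authors' code) assumes that the complement of a C-PFV swaps its MD and NMD while keeping the radius, and that \(\textrm{CPFWG}_{\max }\) is the standard weighted geometric dual of the \(\textrm{CPFWA}_{\max }\) formula displayed earlier, with the max-radius rule in both cases; both assumptions are made only for this illustration.

```python
import math

# Numerical check of Theorem 16, part 1, under the stated assumptions.

def cpfwa_max(values, weights):
    mu = math.sqrt(1 - math.prod((1 - m**2) ** w for (m, _, _), w in zip(values, weights)))
    nu = math.prod(n**w for (_, n, _), w in zip(values, weights))
    return (mu, nu, max(r for _, _, r in values))

def cpfwg_max(values, weights):
    mu = math.prod(m**w for (m, _, _), w in zip(values, weights))
    nu = math.sqrt(1 - math.prod((1 - n**2) ** w for (_, n, _), w in zip(values, weights)))
    return (mu, nu, max(r for _, _, r in values))

def complement(v):
    mu, nu, r = v
    return (nu, mu, r)   # assumed complement: swap MD and NMD, keep radius

betas = [(0.6, 0.5, 0.2), (0.7, 0.3, 0.1), (0.4, 0.6, 0.3)]
w = [0.5, 0.3, 0.2]

lhs = cpfwa_max([complement(b) for b in betas], w)
rhs = complement(cpfwg_max(betas, w))
print(all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs)))   # expected: True
```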

Distance measures for D-PFSs

Distance measures quantify how far apart two fuzzy sets are from each other. Rooted in metric theory, this is one of the main topics of fuzzy set theory. In this section, we extend the well-known Hamming and Euclidean distance measures to D-PFSs. These measures will be needed in the next section, where the CODAS approach is extended to D-PFSs.

Definition 12

Let \(A_1\) and \(A_2\) be two D-PFSs over L. Then, the Hamming distance between them is

$$\begin{aligned} \nonumber&\bar{H}_{1}(A_1,A_2)=\frac{1}{2 N_L} \times \left( \sum _{\hbar \in L} \left[ \frac{\mid r_{A_1}(\hbar ) -r_{A_2}(\hbar )\mid }{\sqrt{2}}\right. \right. \\&\quad +\left. \left. \frac{ \mid \mu _{A_1}^{2}(\hbar )-\mu _{A_2}^{2}(\hbar ) \mid + \mid \nu _{A_1}^{2}(\hbar )-\nu _{A_2}^{2}(\hbar ) \mid }{2} \right] \right) , \end{aligned}$$
(10)

where \(N_L\) is the cardinality of L.

If we consider the hesitancy index of D-PFVs while calculating the distance between D-PFSs, then

$$\begin{aligned}&\bar{H}_{2}(A_1,A_2)= \frac{1}{2 N_L}\left( \sum _{\hbar \in L} \left[ \frac{\mid r_{A_1}(\hbar ) -r_{A_2}(\hbar )\mid }{\sqrt{2}} + \frac{\begin{matrix} \mid \mu _{A_1}^{2}(\hbar )-\mu _{A_2}^{2}(\hbar ) \mid + \mid \nu _{A_1}^{2}(\hbar )-\nu _{A_2}^{2}(\hbar ) \mid \\ + \mid \pi _{A_1}^{2}(\hbar )-\pi _{A_2}^{2}(\hbar ) \mid \end{matrix}}{2} \right] \right) . \end{aligned}$$
(11)

Definition 13

Let \(A_1\) and \(A_2\) be two D-PFSs over L. Then, the Euclidean distance between them is

$$\begin{aligned} \nonumber&\bar{E}_{1}(A_1,A_2)= \frac{1}{2} \Bigg ( \frac{1}{ N_L} \sum _{\hbar \in L} \frac{\mid r_{A_1}(\hbar ) -r_{A_2}(\hbar )\mid }{\sqrt{2}} \\&\quad + \sqrt{\frac{1}{2 N_L} \sum _{\hbar \in L} \left[ \left( \mu _{A_1}^{2}(\hbar )-\mu _{A_2}^{2}(\hbar ) \right) ^2 + \left( \nu _{A_1}^{2}(\hbar )-\nu _{A_2}^{2}(\hbar ) \right) ^2 \right] } \Bigg ), \end{aligned}$$
(12)

where \(N_L\) is the cardinality of L.

If we consider the hesitancy index of D-PFVs while calculating the distance between D-PFSs, then

$$\begin{aligned}&\bar{E}_{2}(A_1,A_2)=\frac{1}{2} \left( \frac{1}{ N_L} \sum _{\hbar \in L} \frac{\mid r_{A_1}(\hbar ) -r_{A_2}(\hbar )\mid }{\sqrt{2}} + \sqrt{\frac{1}{2 N_L} \sum _{\hbar \in L} \left[ \left( \mu _{A_1}^{2}(\hbar )-\mu _{A_2}^{2}(\hbar ) \right) ^2 + \left( \nu _{A_1}^{2}(\hbar )-\nu _{A_2}^{2}(\hbar ) \right) ^2 + \left( \pi _{A_1}^{2}(\hbar )-\pi _{A_2}^{2}(\hbar ) \right) ^2 \right] } \right) .\nonumber \\ \end{aligned}$$
(13)
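The following Python sketch (illustrative only, not the authors' implementation) encodes the distances of Eqs. (10) and (12). The two-element universe used to exercise the formulas is hypothetical; the variants with the hesitancy index, Eqs. (11) and (13), would add the corresponding \(\mid \pi _{A_1}^{2}-\pi _{A_2}^{2}\mid \) terms.

```python
import math

# Minimal sketch of the D-PFS distances in Eqs. (10) and (12).  Each D-PFS over
# L is represented as a dict mapping an element of L to a D-PFV (mu, nu, r).

def hamming_d1(A1, A2):
    """Hamming distance of Eq. (10)."""
    n = len(A1)                           # N_L, the cardinality of L
    total = 0.0
    for h in A1:
        (m1, n1, r1), (m2, n2, r2) = A1[h], A2[h]
        total += abs(r1 - r2) / math.sqrt(2) \
                 + (abs(m1**2 - m2**2) + abs(n1**2 - n2**2)) / 2
    return total / (2 * n)

def euclidean_d1(A1, A2):
    """Euclidean distance of Eq. (12)."""
    n = len(A1)
    radius_part = sum(abs(A1[h][2] - A2[h][2]) for h in A1) / (math.sqrt(2) * n)
    square_part = sum((A1[h][0]**2 - A2[h][0]**2) ** 2
                      + (A1[h][1]**2 - A2[h][1]**2) ** 2 for h in A1)
    return 0.5 * (radius_part + math.sqrt(square_part / (2 * n)))

# Hypothetical two-element universe, used only to exercise the formulas.
A1 = {"h1": (0.7, 0.3, 0.10), "h2": (0.5, 0.6, 0.05)}
A2 = {"h1": (0.6, 0.4, 0.20), "h2": (0.4, 0.5, 0.05)}
print(hamming_d1(A1, A2), euclidean_d1(A1, A2))
```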

Applications of D-PFSs in MCDM Problems

This section extends the CODAS method to Disc Pythagorean fuzzy sets. The Euclidean and Hamming distance measures defined above are employed to build this technique. It can take inputs in the form of linguistic variables, D-PFSs, C-PFSs, or PFSs.

Disc Pythagorean fuzzy CODAS method

Step 1::

Let \( L = \{ \hbar _{1}, \hbar _{2},\ldots , \hbar _{m} \}\) be the set of m alternatives and \( C = \{ C_{1}, C_{2},\ldots , C_{n} \}\) the set of n attributes. Let \(E=\{ E_{1}, E_{2},\ldots ,E_{d}\}\) represent the set of experts. Each alternative \(\hbar _{i}\) is assessed against each attribute \(C_{j}\) by every expert \(E_k\), and their evaluations are recorded as D-PFVs \(a_{ij}^{k}\). These judgments constitute the \({m\times n}\) decision matrix of expert \(E_k\), that is,

$$\begin{aligned} M^{k} = \left[ a_{ij}^{k}\right] _{m\times n}, \qquad k=1,2,\ldots ,d. \end{aligned}$$
(14)
Step 2::

The previous step yields d D-PF decision matrices. Let \(\varpi = \{ \varpi _{1}, \varpi _{2},\ldots , \varpi _{d} \}\) be the weight vector of the experts. The aggregation of these matrices can be performed cell by cell using any of the aggregation operators \(\textrm{CPFWA}_{\max }\), \(\textrm{CPFWA}_{\min }\), \(\textrm{CPFWG}_{\max }\), and \(\textrm{CPFWG}_{\min }\). We denote the aggregated D-PF decision matrix by M and its generic element by \(a_{ij}=(\mu _{ij},\nu _{ij}; r_{ij})\), where \(i\in \{1,2,\ldots , m \}\) and \(j\in \{1,2,\ldots , n \}\). (A code sketch illustrating Steps 2, 5, and 6 on hypothetical data is given after Step 10.)

Step 3::

Normalize the decision matrix M with respect to the benefit and cost criteria as follows:

$$\begin{aligned} { a_{ij} =} {\left\{ \begin{array}{ll} { (\mu _{ij} , \nu _{ij} \,; r_{ij} ),} &{} \text {for benefit criteria},\\ { (\nu _{ij} , \mu _{ij} \,; r_{ij} ),} &{} \text {for cost criteria} .\\ \end{array}\right. } \end{aligned}$$
(15)
Step 4::

In real-life MCDM problems, the attributes are usually not equally important. For this reason, a weight vector \(\omega = \{ \omega _{1}, \omega _{2},\ldots , \omega _{n} \}\) is assigned to the set of attributes, where \(0 \le \omega _{j} \le 1\) and \( \sum \nolimits _{j=1}^{n} \omega _{j} = 1\).

Step 5::

Formulate the weighted normalized D-PF matrix by scalar multiplication of D-PFVs:

$$\begin{aligned} \omega _{j} a_{ij} = (\chi _{ij} , \psi _{ij}; r_{ij} ) = \left( \sqrt{ 1 - (1 - \mu _{ij}^{2})^{\omega _{j}} }\; ,\; \nu _{ij}^{\omega _{j}} \;; r_{ij} \right) . \end{aligned}$$
(16)
Step 6::

For each attribute, find the negative ideal (NI) D-PFVs as follows:

$$\begin{aligned} \nonumber NI&= \left\{ \left( \hat{\chi }_{j},\hat{\psi }_{j}; \hat{r}_{j} \right) \; \mid \; j=1,\ldots ,n \right\} \\&= \left\{ \left( \min \limits _{i=1}^{m} (\chi _{ij}) , \max \limits _{i=1}^{m} (\psi _{ij}) ;\min \limits _{i=1}^{m}(r_{ij}) \right) \mid j=1,\ldots ,n \right\} . \end{aligned}$$
(17)
Step 7::

Compute the Euclidean and Hamming distances of each alternative \(\hbar _i\) from NI. We can use either \(\bar{H}_{1}\) and \(\bar{E}_{1}\) or \(\bar{H}_{2}\) and \(\bar{E}_{2}\). That is,

$$\begin{aligned} \bar{H}_i&= \bar{H}_{1} (NI, \hbar _i) = \frac{1}{2 n} \sum _{j=1}^{n} \bigg ( \frac{\mid r_{ij} - \hat{r}_{j}\mid }{\sqrt{2}} \\ {}&\quad + \frac{1}{2 } \bigg ( \mid \chi _{ij}^{2}-\hat{\chi }_{j}^{2} \mid + \mid \psi _{ij}^{2}-\hat{\psi }_{j}^{2} \mid \bigg ) \bigg ) \\ \bar{E}_i&= \bar{E}_{1} (NI, \hbar _i) = \frac{1}{2} \bigg ( \frac{1}{n} \sum _{j=1}^{n} \frac{\mid r_{ij} - \hat{r}_{j}\mid }{\sqrt{2}} \\ {}&\quad + \sqrt{\frac{1}{2 n} \sum _{j=1}^{n} \bigg [ \bigg ( \chi _{ij}^{2}-\hat{\chi }_{j}^{2} \bigg )^2 + \bigg ( \psi _{ij}^{2}-\hat{\psi }_{j}^{2} \bigg )^2 \bigg ] } \bigg ). \end{aligned}$$

or, when the hesitancy index is taken into account,

$$\begin{aligned}&\bar{H}_i = \bar{H}_{2} (NI , \hbar _i) =\frac{1}{2n} \sum _{j=1}^{n} \bigg ( \frac{\mid r_{ij} - \hat{r}_{j}\mid }{\sqrt{2}} \\ {}&\qquad + \frac{1}{2 } \bigg ( \mid \chi _{ij}^{2}-\hat{\chi }_{j}^{2} \mid + \mid \psi _{ij}^{2}-\hat{\psi }_{j}^{2} \mid + \mid \pi _{ij}^{2}-\hat{\pi }_{j}^{2} \mid \bigg ) \bigg ) \\&\bar{E}_i = \bar{E}_{2} (NI , \hbar _i) = \frac{1}{2} \times \bigg ( \frac{1}{ n} \sum _{j=1}^{n} \frac{\mid r_{ij} - \hat{r}_{j}\mid }{\sqrt{2}} \\ {}&\qquad + \sqrt{\frac{1}{2 n} \sum _{j=1}^{n} \bigg [ \bigg ( \chi _{ij}^{2}-\hat{\chi }_{j}^{2} \bigg )^2 + \bigg ( \psi _{ij}^{2}-\hat{\psi }_{j}^{2} \bigg )^2 + \bigg ( \pi _{ij}^{2}-\hat{\pi }_{j}^{2} \bigg )^2 \bigg ] } \bigg ) \end{aligned}$$

where \(\pi _{ij}\) and \(\hat{\pi }_{j}\) are the hesitancy indexes of \((\chi _{ij}, \psi _{ij}; r_{ij} )\) and \(\left( \hat{\chi }_{j},\hat{\psi }_{j}; \hat{r}_{j} \right) \), respectively.

Step 8::

Compose the relative assessment matrix (RAM) based on the values \(\bar{H}_i\) and \(\bar{E}_i\) calculated in the previous step, as follows:

$$\begin{aligned} \nonumber&R = [R_{ik}]_{m\times m} ,\\&R_{ik} = (\bar{E}_i - \bar{E}_k) + \left( \lambda _{ik} (\bar{E}_i - \bar{E}_k) \times (\bar{H}_i - \bar{H}_k) \right) , \end{aligned}$$
(18)

where \(\lambda _{ik}\) is a threshold function used to decide whether the Euclidean distances of two alternatives are nearly equal. It is defined as follows:

$$\begin{aligned} \lambda _{ik} = {\left\{ \begin{array}{ll} 1 &{} if\; \mid (\bar{E}_i - \bar{E}_k) \mid \le \tau \\ 0 &{} if\; \mid (\bar{E}_i - \bar{E}_k) \mid > \tau \\ \end{array}\right. } \end{aligned}$$
(19)

where \(\tau \) is the threshold parameter, which typically ranges from 0.01 to 0.05 [32]. The threshold is introduced so that the Hamming distance is also taken into account when the Euclidean distances of two alternatives are very close to each other.

Step 9::

Compute the assessment score of each alternative by summing the corresponding row of the RAM:

$$\begin{aligned} R_i = \sum _{k=1}^{m} R_{ik} . \end{aligned}$$
Step 10::

Rank the alternatives in decreasing order of their assessment scores; the alternative with the highest score is the best choice.
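As announced in Step 2, the following Python sketch strings together Steps 2, 5, and 6 on hypothetical data: cell-by-cell aggregation of the expert matrices with a CPFWA-type operator, weighting via Eq. (16), and extraction of the negative ideal via Eq. (17). It is an illustration under stated assumptions, not the authors' implementation; in particular, the min-radius rule used for \(\textrm{CPFWA}_{\min }\) is assumed to mirror the max-radius rule displayed earlier, and all matrices and weights are invented.

```python
import math

# Steps 2, 5 and 6 of the D-PF CODAS method on hypothetical data.
# A D-PFV is a tuple (mu, nu, r); a decision matrix is an m x n grid of D-PFVs.

def cpfwa_min(values, weights):
    """CPFWA-type aggregation; the min radius rule is assumed for the _min variant."""
    mu = math.sqrt(1 - math.prod((1 - m**2) ** w for (m, _, _), w in zip(values, weights)))
    nu = math.prod(n**w for (_, n, _), w in zip(values, weights))
    return (mu, nu, min(r for _, _, r in values))

def aggregate(matrices, expert_weights):                  # Step 2
    m, n = len(matrices[0]), len(matrices[0][0])
    return [[cpfwa_min([M[i][j] for M in matrices], expert_weights)
             for j in range(n)] for i in range(m)]

def weight_cell(a, w):                                    # Step 5, Eq. (16)
    mu, nu, r = a
    return (math.sqrt(1 - (1 - mu**2) ** w), nu**w, r)

def negative_ideal(wmatrix):                              # Step 6, Eq. (17)
    return [(min(c[0] for c in col), max(c[1] for c in col), min(c[2] for c in col))
            for col in zip(*wmatrix)]

# Hypothetical inputs: 2 alternatives, 2 attributes, 2 experts (benefit criteria only).
M1 = [[(0.7, 0.3, 0.1), (0.6, 0.5, 0.0)], [(0.5, 0.4, 0.2), (0.8, 0.2, 0.1)]]
M2 = [[(0.6, 0.4, 0.0), (0.7, 0.4, 0.1)], [(0.4, 0.5, 0.1), (0.7, 0.3, 0.0)]]
A = aggregate([M1, M2], [0.6, 0.4])
W = [[weight_cell(a, w) for a, w in zip(row, [0.7, 0.3])] for row in A]
print(negative_ideal(W))
```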

Illustrative example: selection of supermarket for fresh fruits

A well-known hotel in Bangkok has been steadily expanding its branches and receiving praise for its entire menu. However, the hotel has been running a deficit on fresh fruits and vegetables. One of the main causes is their poor quality, for example non-uniform shape, unclean surfaces, and insect damage. To resolve this management problem, the hotel has committed to improving supplier management. It aims to source standardized, clean, and orderly fresh produce in order to supply fresh, high-quality, green fruits and vegetables. The hotel has been collaborating with a number of local fruit and vegetable wholesalers to address issues with insufficient supply, significant breakage, and lack of recycling. It now decides to embrace green supply chain management and form a long-term collaboration with major supermarkets, working to ensure the quality of fruits and vegetables in order to capture the voice of customers and lower expenses. The hotel has identified four feasible supermarkets, denoted \(\hbar _1\), \(\hbar _2\), \(\hbar _3\), and \(\hbar _4\). All four supermarkets agree to provide the data needed for the selection process. Five evaluation criteria are chosen in line with the standards for green supermarkets: Quality \(C_1\), Diversity \(C_2\), Price \(C_3\), Deliverability \(C_4\), and Recycling ability \(C_5\).

This case is an MCDM problem that aims to choose the best supermarket from a group of four contenders based on how well they perform across five attributes. In addition, the majority of the chosen attributes are qualitative in nature. As a result, the suggested approach is appropriate for solving this problem. The hotel gathers relevant information about the four contenders.

Three experts, namely a manager \(E_1\), a purchasing manager \(E_2\), and a food and beverage director \(E_3\), were invited to give evaluations based on the gathered information and their experience and knowledge. The weights of the three specialists have been assigned according to their respective tasks and levels of experience. Let the weight vector for the experts be \(W = \{ 0.35, 0.40, 0.25\}\). Each expert evaluates all supermarkets with respect to the specified criteria, and their assessments are recorded in the form of D-PFVs. In this way, three D-PF decision matrices are obtained, say \(M^1\), \(M^2\), and \(M^3\), and displayed in Table 1.

Table 1 D-PFSs in “Illustrative example: selection of supermarket for fresh fruits”

To solve this problem, we use the D-PF CODAS method discussed in Sect. 5.1. The D-PF decision matrices have already been given by the experts in Table 1. Now, we use CPFWA\(_{\min }\) to aggregate the three D-PF decision matrices, and the resultant matrix is presented in Table 2. This completes the second step. Since attribute \(C_3\) (Price) is of cost type, the matrix M is normalized using (15). The normalized decision matrix \(M'\) is displayed in Table 3.

Table 2 Aggregate D-PF decision matrix in “Illustrative example: selection of supermarket for fresh fruits”
Table 3 Normalized aggregate D-PF decision matrix in “Illustrative example: selection of supermarket for fresh fruits”
Table 4 Weighted normalized aggregate D-PF decision matrix in “Illustrative example: selection of supermarket for fresh fruits”

Let the weight vector of the attributes be \(\omega = \{0.30, 0.20, 0.15, 0.15, 0.20\}\); then the weighted normalized decision matrix is obtained by (16). It is displayed in Table 4. In the next step, the NI D-PFVs are calculated using (17):

$$\begin{aligned} NI&= \{ (0.350,0.766;0.04 ),(0.316,0.733;0.01 ),\\&\qquad (0.039,0.953;0.05),\\&\qquad (0.262,0.823;0.05),(0.318,0.804;0.04) \} \end{aligned}$$

Now, we compute the Hamming and Euclidean distances of each alternative \(\hbar _i\) from NI. We use \(\bar{H}_{2}\) and \(\bar{E}_{2}\), and the results are

$$\begin{aligned}&\bar{H}_{2}(\hbar _1)= 0.0294, \bar{H}_{2}(\hbar _2)=0.0326, \\&\bar{H}_{2}(\hbar _3)=0.0390, \bar{H}_{2}(\hbar _4)=0.0349\\&\bar{E}_{2}(\hbar _1)= 0.0286, \bar{E}_{2}(\hbar _2)=0.0313,\\&\bar{E}_{2}(\hbar _3)=0.0364, \bar{E}_{2}(\hbar _4)=0.0331. \end{aligned}$$

The RAM is calculated using (18). If we take the threshold parameter \(\tau \) equal to 0.04, then all values \(\lambda _{ik}\) in (18) equal 1. The RAM is displayed in (20):

$$\begin{aligned} \left( \begin{array}{cccc} 0. &{}\quad -0.0027 &{}\quad -0.0077 &{}\quad -0.0045 \\ 0.0028 &{}\quad 0. &{}\quad -0.0050 &{}\quad -0.0018 \\ 0.0079 &{} \quad 0.0051 &{}\quad 0. &{}\quad 0.0033 \\ 0.0045 &{}\quad 0.0018 &{}\quad -0.0033 &{} \quad 0. \\ \end{array}\right) \end{aligned}$$
(20)

The assessment score of each alternative is obtained by adding the rows of matrix (20), that is, \( R_1= -0.0150\), \(R_2= -0.0040\), \(R_3= 0.0163\), and \(R_4= 0.0030\). The final ranking of the alternatives is

$$ \hbar _3 \succ \hbar _4\succ \hbar _2\succ \hbar _1.$$
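As a sanity check, the following short Python sketch (not part of the original study) recomputes the RAM of (20) and the assessment scores directly from the distances \(\bar{H}_i\) and \(\bar{E}_i\) reported above, using Eqs. (18) and (19) with \(\tau = 0.04\). Small deviations in the last digit are to be expected because the reported distances are rounded to four decimals.

```python
# Recompute the RAM (20) and the assessment scores from the rounded distances
# reported above, using Eqs. (18) and (19) with tau = 0.04.
E = [0.0286, 0.0313, 0.0364, 0.0331]   # Euclidean distances E_i
H = [0.0294, 0.0326, 0.0390, 0.0349]   # Hamming distances H_i
tau = 0.04

def lam(x):
    return 1 if abs(x) <= tau else 0   # threshold function (19)

R = [[(E[i] - E[k]) + lam(E[i] - E[k]) * (E[i] - E[k]) * (H[i] - H[k])
      for k in range(4)] for i in range(4)]          # Eq. (18)
scores = [sum(row) for row in R]
print([round(s, 4) for s in scores])
# roughly [-0.0149, -0.0042, 0.0163, 0.0030], giving h3 > h4 > h2 > h1
```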

Remark 1

The above problem has been solved using the aggregation operator CPFWA\(_{\min }\). However, we have also proposed the aggregation operators CPFWA\(_{\max }\), CPFWG\(_{\max }\), and CPFWG\(_{\min }\). Solving the same problem with these operators yields the following assessment scores and rankings:

$$\begin{aligned} \left( \begin{array}{cc cc cc} {\text {CPFWA}}_{\max } &{}-0.0150 &{}-0.0040 &{}0.0163 &{}0.0030 &{} \hbar _3 \succ \hbar _4\succ \hbar _2\succ \hbar _1 \\ {\text {CPFWG}}_{\max } &{}-0.0103 &{}0.0236 &{}-0.0083 &{}-0.0046 &{} \hbar _2 \succ \hbar _4\succ \hbar _3\succ \hbar _1 \\ {\text {CPFWG}}_{\min } &{}-0.0103 &{}0.0236 &{}-0.0083 &{}-0.0046 &{} \hbar _2 \succ \hbar _4\succ \hbar _3\succ \hbar _1 \\ \end{array}\right) \end{aligned}$$
Table 5 Comparison of rankings with existing methods from the PFS literature

Comparative analysis

Since this is a pioneering work, we cannot yet compare our results with alternative solutions under the pure D-PFS or even C-PFS structure. Hence, this section compares the suggested methodology with existing MCDM approaches, focusing on the PF environment. Since both D-PFSs and C-PFSs generalize ordinary PFSs, we can derive a PFS from any D-PFS by setting \(r(\hbar )=0\) throughout.

When we do this with the data from our case study, we obtain the results summarized in Table 5. One observes that the ranking of the alternatives changes, and that it is also affected by the choice of MCDM method. Our methodology, however, has the advantage of making full use of the information provided by the experts, because it operates in the more general environment of D-PFSs. In fact, the D-PF CODAS method is built upon the empirically successful Hamming and Euclidean distance measures.

Conclusion

The concept of C-PFS, which is an extension of C-IFS, has been defined and studied in this paper. This idea offers a more straightforward mathematical way to express uncertain data than IV-IFS, while keeping the advantages of PFSs in terms of flexibility. Instead of a precise value (an orthopair within the corresponding area of the unit square), a C-PFS describes each element by a circle of fixed radius. When the radius is zero, the model returns the successful PFS structure. When all orthopairs have the property that the total of their MD and NMD is at most 1, the model returns the recent C-IFS structure. The coincidence of both restrictions produces IFSs.

Then, we further extended this model and defined D-PFSs, which describe each element by a circle of variable radius. Of course, when all radii coincide, the model returns a C-PFS. We have extended the basic operations of union, intersection, addition, multiplication, and scalar multiplication to D-PFSs, and investigated various properties of these operations. These results establish the basic set-theoretic algebra and arithmetic of D-PFSs.

The arithmetic and geometric aggregation operators for D-PFSs have been defined, namely \(\textrm{CPFWA}_{\max }\), \(\textrm{CPFWA}_{\min }\), \(\textrm{CPFWG}_{\max }\), and \(\textrm{CPFWG}_{\min }\), and their underlying characteristics have been investigated. To compare D-PFSs, the Euclidean and Hamming distance measures have been presented. With these tools, we have developed a CODAS technique based on the Hamming and Euclidean distances, which has been illustrated with an application to the selection of the best supermarket from which a hotel can buy fresh fruit.

In the future, we will define additional aggregation operators, and we will initiate the study of similarity, dissimilarity, distance, divergence, entropy, and knowledge measures for D-PFSs. All these notions will be applied to better deal with uncertainty in daily life scenarios.

Also, other lines of research will explore combinations of C-IFSs with soft sets [23], N-soft sets [33], bipolar-valued fuzzy sets [24], complex fuzzy sets [26], q-rung orthopair fuzzy sets [11], Fermatean fuzzy sets [34], spherical fuzzy sets [35], T-spherical fuzzy sets [36], linear Diophantine fuzzy sets [37], and temporal IFSs [38].