# Picture fuzzy aggregation information based on Einstein operations and their application in decision making

## Abstract

The information aggregation operator plays a key role in group decision making problems. The aim of this paper is to investigate information aggregation operators under the picture fuzzy environment with the help of Einstein norm operations. The picture fuzzy set is an extended version of the intuitionistic fuzzy set, which considers not only the degrees of acceptance and rejection but also the degree of neutrality during the analysis. Under this environment, some basic aggregation operators, namely the picture fuzzy Einstein weighted and Einstein ordered weighted operators, are proposed in this paper. Some properties of these aggregation operators are discussed in detail. Further, a group decision making problem is illustrated and validated through a numerical example. A comparative analysis of the proposed and existing studies is performed to show the validity of the proposed operators.

## Introduction

The core idea of fuzzy set (FS) theory was first developed by Zadeh [55] in 1965. In this theory, Zadeh discussed only the positive membership degree of an element. FS theory has been applied in many fields of the real world, such as clustering analysis [51], decision making problems [29], medical diagnosis [12], and pattern recognition [32]. Unfortunately, FS theory cannot capture the negative membership degree of an element. Atanassov covered this gap by including the negative membership degree in FS theory: the core idea of intuitionistic fuzzy set (IFS) theory was developed by Atanassov [4] in 1986. IFS theory is an extension of FS theory in which both the positive and the negative membership degrees of an element are discussed, with the constraint that their sum is less than or equal to 1. After the introduction of IFS theory, many researchers played an important role in developing it further, proposing different types of techniques for processing information values that utilize different operators [8, 9, 18, 24, 26, 28, 34], information measures [7, 35, 39], and score and accuracy functions [25]. In particular, information aggregation is an interesting and important research topic in AIFS theory that has been receiving more and more attention since the work of Xu et al. [52] in 2010. Atanassov [4, 6] defined some basic operations and relations of AIFSs, including intersection, union, complement, algebraic sum, and algebraic product, and proved the equality relation between IFSs [5]. However, these operators use the Archimedean t-norm and t-conorm for the aggregation process.
The Einstein t-norm and t-conorm provide a good approximation for the sum and product of intuitionistic fuzzy numbers (IFNs) as an alternative to the algebraic sum and product. Wang and Liu [44] proposed some geometric aggregation operators based on Einstein operations for intuitionistic fuzzy information, and Wang and Liu [45] proposed the corresponding averaging operators. Zhao and Wei [57] defined hybrid averaging and geometric aggregation operators using Einstein operations. Apart from this, various researchers have paid attention to aggregating different alternatives over IFSs using different aggregation operators [13, 14, 19,20,21, 23, 27, 33, 38, 53, 54, 56].

In real life, there are problems that cannot be represented in IFS theory. For example, in a voting system, human opinions include more answers of the types: yes, no, abstain, and refusal. Therefore, Cuong [10] covered this gap by adding a neutral function to IFS theory and introduced the core idea of the picture fuzzy set (PFS) model, an extension of the IFS model. In PFS theory, he added a neutral membership degree alongside the positive and negative membership degrees of IFS theory, with the only constraint being that the sum of the positive, neutral, and negative membership degrees is less than or equal to 1. In 2014, Phong et al. [37] developed some compositions of PF relations. Singh, in 2015, developed correlation coefficients for PFS theory. Cuong et al. [11], in 2015, gave the core idea of some fuzzy logic operations for PFSs. Thong et al. [41] developed a policy for multi-variable fuzzy forecasting utilizing PF clustering and a PF rule interpolation system. Son [40] presented a generalized picture distance measure and its application. Wei [46] introduced the picture fuzzy cross-entropy for MADM problems, and Wei [47] introduced PF aggregation operators and their applications. The projection model for MADM in the picture fuzzy environment was presented by Wei et al. [50]. Bipolar 2-tuple linguistic aggregation operators for MADM were introduced by Lu et al. [36]. In 2017, Wei [48] developed the concept of some cosine similarity measures for PFSs, and Wei [49] introduced the basic idea of the picture 2-tuple linguistic Bonferroni mean operator and its application to MADM problems. Apart from these, other scholars working in the field of picture fuzzy set theory have introduced different types of decision making approaches (Wang et al. [42], Wang et al. [43]).
Different types of aggregation operators have also been defined for cubic fuzzy numbers, Pythagorean fuzzy numbers [31], and spherical fuzzy numbers [2, 3, 15,16,17, 30].

It is clear that the above aggregation operators are based on the algebraic operational laws of PFSs for carrying out the combination process. The basic algebraic operations of PFSs are the algebraic product and algebraic sum, which are not the only operations that can be chosen to model the intersection and union of PFSs. A good alternative to the algebraic product is the Einstein product, which typically gives the same smooth approximation as the algebraic product; similarly, the Einstein sum is an alternative to the algebraic sum for modeling the union. Moreover, there appears to be little investigation in the literature on aggregation techniques using Einstein operations on PFSs for aggregating a collection of information. Therefore, the focus of this paper is to develop some information aggregation operators based on Einstein operations on PFSs.

The remaining part of this paper is organized as follows. In the “Preliminaries” section, we give some basic definitions of IFSs, PFSs, and the score and accuracy functions. In the “Einstein operations of picture fuzzy sets” section, we propose picture fuzzy Einstein operations. In the “Picture fuzzy Einstein arithmetic averaging operators” section, we introduce some picture fuzzy Einstein arithmetic averaging operators. In the “Application of the picture fuzzy Einstein weighted averaging operator to multiple attribute decision making” section, we solve a MADM problem to illustrate the practicality of the picture fuzzy Einstein operators. The conclusion is given in the last section.

## Preliminaries

### Definition 1

[4, 6] An IFS $$\beta$$ defined on $$\hat{U}\ne \phi$$ is an ordered pair of the form

\begin{aligned} {\beta }=\left\langle {\mu }_{{\beta }}\left( \hat{u} \right),\Upsilon _{{\beta }}\left( \hat{u}\right) |\hat{u}\in \hat{U} \right\rangle \end{aligned}
(1)

where $$\mu _{\beta }\left( \hat{u}\right),\Upsilon _{\beta }\left( \hat{u} \right) \in \left[ 0,1\right]$$ are defined as $$\mu _{\beta }\left( \hat{u} \right),\Upsilon _{\beta }\left( \hat{u}\right) :\hat{U}\rightarrow \left[ 0,1\right]$$ for all $$\hat{u}\in \hat{U}.$$ Here, $$\mu _{\beta }\left( \hat{u}\right)$$ and $$\Upsilon _{\beta }\left( \hat{u}\right)$$ are called the membership and non-membership degrees, respectively. The pair $$\left\langle \mu _{\beta }\left( \hat{u}\right),\Upsilon _{\beta }\left( \hat{u}\right) \right\rangle$$ is called an IFN or IPV, where $$\mu _{\beta }\left( \hat{u}\right)$$ and $$\Upsilon _{\beta }\left( \hat{u} \right)$$ satisfy the following condition for all $$\hat{u}\in \hat{U}$$:

\begin{aligned} \left( {\mu }_{{\beta }}\left( \hat{u}\right) +\Upsilon _{ {\beta }}\left( \hat{u}\right) \le 1\right) \end{aligned}

### Definition 2

[22] A PFS $$\beta$$ on $$\hat{U}\ne \phi$$ is defined as

\begin{aligned} {\beta }=\left\langle {\mu }_{{\beta }}\left( \hat{u} \right),\eta _{{\beta }}\left( \breve{u}\right),\Upsilon _{{\beta }}\left( \hat{u}\right) \right\rangle \end{aligned}
(2)

where $$0\le \mu _{\beta }\left( \hat{u}\right),\eta _{\beta }\left( \breve{ u}\right),\Upsilon _{\beta }\left( \hat{u}\right) \le 1$$ are called the membership, neutral, and non-membership degrees, respectively, satisfying the condition $$\mu _{\beta }\left( \hat{u} \right) +\eta _{\beta }\left( \breve{u}\right) +\Upsilon _{\beta }\left( \hat{u}\right) \in \left[ 0,1\right]$$ for all $$\hat{u}\in \hat{U}.$$ Furthermore, for all $$\hat{u}\in \hat{U},$$ $$\Phi _{\beta }=1-\mu _{\beta }\left( \hat{u}\right) -\eta _{\beta }\left( \breve{u}\right) -\Upsilon _{\beta }\left( \hat{u}\right)$$ is said to be the degree of refusal membership, and the triple $$\left\langle \mu _{\beta }\left( \hat{u}\right),\eta _{\beta }\left( \breve{u}\right),\Upsilon _{\beta }\left( \hat{u} \right) \right\rangle$$ is called the PFN or PFV. Note that every IFS can be written as

\begin{aligned} {\beta }=\left\{ \left\langle {\mu }_{{\beta }}\left( \hat{u}\right),0,{\Upsilon }_{{\beta }}\left( \hat{u}\right) \right\rangle |\hat{u}\in \hat{U}\right\} \end{aligned}
(3)

If we allow $$\eta _{\beta }\left( \breve{u}\right) \ne 0$$ in Eq. (3), then we obtain a PFS.

Basically, PFS models are used in cases where human opinions involve more answers, i.e., “yes”, “no”, “abstain”, and “refusal”. A group of students of a department can be a good example of a PFS. Suppose a group of students may visit two places, Islamabad and Lahore. Some students want to visit Islamabad (membership) but not Lahore (non-membership), some want to visit Lahore but not Islamabad, some want to visit both places, i.e., the neutral students, and a few students do not want to visit either place, i.e., refusal.
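As a concrete illustration of Definition 2, a PFN and its refusal degree can be sketched in Python (the class name `PFN` and the field names `mu`, `eta`, `ups`, `refusal` are our own, not from the paper):

```python
# A sketch of a picture fuzzy number (PFN); the class and field names
# are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class PFN:
    mu: float   # positive membership degree
    eta: float  # neutral membership degree
    ups: float  # negative membership degree

    def __post_init__(self):
        if not all(0.0 <= d <= 1.0 for d in (self.mu, self.eta, self.ups)):
            raise ValueError("each degree must lie in [0, 1]")
        if self.mu + self.eta + self.ups > 1.0 + 1e-12:
            raise ValueError("mu + eta + ups must not exceed 1")

    @property
    def refusal(self) -> float:
        # degree of refusal membership: 1 - mu - eta - ups
        return 1.0 - self.mu - self.eta - self.ups

# the voting example: yes / abstain / no, with the remainder as refusal
vote = PFN(mu=0.5, eta=0.2, ups=0.2)
print(round(vote.refusal, 2))  # 0.1
```

Constructing `PFN(0.7, 0.3, 0.2)` raises a `ValueError`, since the three degrees then sum to more than 1.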

### Definition 3

[22] Let $$\beta =\left\langle \mu _{\beta }\left( \hat{u}\right),\eta _{\beta }\left( \breve{u}\right),\Upsilon _{\beta }\left( \hat{u}\right) \right\rangle$$ and $$\varrho =\left\langle \mu _{\varrho }\left( \hat{u} \right),\eta _{\varrho }\left( \breve{u}\right),\Upsilon _{\varrho }\left( \hat{u}\right) \right\rangle$$ be two PFNs on $$\hat{U}.$$ Then:

1. 1)

$$\beta \otimes \varrho =\left\langle \begin{array}{c} \mu _{\beta }\left( \hat{u}\right) .\mu _{\varrho }\left( \hat{u}\right),\eta _{\beta }\left( \breve{u}\right) +\eta _{\varrho }\left( \breve{u} \right) -\eta _{\beta }\left( \breve{u}\right) .\eta _{\varrho }\left( \breve{u}\right), \\ \Upsilon _{\beta }\left( \hat{u}\right) +\Upsilon _{\varrho }\left( \hat{u}\right) -\Upsilon _{\beta }\left( \hat{u}\right) .\Upsilon _{\varrho }\left( \hat{u}\right) \end{array} \right\rangle .$$

2. 2)

$$\beta \oplus \varrho =\left\langle \begin{array}{c} \mu _{\beta }\left( \hat{u}\right) +\mu _{\varrho }\left( \hat{u}\right) -\mu _{\beta }\left( \hat{u}\right) .\mu _{\varrho }\left( \hat{u}\right), \\ \eta _{\beta }\left( \breve{u}\right) .\eta _{\varrho }\left( \breve{u} \right),\Upsilon _{\beta }\left( \hat{u}\right) .\Upsilon _{\varrho }\left( \hat{u}\right) \end{array} \right\rangle .$$

3. 3)

$$\lambda .\beta =\left\langle 1-\left( 1-\mu _{\beta }\left( \hat{u} \right) \right) ^{\lambda },\left( \eta _{\beta }\left( \breve{u}\right) \right) ^{\lambda },\left( \Upsilon _{\beta }\left( \hat{u}\right) \right) ^{\lambda }\right\rangle$$

4. 4)

$$\beta ^{\lambda }=\left\langle \left( \mu _{\beta }\left( \hat{u} \right) \right) ^{\lambda },(1-(1-\eta _{\beta }\left( \breve{u}\right) )^{\lambda },(1-(1-\Upsilon _{\beta }\left( \hat{u}\right) )^{\lambda }\right\rangle$$
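The four algebraic operations of Definition 3 can be sketched in Python, with PFNs as plain `(mu, eta, ups)` tuples (the function names are illustrative, not from the paper):

```python
# Algebraic operations on PFNs represented as (mu, eta, ups) tuples;
# function names are illustrative.

def pfn_prod(b, r):
    # (1): beta (x) rho
    (m1, e1, u1), (m2, e2, u2) = b, r
    return (m1 * m2, e1 + e2 - e1 * e2, u1 + u2 - u1 * u2)

def pfn_sum(b, r):
    # (2): beta (+) rho
    (m1, e1, u1), (m2, e2, u2) = b, r
    return (m1 + m2 - m1 * m2, e1 * e2, u1 * u2)

def pfn_scale(lam, b):
    # (3): lambda . beta
    m, e, u = b
    return (1 - (1 - m) ** lam, e ** lam, u ** lam)

def pfn_power(b, lam):
    # (4): beta ^ lambda
    m, e, u = b
    return (m ** lam, 1 - (1 - e) ** lam, 1 - (1 - u) ** lam)
```

For example, `pfn_sum((0.5, 0.2, 0.2), (0.4, 0.3, 0.2))` gives `(0.7, 0.06, 0.04)` up to floating point.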

### Definition 4

[22] Let $$\beta =\left\langle \mu _{\beta }\left( \hat{u}\right),\eta _{\beta }\left( \breve{u}\right),\Upsilon _{\beta }\left( \hat{u}\right) \right\rangle$$ be a PFN; the score and accuracy functions of $$\beta$$ are defined as

\begin{aligned} S\left( {\beta }\right) ={\mu }_{{\beta }}-\eta _{{\beta }}-{\Upsilon }_{{\beta }}. \end{aligned}
(4)

And

\begin{aligned} H\left( {\beta }\right) = \mu _{{\beta }}+\eta _{{\beta }}+{\Upsilon }_{{\beta }}. \end{aligned}
(5)

### Definition 5

[22] Let $$\beta =\left\langle \mu _{\beta }\left( \hat{u} \right),\eta _{\beta }\left( \breve{u}\right), \Upsilon _{\beta }\left( \hat{u}\right) \right\rangle$$ and $$\varrho =\left\langle \mu _{\varrho }\left( \hat{u}\right), \eta _{\varrho }\left( \breve{u}\right), \Upsilon _{\varrho }\left( \hat{u}\right) \right\rangle$$ be two PFNs. Then the following comparison rules can be used.

1. (1)

if $$S\left( \beta \right) >S\left( \varrho \right), \,$$ then $$\beta >\varrho$$

2. (2)

if $$S\left( \beta \right) =S\left( \varrho \right),$$ then

• if $$H\left( \beta \right) >H\left( \varrho \right),$$ then $$\beta >\varrho$$

• if $$H\left( \beta \right) =H\left( \varrho \right),$$ then $$\beta =\varrho$$
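The score function (Eq. 4), accuracy function (Eq. 5), and the comparison rules of Definition 5 can be sketched as follows (illustrative Python; function names are our own):

```python
# Score, accuracy, and PFN comparison per Definitions 4 and 5.

def score(b):
    mu, eta, ups = b
    return mu - eta - ups        # S(beta), Eq. (4)

def accuracy(b):
    mu, eta, ups = b
    return mu + eta + ups        # H(beta), Eq. (5)

def ranks_higher(b, r):
    # Definition 5: compare by score first; break ties by accuracy.
    if score(b) != score(r):
        return score(b) > score(r)
    return accuracy(b) > accuracy(r)

print(ranks_higher((0.6, 0.1, 0.1), (0.5, 0.2, 0.2)))  # True
```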

### Definition 6

[22] Let $$\left( \beta _{1},\beta _{2},\ldots, \beta _{n}\right)$$ be a family of PFNs. Then the picture fuzzy weighted averaging (PFWA) operator is defined as follows:

\begin{aligned}&\hbox {PFWA}\left( {\beta }_{1},{\beta }_{2},\ldots, {\beta } _{n}\right) \nonumber \\ &\quad =\left\{ 1-\prod\limits_{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}},\right. \left. \prod\limits_{p=1}^{n}\eta _{{\beta } _{p}}^{\varpi _{p}},\prod\limits_{p=1}^{n}{\Upsilon }_{{\beta } _{p}}^{\varpi _{p}}\right\} . \end{aligned}
(6)

### Definition 7

[22] Let $$\left( \beta _{1},\beta _{2},\ldots, \beta _{n}\right)$$ be a family of PFNs. Then the picture fuzzy ordered weighted averaging (PFOWA) operator is defined as

\begin{aligned}&{\hbox {PFOWA}}\left( {\beta }_{1},{\beta }_{2},\ldots, {\beta } _{n}\right) \nonumber \\ &\quad =\left\{ 1-\prod\limits_{p=1}^{n}(1-{\mu }_{{\beta }_{\delta \left( p \right) }})^{\varpi _{p}},\right. \left. \prod\limits_{p=1}^{n}\eta _{{\beta }_{\delta \left( p\right) }}^{\varpi _{p}},\prod\limits_{p=1}^{n}{\Upsilon }_{{ \beta }_{\delta \left( p\right) }}^{\varpi _{p}}\right\}, \end{aligned}
(7)

where $$\left( \delta (1),\delta (2),\ldots, \delta (n)\right)$$ is a permutation of $$(1,2,\ldots, n)$$ such that $$\beta _{\delta \left( p \right) }\le _{L^{*}}\beta _{\delta \left( p-1\right) }$$ for all $$p=2,3,\ldots, n$$, and $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ is the associated weight vector of the PFOWA operator such that $$\varpi _{p}\in \left[ 0,1\right]$$ $$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1.$$
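Both averaging operators can be sketched in Python. The ordering $$\le _{L^{*}}$$ is not spelled out in this section, so sorting by score with accuracy as tie-breaker (Definition 5) is an assumption of this sketch; the weights are assumed nonnegative and summing to 1:

```python
# PFWA (Eq. 6) and PFOWA (Eq. 7) on (mu, eta, ups) tuples;
# function names are illustrative.
from math import prod

def pfwa(pfns, weights):
    # Eq. (6): picture fuzzy weighted averaging.
    mu  = 1 - prod((1 - m) ** w for (m, _, _), w in zip(pfns, weights))
    eta = prod(e ** w for (_, e, _), w in zip(pfns, weights))
    ups = prod(u ** w for (_, _, u), w in zip(pfns, weights))
    return (mu, eta, ups)

def pfowa(pfns, weights):
    # Eq. (7): reorder the arguments in descending order before weighting.
    # Descending by score, ties by accuracy, is an assumption here.
    ordered = sorted(pfns,
                     key=lambda b: (b[0] - b[1] - b[2], b[0] + b[1] + b[2]),
                     reverse=True)
    return pfwa(ordered, weights)
```

Note that aggregating identical PFNs with any normalized weights returns that PFN, and `pfowa` is invariant to the input order of its arguments.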

## Einstein operations of picture fuzzy sets

In this part of the paper, we present the Einstein operations and discuss some basic properties of these operations on PFSs. Let the t-norm $$T$$ and t-conorm $$S$$ be the Einstein product $$T_{\varepsilon }$$ and the Einstein sum $$S_{\varepsilon }$$, respectively; then the generalized union and intersection of two PFSs $$\beta$$ and $$\varrho$$ become the Einstein sum and Einstein product, respectively, as follows:

\begin{aligned} {\beta }\oplus _{\varepsilon }\varrho& = {} \left\langle \hat{u},S_{\varepsilon }({\mu }_{{\beta }}\left( \hat{u}\right), \right. {\mu }_{\varrho }\left( \hat{u}\right) ),T_{\varepsilon }(\eta _{ {\beta }}\left( \breve{u}\right), \eta _{\varrho }\left( \breve{u} \right) ),\nonumber \\ &\left. T_{\varepsilon }({\Upsilon }_{{\beta }}\left( \hat{u} \right), {\Upsilon }_{\varrho }\left( \hat{u}\right) )|\hat{u}\in \hat{U}\right\rangle . \end{aligned}
(8)
\begin{aligned} {\beta }\otimes _{\varepsilon }\varrho& = {} \left\langle \hat{u},T_{\varepsilon }({\mu }_{{\beta }}\left( \hat{u}\right), \right. {\mu }_{\varrho }\left( \hat{u}\right) ),S_{\varepsilon }(\eta _{ {\beta }}\left( \breve{u}\right), \eta _{\varrho }\left( \breve{u} \right) ),\nonumber \\ &\left. S_{\varepsilon }({\Upsilon }_{{\beta }}\left( \hat{u} \right), {\Upsilon }_{\varrho }\left( \hat{u}\right) )|\hat{u}\in \hat{U}\right\rangle \end{aligned}
(9)
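On membership grades in $$[0,1]$$, the Einstein t-norm and t-conorm take the following concrete form (a minimal Python sketch; function names are our own):

```python
# Einstein t-norm (product) and t-conorm (sum) on [0, 1].

def t_eps(a, b):
    # Einstein product: T_eps(a, b) = ab / (1 + (1 - a)(1 - b))
    return (a * b) / (1 + (1 - a) * (1 - b))

def s_eps(a, b):
    # Einstein sum: S_eps(a, b) = (a + b) / (1 + ab)
    return (a + b) / (1 + a * b)

# boundary behaviour shared by every t-norm / t-conorm pair:
print(t_eps(1.0, 0.3), s_eps(0.0, 0.3))  # 0.3 0.3
```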

Furthermore, we can derive the following forms:

### Definition 8

Let $$\beta =\left\langle \mu _{\beta }\left( \hat{u}\right), \eta _{\beta }\left( \breve{u}\right), \Upsilon _{\beta }\left( \hat{u}\right) \right\rangle$$ and $$\varrho =\left\langle \mu _{\varrho }\left( \hat{u} \right), \eta _{\varrho }\left( \breve{u}\right), \Upsilon _{\varrho }\left( \hat{u}\right) \right\rangle$$ be two PFNs. Then

\begin{aligned} {\beta }\otimes _{\varepsilon }\varrho& = {} \left( \frac{{\mu }_{ {\beta }}.{\mu }_{\varrho }}{1+(1-{\mu }_{{\beta }}).\left( 1-{\mu }_{\varrho }\right) },\right. \left. \frac{\eta _{{\beta } }+\eta _{\varrho }}{1+\eta _{{\beta }}.\eta _{\varrho }},\frac{{ \Upsilon }_{{\beta }}+{\Upsilon }_{\varrho }}{1+{ \Upsilon }_{{\beta }}.{\Upsilon }_{\varrho }}\right) . \end{aligned}
(10)
\begin{aligned} {\beta }\oplus _{\varepsilon }\varrho& = {} \left( \frac{{\mu }_{ {\beta }}+{\mu }_{\varrho }}{1+{\mu }_{{\beta }}. {\mu }_{\varrho }},\right. \frac{\eta _{{\beta }}.\eta _{\varrho }}{ 1+(1-\eta _{{\beta }}).(1-\eta _{\varrho })},\nonumber \\ &\left. \frac{{\Upsilon } _{{\beta }}.{\Upsilon }_{\varrho }}{1+(1-{\Upsilon }_{ {\beta }}).(1-{\Upsilon }_{\varrho })}\right) \end{aligned}
(11)
\begin{aligned} {\lambda .}_{\varepsilon }{\beta }& = {} \left( \frac{[1+{\mu }_{ {\beta }}]^{\lambda }-[1-{\mu }_{{\beta }}]^{\lambda }}{ [1+{\mu }_{{\beta }}]^{\lambda }+[1-{\mu }_{{ \beta }}]^{\lambda }},\right. \frac{2[\eta _{{\beta }}]^{\lambda }}{[2-\eta _{{\beta }}]^{\lambda }+[\eta _{{\beta }}]^{\lambda }},\nonumber \\ &\left. \frac{2[ {\Upsilon }_{{\beta }}]^{\lambda }}{[2-{\Upsilon }_{ {\beta }}]^{\lambda }+[{\Upsilon }_{{\beta }}]^{\lambda }}\right) \end{aligned}
(12)
\begin{aligned} {\beta }^{\lambda }& = {} \left( \frac{2[{\mu }_{{\beta }}]^{\lambda }}{[2-{\mu }_{{\beta }}]^{\lambda }+[{\mu } _{{\beta }}]^{\lambda }},\right. \frac{[1+\eta _{{\beta }}]^{\lambda }-[1-\eta _{{\beta }}]^{\lambda }}{[1+\eta _{{\beta } }]^{\lambda }+[1-\eta _{{\beta }}]^{\lambda }},\nonumber \\ &\left. \frac{[1+{ \Upsilon }_{{\beta }}]^{\lambda }-[1-{\Upsilon }_{{ \beta }}]^{\lambda }}{[1+{\Upsilon }_{{\beta }}]^{\lambda }+[1-{\Upsilon }_{{\beta }}]^{\lambda }}\right) \end{aligned}
(13)
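The four Einstein operations of Definition 8 can be sketched in Python on `(mu, eta, ups)` tuples (function names are illustrative):

```python
# Einstein operations on PFNs per Eqs. (10)-(13).

def e_prod(b, r):
    # Eq. (10): Einstein product of two PFNs.
    (m1, e1, u1), (m2, e2, u2) = b, r
    return ((m1 * m2) / (1 + (1 - m1) * (1 - m2)),
            (e1 + e2) / (1 + e1 * e2),
            (u1 + u2) / (1 + u1 * u2))

def e_sum(b, r):
    # Eq. (11): Einstein sum of two PFNs.
    (m1, e1, u1), (m2, e2, u2) = b, r
    return ((m1 + m2) / (1 + m1 * m2),
            (e1 * e2) / (1 + (1 - e1) * (1 - e2)),
            (u1 * u2) / (1 + (1 - u1) * (1 - u2)))

def e_scale(lam, b):
    # Eq. (12): Einstein scalar multiplication, lambda .e beta.
    m, e, u = b
    return (((1 + m) ** lam - (1 - m) ** lam) / ((1 + m) ** lam + (1 - m) ** lam),
            2 * e ** lam / ((2 - e) ** lam + e ** lam),
            2 * u ** lam / ((2 - u) ** lam + u ** lam))

def e_power(b, lam):
    # Eq. (13): Einstein power, beta ^ lambda.
    m, e, u = b
    return (2 * m ** lam / ((2 - m) ** lam + m ** lam),
            ((1 + e) ** lam - (1 - e) ** lam) / ((1 + e) ** lam + (1 - e) ** lam),
            ((1 + u) ** lam - (1 - u) ** lam) / ((1 + u) ** lam + (1 - u) ** lam))
```

For instance, `e_scale(1, b)` returns `b` unchanged, and the membership component of `e_sum((0.5, ...), (0.5, ...))` is `1.0 / 1.25 = 0.8`.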

### Corollary 1

Let $$\beta$$ be a PFS and $$\lambda$$ be any positive real number; then $${\lambda .}_{\varepsilon }\beta$$ is also a PFS, i.e.,

\begin{aligned} 0\,\le\, & {} \left( \begin{array}{c} \frac{[ 1+{\mu }_{{\beta }}(\breve{u})]^{\lambda }-[1- {\mu }_{{\beta }}(\breve{u})]^{\lambda }}{[1+{\mu }_{ {\beta }}(\breve{u})]^{\lambda }+[1-{\mu }_{{\beta }}( \breve{u})]^{\lambda }} \\ +\,\frac{2[\eta _{{\beta }}(\breve{u})]^{\lambda }}{[2-\eta _{{ \beta }}(\breve{u})]^{\lambda }+[\eta _{{\beta }}(\breve{u} )]^{\lambda }}\\ +\,\frac{2[{\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }}{[2- {\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }+[{\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }} \end{array} \right) \,\le\, 1 \end{aligned}

### Proof

Since $$0\le \mu _{\beta }(\breve{u}),\eta _{\beta }(\breve{u}),\Upsilon _{\beta }( \breve{u})\le 1$$ and $$0\le \mu _{\beta }(\breve{u})+\eta _{\beta }(\breve{u})+\Upsilon _{\beta }(\breve{u})\le 1,$$ we have $$1-\Upsilon _{\beta }(\breve{u})\ge \mu _{\beta }(\breve{u})\ge 0,$$ so $$[1-\mu _{\beta }(\breve{u})]^{\lambda }\ge [\Upsilon _{\beta }(\breve{u} )]^{\lambda }$$ and likewise $$[1-\mu _{\beta }(\breve{u})]^{\lambda }\ge [\eta _{\beta }(\breve{u})]^{\lambda };$$ then we have

\begin{aligned}&\frac{[1+{\mu }_{{\beta }}(\breve{u})]^{\lambda }-[1- {\mu }_{{\beta }}(\breve{u})]^{\lambda }}{[1+{\mu }_{ {\beta }}(\breve{u})]^{\lambda }+[1-{\mu }_{{\beta }}( \breve{u})]^{\lambda }}\le \frac{[1+{\mu }_{{\beta }}( \breve{u})]^{\lambda }-[{\Upsilon }_{{\beta }}(\breve{u} )]^{\lambda }}{[1+{\mu }_{{\beta }}(\breve{u})]^{\lambda }+[ {\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }} \end{aligned}

and

\begin{aligned}&\frac{2[\eta _{{\beta }}(\breve{u})]^{\lambda }}{[2-\eta _{{ \beta }}(\breve{u})]^{\lambda }+[\eta _{{\beta }}(\breve{u} )]^{\lambda }}\le \frac{2[\eta _{{\beta }}(\breve{u})]^{\lambda }}{ [1+{\mu }_{{\beta }}(\breve{u})]^{\lambda }+[\eta _{{ \beta }}(\breve{u})]^{\lambda }}. \end{aligned}

and

\begin{aligned}&\frac{2[{\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }}{[2- {\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }+[{\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }}\le \frac{2[{\Upsilon }_{ {\beta }}(\breve{u})]^{\lambda }}{[1+{\mu }_{{\beta }}( \breve{u})]^{\lambda }+[{\Upsilon }_{{\beta }}(\breve{u} )]^{\lambda }}. \end{aligned}

Thus, from the above, we can write

\begin{aligned}&\frac{[1+{\mu }_{{\beta }}(\breve{u})]^{\lambda }-[1-{ \mu }_{{\beta }}(\breve{u})]^{\lambda }}{[1+{\mu }_{{ \beta }}(\breve{u})]^{\lambda }+[1-{\mu }_{{\beta }}(\breve{u} )]^{\lambda }} +\frac{2[\eta _{{\beta }}(\breve{u})]^{\lambda }}{ [2-\eta _{{\beta }}(\breve{u})]^{\lambda }+[\eta _{{\beta }}( \breve{u})]^{\lambda }} \\ &\quad +\frac{2[{\Upsilon }_{{\beta }}(\breve{u })]^{\lambda }}{[2-{\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }+[{\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }}\le 1 \end{aligned}

Furthermore, we have

\begin{aligned}&\frac{[1+{\mu _{\beta }}(\breve{u})]^{\lambda }-[1-{\mu }_{{\beta }}(\breve{u})]^{\lambda }}{[1+{\mu }_{{\beta } }(\breve{u})]^{\lambda }+[1-{\mu }_{{\beta }}(\breve{u} )]^{\lambda }} +\frac{2[\eta _{{\beta }}(\breve{u})]^{\lambda }}{ [2-\eta _{{\beta }}(\breve{u})]^{\lambda }+[\eta _{{\beta }}( \breve{u})]^{\lambda }} \\ &\quad +\frac{2[{\Upsilon }_{{\beta }}(\breve{u })]^{\lambda }}{[2-{\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }+[{\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }}=0 \end{aligned}

iff $$\mu _{\beta }(\breve{u})=\eta _{\beta }(\breve{u})=\Upsilon _{\beta }(\breve{u} )=0$$ and

\begin{aligned}&\frac{[ 1+{\mu }_{{\beta }}(\breve{u})]^{\lambda }-[1- {\mu }_{{\beta }}(\breve{u})]^{\lambda }}{[1+{\mu }_{ {\beta }}(\breve{u})]^{\lambda }+[1-{\mu }_{{\beta }}( \breve{u})]^{\lambda }} +\frac{2[\eta _{{\beta }}(\breve{u})]^{\lambda }}{[2-\eta _{{\beta }}(\breve{u})]^{\lambda }+[\eta _{{\beta } }(\breve{u})]^{\lambda }} \\ &\quad +\frac{2[{\Upsilon }_{{\beta }}( \breve{u})]^{\lambda }}{[2-{\Upsilon }_{{\beta }}(\breve{u} )]^{\lambda }+[{\Upsilon }_{{\beta }}(\breve{u})]^{\lambda }}=1 \end{aligned}

iff $$\mu _{\beta }(\breve{u})+\eta _{\beta }(\breve{u})+\Upsilon _{\beta }( \breve{u})=1.$$ Thus $${\lambda .}_{\varepsilon }\beta$$ is a PFS for any positive real number $${\lambda .}$$ $$\square$$

### Theorem 1

Let $$\beta$$ be a PFS and $$\lambda$$ any positive integer; then

\begin{aligned} {\lambda .}_{\varepsilon }{\beta }=\overbrace{{\beta }\oplus _{\varepsilon }{\beta }\oplus _{\varepsilon }\cdots \oplus _{\varepsilon }{\beta }.}^{\lambda } \end{aligned}

### Proof

We use mathematical induction to prove that the above result holds for every positive integer $${\lambda .}$$ Call this statement $$Q(\lambda ).$$ First, we show that $$Q(\lambda )$$ is true for $$\lambda =1.$$ Since

\begin{aligned} 1._{\varepsilon }{\beta }& = {} \left( \frac{[1+{\mu }_{{ \beta }}(\breve{u})]-[1-{\mu }_{{\beta }}(\breve{u})]}{[1+ {\mu }_{{\beta }}(\breve{u})]+[1-{\mu }_{{\beta } }(\breve{u})]}, \right. \\ &\frac{2[\eta _{{\beta }}(\breve{u})]}{[2-\eta _{{ \beta }}(\breve{u})]+[\eta _{{\beta }}(\breve{u})]},\left. \frac{2[{ \Upsilon }_{{\beta }}(\breve{u})]}{[2-{\Upsilon }_{{ \beta }}(\breve{u})]+[{\Upsilon }_{{\beta }}(\breve{u})]} \right) \\ & = {} \left\langle {\mu }_{{\beta }}\left( \hat{u}\right), \eta _{ {\beta }}\left( \breve{u}\right), {\Upsilon }_{{\beta } }\left( \hat{u}\right) \right\rangle = {\beta } \end{aligned}

Then $$Q(\lambda )$$ is true for $$\lambda =1,$$ i.e., $$Q(1)$$ holds. Assume that $$Q(\lambda )$$ holds for $$\lambda =k$$. Now we prove it for $$\lambda =k+1,$$ i.e., $$\left( k+1\right) ._{\varepsilon }\beta =\overbrace{\beta \oplus _{\varepsilon }\beta \oplus _{\varepsilon }\cdots \oplus _{\varepsilon }\beta }^{k+1}.$$ Using the Einstein sum on PFSs,

\begin{aligned}&\overbrace{{\beta }\oplus _{\varepsilon }{\beta }\oplus _{\varepsilon }\cdots \oplus _{\varepsilon }{\beta }}^{k+1} = \overbrace{{\beta }\oplus _{\varepsilon }{\beta }\oplus _{\varepsilon }\cdots \oplus _{\varepsilon }{\beta }}^{k}\oplus _{\varepsilon }{\beta }=k._{\varepsilon }{\beta }\oplus _{\varepsilon }{\beta } \\ &\quad =\left( \begin{array}{c} \frac{[ 1+{\mu }_{{\beta }}(\breve{u})]^{k+1}-[1-{ \mu }_{{\beta }}(\breve{u})]^{k+1}}{[1+{\mu }_{{\beta } }(\breve{u})]^{k+1}+[1-{\mu }_{{\beta }}(\breve{u})]^{k+1}}, \frac{2[\eta _{{\beta }}(\breve{u})]^{k+1}}{[2-\eta _{{\beta } }(\breve{u})]^{k+1}+[\eta _{{\beta }}(\breve{u})]^{k+1}}, \frac{2[{\Upsilon }_{{\beta }}(\breve{u})]^{k+1}}{[2-{ \Upsilon }_{{\beta }}(\breve{u})]^{k+1}+[{\Upsilon }_{{ \beta }}(\breve{u})]^{k+1}} \end{array} \right) \\ &\quad =(k+1)._{\varepsilon }{\beta } \end{aligned}

Hence, $$Q(k+1)$$ holds. Thus, by induction, $$Q(\lambda )$$ holds for every positive integer $$\lambda .$$ $$\square$$
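Theorem 1 can also be verified numerically for small integers; the sketch below compares $$k._{\varepsilon }\beta$$ (Eq. 12) with the $$k$$-fold Einstein sum (Eq. 11) (an illustration, not a proof; names are our own):

```python
# Numerical check of Theorem 1: for integer lambda, Einstein scalar
# multiplication agrees with the lambda-fold Einstein sum.

def e_sum(b, r):
    # Eq. (11): Einstein sum of two PFNs.
    (m1, e1, u1), (m2, e2, u2) = b, r
    return ((m1 + m2) / (1 + m1 * m2),
            (e1 * e2) / (1 + (1 - e1) * (1 - e2)),
            (u1 * u2) / (1 + (1 - u1) * (1 - u2)))

def e_scale(lam, b):
    # Eq. (12): Einstein scalar multiplication.
    m, e, u = b
    return (((1 + m) ** lam - (1 - m) ** lam) / ((1 + m) ** lam + (1 - m) ** lam),
            2 * e ** lam / ((2 - e) ** lam + e ** lam),
            2 * u ** lam / ((2 - u) ** lam + u ** lam))

beta = (0.5, 0.2, 0.1)
repeated = beta
for k in range(2, 7):
    repeated = e_sum(repeated, beta)      # beta (+)e ... (+)e beta, k copies
    direct = e_scale(k, beta)             # k .e beta
    assert all(abs(x - y) < 1e-10 for x, y in zip(direct, repeated))
print("Theorem 1 verified for k = 2..6")
```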

### Theorem 2

Let $$\beta =\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right)$$, $$\beta _{1}=(\mu _{\beta _{1}},\eta _{\beta _{1}},\Upsilon _{\beta _{1}})$$ and $$\beta _{2}=(\mu _{\beta _{2}},\eta _{\beta _{2}},\Upsilon _{\beta _{2}})$$ be three PFNs; then both $$\beta _{3}=\beta _{1}\oplus _{\varepsilon }\beta _{2}$$ and $$\beta _{4}={\lambda .}_{\varepsilon }\beta$$ ($$\lambda >0$$) are also PFNs.

### Proof

The result follows directly from Corollary 1. Below, we discuss some special cases of $$\lambda$$ and $$\beta .$$

1. (1)

If $$\beta =\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right) =(1,0,0),$$ i.e., $$\mu _{\beta }=1,$$ $$\eta _{\beta }=0$$ and $$\Upsilon _{\beta }=0,$$ then

\begin{aligned} {\lambda .}_{\varepsilon }{\beta }& = {} \left( \frac{[1+{\mu }_{ {\beta }}]^{\lambda }-[1-{\mu }_{{\beta }}]^{\lambda }}{ [1+{\mu }_{{\beta }}]^{\lambda }+[1-{\mu }_{{ \beta }}]^{\lambda }},\right. \frac{2[\eta _{{\beta }}]^{\lambda }}{[2-\eta _{{\beta }}]^{\lambda }+[\eta _{{\beta }}]^{\lambda }},\\ &\left. \frac{2[ {\Upsilon }_{{\beta }}]^{\lambda }}{[2-{\Upsilon }_{ {\beta }}]^{\lambda }+[{\Upsilon }_{{\beta }}]^{\lambda }}\right) = (1,0,0) \\ {\lambda .}_{\varepsilon }(1,0,0)& = {} (1,0,0) \end{aligned}
2. (2)

If $$\beta =\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right) =(0,1,1)$$ in this case $$\mu _{\beta }=0,$$$$\eta _{\beta }=1$$ and $$\Upsilon _{\beta }=1,$$ then

\begin{aligned} {\lambda .}_{\varepsilon }{\beta }& = {} \left( \frac{[1+{\mu }_{ {\beta }}]^{\lambda }-[1-{\mu }_{{\beta }}]^{\lambda }}{ [1+{\mu }_{{\beta }}]^{\lambda }+[1-{\mu }_{{ \beta }}]^{\lambda }},\right. \frac{2[\eta _{{\beta }}]^{\lambda }}{[2-\eta _{{\beta }}]^{\lambda }+[\eta _{{\beta }}]^{\lambda }},\\ &\left. \frac{2[ {\Upsilon }_{{\beta }}]^{\lambda }}{[2-{\Upsilon }_{ {\beta }}]^{\lambda }+[{\Upsilon }_{{\beta }}]^{\lambda }}\right) = (0,1,1) \\ {\lambda .}_{\varepsilon }(0,1,1)& = {} (0,1,1) \end{aligned}
3. (3)

If $$\beta =\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right) =(0,0,0)$$ in this case $$\mu _{\beta }=0,$$$$\eta _{\beta }=0$$ and $$\Upsilon _{\beta }=0,$$ then

\begin{aligned} {\lambda .}_{\varepsilon }{\beta }& = {} \left( \frac{[1+{\mu }_{ {\beta }}]^{\lambda }-[1-{\mu }_{{\beta }}]^{\lambda }}{ [1+{\mu }_{{\beta }}]^{\lambda }+[1-{\mu }_{{ \beta }}]^{\lambda }},\right. \frac{2[\eta _{{\beta }}]^{\lambda }}{[2-\eta _{{\beta }}]^{\lambda }+[\eta _{{\beta }}]^{\lambda }},\\ &\left. \frac{2[ {\Upsilon }_{{\beta }}]^{\lambda }}{[2-{\Upsilon }_{ {\beta }}]^{\lambda }+[{\Upsilon }_{{\beta }}]^{\lambda }}\right) = (0,0,0) \\ {\lambda .}_{\varepsilon }(0,0,0)& = {} (0,0,0) \end{aligned}
4. (4)

If $$\lambda \rightarrow 0$$ and $$0<\mu _{\beta },\eta _{\beta },\Upsilon _{\beta }<1,$$ then

\begin{aligned} {\lambda .}_{\varepsilon }{\beta }& = {} \left( \frac{[1+{\mu }_{ {\beta }}]^{\lambda }-[1-{\mu }_{{\beta }}]^{\lambda }}{ [1+{\mu }_{{\beta }}]^{\lambda }+[1-{\mu }_{{ \beta }}]^{\lambda }},\right. \frac{2[\eta _{{\beta }}]^{\lambda }}{[2-\eta _{{\beta }}]^{\lambda }+[\eta _{{\beta }}]^{\lambda }},\\ &\left. \frac{2[ {\Upsilon }_{{\beta }}]^{\lambda }}{[2-{\Upsilon }_{ {\beta }}]^{\lambda }+[{\Upsilon }_{{\beta }}]^{\lambda }}\right) \rightarrow (0,1,1)\\ \text {i.e., }{\lambda .}_{\varepsilon }{\beta }\rightarrow & {} (0,1,1),\ \text {as }\lambda \rightarrow 0 \end{aligned}
5. (5)

If $$\lambda \rightarrow +\infty$$ and $$0<\mu _{\beta },\eta _{\beta },\Upsilon _{\beta }<1,$$ then

Since

\begin{aligned}&\lim _{\lambda \rightarrow +\infty }\frac{[1+{\mu }_{{\beta } }]^{\lambda }-[1-{\mu }_{{\beta }}]^{\lambda }}{[1+{\mu }_{{\beta }}]^{\lambda }+[1-{\mu }_{{\beta }}]^{\lambda }} =\lim _{\lambda \rightarrow +\infty }\frac{1^{\lambda }-\left( \frac{1- {\mu }_{{\beta }}}{1+{\mu }_{{\beta }}}\right) ^{\lambda }}{1^{\lambda }+\left( \frac{1-{\mu }_{{\beta }}}{1+ {\mu }_{{\beta }}}\right) ^{\lambda }} \\ &\quad =\frac{1-0}{1+0}=1. \end{aligned}

And since $$0<\eta _{\beta }<1\Leftrightarrow 0<2\eta _{\beta }<2\Leftrightarrow \eta _{\beta }<2-\eta _{\beta }\Leftrightarrow 1<\frac{ 2-\eta _{\beta }}{\eta _{\beta }},$$ then $$\lim _{\lambda \rightarrow +\infty }\left( \frac{2-\eta _{\beta }}{\eta _{\beta }}\right) ^{\lambda }=+\infty ;$$ thus $$\lim _{\lambda \rightarrow +\infty }\frac{2(\eta _{\beta })^{\lambda } }{(2-\eta _{\beta })^{\lambda }+(\eta _{\beta })^{\lambda }}=\lim _{\lambda \rightarrow +\infty }\frac{2.1^{\lambda }}{\left( \frac{2-\eta _{\beta }}{\eta _{\beta }} \right) ^{\lambda }+1^{\lambda }}=0$$

And similarly $$0<\Upsilon _{\beta }<1\Leftrightarrow 0<2\Upsilon _{\beta }<2\Leftrightarrow \Upsilon _{\beta }<2-\Upsilon _{\beta }\Leftrightarrow 1< \frac{2-\Upsilon _{\beta }}{\Upsilon _{\beta }},$$ then $$\lim _{\lambda \rightarrow +\infty }\left( \frac{2-\Upsilon _{\beta }}{\Upsilon _{\beta }} \right) ^{\lambda }=+\infty ;$$ thus $$\lim _{\lambda \rightarrow +\infty } \frac{2(\Upsilon _{\beta })^{\lambda }}{(2-\Upsilon _{\beta })^{\lambda }+(\Upsilon _{\beta })^{\lambda }}=\lim _{\lambda \rightarrow +\infty }\frac{2.1^{\lambda }}{\left( \frac{2-\Upsilon _{\beta }}{\Upsilon _{\beta }}\right) ^{\lambda }+1^{\lambda }}=0.$$ Hence $${\lambda .}_{\varepsilon }\beta \rightarrow (1,0,0)$$ as $$\lambda \rightarrow +\infty .$$

6. (6)

If $$\lambda =1,$$ then

\begin{aligned} {\lambda .}_{\varepsilon }{\beta }& = {} \left( \frac{[1+{\mu }_{ {\beta }}]^{\lambda }-[1-{\mu }_{{\beta }}]^{\lambda }}{ [1+{\mu }_{{\beta }}]^{\lambda }+[1-{\mu }_{{ \beta }}]^{\lambda }},\right. \frac{2[\eta _{{\beta }}]^{\lambda }}{[2-\eta _{{\beta }}]^{\lambda }+[\eta _{{\beta }}]^{\lambda }},\\ &\left. \frac{2[ {\Upsilon }_{{\beta }}]^{\lambda }}{[2-{\Upsilon }_{ {\beta }}]^{\lambda }+[{\Upsilon }_{{\beta }}]^{\lambda }}\right) = \left( \frac{[1+{\mu }_{{\beta }}]-[1-{\mu }_{{ \beta }}]}{[1+{\mu }_{{\beta }}]+[1-{\mu }_{{ \beta }}]},\right. \\ &\frac{2[\eta _{{\beta }}]}{[2-\eta _{{\beta } }]+[\eta _{{\beta }}]},\left. \frac{2[{\Upsilon }_{{\beta }}]}{ [2-{\Upsilon }_{{\beta }}]+[{\Upsilon }_{{\beta } }]}\right) \\ & = {} \left( {\mu }_{{\beta }},\eta _{{\beta }},{ \Upsilon }_{{\beta }}\right) \\ \ i.e;\ {\lambda .}_{\varepsilon }{\beta }& = {} {\beta },\ \ \text { when }(\lambda =1) \end{aligned}

$$\square$$
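Theorem 2's closure claim for the Einstein sum can be spot-checked numerically; the sketch below samples random valid PFNs and verifies that each component of $$\beta _{1}\oplus _{\varepsilon }\beta _{2}$$ lies in $$[0,1]$$ and that the components still sum to at most 1 (an illustration, not a proof; helper names are our own):

```python
# Random spot-check that beta1 (+)e beta2 is again a PFN.
import random

def e_sum(b, r):
    # Eq. (11): Einstein sum of two PFNs.
    (m1, e1, u1), (m2, e2, u2) = b, r
    return ((m1 + m2) / (1 + m1 * m2),
            (e1 * e2) / (1 + (1 - e1) * (1 - e2)),
            (u1 * u2) / (1 + (1 - u1) * (1 - u2)))

def random_pfn(rng):
    # sample (mu, eta, ups) with mu + eta + ups <= 1
    m = rng.random()
    e = rng.random() * (1 - m)
    u = rng.random() * (1 - m - e)
    return (m, e, u)

rng = random.Random(0)
ok = True
for _ in range(10_000):
    s = e_sum(random_pfn(rng), random_pfn(rng))
    ok = ok and all(0.0 <= c <= 1.0 for c in s) and sum(s) <= 1.0 + 1e-9
print(ok)  # True
```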

### Proposition 1

Let $$\beta =\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right),$$ $$\beta _{1}=(\mu _{\beta _{1}},\eta _{\beta _{1}},\Upsilon _{\beta _{1}})$$ and $$\beta _{2}=(\mu _{\beta _{2}},\eta _{\beta _{2}},\Upsilon _{\beta _{2}})$$ be three PFNs and $$\lambda, \lambda _{1},\lambda _{2}>0;$$ then we have

1. (1)

$$\beta _{1}\oplus _{\varepsilon }\beta _{2}=\beta _{2}\oplus _{\varepsilon }\beta _{1};$$

2. (2)

$${\lambda .}_{\varepsilon }\left( \beta _{1}\oplus _{\varepsilon }\beta _{2}\right) = {\lambda .}_{\varepsilon }\beta _{1}\oplus _{\varepsilon }\lambda ._{\varepsilon }\beta _{2};$$

3. (3)

$$\lambda _{1}._{\varepsilon }\beta \oplus _{\varepsilon }\lambda _{2}._{\varepsilon }\beta =(\lambda _{1}+\lambda _{2})._{\varepsilon }\beta ;$$

4. (4)

$$(\lambda _{1}.\lambda _{2})._{\varepsilon }\beta =\lambda _{1}._{\varepsilon }(\lambda _{2}._{\varepsilon }\beta ).$$

### Proof

(1) It is trivial.

(2) We transform

\begin{aligned}&{\beta }_{1}\oplus _{\varepsilon }{\beta }_{2} =\left( \begin{array}{c} \frac{{\mu }_{{\beta }_{1}}+{\mu }_{{\beta }_{2}} }{1+{\mu }_{{\beta }_{1}}.{\mu }_{{\beta }_{2}}}, \frac{\eta _{{\beta }_{1}}.\eta _{{\beta }_{2}}}{1+(1-\eta _{ {\beta }_{1}})(1-\eta _{{\beta }_{2}})}, \\ \frac{{\Upsilon }_{{\beta }_{1}}.{\Upsilon }_{{ \beta }_{2}}}{1+(1-{\Upsilon }_{{\beta }_{1}})(1-{ \Upsilon }_{{\beta }_{2}})} \end{array} \right) \end{aligned}

into the following form

\begin{aligned}&{\beta }_{1}\oplus _{\varepsilon }{\beta }_{2} = \left( \begin{array}{c} \frac{(1+{\mu }_{{\beta }_{1}})(1+{\mu }_{{\beta }_{2}})-(1-{\mu }_{{\beta }_{1}})(1-{\mu }_{{ \beta }_{2}})}{(1+{\mu }_{{\beta }_{1}})(1+{\mu }_{ {\beta }_{2}})+(1-{\mu }_{{\beta }_{1}})(1-{\mu } _{{\beta }_{2}})}, \\ \frac{2\eta _{{\beta }_{1}}.\eta _{{\beta }_{2}}}{(2-\eta _{ {\beta }_{1}})(2-\eta _{{\beta }_{2}})+\eta _{{\beta } _{1}}.\eta _{{\beta }_{2}}}, \\ \frac{2{\Upsilon }_{{\beta }_{1}}.{\Upsilon }_{{ \beta }_{2}}}{(2-{\Upsilon }_{{\beta }_{1}})(2-{ \Upsilon }_{{\beta }_{2}})+{\Upsilon }_{{\beta }_{1}}. {\Upsilon }_{{\beta }_{2}}} \end{array} \right) \end{aligned}

and let $$a=(1+\mu _{\beta _{1}})(1+\mu _{\beta _{2}}),$$$$b=(1-\mu _{\beta _{1}})(1-\mu _{\beta _{2}}),$$$$c=\eta _{\beta _{1}}.\eta _{\beta _{2}},d=(2-\eta _{\beta _{1}})(2-\eta _{\beta _{2}}),$$

$$e=\Upsilon _{\beta _{1}}.\Upsilon _{\beta _{2}},$$$$f=(2-\Upsilon _{\beta _{1}})(2-\Upsilon _{\beta _{2}});$$

\begin{aligned}&{\beta }_{1}\oplus _{\varepsilon }{\beta }_{2}\\ &\quad =\left( \frac{ a-b}{a+b},\frac{2c}{d+c},\frac{2e}{f+e}\right) \end{aligned}

By the Einstein operation law, we have

\begin{aligned}&{\lambda .}_{\varepsilon }\left( {\beta }_{1}\oplus _{\varepsilon } {\beta }_{2}\right) \\ &\quad = {\lambda .}_{\varepsilon }\left( \frac{a-b}{a+b},\frac{2c}{d+c},\frac{2e}{f+e}\right) \\ &\quad =\left( \begin{array}{c} \frac{\left( 1+\frac{a-b}{a+b}\right) ^{\lambda }-\left( 1-\frac{a-b}{a+b} \right) ^{\lambda }}{\left( 1+\frac{a-b}{a+b}\right) ^{\lambda }+\left( 1- \frac{a-b}{a+b}\right) ^{\lambda }}, \\ \frac{2\left( \frac{2c}{d+c}\right) ^{\lambda }}{\left( 2-\frac{2c}{d+c} \right) ^{\lambda }+\left( \frac{2c}{d+c}\right) ^{\lambda }}, \\ \frac{2\left( \frac{2e}{f+e}\right) ^{\lambda }}{\left( 2-\frac{2e}{f+e} \right) ^{\lambda }+\left( \frac{2e}{f+e}\right) ^{\lambda }} \end{array} \right) \\ &\quad =\left( \frac{a^{\lambda }-b^{\lambda }}{a^{\lambda }+b^{\lambda }}, \frac{2c^{\lambda }}{d^{\lambda }+c^{\lambda }},\frac{2e^{\lambda }}{ f^{\lambda }+e^{\lambda }}\right) \\ &\quad =\left( \begin{array}{c} \frac{(1+\mu _{\beta _{1}})^{\lambda }(1+\mu _{\beta _{2}})^{\lambda }-(1-\mu _{\beta _{1}})^{\lambda }(1-\mu _{\beta _{2}})^{\lambda }}{(1+\mu _{\beta _{1}})^{\lambda }(1+\mu _{\beta _{2}})^{\lambda }+(1-\mu _{\beta _{1}})^{\lambda }(1-\mu _{\beta _{2}})^{\lambda }}, \\ \frac{2(\eta _{\beta _{1}})^{\lambda }(\eta _{\beta _{2}})^{\lambda }}{(2-\eta _{\beta _{1}})^{\lambda }(2-\eta _{\beta _{2}})^{\lambda }+(\eta _{\beta _{1}})^{\lambda }(\eta _{\beta _{2}})^{\lambda }}, \\ \frac{2(\Upsilon _{\beta _{1}})^{\lambda }(\Upsilon _{\beta _{2}})^{\lambda }}{(2-\Upsilon _{\beta _{1}})^{\lambda }(2-\Upsilon _{\beta _{2}})^{\lambda }+(\Upsilon _{\beta _{1}})^{\lambda }(\Upsilon _{\beta _{2}})^{\lambda }} \end{array} \right) \end{aligned}

\begin{aligned}&{\lambda .}_{\varepsilon }{\beta }_{1}\\ &\quad =\left( \begin{array}{c} \frac{\left( 1+\mu _{\beta _{1}}\right) ^{\lambda }-\left( 1-\mu _{\beta _{1}}\right) ^{\lambda }}{\left( 1+\mu _{\beta _{1}}\right) ^{\lambda }+\left( 1-\mu _{\beta _{1}}\right) ^{\lambda }},\frac{2\left( \eta _{\beta _{1}}\right) ^{\lambda }}{\left( 2-\eta _{\beta _{1}}\right) ^{\lambda }+\left( \eta _{\beta _{1}}\right) ^{\lambda }}, \\ \frac{2\left( \Upsilon _{\beta _{1}}\right) ^{\lambda }}{\left( 2-\Upsilon _{\beta _{1}}\right) ^{\lambda }+\left( \Upsilon _{\beta _{1}}\right) ^{\lambda }} \end{array} \right) \end{aligned}

and

\begin{aligned}&{\lambda .}_{\varepsilon }{\beta }_{2} \\ &\quad =\left( \begin{array}{c} \frac{\left( 1+\mu _{\beta _{2}}\right) ^{\lambda }-\left( 1-\mu _{\beta _{2}}\right) ^{\lambda }}{\left( 1+\mu _{\beta _{2}}\right) ^{\lambda }+\left( 1-\mu _{\beta _{2}}\right) ^{\lambda }},\frac{2\left( \eta _{\beta _{2}}\right) ^{\lambda }}{\left( 2-\eta _{\beta _{2}}\right) ^{\lambda }+\left( \eta _{\beta _{2}}\right) ^{\lambda }}, \\ \frac{2\left( \Upsilon _{\beta _{2}}\right) ^{\lambda }}{\left( 2-\Upsilon _{\beta _{2}}\right) ^{\lambda }+\left( \Upsilon _{\beta _{2}}\right) ^{\lambda }} \end{array} \right) \end{aligned}

let $$a_{1}=(1+\mu _{\beta _{1}})^{\lambda },$$ $$b_{1}=(1-\mu _{\beta _{1}})^{\lambda },$$ $$c_{1}=(\eta _{\beta _{1}})^{\lambda },$$ $$e_{1}=(\Upsilon _{\beta _{1}})^{\lambda },$$ $$a_{2}=(1+\mu _{\beta _{2}})^{\lambda },$$ $$b_{2}=(1-\mu _{\beta _{2}})^{\lambda },$$

$$c_{2}=(\eta _{\beta _{2}})^{\lambda },$$ $$e_{2}=(\Upsilon _{\beta _{2}})^{\lambda },$$ $$d_{1}=(2-\eta _{\beta _{1}})^{\lambda },$$ $$d_{2}=(2-\eta _{\beta _{2}})^{\lambda },$$ $$f_{1}=(2-\Upsilon _{\beta _{1}})^{\lambda },$$ $$f_{2}=(2-\Upsilon _{\beta _{2}})^{\lambda };$$ then

\begin{aligned}&{\lambda .}_{\varepsilon }{\beta }_{1}\\ &\quad =\left( \frac{a_{1}-b_{1}}{ a_{1}+b_{1}},\frac{2c_{1}}{d_{1}+c_{1}},\frac{2e_{1}}{f_{1}+e_{1}}\right) \end{aligned}

and

\begin{aligned}&{\lambda .}_{\varepsilon }{\beta }_{2} \\ &\quad =\left( \frac{a_{2}-b_{2}}{ a_{2}+b_{2}},\frac{2c_{2}}{d_{2}+c_{2}},\frac{2e_{2}}{f_{2}+e_{2}}\right) \end{aligned}

By the Einstein operation law, it follows that

\begin{aligned}&{\lambda .}_{\varepsilon }{\beta }_{1}\oplus _{\varepsilon }\ \lambda ._{\varepsilon }{\beta }_{2} =\left( \begin{array}{c} \frac{a_{1}-b_{1}}{a_{1}+b_{1}},\frac{2c_{1}}{d_{1}+c_{1}}, \\ \frac{2e_{1}}{f_{1}+e_{1}} \end{array} \right) \oplus _{\varepsilon }\left( \begin{array}{c} \frac{a_{2}-b_{2}}{a_{2}+b_{2}},\frac{2c_{2}}{d_{2}+c_{2}}, \frac{2e_{2}}{f_{2}+e_{2}} \end{array} \right) \\ &\quad =\left( \begin{array}{c} \frac{\frac{a_{1}-b_{1}}{a_{1}+b_{1}}+\frac{a_{2}-b_{2}}{a_{2}+b_{2}}}{1+ \frac{a_{1}-b_{1}}{a_{1}+b_{1}}.\frac{a_{2}-b_{2}}{a_{2}+b_{2}}},\frac{\frac{ 2c_{1}}{d_{1}+c_{1}}.\frac{2c_{2}}{d_{2}+c_{2}}}{1+\left( 1-\frac{2c_{1}}{ d_{1}+c_{1}}\right) .\left( 1-\frac{2c_{2}}{d_{2}+c_{2}}\right) }, \frac{\frac{2e_{1}}{f_{1}+e_{1}}.\frac{2e_{2}}{f_{2}+e_{2}}}{1+\left( 1- \frac{2e_{1}}{f_{1}+e_{1}}\right) .\left( 1-\frac{2e_{2}}{f_{2}+e_{2}} \right) } \end{array} \right) \\ &\quad =\left( \frac{a_{1}a_{2}-b_{1}b_{2}}{a_{1}a_{2}+b_{1}b_{2}},\right. \left. \frac{ 2c_{1}c_{2}}{d_{1}d_{2}+c_{1}c_{2}},\frac{2e_{1}e_{2}}{f_{1}f_{2}+e_{1}e_{2}} \right) \\ &\quad =\left( \begin{array}{c} \frac{\left( 1+\mu _{\beta _{1}}\right) ^{\lambda }\left( 1+\mu _{\beta _{2}}\right) ^{\lambda }-\left( 1-\mu _{\beta _{1}}\right) ^{\lambda }\left( 1-\mu _{\beta _{2}}\right) ^{\lambda }}{\left( 1+\mu _{\beta _{1}}\right) ^{\lambda }\left( 1+\mu _{\beta _{2}}\right) ^{\lambda }+\left( 1-\mu _{\beta _{1}}\right) ^{\lambda }\left( 1-\mu _{\beta _{2}}\right) ^{\lambda }}, \\ \frac{2\left( \eta _{\beta _{1}}\right) ^{\lambda }\left( \eta _{\beta _{2}}\right) ^{\lambda }}{\left( 2-\eta _{\beta _{1}}\right) ^{\lambda }\left( 2-\eta _{\beta _{2}}\right) ^{\lambda }+\left( \eta _{\beta _{1}}\right) ^{\lambda }\left( \eta _{\beta _{2}}\right) ^{\lambda }}, \\ \frac{2\left( \Upsilon _{\beta _{1}}\right) ^{\lambda }\left( \Upsilon _{\beta _{2}}\right) ^{\lambda }}{(2-\Upsilon _{\beta _{1}})^{\lambda }(2-\Upsilon _{\beta _{2}})^{\lambda }+\left( \Upsilon _{\beta _{1}}\right) ^{\lambda }\left( \Upsilon _{\beta _{2}}\right) ^{\lambda }}
\end{array} \right) \end{aligned}

Hence $${\lambda .}_{\varepsilon }\left( \beta _{1}\oplus _{\varepsilon }\beta _{2}\right) = {\lambda .}_{\varepsilon }\beta _{1}\oplus _{\varepsilon }\ {\lambda .}_{\varepsilon }\beta _{2}.$$

(3) Since

\begin{aligned}&\lambda _{1}._{\varepsilon }{\beta }\\ &\quad =\left( \begin{array}{c} \frac{\left( 1+{\mu }_{{\beta }}\right) ^{\lambda _{1}}-\left( 1-{\mu }_{{\beta }}\right) ^{\lambda _{1}}}{\left( 1+{\mu }_{{\beta }}\right) ^{\lambda _{1}}+\left( 1-{\mu }_{ {\beta }}\right) ^{\lambda _{1}}},\frac{2\left( \eta _{{\beta } }\right) ^{\lambda _{1}}}{\left( 2-\eta _{{\beta }}\right) ^{\lambda _{1}}+\left( \eta _{{\beta }}\right) ^{\lambda _{1}}}, \\ \frac{2\left( {\Upsilon }_{{\beta }}\right) ^{\lambda _{1}}}{ \left( 2-{\Upsilon }_{{\beta }}\right) ^{\lambda _{1}}+\left( {\Upsilon }_{{\beta }}\right) ^{\lambda _{1}}} \end{array} \right) \end{aligned}

and

\begin{aligned}&\lambda _{2}._{\varepsilon }{\beta }\\ &\quad =\left( \begin{array}{c} \frac{\left( 1+{\mu }_{{\beta }}\right) ^{\lambda _{2}}-\left( 1-{\mu }_{{\beta }}\right) ^{\lambda _{2}}}{\left( 1+{\mu }_{{\beta }}\right) ^{\lambda _{2}}+\left( 1-{\mu }_{ {\beta }}\right) ^{\lambda _{2}}},\frac{2\left( \eta _{{\beta } }\right) ^{\lambda _{2}}}{\left( 2-\eta _{{\beta }}\right) ^{\lambda _{2}}+\left( \eta _{{\beta }}\right) ^{\lambda _{2}}}, \\ \frac{2\left( {\Upsilon }_{{\beta }}\right) ^{\lambda _{2}}}{ \left( 2-{\Upsilon }_{{\beta }}\right) ^{\lambda _{2}}+\left( {\Upsilon }_{{\beta }}\right) ^{\lambda _{2}}} \end{array} \right) \end{aligned}

where $$\lambda _{1}>0,\lambda _{2}>0,$$ let $$a_{1}=(1+\mu _{\beta })^{\lambda _{1}},$$ $$b_{1}=(1-\mu _{\beta })^{\lambda _{1}},$$ $$c_{1}=(\eta _{\beta })^{\lambda _{1}},$$ $$e_{1}=(\Upsilon _{\beta })^{\lambda _{1}},$$

$$a_{2}=(1+\mu _{\beta })^{\lambda _{2}},$$ $$b_{2}=(1-\mu _{\beta })^{\lambda _{2}},$$ $$c_{2}=(\eta _{\beta })^{\lambda _{2}},$$ $$e_{2}=(\Upsilon _{\beta })^{\lambda _{2}},$$ $$d_{1}=(2-\eta _{\beta })^{\lambda _{1}},$$ $$d_{2}=(2-\eta _{\beta })^{\lambda _{2}},$$

$$f_{1}=(2-\Upsilon _{\beta })^{\lambda _{1}},$$$$f_{2}=(2-\Upsilon _{\beta })^{\lambda _{2}};$$then

\begin{aligned} \lambda _{1}._{\varepsilon }{\beta }=\left( \frac{a_{1}-b_{1}}{ a_{1}+b_{1}},\frac{2c_{1}}{d_{1}+c_{1}},\frac{2e_{1}}{f_{1}+e_{1}}\right) \end{aligned}

And

\begin{aligned} \ \lambda _{2}._{\varepsilon }{\beta }=\left( \frac{a_{2}-b_{2}}{ a_{2}+b_{2}},\frac{2c_{2}}{d_{2}+c_{2}},\frac{2e_{2}}{f_{2}+e_{2}}\right) \end{aligned}

By the Einstein operation law, it follows that

\begin{aligned}&\lambda _{1}._{\varepsilon }{\beta }\oplus _{\varepsilon }\ \lambda _{2}._{\varepsilon }{\beta } =\left( \begin{array}{c} \frac{a_{1}-b_{1}}{a_{1}+b_{1}},\frac{2c_{1}}{d_{1}+c_{1}}, \\ \frac{2e_{1}}{f_{1}+e_{1}} \end{array} \right) \oplus _{\varepsilon }\left( \begin{array}{c} \frac{a_{2}-b_{2}}{a_{2}+b_{2}},\frac{2c_{2}}{d_{2}+c_{2}}, \\ \frac{2e_{2}}{f_{2}+e_{2}} \end{array} \right) \\ &\quad =\left( \begin{array}{c} \frac{\frac{a_{1}-b_{1}}{a_{1}+b_{1}}+\frac{a_{2}-b_{2}}{a_{2}+b_{2}}}{1+ \frac{a_{1}-b_{1}}{a_{1}+b_{1}}.\frac{a_{2}-b_{2}}{a_{2}+b_{2}}},\frac{\frac{ 2c_{1}}{d_{1}+c_{1}}.\frac{2c_{2}}{d_{2}+c_{2}}}{1+\left( 1-\frac{2c_{1}}{ d_{1}+c_{1}}\right) .\left( 1-\frac{2c_{2}}{d_{2}+c_{2}}\right) }, \\ \frac{\frac{2e_{1}}{f_{1}+e_{1}}.\frac{2e_{2}}{f_{2}+e_{2}}}{1+\left( 1- \frac{2e_{1}}{f_{1}+e_{1}}\right) .\left( 1-\frac{2e_{2}}{f_{2}+e_{2}} \right) } \end{array} \right) \\ &\quad =\left( \frac{a_{1}a_{2}-b_{1}b_{2}}{a_{1}a_{2}+b_{1}b_{2}},\right. \left. \frac{ 2c_{1}c_{2}}{d_{1}d_{2}+c_{1}c_{2}},\frac{2e_{1}e_{2}}{f_{1}f_{2}+e_{1}e_{2}} \right) \\ &\quad =\left( \begin{array}{c} \frac{\left( 1+\mu _{\beta }\right) ^{\lambda _{1}}\left( 1+\mu _{\beta }\right) ^{\lambda _{2}}-\left( 1-\mu _{\beta }\right) ^{\lambda _{1}}\left( 1-\mu _{\beta }\right) ^{\lambda _{2}}}{\left( 1+\mu _{\beta }\right) ^{\lambda _{1}}\left( 1+\mu _{\beta }\right) ^{\lambda _{2}}+\left( 1-\mu _{\beta }\right) ^{\lambda _{1}}\left( 1-\mu _{\beta }\right) ^{\lambda _{2}}}, \\ \frac{2\left( \eta _{\beta }\right) ^{\lambda _{1}}\left( \eta _{\beta }\right) ^{\lambda _{2}}}{\left( 2-\eta _{\beta }\right) ^{\lambda _{1}}\left( 2-\eta _{\beta }\right) ^{\lambda _{2}}+\left( \eta _{\beta }\right) ^{\lambda _{1}}\left( \eta _{\beta }\right) ^{\lambda _{2}}}, \\ \frac{2\left( \Upsilon _{\beta }\right) ^{\lambda _{1}}\left( \Upsilon _{\beta }\right) ^{\lambda _{2}}}{(2-\Upsilon _{\beta })^{\lambda _{1}}(2-\Upsilon _{\beta })^{\lambda _{2}}+\left( \Upsilon _{\beta }\right) ^{\lambda _{1}}\left( \Upsilon _{\beta }\right) ^{\lambda _{2}}} \end{array}
\right) \\ &\quad =\left( \begin{array}{c} \frac{\left( 1+\mu _{\beta }\right) ^{\lambda _{1}+\lambda _{2}}-\left( 1-\mu _{\beta }\right) ^{\lambda _{1}+\lambda _{2}}}{\left( 1+\mu _{\beta }\right) ^{\lambda _{1}+\lambda _{2}}+\left( 1-\mu _{\beta }\right) ^{\lambda _{1}+\lambda _{2}}}, \frac{2\left( \eta _{\beta }\right) ^{\lambda _{1}+\lambda _{2}}}{\left( 2-\eta _{\beta }\right) ^{\lambda _{1}+\lambda _{2}}+\left( \eta _{\beta }\right) ^{\lambda _{1}+\lambda _{2}}}, \\ \frac{2\left( \Upsilon _{\beta }\right) ^{\lambda _{1}+\lambda _{2}}}{(2-\Upsilon _{\beta })^{\lambda _{1}+\lambda _{2}}+\left( \Upsilon _{\beta }\right) ^{\lambda _{1}+\lambda _{2}}} \end{array} \right) \\ &\quad =\left( \lambda _{1}+\lambda _{2}\right) ._{\varepsilon }{\beta } \\ &\quad \text {i.e., }\lambda _{1}._{\varepsilon }{\beta }\oplus _{\varepsilon }\ \lambda _{2}._{\varepsilon }{\beta } =\left( \lambda _{1}+\lambda _{2}\right) ._{\varepsilon }{\beta } \end{aligned}

(4) $$(\lambda _{1}.\lambda _{2})._{\varepsilon }\beta =\lambda _{1}._{\varepsilon }(\lambda _{2}._{\varepsilon }\beta )$$

Since

\begin{aligned}&\lambda _{2}._{\varepsilon }{\beta }\\ &\quad =\left( \begin{array}{c} \frac{\left( 1+{\mu }_{{\beta }}\right) ^{\lambda _{2}}-\left( 1-{\mu }_{{\beta }}\right) ^{\lambda _{2}}}{\left( 1+{\mu }_{{\beta }}\right) ^{\lambda _{2}}+\left( 1-{\mu }_{ {\beta }}\right) ^{\lambda _{2}}},\frac{2\left( \eta _{{\beta } }\right) ^{\lambda _{2}}}{\left( 2-\eta _{{\beta }}\right) ^{\lambda _{2}}+\left( \eta _{{\beta }}\right) ^{\lambda _{2}}}, \\ \frac{2\left( {\Upsilon }_{{\beta }}\right) ^{\lambda _{2}}}{ \left( 2-{\Upsilon }_{{\beta }}\right) ^{\lambda _{2}}+\left( {\Upsilon }_{{\beta }}\right) ^{\lambda _{2}}} \end{array} \right) \end{aligned}

Let $$a=(1+\mu _{\beta })^{\lambda _{2}},$$$$b=$$$$(1-\mu _{\beta })^{\lambda _{2}},$$$$c=\eta _{\beta },$$$$e=\Upsilon _{\beta },\ d=(2-\eta _{\beta })^{\lambda _{2}},$$$$f=(2-\Upsilon _{\beta })^{\lambda _{2}};$$

\begin{aligned} \lambda _{2}._{\varepsilon }{\beta }=\left( \frac{a-b}{a+b},\frac{2c}{ d+c},\frac{2e}{f+e}\right) \end{aligned}

By the Einstein law, it follows that

\begin{aligned}&\lambda _{1}._{\varepsilon }\left( \ \ \lambda _{2}._{\varepsilon }{\beta }\right) \\ &\quad =\lambda _{1}._{\varepsilon }\left( \frac{a-b}{a+b},\frac{ 2c}{d+c},\frac{2e}{f+e}\right) \\ &\quad =\left( \begin{array}{c} \frac{\left( 1+\frac{a-b}{a+b}\right) ^{\lambda _{1}}-\left( 1-\frac{a-b}{a+b }\right) ^{\lambda _{1}}}{\left( 1+\frac{a-b}{a+b}\right) ^{\lambda _{1}}+\left( 1-\frac{a-b}{a+b}\right) ^{\lambda _{1}}},\frac{2\left( \frac{2c }{d+c}\right) ^{\lambda _{1}}}{\left( 2-\frac{2c}{d+c}\right) ^{\lambda _{1}}+\left( \frac{2c}{d+c}\right) ^{\lambda _{1}}}, \\ \frac{2\left( \frac{2e}{f+e}\right) ^{\lambda _{1}}}{\left( 2-\frac{2e}{f+e} \right) ^{\lambda _{1}}+\left( \frac{2e}{f+e}\right) ^{\lambda _{1}}} \end{array} \right) \\ &\quad =\left( \frac{a^{\lambda _{1}}-b^{\lambda _{1}}}{a^{\lambda _{1}}+b^{\lambda _{1}}},\right. \\ &\qquad \left. \frac{2c^{\lambda _{1}}}{d^{\lambda _{1}}+c^{\lambda _{1}}},\frac{2e^{\lambda _{1}}}{f^{\lambda _{1}}+e^{\lambda _{1}}}\right) \\ &\quad =\left( \begin{array}{c} \frac{\left( 1+{\mu }_{{\beta }}\right) ^{\lambda _{1}\lambda _{2}}-\left( 1-{\mu }_{{\beta }}\right) ^{\lambda _{1}\lambda _{2}}}{\left( 1+{\mu }_{{\beta }}\right) ^{\lambda _{1}\lambda _{2}}+\left( 1-{\mu }_{{\beta }}\right) ^{\lambda _{1}\lambda _{2}}},\frac{2\left( \eta _{{\beta }}\right) ^{\lambda _{1}\lambda _{2}}}{\left( 2-\eta _{{\beta }}\right) ^{\lambda _{1}\lambda _{2}}+\left( \eta _{{\beta }}\right) ^{\lambda _{1}\lambda _{2}}}, \\ \frac{2\left( {\Upsilon }_{{\beta }}\right) ^{\lambda _{1}\lambda _{2}}}{\left( 2-{\Upsilon }_{{\beta }}\right) ^{\lambda _{1}\lambda _{2}}+\left( {\Upsilon }_{{\beta } }\right) ^{\lambda _{1}\lambda _{2}}} \end{array} \right) \\ &\quad =\ (\lambda _{1}.\lambda _{2})._{\varepsilon }{\beta } \end{aligned}

$$\square$$
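All four laws of Proposition 1 can be verified numerically. The sketch below is our own illustration (the helper names are not from the paper): it implements the Einstein sum and Einstein scalar multiple of PFNs and checks properties (1)–(4) on sample values.

```python
def e_scalar(lam, b):
    """lambda ._eps beta for a PFN b = (mu, eta, ups)."""
    mu, eta, ups = b
    return (((1 + mu)**lam - (1 - mu)**lam) / ((1 + mu)**lam + (1 - mu)**lam),
            2 * eta**lam / ((2 - eta)**lam + eta**lam),
            2 * ups**lam / ((2 - ups)**lam + ups**lam))

def e_sum(b1, b2):
    """beta1 (+)_eps beta2: Einstein sum on mu, Einstein product on eta, ups."""
    (m1, e1, u1), (m2, e2, u2) = b1, b2
    return ((m1 + m2) / (1 + m1 * m2),
            e1 * e2 / (1 + (1 - e1) * (1 - e2)),
            u1 * u2 / (1 + (1 - u1) * (1 - u2)))

def close(x, y, tol=1e-9):
    return all(abs(p - q) < tol for p, q in zip(x, y))

b1, b2 = (0.2, 0.4, 0.3), (0.3, 0.5, 0.1)   # sample PFNs
lam, l1, l2 = 2.0, 1.5, 0.7
assert close(e_sum(b1, b2), e_sum(b2, b1))                           # (1)
assert close(e_scalar(lam, e_sum(b1, b2)),
             e_sum(e_scalar(lam, b1), e_scalar(lam, b2)))            # (2)
assert close(e_sum(e_scalar(l1, b1), e_scalar(l2, b1)),
             e_scalar(l1 + l2, b1))                                  # (3)
assert close(e_scalar(l1 * l2, b1), e_scalar(l1, e_scalar(l2, b1)))  # (4)
```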

## Picture fuzzy Einstein arithmetic averaging operators

In this section, we develop some Einstein aggregation operators for picture fuzzy information, namely the picture fuzzy Einstein weighted averaging operator and the picture fuzzy Einstein ordered weighted averaging operator, and discuss some of their basic properties.

### Definition 9

Let $$\beta _{p}=(\mu _{\beta _{p}},\eta _{\beta _{p}},\Upsilon _{\beta _{p}}),\left( p =1,2,\ldots, n\right)$$ be a family of PFNs and $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ be the weighting vector of $$\beta _{p}\left( p =1,2,\ldots, n\right),$$ such that $$\varpi _{p}\in \left[ 0,1\right],$$$$\left( p =1,2,\ldots, n\right)$$ and $$\sum _{p =1}^{n}\varpi _{p}=1;$$ then, a PFEWA operator of dimension n is a mapping $${\hbox {PFEWA}}:\left( L^{*}\right) ^{n}\rightarrow L^{*},$$ and

\begin{aligned}&{\hbox {PFEWA}}({\beta }_{1},{\beta }_{2},{\ldots},{\beta } _{n}) \nonumber \\ &\quad =\varpi _{1}._{\varepsilon }{\beta }_{1}\oplus _{\varepsilon }\varpi _{2}._{\varepsilon }{\beta }_{2}\oplus _{\varepsilon }\ldots \oplus _{\varepsilon }\varpi _{n}._{\varepsilon }{\beta }_{n}. \end{aligned}
(14)

### Theorem 3

Let $$\beta _{p}=(\mu _{\beta _{p}},\eta _{\beta _{p}},\Upsilon _{\beta _{p}}),\left( p =1,2,\ldots, n\right)$$ be a family of PFNs; then their aggregated value obtained by the PFEWA operator is also a PFN, and

\begin{aligned}&{\hbox {PFEWA}}({\beta }_{1},{\beta }_{2},{\ldots},{\beta } _{n}) \nonumber \\ &\quad =\left\{ \begin{array}{c} \frac{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}-\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}},\frac{2\prod \nolimits _{p=1}^{n}(\eta _{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(2-\eta _{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(\eta _{{\beta }_{p}})^{\varpi _{p}}} \\, \frac{2\prod \nolimits _{p=1}^{n}({\Upsilon }_{{\beta } _{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(2-{\Upsilon }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}({\Upsilon }_{ {\beta }_{p}})^{\varpi _{p}}} \end{array} \right\},\end{aligned}
(15)

where $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ is the weighting vector of $$\beta _{p}\left( p=1,2,\ldots, n\right)$$ such that $$\varpi _{p}\in \left[ 0,1\right],$$ $$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1.$$

### Proof

The first result follows easily from Theorem 2. We now prove Eq. (15) by mathematical induction. Eq. (15) obviously holds for $$n=1.$$ Assume that Eq. (15) holds for $$n=k,$$ i.e.,

\begin{aligned}&{\hbox {PFEWA}}({\beta }_{1},{\beta }_{2},{\ldots},{\beta } _{k}) \\ &\quad =\left\{ \begin{array}{c} \frac{\prod \nolimits _{p=1}^{k}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}-\prod \nolimits _{p=1}^{k}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{k}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{k}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}},\frac{2\prod \nolimits _{p=1}^{k}(\eta _{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{k}(2-\eta _{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{k}(\eta _{{\beta }_{p}})^{\varpi _{p}}}, \\ \frac{2\prod \nolimits _{p=1}^{k}({\Upsilon }_{{\beta } _{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{k}(2-{\Upsilon }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{k}({\Upsilon }_{ {\beta }_{p}})^{\varpi _{p}}} \end{array} \right\} \end{aligned}

Then if $$n=k+1,$$ we have

\begin{aligned}&{\hbox {PFEWA}}({\beta }_{1},{\beta }_{2},{\ldots},{\beta } _{k+1}) \\ &\quad =\varpi _{1}._{\varepsilon }{\beta }_{1}\oplus _{\varepsilon }\cdots \oplus _{\varepsilon }\varpi _{k}._{\varepsilon }{\beta }_{k}\oplus _{\varepsilon }\varpi _{k+1}._{\varepsilon }{\beta }_{k+1} \\ &\quad = {\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots}, {\beta }_{k})\oplus _{\varepsilon }\varpi _{k+1}._{\varepsilon }{\beta } _{k+1}, \\ &\qquad \text {where }\varpi _{k+1}._{\varepsilon }{\beta }_{k+1}=\left( \begin{array}{c} \frac{\left( 1+\mu _{\beta _{k+1}}\right) ^{\varpi _{k+1}}-\left( 1-\mu _{\beta _{k+1}}\right) ^{\varpi _{k+1}}}{\left( 1+\mu _{\beta _{k+1}}\right) ^{\varpi _{k+1}}+\left( 1-\mu _{\beta _{k+1}}\right) ^{\varpi _{k+1}}}, \\ \frac{2\left( \eta _{\beta _{k+1}}\right) ^{\varpi _{k+1}}}{\left( 2-\eta _{\beta _{k+1}}\right) ^{\varpi _{k+1}}+\left( \eta _{\beta _{k+1}}\right) ^{\varpi _{k+1}}}, \\ \frac{2\left( \Upsilon _{\beta _{k+1}}\right) ^{\varpi _{k+1}}}{\left( 2-\Upsilon _{\beta _{k+1}}\right) ^{\varpi _{k+1}}+\left( \Upsilon _{\beta _{k+1}}\right) ^{\varpi _{k+1}}} \end{array} \right) \end{aligned}

Let $$a_{1}=\prod \nolimits _{p=1}^{k}(1+\mu _{\beta _{p}})^{\varpi _{p}},$$ $$b_{1}=\prod \nolimits _{p=1}^{k}(1-\mu _{\beta _{p}})^{\varpi _{p}},$$ $$c_{1}=\prod \nolimits _{p=1}^{k}(\eta _{\beta _{p}})^{\varpi _{p}},$$ $$d_{1}=\prod \nolimits _{p=1}^{k}(\Upsilon _{\beta _{p}})^{\varpi _{p}},e_{1}=\prod \nolimits _{p=1}^{k}(2-\eta _{\beta _{p}})^{\varpi _{p}},$$ $$f_{1}=\prod \nolimits _{p=1}^{k}(2-\Upsilon _{\beta _{p}})^{\varpi _{p}},a_{2}=\left( 1+\mu _{\beta _{k+1}}\right) ^{\varpi _{k+1}},b_{2}=\left( 1-\mu _{\beta _{k+1}}\right) ^{\varpi _{k+1}},c_{2}=\left( \eta _{\beta _{k+1}}\right) ^{\varpi _{k+1}},$$

$$d_{2}=\left( \Upsilon _{\beta _{k+1}}\right) ^{\varpi _{k+1}},e_{2}=\left( 2-\eta _{\beta _{k+1}}\right) ^{\varpi _{k+1}},f_{2}=\left( 2-\Upsilon _{\beta _{k+1}}\right) ^{\varpi _{k+1}};$$

Then, $${\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{k})=\left( \frac{a_{1}-b_{1}}{a_{1}+b_{1}},\frac{2c_{1}}{e_{1}+c_{1}},\frac{2d_{1}}{ f_{1}+d_{1}}\right)$$ and $$\varpi _{k+1}._{\varepsilon }\beta _{k+1}=\left( \frac{a_{2}-b_{2} }{a_{2}+b_{2}},\frac{2c_{2}}{e_{2}+c_{2}},\frac{2d_{2}}{f_{2}+d_{2}}\right) ;$$ thus,

by the Einstein operational law, we have

\begin{aligned}&{\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{k+1}) ={\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta } _{2},{\ldots},{\beta }_{k})\oplus _{\varepsilon }\varpi _{k+1}._{\varepsilon } {\beta }_{k+1} \\ &\quad =\left( \begin{array}{c} \frac{a_{1}-b_{1}}{a_{1}+b_{1}},\frac{2c_{1}}{e_{1}+c_{1}}, \frac{2d_{1}}{f_{1}+d_{1}} \end{array} \right) \oplus _{\varepsilon }\left( \begin{array}{c} \frac{a_{2}-b_{2}}{a_{2}+b_{2}},\frac{2c_{2}}{e_{2}+c_{2}}, \\ \frac{2d_{2}}{f_{2}+d_{2}} \end{array} \right) \\ &\quad =\left( \frac{a_{1}a_{2}-b_{1}b_{2}}{a_{1}a_{2}+b_{1}b_{2}},\right. \frac{ 2c_{1}c_{2}}{e_{1}e_{2}+c_{1}c_{2}},\\ &\qquad \left. \frac{2d_{1}d_{2}}{f_{1}f_{2}+d_{1}d_{2}} \right) \\ &\quad =\left\{ \begin{array}{c} \frac{\prod \nolimits _{p=1}^{k+1}(1+{\mu }_{{\beta } _{p}})^{\varpi _{p}}-\prod \nolimits _{p=1}^{k+1}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{k+1}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{k+1}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}},\frac{2\prod \nolimits _{p=1}^{k+1}(\eta _{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{k+1}(2-\eta _{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{k+1}(\eta _{{\beta } _{p}})^{\varpi _{p}}}, \\ \frac{2\prod \nolimits _{p=1}^{k+1}({\Upsilon }_{{\beta } _{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{k+1}(2-{\Upsilon }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{k+1}({\Upsilon }_{ {\beta }_{p}})^{\varpi _{p}}} \end{array} \right\} \end{aligned}

i.e., Eq. (15) holds for $$n=k+1.$$

Therefore, Eq. (15) holds for all n, which completes the proof of the theorem. $$\square$$
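Equation (15) translates directly into code. The following sketch is our own illustration (the function name and inputs are hypothetical, not from the paper): it aggregates a list of PFNs with the PFEWA operator by accumulating the six products appearing in Eq. (15).

```python
def pfewa(betas, weights):
    """PFEWA operator of Eq. (15): betas are PFNs (mu, eta, ups),
    weights lie in [0, 1] and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    pa = pb = pc = pd = pe = pf = 1.0
    for (mu, eta, ups), w in zip(betas, weights):
        pa *= (1 + mu)**w    # prod (1 + mu)^w
        pb *= (1 - mu)**w    # prod (1 - mu)^w
        pc *= eta**w         # prod eta^w
        pd *= (2 - eta)**w   # prod (2 - eta)^w
        pe *= ups**w         # prod ups^w
        pf *= (2 - ups)**w   # prod (2 - ups)^w
    return ((pa - pb) / (pa + pb),
            2 * pc / (pd + pc),
            2 * pe / (pf + pe))

# Per Theorem 3, the aggregate of PFNs is again a PFN (component sum <= 1).
mu, eta, ups = pfewa([(0.2, 0.4, 0.3), (0.3, 0.5, 0.1)], [0.5, 0.5])
assert 0.0 <= mu + eta + ups <= 1.0
```

With a single input and weight 1, the operator returns that PFN unchanged, which is a quick sanity check of the formula.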

### Lemma 1

[51, 54] Let $$\beta _{p}>0,\varpi _{p}>0,\left( p=1,2,\ldots, n\right),$$ and $$\sum _{p=1}^{n}\varpi _{p}=1;$$ then

\begin{aligned} \prod\limits_{p=1}^{n}\beta _{p}^{\varpi _{p}}\le \sum _{p=1}^{n}\varpi _{p}\beta _{p}, \end{aligned}

with equality if and only if $$\beta _{1}=\beta _{2}=\cdots =\beta _{n}.$$
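Lemma 1 is the weighted arithmetic–geometric mean inequality. A quick randomized spot-check (our own illustration, not part of the paper):

```python
import random

# Verify prod beta_p^{w_p} <= sum w_p * beta_p on random positive data.
random.seed(0)
for _ in range(1000):
    n = random.randint(2, 6)
    xs = [random.uniform(0.01, 10.0) for _ in range(n)]  # beta_p > 0
    ws = [random.random() for _ in range(n)]
    total = sum(ws)
    ws = [w / total for w in ws]                         # weights sum to 1
    geo = 1.0
    for x, w in zip(xs, ws):
        geo *= x**w                                      # weighted geometric mean
    ari = sum(w * x for x, w in zip(xs, ws))             # weighted arithmetic mean
    assert geo <= ari + 1e-9
```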

### Corollary 2

The PFWA and PFEWA operators satisfy the following relation

\begin{aligned}&{\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n}) \nonumber \\ &\quad \le PFWA_{\varpi }({\beta }_{1},{\beta } _{2},{\ldots},{\beta }_{n}) \end{aligned}
(16)

where $$\beta _{p}$$ $$\left( p=1,2,\ldots, n\right)$$ is a family of PFNs and $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ is the weighting vector of $$\beta _{p}\left( p=1,2,\ldots, n\right)$$ such that $$\varpi _{p}\in \left[ 0,1\right],$$ $$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1.$$

### Proof

Since

\begin{aligned}&\prod\limits_{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}} \\ &\qquad +\prod\limits_{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}} \\ &\quad \le \sum _{p=1}^{n}\varpi _{p}\left( 1+{\mu }_{{\beta } _{p}}\right) \\ &\qquad +\sum _{p=1}^{n}\varpi _{p}\left( 1-{\mu }_{{\beta }_{p}}\right) =2, \end{aligned}

then

\begin{aligned}&\frac{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}-\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}} \nonumber \\ &\quad =1-\frac{2\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta } _{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta } _{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta } _{p}})^{\varpi _{p}}} \nonumber \\ &\quad \le 1-\prod\limits_{p=1}^{n}(1-{\mu }_{{\beta } _{p}})^{\varpi _{p}} \end{aligned}
(17)

where that equality holds if and only if $$\mu _{\beta _{1}}=\mu _{\beta _{2}}=\cdots =\mu _{\beta _{n}}.$$

In addition, since $$\prod \nolimits _{p=1}^{n}(2-\eta _{\beta _{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(\eta _{\beta _{p}})^{\varpi _{p}}\le \sum _{p=1}^{n}\varpi _{p}(2-\eta _{\beta _{p}})+\sum _{p=1}^{n}\varpi _{p}\eta _{\beta _{p}}=2,$$ then

\begin{aligned} \frac{2\prod \nolimits _{p=1}^{n}(\eta _{{\beta }_{p}})^{\varpi _{p}}}{ \prod \nolimits _{p=1}^{n}(2-\eta _{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(\eta _{{\beta }_{p}})^{\varpi _{p}}} \ge \prod\limits _{p=1}^{n}(\eta _{{\beta }_{p}})^{\varpi _{p}} \end{aligned}
(18)

where that equality holds if and only if $$\eta _{\beta _{1}}=\eta _{\beta _{2}}=\cdots =\eta _{\beta _{n}}.$$

Let $${\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\left( \mu _{\beta }^{*},\eta _{\beta }^{*}\right) =\beta ^{*}$$ and $$PFWA_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\left( \mu _{\beta },\eta _{\beta }\right) =\beta ;$$ then (17) and (18) are transformed into the form $$\mu _{\beta }^{*}\le \mu _{\beta }$$ and $$\eta _{\beta }^{*}\ge \eta _{\beta },$$ respectively. Thus

\begin{aligned} s({\beta }^{*})={\mu }_{{\beta }}^{*}-\eta _{ {\beta }}^{*}\le {\mu }_{{\beta }}-\eta _{{\beta }}=s({\beta }). \end{aligned}

If $$s(\beta ^{*})<s(\beta ),$$ then by Definition 5, for every $$\varpi,$$ we have

\begin{aligned}&{\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})\nonumber \\ &\quad <{\hbox {PFWA}}_{\varpi }({\beta }_{1},{\beta } _{2},{\ldots},{\beta }_{n}). \end{aligned}
(19)

If $$s(\beta ^{*})=s(\beta ),$$ i.e., $$\mu _{\beta }^{*}-\eta _{\beta }^{*}=\mu _{\beta }-\eta _{\beta },$$ then by the conditions $$\mu _{\beta }^{*}\le \mu _{\beta }$$ and $$\eta _{\beta }^{*}\ge \eta _{\beta },$$ we have $$\mu _{\beta }^{*}=\mu _{\beta }$$ and $$\eta _{\beta }^{*}=\eta _{\beta };$$

Thus $$h(\beta ^{*})=\mu _{\beta }^{*}+\eta _{\beta }^{*}=\mu _{\beta }+\eta _{\beta }=h(\beta );$$ in this case, from Definition 5, it follows that

\begin{aligned}&{\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})\nonumber \\ &\quad ={\hbox {PFWA}}_{\varpi }({\beta }_{1},{\beta } _{2},{\ldots},{\beta }_{n}). \end{aligned}
(20)

From (19) and (20), we know that (16) always holds, i.e.,

\begin{aligned} {\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})\le {\hbox {PFWA}}_{\varpi }({\beta }_{1},{\beta } _{2},{\ldots},{\beta }_{n}) \end{aligned}

where that equality holds if and only if $$\beta _{1}=\beta _{2}=\cdots =\beta _{n}.$$

In addition, since $$\prod \nolimits _{p=1}^{n}(2-\Upsilon _{\beta _{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(\Upsilon _{\beta _{p}})^{\varpi _{p}}\le \sum _{p=1}^{n}\varpi _{p}(2-\Upsilon _{\beta _{p}})+\sum _{p=1}^{n}\varpi _{p}\Upsilon _{\beta _{p}}=2,$$ then

\begin{aligned} \frac{2\prod \nolimits _{p=1}^{n}({\Upsilon }_{{\beta } _{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(2-{\Upsilon }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}({\Upsilon }_{ {\beta }_{p}})^{\varpi _{p}}} \ge \prod\limits _{p=1}^{n}({\Upsilon }_{{\beta }_{p}})^{\varpi _{p}} \end{aligned}
(21)

where that equality holds if and only if $$\Upsilon _{\beta _{1}}=\Upsilon _{\beta _{2}}=\cdots =\Upsilon _{\beta _{n}}.$$

Let $${\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\left( \mu _{\beta }^{*},\Upsilon _{\beta }^{*}\right) =\beta ^{*}$$ and $${\hbox {PFWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\left( \mu _{\beta },\Upsilon _{\beta }\right) =\beta ;$$ then (17) and (21) are transformed into the form $$\mu _{\beta }^{*}\le \mu _{\beta }$$ and $$\Upsilon _{\beta }^{*}\ge \Upsilon _{\beta },$$ respectively. Thus

\begin{aligned} s({\beta }^{*})={\mu }_{{\beta }}^{*}-{\Upsilon }_{{\beta }}^{*}\le {\mu }_{{\beta }}- {\Upsilon }_{{\beta }}=s({\beta }). \end{aligned}

If $$s(\beta ^{*})<s(\beta ),$$ then by Definition 5, for every $$\varpi,$$ we have

\begin{aligned}&{\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})\nonumber \\ &\quad <{\hbox {PFWA}}_{\varpi }({\beta }_{1},{\beta } _{2},{\ldots},{\beta }_{n}). \end{aligned}
(22)

If $$s(\beta ^{*})=s(\beta ),$$ i.e., $$\mu _{\beta }^{*}-\Upsilon _{\beta }^{*}=\mu _{\beta }-\Upsilon _{\beta },$$ then by the conditions $$\mu _{\beta }^{*}\le \mu _{\beta }$$ and $$\Upsilon _{\beta }^{*}\ge \Upsilon _{\beta },$$ we have $$\mu _{\beta }^{*}=\mu _{\beta }$$ and $$\Upsilon _{\beta }^{*}=\Upsilon _{\beta };$$ thus $$h(\beta ^{*})=\mu _{\beta }^{*}+\Upsilon _{\beta }^{*}=\mu _{\beta }+\Upsilon _{\beta }=h(\beta ).$$ In this case, from Definition 5, it follows that

\begin{aligned}&{\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})\nonumber \\ &\quad ={\hbox {PFWA}}_{\varpi }({\beta }_{1},{\beta } _{2},{\ldots},{\beta }_{n}). \end{aligned}
(23)

From (22) and (23), we know that (16) always holds, i.e.,

\begin{aligned}&{\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n}) \\ &\quad \le {\hbox {PFWA}}_{\varpi }({\beta }_{1},{\beta } _{2},{\ldots},{\beta }_{n}) \end{aligned}

where that equality holds if and only if $$\beta _{1}=\beta _{2}=\cdots =\beta _{n}.$$$$\square$$

### Example 1

Let $$\beta _{1}=\left( 0.2,0.4,0.3\right), \beta _{2}=\left( 0.3,0.5,0.1\right), \beta _{3}=\left( 0.1,0.2,0.4\right),$$ and $$\beta _{4}=\left( 0.6,0.1,0.2\right)$$ be four PFNs and $$\varpi =\left( 0.2,0.1,0.3,0.4\right) ^{T}$$ be the weighting vector of $$\beta _{p}\left( p=1,2,3,4\right).$$ Then $$\prod \nolimits _{p=1}^{4}(1+\mu _{\beta _{p}})^{\varpi _{p}}=1.3221,\prod \nolimits _{p=1}^{4}(1-\mu _{\beta _{p}})^{\varpi _{p}}=0.6197,$$ $$2\prod \nolimits _{p=1}^{4}\left( \eta _{\beta _{p}}\right) ^{\varpi _{p}}=0.3816,$$ $$\prod \nolimits _{p=1}^{4}\left( \eta _{\beta _{p}}\right) ^{\varpi _{p}}=0.1908,$$ $$\prod \nolimits _{p=1}^{4}(2-\eta _{\beta _{p}})^{\varpi _{p}}=1.7637,2\prod \nolimits _{p=1}^{4}(\Upsilon _{\beta _{p}})^{\varpi _{p}}=0.4982,$$ $$\prod \nolimits _{p=1}^{4}(\Upsilon _{\beta _{p}})^{\varpi _{p}}=0.2491,$$ $$\prod \nolimits _{p=1}^{4}(2-\Upsilon _{\beta _{p}})^{\varpi _{p}}=1.7269.$$

\begin{aligned}&{\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\beta }_{3}, {\beta }_{4}) \\ &\quad =\left( \begin{array}{c} \frac{\prod \nolimits _{p=1}^{4}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}-\prod \nolimits _{p=1}^{4}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{4}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{4}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}},\frac{2\prod \nolimits _{p=1}^{4}(\eta _{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{4}(2-\eta _{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{4}(\eta _{{\beta }_{p}})^{\varpi _{p}}} \\, \frac{2\prod \nolimits _{p=1}^{4}({\Upsilon }_{{\beta } _{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{4}(2-{\Upsilon }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{4}({\Upsilon }_{ {\beta }_{p}})^{\varpi _{p}}} \end{array} \right) \\ &\quad =\left( 0.3617,0.1952,0.2521\right). \end{aligned}

If we use the PFWA operator, which was developed by Xu [53], to aggregate the PFVs $$\beta _{p}\left( p=1,2,3,4\right),$$ then we have

\begin{aligned}&{\hbox {PFWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\beta }_{3}, {\beta }_{4}) \\ &\quad =\left( 1-\prod\limits _{p=1}^{4}(1-{\mu }_{ {\beta }_{p}})^{\varpi _{p}},\right. \\ &\qquad \left. \prod\limits _{p=1}^{4}(\eta _{{\beta }_{p}})^{\varpi _{p}},\prod\limits _{p=1}^{4}({\Upsilon }_{ {\beta }_{p}})^{\varpi _{p}}\right) \\ &\quad =\left( 0.3804,0.1908,0.2491\right). \end{aligned}

It is clear that $${\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},\beta _{3},\beta _{4})<{\hbox {PFWA}}_{\varpi }(\beta _{1},\beta _{2},\beta _{3},\beta _{4}).$$
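The computations above are easy to reproduce in code. The following minimal Python sketch (the helper names `pfewa` and `pfwa` are ours, not from the paper) transcribes the two aggregation formulas and checks the values of Example 1:

```python
from math import prod

# PFVs (mu, eta, Upsilon) and the weighting vector from Example 1
betas = [(0.2, 0.4, 0.3), (0.3, 0.5, 0.1), (0.1, 0.2, 0.4), (0.6, 0.1, 0.2)]
w = [0.2, 0.1, 0.3, 0.4]

def pfewa(betas, w):
    # Picture fuzzy Einstein weighted averaging (formula of Theorem 3)
    a = prod((1 + m) ** wp for (m, _, _), wp in zip(betas, w))
    b = prod((1 - m) ** wp for (m, _, _), wp in zip(betas, w))
    c = prod(e ** wp for (_, e, _), wp in zip(betas, w))
    d = prod((2 - e) ** wp for (_, e, _), wp in zip(betas, w))
    u = prod(y ** wp for (_, _, y), wp in zip(betas, w))
    v = prod((2 - y) ** wp for (_, _, y), wp in zip(betas, w))
    return ((a - b) / (a + b), 2 * c / (d + c), 2 * u / (v + u))

def pfwa(betas, w):
    # Ordinary picture fuzzy weighted averaging (algebraic operations)
    return (1 - prod((1 - m) ** wp for (m, _, _), wp in zip(betas, w)),
            prod(e ** wp for (_, e, _), wp in zip(betas, w)),
            prod(y ** wp for (_, _, y), wp in zip(betas, w)))

print(tuple(round(x, 4) for x in pfewa(betas, w)))  # (0.3617, 0.1952, 0.2521)
print(tuple(round(x, 4) for x in pfwa(betas, w)))
```

The component-wise comparison (smaller positive part, larger neutral and negative parts) confirms $${\hbox {PFEWA}}_{\varpi }\le {\hbox {PFWA}}_{\varpi }$$ for these data.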

### Proposition 2

Let $$\beta _{p}=\left( \mu _{\beta _{p}},\eta _{\beta _{p}},\Upsilon _{\beta _{p}}\right), \left( p=1,2,\ldots, n\right)$$ be a family of PFNs and $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ be the weighting vector of $$\beta _{p}\left( p=1,2,\ldots, n\right)$$ such that $$\varpi _{p}\in \left[ 0,1\right],$$ $$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1;$$ then, we have the following.

(1) Idempotency If all $$\beta _{p}$$ are equal, i.e., $$\beta _{p}=\beta$$ for all $$p=1,2,\ldots, n$$, then

\begin{aligned} {\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})={\beta } \end{aligned}
(24)

### Proof

Since $$\beta _{p}=\beta$$ for all $$p=1,2,\ldots, n,$$ i.e., $$\mu _{\beta _{p}}=\mu _{\beta },\eta _{\beta _{p}}=\eta _{\beta }$$ and $$\Upsilon _{\beta _{p}}=\Upsilon _{\beta },$$$$p=1,2,\ldots, n,$$ then

\begin{aligned}&{\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n}) \\ &\quad =\left( \begin{array}{c} \frac{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}-\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}}, \\ \frac{2\prod \nolimits _{p=1}^{n}(\eta _{{\beta }_{p}})^{\varpi _{p}}}{ \prod \nolimits _{p=1}^{n}(2-\eta _{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(\eta _{{\beta }_{p}})^{\varpi _{p}}}, \\ \frac{2\prod \nolimits _{p=1}^{n}({\Upsilon }_{{\beta } _{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(2-{\Upsilon }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}({\Upsilon }_{ {\beta }_{p}})^{\varpi _{p}}} \end{array} \right) \\ &\quad =\left( \begin{array}{c} \frac{(1+{\mu }_{{\beta }})^{\sum _{p=1}^{n}\varpi _{p}}-(1-{\mu }_{{\beta }})^{\sum _{p=1}^{n}\varpi _{p}}}{(1+{\mu }_{{\beta }})^{\sum _{p=1}^{n}\varpi _{p}}+(1-{\mu }_{{\beta }})^{\sum _{p=1}^{n}\varpi _{p}}}, \\ \frac{2(\eta _{{\beta }})^{\sum _{p=1}^{n}\varpi _{p}}}{(2-\eta _{{\beta }})^{\sum _{p=1}^{n}\varpi _{p}}+(\eta _{{\beta }})^{\sum _{p=1}^{n}\varpi _{p}}}, \\ \frac{2({\Upsilon }_{{\beta }})^{\sum _{p=1}^{n}\varpi _{p}}}{(2-{\Upsilon }_{{\beta }})^{\sum _{p=1}^{n}\varpi _{p}}+({\Upsilon }_{{\beta }})^{\sum _{p=1}^{n}\varpi _{p}}} \end{array} \right) \\ &\quad =\left( \frac{(1+{\mu }_{{\beta }})-(1-{\mu }_{{\beta }}) }{(1+{\mu }_{{\beta }})+(1-{\mu }_{{\beta }})}, \frac{2\eta _{{\beta }}}{(2-\eta _{{\beta }})+\eta _{{\beta }}}, \frac{2{\Upsilon }_{{\beta }}}{(2-{\Upsilon }_{{\beta }})+ {\Upsilon }_{{\beta }}}\right) \\ &\quad =\left( {\mu }_{{\beta }},\eta _{{\beta }},{\Upsilon }_{{\beta }}\right) \\ &\quad ={\beta } \end{aligned}
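Idempotency can also be sanity-checked numerically; the sketch below (helper name `pfewa` and the sample values are ours, purely illustrative) aggregates three identical copies of an arbitrary PFN and recovers it:

```python
from math import prod, isclose

def pfewa(betas, w):
    # Picture fuzzy Einstein weighted averaging (formula of Theorem 3)
    a = prod((1 + m) ** wp for (m, _, _), wp in zip(betas, w))
    b = prod((1 - m) ** wp for (m, _, _), wp in zip(betas, w))
    c = prod(e ** wp for (_, e, _), wp in zip(betas, w))
    d = prod((2 - e) ** wp for (_, e, _), wp in zip(betas, w))
    u = prod(y ** wp for (_, _, y), wp in zip(betas, w))
    v = prod((2 - y) ** wp for (_, _, y), wp in zip(betas, w))
    return ((a - b) / (a + b), 2 * c / (d + c), 2 * u / (v + u))

beta = (0.5, 0.2, 0.1)          # an illustrative PFN
w = [0.3, 0.3, 0.4]             # any normalized weighting vector
result = pfewa([beta] * 3, w)   # aggregate three identical arguments
assert all(isclose(r, b) for r, b in zip(result, beta))
```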

(2) Boundary

\begin{aligned} {\beta }_{\min }\le {\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})\le {\beta }_{\max } \end{aligned}
(25)


where $$\beta _{\min }=\min \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\}$$ and $$\beta _{\max }=\max \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\}.$$

### Proof

Let $$f(r)=\frac{1-r}{1+r},$$ $$r\in [0,1];$$ then $$f^{^{\prime }}(r)=\frac{-2}{\left( 1+r\right) ^{2}}<0;$$ thus f(r) is a decreasing function. Since $$\mu _{\beta _{\min }}\le \mu _{\beta _{p}}\le \mu _{\beta _{\max }}$$ for all p, then $$f\left( \mu _{\beta _{\max }}\right) \le f\left( \mu _{\beta _{p}}\right) \le f\left( \mu _{\beta _{\min }}\right)$$ for all p, i.e., $$\frac{1-\mu _{\beta _{\max }}}{1+\mu _{\beta _{\max }}}\le \frac{1-\mu _{\beta _{p}}}{1+\mu _{\beta _{p}}}\le \frac{1-\mu _{\beta _{\min }}}{1+\mu _{\beta _{\min }}},\left( p=1,2,\ldots, n\right).$$ Let $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ be the weighting vector of $$\beta _{p}\left( p=1,2,\ldots, n\right)$$ such that $$\varpi _{p}\in \left[ 0,1\right]$$ and $$\sum _{p=1}^{n}\varpi _{p}=1;$$ then for all p, we have

\begin{aligned}&\left( \frac{1-{\mu }_{{\beta }_{\max }}}{1+{\mu }_{ {\beta }_{\max }}}\right) ^{\varpi _{p}} \le \left( \frac{1-{\mu }_{{\beta }_{p}}}{1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}}\le \left( \frac{1-{\mu }_{{\beta }_{\min }}}{1+ {\mu }_{{\beta }_{\min }}}\right) ^{\varpi _{p}} \\ &\quad \Leftrightarrow \prod\limits_{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{\max }} }{1+{\mu }_{{\beta }_{\max }}}\right) ^{\varpi _{p}} \le \prod\limits_{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}}}{1+ {\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}}\le \prod\limits_{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{\min }} }{1+{\mu }_{{\beta }_{\min }}}\right) ^{\varpi _{p}} \\ &\quad \Leftrightarrow \left( \frac{1-{\mu }_{{\beta }_{\max }}}{1+{\mu }_{{\beta }_{\max }}}\right) ^{\sum _{p=1}^{n}\varpi _{p}} \le \prod\limits_{p=1}^{n}\left( \frac{1-{\mu }_{{\beta } _{p}}}{1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}} \le \left( \frac{1-{\mu }_{{\beta }_{\min }}}{1+{\mu }_{{\beta }_{\min }}}\right) ^{\sum _{p=1}^{n}\varpi _{p}} \\ &\quad \Leftrightarrow \left( \frac{1-{\mu }_{{\beta }_{\max }}}{1+ {\mu }_{{\beta }_{\max }}}\right) \le \prod\limits_{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}}}{1+ {\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}} \le \left( \frac{ 1-{\mu }_{{\beta }_{\min }}}{1+{\mu }_{{\beta } _{\min }}}\right) \\ &\quad \Leftrightarrow \frac{2}{1+{\mu }_{{\beta }_{\max }}}\le 1+\prod\limits _{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}}}{ 1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}} \le \frac{2}{1+ {\mu }_{{\beta }_{\min }}} \\ &\quad \Leftrightarrow \frac{1+{\mu }_{{\beta }_{\min }}}{2}\le \frac{1}{1+\prod \nolimits _{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}}}{1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}}} \le \frac{1+{\mu }_{{\beta }_{\max }}}{2} \\ &\quad \Leftrightarrow 1+{\mu }_{{\beta }_{\min }} \le \frac{2}{ 1+\prod \nolimits _{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}}}{ 1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}}} \le 1+{\mu }_{{\beta }_{\max }} \\ &\quad \Leftrightarrow {\mu }_{{\beta }_{\min }}\le \frac{2}{ 1+\prod \nolimits _{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}}}{ 1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}}}-1\le {\mu }_{{\beta }_{\max }} \end{aligned}

i.e.,

\begin{aligned}&\Leftrightarrow {\mu }_{{\beta }_{\min }} \le \frac{ \prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}-\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}} \le {\mu }_{{\beta }_{\max }} \end{aligned}
(26)

Let $$g(y)=\frac{2-y}{y},$$ $$y\in (0,1];$$ then $$g^{^{\prime }}(y)=\frac{-2 }{y^{2}}<0,$$ so g is a decreasing function on (0, 1]. Since $$\eta _{\beta _{\max }}\le \eta _{{\beta }_{p}}\le \eta _{\beta _{\min }}$$ for all p, where $$0<\eta _{\beta _{\max }},$$ then $$g\left( \eta _{\beta _{\min }}\right) \le g\left( \eta _{\beta _{p}}\right) \le g\left( \eta _{\beta _{\max }}\right)$$ for all p, i.e., $$\frac{2-\eta _{\beta _{\min }}}{\eta _{\beta _{\min }}}\le \frac{2-\eta _{\beta _{p}}}{\eta _{\beta _{p}}}\le \frac{ 2-\eta _{\beta _{\max }}}{\eta _{\beta _{\max }}},\left( p=1,2,\ldots, n\right).$$

Let $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ be the weighting vector of $$\beta _{p}\left( p=1,2,\ldots, n\right)$$ such that $$\varpi _{p}\in \left[ 0,1\right]$$ and $$\sum _{p=1}^{n}\varpi _{p}=1;$$ then for all p, we have

\begin{aligned}&\left( \frac{2-\eta _{{\beta }_{\min }}}{\eta _{{\beta }_{\min }}}\right) ^{\varpi _{p}} \\ &\quad \le \left( \frac{2-\eta _{{\beta }_{p}}}{ \eta _{{\beta }_{p}}}\right) ^{\varpi _{p}} \\ &\quad \le \left( \frac{2-\eta _{ {\beta }_{\max }}}{\eta _{{\beta }_{\max }}}\right) ^{\varpi _{p}} \end{aligned}

Thus

\begin{aligned}&\prod\limits_{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{\min }}}{\eta _{ {\beta }_{\min }}}\right) ^{\varpi _{p}} \le \prod\limits_{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{p}}}{\eta _{ {\beta }_{p}}}\right) ^{\varpi _{p}} \le \prod\limits_{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{\max }}}{\eta _{{\beta }_{\max }}} \right) ^{\varpi _{p}} \\ &\quad \Leftrightarrow \left( \frac{2-\eta _{{\beta }_{\min }}}{\eta _{{\beta }_{\min }}}\right) ^{\sum _{p=1}^{n}\varpi _{p}} \le \prod\limits_{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{p}}}{\eta _{ {\beta }_{p}}}\right) ^{\varpi _{p}}\le \left( \frac{2-\eta _{{\beta }_{\max }}}{\eta _{{\beta }_{\max }}}\right) ^{\sum _{p=1}^{n}\varpi _{p}} \\ &\quad \Leftrightarrow \left( \frac{2-\eta _{{\beta }_{\min }}}{\eta _{ {\beta }_{\min }}}\right) \le \prod\limits_{p=1}^{n}\left( \frac{ 2-\eta _{{\beta }_{p}}}{\eta _{{\beta }_{p}}}\right) ^{\varpi _{p}} \le \left( \frac{2-\eta _{{\beta }_{\max }}}{\eta _{{\beta }_{\max }}}\right) \\ &\quad \Leftrightarrow \frac{2}{\eta _{{\beta }_{\min }}} \le \prod\limits _{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{p}}}{\eta _{ {\beta }_{p}}}\right) ^{\varpi _{p}}+1 \le \frac{2}{\eta _{{\beta }_{\max }}} \\ &\quad \Leftrightarrow \frac{\eta _{{\beta }_{\max }}}{2} \le \frac{1}{ \prod \nolimits _{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{p}}}{\eta _{ {\beta }_{p}}}\right) ^{\varpi _{p}}+1}\le \frac{\eta _{{\beta }_{\min }}}{2} \\ &\quad \Leftrightarrow \eta _{{\beta }_{\max }} \le \frac{2}{ \prod \nolimits _{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{p}}}{\eta _{ {\beta }_{p}}}\right) ^{\varpi _{p}}+1} \le \eta _{{\beta } _{\min }} \end{aligned}

i.e.,

\begin{aligned}&\Leftrightarrow \eta _{{\beta }_{\max }} \le \frac{ 2\prod \nolimits _{p=1}^{n}\left( \eta _{{\beta }_{p}}\right) ^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(2-\eta _{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}\left( \eta _{{\beta }_{p}}\right) ^{\varpi _{p}}} \le \eta _{{\beta }_{\min }} \end{aligned}
(27)

Note that (27) also holds even if $$\eta _{\beta _{\max }}=0.$$

Let $$h(z)=\frac{2-z}{z},$$ $$z\in (0,1];$$ then $$h^{^{\prime }}(z)=\frac{-2 }{z^{2}}<0,$$ so h is a decreasing function on (0, 1]. Since $$\Upsilon _{\beta _{\max }}\le \Upsilon _{\beta _{p}}\le \Upsilon _{\beta _{\min }}$$ for all p, where $$0<\Upsilon _{\beta _{\max }},$$ then $$h\left( \Upsilon _{\beta _{\min }}\right) \le h\left( \Upsilon _{\beta _{p}}\right) \le h\left( \Upsilon _{\beta _{\max }}\right)$$ for all p, i.e., $$\frac{2-\Upsilon _{\beta _{\min }}}{\Upsilon _{\beta _{\min }}}\le \frac{2-\Upsilon _{\beta _{p}}}{\Upsilon _{\beta _{p}}}\le \frac{2-\Upsilon _{\beta _{\max }}}{ \Upsilon _{\beta _{\max }}},\left( p=1,2,\ldots, n\right).$$

Let $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ be the weighting vector of $$\beta _{p}$$ such that $$\varpi _{p}\in \left[ 0,1\right],$$$$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1;$$ then for all p,  we have

\begin{aligned}&\left( \frac{2-{\Upsilon }_{{\beta }_{\min }}}{{\Upsilon }_{{\beta }_{\min }}}\right) ^{\varpi _{p}} \\ &\quad \le \left( \frac{2-{\Upsilon }_{{\beta }_{p}}}{{\Upsilon }_{{\beta }_{p}}} \right) ^{\varpi _{p}} \\ &\quad \le \left( \frac{2-{\Upsilon }_{{\beta } _{\max }}}{{\Upsilon }_{{\beta }_{\max }}}\right) ^{\varpi _{p}} \end{aligned}

Thus

\begin{aligned}&\prod\limits_{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta } _{\min }}}{{\Upsilon }_{{\beta }_{\min }}}\right) ^{\varpi _{p}} \le \prod\limits_{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta }_{p}}}{{\Upsilon }_{{\beta }_{p}}}\right) ^{\varpi _{p}} \le \prod\limits_{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta }_{\max }}}{{\Upsilon }_{{\beta }_{\max }}}\right) ^{\varpi _{p}} \\ &\quad \Leftrightarrow \left( \frac{2-{\Upsilon }_{{\beta }_{\min }}}{{\Upsilon }_{{\beta }_{\min }}}\right) ^{\sum _{p=1}^{n}\varpi _{p}} \le \prod\limits_{p=1}^{n}\left( \frac{2-{\Upsilon }_{ {\beta }_{p}}}{{\Upsilon }_{{\beta }_{p}}}\right) ^{\varpi _{p}} \le \left( \frac{2-{\Upsilon }_{{\beta } _{\max }}}{{\Upsilon }_{{\beta }_{\max }}}\right) ^{\sum _{p=1}^{n}\varpi _{p}} \\ &\quad \Leftrightarrow \left( \frac{2-{\Upsilon }_{{\beta }_{\min }} }{{\Upsilon }_{{\beta }_{\min }}}\right) \le \prod\limits_{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta }_{p}} }{{\Upsilon }_{{\beta }_{p}}}\right) ^{\varpi _{p}} \le \left( \frac{2-{\Upsilon }_{{\beta }_{\max }}}{{\Upsilon }_{ {\beta }_{\max }}}\right) \\ &\quad \Leftrightarrow \frac{2}{{\Upsilon }_{{\beta }_{\min }}} \le \prod\limits_{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta }_{p}} }{{\Upsilon }_{{\beta }_{p}}}\right) ^{\varpi _{p}}+1 \le \frac{2}{{\Upsilon }_{{\beta }_{\max }}} \\ &\quad \Leftrightarrow \frac{{\Upsilon }_{{\beta }_{\max }}}{2} \le \frac{1}{\prod \nolimits _{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta }_{p}}}{{\Upsilon }_{{\beta }_{p}}}\right) ^{\varpi _{p}}+1} \le \frac{{\Upsilon }_{{\beta }_{\min }}}{2} \\ &\quad \Leftrightarrow {\Upsilon }_{{\beta }_{\max }} \le \frac{2}{ \prod \nolimits _{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta }_{p}} }{{\Upsilon }_{{\beta }_{p}}}\right) ^{\varpi _{p}}+1} \le {\Upsilon }_{{\beta }_{\min }} \end{aligned}

i.e.,

\begin{aligned}&\Leftrightarrow {\Upsilon }_{{\beta }_{\max }} \le \frac{ 2\prod \nolimits _{p=1}^{n}\left( {\Upsilon }_{{\beta }_{p}}\right) ^{\varpi _{p}}}{ \prod \nolimits _{p=1}^{n}(2-{\Upsilon }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}\left( {\Upsilon }_{{\beta }_{p}}\right) ^{\varpi _{p}}} \le {\Upsilon }_{{\beta }_{\min }} \end{aligned}
(28)

Note that (28) also holds even if $$\Upsilon _{\beta _{\max }}=0.$$

Let $${\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right) =\beta ;$$ Then (26), (27) and (28) are transformed into the following forms, respectively;

\begin{aligned} {\mu }_{{\beta }_{\min }}\le & {} {\mu }_{{\beta } }\le {\mu }_{{\beta }_{\max }} \\ \eta _{{\beta }_{\max }}\le & {} \eta _{{\beta }}\le \eta _{ {\beta }_{\min }} \\ {\Upsilon }_{{\beta }_{\max }}\le & {} {\Upsilon }_{ {\beta }}\le {\Upsilon }_{{\beta }_{\min }} \end{aligned}

$$\square$$
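The boundary property can be illustrated numerically as well. In the sketch below (the PFN values and helper name `pfewa` are ours, purely illustrative) the three PFNs are totally ordered under $$\le _{L}$$, so $$\beta _{\min }$$ and $$\beta _{\max }$$ are the first and the last of them:

```python
from math import prod

def pfewa(betas, w):
    # Picture fuzzy Einstein weighted averaging (formula of Theorem 3)
    a = prod((1 + m) ** wp for (m, _, _), wp in zip(betas, w))
    b = prod((1 - m) ** wp for (m, _, _), wp in zip(betas, w))
    c = prod(e ** wp for (_, e, _), wp in zip(betas, w))
    d = prod((2 - e) ** wp for (_, e, _), wp in zip(betas, w))
    u = prod(y ** wp for (_, _, y), wp in zip(betas, w))
    v = prod((2 - y) ** wp for (_, _, y), wp in zip(betas, w))
    return ((a - b) / (a + b), 2 * c / (d + c), 2 * u / (v + u))

# Illustrative PFNs chosen so that beta_min = betas[0] and beta_max = betas[2]
betas = [(0.2, 0.4, 0.3), (0.3, 0.3, 0.2), (0.5, 0.1, 0.1)]
w = [0.2, 0.5, 0.3]
mu, eta, ups = pfewa(betas, w)
# beta_min <= PFEWA <= beta_max in the order <=_L:
assert 0.2 <= mu <= 0.5    # positive part between the extremes
assert 0.1 <= eta <= 0.4   # neutral part between the extremes
assert 0.1 <= ups <= 0.3   # negative part between the extremes
```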

(3) Monotonicity Let $$\beta _{p}=\left( \mu _{\beta _{p}},\eta _{\beta _{p}},\Upsilon _{\beta _{p}}\right)$$ and $$\beta _{p}^{*}=\left( \mu _{\beta _{p}^{*}},\eta _{\beta _{p}^{*}},\Upsilon _{\beta _{p}^{*}}\right)$$ $$\left( p=1,2,\ldots, n\right)$$ be two families of PFNs with $$\beta _{p}\le _{L}\beta _{p}^{*},$$ i.e., $$\mu _{\beta _{p}}\le \mu _{\beta _{p}^{*}},\eta _{\beta _{p}}\ge \eta _{\beta _{p}^{*}}$$ and $$\Upsilon _{\beta _{p}}\ge \Upsilon _{\beta _{p}^{*}}$$ for all p; then

\begin{aligned}&{\hbox {PFEWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n}) \nonumber \\ &\quad \le {\hbox {PFEWA}}_{\varpi }({\beta }_{1}^{*},{\beta } _{2}^{*},{\ldots},{\beta }_{n}^{*}). \end{aligned}
(29)

### Proof

Let $$f(r)=\frac{1-r}{1+r},$$ $$r\in [ 0,1],$$ be a decreasing function. Since $$\mu _{\beta _{p}}\le \mu _{\beta _{p}^{*}}$$ for all p, then $$f\left( \mu _{\beta _{p}^{*}}\right) \le f\left( \mu _{\beta _{p}}\right)$$ for all $$p=1,2,\ldots, n,$$ i.e., $$\frac{1-\mu _{\beta _{p}^{*}}}{1+\mu _{\beta _{p}^{*}}}\le \frac{1-\mu _{\beta _{p}}}{1+\mu _{\beta _{p}}}, \left( p=1,2,\ldots, n\right).$$

Let $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ be the weighting vector of $$\beta _{p}$$$$\left( p=1,2,\ldots, n\right)$$ such that $$\varpi _{p}\in \left[ 0,1\right],$$$$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1;$$ then for all p,  we have $$\left( \frac{1-\mu _{\beta _{p}^{*}}}{1+\mu _{\beta _{p}^{*}}}\right) ^{\varpi _{p}}\le \left( \frac{1-\mu _{\beta _{p}}}{1+\mu _{\beta _{p}}}\right) ^{\varpi _{p}},\left( p=1,2,\ldots, n\right).$$ Thus

\begin{aligned}&\prod\limits_{p=1}^{n}\left( \frac{1-{\mu }_{{\beta } _{p}^{*}}}{1+{\mu }_{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}} \le \prod\limits_{p=1}^{n}\left( \frac{1-{\mu }_{ {\beta }_{p}}}{1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}} \\ &\quad \Leftrightarrow 1+\prod\limits_{p=1}^{n}\left( \frac{1-{\mu }_{ {\beta }_{p}^{*}}}{1+{\mu }_{{\beta }_{p}^{*}}} \right) ^{\varpi _{p}} \le 1+\prod\limits_{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}}}{1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}} \\ &\quad \Leftrightarrow \frac{1}{1+\prod \nolimits _{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}}}{1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}}} \le \frac{1}{1+\prod \nolimits _{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}^{*}}}{1+{\mu }_{{\beta } _{p}^{*}}}\right) ^{\varpi _{p}}} \\ &\quad \Leftrightarrow \frac{2}{1+\prod \nolimits _{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}}}{1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}}} \le \frac{2}{1+\prod \nolimits _{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}^{*}}}{1+{\mu }_{{\beta } _{p}^{*}}}\right) ^{\varpi _{p}}} \\ &\quad \Leftrightarrow \frac{2}{1+\prod \nolimits _{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}}}{1+{\mu }_{{\beta }_{p}}}\right) ^{\varpi _{p}}}-1 \\ &\quad \le \frac{2}{1+\prod \nolimits _{p=1}^{n}\left( \frac{1-{\mu }_{{\beta }_{p}^{*}}}{1+{\mu }_{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}}}-1 \end{aligned}

i.e.,

\begin{aligned}&\frac{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}-\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}})^{\varpi _{p}}}\nonumber \\ &\quad \le \frac{\prod \nolimits _{p=1}^{n}(1+{\mu }_{{\beta } _{p}^{*}})^{\varpi _{p}}-\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}^{*}})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(1+{\mu } _{{\beta }_{p}^{*}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(1-{\mu }_{{\beta }_{p}^{*}})^{\varpi _{p}}} \end{aligned}
(30)

Let $$g(y)=\frac{2-y}{y},$$ $$y\in (0,1],$$ be a decreasing function on (0, 1]. Since $$\eta _{\beta _{p}}\ge \eta _{\beta _{p}^{*}}>0$$ for all p, then $$g\left( \eta _{{\beta }_{p}^{*}}\right) \ge g\left( \eta _{ {\beta }_{p}}\right),$$ i.e., $$\frac{2-\eta _{{\beta } _{p}^{*}}}{\eta _{{\beta }_{p}^{*}}}\ge \frac{2-\eta _{{\beta }_{p}}}{\eta _{{\beta }_{p}}},\left( p=1,2,\ldots, n\right).$$ Let $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ be the weighting vector of $$\beta _{p}$$ such that $$\varpi _{p}\in \left[ 0,1\right],$$ $$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1;$$ we have $$\left( \frac{2-\eta _{{\beta } _{p}^{*}}}{\eta _{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}}\ge \left( \frac{2-\eta _{{\beta }_{p}}}{\eta _{{\beta } _{p}}}\right) ^{\varpi _{p}},\left( p=1,2,\ldots, n\right).$$ Thus

\begin{aligned}&\prod\limits_{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{p}^{*}}}{ \eta _{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}} \ge \prod\limits_{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{p}}}{\eta _{ {\beta }_{p}}}\right) ^{\varpi _{p}} \\ &\quad \Leftrightarrow \prod\limits_{p=1}^{n}\left( \frac{2-\eta _{{\beta } _{p}^{*}}}{\eta _{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}}+1 \ge \prod\limits_{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{p}} }{\eta _{{\beta }_{p}}}\right) ^{\varpi _{p}}+1 \\ &\quad \Leftrightarrow \frac{1}{\prod \nolimits _{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{p}}}{\eta _{{\beta }_{p}}}\right) ^{\varpi _{p}}+1} \ge \frac{1}{\prod \nolimits _{p=1}^{n}\left( \frac{2-\eta _{{\beta } _{p}^{*}}}{\eta _{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}}+1} \\ &\quad \Leftrightarrow \frac{2}{\prod \nolimits _{p=1}^{n}\left( \frac{2-\eta _{{\beta }_{p}}}{\eta _{{\beta }_{p}}}\right) ^{\varpi _{p}}+1} \ge \frac{2}{\prod \nolimits _{p=1}^{n}\left( \frac{2-\eta _{{\beta } _{p}^{*}}}{\eta _{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}}+1} \end{aligned}

i.e.,

\begin{aligned}&\frac{2\prod \nolimits _{p=1}^{n}\left( \eta _{{\beta }_{p}}\right) ^{\varpi _{p}}}{ \prod \nolimits _{p=1}^{n}(2-\eta _{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}\left( \eta _{{\beta }_{p}}\right) ^{\varpi _{p}}} \nonumber \\ &\quad \ge \frac{ 2\prod \nolimits _{p=1}^{n}\left( \eta _{{\beta }_{p}^{*}}\right) ^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(2-\eta _{{\beta }_{p}^{*}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}\left( \eta _{{\beta } _{p}^{*}}\right) ^{\varpi _{p}}} \end{aligned}
(31)

Note that (31) also holds even if $$\eta _{{\beta }_{p}}=\eta _{{\beta }_{p}^{*}}=0$$ for all p.

Let $$h(z)=\frac{2-z}{z},$$ $$z\in (0,1],$$ be a decreasing function on (0, 1]. Since $$\Upsilon _{{\beta }_{p}}\ge \Upsilon _{{\beta }_{p}^{*}}>0$$ for all p, then $$h\left( \Upsilon _{{\beta }_{p}^{*}}\right) \ge h\left( \Upsilon _{{\beta }_{p}}\right),$$ i.e., $$\frac{ 2-\Upsilon _{{\beta }_{p}^{*}}}{\Upsilon _{{\beta } _{p}^{*}}}\ge \frac{2-\Upsilon _{{\beta }_{p}}}{\Upsilon _{{\beta }_{p}}},\left( p=1,2,\ldots, n\right).$$ Let $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ be the weighting vector of $$\beta _{p}$$ such that $$\varpi _{p}\in \left[ 0,1\right],$$ $$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1;$$ we have $$\left( \frac{2-\Upsilon _{{\beta }_{p}^{*}}}{\Upsilon _{{\beta } _{p}^{*}}}\right) ^{\varpi _{p}}\ge \left( \frac{2-\Upsilon _{{\beta }_{p}}}{\Upsilon _{{\beta }_{p}}}\right) ^{\varpi _{p}}.$$ Thus

\begin{aligned}&\prod\limits_{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta } _{p}^{*}}}{{\Upsilon }_{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}} \ge \prod\limits_{p=1}^{n}\left( \frac{2-{\Upsilon } _{{\beta }_{p}}}{{\Upsilon }_{{\beta }_{p}}}\right) ^{\varpi _{p}} \\ &\quad \Leftrightarrow \prod\limits_{p=1}^{n}\left( \frac{2-{\Upsilon }_{ {\beta }_{p}^{*}}}{{\Upsilon }_{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}}+1 \ge \prod\limits_{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta }_{p}}}{{\Upsilon }_{{\beta }_{p}}} \right) ^{\varpi _{p}}+1 \\ &\quad \Leftrightarrow \frac{1}{\prod \nolimits _{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta }_{p}}}{{\Upsilon }_{{\beta }_{p}}} \right) ^{\varpi _{p}}+1} \ge \frac{1}{\prod \nolimits _{p=1}^{n}\left( \frac{2- {\Upsilon }_{{\beta }_{p}^{*}}}{{\Upsilon }_{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}}+1} \\ &\quad \Leftrightarrow \frac{2}{\prod \nolimits _{p=1}^{n}\left( \frac{2-{\Upsilon }_{{\beta }_{p}}}{{\Upsilon }_{{\beta }_{p}}} \right) ^{\varpi _{p}}+1} \ge \frac{2}{\prod \nolimits _{p=1}^{n}\left( \frac{2- {\Upsilon }_{{\beta }_{p}^{*}}}{{\Upsilon }_{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}}+1} \end{aligned}

i.e.,

\begin{aligned}&\frac{2\prod \nolimits _{p=1}^{n}\left( {\Upsilon }_{{\beta }_{p}}\right) ^{\varpi _{p}}}{ \prod \nolimits _{p=1}^{n}(2-{\Upsilon }_{{\beta }_{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}\left( {\Upsilon }_{{\beta }_{p}}\right) ^{\varpi _{p}}} \ge \frac{2\prod \nolimits _{p=1}^{n}\left( {\Upsilon }_{{\beta } _{p}^{*}}\right) ^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(2-{\Upsilon }_{{\beta }_{p}^{*}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}\left( {\Upsilon }_{{\beta } _{p}^{*}}\right) ^{\varpi _{p}}} \end{aligned}
(32)

Note that (32) also holds even if $$\Upsilon _{{\beta }_{p}}=\Upsilon _{ {\beta }_{p}^{*}}=0$$ for all p.

Let $${\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},\ldots, \beta _{n})=\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right) =\beta$$ and $${\hbox {PFEWA}}_{\varpi }(\beta _{1}^{*},\beta _{2}^{*},{\ldots},\beta _{n}^{*})=\left( \mu _{\beta ^{*}},\eta _{\beta ^{*}},\Upsilon _{\beta ^{*}}\right) =\beta ^{*}.$$ Then (30), (31) and (32) are transformed into the following forms, respectively;

\begin{aligned} {\mu }_{{\beta }}\le & {} {\mu }_{{\beta ^{*}} } \\ \eta _{{\beta }}\ge & {} \eta _{{\beta ^{*}}} \\ {\Upsilon }_{{\beta }}\ge & {} {\Upsilon }_{{\beta ^{*}}} \end{aligned}

$$\square$$
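Monotonicity, too, is easy to check on concrete data. In the sketch below (illustrative PFNs, helper name `pfewa` ours) each $$\beta _{p}$$ is dominated by the corresponding $$\beta _{p}^{*}$$ under $$\le _{L}$$, and the aggregated values inherit the same ordering:

```python
from math import prod

def pfewa(betas, w):
    # Picture fuzzy Einstein weighted averaging (formula of Theorem 3)
    a = prod((1 + m) ** wp for (m, _, _), wp in zip(betas, w))
    b = prod((1 - m) ** wp for (m, _, _), wp in zip(betas, w))
    c = prod(e ** wp for (_, e, _), wp in zip(betas, w))
    d = prod((2 - e) ** wp for (_, e, _), wp in zip(betas, w))
    u = prod(y ** wp for (_, _, y), wp in zip(betas, w))
    v = prod((2 - y) ** wp for (_, _, y), wp in zip(betas, w))
    return ((a - b) / (a + b), 2 * c / (d + c), 2 * u / (v + u))

w = [0.6, 0.4]
betas      = [(0.2, 0.4, 0.3), (0.1, 0.5, 0.2)]   # beta_p
betas_star = [(0.3, 0.3, 0.2), (0.2, 0.4, 0.1)]   # beta_p^*, beta_p <=_L beta_p^*
mu, eta, ups = pfewa(betas, w)
mu_s, eta_s, ups_s = pfewa(betas_star, w)
# PFEWA(beta_1,...,beta_n) <=_L PFEWA(beta_1^*,...,beta_n^*)
assert mu <= mu_s and eta >= eta_s and ups >= ups_s
```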

### Picture fuzzy Einstein ordered weighted averaging operator

In this section, we develop the picture fuzzy Einstein ordered weighted averaging operator to aggregate picture fuzzy information and discuss some of its basic properties.

### Definition 10

Let $$\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{{\beta } _{p}},\Upsilon _{{\beta }_{p}}\right) \left( p=1,2,\ldots, n\right)$$ be a family of PFNs. A picture fuzzy Einstein OWA operator of dimension n is a mapping $${\hbox {PFEOWA}}:\left( L^{*}\right) ^{n}\rightarrow L^{*},$$ which has an associated vector $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ such that $$\varpi _{p}\in \left[ 0,1\right], p=1,2,\ldots, n$$ and $$\sum _{p=1}^{n}\varpi _{p}=1,$$ and

\begin{aligned}&{\hbox {PFEOWA}}_{\varpi }\left( {\beta }_{1},{\beta }_{2},{\ldots}, {\beta }_{n}\right) \nonumber \\ &\quad =\bigoplus \limits _{p=1}^{n}\left( \varpi _{p}._{\in }\beta _{\delta (p)}\right) \nonumber \\ &\quad =(\varpi _{1}._{\in }{\beta }_{\delta \left( 1\right) }\oplus _{\in }\varpi _{2}._{\in }{\beta }_{\delta \left( 2\right) }\oplus _{\in }\cdots \oplus _{\in }\varpi _{n}._{\in }{\beta }_{\delta \left( n\right) }). \end{aligned}
(33)

where $$\left( \delta \left( 1\right), \delta \left( 2\right), \ldots, \delta \left( n\right) \right)$$ is a permutation of $$\left( 1,2,\ldots, n\right)$$ such that $$\beta _{\delta \left( p\right) }\ge _{L}\beta _{\delta \left( p+1\right) }$$ for all $$p=1,2,\ldots, n-1.$$

### Theorem 4

Let $$\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{{\beta } _{p}},\Upsilon _{{\beta }_{p}}\right), \left( p=1,2,\ldots, n\right)$$ be a family of PFNs; then the value aggregated by the PFEOWA operator is again a PFV and

\begin{aligned}&{\hbox {PFEOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n}) \\ &\quad =\left\{ \begin{array}{c} \frac{\prod \nolimits _{p=1}^{n}(1+\mu _{{\beta }_{\delta \left( p\right) }})^{\varpi _{p}}-\prod \nolimits _{p=1}^{n}(1-\mu _{{\beta }_{\delta \left( p\right) }})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(1+\mu _{{\beta }_{\delta \left( p\right) }})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(1-\mu _{{\beta }_{\delta \left( p\right) }})^{\varpi _{p}}}, \\ \frac{2\prod \nolimits _{p=1}^{n}(\eta _{{\beta }_{\delta \left( p\right) }})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(2-\eta _{{\beta }_{\delta \left( p\right) }})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(\eta _{{\beta }_{\delta \left( p\right) }})^{\varpi _{p}}} \\, \frac{2\prod \nolimits _{p=1}^{n}({\Upsilon }_{{\beta }_{\delta \left( p\right) }})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{n}(2-{\Upsilon }_{{\beta }_{\delta \left( p\right) }})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}({\Upsilon }_{{\beta }_{\delta \left( p\right) }})^{\varpi _{p}}} \end{array} \right\}, \end{aligned}

where $$\left( \delta \left( 1\right), \delta \left( 2\right), \ldots, \delta \left( n\right) \right)$$ is a permutation of $$\left( 1,2,\ldots, n\right)$$ with $$\beta _{\delta \left( p\right) }\ge _{L}\beta _{\delta \left( p+1\right) }$$ for all $$p=1,2,\ldots, n-1,$$ and $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ is the weighting vector of the PFEOWA operator such that $$\varpi _{p}\in \left[ 0,1\right]$$ $$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1.$$

### Proof

The proof of this theorem is similar to that of Theorem 3. $$\square$$

### Corollary 3

The PFOWA operator and PFEOWA operator have the following relation

\begin{aligned}&{\hbox {PFEOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n}) \\ &\quad \le {\hbox {PFOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots}, {\beta }_{n}). \end{aligned}

where $$\beta _{p}$$ $$\left( p=1,2,\ldots, n\right)$$ is a family of PFNs and $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ is the weighting vector of $$\beta _{p}$$ such that $$\varpi _{p}\in \left[ 0,1\right]$$ and $$\sum _{p=1}^{n}\varpi _{p}=1.$$

### Proof

Similar to that of Corollary 2. $$\square$$

### Example 2

Let $$\beta _{1}=\left( 0.2,0.3,0.4\right), \beta _{2}=\left( 0.1,0.5,0.3\right), \beta _{3}=\left( 0.3,0.2,0.4\right),$$$$\beta _{4}=\left( 0.1,0.2,0.3\right)$$ and $$\beta _{5}=\left( 0.3,0.1,0.4\right)$$ be five PFVs, and let the PFEOWA operator have the associated vector $$\varpi =\left( 0.113,0.256,0.132,0.403,0.096\right) ^{T}$$. Since $$\beta _{2}<\beta _{1}<\beta _{4}<\beta _{3}<\beta _{5},$$ then $$\beta _{\delta \left( 1\right) }=\beta _{5}=\left( 0.3,0.1,0.4\right),$$$$\beta _{\delta \left( 2\right) }=\beta _{3}=\left( 0.3,0.2,0.4\right),$$$$\beta _{\delta \left( 3\right) }=\beta _{4}=\left( 0.1,0.2,0.3\right),$$$$\beta _{\delta \left( 4\right) }=\beta _{1}=\left( 0.2,0.3,0.4\right),$$$$\beta _{\delta \left( 5\right) }=$$$$\beta _{2}=\left( 0.1,0.5,0.3\right).$$ Then, we compute the following partial values: $$\prod \nolimits _{p=1}^{5}(1+\mu _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}=1.2115,$$$$\prod \nolimits _{p=1}^{5}(1-\mu _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}=0.7821,$$$$\prod \nolimits _{p=1}^{5}\eta _{\beta _{\delta \left( p\right) }}^{\varpi _{p}}=0.2378,$$$$\prod \nolimits _{p=1}^{5}(2-\eta _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}=1.7391,\prod \nolimits _{p=1}^{5}\Upsilon _{\beta _{\delta \left( p\right) }}^{\varpi _{p}}$$$$=0.3746,$$$$\prod \nolimits _{p=1}^{5}(2-\Upsilon _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}=1.6222.$$ By Theorem 4, it follows that

\begin{aligned}&{\hbox {PFEOWA}}_{\varpi }(\beta _{1},\beta _{2},\ldots, \beta _{5}) \\ &\quad =\left[ \begin{array}{c} \frac{\prod \nolimits _{p=1}^{5}(1+\mu _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}-\prod \nolimits _{p=1}^{5}(1-\mu _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}}{\prod \nolimits _{p=1}^{5}(1+\mu _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}+\prod \nolimits _{p=1}^{5}(1-\mu _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}},\frac{2\prod \nolimits _{p=1}^{5} \eta _{\beta _{\delta \left( p\right) }}^{\varpi _{p}}}{\prod \nolimits _{p=1}^{5}(2-\eta _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}+\prod \nolimits _{p=1}^{5}\eta _{\beta _{\delta \left( p\right) }}^{\varpi _{p}}} \\, \frac{2\prod \nolimits _{p=1}^{5}\Upsilon _{\beta _{\delta \left( p\right) }}^{\varpi _{p}}}{\prod \nolimits _{p=1}^{5}(2-\Upsilon _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}+\prod \nolimits _{p=1}^{5}\Upsilon _{\beta _{\delta \left( p\right) }}^{\varpi _{p}}} \end{array} \right] \\ &\quad =\left( 0.2154,0.2406,0.3752\right) \end{aligned}

### Example 3

If we use the PFOWA operator, which was developed by Xu [52], to aggregate the PFVs $$\beta _{p}$$ $$\left( p=1,2,\ldots, 5\right),$$ then we have

\begin{aligned}&{\hbox {PFOWA}}_{\varpi }(\beta _{1},\beta _{2},\ldots, \beta _{5}) \\ &\quad =\left( 1-\prod\limits _{p=1}^{5}(1-\mu _{\beta _{\delta \left( p\right) }})^{\varpi _{p}},\prod\limits _{p=1}^{5}(\eta _{\beta _{\delta \left( p\right) }})^{\varpi _{p}},\prod\limits _{p=1}^{5}(\Upsilon _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}\right) \\ &\quad =\left( 0.2179,0.2378,0.3746\right) \end{aligned}

It is clear that $${\hbox {PFEOWA}}_{\varpi }(\beta _{1},\beta _{2},\ldots, \beta _{5})<{\hbox {PFOWA}}_{\varpi }(\beta _{1},\beta _{2},\ldots, \beta _{5}).$$

Like the PFOWA operator, the PFEOWA operator satisfies the following properties.

### Proposition 3

Let $$\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{{\beta } _{p}},\Upsilon _{{\beta }_{p}}\right), \left( p=1,2,\ldots, n\right)$$ be a family of PFVs, and $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ be the weighting vector of the PFEOWA operator, such that $$\varpi _{p}\in \left[ 0,1\right]$$ $$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1;$$ then, we have the following.

1. Idempotency: If all $$\beta _{p}$$ are equal, i.e., $$\beta _{p}=\beta$$ for all $$p$$ $$\left( p=1,2,\ldots, n\right),$$ then

\begin{aligned} {\hbox {PFEOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})={\beta } \end{aligned}
2. Boundary:

\begin{aligned} {\beta }_{\min }\le {\hbox {PFEOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})\le {\beta }_{\max } \end{aligned}

where $$\beta _{\min }=\min \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\}$$ and $$\beta _{\max }=\max \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\}.$$

3. Monotonicity: Let $$\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{ {\beta }_{p}},\Upsilon _{{\beta }_{p}}\right)$$ and $$\beta _{p}^{*}=\left( \mu _{{\beta }_{p}^{*}},\eta _{{\beta } _{p}^{*}},\Upsilon _{{\beta }_{p}^{*}}\right)$$ $$\left( p=1,2,\ldots, n\right)$$ be two collections of PFVs with $$\beta _{p}\le _{L}\beta _{p}^{*},$$ i.e., $$\mu _{{\beta }_{p}}\le \mu _{{\beta }_{p}^{*}},$$ $$\eta _{{\beta }_{p}}\ge \eta _{{\beta } _{p}^{*}}$$ and $$\Upsilon _{{\beta }_{p}}\ge \Upsilon _{{\beta }_{p}^{*}}$$ for all p; then

\begin{aligned}&{\hbox {PFEOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n}) \\ &\quad \le {\hbox {PFEOWA}}_{\varpi }({\beta }_{1}^{*},{\beta } _{2}^{*},{\ldots },{\beta }_{n}^{*}). \end{aligned}
4. Commutativity: Let $$\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{ {\beta }_{p}},\Upsilon _{{\beta }_{p}}\right), \left( p=1,2,\ldots, n\right)$$ be a family of PFVs; then, for every $$\varpi,$$

\begin{aligned}&{\hbox {PFEOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n}) \\ &\quad ={\hbox {PFEOWA}}_{\varpi }({\beta }_{1}^{*},{\beta } _{2}^{*},{\ldots},{\beta }_{n}^{*}). \end{aligned}

where $$(\beta _{1}^{*},\beta _{2}^{*},{\ldots},\beta _{n}^{*})$$ is any permutation of $$(\beta _{1},\beta _{2},{\ldots},\beta _{n}).$$

Besides the aforementioned properties, the PFEOWA operator has the following desirable results.

### Proposition 4

Let $$\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{{\beta } _{p}},\Upsilon _{{\beta }_{p}}\right), \left( p=1,2,\ldots, n\right)$$ be a family of PFVs, and $$\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}$$ be the weighting vector of the PFEOWA operator, such that $$\varpi _{p}\in \left[ 0,1\right]$$ $$\left( p=1,2,\ldots, n\right)$$ and $$\sum _{p=1}^{n}\varpi _{p}=1;$$ then, we have the following.

1. If $$\varpi =\left( 1,0,\ldots, 0\right) ^{T},$$ then $${\hbox {PFEOWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\max \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\}.$$

2. If $$\varpi =\left( 0,0,\ldots, 1\right) ^{T},$$ then $${\hbox {PFEOWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\min \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\}.$$

3. If $$\varpi _{j}=1$$ and $$\varpi _{p}=0$$ for all $$p\ne j,$$ then $${\hbox {PFEOWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\beta _{\delta \left( j\right) },$$ where $$\beta _{\delta \left( j\right) }$$ is the jth largest of the $$\beta _{p}\left( p=1,2,\ldots, n\right).$$
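Idempotency and these special cases are easy to check numerically. A minimal sketch (the helper `pfeowa` and the `(mu, eta, nu)` tuple encoding are our own, assuming the Einstein PFEOWA formula used in Example 2):

```python
from math import prod

def pfeowa(pfvs, weights):
    """Einstein PFEOWA over already-ordered (mu, eta, nu) triples -- sketch."""
    a = prod((1 + m) ** w for (m, _, _), w in zip(pfvs, weights))
    b = prod((1 - m) ** w for (m, _, _), w in zip(pfvs, weights))
    c = prod(e ** w for (_, e, _), w in zip(pfvs, weights))
    d = prod((2 - e) ** w for (_, e, _), w in zip(pfvs, weights))
    g = prod(y ** w for (_, _, y), w in zip(pfvs, weights))
    h = prod((2 - y) ** w for (_, _, y), w in zip(pfvs, weights))
    return ((a - b) / (a + b), 2 * c / (d + c), 2 * g / (h + g))

# Idempotency: aggregating identical PFVs returns the same PFV
beta = (0.4, 0.3, 0.2)
idem = pfeowa([beta] * 3, [0.2, 0.5, 0.3])

# Weight vector (1, 0, ..., 0) picks out beta_delta(1), the largest PFV
ordered = [(0.5, 0.1, 0.2), (0.4, 0.3, 0.2), (0.2, 0.3, 0.4)]
top = pfeowa(ordered, [1.0, 0.0, 0.0])
print(idem, top)
```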

## Application of the picture fuzzy Einstein weighted averaging operator to multiple attribute decision making

MADM problems are common in everyday decision environments. An MADM problem is to find the best compromise solution from all feasible alternatives assessed on multiple attributes. Let $$A=\left\{ A_{1},A_{2},\ldots, A_{n}\right\}$$ be a discrete set of alternatives and $$C=\left\{ C_{1},C_{2},\ldots, C_{m}\right\}$$ a set of attributes. Suppose the decision maker provides the PFVs $$k_{ij}=\left( \mu _{ij},\eta _{ij},\Upsilon _{ij}\right)$$ for the alternatives $$A_{i}$$ $$\left( i=1,2,\ldots, n\right)$$ on the attributes $$C_{j}$$ $$\left( j=1,2,\ldots, m\right),$$ where $$\mu _{ij},\eta _{ij}$$ and $$\Upsilon _{ij}$$ indicate the degrees to which the alternative $$A_{i}$$ satisfies, is neutral toward, and does not satisfy the attribute $$C_{j}$$, respectively, subject to the condition $$0\le \mu _{ij}+\eta _{ij}+\Upsilon _{ij}\le 1$$. Hence, an MADM problem can be concisely expressed as a picture fuzzy decision matrix $$K=\left( k_{ij}\right) _{n\times m}=\left( \mu _{ij},\eta _{ij},\Upsilon _{ij}\right) _{n\times m}$$.

Step 1 Find the normalized picture fuzzy decision matrix. Generally, attributes are of two types, benefit and cost; in other words, the attribute set C can be divided into two subsets, $$C_{1}$$ and $$C_{2}$$, the subsets of benefit attributes and cost attributes, respectively. If all the attributes of an MADM problem are of the same type (all benefit or all cost), then the rating values do not need normalization. If the attributes are of different types (benefit and cost), we can use the following formula to convert the benefit-type values into cost-type values.

\begin{aligned} s_{ij}&= \left( \mu _{ij},\eta _{ij},\Upsilon _{ij}\right) \nonumber \\ &= \left\{ \begin{array}{ll} k_{ij}^{c}, &\quad j\in C_{1} \\ k_{ij}, &\quad j\in C_{2} \end{array} \right. \end{aligned}
(34)

Hence, we obtain the normalized picture fuzzy decision matrix $$S=\left( s_{ij}\right) _{n\times m}=\left( \left( \mu _{ij},\eta _{ij},\Upsilon _{ij}\right) \right) _{n\times m},$$ where $$k_{ij}^{c}$$ is the complement of $$k_{ij}$$.

Step 2 Utilize the PFWA and PFEWA operators to aggregate all the rating values $$s_{ij}\left( j=1,2,\ldots, m\right)$$ of the ith row and obtain the overall rating value $$s_{i}$$ corresponding to the alternative $$A_{i}$$.

Step 3 Find the score value of the overall aggregated value $$s_{i}$$ by using Eq. (4). Then rank the alternatives $$A_{i}\left( i=1,2,\ldots, n\right)$$ in descending order of their score values and select the best one.
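The three steps above can be sketched end to end. Everything below is illustrative: the decision matrix, the `in_C1` column split, and the weights are hypothetical, the Einstein weighted-averaging formula follows the PFEWA definition used earlier, and since Eq. (4) is outside this excerpt, the score function $$s(\beta )=\mu -\eta -\Upsilon$$ is an assumption consistent with the ordering used in Example 2.

```python
from math import prod

def complement(k):
    """Eq. (34) normalization: the complement swaps mu and nu."""
    mu, eta, nu = k
    return (nu, eta, mu)

def pfewa(pfvs, weights):
    """Picture fuzzy Einstein weighted averaging (sketch, no reordering)."""
    a = prod((1 + m) ** w for (m, _, _), w in zip(pfvs, weights))
    b = prod((1 - m) ** w for (m, _, _), w in zip(pfvs, weights))
    c = prod(e ** w for (_, e, _), w in zip(pfvs, weights))
    d = prod((2 - e) ** w for (_, e, _), w in zip(pfvs, weights))
    g = prod(y ** w for (_, _, y), w in zip(pfvs, weights))
    h = prod((2 - y) ** w for (_, _, y), w in zip(pfvs, weights))
    return ((a - b) / (a + b), 2 * c / (d + c), 2 * g / (h + g))

def score(beta):
    """Assumed score function; the paper's Eq. (4) is outside this excerpt."""
    mu, eta, nu = beta
    return mu - eta - nu

# Hypothetical 2-alternative, 3-attribute decision matrix of PFVs
K = [[(0.5, 0.2, 0.2), (0.4, 0.3, 0.2), (0.3, 0.2, 0.4)],
     [(0.6, 0.1, 0.2), (0.2, 0.4, 0.3), (0.4, 0.3, 0.2)]]
in_C1 = [True, False, True]   # columns that Eq. (34) complements
w = [0.3, 0.3, 0.4]           # attribute weights

# Step 1: normalize; Step 2: aggregate each row; Step 3: rank by score
S = [[complement(k) if c1 else k for k, c1 in zip(row, in_C1)] for row in K]
totals = [pfewa(row, w) for row in S]
ranking = sorted(range(len(totals)), key=lambda i: score(totals[i]), reverse=True)
print(totals, ranking)
```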

## Illustrative example

Let us consider a numerical decision making example in which a decision maker evaluates three different companies for the investment of money. Let $$A_{1},A_{2}$$ and $$A_{3}$$ represent a car company, a food company and an arms company, respectively, and let the criteria be: $$C_{1}$$ risk analysis, $$C_{2}$$ growth analysis, $$C_{3}$$ investment cost, $$C_{4}$$ social impact, $$C_{5}$$ operating cost, $$C_{6}$$ environmental impact analysis and $$C_{7}$$ other factors. The opinions of the decision maker about these three companies based on the seven criteria are represented in Table 1.

Based on Table 1, we get the picture fuzzy decision matrix $$K=\left( k_{ij}\right) _{3\times 7}=\left( \left( \mu _{ij},\eta _{ij},\Upsilon _{ij}\right) \right) _{3\times 7}$$. In this problem, we consider the attributes $$C_{3}$$ and $$C_{7}$$ to be cost attributes and all the others to be benefit attributes; using Eq. (34), the picture fuzzy decision matrix K is transformed into the normalized matrix shown in Table 2. Let $$\varpi =(0.05,0.15,0.20,0.20,0.15,0.15,0.10)^{T}$$ be the attribute weight vector. Using the PFEWA and PFWA operators, respectively, we get the overall rating values and the ranking orders of the alternatives $$A_{i}\left( i=1,2,3\right)$$ shown in Table 3.

It is clearly seen from Table 3 that different operators yield different overall rating values of the alternatives, but the ranking orders of the alternatives remain the same; therefore, the best option is $$A_{1}$$.

### Comparison analysis

This section presents a comparative analysis of several existing aggregation operators for picture fuzzy information against the proposed Einstein aggregation operators. The rankings obtained by the existing and proposed methods are shown in the table below.

Overall ranking of the alternatives:

| Operators | Ranking |
| --- | --- |
| PFWG [1] | $$H_{4}>H_{1}>H_{2}>H_{3}$$ |
| PFOWG [1] | $$H_{4}>H_{2}>H_{1}>H_{3}$$ |
| PFHWG [1] | $$H_{4}>H_{1}>H_{2}>H_{3}$$ |
| GPFHWG [1] | $$H_{4}>H_{1}>H_{2}>H_{3}$$ |
| PFEWA (proposed) | $$H_{4}>H_{1}>H_{2}>H_{3}$$ |
| PFEOWA (proposed) | $$H_{4}>H_{1}>H_{2}>H_{3}$$ |

The best alternative is $$H_{4}.$$ The ranking obtained using the proposed Einstein weighted averaging operators agrees with the results of the existing methods, which supports their validity. Hence, the novel Einstein aggregation operators proposed in this study aggregate picture fuzzy information effectively and efficiently. Using them, we can find the best alternative from the set of alternatives given by the decision maker; thus, the MCDM technique based on the proposed operators can serve as an application in decision support systems.

## Conclusion

In this paper, we have investigated multiple attribute decision making (MADM) problems based on arithmetic aggregation operators and Einstein operations with picture fuzzy information. Motivated by the idea of traditional arithmetic aggregation operators and Einstein operations, we have developed some aggregation operators for aggregating picture fuzzy information: the picture fuzzy Einstein aggregation operators. We have then utilized these operators to develop approaches for solving picture fuzzy multiple attribute decision making problems. Finally, a practical example of selecting a company for the investment of money is given to verify the developed approach and to demonstrate its practicality and effectiveness. In the future, the application of the proposed aggregation operators of PFSs needs to be explored in decision making, risk analysis and many other uncertain and fuzzy environments.

## References

1. Ashraf, S., Mahmood, T., Abdullah, S., Khan, Q.: Different approaches to multi-criteria group decision making problems for picture fuzzy environment. Bull. Braz. Math. Soc. New Ser. (2018). https://doi.org/10.1007/s00574-018-0103-y

2. Ashraf, S., Abdullah, S., Mahmood, T., Ghani, F., Mahmood, T.: Spherical fuzzy sets and their applications in multi-attribute decision making problems. J. Intell. Fuzzy Syst. (Preprint), pp. 1–16. https://doi.org/10.3233/JIFS-172009

3. Ashraf, S., Abdullah, S.: Spherical aggregation operators and their application in multiattribute group decision-making. Int. J. Intell. Syst. 34(3), 493–523 (2019)

4. Atanassov, K.T.: Intuitionistic fuzzy sets. Fuzzy Sets Syst. 20, 87–96 (1986)

5. Atanassov, K.T.: An equality between intuitionistic fuzzy sets. Fuzzy Sets Syst. 79, 257–258 (1996)

6. Atanassov, K.T.: Intuitionistic Fuzzy Sets, pp. 1–137. Physica, Heidelberg (1999)

7. Arora, R., Garg, H.: A robust correlation coefficient measure of dual hesitant fuzzy soft sets and their application in decision making. Eng. Appl. Artif. Intell. 72, 80–92 (2018)

8. Chen, S.M., Chang, C.H.: Fuzzy multiattribute decision making based on transformation techniques of intuitionistic fuzzy values and intuitionistic fuzzy geometric averaging operators. Inf. Sci. 352, 133–149 (2016)

9. Chen, S.M., Cheng, S.H., Tsai, W.H.: Multiple attribute group decision making based on interval-valued intuitionistic fuzzy aggregation operators and transformation techniques of interval-valued intuitionistic fuzzy values. Inf. Sci. 367–368(1), 418–442 (2016)

10. Cuong, B.C.: Picture fuzzy sets - first results. Part 2, Seminar Neuro-Fuzzy Systems with Applications. Institute of Mathematics, Hanoi (2013)

11. Cuong, B.C., Van Hai, P.: Some fuzzy logic operators for picture fuzzy sets. In: 2015 Seventh International Conference on Knowledge and Systems Engineering (KSE), pp. 132–137. IEEE (2015)

12. De, S.K., Biswas, R., Roy, A.R.: An application of intuitionistic fuzzy sets in medical diagnosis. Fuzzy Sets Syst. 117(2), 209–213 (2001)

13. Deschrijver, G., Kerre, E.E.: A generalization of operators on intuitionistic fuzzy sets using triangular norms and conorms. Notes IFS 8(1), 19–27 (2002)

14. Deschrijver, G., Cornelis, C., Kerre, E.E.: On the representation of intuitionistic fuzzy t-norms and t-conorms. IEEE Trans. Fuzzy Syst. 12(1), 45–61 (2004)

15. Fahmi, A., Abdullah, S., Amin, F., Ali, A., Ahmad Khan, W.: Some geometric operators with triangular cubic linguistic hesitant fuzzy number and their application in group decision-making. J. Intell. Fuzzy Syst. 35, 2485–2499 (2018)

16. Fahmi, A., Amin, F., Abdullah, S., Ali, A.: Cubic fuzzy Einstein aggregation operators and its application to decision-making. Int. J. Syst. Sci. 49(11), 2385–2397 (2018)

17. Fahmi, A., Abdullah, S., Amin, F., Khan, M.S.A.: Trapezoidal cubic fuzzy number Einstein hybrid weighted averaging operators and its application to decision making. Soft Comput. (2018). https://doi.org/10.1007/s00500-018-3242-6

18. Goyal, M., Yadav, D., Tripathi, A.: Intuitionistic fuzzy genetic weighted averaging operator and its application for multiple attribute decision making in e-learning. Indian J. Sci. Technol. 9(1), 1–15 (2016)

19. Garg, H.: A new generalized Pythagorean fuzzy information aggregation using Einstein operations and its application to decision making. Int. J. Intell. Syst. 31(9), 886–920 (2016)

20. Garg, H.: Generalized intuitionistic fuzzy interactive geometric interaction operators using Einstein t-norm and t-conorm and their application to decision making. Comput. Ind. Eng. 101, 53–69 (2016)

21. Garg, H.: Generalized Pythagorean fuzzy geometric aggregation operators using Einstein t-norm and t-conorm for multicriteria decision-making process. Int. J. Intell. Syst. 32(6), 597–630 (2017)

22. Garg, H.: Some picture fuzzy aggregation operators and their applications to multicriteria decision-making. Arab. J. Sci. Eng. 42(12), 5275–5290 (2017)

23. Garg, H.: Generalised Pythagorean fuzzy geometric interactive aggregation operators using Einstein operations and their application to decision making. J. Exp. Theor. Artif. Intell. 30, 1–32 (2018). https://doi.org/10.1080/0952813X.2018.1467497

24. Garg, H.: Some robust improved geometric aggregation operators under interval-valued intuitionistic fuzzy environment for multi-criteria decision-making process. J. Ind. Manag. Optim. 14(1), 283–308 (2018)

25. Garg, H.: A linear programming method based on an improved score function for interval-valued Pythagorean fuzzy numbers and its application to decision-making. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 26(01), 67–80 (2018)

26. Garg, H., Kumar, K.: An advanced study on the similarity measures of intuitionistic fuzzy sets based on the set pair analysis theory and their application in decision making. Soft Comput. 22, 4959–4970 (2018)

27. Garg, H.: New exponential operational laws and their aggregation operators for interval-valued Pythagorean fuzzy multicriteria decision-making. Int. J. Intell. Syst. 33(3), 653–683 (2018)

28. Huang, J.Y.: Intuitionistic fuzzy Hamacher aggregation operators and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 27(1), 505–513 (2014)

29. Hong, D.H., Choi, C.H.: Multicriteria fuzzy decision-making problems based on vague set theory. Fuzzy Sets Syst. 114(1), 103–113 (2000)

30. Khan, M.S.A., Abdullah, S.: Interval-valued Pythagorean fuzzy GRA method for multiple-attribute decision making with incomplete weight information. Int. J. Intell. Syst. 33(8), 1689–1716 (2018)

31. Khan, A.A., Ashraf, S., Abdullah, S., Qiyas, M., Luo, J., Khan, S.U.: Pythagorean fuzzy Dombi aggregation operators and their application in decision support system. Symmetry 11(3), 383 (2019)

32. Khatibi, V., Montazer, G.A.: Intuitionistic fuzzy set vs. fuzzy set application in medical pattern recognition. Artif. Intell. Med. 47(1), 43–52 (2009)

33. Kaur, G., Garg, H.: Cubic intuitionistic fuzzy aggregation operators. Int. J. Uncertain. Quantif. 8(5), 405–427 (2018)

34. Kaur, G., Garg, H.: Multi-attribute decision-making based on Bonferroni mean operators under cubic intuitionistic fuzzy set environment. Entropy 20(1), 65 (2018)

35. Kumar, K., Garg, H.: TOPSIS method based on the connection number of set pair analysis under interval-valued intuitionistic fuzzy set environment. Comput. Appl. Math. 37(2), 1319–1329 (2018)

36. Lu, M., Wei, G., Alsaadi, F.E., Hayat, T., Alsaedi, A.: Bipolar 2-tuple linguistic aggregation operators in multiple attribute decision making. J. Intell. Fuzzy Syst. 33(2), 1197–1207 (2017)

37. Phong, P.H., Hieu, D.T., Ngan, R.T., Them, P.T.: Some compositions of picture fuzzy relations. In: Proceedings of the 7th National Conference on Fundamental and Applied Information Technology Research (FAIR'7), Thai Nguyen, pp. 19–20 (2014)

38. Rani, D., Garg, H.: Complex intuitionistic fuzzy power aggregation operators and their applications in multicriteria decision-making. Expert Syst. 35(6), e12325 (2018)

39. Singh, S., Garg, H.: Distance measures between type-2 intuitionistic fuzzy sets and their application to multicriteria decision-making process. Appl. Intell. 46(4), 788–799 (2017)

40. Son, L.H.: Generalized picture distance measure and applications to picture fuzzy clustering. Appl. Soft Comput. 46(C), 284–295 (2016)

41. Thong, P.H.: A new approach to multi-variable fuzzy forecasting using picture fuzzy clustering and picture fuzzy rule interpolation method. In: Knowledge and Systems Engineering, pp. 679–690. Springer, Cham (2015)

42. Wang, L., Peng, J.J., Wang, J.Q.: A multi-criteria decision-making framework for risk ranking of energy performance contracting project under picture fuzzy environment. J. Clean. Prod. 191, 105–118 (2018)

43. Wang, L., Zhang, H.Y., Wang, J.Q., Li, L.: Picture fuzzy normalized projection-based VIKOR method for the risk evaluation of construction project. Appl. Soft Comput. 64, 216–226 (2018)

44. Wang, W., Liu, X.: Intuitionistic fuzzy geometric aggregation operators based on Einstein operations. Int. J. Intell. Syst. 26(11), 1049–1075 (2011)

45. Wang, W., Liu, X.: Intuitionistic fuzzy information aggregation using Einstein operations. IEEE Trans. Fuzzy Syst. 20(5), 923–938 (2012)

46. Wei, G.: Picture fuzzy cross-entropy for multiple attribute decision making problems. J. Bus. Econ. Manag. 17(4), 491–502 (2016)

47. Wei, G.: Picture fuzzy aggregation operators and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 33(2), 713–724 (2017)

48. Wei, G.: Some cosine similarity measures for picture fuzzy sets and their applications to strategic decision making. Informatica 28(3), 547–564 (2017)

49. Wei, G.: Picture 2-tuple linguistic Bonferroni mean operators and their application to multiple attribute decision making. Int. J. Fuzzy Syst. 19(4), 997–1010 (2017)

50. Wei, G., Alsaadi, F.E., Hayat, T., Alsaedi, A.: Projection models for multiple attribute decision making with picture fuzzy information. Int. J. Mach. Learn. Cybern. 9(4), 713–719 (2018)

51. Xu, Z., Chen, J., Wu, J.: Clustering algorithm for intuitionistic fuzzy sets. Inf. Sci. 178(19), 3775–3790 (2008)

52. Xu, Z., Cai, X.: Recent advances in intuitionistic fuzzy information aggregation. Fuzzy Optim. Decis. Mak. 9(4), 359–381 (2010)

53. Xu, Z., Cai, X.: Intuitionistic fuzzy information aggregation. In: Intuitionistic Fuzzy Information Aggregation, pp. 1–102. Springer, Berlin (2012)

54. Xu, Z., Yager, R.R.: Some geometric aggregation operators based on intuitionistic fuzzy sets. Int. J. Gen. Syst. 35(4), 417–433 (2006)

55. Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965)

56. Zeng, S., Ashraf, S., Arif, M., Abdullah, S.: Application of exponential Jensen picture fuzzy divergence measure in multi-criteria group decision making. Mathematics 7(2), 191 (2019)

57. Zhao, X., Wei, G.: Some intuitionistic fuzzy Einstein hybrid aggregation operators and their application to multiple attribute decision making. Knowl. Based Syst. 37, 472–479 (2013)

## Author information


### Corresponding author

Correspondence to Saifullah Khan.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Khan, S., Abdullah, S. & Ashraf, S. Picture fuzzy aggregation information based on Einstein operations and their application in decision making. Math Sci 13, 213–229 (2019). https://doi.org/10.1007/s40096-019-0291-7