Introduction

Computational Intelligence is a set of methodologies and approaches that address complex real-world problems that cannot be tackled by traditional techniques. Research on operators and computing processes in fuzzy sets is promising for Computational Intelligence, especially for real-world problems in which much of the information is vague or uncertain.

The increasing complexity and the dynamic change of today’s environment have brought many challenges to the decision-making process. In recent years, researchers have developed several methods for multi-criteria decision-making (MCDM) to overcome the complexities faced and facilitate the decision-making process. These methods have been widely exploited to increase the efficiency of the decision-making process and improve the quality of the decisions made. MCDM methods have contributed effectively and efficiently in handling various applications, e.g. risk assessment [1, 2], new product development [3], renewable energy [4–6], economic and societal dynamics of climate change [7], complex networks [8], image processing [9], and water resource management [10].

The decision-making environment is often uncertain due to information redundancy, ambiguity, and vagueness resulting from uncontrollable factors, e.g. the nature of the data and measurement errors. To overcome this issue, Zadeh [11] proposed the concept of fuzzy sets, later named ordinary fuzzy sets (OFSs) or type-1 fuzzy sets (T1FSs). OFSs were widely and successfully employed to solve different problems in various applications. However, with the development of modern technologies, real-world problems became more complicated, and OFSs were not sufficient to model human judgment and reasoning when dealing with indeterminate and ambiguous information. Therefore, fuzzy sets started to evolve and different types were proposed.

Among the proposed fuzzy sets are type-2 fuzzy sets [12], neutrosophic fuzzy sets [13], intuitionistic fuzzy sets [14], picture fuzzy sets [15], hesitant fuzzy sets [16, 17], and Pythagorean fuzzy sets [18, 19].

Atanassov [14] proposed intuitionistic fuzzy sets (IFSs), which constitute a generalization of the concept of a fuzzy set. IFSs handle imprecision and uncertainty differently: while a fuzzy set gives only the degree of membership of an element in a given set, an IFS gives both a degree of membership and a degree of non-membership. The sum of these degrees is less than or equal to one, and the residue from one is the hesitation degree.

Smarandache [13] expanded the concept of IFSs and proposed neutrosophic fuzzy sets (NFSs) with an independent triplet, namely the truth, falsity, and indeterminacy degrees. Each degree is less than or equal to one, and their sum is less than or equal to three.

Cuong and Kreinovich [15] proposed picture fuzzy sets (PcFSs) with an independent triplet, namely the degree of positive membership, the degree of negative membership, and the degree of neutral membership, whose sum is less than or equal to one. The residue from one is the degree of refusal membership. PcFSs are suitable for models in which human opinions involve answers of the types: yes, abstain, no, and refusal.

Yager and Abbasov [18] and Yager [19] made an advanced development of IFSs and proposed Pythagorean fuzzy sets (PFSs). In PFSs, the sum of squares of the degree of membership and the degree of non-membership is less than or equal to one. The hesitancy degree is the square root of the residue of this sum of squares from one. Hence, PFSs provide a larger domain than IFSs.

In both IFSs and PFSs, the indeterminacy degree depends on the membership and the non-membership degrees. On the other hand, in NFSs and PcFSs the indeterminacy degree is independent of the membership and the non-membership degrees.
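The enlargement of the admissible domain from IFSs to PFSs is easy to check numerically. The following Python sketch is purely illustrative (the helper names are ours, not from the cited papers); it tests a membership/non-membership pair against the IFS and PFS constraints.

```python
# Admissibility of a (membership, non-membership) pair under the
# IFS constraint (mu + nu <= 1) and the PFS constraint (mu^2 + nu^2 <= 1).

def is_ifs(mu, nu):
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu + nu <= 1

def is_pfs(mu, nu):
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu**2 + nu**2 <= 1

# (0.7, 0.5) violates the IFS constraint (sum = 1.2) but satisfies the
# PFS constraint (0.49 + 0.25 = 0.74 <= 1).
print(is_ifs(0.7, 0.5))  # False
print(is_pfs(0.7, 0.5))  # True
```

The pair (0.7, 0.5) is inadmissible as an IFS but admissible as a PFS, which is precisely the larger domain noted above.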

Spherical fuzzy sets (SFSs) are the current stage in the evolution of fuzzy sets to handle the imprecise, vague, and uncertain information faced in real-world problems. SFSs are generalizations of IFSs, PcFSs, PFSs, and NFSs, and they encompass the advantages of these earlier fuzzy sets. Moreover, SFSs are important when the decision makers’ opinion is not only constrained to agreement or disagreement but also involves some sort of hesitation. SFSs define the membership function on a unit sphere, and the three independent membership parameters are related through their squared summation. Hence, human cognition and perceptions are expressed more extensively, in a larger domain. SFSs were introduced simultaneously and independently by Gündoğdu and Kahraman [20], Mahmood, et al. [21], and Ashraf, et al. [22]. Since SFSs were only recently introduced, the literature on them is still limited.

Gündoğdu and Kahraman [20] introduced the generalized three-dimensional SFSs and presented some essential differences from the other fuzzy sets. They presented the arithmetic operations involving addition, subtraction, and multiplication with their proofs. They also developed the spherical weighted arithmetic mean (SWAM) and the spherical weighted geometric mean (SWGM) aggregation operators.

Mahmood, et al. [21] proposed the concept of SFSs and T-spherical fuzzy sets (T-SFSs). They defined some operations of SFSs and T-SFSs along with spherical fuzzy relations. They proposed the T-spherical fuzzy weighted geometric (TSFWG) operator and discussed medical diagnostics and decision-making problems in the environment of SFSs and T-SFSs as practical applications.

Ashraf, et al. [22] introduced SFSs and investigated their basic operations. They introduced the spherical fuzzy number weighted averaging aggregation (SFNWAA) operators and the spherical fuzzy number weighted geometric aggregation (SFNWGA) operators.

Ashraf, et al. [23] introduced some useful operations such as spherical fuzzy t-norms and spherical fuzzy t-conorms. In addition, they introduced the spherical fuzzy negator and some classifications of spherical fuzzy t-norms and t-conorms that are useful for developing aggregation operators. Then, Ashraf and Abdullah [24] proposed generalized spherical aggregation operators for SFSs that utilize the strict Archimedean t-norm and t-conorm. Later, Ashraf, et al. [25] defined some new operational laws via the Dombi t-norm and t-conorm. They proposed the spherical fuzzy Dombi weighted averaging (SFDWA), spherical fuzzy Dombi ordered weighted averaging (SFDOWA), spherical fuzzy Dombi hybrid weighted averaging (SFDHWA), spherical fuzzy Dombi weighted geometric (SFDWG), spherical fuzzy Dombi ordered weighted geometric (SFDOWG), and spherical fuzzy Dombi hybrid weighted geometric (SFDHWG) aggregation operators. Farrokhizadeh, et al. [26] adopted the Bonferroni mean (BM) and SFS operators to propose new aggregation operators, e.g. the spherical fuzzy Bonferroni mean (SFBM) and the spherical fuzzy normalized weighted Bonferroni mean (SFNWBM).

There are several multi-criteria decision-making (MCDM) methods that can be applied to handle complex decision-making problems, e.g. the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), VIsekriterijumska optimizacija i KOmpromisno Resenje (VIKOR), and the preference ranking organization method for enrichment evaluation (PROMETHEE). Nevertheless, there is a more robust MCDM tool that is easier to understand and implement: the MULTIMOORA method (multi-objective optimization on the basis of ratio analysis plus the full multiplicative form), which can facilitate the decision-making process and provide an effective ranking with comprehensive measurement [27].

The MULTIMOORA is an efficient and effective multi-criteria decision-making (MCDM) method that has been successfully applied to solving complicated real-life problems. According to Brauers and Zavadskas [28], using two different methods of multi-objective optimization is more robust than using a single method, and using three methods is more robust than using two. The MULTIMOORA technique encompasses three different utility functions and a mixture of compensatory and non-compensatory approaches [29]. Hence, it is one of the most robust MCDM techniques.

First, Brauers and Zavadskas [30] introduced the MOORA method (multi-objective optimization on the basis of ratio analysis) which utilizes two techniques, the ratio system technique and the reference point technique. Then, Brauers and Zavadskas [28] added the full multiplicative form and proposed the MULTIMOORA. The MULTIMOORA was extended using different types of fuzzy sets being robust and flexible and was applied to various practical applications [31]. Recently, Kutlu Gündoğdu [32] extended the MULTIMOORA in the spherical fuzzy environment.

In the MULTIMOORA method, aggregation operators are the basis of the ratio system and the full multiplicative form techniques. Almost all the proposed aggregation operators, except for the Dombi aggregation operators, are based on the product of the membership parameters. As a result, whenever the extreme ratings \(\{ ( 1,0,0 ) \text{ and } ( 0,1,0 ) \}\) are present in the evaluation process, they dominate and cancel the other ratings. When SFSs corresponding to linguistic terms are proposed, these extreme values are excluded. However, when decision-makers (DMs) express their opinions directly as percentages of agreement and disagreement, these ratings can occur as total agreement or total disagreement. Furthermore, in some practical applications, such as the evaluation of energy storage technologies, the presence of these extreme ratings is inevitable. As for the Dombi aggregation operators, they utilize the reciprocals of the three membership parameters, so they also cannot process the extreme values. Therefore, aggregation functions that can handle the extreme values are needed, serving two purposes: first, to guarantee fair and balanced treatment in the evaluation whenever the extreme values exist, hence avoiding false rankings; second, to allow the SFSs corresponding to the linguistic terms to be extended to encompass these values.

Since SFSs were only recently introduced, the score functions and the distance measures are still subject to study. Hence, the distance between SFSs might not be proportional to their scores, i.e. being closer to the ideal rating might not indicate an SFS with a better score. Therefore, to account for the uncertainty in the distance of an SFS from a reference point, it is more convenient to employ both the best and worst ratings and express the distance as an SFS.

In this study, a new approach for the MULTIMOORA method is developed that avoids the disadvantages in the implementation of the current methodology in the spherical fuzzy environment. Thus, the robustness and the accuracy of the method are enhanced. In this approach, two novel aggregation functions for SFSs are proposed that avert the flaws of the current aggregation operators. Moreover, to avoid the pitfalls of distance measures in the reference point technique, two reference points are employed instead of one, and the distance between a rating and the ideal ratings is expressed as an SFS. Furthermore, due to the disadvantages of the dominance theory in large-scale applications, the results of the three utilities are aggregated to get the overall utility on which the ranking is based. Two practical examples are solved and the results are compared with the results of other methods to test and validate the performance of the proposed spherical fuzzy MULTIMOORA (SF-MULTIMOORA).

The article is organized as follows. Section 2 presents the preliminaries of SFSs and the conventional MULTIMOORA. The proposed aggregation functions with their proofs are introduced in Sect. 3. Section 4 discusses the proposed SF-MULTIMOORA using the novel aggregation functions and the fuzzy distance. In Sect. 5 practical examples that illustrate the applicability and validity of the proposed method are solved, and the results are compared with the results of previously used methods. Finally, the conclusion is given in Sect. 6.

As the previous overview illustrates, few scholars have applied the MULTIMOORA method in the spherical fuzzy environment; hence, the method has not been well investigated, and the pitfalls in its implementation that might give incorrect results have not been eliminated. Thus, the MULTIMOORA method needs further development to increase its robustness and guarantee accurate and reliable results using spherical fuzzy information. Therefore, the main contributions of the article are:

  1. Develop new aggregation functions that guarantee fair and balanced treatment in the evaluation to avoid false ranking.

  2. Express the distance in the spherical fuzzy environment as an SFS to avoid the pitfalls of the extant score functions and distance measures that might yield incorrect results.

  3. Propose an improved MULTIMOORA method based on the novel aggregation functions and the spherical fuzzy distance to increase the robustness and accuracy of the method. In addition, eliminate the disadvantages of applying the dominance theory.

Preliminaries

Spherical fuzzy sets

For a given universe of discourse U, an SFS is defined by [20]

$$ \tilde{A}_{s} = \bigl\{ \bigl\langle u, \bigl( \mu _{\tilde{A}_{s}} ( u ), \upsilon _{\tilde{A}_{s}} ( u ),\pi _{\tilde{A}_{s}} ( u ) \bigr) \mid u\in U \bigr\rangle \bigr\} , $$
(1)

where \(\mu _{\tilde{A}_{s}}: U\rightarrow [ 0,1 ] \) is the degree of membership, \(\upsilon _{\tilde{A}_{s}}: U\rightarrow [ 0,1 ]\) is the degree of non-membership, \(\pi _{\tilde{A}_{s}}:U\rightarrow [ 0,1 ]\) is the degree of hesitation, satisfying

$$ 0\leq \mu _{\tilde{A}_{s}}^{2} ( u ) + \upsilon _{\tilde{A}_{s}}^{2} ( u ) + \pi _{\tilde{A}_{s}}^{2} ( u ) \leq 1,\quad \forall u\in U. $$
(2)
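Condition (2) can be checked directly. The minimal helper below is an illustration only (the function name is ours); it tests whether a membership triple lies within the spherical fuzzy domain.

```python
def is_sfs(mu, nu, pi):
    """Check the spherical fuzzy constraint mu^2 + nu^2 + pi^2 <= 1, Eq. (2)."""
    return (all(0.0 <= d <= 1.0 for d in (mu, nu, pi))
            and mu**2 + nu**2 + pi**2 <= 1.0)

print(is_sfs(0.9, 0.3, 0.3))   # True:  0.81 + 0.09 + 0.09 = 0.99
print(is_sfs(0.9, 0.5, 0.3))   # False: 0.81 + 0.25 + 0.09 = 1.15
```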

For two SFSs \(\tilde{A}_{s} = ( \mu _{\tilde{A}_{s}}, \upsilon _{\tilde{A}_{s}},\pi _{\tilde{A}_{s}} )\), \(\tilde{B}_{s} = ( \mu _{\tilde{B}_{s}}, \upsilon _{\tilde{B}_{s}},\pi _{\tilde{B}_{s}} )\), and a scalar \(\lambda >0\), the operational laws and the aggregation operators are given as follows [20]

$$\begin{aligned}& \begin{aligned}[b] &\tilde{A}_{s} \oplus \tilde{B}_{s} \\ &\quad = \bigl\{ \bigl( \mu _{\tilde{A}_{s}}^{2} + \mu _{\tilde{B}_{s}}^{2} - \mu _{\tilde{A}_{s}}^{2} \mu _{\tilde{B}_{s}}^{2} \bigr)^{{1} / {2}}, \upsilon _{\tilde{A}_{s}} \upsilon _{\tilde{B}_{s}}, \\ &\qquad \bigl( \bigl( 1- \mu _{\tilde{B}_{s}}^{2} \bigr) \pi _{\tilde{A}_{s}}^{2} + \bigl( 1- \mu _{\tilde{A}_{s}}^{2} \bigr) \pi _{\tilde{B}_{s}}^{2} - \pi _{\tilde{A}_{s}}^{2} \pi _{\tilde{B}_{s}}^{2} \bigr)^{{1} / {2}} \bigr\} , \end{aligned} \end{aligned}$$
(3)
$$\begin{aligned}& \begin{aligned}[b] &\tilde{A}_{s} \otimes \tilde{B}_{s} \\ &\quad = \bigl\{ \mu _{\tilde{A}_{s}} \mu _{\tilde{B}_{s}}, \bigl( \upsilon _{\tilde{A}_{s}}^{2} + \upsilon _{\tilde{B}_{s}}^{2} - \upsilon _{\tilde{A}_{s}}^{2} \upsilon _{\tilde{B}_{s}}^{2} \bigr)^{{1} / {2}}, \\ &\qquad \bigl( \bigl( 1- \upsilon _{\tilde{B}_{s}}^{2} \bigr) \pi _{\tilde{A}_{s}}^{2} + \bigl( 1- \upsilon _{\tilde{A}_{s}}^{2} \bigr) \pi _{\tilde{B}_{s}}^{2} - \pi _{\tilde{A}_{s}}^{2} \pi _{\tilde{B}_{s}}^{2} \bigr)^{{1} / {2}} \bigr\} , \end{aligned} \end{aligned}$$
(4)
$$\begin{aligned}& \begin{aligned}[b] \lambda \odot \tilde{A}_{s} &= \bigl\{ \bigl( 1- \bigl( 1- \mu _{\tilde{A}_{s}}^{2} \bigr)^{\lambda} \bigr)^{{1} / {2}}, \upsilon _{\tilde{A}_{s}}^{\lambda}, \\ &\quad \bigl( \bigl( 1- \mu _{\tilde{A}_{s}}^{2} \bigr)^{\lambda} - \bigl( 1- \mu _{\tilde{A}_{s}}^{2} - \pi _{\tilde{A}_{s}}^{2} \bigr)^{\lambda} \bigr)^{1/2} \bigr\} , \end{aligned} \end{aligned}$$
(5)
$$\begin{aligned}& \begin{aligned}[b] \tilde{A}_{s}^{\lambda} &= \bigl\{ \mu _{\tilde{A}_{s}}^{\lambda}, \bigl( 1- \bigl( 1- \upsilon _{\tilde{A}_{s}}^{2} \bigr)^{\lambda} \bigr)^{{1} / {2}},\\ &\quad \bigl( \bigl( 1- \upsilon _{\tilde{A}_{s}}^{2} \bigr)^{\lambda} - \bigl( 1- \upsilon _{\tilde{A}_{s}}^{2} - \pi _{\tilde{A}_{s}}^{2} \bigr)^{\lambda} \bigr)^{{1} / {2}} \bigr\} . \end{aligned} \end{aligned}$$
(6)
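The operational laws (3)–(6) can be sketched in Python as follows (illustrative code; the function names are ours). Note how the extreme rating \((1,0,0)\) absorbs any other rating under addition, which is the domination effect discussed in the introduction.

```python
from math import sqrt

# Spherical fuzzy operational laws (3)-(6); each set is a (mu, nu, pi) triple.

def sf_add(a, b):                                   # Eq. (3)
    ma, na, pa = a; mb, nb, pb = b
    return (sqrt(ma**2 + mb**2 - ma**2 * mb**2),
            na * nb,
            sqrt((1 - mb**2) * pa**2 + (1 - ma**2) * pb**2 - pa**2 * pb**2))

def sf_mul(a, b):                                   # Eq. (4)
    ma, na, pa = a; mb, nb, pb = b
    return (ma * mb,
            sqrt(na**2 + nb**2 - na**2 * nb**2),
            sqrt((1 - nb**2) * pa**2 + (1 - na**2) * pb**2 - pa**2 * pb**2))

def sf_scalar(lam, a):                              # Eq. (5)
    ma, na, pa = a
    return (sqrt(1 - (1 - ma**2)**lam),
            na**lam,
            sqrt((1 - ma**2)**lam - (1 - ma**2 - pa**2)**lam))

def sf_power(a, lam):                               # Eq. (6)
    ma, na, pa = a
    return (ma**lam,
            sqrt(1 - (1 - na**2)**lam),
            sqrt((1 - na**2)**lam - (1 - na**2 - pa**2)**lam))

# The extreme rating (1, 0, 0) absorbs any other rating under addition.
print(sf_add((1, 0, 0), (0.8, 0.2, 0.1)))  # (1.0, 0.0, 0.0) up to rounding
```

As a sanity check, \(\lambda = 1\) in (5) and (6) leaves an SFS unchanged.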

The aggregation operators, i.e. the spherical weighted arithmetic mean (SWAM) and the spherical weighted geometric mean (SWGM), are defined by:

$$\begin{aligned}& \mathrm{SWAM}_{w} ( \tilde{A}_{S_{1}}, \tilde{A}_{S_{2}}, \dots , \tilde{A}_{S_{n}} ) \\& \quad = w_{1} \tilde{A}_{S_{1}} \oplus w_{2} \tilde{A}_{S_{2}} \oplus \cdots \oplus w_{n} \tilde{A}_{S_{n}} \\& \quad = \Biggl\{ \Biggl[ 1- \prod_{i=1}^{n} \bigl( 1- \mu _{\tilde{A}_{S_{i}}}^{2} \bigr)^{w_{i}} \Biggr]^{{1} / {2}}, \prod_{i=1}^{n} \upsilon _{\tilde{A}_{S_{i}}}^{w_{i}}, \\& \qquad \Biggl[ \prod _{i=1}^{n} \bigl( 1- \mu _{\tilde{A}_{S_{i}}}^{2} \bigr)^{w_{i}} - \prod_{i=1}^{n} \bigl( 1- \mu _{\tilde{A}_{S_{i}}}^{2} - \pi _{\tilde{A}_{S_{i}}}^{2} \bigr)^{w_{i}} \Biggr]^{{1} / {2}} \Biggr\} , \\& \quad \text{where } w_{i} \in [ 0,1 ]; \sum _{i=1}^{n} w_{i} =1. \end{aligned}$$
(7)
$$\begin{aligned}& \mathrm{SWGM}_{w} ( \tilde{A}_{S_{1}}, \tilde{A}_{S_{2}},\dots , \tilde{A}_{S_{n}} ) \\& \quad = \tilde{A}_{S_{1}}^{w_{1}} \otimes \tilde{A}_{S_{2}}^{w_{2}} \otimes \cdots \otimes \tilde{A}_{S_{n}}^{w_{n}} \\& \quad = \Biggl\{ \prod_{i=1}^{n} \mu _{\tilde{A}_{S_{i}}}^{w_{i}}, \Biggl[ 1- \prod_{i=1}^{n} \bigl( 1- \upsilon _{\tilde{A}_{S_{i}}}^{2} \bigr)^{w_{i}} \Biggr]^{{1} / {2}}, \\& \qquad \Biggl[ \prod_{i=1}^{n} \bigl( 1- \upsilon _{\tilde{A}_{S_{i}}}^{2} \bigr)^{w_{i}} - \prod _{i=1}^{n} \bigl( 1- \upsilon _{\tilde{A}_{S_{i}}}^{2} - \pi _{\tilde{A}_{S_{i}}}^{2} \bigr)^{w_{i}} \Biggr]^{{1} / {2}} \Biggr\} , \\& \quad \text{where } w_{i} \in [ 0,1 ]; \sum _{i=1}^{n} w_{i} =1. \end{aligned}$$
(8)
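The aggregation operators (7) and (8) can be sketched as follows (illustrative code; the function names are ours). The SWAM call reproduces, up to rounding, the aggregated rating of the alternative \(X_{2}\) used in the worked example of Sect. 3.

```python
from math import prod, sqrt

def swam(sets, w):
    """Spherical weighted arithmetic mean, Eq. (7); sets are (mu, nu, pi) triples."""
    a = prod((1 - m**2) ** wi for (m, _, _), wi in zip(sets, w))
    b = prod((1 - m**2 - p**2) ** wi for (m, _, p), wi in zip(sets, w))
    return (sqrt(1 - a),
            prod(n ** wi for (_, n, _), wi in zip(sets, w)),
            sqrt(a - b))

def swgm(sets, w):
    """Spherical weighted geometric mean, Eq. (8)."""
    a = prod((1 - n**2) ** wi for (_, n, _), wi in zip(sets, w))
    b = prod((1 - n**2 - p**2) ** wi for (_, n, p), wi in zip(sets, w))
    return (prod(m ** wi for (m, _, _), wi in zip(sets, w)),
            sqrt(1 - a),
            sqrt(a - b))

ratings = [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1), (0.9, 0.1, 0.1)]
weights = [0.2, 0.3, 0.5]
print(tuple(round(x, 4) for x in swam(ratings, weights)))
# -> (0.8774, 0.1231, 0.102)
```

Both operators are idempotent: aggregating n copies of the same SFS with weights summing to one returns that SFS.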

The score function is given by [33]

$$ \mathrm{Score} ( \tilde{A}_{s} ) = ( 2\mu _{\tilde{A}_{s}} - \pi _{\tilde{A}_{s}} )^{2} - ( \upsilon _{\tilde{A}_{s}} - \pi _{\tilde{A}_{s}} )^{2}, $$
(9)

and the accuracy function is given by [33]

$$ \mathrm{Accuracy} ( \tilde{A}_{s} ) = \mu _{\tilde{A}_{s}}^{2} + \upsilon _{\tilde{A}_{s}}^{2} + \pi _{\tilde{A}_{s}}^{2}. $$
(10)

The score and accuracy functions are used for ranking as follows

$$\begin{aligned} \tilde{A}_{s} < \tilde{B}_{s} \quad &\text{iff}\quad \mathrm{Score} ( \tilde{A}_{s} ) < \mathrm{Score} ( \tilde{B}_{s} ),\\ &\text{or}\quad \mathrm{Score} ( \tilde{A}_{s} ) = \mathrm{Score} ( \tilde{B}_{s} ) \quad \text{and}\\ &\hphantom{\text{or}\quad } \mathrm{Accuracy} ( \tilde{A}_{s} ) < \mathrm{Accuracy} ( \tilde{B}_{s} ). \end{aligned}$$
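The score (9), the accuracy (10), and the ranking rule above can be sketched directly (illustrative code; the function names are ours).

```python
def score(a):                       # Eq. (9)
    mu, nu, pi = a
    return (2 * mu - pi) ** 2 - (nu - pi) ** 2

def accuracy(a):                    # Eq. (10)
    mu, nu, pi = a
    return mu**2 + nu**2 + pi**2

def sf_less(a, b):
    """Ranking rule: compare scores first, break ties with accuracy."""
    return (score(a) < score(b)
            or (score(a) == score(b) and accuracy(a) < accuracy(b)))

print(score((1, 0, 0)))                      # 4
print(round(score((0.28, 0.72, 0.08)), 4))   # -0.1792
```

The two printed values are the scores that reappear in the worked example of Sect. 3.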

The conjugate of an SFS is defined by [22]

$$ \tilde{A}_{s}^{c} = ( \upsilon _{\tilde{A}_{s}}, \mu _{\tilde{A}_{s}}, \pi _{\tilde{A}_{s}} ). $$
(11)

Finally, the normalized Euclidean distance formula is [20]

$$ \begin{aligned}[b] d ( \tilde{A}_{s}, \tilde{B}_{s} ) &= \Biggl( \frac{1}{2n} \sum_{i=1}^{n} \bigl( \bigl( \mu _{\tilde{A}_{s}} ( u_{i} ) - \mu _{\tilde{B}_{s}} ( u_{i} ) \bigr)^{2} + \bigl( \upsilon _{\tilde{A}_{s}} ( u_{i} ) - \upsilon _{\tilde{B}_{s}} ( u_{i} ) \bigr)^{2} \\ &\quad {}+ \bigl( \pi _{\tilde{A}_{s}} ( u_{i} ) - \pi _{\tilde{B}_{s}} ( u_{i} ) \bigr)^{2} \bigr) \Biggr)^{1/2}. \end{aligned} $$
(12)
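Equation (12) can be sketched as follows (illustrative code; the function name is ours), where each set is given as a list of \((\mu, \upsilon, \pi)\) triples over the same universe of n elements.

```python
from math import sqrt

def sf_distance(A, B):
    """Normalized Euclidean distance between two SFSs, Eq. (12).
    A and B are lists of (mu, nu, pi) triples over the same n elements."""
    n = len(A)
    s = sum((ma - mb)**2 + (na - nb)**2 + (pa - pb)**2
            for (ma, na, pa), (mb, nb, pb) in zip(A, B))
    return sqrt(s / (2 * n))

# Distance from a set to itself is 0; the two extreme singleton ratings
# (1,0,0) and (0,1,0) are at the maximum distance of 1.
print(sf_distance([(0.9, 0.1, 0.1)], [(0.9, 0.1, 0.1)]))  # 0.0
print(sf_distance([(1, 0, 0)], [(0, 1, 0)]))              # 1.0
```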

The conventional MULTIMOORA method

Brauers and Zavadskas [30] proposed the multi-objective optimization on the basis of ratio analysis (MOORA) method. The MOORA method encompasses the additive utility function and the reference point approach. Later, Brauers and Zavadskas [28] incorporated the full multiplicative form into the MOORA method to increase its robustness, hence introducing the MULTIMOORA. The MULTIMOORA method can be summarized in the following steps [28, 34].

Given the general decision matrix D of a multi-criteria decision-making (MCDM) problem with n alternatives and m criteria

$$\mathbf{D}= [ X_{ij} ] = \textstyle\begin{array}{c} \textstyle\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {C}_{1} & {C}_{2}&\hphantom{\qquad }& {C}_{{m}} \end{array}\displaystyle \\ \textstyle\begin{array}{c} \textstyle\begin{array}{c} X_{1}\\ X_{2} \end{array}\displaystyle \\ \textstyle\begin{array}{c} {\vdots} \\ X_{n} \end{array}\displaystyle \end{array}\displaystyle \begin{bmatrix} \textstyle\begin{array}{c@{\quad }c} X_{11} & X_{12}\\ X_{21} & X_{22} \end{array}\displaystyle & \cdots & \textstyle\begin{array}{c} X_{1m}\\ X_{2m} \end{array}\displaystyle \\ {\vdots} & {\ddots} & {\vdots} \\ \textstyle\begin{array}{c@{\quad }c} X_{n1} & X_{n2} \end{array}\displaystyle & \cdots & X_{nm} \end{bmatrix} \end{array}\displaystyle , $$

where \(X_{ij}\) is the rating of the alternative \(X_{i}\); \(i=1,2,\dots ,n\) for the criterion \(C_{j}\); \(j=1,2,\dots ,m\). The elements of the general decision matrix D are normalized by dividing each rating by the square root of the sum of squares of all the ratings for the corresponding criterion, forming the normalized general decision matrix \(\mathbf{D}_{{N}} = [ X_{ij}^{N} ]\)

$$ X_{ij}^{N} = \frac{X_{ij}}{\sqrt{\sum_{i=1}^{n} X_{ij}^{2}}}. $$
(13)

When applying the ratio system technique, the normalized ratings are added for the criteria to be maximized. Meanwhile, they are subtracted for the criteria to be minimized. The overall index of each alternative is computed by

$$ R_{i} = \sum_{j=1}^{g} X_{ij}^{N} - \sum_{j=g+1}^{m} X_{ij}^{N}, $$
(14)

where g is the number of the maximized benefit criteria and \(m-g\) is the number of minimized cost criteria. The alternatives are ranked in descending order. The best alternative is the one with the maximum \(R_{i}\).

The reference point approach is applied using the Min-Max Metric of Chebyshev. First, the reference point of the jth criterion is defined

$$ X_{j}^{*} = \textstyle\begin{cases} \max_{i} X_{ij}^{N},& \text{for benefit criteria},\\ \min_{i} X_{ij}^{N},& \text{for cost criteria}. \end{cases} $$
(15)

Then, the deviation of the normalized ratings of each alternative from the reference point is computed

$$ d_{i} = \max_{j} \bigl\vert X_{j}^{*} - X_{ij}^{N} \bigr\vert . $$
(16)

The alternatives are ranked in ascending order. The best alternative is the one with the minimum \(d_{i}\).

In the full multiplicative form, the overall utility of each alternative is calculated by the dimensionless number

$$ U_{i} = \frac{U_{i}^{b}}{U_{i}^{c}}, $$
(17)

where \(U_{i}^{b} = \prod_{j=1}^{g} X_{ij}\) is the product of an alternative’s ratings of the benefit criteria, and \(U_{i}^{c} = \prod_{j=g+1}^{m} X_{ij}\) is the product of an alternative’s ratings of the cost criteria. The alternatives are ranked in descending order. The best alternative is the one with the maximum \(U_{i}\).

Finally, the dominance theory is utilized to find the best alternative based on the three ranking lists. The alternative that appears most often in first place across the three ranking lists is the best.
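The three techniques of the conventional MULTIMOORA can be combined in a compact sketch (illustrative code with made-up data; the function name is ours). The reference point step uses the Chebyshev deviation of each alternative from the best ratings.

```python
from math import sqrt, prod

def multimoora(X, g):
    """Conventional MULTIMOORA, Eqs. (13)-(17): X is an n x m crisp decision
    matrix whose first g columns are benefit criteria and the rest cost
    criteria. Returns the three utility vectors (R_i, d_i, U_i)."""
    n, m = len(X), len(X[0])
    # Vector normalization, Eq. (13).
    norms = [sqrt(sum(X[i][j] ** 2 for i in range(n))) for j in range(m)]
    N = [[X[i][j] / norms[j] for j in range(m)] for i in range(n)]

    # Ratio system, Eq. (14): benefit columns added, cost columns subtracted.
    R = [sum(N[i][j] for j in range(g)) - sum(N[i][j] for j in range(g, m))
         for i in range(n)]

    # Reference point, Eq. (15): Chebyshev deviation from the best ratings.
    ref = [max(N[i][j] for i in range(n)) if j < g
           else min(N[i][j] for i in range(n)) for j in range(m)]
    d = [max(abs(ref[j] - N[i][j]) for j in range(m)) for i in range(n)]

    # Full multiplicative form, Eq. (17).
    U = [prod(X[i][j] for j in range(g)) / prod(X[i][j] for j in range(g, m))
         for i in range(n)]
    return R, d, U

# Three alternatives, one benefit criterion and one cost criterion.
R, d, U = multimoora([[4, 2], [3, 1], [2, 3]], g=1)
best = (max(range(3), key=lambda i: R[i]),
        min(range(3), key=lambda i: d[i]),
        max(range(3), key=lambda i: U[i]))
print(best)  # (1, 1, 1): the second alternative tops all three lists
```

In this toy instance the three lists agree, so the dominance step is trivial; in general they may disagree, which is why a final ranking rule is needed.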

Spherical fuzzy averaging aggregation functions

Aggregation operators are the basis of any multi-criteria decision-making method. They are the key to processing the given information and producing a single value from a list of values so that a unique decision can be made. In this section, two new aggregation functions are introduced. In real-life MCDM problems based on spherical fuzzy information, these aggregation functions make the decision results more accurate and exact. They guarantee fair evaluation and avoid biased results.

An aggregation function should satisfy the following definition.

Definition 3.1

([19])

A function \(Agg: [ 0,1 ]^{n} \rightarrow [ 0,1 ]\) is called an aggregation function if it satisfies the following conditions

  i. \(Agg ( 0,\dots ,0 ) =0\),

  ii. \(Agg ( 1,\dots ,1 ) =1\), and

  iii. \(Agg ( a_{1},\dots , a_{n} ) \geq Agg ( b_{1},\dots , b_{n} ) \) if \(a_{i} \geq b_{i} \) for all i.

According to Beliakov and James [35], an aggregation function is considered averaging whenever the output is bounded by the minimum and maximum inputs, conjunctive whenever the output is bounded from above by the minimum input, disjunctive whenever the output is bounded from below by the maximum input, and mixed otherwise.

The spherical fuzzy averaging arithmetic aggregation function

In this subsection, the spherical fuzzy averaging arithmetic aggregation function (SFAA) is defined as an extension of Yager’s Pythagorean fuzzy weighted aggregation function, and the necessary proofs are given.

Theorem 3.1

For the SFSs \(\{ \tilde{A}_{1}, \tilde{A}_{2},\dots , \tilde{A}_{n} \}\) with weights \(( w_{1}, w_{2}, \dots , w_{n} )\), where \(w_{i} \in [0,1]\) and \(\sum_{i=1}^{n} w_{i} =1\), their aggregated value using the spherical fuzzy averaging arithmetic aggregation function \(( \mathrm{SFAA} )\) is given by

$$ \begin{aligned}[b] & \mathrm{SFAA} ( \tilde{A}_{1}, \tilde{A}_{2},\dots , \tilde{A}_{n} ) \\ &\quad = \Biggl( \sum_{i=1}^{n} w_{i} \mu _{\tilde{A}_{i}}, \sum_{i=1}^{n} w_{i} \upsilon _{\tilde{A}_{i}}, \sum_{i=1}^{n} w_{i} \pi _{\tilde{A}_{i}} \Biggr). \end{aligned} $$
(18)

First, it is proved that the SFAA function satisfies Definition 3.1. Then, it is proved that the result is an SFS.

Proof (1)

Applying the SFAA for SFSs \(\{ ( 1,0, 0 ),\dots , ( 1,0, 0 ) \}\)

$$\begin{aligned}& \mathrm{SFAA} \bigl\{ ( 1,0, 0 ),\dots , ( 1,0, 0 ) \bigr\} \\& \quad = \Biggl( \sum _{i=1}^{n} w_{i} \mu _{\tilde{A}_{i}}, \sum_{i=1}^{n} w_{i} \upsilon _{\tilde{A}_{i}}, \sum_{i=1}^{n} w_{i} \pi _{\tilde{A}_{i}} \Biggr)\\& \quad = \Biggl( \sum_{i=1}^{n} w_{i}, \sum_{i=1}^{n} 0, \sum _{i=1}^{n} 0 \Biggr) = ( 1,0, 0 ). \end{aligned}$$

It can be similarly proved for the SFSs \(\{ ( 0,1, 0 ),\dots , ( 0,1,0 ) \}\) and \(\{ ( 0,0, 1 ),\dots , ( 0,0,1 ) \}\).

Therefore, \(Agg ( 0,0,\dots ,0 ) =0\), and \(Agg ( 1,1,\dots ,1 ) =1\). □

Proof (2)

Applying the SFAA for SFSs \(\{ \tilde{A}_{1}, \tilde{A}_{2},\dots , \tilde{A}_{n} \}\), we get

$$ \begin{aligned} &\mathrm{SFAA} \{ \tilde{A}_{1}, \tilde{A}_{2},\dots , \tilde{A}_{n} \} \\ &\quad = \Biggl( \sum_{i=1}^{n} w_{i} \mu _{\tilde{A}_{i}}, \sum_{i=1}^{n} w_{i} \upsilon _{\tilde{A}_{i}}, \sum_{i=1}^{n} w_{i} \pi _{\tilde{A}_{i}} \Biggr). \end{aligned} $$

Applying the SFAA for SFSs \(\{ \tilde{B}_{1}, \tilde{B}_{2},\dots , \tilde{B}_{n} \}\), we get

$$ \begin{aligned} &\mathrm{SFAA} \{ \tilde{B}_{1}, \tilde{B}_{2},\dots , \tilde{B}_{n} \} \\ &\quad = \Biggl( \sum_{i=1}^{n} w_{i} \mu _{\tilde{B}_{i}}, \sum_{i=1}^{n} w_{i} \upsilon _{\tilde{B}_{i}}, \sum_{i=1}^{n} w_{i} \pi _{\tilde{B}_{i}} \Biggr). \end{aligned} $$

Assuming that \(\mu _{\tilde{A}_{i}} \geq \mu _{\tilde{B}_{i}}\), \(\upsilon _{\tilde{A}_{i}} \geq \upsilon _{\tilde{B}_{i}}\), and \(\pi _{\tilde{A}_{i}} \geq \pi _{\tilde{B}_{i}}\) for all i, we have

$$\begin{aligned}& \sum_{i=1}^{n} w_{i} \mu _{\tilde{A}_{i}} \geq \sum_{i=1}^{n} w_{i} \mu _{\tilde{B}_{i}},\qquad \sum_{i=1}^{n} w_{i} \upsilon _{\tilde{A}_{i}} \geq \sum_{i=1}^{n} w_{i} \upsilon _{\tilde{B}_{i}}, \end{aligned}$$

and

$$\begin{aligned}& \sum _{i=1}^{n} w_{i} \pi _{\tilde{A}_{i}} \geq \sum_{i=1}^{n} w_{i} \pi _{\tilde{B}_{i}}. \end{aligned}$$

Then, \(Agg ( a_{1},a_{2},\dots , a_{n} ) \geq Agg ( b_{1},b_{2},\dots , b_{n} ) \) if \(a_{i} \geq b_{i} \) for all i. □

Proof (3)

$$\begin{aligned}& \Biggl( \sum_{i=1}^{n} w_{i} \mu _{\tilde{A}_{i}} \Biggr)^{2} + \Biggl( \sum _{i=1}^{n} w_{i} \upsilon _{\tilde{A}_{i}} \Biggr)^{2}+ \Biggl( \sum_{i=1}^{n} w_{i} \pi _{\tilde{A}_{i}} \Biggr)^{2} \\& \quad = \sum _{i=1}^{n} w_{i}^{2} \mu _{\tilde{A}_{i}}^{2} + \sum_{i=1}^{n} w_{i}^{2} \upsilon _{\tilde{A}_{i}}^{2} + \sum _{i=1}^{n} w_{i}^{2} \pi _{\tilde{A}_{i}}^{2} \\& \qquad {}+ \sum_{i\neq j,i< j} 2 w_{i} \mu _{\tilde{A}_{i}} w_{j} \mu _{\tilde{A}_{j}} + \sum _{i\neq j,i< j} 2 w_{i} \upsilon _{\tilde{A}_{i}} w_{j} \upsilon _{\tilde{A}_{j}} \\& \qquad {}+ \sum_{i\neq j,i< j} 2 w_{i} \pi _{\tilde{A}_{i}} w_{j} \pi _{\tilde{A}_{j}}\\& \quad = \sum_{i=1}^{n} w_{i}^{2} \bigl(\mu _{\tilde{A}_{i}}^{2} + \upsilon _{\tilde{A}_{i}}^{2} + \pi _{\tilde{A}_{i}}^{2} \bigr)\\& \qquad {}+ \sum_{i\neq j,i< j} 2 w_{i} w_{j} ( \mu _{\tilde{A}_{i}} \mu _{\tilde{A}_{j}} + \upsilon _{\tilde{A}_{i}} \upsilon _{\tilde{A}_{j}} + \pi _{\tilde{A}_{i}} \pi _{\tilde{A}_{j}} )\\& \quad \leq \sum_{i=1}^{n} w_{i}^{2} + \sum_{i\neq j,i< j} 2 w_{i} w_{j} ( \mu _{\tilde{A}_{i}} \mu _{\tilde{A}_{j}} + \upsilon _{\tilde{A}_{i}} \upsilon _{\tilde{A}_{j}} + \pi _{\tilde{A}_{i}} \pi _{\tilde{A}_{j}} ). \end{aligned}$$

Since \(\sum_{i=1}^{n} w_{i} =1 \),

$$\Biggl( \sum_{i=1}^{n} w_{i} \Biggr)^{2} = \sum_{i=1}^{n} w_{i}^{2} + \sum_{i\neq j,i< j} 2 w_{i} w_{j} =1.$$

Accordingly, it suffices to show that \(\mu _{\tilde{A}_{1}} \mu _{\tilde{A}_{2}} + \upsilon _{\tilde{A}_{1}} \upsilon _{\tilde{A}_{2}} + \pi _{\tilde{A}_{1}} \pi _{\tilde{A}_{2}} \leq 1\), and the calculations for the other \(\mu _{\tilde{A}_{i}} \mu _{\tilde{A}_{j}} + \upsilon _{\tilde{A}_{i}} \upsilon _{\tilde{A}_{j}} + \pi _{\tilde{A}_{i}} \pi _{\tilde{A}_{j}}\) are similar.

The term \(\mu _{\tilde{A}_{1}} \mu _{\tilde{A}_{2}} + \upsilon _{\tilde{A}_{1}} \upsilon _{\tilde{A}_{2}} + \pi _{\tilde{A}_{1}} \pi _{\tilde{A}_{2}}\) can be viewed as the dot product of the two vectors \(\mathrm{V}_{1} = ( \mu _{\tilde{A}_{1}}, \upsilon _{\tilde{A}_{1}}, \pi _{\tilde{A}_{1}} )\) and \(\mathrm{V}_{2} = ( \mu _{\tilde{A}_{2}}, \upsilon _{\tilde{A}_{2}}, \pi _{\tilde{A}_{2}} )\), which lie in the first octant of the unit ball. From the geometric definition of the dot product

$$ \begin{aligned} \mathrm{V}_{1} \cdot \mathrm{V}_{2} &= \mu _{\tilde{A}_{1}} \mu _{\tilde{A}_{2}} + \upsilon _{\tilde{A}_{1}} \upsilon _{\tilde{A}_{2}} + \pi _{\tilde{A}_{1}} \pi _{\tilde{A}_{2}} = \Vert \mathrm{V}_{1} \Vert \Vert \mathrm{V}_{2} \Vert \cos\theta \\ &= \sqrt{\mu _{\tilde{A}_{1}}^{2} + \upsilon _{\tilde{A}_{1}}^{2} + \pi _{\tilde{A}_{1}}^{2}} \sqrt{\mu _{\tilde{A}_{2}}^{2} + \upsilon _{\tilde{A}_{2}}^{2} + \pi _{\tilde{A}_{2}}^{2}} \cos \theta \leq 1. \end{aligned} $$

This completes the proof. □

The choice of an appropriate spherical averaging arithmetic aggregation function plays a crucial role in MCDM problems. To illustrate this critical role, the following example is given.

Consider a simple MCDM problem with two alternatives and three criteria whose weights are \(\{ 0.2,0.3,0.5 \}\). The decision matrix is given by

$$ \tilde{\mathbf{D}} = \begin{bmatrix} ( 1,0,0 ) & ( 0.1, 0.9,0.1 ) & ( 0.1, 0.9,0.1 )\\ ( 0.9, 0.1,0.1 ) & ( 0.8, 0.2,0.1 ) & ( 0.9, 0.1,0.1 ) \end{bmatrix}, $$

where the rows correspond to the alternatives \(X_{1}\) and \(X_{2}\) and the columns to the criteria \(C_{1}\), \(C_{2}\), and \(C_{3}\).

It is obvious from the decision matrix that the ratings of \(X_{2}\) far exceed those of \(X_{1}\) for the second and third criteria, which have the bigger weights. For the second criterion with a weight of 0.3, the alternative \(X_{1}\) has a percentage of agreement of 10% and a percentage of disagreement of 90%, while the alternative \(X_{2}\) has a percentage of agreement of 80% and a percentage of disagreement of 20%. Similarly, for the third criterion with a weight of 0.5, the agreement on \(X_{1}\) is 10% and the disagreement is 90%, while the agreement on \(X_{2}\) is 90% and the disagreement is 10%. Meanwhile, the rating of \(X_{1}\) is only slightly better than that of \(X_{2}\) for the first criterion with the smallest weight of 0.2 (the agreement on \(X_{1}\) is 100%, and the agreement on \(X_{2}\) is 90% with a small hesitation margin). Therefore, by intuition \(X_{2}\) is better than \(X_{1}\).

Using the SWAM aggregation operator (7) and the score function (9), we have \(\mathrm{SWAM} ( X_{1} \mid C_{j} ) = ( 1,0,0 )\) with \(Sc ( X_{1} ) =4\), and \(\mathrm{SWAM} ( X_{2} \mid C_{j} ) =(0.8774, 0.1231,0.1024)\) with \(Sc ( X_{2} ) =2.7313\). Based on this result, \(X_{1}\) is chosen as the best despite being the worst by intuition. Thus, a single criterion with a full rating of \(( 1,0,0 )\) eliminates the effect of all the other assessment criteria regardless of their weights, which is unfair in the evaluation process. In this case, the selection is biased towards the alternative having a full rating for one criterion regardless of its ratings for the other criteria, leading to a false ranking.

Using the SFAA (18) and the score function (9), we get the \(\mathrm{SFAA} ( X_{1} \mid C_{j} ) = ( 0.28,0.72,0.08 )\) with \(Sc ( X_{1} ) =-0.1792\), and the \(\mathrm{SFAA} ( X_{2} \mid C_{j} ) =(0.87,0.13,0.13)\) with \(Sc ( X_{2} ) =2.5921\). This leads to the selection of \(X_{2}\), which is the best by intuition. Hence, the ranking is rational and avoids biased results.
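The demonstration above can be checked numerically. The following sketch reconstructs the score function (9) as \(Sc = (2\mu - \pi)^{2} - (\upsilon - \pi)^{2}\), the SWAM (7) in its usual product form, and the SFAA (18) as the component-wise weighted mean, all inferred from the values reported in the text; treat these exact forms as assumptions rather than the paper's verbatim definitions.

```python
import math

def score(s):
    """Score function (9), reconstructed: Sc = (2*mu - pi)^2 - (nu - pi)^2."""
    mu, nu, pi = s
    return (2 * mu - pi) ** 2 - (nu - pi) ** 2

def swam(sets, w):
    """Spherical weighted arithmetic mean (7), reconstructed product form."""
    p_mu = math.prod((1 - mu ** 2) ** wi for (mu, _, _), wi in zip(sets, w))
    p_hes = math.prod((1 - mu ** 2 - pi ** 2) ** wi for (mu, _, pi), wi in zip(sets, w))
    nu = math.prod(nu_i ** wi for (_, nu_i, _), wi in zip(sets, w))
    return (math.sqrt(1 - p_mu), nu, math.sqrt(p_mu - p_hes))

def sfaa(sets, w):
    """SFAA (18): component-wise weighted arithmetic mean."""
    return tuple(sum(wi * c for wi, c in zip(w, comp)) for comp in zip(*sets))

w = [0.2, 0.3, 0.5]
x1 = [(1, 0, 0), (0.1, 0.9, 0.1), (0.1, 0.9, 0.1)]        # ratings of X1
x2 = [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1), (0.9, 0.1, 0.1)]  # ratings of X2

# SWAM: the single full rating (1,0,0) forces X1 to (1,0,0) with score 4.
print(swam(x1, w), score(swam(x1, w)))
# SFAA keeps every criterion's influence: X1 scores ~-0.1792 and X2 wins.
print(sfaa(x1, w), score(sfaa(x1, w)))
print(sfaa(x2, w), score(sfaa(x2, w)))
```

Running this reproduces the reported \(\mathrm{SWAM}(X_{2})\approx(0.8774,0.1231,0.1024)\) and \(\mathrm{SFAA}(X_{1})=(0.28,0.72,0.08)\), confirming that the SFAA ranks \(X_{2}\) first as intuition requires.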

The spherical fuzzy averaging power aggregation function

In this subsection, the spherical fuzzy averaging power aggregation function (SFPA) is defined and the proofs are given.

Theorem 3.2

For SFSs \(\{ \tilde{A}_{1}, \tilde{A}_{2},\dots , \tilde{A}_{n} \}\) with weights \(( w_{1}, w_{2}, \dots , w_{n} )\), where \(w_{i} \in (0,1]\) and \(\sum_{i=1}^{n} w_{i} =1\), the aggregated value using the spherical fuzzy averaging power aggregation function \(( \mathrm{SFPA} )\) is given by

$$ \begin{aligned}[b] &\mathrm{SFPA} ( \tilde{A}_{1}, \tilde{A}_{2},\dots , \tilde{A}_{n} ) \\ &\quad = \Biggl( \frac{1}{n} \sum _{i=1}^{n} \mu _{\tilde{A}_{i}}^{\frac{1}{w_{i}}}, \frac{1}{n} \sum_{i=1}^{n} \upsilon _{\tilde{A}_{i}}^{\frac{1}{w_{i}}}, \frac{1}{n} \sum _{i=1}^{n} \pi _{\tilde{A}_{i}}^{\frac{1}{w_{i}}} \Biggr). \end{aligned} $$
(19)

Similar to the previous theorem, we need to show that the SFPA function satisfies Definition 3.1, and then show that the result is an SFS.

Proof (1)

Applying the SFPA for SFSs \(\{ ( 1,0, 0 ),\dots , ( 1,0, 0 ) \}\)

$$ \begin{aligned} &\mathrm{SFPA} \bigl\{ ( 1,0, 0 ),\dots , ( 1,0, 0 ) \bigr\} \\ &\quad = \Biggl( \frac{1}{n} \sum_{i=1}^{n} \mu _{\tilde{A}_{i}}^{\frac{1}{w_{i}}}, \frac{1}{n} \sum _{i=1}^{n} \upsilon _{\tilde{A}_{i}}^{\frac{1}{w_{i}}}, \frac{1}{n} \sum_{i=1}^{n} \pi _{\tilde{A}_{i}}^{\frac{1}{w_{i}}} \Biggr) \\ &\quad = \Biggl( \frac{1}{n} \sum _{i=1}^{n} 1, \frac{1}{n} \sum _{i=1}^{n} 0, \frac{1}{n} \sum _{i=1}^{n} 0 \Biggr)\\ &\quad = ( 1,0, 0 ). \end{aligned} $$

It can be similarly proved for the SFSs \(\{ ( 0,1, 0 ),\dots , ( 0,1,0 ) \}\) and \(\{ ( 0,0, 1 ),\dots , ( 0,0,1 ) \}\).

Therefore, \(Agg ( 0,0,\dots ,0 ) =0\), and \(Agg ( 1,1,\dots ,1 ) =1\). □

Proof (2)

Applying the SFPA for SFSs \(\{ \tilde{A}_{1}, \tilde{A}_{2},\dots , \tilde{A}_{n} \}\), we get

$$ \begin{aligned} &\mathrm{SFPA} \{ \tilde{A}_{1}, \tilde{A}_{2},\dots , \tilde{A}_{n} \} \\ &\quad = \Biggl( \frac{1}{n} \sum _{i=1}^{n} \mu _{\tilde{A}_{i}}^{\frac{1}{w_{i}}}, \frac{1}{n} \sum_{i=1}^{n} \upsilon _{\tilde{A}_{i}}^{\frac{1}{w_{i}}}, \frac{1}{n} \sum _{i=1}^{n} \pi _{\tilde{A}_{i}}^{\frac{1}{w_{i}}} \Biggr). \end{aligned} $$

Applying the SFPA for SFSs \(\{ \tilde{B}_{1}, \tilde{B}_{2},\dots , \tilde{B}_{n} \}\), we get

$$ \begin{aligned} &\mathrm{SFPA} \{ \tilde{B}_{1}, \tilde{B}_{2},\dots , \tilde{B}_{n} \} \\ &\quad = \Biggl( \frac{1}{n} \sum _{i=1}^{n} \mu _{\tilde{B}_{i}}^{\frac{1}{w_{i}}}, \frac{1}{n} \sum_{i=1}^{n} \upsilon _{\tilde{B}_{i}}^{\frac{1}{w_{i}}}, \frac{1}{n} \sum _{i=1}^{n} \pi _{\tilde{B}_{i}}^{\frac{1}{w_{i}}} \Biggr). \end{aligned} $$

Assuming that \(\mu _{\tilde{A}_{i}} \geq \mu _{\tilde{B}_{i}}\), \(\upsilon _{\tilde{A}_{i}} \geq \upsilon _{\tilde{B}_{i}}\), and \(\pi _{\tilde{A}_{i}} \geq \pi _{\tilde{B}_{i}}\) for all i, we have

$$\begin{aligned}& \sum_{i=1}^{n} \mu _{\tilde{A}_{i}}^{\frac{1}{w_{i}}} \geq \sum_{i=1}^{n} \mu _{\tilde{B}_{i}}^{\frac{1}{w_{i}}},\qquad \sum_{i=1}^{n} \upsilon _{\tilde{A}_{i}}^{\frac{1}{w_{i}}} \geq \sum_{i=1}^{n} \upsilon _{\tilde{B}_{i}}^{\frac{1}{w_{i}}}, \end{aligned}$$

and

$$\begin{aligned}& \sum _{i=1}^{n} \pi _{\tilde{A}_{i}}^{\frac{1}{w_{i}}} \geq \sum_{i=1}^{n} \pi _{\tilde{B}_{i}}^{\frac{1}{w_{i}}}. \end{aligned}$$

Then, \(Agg ( a_{1},a_{2},\dots , a_{n} ) \geq Agg ( b_{1},b_{2},\dots , b_{n} ) \) if \(a_{i} \geq b_{i} \) for all i. □

For simplicity \(\mu _{i}\), \(\upsilon _{i}\), and \(\pi _{i}\) will be directly used.

Proof (3)

$$\begin{aligned}& \Biggl( \frac{1}{n} \sum_{i=1}^{n} \mu _{i}^{\frac{1}{w_{i}}} \Biggr)^{2} + \Biggl( \frac{1}{n} \sum_{i=1}^{n} \upsilon _{i}^{\frac{1}{w_{i}}} \Biggr)^{2} + \Biggl( \frac{1}{n} \sum_{i=1}^{n} \pi _{i}^{\frac{1}{w_{i}}} \Biggr)^{2}\\& \quad = \frac{1}{n^{2}} \Biggl( \sum_{i=1}^{n} \mu _{i}^{\frac{2}{w_{i}}} + \sum_{i=1}^{n} \upsilon _{i}^{\frac{2}{w_{i}}} + \sum_{i=1}^{n} \pi _{i}^{\frac{2}{w_{i}}} +2 \sum_{i\neq j,i< j} \mu _{i}^{\frac{1}{w_{i}}} \mu _{j}^{\frac{1}{w_{j}}}\\& \qquad {} +2 \sum _{i\neq j,i< j} \upsilon _{i}^{\frac{1}{w_{i}}} \upsilon _{j}^{\frac{1}{w_{j}}} +2 \sum_{i\neq j,i< j} \pi _{i}^{\frac{1}{w_{i}}} \pi _{j}^{\frac{1}{w_{j}}} \Biggr). \end{aligned}$$

Since \(\mu _{i}, \upsilon _{i}, \pi _{i} \in [ 0,1 ]\) and \(w_{i}, w_{j} \in ( 0,1 ]\), we have

$$\begin{aligned}& \mu _{i}^{\frac{2}{w_{i}}} \leq \mu _{i}^{2},\qquad \upsilon _{i}^{\frac{2}{w_{i}}} \leq \upsilon _{i}^{2}, \qquad \pi _{i}^{\frac{2}{w_{i}}} \leq \pi _{i}^{2} \quad \text{and}\\& \mu _{i}^{\frac{1}{w_{i}}} \leq \mu _{i}, \qquad \upsilon _{i}^{\frac{1}{w_{i}}} \leq \upsilon _{i}, \qquad \pi _{i}^{\frac{1}{w_{i}}} \leq \pi _{i}, \end{aligned}$$

and

$$\begin{aligned}& \frac{1}{n^{2}} \Biggl( \sum_{i=1}^{n} \mu _{i}^{\frac{2}{w_{i}}} + \sum_{i=1}^{n} \upsilon _{i}^{\frac{2}{w_{i}}} + \sum_{i=1}^{n} \pi _{i}^{\frac{2}{w_{i}}} +2 \sum_{i\neq j,i< j} \mu _{i}^{\frac{1}{w_{i}}} \mu _{j}^{\frac{1}{w_{j}}}\\& \qquad {} +2 \sum _{i\neq j,i< j} \upsilon _{i}^{\frac{1}{w_{i}}} \upsilon _{j}^{\frac{1}{w_{j}}} +2 \sum_{i\neq j,i< j} \pi _{i}^{\frac{1}{w_{i}}} \pi _{j}^{\frac{1}{w_{j}}} \Biggr)\\& \quad \leq \frac{1}{n^{2}} \Biggl( \sum_{i=1}^{n} \mu _{i}^{2} + \sum_{i=1}^{n} \upsilon _{i}^{2} + \sum_{i=1}^{n} \pi _{i}^{2} +2 \sum_{i\neq j,i< j} \mu _{i} \mu _{j}\\& \qquad {} +2 \sum_{i\neq j,i< j} \upsilon _{i} \upsilon _{j} +2 \sum _{i\neq j,i< j} \pi _{i} \pi _{j} \Biggr)\\& \quad \leq \frac{1}{n^{2}} \Biggl( \sum_{i=1}^{n} \mu _{i}^{2} + \upsilon _{i}^{2} + \pi _{i}^{2} +2 \sum_{i\neq j,i< j} \mu _{i} \mu _{j} + \upsilon _{i} \upsilon _{j} + \pi _{i} \pi _{j} \Biggr). \end{aligned}$$

From the proof of Theorem 3.1, \(\mu _{1} \mu _{2} + \upsilon _{1} \upsilon _{2} + \pi _{1} \pi _{2} \leq 1\). Then,

$$ \begin{aligned} &\frac{1}{n^{2}} \Biggl( \sum_{i=1}^{n} \mu _{i}^{2} + \upsilon _{i}^{2} + \pi _{i}^{2} +2 \sum_{i\neq j,i< j} \mu _{i} \mu _{j} + \upsilon _{i} \upsilon _{j} + \pi _{i} \pi _{j} \Biggr) \\ &\quad \leq \frac{1}{n^{2}} \biggl[ n+2\cdot \frac{n^{2} -n}{2} \biggr] =1. \end{aligned} $$

This completes the proof. □

To illustrate the importance of using a proper spherical averaging power aggregation function to associate a unique value with each alternative, the following example is given.

Consider an MCDM problem with two alternatives and three criteria whose relative importance is \(\{ 0.2,0.3,0.5 \}\) with the following decision matrix

$$ \textstyle\begin{array}{c@{\quad }c@{\quad }c} \hphantom{\tilde{\mathbf{D}} =\quad }C_{1} & C_{2} & C_{3}\\ \tilde{\mathbf{D}} = \textstyle\begin{array}{c} X_{1}\\ X_{2} \end{array}\displaystyle \left [ \textstyle\begin{array}{c} (0,1,0)\\ (0.1, 0.9,0.1) \end{array}\displaystyle \right. & \textstyle\begin{array}{c} (0.9, 0.1, 0.1)\\ (0.2, 0.8,0.2) \end{array}\displaystyle & \left. \textstyle\begin{array}{c} (0.9, 0.1,0.1)\\ (0.1, 0.9,0.1) \end{array}\displaystyle \right ] \end{array}\displaystyle . $$

The ratings of \(X_{1}\) exceed those of \(X_{2}\) for the second and third criteria, which carry the larger weights. On the other hand, the rating of \(X_{2}\) is slightly better than that of \(X_{1}\) for the first criterion, which has the smallest weight. Accordingly, \(X_{1}\) is better than \(X_{2}\) by intuition.

Using the SWGM (8) and the score function (9), we have \(\mathrm{SWGM} ( X_{1} \mid C_{j} ) = ( 0,1,0 )\) with \(Sc ( X_{1} ) =-1\), and \(\mathrm{SWGM} ( X_{2} \mid C_{j} ) =(0.1231,0.8774,0.1274)\) with \(Sc ( X_{2} ) =-0.5484\). Consequently, \(X_{2}\) is ranked the best although this is not correct by intuition. Thus, a single criterion with the worst rating of \(( 0,1,0 )\) eliminates the effect of the other evaluation criteria, which results in a biased evaluation. In this case, the selection is biased against the alternative with the worst rating regardless of its ratings for the other criteria. Hence, a false ranking is obtained.

Meanwhile, when using the SFPA (19) and the score function (9) for \(X_{1}\), the \(\mathrm{SFPA} ( X_{1} \mid C_{j} ) = ( 0.5046,0.3368, 0.0035 )\) with \(Sc ( X_{1} ) =0.9003\). As for \(X_{2}\), the \(\mathrm{SFPA} ( X_{2} \mid C_{j} ) =(0.0049,0.6253,0.0049)\) with \(Sc ( X_{2} ) =-0.3849\). Then, the alternative \(X_{1}\) is selected, which is the best alternative by intuition.
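This computation can be verified with a short script. The SFPA follows Eq. (19) directly; the score function (9) is reconstructed as \(Sc = (2\mu - \pi)^{2} - (\upsilon - \pi)^{2}\) from the values reported in the text, which is an assumption.

```python
def score(s):
    """Score function (9), reconstructed from the values in the text."""
    mu, nu, pi = s
    return (2 * mu - pi) ** 2 - (nu - pi) ** 2

def sfpa(sets, w):
    """SFPA (19): each component raised to 1/w_i, then plainly averaged."""
    n = len(sets)
    return tuple(sum(c ** (1 / wi) for c, wi in zip(comp, w)) / n for comp in zip(*sets))

w = [0.2, 0.3, 0.5]
x1 = [(0, 1, 0), (0.9, 0.1, 0.1), (0.9, 0.1, 0.1)]        # ratings of X1
x2 = [(0.1, 0.9, 0.1), (0.2, 0.8, 0.2), (0.1, 0.9, 0.1)]  # ratings of X2

# The worst rating (0,1,0) no longer dominates: X1 scores ~0.9003 while
# X2 scores ~-0.3849, so X1 is correctly ranked first.
print(sfpa(x1, w), score(sfpa(x1, w)))
print(sfpa(x2, w), score(sfpa(x2, w)))
```

The output matches the reported \(\mathrm{SFPA}(X_{1})\approx(0.5046,0.3368,0.0035)\) and \(\mathrm{SFPA}(X_{2})\approx(0.0049,0.6253,0.0049)\).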

The proposed SF-MULTIMOORA

Being one of the most practical MCDM methods, MULTIMOORA has been successfully applied in various fields, e.g. computer science, economics, engineering, and environmental sciences [32]. Brauers and Zavadskas [28] classified MULTIMOORA as the most robust system of multi-objective optimization. In this section, the MULTIMOORA method is extended to the spherical fuzzy environment due to its desirable characteristics. A flowchart of the proposed SF-MULTIMOORA is presented in Fig. 1 to illustrate the method.

Figure 1

A flowchart of the proposed SF-MULTIMOORA

For an MCDM problem with n alternatives \(\{ X_{1}, X_{2},\dots , X_{n} \}\) and m criteria \(\{ C_{1}, C_{2},\dots , C_{m} \}\) with relative importance \(( w_{1}, w_{2},\dots , w_{m} )\) satisfying \(\sum_{j=1}^{m} w_{j} =1\), the spherical fuzzy general decision matrix is represented by

$$\tilde{\mathbf{D}} = [ \tilde{{X}}_{ij} ] = \textstyle\begin{array}{c} \textstyle\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {C}_{1} & {C}_{2}&\hphantom{\qquad }& {C}_{{m}} \end{array}\displaystyle \\ \textstyle\begin{array}{c} \textstyle\begin{array}{c} {X}_{1}\\ {X}_{2} \end{array}\displaystyle \\ \textstyle\begin{array}{c} {\vdots} \\ {X}_{n} \end{array}\displaystyle \end{array}\displaystyle \begin{bmatrix} \textstyle\begin{array}{c@{\quad}c} \tilde{{X}}_{11} & \tilde{{X}}_{12}\\ \tilde{{X}}_{21} & \tilde{{X}}_{22} \end{array}\displaystyle & \cdots & \textstyle\begin{array}{c} \tilde{{X}}_{1m}\\ \tilde{{X}}_{2m} \end{array}\displaystyle \\ {\vdots} & {\ddots} & {\vdots} \\ \textstyle\begin{array}{c@{\quad}c} \tilde{{X}}_{n1} & \tilde{{X}}_{n2} \end{array}\displaystyle & \cdots & \tilde{{X}}_{nm} \end{bmatrix} \end{array}\displaystyle , $$

where \(\tilde{{X}}_{ij} = ( \mu _{ij}, \upsilon _{ij}, \pi _{ij} )\) is the rating of the alternative \(X_{i}\) for the assessment criterion \(C_{j}\), expressed as an SFS. The membership degree “\(\mu _{ij} \)” indicates the degree to which an alternative \({X}_{i}\) satisfies a criterion \({C}_{j}\). Meanwhile, the non-membership degree “\(\upsilon _{ij} \)” indicates the degree to which \({X}_{i}\) fails to satisfy this criterion. The degree of hesitation “\(\pi _{ij} \)” indicates the degree of doubtfulness in the evaluation. In the decision matrix, the complement of the rating is used when handling a cost criterion. Therefore, the decision matrix is processed directly and the three techniques are directly applied.

The three techniques are illustrated by the following steps.

Step 1. The ratio system technique

When applying the ratio system in a spherical fuzzy environment, two main differences from the conventional MULTIMOORA exist. First, the ratings do not need normalization since they are expressed by SFSs. Second, since all the criteria are treated as benefit criteria by using the complement in the case of cost criteria, the subtraction operation is not required. Therefore, a spherical averaging arithmetic aggregation function is directly applied. The proposed SFAA (18) is chosen for aggregation due to its unbiased and balanced treatment. Then, the additive utility \(\tilde{U}_{i}^{A}\) of each alternative \({X}_{i}\) is computed by

$$ \begin{aligned} \tilde{U}_{i}^{A} &= \mathrm{SFAA} ( \tilde{X}_{ij} \vert j=1,2,\dots ,m; w_{j} ) \\ &= \Biggl( \sum _{j=1}^{m} w_{j} \mu _{\tilde{X}_{ij}}, \sum _{j=1}^{m} w_{j} \upsilon _{\tilde{X}_{ij}}, \sum_{j=1}^{m} w_{j} \pi _{\tilde{X}_{ij}} \Biggr). \end{aligned} $$

Step 2. The reference point technique

The reference point technique is based on identifying the best rating of each criterion and using it as a reference point. After that, the distance between the rating of each alternative for a criterion and its reference point is calculated using a distance formula. Consequently, the result of the reference point technique is always a crisp value.

The previously proposed SF-MULTIMOORA [32] identifies the best rating after the defuzzification of all the ratings. Here, the best rating might change with the utilized score function; up to now, several score functions have been proposed [20, 21, 33, 36–38]. In the proposed SF-MULTIMOORA, the reference point is chosen directly without defuzzification to guarantee its uniqueness.

In a spherical fuzzy environment, the distance between the ratings of the alternatives and the best rating might not be proportional to their scores. For example, consider the SFSs \(\tilde{S}_{1} = ( 0.81,0.46,0.26 )\) and \(\tilde{S}_{2} = ( 0.73,0.30,0.31 )\). \(\tilde{S}_{1}\) has a higher score than \(\tilde{S}_{2}\). However, the distance between \(\tilde{S}_{1}\) and the reference point \(( 1,0,0 )\) is 0.3971, while the distance of \(\tilde{S}_{2}\) is 0.3599. Hence, the SFS with the higher score is not closer to the best rating. Consequently, one reference point is insufficient to describe the situation effectively and efficiently, and it is more convenient to express the distance as an SFS that employs both the best and worst ratings together with a degree of hesitation. Therefore, in the reference point approach, the distance is expressed as an SFS. An alternative is preferred as its distance from the worst rating increases and its distance from the best rating decreases. Hence, the degree of membership is the distance from the worst rating, the degree of non-membership is the distance from the best rating, and the degree of hesitation is that of the weighted rating of the alternative.

When the distance is expressed as an SFS, the fuzzy distance for \(\tilde{S}_{1}\) is \(\tilde{dS}_{1} = ( 0.7125,0.3971,0.26 )\). For \(\tilde{S}_{2}\), the fuzzy distance is \(\tilde{dS}_{2} = ( 0.7480,0.3599,0.31 )\). Hence, \(Sc ( \tilde{dS}_{1} ) =1.3384\) and \(Sc ( \tilde{dS}_{2} ) =1.4041\). Thus, the distances are proportional to the scores. The score of the higher rating is associated with the smaller score of the distance and vice versa.
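These numbers can be reproduced as follows; the sketch assumes the normalized Euclidean distance (12) takes the form \(d = \sqrt{\frac{1}{2}\sum (a-b)^{2}}\) for a single pair of SFSs (consistent with the \(\frac{1}{2m}\) factor in (22)–(23) for \(m=1\)), and uses the score function (9) as reconstructed earlier.

```python
import math

def dist(a, b):
    """Normalized Euclidean distance between two SFSs (assumed form of (12))."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 2)

def score(s):
    """Score function (9), reconstructed."""
    mu, nu, pi = s
    return (2 * mu - pi) ** 2 - (nu - pi) ** 2

best, worst = (1, 0, 0), (0, 1, 0)
s1 = (0.81, 0.46, 0.26)
s2 = (0.73, 0.30, 0.31)

# S1 has the higher score, yet it is farther from the best rating:
print(score(s1) > score(s2))           # True
print(dist(s1, best), dist(s2, best))  # ~0.3971 vs ~0.3599

# Fuzzy distances (distance from worst, distance from best, hesitation):
d1 = (dist(s1, worst), dist(s1, best), s1[2])  # ~(0.7125, 0.3971, 0.26)
d2 = (dist(s2, worst), dist(s2, best), s2[2])  # ~(0.7480, 0.3599, 0.31)
print(score(d1), score(d2))  # ~1.3384, ~1.4041
```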

The used reference points can be either theoretical or empirical. The theoretical reference points for the jth criterion are the ratings \(( 1,0,0 )\) and \(( 0,1,0 )\). The empirical reference points for a criterion, obtained from a problem’s data, are defined as [39]

$$ \tilde{R}_{j}^{+} = \bigl( \mu _{j}^{+}, \upsilon _{j}^{+}, \pi _{j}^{+} \bigr), $$
(20)

where \(\mu _{j}^{+} = \max_{i} \mu _{ij}\), \(\upsilon _{j}^{+} = \min_{i} \upsilon _{ij}\), and \(\pi _{j}^{+} = \min_{i} \pi _{ij}\), for \(j=1,2,\dots ,m\).

$$ \tilde{R}_{j}^{-} = \bigl( \mu _{j}^{-}, \upsilon _{j}^{-}, \pi _{j}^{-} \bigr), $$
(21)

where \(\mu _{j}^{-} = \min_{i} \mu _{ij}\), \(\upsilon _{j}^{-} = \max_{i} \upsilon _{ij}\), and

$$ \pi _{j}^{-} = \textstyle\begin{cases} \max_{i} \pi _{ij} \quad \text{if } ( \mu _{j}^{-} )^{2} + ( \upsilon _{j}^{-} )^{2} + ( \pi _{j}^{-} )^{2} \leq 1,\\ \sqrt{1- ( \mu _{j}^{-} )^{2} - ( \upsilon _{j}^{-} )^{2}} \quad \text{otherwise}. \end{cases} $$
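Equations (20)–(21) can be sketched directly in code; the ratings below are hypothetical, and the interesting part is the fallback branch for \(\pi_{j}^{-}\) when the component-wise extremes would violate the spherical condition.

```python
import math

def reference_points(column):
    """Empirical reference points (20)-(21) for one criterion.

    column: the (mu, nu, pi) ratings of all alternatives for that criterion.
    """
    mus, nus, pis = zip(*column)
    r_plus = (max(mus), min(nus), min(pis))          # Eq. (20)
    mu_m, nu_m, pi_m = min(mus), max(nus), max(pis)  # Eq. (21)
    if mu_m ** 2 + nu_m ** 2 + pi_m ** 2 > 1:        # feasibility fallback
        pi_m = math.sqrt(1 - mu_m ** 2 - nu_m ** 2)
    return r_plus, (mu_m, nu_m, pi_m)

# Hypothetical ratings of three alternatives for one criterion:
col = [(0.2, 0.9, 0.3), (0.6, 0.5, 0.6), (0.4, 0.7, 0.5)]
r_plus, r_minus = reference_points(col)
print(r_plus)   # (0.6, 0.5, 0.3)
print(r_minus)  # (0.2, 0.9, ~0.387): pi was capped since 0.04 + 0.81 + 0.36 > 1
```

Note that each rating in `col` is individually a valid SFS, yet the component-wise worst point is not, which is exactly the case the second branch of the \(\pi_{j}^{-}\) definition handles.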

The reference point utility value is calculated as follows.

First, the weighted general decision matrix is computed

$$\tilde{\mathbf{D}}_{w} = [ \tilde{\mathcal{X}}_{ij} ] = \textstyle\begin{array}{c} \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} {C}_{1} & {C}_{2}&\hphantom{\qquad }& {C}_{{m}} \end{array}\displaystyle \\ \textstyle\begin{array}{c} \textstyle\begin{array}{c} {X}_{1}\\ {X}_{2} \end{array}\displaystyle \\ \textstyle\begin{array}{c} {\vdots} \\ {X}_{n} \end{array}\displaystyle \end{array}\displaystyle \begin{bmatrix} \textstyle\begin{array}{c@{\quad}c} \tilde{\mathcal{X}}_{11} & \tilde{\mathcal{X}}_{12}\\ \tilde{\mathcal{X}}_{21} & \tilde{\mathcal{X}}_{22} \end{array}\displaystyle & \cdots & \textstyle\begin{array}{c} \tilde{\mathcal{X}}_{1m}\\ \tilde{\mathcal{X}}_{2m} \end{array}\displaystyle \\ {\vdots} & {\ddots} & {\vdots} \\ \textstyle\begin{array}{c@{\quad}c} \tilde{\mathcal{X}}_{n1} & \tilde{\mathcal{X}}_{n2} \end{array}\displaystyle & \cdots & \tilde{\mathcal{X}}_{nm} \end{bmatrix} \end{array}\displaystyle , $$

where \(\tilde{\mathcal{X}}_{ij} = w_{j} \odot \tilde{{X}}_{ij}\).

The normalized Euclidean distance (12) is employed to find the distance between the ratings of an alternative for the criteria and the best and worst ratings.

$$ \begin{aligned}[b] d^{-} \bigl( \tilde{\mathcal{X}}_{ij}, \tilde{R}_{j}^{-} \bigr) &= \Biggl(\frac{1}{2m} \sum_{j=1}^{m} \bigl( \bigl( \mu _{ij} - \mu _{j}^{-} \bigr)^{2} + \bigl( \upsilon _{ij} - \upsilon _{j}^{-} \bigr)^{2} \\ &\quad {}+ \bigl( \pi _{ij} - \pi _{j}^{-} \bigr)^{2} \bigr)\Biggr)^{1/2}, \end{aligned} $$
(22)

and

$$ \begin{aligned}[b] d^{+} \bigl( \tilde{\mathcal{X}}_{ij}, \tilde{R}_{j}^{+} \bigr) &= \Biggl(\frac{1}{2m} \sum_{j=1}^{m} \bigl( \bigl( \mu _{ij} - \mu _{j}^{+} \bigr)^{2} + \bigl( \upsilon _{ij} - \upsilon _{j}^{+} \bigr)^{2} \\ &\quad {}+ \bigl( \pi _{ij} - \pi _{j}^{+} \bigr)^{2} \bigr)\Biggr)^{1/2}. \end{aligned} $$
(23)

The utility value based on the reference point technique is given by

$$ \tilde{U}_{i}^{R} = ( \mu _{i}, \upsilon _{i}, \pi _{i} ), $$
(24)

where

$$ \begin{gathered} \mu _{i} = d^{-} \bigl( \tilde{\mathcal{X}}_{ij}, \tilde{R}_{j}^{-} \bigr),\qquad \upsilon _{i} = d^{+} \bigl( \tilde{\mathcal{X}}_{ij}, \tilde{R}_{j}^{+} \bigr),\quad\text{and}\\ \pi _{i} = \sum _{j=1}^{m} \frac{\pi _{ij}}{m}. \end{gathered} $$
(25)

To verify the validity of the proposed fuzzy distance \(\tilde{U}_{i}^{R} = ( \mu _{i}, \upsilon _{i}, \pi _{i} )\) as an SFS, we check whether \(\mu _{i}^{2} + \upsilon _{i}^{2} + \pi _{i}^{2} \leq 1\). This is equivalent to the quadratic programming problem

$$\begin{aligned}& \text{Max } \bigl( \mu _{ij} - \mu _{j}^{-} \bigr)^{2} + \bigl( \upsilon _{ij} - \upsilon _{j}^{-} \bigr)^{2} + \bigl( \pi _{ij} - \pi _{j}^{-} \bigr)^{2}\\& \quad {} + \bigl( \mu _{ij} - \mu _{j}^{+} \bigr)^{2} + \bigl( \upsilon _{ij} - \upsilon _{j}^{+} \bigr)^{2} + \bigl( \pi _{ij} - \pi _{j}^{+} \bigr)^{2} + \pi _{ij}^{2},\\& \text{Subject to } \mu _{ij}^{2} + \upsilon _{ij}^{2} + \pi _{ij}^{2} \leq 1, \mu _{ij} \geq 0, \upsilon _{ij} \geq 0, \\& \hphantom{\text{Subject to }}\text{ and } \pi _{ij} \geq 0. \end{aligned}$$

This is a known constrained quadratic optimization problem over the unit sphere, whose solution is \(( \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}} ) \approx ( 0.58,0.58,0.58 )\). This value gives the maximum attained distances from the extreme points \(( 1,0,0 )\) and \(( 0,1,0 )\). The fuzzy distance corresponding to this point is \(( 0.65,0.65,0.58 )\), and the sum of its squares is 1.18. However, since in (22) and (23) the average distance of all the ratings of an alternative from the reference points is used, the only restriction on using the spherical fuzzy distance, when the theoretical reference points are employed, is that all the ratings of an alternative in the weighted fuzzy decision matrix have the same value \(( 0.58,0.58,0.58 )\). Practically, this can rarely happen. For empirical reference points, the distances are smaller, and this value is not expected to occur.
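This corner case can be checked directly by evaluating the fuzzy distance at the maximizer with the theoretical reference points (with \(m=1\), so the averaging in (22) and (23) degenerates to a single term):

```python
import math

def dist(a, b):
    """Normalized Euclidean distance (assumed form of (12)) with m = 1."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 2)

x = (1 / math.sqrt(3),) * 3  # the maximizer of the quadratic program
fuzzy_d = (dist(x, (0, 1, 0)), dist(x, (1, 0, 0)), x[2])

print(fuzzy_d)                       # ~(0.6501, 0.6501, 0.5774)
print(sum(c ** 2 for c in fuzzy_d))  # ~1.1786 > 1: invalid only in this extreme case
```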

Step 3. The full multiplicative form technique

This technique is based on the product of the alternatives’ ratings for the benefit criteria and the product of the alternatives’ ratings for the cost criteria. In a spherical fuzzy environment, as previously explained, the presence of the worst rating for a criterion in the multiplication operation will eliminate the effect of all the other criteria. Besides, the division operation is not defined for SFSs. On the other hand, defuzzification of the ratings might lead to incorrect results, since score functions may give identical crisp values for different SFSs. For example, the SFSs \(( 0.4,0.7,0.3 )\) and \(( 0.3,0.5,0.1 )\) have the same crisp value 0.09 using (9). Therefore, it is preferable to delay the use of the score functions to the final step, where the accuracy function (10) can assist in differentiating SFSs.

Alternatively, the spherical fuzzy averaging power aggregation function SFPA (19) is used instead to ensure balanced and fair treatment. The multiplicative utility \(\tilde{U}_{i}^{P}\) of each alternative \({X}_{i}\) is given by

$$ \begin{aligned} \tilde{U}_{i}^{P} &= \mathrm{SFPA} ( \tilde{X}_{ij} \vert j=1,2,\dots ,m; w_{j} ) \\ &= \Biggl( \frac{1}{m} \sum_{j=1}^{m} \mu _{\tilde{X}_{ij}}^{\frac{1}{w_{j}}}, \frac{1}{m} \sum _{j=1}^{m} \upsilon _{\tilde{X}_{ij}}^{\frac{1}{w_{j}}}, \frac{1}{m} \sum_{j=1}^{m} \pi _{\tilde{X}_{ij}}^{\frac{1}{w_{j}}} \Biggr). \end{aligned} $$

Step 4. Compute the overall fuzzy utility

Due to the limitations of dominance theory, e.g. multiple comparisons and circular reasoning [31], instead of ranking the alternatives individually based on their scores, the results of the three techniques are aggregated using the proposed SFAA (18). The three techniques are given equal degrees of importance.

$$ \tilde{U}_{i}^{T} = \mathrm{SFAA} \bigl( \tilde{U}_{i}^{A}, \tilde{U}_{i}^{R}, \tilde{U}_{i}^{P} \vert \omega _{A}, \omega _{R}, \omega _{P} \bigr), $$

where \(\omega _{A} = \omega _{R} = \omega _{P} = \frac{1}{3}\).
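Since the SFAA aggregates component-wise, Step 4 with equal weights reduces to averaging the three utilities component by component; a minimal sketch with hypothetical utility values:

```python
def sfaa(sets, w):
    """SFAA (18): component-wise weighted arithmetic mean."""
    return tuple(sum(wi * c for wi, c in zip(w, comp)) for comp in zip(*sets))

u_a = (0.60, 0.30, 0.10)  # additive utility (hypothetical)
u_r = (0.70, 0.20, 0.15)  # reference-point utility (hypothetical)
u_p = (0.50, 0.40, 0.05)  # multiplicative utility (hypothetical)

u_total = sfaa([u_a, u_r, u_p], [1 / 3, 1 / 3, 1 / 3])
print(u_total)  # ~(0.60, 0.30, 0.10)
```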

Step 5. Defuzzification and ranking

The score function (9) is used to defuzzify the overall fuzzy utility. Then, the alternatives are ranked in descending order. The accuracy function (10) is utilized if needed. The alternative with the highest score is the best.

Applications

In this section, two practical examples are solved. The first example is a personnel selection problem adopted from Kutlu Gündoğdu [32], in which the weights of the criteria and the ratings of the alternatives for the criteria are SFSs. The second example is adopted from Zhang [29]; it evaluates energy storage technologies against a set of criteria that belong to different aspects of sustainability. Here the weights of the criteria are crisp values and the ratings of the alternatives for the criteria are IFSs.

A personnel selection problem

Three decision-makers (DMs), namely the director of human resources \(( \mathrm{DM}_{1} )\), a human resource specialist \(( \mathrm{DM}_{2} )\), and the sales and marketing manager \(( \mathrm{DM}_{3} )\), select personnel from four candidates \(\{ X_{1}, X_{2}, X_{3}, X_{4} \}\). Four criteria are used as the basis for the selection process, namely educational background \(( C_{1} )\), professional experience \(( C_{2} )\), communication skills \(( C_{3} )\), and team management \(( C_{4} )\). The weights of the DMs according to their experience levels are \(\{ 0.3,0.2,0.5 \}\). The assessments of the DMs for the weights of the criteria and the ratings of the alternatives are given by SFSs. The aggregated decision matrix, the weights of the criteria, the weighted decision matrix, and the empirical reference points are given directly in Table 1. For the details on the data of the problem and the SF ratings, the reader is referred to Kutlu Gündoğdu [32].

Table 1 The decision matrix, the weighted decision matrix, and the reference points

Before proceeding with the solution, the SF weights are defuzzified using (9). If any of the defuzzified weights is negative or zero, the weights need to be translated such that the axis of reference is the vertical line \(x=1\) instead of \(x=0\), i.e. the smallest weight is mapped to 1. The translation formula for the criteria weights \(\{ w_{1}, w_{2},\dots , w_{m} \}\) is given as follows

$$ w_{j} ' = w_{j} +\Delta ,\quad\text{where } \Delta = \bigl\vert w_{j}^{min} \bigr\vert +1 \text{ for } j=1,2,\dots ,m. $$

Then, the transformed weights are normalized to get

$$ w_{j}^{N} = \frac{w_{j} '}{\sum_{j=1}^{m} w_{j} '}\quad \text{such that } 0 \leq w_{j}^{N} \leq 1 \text{ and } \sum _{j=1}^{m} w_{j}^{N} =1. $$

The criteria weights after defuzzification are \(\{ 2.1903, 0.5775,-0.1207,0.1728 \}\). After shifting by 1.1207, the weights become \(\{ 3.3110,1.6982,1,1.2935 \}\). Normalizing the weights, we get \(\{ 0.4534,0.2325,0.1369,0.1771 \}\).
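The weight preprocessing above can be reproduced directly, starting from the defuzzified weights reported in the text:

```python
# Defuzzified SF weights; one value is negative, so translation is needed.
w = [2.1903, 0.5775, -0.1207, 0.1728]

if min(w) <= 0:
    delta = abs(min(w)) + 1  # shift so the smallest weight maps to 1
    w = [wj + delta for wj in w]
print(w)  # ~[3.3110, 1.6982, 1.0, 1.2935]

total = sum(w)
w_norm = [wj / total for wj in w]  # normalize to sum to 1
print([round(wj, 4) for wj in w_norm])  # ~[0.4534, 0.2325, 0.1369, 0.1771]
```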

The solution steps are summarized as follows.

Step 1. Apply the SFAA (18).

$$ \tilde{U}_{i}^{A} = \mathrm{SFAA} ( \tilde{X}_{ij} \vert j=1,2,\dots ,m; w_{j} ). $$

Step 2. Apply the reference point technique.

  (a) Compute the weighted general decision matrix.

  (b) Define the best and the worst ratings, either theoretical or empirical, using (20) and (21).

  (c) Compute the fuzzy distance \(\tilde{U}_{i}^{R} = ( \mu _{i}, \upsilon _{i}, \pi _{i} )\) using (22), (23), and (25).

Step 3. Apply the SFPA (19).

$$ \tilde{U}_{i}^{P} = \mathrm{SFPA} ( \tilde{X}_{ij} \vert j=1,2,\dots ,m; w_{j} ). $$

Step 4. Compute the overall fuzzy utility using (18).

$$ \begin{gathered} \tilde{U}_{i}^{T} = \mathrm{SFAA} \bigl( \tilde{U}_{i}^{A}, \tilde{U}_{i}^{R}, \tilde{U}_{i}^{P} \vert \omega _{A}, \omega _{R}, \omega _{P} \bigr),\\ \quad \text{where } \omega _{A} = \omega _{R} = \omega _{P} = \frac{1}{3}. \end{gathered} $$

Step 5. Use the score function (9) to rank. The accuracy function (10) is used if necessary.

The results using the theoretical reference points are shown in Table 2, and those using the empirical reference points in Table 3. The same ranking is obtained in both cases: \(X_{1} > X_{3} > X_{4} > X_{2}\).

Table 2 The results of the personnel selection problem using the theoretical reference points
Table 3 The results of the personnel selection problem using the empirical reference points

The ranking of the proposed method is compared with the ranking of the IF-TOPSIS using an intuitionistic linguistic scale, the neutrosophic-MULTIMOORA using a neutrosophic linguistic scale, and the previous version of the SF-MULTIMOORA. The results of these three methods are due to Kutlu Gündoğdu [32]. The comparison is given in Table 4. The ranking obtained by the proposed method coincides with the results obtained by the methods used for comparison.

Table 4 The ranking of the alternatives using different methods

Energy storage technologies

An overview

The energy demand has been significantly increasing due to industrialization and the growth in population with rising living standards. Fossil fuels have proved to have critical negative effects on the environment, e.g. global warming, ozone layer depletion, and pollution. In contrast, renewable energy systems (RESs) compare extremely favorably to fossil fuels in terms of environmental impact. Therefore, they are effective candidates to fulfill this increasing demand. On the other hand, RESs are fluctuating and intermittent. Energy storage technologies (ESTs) provide a perfect and successful solution to overcome this obstacle. To secure the energy supply, ESTs enable the storage of excess energy and its use when needed [40].

ESTs function through two main processes. The charging process is when energy is absorbed and the discharging process is when the stored energy is delivered. Hence, energy storage balances the supply and demand even when the generation and consumption of energy do not occur simultaneously. ESTs are exploited in diverse applications. Some ESTs are applicable for specific applications; some others are applicable in wider frames. Matching the application to the storage technology is the key factor to efficient and effective performance [41].

Energy can be stored via several storage technologies. ESTs are classified, according to the technology used, into thermal, chemical, electrochemical, electrical, and mechanical systems.

Thermal energy storage (TES): in these systems energy is stored in the form of thermal energy by heating or cooling a storage medium. The storage medium can be located in storages of various kinds, e.g. tanks, ponds, caverns, and underground aquifers. The materials used for heat storage can be either liquid or solid. TES systems are utilized for various industrial and domestic purposes [41]. Molten salt thermal storage is mostly used, followed by chilled water thermal storage, heat thermal storage, and finally ice thermal storage [42].

Chemical energy storage (CES): in these systems energy is stored in the form of chemical energy in different materials. Charging is achieved by a natural process (photosynthesis) or a technical process (power-to-gas, power-to-liquid). The resulting energy carriers can be stored in tanks. Discharging occurs through combustion processes and the conversion of thermal energy into electric or mechanical energy. CES includes gases such as hydrogen and biogas; liquids such as methanol and gasoline; and solids such as biomass and coal [43].

Electrochemical energy storage (EcES): in these systems, electric energy is converted to chemical energy and vice versa during energy storage and recovery. EcES systems are classified into battery energy storage and flow batteries. They are used in small devices, e.g. laptops, tablets, and cell phones, and also in larger devices, e.g. electric cars, to provide efficient and reliable energy. The most common battery storage types are Lead-acid (Pb-acid), Sodium-nickel chloride (NaNiCl), Nickel-cadmium (Ni-Cd), and Lithium-ion (Li-ion) batteries. The most common flow batteries are vanadium redox (VRB) and zinc-bromine (ZnBr) batteries [42].

Electrical energy storage (EES): in these systems energy is stored in electrostatic and magnetic fields. Therefore, EES can be classified into superconducting magnetic energy storage (SMES) and capacitors. SMES relies on the magnetic field induced within a coil of a superconducting wire. Capacitors store energy within the electric field between two electrodes insulated by a dielectric [29]. Capacitors may be used for high currents, but only for very short periods due to their relatively low capacitance. Supercapacitors can replace regular capacitors, taking into consideration that they provide very high capacitance in a small package [41]. However, the use of supercapacitors is not very common since this technology is still in the research and development (R&D) demonstration and pre-commercial stage [42].

Mechanical energy storage (MES): in these systems, the energy stored in gaseous, liquid, or solid media due to their position, speed, or thermodynamic state is utilized [43]. MES includes flywheel, springs, pumped hydroelectric storage (PHS), and compressed air energy storage (CAES). MES can easily convert and store energy from water currents, waves, and tidal sources [41].

The characteristics of different ESTs differ a great deal in terms of energy density, efficiency, time scales, energy, power relations, and costs. Therefore, the selection of an EST for a certain application is an MCDM problem.

A practical example

The proposed SF-MULTIMOORA is applied to evaluate and rank a set of different ESTs. Three main criteria are selected to evaluate the performance of some ESTs to achieve sustainability and energy security namely, technological, economic, and environmental criteria. Technological criteria assess the reliability of EST and its ability to guarantee a safe energy supply. Economic criteria focus on competitiveness and affordability issues represented in the associated installations’ costs and their impact on energy prices. Environmental criteria address environmental sustainability [29].

Fourteen alternatives are evaluated by eleven sub-criteria. The alternatives under evaluation are given in Table 5. The criteria and sub-criteria used for assessment are defined as follows [29].

Table 5 The evaluated ESTs

The Technological criterion consists of the following sub-criteria.

  • The power rating \(( C_{1} )\): the rated power output of the storage system, measured in megawatts (MW). A high power rating is preferred.

  • The energy rating \(( C_{2} )\): the maximum time over which the system can continuously release energy, i.e. the duration of discharge, measured in hours. Operating flexibility requires a long discharge period to manage variations and match demand. The higher the energy rating, the better the EST.

  • The response time \(( C_{3} )\): the time required to bring the system into operation and begin discharging energy, measured on a linguistic scale. The faster the response time, the better the EST.

  • The energy density \(( C_{4} )\): the amount of energy accumulated per unit mass of the storage unit, measured in Wh/kg. A higher energy density is preferred.

  • The self-discharge time \(( C_{5} )\): the energy dissipated over a given period of non-use, i.e. idling losses, measured in percentage per day. The lower the losses, the better the EST.

  • The round-trip efficiency \(( C_{6} )\): the ratio of the energy released from storage (in MWh) to the energy stored (in MWh), measured in percentage. High round-trip efficiency is an advantage.

  • The lifetime \(( C_{7} )\): also known as the service period, expressed in years for a certain cycling rate, or in the total number of operating cycles (one cycle represents one charge and one discharge). A long lifetime is an asset.

  • The number of cycles of operation \(( C_{8} )\): the number of times the storage unit can, after each recharge, release the energy level it was designed for. An EST with the maximum number of cycles is preferred.

The economic criterion consists of two sub-criteria.

  • The power cost \(( C_{9} )\): the installation and operational costs, and it is measured in Eur/kW.

  • The energy cost \(( C_{10} )\): the costs of energy supply, and it is measured in Eur/kWh.

Finally, the environmental criterion has no sub-criteria.

The environmental aspects \(( C_{11} )\): the effect of disposal/end of life and usage of ESTs on the environment. It is measured on a qualitative scale, and the minimum effect should be attained.

From the previous definitions, it is clear that the sub-criteria involve both quantitative and qualitative data, and the quantitative data come in several units of measurement. Furthermore, the sub-criteria have different objectives: while \(C_{3}\), \(C_{5}\), \(C_{9}\), \(C_{10}\), and \(C_{11}\) have to be minimized, \(C_{1}\), \(C_{2}\), \(C_{4}\), \(C_{6}\), \(C_{7}\), and \(C_{8}\) have to be maximized. The data used in this example are in their final form, ready to be processed, since Zhang et al. [29] transformed all the data into IFSs and negated the criteria to be minimized. Table 6 includes the data of the problem. It is clear from Table 6 that extreme values are present in the ratings of the criteria. For details on the data transformation and fusion, the reader is referred to Zhang et al. [29].
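The negation of the cost criteria can be sketched as follows. The sketch assumes the complement of a spherical fuzzy triple \(( \mu ,\nu ,\pi ) \mapsto ( \nu ,\mu ,\pi )\), a common choice in the fuzzy-set literature; the function name is illustrative, and Zhang et al. [29] apply the analogous negation to intuitionistic fuzzy data.

```python
def negate(rating):
    """Complement of a spherical fuzzy rating (mu, nu, pi): swap the
    membership and non-membership degrees, so that a criterion to be
    minimized can be treated as one to be maximized.
    (Illustrative sketch, not the exact transformation in [29].)"""
    mu, nu, pi = rating
    return (nu, mu, pi)

# Negating a cost rating turns a "low cost" judgment (low membership,
# high non-membership) into a high membership degree:
print(negate((0.2, 0.8, 0.1)))  # -> (0.8, 0.2, 0.1)
```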

Table 6 The ratings of the alternatives for the evaluation criteria

The three main criteria are assigned equal weights, and the weight of each criterion is divided equally among its sub-criteria. The resulting weights of the sub-criteria are

$$ \begin{gathered} ( 0.042,0.042,0.042,0.042,0.042,0.042,\\ \quad 0.042,0.042,0.167,0.167,0.333 ). \end{gathered} $$
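This weighting scheme can be reproduced with a short sketch (the grouping of sub-criteria counts follows the definitions above):

```python
# Each of the three main criteria gets weight 1/3, split equally among
# its sub-criteria: 8 technological, 2 economic, 1 environmental.
n_sub = [8, 2, 1]
weights = [1 / 3 / n for n in n_sub for _ in range(n)]

print([round(w, 3) for w in weights])
# -> [0.042, 0.042, 0.042, 0.042, 0.042, 0.042, 0.042, 0.042, 0.167, 0.167, 0.333]
```

The rounded values match the weight vector above; the exact weights sum to one.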

The proposed SF-MULTIMOORA is applied to solve this problem using both the theoretical and the empirical reference points. The results are summarized in Tables 7 and 8. The best ESTs are NaNiCl, NiCd, and ZnBr; the worst are Super Cap, VRB, and SMES. The ranking is unchanged between the theoretical and empirical reference points.

Table 7 The results of the evaluation of ESTs using the theoretical reference point
Table 8 The results of the evaluation of ESTs using the empirical reference point

The results using the theoretical and empirical reference points are compared with those of the IF-MULTIMOORA and IF-TOPSIS [29], as shown in Table 9. NaNiCl holds first place using SF-MULTIMOORA and IF-TOPSIS, while Molten salt holds first place using IF-MULTIMOORA. NiCd and ZnBr exchange positions among the 2nd to 4th places according to the method used. The worst-ranked ESTs are VRB and SMES by all three methods.

Table 9 Ranking using different methods for ESTs

It should be noted that a traditional SF-MULTIMOORA cannot handle the data of this example properly. For illustration, the example is re-solved using a traditional SF-MULTIMOORA utilizing the SWAM operator (7) for the ratio system technique, the reference point (20) with the normalized Euclidean distance (12) for the reference point technique, and the SWGM operator (8) for the full multiplicative form. The results are given in Table 10.
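The normalized Euclidean distance between spherical fuzzy ratings can be sketched as follows. The sketch assumes one common form for SFSs written as \(( \mu ,\nu ,\pi )\) triples; the exact expression of (12) in the text may differ slightly.

```python
import math

def norm_euclidean(a, b):
    """Normalized Euclidean distance between two SFSs, each given as a
    list of (mu, nu, pi) triples over n criteria -- one common form in
    the spherical fuzzy literature (illustrative sketch)."""
    n = len(a)
    s = sum((x - y) ** 2 for p, q in zip(a, b) for x, y in zip(p, q))
    return math.sqrt(s / (2 * n))

# The distance between the best rating (1, 0, 0) and the worst rating
# (0, 1, 0) over a single criterion is 1:
print(norm_euclidean([(1, 0, 0)], [(0, 1, 0)]))  # -> 1.0
```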

Table 10 The results of a traditional MULTIMOORA

From Table 10, it is obvious how the ranking is affected by the presence of the extreme points. For example, \(X_{1}\) and \(X_{2}\) hold first place using the SWAM operator merely because they have the best rating \(( 1,0,0 )\) for the fifth criterion, and for that criterion only. Similarly, \(X_{7}\), \(X_{8}\), \(X_{10}\), \(X_{11}\), and \(X_{13}\) hold last place using the SWGM operator because they have the worst rating \(( 0,1,0 )\) for the eighth criterion only. As a result, the ranking varies significantly from one operator to another due to the presence of the extreme points. According to the Dominance Theory, the alternative that appears most often in first place across the three ranking lists is the best. In this case, no alternative appears in first place twice, so the difficulty in applying the Dominance Theory is also evident.
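The domination of the aggregate by a single extreme rating can be verified numerically. The sketch below assumes the SWAM operational laws of Kutlu Gündoğdu and Kahraman for triples \(( \mu ,\nu ,\pi )\); the exact formula (7) in the text may be stated differently.

```python
import math

def swam(ratings, weights):
    """Spherical weighted arithmetic mean (SWAM) of spherical fuzzy
    ratings (mu, nu, pi), assuming the Kutlu Gundogdu-Kahraman
    operational laws (illustrative sketch)."""
    a = math.prod((1 - mu**2) ** w for (mu, _, _), w in zip(ratings, weights))
    b = math.prod((1 - mu**2 - pi**2) ** w
                  for (mu, _, pi), w in zip(ratings, weights))
    nu = math.prod(nu_i ** w for (_, nu_i, _), w in zip(ratings, weights))
    return (math.sqrt(1 - a), nu, math.sqrt(a - b))

# A single extreme rating (1, 0, 0) forces the SWAM aggregate to
# (1, 0, 0), regardless of how poor the remaining ratings are:
ratings = [(1.0, 0.0, 0.0), (0.2, 0.8, 0.3), (0.1, 0.9, 0.2)]
weights = [1/3, 1/3, 1/3]
print(swam(ratings, weights))  # -> (1.0, 0.0, 0.0)
```

The symmetric effect holds for the SWGM operator, where a single worst rating \(( 0,1,0 )\) drives the aggregate to \(( 0,1,0 )\), which is exactly the behavior observed in Table 10.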

Conclusion

In this study, a more accurate and robust version of the SF-MULTIMOORA using spherical fuzzy data is proposed. The main aim is to avoid the pitfalls that might be encountered in the implementation of this method and lead to incorrect results. First, two aggregation functions for SFSs are proposed. The extant aggregation operators fail to handle extreme values: whenever extreme values are present in the evaluation, the aggregation results are biased toward them, which is unfair in the evaluation process. Therefore, the spherical fuzzy averaging arithmetic aggregation function (SFAA) and the spherical fuzzy averaging power aggregation function (SFPA) are proposed to process the evaluation criteria fairly and avoid false ranking. Second, in the spherical fuzzy environment, being close to the best rating does not necessarily imply an SFS with a higher score. To avoid this flaw, both the best and worst ratings are employed as reference points, and the distance is expressed as an SFS instead of a crisp value. Furthermore, due to the disadvantages of the dominance theory in large-scale applications, the results of the three utilities are aggregated to obtain the overall utility.

Two practical examples are solved to test and validate the performance of the proposed SF-MULTIMOORA. The first example is a personnel selection problem, whose result coincides with the results of the IF-TOPSIS, the neutrosophic-MULTIMOORA, and the SF-MULTIMOORA. The second example is the assessment of energy storage technologies, whose result is compared with those of the IF-MULTIMOORA and IF-TOPSIS. The four top-ranked technologies are the same in all these methods, although they exchange positions from one method to another. The two worst-ranked technologies are the same in all four methods.

The main limitation of expressing the distance as an SFS appears when the theoretical reference points are employed: if all the ratings of an alternative in the weighted fuzzy decision matrix have the same value \(( 0.58,0.58,0.58 )\), the resulting fuzzy distance is not an SFS. It is also preferable to use the proposed method when the weights of the criteria are crisp values; for spherical fuzzy weights, score functions must be employed, and the weights might change according to the score function used.

Future research will focus on developing aggregation operators for SFSs that are based on operational laws and can handle extreme points.