1 Introduction

Multiple criteria decision making (MCDM) methods have become very popular in recent years and are frequently applied in many real-life situations [for more information see, e.g., Behzadian et al. (2012) and Abdullah and Adawiyah (2014)]. One of the most popular and widely applied MCDM methods is the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) proposed by Hwang and Yoon (1981). The basic idea of this method is fairly straightforward. It uses two reference points: the so-called positive ideal solution (PIS) and negative ideal solution (NIS) as benchmarks. The selected alternative is that which has both the shortest distance from the PIS and the longest distance from the NIS. The PIS is a solution that maximizes all the benefit criteria and minimizes all the cost criteria, whereas the NIS is a solution that maximizes all the cost criteria and minimizes all the benefit criteria.

The classical TOPSIS method is based on information provided by the decision maker (DM) or expert as exact numerical values. However, in some real-life situations, the DM may not be able to express the ratings of alternatives with respect to criteria precisely, or may prefer to use linguistic expressions. In such situations, when evaluations are based on unquantifiable, incomplete, or unobtainable information, the DM may use other data formats, such as interval numbers (Jahanshahloo et al. 2006a; Yue 2011), fuzzy numbers (Chen 2000; Jahanshahloo et al. 2006b), hesitant fuzzy sets (Senvar et al. 2016), intuitionistic fuzzy sets (Boran et al. 2009) and others. Recently, Roszkowska and Kacprzak (2016) developed a new approach to TOPSIS in which the information provided by the DM is represented in the form of ordered fuzzy numbers (OFNs), which are well suited to handling incomplete and uncertain knowledge.

So far, this new approach to MCDM problems has been considered in only a few papers, among others Roszkowska and Kacprzak (2016), Rudnik and Kacprzak (2017) and Kacprzak (2017). The first application of OFNs to TOPSIS was presented at the Sixth Podlasie Conference on Mathematics in Bialystok, Poland, by Kacprzak and Roszkowska (2014), and discussed in detail in Roszkowska and Kacprzak (2016). In that paper, the authors evaluated alternatives with respect to criteria using linguistic expressions, in which linguistic terms were quantified on a scale given in advance. The scale was extended to include intermediate values, such as “more than \( i \)” or “less than \( i + 1 \)”, expressed by trapezoidal OFNs, together with “\( i \)” and “\( i + 1 \)”, where \( i \in {\mathbb{N}} \). The authors showed that the extended TOPSIS using OFNs can distinguish among alternatives better than the classical TOPSIS, which uses crisp values, and than the extended TOPSIS which uses (convex) fuzzy numbers (CFNs). The extended TOPSIS with OFNs has also been used to solve a real-life problem of discrete flow control in a manufacturing system (Rudnik and Kacprzak 2017). The authors tested this method in a flow control system and compared it to the classical TOPSIS and to other simple control methods, concluding that the extended TOPSIS with OFNs is better suited for the analysed case than the classical TOPSIS and most of the other methods considered. Kacprzak (2017) developed a method for obtaining criteria weights based on the concept of Shannon entropy when data are in the form of OFNs. The proposed approach makes it possible to obtain, for each criterion, its weight in the form of an OFN. Moreover, it has also been shown that the obtained criteria weights satisfy conditions similar to those for real-valued weights: for instance, they can be normalized and sum up to 1.

On the other hand, the increasing complexity of the decision problems analysed makes it less feasible for a single decision maker or expert to consider all the relevant aspects of a problem. Therefore, many real-life problems are considered by a group of decision makers (or experts), resulting in Group Decision Making (GDM), which is an interesting and important part of today’s decision science. In the literature, a majority of researchers use aggregation methods to combine all individual decisions made by the DMs (usually in the form of decision matrices) into a collective decision (also in the form of a collective decision matrix). This collective decision is the starting point for the ranking of alternatives or the selection of the best one. In such a situation the MCDM problem for GDM reduces to a classical MCDM problem, which can lead to the loss of some important information.

One of the most commonly used methods of aggregation, in MCDM methods such as TOPSIS, is the arithmetic mean (Chen 2000; Wang and Chang 2007; Roszkowska and Kacprzak 2016). On the one hand, this type of aggregation of individual decisions is also used in practice, e.g., in certain sports, such as snowboard slopestyle or halfpipe: each participant is evaluated by a group of referees (as DMs) and the average of the referees’ scores is taken as the final result for that participant. On the other hand, with this method of aggregation of individual information, some significant information related to the individual decisions of the DMs may be lost. The PIS and NIS, the benchmarks of the classical TOPSIS method, are each expressed by a vector. For this reason, TOPSIS methods for GDM based on the aggregation of the individual decisions made by each DM are limited to comparisons of the vectors of alternatives (with respect to criteria) with the vectors of the PIS and the NIS. Such a comparison cannot reflect the DMs’ individual decisions, which are expressed by decision matrices. In order to better explain these limitations, we present a simple example. Let us consider a group of two decision makers \( \left\{{{\it DM}_{1},DM_{2}} \right\} \) who evaluate three alternatives \( \left\{{A_{1}, A_{2},A_{3}} \right\} \) using a scale of positive trapezoidal OFNs [see Eq. (4)], given in advance: \( \left\{{\left({1,2,3,4} \right), \left({2,3,4,5} \right), \left({3,4,5,6} \right), \left({4,5,6,7} \right), \left({5,6,7,8} \right), \left({6,7,8,9} \right), \left({7,8,9,10} \right)} \right\}. \) Their evaluations of the alternatives with respect to the \( j \)th criterion are [see Eq. (26)]: \( x_{1j}^{1} = \left({4,5,6,7} \right) \), \( x_{2j}^{1} = \left({3,4,5,6} \right) \) and \( x_{3j}^{1} = \left({2,3,4,5} \right) \) for \( DM_{1} \), and \( x_{1j}^{2} = \left({4,5,6,7} \right) \), \( x_{2j}^{2} = \left({5,6,7,8} \right) \) and \( x_{3j}^{2} = \left({6,7,8,9} \right) \) for \( DM_{2} \). The aggregation results (using the arithmetic mean) are all equal: \( v_{1j} = v_{2j} = v_{3j} = \left({4,5,6,7} \right) \). This means that the corresponding elements of the PIS and the NIS [using Eqs. (7) and (8) in formulas (21) and (22)] are \( v_{j}^{+} = v_{j}^{-} = \left({4,5,6,7} \right) \). It follows that the distances of each alternative \( A_{i} \) for \( i = 1,2,3 \) to the PIS and the NIS obtained using Eq. (6) are equal to 0, i.e. \( d(v_{1j},v_{j}^{+}) = d(v_{2j},v_{j}^{+}) = d(v_{3j},v_{j}^{+}) = 0 \) and \( d(v_{1j},v_{j}^{-}) = d(v_{2j},v_{j}^{-}) = d(v_{3j},v_{j}^{-}) = 0 \). This means that the \( j \)th criterion has no influence on the ranking of the alternatives [see Eqs. (23), (24) and (25)] and can be omitted. Therefore we can conclude that such an averaged result does not reflect the discrepancies between the individual decisions (preferences of the DMs) and that using such averaged information may lead to an incorrect final decision.
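
For illustration, the calculation above can be reproduced by the following short sketch (Python; the representation of trapezoidal OFNs as 4-tuples of characteristic points and all function names are ours, introduced only for this example): after averaging, the \( j \)th criterion no longer separates the alternatives.

```python
def mean_ofn(*ofns):
    """Componentwise arithmetic mean of trapezoidal OFNs given as 4-tuples."""
    return tuple(sum(c) / len(ofns) for c in zip(*ofns))

def vertex_distance(a, b):
    """Vertex-method distance between two trapezoidal OFNs, cf. Eq. (6)."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / 4) ** 0.5

# Ratings of A1, A2, A3 on the j-th criterion by DM1 and DM2 (from the example above).
dm1 = [(4, 5, 6, 7), (3, 4, 5, 6), (2, 3, 4, 5)]
dm2 = [(4, 5, 6, 7), (5, 6, 7, 8), (6, 7, 8, 9)]

aggregated = [mean_ofn(a, b) for a, b in zip(dm1, dm2)]
print(aggregated)                                      # three identical OFNs (4.0, 5.0, 6.0, 7.0)

pis = tuple(max(c) for c in zip(*aggregated))          # componentwise max, cf. Eq. (7)
nis = tuple(min(c) for c in zip(*aggregated))          # componentwise min, cf. Eq. (8)
print([vertex_distance(v, pis) for v in aggregated])   # [0.0, 0.0, 0.0]
print([vertex_distance(v, nis) for v in aggregated])   # [0.0, 0.0, 0.0]
```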

The aim of this paper, and its main contribution, is to present a new approach to the ranking of alternatives in group decision making using the TOPSIS method based on ordered fuzzy numbers. It is an alternative to methods that use various forms of averaging to aggregate the individual matrices into a collective matrix. In the proposed approach, aggregation is not needed and all individual decision information of the DMs is taken into account in determining the ranking of alternatives and selecting the best one. The key stage of this method is the transformation of the decision matrices provided by the decision makers into matrices of alternatives. The matrix corresponding to an alternative is composed of its assessments with respect to all criteria, performed by all the decision makers. Since all individual decision matrices are normalized beforehand with respect to the type of criterion, the positive ideal solution in this approach is a matrix composed of maximal assessments, and the negative ideal solution is a matrix composed of minimal assessments. The distances of alternatives from the PIS and the NIS, in contrast to the classical TOPSIS and to the method based on the aggregation of the individual decisions made by each DM, are distances between matrices. Using the coefficient of relative closeness of each alternative to the positive ideal solution, a ranking of alternatives is created and the best one is indicated.

The rest of the paper is organized as follows. In Sect. 2 basic definitions and notations of ordered fuzzy numbers are introduced. In Sect. 3 the classical TOPSIS method and its fuzzy extension based on ordered fuzzy numbers are presented. The proposed approach and a numerical example are described in Sects. 4 and 5, respectively. Section 6 is devoted to the comparison of the proposed approach with other, similar approaches. Finally, concluding remarks and suggestions for further research are in Sect. 7.

2 Ordered fuzzy numbers

In this section, some definitions related to the model of ordered fuzzy numbers used in the paper are briefly presented. The model of ordered fuzzy numbers, an extension of the notion of (convex) fuzzy numbers, was introduced and developed by Kosiński and his coworkers Prokopowicz and Ślęzak in a series of papers (Kosinski et al. 2002, 2003, 2004; Kosiński 2006). The main goal guiding the authors was to overcome the limitations of the CFNs model resulting primarily from the definition of arithmetic operations on these numbers: the increase of fuzziness during the operations and the lack of an opposite element with respect to addition and of an inverse element with respect to multiplication. Because of this, the simplest fuzzy arithmetic equations \( A + X = C \) and \( A \cdot X = C \), where \( A \) and \( C \) are given fuzzy numbers, have no fuzzy solution \( X \) in general. Arithmetic operations in the OFNs model are similar to the operations on real numbers, which are a special case of OFNs.

Definition 1

(Kosinski et al. 2004; Chwastyk and Kosinski 2013). An ordered fuzzy number \( A \) is an ordered pair \( A = \left({f_{A},g_{A}} \right) \) of continuous functions \( f_{A},g_{A} :\left[{0,1} \right] \to {\mathbb{R}} \).

The set of all OFNs will be denoted by \( {\Re} \). The elements of the OFN \( A \) are called: \( f_{A} \)—the up part and \( g_{A} \)—the down part. To conform to the classical notation of fuzzy numbers, the independent variable of both functions \( f_{A} \) and \( g_{A} \) will be denoted by \( y \), while their values, by \( x \) (Fig. 1a). The continuity of both functions \( f_{A} \) and \( g_{A} \) implies that their images are bounded intervals, called \( UP_{A} \) and \( DOWN_{A} \), respectively (Fig. 1a), described by their endpoints as follows: \( UP_{A} = \left[{f_{A} \left(0 \right),f_{A} \left(1 \right)} \right] \) and \( DOWN_{A} = \left[{g_{A} \left(1 \right),g_{A} \left(0 \right)} \right] \).

Fig. 1 a An OFN \( A \), b an OFN \( A \) with its membership function, c the arrow denotes the order of the inverted functions and the orientation of OFN \( A \)

In general, the functions \( f_{A} \) and \( g_{A} \) of the OFN \( A \) need not be invertible as functions of the variable \( y \); only continuity is required in Definition 1. But we can assume, additionally, that (Kosiński 2006):

  • (A1) \( f_{A} \) is increasing and \( g_{A} \) is decreasing,

  • (A2) \( f_{A} \le g_{A} \) (pointwise),

and we can also define a function of the variable \( x \) on the interval \( \left[{f_{A} \left(1 \right),g_{A} \left(1 \right)} \right] \) with constant value equal to 1. Then the membership function of the ordered fuzzy number \( A \) can be defined as follows (Fig. 1b)

$$ \mu_{A} \left( x \right) = \left\{ {\begin{array}{*{20}l} 0 \hfill & {{\text{if}}\;x < f_{A} \left( 0 \right)} \hfill \\ {f_{A}^{ - 1} \left( x \right)} \hfill & {{\text{if}}\;f_{A} \left( 0 \right) \le x \le f_{A} \left( 1 \right)} \hfill \\ 1 \hfill & {{\text{if}}\;f_{A} \left( 1 \right) \le x \le g_{A} \left( 1 \right)} \hfill \\ {g_{A}^{ - 1} \left( x \right)} \hfill & {{\text{if}}\;g_{A} \left( 1 \right) \le x \le g_{A} \left( 0 \right)} \hfill \\ 0 \hfill & {{\text{if}}\;g_{A} \left( 0 \right) < x.} \hfill \\ \end{array} } \right. $$
(1)

Using the characteristic points in (1), an OFN \( A \) can be written as \( A = \left({f_{A} \left(0 \right),f_{A} \left(1 \right),g_{A} \left(1 \right),g_{A} \left(0 \right)} \right) \), where \( f_{A} \left(0 \right) \le f_{A} \left(1 \right) \le g_{A} \left(1 \right) \le g_{A} \left(0 \right) \). Figure 1c shows the membership function of an OFN \( A \) with an extra arrow denoting the orientation (the order of the inverse functions \( f_{A}^{- 1} \) and \( g_{A}^{- 1} \)). In this case, the application of the OFNs model is no different from the application of the CFNs model. But let us note that the pair of continuous functions \( \left({g_{A},f_{A}} \right) = \left({g_{A} \left(0 \right),g_{A} \left(1 \right),f_{A} \left(1 \right),f_{A} \left(0 \right)} \right) \), where \( g_{A} \left(0 \right) \ge g_{A} \left(1 \right) \ge f_{A} \left(1 \right) \ge f_{A} \left(0 \right) \), determines a different OFN than the pair \( \left({f_{A},g_{A}} \right) \). Figure 2 shows that although the two curves have an identical shape, the corresponding membership functions determine two different OFNs differing in orientation. Using the orientation of OFNs, the set \( {\Re} \) can be divided into two subsets:

Fig. 2 a An OFN \( \left({f_{A},g_{A}} \right) \) with a positive orientation, b an OFN \( \left({g_{A},f_{A}} \right) \) with a negative orientation

  • numbers with a positive orientation, if the direction of the OFNs is the same as the \( x \)-axis (Fig. 2a),

  • numbers with a negative orientation, if the direction of the OFNs is opposite (Fig. 2b).

Arithmetic operations on OFNs are defined as pairwise operations on their functions \( f \) and \( g \).

Definition 2

(Kosiński 2006). Let \( A = \left({f_{A},g_{A}} \right) \), \( B = \left({f_{B},g_{B}} \right) \) and \( C = \left({f_{C},g_{C}} \right) \) be OFNs. The arithmetic operations: addition \( \left({C = A + B} \right) \), subtraction \( \left({C = A - B} \right) \), multiplication \( \left({C = A \cdot B} \right) \) and division \( \left({C = A/B} \right) \) are defined in \( {\Re} \) as follows

$$ \forall y \in \left[ {0,1} \right] \left[ { f_{C} \left( y \right) = f_{A} \left( y \right)*f_{B} \left( y \right) \;{\text{and}}\;g_{C} \left( y \right) = g_{A} \left( y \right)*g_{B} \left( y \right) } \right] $$
(2)

where \( * \in \left\{ { + , - , \cdot , /} \right\} \) and \( A/B \) is defined when \( f_{B} \ne 0 \) and \( g_{B} \ne 0 \) for each \( y \in \left[{0,1} \right] \).

Since real numbers are a special case of OFNs, they can be represented in \( {\Re} \) as follows. Let \( r \in {\mathbb{R}} \) and let \( r^{\prime} \) be the constant function \( r^{\prime}\left(y \right) = r \) for all \( y \in \left[{0,1} \right] \). Then \( r^{\ast} = \left({r^{\prime},r^{\prime}} \right) \) is the OFN which represents the real number \( r \) in \( {\Re} \). Now we can define the multiplication of an ordered fuzzy number \( A = \left({f_{A},g_{A}} \right) \) by a real number \( r \).

Definition 3

(Kosiński 2006). Let \( A = \left({f_{A},g_{A}} \right) \in {\Re} \) and \( r \in {\mathbb{R}} \). Multiplication of an ordered fuzzy number \( A \) by a real number \( r \) is defined by the formula

$$ \forall y \in \left[ {0,1} \right] \left[ {r \cdot A = (r \cdot f_{A} (y),r \cdot g_{A} (y))} \right]. $$
(3)

Here it is worth mentioning that if the functions \( f_{A} \) and/or \( g_{A} \) are not invertible (Fig. 3a) or assumption (A2) is not satisfied (Fig. 3b), we can obtain the so-called improper OFNs (Kosinski et al. 2003). Instead of the membership function, we can define the membership curve (or relation) in the \( xy \)-plane, consisting of the functions \( f_{A} \) and \( g_{A} \) (as functions of the variable \( x \)) and a segment of the line \( y = 1 \) over the interval \( \left[{f_{A} \left(1 \right), g_{A} \left(1 \right)} \right] \) (Fig. 3).

Fig. 3 Improper OFNs with a membership curve (or relation) instead of a membership function

Moreover, this type of improper OFN can also appear when arithmetic operations on OFNs are performed. Let us consider two numerical examples:

  • the sum of two OFNs \( A = \left({1 + y,6 - y} \right) \) and \( B = \left({7 - y,1 + 4y} \right) \), \( y \in \left[{0,1} \right] \) is equal to \( A + B = \left({8, 7 + 3y} \right) \) and is an improper OFN (Fig. 4a),

    Fig. 4 OFNs \( A \) and \( B \) and their sum \( A + B \) which is an improper OFN

  • the sum of another pair of OFNs \( A = \left({1 + y,5 - y} \right) \) and \( B = \left({8 - 3y,1 + 3y} \right) \), \( y \in \left[{0,1} \right] \) is equal to \( A + B = \left({9 - 2y,6 + 2y} \right) \) which is also an improper OFN (Fig. 4b).

In formula (1) of the membership function of the OFN \( A \) there are four characteristic real numbers: \( f_{A} \left(0 \right) \), \( f_{A} \left(1 \right) \), \( g_{A} \left(1 \right) \) and \( g_{A} \left(0 \right) \). If the functions \( f_{A} \) and \( g_{A} \) are linear, these four numbers uniquely describe \( A \) as follows (Fig. 5)

$$ A = \left({f_{A} \left(0 \right), f_{A} \left(1 \right), g_{A} \left(1 \right), g_{A} \left(0 \right)} \right) . $$
(4)
Fig. 5 A trapezoidal OFN \( A \) (a pair of linear functions) with characteristic points

The number \( A \) is called a positive trapezoidal OFN if \( 0 < f_{A} \left(0 \right) \le f_{A} \left(1 \right) < g_{A} \left(1 \right) \le g_{A} \left(0 \right) \) or \( f_{A} \left(0 \right) \ge f_{A} \left(1 \right) > g_{A} \left(1 \right) \ge g_{A} \left(0 \right) > 0 \) (if \( f_{A} \left(1 \right) = g_{A} \left(1 \right) \), it is called a positive triangular OFN).

The representation of the form (4) allows us to quickly perform arithmetic operations on trapezoidal OFNs using these characteristic points. Let \( A = \left({f_{A} \left(0 \right), f_{A} \left(1 \right), g_{A} \left(1 \right), g_{A} \left(0 \right)} \right) \) and \( B = \left({f_{B} \left(0 \right), f_{B} \left(1 \right), g_{B} \left(1 \right), g_{B} \left(0 \right)} \right) \) be positive trapezoidal OFNs. The arithmetic operations on these numbers are defined by the formula

$$ A\diamondsuit B = \left( {f_{A} \left( 0 \right)\diamondsuit f_{B} \left( 0 \right), f_{A} \left( 1 \right)\diamondsuit f_{B} \left( 1 \right), g_{A} \left( 1 \right)\diamondsuit g_{B} \left( 1 \right), g_{A} \left( 0 \right)\diamondsuit g_{B} \left( 0 \right)} \right) $$
(5)

where \( \diamondsuit \in \{ + , - ,*,/\} \).
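
Formula (5) can be sketched in code as follows (a minimal Python illustration, assuming trapezoidal OFNs are stored as 4-tuples of their characteristic points; the function name is ours):

```python
import operator

# Componentwise arithmetic on positive trapezoidal OFNs, cf. Eq. (5); each OFN is
# stored as the 4-tuple (f(0), f(1), g(1), g(0)).
_OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def ofn_op(a, b, op):
    """Apply +, -, * or / to two trapezoidal OFNs characteristic point by point."""
    f = _OPS[op]
    return tuple(f(x, y) for x, y in zip(a, b))

A = (1, 2, 3, 4)
B = (2, 3, 4, 5)
print(ofn_op(A, B, "+"))   # (3, 5, 7, 9)
print(ofn_op(A, B, "*"))   # (2, 6, 12, 20)
```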

In the numerical example we will use three special types of OFNs (Roszkowska and Kacprzak 2016). For \( i \in {\mathbb{N}} \), the first one is the real number \( A_{1} = \left({i,i,i,i} \right) \), which will express the term “exactly \( i \)” (Fig. 6a). The other OFNs are \( A_{2} = \left({i,i,\frac{2i + 1}{2},i + 1} \right) \) and \( A_{3} = \left({i + 1,i + 1,\frac{2i + 1}{2},i} \right) \), expressing the terms “more than \( i \)” (Fig. 6b) and “less than \( i + 1 \)” (Fig. 6c), respectively.

Fig. 6 Special OFNs which will be used in the numerical example
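
For illustration, these three special OFNs can be generated as follows (a small Python sketch under the same 4-tuple convention; the function names are ours):

```python
def exactly(i):          # "exactly i" (Fig. 6a), a real number embedded as an OFN
    return (i, i, i, i)

def more_than(i):        # "more than i" (Fig. 6b), positive orientation
    return (i, i, (2 * i + 1) / 2, i + 1)

def less_than(j):        # "less than j", i.e. j = i + 1 (Fig. 6c), negative orientation
    i = j - 1
    return (j, j, (2 * i + 1) / 2, i)

print(exactly(5))    # (5, 5, 5, 5)
print(more_than(5))  # (5, 5, 5.5, 6)
print(less_than(6))  # (6, 6, 5.5, 5)
```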

In some fuzzy MCDM methods, including fuzzy TOPSIS, it is necessary to measure the distance between fuzzy numbers, and to perform max and min operations on them.

Definition 4

Let \( A = \left({f_{A} \left(0 \right),f_{A} \left(1 \right),g_{A} \left(1 \right),g_{A} \left(0 \right)} \right) \) and \( B = \left({f_{B} \left(0 \right),f_{B} \left(1 \right),g_{B} \left(1 \right),g_{B} \left(0 \right)} \right) \) be two positive trapezoidal OFNs. Then the distance calculated by the vertex method and the max and min operations are defined as

$$ d\left({A,B} \right) = \sqrt {\frac{1}{4}\left[{\left({f_{A} \left(0 \right) - f_{B} \left(0 \right)} \right)^{2} + \left({f_{A} \left(1 \right) - f_{B} \left(1 \right)} \right)^{2} + \left({g_{A} \left(1 \right) - g_{B} \left(1 \right)} \right)^{2} + \left({g_{A} \left(0 \right) - g_{B} \left(0 \right)} \right)^{2}} \right]}, $$
(6)
$$ { \hbox{max} }\left({A,B} \right) = \left({{ \hbox{max} }\left\{{f_{A} \left(0 \right),f_{B} \left(0 \right)} \right\},{ \hbox{max} }\left\{{f_{A} \left(1 \right),f_{B} \left(1 \right)} \right\},{ \hbox{max} }\left\{{g_{A} \left(1 \right),g_{B} \left(1 \right)} \right\},{ \hbox{max} }\left\{{g_{A} \left(0 \right),g_{B} \left(0 \right)} \right\}} \right), $$
(7)
$$ { \hbox{min} }\left({A,B} \right) = \left({{ \hbox{min} }\left\{{f_{A} \left(0 \right),f_{B} \left(0 \right)} \right\},{ \hbox{min} }\left\{{f_{A} \left(1 \right),f_{B} \left(1 \right)} \right\},{ \hbox{min} }\left\{{g_{A} \left(1 \right),g_{B} \left(1 \right)} \right\},{ \hbox{min} }\left\{{g_{A} \left(0 \right),g_{B} \left(0 \right)} \right\}} \right) . $$
(8)
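
Definition 4 translates directly into code; the following is a minimal Python sketch under the 4-tuple convention used above (the function names are ours):

```python
# Definition 4 for trapezoidal OFNs stored as 4-tuples (f(0), f(1), g(1), g(0)).

def vertex_distance(a, b):                       # Eq. (6)
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / 4) ** 0.5

def ofn_max(a, b):                               # Eq. (7), componentwise maximum
    return tuple(max(x, y) for x, y in zip(a, b))

def ofn_min(a, b):                               # Eq. (8), componentwise minimum
    return tuple(min(x, y) for x, y in zip(a, b))

A, B = (1, 2, 3, 4), (2, 3, 4, 5)
print(vertex_distance(A, B))   # 1.0
print(ofn_max(A, B))           # (2, 3, 4, 5)
print(ofn_min(A, B))           # (1, 2, 3, 4)
```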

3 The TOPSIS method based on OFNs

In this section the classical TOPSIS method and its fuzzy extension based on OFNs are presented. Let us assume that the decision maker has to choose one of \( m \) possible alternatives described by \( n \) criteria. The rating of alternative \( A_{i} \)\( \left({i = 1, \ldots,m} \right) \) with respect to criterion \( C_{j} \)\( \left({j = 1, \ldots,n} \right) \) is denoted by \( x_{ij} \). The set of criteria is divided into two subsets: benefit criteria (greater value is better) denoted by \( B \) and cost criteria (lower value is better) denoted by \( C \). Let \( w = \left({w_{1},w_{2}, \ldots,w_{n}} \right) \) be the vector of criteria weights.

The original TOPSIS method assumes that the rating \( x_{ij} \) of the alternatives with respect to the criteria, as well as the criteria weights \( w_{j} \), are expressed precisely by real numbers. It consists of the following steps (Hwang and Yoon 1981).

Step 1:

Determination of the decision matrix \( X \)

$$ X = \left[{x_{ij}} \right]_{m \times n} $$
(9)

where \( x_{ij} \in{\mathbb{R}}. \)

Step 2:

Calculation of the normalized decision matrix \( R \) using vector normalization

$$ R = \left[{r_{ij}} \right]_{m \times n} $$
(10)

where \( r_{ij} = \frac{{x_{ij}}}{{\sqrt {\mathop \sum \nolimits_{k = 1}^{m} x_{kj}^{2}}}} \).

Step 3:

Calculation of the weighted normalized matrix \( V \) by multiplying the columns of the normalized decision matrix \( R \) by the associated weights \( w_{j} \in{\mathbb{R}} \) satisfying \( \mathop \sum \limits_{j = 1}^{n} w_{j} = 1 \)

$$ V = \left[{v_{ij}} \right]_{m \times n} $$
(11)

where \( v_{ij} = r_{ij} \cdot w_{j} \).

Step 4:

Determination of the positive ideal solution \( A^{+} \)

$$ A^{ + } = \left( {v_{1}^{ + } ,v_{2}^{ + } , \ldots ,v_{n}^{ + } } \right) = \left\{ {\left( {\mathop {\hbox{max} }\limits_{i} v_{ij} | j \in B} \right),\left( {\mathop {\hbox{min} }\limits_{i} v_{ij} | j \in C} \right)} \right\} $$
(12)

and the negative ideal solution \( A^{-} \)

$$ A^{ - } = \left( {v_{1}^{ - } ,v_{2}^{ - } , \ldots ,v_{n}^{ - } } \right) = \left\{ {\left( {\mathop {\hbox{min} }\limits_{i} v_{ij} | j \in B} \right),\left( {\mathop {\hbox{max} }\limits_{i} v_{ij} | j \in C} \right)} \right\}. $$
(13)
Step 5:

Calculation of the Euclidean distances of each alternative \( A_{i} \) from the positive ideal solution \( A^{+} \)

$$ d_{i}^{+} = \sqrt {\mathop \sum \limits_{j = 1}^{n} \left({v_{ij} - v_{j}^{+}} \right)^{2}} $$
(14)

and from the negative ideal solution \( A^{-} \)

$$ d_{i}^{-} = \sqrt {\mathop \sum \limits_{j = 1}^{n} \left({v_{ij} - v_{j}^{-}} \right)^{2}} . $$
(15)
Step 6:

Calculation of the relative closeness of each alternative \( A_{i} \) to the positive ideal solution \( A^{+} \)

$$ RC_{i} = \frac{{d_{i}^{-}}}{{d_{i}^{+} + d_{i}^{-}}} . $$
(16)
Step 7:

Ranking of the alternatives \( A_{i} \) according to their relative closeness to the ideal solution \( A^{+} \) (the larger the value of \( RC_{i} \), the better the alternative \( A_{i} \)). The best alternative is the one with the largest value of \( RC_{i}. \)
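
For readers who prefer code, the following is a compact Python/NumPy sketch of Steps 1-7 (the function name and the small data set are illustrative and not taken from the paper):

```python
import numpy as np

def classical_topsis(X, w, benefit):
    """A sketch of the classical TOPSIS (Steps 1-7 above).

    X       : (m, n) matrix of crisp ratings x_ij
    w       : length-n weight vector summing to 1
    benefit : length-n boolean mask, True for benefit criteria, False for cost
    Returns the relative closeness RC_i of every alternative.
    """
    X = np.asarray(X, dtype=float)
    R = X / np.sqrt((X ** 2).sum(axis=0))             # Step 2: vector normalization, Eq. (10)
    V = R * np.asarray(w)                             # Step 3: weighting, Eq. (11)
    benefit = np.asarray(benefit)
    A_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))   # Step 4: PIS, Eq. (12)
    A_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))   # Step 4: NIS, Eq. (13)
    d_pos = np.sqrt(((V - A_pos) ** 2).sum(axis=1))   # Step 5, Eq. (14)
    d_neg = np.sqrt(((V - A_neg) ** 2).sum(axis=1))   # Step 5, Eq. (15)
    return d_neg / (d_pos + d_neg)                    # Step 6, Eq. (16)

# Hypothetical data: 3 alternatives, two benefit criteria and one cost criterion.
rc = classical_topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]],
                      w=[0.5, 0.3, 0.2], benefit=[True, True, False])
print(rc.argsort()[::-1])   # Step 7: indices of the alternatives, best first
```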

In real-life decision making problems it is usually difficult to express evaluations precisely using real numbers, due to a lack of knowledge and data or to subjective and imprecise expert judgments. In such situations, instead of exact numbers, OFNs can be used. The fuzzy TOPSIS method based on positive trapezoidal OFNs proposed by Roszkowska and Kacprzak (2016) consists of the following steps.

Step 1:

Define the fuzzy decision matrix \( X \)

$$ X = \left[{x_{ij}} \right]_{m \times n} $$
(17)

where \( x_{ij} = \left({f_{{x_{ij}}} \left(0 \right),f_{{x_{ij}}} \left(1 \right),g_{{x_{ij}}} \left(1 \right),g_{{x_{ij}}} \left(0 \right)} \right) \) is a positive trapezoidal OFN representing the rating of alternative \( A_{i} \) with respect to attribute \( C_{j}. \)

Step 2:

Construct the normalized fuzzy decision matrix \( R \) using linear normalization

$$ R = \left[{r_{ij}} \right]_{m \times n} $$
(18)

where

$$ r_{ij} = \left\{{\begin{array}{*{20}c} {\left({\frac{{f_{{x_{ij}}} \left(0 \right)}}{{\mathop {\max}\limits_{i} g_{{x_{ij}}} \left(0 \right)}}, \frac{{f_{{x_{ij}}} \left(1 \right)}}{{\mathop {\max}\limits_{i} g_{{x_{ij}}} \left(0 \right)}}, \frac{{g_{{x_{ij}}} \left(1 \right)}}{{\mathop {\max}\limits_{i} g_{{x_{ij}}} \left(0 \right)}}, \frac{{g_{{x_{ij}}} \left(0 \right)}}{{\mathop {\max}\limits_{i} g_{{x_{ij}}} \left(0 \right)}}} \right)} & {{\text{if}}\quad j \in B} \\ {\left({\frac{{\mathop {\min}\limits_{i} g_{{x_{ij}}} \left(0 \right)}}{{f_{{x_{ij}}} \left(0 \right)}}, \frac{{\mathop {\min}\limits_{i} g_{{x_{ij}}} \left(0 \right)}}{{f_{{x_{ij}}} \left(1 \right)}}, \frac{{\mathop {\min}\limits_{i} g_{{x_{ij}}} \left(0 \right)}}{{g_{{x_{ij}}} \left(1 \right)}}, \frac{{\mathop {\min}\limits_{i} g_{{x_{ij}}} \left(0 \right)}}{{g_{{x_{ij}}} \left(0 \right)}}} \right)} & {{\text{if}}\quad j \in C} \\ \end{array}} \right.. $$
(19)
Step 3:

Construct the weighted normalized fuzzy matrix \( V \) by multiplying the columns of the normalized fuzzy decision matrix \( R \) by the associated weights \( w_{j} \in{\mathbb{R}} \) satisfying \( \mathop \sum \limits_{j = 1}^{n} w_{j} = 1 \)

$$ V = \left[{v_{ij}} \right]_{m \times n} $$
(20)

where

$$ v_{ij} = r_{ij} \cdot w_{j} = \left( {f_{{r_{ij}}} \left(0 \right),f_{{r_{ij}}} \left(1 \right),g_{{r_{ij}}} \left(1 \right),g_{{r_{ij}}} \left(0 \right)} \right) \cdot w_{j} = \left( {f_{{r_{ij}}} \left(0 \right) \cdot w_{j}, f_{{r_{ij}}} \left(1 \right) \cdot w_{j}, g_{{r_{ij}}} \left(1 \right) \cdot w_{j}, g_{{r_{ij}}} \left(0 \right) \cdot w_{j}} \right). $$
Step 4:

Determine the positive ideal solution as follows

$$ A^{+} = \left({v_{1}^{+},v_{2}^{+}, \ldots,v_{n}^{+}} \right) $$
(21)

where \( v_{j}^{+} = \mathop {\max}\limits_{i} v_{ij} \)\( \left({j = 1, \ldots,n} \right) \) and the negative ideal solution

$$ A^{ - } = \left( {v_{1}^{ - } ,v_{2}^{ - } , \ldots ,v_{n}^{ - } } \right) $$
(22)

where \( v_{j}^{-} = \mathop {\min}\limits_{i} v_{ij} \)\( \left({j = 1, \ldots,n} \right). \)

Step 5:

Calculate the distances of each alternative \( A_{i} \)\( \left({i = 1, \ldots,m} \right) \) from the positive ideal solution \( A^{+} \)

$$ d_{i}^{+} = \mathop \sum \limits_{j = 1}^{n} d\left({v_{ij},v_{j}^{+}} \right) $$
(23)

and from the negative ideal solution \( A^{-} \)

$$ d_{i}^{-} = \mathop \sum \limits_{j = 1}^{n} d\left({v_{ij},v_{j}^{-}} \right) . $$
(24)
Step 6:

Calculate the relative closeness of alternative \( A_{i} \) to the ideal solution \( A^{+} \)

$$ RC_{i} = \frac{{d_{i}^{-}}}{{d_{i}^{+} + d_{i}^{-}}}. $$
(25)
Step 7:

Rank the alternatives \( A_{i} \) and select the one with the largest value of \( RC_{i}. \)
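
Analogously, the OFN-based TOPSIS above can be sketched as follows (Python/NumPy; trapezoidal OFNs are stored as 4-tuples of characteristic points, the data are hypothetical and the function name is ours):

```python
import numpy as np

def ofn_topsis(X, w, benefit):
    """A sketch of the OFN-based TOPSIS above for a single decision maker.

    X       : (m, n, 4) array; X[i, j] holds the characteristic points
              (f(0), f(1), g(1), g(0)) of a positive trapezoidal OFN
    w       : length-n real weight vector summing to 1
    benefit : length-n boolean mask (True = benefit criterion, False = cost)
    """
    X = np.asarray(X, dtype=float)
    m, n, _ = X.shape
    R = np.empty_like(X)
    for j in range(n):                                  # Step 2: linear normalization, Eq. (19)
        if benefit[j]:
            R[:, j] = X[:, j] / X[:, j, 3].max()
        else:
            R[:, j] = X[:, j, 3].min() / X[:, j]
    V = R * np.asarray(w)[None, :, None]                # Step 3: weighting, Eq. (20)
    A_pos, A_neg = V.max(axis=0), V.min(axis=0)         # Step 4: Eqs. (21), (22) via (7), (8)

    def dist(U, W):                                     # vertex method, Eq. (6)
        return np.sqrt(((U - W) ** 2).mean(axis=-1))

    d_pos = dist(V, A_pos).sum(axis=1)                  # Step 5, Eq. (23)
    d_neg = dist(V, A_neg).sum(axis=1)                  # Step 5, Eq. (24)
    return d_neg / (d_pos + d_neg)                      # Step 6, Eq. (25)

# Hypothetical data: two alternatives rated on two benefit criteria.
X = [[(4, 5, 6, 7), (2, 3, 4, 5)],
     [(3, 4, 5, 6), (5, 6, 7, 8)]]
print(ofn_topsis(X, w=[0.6, 0.4], benefit=[True, True]))
```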

4 The proposed approach

An extended TOPSIS based on OFNs for GDM consists of the following four main stages presented in Fig. 7.

Fig. 7 The conceptual framework of the proposed approach

  1. Problem description stage, consisting of the determination of the MCDM problem, the determination of the group of decision makers and the selection of all the feasible alternatives and important criteria.

  2. Preparation stage, consisting of the construction of fuzzy decision matrices, their normalization, and the calculation of weighted normalized fuzzy decision matrices for each DM.

  3. Transformation stage, consisting of the construction of normalized fuzzy decision matrices for each alternative based on the weighted normalized fuzzy decision matrices for each DM.

  4. Selection stage, consisting of the determination of the positive ideal solution and the negative ideal solution, the computation of the total score for each alternative, rank ordering them and the selection of the best one.

4.1 Problem description

Consider an MCDM problem for group decision making; for instance, the example presented in the next section: hiring a secretary for a company. Let \( A = \left\{{A_{1},A_{2}, \ldots, A_{m}} \right\} \)\( \left({m \ge 2} \right) \) be a discrete set of \( m \) feasible alternatives (candidates), \( C = \left\{{C_{1},C_{2}, \ldots, C_{n}} \right\} \)\( \left({n \ge 2} \right) \) be a finite set of criteria, \( w = \left({w_{1},w_{2}, \ldots, w_{n}} \right) \) be the vector of criteria weights, such that \( 0 \le w_{j} \le 1 \) and \( \mathop \sum \limits_{j = 1}^{n} w_{j} = 1 \). Moreover, let \( DM = \left\{{{\it DM}_{1},DM_{2}, \ldots, DM_{K}} \right\} \)\( \left({K \ge 2} \right) \) be a group of decision makers.

4.2 Data preparation

In the process of group decision making, the DMs are asked to rate alternatives with respect to criteria. In many cases—when our knowledge of the analysed subject is incomplete, or the available data are inaccurate, or when the ratings are expressed linguistically—OFNs can be used. In that case, each decision maker \( DM_{k} \) (\( k = 1,2, \ldots,K \)) provides a decision matrix of the form

$$ X^{k} = \left[{x_{ij}^{k}} \right]_{m \times n} = \begin{array}{*{20}c} {} & {\begin{array}{*{20}c} {\begin{array}{*{20}c} {C_{1} } & { C_{2}} \\ \end{array}} & {\begin{array}{*{20}c} \cdots & { C_{n}} \\ \end{array}} \\ \end{array}} \\ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {A_{1}} \\ {A_{2}} \\ \end{array}} \\ {\begin{array}{*{20}c} \vdots \\ {A_{m}} \\ \end{array}} \\ \end{array}} & {\left[{\begin{array}{*{20}c} {\begin{array}{*{20}c} {x_{11}^{k}} & {x_{12}^{k}} \\ {x_{21}^{k}} & {x_{22}^{k}} \\ \end{array}} & {\begin{array}{*{20}c} \cdots & {x_{1n}^{k}} \\ \cdots & {x_{2n}^{k}} \\ \end{array}} \\ {\begin{array}{*{20}c} \vdots & \vdots \\ {x_{m1}^{k}} & {x_{m2}^{k}} \\ \end{array}} & {\begin{array}{*{20}c} \ddots & \vdots \\ \cdots & {x_{mn}^{k}} \\ \end{array}} \\ \end{array}} \right]} \\ \end{array} $$
(26)

where \( x_{ij}^{k} = \left({f_{{x_{ij}^{k}}} \left(0 \right),f_{{x_{ij}^{k}}} \left(1 \right),g_{{x_{ij}^{k}}} \left(1 \right),g_{{x_{ij}^{k}}} \left(0 \right)} \right) \) is a positive trapezoidal OFN representing the rating of alternative \( A_{i} \) with respect to criterion \( C_{j} \) provided by decision maker \( DM_{k} \).

Remark 1

The orientation of the OFN can be used to indicate the type of the criterion. In the case of a cost criterion, the OFN has a negative orientation (the lower the value, the better). In the case of a benefit criterion, the OFN has a positive orientation (the higher the value, the better). In our example in the next section the orientation is used to express the terms “more than\( i \)” and “less than\( i \)”.

Next, in order to ensure comparability of criteria, the fuzzy decision matrix \( X^{k} \)\( \left({k = 1, \ldots,K} \right) \) is normalized. The normalized fuzzy decision matrix

$$ Y^{k} = \left[{y_{ij}^{k}} \right]_{m \times n} = \begin{array}{*{20}c} {} & {\begin{array}{*{20}c} {\begin{array}{*{20}c} {C_{1} } & { C_{2}} \\ \end{array}} & {\begin{array}{*{20}c} \cdots & { C_{n}} \\ \end{array}} \\ \end{array}} \\ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {A_{1}} \\ {A_{2}} \\ \end{array}} \\ {\begin{array}{*{20}c} \vdots \\ {A_{m}} \\ \end{array}} \\ \end{array}} & {\left[{\begin{array}{*{20}c} {\begin{array}{*{20}c} {y_{11}^{k}} & {y_{12}^{k}} \\ {y_{21}^{k}} & {y_{22}^{k}} \\ \end{array}} & {\begin{array}{*{20}c} \cdots & {y_{1n}^{k}} \\ \cdots & {y_{2n}^{k}} \\ \end{array}} \\ {\begin{array}{*{20}c} \vdots & \vdots \\ {y_{m1}^{k}} & {y_{m2}^{k}} \\ \end{array}} & {\begin{array}{*{20}c} \ddots & \vdots \\ \cdots & {y_{mn}^{k}} \\ \end{array}} \\ \end{array}} \right]} \\ \end{array} $$
(27)

can be calculated using the following formulas

$$ y_{ij}^{k} = \left\{{\begin{array}{*{20}c} {\left({\frac{{f_{{x_{ij}^{k}}} \left(0 \right)}}{{\mathop {\max}\limits_{i} g_{{x_{ij}^{k}}} \left(0 \right)}}, \frac{{f_{{x_{ij}^{k}}} \left(1 \right)}}{{\mathop {\max}\limits_{i} g_{{x_{ij}^{k}}} \left(0 \right)}}, \frac{{g_{{x_{ij}^{k}}} \left(1 \right)}}{{\mathop {\max}\limits_{i} g_{{x_{ij}^{k}}} \left(0 \right)}}, \frac{{g_{{x_{ij}^{k}}} \left(0 \right)}}{{\mathop {\max}\limits_{i} g_{{x_{ij}^{k}}} \left(0 \right)}}} \right)} & {{\text{if}}\quad j \in B} \\ {\left({\frac{{\mathop {\min}\limits_{i} g_{{x_{ij}^{k}}} \left(0 \right)}}{{f_{{x_{ij}^{k}}} \left(0 \right)}}, \frac{{\mathop {\min}\limits_{i} g_{{x_{ij}^{k}}} \left(0 \right)}}{{f_{{x_{ij}^{k}}} \left(1 \right)}}, \frac{{\mathop {\min}\limits_{i} g_{{x_{ij}^{k}}} \left(0 \right)}}{{g_{{x_{ij}^{k}}} \left(1 \right)}}, \frac{{\mathop {\min}\limits_{i} g_{{x_{ij}^{k}}} \left(0 \right)}}{{g_{{x_{ij}^{k}}} \left(0 \right)}}} \right)} & {{\text{if}}\quad j \in C} \\ \end{array}.} \right. $$
(28)

Using the vector of criteria weights \( w = \left({w_{1},w_{2}, \ldots, w_{n}} \right) \), the weighted normalized fuzzy decision matrix is calculated for each \( DM_{k} \) (\( k = 1,2, \ldots,K \))

$$ V^{k} = \left[{v_{ij}^{k}} \right]_{m \times n} = \begin{array}{*{20}c} {} & {\begin{array}{*{20}c} {\begin{array}{*{20}c} {C_{1} } & { C_{2}} \\ \end{array}} & {\begin{array}{*{20}c} \cdots & { C_{n}} \\ \end{array}} \\ \end{array}} \\ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {A_{1}} \\ {A_{2}} \\ \end{array}} \\ {\begin{array}{*{20}c} \vdots \\ {A_{m}} \\ \end{array}} \\ \end{array}} & {\left[{\begin{array}{*{20}c} {\begin{array}{*{20}c} {v_{11}^{k}} & {v_{12}^{k}} \\ {v_{21}^{k}} & {v_{22}^{k}} \\ \end{array}} & {\begin{array}{*{20}c} \cdots & {v_{1n}^{k}} \\ \cdots & {v_{2n}^{k}} \\ \end{array}} \\ {\begin{array}{*{20}c} \vdots & \vdots \\ {v_{m1}^{k}} & {v_{m2}^{k}} \\ \end{array}} & {\begin{array}{*{20}c} \ddots & \vdots \\ \cdots & {v_{mn}^{k}} \\ \end{array}} \\ \end{array}} \right]} \\ \end{array}, $$
(29)

where \( v_{ij}^{k} = y_{ij}^{k} \cdot w_{j} = \left({f_{{y_{ij}^{k}}} \left(0 \right) \cdot w_{j},f_{{y_{ij}^{k}}} \left(1 \right) \cdot w_{j},g_{{y_{ij}^{k}}} \left(1 \right) \cdot w_{j},g_{{y_{ij}^{k}}} \left(0 \right) \cdot w_{j}} \right) \).

4.3 Transformation of the weighted normalized fuzzy decision matrices for all DMs into normalized fuzzy decision matrices for all alternatives

The matrices \( V^{k} \)\( \left({k = 1,2, \ldots,K} \right) \) form the basis for the construction of weighted normalized fuzzy decision matrices for each alternative \( A_{i} \left({i = 1,2, \ldots, m} \right) \)

$$ A^{i} = \left[{v_{ij}^{k}} \right]_{K \times n} = \begin{array}{*{20}c} {} & {\begin{array}{*{20}c} {\begin{array}{*{20}c} {C_{1} } & {C_{2}} \\ \end{array} } & {\begin{array}{*{20}c} \cdots & {C_{n}} \\ \end{array}} \\ \end{array}} \\ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {{\it DM}_{1}} \\ {{\it DM}_{2}} \\ \end{array}} \\ {\begin{array}{*{20}c} \vdots \\ {{\it DM}_{K}} \\ \end{array}} \\ \end{array}} & {\left[{\begin{array}{*{20}c} {\begin{array}{*{20}c} {v_{i1}^{1}} & {v_{i2}^{1}} \\ {v_{i1}^{2}} & {v_{i2}^{2}} \\ \end{array}} & {\begin{array}{*{20}c} \cdots & {v_{in}^{1}} \\ \cdots & {v_{in}^{2}} \\ \end{array}} \\ {\begin{array}{*{20}c} \vdots & \vdots \\ {v_{i1}^{K}} & {v_{i2}^{K}} \\ \end{array}} & {\begin{array}{*{20}c} \ddots & \vdots \\ \cdots & {v_{in}^{K}} \\ \end{array}} \\ \end{array}} \right]} \\ \end{array} . $$
(30)

4.4 Rank ordering of the alternatives and the selection of the best one

Matrices \( A^{i} \)\( \left({i = 1,2, \ldots, m} \right) \) constitute the basis for the construction of a ranking of the alternatives and the selection of the best one. The positive ideal solution \( A^{+} \) is determined as follows

$$ A^{+} = \left[{v_{j}^{k +}} \right]_{K \times n} = \begin{array}{*{20}c} {} & {\begin{array}{*{20}c} {C_{1}} & {C_{2}} & \cdots & {C_{n}} \\ \end{array}} \\ {\begin{array}{*{20}c} {{\it DM}_{1}} \\ {{\it DM}_{2}} \\ \vdots \\ {{\it DM}_{K}} \\ \end{array}} & {\left[{\begin{array}{*{20}c} {v_{1}^{1 +}} & {v_{2}^{1 +}} & \cdots & {v_{n}^{1 +}} \\ {v_{1}^{2 +}} & {v_{2}^{2 +}} & \cdots & {v_{n}^{2 +}} \\ \vdots & \vdots & \ddots & \vdots \\ {v_{1}^{K +}} & {v_{2}^{K +}} & \cdots & {v_{n}^{K +}} \\ \end{array}} \right]} \\ \end{array} $$
(31)

where \( v_{j}^{k +} = \mathop {\max}\limits_{i} v_{ij}^{k} \) (\( j = 1,2, \ldots,n;k = 1,2, \ldots,K) \), and the negative ideal solution \( A^{-} \) is determined as follows

$$ A^{-} = \left[{v_{j}^{k -}} \right]_{K \times n} = \begin{array}{*{20}c} {} & {\begin{array}{*{20}c} {C_{1}} & {C_{2}} & \cdots & {C_{n}} \\ \end{array}} \\ {\begin{array}{*{20}c} {{\it DM}_{1}} \\ {{\it DM}_{2}} \\ \vdots \\ {{\it DM}_{K}} \\ \end{array}} & {\left[{\begin{array}{*{20}c} {v_{1}^{1 -}} & {v_{2}^{1 -}} & \cdots & {v_{n}^{1 -}} \\ {v_{1}^{2 -}} & {v_{2}^{2 -}} & \cdots & {v_{n}^{2 -}} \\ \vdots & \vdots & \ddots & \vdots \\ {v_{1}^{K -}} & {v_{2}^{K -}} & \cdots & {v_{n}^{K -}} \\ \end{array}} \right]} \\ \end{array} $$
(32)

where \( v_{j}^{k -} = \mathop {\min}\limits_{i} v_{ij}^{k} \)\( \left({j = 1,2, \ldots,n;k = 1,2, \ldots,K} \right) \). Next, the distances of each alternative \( A_{i} \) represented by matrix \( A^{i} \)\( \left({i = 1, \ldots,m} \right) \) from PIS

$$ d_{i}^{+} = \mathop \sum \limits_{k = 1}^{K} \mathop \sum \limits_{j = 1}^{n} d\left({v_{ij}^{k}, v_{j}^{k +}} \right) $$
(33)

and from NIS

$$ d_{i}^{-} = \mathop \sum \limits_{k = 1}^{K} \mathop \sum \limits_{j = 1}^{n} d\left({v_{ij}^{k}, v_{j}^{k -}} \right) $$
(34)

are calculated. Using these distances, the relative closeness coefficient \( RC_{i} \) to the PIS is calculated for each alternative \( A_{i} \)

$$ RC_{i} = \frac{{d_{i}^{-}}}{{d_{i}^{-} + d_{i}^{+}}} . $$
(35)

According to the descending values of \( RC_{i} \), all alternatives \( A_{i} \)\( \left({i = 1, \ldots,m} \right) \) are rank ordered and the best one is selected.
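
The whole selection stage (Sects. 4.3 and 4.4) can be summarized by the following Python/NumPy sketch (the function name and the placeholder data are ours; the placeholder array only illustrates the shapes involved):

```python
import numpy as np

def gdm_ofn_topsis_rank(V):
    """A sketch of Sects. 4.3-4.4: from the weighted normalized fuzzy decision
    matrices of all DMs to the relative closeness of every alternative.

    V : array of shape (K, m, n, 4); V[k, i, j] holds the characteristic points
        of the trapezoidal OFN v_ij^k. Returns RC of length m (larger is better).
    """
    V = np.asarray(V, dtype=float)
    A = V.transpose(1, 0, 2, 3)               # Eq. (30): A[i] is the K x n matrix A^i
    A_pos = A.max(axis=0)                     # Eq. (31): componentwise max over alternatives
    A_neg = A.min(axis=0)                     # Eq. (32): componentwise min over alternatives

    def dist(U, W):                           # vertex-method distance, Eq. (6)
        return np.sqrt(((U - W) ** 2).mean(axis=-1))

    d_pos = dist(A, A_pos).sum(axis=(1, 2))   # Eq. (33): sum over DMs and criteria
    d_neg = dist(A, A_neg).sum(axis=(1, 2))   # Eq. (34)
    return d_neg / (d_pos + d_neg)            # Eq. (35)

# Placeholder data (5 DMs, 5 alternatives, 5 criteria).
rc = gdm_ofn_topsis_rank(np.random.rand(5, 5, 5, 4))
print(np.argsort(-rc) + 1)   # ranking of the alternatives, best first
```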

Remark 2

Note that if there is only one DM, i.e. \( K = 1 \), then the proposed approach is equivalent to the extended TOPSIS method proposed by Roszkowska and Kacprzak (2016).

Remark 3

Note that if we take into account the form of the matrices \( A^{i} \)\( \left({i = 1,2, \ldots, m} \right) \), the positive ideal solution \( A^{+} \) and the negative ideal solution \( A^{-} \), the proposed approach can be regarded as a simultaneous application of the extended TOPSIS for each of the DMs represented by the corresponding rows of these matrices.

4.5 The procedure of the proposed approach

To sum up, the steps of the proposed approach for group decision making are as follows.

Step 1:

Construction of the fuzzy decision matrix \( X^{k} = \left[{x_{ij}^{k}} \right]_{m \times n} \) for each decision maker \( DM_{k} \)\( \left({k = 1, \ldots,K} \right) \) in the form (26).

Step 2:

Normalization of the fuzzy decision matrices \( X^{k} = \left[{x_{ij}^{k}} \right]_{m \times n} \) into matrices \( Y^{k} = \left[{y_{ij}^{k}} \right]_{m \times n} \) using Eq. (28).

Step 3:

Calculation of the weighted normalized fuzzy decision matrices \( V^{k} = \left[{v_{ij}^{k}} \right]_{m \times n} \) using Eq. (29), for the given vector of criteria weights \( w = \left({w_{1},w_{2}, \ldots, w_{n}} \right). \)

Step 4:

Transformation of the weighted normalized fuzzy decision matrices \( V^{k} = \left[{v_{ij}^{k}} \right]_{m \times n} \) for all DMs, using Eq. (30), into normalized fuzzy decision matrices for all alternatives \( A^{i} = \left[{v_{ij}^{k}} \right]_{K \times n} \)\( \left({i = 1, \ldots,m} \right). \)

Step 5:

Determination of the positive ideal solution \( A^{+} = \left[{v_{j}^{k +}} \right]_{K \times n} \) and the negative ideal solution \( A^{-} = \left[{v_{j}^{k -}} \right]_{K \times n} \), using Eqs. (31) and (32), respectively.

Step 6:

Calculation of the distances of each alternative \( A_{i} \) represented by matrix \( A^{i} \)\( \left({i = 1, \ldots,m} \right) \), from PIS and from NIS, using Eqs. (33) and (34), respectively.

Step 7:

Calculation of the relative closeness coefficients to PIS for each alternative \( A_{i} \)\( \left({i = 1, \ldots,m} \right) \), using Eq. (35).

Step 8:

Rank ordering of the alternatives and the selection of the best one.

5 Numerical example

In this section, we examine a numerical example taken from Roszkowska and Kacprzak (2016), where the authors used arithmetic mean to aggregate the decision matrices of DMs; we use the proposed approach.

Let us consider the following situation: a company intends to hire a secretary and five candidates (alternatives) \( \left\{{A_{1},A_{2},A_{3},A_{4},A_{5}} \right\} \) are available for evaluation. A committee of five decision makers \( \left\{{{\it DM}_{1},DM_{2},DM_{3},DM_{4},DM_{5}} \right\} \) have conducted interviews to select the most suitable candidate. They have considered five benefit criteria \( \left\{{C_{1},C_{2},C_{3},C_{4},C_{5}} \right\} \), where \( C_{1} \)—emotional steadiness, \( C_{2} \)—oral communication skills, \( C_{3} \)—personality, \( C_{4} \)—past experience, \( C_{5} \)—self-confidence, with the weight vector \( w = \left({0.3, 0.3, 0.2, 0.1, 0.1} \right) \). The hierarchical structure of this decision problem is shown in Fig. 8.

Fig. 8 The hierarchical structure of the considered decision problem

The DMs have used linguistic variables to rate the candidates (alternatives) with respect to the criteria (see the first column of Table 1), and their evaluations expressed by these linguistic variables are shown in Table 2. Next, the linguistic variables are converted into positive trapezoidal OFNs (see the third column of Table 1). Because all the criteria are assessed on the same ordinal scale, normalization is not needed. Using the vector \( w = \left({0.3, 0.3, 0.2, 0.1, 0.1} \right) \) of criteria weights, the weighted normalized fuzzy decision matrix is calculated for each DM (see Table 3). Next, these matrices are transformed into the weighted normalized fuzzy decision matrices for each alternative (see Table 4). Using these matrices, the positive ideal solution \( A^{+} \) and the negative ideal solution \( A^{-} \) are determined (see Table 5). Finally, the distances of each alternative from the positive ideal solution \( d_{i}^{+} \) and the negative ideal solution \( d_{i}^{-} \), as well as the relative closeness coefficients \( RC_{i} \), are calculated, and the alternatives are rank ordered (see Table 6)

$$ A_{1} \prec A_{4} \prec A_{2} \prec A_{5} \prec A_{3} $$

where \( \prec \) means “inferior to”. Hence, alternative (candidate) \( A_{3} \) should be selected.

Table 1 Linguistic variables for the ratings of the alternatives and their representation by OFNs
Table 2 Decision matrices provided by decision makers
Table 3 Weighted normalized decision matrices of decision makers
Table 4 Weighted normalized decision matrices of alternatives
Table 5 Positive ideal solution and negative ideal solution
Table 6 The distances of each alternative from the positive ideal solution \( d_{i}^{+} \) and the negative ideal solution \( d_{i}^{-} \), the relative closeness coefficients \( RC_{i} \) and the ranking order \( R \) of alternatives

Now we perform a sensitivity analysis, which is essential in evaluating the influence of changes of the criteria weights on the ranking of alternatives. One element of the weight vector \( w \) is changed, the remaining weights are adjusted, and the resulting ranking of alternatives is analysed. Let \( w_{j}^{m} \) be the modified weight \( w_{j} \) of the \( j \)th criterion, for \( j = 1, 2, 3, 4, 5 \). The remaining weights need to be adjusted; for this purpose the weight \( w_{h} \) of the \( h \)th criterion (\( h \ne j \)) is recalculated using the formula

$$ w_{h}^{m} = \frac{{1 - w_{j}^{m}}}{{1 - w_{j}}} \cdot w_{h}. $$
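
A minimal sketch of this re-weighting rule (Python; the function name is ours) is given below.

```python
def adjust_weights(w, j, new_wj):
    """Set w[j] to new_wj and rescale the remaining weights proportionally so
    that the whole vector still sums to 1, as in the formula above."""
    factor = (1 - new_wj) / (1 - w[j])
    return [new_wj if h == j else factor * wh for h, wh in enumerate(w)]

w = [0.3, 0.3, 0.2, 0.1, 0.1]
w_mod = adjust_weights(w, j=3, new_wj=0.2)   # raise the weight of C4 from 0.1 to 0.2
print([round(x, 4) for x in w_mod])          # [0.2667, 0.2667, 0.1778, 0.2, 0.0889]
print(round(sum(w_mod), 10))                 # 1.0
```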

Table 7 presents the obtained results, where CW denotes the current criterion weight, SR the range of changes of the criterion weight for which the current ranking does not change (the same ranking), and SBA the range of changes of the criterion weight for which the best alternative does not change (the same best alternative).

Table 7 Ranges of criterion weights which do not affect the current solutions

Based on the results presented in Table 7, we can conclude that the obtained ranking of alternatives is not a stable solution. For criteria \( C_{1} \) and \( C_{2} \), even a small change of the weights (e.g. equal to 0.0036) may result in a new ranking of alternatives and another best alternative. Taking into account the ranges of weight changes for criteria \( C_{4} \) and \( C_{5} \) which do not affect the ranking and selection of the best alternative we can see that they are much larger (equal to at least 0.229) than those for criteria \( C_{1} \) and \( C_{2} \). Moreover, we can see that the weight of criterion \( C_{3} \) does not affect either the ranking of alternatives or the selection of the best one. The sensitivity analysis for the criteria is presented in Fig. 9. Figure 9a shows how the weights of the remaining criteria change when the weight of the chosen criterion changes. Figure 9b shows how the ranking of alternatives changes when the weight of the chosen criterion changes.

Fig. 9 Sensitivity analysis for the criteria with respect to the ranking of alternatives

6 Comparison of the proposed approach with other, similar approaches

The proposed approach will be compared with other methods. Figures 10, 11 and 12 show the hierarchical structure of the classical TOPSIS (Hwang and Yoon 1981), the TOPSIS for group decision making with aggregation of individual decision matrices, and the proposed approach, respectively. Table 8 illustrates the differences and similarities between the proposed approach (PA), the classical TOPSIS (CT), and the TOPSIS for group decision making with aggregation of individual decision matrices using arithmetic mean based on OFNs (AT) proposed by Roszkowska and Kacprzak (2016).

Fig. 10 The hierarchical structure of the classical TOPSIS

Fig. 11 The hierarchical structure of the TOPSIS method for group decision making with aggregation of individual decision matrices

Fig. 12 The hierarchical structure of the proposed TOPSIS method for group decision making

Table 8 The differences and similarities between the proposed approach (PA), the classical TOPSIS (CT) and TOPSIS for group decision making with aggregation of individual decision matrices using arithmetic mean based on OFNs (AT)

Now we will compare the proposed approach with methods which aggregate the individual weighted normalized decision matrices \( V^{k} = \left[{v_{ij}^{k}} \right] \) into a collective matrix \( V = \left[{v_{ij}} \right] \), which is the starting point for the ranking of the alternatives or the selection of the best one, based on the data from the example in Sect. 5 (Table 2). For the comparison we use the following aggregation methods (a code sketch of these operators is given after the list):

  • AGG1 - arithmetic mean (Chen 2000; Wang and Chang 2007; Roszkowska and Kacprzak 2016), defined by

    $$ v_{ij} = \frac{1}{K}\mathop \sum \limits_{k = 1}^{K} v_{ij}^{k} = \left({\frac{1}{K}\mathop \sum \limits_{k = 1}^{K} f_{{v_{ij}^{k}}} \left(0 \right),\frac{1}{K}\mathop \sum \limits_{k = 1}^{K} f_{{v_{ij}^{k}}} \left(1 \right),\frac{1}{K}\mathop \sum \limits_{k = 1}^{K} g_{{v_{ij}^{k}}} \left(1 \right),\frac{1}{K}\mathop \sum \limits_{k = 1}^{K} g_{{v_{ij}^{k}}} \left(0 \right)} \right), $$
  • AGG2 - geometric mean (Shih et al. 2007; Ye and Li 2009), defined by

    $$ v_{ij} = \left({\mathop \prod \limits_{k = 1}^{K} v_{ij}^{k}} \right)^{{\frac{1}{K}}} = \left({\left({\mathop \prod \limits_{k = 1}^{K} f_{{v_{ij}^{k}}} \left(0 \right)} \right)^{{\frac{1}{K}}},\left({\mathop \prod \limits_{k = 1}^{K} f_{{v_{ij}^{k}}} \left(1 \right)} \right)^{{\frac{1}{K}}},\left({\mathop \prod \limits_{k = 1}^{K} {\text{g}}_{{v_{ij}^{k}}} \left(1 \right)} \right)^{{\frac{1}{K}}},\left({\mathop \prod \limits_{k = 1}^{K} g_{{v_{ij}^{k}}} \left(0 \right)} \right)^{{\frac{1}{K}}}} \right), $$
  • AGG3 - modified arithmetic mean (Shemshadi et al. 2011; Nadaban et al. 2016), defined by

    $$ v_{ij} = \left({\mathop {\min}\limits_{k} f_{{v_{ij}^{k}}} \left(0 \right),\frac{1}{K}\mathop \sum \limits_{k = 1}^{K} f_{{v_{ij}^{k}}} \left(1 \right),\frac{1}{K}\mathop \sum \limits_{k = 1}^{K} g_{{v_{ij}^{k}}} \left(1 \right),\mathop {\max}\limits_{\text{k}} g_{{v_{ij}^{k}}} \left(0 \right)} \right), $$
  • AGG4 - modified geometric mean (Ding 2011; Chang et al. 2009; Hatami-Marbini and Kangi 2017), defined by

    $$ v_{ij} = \left({\mathop {\min}\limits_{k} f_{{v_{ij}^{k}}} \left(0 \right),\left({\mathop \prod \limits_{k = 1}^{K} f_{{v_{ij}^{k}}} \left(1 \right)} \right)^{{\frac{1}{K}}},\left({\mathop \prod \limits_{k = 1}^{K} {\text{g}}_{{v_{ij}^{k}}} \left(1 \right)} \right)^{{\frac{1}{K}}},\mathop {\max}\limits_{\text{k}} g_{{v_{ij}^{k}}} \left(0 \right)} \right), $$
  • AGG5 - weighted mean (Kacprzak 2019), defined by

    $$ v_{ij} = \mathop \sum \limits_{k = 1}^{K} \lambda_{k} v_{ij}^{k} = \left({\mathop \sum \limits_{k = 1}^{K} \lambda_{k} f_{{v_{ij}^{k}}} \left(0 \right),\mathop \sum \limits_{k = 1}^{K} \lambda_{k} f_{{v_{ij}^{k}}} \left(1 \right),\mathop \sum \limits_{k = 1}^{K} \lambda_{k} g_{{v_{ij}^{k}}} \left(1 \right),\mathop \sum \limits_{k = 1}^{K} \lambda_{k} g_{{v_{ij}^{k}}} \left(0 \right)} \right), $$

    where \( \lambda_{k} \) is the weight of the kth DM.
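
A compact sketch of these five operators, acting on the stack of weighted normalized matrices, may look as follows (Python/NumPy; the function names, the DM-weight vector and the placeholder data are ours):

```python
import numpy as np

def agg1_arithmetic(V):                      # AGG1, componentwise arithmetic mean
    return V.mean(axis=0)

def agg2_geometric(V):                       # AGG2, componentwise geometric mean
    return np.prod(V, axis=0) ** (1.0 / V.shape[0])

def agg3_modified_arithmetic(V):             # AGG3: min f(0), mean f(1), mean g(1), max g(0)
    avg = V.mean(axis=0)
    return np.stack([V[..., 0].min(axis=0), avg[..., 1],
                     avg[..., 2], V[..., 3].max(axis=0)], axis=-1)

def agg4_modified_geometric(V):              # AGG4: min f(0), geometric means, max g(0)
    geo = agg2_geometric(V)
    return np.stack([V[..., 0].min(axis=0), geo[..., 1],
                     geo[..., 2], V[..., 3].max(axis=0)], axis=-1)

def agg5_weighted(V, lam):                   # AGG5, weighted mean with DM weights lam
    lam = np.asarray(lam).reshape(-1, 1, 1, 1)
    return (lam * V).sum(axis=0)

# Placeholder stack of K = 3 weighted normalized matrices (4 alternatives, 5 criteria).
V = np.random.rand(3, 4, 5, 4)
print(agg1_arithmetic(V).shape, agg5_weighted(V, [0.5, 0.3, 0.2]).shape)   # (4, 5, 4) (4, 5, 4)
```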

Table 9 shows the distance of each alternative \( A_{i} \) from the positive ideal solution \( d_{i}^{+} \) and the negative ideal solution \( d_{i}^{-} \), as well as the relative closeness coefficients \( RC_{i} \) and the ranking \( R \) of the alternatives, using the proposed approach and the different aggregation methods. The last column, denoted by \( J \), contains the normalized (summing up to 1) values of the relative closeness coefficients of the alternatives to the ideal solution, which makes it easier to see the differences between the final scores of the alternatives. Next, Table 10 and Fig. 13 show the rankings of the alternatives. Let us note that the proposed approach and the aggregation methods using the arithmetic mean and the geometric mean give the same ranking of the alternatives

$$ A_{1} \prec A_{4} \prec A_{2} \prec A_{5} \prec A_{3}. $$
Table 9 The comparison of the results using the proposed approach and different aggregation methods
Table 10 The comparison of the rankings of alternatives based on the relative closeness coefficients using the proposed method and different aggregation methods
Fig. 13 The comparison of the rankings of alternatives using the proposed approach and different aggregation methods based on: a \( RC_{i} \), b \( J \)

Moreover, taking into account column \( J \) in Table 9 and Fig. 13b, we can notice that the methods PA, AGG1 and AGG2 give a similar final score for each alternative. The aggregation methods AGG3 and AGG4 also give the same ranking of the alternatives

$$ A_{1} \prec A_{5} \prec A_{3} \prec A_{4} \prec A_{2} $$

but different from that given by the methods considered earlier. These methods swap the order of alternatives \( A_{4} \) and \( A_{5} \) and of alternatives \( A_{2} \) and \( A_{3} \). The aggregation method AGG5 gives the following ranking of alternatives

$$ A_{1} \prec A_{4} \prec A_{2} \prec A_{3} \prec A_{5} $$

and swaps the order of alternatives \( A_{3} \) and \( A_{5} \) as compared to the methods PA, AGG1 and AGG2. This means that the final ranking order of the alternatives and the choice of the best one (in our example, the successful candidate) depend on the method used. Taking into account column \( J \) in Table 9 and Fig. 13b, we can notice that the aggregation methods using the modified arithmetic mean and the modified geometric mean result in a fairly high final score of alternatives \( A_{2} \) and \( A_{4} \) and a fairly low final score of the remaining alternatives, in comparison with the other methods analysed (which result in more diverse final scores).

Moreover, using the same data (from the example presented in Sect. 5, Table 2), we will compare the application of the CFNs model and of the real numbers (RNs) model with the application of the OFNs model in the proposed approach. For this purpose, the negative orientation of the OFNs in the decision matrices (Table 2) will be changed to positive orientation. For instance, the linguistic variables “less than \( i \)” of the form \( L\left(i \right) = \left({i,i,\frac{2i - 1}{2},i - 1} \right) \) will be replaced by fuzzy numbers of the form \( \left({i - 1,\frac{2i - 1}{2},i,i} \right) \). In this situation, the use of the OFNs model is no different from the application of the CFNs model, while the RNs model is obtained by applying a defuzzification method to the decision matrices (Table 2). We use the well-known centre of gravity (CoG) method, defined by

$$ \phi_{CoG} \left(A \right) = \frac{{\mathop \int \nolimits_{0}^{1} \frac{{f_{A} \left(y \right) + g_{A} \left(y \right)}}{2} \cdot \left| {f_{A} \left(y \right) - g_{A} \left(y \right)} \right|dy}}{{\mathop \int \nolimits_{0}^{1} \left| {f_{A} \left(y \right) - g_{A} \left(y \right)} \right|dy}} $$

which for the trapezoidal OFN \( A = \left({f_{A} \left(0 \right),f_{A} \left(1 \right),g_{A} \left(1 \right),g_{A} \left(0 \right)} \right) \) has the form

$$ \phi_{CoG} \left(A \right) = \frac{{\left[{f_{A} \left(0 \right)} \right]^{2} + \left[{f_{A} \left(1 \right)} \right]^{2} + f_{A} \left(0 \right)f_{A} \left(1 \right) - \left[{g_{A} \left(1 \right)} \right]^{2} - \left[{g_{A} \left(0 \right)} \right]^{2} - g_{A} \left(1 \right)g_{A} \left(0 \right)}}{{3\left({f_{A} \left(0 \right) + f_{A} \left(1 \right) - g_{A} \left(1 \right) - g_{A} \left(0 \right)} \right)}}. $$

The weakness of the CoG method for OFNs is that it is not sensitive to the orientation of OFNs. For this reason, we will also use two simple methods that are sensitive to the orientation of OFNs (a code sketch of all three defuzzification operators follows the list):

  • first of maximum (FOM) defined by

    $$ \phi_{FOM} \left(A \right) = f_{A} \left(1 \right) $$
  • last of maximum (LOM) defined by

    $$ \phi_{LOM} \left(A \right) = g_{A} \left(1 \right). $$
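
The three defuzzification operators can be sketched as follows (Python; the guard for the degenerate crisp case is our addition, the rest follows the formulas above):

```python
def cog(a):
    """Centre of gravity of a trapezoidal OFN a = (f(0), f(1), g(1), g(0)) using the
    closed form above; the fall-back to f(1) for a degenerate (crisp) OFN is ours."""
    f0, f1, g1, g0 = a
    den = 3 * (f0 + f1 - g1 - g0)
    if den == 0:                              # e.g. a real number embedded as (r, r, r, r)
        return f1
    return (f0**2 + f1**2 + f0*f1 - g1**2 - g0**2 - g1*g0) / den

def fom(a):                                   # first of maximum, phi_FOM(A) = f_A(1)
    return a[1]

def lom(a):                                   # last of maximum, phi_LOM(A) = g_A(1)
    return a[2]

M5, L5, L6 = (5, 5, 5.5, 6), (5, 5, 4.5, 4), (6, 6, 5.5, 5)    # M(5), L(5), L(6)
print(cog(M5), cog(tuple(reversed(M5))))        # equal values: CoG ignores the orientation
print(fom(M5) == fom(L5), lom(M5) == lom(L6))   # True True, cf. the identities below
```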

The real numbers obtained using these defuzzification methods will be written as OFNs as follows: \( A = \left({\phi_{df} \left(A \right),\phi_{df} \left(A \right),\phi_{df} \left(A \right),\phi_{df} \left(A \right)} \right) \), where \( df \in \left\{{CoG,FOM,LOM} \right\} \). The results obtained using the proposed approach are presented in Tables 11 and 12 and in Fig. 14. If we compare OFNs and CFNs in the proposed approach (Table 11 and Fig. 14), we can see that when we disregard the orientation of OFNs, for instance by using CFNs, we obtain the same final scores of the alternatives, that is, the same ranking of the alternatives. This is due to the method of calculation of the distance between fuzzy numbers (see formula (6)) and to the determination of PIS and NIS in the numerical example (see Table 5). Moreover, if we use the RNs obtained from OFNs using CoG, we can see that the alternatives \( A_{3} \) and \( A_{5} \) are not distinguished (\( \approx \) means “equivalent”), and the same result is obtained when FOM is applied. Furthermore, LOM results in the same final score for all the alternatives: it does not distinguish between the alternatives at all. The results of LOM and FOM are due to the fact that when OFNs are used, we have

$$ \phi_{FOM} \left({M\left(i \right)} \right) = \phi_{FOM} \left({L\left(i \right)} \right) $$

and

$$ \phi_{LOM} \left({M\left(i \right)} \right) = \phi_{LOM} \left({L\left({i + 1} \right)} \right). $$
Table 11 The comparison of the results using OFNs, CFNs and RNs in the proposed approach
Table 12 The comparison of the rankings of alternatives based on the relative closeness coefficients using OFNs, CFNs and RNs in the proposed approach
Fig. 14 The comparison of the rankings of alternatives using OFNs, CFNs and RNs in the proposed approach based on: a \( RC_{i} \), b \( J \)

Finally, we use sensitivity analysis to compare the proposed method with the TOPSIS method with aggregation of individual matrices into a collective matrix using arithmetic mean (Roszkowska and Kacprzak 2016). The results are presented in Table 13 and Fig. 15.

Table 13 Ranges of criterion weights which do not affect the current solutions for the TOPSIS method with aggregation of individual matrices into a collective matrix using arithmetic mean (Roszkowska and Kacprzak 2016)
Fig. 15 Sensitivity analysis for the criteria with respect to the ranking of alternatives for the TOPSIS method with aggregation of individual matrices into a collective matrix using arithmetic mean (Roszkowska and Kacprzak 2016)

From Tables 7 and 13 we can conclude that the results obtained for criteria \( C_{1} \), \( C_{2} \) and \( C_{3} \) are almost identical. Taking into account the ranges of changes of the weight of criterion \( C_{4} \) we can see that the proposed approach gives a much larger range for both the same ranking (SR) and the same best alternative (SBA) than the method proposed in Roszkowska and Kacprzak (2016). This means that the resulting ranking of alternatives is a more stable solution when the proposed approach is used. For criterion \( C_{5} \) the proposed approach gives a slightly smaller range of weight changes than the method proposed in Roszkowska and Kacprzak (2016), both for the ranking of alternatives (SR) and the selection of the best one (SBA).

7 Conclusion

In this paper an extended TOPSIS method based on OFNs for GDM problems has been presented. Most papers in the literature aggregate the individual decision matrices provided by the DMs into a collective decision matrix which is the starting point for the ranking of alternatives or the selection of the best one, using arithmetic mean, geometric mean or their modifications. In such a situation the MCDM problem for GDM reduces to a classical MCDM problem. Moreover, such an averaged result does not reflect the discrepancies between the individual assessments or the preferences of the DMs. By contrast, in the proposed approach, all individual decision data of the DMs are taken into account in determining the ranking of alternatives and the selection of the best one. This is because the distances of alternatives from the PIS and the NIS are distances between matrices.

The numerical example has shown that the proposed approach based on OFNs—as compared with other methods of aggregation of individual decision matrices of each DM based on OFNs and the omission of orientation of OFNs (which is equivalent to using CFNs) or their transformation into RNs (using selected defuzzification methods)—can give a different final result, both as regards the ranking of alternatives and the selection of the best one.

One of the most important steps in MCDM methods is finding appropriate criteria weights. In future research, methods for determining criteria weights for GDM under the OFNs model will be investigated: subjective weights provided by each decision maker, objective weights obtained from the decision matrices using the entropy method, as well as integrated subjective-objective weights.

Moreover, the proposed approach is specific to OFNs. In the future, we will continue working on other types of data, such as real numbers, interval numbers, hesitant fuzzy sets, interval-valued intuitionistic fuzzy information, and others.