Abstract
In multi-criteria decision making, the importance of decision criteria (decision attributes) plays a crucial role. Ranking is a useful technique for expressing the importance of decision criteria in a decision-maker’s preference system. Since weights are commonly utilized for characterizing the importance of criteria, weight determination and assessment are important tasks in multi-criteria decision making and in voting systems as well. In this study, we concentrate on the connection between the preference order of decision criteria and the decision weights. Here, we present an easy-to-use procedure that can be used to produce a sequence of weights corresponding to a decision-maker’s preference order of decision criteria. The proposed method does not require pairwise comparisons, which is an advantageous property, especially in cases where the number of criteria is large. This method is based on the application of a class of regular increasing monotone quantifiers, which we refer to as the class of weighting generator functions. We will show that the derivatives of these functions can be used for approximating the criteria weights. Also, we will demonstrate that using weighting generator functions, weights can be inverted in a consistent way. We will deduce the generators for arithmetic and geometric weight sequences, and we will present a one-parameter generator function known as the tau function in continuous-valued logic. We will show that using these weighting generator functions, the weight learning task can be turned into a simple, one-parameter optimization problem.
1 Introduction
In most decision making problems, decision-makers should take into account multiple criteria when comparing the available decision alternatives. [64] provided a comprehensive literature review on multi-criteria decision making methods including the Analytic Hierarchy Process (AHP) (see [57]), the Case-Based Reasoning (CBR) (see, e.g., [44]), the Data Envelopment Analysis (DEA) (see, e.g., [3, 61]), the Simple Multi-Attribute Rating Technique (SMART) (see, e.g., [16]), the ELimination and Choice Translating REality (ELECTRE) method family (see [29, 56]), the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE) (see, e.g., [9, 12]), and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) (see, e.g., [36]).
Another widely utilized method in multi-criteria decision making is the Vlsekriterijumska Optimizacija I KOmpromisno Resenje (VIKOR) method (see, e.g., [51, 52]). Based on the generalized Jacquet-Lagreze’s permutation method, [53] developed the QUALItative FLEXible multiple criteria decision making method that is known as QUALIFLEX. It should be added that over the past few decades, numerous studies have been published on various extensions of the above-mentioned methods. From the recent literature, without any claim to completeness, we should mention here [58] on fuzzy-TOPSIS methods, [8, 74] on extensions of PROMETHEE, [2, 28] on extensions of ELECTRE methods. [62] presented a likelihood-based variant of the QUALIFLEX method, while [15] proposed the interval-valued intuitionistic fuzzy QUALIFLEX method using a likelihood-based comparison approach. A novel approach for the ranking and selection of design alternatives based on pairwise comparisons was introduced by [70]. In [42], a novel hybrid interval type-2 fuzzy multidimensional decision-making approach was presented for evaluating Fintech investments in European banks.
In a typical decision making situation, both the value and the importance of each decision criterion (or decision attribute) are incorporated into the final decision. It is quite common that importance is expressed using weight values associated with the criteria in question. Hence, determining the appropriate weights of criteria or attributes is an important topic in multi-criteria decision making; and so this topic has been attracting a lot of attention in recent years (see, e.g., [4, 7, 10, 11, 17, 39, 46,47,48,49, 60, 68, 76,77,78]).
Without claiming completeness, we should mention here some of the well-known weighting methods that can be used to determine weights in multi-criteria decision making problems. These methods can be classified into three main groups: subjective, objective and combined (integrated) weighting approaches (see, e.g., [33, 50, 63]).
In a subjective weight determination method, expert opinions are translated into weights. This is commonly done by asking the decision-maker multiple questions, and so this process may be time consuming. The most commonly utilized subjective weighting methods are the point allocation method (see, e.g., [34]), the direct rating method (see, e.g., [5]), pairwise comparisons, the ranking methods such as the rank sum, the rank exponent and the rank reciprocal methods (see, e.g., [55]), the ratio weighting method, the swing weighting (see, e.g., [69]), the trade-off weighting (see, e.g., [38, 54]), the Delphi method (see, e.g., [59]), the Nominal Group Technique (NGT) (see, e.g., [1, 21]), the Simple Multi-attribute Rating Technique (SMART) and the Simple Multi-Attribute Rating Technique Exploiting Ranks (SMARTER) methods (see, e.g., [27]). We should add that some of the above methods (e.g., the point allocation method) appear in the theory of voting systems (see, e.g., [65]). The main disadvantage of the subjective weight determination methods is that their efficiency decreases as the number of decision criteria increases.
In the objective weighting methods, the criteria weights are determined based on information related to each criterion. These methods apply mathematical models and the decision-makers’ preferences do not play any role in determining the criteria weights [75]. Typical inputs of these methods are the attribute values of the decision alternatives or a decision-matrix that contains the performance of each alternative on each decision criterion [41]. Some well-known objective weighting methods are the mean weight method (see, e.g., [67]), the entropy method (see, e.g., [14]), the standard deviation method (see, e.g., [37]), the CRiteria Importance Through Inter-criteria Correlation (CRITIC) (see, e.g., [23]) and the Simultaneous Evaluation of Criteria and Alternatives (SECA) (see [40]) method.
In the hybrid weighting methods, various subjective and objective weighting methods are combined. These methods can make use of both the decision-makers’ preferences and the data in decision-matrices (see, e.g., [13, 22, 26, 32, 45]).
1.1 Motivations of this study
The scoring-based ranking plays an important role in many areas of our lives. Commonly, scores are transformed into weights, and then the weights are used in multi-criteria decision making problems. In our study, we will focus on how an appropriate weighting system can be derived based on a ranking. Namely, we will study weighting systems that are derived from a preference order of attributes or decision criteria. For example, from the area of sport, in Formula 1, the first ten drivers are awarded the scores given in Table 1.
These scores can be transformed into weights using the normalization \(w_{i} = \frac{s_{i}}{\sum _{j=1}^{10} s_{j}}\). We will demonstrate that an appropriate weight system can also be obtained using the order information (i.e., the ranks), which is based on scores. Later, in Sect. 5.2, we will show how the weights shown in Table 1 can be approximated using a so-called weighting generator function that produces a geometric sequence of weights.
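As a quick sketch of this normalization, the snippet below assumes the familiar 25–18–15–12–10–8–6–4–2–1 Formula 1 points scheme for the first ten places; the exact scores in Table 1 may differ in detail.

```python
# Score-to-weight normalization: w_i = s_i / sum_j s_j.
# The scores are assumed to follow the familiar FIA Formula 1 points
# scheme for places 1-10; Table 1 may differ in detail.
scores = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]
total = sum(scores)
weights = [s / total for s in scores]

print([round(w, 4) for w in weights])
```

The resulting weights are positive, strictly decreasing, and sum to 1, which is exactly the property the weighting generator functions of this study are designed to reproduce from rank information alone.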
In our study, we sought to establish a weight learning procedure that requires simple inputs from the decision-maker and yields criteria weights via an easy-to-use mathematical method. Namely, our heuristic requires two decision-maker-provided inputs: (1) A non-increasing sequence of the attribute preferences (criteria preferences) and (2) A sample of evaluated alternatives. This method utilizes the so-called weighting generator functions to produce weights so that the order of the produced weights corresponds to the preference order of the attributes (criteria) provided by the decision-maker.
Based on its characteristics, the proposed method may be treated as a hybrid weighting method. On the one hand, as subjective inputs, the proposed method utilizes a ranking of attribute preferences (criteria preferences) and a set of evaluations of decision alternatives. On the other hand, a weighting generator function, which can produce weights using the above-mentioned subjective inputs, may be regarded as a mathematical model of the proposed method. Our main research question is how a ranking of attribute preferences (criteria preferences) supplemented with evaluations of decision alternatives can be transformed into appropriate attribute (criteria) weights. This question is justified by the fact that, to the best of our knowledge, no such methods are available that utilize simple, one-parameter weight generator functions to produce appropriate attribute (criteria) weights from the two decision making inputs mentioned above.
The family of Regular Increasing Monotone (RIM) quantifiers is a well-known construction in the theory of Ordered Weighted Averaging (OWA) operators and in the quantifier guided aggregation (see, e.g., [20, 30, 31, 35, 71,72,73]). It is also an acknowledged fact that the RIM quantifiers can be used to generate weights for a weighted aggregation operation. In our study, we will consider a class of the RIM quantifiers, which we will call the class of weighting generator functions.
In this article, we will show how a weighting generator function, which is a strictly increasing, differentiable and strictly concave (convex, respectively) mapping, can be used to produce a monotonic or a strictly monotonic sequence of weights such that this sequence corresponds to a decision-maker’s attribute preference order. It should be added that the concept of a weighting generator function lays the foundations for generating inverse (inverted) weights in a consistent way. Next, we will demonstrate that the derivative of a weighting generator function can be used to produce approximate weights. Also, we will deduce the weighting generator functions for arithmetic and geometric weight sequences, and we will present a special, one-parameter generator function that is known as the tau function in continuous-valued logic. Afterwards, based on our theoretical results concerning the weighting generator functions, we will present the above-described weight learning method and discuss its advantages and limitations.
Here, we will utilize one-parameter weighting generator functions. We will show that the parameter value of such a function can determine a sequence of weights that corresponds to the order of attribute preferences given by a decision-maker. Since the weighting generator function has only one parameter, it can be easily tuned, via its parameter value, to minimize the difference between the computed and the decision-maker-established utility values, which are assigned to each alternative in the input sample. Hence, our procedure may be treated as a hybrid method in the sense that in order to determine attribute weights, it utilizes both the inputs provided by a decision-maker and the weighting generator functions that can be viewed as mathematical models. However, we should mention that, unlike the AHP method and its later developed versions, our procedure does not require pairwise comparisons of the decision criteria, which may adversely affect the efficiency of these methods, especially in cases where the number of decision attributes (decision criteria) is large. In our method, regardless of the number of decision criteria, we need to find the optimal value of one parameter. Since for the proposed weighting generator functions, the domain of this parameter is a bounded interval, e.g., \((0,\frac{1}{2})\), \([-1,0)\) or (0, 1), a nearly optimal parameter value can be determined using a brute force approach. The proposed method consists of two distinct steps, each of which is strongly connected with one of the two inputs. The first input, i.e., a non-increasing sequence of the attribute preferences (criteria preferences), readily determines the parameter domain of the weighting generator function. The parameter domains of the weighting generator functions that produce arithmetic and geometric sequences of weights are \([-1,0)\) and (0, 1), respectively. The parameter domain for the tau weighting generator function is \((0,\frac{1}{2})\).
The second input, i.e., a sample of evaluated alternatives, is utilized for optimizing the parameter value such that the difference between the computed and the decision-maker-established utility values, which are assigned to each alternative in the input sample, is a minimum.
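The two steps outlined above can be sketched as a simple brute-force grid search. The strictly concave one-parameter family \(g_p(x) = x^p\), \(p \in (0,1)\), used below is purely illustrative and is not necessarily one of the generator families derived later; the sample data and helper names are hypothetical.

```python
# Sketch of the one-parameter weight learning loop: grid-search the
# generator parameter over its bounded domain so that the weighted
# arithmetic mean of each sample alternative best matches the utility
# value the decision-maker assigned to it. g_p(x) = x**p, p in (0, 1),
# is an illustrative strictly concave generator (an assumption here).

def weights_from_generator(p, n):
    """Weights w_i = g(i/n) - g((i-1)/n) induced by g(x) = x**p."""
    return [(i / n) ** p - ((i - 1) / n) ** p for i in range(1, n + 1)]

def fit_parameter(X, u, grid_size=1000):
    """Brute-force search for the parameter p minimizing the squared
    error between computed and target utility values."""
    n = len(X[0])
    best_p, best_err = None, float("inf")
    for step in range(1, grid_size):
        p = step / grid_size                      # grid over (0, 1)
        w = weights_from_generator(p, n)
        err = sum((sum(wi * xi for wi, xi in zip(w, row)) - ui) ** 2
                  for row, ui in zip(X, u))
        if err < best_err:
            best_p, best_err = p, err
    return best_p

# Toy sample: 4 alternatives on 3 criteria; target utilities generated
# from a "true" parameter p = 0.5, which the search should recover.
X = [[0.2, 0.8, 0.5], [0.9, 0.1, 0.4], [0.6, 0.7, 0.3], [0.1, 0.5, 0.9]]
w_true = weights_from_generator(0.5, 3)
u = [sum(wi * xi for wi, xi in zip(w_true, row)) for row in X]
print(round(fit_parameter(X, u), 3))
```

Because the parameter domain is a bounded interval, the grid search above is a faithful (if naive) implementation of the "brute force approach" mentioned in the text.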
The main findings of our study can be summarized as follows:
-
A class of RIM quantifiers is treated as a set of weighting generator functions, which will be utilized for generating weighting systems.
-
We present a weight learning procedure that requires two inputs: (1) A non-increasing sequence of the attribute preferences (criteria preferences) and (2) A sample of evaluated alternatives.
-
We utilize one-parameter weighting generator functions and so the weight learning procedure leads to an optimization problem where the optimal value of only one parameter needs to be found.
-
Using a numerical example, we will show how our procedure can be used in practice.
This paper is structured as follows. In Sect. 2, we present the weighting generator functions and a result concerning the inverted weights. Our method for generating weights based on a monotonic or strictly monotonic order of attribute preferences is described in Sect. 3. In Sect. 4, we focus on weight approximations using the derivatives of weighting generator functions. Weighting generator functions for arithmetic and geometric weight sequences as well as the tau weighting generator function are presented in Sect. 5. In Sect. 6, we present our weighting generator function-based weight learning method. Lastly, in Sect. 7, we draw some pertinent conclusions and outline our plans for future research.
2 Weighting generator functions and inverse weights
In this section, we will introduce the so-called weighting generator functions, which can be viewed as a class of the well-known regular increasing monotone quantifiers. Next, we will show how a weighting generator function can be used to produce inverse weights.
2.1 Weighting generator functions
A function \(Q:[0,1]\rightarrow [0,1]\) is a RIM quantifier if \(Q(0)=0\), \(Q(1)=1\) and for any \(x,y \in [0,1]\), \(x>y\) implies \(Q(x) \ge Q(y)\) (see, e.g., [20]). The requirements for weighting generator functions are more strict, i.e., these functions are strictly increasing, strictly convex (or concave) and differentiable. Here, the weighting generator functions form a class of RIM quantifiers.
Definition 1
Let \(\mathcal {G}\) be the set of all functions \(g:[0,1] \rightarrow [0,1]\) that are strictly increasing with \(g(0)=0\) and \(g(1)=1\), strictly concave (convex, respectively) and differentiable on (0, 1). We shall say that \(\mathcal {G}\) is the class of weighting generator functions.
Making use of Definition 1, we will interpret the weights induced by a weighting generator function as follows.
Definition 2
Let \(g \in \mathcal {G}\), let \(n \in \mathbb {N}\), \(n \ge 1\), and for \(i=1,2, \ldots , n\), let \(w_i\) be given by
$$\begin{aligned} w_{i} = g\left( \frac{i}{n} \right) - g\left( \frac{i-1}{n} \right) . \end{aligned}$$
Then, we will say that the weights \(w_1, w_2, \ldots , w_n\) are induced by the weighting generator function g.
The following proposition explains why a function \(g \in \mathcal {G}\) may be viewed as a weighting generator function. With this proposition we can demonstrate that the quantities \(w_1, w_2, \ldots , w_n\), induced by a weighting generator function g according to Eq. (1), are in fact weights.
Proposition 1
Let \(g \in \mathcal {G}\), and let \(n \in \mathbb {N}\), \(n \ge 1\). If, for \(i=1,2, \ldots , n\), \(w_i\) is given by Eq. (1), then \(w_i\) has the following properties:
-
(a)
\(w_i > 0\)
-
(b)
if g is strictly concave, then \(w_1> w_2> \cdots > w_n\) and if g is strictly convex, then \(w_1< w_2< \cdots < w_n\).
-
(c)
\(\sum _{i=1}^{n} w_i = 1\).
Proof
Since \(g \in \mathcal {G}\), g is a strictly increasing function, and by taking into account Eq. (1) and the strict concavity (convexity, respectively) of g, we immediately see that properties (a) and (b) hold. Next, taking into account the fact that \(g(0)=0\) and \(g(1)=1\), the sum telescopes:
$$\begin{aligned} \sum _{i=1}^{n} w_i = \sum _{i=1}^{n} \left( g\left( \frac{i}{n} \right) - g\left( \frac{i-1}{n} \right) \right) = g(1) - g(0) = 1. \end{aligned}$$
\(\square\)
Remark 1
Note that based on Eq. (1), we have that for any \(j \in \lbrace 1,2, \ldots , n\rbrace\),
$$\begin{aligned} \sum _{i=1}^{j} w_{i} = g\left( \frac{j}{n} \right) . \end{aligned}$$
In Fig. 1, a strictly concave weighting generator function and the weights induced by this function have been plotted.
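For readers who prefer code, the weights of Definition 2 can be computed directly; the strictly concave generator \(g(x) = \sqrt{x}\) below is the one used later in Example 1.

```python
import math

# Weights induced by a weighting generator function (Definition 2):
# w_i = g(i/n) - g((i-1)/n). For a strictly concave g, such as
# g(x) = sqrt(x), the weights are positive, strictly decreasing and
# sum to 1, since the sum telescopes to g(1) - g(0) = 1 (Proposition 1).
def induced_weights(g, n):
    return [g(i / n) - g((i - 1) / n) for i in range(1, n + 1)]

w = induced_weights(math.sqrt, 5)
print([round(x, 4) for x in w])
```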
2.2 Inverse weights induced by a weighting generator function
Taking into account Definition 1, Proposition 1 and Definition 2, we can state the following theorem.
Theorem 1
Let \(g \in \mathcal {G}\) be a weighting generator function, let \(n \in \mathbb {N}\), \(n \ge 1\) and let the weights \(w^{(g)}_1, w^{(g)}_2, \ldots , w^{(g)}_n\) be induced by g. Furthermore, let the function \(f:[0,1] \rightarrow [0,1]\) be given by
$$\begin{aligned} f(x) = 1 - g(1-x) \end{aligned}$$
for any \(x \in [0,1]\). Then, the function f is a weighting generator function as well, and the following properties hold for the weights \(w^{(f)}_1, w^{(f)}_2, \ldots , w^{(f)}_n\) induced by f:
-
(a)
\(w^{(f)}_{i} = w^{(g)}_{n-i+1}\), for \(i=1,2, \ldots , n\).
-
(b)
If g is strictly concave, then f is strictly convex and
$$\begin{aligned} w^{(g)}_1> w^{(g)}_2> \cdots > w^{(g)}_n \quad \text {and} \quad w^{(f)}_1< w^{(f)}_2< \cdots < w^{(f)}_n. \end{aligned}$$ -
(c)
If g is strictly convex, then f is strictly concave and
$$\begin{aligned} w^{(g)}_1< w^{(g)}_2< \cdots < w^{(g)}_n \quad \text {and} \quad w^{(f)}_1> w^{(f)}_2> \cdots > w^{(f)}_n. \end{aligned}$$ -
(d)
\(\sum _{i=1}^{n} w^{(f)}_{i} = 1\).
Proof
Since \(g \in \mathcal {G}\), based on Definition 1, g is strictly increasing with \(g(0)=0\) and \(g(1)=1\), it is differentiable on (0, 1) and it is either strictly concave or strictly convex. Therefore, noting Eq. (3), we immediately see that f is strictly increasing with \(f(0)=0\) and \(f(1)=1\), f is differentiable, and if g is strictly concave (convex, respectively), then f is strictly convex (concave, respectively). Hence, f satisfies the criteria for a weighting generator function given in Definition 1. Next, exploiting the results of Proposition 1, we immediately see that properties (b) and (c) hold.
Based on Eqs. (1) and (3), for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), we can write
$$\begin{aligned} w^{(f)}_{i} = f\left( \frac{i}{n}\right) - f\left( \frac{i-1}{n}\right) = g\left( \frac{n-i+1}{n}\right) - g\left( \frac{n-i}{n}\right) = w^{(g)}_{n-i+1}. \end{aligned}$$
This means that property (a) holds. Since \(w^{(g)}_1, w^{(g)}_2, \ldots , w^{(g)}_n\) are induced by the weighting generator function g, we have \(\sum _{i=1}^{n} w^{(g)}_{i}=1\). Now, making use of property (a), we find that
$$\begin{aligned} \sum _{i=1}^{n} w^{(f)}_{i} = \sum _{i=1}^{n} w^{(g)}_{n-i+1} = \sum _{i=1}^{n} w^{(g)}_{i} = 1. \end{aligned}$$
That is, property (d) holds as well. \(\square\)
Remark 2
It should be stressed that the concept of weighting generator function along with Theorem 1 lay the foundations for generating inverse (inverted) weights in a consistent way. That is, if g is a weighting generator function, then \(f(x) = 1- g(1-x)\) is a weighting generator function as well and the weights induced by f can be viewed as the inverted weights of those induced by g.
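A small sketch illustrating Theorem 1 and Remark 2: inverting the generator reverses the induced weights.

```python
import math

# Inverse weights (Theorem 1): if g is a weighting generator function,
# then f(x) = 1 - g(1 - x) is one as well, and the weights induced by f
# are exactly those induced by g in reverse order.
def induced_weights(g, n):
    return [g(i / n) - g((i - 1) / n) for i in range(1, n + 1)]

g = math.sqrt                       # strictly concave
f = lambda x: 1 - g(1 - x)          # its strictly convex inverse

w_g = induced_weights(g, 5)
w_f = induced_weights(f, 5)
print([round(x, 4) for x in w_f])
```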
3 Generating weights based on a monotonic or strictly monotonic order of attribute preferences
Let \(a_1, a_2, \ldots , a_n\) be attributes which characterize each entity (alternative) in a decision making procedure, \(n \in \mathbb {N}\), \(n \ge 1\). Let the variables \(x_1, x_2, \ldots , x_n\) and the weights \(w_1, w_2, \ldots , w_n\) be the inputs of this decision making procedure. Here, we interpret the value of variable \(x_i\) and the value of weight \(w_i\) as the utility value and the importance value of the ith attribute, respectively, in the preference system of a decision-maker, \(i \in \lbrace 1,2, \ldots , n\rbrace\). Now, using weighting generator functions, we will present a method that can be used to generate weights so that they reflect the decision-maker’s preferences regarding the attributes.
Let \(\textbf{A} = \lbrace a_1, a_2, \ldots , a_n \rbrace\) be the set of attributes and let \(\prec\) and \(\succ\) be two strict order relations on \(\textbf{A}\) such that for any \(a_i, a_j \in \textbf{A}\) and \(i \ne j\),
-
(a)
\(a_j \prec a_i\) if and only if \(a_i\) is more important than \(a_j\)
-
(b)
\(a_j \succ a_i\) if and only if \(a_i\) is less important than \(a_j\).
Also, let \(\equiv\) be an equivalence relation on the set \(\textbf{A}\) such that
-
(c)
\(a_j \equiv a_i\) if and only if \(a_i\) and \(a_j\) are equally important.
Later, we will utilize the weighted arithmetic mean
$$\begin{aligned} \sum _{i=1}^{n} w_{i}\, x_{i} \end{aligned}$$
to aggregate the utility values \(x_{1}, x_{2}, \ldots , x_{n} \in \mathbb {R}\) with respect to the weights \(w_{1}, w_{2}, \ldots , w_{n} \in (0,1)\), where \(\sum _{i=1}^{n} w_i = 1\). Hence, we shall assume that the greater the importance of an attribute is, the greater its weight value will be. That is, we shall assume that for any \(i, j \in \lbrace 1,2, \ldots , n\rbrace\) and \(i \ne j\),
-
(a)
\(w_j < w_i\) if and only if \(a_j \prec a_i\)
-
(b)
\(w_j > w_i\) if and only if \(a_j \succ a_i\)
-
(c)
\(w_j = w_i\) if and only if \(a_j \equiv a_i\).
Remark 3
We should add that a greater level of attribute importance does not necessarily mean a greater value of the corresponding weight, as this depends on the aggregation method. For example, if we aggregate the \(x_{1}, x_{2}, \ldots , x_{n} \in (0,1)\) values with respect to the weights \(w_{1}, w_{2}, \ldots , w_{n}\), where \(\sum _{i=1}^{n} w_i = 1\), using the weighted geometric mean
$$\begin{aligned} \prod _{i=1}^{n} x_{i}^{w_{i}}, \end{aligned}$$
then a greater level of attribute importance results in a lower value of the corresponding weight. This is simply due to the fact that for any \(x \in (0,1)\), \(x^{w}\) is a strictly decreasing function of w, where \(w \in (0,1)\).
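The contrast between the two aggregation operators in Remark 3 can be sketched as follows; the sample values and weights are arbitrary illustrations.

```python
import math

# Remark 3 in code: under the weighted geometric mean, a larger weight
# *lowers* the contribution of a value in (0,1), since x**w is strictly
# decreasing in w for fixed x in (0,1).
def weighted_arithmetic_mean(x, w):
    return sum(wi * xi for wi, xi in zip(w, x))

def weighted_geometric_mean(x, w):
    return math.prod(xi ** wi for xi, wi in zip(x, w))

x = [0.9, 0.5, 0.2]
w = [0.5, 0.3, 0.2]          # weights sum to 1
print(round(weighted_arithmetic_mean(x, w), 4))
print(round(weighted_geometric_mean(x, w), 4))

# x**w is decreasing in w on (0,1): a bigger exponent, a smaller value.
assert 0.2 ** 0.5 < 0.2 ** 0.2
```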
The following theorem tells us how the weighting generator functions can be used to obtain a weight sequence that represents the decision-maker’s order of importance of the decision attributes.
Theorem 2
Let \(n \in \mathbb {N}\), \(n \ge 1\) and let \(a_1, a_2, \ldots , a_n\) be attributes, which characterize the alternatives in a decision making procedure, such that
where \(k \in \mathbb {N}\) is an arbitrary fixed constant with \(1 \le k \le n\), \(n_{0}, n_{1}, \ldots , n_{k}\) are fixed indices that satisfy
\(\pi\) is a permutation on the set \(\lbrace 1,2, \ldots , n \rbrace\), \(\prec\) and \(\equiv\) are a strict order and an equivalence relations on the set \(\lbrace a_1, a_2, \ldots , a_n \rbrace\), respectively. Furthermore, let \(g \in \mathcal {G}\) be a strictly concave weighting generator function and for \(r=1,2, \ldots , k\), let \(w^{*}_{r}\) be given by
$$\begin{aligned} w^{*}_{r} = g\left( \frac{r}{k} \right) - g\left( \frac{r-1}{k} \right) . \end{aligned}$$
If, for every \(i=1,2, \ldots , n\), \(w_{\pi (i)}\) is given by
$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}}, \end{aligned}$$
where \(l \in \lbrace 0,1, \ldots , k-1 \rbrace\) is a uniquely determined index for which
$$\begin{aligned} i \in \lbrace n_{l}+1, n_{l}+2, \ldots , n_{l+1} \rbrace , \end{aligned}$$
then \(w_{\pi (i)} >0\),
and
Proof
Since \(w^{*}_1, w^{*}_2, \ldots , w^{*}_{k}\) are weights induced by a strictly concave weighting generator function g, based on Proposition 1, we immediately get that for any \(r \in \lbrace 1,2, \ldots , k\rbrace\), \(w^{*}_{r}>0\),
$$\begin{aligned} w^{*}_{1}> w^{*}_{2}> \cdots > w^{*}_{k} \end{aligned}$$
and
$$\begin{aligned} \sum _{r=1}^{k} w^{*}_{r} = 1 \end{aligned}$$
holds. Hence, noting Eqs. (8) and (6), we find that for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), \(w_{\pi (i)} >0\). Notice that the denominator of the formula for \(w_{\pi (i)}\) in Eq. (8) is independent of i. Therefore, taking into account Eqs. (12), (8) and the fact that \(w_{\pi (i)}\) has the same value for \(i \in \lbrace n_{l}+1, n_{l}+2, \ldots , n_{l+1} \rbrace\), we find that Eq. (10) holds. Also, we can write
$$\begin{aligned} \sum _{i=1}^{n} w_{\pi (i)} = \frac{\sum _{l=0}^{k-1} \left( n_{l+1} - n_{l} \right) w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}} = 1. \end{aligned}$$
\(\square\)
Note that in Theorem 2, the value of k corresponds to the number of unique weight values in the weight sequence given in Eq. (10). In the two terminal cases, where \(k=1\) or \(k=n\), Theorem 2 gives us the following results.
If \(k=1\), then all the attributes have the same importance value, and so, based on Eq. (10), all the weights should be equal, i.e.,
$$\begin{aligned} w_{\pi (1)} = w_{\pi (2)} = \cdots = w_{\pi (n)} = \frac{1}{n}. \end{aligned}$$
Indeed, if \(k=1\), then \(n_0=0\), \(n_1 = n\), \(w^{*}_1=1\) and \(l=0\), and utilizing Eq. (8), for any \(i \in \lbrace 1, 2, \ldots , n \rbrace\), we get
$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{1}}{n\,w^{*}_{1}} = \frac{1}{n}. \end{aligned}$$
If \(k=n\), then each attribute has a unique importance value, and so, based on Eq. (10),
$$\begin{aligned} w_{\pi (1)}> w_{\pi (2)}> \cdots > w_{\pi (n)} \end{aligned}$$
should hold. Indeed, if \(k=n\), then \(n_0=0\), \(n_1 = 1\), \(\ldots\), \(n_k=n\). Therefore, using Eq. (8), for any \(i \in \lbrace 1, 2, \ldots , n \rbrace\), and noting Eq. (11), we can write
$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{i}}{\sum _{r=1}^{n} w^{*}_{r}} = w^{*}_{i}. \end{aligned}$$
Since \(w^{*}_1> w^{*}_2> \cdots > w^{*}_{k}\), based on Eq. (13), we also have
$$\begin{aligned} w_{\pi (1)}> w_{\pi (2)}> \cdots > w_{\pi (n)}. \end{aligned}$$
Remark 4
If in Theorem 2, we replace the relations \(\prec\) and < in Eqs. (5) and (10) with the relations \(\succ\) and >, respectively, then the theorem remains valid with any strictly convex weighting generator function g.
Making use of weighting generator functions and Theorem 2, the following procedure can be utilized to generate weights that reflect the preference order of attributes.
Procedure 1
[A procedure for generating weights that reflect the preference order of attributes]
-
Input: A non-increasing sequence of the decision-maker’s preferences regarding the attributes \(a_1, a_2, \ldots , a_n\):
$$\begin{aligned} a_{\pi (n_{0}+1)} \equiv a_{\pi (n_{0}+2)} \equiv \cdots \equiv a_{\pi (n_{1})}&\succ a_{\pi (n_{1}+1)} \equiv a_{\pi (n_{1}+2)} \equiv \cdots \equiv a_{\pi (n_{2})} \succ \cdots \\ \cdots&\succ a_{\pi (n_{k-1}+1)} \equiv a_{\pi (n_{k-1}+2)} \equiv \cdots \equiv a_{\pi (n_{k})},\end{aligned}$$where \(n \in \mathbb {N}\), \(n \ge 1\), \(k \in \mathbb {N}\), \(1 \le k \le n\),
$$\begin{aligned} 0 = n_{0} \le n_{1} \le \cdots \le n_{k-1} \le n_{k} = n \end{aligned}$$and \(\pi\) is a permutation on the set \(\lbrace 1,2, \ldots , n \rbrace\).
-
Step 1: Select a strictly concave weighting generator function \(g \in \mathcal {G}\) and for all \(r = 1,2, \ldots , k\), compute \(w^{*}_r\) as
$$\begin{aligned} w^{*}_r = g\left( \frac{r}{k} \right) - g\left( \frac{r-1}{k} \right) . \end{aligned}$$ -
Step 2: For every \(i=1,2, \ldots , n\), compute \(w_{\pi (i)}\) as
$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}}, \end{aligned}$$where \(l \in \lbrace 0,1, \ldots , k-1 \rbrace\) is a uniquely determined index for which
$$\begin{aligned} i \in \lbrace n_{l}+1, n_{l}+2, \ldots , n_{l+1} \rbrace . \end{aligned}$$ -
Output: An ordered sequence of weights preserving the preference order of the attributes \(a_1, a_2, \ldots , a_n\):
$$\begin{aligned} w_{\pi (n_{0}+1)} = w_{\pi (n_{0}+2)} = \cdots = w_{\pi (n_{1})}&> w_{\pi (n_{1}+1)} = w_{\pi (n_{1}+2)} = \cdots = w_{\pi (n_{2})}> \cdots \\ \cdots&> w_{\pi (n_{k-1}+1)} = w_{\pi (n_{k-1}+2)} = \cdots = w_{\pi (n_{k})},\end{aligned}$$where for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), \(w_{\pi (i)}>0\), and
$$\begin{aligned} \sum _{i=1}^{n} w_{\pi (i)} = \sum _{l=0}^{k-1} \sum _{j=n_{l}+1}^{n_{l+1}} w_{\pi (j)} = 1. \end{aligned}$$
Remark 5
According to Remark 4, if in Procedure 1, we replace the relations \(\succ\) and > with the relations \(\prec\) and <, respectively, then the procedure remains valid if in Step 1 a strictly convex weighting generator function g is selected.
The following example shows how Procedure 1 can be applied in practice.
Example 1
Suppose that \(a_1, a_2, \ldots , a_5\) are five attributes that characterize the alternatives in a decision making procedure, and the decision-maker’s preferences concerning the attributes are given by the following ordered sequence:
$$\begin{aligned} a_{3} \equiv a_{5} \succ a_{2} \equiv a_{1} \succ a_{4}. \end{aligned}$$
Our intention is to assign a weight value \(w_{\pi (i)}\) to each attribute \(a_{\pi (i)}\), where \(i=1,2, \ldots , 5\) and the permutation \(\pi\) is given by \((1,2,3,4,5)\mapsto (3,5,2,1,4)\), such that the order of the weights is identical to the preference order of the corresponding attributes. That is,
$$\begin{aligned} w_{\pi (1)} = w_{\pi (2)}> w_{\pi (3)} = w_{\pi (4)} > w_{\pi (5)}, \end{aligned}$$
or equivalently,
$$\begin{aligned} w_{3} = w_{5}> w_{2} = w_{1} > w_{4}. \end{aligned}$$
We see that there are three different weight values in the ordered sequence in Eq. (15). Therefore, first we need to generate \(k=3\) unique weights, \(w^{*}_{1}\), \(w^{*}_{2}\) and \(w^{*}_{3}\), such that
$$\begin{aligned} w^{*}_{1}> w^{*}_{2} > w^{*}_{3}. \end{aligned}$$
Let \(w^{*}_{1}\), \(w^{*}_{2}\) and \(w^{*}_{3}\) be induced by the weighting generator function g, which is given by \(g(x) = \sqrt{x}\), \(x \in [0,1]\). Since g is a strictly concave function, based on Proposition 1, we readily get that the weights \(w^{*}_{1}\), \(w^{*}_{2}\) and \(w^{*}_{3}\) satisfy the inequality relation stated in Eq. (16). Using Eq. (1), the values of \(w^{*}_{1}\), \(w^{*}_{2}\) and \(w^{*}_{3}\) are
$$\begin{aligned} w^{*}_{1} = \sqrt{\tfrac{1}{3}} \approx 0.5774, \quad w^{*}_{2} = \sqrt{\tfrac{2}{3}} - \sqrt{\tfrac{1}{3}} \approx 0.2391, \quad w^{*}_{3} = 1 - \sqrt{\tfrac{2}{3}} \approx 0.1835. \end{aligned}$$
Using the notations of Procedure 1, the ordered sequence of weights given in Eq. (15) can be written as
where \(k = 3\), \(n_{0} = 0\), \(n_{1} = 2\), \(n_{2} = 4\) and \(n_{3} = 5\).
Since \(w_{\pi (i)}\) can be computed as
$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}}, \end{aligned}$$
where \(l \in \lbrace 0,1, \ldots , k-1 \rbrace\) is a uniquely determined index for which \(i \in \lbrace n_{l}+1, n_{l}+2, \ldots , n_{l+1} \rbrace\),
we have the following:
-
1.
If \(i=1\) or \(i=2\), then \(l=0\) and
$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{1}}{2 w^{*}_{1} + 2 w^{*}_{2} + w^{*}_{3}} = \frac{0.5774}{1.8165} = 0.3178. \end{aligned}$$ -
2.
If \(i=3\) or \(i=4\), then \(l=1\) and
$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{2}}{2 w^{*}_{1} + 2 w^{*}_{2} + w^{*}_{3}} = \frac{0.2391}{1.8165} = 0.1317. \end{aligned}$$ -
3.
If \(i=5\), then \(l=2\) and
$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{3}}{2 w^{*}_{1} + 2 w^{*}_{2} + w^{*}_{3}} = \frac{0.1835}{1.8165} = 0.1010. \end{aligned}$$
This means that we have
$$\begin{aligned} w_{3} = w_{5} = 0.3178> w_{2} = w_{1} = 0.1317 > w_{4} = 0.1010, \end{aligned}$$
which satisfies the criterion in Eq. (14), and \(\sum _{i=1}^{5} w_i = 1\).
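Example 1 can be reproduced with a short sketch of Procedure 1; the helper below takes the sizes of the importance classes (here 2, 2 and 1) and a strictly concave generator, and returns the \(k\) unique weight values.

```python
import math

# Sketch of Procedure 1 for Example 1: k = 3 importance classes of
# sizes 2, 2 and 1 over five attributes, with the strictly concave
# generator g(x) = sqrt(x).
def unique_class_weights(class_sizes, g):
    k = len(class_sizes)
    # Step 1: k weights induced by g (Definition 2 with n := k).
    w_star = [g(r / k) - g((r - 1) / k) for r in range(1, k + 1)]
    # Step 2: normalize so the weights over all attributes sum to 1,
    # counting each unique value once per member of its class.
    denom = sum(m * ws for m, ws in zip(class_sizes, w_star))
    return [ws / denom for ws in w_star]

u = unique_class_weights([2, 2, 1], math.sqrt)
print([round(x, 4) for x in u])   # the three unique weight values
```

The three values match those computed in Example 1, and counting multiplicities the full weight vector sums to 1.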
4 Approximating the weights using the derivatives of weighting generator functions
Here, we will present a way of effectively approximating the weights that are induced by a weighting generator function.
Let \(g \in \mathcal {G}\) be a weighting generator function. Based on Proposition 1, the ith weight \(w_{i}\) induced by g is
$$\begin{aligned} w_{i} = g\left( \frac{i}{n}\right) - g\left( \frac{i-1}{n}\right) , \end{aligned}$$
where \(i \in \lbrace 1,2, \ldots , n \rbrace\) and \(n \in \mathbb {N}\), \(n \ge 1\). The gradient of the line segment that connects the points \(\left( \frac{i-1}{n}, g\left( \frac{i-1}{n}\right) \right)\) and \(\left( \frac{i}{n}, g\left( \frac{i}{n}\right) \right)\) is
$$\begin{aligned} \frac{g\left( \frac{i}{n}\right) - g\left( \frac{i-1}{n}\right) }{\frac{i}{n} - \frac{i-1}{n}} = n \left( g\left( \frac{i}{n}\right) - g\left( \frac{i-1}{n}\right) \right) . \end{aligned}$$
Since g is a differentiable function, if n is sufficiently large, then the gradient in Eq. (18) can be approximated quite well as
$$\begin{aligned} n \left( g\left( \frac{i}{n}\right) - g\left( \frac{i-1}{n}\right) \right) \approx g'\left( \frac{i}{n}\right) , \end{aligned}$$
where \(g'\) is the first derivative of the weighting generator function g. Hence, the weight \(w_{i}\) in Eq. (17) can be approximated quite well by \(w'_{i}\), where
$$\begin{aligned} w'_{i} = \frac{1}{n}\, g'\left( \frac{i}{n}\right) . \end{aligned}$$
As g is a strictly increasing function, for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), \(w'_{i}>0\) holds. Furthermore, if n is sufficiently large, then \(\sum _{i=1}^{n} w'_{i} \approx \sum _{i=1}^{n} w_{i} = 1\), and so we have that
$$\begin{aligned} \frac{w'_{i}}{\sum _{j=1}^{n} w'_{j}} \approx w'_{i}. \end{aligned}$$
This means that if n is sufficiently large, then \(w_{i}\) can be approximated quite well by \(\hat{w}_{i}\), where
$$\begin{aligned} \hat{w}_{i} = \frac{w'_{i}}{\sum _{j=1}^{n} w'_{j}}, \end{aligned}$$
and for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), \(\hat{w}_{i}>0\), and \(\sum _{i=1}^{n} \hat{w}_{i} =1\).
Example 2
Let the weights \(w_{1}\), \(w_{2}, \ldots , w_{7}\) be induced by the weighting generator function \(g(x) = x^{0.8}\), where \(x \in [0,1]\), and let the weights \(\hat{w}_{1}\), \(\hat{w}_{2}, \ldots , \hat{w}_{7}\) be computed using Eq. (20). The results of the calculations are summarized in Table 2.
The results in Table 2 demonstrate that, in this particular case with \(n=7\), the \(w'_{i}\) and \(\hat{w}_{i}\) values approximate the weight \(w_{i}\) quite well for \(i=1,2, \ldots , n\).
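A sketch reproducing this comparison for \(g(x) = x^{0.8}\) and \(n=7\); evaluating the derivative at the right endpoint \(i/n\) is an assumption consistent with the approximation described above.

```python
# Derivative-based weight approximation (Sect. 4) for g(x) = x**0.8,
# n = 7: exact weights w_i = g(i/n) - g((i-1)/n), the derivative
# approximation w'_i = g'(i/n) / n (right-endpoint evaluation is
# assumed here), and the normalized approximation w_hat_i.
n, p = 7, 0.8
g = lambda x: x ** p
dg = lambda x: p * x ** (p - 1)        # g'(x)

w = [g(i / n) - g((i - 1) / n) for i in range(1, n + 1)]
w_prime = [dg(i / n) / n for i in range(1, n + 1)]
w_hat = [wp / sum(w_prime) for wp in w_prime]

for row in zip(w, w_prime, w_hat):
    print([round(v, 4) for v in row])
```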
5 Weights induced by various weighting generator functions
In Example 1, we utilized the weighting generator function \(g(x) = \sqrt{x}\), \(x \in [0,1]\), to obtain a desired order of weights. We should mention that the application of any strictly concave weighting generator function would have resulted in the same weight order. Of course, the weights generated by different generator functions are not the same, but the order of the generated weights depends only on whether the generator function is strictly concave or strictly convex.
Now, we will provide a few examples of weighting generator functions which can be used to produce weights that satisfy certain criteria. Then we will present a particular generator function which is known as the tau function in continuous-valued logic.
5.1 Weighting generator functions of arithmetic weight sequences
Suppose that we wish to create the following sequence of weights:
where \(n \ge 1\),
\(w, w_{i} \in (0,1)\), \(i = 1,2, \ldots , n\), \(d \in (-1,1)\), \(d \ne 0\) and
Since the sequence of weights given by Eq. (21) is an arithmetic sequence, we have that for any \(j \in \lbrace 1,2, \ldots , n \rbrace\),
Hence, based on Eqs. (22) and (23), we have
from which we get
Next, based on Remark 1 and Eq. (23), the generator function g that induces the arithmetic sequence of the weights given in Eq. (21) should satisfy the following:
for any \(j \in \lbrace 1,2, \ldots , n \rbrace\). Next, noting Eq. (24), from Eq. (25), we get
Now, by introducing
Eq. (26) can be written as
where \(\alpha \in [-1,1]\) and \(\alpha \ne 0\). Note that these restrictions on the value of parameter \(\alpha\) ensure that g satisfies the criteria for a weighting generator function given in Definition 1. If \(\alpha \in (0,1]\) (respectively, \(\alpha \in [-1,0)\)), then the weight sequence induced by g is strictly increasing (respectively, decreasing). The derivative of the generator function in Eq. (27) is
and so, using Eq. (19),
can be treated as a good approximation of \(w_{i}\) when n is large, \(i=1,2, \ldots , n\).
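As a quick numerical check of this derivation, the sketch below assumes that Eq. (27) has the polynomial form \(g(x) = x + \frac{\alpha }{2}x(x-1)\), which satisfies \(g(0)=0\), \(g(1)=1\) and yields the common difference \(d = \alpha /n^{2}\); the printed equation may differ in notation, but the induced weights should form an arithmetic sequence summing to 1:

```python
import numpy as np

def arithmetic_weights(alpha, n):
    # assumed form of Eq. (27): g(x) = x + (alpha/2) * x * (x - 1)
    g = lambda x: x + 0.5 * alpha * x * (x - 1.0)
    x = np.arange(n + 1) / n
    return np.diff(g(x))            # w_i = g(i/n) - g((i-1)/n)

# alpha in [-1, 0) gives a strictly decreasing arithmetic weight sequence
w = arithmetic_weights(-0.6, 8)
```

Here the consecutive differences \(w_{i+1}-w_{i}\) are all equal to \(\alpha /n^{2} = -0.6/64\), confirming the arithmetic structure.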
5.2 Weighting generator functions of geometric weight sequences
Now, suppose that we wish to create the following geometric sequence of weights:
where \(n \ge 1\),
\(w, w_{i} \in (0,1)\), \(i = 1,2, \ldots , n\), \(r \in \mathbb {R}\), \(r >0\), \(r \ne 1\) and
Because the sequence of weights given by Eq. (29) is a geometric sequence,
holds for any \(j \in \lbrace 1,2, \ldots , n \rbrace\). Therefore, based on Eqs. (30) and (31), we have
from which
follows. Next, based on Remark 1 and Eq. (31), the generator function g that induces the geometric sequence of the weights given in Eq. (29) should meet the requirement
for any \(j \in \lbrace 1,2, \ldots , n \rbrace\). Hence, making use of Eq. (32), from Eq. (33), we get
Now, by introducing
Eq. (34) can be written as
where \(\alpha \in \mathbb {R}\), \(\alpha > 0\) and \(\alpha \ne 1\). Note that if \(\alpha >1\) (respectively, \(\alpha <1\)), then the weight sequence induced by g is strictly increasing (respectively, decreasing). The derivative of the generator function in Eq. (35) is
Hence, on the one hand, with Eqs. (20) and (36), we have
On the other hand, using Eqs. (1) and (35), we find that
This means that \(\hat{w}_{i}\) is not just a good approximation of the \(w_{i}\) weight, but \(\hat{w}_{i}\) and \(w_{i}\) coincide for any \(i \in \lbrace 1,2, \ldots , n \rbrace\).
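This coincidence can be verified numerically. The sketch below assumes that Eq. (35) has the form \(g(x) = (\alpha ^{x} - 1)/(\alpha - 1)\) with \(\alpha = r^{n}\), which is consistent with the boundary conditions \(g(0)=0\) and \(g(1)=1\); under this assumption, the exact weights \(w_{i}\) and the normalized derivative weights \(\hat{w}_{i}\) agree to machine precision:

```python
import numpy as np

def geometric_weights(alpha, n):
    # assumed form of Eq. (35): g(x) = (alpha**x - 1) / (alpha - 1)
    g = lambda x: (alpha ** x - 1.0) / (alpha - 1.0)
    x = np.arange(n + 1) / n
    w = np.diff(g(x))                      # exact weights via Eq. (1)
    # derivative g'(x) = alpha**x * ln(alpha) / (alpha - 1), assumed Eq. (36)
    g_prime = alpha ** x[1:] * np.log(alpha) / (alpha - 1.0)
    w_hat = g_prime / g_prime.sum()        # normalized derivative weights, Eq. (20)
    return w, w_hat

w, w_hat = geometric_weights(0.08, 10)     # alpha < 1: decreasing geometric weights
```

The common ratio of the generated sequence is \(r = \alpha ^{1/n}\), so the normalization in Eq. (20) cancels the constant factor exactly rather than only approximately.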
Example 3
The weights in Table 1, which represent the normalized scores that the first ten drivers are awarded in a Formula 1 race, can be approximated quite well using the weighting generator function given in Eq. (35). Namely, with a simple sequential (grid) search over \(\alpha = 0.02, 0.04, \ldots , 0.98\) (the values \(\alpha = 0\) and \(\alpha = 1\) are excluded by the constraints on \(\alpha\)), we find that
is nearly minimal for \(\alpha =0.08\), where \(w_i\) is the ith weight in Table 1, i.e., \(w_{i} = \frac{s_{i}}{\sum _{j=1}^{10} s_{j}}\), and
For \(\alpha =0.08\), we have \(E(\alpha )=0.001\). The results of the approximation are summarized in Table 3.
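The grid search of this example can be reproduced with a few lines of code. The sketch below makes two assumptions, since the table and the equation are not reproduced here: that Eq. (35) has the exponential form \(g(x) = (\alpha ^{x} - 1)/(\alpha - 1)\), and that Table 1 lists the standard top-ten Formula 1 scores 25, 18, 15, 12, 10, 8, 6, 4, 2, 1:

```python
import numpy as np

# assumed Table 1 scores of the first ten drivers
s = np.array([25, 18, 15, 12, 10, 8, 6, 4, 2, 1], dtype=float)
w = s / s.sum()                        # normalized target weights
n = len(s)

def generated_weights(alpha):
    # assumed form of Eq. (35): g(x) = (alpha**x - 1) / (alpha - 1)
    x = np.arange(n + 1) / n
    return np.diff((alpha ** x - 1.0) / (alpha - 1.0))

grid = np.arange(0.02, 1.0, 0.02)      # simple sequential search, alpha != 1
errors = [np.sum((w - generated_weights(a)) ** 2) for a in grid]
best = grid[int(np.argmin(errors))]    # expected near 0.08 under the assumed scores
```

Under these assumptions the search indeed selects \(\alpha = 0.08\), with a squared error of about 0.001, matching the value reported above.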
5.3 Weights induced by the tau function
The tau function \(\tau _{A}:[0,1] \rightarrow [0,1]\), which was first introduced by Dombi as a unary modifier operator in continuous-valued logic (see [24]), is defined as follows:
where \(A \in (0,\infty )\). For more details on the tau function, its more general form and applications, see [25]. If we set the requirement that \(\tau _{A}(\nu ) = 1-\nu\), where \(\nu \in (0,1)\), then the tau function in Eq. (37) can be written as
It can be verified that the tau function given in Eq. (38) has the following properties:

- (a) \(\tau _{\nu }(0) = 0\) and \(\tau _{\nu }(1)=1\),
- (b) \(\tau _{\nu }(\nu ) = 1-\nu\),
- (c) \(\tau _{\nu }(x)\) is strictly increasing and differentiable for any \(x \in [0,1]\),
- (d)
  - If \(\nu \in \left( 0,\frac{1}{2} \right)\), then \(\tau _{\nu }(x)\) is strictly concave on [0, 1] and for any \(x \in (0,1)\), \(\tau _{\nu }(x) > x\).
  - If \(\nu = \frac{1}{2}\), then for any \(x \in [0,1]\), \(\tau _{\nu }(x) = x\).
  - If \(\nu \in \left( \frac{1}{2},1 \right)\), then \(\tau _{\nu }(x)\) is strictly convex on [0, 1] and for any \(x \in (0,1)\), \(\tau _{\nu }(x) < x\).
Therefore, for any arbitrarily fixed \(\nu \in (0,1)\), \(\nu \ne \frac{1}{2}\), the tau function \(\tau _{\nu }\) meets the requirements for a weighting generator function. Taking into account Eq. (1), the ith weight induced by \(\tau _{\nu }\) is
where \(n \ge 1\), \(i=1,2, \ldots , n\). Figure 2 shows sample plots of the tau function.
It should be noted that the convexity of the tau function depends solely on the value of its parameter \(\nu\). Hence, based on Proposition 1, we have that if \(\nu \in \left( 0,\frac{1}{2} \right)\), then \(\tau _{\nu }\) generates a strictly decreasing sequence of weights, and if \(\nu \in \left( \frac{1}{2},1 \right)\), then \(\tau _{\nu }\) generates a strictly increasing sequence of weights.
It can be shown that with the requirement \(g(\nu )=1-\nu\), where \(\nu \in (0,1)\), the tau function \(\tau _{\nu }\) is the solution of the differential equation
Therefore, noting Eq. (19), we find that
can be treated as a good approximation of \(w_{i}\) when n is large, \(i=1,2, \ldots , n\).
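A minimal sketch of the tau-generated weights follows. Since the body of Eq. (38) is not reproduced here, the code assumes the rational form \(\tau _{\nu }(x) = (1-\nu )^{2} x / \big( (1-\nu )^{2} x + \nu ^{2} (1-x) \big)\), which satisfies properties (a)-(d) above; Eq. (39) then gives the weights as consecutive differences:

```python
import numpy as np

def tau(x, nu):
    # assumed rational form of Eq. (38); tau(0)=0, tau(1)=1, tau(nu)=1-nu
    a, b = (1.0 - nu) ** 2, nu ** 2
    return a * x / (a * x + b * (1.0 - x))

def tau_weights(nu, n):
    x = np.arange(n + 1) / n
    return np.diff(tau(x, nu))     # w_i = tau(i/n) - tau((i-1)/n), Eq. (39)

w_dec = tau_weights(0.3, 6)        # nu < 1/2: strictly decreasing weight sequence
w_inc = tau_weights(0.7, 6)        # nu > 1/2: strictly increasing weight sequence
```

The two calls illustrate the remark above: the sign of \(\nu - \frac{1}{2}\) alone decides whether the induced weight sequence decreases or increases.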
6 Learning weights using weighting generator functions
Suppose that \(a_1, a_2, \ldots , a_n\) are attributes which characterize each alternative in a decision making procedure, \(n \in \mathbb {N}\), \(n \ge 1\). Let \(\textbf{x}_{j}= (x_{j,1}, x_{j,2}, \ldots , x_{j,n}) \in {[0,1]^{n}}\) be the vector that contains the normalized utility values of the attributes \(a_1, a_2, \ldots , a_n\) for the jth alternative, respectively, i.e., \(x_{j,i} { \in [0,1]}\) is the utility value of the ith attribute for the jth alternative, where \(i=1,2, \ldots , n\), \(j=1,2, \ldots , m\), \(n,m \in \mathbb {N}\), \(n,m \ge 1\). Furthermore, let \(w_i\) denote the weight that represents the importance value of the ith attribute in the preference system of a decision-maker, \(i \in \lbrace 1,2, \ldots , n\rbrace\). Now, using weighting generator functions, we will present a heuristic that can be utilized to determine the \(w_{1}, w_{2}, \ldots , w_{n}\) weights when the aggregate utility of the jth alternative is computed as the weighted arithmetic mean of the utility values \(x_{j,1}, x_{j,2}, \ldots , x_{j,n}\) with respect to the weights \(w_{1}, w_{2}, \ldots , w_{n}\). This heuristic uses two decision-maker-provided inputs: (1) A non-increasing sequence of the attribute preferences and (2) A sample of evaluated alternatives. We will present this heuristic for the case where the weighting generator function is the tau function given in Eq. (38). Making use of the above-mentioned approach, our weight learning heuristic can be adapted to other weighting functions as well.
Procedure 2
(A heuristic for learning attribute weights)
-
Inputs (1) A non-increasing sequence of the decision-maker’s preferences regarding the attributes \(a_1, a_2, \ldots , a_n\):
$$\begin{aligned} a_{\pi (n_{0}+1)} \equiv a_{\pi (n_{0}+2)} \equiv \cdots \equiv a_{\pi (n_{1})}&\succ a_{\pi (n_{1}+1)} \equiv a_{\pi (n_{1}+2)} \equiv \cdots \equiv a_{\pi (n_{2})} \succ \cdots \\ \cdots&\succ a_{\pi (n_{k-1}+1)} \equiv a_{\pi (n_{k-1}+2)} \equiv \cdots \equiv a_{\pi (n_{k})},\end{aligned}$$where \(n \in \mathbb {N}\), \(n \ge 1\), \(k \in \mathbb {N}\), \(1 \le k \le n\),
$$\begin{aligned} 0 = n_{0} \le n_{1} \le \cdots \le n_{k-1} \le n_{k} = n \end{aligned}$$and \(\pi\) is a permutation on the set \(\lbrace 1,2, \ldots , n \rbrace\). Here, the value of k corresponds to the number of the unknown unique weight values in a weight sequence that reflects the above sequence of attribute preferences. (2) A sample of \((\textbf{x}_{j}, v_{j})\) pairs, where \(\textbf{x}_{j}= (x_{j,1}, x_{j,2}, \ldots , x_{j,n}) \in {[0,1]^{n}}\) is the utility vector for the jth alternative and \(v_{j} \in {[0,1]}\) is a utility value (score) assigned to this alternative by the decision-maker.
-
Value searching Find the value of \(\nu \in \left( 0, \frac{1}{2} \right)\) for which
$$\begin{aligned} \sum _{j=1}^{m} \left( v_{j} - \sum _{i=1}^{n} w_{\pi (i)}^{(\nu )} x_{j,\pi (i)} \right) ^{2} \rightarrow \min , \end{aligned}$$(40)where for every \(i=1,2, \ldots , n\),
$$\begin{aligned} w_{\pi (i)}^{(\nu )} = \frac{\tau _{\nu }\left( \frac{l+1}{n} \right) - \tau _{\nu }\left( \frac{l}{n} \right) }{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) \left( \tau _{\nu }\left( \frac{r+1}{n} \right) - \tau _{\nu }\left( \frac{r}{n} \right) \right) }, \end{aligned}$$(41)\(l \in \lbrace 0,1, \ldots , k-1 \rbrace\) is a uniquely determined index for which
$$\begin{aligned} i \in \lbrace n_{l}+1, n_{l}+2, \ldots , n_{l+1} \rbrace . \end{aligned}$$ -
Output: An ordered sequence of weights preserving the preference order of the attributes \(a_1, a_2, \ldots , a_n\):
$$\begin{aligned} w_{\pi (n_{0}+1)}^{(\nu )} = w_{\pi (n_{0}+2)}^{(\nu )} = \cdots = w_{\pi (n_{1})}^{(\nu )}&> w_{\pi (n_{1}+1)}^{(\nu )} = w_{\pi (n_{1}+2)}^{(\nu )} = \cdots = w_{\pi (n_{2})}^{(\nu )}> \cdots \\ \cdots&> w_{\pi (n_{k-1}+1)}^{(\nu )} = w_{\pi (n_{k-1}+2)}^{(\nu )} = \cdots = w_{\pi (n_{k})}^{(\nu )},\end{aligned}$$where for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), \(w_{\pi (i)}^{(\nu )}>0\), and
$$\begin{aligned} \sum _{i=1}^{n} w_{\pi (i)}^{(\nu )} = \sum _{l=0}^{k-1} \sum _{j=n_{l}+1}^{n_{l+1}} w_{\pi (j)}^{(\nu )} = 1. \end{aligned}$$
In Procedure 2, \(v_{j}\) and \(\sum _{i=1}^{n} w_{\pi (i)}^{(\nu )} x_{j,\pi (i)}\) are the perceived and computed aggregate utility values for the jth alternative, respectively. That is, in this procedure, we seek to find the value of parameter \(\nu\) for which the sum of squared differences between the perceived and computed utility values for a given sample of alternatives is a minimum. A nearly optimal value of parameter \(\nu\) can be found using numerical methods. Since the learned weights should form a non-increasing sequence, the weighting generator tau function needs to be strictly concave, i.e., \(\nu \in \left( 0, \frac{1}{2} \right)\). This is why the optimization in Eq. (40) needs to be solved under the constraint \(0< \nu <\frac{1}{2}\). Hence, a nearly optimal value of \(\nu\) can be determined using a brute force approach. Also, the generalized reduced gradient (GRG) method (see, e.g., [6, 43]), the GLOBAL optimization method introduced by Csendes (see [18, 19]) or a particle swarm optimization method (see, e.g., [66]) can be used to find a nearly optimal value of \(\nu\).
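To illustrate Procedure 2 end to end, the sketch below learns \(\nu\) by brute force on synthetic data. The tau function's rational form, the sample data and the search grid are illustrative assumptions, not the paper's Example 4: group weights follow Eq. (41) as printed, and the sum of squared errors in Eq. (40) is minimized over a grid in \(\left( 0, \frac{1}{2} \right)\).

```python
import numpy as np

def tau(x, nu):
    # assumed rational form of the tau function in Eq. (38)
    a, b = (1.0 - nu) ** 2, nu ** 2
    return a * x / (a * x + b * (1.0 - x))

def group_weights(nu, n, bounds):
    # bounds = [n_0, n_1, ..., n_k]; implements Eq. (41) as printed
    k = len(bounds) - 1
    uniq = np.diff(tau(np.arange(k + 1) / n, nu))   # tau((l+1)/n) - tau(l/n)
    sizes = np.diff(np.asarray(bounds))             # group sizes n_{r+1} - n_r
    uniq = uniq / np.dot(sizes, uniq)               # normalize: weights sum to 1
    return np.repeat(uniq, sizes)                   # one weight per attribute (pi-order)

def learn_nu(X, v, bounds, grid=np.linspace(0.01, 0.49, 97)):
    # brute-force minimization of the squared error in Eq. (40)
    errors = [np.sum((v - X @ group_weights(nu, X.shape[1], bounds)) ** 2)
              for nu in grid]
    return grid[int(np.argmin(errors))]

# hypothetical demo: recover a known nu from noise-free evaluations
rng = np.random.default_rng(1)
bounds = [0, 1, 2, 4]                    # k = 3 preference groups over n = 4 attributes
w_true = group_weights(0.30, 4, bounds)
X = rng.random((25, 4))                  # utility vectors, columns already in pi-order
v = X @ w_true                           # the decision-maker's scores
nu_hat = learn_nu(X, v, bounds)
```

With noise-free scores the grid search recovers the generating parameter exactly; with real, noisy evaluations it returns the least-squares compromise, as in the procedure above.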
We should add that Procedure 2 can be adapted to the cases where the weights have an arithmetic or a geometric sequence. In such cases, instead of the tau function, we need to utilize the weighting generator functions of the arithmetic or geometric weight sequences given by Eqs. (27) and (35), respectively. This also means that we need to find the optimal value of the \(\alpha\) parameter of the corresponding weighting generator function, where \(\alpha \in [-1,0)\) and \(\alpha \in (0,1)\) for the arithmetic and geometric cases, respectively.
Example 4
Here, we will show how Procedure 2 can be applied in practice. Suppose that cars are characterized by the following four attributes: (1) Engine power, (2) Max. speed, (3) Fuel consumption and (4) Trunk capacity. Table 4 shows the unit of measure and the range for each of these attributes.
Let \(a_1\), \(a_2\), \(a_3\) and \(a_4\) denote the attributes (1) Engine power, (2) Max. speed, (3) Fuel consumption and (4) Trunk capacity, respectively, and let us assume that a decision-maker’s preferences regarding the car attributes are given by the following relations:
Then, the preference relations given in Eq. (42) can be written as
where the permutation \(\pi\) is given by \((1,2,3,4) \mapsto (3,2,4,1)\), i.e., we have
Now, suppose that five cars with the attributes given in Table 5 are offered to the decision-maker.
In Table 5, \(x^{*}_{j,i}\) denotes the value of the ith attribute (i.e., \(a_{i}\)) for the jth alternative (car) in the units of measure given in Table 4, while \(x_{j,i} \in [0,1]\) is the normalized value of \(x^{*}_{j,i}\), obtained using min-max normalization with the attribute ranges given in Table 4, where \(i=1,2, 3, 4\) and \(j=1,2, 3, 4, 5\). That is,
where \(j=1,2, 3, 4, 5\). Let us assume that the decision-maker evaluates each alternative and assigns the \(v_{j} \in (0,1)\) utility value (score) to the jth alternative as shown in Table 5. Our aim is to determine the \(w_{\pi (1)}, w_{\pi (2)}, w_{\pi (3)}, w_{\pi (4)}\) weights such that
and
Here, we will utilize the \(\tau _{\nu }\) weighting generator function given in Eq. (38) to determine the values of the \(w_{\pi (1)}, w_{\pi (2)}, w_{\pi (3)}, w_{\pi (4)}\) weights. Following Procedure 2 and noting the requirement that \(w_{\pi (1)}> w_{\pi (2)}> w_{\pi (3)} > w_{\pi (4)}\), we have that, based on Eq. (41),
where \(i=1,2,3,4\). Applying the generalized reduced gradient method with the constraint \(\nu \in \left( 0, \frac{1}{2} \right)\), we found that a nearly optimal solution of the minimization problem given by Eqs. (43) and (44) is \(\nu =0.2425\). For this value of parameter \(\nu\), the objective function in Eq. (44) has the value 0.0055. The results of the computations are summarized in Table 6.
The aggregate utility \(U_{j}\) for the jth alternative computed using the weighted arithmetic mean
and the decision-maker’s \(v_{j}\) utility value assigned to the jth alternative (i.e., the aggregate utility perceived by the decision-maker) are listed for \(j=1,2,3,4,5\) in Table 7.
We see in Table 7 that the corresponding computed (\(U_{j}\)) and perceived (\(v_{j}\)) utility values are quite close to each other. This means that based on a non-increasing sequence of the attribute preferences and on a sample of evaluated alternatives, both provided by the decision-maker, by applying Procedure 2, we were able to determine the weights that describe the decision-maker’s preferences quite well.
6.1 Limitations
Let \(\textbf{W}\) denote the set of all weight vectors \(\textbf{w}=(w_{1}, w_{2}, \ldots , w_{n}) \in (0,1)^{n}\) that meet the criterion
where \(n \in \mathbb {N}\), \(n \ge 1\), \(k \in \mathbb {N}\), \(1 \le k \le n\),
with an arbitrarily fixed permutation \(\pi :\lbrace 1,2, \ldots , n \rbrace \rightarrow \lbrace 1,2, \ldots , n \rbrace\). Next, let \(\textbf{W}^{(\nu )}\) be the set of all weight vectors that can be produced by Procedure 2. Since the components of a weight vector produced by this procedure are not independent, i.e., all these components depend on the corresponding weighting generator function, we have that \(\textbf{W}^{(\nu )} \subset \textbf{W}\). This means that the heuristic described in Procedure 2 cannot produce all the theoretically possible weight vectors that may reflect preferences of decision makers.
We should also mention that another shortcoming of Procedure 2 is related to the fact that the weighting generator function has a predefined mathematical form (i.e., it is a tau function or it may be a predefined concave parametric function), and so this function cannot produce arbitrarily distributed weight values.
7 Conclusions and future research plans
In our study, we utilized weighting generator functions, a class of regular increasing monotone quantifiers, to produce weights so that the order of the produced weights corresponds to the order of the attributes (criteria) provided by a decision-maker. We showed that weighting generator functions lay the foundations for generating inverse (inverted) weights in a consistent way. We demonstrated that the derivatives of these functions are suitable for producing approximate weights. Next, we presented the weighting generator functions for arithmetic and geometric weight sequences. Here, we showed that the tau function, which is a unary operator in continuous-valued logic, is also a one-parameter weighting generator function with useful properties. Besides these theoretical results, we presented a practical weight learning procedure that is mathematically simple and can easily be applied in practice. This heuristic utilizes two decision-maker-provided inputs: (1) a non-increasing sequence of the attribute (criteria) preferences and (2) a sample of evaluated alternatives. The output of this procedure is a sequence of weights that corresponds to the preference order of the attributes (criteria) provided by the decision-maker. We should add that our procedure does not require pairwise comparisons of the decision criteria, which is an advantageous property, especially when the number of decision attributes (decision criteria) is large. In the proposed method, we utilize one-parameter weighting generator functions, and so the weight learning procedure leads to an optimization problem in which the optimal value of only one parameter needs to be found. For the proposed weighting generator functions, the parameter domain is a bounded interval, e.g., \((0,\frac{1}{2})\), \([-1,0)\) or (0, 1), and a nearly optimal parameter value can be determined even using a brute force algorithm.
This means that the proposed method can be readily implemented in practice. Microsoft Excel implementations of the four examples presented in the course of this study are available at https://github.com/dombijozsef/-Learning-the-weights-using-attribute-order-information-for-multi-criteria-decision-making-tasks.
As part of our future research, we would like to find new weighting generator functions that have useful properties in various practical applications. For example, we would like to identify the weighting generator function of a weight sequence that satisfies a linear recursion. It would also be interesting to know how the weights can be determined for a given lexicographic order of decision alternatives.
References
Abdullah, M.M.B., Islam, R.: Nominal group technique and its applications in managing quality in higher education. Pak. J. Comm. Social Sci. (PJCSS) 5(1), 81–99 (2011)
Akram, M., Ilyas, F., Garg, H.: ELECTRE-II method for group decision-making in Pythagorean fuzzy environment. Appl. Intell. 51(12), 8701–8719 (2021). https://doi.org/10.1007/s10489-021-02200-0
Allen, R., Thanassoulis, E.: Improving envelopment in data envelopment analysis. Eur. J. Oper. Res. 154(2), 363–379 (2004). https://doi.org/10.1016/S0377-2217(03)00175-9
de Almeida, A.T., de Almeida, J.A., Costa, A.P.C.S., et al.: A new method for elicitation of criteria weights in additive models: flexible and interactive tradeoff. Eur. J. Oper. Res. 250(1), 179–191 (2016). https://doi.org/10.1016/j.ejor.2015.08.058
Arbel, A.: Approximate articulation of preference and priority derivation. Eur. J. Oper. Res. 43(3), 317–326 (1989). https://doi.org/10.1016/0377-2217(89)90231-2
Arora, J.: Introduction to optimum design. Elsevier, Amsterdam (2004)
Arya, V., Kumar, S.: Multi-criteria decision making problem for evaluating ERP system using entropy weighting approach and q-rung orthopair fuzzy TODIM. Granular Comput. 6(4), 977–989 (2021). https://doi.org/10.1007/s41066-020-00242-2
Banamar, I., Smet, Y.D.: An extension of PROMETHEE II to temporal evaluations. Int. J. Multicriter Dec. Mak. 7(3–4), 298–325 (2018). https://doi.org/10.1504/IJMCDM.2018.094371
Behzadian, M., Kazemzadeh, R., Albadvi, A., et al.: PROMETHEE: a comprehensive literature review on methodologies and applications. Eur. J. Oper. Res. 200(1), 198–215 (2010). https://doi.org/10.1016/j.ejor.2009.01.021
Beliakov, G., Gómez, D., James, S., et al.: Approaches to learning strictly-stable weights for data with missing values. Fuzzy Sets Syst. 325, 97–113 (2017). https://doi.org/10.1016/j.fss.2017.02.003
Boix-Cots, D., Pardo-Bosch, F., Alvarez, P.P.: A systematic review on multi-criteria group decision-making methods based on weights: analysis and classification scheme. Inf. Fusion (2023). https://doi.org/10.1016/j.inffus.2023.03.004
Brans, J., Vincke, P., Mareschal, B.: How to select and how to rank projects: the PROMETHEE method. Eur. J. Oper. Res. 24(2), 228–238 (1986). https://doi.org/10.1016/0377-2217(86)90044-5
Chen, C.H.: A novel multi-criteria decision-making model for building material supplier selection based on entropy-AHP weighted TOPSIS. Entropy 22(2), 259 (2020). https://doi.org/10.3390/e22020259
Chen, F., Wang, J., Deng, Y.: Road safety risk evaluation by means of improved entropy TOPSIS-RSR. Saf. Sci. 79, 39–54 (2015). https://doi.org/10.1016/j.ssci.2015.05.006
Chen, T.Y.: Interval-valued intuitionistic fuzzy qualiflex method with a likelihood-based comparison approach for multiple criteria decision analysis. Inf. Sci. 261, 149–169 (2014). https://doi.org/10.1016/j.ins.2013.08.054
Chen, Y., Okudan, G.E., Riley, D.R.: Decision support for construction method selection in concrete buildings: prefabrication adoption and optimization. Autom. Constr. 19(6), 665–675 (2010). https://doi.org/10.1016/j.autcon.2010.02.011
Corrente, S., Figueira, J.R., Greco, S., et al.: A robust ranking method extending ELECTRE III to hierarchy of interacting criteria, imprecise weights and stochastic analysis. Omega 73, 1–17 (2017). https://doi.org/10.1016/j.omega.2016.11.008
Csendes, T.: Nonlinear parameter estimation by global optimization-efficiency and reliability. Acta Cybernet. 8(4), 361–370 (1988)
Csendes, T., Pál, L., Sendin, J.O.H., et al.: The GLOBAL optimization method revisited. Optimz. Lett. 2(4), 445 (2008). https://doi.org/10.1007/s11590-007-0072-3
Csiszár, O.: Ordered weighted averaging operators: a short review. IEEE Syst. Man Cybern. Magaz. 7(2), 4–12 (2021). https://doi.org/10.1109/MSMC.2020.3036378
Delbecq, A.L., Van de Ven, A.H.: A group process model for problem identification and program planning. J. Appl. Behav. Sci. 7(4), 466–492 (1971). https://doi.org/10.1177/00218863710070040
Delice, E.K., Can, G.F.: A new approach for ergonomic risk assessment integrating KEMIRA, best-worst and MCDM methods. Soft. Comput. 24, 15093–15110 (2020). https://doi.org/10.1007/s00500-020-05143-9
Diakoulaki, D., Mavrotas, G., Papayannakis, L.: Determining objective weights in multiple criteria problems: the critic method. Comput. Oper. Res. 22(7), 763–770 (1995). https://doi.org/10.1016/0305-0548(94)00059-H
Dombi, J.: On a certain type of unary operators. In: 2012 IEEE International Conference on Fuzzy Systems, pp 1–7, (2012) https://doi.org/10.1109/FUZZ-IEEE.2012.6251349
Dombi, J., Jónás, T.: A unified approach to four important classes of unary operators. Int. J. Approx. Reason. 133, 80–94 (2021). https://doi.org/10.1016/j.ijar.2021.03.007
Du, Y., Zheng, Y., Wu, G., et al.: Decision-making method of heavy-duty machine tool remanufacturing based on AHP-entropy weight and extension theory. J. Clean. Prod. 252, 119607 (2020). https://doi.org/10.1016/j.jclepro.2019.119607
Edwards, W., Barron, F.H.: Smarts and smarter: improved simple methods for multiattribute utility measurement. Organ. Behav. Hum. Decis. Process. 60(3), 306–325 (1994). https://doi.org/10.1006/obhd.1994.1087
Fei, L., Xia, J., Feng, Y., et al.: An ELECTRE-based multiple criteria decision making method for supplier selection using Dempster-Shafer theory. IEEE Access 7, 84701–84716 (2019). https://doi.org/10.1109/ACCESS.2019.2924945
Figueira, J.R., Mousseau, V., Roy, B.: ELECTRE methods. In: Greco, S., Ehrgott, M., Figueira, J.R. (eds.) Multiple Criteria Decision Analysis: State of the Art Surveys, pp. 155–185. Springer, New York (2016). https://doi.org/10.1007/978-1-4939-3094-4_5
Fodor, J., Marichal, J.L., Roubens, M.: Characterization of the ordered weighted averaging operators. IEEE Trans. Fuzzy Syst. 3(2), 236–240 (1995). https://doi.org/10.1109/91.388176
Fodor, J.C., Roubens, M.: Fuzzy preference modelling and multicriteria decision support, vol. 14. Springer Science & Business Media, UK (1994)
Freeman, J., Chen, T.: Green supplier selection using an AHP-Entropy-TOPSIS framework. Supply Chain Manag. An Int. J. 20(3), 327–340 (2015). https://doi.org/10.1108/SCM-04-2014-0142
Ginevičius, R., Podvezko, V.: Objective and subjective approaches to determining the criterion weight in multicriteria models. Transport Telecommun. 6(1), 133–137 (2005)
Golaszewski, R., Sheth, K., Helledy, G., et al.: Methods for initial allocation of points in flight prioritization. In: 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, p 5542 (2012), https://doi.org/10.2514/6.2012-5542
Herrera, F., Herrera-Viedma, E., Verdegay, J.L.: Direct approach processes in group decision making using linguistic OWA operators. Fuzzy Sets Syst. 79(2), 175–190 (1996). https://doi.org/10.1016/0165-0114(95)00162-X
Hwang, C.L., Yoon, K.: Methods for multiple attribute decision making. In: Multiple attribute decision making. Springer, pp 58–191 (1981), https://doi.org/10.1007/978-3-642-48318-9_3
Jahan, A., Mustapha, F., Sapuan, S., et al.: A framework for weighting of criteria in ranking stage of material selection process. Int. J. Adv. Manufact. Technol. 58, 411–420 (2012). https://doi.org/10.1007/s00170-011-3366-7
Keeney, R.L., Raiffa, H.: Decisions with multiple objectives: preferences and value trade-offs. Cambridge University Press, Cambridge (1993)
Keshavarz-Ghorabaee, M., Amiri, M., Zavadskas, E.K., et al.: An extended step-wise weight assessment ratio analysis with symmetric interval type-2 fuzzy sets for determining the subjective weights of criteria in multi-criteria decision-making problems. Symmetry 10(4), 91 (2018). https://doi.org/10.3390/sym10040091
Keshavarz-Ghorabaee, M., Amiri, M., Zavadskas, E.K., et al.: Simultaneous evaluation of criteria and alternatives (SECA) for multi-criteria decision-making. Informatica 29(2), 265–280 (2018)
Keshavarz-Ghorabaee, M., Amiri, M., Zavadskas, E.K., et al.: Determination of objective weights using a new method based on the removal effects of criteria (MEREC). Symmetry 13(4), 525 (2021). https://doi.org/10.3390/sym13040525
Kou, G., Olgu Akdeniz, Ö., Dinçer, H., et al.: Fintech investments in european banks: a hybrid IT2 fuzzy multidimensional decision-making approach. Financial Innovation 7(1), 39 (2021). https://doi.org/10.1186/s40854-021-00256-y
Lasdon, L.S., Waren, A.D., Jain, A., et al.: Design and testing of a generalized reduced gradient code for nonlinear programming. ACM Trans. Math. Softw. (TOMS) 4(1), 34–50 (1978)
Li, H., Sun, J.: Ranking-order case-based reasoning for financial distress prediction. Knowl.-Based Syst. 21(8), 868–878 (2008). https://doi.org/10.1016/j.knosys.2008.03.047
Liu, S., Hu, Y., Zhang, X., et al.: Blockchain service provider selection based on an integrated BWM-Entropy-TOPSIS method under an intuitionistic fuzzy environment. IEEE Access 8, 104148–104164 (2020). https://doi.org/10.1109/ACCESS.2020.2999367
Liu, X., Wan, S.P.: A method to calculate the ranges of criteria weights in ELECTRE I and II methods. Comput. Ind. Eng. 137, 106067 (2019). https://doi.org/10.1016/j.cie.2019.106067
Lolli, F., Balugani, E., Ishizaka, A., et al.: On the elicitation of criteria weights in PROMETHEE-based ranking methods for a mobile application. Expert Syst. Appl. 120, 217–227 (2019). https://doi.org/10.1016/j.eswa.2018.11.030
Mahmoody Vanolya, N., Jelokhani-Niaraki, M.: The use of subjective-objective weights in GIS-based multi-criteria decision analysis for flood hazard assessment: A case study in Mazandaran, Iran. GeoJournal 86, 379–398 (2021). https://doi.org/10.1007/s10708-019-10075-5
Mi, X., Liao, H.: An integrated approach to multiple criteria decision making based on the average solution and normalized weights of criteria deduced by the hesitant fuzzy best worst method. Comput. Ind. Eng. 133, 83–94 (2019). https://doi.org/10.1016/j.cie.2019.05.004
Odu, G.: Weighting methods for multi-criteria decision making technique. J. Appl. Sci. Environ. Manag. 23(8), 1449–1457 (2019). https://doi.org/10.4314/jasem.v23i8.7
Opricovic, S., Tzeng, G.H.: Compromise solution by MCDM methods: a comparative analysis of VIKOR and TOPSIS. Eur. J. Oper. Res. 156(2), 445–455 (2004). https://doi.org/10.1016/S0377-2217(03)00020-1
Opricovic, S., Tzeng, G.H.: Extended VIKOR method in comparison with outranking methods. Eur. J. Oper. Res. 178(2), 514–529 (2007). https://doi.org/10.1016/j.ejor.2006.01.020
Paelinck, J.: Qualitative multiple criteria analysis, environmental protection and multiregional development. Papers Reg. Sci. Assoc. 36(1), 59–74 (1976). https://doi.org/10.1007/BF01944375
Pöyhönen, M., Hämäläinen, R.P.: On the convergence of multiattribute weighting methods. Eur. J. Oper. Res. 129(3), 569–585 (2001). https://doi.org/10.1016/S0377-2217(99)00467-1
Roszkowska, E.: Rank ordering criteria weighting methods-a comparative overview. Optim. Studia Ekonomiczne 65(5), 14–33 (2013). https://doi.org/10.15290/ose.2013.05.65.02
Roy, B.: Classement et choix en présence de points de vue multiples (la méthode ELECTRE). Revue française d’informatique et de recherche opérationnelle 2(8), 57–75 (1968)
Saaty, T.L.: Fundamentals of decision making and priority theory with the analytic hierarchy process. RWS publications, UK (1994)
Salih, M.M., Zaidan, B., Zaidan, A., et al.: Survey on fuzzy TOPSIS state-of-the-art between 2007 and 2017. Comput. Oper. Res. 104, 207–227 (2019). https://doi.org/10.1016/j.cor.2018.12.019
Sekhar, C., Patwardhan, M., Vyas, V.: A delphi-AHP-TOPSIS based framework for the prioritization of intellectual capital indicators: A SMEs perspective. Procedia. Soc. Behav. Sci. 189, 275–284 (2015). https://doi.org/10.1016/j.sbspro.2015.03.223
Sun, C., Li, S., Deng, Y.: Determining weights in multi-criteria decision making based on negation of probability distribution under uncertain environment. Mathematics 8(2), 191 (2020). https://doi.org/10.3390/math8020191
Thanassoulis, E., Kortelainen, M., Allen, R.: Improving envelopment in data envelopment analysis under variable returns to scale. Eur. J. Oper. Res. 218(1), 175–185 (2012). https://doi.org/10.1016/j.ejor.2011.10.009
Tian, Z.P., Wang, J., Wang, J.Q., et al.: A likelihood-based qualitative flexible approach with hesitant fuzzy linguistic information. Cogn. Comput. 8(4), 670–683 (2016). https://doi.org/10.1007/s12559-016-9400-1
Ustinovičius, L.: Determining integrated weights of attributes. Statyba 7(4), 321–326 (2001). https://doi.org/10.1080/13921525.2001.10531743
Velasquez, M., Hester, P.T.: An analysis of multi-criteria decision making methods. Int. J. Oper. Res. 10(2), 56–66 (2013)
Wallis, W.D., et al.: The mathematics of elections and voting. Springer, Germany (2014)
Wang, D., Tan, D., Liu, L.: Particle swarm optimization algorithm: an overview. Soft. Comput. 22, 387–408 (2018). https://doi.org/10.1007/s00500-016-2474-6
Wang, J.J., Jing, Y.Y., Zhang, C.F., et al.: Review on multi-criteria decision analysis aid in sustainable energy decision-making. Renew. Sustain. Energy Rev. 13(9), 2263–2278 (2009). https://doi.org/10.1016/j.rser.2009.06.021
Wieckowski, J., Kizielewicz, B., Paradowski, B., et al.: Application of multi-criteria decision analysis to identify global and local importance weights of decision criteria. Int. J. Inf. Technol. Decision Mak. 22(06), 1867–1892 (2023). https://doi.org/10.1142/S0219622022500948
von Winterfeldt, D., Edwards, W.: Decision Analysis and Behavioral Research. Cambridge University Press, Cambridge (1986)
Xiao, H., Zhang, Y., Kou, G., et al.: Ranking and selection for pairwise comparison. Naval Res. Logist. (NRL) (2023). https://doi.org/10.1002/nav.22093
Yager, R.R.: On ordered weighted averaging aggregation operators in multicriteria decisionmaking. IEEE Trans. Syst. Man Cybern. 18(1), 183–190 (1988). https://doi.org/10.1109/21.87068
Yager, R.R.: Families of OWA operators. Fuzzy Sets Syst. 59(2), 125–148 (1993). https://doi.org/10.1016/0165-0114(93)90194-M
Yager, R.R.: Fuzzy screening systems. In: Fuzzy Logic, pp. 251–261. Springer (1993). https://doi.org/10.1007/978-94-011-2014-2_24
Yatsalo, B., Korobov, A., Öztayşi, B., et al.: Fuzzy extensions of PROMETHEE: models of different complexity with different ranking methods and their comparison. Fuzzy Sets Syst. 422, 1–26 (2021). https://doi.org/10.1016/j.fss.2020.08.015
Zardari, N.H., Ahmed, K., Shirazi, S.M., et al.: Weighting methods and their effects on multi-criteria decision making model outcomes in water resources management. Springer, Germany (2015)
Zargini, B., et al.: Multicriteria decision making problems using variable weights of criteria based on alternative preferences. Am. Sci. Res. J. Eng. Technol. Sci. (ASRJETS) 74(1), 1–14 (2020)
Žižović, M., Pamucar, D.: New model for determining criteria weights: level based weight assessment (LBWA) model. Decis. Mak. Appl. Manag. Eng. 2(2), 126–137 (2019). https://doi.org/10.31181/dmame1902102z
Zolfani, S.H., Yazdani, M., Zavadskas, E.K.: An extended stepwise weight assessment ratio analysis (SWARA) method for improving criteria prioritization process. Soft. Comput. 22(22), 7399–7405 (2018). https://doi.org/10.1007/s00500-018-3092-2
Funding
Open access funding provided by Eötvös Loránd University. This research was funded by the European Union project RRF-2.3.1-21-2022-00004 within the framework of the Artificial Intelligence National Laboratory and project TKP2021-NVA-09, implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NVA funding scheme.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose and no conflicts of interest relevant to the content of this article. Both authors certify that they have no affiliations with, or involvement in, any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Dombi, J., Jónás, T. Learning the weights using attribute order information for multi-criteria decision making tasks. OPSEARCH (2024). https://doi.org/10.1007/s12597-024-00779-9