1 Introduction

In most decision making problems, decision-makers need to take multiple criteria into account when comparing the available decision alternatives. [64] provided a comprehensive literature review on multi-criteria decision making methods, including the Analytic Hierarchy Process (AHP) (see [57]), the Case-Based Reasoning (CBR) (see, e.g., [44]), the Data Envelopment Analysis (DEA) (see, e.g., [3, 61]), the Simple Multi-Attribute Rating Technique (SMART) (see, e.g., [16]), the ELimination and Choice Translating REality (ELECTRE) method family (see [29, 56]), the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE) (see, e.g., [9, 12]), and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) (see, e.g., [36]).

Another widely utilized method in multi-criteria decision making is the Vlsekriterijumska Optimizacija I KOmpromisno Resenje (VIKOR) method (see, e.g., [51, 52]). Based on the generalized permutation method of Jacquet-Lagreze, [53] developed the QUALItative FLEXible multiple criteria decision making method known as QUALIFLEX. It should be added that, over the past few decades, numerous studies have been published on various extensions of the above-mentioned methods. From the recent literature, without any claim to completeness, we should mention here [58] on fuzzy-TOPSIS methods, [8, 74] on extensions of PROMETHEE, and [2, 28] on extensions of the ELECTRE methods. [62] presented a likelihood-based variant of the QUALIFLEX method, while [15] proposed the interval-valued intuitionistic fuzzy QUALIFLEX method using a likelihood-based comparison approach. A novel approach for the ranking and selection of design alternatives based on pairwise comparisons was introduced by [70]. In [42], a novel hybrid interval type-2 fuzzy multidimensional decision-making approach was presented for evaluating Fintech investments in European banks.

In a typical decision making situation, both the value and the importance of each decision criterion (or decision attribute) are incorporated into the final decision. It is quite common that importance is expressed using weight values associated with the criteria in question. Hence, determining the appropriate weights of criteria or attributes is an important topic in multi-criteria decision making, and it has attracted considerable attention in recent years (see, e.g., [4, 7, 10, 11, 17, 39, 46,47,48,49, 60, 68, 76,77,78]).

Without claiming completeness, we should mention here some of the well-known weighting methods that can be used to determine weights in multi-criteria decision making problems. These methods can be classified into three main groups: subjective, objective and combined (integrated) weighting approaches (see, e.g., [33, 50, 63]).

In a subjective weight determination method, expert opinions are translated into weights. This is commonly done by asking the decision-maker multiple questions, and so the process may be time consuming. The most commonly utilized subjective weighting methods are the point allocation method (see, e.g., [34]), the direct rating method (see, e.g., [5]), pairwise comparisons, the ranking methods such as the rank sum, the rank exponent and the rank reciprocal methods (see, e.g., [55]), the ratio weighting method, the swing weighting method (see, e.g., [69]), the trade-off weighting method (see, e.g., [38, 54]), the Delphi method (see, e.g., [59]), the Nominal Group Technique (NGT) (see, e.g., [1, 21]), and the Simple Multi-Attribute Rating Technique (SMART) and the Simple Multi-Attribute Rating Technique Exploiting Ranks (SMARTER) methods (see, e.g., [27]). We should add that some of the above methods (e.g., the point allocation method) also appear in the theory of voting systems (see, e.g., [65]). The main disadvantage of the subjective weight determination methods is that their efficiency decreases as the number of decision criteria increases.

In the objective weighting methods, the criteria weights are determined based on information related to each criterion. These methods apply mathematical models and the decision-makers’ preferences do not play any role in determining the criteria weights [75]. Typical inputs of these methods are the attribute values of the decision alternatives or a decision-matrix that contains the performance of each alternative on each decision criterion [41]. Some well-known objective weighting methods are the mean weight method (see, e.g., [67]), the entropy method (see, e.g., [14]), the standard deviation method (see, e.g., [37]), the CRiteria Importance Through Inter-criteria Correlation (CRITIC) (see, e.g., [23]) and the Simultaneous Evaluation of Criteria and Alternatives (SECA) (see [40]) method.

In the hybrid weighting methods, various subjective and objective weighting methods are combined. These methods can make use of both the decision-makers’ preferences and the data in decision-matrices (see, e.g., [13, 22, 26, 32, 45]).

1.1 Motivations of this study

Scoring-based ranking plays an important role in many areas of our lives. Commonly, scores are transformed into weights, and then the weights are used in multi-criteria decision making problems. In our study, we will focus on how an appropriate weighting system can be derived from a ranking. Namely, we will study weighting systems that are derived from a preference order of attributes or decision criteria. For example, from the area of sport, in Formula 1, the first ten drivers are awarded the scores given in Table 1.

Table 1 Formula 1 scores and the corresponding weights

These scores can be transformed into weights using the normalization \(w_{i} = \frac{s_{i}}{\sum _{j=1}^{10} s_{j}}\). We will demonstrate that an appropriate weight system can also be obtained using the order information (i.e., the ranks), which is based on scores. Later, in Sect. 5.2, we will show how the weights shown in Table 1 can be approximated using a so-called weighting generator function that produces a geometric sequence of weights.
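As a quick illustration, the following Python snippet performs this score-to-weight normalization. The scores used here are assumed to be the standard Formula 1 points for the first ten places; Table 1 is not reproduced here, so the exact values are our assumption.

```python
# Score-to-weight normalization: w_i = s_i / sum_j s_j.
# The scores are assumed to be the standard Formula 1 points
# for the first ten places (cf. Table 1).
scores = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]
total = sum(scores)
weights = [s / total for s in scores]
print([round(w, 4) for w in weights])  # the weights sum to 1
```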

In our study, we sought to establish a weight learning procedure that requires simple inputs from the decision-maker and yields criteria weights via an easy-to-use mathematical method. Namely, our heuristic requires two decision-maker-provided inputs: (1) A non-increasing sequence of the attribute preferences (criteria preferences) and (2) A sample of evaluated alternatives. This method utilizes the so-called weighting generator functions to produce weights so that the order of the produced weights corresponds to the preference order of the attributes (criteria) provided by the decision-maker.

Based on its characteristics, the proposed method may be treated as a hybrid weighting method. On the one hand, as subjective inputs, the proposed method utilizes a ranking of attribute preferences (criteria preferences) and a set of evaluations of decision alternatives. On the other hand, a weighting generator function, which can produce weights using the above-mentioned subjective inputs, may be regarded as a mathematical model of the proposed method. Our main research question is how a ranking of attribute preferences (criteria preferences) supplemented with evaluations of decision alternatives can be transformed into appropriate attribute (criteria) weights. This question is justified by the fact that, to the best of our knowledge, no such methods are available that utilize simple, one-parameter weight generator functions to produce appropriate attribute (criteria) weights from the two decision making inputs mentioned above.

The family of Regular Increasing Monotone (RIM) quantifiers is a well-known construction in the theory of Ordered Weighted Averaging (OWA) operators and in the quantifier guided aggregation (see, e.g., [20, 30, 31, 35, 71,72,73]). It is also an acknowledged fact that the RIM quantifiers can be used to generate weights for a weighted aggregation operation. In our study, we will consider a class of the RIM quantifiers, which we will call the class of weighting generator functions.

In this article, we will show how a weighting generator function, which is a strictly increasing, differentiable and strictly concave (convex, respectively) mapping, can be used to produce a monotonic or a strictly monotonic sequence of weights such that this sequence corresponds to a decision-maker’s attribute preference order. It should be added that the concept of a weighting generator function lays the foundations for generating inverse (inverted) weights in a consistent way. Next, we will demonstrate that the derivative of a weighting generator function can be used to produce approximate weights. Also, we will deduce the weighting generator functions for arithmetic and geometric weight sequences, and we will present a special, one-parameter generator function that is known as the tau function in continuous-valued logic. After, based on our theoretical results concerning the weighting generator functions, we will present the above-described weight learning method and discuss its advantages and limitations.

Here, we will utilize one-parameter weighting generator functions. We will show that the parameter value of such a function can determine a sequence of weights that corresponds to the order of attribute preferences given by a decision-maker. Since the weighting generator function has only one parameter, it can be easily tuned, via its parameter value, to minimize the difference between the computed and the decision-maker-established utility values, which are assigned to each alternative in the input sample. Hence, our procedure may be treated as a hybrid method in the sense that, in order to determine attribute weights, it utilizes both the inputs provided by a decision-maker and the weighting generator functions, which can be viewed as mathematical models. However, we should mention that, unlike the AHP method and its later developed versions, our procedure does not require pairwise comparisons of the decision criteria, which may adversely affect the efficiency of these methods, especially in cases where the number of decision attributes (decision criteria) is large. In our method, regardless of the number of decision criteria, we need to find the optimal value of one parameter. Since for the proposed weighting generator functions, the domain of this parameter is a bounded interval, e.g., \((0,\frac{1}{2})\), \([-1,0)\) or (0, 1), a nearly optimal parameter value can be determined using a brute force approach. The proposed method consists of two distinct steps, each of which is strongly connected with one of the two inputs. The first input, i.e., a non-increasing sequence of the attribute preferences (criteria preferences), readily determines the parameter domain of the weighting generator function. The parameter domains of the weighting generator functions that produce arithmetic and geometric sequences of weights are \([-1,0)\) and (0, 1), respectively. The parameter domain for the tau weighting generator function is \((0,\frac{1}{2})\). The second input, i.e., a sample of evaluated alternatives, is utilized for optimizing the parameter value such that the difference between the computed and the decision-maker-established utility values, which are assigned to each alternative in the input sample, is a minimum.

The main findings of our study can be summarized as follows:

  • A class of RIM quantifiers is treated as a set of weighting generator functions, which will be utilized for generating weighting systems.

  • We present a weight learning procedure that requires two inputs: (1) A non-increasing sequence of the attribute preferences (criteria preferences) and (2) A sample of evaluated alternatives.

  • We utilize one-parameter weighting generator functions and so the weight learning procedure leads to an optimization problem where the optimal value of only one parameter needs to be found.

  • Using a numerical example, we will show how our procedure can be used in practice.

This paper is structured as follows. In Sect. 2, we present the weighting generator functions and a result concerning the inverted weights. Our method for generating weights based on a monotonic or strictly monotonic order of attribute preferences is described in Sect. 3. In Sect. 4, we focus on weight approximations using the derivatives of weighting generator functions. Weighting generator functions for arithmetic and geometric weight sequences, as well as the tau weighting generator function, are presented in Sect. 5. In Sect. 6, we present our weighting generator function-based weight learning method. Lastly, in Sect. 7, we draw some pertinent conclusions and outline our plans for future research.

2 Weighting generator functions and inverse weights

In this section, we will introduce the so-called weighting generator functions, which can be viewed as a class of the well-known regular increasing monotone quantifiers. Next, we will show how a weighting generator function can be used to produce inverse weights.

2.1 Weighting generator functions

A function \(Q:[0,1]\rightarrow [0,1]\) is a RIM quantifier if \(Q(0)=0\), \(Q(1)=1\) and for any \(x,y \in [0,1]\), \(x>y\) implies \(Q(x) \ge Q(y)\) (see, e.g., [20]). The requirements for weighting generator functions are stricter; namely, these functions are strictly increasing, strictly concave (or strictly convex) and differentiable. Hence, the weighting generator functions form a subclass of the RIM quantifiers.

Definition 1

Let \(\mathcal {G}\) be the set of all functions \(g:[0,1] \rightarrow [0,1]\) that are strictly increasing with \(g(0)=0\) and \(g(1)=1\), strictly concave (convex, respectively) and differentiable on (0, 1). We shall say that \(\mathcal {G}\) is the class of weighting generator functions.

Making use of Definition 1, we will interpret the weights induced by a weighting generator function as follows.

Definition 2

Let \(g \in \mathcal {G}\), let \(n \in \mathbb {N}\), \(n \ge 1\), and for \(i=1,2, \ldots , n\), let \(w_i\) be given by

$$\begin{aligned} w_{i} = g\left( \frac{i}{n} \right) - g\left( \frac{i-1}{n} \right) . \end{aligned}$$
(1)

Then, we will say that the weights \(w_1, w_2, \ldots , w_n\) are induced by the weighting generator function g.

The following proposition explains why a function \(g \in \mathcal {G}\) may be viewed as a weighting generator function. With this proposition we can demonstrate that the quantities \(w_1, w_2, \ldots , w_n\), induced by a weighting generator function g according to Eq. (1), are in fact weights.

Proposition 1

Let \(g \in \mathcal {G}\), and let \(n \in \mathbb {N}\), \(n \ge 1\). If, for \(i=1,2, \ldots , n\), \(w_i\) is given by Eq. (1), then \(w_i\) has the following properties:

  (a) \(w_i > 0\)

  (b) if g is strictly concave, then \(w_1> w_2> \cdots > w_n\) and if g is strictly convex, then \(w_1< w_2< \cdots < w_n\)

  (c) \(\sum _{i=1}^{n} w_i = 1\).

Proof

Since \(g \in \mathcal {G}\), g is a strictly increasing function, and so Eq. (1) implies that property (a) holds. Next, by taking into account the strict concavity (convexity, respectively) of g, we immediately see that property (b) holds as well. Lastly, taking into account the fact that \(g(0)=0\) and \(g(1)=1\), we have

$$\begin{aligned} \sum _{i=1}^{n} w_i = \sum _{i=1}^{n} \left( g\left( \frac{i}{n} \right) - g\left( \frac{i-1}{n} \right) \right) = g\left( 1 \right) - g\left( 0 \right) = 1. \end{aligned}$$

\(\square\)
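As a quick numerical check of Definition 2 and Proposition 1, the following Python sketch computes the weights induced by a generator function; the choice \(g(x)=\sqrt{x}\) is only an illustrative strictly concave generator.

```python
import math

def induced_weights(g, n):
    """Weights induced by a weighting generator function g (Eq. (1))."""
    return [g(i / n) - g((i - 1) / n) for i in range(1, n + 1)]

w = induced_weights(math.sqrt, 5)   # sqrt is strictly concave on [0, 1]
print([round(x, 4) for x in w])     # strictly decreasing weights
print(round(sum(w), 10))            # the weights sum to 1
```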

Remark 1

Note that based on Eq. (1), we have that for any \(j \in \lbrace 1,2, \ldots , n\rbrace\),

$$\begin{aligned} \sum _{i=1}^{j} w_{i} = g\left( \frac{j}{n} \right) . \end{aligned}$$
(2)

In Fig. 1, a strictly concave weighting generator function and the weights induced by this function have been plotted.

Fig. 1 Weights induced by a weighting generator function g

2.2 Inverse weights induced by a weighting generator function

Taking into account Definition 1, Proposition 1 and Definition 2, we can state the following theorem.

Theorem 1

Let \(g \in \mathcal {G}\) be a weighting generator function, let \(n \in \mathbb {N}\), \(n \ge 1\) and let the weights \(w^{(g)}_1, w^{(g)}_2, \ldots , w^{(g)}_n\) be induced by g. Furthermore, let the function \(f:[0,1] \rightarrow [0,1]\) be given by

$$\begin{aligned} f(x) = 1-g(1-x) \end{aligned}$$
(3)

for any \(x \in [0,1]\). Then, the function f is a weighting generator function as well, and the following properties hold for the weights \(w^{(f)}_1, w^{(f)}_2, \ldots , w^{(f)}_n\) induced by f:

  (a) \(w^{(f)}_{i} = w^{(g)}_{n-i+1}\), for \(i=1,2, \ldots , n\).

  (b) If g is strictly concave, then f is strictly convex and

    $$\begin{aligned} w^{(g)}_1> w^{(g)}_2> \cdots > w^{(g)}_n \quad \text {and} \quad w^{(f)}_1< w^{(f)}_2< \cdots < w^{(f)}_n. \end{aligned}$$

  (c) If g is strictly convex, then f is strictly concave and

    $$\begin{aligned} w^{(g)}_1< w^{(g)}_2< \cdots < w^{(g)}_n \quad \text {and} \quad w^{(f)}_1> w^{(f)}_2> \cdots > w^{(f)}_n. \end{aligned}$$

  (d) \(\sum _{i=1}^{n} w^{(f)}_{i} = 1\).

Proof

Since \(g \in \mathcal {G}\), based on Definition 1, g is strictly increasing with \(g(0)=0\) and \(g(1)=1\), it is differentiable on (0, 1) and it is either strictly concave or strictly convex. Therefore, noting Eq. (3), we immediately see that f is strictly increasing with \(f(0)=0\) and \(f(1)=1\), f is differentiable, and if g is strictly concave (convex, respectively), then f is strictly convex (concave, respectively). Hence, f satisfies the criteria for a weighting generator function given in Definition 1. Next, exploiting the results of Proposition 1, we immediately see that properties (b) and (c) hold.

Based on Eqs. (1) and (3), for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), we can write

$$\begin{aligned} w^{(f)}_{i}&= f\left( \frac{i}{n} \right) - f\left( \frac{i-1}{n} \right) = 1-g\left( 1- \frac{i}{n} \right) - 1 + g\left( 1- \frac{i-1}{n} \right) \\&= g\left( \frac{n-i+1}{n} \right) - g\left( \frac{n-i}{n} \right) = w^{(g)}_{n-i+1}.\end{aligned}$$
(4)

This means that property (a) holds. Since \(w^{(g)}_1, w^{(g)}_2, \ldots , w^{(g)}_n\) are induced by the weighting generator function g, we have \(\sum _{i=1}^{n} w^{(g)}_{i}=1\). Now, making use of property (a), we find that

$$\begin{aligned} \sum _{i=1}^{n} w^{(f)}_{i} = \sum _{i=1}^{n} w^{(g)}_{n-i+1} = \sum _{i=1}^{n} w^{(g)}_{i} = 1. \end{aligned}$$

That is, property (d) holds as well. \(\square\)

Remark 2

It should be stressed that the concept of weighting generator function along with Theorem 1 lay the foundations for generating inverse (inverted) weights in a consistent way. That is, if g is a weighting generator function, then \(f(x) = 1- g(1-x)\) is a weighting generator function as well and the weights induced by f can be viewed as the inverted weights of those induced by g.
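A minimal sketch of this inversion, again assuming the illustrative generator \(g(x)=\sqrt{x}\): the weights induced by \(f(x)=1-g(1-x)\) are exactly the reversed weights induced by g, in line with property (a) of Theorem 1.

```python
import math

def induced_weights(g, n):
    """Weights induced by a weighting generator function g (Eq. (1))."""
    return [g(i / n) - g((i - 1) / n) for i in range(1, n + 1)]

g = math.sqrt                        # strictly concave generator
f = lambda x: 1 - g(1 - x)           # its inverse generator (Eq. (3))

w_g = induced_weights(g, 5)
w_f = induced_weights(f, 5)
print([round(x, 4) for x in w_g])    # strictly decreasing
print([round(x, 4) for x in w_f])    # the same values, reversed
assert all(abs(a - b) < 1e-12 for a, b in zip(w_f, reversed(w_g)))
```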

3 Generating weights based on a monotonic or strictly monotonic order of attribute preferences

Let \(a_1, a_2, \ldots , a_n\) be attributes which characterize each entity (alternative) in a decision making procedure, \(n \in \mathbb {N}\), \(n \ge 1\). Let the variables \(x_1, x_2, \ldots , x_n\) and the weights \(w_1, w_2, \ldots , w_n\) be the inputs of this decision making procedure. Here, we interpret the value of variable \(x_i\) and the value of weight \(w_i\) as the utility value and the importance value of the ith attribute, respectively, in the preference system of a decision-maker, \(i \in \lbrace 1,2, \ldots , n\rbrace\). Now, using weighting generator functions, we will present a method that can be used to generate weights so that they reflect the decision-maker’s preferences regarding the attributes.

Let \(\textbf{A} = \lbrace a_1, a_2, \ldots , a_n \rbrace\) be the set of attributes, and let \(\prec\) and \(\succ\) be two strict order relations on \(\textbf{A}\) such that for any \(a_i, a_j \in \textbf{A}\) and \(i \ne j\),

  (a) \(a_j \prec a_i\) if and only if \(a_i\) is more important than \(a_j\)

  (b) \(a_j \succ a_i\) if and only if \(a_i\) is less important than \(a_j\).

Also, let \(\equiv\) be an equivalence relation on the set \(\textbf{A}\) such that

  (c) \(a_j \equiv a_i\) if and only if \(a_i\) and \(a_j\) are equally important.

Later, we will utilize the weighted arithmetic mean

$$\begin{aligned} A(\textbf{x},\textbf{w}) = \sum _{i=1}^{n} w_i x_i \end{aligned}$$

to aggregate the \(x_{1}, x_{2}, \ldots , x_{n} \in \mathbb {R}\) utility values with respect to the weights \(w_{1}\), \(w_{2}\), \(\ldots\), \(w_{n} \in (0,1)\), where

$$\begin{aligned} \textbf{x} = \left( x_{1}, x_{2}, \ldots , x_{n} \right) \in \mathbb {R}^{n}, \quad \textbf{w} = \left( w_{1}, w_{2}, \ldots , w_{n} \right) \in (0,1)^{n} \end{aligned}$$

and \(\sum _{i=1}^{n} w_i = 1\). Hence, we shall assume that the greater the importance of an attribute is, the greater its weight value will be. That is, we shall assume that for any \(i, j \in \lbrace 1,2, \ldots , n\rbrace\) and \(i \ne j\),

  (a) \(w_j < w_i\) if and only if \(a_j \prec a_i\)

  (b) \(w_j > w_i\) if and only if \(a_j \succ a_i\)

  (c) \(w_j = w_i\) if and only if \(a_j \equiv a_i\).

Remark 3

We should add that a greater level of attribute importance does not necessarily mean a greater value of the corresponding weight, as this depends on the aggregation method. For example, if we aggregate the \(x_{1}, x_{2}, \ldots , x_{n} \in (0,1)\) values with respect to the weights \(w_{1}, w_{2}, \ldots , w_{n}\), where \(\sum _{i=1}^{n} w_i = 1\), using the weighted geometric mean

$$\begin{aligned} G(\textbf{x},\textbf{w}) = \prod _{i=1}^{n} x_{i}^{w_i}, \end{aligned}$$

then a greater level of attribute importance results in a lower value of the corresponding weight. This is simply due to the fact that for any \(x \in (0,1)\), \(x^{w}\) is a strictly decreasing function of w, where \(w \in (0,1)\).

The following theorem tells us how the weighting generator functions can be used to obtain a weight sequence that represents the decision-maker’s order of importance of the decision attributes.

Theorem 2

Let \(n \in \mathbb {N}\), \(n \ge 1\) and let \(a_1, a_2, \ldots , a_n\) be attributes, which characterize the alternatives in a decision making procedure, such that

$$\begin{aligned} a_{\pi (n_{0}+1)} \equiv a_{\pi (n_{0}+2)} \equiv \cdots \equiv a_{\pi (n_{1})}&\succ a_{\pi (n_{1}+1)} \equiv a_{\pi (n_{1}+2)} \equiv \cdots \equiv a_{\pi (n_{2})} \succ \cdots \\ \cdots&\succ a_{\pi (n_{k-1}+1)} \equiv a_{\pi (n_{k-1}+2)} \equiv \cdots \equiv a_{\pi (n_{k})},\end{aligned}$$
(5)

where \(k \in \mathbb {N}\) is an arbitrary fixed constant with \(1 \le k \le n\), \(n_{0}, n_{1}, \ldots , n_{k}\) are fixed indices that satisfy

$$\begin{aligned} 0 = n_{0} \le n_{1} \le \cdots \le n_{k-1} \le n_{k} = n, \end{aligned}$$
(6)

\(\pi\) is a permutation on the set \(\lbrace 1,2, \ldots , n \rbrace\), and \(\succ\) and \(\equiv\) are a strict order relation and an equivalence relation on the set \(\lbrace a_1, a_2, \ldots , a_n \rbrace\), respectively. Furthermore, let \(g \in \mathcal {G}\) be a strictly concave weighting generator function and for \(r=1,2, \ldots , k\), let \(w^{*}_{r}\) be given by

$$\begin{aligned} w^{*}_{r} = g\left( \frac{r}{k} \right) - g\left( \frac{r-1}{k} \right) . \end{aligned}$$
(7)

If, for every \(i=1,2, \ldots , n\), \(w_{\pi (i)}\) is given by

$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}}, \end{aligned}$$
(8)

where \(l \in \lbrace 0,1, \ldots , k-1 \rbrace\) is a uniquely determined index for which

$$\begin{aligned} i \in \lbrace n_{l}+1, n_{l}+2, \ldots , n_{l+1} \rbrace , \end{aligned}$$

then \(w_{\pi (i)} >0\),

$$\begin{aligned} \sum _{i=1}^{n} w_{\pi (i)} = 1 \end{aligned}$$
(9)

and

$$\begin{aligned}&w_{\pi (n_{0}+1)} = w_{\pi (n_{0}+2)} = \cdots = w_{\pi (n_{1})} \\&> w_{\pi (n_{1}+1)} = w_{\pi (n_{1}+2)} = \cdots = w_{\pi (n_{2})}> \cdots \\&\cdots > w_{\pi (n_{k-1}+1)} = w_{\pi (n_{k-1}+2)} = \cdots = w_{\pi (n_{k})}.\end{aligned}$$
(10)

Proof

Since \(w^{*}_1, w^{*}_2, \ldots , w^{*}_{k}\) are weights induced by a strictly concave weighting generator function g, based on Proposition 1, we immediately get that for any \(r \in \lbrace 1,2, \ldots , k\rbrace\), \(w^{*}_{r}>0\),

$$\begin{aligned} \sum _{r=1}^{k} w^{*}_{r}=1, \end{aligned}$$
(11)

and

$$\begin{aligned} w^{*}_1> w^{*}_2> \cdots > w^{*}_{k} \end{aligned}$$
(12)

holds. Hence, noting Eqs. (8) and (6), we find that for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), \(w_{\pi (i)} >0\). Notice that the denominator of the formula for \(w_{\pi (i)}\) in Eq. (8) is independent of i. Therefore, taking into account Eqs. (12), (8) and the fact that \(w_{\pi (i)}\) has the same value for \(i \in \lbrace n_{l}+1, n_{l}+2, \ldots , n_{l+1} \rbrace\), we find that Eq. (10) holds. Also, we can write

$$\begin{aligned} \sum _{i=1}^{n} w_{\pi (i)}&= \sum _{l=0}^{k-1} \left( n_{l+1} - n_{l} \right) \frac{w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}} \\&= \frac{\sum _{l=0}^{k-1} \left( n_{l+1} - n_{l} \right) w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}} = 1.\end{aligned}$$

\(\square\)

Note that in Theorem 2, the value of k corresponds to the number of unique weight values in the weight sequence given in Eq. (10). In the two terminal cases, where \(k=1\) or \(k=n\), Theorem 2 gives us the following results.

If \(k=1\), then all the attributes have the same importance value, and so, based on Eq. (10), all the weights should be equal, i.e.,

$$\begin{aligned} w_{\pi (1)} = w_{\pi (2)} = \cdots = w_{\pi (n)}=\frac{1}{n}. \end{aligned}$$

Indeed, if \(k=1\), then \(n_0=0\), \(n_1 = n\), \(w^{*}_1=1\) and \(l=0\), and utilizing Eq. (8), for any \(i \in \lbrace 1, 2, \ldots , n \rbrace\), we get

$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}} = \frac{w^{*}_1}{n_1 - n_0} = \frac{1}{n}. \end{aligned}$$

If \(k=n\), then each attribute has a unique importance value, and so, based on Eq. (10),

$$\begin{aligned} w_{\pi (1)}> w_{\pi (2)}> \cdots > w_{\pi (n)} \end{aligned}$$

should hold. Indeed, if \(k=n\), then \(n_0=0\), \(n_1 = 1\), \(\ldots\), \(n_k=n\). Therefore, using Eq. (8), for any \(i \in \lbrace 1, 2, \ldots , n \rbrace\), and noting Eq. (11), we can write

$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}} = \frac{w^{*}_{i}}{\sum _{r=0}^{k-1} w^{*}_{r+1}} = w^{*}_{i}. \end{aligned}$$
(13)

Since \(w^{*}_1> w^{*}_2> \cdots > w^{*}_{k}\), based on Eq. (13), we also have

$$\begin{aligned} w_{\pi (1)}> w_{\pi (2)}> \cdots > w_{\pi (n)}. \end{aligned}$$

Remark 4

If in Theorem 2, we replace the relations \(\succ\) and > in Eqs. (5) and (10) with the relations \(\prec\) and <, respectively, then the theorem remains valid with any strictly convex weighting generator function g.

Making use of weighting generator functions and Theorem 2, the following procedure can be utilized to generate weights that reflect the preference order of attributes.

Procedure 1

[A procedure for generating weights that reflect the preference order of attributes]

  • Input: A non-increasing sequence of the decision-maker’s preferences regarding the attributes \(a_1, a_2, \ldots , a_n\):

    $$\begin{aligned} a_{\pi (n_{0}+1)} \equiv a_{\pi (n_{0}+2)} \equiv \cdots \equiv a_{\pi (n_{1})}&\succ a_{\pi (n_{1}+1)} \equiv a_{\pi (n_{1}+2)} \equiv \cdots \equiv a_{\pi (n_{2})} \succ \cdots \\ \cdots&\succ a_{\pi (n_{k-1}+1)} \equiv a_{\pi (n_{k-1}+2)} \equiv \cdots \equiv a_{\pi (n_{k})},\end{aligned}$$

    where \(n \in \mathbb {N}\), \(n \ge 1\), \(k \in \mathbb {N}\), \(1 \le k \le n\),

    $$\begin{aligned} 0 = n_{0} \le n_{1} \le \cdots \le n_{k-1} \le n_{k} = n \end{aligned}$$

    and \(\pi\) is a permutation on the set \(\lbrace 1,2, \ldots , n \rbrace\).

  • Step 1: Select a strictly concave weighting generator function \(g \in \mathcal {G}\) and for all \(r = 1,2, \ldots , k\), compute \(w^{*}_r\) as

    $$\begin{aligned} w^{*}_r = g\left( \frac{r}{k} \right) - g\left( \frac{r-1}{k} \right) . \end{aligned}$$
  • Step 2: For every \(i=1,2, \ldots , n\), compute \(w_{\pi (i)}\) as

    $$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}}, \end{aligned}$$

    where \(l \in \lbrace 0,1, \ldots , k-1 \rbrace\) is a uniquely determined index for which

    $$\begin{aligned} i \in \lbrace n_{l}+1, n_{l}+2, \ldots , n_{l+1} \rbrace . \end{aligned}$$
  • Output: An ordered sequence of weights preserving the preference order of the attributes \(a_1, a_2, \ldots , a_n\):

    $$\begin{aligned} w_{\pi (n_{0}+1)} = w_{\pi (n_{0}+2)} = \cdots = w_{\pi (n_{1})}&> w_{\pi (n_{1}+1)} = w_{\pi (n_{1}+2)} = \cdots = w_{\pi (n_{2})}> \cdots \\ \cdots&> w_{\pi (n_{k-1}+1)} = w_{\pi (n_{k-1}+2)} = \cdots = w_{\pi (n_{k})},\end{aligned}$$

    where for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), \(w_{\pi (i)}>0\), and

    $$\begin{aligned} \sum _{i=1}^{n} w_{\pi (i)} = \sum _{l=0}^{k-1} \sum _{j=n_{l}+1}^{n_{l+1}} w_{\pi (j)} = 1. \end{aligned}$$

Remark 5

According to Remark 4, if in Procedure 1, we replace the relations \(\succ\) and > with the relations \(\prec\) and <, respectively, then the procedure remains valid, provided that in Step 1 a strictly convex weighting generator function g is selected.

The following example shows how Procedure 1 can be applied in practice.

Example 1

Suppose that \(a_1, a_2, \ldots , a_5\) are five attributes that characterize the alternatives in a decision making procedure, and the decision-maker’s preferences concerning the attributes are given by the following ordered sequence:

$$\begin{aligned} a_{3} \equiv a_{5} \succ a_{2} \equiv a_{1} \succ a_{4}. \end{aligned}$$

Our intention is to assign a weight value \(w_{\pi (i)}\) to each attribute \(a_{\pi (i)}\), where \(i=1,2, \ldots , 5\) and the permutation \(\pi\) is given by \((1,2,3,4,5)\mapsto (3,5,2,1,4)\), such that the order of the weights is identical to the preference order of the corresponding attributes. That is,

$$\begin{aligned} w_{3} = w_{5}> w_{2} = w_{1} > w_{4}, \end{aligned}$$
(14)

or equivalently,

$$\begin{aligned} w_{\pi (1)} = w_{\pi (2)}> w_{\pi (3)} = w_{\pi (4)} > w_{\pi (5)}. \end{aligned}$$
(15)

We see that there are three different weight values in the ordered sequence in Eq. (15). Therefore, first we need to generate \(k=3\) unique weights, \(w^{*}_{1}\), \(w^{*}_{2}\) and \(w^{*}_{3}\), such that

$$\begin{aligned} w^{*}_{1}> w^{*}_{2} > w^{*}_{3}. \end{aligned}$$
(16)

Let \(w^{*}_{1}\), \(w^{*}_{2}\) and \(w^{*}_{3}\) be induced by the weighting generator function g, which is given by \(g(x) = \sqrt{x}\), \(x \in [0,1]\). Since g is a strictly concave function, based on Proposition 1, we readily get that the weights \(w^{*}_{1}\), \(w^{*}_{2}\) and \(w^{*}_{3}\) satisfy the inequality relation stated in Eq. (16). Using Eq. (1), the values of \(w^{*}_{1}\), \(w^{*}_{2}\) and \(w^{*}_{3}\) are

$$\begin{aligned} w^{*}_{1} = 0.5774, \quad w^{*}_{2} = 0.2391 \quad \text {and} \quad w^{*}_{3} = 0.1835. \end{aligned}$$

Using the notations of Procedure 1, the ordered sequence of weights given in Eq. (15) can be written as

$$\begin{aligned} w_{\pi (n_0+1)} = w_{\pi (n_1)}> w_{\pi (n_1+1)} = w_{\pi (n_2)} > w_{\pi (n_3)}, \end{aligned}$$

where

$$\begin{aligned} n_{0} = 0, n_1 = 2, n_2 = 4, \quad \text {and} \quad n_3 = 5. \end{aligned}$$

Since \(w_{\pi (i)}\) can be computed as

$$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{l+1}}{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) w^{*}_{r+1}}, \end{aligned}$$

where \(l \in \lbrace 0,1, \ldots , k-1 \rbrace\) is a uniquely determined index for which

$$\begin{aligned} i \in \lbrace n_{l}+1, n_{l}+2, \ldots , n_{l+1} \rbrace , \end{aligned}$$

we have the following:

  1. If \(i=1\) or \(i=2\), then \(l=0\) and

    $$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{1}}{2 w^{*}_{1} + 2 w^{*}_{2} + w^{*}_{3}} = \frac{0.5774}{1.8165} = 0.3178. \end{aligned}$$

  2. If \(i=3\) or \(i=4\), then \(l=1\) and

    $$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{2}}{2 w^{*}_{1} + 2 w^{*}_{2} + w^{*}_{3}} = \frac{0.2391}{1.8165} = 0.1317. \end{aligned}$$

  3. If \(i=5\), then \(l=2\) and

    $$\begin{aligned} w_{\pi (i)} = \frac{w^{*}_{3}}{2 w^{*}_{1} + 2 w^{*}_{2} + w^{*}_{3}} = \frac{0.1835}{1.8165} = 0.1010. \end{aligned}$$

This means that we have

$$\begin{aligned} w_{3} = w_{5} = 0.3178, \quad w_{2} = w_{1} = 0.1317 \quad \text {and} \quad w_{4} = 0.1010, \end{aligned}$$

which satisfies the criterion in Eq. (14), and \(\sum _{i=1}^{5} w_i = 1\).
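The computation in Example 1 can be reproduced with the following minimal Python sketch of Procedure 1 (the helper names are ours):

```python
import math

def procedure_1(groups, g):
    """Procedure 1: weights from preference groups of attribute indices.

    groups -- list of lists; earlier groups contain the more important
              attributes, attributes within a group are equally important
    g      -- a strictly concave weighting generator function
    """
    k = len(groups)
    # Step 1: k unique raw weights induced by g (Eq. (7))
    w_star = [g(r / k) - g((r - 1) / k) for r in range(1, k + 1)]
    # Step 2: normalization by the group sizes (Eq. (8))
    denom = sum(len(grp) * ws for grp, ws in zip(groups, w_star))
    return {a: ws / denom for grp, ws in zip(groups, w_star) for a in grp}

# Example 1: a3 ≡ a5 ≻ a2 ≡ a1 ≻ a4
w = procedure_1([[3, 5], [2, 1], [4]], math.sqrt)
print({a: round(x, 4) for a, x in sorted(w.items())})
# {1: 0.1317, 2: 0.1317, 3: 0.3178, 4: 0.101, 5: 0.3178}
```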

4 Approximating the weights using the derivatives of weighting generator functions

Here, we will present a way of effectively approximating the weights that are induced by a weighting generator function.

Let \(g \in \mathcal {G}\) be a weighting generator function. Based on Definition 2, the ith weight \(w_{i}\) induced by g is

$$\begin{aligned} w_{i} = g\left( \frac{i}{n} \right) - g\left( \frac{i-1}{n} \right) , \end{aligned}$$
(17)

where \(i \in \lbrace 1,2, \ldots , n \rbrace\) and \(n \in \mathbb {N}\), \(n \ge 1\). The gradient of the line segment that connects the points \(\left( \frac{i-1}{n}, g\left( \frac{i-1}{n}\right) \right)\) and \(\left( \frac{i}{n}, g\left( \frac{i}{n}\right) \right)\) is

$$\begin{aligned} \frac{g\left( \frac{i}{n} \right) - g\left( \frac{i-1}{n} \right) }{\frac{i}{n} - \frac{i-1}{n}} = \frac{g\left( \frac{i}{n} \right) - g\left( \frac{i-1}{n} \right) }{\frac{1}{n}}. \end{aligned}$$
(18)

Since g is a differentiable function, if n is sufficiently large, then the gradient in Eq. (18) can be approximated quite well as follows:

$$\begin{aligned} \frac{g\left( \frac{i}{n} \right) - g\left( \frac{i-1}{n} \right) }{\frac{1}{n}} \approx g'\left( \frac{\frac{i-1}{n} + \frac{i}{n}}{2}\right) = g' \left( \frac{2i-1}{2n} \right) , \end{aligned}$$

where \(g'\) is the first derivative of the weighting generator function g. Hence, the weight \(w_{i}\) in Eq. (17) can be approximated quite well by \(w'_{i}\), where

$$\begin{aligned} w'_{i} = \frac{1}{n} g' \left( \frac{2i-1}{2n} \right) . \end{aligned}$$
(19)

As g is a strictly increasing function, for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), \(w'_{i}>0\) holds. Furthermore, if n is sufficiently large, then \(\sum _{i=1}^{n} w'_{i} \approx \sum _{i=1}^{n} w_{i} = 1\), and so we have that

$$\begin{aligned} w_{i} \approx w'_{i} \approx \frac{w'_{i}}{\sum _{j=1}^{n} w'_{j}} = \frac{g' \left( \frac{2i-1}{2n} \right) }{\sum _{j=1}^{n} g' \left( \frac{2j-1}{2n} \right) }. \end{aligned}$$

This means that if n is sufficiently large, then \(w_{i}\) can be approximated quite well by \(\hat{w}_{i}\), where

$$\begin{aligned} \hat{w}_{i} = \frac{g' \left( \frac{2i-1}{2n} \right) }{\sum _{j=1}^{n} g' \left( \frac{2j-1}{2n} \right) }, \end{aligned}$$
(20)

and for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), \(\hat{w}_{i}>0\), and \(\sum _{i=1}^{n} \hat{w}_{i} =1\).

Example 2

Let the weights \(w_{1}\), \(w_{2}, \ldots , w_{7}\) be induced by the weighting generator function \(g(x) = x^{0.8}\), where \(x \in [0,1]\), and let the weights \(\hat{w}_{1}\), \(\hat{w}_{2}, \ldots , \hat{w}_{7}\) be computed using Eq. (20). The results of the calculations are summarized in Table 2.

Table 2 Approximate weights (\(n=7\))

The results in Table 2 demonstrate that, in this particular case, if \(n=7\), then the \(w'_{i}\) and \(\hat{w}_{i}\) values approximate quite well the value of the \(w_{i}\) weight, where \(i=1,2, \ldots , n\).
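A short sketch that reproduces the computations behind Table 2, assuming the generator \(g(x)=x^{0.8}\) and its derivative \(g'(x)=0.8x^{-0.2}\):

```python
def approx_weights(g, g_prime, n):
    """Exact weights (Eq. (1)), midpoint approximations (Eq. (19))
    and their normalized versions (Eq. (20))."""
    w = [g(i / n) - g((i - 1) / n) for i in range(1, n + 1)]
    w_prime = [g_prime((2 * i - 1) / (2 * n)) / n for i in range(1, n + 1)]
    total = sum(w_prime)
    w_hat = [wp / total for wp in w_prime]
    return w, w_prime, w_hat

# Example 2: g(x) = x^0.8, g'(x) = 0.8 * x^(-0.2), n = 7
w, w_prime, w_hat = approx_weights(lambda x: x ** 0.8,
                                   lambda x: 0.8 * x ** (-0.2), 7)
for i in range(7):
    print(f"i={i + 1}: w={w[i]:.4f}  w'={w_prime[i]:.4f}  w_hat={w_hat[i]:.4f}")
```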

5 Weights induced by various weighting generator functions

In Example 1, we utilized the weighting generator function \(g(x) = \sqrt{x}\), \(x \in [0,1]\), to obtain a desired order of weights. We should mention that the application of any strictly concave weighting generator function would have resulted in the same weight order. Of course, the weights generated by different generator functions are not the same, but the order of the generated weights depends only on the convexity of the generator function.

Now, we will provide a few examples of weighting generator functions which can be used to produce weights that satisfy certain criteria. Then we will present a particular generator function which is known as the tau function in continuous-valued logic.

5.1 Weighting generator functions of arithmetic weight sequences

Suppose that we wish to create the following sequence of weights:

$$\begin{aligned} w_{1}, w_{2}, \ldots , w_{n}, \end{aligned}$$

where \(n \ge 1\),

$$\begin{aligned} w_{i} = w+(i-1)d, \end{aligned}$$
(21)

\(w, w_{i} \in (0,1)\), \(i = 1,2, \ldots , n\), \(d \in (-1,1)\), \(d \ne 0\) and

$$\begin{aligned} \sum _{i=1}^{n} w_{i} = \sum _{i=1}^{n} (w+(i-1)d) = 1. \end{aligned}$$
(22)

Since the sequence of weights given by Eq. (21) is an arithmetic sequence, we have that for any \(j \in \lbrace 1,2, \ldots , n \rbrace\),

$$\begin{aligned} \sum _{i=1}^{j} w_{i} = \sum _{i=1}^{j} (w+(i-1)d) = \frac{j}{2}\left( 2w + (j-1)d \right) . \end{aligned}$$
(23)

Hence, based on Eqs. (22) and (23), we have

$$\begin{aligned} \frac{n}{2}\left( 2w + (n-1)d \right) = 1, \end{aligned}$$

from which we get

$$\begin{aligned} w = \frac{1}{n} - (n-1) \frac{d}{2}. \end{aligned}$$
(24)

Next, based on Remark 1 and Eq. (23), the generator function g that induces the arithmetic sequence of the weights given in Eq. (21) should satisfy the following:

$$\begin{aligned} g\left( \frac{j}{n}\right) = \sum _{i=1}^{j} w_{i} = \frac{j}{2}\left( 2w + (j-1)d \right) \end{aligned}$$
(25)

for any \(j \in \lbrace 1,2, \ldots , n \rbrace\). Next, noting Eq. (24), from Eq. (25), we get

$$\begin{aligned} g\left( \frac{j}{n}\right) = j \left( \frac{1}{n} - (n-j) \frac{d}{2} \right) . \end{aligned}$$
(26)

Now, by introducing

$$\begin{aligned} x = \frac{j}{n} \quad \text {and} \quad \alpha = n^{2} \frac{d}{2}, \end{aligned}$$

Eq. (26) can be written as

$$\begin{aligned} g(x) = x - \alpha x (1-x), \end{aligned}$$
(27)

where \(\alpha \in [-1,1]\) and \(\alpha \ne 0\). Note that these restrictions on the value of parameter \(\alpha\) ensure that g satisfies the criteria for a weighting generator function given in Definition 1. If \(\alpha \in (0,1]\) (respectively, \(\alpha \in [-1,0)\)), then the weight sequence induced by g is strictly increasing (respectively, decreasing). The derivative of the generator function in Eq. (27) is

$$\begin{aligned} g'(x) = 2 \alpha x-\alpha +1, \end{aligned}$$
(28)

and so, using Eq. (19),

$$\begin{aligned} w'_{i} = \frac{1}{n} g' \left( \frac{2i-1}{2n} \right) = \frac{1}{n} \left( \alpha \frac{2i-1}{n} - \alpha + 1 \right) \end{aligned}$$

can be treated as a good approximation of \(w_{i}\) when n is large, \(i=1,2, \ldots , n\).
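A minimal sketch of the arithmetic case, using the generator in Eq. (27); the values \(\alpha =-0.5\) and \(n=5\) are arbitrary illustrative choices. The induced weights form a strictly decreasing arithmetic sequence with common difference \(d = 2\alpha /n^{2}\).

```python
def arithmetic_weights(alpha, n):
    """Weights induced by g(x) = x - alpha*x*(1 - x) (Eq. (27));
    alpha in [-1, 0) yields a decreasing and alpha in (0, 1] an
    increasing arithmetic weight sequence."""
    g = lambda x: x - alpha * x * (1 - x)
    return [g(i / n) - g((i - 1) / n) for i in range(1, n + 1)]

w = arithmetic_weights(-0.5, 5)
print([round(x, 4) for x in w])   # 0.28, 0.24, 0.2, 0.16, 0.12
print([round(w[i + 1] - w[i], 4) for i in range(4)])  # d = 2*alpha/n**2 = -0.04
```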

5.2 Weighting generator functions of geometric weight sequences

Now, suppose that we wish to create the following geometric sequence of weights:

$$\begin{aligned} w_{1}, w_{2}, \ldots , w_{n}, \end{aligned}$$

where \(n \ge 1\),

$$\begin{aligned} w_{i} = w r^{i-1}, \end{aligned}$$
(29)

\(w, w_{i} \in (0,1)\), \(i = 1,2, \ldots , n\), \(r \in \mathbb {R}\), \(r >0\), \(r \ne 1\) and

$$\begin{aligned} \sum _{i=1}^{n} w_{i} = \sum _{i=1}^{n} w r^{i-1} = 1. \end{aligned}$$
(30)

Because the sequence of weights given by Eq. (29) is a geometric sequence,

$$\begin{aligned} \sum _{i=1}^{j} w_{i} = \sum _{i=1}^{j} w r^{i-1} = w \frac{r^{j}-1}{r-1} \end{aligned}$$
(31)

holds for any \(j \in \lbrace 1,2, \ldots , n \rbrace\). Therefore, based on Eqs. (30) and (31), we have

$$\begin{aligned} w \frac{r^{n}-1}{r-1} = 1, \end{aligned}$$

from which

$$\begin{aligned} w = \frac{r-1}{r^{n}-1} \end{aligned}$$
(32)

follows. Next, based on Remark 1 and Eq. (31), the generator function g that induces the geometric sequence of the weights given in Eq. (29) should meet the requirement

$$\begin{aligned} g\left( \frac{j}{n}\right) = \sum _{i=1}^{j} w_{i} = w \frac{r^{j}-1}{r-1} \end{aligned}$$
(33)

for any \(j \in \lbrace 1,2, \ldots , n \rbrace\). Hence, making use of Eq. (32), from Eq. (33), we get

$$\begin{aligned} g\left( \frac{j}{n}\right) = \frac{r^{j}-1}{r^{n}-1}. \end{aligned}$$
(34)

Now, by introducing

$$\begin{aligned} x = \frac{j}{n} \quad \text {and} \quad \alpha = r^{n}, \end{aligned}$$

Eq. (34) can be written as

$$\begin{aligned} g(x) = \frac{\alpha ^{x}-1}{\alpha -1}, \end{aligned}$$
(35)

where \(\alpha \in \mathbb {R}\), \(\alpha > 0\) and \(\alpha \ne 1\). Note that if \(\alpha >1\) (respectively, \(\alpha <1\)), then the weight sequence induced by g is strictly increasing (respectively, decreasing). The derivative of the generator function in Eq. (35) is

$$\begin{aligned} g'(x) = \frac{\alpha ^{x} \ln (\alpha )}{\alpha -1}. \end{aligned}$$
(36)

Hence, on the one hand, with Eqs. (20) and (36), we have

$$\begin{aligned} \hat{w}_{i} =&\frac{g' \left( \frac{2i-1}{2n} \right) }{\sum _{j=1}^{n} g' \left( \frac{2j-1}{2n} \right) } = \frac{\alpha ^{\frac{2i-1}{2n}}}{\sum _{j=1}^{n} \alpha ^{\frac{2j-1}{2n}}} = \frac{\alpha ^{\frac{i}{n}}}{\sum _{j=1}^{n} \alpha ^{\frac{j}{n}}} = \frac{\alpha ^{\frac{i}{n}}}{\alpha ^{\frac{1}{n}} \frac{\left( \alpha ^{\frac{1}{n}} \right) ^{n}-1}{\alpha ^{\frac{1}{n}}-1}} \\ =&\frac{\alpha ^{\frac{i-1}{n}} \left( \alpha ^{\frac{1}{n}} -1 \right) }{\alpha -1}.\end{aligned}$$

On the other hand, using Eqs. (1) and (35), we find that

$$\begin{aligned} w_{i} = g\left( \frac{i}{n} \right) - g\left( \frac{i-1}{n} \right) = \frac{\alpha ^{\frac{i}{n}}-1}{\alpha -1} - \frac{\alpha ^{\frac{i-1}{n}}-1}{\alpha -1} = \frac{\alpha ^{\frac{i-1}{n}} \left( \alpha ^{\frac{1}{n}} -1 \right) }{\alpha -1}. \end{aligned}$$

This means that \(\hat{w}_{i}\) is not just a good approximation of the \(w_{i}\) weight, but \(\hat{w}_{i}\) and \(w_{i}\) coincide for any \(i \in \lbrace 1,2, \ldots , n \rbrace\).

Example 3

The weights in Table 1, which represent the normalized scores that the first ten drivers are awarded in a Formula 1 race, can be approximated quite well using the weighting generator function given in Eq. (35). Namely, with a simple sequential search over \(\alpha = 0.02, 0.04, \ldots , 0.98\), we find that

$$\begin{aligned} E(\alpha ) = \sum _{i=1}^{10} \left( w_{i} - w^{(\alpha )}_{i} \right) ^2 \end{aligned}$$

is nearly minimal for \(\alpha =0.08\), where \(w_i\) is the ith weight in Table 1, i.e., \(w_{i} = \frac{s_{i}}{\sum _{j=1}^{10} s_{j}}\), and

$$\begin{aligned} w^{(\alpha )}_{i} = g_{\alpha }\left( \frac{i}{n} \right) - g_{\alpha }\left( \frac{i-1}{n} \right) \quad \text {and} \quad g_{\alpha }(x) = \frac{\alpha ^{x}-1}{\alpha -1}. \end{aligned}$$

For \(\alpha =0.08\), we have \(E(\alpha )=0.001\). The results of the approximation are summarized in Table 3.

Table 3 Formula 1 scores, weights and approximate weights
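The sequential search of Example 3 can be sketched as follows; the scores below are assumed to be the standard Formula 1 points of Table 1.

```python
scores = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]   # assumed Table 1 scores
n = len(scores)
w = [s / sum(scores) for s in scores]          # normalized scores

def geometric_weights(alpha, n):
    """Weights induced by g(x) = (alpha**x - 1)/(alpha - 1) (Eq. (35))."""
    g = lambda x: (alpha ** x - 1) / (alpha - 1)
    return [g(i / n) - g((i - 1) / n) for i in range(1, n + 1)]

def E(alpha):
    return sum((wi - wa) ** 2 for wi, wa in zip(w, geometric_weights(alpha, n)))

best_alpha = min((round(0.02 * j, 2) for j in range(1, 50)), key=E)
print(best_alpha, round(E(best_alpha), 4))     # approx. 0.08 and 0.001
```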

5.3 Weights induced by the tau function

The tau function \(\tau _{A}:[0,1] \rightarrow [0,1]\), which was first introduced by Dombi as a unary modifier operator in continuous-valued logic (see [24]), is defined as follows:

$$\begin{aligned} \tau _{A}(x) = {\left\{ \begin{array}{ll} 0, &{} \hbox { if }\ x = 0 \\ \dfrac{1}{1+A \frac{1-x}{x}}, &{} \text {if } 0 < x \le 1, \end{array}\right. } \end{aligned}$$
(37)

where \(A \in (0,\infty )\). For more details on the tau function, its more general form and applications, see [25]. If we set the requirement that \(\tau _{A}(\nu ) = 1-\nu\), where \(\nu \in (0,1)\), then the tau function in Eq. (37) can be written as

$$\begin{aligned} \tau _{\nu }(x) = {\left\{ \begin{array}{ll} 0, &{} \hbox { if }\ x = 0 \\ \dfrac{1}{1+ \left( \frac{\nu }{1-\nu }\right) ^2 \frac{1-x}{x}}, &{} \text {if } 0 < x \le 1. \end{array}\right. } \end{aligned}$$
(38)

It can be verified that the tau function given in Eq. (38) has the following properties:

  (a) \(\tau _{\nu }(0) = 0\) and \(\tau _{\nu }(1)=1\)

  (b) \(\tau _{\nu }(\nu ) = 1-\nu\)

  (c) \(\tau _{\nu }(x)\) is strictly increasing and differentiable for any \(x \in [0,1]\)

  (d)
    • If \(\nu \in \left( 0,\frac{1}{2} \right)\), then \(\tau _{\nu }(x)\) is strictly concave on [0, 1] and for any \(x \in (0,1)\), \(\tau _{\nu }(x) > x\).

    • If \(\nu = \frac{1}{2}\), then for any \(x \in [0,1]\), \(\tau _{\nu }(x) = x\).

    • If \(\nu \in \left( \frac{1}{2},1 \right)\), then \(\tau _{\nu }(x)\) is strictly convex on [0, 1] and for any \(x \in (0,1)\), \(\tau _{\nu }(x) < x\).

Therefore, for any arbitrarily fixed \(\nu \in (0,1)\), \(\nu \ne \frac{1}{2}\), the tau function \(\tau _{\nu }\) meets the requirements for a weighting generator function. Taking into account Eq. (1), the ith weight induced by \(\tau _{\nu }\) is

$$\begin{aligned} w_{i} = \tau _{\nu }\left( \frac{i}{n} \right) - \tau _{\nu }\left( \frac{i-1}{n} \right) = \frac{1}{1+\left( \frac{\nu }{1 - \nu } \right) ^{2} \frac{n-i}{i}} - \frac{1}{1+\left( \frac{\nu }{1 - \nu } \right) ^{2} \frac{n-(i-1)}{i-1}}, \end{aligned}$$
(39)

where \(n \ge 1\), \(i=1,2, \ldots , n\). Figure 2 shows sample plots of the tau function.

Fig. 2 Example plots of the tau function (and visualization of the property \(\tau _{\nu }(\nu ) = 1-\nu\))

It should be noted that the convexity of the tau function depends solely on the value of its parameter \(\nu\). Hence, based on Proposition 1, we have that if \(\nu \in \left( 0,\frac{1}{2} \right)\), then \(\tau _{\nu }\) generates a strictly decreasing sequence of weights, and if \(\nu \in \left( \frac{1}{2},1 \right)\), then \(\tau _{\nu }\) generates a strictly increasing sequence of weights.

It can be shown that with the requirement \(g(\nu )=1-\nu\), where \(\nu \in (0,1)\), the tau function \(\tau _{\nu }\) is the solution of the differential equation

$$\begin{aligned} \frac{\textrm{d}g(x)}{\textrm{d}x} = \frac{g(x)(1-g(x))}{x(1-x)}. \end{aligned}$$

Therefore, noting Eq. (19), we find that

$$\begin{aligned} w'_{i} = \frac{1}{n} g' \left( \frac{2i-1}{2n} \right) = 4n \frac{\tau _{\nu }\left( \frac{2i-1}{2n}\right) \left( 1 - \tau _{\nu }\left( \frac{2i-1}{2n}\right) \right) }{\left( 2i-1\right) \left( 2n-2i+1\right) } \end{aligned}$$

can be treated as a good approximation of \(w_{i}\) when n is large, \(i=1,2, \ldots , n\).
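The following sketch computes tau-induced weights via Eq. (39); it uses the algebraically equivalent form \(\tau _{\nu }(x) = x/\left( x + \left( \frac{\nu }{1-\nu }\right) ^{2} (1-x) \right)\), which also covers \(x=0\) without a case split.

```python
def tau(nu, x):
    """Tau function (Eq. (38)) in the equivalent form
    x / (x + c*(1 - x)), where c = (nu/(1 - nu))**2."""
    c = (nu / (1 - nu)) ** 2
    return x / (x + c * (1 - x))

def tau_weights(nu, n):
    """Weights induced by tau_nu (Eq. (39))."""
    return [tau(nu, i / n) - tau(nu, (i - 1) / n) for i in range(1, n + 1)]

print([round(w, 4) for w in tau_weights(0.25, 4)])  # strictly decreasing
print([round(w, 4) for w in tau_weights(0.75, 4)])  # strictly increasing
```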

6 Learning weights using weighting generator functions

Suppose that \(a_1, a_2, \ldots , a_n\) are attributes which characterize each alternative in a decision making procedure, \(n \in \mathbb {N}\), \(n \ge 1\). Let \(\textbf{x}_{j}= (x_{j,1}, x_{j,2}, \ldots , x_{j,n}) \in [0,1]^{n}\) be the vector that contains the normalized utility values of the attributes \(a_1, a_2, \ldots , a_n\) for the jth alternative, i.e., \(x_{j,i} \in [0,1]\) is the utility value of the ith attribute for the jth alternative, where \(i=1,2, \ldots , n\), \(j=1,2, \ldots , m\), \(n,m \in \mathbb {N}\), \(n,m \ge 1\). Furthermore, let \(w_i\) denote the weight that represents the importance value of the ith attribute in the preference system of a decision-maker, \(i \in \lbrace 1,2, \ldots , n\rbrace\). Now, using weighting generator functions, we will present a heuristic that can be utilized to determine the \(w_{1}, w_{2}, \ldots , w_{n}\) weights when the aggregate utility of the jth alternative is computed as the weighted arithmetic mean of the utility values \(x_{j,1}, x_{j,2}, \ldots , x_{j,n}\) with respect to the weights \(w_{1}, w_{2}, \ldots , w_{n}\). This heuristic uses two decision-maker-provided inputs: (1) A non-increasing sequence of the attribute preferences and (2) A sample of evaluated alternatives. We will present this heuristic for the case where the weighting generator function is the tau function given in Eq. (38). Making use of the above-mentioned approach, our weight learning heuristic can be adapted to other weighting generator functions as well.

Procedure 2

(A heuristic for learning attribute weights)

  • Inputs: (1) A non-increasing sequence of the decision-maker's preferences regarding the attributes \(a_1, a_2, \ldots , a_n\):

    $$\begin{aligned} a_{\pi (n_{0}+1)} \equiv a_{\pi (n_{0}+2)} \equiv \cdots \equiv a_{\pi (n_{1})}&\succ a_{\pi (n_{1}+1)} \equiv a_{\pi (n_{1}+2)} \equiv \cdots \equiv a_{\pi (n_{2})} \succ \cdots \\ \cdots&\succ a_{\pi (n_{k-1}+1)} \equiv a_{\pi (n_{k-1}+2)} \equiv \cdots \equiv a_{\pi (n_{k})},\end{aligned}$$

    where \(n \in \mathbb {N}\), \(n \ge 1\), \(k \in \mathbb {N}\), \(1 \le k \le n\),

    $$\begin{aligned} 0 = n_{0} \le n_{1} \le \cdots \le n_{k-1} \le n_{k} = n \end{aligned}$$

    and \(\pi\) is a permutation on the set \(\lbrace 1,2, \ldots , n \rbrace\). Here, the value of k corresponds to the number of the unknown unique weight values in a weight sequence that reflects the above sequence of attribute preferences. (2) A sample of \((\textbf{x}_{j}, v_{j})\) pairs, where \(\textbf{x}_{j}= (x_{j,1}, x_{j,2}, \ldots , x_{j,n}) \in [0,1]^{n}\) is the utility vector for the jth alternative and \(v_{j} \in [0,1]\) is a utility value (score) assigned to this alternative by the decision-maker, \(j=1,2, \ldots , m\).

  • Value searching: Find the value of \(\nu \in \left( 0, \frac{1}{2} \right)\) for which

    $$\begin{aligned} \sum _{j=1}^{m} \left( v_{j} - \sum _{i=1}^{n} w_{\pi (i)}^{(\nu )} x_{j,\pi (i)} \right) ^{2} \rightarrow \min , \end{aligned}$$
    (40)

    where for every \(i=1,2, \ldots , n\),

    $$\begin{aligned} w_{\pi (i)}^{(\nu )} = \frac{\tau _{\nu }\left( \frac{l+1}{k} \right) - \tau _{\nu }\left( \frac{l}{k} \right) }{\sum _{r=0}^{k-1} \left( n_{r+1} - n_{r} \right) \left( \tau _{\nu }\left( \frac{r+1}{k} \right) - \tau _{\nu }\left( \frac{r}{k} \right) \right) }, \end{aligned}$$
    (41)

    \(l \in \lbrace 0,1, \ldots , k-1 \rbrace\) is a uniquely determined index for which

    $$\begin{aligned} i \in \lbrace n_{l}+1, n_{l}+2, \ldots , n_{l+1} \rbrace . \end{aligned}$$
  • Output: An ordered sequence of weights preserving the preference order of the attributes \(a_1, a_2, \ldots , a_n\):

    $$\begin{aligned} w_{\pi (n_{0}+1)}^{(\nu )} = w_{\pi (n_{0}+2)}^{(\nu )} = \cdots = w_{\pi (n_{1})}^{(\nu )}&> w_{\pi (n_{1}+1)}^{(\nu )} = w_{\pi (n_{1}+2)}^{(\nu )} = \cdots = w_{\pi (n_{2})}^{(\nu )}> \cdots \\ \cdots&> w_{\pi (n_{k-1}+1)}^{(\nu )} = w_{\pi (n_{k-1}+2)}^{(\nu )} = \cdots = w_{\pi (n_{k})}^{(\nu )},\end{aligned}$$

    where for any \(i \in \lbrace 1,2, \ldots , n \rbrace\), \(w_{\pi (i)}^{(\nu )}>0\), and

    $$\begin{aligned} \sum _{i=1}^{n} w_{\pi (i)}^{(\nu )} = \sum _{l=0}^{k-1} \sum _{j=n_{l}+1}^{n_{l+1}} w_{\pi (j)}^{(\nu )} = 1. \end{aligned}$$

In Procedure 2, \(v_{j}\) and \(\sum _{i=1}^{n} w_{\pi (i)}^{(\nu )} x_{j,\pi (i)}\) are the perceived and computed aggregate utility values for the jth alternative, respectively. That is, in this procedure, we seek to find the value of parameter \(\nu\) for which the sum of squared differences between the perceived and computed utility values for a given sample of alternatives is a minimum. A nearly optimal value of parameter \(\nu\) can be found using numerical methods. Since the learned weights should form a non-increasing sequence, the weighting generator tau function needs to be strictly concave, i.e., \(\nu \in \left( 0, \frac{1}{2} \right)\). This is why the optimization in Eq. (40) needs to be solved under the constraint \(0< \nu <\frac{1}{2}\). Hence, a nearly optimal value of \(\nu\) can be determined using a brute force approach. Also, the generalized reduced gradient (GRG) method (see, e.g., [6, 43]), the GLOBAL optimization method introduced by Csendes (see [18, 19]) or a particle swarm optimization method (see, e.g., [66]) can be used to find a nearly optimal value of \(\nu\).

We should add that Procedure 2 can be adapted to the cases where the weights have an arithmetic or a geometric sequence. In such cases, instead of the tau function, we need to utilize the weighting generator functions of the arithmetic or geometric weight sequences given by Eqs. (27) and (35), respectively. This also means that we need to find the optimal value of the \(\alpha\) parameter of the corresponding weighting generator function, where \(\alpha \in [-1,0)\) and \(\alpha \in (0,1)\) for the arithmetic and geometric cases, respectively.
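A minimal Python sketch of Procedure 2 with a brute-force search over \(\nu\). The sample data at the bottom are hypothetical and only illustrate the call; they are not the data of Example 4 (Table 5).

```python
import numpy as np

def tau(nu, x):
    """Tau function (Eq. (38)) in the form x / (x + c*(1 - x))."""
    c = (nu / (1 - nu)) ** 2
    return x / (x + c * (1 - x))

def learn_weights(groups, X, v, grid=np.linspace(0.01, 0.49, 49)):
    """Procedure 2: brute-force search for nu in (0, 1/2).

    groups -- preference groups of 0-based attribute indices,
              most important group first
    X      -- m-by-n matrix of normalized utility values
    v      -- length-m vector of the decision-maker's scores
    """
    k = len(groups)
    r = np.arange(k + 1) / k
    best = (np.inf, None, None)
    for nu in grid:
        w_star = tau(nu, r[1:]) - tau(nu, r[:-1])   # k raw weights
        denom = sum(len(g) * ws for g, ws in zip(groups, w_star))
        w = np.empty(X.shape[1])
        for g, ws in zip(groups, w_star):
            w[list(g)] = ws / denom                 # Eq. (41)
        err = float(np.sum((v - X @ w) ** 2))       # objective of Eq. (40)
        if err < best[0]:
            best = (err, float(nu), w)
    return best

# Hypothetical sample: three attributes with a1 ≻ a2 ≻ a3
X = np.array([[0.9, 0.4, 0.1], [0.2, 0.8, 0.5], [0.6, 0.6, 0.9]])
v = np.array([0.70, 0.45, 0.65])
err, nu, w = learn_weights([[0], [1], [2]], X, v)
print(round(nu, 2), np.round(w, 4), round(err, 4))
```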

Example 4

Here, we will show how Procedure 2 can be applied in practice. Suppose that cars are characterized by the following four attributes: (1) Engine power, (2) Max. speed, (3) Fuel efficiency and (4) Trunk capacity. Table 4 shows the unit of measure and the range for each of these attributes.

Table 4 Attributes of cars

Let \(a_1\), \(a_2\), \(a_3\) and \(a_4\) denote the attributes (1) Engine power, (2) Max. speed, (3) Fuel efficiency and (4) Trunk capacity, respectively, and let us assume that a decision-maker’s preferences regarding the car attributes are given by the following relations:

$$\begin{aligned} \text {Fuel efficiency} \succ \text {Max. speed} \succ \text {Trunk capacity} \succ \text {Engine power}. \end{aligned}$$
(42)

Then, the preference relations given in Eq. (42) can be written as

$$\begin{aligned} a_{\pi (1)} \succ a_{\pi (2)} \succ a_{\pi (3)} \succ a_{\pi (4)}, \end{aligned}$$

where the permutation \(\pi\) is given by \((1,2,3,4) \mapsto (3,2,4,1)\), i.e., we have

$$\begin{aligned} a_{3} \succ a_{2} \succ a_{4} \succ a_{1}. \end{aligned}$$

Now, suppose that five cars with the attributes given in Table 5 are offered to the decision-maker.

Table 5 Alternatives, normalized utility values of the attributes, and the decision-maker’s evaluations (scores)

In Table 5, \(x^{*}_{j,i}\) denotes the value of the ith attribute (i.e., \(a_{i}\)) for the jth alternative (car) in the units of measure given in Table 4, while \(x_{j,i} \in [0,1]\) is the normalized value of \(x^{*}_{j,i}\) obtained using the min-max normalization with the ranges for the attributes given in Table 4, where \(i=1,2, 3, 4\) and \(j=1,2, 3, 4, 5\). That is,

$$\begin{aligned} x_{j,1} = \frac{x^{*}_{j,1} - 50}{300-50}, \quad x_{j,2} = \frac{x^{*}_{j,2} - 140}{240-140}, \quad x_{j,3} = \frac{x^{*}_{j,3} - 5}{25-5}, \quad x_{j,4} = \frac{x^{*}_{j,4} - 100}{600-100}, \end{aligned}$$

where \(j=1,2, 3, 4, 5\). Let us assume that the decision-maker evaluates each alternative and assigns the \(v_{j} \in (0,1)\) utility value (score) to the jth alternative as shown in Table 5. Our aim is to determine the \(w_{\pi (1)}, w_{\pi (2)}, w_{\pi (3)}, w_{\pi (4)}\) weights such that

$$\begin{aligned} w_{\pi (1)}> w_{\pi (2)}> w_{\pi (3)} > w_{\pi (4)}, \quad \sum _{i=1}^{4} w_{\pi (i)}=1 \end{aligned}$$
(43)

and

$$\begin{aligned} F(\nu ) = \sum _{j=1}^{m} \left( v_{j} - \sum _{i=1}^{n} w_{\pi (i)} x_{j,\pi (i)} \right) ^{2} \rightarrow \min . \end{aligned}$$
(44)

Here, we will utilize the \(\tau _{\nu }\) weighting generator function given in Eq. (38) to determine the values of the \(w_{\pi (1)}, w_{\pi (2)}, w_{\pi (3)}, w_{\pi (4)}\) weights. Following Procedure 2 and noting the requirement that \(w_{\pi (1)}> w_{\pi (2)}> w_{\pi (3)} > w_{\pi (4)}\), we have that, based on Eq. (41),

$$\begin{aligned} w_{\pi (i)} = w^{(\nu )}_{\pi (i)} = \tau _{\nu }\left( \frac{i}{n} \right) - \tau _{\nu }\left( \frac{i-1}{n} \right) , \end{aligned}$$

where \(i=1,2,3,4\). Applying the generalized reduced gradient method with the constraint \(\nu \in \left( 0, \frac{1}{2} \right)\), we found that a nearly optimal solution of the minimization problem given by Eqs. (43) and (44) is \(\nu =0.2425\). For this value of parameter \(\nu\), the objective function in Eq. (44) has the value 0.0055. The results of the computations are summarized in Table 6.

Table 6 Optimal values of weights

The aggregate utility \(U_{j}\) of the jth alternative, computed using the weighted arithmetic mean

$$\begin{aligned} U_{j} = \sum _{i=1}^{n} w_{\pi (i)} x_{j,\pi (i)}, \end{aligned}$$

and the decision-maker’s \(v_{j}\) utility value assigned to the jth alternative (i.e., the aggregate utility perceived by the decision-maker) are listed for \(j=1,2,3,4,5\) in Table 7.

Table 7 Computed and perceived utility values

We see in Table 7 that the corresponding computed (\(U_{j}\)) and perceived (\(v_{j}\)) utility values are quite close to each other. This means that based on a non-increasing sequence of the attribute preferences and on a sample of evaluated alternatives, both provided by the decision-maker, by applying Procedure 2, we were able to determine the weights that describe the decision-maker’s preferences quite well.

6.1 Limitations

Let \(\textbf{W}\) denote the set of all weight vectors \(\textbf{w}=(w_{1}, w_{2}, \ldots , w_{n}) \in (0,1)^{n}\) that meet the criterion

$$\begin{aligned}&w_{\pi (n_{0}+1)} = w_{\pi (n_{0}+2)} = \cdots = w_{\pi (n_{1})} \\&> w_{\pi (n_{1}+1)} = w_{\pi (n_{1}+2)} = \cdots = w_{\pi (n_{2})}> \cdots \\&\cdots > w_{\pi (n_{k-1}+1)} = w_{\pi (n_{k-1}+2)} = \cdots = w_{\pi (n_{k})},\end{aligned}$$
(45)

where \(n \in \mathbb {N}\), \(n \ge 1\), \(k \in \mathbb {N}\), \(1 \le k \le n\),

$$\begin{aligned} 0 = n_{0} \le n_{1} \le \cdots \le n_{k-1} \le n_{k} = n \end{aligned}$$

with an arbitrarily fixed permutation \(\pi :\lbrace 1,2, \ldots , n \rbrace \rightarrow \lbrace 1,2, \ldots , n \rbrace\). Next, let \(\textbf{W}^{(\nu )}\) be the set of all weight vectors that can be produced by Procedure 2. Since the components of a weight vector produced by this procedure are not independent, i.e., all these components depend on the corresponding weighting generator function, we have that \(\textbf{W}^{(\nu )} \subset \textbf{W}\). This means that the heuristic described in Procedure 2 cannot produce all the theoretically possible weight vectors that may reflect the preferences of decision-makers.

We should also mention that another shortcoming of Procedure 2 is related to the fact that the weighting generator function has a predefined mathematical form (i.e., it is a tau function or it may be a predefined concave parametric function), and so this function cannot produce arbitrarily distributed weight values.

7 Conclusions and future research plans

In our study, we utilized the weighting generator functions, which are a class of the regular increasing monotone quantifiers, to produce weights so that the order of the produced weights corresponds to the order of the attributes (criteria) provided by a decision-maker. We showed that the weighting generator function lays the foundations for generating inverse (inverted) weights in a consistent way. We demonstrated that the derivative of these functions is suitable for producing approximate weights. Next, we presented the weighting generator functions for arithmetic and geometric weight sequences. Here, we showed that the tau function, which is a unary operator in continuous-valued logic, is also a one-parameter weighting generator function with useful properties. Besides these theoretical results, we presented a practical weight learning procedure that is mathematically simple and can be easily applied in practice. This heuristic utilizes two decision-maker-provided inputs: (1) A non-increasing sequence of the attribute preferences (criteria preferences) and (2) A sample of evaluated alternatives. The output of this procedure is a sequence of weights, which corresponds to the preference order of the attributes (criteria) provided by the decision-maker. We should add that our procedure does not require pairwise comparisons of the decision criteria, which is an advantageous property especially in cases where the number of decision attributes (decision criteria) is large. In the proposed method, we utilize one-parameter weighting generator functions, and so the weight learning procedure leads to an optimization problem where the optimal value of only one parameter needs to be found. For the proposed weighting generator functions, the parameter domain is a bounded interval, e.g., \((0,\frac{1}{2})\), \([-1,0)\) or (0, 1), and so a nearly optimal parameter value can be determined even using a brute force algorithm. This means that the proposed method can be readily implemented in practice. Microsoft Excel implementations of the four examples presented in the course of this study are available at https://github.com/dombijozsef/-Learning-the-weights-using-attribute-order-information-for-multi-criteria-decision-making-tasks.

As part of our future research, we would like to find new weighting generator functions that have useful properties in different practical applications. For example, we would like to know what the weighting generator function is for a weight sequence that satisfies a linear recursion. It would also be good to know how the weights can be determined for a given lexicographic order of decision alternatives.