1 Introduction

The problem of aggregating numerical attributes to form an overall measure is of considerable importance in many disciplines. The most commonly used aggregation is based on the weighted sum. Preference weights can be effectively introduced with the so-called Ordered Weighted Averaging (OWA) aggregation developed by Yager [18]. In the OWA aggregation the weights are assigned to the ordered values (i.e., to the smallest value, the second smallest, and so on) rather than to specific criteria. Since its introduction, the OWA aggregation has been successfully applied to many fields of decision making, including approaches modeling risk-averse preferences in decisions under uncertainty [9] as well as those requiring equity and fairness while aggregating the gains of several agents [10, 11]. The OWA operator allows us to model various aggregation functions, from the maximum through the arithmetic mean to the minimum. Thus, it enables modeling of preferences ranging from the optimistic to the pessimistic.

Several approaches have been introduced for obtaining the OWA weights with a predefined degree of orness [2, 17]. O'Hagan [7] proposed a maximum entropy approach, which involves a constrained nonlinear optimization problem with a predefined degree of orness as its constraint and the entropy as the objective function. Actually, the maximum entropy model can be transformed into a polynomial equation and then solved analytically [3]. A minimum variance approach to obtain the minimal variability OWA operator weights was also considered [4]. The minimax disparity approach proposed by Wang and Parkan [15] was the first method of finding the OWA operator weights using Linear Programming (LP). This method determines the OWA operator weights by minimizing the maximum difference between two adjacent weights under a given level of orness. The minimax disparity approach was further extended [1, 14] and related to the minimum variance approaches [6]. The maximum entropy approach has been generalized to various Minkowski metrics [20, 21], in some cases expressed with LP models [16]. An LP model of the mean absolute deviation has also been considered [8]. In this paper we analyze the possibility of using other LP-solvable models. In particular, we develop an LP model to determine the OWA operator weights by minimizing the Maximum Absolute Deviation inequality measure. In addition to the LP model, an analytical formula is also derived.

2 Orness and Inequality Measures

The OWA aggregation with weights \({\mathbf w}=(w_1,\dots ,w_m)\) of vector \({\mathbf y}=(y_1,\ldots ,y_m)\) is mathematically formalized as follows [18]. First, we introduce the ordering map \(\varTheta : R^m \rightarrow R^m\) such that \(\varTheta (\mathbf{y}) = (\theta _1(\mathbf{y}),\theta _2(\mathbf{y}),\ldots ,\theta _m(\mathbf{y}))\), where \(\theta _1(\mathbf{y}) \ge \theta _2(\mathbf{y}) \ge \cdots \ge \theta _m(\mathbf{y})\) and there exists a permutation \(\tau \) of set \(I\) such that \(\theta _i(\mathbf{y}) =y_{\tau (i)}\) for \(i=1,\ldots ,m\). Next, we apply the weighted sum aggregation to ordered vectors \(\varTheta (\mathbf{y})\), i.e. the OWA aggregation takes the following form:

$$\begin{aligned} A_\mathbf{w}(\mathbf{y}) = \sum _{i=1}^{m}\ w_i \theta _i(\mathbf{y}) . \end{aligned}$$
(1)
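As an illustration, the aggregation (1) can be sketched in a few lines of Python (the helper name `owa` is ours, not part of the formal development):

```python
# OWA aggregation (1): the weights are applied to the ordered values
# theta_1(y) >= ... >= theta_m(y), not to the original positions of y.
def owa(w, y):
    theta = sorted(y, reverse=True)  # the ordering map Theta(y)
    return sum(wi * ti for wi, ti in zip(w, theta))
```

With \(\mathbf{w} = (1,0,\ldots,0)\) this yields the maximum, with \(\mathbf{w} = (0,\ldots,0,1)\) the minimum, and with uniform weights the arithmetic mean.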

The OWA aggregation may model various preferences from the optimistic (max) to the pessimistic (min). Yager [18] introduced an appealing concept of the orness measure to characterize the OWA operators. The degree of orness associated with the OWA operator \(A_\mathbf{w}(\mathbf{y})\) is defined as

$$\begin{aligned} \text{ orness }(\mathbf{w}) = \sum _{i=1}^{m}\ \frac{m-i}{m-1} w_i \end{aligned}$$
(2)

For the max aggregation representing the fuzzy ‘or’ operator with weights \(\mathbf{w} = (1,0,\ldots ,0)\) one gets \(\text{ orness }(\mathbf{w})=1\), while for the min aggregation representing the fuzzy ‘and’ operator with weights \(\mathbf{w} = (0,\ldots ,0,1)\) one has \(\text{ orness }(\mathbf{w})=0\). For the average (arithmetic mean) one gets \(\text{ orness }((1/m,1/m,\ldots ,1/m))=1/2\). A complementary measure of andness, defined as \(\text{ andness }(\mathbf{w})= 1-\text{ orness }(\mathbf{w})\), may also be considered. OWA aggregations with orness greater than or equal to 0.5 are considered or-like, whereas aggregations with orness smaller than or equal to 0.5 are treated as and-like. The former correspond to rather optimistic preferences while the latter represent rather pessimistic (risk-averse) preferences.
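These boundary values are easy to verify computationally; a minimal sketch (the function names are ours):

```python
# Orness (2) of an OWA weight vector, and the complementary andness measure.
def orness(w):
    m = len(w)
    return sum((m - i) / (m - 1) * wi for i, wi in enumerate(w, start=1))

def andness(w):
    return 1.0 - orness(w)
```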

The OWA aggregations with monotonic weights are either or-like or and-like. Specifically, nonincreasing weights \(w_1\ge w_2 \ge \ldots \ge w_m\) define an or-like OWA operator, while nondecreasing weights \(w_1\le w_2 \le \ldots \le w_m\) define an and-like OWA operator. Actually, the orness and the andness properties of the OWA operators with monotonic weights are total in the sense that they remain valid for any subaggregations defined by subsequences of their weights. Such OWA aggregations allow one to model equitable or fair preferences [10, 11], as well as risk aversion in decisions under uncertainty [13].

Yager [19] proposed to define the OWA weighting vectors via the regular increasing monotone (RIM) quantifiers, which provide a dimension-independent description of the aggregation. A fuzzy subset \(Q\) of the real line is called a RIM quantifier if \(Q\) is (weakly) increasing with \(Q(0)=0\) and \(Q(1)=1\). The OWA weights can be defined with a RIM quantifier \(Q\) as \(w_i = Q(i/m) - Q((i-1)/m)\), and the orness measure can be extended to a RIM quantifier (as \(m \rightarrow \infty \)) as follows [19]

$$\begin{aligned} \text{ orness }(Q) = \int _0^1 Q(\alpha )\ d\alpha \end{aligned}$$
(3)

Thus, the orness of a RIM quantifier is equal to the area under it.
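As a sketch, one can generate weights from an illustrative quantifier, here \(Q(x)=x^2\), an and-like RIM quantifier with \(\text{orness}(Q)=\int_0^1 x^2\,dx = 1/3\) (the helper name is ours):

```python
# OWA weights generated by a RIM quantifier Q: w_i = Q(i/m) - Q((i-1)/m).
def rim_weights(Q, m):
    return [Q(i / m) - Q((i - 1) / m) for i in range(1, m + 1)]

# Q(x) = x**2 yields nondecreasing weights, i.e. an and-like OWA operator.
w = rim_weights(lambda x: x * x, 4)   # [1/16, 3/16, 5/16, 7/16]
```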

Monotonic weights can be uniquely defined by their distribution. First, we introduce the right-continuous cumulative distribution function (cdf):

$$\begin{aligned} F_{{\mathbf w}}(d) = \sum _{i=1}^{m}\ \frac{1}{m} \delta _i(d) \quad \text{ where } \quad \delta _i(d) = \left\{ \begin{array}{ll} 1 &{} \text{ if } w_{i} \le d\\ 0 &{} \text{ otherwise } \end{array} \right. \end{aligned}$$
(4)

which for any real value \(d\) provides the measure of weights smaller or equal to \(d\). Alternatively one may use the left-continuous right tail cumulative distribution function \(\overline{F}_{{\mathbf w}}(d)= 1- F_{{\mathbf w}}(d)\) which for any real value \(d\) provides the measure of weights greater or equal to \(d\).

Next, we introduce the quantile function \(F_{{\mathbf w}}^{(-1)}\) as the left-continuous inverse of the cumulative distribution function \(F_{{\mathbf w}}\), i.e., \(F_{{\mathbf w}}^{(-1)}(\xi ) = \inf \ \{ \eta : F_{{\mathbf w}}(\eta ) \ge \xi \}\) for \(0 < \xi \le 1\). Similarly, we introduce the right tail quantile function \(\overline{F}_{{\mathbf w}}^{(-1)}\) as the right-continuous inverse of the cumulative distribution function \(\overline{F}_{{\mathbf w}}\), i.e., \(\overline{F}_{{\mathbf w}}^{(-1)}(\xi ) = \sup \ \{ \eta : \overline{F}_{{\mathbf w}}(\eta ) \ge \xi \}\) for \(0 < \xi \le 1\). Actually, \(\overline{F}_{{\mathbf w}}^{(-1)}(\xi ) = F_{{\mathbf w}}^{(-1)}(1-\xi )\). It is the stepwise function \(\overline{F}_{{\mathbf w}}^{(-1)}(\xi ) = \theta _i({\mathbf w})\) for \(\frac{i-1}{m} < \xi \le \frac{i}{m}\).

Dispersion of the weights distribution can be described with the Lorenz curves and related inequality measures. The classical Lorenz curve is used in income economics as a cumulative population versus income curve to compare the equity of income distributions. More generally, the Lorenz curve for any distribution may be viewed [5] as a normalized integrated quantile function. In particular, for the distribution of weights \({\mathbf w}\) one gets

$$\begin{aligned} L_{{\mathbf w}}(\xi )= \frac{1}{\mu ({\mathbf w})}\int _0^\xi F_{{\mathbf w}}^{(-1)}(\alpha ) d\alpha = m \int _0^\xi F_{{\mathbf w}}^{(-1)}(\alpha ) d\alpha \end{aligned}$$
(5)

where, since we deal with normalized weights \(w_i\), we always have \(\mu ({\mathbf w})=1/m\). Graphs of functions \(L_{{\mathbf w}}(\xi )\) are piecewise linear convex curves. They are nondecreasing, due to the nonnegative weights \(w_i\). A perfectly equal distribution of weights (\(w_i=1/m\) for all \(i=1,\ldots ,m\)) has the diagonal line as the Lorenz curve.
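Since \(F_{{\mathbf w}}^{(-1)}\) is a step function taking the ascending weight values on consecutive intervals of length \(1/m\), the breakpoint values \(L_{{\mathbf w}}(i/m)\) are simply cumulative sums of the weights sorted in ascending order. A sketch (the helper name is ours):

```python
# Breakpoint values L_w(i/m) of the Lorenz curve (5): cumulative sums of the
# ascending weights (the factor m cancels against mu(w) = 1/m).
def lorenz_points(w):
    pts, s = [0.0], 0.0
    for wi in sorted(w):   # ascending order: quantile function values
        s += wi
        pts.append(s)
    return pts             # values at xi = 0, 1/m, 2/m, ..., 1
```

For perfectly equal weights the breakpoints lie on the diagonal, as stated above.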

Alternatively, the upper Lorenz curve may be used which integrates the right tail quantile function. For distribution of weights \({\mathbf w}\) one gets

$$\begin{aligned} \overline{L}_{{\mathbf w}}(\xi )= \frac{1}{\mu ({\mathbf w})}\int _0^\xi \overline{F}_{{\mathbf w}}^{(-1)}(\alpha ) d\alpha = m \int _0^\xi \overline{F}_{{\mathbf w}}^{(-1)}(\alpha ) d\alpha \end{aligned}$$
(6)

Graphs of functions \(\overline{L}_{{\mathbf w}}(\xi )\) are piecewise linear concave curves. They are nondecreasing, due to nonnegative weights \(w_i\). Similar to \(L_{{\mathbf w}}\), the vector of perfectly equal weights has the diagonal line as the upper Lorenz curve. Actually, both the classical (lower) and the upper Lorenz curves are symmetric with respect to the diagonal line in the sense that the differences

$$\begin{aligned} \bar{d}_{{\mathbf w}}(\xi ) = \overline{L}_{{\mathbf w}}(\xi ) - \xi \quad \text{ and } \quad {d}_{{\mathbf w}}(\xi ) = \xi - {L}_{{\mathbf w}}(\xi ) \end{aligned}$$
(7)

are equal for symmetric arguments: \(\bar{d}_{{\mathbf w}}(\xi ) = {d}_{{\mathbf w}}(1-\xi ) \). Hence,

$$\begin{aligned} \overline{L}_{{\mathbf w}}(\xi ) + {L}_{{\mathbf w}}(1-\xi )= 1 \quad \text{ for } \text{ any } 0 \le \xi \le 1 \end{aligned}$$
(8)

Note that in the case of nondecreasing OWA weights \(0 \le w_1 \le \ldots \le w_m \le 1\) the corresponding Lorenz curve \(L_{{\mathbf w}}(\xi )\) is (weakly) increasing with \(L_{{\mathbf w}}(0)=0\) and \(L_{{\mathbf w}}(1)=1\), and the OWA weights can be defined with \(L_{{\mathbf w}}\) as \(w_i = L_{{\mathbf w}}(i/m) - L_{{\mathbf w}}((i-1)/m)\). Hence, \(L_{{\mathbf w}}\) may then be considered as a RIM quantifier generating weights \({\mathbf w}\) [13]. Following Eq. (3), the orness measure of a RIM quantifier is given as \(\text{ orness }(L) = \int _0^1 L(\alpha )\ d\alpha \), thus equal to the area under \(L_{{\mathbf w}}\). Certainly, for any finite \(m\) the RIM orness \(\text{ orness }(L_{{\mathbf w}})\) differs from \(\text{ orness }({{\mathbf w}})\), but the difference depends only on the value of \(m\). Exactly,

$$\begin{aligned} \text{ orness }(L_{{\mathbf w}}) = \sum _{i=1}^{m}\ \frac{m-i}{m} w_i + \sum _{i=1}^{m}\ \frac{1}{2m} w_i = \frac{m-1}{m}\text{ orness }({{\mathbf w}}) + \frac{1}{2m} \end{aligned}$$
(9)

In the case of nonincreasing OWA weights \(1 \ge w_1 \ge \ldots \ge w_m \ge 0\) the corresponding upper Lorenz curve \(\overline{L}_{{\mathbf w}}(\xi )\) is (weakly) increasing with \(\overline{L}_{{\mathbf w}}(0)=0\) and \(\overline{L}_{{\mathbf w}}(1)=1\), and the OWA weights can be defined with \(\overline{L}_{{\mathbf w}}\) as \(w_i = \overline{L}_{{\mathbf w}}(i/m) - \overline{L}_{{\mathbf w}}((i-1)/m)\). Hence, \(\overline{L}_{{\mathbf w}}\) may then be considered as a RIM quantifier generating weights \({\mathbf w}\). Similar to (9), the difference between the RIM orness \(\text{ orness }(\overline{L}_{{\mathbf w}})\) and \(\text{ orness }({{\mathbf w}})\) depends only on the value of \(m\).
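Relation (9) can be checked numerically: since the Lorenz curve is piecewise linear, the trapezoidal rule over its breakpoints computes \(\int_0^1 L_{\mathbf w}(\alpha)\,d\alpha\) exactly. A sketch for nondecreasing weights (helper names are ours):

```python
# Check of Eq. (9): orness(L_w) = (m-1)/m * orness(w) + 1/(2m)
# for nondecreasing normalized weights.
def orness(w):
    m = len(w)
    return sum((m - i) / (m - 1) * wi for i, wi in enumerate(w, start=1))

def orness_of_lorenz(w):
    # w assumed nondecreasing, so its cumulative sums are the Lorenz breakpoints;
    # the trapezoidal rule is exact for a piecewise linear curve.
    m = len(w)
    pts, s = [0.0], 0.0
    for wi in w:
        s += wi
        pts.append(s)
    return sum((pts[i - 1] + pts[i]) / 2 for i in range(1, m + 1)) / m
```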

Typical inequality measures are deviation-type dispersion characteristics. They are inequality relevant, which means that they are equal to 0 in the case of perfectly equal outcomes while taking positive values for unequal ones.

The simplest inequality measures are based on the absolute measurement of the spread of outcomes, like the (Gini’s) mean absolute difference

$$\begin{aligned} \varGamma ({{\mathbf w}}) = \frac{1}{2m^2} \sum _{i=1}^{m}\ \sum _{j=1}^{m}\ |w_i - w_j| \end{aligned}$$
(10)

or the maximum absolute difference

$$\begin{aligned} D({{\mathbf w}}) = \max _{i,j=1,\ldots ,m} |w_i - w_j| . \end{aligned}$$
(11)

In most application frameworks, inequality measures related to deviations from the mean outcome may have better intuitive appeal, like the maximum absolute deviation

$$\begin{aligned} \varDelta ({{\mathbf w}}) = \max _{i \in I} |w_i - \mu ({\mathbf w})| . \end{aligned}$$
(12)

Note that the standard deviation \(\sigma \) (or the variance \(\sigma ^2\)) represents both the deviations and the spread measurement.

Deviational measures may be focused on the downside semideviations or the upper ones. One may define the maximum downside semideviation \(\varDelta ^d({{\mathbf w}})\) and the maximum upside semideviation \(\varDelta ^u({{\mathbf w}})\), respectively

$$\begin{aligned} \varDelta ^d({{\mathbf w}}) = \max _{i \in I} (\mu ({{\mathbf w}}) - w_i ) \quad \text{ and } \quad \varDelta ^u({{\mathbf w}}) = \max _{i \in I} (w_i - \mu ({{\mathbf w}})) . \end{aligned}$$
(13)

In economics one usually considers relative inequality measures normalized by the mean. Among many inequality measures, perhaps the one most commonly accepted by economists is the Gini index, which is the relative mean difference

$$\begin{aligned} G({{\mathbf w}}) = {\varGamma ({{\mathbf w}})}/{\mu ({{\mathbf w}})} = m \varGamma ({\mathbf w}) . \end{aligned}$$
(14)

Similarly, one may consider the relative maximum deviation

$$\begin{aligned} R({{\mathbf w}}) = {\varDelta ({{\mathbf w}})}/{\mu ({{\mathbf w}})} = m \varDelta ({\mathbf w}) . \end{aligned}$$
(15)

Note that due to \(\mu ({\mathbf w})=1/m\), the relative inequality measures are proportional to their absolute counterparts and any comparison of the relative measures is equivalent to comparison of the corresponding absolute measures.
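For illustration, the measures (10)–(15) can be sketched directly (function names are ours):

```python
# Inequality measures for a normalized weight vector (mu(w) = 1/m).
def gmd(w):              # Gini's mean absolute difference, Eq. (10)
    m = len(w)
    return sum(abs(a - b) for a in w for b in w) / (2 * m * m)

def max_abs_diff(w):     # maximum absolute difference D, Eq. (11)
    return max(w) - min(w)

def max_abs_dev(w):      # maximum absolute deviation Delta, Eq. (12)
    mu = sum(w) / len(w)
    return max(abs(wi - mu) for wi in w)

def gini(w):             # relative mean difference G = m * Gamma, Eq. (14)
    return len(w) * gmd(w)

def rel_max_dev(w):      # relative maximum deviation R = m * Delta, Eq. (15)
    return len(w) * max_abs_dev(w)
```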

The above inequality measures are closely related to the Lorenz curve [10] and its differences from the diagonal (equity) line (7). First of all

$$\begin{aligned} G({{\mathbf w}}) = 2 \int _0^1 \bar{d}_{{\mathbf w}}(\alpha ) d\alpha = 2 \int _0^1 {d}_{{\mathbf w}}(\alpha ) d\alpha \end{aligned}$$
(16)

thus

$$\begin{aligned} G({{\mathbf w}}) = 2 \int _0^1 \overline{L}_{{\mathbf w}}(\alpha ) d\alpha - 1 = 1 - 2 \int _0^1 {L}_{{\mathbf w}}(\alpha ) d\alpha . \end{aligned}$$
(17)

Recall that in the case of nondecreasing OWA weights \(0 \le w_1 \le \ldots \le w_m \le 1\) the corresponding Lorenz curve \(L_{{\mathbf w}}(\xi )\) may be considered as a RIM quantifier generating weights \({\mathbf w}\). Following Eq. (9), one gets

$$\begin{aligned} G({{\mathbf w}}) = 1 - 2 \text{ orness }(L_{{\mathbf w}}) = \frac{m-1}{m} (1- 2 \text{ orness }({{\mathbf w}}) ) \end{aligned}$$
(18)

enabling easy recalculation of the orness measure into the Gini index and vice versa. Similarly, in the case of nonincreasing OWA weights \(1 \ge w_1 \ge \ldots \ge w_m \ge 0\), one gets

$$\begin{aligned} G({{\mathbf w}}) = 2 \text{ orness }(\overline{L}_{{\mathbf w}}) -1 = \frac{m-1}{m} ( 2\text{ orness }({{\mathbf w}}) -1 ) . \end{aligned}$$
(19)
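Relations (18) and (19) can be verified numerically; a minimal sketch with our own helper names:

```python
# Check of Eqs. (18)-(19): for monotonic normalized weights the Gini index
# is an affine function of the orness measure.
def orness(w):
    m = len(w)
    return sum((m - i) / (m - 1) * wi for i, wi in enumerate(w, start=1))

def gini(w):
    # G = m * Gamma for normalized weights, i.e. the pairwise sum over 2m
    m = len(w)
    return sum(abs(a - b) for a in w for b in w) / (2 * m)
```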

3 Maximum Deviation Minimization

We focus on the case of monotonic weights. Following Eqs. (18) and (19), the Gini index is then uniquely defined by a given orness value. Nevertheless, one may still select various weights by minimizing the Maximum Deviation (MD) measure. Although related to the Lorenz curve, the MD measure is not uniquely defined by the Gini index and the orness measure. Actually, the MD minimization approach may be viewed as the generalized entropy maximization based on the infinity Minkowski metric [16].

Let us define differences

$$\begin{aligned} \bar{d}_i({{\mathbf w}})= \overline{L}_{{\mathbf w}}(\frac{i}{m}) - \frac{i}{m} \quad \text{ and } \quad {d}_i({{\mathbf w}}) = \frac{i}{m} - {L}_{{\mathbf w}}(\frac{i}{m}) \quad \quad \text{ for } \ i=1,\ldots ,m \end{aligned}$$
(20)

where due to nonnegativity of weights, for all \(i=1,\ldots ,m-1\)

$$\begin{aligned} \bar{d}_{i}({{\mathbf w}}) \le \frac{1}{m} + \bar{d}_{i+1}({{\mathbf w}}) \quad \text{ and } \quad {d}_{i}({{\mathbf w}}) \le \frac{1}{m} + {d}_{i-1}({{\mathbf w}}) \end{aligned}$$
(21)

with \({d}_{0}({{\mathbf w}})=\bar{d}_{0}({{\mathbf w}})=0\) and \({d}_{m}({{\mathbf w}})=\bar{d}_{m}({{\mathbf w}})=0\). Thus

$$\begin{aligned} \bar{d}_{m-i}({{\mathbf w}}) \le \frac{i}{m} \quad \text{ and } \quad {d}_{i}({{\mathbf w}}) \le \frac{i}{m} \quad \quad \text{ for } \ i=1,\ldots ,m-1 \end{aligned}$$
(22)

The Gini index represents the area defined by \(\bar{d}_i({{\mathbf w}})\) or \({d}_i({{\mathbf w}})\), respectively,

$$\begin{aligned} G({{\mathbf w}}) = \frac{2}{m} \sum _{i=1}^{m-1} \bar{d}_i({{\mathbf w}}) =\frac{2}{m} \sum _{i=1}^{m-1} {d}_i({{\mathbf w}}) \end{aligned}$$
(23)

while the relative maximum deviation may be represented as [10]

$$\begin{aligned} \begin{array}{rl} R({{\mathbf w}}) &{} = m \varDelta ({\mathbf w}) = \max \{ m \varDelta ^d({\mathbf w}), m \varDelta ^u({\mathbf w}) \} = \max \{ m {d}_1({{\mathbf w}}), m \bar{d}_1({{\mathbf w}}) \}\\ &{} = \max \{ m {d}_1({{\mathbf w}}), m {d}_{m-1}({{\mathbf w}}) \} = \max \{ m \bar{d}_1({{\mathbf w}}), m \bar{d}_{m-1}({{\mathbf w}}) \}\\ \end{array} \end{aligned}$$
(24)

Note that due to (22) \(m \varDelta ^d({\mathbf w}) \le 1\).

Assume that an orness value \(0.5 \le \alpha \le 1\) is given and we are looking for monotonic weights \(1 \ge w_1 \ge \ldots \ge w_m \ge 0\) such that \(\text{ orness }({{\mathbf w}})= \alpha \) and the (relative) maximum deviation \(R({{\mathbf w}})\) is minimal. Following Eqs. (19), (23) and (24), this leads us to the problem

$$\begin{aligned} \begin{array}{rl} &{}\min \max \{ m \bar{d}_1({{\mathbf w}}), m \bar{d}_{m-1}({{\mathbf w}}) \}\\ &{}\text{ s.t. } \displaystyle \frac{2}{m} \sum _{i=1}^{m-1} \bar{d}_i({{\mathbf w}}) =\frac{m-1}{m} ( 2\alpha -1 ) \end{array} \end{aligned}$$
(25)

with the additional constraints (22). This allows us to form the following LP model

$$\begin{aligned} \min \&md \end{aligned}$$
(26)
$$\begin{aligned} \text{ s.t. }&\bar{d}_1 \le d, \quad \bar{d}_{m-1} \le d\end{aligned}$$
(27)
$$\begin{aligned}&\bar{d}_1 + \ldots + \bar{d}_{m-1} = (m-1) ( \alpha - 0.5 ) \end{aligned}$$
(28)
$$\begin{aligned}&0 \le \bar{d}_{i} \le \frac{1}{m} + \bar{d}_{i+1} \quad \quad \quad \quad \quad \quad \quad \quad \text{ for } \ i=1,\ldots ,m-1 \end{aligned}$$
(29)

with variables \(\bar{d}_{i}\) for \(i=1,\ldots ,m-1\), auxiliary variable \(d\) and constant \(\bar{d}_{m}=0\). Having solved the above LP problem, the corresponding weights can be simply calculated according to the following formula (with \(\bar{d}_0=\bar{d}_m=0\)):

$$\begin{aligned} w_i=\bar{d}_{i}-\bar{d}_{i-1}+\frac{1}{m} \quad \quad \text{ for } \ i=1,\ldots ,m \end{aligned}$$
(30)
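As a sketch, the model (26)–(29) can be solved with SciPy's `linprog` (assuming SciPy is available; the function name `min_max_deviation_weights` is ours). Since the derivation of (24) assumes nonincreasing weights, we additionally impose the concavity constraints \(\bar{d}_{i-1} + \bar{d}_{i+1} \le 2\bar{d}_i\), which under (30) are equivalent to \(w_i \ge w_{i+1}\); they are implicit in the monotonicity assumption but not written out in (26)–(29):

```python
# LP model (26)-(29) for the or-like case 0.5 <= alpha <= 1, sketched with
# SciPy.  Variables: x = (bar_d_1, ..., bar_d_{m-1}, d).
import numpy as np
from scipy.optimize import linprog

def min_max_deviation_weights(m, alpha):
    n = m - 1                         # number of bar_d variables
    c = np.zeros(n + 1); c[-1] = m    # objective (26): minimize m*d
    A_ub, b_ub = [], []
    for j in (0, n - 1):              # (27): bar_d_1 <= d, bar_d_{m-1} <= d
        row = np.zeros(n + 1); row[j] = 1.0; row[-1] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    for j in range(n):                # (29): bar_d_i - bar_d_{i+1} <= 1/m  (bar_d_m = 0)
        row = np.zeros(n + 1); row[j] = 1.0
        if j + 1 < n: row[j + 1] = -1.0
        A_ub.append(row); b_ub.append(1.0 / m)
    for j in range(n):                # concavity: bar_d_{i-1} - 2 bar_d_i + bar_d_{i+1} <= 0
        row = np.zeros(n + 1); row[j] = -2.0
        if j - 1 >= 0: row[j - 1] = 1.0
        if j + 1 < n: row[j + 1] = 1.0
        A_ub.append(row); b_ub.append(0.0)
    A_eq = np.zeros((1, n + 1)); A_eq[0, :n] = 1.0       # (28)
    b_eq = [(m - 1) * (alpha - 0.5)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 1))
    bar_d = np.concatenate(([0.0], res.x[:n], [0.0]))    # bar_d_0 = bar_d_m = 0
    return [bar_d[i] - bar_d[i - 1] + 1.0 / m for i in range(1, m + 1)]  # (30)
```

For \(m=5\) and \(\alpha=0.6\) this reproduces the weights given by the analytical formulae derived further below.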

Symmetrically, assume that an orness value \(0 \le \alpha \le 0.5\) is given and we are looking for monotonic weights \(0 \le w_1 \le \ldots \le w_m \le 1\) such that \(\text{ orness }({{\mathbf w}})= \alpha \) and the (relative) maximum deviation \(R({{\mathbf w}})\) is minimal. Following Eqs. (18), (23) and (24), this leads us to the problem

$$\begin{aligned} \begin{array}{rl} \min &{} \max \{ m {d}_1({{\mathbf w}}), m {d}_{m-1}({{\mathbf w}}) \}\\ \text{ s.t. } &{} \displaystyle \frac{2}{m} \sum _{i=1}^{m-1} {d}_i({{\mathbf w}}) =\frac{m-1}{m} (1 - 2\alpha ) \end{array} \end{aligned}$$
(31)

with the additional constraints (22), thus leading to the LP problem

$$\begin{aligned} \begin{array}{rll} \min &{} m{d} \\ \text{ s.t. } &{} {d}_1 \le d, \quad {d}_{m-1} \le d\\ &{} {d}_1 + \ldots + {d}_{m-1} = (m-1) ( 0.5 - \alpha )\\ &{} 0 \le {d}_{i} \le \frac{1}{m} + {d}_{i-1} &{} \quad \quad \text{ for } \ i=1,\ldots ,m-1 \end{array} \end{aligned}$$
(32)

with variables \({d}_{i}\) for \(i=1,\ldots ,m-1\), auxiliary variable \(d\) and constant \({d}_{0}=0\). The corresponding weights can be found according to the formula

$$\begin{aligned} w_i={d}_{i-1}-{d}_{i}+\frac{1}{m} \quad \quad \text{ for } \ i=1,\ldots ,m \end{aligned}$$
(33)

where \({d}_0={d}_m=0\).

LP models (26)–(29) and (32) allow for the application of standard optimization techniques. However, their structure is so simple that the problem of maximum deviation minimization can also be solved analytically. We will show this in detail for the case of \(0.5 \le \text{ orness }({{\mathbf w}}) \le 1\) and the corresponding model (26)–(29) (Fig. 1).

One may take advantage of the fact that an optimal solution to the minimax problem \(\min \{ \max \{y_1, y_2\} : \mathbf{y} \in Q \}\) either has perfectly equal values \(y_1=y_2\), or one of them, say \(y_2\), reaches its upper bound \(U_2= \max \{y_2 : \mathbf{y} \in Q \}\) while the other takes a larger value \(y_1>U_2\). Hence, when the required orness level is small enough (though not below 0.5), the optimal solution is defined by

$$ m \bar{d}_1 = m \bar{d}_{m-1} = \bar{h} $$

where \(\bar{h}\) is defined by the orness Eq. (28) while leaving inequalities (29) inactive. The optimal solution is then defined by

$$ \frac{m}{i} \bar{d}_i = \frac{m}{i} \bar{d}_{m-i} = \bar{h} \quad \text{ for } \ 1 \le i \le \frac{m}{2} $$
Fig. 1. Areas under the Lorenz curve for minimal maximum deviation: even (a) vs. odd (b) number of weights

In the case of odd \(m=2n+1\) one has

$$ \bar{d}_i = \frac{i}{m} \bar{h} \quad \text{ and } \quad \bar{d}_{m-i} = \frac{i}{m} \bar{h} \quad \text{ for } \ 1 \le i \le n $$

thus leading to the equation

$$ \bar{d}_1 + \ldots + \bar{d}_{m-1} = 2\sum _{i=1}^{n} \frac{i}{m}\bar{h} = \frac{n(n+1)}{m} \bar{h} = (m-1) ( \alpha - 0.5 ) $$

and \(\bar{h}= \frac{4m( \alpha - 0.5 )}{m+1}\). Note that following Eq. (30) such a solution is generated by weights:

$$\begin{aligned}&\displaystyle w_i = \frac{1}{m} + \frac{4(\alpha - 0.5)}{m+1} \quad \text{ for } \ i=1,\ldots ,n\\&\displaystyle w_{n+1} = \frac{1}{m} \\&w_i = \frac{1}{m} - \frac{4(\alpha - 0.5)}{m+1} \quad \text{ for } \ i=n+2,\ldots ,m \end{aligned}$$

In the case of even \(m=2n\) one has

$$ \bar{d}_i = \frac{i}{m} \bar{h} \quad \text{ and } \quad \bar{d}_{m-i} = \frac{i}{m} \bar{h} \quad \text{ for } \ 1 \le i \le n $$

although \(\bar{d}_n\) and \(\bar{d}_{m-n}\) are the same variable. This leads to the equation

$$ \bar{d}_1 + \ldots + \bar{d}_{m-1} = \sum _{i=1}^{n} \frac{i}{m}\bar{h} + \sum _{i=1}^{n-1} \frac{i}{m}\bar{h} = \frac{n^2}{m} \bar{h} = (m-1)( \alpha - 0.5 ) $$

and \(\bar{h}= \frac{4(m-1)( \alpha - 0.5 )}{m}\). Note that following Eq. (30) such a solution is generated by weights:

$$\begin{aligned}&\displaystyle w_i = \frac{1}{m} + \frac{4(m-1)(\alpha - 0.5)}{m^2} \quad \text{ for } \ i=1,\ldots ,n\\&w_i = \frac{1}{m} - \frac{4(m-1)(\alpha - 0.5)}{m^2} \quad \text{ for } \ i=n+1,\ldots ,m \end{aligned}$$

The above analytical formulae for weights are valid as long as the required orness level \(\alpha \) is small enough (still not below 0.5) allowing constraint (22) to remain inactive. This is equivalent to the restriction \(\bar{h} \le 1\) thus

$$ \alpha \le \frac{m+1}{4m} + 0.5 \quad \text{ and } \quad \alpha \le \frac{m}{4(m-1)} + 0.5 $$

for odd and even \(m\), respectively.
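The closed-form weights for both parities can be collected in one sketch (the function name is ours; valid only while \(\bar h \le 1\), i.e. within the bounds just stated):

```python
# Closed-form minimal-MD weights for 0.5 <= alpha <= 1, within the range
# where constraint (22) stays inactive (no weights are forced to zero).
def analytic_md_weights(m, alpha):
    if m % 2 == 1:                                   # odd m = 2n + 1
        n = m // 2
        step = 4 * (alpha - 0.5) / (m + 1)           # = bar_h / m
        return [1/m + step] * n + [1/m] + [1/m - step] * n
    n = m // 2                                       # even m = 2n
    step = 4 * (m - 1) * (alpha - 0.5) / m**2
    return [1/m + step] * n + [1/m - step] * n
```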

When the required orness level is higher, then constraint (22) becomes active, thus enforcing zero weights within the second part of the sequence. Specifically, there exists \(1 \le \kappa \le m/2\) such that

$$ w_1 = \ldots = w_{\kappa -1} \ge w_\kappa \ge w_{\kappa +1} = \ldots = w_m = 0 $$

where \(\frac{m}{i} \bar{d}_{m-i}({{\mathbf w}}) = 1\) for \(i < m-\kappa \).

4 Conclusion

The determination of ordered weighted averaging (OWA) operator weights is a crucial issue in applying the OWA operator to decision making. We have considered determining monotonic weights of the OWA operator by minimization of the maximum (absolute) deviation inequality measure. This leads to a linear programming model which can also be solved analytically. The analytic approach results in simple direct formulas. The LP models allow us to find weights by the use of efficient LP optimization techniques, and they enable easy enhancement of the preference model with additional requirements on the weights' properties. The latter is the main advantage over the standard method of entropy maximization. Both the standard method and the proposed one have analytical solutions. However, if we try to elaborate them further by adding some auxiliary (linear) constraints on the OWA weights, then the entropy maximization model forms a difficult nonlinear optimization task, while the maximum deviation minimization remains easily LP-solvable.