1 Introduction

The popularity of pairwise comparisons methods in the field of multi-criteria decision analysis is largely due to their simplicity. It is easier for a decision maker to compare two objects at a time than to compare larger collections of them. Although the first systematic use of pairwise comparisons is attributed to Ramon Llull [9], a thirteenth-century alchemist and mathematician, it can be assumed that prehistoric people also used this method in practice. At first, people were interested in qualitative comparisons; over time, however, the method acquired a quantitative character. The twentieth-century precursor of the quantitative use of pairwise comparisons was Thurstone, who harnessed this method to compare social values [52]. Continued studies on the pairwise comparisons method [16, 44, 53] resulted in the seminal work by Saaty [49]. In his article, Saaty proposed the Analytic Hierarchy Process (AHP), a new multiple-criteria decision-making method based on pairwise comparisons. Thanks to the popularity of AHP, the pairwise comparisons method has become one of the most frequently used decision-making techniques. Numerous variants and extensions of the pairwise comparisons method have found applications in economics [46], consumer research [20], management [41, 45, 54], construction [15], military science [24], education and science [28, 42], chemical engineering [14], the oil industry [23] and other fields. The pairwise comparisons method is still under development and inspires researchers working on the inconsistency of paired comparisons [2, 4, 7, 33, 37], incompleteness of decision data [13, 18, 39, 40], data accuracy [27], priority calculation [5, 25, 34, 36, 43], representation of uncertain knowledge [1, 47, 57], as well as new methods based on the pairwise comparisons principle [32, 35, 47, 48].

The popularity of decision-making methods makes them vulnerable to attacks and manipulations. This problem has been studied by several researchers, including Yager [55, 56], who considered strategic preferential manipulations, Dong et al. [17], who addressed manipulation in group decision making, and Sasaki [50], who studied strategic manipulation in the context of group decisions with pairwise comparisons. Recently, two heuristics enabling the detection of manipulators and minimizing their effect on the group consensus have been introduced in [38]. In [30], the risk of incorrect extrapolation of the number of COVID cases caused by misreported data has been reduced by considering data from various countries. Some aspects of decision manipulation in the context of electoral systems are presented in [21, 22, 51]. Faramondi et al. [19] address the problem of rank reversal for the pairwise comparisons method equipped with information on judgment uncertainty.

In the presented work, we take a step towards determining the degree of difficulty of manipulation in the pairwise comparisons method. For this purpose, we propose an algorithm for calculating the closest approximation of a pairwise comparisons matrix (PCM) that equates the priorities of two selected alternatives. We apply a technique of orthogonal projections similar to that used in [26, 31]. The difference between the weights of the alternatives indicates the degree of difficulty of a given manipulation. Although the reasoning is carried out for additive matrices, the obtained result is also valid for multiplicative matrices.

It must be stressed that recently Faramondi et al. [19] have proposed an optimisation model to identify a suitable perturbation of the available pairwise comparisons that alters the ordinal ranking of a selected pair of alternatives. Their idea was to express the solution to this problem as the minimum of an appropriate function subject to some constraints. This can be done by resorting to commercial solvers.

In fact, [19] solves a more general problem than the one stated here, including the case of incomplete PCMs. Furthermore, it allows different levels of element-wise perturbation intensity. On the other hand, we reformulate the problem in algebraic terms and then derive and prove explicit formulas for the solution, which is the main advantage of the presented work.

The article consists of four sections. Introduction (Sect. 1) and Preliminaries (Sect. 2) present the state of research and introduce basic concepts and definitions in the field of the pairwise comparisons method. The third section, Towards optimal manipulation of a pair of alternatives, defines the procedure to construct a manipulated pairwise comparisons matrix. It also contains a method for determining the difficulty of manipulation. The article ends with Conclusion (Sect. 4), summarizing the achieved results.

2 Preliminaries

2.1 Multiplicative pairwise comparisons matrices

Let us assume that we want to perform pairwise comparisons of a finite set \(E=\{e_{1},\ldots ,e_{n}\}\) of alternatives. The comparisons can be expressed by a pairwise comparisons matrix (PCM) \(M=[m_{ij}]\) with positive elements satisfying the reciprocity condition

$$\begin{aligned} m_{ij}\cdot m_{ji}=1, \end{aligned}$$
(1)

for \(i,j\in \{1,\ldots ,n\}\). Let us denote the set of all PCMs by \(\mathcal {M}\).

Given \(M\in \mathcal {M}\), we can apply different procedures to assign a positive weight \(w_{k}\) to each alternative \(e_{k}\) (\(k\in \{1,\ldots ,n\}\)). The weight of an alternative determines its position in the ranking of the alternatives (see [8] for a survey of weighting methods).

A popular weighting method is the Geometric Mean Method (GMM), introduced in [10]. In this method the weight \(w_{k}\) is calculated as the geometric mean of the elements of the k-th row:

$$\begin{aligned} w_{k}=\root n \of {\prod _{j=1}^{n}m_{kj}}. \end{aligned}$$
(2)

Normalization of the resulting weight vector is needed for a number of reasons (see [29]). If we want to standardize it, we divide each coordinate by the sum of all of them:

$$\begin{aligned} \hat{w}_{k}=\frac{w_{k}}{\sum _{j=1}^{n}w_{j}}. \end{aligned}$$
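For illustration, here is a minimal Python sketch of the GMM weights (2) and of the normalization above; the matrix M below is an arbitrary example of ours, not one taken from the paper.

```python
import numpy as np

def gmm_weights(M):
    """Geometric Mean Method: w_k is the geometric mean of the k-th row of M."""
    M = np.asarray(M, dtype=float)
    return np.prod(M, axis=1) ** (1.0 / M.shape[0])

def normalize(w):
    """Divide each coordinate by the sum of all coordinates."""
    return w / np.sum(w)

# A small reciprocal PCM used only for illustration.
M = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])

print(normalize(gmm_weights(M)))  # approx. [0.571 0.286 0.143]
```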

Another popular method, introduced in [49], is the Eigenvector Method (EVM). Here the priority vector is chosen as the normalized right eigenvector corresponding to the principal eigenvalue of M. This vector can be obtained by means of power iteration.
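A sketch of the power iteration mentioned above; the tolerance and iteration cap are illustrative choices of ours, and the iterate is normalized so that its coordinates sum to one.

```python
import numpy as np

def evm_weights(M, tol=1e-12, max_iter=10_000):
    """Approximate the principal right eigenvector of a positive PCM by power iteration."""
    M = np.asarray(M, dtype=float)
    v = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(max_iter):
        v_next = M @ v
        v_next /= v_next.sum()          # keep the iterate normalized
        if np.linalg.norm(v_next - v, ord=1) < tol:
            return v_next
        v = v_next
    return v
```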

The main advantage of the GMM over the EVM is its simplicity. Furthermore, as [11] and [12] show, the rank monotonicity and weight monotonicity axioms are satisfied by the GMM but violated by the EVM. Since monotonicity is strongly related to the concept of manipulation (see e.g. [50]), we adopt the GMM as the weighting method in this paper.

2.2 Additive pairwise comparisons matrices

The family \(\mathcal {M}\) is not a linear space. However, we can easily transform every multiplicative PCM M into an additive one using the following map:

$$\begin{aligned} \varphi :\ \mathcal {M}\ni [m_{ij}]\mapsto [\ln (m_{ij})]\in \mathcal {A}, \end{aligned}$$

where

$$\begin{aligned} \mathcal {A}:=\{[a_{ij}]:\ \forall i,j\in \{1,\ldots ,n\}\ a_{ij}\in \mathbb {R}\text { and }a_{ij}+a_{ji}=0\}, \end{aligned}$$

is a linear space of additive PCMs.

Obviously, we can define the map

$$\begin{aligned} \mu :\ \mathcal {A}\ni [a_{ij}]\mapsto [e^{a_{ij}}]\in \mathcal {M}, \end{aligned}$$

such that

$$\begin{aligned} \mu \circ \varphi =id_{\mathcal {M}} \end{aligned}$$

and

$$\begin{aligned} \varphi \circ \mu =id_{\mathcal {A}}. \end{aligned}$$

Since \(\varphi \) and \(\mu \) are mutually inverse, from now on we will consider only the additive case in order to exploit the algebraic structure of \(\mathcal {A}\).

If we treat an additive PCM A as the image of \(M\in \mathcal {M}\) by the map \(\varphi \), we can also obtain the vector of weights v by use of the logarithmic mapping:

$$\begin{aligned} v_{k}=\ln (w_{k}). \end{aligned}$$

By applying (2) we get

$$\begin{aligned} v_{k}=\ln \left( \root n \of {\prod _{j=1}^{n}m_{kj}}\right) =\frac{\ln \left( \prod _{j=1}^{n}m_{kj}\right) }{n}=\frac{\sum _{j=1}^{n}\ln (m_{kj})}{n}=\frac{\sum _{j=1}^{n}a_{kj}}{n}, \end{aligned}$$

so the k-th coordinate of v can be calculated as the arithmetic mean of the k-th row of A.
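This correspondence can be checked numerically; a short sketch (the matrix M is again our own illustrative example):

```python
import numpy as np

M = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])

A = np.log(M)                         # the map phi: multiplicative -> additive
v = A.mean(axis=1)                    # arithmetic means of the rows of A
w = np.prod(M, axis=1) ** (1.0 / 3)   # GMM weights (2) of M

print(np.allclose(v, np.log(w)))      # True: v_k = ln(w_k)
```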

3 Towards optimal manipulation of a pair of alternatives

Let us start with a simple example.

Example 1

Consider a family of additive pairwise comparisons matrices

$$\begin{aligned} A_{\varepsilon }=\left[ \begin{array}{ccc} 0 &{} 1+\varepsilon &{} -1\\ -1-\varepsilon &{} 0 &{} 1\\ 1 &{} -1 &{} 0 \end{array}\right] . \end{aligned}$$

If we take \(\varepsilon =\frac{1}{n}\) and \(\varepsilon =-\frac{1}{n}\) (\(n\in \mathbb {N}\)), we obtain two PCMs, whose weight vectors are

$$\begin{aligned} v_{\frac{1}{n}}=\left( \frac{1}{3n},-\frac{1}{3n},0\right) ^{T} \end{aligned}$$

and

$$\begin{aligned} v_{-\frac{1}{n}}=\left( -\frac{1}{3n},\frac{1}{3n},0\right) ^{T}, \end{aligned}$$

respectively.

This implies that the ranking of the alternatives is \((e_{1},e_{3},e_{2})\) in the first case and \((e_{2},e_{3},e_{1})\) in the second case.

Since the standard Frobenius distance of the matrices is

$$\begin{aligned} \left| \left| A_{\frac{1}{n}}-A_{-\frac{1}{n}}\right| \right| =\sqrt{\left( \frac{2}{n}\right) ^{2}+\left( \frac{2}{n}\right) ^{2}}=\frac{2\sqrt{2}}{n}, \end{aligned}$$

they can be arbitrarily close.
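The numbers in Example 1 can be reproduced with a short sketch (here n is the parameter of \(\varepsilon =\pm \frac{1}{n}\), not the matrix size):

```python
import numpy as np

def A_eps(eps):
    """The additive PCM A_eps from Example 1."""
    return np.array([[0.0, 1.0 + eps, -1.0],
                     [-1.0 - eps, 0.0, 1.0],
                     [1.0, -1.0, 0.0]])

n = 10
print(A_eps(1 / n).mean(axis=1))                       # ( 1/(3n), -1/(3n), 0)
print(A_eps(-1 / n).mean(axis=1))                      # (-1/(3n),  1/(3n), 0)
print(np.linalg.norm(A_eps(1 / n) - A_eps(-1 / n)))    # 2*sqrt(2)/n
```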

Assume that \(w_{i}\) and \(w_{j}\) are the weights of the i-th and j-th alternatives induced by \(A\in \mathcal {A}\), with \(w_{i}<w_{j}\). Example 1 shows that, in general, there is no matrix \(A'\in \mathcal {A}\) which is the closest to A among all matrices whose weights satisfy \(w'_{i}>w'_{j}\): the infimum of the distance over this set need not be attained.

However, it is possible to find \(A'\in \mathcal {A}\) minimizing the distance to A such that \(w'_{i}=w'_{j}\).

3.1 The tie spaces

Fix \(i,j\in \{1,\ldots ,n\}\).

Let us define the subspace \({{\mathcal {A}}}_{ij}\) of all additive PCMs which induce a ranking in which the i-th and j-th alternatives are tied:

$$\begin{aligned} {{\mathcal {A}}}_{ij}=\left\{ A\in \mathcal{A}:\frac{1}{n}\sum _{k=1}^{n}a_{ik}=\frac{1}{n}\sum _{k=1}^{n}a_{jk}\right\} . \end{aligned}$$

We will call such a linear space a tie space.

Lemma 1

\(\dim {{\mathcal {A}}}_{ij}=\frac{n^{2}-n}{2}-1\).

Proof

A reciprocal additive matrix is uniquely determined by the \(\frac{n^{2}-n}{2}\) entries above its diagonal, so \(\dim {{\mathcal {A}}}=\frac{n^{2}-n}{2}\). The tie condition defining \({{\mathcal {A}}}_{ij}\) is a single nontrivial linear equation in these entries (the coefficient of, e.g., \(a_{ik}\) with \(k\ne i,j\) is nonzero), so it lowers the dimension by exactly one. \(\square \)

Now let us define a basis for the tie space \(\mathcal{A}_{ij}\). Without loss of generality, we can assume that \(i<j<n\). Let

$$\begin{aligned} Z_{ij}:=\{(q,r):\ 1\le q<r\le n,\ \{q,r\}\cap \{i,j\}=\emptyset \}. \end{aligned}$$

Lemma 2

The set \(Z_{ij}\) has \(\frac{(n-2)(n-3)}{2}\) elements.

Proof

The number of all elements above the main diagonal is \(\frac{n^{2}-n}{2}\). Among them, the elements whose indices involve i or j are:

  • \(i-1\) elements in the i-th column,

  • \(j-1\) elements in the j-th column,

  • \(n-i-1\) elements (excluding \(a_{ij}\)) in the i-th row,

  • \(n-j\) elements in the j-th row.

Thus, the total number of \(Z_{ij}\) elements equals

$$\begin{aligned} \overline{\overline{Z_{ij}}}= & {} \frac{n^{2}-n}{2}-(i-1)-(j-1)-(n-i-1)-(n-j)\\= & {} \frac{n^{2}-n}{2}-2n+3=\frac{n^{2}-n-4n+6}{2}=\frac{(n-2)(n-3)}{2}. \end{aligned}$$

\(\square \)

First, for each \((q,r)\in Z_{ij}\), let us define \(C^{qr}\in {{\mathcal {A}}}\), whose elements are given by

$$\begin{aligned} c_{kl}^{qr}=\left\{ \begin{array}{rl} 1, &{} k=q,\ l=r\\ -1, &{} k=r,\ l=q\\ 0, &{} \text {otherwise} \end{array}\right. . \end{aligned}$$

Next, we define the elements of additive PCMs \(D^{p}\) for \(p\in \{1,\ldots ,i-1\}\), \(E^{p}\) for \(p\in \{1,\ldots ,j-1\}\), \(F^{p}\) for \(p\in \{i+1,\ldots ,j-1,j+1,\ldots ,n\}\) and \(G^{p}\) for \(p\in \{j+1,\ldots ,n-1\}\) by formulas:

$$\begin{aligned} d_{kl}^{p}= & {} \left\{ \begin{array}{rl} 1, &{} (k=p,\ l=i)\text { or }(k=n,\ l=j)\\ -1, &{} (k=i,\ l=p)\text { or }(k=j,\ l=n)\\ 0, &{} \text {otherwise} \end{array}\right. ,\\ e_{kl}^{p}= & {} \left\{ \begin{array}{rl} 1, &{} (k=p,\ l=j)\text { or }(k=j,\ l=n,\ p\ne i)\\ -1, &{} (k=j,\ l=p)\text { or }(k=n,\ l=j,\ p\ne i)\\ 2, &{} k=j,\ l=n,\ p=i\\ -2, &{} k=n,\ l=j,\ p=i\\ 0, &{} \text {otherwise} \end{array}\right. ,\\ f_{kl}^{p}= & {} \left\{ \begin{array}{rl} 1, &{} (k=i,\ l=p)\text { or }(k=j,\ l=n)\\ -1, &{} (k=p,\ l=i)\text { or }(k=n,\ l=j)\\ 0, &{} \text {otherwise} \end{array}\right. , \end{aligned}$$

and

$$\begin{aligned} g_{kl}^{p}=\left\{ \begin{array}{rl} 1, &{} (k=j,\ l=p)\text { or }(k=n,\ l=j)\\ -1, &{} (k=p,\ l=j)\text { or }(k=j,\ l=n)\\ 0, &{} \text {otherwise} \end{array}\right. . \end{aligned}$$
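A sketch of this construction in Python; the helper names skew and tie_space_basis are ours, indices are 1-based as in the text (and converted internally), and we assume \(i<j<n\) as above.

```python
import numpy as np

def skew(n, entries):
    """Additive PCM with the given entries (1-based index pairs) and their negatives."""
    A = np.zeros((n, n))
    for (k, l), v in entries.items():
        A[k - 1, l - 1] = v
        A[l - 1, k - 1] = -v
    return A

def tie_space_basis(n, i, j):
    """The matrices C, D, E, F, G defined above (assuming i < j < n)."""
    basis = []
    for q in range(1, n + 1):                       # C^{qr}, (q, r) in Z_ij
        for r in range(q + 1, n + 1):
            if {q, r}.isdisjoint({i, j}):
                basis.append(skew(n, {(q, r): 1}))
    for p in range(1, i):                           # D^p
        basis.append(skew(n, {(p, i): 1, (n, j): 1}))
    for p in range(1, j):                           # E^p
        basis.append(skew(n, {(p, j): 1, (j, n): 2 if p == i else 1}))
    for p in list(range(i + 1, j)) + list(range(j + 1, n + 1)):   # F^p
        basis.append(skew(n, {(i, p): 1, (j, n): 1}))
    for p in range(j + 1, n):                       # G^p
        basis.append(skew(n, {(j, p): 1, (n, j): 1}))
    return basis
```

For \(n=5\), \(i=2\), \(j=3\) this reproduces the nine matrices \(B^{1},\ldots ,B^{9}\) listed in Example 2 below, and the length of the returned list equals \(\frac{n^{2}-n}{2}-1\).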

Lemma 3

The total number of matrices \(C^{q,r}\), \(D^{p}\), \(E^{p}\), \(F^{p}\) and \(G^{p}\) is \(\frac{n^{2}-n}{2}-1\).

Proof

Summing up the numbers of the consecutive matrices, we get the number \(\frac{(n-2)(n-3)}{2}+(i-1)+(j-1)+(n-i-1)+(n-j-1)=\frac{(n-2)(n-3)}{2}+2n-4=\frac{n^{2}-5n+6+4n-8}{2}=\frac{n^{2}-n}{2}-1.\) \(\square \)

Theorem 1

A family of matrices

$$\begin{aligned} {{\mathcal {B}}}:=\{C^{qr}\}_{(q,r)\in Z_{ij}}\cup \{D^{p}\}_{p=1}^{i-1}\cup \{E^{p}\}_{p=1}^{j-1}\cup \{F^{p}\}_{p=i+1}^{j-1}\cup \{F^{p}\}_{p=j+1}^{n}\cup \{G^{p}\}_{p=j+1}^{n-1} \end{aligned}$$

is a basis of \({{\mathcal {A}}}_{ij}\).

Proof

By Lemmas 1 and 3, the cardinality of \({{\mathcal {B}}}\) equals the dimension of \({{\mathcal {A}}}_{ij}\), so it is enough to show that each matrix \(A\in {{\mathcal {A}}}_{ij}\) is generated by matrices from \({{\mathcal {B}}}\).

For this purpose, let us define a matrix H as a linear combination of matrices from \({{\mathcal {B}}}\):

$$\begin{aligned} H:= & {} \sum _{(q,r)\in Z_{ij}}a_{qr}C^{qr}+\sum _{p=1}^{i-1}a_{pi}D^{p}+\sum _{p=1}^{j-1}a_{pj}E^{p}+\sum _{p=i+1}^{j-1}a_{ip}F^{p}\\{} & {} +\sum _{p=j+1}^{n}a_{ip}F^{p}+ \sum _{p=j+1}^{n-1}a_{jp}G^{p}. \end{aligned}$$

It is straightforward (as all but one of the addends are zero) that for \((q,r)\not \in \{(j,n),(n,j)\}\) we have

$$\begin{aligned} h_{qr}=a_{qr}. \end{aligned}$$

Likewise,

$$\begin{aligned} h_{jn}= & {} \sum _{(q,r)\in Z_{ij}}a_{qr}\cdot 0+\sum _{p=1}^{i-1}a_{pi}\cdot (-1)+\sum _{p=1}^{i-1}a_{pj}\cdot 1+a_{ij}\cdot 2+\sum _{p=i+1}^{j-1}a_{pj}\cdot 1\\{} & {} + \sum _{p=i+1}^{j-1}a_{ip}\cdot 1+\sum _{p=j+1}^{n}a_{ip}\cdot 1+\sum _{p=j+1}^{n-1}a_{jp}\cdot (-1)\\= & {} \sum _{p=1}^{j-1}a_{pj}-\sum _{p=1}^{i-1}a_{pi}+\sum _{p=i+1}^{n}a_{ip}-\sum _{p=j+1}^{n-1}a_{jp}=a_{jn}. \end{aligned}$$

The last equality follows from the reciprocity of A and the equation

$$\begin{aligned} \sum _{p=1}^{n}a_{ip}=\sum _{p=1}^{n}a_{jp}. \end{aligned}$$

By analogy, \(h_{nj}=a_{nj}\), so \(A=H,\) which completes the proof. \(\square \)

Let us redefine \({{\mathcal {B}}}\) as a family of matrices

$$\begin{aligned} {{\mathcal {B}}}=\{B^{p}\}_{p=1}^{\frac{n^{2}-n}{2}-1} \end{aligned}$$

as follows:

We set \(\{B^{p}\}_{p=1}^{\frac{(n-2)(n-3)}{2}}\) to be the matrices \(\{C^{qr}\}_{(q,r)\in Z_{ij}}\), ordered lexicographically, i.e.

$$\begin{aligned} B^{1}:= & {} C^{12},B^{2}:=C^{13},\ldots ,B^{i-2}:=C^{1,i-1},B^{i-1}:=C^{1,i+1},\ldots ,\nonumber \\ B^{j-3}:= & {} C^{1,j-1},B^{j-2}:=C^{1,j+1},\ldots ,B^{n-3}:=C^{1,n},\ldots ,B^{\frac{(n-2)(n-3)}{2}}:=C^{n-1,n}. \end{aligned}$$
(3)

Next, we define

$$\begin{aligned}{} & {} B^{\frac{(n-2)(n-3)}{2}+p}:=D^{p}, \quad p=1,\ldots ,i-1, \end{aligned}$$
(4)
$$\begin{aligned}{} & {} B^{\frac{(n-2)(n-3)}{2}+i-1+p}:=E^{p}, \quad p=1,\ldots ,j-1,\end{aligned}$$
(5)
$$\begin{aligned}{} & {} B^{\frac{(n-2)(n-3)}{2}+j-2+p}:=F^{p}, \quad p=i+1,\ldots ,j-1,\end{aligned}$$
(6)
$$\begin{aligned}{} & {} B^{\frac{(n-2)(n-3)}{2}+j-3+p}:=F^{p}, \quad p=j+1,\ldots ,n,\end{aligned}$$
(7)
$$\begin{aligned}{} & {} B^{\frac{(n-2)(n-3)}{2}+n-3+p}:=G^{p},\quad p=j+1,\ldots ,n-1. \end{aligned}$$
(8)

Example 2

Consider \(n=5,\ i=2\) and \(j=3\).

Since \(Z_{23}=\{(1,4),(1,5),(4,5)\}\), we get the following basis of \({{\mathcal {A}}}_{23}\):

$$\begin{aligned} B^{1}= & {} C^{14}=\left( \begin{array}{rrrrr} 0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}1 &{} \hspace{0.3cm}0\\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ -1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \end{array}\hspace{0.24cm}\right) , B^{2}=C^{15}=\left( \begin{array}{rrrrr} 0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}1\\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ -1 &{} 0 &{} 0 &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) ,\\ B^{3}= & {} C^{45}=\left( \begin{array}{rrrrr} \hspace{0.27cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 1\\ 0 &{} 0 &{} 0 &{} -1 &{} 0 \end{array}\hspace{0.24cm}\right) , B^{4}=D^{1}=\left( \begin{array}{rrrrr} 0 &{} \hspace{0.3cm}1 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ -1 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} -1\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) ,\\ B^{5}= & {} E^{1}=\left( \begin{array}{rrrrr} 0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}1 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ -1 &{} 0 &{} 0 &{} 0 &{} 1\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} -1 &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) , B^{6}=E^{2}=\left( \begin{array}{rrrrr} \hspace{0.27cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ 0 &{} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} -1 &{} 0 &{} 0 &{} 2\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} -2 &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) ,\\ B^{7}= & {} F^{4}=\left( \begin{array}{rrrrr} \hspace{0.27cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ 0 &{} 0 &{} 0 &{} 1 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 1\\ 0 &{} -1 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} -1 &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) , B^{8}=F^{5}=\left( \begin{array}{rrrrr} \hspace{0.27cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ 0 &{} 0 &{} 0 &{} 0 &{} 1\\ 0 &{} 0 &{} 0 &{} 0 &{} 1\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} -1 &{} -1 &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) ,\\ B^{9}= & {} G^{4}=\left( \begin{array}{rrrrr} \hspace{0.27cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 &{} -1\\ 0 &{} 0 &{} -1 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) . \end{aligned}$$

3.2 Orthogonalization

As shown by Boyd [3], any basis of an inner product space can be transformed into an orthogonal basis by the standard Gram–Schmidt process.

In particular, if we apply this process to the basis \(B^{1},\ldots ,B^{\frac{n^{2}-n}{2}-1}\) of the vector space \({{\mathcal {A}}}_{ij}\) equipped with the standard Frobenius inner product \(\langle \cdot ,\cdot \rangle \), then we obtain a pairwise orthogonal basis

$$\begin{aligned} H^{1},\ldots ,H^{\frac{n^{2}-n}{2}-1} \end{aligned}$$

as follows:

$$\begin{aligned} H^{1}= & {} B^{1},\\ H^{2}= & {} B^{2}-\frac{\langle H^{1},B^{2}\rangle }{\langle H^{1},H^{1}\rangle }H^{1},\\ H^{3}= & {} B^{3}-\frac{\langle H^{1},B^{3}\rangle }{\langle H^{1},H^{1}\rangle }H^{1}-\frac{\langle H^{2},B^{3}\rangle }{\langle H^{2},H^{2}\rangle }H^{2},\\ \cdots= & {} \cdots \\ H^{\frac{n^{2}-n}{2}-1}= & {} B^{\frac{n^{2}-n}{2}-1}-\sum _{p=1}^{\frac{n^{2}-n}{2}-2}\frac{\langle H^{p},B^{\frac{n^{2}-n}{2}-1}\rangle }{\langle H^{p},H^{p}\rangle }H^{p}. \end{aligned}$$
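A direct transcription of this process (a sketch; frobenius and gram_schmidt are our helper names, and tie_space_basis is from the sketch given after the basis definitions):

```python
import numpy as np

def frobenius(X, Y):
    """Frobenius inner product <X, Y>_F."""
    return float(np.sum(X * Y))

def gram_schmidt(basis):
    """Classical Gram-Schmidt with respect to the Frobenius inner product."""
    ortho = []
    for B in basis:
        H = B.astype(float).copy()
        for Hp in ortho:
            H = H - (frobenius(Hp, B) / frobenius(Hp, Hp)) * Hp
        ortho.append(H)
    return ortho
```

Applied to tie_space_basis(5, 2, 3), this reproduces (up to rounding) the matrices \(H^{1},\ldots ,H^{9}\) computed by hand in Example 3.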

Example 3

Let us consider matrices \(B^{1},\ldots ,B^{9}\) from Example 2. We will apply the Gram–Schmidt process to obtain an orthogonal basis \(H^{1},\ldots ,H^{9}\) of \({{\mathcal {A}}}_{23}\):

$$\begin{aligned} H^{1}= & {} B^{1},\\ \langle H^{1},B^{2}\rangle= & {} 0\Rightarrow H^{2}=B^{2},\\ \langle H^{1},B^{3}\rangle= & {} \langle H^{2},B^{3}\rangle =0\Rightarrow H^{3}=B^{3},\\ \langle H^{1},B^{4}\rangle= & {} \langle H^{2},B^{4}\rangle =\langle H^{3},B^{4}\rangle =0\Rightarrow H^{4}=B^{4},\\ \langle H^{1},B^{5}\rangle= & {} \langle H^{2},B^{5}\rangle =\langle H^{3},B^{5}\rangle =0,\ \langle H^{4},B^{5}\rangle =-2,\ \langle H^{4},H^{4}\rangle =4\\\Rightarrow & {} H^{5}=B^{5}+\frac{1}{2}H^{4}=\left( \begin{array}{rrrrr} 0 &{} \hspace{0.3cm}\frac{1}{2} &{} \hspace{0.3cm}1 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ -\frac{1}{2} &{} 0 &{} 0 &{} 0 &{} 0\\ -1 &{} 0 &{} 0 &{} 0 &{} \frac{1}{2}\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} -\frac{1}{2} &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) ,\\ \langle H^{1},B^{6}\rangle= & {} \langle H^{2},B^{6}\rangle =\langle H^{3},B^{6}\rangle =0,\ \langle H^{4},B^{6}\rangle =-4,\ \langle H^{5},B^{6}\rangle =2,\\ \langle H^{5},H^{5}\rangle= & {} 3\Rightarrow H^{6}=B^{6}+H^{4}-\frac{2}{3}H^{5}=\left( \begin{array}{rrrrr} 0 &{} \hspace{0.3cm}\frac{2}{3} &{} -\frac{2}{3} &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ -\frac{2}{3} &{} 0 &{} 1 &{} 0 &{} 0\\ \frac{2}{3} &{} -1 &{} 0 &{} 0 &{} \frac{2}{3}\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} -\frac{2}{3} &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) ,\\ \langle H^{1},B^{7}\rangle= & {} \langle H^{2},B^{7}\rangle =\langle H^{3},B^{7}\rangle =0,\ \langle H^{4},B^{7}\rangle =-2,\ \langle H^{5},B^{7}\rangle =1,\\ \langle H^{6},B^{7}\rangle= & {} \frac{4}{3},\ \langle H^{6},H^{6}\rangle =\frac{14}{3}\\\Rightarrow & {} H^{7}=B^{7}+\frac{1}{2}H^{4}-\frac{1}{3}H^{5}-\frac{2}{7}H^{6}=\left( \begin{array}{rrrrr} 0 &{} \hspace{0.3cm}\frac{1}{7} &{} -\frac{1}{7} &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ -\frac{1}{7} &{} 0 &{} -\frac{2}{7} &{} 1 &{} 0\\ \frac{1}{7} &{} \frac{2}{7} &{} 0 &{} 0 &{} \frac{1}{7}\\ 0 &{} -1 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} -\frac{1}{7} &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) ,\\ \langle H^{1},B^{8}\rangle= & {} \langle H^{2},B^{8}\rangle =\langle H^{3},B^{8}\rangle =0,\ \langle H^{4},B^{8}\rangle =-2,\ \langle H^{5},B^{8}\rangle =1,\\ \langle H^{6},B^{8}\rangle= & {} \frac{4}{3},\ \langle H^{7},B^{8}\rangle =\frac{2}{7},\ \langle H^{7},H^{7}\rangle =\frac{16}{7}\\\Rightarrow & {} H^{8}=B^{8}+\frac{1}{2}H^{4}-\frac{1}{3}H^{5}-\frac{2}{7}H^{6}-\frac{1}{8}H^{7}\\= & {} \left( \begin{array}{rrrrr} 0 &{} \hspace{0.3cm}\frac{1}{8} &{} -\frac{1}{8} &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ -\frac{1}{8} &{} 0 &{} -\frac{1}{4} &{} -\frac{1}{8} &{} 1\\ \frac{1}{8} &{} \frac{1}{4} &{} 0 &{} 0 &{} \frac{1}{8}\\ 0 &{} \frac{1}{8} &{} 0 &{} 0 &{} 0\\ 0 &{} -1 &{} -\frac{1}{8} &{} 0 &{} 0 \end{array}\hspace{0.24cm}\right) ,\\ \langle H^{1},B^{9}\rangle= & {} \langle H^{2},B^{9}\rangle =\langle H^{3},B^{9}\rangle =0,\ \langle H^{4},B^{9}\rangle =2,\ \langle H^{5},B^{9}\rangle =-1,\\ \langle H^{6},B^{9}\rangle= & {} -\frac{4}{3},\ \langle H^{7},B^{9}\rangle =-\frac{2}{7},\ \langle H^{8},B^{9}\rangle =-\frac{1}{4},\ \langle H^{8},H^{8}\rangle =\frac{9}{4}\\\Rightarrow & {} H^{9}=B^{9}-\frac{1}{2}H^{4}+\frac{1}{3}H^{5}+\frac{2}{7}H^{6}+\frac{1}{8}H^{7}+\frac{1}{9}H^{8}\\= & {} \left( \begin{array}{rrrrr} 0 &{} -\frac{1}{9} &{} \hspace{0.3cm}\frac{1}{9} &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}0\\ \frac{1}{9} &{} 0 &{} \frac{2}{9} &{} \frac{1}{9} &{} \frac{1}{9}\\ -\frac{1}{9} &{} -\frac{2}{9} &{} 0 &{} 1 &{} -\frac{1}{9}\\ 0 &{} -\frac{1}{9} &{} -1 &{} 0 &{} 0\\ 0 &{} -\frac{1}{9} &{} \frac{1}{9} &{} 0 
&{} 0 \end{array}\hspace{0.24cm}\right) . \end{aligned}$$

3.3 The best approximation of a PCM equating two alternatives

Consider an additive PCM A. In order to find its projection \(A'\) onto the subspace \({{\mathcal {A}}}_{ij}\) we present \(A'\) as a linear combination of the orthogonal basis vectors

$$\begin{aligned} H^{1},\ldots ,H^{\frac{n^{2}-n}{2}-1}. \end{aligned}$$

We will look for the factors

$$\begin{aligned} \varepsilon _{1},\ldots ,\varepsilon _{\frac{n^{2}-n}{2}-1} \end{aligned}$$

such that \(A'=\varepsilon _{1}H^{1}+\cdots +\varepsilon _{\frac{n^{2}-n}{2}-1}H^{\frac{n^{2}-n}{2}-1}\).

Since \(A'\) is the orthogonal projection of A onto \({{\mathcal {A}}}_{ij}\), we have \(\langle A-A',C\rangle _{F}=0\) for every \(C\in {{\mathcal {A}}}_{ij}\), which is equivalent to the system of linear equations:

$$\begin{aligned} \left\{ \begin{array}{l} \langle A,H^{1}\rangle _{F}-\varepsilon _{1}\langle H^{1},H^{1}\rangle _{F}=0,\\ \langle A,H^{2}\rangle _{F}-\varepsilon _{2}\langle H^{2},H^{2}\rangle _{F}=0,\\ \cdots \\ \left\langle A,H^{\frac{n^{2}-n}{2}-1}\right\rangle _{F}-\varepsilon _{\frac{n^{2}-n}{2}-1} \left\langle H^{\frac{n^{2}-n}{2}-1},H^{\frac{n^{2}-n}{2}-1}\right\rangle _{F}=0, \end{array}\right. \end{aligned}$$

Its solution is

$$\begin{aligned} \varepsilon _{k}=\frac{\langle A,H^{k}\rangle _{F}}{\langle H^{k},H^{k}\rangle _{F}},\ k=1,\ldots ,\frac{n^{2}-n}{2}-1. \end{aligned}$$

Thus, the PCM \(A'\) which generates a ranking equating the i-th and j-th alternatives and which is the closest to A can be calculated from the formula

$$\begin{aligned} A'=\sum _{k=1}^{\frac{n^{2}-n}{2}-1}\frac{\langle A,H^{k}\rangle _{F}}{\langle H^{k},H^{k}\rangle _{F}}H^{k}. \end{aligned}$$
(9)
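Formula (9) translates directly into code (a sketch reusing the helpers skew, tie_space_basis, frobenius and gram_schmidt introduced earlier; as before, \(i<j<n\) is assumed):

```python
import numpy as np

def project_onto_tie_space(A, i, j):
    """Orthogonal projection of an additive PCM A onto A_ij, i.e. formula (9)."""
    A = np.asarray(A, dtype=float)
    H = gram_schmidt(tie_space_basis(A.shape[0], i, j))
    return sum((frobenius(A, Hk) / frobenius(Hk, Hk)) * Hk for Hk in H)
```

For the matrix A of Example 4 below, project_onto_tie_space(A, 2, 3) returns the matrix \(A'\) of (11); its row means are (0.2, -0.3, -0.3, -3.4, 3.8), so the weights of the second and third alternatives are indeed equalized at the mean of their original values.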

Example 4

Let us consider the following additive PCM

$$\begin{aligned} A=\left( \begin{array}{rrrrr} 0 &{} -5 &{} \hspace{0.3cm}2 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}4\\ 5 &{} 0 &{} 2 &{} 5 &{} -6\\ -2 &{} -2 &{} 0 &{} 4 &{} -9\\ 0 &{} -5 &{} -4 &{} 0 &{} -8\\ -4 &{} 6 &{} 9 &{} 8 &{} 0 \end{array}\hspace{0.24cm}\right) . \end{aligned}$$
(10)

The weights in a ranking vector obtained as the arithmetic means of row elements of A are

$$\begin{aligned} w=(0.2,1.2,-1.8,-3.4,3.8)^{T}. \end{aligned}$$

In order to find the PCM closest to A which generates a ranking equating the second and the third alternative, we take the orthogonal basis \(H^{1},\ldots ,H^{9}\) described in Example 3. Next, we calculate the coefficients in (9):

Table 1 The coefficients of \(A^{\prime }\)

$$\begin{aligned} \varepsilon _{1}=0,\quad \varepsilon _{2}=4,\quad \varepsilon _{3}=-8,\quad \varepsilon _{4}=2,\quad \varepsilon _{5}=-\tfrac{10}{3},\quad \varepsilon _{6}=-\tfrac{26}{7},\quad \varepsilon _{7}=\tfrac{15}{8},\quad \varepsilon _{8}=-\tfrac{73}{9},\quad \varepsilon _{9}=\tfrac{11}{2}. \end{aligned}$$

Finally, we obtain the orthogonal projection of A onto \({{\mathcal {A}}}_{23}\):

$$\begin{aligned} A'=\left( \begin{array}{rrrrr} 0 &{} -3.5 &{} 0.5 &{} \hspace{0.3cm}0 &{} \hspace{0.3cm}4\\ 3.5 &{} 0 &{} -1 &{} 3.5 &{} -7.5\\ -0.5 &{} 1 &{} 0 &{} 5.5 &{} -7.5\\ 0 &{} -3.5 &{} -5.5 &{} 0 &{} -8\\ -4 &{} 7.5 &{} 7.5 &{} 8 &{} 0 \end{array}\hspace{0.24cm}\right) . \end{aligned}$$
(11)

The corresponding vector of weights is

$$\begin{aligned} w'=(0.2,-0.3,-0.3,-3.4,3.8)^{T}. \end{aligned}$$

Let us notice that the weights of the second and third alternatives are indeed equal. Furthermore, the weights of the remaining alternatives have not changed, and the common weight of the second and the third alternative in \(w'\) is the arithmetic mean of the corresponding weights in w.

It turns out that this observation holds regardless of the dimension of the PCM and of the choice of the two alternatives whose weights are equalized, i.e., the following theorem is true:

Theorem 2

Let \(A=[a_{kl}]\in {{\mathcal {A}}}\), \(i,j\in \{1,\ldots ,n\}\), and \(A'=[a'_{kl}]\) be the orthogonal projection of A onto \({{\mathcal {A}}}_{ij}\). Then

(1) For each \(k\not \in \{i,j\}\)

$$\begin{aligned} \sum _{l=1}^{n}a'_{kl}=\sum _{l=1}^{n}a_{kl}, \end{aligned}$$
(12)

(2)

$$\begin{aligned} \sum _{l=1}^{n}a'_{il}=\sum _{l=1}^{n}a'_{jl}=\frac{\sum _{l=1}^{n}a_{il} +\sum _{l=1}^{n}a_{jl}}{2}. \end{aligned}$$
(13)

Proof

Let us assume, without loss of generality, that \(i<j\).

Note that \((A'-A)\ \bot \ {{\mathcal {A}}}_{ij}\), which implies that \((A'-A)\ \bot \ B^{s}\) for \(s=1,\ldots ,\frac{n^{2}-n}{2}-1\), where \(\{B^{s}\}\) is the basis of \({{\mathcal {A}}}_{ij}\) defined in (3)–(8). Thus, for each \(B^{s}\) we can write the equality

$$\begin{aligned} \langle A'-A,B^{s}\rangle =0, \end{aligned}$$

which is equivalent to:

\(\mathbf{(1_{qr})}\) \(a'_{qr}-a_{qr}=0,\) for \((q,r)\in Z_{ij}\), \(1\le s\le \frac{(n-2)(n-3)}{2}\);

\(\mathbf{(2_{p})}\) \(a'_{pi}-a_{pi}-a'_{jn}+a_{jn}=0,\) for \(p<i\), \(\frac{(n-2)(n-3)}{2}+1\le s\le \frac{(n-2)(n-3)}{2}+i-1\);

\(\mathbf{(3_{p})}\) \(a'_{pj}-a_{pj}+a'_{jn}-a_{jn}=0,\) for \(i\ne p<j\), \(\frac{(n-2)(n-3)}{2}+i\le s\le \frac{(n-2)(n-3)}{2}+i+j-2\) and \(s\ne \frac{(n-2)(n-3)}{2}+2i-1\);

\(\mathbf{(3_{i})}\) \(a'_{ij}-a_{ij}+2a'_{jn}-2a_{jn}=0,\) for \(s=\frac{(n-2)(n-3)}{2}+2i-1\);

\(\mathbf{(4_{p})}\) \(a'_{ip}-a_{ip}+a'_{jn}-a_{jn}=0,\) for \(i<p\ne j\), \(\frac{(n-2)(n-3)}{2}+i+j-1\le s\le \frac{(n-2)(n-3)}{2}+n+j-3\);

\(\mathbf{(5_{p})}\) \(a'_{jp}-a_{jp}-a'_{jn}+a_{jn}=0,\) for \(p>j\), \(\frac{(n-2)(n-3)}{2}+n+j-2\le s\le \frac{n^{2}-n}{2}-1\).

Now, for the proof of (12) assume that \(k\not \in \{i,j\}\). Then

$$\begin{aligned} \begin{array}{lll} S &{}:= &{} {\displaystyle \sum _{l=1}^{n}a'_{kl}-\sum _{l=1}^{n}a_{kl}=\sum _{l=1}^{n}(a'_{kl}-a_{kl})}\\ &{} = &{} {\displaystyle \sum _{l<i}(a'_{kl}-a_{kl})+a'_{ki}-a_{ki}+\sum _{i<l<j}(a'_{kl}-a_{kl})+a'_{kj}-a_{kj}}\\ &{}&{} + {\displaystyle \sum _{l>j}(a'_{kl}-a_{kl})}. \end{array}. \end{aligned}$$

From \(\mathbf{(1_{kl})}\) for \(l\not \in \{i,j\}\) we get

$$\begin{aligned} {\displaystyle \sum _{l<i}(a'_{kl}-a_{kl})+\sum _{i<l<j}(a'_{kl}-a_{kl})+\sum _{l>j}(a'_{kl}-a_{kl})=0}, \end{aligned}$$

so

$$\begin{aligned} S=a'_{ki}-a_{ki}+a'_{kj}-a_{kj}. \end{aligned}$$

Consider three cases:

  1. (a)

    If \(k<i\), we add equations \(\mathbf{(2_{k})}\) and \(\mathbf{(3_{k})}\) and we get \(S=0\).

  2. (b)

    If \(i<k<j\), we subtract equation \(\mathbf{(4_{k})}\) from \(\mathbf{(3_{k})}\) and we get \(S=0\).

  3. (c)

    If \(k>j\), we add equations \(\mathbf{(4_{k})}\) and \(\mathbf{(5_{k})}\) and we get \(S=0\).

This concludes the proof of (12).

Now, let us calculate

$$\begin{aligned} \begin{array}{lll} T &{}:= &{} {\displaystyle \sum _{l=1}^{n}a'_{il}+\sum _{l=1}^{n}a'_{jl}-\sum _{l=1}^{n}a_{il}-\sum _{l=1}^{n}a_{jl}}\\ &{} = &{} {\displaystyle \sum _{l<i}(a'_{il}-a_{il})+\sum _{l<i}(a'_{jl}-a_{jl})}+a'_{ii}-a_{ii}+a'_{ji}-a_{ji}\\ &{}&{} + {\displaystyle \sum _{i<l<j}(a'_{il}-a_{il})+\sum _{i<l<j}(a'_{jl}-a_{jl})}+a'_{ij}-a_{ij}+a'_{jj}-a_{jj}\\ &{}&{} + {\displaystyle \sum _{l>j}(a'_{il}-a_{il})+\sum _{l>j}(a'_{jl}-a_{jl}).} \end{array}. \end{aligned}$$

Since

$$\begin{aligned} a'_{ii}=a_{ii}=a'_{jj}=a_{jj}=a'_{ij}+a'_{ji}=a_{ij}+a_{ji}=0, \end{aligned}$$

it follows that

$$\begin{aligned} \begin{array}{lll} T &{} = &{} {\displaystyle \sum _{l<i}(a'_{il}-a_{il})+\sum _{l<i}(a'_{jl}-a_{jl})+\sum _{i<l<j}(a'_{il}-a_{il})+\sum _{i<l<j}(a'_{jl}-a_{jl})}\\ &{}&{}+ {\displaystyle \sum _{l>j}(a'_{il}-a_{il})+\sum _{l>j}(a'_{jl}-a_{jl}).} \end{array}. \end{aligned}$$

From \(\mathbf{(2_{l})}\) and \(\mathbf{(3_{l})}\) we get

$$\begin{aligned} \sum _{l<i}(a'_{il}-a_{il})+\sum _{l<i}(a'_{jl}-a_{jl})=\sum _{l<i}(-a'_{jn}+a_{jn})+\sum _{l<i}(a'_{jn}-a_{jn})=0. \end{aligned}$$
(14)

From \(\mathbf{(4_{l})}\) and \(\mathbf{(3_{l})}\) we get

$$\begin{aligned} \sum _{i<l<j}(a'_{il}-a_{il})+\sum _{i<l<j}(a'_{jl}-a_{jl})=\sum _{i<l<j}(-a'_{jn}+a_{jn})+\sum _{i<l<j}(a'_{jn}-a_{jn})=0. \end{aligned}$$
(15)

From \(\mathbf{(4_{l})}\) and \(\mathbf{(5_{l})}\) we get

$$\begin{aligned} \sum _{l>j}(a'_{il}-a_{il})+\sum _{l>j}(a'_{jl}-a_{jl})=\sum _{l>j}(-a'_{jn}+a_{jn})+\sum _{l>j}(a'_{jn}-a_{jn})=0. \end{aligned}$$
(16)

Equations (14), (15) and (16) imply that

$$\begin{aligned} T=0. \end{aligned}$$

On the other hand, \(A'\in {{\mathcal {A}}}_{ij}\), which means that

$$\begin{aligned} \sum _{l=1}^{n}a'_{il}=\sum _{l=1}^{n}a'_{jl}, \end{aligned}$$

so

$$\begin{aligned} T=2\sum _{l=1}^{n}a'_{il}-\sum _{l=1}^{n}a_{il}-\sum _{l=1}^{n}a_{jl}=2\sum _{l=1}^{n}a'_{jl}-\sum _{l=1}^{n}a_{il}-\sum _{l=1}^{n}a_{jl}=0, \end{aligned}$$

which proves (13). \(\square \)

Theorem 2 states that the weights of the k-th alternative (the k-th coordinates of the priority vectors) induced by a given PCM A and by its orthogonal projection \(A'\) onto \({{\mathcal {A}}}_{ij}\) coincide for \(k\not \in \{i,j\}\). For \(k\in \{i,j\}\), both weights become equal to the arithmetic mean of the original weights of \(e_{i}\) and \(e_{j}\). This is consistent with the intuition that the smallest change needed to equalize the weights of alternatives \(e_{i}\) and \(e_{j}\) is to replace them with their arithmetic mean and leave the remaining alternatives unchanged.

3.4 Measuring the ease of manipulation

We would like to create a tool for detecting the possibility of manipulation carried out by an expert. Naturally, the smaller the difference between the weights of alternatives derived from the PCM created by the experts, the lower the chances that dishonest answers will be detected, and thus the easier the manipulation. Therefore, the minimum distance between the weights of two alternatives can be considered as an indicator of the stability of the ranking. Notice that this concept is similar to robustness to rank reversal (see [19] for details).

Let us assume that there exists \(M>0\) such that for all \(i,j\in \{1,\ldots ,n\}\) the elements of a given PCM A satisfy \(|a_{ij}|\le M\). Then, for any \(i,j\in \{1,\ldots ,n\}\) we define the number

$$\begin{aligned} RSI_{ij}^{M}=\frac{|\sum _{k=1}^{n}(a_{ik}-a_{jk})|}{2M}, \end{aligned}$$

which expresses a rescaled distance of the weights of the i-th and j-th alternatives. Let us notice that \(\forall i,j\)

$$\begin{aligned} 0\le RSI_{ij}^{M}\le n-1, \end{aligned}$$
(17)

since the numerator is the highest if the i-th (or the j-th) row consists of \(n-1\) numbers M and one 0 in the main diagonal, while the other row has \(n-1\) numbers \(-M\) and one 0.

Now we are ready to define the Ranking Stability Index:

$$\begin{aligned} RSI^{M}(A)=\min _{1\le i<j\le n}RSI_{ij}^{M}. \end{aligned}$$
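A sketch of both indices in Python (the function names rsi_pair and rsi are ours; A is an additive PCM stored as a NumPy array and the indices i, j are 1-based):

```python
import numpy as np

def rsi_pair(A, i, j, M):
    """RSI^M_ij: the rescaled distance between the weights of alternatives i and j."""
    return abs(np.sum(A[i - 1, :] - A[j - 1, :])) / (2 * M)

def rsi(A, M):
    """Ranking Stability Index: the minimum of RSI^M_ij over all pairs i < j."""
    n = A.shape[0]
    return min(rsi_pair(A, i, j, M)
               for i in range(1, n + 1) for j in range(i + 1, n + 1))
```

For the two matrices of Example 5 below, rsi returns 0 and 1, respectively.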

It appears that RSI is bounded by 1:

Theorem 3

For every \(A\in {{\mathcal {A}}}\) if

$$\begin{aligned} \forall i,j\in \{1,\ldots ,n\}\ |a_{ij}|\le M, \end{aligned}$$

then

$$\begin{aligned} 0\le RSI^{M}(A)\le 1. \end{aligned}$$

Proof

The first inequality is obvious. For the proof of the second one, let us assume that \(RSI^{M}(A)>1\), which implies that

$$\begin{aligned} \forall i,j\in \{1,\ldots ,n\},\ i\ne j:\ RSI_{ij}^{M}(A)>1. \end{aligned}$$
(18)

There exists a permutation

$$\begin{aligned} \sigma :\{1,\ldots ,n\}\rightarrow \{1,\ldots ,n\} \end{aligned}$$

such that

$$\begin{aligned} \sum _{k=1}^{n}a_{\sigma (1)k}\ge \sum _{k=1}^{n}a_{\sigma (2)k}\ge \cdots \ge \sum _{k=1}^{n}a_{\sigma (n)k}. \end{aligned}$$

Therefore,

$$\begin{aligned} RSI_{\sigma (1)\sigma (n)}^{M}= & {} \frac{|\sum _{k=1}^{n}(a_{\sigma (1)k}-a_{\sigma (n)k})|}{2M}=\frac{|\sum _{l=1}^{n-1}\sum _{k=1}^{n}(a_{\sigma (l)k}-a_{\sigma (l+1)k})|}{2M}\\= & {} \sum _{l=1}^{n-1}\frac{|\sum _{k=1}^{n}(a_{\sigma (l)k}-a_{\sigma (l+1)k})|}{2M}=\sum _{l=1}^{n-1}RSI_{\sigma (l)\sigma (l+1)}^{M}(A)>n-1,\\ \end{aligned}$$

where the third equality holds because, by the choice of \(\sigma \), all the inner sums are non-negative, and the final inequality follows from (18). This contradicts (17). \(\square \)

The following example shows that the bounds in Theorem 3 are sharp.

Example 5

Let us consider two matrices:

$$\begin{aligned} A=\left( \begin{array}{ccccc} \hspace{0.2cm}0\hspace{0.2cm} &{} \hspace{0.2cm}0\hspace{0.2cm} &{} \cdots &{} \hspace{0.2cm}0\hspace{0.2cm} &{} \hspace{0.2cm}0\hspace{0.2cm}\\ 0 &{} 0 &{} \cdots &{} 0 &{} 0\\ \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} 0 &{} 0\\ 0 &{} 0 &{} \cdots &{} 0 &{} 0 \end{array}\right) . \end{aligned}$$

and

$$\begin{aligned} B=\left( \begin{array}{ccccc} 0 &{} M &{} \cdots &{} M &{} M\\ -M &{} 0 &{} \cdots &{} M &{} M\\ \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ -M &{} -M &{} \cdots &{} 0 &{} M\\ -M &{} -M &{} \cdots &{} -M &{} 0 \end{array}\hspace{0.05cm}\right) . \end{aligned}$$

It is easy to check that

$$\begin{aligned} RSI^{M}(A)=0 \end{aligned}$$

and

$$\begin{aligned} RSI^{M}(B)=1. \end{aligned}$$

Remark 1

The intuition behind the notion of \(RSI^{M}\) is as follows: the higher the value of \(RSI^{M}\), the more clearly the weights of the alternatives differ, and thus the more difficult the manipulation is. In particular, \(RSI^{M}(A)=0\) if and only if at least two alternatives are of equal importance; then a tiny change of the input data may result in an advantage for one of them. A value of \(RSI^{M}(A)\approx 1\) means that the weights of the alternatives are evenly and widely spread, so the ranking is stable.

Two matrices inducing the same priority weights have equal values of \(RSI^{M}\). Hence, they are equally susceptible to manipulation.

We end this section with the case of a \(3\times 3\) PCM.

Example 6

For \(a,b,c\in [-M,M]\) let us consider a PCM:

$$\begin{aligned} C=\left( \begin{array}{ccc} 0 &{} a &{} b\\ -a &{} 0 &{} c\\ -b &{} \hspace{0.2cm}-c\hspace{0.2cm} &{} 0 \end{array}\hspace{0.05cm}\right) . \end{aligned}$$

Its Ranking Stability Index equals

$$\begin{aligned} RSI^{M}(C)=\frac{\min \{|2a+b-c|,|2c+b-a|,|a+2b+c|\}}{2M}. \end{aligned}$$
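The closed form above can be checked against the general definition, reusing the rsi helper sketched earlier (the values of a, b, c and M are arbitrary illustrative choices):

```python
import numpy as np

a, b, c, M = 2.0, 1.0, -1.0, 5.0
C = np.array([[0.0, a, b],
              [-a, 0.0, c],
              [-b, -c, 0.0]])

closed_form = min(abs(2*a + b - c), abs(2*c + b - a), abs(a + 2*b + c)) / (2 * M)
print(np.isclose(rsi(C, M), closed_form))   # True
```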

4 Conclusion and summary

In the presented work, we have introduced a method to find the closest approximation of a PCM which equates the weights of two given alternatives. We have also proved that the weights of all of the other alternatives do not change, while the new weights of the equated alternatives are equal to the arithmetic mean of the original ones.

Example 1 shows that, in general, it is impossible to find the best approximation of a PCM such that the positions of the i-th and the j-th alternatives in the ranking are reversed. However, once two alternatives have equal weights, we may slightly change the element \(a_{ij}\) in order to tip the scales in favor of one of them. The resulting matrix will satisfy the manipulation condition.

We have also proposed the “Ranking Stability Index” (RSI), which allows us to determine the difficulty of switching the positions of two alternatives, and we have proved that this index takes values in the range [0, 1]. Obviously, two matrices that induce the same priorities are vulnerable to manipulation to the same degree. In some cases this might be a weakness of the introduced measure.

One possible generalization concerns the incomplete PCMs analyzed in [39]. In [5] and [6], the problem of finding the optimal weights for an incomplete PCM, whose missing elements can be filled in by indirect comparisons, has been solved; the desired vector is determined from the unique solution of a linear system of equations. This means that, using a technique similar to the one presented here, one should be able to obtain formulas for the optimal PCM equating two alternatives also in the incomplete case. This is a promising subject for future research.