1 Introduction

A method used in decision-making and frequently discussed in the literature is the Analytic Hierarchy Process (AHP), proposed by Saaty (1977, 1980). Many works since then have developed and discussed aspects of the method; see the surveys (Choo & Wedley, 2004; Ishizaka & Labib, 2011; Zeleny, 1982). A key element of the method is the notion of a pair-wise comparison (PC) matrix. An n-by-n positive matrix \(A=[a_{ij}]\) is called a PC matrix if, for all \(1\le i,j\le n,\)

$$\begin{aligned} a_{ji}=\frac{1}{a_{ij}}. \end{aligned}$$

Each diagonal entry of a PC matrix is 1. We refer to the set of all such matrices as \({\mathcal{P}\mathcal{C}}_{n}\). Often, we refer to these matrices as reciprocal matrices, as do other authors.

The i, j entry of a reciprocal matrix is viewed as a pair-wise ratio comparison between alternatives i and j, and the intent is to deduce an ordering of the alternatives from the matrix. If the reciprocal matrix is consistent (transitive), that is, \(a_{ij}a_{jk}=a_{ik}\) for all triples i, j, k, there is a unique natural cardinal ordering, given by the relative magnitudes of the entries in any column. However, in human judgements consistency is unlikely. Inconsistency can also be an inherent feature of objective datasets (Bozóki et al., 2016; Chao et al., 2018; Csató, 2013; Petróczy, 2021; Petróczy & Csató, 2021). Then, there will be many vectors that might be deduced from a reciprocal matrix A. Let w be a positive n-vector and \(w^{(-T)}\) the transpose of its component-wise inverse. We may try to approximate A by the consistent matrix \(W=ww^{(-T)},\) i.e., we wish to choose w so that \(W-A\) is small in some sense. We say that w is efficient for A if, for any other positive vector v and corresponding consistent matrix \(V=vv^{(-T)},\) the entry-wise inequality \(\left| V-A\right| \le \left| W-A\right| \) implies that v and w are proportional. (It follows from Lemma 5, given later, that this definition is equivalent to the notion of efficiency used by other authors (Blanquero et al., 2006; Bozóki & Fülöp, 2018).) A historical perspective of this concept can be found in Ehrgott (2012), and recent variations of it can be seen, for example, in Bozóki (2014) and Bozóki and Fülöp (2018). Clearly, a consistent approximation to a reciprocal matrix A should be based upon a vector efficient for A. If A is not itself consistent, the set \(\mathcal {E}(A)\) of efficient vectors for A will include many vectors not proportional to each other. For simplicity, we projectively view proportional efficient vectors as the same, as they produce the same consistent matrix. The set \(\mathcal {E}(A)\) is, however, at least connected (Blanquero et al., 2006), but, in general, an explicit characterization of the entire set seems difficult to give. In Cruz et al. (2021) and Furtado (2023), a complete description was given of the set of efficient vectors for reciprocal matrices obtained from consistent ones by modifying at most two entries above the main diagonal, and the corresponding reciprocal entries; in particular, this yields the description of the efficient vectors for any 3-by-3 reciprocal matrix. Recently, in Szádoczki and Bozóki (2023), the authors studied the set \(\mathcal {E}(A)\) from a geometric point of view, when A is of size 3 or 4. Several methods to decide whether a vector is efficient have been developed, and algorithms to improve an inefficient vector have been provided (see (Anholcer & Fülöp, 2019; Blanquero et al., 2006; Bozóki, 2014; Bozóki & Fülöp, 2018) and the references therein).

Despite some criticism (Dyer, 1990a, b; Johnson et al., 1979; Saaty, 2003), one of the most widely used methods to approximate a reciprocal matrix A by a consistent matrix is the one proposed by Saaty (1977, 1980), in which the consistent matrix is based upon the right Perron eigenvector of A, a positive eigenvector associated with the spectral radius of A (Horn & Johnson, 1985). The efficiency of the Perron eigenvector for certain classes of reciprocal matrices has been shown (Ábele-Nagy & Bozóki, 2016; Ábele-Nagy et al., 2018; Fernandes & Furtado, 2022), though examples of reciprocal matrices for which this vector is inefficient are also known (Blanquero et al., 2006; Bozóki, 2014; Fernandes & Furtado, 2022), even matrices with an arbitrarily small level of inconsistency (Bozóki, 2014). Another method to approximate A by a consistent matrix, with a strong axiomatic background (Barzilai, 1997; Csató, 2018, 2019; Fichtner, 1986; Lundy et al., 2017), is based upon the geometric mean of all columns of A, which is known to be an efficient vector for A (Blanquero et al., 2006). Many other proposals for approximating A by a consistent matrix have been made in the literature (for comparisons of different methods see, for example, Anholcer et al. (2011), Bajwa et al. (2008), Bozóki (2008), Choo and Wedley (2004), Dijkstra (2013), Fichtner (1986), Golany and Kress (1993) and Kułakowski et al. (2022)).

Our main contribution in this paper consists of providing new classes of efficient vectors for a general reciprocal matrix A. One way we do this is by extending an efficient vector for a principal submatrix of A to an efficient vector for A. Unless A is consistent, this extension allows us to obtain infinitely many projectively distinct efficient vectors for A. We also show that any of the \(2^{n}-2\) geometric means of proper subsets of the columns of \(A\in \mathcal{P}\mathcal{C}_{n}\) is efficient for A,  extending the known result of the efficiency of the geometric mean of all columns. Recently, motivated by the result in this paper and using more involved arguments, we have shown that any weighted geometric mean of the columns of a reciprocal matrix is efficient (Furtado & Johnson, 2024).

Before summarizing in more detail what we do here, we mention some additional notations and terminology. The Hadamard (or entry-wise) product of two vectors (of the same size) or matrices (of the same dimension) is denoted by \(\circ .\) For example, if \(A,B\in \mathcal{P}\mathcal{C}_{n}\) then \(A\circ B\in \mathcal{P}\mathcal{C}_{n}\), and, similarly, the n-by-n consistent matrices are closed under the Hadamard product. We use superscripts in parentheses to denote an exponent applied to all entries of a vector or a matrix. For example

$$\begin{aligned} \left( u_{1}\circ u_{2}\circ \cdots \circ u_{k}\right) ^{\left( \frac{1}{k}\right) } \end{aligned}$$

is the (Hadamard) geometric mean of positive vectors \(u_{1},\ldots ,u_{k}\) of the same size. This column geometric mean is what is called the row geometric mean in, for instance, Blanquero et al. (2006).

For an n-by-n matrix \(A=[a_{ij}],\) we partition A by columns as \(A=\left[ a_{1},\text { }a_{2},\ldots ,\text { }a_{n}\right] .\) The principal submatrix determined by deleting (by retaining) the rows and columns indexed by a subset \(K\subseteq \{1,\ldots ,n\}\) is denoted by A(K) (A[K]);  we abbreviate \(A(\{i\})\) as A(i). Note that if A is reciprocal (consistent) then so is A(i).

In Sect. 2 we give some (mostly known) background that we will use and make some related observations. In particular, we present the relationship between efficiency and strong connectivity of a certain digraph and state the efficiency of the Hadamard geometric mean of all columns of a reciprocal matrix. In Sect. 3 we give some (mostly new) additional background that will also be helpful. In Sect. 4 we show explicitly how to extend efficient vectors for A(i) to efficient vectors for the reciprocal matrix A. This leads to an algorithm, initiated with any \(A[\{i,j\}]\), \(i\ne j,\) to produce a subset of \(\mathcal {E}(A).\) This subset may not be all of \(\mathcal {E}(A),\) as truncation of an efficient vector for A may not give one for the corresponding principal submatrix. Moreover, we may get different subsets by starting with different i, j. In Sect. 5 we study the relationship between efficient vectors for a reciprocal matrix A and its columns. As mentioned, any column of a consistent matrix generates that consistent matrix and, so, is efficient for it. Similarly, any column of a reciprocal matrix is efficient for it (Lemma 10), as is the geometric mean of any subset of the columns (Theorem 12). In Sect. 6, we study numerically, using different measures, the performance of these efficient vectors in approximating A by a consistent matrix and, in some cases, compare them, from this point of view, with the Perron eigenvector. We will see that the geometric mean of all columns can be outperformed by the geometric mean of other collections of columns, though, in general, it seems to produce results relatively close to the best among the other geometric means. The geometric mean of all columns seems to perform better for matrices with a lower level of inconsistency. Also, in the simulations, it is the one that most frequently performs best (at least according to one of our measures). Thus, as a general method to be chosen in advance to derive priorities, we conclude that the geometric mean of all columns seems to be the best choice among all the geometric means of subsets of columns. We also show by example that \(\mathcal {E}(A)\) is not closed under the geometric mean (Sect. 5). Finally, in Sect. 7 we give some conclusions.

2 Technical background

We start with some known results that are relevant for this work. First, it is important to know how \(\mathcal {E}(A)\) changes when A is subjected to either a positive diagonal similarity or a permutation similarity, or both (a monomial similarity).

Lemma 1

Cruz et al. (2021); Furtado (2023) Suppose that \(A\in \mathcal{P}\mathcal{C}_{n}\) and \(w\in \mathcal {E}(A).\) If D is a positive diagonal matrix (P is a permutation matrix), then \(DAD^{-1}\in \mathcal{P}\mathcal{C}_{n}\) and \(Dw\in \mathcal {E}(DAD^{-1})\) (\(PAP^{T}\in \mathcal{P}\mathcal{C}_{n}\) and \(Pw\in \mathcal {E} (PAP^{T})\)).

Next we define a directed graph (digraph) associated with a matrix \(A\in \mathcal{P}\mathcal{C}_{n}\) and a positive n-vector w, which is helpful in studying the efficiency of w for A. For \(w=\left[ \begin{array}{ccc} w_{1}&\cdots&w_{n} \end{array} \right] ^{T}\), we denote by G(A, w) the digraph whose vertex set is \(\{1,\ldots ,n\}\) and whose directed edge set is

$$\begin{aligned} \{i\rightarrow j:\frac{w_{i}}{w_{j}}\ge a_{ij}\text {, }i\ne j\}. \end{aligned}$$

In Blanquero et al. (2006) the authors proved that the efficiency of w can be determined from G(A, w).

Theorem 2

Blanquero et al. (2006) Let \(A\in \mathcal{P}\mathcal{C}_{n}\). A positive n-vector w is efficient for A if and only if G(A, w) is a strongly connected digraph, that is, for all pairs of vertices i, j, with \(i\ne j,\) there is a directed path from i to j in G(A, w).

Recall (Horn & Johnson, 1985) that G(A, w) is strongly connected if and only if \((I_{n}+L)^{n-1}\) is positive. Here \(I_{n}\) is the identity matrix of order n and \(L=[l_{ij}]\) is the adjacency matrix of G(A, w), that is, \(l_{ij}=1\) if \(i\rightarrow j\) is an edge in G(A, w), and \(l_{ij}=0\) otherwise.
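
For illustration, the following sketch (Python with NumPy; ours, not code from the cited works) implements this test: it builds the adjacency matrix L of G(A, w) and checks positivity of \((I_{n}+L)^{n-1}\). Exact ties \(w_{i}/w_{j}=a_{ij}\) may require a small tolerance in floating point.

```python
import numpy as np

def is_efficient(A, w):
    """Efficiency test of Theorem 2: w is efficient for A iff the
    digraph G(A, w) is strongly connected, iff (I + L)^(n-1) > 0."""
    A = np.asarray(A, dtype=float)
    w = np.asarray(w, dtype=float)
    n = A.shape[0]
    ratios = np.outer(w, 1.0 / w)        # ratios[i, j] = w_i / w_j
    L = (ratios >= A).astype(int)        # edge i -> j iff w_i / w_j >= a_ij
    np.fill_diagonal(L, 0)               # edges are only defined for i != j
    reach = np.linalg.matrix_power(np.eye(n, dtype=int) + L, n - 1)
    return bool((reach > 0).all())       # entry-wise positivity
```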

In Blanquero et al. (2006), it was shown that the geometric mean of all columns of a reciprocal matrix A is an efficient vector for A. This result comes from the fact that the geometric mean minimizes the logarithmic least squares objective function (see also (Crawford & Williams, 1985)).

Theorem 3

Blanquero et al. (2006) If \(A\in \mathcal{P}\mathcal{C}_{n},\) then

$$\begin{aligned} \left( a_{1}\circ a_{2}\circ \cdots \circ a_{n}\right) ^{\left( \frac{1}{n}\right) }\in \mathcal {E}(A). \end{aligned}$$

In Cruz et al. (2021), all the efficient vectors for a simple perturbed consistent matrix, that is, a reciprocal matrix obtained from a consistent one by modifying one entry above the main diagonal and the corresponding reciprocal entry, were described. Let \(Z_{n}(x),\) with \(x>0,\) be the simple perturbed consistent matrix in \(\mathcal{P}\mathcal{C}_{n}\) with all entries equal to 1 except those in positions (1, n) and (n, 1), which are x and \(\frac{1}{x},\) respectively.

For any simple perturbed consistent matrix \(A\in \mathcal{P}\mathcal{C}_{n},\) there is a positive diagonal matrix D and a permutation matrix P such that

$$\begin{aligned} DPAP^{-1}D^{-1}=Z_{n}(x), \end{aligned}$$

for some \(x>0.\) Taking into account Lemma 1, an n-vector w is efficient for A if and only if DPw is efficient for \(Z_{n}(x).\) For this reason, we focused on the description of the efficient vectors for \(Z_{n}(x),\) as the efficient vectors for a general simple perturbed consistent matrix can be obtained from them using Lemma 1.

Theorem 4

Cruz et al. (2021) Let \(n\ge 3\), \(x>0\) and \(w=\left[ \begin{array}{cccc} w_{1}&\cdots&w_{n-1}&w_{n} \end{array} \right] ^{T}\) be a positive vector. Then w is efficient for \(Z_{n}(x)\) if and only if

$$\begin{aligned} w_{n}\le w_{i}\le w_{1}\le w_{n}x,\text { for }i=2,\ldots ,n-1, \end{aligned}$$

or

$$\begin{aligned} w_{n}\ge w_{i}\ge w_{1}\ge w_{n}x,\text { for }i=2,\ldots ,n-1. \end{aligned}$$

Note that, if

$$\begin{aligned} A=\left[ \begin{array}{ccc} 1 &{} \quad a_{12} &{} \quad a_{13}\\ \frac{1}{a_{12}} &{} \quad 1 &{} \quad a_{23}\\ \frac{1}{a_{13}} &{} \quad \frac{1}{a_{23}} &{} \quad 1 \end{array} \right] \in \mathcal{P}\mathcal{C}_{3}, \end{aligned}$$

then \(DAD^{-1}=Z_{3}\left( \frac{a_{13}}{a_{12}a_{23}}\right) ,\) with

$$\begin{aligned} D=\left[ \begin{array}{ccc} \frac{1}{a_{12}} &{} \quad 0 &{} \quad 0\\ 0 &{} \quad 1 &{} \quad 0\\ 0 &{} \quad 0 &{} \quad a_{23} \end{array} \right] . \end{aligned}$$

In particular, any 3-by-3 reciprocal matrix is a simple perturbed consistent matrix, as this property is invariant under diagonal similarity.
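
As a quick numerical check of this reduction (a hypothetical illustration with arbitrary entries, not taken from the paper), one can verify the identity \(DAD^{-1}=Z_{3}\left( \frac{a_{13}}{a_{12}a_{23}}\right) \) directly:

```python
import numpy as np

# arbitrary sample entries for the illustration
a12, a13, a23 = 2.0, 6.0, 1.5
A = np.array([[1.0,     a12,     a13],
              [1 / a12, 1.0,     a23],
              [1 / a13, 1 / a23, 1.0]])
D = np.diag([1 / a12, 1.0, a23])
x = a13 / (a12 * a23)                    # here x = 2
Z3 = np.array([[1.0,   1.0, x],
               [1.0,   1.0, 1.0],
               [1 / x, 1.0, 1.0]])
assert np.allclose(D @ A @ np.linalg.inv(D), Z3)
```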

3 Additional facts on efficiency

From the following result we may conclude that the definition of efficient vector given in Sect. 1 is equivalent to the one in Blanquero et al. (2006) and Bozóki and Fülöp (2018).

Here and throughout, if \(A\in \mathcal{P}\mathcal{C}_{n}\) and w is a positive n-vector, we denote

$$\begin{aligned} D(A,w):=ww^{(-T)}-A. \end{aligned}$$

By \(|D(A,w)|\) we mean the entry-wise absolute value of D(A, w).

Lemma 5

Let \(A\in \mathcal{P}\mathcal{C}_{n}\) and v, w be positive n-vectors. Then, \(\left| D(A,w)\right| =\left| D(A,v)\right| \) if and only if v and w are proportional.

Proof

The “if” claim is trivial. Next we show the “only if” claim. Let \(w=\left[ \begin{array}{ccc} w_{1}&\cdots&w_{n} \end{array} \right] ^{T}\) and \(v=\left[ \begin{array}{ccc} v_{1}&\cdots&v_{n} \end{array} \right] ^{T}.\) Let \(i,j\in \{1,\ldots ,n\}\) with \(i\ne j.\) Suppose that

$$\begin{aligned} \left| a_{ij}-\frac{w_{i}}{w_{j}}\right| =\left| a_{ij} -\frac{v_{i}}{v_{j}}\right| \text { and }\left| \frac{1}{a_{ij}} -\frac{w_{j}}{w_{i}}\right| =\left| \frac{1}{a_{ij}}-\frac{v_{j} }{v_{i}}\right| . \end{aligned}$$
(1)

If

$$\begin{aligned} \left( a_{ij}-\frac{w_{i}}{w_{j}}\right) \left( a_{ij}-\frac{v_{i}}{v_{j} }\right) \ge 0, \end{aligned}$$

then (1) implies \(\frac{w_{i}}{w_{j}}=\frac{v_{i}}{v_{j}}.\) If

$$\begin{aligned} \left( a_{ij}-\frac{w_{i}}{w_{j}}\right) \left( a_{ij}-\frac{v_{i}}{v_{j} }\right) <0 \end{aligned}$$

then also

$$\begin{aligned} \left( \frac{1}{a_{ij}}-\frac{w_{j}}{w_{i}}\right) \left( \frac{1}{a_{ij} }-\frac{v_{j}}{v_{i}}\right) <0, \end{aligned}$$

implying, from (1),

$$\begin{aligned} a_{ij}=\frac{1}{2}\left( \frac{w_{i}}{w_{j}}+\frac{v_{i}}{v_{j}}\right) \text { and }\frac{1}{a_{ij}}=\frac{1}{2}\left( \frac{w_{j}}{w_{i}} +\frac{v_{j}}{v_{i}}\right) . \end{aligned}$$

So

$$\begin{aligned} 4&=\left( \frac{w_{i}}{w_{j}}+\frac{v_{i}}{v_{j}}\right) \left( \frac{w_{j}}{w_{i}}+\frac{v_{j}}{v_{i}}\right) \\&\Leftrightarrow 2=\frac{w_{i}v_{j}}{w_{j}v_{i}}+\frac{w_{j}v_{i}}{w_{i} v_{j}}\Leftrightarrow \frac{w_{i}}{w_{j}}=\frac{v_{i}}{v_{j}}. \end{aligned}$$

The condition \(\frac{w_{i}}{w_{j}}=\frac{v_{i}}{v_{j}},\) for all \(i,j\in \{1,\ldots ,n\},\) implies that w and v are proportional. \(\square \)

Note from the proof of Lemma 5 that (1) holds for a pair i, j if and only if \(\frac{w_{i}}{w_{j}}=\frac{v_{i}}{v_{j}}.\)

We close this section with a topological property of \(\mathcal {E}(A)\).

Theorem 6

For any \(A\in \mathcal{P}\mathcal{C}_{n},\) \(\mathcal {E}(A)\) is a closed set (as a subset of the set of positive n-vectors).

Proof

We verify this by showing that the inefficient vectors, which form the complement of \(\mathcal {E}(A)\) in the set of positive n-vectors, form an open set, by appealing to Theorem 2. Suppose that \(v\notin \mathcal {E}(A).\) Then the digraph G(A, v) is not strongly connected. Let \({\widetilde{v}}\) be close enough to v (i.e., \({\widetilde{v}}\) lies in an open ball about v of sufficiently small positive radius). If \(i\rightarrow j\) is not an edge of G(A, v), then it is not an edge of \(G(A,{\widetilde{v}}).\) Hence the edge set of \(G(A,{\widetilde{v}})\) is contained in that of G(A, v). Since the latter digraph is not strongly connected, neither is the former, so that \({\widetilde{v}}\notin \mathcal {E}(A).\)

\(\square \)

We also note that, if \(w\in \mathcal {E}(A)\) and the matrix D(A, w) has no 0 off-diagonal entries, then \({\widetilde{w}}\in \mathcal {E}(A)\) for any vector \({\widetilde{w}}\) close enough to w, and \(w\in \mathcal {E}({\widetilde{A}})\) for any reciprocal matrix \({\widetilde{A}}\) close enough to A.

4 Inductive construction of efficient vectors

Suppose that \(A\in \mathcal{P}\mathcal{C}_{n}\) and that \(w\in \mathcal {E}(A(n)).\) Then G(A(n), w) is strongly connected. Can w be extended to an efficient vector for A, and, if so, how? For a positive scalar x, the vector \(w_{x}:=\left[ \begin{array}{c} w\\ x \end{array} \right] \in \mathcal {E}(A)\) if and only if \(G\left( A,w_{x}\right) \) is strongly connected. But, since the subgraph induced by vertices \(1,2,\ldots ,n-1\) of \(G\left( A,w_{x}\right) \) is G(A(n), w) and the latter is strongly connected, \(G\left( A,w_{x}\right) \) is strongly connected if and only if there is at least one edge from vertex n to the vertices of G(A(n), w) and also at least one edge from the latter to n (see Proposition 3 in Cruz et al. (2021) and its proof). Since the vector of the first \(n-1\) entries of the last column of \(D\left( A,w_{x}\right) \) is \(\frac{1}{x}w\) minus the vector of the first \(n-1\) entries of \(a_{n}\) (the last column of A), there are such edges if and only if this difference vector has a 0 entry or both positive and negative entries. This means that among \(\frac{w_{i}}{x}-a_{in},\) \(i=1,\ldots ,n-1,\) there are both nonnegative and nonpositive numbers. We restate this as:

Theorem 7

For \(A\in \mathcal{P}\mathcal{C}_{n}\) and \(w\in \mathcal {E}(A(n)),\) the vector

$$\begin{aligned} \left[ \begin{array}{c} w\\ x \end{array} \right] \in \mathcal {E}(A) \end{aligned}$$

if and only if the scalar x satisfies

$$\begin{aligned} x\in \left[ \min _{1\le i\le n-1}\frac{w_{i}}{a_{in}},\quad \max _{1\le i\le n-1}\frac{w_{i}}{a_{in}}\right] . \end{aligned}$$

Of course, the above interval is nonempty. This leads to a natural algorithm to construct a large subset of \(\mathcal {E}(A)\) for \(A\in \mathcal{P}\mathcal{C}_{n}.\)

Choose the upper left 2-by-2 principal submatrix \(A[\{1,2\}]\) of A. It is consistent and, up to a scale factor, has only one efficient vector \(w[\{1,2\}].\) Now consider every extension of it, in each of the possibly infinitely many ways given in Theorem 7, to an efficient vector for \(A[\{1,2,3\}]\). This gives a set \(w[\{1,2,3\}]\subseteq \mathcal {E} (A[\{1,2,3\}]).\) Now, continue extending each vector in \(w[\{1,2,3\}]\) to an element of \(\mathcal {E}(A[\{1,2,3,4\}])\) in the same way, and so on. This terminates in a subset \(w[\{1,2,\ldots ,n\}]\subseteq \mathcal {E}(A),\) as in the sketch below.
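
The following sketch of the construction is ours (Python/NumPy; the function names are hypothetical). Each step applies the interval of Theorem 7 to the current leading principal submatrix; any choice of x in the interval, here the midpoint, produces one efficient vector for A.

```python
import numpy as np

def extension_interval(A, w):
    """Admissible last entries x so that [w; x] is efficient for A
    (Theorem 7), assuming w is efficient for A(n)."""
    k = len(w)                           # A is assumed (k+1)-by-(k+1)
    candidates = w / A[:k, k]            # w_i / a_{i,n}, i = 1, ..., n-1
    return candidates.min(), candidates.max()

def build_efficient_vector(A):
    """Grow one efficient vector for A from A[{1,2}] by Theorem 7."""
    A = np.asarray(A, dtype=float)
    w = np.array([A[0, 1], 1.0])         # the efficient vector for A[{1,2}]
    for k in range(2, A.shape[0]):
        lo, hi = extension_interval(A[:k + 1, :k + 1], w)
        w = np.append(w, 0.5 * (lo + hi))   # any x in [lo, hi] works
    return w
```

Varying the choice of x within each interval traces out the full terminal set of the construction.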

We make two important observations. First, we may instead start with some other 2-by-2 principal submatrix \(A[\{i,j\}],\) \(i\ne j,\) and proceed similarly, either by inserting the new entry of the next efficient vector in the appropriate position, or by placing \(A[\{i,j\}]\) in the upper left 2-by-2 position, via permutation similarity, and proceeding in exactly the same way. We note that starting in two different positions may produce different terminal sets (Example 8), and the union of all possible terminal sets is contained in \(\mathcal {E}(A).\)

Second, \(w[\{1,2,\ldots ,n\}]\) may be a proper subset of \(\mathcal {E}(A),\) as truncation (deletion of an entry) of an efficient vector for A may not give an efficient vector for the corresponding principal submatrix (see Example 8).

Example 8

Let

$$\begin{aligned} A=\left[ \begin{array}{ccc} 1 &{} \quad 1 &{} \quad \frac{3}{2}\\ 1 &{} \quad 1 &{} \quad 1\\ \frac{2}{3} &{} \quad 1 &{} \quad 1 \end{array} \right] . \end{aligned}$$

The efficient vectors for \(A[\{1,2\}]\) are proportional to

$$\begin{aligned} \left[ \begin{array}{cc} 1&1 \end{array} \right] ^{T}. \end{aligned}$$

By Theorem 7, the vectors of the form

$$\begin{aligned} \left[ \begin{array}{ccc} 1&1&w_{3} \end{array} \right] ^{T} \end{aligned}$$

with

$$\begin{aligned} \min \left\{ \frac{2}{3},1\right\} \le w_{3}\le \max \left\{ \frac{2}{3},1\right\} \quad \Leftrightarrow \quad \frac{2}{3}\le w_{3}\le 1 \end{aligned}$$

are efficient for A (and, of course, all positive vectors proportional to them).

The efficient vectors for \(A[\{1,3\}]\) are proportional to

$$\begin{aligned} \left[ \begin{array}{cc} \frac{3}{2}&1 \end{array} \right] ^{T}. \end{aligned}$$

By Theorem 7, the vectors of the form

$$\begin{aligned} \left[ \begin{array}{ccc} \frac{3}{2}&w_{2}&1 \end{array} \right] ^{T} \end{aligned}$$

with

$$\begin{aligned} \min \left\{ \frac{3}{2},1\right\} \le w_{2}\le \max \left\{ \frac{3}{2},1\right\} \quad \Leftrightarrow \quad 1\le w_{2}\le \frac{3}{2} \end{aligned}$$

are efficient for A.

The efficient vectors for \(A[\{2,3\}]\) are proportional to

$$\begin{aligned} \left[ \begin{array}{cc} 1&1 \end{array} \right] ^{T}. \end{aligned}$$

By Theorem 7, the vectors of the form

$$\begin{aligned} \left[ \begin{array}{ccc} w_{1}&1&1 \end{array} \right] ^{T}, \end{aligned}$$

with

$$\begin{aligned} \min \left\{ 1,\frac{3}{2}\right\} \le w_{1}\le \max \left\{ 1,\frac{3}{2}\right\} \quad \Leftrightarrow \quad 1\le w_{1}\le \frac{3}{2}, \end{aligned}$$

are efficient for A.

Note that, by Theorem 4,

$$\begin{aligned} \mathcal {E}(A)=\left\{ \left[ \begin{array}{ccc} w_{1}&w_{2}&w_{3} \end{array} \right] ^{T}:w_{3}\le w_{2}\le w_{1}\le \frac{3}{2}w_{3}\right\} . \end{aligned}$$

For example, the vector \(\left[ \begin{array}{ccc} \frac{4}{3}&\frac{7}{6}&1 \end{array} \right] ^{T}\) is efficient for A,  though it does not belong to the set of vectors determined above, as no vector obtained from it by deleting one entry is efficient for the corresponding 2-by-2 principal submatrix.

There are cases in which we know all the efficient vectors for a larger submatrix, and then we can start our building process with this submatrix. In fact, taking into account Theorem 4, all efficient vectors for a 3-by-3 reciprocal matrix are known, as such a matrix is a simple perturbed consistent matrix. Thus, it is always possible to start the process from a 3-by-3 principal submatrix.

Example 9

Consider the matrix

$$\begin{aligned} A=\left[ \begin{array}{ccccc} 1 &{} \quad 1 &{} \quad 9 &{} \quad 4 &{} \quad \frac{1}{2}\\ 1 &{} \quad 1 &{} \quad 1 &{} \quad 4 &{} \quad 3\\ \frac{1}{9} &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad \frac{1}{3}\\ \frac{1}{4} &{} \quad \frac{1}{4} &{} \quad 1 &{} \quad 1 &{} \quad 2\\ 2 &{} \quad \frac{1}{3} &{} \quad 3 &{} \quad \frac{1}{2} &{} \quad 1 \end{array} \right] . \end{aligned}$$
(2)

By Theorem 4, the efficient vectors for \(A[\{1,2,3\}]\) are the vectors of the form

$$\begin{aligned} \left[ \begin{array}{ccc} w_{1}&w_{2}&w_{3} \end{array} \right] ^{T}, \end{aligned}$$

with \(w_{3}\le w_{2}\le w_{1}\le 9w_{3}.\) By Theorem 7, the vectors of the form

$$\begin{aligned} \left[ \begin{array}{cccc} w_{1}&w_{2}&w_{3}&w_{4} \end{array} \right] ^{T}, \end{aligned}$$

with

$$\begin{aligned} \min \left\{ \frac{w_{1}}{4},\frac{w_{2}}{4},w_{3}\right\}&\le w_{4} \le \max \left\{ \frac{w_{1}}{4},\frac{w_{2}}{4},w_{3}\right\} \Leftrightarrow \\ \min \left\{ \frac{w_{2}}{4},w_{3}\right\}&\le w_{4}\le \max \left\{ \frac{w_{1}}{4},w_{3}\right\} \end{aligned}$$

are efficient for \(A[\{1,2,3,4\}].\) Again by Theorem 7, the vectors of the form

$$\begin{aligned} \left[ \begin{array}{ccccc} w_{1}&w_{2}&w_{3}&w_{4}&w_{5} \end{array} \right] ^{T} \end{aligned}$$

with

$$\begin{aligned} \min \left\{ 2w_{1},\frac{w_{2}}{3},3w_{3},\frac{w_{4}}{2}\right\}&\le w_{5}\le \max \left\{ 2w_{1},\frac{w_{2}}{3},3w_{3},\frac{w_{4}}{2}\right\} \Leftrightarrow \\ \min \left\{ \frac{w_{2}}{3},\frac{w_{4}}{2}\right\}&\le w_{5}\le \max \left\{ 2w_{1},3w_{3}\right\} , \end{aligned}$$

are efficient for A. For instance, the vectors

$$\begin{aligned} \left[ \begin{array}{ccccc} 3&2&1&\frac{3}{4}&w_{5} \end{array} \right] ^{T} \end{aligned}$$

with \(\frac{3}{8}\le w_{5}\le 6,\) are efficient for A. Moreover, these are the only efficient vectors with the given first four entries that are obtained by this construction.

We observe that, if \(A\in \mathcal{P}\mathcal{C}_{n}\) is an (inconsistent) simple perturbed consistent matrix, then A has a principal 3-by-3 (inconsistent) simple perturbed consistent submatrix B. If we start the inductive construction of efficient vectors for A with the submatrix B, for which \(\mathcal {E}(B)\) is known by Lemma 1 and Theorem 4, then we obtain \(\mathcal {E}(A).\) This fact follows from Corollary 9 in Cruz et al. (2021), taking into account that, by Lemma 1 and Remark 4.6 in Furtado (2023), we may focus on \(A=Z_{n}(x),\) for some \(x>0.\)

Similarly, if \(A\in \mathcal{P}\mathcal{C}_{n}\) is a double perturbed consistent matrix (that is, \(A\ \)is obtained from a consistent matrix by modifying two entries above the diagonal and the corresponding reciprocal entries), in which no two perturbed entries lie in the same row or column, then A has a principal 4-by-4 double perturbed consistent submatrix B of the same type and, by Theorem 4.2 in Furtado (2023) and Lemma 1, \(\mathcal {E}(B)\) is known. By Corollary 4.5 in Furtado (2023), if we start the inductive construction of efficient vectors with B,  then again we obtain \(\mathcal {E}(A).\)

Of course, in these simple and double perturbed consistent cases, all the efficient vectors for \(A\in \mathcal{P}\mathcal{C}_{n}\) are already known (Theorem 4, and Theorem 4.2 in Furtado (2023)).

5 Columns of a reciprocal matrix

It has already been noted (Theorem 3) that the Hadamard geometric mean of all columns of \(A\in \mathcal{P}\mathcal{C}_{n}\) is efficient for A. Interestingly, each individual column of A is also efficient for A.

Lemma 10

Let \(A\in \mathcal{P}\mathcal{C}_{n}.\) Then any column of A lies in \(\mathcal {E}(A).\)

Proof

Let \(a_{j}\) be the j-th column of A. Then the j-th column of \(D(A,a_{j})\) has entries \(\frac{a_{ij}}{1}-a_{ij}=0\). Hence, the edges \(i\rightarrow j\) and \(j\rightarrow i\) are in \(G(A,a_{j})\), for any \(1\le i\le n\), with \(i\ne j\), and, therefore, \(G(A,a_{j})\) is strongly connected, verifying that \(a_{j}\) is efficient, by Theorem 2. \(\square \)

Further, the geometric mean of any subset of the columns of a reciprocal matrix A also lies in \(\mathcal {E}(A).\) To prove this result, we use the following lemma.

Lemma 11

Let \(A\in \mathcal{P}\mathcal{C}_{n},\) \(D={\text {diag}}(d_{1},\ldots ,d_{n})\) be a positive diagonal matrix and \(1\le s\le n.\) If w is the geometric mean of s columns of A then Dw is a positive multiple of the geometric mean of the corresponding s columns of \(DAD^{-1}.\)

Proof

By a possible permutation similarity and taking into account Lemma 1, suppose, without loss of generality, that w is the geometric mean of the first s columns of A. (Note that, if w is the geometric mean of s columns of a reciprocal matrix A, then Pw is the geometric mean of the corresponding columns of \(PAP^{T}\), for a permutation matrix P.) The i-th entry of Dw is \(d_{i}\Pi _{j=1}^{s} a_{ij}^{\frac{1}{s}}\). On the other hand, the i-th entry of the geometric mean v of the first s columns of \(DAD^{-1}\) is \(\Pi _{j=1}^{s}\left( \frac{d_{i}a_{ij}}{d_{j}}\right) ^{\frac{1}{s}}=d_{i}\Pi _{j=1}^{s}\left( \frac{a_{ij}}{d_{j}}\right) ^{\frac{1}{s}}.\) Thus, the quotient of the i-th entries of Dw and v is \(\Pi _{j=1}^{s}(d_{j})^{\frac{1}{s}},\) which does not depend on i, implying the claim. \(\square \)

Theorem 12

Let \(A\in \mathcal{P}\mathcal{C}_{n}.\) Then the geometric mean of any collection of distinct columns of A lies in \(\mathcal {E}(A).\)

Proof

Let \(1\le s\le n.\) We show that the geometric mean \(w_{A}\) of s distinct columns of A is efficient for A. The proof is by induction on n. For \(n=2,\) the result is straightforward. Suppose that \(n>2.\) If \(s=1\) or \(s=n,\) the result follows from Lemma 10 or Theorem 3, respectively. Suppose that \(1<s<n.\) By Lemmas 1 and 11, we may and do assume that the s columns of A are the first ones, and the entries in the last column and in the last row of A are all equal to 1, that is,

$$\begin{aligned} A=\left[ \begin{array}{cc} B &{} \quad {\textbf{e}}\\ {\textbf{e}}^{T} &{} \quad 1 \end{array} \right] , \end{aligned}$$

where \({\textbf{e}}\) is the \((n-1)\)-vector with all entries equal to 1 and \(B\in \mathcal{P}\mathcal{C}_{n-1}\). Let \(b_{1},\ldots ,b_{s}\) be the first s columns of B,  so that, for \(j=1,\ldots ,s,\)

$$\begin{aligned} a_{j}=\left[ \begin{array}{c} b_{j}\\ 1 \end{array} \right] . \end{aligned}$$

We have

$$\begin{aligned} D(A,w_{A})=\left[ \begin{array}{cc} D(B,w_{B}) &{} w_{B}-{\textbf{e}}\\ w_{B}^{(-T)}-{\textbf{e}}^{T} &{} 0 \end{array} \right] , \end{aligned}$$

in which \(w_{B}\) is the geometric mean of the columns \(b_{1},\ldots ,b_{s}.\) By the induction hypothesis, \(w_{B}\) is efficient for B so that, by Theorem 2, \(G(B,w_{B})\) is strongly connected. Thus, \(G(A,w_{A})\) is strongly connected if and only if there is at least one edge from vertex n to the vertices of \(G(B,w_{B})\) and also at least one edge from the latter to n (see the observation before Theorem 7). Hence, \(G(A,w_{A})\) is strongly connected if and only if \(w_{B}-{\textbf{e}}\) is neither entry-wise strictly positive nor entry-wise strictly negative. The product of the first s entries of \(w_{B}\) is \(l^{\frac{1}{s}}\), where l is the product of the entries of \(B[\{1,\ldots ,s\}].\) Since this matrix is reciprocal, \(l=1.\) Thus, the vector formed by the first s entries of \(w_{B}\) is neither strictly greater than \({\textbf{e}}\) nor strictly less than \({\textbf{e}}\) (entry-wise), implying that \(G(A,w_{A})\) is strongly connected. \(\square \)

We observe that we have \(2^{n}-1\) (nonempty) distinct subsets of columns of \(A\in \mathcal{P}\mathcal{C}_{n}\) (not necessarily corresponding to different geometric means).
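
In practice, these subset geometric means can be enumerated as in the following sketch (ours, not from the paper); combined with the efficiency test sketched in Sect. 2, it gives a numerical confirmation of Theorem 12.

```python
import itertools
import numpy as np

def subset_geometric_means(A):
    """Yield (subset, Hadamard geometric mean of those columns of A)
    over all 2^n - 1 nonempty subsets of columns."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            # entry-wise geometric mean of the selected columns
            yield cols, np.exp(np.log(A[:, cols]).mean(axis=1))

# by Theorem 12, every subset geometric mean passes the efficiency test:
# assert all(is_efficient(A, w) for _, w in subset_geometric_means(A))
```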

The sets of efficient vectors for matrices in \(\mathcal{P}\mathcal{C}_{2}\) (any 2-by-2 reciprocal matrix is consistent) and in \(\mathcal{P}\mathcal{C}_{3}\) (any 3-by-3 reciprocal matrix is a simple perturbed consistent matrix) are closed under geometric means. In the latter case, this follows from Lemma 1 and the facts that a matrix in \(\mathcal{P}\mathcal{C}_{3}\) is monomially similar to \(Z_{3}(x),\) for some \(x>0\), and that, by Theorem 4, the set of efficient vectors for \(Z_{3}(x)\) is closed under the geometric mean. However, the set of efficient vectors for matrices in \(\mathcal{P}\mathcal{C}_{n},\) with \(n>3,\) may not be closed under the geometric mean, as the next example illustrates.

Example 13

Let A be the matrix in (2). Let

$$\begin{aligned} A^{\prime }=A(5)=\left[ \begin{array}{cccc} 1 &{} \quad 1 &{} \quad 9 &{} \quad 4\\ 1 &{} \quad 1 &{} \quad 1 &{} \quad 4\\ \frac{1}{9} &{} \quad 1 &{} \quad 1 &{} \quad 1\\ \frac{1}{4} &{} \quad \frac{1}{4} &{} \quad 1 &{} \quad 1 \end{array} \right] , \end{aligned}$$

the 4-by-4 principal submatrix of A obtained by deleting the 5-th row and column. Taking into account Example 9, the vectors

$$\begin{aligned} w=\left[ \begin{array}{c} 4.1\\ 4.1\\ 1\\ 1 \end{array} \right] \text { and }v=\left[ \begin{array}{c} 4.2\\ 4\\ 3\\ 1 \end{array} \right] \end{aligned}$$

are efficient for \(A^{\prime }.\) However, the vector \(\left( w\circ v\right) ^{\left( \frac{1}{2}\right) }\) is not efficient for \(A^{\prime }\) as the first three entries of the last column of \(D(A^{\prime },\left( w\circ v\right) ^{\left( \frac{1}{2}\right) })\) are positive and, therefore, \(G(A^{\prime },(w\circ v)^{\left( \frac{1}{2}\right) })\) is not strongly connected.

6 Numerical experiments

We next give numerical examples comparing the geometric means of different proper subsets of columns of a reciprocal matrix A with the geometric mean of all columns of A, denoted here by \(w_{C},\) a vector proposed by several authors to obtain a consistent matrix approximating A. Recall from Sect. 5 that all these vectors are efficient for A. We take \(\left\| D(A,w)\right\| _{1},\) the sum of all entries of \(\left| D(A,w)\right| ,\) as a measure of effectiveness of \(w\in \mathcal {E}(A),\) as well as \(\left\| D(A,w)\right\| _{2},\) the Frobenius norm of D(A, w). Recall that, for an n-by-n matrix \(B=[b_{ij}],\) we have

$$\begin{aligned} \left\| B\right\| _{2}=\left( \sum _{i,j=1,\ldots ,n}(b_{ij})^{2}\right) ^{\frac{1}{2}}. \end{aligned}$$

Other measures (norms) are possible. For comparison, we also consider the case in which w is the Perron eigenvector of A, denoted by \(w_{P}\), as it is one of the most widely used vectors from which to estimate a consistent matrix close to A. Our experiments were done using the software Octave version 8.3.0.
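
For concreteness, the two measures and the Perron eigenvector can be computed as in the sketch below (NumPy; this mirrors, but is not, the Octave code used in our experiments).

```python
import numpy as np

def deviation_norms(A, w):
    """Return ||D(A, w)||_1 (sum of absolute entries) and
    ||D(A, w)||_2 (Frobenius norm), with D(A, w) = w w^(-T) - A."""
    A = np.asarray(A, dtype=float)
    w = np.asarray(w, dtype=float)
    Dw = np.outer(w, 1.0 / w) - A
    return np.abs(Dw).sum(), np.linalg.norm(Dw, "fro")

def perron_vector(A):
    """Positive eigenvector for the spectral radius of the positive matrix A."""
    vals, vecs = np.linalg.eig(np.asarray(A, dtype=float))
    v = vecs[:, np.argmax(vals.real)].real   # Perron root is the dominant eigenvalue
    return np.abs(v)
```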

We first make the comparisons for two given matrices, one of size 5-by-5 and another of size 8-by-8, and then consider an experiment in which a large number of 5-by-5 reciprocal matrices are randomly generated following a method that does not control the level of inconsistency (Examples 14 and 15, and Experiment 16). In all these cases, it can be verified that the vector \(w_{C}\) is not always the best choice among the geometric means of subsets of columns, though in the simulations \(w_{C}\) is the one that most frequently occurs as the best, and it seems to be close to the best in general.

Then, we consider matrices with a more controlled and lower level of inconsistency in Experiments 17 and 18, and Example 19. In Experiments 17 and 18, we generate random matrices of sizes 5-by-5 and 10-by-10, respectively, following the method suggested in Szádoczki et al. (2023) (which is different from the one used in Experiment 16). Again, \(w_{C}\) is not always the best choice among the geometric means of subsets of columns of the matrices, though in the simulations, when considering the Frobenius norm, it is the one that most frequently occurs as the best (with a larger frequency than in the simulations in which the level of inconsistency is not controlled). Regarding the 1-norm, close to consistency, the best results seem to be attained when a single column is considered.

6.1 Matrices with uncontrolled level of inconsistency

Example 14

Consider the matrix

$$\begin{aligned} A=\left[ \begin{array}{ccccc} 1 &{} \quad \frac{9}{5} &{} \quad \frac{6}{5} &{} \quad 12 &{} \quad 6\\ \frac{5}{9} &{} \quad 1 &{} \quad \frac{4}{5} &{} \quad 100 &{} \quad 5\\ \frac{5}{6} &{} \quad \frac{5}{4} &{} \quad 1 &{} \quad \frac{17}{10} &{} \quad 6\\ \frac{1}{12} &{} \quad \frac{1}{100} &{} \quad \frac{10}{17} &{} \quad 1 &{} \quad 3\\ \frac{1}{6} &{} \quad \frac{1}{5} &{} \quad \frac{1}{6} &{} \quad \frac{1}{3} &{} \quad 1 \end{array} \right] \in \mathcal{P}\mathcal{C}_{5}. \end{aligned}$$

There are 31 nonempty distinct subsets of the set of columns of A. We identify each subset with a sequence of five binary digits, in which a 1 in position i means that the i-th column of A belongs to the subset, while a 0 means that it does not. The sequences are taken in increasing (numerical) order, and by \(S_{i}\) we denote the subset of columns associated with the i-th sequence. Note that \(S_{31}\) is the set of all columns of A. By \(w_{i}\) we denote the geometric mean of the vectors in \(S_{i}.\)

In Table 1 we give the norms \(\left\| D(A,w_{i})\right\| _{1}\) and \(\left\| D(A,w_{i})\right\| _{2},\) \(i=1,\ldots ,31.\) In Table 2 we emphasize the results obtained for the geometric mean \(w_{C}\) of all columns,  for the vectors that produce the smallest and the largest values of \(\left\| D(A,w_{i})\right\| _{1}\) and \(\left\| D(A,w_{i})\right\| _{2},\) and also consider the case of the Perron eigenvector \(w_{P}\) of A (which is efficient). Note that

$$\begin{aligned} \frac{\max _{i}\left\| D(A,w_{i})\right\| _{1}}{\min _{i}\left\| D(A,w_{i})\right\| _{1}}=3.7048\qquad \text {and}\qquad \frac{\max _{i}\left\| D(A,w_{i})\right\| _{2}}{\min _{i}\left\| D(A,w_{i} )\right\| _{2}}=5.8235. \end{aligned}$$

It can be observed that, according to the considered measures, there are proper subsets of columns that produce better results than those for the Perron eigenvector and for the set of all columns. In fact, from Table 2, the geometric mean of columns 2, 3, 5 (associated with the sequence 01101 given by \(S_{13}\)) is the one that performs best in terms of the 1-norm, when compared with the Perron vector and the geometric mean of any other subset of columns. However, the geometric means of the subsets of columns associated with \(S_{i}\), for \(i=1,5,10,13,16,17,20,21,29,\) also perform better than \(w_{C}\) and \(w_{P}\) (see Table 1). In addition, the geometric means of the subsets of columns associated with \(S_{i}\), for \(i=4,9,11,12,15,25,27,28,30,31,\) also perform better than \(w_{P}\). For example, for \(i=5,\) the geometric mean of columns 3, 5 (associated with the sequence 00101 given by \(S_{5}\)) performs better in terms of the 1-norm than \(w_{C}\) and \(w_{P}\). Similarly, from Table 2, it follows that the geometric mean of columns 2, 4 (associated with the sequence 01010 given by \(S_{10}\)) is the one that performs best in terms of the Frobenius norm, when compared with \(w_{P}\) and the geometric mean of any other subset of columns. For this norm, the geometric means of the subsets of columns associated with \(S_{i}\), for \(i=10,11,14,18,26,27,30,\) perform better than \(w_{P}\) and \(w_{C}\). Regarding the latter, the geometric means of the subsets of columns associated with \(S_{i}\), for \(i=9,15,24,\) also perform better.

We summarize our results in Fig. 1, which compares all the results obtained. On the x axis we have the index i of each subset \(S_{i}\) of columns. On the y axis we have the values of \(\left\| D(A,w_{i})\right\| _{1}\) and \(\left\| D(A,w_{i})\right\| _{2}\) for the different vectors \(w_{i}.\) A line joining the values of each of these norms for the different subsets of columns is plotted. A horizontal line corresponding to each of the considered norms for the Perron eigenvector also appears.

Example 15

Consider the matrix

$$\begin{aligned} A=\left[ \begin{array}{cccccccc} 1 &{} \quad \frac{9}{5} &{} \quad \frac{6}{5} &{} \quad 12 &{} \quad 6 &{} \quad 2 &{} \quad 5 &{} \quad 3\\ \frac{5}{9} &{} \quad 1 &{} \quad \frac{4}{5} &{} \quad \frac{1}{10} &{} \quad 6 &{} \quad \frac{23}{10} &{} \quad \frac{1}{2} &{} \quad \frac{43}{10}\\ \frac{5}{6} &{} \quad \frac{5}{4} &{} \quad 1 &{} \quad \frac{17}{10} &{} \quad 6 &{} \quad \frac{1}{5} &{} \quad 50 &{} \quad \frac{3}{10}\\ \frac{1}{12} &{} \quad 10 &{} \quad \frac{10}{17} &{} \quad 1 &{} \quad 3 &{} \quad 12 &{} \quad 25 &{} \quad \frac{13}{10}\\ \frac{1}{6} &{} \quad \frac{1}{6} &{} \quad \frac{1}{6} &{} \quad \frac{1}{3} &{} \quad 1 &{} \quad \frac{21}{10} &{} \quad 2 &{} \quad 3\\ \frac{1}{2} &{} \quad \frac{10}{23} &{} \quad 5 &{} \quad \frac{1}{12} &{} \quad \frac{10}{21} &{} \quad 1 &{} \quad 1 &{} \quad 3\\ \frac{1}{5} &{} \quad 2 &{} \quad \frac{1}{50} &{} \quad \frac{1}{25} &{} \quad \frac{1}{2} &{} \quad 1 &{} \quad 1 &{} \quad 3\\ \frac{1}{3} &{} \quad \frac{10}{43} &{} \quad \frac{10}{3} &{} \quad \frac{10}{13} &{} \quad \frac{1}{3} &{} \quad \frac{1}{3} &{} \quad \frac{1}{3} &{} \quad 1 \end{array} \right] \in \mathcal{P}\mathcal{C}_{8}. \end{aligned}$$

Table 3 and Fig. 2 are the analogs of Table 2 and Fig. 1 for the 8-by-8 reciprocal matrix considered here. Note that in this case we have 255 different subsets \(S_{i}\) of the set of columns of A. Again, a proper subset of the columns produces better results than either all columns or the Perron vector (which is efficient for A). Note that

$$\begin{aligned} \frac{\max _{i}\left\| D(A,w_{i})\right\| _{1}}{\min _{i}\left\| D(A,w_{i})\right\| _{1}}=5.0432\qquad \text {and}\qquad \frac{\max _{i}\left\| D(A,w_{i})\right\| _{2}}{\min _{i}\left\| D(A,w_{i} )\right\| _{2}}=10.890. \end{aligned}$$

Experiment 16

We generate random reciprocal matrices \(A_{j}\in \mathcal{P}\mathcal{C}_{5}\), \(j=1,\ldots ,10000,\) by generating matrices \(B_{j}\) with entries from a uniform distribution on the interval (1, 10) and letting \(A_{j}=B_{j}\circ (B_{j}^{(-1)})^{T},\) where \(B_{j}^{(-1)}\) denotes the entry-wise inverse of \(B_{j}\) and \(\circ \) the Hadamard product. These matrices may present a significant level of inconsistency (Csató & Petróczy, 2021). For each \(i=1,\ldots ,31,\) we determine

$$\begin{aligned} p_{1}(i):= {\displaystyle \sum \limits _{j=1}^{10000}} \frac{\left\| D(A_{j},w_{ij})\right\| _{1}}{\min _{k}\left\| D(A_{j},w_{kj})\right\| _{1}}\text { and }p_{2}(i):= {\displaystyle \sum \limits _{j=1}^{10000}} \frac{\left\| D(A_{j},w_{ij})\right\| _{2}}{\min _{k}\left\| D(A_{j},w_{kj})\right\| _{2}}, \end{aligned}$$

in which \(w_{ij}\) is the geometric mean of the columns of \(A_{j}\) in the subset \(S_{i}.\) (We identify the subsets \(S_{i}\) with indices of columns, as introduced in Example 14.) We note that none of the generated matrices \(A_{j}\) is consistent, so that the denominators in each summand of \(p_{1}(i)\) and \(p_{2}(i)\) are nonzero. Also, any such summand is at least 1, and is close to 1 when \(w_{ij}\) behaves close to the best among the geometric means of the columns of \(A_{j}\).
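
A sketch of this simulation loop follows (ours; the uniform sampling is as described above, while details such as the random seed are immaterial):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_reciprocal(n=5):
    """A = B o (B^(-1))^T with B entry-wise uniform on (1, 10);
    then a_ij = b_ij / b_ji, so A is reciprocal with unit diagonal."""
    B = rng.uniform(1.0, 10.0, size=(n, n))
    return B * (1.0 / B).T

# accumulating p_1 with the helper sketches from earlier sections:
# p1 = np.zeros(2**5 - 1)
# for _ in range(10000):
#     A = random_reciprocal()
#     norms = np.array([deviation_norms(A, w)[0]
#                       for _, w in subset_geometric_means(A)])
#     p1 += norms / norms.min()
```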

In Table 4 we display the values of \(p_{1}(i),\) \(i=1,\ldots ,31\) (rounded to integers). For each i,  we also display the number \(n_{1}(i)\) of j’s for which \(\min \left\| D(A_{j},w)\right\| _{1}\), when w runs over all the geometric means of the subsets of columns of \(A_{j}\), is attained by the subset \(S_{i}\). In Table 5 we display the corresponding data, \(p_{2}(i)\) and \(n_{2}(i),\) for the Frobenius norm.

We can see that the minimum 1-norm and the minimum Frobenius norm of \(D(A_{j},w),\) when w runs over the geometric means of the subsets of columns of \(A_{j},\) are, most of the time, not attained by the geometric mean \(w_{C}\) of all columns of \(A_{j}.\) In fact, the minimum is attained by \(w_{C}\) less than \(7\%\) of the time when considering the 1-norm and less than \(19\%\) of the time when considering the Frobenius norm. However, in both cases, \(w_{C}\) is the geometric mean that most frequently occurs as the best. We also emphasize the good behavior regarding the 1-norm when just one column is considered. We observe that the measures \(p_{1}(31)\) and \(p_{2}(31),\) associated with \(w_{C},\) are relatively close to 10000 and are the best (smallest) when compared with the corresponding ones for the other subsets of columns. Thus, in general, the performance of \(w_{C}\) is close to the best.

6.2 Matrices with controlled level of inconsistency

Experiment 17

We reproduce Experiment 16 but now generate the matrices \(A_{j}\in \mathcal{P}\mathcal{C}_{5}\), \(j=1,\ldots ,10000,\) following the method proposed in Szádoczki et al. (2023). In each trial j, we generated a 5-vector \(v_{j}\) with entries from a uniform distribution in (1, 9) and constructed the reciprocal matrix \(B_{j}=[b_{k\ell }]=v_{j}v_{j}^{(-T)}.\) We also generated a 5-by-5 matrix \(H_{j}=[h_{k\ell }]\) with entries from a uniform distribution in \((-1,1).\) We then constructed the reciprocal matrix \(A_{j}=[a_{k\ell }]\) as follows: for all \(k,\ell \) such that \(b_{k\ell }>1\), or \(b_{k\ell }=1\) and \(k>\ell ,\) we let

$$\begin{aligned} a_{k\ell }=\left\{ \begin{array}{ll} b_{k\ell }+h_{k\ell } &{} \quad \text {if }b_{k\ell }+h_{k\ell }\ge 1\\ \frac{1}{2-b_{k\ell }-h_{k\ell }} &{} \quad \text {if }b_{k\ell }+h_{k\ell }<1 \end{array} \right. \end{aligned}$$

and then let \(a_{\ell k}=1/a_{k\ell }.\) According to Szádoczki et al. (2023), the matrices \(A_{j}\) have a low level of inconsistency. The obtained results are presented in Tables 6 and 7 regarding the 1-norm and the Frobenius norm, respectively (\(n_{1}(i),\) \(n_{2}(i),\) \(p_{1}(i)\) and \(p_{2}(i)\) are defined as in Experiment 16).
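
A sketch of this generation scheme, as we read it from the description above (variable names are ours, and the code is our paraphrase of the method of Szádoczki et al. (2023)):

```python
import numpy as np

rng = np.random.default_rng(0)

def low_inconsistency_reciprocal(n=5, noise=1.0):
    """Perturb the consistent matrix v v^(-T) entry by entry, as above."""
    v = rng.uniform(1.0, 9.0, size=n)
    B = np.outer(v, 1.0 / v)                 # consistent: b_kl = v_k / v_l
    H = rng.uniform(-noise, noise, size=(n, n))
    A = np.ones((n, n))
    for k in range(n):
        for l in range(n):
            if B[k, l] > 1 or (B[k, l] == 1 and k > l):
                t = B[k, l] + H[k, l]
                A[k, l] = t if t >= 1 else 1.0 / (2.0 - t)  # reflect below 1
                A[l, k] = 1.0 / A[k, l]                     # reciprocal entry
    return A

# Experiment 18 uses n = 10 and noise = 3/2 (uniform on (-3/2, 3/2))
```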

We can see that the minimum norm of \(D(A_{j},w),\) when w runs over the geometric means of the subsets of columns of \(A_{j},\) is attained by the geometric mean \(w_{C}\) of all columns of \(A_{j}\) less than \(5\%\) of the time when considering the 1-norm and less than \(32\%\) of the time when considering the Frobenius norm. In the latter case, \(w_{C}\) is the geometric mean that most frequently occurs as the best. An interesting additional finding is that the best behavior regarding the 1-norm is attained when just one column is considered (followed by the case in which all columns are considered). We also observe that the measures \(p_{1}(31)\) and \(p_{2}(31)\) associated with \(w_{C}\) are the best (smallest) when compared with the corresponding ones for the other subsets of columns, which means that \(w_{C}\) generally behaves well.

Experiment 18

In this experiment, we generated 1000 reciprocal matrices \(A_{j},\) now of size 10, following the same method as in Experiment 17, but in this case \(H_{j}\) has entries from a uniform distribution on \((-3/2,3/2),\) implying a modest level of inconsistency of \(A_{j}\) and also a larger standard deviation of inconsistency relative to Experiment 17 (see Figure 2 in Szádoczki et al. (2023)). When compared with the other \(2^{10}-2\) geometric means of proper subsets of columns, the geometric mean of all columns had the best performance only \(0.8\%\) of the time for the 1-norm of \(D(A_{j},w)\), and \(3.7\%\) of the time for the Frobenius norm. However, we obtained \(p_{1}(1023)=1044\) and \(p_{2}(1023)=1037\) (as defined in Experiment 16, now with 1000 instead of 10000 matrices). This means that, on average, the summands of \(p_{1}(1023)\) and \(p_{2}(1023)\) are close to 1, which shows that, in general, the geometric mean of all columns had a performance close to the best one among the geometric means of all subsets of columns.

Example 19

Consider the matrix

$$\begin{aligned} A=\left[ \begin{array}{ccccc} 1 &{} \quad 1 &{} \quad 1 &{} \quad 1+x &{} \quad y\\ 1 &{} \quad 1 &{} \quad 1-x &{} \quad 1 &{} \quad 1\\ 1 &{} \quad \frac{1}{1-x} &{} \quad 1 &{} \quad 1 &{} \quad 1+x\\ \frac{1}{1+x} &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1\\ \frac{1}{y} &{} \quad 1 &{} \quad \frac{1}{1+x} &{} \quad 1 &{} \quad 1 \end{array} \right] \in \mathcal{P}\mathcal{C}_{5}, \end{aligned}$$

for \(0<x<1\) and \(y>0.\) Table 8 shows that, for \(y=1\) and small values of x (in which case the matrix A is close to consistency), the geometric mean of all columns and the Perron eigenvector perform well when compared with the geometric means \(w_{i}\), \(i=1,\ldots ,31,\) of the 31 subsets of columns of A, with better results the smaller x is. For completeness, we also consider a large value of y (for which A has a higher level of inconsistency) and verify that in that case the results are worse. Table 8 suggests that the relative positions of the geometric mean of all columns and of the Perron vector improve with the level of consistency. While for \(y=1\) and \(x=0.1,0.3,0.5,\) the best behavior, according to the Frobenius norm of D(A, w), is attained for \(w=w_{C},\) Table 9 shows that, for the 1-norm, a single column (column 1 or 3) performs better. We note that, in all considered cases, the Perron vector is efficient.

7 Conclusions

In the context of the Analytic Hierarchy Process, pairwise comparison matrices (PC matrices), also called reciprocal matrices, are used to rank different alternatives. In practice, the obtained reciprocal matrices are usually inconsistent, and a good consistent matrix approximating the reciprocal matrix should be obtained. A consistent matrix is uniquely determined by a positive vector (the vector of priorities or weights). Many methods have been proposed in the literature to obtain the vectors from which a consistent matrix approximating a given reciprocal matrix is constructed. Some of the most widely used methods consist of the choice of the Perron eigenvector of the reciprocal matrix or of the Hadamard geometric mean of all its columns. An important property that should be satisfied by the vectors on which such a consistent matrix is based is efficiency. It is known that the Hadamard geometric mean of all columns of a reciprocal matrix is efficient, though the Perron eigenvector does not always satisfy this property.

Here we give an algorithm to construct efficient vectors for a reciprocal matrix A from efficient vectors for principal submatrices of A. We also show that the geometric mean of the vectors in any nonempty subset of the columns of A is efficient for A. We give an example showing that the geometric mean of two efficient vectors need not be efficient. An interesting open question is: given two efficient vectors for a reciprocal matrix A, when is the Hadamard geometric mean of the vectors efficient for A and, moreover, when is the set of efficient vectors for A geometrically convex (recall that it is in the 2-by-2 and 3-by-3 cases, but not in general)?

We give numerical examples comparing the geometric means obtained from proper subsets of the columns of A with the geometric mean of all columns of A. We conclude that the geometric mean of some proper subsets of columns may produce better results (also when compared with the Perron eigenvector). However, the geometric mean of all columns performs well in the sense that, in general, its behavior is close to the best one among all the geometric means of subsets of columns. Also, its behavior seems to improve as the level of inconsistency decreases.

The extension result to construct efficient vectors provided in this paper seems to play an important role in solving many questions concerning the study of efficient vectors for reciprocal matrices. For example, it should be helpful in the construction of classes of efficient vectors for some block-perturbed consistent matrices (that is, reciprocal matrices obtained from consistent matrices by modifying a principal block), extending the already studied case of a block of size 2 (Cruz et al., 2021) and some cases of blocks of sizes 3 and 4 (Furtado, 2023). We also think this result will be helpful in the study of the existence of rank reversals by addition of an alternative (Barzilai & Golany, 1994; Schenkerman, 1994; Wang & Luo, 2009).

Table 1 1-norm and Frobenius norm of \(D(A,w_i)\), when \(w_i\) is the geometric mean of the columns of A in the subset \(S_i\), \(i=1,\ldots ,31\) (Example 14)
Table 2 Comparison of the performance of the geometric mean \(w_C\) of all the columns of A, the Perron eigenvector \(w_P\) of A and the geometric means of the subsets of the columns of A with best and worst behaviors (Example 14)
Fig. 1

Comparison of the performance of the geometric means of the subsets of columns of A and the Perron eigenvector \(w_{P}\) of A (Example 14)

Table 3 Comparison of the performance of the geometric mean \(w_C\) of all the columns of A, the Perron eigenvector \(w_P\) of A and the geometric means of the subsets of the columns of A with best and worst behaviors (Example 15)
Fig. 2

Comparison of the performance of the geometric means of the subsets of the columns of A and the Perron eigenvector \(w_{P}\) of A (Example 15)

Table 4 Behavior of each subset \(S_i\) of columns, \(i=1,\ldots ,31\), with respect to the 1-norm of \(D(A,w_{i})\), in which \(w_{i}\) is the geometric mean of the columns of A in the subset \(S_i\), in an experiment with 10000 random 5-by-5 reciprocal matrices A with uncontrolled level of inconsistency (Experiment 16)
Table 5 Behavior of each subset \(S_i\) of columns, \(i=1,\ldots ,31\), with respect to the Frobenius norm of \(D(A,w_{i})\), in which \(w_{i}\) is the geometric mean of the columns of A in the subset \(S_i\), in an experiment with 10000 random 5-by-5 reciprocal matrices A with uncontrolled level of inconsistency (Experiment 16)
Table 6 Behavior of each subset \(S_i\) of columns, \(i=1,\ldots ,31\), with respect to the 1-norm of \(D(A,w_{i})\), in which \(w_{i}\) is the geometric mean of the columns of A in the subset \(S_i\), in an experiment with 10000 random 5-by-5 reciprocal matrices A with low level of inconsistency (Experiment 17)
Table 7 Behavior of each subset \(S_i\) of columns, \(i=1,\ldots ,31\), with respect to the Frobenius norm of \(D(A,w_{i})\), in which \(w_{i}\) is the geometric mean of the columns of A in the subset \(S_i\), in an experiment with 10000 random 5-by-5 reciprocal matrices A with low level of inconsistency (Experiment 17)
Table 8 Comparison of the performance of the geometric means of the subsets of columns of A and the Perron eigenvector, for some values of x and y (Example 19)
Table 9 Emphasis on the best behavior of v, when v is the first column or the third column of A (with A of low inconsistency), when compared with the geometric mean of all columns and the Perron eigenvector (Example 19)