Abstract
The Analytic Hierarchy Process (AHP) is a much-discussed method for ranking business alternatives based on empirical and judgemental information. We focus here upon the key component of deducing efficient vectors for a reciprocal matrix of pair-wise comparisons. It has been shown that the entry-wise geometric mean of all columns is efficient for any reciprocal matrix. Here, by combining some new basic observations with some known theory, we (1) give a method for inductively generating large collections of efficient vectors, and (2) show that the entry-wise geometric mean of any collection of distinct columns of a reciprocal matrix is efficient. We study numerically, using different measures, the performance of these geometric means in approximating the reciprocal matrix by a consistent matrix. We conclude that, as a general method to be chosen independently of the data, the geometric mean of all columns performs well when compared with the geometric mean of proper subsets of columns.
1 Introduction
A method used in decision-making and frequently discussed in the literature is the Analytic Hierarchy Process (AHP), suggested by Saaty (1977, 1980). Several works since then have developed and discussed many aspects of the method. See the surveys (Choo & Wedley, 2004; Ishizaka & Labib, 2011; Zeleny, 1982). A key element of the method is the notion of pair-wise comparison (PC) matrix. An n-by-n positive matrix \(A=[a_{ij}]\) is called a PC matrix if, for all \(1\le i,j\le n,\)

\[a_{ji}=\frac{1}{a_{ij}}.\]
Each diagonal entry of a PC matrix is 1. We refer to the set of all such matrices as \({\mathcal{P}\mathcal{C}}_{n}\). Often, we refer to these matrices as reciprocal matrices, as do other authors.
The i, j entry of a reciprocal matrix is viewed as a pair-wise ratio comparison between alternatives i and j, and the intent is to deduce an ordering of the alternatives from it. If the reciprocal matrix is consistent (transitive), i.e. \(a_{ij}a_{jk}=a_{ik}\) for all triples i, j, k, there is a unique natural cardinal ordering, given by the relative magnitudes of the entries in any column. However, in human judgements consistency is unlikely. Inconsistency can also be an inherent feature of objective datasets (Bozóki et al. 2016; Chao et al. 2018; Csató 2013; Petróczy 2021; Petróczy and Csató 2021). Then, there will be many vectors that might be deduced from a reciprocal matrix A. Let w be a positive n-vector and \(w^{(-T)}\) the transpose of its component-wise inverse. We may try to approximate A by the consistent matrix \(W=ww^{(-T)},\) i.e., we wish to choose w so that \(W-A\) is small in some sense. We say that w is efficient for A if, for any other positive vector v and corresponding consistent matrix \(V=vv^{(-T)},\) the entry-wise inequality \(\left| V-A\right| \le \left| W-A\right| \) implies that v and w are proportional. (It follows from Lemma 5, given later, that this definition is equivalent to that of other authors for the notion of efficiency (Blanquero et al., 2006; Bozóki & Fülöp, 2018).) A historical perspective on this concept can be found in Ehrgott (2012), and recent variations of it can be seen, for example, in Bozóki (2014) and Bozóki and Fülöp (2018). Clearly, a consistent approximation to a reciprocal matrix A should be based upon a vector efficient for A. If A is not itself consistent, the set \(\mathcal {E}(A)\) of efficient vectors for A will include many vectors not proportional to each other. For simplicity, we projectively view proportional efficient vectors as the same, as they produce the same consistent matrix.
The set \(\mathcal {E}(A)\) is, however, at least connected (Blanquero et al., 2006), but an explicit characterization of the entire set seems, in general, difficult to give. In Cruz et al. (2021) and Furtado (2023), the set of efficient vectors was completely described for reciprocal matrices obtained from consistent ones by modifying at most two entries above the main diagonal (and the corresponding reciprocal entries); in particular, this yields a description of the efficient vectors for any 3-by-3 reciprocal matrix. Recently, in Szádoczki and Bozóki (2023), the authors studied the set \(\mathcal {E}(A)\) from a geometric point of view when A is of size 3 or 4. Several methods to decide whether a vector is efficient have been developed, and algorithms to improve an inefficient vector have been provided (see (Anholcer & Fülöp, 2019; Blanquero et al., 2006; Bozóki, 2014; Bozóki & Fülöp, 2018) and the references therein).
Despite some criticism (Dyer, 1990a, b; Johnson et al., 1979; Saaty, 2003), one of the most used methods to approximate a reciprocal matrix A by a consistent matrix is the one proposed by Saaty (1977, 1980), in which the consistent matrix is based upon the right Perron eigenvector of A, a positive eigenvector associated with the spectral radius of A (Horn & Johnson, 1985). The efficiency of the Perron eigenvector for certain classes of reciprocal matrices has been shown (Ábele-Nagy & Bozóki, 2016; Ábele-Nagy et al., 2018; Fernandes & Furtado, 2022), though examples of reciprocal matrices for which this vector is inefficient are also known (Blanquero et al., 2006; Bozóki, 2014; Fernandes & Furtado, 2022), even matrices with an arbitrarily small level of inconsistency (Bozóki, 2014). Another method to approximate A by a consistent matrix, with a strong axiomatic background (Barzilai 1997; Csató 2018, 2019; Fichtner 1986; Lundy et al. 2017), is based upon the geometric mean of all columns of A, which is known to be an efficient vector for A (Blanquero et al., 2006). Many other proposals for approximating A by a consistent matrix have been made in the literature (for comparisons of different methods see, for example, Anholcer et al. (2011), Bajwa et al. (2008), Bozóki (2008), Choo and Wedley (2004), Dijkstra (2013), Fichtner (1986), Golany and Kress (1993) and Kułakowski et al. (2022)).
Our main contribution in this paper consists of providing new classes of efficient vectors for a general reciprocal matrix A. One way we do this is by extending an efficient vector for a principal submatrix of A to an efficient vector for A. Unless A is consistent, this extension allows us to obtain infinitely many projectively distinct efficient vectors for A. We also show that any of the \(2^{n}-2\) geometric means of proper subsets of the columns of \(A\in \mathcal{P}\mathcal{C}_{n}\) is efficient for A, extending the known result of the efficiency of the geometric mean of all columns. Recently, motivated by the result in this paper and using more involved arguments, we have shown that any weighted geometric mean of the columns of a reciprocal matrix is efficient (Furtado & Johnson, 2024).
Before summarizing in more detail what we do here, we mention some additional notation and terminology. The Hadamard (or entry-wise) product of two vectors (of the same size) or matrices (of the same dimension) is denoted by \(\circ .\) For example, if \(A,B\in \mathcal{P}\mathcal{C}_{n}\) then \(A\circ B\in \mathcal{P}\mathcal{C}_{n}\), and, similarly, the n-by-n consistent matrices are closed under the Hadamard product. We use superscripts in parentheses to denote an exponent applied to all entries of a vector or a matrix. For example,

\[\left( u_{1}\circ u_{2}\circ \cdots \circ u_{k}\right) ^{\left( \frac{1}{k}\right) }\]

is the (Hadamard) geometric mean of positive vectors \(u_{1},\ldots ,u_{k}\) of the same size. This column geometric mean is what is called the row geometric mean, for instance, in Blanquero et al. (2006).
For an n-by-n matrix \(A=[a_{ij}],\) we partition A by columns as \(A=\left[ a_{1},\text { }a_{2},\ldots ,\text { }a_{n}\right] .\) The principal submatrix determined by deleting (by retaining) the rows and columns indexed by a subset \(K\subseteq \{1,\ldots ,n\}\) is denoted by A(K) (A[K]); we abbreviate \(A(\{i\})\) as A(i). Note that if A is reciprocal (consistent) then so is A(i).
In Sect. 2 we give some (mostly known) background that we will use and make some related observations. In particular, we present the relationship between efficiency and strong connectivity of a certain digraph and state the efficiency of the Hadamard geometric mean of all columns of a reciprocal matrix. In Sect. 3 we give some (mostly new) additional background that will also be helpful. In Sect. 4 we show explicitly how to extend efficient vectors for A(i) to efficient vectors for the reciprocal matrix A. This leads to an algorithm initiated by any \(A[\{i,j\}]\), \(i\ne j,\) to produce a subset of \(\mathcal {E}(A).\) This subset may not be all of \(\mathcal {E}(A)\) as truncation of an efficient vector for A may not give one for the corresponding principal submatrix. And we may get different subsets by starting with different i, j. In Sect. 5 we study the relationship between efficient vectors for a reciprocal matrix A and its columns. As mentioned, any column of a consistent matrix generates that consistent matrix and, so, is efficient for it. Similarly, any column of a reciprocal matrix is efficient for it (Lemma 10), as is the geometric mean of any subset of the columns (Theorem 12). In Sect. 6, we study numerically, using different measures, the performance of these efficient vectors in approximating A by a consistent matrix and, in some cases, compare them, from this point of view, with the Perron eigenvector. We will see that the geometric mean of all columns can be outperformed by the geometric mean of other collections of columns, though, in general, it seems to produce results relatively close to the best among the other geometric means. The geometric mean of all columns seems to have better performance for matrices with a lower level of inconsistency. Also, in the simulations, it is the one that performs better with higher frequency (at least according to one of our measures). 
Thus, as a general method to be chosen in advance to derive priorities, we can conclude that the geometric mean of all columns seems to be the best choice among all the geometric means of subsets of columns. We also show by example that \(\mathcal {E}(A)\) is not closed under geometric mean (Sect. 5). Finally, in Sect. 7 we give some conclusions.
2 Technical background
We start with some known results that are relevant for this work. First, it is important to know how \(\mathcal {E}(A)\) changes when A is subjected to either a positive diagonal similarity or a permutation similarity, or both (a monomial similarity).
Lemma 1
Cruz et al. (2021); Furtado (2023) Suppose that \(A\in \mathcal{P}\mathcal{C}_{n}\) and \(w\in \mathcal {E}(A).\) If D is a positive diagonal matrix (P is a permutation matrix), then \(DAD^{-1}\in \mathcal{P}\mathcal{C}_{n}\) and \(Dw\in \mathcal {E}(DAD^{-1})\) (\(PAP^{T}\in \mathcal{P}\mathcal{C}_{n}\) and \(Pw\in \mathcal {E} (PAP^{T})\)).
Next we define a directed graph (digraph) associated with a matrix \(A\in \mathcal{P}\mathcal{C}_{n}\) and a positive n-vector w, which is helpful in studying the efficiency of w for A. For \(w=\left[ \begin{array}{ccc} w_{1}&\cdots&w_{n} \end{array} \right] ^{T}\), we denote by G(A, w) the directed graph (digraph) whose vertex set is \(\{1,\ldots ,n\}\) and whose directed edge set is

\[\left\{ i\rightarrow j:\ \frac{w_{i}}{w_{j}}\ge a_{ij},\ i\ne j\right\} .\]
In Blanquero et al. (2006) the authors proved that the efficiency of w can be determined from G(A, w).
Theorem 2
Blanquero et al. (2006) Let \(A\in \mathcal{P}\mathcal{C}_{n}\). A positive n-vector w is efficient for A if and only if G(A, w) is a strongly connected digraph, that is, for all pairs of vertices i, j, with \(i\ne j,\) there is a directed path from i to j in G(A, w).
Recall (Horn & Johnson, 1985) that G(A, w) is strongly connected if and only if \((I_{n}+L)^{n-1}\) is positive. Here \(I_{n}\) is the identity matrix of order n and \(L=[l_{ij}]\) is the adjacency matrix of G(A, w), that is, \(l_{ij}=1\) if \(i\rightarrow j\) is an edge in G(A, w), and \(l_{ij}=0\) otherwise.
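For illustration, this test can be sketched in Python (the authors' experiments used Octave; the function names here are ours). We adopt the convention that \(i\rightarrow j\) is an edge of G(A, w) when \(w_{i}/w_{j}\ge a_{ij}\); a zero i, j entry of D(A, w) then gives edges in both directions, and since reversing every edge of a digraph preserves strong connectivity, the opposite orientation convention yields the same efficiency test.

```python
def digraph_adjacency(A, w):
    # Adjacency matrix of G(A, w): edge i -> j whenever w_i / w_j >= a_ij.
    n = len(A)
    return [[1 if i != j and w[i] / w[j] >= A[i][j] else 0 for j in range(n)]
            for i in range(n)]

def is_strongly_connected(L):
    # G is strongly connected iff (I_n + L)^(n-1) is entry-wise positive.
    n = len(L)
    M = [[L[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    P = [row[:] for row in M]
    for _ in range(n - 2):  # after the loop, P = (I_n + L)^(n-1)
        P = [[sum(P[i][k] * M[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return all(P[i][j] > 0 for i in range(n) for j in range(n))
```

For instance, for \(A=Z_{3}(9)\) the vector \((1,1,1)^{T}\) passes the test while \((1,2,1)^{T}\) does not, in accordance with Theorem 4 below.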
In Blanquero et al. (2006), it was shown that the geometric mean of all columns of a reciprocal matrix A is an efficient vector for A. This result comes from the fact that the geometric mean minimizes the logarithmic least squares objective function (see also (Crawford & Williams, 1985)).
Theorem 3
Blanquero et al. (2006) If \(A\in \mathcal{P}\mathcal{C}_{n},\) then

\[\left( a_{1}\circ a_{2}\circ \cdots \circ a_{n}\right) ^{\left( \frac{1}{n}\right) }\in \mathcal {E}(A).\]
In Cruz et al. (2021), all the efficient vectors for a simple perturbed consistent matrix, that is, a reciprocal matrix obtained from a consistent one by modifying one entry above the main diagonal and the corresponding reciprocal entry, were described. Let \(Z_{n}(x),\) with \(x>0,\) be the simple perturbed consistent matrix in \(\mathcal{P}\mathcal{C}_{n}\) with all entries equal to 1 except those in positions 1, n and n, 1, which are x and \(\frac{1}{x},\) respectively.
For any simple perturbed consistent matrix \(A\in \mathcal{P}\mathcal{C}_{n},\) there is a positive diagonal matrix D and a permutation matrix P such that

\[DPAP^{T}D^{-1}=Z_{n}(x)\]
for some \(x>0.\) Taking into account Lemma 1, an n-vector w is efficient for A if and only if DPw is efficient for \(Z_{n}(x).\) For this reason, we focused on the description of the efficient vectors for \(Z_{n}(x),\) as the efficient vectors for a general simple perturbed consistent matrix can be obtained from them using Lemma 1.
Theorem 4
Cruz et al. (2021) Let \(n\ge 3\), \(x>0\) and \(w=\left[ \begin{array}{cccc} w_{1}&\cdots&w_{n-1}&w_{n} \end{array} \right] ^{T}\) be a positive vector. Then w is efficient for \(Z_{n}(x)\) if and only if

\[w_{n}\le w_{i}\le w_{1}\le xw_{n},\quad i=2,\ldots ,n-1,\]

or

\[xw_{n}\le w_{1}\le w_{i}\le w_{n},\quad i=2,\ldots ,n-1.\]
Note that, if

\[A=\left[ \begin{array}{ccc} 1 & a_{12} & a_{13}\\ \frac{1}{a_{12}} & 1 & a_{23}\\ \frac{1}{a_{13}} & \frac{1}{a_{23}} & 1 \end{array} \right] ,\]

then \(DAD^{-1}=Z_{3}\left( \frac{a_{13}}{a_{12}a_{23}}\right) ,\) with

\[D={\text{diag}}\left( 1,\ a_{12},\ a_{12}a_{23}\right) .\]
In particular, any 3-by-3 reciprocal matrix is a simple perturbed consistent matrix, as this property is invariant under diagonal similarity.
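As a small numerical check of this reduction, the sketch below conjugates a generic 3-by-3 reciprocal matrix by the diagonal matrix \(D={\text{diag}}(1,a_{12},a_{12}a_{23})\), a choice of D that can be verified by hand (the code and names are ours, for illustration only):

```python
def reduce_to_Z3(a12, a13, a23):
    """Conjugate the 3-by-3 reciprocal matrix with upper entries
    a12, a13, a23 by D = diag(1, a12, a12*a23); the result is
    Z_3(x) with x = a13 / (a12 * a23)."""
    A = [[1.0, a12, a13],
         [1.0 / a12, 1.0, a23],
         [1.0 / a13, 1.0 / a23, 1.0]]
    d = [1.0, a12, a12 * a23]
    # (D A D^{-1})_{ij} = d_i * a_ij / d_j
    return [[d[i] * A[i][j] / d[j] for j in range(3)] for i in range(3)]
```

For example, `reduce_to_Z3(2, 5, 3)` returns (up to rounding) the matrix \(Z_{3}(5/6)\).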
3 Additional facts on efficiency
From the following result we may conclude that the definition of efficient vector given in Sect. 1 is equivalent to the one in Blanquero et al. (2006) and Bozóki and Fülöp (2018).
Here and throughout, if \(A\in \mathcal{P}\mathcal{C}_{n}\) and w is a positive n-vector, we denote

\[D(A,w)=ww^{(-T)}-A.\]
By |D(A, w)| we mean the entry-wise absolute value of D(A, w).
Lemma 5
Let \(A\in \mathcal{P}\mathcal{C}_{n}\) and v, w be positive n-vectors. Then, \(\left| D(A,w)\right| =\left| D(A,v)\right| \) if and only if v and w are proportional.
Proof
The “if” claim is trivial. Next we show the "only if" claim. Let \(w=\left[ \begin{array}{ccc} w_{1}&\cdots&w_{n} \end{array} \right] ^{T}\) and \(v=\left[ \begin{array}{ccc} v_{1}&\cdots&v_{n} \end{array} \right] ^{T}.\) Let \(i,j\in \{1,\ldots ,n\}\) with \(i\ne j.\) Suppose that

\[\left| \frac{w_{i}}{w_{j}}-a_{ij}\right| =\left| \frac{v_{i}}{v_{j}}-a_{ij}\right| .\qquad (1)\]

If

\[\left( \frac{w_{i}}{w_{j}}-a_{ij}\right) \left( \frac{v_{i}}{v_{j}}-a_{ij}\right) \ge 0,\]

then (1) implies \(\frac{w_{i}}{w_{j}}=\frac{v_{i}}{v_{j}}.\) If

\[\left( \frac{w_{i}}{w_{j}}-a_{ij}\right) \left( \frac{v_{i}}{v_{j}}-a_{ij}\right) <0,\]

then also

\[\left( \frac{w_{j}}{w_{i}}-a_{ji}\right) \left( \frac{v_{j}}{v_{i}}-a_{ji}\right) <0,\]

implying, from (1),

\[\frac{w_{i}}{w_{j}}+\frac{v_{i}}{v_{j}}=2a_{ij}\quad \text{and}\quad \frac{w_{j}}{w_{i}}+\frac{v_{j}}{v_{i}}=\frac{2}{a_{ij}}.\]

Combining these two equations gives \(\frac{w_{i}}{w_{j}}\frac{v_{i}}{v_{j}}=a_{ij}^{2},\) so

\[\frac{w_{i}}{w_{j}}=\frac{v_{i}}{v_{j}}=a_{ij}.\]

Condition \(\frac{w_{i}}{w_{j}}=\frac{v_{i}}{v_{j}},\) for all \(i,j\in \{1,\ldots ,n\},\) implies w and v proportional. \(\square \)
Note from the proof of Lemma 5 that (1) holds for a pair i, j if and only if \(\frac{w_{i}}{w_{j}}=\frac{v_{i}}{v_{j}}.\)
We close this section with a topological property of \(\mathcal {E}(A)\).
Theorem 6
For any \(A\in \mathcal{P}\mathcal{C}_{n},\) \(\mathcal {E}(A)\) is a closed set (as a subset of the set of positive n-vectors).
Proof
We verify this by showing that the inefficient vectors, in the complement of \(\mathcal {E}(A),\) form an open set, by appealing to Theorem 2. Suppose that \(v\notin \mathcal {E}(A).\) Then the graph G(A, v) is not strongly connected. Let \({\widetilde{v}}\) be close enough to v (i.e. \({\widetilde{v}}\) lies in an open ball about v, whose radius is positive, but as small as we like). Then, if \(i\rightarrow j\) is not an edge of G(A, v), it is not an edge of \(G(A,{\widetilde{v}}).\) Thus, \(G(A,{\widetilde{v}})\) has no more edges (under inclusion) than G(A, v). Since the latter is not strongly connected, the former also is not, so that \({\widetilde{v}}\notin \mathcal {E}(A).\)
\(\square \)
We also note that, if \(w\in \mathcal {E}(A)\) and the matrix D(A, w) has no 0 off-diagonal entries, then \({\widetilde{w}}\in \mathcal {E}(A)\) for any vector \({\widetilde{w}}\) close enough to w, and \(w\in \mathcal {E}({\widetilde{A}})\) for any reciprocal matrix \({\widetilde{A}}\) close enough to A.
4 Inductive construction of efficient vectors
Suppose that \(A\in \mathcal{P}\mathcal{C}_{n}\) and that \(w\in \mathcal {E}(A(n)).\) Then G(A(n), w) is strongly connected. Can w be extended to an efficient vector for A, and, if so, how? For a positive scalar x, the vector \(w_{x}:=\left[ \begin{array}{c} w\\ x \end{array} \right] \in \mathcal {E}(A)\) if and only if \(G\left( A,w_{x}\right) \) is strongly connected. But, since the subgraph induced by vertices \(1,2,\ldots ,n-1\) of \(G\left( A,w_{x}\right) \) is G(A(n), w) and the latter is strongly connected, \(G\left( A,w_{x}\right) \) is strongly connected if and only if there is at least one edge from vertex n to vertices in G(A(n), w) and also at least one edge from the latter to n (see Proposition 3 in Cruz et al. (2021) and its proof). Since the vector of the first \(n-1\) entries of the last column of \(D\left( A,w_{x}\right) \) is \(\frac{1}{x}w\) less the vector of the first \(n-1\) entries of \(a_{n}\) (the last column of A), there are such edges if and only if this difference vector has a 0 entry or both positive and negative entries. This means that among \(\frac{w_{i}}{x}-a_{in},\) \(i=1,\ldots ,n-1,\) there are both nonnegative and nonpositive numbers. We restate this as
Theorem 7
For \(A\in \mathcal{P}\mathcal{C}_{n}\) and \(w\in \mathcal {E}(A(n)),\) the vector

\[w_{x}=\left[ \begin{array}{c} w\\ x \end{array} \right] \in \mathcal {E}(A)\]

if and only if the scalar x satisfies

\[\min _{1\le i\le n-1}\frac{w_{i}}{a_{in}}\le x\le \max _{1\le i\le n-1}\frac{w_{i}}{a_{in}}.\]
Of course, the above interval is nonempty. This leads to a natural algorithm to construct a large subset of \(\mathcal {E}(A)\) for \(A\in \mathcal{P}\mathcal{C}_{n}.\)
Choose the upper left 2-by-2 principal submatrix \(A[\{1,2\}]\) of A. It is consistent and, up to a factor of scale, has only one efficient vector \(w[\{1,2\}].\) Now consider each extension, allowed by the possibly infinitely many ways given in Theorem 7, to an efficient vector for \(A[\{1,2,3\}]\). This gives the set \(w[\{1,2,3\}]\subseteq \mathcal {E} (A[\{1,2,3\}]).\) Now, continue extending each vector in \(w[\{1,2,3\}]\) to an element of \(\mathcal {E}(A[\{1,2,3,4\}])\) in the same way, and so on. This terminates in a subset \(w[\{1,2,\ldots ,n\}]\subseteq \mathcal {E}(A).\)
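A sketch of this inductive construction (function names are ours): the admissible values of the new entry x form the closed interval between the minimum and maximum of \(w_{i}/a_{i,k+1}\), as follows from the sign condition derived before Theorem 7; the rule used here to pick a point inside the interval is an arbitrary illustrative choice.

```python
def extend_interval(A, w):
    # Given w efficient for the leading principal submatrix of order k,
    # [w; x] is efficient for the order k+1 leading principal submatrix
    # iff min_i w_i / a_{i,k+1} <= x <= max_i w_i / a_{i,k+1}.
    k = len(w)
    ratios = [w[i] / A[i][k] for i in range(k)]
    return min(ratios), max(ratios)

def build_efficient_vector(A, pick=lambda lo, hi: (lo * hi) ** 0.5):
    # Start from the (projectively unique) efficient vector of the
    # upper-left 2-by-2 submatrix and append one entry at a time,
    # choosing any point of the admissible interval (here its geometric
    # midpoint -- an arbitrary choice for illustration).
    w = [1.0, 1.0 / A[0][1]]
    for _ in range(2, len(A)):
        lo, hi = extend_interval(A, w)
        w.append(pick(lo, hi))
    return w
```

For \(A=Z_{3}(9)\) and \(w=(1,1)^{T}\), the admissible interval is \([1/9,1]\), in agreement with Theorem 4.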
We make two important observations. First, we may instead start with some other 2-by-2 principal submatrix \(A[\{i,j\}],\) \(i\ne j,\) and proceed similarly, either by inserting the new entry of the next efficient vector in the appropriate position, or by placing \(A[\{i,j\}]\) in the upper left 2-by-2 submatrix, via permutation similarity, and proceeding in exactly the same way. We note that starting in two different positions may produce different terminal sets (Example 8), and the union of all possible terminal sets is contained in \(\mathcal {E}(A).\)
Second, \(w[\{1,2,\ldots ,n\}]\) may be a proper subset of \(\mathcal {E}(A),\) as truncation of a vector (deletion of an entry) from an efficient vector for A may not give an efficient vector for the corresponding principal submatrix (see Example 8).
Example 8
Let
The efficient vectors for \(A[\{1,2\}]\) are proportional to
By Theorem 7, the vectors of the form
with
are efficient for A (and, of course, all positive vectors proportional to them).
The efficient vectors for \(A[\{1,3\}]\) are proportional to
By Theorem 7, the vectors of the form
with
are efficient for A.
The efficient vectors for \(A[\{2,3\}]\) are proportional to
By Theorem 7, the vectors of the form
with
are efficient for A.
Note that, by Theorem 4,
For example, the vector \(\left[ \begin{array}{ccc} \frac{4}{3}&\frac{7}{6}&1 \end{array} \right] ^{T}\) is efficient for A, though it does not belong to the set of vectors determined above, as no vector obtained from it by deleting one entry is efficient for the corresponding 2-by-2 principal submatrix.
There are cases in which we know all the efficient vectors for a larger submatrix and then we can start our building process with this submatrix. In fact, taking into account Theorem 4, all efficient vectors for a 3-by-3 reciprocal matrix are known, as such a matrix is a simple perturbed consistent matrix. Thus, it is always possible to start the process from a 3-by-3 principal submatrix.
Example 9
Consider the matrix
By Theorem 4, the efficient vectors for \(A[\{1,2,3\}]\) are the vectors of the form
with \(w_{3}\le w_{2}\le w_{1}\le 9w_{3}.\) By Theorem 7, the vectors of the form
with
are efficient for \(A[\{1,2,3,4\}].\) Again by Theorem 7, the vectors of the form
with
are efficient for A. For instance, the vectors
with \(\frac{3}{8}\le w_{5}\le 6,\) are efficient for A. Moreover, these are the only efficient vectors obtained with the given first four entries.
We observe that, if \(A\in \mathcal{P}\mathcal{C}_{n}\) is an (inconsistent) simple perturbed consistent matrix, then A has a principal 3-by-3 (inconsistent) simple perturbed consistent submatrix B. If we start the inductive construction of efficient vectors for A with the submatrix B, for which \(\mathcal {E}(B)\) is known by Lemma 1 and Theorem 4, then we obtain \(\mathcal {E}(A).\) This fact follows from Corollary 9 in Cruz et al. (2021), taking into account that, by Lemma 1, and Remark 4.6 in Furtado (2023), we may focus on \(A=Z_{n}(x),\) for some \(x>0.\)
Similarly, if \(A\in \mathcal{P}\mathcal{C}_{n}\) is a double perturbed consistent matrix (that is, \(A\ \)is obtained from a consistent matrix by modifying two entries above the diagonal and the corresponding reciprocal entries), in which no two perturbed entries lie in the same row or column, then A has a principal 4-by-4 double perturbed consistent submatrix B of the same type and, by Theorem 4.2 in Furtado (2023) and Lemma 1, \(\mathcal {E}(B)\) is known. By Corollary 4.5 in Furtado (2023), if we start the inductive construction of efficient vectors with B, then again we obtain \(\mathcal {E}(A).\)
Of course, in these simple and double perturbed consistent cases, all the efficient vectors for \(A\in \mathcal{P}\mathcal{C}_{n}\) are already known (Theorem 4, and Theorem 4.2 in Furtado (2023)).
5 Columns of a reciprocal matrix
Previously, it has been noted (Theorem 3) that the Hadamard geometric mean of all columns of \(A\in \mathcal{P}\mathcal{C}_{n}\) is efficient for A. Interestingly, each individual column of A is efficient.
Lemma 10
Let \(A\in \mathcal{P}\mathcal{C}_{n}.\) Then any column of A lies in \(\mathcal {E}(A).\)
Proof
Let \(a_{j}\) be the j-th column of A. Then the j-th column of \(D(A,a_{j})\) has entries \(\frac{a_{ij}}{1}-a_{ij}=0\). Hence, the edges \(i\rightarrow j\) and \(j\rightarrow i\) are in \(G(A,a_{j})\), for any \(1\le i\le n\), with \(i\ne j\), and, therefore, \(G(A,a_{j})\) is strongly connected, verifying that \(a_{j}\) is efficient, by Theorem 2. \(\square \)
Further, the geometric mean of any subset of the columns of a reciprocal matrix A also lies in \(\mathcal {E}(A).\) To prove this result, we use the following lemma.
Lemma 11
Let \(A\in \mathcal{P}\mathcal{C}_{n},\) \(D={\text{diag}}(d_{1},\ldots ,d_{n})\) be a positive diagonal matrix and \(1\le s\le n.\) If w is the geometric mean of s columns of A then Dw is a positive multiple of the geometric mean of the corresponding s columns of \(DAD^{-1}.\)
Proof
By a possible permutation similarity and taking into account Lemma 1, suppose, without loss of generality, that w is the geometric mean of the first s columns of A. (Note that, if w is the geometric mean of a subset of the columns of a reciprocal matrix A, then Pw is the geometric mean of the corresponding columns of \(PAP^{T}\), for a permutation matrix P.) The i-th entry of Dw is \(d_{i}\Pi _{j=1}^{s} a_{ij}^{\frac{1}{s}}\). On the other hand, the i-th entry of the geometric mean v of the first s columns of \(DAD^{-1}\) is \(\Pi _{j=1}^{s}\left( \frac{d_{i}a_{ij}}{d_{j}}\right) ^{\frac{1}{s}}=d_{i}\Pi _{j=1}^{s}\left( \frac{a_{ij}}{d_{j}}\right) ^{\frac{1}{s}}.\) Thus, the quotient of the i-th entries of Dw and v is \(\Pi _{j=1}^{s}(d_{j})^{\frac{1}{s}},\) which does not depend on i, implying the claim. \(\square \)
Theorem 12
Let \(A\in \mathcal{P}\mathcal{C}_{n}.\) Then the geometric mean of any collection of distinct columns of A lies in \(\mathcal {E}(A).\)
Proof
Let \(1\le s\le n.\) We show that the geometric mean \(w_{A}\) of s distinct columns of A is efficient for A. The proof is by induction on n. For \(n=2,\) the result is straightforward. Suppose that \(n>2.\) If \(s=1\) or \(s=n,\) the result follows from Lemma 10 or Theorem 3, respectively. Suppose that \(1<s<n.\) By Lemmas 1 and 11, we may and do assume that the s columns of A are the first ones, and the entries in the last column and in the last row of A are all equal to 1, that is,

\[A=\left[ \begin{array}{cc} B & {\textbf{e}}\\ {\textbf{e}}^{T} & 1 \end{array} \right] ,\]

where \({\textbf{e}}\) is the \((n-1)\)-vector with all entries equal to 1 and \(B\in \mathcal{P}\mathcal{C}_{n-1}\). Let \(b_{1},\ldots ,b_{s}\) be the first s columns of B, so that, for \(j=1,\ldots ,s,\)

\[a_{j}=\left[ \begin{array}{c} b_{j}\\ 1 \end{array} \right] .\]

We have

\[w_{A}=\left[ \begin{array}{c} w_{B}\\ 1 \end{array} \right] ,\]

in which \(w_{B}\) is the geometric mean of columns \(b_{1},\ldots ,b_{s}.\)
in which \(w_{B}\) is the geometric mean of columns \(b_{1},\ldots ,b_{s}.\) By the induction hypothesis, \(w_{B}\) is efficient for B so that, by Theorem 2, \(G(B,w_{B})\) is strongly connected. Thus, \(G(A,w_{A})\) is strongly connected if and only if there is at least one edge from vertex n to vertices in \(G(B,w_{B})\) and also at least one edge from the latter to n (see the observation before Theorem 7). Then, \(G(A,w_{A})\) is strongly connected if and only if \(w_{B}-{\textbf{e}}\) is neither strictly positive nor strictly negative. The product of the first s entries of \(w_{B}\) is \(l^{\frac{1}{s}}\), where l is the product of the entries of \(B[\{1,\ldots ,s\}].\) Since this matrix is reciprocal, \(l=1.\) Thus, the vector formed by the first s entries of \(w_{B}\) is neither strictly greater than \({\textbf{e}}\) nor strictly less than \({\textbf{e}}\) (entry-wise), implying that \(G(A,w_{A})\) is strongly connected. \(\square \)
We observe that we have \(2^{n}-1\) (nonempty) distinct subsets of columns of \(A\in \mathcal{P}\mathcal{C}_{n}\) (not necessarily corresponding to different geometric means).
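Theorem 12 can be checked numerically. The sketch below (our code, not the paper's) forms the geometric mean of an arbitrary subset of columns and tests efficiency via strong connectivity of G(A, w), here by depth-first search; a small tolerance absorbs the rounding in the ties that necessarily occur.

```python
import math

def subset_geometric_mean(A, cols):
    # Entry-wise geometric mean of the chosen columns (0-based indices).
    s = len(cols)
    return [math.prod(A[i][j] for j in cols) ** (1.0 / s)
            for i in range(len(A))]

def is_efficient(A, w, tol=1e-9):
    # Theorem 2: w is efficient for A iff the digraph with an edge
    # i -> j whenever w_i / w_j >= a_ij (up to rounding) is strongly
    # connected; we check reachability from every vertex.
    n = len(A)
    adj = [[j for j in range(n)
            if j != i and w[i] / w[j] >= A[i][j] - tol] for i in range(n)]
    def reaches_all(src):
        seen, stack = {src}, [src]
        while stack:
            for j in adj[stack.pop()]:
                if j not in seen:
                    seen.add(j)
                    stack.append(j)
        return len(seen) == n
    return all(reaches_all(i) for i in range(n))
```

Running this over every nonempty subset of columns of a (made-up) 4-by-4 reciprocal matrix confirms efficiency in all cases, as the theorem guarantees.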
The sets of efficient vectors for matrices in \(\mathcal{P}\mathcal{C}_{2}\) (any 2-by-2 reciprocal matrix is consistent) and in \(\mathcal{P}\mathcal{C}_{3}\) (any 3-by-3 reciprocal matrix is a simple perturbed consistent matrix) are closed under geometric means. In the latter case, this follows from Lemma 1 and the facts that a matrix in \(\mathcal{P}\mathcal{C}_{3}\) is monomial similar to \(Z_{3}(x),\) for some \(x>0\), and, by Theorem 4, the set of efficient vectors for \(Z_{3}(x)\) is closed under geometric mean. However, the set of efficient vectors for matrices in \(\mathcal{P}\mathcal{C}_{n},\) with \(n>3,\) may not be closed under geometric mean, as the next example illustrates.
Example 13
Let A be the matrix in (2). Let
the 4-by-4 principal submatrix of A obtained by deleting the 5-th row and column. Taking into account Example 9, the vectors
are efficient for \(A^{\prime }.\) However, the vector \(\left( w\circ v\right) ^{\left( \frac{1}{2}\right) }\) is not efficient for \(A^{\prime }\) as the first three entries of the last column of \(D(A^{\prime },\left( w\circ v\right) ^{\left( \frac{1}{2}\right) })\) are positive and, therefore, \(G(A^{\prime },(w\circ v)^{\left( \frac{1}{2}\right) })\) is not strongly connected.
6 Numerical experiments
We next give numerical examples to compare the geometric means of the vectors in different proper subsets of columns of a reciprocal matrix A with the geometric mean of all columns of A, denoted here by \(w_{C},\) a vector proposed by several authors to obtain a consistent matrix approximating A. Recall from Sect. 5 that all these vectors are efficient for A. We take \(\left\| D(A,w)\right\| _{1},\) the sum of all entries of \(\left| D(A,w)\right| ,\) as a measure of effectiveness of \(w\in \mathcal {E}(A),\) as well as \(\left\| D(A,w)\right\| _{2},\) the Frobenius norm of D(A, w). Recall that, for an n-by-n matrix \(B=[b_{ij}],\) we have

\[\left\| B\right\| _{1}=\sum _{i,j=1}^{n}\left| b_{ij}\right| \quad \text{and}\quad \left\| B\right\| _{2}=\left( \sum _{i,j=1}^{n}b_{ij}^{2}\right) ^{\frac{1}{2}}.\]
Other measures (norms) are possible. For comparison, we also consider the case in which w is the Perron eigenvector of A, denoted by \(w_{P}\), as it is one of the most used vectors to estimate a consistent matrix close to A. Our experiments were done using the software Octave version 8.3.0.
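In Python (rather than the Octave the authors used), the two measures can be computed as follows; `residual` is our name for D(A, w), taken here with the sign \(ww^{(-T)}-A\), which is immaterial for the norms:

```python
def residual(A, w):
    # D(A, w): difference between the consistent matrix generated by w
    # and A.
    n = len(A)
    return [[w[i] / w[j] - A[i][j] for j in range(n)] for i in range(n)]

def norm1(B):
    # ||B||_1: sum of the absolute values of all entries.
    return sum(abs(x) for row in B for x in row)

def norm2(B):
    # ||B||_2: Frobenius norm, the square root of the sum of squares.
    return sum(x * x for row in B for x in row) ** 0.5
```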
We first make the comparisons for two given matrices, one 5-by-5 and the other 8-by-8, and then consider an experiment in which a large number of 5-by-5 reciprocal matrices is randomly generated following a method that does not control the level of inconsistency (Examples 14 and 15, and Experiment 16). In all these cases, it can be verified that the vector \(w_{C}\) is not always the best choice among the geometric means of subsets of columns, though in the simulations \(w_{C}\) is the one that is most frequently the best, and it seems to be close to the best in general.
Then, we consider matrices with a more controlled and lower level of inconsistency in Experiments 17 and 18, and Example 19. In Experiments 17 and 18, we generate random matrices of sizes 5-by-5 and 10-by-10, respectively, following the method suggested in Szádoczki et al. (2023) (which is different from the one used in Experiment 16). Again, \(w_{C}\) is not always the best choice among the geometric means of subsets of columns of the matrices, though in the simulations, when considering the Frobenius norm, it is the one that is most frequently the best (more frequently than in the simulations in which the level of inconsistency is not controlled). Regarding the 1-norm, close to consistency, it seems that the best results are attained when a single column is considered.
6.1 Matrices with uncontrolled level of inconsistency
Example 14
Consider the matrix
There are 31 nonempty distinct subsets of the set of columns of A. We identify each subset with a sequence of five binary digits, in which a 1 in position i means that the i-th column of A belongs to the subset, while a 0 means that it does not. The sequences are in increasing (numerical) order and by \(S_{i}\) we denote the subset of columns associated with the i-th sequence. Note that \(S_{31}\) is the set of all columns of A. By \(w_{i}\) we denote the geometric mean of the vectors in \(S_{i}.\)
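This binary indexing is easy to mechanize; a sketch (0-based column indices, function name ours):

```python
def subset_from_index(i, n=5):
    # Write i in n binary digits; a 1 in position k (reading left to
    # right) puts column k+1 of A in the subset S_i.  Returned column
    # indices are 0-based.
    bits = format(i, "0{}b".format(n))
    return [k for k, b in enumerate(bits) if b == "1"]
```

For example, i = 13 gives the sequence 01101, i.e., columns 2, 3, 5 of A, and i = 31 gives 11111, the set of all columns.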
In Table 1 we give the norms \(\left\| D(A,w_{i})\right\| _{1}\) and \(\left\| D(A,w_{i})\right\| _{2},\) \(i=1,\ldots ,31.\) In Table 2 we emphasize the results obtained for the geometric mean \(w_{C}\) of all columns, for the vectors that produce the smallest and the largest values of \(\left\| D(A,w_{i})\right\| _{1}\) and \(\left\| D(A,w_{i})\right\| _{2},\) and also consider the case of the Perron eigenvector \(w_{P}\) of A (which is efficient). Note that
It can be observed that, according to the considered measures, there are proper subsets of columns that produce better results than the Perron eigenvector and the set of all columns. In fact, from Table 2, the geometric mean of columns 2, 3, 5 (associated with the sequence 01101, that is, with \(S_{13}\)) is the one that performs best in terms of the 1-norm, when compared with the Perron vector and the geometric mean of any other subset of columns. Moreover, the geometric means of the subsets of columns associated with \(S_{i}\), for \(i=1,5,10,13,16,17,20,21,29,\) also perform better than \(w_{C}\) and \(w_{P}\) (see Table 1). In addition, the geometric means of the subsets of columns associated with \(S_{i}\), for \(i=4,9,11,12,15,25,27,28,30,31,\) perform better than \(w_{P}\). For example, for \(i=5,\) the geometric mean of columns 3, 5 (associated with the sequence 00101, that is, with \(S_{5}\)) performs better in terms of the 1-norm than both \(w_{C}\) and \(w_{P}\). Similarly, from Table 2, the geometric mean of columns 2, 4 (associated with the sequence 01010, that is, with \(S_{10}\)) is the one that performs best in terms of the Frobenius norm, when compared with \(w_{P}\) and the geometric mean of any other subset of columns. For this norm, the geometric means of the subsets of columns associated with \(S_{i}\), for \(i=10,11,14,18,26,27,30,\) perform better than \(w_{P}\) and \(w_{C}\). Regarding \(w_{C}\), the geometric means of the subsets of columns associated with \(S_{i}\), for \(i=9,15,24,\) also perform better.
We summarize our results in Fig. 1, which compares all the results obtained. The x axis gives the index i of each subset \(S_{i}\) of columns; the y axis gives the values of \(\left\| D(A,w_{i})\right\| _{1}\) and \(\left\| D(A,w_{i})\right\| _{2}\) for the different vectors \(w_{i}.\) A line joining the values of each of these norms over the different subsets of columns is plotted, together with a horizontal line marking the value of each norm for the Perron eigenvector.
Example 15
Consider the matrix
Table 3 and Fig. 2 are the analogs of Table 2 and Fig. 1 for the 8-by-8 reciprocal matrix considered here. Note that in this case we have 255 different subsets \(S_{i}\) of the set of columns of A. Again, a proper subset of the columns produces better results than either all columns or the Perron vector (which is efficient for A). Note that
Experiment 16
We consider random reciprocal matrices \(A_{j}\in \mathcal{P}\mathcal{C}_{5}\), \(j=1,\ldots ,10000,\) by generating matrices \(B_{j}\) with entries from a uniform distribution in the interval (1, 10) and letting \(A_{j}=B_{j}\circ (B_{j}^{(-1)})^{T},\) where \(B_{j}^{(-1)}\) denotes the entry-wise inverse of \(B_{j}\) and \(\circ \) the Hadamard product. These matrices may present a significant level of inconsistency (Csató & Petróczy, 2021). For each \(i=1,\ldots ,31,\) we determine
in which \(w_{ij}\) is the geometric mean of the columns of \(A_{j}\) in the subset \(S_{i}\) (we identify the subsets \(S_{i}\) with sets of column indices, as introduced in Example 14). We note that none of the generated matrices \(A_{j}\) is consistent, so the denominators in each summand of \(p_{1}(i)\) and \(p_{2}(i)\) are nonzero. Also, each summand is at least 1, and is close to 1 when \(w_{ij}\) is close to the best among the geometric means of the columns of \(A_{j}\).
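The mechanics of this experiment can be sketched in a few lines of Python. The sketch below is ours, not code from the paper: the helper names are invented, we run 200 trials instead of 10000 for speed, and we assume, consistently with the surrounding description, that \(D(A,w)=A-ww^{(-T)}\), that the 1-norm is taken entry-wise, and that \(p_{1}(i)\) sums, over the trials, the ratio of the norm attained by \(S_{i}\) to the best norm over all subsets.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def random_reciprocal(n):
    # A_j = B_j o (B_j^{(-1)})^T with B_j uniform on (1, 10), as in Experiment 16.
    B = rng.uniform(1.0, 10.0, size=(n, n))
    A = B / B.T          # Hadamard product of B with the transposed entry-wise inverse
    np.fill_diagonal(A, 1.0)
    return A

def gm_columns(A, cols):
    # Entry-wise geometric mean of the selected columns of A.
    return np.exp(np.log(A[:, cols]).mean(axis=1))

def norm1_D(A, w):
    # ||D(A, w)||_1, assuming D(A, w) = A - w w^{(-T)} and an entry-wise 1-norm.
    return np.abs(A - np.outer(w, 1.0 / w)).sum()

n, trials = 5, 200    # 200 trials instead of 10000, for speed
subsets = [list(c) for r in range(1, n + 1)
           for c in itertools.combinations(range(n), r)]   # the 31 subsets S_i
p1 = np.zeros(len(subsets))              # p_1(i): sum of norm / best-norm ratios
n1 = np.zeros(len(subsets), dtype=int)   # n_1(i): how often S_i attains the best norm
for _ in range(trials):
    A = random_reciprocal(n)
    norms = np.array([norm1_D(A, gm_columns(A, c)) for c in subsets])
    p1 += norms / norms.min()
    n1[norms.argmin()] += 1
```

Each summand added to `p1` is at least 1 and equals 1 exactly for the best subset in that trial, matching the description above; the Frobenius-norm variants \(p_{2}(i)\) and \(n_{2}(i)\) are obtained by replacing `norm1_D` with the corresponding Frobenius norm.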
In Table 4 we display the values of \(p_{1}(i),\) \(i=1,\ldots ,31\) (rounded to integers). For each i, we also display the number \(n_{1}(i)\) of j’s for which \(\min \left\| D(A_{j},w)\right\| _{1}\), when w runs over all the geometric means of the subsets of columns of \(A_{j}\), is attained by the subset \(S_{i}\). In Table 5 we display the corresponding data, \(p_{2}(i)\) and \(n_{2}(i),\) for the Frobenius norm.
We can see that the minimum 1-norm and the minimum Frobenius norm of \(D(A_{j},w),\) as w runs over the geometric means of the subsets of columns of \(A_{j},\) are most of the time not attained by the geometric mean \(w_{C}\) of all columns of \(A_{j}\): less than \(7\%\) of the time for the 1-norm and less than \(19\%\) for the Frobenius norm. However, in both cases, \(w_{C}\) is the single geometric mean that most frequently attains the best value. We also emphasize the good behavior, with respect to the 1-norm, of the geometric means based on a single column. Moreover, the measures \(p_{1}(31)\) and \(p_{2}(31),\) associated with \(w_{C},\) are relatively close to 10000 and are the best among the corresponding values for all subsets of columns. Thus, in general, the performance of \(w_{C}\) is close to the best.
6.2 Matrices with controlled level of inconsistency
Experiment 17
We reproduce Experiment 16 but now generate the matrices \(A_{j}\in \mathcal{P}\mathcal{C}_{5}\), \(j=1,\ldots ,10000,\) following the method proposed in Szádoczki et al. (2023). In each trial j, we generate a 5-vector \(v_{j}\) with entries from a uniform distribution in (1, 9) and construct the consistent reciprocal matrix \(B_{j}=[b_{k\ell }]=v_{j}v_{j}^{(-T)}.\) We also generate a 5-by-5 matrix \(H_{j}=[h_{k\ell }]\) with entries from a uniform distribution in \((-1,1).\) We then construct the reciprocal matrix \(A_{j}=[a_{k\ell }]\) as follows: for all \(k,\ell \) such that \(b_{k\ell }>1\), or \(b_{k\ell }=1\) and \(k>\ell ,\) we let
and then let \(a_{\ell k}=a_{k\ell }^{-1}\) (so that \(A_{j}\) is reciprocal). According to Szádoczki et al. (2023), the matrices \(A_{j}\) have a low level of inconsistency. The results are presented in Tables 6 and 7 for the 1-norm and the Frobenius norm, respectively (\(n_{1}(i),\) \(n_{2}(i),\) \(p_{1}(i)\) and \(p_{2}(i)\) are defined as in Experiment 16).
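The construction just described can be sketched as follows. The displayed perturbation formula does not survive in this version of the text, so the additive perturbation used below (\(b_{k\ell }+h_{k\ell }\), clipped to stay positive) is only an illustrative stand-in for the exact rule of Szádoczki et al. (2023); the function name and the clipping are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def low_inconsistency_reciprocal(n=5, spread=1.0):
    # Consistent seed B = v v^{(-T)} with v uniform on (1, 9), as in Experiment 17.
    v = rng.uniform(1.0, 9.0, size=n)
    B = np.outer(v, 1.0 / v)
    np.fill_diagonal(B, 1.0)   # guard against floating-point drift on the diagonal
    # Perturbation matrix H with entries uniform on (-spread, spread);
    # spread = 1 mirrors Experiment 17, spread = 3/2 mirrors Experiment 18.
    H = rng.uniform(-spread, spread, size=(n, n))
    A = np.ones_like(B)
    for k in range(n):
        for l in range(n):
            # Perturb only the entries with b_kl > 1 (or b_kl = 1 and k > l) ...
            if B[k, l] > 1.0 or (B[k, l] == 1.0 and k > l):
                # ... additive stand-in for the (omitted) exact formula:
                A[k, l] = max(B[k, l] + H[k, l], 1e-6)
                A[l, k] = 1.0 / A[k, l]   # close up by reciprocity
    return A
```

Only one entry of each off-diagonal pair is perturbed directly, and its mirror entry is filled in by reciprocity, so the output is a reciprocal matrix whose distance to the consistent seed is controlled by `spread`.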
We can see that the minimum norm of \(D(A_{j},w),\) as w runs over the geometric means of the subsets of columns of \(A_{j},\) is attained by the geometric mean \(w_{C}\) of all columns less than \(5\%\) of the time for the 1-norm and less than \(32\%\) of the time for the Frobenius norm. In the latter case, \(w_{C}\) is the geometric mean that most frequently attains the best value. An interesting additional finding is that, for the 1-norm, the best behavior is attained when just one column is considered (followed by the case in which all columns are considered). We also observe that the measures \(p_{1}(31)\) and \(p_{2}(31)\) associated with \(w_{C}\) are the best among the corresponding ones for all subsets of columns, which means that \(w_{C}\) generally behaves well.
Experiment 18
In this experiment, we generated 1000 reciprocal matrices \(A_{j},\) now of size 10, following the same method as in Experiment 17, but with \(H_{j}\) having entries from a uniform distribution in \((-3/2,3/2),\) implying a modest level of inconsistency of \(A_{j}\) and a larger standard deviation of the inconsistency relative to Experiment 17 (see Figure 2 in [21]). Compared with the other \(2^{10}-2\) geometric means of proper subsets of columns, the geometric mean of all columns performed best only \(0.8\%\) of the time for the 1-norm of \(D(A_{j},w)\) and \(3.7\%\) of the time for the Frobenius norm. However, we obtained \(p_{1}(1023)=1044\) and \(p_{2}(1023)=1037\) (defined as in Experiment 16, now with 1000 matrices instead of 10000). This means that each summand of \(p_{1}(1023)\) and \(p_{2}(1023)\) is, on average, close to 1, which shows that the geometric mean of all columns generally performed close to the best among the geometric means of all subsets of columns.
Example 19
Consider the matrix
for \(0<x<1\) and \(y>0.\) Table 8 shows that, for \(y=1\) and small values of x (in which case the matrix A is close to consistent), the geometric mean of all columns and the Perron eigenvector perform well compared with the geometric means \(w_{i}\), \(i=1,\ldots ,31,\) of the 31 subsets of columns of A, with better results the smaller x is. For completeness, we also consider a large value of y (for which A has a higher level of inconsistency) and verify that in that case the results are worse. Table 8 suggests that the relative positions of the geometric mean of all columns and of the Perron vector improve with the level of consistency. While for \(y=1\) and \(x=0.1,0.3,0.5,\) the best behavior according to the Frobenius norm of D(A, w) is attained for \(w=w_{C},\) Table 9 shows that, for the 1-norm, a single column (column 1 or 3) performs better. We note that in all considered cases the Perron vector is efficient.
7 Conclusions
In the context of the Analytic Hierarchy Process, pairwise comparison matrices (PC matrices), also called reciprocal matrices, are used to rank different alternatives. In practice, the obtained reciprocal matrices are usually inconsistent, and a good consistent matrix approximating the reciprocal matrix should be obtained. A consistent matrix is uniquely determined by a positive vector (the vector of priorities, or weights). Many methods have been proposed in the literature to obtain the vector from which a consistent matrix approximating a given reciprocal matrix is constructed. Two of the most used choices are the Perron eigenvector of the reciprocal matrix and the Hadamard geometric mean of all its columns. An important property that the vector on which such a consistent matrix is based should satisfy is efficiency. It is known that the Hadamard geometric mean of all columns of a reciprocal matrix is efficient, while the Perron eigenvector does not always satisfy this property.
Here we give an algorithm to construct efficient vectors for a reciprocal matrix A from efficient vectors for principal submatrices of A. We also show that the geometric mean of the vectors in any nonempty subset of the columns of A is efficient for A, and give an example showing that the geometric mean of two efficient vectors need not be efficient. An interesting open question is the following: given two efficient vectors for a reciprocal matrix A, when is their Hadamard geometric mean efficient for A and, more generally, when is the set of efficient vectors for A geometrically convex? (Recall that it is in the 2-by-2 and 3-by-3 cases, but not in general.)
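Efficiency of a candidate vector can also be tested computationally via the graph-theoretic characterization of Blanquero et al. (2006), cited in the introduction: w is efficient for A exactly when the digraph with an arc from i to j (\(i\ne j\)) whenever \(a_{ij}w_{j}\ge w_{i}\) is strongly connected. The following sketch is ours (the function name and tolerance handling are not from the paper):

```python
import numpy as np

def is_efficient(A, w, tol=1e-12):
    """Test efficiency of w for the reciprocal matrix A via the
    characterization of Blanquero et al. (2006): w is efficient iff the
    digraph with an arc i -> j (i != j) whenever a_ij * w_j >= w_i is
    strongly connected. Strong connectivity is checked by searching the
    graph and its reverse from node 0."""
    n = len(w)
    arc = (A * w[np.newaxis, :] >= w[:, np.newaxis] - tol)
    np.fill_diagonal(arc, False)

    def all_reachable(adjacency):
        seen, stack = {0}, [0]
        while stack:
            i = stack.pop()
            for j in np.flatnonzero(adjacency[i]):
                if j not in seen:
                    seen.add(int(j))
                    stack.append(int(j))
        return len(seen) == n

    return all_reachable(arc) and all_reachable(arc.T)
```

As a sanity check, the entry-wise geometric mean of all columns of any reciprocal matrix passes this test, consistent with its known efficiency, while a vector not proportional to the generating vector of a consistent matrix fails it.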
We give numerical examples comparing the geometric means obtained from proper subsets of the columns of A with the geometric mean of all columns of A. We conclude that the geometric mean of some proper subsets of columns may produce better results (also compared with the Perron eigenvector). However, the geometric mean of all columns performs well in the sense that, in general, its behavior is close to the best among all the geometric means of subsets of columns. Also, its behavior seems to improve as the level of inconsistency decreases.
The extension result to construct efficient vectors provided in this paper seems to play an important role in solving many questions concerning the study of efficient vectors for reciprocal matrices. For example, it should be helpful in the construction of classes of efficient vectors for some block-perturbed consistent matrices (that is, reciprocal matrices obtained from consistent matrices by modifying a principal block), extending the already studied case of a block of size 2 (Cruz et al., 2021) and some cases of blocks of sizes 3 and 4 (Furtado, 2023). We also think this result will be helpful in the study of the existence of rank reversals by addition of an alternative (Barzilai & Golany, 1994; Schenkerman, 1994; Wang & Luo, 2009).
References
Ábele-Nagy, K., & Bozóki, S. (2016). Efficiency analysis of simple perturbed pairwise comparison matrices. Fundamenta Informaticae, 144, 279–289.
Ábele-Nagy, K., Bozóki, S., & Rebák, Ö. (2018). Efficiency analysis of double perturbed pairwise comparison matrices. Journal of the Operational Research Society, 69, 707–713.
Anholcer, M., Babiy, V., Bozóki, S., & Koczkodaj, W. W. (2011). A simplified implementation of the least squares solution for pairwise comparisons matrices. Central European Journal of Operations Research, 19, 439–444.
Anholcer, M., & Fülöp, J. (2019). Deriving priorities from inconsistent PCM using network algorithms. Annals of Operations Research, 274, 57–74.
Bajwa, G., Choo, E. U., & Wedley, W. C. (2008). Effectiveness analysis of deriving priority vectors from reciprocal pairwise comparison matrices. Asia-Pacific Journal of Operational Research, 25(3), 279–299.
Barzilai, J. (1997). Deriving weights from pairwise comparison matrices. Journal of the Operational Research Society, 48(12), 1226–1232.
Barzilai, J., & Golany, B. (1994). AHP rank reversal, normalization and aggregation rules. INFOR, 32(2), 57–63.
Blanquero, R., Carrizosa, E., & Conde, E. (2006). Inferring efficient weights from pairwise comparison matrices. Mathematical Methods of Operations Research, 64, 271–284.
Bozóki, S. (2008). Solution of the least squares method problem of pairwise comparison matrices. Central European Journal of Operations Research, 16, 345–358.
Bozóki, S. (2014). Inefficient weights from pairwise comparison matrices with arbitrarily small inconsistency. Optimization, 63, 1893–1901.
Bozóki, S., Csató, L., & Temesi, J. (2016). An application of incomplete pairwise comparison matrices for ranking top tennis players. European Journal of Operational Research, 248(1), 211–218.
Bozóki, S., & Fülöp, J. (2018). Efficient weight vectors from pairwise comparison matrices. European Journal of Operational Research, 264, 419–427.
Chao, X., Kou, G., Li, T., & Peng, Y. (2018). Jie Ke versus AlphaGo: A ranking approach using decision making method for large-scale data with incomplete information. European Journal of Operational Research, 265(1), 239–247.
Choo, E., & Wedley, W. (2004). A common framework for deriving preference values from pairwise comparison matrices. Computers and Operations Research, 31(6), 893–908.
Crawford, G., & Williams, C. (1985). A note on the analysis of subjective judgment matrices. Journal of Mathematical Psychology, 29(4), 387–405.
Cruz, H., Fernandes, R., & Furtado, S. (2021). Efficient vectors for simple perturbed consistent matrices. International Journal of Approximate Reasoning, 139, 54–68.
Csató, L. (2013). Ranking by pairwise comparisons for Swiss-system tournaments. Central European Journal of Operations Research, 21(4), 783–803.
Csató, L. (2018). Characterization of the row geometric mean ranking with a group consensus axiom. Group Decision and Negotiation, 27(6), 1011–1027.
Csató, L. (2019). A characterization of the Logarithmic Least Squares Method. European Journal of Operational Research, 276(1), 212–216.
Csató, L., & Petróczy, D. G. (2021). On the monotonicity of the eigenvector method. European Journal of Operational Research, 292(1), 230–237.
Csató, L. (2023). Right-left asymmetry of the eigenvector method: A simulation study. European Journal of Operational Research. https://doi.org/10.1016/j.ejor.2023.09.022
Dijkstra, T. K. (2013). On the extraction of weights from pairwise comparison matrices. Central European Journal of Operations Research, 21(1), 103–123.
Dyer, J. (1990). Remarks on the Analytic Hierarchy Process. Management Science, 36(3), 249–258.
Dyer, J. (1990). A clarification of “Remarks on the Analytic Hierarchy Process’’. Management Science, 36, 274–275.
Ehrgott, M. (2012). Vilfredo Pareto and multi-objective optimization, Documenta mathematica, the book series 6: Optimization stories, 447-453.
Fernandes, R., & Furtado, S. (2022). Efficiency of the principal eigenvector of some triple perturbed consistent matrices. European Journal of Operational Research, 298, 1007–1015.
Fichtner, J. (1986). On deriving priority vectors from matrices of pairwise comparisons. Socio-Economic Planning Sciences, 20(6), 341–345.
Furtado, S. (2023). Efficient vectors for double perturbed consistent matrices. Optimization, 72, 2679–2701.
Furtado, S., & Johnson, C. R. (2024). Efficiency of any weighted geometric mean of the columns of a reciprocal matrix. Linear Algebra and its Applications, 680, 83–92.
Golany, B., & Kress, M. (1993). A multicriteria evaluation of methods for obtaining weights from ratio-scale matrices. European Journal of Operational Research, 69(2), 210–220.
Horn, R. A., & Johnson, C. R. (1985). Matrix Analysis. Cambridge University Press.
Ishizaka, A., & Labib, A. (2011). Review of the main developments in the analytic hierarchy process. Expert Systems with Applications, 38(11), 14336–14345.
Johnson, C. R., Beine, W. B., & Wang, T. J. (1979). Right-left asymmetry in an eigenvector ranking procedure. Journal of Mathematical Psychology, 19(1), 61–64.
Kułakowski, K., Mazurek, J., & Strada, M. (2022). On the similarity between ranking vectors in the pairwise comparison method. Journal of the Operational Research Society, 73(9), 2080–2089.
Lundy, M., Siraj, S., & Greco, S. (2017). The mathematical equivalence of the “spanning tree’’ and row geometric mean preference vectors and its implications for preference analysis. European Journal of Operational Research, 257(1), 197–208.
Petróczy, D. G. (2021). An alternative quality of life ranking on the basis of remittances. Socio-Economic Planning Sciences, 78, 101042.
Petróczy, D. G., & Csató, L. (2021). Revenue allocation in formula one: A pairwise comparison approach. International Journal of General Systems, 50(3), 243–261.
Saaty, T. L. (1977). A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15, 234–281.
Saaty, T. L. (1980). The Analytic Hierarchy Process. McGraw-Hill.
Saaty, T. L. (2003). Decision-making with the AHP: Why is the principal eigenvector necessary. European Journal of Operational Research, 145, 85–91.
Schenkerman, S. (1994). Avoiding rank reversal in AHP decision-support models. European Journal of Operational Research, 74, 407–419.
Szádoczki, Zs., & Bozóki, S. (2023). Geometric interpretation of efficient weight vectors. Manuscript. https://doi.org/10.2139/ssrn.4573048
Szádoczki, Zs., Bozóki, S., Juhász, P., Kadenko, S. V., & Tsyganok, V. (2023). Incomplete pairwise comparison matrices based on graphs with average degree approximately 3. Annals of Operations Research, 326(2), 783–807.
Wang, Y.-M., & Luo, Y. (2009). On rank reversal in decision analysis. Mathematical and Computer Modelling, 49, 1221–1229.
Zeleny, M. (1982). Multiple criteria decision making. McGraw-Hill.
Funding
Open access funding provided by FCT|FCCN (b-on).
Ethics declarations
Conflicts of interest
All authors declare that they have no conflicts of interest.
The work of this author was supported by FCT- Fundação para a Ciência e Tecnologia, under project UIDB/04721/2020. Charles R. Johnson: The work of this author was supported in part by National Science Foundation grant DMS-0751964.
Furtado, S., Johnson, C.R. Efficient vectors in priority setting methodology. Ann Oper Res 332, 743–764 (2024). https://doi.org/10.1007/s10479-023-05771-y