Efficient vectors in priority setting methodology

The Analytic Hierarchy Process (AHP) is a much discussed method for ranking business alternatives based on empirical and judgemental information. We focus here upon the key component of deducing efficient vectors for a reciprocal matrix of pair-wise comparisons. It is not yet known how to produce all efficient vectors. It has been shown that the entry-wise geometric mean of all columns is efficient for any reciprocal matrix. Here, by combining some new basic observations with some known theory, we 1) give a method for inductively generating large collections of efficient vectors, and 2) show that the entry-wise geometric mean of any collection of distinct columns of a reciprocal matrix is efficient. We study numerically, using different measures, the performance of these geometric means in approximating the reciprocal matrix by a consistent matrix.


Introduction
A method used in decision-making and frequently discussed in the literature is the Analytic Hierarchy Process (AHP), suggested by Saaty [31,32]. Several works since then have developed and discussed many aspects of the method; see the surveys [11,25,34]. A key element of the method is the notion of pair-wise comparison (PC) matrix. An n-by-n positive matrix A = [a_ij] is called a PC matrix if a_ji = 1/a_ij for all 1 ≤ i, j ≤ n. In particular, each diagonal entry of a PC matrix is 1. We refer to the set of all such matrices as PC_n. Often, we refer to these matrices as reciprocal matrices, as do other authors. The i, j entry of a reciprocal matrix is viewed as a pair-wise ratio comparison between alternatives i and j, and the intent is to deduce an ordering of the alternatives from it. If the reciprocal matrix is consistent (transitive), that is, a_ij a_jk = a_ik for all triples i, j, k, there is a unique natural cardinal ordering, given by the relative magnitudes of the entries in any column. However, in human judgements consistency is unlikely. Inconsistency can also be an inherent feature of objective datasets [8,10,14,29,30]. Then, there will be many vectors that might be deduced from a reciprocal matrix A. Let w be a positive n-vector and w^(−T) the transpose of its entry-wise inverse. We may try to approximate A by the consistent matrix W = w w^(−T), i.e., we wish to choose w so that W − A is small in some sense. We say that w is efficient for A if, for any other positive vector v and corresponding consistent matrix V = v v^(−T), the entry-wise inequality |V − A| ≤ |W − A| implies that v and w are proportional. (It follows from Lemma 5, which we give later, that this definition is equivalent to that of other authors for the notion of efficiency [6,9].) Clearly, a consistent approximation to a reciprocal matrix A should be based upon a vector efficient for A.
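For readers who wish to experiment, the basic objects just introduced can be sketched in a few lines of Python/NumPy. This is a minimal illustration; the function names are ours, not from the paper.

```python
import numpy as np

def is_reciprocal(A, tol=1e-12):
    """Check that A is positive and a_ji = 1/a_ij for all i, j."""
    A = np.asarray(A, dtype=float)
    return bool(np.all(A > 0) and np.allclose(A * A.T, 1.0, atol=tol))

def is_consistent(A, tol=1e-9):
    """Check the transitivity condition a_ij * a_jk = a_ik for all triples."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return all(abs(A[i, j] * A[j, k] - A[i, k]) < tol
               for i in range(n) for j in range(n) for k in range(n))

def consistent_matrix(w):
    """Build W = w w^(-T): the rank-one consistent matrix with W_ij = w_i / w_j."""
    w = np.asarray(w, dtype=float)
    return np.outer(w, 1.0 / w)
```

For example, `consistent_matrix([1, 2, 4])` is both reciprocal and consistent, while a generic reciprocal matrix fails the transitivity check.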
If A is not itself consistent, the set E(A) of efficient vectors for A will include many vectors not proportional to each other. This set is, however, at least connected [6], but, in general, it is difficult to determine the entire set E(A). For simplicity, we projectively view proportional efficient vectors as the same, as they produce the same consistent matrix. Several methods to study when a vector is efficient have been developed, and algorithms to improve an inefficient vector have been provided (see [3,6,7,9,20,22] and the references therein).
Despite some criticism [18,19,26,33], one of the most used methods to approximate a reciprocal matrix A by a consistent matrix is the one proposed by Saaty [31,32], in which the consistent matrix is based upon the right Perron eigenvector of A, a positive eigenvector associated with the spectral radius of A [24]. The efficiency of the Perron eigenvector for certain classes of reciprocal matrices has been shown [1,2,20], though examples of reciprocal matrices for which this vector is inefficient are also known [4,6,7]. Another method to approximate A by a consistent matrix is based upon the geometric mean of all columns of A, which is known to be an efficient vector for A [6]. Many other proposals for approximating A by a consistent matrix have been made in the literature (for comparisons of different methods see, for example, [4,11,17,21,23,27]).
Before summarizing what we do here, we mention some more notation and terminology. The Hadamard (or entry-wise) product of two vectors (of the same size) or matrices (of the same dimensions) is denoted by •. For example, if A, B ∈ PC_n then A • B ∈ PC_n and, similarly, the n-by-n consistent matrices are closed under the Hadamard product. We use superscripts in parentheses to denote an exponent applied to all entries of a vector or a matrix. For example, (u_1 • u_2 • ⋯ • u_k)^(1/k) is the (Hadamard) geometric mean of positive vectors u_1, ..., u_k of the same size. This column geometric mean is what is called the row geometric mean in, for instance, [6].
For an n-by-n matrix A = [a_ij], we partition A by columns as A = [a_1 a_2 ⋯ a_n]. The principal submatrix determined by deleting (respectively, retaining) the rows and columns indexed by a subset K ⊆ {1, ..., n} is denoted by A(K) (respectively, A[K]); we abbreviate A({i}) as A(i). Note that if A is reciprocal (consistent), then so is A(i).
In Section 2 we give some (mostly known) background that we will use and make some related observations. In particular, we present the relationship between efficiency and strong connectivity of a certain digraph, and state the efficiency of the Hadamard geometric mean of all the columns of a reciprocal matrix. In Section 3 we give some (mostly new) additional background that will also be helpful. In Section 4 we show explicitly how to extend efficient vectors for A(i) to efficient vectors for the reciprocal matrix A. This leads to an algorithm, initiated by any A[{i, j}], i ≠ j, to produce a subset of E(A). This subset may not be all of E(A), as truncation of an efficient vector for A may not give one for the corresponding principal submatrix, and we may get different subsets by starting with different i, j. In Section 5 we study the relationship between efficient vectors for a reciprocal matrix A and its columns. As mentioned, any column of a consistent matrix generates that consistent matrix and, so, is efficient for it. Similarly, any column of a reciprocal matrix is efficient for it (Lemma 10), as is the geometric mean of any subset of the columns (Theorem 12). In Section 6, we study numerically, using different measures, the performance of these efficient vectors in approximating A by a consistent matrix, and compare them, from this point of view, with the Perron eigenvector (in cases in which it is efficient). It will be clear that the geometric mean of all columns can be significantly outperformed by the geometric mean of other collections of columns. We also show by example that E(A) is not closed under the geometric mean (Section 5). Finally, in Section 7 we give some conclusions.

Technical Background
We start with some known results that are relevant for this work. First, it is important to know how E(A) changes when A is subjected to a positive diagonal similarity or a permutation similarity, or both (a monomial similarity).
Lemma 1 [13,22] Suppose that A ∈ PC_n and w ∈ E(A). If D is a positive diagonal matrix, then DAD^(−1) ∈ PC_n and Dw ∈ E(DAD^(−1)); if P is a permutation matrix, then PAP^T ∈ PC_n and Pw ∈ E(PAP^T).
Next we define a directed graph (digraph) associated with a matrix A ∈ PC_n and a positive n-vector w, which is helpful in studying the efficiency of w for A. For w = [w_1 ⋯ w_n]^T, we denote by G(A, w) the digraph whose vertex set is {1, ..., n} and whose directed edge set is {i → j : w_i/w_j ≥ a_ij, i ≠ j}. In [6] the authors proved that the efficiency of w can be determined from G(A, w).
Theorem 2 [6] Let A ∈ PC_n. A positive n-vector w is efficient for A if and only if G(A, w) is a strongly connected digraph, that is, for all pairs of vertices i, j, with i ≠ j, there is a directed path from i to j in G(A, w).
Recall [24] that G(A, w) is strongly connected if and only if (I_n + L)^(n−1) is positive. Here I_n is the identity matrix of order n and L = [l_ij] is the adjacency matrix of G(A, w), that is, l_ij = 1 if i → j is an edge of G(A, w), and l_ij = 0 otherwise.
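Theorem 2, combined with the positivity test for (I_n + L)^(n−1), yields a mechanical efficiency check. A sketch in NumPy follows (our naming; the edge rule w_i/w_j ≥ a_ij reflects the definition of G(A, w) above, and a small tolerance guards borderline equalities in floating point):

```python
import numpy as np

def is_efficient(A, w, tol=1e-9):
    """Theorem 2 test: w is efficient for A iff G(A, w) is strongly connected,
    where G(A, w) has an edge i -> j whenever w_i / w_j >= a_ij (i != j).
    Strong connectivity is tested via positivity of (I_n + L)^(n-1)."""
    A = np.asarray(A, dtype=float)
    w = np.asarray(w, dtype=float)
    n = A.shape[0]
    # Adjacency matrix L of G(A, w); the tolerance keeps equality edges.
    L = (np.outer(w, 1.0 / w) >= A - tol).astype(float)
    np.fill_diagonal(L, 0.0)
    return bool(np.all(np.linalg.matrix_power(np.eye(n) + L, n - 1) > 0))
```

For instance, for the simple perturbed consistent matrix Z_3(2), the all-ones vector is efficient while [1 2 1]^T is not.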
Theorem 3 [6] The geometric mean of all the columns of a reciprocal matrix A is an efficient vector for A.

This result comes from the fact that the geometric mean minimizes the logarithmic least squares objective function (see also [12]).
In [13], all the efficient vectors for a simple perturbed consistent matrix, that is, a reciprocal matrix obtained from a consistent one by perturbing one entry above the main diagonal and the corresponding reciprocal entry, were described. Let Z_n(x), with x > 0, be the matrix in PC_n with all entries equal to 1 except those in positions 1, n and n, 1, which are x and 1/x, respectively. For any simple perturbed consistent matrix A ∈ PC_n, there are a positive diagonal matrix D and a permutation matrix P such that D P A P^T D^(−1) = Z_n(x) for some x > 0. Taking into account Lemma 1, an n-vector w is efficient for A if and only if DPw is efficient for Z_n(x). For this reason, we focus on the description of the efficient vectors for Z_n(x), as the efficient vectors for a general simple perturbed consistent matrix can be obtained from them using Lemma 1.
Theorem 4 [13] Let n ≥ 3, x > 0 and let w = [w_1 ⋯ w_(n−1) w_n]^T be a positive vector. Then w is efficient for Z_n(x) if and only if

w_n ≤ w_i ≤ w_1 ≤ x w_n for all i ∈ {2, ..., n − 1}, when x ≥ 1,

x w_n ≤ w_1 ≤ w_i ≤ w_n for all i ∈ {2, ..., n − 1}, when x ≤ 1.

Additional facts on efficiency
From Lemma 5 we may conclude that the definition of efficient vector given in Section 1 is equivalent to the one in [6,9].
Here and throughout, if A ∈ PC_n and w is a positive n-vector, we denote D(A, w) := w w^(−T) − A. By |D(A, w)| we mean the entry-wise absolute value of D(A, w).

Proof. The "if" claim is trivial. Next we show the "only if" claim. Suppose that |D(A, w)| = |D(A, v)|, that is, for all i, j ∈ {1, ..., n},

|w_i/w_j − a_ij| = |v_i/v_j − a_ij|.   (1)

Fix a pair i, j and let x = w_i/w_j and y = v_i/v_j. By (1), either x = y or x + y = 2 a_ij; applying (1) to the pair j, i, either 1/x = 1/y or 1/x + 1/y = 2/a_ij. If x ≠ y, then x + y = 2 a_ij and 1/x + 1/y = 2/a_ij, which give xy = a_ij^2 and then x = y = a_ij, a contradiction. Hence, the condition w_i/w_j = v_i/v_j holds for all i, j ∈ {1, ..., n}, which implies that w and v are proportional.
Note from the proof of Lemma 5 that (1) holds for a pair i, j if and only if w_i/w_j = v_i/v_j or w_i/w_j + v_i/v_j = 2 a_ij. We close this section with a topological property of E(A).

Theorem 6 For any A ∈ PC_n, E(A) is a closed subset of the set of positive n-vectors.
Proof. We verify this by showing that the set of inefficient vectors, the complement of E(A) in the set of positive n-vectors, is open, by appealing to Theorem 2. Suppose that v ∉ E(A). Then the digraph G(A, v) is not strongly connected. Let v' be a sufficiently small perturbation of v (i.e., v' lies in an open ball about v whose radius is positive, but as small as we like). If i → j is not an edge of G(A, v), then it is not an edge of G(A, v'). Hence G(A, v') has no more edges (under inclusion) than G(A, v). Since the latter is not strongly connected, the former is not either, so that v' ∉ E(A).
We also note that, if w ∈ E(A) and the matrix D(A, w) has no zero off-diagonal entries, then w' ∈ E(A) for any sufficiently small perturbation w' of w, and w ∈ E(A') for any sufficiently small reciprocal perturbation A' of A.

Inductive construction of efficient vectors
Suppose that A ∈ PC_n and that w ∈ E(A(n)). Then G(A(n), w) is strongly connected. May w be extended to an efficient vector for A and, if so, how? For a positive scalar x, consider the vector w_x := [w^T x]^T. Since the subdigraph of G(A, w_x) induced by the vertices 1, ..., n−1 is G(A(n), w), and the latter is strongly connected, G(A, w_x) is strongly connected if and only if there are edges from vertex n to vertices of G(A(n), w) and also edges from the latter to n (see Proposition 3 in [13]). The vector of the first n−1 entries of the last column of D(A, w_x) is (1/x)w less the vector of the first n−1 entries of a_n (the last column of A); there are such edges if and only if this difference vector has a 0 entry or both positive and negative entries. This means that among the numbers w_i/x − a_in, i = 1, ..., n−1, there are both nonnegative and nonpositive ones. We restate this as

Theorem 7 For A ∈ PC_n and w = [w_1 ⋯ w_(n−1)]^T ∈ E(A(n)), the vector w_x = [w^T x]^T is efficient for A if and only if the scalar x satisfies

min_{1≤i≤n−1} w_i/a_in ≤ x ≤ max_{1≤i≤n−1} w_i/a_in.

Of course, the above interval is nonempty. This leads to a natural algorithm to construct a large subset of E(A) for A ∈ PC_n.

Starting with any efficient vector for A[{1, 2}] and repeatedly applying Theorem 7, we obtain, for each k = 2, ..., n−1, efficient vectors for A[{1, ..., k+1}] by appending to each efficient vector for A[{1, ..., k}] any scalar in the (nonempty) interval given by Theorem 7. Denoting by w[{1, 2, ..., n}] the set of all vectors produced in this way, we have w[{1, 2, ..., n}] ⊆ E(A).
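The extension step of Theorem 7 and the resulting inductive construction can be sketched as follows (our own naming; the interval endpoints are the minimum and maximum of the ratios w_i/a_(i,k+1), as in the theorem):

```python
import numpy as np

def extension_interval(A, w, k):
    """Interval of Theorem 7: the values x such that appending x to w, an
    efficient vector for the leading k-by-k principal submatrix of A, gives
    an efficient vector for the leading (k+1)-by-(k+1) principal submatrix.
    Column k (0-based) of A holds the entries a_{i,k+1} of the text."""
    A = np.asarray(A, dtype=float)
    w = np.asarray(w, dtype=float)
    ratios = w[:k] / A[:k, k]
    return ratios.min(), ratios.max()

def extend_efficient(A, w, k, t=0.5):
    """Append one entry chosen at relative position t in the interval."""
    lo, hi = extension_interval(A, w, k)
    return np.append(w, lo + t * (hi - lo))
```

Starting from a column of A[{1, 2}] and appending one entry per step, with any t ∈ [0, 1] at each step, yields a vector in E(A); varying the choices of t sweeps out a large subset of E(A). For a consistent matrix the interval collapses to a single point, as one would expect.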
We make two important observations.First, we may instead start with some other 2-by-2 principal submatrix A[{i, j}], i ̸ = j, and proceed similarly, either by inserting the new entry of the next efficient vector in the appropriate position, or by placing A[{i, j}] in the upper left 2-by-2 submatrix, via permutation similarity, and proceeding in exactly the same way.We note that starting in two different positions may produce different terminal sets (Example 8), and the union of all possible terminal sets is contained in E(A).
Second, w[{1, 2, . . ., n}] may be a proper subset of E(A), as truncation of a vector (deletion of an entry) from an efficient vector for A may not give an efficient vector for the corresponding principal submatrix (see Example 8).
Example 8 The efficient vectors for A[{1, 2}] are proportional to the columns of A[{1, 2}]. By Theorem 7, appending to each of them an entry in the corresponding interval gives vectors efficient for A.
The efficient vectors for A[{2, 3}] are proportional to the columns of A[{2, 3}] and, by Theorem 7, extending them as above gives further vectors efficient for A. Note that, by Theorem 4, the vector [⋯ 4]^T is efficient for A, though it does not belong to the sets of vectors determined above, as no vector obtained from it by deleting one entry is efficient for the corresponding 2-by-2 principal submatrix.
There are cases in which we know all the efficient vectors for a larger submatrix, and then we can start our building process with this submatrix. In fact, taking into account Theorem 4, all efficient vectors for a 3-by-3 reciprocal matrix are known, as such a matrix is a simple perturbed consistent matrix. Thus, it is always possible to start the process from a 3-by-3 principal submatrix.

Example 9 Consider the matrix
By Theorem 4, the efficient vectors for A[{1, 2, 3}] are the vectors of the form given in Theorem 4 (up to the monomial similarity of Lemma 1).

We observe that, if A ∈ PC_n is an (inconsistent) simple perturbed consistent matrix, then A has a principal 3-by-3 (inconsistent) simple perturbed consistent submatrix B. If we start the inductive construction of efficient vectors for A with the submatrix B, for which E(B) is known by Lemma 1 and Theorem 4, then we obtain E(A). This fact follows from Corollary 9 in [13], taking into account that, by Lemma 1 and Remark 4.6 in [22], we may focus on A = Z_n(x), for some x > 0.
Similarly, if A ∈ PC_n is a double perturbed consistent matrix (that is, A is obtained from a consistent matrix by modifying two entries above the diagonal and the corresponding reciprocal entries), in which no two perturbed entries lie in the same row or in the same column, then A has a principal 4-by-4 double perturbed consistent submatrix B of the same type and, by Theorem 4.2 in [22] and Lemma 1, E(B) is known. By Corollary 4.5 in [22], if we start the inductive construction of efficient vectors with B, then again we obtain E(A).
Of course, in these simple and double perturbed consistent cases, all the efficient vectors for A ∈ PC n are already known (Theorem 4, and Theorem 4.2 in [22]).

Columns of a reciprocal matrix
Previously, it has been noted (Theorem 3) that the Hadamard geometric mean of all columns of A ∈ PC_n is efficient for A. Interestingly, each individual column of A is also efficient for A.
Lemma 10 Let A ∈ PC_n. Then any column of A lies in E(A).

Proof. Let a_j be the j-th column of A. Since a_jj = 1, the i-th entry of the j-th column of D(A, a_j) is a_ij/a_jj − a_ij = 0. Hence, in G(A, a_j) there are edges in both directions between vertex j and every other vertex, so G(A, a_j) contains a star on n vertices with all edges in both directions and is, therefore, strongly connected, verifying that a_j is efficient, by Theorem 2.
Further, the geometric mean of any subset of the columns of a reciprocal matrix A also lies in E(A). To prove this result, we use Lemma 11.

Proof (of Lemma 11). By a possible permutation similarity, and taking into account Lemma 1, suppose, without loss of generality, that w is the geometric mean of the first s columns of A. (Note that, if w is the geometric mean of some columns of A, then Pw is the geometric mean of the corresponding columns of PAP^T, for a permutation matrix P.) The i-th entry of Dw is d_i Π_{j=1}^s a_ij^(1/s). On the other hand, the i-th entry of the geometric mean v of the first s columns of DAD^(−1) is Π_{j=1}^s (d_i a_ij d_j^(−1))^(1/s) = d_i (Π_{j=1}^s d_j^(−1/s)) Π_{j=1}^s a_ij^(1/s). Thus, the quotient of the i-th entries of Dw and v is Π_{j=1}^s d_j^(1/s), which does not depend on i, implying the claim.
Theorem 12 Let A ∈ PC_n. Then the geometric mean of any collection of distinct columns of A lies in E(A).
Proof. Let 1 ≤ s ≤ n. We show that the geometric mean w_A of s distinct columns of A is efficient for A. The proof is by induction on n. For n = 2, the result is straightforward. Suppose that n > 2. If s = 1 or s = n, the result follows from Lemma 10 or Theorem 3, respectively. Suppose that 1 < s < n. By Lemmas 1 and 11, we may and do assume that the s columns of A are the first ones, and that the entries in the last column and in the last row of A are all equal to 1, that is,

A = [ B  e ; e^T  1 ],

where e is the (n−1)-vector with all entries equal to 1 and B ∈ PC_(n−1). Let b_1, ..., b_s be the first s columns of B, so that, for j = 1, ..., s, a_j = [b_j^T 1]^T. Then w_A = [w_B^T 1]^T, in which w_B is the geometric mean of the columns b_1, ..., b_s. By the induction hypothesis, w_B is efficient for B, so that, by Theorem 2, G(B, w_B) is strongly connected. Thus, G(A, w_A) is strongly connected if and only if there are edges from vertex n to vertices of G(B, w_B) and also edges from the latter to n (see the observation before Theorem 7), that is, if and only if w_B − e is neither strictly positive nor strictly negative. The product of the first s entries of w_B is l^(1/s), where l is the product of the entries of B[{1, ..., s}]. Since this matrix is reciprocal, l = 1. Thus, the vector formed by the first s entries of w_B is neither entry-wise strictly greater than 1 nor strictly less than 1, implying that G(A, w_A) is strongly connected.
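Theorem 12 can be checked numerically on any given reciprocal matrix by combining the geometric means of column subsets with the digraph test of Theorem 2. A self-contained sketch (our naming; a small tolerance handles floating-point equalities in the edge rule):

```python
import numpy as np
from itertools import combinations

def is_efficient(A, w, tol=1e-9):
    """Theorem 2 test: edge i -> j of G(A, w) when w_i / w_j >= a_ij."""
    n = A.shape[0]
    L = (np.outer(w, 1.0 / w) >= A - tol).astype(float)
    np.fill_diagonal(L, 0.0)
    return bool(np.all(np.linalg.matrix_power(np.eye(n) + L, n - 1) > 0))

def check_theorem_12(A):
    """Verify that the geometric mean of every nonempty subset of the
    columns of A is efficient for A (Theorem 12)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            w = np.exp(np.log(A[:, list(cols)]).mean(axis=1))
            if not is_efficient(A, w):
                return False
    return True
```

Running this on an inconsistent 3-by-3 reciprocal matrix confirms that all 7 nonempty subsets of columns yield efficient vectors.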
We observe that there are 2^n − 1 distinct nonempty subsets of columns of A ∈ PC_n (not necessarily corresponding to different geometric means).
The sets of efficient vectors for matrices in PC_2 (any 2-by-2 reciprocal matrix is consistent) and in PC_3 (any 3-by-3 reciprocal matrix is a simple perturbed consistent matrix) are closed under geometric means. In the latter case, this follows from Lemma 1 and the facts that a matrix in PC_3 is monomially similar to Z_3(x), for some x > 0, and that, by Theorem 4, the set of efficient vectors for Z_3(x) is closed under geometric mean. However, the set of efficient vectors for matrices in PC_n, with n > 3, may not be closed under geometric mean, as the next example illustrates.
Example 13 Let A be the matrix in (2) and let B be the 4-by-4 principal submatrix of A obtained by deleting the 5-th row and column. Taking into account Example 9, we may choose two vectors efficient for A whose (Hadamard) geometric mean u is not efficient for A: the first three entries of the last column of D(A, u) are positive and, therefore, G(A, u) is not strongly connected.

Numerical experiments
We next give numerical examples in which we compare the geometric means w of the vectors in different proper subsets of columns of a reciprocal matrix A with the geometric mean of all columns of A, denoted here by w_C, a vector proposed by several authors to obtain a consistent matrix approximating A, as this method has a strong axiomatic background [5,15,16,21,28]. Recall from Section 5 that all these vectors are efficient for A. We take ∥D(A, w)∥_1, the sum of all entries of |D(A, w)|, as a measure of effectiveness of w ∈ E(A), as well as ∥D(A, w)∥_2, the Frobenius norm of D(A, w). Recall that, for an n-by-n matrix B = [b_ij], we have ∥B∥_1 = Σ_{i,j=1}^n |b_ij| and ∥B∥_2 = (Σ_{i,j=1}^n b_ij^2)^(1/2).
Other measures (norms) are possible. For comparison, we also consider the case in which w is the Perron eigenvector of A, denoted by w_P, as it is one of the most used vectors to estimate a consistent matrix close to A. Our experiments were done using the software Octave, version 6.1.0.
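The two effectiveness measures, and the search over column subsets used in the experiments below, can be sketched as follows (illustrative helper names, not from the paper):

```python
import numpy as np
from itertools import combinations

def geometric_mean_of_columns(A, cols):
    """Entry-wise geometric mean of the selected columns of A."""
    A = np.asarray(A, dtype=float)
    return np.exp(np.log(A[:, list(cols)]).mean(axis=1))

def D(A, w):
    """Residual D(A, w) = w w^(-T) - A."""
    A = np.asarray(A, dtype=float)
    w = np.asarray(w, dtype=float)
    return np.outer(w, 1.0 / w) - A

def norms(A, w):
    """Return (entry-wise 1-norm, Frobenius norm) of D(A, w)."""
    R = D(A, w)
    return np.abs(R).sum(), np.sqrt((R ** 2).sum())

def best_column_subset(A):
    """Subset of columns whose geometric mean minimizes the Frobenius norm."""
    n = A.shape[1]
    subsets = (c for k in range(1, n + 1) for c in combinations(range(n), k))
    return min(subsets, key=lambda c: norms(A, geometric_mean_of_columns(A, c))[1])
```

For a consistent matrix every subset gives residual zero; for an inconsistent matrix, the minimizing subset is, by construction, at least as good as the full set of columns.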

Example 14 Consider the matrix
There are 31 distinct nonempty subsets of the set of columns of A. We identify each subset with a sequence of five 0/1 digits, in which a 1 in position i means that the i-th column of A belongs to the subset, while a 0 means that it does not. The sequences are taken in increasing (numerical) order, and by S_i we denote the subset of columns associated with the i-th sequence. Note that S_31 is the set of all columns of A. By w_i we denote the geometric mean of the vectors in S_i.
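Assuming each sequence is read as a five-digit binary string whose leftmost digit refers to column 1 (our reading of the convention just described), the subsets S_i can be enumerated as:

```python
def subset_S(i, n=5):
    """0-based column indices in S_i: digit j of the n-digit binary
    expansion of i equal to 1 means column j+1 belongs to the subset."""
    bits = format(i, "0{}b".format(n))
    return [j for j, b in enumerate(bits) if b == "1"]
```

Under this reading, `subset_S(31)` gives all five columns, matching S_31 above.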
In Table 1 we give the norms ∥D(A, w_i)∥_1 and ∥D(A, w_i)∥_2, i = 1, ..., 31. In Table 2 we emphasize the results obtained for the geometric mean w_C of all columns, for the vectors that produce the smallest and the largest values of ∥D(A, w_i)∥_1 and ∥D(A, w_i)∥_2, and also for the Perron eigenvector w_P of A (which is efficient). It can be observed that, according to the considered measures, there are proper subsets of columns that produce better results than those for the Perron eigenvector and for the set of all columns. We summarize our results in Figure 1, which gives a graphical comparison of all the results obtained. On the x-axis we have the index i of each subset S_i of columns. On the y-axis we have the values of ∥D(A, w_i)∥_1 and ∥D(A, w_i)∥_2 for the different vectors w_i. A line joining the values of each of these norms for the different subsets of columns is plotted. A horizontal line corresponding to each of the considered norms for the Perron eigenvector also appears.

Example 15 Consider the matrix
Table 3 and Figure 2 are the analogs of Table 2 and Figure 1 for the 8-by-8 reciprocal matrix considered here. Note that in this case we have 255 different subsets S_i of the set of columns of A. Again, a proper subset of the columns produces better results than either all columns or the Perron vector (which is efficient for A). Note that max_i ∥D(A, w_i)∥_1 / min_i ∥D(A, w_i)∥_1 = 5.0432 and max_i ∥D(A, w_i)∥_2 / min_i ∥D(A, w_i)∥_2 = 10.890.

Experiment 16 In this example we generated 100 reciprocal matrices A ∈ PC_5, in which the entries in the upper triangular part are random real numbers in the interval (0, 100), and compare, in terms of the 1-norm (Figure 3) and the Frobenius norm (Figure 4) of D(A, w), the cases in which w is the geometric mean w_C of all columns of A and w is the vector that produces the smallest norm among all geometric means of the subsets of the columns of A.
Again, it can be verified that, in general, a proper subset of columns produces better results.
In our previous examples we have emphasized that the minimum 1-norm and the minimum Frobenius norm of D(A, w), when w runs over the geometric means of the subsets of columns of a reciprocal matrix A, are, in general, not attained by w_C, the geometric mean of all columns of A. However, if we consider a large set of random reciprocal matrices A_j, we can see that the sum, over all A_j's, of a normalization of ∥D(A_j, w_C)∥_2 performs well when compared with the corresponding sums for other subsets of columns, even when min ∥D(A_j, w)∥_2, with w running over the geometric means of the subsets of columns of A_j, is not attained by w_C for most j's.

Experiment 17 We consider reciprocal matrices A_j ∈ PC_5, j = 1, ..., 1000, obtained by generating 1000 matrices B_j with random real entries in the interval (0, 10) and letting A_j = B_j • (B_j^(−1))^T, where B_j^(−1) denotes the entry-wise inverse of B_j and • the Hadamard product. For each i = 1, ..., 31, we determine a normalized sum p(i) of the values ∥D(A_j, w_ij)∥_2 over all j, in which w_ij is the geometric mean of the subset S_i of the columns of A_j. (We identify the subsets S_i with indices of columns, as introduced in Example 14.) In Table 4 we display the values of p(i), i = 1, ..., 31. For each i, we also display the number n(i) of j's for which min ∥D(A_j, w)∥_2, when w runs over all the geometric means of the subsets of columns of A_j, is attained by the subset S_i.
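The generation scheme of Experiment 17 can be sketched as follows (a small positive lower bound is used in place of the open endpoint 0, to avoid division issues; otherwise the construction follows the text):

```python
import numpy as np

def random_reciprocal(n, low=1e-9, high=10.0, rng=None):
    """Random reciprocal matrix A = B o (B^(-1))^T, where B has entries
    uniform in (low, high), o is the Hadamard product and B^(-1) is the
    entry-wise inverse of B."""
    rng = np.random.default_rng(rng)
    B = rng.uniform(low, high, size=(n, n))
    return B * (1.0 / B).T
```

By construction, the output has unit diagonal and satisfies a_ij a_ji = 1, so it lies in PC_n.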
We finally give an example illustrating that, close to consistency, all subsets of columns perform about the same, as expected.

Example 18 Consider the matrix
If w_i, i = 1, ..., 31, are the geometric means of the 31 subsets of the columns of A, the values of ∥D(A, w_i)∥_1 and ∥D(A, w_i)∥_2 are all small when compared with the corresponding values in Examples 14 and 15.

Conclusions
In the context of the Analytic Hierarchy Process, pairwise comparison matrices (PC matrices), also called reciprocal matrices, appear in the ranking of different alternatives. In practice, the obtained reciprocal matrices are usually inconsistent, and a good consistent matrix approximating the reciprocal matrix should be obtained. A consistent matrix is uniquely determined by a positive vector (the vector of priorities, or weights). Many methods have been proposed in the literature to obtain the vectors from which a consistent matrix approximating a given reciprocal matrix is constructed. Two of the most used methods consist of choosing the Perron eigenvector of the reciprocal matrix or the Hadamard geometric mean of all its columns. An important property that should be satisfied by the vectors on which such a consistent matrix is based is efficiency. It is known that the Hadamard geometric mean of all the columns of a reciprocal matrix is efficient, though the Perron eigenvector does not always satisfy this property.
Here we give an algorithm to construct efficient vectors for a reciprocal matrix A from efficient vectors for principal submatrices of A. We also show that the geometric mean of the vectors in any nonempty subset of the columns of A is efficient for A. We give an example showing that the geometric mean of two efficient vectors need not be efficient, which leaves open the question of when the Hadamard geometric mean of two efficient vectors is efficient.
We give numerical examples comparing the geometric means obtained from proper subsets of the columns of A with the geometric mean of all the columns of A. We conclude that the geometric mean of some proper subsets of columns may produce better results, even when compared with the Perron eigenvector, which we include for completeness. So, according to our results, there is no evidence that the geometric mean of all columns has a universal advantage over the geometric means of other subsets of columns, especially when the level of inconsistency is significant.
Declaration All authors declare that they have no conflicts of interest.

Lemma 5
Let A ∈ PC_n and let v, w be positive n-vectors. Then |D(A, w)| = |D(A, v)| if and only if v and w are proportional.
The efficient vectors for A[{1, 3}] are proportional to the columns of A[{1, 3}]; extending them via Theorem 7 gives further vectors efficient for A (and, of course, all positive vectors proportional to them).

Lemma 11
Let A ∈ PC_n, let D = diag(d_1, ..., d_n) be a positive diagonal matrix and let 1 ≤ s ≤ n. If w is the geometric mean of s columns of A, then Dw is a positive multiple of the geometric mean of the corresponding s columns of DAD^(−1).

Figure 1: Comparison of the performance of the geometric means of the subsets of columns of A and the Perron eigenvector w_P of A (Example 14).

Figure 2: Comparison of the performance of the geometric means of the subsets of the columns of A and the Perron eigenvector w_P of A (Example 15).

Figure 3: Comparison of the minimal value of the 1-norm of D(A, w), when w runs over the geometric means of the subsets of the columns of A, with the 1-norm of D(A, w), when w is the geometric mean w_C of all columns of A, for 100 reciprocal matrices A (Experiment 16).

Figure 4: Comparison of the minimal value of the Frobenius norm of D(A, w), when w runs over the geometric means of the subsets of the columns of A, with the Frobenius norm of D(A, w), when w is the geometric mean w_C of all columns of A, for 100 reciprocal matrices A (Experiment 16).
By Theorem 7, the vectors of the form [w_1 w_2 w_3 w_4]^T, with w_4 in the interval given by Theorem 7, are efficient for A[{1, 2, 3, 4}]. Again by Theorem 7, the vectors of the form [w_1 w_2 w_3 w_4 w_5]^T, with w_5 in the interval given by Theorem 7 (whose upper endpoint is 6), are efficient for A. Moreover, these are the only efficient vectors with the given first four entries.

Table 2: Comparison of the performance of the geometric mean w_C of all the columns of A, the Perron eigenvector w_P of A and the geometric means of the subsets of the columns of A with best and worst behaviors (Example 14).