We conclude that if the dimension is small, and the researcher knows a priori that the errors are small, and there is exactly one judgment for each pair of entities, there is little (only time, effort, and a little accuracy) to argue against using the eigenvector.

1 Introduction

The pairwise comparison methodology is applied in many decision-making frameworks such as the famous analytic hierarchy process (AHP), proposed by Saaty (Saaty, 1977, 1980). Since the aim is to obtain the priorities for the alternatives, the resulting pairwise comparison matrix should be transformed into a weight vector. To that end, a number of procedures have been suggested (Choo & Wedley, 2004). The two most popular techniques are the eigenvector (Saaty, 1977) and the logarithmic least squares (often called geometric mean) (Crawford & Williams, 1985) methods. They lead to the same solution if the pairwise comparisons are consistent, that is, the direct comparison of alternatives i and j coincides with any indirect comparison of them through a third alternative k. In addition, the two algorithms are verified to give the same weights if the number of alternatives is at most three (Crawford & Williams, 1985, p. 393).

Several studies have examined the similarity of the weight vectors derived by the eigenvector and the logarithmic least squares methods for at least four alternatives. According to the Monte Carlo simulations of Herman and Koczkodaj (1996), the priorities are generally closer if the matrix is less inconsistent. Kułakowski et al. (2022) give an analytical proof of this convergence. Mazurek et al. (2022) focus on the differences between the ordinal rankings obtained using these two procedures.

However, a complete pairwise comparison matrix contains \(n(n-1)/2\) comparisons, which may be difficult to collect. First, the number of entries is a quadratic function of the number of alternatives. Second, experts may be unable to compare some items (Harker, 1987). Third, the necessary information might be impossible to acquire, for example, when the pairwise comparisons are derived from the results of matches in sports tournaments and some players or teams have not met each other (Bozóki et al., 2016; Chao et al., 2018; Csató, 2013).

In this case, the algorithms suggested for complete pairwise comparison matrices can be used only after all missing judgements are estimated. A straightforward approach is considering an optimisation problem where the unknown comparisons are substituted by variables and an inconsistency index provides the objective function (Koczkodaj et al., 1999). Shiraishi et al. (1998) and Shiraishi and Obata (2002) have proposed this idea for the well-established inconsistency index of Saaty. The implied minimisation problem has been analysed and discussed by Bozóki et al. (2010), who also prove the necessary and sufficient condition for the uniqueness of the optimal completion according to the geometric inconsistency index (Crawford & Williams, 1985; Aguarón & Moreno-Jiménez, 2003), which minimises a logarithmic least squares objective function. This optimal completion can be obtained by solving a system of linear equations.

We are not aware of any result on the relationship between the optimal completions obtained by these two approaches when the pairwise comparison matrix contains missing entries. The current paper makes an important contribution to this issue. In particular, it is verified that the two methods lead to the same result if the incomplete pairwise comparison matrix contains at most four alternatives. Our finding is non-trivial because Saaty’s inconsistency index CI and the geometric inconsistency index GCI are not functionally dependent for \(n=4\) (Cavallo, 2020). Unsurprisingly, the theorem does not extend to the case of five alternatives.

The remainder of the study is organised as follows. The theoretical background is presented in Sect. 2. The connection between the two optimal completions is discussed in Sect. 3. Finally, Sect. 4 offers concluding remarks.

2 Basic mathematical definitions

The numerical answers of the decision-maker to questions such as “How many times is alternative i preferred to alternative j?” are collected into a matrix, but we allow for missing comparisons (indicated by \(*\)), too. Denote by \(\mathbb {R}_+\) the set of positive numbers and by \(\mathbb {R}^n_+\) the set of positive vectors of size n.

Definition 1

Incomplete pairwise comparison matrix: Matrix \(\textbf{A} = \left[ a_{ij} \right] \) is an incomplete pairwise comparison matrix if \(a_{ij} \in \mathbb {R}_+ \cup \{ *\}\) such that for all \(1 \le i,j \le n\), \(a_{ij} \in \mathbb {R}_+\) implies \(a_{ji} = 1 / a_{ij}\) and \(a_{ij} = *\) implies \(a_{ji} = *\).

The set of incomplete pairwise comparison matrices of order n is denoted by \(\mathcal {A}_{*}^{n \times n}\).

An incomplete pairwise comparison matrix \(\textbf{A} = \left[ a_{ij} \right] \) is called complete if \(a_{ij} \ne *\) for all \(1 \le i,j \le n\).

Definition 2

Weighting method: A weighting method associates a weight vector \(\textbf{w} \in \mathbb {R}^n_+\) to any incomplete pairwise comparison matrix \(\textbf{A} = \left[ a_{ij} \right] \in \mathcal {A}_{*}^{n \times n}\).

Definition 3

Logarithmic least squares method (Kwiesielewicz, 1996; Takeda & Yu, 1995): Let \(\textbf{A} = \left[ a_{ij} \right] \in \mathcal {A}_{*}^{n \times n}\) be an incomplete pairwise comparison matrix. The weight vector \(\textbf{w} = \left[ w_i \right] \in \mathbb {R}^n_+\) provided by the logarithmic least squares method is the optimal solution \(\textbf{w}\) of the following problem:

$$\begin{aligned} \min&\sum _{i,j: \, a_{ij} \ne *} \left[ \log a_{ij} - \log \left( \frac{w_i}{w_j} \right) \right] ^2 \nonumber \\ \text {subject to }&w_i > 0 \text { for all } i=1,2, \dots , n. \end{aligned}$$
(1)

This approach was originally suggested for complete pairwise comparison matrices (Crawford & Williams, 1985; De Graan, 1980; de Jong, 1984; Rabinowitz, 1976; Williams & Crawford, 1980). The objective function (1) takes only known comparisons into account; in other words, the estimates of the unknown comparisons are assumed to fit perfectly.
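As a concrete illustration (our own sketch, not code from the paper), problem (1) can be solved directly: after the substitution \(r_i = \log w_i\), the objective becomes an unconstrained quadratic whose normal equations involve the Laplacian of the graph of known comparisons, so the weights follow from a linear system. Missing entries are encoded here as `np.nan`; the function name is ours.

```python
import numpy as np

def lls_weights(A):
    """Logarithmic least squares weights for an incomplete matrix.

    A is an n x n array with np.nan marking missing entries (*).
    """
    n = A.shape[0]
    L = np.zeros((n, n))   # Laplacian of the graph of known comparisons
    b = np.zeros(n)        # right-hand side: sums of log a_ij over known j
    for i in range(n):
        for j in range(n):
            if i != j and not np.isnan(A[i, j]):
                L[i, i] += 1.0
                L[i, j] -= 1.0
                b[i] += np.log(A[i, j])
    # For a connected graph the system has a one-dimensional kernel
    # (the constant vector), so we pin log w_1 = 0 and normalise at the end.
    L[0, :] = 0.0
    L[0, 0] = 1.0
    b[0] = 0.0
    w = np.exp(np.linalg.solve(L, b))
    return w / w.sum()
```

For a matrix whose known comparisons form a spanning tree, the recovered ratios reproduce the known entries exactly, in line with the consistent completion discussed in Sect. 3.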

Definition 4

Logarithmic least squares optimal completion: Let \(\textbf{A} = \left[ a_{ij} \right] \in \mathcal {A}_{*}^{n \times n}\) be an incomplete pairwise comparison matrix. The logarithmic least squares optimal completion is \(\textbf{B} = \left[ b_{ij} \right] \) if \(b_{ij} = a_{ij}\) for all \(a_{ij} \ne *\) and \(b_{ij} = w_i / w_j\) otherwise, where \(\textbf{w} = \left[ w_i \right] \) is the optimal solution of (1).

Another natural idea is to replace the m missing comparisons with variables \(\textbf{x} \in \mathbb {R}^m_+\), choose an inconsistency index (see Brunelli (2018) for a comprehensive survey), and minimise the inconsistency of the resulting complete pairwise comparison matrix \(\textbf{A}(\textbf{x})\). It can be seen that the logarithmic least squares optimal completion minimises the geometric inconsistency index (Crawford & Williams, 1985; Aguarón & Moreno-Jiménez, 2003).

According to Saaty (1977), the level of inconsistency is a monotonic function of the dominant eigenvalue for any complete pairwise comparison matrix. Thus, the corresponding optimisation problem is as follows.

Definition 5

Eigenvector method (Shiraishi & Obata, 2002; Shiraishi et al., 1998): Let \(\textbf{A} = \left[ a_{ij} \right] \in \mathcal {A}_{*}^{n \times n}\) be an incomplete pairwise comparison matrix. The weight vector \(\textbf{w} = \left[ w_i \right] \in \mathbb {R}^n_+\) provided by the eigenvector method is the optimal solution \(\textbf{w}\) of the following problem:

$$\begin{aligned}&\min _{\textbf{x} \in \mathbb {R}^m_+} \lambda _{\max } \left( \textbf{A}(\textbf{x}) \right) \nonumber \\&\lambda _{\max } \left( \textbf{A}(\textbf{x}) \right) \textbf{w} = \textbf{A}(\textbf{x}) \textbf{w}. \end{aligned}$$
(2)

According to Definition 5, the variables in \(\textbf{x}\) are determined to minimise the dominant eigenvalue of the corresponding complete pairwise comparison matrix, and the priorities are given by the associated right eigenvector as suggested in the AHP methodology.
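Problem (2) can also be tackled numerically; the sketch below (our own code, not the authors') treats the logarithms of the missing entries above the diagonal as decision variables and minimises the dominant eigenvalue with an off-the-shelf derivative-free method.

```python
import numpy as np
from scipy.optimize import minimize

def eigenvalue_completion(A):
    """Eigenvalue optimal completion and weights for an incomplete matrix.

    A is an n x n array with np.nan marking missing entries.
    """
    n = A.shape[0]
    miss = [(i, j) for i in range(n) for j in range(i + 1, n)
            if np.isnan(A[i, j])]

    def filled(t):
        # t holds the logs of the missing entries; reciprocity is enforced.
        B = A.copy()
        for (i, j), tk in zip(miss, t):
            B[i, j], B[j, i] = np.exp(tk), np.exp(-tk)
        return B

    def lam_max(t):
        return np.max(np.linalg.eigvals(filled(t)).real)

    res = minimize(lam_max, np.zeros(len(miss)), method='Nelder-Mead',
                   options={'xatol': 1e-10, 'fatol': 1e-12})
    B = filled(res.x)
    vals, vecs = np.linalg.eig(B)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)  # Perron eigenvector
    return B, w / w.sum()
```

The derivative-free solver is adequate here because the dominant eigenvalue is a convex function of the logarithms of the variables, as recalled in Sect. 3.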

Definition 6

Eigenvalue optimal completion: Let \(\textbf{A} = \left[ a_{ij} \right] \in \mathcal {A}_{*}^{n \times n}\) be an incomplete pairwise comparison matrix. The eigenvalue optimal completion is \(\textbf{B} = \left[ b_{ij} \right] \) such that \(b_{ij} = a_{ij}\) for all \(a_{ij} \ne *\), while for \(a_{ij} = *\), the value of \(b_{ij}\) is given by the corresponding coordinate of the vector \(\textbf{x}\) associated with the optimal solution of (2).

Graph representation offers a convenient tool to classify incomplete pairwise comparison matrices (Szádoczki et al., 2022).

Definition 7

Graph representation: Let \(\textbf{A} = \left[ a_{ij} \right] \in \mathcal {A}_{*}^{n \times n}\) be an incomplete pairwise comparison matrix. It is represented by the undirected graph \(G = (V,E)\) such that

  • there is a one-to-one mapping between the vertex set \(V = \left\{ 1,2, \dots ,n \right\} \) and the alternatives;

  • the edge set E is determined by the known comparisons: \((i,j) \in E \iff a_{ij} \ne *\).

These concepts can be illustrated by the following example.

Example 1

Consider the following incomplete pairwise comparison matrix of order four, in which \(a_{13}\) (thus \(a_{31}\)) and \(a_{24}\) (thus \(a_{42}\)) remain undefined:

$$\begin{aligned} \textbf{A} = \left[ \begin{array}{cccc} 1 & a_{12} & * & a_{14} \\ a_{21} & 1 & a_{23} & * \\ * & a_{32} & 1 & a_{34} \\ a_{41} & * & a_{43} & 1 \\ \end{array} \right] . \end{aligned}$$
Fig. 1: The graph representation of the incomplete pairwise comparison matrix \(\textbf{A}\) in Example 1

Figure 1 shows the associated graph G.
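The graph representation and its connectivity, which Lemma 1 below shows to govern uniqueness, can be checked with a few lines of code. The sketch below is our own encoding (with `None` playing the role of \(*\)); the numerical entries are illustrative, since Example 1 leaves the comparisons symbolic.

```python
from collections import deque

def is_connected(A):
    """Breadth-first search on the graph of known comparisons (Definition 7)."""
    n = len(A)
    adj = {i: [j for j in range(n) if j != i and A[i][j] is not None]
           for i in range(n)}
    seen, queue = {0}, deque([0])
    while queue:
        i = queue.popleft()
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return len(seen) == n

# The pattern of Example 1: a13 and a24 (with their reciprocals) are missing.
# The values 2, 3, 4, 5 are arbitrary placeholders for the symbolic entries.
A = [[1,    2,    None, 4   ],
     [1/2,  1,    3,    None],
     [None, 1/3,  1,    5   ],
     [1/4,  None, 1/5,  1   ]]
```

The graph of Example 1 is the four-cycle of Fig. 1, hence connected; splitting the alternatives into two never-compared pairs would make the test fail.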

The necessary and sufficient conditions for the uniqueness of the logarithmic least squares and the eigenvalue optimal completions, respectively, are the same.

Lemma 1

The optimal solution to both optimisation problems (1) and (2) is unique if and only if the graph representing the incomplete pairwise comparison matrix is connected.

Proof

See Bozóki et al. (2010, Theorems 2 and 4). \(\square \)

Lemma 1 imposes a natural requirement for uniqueness: if two disjoint sets of alternatives are never compared with each other, then the corresponding graph is disconnected, and no relative priorities can be established between the two sets.

Naturally, Lemma 1 does not imply that the logarithmic least squares and the eigenvalue optimal completions coincide if graph G is connected. This inspires our research question: When are the corresponding complete pairwise comparison matrices the same?

3 The main result

It is worth beginning the investigation with problems involving a small number of alternatives. The case of \(n=3\) is almost trivial. If one comparison is missing and the associated graph is connected, then the graph is a spanning tree. Consequently, there exists a unique consistent completion \(\textbf{B} = \left[ b_{ij} \right] \), for which the optimum of (1) is zero and the optimum of (2) equals n; that is, both objective functions reach their theoretical minimum. In other words, if \(a_{ik} = *\), then \(b_{ik} = a_{ij} a_{jk}\).
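This can be illustrated numerically for arbitrary values, say \(a_{12} = 2\) and \(a_{23} = 3\) (our choice): the completion \(b_{13} = a_{12} a_{23}\) makes the matrix consistent, so objective (1) is zero and the dominant eigenvalue attains its theoretical minimum \(n = 3\).

```python
import numpy as np

# Consistent completion of a 3 x 3 matrix with one missing comparison:
# b13 = a12 * a23 restores consistency, so lambda_max equals n = 3.
a12, a23 = 2.0, 3.0
b13 = a12 * a23
B = np.array([[1.0,     a12,     b13],
              [1 / a12, 1.0,     a23],
              [1 / b13, 1 / a23, 1.0]])
lam = np.max(np.linalg.eigvals(B).real)   # dominant eigenvalue of B
```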

Somewhat surprisingly, the two optimal completions coincide even for \(n=4\).

Theorem 1

Let \(\textbf{A} = \left[ a_{ij} \right] \in \mathcal {A}_{*}^{4 \times 4}\) be an incomplete pairwise comparison matrix of size four such that the associated graph G is connected. The logarithmic least squares and the eigenvalue optimal completions are the same, independently of the number of unknown comparisons.

Proof

First, we show that the pairwise comparison matrix can be considered in the form of

$$\begin{aligned} \textbf{A} = \left[ \begin{array}{cccc} 1 & 1 & y & x \\ 1 & 1 & 1 & z \\ 1/y & 1 & 1 & 1 \\ 1/x & 1/z & 1 & 1 \end{array} \right] \end{aligned}$$
(3)

without loss of generality.

For the eigenvector method, this claim follows from Fernandes and Furtado (2022, Formula (2)).

The sufficiency of representation in the form (3) for the logarithmic least squares method follows from the fact that if the ith row is multiplied by a positive scalar (and, simultaneously, the ith column is divided by it), then the corresponding coordinate \(w_i\) of the optimal weight vector \(\textbf{w} = \left[ w_i \right] \) is the (same) multiple of the original one before normalisation. Multiply the first, second, and fourth rows of a general pairwise comparison matrix

$$\begin{aligned} \left[ \begin{array}{cccc} 1 & a & b & c \\ 1/a & 1 & d & e \\ 1/b & 1/d & 1 & f \\ 1/c & 1/e & 1/f & 1 \end{array} \right] \end{aligned}$$

by \(1/(ad)\), \(1/d\), and f, respectively, and divide the first, second, and fourth columns by these numbers to get

$$\begin{aligned} \left[ \begin{array}{cccc} 1 & 1 & b/(ad) & c/(adf) \\ 1 & 1 & 1 & e/(df) \\ ad/b & 1 & 1 & 1 \\ adf/c & df/e & 1 & 1 \end{array} \right] , \end{aligned}$$

which has exactly the form of (3).
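Since the row and column operations above amount to a diagonal similarity \(\textbf{D} \textbf{A} \textbf{D}^{-1}\), the spectrum is preserved. A quick numerical check with arbitrary entries (the values of a, …, f below are our own choices) confirms both the invariance and the resulting pattern:

```python
import numpy as np

# Diagonal similarity D A D^{-1}: preserves eigenvalues and produces
# the normalised pattern (3). The entries a..f are arbitrary illustrations.
a, b, c, d, e, f = 2.0, 3.0, 4.0, 5.0, 6.0, 7.0
A = np.array([[1,     a,     b,     c],
              [1 / a, 1,     d,     e],
              [1 / b, 1 / d, 1,     f],
              [1 / c, 1 / e, 1 / f, 1]])
D = np.diag([1 / (a * d), 1 / d, 1.0, f])
B = D @ A @ np.linalg.inv(D)   # has ones in positions (1,2), (2,3), (3,4)
```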

If the coordinate transformation \(x=e^t\), \(y=e^u\), \(z=e^v\) is applied, \(\lambda _{\max }(x,y,z) = \lambda _{\max } \left( e^t,e^u,e^v \right) \) becomes a strictly convex function in \(t,u,v \in \mathbb {R}\) (Bozóki et al. 2010, Section 3). This makes the first-order conditions sufficient for minimality.

Four possible cases shall be discussed.

Case 1: One comparison (x) is missing

If x is missing in (3), then the logarithmic least squares optimal completion is \(x = \sqrt{yz}\); see (1), Lemma 1, and the system of linear equations in the proof of Bozóki et al. (2010, Theorem 4).

Based on the calculations of Fernandes and Furtado (2022, Formulas (12) and (13)) for \(n=4\), the characteristic polynomial of matrix (3) is \(\lambda ^4 - 4\lambda ^3 + p \lambda + q\), where

$$\begin{aligned} p&= - z - \frac{1}{z} - y - \frac{1}{y} - \frac{x}{y} - \frac{y}{x} - \frac{x}{z} - \frac{z}{x} + 8, \text { and} \\ q&= - x - \frac{1}{x} + y + \frac{1}{y} + z + \frac{1}{z} + \frac{x}{y} + \frac{y}{x} + \frac{x}{z} + \frac{z}{x} - \frac{y}{z} - \frac{z}{y} - \frac{x}{yz} - \frac{yz}{x} - 2. \end{aligned}$$

Symbolic calculations by Maple reveal that

$$\begin{aligned} \left. \frac{\partial \lambda _i}{\partial x} \right| _{x=\sqrt{yz}} = 0 \qquad \text { for all } {i=1,2,3,4}, \end{aligned}$$

namely, all eigenvalues take an extremal value at \(x = \sqrt{yz}\). Consequently,

$$\begin{aligned} \left. \frac{\partial \lambda _{\max }}{\partial x}\right| _{x=\sqrt{yz}} = 0, \end{aligned}$$

which, combined with the strict convexity argument above, implies that \(\lambda _{\max }\) is indeed minimised at \(x = \sqrt{yz}\).
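The symbolic claim of Case 1 can be spot-checked numerically; below, the values \(y = 2\) and \(z = 5\) are arbitrary choices of ours, and the dominant eigenvalue is evaluated over a multiplicative grid of candidate completions.

```python
import numpy as np

def lam_max(x, y, z):
    """Dominant eigenvalue of matrix (3) for a given completion."""
    A = np.array([[1.0,   1.0,   y,   x  ],
                  [1.0,   1.0,   1.0, z  ],
                  [1 / y, 1.0,   1.0, 1.0],
                  [1 / x, 1 / z, 1.0, 1.0]])
    return np.max(np.linalg.eigvals(A).real)

y, z = 2.0, 5.0
x_star = np.sqrt(y * z)                               # claimed optimum
grid = x_star * np.exp(np.linspace(-3.0, 3.0, 201))   # grid containing x_star
best = min(grid, key=lambda x: lam_max(x, y, z))      # grid minimiser
```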

Case 2: Two comparisons (x, y) are missing in the same row/column

If x and y are missing in (3), then the logarithmic least squares optimal completion is given by \(x = {z}^{2/3}\) and \(y = {z}^{1/3}\). According to symbolic calculations,

$$\begin{aligned} \left. \frac{\partial \lambda _i}{\partial x}\right| _{x={z}^{2/3}, \, y={z}^{1/3}} = \left. \frac{\partial \lambda _i}{\partial y}\right| _{x={z}^{2/3}, \, y={z}^{1/3}} = 0 \qquad \text { for all } {i=1,2,3,4}, \end{aligned}$$

thus,

$$\begin{aligned} \left. \frac{\partial \lambda _{\max }}{\partial x}\right| _{x={z}^{2/3}, \, y={z}^{1/3}} = \left. \frac{\partial \lambda _{\max }}{\partial y}\right| _{x={z}^{2/3}, \, y={z}^{1/3}} = 0. \end{aligned}$$
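These vanishing derivatives can be verified numerically by central finite differences; the value \(z = 3\) below is an arbitrary choice of ours.

```python
import numpy as np

def lam_max(x, y, z):
    """Dominant eigenvalue of matrix (3) for a given completion."""
    A = np.array([[1.0,   1.0,   y,   x  ],
                  [1.0,   1.0,   1.0, z  ],
                  [1 / y, 1.0,   1.0, 1.0],
                  [1 / x, 1 / z, 1.0, 1.0]])
    return np.max(np.linalg.eigvals(A).real)

# Case 2: x and y missing; logarithmic least squares completion.
z = 3.0
x, y = z ** (2 / 3), z ** (1 / 3)
h = 1e-5
d_x = (lam_max(x + h, y, z) - lam_max(x - h, y, z)) / (2 * h)
d_y = (lam_max(x, y + h, z) - lam_max(x, y - h, z)) / (2 * h)
```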

Case 3: Two comparisons (y, z) are missing in different rows/columns

If y and z are missing in (3), then the logarithmic least squares optimal completion is given by \(y = z = \sqrt{x}\): the known comparisons form a four-cycle in which the two missing entries play symmetric roles, and, accordingly, the coefficients p and q above are symmetric in y and z. According to symbolic calculations,

$$\begin{aligned} \left. \frac{\partial \lambda _i}{\partial y}\right| _{y=z=\sqrt{x}} = \left. \frac{\partial \lambda _i}{\partial z}\right| _{y=z=\sqrt{x}} = 0 \qquad \text { for all } {i=1,2,3,4}, \end{aligned}$$

thus,

$$\begin{aligned} \left. \frac{\partial \lambda _{\max }}{\partial y}\right| _{y=z=\sqrt{x}} = \left. \frac{\partial \lambda _{\max }}{\partial z}\right| _{y=z=\sqrt{x}} = 0. \end{aligned}$$

Case 4: Three comparisons (x, y, z) are missing

If x, y, and z are all missing in (3), then there is a unique consistent completion given by \(x = y = z = 1\), and the minimum of \(\lambda _{\max }\) is equal to 4.

The proof is complete because the associated graph G is necessarily disconnected if at least four comparisons are missing: at most two edges remain, which cannot connect four vertices. \(\square \)

If the graph G representing the incomplete pairwise comparison matrix \(\textbf{A} \in \mathcal {A}_{*}^{4 \times 4}\) is disconnected, then both optimisation problems (1) and (2) have an infinite number of solutions.

Theorem 1 cannot be generalised by increasing the number of alternatives.

Lemma 2

The logarithmic least squares and the eigenvalue optimal completions might be different for incomplete pairwise comparison matrices of order five.

Proof

Consider the following pairwise comparison matrix:

$$\begin{aligned} \textbf{A} = \left[ \begin{array}{ccccc} 1 & 1/2 & 5 & 1/6 & * \\ 2 & 1 & 4 & 1/2 & 1/6 \\ 1/5 & 1/4 & 1 & 1/6 & 1/7 \\ 6 & 2 & 6 & 1 & 1/2 \\ * & 6 & 7 & 2 & 1 \\ \end{array} \right] . \end{aligned}$$

Let \(\textbf{B}\) and \(\textbf{C}\) denote the logarithmic least squares and the eigenvalue optimal completions, respectively. It can be checked that \(b_{15} = 0.1705\) and \(c_{15} = 0.1798\); that is, the estimates of the missing comparison between the first and the last alternatives differ between the two methods. This is expected since the objective functions to be minimised are different, too. \(\square \)
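Both values can be reproduced numerically (the code below is our own sketch). The logarithmic least squares estimate even has a closed form here, because alternatives 1 and 5 are each compared exactly with alternatives 2, 3, and 4.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# The counterexample of Lemma 2; np.nan marks the missing comparison a15.
A = np.array([[1,      1/2, 5, 1/6, np.nan],
              [2,      1,   4, 1/2, 1/6],
              [1/5,    1/4, 1, 1/6, 1/7],
              [6,      2,   6, 1,   1/2],
              [np.nan, 6,   7, 2,   1]])

# LLS: subtracting the normal equations of vertices 1 and 5 (both of degree 3
# with the same neighbours {2, 3, 4}) gives
# 3 (log w1 - log w5) = log(a12 a13 a14) - log(a52 a53 a54).
b15 = ((1/2 * 5 * 1/6) / (6 * 7 * 2)) ** (1/3)

def lam_max(t):
    # Dominant eigenvalue when the missing entry is exp(t).
    B = A.copy()
    B[0, 4], B[4, 0] = np.exp(t), np.exp(-t)
    return np.max(np.linalg.eigvals(B).real)

c15 = np.exp(minimize_scalar(lam_max).x)   # eigenvalue optimal estimate
```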

Remark 1

By cloning the second alternative, the example of Lemma 2 can be used to verify that the logarithmic least squares and the eigenvalue optimal completions might be different for incomplete pairwise comparison matrices of any order higher than five.

The incomplete pairwise comparison matrix used as a counterexample in the proof of Lemma 2 is minimal with respect to both the number of alternatives and the number of missing entries. However, Lemma 2 does not mean that the logarithmic least squares and the eigenvalue optimal completions will always be different if the number of alternatives is at least five. For example, the two approaches imply the same completion if the incomplete pairwise comparison matrix can be made consistent with an appropriate choice of the missing entries.

4 Conclusion

In this paper, we have considered the optimal completion of a pairwise comparison matrix with missing entries if the unknown elements are substituted by variables and the inconsistency of the associated complete matrix is minimised. The logarithmic least squares and the eigenvalue optimal completions are found to be the same if the number of alternatives does not exceed four.

The finding is somewhat surprising because the logarithmic least squares and eigenvector methods can provide different priority vectors for pairwise comparison matrices of order four. Furthermore, some theoretical shortcomings of the eigenvector method, such as left-right asymmetry (Bozóki & Rapcsák, 2008; Ishizaka & Lusti, 2006; Johnson et al., 1979) and Pareto inefficiency (Blanquero et al., 2006; Bozóki & Fülöp, 2018), might be a problem if a decision-making problem contains four alternatives. According to Theorem 1, this issue becomes relevant only for \(n \ge 5\) in the case of incomplete pairwise comparison matrices. Finally, since both approaches lead to the same optimal completion for \(n \le 4\), one can “expect” other completion methods for pairwise comparison matrices with missing entries to provide the same solution. Consequently, Theorem 1 may serve as a kind of axiom for these techniques, eleven of which are discussed by Tekile et al. (2023).

Our result also brings up several interesting research questions such as:

  • Are there other classes of incomplete pairwise comparison matrices where the two approaches lead to the same estimation of missing entries?

  • Does the equivalence hold if the optimal completion is obtained by minimising a third inconsistency index?

  • When does an incomplete pairwise comparison matrix have only one reasonable optimal completion?

Hopefully, all these directions will be investigated in the future.