1 Introduction

Consider the linear system

$$\begin{aligned} A\varvec{x}^*= \varvec{b} \end{aligned}$$
(1)

where \(A \in \mathbb {R}^{d \times d}\) is an invertible matrix, \(\varvec{b} \in \mathbb {R}^d\) is a given vector, and \(\varvec{x}^*\in \mathbb {R}^d\) is an unknown to be determined. Recent work (Hennig 2015; Cockayne et al. 2018) has constructed iterative solvers for this problem which output probability measures, constructed to quantify uncertainty due to terminating the algorithm before the solution has been identified completely. On the surface the approaches in these two works appear different: in the matrix-based inference (MBI) approach of Hennig (2015), a posterior is constructed on the matrix \(A^{-\!1}\), while in the solution-based inference (SBI) method of Cockayne et al. (2018) a posterior is constructed on the solution vector \(\varvec{x}^*\).

These algorithms are instances of probabilistic numerical methods (PNM) in the sense of Hennig et al. (2015) and Cockayne et al. (2017). PNM are numerical methods which output posterior distributions that quantify uncertainty due to discretisation error. An interesting property of PNM is that they often result in posterior distributions whose mean element coincides with the solution given by a classical numerical method for the problem at hand. The relationship between PNM and classical solvers has been explored for integration (e.g. Karvonen and Sarkka 2017), ODE solvers (Schober et al. 2014, 2019; Kersting et al. 2018) and PDE solvers (Cockayne et al. 2016) in some generality. For linear solvers, attention has thus far been restricted to the conjugate gradient (CG) method. Since CG is but a single member of a larger class of iterative solvers, and applicable only if the matrix A is symmetric and positive definite, extending the probabilistic interpretation is an interesting endeavour. Probabilistic interpretations provide an alternative perspective on numerical algorithms and can also provide extensions such as the ability to exploit noisy or corrupted observations. The probabilistic view has also been used to develop new numerical methods (Xi et al. 2018), and Bayesian PNM can be incorporated rigorously into pipelines of computation (Cockayne et al. 2017).

Preconditioning—mapping Eq. (1) to a better conditioned system with the same solution—is key to the fast convergence of iterative linear solvers, particularly those based upon Krylov methods (Liesen and Strakos 2012). The design of preconditioners has been referred to as “a combination of art and science” (Saad 2003, p. 283). In this work, we also provide a new, probabilistic interpretation of preconditioning as a form of prior information.

1.1 Contribution

This text contributes three primary insights:

  1.

    It is shown that, for particular choices of the generative model, matrix-based inference (MBI) and solution-based inference (SBI) can be equivalent (Sect. 2).

  2.

    A general probabilistic interpretation of projection methods (Saad 2003) is described (Sect. 3.1), leading to a probabilistic interpretation of the generalised minimal residual method (GMRES; Saad and Schultz 1986) in Sect. 6. The connection to CG is expanded and made more concise in Sect. 5.

  3.

    A probabilistic interpretation of preconditioning is presented in Sect. 4.

Most of the proofs are presented inline; lengthier proofs are deferred to “Appendix B”. While finite numerical precision is an important consideration, the contributions of this paper are predominantly theoretical and its impact will not be considered.

1.2 Notation

For a symmetric positive-definite matrix \(M \in \mathbb {R}^{d\times d}\) and two vectors \(\varvec{v}, \varvec{w} \in \mathbb {R}^d\), we write \(\langle \varvec{v}, \varvec{w} \rangle _M = \varvec{v}^{\top }M \varvec{w}\) for the inner product induced by \(M\), and \(\Vert \varvec{v}\Vert _M^2 = \langle \varvec{v}, \varvec{v} \rangle _M\) for the corresponding norm.

A set of vectors \(\varvec{s}_1, \dots , \varvec{s}_m\) is called \(M\)-orthogonal or M-conjugate if \(\langle \varvec{s}_i, \varvec{s}_j \rangle _M = 0\) for \(i\ne j\), and \(M\)-orthonormal if, in addition, \(\Vert \varvec{s}_i\Vert _M = 1\) for \(1\le i\le m\).

For a square matrix \(A =\begin{bmatrix} \varvec{a}_1&\ldots&\varvec{a}_d\end{bmatrix}^{\top }\in \mathbb {R}^{d\times d}\), the vectorisation operator \(\text {vec} : \mathbb {R}^{d\times d} \rightarrow \mathbb {R}^{d^2}\) stacks the rows of A (Footnote 1) into one long vector:

$$\begin{aligned} \overrightarrow{A} \equiv \text {vec}(A) = \begin{bmatrix} \varvec{a}_1 \\ \vdots \\ \varvec{a}_d \end{bmatrix},\quad \text {with}\quad \left[ \overrightarrow{A}\right] _{(ij)} = [A]_{ij} . \end{aligned}$$

The Kronecker product of two matrices \(A, B \in \mathbb {R}^{d\times d}\) is \(A \otimes B\) with \([A\otimes B]_{(ij),(k\ell )} = [A]_{ik}[B]_{j\ell }\). A list of its properties is provided in “Appendix A”.

The Krylov space of order m generated by the matrix \(A\in \mathbb {R}^{d\times d}\) and the vector \(\varvec{b}\in \mathbb {R}^d\) is

$$\begin{aligned} K_m(A, \varvec{b}) = \text {span}\left( \varvec{b}, A\varvec{b}, A^2\varvec{b}, \dots , A^{m-1}\varvec{b}\right) . \end{aligned}$$

We will slightly abuse notation to describe shifted and scaled subspaces of \(\mathbb {R}^d\): let \({\mathbb {S}}\) be an m-dimensional linear subspace of \(\mathbb {R}^d\) with basis \(\{\varvec{s}_1, \dots , \varvec{s}_m\}\). Then, for a vector \(\varvec{v} \in \mathbb {R}^d\) and a matrix \(M \in \mathbb {R}^{d\times d}\), let

$$\begin{aligned} \varvec{v} + M {\mathbb {S}} = \text {span}(\varvec{v} + M\varvec{s}_1, \dots , \varvec{v} + M\varvec{s}_m). \end{aligned}$$

2 Probabilistic linear solvers

Several probabilistic frameworks describing the solution of Eq. (1) have been constructed in recent years. They primarily differ in the subject of inference: SBI approaches such as Cockayne et al. (2018), of which BayesCG is an example, place a prior distribution on the solution \(\varvec{x}^*\) of Eq. (1). Conversely, the MBI approach of Hennig (2015) and Bartels and Hennig (2016) places a prior on \(A^{-\!1}\), treating the action of the inverse operator as an unknown to be inferred.Footnote 2 This section reviews each approach and adds some new insights. In particular, SBI can be viewed as a strict special case of MBI (Sect. 2.4).

Throughout this section, we will assume that the search directions \(S_m\) in \(S_m^\top A\varvec{x}^*= S_m^\top \varvec{b}\) are independent of \(\varvec{x}^*\). Generally speaking, this is not the case for projection methods, in which the solution space often depends strongly on \(\varvec{b}\), as described in Sects. 5 and 6. This disconnect is the source of the poor uncertainty quantification reported in Cockayne et al. (2018) and shown also to hold for the methods in this work in Sect. 6.4. This will not be examined in further detail in this work, though it remains an important area of development for probabilistic linear solvers.

2.1 Background on Gaussian conditioning

The propositions in this section follow from the following two classic properties of Gaussian distributions.

Lemma 1

Let \(\varvec{x} \in \mathbb {R}^d\) be Gaussian distributed with density \(p(\varvec{x})=\mathcal {N}(\varvec{x}; \varvec{x}_0, \varSigma )\) for \(\varvec{x}_0 \in \mathbb {R}^d\) and \(\varSigma \in \mathbb {R}^{d\times d}\) a positive semi-definite matrix. Let \(M \in \mathbb {R}^{n \times d}\) and \(\varvec{z} \in \mathbb {R}^n\). Then, \(\varvec{v} = M \varvec{x} + \varvec{z}\) is also Gaussian, with

$$\begin{aligned} p(\varvec{v}) = \mathcal {N}(\varvec{v}; M\varvec{x}_0 + \varvec{z}, M\varSigma M^{\top }). \end{aligned}$$

Lemma 2

Let \(\varvec{x} \in \mathbb {R}^d\) be distributed as in Lemma 1, and let observations \(\varvec{y} \in \mathbb {R}^n\) be generated from the conditional density

$$\begin{aligned} p(\varvec{y}\mid \varvec{x})=\mathcal {N}(\varvec{y}; M \varvec{x} + \varvec{z}, \varLambda ) \end{aligned}$$

with \(M \in \mathbb {R}^{n \times d}\), \(\varvec{z} \in \mathbb {R}^n\), and \(\varLambda \in \mathbb {R}^{n \times n}\) again positive semi-definite. Then, the associated conditional distribution on \(\varvec{x}\) after observing \(\varvec{y}\) is again Gaussian, with

$$\begin{aligned} p(\varvec{x}\mid \varvec{y})&=\mathcal {N}(\varvec{x}; \bar{\varvec{x}}, \bar{\varSigma }) \qquad \text {where}\\ \bar{\varvec{x}}&= \varvec{x}_0+\varSigma M^{\top }(M\varSigma M^{\top }+ \varLambda )^{-\!1}(\varvec{y} - M\varvec{x}_0 - \varvec{z}) \\ \bar{\varSigma }&= \varSigma - \varSigma M^{\top }(M\varSigma M^{\top }+ \varLambda )^{-\!1}M\varSigma . \end{aligned}$$

This formula also applies if \(\varLambda = 0\), i.e. observations are made without noise, with the caveat that if \(M\varSigma M^{\top }\) is singular, the inverse should be interpreted as a pseudo-inverse.
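Both lemmas translate directly into a few lines of linear algebra. The following sketch is our own illustration (the function and variable names are not from the paper); the pseudo-inverse implements the noise-free caveat just mentioned.

```python
import numpy as np

def affine_marginal(x0, Sigma, M, z):
    """Lemma 1: distribution of v = M x + z when x ~ N(x0, Sigma)."""
    return M @ x0 + z, M @ Sigma @ M.T

def condition(x0, Sigma, M, z, y, Lambda=None):
    """Lemma 2: posterior of x after observing y ~ N(M x + z, Lambda)."""
    G = M @ Sigma @ M.T
    if Lambda is not None:
        G = G + Lambda
    gain = Sigma @ M.T @ np.linalg.pinv(G)   # pinv covers the noise-free, singular case
    x_bar = x0 + gain @ (y - M @ x0 - z)
    Sigma_bar = Sigma - gain @ M @ Sigma
    return x_bar, Sigma_bar
```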

2.2 Solution-based inference

To phrase the solution of Eq. (1) as a form of probabilistic inference, Cockayne et al. (2018) consider a Gaussian prior over the solution \(\varvec{x}^*\), and condition on observations provided by a set of search directions \(\varvec{s}_1, \dots , \varvec{s}_m\), \(m < d\). Let \(S_m \in \mathbb {R}^{d \times m}\) be given by \(S_m = [\varvec{s}_1, \ldots , \varvec{s}_m]\), and let information be given by \(\varvec{y}_m:=S_m^{\top }A\varvec{x}^*=S_m^{\top }\varvec{b}\). Since the information is a linear projection of \(\varvec{x}^*\), the posterior distribution is a Gaussian distribution on \(\varvec{x}^*\):

Lemma 3

(Cockayne et al. 2018) Assume that the columns of \(S_m\) are linearly independent. Consider the prior

$$\begin{aligned} p(\varvec{x})=\mathcal {N}(\varvec{x}; \varvec{x}_0, \varSigma _0). \end{aligned}$$

The posterior from SBI is then given by

$$\begin{aligned} p(\varvec{x}\mid \varvec{y}_m)=\mathcal {N}(\varvec{x}; \varvec{x}_m, \varSigma _m) \end{aligned}$$

where

$$\begin{aligned} \varvec{x}_m&= \varvec{x}_0 + \varSigma _0 A^{\top }S_m (S_m^{\top }A\varSigma _0 A^{\top }S_m)^{-\!1}S_m^{\top }\varvec{r}_0\nonumber \\ \varSigma _m&= \varSigma _0 - \varSigma _0 A^{\top }S_m(S_m^{\top }A\varSigma _0A^{\top }S_m)^{-\!1}S_m^{\top }A\varSigma _0, \end{aligned}$$
(2)

and \(\varvec{r}_0=\varvec{b}-A\varvec{x}_0\).
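For reference, Eq. (2) can be transcribed directly into a dense sketch with an \({\mathcal {O}}(m^3)\) inner solve; the function name is ours and \(\varSigma _0\) is assumed symmetric. The iterative algorithms discussed later avoid forming these quantities explicitly.

```python
import numpy as np

def sbi_posterior(A, b, S, x0, Sigma0):
    """SBI posterior of Lemma 3 / Eq. (2) after observing y_m = S^T A x* = S^T b."""
    r0 = b - A @ x0
    U = Sigma0 @ A.T @ S                # Sigma_0 A^T S_m
    G = np.linalg.inv(S.T @ A @ U)      # (S_m^T A Sigma_0 A^T S_m)^{-1}
    x_m = x0 + U @ G @ (S.T @ r0)
    Sigma_m = Sigma0 - U @ G @ U.T      # U^T = S_m^T A Sigma_0 since Sigma_0 is symmetric
    return x_m, Sigma_m
```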

The following proposition establishes an optimality property of the posterior mean \(\varvec{x}_m\). This is a relatively well-known property of Gaussian inference, which will prove useful in subsequent sections.

Proposition 4

If \({\mathbb {S}}_m = \text {range}(S_m)\), then the posterior mean in Lemma 3 satisfies the optimality property

$$\begin{aligned} \varvec{x}_m = {{\,\mathrm{arg\,min}\,}}_{\varvec{x} \in \varvec{x}_0 + \varSigma _0 A^{\top }{\mathbb {S}}_m} { \Vert \varvec{x} - \varvec{x}^*\Vert _{\varSigma _0^{-\!1}}}. \end{aligned}$$

Proof

With the abbreviations \(X=\varSigma _0A^\top S_m\) and \(\varvec{y}=\varvec{x}^*-\varvec{x}_0\) the mean in Lemma 3 can be written as

$$\begin{aligned} \varvec{x}_m=\varvec{x}_0+X\varvec{c}_m, \end{aligned}$$

where

$$\begin{aligned} \varvec{c}_m=(X^\top \varSigma _0^{-\!1}X)^{-\!1}X^\top \varSigma _0^{-\!1}\varvec{y} \end{aligned}$$

is the solution of the weighted least squares problem (Golub and Van Loan 2013, Section 6.1)

$$\begin{aligned} \varvec{c}_m= & {} {{\,\mathrm{arg\,min}\,}}_{\varvec{c} \in \mathbb {R}^m}{\Vert X\varvec{c}-\varvec{y}\Vert _{\varSigma _0^{-\!1}}}\\= & {} {{\,\mathrm{arg\,min}\,}}_{\varvec{c} \in \mathbb {R}^m}{\Vert \varvec{x}_0+\varSigma _0 A^\top S_m\varvec{c}-\varvec{x}^*\Vert _{\varSigma _0^{-\!1}}}. \end{aligned}$$

This is equivalent to the desired statement. \(\square \)

2.3 Matrix-based inference

In contrast to SBI, the MBI approach of Hennig (2015) treats the matrix inverse \(A^{-\!1}\) as the unknown in the inference procedure. As in the previous section, search directions \(S_m\) yield matrix-vector products \(Y_m \in \mathbb {R}^{d\times m}\). In Hennig (2015), these arise from right-multiplying \(A\) with \(S_m\) (Footnote 3), i.e. \(Y_m = AS_m\). Note that

$$\begin{aligned} S_m = A^{-\!1}Y_m, \text { or, equivalently } \overrightarrow{S_m} = (I \otimes Y_m^{\top }) \overrightarrow{A^{-\!1}}. \end{aligned}$$
(3)

Thus, \(S_m\) is a linear transformation of \(A^{-\!1}\) and Lemma 2 can again be applied:

Lemma 5

(Lemma 2.1 in Hennig (2015))Footnote 4 Consider the prior

$$\begin{aligned} p\left( \overrightarrow{A^{-\!1}}\right) =\mathcal {N}\left( \overrightarrow{A^{-\!1}}; \overrightarrow{A^{-\!1}_0}, \varSigma _0 \otimes W_0\right) . \end{aligned}$$
(4)

Then, the posterior given the observations \(S_m = A^{-\!1}Y_m\) is given by

$$\begin{aligned} p\left( \overrightarrow{A^{-\!1}}\,\bigg |\, \overrightarrow{S_m}\right) =\mathcal {N}\left( \overrightarrow{A^{-\!1}}; \overrightarrow{A^{-\!1}_m}, \varSigma _0 \otimes W_m\right) \end{aligned}$$

with

$$\begin{aligned} A^{-\!1}_m&= A^{-\!1}_0 + (S_m - A^{-\!1}_0 Y_m)(Y_m^{\top }W_0Y_m)^{-\!1}Y_m^{\top }W_0 \\ W_m&= W_0 - W_0Y_m(Y_m^{\top }W_0Y_m)^{-\!1}Y_m^{\top }W_0. \end{aligned}$$

For linear solvers, the object of interest is \(\varvec{x}^*=A^{-\!1}\varvec{b}\). Writing \(A^{-\!1}\varvec{b}=(I\otimes \varvec{b}^{\top })\overrightarrow{A^{-\!1}}\), and again using Lemma 1, we see that the associated marginal is also Gaussian and given by

$$\begin{aligned} p(\varvec{x}\mid S, Y)=\mathcal {N}(\varvec{x}; A^{-\!1}_m \varvec{b}, \varvec{b}^{\top }W_m \varvec{b}\cdot \varSigma _0). \end{aligned}$$
(5)

In the Kronecker product specification for the prior covariance on \(A^{-\!1}\) from Eq. (4), the matrix \(\varSigma _0\) describes the dependence between the columns of \(A^{-\!1}\), while the matrix \(W_0\) captures the dependency between the rows of \(A^{-\!1}\). Note that in Lemma 5, the posterior covariance has the form \(\varSigma _0 \otimes W_m\). When compared to the prior covariance, \(\varSigma _0 \otimes W_0\), it is clear that the observations have conveyed no new information to the first term of the Kronecker product covariance.

The Kronecker structure of the prior covariance matrix in Eq. (4) is by no means the only option that facilitates tractable inference.Footnote 5 However, in the absence of literature exploring other approaches within MBI, we will assume throughout that MBI refers to the use of the Kronecker product prior covariance.
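To make Lemma 5 and the marginal of Eq. (5) concrete, a dense transcription might look as follows. The names are our own, and the explicit \(d\times d\) posterior mean is stored only for illustration; as discussed in Sect. 2.4, practical implementations avoid this.

```python
import numpy as np

def mbi_posterior(Ainv0, W0, S, Y):
    """Lemma 5: posterior over A^{-1} from right-multiplied information Y = A S."""
    G = np.linalg.solve(Y.T @ W0 @ Y, Y.T @ W0)   # (Y^T W0 Y)^{-1} Y^T W0
    Ainv_m = Ainv0 + (S - Ainv0 @ Y) @ G
    W_m = W0 - W0 @ Y @ G
    return Ainv_m, W_m

def solution_marginal(Ainv_m, W_m, Sigma0, b):
    """Eq. (5): implied Gaussian over x = A^{-1} b."""
    return Ainv_m @ b, (b @ W_m @ b) * Sigma0
```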

2.4 Equivalence of MBI and SBI

In practice, Hennig (2015) notes that inference on \(A^{-\!1}\) should be performed only implicitly, avoiding the \(d^2\) storage cost and the mathematical complexity of the operations involved in Lemma 5. This raises the question of when MBI is equivalent to SBI. Although, based on Lemma 1, one might suspect SBI and MBI to be equivalent, in fact the posterior from Lemma 5 is structurally different to the posterior in Lemma 3: after projecting into solution space, the posterior covariance in Lemma 5 is a scalar multiple of the matrix \(\varSigma _0\), which is not the case in general in Lemma 3.

However, the implied posterior over the solution vector can be made to coincide with the posterior from SBI if one considers observations in MBI as

$$\begin{aligned} S_m^\top = Y_m^\top A^{-1}. \end{aligned}$$
(6)

That is, as left-multiplications of A. We will refer to the observation model of Eq. (3) as right-multiplied information, and to Eq. (6) as left-multiplied information.

Proposition 6

Consider a Gaussian MBI prior

$$\begin{aligned} p\left( \overrightarrow{A^{-1}}\right) = {\mathcal {N}}\left( \overrightarrow{A^{-1}};\overrightarrow{A_0^{-1}}, \varSigma _0 \otimes W_0\right) , \end{aligned}$$

conditioned on the left-multiplied information of Eq. (6). The associated marginal on \(\varvec{x}\) is identical to the posterior on \(\varvec{x}\) arising in Lemma 3 from \(p(\varvec{x})= {\mathcal {N}}(\varvec{x};\varvec{x}_0, \varSigma _0)\) under the conditions

$$\begin{aligned}A_0^{-1} \varvec{b} = \varvec{x}_0 \quad \text {and}\quad \varvec{b}^\top W_0 \varvec{b} = 1.\end{aligned}$$

Proof

See “Appendix B”. \(\square \)

The first of the two conditions requires that the prior mean on the matrix inverse be consistent with the prior mean on the solution, which is natural. The second condition demands that, after projection into solution space, the relationship between the rows of \(A^{-\!1}\) modelled by \(W_0\) does not inflate the covariance \(\varSigma _0\). Note that this condition is trivial to enforce for an arbitrary covariance \(\bar{W_0}\) by setting \(W_0 = (\varvec{b}^{\top }\bar{W_0} \varvec{b})^{-1} \bar{W_0}\).
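Proposition 6 is straightforward to verify numerically on a small example. The sketch below is our own: it realises the row-stacking vec through NumPy's row-major reshape, builds the MBI posterior for left-multiplied information explicitly via Kronecker products, and compares the implied solution marginal with the SBI posterior of Lemma 3.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 6, 3
A = rng.standard_normal((d, d)) + d * np.eye(d)
b, x0 = rng.standard_normal(d), rng.standard_normal(d)
S = rng.standard_normal((d, m))
Sigma0 = np.eye(d)
L = rng.standard_normal((d, d)); W0 = L @ L.T
W0 /= b @ W0 @ b                                  # enforce b^T W_0 b = 1
Ainv0 = np.outer(x0, b) / (b @ b)                 # enforce A_0^{-1} b = x_0

# SBI posterior (Lemma 3)
r0 = b - A @ x0
U = Sigma0 @ A.T @ S
G = np.linalg.inv(S.T @ A @ U)
x_sbi, Sig_sbi = x0 + U @ G @ (S.T @ r0), Sigma0 - U @ G @ U.T

# MBI posterior with left-multiplied information S^T = Y^T A^{-1}, Y = A^T S
Y = A.T @ S
M = np.kron(Y.T, np.eye(d))                       # vec(Y^T A^{-1}) = M vec(A^{-1})
Sig_pr, mu_pr = np.kron(Sigma0, W0), Ainv0.reshape(-1)
gain = Sig_pr @ M.T @ np.linalg.inv(M @ Sig_pr @ M.T)
mu_post = mu_pr + gain @ ((S.T).reshape(-1) - M @ mu_pr)
Sig_post = Sig_pr - gain @ M @ Sig_pr

# Marginal of x = A^{-1} b, i.e. (I kron b^T) vec(A^{-1})
P = np.kron(np.eye(d), b[None, :])
assert np.allclose(P @ mu_post, x_sbi)
assert np.allclose(P @ Sig_post @ P.T, Sig_sbi)
```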

2.5 Remarks

The result in Proposition 6 shows that any result proven for SBI applies immediately to MBI with left-multiplied observations. Though MBI has more model parameters than SBI, there are situations in which this point of view is more appropriate. Unlike in SBI, the information obtained in MBI need not be specific to a particular solution vector \(\varvec{x}^*\) and thus can be propagated and recycled over several linear problems, similar to the notion of subspace recycling (Soodhalter et al. 2014). Secondly, MBI is able to utilise both left- and right-multiplied information, while SBI is restricted to left-multiplied information. This additional generality may prove useful in some applications.

3 Projection methods as inference

This section discusses a connection between probabilistic numerical methods for linear systems and the classic framework of projection methods for the iterative solution of linear problems. Section 3.1 reviews this established class of solvers, while Sect. 3.2 presents the novel results.

3.1 Background

Many iterative methods for linear systems, including CG and GMRES, belong to the class of projection methods (Saad 2003, p. 130f.). Saad describes a projection method as an iterative scheme in which, at each iteration, a solution vector \(\varvec{x}_m\) is constructed by projecting \(\varvec{x}^*\) into a solution space \({\mathbb {X}}_m\subset \mathbb {R}^d\), subject to the restriction that the residual \(\varvec{r}_m = \varvec{b} - A\varvec{x}_m\) is orthogonal to a constraint space \({\mathbb {U}}_m\subset \mathbb {R}^d\).

More formally, each iteration of a projection method is defined by two matrices \(X_m, U_m \in \mathbb {R}^{d\times m}\), and by a starting point \(\varvec{x}_0\). The matrices \(X_m\) and \(U_m\) each encode the solution and constraint spaces as \({\mathbb {X}}_m=\mathrm {range}(X_m)\) and \({\mathbb {U}}_m=\mathrm {range}(U_m)\). The projection method then constructs \(\varvec{x}_m\) as \(\varvec{x}_m = \varvec{x}_0 + X_m\varvec{\alpha }_m\) with \(\varvec{\alpha }_m\in \mathbb {R}^m\) determined by the constraint \(U_m^\top \varvec{r}_m = \varvec{0}\). This is possible only if \(U_m^{\top }AX_m\) is nonsingular, in which case one obtains

$$\begin{aligned} \varvec{\alpha }_m&= (U_m^{\top }AX_m)^{-1} U_m^{\top }\varvec{r}_0, \text { and thus} \end{aligned}$$
(7)
$$\begin{aligned} \varvec{x}_m&= \varvec{x}_0 + X_m (U_m^{\top }AX_m)^{-1} U_m^{\top }\varvec{r}_0. \end{aligned}$$
(8)

From this perspective, CG and GMRES perform only a single step with the number of iterations m fixed and determined in advance. For CG, the spaces are \({\mathbb {U}}_m = {\mathbb {X}}_m = K_m(A, \varvec{b})\), while for GMRES they are \({\mathbb {X}}_m=K_m(A, \varvec{b})\) and \({\mathbb {U}}_m=AK_m(A, \varvec{b})\) (Saad 2003, Proposition 5.1).
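Written out, a single projection step is one small linear solve. The sketch below uses our own naming and assumes \(U_m^{\top }AX_m\) is nonsingular.

```python
import numpy as np

def projection_iterate(A, b, x0, X, U):
    """One projection step, Eqs. (7)-(8): x_m in x0 + range(X) with U^T r_m = 0."""
    r0 = b - A @ x0
    alpha = np.linalg.solve(U.T @ A @ X, U.T @ r0)   # Eq. (7)
    return x0 + X @ alpha                            # Eq. (8)
```

For CG one would take the columns of both X and U to span \(K_m(A, \varvec{b})\), while for GMRES X spans \(K_m(A, \varvec{b})\) and \(U = AX\), as revisited in Sects. 5 and 6.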

3.2 Probabilistic perspectives

In this section, we first show, in Proposition 7, that the conditional mean from SBI after m steps corresponds to some projection method. Then, in Proposition 8 we prove the converse: that each projection method is also the posterior mean of a probabilistic method, for some prior covariance and choice of information.

Proposition 7

Let the columns of \(S_m\) be linearly independent. Consider SBI under the prior

$$\begin{aligned} p(\varvec{x})={\mathcal {N}}(\varvec{x}; \varvec{x}_0, \varSigma _0), \end{aligned}$$

and with observations \(\varvec{y}_m=S_m^{\top }\varvec{b}\). Then, the posterior mean \(\varvec{x}_m\) in Lemma 3 is identical to the iterate from a projection method defined by the matrices \(U_m=S_m\) and \(X_m=\varSigma _0A^{\top }S_m\), and the starting vector \(\varvec{x}_0\).

Proof

Substituting \(U_m = S_m\) and \(X_m = \varSigma _0 A^{\top }S_m\) into Lemma 3 gives Eq. (8), as required. \(\square \)

The converse to this also holds:

Proposition 8

Consider a projection method defined by the matrices \(X_m,U_m\in \mathbb {R}^{d\times m}\), each with linearly independent columns, and the starting vector \(\varvec{x}_0 \in \mathbb {R}^d\). Then, the iterate \(\varvec{x}_m\) in Eq. (8) is identical to the SBI posterior mean in Lemma 3 under the prior

$$\begin{aligned} p(\varvec{x})=\mathcal {N}(\varvec{x}; \varvec{x}_0, X_mX_m^\top ) \end{aligned}$$
(9)

when search directions \(S_m = U_m\) are used.

Proof

Abbreviate \(Z=X_m^\top A^\top U_m\) and write the projection method iterate from Eq. (8) as

$$\begin{aligned} \varvec{x}_m = \varvec{x}_0+ X_m Z^{-T} U_m^\top \varvec{r}_0. \end{aligned}$$

Multiply the middle matrix by the identity,

$$\begin{aligned} Z^{-T}= & {} ZZ^{-1}Z^{-T}=Z(Z^\top Z)^{-1}\\= & {} X_m^\top A^\top U_m(U_m^\top A\varSigma _0 A^\top U_m)^{-1}, \end{aligned}$$

and insert this into the expression for \(\varvec{x}_m\),

$$\begin{aligned} \varvec{x}_m = \varvec{x}_0+ \varSigma _0A^\top U_m(U_m^\top A\varSigma _0 A^\top U_m)^{-1}U_m^\top \varvec{r}_0. \end{aligned}$$

Setting \(U_m=S_m\) gives the mean in Lemma 3. \(\square \)

A direct way to enforce that the posterior occupies the solution space is to place a prior on the coefficients \(\varvec{\alpha }\) in \(\varvec{x} = \varvec{x}_0 + X_m \varvec{\alpha }\). Under a unit Gaussian prior \(\varvec{\alpha }\sim \mathcal {N}(\varvec{0}, I)\), the implied prior on \(\varvec{x}\) naturally has the form of Eq. (9).

However, this prior is unsatisfying: it requires the solution space to be specified a priori, precluding adaptivity in the algorithm, and, perhaps more worryingly, the posterior covariance over the solution is a matrix of zeros even though the solution has not been fully identified. Again taking \(Z = X_m^{\top }A^{\top }U_m\):

$$\begin{aligned} \varSigma _m&=\varSigma _0 - \varSigma _0 A^{\top }U_m(U_m^{\top }A\varSigma _0 A^{\top }U_m)^{-\!1}U_m^{\top }A \varSigma _0\\&= X_mX_m^{\top }- X_m Z(Z^{\top }Z)^{-1}Z^{\top }X_m^{\top }\\&= X_mX_m^{\top }- X_mX_m^{\top }\\&= 0 . \end{aligned}$$

Concerning this issue, Hennig (2015) and Bartels and Hennig (2016) each proposed adding additional uncertainty in the null space of \(X_m\). This empirical uncertainty calibration step has not yet been analysed in detail. Such analysis is left for future work.

Including the solution space \(X_m\) in the prior covariance matrix requires it to be specified a priori. For solvers like CG and GMRES which construct \(X_m\) adaptively, this assumption may appear problematic—a probabilistic interpretation should use for inference only quantities that have already been computed. The computation of \(X_m\) could be seen as part of the initialisation, but this requires the number of iterations m to be fixed a priori, whereas typically such methods choose m adaptively by examining the norm of the residual.Footnote 6 Nevertheless, the proposition provides a probabilistic view for arbitrary projection methods and does not involve \(A^{-\!1}\), unlike the results presented in Hennig (2015) and Cockayne et al. (2017).

The above prior is not unique. The next proposition establishes probabilistic interpretations of projection methods under priors that are independent of the solution and constraint spaces, albeit under more restrictive conditions. The benefit of this is that m need not be fixed a priori.

Proposition 9

Consider a projection method defined by \(X_m, U_m\in \mathbb {R}^{d\times m}\) and the starting vector \(\varvec{x}_0\). Further suppose that \(U_m = R X_m\) for some invertible \(R \in \mathbb {R}^{d\times d}\), and that \(A^\top R\) is symmetric positive definite. Then, under the prior

$$\begin{aligned} p(\varvec{x})=\mathcal {N}\left( \varvec{x}; \varvec{x}_0, (A^\top R)^{-1} \right) \end{aligned}$$

and the search directions \(S_m = U_m = R X_m\), the iterate in the projection method is identical to the posterior mean in Lemma 3.

Proof

First substitute \(X_m=R^{-1}U_m\) into Eq. (8) to obtain

$$\begin{aligned} \varvec{x}_m&= \varvec{x}_0 + R^{-1} U_m (U_m^{\top }AR^{-1} U_m)^{-1} U_m^{\top }\varvec{r}_0 \\&= \varvec{x}_0 + R^{-1} A^{-\top } A^{\top }U_m (U_m^\top AR^{-1} A^{-\top } A^{\top }U_m)^{-1} U_m^\top \varvec{r}_0 \\&= \varvec{x}_0 + \varSigma _0 A^{\top }U_m (U_m^\top A\varSigma _0 A^{\top }U_m)^{-1} U_m^\top \varvec{r}_0. \end{aligned}$$

The third line uses \(\varSigma _0 = (A^{\top }R)^{-1} = R^{-1}A^{-T}\). This is equivalent to the posterior mean in Eq. (2) with \(S_m = U_m\). \(\square \)

A corollary which provides further insight arises when one considers the polar decomposition ofA. Recall that an invertible matrixA has a unique polar decomposition \(A = PH\), where \(P \in \mathbb {R}^{d \times d}\) is orthogonal and \(H \in \mathbb {R}^{d\times d}\) is symmetric positive definite.

Corollary 10

Consider a projection method defined by \(X_m, U_m\in \mathbb {R}^{d\times m}\) and the starting vector \(\varvec{x}_0\), and suppose that \(U_m = P X_m\), where P arises from the polar decomposition \(A = PH\). Then, under the prior

$$\begin{aligned} p(\varvec{x})=\mathcal {N}\left( \varvec{x}; \varvec{x}_0, H^{-1}\right) \end{aligned}$$

and the search directions \(S_m = U_m = P X_m\), the iterate in the projection method is identical to the posterior mean in Lemma 3.

Proof

This follows from Proposition 9. Setting \(R = P\) aligns the search directions in Corollary 10 with those in Proposition 9. Since P is orthogonal, \(P^{-1} = P^\top \), and since H is symmetric positive definite, \(A^\top P = P^\top A = H\) by definition of the polar decomposition, which gives the prior covariance required for Proposition 9.\(\square \)

This is an intuitive analogue of similar results in Hennig (2015) and Cockayne et al. (2017) which show that CG is recovered under certain conditions involving a prior \(\varSigma _0 = A^{-\!1}\). When \(A\) is not symmetric and positive definite, it cannot be used as a prior covariance. This corollary suggests a natural way to select a prior covariance still linked to the linear system, though this choice is still not computationally convenient. Furthermore, in the case that \(A\) is symmetric positive definite, this recovers the prior which replicates CG described in Cockayne et al. (2018). Note that each of H and P can be stated explicitly as \(H = (A^{\top }A)^\frac{1}{2}\) and \(P = A(A^{\top }A)^{-\frac{1}{2}}\). Thus, in the case of symmetric positive-definite A we have that \(H = A\) and \(P = I\), so that the prior covariance \(\varSigma _0 = A^{-1}\) arises naturally from this interpretation.
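As a small numerical illustration of Corollary 10, the polar factors can be obtained from an SVD; the code below is our own sketch, and the final line forms the prior covariance \(H^{-1}\) suggested by the corollary.

```python
import numpy as np

def polar_decomposition(A):
    """Return (P, H) with A = P H, P orthogonal and H symmetric positive definite."""
    U, s, Vt = np.linalg.svd(A)
    P = U @ Vt                      # orthogonal factor, P = A (A^T A)^{-1/2}
    H = Vt.T @ (s[:, None] * Vt)    # H = (A^T A)^{1/2}
    return P, H

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
P, H = polar_decomposition(A)
assert np.allclose(P @ H, A) and np.allclose(P.T @ P, np.eye(5))
Sigma0 = np.linalg.inv(H)           # prior covariance of Corollary 10
```

For symmetric positive-definite A this reduces to \(P = I\) and \(H = A\), recovering the prior covariance \(\varSigma _0 = A^{-1}\).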

4 Preconditioning

This section discusses probabilistic views on preconditioning. Preconditioning is a widely used technique for accelerating the convergence of iterative methods (Saad 2003, Sections 9 and 10). A preconditioner P is a nonsingular matrix satisfying two requirements:

  1.

    Linear systems \(Pz=c\) can be solved at low computational cost

  2.

    P is “close” to A in some sense.

In this sense, solving systems based upon a preconditioner can be viewed as approximately inverting A, and indeed, many preconditioners are constructed based upon this intuition. One distinguishes between right preconditioners \(P_r\) and left preconditioners \(P_l\), depending on whether they act on A from the right or the left. Two-sided preconditioning with nonsingular matrices \(P_l\) and \(P_r\) implicitly transforms Eq. (1) into a new linear problem

$$\begin{aligned} P_l AP_r \,\varvec{z}^*=P_l \varvec{b}, \qquad \text {with}\quad \varvec{x}^*=P_r\varvec{z}^*. \end{aligned}$$
(10)

The preconditioned system can then be solved using arbitrary projection methods as described in Sect. 3.1, from the starting point \(\varvec{z}_0\) defined by \(\varvec{x}_0 = P_r \varvec{z}_0\). The probabilistic view can be used to create a nuanced description of preconditioning as a form of prior information. In the SBI framework, Proposition 11 below shows that solving a right-preconditioned system is equivalent to modifying the prior, while Proposition 12 shows that left preconditioning is equivalent to making a different choice of observations.

Proposition 11

(Right preconditioning) Consider the right-preconditioned system

$$\begin{aligned} AP_r \varvec{z}^*= \varvec{b}\qquad \text {where} \quad \varvec{x}^*= P_r \varvec{z}^*. \end{aligned}$$
(11)

SBI on Eq. (11) under the prior

$$\begin{aligned} \varvec{z}\sim \mathcal {N}(\varvec{z}; \varvec{z}_0, \varSigma _0) \end{aligned}$$
(12)

is equivalent to solving Eq. (1) under the prior

$$\begin{aligned} \varvec{x} \sim \mathcal {N}(\varvec{x}; P_r\varvec{z}_0, P_r \varSigma _0 P_r^{\top }) . \end{aligned}$$

Proof

Let \(p(\varvec{x})=\mathcal {N}(\varvec{x}; \varvec{x}_0, \varSigma _r)\). Lemma 3 implies that after observing information from search directions \(S_m\), the posterior mean equals

$$\begin{aligned} \varvec{x}_m = \varvec{x}_0 + \varSigma _r A^\top S_m (S_m^\top A \varSigma _r A^\top S_m)^{-1}S_m^\top \varvec{r}_0 \end{aligned}$$

where \(\varvec{r}_0 = \varvec{b}-A\varvec{x}_0\). Setting \(\varvec{x}_0=P_r \varvec{z}_0\) and letting \(\varSigma _r=P_r\varSigma _0P_r^\top \) gives

$$\begin{aligned} \varvec{x}_m = P_r\varvec{z}_0 + P_r\varSigma _ 0B^\top S_m (S_m^\top B \varSigma _0 B^\top S_m)^{-1}S_m^\top \hat{\varvec{r}}_0 \end{aligned}$$

where \(B:=AP_r\) and \(\hat{\varvec{r}}_0 = \varvec{b}-B\varvec{z}_0\). Left multiplying by \(P_r^{-1}\) shows that this is equivalent to

$$\begin{aligned} \varvec{z}_m&:=P_r^{-1}\varvec{x}_m \\&= \varvec{z}_0 + \varSigma _ 0B^\top S_m (S_m^\top B \varSigma _0 B^\top S_m)^{-1}S_m^\top \hat{\varvec{r}}_0. \end{aligned}$$

Thus, \(\varvec{z}_m\) is the posterior mean of the system \(B\varvec{z}^* = \varvec{b}\) with prior Eq. (12) after observing search directions \(S_m\). \(\square \)
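A quick numerical check of Proposition 11 on a toy problem (our own code; \(P_r\) is an arbitrary nonsingular matrix rather than a useful preconditioner) confirms that the two formulations produce the same posterior mean.

```python
import numpy as np

def sbi_mean(A, b, S, x0, Sigma0):
    """Posterior mean of Lemma 3."""
    r0 = b - A @ x0
    U = Sigma0 @ A.T @ S
    return x0 + U @ np.linalg.solve(S.T @ A @ U, S.T @ r0)

rng = np.random.default_rng(1)
d, m = 8, 4
A = rng.standard_normal((d, d)) + d * np.eye(d)
Pr = rng.standard_normal((d, d)) + d * np.eye(d)      # stand-in right preconditioner
b, z0 = rng.standard_normal(d), rng.standard_normal(d)
S = rng.standard_normal((d, m))
Sigma0 = np.eye(d)

z_m = sbi_mean(A @ Pr, b, S, z0, Sigma0)              # SBI on the preconditioned system
x_m = sbi_mean(A, b, S, Pr @ z0, Pr @ Sigma0 @ Pr.T)  # SBI on Eq. (1), transformed prior
assert np.allclose(Pr @ z_m, x_m)
```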

Proposition 12

(Left preconditioning) Consider the left-preconditioned system

$$\begin{aligned} P_l A \varvec{x}^*= P_l \varvec{b} \end{aligned}$$
(13)

and the SBI prior

$$\begin{aligned} p(\varvec{x}) = \mathcal {N}(\varvec{x}; \varvec{x}_0, \varSigma _0). \end{aligned}$$

Then, the posterior from SBI on Eq. (13) under search directions \(S_m\) is equivalent to the posterior from SBI applied to the system Eq. (1) under search directions \(P_l^{\top }S_m\).

Proof

Lemma 3 implies that after observing search directions \(T_m\), the posterior mean over the solution of Eq. (1) equals

$$\begin{aligned} \varvec{x}_m = \varvec{x}_0 + \varSigma _0 A^\top T_m (T_m^\top A \varSigma _0 A^\top T_m)^{-1}T_m^\top \varvec{r}_0 \end{aligned}$$

where \(\varvec{r}_0 = \varvec{b}-A\varvec{x}_0\). Setting \(T_m= P_l^\top S_m\) gives

$$\begin{aligned} \varvec{x}_m = \varvec{x}_0 + \varSigma _0 B^\top S_m (S_m^\top B \varSigma _0 B^\top S_m)^{-1}S_m^\top \hat{\varvec{r}}_0 \end{aligned}$$

where \(B :=P_l A\) and \(\hat{\varvec{r}}_0 = P_l \varvec{b}-P_lA\varvec{x}_0\). Thus, \(\varvec{x}_m\) is the posterior mean of the system \(B\varvec{x}^*= P_l \varvec{b}\) after observing search directions \(S_m\). \(\square \)

If a probabilistic linear solver has a posterior mean which coincides with a projection method (as discussed in Sect. 3.1), Propositions 11 and 12 show how to obtain a probabilistic interpretation of the preconditioned version of that algorithm. Furthermore, the equivalence demonstrated in Sect. 2.4 shows that the reasoning from Propositions 11 and 12 carries over to MBI based on left-multiplied observations: right preconditioning corresponds to a change in prior belief, while left preconditioning corresponds to a change in observations.

We do not claim that this probabilistic interpretation of preconditioning is unique. For example, when using MBI with right-multiplied observations, the same line of reasoning can be used to show the converse: right preconditioning corresponds to a change in the observations and left preconditioning to a change in the prior.

5 Conjugate gradients

Conjugate gradients have been studied from a probabilistic point of view before by Hennig (2015) and Cockayne et al. (2018). This section generalises the results of Hennig (2015) and leverages Proposition 6 for new insights into BayesCG. For this section (but not thereafter), assume that \(A\) is a symmetric and positive definite matrix.

5.1 Left-multiplied view

The BayesCG algorithm proposed by Cockayne et al. (2018) encompasses conjugate gradients as a special case. BayesCG uses left-multiplied observations and was derived in the solution-based perspective.

The posterior in Lemma 3 does not immediately result in a practical algorithm, as it involves the solution of a linear system based on the matrix \(S_m^{\top }A\varSigma _0 A^{\top }S_m\in \mathbb {R}^{m\times m}\), which requires \({\mathcal {O}}(m^3)\) arithmetic operations. BayesCG avoids this cost by constructing search directions that are \(A\varSigma _0A^\top \)-orthonormal, as shown below (Cockayne et al. 2018, Proposition 7).

Proposition 13

[Proposition 7 of Cockayne et al. 2018 (BayesCG)] Let \(\tilde{\varvec{s}}_1 = \varvec{b} - A \varvec{x}_0\), and let \(\varvec{s}_1 = \tilde{\varvec{s}}_1 / \Vert \tilde{\varvec{s}}_1\Vert _{A\varSigma _0 A^{\top }}\). For \(j = 2,\dots ,m\) let

$$\begin{aligned} \tilde{\varvec{s}}_j&= \varvec{b}- A\varvec{x}_{j-1} - \langle \varvec{b} - A\varvec{x}_{j-1}, \varvec{s}_{j-1} \rangle _{A\varSigma _0 A^{\top }} \varvec{s}_{j-1} \\ \varvec{s}_j&= \tilde{\varvec{s}}_j / \Vert \tilde{\varvec{s}}_j\Vert _{A\varSigma _0 A^{\top }} . \end{aligned}$$

Then, the set \(\{\varvec{s}_1, \dots , \varvec{s}_m\}\) is \(A\varSigma _0 A^{\top }\)-orthonormal, and consequently \(S_m^{\top }A\varSigma _0 A^{\top }S_m = I\).

With these search directions constructed, BayesCG becomes an iterative method:

Proposition 14

(Proposition 6 of Cockayne et al. 2018) Using the search directions from Proposition 13, the posterior from Lemma 3 reduces to:

$$\begin{aligned} \varvec{x}_m&= \varvec{x}_{m-1} + \varSigma _0 A^{\top }\varvec{s}_m (\varvec{s}_m ^{\top }(\varvec{b} - A \varvec{x}_{m-1})) \\ \varSigma _m&= \varSigma _{m-1} - \varSigma _0 A^{\top }\varvec{s}_m \varvec{s}_m^{\top }A\varSigma _0 \end{aligned}$$

In Proposition 4 of Cockayne et al. (2018), it was shown that the BayesCG posterior mean corresponds to the CG solution estimate when the prior covariance is taken to be \(\varSigma _0 = A^{-\!1}\), though this is not a practical choice of prior covariance as it requires access to the unavailable \(A^{-1}\). Furthermore, in Proposition 9 of the same work it was shown that, when using the search directions from Proposition 13, the posterior mean from BayesCG has the following optimality property:

$$\begin{aligned} x_m = {{\,\mathrm{arg\,min}\,}}_{\varvec{x} \in K_m(\varSigma _0 A^{\top }A, \varSigma _0 A^{\top }\varvec{b})} \Vert \varvec{x} - \varvec{x}^*\Vert _{\varSigma _0^{-1}} \end{aligned}$$

Note that this is now a trivial special case of Proposition 4.
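The recursions of Propositions 13 and 14 are straightforward to transcribe. The sketch below is our own (no convergence checks or reorthogonalisation) and also verifies numerically the equivalence with CG for the choice \(\varSigma _0 = A^{-1}\) noted above.

```python
import numpy as np

def bayescg(A, b, x0, Sigma0, m):
    """BayesCG posterior means after 1..m steps (Propositions 13 and 14)."""
    Gram = A @ Sigma0 @ A.T                               # A Sigma_0 A^T inner product
    x, Sigma, s, means = x0.copy(), Sigma0.copy(), None, []
    for _ in range(m):
        r = b - A @ x
        st = r if s is None else r - (r @ Gram @ s) * s   # Proposition 13
        s = st / np.sqrt(st @ Gram @ st)
        v = Sigma0 @ A.T @ s
        x = x + v * (s @ r)                               # Proposition 14
        Sigma = Sigma - np.outer(v, v)
        means.append(x.copy())
    return means, Sigma

def cg(A, b, x0, m):
    """Textbook conjugate gradient iterates for comparison."""
    x, r = x0.copy(), b - A @ x0
    p, iterates = r.copy(), []
    for _ in range(m):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        iterates.append(x.copy())
    return iterates

rng = np.random.default_rng(2)
d, m = 20, 5
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)                               # symmetric positive definite
b, x0 = rng.standard_normal(d), np.zeros(d)
means, _ = bayescg(A, b, x0, np.linalg.inv(A), m)
assert all(np.allclose(u, v) for u, v in zip(means, cg(A, b, x0, m)))
```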

The following proposition leverages these results along with Proposition 6 to show that there exists an MBI method which, under a particular choice of prior and with a particular methodology for the generation of search directions, is consistent with CG.

Proposition 15

Consider the MBI prior

$$\begin{aligned} p(\overrightarrow{A^{-\!1}}) = \mathcal {N}(\overrightarrow{A^{-\!1}}; \overrightarrow{A_0^{-1}}, A^{-\!1}\otimes W_0) \end{aligned}$$

where \(W_0 \in \mathbb {R}^{d\times d}\) is symmetric positive definite and such that \(\varvec{b}^{\top }W_0 \varvec{b}= 1\). Suppose left-multiplied information is used, and that the search directions are generated sequentially according to:

$$\begin{aligned} \tilde{\varvec{s}}_1&= (I - AA_0^{-1}) \varvec{b}\\ \varvec{s}_1&= \frac{\tilde{\varvec{s}_1}}{\Vert \tilde{\varvec{s}_1}\Vert _A} \end{aligned}$$

and for \(j=2,\dots ,m\)

$$\begin{aligned} \tilde{\varvec{s}}_{j}&= (I - AA_{j-1}^{-1}) \varvec{b}- \varvec{b}^{\top }(I - AA_{j-1}^{-1})^{\top }A\varvec{s}_{j-1} \cdot \varvec{s}_{j-1} \\ \varvec{s}_j&= \frac{\tilde{\varvec{s}_j}}{\Vert \tilde{\varvec{s}_j}\Vert _A}. \end{aligned}$$

Then, it holds that the implied posterior mean on solution space, given by \(A_m^{-1} \varvec{b}\), corresponds to the CG solution estimate after m iterations, with starting point \(\varvec{x}_0 = A_0^{-1} \varvec{b}\).

Proof

First note that, by Proposition 6, since left-multiplied observations are used and since \(\varvec{b}^{\top }W_0 \varvec{b}= 1\), the implied posterior distribution on solution space from MBI is identical to the posterior distribution from SBI under the prior

$$\begin{aligned} p(\varvec{x}) = \mathcal {N}(\varvec{x}; A_0^{-1} \varvec{b}, A^{-\!1}) . \end{aligned}$$

It thus remains to show that the sequence of search directions generated is identical to those in Proposition 13 for this prior. For \(\tilde{\varvec{s}_1}\):

$$\begin{aligned} \tilde{\varvec{s}_1} = (I - AA_0^{-1}) \varvec{b}= \varvec{b}- A\varvec{x}_0 \end{aligned}$$

as required. For \(\tilde{\varvec{s}_j}\):

$$\begin{aligned} \tilde{\varvec{s}_j}&= (I - AA_{j-1}^{-1}) \varvec{b}- \varvec{b}^{\top }(I - AA_{j-1}^{-1})^{\top }A\varvec{s}_{j-1} \cdot \varvec{s}_{j-1} \\&= \varvec{b} - A\varvec{x}_{j-1} - (\varvec{b}- A\varvec{x}_{j-1})^{\top }A\varvec{s}_{j-1} \cdot \varvec{s}_{j-1} \\&= \varvec{b} - A\varvec{x}_{j-1} - \langle \varvec{b}- A\varvec{x}_{j-1}, \varvec{s}_{j-1} \rangle _A\cdot \varvec{s}_{j-1} \end{aligned}$$

where the second line uses that \(A^{-\!1}_{j-1} \varvec{b}= \varvec{x}_{j-1}\). Thus, the search directions coincide with those in Proposition 13. It therefore holds that the implied posterior mean on solution space, \(A^{-\!1}_m \varvec{b}\), coincides with the solution estimate produced by CG. \(\square \)

5.2 Right-multiplied view

Interpretations of CG (and general projection methods) that use right-multiplied observations seem to require more care than those based on left-multiplied observations. Nevertheless, Hennig (2015) provided an interpretation for CG in this framework, essentially showing (Footnote 7) that Algorithm 1 reproduces both the search directions and solution estimates from CG under the prior

where \(\alpha \in \mathbb {R}\setminus \{0\}\) and \(\beta \in \mathbb {R}^+\), and where the prior covariance is built using the symmetric Kronecker product (see Section A.1). The posterior under such a prior is described in Lemma 2.2 of Hennig (2015) (see Lemma 21), though we note that the solution estimate \(\varvec{x}_m\) output by this algorithm relates to the posterior over \(A^{-1}\) differently than in the previous section, in the sense that \(A^{-\!1}_m \varvec{b}\ne \varvec{x}_m\). (More precisely, \(\varvec{x}_m=A^{-\!1}_m (\varvec{b}-A\varvec{x}_0)-\varvec{x}_0 - (1-\alpha _m) \varvec{d}_m\), as the CG estimate is corrected by the step size computed in line 6. Fixing this rank-1 discrepancy would complicate the exposition of Algorithm 1 and yield a more cumbersome algorithm.) The following proposition generalises this result.

Proposition 16

Consider the prior

where \(W:=A^{-\!1}\). For all choices \(\alpha \in \mathbb {R}\setminus \{0\}\) and \(\beta ,\gamma \in \mathbb {R}_{+,0}\) with \(\beta + \gamma >0\), Algorithm 1 is equivalent to CG, in the sense that it produces the exact same sequence of estimates \(\varvec{x}_i\) and scaled search directions \(\varvec{s}_i\).

Proof

The proof is extensive and has been moved to “Appendix B”. \(\square \)

(Algorithm 1)

Note that, unlike previous propositions, Proposition 16 proposes a prior that does not involve \(A^{-\!1}\) for the case when \(\gamma = 0\).

6 GMRES

The generalised minimal residual method (Saad 2003, Section 6.5) applies to general nonsingular matrices A. At iteration m, GMRES minimises the residual over the affine space \(\varvec{x}_0 + K_m(A,\varvec{r}_0)\). That is, \(\varvec{r}_m = \varvec{b} - A\varvec{x}_m\) satisfies

$$\begin{aligned} \Vert \varvec{r}_m\Vert _2= & {} \min _{\varvec{x} \in K_m(A, \varvec{r}_0)}{\Vert A\varvec{x}-\varvec{r}_0\Vert _2}\nonumber \\= & {} \min _{x \in \varvec{x}_0+K_m(A, \varvec{r}_0)}{\Vert A\varvec{x} - \varvec{b}\Vert _2}. \end{aligned}$$
(14)

Since \(A \varvec{x}-\varvec{b}= A (\varvec{x}- \varvec{x}^*)\), this corresponds to minimising the error in the \(A^\top A\) norm.

We present a brief development of GMRES, starting with Arnoldi’s method (Sect. 6.1) and the GMRES algorithm (Sect. 6.2), before presenting our Bayesian interpretation (Sect. 6.3).

6.1 Arnoldi’s method

GMRES uses Arnoldi’s method (Saad 2003, Section 6.3) to construct orthonormal bases for Krylov spaces of general, nonsingular matrices A. Starting with \(\varvec{q}_1 = \varvec{r}_0/\Vert \varvec{r}_0\Vert _2\), Arnoldi’s method recursively computes the orthonormal basis

$$\begin{aligned} Q_m = \begin{bmatrix}\varvec{q}_1&\ldots&\varvec{q}_m\end{bmatrix}\in \mathbb {R}^{d \times m} \end{aligned}$$

for \(K_m(A, \varvec{r}_0)\). The basis vectors satisfy the relations

$$\begin{aligned} A Q_m = Q_{m+1} {\tilde{H}}_m=Q_m H_m+h_{m+1,m}\varvec{q}_{m+1}\varvec{e}_m^\top \end{aligned}$$
(15)

and \(Q_m^\top AQ_m = H_m\), where the upper Hessenberg matrix \(H_m\) is defined as

$$\begin{aligned} H_m = \begin{bmatrix} h_{11}&h_{12}&h_{13}&\dots&h_{1,m-1}&h_{1m} \\ h_{21}&h_{22}&h_{23}&\dots&h_{2,m-1}&h_{2m} \\ 0&h_{32}&h_{33}&\dots&h_{3,m-1}&h_{3m} \\ \vdots&0&h_{43}&\dots&h_{4,m-1}&h_{4m} \\ \vdots&&\ddots&\ddots&\vdots&\vdots \\ 0&\dots&\dots&0&h_{m,m-1}&h_{mm} \end{bmatrix} \in \mathbb {R}^{m \times m} \end{aligned}$$

and

$$\begin{aligned} {\tilde{H}}_m=\begin{bmatrix} H_m \\ h_{m+1,m}\varvec{e}_m^\top \end{bmatrix}\in \mathbb {R}^{(m+1)\times m}. \end{aligned}$$

6.2 GMRES

GMRES computes the iterate

$$\begin{aligned} \varvec{x}_m =\varvec{x}_0+ Q_m \varvec{c}_m \end{aligned}$$

based on the optimality condition in Eq. (14), which can equivalently be expressed as

$$\begin{aligned} \varvec{c}_m&= {{\,\mathrm{arg\,min}\,}}_{\varvec{c} \in \mathbb {R}^m} \Vert A Q_m \varvec{c} - \varvec{r}_0\Vert _2 \nonumber \\&= \left( (AQ_m)^\top (AQ_m)\right) ^{-1}(AQ_m)^\top \varvec{r}_0. \end{aligned}$$
(16)

Thus,

$$\begin{aligned} \varvec{x}_m =\varvec{x}_0+ Q_m \left( Q_m^\top A^\top AQ_m\right) ^{-1}Q_m^\top A^\top \varvec{r}_0, \end{aligned}$$
(17)

confirming that GMRES is a projection method with \(X_m=Q_m\) and \(U_m=AQ_m\).

GMRES solves the least squares problem in Eq. (16) efficiently by projecting it to a lower-dimensional space via Arnoldi’s method. To this end, express the starting vector in the Krylov basis,

$$\begin{aligned} \varvec{r}_0 =\Vert \varvec{r}_0\Vert _2 \varvec{q}_1=\Vert \varvec{r_0}\Vert _2 Q_{m+1} \varvec{e_1}, \end{aligned}$$

and exploit the Arnoldi recursion from Eq. (15),

$$\begin{aligned} AQ_m\varvec{c}-\varvec{r_0}=Q_{m+1}\left( {\tilde{H}}_m \varvec{c}-\Vert \varvec{r}_0\Vert _2 \varvec{e}_1\right) , \end{aligned}$$

followed by the unitary invariance of the two-norm,

$$\begin{aligned} \Vert A Q_m \varvec{c} - \varvec{r}_0\Vert _2 = \Vert {\tilde{H}}_m \varvec{c} - \Vert \varvec{r}_0\Vert _2 \, \varvec{e}_1\Vert _2. \end{aligned}$$

Thus, instead of solving the least squares problem in Eq. (16) with d rows, GMRES solves a problem with only \(m+1\) rows,

$$\begin{aligned} \varvec{c}_m = {{\,\mathrm{arg\,min}\,}}_{\varvec{c} \in \mathbb {R}^m} \Vert {\tilde{H}}_m \varvec{c} - \Vert \varvec{r}_0\Vert _2 \, \varvec{e}_1\Vert _2. \end{aligned}$$
(18)

The computations are summarised in Algorithm 2.

(Algorithm 2)
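A compact transcription of the computations summarised in Algorithm 2 is given below; it is our own sketch, a generic least-squares solve replaces the usual Givens-rotation update of \({\tilde{H}}_m\), and breakdown handling is omitted.

```python
import numpy as np

def gmres(A, b, x0, m):
    """GMRES iterate x_m via the Arnoldi recursion, Eq. (15), and Eq. (18)."""
    d = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((d, m + 1))
    H = np.zeros((m + 1, m))                     # the Hessenberg matrix of Eq. (15)
    Q[:, 0] = r0 / beta
    for j in range(m):                           # Arnoldi's method
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    c, *_ = np.linalg.lstsq(H, e1, rcond=None)   # small least squares problem, Eq. (18)
    return x0 + Q[:, :m] @ c                     # x_m = x_0 + Q_m c_m
```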

6.3 Bayesian interpretation of GMRES

We now present probabilistic linear solvers with posterior means that coincide with the solution estimate from GMRES.

6.3.1 Left-multiplied view

Proposition 17

Under the SBI prior

$$\begin{aligned} p(\varvec{x})=\mathcal {N}(\varvec{x}; \varvec{x}_0, \varSigma _0) \qquad \text {where} \quad \varSigma _0=(A^{\top }A)^{-1} \end{aligned}$$

and the search directions \(U_m = A Q_m\), the posterior mean is identical to the GMRES iterate \(\varvec{x}_m\) in Eq. (17).

Proof

Substitute \(R=A\) and \(U_m = AQ_m\) into Proposition 9.\(\square \)

Proposition 17 is intuitive in the context of Proposition 4: setting \(\varSigma _0 = (A^{\top }A)^{-1}\) ensures that the norm being minimised coincides with that of GMRES, while the search directions \(S_m = A Q_m\) yield the GMRES solution space, since \(\varSigma _0 A^{\top }S_m = Q_m\). This interpretation exhibits an interesting duality with CG, for which \(\varSigma _0=A^{-\!1}\).
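The following toy check of Proposition 17 is our own; the Krylov basis is obtained here by a QR factorisation of the Krylov matrix rather than by Arnoldi's method, purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 30, 6
A = rng.standard_normal((d, d)) + d * np.eye(d)
b, x0 = rng.standard_normal(d), np.zeros(d)
r0 = b - A @ x0

# Orthonormal basis of K_m(A, r0)
K = np.column_stack([np.linalg.matrix_power(A, k) @ r0 for k in range(m)])
Q, _ = np.linalg.qr(K)

# GMRES iterate, Eq. (17)
AQ = A @ Q
x_gmres = x0 + Q @ np.linalg.solve(AQ.T @ AQ, AQ.T @ r0)

# SBI posterior mean (Lemma 3) with Sigma_0 = (A^T A)^{-1} and S_m = A Q_m
Sigma0, S = np.linalg.inv(A.T @ A), AQ
U = Sigma0 @ A.T @ S
x_sbi = x0 + U @ np.linalg.solve(S.T @ A @ U, S.T @ r0)

assert np.allclose(x_gmres, x_sbi)
```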

Another probabilistic interpretation follows from Proposition 8.

Corollary 18

Under the prior

$$\begin{aligned} p(\varvec{x})=\mathcal {N}(\varvec{x}; \varvec{x}_0, \varSigma _0) \qquad \text {where}\quad \varSigma _0=Q_mQ_m^\top , \end{aligned}$$
(19)

and with observations \(\varvec{y}_m = Q_m^\top A^{\top }\varvec{b}\) given by the search directions \(S_m = AQ_m\), the posterior mean from SBI is identical to the GMRES iterate \(\varvec{x}_m\) in Eq. (17).

Note that the posterior covariance in Proposition 17 is not practical to compute, as it involves \(A^{-\!1}\). Cockayne et al. (2017) proposed replacing \(A^{-\!1}\) in the prior covariance with a preconditioner to address this, which does yield a practically computable posterior, but this extension was not explored here. Furthermore, that approach yields unsatisfactorily calibrated posterior uncertainty, as described in that work. Corollary 18 does not have this drawback, but its posterior covariance is a matrix of zeros.

6.3.2 Right-multiplied view

As for CG in Sect. 5.2, finding interpretations of GMRES that use right-multiplied observations appears to be more difficult.

Proposition 19

Under the prior

$$\begin{aligned} p\left( \overrightarrow{A^{-1}}\right) ={\mathcal {N}}\left( \overrightarrow{A^{-1}}; \varvec{0}, \varSigma \otimes I\right) \end{aligned}$$
(20)

and given right-multiplied observations \(Y_m=AQ_m\) (i.e. \(S_m = Q_m\)), the implied posterior mean on the solution space, \(A_m^{-1} \varvec{b}\), is equivalent to the GMRES solution estimate. This correspondence breaks when \(\varvec{x}_0\ne \varvec{0}\).

Proof

Under this prior, applying the posterior mean to \(\varvec{b}\) gives

$$\begin{aligned} A_m^{-1}\varvec{b}=&\,A_0^{-1}\varvec{b}+(Q_m-A_0^{-1}Y_m)(Y_m^{\top }Y_m)^{-1}Y_m^{\top }\varvec{b}\\ =&\,Q_m(Y_m^{\top }Y_m)^{-1}Y_m^{\top }\varvec{b}\\ =&\,Q_m(Q_m^{\top }A^{\top }AQ_m)^{-1}Q_m^{\top }A^{\top }\varvec{b}\end{aligned}$$

which is the GMRES projection step if \(\varvec{x}_0=\varvec{0}\). \(\square \)

6.4 Simulation study

In this section, the simulation study of Cockayne et al. (2018) is replicated to demonstrate that the uncertainty produced by the GMRES interpretation in Proposition 17 is poorly calibrated, just as was observed for CG, owing to the dependence of \(Q_m\) on \(\varvec{x}^*\) by way of its dependence on \(\varvec{b}\). Throughout, the size of the test problems is set to \(d=100\). The eigenvalues of \(A\) were drawn from an exponential distribution with parameter \(\gamma =10\) and the eigenvectors uniformly from the Haar measure over rotation matrices (see Diaconis and Shahshahani 1987). In contrast to Cockayne et al. (2018), the entries of \(\varvec{b}\), rather than those of \(\varvec{x}^*\), are drawn from a standard Gaussian distribution. By Lemma 1, the prior is then perfectly calibrated for this scenario, providing justification for the expectation that the posterior should be equally well calibrated for \(m\ge 1\).

Fig. 1 Convergence of posterior mean and variance of the probabilistic interpretation of GMRES from Proposition 17

Fig. 2 Assessment of the uncertainty quantification. Plotted are kernel density estimates for the statistic Z based on 500 randomly sampled test problems for steps \(m=\{1, 3, 5, 8, 10\}\). These are compared with the theoretical distribution of Z when the posterior distribution is well calibrated

Cockayne et al. (2018) argue that if the uncertainty is well calibrated, then \(\varvec{x}^*\) can be considered as a draw from the posterior. Under this assumption, i.e. \(\varSigma _m^{-\nicefrac {1}{2}}(\varvec{x}^*-\varvec{x}_m) \sim \mathcal {N}(\varvec{0}, \varvec{I})\) they derive the test statistic:

$$\begin{aligned} Z(\varvec{x}^*):=\Vert \varSigma _m^{-\nicefrac {1}{2}}(\varvec{x}^*-\varvec{x}_m)\Vert ^2 \sim \chi ^2_{d-m}. \end{aligned}$$
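The sketch below reproduces the experiment on a reduced number of test problems; it is our own transcription, it assumes the exponential parameter \(\gamma =10\) is a rate (scale \(1/10\)), and it omits the kernel density estimation and plotting. The rank \(d-m\) of \(\varSigma _m\) is used directly when forming the pseudo-inverse square root.

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 100, 10

def haar_orthogonal(d):
    Z = rng.standard_normal((d, d))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))              # Haar-distributed after the sign fix

def test_problem():
    V = haar_orthogonal(d)
    A = V @ np.diag(rng.exponential(scale=1 / 10, size=d)) @ V.T
    return A, rng.standard_normal(d)

def arnoldi_basis(A, r0, m):
    Q = np.zeros((d, m))
    Q[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m - 1):
        w = A @ Q[:, j]
        for i in range(j + 1):
            w -= (Q[:, i] @ w) * Q[:, i]
        Q[:, j + 1] = w / np.linalg.norm(w)
    return Q

def z_statistic(A, b):
    x_star = np.linalg.solve(A, b)
    x0, r0 = np.zeros(d), b
    Sigma0 = np.linalg.inv(A.T @ A)
    S = A @ arnoldi_basis(A, r0, m)             # search directions of Proposition 17
    U = Sigma0 @ A.T @ S
    G = np.linalg.inv(S.T @ A @ U)
    x_m = x0 + U @ G @ (S.T @ r0)
    Sigma_m = Sigma0 - U @ G @ U.T
    w, V = np.linalg.eigh(Sigma_m)              # Sigma_m has rank d - m
    z = (V[:, m:].T @ (x_star - x_m)) / np.sqrt(w[m:])
    return z @ z

zs = [z_statistic(*test_problem()) for _ in range(100)]
print(np.mean(zs), "vs E[chi^2_{d-m}] =", d - m)
```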

Figure 1 shows on the left the convergence of GMRES and on the right the convergence rate of the trace of the posterior covariance. Figure 2 displays the test statistic. It can be seen that the same poor uncertainty quantification occurs in BayesGMRES; even after just 10 iterations, the empirical distribution of the test statistic exhibits a profound left shift, indicating an overly conservative posterior distribution. Producing well-calibrated posteriors remains an open issue in the field of probabilistic linear solvers.

7 Discussion

We have established many new connections between probabilistic linear solvers and a broad class of iterative methods. Matrix-based and solution-based inference were shown to be equivalent in a particular regime, so that results proven for SBI transfer to MBI with left-multiplied observations. Since SBI is a special case of MBI, future research will establish what additional benefits the increased generality of MBI can provide.

We also established a connection between the wide class of projection methods and probabilistic linear solvers. The common practice of preconditioning has an intuitive probabilistic interpretation, and all probabilistic linear solvers can be interpreted as projection methods. While the converse was shown to hold, the conditions under which generic projection methods can be reproduced are somewhat restrictive; however, GMRES and CG, which are among the most commonly used projection methods, have a well-defined probabilistic interpretation. Probabilistic interpretations of other widely used iterative methods can, we anticipate, be established from the results presented in this work.

Posterior uncertainty remains a challenge for probabilistic linear solvers. Direct probabilistic interpretations of CG and GMRES yield posterior covariance matrices which are not always computable, and even when the posterior can be computed, the uncertainty remains poorly calibrated. This is due to the dependence of the search directions in Krylov methods on \(A\varvec{x}^*= \varvec{b}\), which results in an algorithm that is not strictly Bayesian. Mitigating this issue without sacrificing the fast rate of convergence provided by Krylov methods remains an important focus for future work.