Efficient leave-one-out cross-validation for Bayesian non-factorized normal and Student-t models

Cross-validation can be used to measure a model's predictive accuracy for the purpose of model comparison, averaging, or selection. Standard leave-one-out cross-validation (LOO-CV) requires that the observation model can be factorized into simple terms, but many important models in temporal and spatial statistics do not have this property or are inefficient or unstable when forced into a factorized form. We derive how to efficiently compute and validate both exact and approximate LOO-CV for any Bayesian non-factorized model with a multivariate normal or Student-t distribution on the outcome values. We demonstrate the method using lagged simultaneous autoregressive (SAR) models as a case study.


Introduction
In the absence of new data, cross-validation is a general approach for evaluating a statistical model's predictive accuracy for the purpose of model comparison, averaging, or selection (Geisser and Eddy, 1979; Hoeting et al., 1999; Ando and Tsay, 2010; Vehtari and Ojanen, 2012). One widely used variant of cross-validation is leave-one-out cross-validation (LOO-CV), where observations are left out one at a time and then predicted based on the model fit to the remaining data. Predictive accuracy is evaluated by first computing a pointwise predictive measure and then taking the sum of these values over all observations to obtain a single measure of predictive accuracy (e.g., Vehtari et al., 2017). In this paper, we focus on the expected log predictive density (ELPD) as the measure of predictive accuracy. The ELPD takes the whole predictive distribution into account and is less focused on the bulk of the distribution than other common measures such as the root mean squared error (RMSE) or the mean absolute error (MAE; see Vehtari and Ojanen, 2012, for details). Exact LOO-CV is costly, as it requires fitting the model as many times as there are observations in the data. Depending on the size of the data, complexity of the model, and estimation method, this can be practically infeasible as it simply requires too much computation time. For this reason, fast approximate versions of LOO-CV have been developed (Gelfand et al., 1992; Vehtari et al., 2017), most recently using Pareto-smoothed importance sampling (PSIS; Vehtari et al., 2017, 2019).
A standard assumption of any such fast LOO-CV approach using the ELPD is that the model over all observations has to have a factorized form. That is, the overall observation model should be represented as the product of the pointwise models for each observation. However, many important models do not have this factorization property. Particularly in temporal and spatial statistics it is common to fit multivariate normal or Student-t models that have structured covariance matrices such that the model does not factorize. This is typically due to the fact that observations depend on other observations from different time periods or different spatial units in addition to the dependence on the model parameters. Some of these models are actually non-factorizable, that is, we do not know of any reformulation that converts the observation model into a factorized form. Other non-factorized models could be factorized in theory, but it is sometimes more robust or efficient to marginalize out certain parameters, for instance observation-specific latent variables, and then work with a non-factorized model instead.
Conceptually, a factorized model is not required for cross-validation in general or LOO-CV in particular to make sense. This also implies that neither conditional independence nor conditional exchangeability are necessary assumptions. However, when using non-factorized observation models in LOO-CV, computational challenges arise. In this paper, we derive how to perform efficient approximate LOO-CV for any Bayesian multivariate normal or Student-t model with an invertible covariance or scale matrix, regardless of whether or not the model factorizes. We also provide equations for computing exact LOO-CV for these models, which can be used to validate the approximation and to handle problematic observations. Throughout, a Bayesian model specification and estimation via Markov chain Monte Carlo (MCMC) is assumed. We have implemented the developed methods in the R package brms (Bürkner, 2017, 2018). All materials including R source code are available in an online supplement.

Pointwise log-likelihood for non-factorized models
When computing ELPD-based exact LOO-CV for a Bayesian model we need to compute the log leave-one-out predictive densities $\log p(y_i \mid y_{-i})$ for every response value $y_i$, $i = 1, \ldots, N$, where $y_{-i}$ denotes all response values except observation $i$. To obtain $p(y_i \mid y_{-i})$, we need to have access to the pointwise likelihood $p(y_i \mid y_{-i}, \theta)$ and integrate over the model parameters $\theta$:

$$p(y_i \mid y_{-i}) = \int p(y_i \mid y_{-i}, \theta) \, p(\theta \mid y_{-i}) \, d\theta. \qquad (1)$$

Here, $p(\theta \mid y_{-i})$ is the leave-one-out posterior distribution for $\theta$, that is, the posterior distribution for $\theta$ obtained by fitting the model while holding out the $i$th observation (in Section 3, we will show how refitting the model to data $y_{-i}$ can be avoided).
If the observation model is formulated directly as the product of the pointwise observation models, we call it a factorized model. In this case, the likelihood is also the product of the pointwise likelihood contributions $p(y_i \mid y_{-i}, \theta)$. To better illustrate possible structures of the observation models, we formally divide $\theta$ into two parts, observation-specific latent variables $f = (f_1, \ldots, f_N)$ and hyperparameters $\psi$, so that $\theta = (f, \psi)$. Depending on the model, one of the two parts of $\theta$ may also be empty.
In very simple models, such as linear regression models, latent variables are not explicitly present and response values are conditionally independent given $\psi$, so that $p(y_i \mid y_{-i}, f, \psi) = p(y_i \mid \psi)$ (see Figure 1 (a)).

The full likelihood can then be written in the familiar form

$$p(y \mid \psi) = \prod_{i=1}^N p(y_i \mid \psi), \qquad (2)$$

where $y = (y_1, \ldots, y_N)$ denotes the vector of all responses. When the likelihood factorizes this way, the conditional pointwise log-likelihood can be obtained easily by computing $p(y_i \mid \psi)$ for each $i$, with computational cost $O(N)$.
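To make the factorized case concrete, the following toy R snippet (with simulated data and fixed parameter values, purely for illustration) computes the pointwise log-likelihood contributions of a simple linear regression with a single vectorized density evaluation:

```r
# Toy illustration of a factorized model: for a linear regression with known
# parameters, the pointwise log-likelihood terms are independent density
# evaluations, so the full log-likelihood is just their sum (O(N) cost).
set.seed(1)
N <- 100
x <- rnorm(N)
y <- 1 + 2 * x + rnorm(N, sd = 0.5)

# pointwise contributions log p(y_i | psi) for one value of psi = (1, 2, 0.5)
log_lik <- dnorm(y, mean = 1 + 2 * x, sd = 0.5, log = TRUE)
sum(log_lik)  # full log-likelihood, Equation (2) on the log scale
```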
If directional paths between consecutive responses are added, responses are no longer conditionally independent, but the model still factorizes into simple terms with a Markovian dependency structure. This is common in time-series models.
For example, in an autoregressive model of order 1 (see Figure 1 (b)), the pointwise likelihoods are given by $p(y_i \mid y_{-i}, \psi) = p(y_i \mid y_{i-1}, \psi)$. Other time-series models may have observation-specific latent variables $f_i$ and conditionally independent responses, so that the pointwise log-likelihoods simplify to $p(y_i \mid y_{-i}, f, \psi) = p(y_i \mid f_i, \psi)$. In models without directional paths between the latent values $f$ (see Figure 1 (c)), such as latent Gaussian processes (GPs; e.g., Rasmussen, 2003) or spatial conditional autoregressive (CAR) models (e.g., Gelfand and Vounatsou, 2003), an explicit joint prior over $f$ is imposed. In models with directional paths between the latent values $f$ (see Figure 1 (d)), such as hidden Markov models (HMMs; e.g., Rabiner and Juang, 1986), the joint prior over $f$ is defined implicitly via the directional dependencies. What is more, estimation can make use of the latent Markov property of such models, for example, using the Kalman filter (e.g., Welch et al., 1995). In all of these cases (i.e., Figure 1 (a) to (d)), the factorization property is retained and the computational cost for the pointwise log-likelihood contributions remains in $O(N)$.
Yet, there are several reasons why a non-factorized observation model (see Figure 1 (e)) may be necessary or preferred. In non-factorized models, the joint likelihood of the response values $p(y \mid \theta)$ is not factorized into observation-specific components, but rather given directly as one joint expression. For some models, an analytical factorized formulation is simply not available, in which case we speak of a non-factorizable model. Even for models whose observation model can be factorized in principle, it may still be preferable to use a non-factorized form. This is true in particular for models with observation-specific latent variables (see Figure 1 (c) and (d)), as a non-factorized formulation in which the latent variables have been integrated out is often more efficient and numerically stable. For example, a latent GP combined with a Gaussian observation model can be fit more efficiently by marginalizing over $f$ and formulating the GP directly on the responses $y$ (e.g., Rasmussen, 2003). Such marginalization has the additional advantage that both exact and approximate leave-one-out predictive estimation become more stable. This is because, in the factorized formulation, leaving out response $y_i$ also implies treating the corresponding latent variable $f_i$ as missing, which is then only identified through the joint prior over $f$. If this prior is weak, the posterior of $f_i$ is highly influenced by a single observation, and the leave-one-out predictions of $y_i$ may be unstable both numerically and because of estimation error due to finite MCMC sampling or similar finite approximations.
Whether a non-factorized model is used by necessity or for efficiency and stability, it comes at the cost of having no direct access to the leave-one-out predictive densities (1) and thus to the overall leave-one-out predictive accuracy. In theory, we can express the observation-specific likelihoods in terms of the joint likelihood via

$$p(y_i \mid y_{-i}, \theta) = \frac{p(y \mid \theta)}{p(y_{-i} \mid \theta)}, \qquad (3)$$

but the expression on the right-hand side of (3) may not always have an analytical solution. Computing $\log p(y_i \mid y_{-i}, \theta)$ for non-factorized models is therefore often impossible, or at least inefficient and numerically unstable. However, there is a large class of multivariate normal and Student-t models for which we provide efficient analytical solutions in this paper.

Non-factorized normal models
The density of the $N$-dimensional multivariate normal distribution of vector $y$ is given by

$$p(y \mid \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^N |\Sigma|}} \exp\left( -\frac{1}{2} (y - \mu)^{\mathrm{T}} \Sigma^{-1} (y - \mu) \right), \qquad (4)$$

with mean vector $\mu$ and covariance matrix $\Sigma$. Often $\mu$ and $\Sigma$ are functions of the model parameters $\theta$, that is, $\mu = \mu(\theta)$ and $\Sigma = \Sigma(\theta)$, but for notational convenience we omit the potential dependence of $\mu$ and $\Sigma$ on $\theta$ unless it is relevant. Using standard multivariate normal theory (e.g., Tong, 2012), we know that the conditional distribution of the $i$th observation given all other observations, $p(y_i \mid y_{-i}, \theta)$, is univariate normal with mean

$$\tilde{\mu}_i = \mu_i + \sigma_{i,-i} \Sigma_{-i}^{-1} (y_{-i} - \mu_{-i}) \qquad (5)$$

and variance

$$\tilde{\sigma}_i^2 = \sigma_{ii} - \sigma_{i,-i} \Sigma_{-i}^{-1} \sigma_{-i,i}. \qquad (6)$$

In the equations above, $\mu_{-i}$ is the mean vector without the $i$th element, $\Sigma_{-i}$ is the covariance matrix without the $i$th row and column ($\Sigma_{-i}^{-1}$ is its inverse), $\sigma_{i,-i}$ and $\sigma_{-i,i}$ are the $i$th row and column vectors of $\Sigma$ without the $i$th element, and $\sigma_{ii}$ is the $i$th diagonal element of $\Sigma$. Equations (5) and (6) can be used to compute the pointwise log-likelihood values as

$$\log p(y_i \mid y_{-i}, \theta) = -\frac{1}{2} \log\left( 2\pi \tilde{\sigma}_i^2 \right) - \frac{1}{2} \frac{(y_i - \tilde{\mu}_i)^2}{\tilde{\sigma}_i^2}. \qquad (7)$$

Evaluating equation (7) for each $y_i$ and each posterior draw $\theta^{(s)}$ then constitutes the input for the LOO-CV computations. However, the resulting procedure is quite inefficient. Computation is usually dominated by the $O(N^k)$ cost of computing $\Sigma_{-i}^{-1}$, where $k$ depends on the structure of $\Sigma$. If $\Sigma$ is dense then $k = 3$. For sparse $\Sigma$ or reduced-rank computations we have $2 < k < 3$. And since $\Sigma_{-i}^{-1}$ must be computed for each $i$, the overall complexity is actually $O(N^{k+1})$.
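As an illustration of why this is costly, the following R sketch (function and variable names are ours, for illustration only) implements Equations (5) to (7) naively for a single posterior draw, inverting $\Sigma_{-i}$ once per observation:

```r
# Brute-force LOO moments via Equations (5) and (6): Sigma_{-i} is inverted
# separately for every i, which is what drives the O(N^{k+1}) total cost.
loo_moments_brute <- function(y, mu, Sigma) {
  N <- length(y)
  mu_tilde <- numeric(N)
  sd_tilde <- numeric(N)
  for (i in seq_len(N)) {
    Sigma_mi_inv <- solve(Sigma[-i, -i])  # O(N^k) work inside the loop
    mu_tilde[i] <- mu[i] + Sigma[i, -i] %*% Sigma_mi_inv %*% (y[-i] - mu[-i])
    sd_tilde[i] <- sqrt(Sigma[i, i] - Sigma[i, -i] %*% Sigma_mi_inv %*% Sigma[-i, i])
  }
  # pointwise log-likelihood values, Equation (7)
  dnorm(y, mean = mu_tilde, sd = sd_tilde, log = TRUE)
}
```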
Additionally, if $\Sigma_{-i}$ also depends on the model parameters $\theta$ in a non-trivial manner, which is the case for most models of practical relevance, then it needs to be inverted for each of the $S$ posterior draws. Therefore, in most applications the overall complexity will actually be $O(S N^{k+1})$, which will be impractical in most situations. Accordingly, we seek more efficient expressions for $\tilde{\mu}_i$ and $\tilde{\sigma}_i$ that make these computations feasible in practice.
Proposition 1. If $y$ is multivariate normal with mean vector $\mu$ and covariance matrix $\Sigma$, then the conditional mean and standard deviation of $y_i$ given $y_{-i}$ for any observation $i$ can be computed as

$$\tilde{\mu}_i = y_i - \bar{\sigma}_{ii}^{-1} \bar{g}_i, \qquad (8)$$

$$\tilde{\sigma}_i = \sqrt{\bar{\sigma}_{ii}^{-1}}, \qquad (9)$$

where $\bar{g}_i = \left[ \Sigma^{-1} (y - \mu) \right]_i$ and $\bar{\sigma}_{ii} = \left[ \Sigma^{-1} \right]_{ii}$. The proof is based on results from Sundararajan and Keerthi (2001) and is provided in the Appendix.
In contrast to the brute-force computations in (5) and (6), where $\Sigma_{-i}$ has to be inverted separately for each $i$, Equations (8) and (9) require inverting the full covariance matrix $\Sigma$ only once, and the result can be reused for each $i$.
This reduces the computational cost to $O(N^k)$ if $\Sigma$ is independent of $\theta$ and $O(S N^k)$ otherwise. If the model is parameterized in terms of the covariance matrix $\Sigma = \Sigma(\theta)$, it is not possible to reduce the complexity further, as inverting $\Sigma$ is unavoidable. However, if the model is parameterized directly through the inverse of $\Sigma$, that is, $\Sigma^{-1} = \Sigma^{-1}(\theta)$, the complexity goes down to $O(S N^2)$. Note that the latter is not possible in the brute-force approach, as both $\Sigma$ and $\Sigma^{-1}$ would be required.
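A minimal R sketch of Proposition 1 (again with our own function names) makes the difference explicit: $\Sigma$ is inverted once, and all $N$ conditional moments follow from vectorized operations:

```r
# Efficient LOO moments via Proposition 1: one inversion of Sigma, then
# Equations (8) and (9) for all observations at once.
loo_moments_normal <- function(y, mu, Sigma) {
  Sigma_inv <- solve(Sigma)                # inverted only once, reused for all i
  g <- as.vector(Sigma_inv %*% (y - mu))   # g_i = [Sigma^{-1} (y - mu)]_i
  sigma_bar <- diag(Sigma_inv)             # sigma_bar_ii = [Sigma^{-1}]_ii
  mu_tilde <- y - g / sigma_bar            # Equation (8)
  sd_tilde <- sqrt(1 / sigma_bar)          # Equation (9)
  dnorm(y, mean = mu_tilde, sd = sd_tilde, log = TRUE)
}
```

If the model provides $\Sigma^{-1}$ directly, the call to solve can be dropped entirely, which reduces the per-draw cost to $O(N^2)$ as noted above.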

Non-factorized Student-t models
Several generalizations of the multivariate normal distribution have been suggested, perhaps most notably the multivariate Student-t distribution (Zellner, 1976), which has an additional positive degrees of freedom parameter $\nu$ that controls the tails of the distribution. If $\nu$ is small, the tails are much fatter than those of the normal distribution. If $\nu$ is large, the multivariate Student-t distribution becomes more similar to the corresponding multivariate normal distribution and is equal to the latter for $\nu \to \infty$. As $\nu$ can be estimated alongside the other model parameters in Student-t models, the thickness of the tails is flexibly adjusted based on information from the observed response values and the prior. The (multivariate) Student-t distribution has been studied in various places (e.g., Zellner, 1976; O'Hagan, 1979; Fernández and Steel, 1999; Zhang and Yeung, 2010; Piché et al., 2012; Shah et al., 2014). For example, Student-t processes, which are based on the multivariate Student-t distribution, constitute a generalization of Gaussian processes while retaining most of the latter's favorable properties (Shah et al., 2014).

The density of the $N$-dimensional multivariate Student-t distribution of vector $y$ is given by

$$p(y \mid \nu, \mu, \Sigma) = \frac{\Gamma((\nu + N)/2)}{\Gamma(\nu/2) \, (\nu \pi)^{N/2} \, |\Sigma|^{1/2}} \left( 1 + \frac{1}{\nu} (y - \mu)^{\mathrm{T}} \Sigma^{-1} (y - \mu) \right)^{-(\nu + N)/2}, \qquad (10)$$

with degrees of freedom $\nu$, location vector $\mu$, and scale matrix $\Sigma$. The mean of $y$ is $\mu$ if $\nu > 1$, and $\frac{\nu}{\nu - 2} \Sigma$ is the covariance matrix if $\nu > 2$. Similar to the multivariate normal case, the conditional distribution of the $i$th observation given all other observations and the model parameters, $p(y_i \mid y_{-i}, \theta)$, can be computed analytically.
Proposition 2. If $y$ is multivariate Student-t with degrees of freedom $\nu$, location vector $\mu$, and scale matrix $\Sigma$, then the conditional distribution of $y_i$ given $y_{-i}$ for any observation $i$ is univariate Student-t with parameters

$$\tilde{\nu}_i = \nu + N - 1, \qquad (11)$$

$$\tilde{\mu}_i = \mu_i + \sigma_{i,-i} \Sigma_{-i}^{-1} (y_{-i} - \mu_{-i}), \qquad (12)$$

$$\tilde{\sigma}_i^2 = \frac{\nu + \beta_{-i}}{\nu + N - 1} \left( \sigma_{ii} - \sigma_{i,-i} \Sigma_{-i}^{-1} \sigma_{-i,i} \right), \qquad (13)$$

where

$$\beta_{-i} = (y_{-i} - \mu_{-i})^{\mathrm{T}} \Sigma_{-i}^{-1} (y_{-i} - \mu_{-i}). \qquad (14)$$

A proof based on results of Shah et al. (2014) is given in the Appendix. Here $\tilde{\mu}_i$ is the same as in the normal case, and $\tilde{\sigma}_i$ is the same up to the correction factor $\frac{\nu + \beta_{-i}}{\nu + N - 1}$, which approaches $1$ for $\nu \to \infty$, as one would expect. Based on the above equations, we can compute the pointwise log-likelihood values in the Student-t case as

$$\log p(y_i \mid y_{-i}, \theta) = \log \operatorname{Student-t}\left( y_i \mid \tilde{\nu}_i, \tilde{\mu}_i, \tilde{\sigma}_i \right). \qquad (15)$$

This approach has the same overall computational cost of $O(S N^{k+1})$ as the non-optimized normal case and is therefore quite inefficient. Fortunately, the efficiency can again be improved.

Proposition 3. If $y$ is multivariate Student-t with degrees of freedom $\nu$, location vector $\mu$, and scale matrix $\Sigma$, then the conditional location and scale of $y_i$ given $y_{-i}$ for any observation $i$ can be computed as

$$\tilde{\mu}_i = y_i - \bar{\sigma}_{ii}^{-1} \bar{g}_i, \qquad (16)$$

$$\tilde{\sigma}_i^2 = \frac{\nu + \beta_{-i}}{\nu + N - 1} \, \bar{\sigma}_{ii}^{-1}, \qquad (17)$$

with $\bar{g}_i$ and $\bar{\sigma}_{ii}$ as in Proposition 1 and

$$\beta_{-i} = (y - \mu)^{\mathrm{T}} \Sigma^{-1} (y - \mu) - \bar{\sigma}_{ii}^{-1} \bar{g}_i^2. \qquad (18)$$

The proof is given in the Appendix. As in the normal case, the full matrix $\Sigma$ has to be inverted only once for all observations, so that the computational cost is $O(S N^k)$ when $\Sigma = \Sigma(\theta)$ depends on the model parameters. If the model is parameterized directly through $\Sigma^{-1} = \Sigma^{-1}(\theta)$, the complexity goes down to $O(S N^2)$ and so avoids matrix inversion altogether. However, even $O(S N^k)$ is still substantially more efficient than the brute-force approach, which requires $O(S N^{k+1}) > O(S N^3)$ operations.
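The Student-t analogue is a small extension of the normal sketch above; the quadratic form and the $\beta_{-i}$ from Equation (18) are obtained from quantities already computed (function names are again ours):

```r
# LOO location and scale via Proposition 3, plus the pointwise log-likelihood
# of Equation (15) expressed through the standard t density.
loglik_loo_student_t <- function(y, nu, mu, Sigma) {
  N <- length(y)
  Sigma_inv <- solve(Sigma)
  g <- as.vector(Sigma_inv %*% (y - mu))   # as in Proposition 1
  sigma_bar <- diag(Sigma_inv)
  quad <- sum((y - mu) * g)                # (y - mu)' Sigma^{-1} (y - mu)
  beta_mi <- quad - g^2 / sigma_bar        # Equation (18), one value per i
  nu_tilde <- nu + N - 1                   # Equation (11)
  mu_tilde <- y - g / sigma_bar            # Equation (16)
  sigma_tilde <- sqrt((nu + beta_mi) / nu_tilde / sigma_bar)  # Equation (17)
  # log density of the location-scale Student-t distribution
  dt((y - mu_tilde) / sigma_tilde, df = nu_tilde, log = TRUE) - log(sigma_tilde)
}
```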

Example: Lagged SAR models
It often requires additional work to take a given multivariate normal or Student-t model and express it in the form required to apply the equations for the predictive mean and standard deviation. Consider, for example, the lagged simultaneous autoregressive (SAR) model (Cressie, 1992; Haining, 2003; LeSage and Pace, 2009), a spatial model with many applications in both the social sciences (e.g., economics) and natural sciences (e.g., ecology). The model is given by

$$y = \rho W y + \eta + \varepsilon,$$

or equivalently

$$(I - \rho W) \, y = \eta + \varepsilon,$$

where $\rho$ is a scalar spatial correlation parameter and $W$ is a user-defined matrix of weights. The matrix $W$ has entries $w_{ii} = 0$ along the diagonal, and the off-diagonal entries $w_{ij}$ are larger when units $i$ and $j$ are closer to each other, with most of them being zero as well (i.e., $W$ is sparse). In a linear model, the predictor term is $\eta = X\beta$, with design matrix $X$ and regression coefficients $\beta$, but the definition of the lagged SAR model holds for arbitrary $\eta$, so these results are not restricted to the linear case. See LeSage and Pace (2009), Section 2.3, for a more detailed introduction to SAR models. A general discussion about predictions of SAR models from a frequentist perspective can be found in Goulard et al. (2017).
If we have $\varepsilon \sim \mathrm{N}(0, \sigma^2 I)$, with residual variance $\sigma^2$ and identity matrix $I$ of dimension $N$, it follows that

$$(I - \rho W) \, y \sim \mathrm{N}(\eta, \sigma^2 I),$$

but this standard way of expressing the model is not compatible with the requirements of Proposition 1. To make the lagged SAR model reconcilable with this proposition we need to rewrite it as follows (conditional on $\rho$, $\eta$, and $\sigma^2$):

$$y \sim \mathrm{N}\left( (I - \rho W)^{-1} \eta, \; \sigma^2 \left( (I - \rho W)^{\mathrm{T}} (I - \rho W) \right)^{-1} \right),$$

or more compactly, with $\widetilde{W} = (I - \rho W)$,

$$y \sim \mathrm{N}\left( \widetilde{W}^{-1} \eta, \; \sigma^2 \left( \widetilde{W}^{\mathrm{T}} \widetilde{W} \right)^{-1} \right).$$

Written in this way, the lagged SAR model has the required form (4). Accordingly, we can compute the leave-one-out predictive densities with Equations (8) and (9), replacing $\mu$ with $\widetilde{W}^{-1} \eta$ and taking the covariance matrix $\Sigma$ to be $\sigma^2 (\widetilde{W}^{\mathrm{T}} \widetilde{W})^{-1}$. This implies $\Sigma^{-1} = \sigma^{-2} \widetilde{W}^{\mathrm{T}} \widetilde{W}$, so that the overall computational cost is dominated by the computation of $\widetilde{W}^{-1} \eta$. In SAR models, $W$ is usually sparse and so is $\widetilde{W}$. Thus, if sparse matrix operations are used, the computational cost for $\Sigma^{-1}$ will be less than $O(N^2)$ and for $\widetilde{W}^{-1}$ it will be less than $O(N^3)$, depending on the number of non-zeros and the fill pattern. Since $\widetilde{W}$ depends on the parameter $\rho$ in a non-trivial manner, $\widetilde{W}^{-1}$ needs to be computed for each posterior draw, which implies an overall computational cost of less than $O(S N^3)$.
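Putting the pieces together, the following sketch (assuming the sparse Matrix package and our own function names) computes the conditional pointwise log-likelihoods of a normal lagged SAR model for a single posterior draw:

```r
# Pointwise LOO log-likelihoods for a normal lagged SAR model, exploiting
# Sigma^{-1} = sigma^{-2} W_tilde' W_tilde so that no dense inversion of
# Sigma itself is needed.
library(Matrix)

loglik_sar_normal <- function(y, eta, rho, sigma, W) {
  W_tilde <- Diagonal(length(y)) - rho * W      # W_tilde = I - rho * W (sparse)
  mu <- as.vector(solve(W_tilde, eta))          # mu = W_tilde^{-1} eta
  Sigma_inv <- crossprod(W_tilde) / sigma^2     # Sigma^{-1} = sigma^{-2} W_tilde' W_tilde
  g <- as.vector(Sigma_inv %*% (y - mu))
  sigma_bar <- diag(Sigma_inv)
  dnorm(y, mean = y - g / sigma_bar,            # Equation (8)
        sd = sqrt(1 / sigma_bar), log = TRUE)   # Equation (9)
}
```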
If the residuals are Student-t distributed, we can apply transformations analogous to those above to arrive at a multivariate Student-t distribution for the responses, with computational cost dominated by the computation of the $\beta_{-i}$ from Equation (18). Studying leave-one-out predictive densities in SAR models is related to considering impact measures, that is, measures that quantify how changes in the predicting variables of a given observation $i$ affect the responses in other observations $j \neq i$ as well as the obtained parameter estimates (see LeSage and Pace, 2009, Section 2.7). A detailed discussion of this topic is out of the scope of the present paper.

Approximate LOO-CV for non-factorized models
Exact LOO-CV requires refitting the model $N$ times, each time leaving out one observation. Alternatively, it is possible to obtain an approximate LOO-CV using only a single model fit, by instead calculating the pointwise log-predictive density (1) without leaving out any observations and then applying an importance sampling correction (Gelfand et al., 1992), for example, using Pareto-smoothed importance sampling (PSIS; Vehtari et al., 2017).
The conditional pointwise log-likelihood matrix of dimension $S \times N$ is the only required input to the approximate LOO-CV algorithm from Vehtari et al. (2017), and thus the equations provided in Section 2 allow for approximate LOO-CV for any model that can be expressed conditionally in terms of a multivariate normal or Student-t model with invertible covariance or scale matrix $\Sigma$, including those models whose likelihood does not factorize.
Suppose we have obtained $S$ posterior draws $\theta^{(s)}$, $s = 1, \ldots, S$, from the full posterior $p(\theta \mid y)$ using MCMC or another sampling algorithm. Then, the pointwise log-predictive density (1) can be approximated as

$$p(y_i \mid y_{-i}) \approx \frac{\sum_{s=1}^S w_i^{(s)} \, p(y_i \mid y_{-i}, \theta^{(s)})}{\sum_{s=1}^S w_i^{(s)}},$$

where $w_i^{(s)}$ are importance weights to be computed in two steps. First, we obtain the raw importance ratios

$$r_i^{(s)} \propto \frac{1}{p(y_i \mid y_{-i}, \theta^{(s)})},$$

and then stabilize them using Pareto-smoothed importance sampling to obtain the weights $w_i^{(s)}$ (Vehtari et al., 2017). The resulting approximation is referred to as PSIS-LOO-CV (Vehtari et al., 2017).
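In R, this two-step computation is available in the loo package; a minimal sketch, assuming an $S \times N$ log-likelihood matrix log_lik_mat and MCMC chain indices chain_ids are available:

```r
# PSIS-LOO-CV from a conditional pointwise log-likelihood matrix using the
# loo package; the package computes the raw ratios and the Pareto smoothing.
library(loo)

r_eff <- relative_eff(exp(log_lik_mat), chain_id = chain_ids)
psis_loo <- loo(log_lik_mat, r_eff = r_eff)
print(psis_loo)                        # elpd_loo estimate and diagnostics
pareto_k <- pareto_k_values(psis_loo)  # inspect problematic observations
```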
For Bayesian models fit using MCMC, the whole procedure of evaluating and comparing model fit via PSIS-LOO-CV can be summarized as follows:
1. Fit the model using MCMC, obtaining $S$ draws from the posterior distribution of the parameters $\theta$.
2. Compute the $S \times N$ matrix of conditional pointwise log-likelihood values using the equations provided in Section 2.
3. Run the PSIS-LOO-CV algorithm on this matrix to obtain the pointwise ELPD contributions and the Pareto-k diagnostics.
4. Repeat steps 1 to 3 for each model of interest and compare the models based on their ELPD estimates.
In Section 4, we demonstrate this method by performing approximate LOO-CV for lagged SAR models fit to spatially correlated crime data.

Validation using exact LOO-CV
In order to validate the approximate LOO-CV procedure, and also to allow exact computations for the small number of leave-one-out folds for which the Pareto-$\hat{k}$ diagnostic (Vehtari et al., 2017) indicates an unstable approximation, we need to consider how to perform exact LOO-CV for a non-factorized model. Here we provide the necessary equations, and in the supplementary materials we provide code for comparing the exact and approximate versions.
For multivariate normal or Student-t models that have the marginalization property, exact LOO-CV is relatively straightforward: when refitting the model, we can simply drop the row and column of the covariance matrix $\Sigma$ corresponding to the held-out observation without altering the prior of the other observations. However, this does not hold in general for all multivariate normal or Student-t models (in particular, it does not hold for SAR models). Instead, in order to keep the original prior, we may need to maintain the full covariance matrix $\Sigma$ even when one of the observations is left out.
The general solution is to model $y_i$ as a missing observation and estimate it along with all of the model parameters. For a multivariate normal or Student-t model, $\log p(y_i \mid y_{-i})$ can then be computed as follows. First, we model $y_i$ as missing and denote the corresponding parameter $y_i^{\mathrm{mis}}$. Then, we define

$$y^{\mathrm{mis}(i)} = (y_1, \ldots, y_{i-1}, y_i^{\mathrm{mis}}, y_{i+1}, \ldots, y_N)$$

to be the same as the full set of observations $y$ but with $y_i$ replaced by the parameter $y_i^{\mathrm{mis}}$. Second, we compute the log predictive densities as in Equations (7) and (15), but replacing $y$ with $y^{\mathrm{mis}(i)}$ in all computations. Finally, the leave-one-out predictive distribution can be estimated as

$$p(y_i \mid y_{-i}) \approx \frac{1}{S} \sum_{s=1}^S p\left( y_i \mid y_{-i}, \theta_{-i}^{(s)} \right),$$

where $\theta_{-i}^{(s)}$ are draws from the posterior distribution $p(\theta \mid y^{\mathrm{mis}(i)})$.
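Given posterior draws from such a refit, the average over draws is best computed on the log scale. A minimal sketch, assuming a vector log_lik_i holding the values $\log p(y_i \mid y_{-i}, \theta_{-i}^{(s)})$ for one held-out observation:

```r
# Exact LOO elpd contribution for observation i via the log-sum-exp trick:
# log( (1/S) * sum_s exp(log_lik_i[s]) ), computed without numerical underflow.
library(matrixStats)

elpd_exact_i <- logSumExp(log_lik_i) - log(length(log_lik_i))
```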

Case Study: Neighborhood Crime in Columbus, Ohio
In order to demonstrate how to carry out the computations implied by these equations, we fit and evaluate lagged SAR models to data on crime in 49 different neighborhoods of Columbus, Ohio, during the year 1980. The data were originally described by Anselin (1988) and ship with the spdep R package (Bivand and Piras, 2015). The three variables in the data set relevant to this example are CRIME, the number of residential burglaries and vehicle thefts per thousand households in the neighborhood; HOVAL, housing value in units of $1000 USD; and INC, household income in units of $1000 USD. In addition, we have information about the spatial relationship of the neighborhoods, from which we can construct the weight matrix that helps account for the spatial dependency among the observations. In addition to the loo R package (Vehtari et al., 2018), for this analysis we use the brms interface (Bürkner, 2017, 2018) to Stan (Carpenter et al., 2017) to generate a Stan program and fit the model. The complete R code for this case study can be found in the supplemental materials.
We fit a normal SAR model first using the weakly-informative default priors of brms. In Figure 2 (a), we see that both higher income and higher housing value predict lower crime rates in the neighborhood. Moreover, there seems to be substantial spatial correlation between adjacent neighborhoods, as indicated by the posterior distribution of the lagsar parameter.
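In code, the model fit and the subsequent approximate LOO-CV take only a few lines. The sketch below assumes a recent brms version with formula-based autocorrelation terms and a data frame columbus together with the weight matrix W (older releases used autocor = cor_lagsar(W) instead):

```r
# Fit the normal lagged SAR model and compute PSIS-LOO-CV with brms.
library(brms)

fit_normal <- brm(
  CRIME ~ INC + HOVAL + sar(W, type = "lag"),  # lagged SAR term
  data = columbus,
  data2 = list(W = W),                         # weight matrix passed separately
  family = gaussian()
)
loo_normal <- loo(fit_normal)  # uses the non-factorized pointwise log-likelihood
```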
In order to evaluate model fit, the next step is to compute the pointwise log-likelihood values needed for approximate LOO-CV, applying the method laid out in Section 3. Since this is already implemented in brms (source code available at https://github.com/paul-buerkner/brms/blob/master/R/log_lik.R), we can simply use the built-in loo method on the fitted model to obtain the desired results. The quality of the approximation can be investigated graphically by plotting the Pareto-$\hat{k}$ diagnostic for each observation. Ideally, the estimates should not exceed 0.5, but in practice the algorithm turns out to be robust up to values of 0.7 (Vehtari et al., 2017). In Figure 2 (b), we see that the fourth observation is problematic.
This has two implications. First, it may reduce the accuracy of the LOO-CV approximation. Second, it indicates that the fourth observation is highly influential for the posterior, which calls into question the robustness of inferences obtained from this model. We will address the former issue first and come back to the latter issue afterwards.
The PSIS-LOO-CV approximation of the expected log predictive density for new data yields $\widehat{\mathrm{elpd}}_{\mathrm{approx}} = -186.9$. To verify the correctness of our approximate estimates, this result still needs to be validated against exact LOO-CV, which is somewhat more involved, as we need to refit the model $N$ times, each time leaving out a single observation. For the lagged SAR model, we cannot simply ignore the held-out observation entirely, as this would change the prior distribution. In other words, the lagged SAR model does not have the marginalization property that holds, for instance, for Gaussian process models. Instead, we have to model the held-out observation as a missing value, which is estimated along with the other model parameters (see the supplemental material for details on the R code). Although we were able to correct for the problematic observation in the approximate LOO-CV estimation by computing its contribution exactly, the mere existence of such problematic observations raises doubts about the appropriateness of the normal SAR model for the present data. Accordingly, it is sensible to fit a Student-t SAR model as a potentially better predicting alternative due to its fatter tails. We choose an informative Gamma(4, 0.5) prior (with mean 8 and standard deviation 4) on the degrees of freedom parameter $\nu$ to ensure rather fat tails of the likelihood a priori. For all other parameters, we continue to use the weakly-informative default priors of brms.
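A sketch of the Student-t fit, continuing from the normal model above (variable names as before; the prior specification syntax is standard brms):

```r
# Fit the Student-t lagged SAR model with an informative prior on the
# degrees of freedom parameter nu.
fit_student <- brm(
  CRIME ~ INC + HOVAL + sar(W, type = "lag"),
  data = columbus,
  data2 = list(W = W),
  family = student(),
  prior = prior(gamma(4, 0.5), class = nu)  # mean 8, sd 4
)
loo_student <- loo(fit_student)
```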
In Figure 3 (a), the marginal posterior distributions of the main model parameters are depicted. Comparing the results to those shown in Figure 2 (a), we see that the estimates of both the regression parameters and the SAR autocorrelation are quite similar to the estimates obtained from the normal model.
In contrast to the normal case, the Pareto-$\hat{k}$ diagnostics of the Student-t model shown in Figure 3 (b) remain unproblematic for all observations, so no refit is required. Lastly, let us compare the PSIS-LOO-CV estimate of the normal SAR model (after correcting for the problematic observation via refit) to that of the Student-t SAR model. The ELPD difference between the two models is $-0.3$ (SE $= 0.5$) in favor of the Student-t model, and thus very small and not substantial for any practical purposes. As shown in Figure 4, the pointwise elpd contributions are also highly similar. The Student-t model fits slightly but noticeably better only for the 4th observation. Other methods clearly identify the 4th observation as problematic as well (e.g., Halleck Vega and Elhorst, 2015). However, what exactly causes this outlier remains unclear, as nobody has been able to verify or spatially register the data set despite many attempts.
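For reference, the model comparison reported above can be reproduced along these lines, assuming the loo objects from the earlier sketches:

```r
# ELPD-based comparison of the two SAR models; the output reports the
# pairwise ELPD difference and its standard error.
loo_compare(loo_normal, loo_student)
```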

Conclusion
In this paper we derived how to perform and validate exact and approximate leave-one-out cross-validation (LOO-CV) for non-factorized multivariate normal and Student-t models, and we demonstrated the practical applicability of our method in a case study using spatial autoregressive models. The proposed LOO-CV approximation makes efficient and robust model fit evaluation and comparison feasible for many models that are widely used in temporal and spatial statistics. Importantly, our method does not only apply to non-factorizable models for which a factorized likelihood is unavailable. It is also useful when factorization is possible in principle but could result in an unstable LOO predictive density, either for numerical reasons or due to the use of finite estimation procedures such as Markov chain Monte Carlo. In such cases, marginalizing over observation-specific latent variables and using a non-factorized formulation can lead to more robust estimates.

Figure 4: Comparison of approximate pointwise elpd values for the normal SAR model (after refit for the 4th observation) and the Student-t SAR model (without refit). Observations with relevant differences are highlighted in red.