1 Introduction

Symmetric positive-definite matrices arise in spatial statistics, Gaussian-process inference, and spatiotemporal filtering, with a wealth of application areas, including geoscience (e.g., Cressie 1993; Banerjee et al. 2004), machine learning (e.g., Rasmussen and Williams 2006), data assimilation (e.g., Nychka and Anderson 2010; Katzfuss et al. 2016), and the analysis of computer experiments (e.g., Sacks et al. 1989; Kennedy and O’Hagan 2001). Inference in these areas typically relies on Cholesky decomposition of the positive-definite matrices. However, this operation scales cubically in the dimension of the matrix, and it is thus computationally infeasible for many modern problems and applications, which are increasingly high-dimensional.

Countless approaches have been proposed to address these computational challenges. Heaton et al. (2019) provide a recent review from a spatial-statistics perspective, and Liu et al. (2020) review approaches in machine learning. In high-dimensional filtering, proposed solutions include low-dimensional approximations (e.g., Verlaan and Heemink 1995; Pham et al. 1998; Wikle and Cressie 1999; Katzfuss and Cressie 2011), spectral methods (e.g., Wikle and Cressie 1999; Sigrist et al. 2015), and hierarchical approaches (e.g., Johannesson et al. 2003; Li et al. 2014; Saibaba et al. 2015; Jurek and Katzfuss 2021). Operational data assimilation often relies on ensemble Kalman filters (e.g., Evensen 1994; Burgers et al. 1998; Anderson 2001; Evensen 2007; Katzfuss et al. 2016, 2020d), which represent distributions by samples or ensembles.

Perhaps the most promising approximations for spatial data and Gaussian processes implicitly or explicitly rely on sparse Cholesky factors. The assumption of ordered conditional independence in the popular Vecchia approximation (Vecchia 1988) and its extensions (e.g., Stein et al. 2004; Datta et al. 2016; Guinness 2018; Katzfuss and Guinness 2021; Katzfuss et al. 2020a, b; Schäfer et al. 2021a) implies sparsity in the Cholesky factor of the precision matrix. Schäfer et al. (2021b) use an incomplete Cholesky decomposition to construct a sparse approximate Cholesky factor of the covariance matrix. However, these methods are not generally applicable to spatiotemporal filtering, because the assumed sparsity is not preserved under filtering operations.

Here, we relate the sparsity of the Cholesky factors of the covariance matrix and the precision matrix to specific assumptions regarding ordered conditional independence. We show that these assumptions are simultaneously satisfied for a particular Gaussian-process approximation that we call hierarchical Vecchia (HV), which is a special case of the general Vecchia approximation (Katzfuss and Guinness 2021) based on hierarchical domain partitioning (e.g., Katzfuss 2017; Katzfuss and Gong 2020). We show that the HV approximation can be computed using a simple and fast incomplete Cholesky decomposition.

Due to its remarkable property of implying a sparse Cholesky factor whose inverse has equivalent sparsity structure, HV is well suited for extensions to spatiotemporal filtering; this is in contrast to other Vecchia approximations and other spatial approximations relying on sparsity. We provide a scalable HV-based filter for linear Gaussian spatiotemporal state-space models, which is related to the multi-resolution filter of Jurek and Katzfuss (2021). Further, by combining HV with a Laplace approximation (cf. Zilber and Katzfuss 2021), our method can be used for the analysis of non-Gaussian data. Finally, by combining the methods with the extended Kalman filter (e.g., Grewal and Andrews 1993, Ch. 5), we obtain fast filters for high-dimensional, nonlinear, and non-Gaussian spatiotemporal models. For a given formulation of HV, the computational cost of all of our algorithms scales linearly in the state dimension, assuming sufficiently sparse temporal evolution.

This paper makes several important contributions. First, it succinctly summarizes and proves conditional-independence conditions that ensure that particular elements of the Cholesky factor and its inverse vanish; while some of these conditions were known before, we present them in a unified framework and provide precise proofs. Second, we describe a new version of the Vecchia approximation and demonstrate that it satisfies all of these conditional-independence conditions. Third, we show how the HV approximation can be easily computed using an incomplete Cholesky decomposition (IC0). Fourth, we show how IC0 can be used to construct an approximate Kalman filter. Finally, we describe how all these developments can be combined into a filtering algorithm. Using the Laplace approximation and the extended Kalman filter, our method is applicable to a broad class of state-space models with convex likelihoods and to nonlinear evolution models that can be linearized.

The remainder of this document is organized as follows. In Sect. 2, we specify the relationship between ordered conditional independence and sparse (inverse) Cholesky factors. Then, we build up increasingly complex and general methods, culminating in nonlinear and non-Gaussian spatiotemporal filters: In Sect. 3, we introduce the HV approximation for a linear Gaussian spatial field at a single time point; in Sect. 4, we extend this to non-Gaussian data; and in Sect. 5, we consider the general spatiotemporal filtering case, including nonlinear evolution. Section 6 contains numerical comparisons to existing approaches. Section 7 presents a filtering analysis of satellite data. Section 8 concludes. Appendices A–B contain proofs and further details. Sections S1–S6 in the Supplementary Material provide additional information, including on a particle filter for inference on unknown parameters in the model, an application to a nonlinear Lorenz model, and comparisons with the ensemble Kalman filter. Code implementing our methods and numerical comparisons is available at https://github.com/katzfuss-group/vecchiaFilter.

2 Sparsity of Cholesky factors

We begin by specifying the connections between ordered conditional independence and sparsity of the Cholesky factor of the covariance and precision matrix.

Remark 1

Let \(\mathbf {w}\) be a normal random vector with variance–covariance matrix \(\mathbf {K}\).

  1. Let \(\mathbf {L}= {{\,\mathrm{chol}\,}}(\mathbf {K})\) be the lower-triangular Cholesky factor of the covariance matrix \(\mathbf {K}\). For \(i>j\):

     $$\begin{aligned} \mathbf {L}_{i,j}=0 \iff w_i \perp w_j \, | \, \mathbf {w}_{1:j-1}. \end{aligned}$$

  2. Let \(\mathbf {U}= {{\,\mathrm{rchol}\,}}(\mathbf {K}^{-1}) = \mathbf {P}{{\,\mathrm{chol}\,}}( \mathbf {P}\mathbf {K}^{-1}\mathbf {P})\,\mathbf {P}\) be the Cholesky factor of the precision matrix under reverse ordering, where \(\mathbf {P}\) is the reverse-ordering permutation matrix. Then \(\mathbf {U}\) is upper-triangular, and for \(i>j\):

     $$\begin{aligned} \mathbf {U}_{j,i}=0 \iff w_i \perp w_j \, | \, \mathbf {w}_{1:j-1},\mathbf {w}_{j+1:i-1}. \end{aligned}$$

The connection between ordered conditional independence and the Cholesky factor of the precision matrix is well known (e.g., Rue and Held 2010); Part 2 of our remark states this connection under reverse ordering (e.g., Katzfuss and Guinness 2021, Prop. 3.3). In Part 1, we consider the lesser-known relationship between ordered conditional independence and sparsity of the Cholesky factor of the covariance matrix, which was recently discussed in Schäfer et al. (2021b, Sect. 1.4.2). For completeness, we provide a proof of Remark 1 in “Appendix B”.
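For illustration, the two factorizations in Remark 1 can be computed as follows for a small covariance matrix; this is a minimal NumPy sketch (not part of our implementation), with an arbitrary exponential covariance serving as the example.

```python
import numpy as np

# Small exponential covariance on a one-dimensional grid (illustrative parameters).
s = np.linspace(0, 1, 6)
K = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.3)

# Part 1: lower-triangular Cholesky factor of the covariance matrix, K = L L'.
L = np.linalg.cholesky(K)

# Part 2: Cholesky factor of the precision matrix under reverse ordering,
# U = P chol(P K^{-1} P) P, where P is the reverse-ordering permutation.
P = np.eye(len(s))[::-1]
U = P @ np.linalg.cholesky(P @ np.linalg.inv(K) @ P) @ P

# U is upper-triangular and satisfies U U' = K^{-1}.
assert np.allclose(U, np.triu(U))
assert np.allclose(U @ U.T, np.linalg.inv(K))
```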

Remark 1 is crucial for our later developments and proofs. In Sect. 3, we specify the HV approximation of Gaussian processes that satisfies both types of conditional independence in Remark 1; the resulting sparsity of the Cholesky factor and its inverse allows extensions to spatiotemporal filtering in Sect. 5.

3 HV approximation for large Gaussian spatial data

Consider a Gaussian process \(x(\cdot )\) and a vector \(\mathbf {x}= (x_1, \ldots , x_n)^\top \) representing \(x(\cdot )\) evaluated on a grid \(\mathcal {S}= \{\mathbf {s}_1,\ldots ,\mathbf {s}_n \}\), where \(\mathbf {s}_i \in \mathcal {D}\subset \mathbb {R}^d\) and \(x_i = x(\mathbf {s}_i)\) for \(i=1,\ldots ,n\). We assume the following model:

$$\begin{aligned} y_{i} \,|\, \mathbf {x}&{\mathop {\sim }\limits ^{ind}} \mathcal {N}(x_i,\tau _i^2),\quad i \in \mathcal {I}, \end{aligned}$$
(1)
$$\begin{aligned} \mathbf {x}&\sim \mathcal {N}_n(\varvec{\mu },\varvec{\Sigma }), \end{aligned}$$
(2)

where \(\mathcal {N}\) and \(\mathcal {N}_n\) denote, respectively, a univariate and a multivariate normal distribution, \(\mathcal {I}\subset \{1,\ldots ,n\}\) contains the indices of the grid points at which observations are available, and \(\mathbf {y}\) is the data vector consisting of these observations \(\{y_{i}: i \in \mathcal {I}\}\). Note that we can equivalently express (1) in matrix notation as \(\mathbf {y}\,|\, \mathbf {x}\sim \mathcal {N}(\mathbf {H}\mathbf {x},\mathbf {R})\), where \(\mathbf {H}\) is obtained by selecting only the rows with indices \(i \in \mathcal {I}\) from an identity matrix, and \(\mathbf {R}\) is a diagonal matrix with entries \(\{\tau _i^2: i \in \mathcal {I}\}\). While (1) assumes that the observations are conditionally independent given \(\mathbf {x}\), we discuss in Section S1 how this assumption can be relaxed.
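As a small illustration of this matrix notation, \(\mathbf {H}\) and \(\mathbf {R}\) can be assembled as follows; the index set and noise variances below are arbitrary placeholders.

```python
import numpy as np
from scipy import sparse

n = 8
obs_idx = np.array([0, 2, 3, 6])        # indices in I (0-based) with observations
tau2 = np.full(obs_idx.size, 0.2)       # illustrative noise variances tau_i^2

H = sparse.eye(n, format="csr")[obs_idx, :]   # rows of the identity with indices in I
R = sparse.diags(tau2)                        # diagonal noise covariance

# With these, y | x ~ N(H x, R) is equivalent to (1) at the observed indices.
```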

Our interest is in computing the posterior distribution of \(\mathbf {x}\) given \(\mathbf {y}\), which requires inverting or decomposing an \(n \times n\) matrix at a cost of \(\mathcal {O}(n^3)\) if \(|\mathcal {I}| = \mathcal {O}(n)\). This is computationally infeasible for large n.

3.1 The HV approximation

We now describe the HV approximation with unique sparsity and computational properties, which enable fast computation for spatial models as in (1)–(2) and also allow extensions to spatiotemporal filtering as explained later.

Assume that the elements of the vector \(\mathbf {x}\) are hierarchically partitioned into sets \(\mathcal {X}^m\), for \(m=0, \ldots, M\), where for \(m \ge 1\) we have \(\mathcal {X}^m = \bigcup _{j_1=1}^{J_1} \cdots \bigcup _{j_m=1}^{J_m} \mathcal {X}_{j_1,\ldots ,j_m}\), and \(\mathcal {X}_{j_1,\ldots ,j_m}\) is a set consisting of \(|\mathcal {X}_{j_1,\ldots ,j_m}|\) elements of \(\mathbf {x}\), such that there is no overlap between any two sets, \(\mathcal {X}_{j_1,\ldots ,j_m}\cap \mathcal {X}_{i_1,\ldots ,i_l}= \emptyset \) for \(({j_1,\ldots ,j_m}) \ne ({i_1,\ldots ,i_l})\). Note that this also means that the sets \(\mathcal {X}^m\) and \(\mathcal {X}^l\) are disjoint for \(m \ne l\). Define \(\mathcal {X}^{0:m} = \bigcup _{k=0}^m \mathcal {X}^k\). We assume that \(\mathbf {x}\) is ordered according to \(\mathcal {X}^{0:M}\), in the sense that if \(i>j\), then \(x_i \in \mathcal {X}^{m_1}\) and \(x_j \in \mathcal {X}^{m_2}\) with \(m_1 \ge m_2\). We also say that if \(m>l\), then elements of \(\mathcal {X}^m\) are at a higher level than elements of \(\mathcal {X}^l\). As a toy example with \(n=6\), the vector \(\mathbf {x}= (x_1,\ldots ,x_6)\) might be partitioned with \(M=1\), \(J_1 = 2\), as \(\mathcal {X}^{0:1} = \mathcal {X}^0 \cup \mathcal {X}^1\), \(\mathcal {X}^0 = \mathcal {X}= \{x_1,x_2\}\), and \(\mathcal {X}^1 = \mathcal {X}_{1} \cup \mathcal {X}_{2}\), where \(\mathcal {X}_{1}=\{x_3,x_4\}\) and \(\mathcal {X}_{2} = \{x_5,x_6\}\). Another toy example is illustrated in Fig. 1.

Fig. 1

Toy example with \(n=35\) of the HV approximation in (3) with \(M=2\) and \(J_1=J_2=2\); the color for each set \(\mathcal {X}_{j_1,\ldots ,j_m}\) is consistent across (a)–(c). a Partitioning of the spatial domain \(\mathcal {D}\) and the locations \(\mathcal {S}\); for level \(m=0,1,2\), locations of \(\mathcal {X}^{0:m}\) (solid dots) and locations of points at higher levels (\(\circ \)). b DAG illustrating the conditional-dependence structure, with bigger arrows for connections between vertices at neighboring levels of the hierarchy, to emphasize the tree structure. c Corresponding sparsity pattern of \(\mathbf {U}\) (see Proposition 1), with groups of columns/rows corresponding to different levels separated by pink lines, and groups of columns/rows corresponding to different \(\mathcal {X}_{j_1,\ldots ,j_m}\) at the same level separated by blue lines. (Color figure online)

The exact distribution of \(\mathbf {x}\sim \mathcal {N}_n(\varvec{\mu },\varvec{\Sigma })\) can be written as

$$\begin{aligned} \textstyle p(\mathbf {x}) = \prod _{m=0}^M \prod _{j_1,\ldots ,j_m} p\left( \mathcal {X}_{j_1,\ldots ,j_m}\,|\,\mathcal {X}^{0:m-1},\mathcal {X}_{{j_1,\ldots ,j_{m-1}},1:j_m-1}\right) , \end{aligned}$$

where the conditioning set of \(\mathcal {X}_{j_1,\ldots ,j_m}\) consists of all sets \(\mathcal {X}^{0:m-1}\) at lower levels, plus those at the same level that are previous in lexicographic ordering. The idea of Vecchia (1988) was to remove many of these variables in the conditioning set, which for geostatistical applications often incurs only small approximation error due to the so-called screening effect (e.g., Stein 2002, 2011).

Here we consider the HV approximation of the form

$$\begin{aligned} \textstyle \hat{p}(\mathbf {x}) = \prod _{m=0}^M \prod _{j_1,\ldots ,j_m}p(\mathcal {X}_{j_1,\ldots ,j_m}|\mathcal {A}_{j_1,\ldots ,j_m}), \end{aligned}$$
(3)

where \(\mathcal {A}_{j_1,\ldots ,j_m}= \mathcal {X}\cup \mathcal {X}_{j_1} \cup \cdots \cup \mathcal {X}_{{j_1,\ldots ,j_{m-1}}}\). We call \(\mathcal {A}_{j_1,\ldots ,j_m}\) the set of ancestors of \(\mathcal {X}_{j_1,\ldots ,j_m}\). For example, the set of ancestors of \(\mathcal {X}_{2,1,2}\) is \(\mathcal {A}_{2,1,2} = \mathcal {X}\cup \mathcal {X}_2 \cup \mathcal {X}_{2,1}\). Thus, \(\mathcal {A}_{j_1,\ldots ,j_m}= \mathcal {A}_{j_1,\ldots ,j_{m-1}}\cup \mathcal {X}_{j_1,\ldots ,j_{m-1}}\), and the ancestor sets are nested: \(\mathcal {A}_{j_1,\ldots ,j_{m-1}}\subset \mathcal {A}_{j_1,\ldots ,j_m}\). We can equivalently write (3) in terms of individual variables as

$$\begin{aligned} \textstyle \hat{p}(\mathbf {x}) = \prod _{i=1}^n p(x_i|\,\mathcal {C}_i), \end{aligned}$$
(4)

where \(\mathcal {C}_i = \mathcal {A}_{j_1,\ldots ,j_m}\cup \{x_k \in \mathcal {X}_{j_1,\ldots ,j_m}\! : \, k<i \}\) for \(x_i \in \mathcal {X}_{j_1,\ldots ,j_m}\). The choice of the \(\mathcal {C}_i\) involves a trade-off: generally, the larger the \(\mathcal {C}_i\), the higher the computational cost (see Proposition 4), but the smaller the approximation error; HV is exact when all \(\mathcal {C}_i = \{x_1,\ldots ,x_{i-1}\}\).

Vecchia approximations and their conditional-independence assumptions are closely connected to directed acyclic graphs (DAGs; Datta et al. 2016; Katzfuss and Guinness 2021). Summarizing briefly, as illustrated in Fig. 1b, we associate a vertex with each set \(\mathcal {X}_{j_1,\ldots ,j_m}\), and we draw an arrow from the vertex corresponding to \(\mathcal {X}_{{i_1,\ldots ,i_l}}\) to the vertex corresponding to \(\mathcal {X}_{j_1,\ldots ,j_m}\) if and only if \(\mathcal {X}_{{i_1,\ldots ,i_l}}\) is in the conditioning set of \(\mathcal {X}_{j_1,\ldots ,j_m}\) (i.e., \(\mathcal {X}_{{i_1,\ldots ,i_l}} \subset \mathcal {A}_{j_1,\ldots ,j_m}\)). DAGs corresponding to HV approximations always have a tree structure, due to the nested ancestor sets. Necessary terminology and notation from graph theory is reviewed in Appendix A.

In practice, as illustrated in Fig. 1a, we partition the spatial field \(\mathbf {x}\) into the hierarchical set \(\mathcal {X}^{0:M}\) based on a recursive partitioning of the spatial domain \(\mathcal {D}\) into \(J_1\) regions \(\mathcal {D}_{1},\ldots ,\mathcal {D}_{J_1}\), each of which is again split into \(J_2\) regions, and so forth, up to level M (Katzfuss 2017): \(\mathcal {D}_{j_1,\ldots ,j_{m-1}}= \bigcup _{j_m=1}^{J_m} \mathcal {D}_{j_1,\ldots ,j_m}\), \(m=1,\ldots ,M\). We then set each \(\mathcal {X}_{j_1,\ldots ,j_m}\) to be a subset of the variables in \(\mathbf {x}\) whose locations are in \(\mathcal {D}_{j_1,\ldots ,j_m}\): \(\mathcal {X}_{j_1,\ldots ,j_m}\subset \{x_i: \mathbf {s}_i \in \mathcal {D}_{j_1,\ldots ,j_m}\}\). This implies that the ancestors \(\mathcal {A}_{j_1,\ldots ,j_m}\) of each set \(\mathcal {X}_{j_1,\ldots ,j_m}\) consist of the variables associated with the regions at levels \(0,\ldots ,m-1\) that contain \(\mathcal {D}_{j_1,\ldots ,j_m}\). Specifically, for all our numerical examples, we set \(J_1=\cdots =J_M=2\), and we select each \(\mathcal {X}_{j_1,\ldots ,j_m}\) as the variables corresponding to the first \(|\mathcal {X}_{j_1,\ldots ,j_m}|\) locations in a maximum-distance ordering (Guinness 2018; Schäfer et al. 2021b) of \(\mathcal {S}\) that are contained in \(\mathcal {D}_{j_1,\ldots ,j_m}\) but whose variables are not already in \(\mathcal {A}_{j_1,\ldots ,j_m}\).
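To make the recursive construction concrete, the following sketch builds a binary (\(J=2\)) hierarchical partition of a set of locations by recursive median splits; it is a simplified stand-in for the GPvecchia default (in particular, the maximum-distance ordering within regions is omitted), and all names are ours.

```python
import numpy as np

def hierarchical_partition(idx, locs, M, r, level=0, path=(), dim=0):
    """Recursively build the sets X_{j_1,...,j_m} with J = 2 children per region.

    idx  : global indices of the points in the current region
    locs : (n, d) array of all locations
    Returns a list of (path, indices) pairs; path is () for X^0 = X,
    (j_1,) at level 1, (j_1, j_2) at level 2, and so on.
    """
    if level == M:                        # finest level: keep all remaining points
        return [(path, idx)]
    sets = [(path, idx[:r])]              # first r points form X_{path}
    rest = idx[r:]
    if rest.size > 0:
        cut = np.median(locs[rest, dim])  # split remaining points at the median
        left = rest[locs[rest, dim] <= cut]
        right = rest[locs[rest, dim] > cut]
        for j, child in enumerate((left, right), start=1):
            sets += hierarchical_partition(child, locs, M, r, level + 1,
                                           path + (j,), (dim + 1) % locs.shape[1])
    return sets

# toy example: 35 locations on the unit square, M = 2 levels, r = 5 points per region
rng = np.random.default_rng(1)
locs = rng.uniform(size=(35, 2))
sets = hierarchical_partition(np.arange(35), locs, M=2, r=5)
# The ancestors A_{j_1,...,j_m} of a set are exactly the sets whose paths are proper
# prefixes of (j_1,...,j_m), mirroring A = X u X_{j_1} u ... u X_{j_1,...,j_{m-1}}.
```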

The HV approximation (3) is closely related to the multi-resolution approximation (MRA; Katzfuss 2017; Katzfuss and Gong 2020), as noted in Katzfuss and Guinness (2021, Sect. 2.5); specifically, while HV makes conditional-independence assumptions that result in an approximate covariance matrix \({\hat{\varvec{\Sigma }}}\) of \(\mathbf {x}\) with a sparse Cholesky factor (see Proposition 1), the MRA relies on a basis-function representation of a spatial process that results in a sparse nontriangular matrix square root. However, the approximate covariance matrices \({\hat{\varvec{\Sigma }}}\) implied by both HV and MRA are hierarchical off-diagonal low-rank (HODLR) matrices (e.g., Hackbusch 2015; Ambikasaran et al. 2016; Saibaba et al. 2015; Geoga et al. 2020), as was noted for the MRA in Jurek and Katzfuss (2021). The definition, exposition, and details based on conditional independence and sparse Cholesky factors provided here enable our later proofs, simple incomplete-Cholesky-based computation, and extensions to non-Gaussian data and to nonlinear space–time filtering.

3.2 Sparsity of the HV approximation

For all Vecchia approximations, the assumed conditional independence implies a sparse Cholesky factor of the precision matrix (e.g., Datta et al. 2016; Katzfuss and Guinness 2021, Prop. 3.3). The conditional-independence assumption made in our HV approximation also implies a sparse Cholesky factor of the covariance matrix, which is in contrast to many other formulations of the Vecchia approximation. Let \(\mathcal {N}(\mathbf {x}|\varvec{\mu }, \varvec{\Sigma })\) denote the density of a normal distribution with mean \(\varvec{\mu }\) and covariance matrix \(\varvec{\Sigma }\) evaluated at \(\mathbf {x}\).

Proposition 1

For the HV approximation in (3), we have \({\hat{p}}(\mathbf {x}) = \mathcal {N}_n(\mathbf {x}|\varvec{\mu },{\hat{\varvec{\Sigma }}})\). Define \(\mathbf {L}= {{\,\mathrm{chol}\,}}({\hat{\varvec{\Sigma }}})\) and \(\mathbf {U}= {{\,\mathrm{rchol}\,}}({\hat{\varvec{\Sigma }}}^{-1}) = \mathbf {P}{{\,\mathrm{chol}\,}}( \mathbf {P}{\hat{\varvec{\Sigma }}}^{-1}\mathbf {P})\,\mathbf {P}\), where \(\mathbf {P}\) is the reverse-ordering permutation matrix.

  1. For \(i\ne j\):

     (a) \(\mathbf {L}_{i,j} = 0\) unless \(x_j \in \mathcal {C}_i\);

     (b) \(\mathbf {U}_{j,i} = 0\) unless \(x_j \in \mathcal {C}_i\).

  2. \(\mathbf {U}= \mathbf {L}^{-\top }\).

The proof relies on Remark 1. All proofs can be found in “Appendix B”. Proposition 1 says that the Cholesky factors of the covariance and precision matrix implied by a HV approximation are both sparse, and \(\mathbf {U}\) has the same sparsity pattern as \(\mathbf {L}^{\top }\). An example of this pattern is shown in Fig. 1c. Furthermore, because \(\mathbf {L}= \mathbf {U}^{-\top }\), we can quickly compute one of these factors given the other, as described in Sect. 3.3 (see the proof of Proposition 4).

For other Vecchia approximations, the sparsity of the prior Cholesky factor \(\mathbf {U}\) for \(\mathbf {x}\) does not necessarily imply the same sparsity for the Cholesky factor of the posterior precision matrix of \(\mathbf {x}\) given \(\mathbf {y}\), and in fact there can be substantial fill-in (Katzfuss and Guinness 2021). However, this is not the case for HV, for which the posterior sparsity is exactly the same as the prior sparsity:

Proposition 2

Assume that \(\mathbf {x}\) has the distribution \({\hat{p}}(\mathbf {x})\) given by the HV approximation in (3). Let \({\widetilde{\varvec{\Sigma }}} = {{\,\mathrm{Var}\,}}(\mathbf {x}|\mathbf {y})\) be the posterior covariance matrix of \(\mathbf {x}\) given data \(y_{i} \,|\, \mathbf {x}{\mathop {\sim }\limits ^{ind}} \mathcal {N}(x_i,\tau _i^2)\), \(i \in \mathcal {I}\subset \{1,\ldots ,n\}\), as in (1). Then:

  1. \({\widetilde{\mathbf {U}}} = {{\,\mathrm{rchol}\,}}({\widetilde{\varvec{\Sigma }}}^{-1})\) has the same sparsity pattern as \(\mathbf {U}= {{\,\mathrm{rchol}\,}}({\hat{\varvec{\Sigma }}}^{-1})\).

  2. \({\widetilde{\mathbf {L}}} = {{\,\mathrm{chol}\,}}({\widetilde{\varvec{\Sigma }}})\) has the same sparsity pattern as \(\mathbf {L}= {{\,\mathrm{chol}\,}}({\hat{\varvec{\Sigma }}})\).

3.3 Fast computation using incomplete Cholesky factorization

For notational and computational convenience, we assume now that each conditioning set \(\mathcal {C}_i\) consists of at most \(N\) elements of \(\mathbf {x}\). For example, this can be achieved by setting \(|\mathcal {X}_{j_1,\ldots ,j_m}|\le r\) with \(r = N/(M+1)\). Then \(\mathbf {U}\) can be computed using general expressions for the Vecchia approximation in \(\mathcal {O}(nN^3)\) time (e.g., Katzfuss and Guinness 2021). Inference using the related multi-resolution decomposition (Katzfuss 2017; Katzfuss and Gong 2020; Jurek and Katzfuss 2021) can be carried out in \(\mathcal {O}(nN^2)\) time, but these algorithms are fairly involved.

Instead, we show here how HV inference can be carried out in \(\mathcal {O}(nN^2)\) time using standard sparse-matrix algorithms, including the incomplete Cholesky factorization, based on at most \(nN\) entries of \(\varvec{\Sigma }\). Our algorithm, which is based on ideas in Schäfer et al. (2021b), is much simpler than multi-resolution decompositions.

[Algorithm 1: incomplete Cholesky factorization, \(\mathbf {L}= {{\,\mathrm{\text {ichol}}\,}}(\mathbf {A},\mathbf {S})\)]

The incomplete Cholesky factorization (e.g., Golub and Van Loan 2012), denoted by \({{\,\mathrm{\text {ichol}}\,}}(\mathbf {A}, \mathbf {S})\) and given in Algorithm 1, is identical to the standard Cholesky factorization of the matrix \(\mathbf {A}\), except that we skip all operations that involve elements that are not in the sparsity pattern represented by the zero-one matrix \(\mathbf {S}\). It is important to note that to compute \(\mathbf {L}={{\,\mathrm{\text {ichol}}\,}}(\mathbf {A}, \mathbf {S})\) for a large dense matrix \(\mathbf {A}\), we do not actually need to form or access the entire \(\mathbf {A}\); instead, to reduce memory usage and computational cost, we simply compute \(\mathbf {L}={{\,\mathrm{\text {ichol}}\,}}(\mathbf {A}\circ \mathbf {S}, \mathbf {S})\) based on the sparse matrix \(\mathbf {A}\circ \mathbf {S}\), where \(\circ \) denotes element-wise multiplication. Thus, while we write expressions like \(\mathbf {L}={{\,\mathrm{\text {ichol}}\,}}(\mathbf {A}, \mathbf {S})\) for notational simplicity below, this should always be read as \(\mathbf {L}={{\,\mathrm{\text {ichol}}\,}}(\mathbf {A}\circ \mathbf {S}, \mathbf {S})\).
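For concreteness, the following is a minimal dense-array sketch of \({{\,\mathrm{\text {ichol}}\,}}\) in Python (ours, not the paper's implementation); a practical version would use sparse-matrix storage and touch only the at most \(nN\) stored entries.

```python
import numpy as np

def ichol(A, S):
    """Incomplete Cholesky factorization ichol(A, S): a standard (right-looking)
    Cholesky factorization in which all operations involving entries outside the
    lower-triangular 0/1 pattern S (which includes the diagonal) are skipped.
    Dense arrays are used here purely for clarity."""
    n = A.shape[0]
    L = np.tril(A * S).astype(float)          # start from A composed with S
    for j in range(n):
        L[j, j] = np.sqrt(L[j, j])
        for i in range(j + 1, n):             # scale column j within the pattern
            if S[i, j]:
                L[i, j] /= L[j, j]
        for k in range(j + 1, n):             # trailing update, restricted to S
            for i in range(k, n):
                if S[i, k]:
                    L[i, k] -= L[i, j] * L[k, j]
    return L

# sanity check: with a dense pattern, ichol reduces to the exact Cholesky factor
A = np.array([[4.0, 2.0, 1.0], [2.0, 3.0, 0.5], [1.0, 0.5, 2.0]])
assert np.allclose(ichol(A, np.tril(np.ones_like(A))), np.linalg.cholesky(A))
```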

For our HV approximation in (3), we henceforth set \(\mathbf {S}\) to be a sparse lower-triangular matrix with \(\mathbf {S}_{i,j}=1\) if \(x_j \in \mathcal {C}_i\) or if \(i=j\), and 0 otherwise. Thus, the sparsity pattern of \(\mathbf {S}\) is the same as that of \(\mathbf {L}\), and its transpose is that of \(\mathbf {U}\) shown in Fig. 1c.

Proposition 3

Assuming (3), denote \({{\,\mathrm{Var}\,}}(\mathbf {x}) = {\hat{\varvec{\Sigma }}}\) and \(\mathbf {L}= {{\,\mathrm{chol}\,}}({\hat{\varvec{\Sigma }}})\). Then, \(\mathbf {L}= {{\,\mathrm{\text {ichol}}\,}}(\varvec{\Sigma }, \mathbf {S})\).

Hence, the Cholesky factor of the covariance matrix \({\hat{\varvec{\Sigma }}}\) implied by the HV approximation can be computed using the incomplete Cholesky algorithm based on the (at most) nN entries of the exact covariance \(\varvec{\Sigma }\) indicated by \(\mathbf {S}\). Using this result, we propose Algorithm 2 for posterior inference on \(\mathbf {x}\) given \(\mathbf {y}\).

[Algorithm 2: posterior inference on \(\mathbf {x}\) given \(\mathbf {y}\), based on the incomplete Cholesky factorization]
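The following dense sketch illustrates the role of Proposition 3 in posterior inference for the Gaussian model (1)–(2); it reuses the ichol sketch above together with standard Gaussian conjugate algebra, and it is not the paper's Algorithm 2 (which additionally exploits the sparsity of all factors). All numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.linspace(0, 1, 6)
Sigma = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.3)   # exact prior covariance
mu = np.zeros(6)
S = np.tril(np.ones((6, 6)))      # sparsity pattern; dense here, so HV is exact

L = ichol(Sigma, S)               # Prop. 3: chol of the HV-implied covariance
U = np.linalg.inv(L).T            # Prop. 1: U = L^{-T}, same pattern as L^T

obs = np.array([0, 2, 4])         # observed indices I
tau2 = 0.2
H = np.eye(6)[obs, :]
x = mu + L @ rng.standard_normal(6)                       # simulate the latent field
y = H @ x + np.sqrt(tau2) * rng.standard_normal(obs.size)

prior_prec = U @ U.T                                      # HV prior precision
post_prec = prior_prec + H.T @ H / tau2                   # posterior precision; by
                                                          # Prop. 2 its factors keep
                                                          # the sparsity pattern
post_mean = mu + np.linalg.solve(post_prec, H.T @ (y - H @ mu) / tau2)
```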

By combining the incomplete Cholesky factorization with the results in Propositions 1 and 2 (saying that all involved Cholesky factors are sparse), we can perform fast posterior inference:

Proposition 4

Algorithm 2 can be carried out in \(\mathcal {O}(nN^2)\) time and \(\mathcal {O}(nN)\) space, assuming that \(|\mathcal {C}_i| \le N\) for all \(i=1,\ldots ,n\).

3.4 Approximation accuracy

Formal error quantification for the HV approximation is a matter of ongoing research. Numerical simulations suggest that, adjusting for its lower computational complexity, it can be almost as accurate as the state-of-the-art general Vecchia approximation in many settings (Katzfuss et al. 2020a, Fig. S3). Generally, the quality of the approximation increases with the conditioning-set size \(N\) and with the strength of the screening effect in the process to be approximated. An approximately low-rank covariance matrix can be captured accurately with only a small number of levels M, while stronger local variation may require a larger M. In our numerical experiments, we rely on the default strategy implemented in the GPvecchia package (Katzfuss et al. 2020c), which we have found to produce good results. A drawback of the HV approximation is that its hierarchical conditional-independence assumptions can result in visual artifacts or edge effects along partition boundaries; some examples and a discussion of this issue are provided in Section S5. When these edge effects are a major concern, one can use a larger N (at the cost of some computational speed), manually place the conditioning points at low levels near partition boundaries (e.g., Katzfuss 2017, Sect. 2.5), or forgo the HV approximation in favor of a different (Vecchia) approximation for purely spatial problems. However, for spatiotemporal filtering, we are not aware of any other spatial approximation method that ensures that the Cholesky factor of the filtering precision matrix preserves its sparsity pattern when filtering across time points. This key advantage of our approach ensures scalability of the HV filter discussed in Sect. 5. Using the HV approximation in a filtering setting has the added benefit that the boundary artifacts in the filtering mean field tend to fade or even disappear within a few time steps (see Sect. S5), because dependence between neighboring points is captured not only by the spatial covariance function but also by the temporal evolution operator; this is particularly helpful for diffusion or transport evolution models.

4 Extensions to non-Gaussian spatial data using the Laplace approximation

Now consider the model

$$\begin{aligned} y_{i} \,|\, \mathbf {x}&{\mathop {\sim }\limits ^{ind}} g_i(y_{i} | x_{i}),\quad i \in \mathcal {I}, \end{aligned}$$
(5)
$$\begin{aligned} \mathbf {x}&\sim \mathcal {N}_n(\varvec{\mu },\varvec{\Sigma }), \end{aligned}$$
(6)

where \(g_i\) is a distribution from an exponential family, and we slightly abuse notation by using \(g_i(y_i|x_i)\) to denote both the distribution and its density. Using the HV approximation in (3)–(4) for \(\mathbf {x}\), the implied posterior can be written as:

$$\begin{aligned} \hat{p}(\mathbf {x}|\mathbf {y}) = \frac{p(\mathbf {y}|\mathbf {x}) \hat{p}(\mathbf {x})}{\int p(\mathbf {y}|\mathbf {x}) \hat{p}(\mathbf {x}) d\mathbf {x}} = \frac{\left( \prod _{i \in \mathcal {I}} g_i(y_i|x_i)\right) \hat{p}(\mathbf {x})}{\int \left( \prod _{i \in \mathcal {I}} g_i(y_i|x_i)\right) \hat{p}(\mathbf {x}) d\mathbf {x}}. \end{aligned}$$
(7)

Unlike in the Gaussian case (1), the integral in the denominator cannot generally be evaluated in closed form, and Markov chain Monte Carlo methods are often used to numerically approximate the posterior. Instead, Zilber and Katzfuss (2021) proposed a much faster method that combines a general Vecchia approximation with the Laplace approximation (e.g., Tierney and Kadane 1986; Rasmussen and Williams 2006, Sect. 3.4). The Laplace approximation is based on a Gaussian approximation of the posterior, obtained by carrying out a second-order Taylor expansion of the posterior log-density around its mode. Although the mode cannot generally be obtained in closed form, it can be computed straightforwardly using a Newton–Raphson procedure, because \(\log \hat{p}(\mathbf {x}|\mathbf {y}) = \log p(\mathbf {y}|\mathbf {x}) + \log \hat{p}(\mathbf {x}) + c\) is a sum of two concave functions and hence also concave (as a function of \(\mathbf {x}\), under appropriate parametrization of the \(g_i\)).

While each Newton–Raphson update requires the computation and decomposition of the \(n \times n\) Hessian matrix, the update can be carried out quickly by making use of the sparsity implied by the Vecchia approximation. To do so, we follow Zilber and Katzfuss (2021) in exploiting the fact that the Newton–Raphson update is equivalent to computing the conditional mean of \(\mathbf {x}\) given pseudo-data. Specifically, at the \(\ell \)th iteration of the algorithm, given the current state value \(\mathbf {x}^{(\ell )}\), let us define

$$\begin{aligned} \mathbf {u}^{(\ell )} = \big [ u^{(\ell )}_i \big ]_{i \in \mathcal {I}}, \quad \text {where} \quad u^{(\ell )}_i = \frac{\partial }{\partial x} \log g_i(y_i|x)\big \vert _{x=x^{(\ell )}_i}, \end{aligned}$$

and

$$\begin{aligned} \mathbf {D}^{(\ell )} = {{\,\mathrm{diag}\,}}\big (\{d_i^{(\ell )}: i \in \mathcal {I}\}\big ), \quad \text {where} \quad d_i^{(\ell )} = -\Big (\frac{\partial ^2}{\partial x^2} \log g_i(y_i|x)\Big )^{-1}\Big \vert _{x=x^{(\ell )}_i}. \end{aligned}$$

Then, we compute the next iteration’s state value \(\mathbf {x}^{(\ell +1)} = \mathbb {E}(\mathbf {x}|\mathbf {t}^{(\ell )})\) as the conditional mean of \(\mathbf {x}\) given pseudo-data \(\mathbf {t}^{(\ell )} = \mathbf {x}^{(\ell )} + \mathbf {D}^{(\ell )} \mathbf {u}^{(\ell )}\) assuming Gaussian noise, \(t^{(\ell )}_i | \mathbf {x}{\mathop {\sim }\limits ^{ind.}} \mathcal {N}(x_i,d_i^{(\ell )})\), \(i \in \mathcal {I}\). Zilber and Katzfuss (2021) recommend computing the conditional mean \(\mathbb {E}(\mathbf {x}|\mathbf {t}^{(\ell )})\) based on a general Vecchia prediction approach proposed in Katzfuss et al. (2020a). Here, we instead compute the posterior mean using Algorithm 2 based on the HV method described in Sect. 3, due to its sparsity-preserving properties. In contrast to the approach recommended in Zilber and Katzfuss (2021), our algorithm is guaranteed to converge, because it is equivalent to Newton–Raphson optimization of the log of the posterior density in (7), which is concave. Once the algorithm converges to the posterior mode \({\widetilde{\varvec{\mu }}}\), we obtain a Gaussian HV-Laplace approximation of the posterior as

$$\begin{aligned} \hat{p}_L(\mathbf {x}|\mathbf {y}) = \mathcal {N}_n(\mathbf {x}|{\widetilde{\varvec{\mu }}},{\widetilde{\mathbf {L}}}{\widetilde{\mathbf {L}}}^\top ), \end{aligned}$$

where \({\widetilde{\mathbf {L}}}\) is the Cholesky factor of the negative Hessian of the log-posterior at \({\widetilde{\varvec{\mu }}}\). Our approach is described in Algorithm 3. The main computational expense for each iteration of the for loop is carrying out Algorithm 2, and so each iteration requires only \(\mathcal {O}(nN^2)\) time.

[Algorithm 3: hierarchical Vecchia–Laplace (HVL) inference for the non-Gaussian spatial model (5)–(6)]
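To illustrate the pseudo-data iteration in (8)–(9), the sketch below runs the Newton–Raphson update for Poisson observations, \(g_i(y_i|x_i)=\mathcal {P}(e^{x_i})\), computing the Gaussian conditional mean exactly rather than via Algorithm 2; it is a schematic stand-in for Algorithm 3, and all names and parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
s = np.linspace(0, 1, n)
Sigma = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.2)   # prior covariance (exact here)
mu = np.zeros(n)
obs = np.arange(0, n, 2)                                 # observed indices I
H = np.eye(n)[obs, :]
x_true = np.linalg.cholesky(Sigma) @ rng.standard_normal(n)
y = rng.poisson(np.exp(x_true[obs]))                     # Poisson data with log link

x = mu.copy()
for _ in range(100):
    u = y - np.exp(x[obs])        # (8): score of log g_i at the current state
    d = np.exp(-x[obs])           # (9): minus the inverse curvature of log g_i
    t = x[obs] + d * u            # pseudo-data
    # Newton step = conditional mean of x given t_i | x ~ N(x_i, d_i)
    K = H @ Sigma @ H.T + np.diag(d)
    x_new = mu + Sigma @ H.T @ np.linalg.solve(K, t - mu[obs])
    if np.max(np.abs(x_new - x)) < 1e-6:
        x = x_new
        break
    x = x_new
post_mode = x                     # the Laplace approximation is centered here
```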

In the Gaussian case, when \(g_i(y_i | x_i) = \mathcal {N}(y_i|a_i x_i,\tau _i^2)\) for some \(a_i \in \mathbb {R}\), it can be shown using straightforward calculations that the pseudo-data \(t_i = y_i/a_i\) and pseudo-variances \(d_i = \tau _i^2\) do not depend on \(\mathbf {x}\), and so Algorithm 3 converges in a single iteration. If, in addition, \(a_i=1\) for all \(i=1,\ldots ,n\), then (5) becomes equivalent to (1), and Algorithm 3 simplifies to Algorithm 2.

For non-Gaussian data, our HVL algorithm shares the limitations of the Laplace approximation. In particular, it approximates the mean of the filtering distribution with its mode, which can result in significant errors for highly non-Gaussian settings. Furthermore, because the variance estimate is based on the curvature of the likelihood at the mode, it is prone to underestimating the true uncertainty. Nickisch and Rasmussen (2008) provide further details and a comprehensive comparison of several methods of approximating non-Gaussian likelihoods. Notwithstanding these known limitations, empirical studies (e.g., Bonat and Ribeiro Jr 2016; Zilber and Katzfuss 2021) have shown that Laplace-type approximations can be effectively applied to environmental data sets and can strongly outperform sampling-based approaches such as Markov chain Monte Carlo.

5 Fast filters for spatiotemporal models

5.1 Linear evolution

We now turn to a spatiotemporal state-space model (SSM), which adds a temporal evolution model to the spatial model (5) considered in Sect. 4. For now, assume that the evolution is linear. Starting with an initial distribution \(\mathbf {x}_0 \sim \mathcal {N}_{n}(\varvec{\mu }_{0|0},\varvec{\Sigma }_{0|0})\), we consider the following SSM for discrete time \(t=1,2,\ldots \):

$$\begin{aligned} y_{ti} \,|\, \mathbf {x}_t&{\mathop {\sim }\limits ^{ind}} g_{ti}(y_{ti} | x_{ti}), \quad i \in \mathcal {I}_t \end{aligned}$$
(10)
$$\begin{aligned} \mathbf {x}_t \,|\, \mathbf {x}_{t-1}&\sim \mathcal {N}_n(\mathbf {E}_t\mathbf {x}_{t-1},\mathbf {Q}_t), \end{aligned}$$
(11)

where \(\mathbf {y}_{t}\) is the data vector consisting of \(n_t \le n\) observations \(\{y_{ti}: i \in \mathcal {I}_t\}\), \(\mathcal {I}_t \subset \{1,\ldots ,n\}\) contains the observation indices at time t, \(g_{ti}\) is a distribution from the exponential family, \(\mathbf {x}_t = (x_{t1},\ldots ,x_{tn})^\top \) is the latent spatial field of interest at time t on the spatial grid \(\mathcal {S}\), and \(\mathbf {E}_t\) is a sparse \(n \times n\) evolution matrix. Note that we allow for different observation locations at each time point, which is helpful in many remote-sensing applications where observations are often missing (e.g., due to changing cloud cover). The methods we propose are suitable for any type of \(\mathbf {Q}_t\) matrix; due to its hierarchical structure, HV is able to capture both long-range dependence (which corresponds to low-rank structure in \(\mathbf {Q}_t\)) and finer, local variations.

At time t, our goal is to obtain or approximate the filtering distribution \(p(\mathbf {x}_t|\mathbf {y}_{1:t})\) of \(\mathbf {x}_t\) given data \(\mathbf {y}_{1:t}\) up to the current time t. This task, also referred to as data assimilation or on-line inference, is commonly encountered in many fields of science whenever one is interested in quantifying the uncertainty in the current state or in obtaining forecasts into the future. If the observation equations \(g_{ti}\) are all Gaussian, the filtering distribution can be derived using the Kalman filter (Kalman 1960) for small to moderate n. At each time t, the Kalman filter consists of a forecast step that computes \(p(\mathbf {x}_t|\mathbf {y}_{1:t-1})\), and an update step which then obtains \(p(\mathbf {x}_t|\mathbf {y}_{1:t})\). For linear Gaussian SSMs, both of these distributions are multivariate normal.

Our Kalman–Vecchia–Laplace (KVL) filter extends the Kalman filter to high-dimensional SSMs (i.e., large n) with non-Gaussian data, as in (10)–(11). Its update step is very similar to the inference problem in Sect. 4 and hence essentially consists of the HVL procedure in Algorithm 3. We complement this update with a forecast step, in which the moment estimates are propagated forward using the temporal evolution model. This forecast step is exact, and so the KVL approximation error is solely due to the HVL approximation at each update step.

[Algorithm 4: Kalman–Vecchia–Laplace (KVL) filter for the linear evolution model (10)–(11)]

The KVL filter is given in Algorithm 4. In Line 4, \(\mathbf {L}_{t|t-1;i,:}\) denotes the ith row of \(\mathbf {L}_{t|t-1}\). The KVL filter scales well with the state dimension n. The evolution matrix \(\mathbf {E}_t\), which is often derived using a forward finite-difference scheme and thus has only a few nonzero elements in each row, can be quickly multiplied with \(\mathbf {L}_{t-1|t-1}\) in Line 3, as the latter is sparse (see Sect. 3.3). The \(\mathcal {O}(nN)\) necessary entries of \(\varvec{\Sigma }_{t|t-1}\) in Line 4 can also be calculated quickly due to the sparsity of \(\mathbf {L}_{t|t-1;i,:}\). The low computational cost of the HVL algorithm has already been discussed in Sect. 4. Thus, assuming sufficiently sparse \(\mathbf {E}_t\), the KVL filter scales approximately as \(\mathcal {O}(nN^2)\) per iteration. In the case of Gaussian data (i.e., all \(g_{ti}\) in (10) are Gaussian), our KVL filter produces filtering distributions similar to those of the more complicated multi-resolution filter of Jurek and Katzfuss (2021).
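A dense illustration of the forecast step (Lines 3–4 of Algorithm 4) is sketched below: the mean and the covariance square root are propagated through \(\mathbf {E}_t\), and the entries of \(\varvec{\Sigma }_{t|t-1}\) indicated by \(\mathbf {S}\) are then passed to the incomplete Cholesky factorization. It reuses the ichol sketch from Sect. 3.3; the evolution matrix, model-error covariance, and previous filtering moments are placeholders.

```python
import numpy as np
from scipy import sparse

n = 50
s = np.linspace(0, 1, n)
E = sparse.eye(n, format="csr") + 0.01 * sparse.diags(np.ones(n - 1), 1)  # sparse evolution
Q = 0.1 * np.exp(-np.abs(s[:, None] - s[None, :]) / 0.15)                 # model-error cov.

mu_prev = np.zeros(n)                                     # previous filtering mean
L_prev = np.linalg.cholesky(np.exp(-np.abs(s[:, None] - s[None, :]) / 0.15))
S = np.tril(np.ones((n, n)))                              # HV pattern (dense here)

# Line 3: propagate the mean and the covariance square root
mu_fc = E @ mu_prev
A = E @ L_prev                    # square root of E_t Sigma_{t-1|t-1} E_t^T

# Line 4: in practice only the O(nN) entries of Sigma_{t|t-1} indicated by S are
# formed, each as a row inner product A[i, :] @ A[j, :] plus Q[i, j]; dense here.
Sigma_fc = A @ A.T + Q
L_fc = ichol(Sigma_fc, S)         # forecast Cholesky factor via incomplete Cholesky
# ...followed by the (HVL) update step with mu_fc and L_fc.
```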

5.2 An extended filter for nonlinear evolution

Finally, we consider a nonlinear and non-Gaussian model, which extends (10)–(11) by allowing nonlinear evolution operators, \(\mathcal {E}_t: \mathbb {R}^n \rightarrow \mathbb {R}^n\). This results in the model

$$\begin{aligned} y_{ti} \,|\, \mathbf {x}_t&{\mathop {\sim }\limits ^{ind}} g_{ti}(y_{ti} | x_{ti}), \quad i \in \mathcal {I}_t \end{aligned}$$
(12)
$$\begin{aligned} \mathbf {x}_t \,|\, \mathbf {x}_{t-1}&\sim \mathcal {N}_n(\mathcal {E}_t(\mathbf {x}_{t-1}),\mathbf {Q}_t). \end{aligned}$$
(13)

Due to the nonlinearity of the evolution operator \(\mathcal {E}_t\), the KVL filter in Algorithm 4 is not directly applicable anymore. However, similar inference is still possible as long as the evolution is not too far from linear. Approximating the evolution as linear is generally reasonable if the time steps are short, or if the measurements are highly informative. In this case, we propose the extended Kalman–Vecchia–Laplace filter (EKVL) in Algorithm 5, which approximates the extended Kalman filter (e.g., Grewal and Andrews 1993, Ch. 5) and extends it to non-Gaussian data using the Vecchia–Laplace approach. For the forecast step, EKVL computes the forecast mean as \(\varvec{\mu }_{t|t-1} = \mathcal {E}_t(\varvec{\mu }_{t-1|t-1})\). The forecast covariance matrix \(\varvec{\Sigma }_{t|t-1}\) is obtained as before, after linearizing the evolution using the Jacobian, \(\mathbf {E}_t = \frac{\partial \mathcal {E}_t(\mathbf {x})}{\partial \mathbf {x}} \big |_{\mathbf {x}= \varvec{\mu }_{t-1|t-1} }\). Errors in the forecast covariance matrix due to this linear approximation can be captured in the model-error covariance, \(\mathbf {Q}_t\). If the Jacobian matrix cannot be computed, it is sometimes possible to build a statistical emulator (e.g., Kaufman et al. 2011) instead, which approximates the true evolution operator.

Once \(\varvec{\mu }_{t|t-1}\) and \(\varvec{\Sigma }_{t|t-1}\) have been obtained, the update step of the EKVL proceeds exactly as in the KVL filter by approximating the forecast distribution as Gaussian.

[Algorithm 5: extended Kalman–Vecchia–Laplace (EKVL) filter for the nonlinear evolution model (12)–(13)]

Similarly to Algorithm 4, EKVL scales very well with the dimension of \(\mathbf {x}\), the only difference being the additional operation of calculating the Jacobian in Line 3, whose cost is problem-dependent. Only those entries of \(\mathbf {E}_t\) that are multiplied with nonzero entries of \(\mathbf {L}_{t-1|t-1}\), whose sparsity structure is known ahead of time, need to be calculated.
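When no analytical Jacobian of \(\mathcal {E}_t\) is available, one simple (if crude) option is finite differencing, as sketched below for a toy nonlinear evolution operator; the operator and all settings are illustrative, and a practical implementation would compute only the entries required by the sparsity of \(\mathbf {L}_{t-1|t-1}\).

```python
import numpy as np

def jacobian_fd(evolve, x0, eps=1e-6):
    """Dense forward-difference Jacobian of a nonlinear evolution operator at x0."""
    n = x0.size
    f0 = evolve(x0)
    J = np.empty((n, n))
    for j in range(n):
        xp = x0.copy()
        xp[j] += eps
        J[:, j] = (evolve(xp) - f0) / eps
    return J

def evolve(x):
    """Toy nonlinear evolution: pointwise nonlinearity plus a shift coupling."""
    return np.tanh(x) + 0.1 * np.roll(x, 1)

mu_prev = np.linspace(-1.0, 1.0, 20)   # filtering mean from the previous time step
mu_fc = evolve(mu_prev)                # forecast mean mu_{t|t-1} = E_t(mu_{t-1|t-1})
E_t = jacobian_fd(evolve, mu_prev)     # linearization used for the forecast covariance
```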

Our approach, which is based on linearizing the evolution operator, is not without its limits. Approximating \(\mathcal {E}_t\) with \(\mathbf {E}_t\) may be inaccurate for highly nonlinear models, because the forecast distribution of \(\mathcal {E}_t(\mathbf {x}_{t-1})\) is assumed to be normal (which is generally not true if \(\mathcal {E}_t\) is not linear) and because the covariance matrix of \(\mathcal {E}_t(\mathbf {x}_{t-1})\) is approximated as \(\mathbf {E}_t \varvec{\Sigma }_{t-1|t-1} \mathbf {E}_t^\top \). A potential application of our method should consider these sources of error in addition to those discussed in Sect. 3.4 and at the end of Sect. 4. However, exact inference for high-dimensional nonlinear models is not possible in general, and all existing methods suffer from limitations in this challenging setting. Section S3 in the Supplement shows a successful application of Algorithm 5 to a nonlinear Lorenz model.

6 Numerical comparison

6.1 Methods and criteria

We considered and compared the following methods:

Hierarchical Vecchia (HV): Our methods as described in this paper.

Low rank (LR): A special case of HV with \(M=1\), in which the diagonal and the first N columns of \(\mathbf {S}\) are nonzero, and all other entries are zero. This results in an approximate covariance matrix \({\hat{\varvec{\Sigma }}}\) that is the sum of a rank-\(N\) matrix and a diagonal matrix, known as the modified predictive process (Banerjee et al. 2008; Finley et al. 2009) in spatial statistics. LR has the same computational complexity as HV.

Dense Laplace (DL): A further special case of HV with \(M=0\), in which \(\mathbf {S}\) is a fully dense matrix of ones. Thus, there is no error due to the Vecchia approximation, and so in the non-Gaussian spatial-only setting, this is equivalent to a Gaussian Laplace approximation. DL will generally be more accurate than HV and LR, but it scales as \(\mathcal {O}(n^3)\) and is thus not feasible for large n.

For each scenario below, we simulated observations using (12), taking \(g_{ti}\) to be each of four exponential-family distributions: Gaussian, \(\mathcal {N}(x,\tau ^2)\); logistic Bernoulli, \(\mathcal {B}(1/(1+e^{-x}))\); Poisson, \(\mathcal {P}(e^x)\); and gamma, \(\mathcal {G}(a, ae^{-x})\), with shape parameter \(a=2\). For most scenarios, we assumed a moderate state dimension n, so that DL remained feasible; a large n was considered in Sect. 6.4. In the case of non-Gaussian observations, we used \(\epsilon = 10^{-5}\) in Algorithm 3.

The main metric to compare HV and LR was the difference in KL divergence between their posterior or filtering distributions and those generated by DL; as the exact distributions were not known here, we approximated the divergence by the difference in joint log scores (dLS; e.g., Gneiting and Katzfuss 2014) calculated at each time point for the joint posterior or filtering distribution of the entire field and averaged over several simulations. We also calculated the relative root mean square prediction error (RRMSPE), defined as the root mean square prediction error of HV and LR, respectively, divided by the root mean square prediction error of DL; the error is calculated with respect to the true simulated latent field. For both criteria, lower values are better. In each simulation scenario, the scores were averaged over a sufficiently large number of repetitions to achieve stable averages.
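For reference, the two criteria can be computed along the following lines; this reflects our reading of the definitions above, and all function names and inputs are placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_score(x_ref, mean, cov):
    """Joint negative log density of the reference field under a Gaussian
    posterior/filtering distribution (lower is better)."""
    return -multivariate_normal(mean, cov).logpdf(x_ref)

def rrmspe(pred_mean, pred_mean_dl, x_true):
    """RMSPE of a method's predictions relative to the DL benchmark."""
    rmspe = lambda m: np.sqrt(np.mean((m - x_true) ** 2))
    return rmspe(pred_mean) / rmspe(pred_mean_dl)

# dLS for HV (or LR) is then its log_score minus the corresponding log_score for DL.
```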

For comparisons over a range of conditioning-set sizes \(N\), we supplied a set of desired sizes to the GPvecchia package, upon which our method implementation is based. For a desired size, GPvecchia automatically determines a suitable partitioning scheme (and hence a suitable N) for the grid \(\mathcal {S}\), such that \(N\) is equal to or slightly below the desired value. We generated the HV approximation using this approach and then used the resulting \(N\) value for both LR and DL, to ensure a fair comparison.

Large-scale spatiotemporal filtering is also often achieved using the ensemble Kalman filter (EnKF) and its extensions. As the EnKF is a substantially different approach that does not neatly fit into our framework here, we conducted a separate comparison between HV and EnKF in Section S4 in the Supplementary Materials; the results strongly favored the HV filter.

6.2 Spatial-only data

In our first scenario, we considered spatial-only data according to (5)–(6) on a grid \(\mathcal {S}\) of size \(n=34 \times 34 = 1{,}156\) on the unit square, \(\mathcal {D}= [0,1]^2\). We set \(\varvec{\mu }= \mathbf {0}\) and \(\varvec{\Sigma }_{i,j} = \exp (-\Vert \mathbf {s}_i - \mathbf {s}_j\Vert /0.15)\). For the Gaussian likelihood, we assumed variance \(\tau ^2 = 0.2\).

The comparison scores averaged over 100 simulations for the posteriors obtained using Algorithm 3 are shown as a function of \(N\) in Fig. 2. HV (Algorithm 2) was much more accurate than LR for each value of \(N\).

Fig. 2

Approximation accuracy for the posterior distribution \(\mathbf {x}|\mathbf {y}\) for spatial data (see Sect. 6.2)

6.3 Linear temporal evolution

Next, similar to Jurek and Katzfuss (2021), we considered a linear spatiotemporal advection–diffusion process \(x(\mathbf {s}, t)\) whose evolution is governed by the partial differential equation,

$$\begin{aligned} \frac{\partial x}{\partial t} = \alpha \left( \frac{ \partial ^2 x }{ \partial s_x^2 } + \frac{ \partial ^2 x }{ \partial s_y^2 } \right) + \beta \left( \frac{ \partial x}{\partial s_x} + \frac{\partial x}{\partial s_y} \right) + \nu (\mathbf {s}, t), \end{aligned}$$

where \(\mathbf {s}= (s_x, s_y) \in \mathcal {D}=[0,1]^2\) and \(t \in [0, T]\). We set the diffusion parameter \(\alpha = 4 \times 10^{-5}\) and advection parameter \(\beta = 10^{-2}\). The error \(\nu (\mathbf {s}, t)\) was a zero-mean stationary Gaussian process with exponential covariance function with range \(\lambda = 0.15\), independent across time.

The spatial domain \(\mathcal {D}\) was discretized on a grid of size \(n=34 \times 34 = 1{,}156\) using centered finite differences, and we considered discrete time points \(t=1,\ldots ,T\) with \(T=20\). After this discretization, our model was of the form (10)–(11), where \(\varvec{\Sigma }_{0|0}=\mathbf {Q}_1=\cdots =\mathbf {Q}_T\) with (ij)th entry \(\exp (-\Vert \mathbf {s}_i - \mathbf {s}_j\Vert /\lambda )\), and \(\mathbf {E}_t\) was a sparse matrix with nonzero entries corresponding to interactions between neighboring grid points to the right, left, top, and bottom. See the supplementary material of Jurek and Katzfuss (2021) for details.
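For illustration, a sparse evolution matrix of this general form can be assembled as follows using centered finite differences and Kronecker products; the time step, boundary handling, and the explicit-Euler form are simplifying choices of this sketch and not necessarily those used in Jurek and Katzfuss (2021).

```python
import numpy as np
from scipy import sparse

k = 34                        # grid points per dimension, so n = k * k
h = 1.0 / (k - 1)             # spatial step
dt = 1.0                      # time step between filtering updates (illustrative)
alpha, beta = 4e-5, 1e-2      # diffusion and advection parameters

I = sparse.eye(k)
D2 = sparse.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(k, k)) / h**2   # second derivative
D1 = sparse.diags([-0.5, 0.5], [-1, 1], shape=(k, k)) / h              # first derivative

lap = sparse.kron(I, D2) + sparse.kron(D2, I)    # 2-D Laplacian
adv = sparse.kron(I, D1) + sparse.kron(D1, I)    # advection in both directions

E_t = (sparse.eye(k * k) + dt * (alpha * lap + beta * adv)).tocsr()
# E_t is n x n with only a few nonzero entries per row, as assumed in (11).
```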

At each time t, we generated \(n_t = n/10\) observations with indices \(\mathcal {I}_t\) sampled randomly from \(\{1,\ldots ,n\}\). For the Gaussian case, we assumed variance \(\tau ^2=0.25\). We used conditioning sets of size at most \(N=41\) for both HV and LR; specifically, for HV, we used \(J=2\) partitions at \(M=7\) levels, with set sizes \(|\mathcal {X}_{j_1,\ldots ,j_m}|\) of 5, 5, 5, 5, 6, 6, 6, respectively, for \(m=0,1,\ldots ,M-1\), and \(|\mathcal {X}_{j_1,\ldots ,j_M}| \le 3\).

Figure 3 compares the scores for the filtering distributions \(\mathbf {x}_t | \mathbf {y}_{1:t}\) obtained using Algorithm 4, averaged over 80 simulations. Again, HV was much more accurate than LR. Importantly, while the accuracy of HV was relatively stable over time, LR became less accurate over time, with the approximation error accumulating.

Fig. 3

Accuracy of filtering distributions \(\mathbf {x}_t | \mathbf {y}_{1:t}\) for the advection–diffusion model in Sect. 6.3

6.4 Simulations using a very large n

We repeated the advection–diffusion experiment from Sect. 6.3 on a high-resolution grid of size \(n=300 \times 300 = 90{,}000\), with \(n_t=9{,}000\) observations corresponding to 10% of the grid points. In order to avoid numerical artifacts related to the finite differencing scheme, we reduced the advection and diffusion coefficients to \(\alpha = 10^{-7}\) and \(\beta =10^{-3}\), respectively. We set \(N=44\), \(M=14\), \(J=2\), and \(|\mathcal {X}_{j_1,\ldots ,j_m}| = 3\) for \(m=0,1,\ldots ,M-1\), and \(|\mathcal {X}_{j_1,\ldots ,j_M}| \le 2\). DL was too computationally expensive due to the high dimension n, and so we simply compared HV and LR based on the root mean square prediction error (RMSPE) between the true state and their respective filtering means, averaged over 10 simulations.

As shown in Fig. 4, HV was again much more accurate than LR. Comparing with Fig. 3, we see that the relative improvement of HV over LR increased even further; taking the Gaussian case as an example, the ratio of the RMSPE of LR to that of HV was around 1.2 in the small-n setting and greater than 2 in the large-n setting.

Fig. 4

Root mean square prediction error (RMSPE) for the filtering mean in the high-dimensional advection–diffusion model with \(n=90{,}000\) in Sect. 6.4

7 Filtering analysis of satellite data

We also applied the filtering method described in Algorithm 5 to measurements of total precipitable water vapor (TPW) acquired by the Microwave Integrated Retrieval System (MIRS) instrument mounted on NASA’s Geostationary Operational Environmental Satellite (GOES). This data set was previously analyzed in Katzfuss and Hammerling (2017). We used 11 consecutive sets of observations collected over a period of 40 h in January 2011 on a grid of size \(263 \times 263 = 69{,}169\) over the continental United States, with each cell covering a square of size \(16 \times 16\) km. In order to avoid specifying boundary conditions for the diffusion model below, we restricted the area of interest to a smaller grid of size \(n = 247 \times 247 = 61{,}009\). Observations at selected time points are shown in Fig. 5.

Fig. 5

Total precipitable water measurements (first row), and complete filtering maps made using the HV filter and the LR filter (second and third rows, respectively). While HV introduced some artifacts at \(t=1\), they became less pronounced at later time points. HV preserved important details that were lost when using LR

We assumed that, over the period in which the data were collected, the dynamics of TPW can be reasonably approximated by a diffusion process x. More formally, and similarly to Sect. 6.3, we assumed that

$$\begin{aligned} \frac{\partial x}{\partial t} = \alpha \left( \frac{ \partial ^2 x }{ \partial s_x^2 } + \frac{ \partial ^2 x }{ \partial s_y^2 } \right) + \nu (\mathbf {s}, t), \end{aligned}$$
(14)

where \(\nu (\mathbf {s}, t)\) is a zero-mean stationary Gaussian process with a Matérn covariance function with smoothness 1.5, range \(\lambda \), and marginal variance \(\sigma ^2\), independent across time. Exploratory analysis showed that TPW changed slowly between consecutive time points, and so we set \(\alpha = 0.99\). We discretized (14) using centered finite differences with 4 internal steps, which resulted in a model of the form (11). In this context, \(\mathbf {x}_t\) represents the values of the process at the centers of the grid cells, and \(\varvec{\Sigma }_{0|0} = \frac{1}{1 - \alpha ^2}\mathbf {Q}_1 = \frac{1}{1 - \alpha ^2}\mathbf {Q}_2 = \cdots = \frac{1}{1 - \alpha ^2}\mathbf {Q}_{11}\), with \(\varvec{\Sigma }_{0|0}\) obtained by evaluating the covariance function of \(\nu \) over the centers of the grid cells.

Following Katzfuss and Hammerling (2017), we assumed independent and additive measurement error for each time t and grid cell i, such that \(y_{ti} \sim \mathcal {N}(x_{ti}, \tau ^2)\) with \(\tau = 4.5\). At each time point, we held out a randomly selected 10% of all observations, which we later used to assess the performance of our method. Covariance-function parameters were chosen using a procedure described in Section S6, resulting in the values \(\lambda =2.09\) and \(\sigma ^2=0.3376\). We then used Algorithm 5 to obtain the filtering distribution of \(\mathbf {x}_t\) at all time points. We applied two approximation methods, the HV filter and the LR filter, described in Sect. 6.1. We used the same size of the conditioning set, \(N=78\), for both methods, which ensures comparable computation time; for HV, we set \(J=4\), \(M=7\), and \(|\mathcal {X}_{j_1,\ldots ,j_m}|\) equal to 25, 15, 15, 5, 5, 5, 3, 3, respectively, for \(m = 0, 1, \ldots , M\). The computational cost of the DL method is prohibitive for a data set as large as ours.

We compared the performance of the two filters by calculating the root mean square prediction error (RMSPE) between the filtering means \(\varvec{\mu }_{t|t}\) obtained with each method and the observed values at the held-out test locations. Figure 6 shows the values of the RMSPE at all time points. We conclude that the HV filter significantly outperformed the LR filter due to its ability to capture much more fine-scale detail. Plots of the data and filtering results at selected time points are shown in Fig. 5 and are representative of the full results. We show plots of the point-wise standard errors in Section S6 in the Supplement.

Fig. 6

Root mean square prediction error (RMSPE) for the HV (red) and LR (blue) filters versus the 11 time points in the satellite-data application. (Color figure online)

8 Conclusions

We specified the relationship between ordered conditional independence and sparse (inverse) Cholesky factors. Next, we described the HV approximation and showed that it exhibits equivalent sparsity in the Cholesky factors of both its precision and covariance matrices. Due to this remarkable property, the approximation is suitable for high-dimensional spatiotemporal filtering. The HV approximation can be computed using a simple and fast incomplete Cholesky decomposition. Further, by combining the approach with a Laplace approximation and the extended Kalman filter, we obtained scalable filters for non-Gaussian and nonlinear spatiotemporal state-space models.

Our methods can also be directly applied to spatiotemporal point patterns modeled using log-Gaussian Cox processes, which can be viewed as Poisson data after discretization of the spatial domain, resulting in accurate Vecchia–Laplace-type approximations (Zilber and Katzfuss 2021). We plan on investigating an extension of our methods to retrospective smoothing over a fixed time period. Another interesting extension would be to combine our methodology with the unscented Kalman filter (Julier and Uhlmann 1997) for strongly nonlinear evolution. Finally, while we focused our attention on spatiotemporal data, our work can be extended to other applications, as long as a sensible hierarchical partitioning of the state vector can be obtained as in Sect. 3.1.

Code implementing our methods and reproducing our numerical experiments is available at https://github.com/katzfuss-group/vecchiaFilter.