Convergence rates of Gaussian ODE filters

A recently introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems. These methods model the true solution x and its first q derivatives a priori as a Gauss–Markov process X, which is then iteratively conditioned on information about ẋ. This article establishes worst-case local convergence rates of order q + 1 for a wide range of versions of this Gaussian ODE filter, as well as global convergence rates of order q in the case of q = 1 and an integrated Brownian motion prior, and analyses how inaccurate information on ẋ coming from approximate evaluations of f affects these rates.
Moreover, we show that, in the globally convergent case, the posterior credible intervals are well calibrated in the sense that they globally contract at the same rate as the truncation error. We illustrate these theoretical results by numerical experiments which might indicate their generalizability to q ∈ {2, 3, …}.


Introduction
A solver of an initial value problem (IVP) outputs an approximate solution x̂ : [0, T] → R^d of an ordinary differential equation (ODE) with initial condition,

ẋ(t) = f(x(t)), for all t ∈ [0, T], x(0) = x_0 ∈ R^d. (1)

(Author affiliation: Zuse Institute Berlin, Takustraße 7, 14195 Berlin, Germany.)

The solution x̂ is computed by iteratively collecting information on x^{(1)}(t) by evaluating f : R^d → R^d at a numerical estimate x̂(t) of x(t) and using these approximate evaluations of the time derivative to extrapolate along the time axis. In other words, the numerical solution (or estimator) x̂ of the exact solution (or estimand) x is calculated based on evaluations of the vector field f (or data). Accordingly, we treat x̂ itself as an estimator, i.e., a statistic that translates evaluations of f into a probability distribution over C^1([0, T]; R^d), the space of continuously differentiable functions from [0, T] to R^d. This probabilistic interpretation of numerical computation (of intractable from tractable quantities) as statistical inference of latent from observable quantities applies to all numerical problems and has been repeatedly recommended in the past (Poincaré 1896; Diaconis 1988; Skilling 1991; O'Hagan 1992; Ritter 2000). It employs the language of probability theory to account for the epistemic uncertainty (i.e., limited knowledge) about the accuracy of intermediate and final numerical computations, thereby yielding algorithms which can be more aware of, as well as more robust against, uncertainty over intermediate computational results. Such algorithms can output probability measures, instead of point estimates, over the final quantity of interest. This approach, now called probabilistic numerics (PN) (Hennig et al. 2015; Oates and Sullivan 2019), has in recent years been spelled out for a wide range of numerical tasks, including linear algebra, optimization, integration, and differential equations, thereby working towards the long-term goal of a coherent framework to propagate uncertainty through chained computations, as desirable, e.g., in statistical machine learning.
In this paper, we determine the convergence rates of a recent family of PN methods (Schober et al. 2014; Kersting and Hennig 2016; Magnani et al. 2017; Schober et al. 2019; Tronarp et al. 2019) which recast an IVP as a stochastic filtering problem (Øksendal 2003, Chapter 6), an approach that has been studied in other settings (Jazwinski 1970), but had not been applied to IVPs before. These methods assume a priori that the solution x and its first q ∈ N derivatives follow a Gauss–Markov process X that solves a stochastic differential equation (SDE).
The evaluations of f at numerical estimates of the true solution can then be regarded as imperfect evaluations of ẋ, which can then be used for a Bayesian update of X. Such recursive updates along the time axis yield an algorithm whose structure resembles that of Gaussian (Kalman) filtering (Särkkä 2013, Chapter 4). These methods add only slight computational overhead compared to classical methods (Schober et al. 2019) and have been shown to inherit local convergence rates from equivalent classical methods in specific cases (Schober et al. 2014; Schober et al. 2019). These equivalences (i.e., the equality of the filtering posterior mean and the classical method) are only known to hold in the case of the integrated Brownian motion (IBM) prior and noiseless evaluations of f (in terms of our later notation, the case R ≡ 0), as well as under the following restrictions: Firstly, for q ∈ {1, 2, 3}, and if the first step is divided into sub-steps resembling those of Runge–Kutta methods, an equivalence of the posterior mean of the first step of the filter and the explicit Runge–Kutta method of order q was established in Schober et al. (2014) (but for q ∈ {2, 3} only in the limit as the initial time of the IBM tends to −∞). Secondly, it was shown by Schober et al. (2019) that, for q = 1, the posterior mean after each step coincides with the trapezoidal rule if it takes an additional evaluation of f at the end of each step, known as P(EC)1. The same paper shows that, for q = 2, the filter coincides with a third-order Nordsieck method (Nordsieck 1962) if the filter is in the steady state, i.e., after the sequence of error covariance matrices has converged. These results neither cover filters with the integrated Ornstein–Uhlenbeck process (IOUP) prior (Magnani et al. 2017) nor nonzero noise models on evaluations of f.
In this paper, we directly prove convergence rates without first fitting the filter to existing methods, and thereby lift many of the above restrictions on the convergence rates. While the more recent work by Tronarp et al. (2020) also provides convergence rates of estimators of x in the Bayesian ODE filtering/smoothing paradigm, those rates concern the maximum a posteriori estimator (as computed by the iterated extended Kalman ODE smoother), and therefore differ from our convergence rates of the filtering mean (as computed by the Kalman ODE filter).

Contribution
Our main results, Theorems 8 and 14, provide local and global convergence rates of the ODE filter when the step size h goes to zero. Theorem 8 shows local convergence rates of h^{q+1} without the above-mentioned previous restrictions, i.e., for a generic Gaussian ODE filter for all q ∈ N, both the IBM and the IOUP prior, flexible Gaussian initialization (see Assumptions 2 and 3), and arbitrary evaluation noise R ≥ 0. As a first global convergence result, Theorem 14 establishes global convergence rates of h^q in the case of q = 1, the IBM prior, and all fixed measurement uncertainty models R of order p ∈ [1, ∞] (see Assumption 4). This global rate of the worst-case error is matched by the contraction rate of the posterior credible intervals, as we show in Theorem 15. Moreover, we also give closed-form expressions for the steady states in the global case and illustrate our results as well as their possible generalizability to q ≥ 2 by experiments in Sect. 9.

Related work on probabilistic ODE solvers
The Gaussian ODE filter can be thought of as a self-consistent Bayesian decision agent that iteratively updates its prior belief X over x : [0, T] → R^d (and its first q derivatives) with information on ẋ from evaluating f. For Gauss–Markov priors, it performs exact Bayesian inference and optimally (with respect to the L²-loss) extrapolates along the time axis. Accordingly, all of its computations are deterministic and, due to its restriction to Gaussian distributions, only slightly more expensive than classical solvers. Experiments demonstrating competitive performance with classical methods are provided in Schober et al. (2019, Section 5).
Another line of work (comprising the methods from Chkrebtii et al. (2016); Conrad et al. (2017); Teymur et al. (2016); Lie et al. (2019); Abdulle and Garegnani (2020); Teymur et al. (2018)) introduces probability measures to ODE solvers in a fundamentally different way: by representing the distribution of all numerically possible trajectories with a set of sample paths. To compute these sample paths, Chkrebtii et al. (2016) draw them from a (Bayesian) Gaussian process (GP) regression; Conrad et al. (2017); Teymur et al. (2016); Lie et al. (2019); Teymur et al. (2018) perturb classical estimates after an integration step with a suitably scaled Gaussian noise; and Abdulle and Garegnani (2020) perturb the classical estimate instead by choosing a stochastic step size. While Conrad et al. (2017); Teymur et al. (2016); Lie et al. (2019); Abdulle and Garegnani (2020); Teymur et al. (2018) can be thought of as (non-Bayesian) 'stochastic wrappers' around classical solvers, which produce samples with the same convergence rate, Chkrebtii et al. (2016) employ, like the filter, GP regression to represent the belief on x. While the Gaussian ODE filter can converge with polynomial order (see the results in this paper), Chkrebtii et al. (2016) only show first-order convergence rates; they also construct a sample representation of numerical errors, from which samples are drawn iteratively. A conceptual and experimental comparison between the filter and Chkrebtii et al. (2016) can be found in Schober et al. (2019). An additional numerical test against Conrad et al. (2017) was given by Kersting and Hennig (2016). Moreover, Tronarp et al. (2019) recently introduced a particle ODE filter, which combines a filtering-based solver with a sampling-based uncertainty quantification (UQ), and compared it numerically with Conrad et al. (2017) and Chkrebtii et al. (2016).
All of the above sampling-based methods can hence represent more expressive, non-Gaussian posteriors (as, e.g., desirable for bifurcations), but multiply the computational cost of the underlying method by the number of samples. ODE filters are, in contrast, not a perturbation of known methods, but novel methods designed for computational speed and for a robust treatment of intermediate uncertain values (such as the evaluations of f at estimated points). Unless parallelization of the samples in the sampling-based solvers is possible and inexpensive, one can spend the computational budget for generating additional samples on dividing the step size h by the number of samples, and can thereby polynomially decrease the error. The filter's Gaussian UQ, however, should not be regarded as the true UQ (in particular for chaotic systems, whose uncertainty can be better represented by sampling-based solvers; see, e.g., Conrad et al. (2017, Figure 1) and Abdulle and Garegnani (2020, Figure 2)), but as a rough, inexpensive probabilistic treatment of intermediate values and final errors which is supposed to, on average, guide the posterior mean towards the true x. Therefore, the filter is in a way more similar to classical non-stochastic solvers than to sampling-based stochastic solvers and, unlike sampling-based solvers, puts emphasis on computational speed over statistical accuracy. Nevertheless, its Gaussian UQ is sufficient to make the forward models in ODE inverse problems more 'uncertainty-aware'; see Kersting et al. (2020, Section 3).
Accordingly, the convergence results in this paper concern the convergence rate of the posterior mean to the true solution, while the theoretical results from Teymur et al. (2016); Chkrebtii et al. (2016); Conrad et al. (2017); Lie et al. (2019); Abdulle and Garegnani (2020); Teymur et al. (2018) provide convergence rates of the variance of the non-Gaussian empirical measure of samples (and not for an individual sample).

Relation to filtering theory
While Gaussian (Kalman) filtering was first applied to the solution of ODEs by Kersting and Hennig (2016) and Schober et al. (2019), it has previously been analyzed in the filtering, data assimilation, and linear system theory communities. The convergence results in this paper are concerned with its asymptotics when the step size h (i.e., the time step between data points) goes to zero. In the classical filtering setting, where the data comes from an external sensor, this quantity is not treated as a variable, as it is considered a property of the data and not, as in our case, of the algorithm. Accordingly, the standard books lack such an analysis for h → 0 (see Jazwinski (1970); Anderson and Moore (1979); Maybeck (1979) for filtering, Law et al. (2015); Reich and Cotter (2015) for data assimilation, and Callier and Desoer (1991) for linear system theory), and we believe that our convergence results are completely novel. It is conceivable that, also for these communities, this paper may be of interest in settings where the data collection mechanism can be actively chosen, e.g., when the frequency of the data can be varied or sensors of different frequencies can be used.

Outline
The paper begins with a brief introduction to Gaussian ODE filtering in Sect. 2. Next, Sects. 3 and 5 provide auxiliary bounds on the flow map of the ODE and on intermediate quantities of the filter, respectively. With the help of these bounds, Sects. 6 and 7 establish local and global convergence rates of the filtering mean, respectively. In light of these rates, Sect. 8 analyses for which measurement noise models the posterior credible intervals are well calibrated. These theoretical results are experimentally confirmed and discussed in Sect. 9. Section 10 concludes with a high-level discussion.

Notation
We will use the notation [n] := {0, . . . , n − 1}. For vectors and matrices, we will use zero-based numbering, e.g., x = (x_0, . . . , x_{d−1}) ∈ R^d. For a matrix P ∈ R^{n×m} and (i, j) ∈ [n] × [m], we will write P_{i,:} ∈ R^{1×m} for the ith row and P_{:,j} ∈ R^{n×1} for the jth column of P. A fixed but arbitrary norm on R^d will be denoted by ‖·‖. The minimum and maximum of two real numbers a and b will be denoted by a ∧ b and a ∨ b, respectively. Vectors that span all q modeled derivatives will be denoted by bold symbols, such as x.

Gaussian ODE filtering
This section defines how a Gaussian filter can solve the IVP Eq. (1). In the various subsections, we first explain the choice of prior on x, then describe how the algorithm computes a posterior output from this prior (by defining a numerical integrator Ψ), and add explanations on the measurement noise of the derivative observations. To alternatively understand how this algorithm can be derived as an extension of generic Gaussian filtering in probabilistic state space models, see the concise presentation in (…, Supplement A).

Prior on x
In PN, it is common (Hennig et al. 2015, Section 3(a)) to put a prior measure on the unknown solution x. Often, for fast Bayesian inference by linear algebra (Rasmussen and Williams 2006, Chapter 2), this prior is Gaussian. To enable GP inference in linear time by Kalman filtering (Särkkä 2013, Chapter 4.3), we further restrict the prior to Markov processes. As discussed in Särkkä and Solin (2019, Chapter 12.4), a wide class of such Gauss–Markov processes can be captured as the law of the (strong) solution (Øksendal 2003, Chapter 5.3) of a linear SDE with Gaussian initial condition. Here, as we, by Eq. (1), have information on at least one derivative of x, the prior also includes the first q ∈ N derivatives. Therefore, for all j ∈ [d], we define the vector of time derivatives by X_j = (X_j^{(0)}, . . . , X_j^{(q)}). We define X_j as a (q + 1)-dimensional stochastic process via the SDE

dX_j^{(i)}(t) = X_j^{(i+1)}(t) dt, for i ∈ [q],
dX_j^{(q)}(t) = −∑_{i=0}^{q} c_i X_j^{(i)}(t) dt + σ_j dB_j(t), (2)

driven by mutually independent one-dimensional Brownian motions {B_j; j ∈ [d]} (independent of X(0)) scaled by σ_j > 0, with initial condition X_j(0) ∼ N(m_j(0), P_j(0)). We assume that {X_j(0); j ∈ [d]} are independent. In other words, we model the unknown ith derivative of the jth dimension of the solution x of the IVP Eq. (1), denoted by x_j^{(i)}, as a draw from a real-valued, one-dimensional GP X_j^{(i)}, for all i ∈ [q + 1] and j ∈ [d], such that X_j^{(q)} is defined by (c_0, . . . , c_q) as well as the Brownian motion scale σ_j, and X_j^{(i−1)} is defined to be the integral of X_j^{(i)}. Note that, by the independence of the components of the d-dimensional Brownian motion, the components {X_j; j ∈ [d]} are independent. The solution of Eq. (2) is a Gauss–Markov process with mean m_j : [0, T] → R^{q+1} and covariance matrix P_j : [0, T] → R^{(q+1)×(q+1)} given by

m_j(t) = A(t) m_j(0), (3)
P_j(t) = A(t) P_j(0) A(t)^⊤ + Q(t), (4)

where the matrices A(t), Q(t) ∈ R^{(q+1)×(q+1)} are yielded by the SDE Eq.
(2). While the below algorithm works for all choices of c, we restrict our attention to the case of

c = (0, . . . , 0, θ), with θ ≥ 0, (5)

where, for θ = 0 and θ > 0 respectively, the q-times integrated Brownian motion (IBM) and the q-times integrated Ornstein–Uhlenbeck process (IOUP) with drift parameter θ is the unique solution of Eq. (2).
In this case, the matrices A(h) and Q(h) appearing in Eqs. (3) and (4) are given by

A(h)_{i,j} = I(i ≤ j) h^{j−i}/(j − i)! + Θ(θ h^{q+1−i}), (6)
Q(h)_{i,j} = σ² h^{2q+1−i−j} / [(2q + 1 − i − j)(q − i)!(q − j)!] + Θ(h^{2q+2−i−j}), (7)

for all i, j ∈ [q + 1], where I(i ≤ j) equals 1 if i ≤ j and 0 otherwise. (Derivations of Eqs. (6) and (7), as well as the precise form of Q without Θ(h^{2q+2−i−j}), are presented in Appendix A.) Hence, for all i ∈ [q + 1], the prediction of step size h of the ith derivative from any state u ∈ R^{q+1} is given by

(A(h)u)_i = ∑_{j=i}^{q} [h^{j−i}/(j − i)!] u_j + Θ(θ h^{q+1−i}). (8)
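The IBM case (θ = 0) of Eqs. (6) and (7) is simple enough to state in code. The following Python sketch (our illustration, not part of the original presentation) constructs the exact IBM matrices A(h) and Q(h) for arbitrary q; for IOUP (θ > 0), additional θ-dependent terms would appear:

```python
import math
import numpy as np

def ibm_transition(q: int, h: float, sigma: float = 1.0):
    """Transition matrices A(h), Q(h) of the q-times integrated
    Brownian motion prior (the theta = 0 case of Eq. (5)).

    Zero-based indexing as in the paper: state (x^(0), ..., x^(q)).
    """
    A = np.zeros((q + 1, q + 1))
    Q = np.zeros((q + 1, q + 1))
    for i in range(q + 1):
        for j in range(q + 1):
            if j >= i:  # Taylor-expansion weights h^(j-i)/(j-i)!
                A[i, j] = h ** (j - i) / math.factorial(j - i)
            # accumulated noise of the driving Brownian motion
            Q[i, j] = (sigma ** 2 * h ** (2 * q + 1 - i - j)
                       / ((2 * q + 1 - i - j)
                          * math.factorial(q - i) * math.factorial(q - j)))
    return A, Q
```

For q = 1, this reproduces the familiar once-integrated Brownian motion matrices A(h) = [[1, h], [0, 1]] and Q(h) = σ²[[h³/3, h²/2], [h²/2, h]].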

The algorithm
To avoid the introduction of additional indices, we will define the algorithm Ψ for d = 1; for statements on the general case of d ∈ N we will use the same symbols from Eq. (10)-(15) as vectors over the whole dimension-see, e.g., Eq. (31) for a statement about a general r ∈ R d . By the independence of the dimensions of X, due to Eq.
(2), extension to d ∈ N amounts to applying Ψ to every dimension independently (recall Footnote 2). Accordingly, we may in many of the below proofs w.l.o.g. assume d = 1. Now, as previously spelled out in Kersting and Hennig (2016); Schober et al. (2019), Bayesian filtering of X, i.e., iteratively conditioning X on the information on X^{(1)} from evaluations of f at the mean of the current conditioned X^{(0)}, yields the following numerical method Ψ. For t ∈ [0, T], let the current estimate of x(t) be a normal distribution N(m(t), P(t)), i.e., the mean m(t) = (m^{(0)}(t), . . . , m^{(q)}(t)) ∈ R^{q+1} represents the best numerical estimate of x(t) (given data {y(h), . . . , y(t)}, see Eq. (12)), where m^{(i)}(t) is an estimate for x^{(i)}(t), and the covariance matrix P(t) ∈ R^{(q+1)×(q+1)} represents its uncertainty. For the time step t → t + h of size h > 0, the ODE filter first computes the prediction step, consisting of the predictive mean and predictive covariance

m^−(t + h) = A(h) m(t), (9)
P^−(t + h) = A(h) P(t) A(h)^⊤ + Q(h), (10)

with A and Q generally defined by Eq. (77) and, in the considered particular case of Eq. (5), by Eqs. (6) and (7). In the subsequent step, the following quantities are computed first: the Kalman gain

β(t + h) = P^−(t + h)_{:,1} / (P^−(t + h)_{1,1} + R), (11)

the measurement/data on ẋ

y(t + h) = f(m^{−,(0)}(t + h)), (12)

and the innovation/residual

r(t + h) = y(t + h) − m^{−,(1)}(t + h). (13)

Here, R denotes the variance of y (the 'measurement noise') and captures the squared difference between the data y(t + h) = f(m^{−,(0)}(t + h)) that the algorithm actually receives and the idealized data ẋ(t + h) = f(x(t + h)) that it 'should' receive (see Sect. 2.3). Finally, the mean and the covariance matrix are conditioned on this data, which yields the updated mean

m(t + h) = m^−(t + h) + β(t + h) r(t + h), (14)

and the updated covariance

P(t + h) = P^−(t + h) − β(t + h) P^−(t + h)_{1,:}. (15)

This concludes the step t → t + h, with the Gaussian distribution N(m(t + h), P(t + h)) over x(t + h). The algorithm is iterated by computing m(t + 2h) := Ψ_{P(t+h),h}(m(t + h)) as well as repeating Eqs.
(10) and (15), with P(t + h) instead of P(t), to obtain P(t + 2h). In the following, to avoid notational clutter, the dependence of the above quantities on t, h and σ will be omitted if their values are unambiguous. Parameter adaptation reminiscent of classical methods (e.g., for σ such that the added variance per step coincides with standard error estimates) has been explored in Schober et al. (2019, Section 4). This filter is essentially an iterative application of Bayes' rule (see, e.g., Särkkä (2013, Chapter 4)) based on the prior X on x specified by Eq. (2) (entering the algorithm via A and Q) and the measurement model y ∼ N(ẋ, R). Since the measurement model is a likelihood by another name and therefore forms a complete Bayesian model together with the prior X, it remains to detail the measurement model (recall Sect. 2.1 for the choice of prior). Concerning the data generation mechanism for y in Eq. (12), we only consider the maximum-a-posteriori point estimate of ẋ(t) given N(m^{−,(0)}(t), P^−_{0,0}(t)); a discussion of more involved statistical models for y, as well as an algorithm box for the Gaussian ODE filter, can be found in Schober et al. (2019, Subsection 2.2). Next, for lack of such a discussion for R, we will examine different choices of R, which have proved central to the UQ of the filter (Kersting and Hennig 2016) and will turn out to affect global convergence properties in Sect. 7.
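The prediction and update steps above fit in a few lines of code. The following Python sketch (our own minimal illustration under the assumptions d = 1, q = 1, and the IBM prior; not the authors' reference implementation) performs one full step t → t + h:

```python
import numpy as np

def filter_step(m, P, f, h, sigma=1.0, R=0.0):
    """One step t -> t + h of the Gaussian ODE filter for q = 1 with
    the IBM prior: predict with (A, Q), evaluate f at the predictive
    mean, and condition on the resulting derivative datum."""
    # IBM transition matrices (theta = 0)
    A = np.array([[1.0, h], [0.0, 1.0]])
    Q = sigma ** 2 * np.array([[h ** 3 / 3, h ** 2 / 2],
                               [h ** 2 / 2, h]])
    # prediction step
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # data: evaluate f at the predictive mean of x
    y = f(m_pred[0])
    # residual and Kalman gain (index 1 selects the derivative)
    r = y - m_pred[1]
    beta = P_pred[:, 1] / (P_pred[1, 1] + R)
    # update step
    m_new = m_pred + beta * r
    P_new = P_pred - np.outer(beta, P_pred[1, :])
    return m_new, P_new
```

For instance, iterating 100 steps of size h = 0.01 from the exact state m(0) = (1, −1) for the hypothetical test ODE ẋ = −x yields m^{(0)}(1) ≈ e^{−1}.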

Measurement noise R
Two sources of uncertainty add to R(t): noise from imprecise knowledge of x(t) and of f. Given f, previous integration steps of the filter (as well as an imprecise initial value) inject uncertainty about how close m^−(t) is to x(t), and hence about how close y = f(m^−(t)) is to ẋ(t) = f(x(t)). This uncertainty stems from the discretization error m^{−,(0)}(t) − x(t) and, hence, tends to increase with h. Additionally, there can be uncertainty from a misspecified f, e.g., when f has estimated parameters, or from numerically imprecise evaluations of f, which can be added to R (a functionality which classical solvers do not possess). In this paper, since R depends on h via the numerical uncertainty on x(t), we analyze the influence of noise R of order p ∈ [1, ∞] (see Assumption 4) on the quality of the solution, to illuminate for which orders of noise we can trust the solution to which extent, and when we should, instead of decreasing h, rather spend computational budget on specifying or evaluating f more precisely. The explicit dependence of the noise on its order p in h resembles, despite the fundamentally different role of R compared to the additive noise in Conrad et al. (2017); Abdulle and Garegnani (2020), the variable p in Conrad et al. (2017, Assumption 1) and Abdulle and Garegnani (2020, Assumption 2.2), in the sense that the analysis highlights how uncertainty of this order can still be modeled without breaking the convergence rates. (Adaptive noise models are computationally feasible (Kersting and Hennig 2016) but lie outside the scope of our analysis.)

Regularity of flow
Before we proceed to the analysis of Ψ , we provide all regularity results necessary for arbitrary q, d ∈ N in this section.

Assumption 1
The vector field f ∈ C^q(R^d; R^d) is globally Lipschitz, and all its derivatives of order up to q are uniformly bounded and globally Lipschitz, i.e., there exists some constant L > 0 which uniformly bounds all these derivatives as well as their Lipschitz constants.

Assumption 1 and the Picard–Lindelöf theorem imply that the solution x is a well-defined element of C^{q+1}([0, T]; R^d). Recall that, by a bold symbol, we denote the vector of these derivatives: x ≡ (x^{(0)}, . . . , x^{(q)}). In particular, the solution x of Eq. (1) is denoted by x^{(0)}. Analogously, we denote the flow of the ODE Eq. (1) by Φ^{(0)}, i.e., Φ^{(0)}_h(a) is the solution of Eq. (1) at time h with initial value a ∈ R^d, and Φ^{(i)}_h denotes its ith time derivative.

Lemma 1 Under Assumption 1, for all a ∈ R^d and all h > 0,

‖Φ^{(i)}_h(a) − ∑_{j=i}^{q} [h^{j−i}/(j − i)!] Φ^{(j)}_0(a)‖ ≤ K h^{q+1−i}, for all i ∈ [q + 1].

Here, and in the sequel, K > 0 denotes a constant independent of h and θ which may change from line to line.
Proof By Assumption 1, Φ^{(q+1)} exists and is bounded by ‖Φ^{(q+1)}‖ ≤ L, which can be seen by applying the chain rule q times to both sides of Eq. (1). Now, applying Taylor's theorem with this bound on the remainder concludes the proof.

Lemma 2 Under Assumption 1 and for all sufficiently small h > 0, the flow map is Lipschitz in its initial value, in the sense that ‖Φ^{(0)}_h(a) − Φ^{(0)}_h(b)‖ ≤ (1 + K h) ‖a − b‖ for all a, b ∈ R^d.
Proof Immediate corollary of Teschl (2012, Theorem 2.8).
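The Lipschitz bound of Lemma 2 can be observed numerically on any concrete ODE with a known flow. The following small Python check (our own construction; the logistic ODE ẋ = x(1 − x) and the constant K = 2 are illustrative choices, not taken from the paper) verifies that nearby initial values separate at most by a factor (1 + K h):

```python
import numpy as np

def flow_logistic(h, a):
    """Closed-form flow of the logistic ODE x' = x(1 - x):
    the solution at time h started from x(0) = a."""
    return a * np.exp(h) / (1.0 + a * (np.exp(h) - 1.0))

def separation_growth(h, a, b):
    """Growth factor of the distance between two trajectories
    over one time interval of length h."""
    return abs(flow_logistic(h, a) - flow_logistic(h, b)) / abs(a - b)
```

For initial values near 0.3, the observed growth factor stays below 1 + 2h for all small h, in line with the (1 + K h) bound.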
Global convergence (Sect. 7) will require the following generalization of Lemma 2.

Lemma 3 Let q = 1. Then, under Assumption 1 and for all sufficiently small h > 0,

|||Φ_h(a) − Φ_h(b)|||_h ≤ (1 + K h) |||a − b|||_h,
where, given the norm ‖·‖ on R^d and h > 0, the new norm ||| · |||_h on R^{(q+1)×d} is defined by |||m|||_h := max_{i ∈ [q+1]} h^i ‖m_{i,:}‖. Remark 1 The necessity of ||| · |||_h stems from the fact that, unlike other ODE solvers, the ODE filter Ψ additionally estimates and uses the first q derivatives in its state m ∈ R^{(q+1)×d}, whose development cannot be bounded in ‖·‖, but can be bounded in ||| · |||_h. The norm ||| · |||_h is used to make rigorous the intuition that the estimates of the solution's time derivatives are 'one order of h worse per derivative.' Proof We bound the two summands of the claimed inequality separately; inserting Eq. (21) into Eq. (20) concludes the proof.

The role of the state misalignments δ
In Gaussian ODE filtering, each estimate of the ODE solution comes with a fixed set of derivative estimates, which are implied by the ODE, for the following reason: Clearly, by Eq. (1), the estimate m^{(1)}(t) of the first derivative should equal f(m^{(0)}(t)); by the chain rule, the higher derivative estimates m^{(i)}(t), i ≥ 2, are determined by the ODE as well. To see this, let us first recursively define f^{(0)} := f and f^{(i)} := (∇f^{(i−1)}) f, so that differentiating Eq. (1) (i − 1) times by the chain rule yields x^{(i)}(t) = f^{(i−1)}(x(t)) (which we prove in Appendix E). For the filtering mean, this amounts to requiring that

m^{(i)}(t) = f^{(i−1)}(m^{(0)}(t)). (24)

The Gaussian ODE filter, however, does not use this (intractable) analytical approach. Instead, it jointly models and infers x^{(0)}(t) and its first q derivatives {x^{(1)}(t), . . . , x^{(q)}(t)} in a state space X, as detailed in Sect. 2. The thus-computed filtering mean estimates m^{(i)}(t) depend not only on the ODE but also on the statistical model, namely on the prior (SDE) and the measurement noise R; recall Sects. 2.1 and 2.3. In fact, the analytically desirable derivative estimate, Eq. (24), is, for i = 1, only satisfied if R = 0 (which can be seen from Eq. (14)), and generally does not hold for i ≥ 2 since both f^{(i−1)} and Φ^{(i)} are inaccessible to the algorithm. The numerical example in Appendix C clarifies that δ^{(i)} is likely to be strictly positive, even after the first step 0 → h.
This inevitable mismatch between exact analysis and approximate statistics motivates the following definition of the ith state misalignment at time t:

δ^{(i)}(t) := ‖m^{(i)}(t) − f^{(i−1)}(m^{(0)}(t))‖. (25)

Intuitively speaking, δ^{(i)}(t) quantifies how large this mismatch is for the ith derivative at time t. Note that δ^{(i)}(t) = 0 if and only if Eq. (24) holds, i.e., for i = 1 iff R = 0 (which can be seen from Eq. (14)), and only by coincidence for i ≥ 2. The possibility of δ^{(i)} > 0, for i ≥ 1, is inconvenient for the below worst-case analysis since (if Eq. (24) held true and δ^{(i)} ≡ 0) the prediction step of the drift-less IBM prediction (θ = 0) would coincide with a Taylor expansion of the flow map Φ^{(i)}_0; see Eq. (8). But, because δ^{(i)} ≠ 0 in general, we have to additionally bound the influence of δ ≥ 0, which complicates the below proofs further.
Fortunately, we can locally bound the import of δ (i) by the easy Lemma 7 and globally by the more complicated Lemma 11 (see Sect. 7.3). Intuitively, these bounds demonstrate that the order of the deviation from a Taylor expansion of the state m = [m (0) , . . . , m (q) ] due to δ is not smaller than the remainder of the Taylor expansion. This means, more loosely speaking, that the import of the δ (i) is swallowed by the Taylor remainder. This effect is locally captured by Lemma 4 and globally by Lemma 12. The global convergence rates of δ (i) (T ), as provided by Lemma 12, are experimentally demonstrated in Appendix D.
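That the derivative misalignment is typically strictly positive already after a single step can be checked directly. The following toy computation (our own construction, not the example from Appendix C; it assumes q = 1, the IBM prior, the hypothetical test ODE ẋ = −x, exact initialization, and noise R > 0) evaluates the mismatch between m^{(1)}(h) and f(m^{(0)}(h)) after the step 0 → h:

```python
import numpy as np

def first_misalignment(h, R, sigma=1.0):
    """|m^(1)(h) - f(m^(0)(h))| after one Gaussian ODE filter step
    (q = 1, IBM prior) for x' = f(x) = -x, started from the exact
    state m(0) = (1, -1) with zero initial covariance."""
    f = lambda x: -x
    m = np.array([1.0, f(1.0)])
    A = np.array([[1.0, h], [0.0, 1.0]])
    Q = sigma ** 2 * np.array([[h ** 3 / 3, h ** 2 / 2],
                               [h ** 2 / 2, h]])
    m_pred = A @ m                      # prediction of (x, x')
    P_pred = Q                          # P(0) = 0, so P^- = Q
    beta = P_pred[:, 1] / (P_pred[1, 1] + R)
    m_new = m_pred + beta * (f(m_pred[0]) - m_pred[1])
    return abs(m_new[1] - f(m_new[0]))
```

Even for this well-behaved linear ODE, the returned misalignment is strictly positive for R > 0.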

Auxiliary bounds on intermediate quantities
Recall from Eq. (5) that θ = 0 and θ > 0 denote the cases of the IBM and the IOUP prior with drift coefficient θ, respectively. The ODE filter Ψ iteratively computes the filtering means m(nh) = (m^{(0)}(nh), . . . , m^{(q)}(nh)) ∈ R^{q+1} as well as error covariance matrices P(nh) ∈ R^{(q+1)×(q+1)} on the mesh {nh}_{n=0}^{T/h}. (Here and in the following, we assume w.l.o.g. that T/h ∈ N.) Ideally, the truncation error over all derivatives falls quickly as h → 0 and is estimated by the standard deviation √(P_{0,0}(nh)). Next, we present a classical worst-case convergence analysis over all f satisfying Assumption 1; see Sect. 10 for a discussion of the desirability and feasibility of an average-case analysis. To this end, we bound the added error of every step by the intermediate quantities defined in Eqs. (11) and (13), and bound these quantities in the order Δ^{−(i)}, r, β^{(i)}. These bounds will be needed for the local and global convergence analysis in Sects. 6 and 7, respectively. Note that, intuitively, Δ^{−(i)}((n + 1)h) and Δ^{(i)}((n + 1)h) denote the additional numerical error which is added in the (n + 1)th step to the ith derivative of the predictive mean m^{−,(i)}(t + h) and the updated mean m^{(i)}(t + h), respectively.

Lemma 4 Under Assumption 1, for all i ∈ [q + 1] and all sufficiently small h > 0,
Proof We may assume, as explained in Sect. 2.2, without loss of generality that d = 1. The claim follows by applying the triangle inequality to the definition of Δ^{−(i)}((n + 1)h) in Eq. (28) and then using Eq. (8).

Lemma 5 Under Assumption 1 and for all sufficiently small h > 0,

Proof See Appendix F.
To bound the Kalman gains β(nh), we first need to assume that the orders of the initial covariance matrices are sufficiently high (matching the orders of the initialization error required later; see Assumption 3).

Assumption 2
The entries of the initial covariance matrix P(0) satisfy P(0)_{i,j} ≤ K h^{2q+1−i−j}, for all i, j ∈ [q + 1]. We make this assumption, as well as Assumption 3, explicit (instead of just making the stronger assumption of exact initializations with zero variance), because it highlights how statistical or numerical uncertainty on the initial value affects the accuracy of the output of the filter, a novel functionality of PN with the potential to facilitate a management of the computational budget across a computational chain with respect to the respective perturbations from different sources of uncertainty (Hennig et al. 2015, Section 3(d)).

Lemma 6 Under Assumption 2, for all i ∈ [q + 1] and all sufficiently small h > 0, β^{(i)}(h) ≤ K h^{1−i}.
Proof Application of the orders of A and Q from Eqs. (6) and (7), the triangle inequality, and Assumption 2 to the definition of P^− in Eq. (10) yields P^−(h)_{i,j} ≤ K h^{2q+1−i−j}. Recall that P and Q are (positive semi-definite) covariance matrices; hence, P^−(h)_{1,1} ≥ K h^{2q−1}. Inserting these orders into the definition of β^{(i)} (Eq. (11)), recalling that R ≥ 0, and removing the dependence on θ by reducing the fraction concludes the proof.
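The asserted orders β^{(i)}(h) = O(h^{1−i}) can be verified numerically in the simplest setting. The following Python sketch (our own check under the assumptions q = 1, θ = 0, R = 0, and zero initial covariance) computes the gain after one prediction; in this case the gain even has the exact closed form β(h) = (h/2, 1):

```python
import numpy as np

def kalman_gain(h, sigma=1.0, R=0.0, P0=None):
    """Kalman gain beta(h) after one IBM (q = 1) prediction step
    from the initial covariance P0 (zero by default)."""
    P0 = np.zeros((2, 2)) if P0 is None else P0
    A = np.array([[1.0, h], [0.0, 1.0]])
    Q = sigma ** 2 * np.array([[h ** 3 / 3, h ** 2 / 2],
                               [h ** 2 / 2, h]])
    P_pred = A @ P0 @ A.T + Q
    # gain on the derivative coordinate (zero-based index 1)
    return P_pred[:, 1] / (P_pred[1, 1] + R)
```

The component β^{(0)} scales like h while β^{(1)} stays of order one, matching the exponents 1 − i.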

Local convergence rates
With the above bounds on intermediate algorithmic quantities (involving the state misalignments δ^{(i)}) in place, we only need an additional assumption to proceed, via a bound on δ^{(i)}(0), to our first main result on local convergence orders of Ψ.

Assumption 3
The initial errors of the initial estimate m(0) of x(0) satisfy ‖m^{(i)}(0) − x^{(i)}(0)‖ ≤ K h^{q+1−i}, for all i ∈ [q + 1]. (This assumption is, like Assumption 2, weaker than the standard assumption of exact initializations.)

Lemma 7 Under Assumptions 1 and 3, for all i ∈ [q + 1] and all sufficiently small h > 0, δ^{(i)}(0) ≤ K h^{q+1−i}.
Proof The claim follows, using Assumptions 1 and 3, from the triangle inequality applied to the definition of δ^{(i)}(0). Now, we can bound the local truncation error ε^{(0)}(h) as defined in Eq. (26).

Theorem 8 (Local truncation error) Under Assumptions 1 to 3 and for all sufficiently small h > 0, the local truncation errors of all modeled derivatives satisfy ε^{(i)}(h) ≤ K h^{q+1−i}, for all i ∈ [q + 1].
Proof By the triangle inequality for ||| · |||_h and subsequent application of Lemma 3 and Assumption 3 to the second summand of the resulting inequality, we obtain the claimed bound up to the terms Δ^{(i)}(h). The remaining bound on Δ^{(i)}(h), for all i ∈ [q + 1] and sufficiently small h > 0, is obtained by inserting the bounds from Lemmas 4 to 6 (in the case of n = 0) into Eq. (28). Such derivative bounds are (to the best of our knowledge) not available for classical numerical solvers, since they do not explicitly model the derivatives in the first place. These bounds could be useful for subsequent computations based on the ODE trajectory (Hennig et al. 2015).
Unsurprisingly, as the mean prediction (recall Eq. (8)) deviates from a pure qth-order Taylor expansion by K_θ m^(q)(0) h^{q+1} for an IOUP prior (i.e., θ > 0 in Eq. (5)), the constant in front of the local h^{q+1} convergence rate depends on both θ and m^(q)(0) in the IOUP case. A global analysis for IOUP is therefore more complicated than for IBM: Recall from Eq. (8) that, for q = 1, the mean prediction contains exponential decay factors in θ, which pull both m^{-,(0)} and m^{-,(1)} towards zero (or some other prior mean) compared to the prediction given by its Taylor expansion for θ = 0. While this is useful for ODEs converging to zero, such as ẋ = −x, it is problematic for diverging ODEs, such as ẋ = x (Magnani et al. 2017). As shown in Theorem 8, this effect is asymptotically negligible for local convergence, but it might matter globally and, therefore, might necessitate stronger assumptions on f than Assumption 1, such as a bound on ‖f‖_∞, which would globally bound {y(nh); n = 0, . . . , T/h} and thereby {m^(1)(nh); n = 0, . . . , T/h} in Eq. (38). It is furthermore conceivable that a global bound for IOUP would depend on the relation between θ and ‖f‖_∞ in a non-trivial way. The inclusion of IOUP (θ > 0) would hence complicate the below proofs further. Therefore, we restrict the following first global analysis to IBM (θ = 0).
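To make the pull-to-zero effect of the IOUP prediction concrete, here is a small numerical sketch (not taken from the paper; it assumes the standard q = 1 IOUP drift matrix F = ((0, 1), (0, −θ)), so that the mean prediction matrix is A(h) = exp(Fh)) comparing the IBM and IOUP mean predictions for the diverging ODE ẋ = x:

```python
import numpy as np

def prediction_matrix(h, theta):
    """Mean prediction matrix A(h) for q = 1.

    theta = 0 gives the IBM prior (pure first-order Taylor extrapolation);
    theta > 0 gives the IOUP prior, whose components decay towards the
    prior mean (here: zero).  Assumes the drift F = [[0, 1], [0, -theta]].
    """
    if theta == 0.0:
        return np.array([[1.0, h], [0.0, 1.0]])
    return np.array([[1.0, (1.0 - np.exp(-theta * h)) / theta],
                     [0.0, np.exp(-theta * h)]])

h = 0.1
m = np.array([1.0, 1.0])  # state (x, x') of  x' = x  at x = 1

m_ibm = prediction_matrix(h, 0.0) @ m    # Taylor step: (1.1, 1.0)
m_ioup = prediction_matrix(h, 2.0) @ m   # pulled towards the zero prior mean

# IOUP pulls both predicted components towards zero relative to IBM:
assert abs(m_ioup[1]) < abs(m_ibm[1])
assert m_ioup[0] < m_ibm[0]
```

For ẋ = −x the same pull is beneficial, which is exactly the asymmetry discussed above.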

Global analysis
As explained in Remark 2, we only consider the case of the IBM prior, i.e., θ = 0, in this section. Moreover, we restrict this first global analysis to q = 1. Although we only have definite knowledge for q = 1, we believe that the convergence rates might also hold for higher q ∈ N, which we experimentally test in Sect. 9.1. We furthermore believe that proofs analogous to the below proofs might work out for higher q ∈ N and that deriving a generalized version of Proposition 10 for higher q is the bottleneck for such proofs (see Sect. 10 for a discussion of these restrictions).
While, for local convergence, all noise models R yield the same convergence rates in Theorem 8, it is unclear how the order of R in h (as described in Sect. 2.3) affects global convergence rates: e.g., for the limiting case R ≡ K h^0, the steady-state Kalman gains β^∞ would converge to zero (see Eqs. (43) and (44) below) for h → 0, and hence the evaluation of f would not be taken into account, yielding a filter Ψ which assumes that the evaluations of f are equally off, regardless of h > 0, and eventually just extrapolates along the prior without global convergence of the posterior mean m. For the opposite limiting case R ≡ lim_{p→∞} K h^p ≡ 0, it has already been shown in Schober et al. (2019, Proposition 1 and Theorem 1) that, in the steady state and for q = 1, 2, the filter Ψ inherits global convergence rates from known multistep methods in Nordsieck form (Nordsieck 1962). To explore a more general noise model, we assume a fixed noise model R ≡ K h^p with arbitrary order p.
In the following, we analyze how small p can be in order for Ψ to exhibit fast global convergence (cf. the similar role of the order p of perturbations in Conrad et al. (2017, Assumption 1) and Abdulle and Garegnani (2020, Assumption 2.2)). In light of Theorem 8, the highest possible global convergence rate is O(h), which will indeed be obtained for all p ∈ [1, ∞] in Theorem 14. Since every extrapolation step of Ψ from t to t + h depends not only on the current state, but also on the covariance matrix P(t), which itself depends on all previous steps, Ψ is neither a single-step nor a multistep method. Contrary to Schober et al. (2019), we do not restrict our theoretical analysis to the steady-state case, but provide our results under the weaker Assumptions 2 and 3 that were already sufficient for local convergence in Theorem 8, which is made possible by the bounds Eqs. (48) and (49) in Proposition 10.

Outline of global convergence proof
The goal of the following sequence of proofs in Sect. 7 is Theorem 14. It is proved by a special version of the discrete Grönwall inequality (Clark 1987) whose prerequisite is provided in Lemma 13. This Lemma 13 follows from Lemma 3 (on the regularity of the flow map Φ t ) as well as Lemma 12 which provides a bound on the maximal increment of the numerical error stemming from local truncation errors. For the proof of Lemma 12, we first have to establish (i) global bounds on the Kalman gains β (0) and β (1) by the inequalities Eqs. (48) and (49) in Proposition 10, and (ii) a global bound on the state misalignment δ (1) in Lemma 11.
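The role of the discrete Grönwall inequality in this outline can be illustrated with a generic numerical sketch (hypothetical constants L and C, not the paper's actual bounds): a per-step recursion ε_n ≤ (1 + hL) ε_{n−1} + C h² accumulates, over a fixed horizon T = nh, to a global bound of order h, namely ε_n ≤ e^{nhL} ε_0 + (Ch/L)(e^{nhL} − 1).

```python
import math

# Discrete Groenwall inequality (generic constants, assumed for illustration):
# if  e_n <= (1 + h*L) * e_{n-1} + C * h**2  for all n, then
#     e_n <= exp(n*h*L) * e_0 + (C*h/L) * (exp(n*h*L) - 1),
# i.e., local O(h^2) increments accumulate to a global O(h) error over fixed T.
L_const, C, h, e0 = 2.0, 3.0, 0.01, 0.0

e, errors = e0, [e0]
for n in range(1, 101):                      # 100 steps: T = n*h = 1
    e = (1 + h * L_const) * e + C * h**2     # worst case: recursion with equality
    errors.append(e)

for n, e_n in enumerate(errors):
    bound = (math.exp(n * h * L_const) * e0
             + (C * h / L_const) * (math.exp(n * h * L_const) - 1))
    assert e_n <= bound + 1e-12
```

This is the mechanism by which Lemma 13's per-step estimate is converted into Theorem 14's global rate.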
In Sects. 7.2-7.4, we will collect these inequalities in the order of their numbering to subsequently prove global convergence in Sect. 7.5.

Global bounds on Kalman gains
Since we will analyze the sequence of covariance matrices and Kalman gains using contractions in Proposition 10, we first introduce the following generalization of the Banach fixed-point theorem (BFT).

Lemma 9
Let (X, d) be a non-empty complete metric space and T_n : X → X, n ∈ N, a sequence of L_n-Lipschitz continuous contractions with sup_n L_n ≤ L̄ < 1. Let u_n be the fixed point of T_n, as given by the BFT, and let lim_{n→∞} u_n = u* ∈ X. Then, for all x_0 ∈ X, the recursive sequence x_n := T_n(x_{n−1}) converges to u* as n → ∞.
Proof See Appendix G.
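A minimal numerical illustration of Lemma 9 (with hypothetical contractions T_n(x) = x/2 + 1 + 1/n, all 1/2-Lipschitz, whose fixed points u_n = 2(1 + 1/n) converge to u* = 2):

```python
def T_n(n, x):
    """Contraction T_n(x) = x/2 + 1 + 1/n; fixed point u_n = 2*(1 + 1/n)."""
    return 0.5 * x + 1.0 + 1.0 / n

# Recursive sequence x_n = T_n(x_{n-1}) from an arbitrary starting point:
x = 10.0
for n in range(1, 2001):
    x = T_n(n, x)

# As Lemma 9 asserts, x_n converges to u* = lim u_n = 2:
assert abs(x - 2.0) < 0.01
```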
In the following, we will assume that T is a multiple of h.
Proposition 10 For constant R ≡ K h^p with p ∈ [0, ∞], the unique (attractive) steady states of the following quantities are given by Eqs. (39)-(44), together with the bounds Eqs. (45)-(49). All of these bounds are sharp in the sense that they fail for any higher order in the exponent of h.

Remark 3
The recursions for P(nh) and P^-(nh) given by Eqs. (10) and (15) follow a discrete algebraic Riccati equation (DARE), a topic studied in many related settings (Lancaster and Rodman 1995). While the asymptotic behavior Eq. (39) of the completely detectable state X^(1) can also be obtained using classical filtering theory (Anderson and Moore 1979, Chapter 4.4), the remaining statements of Proposition 10 also concern the undetectable state X^(0) and are, to the best of our knowledge, not directly obtainable from existing theory on DAREs or filtering (which makes the following proof necessary). Note that, in the special case of no measurement noise (R ≡ 0), Eqs. (43) and (44)

Global bounds on state misalignments
For the following estimates, we restrict the choice of p to be at least q = 1.

Assumption 4 The noise model is chosen to be R ≡ K h^p with p ∈ [1, ∞].
Before bounding the added deviation of Ψ from the flow Φ per step, a global bound on the state misalignments defined in Eq. (25) is necessary. The result of the following lemma is discussed in Appendix D.

Lemma 11 Under Assumptions 1 to 4 and for all sufficiently small h > 0,
Proof See Appendix I.
See Appendix D for an experimental demonstration of Eq. (33).

Prerequisite for discrete Grönwall inequality
Equipped with the above bounds, we can now prove a bound on the maximal increment of the numerical error stemming from local truncation errors which is needed to prove Eq. (56), the prerequisite for the discrete Grönwall inequality.
The previous lemma now implies a suitable prerequisite for a discrete Grönwall inequality.

Lemma 13 Under Assumptions 1 to 4 and for all sufficiently small h > 0,
Proof We observe, by the triangle inequality for the norm |||·|||_h, that the error decomposes into two summands. The proof is concluded by applying Lemma 12 to the first and Lemma 3 to the second summand of this bound (as well as recalling from Eq. (26) that ε^(0)(nh) = m^(0)(nh) − x^(0)(nh)).

Global convergence rates
With the above bounds in place, we can now prove global convergence rates.

Here, K(T) > 0 is a constant that depends on T, but not on h.
Remark 4 Theorem 14 not only implies that the truncation error ε^(0)(nh) on the solution of Eq. (1) has global order h, but also (by Eq. (19)) that the truncation error ε^(1)(nh) on the derivative is uniformly bounded by a constant K independent of h. The convergence rate of this theorem is sharp in the sense that it cannot be improved over all f satisfying Assumption 1, since it is one order worse than the local convergence rate implied by Theorem 8.

Calibration of credible intervals
In PN, one way to judge the calibration of a Gaussian output is to check whether its credible interval contracts at the same rate as the posterior mean converges to the true quantity of interest. For the filter, this would mean that max_n √(P_{00}(nh)) should contract at the same rate as max_{n∈[T/h+1]} ε^(0)(nh) (recall its rates from Theorem 14). Otherwise, for a higher or lower contraction rate of the interval, it would eventually be over- or underconfident, respectively, as h → 0. The following theorem shows, in light of the sharp bound Eq. (58) on the global error, that the credible intervals are well calibrated in this sense if p ∈ [1, ∞].

Theorem 15 Under Assumption 2 and for R ≡ K h^p, p ∈ [0, ∞], as well as sufficiently small h > 0,
Proof See Appendix J.

Numerical experiments
In this section, we empirically assess the following hypotheses: (i) the worst-case convergence rates from Theorem 14 hold not only for q = 1 but also for q ∈ {2, 3} (see Sect. 9.1), (ii) the convergence rates of the credible intervals from Theorem 15 hold true (see Sect. 9.2), and (iii) Assumption 4 is necessary to get these convergence rates (see Sect. 9.3).
The three hypotheses are all supported by the experiments. These experiments are subsequently discussed in Sect. 9.4. Appendix D contains an additional experiment illustrating the convergence rates for the state misalignment δ from Lemma 11.

Global convergence rates for q ∈ {1, 2, 3}
We consider the following three test IVPs. Firstly, the linear ODE Eq. (69), which has the harmonic oscillator as a solution. Secondly, the logistic equation Eq. (71) with (λ_0, λ_1) = (3, 1) and x(0) = 0.1, which has the logistic curve as a solution. And, thirdly, the FitzHugh-Nagumo model Eq. (73) on t ∈ [0, 10] with (a, b, c) = (0.08, 0.07, 1.25) and x(0) = (1, 0), which does not have a closed-form solution. Its solution, which we approximate by Euler's method with a step size of h = 10^{−6} for the below experiments, is depicted in Fig. 1. We numerically solve these three IVPs with the Gaussian ODE filter for multiple step sizes h > 0 and with a q-times IBM prior (i.e., θ = 0 in Eq. (5)) for q ∈ {1, 2, 3} and scale σ = 20. As a measurement model, we employ the minimal R ≡ 0 and the maximal measurement variance R ≡ K_R h^q (for h ≤ 1) which are permissible under Assumption 4, whose constant K > 0 is denoted explicitly by K_R in this section. The resulting convergence rates of the global errors ‖m(T) − x(T)‖ are depicted in a work-precision diagram in Fig. 2. Figure 2 shows that these qth-order convergence rates hold true in the considered examples for values of up to q = 3, for both choices of R. In the case of R ≡ 0, even (q + 1)th-order convergence rates appear to hold true for all three ODEs and q ∈ {1, 2, 3}. Note that it is more difficult to validate these convergence rates for q = 4, for all three test problems and small h > 0, since numerical instability can contaminate the analytical rates.
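For reference, the q = 1 filter used in these experiments can be sketched in a few lines. This is a simplified sketch consistent with the prediction and update steps Eqs. (9)-(13), not the actual experiment code: it assumes exact initialization m(0) = (x(0), f(x(0))), P(0) = 0, the noise-free model R ≡ 0, and the logistic parameterization ẋ = λ_0 x (1 − x/λ_1).

```python
import numpy as np

def ode_filter(f, x0, T, h, sigma2=400.0, R=0.0):
    """Gaussian ODE filter with once-integrated Brownian motion prior (q = 1).

    Sketch under assumed exact initialization m(0) = (x0, f(x0)), P(0) = 0,
    and measurement model H = (0, 1).
    """
    A = np.array([[1.0, h], [0.0, 1.0]])                        # IBM transition
    Q = sigma2 * np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])  # IBM diffusion
    m, P = np.array([x0, f(x0)]), np.zeros((2, 2))
    for _ in range(int(round(T / h))):
        m_pred = A @ m                        # prediction step, Eq. (9)
        P_pred = A @ P @ A.T + Q              # prediction step, Eq. (10)
        y = f(m_pred[0])                      # "data": f at the predicted mean
        S = P_pred[1, 1] + R                  # innovation variance
        beta = P_pred[:, 1] / S               # Kalman gains (beta^(0), beta^(1))
        m = m_pred + beta * (y - m_pred[1])   # update step, Eqs. (11)-(13)
        P = P_pred - np.outer(beta, P_pred[1, :])
    return m, P

# Logistic ODE (Eq. (71)), assuming x' = lambda0 * x * (1 - x / lambda1)
# with (lambda0, lambda1) = (3, 1), x(0) = 0.1, and its closed-form solution:
f = lambda x: 3.0 * x * (1.0 - x)
sol = lambda t: 0.1 * np.exp(3 * t) / (0.9 + 0.1 * np.exp(3 * t))

err = [abs(ode_filter(f, 0.1, 1.0, h)[0][0] - sol(1.0)) for h in (0.02, 0.01, 0.005)]
assert err[0] > err[1] > err[2]   # error decreases with h (about h^2 for R = 0)
```

For R ≡ 0 and P(0) = 0, the very first step of this recursion reproduces an explicit trapezoidal step, which matches the observed second-order behavior for q = 1.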

Calibration of credible intervals
To demonstrate the convergence rates of the posterior credible intervals proved in Theorem 15, we now restrict our attention to the case of q = 1 that was considered therein. As in Sect. 9.1, we numerically solve the IVPs Eqs. (69) and (71) with the Gaussian ODE filter with a once-integrated Brownian motion prior with fixed scale σ = 1. We again employ the minimal R ≡ 0 and the maximal measurement variance R ≡ K_R h^q (for h ≤ 1) which are permissible under Assumption 4 as a measurement model. Figure 3 depicts the resulting convergence rates in work-precision diagrams. As the parallel standard-deviation and h^1 convergence curves show, the credible intervals asymptotically contract at the rate of h^1 guaranteed by Theorem 15. In all four diagrams of Fig. 3, the global error shrinks at a faster rate than the width of the credible intervals. This is unsurprising for R ≡ 0, as we have already observed convergence rates of h^{q+1} in this case. While this effect is less pronounced for R ≡ K_R h^q, it still results in underconfidence as h → 0. Remarkably, the shrinking of the standard deviations seems to be 'adaptive' to the numerical error, by which we mean that, as long as the numerical error hardly decreases (up to 10^{1.75} evaluations of f), the standard deviation also stays almost constant, before adopting its h^1 convergence asymptotic (from ≈ 10^{2.00} evaluations).

Necessity of Assumption 4
Having explored the asymptotic properties under Assumption 4 in Sects. 9.1 and 9.2, we now turn our attention to the question of whether this assumption is necessary to guarantee the convergence rates from Theorems 14 and 15. This question is of significance, because Assumption 4 is weaker than the R ≡ 0 assumption of the previous theoretical results (i.e., Proposition 1 and Theorem 1 in Schober et al. (2019)) and it is not self-evident that it cannot be further relaxed.
To this end, we numerically solve the logistic ODE Eq. (71) with the Gaussian ODE filter with a once-integrated Brownian motion prior with fixed scale σ = 1 and measurement variance R ≡ K_R h^{1/2}, which is impermissible under Assumption 4, for increasing choices of K_R from 0 to 1.00 × 10^7. In the same way as in Fig. 3, the resulting work-precision diagrams are plotted in Fig. 4. In contrast to the lower left diagram in Fig. 3, which presents the same experiment for R ≡ K_R h^q (the maximal measurement variance permissible under Assumption 4), the rate of h^2, that is again observed for K_R = 0 in the first diagram, is already missed for K_R = 1.00 × 10^0 in the second diagram. With growing constants, the convergence rates of the actual errors as well as the expected errors (standard deviations) decrease from diagram to diagram. In the center diagram with K_R = 3.73 × 10^3, the rates are already slightly worse than the h^1 convergence rates guaranteed by Theorems 14 and 15 under Assumption 4, whereas, for K_R = 5.00 × 10^3, the convergence rates in the lower left plot of Fig. 3 were still significantly better than h^1. For the greater constants up to K_R = 1.00 × 10^7, the rates become significantly lower still. Notably, as in the lower right diagram of Fig. 3, the slope of the standard-deviation curve matches the slope of the global-error curve, as can be seen best in the lower right subfigure, thereby asymptotically exhibiting neither over- nor underconfidence. These experiments suggest that the convergence rates from Theorems 14 and 15 do not hold in general for R ≡ K_R h^{1/2}. Hence, it seems likely that Assumption 4 is indeed necessary for our results and cannot be further relaxed without lowering the implied worst-case convergence rates.

Discussion of experiments
Before proceeding to our overall conclusions, we close this section with a comprehensive discussion of the above experiments. First and foremost, the experiments in Sect. 9.1 suggest that Theorem 14, the main result of this paper, might be generalizable to q ∈ {2, 3} and potentially even higher q ∈ N, although unresolved issues with numerical instability for small step sizes prevent us from confidently asserting that these theoretical results would hold in practice for q ≥ 4. Moreover, we demonstrated the contraction rates of the posterior credible intervals from Theorem 15 and evidence for the necessity of Assumption 4 in Sects. 9.2 and 9.3. The asymptotics revealed by these experiments can be divided by the employed measurement model into three cases: the zero-noise case R ≡ 0, the permissible nonzero case R ≤ K_R h^q (under Assumption 4) and the nonpermissible case of R of lower order than h^q. First, if R ≡ 0, the diagrams in the left column of Fig. 2 reaffirm the h^{q+1} convergence reported for q ∈ {1, 2} in Schober et al. (2019, Figure 4) and extend them to q = 3 (see Sect. 10 for a discussion of why we expect the above global convergence proofs to be extensible to q ≥ 2). The contraction rates of the credible intervals, for q = 1, appear to be asymptotically underconfident in this case, as they contract faster than the error. This underconfidence is not surprising insofar as the posterior standard deviation is a worst-case bound for systems modeled by the prior, while the convergence proofs require smoothness of the solution of one order higher than sample paths from the prior. This is a typical result that highlights an aspect known to, but on the margins of, classical analysis: the class of problems for which the algorithm converges is rougher than the class on which convergence-order proofs operate. How to remedy such overly cautious UQ remains an open research question in PN as well as classical numerical analysis.
Secondly, in the case of R > 0, as permissible under Assumption 4, the convergence rates are slightly reduced compared to the case R ≡ 0, exhibiting convergence between h^q and h^{q+1}. The asymptotic underconfidence of the credible intervals, however, is either reduced or completely removed, as depicted in the right column of Fig. 3. Thirdly, in the final case of an impermissibly large R > 0, the h^q convergence speed guaranteed by Theorem 14 indeed does not necessarily hold anymore, as depicted in Fig. 4. Note, however, that even then the convergence rate is only slightly worse than h^q. The asymptotic UQ matches the observed global error in this case, as the parallel standard-deviation and h^1 curves in all but the upper left R ≡ 0 diagram show.
Overall, the experiments suggest that, in the absence of statistical noise on f, a zero-variance measurement model yields the best convergence rates of the posterior mean. This was perhaps to be expected as, in this case, R only models the inaccuracy from the truncation error, which ideally should be treated adaptively (Kersting and Hennig 2016, Section 2.2). The convergence rates of adaptive noise models should be assessed in future work. As the observed convergence rates in practice sometimes outperform the proved worst-case convergence rates, we believe that an average-case analysis of the filter in the spirit of Ritter (2000) may shed more light upon the expected practical performance. Furthermore, it appears that the UQ becomes asymptotically accurate as well as adaptive to the true numerical error as soon as R > 0 is large enough. This reinforces our hope that these algorithms will prove useful for IVPs when f is itself estimated (Hennig et al. 2015, Section 3(d)), thereby introducing an R > 0.

Conclusions
We presented a worst-case convergence rate analysis of the Gaussian ODE filter, comprising both local and global convergence rates. While local convergence rates of h^{q+1} were shown to hold for all q ∈ N, IBM and IOUP priors as well as any noise model R ≥ 0, our global convergence result is restricted to the case of q = 1, IBM prior and fixed noise model R ≡ K h^p with p ∈ [1, ∞]. While a restriction of the noise model seems inevitable, we believe that the other two restrictions can be lifted: In light of Theorem 8, global convergence rates for the IOUP prior might only require an additional assumption that ensures that all possible data sequences {y(nh); n = 1, . . . , T/h} (and thereby all possible qth-state sequences {m^(q)(nh); n = 0, . . . , T/h}) remain uniformly bounded (see the discussion in Remark 2). For the case of q ≥ 2, it seems plausible that a proof analogous to the presented one would already yield global convergence rates of order h^q, as suggested for q ∈ {2, 3} by the experiments in Sect. 9.1. The orders of the predictive credible intervals can also help to intuitively explain the threshold of p = 1 (or maybe more generally: p = q; see Fig. 2) below which the performance of the filter is not as good, due to Eqs. (45)-(49): According to Kersting and Hennig (2016, Equation (20)), the 'true' (push-forward) variance on y(t) given the predictive distribution N(m^-(t), P^-(t)) is equal to the integral of f f^⊤ with respect to N(m^-(t), P^-(t)), whose maximum over all time steps, by Eq. (67), has order O(h^{((p+1)/2) ∧ 1}) if f f^⊤ is globally Lipschitz, since P^-(t) enters the argument of the integrand f f^⊤, after a change of variable, only under a square root. Hence, the added 'statistical' noise R on the evaluation of f is of lower order than the accumulated 'numerical' variance P^-(t) (thereby preventing numerical convergence) if and only if p < 1. Maybe this, in the spirit of Hennig et al. (2015, Subsection 3(d)), can serve as a criterion for vector fields f that are too roughly approximated for a numerical solver to output a trustworthy result, even as h → 0.
Furthermore, the competitive practical performance of the filter, as numerically demonstrated in Schober et al. (2019, Section 5), might only be completely captured by an average-case analysis in the sense of Ritter (2000), where the average error is computed with respect to some distribution p(f), i.e., over a distribution of ODEs. To comprehend this idea, recall that the posterior filtering mean is the Bayes estimator with minimum mean squared error in linear dynamical systems with Gauss-Markov prior (as defined by the SDE Eq. (2)), i.e., when the data are not evaluations of f but real i.i.d. measurements, as well as in the special case of ẋ(t) = f(t), when the IVP simplifies to a quadrature problem; see Solak et al. (2003) and O'Hagan (1991, Section 2.2), respectively. In fact, the entire purpose of the update step is to correct the prediction in the (on average) correct direction, while a worst-case analysis must assume that it corrects in the worst possible direction in every step, which we execute by the application of the triangle inequality in Eq. (28), resulting in a worst-case upper bound that is the sum of the worst-case errors from prediction and update step. An analysis of the probabilities of 'good' vs. 'bad' updates might therefore pave the way for such an average-case analysis in the setting of this paper. Since, in practice, truncation errors of ODE solvers tend to be significantly smaller than the worst case, as mirrored by the experiments in Sect. 9, such an analysis might be useful for applications.
Lastly, we hope that the presented convergence analysis can lay the foundations for similar results for the novel ODE filters (extended KF, unscented KF, particle filter) introduced in Tronarp et al. (2019), and can advance the research on uncertainty-aware likelihoods for inverse problems by ODE filtering (Kersting et al. 2020, Section 3).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Derivation of A and Q
As derived in Särkkä (2006, Section 2.2.6) the solution of the SDE Eq. (2), i.e., . . .
where we omitted the index j for simplicity, is a Gauss-Markov process with mean m(t) and covariance matrix P(t), where the matrices A, Q ∈ R^{(q+1)×(q+1)} are explicitly defined by Eq. (77). Parts of the following calculation can be found in Magnani et al. (2017). If we choose c_0, . . . , c_{q−1} = 0 and c_q = −θ (for θ ≥ 0) in Eq. (74), the unique strong solution of the SDE is a q-times IOUP, if θ > 0, and a q-times IBM, if θ = 0; see, e.g., Karatzas and Shreve (1991, Chapter 5: Example 6.8). By Eq. (77), the stated form of A follows, and, analogously, so does the form of Q. If we insert Eq. (80) into Eq. (77), then we obtain, by the sparsity of L, a series expression for the entries of Q, and the dominated convergence theorem (with dominating function τ → e^{2θτ}) yields their closed form. Now, by extracting the first term and noticing that the rest of the series is in Θ(t^{2q+2−i−j}), the claim follows.
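The closed forms of A and Q can be checked numerically. The sketch below assumes the standard IBM convention (drift F with ones on the superdiagonal and diffusion vector L = e_q) and the closed forms A_{ij} = h^{j−i}/(j−i)! and Q_{ij} = σ² h^{2q+1−i−j} / ((2q+1−i−j)(q−i)!(q−j)!); Q is verified against a fine trapezoidal quadrature of its defining integral.

```python
import math
import numpy as np

q, sigma2, h = 2, 1.7, 0.5

# IBM drift and diffusion: F has ones on the superdiagonal, L = e_q.
F = np.diag(np.ones(q), k=1)
Lvec = np.zeros((q + 1, 1)); Lvec[q, 0] = 1.0

def expF(t):
    """e^{F t} via the (finite) Taylor series; F is nilpotent, F^(q+1) = 0."""
    return sum(np.linalg.matrix_power(F * t, k) / math.factorial(k)
               for k in range(q + 1))

# Assumed closed forms (consistent with Eq. (8) for q = 1):
A = np.array([[h**(j - i) / math.factorial(j - i) if j >= i else 0.0
               for j in range(q + 1)] for i in range(q + 1)])
Q = np.array([[sigma2 * h**(2 * q + 1 - i - j)
               / ((2 * q + 1 - i - j) * math.factorial(q - i) * math.factorial(q - j))
               for j in range(q + 1)] for i in range(q + 1)])

assert np.allclose(expF(h), A)

# Q(h) = int_0^h e^{F tau} L sigma^2 L^T e^{F^T tau} d tau, by trapezoidal rule:
taus = np.linspace(0.0, h, 20001)
vals = np.array([expF(t) @ Lvec @ Lvec.T @ expF(t).T for t in taus]) * sigma2
dt = taus[1] - taus[0]
Q_num = ((vals[0] + vals[-1]) / 2.0 + vals[1:-1].sum(axis=0)) * dt

assert np.allclose(Q_num, Q, atol=1e-8)
```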

B Extension to x with dependent dimensions
The algorithm in Sect. 2.2 employs a prior X with independent dimensions, as defined in Eq. (2). While this constitutes a loss of generality for our new theoretical results, which do not immediately carry over to the case of x with dependent dimensions, it is not a restriction to the class of models the algorithm can employ. To construct such a prior X, we first stack its dimensions into the random vector X = (X_0, . . . , X_{d−1}), choose symmetric positive semi-definite matrices K_x, K_ε ∈ R^{d×d}, and define, using the Kronecker product ⊗, its law according to the SDE with initial condition X(0) ∼ N(m(0), P(0)), mean m(0) ∈ R^{d(q+1)} and covariance matrix P(0) ∈ R^{d(q+1)×d(q+1)}, as well as an underlying d-dimensional Brownian motion B (independent of X(0)). Now, insertion of K_x ⊗ F and K_ε ⊗ L for F and L into Eq. (77) yields new predictive matrices Ã and Q̃. If we now choose K_x = I_d and K_ε = I_d, substitute Ã and Q̃ for A and Q in Eqs. (9) and (10), and use the d(q+1)-dimensional GP X from Eq. (84) with m(0) ∈ R^{d(q+1)} and P(0) ∈ R^{d(q+1)×d(q+1)} as a prior, we have equivalently defined the version of Gaussian ODE filtering with independent dimensions from Sect. 2.2. If we, however, choose different symmetric positive semi-definite matrices for K_x and K_ε, we introduce, via Ã and Q̃, a correlation in the development of the solution dimensions (x_0, . . . , x_{d−1}) as well as the error dimensions (ε_0, . . . , ε_{d−1}), respectively. Note that, while K_ε plays a similar role as C_h in Conrad et al. (2017, Assumption 1) in correlating the numerical errors, the matrix K_x additionally introduces a correlation of the numerical estimates, that is m, along the time axis. Even more flexible correlation models (over all modeled derivatives) can be employed by inserting arbitrary matrices (of the same dimensionality) for K_x ⊗ F and K_ε ⊗ L in Eq. (84), but such models seem hard to interpret. For future research, it would be interesting to examine whether such GP models with dependent dimensions are useful in practice.
There are first publications (Xiaoyue et al. 2018; Gessner et al. 2019) on this topic for integrals, but not yet for ODEs.
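The Kronecker-product construction can be sketched as follows (illustrative dimensions d = 2, q = 1; the coupling matrix K_x below is a hypothetical example, not from the paper):

```python
import numpy as np

# Sketch of the dependent-dimensions construction: drift K_x (x) F and
# diffusion K_eps (x) L for d = 2, q = 1.  With K_x = K_eps = I_d this
# reduces exactly to d independent copies of the one-dimensional model.
d, q = 2, 1
F = np.diag(np.ones(q), k=1)         # one-dimensional drift matrix
L = np.zeros((q + 1, 1)); L[q] = 1   # one-dimensional diffusion vector

F_tilde = np.kron(np.eye(d), F)      # K_x = I_d: block-diagonal
L_tilde = np.kron(np.eye(d), L)

# block-diagonal structure, i.e., d uncoupled copies:
assert np.allclose(F_tilde[:q + 1, q + 1:], 0.0)

# a (hypothetical) non-identity K_x couples the solution dimensions:
K_x = np.array([[1.0, 0.5], [0.5, 1.0]])
F_dep = np.kron(K_x, F)
assert not np.allclose(F_dep[:q + 1, q + 1:], 0.0)
```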

C Illustrative example
To illustrate the algorithm defined in Sect. 2.2, we apply it to a special case of the Riccati equation (Davis 1962, p. 73), with step size h = 0.1, measurement noise R = 0.0 (for simplicity) as well as prior hyperparameters q = 1, σ² = 10.0 and c_i = 0 for all i ∈ [q + 1] (recall Eq. (2)), i.e., with a 1-times integrated Brownian motion prior whose drift and diffusion matrices are given by Eq. (8). As the ODE Eq. (85) is one-dimensional (i.e., d = 1), the dimension index j ∈ [d] is omitted in this section. Since the initial value and derivative are certain at x(0) = 1 and ẋ(0) = −1/2, we set m(0) = (1, −1/2) and P(0) = 0 ∈ R^{2×2} for the initial filtering mean and covariance matrix. Now, the Gaussian ODE filter computes the first integration step by executing the prediction step Eqs. (9) and (10). Based on this prediction, the data is then generated with variance R = 0.0. In the subsequent update step Eqs. (9) and (11) to (13), a Bayesian conditioning of the predictive distribution Eqs. (88) and (89) on this data is executed. The resulting state misalignment (Eq. (25)) is strictly positive, which confirms the exposition on the possibility of δ^(i) > 0 from Sect. 4. Note that δ tends to increase with R; e.g., if R = 1.0 in the above example, then δ^(1)(h) ≈ 0.03324.

D Experiment: global convergence of state misalignments

Figure 5 depicts the global convergence of the state misalignment δ^(1)(T) in the above example Eq. (85), as detailed in Appendix C, for q ∈ {1, 2, 3}. The plotting is analogous to Fig. 2. The resulting convergence rates of h^{q+1} confirm Lemma 11 and suggest that it may also be generalizable to q ∈ {2, 3, . . . }.
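The single integration step of Appendix C can be sketched as follows. Since the Riccati right-hand side is elided above, the code uses a hypothetical stand-in f(x) = −x²/2, chosen only so that f(1) = −1/2 matches the stated initial derivative; the first two structural assertions (P⁻(h) = Q(h) from P(0) = 0, and β^(1) = 1 for R = 0) do not depend on this choice.

```python
import numpy as np

h, sigma2, R = 0.1, 10.0, 0.0
A = np.array([[1.0, h], [0.0, 1.0]])                          # Eq. (8), q = 1 IBM
Q = sigma2 * np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])

# Hypothetical stand-in for the (elided) Riccati right-hand side;
# chosen only so that f(1) = -1/2 matches x(0) = 1, x'(0) = -1/2:
f = lambda x: -0.5 * x**2

m, P = np.array([1.0, -0.5]), np.zeros((2, 2))

# Prediction step, Eqs. (9)-(10):
m_pred, P_pred = A @ m, A @ P @ A.T + Q
assert np.allclose(P_pred, Q)          # P(0) = 0 implies P^-(h) = Q(h)

# Data generation and update step, Eqs. (11)-(13):
y = f(m_pred[0])
S = P_pred[1, 1] + R
beta = P_pred[:, 1] / S
m_new = m_pred + beta * (y - m_pred[1])
assert np.isclose(beta[1], 1.0)        # R = 0: beta^(1) = 1, so m^(1)(h) = y(h)
assert np.isclose(m_new[1], y)

# State misalignment, Eq. (25): delta^(1)(h) = |f(m^(0)(h)) - m^(1)(h)| > 0
delta1 = abs(f(m_new[0]) - m_new[1])
assert delta1 > 0.0
```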
where I_1, I_2, and I_3 are defined and bounded as follows, using Assumption 1 and Lemma 1, and inserting Eqs. (101)

G Proof of Lemma 9
Proof Let ũ_0 = u* and ũ_n = T_n(ũ_{n−1}), for n ∈ N. Then, by the triangle inequality, d(x_n, u*) is bounded by a sum whose last summand goes to zero by assumption. Hence, it remains to show that lim_{n→∞} a_n = 0. The L̄-Lipschitz continuity of T_n and the triangle inequality yield that a_n = d(T_n(u_n), T_n(ũ_{n−1})) ≤ L̄ (d(u_n, u_{n−1}) + d(u_{n−1}, ũ_{n−1})). Since the convergent sequence u_n is in particular a Cauchy sequence, lim_{m→∞} b_m = 0 and, hence, 0 ≤ lim_{n→∞} a_n = lim sup_{n→∞} a_n ≤ 0. Hence, lim_{n→∞} a_n = 0.