Abstract
Homotopy approaches to Bayesian inference have found widespread use especially if the Kullback–Leibler divergence between the prior and the posterior distribution is large. Here we extend one of these homotopy approaches to include an underlying stochastic diffusion process. The underlying mathematical problem is closely related to the Schrödinger bridge problem for given marginal distributions. We demonstrate that the proposed homotopy approach provides a computationally tractable approximation to the underlying bridge problem. In particular, our implementation builds upon the widely used ensemble Kalman filter methodology and extends it to Schrödinger bridge problems within the context of sequential data assimilation.
1 Introduction
Sequential data assimilation interlaces dynamic processes with intermittent partial state observations in order to provide reliable state estimates and their uncertainties. A wide array of numerical methods has been proposed to tackle this problem computationally. Popular methods include sequential Monte Carlo, variational inference, and various ensemble Kalman filter formulations [5, 8]. These methods can encounter difficulties whenever the predictive distribution is incompatible with the incoming data; in other words, whenever the distance between the prior, as provided by the underlying stochastic process, and the data-informed posterior distribution is large. It has long been realized that this challenge can be partially circumvented by altering the underlying stochastic process through appropriate control terms or modified proposal densities [5, 23, 16, 9]. Recently, the connection between devising such control terms and Schrödinger bridge problems [4] has been made explicit [16]. However, Schrödinger bridge problems are notoriously difficult to solve numerically. The key contribution of this paper is to provide a computationally tractable (sub-optimal) solution via a novel extension of established homotopy approaches [7, 14]. Similar to related homotopy approaches for purely Bayesian inference, the solution of certain partial differential equations (PDE) is required in order to find the desired control terms [14, 25]. In line with standard ensemble Kalman filter (EnKF) methodologies, we approximate these PDEs via a constant gain approximation [21]. There are also alternative approaches to sequential data assimilation or inference which utilize ideas from optimal transportation; see for example [15, 6, 20, 3].
The paper is structured as follows. The mathematical formulation of the data assimilation problem, as considered in this paper, is laid out in Sect. 2. The standard optimal control and Schrödinger bridge approach to data assimilation is briefly summarized in Sect. 3, and the novel control formulation based on a homotopy formulation is introduced in Sect. 4. A practical implementation based on the EnKF methodology is proposed in Sect. 5. A series of increasingly complex data assimilation problems is considered in Sect. 6 in order to demonstrate the feasibility of the proposed methodologies. Conclusions are presented in Sect. 7. Detailed mathematical derivations can be found in Appendices 1 and 2, respectively.
2 Problem Formulation and Background
We consider drift diffusion processes given by a stochastic differential equation (SDE)

\[ \mathrm{d}X_t = f(X_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t, \qquad (1) \]

where \(X_t : \Omega \to \mathbb {R}^{d_x}\), \(f:\mathbb {R}^{d_x} \to \mathbb {R}^{d_x}\), \(\sigma \in \mathbb {R}_{\geq 0}\), and \(W_t:\Omega \to \mathbb {R}^{d_x}\) denotes \(d_x\)-dimensional standard Brownian motion [12, 13].
Assuming the law of \(X_t\) is absolutely continuous w.r.t. Lebesgue measure with density \(\pi _t\), this leads to the Fokker–Planck equation [13]

\[ \partial_t \pi_t = -\nabla \cdot \left( \pi_t f \right) + \frac{\sigma^2}{2} \Delta \pi_t. \qquad (2) \]
The SDE (1) can be replaced by the mean field ODE

\[ \frac{\mathrm{d}}{\mathrm{d}t} \tilde X_t = f(\tilde X_t) - \frac{\sigma^2}{2} \nabla \log \tilde \pi_t(\tilde X_t), \qquad (3) \]
where \(\tilde \pi _t\) denotes the law of \(\tilde X_t\). Provided \(\tilde \pi _0 = \pi _0\), it holds that \(\tilde \pi _t = \pi _t\) for all \(t> 0\). Note that the evolution of the random variable \(\tilde X_t\) is entirely deterministic subject to random initial conditions \(\tilde X_0 \sim \pi _0\).
At time \(t = T>0\), we have observations of the system according to

\[ y_T = h(X_T) + \nu \qquad (4) \]
from which we wish to infer the unknown state \(X_T\). Here \(h:\mathbb {R}^{d_x} \to \mathbb {R}^{d_y}\) denotes the forward map and \(\nu \sim \mathcal {N}(0,R)\) is \(d_y\)-dimensional Gaussian noise with covariance matrix \(R \in \mathbb {R}^{d_y \times d_y}\).
Let \(L:\mathbb {R}^{d_x} \to \mathbb {R}\) denote the corresponding negative log-likelihood function. Since \(\nu \) is Gaussian, it is given by

\[ L(x) = \frac{1}{2} \left( h(x) - y_T \right)^\top R^{-1} \left( h(x) - y_T \right) \qquad (5) \]
up to an irrelevant constant. The observations are combined with the predictive density \(\pi _T\) at time \(t=T\) according to Bayes’ theorem,

\[ \pi_T^{\mathrm{a}}(x) = \frac{e^{-L(x)} \, \pi_T(x)}{\int e^{-L(x')} \, \pi_T(x') \, \mathrm{d}x'}. \qquad (6) \]
The process of transforming the random variable \(X_T \sim \pi _T\) into a random variable \(X_T^a \sim \pi _T^{\mathrm {a}}\) is called data assimilation in the context of dynamical systems and stochastic processes [10, 17, 8].
Since performing data assimilation can be difficult if the Kullback–Leibler divergence

\[ \mathrm{KL}\!\left(\pi_T^{\mathrm{a}} \,\middle\|\, \pi_T\right) = \int \pi_T^{\mathrm{a}}(x) \log \frac{\pi_T^{\mathrm{a}}(x)}{\pi_T(x)} \, \mathrm{d}x, \qquad (7) \]
also called the relative entropy [13], between the prior \(\pi _T\) and posterior \(\pi _T^a\) is large and/or if the involved distributions are strongly non-Gaussian [19, 1], we propose to construct a new SDE with state process \(X_t^{\mathrm {h}}\) such that \(X_0^{\mathrm {h}} \sim \pi _0\) and \(X_T^{\mathrm {h}} \sim \pi _T^a\). In other words, we are looking for a stochastic process (bridge) with initial density \(\pi _0\) and final density \(\pi ^{\mathrm {a}}_T\). The problem of finding the optimal process (in the sense of minimal Kullback–Leibler divergence) is known as the Schrödinger bridge problem [4].
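To see why a large divergence (7) is problematic in practice, consider plain importance sampling from the prior: as prior and posterior separate, the effective sample size collapses. The following sketch uses purely illustrative numbers (a standard Gaussian prior and a scalar Gaussian likelihood; none of the values are taken from the examples below):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 1000
# samples from an illustrative standard Gaussian prior
x = rng.standard_normal(M)

def ess(y, R):
    """Effective sample size 1 / sum(w_i^2) after reweighting the prior
    samples with a Gaussian likelihood exp(-L), L(x) = (x - y)^2 / (2 R)."""
    logw = -(x - y) ** 2 / (2.0 * R)
    w = np.exp(logw - logw.max())   # stabilized exponentiation
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

# nearby observation, large noise: prior and posterior are close
print(ess(y=0.5, R=1.0))   # ESS remains a sizeable fraction of M
# distant observation, small noise: weights collapse onto few particles
print(ess(y=4.0, R=0.01))  # ESS close to one
```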
3 Schrödinger Bridge Approach
The Bayesian adjustment (6) at final time \(t=T\) leads in fact to an adjustment over the whole solution space of the underlying diffusion process described by (1). Let us denote the so-called smoothing distribution by \(\pi _t^{\mathrm {a}}\), \(t \in [0,T]\) [18, 5]. It is well established that these marginal distributions can be generated from a controlled SDE

\[ \mathrm{d}X_t^{\mathrm{a}} = \left( f(X_t^{\mathrm{a}}) + g_t^{\mathrm{a}}(X_t^{\mathrm{a}}) \right) \mathrm{d}t + \sigma\,\mathrm{d}W_t \qquad (8) \]
for appropriate control \(g_t^{\mathrm {a}}:\mathbb {R}^{d_x} \to \mathbb {R}^{d_x}\) such that \(X_0^{\mathrm {a}}\sim \pi _0^{\mathrm {a}}\) implies \(X_t^{\mathrm {a}}\sim \pi _t^{\mathrm {a}}\) for all \(t>0\). It is also well known that finding a suitable \(g_t^{\mathrm {a}}\) can be formulated as an optimal control problem which in turn is closely related to the backward Kolmogorov equation [13, 16]. Formulations related to (8) have also been used in the context of sequential Monte Carlo methods [5].
As proposed in [16], an alternative perspective on sequential data assimilation is provided by Schrödinger bridges. Given two marginal distributions \(q_0\) and \(q_T\) and a stochastic process \(X_t\) (referred to as the reference process), a Schrödinger bridge is another stochastic process \(\hat {X}_t\) such that \(\hat {X}_0 \sim q_0\), \(\hat {X}_T \sim q_T\) and the Kullback–Leibler divergence between the processes \(\{\hat {X}_t\}_{t\in [0,T]}\) and \(\{X_t\}_{t\in [0,T]}\) is minimal. Specialised to our problem with a single data assimilation cycle, this means the marginals are the initial and posterior densities, i.e. \(q_0 = \pi _0\) and \(q_T = \pi ^{\mathrm {a}}_T\), and the reference process is the solution to (1). The solution to the associated Schrödinger bridge problem is again of the form (8) with modified control term denoted by \(g_t^{\mathrm {SB}}(x)\).
A Schrödinger bridge is thus the optimal coupling as measured by the Kullback–Leibler divergence to the underlying reference process. Unfortunately Schrödinger bridges lead to boundary value problems in the space of probability measures and the required control term \(g_t^{\mathrm {SB}}\) seems rather difficult to compute in practice. In addition to the computational complexity of solving nonlinear Schrödinger bridge problems, the target distribution \(\pi _T^a\) is implicitly defined in the setting of data assimilation. The next section offers a solution to both of these issues. We point to [23] for a discussion of alternative approaches which introduce appropriate control terms into data assimilation procedures.
4 Homotopy Induced Dynamic Coupling
Since Schrödinger bridges are computationally challenging, we ask whether a suboptimal but cheaper approach might also be feasible. Indeed, in the context of data assimilation a non-optimal coupling can be found via a homotopy between the initial and target distributions as follows. Let

\[ \pi_t^{\mathrm{h}}(x) = \frac{1}{Z_t} e^{-\frac{t}{T} L(x)} \pi_t(x), \quad t \in [0,T], \qquad (9) \]
denote the homotopy in question, with \(Z_t = \int e^{-\frac {t}{T}L(x)} \pi _t(x) \mathrm {d}x\) the time dependent normalization constant. It clearly holds that \(\pi ^{\mathrm {h}}_0 = \pi _0\) and \(\pi ^{\mathrm {h}}_T = \pi _T^{\mathrm {a}}\). Note that the scaling \(t \mapsto e^{-\frac {t}{T}L}\) was chosen for its simplicity and follows previous work on Bayesian inference problems [7, 14]. Finding better homotopies or systematic ways of constructing one could be an interesting direction for future research.
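The endpoint properties \(\pi ^{\mathrm {h}}_0 = \pi _0\) and \(\pi ^{\mathrm {h}}_T = \pi _T^{\mathrm {a}}\) can be checked numerically on a grid. The sketch below freezes the underlying density at a fixed \(\pi_T\) (a static single-time check with an illustrative prior and likelihood, not a simulation of the evolving bridge):

```python
import numpy as np

# grid-based check that the homotopy density at t = T equals the
# Bayes posterior (6); prior and likelihood choices are illustrative
x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]
T = 1.0
y, R = 1.0, 0.1

pi_T = np.exp(-x**2 / 4.0)            # illustrative density at time T
pi_T /= pi_T.sum() * dx               # normalize on the grid
L = (x - y) ** 2 / (2.0 * R)          # negative log-likelihood (5)

def pi_h(t):
    """Homotopy density (9) at time t, evaluated with the fixed pi_T
    as the underlying density (static single-time check)."""
    rho = np.exp(-t / T * L) * pi_T
    return rho / (rho.sum() * dx)     # divide by Z_t on the grid

posterior = np.exp(-L) * pi_T
posterior /= posterior.sum() * dx

print(np.max(np.abs(pi_h(0.0) - pi_T)))      # ~0: homotopy starts at pi_T
print(np.max(np.abs(pi_h(T) - posterior)))   # ~0: homotopy ends at posterior
```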
We can then reason backwards from the Fokker–Planck equation of \(\pi ^h_t\) to conclude that if it is the density of a random variable \(X^h_t\), then that random variable must satisfy the modified SDE

\[ \mathrm{d}X_t^{\mathrm{h}} = \left( f(X_t^{\mathrm{h}}) + g_t(X_t^{\mathrm{h}}) \right) \mathrm{d}t + \sigma \,\mathrm{d}W_t, \qquad (10) \]
where \(g_t\) is a solution to the PDE
The derivation of (11) can be found in Appendix 1.
Note that (10) constitutes a mean field model since \(g_t\) depends on the distribution \(\pi _t^{\mathrm {h}}\) of \(X_t^{\mathrm {h}}\). We also wish to point out that
provides a solution to the associated coupling problem. The control term \(\hat g_t^{\mathrm {SB}}\) is however non-optimal in the sense of the Schrödinger bridge problem since it does not minimise the Kullback–Leibler divergence.
Since (11) is linear in \(g_t\), we can decompose (11) into a set of simpler equations \(\nabla \cdot \left ( \pi ^{\mathrm {h}}_t g^i_t \right ) = \pi ^{\mathrm {h}}_t (k^i - \mathbb {E} k^i)\) such that the \(k^i\) add up to the right-hand side of (11). In order to maintain \(\int \nabla \cdot \left ( \pi ^{\mathrm {h}}_t g^i_t \right ) \mathrm{d}x = 0\) for the individual \(g^i_t\), we make use of the fact that the terms in (11) are of the form \(\pi ^{\mathrm {h}}_t \left ( k - \mathbb {E} k \right )\). Separating the terms, we obtain the following equations, the sum of whose solutions solves (11):
Note that
which can be used to avoid the computation of \(\nabla \log \pi _t^{\mathrm {h}}\) (with \(\Delta = \nabla \cdot \nabla \) the Laplacian operator) in (13c). Thus the controlled SDE (10) can be replaced by
where \(\hat g_t\) is a solution to
Furthermore, if \(\Delta L\) is a constant (as a function of x) or small in comparison to the other contributions in (16), then (16) simplifies further. In particular, this is the case if the forward map is linear, that is, \(h(x) = Hx\).
Building upon the mean field ODE (3), one obtains the equivalent controlled mean field ODE system
with \(g_t\) defined as before. This mean field formulation again requires knowledge (or approximation) of \(\nabla \log \tilde \pi _t^{\mathrm {h}}\). A Gaussian approximation might be sufficient in certain circumstances, giving rise to

\[ \nabla \log \tilde \pi _t^{\mathrm {h}}(x) \approx -(\tilde \Sigma _t^{\mathrm {h}})^{-1} \left( x - \tilde \mu _t^{\mathrm {h}} \right), \]
where \(\tilde \mu _t^{\mathrm {h}}\) denotes the mean of \(\tilde X_t^{\mathrm {h}}\) and \(\tilde \Sigma _t^{\mathrm {h}}\) its covariance matrix.
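In an ensemble implementation, this Gaussian approximation amounts to fitting mean and covariance to the particles and evaluating \(-\tilde \Sigma ^{-1}(x - \tilde \mu )\) at each member; a minimal sketch (function name and data are illustrative):

```python
import numpy as np

def gaussian_score(ensemble):
    """Approximate grad log pi at each ensemble member under a Gaussian
    fit, i.e. -Sigma^{-1} (x - mu) with empirical mean and covariance."""
    mu = ensemble.mean(axis=0)
    # rowvar=False: rows are samples, columns are state components
    Sigma = np.atleast_2d(np.cov(ensemble, rowvar=False))
    return -np.linalg.solve(Sigma, (ensemble - mu).T).T

rng = np.random.default_rng(0)
X = rng.multivariate_normal([1.0, 3.0], 0.02 * np.eye(2), size=500)
S = gaussian_score(X)
print(S.shape)   # one score vector per particle: (500, 2)
```

By construction the score estimates sum to zero over the ensemble, since the deviations from the empirical mean do.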
5 Numerical Implementation
No analytic solution to (11) is known and we thus have to resort to approximations. We note that a similar PDE arises in the computation of the gain in the feedback particle filter [24] and one could use the diffusion map based approximation [22] for the problem at hand. This method also transforms the PDE into a Poisson equation, which is then translated into an equivalent integral equation, the semi-group form of the Poisson equation. As the name suggests, the integral equation makes use of the generator of a semi-group, which can be approximated by diffusion maps. Here we instead propose to follow the constant gain approximation first introduced in the EnKF methodology [21].
5.1 Ensemble Kalman Mean Field Approximation
Let us assume that \(\Delta L \approx const\) in (16). Then we only need to deal with the modified negative log likelihood function
with \(\Delta t\) being the time-step also used later for time-stepping the evolution equations (10) or (15), respectively. Since L is given by (5), we define the modified forward map
and thus
Following the standard EnKF methodology for quadratic loss functions, this suggests approximating the drift function \(\hat g_t\) in (15) as follows:
Here we have introduced the notation \(\pi _t^{\mathrm {h}}[l]\) to denote the expectation value \(\mathbb {E} l\) of a function \(l(x)\) under the PDF \(\pi _t^{\mathrm {h}}\). Furthermore, \(\Sigma _t^{xh}\) denotes the covariance matrix between \(x\) and \(h(x)\) under the PDF \(\pi _t^{\mathrm {h}}\), and so on. The derivation of (22) can be found in Appendix 2.
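To make the constant gain idea concrete, the following generic sketch computes the empirical state/forward-map covariance from an ensemble and evaluates a standard deterministic EnKF-type drift. It illustrates the kind of approximation behind (22) rather than transcribing (22) itself; all names and the innovation form are chosen for illustration:

```python
import numpy as np

def enkf_drift(X, h, y, Rinv):
    """Constant-gain (EnKF-type) drift applied to every particle:
    g(x_i) = -Sigma_xh Rinv ((h(x_i) + h_bar)/2 - y),
    with Sigma_xh the empirical state/forward-map covariance."""
    M = X.shape[0]
    Hx = np.array([h(x) for x in X])            # (M, d_y)
    xbar, hbar = X.mean(axis=0), Hx.mean(axis=0)
    Sigma_xh = (X - xbar).T @ (Hx - hbar) / M   # (d_x, d_y)
    innov = 0.5 * (Hx + hbar) - y               # (M, d_y)
    return -innov @ (Sigma_xh @ Rinv).T         # (M, d_x)

# usage with a linear forward map h(x) = x[0] (illustrative values)
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
g = enkf_drift(X, lambda x: np.atleast_1d(x[0]), np.array([2.5]),
               np.array([[100.0]]))
print(g.shape)   # (100, 2): one drift vector per particle
```

Note that only empirical means and covariances of the ensemble enter, which is exactly what makes constant-gain approximations derivative-free.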
5.2 Particle Approximation and Time-Stepping
The controlled mean field equations (15) can be implemented numerically by the standard Monte Carlo Ansatz, that is, M particles \(X_t^{(i)}\) are propagated according to
for \(i=1,\ldots ,M\). The required expectation values in \(\hat g_t^{\mathrm {KF}}\) are evaluated with respect to the empirical measure

\[ \hat \pi_t^{\mathrm{h}} = \frac{1}{M} \sum_{i=1}^{M} \delta_{X_t^{(i)}}. \]
The interacting particle system can be time-stepped using an appropriate adaptation of (61) from Appendix 2. The computation of gradients can be avoided by applying the statistical linearisation (60).
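Schematically, the interacting particle loop can be sketched with a plain Euler–Maruyama discretisation (the robust scheme (61) is preferable in practice); the drift and control passed in below are placeholders for illustration, not the control term of this paper:

```python
import numpy as np

def propagate(X0, f, g, sigma, T=1.0, dt=0.005, rng=None):
    """Euler-Maruyama time-stepping of a controlled interacting particle
    system: each particle moves with drift f + g, where the control g may
    depend on the whole ensemble (mean field coupling)."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = X0.copy()
    n_steps = int(round(T / dt))
    for k in range(n_steps):
        t = k * dt
        drift = f(X) + g(t, X)   # g sees the full ensemble
        noise = sigma * np.sqrt(dt) * rng.standard_normal(X.shape)
        X += dt * drift + noise
    return X

# illustrative run: zero drift, a toy control nudging the ensemble
# mean towards y = 1 (noise switched off for reproducibility)
M = 1000
X0 = np.random.default_rng(1).standard_normal((M, 1))
XT = propagate(X0, f=lambda X: 0.0, g=lambda t, X: 1.0 - X.mean(axis=0),
               sigma=0.0)
print(XT.mean())   # ensemble mean has drifted from ~0 towards 1
```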
6 Examples
We now discuss a sequence of increasingly complex examples. The purpose is both to illuminate certain aspects of the proposed control terms and to indicate the computational advantages of the proposed methodology. All examples will be based on linear forward maps \(h(x) = Hx\) and, therefore, \(\Delta L\) is constant and can be ignored.
6.1 Pure Diffusion Processes
We set the drift \(f\) to zero in (1) and assume Gaussian initial conditions. Then the control term (22) gives rise to the mean-field SDE
in the limit \(\Delta t\to 0\). We note that \(X_t^{\mathrm {h}} \sim \pi _t^{\mathrm {h}}\) will remain Gaussian for all times and we denote the mean by \(\mu _t^{\mathrm {h}}\) and the covariance matrix by \(\Sigma _t^{\mathrm {h}}\). Hence, it holds that \(\Sigma _t^{xx} = \Sigma _t^{\mathrm {h}}\) and \(\pi _t^{\mathrm {h}}[x] = \mu _t^{\mathrm {h}}\).
Note that the additional drift term in (25a) pulls \(X_t^{\mathrm {h}}\) towards the observation \(y_T\) regardless of the value of \(\Sigma _t^{\mathrm {h}}\). Note also that the drift term in (25b) can be either attractive or repulsive with respect to the observation \(y_T\), depending on the eigenvalues of
The strength of this drift term is moderated by the covariance matrix \(\Sigma _t^{\mathrm {h}}\).
We consider a one-dimensional problem with \(R = 0.01\), \(\sigma = 1\), \(H = 1\), \(y_T = 1\) and \(T=1\). The initial conditions are Gaussian with mean \(\mu _0 = 0\) and variance \(\Sigma _0 = 1\). It follows that \(\pi _1\) is Gaussian with mean \(\mu _1 = 0\) and variance \(\Sigma _1 = 2\) and the resulting Gaussian posterior \(\pi _1^a\) has mean and variance given by
with Kalman gain \(K = 2/(2+0.01) \approx 0.9950\).
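For this Gaussian setup, the homotopy marginals are available in closed form directly from (9): \(\pi_t\) is \(\mathcal{N}(0, 1 + \sigma^2 t)\), and tilting by \(e^{-tL/T}\) yields a Gaussian whose precision and mean follow from completing the square. The following check confirms the endpoint values via Gaussian conjugacy (it does not simulate the mean field equations (25)):

```python
import numpy as np

# scalar pure diffusion example: mu_0 = 0, Sigma_0 = 1, sigma = 1,
# R = 0.01, y_T = 1, T = 1 (values from the text)
R, y, T = 0.01, 1.0, 1.0

def homotopy_moments(t):
    """Mean and variance of pi_t^h = exp(-(t/T) L) pi_t / Z_t with
    pi_t = N(0, 1 + t) and L(x) = (x - y)^2 / (2 R)."""
    var_prior = 1.0 + t                   # Sigma_0 + sigma^2 t
    prec = 1.0 / var_prior + t / (T * R)  # precision of tilted Gaussian
    var = 1.0 / prec
    mean = var * (t / (T * R)) * y        # precision-weighted mean
    return mean, var

m0, v0 = homotopy_moments(0.0)   # (0, 1): the initial condition
mT, vT = homotopy_moments(T)     # Kalman posterior at final time
K = 2.0 / (2.0 + R)              # Kalman gain, approx 0.995
print(m0, v0)
print(mT, K * y)                 # both approx 0.995
print(vT, (1.0 - K) * 2.0)       # both approx 0.00995
```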
In Fig. 1 one can find the time evolution of the mean and the variance under the mean field equations (25). The early impact of the data-driven control term on the dynamics is perhaps surprising and quite opposite to the standard sequential approach to data assimilation, where one first propagates to final time and only then adjusts according to the available data. It is also worth noticing that the corresponding \(\Omega _t\) changes sign at \(t_c = \sqrt {2}/20\), implying that the drift term in (25) has a destabilizing effect on the dynamics for \(t>t_c\).
Fig. 1 Time evolution of the mean \(\mu _t^h\) and the variance \(\Sigma _t^h\) under the mean field equations (25). Their values at final time agree with the posterior values provided by (27)
6.2 Purely Deterministic Processes
We now set \(\sigma =0\) in (1). We obtain from (22) the mean field ODE system
with
These equations can be expanded giving rise to
upon ignoring terms of order \(\mathcal {O}(\Delta t)\). Unless the drift function f is linear, these mean field equations provide only an approximation to the controlled mean field equations (15).
6.3 Linear Gaussian Case
It is instructive to investigate the linear case

\[ f(x) = Fx + b \qquad (31) \]

in more detail. Everything remains Gaussian provided \(X_0^h\) is Gaussian distributed, that is, \(\pi _0(x) = \mathcal {N}(x; \mu _0, \Sigma _0)\). Under these conditions the densities \(\pi _t\) and \(\pi ^{\mathrm {h}}_t\) will also be Gaussian; we write \(\pi ^{\mathrm {h}}_t(x) = \mathcal {N}(x; \mu ^{\mathrm {h}}_t, \Sigma ^{\mathrm {h}}_t)\). The associated mean field equations follow from Appendix 2 and are given by
with
A qualitative discussion can be performed in the scalar case, that is \(d_x = 1\), \(H = 1\), \(\sigma = 1\), \(b=0\), \(T=1\) and \(F = \lambda \). One finds that the control terms involving F stabilize the dynamics whenever \(\lambda >0\). This observation is in line with the fact that the data is crucial only if the dynamics in \(X_t\) is unstable, that is, \(\lambda > 0\).
We consider a two dimensional diffusion process with state variable \(x = (x_1,x_2)^\top \) and linear drift term (31) given by
\(b = 0\), and diffusion constant \(\sigma = 0.1I\). The forward operator is \(H= \begin {pmatrix} 1 & 0 \end {pmatrix} \) and the variance of the noise is \(R=0.01\). The initial distribution is \(\pi _0 = \mathcal {N}( (1, 3), 0.02I )\). The observed value at time \(T=1\) is set to \(y_T = 2.5\).
The posterior mean takes values \(\mu _1^{\mathrm {a}} \approx 2.25\) and \(\mu _2^{\mathrm {a}} \approx 1.50\), while the posterior covariance matrix becomes
Numerical results can be found in Fig. 2. The impact of the control term on the linear diffusion process can clearly be seen and is most prominent on the observed \(x_1\) component of the process. The final values of the controlled process agree well with their posterior counterparts.
6.4 Nonlinear Diffusion Example
We consider a two-dimensional problem and denote the state variable by \(x = (x_1,x_2)^\top \). The drift term is given by
with parameters \(\lambda _1 = 2000\), \(\lambda _2 = 5\), and \(\beta = 1/5\). The diffusion constant is set to \(\sigma = 1\). The choice of the potential \(V(x)\) has two effects: (1) there is a relatively high barrier for particles to pass from positive to negative \(x_1\)-values and vice versa; (2) the dynamics stay close to the parabola \(x_2 = 2- \beta x_1^2\).
The initial distribution is obtained by sampling \(x_1\) from a Gaussian with mean \(1.5\) and variance \(0.0625\). The \(x_2\) component is obtained from the relation
We observe the first component \(x_1\) of the state vector at time \(T=1\) with measurement error variance \(R = 0.01\). The observed value is set to \(y_T = -1.5\). Due to the tiny observation error the posterior is centred sharply about the observed value. Furthermore, recall that the dynamics is essentially slaved to the parabola \(x_2 = 2 - \beta x_1^2\) which makes the inference problem strongly nonlinear.
All particle simulations are run with an ensemble size of \(M=1000\). Essentially identical results are obtained for \(M=100\). Smaller ensemble sizes lead to numerical instabilities.
In Fig. 3, one can find the particle distribution at time \(t=1\), which constitutes the prior distribution for the associated Bayesian inference problem. It is obvious that a particle filter would fail to recover the posterior distribution, which is sharply centred about the observed value. We found that increasing the ensemble size to \(M = 10{,}000\) allows a particle filter to recover the posterior distribution; but the effective ensemble size still drops dramatically. The approximation provided by the EnKF is also displayed. The EnKF fails to recover the posterior due to its inherent linear regression ansatz, which is inappropriate for this strongly nonlinear inference problem even in the limit of infinite ensemble size, \(M\to \infty \).
In Fig. 4, the results from the controlled mean field formulation are displayed. It can be concluded that the posterior distribution is well approximated despite the constant gain approximation made in order to formulate the control term \(\hat g_t^{\mathrm {KF}}\) in (22).
6.5 Lorenz-63 Example
All examples so far have considered a single data assimilation cycle only. We now perform a proper sequential data assimilation experiment for the standard Lorenz-63 model [11]

\[ \frac{\mathrm{d}}{\mathrm{d}t} X_t = f(X_t), \qquad (38) \]

where \(X_t:\Omega \to \mathbb {R}^3\) and

\[ f(x) = \begin{pmatrix} a\,(x_2 - x_1) \\ x_1 (b - x_3) - x_2 \\ x_1 x_2 - c\, x_3 \end{pmatrix} \qquad (39) \]

with parameters \(a=10\), \(b=28\) and \(c=8/3\).
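A reference trajectory of this kind can be generated with a standard Runge–Kutta discretisation of the Lorenz-63 vector field; in the sketch below the initial condition is chosen purely for illustration:

```python
import numpy as np

A, B, C = 10.0, 28.0, 8.0 / 3.0   # a, b, c from the text

def f(x):
    """Standard Lorenz-63 vector field."""
    return np.array([A * (x[1] - x[0]),
                     x[0] * (B - x[2]) - x[1],
                     x[0] * x[1] - C * x[2]])

def rk4_step(x, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.005                        # step-size from the text
x = np.array([1.0, 1.0, 1.0])     # illustrative initial condition
traj = [x]
for _ in range(2000):             # ten time units
    x = rk4_step(x, dt)
    traj.append(x)
traj = np.asarray(traj)
print(traj.shape)                 # (2001, 3)
```

The trajectory remains bounded on the Lorenz attractor, which makes it a convenient truth signal for the twin experiments described next.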
In order to obtain a reference solution for \(t\ge 0\), the ODE (38) is solved numerically with step-size \(\Delta t = 0.005\) from a fixed initial condition.
Scalar-valued observations are generated from the reference solution every \(\Delta t_{\mathrm {obs}}>0\) units of time using the forward model

\[ y_{t_n} = H X_{t_n} + \nu_n \]
with measurement errors \(\nu _n \sim \mathrm {N}(0,1)\) and forward map \(H = (1 \,0 \,0) \in \mathbb {R}^{1\times 3}\). We use \(\Delta t_{\mathrm {obs}} \in \{0.05,0.1,0.12\}\) in our experiments and perform \(N = 20{,}000\) assimilation cycles.
The initial ensemble \(\{X_0^{(i)}\}_{i=1}^M\) is drawn from a Gaussian distribution centred at the reference initial condition with covariance matrix \(0.01 I\). We employ multiplicative ensemble inflation, which amounts to replacing the Lorenz-63 dynamics by
with inflation factors
Here \(\hat \mu _t\) denotes the empirical mean of the ensemble \(\{X_t^{(i)}\}_{i=1}^M\). These equations are combined with the augmented evolution equations (30) and solved numerically with step-size \(\Delta t = 0.005\) and ensemble sizes \(M \in \{5,10,15\}\).
We report the resulting root mean square errors

\[ \mathrm{RMSE} = \left( \frac{1}{N} \sum_{n=1}^{N} \left| \hat \mu_{t_n} - X_{t_n} \right|^2 \right)^{1/2}, \]

with \(\hat \mu_{t_n}\) the ensemble mean and \(X_{t_n}\) the reference state at observation time \(t_n\),
which are computed for each ensemble size M, observation interval \(\Delta t_{\mathrm {obs}}\) and inflation factor \(\sigma _k\). The results are displayed in Table 1 where the smallest RMSE over the range of inflation factors \(\{\sigma _k\}_{k=0}^9\) is stated for each M and \(\Delta t_{\mathrm {obs}}\). We also state the corresponding RMSEs from a standard ensemble square root filter implementation [2, 8]. We find that the proposed homotopy approach outperforms the ensemble square root filter in terms of RMSE in all settings considered. The improvements increase for increasing observation intervals \(\Delta t_{\mathrm {obs}}\). The homotopy approach also appears less sensitive to the ensemble size M.
We close this example by pointing out that less of an improvement could be expected for a fully observed Lorenz-63 system. The proposed homotopy approach seems particularly effective in guiding the unobserved solution components to regions of high posterior probability. See also the example from Sect. 6.4.
7 Conclusions
Devising alternative proposal densities has a long history in the context of sequential data assimilation and filtering. Here we have explored a computationally tractable approach which combines the concept of Schrödinger bridges with a rather straightforward homotopy approach. A further key ingredient is the approximate solution of the arising PDEs in terms of a constant gain approximation, which is also widely used within the EnKF community. Numerical examples indicate that the approach is viable and can overcome limitations of both standard sequential Monte Carlo as well as standard EnKF methods. This has been demonstrated for single assimilation steps as well as long-time data assimilation using the chaotic Lorenz-63 model with only the first component observed infrequently. It remains to be seen how the proposed methods behave for high-dimensional stochastic processes.
References
S. Agapiou, O. Papaspiliopoulos, D. Sanz-Alonso, and A. Stuart. Importance sampling: Intrinsic dimension and computational cost. Statistical Science, pages 405–431, 2017.
M. Asch, M. Bocquet, and M. Nodet. Data assimilation: Methods, algorithms, and applications. SIAM, 2016.
E. Bernton, J. Heng, A. Doucet, and P. E. Jacob. Schrödinger bridge samplers. Technical report, arXiv:1912.13170, 2019.
Y. Chen, T. T. Georgiou, and M. Pavon. Stochastic control liaisons: Richard Sinkhorn meets Gaspard Monge on a Schrödinger bridge. SIAM Review, 63: 249–313, 2021. doi: https://doi.org/10.1137/20M1339982.
N. Chopin and O. Papaspiliopoulos. An Introduction to Sequential Monte Carlo. Springer Nature Switzerland AG, Cham, Switzerland, 2020.
A. Corenflos, J. Thornton, G. Deligiannidis, and A. Doucet. Differentiable particle filtering via entropy-regularized optimal transport. In International Conference on Machine Learning, pages 2100–2111. PMLR, 2021.
F. Daum, J. Huang, and A. Noushin. Exact particle flow for nonlinear filters. In I. Kadar, editor, Signal Processing, Sensor Fusion, and Target Recognition XIX, volume 7697, pages 92–110. International Society for Optics and Photonics, SPIE, 2010. doi: https://doi.org/10.1117/12.839590.
G. Evensen, F. Vossepoel, and P. van Leeuwen. Data Assimilation Fundamentals: A unified Formulation of the State and Parameter Estimation Problem. Springer Nature Switzerland AG, Cham, Switzerland, 2022.
J. Heng, A. N. Bishop, G. Deligiannidis, and A. Doucet. Controlled sequential Monte Carlo. The Annals of Statistics, 48: 2904–2929, 2020. doi: https://doi.org/10.1214/19-AOS1914.
K. Law, A. Stuart, and K. Zygalakis. Data Assimilation: A Mathematical Introduction. Springer, Cham, Switzerland, 2015.
E. Lorenz. Deterministic nonperiodic flow. J. Atmos. Sci., 20: 130–141, 1963.
B. Oksendal. Stochastic differential equations: an introduction with applications. Springer Science & Business Media, 2013.
G. A. Pavliotis. Stochastic Processes and Applications: Diffusion Processes, the Fokker-Planck and Langevin Equations, volume 60 of Texts in Applied Mathematics. Springer New York, New York, NY, 2014. doi: https://doi.org/10.1007/978-1-4939-1323-7.
S. Reich. A dynamical systems framework for intermittent data assimilation. BIT Numerical Mathematics, 51 (1): 235–249, 2011.
S. Reich. A nonparametric ensemble transform method for Bayesian inference. SIAM J. Sci. Comput., 35: A2013–A2024, 2013. doi: https://doi.org/10.1137/130907367.
S. Reich. Data assimilation: The Schrödinger perspective. Acta Numerica, 28: 635–711, 2019.
S. Reich and C. Cotter. Probabilistic forecasting and Bayesian data assimilation. Cambridge University Press, 2015.
S. Särkkä. Bayesian Filtering and Smoothing. Cambridge University Press, Cambridge, 2013.
C. Snyder, T. Bengtsson, P. Bickel, and J. Anderson. Obstacles to high-dimensional particle filtering. Monthly Weather Review, 136 (12): 4629–4640, 2008.
A. Spantini, R. Baptista, and Y. Marzouk. Coupling techniques for nonlinear ensemble filtering. arXiv preprint arXiv:1907.00389, 2019.
A. Taghvaei, J. de Wiljes, P. Mehta, and S. Reich. Kalman filter and its modern extensions for the continuous-time nonlinear filtering problem. ASME. J. Dyn. Sys., Meas., Control., 140: 030904–030904–11, 2017. doi: https://doi.org/10.1115/1.4037780.
A. Taghvaei, P. Mehta, and S. Meyn. Diffusion map-based algorithm for gain function approximation in the feedback particle filter. SIAM/ASA J. Uncertain. Quantif., 8 (3): 1090–1117, 2020. doi: https://doi.org/10.1137/19M124513X.
P. van Leeuwen, H. Künsch, L. Nerger, R. Potthast, and S. Reich. Particle filters for high-dimensional geoscience applications: A review. Q. J. Royal Meteorol. Soc., 145: 2335–2365, 2019. doi: https://doi.org/10.1002/qj.3551.
T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter. IEEE Trans. Automat. Control, 58 (10): 2465–2480, 2013. ISSN 0018-9286. doi: https://doi.org/10.1109/TAC.2013.2258825.
T. Yang, H. A. P. Blom, and P. G. Mehta. The continuous-discrete time feedback particle filter. In American Control Conference, pages 648–653. IEEE, 2014. doi: https://doi.org/10.1109/ACC.2014.6859259.
Acknowledgements
This research has been funded by the Deutsche Forschungsgemeinschaft (DFG), Project-ID 318763901, SFB 1294. We thank Nikolai Zaki for earlier work on the topic of this paper.
Appendices
Appendix 1: Derivation of Control Term Equation
Given an evolution equation

\[ \mathrm{d}X_t = f(X_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t, \qquad (45) \]
we obtain the Fokker–Planck equation

\[ \partial_t \pi_t = -\nabla \cdot \left( \pi_t f \right) + \frac{\sigma^2}{2} \Delta \pi_t. \qquad (46) \]
Now we modify (45) by an additional drift term, i.e.

\[ \mathrm{d}X^{\mathrm{h}}_t = \left( f(X^{\mathrm{h}}_t) + \tilde g_t(X^{\mathrm{h}}_t) \right) \mathrm{d}t + \sigma\,\mathrm{d}W_t \qquad (47) \]
with \(\tilde g_t:\mathbb {R}^{d_x} \to \mathbb {R}^{d_x}\). In that case, we would get a Fokker–Planck equation for the new equation:

\[ \partial_t \pi^{\mathrm{h}}_t = -\nabla \cdot \left( \pi^{\mathrm{h}}_t \left( f + \tilde g_t \right) \right) + \frac{\sigma^2}{2} \Delta \pi^{\mathrm{h}}_t. \qquad (48) \]
We can find \(\tilde g_t\) in terms of known quantities as follows: we begin by taking the derivative of \(\pi ^{\mathrm {h}}_t\) with respect to time:
Next we substitute (46) for \(\partial _t \pi _t\) and use \(\pi _t = Z_t e^{\frac {t}{T}L} \pi ^{\mathrm {h}}_t\):
Comparing with (48) it follows that we require
For the \(\dot {Z}_t\) term we have
where the third equality follows from integration by parts and the expected value is taken with respect to \(\pi ^{\mathrm {h}}_t\). We finally note that
Appendix 2: Ensemble Kalman Filter Approximations
We provide details on the derivation of the EnKF-like approximation (22) to the controlled mean field equation (10) and the various simplifications that arise from assuming a linear forward map.
We first recall that a continuous time formulation of the EnKF for a generic likelihood function
is provided by
Here \(\pi _t\) denotes the law of \(X_t\), \(\pi _t[g]\) the expectation value of a function g under \(\pi _t\), and
is the covariance matrix between the state x and the forward map \(\hat h\).
Formal application of this approach to the two contributions to the likelihood function (21) leads to (22). More precisely, the first term leads to \(\hat h = h\) and
while the second term results in \(\hat h = \tilde h\) and
The EnKF makes use of statistical linearization
which holds provided \(\pi _t\) is Gaussian or if h is linear; a result known as Stein’s identity. The identity can also be used to approximate derivatives in a (weakly) non-Gaussian setting giving rise to
We also recall the robust time-stepping method
which again can be adjusted appropriately to (22).
We now assume a linear forward map, that is \(h(x) = Hx\), and discuss the simplifications that result in the computation of (22). Note that
Hence the covariance matrix \(\Sigma _t^{x\tilde h}\) can be reformulated to
and (22) simplifies to
with
Upon dropping terms of order \(\mathcal {O}(\Delta t)\) and using \(\Sigma _t^{xx}= \Sigma _t^{\mathrm {h}}\), we obtain (25) for \(f=0\) and (30) for \(\sigma = 0\) as special cases. The mean field equations (32) also follow easily from \(f(x) = Fx + b\).
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
Cite this paper: Reich, S. (2024). Data Assimilation: A Dynamic Homotopy-Based Coupling Approach. In: Chapron, B., Crisan, D., Holm, D., Mémin, E., Radomska, A. (eds) Stochastic Transport in Upper Ocean Dynamics II. STUOD 2022. Mathematics of Planet Earth, vol 11. Springer, Cham. https://doi.org/10.1007/978-3-031-40094-0_12