# Determining white noise forcing from Eulerian observations in the Navier-Stokes equation


## Abstract

The Bayesian approach to inverse problems is of paramount importance in quantifying uncertainty about the input to, and the state of, a system of interest given noisy observations. Herein we consider the forward problem of the forced 2D Navier-Stokes equation. The inverse problem is to make inference concerning the forcing, and possibly the initial condition, given noisy observations of the velocity field. We place a prior on the forcing which is in the form of a spatially-correlated and temporally-white Gaussian process, and formulate the inverse problem for the posterior distribution. Given appropriate spatial regularity conditions, we show that the solution is a continuous function of the forcing. Hence, for appropriately chosen spatial regularity in the prior, the posterior distribution on the forcing is absolutely continuous with respect to the prior and is hence well-defined. Furthermore, it may then be shown that the posterior distribution is a continuous function of the data. We complement these theoretical results with numerical simulations showing the feasibility of computing the posterior distribution, and illustrating its properties.

## Keywords

Bayesian inversion · Navier-Stokes equation · White noise forcing

## 1 Introduction

The Bayesian approach to inverse problems has grown in popularity significantly over the last decade, driven by algorithmic innovation and steadily increasing computer power [10]. Recently there have been systematic developments of the theory of Bayesian inversion on function spaces [3, 11, 12, 13, 14, 18] and this has led to new sampling algorithms which perform well under mesh-refinement [2, 15, 21]. In this paper we add to this growing interest in the Bayesian formulation of inversion, in the context of a specific PDE inverse problem, motivated by geophysical applications such as data assimilation in the atmosphere and ocean sciences, and demonstrate that fully Bayesian probing of the posterior distribution is feasible.

The primary goal of this paper is to demonstrate that the Bayesian formulation of inversion for the forced Navier-Stokes equation, introduced in [3], can be extended to the case of white noise forcing. The paper [3] assumed an Ornstein-Uhlenbeck structure in time for the forcing, and hence did not include the white noise case. It is technically demanding to extend to the case of white noise forcing, but it is also of practical interest. This practical importance stems from the fact that the Bayesian formulation of problems with white noise forcing corresponds to a statistical version of the continuous time weak constraint 4DVAR methodology [22]. The 4DVAR approach to data assimilation currently gives the most accurate global short-term weather forecasts available [16] and this is arguably the case because, unlike ensemble filters which form the major competitor, 4DVAR has a rigorous statistical interpretation as a maximum a posteriori (or MAP) estimator, the point which maximizes the posterior probability. It is therefore of interest to seek to embed our understanding of such methods in a broader Bayesian context.

The ODEs arising in atmosphere and ocean science applications are of very high dimension due to discretizations of PDEs. It is therefore conceptually important to carry through the program in the previous paragraph, and in particular the Bayesian formulation of the inversion, for PDEs; the paper [5] explains how to define MAP estimators for measures on Hilbert spaces and the connection to variational problems. The Navier-Stokes equation in 2D provides a useful canonical example of a PDE of direct relevance to the atmosphere and ocean sciences. When the prior covariance operator \(\mathsf Q\) is chosen to be that associated to an Ornstein-Uhlenbeck operator in time, the Bayesian formulation for the 2D Navier-Stokes equation has been carried out in [3]. Our goal in this paper is to extend to the more technically demanding case where \(\mathsf Q\) is the covariance operator associated with a white noise in time, with spatial correlation \(Q\). We will thus use the prior model \(\xi \,dt=dW\) where \(W\) is a \(Q\)-Wiener process in an appropriate Hilbert space, and consider inference with respect to \(W\) and \(u_0.\) In the finite dimensional setting the differences between the cases of coloured and white noise forcing, with respect to the inverse problem, are much less substantial and the interested reader may consult [7] for details.

The key tools required in applying the function space Bayesian approach in [18] are the proof of continuity of the forward map from the function space of the unknowns to the data space, together with estimates of the dependence of the forward map upon its point of application, sufficient to show certain integrability properties with respect to the prior. This program is carried out for the 2D Navier-Stokes equation with Ornstein-Uhlenbeck priors on the forcing in the paper [3]. However to use priors which are white in time adds further complications since it is necessary to study the stochastically forced 2D Navier-Stokes equation and to establish continuity of the solution with respect to small changes in the Brownian motion \(W\) which defines the stochastic forcing. We do this by employing the solution concept introduced by Flandoli in [6], and using probabilistic estimates on the solution derived by Mattingly in [17]. In Sect. 2 we describe the relevant theory of the forward problem, employing the setting of Flandoli. In Sect. 3 we build on this theory, using the estimates of Mattingly to verify the conditions in [18], resulting in a well-posed Bayesian inverse problem for which the posterior is Lipschitz in the data with respect to Hellinger metric. Section 4 extends this to include making inference about the initial condition as well as the forcing. Finally, in Sect. 5, we present numerical results which demonstrate feasibility of sampling from the posterior on white noise forces, and demonstrate the properties of the posterior distribution.

## 2 Forward problem

In this section we study the forward problem of the Navier-Stokes equation driven by white noise. Section 2.1 describes the forward problem, the Navier-Stokes equation, and rewrites it as an ordinary differential equation in a Hilbert space. In Sect. 2.2 we define the functional setting used throughout the paper. Section 2.3 highlights the solution concept that we use, leading in Sect. 2.4 to a proof of the key fact that the solution of the Navier-Stokes equation is continuous as a function of the rough driving force and the initial condition. All our theoretical results in this paper are derived in the case of Dirichlet (no-flow) boundary conditions. They may be extended to the problem on the periodic torus \(\mathbb {T}^d\); for brevity we present only the more complex Dirichlet case.

### 2.1 Overview

### 2.2 Function spaces

### 2.3 Solution concept

### **Lemma 1**

For each \(W \in {\mathbb X}\), the function \(z=z(\cdot ;W)\in C([0,T];\mathbb {H}^{1/2})\).

### *Proof*

Having established regularity, we now show that \(z\) is indeed a weak solution of (5).

### **Lemma 2**

For each \(\phi \in \mathbb {H}^2\), \(z(t)=z(t;W)\) satisfies (6).

### *Proof*

We now turn to the following result, which concerns \(v\) and is established on page 416 of [6], given the properties of \(z(\cdot ;W)\) established in the preceding two lemmas.

### **Lemma 3**

For each \(W\in {\mathbb X}\), problem (7) has a unique solution \(v\) in the function space \(C(0,T;\mathbb {H})\bigcap L^2(0,T;\mathbb {H}^1)\).

We then have the following existence and uniqueness result for the Navier-Stokes Eq. (3), more precisely for the weak form (4), driven by rough additive forcing [6]:

### **Proposition 1**

For each \(W\in {\mathbb X}\), problem (4) has a unique solution \(u\in C(0,T;\mathbb {H})\bigcap L^2(0,T;\mathbb {H}^{1/2})\) such that \(u-z\in L^2(0,T;\mathbb {H}^1)\).

### *Proof*

### 2.4 Continuity of the forward map

The purpose of this subsection is to establish continuity of the forward map from \(W\) into the weak solution \(u\) of (3), as defined in (4), at time \(t>0.\) In fact we prove continuity of the forward map from \((u_0,W)\) into \(u\) and for this it is useful to define the space \({\mathcal H}=\mathbb {H}\times {\mathbb X}\) and denote the solution \(u\) by \(u(t;u_0,W)\).

### **Theorem 1**

For each \(t>0\), the solution \(u(t;\cdot ,\cdot )\) of (3) is a continuous map from \({\mathcal H}\) into \(\mathbb {H}\).

### *Proof*

## 3 Bayesian inverse problems with model error

In this section we formulate the inverse problem of determining the forcing to Eq. (3) from knowledge of the velocity field; more specifically we formulate the Bayesian inverse problem of determining the driving Brownian motion \(W\) from noisy pointwise observations of the velocity field. Here we consider the initial condition to be fixed and hence denote the solution of (3) by \(u(t;W)\); extension to the inverse problem for the pair \((u_0,W)\) is given in the following section.

We set up the likelihood in Sect. 3.1. Then, in Sect. 3.2, we describe the prior on the forcing which is a Gaussian white-in-time process with spatial correlations, and hence a spatially correlated Brownian motion prior on \(W\). This leads, in Sect. 3.3, to a well-defined posterior distribution, absolutely continuous with respect to the prior, and Lipschitz in the Hellinger metric with respect to the data. To prove these results we employ the framework for Bayesian inverse problems developed in Cotter et al. [3] and Stuart [18]. In particular, Corollary 2.1 of [3] and Theorem 6.31 of [18] show that, in order to demonstrate the absolute continuity of the posterior measure with respect to the prior, it suffices to show that the mapping \({\mathcal G}\) in (23) is continuous with respect to the topology of \(\mathbb {X}\) and to choose a prior with full mass on \(\mathbb {X}\). Furthermore we then employ the proofs of Theorem 2.5 of [3] and Theorem 4.2 of [18] to show the well-posedness of the posterior measure; indeed we show that the posterior is Lipschitz with respect to data, in the Hellinger metric.
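Equations (23) and (24) are not reproduced in this excerpt. For orientation, in such Gaussian-noise settings the Radon-Nikodym density typically takes the least-squares form below; this is a hedged reconstruction, assuming \(J\) observation times \(t_j\), \(K\) bounded linear functionals \(\ell_k\), data \(\delta_{j,k}\), and i.i.d. observational noise of standard deviation \(\gamma\):

```latex
\frac{d\rho^{\delta}}{d\rho}(W)
  \propto \exp\!\left( -\frac{1}{2\gamma^{2}}
  \sum_{j=1}^{J}\sum_{k=1}^{K}
  \left| \delta_{j,k} - \ell_{k}\bigl(u(t_{j};W)\bigr) \right|^{2} \right)
```

Under this reading, the map \({\mathcal G}:{\mathbb X}\rightarrow \mathbb {R}^{JK}\) of (23) collects the \(JK\) values \(\ell _k(u(t_j;W))\), so continuity of \({\mathcal G}\) reduces to continuity of the forward map.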

### 3.1 Likelihood

### 3.2 Prior

We construct our prior on the time-integral of the forcing, namely \(W\). Let \(Q\) be a linear operator from the Hilbert space \(\mathbb {H}^{\frac{1}{2}+\epsilon }\) into itself with eigenvectors \(e_k\) and eigenvalues \(\sigma _k^2\) for \(k=1,2,\ldots \). We make the following assumption

### **Assumption 1**

### *Remark 1*

We have constructed the solution of (3) for each deterministic continuous function \(W\in \mathbb {X}\). Since we equip \(\mathbb {X}\) with the prior probability measure \(\rho \), we wish to employ the results from [6] concerning the solution of (3) when \(W\) is regarded as a Brownian motion taking values in \(\mathbb {X}\). However, the solution of (3) is constructed in [6] in a slightly different way from that used in the preceding developments. We therefore show that, under Assumption 1 and \(\rho \)-almost surely, the solution \(u\) of (4) defined in (12) for each individual function \(W\) coincides with the unique progressively measurable solution in \(C([0,T];\mathbb {H})\bigcap L^2([0,T];\mathbb {H}^1)\) constructed in Flandoli [6] when the noise \(W\) is sufficiently spatially regular. This allows us to employ the finiteness of the second moment, i.e. of the energy \(\mathbb {E}^{\rho }[\Vert u(\cdot ,t;W)\Vert _{\mathbb {H}}^2]\), established in Mattingly [17], which we need later.
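As a concrete illustration of drawing from such a prior, the sketch below samples the first \(K\) spectral coefficients of a \(Q\)-Wiener path via the expansion \(W(t)=\sum_k \sigma_k \beta_k(t) e_k\), with independent scalar Brownian motions \(\beta_k\). The decay rate \(\sigma_k = k^{-\alpha}\) and all parameter values are hypothetical stand-ins, not the paper's choices:

```python
import numpy as np

def sample_q_wiener(K=32, N=100, T=0.1, alpha=2.0, seed=0):
    """Sample spectral coefficients of a Q-Wiener path on [0, T].

    Hypothetical choice: sigma_k = k^(-alpha), mimicking a trace-class
    covariance Q diagonal in the eigenbasis {e_k}. Returns an array of
    shape (N+1, K): row n holds the K mode amplitudes of W(t_n).
    """
    rng = np.random.default_rng(seed)
    dt = T / N
    sigma = np.arange(1, K + 1, dtype=float) ** (-alpha)     # sigma_k
    # Independent Brownian increments for each scalar mode beta_k
    dB = rng.normal(0.0, np.sqrt(dt), size=(N, K))
    # W(0) = 0; partial sums of the weighted increments give the path
    return np.vstack([np.zeros(K), np.cumsum(sigma * dB, axis=0)])

W = sample_q_wiener()
```

Because the modes decouple, samples on a refined time grid cost only more Gaussian draws, which is what makes this prior convenient for the function-space MCMC used later.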

### 3.3 Posterior

### **Theorem 2**

### *Proof*

Note that \(\rho (\mathbb {X})=1.\) It follows from Corollary 2.1 of Cotter et al. [3] and Theorem 6.31 of Stuart [18] that, in order to demonstrate that \(\rho ^{\delta } \ll \rho \), it suffices to show that the mapping \({\mathcal G}: {\mathbb X}\rightarrow \mathbb R^{JK}\) is continuous; the Radon-Nikodym derivative (24) then defines the density of \(\rho ^{\delta }\) with respect to \(\rho .\) As \(\ell \) is a collection of bounded continuous linear functionals on \(\mathbb {H}\), the continuity of \({\mathcal G}\) with respect to the topology of \(\mathbb {X}\) follows from Theorem 1.

## 4 Inferring the initial condition

In the previous section we discussed the problem of inferring the forcing from the velocity field. In practical applications it is also of interest to infer the initial condition, which corresponds to a Bayesian interpretation of 4DVAR, or the initial condition together with the forcing, which corresponds to a Bayesian interpretation of weak constraint 4DVAR. Thus we consider the Bayesian inverse problem of inferring the initial condition \(u_0\) and the white noise forcing determined by the Brownian driver \(W\). Including the initial condition adds no further technical difficulties: the dependence on the path-space valued forcing is more subtle than the dependence on the initial condition, and the former was dealt with in the previous section. As a consequence we do not provide full details.

Let \(\varrho \) be a Gaussian measure on the space \(\mathbb {H}\) and let \(\mu =\varrho \otimes \rho \) be the prior probability measure on the space \({\mathcal H}=\mathbb {H}\times {\mathbb X}\). We denote the solution \(u\) of (3) by \(u(x,t;u_0,W)\).

### **Theorem 3**

### *Proof*

To establish the absolute continuity of posterior with respect to prior, together with the formula for the Radon–Nikodym derivative, the key issue is establishing continuity of the forward map with respect to initial condition and driving Brownian motion. This is established in Theorem 1. Since \(\mu ({\mathcal H})=1\) the first part of the theorem follows.

## 5 Numerical results

The purpose of this section is twofold: firstly to demonstrate that the Bayesian formulation of the inverse problem described in this paper forms the basis for practical numerical inversion; and secondly to study some properties of the posterior distribution on the white noise forcing, given observations of linear functionals of the velocity field.

The numerical results move outside the strict remit of our theory in two directions. Firstly we work with periodic boundary conditions; this makes the computations fast, and simultaneously demonstrates that the theory is readily extended from Dirichlet to other boundary conditions. Secondly we consider both (i) pointwise observations of the entire velocity field and (ii) observations found from the projection onto the lowest eigenfunctions of \(A\), noting that the second form of observations comprises bounded linear functionals on \(\mathbb {H}\), as required by our theory, whilst the first form does not.

In Sect. 5.1 we describe the numerical method used for the forward problem. In Sect. 5.2 we describe the inverse problem and the Metropolis-Hastings MCMC method used to probe the posterior. Section 5.3 describes the numerical results.

### 5.1 Forward problem: numerical discretization

All our numerical results are computed using a viscosity of \(\nu =0.1\) and on the periodic domain. We work on the time interval \(t \in [0,0.1].\) We use \(M=32^2\) divergence free Fourier basis functions for a spectral Galerkin spatial approximation, and employ a time-step \(\delta t = 0.01\) in a Taylor time-approximation [9]. The number of basis functions and time-step lead to a fully-resolved numerical simulation at this value of \(\nu .\)
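For readers wishing to experiment, the sketch below advances a 2D periodic vorticity field by one step of a pseudospectral scheme with an additive noise increment. It is a deliberately simplified stand-in (forward Euler on the nonlinearity, an integrating factor for the viscous term), not the Taylor time-approximation of [9] used for the results reported here:

```python
import numpy as np

def nse_vorticity_step(w_hat, dt, nu, dW_hat):
    """One explicit pseudospectral step for 2D periodic Navier-Stokes in
    vorticity form: dw + u.grad(w) dt = nu*lap(w) dt + dW.

    w_hat, dW_hat: 2D FFTs (numpy.fft.fft2) of real M x M fields.
    """
    M = w_hat.shape[0]
    k = np.fft.fftfreq(M) * M                       # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2_psi = k2.copy()
    k2_psi[0, 0] = 1.0      # avoid divide-by-zero; mean vorticity mode is zero
    psi_hat = w_hat / k2_psi                        # stream function: -lap(psi) = w
    u = np.real(np.fft.ifft2(1j * ky * psi_hat))    # u = psi_y
    v = np.real(np.fft.ifft2(-1j * kx * psi_hat))   # v = -psi_x
    wx = np.real(np.fft.ifft2(1j * kx * w_hat))
    wy = np.real(np.fft.ifft2(1j * ky * w_hat))
    nonlin_hat = np.fft.fft2(u * wx + v * wy)       # u.grad(w), pseudospectral
    decay = np.exp(-nu * k2 * dt)                   # exact viscous decay per mode
    return decay * (w_hat - dt * nonlin_hat) + dW_hat

# Hypothetical check: a single Taylor-vortex mode, for which u.grad(w) = 0,
# should simply decay under viscosity
x = np.linspace(0, 2 * np.pi, 32, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
w0 = np.sin(X)
w1_hat = nse_vorticity_step(np.fft.fft2(w0), dt=0.01, nu=0.1,
                            dW_hat=np.zeros((32, 32), dtype=complex))
w1 = np.real(np.fft.ifft2(w1_hat))
```

With \(\nu=0.1\) and \(\delta t = 0.01\) as in the text, the mode with \(|k|^2=1\) contracts by the factor \(e^{-\nu |k|^2 \delta t}\) per step.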

### 5.2 Inverse problem: Metropolis-Hastings MCMC

Recall the Stokes operator \(A\). We consider the inverse problem of finding the driving Brownian motion. As a prior we take a centered Brownian motion in time with spatial covariance \(\pi ^4 A^{-2}\); thus the space-time covariance of the process is \(C_0 := \pi ^4 A^{-2} \otimes (-\triangle _t)^{-1}\), where \(\triangle _t\) is the Laplacian in time with fixed homogeneous Dirichlet condition at \(t=0\) and homogeneous Neumann condition at \(t=T\). It is straightforward to draw samples from this Gaussian measure, using the fact that \(A\) is diagonalized in the spectral basis. Note that if \(W \sim \rho \), then \(W \in C(0,T;\mathbb {H}^s)\) almost surely for all \(s<1\); in particular \(W \in {\mathbb X}\). Thus \(\rho ({\mathbb X})=1\) as required. The likelihood is defined (i) by making observations of the velocity field at every point on the \(32^2\) grid implied by the spectral method, at every time \(t=n\delta t\), \(n=1,\dots ,10\), or (ii) by making observations of the projection onto eigenfunctions \(\{\phi _k\}_{|k|<4}\) of \(A\). The observational noise standard deviation is taken to be \(\gamma = 1.6\) and all observational noises are uncorrelated.

The resulting pCN scheme (36), (37) satisfies *detailed balance* with respect to the measure \(\rho ^{\delta }\) which we wish to sample.
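The pCN scheme (36), (37) is not reproduced in this excerpt; the following is a generic sketch of a pCN Metropolis-Hastings step on a finite-dimensional surrogate, with a hypothetical Gaussian toy target. The key feature is that the acceptance probability involves only the likelihood potential, never the (ill-defined, in infinite dimensions) prior density:

```python
import numpy as np

def pcn_step(W, phi, sample_prior, beta, rng):
    """One preconditioned Crank-Nicolson (pCN) Metropolis-Hastings step.

    The proposal leaves the Gaussian prior invariant, so the accept/reject
    ratio reduces to exp(phi(W) - phi(v)); this is what keeps the method
    well defined under mesh refinement.
    """
    xi = sample_prior(rng)                            # fresh draw from the prior
    v = np.sqrt(1.0 - beta ** 2) * W + beta * xi      # pCN proposal
    if np.log(rng.uniform()) < phi(W) - phi(v):       # accept w.p. min(1, e^{phi(W)-phi(v)})
        return v, True
    return W, False

# Hypothetical toy target: prior N(0, I) on R^5, one noisy observation y,
# noise standard deviation gamma as in the text
rng = np.random.default_rng(1)
y, gamma = np.ones(5), 1.6
phi = lambda w: 0.5 * np.sum((y - w) ** 2) / gamma ** 2
sample_prior = lambda rng: rng.standard_normal(5)
W, accepts = sample_prior(rng), 0
for _ in range(2000):
    W, ok = pcn_step(W, phi, sample_prior, beta=0.3, rng=rng)
    accepts += ok
```

In practice \(\beta\) is tuned so that the empirical acceptance rate sits in a moderate range; the same step applies verbatim when \(W\) is a vector of spectral Brownian-path coefficients.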

### 5.3 Results and discussion

The true driving Brownian motion \(W^\dagger \), underlying the data in the likelihood, is constructed as a draw from the prior \(\rho \). We then compute the corresponding true trajectory \(u^\dagger (t)=u(t;W^\dagger )\). We use the pCN scheme (36), (37) to sample \(W\) from the posterior distribution \(\rho ^\delta \). It is important to appreciate that the object of interest here is the posterior distribution on \(W\) itself which provides estimates of the forcing, given the noisy observations of the velocity field. This posterior distribution is not necessarily close to a Dirac measure on the truth; in fact we will show that some parameters required to define \(W\) are recovered accurately whilst others are not.

We first consider the observation set-up (i) where pointwise observations of the entire velocity field are made. The true initial and final conditions are plotted in Fig. 1, top two panels, for the vorticity field \(w\); the middle two panels of Fig. 1 show the posterior mean of the same quantities and indicate that the data is fairly informative, since the posterior means closely resemble the truth; the bottom two panels of Fig. 1 show the absolute difference between the fields in the top and middle panels. The true trajectory, together with the posterior mean and a one standard deviation interval around the mean, are plotted in Fig. 2, for the wavenumbers \((0,1)\), \((0,4)\), and \((0,8)\), and for both the driving Brownian motion \(W\) (right) and the velocity field \(u\) (left). This figure indicates that the data is very informative about the \((0,1)\) mode, but less so concerning the \((0,4)\) mode, and there is very little information in the \((0,8)\) mode. In particular for the \((0,8)\) mode the mean and standard deviation exhibit behaviour similar to that under the prior, whereas for the \((0,1)\) mode they show considerable improvement over the prior in both the position of the mean and the width of the standard deviation interval. The posterior on the \((0,4)\) mode has gleaned some information from the data, as the mean has shifted considerably from the prior; the variance remains similar to that under the prior, however, so uncertainty in this mode has not been reduced. Figure 3 shows the histograms of the prior and posterior for the same three modes as in Fig. 2 at the center time \(t=0.05\). One can see here even more clearly that the data is very informative about the \((0,1)\) mode (left panel), somewhat informative about the \((0,4)\) mode (center panel), and not informative at all about the \((0,8)\) mode (right panel).

Figures 4, 5, and 6 are the counterparts of Figs. 1, 2, and 3 for the case of (ii) observation of low Fourier modes. Notice that the differences between the spatial fields are now difficult to discern by eye; indeed the relative errors agree to a threshold of \(10^{-3}\). However, we can see that the unobserved \((0,4)\) mode in the center panels of Figs. 5 and 6 is not informed by the data and remains distributed approximately like the prior.

## Notes

### Acknowledgments

VHH gratefully acknowledges the financial support of the AcRF Tier 1 grant RG69/10. AMS is grateful to EPSRC, ERC, ESA and ONR for financial support for this work. KJHL is grateful to the financial support of the ESA and is currently a member of the King Abdullah University of Science and Technology (KAUST) Strategic Research Initiative (SRI) Center for Uncertainty Quantification in Computational Science.

## References

- 1. Bennett, A.F.: Inverse Modeling of the Ocean and Atmosphere. Cambridge University Press, Cambridge (2002)
- 2. Cotter, S., Roberts, G., Stuart, A., White, D.: MCMC methods for functions: modifying old algorithms to make them faster. Stat. Sci. **28**(3), 424–446 (2013)
- 3. Cotter, S.L., Dashti, M., Robinson, J.C., Stuart, A.M.: Bayesian inverse problems for functions and applications to fluid mechanics. Inverse Probl. **25**, 115008 (2009)
- 4. Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (2008)
- 5. Dashti, M., Law, K.J.H., Stuart, A.M., Voss, J.: MAP estimators and posterior consistency in Bayesian nonparametric inverse problems. Inverse Probl. **29**, 095017 (2013)
- 6. Flandoli, F.: Dissipativity and invariant measures for stochastic Navier-Stokes equations. NoDEA **1**, 403–423 (1994)
- 7. Hairer, M., Stuart, A.M., Voss, J.: Signal processing problems on function space: Bayesian formulation, stochastic PDEs and effective MCMC methods. In: Crisan, D., Rozovsky, B. (eds.) The Oxford Handbook of Nonlinear Filtering, pp. 833–873. Oxford University Press, Oxford (2011)
- 8. Hastings, W.K.: Monte Carlo sampling methods using Markov chains and their applications. Biometrika **57**, 97–109 (1970)
- 9. Jentzen, A., Kloeden, P.: Taylor expansions of solutions of stochastic partial differential equations with additive noise. Ann. Probab. **38**(2), 532–569 (2010)
- 10. Kaipio, J., Somersalo, E.: Statistical and Computational Inverse Problems. Applied Mathematical Sciences, vol. 160. Springer, New York (2004)
- 11. Lasanen, S.: Discretizations of generalized random variables with applications to inverse problems. Ann. Acad. Sci. Fenn. Math. Diss., University of Oulu (2002)
- 12. Lasanen, S.: Measurements and infinite-dimensional statistical inverse theory. PAMM **7**, 1080101–1080102 (2007)
- 13. Lasanen, S.: Non-Gaussian statistical inverse problems. Part I: posterior distributions. Inverse Probl. Imaging **6**(2), 215–266 (2012)
- 14. Lasanen, S.: Non-Gaussian statistical inverse problems. Part II: posterior convergence for approximated unknowns. Inverse Probl. Imaging **6**(2), 267–287 (2012)
- 15. Law, K.J.H.: Proposals which speed up function space MCMC. J. Comput. Appl. Math. **262**, 127–138 (2014)
- 16. Lorenc, A.C.: The potential of the ensemble Kalman filter for NWP: a comparison with 4D-Var. Quart. J. R. Meteorol. Soc. **129**(595), 3183–3203 (2003)
- 17. Mattingly, J.C.: Ergodicity of 2D Navier-Stokes equations with random forcing and large viscosity. Commun. Math. Phys. **206**, 273–288 (1999)
- 18. Stuart, A.M.: Inverse problems: a Bayesian perspective. Acta Numer. **19**, 451–559 (2010)
- 19. Temam, R.: Navier-Stokes Equations. American Mathematical Society, New York (1984)
- 20. Tierney, L.: A note on Metropolis-Hastings kernels for general state spaces. Ann. Appl. Probab. **8**(1), 1–9 (1998)
- 21. Vollmer, S.J.: Dimension-independent MCMC sampling for elliptic inverse problems with non-Gaussian priors. arXiv:1302.2213 (2013)
- 22. Zupanski, D.: A general weak constraint applicable to operational 4DVAR data assimilation systems. Monthly Weather Rev. **125**(9), 2274–2292 (1997)