Multi-level Monte Carlo methods for the approximation of invariant measures of stochastic differential equations

We develop a framework that allows the use of the multi-level Monte Carlo (MLMC) methodology (Giles in Acta Numer. 24:259-328, 2015) to calculate expectations with respect to the invariant measure of an ergodic SDE. In that context, we study the (over-damped) Langevin equations with a strongly concave potential. We show that when appropriate contracting couplings for the numerical integrators are available, one can obtain a uniform-in-time estimate of the MLMC variance, in contrast to the majority of the results in the MLMC literature. As a consequence, a root mean square error of O(ε) is achieved with O(ε^{-2}) complexity, on par with Markov chain Monte Carlo (MCMC) methods, which, however, can be computationally intensive when applied to large datasets. Finally, we present a multi-level version of the recently introduced stochastic gradient Langevin dynamics method (Welling and Teh, in: Proceedings of the 28th ICML, 2011), built for applications with large datasets. We show that this is the first stochastic gradient MCMC method with complexity O(ε^{-2}|log ε|^3), in contrast to the complexity O(ε^{-3}) of currently available methods. Numerical experiments confirm our theoretical findings.

π(g) := E_π g = ∫_{R^d} g(x) π(dx), g ∈ L^1(π). (1)

Even if π(dx) is given in an explicit form, quadrature methods are, in general, inefficient in high dimensions. Probabilistic methods, on the other hand, scale very well with the dimension and are often the method of choice. With this in mind, we explore the connection between the dynamics of stochastic differential equations (SDEs) and the target probability measure π(dx). More precisely, we consider the over-damped Langevin equation

dX_t = ∇U(X_t) dt + √2 dW_t, X_0 = x_0, (2)

whose invariant measure is π(dx) ∝ e^{U(x)} dx. The key idea is that under appropriate assumptions on U(x) one can show that the solution to (2) is ergodic and has π(dx) as its unique invariant measure (Has'minskiȋ 1980). However, there exist only a limited number of cases in which analytical solutions to (2) are available, and typically some form of approximation needs to be employed. The numerical analysis approach (Kloeden and Platen 1992) is to discretise (2) and run the corresponding Markov chain over a long time interval. One drawback of this approach is that, even though (2) is geometrically ergodic, the corresponding numerical discretisation might not be (Roberts and Tweedie 1996); in addition, extra care is required when ∇U is not globally Lipschitz (Mattingly et al. 2002; Talay 2002; Roberts and Tweedie 1996; Shardlow and Stuart 2000; Hutzenthaler et al. 2011). The numerical analysis approach also introduces bias, because the numerical invariant measure does not, in general, coincide with the exact one (Abdulle et al. 2014; Talay and Tubaro 1990), resulting in a biased estimation of π(g) in (1). Furthermore, if one uses the Euler-Maruyama method to discretise (2), then a computational complexity of O(ε^{-3}) is required to achieve a root mean square error of order O(ε) in the approximation of (1).
Furthermore, even if one mitigates the bias due to numerical discretisation by a series of decreasing time steps combined with an appropriately weighted time average of the quantity of interest (Lamberton and Pagès 2002), the computational complexity still remains O(ε^{-3}).
An alternative way of sampling from π(dx) exactly, which avoids the bias introduced by a pure discretisation of (2), is the Metropolis-Hastings algorithm (Hastings 1970). We will refer to this as the computational statistics approach. The fact that the Metropolis-Hastings algorithm leads to asymptotically unbiased samples of the probability measure is one of the reasons why it has been the method of choice in computational statistics. Moreover, unlike the numerical analysis approach, a computational complexity of only O(ε^{-2}) is required to achieve a root mean square error of order O(ε) in the (asymptotically unbiased) approximation of (1). We notice that MLMC (Giles 2015) and the unbiasing schemes (Rhee and Glynn 2012, 2015; Glynn and Rhee 2014) are able to achieve the O(ε^{-2}) complexity for computing expectations of SDEs on a fixed time interval [0, T], despite using biased numerical discretisations. We are interested in extending this approach to the case of ergodic SDEs on the time interval [0, ∞); see also the discussion in Giles (2015).
A particular application of (2) arises when one is interested in approximating posterior expectations for a Bayesian inference problem. More precisely, if for a fixed parameter x the data {y_i}_{i=1,...,N} are i.i.d. with densities π(y_i|x), then ∇U(x) becomes

∇U(x) = ∇ log π_0(x) + Σ_{i=1}^N ∇ log π(y_i|x), (3)

with π_0(x) being the prior distribution of x. When dealing with problems where the number of data items N ≫ 1 is large, both the standard numerical analysis and the MCMC approaches suffer from the high computational cost of evaluating the likelihood terms ∇ log π(y_i|x) over each data item y_i. One way to circumvent this problem is the stochastic gradient Langevin dynamics algorithm (SGLD) introduced in Welling and Teh (2011), which replaces the sum of the N likelihood terms by an appropriately reweighted sum of s ≪ N terms. This leads to the following recursion formula

x_{k+1} = x_k + h (∇ log π_0(x_k) + (N/s) Σ_{i=1}^s ∇ log π(y_{τ^k_i}|x_k)) + √(2h) ξ_k, (4)

where ξ_k is a standard Gaussian random variable on R^d and τ^k = (τ^k_1, ..., τ^k_s) is a random subset of [N] = {1, ..., N}, generated, for example, by sampling with or without replacement from [N]. Notice that this corresponds to a noisy Euler discretisation, which for fixed N, s still has computational complexity O(ε^{-3}), as discussed in Teh et al. (2016) and Vollmer et al. (2016). In this article, we are able to show that a careful coupling between fine and coarse paths allows the application of the MLMC framework, and hence a reduction of the computational complexity of the algorithm to O(ε^{-2}|log ε|^3). We also remark that coupling in time has recently been further developed for Euler schemes in Fang and Giles (2016), Fang and Giles (2017) and Fang and Giles (2019).
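To make the recursion (4) concrete, here is a minimal sketch of one SGLD step in Python, applied to a toy conjugate-Gaussian model; the model, step size, data and sample sizes are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def sgld_step(x, h, data, s, grad_log_prior, grad_log_lik, rng):
    """One SGLD step: the sum of N likelihood gradients is replaced by a
    reweighted sum over a random subset tau^k of s indices."""
    N = len(data)
    tau = rng.choice(N, size=s, replace=False)          # random subset tau^k
    drift = grad_log_prior(x) + (N / s) * sum(grad_log_lik(data[i], x) for i in tau)
    return x + h * drift + np.sqrt(2.0 * h) * rng.standard_normal(np.shape(x))

# Hypothetical toy model: N(0, 1) prior on x, Gaussian likelihood y_i ~ N(x, 1).
rng = np.random.default_rng(0)
data = 2.0 + rng.standard_normal(100)                   # N = 100 observations
x = np.zeros(1)
for _ in range(2000):
    x = sgld_step(x, 1e-3, data, s=10,
                  grad_log_prior=lambda x: -x,
                  grad_log_lik=lambda y, x: y - x,
                  rng=rng)
```

For this conjugate model the posterior mean is (Σ_i y_i)/(N + 1) ≈ 2, and the chain fluctuates around that value after a short burn-in.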
We would like to stress that in our analysis of the computational complexity of MLMC for SGLD, we treat N and s as fixed parameters. Hence, our results show that in cases in which one is forced to consider s ≪ N samples (e.g. in the big-data regime, where the cost of taking all N samples into account is prohibitively large), MLMC for SGLD can indeed reduce the computational complexity in comparison with standard MCMC. However, the authors of Nagapetyan et al. (2017) have recently argued that, compared with standard MCMC, the gain in complexity of SGLD due to the decreased number of samples can be outweighed by the increase in variance caused by subsampling. We believe that an analogous analysis for MLMC would be highly non-trivial, and we leave it for future work.
The rest of the paper is organised as follows: In Sect. 2 we describe the standard MLMC framework, discuss the contracting properties of the true trajectories of (2) and describe an algorithm for applying MLMC with respect to the time T for the true solution of (2). In Sect. 3 we present the new algorithm, as well as a framework that allows proving its convergence properties for a numerical method of choice. In Sect. 4 we present two examples of suitable numerical methods, while in Sect. 5 we describe a new version of SGLD with complexity O(ε^{-2}|log ε|^3). We conclude in Sect. 6, where a number of relevant numerical experiments are described.

Preliminaries
In Sect. 2.1 we review the classic, finite time, MLMC framework, while in Sect. 2.2 we state the key asymptotic properties of solutions of (2) when U is strongly concave.

MLMC with fixed terminal time
Fix T > 0 and consider the problem of approximating E[g(X_T)], where X_T is a solution of the SDE (2) and g : R^d → R. A classical approach to this problem consists of constructing a biased (the bias arising from time-discretisation) estimator of the form

(1/N) Σ_{i=1}^N g((x^M_T)^{(i)}), (5)

where (x^M_T)^{(i)}, for i = 1, ..., N, are independent copies of the random variable x^M_T, with {x^M_{kh}}_{k=0}^M being a discrete-time approximation of (2) over [0, T] with discretisation parameter h and M time steps, i.e. Mh = T. A central limit theorem for the estimator (5) was derived in Duffie and Glynn (1995), where it was shown that its computational complexity is O(ε^{-3}) for a root mean square error of O(ε) (as opposed to the O(ε^{-2}) that could be obtained if we could sample X_T without bias). The recently developed MLMC approach allows recovering the optimal complexity O(ε^{-2}), despite the fact that the estimator used therein builds on biased samples. This is achieved by exploiting the identity (Giles 2015; Kebaier 2005)

E[g_L] = E[g_0] + Σ_{ℓ=1}^L E[g_ℓ - g_{ℓ-1}], (6)

where g_ℓ := g(x^{M_ℓ}_T) and, for any ℓ = 0, ..., L, the Markov chain {x^{M_ℓ}_{kh_ℓ}}_{k=0}^{M_ℓ} is the discrete-time approximation of (2) over [0, T] with discretisation parameter h_ℓ and M_ℓ time steps (hence M_ℓ h_ℓ = T). This identity leads to an unbiased estimator of E[g_L] given by

Σ_{ℓ=0}^L (1/N_ℓ) Σ_{i=1}^{N_ℓ} (g_ℓ^{(i,ℓ)} - g_{ℓ-1}^{(i,ℓ)}), with g_{-1} := 0,

where g_ℓ^{(i,ℓ)}, i = 1, ..., N_ℓ, are independent samples at level ℓ. The inclusion of the level ℓ in the superscript (i, ℓ) indicates that independent samples are used at each level. The efficiency of MLMC lies in the coupling of g_ℓ^{(i,ℓ)} and g_{ℓ-1}^{(i,ℓ)} that results in a small Var[g_ℓ - g_{ℓ-1}]. In particular, for the SDE (2) one can use the same Brownian path to simulate g_ℓ and g_{ℓ-1}, which, through the strong convergence property of the underlying numerical scheme, yields an estimate for Var[g_ℓ - g_{ℓ-1}].
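The effect of driving fine and coarse discretisations with the same Brownian path can be illustrated on a toy Ornstein-Uhlenbeck instance of (2); the drift, step sizes and sample counts below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def coupled_pair(l, T, h0, rng):
    """Simulate fine (step h_l = h0 2^-l) and coarse (step h_{l-1}) Euler paths
    of dX = -X dt + sqrt(2) dW on [0, T], driven by the same Brownian path:
    the coarse step reuses the sum of the two fine Brownian increments."""
    hf, hc = h0 * 2.0**-l, h0 * 2.0**-(l - 1)
    xf = xc = 1.0
    for _ in range(int(T / hc)):
        dW1 = np.sqrt(hf) * rng.standard_normal()
        dW2 = np.sqrt(hf) * rng.standard_normal()
        xf += -xf * hf + np.sqrt(2.0) * dW1
        xf += -xf * hf + np.sqrt(2.0) * dW2
        xc += -xc * hc + np.sqrt(2.0) * (dW1 + dW2)
    return xf, xc

rng = np.random.default_rng(1)
var = []
for l in (1, 3):
    d = [np.subtract(*coupled_pair(l, T=1.0, h0=0.2, rng=rng)) for _ in range(2000)]
    var.append(np.var(d))
# strong convergence of the Euler scheme makes Var[g_l - g_{l-1}] decay with l
```

Refining the level shrinks the sample variance of the coupled difference, which is exactly what makes the telescoping estimator cheap at fine levels.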
By solving a constrained optimisation problem (cost versus accuracy), one can see that the reduced computational complexity arises because the MLMC method allows one to efficiently combine many simulations on low-accuracy grids (at a correspondingly low cost) with relatively few high-accuracy, high-cost simulations on very fine grids. It is shown in Giles (2015) that under the assumptions^2

|E[g_ℓ - g]| ≤ c_1 h_ℓ^α, Var[g_ℓ - g_{ℓ-1}] ≤ c_2 h_ℓ^β, (7)

for some α ≥ 1/2, β > 0, c_1 > 0 and c_2 > 0, the computational complexity of the resulting multi-level estimator with accuracy ε is proportional to

ε^{-2} if β > γ, ε^{-2}(log ε)^2 if β = γ, ε^{-2-(γ-β)/α} if β < γ,

where the cost of the algorithm at level L is of order h_L^{-γ}. Typically, the constants c_1, c_2 grow exponentially in the time T, as they follow from classical finite-time weak and strong convergence analysis of the numerical schemes. The aim of this paper is to establish the bounds (7) uniformly in time, i.e. to find constants c_1, c_2 > 0 independent of T.

2 Recall that h_ℓ is the time step used in the discretisation at level ℓ.

Remark 2.1
The reader may notice that in the regime β > γ, the computational complexity of O(ε^{-2}) coincides with that of an unbiased estimator. Nevertheless, the MLMC estimator as defined here is still biased, with the bias being controlled by the choice of the final level parameter L. However, in this setting it is possible to eliminate the bias by a clever randomisation trick (Rhee and Glynn 2015).

Properties of ergodic SDEs with strongly concave drifts
Consider the SDE (2) and let U satisfy the following condition.

HU0 For any x, y ∈ R^d there exists a positive constant m s.t.

⟨∇U(x) - ∇U(y), x - y⟩ ≤ -m|x - y|^2,

which is also known as a one-sided Lipschitz condition. Condition HU0 is satisfied for strongly concave potentials, i.e. when for any x, y ∈ R^d there exists a constant m s.t.

U(y) ≤ U(x) + ⟨∇U(x), y - x⟩ - (m/2)|x - y|^2.

In addition, HU0 (applied with y = 0) implies that ⟨∇U(x) - ∇U(0), x⟩ ≤ -m|x|^2, which in turn implies that

⟨∇U(x), x⟩ ≤ -m̃|x|^2 + b (11)

for some m̃ > 0, b ≥ 0. Condition HU0 ensures the contraction needed to establish uniform-in-time estimates for the solutions of (2). For the transparency of the exposition, we introduce the following flow notation for the solution to (2): for 0 ≤ s ≤ t we write

X_t = ψ_{s,t,W}(X_s), (12)

where W is the driving Brownian motion. The theorem below demonstrates that solutions to (2) driven by the same Brownian motion, but with different initial conditions, enjoy an exponential contraction property.
Theorem 2.2 Let (W_t)_{t≥0} be a standard Brownian motion in R^d. We fix random variables Y_0, X_0 ∈ R^d and define X_T = ψ_{0,T,W}(X_0) and Y_T = ψ_{0,T,W}(Y_0). If HU0 holds, then

E|X_T - Y_T|^2 ≤ e^{-2mT} E|X_0 - Y_0|^2. (13)

Proof The result follows from Itô's formula. Indeed, we have

d|X_t - Y_t|^2 = 2⟨X_t - Y_t, ∇U(X_t) - ∇U(Y_t)⟩ dt ≤ -2m|X_t - Y_t|^2 dt,

and an application of Gronwall's lemma yields (13), as required.
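A discrete analogue of this contraction can be seen by coupling two Euler chains through the same Gaussian noise; for the illustrative linear drift ∇U(x) = -x (so m = 1), the noise cancels in the difference, which then contracts deterministically by a factor (1 - h) per step.

```python
import numpy as np

# Synchronous coupling: two Euler chains, same noise, drift grad U(x) = -x
# (i.e. U(x) = -|x|^2 / 2, strongly concave with m = 1).  The shared noise
# cancels in the difference, which contracts by exactly (1 - m h) per step.
rng = np.random.default_rng(2)
h, m = 0.1, 1.0
x, y = 5.0, -3.0
for _ in range(50):
    xi = rng.standard_normal()
    x += -m * x * h + np.sqrt(2.0 * h) * xi
    y += -m * y * h + np.sqrt(2.0 * h) * xi
gap = abs(x - y)   # equals |5 - (-3)| * (1 - m h)^50 up to rounding
```

This is the discrete counterpart of the factor e^{-2mT} in (13): the squared gap shrinks like (1 - mh)^{2n} ≈ e^{-2mT} with T = nh.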

Remark 2.3
The 2-Wasserstein distance between probability measures ν_1 and ν_2 defined on a Polish metric space (E, ρ) is given by

W_2(ν_1, ν_2) = inf_{Γ ∈ (ν_1, ν_2)} ( ∫_{E×E} ρ(x, y)^2 Γ(dx, dy) )^{1/2},

with (ν_1, ν_2) being the set of couplings of ν_1 and ν_2 (all probability measures on E × E with marginals ν_1 and ν_2). We denote L(ψ_{0,t,W}(x)) = P_t(x, ·); that is, P_t is the transition kernel of the SDE (2). Since the choice of the same driving Brownian motion in Theorem 2.2 is an example of a coupling, Equation (13) implies

W_2(P_t(x, ·), P_t(y, ·)) ≤ e^{-mt}|x - y|.

Consequently, P_t has a unique invariant measure, and thus the process is ergodic (Hairer et al. 2011). In the present paper we are not concerned with determining couplings that are optimal; for practical considerations one should only consider couplings that are feasible to implement [see also the discussion in Agapiou et al. (2018) and Giles and Szpruch (2014)].

Coupling in time T
For the MLMC method with different discretisation parameters on different levels, coupling with the same Brownian motion is not enough to obtain good upper bounds on the variance, as, in general, solutions to the SDE (2) are only 1/2-Hölder continuous in time (Krylov 2009), i.e. for any t > s > 0 there exists a constant C > 0 such that

E|X_t - X_s|^2 ≤ C|t - s|, (15)

and it is well known that this bound is sharp. As we shall see later, this bound alone will not lead to an efficient MLMC implementation. However, by a suitable coupling of the SDE solutions on time intervals of length T and S, T > S, respectively, we will be able to take advantage of the exponential contraction property obtained in Theorem 2.2. Let (T_ℓ)_{ℓ≥0} be an increasing sequence of positive real numbers. To couple processes with different terminal times T_i and T_j, i ≠ j, we exploit the time-homogeneous Markov property of the flow (12). More precisely, for each ℓ ≥ 0 one would like to construct a pair (X^{(f,ℓ)}, X^{(c,ℓ)}) of solutions to (2), which we refer to as fine and coarse paths, such that the law of the coarse path at level ℓ coincides with the law of the fine path at level ℓ - 1, (16) and such that E|X^{(f,ℓ)}_{T_ℓ} - X^{(c,ℓ)}_{T_{ℓ-1}}|^2 is small. (17) Following Rhee and Glynn (2012, 2015), Agapiou et al. (2018) and Giles (2015), we propose a particular coupling, denoted by (X^{(f,ℓ)}, X^{(c,ℓ)}) and constructed in the following way (see also Fig. 1a):

- First, obtain a solution to (2) on the initial time interval [0, T_ℓ - T_{ℓ-1}] for the fine path only, started from X^{(f,ℓ)}_0 = X_0.
- Next, couple the fine and coarse paths on the remaining time interval of length T_{ℓ-1} by using the same Brownian motion W.
We note here that ∇U(·) in (2) is time-homogeneous; hence, the same applies to the corresponding transition kernel L(ψ_{0,t,W}(x)) = P_t(x, ·), which implies that condition (16) holds. Now Theorem 2.2 yields

E|X^{(f,ℓ)}_{T_ℓ} - X^{(c,ℓ)}_{T_{ℓ-1}}|^2 ≤ e^{-2mT_{ℓ-1}} E|X^{(f,ℓ)}_{T_ℓ-T_{ℓ-1}} - X^{(c,ℓ)}_0|^2,

implying that condition (17) is also satisfied. We now take ρ > 1 and define the sequence of terminal times (T_ℓ)_{ℓ≥0} in terms of ρ. In our case g_ℓ^{(i)} = g(X^{(f,ℓ),(i)}_{T_ℓ}), and we assume that g is globally Lipschitz with Lipschitz constant K. Hence,

Var[g_ℓ - g_{ℓ-1}] ≤ K^2 E|X^{(f,ℓ)}_{T_ℓ} - X^{(c,ℓ)}_{T_{ℓ-1}}|^2 ≤ K^2 e^{-2mT_{ℓ-1}} C (T_ℓ - T_{ℓ-1}),

where the last inequality follows from (15).
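A minimal sketch of this shifted coupling, again with Euler steps in place of exact solutions and the illustrative linear drift ∇U(x) = -x:

```python
import numpy as np

def shifted_coupling(T_fine, T_coarse, h, x0, rng):
    """Coupling in T: the fine path first runs alone on [0, T_fine - T_coarse];
    then fine and coarse paths are driven by the SAME Brownian increments for
    the remaining time T_coarse (toy drift grad U(x) = -x)."""
    xf = x0
    for _ in range(int(round((T_fine - T_coarse) / h))):      # independent phase
        xf += -xf * h + np.sqrt(2.0 * h) * rng.standard_normal()
    xc, gap0 = x0, abs(xf - x0)
    n = int(round(T_coarse / h))
    for _ in range(n):                                        # synchronous phase
        xi = rng.standard_normal()
        xf += -xf * h + np.sqrt(2.0 * h) * xi
        xc += -xc * h + np.sqrt(2.0 * h) * xi
    return abs(xf - xc), gap0 * (1.0 - h) ** n

rng = np.random.default_rng(3)
gap, bound = shifted_coupling(T_fine=3.0, T_coarse=2.0, h=0.05, x0=4.0, rng=rng)
```

For this linear drift the final gap equals the initial gap multiplied by the deterministic contraction factor (1 - h)^n, mirroring the factor e^{-2mT_{ℓ-1}} above.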

MLMC in T for approximation of SDEs
Having described a coupling algorithm with good contraction properties, we now present the main algorithm in Sect. 3.1, while in Sect. 3.2 we present a general numerical analysis framework for proving the convergence of our algorithm.

Description of the general algorithm
We now focus on the numerical discretisation of the Langevin equation (2). In particular, we are interested in coupling discretisations of (2) based on the step sizes h_ℓ and h_{ℓ-1}, with h_ℓ = h_0 2^{-ℓ}. Furthermore, as we are interested in computing expectations with respect to the invariant measure π(dx), we also let the time endpoint T_ℓ ↑ ∞, chosen such that T_ℓ/h_0, T_ℓ/h_ℓ ∈ N. We illustrate the main idea using two generic discrete-time stochastic processes (x_k)_{k∈N}, (y_k)_{k∈N}, defined by

x_{k+1} = S_{h,ξ_{k+1}}(x_k), y_{k+1} = S_{h,ξ_{k+1}}(y_k), (20)

where S is the one-step operator of the numerical scheme and ξ ∼ N(0, I). We will also use the notation P_h(x, ·) = L(S_{h,ξ}(x)) for the corresponding Markov kernel. For MLMC algorithms, one evolves both fine and coarse paths jointly over a time interval of length T_{ℓ-1}, by taking two steps on the finer level (with time step h_ℓ) for every one step on the coarser level (with time step h_{ℓ-1}). We will use the notation (x^f_k)_{k∈N/2} and (x^c_k)_{k∈N} for the fine and coarse chains, respectively. The algorithm generating (x^f_k)_{k∈N/2} and (x^c_k)_{k∈N} is presented in Algorithm 1.
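Putting the two ingredients together, the following sketch mimics one level of Algorithm 1 for the toy drift ∇U(x) = -x; the terminal-time schedule T_ℓ and all numerical parameters are hypothetical choices for illustration.

```python
import numpy as np

def algorithm1_level(l, h0, T, rng):
    """One MLMC level in the spirit of Algorithm 1 (toy drift grad U(x) = -x):
    the fine chain uses step h_l = h0 2^-l on [0, T_l], the coarse chain uses
    h_{l-1} on [0, T_{l-1}].  The fine chain first evolves alone for time
    T_l - T_{l-1}; then both evolve jointly, two fine steps per coarse step,
    driven by the same Brownian increments."""
    hf, hc = h0 * 2.0**-l, h0 * 2.0**-(l - 1)
    euler = lambda x, h, dW: x - x * h + np.sqrt(2.0) * dW
    xf = 0.0
    for _ in range(int(round((T(l) - T(l - 1)) / hf))):       # fine-only phase
        xf = euler(xf, hf, np.sqrt(hf) * rng.standard_normal())
    xc = 0.0
    for _ in range(int(round(T(l - 1) / hc))):                # joint phase
        dW1, dW2 = np.sqrt(hf) * rng.standard_normal(2)
        xf = euler(euler(xf, hf, dW1), hf, dW2)
        xc = euler(xc, hc, dW1 + dW2)
    return xf, xc

rng = np.random.default_rng(4)
T = lambda l: 2.0 + l                  # hypothetical increasing terminal times
diffs = [np.subtract(*algorithm1_level(2, 0.25, T, rng)) for _ in range(500)]
v = float(np.var(diffs))
```

The contraction during the joint phase keeps the variance of the coupled difference small even though the two chains have different terminal times.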

General convergence analysis
We will now present a general theorem for estimating the bias and the variance in the MLMC setup. We refrain from prescribing the exact dynamics of (x_k)_{k≥0} and (y_k)_{k≥0} in (20), as we seek general conditions allowing the construction of uniform-in-time approximations of (2) in the L^2-Wasserstein norm. The advantage of working in this general setting is that if we wish to work with more advanced numerical schemes than the Euler method (e.g. implicit, projected, adapted or randomised schemes) or more general noise terms (e.g. α-stable processes), it will be sufficient to verify relatively simple conditions to assess the performance of the complete algorithm. To give the reader some intuition behind the abstract assumptions, we discuss specific methods in Sect. 4.

Uniform estimates in time
Definition 3.1 (Bias) We say that a process (x_k)_{k∈N} converges weakly uniformly in time with order α > 0 to the solution of (2) if there exists a constant c_w > 0, independent of T, such that

|E g(x_n) - E g(X_{nh})| ≤ c_w h^α for all n ∈ N. (21)

We define the MLMC variance as follows. We say that the MLMC variance is of order β > 0 if there exists a constant c_s > 0, independent of T, such that

E|x^f_n - x^c_n|^2 ≤ c_s h^β for all n ∈ N. (22)

Statement of sufficient conditions
We now discuss the sufficient conditions imposed on a generic numerical method (20) that allow us to estimate the MLMC variance. We decompose the global error into the one-step error and the regularity of the scheme. To proceed, we introduce the notation x^h_{k,x_s} for the process at time k with initial condition x_s at time s < k. If it is clear from the context which initial condition is used, we simply write x^h_k. We also define the conditional expectation operator E_n[·] := E[·|F_n], where F_n := σ(x^h_k : k ≤ n). We now have the following definition.

Definition 3.3 (L^2-regularity) We say that the one-step operator S_{h,ξ} : R^d → R^d is L^2-regular uniformly in time if the mean-square distance between two chains generated by S from different initial conditions contracts at a geometric rate that is uniform in the time horizon.

We now introduce the set of assumptions, imposed on the generic method (20), needed for the proof of the main convergence theorem. We assume that:

H0 For every q > 1 there exists a constant H > 0 such that the moments E|x^h_n|^{2q} are bounded by H uniformly in n.

H2 The operator S^f_{h,ξ}(·) is L^2-regular uniformly in time.
Below, we present the main convergence result of this section. By analogy to (21)-(22), we use the corresponding notation for the coupled fine and coarse chains. Using the estimates derived here, we can immediately estimate the rate of decay of the MLMC variance.

Theorem 3.4
Take (x^f_n)_{n∈N/2} and (x^c_n)_{n∈N} with h ∈ (0, 1] and assume that H0-H2 hold. Moreover, assume that there exist constants c_s > 0, c_w > 0 and α ≥ 1/2, β ≥ 0, p ≥ 1, with α ≥ β/2, such that the one-step weak error bound (27) and the one-step strong error bound (28) hold for all n ≥ 1. Then the global error is bounded as in (29), where n = T/h.
Proof We begin by splitting the difference x^f_n - x^c_n into two terms: the one-step error of the two schemes started from the same initial condition, and the propagation of the difference accumulated up to time n - 1. We will be able to deal with the first term by using Equations (27) and (28), while the second term will be controlled thanks to the L^2-regularity of the numerical scheme. Indeed, squaring both sides of this decomposition and using Assumption H2 on the second term, and then applying the conditional expectation operator to both sides, we can bound the cross term via the Cauchy-Schwarz inequality and the weak error estimate (27). By Assumptions H0-H2 and the strong error estimate (28), taking expected values and applying the Cauchy-Schwarz inequality together with the fact that α ≥ β/2 and h < 1, the one-step contributions are of the claimed order. Now Young's inequality gives, for any ε > 0, a bound on the remaining cross term; collecting the above estimates, we obtain a recursion for E|x^f_n - x^c_n|^2 and complete the proof by Lemma 3.5.

Lemma 3.5 Let a_n, g_n, c ≥ 0, n ∈ N, be given. Moreover, assume that 1 + λ > 0. Then, if a_n, n ∈ N, satisfies

a_{n+1} ≤ a_n(1 + λ) + g_n + c, n ≥ 0,

then

a_n ≤ a_0 e^{nλ} + c (e^{nλ} - 1)/λ + Σ_{j=0}^{n-1} g_j e^{((n-1)-j)λ}, n ≥ 1.
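Lemma 3.5 can be sanity-checked numerically by running the recursion with equality and comparing against the claimed bound; the particular values of λ, c, a_0 and g_n below are arbitrary.

```python
import math

# Numerical check of the discrete Gronwall-type bound in Lemma 3.5: generate
# a_n via a_{n+1} = a_n (1 + lam) + g_n + c (with equality) and verify
# a_n <= a_0 e^{n lam} + c (e^{n lam} - 1)/lam + sum_j g_j e^{(n-1-j) lam},
# which rests on the elementary inequality 1 + lam <= e^lam.
lam, c, a0 = 0.1, 0.05, 2.0              # 1 + lam > 0 as required
g = [0.1 / (j + 1) for j in range(50)]
a, ok = a0, True
for n in range(1, 51):
    a = a * (1 + lam) + g[n - 1] + c
    bound = (a0 * math.exp(n * lam)
             + c * (math.exp(n * lam) - 1.0) / lam
             + sum(g[j] * math.exp(((n - 1) - j) * lam) for j in range(n)))
    ok = ok and a <= bound + 1e-12
```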

Remark 3.6
Note that if we can choose β′ > β in (26) (which, as we will see in Sect. 4, is the case, for example, for the Euler and implicit Euler schemes), then Theorem 3.4 yields a correspondingly improved bound.

Optimal choice of parameters
Theorem 3.4 is fundamental for applying MLMC, as it guarantees that the estimate for the variance in (7) holds.
In particular, we have the following lemma.

Lemma 3.7 Assume that all the assumptions of Theorem 3.4 hold, and let g(·) be a Lipschitz function. Define the terminal times T_ℓ accordingly. Then the resulting MLMC variance decays at the rate given by Theorem 3.4.

Proof Since g is a Lipschitz function, the claim is a direct consequence of Theorem 3.4.
Remark 3.8 Unlike in the standard MLMC complexity theorem (Giles 2015), where the cost of simulating a single path is of order O(h_L^{-1}), here we have O(h_L^{-1}|log h_L|). This is due to the fact that the terminal times increase with the levels. For the case h_ℓ = 2^{-ℓ}, this results in a cost per path of O(ℓ 2^ℓ) and does not exactly fit the complexity theorem in Giles (2015). Clearly, in the case when the MLMC variance decays with β > 1, we still recover the optimal complexity of order O(ε^{-2}). However, in the case β = 1, following the proof in Giles (2015) one can see that the complexity becomes O(ε^{-2}|log ε|^3).
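For reference, the standard MLMC sample-size allocation from Giles (2015) that underlies these complexity counts can be written down directly; the per-level variances and costs below are hypothetical stand-ins for V_ℓ ~ 2^{-βℓ} and C_ℓ ~ 2^ℓ|log h_ℓ|.

```python
import math

def mlmc_sample_sizes(eps, V, C):
    """Standard MLMC allocation (Giles 2015): minimise total cost sum N_l C_l
    subject to the variance constraint sum V_l / N_l <= eps^2 / 2, which gives
    N_l proportional to sqrt(V_l / C_l)."""
    K = sum(math.sqrt(v * c) for v, c in zip(V, C))
    return [math.ceil(2.0 * eps**-2 * math.sqrt(v / c) * K) for v, c in zip(V, C)]

# hypothetical per-level variances and costs for L = 5 levels
L = 5
V = [2.0**-l for l in range(L + 1)]          # V_l ~ 2^{-l}  (beta = 1)
C = [2.0**l * (l + 1) for l in range(L + 1)] # C_l ~ 2^l |log h_l|
N = mlmc_sample_sizes(0.01, V, C)
```

By construction the allocation satisfies the variance budget exactly (the ceiling only tightens it), with the sample counts decreasing across levels.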

Remark 3.9
In the proofs above we have assumed that K is independent of h, and we have used crude bounds in order not to deal directly with all the individual constants, since these depend on the numerical schemes used.

Examples of suitable methods
In this section we present two (out of many) numerical schemes that fulfil the conditions of Theorem 3.4. In particular, we need to verify that each scheme is L^2-regular in time, that it has bounded numerical moments as in H0, and finally that it satisfies the one-step error estimates (27)-(28). Note that for both methods discussed in this section we verify condition (25) with h^2 instead of h. However, since in (25) we consider h ∈ (0, 1), both (35) and (42) imply (25).

Euler-Maruyama method
We start by considering the explicit Euler scheme S_{h,ξ}(x) = x + h∇U(x) + √(2h) ξ, with S^f = S^c; that is, we use the same numerical method for the fine and coarse paths. In order to be able to recover the integrability and regularity conditions, we need to impose a further assumption on the potential U. In particular, in addition to Assumption HU0, we assume that

HU1 There exists a constant L such that for any x, y ∈ R^d

5 As we will see, m̃ ≤ m, depending on the size of ∇U(0).

|∇U (x) − ∇U (y)| ≤ L|x − y|
As a consequence of this assumption, we have

|∇U(x)| ≤ L|x| + |∇U(0)|. (32)

We can now prove the L^2-regularity in time of the scheme.

L^2-regularity Since regularity is a property of the numerical scheme itself and does not relate to the coupling between the fine and coarse levels, for simplicity of notation we prove the results directly for the single chain

x_{n+1} = S_{h,ξ_{n+1}}(x_n), S_{h,ξ}(x) = x + h∇U(x) + √(2h) ξ. (33)

In particular, the following lemma holds.
Lemma 4.1 (L^2-regularity) Let HU0 and HU1 hold. Then the Euler scheme is L^2-regular uniformly in time.

Proof The difference between the Euler schemes taking values x_{n-1} and y_{n-1} at time n - 1 is given by

x_{n,x_{n-1}} - x_{n,y_{n-1}} = x_{n-1} - y_{n-1} + h(∇U(x_{n-1}) - ∇U(y_{n-1})).

This, along with HU0 and HU1, leads to

|x_{n,x_{n-1}} - x_{n,y_{n-1}}|^2 ≤ (1 - 2mh + h^2 L^2)|x_{n-1} - y_{n-1}|^2.

This proves the first part of the lemma; the second part follows similarly, due to HU1.

Integrability In the Lipschitz case we only require mean-square integrability. This will become apparent when we analyse the one-step error, as (27) and (28) will hold with p = 1.

Lemma 4.2 (Integrability) Let HU0 and HU1 hold. Then the moments E|x_n|^2 are bounded uniformly in n.

Proof Expanding |x_n|^2 = |x_{n-1} + h∇U(x_{n-1}) + √(2h)ξ_n|^2 and hence, using (11), we obtain a contracting recursion for E|x_n|^2, to which we can apply Lemma 3.5. The proof for q > 2 can be done in a similar way by using the binomial theorem.
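The contraction factor 1 - 2mh + h^2 L^2 appearing in the L^2-regularity argument can be checked numerically for a concrete drift; here we use the illustrative choice ∇U(x) = -2x - sin x, for which HU0 holds with m = 1 and HU1 with L = 3.

```python
import numpy as np

# One-step L2-regularity of the Euler map: for grad U(x) = -2x - sin(x),
# HU0 holds with m = 1 and HU1 with L = 3, so the lemma's bound reads
# |x - y + h (grad U(x) - grad U(y))|^2 <= (1 - 2 m h + L^2 h^2) |x - y|^2.
gradU = lambda x: -2.0 * x - np.sin(x)
m, L, h = 1.0, 3.0, 0.05
rng = np.random.default_rng(5)
ok = True
for _ in range(1000):
    x, y = 10.0 * rng.standard_normal(2)
    lhs = (x - y + h * (gradU(x) - gradU(y))) ** 2
    rhs = (1.0 - 2.0 * m * h + L**2 * h**2) * (x - y) ** 2
    ok = ok and lhs <= rhs + 1e-12
```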

One-step errors estimates
Having proved L^2-regularity and integrability for the Euler scheme, we are now left with the task of proving the inequalities (27) and (28) for Euler schemes coupled as in Algorithm 1. It is enough to prove the results for n = 1. We note that x^f_0 = x^c_0 = x, and we have the following lemma.

Lemma 4.3 (One-step errors) Let HU0 and HU1 hold. Then the weak one-step distance between Euler schemes with time steps h/2 and h, respectively, can be bounded, and the one-step L^2 distance can be estimated analogously. If, in addition to HU0 and HU1, U ∈ C^3, then the order in h of the weak error bound can be improved.

Proof We calculate the difference of the two schemes directly. It then follows from HU1 that the drift difference is of the claimed order in h. Furthermore, if we use (32), the triangle inequality and the fact that E|ξ_1| = √(2d/π), we obtain (36). If we now assume that U ∈ C^3, then a Taylor expansion (in multi-index notation) applies, which, together with E[|ξ_1|^2] = d, gives (38). Equation (37) trivially follows from (39).

Remark 4.4
In the case of a log-concave target, the bias of MLMC using the Euler method can be explicitly quantified using the results from Durmus and Moulines (2016).

Non-Lipschitz setting
In the previous subsection we found that, in order to analyse the regularity and the one-step error of the explicit Euler approximation, we had to impose the additional assumption that ∇U(x) is globally Lipschitz. This is necessary since, in the absence of this condition, the Euler method can be transient or even divergent (Roberts and Tweedie 1996; Hutzenthaler et al. 2018). However, in many applications of interest this is a rather restrictive condition; an example is the potential considered in Sect. 6.2. A standard way to deal with this is to use either an implicit scheme or specially designed explicit schemes (Hutzenthaler and Jentzen 2015; Szpruch and Zhang 2018). Here we will study only the case of the implicit Euler scheme.

Implicit Euler method
Here we will focus on the implicit Euler scheme

x_{n+1} = x_n + h∇U(x_{n+1}) + √(2h) ξ_{n+1}.

We will assume that Assumption HU0 holds and, moreover, replace HU1 with

HU1′ There exist constants L > 0 and q ≥ 1 such that for any x, y ∈ R^d, |∇U(x) - ∇U(y)| ≤ L(1 + |x|^q + |y|^q)|x - y|.

As a consequence of this assumption, ∇U has polynomial growth.

Integrability Uniform-in-time bounds on the pth moments of x_n for all p ≥ 1 can easily be deduced from the results in Mao and Szpruch (2013a, b). Nevertheless, for the convenience of the reader, we present the analysis of the regularity of the scheme, where the effect of the implicitness of the scheme on the regularity quickly becomes apparent.

L^2-regularity Lemma 4.5 (L^2-regularity) Let HU0 and HU1′ hold. Then the implicit Euler scheme is L^2-regular. Moreover, a corresponding moment bound holds, where H_{n-1} is defined by (43).

Proof
The difference between the implicit Euler schemes taking values x_{n-1} and y_{n-1} at time n - 1 is given by

x_{n,x_{n-1}} - x_{n,y_{n-1}} = x_{n-1} - y_{n-1} + h(∇U(x_n) - ∇U(y_n)).

In view of Definition 3.3, we define the remainder terms R_k and notice that

Σ_{k=1}^n R_k = -2mh|x_n - y_n|^2 ≤ 0.

Hence, the proof of the first statement of the lemma is complete. Now, due to HU1′, the remaining terms can be bounded. Observe that, consequently, E_{n-1}[|x_n|^2] can be expressed as a function of |x_{n-1}|^2. Similarly, it can be shown that E_{n-1}[|x_n|^k] can be expressed as a function of |x_{n-1}|^k for k > 2, cf. Mao and Szpruch (2013a, b). This in turn implies that there exists a constant C > 0 s.t. the moments are bounded uniformly in time. Due to the uniform integrability of the implicit Euler scheme, (26) holds.
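As an illustration of how an implicit Euler step is actually computed, the sketch below solves the scalar implicit equation by Newton's method for the non-Lipschitz drift ∇U(z) = -z^3 - z used in Sect. 6.2; the step size and starting point are arbitrary choices.

```python
import numpy as np

def implicit_euler_step(x, h, xi):
    """One implicit Euler step x_{n+1} = x_n + h grad U(x_{n+1}) + sqrt(2h) xi
    for the drift grad U(z) = -z^3 - z (potential U(z) = -z^4/4 - z^2/2).
    The scalar equation z + h (z^3 + z) = b is strictly increasing in z, hence
    has a unique root, located here by Newton's method."""
    b = x + np.sqrt(2.0 * h) * xi
    z = b / (1.0 + h)                 # initial guess from the linear part
    for _ in range(100):
        z -= (z + h * (z**3 + z) - b) / (1.0 + h * (3.0 * z**2 + 1.0))
    return z

# the implicit scheme remains stable even for a start in the tails of pi
rng = np.random.default_rng(6)
x, h = 3.0, 0.1
for _ in range(500):
    x = implicit_euler_step(x, h, rng.standard_normal())
```

Note that the Newton denominator 1 + h(3z^2 + 1) is bounded below by 1 + h, so the iteration is well defined for every noise realisation.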

One-step errors estimates
Having established integrability, the estimation of the one-step error follows exactly the same line of argument as in Lemma 4.3, and we therefore skip it.

MLMC for SGLD
In this section we discuss the multi-level Monte Carlo method for Euler schemes with inaccurate (randomised) drifts. Namely, we consider

x_{k+1} = x_k + h b(x_k, τ_{k+1}) + √(2h) ξ_{k+1}, (44)

where b : R^d × R^k → R^d and the R^k-valued random variable τ are such that

E[b(x, τ)] = ∇U(x) for all x ∈ R^d. (45)

Our main application to Bayesian inference will be discussed in Sect. 5.1. Let us now take a sequence (τ_n)_{n=1}^∞ of mutually independent random variables satisfying (45). We assume that (τ_n)_{n=1}^∞ are also independent of the i.i.d. random variables (ξ_n)_{n=1}^∞ with ξ_n ∼ N(0, I). By analogy to the notation we used for the Euler scheme in (33), we will denote S̄_{h,ξ,τ}(x) := x + h b(x, τ) + √(2h) ξ. In the sequel we will perform a one-step analysis of the scheme defined in (46) by considering the random variables

x^f := S̄_{h,ξ_2,τ^{f,2}}(S̄_{h,ξ_1,τ^{f,1}}(x)), x^c := S̄_{2h,(ξ_1+ξ_2)/√2,τ^c}(x), (47)

where ξ_1, ξ_2 ∼ N(0, I) and τ^{f,1}, τ^{f,2} and τ^c are R^k-valued random variables satisfying (45). In particular, τ^{f,1} and τ^{f,2} are assumed to be independent, but τ^c is not necessarily independent of τ^{f,1} and τ^{f,2}. We note that in (47) we have coupled the noise between the fine and the coarse paths synchronously, i.e. as in Algorithm 1. One question that naturally arises is how one should couple the random variables τ at different levels. In particular, in order for the condition with the telescopic sum to hold, τ^c needs to satisfy (45) as well. (48) We can of course take τ^c independent of τ^{f,1} and τ^{f,2}, but other choices are also possible; see Sect. 5.1 for the discussion in the context of the SGLD applied to Bayesian inference. (In Algorithm 2, the fine path is first simulated according to S̄_{h_ℓ,ξ,τ}(x) for (T_ℓ - T_{ℓ-1})/h_ℓ steps with independent random input.)
In order to bound the global error for our algorithm, we make the following assumptions on the function b in (44).
In order to bound the global error of our algorithm, we make the following assumptions on the function b in (44).

Assumption 2 (i) There is a constant L̄ > 0 such that for any R^k-valued random variable τ satisfying (45) and for any x, y ∈ R^d we have E|b(x, τ) - b(y, τ)|^2 ≤ L̄^2|x - y|^2. (49)
(ii) There exist constants α_c, σ ≥ 0 such that for any R^k-valued random variable τ satisfying (45) and for any x ∈ R^d the variance of b(x, τ) is bounded by σ^2(1 + |x|^2) h^{α_c}. (50)

Observe that conditions (49), (50) and Assumption HU1 imply that for all random variables τ satisfying (45) and for all x ∈ R^d we have E|b(x, τ)|^2 ≤ L̄_0(1 + |x|^2), (51) with L̄_0 := σ^2 h^{α_c} + 2 max{L^2, |∇U(0)|^2}; cf. Section 2.4 in Majka et al. (2018). For a discussion of how to verify condition (50) for a subsampling scheme, see Example 2.15 in Majka et al. (2018). By following the proofs of Lemmas 4.1 and 4.2, we see that the L^2-regularity and integrability conditions proved therein hold for the randomised-drift scheme given by (44); it remains to establish (27) and (28) in a manner analogous to Lemma 4.3 for Euler schemes.

Lemma 5.1 Under Assumption 2 and HU1, there is a constant C_1 = C_1(h, x) > 0 such that the one-step weak error bound (27) holds. Moreover, under the same assumptions, there is a constant C_2 = C_2(h, x) > 0 such that the one-step L^2 error bound (28) holds; in its proof, the last inequality uses (50), which finishes the proof of (53).

Proof Because of Lemma 5.1 we can apply the results of Sect. 3.2. In particular, if we choose T_ℓ according to Lemma 3.7, then for α_c = 0 we have β = 1 in Theorem 3.4, and the complexity follows from Remark 3.8. Similarly, for α_c > 0 we have β > 1, and Remark 3.8 concludes the proof.

Bayesian inference using MLMC for SGLD
The main computational task in Bayesian statistics is the approximation of expectations with respect to the posterior. The a priori uncertainty in a parameter x is modelled using a probability density π_0(x), called the prior. Here, we consider the case where, for a fixed parameter x, the data {y_i}_{i=1,...,N} are i.i.d. with density π(y|x). By Bayes' rule, the posterior is given by

π(x) ∝ π_0(x) Π_{i=1}^N π(y_i|x).

This distribution is invariant for the Langevin equation (2) with

∇U(x) = ∇ log π_0(x) + Σ_{i=1}^N ∇ log π(y_i|x). (56)

Provided that appropriate assumptions are satisfied for U, we can thus use Algorithm 1 with the Euler or implicit Euler schemes to approximate expectations with respect to π(dx). For large N, the sum in Eq. (56) becomes a computational bottleneck. One way to deal with this is to replace the gradient by a lower-cost stochastic approximation. In the following we apply our MLMC-for-SGLD framework to the recursion in Eq. (4), where we take the indices τ^k_i i.i.d.

Assumption 3 (i) Lipschitz conditions for prior and likelihood: there exist constants L_0, L_1 > 0 such that, for all i, x, y, ∇ log π_0 is L_0-Lipschitz and each ∇ log π(y_i|·) is L_1-Lipschitz. (ii) Convexity conditions for prior and likelihood: there exist constants m_0 ≥ 0 and m_{y_i} ≥ 0 for i = 1, ..., N such that, for all i, x, y, the log-prior and log-likelihood terms satisfy the corresponding one-sided Lipschitz conditions with constants m_0 and m_{y_i}. We note that these conditions imply that the scheme given by (47) satisfies the assumptions of Sect. 5; cf. Majka et al. (2018).
Regarding the coupling of τ^{f,1}, τ^{f,2} and τ^c, we have several possible choices. We first take s independent samples τ^{f,1} on the first fine step and another s independent samples τ^{f,2} on the second fine step. The following three choices of τ^c ensure that Eq. (48) holds.
(i) an independent draw of s samples from {1, . . . , N} without replacement, denoted τ^c_ind and called the independent coupling; (ii) a draw of s samples without replacement from (τ^{f,1}, τ^{f,2}), denoted τ^c_union and called the union coupling; (iii) the concatenation of a draw of s/2 samples without replacement from τ^{f,1} and a draw of s/2 samples without replacement from τ^{f,2} (provided that s is even), denoted τ^c_strat and called the stratified coupling.
We stress that any of these couplings can be used in Algorithm 2. The problem of coupling the random variables τ between different levels in an optimal way will be further investigated in our future work.
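A sketch of the three couplings in Python (N, s and the use of NumPy's sampling routines are illustrative choices):

```python
import numpy as np

def couple_tau(N, s, rng):
    """tau_f1, tau_f2 are independent draws of s indices without replacement;
    tau_c is then coupled (i) independently, (ii) by resampling s indices
    without replacement from the union (tau_f1, tau_f2), or (iii) by stratified
    resampling of s/2 indices from each fine draw (s even)."""
    tau_f1 = rng.choice(N, size=s, replace=False)
    tau_f2 = rng.choice(N, size=s, replace=False)
    tau_c_ind = rng.choice(N, size=s, replace=False)
    tau_c_union = rng.choice(np.concatenate([tau_f1, tau_f2]), size=s, replace=False)
    tau_c_strat = np.concatenate([rng.choice(tau_f1, size=s // 2, replace=False),
                                  rng.choice(tau_f2, size=s // 2, replace=False)])
    return tau_f1, tau_f2, tau_c_ind, tau_c_union, tau_c_strat

rng = np.random.default_rng(7)
taus = couple_tau(N=100, s=10, rng=rng)
```

By construction, the union and stratified coarse draws only use indices that already appear in the fine draws, which is the source of the dependence between the levels.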

Numerical investigations
In this section we perform numerical simulations that illustrate our theoretical findings. We start by studying an Ornstein-Uhlenbeck process in Sect. 6.1 using the explicit Euler method; in Sect. 6.2 we study a non-Lipschitz example using the implicit Euler method; and in Sect. 6.3 we study a Bayesian logistic regression model using the SGLD.

Ornstein-Uhlenbeck process
We consider the SDE

dX_t = -κX_t dt + √2 dW_t, (57)

and its discretisation using the Euler method

x_{k+1} = x_k - κx_k h + √(2h) ξ_{k+1}. (58)

Equation (57) is ergodic with invariant measure N(0, κ^{-1}). Furthermore, it is possible to show that the Euler method (58) is similarly ergodic, with invariant measure N(0, 2/(2κ - κ^2 h)) (Zygalakis 2011). In Fig. 2, we plot the outputs of our numerical simulations using Algorithm 1. The parameter of interest here is the variance κ^{-1} of the invariant measure, which we try to approximate for different mean square error tolerances ε.
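The invariant variance of the Euler chain (58) can be recovered directly from the variance recursion v ↦ (1 - κh)^2 v + 2h, whose fixed point is 2/(2κ - κ^2 h); the values of κ and h below are arbitrary.

```python
# Invariant variance of x_{k+1} = (1 - kappa h) x_k + sqrt(2h) xi_k: iterating
# the variance recursion v <- (1 - kappa h)^2 v + 2h converges to its fixed
# point 2 / (2 kappa - kappa^2 h), which tends to 1/kappa as h -> 0.
kappa, h = 2.0, 0.01
v = 0.0
for _ in range(10_000):
    v = (1.0 - kappa * h) ** 2 * v + 2.0 * h
predicted = 2.0 / (2.0 * kappa - kappa**2 * h)
```

This makes the discretisation bias explicit: the numerical invariant variance differs from the true value 1/κ by O(h).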
More precisely, in Fig. 2a we see the allocation of samples for the various levels with respect to ε, while in Fig. 2b we compare the computational cost of the algorithm as a function of the parameter ε. As we can see, the computational complexity grows as O(ε^{-2}), as predicted by our theory [here α = β = 2 in (27) and (28)].
Finally, in Fig. 2c we plot the approximation of the variance κ^{-1} from our algorithm. Note that this coincides with the choice g(x) = x^2, since the mean of the invariant measure is 0. As we can see, as ε becomes smaller, even though the estimator is in principle biased, we get perfect agreement with the true value of the variance.
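The level corrections behind these plots rest on coupling the fine and coarse chains through the same Brownian increments. The following sketch (our own simplification, not Algorithm 1 itself) pairs two Euler steps of size h/2 with one step of size h for the Ornstein-Uhlenbeck process and shows that the variance of the correction g(x_f) − g(x_c), for g(x) = x^2, is far smaller than the variance of g alone; this smallness, uniformly in time, is what drives the O(ε^{-2}) complexity.

```python
import numpy as np

def coupled_step(xf, xc, kappa, h, rng):
    """Advance the fine chain by two Euler steps of size h/2 and the coarse
    chain by one Euler step of size h, driven by the same Brownian
    increments (synchronous coupling), for dX = -kappa*X dt + sqrt(2) dW."""
    dW1 = np.sqrt(h / 2) * rng.standard_normal()
    dW2 = np.sqrt(h / 2) * rng.standard_normal()
    xf = xf - kappa * (h / 2) * xf + np.sqrt(2.0) * dW1
    xf = xf - kappa * (h / 2) * xf + np.sqrt(2.0) * dW2
    xc = xc - kappa * h * xc + np.sqrt(2.0) * (dW1 + dW2)
    return xf, xc

kappa, h = 1.0, 0.1
rng = np.random.default_rng(0)
corrections = np.empty(2000)
for k in range(2000):
    xf = xc = 0.0
    for _ in range(200):                  # run towards stationarity
        xf, xc = coupled_step(xf, xc, kappa, h, rng)
    corrections[k] = xf**2 - xc**2        # level correction for g(x) = x^2
print(corrections.mean(), corrections.var())
```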

Non-Lipschitz
We consider the SDE
dX_t = −(X_t^3 + X_t) dt + √2 dW_t
and its discretisation using the implicit Euler method
x_{n+1} = x_n − h (x_{n+1}^3 + x_{n+1}) + √(2h) ξ_n, ξ_n ∼ N(0, 1).
In Fig. 3, we plot the outputs of our numerical simulations using Algorithm 1. The parameter of interest here is the second moment of the invariant measure, Z^{-1} ∫_R x^2 exp(−(1/4) x^4 − (1/2) x^2) dx with Z the normalising constant, which we try to approximate for different mean square error tolerances ε.
More precisely, in Fig. 3a we see the allocation of samples for the various levels with respect to ε, while in Fig. 3b we compare the computational cost of the algorithm as a function of the parameter ε. As we can see, the computational complexity grows as O(ε^{-2}), as predicted by our theory [here α = β = 2 in (27) and (28)]. Finally, in Fig. 3c we plot the approximation of the second moment of the invariant measure from our algorithm. As we can see, as ε becomes smaller, even though the estimator is in principle biased, we get perfect agreement with the true value of the second moment.
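Since the drift −(x^3 + x) is not globally Lipschitz, each implicit Euler step requires solving a scalar nonlinear equation. A minimal sketch (our own, assuming a Newton solver; the paper does not prescribe how the implicit equation is solved) is:

```python
import numpy as np

def implicit_euler_step(x, h, dW, newton_iters=15):
    """One backward-Euler step for dX = -(X^3 + X) dt + sqrt(2) dW:
    solve  y + h*(y^3 + y) = x + sqrt(2)*dW  for y with Newton's method.
    The left-hand side is strictly increasing in y, so the root is unique."""
    rhs = x + np.sqrt(2.0) * dW
    y = x  # initial guess: the previous state
    for _ in range(newton_iters):
        F = y + h * (y**3 + y) - rhs
        dF = 1.0 + h * (3.0 * y**2 + 1.0)
        y -= F / dF
    return y
```

Running this chain and comparing its empirical second moment against a quadrature of x^2 exp(−x^4/4 − x^2/2) reproduces, up to the O(h) invariant-measure bias, the target quantity of this section.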

Bayesian logistic regression
In the following we present numerical simulations for a binary Bayesian logistic regression model. In this case the data y_i ∈ {−1, 1} are modelled by
P(y_i = 1 | ι_i, x) = f(ι_i^T x),
where f(z) = 1/(1 + exp(−z)) ∈ [0, 1] and ι_i ∈ R^d are fixed covariates. We put a Gaussian prior N(0, C_0) on x; for simplicity we use C_0 = I subsequently. By Bayes' rule, the posterior density π(x) satisfies
π(x) ∝ exp(−|x|^2/2) ∏_{i=1}^N f(y_i ι_i^T x).
We consider d = 3 and N = 100 data points, and choose the covariate components ι_{ij} to be i.i.d. N(0, 1) for i = 1, . . . , 100 and j = 1, 2.
In Algorithm 2 we can choose the starting position x_0. It is reasonable to start the paths of the individual SGLD trajectories at the mode of the target distribution (heuristically, this makes the distance between the coupled paths in step 2 of Algorithm 2 small). That is, we set x_0 to be the maximum a posteriori (MAP) estimator, which we approximate using the Newton-Raphson method. Our numerical results are described in Fig. 4. In particular, in Fig. 4a we illustrate the behaviour of the coupling by plotting an estimate of the average distance during the joint evolution in step 2 of Algorithm 2. The behaviour in this figure agrees qualitatively with the statement of Theorem 3.4: as T grows, there is an initial exponential decay up to an additive constant. For the simulation we used h_0 = 0.02, T = 3(ℓ + 1) and s = 20. Furthermore, in Fig. 4b we plot CPU-time × ε^2 against ε for the estimation of the mean.

Fig. 4 (a) Illustration of the joint evolution in step 2 of Algorithm 2 for the union coupling (coupling difference); (b) cost of MLMC (sequential CPU time) for SGLD applied to Bayesian logistic regression, for decreasing accuracy parameter ε and different couplings.

The objective here is to estimate the mean square distance between the MAP estimator x_0 and the posterior, that is, ∫ |x − x_0|^2 π(x) dx. Again, after some initial transient where CPU-time × ε^2 decreases, we get quantitative agreement with our theory, since CPU-time × ε^2 increases only logarithmically in the limit of ε going to zero.
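A single SGLD update for this logistic posterior can be sketched as follows. This is our own minimal illustration in the spirit of Welling and Teh (2011), not the paper's Algorithm 2: the synthetic data, ground-truth vector x_star, step size, and function names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgld_step(x, iota, y, h, s, rng):
    """One SGLD step for pi(x) ∝ exp(-|x|^2/2) * prod_i f(y_i iota_i^T x),
    f = sigmoid, estimating the likelihood gradient from a subsample of size s."""
    N, d = iota.shape
    idx = rng.choice(N, size=s, replace=False)       # subsample tau
    z = y[idx] * (iota[idx] @ x)
    # grad_x log f(y_i iota_i^T x) = (1 - f(y_i iota_i^T x)) * y_i * iota_i
    grad_lik = ((1.0 - sigmoid(z)) * y[idx]) @ iota[idx]
    grad_log_post = -x + (N / s) * grad_lik          # Gaussian prior + rescaled subsample
    return x + 0.5 * h * grad_log_post + np.sqrt(h) * rng.standard_normal(d)

# Synthetic data mirroring the text: d = 3, N = 100, covariates N(0, 1).
rng = np.random.default_rng(7)
N, d, s, h = 100, 3, 20, 0.01
iota = rng.standard_normal((N, d))
x_star = np.array([1.0, -1.0, 0.5])                  # illustrative ground truth
y = np.where(rng.random(N) < sigmoid(iota @ x_star), 1.0, -1.0)

x = np.zeros(d)
for _ in range(5000):
    x = sgld_step(x, iota, y, h, s, rng)
print(x)
```

In the multi-level algorithm, the subsample idx on the two fine steps and the coarse step would be drawn jointly using one of the couplings of Sect. 5, rather than independently as above.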